smorton2/think-stats
code/pandas_examples.ipynb
gpl-3.0
from __future__ import print_function, division %matplotlib inline import numpy as np import nsfg import first import analytic import thinkstats2 import seaborn """ Explanation: Pandas Examples http://thinkstats2.com Copyright 2017 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation """ import brfss df = brfss.ReadBrfss() df.describe() """ Explanation: Let's read data from the BRFSS: End of explanation """ groupby = df.groupby('sex') groupby """ Explanation: If we group by sex, we get a DataFrameGroupBy object. End of explanation """ seriesgroupby = groupby.htm3 seriesgroupby """ Explanation: If we select a particular column from the GroupBy, we get a SeriesGroupBy object. End of explanation """ groupby.mean() """ Explanation: If you invoke a reduce method on a DataFrameGroupBy, you get a DataFrame: End of explanation """ seriesgroupby.mean() """ Explanation: If you invoke a reduce method on a SeriesGroupBy, you get a Series: End of explanation """ groupby.aggregate(['mean', 'std']) seriesgroupby.aggregate(['mean', 'std']) """ Explanation: You can use aggregate to apply a collection of reduce methods: End of explanation """ def trimmed_mean(series): lower, upper = series.quantile([0.05, 0.95]) return series.clip(lower, upper).mean() """ Explanation: If the reduce method you want is not available, you can make your own: End of explanation """ trimmed_mean(df.htm3) """ Explanation: Here's how it works when we apply it directly: End of explanation """ seriesgroupby.apply(trimmed_mean) """ Explanation: And we can use apply to apply it to each group: End of explanation """ ps = np.linspace(0, 1, 11) ps """ Explanation: The digitize-groupby combo Let's say we want to group people into deciles (bottom 10%, next 10%, and so on). We can start by defining the cumulative probabilities that mark the borders between deciles. 
End of explanation """ series = df.htm3 bins = series.quantile(ps) bins """ Explanation: And then use quantile to find the values that correspond to those cumulative probabilities. End of explanation """ np.digitize(series, bins) """ Explanation: digitize takes a series and a sequence of bin boundaries, and computes the bin index for each element in the series. End of explanation """ def digitize(series, n=11): ps = np.linspace(0, 1, n) bins = series.quantile(ps) return np.digitize(series, bins) """ Explanation: Exercise: Collect the code snippets from the previous cells to write a function called digitize that takes a Series and a number of bins and returns the result from np.digitize. End of explanation """ df['height_decile'] = digitize(df.htm3) df.height_decile.describe() """ Explanation: Now, if your digitize function is working, we can assign the results to a new column in the DataFrame: End of explanation """ groupby = df.groupby('height_decile') """ Explanation: And then group by height_decile. End of explanation """ groupby.mean() """ Explanation: Now we can compute means for each variable in each group: End of explanation """ weights = groupby.wtkg2 weights """ Explanation: It looks like: The shortest people are older than the tallest people, on average. The shortest people are much more likely to be female (no surprise there). The shortest people are lighter than the tallest people (wtkg2), and they were lighter last year, too (wtyrago). Shorter people are more oversampled, so they have lower final weights. This is at least partly, and maybe entirely, due to the relationship with sex. The fact that all of these variables are associated with height suggests that it will be important to control for age and sex for almost any analysis we want to do with this data. Nevertheless, we'll start with a simple analysis looking at weights within each height group. 
End of explanation """ quantiles = weights.quantile([0.25, 0.5, 0.75]) quantiles type(quantiles.index) """ Explanation: If we apply quantile to a SeriesGroupBy, we get back a Series with a MultiIndex. End of explanation """ quantiles.unstack() """ Explanation: If you unstack a MultiIndex, the inner level of the MultiIndex gets broken out into columns. End of explanation """ quantiles.unstack().plot() """ Explanation: This makes it convenient to plot each of the columns as a line. End of explanation """ from thinkstats2 import Cdf cdfs = weights.apply(Cdf) cdfs """ Explanation: The other view of this data we might like is the CDF of weight within each height group. We can use apply with the Cdf constructor from thinkstats2. The result is a Series of Cdf objects. End of explanation """ import thinkplot thinkplot.Cdfs(cdfs[1:11:2]) thinkplot.Config(xlabel='Weight (kg)', ylabel='Cdf') """ Explanation: And now we can plot the CDFs: End of explanation """ groupby = df.groupby(['sex', 'height_decile']) groupby.mean() groupby.wtkg2.mean() cdfs = groupby.wtkg2.apply(Cdf) cdfs cdfs.unstack() men = cdfs.unstack().loc[1] men thinkplot.Cdfs(men[1:11:2]) women = cdfs.unstack().loc[2] women thinkplot.Cdfs(women[1:11:2]) """ Explanation: Exercise: Plot CDFs of weight for men and women separately, broken out by decile of height. End of explanation """
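The digitize-groupby pattern above can be checked in isolation. Here is a minimal, self-contained sketch that uses synthetic heights in place of the BRFSS data (which is not loaded here); the column name htm3 follows the notebook, but the values are invented:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for df.htm3 (heights in cm); the real notebook
# reads the BRFSS dataset, which is not available in this sketch.
rng = np.random.default_rng(42)
heights = pd.Series(rng.normal(170, 10, size=1000), name='htm3')

def digitize(series, n=11):
    # Decile boundaries from evenly spaced cumulative probabilities,
    # exactly as the exercise describes.
    ps = np.linspace(0, 1, n)
    bins = series.quantile(ps)
    return np.digitize(series, bins)

deciles = digitize(heights)
# Mean height should rise monotonically across the decile groups.
group_means = heights.groupby(deciles).mean()
```

Note that with the default right=False, np.digitize assigns the minimum value to bin 1 and the maximum value to bin 11, so the labels run 1 through 11 rather than 0 through 10.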
Kaggle/learntools
notebooks/geospatial/raw/ex5.ipynb
apache-2.0
import math import geopandas as gpd import pandas as pd from shapely.geometry import MultiPolygon import folium from folium import Choropleth, Marker from folium.plugins import HeatMap, MarkerCluster from learntools.core import binder binder.bind(globals()) from learntools.geospatial.ex5 import * """ Explanation: Introduction You are part of a crisis response team, and you want to identify how hospitals have been responding to crash collisions in New York City. <center> <img src="https://i.imgur.com/wamd0n7.png" width="450"><br/> </center> Before you get started, run the code cell below to set everything up. End of explanation """ def embed_map(m, file_name): from IPython.display import IFrame m.save(file_name) return IFrame(file_name, width='100%', height='500px') """ Explanation: You'll use the embed_map() function to visualize your maps. End of explanation """ collisions = gpd.read_file("../input/geospatial-learn-course-data/NYPD_Motor_Vehicle_Collisions/NYPD_Motor_Vehicle_Collisions/NYPD_Motor_Vehicle_Collisions.shp") collisions.head() """ Explanation: Exercises 1) Visualize the collision data. Run the code cell below to load a GeoDataFrame collisions tracking major motor vehicle collisions in 2013-2018. End of explanation """ m_1 = folium.Map(location=[40.7, -74], zoom_start=11) # Your code here: Visualize the collision data ____ # Uncomment to see a hint #_COMMENT_IF(PROD)_ q_1.hint() # Show the map embed_map(m_1, "q_1.html") #%%RM_IF(PROD)%% m_1 = folium.Map(location=[40.7, -74], zoom_start=11) # Visualize the collision data HeatMap(data=collisions[['LATITUDE', 'LONGITUDE']], radius=9).add_to(m_1) # Show the map embed_map(m_1, "q_1.html") # Get credit for your work after you have created a map q_1.check() # Uncomment to see our solution (your code may look different!) #_COMMENT_IF(PROD)_ q_1.solution() """ Explanation: Use the "LATITUDE" and "LONGITUDE" columns to create an interactive map to visualize the collision data. 
What type of map do you think is most effective? End of explanation """ hospitals = gpd.read_file("../input/geospatial-learn-course-data/nyu_2451_34494/nyu_2451_34494/nyu_2451_34494.shp") hospitals.head() """ Explanation: 2) Understand hospital coverage. Run the next code cell to load the hospital data. End of explanation """ m_2 = folium.Map(location=[40.7, -74], zoom_start=11) # Your code here: Visualize the hospital locations ____ # Uncomment to see a hint #_COMMENT_IF(PROD)_ q_2.hint() # Show the map embed_map(m_2, "q_2.html") #%%RM_IF(PROD)%% m_2 = folium.Map(location=[40.7, -74], zoom_start=11) # Visualize the hospital locations for idx, row in hospitals.iterrows(): Marker([row['latitude'], row['longitude']], popup=row['name']).add_to(m_2) # Show the map embed_map(m_2, "q_2.html") # Get credit for your work after you have created a map q_2.check() # Uncomment to see our solution (your code may look different!) #_COMMENT_IF(PROD)_ q_2.solution() """ Explanation: Use the "latitude" and "longitude" columns to visualize the hospital locations. End of explanation """ # Your code here outside_range = ____ # Check your answer q_3.check() #%%RM_IF(PROD)%% coverage = gpd.GeoDataFrame(geometry=hospitals.geometry).buffer(10000) my_union = coverage.geometry.unary_union outside_range = collisions.loc[~collisions["geometry"].apply(lambda x: my_union.contains(x))] # Check your answer q_3.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_3.hint() #_COMMENT_IF(PROD)_ q_3.solution() """ Explanation: 3) When was the closest hospital more than 10 kilometers away? Create a DataFrame outside_range containing all rows from collisions with crashes that occurred more than 10 kilometers from the closest hospital. Note that both hospitals and collisions have EPSG 2263 as the coordinate reference system, and EPSG 2263 has units of meters. 
End of explanation """ percentage = round(100*len(outside_range)/len(collisions), 2) print("Percentage of collisions more than 10 km away from the closest hospital: {}%".format(percentage)) """ Explanation: The next code cell calculates the percentage of collisions that occurred more than 10 kilometers away from the closest hospital. End of explanation """ def best_hospital(collision_location): # Your code here name = ____ return name # Test your function: this should suggest CALVARY HOSPITAL INC print(best_hospital(outside_range.geometry.iloc[0])) # Check your answer q_4.check() #%%RM_IF(PROD)%% def best_hospital(collision_location): idx_min = hospitals.geometry.distance(collision_location).idxmin() my_hospital = hospitals.iloc[idx_min] name = my_hospital["name"] return name q_4.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_4.hint() #_COMMENT_IF(PROD)_ q_4.solution() """ Explanation: 4) Make a recommender. When collisions occur in distant locations, it becomes even more vital that injured persons are transported to the nearest available hospital. With this in mind, you decide to create a recommender that: - takes the location of the crash (in EPSG 2263) as input, - finds the closest hospital (where distance calculations are done in EPSG 2263), and - returns the name of the closest hospital. End of explanation """ # Your code here highest_demand = ____ # Check your answer q_5.check() #%%RM_IF(PROD)%% highest_demand = outside_range.geometry.apply(best_hospital).value_counts().idxmax() q_5.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_5.hint() #_COMMENT_IF(PROD)_ q_5.solution() """ Explanation: 5) Which hospital is under the highest demand? Considering only collisions in the outside_range DataFrame, which hospital is most recommended? Your answer should be a Python string that exactly matches the name of the hospital returned by the function you created in 4). 
End of explanation """ m_6 = folium.Map(location=[40.7, -74], zoom_start=11) coverage = gpd.GeoDataFrame(geometry=hospitals.geometry).buffer(10000) folium.GeoJson(coverage.geometry.to_crs(epsg=4326)).add_to(m_6) HeatMap(data=outside_range[['LATITUDE', 'LONGITUDE']], radius=9).add_to(m_6) folium.LatLngPopup().add_to(m_6) embed_map(m_6, 'm_6.html') """ Explanation: 6) Where should the city construct new hospitals? Run the next code cell (without changes) to visualize hospital locations, in addition to collisions that occurred more than 10 kilometers away from the closest hospital. End of explanation """ # Your answer here: proposed location of hospital 1 lat_1 = ____ long_1 = ____ # Your answer here: proposed location of hospital 2 lat_2 = ____ long_2 = ____ # Do not modify the code below this line try: new_df = pd.DataFrame( {'Latitude': [lat_1, lat_2], 'Longitude': [long_1, long_2]}) new_gdf = gpd.GeoDataFrame(new_df, geometry=gpd.points_from_xy(new_df.Longitude, new_df.Latitude)) new_gdf.crs = {'init' :'epsg:4326'} new_gdf = new_gdf.to_crs(epsg=2263) # get new percentage new_coverage = gpd.GeoDataFrame(geometry=new_gdf.geometry).buffer(10000) new_my_union = new_coverage.geometry.unary_union new_outside_range = outside_range.loc[~outside_range["geometry"].apply(lambda x: new_my_union.contains(x))] new_percentage = round(100*len(new_outside_range)/len(collisions), 2) print("(NEW) Percentage of collisions more than 10 km away from the closest hospital: {}%".format(new_percentage)) # Did you help the city to meet its goal? 
q_6.check() # make the map m = folium.Map(location=[40.7, -74], zoom_start=11) folium.GeoJson(coverage.geometry.to_crs(epsg=4326)).add_to(m) folium.GeoJson(new_coverage.geometry.to_crs(epsg=4326)).add_to(m) for idx, row in new_gdf.iterrows(): Marker([row['Latitude'], row['Longitude']]).add_to(m) HeatMap(data=new_outside_range[['LATITUDE', 'LONGITUDE']], radius=9).add_to(m) folium.LatLngPopup().add_to(m) display(embed_map(m, 'q_6.html')) except: q_6.hint() #%%RM_IF(PROD)%% # Proposed location of hospital 1 lat_1 = 40.6714 long_1 = -73.8492 # Proposed location of hospital 2 lat_2 = 40.6702 long_2 = -73.7612 # Check your answer try: new_df = pd.DataFrame( {'Latitude': [lat_1, lat_2], 'Longitude': [long_1, long_2]}) new_gdf = gpd.GeoDataFrame(new_df, geometry=gpd.points_from_xy(new_df.Longitude, new_df.Latitude)) new_gdf.crs = {'init' :'epsg:4326'} new_gdf = new_gdf.to_crs(epsg=2263) # get new percentage new_coverage = gpd.GeoDataFrame(geometry=new_gdf.geometry).buffer(10000) new_my_union = new_coverage.geometry.unary_union new_outside_range = outside_range.loc[~outside_range["geometry"].apply(lambda x: new_my_union.contains(x))] new_percentage = round(100*len(new_outside_range)/len(collisions), 2) print("(NEW) Percentage of collisions more than 10 km away from the closest hospital: {}%".format(new_percentage)) # Did you help the city to meet its goal? 
q_6.assert_check_passed() # make the map m = folium.Map(location=[40.7, -74], zoom_start=11) folium.GeoJson(coverage.geometry.to_crs(epsg=4326)).add_to(m) folium.GeoJson(new_coverage.geometry.to_crs(epsg=4326)).add_to(m) for idx, row in new_gdf.iterrows(): Marker([row['Latitude'], row['Longitude']]).add_to(m) HeatMap(data=new_outside_range[['LATITUDE', 'LONGITUDE']], radius=9).add_to(m) folium.LatLngPopup().add_to(m) display(embed_map(m, 'q_6.html')) except: q_6.hint() # Uncomment to see one potential answer #_COMMENT_IF(PROD)_ q_6.solution() """ Explanation: Click anywhere on the map to see a pop-up with the corresponding location in latitude and longitude. The city of New York reaches out to you for help with deciding locations for two brand new hospitals. They specifically want your help with identifying locations to bring the calculated percentage from step 3) to less than ten percent. Using the map (and without worrying about zoning laws or what potential buildings would have to be removed in order to build the hospitals), can you identify two locations that would help the city accomplish this goal? Put the proposed latitude and longitude for hospital 1 in lat_1 and long_1, respectively. (Likewise for hospital 2.) Then, run the rest of the cell as-is to see the effect of the new hospitals. Your answer will be marked correct, if the two new hospitals bring the percentage to less than ten percent. End of explanation """
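The core of the recommender in exercise 4 is just a nearest-point search, which is valid with plain Euclidean distance because the data is in a projected CRS. A dependency-free sketch with invented hospital names and coordinates (not the real NYC data):

```python
import math

# Hypothetical hospitals as (name, x, y) in projected coordinates;
# the names and positions here are made up for illustration only.
HOSPITALS = [
    ("Alpha Medical", 0.0, 0.0),
    ("Beta General", 30000.0, 0.0),
    ("Gamma Health", 0.0, 40000.0),
]

def best_hospital(x, y):
    """Return the name of the closest hospital by Euclidean distance
    (meaningful because coordinates are projected, not lat/long)."""
    return min(HOSPITALS, key=lambda h: math.hypot(h[1] - x, h[2] - y))[0]
```

The GeoPandas version in the notebook does the same search vectorized, via hospitals.geometry.distance(collision_location).idxmin().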
mdbecker/daa_philly_2015
DataPhilly_Analysis.ipynb
mit
%matplotlib inline import seaborn as sns import pandas as pd from matplotlib import rcParams # Modify aesthetics for visibility during presentation sns.set_style('darkgrid', {'axes.facecolor': '#C2C2C8'}) sns.set_palette('colorblind') # Make everything bigger for visibility during presentation rcParams['figure.figsize'] = 20, 10 rcParams['axes.titlesize'] = 'xx-large' rcParams['axes.labelsize'] = 'x-large' rcParams['xtick.labelsize'] = 'x-large' rcParams['ytick.labelsize'] = 'x-large' rcParams['legend.fontsize'] = 'xx-large' rcParams['lines.linewidth'] = 4.0 rcParams['grid.linewidth'] = 2.0 # Hide warnings in the notebook import warnings warnings.filterwarnings('ignore') """ Explanation: Analyzing the Philadelphia Data Science Scene with Python Instructions The latest version of this notebook can always be found and viewed online here. It's strongly recommended that you view the online version of this document. Instructions for setting up Jupyter Notebook and the required libraries can be found online here. The repo for this project can be found and forked here. DataPhilly <img src="dataphilly.jpeg" width="70%" /> DataPhilly is a local data meetup group I started back in 2012. I had attended a few data science conferences and I was really disappointed about the lack of a local meetup group for people interested in data science. And so DataPhilly was born! Jupyter Notebook The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more. <img src="jupyterpreview.png" width="70%" /> Through Jupyter's kernel and messaging architecture, the Notebook allows code to be run in a range of different programming languages. For each notebook document that a user opens, the web application starts a kernel that runs the code for that notebook. 
Each kernel is capable of running code in a single programming language and there are kernels available in the following languages: Python (https://github.com/ipython/ipython) Julia (https://github.com/JuliaLang/IJulia.jl) R (https://github.com/takluyver/IRkernel) Ruby (https://github.com/minrk/iruby) Haskell (https://github.com/gibiansky/IHaskell) Scala (https://github.com/Bridgewater/scala-notebook) node.js (https://gist.github.com/Carreau/4279371) Go (https://github.com/takluyver/igo) The default kernel runs Python code. The notebook provides a simple way for users to pick which of these kernels is used for a given notebook. Jupyter examples and tutorials can be found in the Jupyter GitHub repo here. The task The task I'll be walking you through today will demonstrate how to use Python for exploratory data analysis. The dataset I'll use is one I created by querying the Meetup API for the DataPhilly meetup. I'll walk you through using Jupyter Notebook (the web app we're using now), Pandas (an Excel-like tool for data exploration) and scikit-learn (a Python machine learning library) to explore the DataPhilly dataset. I won't go in depth into these tools, but my hope is that you'll leave my talk wanting to learn more about using Python for exploratory data analysis and that you'll learn some interesting things about DataPhilly in the process. Initializing our environment First let's start off by initializing our environment: * %matplotlib inline initializes matplotlib so that we can display graphs and charts in our notebook. * import seaborn as sns imports seaborn, a graphing library built on top of matplotlib. * import pandas as pd imports pandas, a tool I'll explain in the next section. Hint: If you've installed Jupyter Notebook and you're running this on your machine, you can use the run button <i class="fa-step-forward fa"></i> in the toolbar at the top of the page to execute each cell. Click on the cell above and the cell below. 
You'll notice that the cell above is Markdown. You can edit it by double clicking on it. The cell below contains Python code which can be modified and executed. If the code has any output it will be printed out below the cell with <font color="darkred">Out [n]:</font> in front of it. End of explanation """ events_df = pd.read_pickle('events.pkl') events_df = events_df.sort_values(by='time') events_df """ Explanation: Pandas <img src="pandas_logo.png" width="50%" /> Pandas is a library that provides data analysis tools for the Python programming language. You can think of it as Excel on steroids, but in Python. To start off, I've used the meetup API to gather a bunch of data on members of the DataPhilly meetup group. First let's start off by looking at the events we've had over the past few years. I've loaded the data into a pandas DataFrame and stored it in the file events.pkl. A DataFrame is a table similar to an Excel spreadsheet. Let's load it and see what it looks like: DataPhilly events dataset End of explanation """ events_df['yes_rsvp_count'] """ Explanation: You can access values in a DataFrame column like this: End of explanation """ events_df.iloc[4] """ Explanation: You can access a row of a DataFrame using iloc: End of explanation """ events_df.head() """ Explanation: We can view the first few rows using the head method: End of explanation """ events_df.tail(3) """ Explanation: And similarly the last few using tail: End of explanation """ yes_rsvp_count = events_df['yes_rsvp_count'] yes_rsvp_count.sum(), yes_rsvp_count.mean(), yes_rsvp_count.min(), yes_rsvp_count.max() """ Explanation: We can see that the yes_rsvp_count contains the number of people who RSVPed yes for each event. First let's look at some basic statistics: End of explanation """ type(yes_rsvp_count) """ Explanation: When we access a single column of the DataFrame like this we get a Series object which is just a 1-dimensional version of a DataFrame. 
End of explanation """ yes_rsvp_count.describe() """ Explanation: We can use the built-in describe method to print out a lot of useful stats in a nice tabular format: End of explanation """ events_df['total_RSVP_count'] = events_df['waitlist_count'] + events_df['yes_rsvp_count'] events_df['total_RSVP_count'] """ Explanation: Next I'd like to graph the number of RSVPs over time to see if there are any interesting trends. To do this let's first sum the waitlist_count and yes_rsvp_count columns and make a new column called total_RSVP_count. End of explanation """ events_df['total_RSVP_count'].plot() """ Explanation: We can plot these values using the plot method End of explanation """ events_df.head(2) import datetime def get_datetime_from_epoch(epoch): return datetime.datetime.fromtimestamp(epoch/1000.0) events_df['time'] = events_df['time'].apply(get_datetime_from_epoch) events_df['time'] """ Explanation: The plot method utilizes the matplotlib library behind the scenes to draw the plot. This is interesting, but it would be nice to have the dates of the meetups on the X-axis of the plot. To accomplish this, let's convert the time field from a unix epoch timestamp to a python datetime utilizing the apply method and a function. End of explanation """ events_df.set_index('time', inplace=True) events_df[['total_RSVP_count']].plot() """ Explanation: Next let's make the time column the index of the DataFrame using the set_index method and then re-plot our data. End of explanation """ all_rsvps = events_df[['yes_rsvp_count', 'waitlist_count', 'total_RSVP_count']] all_rsvps.plot(title='Attendance over time') """ Explanation: We can also easily plot multiple columns on the same plot. End of explanation """ members_df = pd.read_pickle('members.pkl') for column in ['joined', 'visited']: members_df[column] = members_df[column].apply(get_datetime_from_epoch) members_df.head(3) """ Explanation: DataPhilly members dataset Alright so I'm seeing some interesting trends here. 
Let's take a look at something different. The Meetup API also provides us access to member info. Let's have a look at the data we have available: End of explanation """ gender_counts = members_df['gender'].value_counts() gender_counts """ Explanation: You'll notice that I've anonymized the meetup member_id and the member's name. I've also used the python module SexMachine to infer members gender based on their first name. I ran SexMachine on the original names before I anonymized them. Let's have a closer look at the gender breakdown of our members: End of explanation """ members_df['membership_count'].hist(bins=20) """ Explanation: Next let's use the hist method to plot a histogram of membership_count. This is the number of groups each member is in. End of explanation """ members_df['membership_count'].value_counts().head() """ Explanation: Something looks odd here let's check out the value_counts: End of explanation """ members_df_non_zero = members_df[members_df['membership_count'] != 0] members_df_non_zero['membership_count'].hist(bins=50) """ Explanation: Okay so most members are members of 0 meetup groups?! This seems odd! I did a little digging and came up with the answer; members can set their membership details to be private, and then this value will be zero. Let's filter out these members and recreate the histogram. End of explanation """ ax = members_df_non_zero['membership_count'].hist(bins=50) ax.set_yscale('log') ax.set_xlim(0, 500) """ Explanation: Okay so most members are only members of a few meetup groups. 
There are some outliers that are pretty hard to read; let's try plotting this on a logarithmic scale to see if that helps: End of explanation """ all_the_meetups = members_df[members_df['membership_count'] > 100] filtered = all_the_meetups[['membership_count', 'city', 'country', 'state']] filtered.sort_values(by='membership_count', ascending=False) """ Explanation: Let's use a mask to filter out the outliers so we can dig into them a little further: End of explanation """ all_the_meetups = members_df[ (members_df['membership_count'] > 100) & (members_df['city'] != 'Philadelphia') ] filtered = all_the_meetups[['membership_count', 'city', 'country', 'state']] filtered.sort_values(by='membership_count', ascending=False) """ Explanation: The people from Philly might actually be legitimate members, so let's use a compound mask to filter them out as well: End of explanation """ rsvps_df = pd.read_pickle('rsvps.pkl') rsvps_df.head(3) """ Explanation: That's strange, I don't think we've ever had any members from Berlin, San Francisco, or Jerusalem in attendance :-). The RSVP dataset Moving on, we also have all the events that each member RSVPed to: End of explanation """ joined_with_rsvps_df = pd.merge(members_df, rsvps_df, left_on='anon_id', right_on='member_id') joined_with_rsvps_df.head(3) joined_with_rsvps_df.columns """ Explanation: Now we have a ton of data, let's see what kind of interesting things we can discover. Let's look at some stats on male attendees vs. 
female attendees: First we can use the isin method to make DataFrames for male and female members. End of explanation """ event_ids = [ '102502622', '106043892', '107740582', '120425212', '133803672', '138415912', '144769822', '149515412', '160323532', '168747852', '175993712', '182860422', '206754182', '215265722', '219055217', '219840555', '220526799', '221245827', '225488147', '89769502', '98833672' ] male_attendees[event_ids].sum().head(3) """ Explanation: Next we can use the sum method to count the number of male and female attendees per event and create a Series for each. End of explanation """ gender_attendance = pd.DataFrame({'male': male_attendees[event_ids].sum(), 'female': female_attendees[event_ids].sum()}) gender_attendance.head(3) """ Explanation: We can then recombine the male and female Series' into a new DataFrame. End of explanation """ events_with_gender_df = pd.merge(events_df, gender_attendance, left_on='id', right_index=True) events_with_gender_df.head(3) """ Explanation: And then we can use merge again to combine this with our events DataFrame. End of explanation """ gender_df = events_with_gender_df[['female', 'male']] gender_df.plot(title='Attendance by gender over time') """ Explanation: The we can plot the attendance by gender over time End of explanation """ female_ratio = gender_df['female'].div(gender_df['male'] + gender_df['female']) female_ratio.plot(title='Percentage female attendance over time', ylim=(0.0, 1.0)) """ Explanation: This might be easier to interpret by looking at the percentage of females in attendance. We can use the div (divide) method to calculate this. End of explanation """ members_df['topics'].iloc[0] """ Explanation: The members DataFrame also has some other interesting stuff in it. Let's take a look at the topics column. 
End of explanation """ from collections import Counter topic_counter = Counter() for m in members_df['topics']: topic_counter.update([t['name'] for t in m]) topic_counter.most_common(20) """ Explanation: Let's see if we can identify any trends in member's topics. Let's start off by identifying the most common topics: End of explanation """ top_100_topics = set([t[0] for t in topic_counter.most_common(100)]) topic_member_map = {} for i, m in members_df.iterrows(): if m['topics']: top_topic_count = {} for topic in m['topics']: if topic['name'] in top_100_topics: top_topic_count[topic['name']] = 1 topic_member_map[m['anon_id']] = top_topic_count top_topic_df = pd.DataFrame(topic_member_map) top_topic_df.head(3) """ Explanation: Next let's create a new DataFrame where each column is one of the top 100 topics, and each row is a member. We'll set the values of each cell to be either 0 or 1 to indicate that that member has (or doesn't have) that topic. End of explanation """ top_topic_df = top_topic_df.T top_topic_df.head(3) """ Explanation: Okay for what I'm going to do next, I want the rows to be the members and the columns to be the topics. We can use the T (transpose) method to fix this. End of explanation """ top_topic_df.fillna(0, inplace=True) top_topic_df.head(3) """ Explanation: Next we can use the fillna method to fill in the missing values with zeros. End of explanation """ from sklearn.cluster import MiniBatchKMeans as KMeans X = top_topic_df.as_matrix() n_clusters = 3 k_means = KMeans(init='k-means++', n_clusters=n_clusters, n_init=10, random_state=47) k_means.fit(X) k_means.labels_ """ Explanation: Next let's use a clustering algorithm to see if there are any patterns in the topics members are interested in. A clustering algorithm groups a set of data points so that similar objects are in the same group. This is a classic type of unsupervised machine learning. 
Below you can find visualisations of how different clustering algorithms perform on various kinds of data: <img src="plot_cluster_comparison_001.png" width="90%" /> K-means clustering is quick and can scale well to larger datasets. Let's see how it performs on our dataset: scikit-learn <img src="scikit-learn-logo-notext.png" width="20%" /> We'll use a Python machine learning library called scikit-learn to do the clustering. End of explanation """ from sklearn.cluster import MiniBatchKMeans as KMeans X = top_topic_df.values n_clusters = 3 k_means = KMeans(init='k-means++', n_clusters=n_clusters, n_init=10, random_state=47) k_means.fit(X) k_means.labels_ """ Explanation: We've grouped our members into 3 clusters; let's see how many members are in each cluster. End of explanation """ Counter(list(k_means.labels_)).most_common() """ Explanation: Next let's see which topics are most popular in each cluster: End of explanation """ from collections import defaultdict cluster_index_map = defaultdict(list) for i in range(k_means.labels_.shape[0]): cluster_index_map[k_means.labels_[i]].append(top_topic_df.index[i]) for cluster_num in range(n_clusters): print('Cluster {}'.format(cluster_num)) f = top_topic_df[top_topic_df.index.isin(cluster_index_map[cluster_num])].sum() f2 = f[f > 0] f3 = f2.sort_values(ascending=False) print(f3[:10]) print() """ Explanation: Next let's see which topics are most popular in each cluster: End of explanation """
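The feature-building step that feeds K-means (top topics turned into a member-by-topic 0/1 matrix) can be sketched with the standard library alone. The member names and topics below are toy data standing in for the Meetup API payload:

```python
from collections import Counter

# Toy member -> topics mapping standing in for the Meetup API data.
members = {
    "m1": ["Python", "Data Science", "Statistics"],
    "m2": ["Python", "Machine Learning"],
    "m3": ["Statistics", "Python"],
}

# Count topic popularity, as the notebook does with Counter.
topic_counter = Counter(t for topics in members.values() for t in topics)
top_topics = [name for name, _ in topic_counter.most_common(2)]

# One 0/1 indicator row per member -- the matrix K-means consumes.
indicator = {
    member: [1 if t in topics else 0 for t in top_topics]
    for member, topics in members.items()
}
```

The notebook builds the same structure as a DataFrame (one column per top topic, filled with fillna(0)); the clustering step only sees the resulting 0/1 matrix.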
CamDavidsonPilon/lifelines
examples/Proportional hazard assumption.ipynb
mit
from lifelines import CoxPHFitter from lifelines.datasets import load_rossi rossi = load_rossi() cph = CoxPHFitter() cph.fit(rossi, 'week', 'arrest') cph.print_summary(model="untransformed variables", decimals=3) """ Explanation: Testing the proportional hazard assumptions This Jupyter notebook is a small tutorial on how to test and fix proportional hazard problems. An important question to first ask is: do I need to care about the proportional hazard assumption? - often the answer is no. The proportional hazard assumption is that all individuals have the same hazard function, but a unique scaling factor in front. So the shape of the hazard function is the same for all individuals, and only a scalar multiple changes per individual. $$h_i(t) = a_i h(t)$$ At the core of the assumption is that $a_i$ is not time varying, that is, $a_i(t) = a_i$. Furthermore, if we take the ratio of this with another subject (called the hazard ratio): $$\frac{h_i(t)}{h_j(t)} = \frac{a_i h(t)}{a_j h(t)} = \frac{a_i}{a_j}$$ which is constant for all $t$. In this tutorial we will test this non-time-varying assumption, and look at ways to handle violations. End of explanation """
If lifelines rejects the null (that is, it rejects the hypothesis that the coefficient is constant over time), we report this to the user. Some advice is presented on how to correct the proportional hazard violation based on some summary statistics of the variable.
As a complement to the above statistical test, for each variable that violates the PH assumption, visual plots of the scaled Schoenfeld residuals are presented against the four time transformations. A fitted lowess is also presented, along with 10 bootstrapped lowess lines (as an approximation to the confidence interval of the original lowess line). Ideally, this lowess line is constant (flat). Deviations away from the constant line are violations of the PH assumption.
Why the scaled Schoenfeld residuals? This section can be skipped on first read.
Let $s_{t,j}$ denote the scaled Schoenfeld residuals of variable $j$ at time $t$, $\hat{\beta_j}$ denote the maximum-likelihood estimate of the $j$th variable, and $\beta_j(t)$ a time-varying coefficient in a (fictional) alternative model that allows for time-varying coefficients. Therneau and Grambsch showed that
$$E[s_{t,j}] + \hat{\beta_j} = \beta_j(t)$$
The proportional hazard assumption implies that $\hat{\beta_j} = \beta_j(t)$, hence $E[s_{t,j}] = 0$. This is what the above proportional hazard test is testing. Visually, plotting $s_{t,j}$ over time (or some transform of time) is a good way to see violations of $E[s_{t,j}] = 0$, along with the statistical test.
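A minimal numpy sketch of the intuition behind this test, using simulated residuals rather than output from a fitted model (the drift size, time range, and `log` time transform are all made up for illustration): residuals that drift with time correlate with a transform of time, while residuals satisfying $E[s_{t,j}] = 0$ at every time do not.

```python
import numpy as np

# Simulated "residuals": one series with zero mean at every time (PH holds),
# one with a deterministic drift in time (PH violated).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(1, 52, 200))
flat_resid = rng.normal(0, 1, 200)                          # PH holds
drift_resid = 0.1 * (t - t.mean()) + rng.normal(0, 1, 200)  # PH violated

# Correlation with a transform of time flags the drifting series.
corr_flat = abs(np.corrcoef(np.log(t), flat_resid)[0, 1])
corr_drift = abs(np.corrcoef(np.log(t), drift_resid)[0, 1])
assert corr_drift > corr_flat
```

This is only the idea; lifelines' actual test uses the scaled Schoenfeld residuals from the fitted Cox model and several time transforms.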
End of explanation
"""
from lifelines.statistics import proportional_hazard_test

results = proportional_hazard_test(cph, rossi, time_transform='rank')
results.print_summary(decimals=3, model="untransformed variables")
"""
Explanation: Alternatively, you can use the proportional hazard test outside of check_assumptions:
End of explanation
"""
cph.fit(rossi, 'week', 'arrest', strata=['wexp'])
cph.print_summary(model="wexp in strata")
cph.check_assumptions(rossi, show_plots=True)
"""
Explanation: Stratification
In the advice above, we can see that wexp has small cardinality, so we can easily fix that by specifying it in the strata. What does the strata do? Let's go back to the proportional hazard assumption.
In the introduction, we said that the proportional hazard assumption was that
$$ h_i(t) = a_i h(t)$$
In a simple case, it may be that there are two subgroups that have very different baseline hazards. That is, we can split the dataset into subsamples based on some variable (we call this the stratifying variable), run the Cox model on all subsamples, and compare their baseline hazards. If these baseline hazards are very different, then clearly the formula above is wrong - the $h(t)$ is some weighted average of the subgroups' baseline hazards. This ill-fitting average baseline can cause $a_i$ to have time-dependent influence. A better model might be:
$$ h_{i |i\in G}(t) = a_i h_G(t)$$
where now we have a unique baseline hazard per subgroup $G$. Because of the way the Cox model is designed, inference of the coefficients is identical (except now there are more baseline hazards, and no variation of the stratifying variable within a subgroup $G$).
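The "weighted average baseline" problem can be seen numerically. Both baseline hazard shapes below are invented for illustration: relative to a naive pooled baseline, the per-group scaling factor drifts with time, which is exactly a PH violation.

```python
import numpy as np

# Two subgroups with different (made-up) baseline hazard shapes.
t = np.linspace(0.1, 10, 50)
h_A = 0.5 * np.ones_like(t)     # flat baseline for subgroup A
h_B = 0.2 * t                   # increasing baseline for subgroup B
h_pooled = 0.5 * (h_A + h_B)    # naive shared baseline

# Group A's implied scaling factor relative to the pooled baseline is no
# longer a time-constant a_i: it drifts as t grows.
ratio_A = h_A / h_pooled
assert ratio_A.std() > 0.1
```

Stratifying fits a separate baseline per group, so each group's scaling factor stays constant by construction.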
End of explanation
"""
cph.fit(rossi, 'week', 'arrest', strata=['wexp'],
        formula="bs(age, df=4, lower_bound=10, upper_bound=50) + fin + race + mar + paro + prio")
cph.print_summary(model="spline_model")
print()
cph.check_assumptions(rossi, show_plots=True, p_value_threshold=0.05)
"""
Explanation: Since age is still violating the proportional hazard assumption, we need to model it better. From the residual plots above, we can see the effect of age starts to become negative over time. This will be relevant later. Below, we present three options to handle age.
Modify the functional form
The proportional hazard test is very sensitive (i.e. lots of false positives) when the functional form of a variable is incorrect. For example, if the association between a covariate and the log-hazard is non-linear, but the model has only a linear term included, then the proportional hazard test can raise a false positive.
The modeller can choose to add quadratic or cubic terms, i.e.:
rossi['age**2'] = (rossi['age'] - rossi['age'].mean())**2
rossi['age**3'] = (rossi['age'] - rossi['age'].mean())**3
but I think a more correct way to include non-linear terms is to use basis splines:
End of explanation
"""
import numpy as np
import pandas as pd

rossi_strata_age = rossi.copy()
rossi_strata_age['age_strata'] = pd.cut(rossi_strata_age['age'], np.arange(0, 80, 3))
rossi_strata_age[['age', 'age_strata']].head()

# drop the original, now redundant, age column
rossi_strata_age = rossi_strata_age.drop('age', axis=1)

cph.fit(rossi_strata_age, 'week', 'arrest', strata=['age_strata', 'wexp'])
cph.print_summary(3, model="stratified age and wexp")
cph.plot()
cph.check_assumptions(rossi_strata_age)
"""
Explanation: We see we may still have some violation, but it's a heck of a lot less. Also, interestingly, when we include these non-linear terms for age, the wexp proportionality violation disappears. It is not uncommon to see that changing the functional form of one variable affects other variables' proportional hazard tests, usually positively.
So, we could remove the strata=['wexp'] if we wished.
Bin variable and stratify on it
The second option proposed is to bin the variable into equal-sized bins, and stratify like we did with wexp. There is a trade-off here between estimation and information loss. If we have large bins, we will lose information (since different values are now binned together), but we need to estimate fewer new baseline hazards. On the other hand, with tiny bins, we allow the age data to have the most "wiggle room", but must compute many baseline hazards, each of which has a smaller sample size. Like most things, the optimal value is somewhere in between.
End of explanation
"""
from lifelines.utils import to_episodic_format

# the time_gaps parameter specifies how large or small you want the periods to be.
rossi_long = to_episodic_format(rossi, duration_col='week', event_col='arrest', time_gaps=1.)
rossi_long.head(25)
"""
Explanation: Introduce time-varying covariates
Our third option to correct variables that violate the proportional hazard assumption is to model the time-varying component directly. This is done in two steps. The first is to transform your dataset into episodic format. This means that we split a subject from a single row into $n$ new rows, and each new row represents some time period for the subject. It's okay that the variables are static over these new time periods - we'll introduce some time-varying covariates later. See below for how to do this in lifelines:
End of explanation
"""
rossi_long['time*age'] = rossi_long['age'] * rossi_long['stop']

from lifelines import CoxTimeVaryingFitter
ctv = CoxTimeVaryingFitter()
ctv.fit(rossi_long, id_col='id', event_col='arrest', start_col='start', stop_col='stop', strata=['wexp'])
ctv.print_summary(3, model="age * time interaction")
ctv.plot()
"""
Explanation: Each subject is given a new id (but it can be specified as well if already provided in the dataframe). This id is used to track subjects over time.
Notice the arrest col is 0 for all periods prior to their (possible) event as well. Above I mentioned there were two steps to correct age. The first was to convert to an episodic format. The second is to create an interaction term between age and stop. This is a time-varying variable. Instead of CoxPHFitter, we must use CoxTimeVaryingFitter since we are working with an episodic dataset.
End of explanation
"""
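A rough pure-Python sketch of the long-format expansion that to_episodic_format performs: one row per unit of follow-up time, with start/stop bounds and the event flag set only on the final interval. The helper below is illustrative, not lifelines code, and the column names simply mirror the notebook.

```python
# Hypothetical stand-in for lifelines.utils.to_episodic_format, for one subject.
def to_episodic(duration, event, time_gap=1):
    rows = []
    start = 0
    while start < duration:
        stop = min(start + time_gap, duration)
        rows.append({"start": start, "stop": stop,
                     "arrest": int(event and stop == duration)})
        start = stop
    return rows

episodes = to_episodic(duration=3, event=1)
# The event is recorded only on the last episode, as noted in the text above.
assert [r["arrest"] for r in episodes] == [0, 0, 1]
```

With the data in this shape, a column like `age * stop` becomes an ordinary covariate that varies across a subject's rows.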
mdeff/ntds_2017
projects/reports/brain_network/1_ntds_project.ipynb
mit
##force not printing %%capture %matplotlib inline !pip install h5py import numpy as np import h5py from scipy import sparse import IPython.display as ipd import matplotlib.pyplot as plt import re import networkx as nx import scipy as sp import scipy.sparse as sps ##read .h5 file format containing the information about the microcolumn of the averege individual. Run load_ntds_project before this cell! file_name='cons_locs_pathways_mc2_Column.h5' h5=h5py.File(file_name,'r') """ Explanation: 1) GSP on the Digital Reconstruction of the Brain Stefania Ebli - Christopher Elin - Florian Roth The goal of this project is to explore a digital reconstruction of the brain through graph signal processing techniques. In 2015 the Blue Brain Project published the first digital reconstruction of the microcircuit of the somatosensory cortex of a juvenile rat. This reconstruction follows biological principles. In particular, it reproduces in detail the anatomy and the physiology of each neuron in the digital microcircuit. For example, it contains information about the electrical properties, morphological properties, synaptic properties and membrane properties of the neurons. A column (or microcolumn) of the microcircuit is a neocortical volume of 0.29 ± 0.01 $mm^3$ containing ~31,000 neuron. Digitally reconstructed neurons are positioned in this volume according to experimentally based estimates of their densities in the 6 layers into which each microcolumn is divided. After their allocation in the 3D space, the connectivity between the neurons is reconstructed. Lastly, different stochastic instances of this microcolumn are initiated and assembled to simulate a defined volume in the somatosensory cortex. The experimental data, the digital reconstruction, and the simulation results are available at the Neocortical Microcircuit Collaboration Portal. In the section Downloads one can find the data we used for this project. 
The connectivity of different instances of a modeled microcircuit are available as HDF5 files. For building our network we will use one of the 7 stochastic instances of a modeled microcircuit based on averaged measurements of neuron densities, E/I balance and layer widths. In particular, we arbitraly chose the third of the seven stochastic instances. This instance is contained in the file cons_locs_pathways_mc2_Column.h5. Information about the anatomy and physiology are available in the .JSON files pathways_physiology_factsheets_simplified.json and pathways_anatomy_factsheets_simplified.json. ## 1.1) Data Exploration Now we'll download the file cons_locs_pathways_mc2_Column.h5 containing the infomation about the connectivity and the properties of the neurons and we'll briefly explore its content: End of explanation """ list(h5.keys()) """ Explanation: Inside the downloaded file two main data sources can be found : 'populations' and 'connectivity'. The source 'populations' contains information for each individual neuron in the microcircuit, like its position in 3d space, in- and out-degree of connectivity, etc. while 'connectivity' contains infromation about the connections inside the chosen volume. End of explanation """ m_type=dict() m_type_inv=dict() m_values=list(h5['populations'].keys()) nb_values_show = 16 for i in range(0, len(m_values)): m_type[i] =m_values[i] m_type_inv[m_values[i]]=i print('We have {} different mophological types. Here there are the first {} m-type: '.format(len(m_type), nb_values_show)) for x in range(nb_values_show-1): print(m_type[x], end=' ') """ Explanation: The information is splitted into 55 groups, each one corresponding to a morphological type (m-type) used in the model (for example 'L4_PC' = layer 4 pyramidal cell). For further information about the classification of neuron in morphological type refer to 'Reconstruction and Simulation of Neocortical Microcircuitry' (Markram et al., 2015; Cell). 
End of explanation
"""
print('The 3D position of the first L4_PC neuron is {}'.format(list(h5['populations']['L4_PC']['locations'])[0]))
"""
Explanation: Each of the names of the 55 different morphological groups contains additional information about the position and the type of the neuron. For example, we know that a neuron belonging to the group 'L4_PC' = 'layer 4 pyramidal cell' is an excitatory neuron in layer 4 of the microcircuit, while a neuron in the group 'L4_MC' = 'layer 4 martinotti cell' is an inhibitory neuron in layer 4. The following is a table representing the 55 different morphological types of neurons. The numbers on the left from 1 to 6 are the 6 different layers into which the microcircuit is divided. Inhibitory neurons are assigned a blue label whereas excitatory neurons are assigned a red label. For more details about this classification we refer to 'Reconstruction and Simulation of Neocortical Microcircuitry' (Markram et al., 2015; Cell).
For each group, ['populations'] contains the locations (in micrometers) in the 3D space of the neurons of the given group.
For example, if we want to know the 3D position of the first neuron of m-type 'L4_PC':
End of explanation
"""
pos_neuron = dict()
pos_neuron_inv = dict()
for i in range(0, len(m_values)):
    pos_neuron[i] = list(h5['populations'][m_type[i]]['locations'])
    pos_neuron_inv[m_values[i]] = list(h5['populations'][m_type[i]]['locations'])
all_positions = pos_neuron.values()
"""
Explanation: Now we'll collect the 3D positions of all the neurons, from which distances between neurons can be computed.
End of explanation
"""
print('The number of L4_PC neurons is {}'.format(len(list(h5['populations']['L4_PC']['locations']))))

# calculation of the total number of neurons of the microcircuit
num_neuron = dict()
num_neuron_inv = dict()
for i in range(0, len(m_values)):
    num_neuron[i] = len(list(h5['populations'][m_type[i]]['locations']))
    num_neuron_inv[m_values[i]] = len(list(h5['populations'][m_type[i]]['locations']))
N = sum(num_neuron.values())
print('The total number of neurons in our microcircuit is {}'.format(N))
"""
Explanation: How many L4_PC neurons do we have?
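With positions collected as above, pairwise Euclidean distances follow from numpy broadcasting; the three coordinates below are made up stand-ins for entries of `locations`.

```python
import numpy as np

# Toy 3D positions (micrometres); real positions come from the 'locations' datasets.
pos = np.array([[0.0, 0.0, 0.0],
                [3.0, 4.0, 0.0],
                [0.0, 0.0, 5.0]])

# Broadcasting (n,1,3) - (1,n,3) gives all pairwise difference vectors at once.
diff = pos[:, None, :] - pos[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))   # (n, n) symmetric distance matrix

assert dist[0, 1] == 5.0                   # 3-4-5 triangle
assert np.allclose(dist, dist.T)
```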
End of explanation """ ##Plot histogram of m-types label_morphology=[] for i in num_neuron.keys(): label_morphology.extend(num_neuron[i]*[i]) label_layer=[] label_layer_morpho=[] for i in m_type.keys(): label_layer.extend(num_neuron[i]*list(map(int, re.findall(r'^\D*(\d+)', m_type[i])))) label_layer_morpho.extend(list(map(int, re.findall(r'^\D*(\d+)', m_type[i])))) label_layer = [x if x!= 23 else 2 for x in label_layer] label_layer_morpho = [x if x!= 23 else 2 for x in label_layer_morpho] def hist_morpho(label_morpho,a,b,y,title='Histogram of m-type'): fig = plt.figure(figsize=(a,b)) ax=fig.add_subplot(1,1,1) n, my_bins, patches = ax.hist(label_morpho, bins=np.arange(-0.5,55,1), facecolor='b',edgecolor='black', linewidth=1.2) plt.xlabel('M-type') plt.ylabel('Counts') plt.title(title) plt.axis([-0.5, 54.5, -0.01, y]) plt.grid(True) ax.set_xticks(np.linspace(0,54,55)) ax.set_xticklabels(m_values, rotation='vertical') for i in range(0,5): patches[i].set_facecolor('r') for i in range(5,15): patches[i].set_facecolor('y') for i in range(15,27): patches[i].set_facecolor('g') for i in range(27,40): patches[i].set_facecolor('b') for i in range(40,55): patches[i].set_facecolor('purple') plt.show() hist_morpho(label_morphology,a=15,b=7,y=6000) """ Explanation: Now we'll plot an histogram with the distribution of the different 55 m-types of the 31346 neurons. 
End of explanation """ ##Distribution of neurons in the 6 lyers fig = plt.figure(figsize=(9,5)) ax=fig.add_subplot(1,1,1) n, bins, patches = plt.hist(label_layer,bins=[0.5,1.5,2,3,3.5,4.5,5.5,6.5], facecolor='g',edgecolor='black', linewidth=1.2) plt.xlabel('Layer') plt.ylabel('Counts') plt.title('Histogram of Layer') plt.axis([0.5, 6.5, 0, 15000]) patches[0].set_facecolor('r') patches[2].set_facecolor('y') patches[4].set_facecolor('g') patches[5].set_facecolor('b') patches[6].set_facecolor('purple') plt.grid(True) plt.show() print('There are {:.0f} in layer 1'.format(n[0])) print('There are {:.0f} in layer 2-3'.format(n[1])) print('There are {:.0f} in layer 4'.format(n[3])) print('There are {:.0f} in layer 5'.format(n[4])) print('There are {:.0f} in layer 6'.format(n[5])) print('{:.2%} of all the neurons are located in the 6th layer.'.format(n[6]/N)) """ Explanation: The following is an histogram showing the distribution of the neurons in the 6 layers into which the microcolum is divided. Note: M-type in L2 and L3 are not separated. For example, L23_MC is the group of Martinotti cells extending both in layer 2 and 3. Our convention will be to take layer 2 as a layer-label for these neurons. End of explanation """ print('The number of afferent connections of the first neuron in -L4_PC- neurons is {}'.format(list(h5['populations'][ 'L4_PC' ]['nCellAff' ][0])[0])) print('The number of afferent synapses of the first neuron in -L4_PC- neurons is {}'.format(list(h5['populations'][ 'L4_PC' ][ 'nSynAff' ][0])[0])) """ Explanation: In the provided file one finds for every neuron the number of neurons which are sending and receiving signals from or to it. The number of afferent (in-degree) and efferent (out-degree) connected neurons are respectivly contained in 'nCellAff' and 'nCellEff'. Two connected neurons can form multiple connections (synapses). 
Therefore, for a given neuron the values 'nSynAff' and 'nSynEff' count with multiplicity the total number of afferent and efferent connections. End of explanation """ affconn_mtype=[] effconn_mtype=[] affsyn_mtype=[] effsyn_mtype=[] dist_affconn_mtype=[] dist_effconn_mtype=[] dist_affsyn_mtype=[] dist_effsyn_mtype=[] for x in m_values: dist_affconn_mtype.extend(h5['populations'][ x ]['nCellAff']) dist_effconn_mtype.extend(h5['populations'][ x ]['nCellEff']) dist_affsyn_mtype.extend((h5['populations'][ x ]['nSynAff'])) dist_effsyn_mtype.extend((h5['populations'][ x ]['nSynEff'])) affconn_mtype.append(np.sum(np.array(h5['populations'][ x ]['nCellAff']))) effconn_mtype.append(np.sum(np.array(h5['populations'][ x ]['nCellEff']))) affsyn_mtype.append(np.sum(np.array(h5['populations'][ x ]['nSynAff']))) effsyn_mtype.append(np.sum(np.array(h5['populations'][ x ]['nSynEff']))) tot_effconn=np.sum(effconn_mtype) tot_affconn=np.sum(affconn_mtype) tot_affsyn=np.sum(affsyn_mtype) tot_effsyn=np.sum(effsyn_mtype) print('The total number of afferent connections (counting also connection formed outside our volume) is {}'.format(tot_affconn)) print('The total number of efferent connections (counting also connection formed outside our volume) is {}'.format(tot_effconn)) print('The total number of afferent synapses (counting also synapses formed outside our volume) is {}'.format(tot_affsyn)) print('The total number of efferent synapses (counting also synapses formed outside our volume) is {}'.format(tot_effsyn)) w=(tot_affsyn/tot_affconn + tot_effsyn/tot_effconn)/2 print('On average one connection between two neuron is formed by ~{} synapses'.format(w)) """ Explanation: Note: The number of afferent and efferent connections as well as the number of afferent and efferent synapses data includes some connections from or to neurons outside the microcolum we are considering. The aim of this is to reduce the so called 'edge effect' or 'border effect'. 
That is the effect that neurons at the boundary of the given microcolumn have fewer connections. The reason is that a modeled microcircuit is surrounded by 6 other microcircuits, and connections from or to these microcircuits are also taken into account in this count. For example, due to the border effect the number of total afferent connections, as well as the number of total synapses, is not equal respectively to the number of total efferent connections and synapses (in theory, for a given network, one should have total in-degree = total out-degree). By total in- (respectively out-) degree we mean the sum of all in- (out-) degrees over the nodes.
End of explanation
"""
conn = np.array(h5['connectivity']['L4_MC']['L4_PC']['cMat'])
print('The connectivity matrix from L4_MC to L4_PC has size {}'.format(conn.shape))

conn2 = np.array(h5['connectivity']['L4_PC']['L4_MC']['cMat'])
print('The connectivity matrix from L4_PC to L4_MC has size {}'.format(conn2.shape))
"""
Explanation: If one wants to look only at the inner connectivity of the selected microcolumn, the 'connectivity' key contains the binary connection matrix. This matrix is split into submatrices labeled first by the morphological type of the source of the signal (the pre-synaptic neurons), then by the morphological type of the sink of the signal (the post-synaptic neurons).
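Assembling such per-(source, target) sub-matrices into one full adjacency matrix is block assembly, which can be sketched with np.block on toy blocks (the group names and block sizes below are invented):

```python
import numpy as np

# Toy stand-ins for cMat sub-matrices: rows are sources, columns are targets.
blocks = {("A", "A"): np.zeros((2, 2)), ("A", "B"): np.ones((2, 3)),
          ("B", "A"): np.ones((3, 2)),  ("B", "B"): np.zeros((3, 3))}
groups = ["A", "B"]

# One block row per source group, one block column per target group.
adj = np.block([[blocks[(s, t)] for t in groups] for s in groups])

assert adj.shape == (5, 5)                 # (2+3) x (2+3)
assert adj[0, 2] == 1 and adj[0, 1] == 0   # A->B block vs A->A block
```

np.block does in one call what the notebook's nested np.concatenate loop builds incrementally.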
For an example in the following we are looking at the connectivity matrix from L4_MC to L4_PC neurons: End of explanation """ #rows are source of the signal, columns are target for x in m_values: for y in m_values: if y == m_values[0]: conn_tot_row=np.array(h5['connectivity'][x][y]['cMat']) else: conn_temp_row=np.array(h5['connectivity'][x][y]['cMat']) conn_tot_row = np.concatenate((conn_tot_row, conn_temp_row), axis=1) if x == m_values[0]: conn_tot = conn_tot_row else: conn_tot = np.concatenate((conn_tot, conn_tot_row), axis=0) print('The shape should be #neurons x #neurons = {} x {}, and we have: {}'.format(N,N,conn_tot.shape) ) #Plot the adjacency matrix plt.figure(figsize=(10,4)) plt.subplot(1, 1, 1) M = sps.csr_matrix(conn_tot) plt.spy(M, markersize = 0.1, color = 'black') plt.suptitle('Directed Adjacency matrx of the microcircuit', y=1.05,fontsize=10) plt.show() G=nx.MultiDiGraph() G=nx.from_numpy_matrix(conn_tot) print('Our network has {} nodes and a total of {} connections (counting bidirectional edges as one edge).'.format(len(G.nodes()),len(G.edges()))) """ Explanation: 1.2)Network Building Concatenate all the cMat matrices Note : the network we'll build using 'cMat' is directed as h5['connectivity'][x][y]['cMat'] is the adjacency matrix detacting if there are outgoing connections from a presynaptic neuron (source of signal) positioned along the first axis of the matrix to a postsynaptic neuron (sink of the signal) positioned along the second axis of the matrix. 
Therefore, we'll build a directed adjacency matrix with presynaptic neurons along the raws and postsynaptic neurons along the columns: End of explanation """ #symmetrize conn_tot #conn_tot_symm=np.maximum( conn_tot, conn_tot.transpose()) """ Explanation: If one wants to transform the directed network into an undirected one, it's possible to symmetrize the matrix in the following way: End of explanation """ out_degree=np.sum(conn_tot, axis=1) in_degree=np.sum(conn_tot, axis=0) tot_degree= out_degree + in_degree tot_connections=np.sum(tot_degree/2) av_indegree= np.mean(in_degree) av_outdegree=np.mean(out_degree) av_totdegree=np.mean(tot_degree) print('The number of connections is {:.0f}'.format(tot_connections)) print('The average out-degree = average in-degree is {}'.format(av_outdegree)) print('The average degree is {}'.format(av_totdegree)) """ Explanation: 1.3)Network Analysis and Statistics In this section we'll show some classical network analysis and statistics on the microcircuit. If not specified, for our analysis, we are considering the directed network. 1.3.1 Some statistics - What is the average in-degree, out-degree and tot-degree? End of explanation """ p=np.sum(tot_degree)/(N*(N-1)) print('The probability of connection inside our microcircuit is {}'.format(p)) one_way=conn_tot-conn_tot.T tot_one_way=np.count_nonzero(one_way)/2 print('Number of one-way edges is: {:.0f} Number of two-ways edges (e.g presence of connection (0,1) and connection (1,0)): {:.0f}'.format(tot_one_way, (tot_connections-tot_one_way))) """ Explanation: - What is the probability of connection inside our micorcicuit? End of explanation """ #symmetrize conn_tot #conn_tot_symm=np.maximum( conn_tot, conn_tot.transpose()) #diam=nx.diameter(G) """ Explanation: - What is the diameter of our network? Due to excessive time of computation, we were not able to compute the diameter of the directed network. Instead, we were able to compute the diameter of the undirected network. 
The execution time of the algorithm was ~24h.
End of explanation
"""
##Execution time ~24h
#APL = nx.average_shortest_path_length(G)
"""
Explanation: Result: The diameter of our network is 5.
- What is the average path length of our network?
The APL is defined as the average number of steps along the shortest paths for all possible pairs of network nodes. It is a measure of the efficiency of information or mass transport on a network. Source. We computed the APL of the directed network.
End of explanation
"""
##Execution time ~12h
#giant_G = max(nx.connected_component_subgraphs(G), key=len)
#print('Giant component of the G network : {}'.format(len(giant_G.nodes())))
#print('Therefore the network is fully connected')
"""
Explanation: Result: The average path length for the directed network is equal to ~2.475 and the average path length for the undirected network is equal to ~2.33. An "average" path length is not very easy to interpret, as it depends on the choice of statistical models used, in addition to being at the macroscopic level. Source
- What is the size of the giant component in the directed network?
We computed the following code for the undirected graph because nx.connected_component_subgraphs(G) is not implemented for directed types.
End of explanation
"""
#print('Average clustering coefficient of the G network : {}'.format(nx.average_clustering(G)))
"""
Explanation: Result: The giant component in the undirected network is of size 31346.
- What is the average clustering coefficient of our network?
We computed the following code for the undirected graph because nx.average_clustering(G) is not implemented for directed types.
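The giant-component computation above can be sketched without networkx using a plain breadth-first search over an undirected edge list; the toy graph below (two components, five nodes) is made up.

```python
from collections import deque

# Toy undirected edge list: component {0,1,2} and component {3,4}.
edges = [(0, 1), (1, 2), (3, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def component(start):
    """Return the set of nodes reachable from `start` via BFS."""
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj.get(queue.popleft(), ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

sizes = sorted(len(component(n)) for n in (0, 3))
assert sizes == [2, 3]   # the giant component holds 3 of the 5 nodes
```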
End of explanation """ ##out degree distribution with border-effect plt.figure(figsize=(15,4)) plt.subplot(1, 2, 1) nou, binsou, patchesou = plt.hist(out_degree,bins=50 , facecolor='g',edgecolor='black', linewidth=1.2) plt.xlabel('Out-Degree') plt.ylabel('Counts') plt.title('Out-Degree Distribution') plt.grid(True) ##out degree distribution without border-effect plt.subplot(1,2,2) out_degree_noborder=[] for i in range(len(dist_effconn_mtype)): out_degree_noborder.append(dist_effconn_mtype[i][0]) noubor, binsoubor, patchesoubor = plt.hist(out_degree_noborder,bins=50 , facecolor='g',edgecolor='black', linewidth=1.2) plt.xlabel('Out-Degree') plt.ylabel('Counts') plt.title(' Out-Degree Distribution considering neurons outside our volume') plt.grid(True) """ Explanation: Result: The average clustering coefficient of the undirected network is 0.056742203420848264 1.3.2 Degree Distributions Now we'll analyse the the in- and out-degree distributions and how they differ from the distributions given by 'nCellAff' and 'nCellEff' which contains also the connections outside our model. End of explanation """ M = binsou[round(len(nou)*0.8)] out_hubs=[i for i in range(N) if out_degree[i]>M] oh=[label_morphology[i] for i in out_hubs] hist_morpho(oh,a=10,b=5,y=70,title='Out-Hubs') """ Explanation: - Which are the out-hubs neuron considering only the inner connections? 
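Hub selection as used in these cells (thresholding at a high percentile of the degree distribution) can be shown in isolation; the degree values below are made up.

```python
import numpy as np

# Made-up node degrees; two nodes are clear outliers.
degrees = np.array([3, 5, 2, 40, 4, 38, 6, 1, 50, 5])

# Call a node a hub if its degree exceeds the 80th percentile.
threshold = np.percentile(degrees, 80)     # 38.4 with linear interpolation
hubs = np.where(degrees > threshold)[0]

assert sorted(hubs.tolist()) == [3, 8]     # the nodes with degree 40 and 50
```

The notebook's `binsou[round(len(nou)*0.8)]` picks an analogous cutoff from the histogram bin edges rather than from the raw degrees.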
End of explanation """ ##in degree distribution with border-effect plt.figure(figsize=(15,4)) plt.subplot(1, 2, 1) nin, binsin, patchesin = plt.hist(in_degree,bins=50 , facecolor='b',edgecolor='black', linewidth=1.2) plt.xlabel('In-Degree') plt.ylabel('Counts') plt.title('In-Degree Distribution') plt.grid(True) ##in degree distribution with no border-effect in_degree_noborder=[] for i in range(len(dist_affconn_mtype)): in_degree_noborder.append(dist_affconn_mtype[i][0]) plt.subplot(1, 2, 2) ninbor, binsinbor, patchesinbor = plt.hist(in_degree_noborder,bins=50 , facecolor='b',edgecolor='black', linewidth=1.2) plt.xlabel('In-Degree') plt.ylabel('Counts') plt.title('In-Degree Distribution considering neurons outside our volume') plt.grid(True) """ Explanation: Results: Most of the hubs with a very high out degree belong to the morphological type of Pyramidal Cell in layer four. End of explanation """ M=binsin[round(len(nou)*0.8)] in_hubs=[i for i in range(N) if in_degree[i]>M] ih=[label_morphology[i] for i in in_hubs] hist_morpho(ih,a=10,b=4,y=120,title='In-Hubs') """ Explanation: - Which are the in-hubs neuron considering only the inner connections? 
End of explanation """ ##in degree distribution with border-effect plt.figure(figsize=(15,4)) plt.subplot(1, 2, 1) ntot, binstot, patchestot = plt.hist(tot_degree,bins=50 , facecolor='purple',edgecolor='black', linewidth=1.2) plt.xlabel('Tota-Degree') plt.ylabel('Counts') plt.title('Total-Degree Distribution') plt.grid(True) #Tot_degree with no border effects tot_degree_noborder=in_degree_noborder + out_degree_noborder plt.subplot(1, 2, 2) ntotbor, binstotbor, patchestotbor = plt.hist(tot_degree_noborder,bins=50 , facecolor='purple',edgecolor='black', linewidth=1.2) plt.xlabel('Total-Degree') plt.ylabel('Counts') plt.title('Total Degree Distribution considering neurons outside our volume') plt.grid(True) """ Explanation: Results: Most of the hubs with a very high in-degree belong to the morphological type Thick-tufted Pyramidal Cell in layer five. Now we will consider the distribution of the nodes degree without taking in account the directionaity: degree+in-deggree+out-degree. End of explanation """ M=binstot[round(len(nou)*0.8)] tot_hubs=[i for i in range(N) if tot_degree[i]>M] th=[label_morphology[i] for i in tot_hubs] hist_morpho(th,a=10,b=4,y=100,title='Total-Hubs') """ Explanation: - Which are the total-hubs neuron considering only the inner connections? End of explanation """ #creation of the E-R graph er=nx.erdos_renyi_graph(N, p) er.size() """ Explanation: Results: Most of the hubs with a very high total-degree belong to the morphological type Thick-tufted Pyramidal Cell in layer five. 1.3.3 Approximation by random networks models Is our network well approximated by a random graph, that is an Erdős–Rényi (E-R) graph with the same connection probability? 
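For an Erdős–Rényi comparison graph G(n, p), the expected number of undirected edges is p * n * (n - 1) / 2, which is why the generated graph's size comes out close to the real network's edge count. A quick seeded Monte Carlo check at a small scale (the notebook's N = 31346 would be slow here):

```python
import numpy as np

# Sample a small G(n, p) by thresholding a random matrix; keep only the strict
# upper triangle so each undirected edge is counted once and self-loops drop out.
rng = np.random.default_rng(0)
n, p = 200, 0.05
upper = np.triu(rng.random((n, n)) < p, k=1)
n_edges = int(upper.sum())

expected = p * n * (n - 1) / 2               # = 995 for these toy values
assert abs(n_edges - expected) < 4 * np.sqrt(expected)   # within ~4 sigma
```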
End of explanation """ er_degree=er.degree() er_degree=[er_degree[i] for i in range(0,N)] fig = plt.figure(figsize=(9,5)) ax=fig.add_subplot(1,1,1) ntot, binstot, patchestot = plt.hist(tot_degree,bins=50 , facecolor='purple',edgecolor='black', linewidth=1.2) ner, binser, patcheser = plt.hist(er_degree,bins=50 , facecolor='g',edgecolor='black', linewidth=0.2) plt.xlabel('ER-Degree') plt.ylabel('Counts') plt.title('Degree Distribution of the ER graph approximating our network') plt.grid(True) plt.legend(['bluebrain network','Erdos-Renyi graph']) plt.show() """ Explanation: Comment: The number of edges of the random graph is what we expect it to be and similar to the one of our network. End of explanation """ #er_dima=nx.diameter(er) """ Explanation: Comment: The microcircuit netwok is not well approximated by the unifrom distribution of an E-R graph with the same connections probability. End of explanation """ #er_cc=average_clustering(er) """ Explanation: Result: The diameter of the Erdos Reny graph is 3. End of explanation """ #APL = nx.average_shortest_path_length(er) """ Explanation: Result: The average clustering coefficient of the Erdős–Rényi graph is 1.588056052869360468e-02 The APL of an Erdős–Rényi graph is given by $ \frac{ln(N)}{ln(K)}\approx 1.68$, in our case. We also computed the exact value: End of explanation """ #creation of the Barabasi-Albert graph m=int(av_totdegree/2) ba= nx.barabasi_albert_graph(N, m) ba.size() """ Explanation: Result: The APL is 1.984485. Is our network well approximated by a random scale-free network with preferential attachment, namely a Barabasi-Alber graph network? 
End of explanation """ ba_degree=ba.degree() ba_degree=[ba_degree[i] for i in range(0,N)] fig = plt.figure(figsize=(9,5)) ax=fig.add_subplot(1,1,1) ntot, binstot, patchestot = plt.hist(tot_degree,bins=50 , facecolor='purple',edgecolor='black', linewidth=1.2) nba, binsba, patchesba = plt.hist(ba_degree,bins=50 , facecolor='g',edgecolor='black', linewidth=0.2) plt.xlabel('ER-Degree') plt.ylabel('Counts') plt.title('Degree Distribution of the B-A graph approximating our network') plt.grid(True) plt.legend(['bluebrain network','Barabasi-Albert graph']) plt.show() """ Explanation: Comment: The number of edges of the barabasi-albert graph is not exactly the one of our network but still quite similar. End of explanation """ #~20h computation #ba_diam=nx.diameter(ba) """ Explanation: Comment: The microcircuit netwok is not well approximated by power low distribution of an B-A graph with the same average degree. End of explanation """ #~20h computation #ba=nx.average_clustering(ba) """ Explanation: Result: The diameter of the Barabasi Albert graph is 3. End of explanation """ #APL = nx.average_shortest_path_length(ba) """ Explanation: Result: The average clustering coefficient of the Barabasi Albert Network is 4.775815955795591899e-02 End of explanation """
bioe-ml-w18/bioe-ml-winter2018
homeworks/Week1-Introduction.ipynb
mit
%matplotlib inline import numpy as np from sklearn.datasets import load_boston import matplotlib.pyplot as plt """ Explanation: Week 1 - Introduction Due January 18 at 8 PM A quick introduction to git and python. Please run through this tutorial on how git functions. Further reading on git exists here. For an introduction to python programming, please follow the tutorials here and here. We will cover certain aspects of python programming and specific packages throughout the course. However, at the end of these tutorials, you should be comfortable with the exercises below. End of explanation """ def fib(): arr = np.zeros(100, dtype=np.float64) arr[1] = 1 for ii in range(2, 100): arr[ii] = arr[ii-1] + arr[ii-2] return arr seq = fib() assert(seq[99] == seq[98] + seq[97]) assert(seq[2] == 1) """ Explanation: (1) Write a function that stores the first 100 numbers of the Fibonacci sequence in a numpy array. Then write two assertions as tests of your function. End of explanation """ boston = load_boston() y = boston.target x = boston.data[:, 0] ylabel = "Home Prices" xlabel = "Crime" plt.scatter(x, y); plt.xlabel(xlabel); plt.ylabel(ylabel); """ Explanation: (2) Plot the selected data from this Boston home price dataset, labeling the X and Y axes. End of explanation """ def func(A, B): return A + (2 * B) assert(func(0, 20) == 40) assert(func(1, 10) == 21) assert(func(2, 5) == 12) assert(func(3, 2) == 7) assert(func(4, 1) == 6) """ Explanation: (3) Write a function that satisfies the assertions below. End of explanation """
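The iterative construction in exercise (1) can be cross-checked against Binet's closed form, F(n) = (φⁿ − ψⁿ)/√5 with φ = (1 + √5)/2 and ψ = (1 − √5)/2 — a sketch using the same convention as above (F(0) = 0, F(1) = 1):

```python
import numpy as np

def fib(n):
    # Iterative Fibonacci, same convention as the exercise: F(0)=0, F(1)=1.
    arr = np.zeros(n, dtype=np.float64)
    arr[1] = 1
    for ii in range(2, n):
        arr[ii] = arr[ii - 1] + arr[ii - 2]
    return arr

def binet(n):
    # Binet's closed form; exact mathematically, float-limited numerically.
    sqrt5 = np.sqrt(5.0)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    k = np.arange(n)
    return (phi ** k - psi ** k) / sqrt5

seq = fib(30)
closed = binet(30)
print(np.max(np.abs(seq - closed)))
```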
bmorris3/boyajian_star_arces
kic8462852.ipynb
mit
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import astropy.units as u from astropy.time import Time from toolkit import EchelleSpectrum """ Explanation: KIC 8462852 (Boyajian's Star) spectroscopic follow up Brett Morris and Jim Davenport Apache Point Observatory ARC 3.5 m telescope, ARCES echelle spectrograph This notebook contains code for retrieving my APO ARC 3.5 m/ARCES (R~31,500) spectrum of Boyajian's Star (KIC 8462852) at 2017-05-20 10:34 UTC. It downloads the wavelength-calibrated spectra from my webpage and caches them locally, then normalizes the continuum using the spectroscopic standard BD +28 4211, and shifts the wavelengths to the star's rest frame. The telluric standard star HR 7916 (spectral type F2Vn) is also included. <img src="http://staff.washington.edu/bmmorris/images/light_curve.png" alt="Light curve" style="width: 500px;"/> End of explanation """ kic8462852_1_url = 'http://staff.washington.edu/bmmorris/docs/KIC8462852.0001.wfrmcpc.fits' kic8462852_2_url = 'http://staff.washington.edu/bmmorris/docs/KIC8462852.0003.wfrmcpc.fits' kic8462852_3_url = 'http://staff.washington.edu/bmmorris/docs/KIC8462852.0065.wfrmcpc.fits' spectroscopic_standard_url = 'http://staff.washington.edu/bmmorris/docs/BD28_4211.0034.wfrmcpc.fits' telluric_standard_url = 'http://staff.washington.edu/bmmorris/docs/HR7916.0002.wfrmcpc.fits' target_spectrum_1 = EchelleSpectrum.from_fits_url(kic8462852_1_url) target_spectrum_2 = EchelleSpectrum.from_fits_url(kic8462852_2_url) target_spectrum_3 = EchelleSpectrum.from_fits_url(kic8462852_3_url) spectroscopic_standard = EchelleSpectrum.from_fits_url(spectroscopic_standard_url) telluric_standard = EchelleSpectrum.from_fits_url(telluric_standard_url) """ Explanation: Download and cache spectra: End of explanation """ only_orders = np.arange(len(target_spectrum_1.spectrum_list)) target_spectrum_1.continuum_normalize(spectroscopic_standard, polynomial_order=10, only_orders=only_orders, plot_masking=False) 
only_orders = np.arange(len(target_spectrum_2.spectrum_list)) target_spectrum_2.continuum_normalize(spectroscopic_standard, polynomial_order=10, only_orders=only_orders, plot_masking=False) only_orders = np.arange(len(target_spectrum_3.spectrum_list)) target_spectrum_3.continuum_normalize(spectroscopic_standard, polynomial_order=10, only_orders=only_orders, plot_masking=False) telluric_standard.continuum_normalize(spectroscopic_standard, polynomial_order=10, only_orders=only_orders, plot_masking=False) """ Explanation: Fit a polynomial of order polynomial_order to each spectral order of the spectrum of spectroscopic_standard, then normalize each spectral order by that polynomial to remove the blaze function. End of explanation """ rv_shifts = u.Quantity([target_spectrum_1.rv_wavelength_shift(order) for order in only_orders]) median_rv_shift = np.median(rv_shifts) target_spectrum_1.offset_wavelength_solution(median_rv_shift) rv_shifts = u.Quantity([target_spectrum_2.rv_wavelength_shift(order) for order in only_orders]) median_rv_shift = np.median(rv_shifts) target_spectrum_2.offset_wavelength_solution(median_rv_shift) rv_shifts = u.Quantity([target_spectrum_3.rv_wavelength_shift(order) for order in only_orders]) median_rv_shift = np.median(rv_shifts) target_spectrum_3.offset_wavelength_solution(median_rv_shift) rv_shifts = u.Quantity([telluric_standard.rv_wavelength_shift(order) for order in only_orders]) median_rv_shift = np.median(rv_shifts) telluric_standard.offset_wavelength_solution(median_rv_shift) """ Explanation: Calculate the wavelength offset necessary to shift the spectra into the star's rest frame, then shift the wavelengths accordingly. End of explanation """ from toolkit import get_phoenix_model_spectrum phoenix_6800_40 = get_phoenix_model_spectrum(6800, 4.0) """ Explanation: Download a PHOENIX model atmosphere (Husser 2013) for comparison with Boyajian's star, with $T_{eff}=6800$ K and $\log g = 4.0$. 
End of explanation """ def get_nearest_order(feature_wavelength): return np.argmin([np.abs(feature_wavelength - target_spectrum_1.get_order(i).wavelength.mean().value) for i in range(len(target_spectrum_1.spectrum_list))]) def plot_spectral_feature(spectrum, center_wavelength, width_angstroms, phoenix_model=phoenix_6800_40, plot_model=True, label=None, title=None, ax=None, legend=True, spectrum_kwargs=None, model_kwargs=None): """ Plot the spectrum, centered on wavelength ``center_wavelength``, with width ``width_angstroms``. """ if ax is None: ax = plt.gca() if title is None: title = 'APO/ARCES (Brett Morris)' if spectrum_kwargs is None: spectrum_kwargs = dict(lw=1, color='k') if model_kwargs is None: model_kwargs = dict(label='PHOENIX model', color='r', alpha=0.5) feature_order = spectrum.get_order(get_nearest_order(center_wavelength)) normed_flux = feature_order.masked_flux / np.median(feature_order.masked_flux) ax.plot(feature_order.masked_wavelength, normed_flux, label=spectrum.name, **spectrum_kwargs) model_already_plotted = any([line.get_label().startswith('PHOENIX') for line in ax.get_lines()]) if plot_model and not model_already_plotted: model_wavelength_range = ((phoenix_6800_40.wavelength.value < center_wavelength + width_angstroms/2) & (phoenix_6800_40.wavelength.value > center_wavelength - width_angstroms/2)) normed_model_flux = phoenix_6800_40.flux / np.median(phoenix_6800_40.flux) normed_model_flux *= (np.median(normed_flux) / normed_model_flux[model_wavelength_range].max()) ax.plot(phoenix_6800_40.wavelength, normed_model_flux, **model_kwargs) ax.set_title(title) ax.set_xlabel('Wavelength [Angstrom]') ax.set_ylabel('Flux') ax.set_xlim([center_wavelength - width_angstroms/2, center_wavelength + width_angstroms/2]) ax.set_ylim([0, 1.1*normed_flux.max()]) if legend: ax.legend(fontsize=10, bbox_to_anchor=(0, 1, 1.4, 0)) ax.get_xaxis().get_major_formatter().set_useOffset(False) return ax obs_time = Time(target_spectrum_1.header['DATE-OBS'], 
format='isot') print("Date of observation (UTC): ", obs_time.datetime) target_spectrum_1.name += " " + str(obs_time.datetime.date()) obs_time = Time(target_spectrum_2.header['DATE-OBS'], format='isot') print("Date of observation (UTC): ", obs_time.datetime) target_spectrum_2.name += " " + str(obs_time.datetime.date()) obs_time = Time(target_spectrum_3.header['DATE-OBS'], format='isot') print("Date of observation (UTC): ", obs_time.datetime) target_spectrum_3.name += " " + str(obs_time.datetime.date()) """ Explanation: Define some convenience functions for plotting spectral features: End of explanation """ plot_spectral_feature(target_spectrum_1, 5892, 10, spectrum_kwargs=dict(lw=1.5, color='#36C400')) plot_spectral_feature(target_spectrum_2, 5892, 10, spectrum_kwargs=dict(lw=1.5, color='#2D0CE8')) plot_spectral_feature(target_spectrum_3, 5892, 10, spectrum_kwargs=dict(lw=1.5, color='#EA00FF')) plot_spectral_feature(telluric_standard, 5892, 10, spectrum_kwargs=dict(lw=1.5, color='#0B70E8')) """ Explanation: Plot the NaD absorption features. End of explanation """ fig, ax = plt.subplots(1, 2, figsize=(12, 5)) # H-alpha plot_spectral_feature(target_spectrum_1, 6562.3, 20, ax=ax[0], legend=False) plot_spectral_feature(target_spectrum_2, 6562.3, 20, ax=ax[0], legend=False) plot_spectral_feature(target_spectrum_3, 6562.3, 20, ax=ax[0], legend=False) plot_spectral_feature(telluric_standard, 6562.3, 20, ax=ax[0], spectrum_kwargs=dict(lw=1, color='b'), legend=False, title=r'H$\alpha$') # H-beta plot_spectral_feature(target_spectrum_1, 4861, 20, ax=ax[1], legend=False) plot_spectral_feature(target_spectrum_2, 4861, 20, ax=ax[1], legend=False) plot_spectral_feature(target_spectrum_3, 4861, 20, ax=ax[1], legend=False) plot_spectral_feature(telluric_standard, 4861, 20, ax=ax[1], spectrum_kwargs=dict(lw=1, color='b'), legend=True, title=r'H$\beta$') """ Explanation: Plot the spectrum at H$\alpha$ and H$\beta$. 
End of explanation """ from scipy.optimize import fmin_powell def gaussian(x, amp, mean, sigma): return amp * np.exp(-0.5 * (mean - x)**2 / sigma**2) def four_gaussians(wavelength, amp1, amp2, amp3, amp4, mean1, mean2, mean3, mean4, sig1, sig2, sig3, sig4): return (1 + gaussian(wavelength, amp1, mean1, sig1) + gaussian(wavelength, amp2, mean2, sig2) + gaussian(wavelength, amp3, mean3, sig3) + gaussian(wavelength, amp4, mean4, sig4)) def chi2(params, wavelength, flux): return np.sum((four_gaussians(wavelength, *params) - flux)**2) # Get the spectral order with the NaD feature na_feature_wavelength = 5892 na_order_target = target_spectrum_1.get_order(get_nearest_order(na_feature_wavelength)) na_order_telluric = telluric_standard.get_order(get_nearest_order(na_feature_wavelength)) # Normalize the spectrum by a low order polynomial lam = na_order_target.masked_wavelength.value flux = na_order_target.masked_flux.value / na_order_telluric.masked_flux.value continuum_params = np.polyfit(lam-lam.mean(), flux, 2) flux /= np.polyval(continuum_params, lam-lam.mean()) # Initial gaussian amplitudes, means, and standard deviations init_params = [-0.8, -0.5, -0.5, -0.4, 5889.5, 5889.2, 5895.5, 5895.2, 0.1, 0.1, 0.1, 0.1] # Minimize the chi2 best_params = fmin_powell(chi2, init_params, args=(lam, flux)) # Plot the results fig, ax = plt.subplots(1, 2, figsize=(12, 5)) ax[0].plot(lam, flux, 'k-', lw=2, label='ARCES') ax[0].set_xlim([5888.5, 5890.5]) ax[0].plot(lam, four_gaussians(lam, *best_params), 'r', label='Simple fit') ax[0].set_xlabel('Wavelength') ax[0].set_ylabel('Flux') ax[1].plot(lam, flux, 'k-', lw=2, label='ARCES') ax[1].set_xlim([5894, 5896.5]) ax[1].plot(lam, four_gaussians(lam, *best_params), 'r', label='Simple fit') ax[1].set_xlabel('Wavelength') ax[1].set_ylabel('Flux') ax[1].legend(loc='lower right') fig.suptitle('NaD absorption features') for axis in ax: axis.get_xaxis().get_major_formatter().set_useOffset(False) from scipy.integrate import quad a = 5888.5 b = 
5890 c = 5894.5 d = 5896.5 integral1, err = quad(lambda x: four_gaussians(x, *best_params), a, b) integral2, err = quad(lambda x: four_gaussians(x, *best_params), c, d) equivalent_width1 = ((b-a) - integral1) equivalent_width2 = ((d-c) - integral2) print("NaD equivalent width (at 5889.5 A):", equivalent_width1, "Angstrom") print("NaD equivalent width (at 5895.5 A):", equivalent_width2, "Angstrom") """ Explanation: Fit the NaD absorption features to measure the approximate equivalent width, for comparison with these results from Jason Curtis on Jason Wright's blog: equivalent width = 420 mÅ. End of explanation """
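The recipe above is the standard equivalent-width definition EW = ∫(1 − F/F_c) dλ. As a sanity check of that recipe, a single Gaussian absorption line of fractional depth d and width σ has the closed form EW = d·σ·√(2π); a synthetic example with arbitrary parameters:

```python
import numpy as np

# Synthetic normalized spectrum: continuum at 1, one Gaussian absorption line.
# depth, mean, sigma are arbitrary illustrative values.
depth, mean, sigma = 0.8, 5889.95, 0.1
lam = np.linspace(5888.5, 5891.5, 5000)
flux = 1 - depth * np.exp(-0.5 * (lam - mean) ** 2 / sigma ** 2)

# EW = integral of (1 - F/F_c) dlambda, here with F_c = 1 (trapezoid rule).
y = 1 - flux
ew_numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam))
ew_analytic = depth * sigma * np.sqrt(2 * np.pi)
print(ew_numeric, ew_analytic)
```

The same formula explains why deep but narrow lines can still have sub-angstrom equivalent widths.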
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/sandbox-3/ocnbgchem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'ocnbgchem') """ Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: MIROC Source ID: SANDBOX-3 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:41 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. 
Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) """ Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. 
Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe transport scheme if different than that of ocean model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) """ Explanation: 5.2.
River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from boundary conditions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from explicit sediment model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.2.
CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.16. 
SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) """ Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. 
Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. 
Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) """ Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) """ Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. 
Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) """ Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation """
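Every questionnaire cell above follows the same two-call contract: DOC.set_id() selects a CMIP6 specialization property, then one or more DOC.set_value() calls fill it (properties with cardinality 0.N or 1.N accept repeated calls). A minimal sketch of that contract (NotebookDoc is a hypothetical stand-in; the real DOC object is injected by pyesdoc and additionally validates each value against the CMIP6 controlled vocabulary):

```python
# Minimal mock of the DOC object used in the ES-DOC cells above.
# "NotebookDoc" is a hypothetical stand-in: the real object is injected
# by pyesdoc and validates values against the CMIP6 controlled vocabulary.
class NotebookDoc:
    def __init__(self):
        self._properties = {}
        self._current_id = None

    def set_id(self, property_id):
        # Select the specialization property the next value(s) apply to.
        self._current_id = property_id
        self._properties.setdefault(property_id, [])

    def set_value(self, value):
        # Properties with cardinality 0.N / 1.N accept repeated calls.
        self._properties[self._current_id].append(value)


DOC = NotebookDoc()
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
DOC.set_value('OMIP protocol')
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
DOC.set_value('Nitrogen (N)')
DOC.set_value('Iron (Fe)')
print(DOC._properties)
```

Note that the enumeration strings must match the controlled vocabulary exactly, which is why the choice lists in the cells above are reproduced verbatim.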
SteveDiamond/cvxpy
examples/notebooks/WWW/fir_chebychev_design.ipynb
gpl-3.0
import numpy as np import cvxpy as cp #******************************************************************** # Problem specs. #******************************************************************** # Number of FIR coefficients (including the zeroth one). n = 20 # Rule-of-thumb frequency discretization (Cheney's Approx. Theory book). m = 15*n w = np.linspace(0,np.pi,m) #******************************************************************** # Construct the desired filter. #******************************************************************** # Fractional delay. D = 8.25 # Delay value. Hdes = np.exp(-1j*D*w) # Desired frequency response. # Gaussian filter with linear phase. (Uncomment lines below for this design.) #var = 0.05 #Hdes = 1/(np.sqrt(2*np.pi*var)) * np.exp(-np.square(w-np.pi/2)/(2*var)) #Hdes = np.multiply(Hdes, np.exp(-1j*n/2*w)) """ Explanation: Chebychev design of an FIR filter given a desired $H(\omega)$ A derivative work by Judson Wilson, 5/27/2014.<br> Adapted from the CVX example of the same name, by Almir Mutapcic, 2/2/2006. Topic References: "Filter design" lecture notes (EE364) by S. Boyd Introduction This program designs an FIR filter, given a desired frequency response $H_\mbox{des}(\omega)$. The design is judged by the maximum absolute error (Chebychev norm). This is a convex problem (after sampling it can be formulated as an SOCP), which may be written in the form: \begin{array}{ll} \mbox{minimize} & \max |H(\omega) - H_\mbox{des}(\omega)| \quad \mbox{ for } 0 \le \omega \le \pi, \end{array} where the variable $H$ is the frequency response function, corresponding to an impulse response $h$. Initialize problem data End of explanation """ # A is the matrix used to compute the frequency response # from a vector of filter coefficients: # A[w,:] = [1 exp(-j*w) exp(-j*2*w) ... 
exp(-j*n*w)] A = np.exp( -1j * np.kron(w.reshape(-1, 1), np.arange(n))) # Presently CVXPY does not do complex-valued math, so the # problem must be formatted into a real-valued representation. # Split Hdes into a real part, and an imaginary part. Hdes_r = np.real(Hdes) Hdes_i = np.imag(Hdes) # Split A into a real part, and an imaginary part. A_R = np.real(A) A_I = np.imag(A) # # Optimal Chebyshev filter formulation. # # h is the (real) FIR coefficient vector, which we are solving for. h = cp.Variable(shape=n) # The objective is: # minimize max(|A*h-Hdes|) # but modified into an equivalent form: # minimize max( real(A*h-Hdes)^2 + imag(A*h-Hdes)^2 ) # such that all computation is done in real quantities only. obj = cp.Minimize( cp.max( cp.square(A_R * h - Hdes_r) # Real part. + cp.square(A_I * h - Hdes_i) ) ) # Imaginary part. # Solve problem. prob = cp.Problem(obj) prob.solve() # Check if problem was successfully solved. print('Problem status: {}'.format(prob.status)) if prob.status != cp.OPTIMAL: raise Exception('CVXPY Error') print("final objective value: {}".format(obj.value)) """ Explanation: Solve the minimax (Chebychev) design problem End of explanation """ import matplotlib.pyplot as plt # Show plot inline in ipython. %matplotlib inline # Plot properties. plt.rc('text', usetex=True) plt.rc('font', family='serif') font = {'weight' : 'normal', 'size' : 16} plt.rc('font', **font) # Plot the FIR impulse response. plt.figure(figsize=(6, 6)) plt.stem(range(n), h.value) plt.xlabel('n') plt.ylabel('h(n)') plt.title('FIR filter impulse response') # Plot the frequency response. H = np.exp(-1j * np.kron(w.reshape(-1, 1), np.arange(n))).dot(h.value) plt.figure(figsize=(6, 6)) # Magnitude plt.plot(w, 20 * np.log10(np.abs(H)), label='optimized') plt.plot(w, 20 * np.log10(np.abs(Hdes)),'--', label='desired') plt.xlabel(r'$\omega$') plt.ylabel(r'$|H(\omega)|$ in dB') plt.title('FIR filter freq.
response magnitude') plt.xlim(0, np.pi) plt.ylim(-30, 10) plt.legend(loc='lower right') # Phase plt.figure(figsize=(6, 6)) plt.plot(w, np.angle(H)) plt.xlim(0, np.pi) plt.ylim(-np.pi, np.pi) plt.xlabel(r'$\omega$') plt.ylabel(r'$\angle H(\omega)$') plt.title('FIR filter freq. response angle') """ Explanation: Result plots End of explanation """
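The design above hinges on the discretized frequency-response matrix with rows [1, exp(-j*w), ..., exp(-j*(n-1)*w)]. A quick numpy-only sanity check of that construction (no solver required): a pure k-sample delay filter must evaluate to H(w) = exp(-j*k*w), with unit magnitude at every frequency.

```python
import numpy as np

n, m = 20, 300
w = np.linspace(0, np.pi, m)
# Same construction as in the notebook:
# A[w, :] = [1, exp(-j*w), exp(-j*2*w), ..., exp(-j*(n-1)*w)]
A = np.exp(-1j * np.kron(w.reshape(-1, 1), np.arange(n)))

# An ideal k-sample delay: h is a unit impulse at tap k.
k = 3
h = np.zeros(n)
h[k] = 1.0
H = A.dot(h)

# |H| should be 1 and H should equal exp(-j*k*w) at every frequency
# (both deviations print as ~0, up to floating-point rounding).
print(np.max(np.abs(np.abs(H) - 1.0)))
print(np.max(np.abs(H - np.exp(-1j * k * w))))
```

The same check explains the fractional delay D = 8.25 used for Hdes: it is exactly this response with a non-integer k, which no length-20 FIR filter can match perfectly, hence the minimax fit.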
claudiuskerth/PhDthesis
Data_analysis/SNP-indel-calling/dadi/DEDUPLICATED/deduplicated_spectra.ipynb
mit
# load dadi module import sys sys.path.insert(0, '/home/claudius/Downloads/dadi') import dadi % ll ! cat ERY.unfolded.sfs.dadi """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a data-toc-modified-id="Plot-the-spectra-1" href="#Plot-the-spectra"><span class="toc-item-num">1&nbsp;&nbsp;</span>Plot the spectra</a></div><div class="lev1 toc-item"><a data-toc-modified-id="After-new-coverage-filtering-2" href="#After-new-coverage-filtering"><span class="toc-item-num">2&nbsp;&nbsp;</span>After new coverage filtering</a></div><div class="lev1 toc-item"><a data-toc-modified-id="After-more-stringent-coverage-filtering-3" href="#After-more-stringent-coverage-filtering"><span class="toc-item-num">3&nbsp;&nbsp;</span>After more stringent coverage filtering</a></div><div class="lev1 toc-item"><a data-toc-modified-id="read-data-in-at-least-15-ind-4" href="#read-data-in-at-least-15-ind"><span class="toc-item-num">4&nbsp;&nbsp;</span>read data in at least 15 ind</a></div> # Plot the spectra End of explanation """ # import 1D spectrum of ery fs_ery = dadi.Spectrum.from_file('ERY.unfolded.sfs.dadi') fs_par = dadi.Spectrum.from_file('PAR.unfolded.sfs.dadi') import pylab %matplotlib inline pylab.rcParams['figure.figsize'] = [14, 12] pylab.plot(fs_par.fold(), 'go-', label='par') pylab.plot(fs_ery.fold(), 'rs-', label='ery') pylab.xlabel('minor allele frequency') pylab.ylabel('SNP count') pylab.legend() """ Explanation: This doesn't look good! 
End of explanation """ # load dadi module import sys sys.path.insert(0, '/home/claudius/Downloads/dadi') import dadi # import 1D spectrum of ery fs_ery = dadi.Spectrum.from_file('Ery.unfolded.sfs.dadi') fs_par = dadi.Spectrum.from_file('Par.unfolded.sfs.dadi') fs_ery.S() import pylab %matplotlib inline pylab.rcParams['figure.figsize'] = [12, 10] pylab.plot(fs_par.fold(), 'go-', label='par') pylab.plot(fs_ery.fold(), 'rs-', label='ery') pylab.xlabel('minor allele frequency') pylab.ylabel('SNP count') pylab.legend() """ Explanation: What is going on here? After new coverage filtering I have done new excess and minimum coverage filtering. See excess_coverage_filter.py and minimum_coverage_filter.py. This resulted in many more positions, contigs and reads retained, i. e. more info to determine the SFS. I have created SAF's with a minimum of 9 individuals with read data and used the accelerated version of the EM algorithm in realSFS. End of explanation """ % ll ../../ANGSD/DEDUPLICATED/afs/ery # import 1D spectrum of ery fs_ery = dadi.Spectrum.from_file('../../ANGSD/DEDUPLICATED/afs/ery/ery.unfolded.sfs.dadi') fs_par = dadi.Spectrum.from_file('../../ANGSD/DEDUPLICATED/afs/par/par.unfolded.sfs.dadi') fs_ery.S() pylab.plot(fs_par.fold(), 'go-', label='par') pylab.plot(fs_ery.fold(), 'rs-', label='ery') pylab.xlabel('minor allele frequency') pylab.ylabel('SNP count') pylab.legend() """ Explanation: Unfortunately, this is still completely unusable. Only a few sites are inferred variable. I think that the coverage is too low for ANGSD to be able to distinguish alleles from sequencing errors. The ML SFS is therefore probably very unstable. After more stringent coverage filtering I have applied a slightly more stringent minimum coverage filtering: instead of at least 1x in 15 ind., I have applied 3x in 10 ind. This retained 368,764 sites on 9,119 contigs. Other than that, I have applied exactly the same filtering as above.
Before HWE filtering there were 414,122 sites from originally 85,488,084 sites (0.48%). There were only 125,875 sites if I had filtered for 3x coverage in at least 15 individuals. End of explanation """ # import 1D spectrum of ery fs_ery = dadi.Spectrum.from_file('../../ANGSD/DEDUPLICATED/afs/ery/ery.unfolded.15.sfs.dadi') fs_par = dadi.Spectrum.from_file('../../ANGSD/DEDUPLICATED/afs/par/par.unfolded.15.sfs.dadi') fs_ery.S() pylab.plot(fs_par.fold(), 'go-', label='par') pylab.plot(fs_ery.fold(), 'rs-', label='ery') pylab.xlabel('minor allele frequency') pylab.ylabel('SNP count') pylab.legend() """ Explanation: Unfortunately, the Big Data set from standard RAD sequencing does not provide enough information for genotype likelihoods to estimate allele frequency spectra from. It may be that the coverage filtering just leaves too few sites that can provide enough signal to overcome the noise in the data. read data in at least 15 ind Using the sites that had at least 3x coverage in at least 10 individuals across the two populations, I have run the SAF calculation again with -minInd 15, that is requiring at least 15 (of 18) individuals with read data (i. e. at least 1x coverage). End of explanation """
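The .fold() calls above collapse each unfolded spectrum onto minor-allele frequency classes before plotting. The arithmetic behind the fold is simple and worth seeing once; the numpy-only sketch below reproduces it for a plain 1D array (illustrative only: dadi's Spectrum.fold() additionally masks the monomorphic end classes).

```python
import numpy as np

def fold_sfs(unfolded):
    """Fold an unfolded SFS of length 2N+1 onto minor-allele frequencies.

    Entry i of the folded spectrum pools derived-allele counts i and 2N-i;
    the central class (frequency N) is counted once. Illustrative sketch,
    not a replacement for dadi's Spectrum.fold(), which also tracks the
    masking of the fixed (0 and 2N) classes.
    """
    sfs = np.asarray(unfolded, dtype=float)
    n = sfs.size - 1              # 2N, the number of sampled chromosomes
    folded = sfs[: n // 2 + 1] + sfs[::-1][: n // 2 + 1]
    if n % 2 == 0:                # central class was added to itself
        folded[-1] /= 2
    return folded

# Toy unfolded SFS for 2N = 6 chromosomes (derived-allele counts 0..6).
unfolded = [100, 9, 5, 4, 2, 1, 50]
print(fold_sfs(unfolded))   # pools count i with count 6-i; centre kept once
```

Folding is what makes the two spectra comparable here, since without an ancestral-state assignment only the minor-allele classes are reliable.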
ledrui/Regression
week2/week-2-multiple-regression-assignment-1-blank.ipynb
mit
import graphlab """ Explanation: Regression Week 2: Multiple Regression (Interpretation) The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions. In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will: * Use SFrames to do some feature engineering * Use built-in graphlab functions to compute the regression weights (coefficients/parameters) * Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares * Look at coefficients and interpret their meanings * Evaluate multiple models via RSS Fire up graphlab create End of explanation """ sales = graphlab.SFrame('kc_house_data.gl/') """ Explanation: Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. End of explanation """ train_data,test_data = sales.random_split(.8,seed=0) """ Explanation: Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you). 
End of explanation """ example_features = ['sqft_living', 'bedrooms', 'bathrooms'] example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features, validation_set = None) """ Explanation: Learning a multiple regression model Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features: example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code: (Aside: We set validation_set = None to ensure that the results are always the same) End of explanation """ example_weight_summary = example_model.get("coefficients") print example_weight_summary """ Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows: End of explanation """ example_predictions = example_model.predict(train_data) print example_predictions[0] # should be 271789.505878 """ Explanation: Making Predictions In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions. Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above: End of explanation """ def get_residual_sum_of_squares(model, data, outcome): # First get the predictions prediction = model.predict(data) # Then compute the residuals/errors residual = prediction - outcome # Then square and add them up (vectorized -- no Python-level loop needed) RSS = (residual * residual).sum() return(RSS) """ Explanation: Compute RSS Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
End of explanation """ rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price']) print rss_example_train # should be 2.7376153833e+14 """ Explanation: Test your function by computing the RSS on TEST data for the example model: End of explanation """ from math import log """ Explanation: Create some new features Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms. You will use the logarithm function to create a new feature, so first you should import it from the math library. End of explanation """ train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2) test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2) # create the remaining 3 features in both TEST and TRAIN data # bed_bath train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms'] test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms'] # log_sqft_living train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x)) test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x)) #lat_plus_long train_data['lat_plus_long'] = train_data['lat']+train_data['long'] test_data['lat_plus_long'] = test_data['lat']+test_data['long'] """ Explanation: Next create the following 4 new features as columns in both TEST and TRAIN data: * bedrooms_squared = bedrooms*bedrooms * bed_bath_rooms = bedrooms*bathrooms * log_sqft_living = log(sqft_living) * lat_plus_long = lat + long As an example here's the first one: End of explanation """ test_data['lat_plus_long'].mean() """ Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16.
Consequently this feature will mostly affect houses with many bedrooms. bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large. Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values. Adding latitude to longitude is totally nonsensical, but we will do it anyway (you'll see why). Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits) End of explanation """ model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long'] model_2_features = model_1_features + ['bed_bath_rooms'] model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long'] """ Explanation: Learning Multiple Models Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more: * Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude * Model 2: add bedrooms*bathrooms * Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude End of explanation """ # Learn the three models: (don't forget to set validation_set = None) model_1 = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features, validation_set = None) ## model_2 = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features, validation_set = None) ## model_3 = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features, validation_set = None) # Examine/extract each model's coefficients: model_1_coefficients = model_1.get("coefficients") print " Model_1", model_1_coefficients ## model_2_coefficients = model_2.get("coefficients") print " Model_2", model_2_coefficients ## model_3_coefficients = model_3.get("coefficients") print " Model_3",
model_3_coefficients """ Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients: End of explanation """ # Compute the RSS on TRAINING data for each of the three models and record the values: ## Model_1 model_1_rss = get_residual_sum_of_squares(model_1, train_data, train_data['price']) print " model_1 rss",model_1_rss ## Model_2 model_2_rss = get_residual_sum_of_squares(model_2, train_data, train_data['price']) print " model_2 rss",model_2_rss ## model_3_rss = get_residual_sum_of_squares(model_3, train_data, train_data['price']) print " model_3 rss",model_3_rss """ Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1? Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2? Think about what this means. Comparing multiple models Now that you've learned three models and extracted the model weights, we want to evaluate which model is best. First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models. End of explanation """ # Compute the RSS on TESTING data for each of the three models and record the values: ## Model_1 model_1_rss_test = get_residual_sum_of_squares(model_1, test_data, test_data['price']) print " model_1 rss",model_1_rss_test ## Model_2 model_2_rss_test = get_residual_sum_of_squares(model_2, test_data, test_data['price']) print " model_2 rss",model_2_rss_test ## model_3_rss_test = get_residual_sum_of_squares(model_3, test_data, test_data['price']) print " model_3 rss",model_3_rss_test """ Explanation: Quiz Question: Which model (1, 2 or 3) has the lowest RSS on TRAINING Data? Is this what you expected? Now compute the RSS on TEST data for each of the three models. End of explanation """
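A useful sanity check on the RSS comparisons above: for any outcome vector, predicting its mean gives RSS = n * Var(y), the baseline every useful model must beat. The numpy sketch below uses a tiny hypothetical outcome vector, not the house-sales data.

```python
import numpy as np

def rss(predictions, outcome):
    """Residual sum of squares: the sum of squared prediction errors."""
    residual = np.asarray(predictions, dtype=float) - np.asarray(outcome, dtype=float)
    return float(np.dot(residual, residual))

# Hypothetical outcome vector standing in for prices.
y = np.array([1.0, 2.0, 3.0, 4.0])

# Mean-only baseline: RSS equals n * population variance of y.
baseline = np.full_like(y, y.mean())
print(rss(baseline, y))              # 5.0 = 4 * 1.25

# A perfect model has RSS 0; any other constant predictor does worse.
print(rss(y, y))                     # 0.0
print(rss(np.full_like(y, 3.0), y))  # 6.0 > 5.0
```

The same logic underlies the quiz questions: training RSS can only shrink as nested models add features, but test RSS can grow, which is why the train and test comparisons are done separately.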
mne-tools/mne-tools.github.io
0.23/_downloads/eb0c29f55af0173daab811d4f4dc2f40/simulated_raw_data_using_subject_anatomy.ipynb
bsd-3-clause
# Author: Ivana Kojcic <ivana.kojcic@gmail.com> # Eric Larson <larson.eric.d@gmail.com> # Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com> # Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com> # License: BSD (3-clause) import os.path as op import numpy as np import mne from mne.datasets import sample print(__doc__) # In this example, raw data will be simulated for the sample subject, so its # information needs to be loaded. This step will download the data if it not # already on your machine. Subjects directory is also set so it doesn't need # to be given to functions. data_path = sample.data_path() subjects_dir = op.join(data_path, 'subjects') subject = 'sample' meg_path = op.join(data_path, 'MEG', subject) # First, we get an info structure from the sample subject. fname_info = op.join(meg_path, 'sample_audvis_raw.fif') info = mne.io.read_info(fname_info) tstep = 1 / info['sfreq'] # To simulate sources, we also need a source space. It can be obtained from the # forward solution of the sample subject. fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif') fwd = mne.read_forward_solution(fwd_fname) src = fwd['src'] # To simulate raw data, we need to define when the activity occurs using events # matrix and specify the IDs of each event. # Noise covariance matrix also needs to be defined. # Here, both are loaded from the sample dataset, but they can also be specified # by the user. fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif') fname_cov = op.join(meg_path, 'sample_audvis-cov.fif') events = mne.read_events(fname_event) noise_cov = mne.read_cov(fname_cov) # Standard sample event IDs. These values will correspond to the third column # in the events matrix. 
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3, 'visual/right': 4, 'smiley': 5, 'button': 32} # Take only a few events for speed events = events[:80] """ Explanation: Simulate raw data using subject anatomy This example illustrates how to generate source estimates and simulate raw data using subject anatomy with the :class:mne.simulation.SourceSimulator class. Once the raw data is simulated, generated source estimates are reconstructed using dynamic statistical parametric mapping (dSPM) inverse operator. End of explanation """ activations = { 'auditory/left': [('G_temp_sup-G_T_transv-lh', 30), # label, activation (nAm) ('G_temp_sup-G_T_transv-rh', 60)], 'auditory/right': [('G_temp_sup-G_T_transv-lh', 60), ('G_temp_sup-G_T_transv-rh', 30)], 'visual/left': [('S_calcarine-lh', 30), ('S_calcarine-rh', 60)], 'visual/right': [('S_calcarine-lh', 60), ('S_calcarine-rh', 30)], } annot = 'aparc.a2009s' # Load the 4 necessary label names. label_names = sorted(set(activation[0] for activation_list in activations.values() for activation in activation_list)) region_names = list(activations.keys()) """ Explanation: In order to simulate source time courses, labels of desired active regions need to be specified for each of the 4 simulation conditions. Make a dictionary that maps conditions to activation strengths within aparc.a2009s :footcite:DestrieuxEtAl2010 labels. In the aparc.a2009s parcellation: 'G_temp_sup-G_T_transv' is the label for primary auditory area 'S_calcarine' is the label for primary visual area In each of the 4 conditions, only the primary area is activated. This means that during the activations of auditory areas, there are no activations in visual areas and vice versa. Moreover, for each condition, contralateral region is more active (here, 2 times more) than the ipsilateral. 
End of explanation
"""

def data_fun(times, latency, duration):
    """Generate source time courses for evoked responses,
    parametrized by latency and duration."""
    f = 15  # oscillating frequency, beta band [Hz]
    sigma = 0.375 * duration
    sinusoid = np.sin(2 * np.pi * f * (times - latency))
    gf = np.exp(- (times - latency - (sigma / 4.) * rng.rand(1)) ** 2 /
                (2 * (sigma ** 2)))
    return 1e-9 * sinusoid * gf

"""
Explanation: Create simulated source activity
Generate source time courses for each region. In this example, we want to
simulate source activity for a single condition at a time. Therefore, each
evoked response will be parametrized by latency and duration.
End of explanation
"""

times = np.arange(150, dtype=np.float64) / info['sfreq']
duration = 0.03
rng = np.random.RandomState(7)
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
    events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
    for i in range(2):
        label_name = activations[region_name][i][0]
        label_tmp = mne.read_labels_from_annot(subject, annot,
                                               subjects_dir=subjects_dir,
                                               regexp=label_name,
                                               verbose=False)
        label_tmp = label_tmp[0]
        amplitude_tmp = activations[region_name][i][1]
        if region_name.split('/')[1][0] == label_tmp.hemi[0]:
            latency_tmp = 0.115
        else:
            latency_tmp = 0.1
        wf_tmp = data_fun(times, latency_tmp, duration)
        source_simulator.add_data(label_tmp, amplitude_tmp * wf_tmp,
                                  events_tmp)

# To obtain a SourceEstimate object, we need to use the `get_stc()` method of
# the SourceSimulator class.
stc_data = source_simulator.get_stc()

"""
Explanation: Here, :class:`~mne.simulation.SourceSimulator` is used, which allows us to
specify where (label), what (source_time_series), and when (events) each event
type will occur. We will add data for 4 areas, each of which contains 2 labels.
Since the add_data method accepts 1 label per call, it will be called 2 times
per area.
Evoked responses are generated such that the main component peaks at 100ms,
with a duration of around 30ms, and first appears in the contralateral cortex.
This is followed by a response in the ipsilateral cortex, with a peak about
15ms later. The amplitude of the activations will be 2 times higher in the
contralateral region, as explained before.
When the activity occurs is defined using events. In this case, they are taken
from the original raw data. The first column is the sample of the event, the
second is not used. The third one is the event id, which is different for each
of the 4 areas.
End of explanation
"""

raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd)
raw_sim.set_eeg_reference(projection=True)

mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)

# Plot original and simulated raw data.
raw_sim.plot(title='Simulated raw data')

"""
Explanation: Simulate raw data
Project the source time series to sensor space. Three types of noise will be
added to the simulated raw data:
* multivariate Gaussian noise obtained from the noise covariance of the sample data
* blink (EOG) noise
* ECG noise
The :class:`~mne.simulation.SourceSimulator` can be given directly to the
:func:`~mne.simulation.simulate_raw` function.
End of explanation
"""

epochs = mne.Epochs(raw_sim, events, event_id, tmin=-0.2, tmax=0.3,
                    baseline=(None, 0))
evoked_aud_left = epochs['auditory/left'].average()
evoked_vis_right = epochs['visual/right'].average()

# Visualize the evoked data
evoked_aud_left.plot(spatial_colors=True)
evoked_vis_right.plot(spatial_colors=True)

"""
Explanation: Extract epochs and compute evoked responses
End of explanation
"""

method, lambda2 = 'dSPM', 1. / 9.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov) stc_aud = mne.minimum_norm.apply_inverse( evoked_aud_left, inv, lambda2, method) stc_vis = mne.minimum_norm.apply_inverse( evoked_vis_right, inv, lambda2, method) stc_diff = stc_aud - stc_vis brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1, hemi='split', views=['lat', 'med']) """ Explanation: Reconstruct simulated source time courses using dSPM inverse operator Here, source time courses for auditory and visual areas are reconstructed separately and their difference is shown. This was done merely for better visual representation of source reconstruction. As expected, when high activations appear in primary auditory areas, primary visual areas will have low activations and vice versa. End of explanation """
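The shape of the source waveform built by `data_fun` above — a 15 Hz (beta-band) sinusoid under a Gaussian envelope, scaled to nanoampere-meters — can be checked in isolation. The sketch below drops `data_fun`'s random latency jitter and assumes a nominal 600 Hz sampling rate (the real example reads the rate from `info['sfreq']`), so it is an illustration of the waveform's structure rather than the example's exact output:

```python
import numpy as np

def source_waveform(times, latency, duration, freq=15.0, amp=1e-9):
    """Beta-band sinusoid under a Gaussian envelope, peaking near `latency`."""
    sigma = 0.375 * duration
    sinusoid = np.sin(2 * np.pi * freq * (times - latency))
    envelope = np.exp(-(times - latency) ** 2 / (2 * sigma ** 2))
    return amp * sinusoid * envelope

# 150 samples at an assumed 600 Hz rate, mirroring `times` in the example.
times = np.arange(150, dtype=np.float64) / 600.0
wf = source_waveform(times, latency=0.1, duration=0.03)

# The envelope concentrates energy near the requested latency, and the
# amplitude stays at the nAm scale set by `amp`.
peak_time = times[np.argmax(np.abs(wf))]
print(peak_time, np.abs(wf).max())
```

Because the sinusoid crosses zero exactly at `latency`, the absolute peak lands a few milliseconds to either side of it — close enough for the evoked-response timing described above.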
!pip install nltk

"""
Explanation: 11. Semantics 1: words - Lab exercises
11.E1 Accessing WordNet using NLTK
11.E2 Using word embeddings
11.E3 Comparing WordNet and word embeddings
11.E1 Accessing WordNet using NLTK <a id='11.E1'></a>
NLTK (Natural Language Toolkit) is a Python library for accessing many NLP tools and resources. The NLTK WordNet interface is described here: http://www.nltk.org/howto/wordnet.html
The NLTK Python package can be installed using pip:
End of explanation
"""

import nltk
nltk.download('wordnet')

"""
Explanation: Import nltk and use its internal download tool to get WordNet:
End of explanation
"""

from nltk.corpus import wordnet as wn

"""
Explanation: Import the wordnet module:
End of explanation
"""

club_synsets = wn.synsets('club')
print(club_synsets)

"""
Explanation: Access synsets of a word using the synsets function:
End of explanation
"""

for synset in club_synsets:
    print("{0}\t{1}".format(synset.name(), synset.definition()))

dog = wn.synsets('dog')[0]
dog.definition()

"""
Explanation: Each synset has a definition function:
End of explanation
"""

dog.lemmas()

"""
Explanation: List lemmas of a synset:
End of explanation
"""

dog.hypernyms()

dog.hyponyms()

"""
Explanation: List hypernyms and hyponyms of a synset:
End of explanation
"""

list(dog.closure(lambda s: s.hypernyms()))

"""
Explanation: The closure method of synsets allows us to retrieve the transitive closure of the hypernym, hyponym, etc.
relations:
End of explanation
"""

cat = wn.synsets('cat')[0]
dog.lowest_common_hypernyms(cat)

dog.common_hypernyms(cat)

dog.path_similarity(cat)

"""
Explanation: common_hypernyms and lowest_common_hypernyms work in relation to another synset:
End of explanation
"""

wn.all_synsets(pos='n')

for c, noun in enumerate(wn.all_synsets(pos='n')):
    if c > 5:
        break
    print(noun.name())

"""
Explanation: To iterate through all synsets, possibly by POS tag, use all_synsets, which returns a generator:
End of explanation
"""

!wget http://sandbox.hlt.bme.hu/~recski/stuff/glove.6B.50d.txt.gz
!gunzip -f glove.6B.50d.txt.gz

"""
Explanation: Exercise (optional): use WordNet to implement the "Guess the category" game: the program lists lemmas that all share a hypernym, which the user has to guess.
11.E2 Using word embeddings <a id='11.E2'></a>
Download and extract the word embedding glove.6B, which was trained on 6 billion words of English text using the GloVe algorithm.
End of explanation
"""

import numpy as np

# words, word_index, emb = read_embedding('glove.6B.50d.txt')
# emb = normalize_embedding(emb)

"""
Explanation: Read the embedding into a 2D numpy array. Word forms should be stored in a separate 1D array. Also create a word index, a dictionary that returns the index of each word in the embedding.
Vectors should be normalized to a length of 1.
End of explanation
"""

# vec_sim('cat', 'dog', word_index, emb)

"""
Explanation: Write a function that takes two words and the embedding as input and returns their cosine similarity
End of explanation
"""

# print(nearest_n('dog', words, word_index, emb))
# print(nearest_n('king', words, word_index, emb))

"""
Explanation: Implement a function that takes a word as a parameter and returns the 5 words that are closest to it in the embedding space
End of explanation
"""

# synset_emb = embed_synsets(words, word_index, emb)

"""
Explanation: 11.E3 Vector similarity in WordNet <a id='11.E3'></a>
Use the code written in 11.E2 to analyze word groups in WordNet:
Create an embedding of WordNet synsets by mapping each of them to the mean of their lemmas' vectors.
End of explanation
"""

# synset_sim(dog, cat, synset_emb)

"""
Explanation: Write a function that measures the similarity of two synsets based on the cosine similarity of their vectors
End of explanation
"""

# nearest_n_synsets(wn.synsets('penguin')[0], synset_emb, 10)

"""
Explanation: Write a function that takes a synset as input and retrieves the n most similar synsets, using the above embedding
End of explanation
"""

# compare_sims(sample, synset_emb, word_index, emb)

"""
Explanation: Build the list of all words that are both in WordNet and the GloVe embedding. On a sample of 100 such words, measure the Spearman correlation of synset similarity and vector similarity (use scipy.stats.spearmanr).
End of explanation
"""
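The commented-out calls in the exercises above (`read_embedding`, `normalize_embedding`, `vec_sim`, `nearest_n`) are stubs for the students to fill in. One possible NumPy implementation — a sketch whose function signatures are taken from those commented-out calls, not the lab's reference solution:

```python
import numpy as np

def read_embedding(fn):
    """Parse a GloVe-style text file: each line holds a word and its vector."""
    words, vectors = [], []
    with open(fn) as f:
        for line in f:
            fields = line.strip().split()
            words.append(fields[0])
            vectors.append([float(x) for x in fields[1:]])
    word_index = {word: i for i, word in enumerate(words)}
    return np.array(words), word_index, np.array(vectors)

def normalize_embedding(emb):
    """Scale each row to unit length so dot products become cosine similarities."""
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def vec_sim(word1, word2, word_index, emb):
    """Cosine similarity of two words in a normalized embedding."""
    return emb[word_index[word1]].dot(emb[word_index[word2]])

def nearest_n(word, words, word_index, emb, n=5):
    """The n words whose vectors are closest to `word` (including itself)."""
    sims = emb.dot(emb[word_index[word]])
    return list(words[np.argsort(-sims)[:n]])
```

With the rows normalized up front, both the pairwise similarity and the nearest-neighbor query reduce to plain dot products, which is why `normalize_embedding` is applied once right after loading.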
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')

import graphlab

"""
Explanation: Exploring Ensemble Methods
In this assignment, we will explore the use of boosting. We will use the pre-implemented gradient boosted trees in GraphLab Create. You will:
Use SFrames to do some feature engineering.
Train a boosted ensemble of decision trees (gradient boosted trees) on the LendingClub dataset.
Predict whether a loan will default, along with prediction probabilities (on a validation set).
Evaluate the trained model and compare it with a baseline.
Find the most positive and negative loans using the learned model.
Explore how the number of trees influences classification performance.
Let's get started!
Fire up GraphLab Create
End of explanation
"""

loans = graphlab.SFrame('lending-club-data.gl/')

"""
Explanation: Load the LendingClub dataset
We will be using the LendingClub data. As discussed earlier, LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors.
Just like we did in previous assignments, we will build a classification model to predict whether or not a loan provided by LendingClub is likely to default.
Let us start by loading the data.
End of explanation
"""

loans.column_names()

"""
Explanation: Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. We have done this in previous assignments, so we won't belabor this here.
End of explanation
"""

loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')

"""
Explanation: Modifying the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column, 1 means a risky (bad) loan and 0 means a safe loan.
As in past assignments, in order to make this more intuitive and consistent with the lectures, we reassign the target to be: * +1 as a safe loan, * -1 as a risky (bad) loan. We put this in a new column called safe_loans. End of explanation """ target = 'safe_loans' features = ['grade', # grade of the loan (categorical) 'sub_grade_num', # sub-grade of the loan as a number from 0 to 1 'short_emp', # one year or less of employment 'emp_length_num', # number of years of employment 'home_ownership', # home_ownership status: own, mortgage or rent 'dti', # debt to income ratio 'purpose', # the purpose of the loan 'payment_inc_ratio', # ratio of the monthly payment to income 'delinq_2yrs', # number of delinquincies 'delinq_2yrs_zero', # no delinquincies in last 2 years 'inq_last_6mths', # number of creditor inquiries in last 6 months 'last_delinq_none', # has borrower had a delinquincy 'last_major_derog_none', # has borrower had 90 day or worse rating 'open_acc', # number of open credit accounts 'pub_rec', # number of derogatory public records 'pub_rec_zero', # no derogatory public records 'revol_util', # percent of available credit being used 'total_rec_late_fee', # total late fees received to day 'int_rate', # interest rate of the loan 'total_rec_int', # interest received to date 'annual_inc', # annual income of borrower 'funded_amnt', # amount committed to the loan 'funded_amnt_inv', # amount committed by investors for the loan 'installment', # monthly payment owed by the borrower ] """ Explanation: Selecting features In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features. 
The features we will be using are described in the code comments below: End of explanation """ loans, loans_with_na = loans[[target] + features].dropna_split() # Count the number of rows with missing data num_rows_with_na = loans_with_na.num_rows() num_rows = loans.num_rows() print 'Dropping %s observations; keeping %s ' % (num_rows_with_na, num_rows) """ Explanation: Skipping observations with missing values Recall from the lectures that one common approach to coping with missing values is to skip observations that contain missing values. We run the following code to do so: End of explanation """ safe_loans_raw = loans[loans[target] == 1] risky_loans_raw = loans[loans[target] == -1] # Undersample the safe loans. percentage = len(risky_loans_raw)/float(len(safe_loans_raw)) safe_loans = safe_loans_raw.sample(percentage, seed = 1) risky_loans = risky_loans_raw loans_data = risky_loans.append(safe_loans) print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data)) print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data)) print "Total number of loans in our new dataset :", len(loans_data) """ Explanation: Fortunately, there are not too many missing values. We are retaining most of the data. Make sure the classes are balanced We saw in an earlier assignment that this dataset is also imbalanced. We will undersample the larger class (safe loans) in order to balance out our dataset. We used seed=1 to make sure everyone gets the same results. End of explanation """ train_data, validation_data = loans_data.random_split(.8, seed=1) """ Explanation: Checkpoint: You should now see that the dataset is balanced (approximately 50-50 safe vs risky loans). Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. 
For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods. Split data into training and validation sets We split the data into training data and validation data. We used seed=1 to make sure everyone gets the same results. We will use the validation data to help us select model parameters. End of explanation """ model_5 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 5) """ Explanation: Gradient boosted tree classifier Gradient boosted trees are a powerful variant of boosting methods; they have been used to win many Kaggle competitions, and have been widely used in industry. We will explore the predictive power of multiple decision trees as opposed to a single decision tree. Additional reading: If you are interested in gradient boosted trees, here is some additional reading material: * GraphLab Create user guide * Advanced material on boosted trees We will now train models to predict safe_loans using the features above. In this section, we will experiment with training an ensemble of 5 trees. To cap the ensemble classifier at 5 trees, we call the function with max_iterations=5 (recall that each iterations corresponds to adding a tree). We set validation_set=None to make sure everyone gets the same results. End of explanation """ # Select all positive and negative examples. 
validation_safe_loans = validation_data[validation_data[target] == 1] validation_risky_loans = validation_data[validation_data[target] == -1] # Select 2 examples from the validation set for positive & negative loans sample_validation_data_risky = validation_risky_loans[0:2] sample_validation_data_safe = validation_safe_loans[0:2] # Append the 4 examples into a single dataset sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky) sample_validation_data """ Explanation: Making predictions Just like we did in previous sections, let us consider a few positive and negative examples from the validation set. We will do the following: * Predict whether or not a loan is likely to default. * Predict the probability with which the loan is likely to default. End of explanation """ model_5.predict(sample_validation_data) len(sample_validation_data[sample_validation_data['safe_loans'] != model_5.predict(sample_validation_data)])/float(len(sample_validation_data)) """ Explanation: Predicting on sample validation data For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan. Hint: Use the predict method in model_5 for this. End of explanation """ model_5.predict(sample_validation_data, output_type='probability') """ Explanation: Quiz question: What percentage of the predictions on sample_validation_data did model_5 get correct? Prediction probabilities For each row in the sample_validation_data, what is the probability (according model_5) of a loan being classified as safe? Hint: Set output_type='probability' to make probability predictions using model_5 on sample_validation_data: End of explanation """ model_5.evaluate(validation_data) """ Explanation: Quiz Question: According to model_5, which loan is the least likely to be a safe loan? Checkpoint: Can you verify that for all the predictions with probability &gt;= 0.5, the model predicted the label +1? 
Evaluating the model on the validation data Recall that the accuracy is defined as follows: $$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$ Evaluate the accuracy of the model_5 on the validation_data. Hint: Use the .evaluate() method in the model. End of explanation """ graphlab.canvas.set_target('ipynb') model_5.show(view='Evaluation') predictions = model_5.predict(validation_data) len(predictions) confusion_matrix = model_5.evaluate(validation_data)['confusion_matrix'] confusion_matrix """ Explanation: Calculate the number of false positives made by the model. End of explanation """ confusion_matrix[(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == 1)] # false_positives = (validation_data[validation_data['safe_loans'] != predictions]['safe_loans'] == -1).sum() false_positives = confusion_matrix[(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == 1)]['count'][0] print false_positives """ Explanation: Quiz question: What is the number of false positives on the validation_data? End of explanation """ # false_negatives = (validation_data[validation_data['safe_loans'] != predictions]['safe_loans'] == +1).sum() false_negatives = confusion_matrix[(confusion_matrix['target_label'] == 1) & (confusion_matrix['predicted_label'] == -1)]['count'][0] print false_negatives """ Explanation: Calculate the number of false negatives made by the model. End of explanation """ cost_of_mistakes = (false_negatives * 10000) + (false_positives * 20000) print cost_of_mistakes """ Explanation: Comparison with decision trees In the earlier assignment, we saw that the prediction accuracy of the decision trees was around 0.64 (rounded). In this assignment, we saw that model_5 has an accuracy of 0.67 (rounded). Here, we quantify the benefit of the extra 3% increase in accuracy of model_5 in comparison with a single decision tree from the original decision tree assignment. 
As we explored in the earlier assignment, we calculated the cost of the mistakes made by the model. We again consider the same costs as follows: False negatives: Assume a cost of \$10,000 per false negative. False positives: Assume a cost of \$20,000 per false positive. Assume that the number of false positives and false negatives for the learned decision tree was False negatives: 1936 False positives: 1503 Using the costs defined above and the number of false positives and false negatives for the decision tree, we can calculate the total cost of the mistakes made by the decision tree model as follows: cost = $10,000 * 1936 + $20,000 * 1503 = $49,420,000 The total cost of the mistakes of the model is $49.42M. That is a lot of money!. Quiz Question: Using the same costs of the false positives and false negatives, what is the cost of the mistakes made by the boosted tree model (model_5) as evaluated on the validation_set? End of explanation """ validation_data['predictions'] = model_5.predict(validation_data, output_type='probability') """ Explanation: Reminder: Compare the cost of the mistakes made by the boosted trees model with the decision tree model. The extra 3% improvement in prediction accuracy can translate to several million dollars! And, it was so easy to get by simply boosting our decision trees. Most positive & negative loans. In this section, we will find the loans that are most likely to be predicted safe. We can do this in a few steps: Step 1: Use the model_5 (the model with 5 trees) and make probability predictions for all the loans in the validation_data. Step 2: Similar to what we did in the very first assignment, add the probability predictions as a column called predictions into the validation_data. Step 3: Sort the data (in descreasing order) by the probability predictions. Start here with Step 1 & Step 2. Make predictions using model_5 for examples in the validation_data. Use output_type = probability. 
End of explanation """ print "Your loans : %s\n" % validation_data['predictions'].head(4) print "Expected answer : %s" % [0.4492515948736132, 0.6119100103640573, 0.3835981314851436, 0.3693306705994325] """ Explanation: Checkpoint: For each row, the probabilities should be a number in the range [0, 1]. We have provided a simple check here to make sure your answers are correct. End of explanation """ validation_data[['grade','predictions']].sort('predictions', ascending = False)[0:5] """ Explanation: Now, we are ready to go to Step 3. You can now use the prediction column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan. End of explanation """ validation_data[['grade','predictions']].sort('predictions', ascending = True)[0:5] """ Explanation: Quiz question: What grades are the top 5 loans? Let us repeat this excercise to find the top 5 loans (in the validation_data) with the lowest probability of being predicted as a safe loan: End of explanation """ model_10 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 10, verbose=False) """ Explanation: Checkpoint: You should expect to see 5 loans with the grade ['D', 'C', 'C', 'C', 'B']. Effect of adding more trees In this assignment, we will train 5 different ensemble classifiers in the form of gradient boosted trees. We will train models with 10, 50, 100, 200, and 500 trees. We use the max_iterations parameter in the boosted tree module. 
Let's get sarted with a model with max_iterations = 10: End of explanation """ model_50 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 50, verbose=False) model_100 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 100, verbose=False) model_200 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 200, verbose=False) model_500 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None, target = target, features = features, max_iterations = 500, verbose=False) """ Explanation: Now, train 4 models with max_iterations to be: * max_iterations = 50, * max_iterations = 100 * max_iterations = 200 * max_iterations = 500. Let us call these models model_50, model_100, model_200, and model_500. You can pass in verbose=False in order to suppress the printed output. Warning: This could take a couple of minutes to run. End of explanation """ print model_10.evaluate(validation_data)['accuracy'] print model_50.evaluate(validation_data)['accuracy'] print model_100.evaluate(validation_data)['accuracy'] print model_200.evaluate(validation_data)['accuracy'] print model_500.evaluate(validation_data)['accuracy'] """ Explanation: Compare accuracy on entire validation set Now we will compare the predicitve accuracy of our models on the validation set. Evaluate the accuracy of the 10, 50, 100, 200, and 500 tree models on the validation_data. Use the .evaluate method. 
End of explanation """ import matplotlib.pyplot as plt %matplotlib inline def make_figure(dim, title, xlabel, ylabel, legend): plt.rcParams['figure.figsize'] = dim plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) if legend is not None: plt.legend(loc=legend, prop={'size':15}) plt.rcParams.update({'font.size': 16}) plt.tight_layout() """ Explanation: Quiz Question: Which model has the best accuracy on the validation_data? Quiz Question: Is it always true that the model with the most trees will perform best on test data? Plot the training and validation error vs. number of trees Recall from the lecture that the classification error is defined as $$ \mbox{classification error} = 1 - \mbox{accuracy} $$ In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots. First, make sure this block of code runs on your computer. End of explanation """ train_err_10 = 1 - model_10.evaluate(train_data)['accuracy'] train_err_50 = 1 - model_50.evaluate(train_data)['accuracy'] train_err_100 = 1 - model_100.evaluate(train_data)['accuracy'] train_err_200 = 1 - model_200.evaluate(train_data)['accuracy'] train_err_500 = 1 - model_500.evaluate(train_data)['accuracy'] """ Explanation: In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate. Steps to follow: Step 1: Calculate the classification error for model on the training data (train_data). Step 2: Store the training errors into a list (called training_errors) that looks like this: [train_err_10, train_err_50, ..., train_err_500] Step 3: Calculate the classification error of each model on the validation data (validation_data). 
Step 4: Store the validation classification error into a list (called validation_errors) that looks like this: [validation_err_10, validation_err_50, ..., validation_err_500] Once that has been completed, the rest of the code should be able to evaluate correctly and generate the plot. Let us start with Step 1. Write code to compute the classification error on the train_data for models model_10, model_50, model_100, model_200, and model_500. End of explanation """ training_errors = [train_err_10, train_err_50, train_err_100, train_err_200, train_err_500] """ Explanation: Now, let us run Step 2. Save the training errors into a list called training_errors End of explanation """ validation_err_10 = 1 - model_10.evaluate(validation_data)['accuracy'] validation_err_50 = 1 - model_50.evaluate(validation_data)['accuracy'] validation_err_100 = 1 - model_100.evaluate(validation_data)['accuracy'] validation_err_200 = 1 - model_200.evaluate(validation_data)['accuracy'] validation_err_500 = 1 - model_500.evaluate(validation_data)['accuracy'] """ Explanation: Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500. End of explanation """ validation_errors = [validation_err_10, validation_err_50, validation_err_100, validation_err_200, validation_err_500] """ Explanation: Now, let us run Step 4. Save the training errors into a list called validation_errors End of explanation """ plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error') plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error') make_figure(dim=(10,5), title='Error vs number of trees', xlabel='Number of trees', ylabel='Classification error', legend='best') """ Explanation: Now, we will plot the training_errors and validation_errors versus the number of trees. We will compare the 10, 50, 100, 200, and 500 tree models. 
We provide some plotting code to visualize the plots within this notebook. Run the following code to visualize the plots. End of explanation """
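The dollar-cost bookkeeping used throughout this assignment is simple enough to check without GraphLab Create. The sketch below is a plain-Python version of the cost model stated earlier ($10,000 per false negative, $20,000 per false positive), reproducing the decision-tree baseline figure quoted above:

```python
def cost_of_mistakes(false_negatives, false_positives,
                     fn_cost=10000, fp_cost=20000):
    """Total dollar cost of classification mistakes under the stated cost model."""
    return false_negatives * fn_cost + false_positives * fp_cost

# The decision-tree baseline: 1936 false negatives and 1503 false positives.
print(cost_of_mistakes(1936, 1503))  # 49420000, i.e. the $49.42M quoted above
```

Running the same function on the boosted model's confusion-matrix counts gives the answer to the corresponding quiz question.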
import numpy as np

from ray import tune


def train_function(config, checkpoint_dir=None):
    for i in range(30):
        loss = config["mean"] + config["sd"] * np.random.randn()
        tune.report(loss=loss)

"""
Explanation: (tune-comet-ref)=
Using Comet with Tune
Comet is a tool to manage and optimize the entire ML lifecycle, from experiment tracking, model optimization and dataset versioning to model production monitoring.
{image} /images/comet_logo_full.png
:align: center
:alt: Comet
:height: 120px
:target: https://www.comet.ml/site/
{contents}
:backlinks: none
:local: true
Example
To illustrate logging your trial results to Comet, we'll define a simple training function that simulates a loss metric:
End of explanation
"""

api_key = "YOUR_COMET_API_KEY"
project_name = "YOUR_COMET_PROJECT_NAME"

# This cell is hidden from the rendered notebook. It makes the
# CometLoggerCallback use a mocked logger process, so that the example below
# can run without a real Comet API key.
from unittest.mock import MagicMock
from ray.tune.integration.comet import CometLoggerCallback

CometLoggerCallback._logger_process_cls = MagicMock
api_key = "abc"
project_name = "test"

"""
Explanation: Now, given that you provide your Comet API key and your project name like so:
End of explanation
"""

from ray.tune.integration.comet import CometLoggerCallback

analysis = tune.run(
    train_function,
    name="comet",
    metric="loss",
    mode="min",
    callbacks=[
        CometLoggerCallback(
            api_key=api_key, project_name=project_name, tags=["comet_example"]
        )
    ],
    config={"mean": tune.grid_search([1, 2, 3]), "sd": tune.uniform(0.2, 0.8)},
)
print(analysis.best_config)

"""
Explanation: You can add a Comet logger by specifying the callbacks argument in your tune.run accordingly:
End of explanation
"""
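Because the trainable above only simulates `loss = mean + sd * randn()`, its selection behavior can be previewed without running Ray at all. The following standalone sketch — seeded for determinism and not part of the Tune API — mimics the grid search over `mean` with a fixed `sd` and confirms that the smallest mean wins under `mode="min"`:

```python
import numpy as np

def simulated_losses(config, n_steps=30):
    """Offline stand-in for train_function: the losses it would report."""
    rng = np.random.default_rng(0)  # same seed per trial, for a fair comparison
    return [config["mean"] + config["sd"] * rng.standard_normal()
            for _ in range(n_steps)]

# Mimic config={"mean": tune.grid_search([1, 2, 3])} with a fixed sd.
trials = [{"mean": m, "sd": 0.5} for m in (1, 2, 3)]
best = min(trials, key=lambda c: np.mean(simulated_losses(c)))
print(best)  # {'mean': 1, 'sd': 0.5}
```

Since every trial sees identical noise here, the trials are ordered purely by their `mean`; in the real run the `sd` values are drawn per trial, so rankings are only approximately this clean.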
# As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.iteritems(): print '%s: ' % k, v.shape """ Explanation: Batch Normalization One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3]. The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. 
Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated. The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features. It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension. [3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015. 
End of explanation """ # Check the training-time forward pass by checking means and variances # of features both before and after batch normalization # Simulate the forward pass for a two-layer network N, D1, D2, D3 = 200, 50, 60, 3 X = np.random.randn(N, D1) W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) a = np.maximum(0, X.dot(W1)).dot(W2) print 'Before batch normalization:' print ' means: ', a.mean(axis=0) print ' stds: ', a.std(axis=0) # Means should be close to zero and stds close to one print 'After batch normalization (gamma=1, beta=0)' a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'}) print ' mean: ', a_norm.mean(axis=0) print ' std: ', a_norm.std(axis=0) # Now means should be close to beta and stds close to gamma gamma = np.asarray([1.0, 2.0, 3.0]) beta = np.asarray([11.0, 12.0, 13.0]) a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'}) print 'After batch normalization (nontrivial gamma, beta)' print ' means: ', a_norm.mean(axis=0) print ' stds: ', a_norm.std(axis=0) # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. N, D1, D2, D3 = 200, 50, 60, 3 W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) bn_param = {'mode': 'train'} gamma = np.ones(D3) beta = np.zeros(D3) for t in xrange(50): X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) batchnorm_forward(a, gamma, beta, bn_param) bn_param['mode'] = 'test' X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. 
print 'After batch normalization (test-time):'
print '  means: ', a_norm.mean(axis=0)
print '  stds: ', a_norm.std(axis=0)
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
# the perturbed parameter must actually be passed through, otherwise the
# numerical gradients for gamma and beta would come out as zero
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation """ N, D = 100, 500 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) bn_param = {'mode': 'train'} out, cache = batchnorm_forward(x, gamma, beta, bn_param) t1 = time.time() dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache) t2 = time.time() dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache) t3 = time.time() print 'dx difference: ', rel_error(dx1, dx2) print 'dgamma difference: ', rel_error(dgamma1, dgamma2) print 'dbeta difference: ', rel_error(dbeta1, dbeta2) print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2)) """ Explanation: Batch Normalization: alternative backward In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper. Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster. NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it. 
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
  print 'Running check with reg = ', reg
  model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                            reg=reg, weight_scale=5e-2, dtype=np.float64,
                            use_batchnorm=True)
  loss, grads = model.loss(X, y)
  print 'Initial loss: ', loss
  for name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
    print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
  if reg == 0: print
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized.
Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation """ # Try training a very deep net with batchnorm hidden_dims = [100, 100, 100, 100, 100] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 2e-2 bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=200) bn_solver.train() solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=200) solver.train() """ Explanation: Batchnorm for deep networks Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization. End of explanation """ plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label='baseline') plt.plot(bn_solver.loss_history, 'o', label='batchnorm') plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label='baseline') plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm') plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label='baseline') plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm') for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster. 
End of explanation """ # Try training a very deep net with batchnorm hidden_dims = [50, 50, 50, 50, 50, 50, 50] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } bn_solvers = {} solvers = {} weight_scales = np.logspace(-4, 0, num=20) for i, weight_scale in enumerate(weight_scales): print 'Running weight scale %d / %d' % (i + 1, len(weight_scales)) bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) bn_solver.train() bn_solvers[weight_scale] = bn_solver solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) solver.train() solvers[weight_scale] = solver # Plot results of weight scale experiment best_train_accs, bn_best_train_accs = [], [] best_val_accs, bn_best_val_accs = [], [] final_train_loss, bn_final_train_loss = [], [] for ws in weight_scales: best_train_accs.append(max(solvers[ws].train_acc_history)) bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history)) best_val_accs.append(max(solvers[ws].val_acc_history)) bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history)) final_train_loss.append(np.mean(solvers[ws].loss_history[-100:])) bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:])) plt.subplot(3, 1, 1) plt.title('Best val accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best val accuracy') plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) 
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
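The normalization that all the checks above rely on can be sketched in a few lines of NumPy. This is only a sketch of the training-time forward pass, not the assignment's full batchnorm_forward with its running-average bookkeeping and cache for the backward pass:

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the minibatch, then apply the
    # learnable scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

np.random.seed(0)
x = 5 * np.random.randn(200, 3) + 12
out = batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
print(out.mean(axis=0))  # per-feature means, all close to 0
print(out.std(axis=0))   # per-feature stds, all close to 1
```

With nontrivial gamma and beta, the output means move to beta and the output stds to gamma, which is exactly what the training-time check in the notebook verifies.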
galtay/keras_examples
examples_1.ipynb
gpl-3.0
# set some constants RAND_SEED_1 = 3826204 import numpy as np np.random.seed(RAND_SEED_1) import os import pandas import sklearn.datasets from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import keras from keras.models import Sequential from keras.layers import InputLayer, Dense import seaborn import plotters import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Machine Learning with Keras - Part 1 Gabriel Altay Binary Classification In this notebook we review some fundamental theory and a few best practices when designing and implementing machine learning classification models. We end with a simple linear network for binary classification. Import Modules End of explanation """ def iris_dataframe(iris): """Create DataFrame with Iris data.""" df_features = pandas.DataFrame(data=iris.data, columns=iris.feature_names) df_targets = pandas.DataFrame(data=iris.target, columns=['target_id']) df_targets['target_name'] = df_targets['target_id'].apply( lambda x: iris.target_names[x]) # could use concat or merge to join the DataFrames #df = pandas.concat([df_features, df_targets], axis=1) df = pandas.merge( df_features, df_targets, left_index=True, right_index=True) return df iris = sklearn.datasets.load_iris() iris_df = iris_dataframe(iris) iris_df.head(4) """ Explanation: Load Data Get the Iris dataset included with scikit-learn and load it into a pandas DataFrame. You can read more about this data set on the wikipedia page Iris flower data set. Below is a short summary, The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals [sepal length, sepal width, petal length, petal width], in centimetres. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other. 
End of explanation """ n_features = len(iris.feature_names) n_samples = len(iris_df) # number of rows in the DataFrame n_classes = len(iris.target_names) print('class names: {}'.format(iris.target_names)) print('feature names: {}'.format(iris.feature_names)) print('n_classes: ', n_classes) print('n_features: ', n_features) print('n_samples: ', n_samples) """ Explanation: Some Terminology Lets setup some terminology so we can refer to things unambiguously. The Iris dataset has measurements of four quantities on 150 different Iris flowers. The 150 flowers are equaly split among three species. In the jargon of machine learning, we say that we have 150 samples, four features, and three classes. End of explanation """ pp = seaborn.pairplot(iris_df, vars=iris.feature_names, hue='target_name') """ Explanation: Visualize Raw Data Because there are four features for each flower in the Iris dataset, we can think of this data as living in a 4-D space. We can visualize the relationship between the four features using the pairplot function of the seaborn module. Each point in the off-diagonal panels represents a flower (sample). The color of each point indicates the class (species) and the position indicates the values for two of the four features. The six panels in the upper right part of the plot contain the same information as the six panels in the bottom left but the points are reflected through the x=y line. Together the off-diagonal panels show every possible orthogonal projection of this 4-D data down to a 2-D space. The diagonal elements show the distribution of values for each individual feature. 
End of explanation
"""
n_features = len(iris.feature_names)
n_samples = len(iris_df)  # number of rows in the DataFrame
n_classes = len(iris.target_names)
print('class names: {}'.format(iris.target_names))
print('feature names: {}'.format(iris.feature_names))
print('n_classes: ', n_classes)
print('n_features: ', n_features)
print('n_samples: ', n_samples)
"""
Explanation: Some Terminology
Let's set up some terminology so we can refer to things unambiguously. The Iris dataset has measurements of four quantities on 150 different Iris flowers. The 150 flowers are equally split among three species. In the jargon of machine learning, we say that we have 150 samples, four features, and three classes.
End of explanation
"""
pp = seaborn.pairplot(iris_df, vars=iris.feature_names, hue='target_name')
"""
Explanation: Visualize Raw Data
Because there are four features for each flower in the Iris dataset, we can think of this data as living in a 4-D space. We can visualize the relationship between the four features using the pairplot function of the seaborn module.
Each point in the off-diagonal panels represents a flower (sample). The color of each point indicates the class (species) and the position indicates the values for two of the four features. The six panels in the upper right part of the plot contain the same information as the six panels in the bottom left but the points are reflected through the x=y line. Together the off-diagonal panels show every possible orthogonal projection of this 4-D data down to a 2-D space. The diagonal elements show the distribution of values for each individual feature.
In general this has consequences when choosing classification models. Namely, all linear models will be incapable of perfect classification for datasets that are not linearly separable. This is not necessarily a bad thing (perfect classification of training data can be a sign of overfitting) but it should set our expectations while training models. Lets Start with Binary Classification We can transform the "multi-class" Iris classification problem into a binary classification problem by combining the 1=veriscolor and 2=virginica classes into one class. This will allow us to start with a very simple problem that we know is linearly separable. All machine learning texts and code implementations use their own syntax, but there is a set of semi-standard variable names that have been adopted in a wide range of settings. $X$ is commonly used to represent the "feature matrix" which has one row per sample and one column per feature. $Y$ is commonly used to refer to the "target vector" or "target array" and stores the true class labels for each sample. Below we construct these arrays for our toy binary classification problem by keeping class 0 (setosa) as is and mapping both class 1 (veriscolor) and class 2 (virginica) to class 1. End of explanation """ Xtrain, Xtest, Ytrain, Ytest = train_test_split( X, Y, train_size=0.7, stratify=Y, random_state=RAND_SEED_1) # confirm class ratios in Ytrain and Ytest print('Ytrain.shape: ', Ytrain.shape) print('Ytest.shape: ', Ytest.shape) print('Ytrain class 1 / class 0: ', Ytrain.sum() / Ytrain.size) print('Ytest class 1 / class 0: ', Ytest.sum() / Ytest.size) """ Explanation: Holding Some Data Back In order to measure the quality of a model, it is best to test it on data that was not used in the training process. Doing this allows us to gain insight into the classic problem of under/over fitting (also known as the bias/variance tradeoff). 
In the most basic implementation of this idea, you simply split your data into a training set (for training) and a testing set (for evaluation of model quality). Typically 15-30% of the data is held back in the test set but there is no magic number that is best. A more advanced (and computationally more expensive) approach is K-fold cross validation. In this approach, you first choose a number of K-folds (say k=5) and split your data into k pieces. Then you train k models. For each one you use (k-1)/k of the data for training and 1/k of the data for testing (4/5 and 1/5 if k=5). You can then examine the mean performance of the 5 models to evaluate the quality of the model and you have made use of all of your training data. For now, we will use a simple 70/30 train test split, but will provide examples of K-fold cross-validation later. Another important concept in splitting data is that of "stratification" which is a form a splitting such that the proportion of classes in different data folds or sets is the same as in the whole dataset. In the case of our modified binary dataset we will maintain the 1/3 = class 1 and 2/3 class = 2 ratio. Most training methods work best when the classes are balanced (i.e. when we have the same number of each class) however, most training methods allow us to upweight under-represented classes (or equivalently downweight over-represented classes). End of explanation """ scaler = StandardScaler() scaler.fit(Xtrain) Xtrain_scld = scaler.transform(Xtrain) Xtest_scld = scaler.transform(Xtest) print('Xtrain_scld.shape: ', Xtrain_scld.shape) # verify scaled train data has mean=0 for each feature # reductions on numpy arrays allow us to specify an axis # 0 = row axis = take mean of each column print(Xtrain_scld.mean(axis=0)) # verify that the variance is 1 print(Xtrain_scld.var(axis=0)) # note that we expect this to be only approximately true for the test # data. the point of the test set is to simulate data that we haven't # seen yet. 
# that is why we only pass Xtrain to the `scaler.fit` method.
# we then transform the test data but using the mean and standard deviation
# measured from the train data.
print(Xtest_scld.mean(axis=0))
print(Xtest_scld.var(axis=0))
"""
Explanation: Scaling Data
Many machine learning techniques perform best when the feature data is scaled such that the distribution of values in each feature has a mean of zero and a variance of one. This is not true for all methods (tree classifiers for example) but it is almost never a bad idea. That being said, extreme outliers can heavily skew scaled data and none of these data preprocessing methods should ever be performed blindly. Scikit learn provides a number of scaling objects that will apply the appropriate transformation on our training data and remember it so that it can also be applied to the test data and future examples we would like to classify.
End of explanation
"""
model_binary = Sequential()
model_binary.add(InputLayer(input_shape=(n_features,)))
model_binary.add(Dense(1, activation='sigmoid'))
weights_initial = model_binary.get_weights()
print('weights_initial - input nodes: \n', weights_initial[0])
print('weights_initial - bias node: ', weights_initial[1])
"""
Explanation: Creating a Neural Network Binary Classifier
The simplest possible neural network we can make to look at our binary Iris data has 4 input nodes (one for each feature) and 1 output node. This architecture is the neural network version of traditional logistic regression. Note that the activation function we use (sigmoid) is simply another name for the logistic function $f(x) = 1 / (1 + e^{-x})$.
When using the sequential model API, we instantiate a Sequential object and use its add method to add layers. In this case we add an input layer and a single "Dense" layer which will produce weighted connections to every node in the input layer. Right after we define the model, we save the weights so we can reset the state of the network when we want.
End of explanation """ model_binary.summary() """ Explanation: Keras models come with a handy method called summary that will describe their architecture. Calling it will show the shape of each layer and the number of parameters that are trainable. End of explanation """ plotters.network_architecture(model_binary) """ Explanation: The shape of the InputLayer (None, 4) indicates that an individual sample has 4 features, but we can pass samples into the network in batches of arbitrary size. A single sample input would have shape (1,4) while batches of 20 samples would have shape (20,4). Note that the summary method above reported 5 trainable parameters (weights connecting the nodes). This might seem like more than expected in our fully connected neural network with 4 input nodes connected to 1 output node (4*1=4). This is because there is also a bias node with a weight associated to it. I've written a function to visualize the architecture that colors the bias nodes red and draws their weights with dotted lines. Note that 5 parameters is the same number of parameters we would have in a linear predictor function from logistic regression. End of explanation """ optimizer = keras.optimizers.SGD(lr=1.0e-4) loss = 'binary_crossentropy' metrics = ['accuracy'] model_binary.compile(optimizer=optimizer, loss=loss, metrics=metrics) """ Explanation: Now that we have a model architecture in place, we just need to compile it. During this process we choose an optimizer (a method for minimizing functions), a loss function (the actual function that will be minimized), and a list of metrics to keep track of during training. The value of the loss function at each iteration of training is stored by default and we add the accuracy to that default list. The accuracy metric is a simple measure of the fraction of samples we classify correctly. The optimizer we choose (SGD) is stochastic gradient descent and the keyword lr sets the learning rate. 
The loss function, binary_crossentropy, is described in more detail in the technical supplement. End of explanation """ def plot_acc(history): plt.plot(history['acc'], color='red', ls='-', lw=2.0, label='train acc') plt.plot(history['loss'], color='red', ls='--', lw=2.0, label='train loss') plt.plot(history['val_acc'], color='blue', ls='-', lw=1.0, label='valid acc') plt.plot(history['val_loss'], color='blue', ls='--', lw=1.0, label='valid loss') plt.legend(ncol=1, bbox_to_anchor=(1.30, 0.8)) plt.ylim(-0.1, 1.5) mdlout_binary = model_binary.fit( Xtrain_scld, Ytrain, validation_data=(Xtest_scld, Ytest), batch_size=105, nb_epoch=10, verbose=1) plot_acc(mdlout_binary.history) """ Explanation: Now we are ready to actually train the model! There are only two more parameters we have to set. The batch_size determines how many samples are fed through the network before the weights are updated using a stochastic gradient descent step and the nb_epoch parameter determines how many times the entire dataset is fed through the network. For our small dataset it is no problem for us to feed the entire dataset through the network before each weight update. This gives the most accurate estimate of the gradient we want to descend. BUT accuracy is not always the best thing. Using smaller batches is sometimes (maybe most times) better because 1) its faster and 2) it adds noise to the path we take to find a good minimum of the loss function and that allows us to get out of local minima. For this example, we will start with a batch size equal to the whole training set and a small learning rate to see what happens. What constitutes a "small" learning rate is almost always empirically determined. I have selected one for the purposes of this example that we can treat as "small" in the sense that we get improved results later by increasing it. The model will output to the screen during training and the return value (mdlout_binary) will store training and validation losses and accuracies. 
After training, we will plot the loss and accuracy for both the training and test sets at each epoch.
End of explanation
"""
model_binary.set_weights(weights_initial)
mdlout_binary = model_binary.fit(
    Xtrain_scld, Ytrain,
    validation_data=(Xtest_scld, Ytest),
    batch_size=105, nb_epoch=8000, verbose=0)
plot_acc(mdlout_binary.history)
"""
Explanation: Wow! We produced a really boring plot (I'm guessing ... it's hard to say exactly what will be in the plot as each run will be different due to randomness in the train/test split and the weight initialization of the network). Having said that, the learning rate, batch size, and number of epochs we chose should not produce much except horizontal lines above. Let's try resetting the model and training for a lot of epochs. I turn the verbosity way down this time so we can train for a few thousand epochs without filling up the notebook with logs.
End of explanation
"""
mdlout_binary = model_binary.fit(
    Xtrain_scld, Ytrain,
    validation_data=(Xtest_scld, Ytest),
    batch_size=105, nb_epoch=8000, verbose=0)
plot_acc(mdlout_binary.history)
print('final training accuracy: ', mdlout_binary.history['acc'][-1])
print('final test accuracy: ', mdlout_binary.history['val_acc'][-1])
"""
Explanation: That's more like it. With 8000 epochs of training and 1 weight update per epoch we see the accuracy increasing and the loss function decreasing. However, we know this problem is linearly separable and so with enough training we know this network should achieve perfect classification. Let's train for another 8000 epochs and see what happens. This time we won't reset the weights so the training should be cumulative.
End of explanation
"""
model_binary.set_weights(weights_initial)
mdlout_binary = model_binary.fit(
    Xtrain_scld, Ytrain,
    validation_data=(Xtest_scld, Ytest),
    batch_size=1, nb_epoch=80, verbose=0)
plot_acc(mdlout_binary.history)
print('final training accuracy: ', mdlout_binary.history['acc'][-1])
print('final test accuracy: ', mdlout_binary.history['val_acc'][-1])
"""
Explanation: Depending on your luck with the random number generator, the final training and test accuracy should both be close to 1.0 now. However, this has been a relatively time-consuming training adventure for the very small amount of data we have. Let's see what happens when we update the weights more often by decreasing the batch size. Just so we can sample the extremes, let's set the batch size to 1 so that we are making a weight update after every sample. Note that this is approximately a factor of 100 more weight updates per epoch than we were doing before.
End of explanation
"""
optimizer = keras.optimizers.SGD(lr=1.0e-3)
loss = 'binary_crossentropy'
metrics = ['accuracy']
model_binary.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model_binary.set_weights(weights_initial)
mdlout_binary = model_binary.fit(
    Xtrain_scld, Ytrain,
    validation_data=(Xtest_scld, Ytest),
    batch_size=1, nb_epoch=80, verbose=0)
plot_acc(mdlout_binary.history)
print('final training accuracy: ', mdlout_binary.history['acc'][-1])
print('final test accuracy: ', mdlout_binary.history['val_acc'][-1])
"""
Explanation: It looks like setting the batch size to 1 allowed us to train the model much more quickly. With a training sample size of 105, a batch size of 1, and 80 epochs of training we've made 105 * 80 = 8400 weight updates. However, we could probably do better by increasing the learning rate. Let's try that and see what happens.
End of explanation
"""
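The experiment above can be reproduced without Keras in a few lines of NumPy: plain logistic regression (4 weights plus a bias, the same 5 parameters as the network above) trained by stochastic gradient descent with a batch size of 1 on a synthetic linearly separable problem. All names here are ours and this is only a sketch of the update rule, not Keras internals:

```python
import numpy as np

def sigmoid(z):
    # the logistic activation used by the Dense output layer
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
X[:, 0] += np.where(X[:, 0] > 0, 1.0, -1.0)  # add a margin so |x0| >= 1
y = (X[:, 0] > 0).astype(float)              # separable by the first feature

w, b, lr = np.zeros(4), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):        # batch_size=1: one update per sample
        p = sigmoid(X[i] @ w + b)
        grad = p - y[i]                      # d(binary cross-entropy)/d(logit)
        w -= lr * grad * X[i]
        b -= lr * grad

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)  # should end up close to 1.0 on this separable toy problem
```

The gradient of the binary cross-entropy loss with respect to the logit collapses to the simple residual p - y, which is why each SGD step is just a scaled copy of the input sample.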
yw-fang/readingnotes
machine-learning/Caicloud-book2017/tensorflow-note1.ipynb
apache-2.0
import tensorflow as tf
a = tf.constant([1.0,2.0], name = "a")
b = tf.constant([1.0,2.0], name = "b")
c = a + b
with tf.Session() as sess:
    sess.run(c)
    print(c)
"""
Explanation: Head First TensorFlow
Author: Yue-Wen FANG, Contact: fyuewen@gmail.com
Revision history: created in late August 2017, at New York University Shanghai and Kyoto University
In the beginning I read the book by Jiaxuan Li, but I soon realized it is not good for beginners, so I switched to this book by Caicloud.
1. Introduction to Deep Learning
2. Setting Up the TensorFlow Environment
Please refer to my notes in the JiaxuanLi-book2017 directory.
A test code for your environment:
End of explanation
"""
import tensorflow as tf
a = tf.constant([1.0,2.0], name = "a")
b = tf.constant([1.0,2.0], name = "b")
c = a + b
"""
Explanation: 3. Getting Started with TensorFlow
3.1 TensorFlow's Computation Model: the Computation Graph
The computation graph is the most fundamental concept in TensorFlow: every computation in TensorFlow is turned into a node of a computation graph. (In later paragraphs I often abbreviate TensorFlow as TF.)
3.1.1 The Concept of the Computation Graph
In TensorFlow, a tensor can be understood simply as a multidimensional array; Section 3.2 discusses tensors in more detail. In the author's view, if the Tensor in TensorFlow describes its data structure, then the Flow expresses its computation model: Flow captures how tensors are transformed into one another through computations. TensorFlow is a programming system that expresses computations as computation graphs; every computation is a node of a graph, and the edges between nodes describe the dependencies between computations.
3.1.2 Using the Computation Graph
A TensorFlow program generally has two phases:
1. Define all the computations in the computation graph.
Example:
End of explanation
"""
2. Execute the computations. This phase will be introduced in Section 3.3.
"""
Explanation: The example above defines two inputs, a and b, and a "sum" computation.
End of explanation
"""
不过原书对上述代码有这样的讨论:TF的计算图不仅仅可以用来隔离张量和计算,还提供了管理张量和计算的控制。 3.2 TensofrFlow 数据模型——张量 3.2.1 张量的概念 虽然这个小节叫做“张量的概念”,但其实这本书并未介绍什么是张量。张量的概念是数学中的,请参考数学书籍。不过本书告诉了我们,TF中的的数据和张量的关系:所有的TF数据都是以张量的方式表示的。从功能上看,张量可以被简单地理解为多维数组。例如零阶张量表示scalar,也就是一个数;第一阶张量为vector,也就是一个一维数组;第 n 阶张量理解为一个 n 为数组。 然而在TF中,张量的实现并不是直接采用数组的形式,它只是对TF中运算结果的引用。在张量中并没有真正保存数组,它保存的是如何得到这些数字的计算过程。 例如下面这个例子进行加法运算,最后运行出来的结果并不是加法的结果,而是对结果的一个引用: End of explanation """ import tensorflow as tf a = tf.constant([1.0, 2.0], name = 'a') b = tf.constant([2.0, 3.0], name = 'b') #使用张量记录中间结果并使用张量进行计算 result = a + b # 直接用向量计算,可读性相对较差 result = tf.constant([1.0, 2.0], name = 'a') + tf.constant([2.0, 3.0], name = 'b') """ Explanation: 从上可以看出 TF 中的张量和 Numpy 中的数组是不同的,TF中计算的结果,并非数字结果,而是一个张量的结构:即 name、shape、和 type 3.2.2 张量的使用 相对计算模型,数据模型相对简单。张量使用主要分两大类。 第一类用途是对中间计算结果的引用。当一个计算包含很多中间结果时,使用张量可以提高可读性。 例如 End of explanation """ import tensorflow as tf a = tf.constant([1.0, 2.0], name = 'a') sess = tf.Session() sess.run(a) sess.close() """ Explanation: 用张量进行计算的好处在深度学习中会尤其体现出来,因为神经网络层次越多,所需要的计算就越复杂。直接的数组计算会使得代码可读性很差。 这样做的好处还体现在一些其他方面,例如在卷积神经网络中,卷阶层和池化层可能涉及张量变维,通过result.get_shape 函数来获取结果张量的维度信息可以免去人工计算的麻烦。 第二类用途是当计算图构造完成之后,张量可以用力获得计算结果,也就是得到真实的数字。虽然张量本身没有存储具体数字,不过通过session可以得到具体的数字。下面3.3节会进行具体介绍。 3.3 TensorFlow 运行模型——会话 本节主要介绍利用session 会话来执行定义好的运行。session拥有并管理TF程序运行时的所有资源。当素有计算完成之后需要关闭会话才能回收资源,否则可能会造成资源泄露问题。 TF 中使用会话的模式一般有两种,第一种模式需要明确调用会话生成函数和关闭会话函数,例如 End of explanation """ import tensorflow as tf a = tf.constant([1.0, 2.0], name = 'a') with tf.Session() as sess: sess.run(a) """ Explanation: 使用上述模式时,计算完成后必须关闭session。为了使用方便,可以通过第二种模式,即可利用 Python 上下文管理器来使用session: End of explanation """
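The idea that a TF tensor is a reference to a computation rather than a stored value can be illustrated with a minimal, TensorFlow-free sketch (all names below are illustrative, not TensorFlow's API):

```python
class Node:
    """A tiny deferred-computation node: building it stores *how* to
    compute a value; calling run() actually computes it, the way
    evaluating a tensor inside a session does."""
    def __init__(self, fn, name):
        self.fn = fn
        self.name = name

    def run(self):
        return self.fn()

def constant(value, name):
    return Node(lambda: value, name)

def add(x, y, name):
    # the addition is described here, but not performed yet
    return Node(lambda: [u + v for u, v in zip(x.run(), y.run())], name)

a = constant([1.0, 2.0], name='a')
b = constant([2.0, 3.0], name='b')
c = add(a, b, name='add')

print(c.name)   # only a reference so far: prints 'add'
print(c.run())  # evaluation produces the numbers: [3.0, 5.0]
```

Constructing `c` is analogous to building the graph; calling `run()` is analogous to `sess.run(c)`.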
IanHawke/maths-with-python
09-exceptions-testing.ipynb
mit
from __future__ import division def divide(numerator, denominator): """ Divide two numbers. Parameters ---------- numerator: float numerator denominator: float denominator Returns ------- fraction: float numerator / denominator """ return numerator / denominator print(divide(4.0, 5.0)) """ Explanation: Exceptions and Testing Things go wrong when programming all the time. Some of these "problems" are errors that stop the program from making sense. Others are problems that stop the program from working in specific, special cases. These "problems" may be real, or we may want to treat them as special cases that don't stop the program from running. These special cases can be dealt with using exceptions. Exceptions Let's define a function that divides two numbers. End of explanation """ print(divide(4.0, 0.0)) """ Explanation: But what happens if we try something really stupid? End of explanation """ denominators = [1.0, 0.0, 3.0, 5.0] for denominator in denominators: print(divide(4.0, denominator)) """ Explanation: So, the code works fine until we pass in input that we shouldn't. When we do, this causes the code to stop. To show how this can be a problem, consider the loop: End of explanation """ try: print(divide(4.0, 0.0)) except ZeroDivisionError: print("Dividing by zero is a silly thing to do!") denominators = [1.0, 0.0, 3.0, 5.0] for denominator in denominators: try: print(divide(4.0, denominator)) except ZeroDivisionError: print("Dividing by zero is a silly thing to do!") """ Explanation: There are three sensible results, but we only get the first. There are many more complex, real cases where it's not obvious that we're doing something wrong ahead of time. In this case, we want to be able to try running the code and catch errors without stopping the code. 
This can be done in Python:
End of explanation
"""

try:
    print(divide(4.0, 0.0))
except ZeroDivisionError:
    print("Dividing by zero is a silly thing to do!")

denominators = [1.0, 0.0, 3.0, 5.0]
for denominator in denominators:
    try:
        print(divide(4.0, denominator))
    except ZeroDivisionError:
        print("Dividing by zero is a silly thing to do!")

"""
Explanation: The idea here is given by the names. Python will try to execute the code inside the try block. This is just like an if or a for block: each command that is indented in that block will be executed in order. If, and only if, an error arises then the except block will be checked. If the error that is produced matches the one listed then, instead of stopping, the code inside the except block will be run.
To show how this works with different errors, consider a different silly error:
End of explanation
"""

try:
    print(divide(4.0, "zero"))
except ZeroDivisionError:
    print("Dividing by zero is a silly thing to do!")

"""
Explanation: We see that, as it makes no sense to divide by a string, we get a TypeError instead of a ZeroDivisionError. We could catch both errors:
End of explanation
"""

try:
    print(divide(4.0, "zero"))
except ZeroDivisionError:
    print("Dividing by zero is a silly thing to do!")
except TypeError:
    print("Dividing by a string is a silly thing to do!")

"""
Explanation: We could catch any error:
End of explanation
"""

try:
    print(divide(4.0, "zero"))
except:
    print("Some error occurred")

"""
Explanation: This doesn't give us much information, and may lose information that we need in order to handle the error.
We can capture the exception in a variable, and then use that variable:
End of explanation
"""

try:
    print(divide(4.0, "zero"))
except (ZeroDivisionError, TypeError) as exception:
    print("Some error occurred: {}".format(exception))

"""
Explanation: Here we have caught two possible types of error within the tuple (which must, in this case, have parentheses) and captured the specific error in the variable exception. This variable can then be used: here we just print it out.
Normally best practice is to be as specific as possible about the error you are trying to catch.
Extending the logic
Sometimes you may want to perform an action only if an error did not occur. For example, let's suppose we wanted to store the result of dividing 4 by a divisor, and also store the divisor, but only if the divisor is valid. One way of doing this would be the following:
End of explanation
"""

denominators = [1.0, 0.0, 3.0, "zero", 5.0]
results = []
divisors = []
for denominator in denominators:
    try:
        result = divide(4.0, denominator)
    except (ZeroDivisionError, TypeError) as exception:
        print("Error of type {} for denominator {}".format(exception, denominator))
    else:
        results.append(result)
        divisors.append(denominator)
print(results)
print(divisors)

"""
Explanation: The statements in the else block are only run if the try block succeeds. If it doesn't - if the statements in the try block raise an exception - then the statements in the else block are not run.
Exceptions in your own code
Sometimes you don't want to wait for the code to break at a low level, but instead stop when you know things are going to go wrong. This is usually because you can be more informative about what's going wrong.
Here's a slightly artificial example: End of explanation """ def divide_sum(numerator, denominator1, denominator2): """ Divide a number by a sum. Parameters ---------- numerator: float numerator denominator1: float Part of the denominator denominator2: float Part of the denominator Returns ------- fraction: float numerator / (denominator1 + denominator2) """ if (denominator1 + denominator2) == 0: raise ZeroDivisionError("The sum of denominator1 and denominator2 is zero!") return numerator / (denominator1 + denominator2) divide_sum(1, 1, -1) """ Explanation: It should be obvious to the code that this is going to go wrong. Rather than letting the code hit the ZeroDivisionError exception automatically, we can raise it ourselves, with a more meaningful error message: End of explanation """ from math import sqrt def real_quadratic_roots(a, b, c): """ Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist. Parameters ---------- a : float Coefficient of x^2 b : float Coefficient of x^1 c : float Coefficient of x^0 Returns ------- roots : tuple The roots Raises ------ NotImplementedError If the roots are not real. """ discriminant = b**2 - 4.0*a*c if discriminant < 0.0: raise NotImplementedError("The discriminant is {} < 0. " "No real roots exist.".format(discriminant)) x_plus = (-b + sqrt(discriminant)) / (2.0*a) x_minus = (-b - sqrt(discriminant)) / (2.0*a) return x_plus, x_minus print(real_quadratic_roots(1.0, 5.0, 6.0)) real_quadratic_roots(1.0, 1.0, 5.0) """ Explanation: There are a large number of standard exceptions in Python, and most of the time you should use one of those, combined with a meaningful error message. One is particularly useful: NotImplementedError. This exception is used when the behaviour the code is about to attempt makes no sense, is not defined, or similar. For example, consider computing the roots of the quadratic equation, but restricting to only real solutions. 
Using the standard formula
$$ x_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$
we know that this only makes sense if $b^2 \ge 4ac$. We put this in code as:
End of explanation
"""

from math import sqrt

def real_quadratic_roots(a, b, c):
    """
    Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.

    Parameters
    ----------

    a : float
        Coefficient of x^2
    b : float
        Coefficient of x^1
    c : float
        Coefficient of x^0

    Returns
    -------

    roots : tuple
        The roots

    Raises
    ------

    NotImplementedError
        If the roots are not real.
    """

    discriminant = b**2 - 4.0*a*c
    if discriminant < 0.0:
        raise NotImplementedError("The discriminant is {} < 0. "
                                  "No real roots exist.".format(discriminant))

    x_plus = (-b + sqrt(discriminant)) / (2.0*a)
    x_minus = (-b - sqrt(discriminant)) / (2.0*a)

    return x_plus, x_minus

print(real_quadratic_roots(1.0, 5.0, 6.0))

real_quadratic_roots(1.0, 1.0, 5.0)

"""
Explanation: Testing
How do we know if our code is working correctly? It is not enough that the code runs and returns some value: as seen above, there may be times where it makes sense to stop the code even when it is correct, as it is being used incorrectly. We need to test the code to check that it works.
Unit testing is the idea of writing many small tests that check if simple cases are behaving correctly. Rather than trying to prove that the code is correct in all cases (which could be very hard), we check that it is correct in a number of tightly controlled cases (which should be more straightforward). If we later find a problem with the code, we add a test to cover that case.
Consider a function solving for the real roots of the quadratic equation again. This time, if there are no real roots we shall return None (to say there are no roots) instead of raising an exception.
End of explanation
"""

from math import sqrt

def real_quadratic_roots(a, b, c):
    """
    Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.

    Parameters
    ----------

    a : float
        Coefficient of x^2
    b : float
        Coefficient of x^1
    c : float
        Coefficient of x^0

    Returns
    -------

    roots : tuple or None
        The roots
    """

    discriminant = b**2 - 4.0*a*c
    if discriminant < 0.0:
        return None

    x_plus = (-b + sqrt(discriminant)) / (2.0*a)
    x_minus = (-b + sqrt(discriminant)) / (2.0*a)

    return x_plus, x_minus

"""
Explanation: First we check what happens if there are imaginary roots, using $x^2 + 1 = 0$:
End of explanation
"""

print(real_quadratic_roots(1, 0, 1))

"""
Explanation: As we wanted, it has returned None. We also check what happens if the roots are zero, using $x^2 = 0$:
End of explanation
"""

print(real_quadratic_roots(1, 0, 0))

"""
Explanation:
We also check what happens if the roots are real, using $x^2 - 1 = 0$ which has roots $\pm 1$: End of explanation """ from math import sqrt def real_quadratic_roots(a, b, c): """ Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist. Parameters ---------- a : float Coefficient of x^2 b : float Coefficient of x^1 c : float Coefficient of x^0 Returns ------- roots : tuple or None The roots """ discriminant = b**2 - 4.0*a*c if discriminant < 0.0: return None x_plus = (-b + sqrt(discriminant)) / (2.0*a) x_minus = (-b - sqrt(discriminant)) / (2.0*a) return x_plus, x_minus """ Explanation: Something has gone wrong. Looking at the code, we see that the x_minus line has been copied and pasted from the x_plus line, without changing the sign correctly. So we fix that error: End of explanation """ print(real_quadratic_roots(1, 0, 1)) print(real_quadratic_roots(1, 0, 0)) print(real_quadratic_roots(1, 0, -1)) """ Explanation: We have changed the code, so now have to re-run all our tests, in case our change broke something else: End of explanation """ print(real_quadratic_roots(0, 1, 1)) """ Explanation: As a final test, we check what happens if the equation degenerates to a linear equation where $a=0$, using $x + 1 = 0$ with solution $-1$: End of explanation """ from math import sqrt def real_quadratic_roots(a, b, c): """ Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist. Parameters ---------- a : float Coefficient of x^2 b : float Coefficient of x^1 c : float Coefficient of x^0 Returns ------- roots : tuple or float or None The root(s) (two if a genuine quadratic, one if linear, None otherwise) Raises ------ NotImplementedError If the equation has trivial a and b coefficients, so isn't solvable. 
""" discriminant = b**2 - 4.0*a*c if discriminant < 0.0: return None if a == 0: if b == 0: raise NotImplementedError("Cannot solve quadratic with both a" " and b coefficients equal to 0.") else: return -c / b x_plus = (-b + sqrt(discriminant)) / (2.0*a) x_minus = (-b - sqrt(discriminant)) / (2.0*a) return x_plus, x_minus """ Explanation: In this case we get an exception, which we don't want. We fix this problem: End of explanation """ print(real_quadratic_roots(1, 0, 1)) print(real_quadratic_roots(1, 0, 0)) print(real_quadratic_roots(1, 0, -1)) print(real_quadratic_roots(0, 1, 1)) """ Explanation: And we now must re-run all our tests again, as the code has changed once more: End of explanation """ from numpy.testing import assert_equal, assert_allclose def test_real_distinct(): """ Test that the roots of x^2 - 1 = 0 are \pm 1. """ roots = (1.0, -1.0) assert_equal(real_quadratic_roots(1, 0, -1), roots, err_msg="Testing x^2-1=0; roots should be 1 and -1.") test_real_distinct() """ Explanation: Formalizing tests This small set of tests covers most of the cases we are concerned with. However, by this point it's getting hard to remember what each line is actually testing, and what the correct value is meant to be. To formalize this, we write each test as a small function that contains this information for us. Let's start with the $x^2 - 1 = 0$ case where the roots are $\pm 1$: End of explanation """ def test_should_fail(): """ Comparing the roots of x^2 - 1 = 0 to (1, 1), which should fail. """ roots = (1.0, 1.0) assert_equal(real_quadratic_roots(1, 0, -1), roots, err_msg="Testing x^2-1=0; roots should be 1 and 1." " So this test should fail") test_should_fail() """ Explanation: What this function does is checks that the results of the function call match the expected value, here stored in roots. 
If it didn't match the expected value, it would raise an exception:
End of explanation
"""

def test_should_fail():
    """
    Comparing the roots of x^2 - 1 = 0 to (1, 1), which should fail.
    """
    roots = (1.0, 1.0)
    assert_equal(real_quadratic_roots(1, 0, -1), roots,
                 err_msg="Testing x^2-1=0; roots should be 1 and 1."
                         " So this test should fail")

test_should_fail()

"""
Explanation: Testing that one floating point number equals another can be dangerous. Consider $x^2 - 2 x + (1 - 10^{-10}) = 0$ with roots $1 \pm 10^{-5}$:
End of explanation
"""

from math import sqrt

def test_real_distinct_irrational():
    """
    Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \pm 1e-5.
    """
    roots = (1 + 1e-5, 1 - 1e-5)
    assert_equal(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,
                 err_msg="Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.")

test_real_distinct_irrational()

"""
Explanation: We see that the solutions match to the first 14 or so digits, but this isn't enough for them to be exactly the same. In this case, and in most cases using floating point numbers, we want the result to be "close enough": to match the expected precision. There is an assertion for this as well:
End of explanation
"""

from math import sqrt

def test_real_distinct_irrational():
    """
    Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \pm 1e-5.
    """
    roots = (1 + 1e-5, 1 - 1e-5)
    assert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,
                    err_msg="Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.")

test_real_distinct_irrational()

"""
Explanation: The assert_allclose statement takes options controlling the precision of our test. We can now write out all our tests:
End of explanation
"""

from math import sqrt
from numpy.testing import assert_equal, assert_allclose

def test_no_roots():
    """
    Test that the roots of x^2 + 1 = 0 are not real.
    """
    roots = None
    assert_equal(real_quadratic_roots(1, 0, 1), roots,
                 err_msg="Testing x^2+1=0; no real roots.")

def test_zero_roots():
    """
    Test that the roots of x^2 = 0 are both zero.
    """
    roots = (0, 0)
    assert_equal(real_quadratic_roots(1, 0, 0), roots,
                 err_msg="Testing x^2=0; should both be zero.")

def test_real_distinct():
    """
    Test that the roots of x^2 - 1 = 0 are \pm 1.
""" roots = (1.0, -1.0) assert_equal(real_quadratic_roots(1, 0, -1), roots, err_msg="Testing x^2-1=0; roots should be 1 and -1.") def test_real_distinct_irrational(): """ Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \pm 1e-5. """ roots = (1 + 1e-5, 1 - 1e-5) assert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots, err_msg="Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.") def test_real_linear_degeneracy(): """ Test that the root of x + 1 = 0 is -1. """ root = -1.0 assert_equal(real_quadratic_roots(0, 1, 1), root, err_msg="Testing x+1=0; root should be -1.") test_no_roots() test_zero_roots() test_real_distinct() test_real_distinct_irrational() test_real_linear_degeneracy() """ Explanation: The assert_allclose statement takes options controlling the precision of our test. We can now write out all our tests: End of explanation """
PySCeS/PyscesToolbox
documentation/notebooks/RateChar.ipynb
bsd-3-clause
mod = pysces.model('lin4_fb.psc')
rc = psctb.RateChar(mod)

"""
Explanation: RateChar
RateChar is a tool for performing generalised supply-demand analysis (GSDA) [5,6]. This entails the generation of the data needed to draw rate characteristic plots for all the variable species of a metabolic model through parameter scans, and the subsequent visualisation of these data in the form of ScanFig objects.
Features
Performs parameter scans for any variable species of a metabolic model
Stores results in a structure similar to Data2D.
Saving of raw parameter scan data, together with metabolic control analysis results to disk.
Saving of RateChar sessions to disk for later use.
Generates rate characteristic plots from parameter scans (using ScanFig).
Can perform parameter scans of any variable species with outputs for relevant response, partial response, elasticity and control coefficients (with data stored as Data2D objects).
Usage and Feature Walkthrough
Workflow
Performing GSDA with RateChar usually requires taking the following steps:
Instantiation of RateChar object (optionally specifying default settings).
Performing a configurable parameter scan of any combination of variable species (or loading previously saved results).
Accessing scan results through RateCharData objects corresponding to the names of the scanned species that can be found as attributes of the instantiated RateChar object.
Plotting results of a particular species using the plot method of the RateCharData object corresponding to that species.
Further analysis using the do_mca_scan method.
Session/Result saving if required.
Further Analysis
.. note:: Parameter scans are performed for a range of concentration values between two set values. By default the minimum and maximum scan range values are calculated relative to the steady state concentration of the species for which a scan is performed, respectively using a division and a multiplication factor.
Furthermore the number of points for which a scan is performed may also be specified. Details of how to access these options will be discussed below.
Object Instantiation
Like most tools provided in PySCeSToolbox, instantiation of a RateChar object requires a pysces model object (PysMod) as an argument. A RateChar session will typically be initiated as follows (here we will use the included lin4_fb.psc model):
End of explanation
"""

rc = psctb.RateChar(mod,min_concrange_factor=100,
                    max_concrange_factor=100,
                    scan_points=255,
                    auto_load=False)

"""
Explanation: Default parameter scan settings relating to a specific RateChar session can also be specified during instantiation:
End of explanation
"""

mod.species

rc.do_ratechar()

"""
Explanation: min_concrange_factor : The steady state division factor for calculating scan range minimums (default: 100).
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: 100).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: 256).
auto_load : If True RateChar will try to load saved data from a previous session during instantiation. Saved data is unaffected by the above options and is only subject to the settings specified during the session where it was generated. (default: False).
The settings specified with these optional arguments take effect when the corresponding arguments are not specified during a parameter scan.
Parameter Scan
After object instantiation, parameter scans may be performed for any of the variable species using the do_ratechar method. By default do_ratechar will perform parameter scans for all variable metabolites using the settings specified during instantiation. For saving/loading see Saving/Loading Sessions below.
End of explanation """ rc.do_ratechar(fixed=['S1','S3'], scan_min=0.02, max_concrange_factor=110, scan_points=200) """ Explanation: Various optional arguments, similar to those used during object instantiation, can be used to override the default settings and customise any parameter scan: fixed : A string or list of strings specifying the species for which to perform a parameter scan. The string 'all' specifies that all variable species should be scanned. (default: all) scan_min : The minimum value of the scan range, overrides min_concrange_factor (default: None). scan_max : The maximum value of the scan range, overrides max_concrange_factor (default: None). min_concrange_factor : The steady state division factor for calculating scan range minimums (default: None) max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: None). scan_points : The number of concentration sample points that will be taken during parameter scans (default: None). solver : An integer value that specifies which solver to use (0:Hybrd,1:NLEQ,2:FINTSLV). (default: 0). .. note:: For details on different solvers see the PySCeS documentation: For example in a scenario where we only wanted to perform parameter scans of 200 points for the metabolites S1 and S3 starting at a value of 0.02 and ending at a value 110 times their respective steady-state values the method would be called as follows: End of explanation """ # Each key represents a field through which results can be accessed sorted(rc.S3.scan_results.keys()) """ Explanation: Accessing Results Parameter Scan Results Parameter scan results for any particular species are saved as an attribute of the RateChar object under the name of that species. 
These RateCharData objects are similar to Data2D objects with parameter scan results being accessible through a scan_results DotDict: End of explanation """ # Single value results # scan_min value rc.S3.scan_results.scan_min # fixed metabolite name rc.S3.scan_results.fixed # 1-dimensional ndarray results (only every 10th value of 200 value arrays) # scan_range values rc.S3.scan_results.scan_range[::10] # J_R3 values for scan_range rc.S3.scan_results.J_R3[::10] # total_supply values for scan_range rc.S3.scan_results.total_supply[::10] # Note that J_R3 and total_supply are equal in this case, because S3 # only has a single supply reaction """ Explanation: .. note:: The DotDict data structure is essentially a dictionary with additional functionality for displaying results in table form (when appropriate) and for accessing data using dot notation in addition the normal dictionary bracket notation. In the above dictionary-like structure each field can represent different types of data, the most simple of which is a single value, e.g., scan_min and fixed, or a 1-dimensional numpy ndarray which represent input (scan_range) or output (J_R3, J_R4, total_supply): End of explanation """ # Metabolic Control Analysis coefficient line data # Names of elasticity coefficients related to the 'S3' parameter scan rc.S3.scan_results.ec_names # The x, y coordinates for two points that will be used to plot a # visual representation of ecR3_S3 rc.S3.scan_results.ecR3_S3 # The x,y coordinates for two points that will be used to plot a # visual representation of ecR4_S3 rc.S3.scan_results.ecR4_S3 # The ecR3_S3 and ecR4_S3 data collected into a single array # (horizontally stacked). rc.S3.scan_results.ec_data """ Explanation: Finally data needed to draw lines relating to metabolic control analysis coefficients are also included in scan_results. 
Data is supplied in 3 different forms: Lists names of the coefficients (under ec_names, prc_names, etc.), 2-dimensional arrays with exactly 4 values (representing 2 sets of x,y coordinates) that will be used to plot coefficient lines, and 2-dimensional array that collects coefficient line data for each coefficient type into single arrays (under ec_data, prc_names, etc.). End of explanation """ # Metabolic control analysis coefficient results rc.S3.mca_results """ Explanation: Metabolic Control Analysis Results The in addition to being able to access the data that will be used to draw rate characteristic plots, the user also has access to the values of the metabolic control analysis coefficient values at the steady state of any particular species via the mca_results field. This field represents a DotDict dictionary-like object (like scan_results), however as each key maps to exactly one result, the data can be displayed as a table (see Basic Usage): End of explanation """ # Control coefficient ccJR3_R1 value rc.S3.mca_results.ccJR3_R1 """ Explanation: Naturally, coefficients can also be accessed individually: End of explanation """ # Rate characteristic plot for 'S3'. S3_rate_char_plot = rc.S3.plot() """ Explanation: Plotting Results One of the strengths of generalised supply-demand analysis is that it provides an intuitive visual framework for inspecting results through the used of rate characteristic plots. Naturally this is therefore the main focus of RateChar. Parameter scan results for any particular species can be visualised as a ScanFig object through the plot method: End of explanation """ # Display plot via `interact` and enable certain lines by clicking category buttons. 
# The two method calls below are equivalent to clicking the 'J_R3'
# and 'Partial Response Coefficients' buttons:
# S3_rate_char_plot.toggle_category('J_R3',True)
# S3_rate_char_plot.toggle_category('Partial Response Coefficients',True)

S3_rate_char_plot.interact()
#remove_next
# To avoid duplication - do not run
#ex display(Image(path.join(notebook_dir,'images','ratechar_1.png'))) #ex

"""
Explanation: Modifying the status of individual lines is still supported, but has to take place via the toggle_line method. As an example, prcJR3_S3_R4 can be disabled as follows:
End of explanation
"""

S3_rate_char_plot.toggle_line('prcJR3_S3_R4', False)
S3_rate_char_plot.show()

"""
Explanation: .. note:: For more details on saving see the sections Saving and Default Directories and ScanFig under Basic Usage.
Saving
Saving/Loading Sessions
RateChar sessions can be saved for later use. This is especially useful when working with large data sets that take some time to generate.
Data sets can be saved to any arbitrary location by supplying a path: End of explanation """ rc.save_session() # to "~/Pysces/lin4_fb/ratechar/save_data.npz" """ Explanation: When no path is supplied the dataset will be saved to the default directory. (Which should be "~/Pysces/lin4_fb/ratechar/save_data.npz" in this case. End of explanation """ rc.load_session(save_file) # OR rc.load_session() # from "~/Pysces/lin4_fb/ratechar/save_data.npz" """ Explanation: Similarly results may be loaded using the load_session method, either with or without a specified path: End of explanation """ # This points to a subdirectory under the Pysces directory save_folder = '~/Pysces/lin4_fb/' # Correct path depending on platform - necessary for platform independent scripts if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8): save_folder = psctb.utils.misc.unix_to_windows_path(save_folder) else: save_folder = path.expanduser(save_folder) rc.save_results(save_folder) """ Explanation: Saving Results Results may also be exported in csv format either to a specified location or to the default directory. Unlike saving of sessions results are spread over multiple files, so here an existing folder must be specified: End of explanation """ # Otherwise results will be saved to the default directory rc.save_results(save_folder) # to sub folders in "~/Pysces/lin4_fb/ratechar/ """ Explanation: A subdirectory will be created for each metabolite with the files ec_results_N, rc_results_N, prc_results_N, flux_results_N and mca_summary_N (where N is a number starting at "0" which increments after each save operation to prevent overwriting files). End of explanation """
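The incrementing _N naming scheme described above is a generic pattern for avoiding overwrites; a stdlib-only sketch of the idea (not psctb's actual implementation) might look like this:

```python
import os

def next_numbered_name(folder, stem, ext='.csv'):
    """Return the first '<stem>_<N><ext>' path that does not yet exist in
    folder, so that repeated saves never overwrite earlier results."""
    n = 0
    while os.path.exists(os.path.join(folder, '%s_%d%s' % (stem, n, ext))):
        n += 1
    return os.path.join(folder, '%s_%d%s' % (stem, n, ext))

# Saving twice would produce ec_results_0.csv and then ec_results_1.csv.
```

The design choice here is that the counter is derived from what is already on disk, so the numbering survives restarting the session.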
ecell/ecell4-notebooks
en/tests/Homodimerization_and_Annihilation.ipynb
gpl-2.0
%matplotlib inline
from ecell4.prelude import *

"""
Explanation: Homodimerization and Annihilation
This is for an integrated test of E-Cell4. Here, we test homodimerization and annihilation.
End of explanation
"""

D = 1
radius = 0.005
N_A = 60
ka_factor = 0.1  # 0.1 is for reaction-limited
N = 30  # a number of samples

"""
Explanation: Parameters are given as follows. D, radius, N_A, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A, and a ratio between an intrinsic association rate and a collision rate defined as ka and kD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.
End of explanation
"""

import numpy
kD = 4 * numpy.pi * (radius * 2) * (D * 2)
ka = kD * ka_factor
kon = ka * kD / (ka + kD)

"""
Explanation: Calculating optimal reaction rates. ka is the intrinsic and kon the effective reaction rate. Be careful about the calculation of an effective rate for homo-dimerization: the intrinsic rate must be halved in the formula. This kind of parameter modification is not done automatically.
End of explanation
"""

y0 = {'A': N_A}
duration = 3
opt_kwargs = {'xlim': (0, duration), 'ylim': (0, N_A)}

"""
Explanation: Start with A molecules, and simulate 3 seconds.
End of explanation
"""

with species_attributes():
    A | {'radius': radius, 'D': D}

with reaction_rules():
    A + A > ~A2 | kon * 0.5

m = get_model()

"""
Explanation: Make a model with an effective rate.
End of explanation """ ret1 = run_simulation(duration, y0=y0, model=m) ret1.plot(**opt_kwargs) """ Explanation: Save the result obtained with ode as ret1, and plot it: End of explanation """ ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver='gillespie', repeat=N) ret2.plot('o', ret1, '-', **opt_kwargs) """ Explanation: Simulating with gillespie: End of explanation """ ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('meso', Integer3(4, 4, 4)), repeat=N) ret2.plot('o', ret1, '-', **opt_kwargs) """ Explanation: Simulating with meso: End of explanation """ with species_attributes(): A | {'radius': radius, 'D': D} with reaction_rules(): A + A > ~A2 | ka m = get_model() """ Explanation: Make a model with an intrinsic rate. This model is for microscopic (particle) simulation algorithms. End of explanation """ ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N) ret2.plot('o', ret1, '-', **opt_kwargs) """ Explanation: Simulating with spatiocyte: End of explanation """ ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('egfrd', Integer3(4, 4, 4)), repeat=N) ret2.plot('o', ret1, '-', **opt_kwargs) """ Explanation: Simulating with egfrd: End of explanation """
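The relation between the intrinsic and effective rates used above is plain arithmetic and can be checked without E-Cell4; a sketch with the same parameter values (`math` stands in for `numpy` so it is self-contained):

```python
import math

D, radius, ka_factor = 1, 0.005, 0.1

kD = 4 * math.pi * (radius * 2) * (D * 2)  # diffusion-limited (collision) rate
ka = kD * ka_factor                        # intrinsic association rate
kon = ka * kD / (ka + kD)                  # effective association rate

# The effective rate is always below both ka and kD, and in the
# reaction-limited regime (ka_factor = 0.1) it stays close to ka.
print(kD, ka, kon)
```

Note that kon = ka / (1 + ka_factor), which makes clear why the effective rate barely differs from ka when ka_factor is small.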
parrt/msan501
notes/files.ipynb
mit
f = open("data/prices.txt") # or just "prices.txt" print(type(f)) print(f) f.close() print(f.closed) """ Explanation: Loading files The goal of this lecture-lab is to learn how to extract data from files on your laptop's disk. We'll load words from a text file and numbers from data files. Along the way, we'll learn more about filenames and paths to files. The first two elements of our generic analytics program template says to acquire data and then load it into a data structure: Acquire data, which means finding a suitable file or collecting data from the web and storing in a file Load data from disk and place into memory organized into data structures For now, we'll satisfy the first step by just downloading ready-made data files from the web by hand. In MSAN692 -- Data Acquisition, we'll learn all about how to pull data from the web programmatically. This lecture focuses on the second step in the analytics program template. As we go along, I'm going to repeatedly ask you to type in a bunch of these examples. It's critical that you learn the code patterns associated with loading data from files. Please type in your code without cutting and pasting. What are files? As we've discussed before, both the disk and RAM are forms of memory. RAM is much faster (but smaller) than the disk but RAM all disappears when the power goes out. Disks, on the other hand, are persistent. A file is simply a chunk of data on the disk identified by a filename. You use files all the time. For example, we can double-click on a text file or Excel file, which opens an application to display those files. We need to be able to write Python programs that read data from files just like, say, Excel does. Accessing data in RAM is very easy in a Python program, we simply refer to the various elements in a list using an index, such as names[i]. File data is less convenient to access because we have to explicitly load the file into working memory first. 
For example, we might want to load a list of names from a file into a names list. If a file is too big to fit into memory all at once, we have to process the data in chunks. For now, let's assume all files fit in memory. Even so, accessing files is a bit of a hassle because we must explicitly tell Python to open a file and (often) then close it when we're done. We also must distinguish between reading and writing from/to a file, which dictates the mode in which we open the file. We can open the file in read, write, or append mode. For this lab, we will only concern ourselves with the default case of "opening a file for reading." Here is how to open a file called foo.txt in read mode (the default) then immediately close that file: f = open('foo.txt') # open for read mode f.close() # ok, we're done Hmm...what kind of object is returned from open() and stored in f? Why do we have to close files? File descriptors When we open a file, Python gives us a "file object" that is really just a handle or descriptor that the operating system gives us. It's a unique identifier and how the operating system likes to identify a file that we work with. The file object is not the filename and is also not the file itself on the disk. It's really just a descriptor and a reference to the file. We will use a filename to get a file object using open() and use the file object to get the file contents. End of explanation """ with open("data/prices.txt") as f: contents = f.read() print(type(contents)) print(contents[0:10]) print(f.closed) """ Explanation: (Think of TextIOWrapper as file.) The close operation informs the operating system that you no longer need that resource. The operating system can only open so many files at once so you should close files when you're done using them. Later, when you are learning to write data to files, the close operation is also important. Closing a file flushes any data in memory buffers that needs to be written. 
From the Python documentation: "It is a common bug to write a program where you have the code to add all the data you want to a file, but the program does not end up creating a file. Usually this means you forgot to close the file." <img src="images/redbang.png" width="30" align="left"> To help avoid confusion, keep this analogy in mind. Your house contents (file) is different than your address (file name) and different than a piece of paper with the address written on it (file descriptor). More specifically: The filename is a string that identifies a file on the disk. It can be fully qualified or relative to the current working directory. The file object is not the filename and is also not the file itself on the disk. It's really just a descriptor and a reference to the file. The contents of the file is different than the filename and the file (descriptor) object that Python gives us. Python WITH statement More recent versions of Python provide an excellent mechanism to avoid forgetting the file close operation. The with statement is more general, but we will use it just for automatically closing files. Even if there is an exception inside the with statement that forces the program to terminate, the close operation occurs. Here is how End of explanation """ ! head -10 data/IntroIstanbul.txt """ Explanation: File names and paths You know what a file name is because you've created lots of files before. (BTW, another reminder not to use spaces in your file or directory names.) Paths are unique specifiers or locators for directories or files. A fully-qualified filename gives a description of the directories from the root of the file system, separated by /. The root of the file system is identified with / (forward slash) at the start of a pathname. You are probably used to seeing it as "Macintosh HD" but from a programming point of view, it's just /. On Windows, which we will not consider here, the root includes the drive specification and a backslash like C:\. 
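The earlier point about `close()` flushing buffered data is easiest to see in write mode, where `with` guarantees the flush even if an exception occurs; a small sketch (it writes a throwaway file under the system temp directory, a location chosen here purely for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'demo_write.txt')

# with guarantees close() -- and therefore a flush -- on exit
with open(path, 'w') as f:
    f.write('3.14\n')
    f.write('2.71\n')

# read it back to confirm the data actually reached the disk
with open(path) as f:
    lines = f.readlines()
print(lines)  # ['3.14\n', '2.71\n']
```

Had the writes been done with a bare `open()` and the program crashed before `close()`, some or all of that buffered data could be lost.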
Here's a useful diagram showing the components of a fully qualified pathname to a file called view.py: <img src="images/path-names.png" width="750"> As a shorthand, you can start a path with ~, which means "my home directory". On a Mac that's /Users/parrt or whatever your user ID is. On Linux, it's probably /home/parrt. The last element in a path is either a filename or a directory. For example to refer to the directory holding view.py in the above diagram, use path /Users/parrt/classes/msan501/images-parrt. Or, using the shortcut, the fully qualified path is ~/classes/msan501/images-parrt. Here's an example bash session that uses some fully qualified paths: bash $ ls /Users/parrt/classes/msan501/images-parrt/view.py /Users/parrt/classes/msan501/images-parrt/view.py $ cd /Users/parrt/classes/msan501/images-parrt
$ pwd /Users/parrt/classes/msan501/images-parrt $ cd ~/classes/msan501/images-parrt $ pwd /Users/parrt/classes/msan501/images-parrt Current working directory All programs run with the notion of a current working directory. So, if a program is running inside the directory ~/classes/msan501/images-parrt, then the program could refer to any data files sitting in that directory with just a file name--no path is required. For example, let's use the ls program to demonstrate the different kinds of paths. bash $ cd ~/classes/msan501/images-parrt $ ls view.py $ ls /Users/parrt/classes/msan501 images-parrt/ $ ls /Users/parrt/classes msan501/ Any path that does not start with ~ or / is called a relative pathname. For completeness, note that .. means the directory above the current working directory: bash $ cd ~/classes/msan501/images-parrt $ ls .. images-parrt/ $ ls ../.. msan501/ Sometimes you will see me use /tmp, which is a temporary directory or dumping ground. All files in that directory are usually erased when you reboot. Loading text files As we discussed early in the semester, files are just bits. It's how we interpret the bits that is meaningful. The bits could represent an image, a movie, an article, data, Python program text, whatever. Let's call any file containing characters a text file and anything else a binary file. Text files are usually 1 byte per character (8 bits) and have the notion of a line. A line is just a sequence of characters terminated with either \r\n (Windows) or \n (UNIX, Mac). A text file is usually then a sequence of lines. Download this sample text file, IntroIstanbul.txt so we have something to work with. You can save it in /tmp or whatever directory you are using for in class work. For the purposes of this discussion, I have data files in a subdirectory called data of this notes directory. The first 10 lines of the file look like: End of explanation """ ! od -c data/IntroIstanbul.txt | head -5 """ Explanation: You can ignore the "!" 
on the front as it is just telling this Jupyter notebook to run the terminal command that follows. If you want you can think of ! as the $ terminal prompt in this context. Now, let's examine the contents of the file in a raw fashion rather than with a text editor. The od command (octal dump) is useful for looking at the bytes of the file. Use option -c to see the contents as 1-byte characters: End of explanation """ with open('data/IntroIstanbul.txt') as f: contents = f.read() # read all content of the file print(contents[0:200]) # print just the first 200 characters """ Explanation: That "| head -5" pipes (the vertical bar "|" looks like a pipe) the output of the od command to the head program, which gives this the first five lines of the output. When we have a lot of output we can also pipe the output to the more program to paginate long output. bash $ od -c data/IntroIstanbul.txt | more ... The \n character you see represents the single character we know as the carriage return. The numbers on the left are the character offsets into the file (it looks like they are octal not decimal, btw; use -A d to get decimal addresses). Let's look at some common programming patterns dealing with text files. Pattern: Load all file contents into a string. The sequence of operation is open, load, close. End of explanation """ with open('data/IntroIstanbul.txt') as f: contents = f.read() # read all content of the file words = contents.split(' ') print(words[0:100]) # print first 100 words """ Explanation: Exercise Without cutting and pasting, type in that sequence and make sure you can print the contents of the file from Python. Instead of data, use whatever directory you saved that IntroIstanbul.txt in. Pattern: Load all words of file into a list. 
This pattern is just an extension of the previous where we split() on the space character to get a list: End of explanation """ with open('data/prices.txt') as f: prices = f.readlines() # get lines of file into a list prices[0:10] """ Explanation: Because we are splitting on the space character, newlines and multiple space characters in a row yield "words" that are not useful. We need to transform that list into a new list before it is useful. Exercise Using the filter programming pattern filters words for only those words greater than 1 character; place into another list called words2. Hint len(s) gets the length of string s. [solutions] Exercise Put all of this together by writing a function called getwords that takes filename as a parameter and returns the list of words greater than one character long. This is a combination of the "load all words of the file into a list" pattern and the previous exercise. [solutions] Loading all lines of a file Reading the contents of a file into a string is not always that useful. We typically want to deal with the words, as we just saw, or the lines of a text file. Natural language processing (NLP) would focus on using the words, but let's look at some data files, which typically structure files as lines of data. Each line represents an observation, data point, or record. We could split the text contents by \n to get the lines, but it is so common that Python provides functions to do that for us. To give us some data to play with, download prices.txt that has a list of prices, one price per line. Here's another very common programming pattern: Pattern: Read all of the lines of the file into a list. 
The sequence is open, read lines, close: End of explanation """ import numpy as np prices2 = np.array(prices, dtype=float) # convert to array of numbers print(type(prices2)) print(prices2[0:10]) from lolviz import * objviz(prices2) """ Explanation: Exercise Without cutting and pasting, type in that code and make sure you can read the lines of the file into a list. Exercise Use the strip() function on each element of the list so that you get: ['0.605', '0.600', '0.594', ...]. Use a list comprehension to map the prices to a new version of the prices list. [solutions] Converting list of strings to numpy array The numbers have the \n character on the end but that's not a problem because we can easily convert that using NumPy: End of explanation """ ! head -4 data/player-heights.csv """ Explanation: Exercise Add this conversion to the previous exercise and make sure you get an array as output. (I'm trying to give you repeated experience typing code that reads data from a file and processes it in some way.) [solutions] Loading CSV files Let's look at a more complicated data file. Download heights.csv, which starts out like this: End of explanation """ import numpy as np with open('data/player-heights.csv') as f: lines = f.readlines() lines = [line.strip() for line in lines] # remove \n on end lines[0:5] header = lines[0] data = lines[1:] # slice # print it back out print(header) for d in data[0:5]: print(d) """ Explanation: It is still a text file, but now we start to get the idea that text files can follow a particular format. In this case, we recognize it as a comma-separated value (CSV) file. It also has a header line that names the columns, which means we need to treat the first line differently than the remainder of the file. 
Pattern: Load a CSV file into a list of lists We already know how to open a file and get the lines, so let's do that and also separate the lines into the header and the data components: End of explanation """ import pandas as pd prices = pd.read_csv('data/prices.txt', header=None) prices.head(5) """ Explanation: Exercise As an exercise, type in that code and make sure you get the header and the first five lines of data printed out. Exercise Each row of the data is a string with two numbers in it. We need to convert that string into a list with two floating-point numbers using split(','). Combining all of those two-element lists into an overall list gives us the two-dimensional table we need, which is our next exercise. Write a function called getcsv(filename) that returns a list of row lists, where the first row is the header row. Strip off any \n char on the end of lines. The output should look like: python [['6.329999924', ' 6.079999924'], ['6.5', ' 6.579999924'], ['6.5', ' 6.25']] Use list comprehensions where you can. [solutions] Exercise import pandas as pd and then convert the data from the previous exercise into a data frame. Pandas doesn't automatically understand that the first row is the header so slice out data[1:] as the first argument to the pd.DataFrame() data frame constructor and then pass data[0] as the columns parameter. Print it out and you should see something like: Football height Basketball height 0 6.329999924 6.079999924 1 6.5 6.579999924 2 6.5 6.25 ... [solutions] Using Pandas to load CSV files Of course, loading CSV is something that data scientists need to do all of the time and so there is a simple function you can use from Pandas, another library you will probably become very familiar with: End of explanation """ data = pd.read_csv('data/player-heights.csv') data.head(5) """ Explanation: (header=None indicates that there are no column names in the first line of the file.) 
This even works for CSV files with header rows: End of explanation """ n = 5 with open('data/prices.txt') as f: for line in f: # for each line in the file if n>0: print(float(line)) # process the line in some way n -= 1 """ Explanation: We'll see this stuff again in data frames. Processing files line by line The previous mechanism for getting lines of text into memory works well except that it requires we load everything into memory all at once. That is pretty inefficient and limits the size of the data we can process to the amount of memory we have. Pattern: Read data line by line not all at once. We can use a for-each loop where the sequence of data is the file descriptor: python with open('data/prices.txt') as f: for line in f: # for each line in the file print(float(line)) # process the line in some way End of explanation """
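The line-by-line pattern pays off when computing summary statistics without holding the whole file in memory; a sketch that tallies a running sum and count as it goes (it writes its own tiny stand-in for prices.txt to the temp directory so it is self-contained):

```python
import os
import tempfile

# Create a small stand-in for data/prices.txt
path = os.path.join(tempfile.gettempdir(), 'prices_demo.txt')
with open(path, 'w') as f:
    f.write('0.605\n0.600\n0.594\n0.592\n')

total, n = 0.0, 0
with open(path) as f:
    for line in f:          # one line at a time; constant memory
        total += float(line)
        n += 1

mean = total / n
print(n, mean)
```

The same loop works unchanged on a file far larger than RAM, which is exactly what the "read all lines into a list" pattern cannot do.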
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_ml/td2a_correction_cl_reg_anomaly.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt from jyquickhelper import add_notebook_menu add_notebook_menu() """ Explanation: 2A.data - Classification, regression, anomalies - correction The Wine Quality Data Set contains 5000 wines described by their chemical characteristics and rated by an expert. Can we get close to the expert's ratings with a machine learning model? End of explanation """ from ensae_teaching_cs.data import wines_quality from pandas import read_csv df = read_csv(wines_quality(local=True, filename=True)) df.head() ax = df['quality'].hist(bins=16) ax.set_title("Distribution des notes"); """ Explanation: The data They can be retrieved from github...data_2a. End of explanation """ from sklearn.model_selection import train_test_split X = df.drop("quality", axis=1) y = df["quality"] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5) X_train.shape, X_test.shape from pandas import DataFrame def distribution(y_train, y_test): df_train = DataFrame(dict(color=y_train)) df_test = DataFrame(dict(color=y_test)) df_train['ctrain'] = 1 df_test['ctest'] = 1 h_train = df_train.groupby('color').count() h_test = df_test.groupby('color').count() merge = h_train.join(h_test, how='outer') merge["ratio"] = merge.ctest / merge.ctrain return merge distribution(y_train, y_test) ax = y_train.hist(bins=24, label="train", align="right") y_test.hist(bins=24, label="test", ax=ax, align="left") ax.set_title("Distribution des notes") ax.legend(); """ Explanation: There are few very bad or very good wines. We split into train / test sets, which will necessarily make those ratings hard to predict: a model more or less reproduces what it sees.
End of explanation """ X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.5) X_train.shape, X_test.shape ax = y_train.hist(bins=24, label="train", align="right") y_test.hist(bins=24, label="test", ax=ax, align="left") ax.set_title("Distribution des notes - stratifiée") ax.legend(); distribution(y_train, y_test) """ Explanation: With a bit of luck, the extreme ratings are present in both the training and test sets, but a single rating has little influence on a model. To ensure a better train / test split, we can make sure every rating is well represented on each side. We use the stratify parameter. End of explanation """ from sklearn.linear_model import LogisticRegression logreg = LogisticRegression() try: logreg.fit(X_train, y_train) except Exception as e: print(e) """ Explanation: The distribution of ratings across train / test is more uniform. First model End of explanation """ from sklearn.preprocessing import OneHotEncoder one = OneHotEncoder() one.fit(X_train[['color']]) tr = one.transform(X_test[["color"]]) tr """ Explanation: One column is not numeric. We use a OneHotEncoder. End of explanation """ tr.todense()[:5] """ Explanation: The matrix is sparse. End of explanation """ from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer numeric_features = [c for c in X_train if c != 'color'] pipe = Pipeline([ ("prep", ColumnTransformer([ ("color", Pipeline([ ('one', OneHotEncoder()), ('select', ColumnTransformer([('sel1', 'passthrough', [0])])) ]), ['color']), ("others", "passthrough", numeric_features) ])), ]) pipe.fit(X_train) pipe.transform(X_test)[:2] from jyquickhelper import RenderJsDot from mlinsights.plotting import pipeline2dot dot = pipeline2dot(pipe, X_train) RenderJsDot(dot) """ Explanation: Next we need to merge these two columns with the data, or keep only one of them since they are correlated. Or we can write a pipeline...
End of explanation """ pipe = Pipeline([ ("prep", ColumnTransformer([ ("color", Pipeline([ ('one', OneHotEncoder()), ('select', ColumnTransformer([('sel1', 'passthrough', [0])])) ]), ['color']), ("others", "passthrough", numeric_features) ])), ("lr", LogisticRegression(max_iter=1000)), ]) pipe.fit(X_train, y_train) from sklearn.metrics import classification_report print(classification_report(y_test, pipe.predict(X_test))) """ Explanation: A few bugs remain. We add a classifier. End of explanation """ from sklearn.ensemble import RandomForestClassifier pipe = Pipeline([ ("prep", ColumnTransformer([ ("color", Pipeline([ ('one', OneHotEncoder()), ('select', ColumnTransformer([('sel1', 'passthrough', [0])])) ]), ['color']), ("others", "passthrough", numeric_features) ])), ("lr", RandomForestClassifier()), ]) pipe.fit(X_train, y_train) print(classification_report(y_test, pipe.predict(X_test))) """ Explanation: Nothing special. End of explanation """ from sklearn.metrics import roc_curve, auc labels = pipe.steps[1][1].classes_ y_score = pipe.predict_proba(X_test) fpr = dict() tpr = dict() roc_auc = dict() for i, cl in enumerate(labels): fpr[cl], tpr[cl], _ = roc_curve(y_test == cl, y_score[:, i]) roc_auc[cl] = auc(fpr[cl], tpr[cl]) fig, ax = plt.subplots(1, 1, figsize=(8,4)) for k in roc_auc: ax.plot(fpr[k], tpr[k], label="c%d = %1.2f" % (k, roc_auc[k])) ax.legend(); """ Explanation: Much better. ROC curve for each class End of explanation """ from sklearn.metrics import confusion_matrix confusion_matrix(y_test, pipe.predict(X_test), labels=labels) """ Explanation: These numbers may look high, but it is still not that impressive. End of explanation """ def confusion_matrix_df(y_test, y_true): conf = confusion_matrix(y_test, y_true) labels = list(sorted(set(y_test))) df = DataFrame(conf, columns=labels) df.set_index(labels) return df confusion_matrix_df(y_test, pipe.predict(X_test)) """ Explanation: This is not very pretty...
End of explanation """ import numpy ind = numpy.max(pipe.predict_proba(X_test), axis=1) >= 0.6 confusion_matrix_df(y_test[ind], pipe.predict(X_test)[ind]) """ Explanation: But this means that, for a high score, the rate of correct classification improves. End of explanation """ from sklearn.covariance import EllipticEnvelope one = Pipeline([ ("prep", ColumnTransformer([ ("color", Pipeline([ ('one', OneHotEncoder()), ('select', ColumnTransformer([('sel1', 'passthrough', [0])])) ]), ['color']), ("others", "passthrough", numeric_features) ])), ("lr", EllipticEnvelope()), ]) one.fit(X_train) ano = one.predict(X_test) ano from pandas import DataFrame df = DataFrame(dict(note=y_test, ano=one.decision_function(X_test), pred=pipe.predict(X_test), errors=y_test == pipe.predict(X_test), proba_max=numpy.max(pipe.predict_proba(X_test), axis=1), )) df["anoclip"] = df.ano.apply(lambda x: max(x, -200)) df.head() import seaborn seaborn.lmplot(x="anoclip", y="proba_max", hue="errors", truncate=True, height=5, data=df, logx=True, fit_reg=False); df.corr() """ Explanation: The small classes have disappeared: the model is not at all confident for classes 3, 4, 9. We also see that the model is often off by one rating; it would probably be wiser to switch to a regression model rather than a classification one. However, a regression model does not provide a confidence score. It might be possible to build one with an anomaly detection model... Anomalies An anomaly is an outlier. That amounts to saying that the probability of such an event happening again is low. A fairly well-known model is EllipticEnvelope. We assume that if this model detects an anomaly, a predictive model will have more trouble making a prediction. We reuse the previous pipeline, changing only the last step.
End of explanation """ fig, ax = plt.subplots(1, 2, figsize=(14, 4)) df.proba_max.hist(ax=ax[0], bins=50) df.anoclip.hist(ax=ax[1], bins=50) ax[0].set_title("Distribution du score de classification") ax[1].set_title("Distribution du score d'anomalie"); """ Explanation: The previous results are not conclusive. We could switch to another anomaly detection model, but the conclusions would remain the same. The anomaly score is not related to the prediction score. End of explanation """ pipe_ano = Pipeline([ ("prep", ColumnTransformer([ ("color", Pipeline([ ('one', OneHotEncoder()), ('select', ColumnTransformer([('sel1', 'passthrough', [0])])) ]), ['color']), ("others", "passthrough", numeric_features) ])), ("lr", RandomForestClassifier()), ]) pipe_ano.fit(X_train, one.predict(X_train)) confusion_matrix_df(one.predict(X_test), pipe_ano.predict(X_test)) """ Explanation: It looks nice, but the two have nothing to do with each other. And that was predictable, since the predictive model we use is perfectly capable of predicting what an anomaly is. End of explanation """ from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import r2_score pipe_reg = Pipeline([ ("prep", ColumnTransformer([ ("color", Pipeline([ ('one', OneHotEncoder()), ('select', ColumnTransformer([('sel1', 'passthrough', [0])])) ]), ['color']), ("others", "passthrough", numeric_features) ])), ("lr", RandomForestRegressor()), ]) pipe_reg.fit(X_train, y_train) r2_score(y_test, pipe_reg.predict(X_test)) """ Explanation: The anomaly model therefore brings no new information. This means the initial predictive model would not improve its predictions by using the anomaly score. There is thus no chance that the errors or the prediction scores are correlated with the anomaly score in any way.
Confidence score for a regression End of explanation """ error = y_test - pipe_reg.predict(X_test) score = numpy.max(pipe.predict_proba(X_test), axis=1) fig, ax = plt.subplots(1, 2, figsize=(12, 4)) seaborn.kdeplot(score, error, ax=ax[1]) ax[1].set_ylim([-1.5, 1.5]) ax[1].set_title("Densité") ax[0].plot(score, error, ".") ax[0].set_xlabel("score de confiance du classifieur") ax[0].set_ylabel("Erreur de prédiction") ax[0].set_title("Lien entre classification et prédiction"); """ Explanation: Not great. But... End of explanation """ fig, ax = plt.subplots(1, 2, figsize=(12, 4)) seaborn.kdeplot(score, error.abs(), ax=ax[1]) ax[1].set_ylim([0, 1.5]) ax[1].set_title("Densité") ax[0].plot(score, error.abs(), ".") ax[0].set_xlabel("score de confiance du classifieur") ax[0].set_ylabel("Erreur de prédiction") ax[0].set_title("Lien entre classification et prédiction"); """ Explanation: As expected, the model does not err more in one direction than in the other. End of explanation """
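The R² produced by `r2_score` above can be recomputed directly from its definition, 1 - SS_res / SS_tot; a pure-Python sketch on toy ratings (no scikit-learn needed):

```python
def r2(y_true, y_pred):
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [5, 6, 7, 5, 6]
print(r2(y_true, y_true))           # 1.0 for a perfect fit
print(r2(y_true, [6, 6, 6, 6, 6]))  # can even go negative for a poor predictor
```

A constant prediction scores at or below zero, which is why a modest positive R² on the wine ratings is read above as "not great" rather than useless.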
arcyfelix/Courses
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/05-Miscellaneous-Topics/00-Deep-Nets-with-TF-Abstractions.ipynb
apache-2.0
from sklearn.datasets import load_wine wine_data = load_wine() type(wine_data) wine_data.keys() print(wine_data.DESCR) feat_data = wine_data['data'] labels = wine_data['target'] """ Explanation: Deep Nets with TF Abstractions Let's explore a few of the various abstractions that TensorFlow offers. You can check out the tf.contrib documentation for more options. The Data To compare these various abstractions we'll use a dataset easily available from the SciKit Learn library. The data is comprised of the results of a chemical analysis of wines grown in the same region in Italy by three different cultivators. There are thirteen different measurements taken for different constituents found in the three types of wine. We will use the various TF Abstractions to classify the wine to one of the 3 possible labels. First let's show you how to get the data: End of explanation """ from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(feat_data, labels, test_size = 0.3, random_state = 101) """ Explanation: Train Test Split End of explanation """ from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaled_x_train = scaler.fit_transform(X_train) scaled_x_test = scaler.transform(X_test) """ Explanation: Scale the Data End of explanation """ import tensorflow as tf from tensorflow import estimator X_train.shape feat_cols = [tf.feature_column.numeric_column("x", shape = [13])] deep_model = estimator.DNNClassifier(hidden_units = [13,13,13], feature_columns = feat_cols, n_classes = 3, optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) ) input_fn = estimator.inputs.numpy_input_fn(x = {'x':scaled_x_train}, y = y_train, shuffle = True, batch_size = 10, num_epochs = 5) deep_model.train(input_fn = input_fn, steps = 500) input_fn_eval = estimator.inputs.numpy_input_fn(x = {'x': scaled_x_test}, shuffle = False) preds = list(deep_model.predict(input_fn = input_fn_eval)) predictions = [p['class_ids'][0] for p in 
preds] from sklearn.metrics import confusion_matrix, classification_report print(classification_report(y_test,predictions)) """ Explanation: Abstractions Estimator API End of explanation """ from tensorflow.contrib.keras import models dnn_keras_model = models.Sequential() """ Explanation: TensorFlow Keras Create the Model End of explanation """ from tensorflow.contrib.keras import layers dnn_keras_model.add(layers.Dense(units = 13, input_dim = 13, activation = 'relu')) dnn_keras_model.add(layers.Dense(units = 13, activation = 'relu')) dnn_keras_model.add(layers.Dense(units = 13, activation = 'relu')) dnn_keras_model.add(layers.Dense(units = 3, activation = 'softmax')) """ Explanation: Add Layers to the model End of explanation """ from tensorflow.contrib.keras import losses, optimizers, metrics losses.sparse_categorical_crossentropy dnn_keras_model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy']) """ Explanation: Compile the Model End of explanation """ dnn_keras_model.fit(scaled_x_train,y_train,epochs=50) predictions = dnn_keras_model.predict_classes(scaled_x_test) print(classification_report(y_test, predictions)) """ Explanation: Train Model End of explanation """ import pandas as pd from sklearn.datasets import load_wine from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler wine_data = load_wine() feat_data = wine_data['data'] labels = wine_data['target'] X_train, X_test, y_train, y_test = train_test_split(feat_data, labels, test_size = 0.3, random_state = 101) scaler = MinMaxScaler() scaled_x_train = scaler.fit_transform(X_train) scaled_x_test = scaler.transform(X_test) # ONE HOT ENCODED onehot_y_train = pd.get_dummies(y_train).as_matrix() one_hot_y_test = pd.get_dummies(y_test).as_matrix() """ Explanation: Layers API https://www.tensorflow.org/tutorials/layers Formatting Data End of explanation """ num_feat = 13 num_hidden1 = 13 num_hidden2 = 13 num_outputs = 3
learning_rate = 0.01 import tensorflow as tf from tensorflow.contrib.layers import fully_connected """ Explanation: Parameters End of explanation """ X = tf.placeholder(tf.float32, shape = [None,num_feat]) y_true = tf.placeholder(tf.float32, shape = [None,3]) """ Explanation: Placeholder End of explanation """ actf = tf.nn.relu """ Explanation: Activation Function End of explanation """ hidden1 = fully_connected(X, num_hidden1, activation_fn = actf) hidden2 = fully_connected(hidden1,num_hidden2,activation_fn = actf) output = fully_connected(hidden2, num_outputs) """ Explanation: Create Layers End of explanation """ loss = tf.losses.softmax_cross_entropy(onehot_labels = y_true, logits = output) """ Explanation: Loss Function End of explanation """ optimizer = tf.train.AdamOptimizer(learning_rate) train = optimizer.minimize(loss) """ Explanation: Optimizer End of explanation """ init = tf.global_variables_initializer() training_steps = 1000 with tf.Session() as sess: sess.run(init) for i in range(training_steps): sess.run(train, feed_dict = {X: scaled_x_train, y_true: onehot_y_train}) # Get Predictions logits = output.eval(feed_dict = {X: scaled_x_test}) preds = tf.argmax(logits, axis = 1) results = preds.eval() from sklearn.metrics import confusion_matrix,classification_report print(classification_report(y_test, results)) """ Explanation: Init End of explanation """
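As a closing sanity check on the loss used in the layers section, softmax cross-entropy on one-hot labels is easy to reproduce by hand. This is a minimal NumPy sketch with made-up logits and labels, not TensorFlow's actual implementation; for one-hot labels it should agree with `tf.losses.softmax_cross_entropy` up to floating-point error:

```python
import numpy as np

def softmax_cross_entropy(onehot_labels, logits):
    # subtract the row max before exponentiating for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # mean negative log-likelihood of the true class over the batch
    return -(onehot_labels * log_probs).sum(axis=1).mean()

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1, 0, 0],
                   [0, 1, 0]])
print(softmax_cross_entropy(labels, logits))  # ≈ 0.3185
```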
kaushik94/tardis
docs/research/code_comparison/plasma_compare/plasma_compare.ipynb
bsd-3-clause
from tardis.simulation import Simulation from tardis.io.config_reader import Configuration from IPython.display import FileLinks """ Explanation: Plasma comparison End of explanation """ config = Configuration.from_yaml('tardis_example.yml') sim = Simulation.from_config(config) """ Explanation: The example tardis_example can be downloaded here tardis_example.yml End of explanation """ # All Si ionization states sim.plasma.ion_number_density.loc[14] # Normalizing by si number density sim.plasma.ion_number_density.loc[14] / sim.plasma.number_density.loc[14] # Accessing the first ionization state sim.plasma.ion_number_density.loc[14, 1] sim.plasma.update(density=[1e-13]) sim.plasma.ion_number_density """ Explanation: Accessing the plasma states In this example, we are accessing Si and also the unionized number density (0) End of explanation """ si_ionization_state = None for cur_t_rad in range(1000, 20000, 100): sim.plasma.update(t_rad=[cur_t_rad]) if si_ionization_state is None: si_ionization_state = sim.plasma.ion_number_density.loc[14].copy() si_ionization_state.columns = [cur_t_rad] else: si_ionization_state[cur_t_rad] = sim.plasma.ion_number_density.loc[14].copy() %pylab inline fig = figure(0, figsize=(10, 10)) ax = fig.add_subplot(111) si_ionization_state.T.iloc[:, :3].plot(ax=ax) xlabel('radiative Temperature [K]') ylabel('Number density [1/cm$^3$]') """ Explanation: Updating the plasma state It is possible to update the plasma state with different temperatures or dilution factors (as well as different densities.). We are updating the radiative temperatures and plotting the evolution of the ionization state End of explanation """
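A natural follow-up to the plot is asking at what radiative temperature Si II overtakes Si I. A sketch of that bookkeeping, using a toy DataFrame standing in for `si_ionization_state.T` — the toy populations below are invented exponentials purely to exercise the logic, whereas the real numbers come from TARDIS's plasma calculation above:

```python
import numpy as np
import pandas as pd

# toy stand-in for si_ionization_state.T: index = t_rad [K], columns = ionization state
t_rad = np.arange(1000, 20000, 100, dtype=float)
n_si_i = 1e5 * np.exp(-t_rad / 5000.0)         # neutral Si falling with temperature
n_si_ii = 1e5 * (1 - np.exp(-t_rad / 5000.0))  # singly ionized Si rising
grid = pd.DataFrame({0: n_si_i, 1: n_si_ii}, index=t_rad)

# first grid temperature at which Si II outnumbers Si I
crossing = grid.index[np.argmax((grid[1] > grid[0]).to_numpy())]
print(crossing)  # 3500.0 for this toy grid
```

The same two lines applied to the real `si_ionization_state.T` would report the crossover on the TARDIS grid.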
tensorflow/tflite-micro
third_party/xtensa/examples/pytorch_to_tflite/pytorch_to_tflite_converter/pytorch_to_onnx_to_tflite_int8.ipynb
apache-2.0
!pip install onnx !pip install onnxruntime # Some standard imports import numpy as np import torch import torch.onnx import torchvision.models as models import onnx import onnxruntime """ Explanation: <a href="https://colab.research.google.com/github/nyadla-sys/pytorch_2_tflite/blob/main/pytorch_to_onnx_to_tflite(quantized)_with_imagedata.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Install ONNX and ONNX runtime End of explanation """ model = models.mobilenet_v2(pretrained=True) model.eval() """ Explanation: Load mobilenetV2 from torch models End of explanation """ IMAGE_SIZE = 224 BATCH_SIZE = 1 IMAGE_SIZE = 224 # Input to the model x = torch.randn(BATCH_SIZE, 3, 224, 224, requires_grad=True) torch_out = model(x) # Export the model torch.onnx.export(model, # model being run x, # model input (or a tuple for multiple inputs) "mobilenet_v2.onnx", # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file opset_version=10, # the ONNX version to export the model to do_constant_folding=True, # whether to execute constant folding for optimization input_names = ['input'], # the model's input names output_names = ['output'], # the model's output names dynamic_axes={'input' : {0 : 'BATCH_SIZE'}, # variable length axes 'output' : {0 : 'BATCH_SIZE'}}) onnx_model = onnx.load("mobilenet_v2.onnx") onnx.checker.check_model(onnx_model) """ Explanation: Convert from pytorch to onnx End of explanation """ ort_session = onnxruntime.InferenceSession("mobilenet_v2.onnx") def to_numpy(tensor): return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy() # compute ONNX Runtime output prediction ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x)} ort_outs = ort_session.run(None, ort_inputs) # compare ONNX Runtime and PyTorch results np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], 
rtol=1e-03, atol=1e-05) print("Exported model has been tested with ONNXRuntime, and the result looks good!") """ Explanation: Compare ONNX Runtime and Pytorch results End of explanation """ !pip install onnx-tf from onnx_tf.backend import prepare import onnx onnx_model_path = 'mobilenet_v2.onnx' tf_model_path = 'model_tf' onnx_model = onnx.load(onnx_model_path) tf_rep = prepare(onnx_model) tf_rep.export_graph(tf_model_path) """ Explanation: Convert from Onnx to TF saved model End of explanation """ import tensorflow as tf saved_model_dir = 'model_tf' tflite_model_path = 'mobilenet_v2_float32.tflite' # Convert the model converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) tflite_model = converter.convert() # Save the model with open(tflite_model_path, 'wb') as f: f.write(tflite_model) """ Explanation: Convert from TF saved model to TFLite(float32) model End of explanation """ import numpy as np import tensorflow as tf tflite_model_path = '/content/mobilenet_v2_float32.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Test the model on random input data input_shape = input_details[0]['shape'] print(input_shape) input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() # get_tensor() returns a copy of the tensor data # use tensor() in order to get a pointer to the tensor output_data = interpreter.get_tensor(output_details[0]['index']) print("Predicted value for [0, 1] normalization. 
Label index: {}, confidence: {:2.0f}%" .format(np.argmax(output_data), 100 * output_data[0][np.argmax(output_data)])) """ Explanation: Run inference on TFLite(float32) with random data End of explanation """ !wget --no-check-certificate \ https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \ -O /content/cats_and_dogs_filtered.zip import os import zipfile local_zip = '/content/cats_and_dogs_filtered.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/content') zip_ref.close() import tensorflow as tf import numpy as np tflite_model_path = '/content/mobilenet_v2_float32.tflite' #tflite_model_path = '/content/model_float32.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() print("== Input details ==") print("name:", interpreter.get_input_details()[0]['name']) print("shape:", interpreter.get_input_details()[0]['shape']) print("type:", interpreter.get_input_details()[0]['dtype']) print("\nDUMP INPUT") print(interpreter.get_input_details()[0]) print("\n== Output details ==") print("name:", interpreter.get_output_details()[0]['name']) print("shape:", interpreter.get_output_details()[0]['shape']) print("type:", interpreter.get_output_details()[0]['dtype']) print("\nDUMP OUTPUT") print(interpreter.get_output_details()[0]) # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Test the model on image data input_shape = input_details[0]['shape'] #print(input_shape) image = tf.io.read_file('/content/cats_and_dogs_filtered/validation/cats/cat.2000.jpg') image = tf.io.decode_jpeg(image, channels=3) image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE]) image = tf.reshape(image,[3,IMAGE_SIZE,IMAGE_SIZE]) image = tf.expand_dims(image, 0) print("Real image shape") print(image.shape) #print(image) interpreter.set_tensor(input_details[0]['index'], image) interpreter.invoke() # 
get_tensor() returns a copy of the tensor data # use tensor() in order to get a pointer to the tensor output_data = interpreter.get_tensor(output_details[0]['index']) print("Predicted value . Label index: {}, confidence: {:2.0f}%" .format(np.argmax(output_data), 100 * output_data[0][np.argmax(output_data)])) """ Explanation: Run inference on TFLite(float32) with image data End of explanation """ # A generator that provides a representative dataset import tensorflow as tf from PIL import Image from torchvision import transforms saved_model_dir = 'model_tf' #flowers_dir = '/content/images' def representative_data_gen(): dataset_list = tf.data.Dataset.list_files('/content/cats_and_dogs_filtered/train' + '/*/*') for i in range(1): image = next(iter(dataset_list)) image = tf.io.read_file(image) image = tf.io.decode_jpeg(image, channels=3) image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE]) image = tf.reshape(image,[3,IMAGE_SIZE,IMAGE_SIZE]) image = tf.cast(image / 127., tf.float32) image = tf.expand_dims(image, 0) print(image.shape) yield [image] from PIL import Image from torchvision import transforms # Download an example image from the pytorch website import urllib url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") try: urllib.URLopener().retrieve(url, filename) except: urllib.request.urlretrieve(url, filename) def representative_data_gen_1(): dataset_list = tf.data.Dataset.list_files('/content/cats_and_dogs_filtered/train' + '/*/*') for i in range(100): input_image = next(iter(dataset_list)) input_image = Image.open(filename) preprocess = transforms.Compose([ transforms.RandomCrop(224, padding=4), transforms.Resize(224), transforms.RandomHorizontalFlip(), #transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) input_tensor = preprocess(input_image) print(input_tensor.shape) input_tensor = tf.expand_dims(input_tensor, 0) print("torch input_tensor size") 
print(input_tensor.shape) yield [input_tensor] #converter = tf.lite.TFLiteConverter.from_keras_model(model) converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) # This enables quantization converter.optimizations = [tf.lite.Optimize.DEFAULT] # This sets the representative dataset for quantization converter.representative_dataset = representative_data_gen_1 # This ensures that if any ops can't be quantized, the converter throws an error converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] # For full integer quantization, though supported types defaults to int8 only, we explicitly declare it for clarity. converter.target_spec.supported_types = [tf.int8] # These set the input and output tensors to uint8 (added in r2.3) converter.inference_input_type = tf.int8 converter.inference_output_type = tf.int8 tflite_model = converter.convert() with open('mobilenet_v2_1.0_224_quant.tflite', 'wb') as f: f.write(tflite_model) """ Explanation: Convert from TF saved model to TFLite(quantized) model End of explanation """ # Download an example image from the pytorch website import urllib url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") try: urllib.URLopener().retrieve(url, filename) except: urllib.request.urlretrieve(url, filename) # sample execution (requires torchvision) from PIL import Image from torchvision import transforms input_image = Image.open(filename) preprocess = transforms.Compose([ transforms.Resize(224), transforms.CenterCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) input_tensor = preprocess(input_image) print(input_tensor.shape) #input_tensor = tf.reshape(input_tensor,[3,IMAGE_SIZE,IMAGE_SIZE]) #input_tensor = tf.cast(input_tensor , tf.float32) input_tensor = tf.expand_dims(input_tensor, 0) print("torch input_tensor size") print(input_tensor.shape) import numpy as np import tensorflow as 
tf tflite_model_path = '/content/mobilenet_v2_float32.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Test the model on random input data input_shape = input_details[0]['shape'] print(input_shape) input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32) interpreter.set_tensor(input_details[0]['index'], input_tensor) interpreter.invoke() # get_tensor() returns a copy of the tensor data # use tensor() in order to get a pointer to the tensor output_data = interpreter.get_tensor(output_details[0]['index']) print("Predicted value for [0, 1] normalization. Label index: {}, confidence: {:2.0f}%" .format(np.argmax(output_data), 100 * output_data[0][np.argmax(output_data)])) """ Explanation: Run inference on TFLite(float32) model with dog.jpg "https://github.com/pytorch/hub/raw/master/images/dog.jpg" End of explanation """ # Download an example image from the pytorch website import urllib url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") try: urllib.URLopener().retrieve(url, filename) except: urllib.request.urlretrieve(url, filename) import tensorflow as tf import numpy as np tflite_model_path = '/content/mobilenet_v2_1.0_224_quant.tflite' interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() test_details = interpreter.get_input_details()[0] scale, zero_point = test_details['quantization'] print(scale) print(zero_point) # Test the model on image data # sample execution (requires torchvision) from PIL import Image from torchvision import transforms input_image = Image.open(filename) preprocess = transforms.Compose([ 
transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) input_tensor = preprocess(input_image) print(input_tensor.shape) input_tensor = torch.unsqueeze(input_tensor, 0) input_tensor = torch.quantize_per_tensor(input_tensor, torch.tensor(scale), torch.tensor(zero_point), torch.qint8) input_tensor = torch.int_repr(input_tensor).numpy() print("torch input_tensor size:") print(input_tensor.shape) print(input_tensor) interpreter.set_tensor(input_details[0]['index'], input_tensor) interpreter.invoke() # get_tensor() returns a copy of the tensor data # use tensor() in order to get a pointer to the tensor output_data = interpreter.get_tensor(output_details[0]['index']) print("Predicted value . Label index: {}, confidence: {:2.0f}%" .format(np.argmax(output_data), 100 * output_data[0][np.argmax(output_data)])) """ Explanation: Run inference on TFLite(quantized) model with dog.jpg "https://github.com/pytorch/hub/raw/master/images/dog.jpg" End of explanation """
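The int8 model's outputs are still in quantized units. To compare its confidences directly against the float32 model, they are generally dequantized with the output tensor's scale and zero point (available from `output_details[0]['quantization']`, just as the input scale and zero point were read above). A small sketch with hypothetical quantization parameters:

```python
import numpy as np

def dequantize(q_values, scale, zero_point):
    # map quantized values back to real numbers: real = scale * (q - zero_point)
    return scale * (np.asarray(q_values, dtype=np.float32) - zero_point)

# hypothetical int8 output tensor and quantization parameters
out = np.array([12, -3, 127], dtype=np.int8)
print(dequantize(out, scale=0.00390625, zero_point=-128))
# -> 0.546875, 0.48828125, 0.99609375
```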
kpolimis/kpolimis.github.io-src
output/downloads/notebooks/nba_mvp_comparisons-part_1.ipynb
gpl-3.0
import os import urllib import webbrowser import pandas as pd from datetime import datetime from bs4 import BeautifulSoup """ Explanation: NBA MVP Comparisons Part 1 25 <sup>th</sup> December 2018 It's Christmas and that means a full slate of NBA games. This time of year also provokes some great NBA discussions and the best NBA discussions are comparative. Arguments like: would Jordan's '96 Bulls beat the '17 Warriors? Another comparison that is sure to create great debate is who had the best (or worst) Most Valuable Player (MVP) season in history? This question has a strong empirical dimension, in that we can observe quantifiable aspects across seasons, and can leverage programming to both gather and analyze the data. The MVP data we will gather comes from basketball-reference.com. Basketball-reference is part of the Sports-Reference sites, "a group of sites providing both basic and sabermetric statistics and resources for sports fans everywhere. [Sports-Reference aims] to be the easiest-to-use, fastest, most complete sources for sports statistics anywhere" (sports-reference.com). In this post, we will gather and pre-process all the data for a multi-part series to determine: 1. the MVP finalist with the best case for winning their year 2.
predict the 2018-2019 MVP Let the shade begin Outline Import modules Examine html structure of webpage Use a function with Beautiful Soup to parse webpages into .csv Analyze .csv of webpage as a Pandas DataFrame Process the data import relevant modules standard library modules: os urllib webbrowser datetime open source modules: pandas Beautiful Soup End of explanation """ url = 'https://www.basketball-reference.com/awards/mvp.html' webbrowser.open_new_tab(url) # get the html html = urllib.request.urlopen(url) # create the BeautifulSoup object soup = BeautifulSoup(html, "lxml") """ Explanation: Let's examine the webpage with all the MVP data from the 1955-1956 season to the 2017-2018 season End of explanation """ # Extract the necessary values for the column headers from the table # and store them as a list column_headers = [th.getText() for th in soup.findAll('th', limit=30)] column_headers = [s for s in column_headers if len(s) != 0] column_headers = column_headers[1:] print(column_headers) column_headers = [e for e in column_headers if e not in ('Shooting', 'Advanced')][:-7] print(column_headers) len(column_headers) """ Explanation: Scraping the Column Headers The column headers we need for our DataFrame are found in the th element End of explanation """ # The data is found within the table rows # We want the elements from the 3rd row and on table_rows = soup.find_all("tr")[2:] print(type(table_rows)) table_rows[0] # take a look at the first row """ Explanation: Scraping the Data Note that table_rows is a list of tag elements. End of explanation """ def extract_mvp_data(table_rows): """ Extract and return the desired information from the td elements within the table rows.
:param: table_rows: list of soup `tr` elements :return: list of player-year MVP observations """ # create the empty list to store the player data player_data = [] for row in table_rows: # for each row do the following # Get the text for each table data (td) element in the row player_year = [td.get_text() for td in row.find_all("th")] player_list = [td.get_text() for td in row.find_all("td")] # there are some empty table rows, which are the repeated # column headers in the table # we skip over those rows and continue the for loop if not player_list: continue # Now append the data to list of data player_info = player_year+player_list player_data.append(player_info) return player_data """ Explanation: The data we want for each player is found within the td (or table data) elements. Below I've created a function that extracts the data we want from table_rows. The comments should walk you through what each part of the function does. End of explanation """ # extract the data we want mvp_data = extract_mvp_data(table_rows) # and then store it in a DataFrame mvp_data_df = pd.DataFrame(mvp_data) mvp_data_df[0:1] """ Explanation: now we can create a DataFrame with the MVP data End of explanation """ mvp_data_df.columns = column_headers print("the MVP dataframe has {} rows (player-year observations) and {} columns".format(mvp_data_df.shape[0], mvp_data_df.shape[1])) mvp_data_df.head() """ Explanation: rename the columns view the data End of explanation """ url_2018_mvp_finalist = 'https://www.basketball-reference.com/awards/awards_2018.html#mvp' # get the html # examine webpage html_finalist = urllib.request.urlopen(url_2018_mvp_finalist) webbrowser.open_new_tab(url) # create the BeautifulSoup object soup_finalist = BeautifulSoup(html_finalist, "lxml") # Extract the necessary values for the column headers from the table and store them as a list column_headers_finalist = [th.getText() for th in soup_finalist.findAll('th', limit=30)] column_headers_finalist = [s for s in
column_headers_finalist if len(s) != 0] column_headers_finalist = column_headers_finalist[1:] print("raw column names in finalist table: {}".format(column_headers_finalist)) column_headers_finalist = [e for e in column_headers_finalist if e not in ('Shooting', 'Advanced', 'Per Game')][:-4] print("formatted column names in finalist table: {}".format(column_headers_finalist)) print("{} columns in finalist table".format(len(column_headers_finalist))) # The data is found within the `tr` elements of the first `tbody` element # We want the elements from the 3rd row and on table_rows_finalist = soup_finalist.find("tbody").find_all("tr") print(type(table_rows_finalist)) table_rows_finalist[-1] # take a look at the last row """ Explanation: now we need the data on all the finalists for the question: which finalist had the best argument for winning that year? End of explanation """ def extract_player_data(table_rows): """ Extract and return the desired information from the td elements within the table rows.
:param: table_rows: list of soup `tr` elements :return: list of player-year MVP finalist observations """ # create the empty list to store the player data player_data = [] for row in table_rows: # for each row do the following # Get the text for each table data (td) element in the row player_rank = [td.get_text() for td in row.find_all("th")] player_list = [td.get_text() for td in row.find_all("td")] # there are some empty table rows, which are the repeated # column headers in the table # we skip over those rows and continue the for loop if not player_list: continue # Now append the data to list of data player_info = player_rank+player_list player_data.append(player_info) return player_data # extract the data we want data = extract_player_data(table_rows_finalist) # and then store it in a DataFrame example_player_df = pd.DataFrame(data) example_player_df.columns = column_headers_finalist print("the MVP finalist dataframe has {} rows (player-year observations) and {} columns".format(example_player_df.shape[0], example_player_df.shape[1])) example_player_df.tail(6) """ Explanation: create a function to extract MVP finalist data End of explanation """ # Create an empty list that will contain all the dataframes # (one dataframe for all finalist dataframes) mvp_finalist_list = [] # a list to store any errors that may come up while scraping errors_list = [] """ Explanation: Scraping the Data for All MVP Finalists Since 1956 Scraping the finalist data since 1956 is essentially the same process as above, just repeated for each year, using a for loop. As we loop over the years, we will create a DataFrame for each year of MVP finalist data, and append it to a large list of DataFrames that contains all the MVP finalists data. We will also have a separate list that will contain any errors and the url associated with that error. This will let us know if there are any issues with our scraper, and which url is causing the error.
End of explanation """ loop_start = datetime.now() print(loop_start) # The url template that we pass in the finalist year info url_template = "https://www.basketball-reference.com/awards/awards_{year}.html#mvp" # for each year from 1956 to (and including) 2018 for year in range(1956, 2019): # Use try/except block to catch and inspect any urls that cause an error try: # get the MVP finalist data url url = url_template.format(year=year) # get the html html = urllib.request.urlopen(url) # create the BeautifulSoup object soup = BeautifulSoup(html, "lxml") # get the column headers column_headers = [th.getText() for th in soup.findAll('th', limit=30)] column_headers = [s for s in column_headers if len(s) != 0] column_headers = column_headers[1:] column_headers = [e for e in column_headers if e not in ('Shooting', 'Advanced', 'Per Game')][:-4] # select the data from the table table_rows = soup.find_all("tr")[2:] # extract the player data from the table rows player_data = extract_player_data(table_rows) # create the dataframe for the current year's mvp finalist data # subset to only include MVP finalists year_df = pd.DataFrame(player_data) year_df.columns = column_headers year_df = year_df.loc[year_df["Pts Max"]==year_df["Pts Max"].unique()[0]] # add the year of the MVP finalist data to the dataframe year_df.insert(0, "Year", year) # append the current dataframe to the list of dataframes mvp_finalist_list.append(year_df) except Exception as e: # Store the url and the error it causes in a list error =[url, e] # then append it to the list of errors errors_list.append(error) loop_stop = datetime.now() print(loop_stop) print("the loop took {}".format(loop_stop-loop_start)) """ Explanation: let's time how long this loop takes End of explanation """ print(len(errors_list)) errors_list """ Explanation: the loop took ~ 1 minute End of explanation """ print(type(mvp_finalist_list)) print(len(mvp_finalist_list)) mvp_finalist_list[0:1] column_headers_finalist.insert(0, "Year") 
print(column_headers_finalist) print(len(column_headers_finalist)) # store all finalist data in one DataFrame mvp_finalist_df = pd.concat(mvp_finalist_list, axis=0) print(mvp_finalist_df.shape) # Take a look at the first row mvp_finalist_df.iloc[0] """ Explanation: We don't get any errors, so that's good. Now we can concatenate all the DataFrames we scraped and create one large DataFrame containing all the finalist data End of explanation """ os.makedirs('../data/raw_data', exist_ok=True) os.makedirs('../data/clean_data', exist_ok=True) mvp_finalist_df.head() # Write out the MVP data and MVP finalist data to the raw_data folder in the data folder mvp_data_df.to_csv("../data/raw_data/mvp_data_df_raw.csv", index=False) mvp_finalist_df.to_csv("../data/raw_data/mvp_finalist_df_raw.csv", index=False) """ Explanation: Now that we fixed up the necessary columns, let's write out the raw data to a CSV file. End of explanation """ mvp_data_df_clean = pd.read_csv("../data/raw_data/mvp_data_df_raw.csv", encoding = "Latin-1") mvp_finalist_df_clean = pd.read_csv("../data/raw_data/mvp_finalist_df_raw.csv", encoding = "Latin-1") mvp_data_df_clean.head() """ Explanation: Cleaning the Data Now that we have the raw MVP data, we need to clean it up a bit for data exploration End of explanation """ mvp_data_columns_dict = {'Season':'season', 'Lg':'league', 'Player':'player', 'Voting': 'voting', 'Tm': 'team', 'Age': 'age', 'G': 'games_played', 'MP': 'avg_minutes', 'PTS': 'avg_points', 'TRB': 'avg_rebounds', 'AST': 'avg_assists', 'STL': 'avg_steals', 'BLK': 'avg_blocks', 'FG%': 'field_goal_pct', '3P%': 'three_pt_pct', 'FT%': 'free_throw_pct', 'WS': 'win_shares', 'WS/48': 'win_shares_per_48' } mvp_finalist_columns_dict = {'Year':'year', 'Player':'player', 'Rank': 'rank', 'Tm': 'team', 'Age': 'age', 'First': 'first_place_votes', 'Pts Won': 'points_won', 'Pts Max': 'points_max', 'Share':'vote_share', 'G': 'games_played', 'MP': 'avg_minutes', 'PTS': 'avg_points', 'TRB': 'avg_rebounds', 
'AST': 'avg_assists', 'STL': 'avg_steals', 'BLK': 'avg_blocks', 'FG%': 'field_goal_pct', '3P%': 'three_pt_pct', 'FT%': 'free_throw_pct', 'WS': 'win_shares', 'WS/48': 'win_shares_per_48' } mvp_data_df_clean.rename(index=str,columns=mvp_data_columns_dict, inplace=True) mvp_data_df_clean.head() mvp_finalist_df_clean.rename(index=str,columns=mvp_finalist_columns_dict, inplace=True) mvp_finalist_df_clean.head() """ Explanation: create dictionaries for renaming columns rename all columns with dictionaries End of explanation """ # convert the data to proper numeric types mvp_data_df_clean = mvp_data_df_clean.apply(pd.to_numeric, errors="ignore") mvp_data_df_clean.info() # convert the data to proper numeric types mvp_finalist_df_clean = mvp_finalist_df_clean.apply(pd.to_numeric, errors="ignore") mvp_finalist_df_clean.info() """ Explanation: Cleaning Up the Rest of the Data End of explanation """ # Get the column names for the numeric columns num_cols_mvp = mvp_data_df_clean.columns[mvp_data_df_clean.dtypes != object] num_cols_finalist = mvp_finalist_df_clean.columns[mvp_finalist_df_clean.dtypes != object] # Replace all NaNs with 0 mvp_data_df_clean.loc[:, num_cols_mvp] = mvp_data_df_clean.loc[:, num_cols_mvp].fillna(0) mvp_finalist_df_clean.loc[:, num_cols_finalist] = mvp_finalist_df_clean.loc[:, num_cols_finalist].fillna(0) mvp_finalist_df_clean.info() """ Explanation: We are not done yet. A lot of our numeric columns are missing data because players didn't accumulate any of those stats. For example, the 3-point line was introduced in the 1979-80 season, so players in preceding seasons don't have this statistic. Additionally, we want to select the columns with numeric data and then replace the NaNs (the current value that represents the missing data) with 0s, as that is a more appropriate value.
End of explanation """ mvp_data_df_clean = mvp_data_df_clean.loc[mvp_data_df_clean["league"]=="NBA"] mvp_data_df_clean = mvp_data_df_clean[pd.notnull(mvp_data_df_clean['team'])] """ Explanation: remove ABA winners remove MVP summary table End of explanation """ mvp_data_df_clean = mvp_data_df_clean[pd.notnull(mvp_data_df_clean['player'])] mvp_data_df_clean.sort_values(['season'], ascending=False, axis=0, inplace=True) mvp_finalist_df_clean = mvp_finalist_df_clean[pd.notnull(mvp_finalist_df_clean['player'])] mvp_finalist_df_clean.sort_values(['year', 'points_won'], ascending=False, axis=0, inplace=True) mvp_data_df_clean.to_csv("../data/clean_data/mvp_data_df_clean.csv", index=False) print(mvp_data_df_clean.shape) mvp_data_df_clean.head() mvp_finalist_df_clean.to_csv("../data/clean_data/mvp_finalist_df_clean.csv", index=False) print(mvp_finalist_df_clean.shape) mvp_finalist_df_clean.head() """ Explanation: We are finally done cleaning the data and now we can save it to a CSV file. End of explanation """ import sys import bs4 print(f'last updated: {datetime.now().strftime("%Y-%m-%d %H:%M")} \n') print(f"System and module version information: \n") print(f"Python version: {sys.version_info}") print(f"urllib.request version: {urllib.request.__version__}") print(f"pandas version: {pd.__version__}") print(f"Beautiful Soup version: {bs4.__version__}") """ Explanation: Review In this tutorial, we learned how to examine the html structure of a webpage use functions based on the Beautiful Soup module to parse tables on multiple webpages into .csv analyzed a .csv file using the Pandas module Download this notebook or see a static view here End of explanation """
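With the cleaned CSVs saved, the comparison questions from the introduction become ordinary pandas lookups. A sketch of the kind of query the next part will run — here on a three-row toy frame with illustrative win-share values rather than the full scraped table:

```python
import pandas as pd

# toy stand-in for mvp_data_df_clean: one row per MVP season, illustrative values
mvp = pd.DataFrame({
    'season': ['2015-16', '2016-17', '2017-18'],
    'player': ['Stephen Curry', 'Russell Westbrook', 'James Harden'],
    'win_shares': [17.9, 13.1, 15.4],
})

# MVP season with the most win shares in this toy sample
best = mvp.loc[mvp['win_shares'].idxmax(), 'player']
print(best)  # Stephen Curry
```

Swapping in `pd.read_csv("../data/clean_data/mvp_data_df_clean.csv")` for the toy frame runs the same query over the real table.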
samstav/scipy_2015_sklearn_tutorial
notebooks/01.3 Data Representation for Machine Learning.ipynb
cc0-1.0
from sklearn.datasets import load_iris iris = load_iris() """ Explanation: Representation and Visualization of Data Machine learning is about creating models from data: for that reason, we'll start by discussing how data can be represented in order to be understood by the computer. Along with this, we'll build on our matplotlib examples from the previous section and show some examples of how to visualize data. Data in scikit-learn Data in scikit-learn, with very few exceptions, is assumed to be stored as a two-dimensional array, of size [n_samples, n_features]. Many algorithms also accept scipy.sparse matrices of the same shape. n_samples: The number of samples: each sample is an item to process (e.g. classify). A sample can be a document, a picture, a sound, a video, an astronomical object, a row in database or CSV file, or whatever you can describe with a fixed set of quantitative traits. n_features: The number of features or distinct traits that can be used to describe each item in a quantitative manner. Features are generally real-valued, but may be boolean or discrete-valued in some cases. The number of features must be fixed in advance. However it can be very high dimensional (e.g. millions of features) with most of them being zeros for a given sample. This is a case where scipy.sparse matrices can be useful, in that they are much more memory-efficient than numpy arrays. Each sample (data point) is a row in the data array, and each feature is a column. A Simple Example: the Iris Dataset As an example of a simple dataset, we're going to take a look at the iris data stored by scikit-learn. The data consists of measurements of three different species of irises. There are three species of iris in the dataset, which we can picture here: Iris Setosa Iris Versicolor Iris Virginica Quick Question: If we want to design an algorithm to recognize iris species, what might the data be? Remember: we need a 2D array of size [n_samples x n_features]. 
What would the n_samples refer to? What might the n_features refer to? Remember that there must be a fixed number of features for each sample, and feature number i must be a similar kind of quantity for each sample. Loading the Iris Data with Scikit-learn Scikit-learn has a very straightforward set of data on these iris species. The data consist of the following: Features in the Iris dataset: sepal length in cm sepal width in cm petal length in cm petal width in cm Target classes to predict: Iris Setosa Iris Versicolour Iris Virginica <img src="figures/petal_sepal.jpg" alt="Sepal" style="width: 400px;"/> "Petal-sepal". Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg#/media/File:Petal-sepal.jpg scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays: End of explanation """ iris.keys() """ Explanation: The resulting dataset is a Bunch object: you can see what's available using the method keys(): End of explanation """ n_samples, n_features = iris.data.shape print(n_samples) print(n_features) # the sepal length, sepal width, petal length and petal width of the first sample (first flower) print(iris.data[0]) """ Explanation: The features of each sample flower are stored in the data attribute of the dataset: End of explanation """ print(iris.data.shape) print(iris.target.shape) print(iris.target) """ Explanation: The information about the class of each sample is stored in the target attribute of the dataset: End of explanation """ print(iris.target_names) """ Explanation: The names of the classes are stored in the last attribute, namely target_names: End of explanation """ %matplotlib inline import matplotlib.pyplot as plt x_index = 0 y_index = 1 # this formatter will label the colorbar with the correct target names formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)]) plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target) 
plt.colorbar(ticks=[0, 1, 2], format=formatter) plt.xlabel(iris.feature_names[x_index]) plt.ylabel(iris.feature_names[y_index]) """ Explanation: This data is four dimensional, but we can visualize two of the dimensions at a time using a simple scatter-plot. Again, we'll start by enabling matplotlib inline mode: End of explanation """ from sklearn import datasets """ Explanation: Quick Exercise: Change x_index and y_index in the above script and find a combination of two parameters which maximally separate the three classes. This exercise is a preview of dimensionality reduction, which we'll see later. Other Available Data Scikit-learn makes available a host of datasets for testing learning algorithms. They come in three flavors: Packaged Data: these small datasets are packaged with the scikit-learn installation, and can be downloaded using the tools in sklearn.datasets.load_* Downloadable Data: these larger datasets are available for download, and scikit-learn includes tools which streamline this process. These tools can be found in sklearn.datasets.fetch_* Generated Data: there are several datasets which are generated from models based on a random seed. These are available in the sklearn.datasets.make_* You can explore the available dataset loaders, fetchers, and generators using IPython's tab-completion functionality. After importing the datasets submodule from sklearn, type datasets.load_&lt;TAB&gt; or datasets.fetch_&lt;TAB&gt; or datasets.make_&lt;TAB&gt; to see a list of available functions. End of explanation """ from sklearn.datasets import get_data_home get_data_home() !ls $HOME/scikit_learn_data/ """ Explanation: The data downloaded using the fetch_ scripts are stored locally, within a subdirectory of your home directory. 
You can use the following to determine where it is: End of explanation """ from sklearn.datasets import load_digits digits = load_digits() digits.keys() n_samples, n_features = digits.data.shape print((n_samples, n_features)) print(digits.data[0]) print(digits.target) """ Explanation: Be warned: many of these datasets are quite large, and can take a long time to download! (especially on Conference wifi). If you start a download within the IPython notebook and you want to kill it, you can use ipython's "kernel interrupt" feature, available in the menu or using the shortcut Ctrl-m i. You can press Ctrl-m h for a list of all ipython keyboard shortcuts. Loading Digits Data Now we'll take a look at another dataset, one where we have to put a bit more thought into how to represent the data. We can explore the data in a similar manner as above: End of explanation """ print(digits.data.shape) print(digits.images.shape) """ Explanation: The target here is just the digit represented by the data. The data is an array of length 64... but what does this data mean? There's a clue in the fact that we have two versions of the data array: data and images. Let's take a look at them: End of explanation """ import numpy as np print(np.all(digits.images.reshape((1797, 64)) == digits.data)) """ Explanation: We can see that they're related by a simple reshaping: End of explanation """ print(digits.data.__array_interface__['data']) print(digits.images.__array_interface__['data']) """ Explanation: Aside... numpy and memory efficiency: You might wonder whether duplicating the data is a problem. In this case, the memory overhead is very small. 
Even though the arrays are different shapes, they point to the same memory block, which we can see by doing a bit of digging into the guts of numpy: End of explanation """ # set up the figure fig = plt.figure(figsize=(6, 6)) # figure size in inches fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the digits: each image is 8x8 pixels for i in range(64): ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[]) ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest') # label the image with the target value ax.text(0, 7, str(digits.target[i])) """ Explanation: The long integer here is a memory address: the fact that the two are the same tells us that the two arrays are looking at the same data. Let's visualize the data. It's a little bit more involved than the simple scatter-plot we used above, but we can do it rather quickly. End of explanation """ from sklearn.datasets import make_s_curve data, colors = make_s_curve(n_samples=1000) print(data.shape) print(colors.shape) from mpl_toolkits.mplot3d import Axes3D ax = plt.axes(projection='3d') ax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors) ax.view_init(10, -60) """ Explanation: We see now what the features mean. Each feature is a real-valued quantity representing the darkness of a pixel in an 8x8 image of a hand-written digit. Even though each sample has data that is inherently two-dimensional, the data matrix flattens this 2D data into a single vector, which can be contained in one row of the data matrix. Generated Data: the S-Curve One dataset often used as an example of a simple nonlinear dataset is the S-curve: End of explanation """ from sklearn.datasets import fetch_olivetti_faces # fetch the faces data # Use a script like above to plot the faces image data. # hint: plt.cm.bone is a good colormap for this data """ Explanation: This example is typically used with an unsupervised learning method called Locally Linear Embedding. 
We'll explore unsupervised learning in detail later in the tutorial. Exercise: working with the faces dataset Here we'll take a moment for you to explore the datasets yourself. Later on we'll be using the Olivetti faces dataset. Take a moment to fetch the data (about 1.4MB), and visualize the faces. You can copy the code used to visualize the digits above, and modify it for this data. End of explanation """ # %load solutions/02A_faces_plot.py faces = fetch_olivetti_faces() # set up the figure fig = plt.figure(figsize=(6, 6)) # figure size in inches fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the faces: for i in range(64): ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[]) ax.imshow(faces.images[i], cmap=plt.cm.bone, interpolation='nearest') """ Explanation: Solution: End of explanation """
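The image-to-data-matrix flattening used by both the digits and the faces datasets can be sketched with plain numpy. This is a standalone illustration: the random 8x8 arrays below are stand-ins for the real images, not data loaded from scikit-learn.

```python
import numpy as np

# stand-in for a stack of 100 8x8 grayscale images (like digits.images)
rng = np.random.default_rng(0)
images = rng.random((100, 8, 8))

# flatten each 2D image into one row of the [n_samples, n_features] matrix
data = images.reshape((images.shape[0], -1))
print(data.shape)  # (100, 64)

# as with the digits data, the reshape is a view: no memory is duplicated
assert np.shares_memory(images, data)
# row i of the data matrix is exactly image i, unraveled
assert np.array_equal(data[5], images[5].ravel())
```

The same check works on the real datasets, e.g. the `np.all(digits.images.reshape((1797, 64)) == digits.data)` comparison shown above.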
jupyter/nbgrader
nbgrader/docs/source/user_guide/downloaded/ps1/archive/ps1_hacker_attempt_2016-01-30-20-30-10_problem1.ipynb
bsd-3-clause
NAME = "Alyssa P. Hacker" COLLABORATORS = "Ben Bitdiddle" """ Explanation: Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below: End of explanation """ def squares(n): """Compute the squares of numbers from 1 to n, such that the ith element of the returned list equals i^2. """ if n < 1: raise ValueError return [i ** 2 for i in range(1, n + 1)] """ Explanation: For this problem set, we'll be using the Jupyter notebook: Part A (2 points) Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError. End of explanation """ squares(10) """Check that squares returns the correct output for several inputs""" assert squares(1) == [1] assert squares(2) == [1, 4] assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121] """Check that squares raises an error for invalid inputs""" try: squares(0) except ValueError: pass else: raise AssertionError("did not raise") try: squares(-4) except ValueError: pass else: raise AssertionError("did not raise") """ Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does: End of explanation """ def sum_of_squares(n): """Compute the sum of the squares of numbers from 1 to n.""" return sum(squares(n)) """ Explanation: Part B (1 point) Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality. 
End of explanation """ sum_of_squares(10) """Check that sum_of_squares returns the correct answer for various inputs.""" assert sum_of_squares(1) == 1 assert sum_of_squares(2) == 5 assert sum_of_squares(10) == 385 assert sum_of_squares(11) == 506 """Check that sum_of_squares relies on squares.""" orig_squares = squares del squares try: sum_of_squares(1) except NameError: pass else: raise AssertionError("sum_of_squares does not use squares") finally: squares = orig_squares """ Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get: End of explanation """ import math def hypotenuse(n): """Finds the hypotenuse of a right triangle with one side of length n and the other side of length n-1.""" # find (n-1)**2 + n**2 if (n < 2): raise ValueError("n must be >= 2") elif n == 2: sum1 = 5 sum2 = 0 else: sum1 = sum_of_squares(n) sum2 = sum_of_squares(n-2) return math.sqrt(sum1 - sum2) print(hypotenuse(2)) print(math.sqrt(2**2 + 1**2)) print(hypotenuse(10)) print(math.sqrt(10**2 + 9**2)) """ Explanation: Part C (1 point) Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function. $\sum_{i=1}^n i^2$ Part D (2 points) Find a usecase for your sum_of_squares function and implement that usecase in the cell below. End of explanation """
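The arithmetic trick in hypotenuse above — that sum_of_squares(n) - sum_of_squares(n-2) telescopes down to (n-1)^2 + n^2 — can be verified with a quick standalone sketch (the helper functions are re-declared here so the snippet runs on its own):

```python
import math

def squares(n):
    if n < 1:
        raise ValueError
    return [i ** 2 for i in range(1, n + 1)]

def sum_of_squares(n):
    return sum(squares(n))

# telescoping: sum_{i=1}^{n} i^2 - sum_{i=1}^{n-2} i^2 = (n-1)^2 + n^2
for n in range(3, 50):
    assert sum_of_squares(n) - sum_of_squares(n - 2) == (n - 1) ** 2 + n ** 2

# which is why hypotenuse(10) agrees with the direct computation
hyp = math.sqrt(sum_of_squares(10) - sum_of_squares(8))
assert math.isclose(hyp, math.sqrt(10 ** 2 + 9 ** 2))
print(hyp)
```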
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/GlobalDataScience/Mar272017-SantaClara-Deploy-SparkML-TensorflowAI/GlobalDataScience-SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
apache-2.0
import numpy as np import os import tensorflow as tf from tensorflow.contrib.session_bundle import exporter import time # make things wide from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) from IPython.display import clear_output, Image, display, HTML def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = "<stripped %d bytes>"%size return strip_def def show_graph(graph_def=None, width=1200, height=800, max_const_size=32, ungroup_gradients=False): if not graph_def: graph_def = tf.get_default_graph().as_graph_def() """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) data = str(strip_def) if ungroup_gradients: data = data.replace('"gradients/', '"b_') #print(data) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(data), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:{}px;height:{}px;border:0" srcdoc="{}"></iframe> """.format(width, height, code.replace('"', '&quot;')) display(HTML(iframe)) # If this errors out, increment the `export_version` variable, restart the Kernel, and re-run #flags = tf.app.flags #FLAGS = flags.FLAGS #flags.DEFINE_integer("batch_size", 10, "The batch size to train") batch_size = 10 #flags.DEFINE_integer("epoch_number", 10, "Number of epochs to run trainer") epoch_number = 10 #flags.DEFINE_integer("steps_to_validate", 
1,"Steps to validate and print loss") steps_to_validate = 1 #flags.DEFINE_string("checkpoint_dir", "./checkpoint/", "indicates the checkpoint directory") checkpoint_dir = "./checkpoint/" #flags.DEFINE_string("model_path", "./model/", "The export path of the model") #flags.DEFINE_string("model_path", "/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/", "The export path of the model") model_path = "/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/" #flags.DEFINE_integer("export_version", 27, "The version number of the model") from datetime import datetime seconds_since_epoch = int(datetime.now().strftime("%s")) export_version = seconds_since_epoch # If this errors out, increment the `export_version` variable, restart the Kernel, and re-run def main(): # Define training data x = np.ones(batch_size) y = np.ones(batch_size) # Define the model X = tf.placeholder(tf.float32, shape=[None], name="X") Y = tf.placeholder(tf.float32, shape=[None], name="yhat") w = tf.Variable([1.0], name="weight") b = tf.Variable([1.0], name="bias") loss = tf.square(Y - tf.mul(X, w) - b) train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss) predict_op = tf.mul(X, w) + b saver = tf.train.Saver() checkpoint_file = checkpoint_dir + "checkpoint.ckpt" if not os.path.exists(checkpoint_dir): os.makedirs(checkpoint_dir) # Start the session with tf.Session() as sess: sess.run(tf.initialize_all_variables()) ckpt = tf.train.get_checkpoint_state(checkpoint_dir) if ckpt and ckpt.model_checkpoint_path: print("Continue training from the model {}".format(ckpt.model_checkpoint_path)) saver.restore(sess, ckpt.model_checkpoint_path) saver_def = saver.as_saver_def() print(saver_def.filename_tensor_name) print(saver_def.restore_op_name) # Start training start_time = time.time() for epoch in range(epoch_number): sess.run(train_op, feed_dict={X: x, Y: y}) # Start validating if epoch % 
steps_to_validate == 0: end_time = time.time() print("[{}] Epoch: {}".format(end_time - start_time, epoch)) saver.save(sess, checkpoint_file) tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.pb', as_text=False) tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.txt', as_text=True) start_time = end_time # Print model variables w_value, b_value = sess.run([w, b]) print("The model of w: {}, b: {}".format(w_value, b_value)) # Export the model print("Exporting trained model to {}".format(model_path)) model_exporter = exporter.Exporter(saver) model_exporter.init( sess.graph.as_graph_def(), named_graph_signatures={ 'inputs': exporter.generic_signature({"features": X}), 'outputs': exporter.generic_signature({"prediction": predict_op}) }) model_exporter.export(model_path, tf.constant(export_version), sess) print('Done exporting!') if __name__ == "__main__": main() show_graph() """ Explanation: Who Am I? Chris Fregly Research Scientist @ PipelineIO Video Series Author "High Performance Tensorflow in Production" @ OReilly (Coming Soon) Founder @ Advanced Spark and Tensorflow Meetup Github Repo DockerHub Repo Slideshare YouTube Who Was I? Software Engineer @ Netflix, Databricks, IBM Spark Tech Center Infrastructure and Tools of this Talk Docker Images, Containers Useful Docker Image: AWS + GPU + Docker + Tensorflow + Spark Kubernetes Container Orchestration Across Clusters Weavescope Kubernetes Cluster Visualization Jupyter Notebooks What We're Using Here for Everything! Airflow Invoke Any Type of Workflow on Any Type of Schedule Github Commit New Model to Github, Airflow Workflow Triggered for Continuous Deployment DockerHub Maintains Docker Images Continuous Deployment Not Just for Code, Also for ML/AI Models!
Canary Release Deploy and Compare New Model Alongside Existing Metrics and Dashboards Not Just System Metrics, ML/AI Model Prediction Metrics NetflixOSS-based Prometheus Grafana Elasticsearch Separate Cluster Concerns Training/Admin Cluster Prediction Cluster Hybrid Cloud Deployment for eXtreme High Availability (XHA) AWS and Google Cloud Apache Spark Tensorflow + Tensorflow Serving Types of Model Deployment KeyValue ie. Recommendations In-memory: Redis, Memcache On-disk: Cassandra, RocksDB First-class Servable in Tensorflow Serving PMML It's Useful and Well-Supported Apple, Cisco, Airbnb, HomeAway, etc Please Don't Re-build It - Reduce Your Technical Debt! Native Code (CPU and GPU) Hand-coded (Python + Pickling) ie. Generate Java Code from PMML Tensorflow Models freeze_graph.py: Combine Tensorflow Graph (Static) with Trained Weights (Checkpoints) into Single Deployable Model Model Deployments and Rollbacks Mutable Each New Model is Deployed to Live, Running Container Immutable Each New Model is a New Docker Image Optimizing Tensorflow Models for Serving Python Scripts optimize_graph_for_inference.py Pete Warden's Blog Graph Transform Tool Compile (Tensorflow 1.0+) XLA Compiler Compiles 3 graph operations (input, operation, output) into 1 operation Removes need for Tensorflow Runtime (20 MB is significant on tiny devices) Allows new backends for hardware-specific optimizations (better portability) tfcompile Convert Graph into executable code Compress/Distill Ensemble Models Convert ensembles or other complex models into smaller models Re-score training data with output of model being distilled Train smaller model to produce same output Output of smaller model learns more information than original label Optimizing Model Serving Runtime Environment Throughput Option 1: Add more Tensorflow Serving servers behind load balancer Option 2: Enable request batching in each Tensorflow Serving Option Trade-offs: Higher Latency (bad) for Higher Throughput (good) 
$TENSORFLOW_SERVING_HOME/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=tensorflow_minimal --model_base_path=/root/models/tensorflow_minimal/export --enable_batching=true --max_batch_size=1000000 --batch_timeout_micros=10000 --max_enqueued_batches=1000000 Latency The deeper the model, the longer the latency Start inference in parallel where possible (ie. user inference in parallel with item inference) Pre-load common inputs from database (ie. user attributes, item attributes) Pre-compute/partial-compute common inputs (ie. popular word embeddings) Memory Word embeddings are huge! Use hashId for each word Off-load embedding matrices to parameter server and share between serving servers Demos!! Train and Deploy Tensorflow AI Model (Simple Model, Immutable Deploy) Train Tensorflow AI Model End of explanation """ !ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export !ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027 !git status !git add --all /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027/ !git status !git commit -m "updated tensorflow model" !git status # If this fails with "Permission denied", use terminal within jupyter to manually `git push` !git push """ Explanation: Commit and Deploy New Tensorflow AI Model Commit Model to Github End of explanation """ from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) from IPython.display import clear_output, Image, display, HTML html = '<iframe width=100% height=500px src="http://demo.pipeline.io:8080/admin">' display(HTML(html)) """ Explanation: Airflow Workflow Deploys New Model through Github Post-Commit Webhook to Triggers End of explanation """ !kubectl scale --context=awsdemo --replicas=2 rc spark-worker-2-0-1 !kubectl get pod --context=awsdemo """ Explanation: Train and Deploy Spark ML Model (Airbnb Model, Mutable 
Deploy) Scale Out Spark Training Cluster Kubernetes CLI End of explanation """ from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) from IPython.display import clear_output, Image, display, HTML html = '<iframe width=100% height=500px src="http://kubernetes-aws.demo.pipeline.io">' display(HTML(html)) """ Explanation: Weavescope Kubernetes AWS Cluster Visualization End of explanation """ from pyspark.ml.linalg import Vectors from pyspark.ml.feature import VectorAssembler, StandardScaler from pyspark.ml.feature import OneHotEncoder, StringIndexer from pyspark.ml import Pipeline, PipelineModel from pyspark.ml.regression import LinearRegression # You may need to Reconnect (more than Restart) the Kernel to pick up changes to these sett import os master = '--master spark://spark-master-2-1-0:7077' conf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m' packages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1' jars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar' py_files = '--py-files /root/lib/jpmml.py' os.environ['PYSPARK_SUBMIT_ARGS'] = master \ + ' ' + conf \ + ' ' + packages \ + ' ' + jars \ + ' ' + py_files \ + ' ' + 'pyspark-shell' print(os.environ['PYSPARK_SUBMIT_ARGS']) from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() """ Explanation: Generate PMML from Spark ML Model End of explanation """ df = spark.read.format("csv") \ .option("inferSchema", "true").option("header", "true") \ .load("s3a://datapalooza/airbnb/airbnb.csv.bz2") df.registerTempTable("df") print(df.head()) print(df.count()) """ Explanation: Step 0: Load Libraries and Data End of explanation """ df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null") df_filtered.registerTempTable("df_filtered") df_final = spark.sql(""" select id, city, case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 
'DC', 'WA') then state else 'Other' end as state, space, cast(price as double) as price, cast(bathrooms as double) as bathrooms, cast(bedrooms as double) as bedrooms, room_type, host_is_super_host, cancellation_policy, cast(case when security_deposit is null then 0.0 else security_deposit end as double) as security_deposit, price_per_bedroom, cast(case when number_of_reviews is null then 0.0 else number_of_reviews end as double) as number_of_reviews, cast(case when extra_people is null then 0.0 else extra_people end as double) as extra_people, instant_bookable, cast(case when cleaning_fee is null then 0.0 else cleaning_fee end as double) as cleaning_fee, cast(case when review_scores_rating is null then 80.0 else review_scores_rating end as double) as review_scores_rating, cast(case when square_feet is not null and square_feet > 100 then square_feet when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0) then 350.0 else 380 * bedrooms end as double) as square_feet from df_filtered """).persist() df_final.registerTempTable("df_final") df_final.select("square_feet", "price", "bedrooms", "bathrooms", "cleaning_fee").describe().show() print(df_final.count()) print(df_final.schema) # Most popular cities spark.sql(""" select state, count(*) as ct, avg(price) as avg_price, max(price) as max_price from df_final group by state order by count(*) desc """).show() # Most expensive popular cities spark.sql(""" select city, count(*) as ct, avg(price) as avg_price, max(price) as max_price from df_final group by city order by avg(price) desc """).filter("ct > 25").show() """ Explanation: Step 1: Clean, Filter, and Summarize the Data End of explanation """ continuous_features = ["bathrooms", \ "bedrooms", \ "security_deposit", \ "cleaning_fee", \ "extra_people", \ "number_of_reviews", \ "square_feet", \ "review_scores_rating"] categorical_features = ["room_type", \ "host_is_super_host", \ "cancellation_policy", \ "instant_bookable", \ "state"] """ 
Explanation: Step 2: Define Continuous and Categorical Features End of explanation """ [training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2]) """ Explanation: Step 3: Split Data into Training and Validation End of explanation """ continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol="unscaled_continuous_features") continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features", outputCol="scaled_continuous_features", \ withStd=True, withMean=False) """ Explanation: Step 4: Continuous Feature Pipeline End of explanation """ categorical_feature_indexers = [StringIndexer(inputCol=x, \ outputCol="{}_index".format(x)) \ for x in categorical_features] categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \ outputCol="oh_encoder_{}".format(x.getOutputCol() )) \ for x in categorical_feature_indexers] """ Explanation: Step 5: Categorical Feature Pipeline End of explanation """ feature_cols_lr = [x.getOutputCol() \ for x in categorical_feature_one_hot_encoders] feature_cols_lr.append("scaled_continuous_features") feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \ outputCol="features_lr") """ Explanation: Step 6: Assemble our Features and Feature Pipeline End of explanation """ linear_regression = LinearRegression(featuresCol="features_lr", \ labelCol="price", \ predictionCol="price_prediction", \ maxIter=10, \ regParam=0.3, \ elasticNetParam=0.8) estimators_lr = \ [continuous_feature_assembler, continuous_feature_scaler] \ + categorical_feature_indexers + categorical_feature_one_hot_encoders \ + [feature_assembler_lr] + [linear_regression] pipeline = Pipeline(stages=estimators_lr) pipeline_model = pipeline.fit(training_dataset) print(pipeline_model) """ Explanation: Step 7: Train a Linear Regression Model End of explanation """ from jpmml import toPMMLBytes model_bytes = toPMMLBytes(spark, training_dataset, pipeline_model) print(model_bytes.decode("utf-8")) """ 
Explanation: Step 8: Convert PipelineModel to PMML End of explanation """ import urllib.request namespace = 'default' model_name = 'airbnb' version = '1' update_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml/%s/%s/%s' % (namespace, model_name, version) update_headers = {} update_headers['Content-type'] = 'application/xml' req = urllib.request.Request(update_url, \ headers=update_headers, \ data=model_bytes) resp = urllib.request.urlopen(req) print(resp.status) # Should return Http Status 200 import urllib.parse import json namespace = 'default' model_name = 'airbnb' version = '1' evaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml/%s/%s/%s' % (namespace, model_name, version) evaluate_headers = {} evaluate_headers['Content-type'] = 'application/json' input_params = '{"bathrooms":5.0, \ "bedrooms":4.0, \ "security_deposit":175.00, \ "cleaning_fee":25.0, \ "extra_people":1.0, \ "number_of_reviews": 2.0, \ "square_feet": 250.0, \ "review_scores_rating": 2.0, \ "room_type": "Entire home/apt", \ "host_is_super_host": "0.0", \ "cancellation_policy": "flexible", \ "instant_bookable": "1.0", \ "state": "CA"}' encoded_input_params = input_params.encode('utf-8') req = urllib.request.Request(evaluate_url, \ headers=evaluate_headers, \ data=encoded_input_params) resp = urllib.request.urlopen(req) print(resp.read()) """ Explanation: Push PMML to Live, Running Spark ML Model Server (Mutable) End of explanation """ from urllib import request sourceBytes = ' \n\ private String str; \n\ \n\ public void initialize(Map<String, Object> args) { \n\ } \n\ \n\ public Object predict(Map<String, Object> inputs) { \n\ String id = (String)inputs.get("id"); \n\ \n\ return id.equals("21619"); \n\ } \n\ '.encode('utf-8') from urllib import request namespace = 'default' model_name = 'java_equals' version = '1' update_url = 'http://prediction-java-aws.demo.pipeline.io/update-java/%s/%s/%s' % (namespace, model_name, version) update_headers = {} 
update_headers['Content-type'] = 'text/plain' req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes) resp = request.urlopen(req) generated_code = resp.read() print(generated_code.decode('utf-8')) from urllib import request namespace = 'default' model_name = 'java_equals' version = '1' evaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version) evaluate_headers = {} evaluate_headers['Content-type'] = 'application/json' input_params = '{"id":"21618"}' encoded_input_params = input_params.encode('utf-8') req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params) resp = request.urlopen(req) print(resp.read()) # Should return false from urllib import request namespace = 'default' model_name = 'java_equals' version = '1' evaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version) evaluate_headers = {} evaluate_headers['Content-type'] = 'application/json' input_params = '{"id":"21619"}' encoded_input_params = input_params.encode('utf-8') req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params) resp = request.urlopen(req) print(resp.read()) # Should return true """ Explanation: Deploy Java-based Model (Simple Model, Mutable Deploy) End of explanation """ from urllib import request sourceBytes = ' \n\ public Map<String, Object> data = new HashMap<String, Object>(); \n\ private String host = "http://prediction-keyvalue-aws.demo.pipeline.io/";\n\ private String path = "evaluate-keyvalue/default/1"; \n\ \n\ public void initialize(Map<String, Object> args) { \n\ data.put("url", host + path); \n\ } \n\ \n\ public Object predict(Map<String, Object> inputs) { \n\ try { \n\ String userId = (String)inputs.get("userId"); \n\ String itemId = (String)inputs.get("itemId"); \n\ String url = data.get("url") + "/" + userId + "/" + itemId; \n\ \n\ return 
org.apache.http.client.fluent.Request \n\ .Get(url) \n\ .execute() \n\ .returnContent(); \n\ \n\ } catch(Exception exc) { \n\ System.out.println(exc); \n\ throw exc; \n\ } \n\ } \n\ '.encode('utf-8') from urllib import request namespace = 'default' model_name = 'java_httpclient' version = '1' update_url = 'http://prediction-java-aws.demo.pipeline.io/update-java/%s/%s/%s' % (namespace, model_name, version) update_headers = {} update_headers['Content-type'] = 'text/plain' req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes) resp = request.urlopen(req) generated_code = resp.read() print(generated_code.decode('utf-8')) from urllib import request namespace = 'default' model_name = 'java_httpclient' version = '1' evaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version) evaluate_headers = {} evaluate_headers['Content-type'] = 'application/json' input_params = '{"userId":"21619", "itemId":"10006"}' encoded_input_params = input_params.encode('utf-8') req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params) resp = request.urlopen(req, timeout=3) print(resp.read()) # Should return false """ Explanation: Deploy Java Model (HttpClient Model, Mutable Deploy) End of explanation """ from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) from IPython.display import clear_output, Image, display, HTML html = '<iframe width=1200px height=500px src="http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Predictions%20-%20AWS%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-aws.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22Predictions%20-%20GCP%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-gcp.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D">' display(HTML(html)) """ 
Explanation: Load Test and Compare Cloud Providers (AWS and Google) Monitor Performance Across Cloud Providers NetflixOSS Services Dashboard (Hystrix) End of explanation """ from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) from IPython.display import clear_output, Image, display, HTML html = '<iframe width=1200px height=500px src="http://grafana.demo.pipeline.io">' display(HTML(html)) """ Explanation: Grafana + Prometheus Dashboard End of explanation """ # Spark ML - PMML - Airbnb !kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml !kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml # Codegen - Java - Simple !kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml !kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml # Tensorflow AI - Tensorflow Serving - Simple !kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml !kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml """ Explanation: Start Load Tests Run JMeter Tests from Local Laptop (Limited by Laptop) Run Headless JMeter Tests from Training Clusters in Cloud End of explanation """ !kubectl delete --context=awsdemo rc loadtest-aws-airbnb !kubectl delete --context=gcpdemo rc loadtest-aws-airbnb !kubectl delete --context=awsdemo rc loadtest-aws-equals !kubectl delete --context=gcpdemo rc loadtest-aws-equals !kubectl delete --context=awsdemo rc loadtest-aws-minimal !kubectl delete --context=gcpdemo rc loadtest-aws-minimal """ Explanation: End Load Tests End of explanation """ !kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow !kubectl get pod --context=awsdemo !kubectl rolling-update prediction-tensorflow --context=gcpdemo 
--image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow !kubectl get pod --context=gcpdemo """ Explanation: Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy) Kubernetes CLI End of explanation """
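Before launching the cluster-wide JMeter load tests above, a quick single-endpoint latency check from Python can catch obvious problems early. This is a minimal sketch, not part of the pipeline.io demo code: the `measure_latencies` helper and the stand-in workload are our own names, and in practice you would pass a lambda wrapping the `urllib` evaluate call shown earlier.

```python
import time
import statistics

def measure_latencies(request_fn, n=50):
    # Call request_fn n times and collect per-call latency in milliseconds
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        request_fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": latencies[max(0, int(0.95 * len(latencies)) - 1)],
        "max_ms": latencies[-1],
    }

# Stand-in workload instead of a real HTTP request (assumption, for illustration)
stats = measure_latencies(lambda: sum(range(10000)), n=50)
print(stats)
```

The same helper works against the real endpoint by swapping in a lambda that performs the `request.urlopen` call.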
Nozdi/first-steps-with-pandas-workshop
first-steps-with-pandas-with-solutions.ipynb
mit
import platform print('Python: ' + platform.python_version()) import numpy as np print('numpy: ' + np.__version__) import pandas as pd print('pandas: ' + pd.__version__) import scipy print('scipy: ' + scipy.__version__) import sklearn print('scikit-learn: ' + sklearn.__version__) import matplotlib as plt print('matplotlib: ' + plt.__version__) import flask print('flask: ' + flask.__version__) """ Explanation: Welcome to the 'First steps with pandas'! After this workshop you can (hopefully) call yourselves Data Scientists! Gitter: https://gitter.im/first-steps-with-pandas-workshop Before coding, let's check whether we have proper versions of libraries End of explanation """ # In case of no Internet, use: # pd.read_json('data/cached_Python.json') ( pd.read_json('https://raw.githubusercontent.com/Nozdi/first-steps-with-pandas-workshop/master/data/cached_python.json') .resample('1W') .mean() ['daily_views'] ) """ Explanation: What is pandas? pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. Why to use it? It has ready solutions for most of data-related problems fast development no reinventing wheel fewer mistakes/bugs End of explanation """ some_data = [ list(range(1,100)) for x in range(1,1000) ] some_df = pd.DataFrame(some_data) def standard_way(data): return [[col*2 for col in row] for row in data] def pandas_way(df): return df * 2 %timeit standard_way(some_data) %timeit pandas_way(some_df) """ Explanation: It is easy to pick up few simple concepts that are very powerful easy, standardized API good code readability It is reasonably fast End of explanation """ strengths = pd.Series([400, 200, 300, 400, 500]) strengths names = pd.Series(["Batman", "Robin", "Spiderman", "Robocop", "Terminator"]) names """ Explanation: It has a very cool name. 
https://c1.staticflickr.com/5/4058/4466498508_35a8172ac1_b.jpg Library highlights http://pandas.pydata.org/#library-highlights<br/> http://pandas.pydata.org/pandas-docs/stable/api.html Data structures Series Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). End of explanation """ heroes = pd.DataFrame({ 'hero': names, 'strength': strengths }) heroes other_heroes = pd.DataFrame([ dict(hero="Hercules", strength=800), dict(hero="Conan") ]) other_heroes another_heroes = pd.DataFrame([ pd.Series(["Wonder Woman", 10, 3], index=["hero", "strength", "cookies"]), pd.Series(["Xena", 20, 0], index=["hero", "strength", "cookies"]) ]) another_heroes """ Explanation: DataFrame DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. Creating End of explanation """ another_heroes.columns another_heroes.shape another_heroes.info() """ Explanation: Meta data End of explanation """ another_heroes['cookies'] another_heroes.cookies another_heroes[ ['hero', 'cookies'] ] """ Explanation: Selecting [string] --&gt; Series [ list of strings ] --&gt; DataFrame End of explanation """ another_heroes[['hero', 'cookies']][['cookies']] another_heroes[['hero', 'cookies']][['cookies']]['cookies'] """ Explanation: Chaining (most of operations on DataFrame returns new DataFrame or Series) End of explanation """ # Solution here titles = pd.Series(["Avatar", "Pirates of the Caribbean: At World's End", "Spectre"]) imdb_scores = pd.Series([7.9, 7.1, 6.8]) pd.DataFrame({'movie_title': titles, 'imdb_score': imdb_scores}) """ Explanation: EXERCISE Create DataFrame presented below in 3 different ways movie_title imdb_score 0 Avatar 7.9 1 Pirates of the Caribbean: At World's End 7.1 2 Spectre 6.8 Help: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#from-dict-of-series-or-dicts With dict 
of Series End of explanation """ # Solution here pd.DataFrame([ dict(movie_title="Avatar", imdb_score=7.9), dict(movie_title="Pirates of the Caribbean: At World's End", imdb_score=7.1), dict(movie_title="Spectre", imdb_score=6.8), ]) """ Explanation: With list of dicts End of explanation """ # Solution here pd.DataFrame([ pd.Series(["Avatar", 7.9], index=['movie_title', 'imdb_score']), pd.Series(["Pirates of the Caribbean: At World's End", 7.1], index=['movie_title', 'imdb_score']), pd.Series(["Spectre", 6.8], index=['movie_title', 'imdb_score']) ]) """ Explanation: With list of Series End of explanation """ # Uncomment and press tab.. # pd.read_ # SQL, csv, hdf # pd.read_csv? # executing bash in jupyter notebook !head -c 500 data/cached_python.json pd.read_json('data/cached_python.json') """ Explanation: I/O part I Reading popular formats / data sources End of explanation """ # Solution here movies = pd.read_csv('data/movies.csv') movies.head() """ Explanation: EXERCISE Load movies from data/movies.csv to variable called movies End of explanation """ # Solution here print(movies.shape) print(movies.columns) """ Explanation: Analyze what dimensions and columns it has End of explanation """ heroes """ Explanation: Filtering End of explanation """ heroes['strength'] == 400 heroes[heroes['strength'] == 400] heroes[heroes['strength'] > 400] """ Explanation: Boolean indexing End of explanation """ try: heroes[200 < heroes['strength'] < 400] except ValueError: print("This cool Python syntax ain't work :(") heroes[ (heroes['strength'] > 200) & (heroes['strength'] < 400) ] heroes[ (heroes['strength'] <= 200) | (heroes['strength'] >= 400) ] """ Explanation: Multiple conditions End of explanation """ ~(heroes['strength'] == 400) heroes['strength'] != 400 heroes[~( (heroes['strength'] <= 200) | (heroes['strength'] >= 400) )] """ Explanation: Negation ~ is a negation operator End of explanation """ heroes[ heroes['hero'].isin(['Batman', 'Robin']) ] """ Explanation: Filtering 
for containing one of many values (SQL's IN)
End of explanation
"""

# Solution here
movies[movies['director_name'] == "Clint Eastwood"]
"""
Explanation: EXERCISE
What movies have been directed by Clint Eastwood?
End of explanation
"""

# Solution here
movies[movies['gross'] > 500e6]['movie_title']
"""
Explanation: What movies have earned above $500m?
End of explanation
"""

# Solution here
movies[movies['language'] == 'Polish']['movie_title']
"""
Explanation: Are there any Polish movies?
End of explanation
"""

# Solution here
movies[
    (movies['movie_facebook_likes'] > 100000) &
    (movies['imdb_score'] > 8.5)
]['movie_title']
"""
Explanation: What are really popular great movies? (> 100k FB likes, > 8.5 IMDB score)
End of explanation
"""

# Solution here
brutals = ["Jason Statham", "Sylvester Stallone"]
god = "Morgan Freeman"
movies[
    (movies['actor_1_name'].isin(brutals)) |
    (movies['actor_1_name'] == god)
]['movie_title'].head()
"""
Explanation: In what movies was the main role played by brutals like "Jason Statham", "Sylvester Stallone" or god ("Morgan Freeman")?
End of explanation """ heroes.values """ Explanation: I/O part O As numpy array End of explanation """ heroes.to_dict() heroes.to_dict('records') """ Explanation: As (list) of dicts End of explanation """ heroes.to_json() heroes.to_json(orient='records') heroes.to_csv() heroes.to_csv(index=False) heroes.to_csv('data/heroes.csv', index=False) """ Explanation: As popular data format End of explanation """ # Solution here cols = [ 'movie_title', 'actor_1_name', 'actor_2_name', 'actor_3_name', 'budget' ] movies[movies['budget'] > 200e6][cols].to_csv("data/expensive-cast.csv", index=False) """ Explanation: EXERCISE Create a csv with movie titles and cast (actors) of movies with budget above $200m End of explanation """ # Solution here cols = [ 'movie_title', 'movie_facebook_likes' ] movies[movies['director_name'] == 'Christopher Nolan'][cols].to_dict('r') """ Explanation: Create a list of dicts with movie titles and facebook likes of all Christopher Nolan's movies End of explanation """ heroes """ Explanation: New columns End of explanation """ heroes['health'] = np.NaN heroes.head() heroes['health'] = 100 heroes.head() heroes['height'] = [180, 170, 175, 190, 185] heroes heroes['is_hungry'] = pd.Series([True, False, False, True, True]) heroes """ Explanation: Creating new column End of explanation """ heroes['strength'] * 2 heroes['strength'] / heroes['height'] heroes['strength_per_cm'] = heroes['strength'] / heroes['height'] heroes """ Explanation: Vector operations End of explanation """ pd.Series([1, 2, 3]).map(lambda x: x**3) pd.Series(['Batman', 'Robin']).map(lambda x: x[:2]) # however, more idiomatic approach for strings is to do.. 
pd.Series(['Batman', 'Robin']).str[:2] pd.Series(['Batman', 'Robin']).str.lower() pd.Series([ ['Batman', 'Robin'], ['Robocop'] ]).map(len) heroes['code'] = heroes['hero'].map(lambda name: name[:2]) heroes heroes['effective_strength'] = heroes.apply( lambda row: (not row['is_hungry']) * row['strength'], axis=1 ) heroes.head() heroes[['health', 'strength']] = heroes[['health', 'strength']].applymap( lambda x: x + 100 ) heroes """ Explanation: Map, apply, applymap, str End of explanation """ heroes['strength'].value_counts() heroes.sort_values('strength') heroes.sort_values( ['is_hungry', 'code'], ascending=[False, True] ) """ Explanation: Cheatsheet map: 1 =&gt; 1 apply: n =&gt; 1 applymap: n =&gt; n Sorting and value counts (bonus skill) End of explanation """ # Solution here movies['profitability'] = movies['gross'] / movies['budget'] movies.sort_values('profitability', ascending=False).head(10) """ Explanation: EXERCISE What are 10 most profitable movies? (ratio between gross and budget) End of explanation """ # Solution here movies['first_genre'] = movies['genres'].str.split('|').str[0] movies.head() """ Explanation: Create a column 'first_genre'. What is the distribution of values in this column? 
End of explanation """ heroes """ Explanation: Visualizing data End of explanation """ heroes.describe() """ Explanation: Basic stats End of explanation """ %matplotlib inline pd.Series([1, 2, 3]).plot() pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot() pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot(kind='bar') pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot( kind='bar', figsize=(15, 6) ) pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot(kind='pie') heroes.plot() indexed_heroes = heroes.set_index('hero') indexed_heroes indexed_heroes.plot() indexed_heroes.plot(kind='barh') indexed_heroes.plot(kind='bar', subplots=True, figsize=(15, 15)) indexed_heroes[['height', 'strength']].plot(kind='bar') heroes.plot(x='hero', y=['height', 'strength'], kind='bar') # alternative to subplots heroes.plot( x='hero', y=['height', 'strength'], kind='bar', secondary_y='strength', figsize=(10,8) ) heroes.plot( x='hero', y=['height', 'strength'], kind='bar', secondary_y='strength', title='Super plot of super heroes', figsize=(10,8) ) """ Explanation: Plotting End of explanation """ heroes.hist(figsize=(10, 10)) heroes.hist( figsize=(10, 10), bins=2 ) """ Explanation: Histogram End of explanation """ heroes.describe()['strength'].plot(kind='bar') """ Explanation: DataFrames everywhere.. are easy to plot End of explanation """ # Solution here nolan_movies = movies[movies['director_name'] == 'Christopher Nolan'] nolan_movies = nolan_movies.set_index('movie_title') nolan_movies['gross'].plot(kind='bar') """ Explanation: EXERCISE Create a chart presenting grosses of movies directed by Christopher Nolan End of explanation """ # Solution here movies['duration'].hist(bins=25) """ Explanation: What are typical durations of the movies? End of explanation """ # Solution here movies['first_genre'].value_counts().plot( kind='pie', figsize=(15,15) ) """ Explanation: What is percentage distribution of first genre? 
(cake) End of explanation """ movie_heroes = pd.DataFrame({ 'hero': ['Batman', 'Robin', 'Spiderman', 'Robocop', 'Lex Luthor', 'Dr Octopus'], 'movie': ['Batman', 'Batman', 'Spiderman', 'Robocop', 'Spiderman', 'Spiderman'], 'strength': [400, 100, 400, 560, 89, 300], 'speed': [100, 10, 200, 1, 20, None], }) movie_heroes = movie_heroes.set_index('hero') movie_heroes movie_heroes.groupby('movie') list(movie_heroes.groupby('movie')) """ Explanation: Aggregation Grouping https://www.safaribooksonline.com/library/view/learning-pandas/9781783985128/graphics/5128OS_09_01.jpg End of explanation """ movie_heroes.groupby('movie').size() movie_heroes.groupby('movie').count() movie_heroes.groupby('movie')['speed'].sum() movie_heroes.groupby('movie').mean() movie_heroes.groupby('movie').apply( lambda group: group['strength'] / group['strength'].max() ) movie_heroes.groupby('movie').agg({ 'speed': 'mean', 'strength': 'max', }) movie_heroes = movie_heroes.reset_index() movie_heroes movie_heroes.groupby(['movie', 'hero']).mean() """ Explanation: Aggregating End of explanation """ # Solution here movies.groupby('title_year')['gross'].max().tail(10).plot(kind='bar') """ Explanation: EXERCISE What was maximal gross in each year? End of explanation """ # Solution here ( movies .groupby('director_name') ['gross'] .mean() .sort_values(ascending=False) .head(3) ) """ Explanation: Which director earns the most on average? End of explanation """ movie_heroes apetite = pd.DataFrame([ dict(hero='Spiderman', is_hungry=True), dict(hero='Robocop', is_hungry=False) ]) apetite movie_heroes['is_hungry'] = apetite['is_hungry'] movie_heroes apetite.index = [2, 3] movie_heroes['is_hungry'] = apetite['is_hungry'] movie_heroes """ Explanation: Index related operations Data alignment on Index End of explanation """ indexed_movie_heroes = movie_heroes.set_index('hero') indexed_movie_heroes indexed_apetite = apetite.set_index('hero') indexed_apetite # and alignment works well automagically.. 
indexed_movie_heroes['is_hungry'] = indexed_apetite['is_hungry'] indexed_movie_heroes """ Explanation: Setting index End of explanation """ movie_heroes apetite # couple of other arguments available here pd.merge( movie_heroes[['hero', 'speed']], apetite, on=['hero'], how='outer' ) """ Explanation: Merging two DFs (a'la SQL join) End of explanation """ spiderman_meals = pd.DataFrame([ dict(time='2016-10-15 10:00', calories=300), dict(time='2016-10-15 13:00', calories=900), dict(time='2016-10-15 15:00', calories=1200), dict(time='2016-10-15 21:00', calories=700), dict(time='2016-10-16 07:00', calories=1600), dict(time='2016-10-16 13:00', calories=600), dict(time='2016-10-16 16:00', calories=900), dict(time='2016-10-16 20:00', calories=500), dict(time='2016-10-16 21:00', calories=300), dict(time='2016-10-17 08:00', calories=900), ]) spiderman_meals spiderman_meals.dtypes spiderman_meals['time'] = pd.to_datetime(spiderman_meals['time']) spiderman_meals.dtypes spiderman_meals spiderman_meals = spiderman_meals.set_index('time') spiderman_meals spiderman_meals.index """ Explanation: DateTime operations End of explanation """ spiderman_meals["2016-10-15"] spiderman_meals["2016-10-16 10:00":] spiderman_meals["2016-10-16 10:00":"2016-10-16 20:00"] spiderman_meals["2016-10"] """ Explanation: Filtering End of explanation """ spiderman_meals.resample('1D').sum() spiderman_meals.resample('1H').mean() spiderman_meals.resample('1H').ffill() spiderman_meals.resample('1D').first() """ Explanation: Resampling (downsampling and upsampling) End of explanation """ # Solution here force_awakens_tweets = pd.read_csv( 'data/theforceawakens_tweets.csv', parse_dates=['created_at'], index_col='created_at' ) force_awakens_tweets.head() """ Explanation: EXERCISE Read Star Wars: The Force Awakens's tweets from data/theforceawakens_tweets.csv. Create DateTimeIndex from created_at column. 
End of explanation """ # Solution here force_awakens_tweets.resample('1D').count() """ Explanation: How many tweets did Star Wars: The Force Awakens have in each of last days? End of explanation """ # Solution here ( force_awakens_tweets .resample('4H') .count() .plot(figsize=(15, 5)) ) ( force_awakens_tweets["2016-09-29":] .resample('1H') .count() .plot(figsize=(15, 5)) ) """ Explanation: What were the most popular tweeting times of the day for that movie? End of explanation """ heroes_with_missing = pd.DataFrame([ ('Batman', None, None), ('Robin', None, 100), ('Spiderman', 400, 90), ('Robocop', 500, 95), ('Terminator', 600, None) ], columns=['hero', 'strength', 'health']) heroes_with_missing heroes_with_missing.dropna() heroes_with_missing.fillna(0) heroes_with_missing.fillna({'strength': 10, 'health': 20}) heroes_with_missing.fillna(heroes_with_missing.min()) heroes_with_missing.fillna(heroes_with_missing.median()) """ Explanation: Advanced topics + Advanced exercises Filling missing data End of explanation """ pd.DataFrame({'x': [1, 2], 'y': [10, 20]}).plot(x='x', y='y', kind='scatter') from sklearn.linear_model import LinearRegression X=[ [1], [2] ] y=[ 10, 20 ] clf = LinearRegression() clf.fit(X, y) clf.predict([ [0.5], [2], [4] ]) X = np.array([ [1], [2] ]) y = np.array([ 10, 20 ]) X clf = LinearRegression() clf.fit(X, y) clf.predict( np.array([ [0.5], [2], [4] ]) ) train_df = pd.DataFrame([ (1, 10), (2, 20), ], columns=['x', 'y']) train_df clf = LinearRegression() clf.fit(train_df[['x']], train_df['y']) clf.predict([[0.5]]) test_df = pd.DataFrame({'x': [0.5, 1.5, 4]}) test_df clf.predict(test_df[['x']]) test_df['y'] = clf.predict(test_df[['x']]) test_df train_df['color'] = 'blue' test_df['color'] = 'red' all_df = train_df.append(test_df) all_df.plot(x='x', y='y', kind='scatter', figsize=(10, 8), color=all_df['color']) """ Explanation: Scikit-learn End of explanation """ # Solution here from sklearn.linear_model import LinearRegression FEATURES = 
['num_voted_users', 'imdb_score'] TARGET = 'gross' movies_with_data = movies[FEATURES + [TARGET]].dropna() X = movies_with_data[FEATURES].values y = movies_with_data[TARGET].values clf = LinearRegression() clf.fit(X, y) clf.predict([ [800000, 8.0], [400000, 8.0], [400000, 4.0], [ 40000, 8.0], ]) """ Explanation: More models to try: http://scikit-learn.org/stable/supervised_learning.html#supervised-learning EXERCISE Integration with scikit-learn: Create a model that tries to predict gross of movie. Use any features of the movies dataset. End of explanation """ # Solution here def discover_similar_plot(target_keywords, threshold=0.5): movies_with_plot = movies.dropna( subset=['plot_keywords'] ).copy() movies_with_plot['plot_keywords_set'] = movies_with_plot[ 'plot_keywords' ].str.split('|').map(set) movies_with_plot['match_count'] = movies_with_plot[ 'plot_keywords_set' ].map( lambda keywords: len(keywords.intersection(target_keywords)) ) return movies_with_plot[ (movies_with_plot['match_count'] >= threshold*len(target_keywords)) ] discover_similar_plot(['magic', 'harry', 'wizard'])['movie_title'] """ Explanation: Create a method discovering movies with plot keywords similar to the given list of keywords (i.e. ['magic', 'harry', 'wizard']) End of explanation """ # Solution in flask_exercise.py """ Explanation: Integration with Flask In the file flask_exercise.py you'll find the scaffolding for Flask app.<br/>Create endpoints returning: - all movie titles available in the movies dataset - 10 worst rated movies ever - 10 best rated (imdb_score) movies in a given year End of explanation """
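The three endpoints in the Flask exercise above reduce to short pandas queries, which can be prototyped before any Flask wiring. Below is a minimal sketch on a tiny hand-made stand-in for the movies dataset (the helper names and toy values are ours, not from the workshop):

```python
import pandas as pd

# Toy stand-in for the real movies dataset (illustrative values only)
movies = pd.DataFrame({
    "movie_title": ["A", "B", "C", "D"],
    "imdb_score": [8.7, 3.1, 6.5, 9.0],
    "title_year": [2010, 2011, 2010, 2012],
})

def worst_rated(df, n=10):
    # n lowest-scored titles, worst first
    return df.sort_values("imdb_score").head(n)["movie_title"].tolist()

def best_rated_in_year(df, year, n=10):
    # n highest-scored titles released in the given year
    in_year = df[df["title_year"] == year]
    return in_year.sort_values("imdb_score", ascending=False).head(n)["movie_title"].tolist()

print(worst_rated(movies, n=2))          # ['B', 'C']
print(best_rated_in_year(movies, 2010))  # ['A', 'C']
```

Each helper returns a plain list, so a Flask view only needs to jsonify the result.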
Naereen/notebooks
simus/Naive_simulations_of_the_Monty-Hall_paradox.ipynb
mit
import random

M = 3
allocation = [False] * (M - 1) + [True]  # Only 1 treasure!
assert set(allocation) == {True, False}  # Check: only True and False
assert sum(allocation) == 1  # Check: only 1 treasure!
"""
Explanation: Numerical simulations of the Monty-Hall "paradox"
This short notebook aims at simulating trials of the so-called Monty-Hall problem, and thus helping to convince about the result thanks to numerical evidence rather than a possibly-unclear proof.
Definition of the problem
There are $M \geq 3$ doors, and behind only one of them there is a treasure.
The goal of the player is to find the treasure, following this game:
The player first chooses a door, but does not open it yet,
All remaining doors but one are opened, and none of them contain the treasure. The player sees $M - 2$ bad doors,
So there are just 2 doors left: the one first chosen, and the last one.
She knows the treasure is behind one of them,
And she has to decide if she wants to stay on her initial choice, or switch to the last door.
Finally, the chosen door is opened, and the player wins this round if she found the treasure.
The goal of this notebook is to numerically prove that the choice of always switching to the last door is the best one.
Starting our numerical simulations
We start by importing some modules, then define a function, randomAllocation(), to generate a random allocation of the treasure behind the M doors.
Note: all the code below is generic for any $M \geq 3$, but M = 3 is used to keep the visualizations small and clear.
End of explanation """ allocation """ Explanation: Just to check: End of explanation """ def randomAllocation(): r = allocation[:] random.shuffle(r) return r """ Explanation: We can generate a random spot for the treasure by simply shuffling (with random.shuffle()): End of explanation """ for _ in range(10): print(randomAllocation()) """ Explanation: Let us quickly check this function randomAllocation(): End of explanation """ def last(r, i): # Select a random index corresponding of the door we keep if r[i]: # She found the treasure, returning a random last door return random.choice([j for (j, v) in enumerate(r) if j != i]) else: # She didn't find the treasure, returning the treasure door # Indeed, the game only removes door that don't contain the treasure return random.choice([j for (j, v) in enumerate(r) if j != i and v]) for _ in range(10): r = randomAllocation() i = random.randint(0, M - 1) j = last(r, i) print("- r =", r, "i =", i, "and last(r, i) =", j) print(" Stay on", r[i], "or go to", r[j], "?") """ Explanation: We need to write a small function to simulate the choice of the door to show to the player, show(): End of explanation """ def firstChoice(): global M # Uniform first choice return random.randint(0, M - 1) """ Explanation: We need a function to simulate the first choice of the player, and a simple choice is to select a uniform choice: End of explanation """ def simulate(stayOrNot): # Random spot for the treasure r = randomAllocation() # Initial choice i = firstChoice() # Which door are remove, or equivalently which is the last one to be there? j = last(r, i) assert {r[i], r[j]} == {False, True} # There is still the treasure and only one stay = stayOrNot() if stay: return r[i] else: return r[j] """ Explanation: Now we can simulate a game, for a certain left-to-be-written function strategy() that decides to keep or to change the initial choice. 
End of explanation """ N = 10000 def simulateManyGames(stayOrNot): global N results = [simulate(stayOrNot) for _ in range(N)] return sum(results) """ Explanation: We can simulate many outcome of the game for one strategy, and return the number of time it won (i.e. average number of time it found the good door, by finding r[i] = True or r[j] = True): End of explanation """ def keep(): return True # True == also stay on our first choice rate = simulateManyGames(keep) print("- For", N, "simulations, the strategy 'keep' has won", rate, "of the trials...") print(" ==> proportion = {:.2%}.".format(rate / float(N))) """ Explanation: Comparing two strategies, on many randomized trials We will simulate the two strategies, keep() vs change(), on $N = 10000$ randomized games. Keeping our first choice, keep() End of explanation """ def change(): return False # False == never stay, ie always chose the last door rate = simulateManyGames(change) print("- For", N, "simulations, the strategy 'change' has won", rate, " of the trials...") print(" ==> proportion = {:.2%}.".format(rate / float(N))) """ Explanation: $\implies$ We find a chance of winning of about $\frac{1}{M} = \frac{1}{3}$ for this strategy, which is very logical as only the initial choice matters, and due to the uniform location of the treasure behind the $M = 3$ doors, and the uniform first choice with firstChoice(). Changing our first choice, change() End of explanation """ def bernoulli(p=0.5): return random.random() < p rate = simulateManyGames(bernoulli) print("- For", N, "simulations, the strategy 'bernoulli' has won", rate, " of the trials...") print(" ==> proportion = {:.2%}.".format(rate / float(N))) """ Explanation: $\implies$ We find a chance of winning of about $\frac{M - 1}{M} = \frac{2}{3}$ for this strategy, which is less logical. 
Due to the uniform location of the treasure behind the $M = 3$ doors, and the uniform first choice with firstChoice(), we have a $\frac{1}{M}$ chance of finding the treasure on the first try.
If we found it, the last door does not contain the treasure, hence we lose when we switch to it.
However, if we did not find it, the last door has to contain the treasure, hence we win when we switch to it deterministically (i.e. always).
The first case has probability $\frac{1}{M}$, the probability of losing, and the second case has probability $\frac{M - 1}{M}$, the probability of winning.
$\implies$ Conclusion: this strategy change() has a chance of winning of $\frac{2}{3}$, far better than the chance of $\frac{1}{3}$ for keep().
We proved numerically the results given and explained here on the Wikipedia page.
Great!
Bernoulli choice
We can try a randomized strategy; a simple one is to follow the decision of a (biased) coin:
Toss a coin (with fixed head probability equal to $p \in [0, 1]$),
If head, stay on the first choice; otherwise, switch to the last door (this matches the code below, where bernoulli(p) returns True, i.e. "stay", with probability $p$).
End of explanation
"""

def bernoulli(p=0.5):
    return random.random() < p

rate = simulateManyGames(bernoulli)
print("- For", N, "simulations, the strategy 'bernoulli' has won", rate, "of the trials...")
print("    ==> proportion = {:.2%}.".format(rate / float(N)))
"""
Explanation: Now we can try different values for $p$, and plot the resulting chance of winning the game as a function of $p$.
Hopefully, it should be monotonic, confirming the result explained above.
End of explanation
"""

import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: We generate lots of values for $p$, then a function stratBernoulli() to create the strategy described above, for some $p \in [0, 1]$.
End of explanation """ chance_of_winning = [simulateManyGames(stratBernoulli(p)) / float(N) for p in values_p] plt.figure() plt.plot(values_p, chance_of_winning, 'r') plt.title("Monty-Hall paradox with {} doors ({} random simulation)".format(M, N)) plt.xlabel("Probability $p$ of staying on our first choice (Bernoulli strategy)") plt.ylabel("Probability of winning") plt.ylim(0, 1) plt.yticks(np.linspace(0, 1, 11)) plt.show() """ Explanation: Let finally do all the simulations, and store the empirical probability of winning the game when following a Bernoulli strategy. This line takes about $4$ minutes on my laptop, it's not that quick. End of explanation """ def completeSimu(): global M global N allocation = [False] * (M - 1) + [True] # Only 1 treasure! def randomAllocation(): r = allocation[:] random.shuffle(r) return r def last(r, i): # Select a random index corresponding of the door we keep if r[i]: # She found the treasure, returning a random last door return random.choice([j for (j, v) in enumerate(r) if j != i]) else: # She didn't find the treasure, returning the treasure door # Indeed, the game only removes door that don't contain the treasure return random.choice([j for (j, v) in enumerate(r) if j != i and v]) def simulate(stayOrNot): # Random spot for the treasure r = randomAllocation() # Initial choice i = firstChoice() # Which door are remove, or equivalently which is the last one to be there? 
j = last(r, i) stay = stayOrNot() if stay: return r[i] else: return r[j] def simulateManyGames(stayOrNot): global N results = [simulate(stayOrNot) for _ in range(N)] return sum(results) values_p = np.linspace(0, 1, 300) chance_of_winning = [simulateManyGames(stratBernoulli(p)) / float(N) for p in values_p] plt.figure() plt.plot(values_p, chance_of_winning, 'r') plt.title("Monty-Hall paradox with {} doors ({} random simulation)".format(M, N)) plt.xlabel("Probability $p$ of staying on our first choice (Bernoulli strategy)") plt.ylabel("Probability of winning") plt.ylim(0, 1) plt.yticks(np.linspace(0, 1, 11)) plt.show() M = 4 completeSimu() """ Explanation: Examples with $M = 100$ doors End of explanation """ M = 100 completeSimu() """ Explanation: Last plot, for $M = 100$ It clearly shows the linear behavior we expected. End of explanation """
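The linear shape of the curves above has a simple closed form: a strategy that stays with probability $p$ wins with probability $p \cdot \frac{1}{M} + (1 - p) \cdot \frac{M - 1}{M}$. A small sketch to check the endpoints (the function name is ours, not from the notebook):

```python
def theoretical_win_probability(p, M=3):
    # Staying wins iff the first pick was right (probability 1/M);
    # switching wins iff it was wrong (probability (M - 1)/M).
    return p * (1.0 / M) + (1 - p) * ((M - 1.0) / M)

for M in (3, 100):
    # At p = 0 (always switch) the win probability is (M - 1)/M,
    # at p = 1 (always keep) it drops to 1/M.
    print(M, theoretical_win_probability(0.0, M), theoretical_win_probability(1.0, M))
```

For $M = 100$ the gap between the two endpoints is $0.99$ versus $0.01$, which matches the steep line in the last plot.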
sbu-python-summer/python-tutorial
day-1/python-advanced-datatypes.ipynb
bsd-3-clause
from __future__ import print_function
"""
Explanation: These notes follow the official python tutorial pretty closely: http://docs.python.org/3/tutorial/
End of explanation
"""

a = [1, 2.0, "my list", 4]
print(a)
"""
Explanation: Lists
Lists group together data.  Many languages have arrays (we'll look at those in a bit in python).  But unlike arrays in most languages, lists can hold data of all different types -- they don't need to be homogeneous.  The data can be a mix of integers, floating point or complex #s, strings, or other objects (including other lists).
A list is defined using square brackets:
End of explanation
"""

print(a[2])
"""
Explanation: We can index a list to get a single element -- remember that python starts counting at 0:
End of explanation
"""

print(a*2)
"""
Explanation: Like with strings, mathematical operators are defined on lists:
End of explanation
"""

print(len(a))
"""
Explanation: The len() function returns the length of a list
End of explanation
"""

a[1] = -2.0
a
a[0:1] = [-1, -2.1]  # this will put two items in the spot where 1 existed before
a
"""
Explanation: Unlike strings, lists are mutable -- you can change elements in a list easily
End of explanation
"""

a[1] = ["other list", 3]
a
"""
Explanation: Note that lists can even contain other lists:
End of explanation
"""

a.append(6)
a
a.pop()
a
"""
Explanation: Just like everything else in python, a list is an object that is an instance of a class.  Classes have methods (functions) that know how to operate on an object of that class.  There are lots of methods that work on lists.
Two of the most useful are append, to add to the end of a list, and pop, to remove the last element: End of explanation """ a = [] for i in range(1, 11): a.append(i) a while a: a.pop() a """ Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div> An operation we'll see a lot is to begin with an empty list and add elements to it. An empty list is created as: a = [] Create an empty list Append the integers 1 through 10 to it. Now pop them out of the list one by one. <hr> End of explanation """ a = [1, 2, 3, 4] b = a # both a and b refer to the same list object in memory print(a) a[0] = "changed" print(b) """ Explanation: copying may seem a little counterintuitive at first. The best way to think about this is that your list lives in memory somewhere and when you do a = [1, 2, 3, 4] then the variable a is set to point to that location in memory, so it refers to the list. If we then do b = a then b will also point to that same location in memory -- the exact same list object. Since these are both pointing to the same location in memory, if we change the list through a, the change is reflected in b as well: End of explanation """ c = list(a) # you can also do c = a[:], which basically slices the entire list a[1] = "two" print(a) print(c) """ Explanation: if you want to create a new object in memory that is a copy of another, then you can either index the list, using : to get all the elements, or use the list() function: End of explanation """ f = [1, [2, 3], 4] print(f) g = list(f) print(g) """ Explanation: Things get a little complicated when a list contains another mutable object, like another list. Then the copy we looked at above is only a shallow copy. 
Look at this example&mdash;the list within the list here is still the same object in memory for our two copies: End of explanation """ f[1][0] = "a" print(f) print(g) """ Explanation: Now we are going to change an element of that list [2, 3] inside of our main list. We need to index f once to get that list, and then a second time to index that list: End of explanation """ f[0] = -1 print(g) print(f) """ Explanation: Note that the change occured in both&mdash;since that inner list is shared in memory between the two. Note that we can still change one of the other values without it being reflected in the other list&mdash;this was made distinct by our shallow copy: End of explanation """ print(id(a), id(b), id(c)) """ Explanation: Note: this is what is referred to as a shallow copy. If the original list had any special objects in it (like another list), then the new copy and the old copy will still point to that same object. There is a deep copy method when you really want everything to be unique in memory. When in doubt, use the id() function to figure out where in memory an object lies (you shouldn't worry about the what value of the numbers you get from id mean, but just whether they are the same as those for another object) End of explanation """ my_list = [10, -1, 5, 24, 2, 9] my_list.sort() print(my_list) print(my_list.count(-1)) my_list help(a.insert) a.insert(3, "my inserted element") a """ Explanation: There are lots of other methods that work on lists (remember, ask for help) End of explanation """ b = [1, 2, 3] c = [4, 5, 6] d = b + c print(d) """ Explanation: joining two lists is simple. Like with strings, the + operator concatenates: End of explanation """ my_dict = {"key1":1, "key2":2, "key3":3} print(my_dict["key1"]) """ Explanation: Dictionaries A dictionary stores data as a key:value pair. 
Unlike a list where you have a particular order, the keys in a dictionary allow you to access information anywhere easily: End of explanation """ my_dict["newkey"] = "new" print(my_dict) """ Explanation: you can add a new key:pair easily, and it can be of any type End of explanation """ keys = list(my_dict.keys()) print(keys) """ Explanation: Note that a dictionary is unordered. You can also easily get the list of keys that are defined in a dictionary End of explanation """ print("key1" in keys) print("invalidKey" in keys) """ Explanation: and check easily whether a key exists in the dictionary using the in operator End of explanation """ squares = [x**2 for x in range(10)] squares """ Explanation: List Comprehensions list comprehensions provide a compact way to initialize lists. Some examples from the tutorial End of explanation """ [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y] """ Explanation: here we use another python type, the tuple, to combine numbers from two lists into a pair End of explanation """ a = (1, 2, 3, 4) print(a) """ Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div> Use a list comprehension to create a new list from squares containing only the even numbers. It might be helpful to use the modulus operator, % <hr> Tuples tuples are immutable -- they cannot be changed, but they are useful for organizing data in some situations. 
We use () to indicate a tuple: End of explanation """ w, x, y, z = a print(w) print(w, x, y, z) """ Explanation: We can unpack a tuple: End of explanation """ a[0] = 2 """ Explanation: Since a tuple is immutable, we cannot change an element: End of explanation """ z = list(a) z[0] = "new" print(z) """ Explanation: But we can turn it into a list, and then we can change it End of explanation """ n = 0 while n < 10: print(n) n += 1 """ Explanation: Control Flow To write a program, we need the ability to iterate and take action based on the values of a variable. This includes if-tests and loops. Python uses whitespace to denote a block of code. While loop A simple while loop&mdash;notice the indentation to denote the block that is part of the loop. Here we also use the compact += operator: n += 1 is the same as n = n + 1 End of explanation """ for n in range(2, 10, 2): print(n) print(list(range(10))) """ Explanation: This was a very simple example. But often we'll use the range() function in this situation. Note that range() can take a stride. End of explanation """ x = 0 if x < 0: print("negative") elif x == 0: print("zero") else: print("positive") """ Explanation: if statements if allows for branching. python does not have a select/case statement like some other languages, but if, elif, and else can reproduce any branching functionality you might need. End of explanation """ alist = [1, 2.0, "three", 4] for a in alist: print(a) for c in "this is a string": print(c) """ Explanation: Iterating over elements it's easy to loop over items in a list or any iterable object. The in operator is the key here. 
End of explanation """ n = 0 for a in alist: if a == "three": break else: n += 1 print(n) """ Explanation: We can combine loops and if-tests to do more complex logic, like break out of the loop when you find what you're looking for End of explanation """ print(alist.index("three")) """ Explanation: (for that example, however, there is a simpler way) End of explanation """ my_dict = {"key1":1, "key2":2, "key3":3} for k, v in my_dict.items(): print("key = {}, value = {}".format(k, v)) # notice how we do the formatting here for k in sorted(my_dict): print(k, my_dict[k]) """ Explanation: for dictionaries, you can also loop over the elements End of explanation """ for n, a in enumerate(alist): print(n, a) """ Explanation: sometimes we want to loop over a list element and know its index -- enumerate() helps here: End of explanation """
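One detail worth adding about enumerate(): it accepts an optional start argument, which is handy when you want the counter to begin at 1 (or any other value) instead of 0. A quick sketch:

```python
alist = [1, 2.0, "three", 4]

# enumerate(iterable, start) lets the counter begin anywhere
for n, a in enumerate(alist, start=1):
    print(n, a)

# the (index, element) pairs can also be captured as a list of tuples
print(list(enumerate(alist, start=1)))  # [(1, 1), (2, 2.0), (3, 'three'), (4, 4)]
```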
tensorflow/docs
site/en/tutorials/estimator/linear.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ !pip install sklearn import os import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt from IPython.display import clear_output from six.moves import urllib """ Explanation: Build a linear model with Estimators <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/linear"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. 
Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details. Overview This end-to-end walkthrough trains a logistic regression model using the tf.estimator API. The model is often used as a baseline for other, more complex, algorithms. Note: A Keras logistic regression example is available and is recommended over this tutorial. Setup End of explanation """ import tensorflow.compat.v2.feature_column as fc import tensorflow as tf # Load dataset. dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv') dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv') y_train = dftrain.pop('survived') y_eval = dfeval.pop('survived') """ Explanation: Load the titanic dataset You will use the Titanic dataset with the (rather morbid) goal of predicting passenger survival, given characteristics such as gender, age, class, etc. End of explanation """ dftrain.head() dftrain.describe() """ Explanation: Explore the data The dataset contains the following features End of explanation """ dftrain.shape[0], dfeval.shape[0] """ Explanation: There are 627 and 264 examples in the training and evaluation sets, respectively. End of explanation """ dftrain.age.hist(bins=20) """ Explanation: The majority of passengers are in their 20's and 30's. End of explanation """ dftrain.sex.value_counts().plot(kind='barh') """ Explanation: There are approximately twice as many male passengers as female passengers aboard. End of explanation """ dftrain['class'].value_counts().plot(kind='barh') """ Explanation: The majority of passengers were in the "third" class. End of explanation """ pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive') """ Explanation: Females have a much higher chance of surviving versus males. This is clearly a predictive feature for the model. 
End of explanation
"""
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
                       'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']

feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
  vocabulary = dftrain[feature_name].unique()
  feature_columns.append(tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))

for feature_name in NUMERIC_COLUMNS:
  feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32))
"""
Explanation: Feature Engineering for the Model
Estimators use a system called feature columns to describe how the model should interpret each of the raw input features. An Estimator expects a vector of numeric inputs, and feature columns describe how the model should convert each feature.
Selecting and crafting the right set of feature columns is key to learning an effective model. A feature column can be either one of the raw inputs in the original features dict (a base feature column), or any new column created using transformations defined over one or multiple base columns (a derived feature column).
The linear estimator uses both numeric and categorical features. Feature columns work with all TensorFlow estimators, and their purpose is to define the features used for modeling. Additionally, they provide some feature engineering capabilities like one-hot-encoding, normalization, and bucketization. 
Base Feature Columns End of explanation """ def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32): def input_function(): ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df)) if shuffle: ds = ds.shuffle(1000) ds = ds.batch(batch_size).repeat(num_epochs) return ds return input_function train_input_fn = make_input_fn(dftrain, y_train) eval_input_fn = make_input_fn(dfeval, y_eval, num_epochs=1, shuffle=False) """ Explanation: The input_function specifies how data is converted to a tf.data.Dataset that feeds the input pipeline in a streaming fashion. tf.data.Dataset can take in multiple sources such as a dataframe, a csv-formatted file, and more. End of explanation """ ds = make_input_fn(dftrain, y_train, batch_size=10)() for feature_batch, label_batch in ds.take(1): print('Some feature keys:', list(feature_batch.keys())) print() print('A batch of class:', feature_batch['class'].numpy()) print() print('A batch of Labels:', label_batch.numpy()) """ Explanation: You can inspect the dataset: End of explanation """ age_column = feature_columns[7] tf.keras.layers.DenseFeatures([age_column])(feature_batch).numpy() """ Explanation: You can also inspect the result of a specific feature column using the tf.keras.layers.DenseFeatures layer: End of explanation """ gender_column = feature_columns[0] tf.keras.layers.DenseFeatures([tf.feature_column.indicator_column(gender_column)])(feature_batch).numpy() """ Explanation: DenseFeatures only accepts dense tensors, to inspect a categorical column you need to transform that to a indicator column first: End of explanation """ linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns) linear_est.train(train_input_fn) result = linear_est.evaluate(eval_input_fn) clear_output() print(result) """ Explanation: After adding all the base features to the model, let's train the model. 
Training a model is just a single command using the tf.estimator API: End of explanation """ age_x_gender = tf.feature_column.crossed_column(['age', 'sex'], hash_bucket_size=100) """ Explanation: Derived Feature Columns Now you reached an accuracy of 75%. Using each base feature column separately may not be enough to explain the data. For example, the correlation between age and the label may be different for different gender. Therefore, if you only learn a single model weight for gender="Male" and gender="Female", you won't capture every age-gender combination (e.g. distinguishing between gender="Male" AND age="30" AND gender="Male" AND age="40"). To learn the differences between different feature combinations, you can add crossed feature columns to the model (you can also bucketize age column before the cross column): End of explanation """ derived_feature_columns = [age_x_gender] linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns+derived_feature_columns) linear_est.train(train_input_fn) result = linear_est.evaluate(eval_input_fn) clear_output() print(result) """ Explanation: After adding the combination feature to the model, let's train the model again: End of explanation """ pred_dicts = list(linear_est.predict(eval_input_fn)) probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts]) probs.plot(kind='hist', bins=20, title='predicted probabilities') """ Explanation: It now achieves an accuracy of 77.6%, which is slightly better than only trained in base features. You can try using more features and transformations to see if you can do better! Now you can use the train model to make predictions on a passenger from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, the eval_input_fn was defined using the entire evaluation set. 
End of explanation """ from sklearn.metrics import roc_curve from matplotlib import pyplot as plt fpr, tpr, _ = roc_curve(y_eval, probs) plt.plot(fpr, tpr) plt.title('ROC curve') plt.xlabel('false positive rate') plt.ylabel('true positive rate') plt.xlim(0,) plt.ylim(0,) """ Explanation: Finally, look at the receiver operating characteristic (ROC) of the results, which will give us a better idea of the tradeoff between the true positive rate and false positive rate. End of explanation """
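The ROC curve above is often summarized by a single number, the area under the curve (AUC), which the estimator's evaluation results also report. AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, so it can be computed without any plotting. A dependency-free sketch (the toy labels and scores below are made up for illustration):

```python
def auc_score(y_true, y_score):
    """Area under the ROC curve via pairwise comparisons.
    A tie between a positive and a negative score counts as half a win."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy ground-truth labels and predicted survival probabilities
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

For real use, `sklearn.metrics.roc_auc_score` (from the scikit-learn library already imported in this notebook) computes the same quantity more efficiently.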
apark263/tensorflow
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apache-2.0
# Import TensorFlow and enable eager execution # This code requires TensorFlow version >=1.9 import tensorflow as tf tf.enable_eager_execution() # We'll generate plots of attention in order to see which parts of an image # our model focuses on during captioning import matplotlib.pyplot as plt # Scikit-learn includes many helpful utilities from sklearn.model_selection import train_test_split from sklearn.utils import shuffle import re import numpy as np import os import time import json from glob import glob from PIL import Image import pickle """ Explanation: Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). Image Captioning with Attention <table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table> Image captioning is the task of generating a caption for an image. Given an image like this: Image Source, License: Public Domain Our goal is to generate a caption, such as "a surfer riding on a wave". Here, we'll use an attention-based model. This enables us to see which parts of the image the model focuses on as it generates a caption. This model architecture below is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. The code uses tf.keras and eager execution, which you can learn more about in the linked guides. This notebook is an end-to-end example. 
If you run it, it will download the MS-COCO dataset, preprocess and cache a subset of the images using Inception V3, train an encoder-decoder model, and use it to generate captions on new images.
The code requires TensorFlow version >=1.9. If you're running this in Colab
In this example, we're training on a relatively small amount of data as an example. On a single P100 GPU, this example will take about 2 hours to train. We train on the first 30,000 captions (corresponding to about 20,000 images depending on shuffling, as there are multiple captions per image in the dataset).
End of explanation
"""
annotation_zip = tf.keras.utils.get_file('captions.zip',
                                          cache_subdir=os.path.abspath('.'),
                                          origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
                                          extract = True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'

name_of_zip = 'train2014.zip'
if not os.path.exists(os.path.abspath('.') + '/' + name_of_zip):
  image_zip = tf.keras.utils.get_file(name_of_zip,
                                      cache_subdir=os.path.abspath('.'),
                                      origin = 'http://images.cocodataset.org/zips/train2014.zip',
                                      extract = True)
  PATH = os.path.dirname(image_zip)+'/train2014/'
else:
  PATH = os.path.abspath('.')+'/train2014/'
"""
Explanation: Download and prepare the MS-COCO dataset
We will use the MS-COCO dataset to train our model. This dataset contains >82,000 images, each of which has been annotated with at least 5 different captions. The code below will download and extract the dataset automatically.
Caution: large download ahead. We'll use the training set, which is a 13GB file. 
End of explanation """ # read the json file with open(annotation_file, 'r') as f: annotations = json.load(f) # storing the captions and the image name in vectors all_captions = [] all_img_name_vector = [] for annot in annotations['annotations']: caption = '<start> ' + annot['caption'] + ' <end>' image_id = annot['image_id'] full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id) all_img_name_vector.append(full_coco_image_path) all_captions.append(caption) # shuffling the captions and image_names together # setting a random state train_captions, img_name_vector = shuffle(all_captions, all_img_name_vector, random_state=1) # selecting the first 30000 captions from the shuffled set num_examples = 30000 train_captions = train_captions[:num_examples] img_name_vector = img_name_vector[:num_examples] len(train_captions), len(all_captions) """ Explanation: Optionally, limit the size of the training set for faster training For this example, we'll select a subset of 30,000 captions and use these and the corresponding images to train our model. As always, captioning quality will improve if you choose to use more data. End of explanation """ def load_image(image_path): img = tf.read_file(image_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize_images(img, (299, 299)) img = tf.keras.applications.inception_v3.preprocess_input(img) return img, image_path """ Explanation: Preprocess the images using InceptionV3 Next, we will use InceptionV3 (pretrained on Imagenet) to classify each image. We will extract features from the last convolutional layer. First, we will need to convert the images into the format inceptionV3 expects by: * Resizing the image to (299, 299) * Using the preprocess_input method to place the pixels in the range of -1 to 1 (to match the format of the images used to train InceptionV3). 
End of explanation """ image_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet') new_input = image_model.input hidden_layer = image_model.layers[-1].output image_features_extract_model = tf.keras.Model(new_input, hidden_layer) """ Explanation: Initialize InceptionV3 and load the pretrained Imagenet weights To do so, we'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. * Each image is forwarded through the network and the vector that we get at the end is stored in a dictionary (image_name --> feature_vector). * We use the last convolutional layer because we are using attention in this example. The shape of the output of this layer is 8x8x2048. * We avoid doing this during training so it does not become a bottleneck. * After all the images are passed through the network, we pickle the dictionary and save it to disk. End of explanation """ # getting the unique images encode_train = sorted(set(img_name_vector)) # feel free to change the batch_size according to your system configuration image_dataset = tf.data.Dataset.from_tensor_slices( encode_train).map(load_image).batch(16) for img, path in image_dataset: batch_features = image_features_extract_model(img) batch_features = tf.reshape(batch_features, (batch_features.shape[0], -1, batch_features.shape[3])) for bf, p in zip(batch_features, path): path_of_feature = p.numpy().decode("utf-8") np.save(path_of_feature, bf.numpy()) """ Explanation: Caching the features extracted from InceptionV3 We will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this would exceed the memory limitations of Colab (although these may change, an instance appears to have about 12GB of memory currently). 
Performance could be improved with a more sophisticated caching strategy (e.g., by sharding the images to reduce random access disk I/O) at the cost of more code.
This will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can install tqdm (!pip install tqdm), then change this line:
for img, path in image_dataset:
to:
for img, path in tqdm(image_dataset):.
End of explanation
"""
# This will find the maximum length of any caption in our dataset
def calc_max_length(tensor):
    return max(len(t) for t in tensor)

# The steps above are a general process of dealing with text processing

# choosing the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
                                                  oov_token="<unk>",
                                                  filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
train_seqs = tokenizer.texts_to_sequences(train_captions)
tokenizer.word_index['<pad>'] = 0

# creating the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)

# padding each vector to the max_length of the captions
# if the max_length parameter is not provided, pad_sequences calculates that automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')

# calculating the max_length
# used to store the attention weights
max_length = calc_max_length(train_seqs)
"""
Explanation: Preprocess and tokenize the captions
First, we'll tokenize the captions (e.g., by splitting on spaces). This will give us a vocabulary of all the unique words in the data (e.g., "surfing", "football", etc).
Next, we'll limit the vocabulary size to the top 5,000 words to save memory. We'll replace all other words with the token "UNK" (for unknown).
Finally, we create a word --> index mapping and vice-versa.
We will then pad all sequences to be the same length as the longest one. 
End of explanation """ # Create training and validation sets using 80-20 split img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector, cap_vector, test_size=0.2, random_state=0) len(img_name_train), len(cap_train), len(img_name_val), len(cap_val) """ Explanation: Split the data into training and testing End of explanation """ # feel free to change these parameters according to your system's configuration BATCH_SIZE = 64 BUFFER_SIZE = 1000 embedding_dim = 256 units = 512 vocab_size = len(tokenizer.word_index) # shape of the vector extracted from InceptionV3 is (64, 2048) # these two variables represent that features_shape = 2048 attention_features_shape = 64 # loading the numpy files def map_func(img_name, cap): img_tensor = np.load(img_name.decode('utf-8')+'.npy') return img_tensor, cap dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train)) # using map to load the numpy files in parallel # NOTE: Be sure to set num_parallel_calls to the number of CPU cores you have # https://www.tensorflow.org/api_docs/python/tf/py_func dataset = dataset.map(lambda item1, item2: tf.py_func( map_func, [item1, item2], [tf.float32, tf.int32]), num_parallel_calls=8) # shuffling and batching dataset = dataset.shuffle(BUFFER_SIZE) # https://www.tensorflow.org/api_docs/python/tf/contrib/data/batch_and_drop_remainder dataset = dataset.batch(BATCH_SIZE) dataset = dataset.prefetch(1) """ Explanation: Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model. End of explanation """ def gru(units): # If you have a GPU, we recommend using the CuDNNGRU layer (it provides a # significant speedup). 
if tf.test.is_gpu_available(): return tf.keras.layers.CuDNNGRU(units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') else: return tf.keras.layers.GRU(units, return_sequences=True, return_state=True, recurrent_activation='sigmoid', recurrent_initializer='glorot_uniform') class BahdanauAttention(tf.keras.Model): def __init__(self, units): super(BahdanauAttention, self).__init__() self.W1 = tf.keras.layers.Dense(units) self.W2 = tf.keras.layers.Dense(units) self.V = tf.keras.layers.Dense(1) def call(self, features, hidden): # features(CNN_encoder output) shape == (batch_size, 64, embedding_dim) # hidden shape == (batch_size, hidden_size) # hidden_with_time_axis shape == (batch_size, 1, hidden_size) hidden_with_time_axis = tf.expand_dims(hidden, 1) # score shape == (batch_size, 64, hidden_size) score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis)) # attention_weights shape == (batch_size, 64, 1) # we get 1 at the last axis because we are applying score to self.V attention_weights = tf.nn.softmax(self.V(score), axis=1) # context_vector shape after sum == (batch_size, hidden_size) context_vector = attention_weights * features context_vector = tf.reduce_sum(context_vector, axis=1) return context_vector, attention_weights class CNN_Encoder(tf.keras.Model): # Since we have already extracted the features and dumped it using pickle # This encoder passes those features through a Fully connected layer def __init__(self, embedding_dim): super(CNN_Encoder, self).__init__() # shape after fc == (batch_size, 64, embedding_dim) self.fc = tf.keras.layers.Dense(embedding_dim) def call(self, x): x = self.fc(x) x = tf.nn.relu(x) return x class RNN_Decoder(tf.keras.Model): def __init__(self, embedding_dim, units, vocab_size): super(RNN_Decoder, self).__init__() self.units = units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = gru(self.units) self.fc1 = tf.keras.layers.Dense(self.units) self.fc2 = 
tf.keras.layers.Dense(vocab_size) self.attention = BahdanauAttention(self.units) def call(self, x, features, hidden): # defining attention as a separate model context_vector, attention_weights = self.attention(features, hidden) # x shape after passing through embedding == (batch_size, 1, embedding_dim) x = self.embedding(x) # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size) x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1) # passing the concatenated vector to the GRU output, state = self.gru(x) # shape == (batch_size, max_length, hidden_size) x = self.fc1(output) # x shape == (batch_size * max_length, hidden_size) x = tf.reshape(x, (-1, x.shape[2])) # output shape == (batch_size * max_length, vocab) x = self.fc2(x) return x, state, attention_weights def reset_state(self, batch_size): return tf.zeros((batch_size, self.units)) encoder = CNN_Encoder(embedding_dim) decoder = RNN_Decoder(embedding_dim, units, vocab_size) optimizer = tf.train.AdamOptimizer() # We are masking the loss calculated for padding def loss_function(real, pred): mask = 1 - np.equal(real, 0) loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask return tf.reduce_mean(loss_) """ Explanation: Model Fun fact, the decoder below is identical to the one in the example for Neural Machine Translation with Attention. The model architecture is inspired by the Show, Attend and Tell paper. In this example, we extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048). We squash that to a shape of (64, 2048). This vector is then passed through the CNN Encoder(which consists of a single Fully connected layer). The RNN(here GRU) attends over the image to predict the next word. 
End of explanation """ # adding this in a separate cell because if you run the training cell # many times, the loss_plot array will be reset loss_plot = [] EPOCHS = 20 for epoch in range(EPOCHS): start = time.time() total_loss = 0 for (batch, (img_tensor, target)) in enumerate(dataset): loss = 0 # initializing the hidden state for each batch # because the captions are not related from image to image hidden = decoder.reset_state(batch_size=target.shape[0]) dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * BATCH_SIZE, 1) with tf.GradientTape() as tape: features = encoder(img_tensor) for i in range(1, target.shape[1]): # passing the features through the decoder predictions, hidden, _ = decoder(dec_input, features, hidden) loss += loss_function(target[:, i], predictions) # using teacher forcing dec_input = tf.expand_dims(target[:, i], 1) total_loss += (loss / int(target.shape[1])) variables = encoder.variables + decoder.variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step()) if batch % 100 == 0: print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, loss.numpy() / int(target.shape[1]))) # storing the epoch end loss value to plot later loss_plot.append(total_loss / len(cap_vector)) print ('Epoch {} Loss {:.6f}'.format(epoch + 1, total_loss/len(cap_vector))) print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) plt.plot(loss_plot) plt.xlabel('Epochs') plt.ylabel('Loss') plt.title('Loss Plot') plt.show() """ Explanation: Training We extract the features stored in the respective .npy files and then pass those features through the encoder. The encoder output, hidden state(initialized to 0) and the decoder input (which is the start token) is passed to the decoder. The decoder returns the predictions and the decoder hidden state. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss. 
Use teacher forcing to decide the next input to the decoder. Teacher forcing is the technique where the target word is passed as the next input to the decoder. The final step is to calculate the gradients, apply them to the optimizer, and backpropagate. End of explanation """ def evaluate(image): attention_plot = np.zeros((max_length, attention_features_shape)) hidden = decoder.reset_state(batch_size=1) temp_input = tf.expand_dims(load_image(image)[0], 0) img_tensor_val = image_features_extract_model(temp_input) img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3])) features = encoder(img_tensor_val) dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0) result = [] for i in range(max_length): predictions, hidden, attention_weights = decoder(dec_input, features, hidden) attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy() predicted_id = tf.argmax(predictions[0]).numpy() result.append(tokenizer.index_word[predicted_id]) if tokenizer.index_word[predicted_id] == '<end>': return result, attention_plot dec_input = tf.expand_dims([predicted_id], 0) attention_plot = attention_plot[:len(result), :] return result, attention_plot def plot_attention(image, result, attention_plot): temp_image = np.array(Image.open(image)) fig = plt.figure(figsize=(10, 10)) len_result = len(result) for l in range(len_result): temp_att = np.resize(attention_plot[l], (8, 8)) ax = fig.add_subplot(len_result//2, len_result//2, l+1) ax.set_title(result[l]) img = ax.imshow(temp_image) ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent()) plt.tight_layout() plt.show() # captions on the validation set rid = np.random.randint(0, len(img_name_val)) image = img_name_val[rid] real_caption = ' '.join([tokenizer.index_word[i] for i in cap_val[rid] if i not in [0]]) result, attention_plot = evaluate(image) print ('Real Caption:', real_caption) print ('Prediction Caption:', ' '.join(result)) plot_attention(image, result,
attention_plot) # opening the image Image.open(img_name_val[rid]) """ Explanation: Caption! The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output. Stop predicting when the model predicts the end token. And store the attention weights for every time step. End of explanation """ image_url = 'https://tensorflow.org/images/surf.jpg' image_extension = image_url[-4:] image_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url) result, attention_plot = evaluate(image_path) print ('Prediction Caption:', ' '.join(result)) plot_attention(image_path, result, attention_plot) # opening the image Image.open(image_path) """ Explanation: Try it on your own images For fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!) End of explanation """
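The padding mask applied in loss_function above is easy to sanity-check outside TensorFlow. A minimal NumPy-only sketch — the token ids and per-token loss values below are made-up illustrative numbers, not outputs of the model:

```python
import numpy as np

def masked_mean_loss(real, per_token_loss):
    # Zero out the loss at padding positions (token id 0), mirroring
    # loss_function above, then average over ALL positions.
    mask = 1 - np.equal(real, 0).astype(float)
    return float(np.mean(per_token_loss * mask))

real = np.array([5, 9, 2, 0, 0])                 # last two tokens are padding
per_token_loss = np.array([1.0, 2.0, 3.0, 4.0, 4.0])
print(masked_mean_loss(real, per_token_loss))    # 1.2 — padded losses contribute 0
```

Note that, as in the training cell above, the mean is still taken over all positions, so longer padding pulls the average down rather than being excluded from the denominator.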
wonkoderverstaendige/RattusExMachina
doc/Playtesting.ipynb
mit
result_path = '../src/USB_Virtual_Serial_Rcv_Speed_Test/usb_serial_receive/host_software/' print [f for f in os.listdir(result_path) if f.endswith('.txt')] def read_result(filename): results = {} current_blocksize = None with open(os.path.join(result_path, filename)) as f: for line in f.readlines(): if line.startswith('port'): current_blocksize = int(re.search('(?:size.)(\d*)', line).groups()[0]) results[current_blocksize] = [] else: results[current_blocksize].append(int(line[:-4].strip())/1000.) return results # Example: results = read_result('result_readbytes.txt') for bs in sorted(results.keys()): speeds = results[bs] print "{bs:4d}B blocks: {avg:4.0f}±{sem:.0f} KB/s".format(bs=bs, avg=mean(speeds), sem=stats.sem(speeds)) # Standard sizes, speeds_standard = zip(*[(k, mean(v)) for k, v in read_result('result_standard.txt').items()]) # ReadBytes sizes, speeds_readbytes = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes.txt').items()]) # Readbytes+8us overhead per transferred SPI packet (worst case scenario?) 
sizes, speeds_readbytes_oh = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes_overhead.txt').items()]) # ReadBytes+spi4teensy on 8 channels sizes, speeds_readbytes_spi = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes_spi4teensy.txt').items()]) fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10, 5)) axes.semilogx(sizes, speeds_standard, 'gx', basex=2, label='Standard') axes.semilogx(sizes, speeds_readbytes, 'rx', basex=2, label='ReadBytes') axes.semilogx(sizes, speeds_readbytes_oh, 'bx', basex=2, label='ReadBytes+OH') axes.semilogx(sizes, speeds_readbytes_spi, 'k+', basex=2, label='ReadBytes+spi4teensy@8channels') axes.set_xlabel('Block size [B]') axes.set_ylabel('Transfer speed [kB/s]') axes.legend(loc=2) axes.set_xlim((min(sizes)/2., max(sizes)*2)) fig.tight_layout() #TODO: use individual values, make stats + error bars n = int(1e6) data = ''.join([chr(i%256) for i in range(n)]) t = %timeit -o -q transfer_test(data) print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(len(data)/1000., mean(t.all_runs), len(data)/1000./mean(t.all_runs)) """ Explanation: PJRC's receive test (host in C, variable buffer size, receiving in 64-byte chunks) Anything below 64 bytes is not a full USB packet and waits for transmission. Above 64 bytes, full speed is achieved.
End of explanation """ n_val = 4096 max_val = 4096 # cosines cosines = ((np.cos(np.linspace(-np.pi, np.pi, num=n_val))+1)*(max_val/2)).astype('uint16') # noise noise = (np.random.rand(n_val)*max_val).astype('uint16') # ramps ramps = np.linspace(0, max_val, n_val).astype('uint16') # squares hi = np.ones(n_val/4, dtype='uint16')*max_val-1 lo = np.zeros_like(hi) squares = np.tile(np.hstack((hi, lo)), 2) # all together arr = np.dstack((cosines, noise, ramps, squares, \ cosines, noise, ramps, squares, \ cosines, noise, ramps, squares, \ cosines, noise, ramps, squares)).flatten() fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(13, 8)) axes[0].set_xlim((0, cosines.size)) axes[0].plot(cosines, label='cosine'); axes[0].plot(noise, label='random'); axes[0].plot(ramps, label='ramp'); axes[0].plot(squares, label='square'); axes[0].legend() axes[1].set_xlim((0, arr.size)) axes[1].plot(arr); fig.tight_layout() n = 500 data = np.tile(arr, n).view(np.uint8) t = %timeit -o -q -n 1 -r 1 tx = transfer_test(data) print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(arr.nbytes/1000.*n, mean(t.all_runs), arr.nbytes/1000.*n/mean(t.all_runs)) t = %timeit -o -q -n 1 -r 1 tx = transfer_test(data) print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(arr.nbytes/1000.*n, mean(t.all_runs), arr.nbytes/1000.*n/mean(t.all_runs)) """ Explanation: Send arbitrary signals End of explanation """ data_path = "../data/lynn/lynn.dat" data_float = np.fromfile(data_path, dtype='(64,)i2').astype(np.float) # normalize the array to 12bit data_float -= data_float.min() data_float /= data_float.max() data_float *= (2**12-1) data_scaled = data_float.astype(np.uint16) print data_scaled.min(), data_scaled.max() fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(13, 7)) for n in range(0, 64, 4): axes.plot(data_scaled[0:20000, n]+n*70, label="Channel %d"%n); plt.legend() fig.tight_layout() print "first channel :", data_scaled[0,0:3] print "second channel:", data_scaled[8,0:3] print "interleaved :", data_scaled[(0, 8), 
0:3].transpose().flatten() n = 5 data = np.tile(data_scaled[:, 0:64:4].transpose().flatten(), n).tobytes() len(data) transfer_test(data) t = %timeit -q -o -n 1 -r 1 transfer_test(data); print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(data_scaled[:, 0:64:4].nbytes/1000.*n, mean(t.all_runs), data_scaled[:, 0:64:4].nbytes/1000.*n/mean(t.all_runs)) type(data) data """ Explanation: Send "neural" data Using Lynn's data set from the Klusters2 example End of explanation """
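The channel-interleaving step checked above (stack channels as rows, transpose, flatten) can be sketched in isolation. A minimal example with two hypothetical channels of three samples each, mirroring the `data_scaled[(0, 8), 0:3].transpose().flatten()` printout:

```python
import numpy as np

# Two hypothetical channels, three samples each.
ch_a = np.array([10, 11, 12], dtype=np.uint16)
ch_b = np.array([20, 21, 22], dtype=np.uint16)

# Stack channels as rows, transpose so each row is one time step, then
# flatten: samples come out interleaved a0, b0, a1, b1, ...
interleaved = np.vstack((ch_a, ch_b)).transpose().flatten()
print(interleaved.tolist())   # [10, 20, 11, 21, 12, 22]

# Each uint16 sample occupies two bytes on the wire.
wire_bytes = interleaved.tobytes()
print(len(wire_bytes))        # 12
```

The transpose is what turns per-channel rows into per-time-step groups, so the byte stream is time-major with channels interleaved within each sample — the same ordering built before the transfer test above.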
Aniruddha-Tapas/Applied-Machine-Learning
Machine Learning using GraphLab/Predicting House Prices using GraphLab Create.ipynb
mit
import graphlab """ Explanation: Predicting House Prices using GraphLab Create Install GraphLab Create using the official guide Fire up graphlab create End of explanation """ sales = graphlab.SFrame('home_data.gl/') sales """ Explanation: Load some house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. End of explanation """ graphlab.canvas.set_target('ipynb') sales.show(view="Scatter Plot", x="sqft_living", y="price") """ Explanation: Exploring the data for housing sales The house price is correlated with the number of square feet of living space. End of explanation """ train_data,test_data = sales.random_split(.8,seed=0) """ Explanation: Create a simple regression model of sqft_living to price Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you). End of explanation """ sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'],validation_set=None) """ Explanation: Build the regression model using only sqft_living as a feature End of explanation """ print test_data['price'].mean() print sqft_model.evaluate(test_data) """ Explanation: Evaluate the simple model End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.plot(test_data['sqft_living'],test_data['price'],'.', test_data['sqft_living'],sqft_model.predict(test_data),'-') """ Explanation: RMSE of about \$255,170! Let's show what our predictions look like Matplotlib is a Python plotting library that is also useful for plotting. You can install it with: 'pip install matplotlib' End of explanation """ sqft_model.get('coefficients') """ Explanation: Above: blue dots are original data, green line is the prediction from the simple regression. Below: we can view the learned regression coefficients. 
End of explanation """ my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode'] sales[my_features].show() sales.show(view='BoxWhisker Plot', x='zipcode', y='price') """ Explanation: Explore other features in the data To build a more elaborate model, we will explore using more features. End of explanation """ my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None) print my_features """ Explanation: Pull the bar at the bottom to view more of the data. 98039 is the most expensive zip code. Build a regression model with more features End of explanation """ print sqft_model.evaluate(test_data) print my_features_model.evaluate(test_data) """ Explanation: Comparing the results of the simple model with adding more features End of explanation """ house1 = sales[sales['id']=='5309101200'] print house1['price'] print sqft_model.predict(house1) print my_features_model.predict(house1) """ Explanation: The RMSE goes down from \$255,170 to \$179,508 with more features. Apply learned models to predict prices of 3 houses The first house we will use is considered an "average" house in Seattle. End of explanation """ house2 = sales[sales['id']=='1925069082'] house2 """ Explanation: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better. Prediction for a second, fancier house We will now examine the predictions for a fancier house. 
End of explanation """ print sqft_model.predict(house2) print my_features_model.predict(house2) """ Explanation: <img src="https://ssl.cdn-redfin.com/photo/1/bigphoto/302/734302_0.jpg"> End of explanation """ bill_gates = {'bedrooms':[8], 'bathrooms':[25], 'sqft_living':[50000], 'sqft_lot':[225000], 'floors':[4], 'zipcode':['98039'], 'condition':[10], 'grade':[10], 'waterfront':[1], 'view':[4], 'sqft_above':[37500], 'sqft_basement':[12500], 'yr_built':[1994], 'yr_renovated':[2010], 'lat':[47.627606], 'long':[-122.242054], 'sqft_living15':[5000], 'sqft_lot15':[40000]} """ Explanation: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house. Last house, super fancy Our last house is a very large one owned by a famous Seattleite. End of explanation """ print my_features_model.predict(graphlab.SFrame(bill_gates)) """ Explanation: <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Bill_gates%27_house.jpg/2560px-Bill_gates%27_house.jpg"> End of explanation """
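The RMSE figures quoted above (about \$255,170 for the single-feature model and \$179,508 with more features) are root-mean-square errors as reported by evaluate. A plain-NumPy sketch of that metric — the prices below are made-up numbers, not rows from the dataset:

```python
import numpy as np

def rmse(actual, predicted):
    # Root-mean-square error: sqrt of the mean squared prediction error,
    # the 'rmse' field reported by model.evaluate above.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

print(rmse([300000, 450000], [320000, 440000]))  # ≈ 15811.39
```

Because the errors are squared before averaging, RMSE penalizes a few large misses (like the fancier houses above) much more heavily than many small ones.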
metpy/MetPy
v1.1/_downloads/cdca3e0cb8a2930cccab0e29b97ef52a/upperair_soundings.ipynb
bsd-3-clause
import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import inset_axes import pandas as pd import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import Hodograph, SkewT from metpy.units import units """ Explanation: Upper Air Sounding Tutorial Upper air analysis is a staple of many synoptic and mesoscale analysis problems. In this tutorial we will gather weather balloon data, plot it, perform a series of thermodynamic calculations, and summarize the results. To learn more about the Skew-T diagram and its use in weather analysis and forecasting, check out this Air Weather Service guide: http://www.pmarshwx.com/research/manuals/AF_skewt_manual.pdf End of explanation """ col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed'] df = pd.read_fwf(get_test_data('nov11_sounding.txt', as_file_obj=False), skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names) # Drop any rows with all NaN values for T, Td, winds df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'), how='all').reset_index(drop=True) # We will pull the data out of the example dataset into individual variables and # assign units. p = df['pressure'].values * units.hPa T = df['temperature'].values * units.degC Td = df['dewpoint'].values * units.degC wind_speed = df['speed'].values * units.knots wind_dir = df['direction'].values * units.degrees u, v = mpcalc.wind_components(wind_speed, wind_dir) """ Explanation: Getting Data Upper air data can be obtained using the siphon package, but for this tutorial we will use some of MetPy's sample data. This event is the Veterans Day tornado outbreak in 2002. End of explanation """ # Calculate the LCL lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0]) print(lcl_pressure, lcl_temperature) # Calculate the parcel profile.
parcel_prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC') """ Explanation: Thermodynamic Calculations Often we will want to calculate some thermodynamic parameters of a sounding. The MetPy calc module has many such calculations already implemented! Lifting Condensation Level (LCL) - The level at which an air parcel's relative humidity becomes 100% when lifted along a dry adiabatic path. Parcel Path - Path followed by a hypothetical parcel of air, beginning at the surface temperature/pressure and rising dry adiabatically until reaching the LCL, then rising moist adiabatically. End of explanation """ # Create a new figure. The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) skew = SkewT(fig) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r', linewidth=2) skew.plot(p, Td, 'g', linewidth=2) skew.plot_barbs(p, u, v) # Show the plot plt.show() """ Explanation: Basic Skew-T Plotting The Skew-T (log-P) diagram is the standard way to view rawinsonde data. The y-axis is height in pressure coordinates and the x-axis is temperature. The y coordinates are plotted on a logarithmic scale and the x coordinate system is skewed. An explanation of skew-T interpretation is beyond the scope of this tutorial, but here we will plot one that can be used for analysis or publication. The most basic skew-T can be plotted with only five lines of Python. These lines perform the following tasks: Create a Figure object and set the size of the figure. Create a SkewT object. Plot the pressure and temperature (note that the pressure, the independent variable, is first even though it is plotted on the y-axis). Plot the pressure and dewpoint temperature. Plot the wind barbs at the appropriate pressure using the u and v wind components. End of explanation """ # Create a new figure.
The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) skew = SkewT(fig, rotation=30) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot_barbs(p, u, v) skew.ax.set_ylim(1000, 100) skew.ax.set_xlim(-40, 60) # Plot LCL temperature as black dot skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black') # Plot the parcel profile as a black line skew.plot(p, parcel_prof, 'k', linewidth=2) # Shade areas of CAPE and CIN skew.shade_cin(p, T, parcel_prof, Td) skew.shade_cape(p, T, parcel_prof) # Plot a zero degree isotherm skew.ax.axvline(0, color='c', linestyle='--', linewidth=2) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() # Show the plot plt.show() """ Explanation: Advanced Skew-T Plotting Fiducial lines indicating dry adiabats, moist adiabats, and mixing ratio are useful when performing further analysis on the Skew-T diagram. Often the 0C isotherm is emphasized and areas of CAPE and CIN are shaded. End of explanation """ # Create a new figure. 
The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) skew = SkewT(fig, rotation=30) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot_barbs(p, u, v) skew.ax.set_ylim(1000, 100) skew.ax.set_xlim(-40, 60) # Plot LCL as black dot skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black') # Plot the parcel profile as a black line skew.plot(p, parcel_prof, 'k', linewidth=2) # Shade areas of CAPE and CIN skew.shade_cin(p, T, parcel_prof, Td) skew.shade_cape(p, T, parcel_prof) # Plot a zero degree isotherm skew.ax.axvline(0, color='c', linestyle='--', linewidth=2) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() # Create a hodograph # Create an inset axes object that is 40% width and height of the # figure and put it in the upper right hand corner. ax_hod = inset_axes(skew.ax, '40%', '40%', loc=1) h = Hodograph(ax_hod, component_range=80.) h.add_grid(increment=20) h.plot_colormapped(u, v, wind_speed) # Plot a line colored by wind speed # Show the plot plt.show() """ Explanation: Adding a Hodograph A hodograph is a polar representation of the wind profile measured by the rawinsonde. Winds at different levels are plotted as vectors with their tails at the origin, the angle from the vertical axes representing the direction, and the length representing the speed. The line plotted on the hodograph is a line connecting the tips of these vectors, which are not drawn. End of explanation """
y2ee201/Deep-Learning-Nanodegree
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] """ Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. 
End of explanation """ import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ # TODO: Implement Function vocab = set(text) vocab_to_int = {c:i for i, c in enumerate(vocab)} int_to_vocab = {i:c for i, c in enumerate(vocab)} return vocab_to_int, int_to_vocab """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables) """ Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation """ def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ # TODO: Implement Function dict_punctuation = { '.':'||Period||', ',':'||Comma||', '"':'||Quotation_Mark||', ';':'||Semicolon||', '!':'||Exclamation_Mark||', '?':'||Question_Mark||', '(':'||Left_Parenthesis||', ')':'||Right_Parenthesis||', '--':'||Dash||', '\n':'||Return||' } return dict_punctuation """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup) """ Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". 
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. 
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around each. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found.
:param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) cell_state = cell.zero_state(batch_size, tf.float32) cell_state = tf.identity(cell_state, name = 'initial_state') return cell, cell_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell) """ Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation """ def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ # TODO: Implement Function embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1)) embed = tf.nn.embedding_lookup(embedding, input_data) return embed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed) """ Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. 
End of explanation """ def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ # TODO: Implement Function outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name = 'final_state') return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) """ Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation """ embed_dim = 256 def build_nn(cell, rnn_size, input_data, vocab_size): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :return: Tuple (Logits, FinalState) """ # TODO: Implement Function embed = get_embed(input_data, vocab_size, embed_dim) outputs, final_state = build_rnn(cell, embed) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, weights_initializer=tf.truncated_normal_initializer(stddev=0.1)) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn) """ Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. 
Return the logits and final state in the following tuple (Logits, FinalState) End of explanation """ def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ # TODO: Implement Function words_per_batch = batch_size * seq_length batch_count = len(int_text) // words_per_batch n_words = batch_count * words_per_batch # Each row of a batch strides across the text (see the example below); targets are the inputs shifted by one word, with the final target patched in when one more word is available xdata = np.array(int_text[:n_words]) ydata = np.roll(xdata, -1) if len(int_text) > n_words: ydata[-1] = int_text[n_words] x_batches = np.split(xdata.reshape(batch_size, -1), batch_count, axis=1) y_batches = np.split(ydata.reshape(batch_size, -1), batch_count, axis=1) return np.array(list(zip(x_batches, y_batches))) # test = get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) # print(test) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches) """ Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2 3], [ 7 8 9]], # Batch of targets [[ 2 3 4], [ 8 9 10]] ], # Second Batch [ # Batch of Input [[ 4 5 6], [10 11 12]], # Batch of targets [[ 5 6 7], [11 12 13]] ] ] ``` End of explanation """ # Number of Epochs num_epochs = 500 # Batch Size batch_size = 1024 # RNN Size rnn_size = 512 # Sequence Length seq_length = 65 # Learning Rate learning_rate = 0.01 # Show stats for every n number of batches show_every_n_batches = 10 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save' """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of each sequence. Set learning_rate to the learning rate. Set show_every_n_batches to how often, in batches, the neural network should print progress.
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) """ Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() """ Explanation: Checkpoint End of explanation """ def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ # TODO: Implement Function return loaded_graph.get_tensor_by_name('input:0'), loaded_graph.get_tensor_by_name('initial_state:0'), loaded_graph.get_tensor_by_name('final_state:0'), loaded_graph.get_tensor_by_name('probs:0') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) """ Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation """ def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ # TODO: Implement Function return int_to_vocab.get(np.argmax(probabilities)) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) """ Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. End of explanation """ gen_length = 500 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = 
tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) """ Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation """
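pick_word above always returns the most probable word, which can make long generated scripts repetitive. A minimal alternative is to sample the next word in proportion to its probability; the sketch below is illustrative only (pick_word_sampled and the toy vocabulary are not part of the project code):

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab, rng=np.random.default_rng(0)):
    # Draw the next word id in proportion to its probability instead of
    # always taking the argmax, which tends to loop on frequent words.
    word_id = rng.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

vocab = {0: 'homer_simpson', 1: 'moe_szyslak', 2: 'barney_gumble'}
probs = np.array([0.2, 0.5, 0.3])
print(pick_word_sampled(probs, vocab))  # one of the three names, weighted by probs
```

Swapping something like this in for pick_word changes only which word id is chosen; the rest of the generation loop is unaffected.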
Repository: satishgoda/learning · prg/web/javascript/libs/d3/d3_1_intro.ipynb · MIT license
%%javascript require.config({ paths: { d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.5.5/d3.min', } }); """ Explanation: View this document in jupyter nbviewer References http://blog.thedataincubator.com/2015/08/embedding-d3-in-an-ipython-notebook https://github.com/cmoscardi/embedded_d3_example/blob/master/Embedded_D3.ipynb https://bost.ocks.org/mike/circles/ End of explanation """ %%javascript require( ['d3'], function(d3) { } ); %%svg <svg width="720" height="120"> <circle id="circle0" cx="40" cy="60" r="10"></circle> <circle id="circle0" cx="80" cy="60" r="10"></circle> <circle id="circle0" cx="120" cy="60" r="10"></circle> </svg> %%javascript require(['d3'], function(d3) { var circle = d3.selectAll("#circle1"); circle.style("fill", "red"); circle.attr("r", 10); }); %%svg <svg width="720" height="120"> <circle id="circle1" cx="40" cy="60" r="10"></circle> <circle id="circle1" cx="80" cy="60" r="10"></circle> <circle id="circle1" cx="120" cy="60" r="10"></circle> </svg> %%javascript require(['d3'], function(d3) { var circle = d3.selectAll("#circle2"); circle.style("fill", "red"); circle.attr("r", 30); }); %%svg <svg width="720" height="120"> <circle id="circle2" cx="40" cy="60" r="10" style="fill:blue;"></circle> <circle id="circle2" cx="80" cy="60" r="10" style="fill:blue;"></circle> <circle id="circle2" cx="120" cy="60" r="10" style="fill:blue;"></circle> </svg> %%javascript require( ['d3'], function(d3) { var circle = d3.selectAll("#circle3"); circle.attr( "cx", function() { return Math.random() * 720; } ); } ); %%svg <svg width="720" height="120"> <circle id="circle3" cx="40" cy="60" r="30" style="fill:red;"></circle> <circle id="circle3" cx="80" cy="60" r="30" style="fill:red;"></circle> <circle id="circle3" cx="120" cy="60" r="30" style="fill:red;"></circle> </svg> %%svg <svg width="720" height="120"> <circle id="circle4before" cx="40" cy="60" r="10" style="fill:red;"></circle> <circle id="circle4before" cx="240" cy="60" r="10" style="fill:green;"></circle> 
<circle id="circle4before" cx="500" cy="60" r="10" style="fill:blue;"></circle> </svg> %%javascript require( ['d3'], function(d3) { var circle = d3.selectAll("#circle4after") circle.data([50, 150, 600]); circle.attr( "r", function(dataitem) { return Math.sqrt(dataitem); } ); } ); %%svg <svg width="720" height="120"> <circle id="circle4after" cx="40" cy="60" r="10" style="fill:red;"></circle> <circle id="circle4after" cx="240" cy="60" r="10" style="fill:green;"></circle> <circle id="circle4after" cx="500" cy="60" r="10" style="fill:blue;"></circle> </svg> %%svg <svg width="720" height="120"> <rect id="rect0" x="1" y="0" width="100" height="100" style="fill:red;"></rect> <rect id="rect0" x="5" y="0" width="100" height="100" style="fill:green;"></rect> <rect id="rect0" x="9" y="0" width="100" height="100" style="fill:blue;"></rect> </svg> %%javascript require( ['d3'], function(d3) { var rect = d3.selectAll("#rect1") rect.data([0, 40, 100]); rect.attr( "x", function(dataitem, index) { return index * 100 + dataitem; } ); } ); %%svg <svg width="720" height="120"> <rect id="rect1" x="1" y="0" width="100" height="100" style="fill:red;"></rect> <rect id="rect1" x="5" y="0" width="100" height="100" style="fill:green;"></rect> <rect id="rect1" x="9" y="0" width="100" height="100" style="fill:blue;"></rect> </svg> """ Explanation: Template code for writing d3js scripts End of explanation """ %%svg <svg width="500" height="100"> <g stroke="green"> <line x1="10" y1="30" x2="10" y2="100" stroke-width="1"></line> <line x1="20" y1="30" x2="20" y2="100" stroke-width="1"></line> </g> </svg> %%svg <svg width="720" height="120"> <circle cx="40" cy="60" r="10"></circle> <circle cx="80" cy="60" r="10"></circle> <circle cx="120" cy="60" r="10"></circle> </svg> """ Explanation: TODO End of explanation """ %%javascript require( ['d3'], function(d3) { var svg = d3.select("svg"); var circle = svg.selectAll("circle"); circle.data([5, 10, 15, 20]); circle.attr("r", function(d) { return d; }); 
console.log(circle); var update = circle.data([5, 10, 15, 20]); var circleEnter = update.enter().append("circle"); circleEnter.attr("cx", function(d, i) { return 40 * (i + 1); }).attr("cy", 60).attr("r", function(d) { return d; }); } ); """ Explanation: Enter selections are not working!! In d3 v3, enter() only exists on the update selection returned by data(), so that return value has to be captured rather than discarded; the appended circles also need cx and cy attributes, otherwise they are drawn at the origin. End of explanation """
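The mechanics of d3's data join can be seen without a DOM at all. The sketch below (plain JavaScript; dataJoin is an illustrative helper, not a d3 API) mimics how d3 v3 pairs elements with data by index: matched pairs form the update selection, surplus data the enter selection, and surplus elements the exit selection:

```javascript
// DOM-free sketch of d3's by-index data join (no key function, as in d3 v3).
function dataJoin(elements, data) {
  const n = Math.min(elements.length, data.length);
  return {
    update: data.slice(0, n),           // data bound to an existing element
    enter: data.slice(elements.length), // data with no element yet -> append
    exit: elements.slice(data.length),  // elements with no datum -> remove
  };
}

// Three existing circles joined with four data values: one circle to append.
const join = dataJoin(['c1', 'c2', 'c3'], [5, 10, 15, 20]);
console.log(join.enter);  // [ 20 ]
```

This is why the cell above has exactly one entering circle: four data values joined against three existing circles.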
Repository: DwangoMediaVillage/pqkmeans · tutorial/2_image_clustering.ipynb · MIT license
import numpy import pqkmeans import tqdm import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Chapter 2: Image clustering This chapter contains the following: Read images from the CIFAR10 dataset Extract a deep feature (VGG16 fc6 activation) from each image using Keras Run clustering on deep features Visualize the result of image clustering Requisites: - numpy - pqkmeans - keras - tqdm - scipy - matplotlib 1. Read images from the CIFAR10 dataset End of explanation """ from keras.datasets import cifar10 (img_train, _), (img_test, _) = cifar10.load_data() """ Explanation: In this chapter, we show an example of image clustering. A deep feature (VGG16 fc6 activation) is extracted from each image using Keras, then the features are clustered using PQk-means. First, let's read images from the CIFAR10 dataset. End of explanation """ print("The first image of img_train:\n") plt.imshow(img_train[0]) """ Explanation: When you run the above cell for the first time, it can take several minutes to download the dataset to your local space (typically ~/.keras/datasets). The CIFAR10 dataset contains small color images, where each image is a uint8 RGB 32x32 array. The shape of img_train is (50000, 32, 32, 3), and that of img_test is (10000, 32, 32, 3). Let's see some of them. End of explanation """ img_train = img_train[0:1000] img_test = img_test[0:5000] print("img_train.shape:\n{}".format(img_train.shape)) print("img_test.shape:\n{}".format(img_test.shape)) """ Explanation: To train a PQ-encoder, we take the first 1000 images from img_train. The clustering will be run on the first 5000 images from img_test.
End of explanation """ from keras.applications.vgg16 import VGG16 from keras.applications.vgg16 import preprocess_input from keras.models import Model from scipy.misc import imresize base_model = VGG16(weights='imagenet') # Read the ImageNet pre-trained VGG16 model model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc1').output) # We use the output from the 'fc1' layer def extract_feature(model, img): # This function takes an RGB image (np.array with the size (H, W, 3)) as an input, then returns a 4096D feature vector. # Note that this can be accelerated by batch-processing. x = imresize(img, (224, 224)) # Resize to 224x224 since the VGG takes this size as an input x = numpy.float32(x) # Convert from uint8 to float32 x = numpy.expand_dims(x, axis=0) # Convert the shape from (224, 224, 3) to (1, 224, 224, 3) x = preprocess_input(x) # Subtract the average value of ImageNet. feature = model.predict(x)[0] # Extract a feature, then reshape from (1, 4096) to (4096, ) feature /= numpy.linalg.norm(feature) # Normalize the feature. return feature """ Explanation: 2. Extract a deep feature (VGG16 fc6 activation) from each image using Keras Next, let us extract a 4096-dimensional deep feature from each image. For the feature extractor, we employ an activation from the sixth fully connected layer (in the Keras implementation, it is called fc1) of the ImageNet pre-trained VGG16 model. See the Keras tutorial for more details. End of explanation """ features_train = numpy.array([extract_feature(model, img) for img in tqdm.tqdm(img_train)]) features_test = numpy.array([extract_feature(model, img) for img in tqdm.tqdm(img_test)]) print("features_train.shape:\n{}".format(features_train.shape)) print("features_test.shape:\n{}".format(features_test.shape)) """ Explanation: The first time you run this, it also takes several minutes to download the ImageNet pre-trained weights. Let us extract features from images as follows. This takes several minutes using a typical GPU such as a GTX1080.
End of explanation """ # Train an encoder encoder = pqkmeans.encoder.PQEncoder(num_subdim=4, Ks=256) encoder.fit(features_train) # Encode the deep features to PQ-codes pqcodes_test = encoder.transform(features_test) print("pqcodes_test.shape:\n{}".format(pqcodes_test.shape)) # Run clustering K = 10 print("Runtime of clustering:") %time clustered = pqkmeans.clustering.PQKMeans(encoder=encoder, k=K).fit_predict(pqcodes_test) """ Explanation: Now we have a set of 4096D features for both the train-dataset and the test-dataset. Note that features_train[0] is an image descriptor for img_train[0] 3. Run clustering on deep features Let us train a PQ-encoder using the training dataset, and compress the deep features into PQ-codes End of explanation """ for k in range(K): print("Cluster id: k={}".format(k)) img_ids = [img_id for img_id, cluster_id in enumerate(clustered) if cluster_id == k] cols = 10 img_ids = img_ids[0:cols] if cols < len(img_ids) else img_ids # Let's see the top 10 results # Visualize images assigned to this cluster imgs = img_test[img_ids] plt.figure(figsize=(20, 5)) for i, img in enumerate(imgs): plt.subplot(1, cols, i + 1) plt.imshow(img) plt.show() """ Explanation: 4. Visualize the result of image clustering Now we can visualize image clusters. As can be seen, each cluster has similar images such as "horses", "cars", etc. End of explanation """
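A PQ code is compact because each high-dimensional vector is stored as a few small codeword ids. As a rough illustration of the encoding step (not pqkmeans internals: the real PQEncoder learns its codewords from the training features, while here the codebook is random and the vectors are only 8D), a numpy sketch:

```python
import numpy as np

def pq_encode(vectors, codewords):
    # Split each vector into equal sub-vectors and record, per subspace,
    # the id of the nearest codeword; output shape is (N, num_subdim).
    num_subdim, Ks, subdim = codewords.shape
    codes = np.empty((len(vectors), num_subdim), dtype=np.uint8)
    for m in range(num_subdim):
        sub = vectors[:, m * subdim:(m + 1) * subdim]
        dists = ((sub[:, None, :] - codewords[m][None, :, :]) ** 2).sum(axis=2)
        codes[:, m] = dists.argmin(axis=1)
    return codes

rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 256, 2))    # num_subdim=4, Ks=256, for 8D vectors
vectors = rng.normal(size=(10, 8))
print(pq_encode(vectors, codebook).shape)  # (10, 4): four one-byte ids per vector
```

With num_subdim=4 and Ks=256, each 4096D float vector above collapses to just four bytes, which is what makes clustering millions of deep features tractable.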
Repository: jpopham91/nhl15-analytics · Untitled.ipynb · MIT license
## import statements import pandas as pd import numpy as np import seaborn as sns import matplotlib as mpl import matplotlib.pyplot as plt %matplotlib inline ## set up plotting aesthetics sns.set(rc={'axes.facecolor' : '#202020', 'axes.labelcolor' : '#e0e0e0', 'axes.edgecolor' : '#e0e0e0', 'axes.grid' : False, 'text.color' : '#e0e0e0', 'figure.facecolor' : '#202020', 'figure.edgecolor' : '#202020', 'xtick.color' : '#e0e0e0', 'ytick.color' : '#e0e0e0', 'legend.fancybox' : 'true' }) octal_seq = [sns.hls_palette(12, 0, .5, .5)[i] for i in [0,1,2,4,7,9]] octal_cat = [sns.hls_palette(12, 0, .5, .5)[i] for i in [7,1,4,0,9,2]] octal_unity = [sns.hls_palette(12, 0, .5, .5)[i] for i in [4]] #sns.palplot(octal_seq) #sns.palplot(octal_cat) """ Explanation: Analysis of NHL 15 Online Versus Data Notebook setup End of explanation """ # returns a colormap for an rgb color fading to full transparency def fade_map(r=255,g=255,b=255,max_a=1,N=16): N = 16 r_list = r*np.ones(N)/255 g_list = g*np.ones(N)/255 b_list = b*np.ones(N)/255 a_list = np.linspace(0, max_a, N) C = np.transpose([r_list,g_list,b_list,a_list]) return mpl.colors.ListedColormap(C) # simulate continuity in an integers by adding random value between +/- 1/2 def jitter_int(x): # rand decimal on (-0.5, +0.5) j = np.random.random_sample()-0.5 return (float(x) + j) # perform kde plot for different factor values def factorkde(stat, factor, data, smooth=False, **kwargs): # create buffer not to change existing data temp_df = data # smooth kde by adding jitter if smooth: temp_df[stat] = temp_df[stat].apply(jitter_int) # get list of all possible factors factor_list = list(set(temp_df[factor].values)) # separate distributions for each factor dists = [(temp_df.loc[data[factor]==f, stat], f) for f in factor_list] # plot kde's for each factor for dist in dists: sns.kdeplot(dist[0], label=dist[1], **kwargs) def plot_counts(factor, data, num_to_plot = -1, **kwargs): factor_list = data[factor].value_counts().index.values df = 
pd.DataFrame({factor : factor_list, 'count' : data[factor].value_counts().values}) total = df['count'].sum() df['pct'] = 100*df['count']/total if num_to_plot == -1: num_to_plot = len(factor_list) return sns.factorplot(x = factor, y = 'pct', data = df[:num_to_plot], kind = 'bar', ci = 0, estimator = sum, **kwargs) def get_record(df, small=0): wins = df['result'].value_counts()['win'] losses = df['result'].value_counts()['loss'] total = wins+losses prob = wins / total err = np.sqrt(prob*(1-prob) / (total+small)) return(wins, losses, prob, err) def record_str(df, small=0): w, l, p, e = get_record(df, small) p *= 100 e *= 100 return('{0}-{1} ({2:.0f} ± {3:.0f}%)'.format(w, l, p, e)) """ Explanation: Custom function definitions End of explanation """ # read data df = pd.read_csv('./data/nhl15_gamelog.csv') #df.info() ## transform data df['count'] = 1 # get stat differentials df['g_dif'] = df.gf - df.ga df['s_dif'] = df.sf - df.sa df['toa_dif'] = df.toaf - df.toaa df['hit_dif'] = df.hitf - df.hita df['lvl_dif'] = df.lvl - df.opplvl df['opp_skill_quantile'] = pd.qcut(df.opplvl, 5, labels=['Lowest', 'Lower', 'Average', 'Higher', 'Highest']) # get stat percentages df['pct_g'] = 100 * df.gf / (df.gf + df.ga) df['pct_s'] = 100 * df.sf / (df.sf + df.sa) df['pct_toa'] = 100 * df.toaf / (df.toaf + df.toaa) df['pct_offense'] = (df.pct_s + df.pct_toa)/2 # categorize wins vs losses df['result'] = df['g_dif'].apply(lambda l: 'win' if l > 0 else 'loss') df['win'] = df['g_dif'].apply(lambda l: 1 if l > 0 else 0) df['more_shots'] = df['s_dif'].apply(lambda l: 1 if l > 0 else 0) df['more_toa'] = df['toa_dif'].apply(lambda l: 1 if l > 0 else 0) df['quadrant'] = 10 * df.more_toa + df.more_shots df['quadrant'] = df['quadrant'].apply(lambda l: 'both' if l == 11 else 'poss' if l == 10 else 'shots' if l == 1 else 'neither' ) df['previous_result'] = df['result'].shift(1) df['previous_result2'] = df['result'].shift(2) df['previous_result3'] = df['result'].shift(3) """ Explanation: Data munging 
End of explanation """ sns.lmplot(x = 's_dif', x_jitter = 0.5, y = 'toa_dif', hue = 'result', fit_reg = False, data = df.sort('g_dif', ascending=False), palette = octal_cat, aspect = 1) """ Explanation: Visualization Posession and Shot Advantages End of explanation """ temp_df = df.rename(columns={'g_dif': 'Goal Differential'}) grid = sns.lmplot(x = 's_dif', x_jitter = 0.5, y = 'toa_dif', hue = 'Goal Differential', fit_reg = False, data = temp_df.sort('Goal Differential', ascending=False), palette = sns.diverging_palette(240, 30, l=50, s=90, n=12, sep=1, center='dark'), scatter_kws={"s": 40} ) grid.set_axis_labels('Shot Difference', 'Attack Time Difference (min)') plt.title('Stat Advantage Heatmap') #fig.legend('Goal Differential') """ Explanation: This gives a basic overview of the raw stat distributions in wins and losses. It is clear the majority of wins (blue) occur in the first quadrant, where there is an advantage in both shots and time of posession. We can take this a step further and use a color gradient for goal differential instead of just wins/losses. This should highlight big wins and losses compared to close decisions. 
End of explanation """ temp_df = df.rename(columns={'g_dif': 'Goal Differential'}) grid = sns.lmplot(x = 'pct_s', x_jitter = 0.5, y = 'pct_toa', hue = 'Goal Differential', fit_reg = False, data = temp_df.sort('Goal Differential', ascending=False), palette = sns.diverging_palette(240, 30, l=50, s=90, n=12, sep=1, center='dark'), scatter_kws={"s": 40} ) grid.set_axis_labels('Percent of Shots', 'Percent of Attack Time') plt.title('Stat Advantage Heatmap') plt.xlim(0,100) plt.ylim(0,100) #fig.legend('Goal Differential') """ Explanation: A similar plot can be produced using the percentage of shots and posession instead of the absolute differences End of explanation """ sns.set_palette(octal_cat) factorkde(stat='s_dif', factor='result', data=df.sort('result'), smooth=True, kernel='gau') plt.title('Distribution of Shot Differentials by Result') plt.xlabel('Shot Differential') plt.ylabel('Fraction of Games') sns.set_palette(octal_cat) factorkde(stat='toa_dif', factor='result', data=df.sort('result'), smooth=True, kernel='gau') plt.title('Distribution of Attack Time Differentials by Result') plt.xlabel('Attack Time Differential') plt.ylabel('Fraction of Games') """ Explanation: Here, the effect of shot and posession advantages are quite a bit clearer. Another way of viewing these distributions would be a plot of their kernel density estimates. This gives a gaussian-like distribution of results, and the custom 'factorkde' function allows two separate curves to be overlayed representing wins and losses. 
End of explanation """ fig, ax = plt.subplots() sns.kdeplot(df['s_dif'], df['toa_dif'], cmap=fade_map(max_a=.05), shade=True, ax=ax) sns.kdeplot(df[df.result=='win']['s_dif'], df[df.result=='win']['toa_dif'], cmap=fade_map(64,128,192,1), shade=False, ax=ax, label='Wins') sns.kdeplot(df[df.result=='loss']['s_dif'], df[df.result=='loss']['toa_dif'], cmap=fade_map(192,128,64,1), shade=False, ax=ax, label='Losses') plt.title('Distribution of Shot and TOA Differentials by Result') plt.xlabel('Shot Differential') plt.ylabel('Posession Time Differential (min)') """ Explanation: 2D KDE plots can be used to visualize the distribution of both shots and TOA simultaneously. This gives a heatmap of sorts which indicate the common stat differentials in wins and losses. End of explanation """ print(record_str(df, 0)) """ Explanation: Lets take a look at the raw numbers for different scenarios. Currently, my overall record in logged games is: End of explanation """ print('Shot Advantage: ', record_str(df[df.s_dif>0])) print('Posession Advantage:', record_str(df[df.toa_dif>0])) """ Explanation: And in games with shot and posession advantages: End of explanation """ print('Shot and Posession Advantage:', record_str(df.loc[df.s_dif>0].loc[df.toa_dif>0], 1)) print('Shot Advantage Only: ', record_str(df.loc[df.s_dif>0].loc[df.toa_dif<0], 1)) print('Posession Advantage Only: ', record_str(df.loc[df.s_dif<0].loc[df.toa_dif>0], 1)) print('Neither Advantage: ', record_str(df.loc[df.s_dif<0].loc[df.toa_dif<0], 1)) """ Explanation: So it would seem that shots are a slightly better predictor of the result of the game, but it isn't statistically significant. The shot advantage does suggest in increase in the odds of winning over baseline. What happens if we control both variables at the same time? (i.e. 
Shot but not Possession advantage) End of explanation """ print('Shot and Possession Advantage:', record_str(df.loc[df.s_dif>0].loc[df.toa_dif>0], 1)) print('Shot Advantage Only: ', record_str(df.loc[df.s_dif>0].loc[df.toa_dif<0], 1)) print('Possession Advantage Only: ', record_str(df.loc[df.s_dif<0].loc[df.toa_dif>0], 1)) print('Neither Advantage: ', record_str(df.loc[df.s_dif<0].loc[df.toa_dif<0], 1)) """ Explanation: From these results it definitely appears that a shot advantage is much more important than a possession time advantage. This would suggest that shooting from the outside and looking for rebounds fares better than playing keep-away and trying to set up high-percentage shots. Can you get on a hot streak? End of explanation """ print('Overall: ', record_str(df, 0)) print('Coming off win: ', record_str(df.loc[df.previous_result=='win'], 1)) print('Coming off loss:', record_str(df.loc[df.previous_result=='loss'], 1)) print('Coming off 2 wins: ', record_str(df.loc[df.previous_result=='win'].loc[df.previous_result2=='win'], 1)) print('Coming off 2 losses:', record_str(df.loc[df.previous_result=='loss'].loc[df.previous_result2=='loss'], 1)) print('Coming off 3 wins: ', record_str(df.loc[df.previous_result=='win'].loc[df.previous_result2=='win'].loc[df.previous_result3=='win'], 1)) print('Coming off 3 losses:', record_str(df.loc[df.previous_result=='loss'].loc[df.previous_result2=='loss'].loc[df.previous_result3=='loss'], 1)) """ Explanation: I find this very surprising. I definitely feel like my odds of winning are higher after a few consecutive wins, but that doesn't appear to be the case. This result agrees with the famous 'hot hand fallacy'. Opponent's Skill Level Using the game's built-in ranking system, we can look at the results as a function of the opponent's skill. End of explanation """ grid = sns.barplot(x = 'opp_skill_quantile', y = 'win', data = df.sort('opplvl'), estimator = np.mean, ci = 80, palette = octal_seq) """ Explanation: There's definitely a large degree of variability, but against lesser opponents the probability of winning is significantly higher.
End of explanation """ sns.barplot(x = 'opp_skill_quantile', y = 's_dif', data = df.sort('opplvl'), estimator = np.mean, ci = 80, palette = octal_seq) plt.xlabel('Opponent Skill') plt.ylabel('Shot Differential') sns.barplot(x = 'opp_skill_quantile', y = 'toa_dif', data = df.sort('opplvl'), estimator = np.mean, ci = 80, palette = octal_seq) plt.xlabel('Opponent Skill') plt.ylabel('Attack Time Differential') """ Explanation: So what do the stats look like as the oppent skill varies? End of explanation """ grid = plot_counts('oppteam', df, 6, aspect=2.5, palette=octal_seq) grid.set_axis_labels('Team Faced', 'Percentage of Games Played') print('Against Chicago: ', record_str(df[df.oppteam=='CHI'])) print('Against New York: ', record_str(df[df.oppteam=='NYR'])) print('Against Montreal: ', record_str(df[df.oppteam=='MTL'])) print('Against Pittsburgh: ', record_str(df[df.oppteam=='PIT'])) print('Against Boston: ', record_str(df[df.oppteam=='BOS'])) print('Against Washington: ', record_str(df[df.oppteam=='WSH'])) from IPython.core.display import HTML styles = open('./themes/custom.css', 'r').read() HTML(styles) """ Explanation: Other than the two extremes, the stats look pretty balanced in most games. Unsurprisingly, the stat advantages are in my favor against weaker opponents, and vice-versa for stronger ones. Most Common Teams Faced End of explanation """
Repository: gganssle/dB-vs-perc · dB-vs-perc.ipynb · MIT license
# percentage function, v = new value, r = reference value def perc(v,r): return 100 * (v / r) """ Explanation: Decibels vs. Percentages Percentages are simple, right? I bought four oranges. I ate two. What percentage of the original four do I have left? 50%. Easy. How many decibels down in oranges am I? Not so easy, eh? Well the answer is 3. Skim the rest of this notebook to find out why. Percentage: The prefix <code>cent</code> indicates one hundred. For example: one <b>cent</b>ury is equal to one-hundred years. Percentage is simply a ratio of numbers compared to the number one-hundred, hence "per-cent", or "per one hundred". End of explanation """ original = 4 uneaten = 2 print("You're left with", perc(uneaten, original), "% of the original oranges.") """ Explanation: Let's now formalize our oranges percentage answer from above: End of explanation """ # decibel function, v = new value, r = reference value def deci(v,r): return 10 * math.log(v / r, 10) """ Explanation: Decibels: A decibel is simply a different way to represent the ratio of two numbers. It's based on a logrithmic calculation, as to compare values with large variance and small variance on the same scale (more on this below). <br><i>For an entertainingly complete history about how decibels were decided upon, read the <a href="https://en.wikipedia.org/wiki/Decibel" target="_blank">Wikipedia article.</a></i> End of explanation """ print("After lunch you have", round( deci(uneaten, original), 2), "decibels less oranges.") """ Explanation: Let's now formalize our oranges decibel answer from above: End of explanation """ perc(3000000, original) """ Explanation: That's it. Simply calculate the log to base ten of the ratio and multiply by ten. <hr> Advanced Part <hr> Well <b>who cares?</b> I'm just going to use percentages. They're easier to calculate and I don't have to relearn my forgotten-for-decades logarithms. <br>True, most people use percentages because that's what everyone else does. 
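The opening claim (eating two of four oranges leaves you with 50% of them, but 3 decibels down) follows directly from the two definitions; note that deci relies on the math module, which the original notebook imports earlier. A quick self-contained check:

```python
import math

def perc(v, r):
    return 100 * (v / r)

def deci(v, r):
    return 10 * math.log(v / r, 10)

print(perc(2, 4))            # 50.0  -> half the oranges remain
print(round(deci(2, 4), 2))  # -3.01 -> about 3 decibels down
```

A handy consequence: a factor of two in either direction is always about 3 dB, no matter how large the reference value is.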
However, I suggest that decibel loss is a more powerful and perceptively accurate representation of ratios. Large ratios - small ratios Imagine you started with four oranges, and had somehow gained three million oranges. Sitting in the middle of your grove, you'd be left with a fairly cumbersome percentage to express: End of explanation """ deci(3000000, original) """ Explanation: "I have seventy five million percent of my original oranges." Yikes. What does that even mean? Use decibels instead: End of explanation """ # Less oranges than original number print(deci(uneaten, original)) print(perc(uneaten, original)) # More oranges than original number print(deci(8, original)) print(perc(8, original)) """ Explanation: "I've gained fifty-nine decibels of oranges." Negative ratios Additionally, the decibel scale automatically expresses positive and negative ratios. End of explanation """ perc(5.2, original) """ Explanation: Greater than 100% ratios There's some ambiguity when a person states she has 130% more oranges than her original number. Does this mean she has 5.2 oranges (which is 30% more than 4 oranges)? End of explanation """ perc(9.2, original) """ Explanation: Does she have 9.2 oranges (which is 130% more than 4 oranges)? 
End of explanation """ deci(5.2, original) """ Explanation: Expressed in decibel format, the answer is clear: End of explanation """ width = 100 center = 50 ref = center percplot = np.zeros(width) deciplot = np.zeros(width) for i in range(width): val = center - (width/2) + i if val == 0: val = 0.000001 percplot[i] = perc(val, ref) deciplot[i] = deci(val, ref) plt.plot(range(width), percplot, 'r', label="percentage") plt.plot(range(width), deciplot, 'b', label="decibels") plt.plot((ref,ref), (-100,percplot[width-1]), 'k', label="reference value") plt.legend(loc=4) plt.show() #plt.savefig('linear_plot.png', dpi=150) plt.semilogy(range(width), deciplot, 'b', label="decibels") plt.semilogy(range(width), percplot, 'r', label="percentage") plt.legend(loc=4) plt.show() #plt.savefig('log_plot.png', dpi=150) """ Explanation: She's gained 1.1 decibels in orange holdings. How do they stack up? End of explanation """
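Going the other way — recovering a ratio from a decibel figure — is just the inverse of the notebook's deci() formula. The sketch below restates perc() and deci() so it runs on its own; undeci is my own name for the inverse, not something the notebook defines:

```python
import math

def perc(v, r):
    # percentage of the reference value, as defined earlier
    return 100 * (v / r)

def deci(v, r):
    # decibels relative to the reference value, as defined earlier
    return 10 * math.log(v / r, 10)

def undeci(db, r):
    # invert deci(): apply a gain/loss of `db` decibels to reference `r`
    return r * 10 ** (db / 10)

db = deci(2, 4)                       # eating 2 of 4 oranges: ~ -3.01 dB
assert abs(undeci(db, 4) - 2) < 1e-9  # round-trips back to 2 oranges
assert perc(2, 4) == 50.0
```

So "3 decibels down" and "50% left" are the same statement about the same ratio, just on different scales.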
dtamayo/reboundx
ipython_examples/ParameterInterpolation.ipynb
gpl-3.0
import numpy as np data = np.loadtxt('m.txt') # return (N, 2) array mtimes = data[:, 0] # return only 1st col masses = data[:, 1] # return only 2nd col data = np.loadtxt('r.txt') rtimes = data[:, 0] Rsuns = data[:, 1] # data in Rsun units # convert Rsun to AU radii = np.zeros(Rsuns.size) for i, r in enumerate(Rsuns): radii[i] = r * 0.00465047 """ Explanation: Interpolating Parameters If parameters are changing significantly on dynamical timescales (e.g. mass transfer at pericenter on very eccentric orbits) you need a specialized numerical scheme to do that accurately. However, there are many astrophysical settings where parameters change very slowly compared to all the dynamical timescales in the problem. As long as this is the case and the changes are adiabatic, you can modify these parameters between calls to sim.integrate very flexibly and without loss of accuracy. In order to provide a machine-independent way to interpolate parameters at arbitrary times, which can be shared between the C and Python versions of the code, we have implemented an interpolator object. For example, say you want to interpolate stellar evolution data. We show below how you can use the Interpolator structure to spline a discrete set of time-parameter values. We begin by reading in mass and radius data of our Sun, starting roughly 4 million years before the tip of its red-giant branch (RGB), and separating them into time and value arrays. You can populate these arrays however you want, but we load two text files (one for stellar mass, the other for stellar radius), where the first column gives the time (e.g., the Sun's age), and the second column gives the corresponding value (mass or radius). All values need to be in simulation units. If you're using AU, then your stellar radii should also be in AU. For an example reading MESA (Modules for Experiments in Stellar Astrophysics) data output logs, see https://github.com/sabaronett/REBOUNDxPaper. 
End of explanation """ import rebound import reboundx M0 = 0.8645388227818771 # initial mass of star R0 = 0.3833838293200158 # initial radius of star def makesim(): sim = rebound.Simulation() sim.G = 4*np.pi**2 # use units of AU, yrs and solar masses sim.add(m=M0, r=R0, hash='Star') sim.add(a=1., hash='Earth') sim.collision = 'direct' # check if RGB Sun engulfs Earth sim.integrator = 'whfast' sim.dt = 0.1*sim.particles[1].P sim.move_to_com() return sim %matplotlib inline sim = makesim() ps = sim.particles fig, ax = rebound.OrbitPlot(sim) ax.set_xlim([-2,2]) ax.set_ylim([-2,2]) ax.grid() """ Explanation: Next we set up the Sun-Earth system. End of explanation """ rebx = reboundx.Extras(sim) starmass = reboundx.Interpolator(rebx, mtimes, masses, 'spline') starradius = reboundx.Interpolator(rebx, rtimes, radii, 'spline') """ Explanation: Now we can create an Interpolator object for each parameter set and pass the corresponding arrays as arguments. End of explanation """ %%time Nout = 1000 mass = np.zeros(Nout) radius = np.zeros(Nout) a = np.zeros(Nout) ts = np.linspace(0., 4.e6, Nout) T0 = 1.23895e10 # Sun's age at simulation start for i, time in enumerate(ts): sim.integrate(time) ps[0].m = starmass.interpolate(rebx, t=T0+sim.t) ps[0].r = starradius.interpolate(rebx, t=T0+sim.t) sim.move_to_com() # lost mass had momentum, so need to move back to COM frame mass[i] = sim.particles[0].m radius[i] = sim.particles[0].r a[i] = sim.particles[1].a fig, ax = rebound.OrbitPlot(sim) ax.set_xlim([-2,2]) ax.set_ylim([-2,2]) ax.grid() """ Explanation: Finally, we integrate for 4 Myr, updating the central body's mass and radius interpolated at the time between outputs. 
We then plot the resulting system: End of explanation """ import matplotlib.pyplot as plt fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(15,10)) fig.subplots_adjust(hspace=0) ax1.set_ylabel("Star's Mass ($M_{\odot}$)", fontsize=24) ax1.plot(ts,mass, color='tab:orange') ax1.grid() ax2.set_xlabel('Time (yr)', fontsize=24) ax2.ticklabel_format(axis='x', style='sci', scilimits=(0,0)) ax2.set_ylabel('Distances (AU)', fontsize=24) ax2.plot(ts,a, label='$a_{\oplus}$') ax2.plot(ts,radius, label='$R_{\odot}$') ax2.legend(fontsize=24, loc='best') ax2.grid() """ Explanation: We see that, as the Sun loses mass along its RGB phase, the Earth has correspondingly and adiabatically expanded, as one might expect. Let's now plot the Sun's mass over time, and a comparison of the Sun's radius and Earth's semi-major axis over time, adjacent to one another. End of explanation """
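The REBOUNDx Interpolator splines the (time, value) tables loaded above. As a rough sanity check outside REBOUND, the same idea can be exercised with plain numpy — this is a linear-interpolation sketch on made-up mass values (stand-ins for the m.txt table), not the spline mode the notebook uses, so values between knots will differ slightly:

```python
import numpy as np

# made-up stand-ins for the (time, mass) table read from m.txt
mtimes = np.array([0.0, 1.0e6, 2.0e6, 3.0e6, 4.0e6])   # yr
masses = np.array([0.86, 0.80, 0.72, 0.60, 0.52])      # Msun

def mass_at(t):
    # piecewise-linear lookup; REBOUNDx's 'spline' option is smoother
    return float(np.interp(t, mtimes, masses))

assert mass_at(0.0) == 0.86                 # exact at a table knot
assert abs(mass_at(0.5e6) - 0.83) < 1e-12   # halfway between 0.86 and 0.80
```

In the notebook the equivalent calls are starmass.interpolate(rebx, t=...) inside the integration loop; the point here is only that interpolating slowly varying parameters between sim.integrate calls is an ordinary table lookup.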
lfz/Guided-Denoise
prepare_data.ipynb
apache-2.0
imagenet_path = '/work/imagenet/train/' path2 = './Originset/' n_per_class = 4 # train n_per_class_test = [10,40] # test n_train = int(n_per_class*0.75 ) subdirs = os.listdir(imagenet_path) subdirs = np.sort(subdirs) label_mapping={} example = pandas.read_csv('./sample_dev_dataset.csv') for id,name in enumerate(subdirs): label_mapping[name] = id+1 class1 = np.load('utils/dataset2_trainclass.npy') class2 = np.load('utils/dataset2_valclass.npy') class1 = [label_mapping[name] for name in class1] class2 = [label_mapping[name] for name in class2] n_repeat = n_per_class info_list = np.zeros([n_repeat*1000,12]).astype('str') trainset_d1 = np.array([]) valset_d1 = np.array([]) trainset_d2 = np.array([]) valset_d2 = np.array([]) namelist = np.array([]) i_cum =0 for i_dir,dir in enumerate(subdirs): fullpath = os.path.join(imagenet_path,dir) filelist = os.listdir(fullpath) randid = np.random.permutation(len(filelist))[:n_repeat] chosen_im = np.array(filelist)[randid] rename_im = np.array([n.split('.')[0]+'png' for n in chosen_im]) trainset_d1 = np.concatenate([trainset_d1,rename_im[:n_train]]) valset_d1 = np.concatenate([valset_d1,rename_im[n_train:]]) fullimpath = [os.path.join(fullpath,f) for f in chosen_im] namelist = np.concatenate([namelist,fullimpath]) labels = label_mapping[dir] if labels in class1: trainset_d2 = np.concatenate([trainset_d2,rename_im]) valset_d2 = np.concatenate([valset_d2,rename_im]) for i in range(n_repeat): target_class = labels while target_class==labels: target_class = np.random.randint(1000) # info_list[i].append([chosen_im[i].split('.')[0],0,0,0,1,1,labels,target_class,0,0,0,0]) info_list[i_cum] = np.array([chosen_im[i].split('.')[0],0,0,0,1,1,labels,target_class,0,0,0,0]) i_cum += 1 newpd = pandas.DataFrame(info_list) newpd.columns = example.columns newpd.to_csv('dev_dataset.csv') pool = Pool() resize_partial = partial(resize_all,namelist=namelist,path2=path2) _ = pool.map(resize_partial,range(len(namelist))) 
np.save('./utils/dataset1_train_split.npy',trainset_d1) np.save('./utils/dataset1_val_split.npy',valset_d1) np.save('./utils/dataset2_train_split.npy',trainset_d2) np.save('./utils/dataset2_val_split.npy',valset_d2) """ Explanation: generate the training set, set your imagenet_path dataset1: the default dataset dataset2: only 750 classes in the training set, and 250 classes in the testset, designed to evaluate the transferability the training and validation sets are extracted from the training set of the imagenet End of explanation """ imagenet_path = '/work/imagenet/val/' path2 = './Originset_test/' with open('/work/imagenet/meta/val.txt') as f: tmp = f.readlines() label_val = {} for line in tmp: label_val[line.split(' ')[0]] = int(line.split(' ')[1].split('\n')[0])+1 example = pandas.read_csv('/work/adv/toolkit/dataset/dev_dataset.csv') keys = np.array(label_val.keys()) values = np.array(label_val.values()) i_cum =0 namelist = [] # info_list = np.zeros([n_repeat*1000,12]).astype('str') info_list = [] for i_class in range(1,1001): if i_class in class1: n_repeat = n_per_class_test[0] else: n_repeat = n_per_class_test[1] filelist = keys[values == i_class] randid = np.random.permutation(len(filelist))[:n_repeat] chosen_im = np.array(filelist)[randid] fullimpath = [os.path.join(imagenet_path,f) for f in chosen_im] labels = i_class for i in range(n_repeat): target_class = labels while target_class==labels: target_class = np.random.randint(1000) info_list.append([chosen_im[i].split('.')[0],0,0,0,1,1,labels,target_class,0,0,0,0]) # info_list[i_cum] = np.array([chosen_im[i].split('.')[0],0,0,0,1,1,labels,target_class,0,0,0,0]) namelist.append(fullimpath[i]) i_cum += 1 newpd = pandas.DataFrame(info_list) newpd.columns = example.columns newpd.to_csv('dev_dataset_test.csv') label1 = pandas.read_csv('dev_dataset.csv') label1 = np.array([label1['ImageId'],label1['TrueLabel']]) label2 = pandas.read_csv('dev_dataset_test.csv') label2 = 
np.array([label2['ImageId'],label2['TrueLabel']]) tmp = np.concatenate([label1,label2],1).T labels = {} for key,value in tmp: labels[key] = value np.save('utils/labels.npy',labels) names = label2[0] values = label2[1] allnames = [] for i in range(1,1001): class_names = names[values==i][:10] allnames.append(class_names) allnames = np.concatenate(allnames) np.save('utils/dataset1_test_split.npy',allnames) allnames = [] class2 = np.load('utils/dataset2_valclass.npy') class2 = [label_mapping[name] for name in class2] for i in class2: class_names = names[values==i] allnames.append(class_names) allnames = np.concatenate(allnames) np.save('utils/dataset2_test_split.npy',allnames) resize_partial = partial(resize_all,namelist=namelist,path2=path2) _ = pool.map(resize_partial,range(len(namelist))) """ Explanation: generate the testing set the test set is extracted from the validation set of the imagenet End of explanation """
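Both generation loops above pick an adversarial target class by redrawing until it differs from the true label. Isolated, that rejection step looks like the sketch below — draw_target is my own helper name (the notebook inlines the while loop with np.random.randint):

```python
import random

def draw_target(true_label, n_classes=1000, rng=random):
    # redraw until the target class differs from the true label,
    # mirroring the while-loop in the dataset-generation code
    target = true_label
    while target == true_label:
        target = rng.randint(0, n_classes - 1)
    return target

for lbl in (0, 7, 999):
    t = draw_target(lbl)
    assert t != lbl
    assert 0 <= t < 1000
```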
xlbaojun/Note-jupyter
05其他/pandas文档-zh-master/.ipynb_checkpoints/数据结构的内置方法-checkpoint.ipynb
gpl-2.0
import numpy as np import pandas as pd index = pd.date_range('1/1/2000', periods=8) s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=['A', 'B', 'C']) wp = pd.Panel(np.random.randn(2,5,4), items=['Item1', 'Item2'], major_axis=pd.date_range('1/1/2000',periods=5), minor_axis=['A', 'B', 'C', 'D']) """ Explanation: 数据结构的内置方法 这一节介绍常用的pandas数据结构内置方法。很重要的一节。 创建本节要用到的数据结构。 End of explanation """ long_series = pd.Series(np.random.randn(1000)) long_series.head() long_series.tail(3) """ Explanation: Head() Tail() 想要预览Series或DataFrame对象,可以使用head()和tail()方法。默认显示5行数据,你也可以自己设置显示的行数。 End of explanation """ df[:2] df.columns = [x.lower() for x in df.columns] #将列名重置为小写 df """ Explanation: 属性和 ndarray pandas对象有很多属性,你可以通过这些属性访问数据。 shape: 显示对象的维度,同ndarray 坐标label Series: index DataFrame: index(行)和columns Panel: items, major_axis and minor_axis 可以通过属性进行安全赋值。 End of explanation """ s.values df.values type(df.values) wp.values """ Explanation: 只想得到对象中的数据而忽略index和columns,使用values属性就可以 End of explanation """ df = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), 'three' : pd.Series(np.random.randn(3), index=['b', 'c', 'd'])}) df row = df.ix[1] row column = df['two'] column df.sub(row, axis='columns') df.sub(row, axis=1) df.sub(row, axis='index') df.sub(row, axis=0) """ Explanation: 如果DataFrame或Panel对象的数据类型相同(比如都是 int64),修改object.values相当于直接修改原对象的值。如果数据类型不相同,则根本不能对values属性返回值进行赋值。 注意: 如果对象内数据类型不同,values返回的ndarray的dtype将是能够兼容所有数据类型的类型。比如,有的列数据是int,有的列数据是float,.values返回的ndarray的dtype将是float。 加速的操作 pandas从0.11.0版本开始使用numexpr库对二值数值类型操作加速,用bottleneck库对布尔操作加速。 加速效果对大数据尤其明显。 这里有一个速度的简单对比,使用100,000行* 100列的DataFrame: 所以,在安装pandas后也要顺便安装numexpr, bottleneck。 灵活的二元运算 在所有的pandas对象之间的二元运算中,大家最感兴趣的一般是下面两个: * 高维数据结构(比如DataFrame)和低维数据结构(比如Series)之间计算时的广播(broadcasting)行为 * 计算时有缺失值 广播 DataFrame对象内置add(),sub(),mul(),div()以及radd(), 
rsub(),...等方法。 至于广播计算,Series的输入是最有意思的。 End of explanation """ df df2 = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), 'three' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd'])}) df2 df + df2 df.add(df2, fill_value=0) #注意['a', 'three']不是NaN """ Explanation: 填充缺失值 在Series和DataFrame中,算术运算方法(比如add())有一个fill_value参数,含义很明显,计算前用一个值来代替缺失值,然后再参与运算。注意,如果参与运算的两个object同一位置(同行同列)都是NaN,fill_value不起作用,计算结果还是NaN。 看例子: End of explanation """ df.gt(df2) df2.ne(df) """ Explanation: 灵活的比较操作 pandas引入了二元比较运算方法:eq, ne, lt, gt, le。 End of explanation """ df>0 (df>0).all() #与操作 (df > 0).any()#或操作 """ Explanation: 意思操作返回一个和输入对象同类型的对象,值类型为bool,返回结果可以用于检索。 布尔降维 Boolean Reductions pandas提供了三个方法(any(), all(), bool())和一个empty属性来对布尔结果进行降维。 End of explanation """ (df > 0).any().any() """ Explanation: 同样可以对降维后的结果再进行降维。 End of explanation """ df.empty pd.DataFrame(columns=list('ABC')).empty """ Explanation: 使用empty属性检测一个pandas对象是否为空。 End of explanation """ pd.Series([True]).bool() pd.Series([False]).bool() pd.DataFrame([[True]]).bool() pd.DataFrame([[False]]).bool() """ Explanation: 对于只含有一个元素的pandas对象,对其进行布尔检测,使用bool(): End of explanation """ df + df == df*2 (df+df == df*2).all() """ Explanation: 比较对象是否相等 一个问题通常有多种解法。一个最简单的例子:df+df和df*2。为了检测两个计算结果是否相等,你可能想到:(df+df == df*2).all(),然而,这样计算得到的结果是False: End of explanation """ np.nan == np.nan """ Explanation: 为什么df + df == df*2 返回的结果含有False?因为NaN和NaN比较厚结果为False! 
End of explanation """ (df+df).equals(df*2) """ Explanation: 还好pandas提供了equals()方法解决上面NaN之间不想等的问题。 End of explanation """ df1 = pd.DataFrame({'c':['f',0,np.nan]}) df1 df2 = pd.DataFrame({'c':[np.nan, 0, 'f']}, index=[2,1,0]) df2 df1.equals(df2) df1.equals(df2.sort_index()) #对df2的索引排序,然后再比较 """ Explanation: 注意: 在使用equals()方法进行比较时,两个对象如果数据不一致必为False。 End of explanation """ pd.Series(['foo', 'bar', 'baz']) == 'foo' pd.Index(['foo', 'bar', 'baz']) == 'foo' """ Explanation: 不同类型的对象之间 逐元素比较 你可以直接对pandas对象和一个常量值进行逐元素比较: End of explanation """ pd.Series(['foo', 'bar', 'baz']) == pd.Index(['foo', 'bar', 'qux']) pd.Series(['foo', 'bar', 'baz']) == np.array(['foo', 'bar', 'qux']) pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar']) #长度不相同 """ Explanation: 不同类型的对象(比如pandas数据结构、numpy数组)之间进行逐元素的比较也是没有问题的,前提是两个对象的shape要相同。 End of explanation """ np.array([1,2,3]) == np.array([2]) np.array([1, 2, 3]) == np.array([1, 2]) """ Explanation: 但要知道不同shape的numpy数组之间是可以直接比较的!因为广播!即使无法广播,也不会Error而是返回False。 End of explanation """ df1 = pd.DataFrame({'A' : [1., np.nan, 3., 5., np.nan], 'B' : [np.nan, 2., 3., np.nan, 6.]}) df1 df2 = pd.DataFrame({'A' : [5., 2., 4., np.nan, 3., 7.], 'B' : [np.nan, np.nan, 3., 4., 6., 8.]}) df2 df1.combine_first(df2) """ Explanation: combine_first() 看一下例子: End of explanation """ combiner = lambda x,y: np.where(pd.isnull(x), y,x) df1.combine(df2, combiner) """ Explanation: 解释: 对于df1中NaN的元素,用df2中对应位置的元素替换! 
DataFrame.combine() DataFrame.combine()方法接收一个DF对象和一个combiner方法。 End of explanation """ df df.mean() #axis=0, 计算每一列的平均值 df.mean(1) #计算每一行的平均值 """ Explanation: 统计相关 的方法 Series, DataFrame和Panel内置了许多计算统计相关指标的方法。这些方法大致分为两类: * 返回低维结果,比如sum(),mean(),quantile() * 返回同原对象同样大小的对象,比如cumsum(), cumprod() 总体来说,这些方法接收一个坐标轴参数: * Series不需要坐标轴参数 * DataFrame 默认axis=0(index), axis=1(columns) * Panel 默认axis=1(major), axis=0(items), axis=2(minor) End of explanation """ df.sum(0, skipna=False) df.sum(axis=1, skipna=True) """ Explanation: 所有的这些方法都有skipna参数,含义是计算过程中是否剔除缺失值,skipna默认值为True。 End of explanation """ ts_stand = (df-df.mean())/df.std() ts_stand.std() xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0) xs_stand.std(1) """ Explanation: 这些函数可以参与算术和广播运算。 比如: End of explanation """ df.cumsum() """ Explanation: 注意cumsum() cumprod()方法 保留NA值的位置。 End of explanation """ series = pd.Series(np.random.randn(500)) series[20:500]=np.nan series[10:20]=5 series.nunique() series """ Explanation: 下面列出常用的方法及其描述。提醒每一个方法都有一个level参数用于具有层次索引的对象。 | 方法 | 描述 | | ------------- |:-------------:| | count | 沿着坐标轴统计非空的行数| | sum | 沿着坐标轴取加和| | mean | 沿着坐标轴求均值| |mad|沿着坐标轴计算平均绝对偏差| |median|沿着坐标轴计算中位数| |min|沿着坐标轴取最小值| |max|沿着坐标轴取最大值| |mode|沿着坐标轴取众数| |abs|计算每一个值的绝对值| |prod|沿着坐标轴求乘积| |std|沿着坐标轴计算标准差| |var|沿着坐标轴计算无偏方差| |sem|沿着坐标轴计算标准差| |skew|沿着坐标轴计算样本偏斜| |kurt|沿着坐标轴计算样本峰度| |quantile|沿着坐标轴计算样本分位数,单位%| |cumsum|沿着坐标轴计算累加和| |cumprod|沿着坐标轴计算累积乘| |cummax|沿着坐标轴计算累计最大| |cummin|沿着坐标轴计算累计最小| Note:所有需要沿着坐标轴计算的方法,默认axis=0,即将方法应用到每一列数据上。 Series还有一个nunique()方法返回非空数值 组成的集合的大小。 End of explanation """ series = pd.Series(np.random.randn(1000)) series[::2]=np.nan series.describe() frame = pd.DataFrame(np.random.randn(1000, 5), columns=['a', 'b', 'c', 'd', 'e']) frame.ix[::2]=np.nan frame.describe() """ Explanation: descrieb(), 数据摘要 describe()方法非常有用,它计算数据的各种常用统计指标(比如平均值、标准差等),计算时不包括NA。拿到数据首先要有大概的了解,使用describe()方法就对了。 End of explanation """ series.describe(percentiles=[.05, .25, .75, .95]) """ Explanation: 默认describe()只包含25%, 50%, 
75%, 也可以通过percentiles参数进行指定。 End of explanation """ s = pd.Series(['a', 'a', 'b', 'b', 'a', 'a', np.nan, 'c', 'd', 'a']) s.describe() """ Explanation: 如果Series内数据是非数值类型,describe()也能给出一定的统计结果 End of explanation """ frame = pd.DataFrame({'a':['Yes', 'Yes', 'NO', 'No'], 'b':range(4)}) frame.describe() """ Explanation: 如果DataFrame对象有的列是数值类型,有的列不是数值类型,describe()仅对数值类型的列进行计算。 End of explanation """ frame.describe(include=['object']) #只对非数值列进行统计计算 frame.describe(include=['number']) frame.describe(include='all')#'all'不是列表 """ Explanation: 如果非要知道非数值列的统计指标呢?describe提供了include参数,取值范围{'object', 'number', 'all'}。 看一下例子, 注意'object'和'number'都是在列表中,而'all'不需要放在列表中: End of explanation """ s1 = pd.Series(np.random.randn(5)) s1 s1.idxmin(), s1.idxmax() #最小值:-0.296405, 最大值:1.735420 df1 = pd.DataFrame(np.random.randn(5,3), columns=list('ABC')) df1 df1.idxmin(axis=0) df1.idxmax(axis=1) """ Explanation: 最大/最小值对应的索引值 Series和DataFrame内置的idxmin() idxmax()方法求得 最小值、最大值对应的索引值,看一下例子: End of explanation """ df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=['A'], index=list('edcba')) df3 df3['A'].idxmin() """ Explanation: 如果多个数值都是最大值或最小值,idxmax() idxmin()返回最大值、最小值第一次出现对应的索引值 End of explanation """ data = np.random.randint(0, 7, size=50) data s = pd.Series(data) s.value_counts() pd.value_counts(data) #也是全局方法 """ Explanation: 实际上,idxmin和idxmax就是NumPy中的argmin和argmax。 value_counts() 数值计数 value_counts()计算一维度数据结构的直方图。 End of explanation """ s5 = pd.Series([1,1,3,3,3,5,5,7,7,7]) s5.mode() df5 = pd.DataFrame({"A": np.random.randint(0, 7, size=50), "B": np.random.randint(-10, 15, size=50)}) df5 df5.mode() """ Explanation: 虽然前面介绍过mode()方法了,看两个例子吧: End of explanation """ arr = np.random.randn(20) factor = pd.cut(arr, 4) factor factor = pd.cut(arr, [-5, -1, 0, 1, 5]) #输入 离散区间 factor """ Explanation: 区间离散化 cut() qcut()方法可以对连续数据进行离散化 End of explanation """ arr = np.random.randn(30) factor = pd.qcut(arr, [0, .25, .5, .75, 1]) factor pd.value_counts(factor) """ Explanation: qcut()方法计算样本的分位数,比如我们可以将正态分布的数据 
进行四分位数离散化: End of explanation """ arr = np.random.randn(20) factor = pd.cut(arr, [-np.inf, 0, np.inf]) factor """ Explanation: 离散区间也可以用极限定义 End of explanation """ #f, g 和h是三个方法,接收DataFrame对象,返回DataFrame对象 f(g(h(df), arg1=1), arg2=2, arg3=3) """ Explanation: 函数应用 如果你想用自己写的方法或其他库方法操作pandas对象,你应该知道下面的三种方式。 具体选择哪种方式取决于你是想操作整个DataFrame对象还是DataFrame对象的某几行或某几列,或者逐元素操作。 管道 pipe() 基于列或行的函数引用 apply() 对DataFrame对象逐元素计算 applymap() 对管道 DataFrame和Series当然能够作为参数传入方法。然而,如果涉及到多个方法的序列调用,推荐使用pipe()。看一下例子: End of explanation """ (df.pipe(h).pipe(g, arg1=1).pipe(f, arg2=2, arg3=3)) """ Explanation: 上面一行代码推荐用下面的等价写法: End of explanation """ df.apply(np.mean) df.apply(np.mean, axis=1) df.apply(lambda x: x.max() - x.min()) df.apply(np.cumsum) df.apply(np.exp) """ Explanation: 注意 f g h三个方法中DataFrame都是作为第一个参数。如果DataFrame作为第二个参数呢?方法是为pipe提供(callable, data_keyword),pipe会自动调用DataFrame对象。 比如,使用statsmodels处理回归问题,他们的API期望第一个参数是公式,第二个参数是DataFrame对象data。我们使用pipe传递(sm.poisson, 'data'): pipe灵感来自于Unix中伟大的艺术:管道。pandas中pipe()的实现很简洁,推荐阅读源代码pd.DataFrame.pipe 基于行或者列的函数应用 任意函数都可以直接对DataFrame或Panel某一坐标轴进行直接操纵,只需要使用apply()方法即可,同描述性统计方法一样,apply()方法接收axis参数。 End of explanation """ tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'], index=pd.date_range('1/1/2000', periods=1000)) tsdf tsdf.apply(lambda x:x.idxmax()) """ Explanation: 灵活运用apply()方法可以统计出数据集的很多特性。比如,假设我们希望从数据中抽取每一列最大值的索引值。 End of explanation """ def subtract_and_divide(x, sub, divide=1): return (x - sub)/divide df.apply(subtract_and_divide, args=(5,),divide=3) """ Explanation: apply()方法当然支持接收其他参数了,比如下面的例子: End of explanation """ df df.apply(pd.Series.interpolate) """ Explanation: 另一个有用的特性是对DataFrame对象传递Series方法,然后针对DF对象的每一列或每一行执行 Series内置的方法! 
End of explanation """ df4 = pd.DataFrame(np.random.randn(4, 3),index=['a','b','c','d'],columns=['one', 'two', 'three']) df4 f = lambda x:len(str(x)) df4['one'].map(f) df4.applymap(f) """ Explanation: 应用逐元素操作的Python方法 既然不是所有的方法都能被向量化(接收NumPy数组,返回另一个数组或者值),但是DataFrame内置的applymap()和Series的map()方法能够接收任意的接收一个值且返回一个值的Python方法。 End of explanation """ s = pd.Series(['six', 'seven', 'six', 'seven', 'six'], index=['a', 'b', 'c', 'd', 'e']) t = pd.Series({'six':6., 'seven':7.}) s t s.map(t) """ Explanation: Series.map()还有一个功能是模仿merge(), join() End of explanation """ s = pd.Series(np.random.randn(5), index=['a','b','c','d','e']) s s.reindex(['e', 'b', 'f', 'd']) """ Explanation: 重新索引和改变label reindex()是pandas中基本的数据对其方法。其他所有依赖label对齐的方法基本都要靠reindex()实现。reindex(重新索引)意味着是沿着某条轴转换数据以匹配新设定的label。具体来说,reindex()做了三件事情: * 对数据进行排序以匹配新的labels * 如果新label对应的位置没有数据,插入缺失值NA * 可以指定调用fill填充数据。 下面是一个简单的例子: End of explanation """ df df.reindex(index=['c', 'f', 'b'], columns=['three', 'two', 'one']) """ Explanation: 对于DataFrame来说,你可以同时改变列名和索引值。 End of explanation """ rs = s.reindex(df.index) rs rs.index is df.index """ Explanation: 如果只想改变列或者索引的label,DataFrame也提供了reindex_axis()方法,接收label和axis。 End of explanation """ df2 = pd.DataFrame(np.random.randn(3, 2),index=['a','b','c'],columns=['one', 'two']) df2 df df.reindex_like(df2) """ Explanation: 上面一行代码顺便说明了Series的索引和DataFrame的索引是同一类的实例。 重新索引来和另一个对象对齐 reindex_like() 你可能想传递一个对象,使得原来对象的label和传入的对象一样,使用reindex_like()即可。 End of explanation """ s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) s1 = s[:4] s2 = s[1:] s1 s2 s1.align(s2) s1.align(s2, join='inner') #交集是 'b', 'c', 'd' """ Explanation: 使用align() 是两个对象相互对齐 align()方法是让两个对象同事对齐的最快方法。它含有join参数, * join='outer': 取得两个对象的索引并集,这也是join的默认值。 * join='left': 使用调用对象的索引值 * join='right':使用被调用对象的索引值 * join='inner': 使用两个对象的索引交集 align()方法返回一个元组,元素元素是重新索引的Series对象。 End of explanation """ df df2 = df.iloc[:5,:2] df2 df.align(df2, join='inner') df.align(df2) """ Explanation: 
对于DataFrame来说,join方法默认会应用到索引和列名。 End of explanation """ df.align(df2, join='inner', axis=0) """ Explanation: align()也含有一个axis参数,指定仅对于某一坐标轴进行对齐。 End of explanation """ df.align(df2.ix[0], axis=1) """ Explanation: DataFrame.align()同样能接收Series对象,此时axis指的是DataFrame对象的索引或列。 End of explanation """ rng = pd.date_range('1/3/2000', periods=8) ts = pd.Series(np.random.randn(8), index=rng) ts2 = ts[[0, 3, 6]] ts ts2 ts2.reindex(ts.index) ts2.reindex(ts.index, method='ffill') #索引小那一行的数值填充NaN ts2.reindex(ts.index, method='bfill') #索引大的非NaN的数值填充NaN ts2.reindex(ts.index, method='nearest') """ Explanation: 重索引时顺便填充数值 reindex()方法还有一个method参数,用于填充数值,method取值如下: * pad/ffill: 使用后面的值填充数值 * bfill/backfill: 使用前面的值填充数值 * nearest: 使用最近的索引值进行填充 以Series为例,看一下: End of explanation """ ts2.reindex(ts.index).fillna(method='ffill') """ Explanation: method参数要求索引必须是有序的:递增或递减。 除了method='nearest',其他method取值也能用fillna()方法实现: End of explanation """ ts2.reindex(ts.index, method='ffill', limit=1) ts2 ts ts2.reindex(ts.index, method='ffill', tolerance='1 day') """ Explanation: 二者的区别是:如果索引不是有序的,reindex()会报错,而fillna()和interpolate()不会检查索引是否有序。 重索引时 有条件地填充NaN limit和tolerance参数会对填充操作进行条件限制,通常限制填充的次数。 End of explanation """ df df.drop(['A'], axis=1) """ Explanation: 移除某些索引值 和reindex()方法很相似的是drop(),用于移除索引的某些取值。 End of explanation """ s s.rename(str.upper) """ Explanation: 重命名索引值 rename() 方法可以对索引值重新命名,命名方式可以是字典或Series,也可以是任意的方法。 End of explanation """ df = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), 'three' : pd.Series(np.random.randn(3), index=['b', 'c', 'd'])}) df df.rename(columns={'one' : 'foo', 'two' : 'bar'}, index={'a' : 'apple', 'b' : 'banana', 'd' : 'durian'}) """ Explanation: 唯一的要求是传入的函数调用索引值时必须有一个返回值,如果你传入的是字典或Series,要求是索引值必须是其键值。这点很好理解。 End of explanation """ s.rename('sclar-name') """ Explanation: 默认情况下修改的仅仅是副本,如果想对原对象索引值修改,inplace=True. 
在0.18.0版本中,rename方法也能修改Series.name End of explanation """ df = pd.DataFrame({'col1' : np.random.randn(3), 'col2' : np.random.randn(3)}, index=['a', 'b', 'c']) df for col in df: #产生的是列名 print col """ Explanation: 迭代操作 Iteration pandas中迭代操作依赖于具体的对象。迭代Series对象时类似迭代数组,产生的是值,迭代DataFrame或Panel对象时类似迭代字典的键值。 一句话,(for i in object)产生: * Series: 值 * DataFrame: 列名 * Panel: item名 看一下例子吧: End of explanation """ for item, frame in df.iteritems(): print item, frame """ Explanation: pandas对象也有类似字典的iteritems()方法来迭代(key, value)。 为了每一行迭代DataFrame对象,有两种方法: * iterrows(): 按照行来迭代(index, Series)。会把每一行转为Series对象。 * itertuples(): 按照行来迭代namedtuples. 这种方法比iterrows()快,大多数情况下推荐使用此方法。 警告: 迭代pandas对象通常会比较慢。所以尽量避免迭代操作,可以用以下方法替换迭代: * 数据结构的内置方法,索引或numpy方法等 * apply() * 使用cython写内循环 警告: 当迭代进行时永远不要有修改操作。 iteritems() 类似字典的接口,iteritems()对键值对进行迭代操作: * Series: (index, scalar value) * DataFrame: (column, Series) * Panel: (item, DataFrame) 看一下例子: End of explanation """ for row_index, row in df.iterrows(): print row_index, row """ Explanation: iterrows() iterrows()方法用于迭代DataFrame的每一行,返回的是索引和Series的迭代器,但要注意Series的dtype可能和原来每一行的dtype不同。 End of explanation """ for row in df.itertuples(): print row """ Explanation: itertuples() itertuples()方法迭代DataFrame每一行,返回的是namedtuple。因为返回的不是Series,所以会保留DataFrame中值的dtype。 End of explanation """ s = pd.Series(pd.date_range('20160101 09:10:12', periods=4)) s s.dt.hour s.dt.second s.dt.day """ Explanation: .dt 访问器 Series对象如果索引是datetime/period,可以用自带的.dt访问器返回日期、小时、分钟。 End of explanation """ s = pd.Series(pd.date_range('20130101', periods=4)) s s.dt.strftime('%Y/%m/%d') """ Explanation: 改变时间的格式也很方便,Series.dt.strftime() End of explanation """ s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat']) s s.str.lower() """ Explanation: ## 字符串处理方法 Series带有一系列的字符串处理, 默认不对NaN处理。 End of explanation """ unsorted_df = df.reindex(index=['a', 'd', 'c', 'b'], columns=['three', 'two', 'one']) unsorted_df unsorted_df.sort_index() unsorted_df.sort_index(ascending=False) 
unsorted_df.sort_index(axis=1) unsorted_df['three'].sort_index() # Series """ Explanation: 排序 排序方法可以分为两大类: 按照实际的值排序和按照label排序。 按照索引排序 sort_index() Series.sort_index(), DataFrame.sort_index(), 参数是ascending, axis。 End of explanation """ df1 = pd.DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]}) df1 df1.sort_values(by='two') df1.sort_values(by=['one', 'two']) """ Explanation: 按照值排序 Series.sort_values(), DataFrame.sort_values()用于按照值进行排序,参数有by End of explanation """ s[2] = np.nan s.sort_values() s.sort_values(na_position='first') #将NA放在前面 """ Explanation: 通过na_position参数处理NA值。 End of explanation """ ser = pd.Series([1,2,3]) ser.searchsorted([0, 3]) #元素0下标是0, 元素3下标是2。注意不同元素之间是独立的,所以元素3的位置是2而不是插入0后的3. ser.searchsorted([0, 4]) ser.searchsorted([1, 3], side='right') ser.searchsorted([1, 3], side='left') ser = pd.Series([3, 1, 2]) ser.searchsorted([0, 3], sorter=np.argsort(ser)) """ Explanation: searchsorted() Series.searchsorted()类似numpy.ndarray.searchsorted()。 找到元素在排好序后的位置(下标)。 End of explanation """ s = pd.Series(np.random.permutation(10)) s s.sort_values() s.nsmallest(3) s.nlargest(3) """ Explanation: 最小/最大值 Series有nsmallest() nlargest()方法能够返回最小或最大的n个值。如果Series对象很大,这两种方法会比先排序后使用head()方法快很多。 End of explanation """ df = pd.DataFrame({'a': [-2, -1, 1, 10, 8, 11, -1], 'b': list('abdceff'), 'c': [1.0, 2.0, 4.0, 3.2, np.nan, 3.0, 4.0]}) df df.nlargest(5, 'a') #列 'a'最大的3个值 df.nlargest(5, ['a', 'c']) df.nsmallest(3, 'a') df.nsmallest(5, ['a', 'c']) """ Explanation: 从v0.17.0开始,DataFrame也有了以上两个方法。 End of explanation """ df1 = pd.DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]}) df1.columns = pd.MultiIndex.from_tuples([('a','one'),('a','two'),('b','three')]) df1 df1.sort_values(by=('a','two')) """ Explanation: 多索引列 排序 如果一列是多索引,你必须指定全部每一级的索引。 End of explanation """ df = pd.DataFrame(dict(A = np.random.rand(3), B = 1, C = 'foo', D = pd.Timestamp('20010102'), E = pd.Series([1.0]*3).astype('float32'), F = False, G = pd.Series([1]*3,dtype='int8'))) df df.dtypes 
""" Explanation: 复制 copy()方法复制数据结构的值并返回一个新的对象。记住复制操作不到万不得已不使用。 比如,改变DataFrame对象值的几种方法: * inserting, deleting, modifying a column * 为索引、列 赋值 * 对于同构数据,直接使用values属性修改值。 几乎所有的方法都不对原对象进行直接修改,而是返回修改后的一个新对象!如果原对象数据被修改,肯定是你显示指定的修改操作。 ## dtypes属性 pandas对象的主要数据类型包括: float, int, bool, datetime64[ns], datetime64[ns, tz], timedelta[ns], category, object. 除此之外还有更具体的说明存储比特数的数据类型比如int64, int32。 DataFrame的dtypes属性返回一个Series对象,Series值是DF中每一列的数据类型。 End of explanation """ df['A'].dtype """ Explanation: Series同样有dtypes属性 End of explanation """ pd.Series([1,2,3,4,5,6.]) pd.Series([1,2,3,6.,'foo']) """ Explanation: 如果pandas对象的一列中有多种数据类型,dtype返回的是能兼容所有数据类型的类型,object范围最大的。 End of explanation """ df.get_dtype_counts() """ Explanation: get_dtype_counts()方法返回DataFrame中每一种数据类型的列数。 End of explanation """ df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32') df1 df1.dtypes df2 = pd.DataFrame(dict( A = pd.Series(np.random.randn(8), dtype='float16'), B = pd.Series(np.random.randn(8)), C = pd.Series(np.array(np.random.randn(8), dtype='uint8')) )) #这里是float16, uint8 df2 df2.dtypes """ Explanation: 数值数据类型可以在ndarray,Series和DataFrame中传播。 End of explanation """ pd.DataFrame([1, 2], columns=['a']).dtypes pd.DataFrame({'a': [1, 2]}).dtypes pd.DataFrame({'a': 1}, index=list(range(2))).dtypes """ Explanation: 数据类型的默认值 整型的默认值蕾西int64,浮点型的默认类型是float64,和你用的是32位还是64位的系统无关。 End of explanation """ frame = pd.DataFrame(np.array([1, 2])) #如果是在32位系统,数据类型int32 """ Explanation: Numpy中数值的具体类型则要依赖于平台。 End of explanation """ df1.dtypes df2.dtypes df1.reindex_like(df2).fillna(value=0.0).dtypes df3 = df1.reindex_like(df2).fillna(value=0.0) + df2 df3.dtypes """ Explanation: upcasting 不同类型结合时会upcast,即得到更通用的类型,看例子吧: End of explanation """ df3 df3.dtypes df3.astype('float32').dtypes """ Explanation: astype()方法 使用astype()显示的进行转型。默认会返回原对象的副本,即使数据类型不变。当然可以传递copy=False参数直接对原对象转型。 End of explanation """ df3['D'] = '1.' 
df3['E'] = '1' df3 df3.dtypes #现在'D' 'E'两列都是object类型 df3.convert_objects(convert_numeric=True).dtypes df3['D'] = df3['D'].astype('float16') df3['E'] = df3['E'].astype('int32') df3.dtypes """ Explanation: 对object类型进行转型 convert_objects()方法能对object类型进行转型。如果想转为数字,参数是convert_numeric=True。 End of explanation """ df = pd.DataFrame({'string': list('abc'), 'int64': list(range(1, 4)), 'uint8': np.arange(3, 6).astype('u1'), 'float64': np.arange(4.0, 7.0), 'bool1': [True, False, True], 'bool2': [False, True, False], 'dates': pd.date_range('now', periods=3).values, 'category': pd.Series(list("ABC")).astype('category')}) df df['tdeltas'] = df.dates.diff() df['uint64'] = np.arange(3, 6).astype('u8') df['other_dates'] = pd.date_range('20130101', periods=3).values df['tz_aware_dates'] = pd.date_range('20130101', periods=3, tz='US/Eastern') df df.dtypes """ Explanation: 基于dtype 选择列 select_dtypes()方法实现了基于列dtype的构造子集方法。 End of explanation """ df.select_dtypes(include=[bool]) df.select_dtypes(include=['bool']) df.dtypes df.select_dtypes(include=['number', 'bool'], exclude=['unsignedinteger']) """ Explanation: select_dtypes()有两个参数:include, exclude。含义是要选择的列的dtype和不选择列的dtype。 End of explanation """ df.select_dtypes(include=['object']) """ Explanation: 如果要选择字符串类型的列,必须使用object类型。 End of explanation """ def subdtypes(dtype): subs = dtype.__subclasses__() if not subs: return dtype return [dtype, [subdtypes(dt) for dt in subs]] subdtypes(np.generic) """ Explanation: 如果想要知道某种数据类型的所有子类型,比如numpy.number类型,你可以定义如下的方法: End of explanation """
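The select_dtypes() include/exclude semantics described above can be seen on a tiny frame (column names here are arbitrary; note that 'number' covers ints and floats but not bool):

```python
import pandas as pd

df = pd.DataFrame({
    "i": [1, 2, 3],            # int64
    "f": [0.1, 0.2, 0.3],      # float64
    "s": ["a", "b", "c"],      # object (strings)
    "b": [True, False, True],  # bool
})

num = df.select_dtypes(include=["number"])   # ints and floats, not bool
assert list(num.columns) == ["i", "f"]

mixed = df.select_dtypes(include=["number", "bool"], exclude=["float64"])
assert list(mixed.columns) == ["i", "b"]

strings = df.select_dtypes(include=["object"])
assert list(strings.columns) == ["s"]
```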
peap/notebooks
pycon-2015/d01t01.minecraft.ipynb
mit
from mcpi import minecraft mc = minecraft.Minecraft.create(ip, port, my_name) # send a chat message mc.postToChat("Hello, Minecraft!") # teleport mc.player.getPos() # returns a Vec3 instance; could also get pitch/orientation of player mc.player.setPos(pos_vector) # place blocks from mcpi import block mc.setBlock(x, y, z, block.STONE) mc.setBlocks(x0, y0, z0, x1, y1, z1, block.STONE) # detect blocks mc.getBlock(x, y, z) mc.getBlocks(x0, y0, z0, x1, y1, z1) """ Explanation: Exploring Minecraft with Python Kurt Grandis (@kgrandis) CEO @ Parelio 04/10/2015 The Challenge Can we create an environment that's easy and engaging enough to teach kids how to program? What is Minecraft? It's Minecraft. It's super popular. Coding & Minecraft? Console-based games aren't thaaaat exciting for kids, hence the trend toward robotics and visual programming, like with Minecraft. Raspberry Pi edition of Minecraft (Minecraft Pi Edition) has a Python API. But not everyone has a raspberry pi at home, so there's a Bukkit plugin for Minecraft: https://github.com/zhuowei/RaspberryJuice - lets you use the python API with the PC version of Minecraft - Mojang isn't building a python API, because RaspberryJuice is already out there The API End of explanation """ # clear all the things mc.setBlocks(0, 0, 4, 100, 100, 100, block.AIR) # introduce writing and using functions wall() tower(height) sphere(radius) pyramid(width, height) house() lava(x, y, z) water(x, y, z) # create your own teleporter by watching for the player stepping on a certain block, then changing their position """ Explanation: Challenges End of explanation """
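One of the challenge functions above, pyramid(), can be sketched as pure coordinate math so the shape logic is testable without a running Minecraft server. pyramid_layers is a hypothetical helper name; the commented loop at the end feeds its output to the mcpi setBlocks API shown earlier, which does require a server running RaspberryJuice:

```python
def pyramid_layers(x, y, z, base):
    """Return (x0, y0, z0, x1, y1, z1) cuboids that stack into a
    solid pyramid centred on (x, z) with its base at height y."""
    half = base // 2
    layers = []
    for level in range(half + 1):
        r = half - level  # each layer shrinks by one block per side
        layers.append((x - r, y + level, z - r, x + r, y + level, z + r))
    return layers

# With a live server, hand the cuboids to mcpi:
# for x0, y0, z0, x1, y1, z1 in pyramid_layers(0, 0, 4, 10):
#     mc.setBlocks(x0, y0, z0, x1, y1, z1, block.STONE)
```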
mne-tools/mne-tools.github.io
0.20/_downloads/caf43e32a02942fa21bbe6ad66eceb14/plot_label_from_stc.ipynb
bsd-3-clause
# Author: Luke Bloy <luke.bloy@gmail.com> # Alex Gramfort <alexandre.gramfort@inria.fr> # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.minimum_norm import read_inverse_operator, apply_inverse from mne.datasets import sample print(__doc__) data_path = sample.data_path() fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif' subjects_dir = data_path + '/subjects' subject = 'sample' snr = 3.0 lambda2 = 1.0 / snr ** 2 method = "dSPM" # use dSPM method (could also be MNE or sLORETA) # Compute a label/ROI based on the peak power between 80 and 120 ms. # The label bankssts-lh is used for the comparison. aparc_label_name = 'bankssts-lh' tmin, tmax = 0.080, 0.120 # Load data evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) inverse_operator = read_inverse_operator(fname_inv) src = inverse_operator['src'] # get the source space # Compute inverse solution stc = apply_inverse(evoked, inverse_operator, lambda2, method, pick_ori='normal') # Make an STC in the time interval of interest and take the mean stc_mean = stc.copy().crop(tmin, tmax).mean() # use the stc_mean to generate a functional label # region growing is halted at 60% of the peak value within the # anatomical label / ROI specified by aparc_label_name label = mne.read_labels_from_annot(subject, parc='aparc', subjects_dir=subjects_dir, regexp=aparc_label_name)[0] stc_mean_label = stc_mean.in_label(label) data = np.abs(stc_mean_label.data) stc_mean_label.data[data < 0.6 * np.max(data)] = 0. 
# 8.5% of original source space vertices were omitted during forward # calculation, suppress the warning here with verbose='error' func_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True, subjects_dir=subjects_dir, connected=True, verbose='error') # take first as func_labels are ordered based on maximum values in stc func_label = func_labels[0] # load the anatomical ROI for comparison anat_label = mne.read_labels_from_annot(subject, parc='aparc', subjects_dir=subjects_dir, regexp=aparc_label_name)[0] # extract the anatomical time course for each label stc_anat_label = stc.in_label(anat_label) pca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0] stc_func_label = stc.in_label(func_label) pca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0] # flip the pca so that the max power between tmin and tmax is positive pca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))]) pca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))]) """ Explanation: Generate a functional label from source estimates Threshold source estimates and produce a functional label. The label is typically the region of interest that contains high values. Here we compare the average time course in the anatomical label obtained by FreeSurfer segmentation and the average time course from the functional label. As expected the time course in the functional label yields higher values. End of explanation """ plt.figure() plt.plot(1e3 * stc_anat_label.times, pca_anat, 'k', label='Anatomical %s' % aparc_label_name) plt.plot(1e3 * stc_func_label.times, pca_func, 'b', label='Functional %s' % aparc_label_name) plt.legend() plt.show() """ Explanation: plot the time courses.... 
End of explanation """ brain = stc_mean.plot(hemi='lh', subjects_dir=subjects_dir) brain.show_view('lateral') # show both labels brain.add_label(anat_label, borders=True, color='k') brain.add_label(func_label, borders=True, color='b') """ Explanation: plot brain in 3D with PySurfer if available End of explanation """
stubz/deep-learning
embeddings/Skip-Grams-Solution.ipynb
mit
import time import numpy as np import tensorflow as tf import utils """ Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094.
Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() """ Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation """ words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) """ Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. 
End of explanation """ vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] """ Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation """ from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] """ Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. Assign the subsampled data to train_words. End of explanation """ def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. 
''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words) """ Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window. End of explanation """ def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y """ Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair.
This is a generator function by the way, helps save memory. End of explanation """ train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels') """ Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1. End of explanation """ n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs) """ Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation """ # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) """ Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation """ with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # pick 8 samples each from the ranges (0,100) and (1000,1100); lower ids imply more frequent words valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) """ Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation """ with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) """ Explanation: Restore the trained network if you need to: End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) """ Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure.
Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation """
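As a quick numeric check of the subsampling formula from earlier, $P(w_i) = 1 - \sqrt{t/f(w_i)}$ with threshold $t = 10^{-5}$; the 6% frequency used for "the" below is an illustrative figure, not one measured from text8:

```python
import math

def p_drop(freq, t=1e-5):
    """Discard probability P(w_i) = 1 - sqrt(t / f(w_i))."""
    return 1 - math.sqrt(t / freq)

# A very frequent word is dropped almost every time it appears...
print(round(p_drop(0.06), 3))   # 0.987
# ...while a word exactly at the threshold frequency is never dropped.
print(p_drop(1e-5))             # 0.0
```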
Kaggle/learntools
notebooks/deep_learning_intro/raw/ex1.ipynb
apache-2.0
# Setup plotting import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') # Set Matplotlib defaults plt.rc('figure', autolayout=True) plt.rc('axes', labelweight='bold', labelsize='large', titleweight='bold', titlesize=18, titlepad=10) # Setup feedback system from learntools.core import binder binder.bind(globals()) from learntools.deep_learning_intro.ex1 import * """ Explanation: Introduction In the tutorial we learned about the building blocks of neural networks: linear units. We saw that a model of just one linear unit will fit a linear function to a dataset (equivalent to linear regression). In this exercise, you'll build a linear model and get some practice working with models in Keras. Before you get started, run the code cell below to set everything up. End of explanation """ import pandas as pd red_wine = pd.read_csv('../input/dl-course-data/red-wine.csv') red_wine.head() """ Explanation: The Red Wine Quality dataset consists of physiochemical measurements from about 1600 Portuguese red wines. Also included is a quality rating for each wine from blind taste-tests. First, run the next cell to display the first few rows of this dataset. End of explanation """ red_wine.shape # (rows, columns) """ Explanation: You can get the number of rows and columns of a dataframe (or a Numpy array) with the shape attribute. End of explanation """ # YOUR CODE HERE input_shape = ____ # Check your answer q_1.check() #%%RM_IF(PROD)%% input_shape = [12] q_1.assert_check_failed() #%%RM_IF(PROD)%% input_shape = 11 q_1.assert_check_failed() #%%RM_IF(PROD)%% input_shape = [11] q_1.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_1.hint() #_COMMENT_IF(PROD)_ q_1.solution() """ Explanation: 1) Input shape How well can we predict a wine's perceived quality from the physiochemical measurements? The target is 'quality', and the remaining columns are the features. 
How would you set the input_shape parameter for a Keras model on this task? End of explanation """ from tensorflow import keras from tensorflow.keras import layers # YOUR CODE HERE model = ____ # Check your answer q_2.check() #%%RM_IF(PROD)%% from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Dense(units=1) ]) q_2.assert_check_failed() #%%RM_IF(PROD)%% from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Dense(units=2, input_shape=[11]) ]) q_2.assert_check_failed() #%%RM_IF(PROD)%% from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Dense(units=1, input_shape=[11]) ]) q_2.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_2.hint() #_COMMENT_IF(PROD)_ q_2.solution() """ Explanation: 2) Define a linear model Now define a linear model appropriate for this task. Pay attention to how many inputs and outputs the model should have. End of explanation """ # YOUR CODE HERE #_UNCOMMENT_IF(PROD)_ #w, b = ____ # Check your answer q_3.check() #%%RM_IF(PROD)%% w, b = [1.0, 2.0] q_3.assert_check_failed() #%%RM_IF(PROD)%% w, b = model.get_weights() q_3.assert_check_failed() #%%RM_IF(PROD)%% w, b = model.weights q_3.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_3.hint() #_COMMENT_IF(PROD)_ q_3.solution() """ Explanation: 3) Look at the weights Internally, Keras represents the weights of a neural network with tensors. Tensors are basically TensorFlow's version of a Numpy array with a few differences that make them better suited to deep learning. One of the most important is that tensors are compatible with GPU and TPU accelerators. TPUs, in fact, are designed specifically for tensor computations. A model's weights are kept in its weights attribute as a list of tensors. Get the weights of the model you defined above.
(If you want, you could display the weights with something like: print("Weights\n{}\n\nBias\n{}".format(w, b))). End of explanation """ import tensorflow as tf import matplotlib.pyplot as plt model = keras.Sequential([ layers.Dense(1, input_shape=[1]), ]) x = tf.linspace(-1.0, 1.0, 100) y = model.predict(x) plt.figure(dpi=100) plt.plot(x, y, 'k') plt.xlim(-1, 1) plt.ylim(-1, 1) plt.xlabel("Input: x") plt.ylabel("Target y") w, b = model.weights # you could also use model.get_weights() here plt.title("Weight: {:0.2f}\nBias: {:0.2f}".format(w[0][0], b[0])) plt.show() """ Explanation: (By the way, Keras represents weights as tensors, but also uses tensors to represent data. When you set the input_shape argument, you are telling Keras the dimensions of the array it should expect for each example in the training data. Setting input_shape=[3] would create a network accepting vectors of length 3, like [0.2, 0.4, 0.6].) Optional: Plot the output of an untrained linear model The kinds of problems we'll work on through Lesson 5 will be regression problems, where the goal is to predict some numeric target. Regression problems are like "curve-fitting" problems: we're trying to find a curve that best fits the data. Let's take a look at the "curve" produced by a linear model. (You've probably guessed that it's a line!) We mentioned that before training a model's weights are set randomly. Run the cell below a few times to see the different lines produced with a random initialization. (There's no coding for this exercise -- it's just a demonstration.) End of explanation """
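The single Dense(1) unit in this exercise computes y = w·x + b. A minimal NumPy sketch of that arithmetic with made-up weights, not the actual Keras weights from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(11, 1))   # one weight per input feature, matching the 11 wine features
b = np.zeros(1)                # single bias, initialised to zero as Keras does

x = rng.normal(size=(1, 11))   # one example with 11 features
y = x @ w + b                  # the computation layers.Dense(1) performs

print(y.shape)  # (1, 1): one prediction for one example
```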
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/sdk/SDK_AutoML_Video_Action_Recognition.ipynb
apache-2.0
!pip3 uninstall -y google-cloud-aiplatform !pip3 install google-cloud-aiplatform import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Feedback or issues? For any feedback or questions, please open an issue. Vertex SDK for Python: AutoML Video Action Recognition Example To use this Jupyter notebook, copy the notebook to a Google Cloud Notebooks instance with Tensorflow installed and open it. You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Jupyter automatically displays the return value of the last line in each cell. For more information about running notebooks in Google Cloud Notebook, see the Google Cloud Notebook guide. This notebook demonstrate how to create an AutoML Video Action Recognition Model, with a Vertex AI video dataset, and how to serve the model for batch prediction. It will require you provide a bucket where the dataset will be stored. Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK. Install Vertex SDK for Python After the SDK installation the kernel will be automatically restarted. End of explanation """ MY_PROJECT = "YOUR PROJECT ID" MY_STAGING_BUCKET = "gs://YOUR BUCKET" # bucket should be in same region as ucaip import sys if "google.colab" in sys.modules: import os from google.colab import auth auth.authenticate_user() os.environ["GOOGLE_CLOUD_PROJECT"] = MY_PROJECT """ Explanation: Enter Your Project and GCS Bucket Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. 
End of explanation """ TASK_TYPE = "mbsdk_automl-video-training" PREDICTION_TYPE = "action_recognition" MODEL_TYPE = "CLOUD" TASK_NAME = f"{TASK_TYPE}_{PREDICTION_TYPE}" BUCKET_NAME = MY_STAGING_BUCKET.split("gs://")[1] GCS_PREFIX = TASK_NAME print(f"Bucket Name: {BUCKET_NAME}") print(f"Task Name: {TASK_NAME}") """ Explanation: Set Your Task Name, and GCS Prefix If you want to centeralize all input and output files under the gcs location. End of explanation """ automl_video_demo_train_data = "gs://automl-video-demo-data/hmdb_golf_swing_all.csv" automl_video_demo_batch_prediction_data = ( "gs://automl-video-demo-data/hmdb_golf_swing_predict.jsonl" ) """ Explanation: HMDB: a large human motion database We prepared some training data and prediction data for the demo using the HMDB Dataset. The HMDB Dataset is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/ For more information about this dataset please visit: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/ End of explanation """ gcs_source_train = f"gs://{BUCKET_NAME}/{TASK_NAME}/data/video_action_recognition.csv" !gsutil cp $automl_video_demo_train_data $gcs_source_train """ Explanation: Copy AutoML Video Demo Train Data for Creating Managed Dataset End of explanation """ from google.cloud import aiplatform aiplatform.init(project=MY_PROJECT, staging_bucket=MY_STAGING_BUCKET) """ Explanation: Run AutoML Video Training with Managed Video Dataset Initialize Vertex SDK for Python Initialize the client for Vertex AI. End of explanation """ dataset = aiplatform.VideoDataset.create( display_name=f"temp-{TASK_NAME}", gcs_source=gcs_source_train, import_schema_uri=aiplatform.schema.dataset.ioformat.video.action_recognition, sync=False, ) """ Explanation: Create a Dataset on Vertex AI We will now create a Vertex AI video dataset using the previously prepared csv files. 
Choose one of the options below. Option 1: Using MBSDK VideoDataset class End of explanation """ dataset.wait() """ Explanation: Option 2: Using MBSDK Dataset class dataset = aiplatform.Dataset.create( display_name=f'temp-{TASK_NAME}', metadata_schema_uri=aiplatform.schema.dataset.metadata.video, gcs_source=gcs_source_train, import_schema_uri=aiplatform.schema.dataset.ioformat.video.action_recognition, sync=False ) End of explanation """ job = aiplatform.AutoMLVideoTrainingJob( display_name=f"temp-{TASK_NAME}", prediction_type=PREDICTION_TYPE, model_type=MODEL_TYPE, ) """ Explanation: Launch a Training Job and Create a Model on Vertex AI Config a Training Job End of explanation """ model = job.run( dataset=dataset, training_fraction_split=0.8, test_fraction_split=0.2, model_display_name=f"temp-{TASK_NAME}", sync=False, ) model.wait() """ Explanation: Run the Training Job End of explanation """ gcs_source_batch_prediction = f"gs://{BUCKET_NAME}/{TASK_NAME}/data/video_action_recognition_batch_prediction.jsonl" gcs_destination_prefix_batch_prediction = ( f"gs://{BUCKET_NAME}/{TASK_NAME}/batch_prediction" ) !gsutil cp $automl_video_demo_batch_prediction_data $gcs_source_batch_prediction batch_predict_job = model.batch_predict( job_display_name=f"temp-{TASK_NAME}", gcs_source=gcs_source_batch_prediction, gcs_destination_prefix=gcs_destination_prefix_batch_prediction, sync=False, ) batch_predict_job.wait() bp_iter_outputs = batch_predict_job.iter_outputs() prediction_results = list() for blob in bp_iter_outputs: if blob.name.split("/")[-1].startswith("prediction"): prediction_results.append(blob.name) import json import tensorflow as tf tags = list() for prediction_result in prediction_results: gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}" with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile: for line in gfile.readlines(): line = json.loads(line) break line """ Explanation: Batch Prediction Job on the Model Copy AutoML Video Demo Prediction 
Data for Creating Batch Prediction Job End of explanation """
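The output-collection loop above reads newline-delimited JSON prediction files from Cloud Storage. A minimal local sketch of parsing such files is below; the record fields used here are hypothetical stand-ins, not the documented AutoML output schema:

```python
import json

def parse_jsonl(text):
    """Parse newline-delimited JSON (one prediction record per line)."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Hypothetical records; the real schema of the AutoML batch output may differ.
sample = "\n".join([
    json.dumps({"instance": {"content": "gs://bucket/video1.mp4"},
                "prediction": [{"label": "jump", "confidence": 0.91}]}),
    json.dumps({"instance": {"content": "gs://bucket/video2.mp4"},
                "prediction": [{"label": "run", "confidence": 0.77}]}),
])
records = parse_jsonl(sample)
```

In the notebook itself, `text` would be the contents read through `tf.io.gfile.GFile` for each blob whose name starts with "prediction".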
tensorflow/docs-l10n
site/en-snapshot/probability/examples/Multiple_changepoint_detection_and_Bayesian_model_selection.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ import numpy as np import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp from tensorflow_probability import distributions as tfd from matplotlib import pylab as plt %matplotlib inline import scipy.stats """ Explanation: Multiple changepoint detection and Bayesian model selection Bayesian model selection <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Multiple_changepoint_detection_and_Bayesian_model_selection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Multiple_changepoint_detection_and_Bayesian_model_selection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Multiple_changepoint_detection_and_Bayesian_model_selection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a 
href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Multiple_changepoint_detection_and_Bayesian_model_selection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Imports End of explanation """ true_rates = [40, 3, 20, 50] true_durations = [10, 20, 5, 35] observed_counts = tf.concat( [tfd.Poisson(rate).sample(num_steps) for (rate, num_steps) in zip(true_rates, true_durations)], axis=0) plt.plot(observed_counts) """ Explanation: Task: changepoint detection with multiple changepoints Consider a changepoint detection task: events happen at a rate that changes over time, driven by sudden shifts in the (unobserved) state of some system or process generating the data. For example, we might observe a series of counts like the following: End of explanation """ num_states = 4 initial_state_logits = tf.zeros([num_states]) # uniform distribution daily_change_prob = 0.05 transition_probs = tf.fill([num_states, num_states], daily_change_prob / (num_states - 1)) transition_probs = tf.linalg.set_diag(transition_probs, tf.fill([num_states], 1 - daily_change_prob)) print("Initial state logits:\n{}".format(initial_state_logits)) print("Transition matrix:\n{}".format(transition_probs)) """ Explanation: These could represent the number of failures in a datacenter, number of visitors to a webpage, number of packets on a network link, etc. Note it's not entirely apparent how many distinct system regimes there are just from looking at the data. Can you tell where each of the three switchpoints occurs? Known number of states We'll first consider the (perhaps unrealistic) case where the number of unobserved states is known a priori. Here, we'd assume we know there are four latent states. 
We model this problem as a switching (inhomogeneous) Poisson process: at each point in time, the number of events that occur is Poisson distributed, and the rate of events is determined by the unobserved system state $z_t$: $$x_t \sim \text{Poisson}(\lambda_{z_t})$$ The latent states are discrete: $z_t \in \{1, 2, 3, 4\}$, so $\lambda = [\lambda_1, \lambda_2, \lambda_3, \lambda_4]$ is a simple vector containing a Poisson rate for each state. To model the evolution of states over time, we'll define a simple transition model $p(z_t | z_{t-1})$: let's say that at each step we stay in the previous state with some probability $p$, and with probability $1-p$ we transition to a different state uniformly at random. The initial state is also chosen uniformly at random, so we have: $$ \begin{align} z_1 &\sim \text{Categorical}\left(\left\{\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}\right\}\right)\\ z_t | z_{t-1} &\sim \text{Categorical}\left(\left\{\begin{array}{cc}p & \text{if } z_t = z_{t-1} \\ \frac{1-p}{4-1} & \text{otherwise}\end{array}\right\}\right) \end{align}$$ These assumptions correspond to a hidden Markov model with Poisson emissions. We can encode them in TFP using tfd.HiddenMarkovModel. First, we define the transition matrix and the uniform prior on the initial state: End of explanation """ num_states = 4
initial_state_logits = tf.zeros([num_states]) # uniform distribution
daily_change_prob = 0.05
transition_probs = tf.fill([num_states, num_states], daily_change_prob / (num_states - 1))
transition_probs = tf.linalg.set_diag(transition_probs, tf.fill([num_states], 1 - daily_change_prob))
print("Initial state logits:\n{}".format(initial_state_logits))
print("Transition matrix:\n{}".format(transition_probs))
""" Explanation: Next, we build a tfd.HiddenMarkovModel distribution, using a trainable variable to represent the rates associated with each system state. 
We parameterize the rates in log-space to ensure they are positive-valued. End of explanation """ rate_prior = tfd.LogNormal(5, 5) def log_prob(): return (tf.reduce_sum(rate_prior.log_prob(tf.math.exp(trainable_log_rates))) + hmm.log_prob(observed_counts)) losses = tfp.math.minimize( lambda: -log_prob(), optimizer=tf.optimizers.Adam(learning_rate=0.1), num_steps=100) plt.plot(losses) plt.ylabel('Negative log marginal likelihood') rates = tf.exp(trainable_log_rates) print("Inferred rates: {}".format(rates)) print("True rates: {}".format(true_rates)) """ Explanation: Finally, we define the model's total log density, including a weakly-informative LogNormal prior on the rates, and run an optimizer to compute the maximum a posteriori (MAP) fit to the observed count data. End of explanation """ # Runs forward-backward algorithm to compute marginal posteriors. posterior_dists = hmm.posterior_marginals(observed_counts) posterior_probs = posterior_dists.probs_parameter().numpy() """ Explanation: It worked! Note that the latent states in this model are identifiable only up to permutation, so the rates we recovered are in a different order, and there's a bit of noise, but generally they match pretty well. Recovering the state trajectory Now that we've fit the model, we might want to reconstruct which state the model believes the system was in at each timestep. This is a posterior inference task: given the observed counts $x_{1:T}$ and model parameters (rates) $\lambda$, we want to infer the sequence of discrete latent variables, following the posterior distribution $p(z_{1:T} | x_{1:T}, \lambda)$. In a hidden Markov model, we can efficiently compute marginals and other properties of this distribution using standard message-passing algorithms. In particular, the posterior_marginals method will efficiently compute (using the forward-backward algorithm) the marginal probability distribution $p(Z_t = z_t | x_{1:T})$ over the discrete latent state $Z_t$ at each timestep $t$. 
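The forward-backward pass that posterior_marginals performs can be sketched in plain NumPy. This is a simplified, per-step-normalized implementation for illustration (TFP's version is vectorized and works in log-space), together with the sticky transition matrix described earlier:

```python
import numpy as np
from math import lgamma

def forward_backward(observed, rates, transition, initial):
    """Marginal posteriors p(z_t | x_{1:T}) for a Poisson-emission HMM.

    observed: (T,) counts; rates: (K,) Poisson rates;
    transition: (K, K) row-stochastic matrix; initial: (K,) prior.
    """
    T, K = len(observed), len(rates)
    # Emission likelihoods, rescaled per time step for numerical stability
    # (a per-timestep constant factor cancels after normalization).
    log_emit = (observed[:, None] * np.log(rates)[None, :] - rates[None, :]
                - np.array([lgamma(int(x) + 1) for x in observed])[:, None])
    emit = np.exp(log_emit - log_emit.max(axis=1, keepdims=True))
    alpha = np.zeros((T, K))   # normalized forward messages
    beta = np.ones((T, K))     # normalized backward messages
    alpha[0] = initial * emit[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ transition) * emit[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = transition @ (beta[t + 1] * emit[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Transition model matching the text: stay with prob 1-p, else move uniformly.
num_states, p_change = 4, 0.05
trans = np.full((num_states, num_states), p_change / (num_states - 1))
np.fill_diagonal(trans, 1 - p_change)
```

Feeding in the notebook's observed counts and MAP rates would reproduce, up to optimization noise, what posterior_marginals returns.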
End of explanation """ def plot_state_posterior(ax, state_posterior_probs, title): ln1 = ax.plot(state_posterior_probs, c='blue', lw=3, label='p(state | counts)') ax.set_ylim(0., 1.1) ax.set_ylabel('posterior probability') ax2 = ax.twinx() ln2 = ax2.plot(observed_counts, c='black', alpha=0.3, label='observed counts') ax2.set_title(title) ax2.set_xlabel("time") lns = ln1+ln2 labs = [l.get_label() for l in lns] ax.legend(lns, labs, loc=4) ax.grid(True, color='white') ax2.grid(False) fig = plt.figure(figsize=(10, 10)) plot_state_posterior(fig.add_subplot(2, 2, 1), posterior_probs[:, 0], title="state 0 (rate {:.2f})".format(rates[0])) plot_state_posterior(fig.add_subplot(2, 2, 2), posterior_probs[:, 1], title="state 1 (rate {:.2f})".format(rates[1])) plot_state_posterior(fig.add_subplot(2, 2, 3), posterior_probs[:, 2], title="state 2 (rate {:.2f})".format(rates[2])) plot_state_posterior(fig.add_subplot(2, 2, 4), posterior_probs[:, 3], title="state 3 (rate {:.2f})".format(rates[3])) plt.tight_layout() """ Explanation: Plotting the posterior probabilities, we recover the model's "explanation" of the data: at which points in time is each state active? End of explanation """ most_probable_states = hmm.posterior_mode(observed_counts) most_probable_rates = tf.gather(rates, most_probable_states) fig = plt.figure(figsize=(10, 4)) ax = fig.add_subplot(1, 1, 1) ax.plot(most_probable_rates, c='green', lw=3, label='inferred rate') ax.plot(observed_counts, c='black', alpha=0.3, label='observed counts') ax.set_ylabel("latent rate") ax.set_xlabel("time") ax.set_title("Inferred latent rate over time") ax.legend(loc=4) """ Explanation: In this (simple) case, we see that the model is usually quite confident: at most timesteps it assigns essentially all probability mass to a single one of the four states. Luckily, the explanations look reasonable! 
We can also visualize this posterior in terms of the rate associated with the most likely latent state at each timestep, condensing the probabilistic posterior into a single explanation: End of explanation """ max_num_states = 10 def build_latent_state(num_states, max_num_states, daily_change_prob=0.05): # Give probability exp(-100) ~= 0 to states outside of the current model. active_states_mask = tf.concat([tf.ones([num_states]), tf.zeros([max_num_states - num_states])], axis=0) initial_state_logits = -100. * (1 - active_states_mask) # Build a transition matrix that transitions only within the current # `num_states` states. transition_probs = tf.fill([num_states, num_states], 0. if num_states == 1 else daily_change_prob / (num_states - 1)) padded_transition_probs = tf.eye(max_num_states) + tf.pad( tf.linalg.set_diag(transition_probs, tf.fill([num_states], - daily_change_prob)), paddings=[(0, max_num_states - num_states), (0, max_num_states - num_states)]) return initial_state_logits, padded_transition_probs # For each candidate model, build the initial state prior and transition matrix. 
batch_initial_state_logits = [] batch_transition_probs = [] for num_states in range(1, max_num_states+1): initial_state_logits, transition_probs = build_latent_state( num_states=num_states, max_num_states=max_num_states) batch_initial_state_logits.append(initial_state_logits) batch_transition_probs.append(transition_probs) batch_initial_state_logits = tf.stack(batch_initial_state_logits) batch_transition_probs = tf.stack(batch_transition_probs) print("Shape of initial_state_logits: {}".format(batch_initial_state_logits.shape)) print("Shape of transition probs: {}".format(batch_transition_probs.shape)) print("Example initial state logits for num_states==3:\n{}".format(batch_initial_state_logits[2, :])) print("Example transition_probs for num_states==3:\n{}".format(batch_transition_probs[2, :, :])) """ Explanation: Unknown number of states In real problems, we may not know the 'true' number of states in the system we're modeling. This may not always be a concern: if you don't particularly care about the identities of the unknown states, you could just run a model with more states than you know the model will need, and learn (something like) a bunch of duplicate copies of the actual states. But let's assume you do care about inferring the 'true' number of latent states. We can view this as a case of Bayesian model selection: we have a set of candidate models, each with a different number of latent states, and we want to choose the one that is most likely to have generated the observed data. To do this, we compute the marginal likelihood of the data under each model (we could also add a prior on the models themselves, but that won't be necessary in this analysis; the Bayesian Occam's razor turns out to be sufficient to encode a preference towards simpler models). 
Unfortunately, the true marginal likelihood, which integrates over both the discrete states $z_{1:T}$ and the (vector of) rate parameters $\lambda$, $$p(x_{1:T}) = \int p(x_{1:T}, z_{1:T}, \lambda) dz d\lambda,$$ is not tractable for this model. For convenience, we'll approximate it using a so-called "empirical Bayes" or "type II maximum likelihood" estimate: instead of fully integrating out the (unknown) rate parameters $\lambda$ associated with each system state, we'll optimize over their values: $$\tilde{p}(x_{1:T}) = \max_\lambda \int p(x_{1:T}, z_{1:T}, \lambda) dz$$ This approximation may overfit, i.e., it will prefer more complex models than the true marginal likelihood would. We could consider more faithful approximations, e.g., optimizing a variational lower bound, or using a Monte Carlo estimator such as annealed importance sampling; these are (sadly) beyond the scope of this notebook. (For more on Bayesian model selection and approximations, chapter 7 of the excellent Machine Learning: a Probabilistic Perspective is a good reference.) In principle, we could do this model comparison simply by rerunning the optimization above many times with different values of num_states, but that would be a lot of work. Here we'll show how to consider multiple models in parallel, using TFP's batch_shape mechanism for vectorization. Transition matrix and initial state prior: rather than building a single model description, now we'll build a batch of transition matrices and prior logits, one for each candidate model up to max_num_states. For easy batching we'll need to ensure that all computations have the same 'shape': this must correspond to the dimensions of the largest model we'll fit. To handle smaller models, we can 'embed' their descriptions in the topmost dimensions of the state space, effectively treating the remaining dimensions as dummy states that are never used. 
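The "type II maximum likelihood" shortcut is easiest to see in a toy case where the latent states play no role: a single-state model, where maximizing the Poisson likelihood over the rate simply picks the sample mean. A minimal sketch, for illustration only:

```python
import numpy as np
from math import lgamma

def poisson_loglik(xs, lam):
    """Log-likelihood of iid counts under a single Poisson rate."""
    xs = np.asarray(xs, dtype=float)
    return float(np.sum(xs * np.log(lam) - lam
                        - np.array([lgamma(x + 1) for x in xs])))

def type_ii_ml(xs, grid):
    """Approximate log p(x) by maximizing the likelihood over the rate
    instead of integrating the rate out (the 'empirical Bayes' shortcut)."""
    logliks = [poisson_loglik(xs, lam) for lam in grid]
    best = int(np.argmax(logliks))
    return float(grid[best]), logliks[best]
```

In the multi-state notebook model the same idea applies, except that the optimization over rates is done by gradient descent and the sum over state sequences is handled by the HMM's forward algorithm.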
End of explanation """ trainable_log_rates = tf.Variable( tf.fill([batch_initial_state_logits.shape[0], max_num_states], tf.math.log(tf.reduce_mean(observed_counts))) + tf.random.stateless_normal([1, max_num_states], seed=(42, 42)), name='log_rates') hmm = tfd.HiddenMarkovModel( initial_distribution=tfd.Categorical( logits=batch_initial_state_logits), transition_distribution=tfd.Categorical(probs=batch_transition_probs), observation_distribution=tfd.Poisson(log_rate=trainable_log_rates), num_steps=len(observed_counts)) print("Defined HMM with batch shape: {}".format(hmm.batch_shape)) """ Explanation: Now we proceed similarly as above. This time we'll use an extra batch dimension in trainable_rates to separately fit the rates for each model under consideration. End of explanation """ rate_prior = tfd.LogNormal(5, 5) def log_prob(): prior_lps = rate_prior.log_prob(tf.math.exp(trainable_log_rates)) prior_lp = tf.stack( [tf.reduce_sum(prior_lps[i, :i+1]) for i in range(max_num_states)]) return prior_lp + hmm.log_prob(observed_counts) """ Explanation: In computing the total log prob, we are careful to sum over only the priors for the rates actually used by each model component: End of explanation """ losses = tfp.math.minimize( lambda: -log_prob(), optimizer=tf.optimizers.Adam(0.1), num_steps=100) plt.plot(losses) plt.ylabel('Negative log marginal likelihood') num_states = np.arange(1, max_num_states+1) plt.plot(num_states, -losses[-1]) plt.ylim([-400, -200]) plt.ylabel("marginal likelihood $\\tilde{p}(x)$") plt.xlabel("number of latent states") plt.title("Model selection on latent states") """ Explanation: Now we optimize the batch objective we've constructed, fitting all candidate models simultaneously: End of explanation """ rates = tf.exp(trainable_log_rates) for i, learned_model_rates in enumerate(rates): print("rates for {}-state model: {}".format(i+1, learned_model_rates[:i+1])) """ Explanation: Examining the likelihoods, we see that the (approximate) marginal 
likelihood tends to prefer a three-state model. This seems quite plausible -- the 'true' model had four states, but from just looking at the data it's hard to rule out a three-state explanation. We can also extract the rates fit for each candidate model: End of explanation """ most_probable_states = hmm.posterior_mode(observed_counts) fig = plt.figure(figsize=(14, 12)) for i, learned_model_rates in enumerate(rates): ax = fig.add_subplot(4, 3, i+1) ax.plot(tf.gather(learned_model_rates, most_probable_states[i]), c='green', lw=3, label='inferred rate') ax.plot(observed_counts, c='black', alpha=0.3, label='observed counts') ax.set_ylabel("latent rate") ax.set_xlabel("time") ax.set_title("{}-state model".format(i+1)) ax.legend(loc=4) plt.tight_layout() """ Explanation: And plot the explanations each model provides for the data: End of explanation """
jarrison/trEFM-learn
Examples/demo.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt
from trEFMlearn import data_sim
%matplotlib inline
""" Explanation: Welcome! Let's start by assuming you have downloaded the code and run setup.py. This demonstration will show the user how to predict the time constant of their trEFM data using the methods of statistical learning. Let's start by importing the data simulation module from the trEFMlearn package. This package contains methods to numerically simulate some experimental data. End of explanation """ tau_array = np.logspace(-8, -5, 100)
fit_object, fit_tau = data_sim.sim_fit(tau_array)
""" Explanation: Simulation You can create an array of time constants that you would like to simulate the data for. This array can then be input into the simulation function, which simulates the data as well as fits it using support vector regression. This function can take a few minutes depending on the number of time constants you provide. Run this cell and wait for the function to complete. An error may appear; don't fret, as it has no effect. End of explanation """ plt.figure()
plt.title('Fit Time Constant vs. Actual')
plt.plot(fit_tau, 'bo')
plt.plot(tau_array, 'g')
plt.ylabel('Tau (s)')
plt.yscale('log')
plt.show()
# Calculate the error at each measurement.
error = (tau_array - fit_tau) / tau_array
plt.figure()
plt.title('Error Signal')
plt.plot(tau_array, error)
plt.ylabel('Error (%)')
plt.xlabel('Time Constant (s)')
plt.xscale('log')
plt.show()
""" Explanation: Neato! Looks like that function is all done. We now have an SVR object called "fit_object" as well as a result of the fit called "fit_tau". Let's take a look at the result of the fit by comparing it to the actual input tau. End of explanation """ from trEFMlearn import process_image
""" Explanation: Clearly the SVR method is quite capable of reproducing the time constants of simulated data using very-simple-to-calculate features. 
We observe some lower limit to the model's ability to calculate time constants, which is quite interesting. However, this lower limit appears below 100 nanoseconds, a time-scale that is seldom seen in the real world. This could be quite useful for extracting time constant data! Analyzing a Real Image The Data In order to assess the ability of the model to apply to real images, I have taken a trEFM image of an MDMO photovoltaic material. There are large aggregates of acceptor material that should show a nice contrast in the way that they generate and hold charge. Each pixel of this image has been pre-averaged before being saved with this demo program. Each pixel is a measurement of the AFM cantilever position as a function of time. The Process Our mission is to extract the time constant out of this signal using the SVR fit of our simulated data. We accomplish this by importing and calling the "process_image" function. End of explanation """ tau_img, real_sum_img, fft_sum_img, amp_diff_img = process_image.analyze_image('.\\image data\\', fit_object) """ Explanation: The image processing function needs two inputs. First we show the function the path to the provided image data. We then provide the function with the SVR object that was previously generated using the simulated cantilever data. Processing this image should only take 15 to 30 seconds. End of explanation """ # Something went wrong in the data on the first line. Let's skip it. tau_img = tau_img[1:] real_sum_img = real_sum_img[1:] fft_sum_img = fft_sum_img[1:] amp_diff_img = amp_diff_img[1:] plt.figure() upper_lim = (tau_img.mean() + 2*tau_img.std()) lower_lim = (tau_img.mean() - 2*tau_img.std()) plt.imshow(tau_img,vmin=lower_lim, vmax=upper_lim,cmap = 'cubehelix') plt.show() """ Explanation: Awesome. That was pretty quick huh? Without this machine learning method, the exact same image we just analyzed takes over 8 minutes to run. Yes! Now let's take a look at what we get. 
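As a rough, self-contained illustration of the kind of per-pixel reduction a routine like analyze_image performs, the sketch below turns one synthetic deflection trace into three scalar features. The feature names follow the returned images (total signal sum, FFT power sum, amplitude difference), but their exact definitions inside trEFMlearn are assumptions here:

```python
import numpy as np

def pixel_features(signal, trigger_index):
    """Reduce one deflection trace to three scalar features
    (assumed definitions; the package's actual features may differ)."""
    real_sum = float(np.sum(signal))                         # total signal sum
    fft_sum = float(np.sum(np.abs(np.fft.rfft(signal))**2))  # FFT power sum
    amp_diff = float(np.max(np.abs(signal[trigger_index:]))
                     - np.max(np.abs(signal[:trigger_index])))
    return real_sum, fft_sum, amp_diff

def make_trace(tau, n=1000, dt=1e-6, freq=3e4, trigger=200):
    """Synthetic cantilever-like trace whose oscillation amplitude decays
    with time constant tau after the trigger."""
    t = np.arange(n) * dt
    sig = np.sin(2 * np.pi * freq * t)
    sig[trigger:] *= np.exp(-(t[trigger:] - t[trigger]) / tau)
    return sig
```

Features like these are cheap to compute per pixel, which is what makes the learned mapping so much faster than fitting each trace directly.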
End of explanation """ fig, axs = plt.subplots(nrows=3)
axs[0].imshow(real_sum_img, cmap='hot')
axs[0].set_title('Total Signal Sum')
axs[1].imshow(fft_sum_img, cmap='hot')
axs[1].set_title('Sum of the FFT Power Spectrum')
axs[2].imshow(amp_diff_img, cmap='hot')
axs[2].set_title('Difference in Amplitude After Trigger')
plt.tight_layout()
plt.show()
""" Explanation: You can definitely begin to make out some of the structure that is occurring in the photovoltaic performance of this device. This image looks great, but there are still many areas for improvement. For example, I will need to extensively prove that this image is not purely a result of topographical cross-talk. If this image is correct, this is a significant improvement on our current imaging technique. The Features In the next cell we show an image of the various features that were calculated from the raw deflection signal. Some features more clearly matter than others and indicate that the search for better and more representative features is desirable. However, I think this is a great start to a project I hope to continue developing in the future. End of explanation """
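The demo's regression lives inside trEFMlearn. As a self-contained stand-in (ordinary least squares instead of SVR, with a made-up feature and a hypothetical 10-microsecond window), the idea of regressing a time constant from a simple feature of a decay looks like this:

```python
import numpy as np

def decay_feature(tau, t):
    """One simple feature of an exponential decay: its mean over the window."""
    return float(np.mean(np.exp(-t / tau)))

# Simulated decays over a 10-microsecond window (hypothetical setup).
t = np.linspace(0.0, 1e-5, 500)
taus = np.logspace(-8, -5, 50)
X = np.array([[1.0, np.log(decay_feature(tau, t))] for tau in taus])
y = np.log10(taus)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit log10(tau) from the feature
pred = X @ coef
```

A kernelized regressor such as SVR can capture the nonlinearity that this linear fit misses at the long-tau end of the range, which is one plausible reason the package uses it.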
mne-tools/mne-tools.github.io
0.17/_downloads/96cf5c207119de22548efa8f14198f9e/plot_artifacts_correction_rejection.ipynb
bsd-3-clause
# sphinx_gallery_thumbnail_number = 3
import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)  # already has an EEG ref
""" Explanation: Rejecting bad data (channels and segments) End of explanation """ raw.info['bads'] = ['MEG 2443']
""" Explanation: Marking bad channels Sometimes some MEG or EEG channels are not functioning properly for various reasons. These channels should be excluded from analysis by marking them as bad. This is done by setting the 'bads' in the measurement info of a data container object (e.g. Raw, Epochs, Evoked). The info['bads'] value is a Python list of channel names. Here is an example: End of explanation """ # Reading data with a bad channel marked as bad:
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
# restrict the evoked to EEG and MEG channels
evoked.pick_types(meg=True, eeg=True, exclude=[])
# plot with bads
evoked.plot(exclude=[], time_unit='s')
print(evoked.info['bads'])
""" Explanation: Why setting a channel bad?: If a channel does not show a signal at all (flat) it is important to exclude it from the analysis. If a channel has a noise level significantly higher than the other channels it should be marked as bad. Presence of bad channels can have terrible consequences on downstream analysis. For a flat channel some noise estimate will be unrealistically low, and thus the current estimate calculations will give a strong weight to the zero signal on the flat channels and the estimates will essentially vanish. Noisy channels can also affect others when signal-space projections or EEG average electrode reference is employed. Noisy bad channels can also adversely affect averaging and noise-covariance matrix estimation by causing unnecessary rejections of epochs. 
Recommended ways to identify bad channels are: Observe the quality of data during data acquisition and make notes of observed malfunctioning channels to your measurement protocol sheet. View the on-line averages and check the condition of the channels. Compute preliminary off-line averages with artifact rejection, SSP/ICA, and EEG average electrode reference computation off and check the condition of the channels. View raw data with :func:mne.io.Raw.plot without SSP/ICA enabled and identify bad channels. <div class="alert alert-info"><h4>Note</h4><p>Setting the bad channels should be done as early as possible in the analysis pipeline. That's why it's recommended to set bad channels the raw objects/files. If present in the raw data files, the bad channel selections will be automatically transferred to averaged files, noise-covariance matrices, forward solution files, and inverse operator decompositions.</p></div> The actual removal happens using :func:pick_types &lt;mne.pick_types&gt; with exclude='bads' option (see picking_channels). Instead of removing the bad channels, you can also try to repair them. This is done by interpolation of the data from other channels. To illustrate how to use channel interpolation let us load some data. 
End of explanation """ evoked.interpolate_bads(reset_bads=False, verbose=False) """ Explanation: Let's now interpolate the bad channels (displayed in red above) End of explanation """ evoked.plot(exclude=[], time_unit='s') """ Explanation: Let's plot the cleaned data End of explanation """ eog_events = mne.preprocessing.find_eog_events(raw) n_blinks = len(eog_events) # Center to cover the whole blink with full duration of 0.5s: onset = eog_events[:, 0] / raw.info['sfreq'] - 0.25 duration = np.repeat(0.5, n_blinks) annot = mne.Annotations(onset, duration, ['bad blink'] * n_blinks, orig_time=raw.info['meas_date']) raw.set_annotations(annot) print(raw.annotations) # to get information about what annotations we have raw.plot(events=eog_events) # To see the annotated segments. """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>Interpolation is a linear operation that can be performed also on Raw and Epochs objects.</p></div> For more details on interpolation see the page channel_interpolation. Marking bad raw segments with annotations MNE provides an :class:mne.Annotations class that can be used to mark segments of raw data and to reject epochs that overlap with bad segments of data. The annotations are automatically synchronized with raw data as long as the timestamps of raw data and annotations are in sync. See sphx_glr_auto_tutorials_plot_brainstorm_auditory.py for a long example exploiting the annotations for artifact removal. The instances of annotations are created by providing a list of onsets and offsets with descriptions for each segment. The onsets and offsets are marked as seconds. onset refers to time from start of the data. offset is the duration of the annotation. The instance of :class:mne.Annotations can be added as an attribute of :class:mne.io.Raw. 
End of explanation """ reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6) """ Explanation: It is also possible to draw bad segments interactively using :meth:raw.plot &lt;mne.io.Raw.plot&gt; (see sphx_glr_auto_tutorials_plot_visualize_raw.py). As the data is epoched, all the epochs overlapping with segments whose description starts with 'bad' are rejected by default. To turn rejection off, use keyword argument reject_by_annotation=False when constructing :class:mne.Epochs. When working with neuromag data, the first_samp offset of raw acquisition is also taken into account the same way as with event lists. For more see :class:mne.Epochs and :class:mne.Annotations. Rejecting bad epochs When working with segmented data (Epochs) MNE offers a quite simple approach to automatically reject/ignore bad epochs. This is done by defining thresholds for peak-to-peak amplitude and flat signal detection. In the following code we build Epochs from Raw object. One of the provided parameter is named reject. It is a dictionary where every key is a channel type as a string and the corresponding values are peak-to-peak rejection parameters (amplitude ranges as floats). Below we define the peak-to-peak rejection values for gradiometers, magnetometers and EOG: End of explanation """ events = mne.find_events(raw, stim_channel='STI 014') event_id = {"auditory/left": 1} tmin = -0.2 # start of each epoch (200ms before the trigger) tmax = 0.5 # end of each epoch (500ms after the trigger) baseline = (None, 0) # means from the first instant to t = 0 picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=True, stim=False, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks_meg, baseline=baseline, reject=reject, reject_by_annotation=True) """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>The rejection values can be highly data dependent. You should be careful when adjusting these values. 
Make sure not too many epochs are rejected and look into the cause of the rejections. Maybe it's just a matter of marking a single channel as bad and you'll be able to save a lot of data.</p></div> We then construct the epochs End of explanation """ epochs.drop_bad() """ Explanation: We then drop/reject the bad epochs End of explanation """ print(epochs.drop_log[40:45]) # only a subset epochs.plot_drop_log() """ Explanation: And plot the so-called drop log that details the reason for which some epochs have been dropped. End of explanation """
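The peak-to-peak rejection rule applied above can be sketched in plain NumPy. This is a simplified stand-in for what constructing mne.Epochs with reject and then calling drop_bad() does internally, not MNE's actual implementation:

```python
import numpy as np

def drop_bad_epochs(epochs, ch_types, reject):
    """Keep only epochs whose per-channel peak-to-peak amplitude stays
    below the threshold for that channel's type.

    epochs: (n_epochs, n_channels, n_times); ch_types: one type string
    per channel; reject: dict mapping type -> maximum peak-to-peak value.
    """
    keep = []
    for ep in epochs:
        ptp = ep.max(axis=1) - ep.min(axis=1)   # peak-to-peak per channel
        keep.append(all(ptp[i] <= reject.get(ch_types[i], np.inf)
                        for i in range(len(ch_types))))
    return epochs[np.array(keep)], keep
```

The boolean keep list plays the role of the drop log: for a rejected epoch, the offending channel is the one whose peak-to-peak value exceeded its type's threshold.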
UltronAI/Deep-Learning
CS231n/assignment2/ConvolutionalNetworks.ipynb
mit
# As usual, a bit of setup from __future__ import print_function import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.cnn import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient from cs231n.layers import * from cs231n.fast_layers import * from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) """ Explanation: Convolutional Networks So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead. First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset. 
End of explanation """ x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.array([[[[-0.08759809, -0.10987781], [-0.18387192, -0.2109216 ]], [[ 0.21027089, 0.21661097], [ 0.22847626, 0.23004637]], [[ 0.50813986, 0.54309974], [ 0.64082444, 0.67101435]]], [[[-0.98053589, -1.03143541], [-1.19128892, -1.24695841]], [[ 0.69108355, 0.66880383], [ 0.59480972, 0.56776003]], [[ 2.36270298, 2.36904306], [ 2.38090835, 2.38247847]]]]) # Compare your output to ours; difference should be around 2e-8 print('Testing conv_forward_naive') print('difference: ', rel_error(out, correct_out)) """ Explanation: Convolution: Naive forward pass The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear. You can test your implementation by running the following: End of explanation """ from scipy.misc import imread, imresize kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg') # kitten is wide, and puppy is already square d = kitten.shape[1] - kitten.shape[0] kitten_cropped = kitten[:, d//2:-d//2, :] img_size = 200 # Make this smaller if it runs too slow x = np.zeros((2, 3, img_size, img_size)) x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1)) x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1)) # Set up a convolutional weights holding 2 filters, each 3x3 w = np.zeros((2, 3, 3, 3)) # The first filter converts the image to grayscale. # Set up the red, green, and blue channels of the filter. 
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]

# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])

# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})

def imshow_noax(img, normalize=True):
    """ Tiny helper to show images as uint8 and remove axis labels """
    if normalize:
        img_max, img_min = np.max(img), np.min(img)
        img = 255.0 * (img - img_min) / (img_max - img_min)
    plt.imshow(img.astype('uint8'))
    plt.gca().axis('off')

# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
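Before running the image demo, it can help to see the convolution arithmetic on a toy example. This sketch (hypothetical toy data, not the kitten/puppy images) computes a single output value by the direct dot-product definition, mirroring what the grayscale filter above does at each position:

```python
import numpy as np

# Toy input: one 3-channel 4x4 "image", and a 3x3 filter whose only nonzero
# taps are the center weights 0.3/0.6/0.1 (like the grayscale filter above).
x = np.arange(3 * 4 * 4, dtype=float).reshape(3, 4, 4)
w = np.zeros((3, 3, 3))
w[0, 1, 1], w[1, 1, 1], w[2, 1, 1] = 0.3, 0.6, 0.1

# With stride 1 and no padding, the output value at spatial position (i, j)
# is just a dot product between the filter and the matching input patch.
i, j = 1, 1
patch = x[:, i - 1:i + 2, j - 1:j + 2]
out_ij = np.sum(patch * w)

# For this filter, that reduces to a weighted sum of the center pixel's channels.
expected = 0.3 * x[0, i, j] + 0.6 * x[1, i, j] + 0.1 * x[2, i, j]
print(np.isclose(out_ij, expected))  # True
```

Sliding this dot product over every spatial position (and every filter) is exactly what conv_forward_naive does.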
End of explanation """ np.random.seed(231) x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 1, 'pad': 1} dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout) out, cache = conv_forward_naive(x, w, b, conv_param) dx, dw, db = conv_backward_naive(dout, cache) # Your errors should be around 1e-8' print('Testing conv_backward_naive function') print('dx error: ', rel_error(dx, dx_num)) print('dw error: ', rel_error(dw, dw_num)) print('db error: ', rel_error(db, db_num)) """ Explanation: Convolution: Naive backward pass Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency. When you are done, run the following to check your backward pass with a numeric gradient check. End of explanation """ x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], [[-0.14526316, -0.13052632], [-0.08631579, -0.07157895]], [[-0.02736842, -0.01263158], [ 0.03157895, 0.04631579]]], [[[ 0.09052632, 0.10526316], [ 0.14947368, 0.16421053]], [[ 0.20842105, 0.22315789], [ 0.26736842, 0.28210526]], [[ 0.32631579, 0.34105263], [ 0.38526316, 0.4 ]]]]) # Compare your output with ours. Difference should be around 1e-8. 
print('Testing max_pool_forward_naive function:') print('difference: ', rel_error(out, correct_out)) """ Explanation: Max pooling: Naive forward Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency. Check your implementation by running the following: End of explanation """ np.random.seed(231) x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_backward_naive(dout, cache) # Your error should be around 1e-12 print('Testing max_pool_backward_naive function:') print('dx error: ', rel_error(dx, dx_num)) """ Explanation: Max pooling: Naive backward Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency. 
Check your implementation with numeric gradient checking by running the following: End of explanation """ from cs231n.fast_layers import conv_forward_fast, conv_backward_fast from time import time np.random.seed(231) x = np.random.randn(100, 3, 31, 31) w = np.random.randn(25, 3, 3, 3) b = np.random.randn(25,) dout = np.random.randn(100, 25, 16, 16) conv_param = {'stride': 2, 'pad': 1} t0 = time() out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param) t1 = time() out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param) t2 = time() print('Testing conv_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('Difference: ', rel_error(out_naive, out_fast)) t0 = time() dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive) t1 = time() dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast) t2 = time() print('\nTesting conv_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('dx difference: ', rel_error(dx_naive, dx_fast)) print('dw difference: ', rel_error(dw_naive, dw_fast)) print('db difference: ', rel_error(db_naive, db_fast)) from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast np.random.seed(231) x = np.random.randn(100, 3, 32, 32) dout = np.random.randn(100, 3, 16, 16) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} t0 = time() out_naive, cache_naive = max_pool_forward_naive(x, pool_param) t1 = time() out_fast, cache_fast = max_pool_forward_fast(x, pool_param) t2 = time() print('Testing pool_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('fast: %fs' % (t2 - t1)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('difference: ', rel_error(out_naive, out_fast)) t0 = time() dx_naive = max_pool_backward_naive(dout, cache_naive) t1 = time() dx_fast = max_pool_backward_fast(dout, cache_fast) t2 = time() print('\nTesting 
pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
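Where does the speedup come from? One common approach — a reasonable mental model here, though the provided Cython layer may differ in detail — is im2col: unroll every receptive field into a column so the whole convolution becomes one matrix multiply. A minimal sketch:

```python
import numpy as np

def conv_via_im2col(x, w, b, stride=1, pad=0):
    # Convolution expressed as a single matrix multiply by unrolling every
    # receptive field of the (padded) input into a column ("im2col").
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    Hp = (H + 2 * pad - HH) // stride + 1
    Wp = (W + 2 * pad - WW) // stride + 1
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')

    cols = np.empty((C * HH * WW, N * Hp * Wp))
    idx = 0
    for n in range(N):
        for i in range(Hp):
            for j in range(Wp):
                patch = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                cols[:, idx] = patch.ravel()
                idx += 1

    out = w.reshape(F, -1).dot(cols) + b.reshape(F, 1)  # one big matmul
    return out.reshape(F, N, Hp, Wp).transpose(1, 0, 2, 3)

# Sanity check: a 1x1 "identity" filter bank should reproduce the input.
x = np.random.randn(2, 3, 4, 4)
w = np.zeros((3, 3, 1, 1))
for f in range(3):
    w[f, f, 0, 0] = 1.0
out = conv_via_im2col(x, w, np.zeros(3), stride=1, pad=0)
print(out.shape == x.shape and np.allclose(out, x))  # True
```

The memory cost is higher because patches are duplicated across columns, which is the usual trade-off of im2col.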
You can compare the performance of the naive and fast versions of these layers by running the following: End of explanation """ from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward np.random.seed(231) x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param) dx, dw, db = conv_relu_pool_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout) print('Testing conv_relu_pool') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) from cs231n.layer_utils import conv_relu_forward, conv_relu_backward np.random.seed(231) x = np.random.randn(2, 3, 8, 8) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} out, cache = conv_relu_forward(x, w, b, conv_param) dx, dw, db = conv_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout) print('Testing conv_relu:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) """ Explanation: Convolutional "sandwich" layers Previously we introduced the concept of "sandwich" layers 
that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks. End of explanation """ model = ThreeLayerConvNet() N = 50 X = np.random.randn(N, 3, 32, 32) y = np.random.randint(10, size=N) loss, grads = model.loss(X, y) print('Initial loss (no regularization): ', loss) model.reg = 0.5 loss, grads = model.loss(X, y) print('Initial loss (with regularization): ', loss) """ Explanation: Three-layer ConvNet Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug: Sanity check loss After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up. End of explanation """ num_inputs = 2 input_dim = (3, 16, 16) reg = 0.0 num_classes = 10 np.random.seed(231) X = np.random.randn(num_inputs, *input_dim) y = np.random.randint(num_classes, size=num_inputs) model = ThreeLayerConvNet(num_filters=3, filter_size=3, input_dim=input_dim, hidden_dim=7, dtype=np.float64) loss, grads = model.loss(X, y) for param_name in sorted(grads): f = lambda _: model.loss(X, y)[0] param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6) e = rel_error(param_grad_num, grads[param_name]) print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))) """ Explanation: Gradient check After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. 
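The numeric gradients used here come from a centered-difference approximation. A minimal standalone version of the idea (the course's eval_numerical_gradient utilities are the real tools; this just shows the mechanics):

```python
import numpy as np

def numeric_grad(f, x, h=1e-6):
    # Centered-difference estimate of df/dx for a scalar-valued f(x).
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fp = f(x)
        x[ix] = old - h
        fm = f(x)
        x[ix] = old                       # restore original value
        grad[ix] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

x = np.random.randn(3, 4)
g = numeric_grad(lambda x: np.sum(x ** 2), x)
print(np.allclose(g, 2 * x, atol=1e-4))  # True: analytic gradient is 2x
```

Comparing this estimate to your analytic gradients with rel_error is exactly what the checks below do.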
When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
End of explanation
"""
np.random.seed(231)

num_train = 100
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

model = ThreeLayerConvNet(weight_scale=1e-2)

solver = Solver(model, small_data,
                num_epochs=15, batch_size=50,
                update_rule='adam',
                optim_config={
                  'learning_rate': 1e-3,
                },
                verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')

plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)

solver = Solver(model, data,
                num_epochs=1, batch_size=50,
                update_rule='adam',
                optim_config={
                  'learning_rate': 1e-3,
                },
                verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid

grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization

N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10

print('Before spatial batch normalization:')
print('  Shape: ', x.shape)
print('  Means: ', x.mean(axis=(0, 2, 3)))
print('  Stds: ', x.std(axis=(0, 2, 3)))

# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print('  Shape: ', out.shape)
print('  Means: ', out.mean(axis=(0, 2, 3)))
print('  Stds: ', out.std(axis=(0, 2, 3)))

# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print('  Shape: ', out.shape)
print('  Means: ', out.mean(axis=(0, 2, 3)))
print('  Stds: ', out.std(axis=(0, 2, 3)))

np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
  x = 2.3 * np.random.randn(N, C, H, W) + 13
  spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)

# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print('  means: ', a_norm.mean(axis=(0, 2, 3)))
print('  stds: ', a_norm.std(axis=(0, 2, 3)))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward.
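A common implementation strategy — a sketch of one valid approach, not the only one — is to move the channel axis last, flatten to (N*H*W, C), and reuse the vanilla per-channel batch normalization statistics:

```python
import numpy as np

def spatial_bn_train_sketch(x, gamma, beta, eps=1e-5):
    # Training-mode forward pass only; omits the running-average bookkeeping
    # and the cache needed for the backward pass.
    N, C, H, W = x.shape
    x2 = x.transpose(0, 2, 3, 1).reshape(-1, C)   # (N*H*W, C)
    mu = x2.mean(axis=0)
    var = x2.var(axis=0)
    xhat = (x2 - mu) / np.sqrt(var + eps)
    out2 = gamma * xhat + beta
    return out2.reshape(N, H, W, C).transpose(0, 3, 1, 2)

x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_bn_train_sketch(x, np.ones(3), np.zeros(3))
print(np.allclose(out.mean(axis=(0, 2, 3)), 0, atol=1e-7))  # True
print(np.allclose(out.std(axis=(0, 2, 3)), 1, atol=1e-2))   # True
```

The same reshape trick works for the backward pass by routing upstream derivatives through vanilla batchnorm_backward.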
Check your implementation by running the following: End of explanation """ np.random.seed(231) N, C, H, W = 2, 3, 4, 5 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) bn_param = {'mode': 'train'} fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) _, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache) print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) """ Explanation: Spatial batch normalization: backward In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check: End of explanation """
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# Import TensorFlow
import tensorflow as tf

# Helper libraries
import numpy as np
import os

print(tf.__version__)
"""
Explanation: Custom training with tf.distribute.Strategy
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/custom_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This example shows how to use tf.distribute.Strategy with custom training loops. We will train a simple CNN model on the Fashion MNIST dataset. The Fashion MNIST dataset contains 60,000 training images of size 28 x 28 and 10,000 test images of size 28 x 28.
This example uses a custom training loop for greater flexibility and finer control over training.
Also, using a custom training loop makes it easier to debug the model and the training loop.
End of explanation
"""
fashion_mnist = tf.keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Adding a dimension to the array -> new shape == (28, 28, 1)
# We are doing this because the first layer in our model is a convolutional
# layer and it requires a 4D input (batch_size, height, width, channels).
# batch_size dimension will be added later on.
train_images = train_images[..., None]
test_images = test_images[..., None]

# Getting the images in [0, 1] range.
train_images = train_images / np.float32(255)
test_images = test_images / np.float32(255)
"""
Explanation: Download the Fashion MNIST dataset
End of explanation
"""
# If the list of devices is not specified in the
# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.
strategy = tf.distribute.MirroredStrategy()

print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
"""
Explanation: Create a strategy to distribute the variables and the graph
How does the tf.distribute.MirroredStrategy strategy work?
All the variables and the model graph are replicated on the replicas (here, "replica" refers to the device a copy of the model runs on).
Input is evenly distributed across the replicas.
Each replica calculates the loss and gradients for the input it received.
The gradients are synced across all the replicas by summing them.
After the sync, the same update is made to the copies of the variables on each replica.
Note: You can put all the code below inside a single scope. Let's take a look!
End of explanation
"""
BUFFER_SIZE = len(train_images)

BATCH_SIZE_PER_REPLICA = 64
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync

EPOCHS = 10
"""
Explanation: Setup input pipeline
Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
End of explanation
"""
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)
"""
Explanation: Create the datasets and distribute them:
End of explanation
"""
def create_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu'),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Conv2D(64, 3, activation='relu'),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10, activation='softmax')
    ])

  return model

# Create a checkpoint directory to store the checkpoints.
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
"""
Explanation: Create the model
Create a model using tf.keras.Sequential. You can also use the Model Subclassing API to do this.
End of explanation
"""
with strategy.scope():
  # Set reduction to `none` so we can do the reduction afterwards and divide by
  # global batch size.
  loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
      from_logits=True,
      reduction=tf.keras.losses.Reduction.NONE)
  def compute_loss(labels, predictions):
    per_example_loss = loss_object(labels, predictions)
    return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
"""
Explanation: Define the loss function
Normally, on a single machine with 1 GPU/CPU, the loss is divided by the number of examples in the batch of input.
So, how should the loss be calculated when using a tf.distribute.Strategy?
For example, let's say you have 4 GPUs and a batch size of 64. One batch of input is distributed across the replicas (4 GPUs), with each replica getting an input of size 16.
The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (BATCH_SIZE_PER_REPLICA = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).
Why do this? Because after the gradients are calculated on each replica, they are synced across the replicas by summing them.
How to do this in TensorFlow?
If you're writing a custom training loop, as in this tutorial, you should sum the per-example losses and divide the sum by the GLOBAL_BATCH_SIZE: scale_loss = tf.reduce_sum(loss) * (1. / GLOBAL_BATCH_SIZE)
Or you can use tf.nn.compute_average_loss, which takes the per-example loss, optional sample weights, and GLOBAL_BATCH_SIZE as arguments and returns the scaled loss.
If you are using regularization losses in your model, then you need to scale the loss value by the number of replicas. You can do this by using the tf.nn.scale_regularization_loss function.
Using tf.reduce_mean is not recommended. Doing so divides the loss by the actual per-replica batch size, which may vary step to step.
This reduction and scaling is done automatically in Keras model.compile and model.fit.
If using tf.keras.losses classes (as in the example below), the loss reduction needs to be explicitly specified as one of NONE or SUM. AUTO and SUM_OVER_BATCH_SIZE are disallowed when used with tf.distribute.Strategy. AUTO is disallowed because the user should explicitly think about what reduction they want, to make sure it is correct in the distributed case. SUM_OVER_BATCH_SIZE is disallowed because currently it would only divide by the per-replica batch size and leave dividing by the number of replicas to the user, which might be easy to miss. So, instead, you are asked to do the reduction yourself explicitly.
If labels is multi-dimensional, then average the per_example_loss across the number of elements in each sample. For example, if the shape of predictions is (batch_size, H, W, n_classes) and labels is (batch_size, H, W), you will need to update per_example_loss like: per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)
Caution: Verify the shape of your loss. Loss functions in tf.losses/tf.keras.losses typically return the average over the last dimension of the input. The loss classes wrap these functions. Passing reduction=Reduction.NONE when creating an instance of the loss class means "no additional reduction". For categorical losses with an example input shape of [batch, W, H, n_classes], the n_classes dimension is reduced. For pointwise losses like losses.mean_squared_error or losses.binary_crossentropy, include a dummy axis so that [batch, W, H, 1] is reduced to [batch, W, H]. Without the dummy axis, [batch, W, H] will be incorrectly reduced to [batch, W].
End of explanation
"""
with strategy.scope():
  test_loss = tf.keras.metrics.Mean(name='test_loss')

  train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='train_accuracy')
  test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='test_accuracy')
"""
Explanation: Define the metrics to track loss and accuracy
These metrics track the test loss and the training and test accuracy. You can use .result() to get the accumulated statistics at any time.
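These metrics are stateful: each update_state call accumulates, result() reads the running value, and reset_states() clears it. A dependency-free sketch of the Mean metric's semantics (illustration only, not the tf.keras implementation):

```python
class MeanMetric:
    # Mimics the accumulate / result / reset pattern of tf.keras.metrics.Mean.
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, value):
        self.total += float(value)
        self.count += 1

    def result(self):
        return self.total / self.count if self.count else 0.0

    def reset_states(self):
        self.total, self.count = 0.0, 0

loss_metric = MeanMetric()
for batch_loss in [0.9, 0.7, 0.5]:
    loss_metric.update_state(batch_loss)
print(abs(loss_metric.result() - 0.7) < 1e-9)  # True: mean of the batch losses
loss_metric.reset_states()
print(loss_metric.result())                    # 0.0
```

This is why the training loop below calls reset_states() at the end of every epoch: otherwise each epoch's statistics would mix with the previous ones.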
End of explanation
"""
# model, optimizer, and checkpoint must be created under `strategy.scope`.
with strategy.scope():
  model = create_model()

  optimizer = tf.keras.optimizers.Adam()

  checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)

def train_step(inputs):
  images, labels = inputs

  with tf.GradientTape() as tape:
    predictions = model(images, training=True)
    loss = compute_loss(labels, predictions)

  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  train_accuracy.update_state(labels, predictions)
  return loss

def test_step(inputs):
  images, labels = inputs

  predictions = model(images, training=False)
  t_loss = loss_object(labels, predictions)

  test_loss.update_state(t_loss)
  test_accuracy.update_state(labels, predictions)

# `run` replicates the provided computation and runs it
# with the distributed input.
@tf.function
def distributed_train_step(dataset_inputs):
  per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
                         axis=None)

@tf.function
def distributed_test_step(dataset_inputs):
  return strategy.run(test_step, args=(dataset_inputs,))

for epoch in range(EPOCHS):
  # TRAIN LOOP
  total_loss = 0.0
  num_batches = 0
  for x in train_dist_dataset:
    total_loss += distributed_train_step(x)
    num_batches += 1
  train_loss = total_loss / num_batches

  # TEST LOOP
  for x in test_dist_dataset:
    distributed_test_step(x)

  if epoch % 2 == 0:
    checkpoint.save(checkpoint_prefix)

  template = ("Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, "
              "Test Accuracy: {}")
  print (template.format(epoch+1, train_loss,
                         train_accuracy.result()*100, test_loss.result(),
                         test_accuracy.result()*100))

  test_loss.reset_states()
  train_accuracy.reset_states()
  test_accuracy.reset_states()
"""
Explanation: Training loop
End of explanation
"""
eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='eval_accuracy')

new_model = create_model()
new_optimizer = tf.keras.optimizers.Adam()

test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

@tf.function
def eval_step(images, labels):
  predictions = new_model(images, training=False)
  eval_accuracy(labels, predictions)

checkpoint = tf.train.Checkpoint(optimizer=new_optimizer, model=new_model)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))

for images, labels in test_dataset:
  eval_step(images, labels)

print ('Accuracy after restoring the saved model without strategy: {}'.format(
    eval_accuracy.result()*100))
"""
Explanation: Things to note in the example above
This example iterates over the train_dist_dataset and test_dist_dataset using a for x in ... construct.
The scaled loss is the return value of distributed_train_step. This value is aggregated across replicas using the tf.distribute.Strategy.reduce call, and then across batches by summing the return values of the tf.distribute.Strategy.reduce calls.
tf.keras.Metrics should be updated inside train_step and test_step, which are executed by tf.distribute.Strategy.run.
tf.distribute.Strategy.run returns results from each local replica in the strategy, and there are multiple ways to consume these results. You can do tf.distribute.Strategy.reduce to get an aggregated value. You can also do tf.distribute.Strategy.experimental_local_results to get the list of values contained in the result, one per local replica.
Restore the latest checkpoint and test
A model checkpointed with a tf.distribute.Strategy can be restored with or without a strategy.
End of explanation
"""
for _ in range(EPOCHS):
  total_loss = 0.0
  num_batches = 0
  train_iter = iter(train_dist_dataset)

  for _ in range(10):
    total_loss += distributed_train_step(next(train_iter))
    num_batches += 1
  average_train_loss = total_loss / num_batches

  template = ("Epoch {}, Loss: {}, Accuracy: {}")
  print (template.format(epoch+1, average_train_loss, train_accuracy.result()*100))
  train_accuracy.reset_states()
"""
Explanation: Alternate ways of iterating over a dataset
Using iterators
If you want to iterate over a given number of steps and not through the entire dataset, you can create an iterator by calling iter and explicitly calling next on it. You can also choose to iterate over the dataset both inside and outside tf.function. Here is a small snippet demonstrating iteration of the dataset outside tf.function using an iterator.
End of explanation
"""
@tf.function
def distributed_train_epoch(dataset):
  total_loss = 0.0
  num_batches = 0
  for x in dataset:
    per_replica_losses = strategy.run(train_step, args=(x,))
    total_loss += strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
    num_batches += 1
  return total_loss / tf.cast(num_batches, dtype=tf.float32)

for epoch in range(EPOCHS):
  train_loss = distributed_train_epoch(train_dist_dataset)

  template = ("Epoch {}, Loss: {}, Accuracy: {}")
  print (template.format(epoch+1, train_loss, train_accuracy.result()*100))

  train_accuracy.reset_states()
"""
Explanation: Iterating inside a tf.function
You can also iterate over the entire input train_dist_dataset inside a tf.function using the for x in ... construct, or by creating iterators as we did above. The example below demonstrates wrapping one epoch of training in a tf.function and iterating over train_dist_dataset inside the function.
End of explanation
"""
param_file = '2468_fitting.json'   # parameter file to fit all the data

for fname in filelist:
    fit_pixel_data_and_save(wd, fname, param_file_name=param_file)
"""
Explanation: Batch mode to fit spectrum from detector sum
If the detectors are well aligned, you can fit the spectrum summed over the detectors.
End of explanation
"""
param_file = '2468_fitting.json'        # parameter file to fit data summed over detectors
param_file1 = '2468_fitting_det1.json'  # parameter file to fit data from detector 1
param_file2 = '2468_fitting_det2.json'  # parameter file to fit data from detector 2
param_file3 = '2468_fitting_det3.json'  # parameter file to fit data from detector 3

paramlist = [param_file1, param_file2, param_file3]
"""
Explanation: Batch mode to fit spectrum from individual detectors
End of explanation
"""
for fname in filelist:
    fit_pixel_data_and_save(wd, fname, param_file_name=param_file,
                            fit_channel_each=True, param_channel_list=paramlist,
                            save_txt=True, save_tiff=True)
"""
Explanation: You can also turn on the parameters save_txt and save_tiff (default is true), so pyxrf will output txt and tiff files.
End of explanation
"""
param_file = '2468_fitting.json'   # parameter file to fit the data
fname = 'scan_2468.h5'
energy = 10   # incident energy in keV

fit_pixel_data_and_save(wd, fname, param_file_name=param_file,
                        incident_energy=energy)
"""
Explanation: Batch mode to fit spectrum with given incident energy
End of explanation
"""
adrn/TriandRRLyrae
notebooks/Target selection.ipynb
mit
d = triand['dh'].data d_cut = (d > 15) & (d < 21) triand_dist = triand[d_cut] c_triand = _c_triand[d_cut] print(len(triand_dist)) plt.hist(triand_dist['<Vmag>'].data) """ Explanation: Now a distance cut: End of explanation """ ptf_triand = ascii.read("/Users/adrian/projects/streams/data/observing/triand.txt") ptf_c = coord.SkyCoord(ra=ptf_triand['ra']*u.deg, dec=ptf_triand['dec']*u.deg) print ptf_triand.colnames, len(ptf_triand) obs_dist = distance(ptf_triand['Vmag'].data) ((obs_dist > 12*u.kpc) & (obs_dist < 25*u.kpc)).sum() ptf_triand[0] """ Explanation: Stars I actually observed End of explanation """ rrlyr_d = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt", skiprows=2, dtype=None, names=['l','b','vhel','vgsr','src','ra','dec','name','dist']) obs_rrlyr = rrlyr_d[rrlyr_d['src'] == 'PTF'] """ Explanation: Data for the observed stars End of explanation """ fig,ax = plt.subplots(1,1,figsize=(10,8)) # ax.plot(c.galactic.l.degree, c.galactic.b.degree, linestyle='none', # marker='o', markersize=4, alpha=0.75) # ALL RR LYRAE ax.plot(c_triand.galactic.l.degree, c_triand.galactic.b.degree, linestyle='none', marker='o', markersize=5, alpha=0.75) ax.plot(ptf_c.galactic.l.degree, ptf_c.galactic.b.degree, linestyle='none', marker='o', markerfacecolor='none', markeredgewidth=2, markersize=12, alpha=0.75) ax.plot(obs_rrlyr['l'], obs_rrlyr['b'], linestyle='none', mec='r', marker='o', markerfacecolor='none', markeredgewidth=2, markersize=12, alpha=0.75) # x = np.linspace(-10,40,100) # x[x < 0] += 360. 
# y = np.linspace(30,45,100) # x,y = map(np.ravel, np.meshgrid(x,y)) # ccc = coord.SkyCoord(ra=x*u.deg,dec=y*u.deg) # ax.plot(ccc.galactic.l.degree, ccc.galactic.b.degree, linestyle='none') ax.set_xlim(97,162) ax.set_ylim(-37,-13) ax.set_xlabel("$l$ [deg]") ax.set_ylabel("$b$ [deg]") """ Explanation: Comparison of stars observed with Catalina End of explanation """ fig,ax = plt.subplots(1,1,figsize=(10,8)) ax.plot(c_triand.galactic.l.degree, c_triand.galactic.b.degree, linestyle='none', marker='o', markersize=4, alpha=0.75) ax.plot(ptf_c.galactic.l.degree, ptf_c.galactic.b.degree, linestyle='none', marker='o', markerfacecolor='none', markeredgewidth=2, markersize=8, alpha=0.75) ax.plot(obs_rrlyr['l'], obs_rrlyr['b'], linestyle='none', mec='r', marker='o', markerfacecolor='none', markeredgewidth=2, markersize=8, alpha=0.75) ax.plot(c_triand.galactic.l.degree[10], c_triand.galactic.b.degree[10], linestyle='none', marker='o', markersize=25, alpha=0.75) ax.set_xlim(97,162) ax.set_ylim(-37,-13) c_triand.icrs[10] """ Explanation: Issues Why are some of the PTF RR Lyrae missing from Catalina? Because they are too faint! (R>18) Why are Catalina stars missing from PTF? More observations, larger selection window. End of explanation """ brani = ascii.read("/Users/adrian/projects/triand-rrlyrae/brani_sample/TriAnd.dat") blaschko = brani[(brani['objectID'] == "13322281016459551106") | (brani['objectID'] == "13879390364114107826")] for b in blaschko: row = ptf_triand[np.argmin(np.sqrt((ptf_triand['ra'] - b['ra'])**2 + (ptf_triand['dec'] - b['dec'])**2))] print(row['name']) print(coord.SkyCoord(ra=row['ra']*u.deg, dec=row['dec']*u.deg).galactic) zip(obs_rrlyr['l'], obs_rrlyr['b']) d = V_to_dist(triand['<Vmag>'].data).to(u.kpc).value bins = np.arange(1., 60+5, 3) plt.figure(figsize=(10,8)) n,bins,patches = plt.hist(triand['dh'].data, bins=bins, alpha=0.5, label='Catalina') for pa in patches: if pa.xy[0] < 15. 
or pa.xy[0] > 40.: pa.set_alpha(0.2) # other_bins = np.arange(0, 15+2., 2.) # plt.hist(V_to_dist(triand['<Vmag>'].data), bins=other_bins, alpha=0.2, color='k') # other_bins = np.arange(40, 60., 2.) # plt.hist(V_to_dist(triand['<Vmag>'].data), bins=other_bins, alpha=0.2, color='k') plt.hist(V_to_dist(ptf_triand['Vmag'].data), bins=bins, alpha=0.5, label='PTF/MDM') plt.xlabel("Distance [kpc]") plt.ylabel("Number") # plt.ylim(0,35) plt.legend(fontsize=20) plt.axvline(18.) plt.axvline(28.) """ Explanation: Possible Blaschko stars: * R_13322281016459551106 * R_13879390364114107826 End of explanation """ import emcee import triangle from scipy.misc import logsumexp ((distance(triand['<Vmag>'].data) > (15.*u.kpc)) & (distance(triand['<Vmag>'].data) < (40.*u.kpc))).sum() !head -n3 /Users/adrian/projects/triand-rrlyrae/data/triand_giants.txt d = np.loadtxt("/Users/adrian/projects/triand-rrlyrae/data/triand_giants.txt", skiprows=1) d2 = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/TriAnd_Mgiant.txt", skiprows=2) plt.plot(d[:,0], d[:,2], linestyle='none') plt.plot(d2[:,0], d2[:,3], linestyle='none') ix = (d[:,2] < 100) & (d[:,2] > -50) ix = np.ones_like(ix).astype(bool) plt.plot(d[ix,0], d[ix,2], linestyle='none') plt.plot(d[ix,0], -1*d[ix,0] + 170, marker=None) plt.xlabel('l [deg]') plt.ylabel('v_r [km/s]') plt.figure() plt.plot(d[ix,0], d[ix,1], linestyle='none') plt.xlabel('l [deg]') plt.ylabel('b [deg]') def ln_normal(x, mu, sigma): return -0.5*np.log(2*np.pi) - np.log(sigma) - 0.5*((x-mu)/sigma)**2 # def ln_prior(p): # m,b,V = p # if m > 0. or m < -50: # return -np.inf # if b < 0 or b > 500: # return -np.inf # if V <= 0.: # return -np.inf # return -np.log(V) # def ln_likelihood(p, l, vr, sigma_vr): # m,b,V = p # sigma = np.sqrt(sigma_vr**2 + V**2) # return ln_normal(vr, m*l + b, sigma) # mixture model - f_ol is outlier fraction def ln_prior(p): m,b,V,f_ol = p if m > 0. 
or m < -50: return -np.inf if b < 0 or b > 500: return -np.inf if V <= 0.: return -np.inf if f_ol > 1. or f_ol < 0.: return -np.inf return -np.log(V) def likelihood(p, l, vr, sigma_vr): m,b,V,f_ol = p sigma = np.sqrt(sigma_vr**2 + V**2) term1 = ln_normal(vr, m*l + b, sigma) term2 = ln_normal(vr, 0., 120.) return np.array([term1, term2]) def ln_likelihood(p, *args): m,b,V,f_ol = p x = likelihood(p, *args) # coefficients b = np.zeros_like(x) b[0] = 1-f_ol b[1] = f_ol return logsumexp(x,b=b, axis=0) def ln_posterior(p, *args): lnp = ln_prior(p) if np.isinf(lnp): return -np.inf return lnp + ln_likelihood(p, *args).sum() def outlier_prob(p, *args): m,b,V,f_ol = p p1,p2 = likelihood(p, *args) return f_ol*np.exp(p2) / ((1-f_ol)*np.exp(p1) + f_ol*np.exp(p2)) vr_err = 2 # km/s nwalkers = 32 sampler = emcee.EnsembleSampler(nwalkers=nwalkers, dim=4, lnpostfn=ln_posterior, args=(d[ix,0],d[ix,2],vr_err)) p0 = np.zeros((nwalkers,sampler.dim)) p0[:,0] = np.random.normal(-1, 0.1, size=nwalkers) p0[:,1] = np.random.normal(150, 0.1, size=nwalkers) p0[:,2] = np.random.normal(25, 0.5, size=nwalkers) p0[:,3] = np.random.normal(0.1, 0.01, size=nwalkers) for pp in p0: lnp = ln_posterior(pp, *sampler.args) if not np.isfinite(lnp): print("you suck") pos,prob,state = sampler.run_mcmc(p0, N=100) sampler.reset() pos,prob,state = sampler.run_mcmc(pos, N=1000) fig = triangle.corner(sampler.flatchain, labels=[r'$\mathrm{d}v/\mathrm{d}l$', r'$v_0$', r'$\sigma_v$', r'$f_{\rm halo}$']) figsize = (12,8) MAP = sampler.flatchain[sampler.flatlnprobability.argmax()] pout = outlier_prob(MAP, d[ix,0], d[ix,2], vr_err) plt.figure(figsize=figsize) cl = plt.scatter(d[ix,0], d[ix,2], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1) cbar = plt.colorbar(cl) cbar.set_clim(0,1) # plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4) plt.xlabel(r'$l\,[{\rm deg}]$') plt.ylabel(r'$v_r\,[{\rm km\,s}^{-1}]$') ls = np.linspace(d[ix,0].min(), d[ix,0].max(), 100) for i in 
np.random.randint(len(sampler.flatchain), size=100): m,b,V,f_ol = sampler.flatchain[i] plt.plot(ls, m*ls+b, color='#555555', alpha=0.1, marker=None) best_m,best_b,best_V,best_f_ol = MAP plt.plot(ls, best_m*ls + best_b, color='k', alpha=1, marker=None) plt.plot(ls, best_m*ls + best_b + best_V, color='k', alpha=1, marker=None, linestyle='--') plt.plot(ls, best_m*ls + best_b - best_V, color='k', alpha=1, marker=None, linestyle='--') plt.xlim(ls.max()+2, ls.min()-2) plt.title("{:.1f}% halo stars".format(best_f_ol*100.)) print(((1-pout) > 0.75).tolist()) print best_m, best_b, best_V print "MAP velocity dispersion: {:.2f} km/s".format(best_V) high_p = (1-pout) > 0.8 plt.figure(figsize=figsize) cl = plt.scatter(d[high_p,0], d[high_p,1], c=d[high_p,2]-d[high_p,2].mean(), s=30, cmap='coolwarm', vmin=-40, vmax=40) cbar = plt.colorbar(cl) ax = plt.gca() ax.set_axis_bgcolor('#555555') plt.xlim(ls.max()+2,ls.min()-2) plt.ylim(-50,-10) plt.xlabel(r'$l\,[{\rm deg}]$') plt.ylabel(r'$b\,[{\rm deg}]$') plt.title(r'$P_{\rm TriAnd} > 0.8$', y=1.02) """ Explanation: For Kathryn's proposal End of explanation """ rrlyr_d = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt", skiprows=2, dtype=None) !cat "/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt" rrlyr_d = np.genfromtxt("/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt", skiprows=2) rrlyr_vr_err = 10. 
MAP = sampler.flatchain[sampler.flatlnprobability.argmax()] pout = outlier_prob(MAP, rrlyr_d[:,0], rrlyr_d[:,3], rrlyr_vr_err) plt.figure(figsize=figsize) cl = plt.scatter(rrlyr_d[:,0], rrlyr_d[:,1], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1) cbar = plt.colorbar(cl) cbar.set_clim(0,1) # plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4) plt.xlabel(r'$l\,[{\rm deg}]$') plt.ylabel(r'$b\,[{\rm deg}]$') plt.xlim(ls.max()+2,ls.min()-2) plt.ylim(-50,-10) plt.title("RR Lyrae") MAP = sampler.flatchain[sampler.flatlnprobability.argmax()] pout = outlier_prob(MAP, rrlyr_d[:,0], rrlyr_d[:,3], rrlyr_vr_err) plt.figure(figsize=figsize) cl = plt.scatter(rrlyr_d[:,0], rrlyr_d[:,3], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1) cbar = plt.colorbar(cl) cbar.set_clim(0,1) # plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4) plt.xlabel(r'$l\,[{\rm deg}]$') plt.ylabel(r'$v_r\,[{\rm km\,s}^{-1}]$') ls = np.linspace(d[ix,0].min(), d[ix,0].max(), 100) best_m,best_b,best_V,best_f_ol = MAP plt.plot(ls, best_m*ls + best_b, color='k', alpha=1, marker=None) plt.plot(ls, best_m*ls + best_b + best_V, color='k', alpha=1, marker=None, linestyle='--') plt.plot(ls, best_m*ls + best_b - best_V, color='k', alpha=1, marker=None, linestyle='--') plt.xlim(ls.max()+2, ls.min()-2) plt.title("RR Lyrae") """ Explanation: Now read in RR Lyrae data, compute prob for each star End of explanation """
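The helpers `distance` and `V_to_dist` used above are defined earlier in the notebook (not shown in this excerpt). A plausible sketch of what such a helper computes, assuming a typical RR Lyrae absolute magnitude of M_V ≈ 0.6 and no extinction correction (both assumptions, not taken from the notebook):

```python
def v_to_dist_kpc(V, M_V=0.6):
    """Distance in kpc from apparent magnitude V via the distance modulus
    V - M_V = 5*log10(d_pc) - 5 (no extinction correction)."""
    d_pc = 10.0 ** ((V - M_V + 5.0) / 5.0)
    return d_pc / 1000.0
```

With these assumptions a V ≈ 15.6 mag RR Lyrae sits at 10 kpc, and the V ≈ 17-18 mag stars targeted here land in the ~18-28 kpc range marked by the vertical lines in the histogram.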
NYUDataBootcamp/Projects
UG_S16/Breitstone-Patel-NLTK.ipynb
mit
import nltk #nltk.download() """ Explanation: Estimating Sentiment Orientation with SKLearn Jason Brietstone jb4562@nyu.edu & Amar Patel acp455@stern.nyu.edu Natural language processing is a booming field in the finance industry because of the massive amounts of user-generated data that have recently become available for analysis. Websites such as Twitter, StockTwits and Facebook, if analyzed correctly, can yield very relevant insights for predictions. Natural language processing can be a very daunting task because of how many layers of analysis you can add and the immense size of the English language. NLTK is a Python module and data repository, which serves as a tool for easy language processing. This paper proposes a simple method of deriving either a positive or negative sentiment orientation and highlights a few of the useful tools NLTK offers. Firstly we need to download the NLTK repository, which is achieved by running nltk.download(). You can choose to download all text examples, or download just the movie reviews, which we use in this example, into the corpus folder. End of explanation """
End of explanation """ print(syns[0].definition()) """ Explanation: Here we are going to verify that the we are using the version of the word we think we are using by pulling the definition End of explanation """ #examples print(syns[0].examples()) """ Explanation: We can also test test the word by using the .examples() method which will yield examples of the word in question End of explanation """ synonyms =[] antonyms=[] for syn in wordnet.synsets(test_word): for l in syn.lemmas(): #gather all lemmas of each synonym in the synonym set synonyms.append(l.name()) if l.antonyms(): #gather all antonyms of each lemma of each synonym in the synonym set antonyms.append(l.antonyms()[0].name()) print(set(synonyms)) print(set(antonyms)) """ Explanation: For each word we can create a comprehensive list of all synonyms and antonyms by creating a for loop. End of explanation """ # yields % similarity of the words w1 = wordnet.synset('ship.n.01') # ship , refrences that it is the noun ship, 1st entry of similar words w2 = wordnet.synset('boat.n.01')# boat print('word similarity is',w1.wup_similarity(w2)*100,'%') w1 = wordnet.synset('ship.n.01') # ship , the noun ship, 1st entry of similar word w2 = wordnet.synset('car.n.01')# boat print('word similarity is',w1.wup_similarity(w2)*100,'%') w1 = wordnet.synset('ship.n.01') # ship , the noun ship, 1st entry of similar word w2 = wordnet.synset('cat.n.01')# boat print('word similarity is',w1.wup_similarity(w2)*100,'%') """ Explanation: Wu and Palmer System In the English language, there are multiple different ways of expressing an idea. Very often, people think that by using a synonym, the meaning of the sentence is unchanged. Under many circumstances, this is true, however to the computer a slight word change can make a big difference in the returned list of lemmas and antonyms. 
One method we can use to determine the similarity between to words to make sure any syntax changes we make dont alter the meaning of the word is to use the Wu and Palmer system of determing semantic similarity by calling on the wup_similarity method. End of explanation """ import random from nltk.corpus import movie_reviews #1000 labeled posotive or negative movie reviews documents = [(list(movie_reviews.words(fileid)), category) # list of tuples for features for category in movie_reviews.categories() for fileid in movie_reviews.fileids(category)] #documents[1] # prints the first movie review in tokenized format random.shuffle(documents)# removes bias by not training and testing on the same set all_words = [] for w in movie_reviews.words(): all_words.append(w.lower()) all_words = nltk.FreqDist(all_words)# makes frequency distrubution print(all_words.most_common(10)) # prints(10 most common words , frequency) print("stupid occured ",all_words['stupid'],'times') # prints frequency of stupid """ Explanation: TurnItIn as a use case Many students try to buy essays online. The services that sell those papers often use a form of natrual language processing to change the words with synonyms. The above method could determine if that has been occuring by gauging the similarities to other papers Sentiment Analysis through Tokenization Now we are going to begin to develop our module for determining sentiment. We are going to achieve this by tokenizing the movie reviews in our set. Tokenizing is the process of splitting blocks of text in to list of individual words. By doing this we can determining if the occurence of a particular word can be used as an indicator for posotive or negative sentiment. When doing this, we hope to see results that do NOT use any non-substantial words such as conjuctions (i.e. 'the', 'an', 'or', 'and'). This method does not have to be used only for determining sentiment. 
Other potential use cases could include determining the subject of a block of text or determining the origin of the author by indicators that would represent local slangs. There are 2 main benefits to using this method: 1. If we were to create our own list of positive or negative indicators to test against, we may risk missing out on words that could be impactful 2. We remove a significant amount of statical bias by not assuming a words impact but by judging based on what has already occured End of explanation """ word_features = list(all_words.keys())[:3000] def find_features(document): words = set(document) # converting list to set makes all the words not the amount features ={} for w in word_features: features[w] = (w in words) return features featuresets = [(find_features(rev),category) for (rev, category) in documents] ''' Here we create a testing and training set by arbitrariy splitting up the 3000 words in word_features ''' training_set =featuresets[1900:] testing_set = featuresets[:1900] featuresets[0][1] #here, we reference the dictionary (list of features, pos or neg) to give the second entry """ Explanation: Above we created a list all_words of all the used words, then turned that into a frequency distrubution and printed the top 15 most common words, then we printed how many times the word 'stupid' occured in the list all_words. Notice how these are obviously all conjunctions, common prepositions, and words like "the, that" as well as a hyphen. Below, we are going to create our training set and testing set. First we pull out the keys from our all_words frequency distrubution. Considering all_words is a dictionary of ("the word", Frequency) we will now have each word. Our feature set, which is the raw data modified to show our distingushing feature, has the words that what we are defining in our training and testing set. 
End of explanation """ classifier =nltk.NaiveBayesClassifier.train(training_set) print("original naieve bayes algo accuracy:", nltk.classify.accuracy(classifier,testing_set)*100) classifier.show_most_informative_features(15) """ Explanation: First, we are going to test using a simple NaiveBayesClassifier provided by NLTK. We will have it return the prediction accuracy and the most informative features. We hope to see two things that will demonstrate the efficiency of this: 1. No conjuctions (i.e. and, the, etc.) occur in the most informative feature set 2. High algo accuracy End of explanation """ from sklearn.naive_bayes import MultinomialNB, GaussianNB , BernoulliNB from nltk.classify.scikitlearn import SklearnClassifier from sklearn.linear_model import LogisticRegression,SGDClassifier from sklearn.svm import SVC, LinearSVC, NuSVC from nltk.classify import ClassifierI from statistics import mode class VoteClassifier(ClassifierI): def __init__(self,*classifiers): self._classifiers = classifiers def classify(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) return mode(votes) def confidence(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) choice_votes = votes.count(mode(votes)) conf = choice_votes / len(votes) return conf MNB_classifier = SklearnClassifier(MultinomialNB()) MNB_classifier.train(training_set) print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100) BernoulliNB_classifier = SklearnClassifier(BernoulliNB()) BernoulliNB_classifier.train(training_set) print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100) LogisticRegression_classifier = SklearnClassifier(LogisticRegression()) LogisticRegression_classifier.train(training_set) print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100) 
SGDClassifier_classifier = SklearnClassifier(SGDClassifier()) SGDClassifier_classifier.train(training_set) print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100) #Another possible test, but is known to be largely inaccurate #SVC_classifier = SklearnClassifier(SVC()) #SVC_classifier.train(training_set) #print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100) LinearSVC_classifier = SklearnClassifier(LinearSVC()) LinearSVC_classifier.train(training_set) print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100) NuSVC_classifier = SklearnClassifier(NuSVC()) NuSVC_classifier.train(training_set) print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100) """ Explanation: As you can see above, the Naive Bayes classifier is not a very accurate algorithm. To increase accuracy, we are going to try to use as many different classifier methods as possible to test the data on. From there we are going to define a function that creates a voting system where each classifier votes. The outcome is the majority of the votes; for example, if 5 of 7 classifiers say positive, we will vote positive. We are also going to print the classification and the confidence of the algo. End of explanation """ voted_classifier = VoteClassifier(classifier, NuSVC_classifier, LinearSVC_classifier, SGDClassifier_classifier, MNB_classifier, BernoulliNB_classifier, LogisticRegression_classifier) print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100) #Example Classifications of the first two tests print("Classification 0:", voted_classifier.classify(testing_set[0][0]), "Confidence %:",voted_classifier.confidence(testing_set[0][0])*100) print("Classification 1:", voted_classifier.classify(testing_set[1][0]), "Confidence %:",voted_classifier.confidence(testing_set[1][0])*100) """ Explanation: Above we printed the accuracy for all the classifiers and their respective performance. Despite their individual accuracies, it is important to note that they were generally in agreement. Below, we are printing the confidence, which is calculated from the number of classifiers in agreement. This smooths over cases where an individual classifier was lacking in its ability to predict a certain scenario. However, it will more heavily weight any individual machine learning function that may have been more accurate than even the majority. Accuracy optimization depends on the data set in question. To optimize, you must try running all individual classifiers, and then selectively remove those that failed to meet sufficient accuracy. For further investigation, one could leverage a data set that is structured in a way that pos or neg reviews are grouped, in order to test the accuracy for just positive or negative reviews. You may find that some are biased towards one side and can be removed for better accuracy. Obviously, regardless of any investigation you make, you will also likely be able to increase the accuracy and applicability of the algorithm by using a larger training set.
Remember, in practice, using a larger training set will increase the time and processing power necessary for completion, which is relevant if the speed of execution is important. End of explanation """ def make_confidence(number_of_tests): confidence = [] for x in range (0,number_of_tests): confidence.append(voted_classifier.confidence(testing_set[x][0])*100) return confidence import matplotlib.pyplot as plt import pandas as pd y = make_confidence(1000) #use all 1000 tests x = range(0, len(y)) #create a dictionary to sort data count_data = dict((i, y.count(i)) for i in y) """ Explanation: Now, we will find out how confident each of these tests were: End of explanation """ %matplotlib inline from decimal import Decimal import collections #Sort the Dictionary od = collections.OrderedDict(sorted(count_data.items())) plt.style.use('fivethirtyeight') plt.bar(range(len(od)), od.values(), align='center') plt.xticks(range(len(od)), [float(Decimal("%.2f" % key)) for key in od.keys()]) plt.show() """ Explanation: Now, we can figure out the distribution of confidence, and what the average and standard deviation are, to get an idea of its true accuracy End of explanation """ labels = ['4/7','5/7','6/7','7/7'] plt.pie(list(count_data.values()), labels = labels, autopct='%1.1f%%') plt.show() """ Explanation: This data is interesting.
First, the x labels show four possible options. That is because the voting process decides on a simple majority, which in this case is best out of seven: 57.14% represents 4/7, 71.43% is 5/7, 85.71% is 6/7 and 100.0% is 7/7 End of explanation """ import numpy as np mean = np.mean(y, dtype=np.float64) print("The mean confidence level is: ", mean) stdev = np.std(y, dtype = np.float64) print("The standard deviation of the confidence is: ", stdev) #Linear Regression import statsmodels.api as sm from statsmodels import regression """ Explanation: This shows us that only a small fraction of the time do all 7 tests agree, while the largest share of the time only 5/7 tests agree. This points to the limitations of the tests we use in this example End of explanation """ # Let's define everything in familiar regression terms X = x Y = y def linreg(x,y): # We add a constant so that we can also fit an intercept (alpha) to the model x = sm.add_constant(x) model = regression.linear_model.OLS(y,x).fit() # Remove the constant now that we're done x = x[:, 1] #print(model.params) return model.params[0], model.params[1] #alpha and beta alpha, beta = linreg(X,Y) print ('alpha: ' + str(alpha)) print ('beta: ' + str(beta)) X2 = np.linspace(X.start, X.stop) Y_hat = X2 * beta + alpha plt.scatter(X, Y, alpha=0.25) # Plot the raw data plt.xlabel("Number of Tests", fontsize = "16") plt.ylabel("Confidence", fontsize = "16") plt.axis('tight') # Add the regression line, colored in red plt.plot(X2, Y_hat, 'r', alpha=0.9); """ Explanation: Factor models are a way of explaining the results via a linear combination of its inherent alpha as well as exposure to other indicators. The general form of a factor model is $$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n$$ End of explanation """
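The single-factor fit above uses statsmodels OLS; the same α and β can be recovered with a closed-form least-squares fit in plain NumPy (an illustrative sketch on synthetic data, not part of the original notebook):

```python
import numpy as np

def fit_alpha_beta(x, y):
    """Ordinary least squares for y = alpha + beta * x."""
    A = np.column_stack([np.ones(len(x)), x])       # design matrix with intercept
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

x = np.arange(10, dtype=float)
alpha, beta = fit_alpha_beta(x, 1.0 + 2.0 * x)       # exact line: alpha=1, beta=2
```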
antoniomezzacapo/qiskit-tutorial
community/teach_me_qiskit_2018/e91_qkd/e91_quantum_key_distribution_protocol.ipynb
apache-2.0
# useful additional packages import numpy as np import random # regular expressions module import re # importing the QISKit from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer # import basic plot tools from qiskit.tools.visualization import circuit_drawer, plot_histogram """ Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left"> E91 quantum key distribution protocol Contributors Andrey Kardashin Introduction Suppose that Alice wants to send a message to Bob. In order to protect the information in the message from the eavesdropper Eve, it must be encrypted. Encryption is the process of encoding the plaintext into ciphertext. The strength of encryption, that is, the property to resist decryption, is determined by its algorithm. Any encryption algorithm is based on the use of a key. In order to generate the ciphertext, the one-time pad technique is usually used. The idea of this technique is to apply the exclusive or (XOR) $\oplus$ operation to bits of the plaintext and bits of the key to obtain the ciphertext. Thus, if $m=(m_1 \ldots m_n)$, $c=(c_1 \ldots c_n)$ and $k=(k_1 \ldots k_n)$ are binary strings of plaintext, ciphertext and key respectively, then the encryption is defined as $c_i=m_i \oplus k_i$, and decryption as $m_i=c_i \oplus k_i$. The one-time pad method is proved to be be absolutely secure. Thus, if Eve intercepted the ciphertext $c$, she will not get any information from the message $m$ until she has the key $k$. The main problem of modern cryptographic systems is the distribution among the participants of the communication session of a secret key, possession of which should not be available to third parties. The rapidly developing methods of quantum key distribution can solve this problem regardless of the capabilities of the eavesdropper. 
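The one-time pad described above is easy to state in code. A minimal bit-string sketch (illustrative only, not part of the original notebook):

```python
def xor_bits(bits, key):
    """One-time pad: c_i = m_i XOR k_i; applying the same key again decrypts."""
    assert len(bits) == len(key)
    return [b ^ k for b, k in zip(bits, key)]

m = [1, 0, 1, 1, 0]           # plaintext bits
k = [0, 1, 1, 0, 1]           # secret key bits
c = xor_bits(m, k)            # ciphertext
assert xor_bits(c, k) == m    # decryption recovers the plaintext
```

Because XOR is its own inverse, encryption and decryption are the same operation — the entire security burden falls on distributing the key k, which is exactly what E91 addresses.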
In this tutorial, we show how Alice and Bob can generate a secret key using the E91 quantum key distribution protocol. Quantum entanglement The E91 protocol developed by Artur Ekert in 1991 is based on the use of entangled states and Bell's theorem (see Entanglement Revisited QISKit tutorial). It is known that two electrons A and B can be prepared in such a state that they can not be considered separately from each other. One of these states is the singlet state $$\lvert\psi_s\rangle = \frac{1}{\sqrt{2}}(\lvert0\rangle_A\otimes\lvert1\rangle_B - \lvert1\rangle_A\otimes\lvert0\rangle_B) = \frac{1}{\sqrt{2}}(\lvert01\rangle - \lvert10\rangle),$$ where the vectors $\lvert 0 \rangle$ and $\lvert 1 \rangle$ describe the states of each electron with the [spin](https://en.wikipedia.org/wiki/Spin_(physics%29) projection along the positive and negative direction of the z axis. The observable of the projection of the spin onto the direction $\vec{n}=(n_x, n_y, n_z)$ is given by $$\vec{n} \cdot \vec{\sigma} = n_x X + n_y Y + n_z Z,$$ where $\vec{\sigma} = (X, Y, Z)$ and $X, Y, Z$ are the Pauli matrices. For two qubits A and B, the observable $(\vec{a} \cdot \vec{\sigma})_A \otimes (\vec{b} \cdot \vec{\sigma})_B$ describes the joint measurement of the spin projections onto the directions $\vec{a}$ and $\vec{b}$. It can be shown that the expectation value of this observable in the singlet state is $$\langle (\vec{a} \cdot \vec{\sigma})A \otimes (\vec{b} \cdot \vec{\sigma})_B \rangle{\psi_s} = -\vec{a} \cdot \vec{b}. \qquad\qquad (1)$$ Here we see an interesting fact: if Alice and Bob measure the spin projections of electrons A and B onto the same direction, they will obtain the opposite results. Thus, if Alice got the result $\pm 1$, then Bob definitely will get the result $\mp 1$, i.e. the results will be perfectly anticorrelated. CHSH inequality In the framework of classical physics, it is impossible to create a correlation inherent in the singlet state $\lvert\psi_s\rangle$. 
Indeed, let us measure the observables $X$, $Z$ for qubit A and observables $W = \frac{1}{\sqrt{2}} (X + Z)$, $V = \frac{1}{\sqrt{2}} (-X + Z)$ for qubit B. Performing joint measurements of these observables, the following expectation values can be obtained: \begin{eqnarray} \langle X \otimes W \rangle_{\psi_s} &= -\frac{1}{\sqrt{2}}, \quad \langle X \otimes V \rangle_{\psi_s} &= \frac{1}{\sqrt{2}}, \qquad\qquad (2) \ \langle Z \otimes W \rangle_{\psi_s} &= -\frac{1}{\sqrt{2}}, \quad \langle Z \otimes V \rangle_{\psi_s} &= -\frac{1}{\sqrt{2}}. \end{eqnarray} Now we can construct the Clauser-Horne-Shimony-Holt (CHSH) correlation value: $$C = \langle X\otimes W \rangle - \langle X \otimes V \rangle + \langle Z \otimes W \rangle + \langle Z \otimes V \rangle = -2 \sqrt{2}. \qquad\qquad (3)$$ The local hidden variable theory, which was developed in particular to explain the quantum correlations, gives that $\lvert C \rvert \leqslant 2$. But Bell's theorem states that "no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics." Thus, the violation of the CHSH inequality (i.e. $C = -2 \sqrt{2}$ for the singlet state), which is a generalized form of Bell's inequality, can serve as an indicator of quantum entanglement. This fact finds its application in the E91 protocol. The protocol To implement the E91 quantum key distribution protocol, there must be a source of qubits prepared in the singlet state. It does not matter to whom this source belongs: to Alice, to Bob, to some trusted third party Charlie, or even to Eve. The steps of the E91 protocol are the following. Charlie, the owner of the singlet state preparation device, creates $N$ entangled states $\lvert\psi_s\rangle$ and sends qubits A to Alice and qubits B to Bob via the quantum channel. Participants Alice and Bob generate strings $b=(b_1 \ldots b_N)$ and $b^{'}=(b_1^{'} \ldots b_N^{'})$, where $b_i, b^{'}_j = 1, 2, 3$.
Depending on the elements of these strings, Alice and Bob measure the spin projections of their qubits along the following directions: \begin{align} b_i = 1: \quad \vec{a}_1 &= (1,0,0) \quad (X \text{ observable}) & b_j^{'} = 1: \quad \vec{b}_1 &= \left(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right) \quad (W \text{ observable}) \ b_i = 2: \quad \vec{a}_2 &= \left(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right) \quad (W \text{ observable}) & b_j^{'} = 2: \quad \vec{b}_2 &= (0,0,1) \quad ( \text{Z observable}) \ b_i = 3: \quad \vec{a}_3 &= (0,0,1) \quad (Z \text{ observable}) & b_j^{'} = 3: \quad \vec{b}_3 &= \left(-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right) \quad (V \text{ observable}) \end{align} <img src="images/vectors.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="center"> We can describe this process as a measurement of the observables $(\vec{a}_i \cdot \vec{\sigma})_A \otimes (\vec{b}_j \cdot \vec{\sigma})_B$ for each singlet state created by Charlie. Alice and Bob record the results of their measurements as elements of strings $a=(a_1 \ldots a_N)$ and $a^{'} =(a_1^{'} \ldots a_N^{'})$ respectively, where $a_i, a^{'}_j = \pm 1$. Using the classical channel, participants compare their strings $b=(b_1 \ldots b_N)$ and $b^{'}=(b_1^{'} \ldots b_N^{'})$. In other words, Alice and Bob tell each other which measurements they have performed during the step 2. If Alice and Bob have measured the spin projections of the $m$-th entangled pair of qubits onto the same direction (i.e. $\vec{a}_2/\vec{b}_1$ or $\vec{a}_3/\vec{b}_2$ for Alice's and Bob's qubit respectively), then they are sure that they obtained opposite results, i.e. $a_m = - a_m^{'}$ (see Eq. (1)). Thus, for the $l$-th bit of the key strings $k=(k_1 \ldots k_n),k^{'}=(k_1^{'} \ldots k_n^{'})$ Alice and Bob can write $k_l = a_m, k_l^{'} = -a_m^{'}$. 
Using the results obtained after measuring the spin projections onto the $\vec{a}_1/\vec{b}_1$, $\vec{a}_1/\vec{b}_3$, $\vec{a}_3/\vec{b}_1$ and $\vec{a}_3/\vec{b}_3$ directions (observables $(2)$), Alice and Bob calculate the CHSH correlation value $(3)$. If $C = -2\sqrt{2}$, then Alice and Bob can be sure that the states they had been receiving from Charlie were indeed entangled. This fact tells the participants that there was no interference in the quantum channel.
Simulation
In this section we simulate the E91 quantum key distribution protocol without the presence of an eavesdropper.
End of explanation
"""

# Creating registers
qr = QuantumRegister(2, name="qr")
cr = ClassicalRegister(4, name="cr")

"""
Explanation: Step one: creating the singlets
In the first step Alice and Bob receive their qubits of the singlet states $\lvert\psi_s\rangle$ created by Charlie. 
For our simulation, we need registers with two quantum bits and four classical bits.
End of explanation
"""

singlet = QuantumCircuit(qr, cr, name='singlet')
singlet.x(qr[0])
singlet.x(qr[1])
singlet.h(qr[0])
singlet.cx(qr[0],qr[1])

"""
Explanation: Let us assume that qubits qr[0] and qr[1] belong to Alice and Bob respectively. In classical bits cr[0] and cr[1] Alice and Bob store their measurement results, and classical bits cr[2] and cr[3] are used by Eve to store her measurement results of Alice's and Bob's qubits.
Now Charlie creates a singlet state: End of explanation """ ## Alice's measurement circuits # measure the spin projection of Alice's qubit onto the a_1 direction (X basis) measureA1 = QuantumCircuit(qr, cr, name='measureA1') measureA1.h(qr[0]) measureA1.measure(qr[0],cr[0]) # measure the spin projection of Alice's qubit onto the a_2 direction (W basis) measureA2 = QuantumCircuit(qr, cr, name='measureA2') measureA2.s(qr[0]) measureA2.h(qr[0]) measureA2.t(qr[0]) measureA2.h(qr[0]) measureA2.measure(qr[0],cr[0]) # measure the spin projection of Alice's qubit onto the a_3 direction (standard Z basis) measureA3 = QuantumCircuit(qr, cr, name='measureA3') measureA3.measure(qr[0],cr[0]) ## Bob's measurement circuits # measure the spin projection of Bob's qubit onto the b_1 direction (W basis) measureB1 = QuantumCircuit(qr, cr, name='measureB1') measureB1.s(qr[1]) measureB1.h(qr[1]) measureB1.t(qr[1]) measureB1.h(qr[1]) measureB1.measure(qr[1],cr[1]) # measure the spin projection of Bob's qubit onto the b_2 direction (standard Z basis) measureB2 = QuantumCircuit(qr, cr, name='measureB2') measureB2.measure(qr[1],cr[1]) # measure the spin projection of Bob's qubit onto the b_3 direction (V basis) measureB3 = QuantumCircuit(qr, cr, name='measureB3') measureB3.s(qr[1]) measureB3.h(qr[1]) measureB3.tdg(qr[1]) measureB3.h(qr[1]) measureB3.measure(qr[1],cr[1]) ## Lists of measurement circuits aliceMeasurements = [measureA1, measureA2, measureA3] bobMeasurements = [measureB1, measureB2, measureB3] """ Explanation: Qubits qr[0] and qr[1] are now entangled. After creating a singlet state, Charlie sends qubit qr[0] to Alice and qubit qr[1] to Bob. Step two: measuring First let us prepare the measurements which will be used by Alice and Bob. We define $A(\vec{a}_i) = \vec{a}_i \cdot \vec{\sigma}$ and $B(\vec{b}_j) = \vec{b}_j \cdot \vec{\sigma}$ as the spin projection observables used by Alice and Bob for their measurements. 
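As a quick numerical aside (plain Python, separate from the Qiskit code): for the singlet state the joint expectation value is known to be $\langle A(\vec{a}) \otimes B(\vec{b}) \rangle = -\vec{a} \cdot \vec{b}$. Taking that standard result as given, we can check the expectation values $(2)$ and the CHSH value $(3)$ for the directions defined above:

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def singlet_expectation(a, b):
    """<A(a) x B(b)> = -a.b for the singlet state |psi_s>."""
    return -dot(a, b)

s = 1 / math.sqrt(2)
x = (1, 0, 0)    # a_1 (X observable)
z = (0, 0, 1)    # a_3 (Z observable)
w = (s, 0, s)    # b_1 (W observable)
v = (-s, 0, s)   # b_3 (V observable)

# Eq. (3): C = <XW> - <XV> + <ZW> + <ZV>
chsh = (singlet_expectation(x, w) - singlet_expectation(x, v)
        + singlet_expectation(z, w) + singlet_expectation(z, v))
```

Each term reproduces Eq. $(2)$, and the sum comes out to $-2\sqrt{2}$ as in Eq. $(3)$.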
To perform these measurements, the standard basis $Z$ must be rotated to the proper basis when it is needed (see the Superposition and Entanglement and Bell Tests user guides). Here we define the notation of the possible measurements of Alice and Bob:
Blocks on the left side can be considered as detectors used by the participants to measure the $X, W, Z$ and $V$ observables. Now we prepare the corresponding circuits.
End of explanation
"""

# Define the number of singlets N
numberOfSinglets = 500

"""
Explanation: Suppose Alice and Bob want to generate a secret key using $N$ singlet states prepared by Charlie.
End of explanation
"""

aliceMeasurementChoices = [random.randint(1, 3) for i in range(numberOfSinglets)] # string b of Alice
bobMeasurementChoices = [random.randint(1, 3) for i in range(numberOfSinglets)] # string b' of Bob

"""
Explanation: The participants must choose the directions onto which they will measure the spin projections of their qubits. To do this, Alice and Bob create the strings $b$ and $b^{'}$ with randomly generated elements.
End of explanation
"""

circuits = [] # the list in which the created circuits will be stored

for i in range(numberOfSinglets):

    # create the name of the i-th circuit depending on Alice's and Bob's measurement choices
    circuitName = str(i) + ':A' + str(aliceMeasurementChoices[i]) + '_B' + str(bobMeasurementChoices[i])

    # create the joint measurement circuit:
    # add Alice's and Bob's measurement circuits to the singlet state circuit
    circuit = singlet + aliceMeasurements[aliceMeasurementChoices[i]-1] + bobMeasurements[bobMeasurementChoices[i]-1]
    circuit.name = circuitName # keep the descriptive name instead of overwriting the variable that held it

    # add the created circuit to the circuits list
    circuits.append(circuit)

"""
Explanation: Now we combine Charlie's device and Alice's and Bob's detectors into one circuit (singlet + Alice's measurement + Bob's measurement).
End of explanation """ print(circuits[0].name) """ Explanation: Let us look at the name of one of the prepared circuits. End of explanation """ backend=Aer.get_backend('qasm_simulator') result = execute(circuits, backend=backend, shots=1).result() print(result) """ Explanation: It tells us about the number of the singlet state received from Charlie, and the measurements applied by Alice and Bob. In the circuits list we have stored $N$ (numberOfSinglets) circuits similar to those shown in the figure below. The idea is to model every act of the creation of the singlet state, the distribution of its qubits among the participants and the measurement of the spin projection onto the chosen direction in the E91 protocol by executing each circuit from the circuits list with one shot. Step three: recording the results First let us execute the circuits on the simulator. End of explanation """ result.get_counts(circuits[0]) plot_histogram(result.get_counts(circuits[0])) """ Explanation: Look at the output of the execution of the first circuit. End of explanation """ abPatterns = [ re.compile('..00$'), # search for the '..00' output (Alice obtained -1 and Bob obtained -1) re.compile('..01$'), # search for the '..01' output re.compile('..10$'), # search for the '..10' output (Alice obtained -1 and Bob obtained 1) re.compile('..11$') # search for the '..11' output ] """ Explanation: It consists of four digits. Recall that Alice and Bob store the results of the measurement in classical bits cr[0] and cr[1] (two digits on the right). Since we model the secret key generation process without the presence of an eavesdropper, the classical bits cr[2] and cr[3] are always 0. Also note that the output is the Python dictionary, in which the keys are the obtained results, and the values are the counts. Alice and Bob record the results of their measurements as bits of the strings $a$ and $a^{'}$. To simulate this process we need to use regular expressions module re. 
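For instance, the kind of matching we are about to set up can be sketched on a sample four-digit output string (the variable names here are just for illustration):

```python
import re

# Bit order in the output string: cr[3] cr[2] cr[1] cr[0].
# The two rightmost digits are Bob's (cr[1]) and Alice's (cr[0]) results.
sample = '0010'

# '..10$' means: any two leading digits, then cr[1] = 1 and cr[0] = 0
pattern = re.compile('..10$')

match = pattern.search(sample) is not None
```

Here `sample` ends in `'10'`, so the `'..10$'` pattern matches while, say, `'..01$'` would not.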
First, we compile the search patterns.
End of explanation
"""

aliceResults = [] # Alice's results (string a)
bobResults = [] # Bob's results (string a')

for i in range(numberOfSinglets):

    res = list(result.get_counts(circuits[i]).keys())[0] # extract the key from the dict and transform it to str; execution result of the i-th circuit
    
    if abPatterns[0].search(res): # check if the key is '..00' (if the measurement results are -1,-1)
        aliceResults.append(-1) # Alice got the result -1
        bobResults.append(-1) # Bob got the result -1
    if abPatterns[1].search(res):
        aliceResults.append(1)
        bobResults.append(-1)
    if abPatterns[2].search(res): # check if the key is '..10' (if the measurement results are -1,1)
        aliceResults.append(-1) # Alice got the result -1
        bobResults.append(1) # Bob got the result 1
    if abPatterns[3].search(res):
        aliceResults.append(1)
        bobResults.append(1)

"""
Explanation: Using these patterns, we can find particular results in the outputs and fill the strings $a$ and $a^{'}$ with the results of Alice's and Bob's measurements.
End of explanation
"""

aliceKey = [] # Alice's key string k
bobKey = [] # Bob's key string k'

# comparing the strings with measurement choices
for i in range(numberOfSinglets):

    # if Alice and Bob have measured the spin projections onto the a_2/b_1 or a_3/b_2 directions
    if (aliceMeasurementChoices[i] == 2 and bobMeasurementChoices[i] == 1) or (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 2):
        aliceKey.append(aliceResults[i]) # record the i-th result obtained by Alice as a bit of the secret key k
        bobKey.append(- bobResults[i]) # record the i-th result obtained by Bob, multiplied by -1, as a bit of the secret key k'
        
keyLength = len(aliceKey) # length of the secret key

"""
Explanation: Step four: revealing the bases
In the previous step we have stored the measurement results of Alice and Bob in the aliceResults and bobResults lists (strings $a$ and $a^{'}$).
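Before the comparison, it is worth a quick sanity check on how many rounds we expect to survive the sifting: with $3 \times 3$ equally likely basis pairs, only the two combinations $\vec{a}_2/\vec{b}_1$ and $\vec{a}_3/\vec{b}_2$ yield key bits. A tiny plain-Python check:

```python
from fractions import Fraction
from itertools import product

key_pairs = {(2, 1), (3, 2)}             # basis combinations that yield key bits
all_pairs = list(product([1, 2, 3], repeat=2))

# fraction of equally likely basis pairs that produce a key bit
expected_ratio = Fraction(len([p for p in all_pairs if p in key_pairs]),
                          len(all_pairs))
```

So on average only $2/9$ of the $N$ singlets contribute a key bit, which we will see again in the output below.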
Now the participants compare their strings $b$ and $b^{'}$ via the public classical channel. If Alice and Bob have measured the spin projections of their qubits of the $i$-th singlet onto the same direction, then Alice records the result $a_i$ as the bit of the string $k$, and Bob records the result $-a_i$ as the bit of the string $k^{'}$ (see Eq. (1)).
End of explanation
"""

abKeyMismatches = 0 # number of mismatching bits in Alice's and Bob's keys

for j in range(keyLength):
    if aliceKey[j] != bobKey[j]:
        abKeyMismatches += 1

"""
Explanation: The keys $k$ and $k'$ are now stored in the aliceKey and bobKey lists, respectively. The remaining results which were not used to create the keys can now be revealed.
It is important for Alice and Bob to have the same keys, i.e. strings $k$ and $k^{'}$ must be equal. Let us compare the bits of strings $k$ and $k^{'}$ and find out how many mismatches there are in the keys.
End of explanation
"""

# function that calculates the CHSH correlation value

def chsh_corr(result):
    
    # lists with the counts of measurement results
    # each element represents the number of (-1,-1), (-1,1), (1,-1) and (1,1) results respectively
    countA1B1 = [0, 0, 0, 0] # XW observable
    countA1B3 = [0, 0, 0, 0] # XV observable
    countA3B1 = [0, 0, 0, 0] # ZW observable
    countA3B3 = [0, 0, 0, 0] # ZV observable

    for i in range(numberOfSinglets):

        res = list(result.get_counts(circuits[i]).keys())[0]

        # if the spins of the qubits of the i-th singlet were projected onto the a_1/b_1 directions
        if (aliceMeasurementChoices[i] == 1 and bobMeasurementChoices[i] == 1):
            for j in range(4):
                if abPatterns[j].search(res):
                    countA1B1[j] += 1

        if (aliceMeasurementChoices[i] == 1 and bobMeasurementChoices[i] == 3):
            for j in range(4):
                if abPatterns[j].search(res):
                    countA1B3[j] += 1

        if (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 1):
            for j in range(4):
                if abPatterns[j].search(res):
                    countA3B1[j] += 1
            
        # if the spins of the qubits of the i-th singlet were projected onto the a_3/b_3
directions
        if (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 3):
            for j in range(4):
                if abPatterns[j].search(res):
                    countA3B3[j] += 1
                    
    # number of the results obtained from the measurements in a particular basis
    total11 = sum(countA1B1)
    total13 = sum(countA1B3)
    total31 = sum(countA3B1)
    total33 = sum(countA3B3)
                    
    # expectation values of XW, XV, ZW and ZV observables (2)
    expect11 = (countA1B1[0] - countA1B1[1] - countA1B1[2] + countA1B1[3])/total11 # -1/sqrt(2)
    expect13 = (countA1B3[0] - countA1B3[1] - countA1B3[2] + countA1B3[3])/total13 # 1/sqrt(2)
    expect31 = (countA3B1[0] - countA3B1[1] - countA3B1[2] + countA3B1[3])/total31 # -1/sqrt(2)
    expect33 = (countA3B3[0] - countA3B3[1] - countA3B3[2] + countA3B3[3])/total33 # -1/sqrt(2)
    
    corr = expect11 - expect13 + expect31 + expect33 # calculate the CHSH correlation value (3)
    
    return corr

"""
Explanation: Note that since the strings $k$ and $k^{'}$ are secret, Alice and Bob have no information about mismatches in the bits of their keys. To find out the number of errors, the participants can perform a random sampling test. Alice randomly selects $\delta$ bits of her secret key and tells Bob which bits she selected. Then Alice and Bob compare the values of these check bits. For large enough $\delta$ the number of errors in the check bits will be close to the number of errors in the remaining bits.
Step five: CHSH correlation value test
Alice and Bob want to be sure that there was no interference in the communication session. To do that, they calculate the CHSH correlation value $(3)$ using the results obtained after the measurements of spin projections onto the $\vec{a}_1/\vec{b}_1$, $\vec{a}_1/\vec{b}_3$, $\vec{a}_3/\vec{b}_1$ and $\vec{a}_3/\vec{b}_3$ directions. Recall that it is equivalent to the measurement of the observables $X \otimes W$, $X \otimes V$, $Z \otimes W$ and $Z \otimes V$ respectively.
According to the Born-von Neumann statistical postulate, the expectation value of the observable $E = \sum_j e_j \lvert e_j \rangle \langle e_j \rvert$ in the state $\lvert \psi \rangle$ is given by
$$\langle E \rangle_\psi = \mathrm{Tr}\, \lvert\psi\rangle \langle\psi\rvert \, E = \mathrm{Tr}\, \lvert\psi\rangle \langle\psi\rvert \sum_j e_j \lvert e_j \rangle \langle e_j \rvert = \sum_j \langle\psi\rvert \left( e_j \lvert e_j \rangle \langle e_j \rvert \right) \lvert\psi\rangle = \sum_j e_j \left|\langle\psi\vert e_j \rangle \right|^2 = \sum_j e_j \mathrm{P}_\psi (E \models e_j),$$
where $\lvert e_j \rangle$ is the eigenvector of $E$ with the corresponding eigenvalue $e_j$, and $\mathrm{P}_\psi (E \models e_j)$ is the probability of obtaining the result $e_j$ after measuring the observable $E$ in the state $\lvert \psi \rangle$.
A similar expression can be written for the joint measurement of the observables $A$ and $B$:
$$\langle A \otimes B \rangle_\psi = \sum_{j,k} a_j b_k \mathrm{P}_\psi (A \models a_j, B \models b_k) = \sum_{j,k} a_j b_k \mathrm{P}_\psi (a_j, b_k). \qquad\qquad (4)$$
Note that if $A$ and $B$ are the spin projection observables, then the corresponding eigenvalues are $a_j, b_k = \pm 1$. Thus, for the observables $A(\vec{a}_i)$ and $B(\vec{b}_j)$ and the singlet state $\lvert\psi_s\rangle$ we can rewrite $(4)$ as
$$\langle A(\vec{a}_i) \otimes B(\vec{b}_j) \rangle = \mathrm{P}(-1,-1) - \mathrm{P}(1,-1) - \mathrm{P}(-1,1) + \mathrm{P}(1,1). \qquad\qquad (5)$$
In our experiments, the probabilities on the right side can be calculated as follows:
$$\mathrm{P}(a_j, b_k) = \frac{n_{a_j, b_k}(A \otimes B)}{N(A \otimes B)}, \qquad\qquad (6)$$
where the numerator is the number of results $a_j, b_k$ obtained after measuring the observable $A \otimes B$, and the denominator is the total number of measurements of the observable $A \otimes B$.
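A minimal plain-Python sketch of Eqs. $(5)$ and $(6)$, estimating an expectation value from the four outcome counts, ordered here as $(-1,-1)$, $(-1,+1)$, $(+1,-1)$, $(+1,+1)$; this helper is illustrative, not the notebook's own chsh_corr function:

```python
def expectation_from_counts(counts):
    """Estimate <A x B> from the counts of the four outcome pairs,
    ordered as (-1,-1), (-1,+1), (+1,-1), (+1,+1)."""
    n_mm, n_mp, n_pm, n_pp = counts
    total = n_mm + n_mp + n_pm + n_pp
    # Eq. (5) with probabilities from Eq. (6):
    # P(-1,-1) - P(-1,+1) - P(+1,-1) + P(+1,+1)
    return (n_mm - n_mp - n_pm + n_pp) / total

# a perfectly correlated sample: only (-1,-1) and (+1,+1) ever occur
e = expectation_from_counts([50, 0, 0, 50])
```

For perfectly correlated outcomes the estimate is $+1$, for perfectly anticorrelated outcomes $-1$, exactly as the eigenvalue products demand.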
Since Alice and Bob revealed their strings $b$ and $b^{'}$, they know what measurements they performed and what results they have obtained. With this data, the participants calculate the expectation values $(2)$ using $(5)$ and $(6)$.
End of explanation
"""

corr = chsh_corr(result) # CHSH correlation value

# CHSH inequality test
print('CHSH correlation value: ' + str(round(corr, 3)))

# Keys
print('Length of the key: ' + str(keyLength))
print('Number of mismatching bits: ' + str(abKeyMismatches) + '\n')

"""
Explanation: Output
Now let us print all the interesting values.
End of explanation
"""

# measurement of the spin projection of Alice's qubit onto the a_2 direction (W basis)
measureEA2 = QuantumCircuit(qr, cr, name='measureEA2')
measureEA2.s(qr[0])
measureEA2.h(qr[0])
measureEA2.t(qr[0])
measureEA2.h(qr[0])
measureEA2.measure(qr[0],cr[2])

# measurement of the spin projection of Alice's qubit onto the a_3 direction (standard Z basis)
measureEA3 = QuantumCircuit(qr, cr, name='measureEA3')
measureEA3.measure(qr[0],cr[2])

# measurement of the spin projection of Bob's qubit onto the b_1 direction (W basis)
measureEB1 = QuantumCircuit(qr, cr, name='measureEB1')
measureEB1.s(qr[1])
measureEB1.h(qr[1])
measureEB1.t(qr[1])
measureEB1.h(qr[1])
measureEB1.measure(qr[1],cr[3])

# measurement of the spin projection of Bob's qubit onto the b_2 direction (standard Z measurement)
measureEB2 = QuantumCircuit(qr, cr, name='measureEB2')
measureEB2.measure(qr[1],cr[3])

# lists of measurement circuits
eveMeasurements = [measureEA2, measureEA3, measureEB1, measureEB2]

"""
Explanation: Finally, Alice and Bob have the secret keys $k$ and $k^{'}$ (aliceKey and bobKey)! Now they can use the one-time pad technique to encrypt and decrypt messages.
Since we simulate the E91 protocol without the presence of Eve, the CHSH correlation value should be close to $-2\sqrt{2} \approx -2.828$. In addition, there should be no mismatching bits in the keys of Alice and Bob.
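As a side note, the one-time pad step itself is just a bitwise XOR with the shared key. A toy sketch, converting our $\pm 1$ key symbols to 0/1 bits first (the helper names are hypothetical):

```python
def to_bits(key):
    """Map the +/-1 key symbols to 0/1 bits (+1 -> 0, -1 -> 1)."""
    return [(1 - b) // 2 for b in key]

def xor_otp(message_bits, key_bits):
    """Encrypt (or, applied again, decrypt) by XOR-ing with the key."""
    return [m ^ k for m, k in zip(message_bits, key_bits)]

key = to_bits([1, -1, -1, 1, 1, -1])
message = [1, 0, 1, 1, 0, 0]

cipher = xor_otp(message, key)
decrypted = xor_otp(cipher, key)
```

Because XOR is its own inverse, applying the same key twice recovers the original message, and with a truly random single-use key the ciphertext leaks nothing.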
Note also that there are 9 possible combinations of measurements that can be performed by Alice and Bob, but only 2 of them give the results from which the secret keys can be created. Thus, the ratio of the length of the keys to the number of singlets $N$ should be close to $2/9$.
Simulation of eavesdropping
Suppose some third party wants to interfere in the communication session of Alice and Bob and obtain a secret key. The eavesdropper can use the intercept-resend attack: Eve intercepts one or both of the entangled qubits prepared by Charlie, measures the spin projections of these qubits, prepares new ones depending on the results obtained ($\lvert 01 \rangle$ or $\lvert 10 \rangle$) and sends them to Alice and Bob. A schematic representation of this process is shown in the figure below.
Here $E(\vec{n}_A) = \vec{n}_A \cdot \vec{\sigma}$ and $E(\vec{n}_B) = \vec{n}_B \cdot \vec{\sigma}$ are the observables of the spin projections of Alice's and Bob's qubits onto the directions $\vec{n}_A$ and $\vec{n}_B$. It would be wise for Eve to choose these directions to be $\vec{n}_A = \vec{a}_2,\vec{a}_3$ and $\vec{n}_B = \vec{b}_1,\vec{b}_2$, since the results obtained from other measurements cannot be used to create a secret key.
Let us prepare the circuits for Eve's measurements.
End of explanation
"""

# list of Eve's measurement choices
# the first and the second elements of each row represent the measurement of Alice's and Bob's qubits by Eve respectively
eveMeasurementChoices = []

for j in range(numberOfSinglets):
    if random.uniform(0, 1) <= 0.5: # in 50% of cases perform the WW measurement
        eveMeasurementChoices.append([0, 2])
    else: # in 50% of cases perform the ZZ measurement
        eveMeasurementChoices.append([1, 3])

"""
Explanation: Like Alice and Bob, Eve must choose the directions onto which she will measure the spin projections of the qubits. In our simulation, the eavesdropper randomly chooses one of the observables $W \otimes W$ or $Z \otimes Z$ to measure.
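Before running the simulation, we can predict analytically what this attack should do to the CHSH value. Assuming the ideal intercept-resend model, if Eve measures both qubits along the same direction $\vec{n}$ (chosen as the W or Z direction with probability $1/2$) and resends the corresponding product eigenstates, the Alice-Bob correlation becomes $\langle A(\vec{a}) \otimes B(\vec{b}) \rangle = -(\vec{n} \cdot \vec{a})(\vec{n} \cdot \vec{b})$, averaged over Eve's choice. A plain-Python sketch of this assumed model:

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def eve_expectation(a, b, eve_dirs):
    """<A(a) x B(b)> after an intercept-resend attack in which Eve measures
    both qubits along a direction n drawn uniformly from eve_dirs and
    resends the (anticorrelated) product eigenstates."""
    return sum(-dot(n, a) * dot(n, b) for n in eve_dirs) / len(eve_dirs)

s = 1 / math.sqrt(2)
x, z = (1, 0, 0), (0, 0, 1)
w, v = (s, 0, s), (-s, 0, s)

eve_dirs = [w, z]   # Eve alternates between the W and Z directions

chsh_attacked = (eve_expectation(x, w, eve_dirs) - eve_expectation(x, v, eve_dirs)
                 + eve_expectation(z, w, eve_dirs) + eve_expectation(z, v, eve_dirs))
```

Under this idealized model the CHSH value drops from $-2\sqrt{2}$ to $-\sqrt{2}$, comfortably inside the classical bound $\lvert C \rvert \leqslant 2$, which is exactly how Alice and Bob detect the eavesdropper.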
End of explanation
"""

circuits = [] # the list in which the created circuits will be stored

for j in range(numberOfSinglets):

    # create the name of the j-th circuit depending on Alice's, Bob's and Eve's choices of measurement
    circuitName = str(j) + ':A' + str(aliceMeasurementChoices[j]) + '_B' + str(bobMeasurementChoices[j]) + '_E' + str(eveMeasurementChoices[j][0]) + str(eveMeasurementChoices[j][1])

    # create the joint measurement circuit:
    # Eve's measurements of Alice's and Bob's qubits come first, then Alice's and Bob's own measurements
    # (eveMeasurementChoices already stores 0-based indices into eveMeasurements)
    circuit = singlet + eveMeasurements[eveMeasurementChoices[j][0]] + eveMeasurements[eveMeasurementChoices[j][1]] + aliceMeasurements[aliceMeasurementChoices[j]-1] + bobMeasurements[bobMeasurementChoices[j]-1]
    circuit.name = circuitName

    # add the created circuit to the circuits list
    circuits.append(circuit)

"""
Explanation: Like we did before, now we create the circuits with singlet states and the detectors of Eve, Alice and Bob.
End of explanation
"""

backend=Aer.get_backend('qasm_simulator')
result = execute(circuits, backend=backend, shots=1).result()
print(result)

"""
Explanation: Now we execute all the prepared circuits on the simulator.
End of explanation
"""

print(str(circuits[0].name) + '\t' + str(result.get_counts(circuits[0])))
plot_histogram(result.get_counts(circuits[0]))

"""
Explanation: Let us look at the name of the first circuit and the output after it is executed.
End of explanation
"""

ePatterns = [
    re.compile('00..$'), # search for the '00..' result (Eve obtained the results -1 and -1 for Alice's and Bob's qubits)
    re.compile('01..$'), # search for the '01..'
result (Eve obtained the results 1 and -1 for Alice's and Bob's qubits) re.compile('10..$'), re.compile('11..$') ] """ Explanation: We can see onto which directions Eve, Alice and Bob measured the spin projections and the results obtained. Recall that the bits cr[2] and cr[3] (two digits on the left) are used by Eve to store the results of her measurements. To extract Eve's results from the outputs, we need to compile new search patterns. End of explanation """ aliceResults = [] # Alice's results (string a) bobResults = [] # Bob's results (string a') # list of Eve's measurement results # the elements in the 1-st column are the results obtaned from the measurements of Alice's qubits # the elements in the 2-nd column are the results obtaned from the measurements of Bob's qubits eveResults = [] # recording the measurement results for j in range(numberOfSinglets): res = list(result.get_counts(circuits[j]).keys())[0] # extract a key from the dict and transform it to str # Alice and Bob if abPatterns[0].search(res): # check if the key is '..00' (if the measurement results are -1,-1) aliceResults.append(-1) # Alice got the result -1 bobResults.append(-1) # Bob got the result -1 if abPatterns[1].search(res): aliceResults.append(1) bobResults.append(-1) if abPatterns[2].search(res): # check if the key is '..10' (if the measurement results are -1,1) aliceResults.append(-1) # Alice got the result -1 bobResults.append(1) # Bob got the result 1 if abPatterns[3].search(res): aliceResults.append(1) bobResults.append(1) # Eve if ePatterns[0].search(res): # check if the key is '00..' eveResults.append([-1, -1]) # results of the measurement of Alice's and Bob's qubits are -1,-1 if ePatterns[1].search(res): eveResults.append([1, -1]) if ePatterns[2].search(res): eveResults.append([-1, 1]) if ePatterns[3].search(res): eveResults.append([1, 1]) """ Explanation: Now Eve, Alice and Bob record the results of their measurements. 
End of explanation
"""

aliceKey = [] # Alice's key string k
bobKey = [] # Bob's key string k'
eveKeys = [] # Eve's keys; the 1-st column is the key of Alice, and the 2-nd is the key of Bob

# comparing the strings with measurement choices (b and b')
for j in range(numberOfSinglets):
    
    # if Alice and Bob measured the spin projections onto the a_2/b_1 or a_3/b_2 directions
    if (aliceMeasurementChoices[j] == 2 and bobMeasurementChoices[j] == 1) or (aliceMeasurementChoices[j] == 3 and bobMeasurementChoices[j] == 2):
        aliceKey.append(aliceResults[j]) # record the j-th result obtained by Alice as a bit of the secret key k
        bobKey.append(-bobResults[j]) # record the j-th result obtained by Bob, multiplied by -1, as a bit of the secret key k'
        eveKeys.append([eveResults[j][0], -eveResults[j][1]]) # record the j-th bits of the keys of Eve
        
keyLength = len(aliceKey) # length of the secret key

"""
Explanation: As before, Alice, Bob and Eve create the secret keys using the results obtained after measuring the observables $W \otimes W$ and $Z \otimes Z$.
End of explanation
"""

abKeyMismatches = 0 # number of mismatching bits in the keys of Alice and Bob
eaKeyMismatches = 0 # number of mismatching bits in the keys of Eve and Alice
ebKeyMismatches = 0 # number of mismatching bits in the keys of Eve and Bob

for j in range(keyLength):
    if aliceKey[j] != bobKey[j]:
        abKeyMismatches += 1
    if eveKeys[j][0] != aliceKey[j]:
        eaKeyMismatches += 1
    if eveKeys[j][1] != bobKey[j]:
        ebKeyMismatches += 1

"""
Explanation: To find out the number of mismatching bits in the keys of Alice, Bob and Eve we compare the lists aliceKey, bobKey and eveKeys.
End of explanation
"""

eaKnowledge = (keyLength - eaKeyMismatches)/keyLength # Eve's knowledge of Alice's key
ebKnowledge = (keyLength - ebKeyMismatches)/keyLength # Eve's knowledge of Bob's key

"""
Explanation: It is also good to know what percentage of the keys is known to Eve.
End of explanation
"""

corr = chsh_corr(result)

"""
Explanation: Using the chsh_corr function defined above we calculate the CHSH correlation value.
End of explanation
"""

# CHSH inequality test
print('CHSH correlation value: ' + str(round(corr, 3)) + '\n')

# Keys
print('Length of the key: ' + str(keyLength))
print('Number of mismatching bits: ' + str(abKeyMismatches) + '\n')

print('Eve\'s knowledge of Alice\'s key: ' + str(round(eaKnowledge * 100, 2)) + ' %')
print('Eve\'s knowledge of Bob\'s key: ' + str(round(ebKnowledge * 100, 2)) + ' %')

"""
Explanation: And now we print all the results.
End of explanation
"""
fionapigott/Data-Science-45min-Intros
python-decorators-101/python-decorators-101.ipynb
unlicense
def duplicator(str_arg): """Create a string that is a doubling of the passed-in arg.""" # use the passed arg to create a larger string (double it, with a space between) internal_variable = ' '.join( [str_arg, str_arg] ) return internal_variable # print (don't call) the function print( duplicator ) # equivalently (in IPython): #duplicator """ Explanation: Python decorators Josh Montague, 2015-12 Built on OS X, with IPython 3.0 on Python 2.7 In this session, we'll dig into some details of Python functions. The end goal will be to understand how and why you might want to create decorators with Python functions. Note: there is an Appendix at the end of this notebook that dives deeper into scope and Python namespaces. I wrote out the content because they're quite relevant to talking about decorators. But, ultimately, we only 45 minutes, and it couldn't all fit. If you're curious, take a few extra minutes to review that material, as well. Functions A lot of this RST has to do with understanding the subtleties of Python functions. So, we're going to spend some time exploring them. In Python, functions are objects [T]his means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures. (wiki) This is not true (or at least not easy) in all programming languages. I don't have a ton of experience to back this up. But, many moons ago, I remember that Java functions only lived inside objects and classes. Let's take a moment to look at a relatively simple function and appreciate what it does and what we can do with it. End of explanation """ # now *call* the function by using parens output = duplicator('yo') # verify the expected behavior print(output) """ Explanation: Remember that IPython and Jupyter will automatically (and conveniently!) call the __repr__ of an object if it is the last thing in the cell. 
But I'll use the print() function explicitly just to be clear. This displays the string representation of the object. It usually includes: an object type (class) an object name a memory location Now, let's actually call the function (which we do with use of the parentheses), and assign the return value (a string) to a new variable. End of explanation """ # the dir() built-in function displays the argument's attributes dir(duplicator) """ Explanation: Because functions are objects, they have attributes just like any other Python object. End of explanation """ # first, recall the normal behavior of useful_function() duplicator('ring') # now create a new variable and assign our function to it another_duplicator = duplicator # now, we can use the *call* notation because the new variable is # assigned the original function another_duplicator('giggity') # and we can verify that this is actually a reference to the # original function print( "original function: %s" % duplicator ) print print( "new function: %s" % another_duplicator ) """ Explanation: Because functions are objects, we can pass them around like any other data type. For example, we can assign them to other variables! If you occasionally still have dreams about the Enumerator, this will look familiar. End of explanation """ def speaker(): """ Simply return a word (a string). Other than possibly asking 'why are you writing this simple function in such a complicated fashion?' this should hopefuly should be pretty clear. """ # define a local variable word='hello' def shout(): """Return a capitalized version of word.""" # though not in the innermost scope, this is in the namespace one # level out from here return word.upper() # call shout and then return the result of calling it (a string) return shout() # remember that the result is a string, now print it. 
the sequence: # - put word and shout in local namespace # - define shout() # - call shout() # - look for 'word', return it # - return the return value of shout() print( speaker() ) """ Explanation: By looking at the memory location, we can see that the second function is just a pointer to the first function! Cool! Functions inside functions With an understanding of what's inside a function and what we can do with it, consider the case were we define a new function within another function. This may seem overly complicated for a little while, but stick with me. In the example below, we'll define an outer function which includes a local variable, then a local function definition. The inner function returns a string. The outer function calls the inner function, and returns the resulting value (a string). End of explanation """ try: # this function only exists in the local scope of the outer function shout() except NameError, e: print(e) """ Explanation: Now, this may be intuitive, but it's important to note that the inner function is not accessible outside of the outer function. The interpreter can always step out into larger (or "more outer") namespaces, but we can't dig deeper into smaller ones. End of explanation """ def speaker_func(): """Similar to speaker(), but this time return the actual inner function!""" word = 'hello' def shout(): """Return an all-caps version of the passed word.""" return word.upper() # don't *call* shout(), just return it return shout # remember: our function returns another function print( speaker_func() ) """ Explanation: Functions out of functions What if we'd like our outer function to return a function? For example, return the inner function instead of the return value of the inner function. End of explanation """ # this will assign to the variable new_shout, a value that is the shout function new_shout = speaker_func() """ Explanation: Remember that the return value of the outer function is another function. 
And just like we saw earlier, we can print the function to see the name and memory location. Note that the name is that of the inner function. Makes sense, since that's what we returned. Like we said before, since this is an object, we can pass this function around and assign it to other variables. End of explanation """ # which means we can *call* it new_shout() """ Explanation: Which means we can also call it with parens, as usual. End of explanation """ from operator import itemgetter # we might want to sort this by the first or second item tuple_list = [(1,5),(9,2),(5,4)] # itemgetter is a callable (like a function) that we pass in as an argument to sorted() sorted(tuple_list, key=itemgetter(1)) def tuple_add(tup): """Sum the items in a tuple.""" return sum(tup) # now we can map the tuple_add() function across the tuple_list iterable. # note that we're passing a function as an argument! map(tuple_add, tuple_list) """ Explanation: Functions into functions If functions are objects, we can just as easily pass a function into another function. You've probably seen this before in the context of sorting, or maybe using map: End of explanation """ def verbose(func): """Add some marginally annoying verbosity to the passed func.""" def inner(): print("heeeeey everyone, I'm about to do a thing!") print("hey hey hey, I'm about to call a function called: {}".format(func.__name__)) print # now call (and print) the passed function print func() print print("whoa, did everyone see that thing I just did?! SICK!!") return inner """ Explanation: If we can pass functions into and out of other functions, then I propose that we can extend or modify the behavior of a function without actually editing the original function! Decorators 🎉💥🎉💥🎉💥🎉💥🎉💥 For example, say there's some previously-defined function in and you'd like it to be more verbose. For now, let's just assume that printing a bunch of information to stdout is our goal. 
Below, we define a function verbose() that takes another function as an argument. It does other tasks both before and after actually calling the passed-in function. End of explanation """ def verbose(func): """Add some marginally annoying verbosity to the passed func.""" def inner(): print("heeeeey everyone, I'm about to do a thing!") print("hey hey hey, I'm about to call a function called: {}".format(func.__name__)) print() # now call (and print) the passed function print(func()) print() print("whoa, did everyone see that thing I just did?! SICK!!") return inner """ Explanation: Now, imagine we have a function that we wish had more of this type of "logging." But, we don't want to jump in and add a bunch of code to the original function. End of explanation """ # here's our original function (that we don't want to modify) def say_hi(): """Return 'hi'.""" return '--> hi. <--' # understand the original behavior of the function say_hi() """ Explanation: Instead, we pass the original function as an arg to our verbose function. Remember that this returns the inner function, so we can assign it and then call it. End of explanation """ # this is now a function... verbose_say_hi = verbose(say_hi) # which we can call... verbose_say_hi() """ Explanation: Looking at the output, we can see that when we called verbose_say_hi(), all of the code in it ran: two print statements, then the passed function say_hi() was called, its return value was printed, and finally there was some other printing defined in the inner function. We'd now say that verbose_say_hi() is a decorated version of say_hi(). And, correspondingly, that verbose() is our decorator. A decorator is a callable that takes a function as an argument and returns a function (probably a modified version of the original function). Now, you may also decide that the modified version of the function is the only version you want around. And, further, you don't want to change any other code that may depend on this. In that case, you want to overwrite the namespace value for the original function! End of explanation """ # this will clobber the existing namespace value (the original function def). # in its place we have the verbose version! say_hi = verbose(say_hi) say_hi() """ Explanation: Uneditable source code One use-case where this technique can be useful is when you need to use an existing base of code that you can't edit. There's an existing library that defines classes and methods that are aligned with your needs, but you need a slight variation on them. Imagine there is a library called (creatively) uneditable_lib that implements a Coordinate class (a point in two-dimensional space), and an add() method. The add() method allows you to add the vectors of two Coordinates together and returns a new Coordinate object. It has great documentation and you know the Python source code looks like this: End of explanation """ ! cat _uneditable_lib.py """ Explanation: BUT Imagine you don't actually have the Python source, you have the compiled binary. Try opening this file in vi and see how it looks. End of explanation """ ! ls | grep .pyc # you can still *use* the compiled code from uneditable_lib import Coordinate, add # make a couple of coordinates using the existing library coord_1 = Coordinate(x=100, y=200) coord_2 = Coordinate(x=-500, y=400) print( coord_1 ) print( add(coord_1, coord_2) ) """ Explanation: But, imagine that for our particular use-case, we need to confine the resulting coordinates to the first quadrant (that is, x > 0 and y > 0). We want any negative component in the coordinates to just be truncated to zero. We can't edit the source code, but we can decorate (and modify) it!
End of explanation """ # first we decorate the existing function add = coordinate_decorator(add) # then we can call it as before print( add(coord_1, coord_2) ) """ Explanation: We can decorate the preexisting add() function with our new wrapper. And since we may be using other code from uneditable_lib with an API that expects the function to still be called add(), we can just overwrite that namespace variable. End of explanation """ from IPython.display import Image Image(url='http://i.giphy.com/8VrtCswiLDNnO.gif') """ Explanation: And, we now have a truncated Coordinate that lives in the first quadrant. End of explanation """ def say_bye(): """Return 'bye'.""" return '--> bye. <--' say_bye() """ Explanation: If we are running out of time, this is an ok place to wrap up. Examples Here are some real examples you might run across in the wild: Flask (web framework) uses decorators really well @app.route is a decorator that lets you decorate an arbitrary Python function and turn it into a URL path. @login_required is a decorator that lets your function define the appropriate authentication. Fabric (ops / remote server tasks) includes a number of "helper decorators" for task and hostname management. Here are some things we didn't cover If you go home tonight and can't possibly wait to learn more about decorators, here are the next things to look up: passing arguments to a decorator @functools.wraps implementing a decorator as a class If there is sufficient interest in a Decorators, Part Deux, those would be good starters. THE END Is there still time?! If so, here are a couple of other useful things worth saying, quickly... Decorating a function at definition (with @) You might still want to use a decorator to modify a function that you wrote in your own code. You might ask "But if we're already writing the code, why not just make the function do what we want in the first place?" Valid question. 
One place where this comes up is in following the DRY (Don't Repeat Yourself) principle of software engineering. If an identical block of logic is to be used in many places, that code should ideally be written in only one place. In our case, we could imagine making a bunch of different functions more verbose. Instead of adding the verbosity (print statements) to each of the functions, we should define that once and then decorate the other functions. Another nice example is making your code easier to understand by separating necessary operational logic from the business logic. There's a nice shorthand - some syntactic sugar - for this kind of statement. To illustrate it, let's just use a variation on a method from earlier. First, see how the original function behaves: End of explanation """ def say_bye(): """Return 'bye'.""" return '--> bye. <--' say_bye() """ Explanation: Remember the verbose() decorator that we already created? If this function (and perhaps others) should be made verbose at the time they're defined, we can apply the decorator right then and there using the @ shorthand: End of explanation """ @verbose def say_bye(): """Return 'bye'.""" return '--> bye. <--' say_bye() Image(url='http://i.giphy.com/piupi6AXoUgTe.gif') """ Explanation: But that shouldn't actually blow your mind. Based on our discussion before, you can probably guess that the decorator notation is just shorthand for: say_bye = verbose( say_bye ) One place where this shorthand can come in particularly handy is when you need to stack a bunch of decorators. In place of nested decorators like this: my_func = do_thing_a( add_numbers( subtract( verify( my_func )))) We can write this as: @do_thing_a @add_numbers @subtract @verify def my_func(): # something useful happens here Note that the order matters! Ok, thank you, please come again.
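Since the order of stacked decorators matters, here is a small self-contained example (an addition of mine, not from the original notes) that makes the order-dependence visible:

```python
def exclaim(func):
    """Append an exclamation mark to whatever func returns."""
    def wrapper():
        return func() + '!'
    return wrapper

def double(func):
    """Repeat whatever func returns, twice."""
    def wrapper():
        return func() * 2
    return wrapper

@exclaim
@double
def greet():
    return 'hi'

# equivalent to exclaim(double(greet)): the decorator closest
# to the def is applied first, so double runs "innermost"
print(greet())   # hihi!

@double
@exclaim
def cheer():
    return 'hi'

# equivalent to double(exclaim(cheer)): exclaim is applied first
print(cheer())   # hi!hi!
```

Swapping the two decorators changes the result, which is exactly why the stacking order deserves attention.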
THE END AGAIN Ok, final round, I promise. Appendix This is material that I originally intended to include in this RST (because it's relevant), but ultimately cut for clarity. You can come back and revisit it any time. Scopes and namespaces Roughly speaking, scope and namespace are the reason you can type some code (like variable_1 = 'dog') and then transparently use variable_1 later in your code. Sounds obvious, but easy to take for granted! The concepts of scope and namespace in Python are pretty interesting and complex. Some time when you're bored and want to learn some details, have a read of this nice explainer or the official docs on the Python Execution Model. A short way to think about them is the following: A namespace is a mapping from (variable) names to values; think about it like a Python dictionary, where our code can look up the keys (the variable names) and then use the corresponding values. You will generally have many namespaces in your code, they are usually nested (sort of like an onion), and they can even have identical keys (variable names). The scope (at a particular location in the code) defines in which namespace we look for variables (dictionary keys) when our code is executing. While this RST isn't explicitly about scope, understanding these concepts will make it easier to read the code later on. Let's look at some examples. There are two built-in functions that can aid in exploring the namespace at various points in your code: globals() and locals() return a dictionary of the names and values in their respective scope. 
Since the namespaces in IPython are often huge, let's use IPython's bash magic to call out to a normal Python session to test how globals() works: End of explanation """ # -c option starts a new interpreter session in a subshell and evaluates the code in quotes. # here, we just assign the value 3 to the variable x and print the global namespace ! python -c 'x=3; print( globals() )' """ Explanation: Note that there are a bunch of other dunder names that are in the global namespace. In particular, note that '__name__' = '__main__' because we ran this code from the command line (a comparison that you've made many times in the past!). And you can see the variable x that we assigned the value of 3. We can also look at the namespace in a more local scope with the locals() function. Inside the body of a function, the local namespace is limited to those variables defined within the function. End of explanation """ # this var is defined at the "outermost" level of this code block z = 10 def printer(x): """Print some things to stdout.""" # create a new var within the scope of this function animal = 'baboon' # ask about the namespace of the inner-most scope, "local" scope print('local namespace: {}\n'.format(locals())) # now, what about this var, which is defined *outside* the function? print('variable defined *outside* the function: {}'.format(z)) printer(17) """ Explanation: First, you can see that when our scope is 'inside the function', the namespace is very small. It's the local variables defined within the function, including the arg we passed the function. But, you can also see that we can still "see" the variable z, which was defined outside the function. This is because even though z doesn't exist in the local namespace, this is just the "innermost" of a series of nested namespaces. When we fail to find z in locals(), the interpreter steps "out" a layer, and looks for a namespace key (variable name) that's defined outside of the function.
If we look through this (and any larger) namespace and still fail to find a key (variable name) for z, the interpreter will raise a NameError. While the interpreter will always continue looking in larger or more outer scopes, it can't do the opposite. Since animal is created and assigned within the scope of our function, it goes "out of scope" as soon as the function returns. Local variables defined within the scope of a function are only accessible from that same scope - inside the function. End of explanation """ try: # remember that this var was created and assigned only within the function animal except NameError as e: print(e) """ Explanation: Closures This is all relevant, because part of the mechanism behind a decorator is the concept of a function closure. A function closure captures the enclosing state (namespace) at the time a non-global function is defined. To see an example, consider the following code: End of explanation """ def outer(x): def inner(): print(x) return inner """ Explanation: We saw earlier that the variable x isn't directly accessible outside of the function outer() because it's created within the scope of that function. But, Python's function closures mean that because inner() is not defined in the global scope, it keeps track of the surrounding namespace wherein it was defined. We can verify this by inspecting an example object: End of explanation """ o = outer(7) o() try: x except NameError as e: print(e) print( dir(o) ) print( o.__closure__ ) """ Explanation: """
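To make the captured state tangible, we can poke at the closure directly. This snippet is an addition of mine; it uses the modern `__closure__` attribute (in old Python 2 code the same tuple was also exposed as `func_closure`):

```python
def outer(x):
    def inner():
        return x
    return inner

o = outer(7)

# __closure__ is a tuple of cell objects, one per captured variable
print(o.__closure__[0].cell_contents)  # 7

# the names of the captured ("free") variables live on the code object
print(o.__code__.co_freevars)          # ('x',)
```

So the value 7 really does live on after outer() returns - it is stored in a cell that the returned function carries with it.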
mbeyeler/opencv-machine-learning
notebooks/06.02-Detecting-Pedestrians-in-the-Wild.ipynb
mit
datadir = "data/chapter6" dataset = "pedestrians128x64" datafile = "%s/%s.tar.gz" % (datadir, dataset) """ Explanation: <!--BOOK_INFORMATION--> <a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a> This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler. The code is released under the MIT license, and is available on GitHub. Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < Implementing Our First Support Vector Machine | Contents | Detecting Pedestrians with Support Vector Machines > Detecting Pedestrians in the Wild We briefly talked about the difference between detection and recognition. While recognition is concerned with classifying objects (for example, as pedestrians, cars, bicycles, and so on), detection is basically answering the question: is there a pedestrian present in this image? The basic idea behind most detection algorithms is to split up an image into many small patches, and then classify each image patch as either containing a pedestrian or not. This is exactly what we are going to do in this section. In order to arrive at our own pedestrian detection algorithm, we need to perform the following steps: 1. Build a database of images containing pedestrians. These will be our positive data samples. 2. Build a database of images not containing pedestrians. These will be our negative data samples. 3. Train an SVM on the dataset. 4. Apply the SVM to every possible patch of a test image in order to decide whether the overall image contains a pedestrian. 
Obtaining the dataset For the purpose of this section, we will work with the MIT People dataset, which we are free to use for non-commercial purposes. So make sure not to use this in your groundbreaking autonomous start-up company before obtaining a corresponding software license. The dataset can be obtained from http://cbcl.mit.edu/software-datasets/PedestrianData.html. There you should find a DOWNLOAD button that leads you to a file called http://cbcl.mit.edu/projects/cbcl/software-datasets/pedestrians128x64.tar.gz. However, if you followed our installation instructions from earlier and checked out the code on GitHub, you already have the dataset and are ready to go! The file can be found at notebooks/data/pedestrians128x64.tar.gz. Since we are supposed to run this code from a Jupyter Notebook in the notebooks/ directory, the relative path to the data directory is simply data/: End of explanation """ extractdir = "%s/%s" % (datadir, dataset) """ Explanation: The first thing to do is to unzip the file. We will extract all files into their own subdirectories in data/pedestrians128x64/: End of explanation """ def extract_tar(datafile, extractdir): try: import tarfile except ImportError: raise ImportError("You do not have tarfile installed. 
" "Try unzipping the file outside of Python.") tar = tarfile.open(datafile) tar.extractall(path=extractdir) tar.close() print("%s successfully extracted to %s" % (datafile, extractdir)) """ Explanation: We can do this either by hand (outside of Python) or with the following function: End of explanation """ extract_tar(datafile, datadir) """ Explanation: Then we can call the function like this: End of explanation """ import cv2 import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(10, 6)) for i in range(5): filename = "%s/per0010%d.ppm" % (extractdir, i) img = cv2.imread(filename) plt.subplot(1, 5, i + 1) plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) plt.axis('off') """ Explanation: The dataset comes with a total of 924 color images of pedestrians, each scaled to 64 x 128 pixels and aligned so that the person's body is in the center of the image. Scaling and aligning all data samples is an important step of the process, and we are glad that we don't have to do it ourselves. These images were taken in Boston and Cambridge in a variety of seasons and under several different lighting conditions. We can visualize a few example images by reading the image with OpenCV and passing an RGB version of the image to Matplotlib: End of explanation """ win_size = (48, 96) block_size = (16, 16) block_stride = (8, 8) cell_size = (8, 8) num_bins = 9 hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins) """ Explanation: We can look at additional pictures, but it's pretty clear that there is no straightforward way to describe pictures of people as easily as we did in the previous section for + and - data points. Part of the problem is thus finding a good way to represent these images. Does this ring a bell? It should! We're talking about feature engineering. Taking a glimpse at the histogram of oriented gradients (HOG) The HOG might just provide the help we're looking for in order to get this project done. 
HOG is a feature descriptor for images, much like the ones we discussed in Chapter 4, Representing Data and Engineering Features. It has been successfully applied to many different tasks in computer vision, but seems to work especially well for classifying people. The essential idea behind HOG features is that the local shapes and appearance of objects within an image can be described by the distribution of edge directions. The image is divided into small connected regions, within which a histogram of gradient directions (or edge directions) is compiled. Then, the descriptor is assembled by concatenating the different histograms. Please refer to the book for an example illustration. The HOG descriptor is fairly accessible in OpenCV by means of cv2.HOGDescriptor, which takes a bunch of input arguments, such as the detection window size (minimum size of the object to be detected, 48 x 96), the block size (how large each box is, 16 x 16), the cell size (8 x 8), and the block stride (how many pixels to move from one block to the next, 8 x 8). For each of these cells, the HOG descriptor then calculates a histogram of oriented gradients using nine bins: End of explanation """ win_size = (48, 96) block_size = (16, 16) block_stride = (8, 8) cell_size = (8, 8) num_bins = 9 hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins) """ Explanation: Although this function call looks fairly complicated, these are actually the only values for which the HOG descriptor is implemented. The argument that matters the most is the window size (win_size). All that's left to do is call hog.compute on our data samples. For this, we build a dataset of positive samples (X_pos) by randomly picking pedestrian images from our data directory.
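(A quick aside of mine, not from the book: the length of the descriptor for a single detection window follows directly from the parameters above, so we can compute by hand what to expect from hog.compute.)

```python
win_size = (48, 96)        # (width, height) of the detection window
block_size = (16, 16)
block_stride = (8, 8)
cell_size = (8, 8)
num_bins = 9

# number of block positions that fit into the detection window
blocks_x = (win_size[0] - block_size[0]) // block_stride[0] + 1   # 5
blocks_y = (win_size[1] - block_size[1]) // block_stride[1] + 1   # 11

# each 16 x 16 block contains 2 x 2 cells, each contributing one histogram
cells_per_block = (block_size[0] // cell_size[0]) * (block_size[1] // cell_size[1])

descriptor_len = blocks_x * blocks_y * cells_per_block * num_bins
print(descriptor_len)  # 5 * 11 * 4 * 9 = 1980
```

This matches the 1,980 feature values per sample that show up once the computed descriptors are stacked into a feature matrix.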
In the following code snippet, we randomly select 400 pictures from the over 900 available, and apply the HOG descriptor to them: End of explanation """ import numpy as np import random random.seed(42) X_pos = [] for i in random.sample(range(900), 400): filename = "%s/per%05d.ppm" % (extractdir, i) img = cv2.imread(filename) if img is None: print('Could not find image %s' % filename) continue X_pos.append(hog.compute(img, (64, 64))) """ Explanation: We should also remember that OpenCV wants the feature matrix to contain 32-bit floating point numbers, and the target labels to be 32-bit integers. We don't mind, since converting to NumPy arrays will allow us to easily investigate the sizes of the matrices we created: End of explanation """ X_pos = np.array(X_pos, dtype=np.float32) y_pos = np.ones(X_pos.shape[0], dtype=np.int32) X_pos.shape, y_pos.shape """ Explanation: It looks like we picked a total of 399 training samples, each of which has 1,980 feature values (the HOG feature values). Generating negatives The real challenge, however, is to come up with the perfect example of a non-pedestrian. After all, it's easy to think of example images of pedestrians. But what is the opposite of a pedestrian? This is actually a common problem when trying to solve new machine learning problems. Both research labs and companies spend a lot of time creating and annotating new datasets that fit their specific purpose. If you're stumped, let me give you a hint on how to approach this. A good first approximation to finding the opposite of a pedestrian is to assemble a dataset of images that look like the images of the positive class but do not contain pedestrians. These images could contain anything like cars, bicycles, streets, houses, and maybe even forests, lakes, or mountains. A good place to start is the Urban and Natural Scene dataset by the Computational Visual Cognition Lab at MIT. The complete dataset can be obtained from http://cvcl.mit.edu/database.htm, but don't bother. I have already assembled a good amount of images from categories such as open country, inner cities, mountains, and forests. 
You can find them in the data/pedestrians_neg directory: End of explanation """ import os hroi = 128 wroi = 64 X_neg = [] for negfile in os.listdir(negdir): filename = '%s/%s' % (negdir, negfile) img = cv2.imread(filename) img = cv2.resize(img, (512, 512)) for j in range(5): rand_y = random.randint(0, img.shape[0] - hroi) rand_x = random.randint(0, img.shape[1] - wroi) roi = img[rand_y:rand_y + hroi, rand_x:rand_x + wroi, :] X_neg.append(hog.compute(roi, (64, 64))) """ Explanation: All the images are in color, in .jpeg format, and are 256 x 256 pixels. However, in order to use them as samples from a negative class that go together with our images of pedestrians earlier, we need to make sure that all images have the same pixel size. Moreover, the things depicted in the images should roughly be at the same scale. Thus, we want to loop through all the images in the directory (via os.listdir) and cut out a 64 x 128 region of interest (ROI): End of explanation """ X_neg = np.array(X_neg, dtype=np.float32) y_neg = -np.ones(X_neg.shape[0], dtype=np.int32) X_neg.shape, y_neg.shape """ Explanation: What did we almost forget? Exactly, we forgot to make sure that all feature values are 32-bit floating point numbers. 
Also, the target label of these images should be -1, corresponding to the negative class: End of explanation """ X_neg = np.array(X_neg, dtype=np.float32) y_neg = -np.ones(X_neg.shape[0], dtype=np.int32) X_neg.shape, y_neg.shape """ Explanation: Then we can concatenate all positive (X_pos) and negative samples (X_neg) into a single dataset X, which we split using the all too familiar train_test_split function from scikit-learn: End of explanation """ X = np.concatenate((X_pos, X_neg)) y = np.concatenate((y_pos, y_neg)) from sklearn import model_selection as ms X_train, X_test, y_train, y_test = ms.train_test_split( X, y, test_size=0.2, random_state=42 ) """ Explanation: Implementing the support vector machine We already know how to build an SVM in OpenCV, so there's nothing much to see here. Planning ahead, we wrap the training procedure into a function, so that it's easier to repeat the procedure in the future: End of explanation """ def train_svm(X_train, y_train): svm = cv2.ml.SVM_create() svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train) return svm """ Explanation: The same can be done for the scoring function. Here we pass a feature matrix X and a label vector y, but we do not specify whether we're talking about the training or the test set. 
In fact, from the viewpoint of the function, it doesn't matter what set the data samples belong to, as long as they have the right format: End of explanation """ svm = train_svm(X_train, y_train) score_svm(svm, X_train, y_train) score_svm(svm, X_test, y_test) """ Explanation: Then we can train and score the SVM with two short function calls: End of explanation """ score_train = [] score_test = [] for j in range(3): svm = train_svm(X_train, y_train) score_train.append(score_svm(svm, X_train, y_train)) score_test.append(score_svm(svm, X_test, y_test)) _, y_pred = svm.predict(X_test) false_pos = np.logical_and((y_test.ravel() == -1), (y_pred.ravel() == 1)) if not np.any(false_pos): print('done') break X_train = np.concatenate((X_train, X_test[false_pos, :]), axis=0) y_train = np.concatenate((y_train, y_test[false_pos]), axis=0) """ Explanation: Thanks to the HOG feature descriptor, we make no mistake on the training set. However, our generalization performance is quite abysmal (64.6 percent), as it is much less than the training performance (100 percent). This is an indication that the model is overfitting the data. The fact that it is performing way better on the training set than the test set means that the model has resorted to memorizing the training samples, rather than trying to abstract it into a meaningful decision rule. What can we do to improve the model performance? Bootstrapping the model An interesting way to improve the performance of our model is to use bootstrapping. This idea was actually applied in one of the first papers on using SVMs in combination with HOG features for pedestrian detection. So let's pay a little tribute to the pioneers and try to understand what they did. Their idea was quite simple. After training the SVM on the training set, they scored the model and found that the model produced some false positives. Remember that false positive means that the model predicted a positive (+) for a sample that was really a negative (-). 
In our context, this would mean the SVM falsely believed an image to contain a pedestrian. If this happens for a particular image in the dataset, this example is clearly troublesome. Hence, we should add it to the training set and retrain the SVM with the additional troublemaker, so that the algorithm can learn to classify that one correctly. This procedure can be repeated until the SVM gives satisfactory performance. We will talk about bootstrapping in more detail in Chapter 11, Selecting the Right Model with Hyperparameter Tuning. Let's do the same. We will repeat the training procedure a maximum of three times. After each iteration, we identify the false positives in the test set and add them to the training set for the next iteration: End of explanation """ score_train score_test """ Explanation: This allows us to improve the model over time: End of explanation """ img_test = cv2.imread('data/chapter6/pedestrian_test.jpg') stride = 16 found = [] for ystart in np.arange(0, img_test.shape[0], stride): for xstart in np.arange(0, img_test.shape[1], stride): if ystart + hroi > img_test.shape[0]: continue if xstart + wroi > img_test.shape[1]: continue roi = img_test[ystart:ystart + hroi, xstart:xstart + wroi, :] feat = np.array([hog.compute(roi, (64, 64))]) _, ypred = svm.predict(feat) if np.allclose(ypred, 1): found.append((ystart, xstart, hroi, wroi)) """ Explanation: Here, we achieved 64.6 percent accuracy in the first round, but were able to get that up to a perfect 100 percent in the second round. Detecting pedestrians in a larger image What's left to do is to connect the SVM classification procedure with the process of detection. The way to do this is to repeat our classification for every possible patch in the image. This is similar to what we did earlier when we visualized decision boundaries; we created a fine grid and classified every point on that grid. The same idea applies here. 
We divide the image into patches and classify every patch as either containing a pedestrian or not. Therefore, if we want to do this, we have to loop over all possible patches in an image, each time shifting our region of interest by a small number of stride pixels: End of explanation """ hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins) rho, _, _ = svm.getDecisionFunction(0) sv = svm.getSupportVectors() hog.setSVMDetector(np.append(sv[0, :].ravel(), rho)) """ Explanation: Because pedestrians could appear not just at various locations but also in various sizes, we would have to rescale the image and repeat the whole process. Thankfully, OpenCV has a convenience function for this multi-scale detection task in the form of the detectMultiScale function. This is a bit of a hack, but we can pass all SVM parameters to the hog object: End of explanation """ hogdef = cv2.HOGDescriptor() hogdef.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector()) found, _ = hogdef.detectMultiScale(img_test) """ Explanation: In practice, when people are faced with a standard task such as pedestrian detection, they often rely on precanned SVM classifiers that are built into OpenCV. This is the method that I hinted at in the very beginning of this chapter. By loading either cv2.HOGDescriptor_getDaimlerPeopleDetector() or cv2.HOGDescriptor_getDefaultPeopleDetector(), we can get started with only a few lines of code: End of explanation """ fig = plt.figure(figsize=(10, 6)) ax = fig.add_subplot(111) ax.imshow(cv2.cvtColor(img_test, cv2.COLOR_BGR2RGB)) from matplotlib import patches for f in found: ax.add_patch(patches.Rectangle((f[0], f[1]), f[2], f[3], color='y', linewidth=3, fill=False)) plt.savefig('detected.png') """ Explanation: Then we can mark the detected pedestrians in the image by looping over the bounding boxes in found: End of explanation """
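The naive single-scale sliding-window loop above visits every stride-spaced position where a full patch still fits. To get a feeling for how many patches that is, we can count the positions for a hypothetical image size (the actual dimensions of pedestrian_test.jpg are not shown here, so the 512 x 512 size below is purely an assumption for illustration):

```python
stride = 16
hroi, wroi = 128, 64        # patch height and width used above
img_h, img_w = 512, 512     # hypothetical test-image size

# positions where the full patch still fits inside the image,
# matching the loop's "continue" checks
ny = len(range(0, img_h - hroi + 1, stride))
nx = len(range(0, img_w - wroi + 1, stride))

print(ny, nx, ny * nx)  # 25 x 29 = 725 patches at this single scale
```

Every one of those patches costs a HOG computation plus an SVM prediction, and a multi-scale search multiplies that again per scale - which is why the built-in detectMultiScale is the more practical route.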
mguerrap/tydal
Module2.ipynb
mit
import tydal.module2_utils as tide import tydal.quiz2 stationmap = tide.add_station_maps() stationmap """ Explanation: Module 2: Tides in the Puget Sound Learning Objectives I. Tidal Movement II. Tidal Cycle and Connection to Sea Surface Elevation Let's take a closer look at the movement of tides through the Strait of Juan de Fuca. We'll be using the tidal stations at Neah Bay, Port Angeles, and Port Townsend. Their tidal data and locations can be found at the NOAA Tides and Currents webpage. Below, we plotted the locations of the three tidal stations in the Strait of Juan de Fuca. From west to east: Neah Bay, Port Angeles, and Port Townsend. End of explanation """ NeahBay = tide.load_Neah_Bay('Data/') PortAngeles = tide.load_Port_Angeles('Data/') PortTownsend = tide.load_Port_Townsend('Data/') Tides = tide.create_tide_dataset(NeahBay,PortAngeles,PortTownsend) %matplotlib inline tide.plot_tide_data(Tides,'2016-10-01','2016-10-02') """ Explanation: As the tide moves through the Strait, it creates a change in the elevation of the water surface. Below we'll cycle through a tidal cycle and look at how the tide moves through the Strait. Use the slider to move through the time series and look at how the measured tide at a station relates to the other stations, and at its effect on the water elevation. End of explanation """ tydal.quiz2.quiz() """ Explanation: Take a look at the time series for each station. It looks like a wave. In fact, the tide is a wave. That wave propagates through the Strait, starting at Neah Bay and travelling to Port Townsend. This is reflected in the elevation, as the peak elevation moves from one station to the following station. # Module 2 Quiz End of explanation """
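One way to quantify the propagation described above is to estimate the time lag between two stations with a cross-correlation. The snippet below is a toy illustration of mine on synthetic sinusoidal "tides" (not the NOAA data loaded above), with an assumed one-hour lag built in:

```python
import numpy as np

dt = 0.1                                   # sample spacing in hours
t = np.arange(0, 48, dt)                   # two days of samples
period = 12.42                             # M2 tidal period in hours

neah = np.sin(2 * np.pi * t / period)                    # "upstream" station
port_townsend = np.sin(2 * np.pi * (t - 1.0) / period)   # lags by 1 hour

# the index of the cross-correlation peak gives the lag in samples
corr = np.correlate(port_townsend, neah, mode='full')
lag_hours = (corr.argmax() - (len(t) - 1)) * dt
print(lag_hours)  # ~1.0
```

With the real station records one would use the measured water levels in place of the sinusoids; the estimated lag is then the travel time of the tidal wave between the two stations.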
kit-cel/wt
mloc/ch1_Preliminaries/Newton_method.ipynb
gpl-2.0
import importlib autograd_available = True # if automatic differentiation is available, use it try: import autograd except ImportError: autograd_available = False pass if autograd_available: import autograd.numpy as np from autograd import grad, hessian else: import numpy as np import matplotlib.pyplot as plt import matplotlib from ipywidgets import interactive import ipywidgets as widgets %matplotlib inline font = {'size' : 20} plt.rc('font', **font) plt.rc('text', usetex=matplotlib.checkdep_usetex(True)) matplotlib.rcParams['text.latex.preamble']=[r"\usepackage{amsmath}"] if autograd_available: print('Using autograd to compute gradients') else: print('Using hand-calculated gradient') """ Explanation: Newton Method for Optimization This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br> This code illustrates: * Newton Update using the Hessian * Interactive demonstration of step size influence End of explanation """ # Valley def myfun(x): return (x[0]**2)/16 + 9*(x[1]**2) if autograd_available: gradient = grad(myfun) Hessian = hessian(myfun) else: def gradient(x): grad = [x[0]/8, 18*x[1]] return grad; def Hessian(x): H = [[1/8, 0.0], [0.0, 18.0]] return np.array(H) """ Explanation: Specify the function to minimize as a simple python function.<br> We start with a very simple function that is given by \begin{equation} f(\boldsymbol{x}) = \frac{1}{16}x_1^2 + 9x_2^2 \end{equation} The derivative is automatically computed using the autograd library, which returns a function that evaluates the gradient of myfun. 
The gradient can also be easily computed by hand and is given as
\begin{equation}
\nabla f(\boldsymbol{x}) = \begin{pmatrix} \frac{1}{8}x_1 \\ 18x_2 \end{pmatrix}
\end{equation}
The Hessian is also easily computed by hand and is given as
\begin{equation}
\boldsymbol{H}(f(\boldsymbol{x})) = \begin{pmatrix} \frac{1}{8} & 0 \\ 0 & 18 \end{pmatrix}
\end{equation}
End of explanation """
x = np.arange(-5.0, 5.0, 0.02)
y = np.arange(-2.0, 2.0, 0.02)
X, Y = np.meshgrid(x, y)
fZ = myfun([X,Y])
plt.figure(1,figsize=(10,6))
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.show()
""" Explanation: Plot the function as a 2d surface plot. Different colors indicate different values of the function.
End of explanation """
epsilon = 0.5
start = np.array([-4.0,-1.0])
points = []
while len(points) < 200:
    points.append( (start,myfun(start)) )
    g = gradient(start)
    H = Hessian(start)
    Hinv = np.linalg.inv(H)
    start = start - epsilon * (Hinv @ g)
""" Explanation: Carry out the Newton method, in which each update moves along the Newton direction $-\boldsymbol{H}^{-1}\nabla f(\boldsymbol{x})$, scaled by the step size epsilon. Carry out 200 iterations (without using a stopping criterion). The values of epsilon and the starting point are specified in the code.
End of explanation """
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(1,figsize=(16,6))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$f(\boldsymbol{x}^{(i)})$")
plt.show()
""" Explanation: Plot the trajectory and the value of the function (right subplot). Note that the minimum of this function is achieved for (0,0) and is 0.
As the original function is quadratic, the Newton direction is exact and the iterates move straight towards the minimum.
End of explanation """
def plot_function(epsilon, start_x, start_y):
    start = np.array([start_x,start_y])
    points = []
    while len(points) < 200:
        points.append( (start,myfun(start)) )
        g = gradient(start)
        H = Hessian(start)
        Hinv = np.linalg.inv(H)
        start = start - epsilon * (Hinv @ g)

    trajectory_x = [points[i][0][0] for i in range(len(points))]
    trajectory_y = [points[i][0][1] for i in range(len(points))]
    plt.figure(3,figsize=(15,5))
    plt.subplot(121)
    plt.rcParams.update({'font.size': 14})
    plt.contourf(X,Y,fZ,levels=20)
    plt.xlim(-5,0)
    plt.ylim(-2,2)
    plt.xlabel("$x$")
    plt.ylabel("$y$")
    plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
    plt.subplot(122)
    plt.plot(range(0,len(points)),list(zip(*points))[1])
    plt.grid(True)
    plt.xlabel("Step $i$")
    plt.ylabel(r"$f(\boldsymbol{x}^{(i)})$")
    plt.show()

epsilon_values = np.arange(0.0,1,0.0002)
interactive_update = interactive(plot_function, \
    epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.1, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')),
    start_x = widgets.FloatSlider(min=-5.0,max=0.0,step=0.001,value=-4.0, continuous_update=False, description='x'), \
    start_y = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, value=-1.0, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '370px'
interactive_update
""" Explanation: This is an interactive demonstration of the damped Newton method, where you can specify the starting point as well as the step size yourself.
You can see that, depending on the step size, the minimization can become unstable.
End of explanation """
# Rosenbrock function
def rosenbrock_fun(x):
    return (1-x[0])**2+100*((x[1]-(x[0])**2)**2)

if autograd_available:
    rosenbrock_gradient = grad(rosenbrock_fun)
    rosenbrock_Hessian = hessian(rosenbrock_fun)
else:
    def rosenbrock_gradient(x):
        grad = [-2*(1-x[0])-400*(x[1]-x[0]**2)*x[0], 200*(x[1]-x[0]**2)]
        return grad

    def rosenbrock_Hessian(x):
        H = np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]], [-400*x[0], 200]])
        return H

xr = np.arange(-1.6, 1.6, 0.01)
yr = np.arange(-1.0, 3.0, 0.01)
Xr, Yr = np.meshgrid(xr, yr)
fZr = rosenbrock_fun([Xr,Yr])

def plot_function_rosenbrock(epsilon, start_x, start_y):
    start = np.array([start_x,start_y])
    points = []
    while len(points) < 1000:
        points.append( (start,rosenbrock_fun(start)) )
        g = rosenbrock_gradient(start)
        H = rosenbrock_Hessian(start)
        Hinv = np.linalg.inv(H)
        start = start - epsilon * (Hinv @ g)

    trajectory_x = [points[i][0][0] for i in range(len(points))]
    trajectory_y = [points[i][0][1] for i in range(len(points))]
    plt.figure(4,figsize=(15,5))
    plt.subplot(121)
    plt.rcParams.update({'font.size': 14})
    plt.contourf(Xr,Yr,fZr,levels=20)
    plt.xlabel("$x$")
    plt.ylabel("$y$")
    plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
    plt.xlim((min(xr), max(xr)))
    plt.ylim((min(yr), max(yr)))
    plt.subplot(122)
    plt.plot(range(0,len(points)),list(zip(*points))[1])
    plt.grid(True)
    plt.xlabel("Step $i$")
    plt.ylabel(r"$f(\boldsymbol{x}^{(i)})$")
    plt.show()

epsilon_values = np.arange(0.0,1,0.00002)
interactive_update = interactive(plot_function_rosenbrock, \
    epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.1, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')),
    start_x = widgets.FloatSlider(min=-1.6,max=1.6,step=0.0001,value=0.6, continuous_update=False,
description='x'), \
    start_y = widgets.FloatSlider(min=-1.0, max=3.0, step=0.0001, value=0.1, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '350px'
interactive_update
""" Explanation: Next, we consider the so-called Rosenbrock function, which is given by
\begin{equation}
f(\boldsymbol{x}) = (1-x_1)^2 + 100(x_2-x_1^2)^2
\end{equation}
Its gradient is given by
\begin{equation}
\nabla f(\boldsymbol{x}) = \begin{pmatrix} -2(1-x_1)-400(x_2-x_1^2)x_1 \\ 200(x_2-x_1^2) \end{pmatrix}
\end{equation}
Its Hessian is given by
\begin{equation}
\boldsymbol{H}(f(\boldsymbol{x})) = \begin{pmatrix} 2 - 400(x_2 - 3x_1^2) & -400x_1 \\ -400x_1 & 200 \end{pmatrix}
\end{equation}
The Rosenbrock function has a global minimum at (1,1) but is difficult to optimize due to its curved valley. For details, see https://en.wikipedia.org/wiki/Rosenbrock_function
End of explanation """
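As a quick numerical check of the claim that Newton's method is exact on a quadratic (this cell is an addition, with the valley function and its hand-computed derivatives restated locally so it is self-contained): a single undamped step, i.e. $\epsilon = 1$, lands on the minimizer $(0,0)$ from any starting point.

```python
import numpy as np

def quad_fun(x):
    return (x[0]**2) / 16 + 9 * (x[1]**2)

def quad_gradient(x):
    return np.array([x[0] / 8, 18 * x[1]])

quad_hessian = np.array([[1 / 8, 0.0], [0.0, 18.0]])  # constant for a quadratic

x = np.array([-4.0, -1.0])  # same starting point as in the demo above
# One full Newton step (epsilon = 1): lands (numerically) on the minimizer (0, 0)
x_next = x - np.linalg.solve(quad_hessian, quad_gradient(x))
print(x_next, quad_fun(x_next))
```

This is exactly why the damped iteration above, with $\epsilon = 0.5$, moves along a straight line to the optimum: every step points at $(0,0)$ and only its length is scaled.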
paultheastronomer/OAD-Data-Science-Toolkit
Teaching Materials/Machine Learning/ml-training-intro/notebooks/07 - Trees.ipynb
gpl-3.0
from sklearn.tree import DecisionTreeClassifier, export_graphviz tree = DecisionTreeClassifier(max_depth=2) tree.fit(X_train, y_train) tree_dot = export_graphviz(tree, out_file=None, feature_names=cancer.feature_names, filled=True) print(tree_dot) import graphviz graphviz.Source(tree_dot) """ Explanation: tree visualization End of explanation """ tree = DecisionTreeClassifier().fit(X_train, y_train) tree_dot = export_graphviz(tree, out_file=None, feature_names=cancer.feature_names, filled=True) graphviz.Source(tree_dot, format="png") tree = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train) tree_dot = export_graphviz(tree, out_file=None, feature_names=cancer.feature_names) graphviz.Source(tree_dot, format="png") tree = DecisionTreeClassifier(max_leaf_nodes=8).fit(X_train, y_train) tree_dot = export_graphviz(tree, out_file=None, feature_names=cancer.feature_names, filled=True) graphviz.Source(tree_dot, format="png") tree = DecisionTreeClassifier(min_samples_split=50).fit(X_train, y_train) tree_dot = export_graphviz(tree, out_file=None, feature_names=cancer.feature_names, filled=True) graphviz.Source(tree_dot, format="png") from sklearn.model_selection import GridSearchCV param_grid = {'max_depth':range(1, 7)} grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid=param_grid, cv=10) grid.fit(X_train, y_train) from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit param_grid = {'max_depth':range(1, 7)} grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid=param_grid, cv=StratifiedShuffleSplit(100)) grid.fit(X_train, y_train) scores = pd.DataFrame(grid.cv_results_) scores.plot(x='param_max_depth', y=['mean_train_score', 'mean_test_score'], ax=plt.gca()) plt.legend(loc=(1, 0)) from sklearn.model_selection import GridSearchCV param_grid = {'max_leaf_nodes':range(2, 20)} grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid=param_grid, cv=StratifiedShuffleSplit(100, random_state=1)) 
grid.fit(X_train, y_train) scores = pd.DataFrame(grid.cv_results_) scores.plot(x='param_max_leaf_nodes', y=['mean_train_score', 'mean_test_score'], ax=plt.gca()) plt.legend(loc=(1, 0)) scores = pd.DataFrame(grid.cv_results_) scores.plot(x='param_max_leaf_nodes', y='mean_train_score', yerr='std_train_score', ax=plt.gca()) scores.plot(x='param_max_leaf_nodes', y='mean_test_score', yerr='std_test_score', ax=plt.gca()) grid.best_params_ tree_dot = export_graphviz(grid.best_estimator_, out_file=None, feature_names=cancer.feature_names, filled=True) graph = graphviz.Source(tree_dot, format="png") graph pd.Series(grid.best_estimator_.feature_importances_, index=cancer.feature_names).plot(kind="barh") # set PATH=PATH;<Anaconda>\Library\bin\graphviz """ Explanation: Parameter Tuning End of explanation """
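For intuition about what the fitted trees above do at each internal node, here is a hand-rolled sketch — pure Python, deliberately not sklearn's actual implementation — of choosing a single-feature threshold by minimizing the weighted Gini impurity, the default split criterion of DecisionTreeClassifier:

```python
def gini(labels):
    """Gini impurity of a list of binary 0/1 class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1**2 - (1 - p1)**2

def best_split(xs, ys):
    """Return (threshold, weighted impurity) of the best split x <= t."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: the classes separate perfectly at x <= 2
xs = [1, 2, 3, 4]
ys = [0, 0, 1, 1]
print(best_split(xs, ys))  # (2, 0.0)
```

A real tree repeats this search over every feature at every node, which is why parameters such as max_depth and max_leaf_nodes, tuned above, control how many of these greedy splits are allowed.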
Naereen/notebooks
agreg/Algorithme de Cocke-Kasami-Younger (python3).ipynb
mit
# We need lists and tuples
from typing import List, Tuple  # Module available in Python version >= 3.5
""" Explanation: # Table of Contents
* [1. Agrégation externe de mathématiques](#1.-Agrégation-externe-de-mathématiques)
    * [1.1 Oral lesson, computer science option](#1.1-Leçon-orale,-option-informatique)
* [2. The Cocke-Kasami-Younger algorithm](#2.-Algorithme-de-Cocke-Kasami-Younger)
    * [2.0.1 Implementation of a développement for lessons 906, 907, 910, 923](#2.0.1-Implémentation-d'un-développement-pour-les-leçons-906,-907,-910,-923.)
    * [2.0.2 References](#2.0.2-Références-:)
    * [2.1 Classes to represent a grammar](#2.1-Classes-pour-répresenter-une-grammaire)
        * [2.1.1 Type annotations in Python?!](#2.1.1-Du-typage-en-Python-?!)
        * [2.1.2 The class ``Grammaire``](#2.1.2-La-classe-Grammaire)
        * [2.1.3 First grammar example (non-Chomsky)](#2.1.3-Premier-exemple-de-grammaire-%28non-Chomsky%29)
        * [2.1.4 Second grammar example (non-Chomsky)](#2.1.4-Second-exemple-de-grammaire-%28non-Chomsky%29)
        * [2.1.5 Last grammar example](#2.1.5-Dernier-exemple-de-grammaire)
    * [2.2 Checking that a grammar is well-formed](#2.2-Vérifier-qu'une-grammaire-est-bien-formée)
    * [2.3 Checking that a grammar is in Chomsky normal form](#2.3-Vérifier-qu'une-grammaire-est-en-forme-normale-de-Chomsky)
    * [2.4 (at last) The Cocke-Kasami-Younger algorithm](#2.4-%28enfin%29-L'algorithme-de-Cocke-Kasami-Younger)
    * [2.5 Examples](#2.5-Exemples)
        * [2.5.1 With $G_3$](#2.5.1-Avec-$G_3$)
        * [2.5.2 With $G_6$](#2.5.2-Avec-$G_6$)
    * [2.6 Conversion to Chomsky normal form *(bonus)*](#2.6-Mise-en-forme-normale-de-Chomsky-*%28bonus%29*)
        * [2.6.1 Example for $G_1$](#2.6.1-Exemple-pour-$G_1$)
        * [2.6.2 Example for $G_6$](#2.6.2-Exemple-pour-$G_6$)
# 1. Agrégation externe de mathématiques
## 1.1 Oral lesson, computer science option
> - This [Jupyter notebook](http://jupyter.org/) implements an algorithm that can serve as a *développement* for the computer science option of the external French agrégation in mathematics.
> - It covers the [Cocke-Kasami-Younger algorithm](https://fr.wikipedia.org/wiki/Algorithme_de_Cocke-Younger-Kasami).
> - This (partial) implementation was written by [Lilian Besson](http://perso.crans.org/besson/) ([on GitHub?](https://github.com/Naereen/), [on Bitbucket?](https://bitbucket.org/lbesson)), and [is open-source](https://github.com/Naereen/notebooks/blob/master/agreg/Algorithme%20de%20Cocke-Kasami-Younger%20%28python3%29.ipynb).
> #### Feedback?
> - Found a bug? → [Please report it!](https://github.com/Naereen/notebooks/issues/new), thanks in advance.
> - Have a question? → [Please ask it!](https://github.com/Naereen/ama.fr) [![Ask me anything!](https://img.shields.io/badge/Demandez%20moi-n'%20importe%20quoi-1abc9c.svg)](https://GitHub.com/Naereen/ama.fr)
----
# 2. The Cocke-Kasami-Younger algorithm
### 2.0.1 Implementation of a développement for lessons 906, 907, 910, 923.
The Cocke-Kasami-Younger (CYK) algorithm solves the word problem in time $\mathcal{O}(|w|^3)$, by dynamic programming.
The grammar $G$ must already have been put into [Chomsky normal form](https://fr.wikipedia.org/wiki/Forme_normale_de_Chomsky), which takes time $\mathcal{O}(|G|^2)$ and produces an equivalent grammar $G'$ of size $\mathcal{O}(|G|^2)$ starting from $G$ (which must be well-formed).
### 2.0.2 References:
- [Cocke-Kasami-Younger on Wikipedia](https://fr.wikipedia.org/wiki/Algorithme_de_Cocke-Younger-Kasami),
- Thoroughly covered in ["Hopcroft, Ullman", Ch. 7.4.4, p. 298](https://catalogue.ens-cachan.fr/cgi-bin/koha/opac-detail.pl?biblionumber=23694),
- Sketched in ["Carton", Ex. 4.7, Fig. 4.2, p. 170](https://catalogue.ens-cachan.fr/cgi-bin/koha/opac-detail.pl?biblionumber=41719),
- [A typed-up PDF of this développement by Theo Pierron (2014)](http://perso.eleves.ens-rennes.fr/~tpier758/agreg/dvpt/info/CYK.pdf),
- [These slides from a course on languages and grammars](http://pageperso.lif.univ-mrs.fr/~alexis.nasr/Ens/M2/pcfg.pdf).
----
## 2.1 Classes to represent a grammar
Instead of formal types as one would define them in OCaml, we use Python classes to represent a grammar (not only in Chomsky normal form, but in a slightly more general form).
### 2.1.1 Type annotations in Python?!
But since I want to show off by using formal types, we will use [Python type annotations](https://www.python.org/dev/peps/pep-0484/).
This is fairly new, available **from Python 3.5** onwards.
If you want to learn more, a good first read is [this page](https://mypy.readthedocs.io/en/latest/builtin_types.html).
*Note:* these type annotations are NOT required.
End of explanation """
# Type for a variable: just a string, e.g. 'X' or 'S'
Var = str
# Type for an alphabet
Alphabet = List[Var]
# Type for a rule: one symbol rewritten into a list of symbols
Regle = Tuple[Var, List[Var]]
""" Explanation: We define the types we are interested in:
End of explanation """
class Grammaire(object):
    """ Type for context-free grammars (in Chomsky form). """

    def __init__(self, sigma: Alphabet, v: Alphabet, s: Var, r: List[Regle], nom="G"):
        """ Grammar in Chomsky form:
        - sigma : production alphabet, of type Alphabet,
        - v : working alphabet, of type Alphabet,
        - s : initial symbol, of type Var,
        - r : list of rules, of type List[Regle].
        """
        # We simply store the fields:
        self.sigma = sigma
        self.v = v
        self.s = s
        self.r = r
        self.nom = nom

    def __str__(self) -> str:
        """ Pretty-prints a grammar."""
        str_regles = ', '.join(
            "{} -> {}".format(regle[0], ''.join(regle[1]) if regle[1] else 'ε')
            for regle in self.r
        )
        return r"""Grammar {}:
- Alphabet Σ = {},
- Non-terminals V = {},
- Initial symbol: '{}',
- Rules: {}.""".format(self.nom, set(self.sigma), set(self.v), self.s, str_regles)
""" Explanation: Note: these type annotations are only there to illustrate and to help the programmer; Python remains a dynamically typed language (i.e., anything goes...).
2.1.2 The class Grammaire
A grammar $G$ is defined by:
$\Sigma$ its production alphabet: the letters appearing in the words produced at the end, e.g., $\Sigma = \{ a, b\}$,
$V$ its working alphabet: the letters used while generating words but absent from the final words, e.g., $V = \{S, A\}$,
$S$ the initial working symbol,
$R$ a set of rules of the form $U \rightarrow x_1 \dots x_n$ where $U \in V$ is a working (non-production) variable and $x_1, \dots, x_n$ are production or working variables (in $\Sigma \cup V$), e.g., $R = \{ S \rightarrow \varepsilon, S \rightarrow A S b, A \rightarrow a, A \rightarrow a a \}$.
And so we can define a Grammaire class, which is nothing more than a way to bundle these values $\Sigma$, $V$, $S$, and $R$ (in OCaml, this would be a record type, defined for instance by type grammar = { sigma : string list; v: string list; s: string; r: (string * string list) list; };;).
We also add a __str__ method to this Grammaire class to print the grammar nicely.
End of explanation """
g1 = Grammaire(
    ['a', 'b'],  # Production alphabet
    ['S'],       # Working alphabet
    'S',         # Initial symbol (only one)
    [   # Rules
        ('S', []),               # S -> ε
        ('S', ['a', 'S', 'b']),  # S -> a S b
    ],
    nom="G1"
)
print(g1)
""" Explanation: 2.1.3 First grammar example (non-Chomsky)
We start with a basic first example, the grammar $G_1$ with the single rule: $S \rightarrow aSb \;|\; \varepsilon$.
It is the natural, well-formed grammar for the words of the form $a^n b^n$ for all $n \geq 0$.
Cf. this example on Wikipedia.
However, it is not in Chomsky normal form.
End of explanation """
g2 = Grammaire(
    ['x', 'y', 'z', '+', '-', '*', '/', '(', ')'],  # Production alphabet
    ['S'],  # Working alphabet
    'S',    # Initial symbol (only one)
    [   # Rules
        ('S', ['x']),  # S -> x
        ('S', ['y']),  # S -> y
        ('S', ['z']),  # S -> z
        ('S', ['S', '+', 'S']),  # S -> S + S
        ('S', ['S', '-', 'S']),  # S -> S - S
        ('S', ['S', '*', 'S']),  # S -> S * S
        ('S', ['S', '/', 'S']),  # S -> S / S
        ('S', ['(', 'S', ')']),  # S -> (S)
    ],
    nom="G2"
)
print(g2)
""" Explanation: 2.1.4 Second grammar example (non-Chomsky)
Here is another basic example, the grammar $G_2$ which generates the correctly parenthesized arithmetic expressions in the three variables $x$, $y$ and $z$.
A single production rule, or a union of production rules, suffices:
$$ S \rightarrow x \;|\; y \;|\; z \;|\; S+S \;|\; S-S \;|\; S*S \;|\; S/S \;|\; (S). $$
Cf. this other example on Wikipedia.
End of explanation """
g3 = Grammaire(
    # Production alphabet: actual English words (with a trailing space so that the printed sentence stays readable)
    ['she ', 'eats ', 'with ', 'fish ', 'fork ', 'a ', 'an ', 'ork ', 'sword '],
    # Working alphabet: word categories, e.g. V for verbs, P for pronouns, etc.
    ['S', 'NP ', 'VP ', 'PP ', 'V ', 'Det ', 'DetVo ', 'N ', 'NVo ', 'P '],
    # Det = a : determiner
    # DetVo = an : determiner before a noun starting with a vowel
    # N = (fish, fork, sword) : a noun
    # NVo = ork : a noun starting with a vowel
    # NP = she | a (fish, fork, sword) | an ork : a subject
    # V = eats : conjugated verb
    # P = with : preposition
    # VP = eats : conjugated verb followed by an object
    # PP : with NP : prepositional complement
    'S',  # Initial symbol (only one)
    [   # Rules
        # Sentence-construction rules
        ( 'S',   ['NP ', 'VP '] ),      # 'S' -> 'NP' 'VP'
        ( 'VP ', ['VP ', 'PP '] ),      # 'VP' -> 'VP' 'PP'
        ( 'VP ', ['V ', 'NP '] ),       # 'VP' -> 'V' 'NP'
        ( 'PP ', ['P ', 'NP '] ),       # 'PP' -> 'P' 'NP'
        ( 'NP ', ['Det ', 'N '] ),      # 'NP' -> 'Det' 'N'
        ( 'NP ', ['DetVo ', 'NVo '] ),  # 'NP' -> 'DetVo' 'NVo'
        # Word-production rules
        ( 'VP ', ['eats '] ),    # 'VP' -> 'eats '
        ( 'NP ', ['she '] ),     # 'NP' -> 'she '
        ( 'V ', ['eats '] ),     # 'V' -> 'eats '
        ( 'P ', ['with '] ),     # 'P' -> 'with '
        ( 'N ', ['fish '] ),     # 'N' -> 'fish '
        ( 'N ', ['fork '] ),     # 'N' -> 'fork '
        ( 'N ', ['sword '] ),    # 'N' -> 'sword '
        ( 'NVo ', ['ork '] ),    # 'NVo' -> 'ork '
        ( 'Det ', ['a '] ),      # 'Det' -> 'a '
        ( 'DetVo ', ['an '] ),   # 'DetVo' -> 'an '
    ],
    nom="G3"
)
print(g3)
""" Explanation: 2.1.5 Last grammar example
Here is a last, less basic example: the grammar $G_3$, which generates "simple" (and very limited) sentences in English.
Inspired by this example on Wikipedia (in English).
This grammar $G_3$ is in Chomsky normal form.
End of explanation """
def estBienFormee(self: Grammaire) -> bool:
    """ Checks that G is well-formed.
    """
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    tests = [
        s in v,  # s is indeed a working variable
        sigma.isdisjoint(v),  # production letters and working variables are disjoint
        all(
            regle[0] in v  # left-hand sides of rules are working variables
            and  # right-hand sides of rules are made of variables or letters
            all(r in sigma | v for r in regle[1])
            for regle in regles
        )
    ]
    return all(tests)

# We also attach the function as a method (just in case...)
Grammaire.estBienFormee = estBienFormee

for g in [g1, g2, g3]:
    print(g)
    print("Is the grammar", g.nom, "well-formed?", estBienFormee(g))
    print()
""" Explanation: We will use these example grammars later on, to check that our functions are correctly written.
2.2 Checking that a grammar is well-formed
We want to be able to check that a grammar $G$ (i.e., an object instance of Grammaire) is well-formed (cf. your formal-language course for a proper definition):
$S$ must be a working variable, i.e., $S \in V$,
Production variables and working variables must be disjoint, i.e., $\Sigma \cap V = \emptyset$,
For each rule, $r = A \rightarrow w$, the left-hand side is a single working variable and the right-hand side is a word, empty or made of production or working variables, i.e., $A \in V$, and $w \in (\Sigma \cup V)^{\star}$,
This is easily checked with the following function:
End of explanation """
g4 = Grammaire(
    ['a', 'b'],  # Production alphabet
    ['S'],       # Working alphabet
    'S',         # Initial symbol (only one)
    [   # Rules
        ('S', []),               # S -> ε
        ('S', ['a', 'S', 'b']),  # S -> a S b
        ('a', ['a', 'a']),       # a -> a a, this rule is not well-formed (its left-hand side is a letter)
    ],
    nom="G4"
)
print(g4)
print("Is the grammar", g4.nom, "well-formed?", estBienFormee(g4))
""" Explanation: We can define another grammar that is not well-formed, to see what happens.
This grammar $G_4$ generates the words of the form $a^{n+k} b^n$ for $n,k \in \mathbb{N}$, but it is given a doubling rule for the $a$'s: $a \rightarrow a a$ (note that $a$, a production variable, appears on the left-hand side of a rule).
End of explanation """
g5 = Grammaire(
    ['a', 'b'],  # Production alphabet
    ['S', 'A'],  # Working alphabet
    'S',         # Initial symbol (only one)
    [   # Rules
        ('S', []),               # S -> ε
        ('S', ['A', 'S', 'b']),  # S -> A S b
        ('A', ['A', 'A']),       # A -> A A, this is how we handle a -> a a
        ('A', ['a']),            # A -> a
    ],
    nom="G5"
)
print(g5)
print("Is the grammar", g5.nom, "well-formed?", estBienFormee(g5))
""" Explanation: Just out of curiosity, here it is transformed into a well-formed grammar; we only needed to add a working variable $A$ that can produce $a$ or $A A$:
End of explanation """
def estChomsky(self: Grammaire) -> bool:
    """ Checks that G is in Chomsky normal form. """
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    estBienChomsky = all(
        (   # S -> epsilon
            regle[0] == s and not regle[1]
        ) or (  # A -> a
            len(regle[1]) == 1
            and regle[1][0] in sigma  # a in Sigma
        ) or (  # A -> B C
            len(regle[1]) == 2
            and regle[1][0] in v  # B in V, not Sigma
            and regle[1][1] in v  # C in V, not Sigma
        )
        for regle in regles
    )
    return estBienChomsky and estBienFormee(self)

# We also attach the function as a method (just in case...)
Grammaire.estChomsky = estChomsky
""" Explanation: 2.3 Checking that a grammar is in Chomsky normal form
We now want to be able to check that a grammar $G$ (i.e., an object instance of Grammaire) is indeed in Chomsky normal form. Indeed, the CKY algorithm has no chance of working if the grammar is not in the right form.
For $G$ to be in Chomsky normal form:
- it must first be well-formed (cf. above),
- and each rule must be
    - either of the form $S \rightarrow \varepsilon$,
    - or of the form $A \rightarrow a$ for $(A, a)$ in $V \times \Sigma$,
    - or of the form $A \rightarrow B C$ for $(A, B, C)$ in $V^3$ (some textbooks also require that the initial symbol $S$ is never produced, i.e., $B,C \neq S$, but this changes nothing for the algorithm implemented below).
This is easily checked, point by point, in the following function:
End of explanation """
for g in [g1, g2, g3, g4, g5]:
    print(g)
    print("Is the grammar", g.nom, "well-formed?", estBienFormee(g))
    print("Is the grammar", g.nom, "in Chomsky normal form?", estChomsky(g))
    print()
""" Explanation: We can test this with the five grammars defined above ($G_1$, $G_2$, $G_3$, $G_4$, $G_5$). Only the grammar $G_3$ is in Chomsky form.
End of explanation """
g6 = Grammaire(
    ['a', 'b'],            # Production alphabet
    ['S', 'T', 'A', 'B'],  # Working alphabet
    'S',                   # Initial symbol (only one)
    [   # Rules
        ('S', []),          # S -> ε, S is erased if we want to produce the empty word
        # We split the rule S -> A S B in two:
        ('S', ['A', 'T']),  # S -> A T
        ('T', ['S', 'B']),  # T -> S B
        ('A', ['A', 'A']),  # A -> A A, this is how we handle a -> a a
        # Letter production
        ('A', ['a']),       # A -> a
        ('B', ['b']),       # B -> b
    ],
    nom="G6"
)
print(g6)
print("Is the grammar", g6.nom, "well-formed?", estBienFormee(g6))
print("Is the grammar", g6.nom, "in Chomsky normal form?", estChomsky(g6))
""" Explanation: By hand, we can transform $G_5$ to put it into Chomsky form (and then move on to CYK). Note that this transformation is automatic: it is implemented in the general case (of a well-formed grammar $G$) below, in part 2.6.
End of explanation """
def cocke_kasami_younger(self, w):
    """ Checks whether the word w is in L(G).
    """
    assert estChomsky(self), "Error: {} is not in Chomsky normal form, the Cocke-Kasami-Younger algorithm will not work.".format(self.nom)
    sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r
    n = len(w)
    E = dict()  # Of size n^2
    # Special case: test whether the empty word is in L(G)
    if n == 0:
        return (s, []) in regles, E
    # Loop in O(n^2)
    for i in range(n):
        for j in range(n):
            E[(i, j)] = set()
    # Loop in O(n x |G|)
    for i in range(n):
        for regle in regles:
            # If regle has the form A -> a
            if len(regle[1]) == 1:
                A = regle[0]
                a = regle[1][0]
                if w[i] == a:  # Note: this is the only place where the word w is used!
                    E[(i, i)] = E[(i, i)] | {A}
    # Loop in O(n^3 x |G|)
    for d in range(1, n):          # Length of the span
        for i in range(n - d):     # Start of the span
            j = i + d              # End of the span: we look at w[i]..w[j]
            for k in range(i, j):  # Split point inside the span, ..w[k].., excluding the end
                for regle in regles:
                    # If regle has the form A -> B C
                    if len(regle[1]) == 2:
                        A = regle[0]
                        B, C = regle[1]
                        if B in E[(i, k)] and C in E[(k + 1, j)]:
                            E[(i, j)] = E[(i, j)] | {A}
    # Done: it now suffices to read off the table built by dynamic programming
    return s in E[(0, n - 1)], E

# We also attach the function as a method (just in case...)
Grammaire.genere = cocke_kasami_younger
""" Explanation: 2.4 (at last) The Cocke-Kasami-Younger algorithm
We finally come to the Cocke-Kasami-Younger algorithm.
The algorithm takes a well-formed grammar $G$, of size $|G|$ (defined as the sum of the lengths of $\Sigma$ and $V$ plus the sum of the sizes of the rules), together with a word $w$ of size $n = |w|$ (careful: it is not a str but a list of variables, List[Var], i.e., a list of str).
The goal is to check whether the word $w$ can be generated by the grammar $G$, i.e., to decide whether $w \in L(G)$.
For the details of how it works, cf. the Python code below, or the Wikipedia page.
The algorithm has:

- a memory complexity in $\mathcal{O}(|G| + |w|^2)$,
- a time complexity in $\mathcal{O}(|G| \times |w|^3)$,

which shows that the word problem for grammars in Chomsky form is in $\mathcal{P}$ (polynomial time, which is already nice) and in reasonable time (cubic in $n = |w|$, which is even better!).
We use a hash table E that will contain, at the end of the computation, the sets $E_{i, j}$ defined by:
$$ E_{i, j} := \{ A \in V : w[i, j] \in L_G(A) \}.$$
Here we write $w[i, j] = w_i \dots w_j$ for the subword with indices $i,\dots,j$, and $L_G(A)$ for the language generated by $G$ starting from the symbol $A$ (and not from the initial symbol $S$).
Note: the hash table is not really required, a list of lists would work too, but the notation would be less close to the one used in the math above.
End of explanation
"""
def testeMot(g, w):
    """ Pretty display for a test """
    print("# Test whether w is in L(G):")
    print("  For", g.nom, "and w =", w)
    estDansLG, E = cocke_kasami_younger(g, w)
    if estDansLG:
        print("  ==> This word is indeed generated by G!")
    else:
        print("  ==> This word is not generated by G!")
    return estDansLG, E

"""
Explanation: 2.5 Examples
We now show examples of use of this cocke_kasami_younger function, with the grammars $G_i$ defined above and a few example words $w$.
End of explanation
"""
print(g3)
print(estChomsky(g3))

w1 = [ "she ", "eats ", "a ", "fish ", "with ", "a ", "fork " ]  # True
estDansLG1, E1 = testeMot(g3, w1)
"""
Explanation: 2.5.1 With $G_3$
End of explanation
"""
for k in E1.copy():
    if k in E1 and not E1[k]:  # We remove the keys that have an empty E[(i, j)]
        del(E1[k])

print(E1)
"""
Explanation: For this example, we can display the table E (showing only the cells with a non-empty $E_{i, j}$):
End of explanation
"""
w2 = [ "she ", "attacks ", "a ", "fish ", "with ", "a ", "fork " ]  # False
estDansLG2, E2 = testeMot(g3, w2)

w3 = [ "she ", "eats ", "an ", "ork ", "with ", "a ", "sword " ]  # True
estDansLG3, E3 = testeMot(g3, w3)
"""
Explanation:
End of explanation
"""
w4 = [ "she ", "eats ", "an ", "fish ", "with ", "a ", "fork " ]  # False
estDansLG4, E4 = testeMot(g3, w4)

w5 = [ "she ", "eat ", "a ", "fish ", "with ", "a ", "fork " ]  # False
estDansLG5, E5 = testeMot(g3, w5)

w6 = [ "she ", "eats ", "a ", "fish ", "with ", "a ", "fish ", "with ", "a ", "fish ", "with ", "a ", "fish ", "with ", "a ", "fish " ]  # True
estDansLG6, E6 = testeMot(g3, w6)
"""
Explanation: Some more examples:
End of explanation
"""
print(g6)

for w in [ [], ['a', 'b'], ['a', 'a', 'a', 'b', 'b', 'b'],  # True, True, True
           ['a', 'a', 'a', 'a', 'b', 'b', 'b'],  # True
           ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'],  # True
           ['a', 'b', 'a'], ['a', 'a', 'a', 'b', 'b', 'b', 'b'],  # False, False
           ['c'], ['a', 'a', 'a', 'c'],  # False, False
         ]:
    testeMot(g6, w)
"""
Explanation: 2.5.2 With $G_6$
End of explanation
"""
def miseChomsky(self):
    """ Puts the grammar self, which must be well formed, into Chomsky normal form.
    - We assume that the production alphabet Sigma is contained in {a,..,z},
    - We assume that the working alphabet v is contained in {A,..,Z}.
""" assert estBienFormee(self), "Erreur : {} n'est pas en bien formée, la mise en forme normale de Chomsky ne fonctionnera pas.".format(self.nom) sigma, v, s, regles = set(self.sigma), set(self.v), self.s, self.r if estChomsky(self): print("Info : la grammaire {} est déjà en forme normale de Chomsky, il n'y a rien à faire.".format(self.nom)) return Grammaire(sigma, v, s, regles) assert sigma < set(chr(i) for i in range(ord('a'), ord('z') + 1)), "Erreur : {} n'a pas ses lettres de production Sigma dans 'a'..'z' ...".format(self.nom) assert v < set(chr(i) for i in range(ord('A'), ord('Z') + 1)), "Erreur : {} n'a pas ses lettres de travail V dans 'A'..'Z' ...".format(self.nom) # Algorithme en deux étapes, G --> G', puis G' --> G'' # 1. G --> G' : On ajoute des variables de travail et on substitue a -> V_a dans les autres règles # On pose les attributs de G', qui vont être changés sigma2 = list(sigma) v2 = set(v) s2 = s regles2 = [] V_ = lambda a: 'V_{}'.format(a) for a in sigma: v2.add(V_(a)) regles2.append([V_(a), [a]]) # Ajout de la règle V_a -> a (production de la lettre correspondante) substitutionLettre = lambda b: V_(b) if (b in sigma) else b substitutionMot = lambda lb: [substitutionLettre(b) for b in lb] for regle in regles: S = regle[0] w = regle[1] if len(w) >= 2: # Si ce n'est pas une règle A -> epsilon regles2.append([S, substitutionMot(w)]) else: # Ici on devrait garder la possibilte de creer le mot vide regles2.append([S, w]) nom2 = self.nom + "'" print(Grammaire(list(sigma2), list(v2), s2, regles2, nom=nom2)) # 2. G' --> G'' : On découpe les règles A -> A1..An qui ont n > 2 # On pose les attributs de G'', qui vont être changés sigma3 = list(sigma2) v3 = set(v2) s3 = s2 regles3 = [] for k, regle in enumerate(regles2): S = regle[0] w = regle[1] # w = S1 .. 
Sn n = len(w) if n > 2: prime = lambda Si: "%s'_%d" % (Si, k) # Ajouter le k dans le nom assure que les nouvelles variables de travail sont toutes uniques # Premiere règle : S -> S_1 S'_2 regles3.append([S, [w[0], prime(w[1])]]) v3.add(prime(w[1])) for i in range(1, len(w) - 2): # Pour chaque règle intermédiaire : S'_i -> S_i S'_{i+1} regles3.append([prime(w[i]), [w[i], prime(w[i + 1])]]) v3.add(prime(w[i])) v3.add(prime(w[i + 1])) # Dernière règle : S'_{n-1} -> S_{n-1} S_n regles3.append([prime(w[n - 2]), [w[n - 2], w[n - 1]]]) v3.add(prime(w[n - 2])) else: regles3.append([S, w]) # Terminé nom3 = self.nom + "''" return Grammaire(list(sigma3), list(v3), s3, regles3, nom=nom3) # On ajoute la fonction comme une méthode (au cas où...) Grammaire.miseChomsky = miseChomsky """ Explanation: 2.6 Mise en forme normale de Chomsky (bonus) On pourrait aussi implémenter la mise en forme normale de Chomsky, comme exposée et prouvée dans le développement. La preuve faite dans le développement garantit que la fonction ci-dessous transforme une grammaire $G$ en grammaire équivalente $G'$, avec l'éventuelle perte du mot vide $\varepsilon$ : $$ L(G') = L(G) \setminus { \varepsilon }. $$ L'algorithme aura : une complexité en mémoire en $\mathcal{O}(|G|)$, une complexité en temps en $\mathcal{O}(|G| |\Sigma_G|)$. C'est un algorithme en deux étapes : D'abord, on transforme $G$ en $G'$ : on ajoute des variables de travail pour chaque lettre de production, $V_a \in V$ pour $a \in \Sigma$, on remplace chaque $a$ dans des membres gauches de règles par la nouvelle $V_a$, et ensuite on ajoute des règles de production de lettre $V_a \rightarrow a$ dans $R$, Ensuite, $G''$ est obtenue en découpant les règles de $G$ qui sont de tailles $> 2$ : une règle $S \rightarrow S_1 \dots S_n$ devient $n-1$ règles : $S \rightarrow S_1 S_2'$, $S_i' \rightarrow S_i S_{i+1}'$ (pour $i = 2,\dots,n - 2$), et $S_{n-1}' \rightarrow S_{n-1} S_n$. 
It is also necessary to add all these new variables $S_i'$ (while making sure that they are unique, for each rule); for that we append the rule number: $S_i'=$ A'_k for the k-th rule and the symbol $S_i=$ A.
End of explanation
"""
print(g1)
print("\n(No) Is the grammar", g1.nom, "in Chomsky form?", estChomsky(g1))

print("\nWe try to put it in Chomsky normal form...\n")
g1_Chom = miseChomsky(g1)
print(g1_Chom)
print("\n ==> Is the grammar", g1_Chom.nom, "in Chomsky form?", estChomsky(g1_Chom))
"""
Explanation: 2.6.1 Example for $G_1$
End of explanation
"""
print(g5)
print("\n(No) Is the grammar", g5.nom, "in Chomsky form?", estChomsky(g5))

print("\nWe try to put it in Chomsky normal form...\n")
g5_Chom = miseChomsky(g5)
print(g5_Chom)
print("\n ==> Is the grammar", g5_Chom.nom, "in Chomsky form?", estChomsky(g5_Chom))
"""
Explanation: 2.6.2 Example for $G_5$
End of explanation
"""
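To make the rule-splitting step concrete in isolation, here is a minimal standalone sketch of the binarization idea, with rules encoded as plain `(head, body)` pairs rather than the `Grammaire` objects used above. Like the notebook's `prime` helper, the fresh primed names are derived from the symbol name and the rule number, which assumes a symbol does not repeat inside one long rule:

```python
# Minimal sketch of step 2 above: split every rule A -> S1 ... Sn (n > 2)
# into binary rules, introducing fresh primed variables unique per rule.
def binarize(rules):
    out = []
    for k, (head, body) in enumerate(rules):
        # Peel off the leftmost symbol while the body is still too long
        while len(body) > 2:
            fresh = "{}'_{}".format(body[1], k)   # e.g. B'_0 for rule number 0
            out.append((head, [body[0], fresh]))  # head -> S1 fresh
            head, body = fresh, body[1:]          # continue with fresh -> S2 ... Sn
        out.append((head, list(body)))            # final binary (or unary) rule
    return out

# Example: S -> A B C D becomes S -> A B'_0, B'_0 -> B C'_0, C'_0 -> C D
print(binarize([('S', ['A', 'B', 'C', 'D'])]))
```

Binary and unary rules pass through unchanged, so applying `binarize` to a grammar already in Chomsky form is a no-op.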
tensorflow/privacy
tensorflow_privacy/privacy/privacy_tests/membership_inference_attack/codelabs/word2vec_codelab.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2022 The TensorFlow Authors. End of explanation """ # install dependencies !pip install gensim --upgrade !pip install git+https://github.com/tensorflow/privacy from IPython.display import clear_output clear_output() # imports import smart_open import random import gensim.utils import os import bz2 import multiprocessing import logging import tqdm import xml import numpy as np from gensim.models import Word2Vec from six import raise_from from gensim.corpora.wikicorpus import WikiCorpus, init_to_ignore_interrupt, \ ARTICLE_MIN_WORDS, _process_article, IGNORED_NAMESPACES, get_namespace from pickle import PicklingError from xml.etree.cElementTree import iterparse, ParseError from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import membership_inference_attack as mia from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import data_structures as mia_data_structures from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import plotting as mia_plotting from tensorflow_privacy.privacy.privacy_tests.secret_sharer.exposures import compute_exposure_interpolation, compute_exposure_extrapolation # all the functions we need to get data and canary it # we will use google drive to store data models to be able to reuse them # you can change this to local directories by changing DATA_DIR and MODEL_DIR # make sure to copy the data locally, otherwise training 
will be very slow # code in this cell originates from https://github.com/google/embedding-tests # some edits were made to allow saving to google drive, and to add canaries from google.colab import drive drive.mount('/content/drive/') LOCAL_DATA_DIR = 'data_dir' LOCAL_MODEL_DIR = 'model_dir' DATA_DIR = '/content/drive/MyDrive/w2v/data_dir/' MODEL_DIR = '/content/drive/MyDrive/w2v/model_dir/' # made up words will be used for canaries MADE_UP_WORDS = [] for i in range(20): MADE_UP_WORDS.append("o"*i + "oongaboonga") # deterministic dataset partitioning def gen_seed(idx, n=10000): random.seed(12345) seeds = [] for i in range(n): s = random.random() seeds.append(s) return seeds[idx] def make_wiki9_dirs(data_dir): # makes all the directories we'll need to store data wiki9_path = os.path.join(data_dir, 'wiki9', 'enwik9.bz2') wiki9_dir = os.path.join(data_dir, 'wiki9', 'articles') wiki9_split_dir = os.path.join(data_dir, 'wiki9', 'split') for d in [wiki9_dir, wiki9_split_dir]: if not os.path.exists(d): os.makedirs(d) return wiki9_path, wiki9_dir, wiki9_split_dir def extract_pages(f, filter_namespaces=False, filter_articles=None): try: elems = (elem for _, elem in iterparse(f, events=("end",))) except ParseError: yield None, "", None elem = next(elems) namespace = get_namespace(elem.tag) ns_mapping = {"ns": namespace} page_tag = "{%(ns)s}page" % ns_mapping text_path = "./{%(ns)s}revision/{%(ns)s}text" % ns_mapping title_path = "./{%(ns)s}title" % ns_mapping ns_path = "./{%(ns)s}ns" % ns_mapping pageid_path = "./{%(ns)s}id" % ns_mapping try: for elem in elems: if elem.tag == page_tag: title = elem.find(title_path).text text = elem.find(text_path).text if filter_namespaces: ns = elem.find(ns_path).text if ns not in filter_namespaces: text = None if filter_articles is not None: if not filter_articles( elem, namespace=namespace, title=title, text=text, page_tag=page_tag, text_path=text_path, title_path=title_path, ns_path=ns_path, pageid_path=pageid_path): text = None pageid = 
elem.find(pageid_path).text yield title, text or "", pageid # empty page will yield None elem.clear() except ParseError: yield None, "", None return class MyWikiCorpus(WikiCorpus): def get_texts(self): logger = logging.getLogger(__name__) articles, articles_all = 0, 0 positions, positions_all = 0, 0 tokenization_params = ( self.tokenizer_func, self.token_min_len, self.token_max_len, self.lower) texts = ((text, title, pageid, tokenization_params) for title, text, pageid in extract_pages(bz2.BZ2File(self.fname), self.filter_namespaces, self.filter_articles)) print("got texts") pool = multiprocessing.Pool(self.processes, init_to_ignore_interrupt) try: # process the corpus in smaller chunks of docs, # because multiprocessing.Pool # is dumb and would load the entire input into RAM at once... for group in gensim.utils.chunkize(texts, chunksize=10 * self.processes, maxsize=1): for tokens, title, pageid in pool.imap(_process_article, group): articles_all += 1 positions_all += len(tokens) # article redirects and short stubs are pruned here if len(tokens) < self.article_min_tokens or \ any(title.startswith(ignore + ':') for ignore in IGNORED_NAMESPACES): continue articles += 1 positions += len(tokens) yield (tokens, (pageid, title)) except KeyboardInterrupt: logger.warn( "user terminated iteration over Wikipedia corpus after %i" " documents with %i positions " "(total %i articles, %i positions before pruning articles" " shorter than %i words)", articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS ) except PicklingError as exc: raise_from( PicklingError('Can not send filtering function {} to multiprocessing, ' 'make sure the function can be pickled.'.format( self.filter_articles)), exc) else: logger.info( "finished iterating over Wikipedia corpus of %i " "documents with %i positions " "(total %i articles, %i positions before pruning articles" " shorter than %i words)", articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS ) self.length = articles 
# cache corpus length finally: pool.terminate() def write_wiki9_articles(data_dir): wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir) wiki = MyWikiCorpus(wiki9_path, dictionary={}, filter_namespaces=False) i = 0 for text, (p_id, title) in tqdm.tqdm(wiki.get_texts()): i += 1 if title is None: continue article_path = os.path.join(wiki9_dir, p_id) if os.path.exists(article_path): continue with open(article_path, 'wb') as f: f.write(' '.join(text).encode("utf-8")) print("done", i) def split_wiki9_articles(data_dir, exp_id=0): wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir) all_docs = list(os.listdir(wiki9_dir)) print("wiki9 len", len(all_docs)) print(wiki9_dir) s = gen_seed(exp_id) random.seed(s) random.shuffle(all_docs) random.seed() n = len(all_docs) // 2 return all_docs[:n], all_docs[n:] def read_wiki9_train_split(data_dir, exp_id=0): wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir) split_path = os.path.join(wiki9_split_dir, 'split{}.train'.format(exp_id)) if not os.path.exists(split_path): train_docs, _ = split_wiki9_articles(exp_id=exp_id) with open(split_path, 'w') as f: for doc in tqdm.tqdm(train_docs): with open(os.path.join(wiki9_dir, doc), 'r') as fd: f.write(fd.read()) f.write(' ') return split_path def build_vocab(word2vec_model): vocab = word2vec_model.wv.index_to_key counts = [word2vec_model.wv.get_vecattr(word, "count") for word in vocab] sorted_inds = np.argsort(counts) sorted_vocab = [vocab[ind] for ind in sorted_inds] return sorted_vocab def sample_words(vocab, count, rng): inds = rng.choice(len(vocab), count, replace=False) return [vocab[ind] for ind in inds], rng def gen_canaries(num_canaries, canary_repeat, vocab_model_path, seed=0): # create canaries, injecting made up words into the corpus existing_w2v = Word2Vec.load(vocab_model_path) existing_vocab = build_vocab(existing_w2v) rng = np.random.RandomState(seed) all_canaries = [] for i in range(num_canaries): new_word = 
MADE_UP_WORDS[i%len(MADE_UP_WORDS)] assert new_word not in existing_vocab canary_words, rng = sample_words(existing_vocab, 4, rng) canary = canary_words[:2] + [new_word] + canary_words[2:] all_canaries.append(canary) all_canaries = all_canaries * canary_repeat return all_canaries # iterator for training documents, with an option to canary class WIKI9Articles: def __init__(self, docs, data_dir, verbose=0, ssharer=False, num_canaries=0, canary_repeat=0, canary_seed=0, vocab_model_path=None): self.docs = [(0, doc) for doc in docs] if ssharer: all_canaries = gen_canaries( num_canaries, canary_repeat, vocab_model_path, canary_seed) self.docs.extend([(1, canary) for canary in all_canaries]) np.random.RandomState(0).shuffle(self.docs) wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(data_dir) self.dirname = wiki9_dir self.verbose = verbose def __iter__(self): for is_canary, fname in tqdm.tqdm(self.docs) if self.verbose else self.docs: if not is_canary: for line in smart_open.open(os.path.join(self.dirname, fname), 'r', encoding='utf-8'): yield line.split() else: yield fname def train_word_embedding(data_dir, model_dir, exp_id=0, use_secret_sharer=False, num_canaries=0, canary_repeat=1, canary_seed=0, vocab_model_path=None): # this function trains the word2vec model, after setting up the training set logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) params = { 'sg': 1, 'negative': 25, 'alpha': 0.05, 'sample': 1e-4, 'workers': 48, 'epochs': 5, 'window': 5, } train_docs, test_docs = split_wiki9_articles(data_dir, exp_id) print(len(train_docs), len(test_docs)) wiki9_articles = WIKI9Articles( train_docs, data_dir, ssharer=use_secret_sharer, num_canaries=num_canaries, canary_repeat=canary_repeat, canary_seed=canary_seed, vocab_model_path=vocab_model_path) if not os.path.exists(model_dir): os.makedirs(model_dir) model = Word2Vec(wiki9_articles, **params) if not use_secret_sharer: model_path = os.path.join(model_dir, 
'wiki9_w2v_{}.model'.format(exp_id)) else: model_path = os.path.join(model_dir, 'wiki9_w2v_{}_{}_{}_{}.model'.format( exp_id, num_canaries, canary_repeat, canary_seed )) model.save(model_path) return model_path, train_docs, test_docs # setup directories wiki9_path, wiki9_dir, wiki9_split_dir = make_wiki9_dirs(DATA_DIR) local_wiki9_path, local_wiki9_dir, local_wiki9_splitdir = make_wiki9_dirs(LOCAL_DATA_DIR) # download and format documents !wget http://mattmahoney.net/dc/enwik9.zip !unzip enwik9.zip !bzip2 enwik9 !cp enwik9.bz2 $wiki9_path !cp $wiki9_path $local_wiki9_path write_wiki9_articles(LOCAL_DATA_DIR) # need local data for fast training """ Explanation: Use Membership Inference and Secret Sharer to Test Word Embedding Models This notebook shows how to run privacy tests for word2vec models, trained with gensim. Models are trained using the procedure used in https://arxiv.org/abs/2004.00053, code for which is found here: https://github.com/google/embedding-tests . We run membership inference as well as secret sharer. Membership inference attempts to identify whether a given document was included in training. Secret sharer adds random "canary" documents into training, and identifies which canary was added. 
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/privacy_tests/membership_inference_attack/codelabs/word2vec_codelab.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/privacy_tests/membership_inference_attack/codelabs/word2vec_codelab.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> End of explanation """ for i in range(10): if os.path.exists(os.path.join(MODEL_DIR, f"wiki9_w2v_{i}.model")): print("done", i) continue model_path, train_docs, test_docs = train_word_embedding(LOCAL_DATA_DIR, MODEL_DIR, exp_id=i) print(model_path) """ Explanation: Membership Inference Attacks Let's start by running membership inference on a word2vec model. We'll start by training a bunch of word2vec models with different train/test splits. This can take a long time, so be patient! End of explanation """ from re import split def loss(model, window): # compute loss for a single window of 5 tokens try: sum_embedding = np.array([model.wv[word] for word in window]).sum(axis=0) except: return np.nan middle_embedding = model.wv[window[2]] context_embedding = 0.25*(sum_embedding - middle_embedding) return np.linalg.norm(middle_embedding - context_embedding) def loss_per_article(model, article): # compute loss for a full document losses = [] article = article.split(' ') embs = [model.wv[word] if word in model.wv else np.nan for word in article] for i in range(len(article) - 4): middle_embedding = embs[i+2] context_embedding = 0.25*(np.mean(embs[i:i+2] + embs[i+3:i+5])) losses.append(np.linalg.norm(middle_embedding - context_embedding)) return np.nanmean(losses) """ Explanation: We now define our loss function. 
We follow https://arxiv.org/abs/2004.00053, computing the loss of a document as the average loss over all 5 token "windows" in the document.
End of explanation
"""
def loss(model, window):
    # compute loss for a single window of 5 tokens
    try:
        sum_embedding = np.array([model.wv[word] for word in window]).sum(axis=0)
    except:
        return np.nan
    middle_embedding = model.wv[window[2]]
    context_embedding = 0.25*(sum_embedding - middle_embedding)
    return np.linalg.norm(middle_embedding - context_embedding)

def loss_per_article(model, article):
    # compute loss for a full document: the average window loss over all
    # 5-token windows; windows containing out-of-vocabulary words
    # contribute NaN and are skipped by np.nanmean
    losses = []
    article = article.split(' ')
    embs = [model.wv[word] if word in model.wv else None for word in article]
    for i in range(len(article) - 4):
        window = embs[i:i+5]
        if any(e is None for e in window):
            losses.append(np.nan)
            continue
        middle_embedding = window[2]
        # mean of the four context embeddings, matching `loss` above
        context_embedding = 0.25*(window[0] + window[1] + window[3] + window[4])
        losses.append(np.linalg.norm(middle_embedding - context_embedding))
    return np.nanmean(losses)
"""
Explanation: Let's now get the losses of all models on all documents. This also takes a while, so we'll only get a subset.
End of explanation """ # global threshold MIA attack train_docs, test_docs = split_wiki9_articles(LOCAL_DATA_DIR, 1000) train_losses, test_losses = [], [] for train_doc in train_docs: ind = doc_lookup[train_doc] if ind >= all_losses.shape[0]: continue train_losses.append(all_losses[ind, 0]) for test_doc in test_docs: ind = doc_lookup[test_doc] if ind >= all_losses.shape[0]: continue test_losses.append(all_losses[ind, 0]) attacks_result_baseline = mia.run_attacks( mia_data_structures.AttackInputData( loss_train = -np.nan_to_num(train_losses), loss_test = -np.nan_to_num(test_losses))).single_attack_results[0] print('Global Threshold MIA attack:', f'auc = {attacks_result_baseline.get_auc():.4f}', f'adv = {attacks_result_baseline.get_attacker_advantage():.4f}') """ Explanation: Now let's run the global threshold membership inference attack. It gets an advantage of around 0.07. End of explanation """ # run LiRA from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import advanced_mia as amia good_inds = [] for i, (in_s, out_s) in enumerate(zip(in_scores, out_scores)): if len(in_s) > 0 and len(out_s) > 0: good_inds.append(i) for i in good_inds: assert len(in_scores[i]) > 0 assert len(in_scores[i]) > 0 scores = amia.compute_score_lira(all_losses[good_inds, 0], [in_scores[i] for i in good_inds], [out_scores[i] for i in good_inds], fix_variance=True) train_docs, test_docs = split_wiki9_articles(LOCAL_DATA_DIR, 1000) in_mask = np.zeros(len(good_inds), dtype=bool) for doc in train_docs: ind = doc_lookup[doc] if ind >= all_losses.shape[0]: continue if ind in good_inds: in_mask[good_inds.index(ind)] = True """ Explanation: And now we run LiRA. First we need to compute LiRA scores. 
End of explanation """ attacks_result_baseline = mia.run_attacks( mia_data_structures.AttackInputData( loss_train = scores[in_mask], loss_test = scores[~in_mask])).single_attack_results[0] print('Advanced MIA attack with Gaussian:', f'auc = {attacks_result_baseline.get_auc():.4f}', f'adv = {attacks_result_baseline.get_attacker_advantage():.4f}') """ Explanation: And now we threshold on LiRA scores, as before. Advantage goes from .07 to .13, it almost doubled! End of explanation """ vocab_model_path = os.path.join(MODEL_DIR, 'wiki9_w2v_1.model') interp_exposures = {} extrap_exposures = {} all_canaries = gen_canaries(10000, 1, vocab_model_path, 0) for repeat_count in [5, 10, 20]: model_path = os.path.join(MODEL_DIR, 'wiki9_w2v_0_20_{}_0.model'.format(repeat_count)) print(os.path.exists(model_path)) model_path, _, _ = train_word_embedding( LOCAL_DATA_DIR, MODEL_DIR, exp_id=0, use_secret_sharer=True, num_canaries=20, canary_repeat=repeat_count, canary_seed=0, vocab_model_path=vocab_model_path) canaried_model = Word2Vec.load(model_path) canary_losses = [loss(canaried_model, canary) for canary in all_canaries] loss_secrets = np.array(canary_losses[:20]) loss_ref = np.array(canary_losses[20:]) loss_secrets = {1: loss_secrets[~np.isnan(loss_secrets)]} loss_ref = loss_ref[~np.isnan(loss_ref)] exposure_interpolation = compute_exposure_interpolation(loss_secrets, loss_ref) exposure_extrapolation = compute_exposure_extrapolation(loss_secrets, loss_ref) interp_exposures[repeat_count] = exposure_interpolation[1] extrap_exposures[repeat_count] = exposure_extrapolation[1] """ Explanation: Secret Sharer Here, we're going to run a secret sharer attack on a word2vec model. Our canaries (generated above in gen_canaries) look like the following: "word1 word2 made_up_word word3 word4", where all the words except for the made up word are real words from the vocabulary. 
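Conceptually, the per-example score is a likelihood ratio: how much more likely is the observed loss under the "in" distribution than under the "out" distribution. Here is a simplified sketch of that idea with toy numbers (an illustration only, not the library's `compute_score_lira` implementation, which handles edge cases and vectorization):

```python
import numpy as np

# LiRA-style score sketch: fit Gaussians to a target's losses from "in"
# models and "out" models, then score an observed loss by the
# log-likelihood ratio log N(loss; mu_in, s) - log N(loss; mu_out, s),
# using one shared standard deviation (the "fixed variance" variant).
def lira_score(observed_loss, in_losses, out_losses):
    mu_in, mu_out = np.mean(in_losses), np.mean(out_losses)
    s = np.std(np.concatenate([in_losses, out_losses])) + 1e-12  # shared std
    log_p = lambda x, mu: -0.5 * ((x - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    return log_p(observed_loss, mu_in) - log_p(observed_loss, mu_out)

in_l = np.array([1.0, 1.1, 0.9])   # hypothetical losses when the doc was in training
out_l = np.array([2.0, 2.1, 1.9])  # hypothetical losses when it was held out
print(lira_score(1.05, in_l, out_l) > 0)  # closer to the "in" distribution -> True
```

A positive score is evidence of membership; thresholding these scores is exactly what the next cell does.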
The model's decision on where to put the made up word in embedding space will depend solely on the canary, which will make this an effective attack. We insert canaries with various repetition counts, and train some models: End of explanation """ for key in interp_exposures: print(f"Repeats: {key}, Interpolation Exposure: {np.median(interp_exposures[key])}, Extrapolation Exposure: {np.median(extrap_exposures[key])}") """ Explanation: And now let's run secret sharer! Exposure is quite high! End of explanation """
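For intuition, the rank-based definition of exposure from the secret sharer paper can be sketched as follows. This is an illustrative approximation with made-up losses, not the `compute_exposure_interpolation` / `compute_exposure_extrapolation` implementations used above:

```python
import numpy as np

# Rank-based exposure sketch: exposure = log2(n_ref + 1) - log2(rank),
# where rank is the canary's 1-based position among the reference losses
# (lower loss = more memorized = higher exposure).
def exposure_by_rank(secret_loss, reference_losses):
    reference_losses = np.asarray(reference_losses)
    rank = 1 + np.sum(reference_losses < secret_loss)  # 1-based rank
    return np.log2(len(reference_losses) + 1) - np.log2(rank)

refs = np.linspace(1.0, 2.0, 1023)  # 1023 hypothetical reference losses
print(exposure_by_rank(0.5, refs))  # best possible rank -> log2(1024) = 10.0
print(exposure_by_rank(3.0, refs))  # worst possible rank -> 0.0
```

An exposure near `log2(n_ref + 1)` means the inserted canary's loss is lower than essentially every non-inserted reference canary, i.e. strong evidence of memorization.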
lisa-1010/smart-tutor
code/experiment_setup_step_by_step.ipynb
mit
n_students = 10000 seqlen = 100 concept_tree = cdg.ConceptDependencyGraph() concept_tree.init_default_tree(n=N_CONCEPTS) print ("Initializing synthetic data sets...") for policy in ['random', 'expert', 'modulo']: filename = "{}stud_{}seq_{}.pickle".format(n_students, seqlen, policy) dgen.generate_data(concept_tree, n_students=n_students, seqlen=seqlen, policy=policy, filename="{}{}".format(SYN_DATA_DIR, filename)) print ("Data generation completed. ") """ Explanation: Generate data Only needs to be done once. NOTE: If you already have the pickle files in the "synthetic_data" folder, you do NOT need to run the following cell. End of explanation """ dataset_name = "10000stud_100seq_modulo" # load generated data from picke files and convert it to a format so we can feed it into an RNN for training # NOTE: This step is not necessary if you already have the data saved in the format for RNNs. data = dataset_utils.load_data(filename="../synthetic_data/{}.pickle".format(dataset_name)) input_data_, output_mask_, target_data_ = dataset_utils.preprocess_data_for_rnn(data) # Save numpy matrices to files so data loading is faster, since we don't have to do the conversion again. dataset_utils.save_rnn_data(input_data_, output_mask_, target_data_, dataset_name) # load the numpy matrices input_data_, output_mask_, target_data_ = dataset_utils.load_rnn_data(dataset_name) print input_data_.shape print target_data_.shape from sklearn.model_selection import train_test_split x_train, x_test, mask_train, mask_test, y_train, y_test = train_test_split(input_data_, output_mask_, target_data_, test_size=0.1, random_state=42) train_data = (x_train, mask_train, y_train) """ Explanation: Load data set Takes about 3 minutes for 10000 students with sequence length 100. End of explanation """ import models_dict_utils # Each RNN model can be identified by its model_id string. # We will save checkpoints separately for each model. # Models can have different architectures, parameter dimensions etc. 
and are specified in models_dict.json model_id = "learned_from_modulo" # Specify input / output dimensions and hidden size n_timesteps = 100 n_inputdim = 20 n_outputdim = 10 n_hidden = 32 # If you are creating a new RNN model or just to check if it already exists: # Only needs to be done once for each model models_dict_utils.check_model_exists_or_create_new(model_id, n_inputdim, n_hidden, n_outputdim) # Load model with parameters initialized randomly dmodel = dm.DynamicsModel(model_id=model_id, timesteps=100, load_checkpoint=False) # train model for two epochs (saves checkpoint after each epoch) # (checkpoint saves the weights, so we can load in pretrained models.) dmodel.train(train_data, n_epoch=2) # Load model from latest checkpoint dmodel = dm.DynamicsModel(model_id=model_id, timesteps=100, load_checkpoint=True) # train for 2 more epochs dmodel.train(train_data, n_epoch=2) dmodel.train(train_data, n_epoch=16) # important to cast preds as numpy array. preds = np.array(dmodel.predict(x_test[:1])) print preds.shape print preds[0,0] # Load model with different number of timesteps from checkpoint # Since RNN weights don't depend on # timesteps (weights are the same across time), we can load in the weights for # any number of timesteps. The timesteps parameter describes the # of timesteps in the input data. generator_model = dm.DynamicsModel(model_id=model_id, timesteps=1, load_checkpoint=True) # make a prediction preds = generator_model.predict(x_test[:1,:1, :]) print preds[0][0] """ Explanation: Build, train and save RNN Dynamics Model End of explanation """
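To see why the same checkpoint can be loaded for any `timesteps` value, here is a toy numpy sketch of an RNN cell unrolled for different sequence lengths with one shared set of parameters. The dimensions mirror `n_inputdim=20` and `n_hidden=32` above, but the actual tflearn model is of course more elaborate:

```python
import numpy as np

# A simple RNN cell's parameters (Wx, Wh, b) are shared across time,
# so the same weights can be unrolled for any number of timesteps T.
def rnn_unroll(x, Wx, Wh, b):
    h = np.zeros(Wh.shape[0])
    for t in range(x.shape[0]):  # x has shape (T, n_inputdim)
        h = np.tanh(x[t] @ Wx + h @ Wh + b)
    return h

rng = np.random.RandomState(0)
Wx, Wh, b = rng.randn(20, 32), rng.randn(32, 32) * 0.1, np.zeros(32)
h1 = rnn_unroll(rng.randn(1, 20), Wx, Wh, b)      # T = 1
h100 = rnn_unroll(rng.randn(100, 20), Wx, Wh, b)  # T = 100, same weights
```

This is why a checkpoint trained with `timesteps=100` can be reloaded with `timesteps=1` to use the model as a step-by-step generator.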
tuanavu/coursera-university-of-washington
machine_learning/4_clustering_and_retrieval/assigment/week6/6_hierarchical_clustering_blank.ipynb
mit
import graphlab import matplotlib.pyplot as plt import numpy as np import sys import os import time from scipy.sparse import csr_matrix from sklearn.cluster import KMeans from sklearn.metrics import pairwise_distances %matplotlib inline '''Check GraphLab Create version''' from distutils.version import StrictVersion assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.' """ Explanation: Hierarchical Clustering Hierarchical clustering refers to a class of clustering methods that seek to build a hierarchy of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means. Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import packages The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page. End of explanation """ wiki = graphlab.SFrame('people_wiki.gl/') """ Explanation: Load the Wikipedia dataset End of explanation """ wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text']) """ Explanation: As we did in previous assignments, let's extract the TF-IDF features: End of explanation """ from em_utilities import sframe_to_scipy # converter # This will take about a minute or two. tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf') """ Explanation: To run k-means on this dataset, we should convert the data matrix into a sparse matrix. End of explanation """ from sklearn.preprocessing import normalize tf_idf = normalize(tf_idf) """ Explanation: To be consistent with the k-means assignment, let's normalize all vectors to have unit norm. 
End of explanation """ def bipartition(cluster, maxiter=400, num_runs=4, seed=None): '''cluster: should be a dictionary containing the following keys * dataframe: original dataframe * matrix: same data, in matrix format * centroid: centroid for this particular cluster''' data_matrix = cluster['matrix'] dataframe = cluster['dataframe'] # Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow. kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=-1) kmeans_model.fit(data_matrix) centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_ # Divide the data matrix into two parts using the cluster assignments. data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \ data_matrix[cluster_assignment==1] # Divide the dataframe into two parts, again using the cluster assignments. cluster_assignment_sa = graphlab.SArray(cluster_assignment) # minor format conversion dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \ dataframe[cluster_assignment_sa==1] # Package relevant variables for the child clusters cluster_left_child = {'matrix': data_matrix_left_child, 'dataframe': dataframe_left_child, 'centroid': centroids[0]} cluster_right_child = {'matrix': data_matrix_right_child, 'dataframe': dataframe_right_child, 'centroid': centroids[1]} return (cluster_left_child, cluster_right_child) """ Explanation: Bipartition the Wikipedia dataset using k-means Recall our workflow for clustering text data with k-means: Load the dataframe containing a dataset, such as the Wikipedia text dataset. Extract the data matrix from the dataframe. Run k-means on the data matrix with some value of k. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. 
We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article). Let us modify the workflow to perform bipartitioning: Load the dataframe containing a dataset, such as the Wikipedia text dataset. Extract the data matrix from the dataframe. Run k-means on the data matrix with k=2. Divide the data matrix into two parts using the cluster assignments. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization. Visualize the bipartition of data. We'd like to be able to repeat Steps 3-6 multiple times to produce a hierarchy of clusters such as the following: (root) | +------------+-------------+ | | Cluster Cluster +------+-----+ +------+-----+ | | | | Cluster Cluster Cluster Cluster Each parent cluster is bipartitioned to produce two child clusters. At the very top is the root cluster, which consists of the entire dataset. Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster: dataframe: a subset of the original dataframe that correspond to member rows of the cluster matrix: same set of rows, stored in sparse matrix format centroid: the centroid of the cluster (not applicable for the root cluster) Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters). End of explanation """ wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=8, seed=1) """ Explanation: The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish. Note. 
For the purpose of the assignment, we set an explicit seed (seed=1) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs. End of explanation """ left_child """ Explanation: Let's examine the contents of one of the two clusters, which we call the left_child, referring to the tree visualization above. End of explanation """ right_child """ Explanation: And here is the content of the other cluster we named right_child. End of explanation """ def display_single_tf_idf_cluster(cluster, map_index_to_word): '''map_index_to_word: SFrame specifying the mapping between words and column indices''' wiki_subset = cluster['dataframe'] tf_idf_subset = cluster['matrix'] centroid = cluster['centroid'] # Print top 5 words with largest TF-IDF weights in the cluster idx = centroid.argsort()[::-1] for i in xrange(5): print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])), print('') # Compute distances from the centroid to all data points in the cluster. distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten() # compute nearest neighbors of the centroid within the cluster. nearest_neighbors = distances.argsort() # For 8 nearest neighbors, print the title as well as first 180 characters of text. # Wrap the text at 80-character mark. for i in xrange(8): text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25]) print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'], distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else '')) print('') """ Explanation: Visualize the bipartition We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.
End of explanation """ display_single_tf_idf_cluster(left_child, map_index_to_word) display_single_tf_idf_cluster(right_child, map_index_to_word) """ Explanation: Let's visualize the two child clusters: End of explanation """ athletes = left_child non_athletes = right_child """ Explanation: The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows: Wikipedia + | +--------------------------+--------------------+ | | + + Athletes Non-athletes Is this hierarchy good enough? When building a hierarchy of clusters, we must keep our particular application in mind. For instance, we might want to build a directory for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the athletes and non-athletes clusters. 
Perform recursive bipartitioning Cluster of athletes To help identify the clusters we've built so far, let's give them easy-to-read aliases: End of explanation """ # Bipartition the cluster of athletes left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=8, seed=1) """ Explanation: Using the bipartition function, we produce two child clusters of the athlete cluster: End of explanation """ display_single_tf_idf_cluster(left_child_athletes, map_index_to_word) """ Explanation: The left child cluster mainly consists of baseball players: End of explanation """ display_single_tf_idf_cluster(right_child_athletes, map_index_to_word) """ Explanation: On the other hand, the right child cluster is a mix of football players and ice hockey players: End of explanation """ baseball = left_child_athletes ice_hockey_football = right_child_athletes """ Explanation: Note. Concerning use of "football" The occurrences of the word "football" above refer to association football. This sports is also known as "soccer" in United States (to avoid confusion with American football). We will use "football" throughout when discussing topic representation. Our hierarchy of clusters now looks like this: Wikipedia + | +--------------------------+--------------------+ | | + + Athletes Non-athletes + | +-----------+--------+ | | | + + football/ baseball ice hockey Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, we would like to achieve similar level of granularity for all clusters. Notice that the right child cluster is more coarse than the left child cluster. 
The right cluster possesses a greater variety of topics than the left (ice hockey/football vs. baseball). So the right child cluster should be subdivided further to produce finer child clusters. Let's give the clusters aliases as well: End of explanation """ # Bipartition the cluster of non-athletes left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=8, seed=1) display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word) display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word) """ Explanation: Cluster of ice hockey players and football players In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights. Quiz Question. Bipartition the cluster of ice hockey and football players. Which of the two child clusters should be further subdivided? Note. To achieve consistent results, use the arguments maxiter=100, num_runs=8, seed=1 when calling the bipartition function. The left child cluster The right child cluster Caution. The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters. If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster. Thus, we may be misled if we judge the purity of clusters solely by their top documents and words. Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization. We may need to subdivide further to discover new topics. For instance, subdividing the ice_hockey_football cluster led to the appearance of golf. Quiz Question. Which diagram best describes the hierarchy right after splitting the ice_hockey_football cluster? Refer to the quiz form for the diagrams. Cluster of non-athletes Now let us subdivide the cluster of non-athletes.
End of explanation """ scholars_politicians_etc = left_child_non_athletes musicians_artists_etc = right_child_non_athletes """ Explanation: The first cluster consists of scholars, politicians, and government officials whereas the second consists of musicians, artists, and actors. Run the following code cell to make convenient aliases for the clusters. End of explanation """
pacoqueen/ginn
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Cell Magics.ipynb
gpl-2.0
%lsmagic """ Explanation: Cell Magics in IPython IPython has a system of commands we call 'magics' that provide a mini command language that is orthogonal to the syntax of Python and is extensible by the user with new commands. Magics are meant to be typed interactively, so they use command-line conventions, such as using whitespace for separating arguments, dashes for options and other conventions typical of a command-line environment. Magics come in two kinds: Line magics: these are commands prepended by one % character and whose arguments only extend to the end of the current line. Cell magics: these use two percent characters as a marker (%%), and they receive as argument both the current line where they are declared and the whole body of the cell. Note that cell magics can only be used as the first line in a cell, and as a general principle they can't be 'stacked' (i.e. you can only use one cell magic per cell). A few of them, because of how they operate, can be stacked, but that is something you will discover on a case by case basis. The %lsmagic magic is used to list all available magics, and it will show both line and cell magics currently defined: End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt """ Explanation: Since in the introductory section we already covered the most frequently used line magics, we will focus here on the cell magics, which offer a great amount of power. Let's load matplotlib and numpy so we can use numerics/plotting at will later on. 
End of explanation """ %timeit np.linalg.eigvals(np.random.rand(100,100)) %%timeit a = np.random.rand(100, 100) np.linalg.eigvals(a) """ Explanation: <!--====--> Some simple cell magics Timing the execution of code; the 'timeit' magic exists both in line and cell form: End of explanation """ %%capture capt from __future__ import print_function import sys print('Hello stdout') print('and stderr', file=sys.stderr) capt.stdout, capt.stderr capt.show() """ Explanation: The %%capture magic can be used to capture the stdout/err of any block of python code, either to discard it (if it's noise to you) or to store it in a variable for later use: End of explanation """ %%writefile foo.py print('Hello world') %run foo """ Explanation: The %%writefile magic is a very useful tool that writes the cell contents as a named file: End of explanation """ %%script python2 import sys print 'hello from Python %s' % sys.version %%script python3 import sys print('hello from Python: %s' % sys.version) """ Explanation: <!--====--> Magics for running code under other interpreters IPython has a %%script cell magic, which lets you run a cell in a subprocess of any interpreter on your system, such as: bash, ruby, perl, zsh, R, etc. It can even be a script of your own, which expects input on stdin. To use it, simply pass a path or shell command to the program you want to run on the %%script line, and the rest of the cell will be run by that script, and stdout/err from the subprocess are captured and displayed. End of explanation """ %%ruby puts "Hello from Ruby #{RUBY_VERSION}" %%bash echo "hello from $BASH" """ Explanation: IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc. 
These are all equivalent to %%script <name> End of explanation """ %%bash echo "hi, stdout" echo "hello, stderr" >&2 %%bash --out output --err error echo "hi, stdout" echo "hello, stderr" >&2 print(error) print(output) """ Explanation: Capturing output You can also capture stdout/err from these subprocesses into Python variables, instead of letting them go directly to stdout/err End of explanation """ %%ruby --bg --out ruby_lines for n in 1...10 sleep 1 puts "line #{n}" STDOUT.flush end """ Explanation: Background Scripts These scripts can be run in the background, by adding the --bg flag. When you do this, output is discarded unless you use the --out/err flags to store output as above. End of explanation """ ruby_lines print(ruby_lines.read().decode('utf8')) """ Explanation: When you do store output of a background thread, these are the stdout/err pipes, rather than the text of the output. End of explanation """
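Conceptually, what the %%script family of magics does is feed the cell body to an interpreter's stdin and capture the resulting streams. The stdlib-only sketch below illustrates that idea; it is a simplification for intuition, not IPython's actual implementation.

```python
import subprocess
import sys

def run_cell_with(interpreter_argv, cell_body):
    """Feed cell_body to an interpreter's stdin; return (stdout, stderr)."""
    result = subprocess.run(
        interpreter_argv,
        input=cell_body,
        capture_output=True,
        text=True,
    )
    return result.stdout, result.stderr

out, err = run_cell_with(
    [sys.executable, "-"],  # '-' means: read the program from stdin
    "import sys\nprint('hello stdout')\nprint('hello stderr', file=sys.stderr)\n",
)
print(out)
print(err)
```

Capturing the two streams separately is what makes options like --out/--err possible: the magic just has to bind `result.stdout` and `result.stderr` to the names you asked for.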
ireapps/cfj-2017
completed/05. pandas? pandas! (Part 1).ipynb
mit
import pandas as pd """ Explanation: Automate your analysis with pandas Automating your data analysis is one of the most powerful things you can do with Python in a newsroom. We're going to use a library called pandas that will leave a replicable, transparent script for others to follow. Warmup: MLB salary data Remember data/mlb.csv? Let's use that to practice. Make a list of questions What's the total MLB payroll? What's the total payroll by team? What's the typical (median) salary for a designated hitter? (What else?) Import pandas End of explanation """ df = pd.read_csv('data/mlb.csv') # use head to check it out df.head() """ Explanation: Read the data into a data frame We'll use the read_csv() method to read the CSV file in as a DataFrame. End of explanation """ df.count() """ Explanation: How many players? Use the .count() method to find out how many pieces of data are in each column. End of explanation """ df.SALARY.sum() """ Explanation: Total MLB payroll To select a column from your data frame, you can use either dot or bracket notation. The result will be a Series. Then, you can use the sum() method to sum numeric data. End of explanation """ df.TEAM.unique() """ Explanation: Get a list of teams Sometimes you want to get a list of unique values in a column. You can use the unique() method for this. End of explanation """ df[['TEAM', 'SALARY']].groupby('TEAM') \ .sum() \ .reset_index() \ .set_index('TEAM') \ .sort_values('SALARY', ascending=False) """ Explanation: Total payroll by team We want to group our data by team and sum the salaries -- analogous to a pivot table in Excel or a GROUP BY statement in SQL.
Our steps: Select the two columns we care about -- to select multiple columns, pass a list of columns to the data frame inside square brackets Use the groupby() function to group the data by team Use the sum() method to sum up the grouped data Use the reset_index() method to turn the Series back into a DataFrame Index the results on the TEAM column (optional) Use the sort_values() method to order the data by the SALARY column End of explanation """ # filter data for designated hitters dh = df[df['POS'] == 'DH'] # get median dh.SALARY.median() """ Explanation: Typical salary for DH Let's find the median salary for designated hitters. To filter a data frame, you use square brackets to pass a filtering condition to the data frame: df[YOUR CONDITION HERE]. It's like a WHERE clause in SQL. In this case, we can use one of Python's comparison operators to define our conditions. If the value in the POS column is DH, include it in the results. Then we're going to select the values in the SALARY column and calculate the typical salary with the median() method. End of explanation """
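The GROUP BY semantics of the payroll-by-team step can be sketched without pandas, which makes the groupby-sum mechanics concrete. The rows and salary figures below are invented for illustration and are not from the mlb.csv data.

```python
from collections import defaultdict

# Toy stand-in for the (TEAM, SALARY) columns of the dataframe
rows = [
    {"TEAM": "NYM", "SALARY": 500_000},
    {"TEAM": "NYM", "SALARY": 1_200_000},
    {"TEAM": "BOS", "SALARY": 900_000},
]

payroll = defaultdict(int)
for row in rows:
    payroll[row["TEAM"]] += row["SALARY"]  # GROUP BY TEAM, SUM(SALARY)

# Sort by total payroll, descending, like sort_values(ascending=False)
by_team = sorted(payroll.items(), key=lambda kv: kv[1], reverse=True)
print(by_team)  # [('NYM', 1700000), ('BOS', 900000)]
```

The pandas chain above does the same accumulate-then-sort work in two method calls, plus the index bookkeeping handled by reset_index()/set_index().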
ledeprogram/algorithms
class10/donow/Lee_Dongjin_10_donow.ipynb.ipynb
gpl-3.0
import pg8000 conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', database="training", port=5432, user='dot_student', password='qgis') cursor = conn.cursor() database=cursor.execute("SELECT * FROM winequality") import pandas as pd import matplotlib.pyplot as plt %matplotlib inline df = pd.read_sql("SELECT * FROM winequality", conn) df.head() df=df.rename(columns = lambda x : str(x)[1:]) df.columns = [x.replace('\'', '') for x in df.columns] df.columns """ Explanation: Create a classifier to predict the wine color from wine quality attributes using this dataset: http://archive.ics.uci.edu/ml/datasets/Wine+Quality The data is in the database we've been using host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com' database='training' port=5432 user='dot_student' password='qgis' table name = 'winequality' End of explanation """ df.info() """ Explanation: Query for the data and create a numpy array End of explanation """ x = df.ix[:, df.columns != 'color'].as_matrix() # the features x y = df['color'].as_matrix() # the target y """ Explanation: Split the data into features (x) and target (y, the last column in the table) Remember you can cast the results into a numpy array and then slice out what you want End of explanation """ from sklearn import tree import matplotlib.pyplot as plt import numpy as np dt = tree.DecisionTreeClassifier() dt = dt.fit(x,y) """ Explanation: Create a decision tree with the data End of explanation """ from sklearn.cross_validation import cross_val_score scores = cross_val_score(dt,x,y,cv=10) np.mean(scores) """ Explanation: Run 10-fold cross validation on the model End of explanation """ df.columns # running this on the decision tree plt.plot(dt.feature_importances_,'o') plt.ylim(0,1) plt.xlim(0,10) # free_sulfur_dioxide is the most important feature. """ Explanation: If you have time, calculate the feature importance and graph based on the code in the slides from last class End of explanation """
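cross_val_score hides the fold bookkeeping behind the cv argument. The pure-Python sketch below shows roughly what a k-fold split does with the row indices (3 folds here for brevity instead of the 10 used above); it is an illustration of the idea, not scikit-learn's implementation.

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    # Early folds absorb the remainder so every sample appears in exactly one test fold
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples)
                     if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

folds = list(kfold_indices(10, 3))
print(len(folds))      # 3
print(folds[0][1])     # [0, 1, 2, 3] -- the first (larger) test fold
```

For each (train_idx, test_idx) pair, cross_val_score fits a fresh copy of the classifier on the training rows and scores it on the held-out rows; np.mean(scores) then averages those k accuracy scores.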
JohnGriffiths/ConWhAt
docs/examples/assessing_the_network_impact_of_lesions.ipynb
bsd-3-clause
# ConWhAt stuff from conwhat import VolConnAtlas,StreamConnAtlas,VolTractAtlas,StreamTractAtlas from conwhat.viz.volume import plot_vol_scatter # Neuroimaging stuff import nibabel as nib from nilearn.plotting import (plot_stat_map,plot_surf_roi,plot_roi, plot_connectome,find_xyz_cut_coords) from nilearn.image import resample_to_img # Viz stuff %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns # Generic stuff import glob, numpy as np, pandas as pd, networkx as nx from datetime import datetime """ Explanation: Assess network impact of lesion End of explanation """ lesion_file = 'synthetic_lesion_20mm_sphere_-46_-60_6.nii.gz' # we created this file from scratch in the previous example """ Explanation: We now use the synthetic lesion constructed in the previous example in a ConWhAt lesion analysis. End of explanation """ lesion_img = nib.load(lesion_file) plot_roi(lesion_file); """ Explanation: Take another quick look at this mask: End of explanation """ cw_atlases_dir = '/global/scratch/hpc3230/Data/conwhat_atlases' # change this accordingly atlas_name = 'CWL2k8Sc33Vol3d100s_v01' atlas_dir = '%s/%s' %(cw_atlases_dir, atlas_name) """ Explanation: Since our lesion mask does not (by construction) have a huge amount of spatial detail, it makes sense to use one of the lower-resolution atlas. As one might expect, computation time is considerably faster for lower-resolution atlases. End of explanation """ cw_vca = VolConnAtlas(atlas_dir=atlas_dir) """ Explanation: See the previous tutorial on 'exploring the conwhat atlases' for more info on how to examine the components of a given atlas in ConWhAt. Initialize the atlas End of explanation """ idxs = 'all' # alternatively, something like: range(1,100), indicates the first 100 cnxns (rows in .vmfs) """ Explanation: Choose which connections to evaluate. This is normally an array of numbers indexing entries in cw_vca.vfms. 
Pre-defining connection subsets is a useful way of speeding up large analyses, especially if one is only interested in connections between specific sets of regions. As we are using a relatively small atlas, and our lesion is not too extensive, we can assess all connections. End of explanation """ jlc_dir = '/global/scratch/hpc3230/joblib_cache_dir' # this is the cache dir where joblib writes temporary files lo_df,lo_nx = cw_vca.compute_hit_stats(lesion_file,idxs,n_jobs=4,joblib_cache_dir=jlc_dir) """ Explanation: Now, compute lesion overlap statistics. End of explanation """ lo_df.head() """ Explanation: This takes about 20 minutes to run. vca.compute_hit_stats() returns a pandas dataframe, lo_df, and a networkx object, lo_nx. Both contain mostly the same information, which is sometimes more useful in one of these formats and sometimes in the other. lo_df is a table, with rows corresponding to each connection, and columns for each of a wide set of statistical metrics for evaluating sensitivity and specificity of binary hit/miss data: End of explanation """ lo_df[['TPR', 'corr_thrbin']].iloc[:10].T """ Explanation: Typically we will be mainly interested in two of these metric scores: TPR - True positive (i.e. 
hit) rate: number of true positives, divided by number of true positives + number of false negatives corr_thrbin - Pearson correlation between the lesion image and the thresholded, binarized connectome edge image (group-level visitation map) End of explanation """ lo_df[['TPR', 'corr_thrbin']].iloc[:10].T """ Explanation: We can obtain these numbers as a 'modification matrix' (connectivity matrix) End of explanation """ tpr_adj = nx.to_pandas_adjacency(lo_nx,weight='TPR') cpr_adj = nx.to_pandas_adjacency(lo_nx,weight='corr_thrbin') """ Explanation: These two maps are, unsurprisingly, very similar: End of explanation """ np.corrcoef(tpr_adj.values.ravel(), cpr_adj.values.ravel()) fig, ax = plt.subplots(ncols=2, figsize=(12,4)) sns.heatmap(tpr_adj,xticklabels='',yticklabels='',vmin=0,vmax=0.5,ax=ax[0]); sns.heatmap(cpr_adj,xticklabels='',yticklabels='',vmin=0,vmax=0.5,ax=ax[1]); """ Explanation: (...with an alternative color scheme...) End of explanation """ fig, ax = plt.subplots(ncols=2, figsize=(12,4)) sns.heatmap(tpr_adj, xticklabels='',yticklabels='',cmap='Reds', mask=tpr_adj.values==0,vmin=0,vmax=0.5,ax=ax[0]); sns.heatmap(cpr_adj,xticklabels='',yticklabels='',cmap='Reds', mask=cpr_adj.values==0,vmin=0,vmax=0.5,ax=ax[1]); """ Explanation: We can list directly the most affected (greatest % overlap) connections, End of explanation """ cw_vca.vfms.loc[lo_df.index].head() """ Explanation: To plot the modification matrix information on a brain, we first need some spatial locations to plot as nodes.
For these, we calculate (an approximation to) each atlas region's centroid location: End of explanation """ parc_img = cw_vca.region_nii parc_dat = parc_img.get_data() parc_vals = np.unique(parc_dat)[1:] ccs = {roival: find_xyz_cut_coords(nib.Nifti1Image((parc_dat==roival).astype(int),parc_img.affine), activation_threshold=0) for roival in parc_vals} ccs_arr = np.array(list(ccs.values())) """ Explanation: Now plotting on a glass brain: End of explanation """ fig, ax = plt.subplots(figsize=(16,6)) plot_connectome(tpr_adj.values,ccs_arr,axes=ax,edge_threshold=0.2,colorbar=True, edge_cmap='Reds',edge_vmin=0,edge_vmax=1., node_color='lightgrey',node_kwargs={'alpha': 0.4}); #edge_vmin=0,edge_vmax=1) fig, ax = plt.subplots(figsize=(16,6)) plot_connectome(cpr_adj.values,ccs_arr,axes=ax) """ Explanation: Now plotting on a glass brain: End of explanation """
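The TPR ('hit rate') scores used throughout this analysis come from comparing two binary volumes voxel-by-voxel. The toy sketch below shows the metric itself with flat Python lists standing in for flattened voxel arrays; it illustrates the TP/(TP+FN) formula, not ConWhAt's actual implementation, and it treats the tract map as the reference set.

```python
def true_positive_rate(lesion, tract):
    """TPR = TP / (TP + FN), with `tract` voxels as the reference positives."""
    tp = sum(1 for l, t in zip(lesion, tract) if l == 1 and t == 1)
    fn = sum(1 for l, t in zip(lesion, tract) if l == 0 and t == 1)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Toy 6-voxel masks: the tract has 3 positive voxels, the lesion hits 2 of them
lesion_mask = [1, 1, 0, 0, 1, 0]
tract_mask  = [1, 0, 0, 1, 1, 0]
print(true_positive_rate(lesion_mask, tract_mask))  # 2/3 of tract voxels are hit
```

A score of 1.0 would mean the lesion covers every voxel of the thresholded visitation map for that connection; the lo_df values are this quantity computed per edge over full 3D volumes.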
wakkadojo/OperationPeanut
oldModels/AlmondNut_PreMomentum.ipynb
gpl-3.0
def attach_ratings_diff_stats(df, ratings_eos, season): out_cols = list(df.columns) + ['mean_rtg_1', 'std_rtg_1', 'num_rtg_1', 'mean_rtg_2', 'std_rtg_2', 'num_rtg_2'] rtg_1 = ratings_eos.rename(columns = {'mean_rtg' : 'mean_rtg_1', 'std_rtg' : 'std_rtg_1', 'num_rtg' : 'num_rtg_1'}) rtg_2 = ratings_eos.rename(columns = {'mean_rtg' : 'mean_rtg_2', 'std_rtg' : 'std_rtg_2', 'num_rtg' : 'num_rtg_2'}) return df\ .merge(rtg_1, left_on = ['Season', 'Team1'], right_on = ['season', 'team'])\ .merge(rtg_2, left_on = ['Season', 'Team2'], right_on = ['season', 'team'])\ [out_cols] def get_eos_ratings(ratings): ratings_last_day = ratings.groupby('season').aggregate(max)[['rating_day_num']].reset_index() ratings_eos_all = ratings_last_day\ .merge(ratings, left_on = ['season', 'rating_day_num'], right_on = ['season', 'rating_day_num']) ratings_eos = ratings_eos_all.groupby(['season', 'team']).aggregate([np.mean, np.std, len])['orank'] return ratings_eos.reset_index().rename(columns = {'mean' : 'mean_rtg', 'std' : 'std_rtg', 'len' : 'num_rtg'}) def get_score_fluctuation(reg_season, season): # note: quick and dirty; not best practice for home / away etc b/c these would only improve est for # std on second order # scale the score spreads by # posessions # note: units don't really matter because this is used in a ratio and is normalized later rsc = reg_season[reg_season['Season'] == season].copy() # avg home vs away hscores = rsc[rsc['Wloc'] == 'H']['Wscore'].tolist() + rsc[rsc['Wloc'] == 'A']['Lscore'].tolist() ascores = rsc[rsc['Wloc'] == 'A']['Wscore'].tolist() + rsc[rsc['Wloc'] == 'H']['Lscore'].tolist() home_correction = np.mean(hscores) - np.mean(ascores) # get posessions per game posessions = 0.5 * ( rsc['Lfga'] - rsc['Lor'] + rsc['Lto'] + 0.475*rsc['Lfta'] +\ rsc['Wfga'] - rsc['Wor'] + rsc['Wto'] + 0.475*rsc['Wfta'] ) # get victory margins and correct for home / away -- scale for posessions rsc['win_mgn'] = rsc['Wscore'] - rsc['Lscore'] rsc['win_mgn'] += np.where(rsc['Wloc'] 
== 'H', -home_correction, 0) rsc['win_mgn'] += np.where(rsc['Wloc'] == 'A', home_correction, 0) rsc['win_mgn_scaled'] = rsc['win_mgn'] * 100 / posessions # score per 100 posessions # get mgn of victory stats per team win_mgns_wins = rsc[['Wteam', 'win_mgn_scaled']].rename(columns = {'Wteam' : 'team', 'win_mgn_scaled' : 'mgn'}) win_mgns_losses = rsc[['Lteam', 'win_mgn_scaled']].rename(columns = {'Lteam' : 'team', 'win_mgn_scaled' : 'mgn'}) win_mgns_losses['mgn'] *= -1 win_mgns = pd.concat([win_mgns_wins, win_mgns_losses]) return win_mgns.groupby('team').aggregate(np.std).rename(columns = {'mgn' : 'std_mgn'}).reset_index() def attach_score_fluctuations(df, reg_season, season): cols_to_keep = list(df.columns) + ['std_mgn_1', 'std_mgn_2'] fluct = get_score_fluctuation(reg_season, season) fluct1 = fluct.rename(columns = {'std_mgn' : 'std_mgn_1'}) fluct2 = fluct.rename(columns = {'std_mgn' : 'std_mgn_2'}) return df\ .merge(fluct1, left_on = 'Team1', right_on = 'team')\ .merge(fluct2, left_on = 'Team2', right_on = 'team')[cols_to_keep] def attach_kenpom_stats(df, kenpom, season): cols_to_keep = list(df.columns) + ['adjem_1', 'adjem_2', 'adjt_1', 'adjt_2'] kp1 = kenpom[kenpom['Season'] == season][['Team_Id', 'AdjEM', 'AdjTempo']]\ .rename(columns = {'AdjEM' : 'adjem_1', 'AdjTempo' : 'adjt_1'}) kp2 = kenpom[kenpom['Season'] == season][['Team_Id', 'AdjEM', 'AdjTempo']]\ .rename(columns = {'AdjEM' : 'adjem_2', 'AdjTempo' : 'adjt_2'}) return df\ .merge(kp1, left_on = 'Team1', right_on = 'Team_Id')\ .merge(kp2, left_on = 'Team2', right_on = 'Team_Id')[cols_to_keep] def get_root_and_leaves(hierarchy): all_children = set(hierarchy[['Strongseed', 'Weakseed']].values.flatten()) all_parents = set(hierarchy[['Slot']].values.flatten()) root = [ p for p in all_parents if p not in all_children ][0] leaves = [ c for c in all_children if c not in all_parents ] return root, leaves def get_tourney_tree_one_season(tourney_slots, season): def calculate_depths(tree, child, root): if child == 
root: return 0 elif tree[child]['depth'] < 0: tree[child]['depth'] = 1 + calculate_depths(tree, tree[child]['parent'], root) return tree[child]['depth'] hierarchy = tourney_slots[tourney_slots['Season'] == season][['Slot', 'Strongseed', 'Weakseed']] root, leaves = get_root_and_leaves(hierarchy) # should be R6CH... tree_raw = {**dict(zip(hierarchy['Strongseed'],hierarchy['Slot'])), **dict(zip(hierarchy['Weakseed'],hierarchy['Slot']))} tree = { c : {'parent' : tree_raw[c], 'depth' : -1} for c in tree_raw} for c in leaves: calculate_depths(tree, c, root) return tree def get_tourney_trees(tourney_slots): return { season : get_tourney_tree_one_season(tourney_slots, season)\ for season in tourney_slots['Season'].unique() } def slot_matchup_from_seed(tree, seed1, seed2): # return which slot the two teams would face off in if seed1 == seed2: return seed1 next_seed1 = seed1 if tree[seed1]['depth'] < tree[seed2]['depth'] else tree[seed1]['parent'] next_seed2 = seed2 if tree[seed2]['depth'] < tree[seed1]['depth'] else tree[seed2]['parent'] return slot_matchup_from_seed(tree, next_seed1, next_seed2) def get_team_seed(tourney_seeds, season, team): seed = tourney_seeds[ (tourney_seeds['Team'] == team) & (tourney_seeds['Season'] == season) ]['Seed'].values if len(seed) == 1: return seed[0] else: return None def dist(play_lat, play_lng, lat, lng): return geodist((play_lat, play_lng), (lat, lng)).miles def reg_distance_to_game(games_in, team_geog): games = games_in.copy() out_cols = list(games.columns) + ['w_dist', 'l_dist'] w_geog = team_geog.rename(columns = {'lat' : 'w_lat', 'lng' : 'w_lng'}) l_geog = team_geog.rename(columns = {'lat' : 'l_lat', 'lng' : 'l_lng'}) games = games\ .merge(w_geog, left_on = 'Wteam', right_on = 'team_id')\ .merge(l_geog, left_on = 'Lteam', right_on = 'team_id') # handle neutral locations later by averaging distance from home for 2 teams if neutral location games['play_lat'] = np.where(games['Wloc'] == 'H', games['w_lat'], games['l_lat']) 
games['play_lng'] = np.where(games['Wloc'] == 'H', games['w_lng'], games['l_lng']) games['w_dist'] = games.apply(lambda x: dist(x['play_lat'], x['play_lng'], x['w_lat'], x['w_lng']), axis = 1) games['l_dist'] = games.apply(lambda x: dist(x['play_lat'], x['play_lng'], x['l_lat'], x['l_lng']), axis = 1) # correct for neutral games['w_dist'], games['l_dist'] =\ np.where(games['Wloc'] == 'N', (games['w_dist'] + games['l_dist'])/2, games['w_dist']),\ np.where(games['Wloc'] == 'N', (games['w_dist'] + games['l_dist'])/2, games['l_dist']) return games[out_cols] def tourney_distance_to_game(tourney_raw_in, tourney_geog, team_geog, season): out_cols = list(tourney_raw_in.columns) + ['dist_1', 'dist_2'] tourney_raw = tourney_raw_in.copy() geog_1 = team_geog.rename(columns = {'lat' : 'lat_1', 'lng' : 'lng_1'}) geog_2 = team_geog.rename(columns = {'lat' : 'lat_2', 'lng' : 'lng_2'}) geog_play = tourney_geog[tourney_geog['season'] == season][['slot', 'lat', 'lng']]\ .rename(columns = {'lat' : 'lat_p', 'lng' : 'lng_p'}) tourney_raw = tourney_raw\ .merge(geog_1, left_on = 'Team1', right_on = 'team_id')\ .merge(geog_2, left_on = 'Team2', right_on = 'team_id')\ .merge(geog_play, left_on = 'SlotMatchup', right_on = 'slot') tourney_raw['dist_1'] = tourney_raw.apply(lambda x: dist(x['lat_p'], x['lng_p'], x['lat_1'], x['lng_1']), axis = 1) tourney_raw['dist_2'] = tourney_raw.apply(lambda x: dist(x['lat_p'], x['lng_p'], x['lat_2'], x['lng_2']), axis = 1) return tourney_raw[out_cols] def get_raw_reg_season_data(reg_season, team_geog, season): cols_to_keep = ['Season', 'Daynum', 'Team1', 'Team2', 'score_1', 'score_2', 'dist_1', 'dist_2'] rsr = reg_season[reg_season['Season'] == season] # reg season raw rsr = reg_distance_to_game(rsr, team_geog) rsr['Team1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['Wteam'], rsr['Lteam']) rsr['Team2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['Wteam'], rsr['Lteam']) rsr['score_1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['Wscore'], rsr['Lscore']) 
rsr['score_2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['Wscore'], rsr['Lscore']) rsr['dist_1'] = np.where(rsr['Wteam'] < rsr['Lteam'], rsr['w_dist'], rsr['l_dist']) rsr['dist_2'] = np.where(rsr['Wteam'] > rsr['Lteam'], rsr['w_dist'], rsr['l_dist']) return rsr[cols_to_keep] def get_raw_tourney_data(tourney_seeds, tourney_trees, tourney_geog, team_geog, season): # tree to find play location tree = tourney_trees[season] # get all teams in tourney seed_map = tourney_seeds[tourney_seeds['Season'] == season].set_index('Team').to_dict()['Seed'] teams = sorted(seed_map.keys()) team_pairs = sorted([ (team1, team2) for team1 in teams for team2 in teams if team1 < team2 ]) tourney_raw = pd.DataFrame(team_pairs).rename(columns = { 0 : 'Team1', 1 : 'Team2' }) tourney_raw['Season'] = season # find out where they would play each other tourney_raw['SlotMatchup'] = tourney_raw.apply( lambda x: slot_matchup_from_seed(tree, seed_map[x['Team1']], seed_map[x['Team2']]), axis = 1 ) # get features tourney_raw = tourney_distance_to_game(tourney_raw, tourney_geog, team_geog, season) return tourney_raw def attach_supplements(data, reg_season, kenpom, ratings_eos, season): dc = data.copy() dc = attach_ratings_diff_stats(dc, ratings_eos, season) # get ratings diff stats dc = attach_kenpom_stats(dc, kenpom, season) dc = attach_score_fluctuations(dc, reg_season, season) return dc """ Explanation: Almond Nut Learner Use published rankings together with distance traveled to play to classify winners + losers Train to regular season and test on post season considerations: Refine Vegas odds in first round PREDICTING UPSETS?? team upset rating team score variance upset predictors based on past seasons Ratings closer to date played Model tuning / hyperparameter tuning Implemented individual ratings vs aggregate Look at aggregate and derive statistics diff vs absolute ratings Use diffs for feature generation only use final rankings instead of those at time of play? 
For now: time of play Distance from home? Distance from last game? For now: distance from home How do regular season and playoffs differ in features? Is using distance in playoffs trained on regular season right? Augment (not yet executed) Defensive / offense ratings from kenpom Elo, Elo differences, and assoc probabilities Ensemble? Construct micro-classifier from elo Coaches Look at momentum + OT effects when training Beginning of season vs end of season for training End of explanation """ def generate_features(df): has_score = 'score_1' in df.columns and 'score_2' in df.columns cols_to_keep = ['Team1', 'Team2', 'Season', 'ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'] +\ (['Team1_win'] if has_score else []) features = df.copy() features['ln_dist_diff'] = np.log((1 + df['dist_1'])/(1 + df['dist_2'])) # use negative for t_rtg so that better team has higher statistic than worse team features['rtg_diff'] = -(df['mean_rtg_1'] - df['mean_rtg_2']) features['t_rtg'] = -(df['mean_rtg_1'] - df['mean_rtg_2']) / np.sqrt(df['std_rtg_1']**2 + df['std_rtg_2']**2) features['pt_diff'] = df['adjem_1'] - df['adjem_2'] features['t_score'] = (df['adjem_1'] - df['adjem_2']) / np.sqrt(df['std_mgn_1']**2 + df['std_mgn_2']**2) # truth feature: did team 1 win? if has_score: features['Team1_win'] = features['score_1'] > features['score_2'] return features[cols_to_keep] def normalize_features(train, test, features): all_data_raw = pd.concat([train[features], test[features]]) all_data_norm = skpp.scale(all_data_raw) # with_mean = False ? 
train_norm = train.copy() test_norm = test.copy() train_norm[features] = all_data_norm[:len(train)] test_norm[features] = all_data_norm[len(train):] return train_norm, test_norm def get_key(df): return df['Season'].map(str) + '_' + df['Team1'].map(str) + '_' + df['Team2'].map(str) """ Explanation: Feature engineering Log of distance Capture rating diffs Capture rating diffs acct for variance (t score) Diff in expected scores via EM diffs Tag winners in training set + viz. Also, normalize data. End of explanation """ features_to_use = ['ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'] predict_field = 'Team1_win' def get_features(season, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog): # support data tourney_trees = get_tourney_trees(tourney_slots) ratings_eos = get_eos_ratings(ratings) # regular season cleaned data regular_raw = get_raw_reg_season_data(reg_season, team_geog, season) regular_raw = attach_supplements(regular_raw, reg_season, kenpom, ratings_eos, season) # post season cleaned data tourney_raw = get_raw_tourney_data(tourney_seeds, tourney_trees, tourney_geog, team_geog, season) tourney_raw = attach_supplements(tourney_raw, reg_season, kenpom, ratings_eos, season) # get and normalize features feat_train = generate_features(regular_raw) feat_test = generate_features(tourney_raw) train_norm, test_norm = normalize_features(feat_train, feat_test, features_to_use) return regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm def make_predictions(season, train_norm, test_norm, tourney, C = 1): # fit lr = sklm.LogisticRegression(C = C) # fit_intercept = False??? 
lr.fit(train_norm[features_to_use].values, train_norm[predict_field].values)

    # predictions
    probs = lr.predict_proba(test_norm[features_to_use].values)
    keys = get_key(test_norm)
    predictions = pd.DataFrame({'Id' : keys.values, 'Pred' : probs[:,1]})

    # Evaluate outcomes
    res_base = tourney[(tourney['Season'] == season) & (tourney['Daynum'] > 135)].copy().reset_index()
    res_base['Team1'] = np.where(res_base['Wteam'] < res_base['Lteam'], res_base['Wteam'], res_base['Lteam'])
    res_base['Team2'] = np.where(res_base['Wteam'] > res_base['Lteam'], res_base['Wteam'], res_base['Lteam'])
    res_base['Result'] = (res_base['Wteam'] == res_base['Team1']).map(lambda x: 1 if x else 0)
    res_base['Id'] = get_key(res_base)

    # attach results to predictions
    res = pd.merge(res_base[['Id', 'Result']], predictions, on = 'Id', how = 'left')

    # logloss
    ll = skm.log_loss(res['Result'], res['Pred'])

    # print(lr.intercept_)
    # print(lr.coef_)
    return predictions, res, ll

all_predictions = []
for season in [2013, 2014, 2015, 2016]:
    regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm = \
        get_features(season, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog)
    # see below for choice of C
    predictions, res, ll = make_predictions(season, train_norm, test_norm, tourney, C = 5e-3)
    print(ll)
    all_predictions += [predictions]

# 0.559078513104 -- 2013
# 0.541984791608 -- 2014
# 0.480356337664 -- 2015
# 0.511671826092 -- 2016

pd.concat(all_predictions).to_csv('./submissions/simpleLogisticModel2013to2016_tuned.csv', index = False)

sns.pairplot(train_norm, hue = predict_field, vars = ['ln_dist_diff', 'rtg_diff', 't_rtg', 'pt_diff', 't_score'])
plt.show()
"""
Explanation: Running the model
End of explanation
"""
teams[teams['Team_Id'].isin([1163, 1196])]

tourney_raw[(tourney_raw['Team1'] == 1163) & (tourney_raw['Team2'] == 1196)]

feat_test[(feat_test['Team1'] == 1163) & (feat_test['Team2'] == 1196)]

res.iloc[np.argsort(-(res['Pred'] - res['Result']).abs())].reset_index(drop = True)
# accuracy? np.sum(np.where(res['Pred'] > 0.5, res['Result'] == 1, res['Result'] == 0)) / len(res) """ Explanation: Sandbox explorations End of explanation """ cs_to_check = np.power(10, np.arange(-4, 2, 0.1)) years_to_check = range(2011, 2017) c_effect_df_dict = { 'C' : cs_to_check } for yr in years_to_check: regular_raw, tourney_raw, feat_train, feat_test, train_norm, test_norm = \ get_features(yr, tourney_slots, ratings, reg_season, team_geog, kenpom, tourney_seeds, tourney_geog) log_losses = [ make_predictions(yr, train_norm, test_norm, tourney, C = C)[2] for C in cs_to_check ] c_effect_df_dict[str(yr)] = log_losses c_effect = pd.DataFrame(c_effect_df_dict) plt.semilogx() for col in [ col for col in c_effect if col != 'C' ]: plt.plot(c_effect['C'], c_effect[col]) plt.legend(loc = 3) plt.xlabel('C') plt.ylabel('logloss') plt.ylim(0.45, 0.65) plt.show() """ Explanation: Effect of C on different years End of explanation """ # contribution to logloss rc = res.copy() ftc = feat_test.copy() ftc['Id'] = get_key(ftc) rc['logloss_contrib'] = -np.log(np.where(rc['Result'] == 1, rc['Pred'], 1 - rc['Pred'])) / len(rc) ftc = pd.merge(rc, ftc, how = 'left', on = 'Id') fig, axes = plt.subplots(nrows=1, ncols=2, figsize = (10, 4)) im = axes[0].scatter(ftc['t_score'], ftc['t_rtg'], c = ftc['logloss_contrib'], vmin = 0, vmax = 0.025, cmap = plt.cm.get_cmap('coolwarm')) axes[0].set_xlabel('t_score') axes[0].set_ylabel('t_rtg') #plt.colorbar(sc) axes[1].scatter(-ftc['ln_dist_diff'], ftc['t_rtg'], c = ftc['logloss_contrib'], vmin = 0, vmax = 0.025, cmap = plt.cm.get_cmap('coolwarm')) axes[1].set_xlabel('ln_dist_diff') cb = fig.colorbar(im, ax=axes.ravel().tolist(), label = 'logloss_contrib') plt.show() """ Explanation: Look at who is contributing to logloss End of explanation """ tourney_rounds = tourney_raw[['Team1', 'Team2', 'Season', 'SlotMatchup']].copy() tourney_rounds['Id'] = get_key(tourney_rounds) tourney_rounds['round'] = tourney_rounds['SlotMatchup'].map(lambda s: 
int(s[1])) tourney_rounds = tourney_rounds[['Id', 'round']] ftc_with_rounds = pd.merge(ftc, tourney_rounds, how = 'left', on = 'Id') fig, axs = plt.subplots(ncols=2, figsize = (10, 4)) sns.barplot(data = ftc_with_rounds, x = 'round', y = 'logloss_contrib', errwidth = 0, ax = axs[0]) sns.barplot(data = ftc_with_rounds, x = 'round', y = 'logloss_contrib', errwidth = 0, estimator=max, ax = axs[1]) axs[0].set_ylim(0, 0.035) axs[1].set_ylim(0, 0.035) plt.show() """ Explanation: Logloss contribution by round End of explanation """ sns.barplot(data = reg_season[reg_season['Season'] > 2000], x = 'Season', y = 'Numot', errwidth = 0) plt.show() """ Explanation: Overtime counts End of explanation """ sns.lmplot('mean_rtg', 'std_rtg', data = ratings_eos, fit_reg = False) plt.show() ratings_eos_test = ratings_eos.copy() ratings_eos_test['parabola_mean_model'] =(ratings_eos_test['mean_rtg'].max()/2)**2-(ratings_eos_test['mean_rtg'] - ratings_eos_test['mean_rtg'].max()/2)**2 sns.lmplot('parabola_mean_model', 'std_rtg', data = ratings_eos_test, fit_reg = False) plt.show() test_data_test = test_data.copy() test_data_test['rtg_diff'] = test_data_test['mean_rtg_1'] - test_data_test['mean_rtg_2'] test_data_test['t_model'] = test_data_test['rtg_diff']/(test_data_test['std_rtg_1']**2 + test_data_test['std_rtg_2']**2)**0.5 #sns.lmplot('rtg_diff', 't_model', data = test_data_test, fit_reg = False) sns.pairplot(test_data_test[['rtg_diff', 't_model']]) plt.show() """ Explanation: A look at dynamics of ratings data End of explanation """ dist_test = get_training_data(reg_season, team_geog, 2016) w_dist_test = dist_test[['w_dist', 'Wscore']].rename(columns = {'w_dist' : 'dist', 'Wscore' : 'score'}) l_dist_test = dist_test[['l_dist', 'Lscore']].rename(columns = {'l_dist' : 'dist', 'Lscore' : 'score'}) dist_test = pd.concat([w_dist_test, l_dist_test]).reset_index()[['dist', 'score']] plt.hist(dist_test['dist']) plt.xlim(0, 3000) plt.semilogy() plt.show() bucket_size = 1 dist_test['bucket'] = 
bucket_size * (np.log(dist_test['dist'] + 1) // bucket_size) dist_grp = dist_test.groupby('bucket').aggregate([np.mean, np.std, len])['score'] dist_grp['err'] = dist_grp['std'] / np.sqrt(dist_grp['len']) plt.plot(dist_grp['mean']) plt.fill_between(dist_grp.index, (dist_grp['mean'] - 2*dist_grp['err']).values, (dist_grp['mean'] + 2*dist_grp['err']).values, alpha = 0.3) plt.xlabel('log of distance traveled') plt.ylabel('avg score') plt.show() """ Explanation: Quick investigation: looks like avg score decreases with log of distance traveled End of explanation """
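Since logloss drives all of the evaluation above, here is a small self-contained sketch of the binary logloss and the per-game decomposition used in the `logloss_contrib` column (NumPy only; the probabilities and outcomes below are toy values, not tournament data):

```python
import numpy as np

# Hypothetical predicted win probabilities for Team1, and actual outcomes (1 = Team1 won)
pred = np.array([0.9, 0.4, 0.7, 0.2])
result = np.array([1, 0, 1, 1])

# Total binary log loss: -(1/N) * sum(y*log(p) + (1-y)*log(1-p))
total_ll = -np.mean(result * np.log(pred) + (1 - result) * np.log(1 - pred))

# Per-game contributions, as computed for the logloss_contrib column above
contrib = -np.log(np.where(result == 1, pred, 1 - pred)) / len(result)

# The per-game contributions sum back to the total log loss,
# and the confidently wrong prediction (p=0.2 for a win) dominates
assert np.isclose(contrib.sum(), total_ll)
assert contrib[3] == contrib.max()
print(total_ll, contrib)
```

This is why the contribution plots above concentrate mass on a few games: a single confident miss can outweigh many well-calibrated predictions.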
tensorflow/docs-l10n
site/zh-cn/guide/saved_model.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import os
import tempfile

from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf

tmpdir = tempfile.mkdtemp()

physical_devices = tf.config.list_physical_devices('GPU')
for device in physical_devices:
  tf.config.experimental.set_memory_growth(device, True)

file = tf.keras.utils.get_file(
    "grace_hopper.jpg",
    "https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg")
img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])
plt.imshow(img)
plt.axis('off')
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet.preprocess_input(
    x[tf.newaxis,...])
"""
Explanation: Using the SavedModel format
<table class="tfo-notebook-buttons" align="left">
<td data-segment-approved="false"><a target="_blank" href="https://tensorflow.google.cn/guide/saved_model"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td data-segment-approved="false"><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/saved_model.ipynb"> <img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td data-segment-approved="false"><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/saved_model.ipynb"><img
src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td data-segment-approved="false"><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/saved_model.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
A SavedModel contains a complete TensorFlow program, including not just the weight values but also the computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub).
You can save and load a model in the SavedModel format using the following APIs:
Low-level tf.saved_model API. This document describes how to use this API in detail.
Save: tf.saved_model.save(model, path_to_dir)
Load: model = tf.saved_model.load(path_to_dir)
High-level tf.keras.Model API. Refer to the Keras save and serialize guide.
If you just want to save/load weights during training, refer to the checkpoints guide.
Creating a SavedModel from Keras
For a quick introduction, this section exports a pre-trained Keras model and serves image classification requests with it. The rest of the guide will fill in details and discuss other ways to create SavedModels.
End of explanation
"""
labels_path = tf.keras.utils.get_file(
    'ImageNetLabels.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())

pretrained_model = tf.keras.applications.MobileNet()
result_before_save = pretrained_model(x)

decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]

print("Result before saving:\n", decoded)
"""
Explanation: You'll use an image of Grace Hopper as a running example, and a Keras pre-trained image classification model, since it's easy to use. Custom models work too, and are covered in detail later.
End of explanation
"""
mobilenet_save_path = os.path.join(tmpdir, "mobilenet/1/")
tf.saved_model.save(pretrained_model, mobilenet_save_path)
"""
Explanation: The top prediction for this image is "military uniform".
End of explanation
"""
loaded = tf.saved_model.load(mobilenet_save_path)
print(list(loaded.signatures.keys()))  # ["serving_default"]
"""
Explanation: The save path follows a convention used by TensorFlow Serving where the last path component (1/ here) is a version number for your model; it allows tools like TensorFlow Serving to reason about relative freshness.
You can load the SavedModel back into Python with tf.saved_model.load and see how Admiral Hopper's image is classified.
End of explanation
"""
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
"""
Explanation: Imported signatures always return dictionaries. To customize signature names and output dictionary keys, see Specifying signatures during export.
End of
explanation
"""
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]

decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]

print("Result after saving and loading:\n", decoded)
"""
Explanation: Running inference from the SavedModel gives the same result as the original model.
End of explanation
"""
!ls {mobilenet_save_path}
"""
Explanation: Running a SavedModel in TensorFlow Serving
SavedModels are usable from Python (more on that below), but production environments typically use a dedicated service for inference without running Python code. This is easy to set up from a SavedModel using TensorFlow Serving.
See the TensorFlow Serving REST tutorial for an end-to-end tensorflow-serving example.
The SavedModel format on disk
A SavedModel is a directory containing serialized signatures and the state needed to run them, including variable values and vocabularies.
End of explanation
"""
!saved_model_cli show --dir {mobilenet_save_path} --tag_set serve
"""
Explanation: The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.
SavedModels may contain multiple variants of the model (multiple v1.MetaGraphDefs, identified with the --tag_set flag to saved_model_cli), but this is rare. APIs which create multiple variants of a model include tf.Estimator.experimental_export_all_saved_models and, in TensorFlow 1.x, tf.saved_model.Builder.
End of explanation
"""
!ls {mobilenet_save_path}/variables
"""
Explanation: The variables directory contains a standard training checkpoint (see the guide to training checkpoints).
End of explanation
"""
class CustomModule(tf.Module):

  def __init__(self):
    super(CustomModule, self).__init__()
    self.v = tf.Variable(1.)
@tf.function
  def __call__(self, x):
    print('Tracing with', x)
    return x * self.v

  @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
  def mutate(self, new_v):
    self.v.assign(new_v)

module = CustomModule()
"""
Explanation: The assets directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.
SavedModels may have an assets.extra directory for any files not used by the TensorFlow graph, for example information for consumers about what to do with the SavedModel. TensorFlow itself does not use this directory.
Saving a custom model
tf.saved_model.save supports saving tf.Module objects and their subclasses, like tf.keras.Layer and tf.keras.Model.
Let's look at an example of saving and restoring a tf.Module.
End of explanation
"""
module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures')
module(tf.constant(0.))
print('Saving model...')
tf.saved_model.save(module, module_no_signatures_path)
"""
Explanation: When you save a tf.Module, any tf.Variable attributes, tf.function-decorated methods, and tf.Modules found via recursive traversal are saved. (See the checkpoints tutorial for more about this recursive traversal.) However, all Python attributes, functions, and data are lost. This means that when a tf.function is saved, no Python code is saved.
If no Python code is saved, how does SavedModel know how to restore the function?
Briefly, tf.function works by tracing the Python code to generate a ConcreteFunction (a callable wrapper around tf.Graph). When saving a tf.function, you're really saving the tf.function's cache of ConcreteFunctions.
To learn more about the relationship between tf.function and ConcreteFunctions, see the tf.function guide.
End of explanation
"""
imported = tf.saved_model.load(module_no_signatures_path)
assert imported(tf.constant(3.)).numpy() == 3
imported.mutate(tf.constant(2.))
assert imported(tf.constant(3.)).numpy() == 6
"""
Explanation: Loading and using a custom model
When you load a SavedModel in Python, all tf.Variable attributes, tf.function-decorated methods, and tf.Modules are restored in the same object structure as the original saved tf.Module.
End of explanation
"""
optimizer = tf.optimizers.SGD(0.05)

def train_step():
  with tf.GradientTape() as tape:
    loss = (10.
- imported(tf.constant(2.))) ** 2
  variables = tape.watched_variables()
  grads = tape.gradient(loss, variables)
  optimizer.apply_gradients(zip(grads, variables))
  return loss

for _ in range(10):
  # "v" approaches 5, "loss" approaches 0
  print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy()))
"""
Explanation: Since the Python code is not saved, calling the tf.function with a new input signature fails:
python
imported(tf.constant([3.]))
<pre data-segment-approved="false">ValueError: Could not find matching function to call for canonicalized inputs ((<tf.tensor shape="(1,)" dtype="float32">,), {}). Only existing signatures are [((TensorSpec(shape=(), dtype=tf.float32, name=u'x'),), {})].
</tf.tensor></pre>
Basic fine-tuning
Variable objects are available, and you can backpropagate through imported functions. That is enough to fine-tune (i.e. retrain) a SavedModel in simple cases.
End of explanation
"""
loaded = tf.saved_model.load(mobilenet_save_path)
print("MobileNet has {} trainable variables: {}, ...".format(
          len(loaded.trainable_variables),
          ", ".join([v.name for v in loaded.trainable_variables[:5]])))

trainable_variable_ids = {id(v) for v in loaded.trainable_variables}
non_trainable_variables = [v for v in loaded.variables
                           if id(v) not in trainable_variable_ids]
print("MobileNet also has {} non-trainable variables: {}, ...".format(
          len(non_trainable_variables),
          ", ".join([v.name for v in non_trainable_variables[:3]])))
"""
Explanation: General fine-tuning
A SavedModel from Keras provides more details than a plain __call__ to address more advanced cases of fine-tuning. TensorFlow Hub recommends providing the following of those, if applicable, in SavedModels shared for the purpose of fine-tuning:
If the model uses dropout, or another technique in which the forward pass differs between training and inference (like batch normalization), the __call__ method takes an optional, Python-valued training= argument that defaults to False but can be set to True.
Next to the __call__ attribute, there are .variable and .trainable_variable attributes with the corresponding lists of variables. A variable that was originally trainable but is meant to be frozen during fine-tuning is omitted from .trainable_variables.
For the sake of frameworks like Keras that represent weight regularizers as attributes of layers or sub-models, there can also be a .regularization_losses attribute. It holds a list of zero-argument functions whose values are meant for addition to the total loss.
Going back to the initial MobileNet example, here is how some of those look in practice:
End of explanation
"""
assert len(imported.signatures) == 0
"""
Explanation: Specifying signatures during export
Tools like TensorFlow Serving and saved_model_cli can interact with SavedModels. To help these tools determine which ConcreteFunctions to use, you need to specify serving signatures. tf.keras.Model
automatically specifies serving signatures, but, for custom modules, you must declare them explicitly.
Important: Unless you need to export your model to an environment other than TensorFlow 2.x with Python, you probably don't need to export signatures explicitly. If you're looking for a way of enforcing an input signature for a specific function, see the input_signature argument to tf.function.
By default, signatures are not declared in a custom tf.Module.
End of explanation
"""
module_with_signature_path = os.path.join(tmpdir, 'module_with_signature')
call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
tf.saved_model.save(module, module_with_signature_path, signatures=call)
imported_with_signatures = tf.saved_model.load(module_with_signature_path)
list(imported_with_signatures.signatures.keys())
"""
Explanation: To declare a serving signature, specify a ConcreteFunction using the signatures keyword argument. When specifying a single signature, its signature key will be 'serving_default', which is saved as the constant tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY.
End of explanation
"""
module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures')
signatures = {"serving_default": call,
              "array_input": module.__call__.get_concrete_function(tf.TensorSpec([None], tf.float32))}

tf.saved_model.save(module, module_multiple_signatures_path, signatures=signatures)
imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path)
list(imported_with_multiple_signatures.signatures.keys())
"""
Explanation: To export multiple signatures, pass a dictionary of signature keys to ConcreteFunctions. Each signature key corresponds to one ConcreteFunction.
End of explanation
"""
class CustomModuleWithOutputName(tf.Module):
  def __init__(self):
    super(CustomModuleWithOutputName, self).__init__()
    self.v = tf.Variable(1.)
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
  def __call__(self, x):
    return {'custom_output_name': x * self.v}

module_output = CustomModuleWithOutputName()
call_output = module_output.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
module_output_path = os.path.join(tmpdir, 'module_with_output_name')
tf.saved_model.save(module_output, module_output_path,
                    signatures={'serving_default': call_output})
imported_with_output_name = tf.saved_model.load(module_output_path)
imported_with_output_name.signatures['serving_default'].structured_outputs
"""
Explanation: By default, the output tensor names are fairly generic, like output_0. To control the names of the outputs, modify your tf.function to return a dictionary that maps output names to outputs. The names of inputs are derived from the Python function argument names.
End of explanation
"""
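The dictionary-output pattern can be exercised end to end with a minimal sketch. The Scaler module, its scale factor, and the 'scaled' key below are made up for illustration; they are not part of the guide:

```python
import os
import tempfile

import tensorflow as tf

class Scaler(tf.Module):
    """Toy module whose signature returns a dict, so the output key is 'scaled'."""
    def __init__(self):
        super().__init__()
        self.v = tf.Variable(3.)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        # Returning a dict controls the output key in the exported signature
        return {'scaled': x * self.v}

path = os.path.join(tempfile.mkdtemp(), 'scaler')
m = Scaler()
tf.saved_model.save(m, path,
                    signatures={'serving_default': m.__call__.get_concrete_function()})

reloaded = tf.saved_model.load(path)
out = reloaded.signatures['serving_default'](tf.constant(2.))
# The key is 'scaled' (not a generic output_0), and the value is 2 * 3 = 6
assert set(out.keys()) == {'scaled'}
assert float(out['scaled']) == 6.0
```

If __call__ instead returned the bare tensor, the signature's output key would fall back to a generic name such as output_0.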
mpanteli/music-outliers
notebooks/sensitivity_experiment_outliers.ipynb
mit
results_file = '../data/lda_data_8.pickle' n_iters = 10 for n in range(n_iters): print "iteration %d" % n print results_file X, Y, Yaudio = classification.load_data_from_pickle(results_file) # get only 80% of the dataset.. to vary the choice of outliers X, _, Y, _ = train_test_split(X, Y, train_size=0.8, stratify=Y) print X.shape, Y.shape # outliers print "detecting outliers..." df_global, threshold, MD = outliers.get_outliers_df(X, Y, chi2thr=0.999) outliers.print_most_least_outliers_topN(df_global, N=10) # write output print "writing file" df_global.to_csv('../data/outliers_'+str(n)+'.csv', index=False) n_iters = 10 ranked_countries = pd.DataFrame() ranked_outliers = pd.DataFrame() for n in range(n_iters): df_global = pd.read_csv('../data/outliers_'+str(n)+'.csv') df_global = df_global.sort_values('Outliers', axis=0, ascending=False).reset_index() ranked_countries = pd.concat([ranked_countries, df_global['Country']], axis=1) ranked_outliers = pd.concat([ranked_outliers, df_global['Outliers']], axis=1) ranked_countries_arr = ranked_countries.get_values() """ Explanation: Sample 80% of the dataset, for 10 times Let's sample only 80% of the recordings each time (in a stratified manner) so that the set of recordings considered for each country is changed every time. 
End of explanation """ # majority voting + precision at K K_vote = 10 country_vote = Counter(ranked_countries_arr[:K_vote, :].ravel()) df_country_vote = pd.DataFrame.from_dict(country_vote, orient='index').reset_index() df_country_vote.sort_values(0, ascending=False) def precision_at_k(array, gr_truth, k): return len(set(array[:k]) & set(gr_truth[:k])) / float(k) k = 10 ground_truth = df_country_vote['index'].get_values() p_ = [] for j in range(ranked_countries_arr.shape[1]): p_.append(precision_at_k(ranked_countries_arr[:, j], ground_truth, k)) p_ = np.array(p_) print 'mean', np.mean(p_) print 'std', np.std(p_) print p_ """ Explanation: Estimate precision at K First get the ground truth from a majority vote on the top K=10 positions. End of explanation """
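The vote-then-score procedure above can be replayed on a toy standalone example (the country names and rankings below are invented for illustration, not taken from the notebook's data):

```python
from collections import Counter

# Three hypothetical ranked outlier lists (like columns of ranked_countries_arr)
rankings = [['Botswana', 'Chad', 'Benin'],
            ['Botswana', 'Benin', 'Chad'],
            ['Chad', 'Botswana', 'Mali']]

# Majority vote over the top K positions of every ranking
K = 2
votes = Counter(c for r in rankings for c in r[:K])
ground_truth = [c for c, _ in votes.most_common()]

def precision_at_k(array, gr_truth, k):
    # Same definition as in the notebook: overlap of the two top-k sets
    return len(set(array[:k]) & set(gr_truth[:k])) / float(k)

precisions = [precision_at_k(r, ground_truth, K) for r in rankings]
print(ground_truth[:K], precisions)  # ['Botswana', 'Chad'] [1.0, 0.5, 1.0]
```

Each sampled run is then scored against the consensus ranking, exactly as the mean and standard deviation of `p_` are computed above.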
ssunkara1/bqplot
examples/Marks/Object Model/Lines.ipynb
apache-2.0
import numpy as np #For numerical programming and multi-dimensional arrays from pandas import date_range #For date-rate generation from bqplot import * #We import the relevant modules from bqplot """ Explanation: The Lines Mark Lines is a Mark object that is primarily used to visualize quantitative data. It works particularly well for continuous data, or when the shape of the data needs to be extracted. Introduction The Lines object provides the following features: Ability to plot a single set or multiple sets of y-values as a function of a set or multiple sets of x-values Ability to style the line object in different ways, by setting different attributes such as the colors, line_style, stroke_width etc. Ability to specify a marker at each point passed to the line. The marker can be a shape which is at the data points between which the line is interpolated and can be set through the markers attribute The Lines object has the following attributes | Attribute | Description | Default Value | |:-:|---|:-:| | colors | Sets the color of each line, takes as input a list of any RGB, HEX, or HTML color name | CATEGORY10 | | opacities | Controls the opacity of each line, takes as input a real number between 0 and 1 | 1.0 | | stroke_width | Real number which sets the width of all paths | 2.0 | | line_style | Specifies whether a line is solid, dashed, dotted or both dashed and dotted | 'solid' | | interpolation | Sets the type of interpolation between two points | 'linear' | | marker | Specifies the shape of the marker inserted at each data point | None | | marker_size | Controls the size of the marker, takes as input a non-negative integer | 64 | |close_path| Controls whether to close the paths or not | False | |fill| Specifies in which way the paths are filled. 
Can be set to one of {'none', 'bottom', 'top', 'inside'}| None | |fill_colors| List that specifies the fill colors of each path | [] | | Data Attribute | Description | Default Value | |x |abscissas of the data points | array([]) | |y |ordinates of the data points | array([]) | |color | Data according to which the Lines will be colored. Setting it to None defaults the choice of colors to the colors attribute | None | To explore more features, run the following lines of code: python from bqplot import Lines ?Lines or visit the Lines documentation page Let's explore these features one by one We begin by importing the modules that we will need in this example End of explanation """ security_1 = np.cumsum(np.random.randn(150)) + 100. security_2 = np.cumsum(np.random.randn(150)) + 100. """ Explanation: Random Data Generation End of explanation """ sc_x = LinearScale() sc_y = LinearScale() line = Lines(x=np.arange(len(security_1)), y=security_1, scales={'x': sc_x, 'y': sc_y}) ax_x = Axis(scale=sc_x, label='Index') ax_y = Axis(scale=sc_y, orientation='vertical', label='y-values of Security 1') Figure(marks=[line], axes=[ax_x, ax_y], title='Security 1') """ Explanation: Basic Line Chart Using the bqplot, object oriented API, we can generate a Line Chart with the following code snippet: End of explanation """ line.colors = ['DarkOrange'] """ Explanation: The x attribute refers to the data represented horizontally, while the y attribute refers the data represented vertically. We can explore the different attributes by changing each of them for the plot above: End of explanation """ # The opacity allows us to display the Line while featuring other Marks that may be on the Figure line.opacities = [.5] line.stroke_width = 2.5 line.line_style = 'dashed' line.interpolation = 'basis' """ Explanation: In a similar way, we can also change any attribute after the plot has been displayed to change the plot. 
Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot. End of explanation """ line.marker = 'triangle-down' """ Explanation: While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the markers attribute comes in. End of explanation """ # Here we define the dates we would like to use dates = date_range(start='01-01-2007', periods=150) dt_x = DateScale() sc_y = LinearScale() time_series = Lines(x=dates, y=security_1, scales={'x': dt_x, 'y': sc_y}) ax_x = Axis(scale=dt_x, label='Date') ax_y = Axis(scale=sc_y, orientation='vertical', label='Security 1') Figure(marks=[time_series], axes=[ax_x, ax_y], title='A Time Series Plot') """ Explanation: The marker attributes accepts the values square, circle, cross, diamond, square, triangle-down, triangle-up, arrow, rectangle, ellipse. Try changing the string above and re-running the cell to see how each marker type looks. Plotting a Time-Series The DateScale allows us to plot time series as a Lines plot conveniently with most date formats. End of explanation """ x_dt = DateScale() y_sc = LinearScale() """ Explanation: Plotting Multiples Sets of Data with Lines The Lines mark allows the user to plot multiple y-values for a single x-value. This can be done by passing an ndarray or a list of the different y-values as the y-attribute of the Lines as shown below. 
End of explanation """ dates_new = date_range(start='06-01-2007', periods=150) securities = np.cumsum(np.random.randn(150, 10), axis=0) positions = np.random.randint(0, 2, size=10) # We pass the color scale and the color data to the lines line = Lines(x=dates, y=[security_1, security_2], scales={'x': x_dt, 'y': y_sc}, labels=['Security 1', 'Security 2']) ax_x = Axis(scale=x_dt, label='Date') ax_y = Axis(scale=y_sc, orientation='vertical', label='Security 1') Figure(marks=[line], axes=[ax_x, ax_y], legend_location='top-left') """ Explanation: We pass each data set as an element of a list. The colors attribute allows us to pass a specific color for each line. End of explanation """ line.x, line.y = [dates, dates_new], [security_1, security_2] """ Explanation: Similarly, we can also pass multiple x-values for multiple sets of y-values End of explanation """ x_dt = DateScale() y_sc = LinearScale() col_sc = ColorScale(colors=['Red', 'Green']) dates_color = date_range(start='06-01-2007', periods=150) securities = 100. + np.cumsum(np.random.randn(150, 10), axis=0) positions = np.random.randint(0, 2, size=10) # Here we generate 10 random price series and 10 random positions # We pass the color scale and the color data to the lines line = Lines(x=dates_color, y=securities.T, scales={'x': x_dt, 'y': y_sc, 'color': col_sc}, color=positions, labels=['Security 1', 'Security 2']) ax_x = Axis(scale=x_dt, label='Date') ax_y = Axis(scale=y_sc, orientation='vertical', label='Security 1') Figure(marks=[line], axes=[ax_x, ax_y], legend_location='top-left') """ Explanation: Coloring Lines according to data The color attribute of a Lines mark can also be used to encode one more dimension of data. Suppose we have a portfolio of securities and we would like to color them based on whether we have bought or sold them. We can use the color attribute to encode this information. 
End of explanation """ line.color = None """ Explanation: We can also reset the colors of the Line to their defaults by setting the color attribute to None. End of explanation """ sc_x = LinearScale() sc_y = LinearScale() patch = Lines(x=[[0, 2, 1.2], [0.5, 2.5, 1.7], [4,5,6, 6, 5, 4, 3]], y=[[0, 0, 1], [0.5, 0.5, -0.5], [1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0]], fill_colors=['orange', 'blue', 'red'], fill='inside', stroke_width=10, close_path=True, scales={'x': sc_x, 'y': sc_y}, display_legend=True) Figure(marks=[patch], animation_duration=1000) patch.fill = 'top' patch.fill = 'bottom' patch.opacities = [0.1, 0.2] patch.x = [[2, 3, 3.2], [0.5, 2.5, 1.7], [4,5,6, 6, 5, 4, 3]] #patch.fill=['', 'blue'] patch.close_path = False """ Explanation: Patches The fill attribute of the Lines mark allows us to fill a path in different ways, while the fill_colors attribute lets us control the color of the fill End of explanation """
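The cells in this section refer to `security_1`, `security_2` and `date_range` without defining them; they are assumed to be set up earlier in the notebook. A minimal sketch of the kind of data they expect (the seed and values below are illustrative stand-ins, not bqplot's own sample data):

```python
import numpy as np

# Two random-walk "security" price series of length 150, matching the
# shapes the Lines cells above expect. Values here are illustrative.
rng = np.random.default_rng(0)
security_1 = 100.0 + np.cumsum(rng.standard_normal(150))
security_2 = 100.0 + np.cumsum(rng.standard_normal(150))

# `dates` and `date_range` are assumed to come from pandas, e.g.:
# from pandas import date_range
# dates = date_range(start='01-01-2007', periods=150)
```

With series like these in place, each of the plotting cells above can be re-run independently.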
parrt/msan501
notes/aliasing.ipynb
mit
x = y = 7 print(x,y) """ Explanation: Data aliasing One of the trickiest things about programming is figuring out exactly what data a variable refers to. Remember that we use names like data and salary to represent memory cells holding data values. The names are easier to remember than the physical memory addresses, but we can get fooled. For example, it's obvious that two variables x and y can both have the same integer value 7: End of explanation """ x = y = 7 print(id(x)) print(id(y)) """ Explanation: But, did you know that they are both referring to the same 7 object? In other words, variables in Python are always references or pointers to data so the variables are not technically holding the value. Pointers are like phone numbers that "point at" phones but pointers themselves are not the phone itself. We can uncover this secret level of indirection using the built-in id(x) function that returns the physical memory address pointed out by x. To demonstrate that, let's ask what x and y point at: End of explanation """ from lolviz import * callviz(varnames=['x','y']) """ Explanation: Wow! They are the same. That number represents the memory location where Python has stored the shared 7 object. Of course, as programmers we don't think of these atomic elements as referring to the same object; just keep in mind that they do. We are more likely to view them as copies of the same number, as lolviz shows visually: End of explanation """ name = 'parrt' userid = name # userid now points at the same memory as name print(id(name)) print(id(userid)) """ Explanation: Let's verify that the same thing happens for strings: End of explanation """ you = [1,3,5] me = [1,3,5] print(id(you)) print(id(me)) callviz(varnames=['you','me']) """ Explanation: Ok, great, so we are in fact sharing the same memory address to hold the string 'parrt' and both of the variable names point at that same shared space. We call this aliasing, in the language implementation business. 
Things only get freaky when we start changing shared data. This can't happen with integers and strings because they are immutable (can't be changed). Let's look at two identical copies of a single list: End of explanation """ you = [1,3,5] me = [1,3,5] print(you, me) you[0] = 99 print(you, me) """ Explanation: Those lists have the same value but live a different memory addresses. They are not aliased; they are not shared. Consequently, changing one does not change the other: End of explanation """ you = [1,3,5] me = you print(id(you)) print(id(me)) print(you, me) callviz(varnames=['you','me']) """ Explanation: On the other hand, let's see what happens if we make you and me share the same copy of the list (point at the same memory location): End of explanation """ you[0] = 99 print(you, me) callviz(varnames=['you','me']) """ Explanation: Now, changing one appears to change the other, but in fact both simply refer to the same location in memory: End of explanation """ you = [1,3,5] me = you callviz(varnames=['you','me']) me = [9,7,5] # doesn't affect `you` at all print(you) print(me) callviz(varnames=['you','me']) """ Explanation: Don't confuse changing the pointer to the list with changing the list elements: End of explanation """ X = [[1,2],[3,4]] Y = X.copy() # shallow copy callviz(varnames=['X','Y']) X[0][1] = 99 callviz(varnames=['X','Y']) print(Y) """ Explanation: This aliasing of data happens a great deal when we pass lists or other data structures to functions. Passing list Quantity to a function whose argument is called data means that the two are aliased. We'll look at this in more detail in the "Visibility of symbols" section of Organizing your code with functions. Shallow copies End of explanation """
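The `copy()` method above is a *shallow* copy, so the inner lists are still shared. When we want fully independent nested data, the standard library's `deepcopy` does the job:

```python
from copy import deepcopy

X = [[1, 2], [3, 4]]
Y = deepcopy(X)        # copies the inner lists too, not just the outer list

X[0][1] = 99           # change an element through X
# Y keeps its original values because it shares no memory with X
```

Unlike the shallow copy, mutating `X[0]` here leaves `Y[0]` untouched.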
DallasTrinkle/Onsager
examples/GF-convergence.ipynb
mit
import sys sys.path.extend(['../']) import numpy as np import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') %matplotlib inline import onsager.crystal as crystal import onsager.GFcalc as GFcalc """ Explanation: Convergence of Green function calculation We check the convergence with $N_\text{kpt}$ for the calculation of the vacancy Green function for FCC and HCP structures. In particular, we will look at: The $\mathbf{R}=0$ value, The largest $\mathbf{R}$ value in the calculation of a first neighbor thermodynamic interaction range, The difference of the Green function value for (1) and (2), with increasing k-point density. End of explanation """ a0 = 1. FCC, HCP = crystal.Crystal.FCC(a0, "fcc"), crystal.Crystal.HCP(a0, chemistry="hcp") print(FCC) print(HCP) """ Explanation: Create an FCC and HCP lattice. End of explanation """ FCCR = np.array([0,2.,2.]) HCPR1, HCPR2 = np.array([4.,0.,0.]), np.array([2.,0.,2*np.sqrt(8/3)]) FCCsite, FCCjn = FCC.sitelist(0), FCC.jumpnetwork(0, 0.75) HCPsite, HCPjn = HCP.sitelist(0), HCP.jumpnetwork(0, 1.01) """ Explanation: We will put together our vectors for consideration: Maximum $\mathbf{R}$ for FCC = (400), or $\mathbf{x}=2\hat j+2\hat k$. Maximum $\mathbf{R}$ for HCP = (440), or $\mathbf{x}=4\hat i$, and (222), or $\mathbf{x}=2\hat i + 2\sqrt{8/3}\hat k$. and our sitelists and jumpnetworks. 
End of explanation """ FCCdata = {pmaxerror:[] for pmaxerror in range(-16,0)} print('kpt\tNkpt\tG(0)\tG(R)\tG diff') for Nmax in range(1,13): GFFCC = GFcalc.GFCrystalcalc(FCC, 0, FCCsite, FCCjn, Nmax=Nmax) Nreduce, Nkpt, kpt = GFFCC.Nkpt, np.prod(GFFCC.kptgrid), GFFCC.kptgrid for pmax in sorted(FCCdata.keys(), reverse=True): GFFCC.SetRates(np.ones(1), np.zeros(1), np.ones(1)/12, np.zeros(1), 10**(pmax)) g0,gR = GFFCC(0,0,np.zeros(3)), GFFCC(0,0,FCCR) FCCdata[pmax].append((Nkpt, g0, gR)) Nkpt,g0,gR = FCCdata[-8][-1] # print the 10^-8 values print("{k[0]}x{k[1]}x{k[2]}\t".format(k=kpt) + " {:5d} ({})\t{:.12f}\t{:.12f}\t{:.12f}".format(Nkpt, Nreduce, g0, gR,g0-gR)) HCPdata = [] print('kpt\tNkpt\tG(0)\tG(R1)\tG(R2)\tG(R1)-G(0)\tG(R2)-G0') for Nmax in range(1,13): GFHCP = GFcalc.GFCrystalcalc(HCP, 0, HCPsite, HCPjn, Nmax=Nmax) GFHCP.SetRates(np.ones(1), np.zeros(1), np.ones(2)/12, np.zeros(2), 1e-8) g0,gR1,gR2 = GFHCP(0,0,np.zeros(3)), GFHCP(0,0,HCPR1), GFHCP(0,0,HCPR2) Nreduce, Nkpt, kpt = GFHCP.Nkpt, np.prod(GFHCP.kptgrid), GFHCP.kptgrid HCPdata.append((Nkpt, g0, gR1, gR2)) print("{k[0]}x{k[1]}x{k[2]}\t".format(k=kpt) + "{:5d} ({})\t{:.12f}\t{:.12f}\t{:.12f}\t{:.12f}\t{:.12f}".format(Nkpt, Nreduce, g0, gR1, gR2, g0-gR1, g0-gR2)) """ Explanation: We use $N_\text{max}$ parameter, which controls the automated generation of k-points to iterate through successively denser k-point meshes. End of explanation """ print('pmax\tGinf\talpha (Nkpt^-5/3 prefactor)') Ginflist=[] for pmax in sorted(FCCdata.keys(), reverse=True): data = FCCdata[pmax] Nk53 = np.array([N**(5/3) for (N,g0,gR) in data]) gval = np.array([g0 for (N,g0,gR) in data]) N10,N5 = np.average(Nk53*Nk53),np.average(Nk53) g10,g5 = np.average(gval*Nk53*Nk53),np.average(gval*Nk53) denom = N10-N5**2 Ginf,alpha = (g10-g5*N5)/denom, (g10*N5-g5*N10)/denom Ginflist.append(Ginf) print('{}\t{}\t{}'.format(pmax, Ginf, alpha)) """ Explanation: First, look at the behavior of the error with $p_\text{max}$(error) parameter. 
The k-point integration error scales as $N_\text{kpt}^{-5/3}$, and we see the $p_\text{max}$ error is approximately $10^{-8}$. End of explanation """ print('pmax\tGinf\talpha (Nkpt^-5/3 prefactor)') Ginflist=[] for pmax in sorted(FCCdata.keys(), reverse=True): data = FCCdata[pmax] Nk53 = np.array([N**(5/3) for (N,g0,gR) in data]) gval = np.array([g0 for (N,g0,gR) in data]) N10,N5 = np.average(Nk53*Nk53),np.average(Nk53) g10,g5 = np.average(gval*Nk53*Nk53),np.average(gval*Nk53) denom = N10-N5**2 Ginf,alpha = (g10-g5*N5)/denom, (g10*N5-g5*N10)/denom Ginflist.append(Ginf) print('{}\t{}\t{}'.format(pmax, Ginf, alpha)) """ Explanation: Plot the error in the Green function for FCC (at 0, maximum R, and difference between those GF). We extract the infinite value by fitting the error to $N_{\mathrm{kpt}}^{-5/3}$, which empirically matches the numerical error.
End of explanation """ # plot the errors from pmax = 10^-8 data = HCPdata Nk = np.array([N for (N,g0,gR1,gR2) in data]) g0val = np.array([g0 for (N,g0,gR1,gR2) in data]) gR1val = np.array([gR1 for (N,g0,gR1,gR2) in data]) gR2val = np.array([gR2 for (N,g0,gR1,gR2) in data]) gplot = [] Nk53 = np.array([N**(5/3) for (N,g0,gR1,gR2) in data]) for gdata, start in zip((g0val, gR1val, gR2val, g0val-gR1val, g0val-gR2val), (3,3,3,3,3)): N10,N5 = np.average(Nk53[start:]*Nk53[start:]),np.average(Nk53[start:]) denom = N10-N5**2 g10 = np.average(gdata[start:]*Nk53[start:]*Nk53[start:]) g5 = np.average(gdata[start:]*Nk53[start:]) Ginf,alpha = (g10-g5*N5)/denom, (g10*N5-g5*N10)/denom gplot.append(np.abs(gdata-Ginf)) fig, ax1 = plt.subplots() ax1.plot(Nk, gplot[0], 'k', label='$G(\mathbf{0})$ error $\sim N_{\mathrm{kpt}}^{-5/3}$') ax1.plot(Nk, gplot[1], 'b', label='$G(\mathbf{R}_1)$ error $\sim N_{\mathrm{kpt}}^{-5/3}$') ax1.plot(Nk, gplot[2], 'r', label='$G(\mathbf{R}_2)$ error $\sim N_{\mathrm{kpt}}^{-5/3}$') ax1.plot(Nk, gplot[3], 'b--', label='$G(\mathbf{0})-G(\mathbf{R}_1)$ error') ax1.plot(Nk, gplot[4], 'r--', label='$G(\mathbf{0})-G(\mathbf{R}_2)$ error') ax1.set_xlim((1e2,2e5)) ax1.set_ylim((1e-11,1)) ax1.set_xscale('log') ax1.set_yscale('log') ax1.set_xlabel('$N_{\mathrm{kpt}}$', fontsize='x-large') ax1.set_ylabel('integration error $G-G^\infty$', fontsize='x-large') ax1.legend(bbox_to_anchor=(0.6,0.6,0.4,0.4), ncol=1, shadow=True, frameon=True, fontsize='medium') ax2 = ax1.twiny() ax2.set_xscale('log') ax2.set_xlim(ax1.get_xlim()) ax2.set_xticks([n for n in Nk]) # ax2.set_xticklabels(["${:.0f}$".format((n*1.875)**(1/3)) for n in Nk]) ax2.set_xticklabels(['6','10','16','20','26','30','36','40','46','50','56','60']) ax2.set_xlabel('k-point divisions (basal)', fontsize='x-large') ax2.grid(False) ax2.tick_params(axis='x', top='on', direction='in', length=6) plt.show() # plt.savefig('HCP-GFerror.pdf', transparent=True, format='pdf') """ Explanation: Plot the error in Green 
function for HCP. End of explanation """
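The $G^\infty$ extraction used in the loops above amounts to a least-squares fit of $G(N) = G^\infty + \alpha N_{\mathrm{kpt}}^{-5/3}$. A stand-alone sketch of the same idea, on synthetic data with a known answer rather than the notebook's actual Green function values:

```python
import numpy as np

def extrapolate_Ginf(Nkpt, G):
    """Ordinary least-squares fit of G(N) = Ginf + alpha * N**(-5/3)."""
    N = np.asarray(Nkpt, dtype=float)
    A = np.column_stack([np.ones_like(N), N ** (-5 / 3)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(G, dtype=float), rcond=None)
    Ginf, alpha = coef
    return Ginf, alpha

# synthetic check: data generated exactly from the model
N = np.array([4, 8, 12, 16, 20]) ** 3          # k-point counts, n^3 grids
G = 0.25 + 7.0 * N.astype(float) ** (-5 / 3)   # Ginf = 0.25, alpha = 7
Ginf, alpha = extrapolate_Ginf(N, G)
```

The notebook's inline version additionally weights the fit so that the densest grids dominate; the closed-form expressions for `Ginf` and `alpha` there are the corresponding weighted normal equations.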
molgor/spystats
notebooks/Analysis of spatial models using systematic and random samples.ipynb
bsd-2-clause
#new_data = prepareDataFrame("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv") ## En Hec #new_data = prepareDataFrame("/home/hpc/28/escamill/csv_data/idiv/plotsClimateData_11092017.csv") ## New "official" dataset new_data = prepareDataFrame("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv") #IN HEC #new_data = prepareDataFrame("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv") """ Explanation: new_data['residuals1'] = results.resid End of explanation """ def systSelection(dataframe,k): n = len(dataframe) idxs = range(0,n,k) systematic_sample = dataframe.iloc[idxs] return systematic_sample ################## k = 10 # The k-th element to take as a sample systematic_sample = systSelection(new_data,k) ax = systematic_sample.plot(column='logBiomass',figsize=(16,10),cmap=plt.cm.Blues,edgecolors='') """ Explanation: Subsetting the data Three different methods for subsetting the data. 1. Using a systematic selection by index modulus 2. Using a random uniform selection by indices. 3.
A geographic subselection (Clip) Systematic selection End of explanation """ def randomSelection(dataframe,p): n = len(dataframe) idxs = np.random.choice(n,p,replace=False) random_sample = dataframe.iloc[idxs] return random_sample ################# n = len(new_data) p = 3000 # The number of samples taken (let's do it without replacement) """ Explanation: Random (Uniform) selection End of explanation """ def subselectDataFrameByCoordinates(dataframe,namecolumnx,namecolumny,minx,maxx,miny,maxy): """ Returns a subselection by coordinates using the dataframe. """ minx = float(minx) maxx = float(maxx) miny = float(miny) maxy = float(maxy) section = dataframe[lambda x: (x[namecolumnx] > minx) & (x[namecolumnx] < maxx) & (x[namecolumny] > miny) & (x[namecolumny] < maxy) ] return section # Consider the following subregion minx = -100 maxx = -85 miny = 30 maxy = 35 section = subselectDataFrameByCoordinates(new_data,'LON','LAT',minx,maxx,miny,maxy) #section = new_data[lambda x: (x.LON > minx) & (x.LON < maxx) & (x.LAT > miny) & (x.LAT < maxy) ] section.plot(column='logBiomass') """ Explanation: Geographic subselection End of explanation """
vs.model.f(vs.distance_coordinates.flatten()) ## let's try to use a better model vs.model.f(vs.distance_coordinates.flatten()) %time vs.model.corr_f(vs.distance_coordinates.flatten()).reshape(vs.distance_coordinates.shape) matern_model = tools.MaternVariogram(sill=0.340125401705,range_a=5577.83789733, nugget=0.33, kappa=4) whittle_model = tools.WhittleVariogram(sill=0.340288288241, range_a=40963.3203528, nugget=0.329830410223, alpha=1.12279978135) exp_model = tools.ExponentialVariogram(sill=0.340294258738, range_a=38507.8253768, nugget=0.329629457808) gaussian_model = tools.GaussianVariogram(sill=0.340237044718, range_a=44828.0323827, nugget=0.330734960804) spherical_model = tools.SphericalVariogram(sill=266491706445.0, range_a=3.85462485193e+19, nugget=0.3378323178453) %time matern_model.f(vs.distance_coordinates.flatten()) %time whittle_model.f(vs.distance_coordinates.flatten()) %time exp_model.f(vs.distance_coordinates.flatten()) %time gaussian_model.f(vs.distance_coordinates.flatten()) %time spherical_model.f(vs.distance_coordinates.flatten()) %time mcf = matern_model.corr_f(vs.distance_coordinates.flatten()) %time wcf = whittle_model.corr_f(vs.distance_coordinates.flatten()) %time ecf = exp_model.corr_f(vs.distance_coordinates.flatten()) %time gcf = gaussian_model.corr_f(vs.distance_coordinates.flatten()) %time scf = spherical_model.corr_f(vs.distance_coordinates.flatten()) %time mcf0 = matern_model.corr_f_old(vs.distance_coordinates.flatten()) %time wcf0 = whittle_model.corr_f_old(vs.distance_coordinates.flatten()) %time ecf0 = exp_model.corr_f_old(vs.distance_coordinates.flatten()) %time gcf0 = gaussian_model.corr_f_old(vs.distance_coordinates.flatten()) %time scf0 = spherical_model.corr_f_old(vs.distance_coordinates.flatten()) w = matern_model.corr_f(vs.distance_coordinates.flatten()) w2 = matern_model.corr_f_old(vs.distance_coordinates.flatten()) print(np.array_equal(mcf,mcf0)) print(np.array_equal(wcf,wcf0)) print(np.array_equal(ecf,ecf0)) 
print(np.array_equal(gcf,gcf0)) #np.array_equal(scf,scf0) %time vs.calculateCovarianceMatrix() """ Explanation: Model Analysis with the empirical variogram End of explanation """ ### read csv files conf_ints = pd.read_csv("/outputs/gls_confidence_int.csv") params = pd.read_csv("/outputs/params_gls.csv") params2 = pd.read_csv("/outputs/params2_gls.csv") pvals = pd.read_csv("/outputs/pvalues_gls.csv") pnobs = pd.read_csv("/outputs/n_obs.csv") prsqs = pd.read_csv("/outputs/rsqs.csv") params conf_ints pvals plt.plot(pnobs.n_obs,prsqs.rsq) plt.title("$R^2$ statistic for GLS on logBiomass ~ logSppn using Sp.autocor") plt.xlabel("Number of observations") tt = params.transpose() tt.columns = tt.iloc[0] tt = tt.drop(tt.index[0]) plt.plot(pnobs.n_obs,tt.Intercept) plt.title("Intercept parameter") plt.plot(pnobs.n_obs,tt.logSppN) plt.title("logSppn parameter") """ Explanation: Analysis and Results for the systematic sample End of explanation """ ccs = map(lambda s : bundleToGLS(s,gvg.model),samples) #bundleToGLS(samples[22],gvg.model) covMat = buildSpatialStructure(samples[8],gvg.model) #np.linalg.pinv(covMat) calculateGLS(samples[8],covMat) #tt = covMat.flatten() secvg = tools.Variogram(samples[8],'logBiomass',model=gvg.model) DM = secvg.distance_coordinates dm = DM.flatten() dm.sort() pdm = pd.DataFrame(dm) xxx = pdm.loc[pdm[0] > 0].sort() xxx.shape 8996780 + 3000 - (3000 * 3000) pdm.shape dd = samples[22].drop_duplicates(subset=['newLon','newLat']) secvg2 = tools.Variogram(dd,'logBiomass',model=gvg.model) covMat = buildSpatialStructure(dd,gvg.model) calculateGLS(dd,covMat) samples[22].shape gvg.model.corr_f(xxx.values) gvg.model.corr_f([100]) gvg.model.corr_f([10]) """ Explanation: Test for analysis End of explanation """
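`calculateGLS` and `bundleToGLS` above are helpers defined elsewhere in this repository; the estimator behind them is standard generalized least squares, $\hat\beta = (X^T C^{-1} X)^{-1} X^T C^{-1} y$ for a spatial covariance matrix $C$. A hedged numpy sketch of that closed form (the function name and toy data are illustrative, not spystats' API):

```python
import numpy as np

def gls_fit(X, y, C):
    """GLS estimate beta = (X' C^-1 X)^-1 X' C^-1 y (a sketch, not spystats' code)."""
    Ci = np.linalg.inv(C)
    XtCi = X.T @ Ci
    return np.linalg.solve(XtCi @ X, XtCi @ y)

# toy check: with C = I, GLS reduces to OLS, and on noise-free data
# the true coefficients are recovered exactly
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
beta_true = np.array([2.0, -1.5])
y = X @ beta_true
beta = gls_fit(X, y, np.eye(50))
```

In the notebook, $C$ comes from `buildSpatialStructure`, i.e. the fitted variogram's correlation function evaluated on the pairwise distance matrix.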
sserkez/ocelot
test/workshop/2_tracking.ipynb
gpl-3.0
# the output of plotting commands is displayed inline within frontends, # directly below the code cell that produced it %matplotlib inline # this python library provides generic shallow (copy) and deep copy (deepcopy) operations from copy import deepcopy # import from Ocelot main modules and functions from ocelot import * # import from Ocelot graphical modules from ocelot.gui.accelerator import * # import injector lattice from ocelot.test.workshop.injector_lattice import * """ Explanation: This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016. Tutorial N2. Tracking. As an example, we will use the lattice file (converted to Ocelot format) of the European XFEL Injector. This example will cover the following topics: calculation of the linear optics for the European XFEL Injector. Tracking of the particles in first and second order approximation without collective effects. Coordinates Coordinates in Ocelot are the following: $$ \left (x, \quad x' = \frac{p_x}{p_0} \right), \qquad \left (y, \quad y' = \frac{p_y}{p_0} \right), \qquad \left (\Delta s = c\tau, \quad p = \frac{\Delta E}{p_0 c} \right)$$ Requirements injector_lattice.py - input file, the Injector lattice. beam_130MeV.ast - input file, initial beam distribution in ASTRA format. End of explanation """ lat = MagneticLattice(cell, stop=None) """ Explanation: If you want to see the injector_lattice.py file you can run the following command (the lattice file is very large): $ %load injector_lattice.py The variable cell contains all the elements of the lattice in the right order. And again Ocelot will work with the class MagneticLattice instead of a simple sequence of elements. So we have to run the following command.
End of explanation """ # initialization of Twiss object tws0 = Twiss() # defining initial twiss parameters tws0.beta_x = 29.171 tws0.beta_y = 29.171 tws0.alpha_x = 10.955 tws0.alpha_y = 10.955 # defining initial electron energy in GeV tws0.E = 0.005 # calculate optical functions with initial twiss parameters tws = twiss(lat, tws0, nPoints=None) # plotting twiss parameters plot_opt_func(lat, tws, top_plot=["Dx", "Dy"], fig_name="i1", legend=False) plt.show() """ Explanation: 1. Design optics calculation of the European XFEL Injector Remark For convenience reasons, we define optical functions starting at the gun by backtracking of the optical functions derived from ASTRA (or similar space charge code) at 130 MeV at the entrance to the first quadrupole. The optical functions we thus obtain have obviously nothing to do with the actual beam envelope from the gun to the 130 MeV point. Because we work with a linear accelerator we have to define the initial energy and initial twiss parameters in order to get correct twiss functions along the Injector. End of explanation """ from ocelot.adaptors.astra2ocelot import * #p_array_init = astraBeam2particleArray(filename='beam_130MeV.ast') p_array_init = astraBeam2particleArray(filename='beam_130MeV_off_crest.ast') """ Explanation: 2. Tracking in first and second order approximation without any collective effects Remark Because of the reasons mentioned above, we start the beam tracking from the first quadrupole after RF cavities. Loading of beam distribution In order to perform tracking we have to have a beam distribution. We will load the beam distribution from an ASTRA file ('beam_130MeV_off_crest.ast'). And we convert the Astra beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
In order to work with converters we have to import a specific module from ocelot.adaptors from ocelot.adaptors.astra2ocelot import * After importing ocelot.adaptors.astra2ocelot we can use the converter astraBeam2particleArray() to load and convert. As you will see, the beam distribution consists of 200 000 particles (that is why loading can take a few seconds), charge 250 pC, initial energy is about 6.5 MeV. ParticleArray is a class which includes several parameters and methods. * ParticleArray.particles is a 1D numpy array with coordinates of particles in $$ParticleArray.particles = [\vec{x_0}, \vec{x_1}, ..., \vec{x_n}], $$ where $$\vec{x_n} = (x_n, x_n', y_n, y_n', \tau_n, p_n)$$ * ParticleArray.s is the longitudinal coordinate of the reference particle in [m]. * ParticleArray.E is the energy of the reference particle in [GeV]. * ParticleArray.q_array - is a 1D numpy array of the charges of each (macro)particle in [C] End of explanation """ # initialization of tracking method method = MethodTM() # for second order tracking we have to choose SecondTM method.global_method = SecondTM # for first order tracking uncomment next line # method.global_method = TransferMap # we will start simulation from the first quadrupole (QI.46.I1) after RF section. # you can change stop element (and the start element, as well) # START_73_I1 - marker before Dog leg # START_96_I1 - marker before Bunch Compression lat_t = MagneticLattice(cell, start=QI_46_I1, stop=None, method=method) """ Explanation: Selection of the tracking order and lattice for the tracking. MagneticLattice(sequence, start=None, stop=None, method=MethodTM()) has the following arguments: * sequence - list of the elements, * start - first element of the lattice. If None, then the lattice starts from the first element of the sequence, * stop - last element of the lattice. If None, then the lattice stops at the last element of the sequence, * method=MethodTM() - method of the tracking. The MethodTM() class assigns a transfer map to every element.
By default all elements are assigned the first order transfer map - TransferMap. One can create one's own map, but there are the following predefined maps: - TransferMap - first order matrices. - SecondTM - 2nd order matrices. - KickTM - kick applied. - RungeKuttaTM - Runge-Kutta integrator is applied, but requires a 3D magnetic field function element.mag_field = lambda x, y, z: (Bx, By, Bz) (see example ocelot/demos/ebeam/tune_shift.py) End of explanation """ navi = Navigator(lat_t) p_array = deepcopy(p_array_init) tws_track, p_array = track(lat_t, p_array, navi) # you can change top_plot argument, for example top_plot=["alpha_x", "alpha_y"] plot_opt_func(lat_t, tws_track, top_plot=["E"], fig_name=0, legend=False) plt.show() """ Explanation: Tracking For tracking we have to define the following objects: * Navigator defines the step (dz) of tracking and which, if any, physical process will be applied at each step. In order to add collective effects (Space charge, CSR or wake) the method add_physics_proc() must be run. - **Method:** * Navigator.add_physics_proc(physics_proc, elem1, elem2) - physics_proc - physics process, can be CSR, SpaceCharge or Wake, - elem1 and elem2 - first and last elements between which the physics process will be applied. track(MagneticLattice, ParticleArray, Navigator) - the function performs tracking through the lattice [lat] of the particles [p_array]. This function also calculates the twiss parameters of the beam distribution at each tracking step.
End of explanation """ bins_start, hist_start = get_current(p_array, charge=p_array.q_array[0], num_bins=200) plt.figure(4) plt.title("current: end") plt.plot(bins_start*1000, hist_start) plt.xlabel("s, mm") plt.ylabel("I, A") plt.grid(True) plt.show() """ Explanation: Current profile End of explanation """ tau = np.array([p.tau for p in p_array]) dp = np.array([p.p for p in p_array]) x = np.array([p.x for p in p_array]) y = np.array([p.y for p in p_array]) ax1 = plt.subplot(311) ax1.plot(-tau*1000, x*1000, 'r.') plt.setp(ax1.get_xticklabels(), visible=False) plt.ylabel("x, mm") plt.grid(True) ax2 = plt.subplot(312, sharex=ax1) ax2.plot(-tau*1000, y*1000, 'r.') plt.setp(ax2.get_xticklabels(), visible=False) plt.ylabel("y, mm") plt.grid(True) ax3 = plt.subplot(313, sharex=ax1) ax3.plot(-tau*1000, dp, 'r.') plt.ylabel("dE/E") plt.xlabel("s, mm") plt.grid(True) plt.show() """ Explanation: Beam distribution End of explanation """
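`get_current` above is an Ocelot helper. Conceptually, the current profile is a charge-weighted histogram of the longitudinal coordinate τ, converted to Amperes by dividing the charge in each bin by the time a bin length takes at v ≈ c. A rough stand-alone sketch of that idea (not Ocelot's actual implementation; the toy bunch parameters are illustrative):

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def current_profile(tau, charges, num_bins=200):
    """I(s) from longitudinal positions tau [m] and macroparticle charges [C]."""
    hist, edges = np.histogram(tau, bins=num_bins, weights=charges)
    ds = edges[1] - edges[0]                      # bin length [m]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist * SPEED_OF_LIGHT / ds    # q per bin / (ds / c) = Amperes

# toy bunch: 10 000 macroparticles, 250 pC total, ~1 mm long (illustrative)
rng = np.random.default_rng(2)
tau = rng.normal(0.0, 2.5e-4, 10_000)
q = np.full(10_000, 250e-12 / 10_000)
s, I = current_profile(tau, q)
```

Integrating the resulting profile over s and dividing by c gives back the total bunch charge, which is a useful sanity check on any such binning.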
alasdairtran/mclearn
projects/alasdair/notebooks/02_exploratory_analysis.ipynb
bsd-3-clause
# remove after testing %load_ext autoreload %autoreload 2 import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from urllib.request import urlopen from sklearn.decomposition import PCA from mclearn.viz import (plot_class_distribution, plot_hex_map, plot_filters_and_spectrum, plot_scatter_with_classes) from mclearn.preprocessing import balanced_train_test_split %matplotlib inline sns.set_style('ticks') fig_dir = '../thesis/figures/' target_col = 'class' sdss_features = ['psfMag_r_w14', 'psf_u_g_w14', 'psf_g_r_w14', 'psf_r_i_w14', 'psf_i_z_w14', 'petroMag_r_w14', 'petro_u_g_w14', 'petro_g_r_w14', 'petro_r_i_w14', 'petro_i_z_w14', 'petroRad_r'] vstatlas_features = ['rmagC', 'umg', 'gmr', 'rmi', 'imz', 'rmw1', 'w1m2'] sdss = pd.read_hdf('../data/sdss.h5', 'sdss') vstatlas = pd.read_hdf('../data/vstatlas.h5', 'vstatlas') """ Explanation: Exploratory Analysis End of explanation """ fig = plt.figure(figsize=(5, 5)) ax = plot_class_distribution(sdss[target_col]) ax.tick_params(top='off', right='off') fig.savefig(fig_dir + '2_astro/sdss_class_distribution.pdf', bbox_inches='tight') fig = plt.figure(figsize=(5, 5)) ax = plot_class_distribution(vstatlas[target_col]) ax.tick_params(top='off', right='off') fig.savefig(fig_dir + '2_astro/vstatlas_class_distribution.pdf', bbox_inches='tight') sdss[target_col].value_counts() 0.3*(25604+ 6559+ 2303+590) vstatlas[target_col].value_counts() """ Explanation: Distribution of Classes End of explanation """ fig = plt.figure(figsize=(10,5)) zero_values = np.zeros(1) ax = plot_hex_map(zero_values, zero_values, axisbg=None, colorbar=False, labels=True) fig.savefig(fig_dir + '2_astro/mollweide_map.pdf', bbox_inches='tight') """ Explanation: Maps of Classes We have around 2.8 million labelled data points. Below are the maps showing how the three classes - galaxies, stars, and quasars - are distributed. Here we use the Mollweide projection, with the following coordinate layout. 
The red line is the plane of the Milky Way. End of explanation """ # make Boolean index of each object is_galaxy = sdss[target_col] == 'Galaxy' is_star = sdss[target_col] == 'Star' is_quasar = sdss[target_col] == 'Quasar' # extract the coordinates of each object galaxy_ra, galaxy_dec = sdss[is_galaxy]['ra'], sdss[is_galaxy]['dec'] star_ra, star_dec = sdss[is_star]['ra'], sdss[is_star]['dec'] quasar_ra, quasar_dec = sdss[is_quasar]['ra'], sdss[is_quasar]['dec'] # plot galaxy map fig = plt.figure(figsize=(10,5)) ax = plot_hex_map(galaxy_ra, galaxy_dec) fig.savefig(fig_dir + '4_expt1/sdss_train_galaxies.png', bbox_inches='tight', dpi=300) # plot star map fig = plt.figure(figsize=(10,5)) ax = plot_hex_map(star_ra, star_dec) fig.savefig(fig_dir + '4_expt1/sdss_train_stars.png', bbox_inches='tight', dpi=300) # plot quasar map fig = plt.figure(figsize=(10,5)) ax = plot_hex_map(quasar_ra, quasar_dec) fig.savefig(fig_dir + '4_expt1/sdss_train_quasars.png', bbox_inches='tight', dpi=300) """ Explanation: Here are the distribution map of galaxies, stars, and quasars, respectively. End of explanation """ vega_url = 'http://www.astro.washington.edu/users/ivezic/DMbook/data/1732526_nic_002.ascii' ugriz_filter_url = 'http://www.sdss.org/dr7/instruments/imager/filters/%s.dat' filter_dir = '../data/filters' spectra_dir = '../data/spectra' fig = plt.figure(figsize=(10,5)) ax = plot_filters_and_spectrum(ugriz_filter_url, vega_url, filter_dir=filter_dir, spectra_dir=spectra_dir) fig.savefig(fig_dir + '2_astro/vega_filters_and_spectrum.pdf', bbox_inches='tight') """ Explanation: Photometry vs Spectroscopy To see the difference between photometry and spectroscopy, we plot the spectrum of Vega (which gives us a lot of information but this is expensive to obtain) and the 5 ugriz photometric filters. 
End of explanation """ X_train, X_test, y_train, y_test = balanced_train_test_split( sdss[sdss_features], sdss[target_col], train_size=200000, test_size=100000, random_state=2) pca = PCA(n_components=2) projection = pca.fit_transform(X_train) classes = ['Galaxy', 'Quasar', 'Star'] fig = plt.figure(figsize=(10, 5)) ax = plot_scatter_with_classes(projection, y_train, classes) ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) fig.savefig(fig_dir + '4_expt1/sdss_pca_all.png', bbox_inches='tight', dpi=300) """ Explanation: PCA and Dimensionality Reduction We reduce the 11 dimensions down to 2 dimensions using PCA. End of explanation """
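The PCA step above keeps only 2 of the 11 photometric features' directions. The same projection can be reproduced, and the retained variance inspected, directly from an SVD of the centred data; a sketch independent of scikit-learn, run on random stand-in data rather than the SDSS features:

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = S**2 / np.sum(S**2)                   # explained variance ratio
    return Xc @ Vt[:n_components].T, ratio[:n_components]

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 11))                # stand-in for the 11 features
proj, ratio = pca_project(X)
```

On the real standardized SDSS features, `ratio` would tell us how much structure the 2D scatter plot actually preserves.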
gdementen/larray
doc/source/tutorial/tutorial_combine_arrays.ipynb
gpl-3.0
from larray import * # load the 'demography_eurostat' dataset demography_eurostat = load_example_data('demography_eurostat') # load 'gender' and 'time' axes gender = demography_eurostat.gender time = demography_eurostat.time # load the 'population' array from the 'demography_eurostat' dataset population = demography_eurostat.population # show 'population' array population # load the 'population_benelux' array from the 'demography_eurostat' dataset population_benelux = demography_eurostat.population_benelux # show 'population_benelux' array population_benelux """ Explanation: Combining arrays Import the LArray library: End of explanation """ other_countries = zeros((Axis('country=Luxembourg,Netherlands'), gender, time), dtype=int) # insert new countries before 'France' population_new_countries = population.insert(other_countries, before='France') population_new_countries # insert new countries after 'France' population_new_countries = population.insert(other_countries, after='France') population_new_countries """ Explanation: The LArray library offers several methods and functions to combine arrays: insert: inserts an array in another array along an axis append: adds an array at the end of an axis. prepend: adds an array at the beginning of an axis. extend: extends an array along an axis. stack: combines several arrays along a new axis. Insert End of explanation """ # append data for 'Luxembourg' population_new = population.append('country', population_benelux['Luxembourg'], 'Luxembourg') population_new """ Explanation: See insert for more details and examples. 
Append Append one element to an axis of an array: End of explanation """ population_lux = Array([-1, 1], gender) population_lux population_new = population.append('country', population_lux, 'Luxembourg') population_new """ Explanation: The value being appended can have missing (or even extra) axes as long as common axes are compatible: End of explanation """ # prepend data for 'Luxembourg' population_new = population.prepend('country', population_benelux['Luxembourg'], 'Luxembourg') population_new """ Explanation: See append for more details and examples. Prepend Prepend one element to an axis of an array: End of explanation """ population_extended = population.extend('country', population_benelux[['Luxembourg', 'Netherlands']]) population_extended """ Explanation: See prepend for more details and examples. Extend Extend an array along an axis with another array with that axis (but other labels) End of explanation """ # imagine you have loaded data for each country in different arrays # (e.g. loaded from different Excel sheets) population_be = population['Belgium'] population_fr = population['France'] population_de = population['Germany'] print(population_be) print(population_fr) print(population_de) # create a new array with an extra axis 'country' by stacking the three arrays population_be/fr/de population_stacked = stack({'Belgium': population_be, 'France': population_fr, 'Germany': population_de}, 'country') population_stacked """ Explanation: See extend for more details and examples. Stack Stack several arrays together to create an entirely new dimension End of explanation """
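For readers without LArray at hand, the combine semantics above can be mimicked with plain numpy (this loses the axis labels, which is precisely what LArray adds on top): append/prepend/extend are concatenations along an existing axis, while stack creates a new axis.

```python
import numpy as np

# Three unlabeled 'country' slices of shape (gender=2, time=3),
# standing in for population_be / population_fr / population_de.
be = np.arange(6).reshape(2, 3)
fr = be + 100
de = be + 200

# stack: combine along a NEW leading 'country' axis
stacked = np.stack([be, fr, de], axis=0)             # shape (3, 2, 3)

# append / extend: concatenate along an EXISTING axis
lu = np.full((1, 2, 3), -1)                          # one more 'country' slice
extended = np.concatenate([stacked, lu], axis=0)     # shape (4, 2, 3)

# prepend: same idea, at the other end of the axis
prepended = np.concatenate([lu, stacked], axis=0)    # shape (4, 2, 3)
```

LArray's versions additionally align and label the axes, which is why the tutorial above can append an array whose axes are only partially specified.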
irsisyphus/machine-learning
4 Data Processing.ipynb
apache-2.0
import pandas as pd wine_data_remote = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data' wine_data_local = '../datasets/wine/wine.data' df_wine = pd.read_csv(wine_data_remote, header=None) df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline'] #print('Class labels', np.unique(df_wine['Class label'])) #df_wine.head() from sklearn import __version__ as skv from distutils.version import LooseVersion as CheckVersion if CheckVersion(skv) < '0.18': from sklearn.cross_validation import train_test_split else: from sklearn.model_selection import train_test_split X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values X_train, X_test, y_train, y_test = \ train_test_split(X, y, test_size=0.3, random_state=0) from sklearn.preprocessing import StandardScaler stdsc = StandardScaler() X_train_std = stdsc.fit_transform(X_train) X_test_std = stdsc.transform(X_test) """ Explanation: Assignment 4 - dimensionality reduction This assignment focuses on two different ways for dimensionality reduction: * feature selection * feature extraction This assignment has weighting $1.5$. Sequential feature selection (50 points) There is sample code in PML chapter 4 for sequential backward selection (SBS) and its application to a subsequent KNN classifier. Implement sequential forward selection (SFS), and compare it with sequential backward selection by plotting the accuracy versus the number of features. You can start with the sample code provided in the slides. You can extend the existing SBS class to handle both forward and backward selection, or implement a separate class for SFS. Plot and compare the two accuracy versus number-of-features plots for SFS and SBS. Use the wine dataset as follows. 
End of explanation """ from sklearn.base import clone from itertools import combinations import numpy as np from sklearn.cross_validation import train_test_split from sklearn.metrics import accuracy_score class SequentialSelection(): def __init__(self, estimator, k_features, scoring=accuracy_score, backward = True, test_size=0.25, random_state=1): self.scoring = scoring self.estimator = clone(estimator) self.k_features = k_features self.backward = backward self.test_size = test_size self.random_state = random_state def fit(self, X, y): X_train, X_test, y_train, y_test = \ train_test_split(X, y, test_size=self.test_size, random_state=self.random_state) dim = X_train.shape[1] all_indices = tuple(range(dim)) self.subsets_ = [] self.scores_ = [] if self.backward: self.indices_ = all_indices dims = range(dim, self.k_features-1, -1) else: # forward self.indices_ = [] dims = range(1, self.k_features+1, 1) for dim in dims: scores = [] subsets = [] if self.backward: p_set = [p for p in combinations(self.indices_, r=dim)] else: remaining_indices = set(all_indices).difference(self.indices_) p_set = [tuple(set(p).union(set(self.indices_))) for p in combinations(remaining_indices, r=1)] for p in p_set: score = self._calc_score(X_train, y_train, X_test, y_test, p) scores.append(score) subsets.append(p) best = np.argmax(scores) self.indices_ = subsets[best] self.subsets_.append(self.indices_) self.scores_.append(scores[best]) self.k_score_ = self.scores_[-1] return self def transform(self, X): return X[:, self.indices_] def _calc_score(self, X_train, y_train, X_test, y_test, indices): self.estimator.fit(X_train[:, indices], y_train) y_pred = self.estimator.predict(X_test[:, indices]) score = self.scoring(y_test, y_pred) return score """ Explanation: Answer Implement your sequential backward selection class here, either as a separate class or by extending the SBS class that can handle both forward and backward selection (via an input parameter to indicate the direction). 
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=2) # selecting features for backward in [True, False]: if backward: k_features = 1 else: k_features = X_train_std.shape[1] ss = SequentialSelection(knn, k_features=k_features, backward=backward) ss.fit(X_train_std, y_train) # plotting performance of feature subsets k_feat = [len(k) for k in ss.subsets_] plt.plot(k_feat, ss.scores_, marker='o') plt.ylim([0.7, 1.1]) plt.ylabel('Accuracy') plt.xlabel('Number of features') if backward: plt.title('backward') else: plt.title('forward') plt.grid() plt.tight_layout() # plt.savefig('./sbs.png', dpi=300) plt.show() """ Explanation: Apply your sequential forward/backward selection code to the KNN classifier with the wine data set, and plot the accuracy versus number-of-features curves for both. Describe the similarities and differences you can find, e.g. * do the two methods agree on the optimal number of features? * do the two methods have similar accuracy scores for each number of features? * etc. Answer SBS and SFS give similar plots for accuracy versus the feature size, they agree on the accuracy score for 5 to 9 features based on the parameters setting used in the code. SFS reaches a higher accuracy score than SBS with a smaller feature size. We could see from the figure that SFS has a higher accuracy of 1 with a featue size of 3 while SBS gives an accuracy of 0.96. End of explanation """ %matplotlib inline """ Explanation: PCA versus LDA (50 points) We have learned two different methods for feature extraction, PCA (unsupervised) and LDA (supervised). Under what circumstances would PCA and LDA produce very different results? Provide one example dataset in 2D, analyze it via PCA and LDA, and plot it with the PCA and LDA components. You can use code from the scikit-learn library. 
Answer One simple case is several very tall and narrow clusters placed side by side horizontally. PCA/LDA will favor the vertical/horizontal direction, respectively. To keep standardization from removing this effect, it is enough to use several classes, so that the total set is sufficiently isotropic while each individual class remains anisotropic. The difference can be explained as follows: PCA: (1) unsupervised (no class label information) (2) projects data onto the dimensions that maximize variance. LDA: (1) supervised (with class label information) (2) projects data onto dimensions that (a) maximize inter-class spread, (b) minimize intra-class spread End of explanation """ # visualize the data set import numpy as np import matplotlib.pyplot as plt def plot_dataset(X, y, xlabel="", ylabel=""): num_class = np.unique(y).size colors = ['red', 'blue', 'green', 'black'] markers = ['^', 'o', 's', 'd'] if num_class <= 1: plt.scatter(X[:, 0], X[:, 1]) pass else: for k in range(num_class): plt.scatter(X[y == k, 0], X[y == k, 1], color=colors[k], marker=markers[k], alpha=0.5) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.tight_layout() plt.show() from sklearn.datasets import make_blobs def blobs(num_samples_per_class, dimension, num_class, cluster_std, center_spacing): cluster_scale = num_class*center_spacing class_centers = np.zeros((num_class, 2)) for k in range(num_class): class_centers[k, 0] = center_spacing*(k-num_class*0.5) X, y = make_blobs(n_samples = num_samples_per_class*num_class, n_features = dimension, centers = class_centers, cluster_std=cluster_std) X[:, 1] *= cluster_scale return X, y def slices(num_samples_per_class, dimension, num_class, aspect_ratio=1.0): num_rows = num_class*num_samples_per_class num_cols = dimension y = np.zeros(num_rows) X = np.random.uniform(low=0, high=1, size=(num_rows, num_cols)) for k in range(num_class): row_lo = k*num_samples_per_class row_hi = (k+1)*num_samples_per_class y[row_lo:row_hi].fill(k) X[row_lo:row_hi, 0] = (X[row_lo:row_hi, 0]+k)*aspect_ratio/num_class y = y.astype(int) return X, y 
num_samples_per_class = 100 dimension = 2 cluster_std = 1 center_spacing = 5*cluster_std aspect_ratio = 0.8 num_class = 3 # You could try different datasets by using blobs or slices and adjusting the angle below #X, y = blobs(num_samples_per_class, dimension, num_class, cluster_std, center_spacing) X, y = slices(num_samples_per_class, dimension, num_class, aspect_ratio) print(X.shape) print(np.unique(y)) class RotationMatrix(): # angle in degrees def __init__(self, angle=0): theta = (angle/180.0)*np.pi self.matrix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]) def rot(self, X): return X.dot(self.matrix.transpose()) angle = 0 if angle != 0: rmat = RotationMatrix(angle) X = rmat.rot(X) from sklearn.preprocessing import StandardScaler if False: sc = StandardScaler() X = sc.fit_transform(X) """ Explanation: Write code to produce your own dataset in 2D. You are free to design relative characteristics like the number of classes and the number of samples for each class, as long as your dataset can be analyzed via PCA and LDA. End of explanation """ plot_dataset(X, y) """ Explanation: Plot your data set, with different classes in different marker colors and/or shapes. You can write your own plot code or use existing library plot code. End of explanation """ from sklearn.decomposition import PCA pca = PCA() X_pca = pca.fit_transform(X) plot_dataset(X_pca, y, xlabel='PC 1', ylabel='PC 2') from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA lda = LDA(n_components=dimension) X_lda = lda.fit_transform(X, y) plot_dataset(X_lda, y, xlabel = "LD 1", ylabel = "LD 2") """ Explanation: Apply your dataset through PCA and LDA, and plot the projected data using the same plot code. Explain the differences you notice, and how you managed to construct your dataset to achieve such differences. You can use the PCA and LDA code from the scikit-learn library. End of explanation """
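The claim in the answer above -- that PCA picks the high-variance axis while LDA picks the class-separating axis -- can also be checked numerically with a plain-numpy sketch on a two-class variant of the "tall narrow clusters" construction (an illustrative construction, not part of the assignment code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes offset along x, each stretched strongly along y.
X0 = rng.normal(loc=[0.0, 0.0], scale=[0.1, 5.0], size=(500, 2))
X1 = rng.normal(loc=[1.0, 0.0], scale=[0.1, 5.0], size=(500, 2))
X_all = np.vstack([X0, X1])

# PCA direction: top eigenvector of the total covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X_all, rowvar=False))
pca_dir = eigvecs[:, np.argmax(eigvals)]

# Two-class LDA direction: S_w^{-1} (mu1 - mu0).
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
lda_dir = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
lda_dir /= np.linalg.norm(lda_dir)

# pca_dir ends up (nearly) vertical, lda_dir (nearly) horizontal.
```

The two directions are almost orthogonal on this data, which is exactly the regime where PCA and LDA give very different projections.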
probml/pyprobml
notebooks/misc/linreg_hierarchical_pymc3.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import pymc3 as pm import pandas as pd url = "https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/radon.csv?raw=true" data = pd.read_csv(url) county_names = data.county.unique() county_idx = data["county_code"].values !pip install arviz import arviz """ Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/linreg_hierarchical_pymc3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Hierarchical Bayesian Linear Regression in PyMC3 The text and code for this notebook are taken directly from this blog post by Thomas Wiecki and Danne Elbers. Original notebook. Gelman et al.'s (2007) radon dataset is a classic for hierarchical modeling. In this dataset the amount of the radioactive gas radon has been measured among different households in all counties of several states. Radon gas is known to be the leading cause of lung cancer in non-smokers. It is believed to enter the house through the basement. Moreover, its concentration is thought to differ regionally due to different types of soil. Here we'll investigate this difference and try to make predictions of radon levels in different counties and where in the house radon was measured. In this example we'll look at Minnesota, a state that contains 85 counties in which different measurements are taken, ranging from 2 to 80 measurements per county. 
First, we'll load the data: End of explanation """ data[["county", "log_radon", "floor"]].head() """ Explanation: The relevant part of the data we will model looks as follows: End of explanation """ # takes about 45 minutes indiv_traces = {} for county_name in county_names: # Select subset of data belonging to county c_data = data.loc[data.county == county_name] c_data = c_data.reset_index(drop=True) c_log_radon = c_data.log_radon c_floor_measure = c_data.floor.values with pm.Model() as individual_model: # Intercept prior a = pm.Normal("alpha", mu=0, sigma=1) # Slope prior b = pm.Normal("beta", mu=0, sigma=1) # Model error prior eps = pm.HalfCauchy("eps", beta=1) # Linear model radon_est = a + b * c_floor_measure # Data likelihood y_like = pm.Normal("y_like", mu=radon_est, sigma=eps, observed=c_log_radon) # Inference button (TM)! trace = pm.sample(progressbar=False) indiv_traces[county_name] = trace """ Explanation: As you can see, we have multiple radon measurements (log-converted to be on the real line) in a county and whether the measurement has been taken in the basement (floor == 0) or on the first floor (floor == 1). Here we want to test the prediction that radon concentrations are higher in the basement. The Models Pooling of measurements Now you might say: "That's easy! I'll just pool all my data and estimate one big regression to assess the influence of measurement across all counties". In math-speak that model would be: $$radon_{i, c} = \alpha + \beta*\text{floor}_{i, c} + \epsilon$$ Where $i$ represents the measurement, $c$ the county, and floor indicates on which floor the measurement was made. If you need a refresher on Linear Regressions in PyMC3, check out my previous blog post. Critically, we are only estimating one intercept and one slope for all measurements over all counties. Separate regressions But what if we are interested in whether different counties actually have different relationships (slope) and different base-rates of radon (intercept)? 
Then you might say "OK then, I'll just estimate $n$ (number of counties) different regresseions -- one for each county". In math-speak that model would be: $$radon_{i, c} = \alpha_{c} + \beta_{c}*\text{floor}_{i, c} + \epsilon_c$$ Note that we added the subindex $c$ so we are estimating $n$ different $\alpha$s and $\beta$s -- one for each county. This is the extreme opposite model, where above we assumed all counties are exactly the same, here we are saying that they share no similarities whatsoever which ultimately is also unsatisifying. Hierarchical Regression: The best of both worlds Fortunately there is a middle ground to both of these extreme views. Specifically, we may assume that while $\alpha$s and $\beta$s are different for each county, the coefficients all come from a common group distribution: $$\alpha_{c} \sim \mathcal{N}(\mu_{\alpha}, \sigma_{\alpha}^2)$$ $$\beta_{c} \sim \mathcal{N}(\mu_{\beta}, \sigma_{\beta}^2)$$ We thus assume the intercepts $\alpha$ and slopes $\beta$ to come from a normal distribution centered around their respective group mean $\mu$ with a certain standard deviation $\sigma^2$, the values (or rather posteriors) of which we also estimate. That's why this is called multilevel or hierarchical modeling. How do we estimate such a complex model with all these parameters you might ask? Well, that's the beauty of Probabilistic Programming -- we just formulate the model we want and press our Inference Button(TM). Note that the above is not a complete Bayesian model specification as we haven't defined priors or hyperpriors (i.e. priors for the group distribution, $\mu$ and $\sigma$). These will be used in the model implementation below but only distract here. Probabilistic Programming Individual/non-hierarchical model To really highlight the effect of the hierarchical linear regression we'll first estimate the non-hierarchical Bayesian model from above (separate regressions). For each county a new estimate of the parameters is initiated. 
As we have no prior information on what the intercept or regressions could be, we are placing a Normal distribution centered around 0 with a wide standard deviation. We'll assume the measurements are normally distributed with noise $\epsilon$ on which we place a Half-Cauchy distribution. End of explanation """ with pm.Model() as hierarchical_model: # Hyperpriors mu_a = pm.Normal("mu_alpha", mu=0.0, sigma=1) sigma_a = pm.HalfCauchy("sigma_alpha", beta=1) mu_b = pm.Normal("mu_beta", mu=0.0, sigma=1) sigma_b = pm.HalfCauchy("sigma_beta", beta=1) # Intercept for each county, distributed around group mean mu_a a = pm.Normal("alpha", mu=mu_a, sigma=sigma_a, shape=len(data.county.unique())) # Slope for each county, distributed around group mean mu_b b = pm.Normal("beta", mu=mu_b, sigma=sigma_b, shape=len(data.county.unique())) # Model error eps = pm.HalfCauchy("eps", beta=1) # Expected value radon_est = a[county_idx] + b[county_idx] * data.floor.values # Data likelihood y_like = pm.Normal("y_like", mu=radon_est, sigma=eps, observed=data.log_radon) with hierarchical_model: hierarchical_trace = pm.sample() pm.traceplot(hierarchical_trace); pm.traceplot(hierarchical_trace, var_names=["alpha", "beta"]) """ Explanation: Hierarchical Model Instead of initiating the parameters separately, the hierarchical model initiates group parameters that consider the counties not as completely different but as having an underlying similarity. These distributions are subsequently used to influence the distribution of each county's $\alpha$ and $\beta$. 
End of explanation """ selection = ["CASS", "CROW WING", "FREEBORN"] fig, axis = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True) axis = axis.ravel() for i, c in enumerate(selection): c_data = data.loc[data.county == c] c_data = c_data.reset_index(drop=True) z = list(c_data["county_code"])[0] xvals = np.linspace(-0.2, 1.2) for a_val, b_val in zip(indiv_traces[c]["alpha"][::10], indiv_traces[c]["beta"][::10]): axis[i].plot(xvals, a_val + b_val * xvals, "b", alpha=0.05) axis[i].plot( xvals, indiv_traces[c]["alpha"][::10].mean() + indiv_traces[c]["beta"][::10].mean() * xvals, "b", alpha=1, lw=2.0, label="individual", ) for a_val, b_val in zip(hierarchical_trace["alpha"][::10][z], hierarchical_trace["beta"][::10][z]): axis[i].plot(xvals, a_val + b_val * xvals, "g", alpha=0.05) axis[i].plot( xvals, hierarchical_trace["alpha"][::10][z].mean() + hierarchical_trace["beta"][::10][z].mean() * xvals, "g", alpha=1, lw=2.0, label="hierarchical", ) axis[i].scatter( c_data.floor + np.random.randn(len(c_data)) * 0.01, c_data.log_radon, alpha=1, color="k", marker=".", s=80, label="original data", ) axis[i].set_xticks([0, 1]) axis[i].set_xticklabels(["basement", "first floor"]) axis[i].set_ylim(-1, 4) axis[i].set_title(c) if not i % 3: axis[i].legend() axis[i].set_ylabel("log radon level") """ Explanation: The marginal posteriors in the left column are highly informative. mu_a tells us the group mean (log) radon levels. mu_b tells us that the slope is significantly negative (no mass above zero), meaning that radon concentrations are higher in the basement than first floor. We can also see by looking at the marginals for a that there is quite some differences in radon levels between counties; the different widths are related to how much measurements we have per county, the more, the higher our confidence in that parameter estimate. 
<div class="alert alert-warning"> After writing this blog post I found out that the chains here (which look worse after I just re-ran them) are not properly converged, you can see that best for `sigma_beta` but also the warnings about "diverging samples" (which are also new in PyMC3). If you want to learn more about the problem and its solution, see my more recent blog post <a href='https://twiecki.github.io/blog/2017/02/08/bayesian-hierchical-non-centered/'>"Why hierarchical models are awesome, tricky, and Bayesian"</a>. </div> Posterior Predictive Check The Root Mean Square Deviation To find out which of the models works better we can calculate the Root Mean Square Deviaton (RMSD). This posterior predictive check revolves around recreating the data based on the parameters found at different moments in the chain. The recreated or predicted values are subsequently compared to the real data points, the model that predicts data points closer to the original data is considered the better one. Thus, the lower the RMSD the better. When computing the RMSD (code not shown) we get the following result: individual/non-hierarchical model: 0.13 hierarchical model: 0.08 As can be seen above the hierarchical model performs a lot better than the non-hierarchical model in predicting the radon values. Following this, we'll plot some examples of county's showing the true radon values, the hierarchial predictions and the non-hierarchical predictions. End of explanation """ hier_a = hierarchical_trace["alpha"].mean(axis=0) hier_b = hierarchical_trace["beta"].mean(axis=0) indv_a = [indiv_traces[c]["alpha"].mean() for c in county_names] indv_b = [indiv_traces[c]["beta"].mean() for c in county_names] fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot( 111, xlabel="Intercept", ylabel="Floor Measure", title="Hierarchical vs. 
Non-hierarchical Bayes", xlim=(0.25, 2), ylim=(-2, 1.5), ) ax.scatter(indv_a, indv_b, s=26, alpha=0.4, label="non-hierarchical") ax.scatter(hier_a, hier_b, c="red", s=26, alpha=0.4, label="hierarchical") for i in range(len(indv_b)): ax.arrow( indv_a[i], indv_b[i], hier_a[i] - indv_a[i], hier_b[i] - indv_b[i], fc="k", ec="k", length_includes_head=True, alpha=0.4, head_width=0.02, ) ax.legend(); """ Explanation: In the above plot we have the data points in black of three selected counties. The thick lines represent the mean estimate of the regression line of the individual (blue) and hierarchical model (in green). The thinner lines are regression lines of individual samples from the posterior and give us a sense of how variable the estimates are. When looking at the county 'CASS' we see that the non-hierarchical estimation has huge uncertainty about the radon levels of first floor measurements -- that's because we don't have any measurements in this county. The hierarchical model, however, is able to apply what it learned about the relationship between floor and radon-levels from other counties to CASS and make sensible predictions even in the absence of measurements. We can also see how the hierarchical model produces more robust estimates in 'CROW WING' and 'FREEBORN'. In this regime of few data points the non-hierarchical model reacts more strongly to individual data points because that's all it has to go on. Having the group-distribution constrain the coefficients we get meaningful estimates in all cases as we apply what we learn from the group to the individuals and vice-versa. Shrinkage Shrinkage describes the process by which our estimates are "pulled" towards the group-mean as a result of the common group distribution -- county-coefficients very far away from the group mean have very low probability under the normality assumption. 
In the non-hierarchical model every county is allowed to differ completely from the others by just using each county's data, resulting in a model more prone to outliers (as shown above). End of explanation """
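The RMSD comparison above says "code not shown"; the core of that posterior predictive check can be sketched as follows (a minimal stand-in -- the full version would generate predictions of `log_radon` from each posterior sample of the traces):

```python
import numpy as np

def rmsd(y_true, y_pred):
    # Root mean square deviation between observed and predicted values
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy illustration: predictions closer to the data give a lower RMSD,
# mirroring the 0.08 (hierarchical) vs. 0.13 (individual) result above.
y_obs = np.array([1.0, 2.0, 3.0])
pred_good = np.array([1.1, 1.9, 3.0])
pred_bad = np.array([1.5, 2.5, 2.0])
print(rmsd(y_obs, pred_good), rmsd(y_obs, pred_bad))
```

In the real check, `y_pred` would be the posterior predictive draws of `y_like` averaged per observation, computed once per model.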
keras-team/keras-io
examples/vision/ipynb/vit_small_ds.ipynb
apache-2.0
import math import numpy as np import tensorflow as tf from tensorflow import keras import tensorflow_addons as tfa import matplotlib.pyplot as plt from tensorflow.keras import layers # Setting seed for reproducibility SEED = 42 keras.utils.set_random_seed(SEED) """ Explanation: Train a Vision Transformer on small datasets Author: Aritra Roy Gosthipaty<br> Date created: 2022/01/07<br> Last modified: 2022/01/10<br> Description: Training a ViT from scratch on smaller datasets with shifted patch tokenization and locality self-attention. Introduction In the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models. The self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets. In the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs. The main ideas are: Shifted Patch Tokenization Locality Self Attention This example implements the ideas of the paper. A large part of this example is inspired by Image classification with Vision Transformer. 
Note: This example requires TensorFlow 2.6 or higher, as well as TensorFlow Addons, which can be installed using the following command: python pip install -qq -U tensorflow-addons Setup End of explanation """ NUM_CLASSES = 100 INPUT_SHAPE = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}") print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}") """ Explanation: Prepare the data End of explanation """ # DATA BUFFER_SIZE = 512 BATCH_SIZE = 256 # AUGMENTATION IMAGE_SIZE = 72 PATCH_SIZE = 6 NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2 # OPTIMIZER LEARNING_RATE = 0.001 WEIGHT_DECAY = 0.0001 # TRAINING EPOCHS = 50 # ARCHITECTURE LAYER_NORM_EPS = 1e-6 TRANSFORMER_LAYERS = 8 PROJECTION_DIM = 64 NUM_HEADS = 4 TRANSFORMER_UNITS = [ PROJECTION_DIM * 2, PROJECTION_DIM, ] MLP_HEAD_UNITS = [2048, 1024] """ Explanation: Configure the hyperparameters The hyperparameters are different from the paper. Feel free to tune the hyperparameters yourself. End of explanation """ data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(IMAGE_SIZE, IMAGE_SIZE), layers.RandomFlip("horizontal"), layers.RandomRotation(factor=0.02), layers.RandomZoom(height_factor=0.2, width_factor=0.2), ], name="data_augmentation", ) # Compute the mean and the variance of the training data for normalization. data_augmentation.layers[0].adapt(x_train) """ Explanation: Use data augmentation A snippet from the paper: "According to DeiT, various techniques are required to effectively train ViTs. Thus, we applied data augmentations such as CutMix, Mixup, Auto Augment, Repeated Augment to all models." In this example, we will focus solely on the novelty of the approach and not on reproducing the paper results. For this reason, we don't use the mentioned data augmentation schemes. Please feel free to add to or remove from the augmentation pipeline. 
End of explanation """ class ShiftedPatchTokenization(layers.Layer): def __init__( self, image_size=IMAGE_SIZE, patch_size=PATCH_SIZE, num_patches=NUM_PATCHES, projection_dim=PROJECTION_DIM, vanilla=False, **kwargs, ): super().__init__(**kwargs) self.vanilla = vanilla # Flag to swtich to vanilla patch extractor self.image_size = image_size self.patch_size = patch_size self.half_patch = patch_size // 2 self.flatten_patches = layers.Reshape((num_patches, -1)) self.projection = layers.Dense(units=projection_dim) self.layer_norm = layers.LayerNormalization(epsilon=LAYER_NORM_EPS) def crop_shift_pad(self, images, mode): # Build the diagonally shifted images if mode == "left-up": crop_height = self.half_patch crop_width = self.half_patch shift_height = 0 shift_width = 0 elif mode == "left-down": crop_height = 0 crop_width = self.half_patch shift_height = self.half_patch shift_width = 0 elif mode == "right-up": crop_height = self.half_patch crop_width = 0 shift_height = 0 shift_width = self.half_patch else: crop_height = 0 crop_width = 0 shift_height = self.half_patch shift_width = self.half_patch # Crop the shifted images and pad them crop = tf.image.crop_to_bounding_box( images, offset_height=crop_height, offset_width=crop_width, target_height=self.image_size - self.half_patch, target_width=self.image_size - self.half_patch, ) shift_pad = tf.image.pad_to_bounding_box( crop, offset_height=shift_height, offset_width=shift_width, target_height=self.image_size, target_width=self.image_size, ) return shift_pad def call(self, images): if not self.vanilla: # Concat the shifted images with the original image images = tf.concat( [ images, self.crop_shift_pad(images, mode="left-up"), self.crop_shift_pad(images, mode="left-down"), self.crop_shift_pad(images, mode="right-up"), self.crop_shift_pad(images, mode="right-down"), ], axis=-1, ) # Patchify the images and flatten it patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], 
strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding="VALID", ) flat_patches = self.flatten_patches(patches) if not self.vanilla: # Layer normalize the flat patches and linearly project them tokens = self.layer_norm(flat_patches) tokens = self.projection(tokens) else: # Linearly project the flat patches tokens = self.projection(flat_patches) return (tokens, patches) """ Explanation: Implement Shifted Patch Tokenization In a ViT pipeline, the input images are divided into patches that are then linearly projected into tokens. Shifted patch tokenization (STP) is introduced to combat the low receptive field of ViTs. The steps for Shifted Patch Tokenization are as follows: Start with an image. Shift the image in diagonal directions. Concat the diagonally shifted images with the original image. Extract patches of the concatenated images. Flatten the spatial dimension of all patches. Layer normalize the flattened patches and then project them. | | | :--: | | Shifted Patch Tokenization Source | End of explanation """ # Get a random image from the training dataset # and resize the image image = x_train[np.random.choice(range(x_train.shape[0]))] resized_image = tf.image.resize( tf.convert_to_tensor([image]), size=(IMAGE_SIZE, IMAGE_SIZE) ) # Vanilla patch maker: This takes an image and divides it into # patches as in the original ViT paper (token, patch) = ShiftedPatchTokenization(vanilla=True)(resized_image / 255.0) (token, patch) = (token[0], patch[0]) n = patch.shape[0] count = 1 plt.figure(figsize=(4, 4)) for row in range(n): for col in range(n): plt.subplot(n, n, count) count = count + 1 image = tf.reshape(patch[row][col], (PATCH_SIZE, PATCH_SIZE, 3)) plt.imshow(image) plt.axis("off") plt.show() # Shifted Patch Tokenization: This layer takes the image, shifts it # diagonally and then extracts patches from the concatenated images (token, patch) = ShiftedPatchTokenization(vanilla=False)(resized_image / 255.0) (token, patch) = (token[0], patch[0]) n = 
patch.shape[0] shifted_images = ["ORIGINAL", "LEFT-UP", "LEFT-DOWN", "RIGHT-UP", "RIGHT-DOWN"] for index, name in enumerate(shifted_images): print(name) count = 1 plt.figure(figsize=(4, 4)) for row in range(n): for col in range(n): plt.subplot(n, n, count) count = count + 1 image = tf.reshape(patch[row][col], (PATCH_SIZE, PATCH_SIZE, 5 * 3)) plt.imshow(image[..., 3 * index : 3 * index + 3]) plt.axis("off") plt.show() """ Explanation: Visualize the patches End of explanation """ class PatchEncoder(layers.Layer): def __init__( self, num_patches=NUM_PATCHES, projection_dim=PROJECTION_DIM, **kwargs ): super().__init__(**kwargs) self.num_patches = num_patches self.position_embedding = layers.Embedding( input_dim=num_patches, output_dim=projection_dim ) self.positions = tf.range(start=0, limit=self.num_patches, delta=1) def call(self, encoded_patches): encoded_positions = self.position_embedding(self.positions) encoded_patches = encoded_patches + encoded_positions return encoded_patches """ Explanation: Implement the patch encoding layer This layer accepts projected patches and then adds positional information to them. End of explanation """ class MultiHeadAttentionLSA(tf.keras.layers.MultiHeadAttention): def __init__(self, **kwargs): super().__init__(**kwargs) # The trainable temperature term. The initial value is # the square root of the key dimension. 
self.tau = tf.Variable(math.sqrt(float(self._key_dim)), trainable=True) def _compute_attention(self, query, key, value, attention_mask=None, training=None): query = tf.multiply(query, 1.0 / self.tau) attention_scores = tf.einsum(self._dot_product_equation, key, query) attention_scores = self._masked_softmax(attention_scores, attention_mask) attention_scores_dropout = self._dropout_layer( attention_scores, training=training ) attention_output = tf.einsum( self._combine_equation, attention_scores_dropout, value ) return attention_output, attention_scores """ Explanation: Implement Locality Self Attention The regular attention equation is stated below. | | | :--: | | Source | The attention module takes a query, key, and value. First, we compute the similarity between the query and key via a dot product. Then, the result is scaled by the square root of the key dimension. The scaling prevents the softmax function from having an overly small gradient. Softmax is then applied to the scaled dot product to produce the attention weights. The value is then modulated via the attention weights. In self-attention, query, key and value come from the same input. The dot product would result in large self-token relations rather than inter-token relations. This also means that the softmax gives higher probabilities to self-token relations than the inter-token relations. To combat this, the authors propose masking the diagonal of the dot product. This way, we force the attention module to pay more attention to the inter-token relations. The scaling factor is a constant in the regular attention module. This acts like a temperature term that can modulate the softmax function. The authors suggest a learnable temperature term instead of a constant. | | | :--: | | Locality Self Attention Source | The above two pointers make the Locality Self Attention. We have subclassed the layers.MultiHeadAttention and implemented the trainable temperature. The attention mask is built at a later stage. 
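To see the two modifications in isolation, here is a small NumPy sketch (purely illustrative — the token count, dimension, and function name are made up; the layer above does the same thing with TensorFlow ops and a trainable tau):

```python
import numpy as np

def lsa_attention(x, tau):
    """Toy single-head locality self-attention where query = key = value = x."""
    scores = (x @ x.T) / tau                      # temperature-scaled dot product
    np.fill_diagonal(scores, -1e9)                # mask self-token relations
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights                   # attended values, attention map

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                  # 4 tokens of dimension 8
out, attn = lsa_attention(tokens, tau=np.sqrt(8.0))
# The diagonal (self-token) weights are forced to ~0 by the mask,
# so each row of attn distributes all its probability over the other tokens.
print(attn.round(3))
```

With tau trainable, the network can sharpen or flatten this softmax during training instead of being stuck with the fixed square-root scaling.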
End of explanation """ def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x # Build the diagonal attention mask diag_attn_mask = 1 - tf.eye(NUM_PATCHES) diag_attn_mask = tf.cast([diag_attn_mask], dtype=tf.int8) """ Explanation: Implement the MLP End of explanation """ def create_vit_classifier(vanilla=False): inputs = layers.Input(shape=INPUT_SHAPE) # Augment data. augmented = data_augmentation(inputs) # Create patches. (tokens, _) = ShiftedPatchTokenization(vanilla=vanilla)(augmented) # Encode patches. encoded_patches = PatchEncoder()(tokens) # Create multiple layers of the Transformer block. for _ in range(TRANSFORMER_LAYERS): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches) # Create a multi-head attention layer. if not vanilla: attention_output = MultiHeadAttentionLSA( num_heads=NUM_HEADS, key_dim=PROJECTION_DIM, dropout=0.1 )(x1, x1, attention_mask=diag_attn_mask) else: attention_output = layers.MultiHeadAttention( num_heads=NUM_HEADS, key_dim=PROJECTION_DIM, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, encoded_patches]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # MLP. x3 = mlp(x3, hidden_units=TRANSFORMER_UNITS, dropout_rate=0.1) # Skip connection 2. encoded_patches = layers.Add()([x3, x2]) # Create a [batch_size, projection_dim] tensor. representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches) representation = layers.Flatten()(representation) representation = layers.Dropout(0.5)(representation) # Add MLP. features = mlp(representation, hidden_units=MLP_HEAD_UNITS, dropout_rate=0.5) # Classify outputs. logits = layers.Dense(NUM_CLASSES)(features) # Create the Keras model. 
model = keras.Model(inputs=inputs, outputs=logits) return model """ Explanation: Build the ViT End of explanation """ # Some code is taken from: # https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2. class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps ): super(WarmUpCosine, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.pi = tf.constant(np.pi) def __call__(self, step): if self.total_steps < self.warmup_steps: raise ValueError("total_steps must be greater than or equal to warmup_steps.") cos_annealed_lr = tf.cos( self.pi * (tf.cast(step, tf.float32) - self.warmup_steps) / float(self.total_steps - self.warmup_steps) ) learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr) if self.warmup_steps > 0: if self.learning_rate_base < self.warmup_learning_rate: raise ValueError( "learning_rate_base must be greater than or equal to " "warmup_learning_rate."
) slope = ( self.learning_rate_base - self.warmup_learning_rate ) / self.warmup_steps warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate learning_rate = tf.where( step < self.warmup_steps, warmup_rate, learning_rate ) return tf.where( step > self.total_steps, 0.0, learning_rate, name="learning_rate" ) def run_experiment(model): total_steps = int((len(x_train) / BATCH_SIZE) * EPOCHS) warmup_epoch_percentage = 0.10 warmup_steps = int(total_steps * warmup_epoch_percentage) scheduled_lrs = WarmUpCosine( learning_rate_base=LEARNING_RATE, total_steps=total_steps, warmup_learning_rate=0.0, warmup_steps=warmup_steps, ) # Pass the schedule (not the constant LEARNING_RATE) to the optimizer; # otherwise scheduled_lrs is built but never used. optimizer = tfa.optimizers.AdamW( learning_rate=scheduled_lrs, weight_decay=WEIGHT_DECAY ) model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name="accuracy"), keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"), ], ) history = model.fit( x=x_train, y=y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_split=0.1, ) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test, batch_size=BATCH_SIZE) print(f"Test accuracy: {round(accuracy * 100, 2)}%") print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%") return history # Run experiments with the vanilla ViT vit = create_vit_classifier(vanilla=True) history = run_experiment(vit) # Run experiments with the Shifted Patch Tokenization and # Locality Self Attention modified ViT vit_sl = create_vit_classifier(vanilla=False) history = run_experiment(vit_sl) """ Explanation: Compile, train, and evaluate the model End of explanation """
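As a quick sanity check on the schedule logic, the same warmup-then-cosine rule can be written in plain NumPy and inspected without building a model (the step counts and base rate below are made up, not the notebook's constants):

```python
import numpy as np

def warmup_cosine(step, base_lr, total_steps, warmup_lr, warmup_steps):
    # Linear warmup from warmup_lr up to base_lr...
    if step < warmup_steps:
        slope = (base_lr - warmup_lr) / warmup_steps
        return slope * step + warmup_lr
    # ...zero after training ends...
    if step > total_steps:
        return 0.0
    # ...and cosine annealing from base_lr down to 0 in between.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + np.cos(np.pi * progress))

lrs = [warmup_cosine(s, 0.001, 1000, 0.0, 100) for s in range(1001)]
# Peaks at base_lr exactly when warmup ends, then decays to 0 at total_steps.
print(lrs[0], max(lrs), lrs[-1])
```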
AtmaMani/pyChakras
udemy_ml_bootcamp/Python-for-Data-Visualization/Seaborn/Distribution Plots.ipynb
mit
import seaborn as sns %matplotlib inline """ Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> Distribution Plots Let's discuss some plots that allow us to visualize the distribution of a data set. These plots are: distplot jointplot pairplot rugplot kdeplot Imports End of explanation """ tips = sns.load_dataset('tips') tips.head() """ Explanation: Data Seaborn comes with built-in data sets! End of explanation """ sns.distplot(tips['total_bill']) # Safe to ignore warnings """ Explanation: distplot The distplot shows the distribution of a univariate set of observations. End of explanation """ sns.distplot(tips['total_bill'],kde=False,bins=30) """ Explanation: To remove the kde layer and just have the histogram, use: End of explanation """ sns.jointplot(x='total_bill',y='tip',data=tips,kind='scatter') sns.jointplot(x='total_bill',y='tip',data=tips,kind='hex') sns.jointplot(x='total_bill',y='tip',data=tips,kind='reg') """ Explanation: jointplot jointplot() lets you match up two distplots for bivariate data, with your choice of the kind parameter to compare with: * “scatter” * “reg” * “resid” * “kde” * “hex” End of explanation """ sns.pairplot(tips) sns.pairplot(tips,hue='sex',palette='coolwarm') """ Explanation: pairplot pairplot will plot pairwise relationships across an entire dataframe (for the numerical columns) and supports a color hue argument (for categorical columns). End of explanation """ sns.rugplot(tips['total_bill']) """ Explanation: rugplot rugplots are a very simple concept: they just draw a dash mark for every point on a univariate distribution. They are the building block of a KDE plot: End of explanation """ # Don't worry about understanding this code!
# It's just for the diagram below import numpy as np import matplotlib.pyplot as plt from scipy import stats #Create dataset dataset = np.random.randn(25) # Create another rugplot sns.rugplot(dataset); # Set up the x-axis for the plot x_min = dataset.min() - 2 x_max = dataset.max() + 2 # 100 equally spaced points from x_min to x_max x_axis = np.linspace(x_min,x_max,100) # Set up the bandwidth, for info on this: url = 'http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth' bandwidth = ((4*dataset.std()**5)/(3*len(dataset)))**.2 # Create an empty kernel list kernel_list = [] # Plot each basis function for data_point in dataset: # Create a kernel for each point and append to list kernel = stats.norm(data_point,bandwidth).pdf(x_axis) kernel_list.append(kernel) #Scale for plotting kernel = kernel / kernel.max() kernel = kernel * .4 plt.plot(x_axis,kernel,color = 'grey',alpha=0.5) plt.ylim(0,1) # To get the kde plot we can sum these basis functions. # Plot the sum of the basis function sum_of_kde = np.sum(kernel_list,axis=0) # Plot figure fig = plt.plot(x_axis,sum_of_kde,color='indianred') # Add the initial rugplot sns.rugplot(dataset,c = 'indianred') # Get rid of y-tick marks plt.yticks([]) # Set title plt.suptitle("Sum of the Basis Functions") """ Explanation: kdeplot kdeplots are Kernel Density Estimation plots. These KDE plots replace every single observation with a Gaussian (Normal) distribution centered around that value. For example: End of explanation """ sns.kdeplot(tips['total_bill']) sns.rugplot(tips['total_bill']) sns.kdeplot(tips['tip']) sns.rugplot(tips['tip']) """ Explanation: So with our tips dataset: End of explanation """
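The basis-function loop above can be condensed: a KDE is just the average of Gaussian bumps centered on the data points. A NumPy-only sketch using the same rule-of-thumb bandwidth formula as the demo (synthetic normal data, not the tips dataset):

```python
import numpy as np

def kde(x_axis, data):
    # Same rule-of-thumb bandwidth as the demo code above
    bandwidth = ((4 * data.std() ** 5) / (3 * len(data))) ** 0.2
    # One Gaussian bump per data point, evaluated on the whole grid
    diffs = (x_axis[:, None] - data[None, :]) / bandwidth
    bumps = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return bumps.mean(axis=1)  # averaging keeps the density normalized

rng = np.random.default_rng(42)
data = rng.normal(size=200)
xs = np.linspace(-5, 5, 501)
density = kde(xs, data)

# Trapezoid-rule check that the density integrates to ~1
area = ((density[:-1] + density[1:]) / 2 * np.diff(xs)).sum()
print(area)  # close to 1
```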
DS-100/sp17-materials
sp17/hw/hw2/hw2.ipynb
gpl-3.0
import math import numpy as np import matplotlib %matplotlib inline import matplotlib.pyplot as plt !pip install -U okpy from client.api.notebook import Notebook ok = Notebook('hw2.ok') """ Explanation: Homework 2: Language in the 2016 Presidential Election Popular figures often have help managing their media presence. In the 2016 election, Twitter was an important communication medium for every major candidate. Many Twitter posts posted by the top two candidates were actually written by their aides. You might wonder how this affected the content or language of the tweets. In this assignment, we'll look at some of the patterns in tweets by the top two candidates, Clinton and Trump. We'll start with Clinton. Along the way, you'll get a first look at Pandas. Pandas is a Python package that provides a DataFrame data structure similar to the datascience package's Table, which you might remember from Data 8. DataFrames are a bit harder to use than Tables, but they provide more advanced functionality and are a standard tool for data analysis in Python. Some of the analysis in this assignment is based on a post by David Robinson. Feel free to read the post, but do not copy from it! David's post is written in the R programming language, which is a favorite of many data analysts, especially academic statisticians. Once you're done with your analysis, you may find it interesting to see whether R is easier to use for this task. To start the assignment, run the cell below to set up some imports and the automatic tests. 
End of explanation """ ds_tweets_save_path = "BerkeleyData_recent_tweets.pkl" from pathlib import Path # Guarding against attempts to download the data multiple # times: if not Path(ds_tweets_save_path).is_file(): import json # Loading your keys from keys.json (which you should have filled # in in question 1): with open("keys.json") as f: keys = json.load(f) import tweepy # Authenticating: auth = tweepy.OAuthHandler(keys["consumer_key"], keys["consumer_secret"]) auth.set_access_token(keys["access_token"], keys["access_token_secret"]) api = tweepy.API(auth) # Getting as many recent tweets by @BerkeleyData as Twitter will let us have: example_tweets = list(tweepy.Cursor(api.user_timeline, id="BerkeleyData").items()) # Saving the tweets to a file as "pickled" objects: with open(ds_tweets_save_path, "wb") as f: import pickle pickle.dump(example_tweets, f) # Re-loading the results: with open(ds_tweets_save_path, "rb") as f: import pickle example_tweets = pickle.load(f) # Looking at one tweet object, which has type Status: example_tweets[0] # You can try something like this: # import pprint; pprint.pprint(vars(example_tweets[0])) # ...to get a more easily-readable view. """ Explanation: Getting the dataset Since we'll be looking at Twitter data, we need to download the data from Twitter! Twitter provides an API for downloading tweet data in large batches. The tweepy package makes it fairly easy to use. Question 0 Install tweepy, if you don't already have it. (Be sure to activate your Conda environment for the class first. Then run pip install tweepy.) There are instructions on using tweepy here, but we will give you example code. Twitter requires you to have authentication keys to access their API. To get your keys, you'll have to sign up as a Twitter developer. Question 1 Follow these instructions to get your keys: Create a Twitter account. You can use an existing account if you have one. Under account settings, add your phone number to the account. 
Create a Twitter developer account. Attach it to your Twitter account. Once you're logged into your developer account, create an application for this assignment. You can call it whatever you want, and you can write any URL when it asks for a web site. On the page for that application, find your Consumer Key and Consumer Secret. On the same page, create an Access Token. Record the resulting Access Token and Access Token Secret. Edit the file keys.json and replace the placeholders with your keys. Don't turn in that file. I AM AN IMPORTANT NOTE. DO NOT SKIP ME. If someone has your authentication keys, they can access your Twitter account and post as you! So don't give them to anyone, and don't write them down in this notebook. The usual way to store sensitive information like this is to put it in a separate file and read it programmatically. That way, you can share the rest of your code without sharing your keys. That's why we're asking you to put your keys in keys.json for this assignment. I AM A SECOND IMPORTANT NOTE. Twitter limits developers to a certain rate of requests for data. If you make too many requests in a short period of time, you'll have to wait awhile (around 15 minutes) before you can make more. So carefully follow the code examples you see and don't rerun cells without thinking. Instead, always save the data you've collected to a file. We've provided templates to help you do that. In the example below, we have loaded some tweets by @BerkeleyData. Run it, inspect the output, and read the code. End of explanation """ def load_keys(path): """Loads your Twitter authentication keys from a file on disk. Args: path (str): The path to your key file. 
The file should be in JSON format and look like this (but filled in): { "consumer_key": "<your Consumer Key here>", "consumer_secret": "<your Consumer Secret here>", "access_token": "<your Access Token here>", "access_token_secret": "<your Access Token Secret here>" } Returns: dict: A dictionary mapping key names (like "consumer_key") to key values.""" ... def download_recent_tweets_by_user(user_account_name, keys): """Downloads tweets by one Twitter user. Args: user_account_name (str): The name of the Twitter account whose tweets will be downloaded. keys (dict): A Python dictionary with Twitter authentication keys (strings), like this (but filled in): { "consumer_key": "<your Consumer Key here>", "consumer_secret": "<your Consumer Secret here>", "access_token": "<your Access Token here>", "access_token_secret": "<your Access Token Secret here>" } Returns: list: A list of Status objects, each representing one tweet.""" ... def save_tweets(tweets, path): """Saves a list of tweets to a file in the local filesystem. This function makes no guarantee about the format of the saved tweets, **except** that calling load_tweets(path) after save_tweets(tweets, path) will produce the same list of tweets and that only the file at the given path is used to store the tweets. (That means you can implement this function however you want, as long as saving and loading works!) Args: tweets (list): A list of tweet objects (of type Status) to be saved. path (str): The place where the tweets will be saved. Returns: None""" ... def load_tweets(path): """Loads tweets that have previously been saved. Calling load_tweets(path) after save_tweets(tweets, path) will produce the same list of tweets. Args: path (str): The place where the tweets were saved. Returns: list: A list of Status objects, each representing one tweet.""" ... # When you are done, run this cell to load @HillaryClinton's tweets. # Note the function get_tweets_with_cache. You may find it useful # later.
def get_tweets_with_cache(user_account_name, keys_path): """Get recent tweets from one user, loading from a disk cache if available. The first time you call this function, it will download tweets by a user. Subsequent calls will not re-download the tweets; instead they'll load the tweets from a save file in your local filesystem. All this is done using the functions you defined in the previous cell. This has benefits and drawbacks that often appear when you cache data: +: Using this function will prevent extraneous usage of the Twitter API. +: You will get your data much faster after the first time it's called. -: If you really want to re-download the tweets (say, to get newer ones, or because you screwed up something in the previous cell and your tweets aren't what you wanted), you'll have to find the save file (which will look like <something>_recent_tweets.pkl) and delete it. Args: user_account_name (str): The Twitter handle of a user, without the @. keys_path (str): The path to a JSON keys file in your filesystem. """ save_path = "{}_recent_tweets.pkl".format(user_account_name) from pathlib import Path if not Path(save_path).is_file(): keys = load_keys(keys_path) tweets = download_recent_tweets_by_user(user_account_name, keys) save_tweets(tweets, save_path) return load_tweets(save_path) clinton_tweets = get_tweets_with_cache("HillaryClinton", "keys.json") # If everything is working properly, this should print out # a Status object (a single tweet). clinton_tweets should # contain around 3000 tweets. clinton_tweets[0] _ = ok.grade('q02') _ = ok.backup() """ Explanation: Question 2 Write code to download all the recent tweets by Hillary Clinton (@HillaryClinton). Follow our example code if you wish. Write your code in the form of four functions matching the documentation provided. (You may define additional functions as helpers.) Once you've written your functions, you can run the subsequent cell to download the tweets. 
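For the save/load pair, the pickle pattern from the @BerkeleyData example above is one reasonable choice. A generic round-trip sketch (the function names here are illustrative, not the required API):

```python
import pickle
from pathlib import Path

def save_objects(objects, path):
    # Serialize any picklable Python objects to disk
    with open(path, "wb") as f:
        pickle.dump(objects, f)

def load_objects(path):
    with open(path, "rb") as f:
        return pickle.load(f)

demo = [{"text": "hello", "id": 1}, {"text": "world", "id": 2}]
save_objects(demo, "demo.pkl")
restored = load_objects(demo_path := "demo.pkl")
print(restored == demo)  # True: the round trip preserves the objects
Path(demo_path).unlink()  # clean up the scratch file
```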
End of explanation """ def extract_text(tweet): ... def extract_time(tweet): ... def extract_source(tweet): ... _ = ok.grade('q03') _ = ok.backup() """ Explanation: Exploring the dataset Twitter gives us a lot of information about each tweet, not just its text. You can read the full documentation here. Look at one tweet to get a sense of the information we have available. Question 3 Which fields contain: 1. the actual text of a tweet, 2. the time when the tweet was posted, and 3. the source (device and app) from which the tweet was posted? To answer the question, write functions that extract each field from a tweet. (Each one should take a single Status object as its argument.) End of explanation """ import pandas as pd df = pd.DataFrame() """ Explanation: Question 4 Are there any other fields you think might be useful in identifying the true author of an @HillaryClinton tweet? (If you're reading the documentation, consider whether fields are actually present often enough in the data to be useful.) Write your answer here, replacing this text. Building a Pandas table JSON (and the Status object, which is just Tweepy's translation of the JSON produced by the Twitter API to a Python object) is nice for transmitting data, but it's not ideal for analysis. The data will be easier to work with if we put them in a table. To create an empty table in Pandas, write: End of explanation """ def make_dataframe(tweets): """Make a DataFrame from a list of tweets, with a few relevant fields. Args: tweets (list): A list of tweets, each one a Status object. Returns: DataFrame: A Pandas DataFrame containing one row for each element of tweets and one column for each relevant field.""" df = ... ... ... ... return df """ Explanation: (pd is the standard abbrevation for Pandas.) Now let's make a table with useful information in it. 
To add a column to a DataFrame called df, write: df['column_name'] = some_list_or_array (This page is a useful reference for many of the basic operations in Pandas. You don't need to read it now, but it might be helpful if you get stuck.) Question 5 Write a function called make_dataframe. It should take as its argument a list of tweets like clinton_tweets and return a Pandas DataFrame. The DataFrame should contain columns for all the fields in question 3 and any fields you listed in question 4. Use the field names as the names of the corresponding columns. End of explanation """ clinton_df = make_dataframe(clinton_tweets) # The next line causes Pandas to display all the characters # from each tweet when the table is printed, for more # convenient reading. Comment it out if you don't want it. pd.set_option('display.max_colwidth', 150) clinton_df.head() _ = ok.grade('q05') _ = ok.backup() """ Explanation: Now you can run the next line to make your DataFrame. End of explanation """ ... """ Explanation: Tweetsourcing Now that the preliminaries are done, we can do what we set out to do: Try to distinguish between Clinton's own tweets and her aides'. Question 6 Create a plot showing how many tweets came from each kind of source. For a real challenge, try using the Pandas documentation and Google to figure out how to do this. Otherwise, hints are provided. Hint: Start by grouping the data by source. df['source'].value_counts() will create an object called a Series (which is like a table that contains exactly 2 columns, where one column is called the index). You can create a version of that Series that's sorted by source (in this case, in alphabetical order) by calling sort_index() on it. Hint 2: To generate a bar plot from a Series s, call s.plot.barh(). You can also use matplotlib's plt.barh, but it's a little bit complicated to use. End of explanation """ # Do your analysis, then write your conclusions in a brief comment. ... # ... 
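The Pandas operations used in this part — adding a column by assignment, counting categories with value_counts, and selecting rows with a boolean mask — can be tried on a toy frame first (the rows below are made up, not tweet data):

```python
import pandas as pd

df = pd.DataFrame({"text": ["a", "b", "c", "d"]})
# Adding a column by assigning a list of the same length
df["source"] = ["TweetDeck", "Web Client", "TweetDeck", "TweetDeck"]

# Counting rows per category gives a Series indexed by category
counts = df["source"].value_counts().sort_index()
print(counts.to_dict())  # {'TweetDeck': 3, 'Web Client': 1}

# A boolean Series of the same length selects the matching rows
mask = df["source"] == "TweetDeck"
print(df[mask]["text"].tolist())  # ['a', 'c', 'd']
```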
""" Explanation: You should find that most tweets come from TweetDeck. Question 7 Filter clinton_df to examine some tweets from TweetDeck and a few from the next-most-used platform. From examining only a few tweets (say 10 from each category), can you tell whether Clinton's personal tweets are limited to one platform? Hint: If df is a DataFrame and filter_array is an array of booleans of the same length, then df[filter_array] is a new DataFrame containing only the rows in df corresponding to True values in filter_array. End of explanation """ def is_clinton(tweet): """Distinguishes between tweets by Clinton and tweets by her aides. Args: tweet (Status): One tweet. Returns: bool: True if the tweet is written by Clinton herself.""" ... ... """ Explanation: When in doubt, read... Check Hillary Clinton's Twitter page. It mentions an easy way to identify tweets by the candidate herself. All other tweets are by her aides. Question 8 Write a function called is_clinton that takes a tweet (in JSON) as its argument and returns True for personal tweets by Clinton and False for tweets by her aides. Use your function to create a column called is_personal in clinton_df. Hint: You might find the string method endswith helpful. End of explanation """ # This cell is filled in for you; just run it and examine the output. def pivot_count(df, vertical_column, horizontal_column): """Cross-classifies df on two columns.""" pivoted = pd.pivot_table(df[[vertical_column, horizontal_column]], index=[vertical_column], columns=[horizontal_column], aggfunc=len, fill_value=0) return pivoted.rename(columns={False: "False", True: "True"}) clinton_pivoted = pivot_count(clinton_df, 'source', 'is_personal') clinton_pivoted """ Explanation: Now we have identified Clinton's personal tweets. Let us return to our analysis of sources and see if there was any pattern we could have found. 
You may recall that Tables from Data 8 have a method called pivot, which is useful for cross-classifying a dataset on two categorical attrbiutes. DataFrames support a more complicated version of pivoting. The cell below pivots clinton_df for you. End of explanation """ ... """ Explanation: Do Clinton and her aides have different "signatures" of tweet sources? That is, for each tweet they send, does Clinton send tweets from each source with roughly the same frequency as her aides? It's a little hard to tell from the pivoted table alone. Question 9 Create a visualization to facilitate that comparison. Hint: df.plot.barh works for DataFrames, too. But think about what data you want to plot. End of explanation """ # Use this cell to perform your hypothesis test. # This function is provided for your convenience. It's okay # if you don't use it. def expand_counts(source_counts): """'Blow up' a list/array of counts of categories into an array of individuals matching the counts. This is the inverse of np.bincount (with some technical caveats: only if the order of the individuals doesn't matter and each category has at least one count). For example, we can generate a list of two individuals of type 0, four of type 1, zero of type 2, and one of type 3 as follows: >>> expand_counts([2, 4, 0, 1]) array([0, 0, 1, 1, 1, 1, 3])""" return np.repeat(np.arange(len(source_counts)), source_counts) ... """ Explanation: You should see that there are some differences, but they aren't large. Do we need to worry that the differences (or lack thereof) are just "due to chance"? Statistician Ani argues as follows: "The tweets we see are not a random sample from anything. We have simply gathered every tweet by @HillaryClinton from the last several months. It is therefore meaningless to compute, for example, a confidence interval for the rate at which Clinton used TweetDeck. We have calculated exactly that rate from the data we have." 
Statistician Belinda responds: "We are interested in whether Clinton and her aides behave differently in general with respect to Twitter client usage in a way that we could use to identify their tweets. It's plausible to imagine that the tweets we see are a random sample from a huge unobserved population of all the tweets Clinton and her aides might send. We must worry about error due to random chance when we draw conclusions about this population using only the data we have available." Question 10 What position would you take on this question? Choose a side and give one (brief) argument for it, or argue for some third position. Write your answer here, replacing this text. Question 11 Assume you are convinced by Belinda's argument. Perform a statistical test of the null hypothesis that the Clinton and aide tweets' sources are all independent samples from the same distribution (that is, that the differences we observe are "due to chance"). Briefly describe the test methodology and report your results. Hint: If you need a refresher, this section of the Data 8 textbook from Fall 2016 covered this kind of hypothesis test. Hint 2: Feel free to use datascience.Table to answer this question. However, it will be advantageous to learn how to do it with numpy alone. In our solution, we used some numpy functions you might not be aware of: np.append, np.random.permutation, np.bincount, and np.count_nonzero. We have provided the function expand_counts, which should help you solve a tricky problem that will arise. End of explanation """ probability_clinton = ... probability_clinton _ = ok.grade('q12') _ = ok.backup() """ Explanation: Write your answer here, replacing this text. Question 12 Suppose you sample a random @HillaryClinton tweet and find that it is from the Twitter Web Client. 
Your visualization in question 9 should show you that Clinton tweets from this source about twice as frequently as her aides do, so you might imagine it's reasonable to predict that the tweet is by Clinton. But what is the probability that the tweet is by Clinton? (You should find a relatively small number. Clinton's aides tweet much more than she does. So even though there is a difference in their tweet source usage, it would be difficult to classify tweets this way.) Hint: Bayes' rule is covered in this section of the Data 8 textbook. End of explanation """ trump_tweets = ... trump_df = ... trump_df.head() """ Explanation: Another candidate Our results so far aren't Earth-shattering. Clinton uses different Twitter clients at slightly different rates than her aides. Now that we've categorized the tweets, we could of course investigate their contents. A manual analysis (also known as "reading") might be interesting, but it is beyond the scope of this course. And we'll have to wait a few more weeks before we can use a computer to help with such an analysis. Instead, let's repeat our analysis for Donald Trump. Question 13 Download the tweet data for Trump (@realDonaldTrump), and repeat the steps through question 6 to create a table called trump_df. End of explanation """ ... """ Explanation: Question 14 Make a bar chart of the sources of Trump tweets. End of explanation """ def is_trump_style_retweet(tweet_text): """Returns True if tweet_text looks like a Trump-style retweet.""" ... def is_aide_style_retweet(tweet_text): """Returns True if tweet_text looks like an aide-style retweet.""" ... def tweet_type(tweet_text): """Returns "Trump retweet", "Aide retweet", or "Not a retweet" as appropriate.""" ... trump_df['tweet_type'] = ... trump_df _ = ok.grade('q15') _ = ok.backup() """ Explanation: You should find two major sources of tweets. 
It is reported (for example, in this Gawker article) that Trump himself uses an Android phone (a Samsung Galaxy), while his aides use iPhones. But Trump has not confirmed this. Also, he has reportedly switched phones since his inauguration! How might we verify whether this is a way to identify his tweets? A retweet is a tweet that replies to (or simply repeats) a tweet by another user. Twitter provides several mechanisms for this, as explained in this article. However, Trump has an unusual way of retweeting: He simply adds the original sender's name to the original message, puts everything in quotes, and then adds his own comments at the end. For example, this is a tweet by user @melissa7889: @realDonaldTrump @JRACKER33 you should run for president! Here is Trump's retweet of this, from 2013: "@melissa7889: @realDonaldTrump @JRACKER33 you should run for president!" Thanks,very nice! Since 2015, the usual way of retweeting this message, and the method used by Trump's aides (but not Trump himself), would have been: Thanks,very nice! RT @melissa7889: @realDonaldTrump @JRACKER33 you should run for president! Question 15 Write a function to identify Trump-style retweets, and another function to identify the aide-style retweets. Then, use them to create a function called tweet_type that takes a tweet as its argument and returns values "Trump retweet", "Aide retweet", and "Not a retweet" as appropriate. Use your function to add a 'tweet_type' column to trump_df. Hint: Try the string method startswith and the Python keyword in. End of explanation """ trump_pivoted = ... trump_pivoted _ = ok.grade('q16') _ = ok.backup() """ Explanation: Question 16 Cross-classify @realDonaldTrump tweets by source and by tweet_type into a table called trump_pivoted. Hint: We did something very similar after question 7. You don't need to write much new code for this. End of explanation """ ... 
""" Explanation: Question 17 Does the cross-classified table show evidence against the hypothesis that Trump and his advisors tweet from roughly the same sources? Again assuming you agree with Statistician Belinda, run an hypothesis test in the next cell to verify that there is a difference in the relevant distributions. Then use the subsequent cell to describe your methodology and results. Are there any important caveats? End of explanation """ _ = ok.grade_all() """ Explanation: Write your answer here, replacing this text. We are really interested in knowing whether we can classify @realDonaldTrump tweets on the basis of the source. Just knowing that there is a difference in source distributions isn't nearly enough. Instead, we would like to claim something like this: "@realDonaldTrump tweets from Twitter for Android are generally authored by Trump himself. Other tweets are generally authored by his aides." Question 18 If you use bootstrap methods to compute a confidence interval for the proportion of Trump aide retweets from Android phones in "the population of all @realDonaldTrump retweets," you will find that the interval is [0, 0]. That's because there are no retweets from Android phones by Trump aides in our dataset. Is it reasonable to conclude from this that Trump aides definitely never tweet from Android phones? Write your answer here, replacing this text. Submitting your assignment First, run the next cell to run all the tests at once. End of explanation """ # Now, we'll submit to okpy _ = ok.submit() """ Explanation: Now, run this code in your terminal to make a git commit that saves a snapshot of your changes in git. The last line of the cell runs git push, which will send your work to your personal Github repo. Note: Don't add and commit your keys.json file! git add -A will do that, but the code we've written below won't. 
# Tell git to commit your changes to this notebook git add sp17/hw/hw2/hw2.ipynb # Tell git to make the commit git commit -m "hw2 finished" # Send your updates to your personal private repo git push origin master Finally, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. End of explanation """
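As a reference point for the kind of permutation test asked for in questions 11 and 17, here is an illustrative sketch on synthetic data. This is not the assignment solution: the helper name `source_permutation_test`, the toy labels, and the choice of total variation distance as the test statistic are all assumptions made here, not part of the homework.

```python
import numpy as np

def source_permutation_test(labels, sources, n_permutations=2000, seed=0):
    """Permutation test of H0: both author groups draw their tweet
    sources from the same distribution. The statistic is the total
    variation distance between the two empirical source distributions."""
    rng = np.random.default_rng(seed)
    n_sources = sources.max() + 1

    def tvd(lab):
        d0 = np.bincount(sources[lab == 0], minlength=n_sources)
        d1 = np.bincount(sources[lab == 1], minlength=n_sources)
        return 0.5 * np.abs(d0 / d0.sum() - d1 / d1.sum()).sum()

    observed = tvd(labels)
    # Shuffling the author labels simulates draws under the null hypothesis
    null = np.array([tvd(rng.permutation(labels))
                     for _ in range(n_permutations)])
    p_value = (null >= observed).mean()
    return observed, p_value

# Toy data: author group 0 favors source 0, group 1 uses both sources equally
rng = np.random.default_rng(3)
labels = np.repeat([0, 1], 200)
sources = np.concatenate([rng.choice(2, 200, p=[0.8, 0.2]),
                          rng.choice(2, 200, p=[0.5, 0.5])])
obs, p = source_permutation_test(labels, sources)  # expect a small p-value
```

Because the two toy groups really do differ, the observed statistic lands far out in the tail of the shuffled null distribution, so the estimated p-value is near zero.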
brsaylor/atn-tools
notebooks/parameter-space-coverage-3d.ipynb
gpl-3.0
import random import numpy as np import plotly.plotly as py import plotly.graph_objs as go import plotly.offline as offline offline.init_notebook_mode(connected=True) """ Explanation: Parameter space coverage 3D graphs See the parameter-space-coverage notebook for more information. End of explanation """ sample_size = 1000 def plot_3d_scatter(X, Y, Z, filename): trace = go.Scatter3d( x=X, y=Y, z=Z, mode='markers', marker=dict( size=4, #line=dict( # color='rgba(217, 217, 217, 0.14)', # width=0.5 #), opacity=0.5 ) ) data = [trace] layout = go.Layout( margin=dict( l=0, r=0, b=0, t=0 ) ) fig = go.Figure(data=data, layout=layout) return py.iplot(fig, filename=filename) """ Explanation: Set the sample size: End of explanation """ X = [random.uniform(0, 1) for i in range(sample_size)] Y = [random.uniform(0, 1) for i in range(sample_size)] Z = [random.uniform(0, 1) for i in range(sample_size)] plot_3d_scatter(X, Y, Z, 'paramspace-uniform') """ Explanation: Uniform End of explanation """ step_size = 0.05 all_points = [] for x in np.arange(0, 1, step_size): for y in np.arange(0, 1, step_size): for z in np.arange(0, 1, step_size): all_points.append((x, y, z)) print("Number of parameter value combinations: {:,}".format(len(all_points))) sample = random.sample(all_points, sample_size) X, Y, Z = zip(*sample) # unzip sample plot_3d_scatter(X, Y, Z, 'paramspace-stepped') """ Explanation: Stepped Set the step size. I'm using a larger value than in the parameter-space-coverage notebook (0.05 compared to 0.01) so that the quantization effect is more visible in 3 dimensions. End of explanation """
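Uniform sampling can leave accidental gaps, while the stepped grid quantizes each axis. A common compromise, not covered in the original notebook, is Latin hypercube sampling: each axis is split into equal strata with exactly one point per stratum, so every 1-D projection is evenly covered. The sketch below is illustrative (`latin_hypercube` is a helper defined here, not from any library used above); its output could be fed straight into `plot_3d_scatter`.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample of the unit cube: each axis is divided into
    n_samples equal strata and exactly one point falls in each stratum."""
    rng = np.random.default_rng(rng)
    # One random offset inside each stratum, per dimension
    offsets = rng.random((n_samples, n_dims))
    points = (np.arange(n_samples)[:, None] + offsets) / n_samples
    # Decouple the dimensions by shuffling each column independently
    for d in range(n_dims):
        points[:, d] = rng.permutation(points[:, d])
    return points

lhs = latin_hypercube(1000, 3, rng=0)
X, Y, Z = lhs.T
# plot_3d_scatter(X, Y, Z, 'paramspace-lhs')  # same plotting helper as above
```

Compared with the stepped grid, this keeps the fine-grained randomness of uniform sampling while guaranteeing even per-axis coverage.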
xunilrj/sandbox
courses/IMTx-Queue-Theory/Week1_Lab_Random_Variables.ipynb
apache-2.0
%matplotlib inline
from pylab import *

N = 10**5
lambda_ = 2.0
########################################
# Supply the missing coefficient below
V1 = -1.0/lambda_
data = V1*log(rand(N))
########################################
m = mean(data)
v = var(data)
print("\u03BB={0}: m={1:1.2f}, \u03C3\u00B2={2:1.2f}"
      .format(lambda_, m, v))  # \u... for unicode characters
"""
Explanation: <CENTER>
<p><font size="5"> Queuing theory: from Markov chains to multiserver systems</font></p>
<p><font size="5"> Python Lab </font></p>
<p><font size="5"> Week I: simulation of random variables </font></p>
</CENTER>
In this first lab, we are going to introduce the basics of random variable simulation, focusing on the simulation of the exponential and Poisson distributions, which play a central role in the mathematical modelling of queues. We will see how to draw samples from distributions by the inverse transform sampling method or by using the statistics sublibrary of Scipy. We will use the inverse transform sampling method to draw samples from the exponential distribution. Then we will introduce the Poisson distribution.
As explained in the general introduction to the labs (Week 0), to complete a lab, you will have to fill in undefined variables in the code. The code will then generate some variables named Vi, with i=1,... . You will find all the Vis generated from your results by running the last cell of code of the lab. You can check your results by answering the exercise at the end of the lab section, where you will be asked for the values of the Vis.
Let $F(x)=P(X\leq x)$ denote the distribution function of some random variable $X$. When $F$ is continuous and strictly monotone increasing on the domain of $X$, then the random variable $U=F(X)$ with values in $[0,1]$ satisfies
$$
P(U\leq u)=P(F(X)\leq u)=P(X\leq F^{-1}(u))=F(F^{-1}(u))=u,\qquad \forall u\in[0,1].
$$
Thus, $U$ is a uniform random variable over $[0,1]$, which we denote $U\sim\mathcal{U}_{[0,1]}$.
In other words, for all $a,b$ with $0\leq a\leq b\leq 1$, $P(U\in[a,b])=b-a$. Conversely, the distribution function of the random variable $Y=F^{-1}(U)$ is $F$ when $U\sim\mathcal{U}_{[0,1]}$.
1) Based on the above results, explain how to draw samples from an $Exp(\lambda)$ distribution. Draw $N=10^5$ samples of an $Exp(\lambda)$ distribution, for $\lambda=2$. Calculate the mean $m$ and variance $\sigma^2$ of an $Exp(\lambda)$ distribution and compute the sample estimates of $m$ and $\sigma^2$.
End of explanation
"""
from scipy.stats import poisson

lambda_ = 20
N = 10**5
####################################
# Give parameters mu and size in function poisson.rvs
# (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html)
sample = poisson.rvs(mu=lambda_, size=N)
####################################
# mean and variance of sample vector
mean_sample = mean(sample)
var_sample = var(sample)
print(("\u03BB = {0}\nestimated mean = {1:1.2f}\n"
       + "estimated var = {2:1.2f}")
      .format(lambda_, mean_sample, var_sample))
#------------------------
V2 = mean_sample
"""
Explanation: 2) The discrete valued random variable $X$ follows a Poisson distribution if its probabilities depend on a parameter $\lambda$ and are such that
$$
P(X=k)=\dfrac{\lambda^k}{k!}e^{-\lambda},\quad \text{for }\; k=0,1,2,\ldots
$$
We denote by $\mathcal{P}(\lambda)$ the Poisson distribution with parameter $\lambda$. As for continuous valued distributions, samples from discrete distributions can be obtained via the inverse transform sampling method. Alternatively, one can use the statistics sublibrary of Scipy to draw samples from a Poisson distribution (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html). Draw $N=10^5$ samples from the Poisson distribution with parameter $\lambda = 20$ and compute their sample mean and variance.
Check that they are close to their theoretical values, which are both equal to $\lambda$.
Answer to question 2
End of explanation
"""
print("---------------------------\n"
      + "RESULTS SUPPLIED FOR LAB 1:\n"
      + "---------------------------")
results = ("V" + str(k) for k in range(1, 3))
for x in results:
    try:
        print(x + " = {0:.2f}".format(eval(x)))
    except:
        print(x + ": variable is undefined")
"""
Explanation: Your answers for the exercise
End of explanation
"""
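The lab notes that discrete distributions can also be sampled by the inverse transform method, but only uses `scipy.stats.poisson.rvs`. As a sketch of how the discrete inverse transform works for the Poisson case (the helper name `poisson_inverse_transform` is introduced here for illustration, it is not part of the lab): for each uniform draw $U$, walk the cumulative probabilities $P(X\leq k)$ upward until they exceed $U$.

```python
import numpy as np

def poisson_inverse_transform(lam, size, rng=None):
    """Sample Poisson(lam) by inverting the CDF step by step."""
    rng = np.random.default_rng(rng)
    out = np.empty(size, dtype=int)
    for i in range(size):
        u = rng.random()
        k = 0
        p = np.exp(-lam)   # P(X=0)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k   # P(X=k) = P(X=k-1) * lam / k
            cdf += p
        out[i] = k
    return out

draws = poisson_inverse_transform(20.0, 10_000, rng=42)
```

The sample mean and variance of `draws` should both sit near $\lambda$, matching the scipy-based cell above.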
IS-ENES-Data/submission_forms
test/forms/test/.ipynb_checkpoints/test_ki_12345-checkpoint.ipynb
apache-2.0
# please edit the (red) information below: Name, email and project the data belongs to
from dkrz_forms import form_handler

my_project = "DKRZ_CDP"
my_first_name = "...."  # example: sf.first_name = "Harold"
my_last_name = "...."   # example: sf.last_name = "Mitty"
my_email = "...."       # example: sf.email = "Mr.Mitty@yahoo.com"
my_keyword = "...."     # example: sf.keyword = "mymodel_myrunid"
sf = form_handler.init_form(my_project, my_first_name, my_last_name, my_email, my_keyword)
"""
Explanation: Generic DKRZ national archive form
This form is intended to provide a generic template for interactive forms, e.g. for testing ... to be finalized ...
End of explanation
"""
sf.myattribute = "myinformation"
"""
Explanation: Edit form information
End of explanation
"""
form_handler.save_form(sf, "..my comment..")  # edit my comment info
"""
Explanation: Save your form
Your form will be stored (the form name consists of your last name plus your keyword).
End of explanation
"""
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
"""
Explanation: Officially submit your form
The form will be submitted to the DKRZ team for processing. You will also receive a confirmation email with a reference to your online form for future modifications.
End of explanation
"""
mne-tools/mne-tools.github.io
0.24/_downloads/b6ccbb801939862ed915d2c7295ac245/sensor_permutation_test.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # # License: BSD-3-Clause import numpy as np import mne from mne import io from mne.stats import permutation_t_test from mne.datasets import sample print(__doc__) """ Explanation: Permutation T-test on sensor data One tests if the signal significantly deviates from 0 during a fixed time window of interest. Here computation is performed on MNE sample dataset between 40 and 60 ms. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id = 1 tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # pick MEG Gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6)) data = epochs.get_data() times = epochs.times temporal_mask = np.logical_and(0.04 <= times, times <= 0.06) data = np.mean(data[:, :, temporal_mask], axis=2) n_permutations = 50000 T0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1) significant_sensors = picks[p_values <= 0.05] significant_sensors_names = [raw.ch_names[k] for k in significant_sensors] print("Number of significant sensors : %d" % len(significant_sensors)) print("Sensors names : %s" % significant_sensors_names) """ Explanation: Set parameters End of explanation """ evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis], epochs.info, tmin=0.) 
# Extract mask and indices of active sensors in the layout stats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names) mask = p_values[:, np.newaxis] <= 0.05 evoked.plot_topomap(ch_type='grad', times=[0], scalings=1, time_format=None, cmap='Reds', vmin=0., vmax=np.max, units='-log10(p)', cbar_fmt='-%0.1f', mask=mask, size=3, show_names=lambda x: x[4:] + ' ' * 20, time_unit='s') """ Explanation: View location of significantly active sensors End of explanation """
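The `mne.stats.permutation_t_test` used above is a one-sample permutation t-test with random sign flips and a max-|t| correction across sensors. As a minimal numpy-only sketch of the same idea (this is an illustration of the method, not MNE's actual implementation; `permutation_t_test_1samp` is a name chosen here):

```python
import numpy as np

def permutation_t_test_1samp(data, n_permutations=1000, seed=0):
    """One-sample permutation t-test against a zero mean, with the
    max-|t| correction across features for multiple comparisons.

    data: array of shape (n_samples, n_features).
    Returns the observed t-values and one corrected p-value per feature.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]

    def tvals(x):
        return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(n))

    t_obs = tvals(data)
    max_null = np.empty(n_permutations)
    for i in range(n_permutations):
        # Under H0 the sign of each sample is exchangeable
        signs = rng.choice([-1.0, 1.0], size=(n, 1))
        max_null[i] = np.abs(tvals(signs * data)).max()
    p_values = (max_null[None, :] >= np.abs(t_obs)[:, None]).mean(axis=1)
    return t_obs, p_values

# Demo on synthetic "sensor" data: only feature 0 has a true nonzero mean
demo = np.random.default_rng(1).normal(size=(40, 5))
demo[:, 0] += 2.0
t_demo, p_demo = permutation_t_test_1samp(demo, n_permutations=500, seed=2)
```

On data like this, only the truly active feature should come out with a small corrected p-value, mirroring how only a subset of gradiometers survives the test above.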
GoogleCloudPlatform/asl-ml-immersion
notebooks/ml_fairness_explainability/explainable_ai/labs/xai_image_caip.ipynb
apache-2.0
from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") import os PROJECT_ID = "" # TODO: your PROJECT_ID here. os.environ["PROJECT_ID"] = PROJECT_ID BUCKET_NAME = PROJECT_ID # TODO: replace your BUCKET_NAME, if needed REGION = "us-central1" os.environ["BUCKET_NAME"] = BUCKET_NAME os.environ["REGION"] = REGION """ Explanation: AI Explanations: Deploying an image model Overview This tutorial shows how to train a Keras classification model on image data and deploy it to the AI Platform Explanations service to get feature attributions on your deployed model. If you've already got a trained model and want to deploy it to AI Explanations, skip to the Export the model as a TF 2 SavedModel section. Dataset The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets. Objective The goal of this tutorial is to train a model on a simple image dataset (flower classification) to understand how you can use AI Explanations with image models. For image models, AI Explanations returns an image with the pixels highlighted that signaled your model's prediction the most. This tutorial focuses more on deploying the model to AI Platform with Explanations than on the design of the model itself. Setup Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation """ %%bash exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/) if [ -n "$exists" ]; then echo -e "Bucket gs://${BUCKET_NAME} already exists." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${BUCKET_NAME} echo -e "\nHere are your current buckets:" gsutil ls fi """ Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist. 
End of explanation """ import io import os import random from base64 import b64encode import numpy as np import PIL import tensorflow as tf from matplotlib import pyplot as plt AUTO = tf.data.experimental.AUTOTUNE print("AUTO", AUTO) import explainable_ai_sdk """ Explanation: Import libraries Import the libraries for this tutorial. End of explanation """ GCS_PATTERN = "gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec" IMAGE_SIZE = [192, 192] BATCH_SIZE = 32 VALIDATION_SPLIT = 0.19 CLASSES = [ "daisy", "dandelion", "roses", "sunflowers", "tulips", ] # do not change, maps to the labels in the data (folder names) # Split data files between training and validation filenames = tf.io.gfile.glob(GCS_PATTERN) random.shuffle(filenames) split = int(len(filenames) * VALIDATION_SPLIT) training_filenames = filenames[split:] validation_filenames = filenames[:split] print( "Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format( len(filenames), len(training_filenames), len(validation_filenames) ) ) validation_steps = ( int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE ) steps_per_epoch = ( int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE ) print( "With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format( BATCH_SIZE, steps_per_epoch, validation_steps ) ) """ Explanation: Download and preprocess the data This section shows how to download the flower images, use the tf.data API to create a data input pipeline, and split the data into training and validation sets. End of explanation """ def dataset_to_numpy_util(dataset, N): dataset = dataset.batch(N) if tf.executing_eagerly(): # In eager mode, iterate in the Dataset directly. for images, labels in dataset: numpy_images = images.numpy() numpy_labels = labels.numpy() break else: # In non-eager mode, must get the TF note that # yields the nextitem and run it in a tf.Session. 
get_next_item = dataset.make_one_shot_iterator().get_next() with tf.Session() as ses: numpy_images, numpy_labels = ses.run(get_next_item) return numpy_images, numpy_labels def title_from_label_and_target(label, correct_label): label = np.argmax(label, axis=-1) # one-hot to class number correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number correct = label == correct_label return ( "{} [{}{}{}]".format( CLASSES[label], str(correct), ", shoud be " if not correct else "", CLASSES[correct_label] if not correct else "", ), correct, ) def display_one_flower(image, title, subplot, red=False): plt.subplot(subplot) plt.axis("off") plt.imshow(image) plt.title(title, fontsize=16, color="red" if red else "black") return subplot + 1 def display_9_images_from_dataset(dataset): subplot = 331 plt.figure(figsize=(13, 13)) images, labels = dataset_to_numpy_util(dataset, 9) for i, image in enumerate(images): title = CLASSES[np.argmax(labels[i], axis=-1)] subplot = display_one_flower(image, title, subplot) if i >= 8: break plt.tight_layout() plt.subplots_adjust(wspace=0.1, hspace=0.1) plt.show() def display_9_images_with_predictions(images, predictions, labels): subplot = 331 plt.figure(figsize=(13, 13)) for i, image in enumerate(images): title, correct = title_from_label_and_target(predictions[i], labels[i]) subplot = display_one_flower(image, title, subplot, not correct) if i >= 8: break plt.tight_layout() plt.subplots_adjust(wspace=0.1, hspace=0.1) plt.show() def display_training_curves(training, validation, title, subplot): if subplot % 10 == 1: # set up the subplots on the first call plt.subplots(figsize=(10, 10), facecolor="#F0F0F0") plt.tight_layout() ax = plt.subplot(subplot) ax.set_facecolor("#F8F8F8") ax.plot(training) ax.plot(validation) ax.set_title("model " + title) ax.set_ylabel(title) ax.set_xlabel("epoch") ax.legend(["train", "valid."]) """ Explanation: The following cell contains some image visualization utility functions. 
This code isn't essential to training or deploying the model. End of explanation """ def read_tfrecord(example): features = { "image": tf.io.FixedLenFeature( [], tf.string ), # tf.string means bytestring "class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar "one_hot_class": tf.io.VarLenFeature(tf.float32), } example = tf.io.parse_single_example(example, features) image = tf.image.decode_jpeg(example["image"], channels=3) image = ( tf.cast(image, tf.float32) / 255.0 ) # convert image to floats in [0, 1] range image = tf.reshape( image, [*IMAGE_SIZE, 3] ) # explicit size will be needed for TPU one_hot_class = tf.sparse.to_dense(example["one_hot_class"]) one_hot_class = tf.reshape(one_hot_class, [5]) return image, one_hot_class def load_dataset(filenames): # Read data from TFRecords # TODO: Complete the load_dataset function to load the images from TFRecords return dataset """ Explanation: Read images and labels from TFRecords In this dataset the images are stored as TFRecords. TODO:Complete the load_dataset function to load the images from TFRecords End of explanation """ display_9_images_from_dataset(load_dataset(training_filenames)) """ Explanation: Use the visualization utility function provided earlier to preview flower images with their labels. 
End of explanation """ def get_batched_dataset(filenames): dataset = load_dataset(filenames) dataset = dataset.cache() # This dataset fits in RAM dataset = dataset.repeat() dataset = dataset.shuffle(2048) dataset = dataset.batch(BATCH_SIZE) dataset = dataset.prefetch( AUTO ) # prefetch next batch while training (autotune prefetch buffer size) # For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets return dataset def get_training_dataset(): return get_batched_dataset(training_filenames) def get_validation_dataset(): return get_batched_dataset(validation_filenames) some_flowers, some_labels = dataset_to_numpy_util( load_dataset(validation_filenames), 8 * 20 ) """ Explanation: Create training and validation datasets End of explanation """ from tensorflow.keras import Sequential from tensorflow.keras.layers import ( BatchNormalization, Conv2D, Dense, GlobalAveragePooling2D, MaxPooling2D, ) from tensorflow.keras.optimizers import Adam model = Sequential( [ # Stem Conv2D( kernel_size=3, filters=16, padding="same", activation="relu", input_shape=[*IMAGE_SIZE, 3], ), BatchNormalization(), Conv2D(kernel_size=3, filters=32, padding="same", activation="relu"), BatchNormalization(), MaxPooling2D(pool_size=2), # Conv Group Conv2D(kernel_size=3, filters=64, padding="same", activation="relu"), BatchNormalization(), MaxPooling2D(pool_size=2), Conv2D(kernel_size=3, filters=96, padding="same", activation="relu"), BatchNormalization(), MaxPooling2D(pool_size=2), # Conv Group Conv2D(kernel_size=3, filters=128, padding="same", activation="relu"), BatchNormalization(), MaxPooling2D(pool_size=2), Conv2D(kernel_size=3, filters=128, padding="same", activation="relu"), BatchNormalization(), # 1x1 Reduction Conv2D(kernel_size=1, filters=32, padding="same", activation="relu"), BatchNormalization(), # Classifier GlobalAveragePooling2D(), Dense(5, activation="softmax"), ] ) model.compile( optimizer=Adam(lr=0.005, 
decay=0.98), loss="categorical_crossentropy", metrics=["accuracy"], ) model.summary() """ Explanation: Build, train, and evaluate the model This section shows how to build, train, evaluate, and get local predictions from a model by using the TF.Keras Sequential API. End of explanation """ EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75% # TODO: Using the GPU train the model for 20 to 60 epochs """ Explanation: Train the model Train this on a GPU by attaching a GPU to your CAIP notebook instance. On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes. TODO: Using the GPU train the model defined above End of explanation """ # Randomize the input so that you can execute multiple times to change results permutation = np.random.permutation(8 * 20) some_flowers, some_labels = ( some_flowers[permutation], some_labels[permutation], ) predictions = model.predict(some_flowers, batch_size=16) evaluations = model.evaluate(some_flowers, some_labels, batch_size=16) print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist()) print("[val_loss, val_acc]", evaluations) display_9_images_with_predictions(some_flowers, predictions, some_labels) """ Explanation: Visualize local predictions Get predictions on your local model and visualize the images with their predicted labels, using the visualization utility function provided earlier. 
End of explanation """ export_path = "gs://" + BUCKET_NAME + "/explanations/mymodel" def _preprocess(bytes_input): decoded = tf.io.decode_jpeg(bytes_input, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) resized = tf.image.resize(decoded, size=(192, 192)) return resized @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def preprocess_fn(bytes_inputs): with tf.device("cpu:0"): decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32) return { "numpy_inputs": decoded_images } # User needs to make sure the key matches model's input m_call = tf.function(model.call).get_concrete_function( [ tf.TensorSpec( shape=[None, 192, 192, 3], dtype=tf.float32, name="numpy_inputs" ) ] ) @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def serving_fn(bytes_inputs): # TODO: Complete the function return prob tf.saved_model.save( model, export_path, signatures={ "serving_default": serving_fn, "xai_preprocess": preprocess_fn, # Required for XAI "xai_model": m_call, # Required for XAI }, ) """ Explanation: Export the model as a TF 2.3 SavedModel When using TensorFlow 2.3, you export the model as a SavedModel and load it into Cloud Storage. During export, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. Serving function for image data Sending base 64 encoded image data to AI Platform is more space efficient. Since this deployed model expects input data as raw bytes, you need to ensure that the b64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model. To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is part of the model's graph (instead of upstream on a CPU). 
When you send a prediction or explanation request, the request goes to the serving function (serving_fn), which preprocesses the b64 bytes into raw numpy bytes (preprocess_fn). At this point, the data can be passed to the model (m_call). TODO: Complete the serving function End of explanation """ ! saved_model_cli show --dir $export_path --all """ Explanation: Get input and output signatures Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. You'll use this information when you deploy your model to AI Explanations in the next section. End of explanation """ loaded = tf.saved_model.load(export_path) input_name = list( loaded.signatures["xai_model"].structured_input_signature[1].keys() )[0] print(input_name) output_name = list(loaded.signatures["xai_model"].structured_outputs.keys())[0] print(output_name) preprocess_name = list( loaded.signatures["xai_preprocess"].structured_input_signature[1].keys() )[0] print(preprocess_name) """ Explanation: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. You need the signatures for the following layers: Serving function input layer Model input layer Model output layer End of explanation """ from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder # We want to explain 'xai_model' signature. builder = SavedModelMetadataBuilder(export_path, signature_name="xai_model") random_baseline = np.random.rand(192, 192, 3) builder.set_image_metadata( "numpy_inputs", input_baselines=[random_baseline.tolist()] ) builder.save_metadata(export_path) """ Explanation: Generate explanation metadata In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields. 
For image models, using [0,1] as your input baseline represents black and white images. This example uses np.random to generate the baseline because the training images contain a lot of black and white (i.e. daisy petals). Note: for the explanation request, use the model's signature for the input and output tensors. Do not use the serving function signature. End of explanation """ import datetime MODEL = "flowers" + TIMESTAMP print(MODEL) # Create the model if it doesn't exist yet (you only need to run this once) ! gcloud ai-platform models create $MODEL --enable-logging --region=$REGION """ Explanation: Deploy model to AI Explanations This section shows how to use gcloud to deploy the model to AI Explanations, using two different explanation methods for image models. Create the model End of explanation """ # Each time you create a version the name should be unique IG_VERSION = "v_ig" ! gcloud beta ai-platform versions create $IG_VERSION \ --model $MODEL \ --origin $export_path \ --runtime-version 2.3 \ --framework TENSORFLOW \ --python-version 3.7 \ --machine-type n1-standard-4 \ --explanation-method integrated-gradients \ --num-integral-steps 25 \ --region $REGION # Make sure the IG model deployed correctly. State should be `READY` in the following log ! gcloud ai-platform versions describe $IG_VERSION --model $MODEL --region $REGION """ Explanation: Create explainable model versions For image models, we offer two choices for explanation methods: * Integrated Gradients (IG) * XRAI You can find more info on each method in the documentation. You can deploy a version with both so that you can compare results. If you already know which explanation method you'd like to use, you can deploy one version and skip the code blocks for the other method. Creating the version will take ~5-10 minutes. Note that your first deploy may take longer. 
Deploy an Integrated Gradients model End of explanation """ # Each time you create a version the name should be unique XRAI_VERSION = "v_xrai" # Create the XRAI version with gcloud ! gcloud beta ai-platform versions create $XRAI_VERSION \ --model $MODEL \ --origin $export_path \ --runtime-version 2.3 \ --framework TENSORFLOW \ --python-version 3.7 \ --machine-type n1-standard-4 \ --explanation-method xrai \ --num-integral-steps 25 \ --region $REGION # Make sure the XRAI model deployed correctly. State should be `READY` in the following log ! gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL --region=$REGION """ Explanation: Deploy an XRAI model End of explanation """ # Resize the images to what your model is expecting (192,192) test_filenames = [] for i in os.listdir("../assets/flowers"): img_path = "../assets/flowers/" + i with PIL.Image.open(img_path) as ex_img: resize_img = ex_img.resize([192, 192]) resize_img.save(img_path) test_filenames.append(img_path) """ Explanation: Get predictions and explanations This section shows how to prepare test images to send to your deployed model, and how to send a batch prediction request to AI Explanations. Get and prepare test images To prepare the test images: Download a small sample of images from the flowers dataset -- just enough for a batch prediction. Resize the images to match the input shape (192, 192) of the model. Save the resized images back to your bucket. End of explanation """ # Prepare your images to send to your Cloud model instances = [] for image_path in test_filenames: img_bytes = tf.io.read_file(image_path) b64str = b64encode(img_bytes.numpy()).decode("utf-8") instances.append({preprocess_name: {"b64": b64str}}) """ Explanation: Format your explanation request Prepare a batch of instances. 
End of explanation """ # IG EXPLANATIONS remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID, model=MODEL, version=IG_VERSION, region=REGION) ig_response = #TODO for response in ig_response: response.visualize_attributions() # XRAI EXPLANATIONS remote_xrai_model = #TODO: Similar to above, load the XRAI model xrai_response = #TODO for response in xrai_response: response.visualize_attributions() """ Explanation: Send the explanations request and visualize If you deployed both an IG and an XRAI model, you can request explanations for both models and compare the results. If you only deployed one model above, run only the cell for that explanation method. You can use the Explainable AI SDK to send explanation requests to your deployed model and visualize the explanations. TODO: Write code to get explanations from the saved model. You will need to use model.explain(instances) to get the results End of explanation """ for i, response in enumerate(ig_response): attr = response.get_attribution() baseline_score = attr.baseline_score predicted_score = attr.example_score print("Baseline score: ", baseline_score) print("Predicted score: ", predicted_score) print("Predicted - Baseline: ", predicted_score - baseline_score, "\n") """ Explanation: Check explanations and baselines To better make sense of your feature attributions, you can compare them with your model's baseline. For image models, the baseline_score returned by AI Explanations is the score your model would give an image input with the baseline you specified. The baseline is different for each class in the model. Every time your model predicts tulip as the top class, you'll see the same baseline score. Earlier, you used a baseline image of np.random randomly generated values. If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above. 
If the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful. Calculate the difference between baseline_score and example_score for the three test images above. Note that the score values for classification models are probabilities: the confidence your model has in its predicted class. A score of 0.90 for tulip means your model has classified the image as a tulip with 90% confidence. The code below checks baselines for the IG model. To inspect your XRAI model, swap out the ig_response and IG_VERSION variables below. End of explanation """ # Convert your baseline from above to a base64 string rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype("uint8")) buffer = io.BytesIO() rand_test_img.save(buffer, format="PNG") new_image_string = b64encode(np.asarray(buffer.getvalue())).decode("utf-8") # Preview it plt.imshow(rand_test_img) sanity_check_img = {preprocess_name: {"b64": new_image_string}} """ Explanation: Explain the baseline image Another way to check your baseline choice is to view explanations for this model's baseline image: an image array of randomly generated values using np.random. First, convert the same np.random baseline array generated earlier to a base64 string and preview it. This encodes the random noise as if it's a PNG image. Additionally, you must convert the byte buffer to a numpy array, because this is the format the underlying model expects for input when you send the explain request. End of explanation """ # Sanity Check explanations EXPLANATIONS sanity_check_response = remote_ig_model.explain([sanity_check_img]) """ Explanation: Send the explanation request for the baseline image. (To check a baseline image for XRAI, change IG_VERSION to XRAI_VERSION below.) 
End of explanation """ sanity_check_response[0].visualize_attributions() """ Explanation: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction End of explanation """ attr = sanity_check_response[0].get_attribution() baseline_score = attr.baseline_score example_score = attr.example_score print(abs(baseline_score - example_score)) """ Explanation: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model. End of explanation """ # Delete model version resource ! gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL ! gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL # Delete model resource ! gcloud ai-platform models delete $MODEL --quiet """ Explanation: Cleaning Up End of explanation """
dbkinghorn/blog-jupyter-notebooks
ML-Logistic-Regression-theory.ipynb
gpl-3.0
import numpy as np                 # numerical computing
import matplotlib.pyplot as plt    # plotting core
import seaborn as sns              # higher level plotting tools
%matplotlib inline
sns.set()

def g(z):  # sigmoid function
    return 1/(1 + np.exp(-z))

z = np.linspace(-10,10,100)
plt.plot(z, g(z))
plt.title("Sigmoid Function g(z) = 1/(1 + exp(-z))", fontsize=24)
"""
Explanation: Logistic Regression the Theory
Despite its name, Logistic Regression is not actually referring to regression in the sense that we covered with Linear Regression. It is a widely used classification algorithm. "Regression" is an historic part of the name.
Logistic regression makes use of what is known as a binary classifier. It utilizes the Logistic function or Sigmoid function to predict a probability that the answer to some question is 1 or 0, yes or no, true or false, good or bad etc. It's this function that will drive the algorithm and is also interesting in that it can be used as an "activation function" for Neural Networks.
As with the posts on Linear Regression { (1), (2), (3), (4), (5), (6) } Logistic Regression will be a good algorithm to dig into for understanding Machine Learning.
Classification with Logistic Regression
Classification algorithms do what the name suggests, i.e. they train models to predict what class some object belongs to. A very common application is image classification. Given some photo, what is it? It is the success of solving that kind of problem with sophisticated deep neural networks running on GPUs that caused the big resurgence of interest in machine learning a few years ago.
Logistic Regression is an algorithm that is relatively simple and powerful for deciding between two classes, i.e. it's a binary classifier. It basically gives a function that is a boundary between two different classes.
It can be extended to handle more than two classes by a method referred to as "one-vs-all" (multinomial logistic regression or softmax regression), which is really a collection of binary classifiers that looks at each class individually versus everything else and then picks the class that has the highest probability.
Examples of problems that could be addressed with Logistic Regression are,
- Spam filtering -- spam or not spam
- Cell image -- cancer or normal
- Production line part scan -- good or defective
- Epidemiological study for illness, "symptoms" -- has it or doesn't
- is-a-(fill in the blank) or not
You probably get the idea. It's a simple yes-or-no type of classifier.
Logistic regression can make use of large numbers of features including continuous and discrete variables and non-linear features. It can be used for many kinds of problems.
Logistic Regression Model
The first thing to understand is that this is "supervised learning". The training data will be labeled data and effectively have just 2 values, that is,
$$ y \in \{0,1\}$$
$$ y = \left\{ \begin{array}{ll} 0 & \text{The Negative Case} \\ 1 & \text{The Positive Case} \end{array} \right. $$
0 and 1 are the labels that are assigned to the "objects or questions" we are looking at. For example, if we are looking at spam email then a message that is spam is labeled as 1.
We want a model that will produce values between 0 and 1 and will interpret the value of the model as a probability of the test case being positive or negative (true or false).
$$ 0 \le h_a(x) \le 1 $$
$$ h_a(x) = P(y=1| x:a)$$
The expression above is read as "the probability that $y=1$ given the values in the feature vector $x$ parameterized by $a$". Also, since $h$ is being interpreted as a probability, the probability that $y=0$ is given by $P(y=0| x:a) = 1 - P(y=1| x:a)$ since the probabilities have to add to 1 (there are only 2 choices!).
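Before moving on, the "one-vs-all" extension mentioned earlier can be sketched in numpy: train one binary classifier per class and pick the class whose classifier reports the highest probability. All parameter values below are made up for illustration.

```python
import numpy as np

def g(z):
    return 1/(1 + np.exp(-z))

# Hypothetical parameters: one row of A per class-vs-rest classifier,
# for a model with a constant term plus two features.
A = np.array([[ 0.5,  2.0, -1.0],   # class 0 vs rest
              [-0.3, -1.5,  2.5],   # class 1 vs rest
              [ 0.1, -0.5, -2.0]])  # class 2 vs rest

x = np.array([1.0, 0.8, -0.2])      # augmented feature vector, x0 = 1

probs = g(A @ x)              # each entry: P(class k | x) vs everything else
predicted = np.argmax(probs)  # pick the class with the highest probability
```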
The model (hypothesis) function $h$ looks like,
$$ \bbox[25px,border:2px solid green]{
\begin{align}
h_a(x) & = g(a'x) \\
\text{Letting } z & = a'x \\
h_a(x) = g(z) & = \frac{1}{1 + e^{-z}}
\end{align} }$$
When we vectorize the model to generate algorithms we will use $X$, the augmented matrix of feature variables with a column of ones, the same as it was in the posts on linear regression. Note that when we are looking at a single input vector $x$, the first element of $x$ is set to $1$, i.e. $x_0 = 1$. This multiplies the constant term $a_0$. $h_a(x)$ and $h_a(X)$ in the case where we have $n$ features look like,
$$ \begin{align}
h_a(x) & = \frac{1}{1+e^{-a'x}} \\
h_a(x) & = g(a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_n x_n) \\
h_a(X) & = g \left( \begin{bmatrix} 1 & x^{(1)}_1 & x^{(1)}_2 & \ldots & x^{(1)}_n \\ 1 & x^{(2)}_1 & x^{(2)}_2 & \ldots & x^{(2)}_n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x^{(m)}_1 & x^{(m)}_2 & \ldots & x^{(m)}_n \end{bmatrix} \begin{bmatrix} a_{0} \\ a_{1} \\ a_{2} \\ \vdots \\ a_{n} \end{bmatrix} \right)
\end{align} $$
$m$ is the number of elements in the data-set. As was the case in Linear Regression, the feature variables can be non-linear terms such as $x_1^2, x_1x_2, \sqrt{x_1} \dots$. The model itself is in the class of "Generalized Linear Models" because the parameter vector $a$ is linear with respect to the features. The logistic regression model looks like the linear regression model "wrapped" as the argument to the logistic function $g$. $g(z)$ is the logistic function or sigmoid function.
Let's load up some Python modules and see what $g$ looks like.
Sigmoid function $g(z)$
End of explanation
"""
# Generate 2 clusters of data
S = np.eye(2)
x1, y1 = np.random.multivariate_normal([1,1], S, 40).T
x2, y2 = np.random.multivariate_normal([-1,-1], S, 40).T
fig, ax = plt.subplots()
ax.plot(x1, y1, "o", label='neg data')
ax.plot(x2, y2, "P", label='pos data')
xb = np.linspace(-3,3,100)
a = [0.55,-1.3]
ax.plot(xb, a[0] + a[1]*xb, label='b(x) = %.2f + %.2f x' %(a[0], a[1]))
plt.title("Decision Boundary", fontsize=24)
plt.legend();
"""
Explanation: There are several features of $g$ to note,
- For larger values of $z$, $g(z)$ approaches 1
- For more negative values of $z$, $g(z)$ approaches 0
- The value of $g(0) = 0.5$
- For $z \ge 0$, $g(z) \ge 0.5$
- For $z \lt 0$, $g(z) \lt 0.5$
0.5 will be the cutoff for decisions. That is, if $g(z) \ge 0.5$ then the "answer" is "the positive case", 1; if $g(z) \lt 0.5$ then the answer is "the negative case", 0.
Decision Boundary
The value 0.5 mentioned above creates a boundary for classification by our model (hypothesis) $h_a(x)$
$$ \begin{align} \text{if } h_a(x) \ge 0.5 & \text{ then we say } & y=1 \\ \text{if } h_a(x) \lt 0.5 & \text{ then } & y=0 \end{align} $$
Looking at $g(z)$ more closely gives,
$$ \begin{align} h_a(x) = g(a'x) \ge 0.5 & \text{ when } & a'x \ge 0 \\ h_a(x) = g(a'x) \lt 0.5 & \text{ when } & a'x \lt 0 \end{align} $$
Therefore,
$$ \bbox[25px,border:2px solid green]{ \begin{align} a'x \ge 0 & \text{ implies } & y = 1 \\ a'x \lt 0 & \text{ implies } & y = 0 \end{align} }$$
The Decision Boundary is the "line" defined by $a'x$ that separates the area where $y=0$ and $y=1$. The "line" defined by $a'x$ can be non-linear since the feature variables $x_i$ can be non-linear. The decision boundary can be any shape (curve) that fits the data. We use a Cost Function derived from the logistic regression sigmoid function to help us find the parameters $a$ that define the optimal decision boundary $a'x$.
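The boxed decision rule can be sketched directly in numpy, using the same boundary parameters $a = (0.55, -1.3)$ as the plot above (the data points here are made up):

```python
import numpy as np

def g(z):
    return 1/(1 + np.exp(-z))

# Decision boundary b(x) = 0.55 - 1.3*x
a = np.array([0.55, -1.3])

# Augmented data matrix: a leading column of ones, then the feature
X = np.array([[1.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0]])

h = g(X @ a)                        # predicted probabilities P(y=1 | x)
y_pred = (X @ a >= 0).astype(int)   # equivalently, h >= 0.5
print(y_pred)  # [1 1 0]
```

Checking the sign of $a'x$ and thresholding $h_a(x)$ at 0.5 give exactly the same labels.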
After we have found the optimal values of $a$, the model function $h_a(x)$, which uses the sigmoid function, will tell us which side of the decision boundary our "question" lies on based on the values of the features $x$ that we give it. If you understand the paragraph above then you have a good idea of what logistic regression is about!
Here's some examples of what that Decision Boundary might look like;
End of explanation
"""
fig, ax = plt.subplots()
x3, y3 = np.random.multivariate_normal([0,0], [[.5,0],[0,.5]], 400).T
t = np.linspace(0,2*np.pi,400)
ax.plot((3+x3)*np.sin(t), (3+y3)*np.cos(t), "o")
ax.plot(x3, y3, "P")
xb1 = np.linspace(-5.0, 5.0, 100)
xb2 = np.linspace(-5.0, 5.0, 100)
Xb1, Xb2 = np.meshgrid(xb1, xb2)
b = Xb1**2 + Xb2**2 - 2.5
ax.contour(Xb1, Xb2, b, [0], colors='r')
plt.title("Decision Boundary", fontsize=24)
ax.axis('equal')
"""
Explanation: The plot above shows 2 sets of training-data. The positive case is represented by green '+' and the negative case by blue 'o'. The red line is the decision boundary $b(x) = 0.55 - 1.3x$. Any test cases that are above the line are negative and any below are positive. The parameters for that red line would be what we could have determined from doing a Logistic Regression run on those 2 sets of training data.
The next plot shows a case where the decision boundary is more complicated. It's represented by $b(x_1,x_2) = x_1^2 + x_2^2 - 2.5$
End of explanation
"""
z = np.linspace(-10,10,100)
fig, ax = plt.subplots()
ax.plot(z, g(z))
ax.set_title('Sigmoid Function 1/(1 + exp(-z))', fontsize=24)
ax.annotate('Convex', (-7.5,0.2), fontsize=18)
ax.annotate('Concave', (3,0.8), fontsize=18)
z = np.linspace(-10,10,100)
plt.plot(z, -np.log(g(z)))
plt.title("Log Sigmoid Function -log(1/(1 + exp(-z)))", fontsize=24)
plt.annotate('Convex', (-2.5,3), fontsize=18)
"""
Explanation: In this plot the positive outcomes are in a circular region in the center of the plot. The decision boundary is the red circle.
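That circular boundary can be used as a classifier directly. Here is a small sketch using the document's $b(x_1,x_2) = x_1^2 + x_2^2 - 2.5$ and a few made-up test points, treating the inside of the circle ($b \lt 0$) as the positive class, as in the plot:

```python
import numpy as np

def b(x1, x2):
    # Non-linear decision boundary from the plot above
    return x1**2 + x2**2 - 2.5

# A few test points: the origin is well inside the circle, (3, 0) is outside
points = np.array([[0.0, 0.0],
                   [1.0, 1.0],
                   [3.0, 0.0]])

# Inside the circle (b < 0) -> positive class
y_pred = (b(points[:, 0], points[:, 1]) < 0).astype(int)
print(y_pred)  # [1 1 0]
```

This is still a "linear" model in the sense used above: the parameters multiply the (non-linear) features $x_1^2$ and $x_2^2$ linearly.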
## Cost Function for Logistic Regression
A cost function's main purpose is to penalize bad choices for the parameters to be optimized and reward good ones. It should be easy to minimize by having a single global minimum and not be overly sensitive to changes in its arguments. It is also nice if it is differentiable (without difficulty) so you can find the gradient for the minimization problem. That is, it's best if it is "convex", "well behaved" and "smooth".
The cost function for logistic regression is written with logarithmic functions. An argument for using the log form of the cost function comes from the statistical derivation of the likelihood estimation for the probabilities. With the exponential form the likelihood is a product of probabilities and the log-likelihood is a sum. [The statistical derivations are always interesting but usually complex. We don't really need to look at that to justify the cost function we will use.] The log function is also a monotonically increasing function, so the negative of the log is decreasing. Minimizing a function and minimizing the negative log of that function will give the same values for the parameters. The log form will also be convex, which means it will have a single global minimum, whereas a simple "least-squares" cost function using the sigmoid function can have multiple minima and abrupt changes. The log form is just better behaved!
To see some of this let's look at a plot of the sigmoid function and the negative log of the sigmoid function.
End of explanation
"""
x = np.linspace(-10,10,50)
plt.plot(g(x), -np.log(g(x)))
plt.title("h(x) vs J(a)=-log(h(x)) for y = 1", fontsize=24)
plt.xlabel('h(x)')
plt.ylabel('J(a)')
"""
Explanation: Recall that in the training-set $y$ are labels with values of 0 or 1. The cost function will be broken down into two cases for each data point $(i)$, one for $y=1$ and one for $y=0$.
These two cases can then be combined into a single cost function $J$
$$ \bbox[25px,border:2px solid green]{
\begin{align}
J^{(i)}_{y=1}(a) & = -\log(h_a(x^{(i)})) \\
J^{(i)}_{y=0}(a) & = -\log(1 - h_a(x^{(i)})) \\
J(a) & = -\frac{1}{m}\sum^{m}_{i=1} \left[ y^{(i)} \log(h_a(x^{(i)})) + (1-y^{(i)})\log(1 - h_a(x^{(i)})) \right]
\end{align} }$$
You can see that the factors $y$ and $(1-y)$ effectively pick out the terms for the cases $y=1$ and $y=0$.
Vectorized form of $J(a)$
$J(a)$ can be written in vector form eliminating the summation sign as,
$$ \bbox[25px,border:2px solid green]{
\begin{align}
h_a(X) &= g(Xa) \\
J(a) &= -\frac{1}{m}\left( y' \log(h_a(X)) + (1-y)'\log(1 - h_a(X)) \right)
\end{align} }$$
To visualize how the cost function works look at the following plots,
End of explanation
"""
x = np.linspace(-10,10,50)
plt.plot(g(x), -np.log(1-g(x)))
plt.title("h(x) vs J(a)=-log(1-h(x)) for y = 0", fontsize=24)
plt.xlabel('h(x)')
plt.ylabel('J(a)')
"""
Explanation: You can see from this plot that when $y=1$ the cost $J(a)$ is large if $h(x)$ goes toward 0. That is, it favors $h(x)$ going to 1, which is what we want.
End of explanation
"""
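The vectorized cost $J(a)$ above translates almost line-for-line into numpy. This is a sketch with a tiny made-up data set; the `np.clip` guard against probabilities of exactly 0 or 1 is an implementation detail, not part of the derivation.

```python
import numpy as np

def g(z):
    return 1/(1 + np.exp(-z))

def cost_J(a, X, y, eps=1e-12):
    m = len(y)
    h = np.clip(g(X @ a), eps, 1 - eps)   # h_a(X) = g(Xa)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m

# Tiny made-up data set: augmented X, binary labels y
X = np.array([[1.0, -2.0],
              [1.0, -0.5],
              [1.0,  1.5]])
y = np.array([0, 0, 1])

good = cost_J(np.array([0.0,  2.0]), X, y)   # boundary that separates the data
bad  = cost_J(np.array([0.0, -2.0]), X, y)   # same boundary with flipped signs
print(good < bad)  # True
```

As the plots above suggest, a parameter choice that pushes $h$ toward the correct labels gets a much lower cost than one that pushes it the wrong way.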
kit-cel/wt
ccgbc/Guest_Lecture_Coding_Learning/Neural_decoding_Deep_Unfolding.ipynb
gpl-2.0
import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" import tensorflow as tf import numpy as np from pprint import pprint %matplotlib inline import matplotlib.pyplot as plt seed = 1337 tf.set_random_seed(seed) np.random.seed(seed) """ Explanation: Deep Learning for Communications By Jakob Hoydis, Contact jakob.hoydis@nokia-bell-labs.com This code is provided as supplementary material to the tutorial Deep Learning for Communications. It is licensed under the GPLv2 license. If you in any way use this code for research that results in publications, please cite it appropriately. Deep Unfolding - Neural Belief Propagation In this notebook we show how to interpret the existing belief propagation decoding algorithm as explicite neural network and analyze how to train such a network End of explanation """ #load list of predefined matrices ParityCheckMatrices = [[[1,0,1,1,1,0,0],[0,1,0,1,1,1,0],[0,0,1,0,1,1,1]],[[1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,
1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1]],[[1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,1,0,
0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,
1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,
1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1]],[[0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0],[0,1,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,0],[1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0],[1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],[0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],[0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],[1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0],[0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,2],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0],[0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],[0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]] ParityCheck_id = 3 # decide which parity check matrix should be used (0-2: BCH; 3: LDPC) ParityCheckMatrix = np.array(ParityCheckMatrices[ParityCheck_id]) # load parity-check matrix n = ParityCheckMatrix.shape[1] # define number of codeword bits n (codeword length) k = n-ParityCheckMatrix.shape[0] # define number of information bits k per codeword coderate = k/n # coderate as fraction of information bits per CWs num_bp_iterations = 10 # number of bp iterations #print parity-check matrix print(ParityCheckMatrix) print('n:{}, k:{}, coderate:{}'.format(n,k,coderate)) """ Explanation: Problem Description: The task is to implement the belief propagation decoding algorithm for channel coding and add trainable weights to the edges in the graph. Can stochastic gradient descent optimize these weights? 
Setting up the scenario
We use pre-generated parity-check matrices (BCH and Polar); see webdemos (and other references given in the lecture):
http://webdemo.inue.uni-stuttgart.de/webdemos/02_lectures/MEC/LDPC_degree_distribution/
http://webdemo.inue.uni-stuttgart.de/webdemos/03_theses/LDPC_Codes_from_Communication_Standards/
End of explanation
"""
plt.figure(figsize=(12, 8));
plt.xlabel('Variable Nodes i');
plt.ylabel('Check Nodes j');
plt.spy(ParityCheckMatrix);
"""
Explanation: Let's analyze the parity-check matrix
When loading the (7,4) BCH code you should see the well-known (7,4) Hamming code structure (see lecture).
End of explanation
"""
def ebnodb2noisevar(ebno_db, coderate):
    ebno = 10**(ebno_db/10)
    noise_var = 1/(2*coderate*ebno)
    return noise_var
"""
Explanation: Each black dot at position (j,i) defines a connection (edge) between VN i and CN j in the corresponding Tanner graph.
Helper functions
Compute the noise variance for a given SNR and coderate.
End of explanation
"""
def compute_llr(y, noise_var):
    return 2*y/noise_var
"""
Explanation: Compute LLRs for BPSK from given channel observations.
End of explanation
"""
H = tf.Variable(ParityCheckMatrix, trainable=False, dtype=tf.float32)
H_unconnected = tf.cast(tf.equal(H, 0), dtype=tf.float32)
"""
Explanation: Modeling belief propagation as a deep neural network
Now we need to create our neural network:
The graph is defined by the parity-check matrix H. H_unconnected defines all unconnected positions.
Remark: We use dtype=tf.float32 as this is more natural in the GPU environment. However, boolean values (connection yes/no) are sufficient.
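As a framework-independent aside, the two masks can be sanity-checked in plain NumPy on a toy (7,4) Hamming parity-check matrix; the matrix below is an illustrative example, not the one loaded above:

```python
import numpy as np

# Toy parity-check matrix of a (7,4) Hamming code (rows = CNs, columns = VNs).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.float32)

# Mask of unconnected positions, mirroring tf.cast(tf.equal(H, 0), tf.float32).
H_unconnected = (H == 0).astype(np.float32)

# Node degrees read off the Tanner graph:
# column sums = variable-node degrees, row sums = check-node degrees.
vn_degrees = H.sum(axis=0)
cn_degrees = H.sum(axis=1)
print(vn_degrees, cn_degrees)
```

By construction, H and H_unconnected are complementary 0/1 masks, so their sum is all ones.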
End of explanation """ batch_size = tf.placeholder_with_default(100, shape=()) noise_var = tf.placeholder_with_default(1.0, shape=()) """ Explanation: Define placeholder for batch size and noise_var (SNR) End of explanation """ x = tf.ones(shape=[n], dtype=tf.float32) noise = tf.random_normal(shape=[batch_size, n], stddev=tf.sqrt(noise_var)) y = x + noise """ Explanation: We use the all-zero CW x of length n (as it is part of any linear code) to avoid encoding. The channel is a simple AWGN channel with noise variance "noise_var". Finally, the channel output y is y=x+n. End of explanation """ def var_to_check(cv, H, llr, init=False, final=False): if init: #first layer # Simply send channel llrs along the edges vc = tf.transpose(tf.expand_dims(H, axis=0)*tf.expand_dims(llr, axis=1), perm=[0,2,1]) vc = tf.nn.tanh(tf.clip_by_value(1/2*vc, clip_value_min=-9.9, clip_value_max=9.9)) return vc, vc else: shape_cv = cv.get_shape().as_list() shape_llr = llr.get_shape().as_list() W_c = tf.Variable(initial_value=tf.ones(shape=[1, shape_cv[1], shape_cv[2]]), trainable=True,dtype=tf.float32) cv *= W_c if final: # last layer (marginalization) # Final marginalization vc = tf.reduce_sum(cv, axis=1)+llr return vc, vc else: # Sum-up messages, add llr, and substract message for relative edge vc = tf.reduce_sum(cv, axis=1)+llr vc = tf.expand_dims(H, axis=0)*tf.expand_dims(vc, axis=1) - cv vc = tf.transpose(vc, perm=[0,2,1]) vc = tf.nn.tanh(tf.clip_by_value(1/2*vc, clip_value_min=-9.9, clip_value_max=9.9)) return vc, tf.reduce_sum(cv, axis=1)+llr def check_to_var(vc, H, H_unconnected): vc_tanh = tf.transpose(vc, perm=[0,2,1]) cv = tf.reduce_prod(vc_tanh + H_unconnected, axis=2, keepdims=True) cv = tf.where(tf.equal(cv,0),tf.ones_like(cv)*1e-12,cv) cv = (cv/(vc_tanh+H_unconnected))*tf.expand_dims(H, axis=0) cv = tf.clip_by_value(cv, clip_value_min=-1+1e-6, clip_value_max=1-1e-6) cv = 2*tf.atanh(cv) return cv """ Explanation: Define VN to CN messages The LLR values are clipped such that 
the absolute value of each message is not larger than 10. In this notebook we only train the weights of messages from VNs to CNs, but extensions are straightforward. The messages are initialized with 1 (rather than with a random initialization).
Define CN to VN messages
The CN update requires message clipping for the numerical stability of the CN update function
End of explanation """ optimizer = tf.train.AdamOptimizer(1e-3) grads_and_vars = optimizer.compute_gradients(loss) grads_and_vars = [(tf.clip_by_value(g, -10, 10), v) for g,v in grads_and_vars] step = optimizer.apply_gradients(grads_and_vars) """ Explanation: Define ADAM optimizer with apply gradient clipping End of explanation """ sess = tf.InteractiveSession() tf.global_variables_initializer().run() """ Explanation: Start new Session End of explanation """ samples = 10000 epochs = 10 ebnos_db = np.linspace(1,6, 6) bers_no_training = np.zeros(shape=[ebnos_db.shape[0]]) for j in range(epochs): for i in range(ebnos_db.shape[0]): ebno_db = ebnos_db[i] bers_no_training[i] += sess.run(ber, feed_dict={ batch_size: samples, noise_var: ebnodb2noisevar(ebno_db, coderate) }) bers_no_training /= epochs plt.figure(figsize=(10, 5)) plt.semilogy(ebnos_db, bers_no_training, '-x') plt.grid(which='both'); plt.xlabel('EbNo [dB]'); plt.ylabel('BER'); plt.ylim([1e-6, 1e-0]); plt.xlim([ebnos_db[0], ebnos_db[-1]]); plt.legend(['No Training - Conventional BP Decoder with %d iterations' % (num_bp_iterations)]); """ Explanation: Evaluate belief propagation decoding (without training) As all of the weights are initialized with 1, the inital performance of the neural networks equals the performance of the conventional BP decoder. For the Monte-Carlo BER simulations batches of size "samples" are transmitted and decoded one-shot. This is repeated "iterations" times, i.e., in total "samples*iterations" codewords are simulated per SNR point. See http://webdemo.inue.uni-stuttgart.de/webdemos/08_research/GPU_LDPC_Decoder/index.php for reference curves. 
End of explanation
"""
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)
grads_and_vars = [(tf.clip_by_value(g, -10, 10), v) for g, v in grads_and_vars]
step = optimizer.apply_gradients(grads_and_vars)
"""
Explanation: Define the ADAM optimizer and apply gradient clipping
End of explanation
"""
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
"""
Explanation: Start a new Session
End of explanation
"""
samples = 10000
epochs = 10
ebnos_db = np.linspace(1, 6, 6)
bers_no_training = np.zeros(shape=[ebnos_db.shape[0]])
for j in range(epochs):
    for i in range(ebnos_db.shape[0]):
        ebno_db = ebnos_db[i]
        bers_no_training[i] += sess.run(ber, feed_dict={
            batch_size: samples,
            noise_var: ebnodb2noisevar(ebno_db, coderate)
        })
bers_no_training /= epochs
plt.figure(figsize=(10, 5))
plt.semilogy(ebnos_db, bers_no_training, '-x')
plt.grid(which='both');
plt.xlabel('EbNo [dB]');
plt.ylabel('BER');
plt.ylim([1e-6, 1e-0]);
plt.xlim([ebnos_db[0], ebnos_db[-1]]);
plt.legend(['No Training - Conventional BP Decoder with %d iterations' % (num_bp_iterations)]);
"""
Explanation: Evaluate belief propagation decoding (without training)
As all of the weights are initialized with 1, the initial performance of the neural network equals the performance of the conventional BP decoder.
For the Monte-Carlo BER simulations, batches of size "samples" are transmitted and decoded one-shot. This is repeated "epochs" times, i.e., in total "samples*epochs" codewords are simulated per SNR point.
See http://webdemo.inue.uni-stuttgart.de/webdemos/08_research/GPU_LDPC_Decoder/index.php for reference curves.
End of explanation
"""
w = sess.run(W_total, feed_dict={})  # get all weights from the graph and use flatten for the histogram function
Wuntrained = w[1].flatten()  # skip the first variable (as this is the parity-check matrix)
for i in range(2, len(w)):  # start at 2; w[1] already seeds Wuntrained
    Wuntrained = np.concatenate((w[i].flatten(), Wuntrained), axis=0)
plt.figure(figsize=(10, 5));
plt.hist(Wuntrained, bins=40, range=(-0.2, 1.4))
plt.grid(which='both');
plt.xlabel('Weight');
plt.ylabel('Occurrence');
plt.xlim([0.5, 1.2]);
plt.legend(['No Training - Weights before Training']);
"""
Explanation: Check whether all weights are properly initialized
End of explanation
"""
train_ebno_db = 4
for it in range(2000):
    feed_dict = {
        batch_size: 100,
        noise_var: ebnodb2noisevar(train_ebno_db, coderate)
    }
    sess.run(step, feed_dict=feed_dict)
    # provide an intermediate BER metric (not used for training!)
    if (it % 100 == 0):
        feed_dict = {
            batch_size: 10000,
            noise_var: ebnodb2noisevar(train_ebno_db, coderate)
        }
        l, b = sess.run([loss, ber], feed_dict=feed_dict)
        print(it, "Loss: {}, BER: {:.2E}".format(l, b))
"""
Explanation: All weights are properly set to 1.
Train the neural network
We now train the neural belief propagation decoder in the same way as we trained previous neural networks, i.e., by using stochastic gradient descent. We print intermediate BER values to track the progress. However, this information is not used for training at any time.
Plot the Histogram of the Trained Weights End of explanation """
prk327/CoAca
2_Indexing_and_Selecting_Data.ipynb
gpl-3.0
import numpy as np import pandas as pd market_df = pd.read_csv("../global_sales_data/market_fact.csv") market_df.head() """ Explanation: Indexing and Selecting Data In this section, you will: Select rows from a dataframe Select columns from a dataframe Select subsets of dataframes Selecting Rows Selecting rows in dataframes is similar to the indexing you have seen in numpy arrays. The syntax df[start_index:end_index] will subset rows according to the start and end indices. End of explanation """ # Selecting the rows from indices 2 to 6 market_df[2:7] # Selecting alternate rows starting from index = 5 market_df[5::2].head() """ Explanation: Notice that, by default, pandas assigns integer labels to the rows, starting at 0. End of explanation """ # Using df['column'] sales = market_df['Sales'] sales.head() # Using df.column sales = market_df.Sales sales.head() # Notice that in both these cases, the resultant is a Series object print(type(market_df['Sales'])) print(type(market_df.Sales)) """ Explanation: Selecting Columns There are two simple ways to select a single column from a dataframe - df['column_name'] and df.column_name. End of explanation """ # Select Cust_id, Sales and Profit: market_df[['Cust_id', 'Sales', 'Profit']].head() """ Explanation: Selecting Multiple Columns You can select multiple columns by passing the list of column names inside the []: df[['column_1', 'column_2', 'column_n']]. For instance, to select only the columns Cust_id, Sales and Profit: End of explanation """ type(market_df[['Cust_id', 'Sales', 'Profit']]) # Similarly, if you select one column using double square brackets, # you'll get a df, not Series type(market_df[['Sales']]) """ Explanation: Notice that in this case, the output is itself a dataframe. 
End of explanation """ # Trying to select the third row: Throws an error market_df[2] """ Explanation: Selecting Subsets of Dataframes Until now, you have seen selecting rows and columns using the following ways: * Selecting rows: df[start:stop] * Selecting columns: df['column'] or df.column or df[['col_x', 'col_y']] * df['column'] or df.column return a series * df[['col_x', 'col_y']] returns a dataframe But pandas does not prefer this way of indexing dataframes, since it has some ambiguity. For instance, let's try and select the third row of the dataframe. End of explanation """ # Changing the row indices to Ord_id market_df.set_index('Ord_id').head() """ Explanation: Pandas throws an error because it is confused whether the [2] is an index or a label. Recall from the previous section that you can change the row indices. End of explanation """ market_df.set_index('Order_Quantity').head() """ Explanation: Now imagine you had a column with entries [2, 4, 7, 8 ...], and you set that as the index. What should df[2] return? The second row, or the row with the index value = 2? Taking an example from this dataset, say you decide to assign the Order_Quantity column as the index. End of explanation """
mohanprasath/Course-Work
data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part07-e01_sequence_analysis/src/project_notebook_sequence_analysis.ipynb
gpl-3.0
from collections import defaultdict from itertools import product import numpy as np from numpy.random import choice """ Explanation: Sequence Analysis with Python Contact: Veli Mäkinen veli.makinen@helsinki.fi The following assignments introduce applications of hashing with dict() primitive of Python. While doing so, a rudimentary introduction to biological sequences is given. This framework is then enhanced with probabilities, leading to routines to generate random sequences under some constraints, including a general concept of Markov-chains. All these components illustrate the usage of dict(), but at the same time introduce some other computational routines to efficiently deal with probabilities. The function collections.defaultdict can be useful. Below are some "suggested" imports. Feel free to use and modify these, or not. Generally it's good practice to keep most or all imports in one place. Typically very close to the start of notebooks. End of explanation """ def dna_to_rna(s): return "".join("U" for _ in s) if __name__ == '__main__': print(dna_to_rna("AACGTGATTTC")) """ Explanation: The automated TMC tests do not test cell outputs. These are intended to be evaluated in the peer reviews. So it is still be a good idea to make the outputs as clear and informative as possible. To keep TMC tests running as well as possible it is recommended to keep global variable assignments in the notebook to a minimum to avoid potential name clashes and confusion. Additionally you should keep all actual code exection in main guards to keep the test running smoothly. If you run check_sequence.py in the part07-e01_sequence_analysis folder, the script should finish very quickly and optimally produce no output. If you download data from the internet during execution (codon usage table), the parts where downloading is done should not work if you decide to submit to the tmc server. Local tests should work fine. 
DNA and RNA A DNA molecule consist, in principle, of a chain of smaller molecules. These smaller molecules have some common basic components (bases) that repeat. For our purposes it is sufficient to know that these bases are nucleotides adenine, cytosine, guanine, and thymine with abbreviations A, C, G, and T. Given a DNA sequence e.g. ACGATGAGGCTCAT, one can reverse engineer (with negligible loss of information) the corresponding DNA molecule. Parts of a DNA molecule can transcribe into an RNA molecule. In this process, thymine gets replaced by uracil (U). Write a function dna_to_rna to convert a given DNA sequence $s$ into an RNA sequence. For the sake of exercise, use dict() to store the symbol to symbol encoding rules. Create a program to test your function. End of explanation """ def get_dict(): return {} if __name__ == '__main__': codon_to_aa = get_dict() print(codon_to_aa) """ Explanation: Idea of solution fill in Discussion fill in Proteins Like DNA and RNA, protein molecule can be interpreted as a chain of smaller molecules, where the bases are now amino acids. RNA molecule may translate into a protein molecule, but instead of base by base, three bases of RNA correspond to one base of protein. That is, RNA sequence is read triplet (called codon) at a time. Consider the codon to amino acid conversion table in http://www.kazusa.or.jp/codon/cgi-bin/showcodon.cgi?species=9606&aa=1&style=N. Write a function get_dict to read the table into a dict(), such that for each RNA sequence of length 3, say $\texttt{AGU}$, the hash table stores the conversion rule to the corresponding amino acid. You may store the html page to your local src directory, and parse that file. 
End of explanation """ def get_dict_list(): return {} if __name__ == '__main__': aa_to_codons = get_dict_list() print(aa_to_codons) """ Explanation: Idea of solution fill in Discussion fill in Use the same conversion table as above, but now write function get_dict_list to read the table into a dict(), such that for each amino acid the hash table stores the list of codons encoding it. End of explanation """ def rna_to_prot(s): return "" def dna_to_prot(s): return rna_to_prot(dna_to_rna(s)) if __name__ == '__main__': print(dna_to_prot("ATGATATCATCGACGATGTAG")) """ Explanation: Idea of solution fill in Discussion fill in With the conversion tables at hand, the following should be trivial to solve. Fill in function rna_to_prot in the stub solution to convert a given DNA sequence $s$ into a protein sequence. You may use the dictionaries from exercises 2 and 3. You can test your program with ATGATATCATCGACGATGTAG. End of explanation """ def get_probabability_dict(): return {} if __name__ == '__main__': codon_to_prob = get_probabability_dict() items = sorted(codon_to_prob.items(), key=lambda x: x[0]) for i in range(1 + len(items)//6): print("\t".join( f"{k}: {v:.6f}" for k, v in items[i*6:6+i*6] )) """ Explanation: Idea of solution fill in Discussion fill in You may notice that there are $4^3=64$ different codons, but only 20 amino acids. That is, some triplets encode the same amino acid. Reverse translation It has been observed that among the codons coding the same amino acid, some are more frequent than others. These frequencies can be converted to probabilities. E.g. consider codons AUU, AUC, and AUA that code for amino acid isoleucine. If they are observed, say, 36, 47, 17 times, respectively, to code isoleucine in a dataset, the probability that a random such event is AUU $\to$ isoleucine is 36/100. This phenomenon is called codon adaptation, and for our purposes it works as a good introduction to generation of random sequences under constraints. 
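The counts-to-probabilities step can be sketched with the running isoleucine example (36, 47, and 17 observations):

```python
# Frequencies of the isoleucine codons from the running example.
counts = {'AUU': 36, 'AUC': 47, 'AUA': 17}
total = sum(counts.values())
probabilities = {codon: c / total for codon, c in counts.items()}
print(probabilities)  # {'AUU': 0.36, 'AUC': 0.47, 'AUA': 0.17}
```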
Consider the codon adaptation frequencies in http://www.kazusa.or.jp/codon/cgi-bin/showcodon.cgi?species=9606&aa=1&style=N and read them into a dict(), such that for each RNA sequence of length 3, say AGU, the hash table stores the probability of that codon among codons encoding the same amino acid. Put your solution in the get_probabability_dict function. End of explanation """ class ProteinToMaxRNA: def __init__(self): pass def convert(self, s): return "" if __name__ == '__main__': protein_to_rna = ProteinToMaxRNA() print(protein_to_rna.convert("LTPIQNRA")) """ Explanation: Idea of solution fill in Discussion fill in Now you should have everything in place to easily solve the following. Write a class ProteinToMaxRNA with a convert method which converts a protein sequence into the most likely RNA sequence to be the source of this protein. Run your program with LTPIQNRA. End of explanation """ def random_event(dist): """ Takes as input a dictionary from events to their probabilities. Return a random event sampled according to the given distribution. The probabilities must sum to 1.0 """ return next(iter(dist)) if __name__ == '__main__': distribution = dict(zip("ACGT", [0.10, 0.35, 0.15, 0.40])) print(", ".join(random_event(distribution) for _ in range(29))) """ Explanation: Idea of solution fill in Discussion fill in Now we are almost ready to produce random RNA sequences that code a given protein sequence. For this, we need a subroutine to sample from a probability distribution. Consider our earlier example of probabilities 36/100, 47/100, and 17/100 for AUU, AUC, and AUA, respectively. Let us assume we have a random number generator random() that returns a random number from interval $[0,1)$. We may then partition the unit interval according to cumulative probabilities to $[0,36/100), [36/100,83/100), [83/100,1)$, respectively. Depending which interval the number random() hits, we select the codon accordingly. 
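The interval-partitioning idea described above can be sketched as follows; sample_from and its rnd test hook are hypothetical names used only for illustration:

```python
import bisect
import random

def sample_from(dist, rnd=None):
    # Partition [0, 1) by cumulative probabilities and see where rnd lands.
    events = list(dist)
    cumulative = []
    acc = 0.0
    for e in events:
        acc += dist[e]
        cumulative.append(acc)
    r = random.random() if rnd is None else rnd
    return events[bisect.bisect_right(cumulative, r)]

dist = {'AUU': 0.36, 'AUC': 0.47, 'AUA': 0.17}
print(sample_from(dist, rnd=0.5))  # 0.5 falls in [0.36, 0.83) -> AUC
```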
Write a function random_event that chooses a random event, given a probability distribution (set of events whose probabilities sum to 1). You can use function random.uniform to produce values uniformly at random from the range $[0,1)$. The distribution should be given to your function as a dictionary from events to their probabilities. End of explanation """ class ProteinToRandomRNA(object): def __init__(self): pass def convert(self, s): return "" if __name__ == '__main__': protein_to_random_codons = ProteinToRandomRNA() print(protein_to_random_codons.convert("LTPIQNRA")) """ Explanation: Idea of solution fill in Discussion fill in With this general routine, the following should be easy to solve. Write a class ProteinToRandomRNA to produce a random RNA sequence encoding the input protein sequence according to the input codon adaptation probabilities. The actual conversion is done through the convert method. Run your program with LTPIQNRA. End of explanation """ def sliding_window(s, k): """ This function returns a generator that can be iterated over all starting position of a k-window in the sequence. For each starting position the generator returns the nucleotide frequencies in the window as a dictionary. """ for _ in s: yield {} if __name__ == '__main__': s = "TCCCGACGGCCTTGCC" for d in sliding_window(s, 4): print(d) """ Explanation: Idea of solution fill in Discussion fill in Generating DNA sequences with higher-order Markov chains We will now reuse the machinery derived above in a related context. We go back to DNA sequences, and consider some easy statistics that can be used to characterize the sequences. First, just the frequencies of bases $\texttt{A}$, $\texttt{C}$, $\texttt{G}$, $\texttt{T}$ may reveal the species from which the input DNA originates; each species has a different base composition that has been formed during evolution. 
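Counting the base composition of a whole sequence, as described above, is a one-liner with collections.Counter; base_composition is an illustrative helper, not part of the exercise API:

```python
from collections import Counter

def base_composition(t):
    counts = Counter(t)
    n = len(t)
    return {base: counts[base] / n for base in 'ACGT'}

print(base_composition('ATGATATCATCGACGATGTAG'))
```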
More interestingly, the areas where DNA to RNA transcription takes place (coding region) have an excess of $\texttt{C}$ and $\texttt{G}$ over $\texttt{A}$ and $\texttt{T}$. To detect such areas a common routine is to just use a sliding window of fixed size, say $k$, and compute for each window position $T[i..i+k-1]$ the base frequencies, where $T[1..n]$ is the input DNA sequence. When sliding the window from $T[i..i+k-1]$ to $T[i+1..i+k]$, frequency $f(T[i])$ gets decreased by one and $f(T[i+k])$ gets increased by one.
Write a generator sliding_window to compute sliding window base frequencies so that each move of the window takes constant time. We saw in the beginning of the course one way to create generators using a generator expression. Here we use a different way. For the function sliding_window to be a generator, it must have at least one yield expression, see https://docs.python.org/3/reference/expressions.html#yieldexpr. Here is an example of a generator function that works similarly to the built-in range generator:
Python
def range(a, b=None, c=1):
    current = 0 if b == None else a
    end = a if b == None else b
    while current < end:
        yield current
        current += c
A yield expression can be used to return a value and temporarily return from the function.
End of explanation
"""
def sliding_window(s, k):
    """
    This function returns a generator that can be iterated over all starting positions
    of a k-window in the sequence. For each starting position the generator returns
    the nucleotide frequencies in the window as a dictionary.
    """
    for _ in s:
        yield {}

if __name__ == '__main__':
    s = "TCCCGACGGCCTTGCC"
    for d in sliding_window(s, 4):
        print(d)
"""
Explanation: Idea of solution fill in
Discussion fill in
Our models so far have been so-called zero-order models, as each event has been independent of other events. With sequences, the dependencies of events are naturally encoded by their contexts. Considering that a sequence is produced from left-to-right, a first-order context for $T[i]$ is $T[i-1]$, that is, the immediately preceding symbol.
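The constant-time sliding-window update described earlier in this section can be sketched as follows (window_frequencies is an illustrative name, not the required generator):

```python
from collections import Counter

def window_frequencies(t, k):
    # Frequencies for the first window, then constant-time updates while sliding.
    freqs = Counter(t[:k])
    result = [dict(freqs)]
    for i in range(len(t) - k):
        freqs[t[i]] -= 1        # symbol leaving the window
        freqs[t[i + k]] += 1    # symbol entering the window
        result.append({b: c for b, c in freqs.items() if c > 0})
    return result

print(window_frequencies('TCCCGACGGCCTTGCC', 4)[:3])
```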
First-order Markov chain is a sequence produced by generating $c=T[i]$ with the probability of event of seeing symbol $c$ after previously generated symbol $a=T[i-1]$. The first symbol of the chain is sampled according to the zero-order model. The first-order model can naturally be extended to contexts of length $k$, with $T[i]$ depending on $T[i-k..i-1]$. Then the first $k$ symbols of the chain are sampled according to the zero-order model. The following assignments develop the routines to work with the higher-order Markov chains. In what follows, a $k$-mer is a substring $T[i..i+k-1]$ of the sequence at an arbitrary position. Write function context_list that given an input DNA sequence $T$ associates to each $k$-mer $W$ the concatenation of all symbols $c$ that appear after context $W$ in $T$, that is, $T[i..i+k]=Wc$. For example, <span style="color:red; font:courier;">GA</span> is associated to <span style="color:blue; font: courier;">TCT</span> in $T$=<span style="font: courier;">AT<span style="color:red;">GA</span><span style="color:blue;">T</span>ATCATC<span style="color:red;">GA</span><span style="color:blue;">C</span><span style="color:red;">GA</span><span style="color:blue;">T</span>GTAG</span>, when $k=2$. End of explanation """ def context_probabilities(s, k): return {} if __name__ == '__main__': pass """ Explanation: Idea of solution fill in Discussion fill in With the above solution, write function context_probabilities to count the frequencies of symbols in each context and convert these frequencies into probabilities. Run context_probabilities with $T=$ ATGATATCATCGACGATGTAG and $k$ values 0 and 2. 
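One possible way to collect the context information described above, sketched with collections.defaultdict (context_list_example is a hypothetical name, to keep it separate from the exercise stub):

```python
from collections import defaultdict

def context_list_example(t, k):
    contexts = defaultdict(str)
    # For each position, append the symbol that follows the k-mer starting there.
    for i in range(len(t) - k):
        contexts[t[i:i + k]] += t[i + k]
    return dict(contexts)

d = context_list_example('ATGATATCATCGACGATGTAG', 2)
print(d['GA'])  # TCT, as in the example of the text
```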
End of explanation """ class MarkovChain: def __init__(self, zeroth, kth, k=2): self.k = k self.zeroth = zeroth self.kth = kth def generate(self, n, seed=None): return "$" * n if __name__ == '__main__': zeroth = {'A': 0.2, 'C': 0.19, 'T': 0.31, 'G': 0.3} kth = {'GT': {'A': 1.0, 'C': 0.0, 'T': 0.0, 'G': 0.0}, 'CA': {'A': 0.0, 'C': 0.0, 'T': 1.0, 'G': 0.0}, 'TC': {'A': 0.5, 'C': 0.0, 'T': 0.0, 'G': 0.5}, 'GA': {'A': 0.0, 'C': 0.3333333333333333, 'T': 0.6666666666666666, 'G': 0.0}, 'TG': {'A': 0.5, 'C': 0.0, 'T': 0.5, 'G': 0.0}, 'AT': {'A': 0.2, 'C': 0.4, 'T': 0.0, 'G': 0.4}, 'TA': {'A': 0.0, 'C': 0.0, 'T': 0.5, 'G': 0.5}, 'AC': {'A': 0.0, 'C': 0.0, 'T': 0.0, 'G': 1.0}, 'CG': {'A': 1.0, 'C': 0.0, 'T': 0.0, 'G': 0.0}} n = 10 seed = 0 mc = MarkovChain(zeroth, kth) print(mc.generate(n, seed)) """ Explanation: Idea of solution fill in Discussion fill in With the above solution and the function random_event from the earlier exercise, write class MarkovChain. Its generate method should generate a random DNA sequence following the original $k$-th order Markov chain probabilities. End of explanation """ def context_pseudo_probabilities(s, k): return {"": ""} if __name__ == '__main__': k = 2 s = "ATGATATCATCGACGATGTAG" kth = context_pseudo_probabilities(s, k) zeroth = context_pseudo_probabilities(s, 0)[""] print(f"zeroth: {zeroth}") print("\n".join(f"{k}: {dict(v)}" for k, v in kth.items())) print("\n", MarkovChain(zeroth, kth, k).generate(20)) """ Explanation: Idea of solution fill in Discussion fill in If you have survived so far without problems, please run your program a few more times with different inputs. At some point you should get a lookup error in your hash-table! The reason for this is not your code, but the way we defined the model: Some $k$-mers may not be among the training data (input sequence $T$), but such can be generated as the first $k$-mer that is generated using the zero-order model. 
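A minimal illustration of the failure mode described above, with a hypothetical trained context table:

```python
# The failure mode: a context sampled by the zero-order model that never
# occurred in the training sequence has no entry in the context dictionary.
kth = {'AT': {'A': 0.5, 'G': 0.5}}  # hypothetical trained contexts

context = 'CC'  # never seen in training
try:
    dist = kth[context]
except KeyError:
    dist = None
print(dist)  # None -> this is the lookup error the text describes
```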
A general approach to fixing such issues with incomplete training data is to use pseudo counts. That is, all imaginable events are initialized to frequency count 1. Write a new solution context_pseudo_probabilities based on the solution to problem 11. But this time use pseudo counts in order to obtain a $k$-th order Markov chain that can assign a probability for any DNA sequence. You may use the standard library function itertools.product to iterate over all $k$-mer of given length (product("ACGT", repeat=k)). End of explanation """ class MarkovProb: def __init__(self, k, zeroth, kth): self.k = k self.zeroth = zeroth self.kth = kth def probability(self, s): return np.nan if __name__ == '__main__': k = 2 kth = context_pseudo_probabilities("ATGATATCATCGACGATGTAG", k) zeroth = context_pseudo_probabilities("ATGATATCATCGACGATGTAG", 0)[""] mc = MarkovProb(2, zeroth, kth) s="ATGATATCATCGACGATGTAG" print(f"Probability of sequence {s} is {mc.probability(s)}") """ Explanation: Idea of solution fill in Discussion fill in Write class MarkovProb that given the $k$-th order Markov chain developed above to the constructor, its method probability computes the probability of a given input DNA sequence. End of explanation """ class MarkovLog(object): def __init__(self, k, zeroth, kth): pass def log_probability(self, s): return np.nan if __name__ == '__main__': k = 2 kth = context_pseudo_probabilities("ATGATATCATCGACGATGTAG", k) zeroth = context_pseudo_probabilities("ATGATATCATCGACGATGTAG", 0)[""] mc = MarkovLog(2, zeroth, kth) s="ATGATATCATCGACGATGTAG" print(f"Log probability of sequence {s} is {mc.log_probability(s)}") """ Explanation: Idea of solution fill in Discussion fill in With the last assignment you might end up in trouble with precision, as multiplying many small probabilities gives a really small number in the end. There is an easy fix by using so-called log-transform. Consider computation of $P=s_1 s_2 \cdots s_n$, where $0\leq s_i\leq 1$ for each $i$. 
Taking the logarithm in base 2 of both sides gives $\log_2 P = \log_2 (s_1 s_2 \cdots s_n) = \log_2 s_1 + \log_2 s_2 + \cdots + \log_2 s_n = \sum_{i=1}^n \log_2 s_i$, by repeated application of the property that the logarithm of a product of two numbers is the sum of the logarithms of the two numbers taken separately. The result is abbreviated as log-probability. Write a class MarkovLog whose constructor takes the $k$-th order Markov chain developed above, and whose log_probability method computes the log-probability of a given input DNA sequence. Run your program with $T=$ ATGATATCATCGACGATGTAG and $k=2$.
End of explanation
"""

def better_context_probabilities(s, k):
    return {"": ""}

if __name__ == '__main__':
    k = 2
    s = "ATGATATCATCGACGATGTAG"
    d = better_context_probabilities(s, k)
    print("\n".join(f"{k}: {v}" for k, v in d.items()))

"""
Explanation: Idea of solution
fill in
Discussion
fill in
Finally, if you try to use the code so far for very large inputs, you might observe that the concatenation of symbols following a context occupies a considerable amount of space. This is unnecessary, as we only need the frequencies. Optimize the space requirement of your code from exercise 13 for the $k$-th order Markov chain by replacing the concatenations with direct computations of the frequencies. Implement this as the better_context_probabilities function.
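A hedged sketch of this frequency-counting idea (illustrative only: the name sketch_context_frequencies is invented here, and pseudo counts are omitted for brevity, so this is not the graded solution):

```python
from collections import defaultdict

def sketch_context_frequencies(s, k):
    # For each k-mer context, count how often each symbol follows it.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(s) - k):
        counts[s[i:i + k]][s[i + k]] += 1
    # Normalize each context's counts into probabilities.
    return {context: {sym: n / sum(d.values()) for sym, n in d.items()}
            for context, d in counts.items()}

probs = sketch_context_frequencies("ATGATATCATCGACGATGTAG", 2)
print(probs["AT"])
```

Only the per-context counts are stored, so the space usage is bounded by the number of distinct contexts times the alphabet size, not by the input length.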
End of explanation
"""

class SimpleMarkovChain(object):
    def __init__(self, s, k):
        pass

    def generate(self, n, seed=None):
        return "🐍"*n

if __name__ == '__main__':
    k = 2
    s = "ATGATATCATCGACGATGTAG"
    n = 10
    seed = 7
    mc = SimpleMarkovChain(s, k)
    print(mc.generate(n, seed))

"""
Explanation: Idea of solution
fill in
Discussion
fill in
While the earlier approach of explicit concatenation of symbols following a context suffered from inefficient use of space, it does have the benefit of suggesting another, much simpler, strategy to sample from the distribution: observe that an element taken uniformly at random from the concatenation is sampled with exactly the correct probability. Revisit solution 12 and modify it to sample directly from the concatenation of symbols following a context. The function np.random.choice may be convenient here. Implement the modified version as the new SimpleMarkovChain class.
End of explanation
"""

def kmer_index(s, k):
    return {}

if __name__ == '__main__':
    k=2
    s = "ATGATATCATCGACGATGTAG"
    print("Using string:")
    print(s)
    print("".join([str(i%10) for i in range(len(s))]))
    print(f"\n{k}-mer index is:")
    d=kmer_index(s, k)
    print(dict(d))

"""
Explanation: Idea of solution
fill in
Discussion
fill in
$k$-mer index
Our $k$-th order Markov chain can now be modified into a handy index structure called a $k$-mer index. This index structure associates to each $k$-mer its list of occurrence positions in DNA sequence $T$. Given a query $k$-mer $W$, one can thus easily list all positions $i$ with $T[i..i+k-1]=W$. Implement function kmer_index inspired by your earlier code for the $k$-th order Markov chain. Test your program with ATGATATCATCGACGATGTAG and $k=2$.
End of explanation
"""

from itertools import product      # used below; may already be imported earlier in the notebook
from numpy.random import choice    # used below; may already be imported earlier in the notebook

def codon_probabilities(rna):
    """
    Given an RNA sequence, simply calculates the probability of
    all 3-mers empirically based on the sequence
    """
    return {"".join(codon): 0 for codon in product("ACGU", repeat=3)}

def kullback_leibler(p, q):
    """
    Computes Kullback-Leibler divergence between two distributions.
    Both p and q must be dictionaries from events to probabilities.
    The divergence is defined only when q[event] == 0 implies p[event] == 0.
    """
    return np.nan

if __name__ == '__main__':
    aas = list("*ACDEFGHIKLMNPQRSTVWY")  # List of amino acids
    n = 10000

    # generate a random protein and some associated rna
    protein = "".join(choice(aas, n))
    pass  # Maybe check that converting back to protein results in the same sequence

    pass  # Calculate codon probabilities of the rna sequence
    cp_predicted = codon_probabilities("<rna sequence>")  # placeholder call

    # Calculate codon probabilities based on the codon usage table
    cp_orig = {"".join(codon): 0 for codon in product("ACGU", repeat=3)}  # placeholder dict

    # Create a completely random RNA sequence and get the codon probabilities
    pass
    cp_uniform = codon_probabilities("<random rna sequence>")  # placeholder call

    print("d(original || predicted) =", kullback_leibler(cp_orig, cp_predicted))
    print("d(predicted || original) =", kullback_leibler(cp_predicted, cp_orig))
    print()
    print("d(original || uniform) =", kullback_leibler(cp_orig, cp_uniform))
    print("d(uniform || original) =", kullback_leibler(cp_uniform, cp_orig))
    print()
    print("d(predicted || uniform) =", kullback_leibler(cp_predicted, cp_uniform))
    print("d(uniform || predicted) =", kullback_leibler(cp_uniform, cp_predicted))

"""
Explanation: Idea of solution
fill in
Discussion
fill in
Comparison of probability distributions
Now that we know how to learn probability distributions from data, we might want to compare two such distributions, for example, to test if our programs work as intended.
Let $P=\{p_1,p_2,\ldots, p_n\}$ and $Q=\{q_1,q_2,\ldots, q_n\}$ be two probability distributions for the same set of $n$ events. This means $\sum_{i=1}^n p_i=\sum_{i=1}^n q_i=1$, $0\leq p_j \leq 1$, and $0\leq q_j \leq 1$ for each event $j$.
Kullback-Leibler divergence is a measure $d()$ of the relative entropy of $P$ with respect to $Q$, defined as $d(P||Q)=\sum_{i=1}^n p_i \log\frac{p_i}{q_i}$. This measure is always non-negative, and 0 only when $P=Q$. It can be interpreted as the cost (in extra bits) of encoding $P$ using a code optimized for $Q$. Note that this measure is not symmetric.
Write function kullback_leibler to compute $d(P||Q)$. Test your solution by generating a random RNA sequence encoding the input protein sequence according to the input codon adaptation probabilities. Then you should learn the codon adaptation probabilities from the RNA sequence you generated. Then try the same with uniformly random RNA sequences (which don't have to encode any specific protein sequence). Compute the relative entropies between the three distributions (original, predicted, uniform) and you should observe a clear difference. Because $d(P||Q)$ is not symmetric, you can either print both $d(P||Q)$ and $d(Q||P)$, or their average.
This problem may be fairly tricky. Only the kullback_leibler function is automatically tested. The codon_probabilities function is probably a useful helper. The main guarded section can be completed by filling out the pass sections using tooling from previous parts and fixing the placeholder lines.
End of explanation
"""

def get_stationary_distributions(transition):
    """
    The function gets a transition matrix of a degree-one Markov chain as a parameter.
    It returns a list of stationary distributions, in vector form, for that chain.
""" return np.random.rand(2, 4) - 0.5 if __name__ == "__main__": transition=np.array([[0.3, 0, 0.7, 0], [0, 0.4, 0, 0.6], [0.35, 0, 0.65, 0], [0, 0.2, 0, 0.8]]) print("\n".join( ", ".join( f"{pv:+.3f}" for pv in p) for p in get_stationary_distributions(transition))) """ Explanation: Idea of solution fill in Discussion fill in Stationary and equilibrium distributions (extra) Let us consider a Markov chain of order one on the set of nucleotides. Its transition probabilities can be expressed as a $4 \times 4$ matrix $P=(p_{ij})$, where the element $p_{ij}$ gives the probability of the $j$th nucleotide on the condition the previous nucleotide was the $i$th. An example of a transition matrix is \begin{array}{l|rrrr} & A & C & G & T \ \hline A & 0.30 & 0.0 & 0.70 & 0.0 \ C & 0.00 & 0.4 & 0.00 & 0.6 \ G & 0.35 & 0.0 & 0.65 & 0.0 \ T & 0.00 & 0.2 & 0.00 & 0.8 \ \end{array}. A distribution $\pi=(\pi_1,\pi_2,\pi_3,\pi_4)$ is called stationary, if $\pi = \pi P$ (the product here is matrix product). Write function get_stationary_distributions that gets a transition matrix as parameter, and returns the list of stationary distributions. You can do this with NumPy by first taking transposition of both sides of the above equation to get equation $\pi^T = P^T \pi^T$. Using numpy.linalg.eig take all eigenvectors related to eigenvalue 1.0. By normalizing these vectors to sum up to one get the stationary distributions of the original transition matrix. In the main function print the stationary distributions of the above transition matrix. End of explanation """ def kl_divergences(initial, transition): """ Calculates the the Kullback-Leibler divergences between empirical distributions generated using a markov model seeded with an initial distributin and a transition matrix, and the initial distribution. Sequences of length [1, 10, 100, 1000, 10000] are generated. 
""" return zip([1, 10, 100, 1000, 10000], np.random.rand(5)) if __name__ == "__main__": transition=np.array([[0.3, 0, 0.7, 0], [0, 0.4, 0, 0.6], [0.35, 0, 0.65, 0], [0, 0.2, 0, 0.8]]) print("Transition probabilities are:") print(transition) stationary_distributions = get_stationary_distributions(transition) print("Stationary distributions:") print(np.stack(stationary_distributions)) initial = stationary_distributions[1] print("Using [{}] as initial distribution\n".format(", ".join(f"{v:.2f}" for v in initial))) results = kl_divergences(initial, transition) for prefix_length, divergence in results: # iterate on prefix lengths in order (1, 10, 100...) print("KL divergence of stationary distribution prefix " \ "of length {:5d} is {:.8f}".format(prefix_length, divergence)) """ Explanation: Idea of solution Discussion Implement the kl_divergence function below so that the main guarded code runs properly. Using your modified Markov chain generator generate a nucleotide sequence $s$ of length $10\;000$. Choose prefixes of $s$ of lengths $1, 10, 100, 1000$, and $10\;000$. For each of these prefixes find out their nucleotide distribution (of order 0) using your earlier tool. Use 1 as the pseudo count. Then, for each prefix, compute the KL divergence between the initial distribution and the normalized nucleotide distribution. 
End of explanation
"""

def main(transition, equilibrium_distribution):
    vals = list(zip(np.random.rand(10), np.random.rand(10, 4) - 0.5))
    return zip(np.random.rand(2, 4) - 0.5, [vals[:5], vals[5:]])

if __name__ == "__main__":
    transition = np.array([[0.3, 0.1, 0.5, 0.1],
                           [0.2, 0.3, 0.15, 0.35],
                           [0.25, 0.15, 0.2, 0.4],
                           [0.35, 0.2, 0.4, 0.05]])
    print("Transition probabilities are:", transition, sep="\n")
    stationary_distributions = get_stationary_distributions(transition)
    # Uncomment the below line to check that there actually is only one stationary distribution
    # assert len(stationary_distributions) == 1
    equilibrium_distribution = stationary_distributions[0]
    print("Equilibrium distribution:")
    print(equilibrium_distribution)

    for initial_distribution, results in main(transition, equilibrium_distribution):
        print("\nUsing {} as initial distribution:".format(initial_distribution))
        print("kl-divergence   empirical distribution")
        print("\n".join("{:.11f}   {}".format(di, kl) for di, kl in results))

"""
Explanation: Idea of solution
fill in
Discussion
fill in
Implement the following in the main function.
Find the stationary distribution for the following transition matrix:
\begin{array}{ l | r r r r}
 & A & C & G & T \\
\hline
A & 0.30 & 0.10 & 0.50 & 0.10 \\
C & 0.20 & 0.30 & 0.15 & 0.35 \\
G & 0.25 & 0.15 & 0.20 & 0.40 \\
T & 0.35 & 0.20 & 0.40 & 0.05 \\
\end{array}
Since there is only one stationary distribution, it is called the equilibrium distribution. Randomly choose two nucleotide distributions. You can take these from your sleeve or sample them from the Dirichlet distribution. Then, using each of these distributions as the initial distribution of the Markov chain, repeat the above experiment.
The main function should return tuples, where the first element is the (random) initial distribution and the second element contains the results as a list of tuples in which the first element is the KL divergence and the second element is the empirical nucleotide distribution, for the different prefix lengths.
The state distribution should converge to the equilibrium distribution no matter how we start the Markov chain! That is, the last line of the tables should have a KL divergence very close to $0$ and an empirical distribution very close to the equilibrium distribution.
End of explanation
"""
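The eigenvector recipe described above (transpose the equation, take the eigenvectors for eigenvalue 1, normalize) can be sketched with plain NumPy. This is an illustrative sketch, not the graded solution; it additionally keeps only vectors that normalize into valid probability vectors:

```python
import numpy as np

def stationary_sketch(transition):
    # pi = pi P is equivalent to P^T pi^T = pi^T, i.e. left eigenvectors for eigenvalue 1.
    eigenvalues, eigenvectors = np.linalg.eig(transition.T)
    dists = []
    for i, ev in enumerate(eigenvalues):
        if np.isclose(ev, 1.0):
            v = np.real(eigenvectors[:, i])
            v = v / v.sum()  # normalize to sum to one
            if np.all(v >= -1e-9):  # keep only valid probability vectors
                dists.append(v)
    return dists

P = np.array([[0.3, 0.0, 0.7, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.35, 0.0, 0.65, 0.0],
              [0.0, 0.2, 0.0, 0.8]])
for pi in stationary_sketch(P):
    print(pi)
```

Because this example matrix never mixes {A, G} with {C, T}, the eigenvalue 1 has multiplicity two and the chain has two stationary distributions, one per closed block.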
datamicroscopes/release
examples/normal-inverse-wishart.ipynb
bsd-3-clause
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_context('talk')
sns.set_style('darkgrid')

"""
Explanation: Real Valued Data and the Normal Inverse-Wishart Distribution
One of the most common forms of data is real-valued data.
Let's set up our environment and consider an example dataset.
End of explanation
"""

iris = sns.load_dataset('iris')
iris.head()

"""
Explanation: The Iris Flower Dataset is a standard machine learning data set dating back to the 1930s. It contains measurements from 150 flowers, 50 from each of the following species:
Iris Setosa
Iris Versicolor
Iris Virginica
End of explanation
"""

irisplot = sns.pairplot(iris, hue="species", palette='Set2', diag_kind="kde", size=2.5)
irisplot.fig.suptitle('Scatter Plots and Kernel Density Estimate of Iris Data by Species', fontsize = 18)
irisplot.fig.subplots_adjust(top=.9)

"""
Explanation: In the case of the iris dataset, plotting the data shows that individual species exhibit a typical range of measurements
End of explanation
"""

from microscopes.models import niw as normal_inverse_wishart
mvn5 = normal_inverse_wishart(5)

"""
Explanation: If we wanted to learn these underlying species' measurements, we would use these real-valued measurements and make assumptions about the structure of the data.
In practice, real-valued data is commonly assumed to be distributed normally, or Gaussian.
We could assume that, conditioned on species, the measurement data followed a multivariate normal
$$P(\mathbf{x}|species=s)\sim\mathcal{N}(\mu_{s},\Sigma_{s})$$
The normal inverse-Wishart distribution allows us to learn the underlying parameters of each normal distribution, its mean $\mu_s$ and its covariance $\Sigma_s$. Since the normal inverse-Wishart is the conjugate prior of the multivariate normal, the posterior distribution of a multivariate normal with a normal inverse-Wishart prior also follows a normal inverse-Wishart distribution.
This allows us to infer the distribution over values of $\mu_s$ and $\Sigma_{s}$ when we define our model.
Note that if we have only one real-valued variable, the normal inverse-Wishart distribution is often referred to as the normal inverse-gamma distribution. In this case, we learn the scalar-valued mean $\mu$ and variance $\sigma^2$ for each inferred cluster.
Univariate real data, however, should be modeled with our normal inverse-chi-squared distribution, which is optimized for inferring univariate parameters.
See Murphy 2007 for derivations of our normal likelihood models.
To specify the joint distribution of a multivariate normal inverse-Wishart distribution, we would import our likelihood model.
Note: for a Normal Inverse-Wishart model, you must indicate the number of dimensions of the likelihood.
For example, for a distribution with 5 dimensions, one would call
End of explanation
"""
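The generative story behind the normal inverse-Wishart prior can be sketched in plain NumPy (the hyperparameter values and the sample_niw name are illustrative assumptions, not part of the datamicroscopes API):

```python
import numpy as np

def sample_niw(mu0, lam, Psi, nu, rng):
    # Sigma ~ InvWishart(nu, Psi): draw W ~ Wishart(nu, Psi^{-1}) and invert it.
    L = np.linalg.cholesky(np.linalg.inv(Psi))
    Z = rng.standard_normal((nu, len(mu0))) @ L.T  # nu rows, each ~ N(0, Psi^{-1})
    Sigma = np.linalg.inv(Z.T @ Z)
    # mu | Sigma ~ N(mu0, Sigma / lam)
    mu = rng.multivariate_normal(mu0, Sigma / lam)
    return mu, Sigma

rng = np.random.default_rng(0)
mu, Sigma = sample_niw(mu0=np.zeros(2), lam=1.0, Psi=np.eye(2), nu=4, rng=rng)
x = rng.multivariate_normal(mu, Sigma, size=5)  # draws from one sampled "cluster"
```

Each draw of (mu, Sigma) parameterizes one multivariate normal cluster, which is exactly the role the NIW prior plays for each species above.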
mne-tools/mne-tools.github.io
0.20/_downloads/ae8fb158e1a8fbcc6dff5d3e55a698dc/plot_30_filtering_resampling.ipynb
bsd-3-clause
import os import numpy as np import matplotlib.pyplot as plt import mne sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file) raw.crop(0, 60).load_data() # use just 60 seconds of data, to save memory """ Explanation: Filtering and resampling data This tutorial covers filtering and resampling, and gives examples of how filtering can be used for artifact repair. :depth: 2 We begin as always by importing the necessary Python modules and loading some example data &lt;sample-dataset&gt;. We'll also crop the data to 60 seconds (to save memory on the documentation server): End of explanation """ mag_channels = mne.pick_types(raw.info, meg='mag') raw.plot(duration=60, order=mag_channels, proj=False, n_channels=len(mag_channels), remove_dc=False) """ Explanation: Background on filtering A filter removes or attenuates parts of a signal. Usually, filters act on specific frequency ranges of a signal — for example, suppressing all frequency components above or below a certain cutoff value. There are many ways of designing digital filters; see disc-filtering for a longer discussion of the various approaches to filtering physiological signals in MNE-Python. Repairing artifacts by filtering Artifacts that are restricted to a narrow frequency range can sometimes be repaired by filtering the data. Two examples of frequency-restricted artifacts are slow drifts and power line noise. Here we illustrate how each of these can be repaired by filtering. Slow drifts Low-frequency drifts in raw data can usually be spotted by plotting a fairly long span of data with the :meth:~mne.io.Raw.plot method, though it is helpful to disable channel-wise DC shift correction to make slow drifts more readily visible. 
Here we plot 60 seconds, showing all the magnetometer channels: End of explanation """ for cutoff in (0.1, 0.2): raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None) fig = raw_highpass.plot(duration=60, order=mag_channels, proj=False, n_channels=len(mag_channels), remove_dc=False) fig.subplots_adjust(top=0.9) fig.suptitle('High-pass filtered at {} Hz'.format(cutoff), size='xx-large', weight='bold') """ Explanation: A half-period of this slow drift appears to last around 10 seconds, so a full period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be sure those components are excluded, we want our highpass to be higher than that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5} \mathrm{Hz}$ filters to see which works best: End of explanation """ filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'], l_freq=0.2, h_freq=None) """ Explanation: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts. Notice that the text output summarizes the relevant characteristics of the filter that was created. If you want to visualize the filter, you can pass the same arguments used in the call to :meth:raw.filter() &lt;mne.io.Raw.filter&gt; above to the function :func:mne.filter.create_filter to get the filter parameters, and then pass the filter parameters to :func:mne.viz.plot_filter. :func:~mne.filter.create_filter also requires parameters data (a :class:NumPy array &lt;numpy.ndarray&gt;) and sfreq (the sampling frequency of the data), so we'll extract those from our :class:~mne.io.Raw object: End of explanation """ mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5)) """ Explanation: Notice that the output is the same as when we applied this filter to the data using :meth:raw.filter() &lt;mne.io.Raw.filter&gt;. 
You can now pass the filter parameters (and the sampling frequency) to :func:~mne.viz.plot_filter to plot the filter: End of explanation """ def add_arrows(axes): # add some arrows at 60 Hz and its harmonics for ax in axes: freqs = ax.lines[-1].get_xdata() psds = ax.lines[-1].get_ydata() for freq in (60, 120, 180, 240): idx = np.searchsorted(freqs, freq) # get ymax of a small region around the freq. of interest y = psds[(idx - 4):(idx + 5)].max() ax.arrow(x=freqs[idx], y=y + 18, dx=0, dy=-12, color='red', width=0.1, head_width=3, length_includes_head=True) fig = raw.plot_psd(fmax=250, average=True) add_arrows(fig.axes[:2]) """ Explanation: Power line noise Power line noise is an environmental artifact that manifests as persistent oscillations centered around the AC power line frequency_. Power line artifacts are easiest to see on plots of the spectrum, so we'll use :meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little function that adds arrows to the spectrum plot to highlight the artifacts: End of explanation """ meg_picks = mne.pick_types(raw.info) # meg=True, eeg=False are the defaults freqs = (60, 120, 180, 240) raw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks) for title, data in zip(['Un', 'Notch '], [raw, raw_notch]): fig = data.plot_psd(fmax=250, average=True) fig.subplots_adjust(top=0.85) fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold') add_arrows(fig.axes[:2]) """ Explanation: It should be evident that MEG channels are more susceptible to this kind of interference than EEG that is recorded in the magnetically shielded room. Removing power-line noise can be done with a notch filter, applied directly to the :class:~mne.io.Raw object, specifying an array of frequencies to be attenuated. 
Since the EEG channels are relatively unaffected by the power line noise, we'll also specify a picks argument so that only the magnetometers and gradiometers get filtered: End of explanation """ raw_downsampled = raw.copy().resample(sfreq=200) for data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']): fig = data.plot_psd(average=True) fig.subplots_adjust(top=0.9) fig.suptitle(title) plt.setp(fig.axes, xlim=(0, 300)) """ Explanation: :meth:~mne.io.Raw.notch_filter also has parameters to control the notch width, transition bandwidth and other aspects of the filter. See the docstring for details. Resampling EEG and MEG recordings are notable for their high temporal precision, and are often recorded with sampling rates around 1000 Hz or higher. This is good when precise timing of events is important to the experimental design or analysis plan, but also consumes more memory and computational resources when processing the data. In cases where high-frequency components of the signal are not of interest and precise timing is not needed (e.g., computing EOG or ECG projectors on a long recording), downsampling the signal can be a useful time-saver. In MNE-Python, the resampling methods (:meth:raw.resample() &lt;mne.io.Raw.resample&gt;, :meth:epochs.resample() &lt;mne.Epochs.resample&gt; and :meth:evoked.resample() &lt;mne.Evoked.resample&gt;) apply a low-pass filter to the signal to avoid aliasing, so you don't need to explicitly filter it yourself first. This built-in filtering that happens when using :meth:raw.resample() &lt;mne.io.Raw.resample&gt;, :meth:epochs.resample() &lt;mne.Epochs.resample&gt;, or :meth:evoked.resample() &lt;mne.Evoked.resample&gt; is a brick-wall filter applied in the frequency domain at the Nyquist frequency of the desired new sampling rate. 
This can be clearly seen in the PSD plot, where a dashed vertical line indicates the filter cutoff; the original data had an existing lowpass at around 172 Hz (see raw.info['lowpass']), and the data resampled from 600 Hz to 200 Hz gets automatically lowpass filtered at 100 Hz (the Nyquist frequency_ for a target rate of 200 Hz):
End of explanation
"""

current_sfreq = raw.info['sfreq']
desired_sfreq = 90  # Hz
decim = np.round(current_sfreq / desired_sfreq).astype(int)
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.

raw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)
events = mne.find_events(raw_filtered)
epochs = mne.Epochs(raw_filtered, events, decim=decim)

print('desired sampling frequency was {} Hz; decim factor of {} yielded an '
      'actual sampling frequency of {} Hz.'
      .format(desired_sfreq, decim, epochs.info['sfreq']))

"""
Explanation: Because resampling involves filtering, there are some pitfalls to resampling at different points in the analysis stream:
Performing resampling on :class:~mne.io.Raw data (before epoching) will negatively affect the temporal precision of Event arrays, by causing jitter_ in the event timing. This reduced temporal precision will propagate to subsequent epoching operations.
Performing resampling after epoching can introduce edge artifacts on every epoch, whereas filtering the :class:~mne.io.Raw object will only introduce artifacts at the start and end of the recording (which is often far enough from the first and last epochs to have no effect on the analysis).
The following section suggests best practices to mitigate both of these issues.
Best practices To avoid the reduction in temporal precision of events that comes with resampling a :class:~mne.io.Raw object, and also avoid the edge artifacts that come with filtering an :class:~mne.Epochs or :class:~mne.Evoked object, the best practice is to: low-pass filter the :class:~mne.io.Raw data at or below $\frac{1}{3}$ of the desired sample rate, then decimate the data after epoching, by either passing the decim parameter to the :class:~mne.Epochs constructor, or using the :meth:~mne.Epochs.decimate method after the :class:~mne.Epochs have been created. <div class="alert alert-danger"><h4>Warning</h4><p>The recommendation for setting the low-pass corner frequency at $\frac{1}{3}$ of the desired sample rate is a fairly safe rule of thumb based on the default settings in :meth:`raw.filter() <mne.io.Raw.filter>` (which are different from the filter settings used inside the :meth:`raw.resample() <mne.io.Raw.resample>` method). If you use a customized lowpass filter (specifically, if your transition bandwidth is wider than 0.5× the lowpass cutoff), downsampling to 3× the lowpass cutoff may still not be enough to avoid `aliasing`_, and MNE-Python will not warn you about it (because the :class:`raw.info <mne.Info>` object only keeps track of the lowpass cutoff, not the transition bandwidth). Conversely, if you use a steeper filter, the warning may be too sensitive. If you are unsure, plot the PSD of your filtered data *before decimating* and ensure that there is no content in the frequencies above the `Nyquist frequency`_ of the sample rate you'll end up with *after* decimation.</p></div> Note that this method of manually filtering and decimating is exact only when the original sampling frequency is an integer multiple of the desired new sampling frequency. Since the sampling frequency of our example data is 600.614990234375 Hz, ending up with a specific sampling frequency like (say) 90 Hz will not be possible: End of explanation """
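The aliasing hazard described in the warning above can be demonstrated with plain NumPy (the sampling rates here are hypothetical, not those of the sample dataset):

```python
import numpy as np

sfreq = 1000.0                        # original sampling rate, Hz
t = np.arange(0, 2.0, 1 / sfreq)
signal = np.sin(2 * np.pi * 80 * t)   # an 80 Hz oscillation

decim = 10                            # naive decimation without low-pass filtering
decimated = signal[::decim]
new_sfreq = sfreq / decim             # 100 Hz, so the new Nyquist frequency is 50 Hz

# 80 Hz lies above the new Nyquist frequency, so it aliases down to 100 - 80 = 20 Hz.
spectrum = np.abs(np.fft.rfft(decimated))
freqs = np.fft.rfftfreq(decimated.size, 1 / new_sfreq)
print("spectral peak at", freqs[spectrum.argmax()], "Hz")
```

Low-pass filtering below the target Nyquist frequency before decimating (as the best-practice recipe above does) removes the 80 Hz content and prevents this spurious 20 Hz peak.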
agushman/coursera
src/cours_3/week_4/edit_CookingLDA_PA.ipynb
mit
import json

with open("recipes.json") as f:
    recipes = json.load(f)

print(recipes[0])

"""
Explanation: Programming Assignment: Cooking LDA from recipes
As you already know, topic modeling assumes that word order within a document does not matter for determining its topics; this is the "bag of words" hypothesis. Today we will work with a collection that is somewhat unusual for topic modeling and could be called a "bag of ingredients", because it consists of recipes from different cuisines. Topic models look for words that frequently co-occur in documents and assemble them into topics. We will try to apply this idea to recipes and find culinary "topics". This collection is convenient in that it requires no preprocessing. Moreover, this task illustrates quite clearly how topic models work.
To complete the assignments, besides the libraries commonly used in the course, you will need the json and gensim modules. The former is included in the Anaconda distribution; the latter can be installed with
pip install gensim
Building a model takes some time. On a laptop with an Intel Core i7 processor clocked at 2400 MHz, building one model takes less than 10 minutes.
Loading the data
The collection is given in JSON format: for each recipe we know its id, its cuisine, and the list of ingredients it contains. The data can be loaded with the json module (included in the Anaconda distribution):
End of explanation
"""

from gensim import corpora, models
import numpy as np

"""
Explanation: Building the corpus
End of explanation
"""

texts = [recipe["ingredients"] for recipe in recipes]
dictionary = corpora.Dictionary(texts)  # build the dictionary
corpus = [dictionary.doc2bow(text) for text in texts]  # build the document corpus
print(texts[0])
print(corpus[0])

"""
Explanation: Our collection is small and fits entirely into RAM. Gensim can work with such data and does not require saving it to disk in a special format.
For this, the collection must be represented as a list of lists; each inner list corresponds to a single document and consists of its words. An example collection of two documents:
[["hello", "world"], ["programming", "in", "python"]]
Let's convert our data into this format, and then create the corpus and dictionary objects that the model will work with.
End of explanation
"""

np.random.seed(76543)
# model-building code goes here:
lda_1 = models.LdaModel(corpus, id2word=dictionary, num_topics=40, passes=5)

topics = lda_1.show_topics(num_topics=40, num_words=10, formatted=False)

c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs = 0, 0, 0, 0, 0, 0
for _, top_words in lda_1.print_topics(num_topics=40, num_words=10):
    c_salt += top_words.count(u'salt')
    c_sugar += top_words.count(u'sugar')
    c_water += top_words.count(u'water')
    c_mushrooms += top_words.count(u'mushrooms')
    c_chicken += top_words.count(u'chicken')
    c_eggs += top_words.count(u'eggs')

def save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs):
    with open("cooking_LDA_pa_task1.txt", "w") as fout:
        fout.write(" ".join([str(el) for el in [c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs]]))

print(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)
save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)

"""
Explanation: The dictionary object has a useful attribute, dictionary.token2id, that maps ingredients to their indices.
Training the model
You may need the gensim LDA documentation.
Task 1. Train an LDA model with 40 topics, setting the number of passes over the collection to 5 and leaving the remaining parameters at their default values.
Then call the model's show_topics method, specifying 40 topics and 10 tokens, and save the result (the top ingredients per topic) in a separate variable.
If you pass formatted=True when calling show_topics, the ingredient tops are convenient to print; with formatted=False, the returned list is convenient to process programmatically.
Print the tops, inspect the topics, and then answer the question:
How many times do the ingredients "salt", "sugar", "water", "mushrooms", "chicken", and "eggs" occur among the top-10 tokens of all 40 topics?
Do not count compound ingredients, e.g. "hot water", when answering.
Pass the 6 numbers to the save_answers1 function and upload the generated file to the form.
gensim provides no way to fix the random seed through method parameters, but the library uses numpy to initialize its matrices. Therefore, according to the library's author, the random seed must be fixed with the command written in the next cell.
Always insert the indicated random.seed line right before the line of code that builds a model.
End of explanation
"""

import copy
dictionary2 = copy.deepcopy(dictionary)

"""
Explanation: Filtering the dictionary
The first three ingredients considered above occur among the topic tops much more often than the last three. Yet the presence of chicken, eggs, or mushrooms in a recipe tells us more clearly what we are going to cook than the presence of salt, sugar, or water. Thus, even recipes contain words that occur frequently in the texts without carrying much meaning, and it is undesirable to see them in the topics.
The simplest way to fight such background elements is to filter the dictionary by frequency. Usually a dictionary is filtered from both ends: very rare words are removed (to save memory) as well as very frequent words (to improve topic interpretability). We will remove only the frequent words.
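The document-frequency filtering idea behind gensim's filter_tokens can be sketched with a plain Counter (toy documents here, not the recipe collection):

```python
from collections import Counter

docs = [["salt", "water", "eggs"], ["salt", "flour"], ["salt", "water"]]
# Document frequency: in how many documents each token occurs.
df = Counter(token for doc in docs for token in set(doc))
threshold = 2  # drop tokens occurring in more than `threshold` documents
kept = {tok for tok, n in df.items() if n <= threshold}
filtered = [[tok for tok in doc if tok in kept] for doc in docs]
print(filtered)
```

Here "salt" appears in every document and is dropped, just as the ubiquitous ingredients will be dropped from dictionary2 below.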
End of explanation
"""

frequent_words = list()
for el in dictionary2.dfs:
    if dictionary2.dfs[el] > 4000:
        frequent_words.append(el)
print(frequent_words)

dict_size_before = len(dictionary2.dfs)
dictionary2.filter_tokens(frequent_words)
dict_size_after = len(dictionary2.dfs)

corpus2 = [dictionary2.doc2bow(text) for text in texts]

corpus_size_before = 0
for i in corpus:
    corpus_size_before += len(i)

corpus_size_after = 0
for i in corpus2:
    corpus_size_after += len(i)

def save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after):
    with open("cooking_LDA_pa_task2.txt", "w") as fout:
        fout.write(" ".join([str(el) for el in [dict_size_before, dict_size_after, corpus_size_before, corpus_size_after]]))

print(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)
save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)

"""
Explanation: Task 2. The dictionary2 object has an attribute dfs: a dictionary whose keys are token ids and whose values are the number of times each word occurs in the whole collection. Save to a separate list the ingredients that occur in the collection more than 4000 times. Call the dictionary's filter_tokens method, passing the obtained list of popular ingredients as its first argument. Compute two quantities, dict_size_before and dict_size_after: the dictionary size before and after filtering.
Then, using the new dictionary, create a new document corpus, corpus2, the same way it was done at the beginning of the notebook. Compute two quantities, corpus_size_before and corpus_size_after: the total number of ingredients in the corpus (for each document, compute the number of distinct ingredients in it and sum over all documents) before and after filtering.
Pass dict_size_before, dict_size_after, corpus_size_before, corpus_size_after to the save_answers2 function and upload the generated file to the form.
End of explanation """ np.random.seed(76543) lda_2 = models.LdaModel(corpus2, id2word=dictionary2, num_topics = 40, passes = 5) top_topics_1 = lda_1.top_topics(corpus) top_topics_2 = lda_2.top_topics(corpus2) def topics_mean(all_topics): return np.mean([one_topics[1] for one_topics in all_topics]) coherence_1 = topics_mean(top_topics_1) coherence_2 = topics_mean(top_topics_2) def save_answers3(coherence_1, coherence_2): with open("cooking_LDA_pa_task3.txt", "w") as fout: fout.write(" ".join(["%3f"%el for el in [coherence_1, coherence_2]])) print(coherence_1, coherence_2) save_answers3(coherence_1, coherence_2) """ Explanation: Сравнение когерентностей Задание 3. Постройте еще одну модель по корпусу corpus2 и словарю dictionary2, остальные параметры оставьте такими же, как при первом построении модели. Сохраните новую модель в другую переменную (не перезаписывайте предыдущую модель). Не забудьте про фиксирование seed! Затем воспользуйтесь методом top_topics модели, чтобы вычислить ее когерентность. Передайте в качестве аргумента соответствующий модели корпус. Метод вернет список кортежей (топ токенов, когерентность), отсортированных по убыванию последней. Вычислите среднюю по всем темам когерентность для каждой из двух моделей и передайте в функцию save_answers3. End of explanation """ lda_1.get_document_topics(corpus2[0]) """ Explanation: Считается, что когерентность хорошо соотносится с человеческими оценками интерпретируемости тем. Поэтому на больших текстовых коллекциях когерентность обычно повышается, если убрать фоновую лексику. Однако в нашем случае этого не произошло. Изучение влияния гиперпараметра alpha В этом разделе мы будем работать со второй моделью, то есть той, которая построена по сокращенному корпусу. Пока что мы посмотрели только на матрицу темы-слова, теперь давайте посмотрим на матрицу темы-документы. 
Print the topics of the zeroth (or any other) document in the corpus, using the get_document_topics method of the second model:
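get_document_topics returns a sparse list of (topic_id, probability) pairs, mentioning only the topics whose probability exceeds a cutoff, and it is often convenient to expand such a list into a dense vector. The pairs below are a hypothetical example of this output, not taken from a real model:

```python
import numpy as np

num_topics = 40

# Hypothetical sparse output of model.get_document_topics(bow):
# only a handful of topics receive noticeable probability.
doc_topics = [(3, 0.52), (17, 0.31), (29, 0.15)]

# Expand into a dense length-40 vector; topics below the cutoff stay at 0,
# which is why the entries need not sum exactly to 1.
theta = np.zeros(num_topics)
for topic_id, prob in doc_topics:
    theta[topic_id] = prob

print(theta.argmax(), round(theta.sum(), 2))  # 3 0.98
```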
End of explanation """ from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import cross_val_score X = np.zeros((len(recipes), 40)) y = [recipe['cuisine'] for recipe in recipes] for i in range(len(recipes)): for top in lda_2.get_document_topics(corpus2[i]): X[i, top[0]] = top[1] RFC = RandomForestClassifier(n_estimators = 100) estimator = cross_val_score(RFC, X, y, cv=3).mean() def save_answers5(accuracy): with open("cooking_LDA_pa_task5.txt", "w") as fout: fout.write(str(accuracy)) print(estimator) save_answers5(estimator) """ Explanation: Таким образом, гиперпараметр alpha влияет на разреженность распределений тем в документах. Аналогично гиперпараметр eta влияет на разреженность распределений слов в темах. LDA как способ понижения размерности Иногда, распределения над темами, найденные с помощью LDA, добавляют в матрицу объекты-признаки как дополнительные, семантические, признаки, и это может улучшить качество решения задачи. Для простоты давайте просто обучим классификатор рецептов на кухни на признаках, полученных из LDA, и измерим точность (accuracy). Задание 5. Используйте модель, построенную по сокращенной выборке с alpha по умолчанию (вторую модель). Составьте матрицу $\Theta = p(t|d)$ вероятностей тем в документах; вы можете использовать тот же метод get_document_topics, а также вектор правильных ответов y (в том же порядке, в котором рецепты идут в переменной recipes). Создайте объект RandomForestClassifier со 100 деревьями, с помощью функции cross_val_score вычислите среднюю accuracy по трем фолдам (перемешивать данные не нужно) и передайте в функцию save_answers5. 
End of explanation """ def generate_recipe(model, num_ingredients): theta = np.random.dirichlet(model.alpha) for i in range(num_ingredients): t = np.random.choice(np.arange(model.num_topics), p=theta) topic = model.show_topic(t, topn=model.num_terms) topic_distr = [x[1] for x in topic] terms = [x[0] for x in topic] w = np.random.choice(terms, p=topic_distr) print w print(generate_recipe(lda_1, 5)) print('\n') print(generate_recipe(lda_2, 5)) print('\n') print(generate_recipe(lda_3, 5)) """ Explanation: Для такого большого количества классов это неплохая точность. Вы можете попробовать обучать RandomForest на исходной матрице частот слов, имеющей значительно большую размерность, и увидеть, что accuracy увеличивается на 10–15%. Таким образом, LDA собрал не всю, но достаточно большую часть информации из выборки, в матрице низкого ранга. LDA — вероятностная модель Матричное разложение, использующееся в LDA, интерпретируется как следующий процесс генерации документов. Для документа $d$ длины $n_d$: 1. Из априорного распределения Дирихле с параметром alpha сгенерировать распределение над множеством тем: $\theta_d \sim Dirichlet(\alpha)$ 1. Для каждого слова $w = 1, \dots, n_d$: 1. Сгенерировать тему из дискретного распределения $t \sim \theta_{d}$ 1. Сгенерировать слово из дискретного распределения $w \sim \phi_{t}$. Подробнее об этом в Википедии. В контексте нашей задачи получается, что, используя данный генеративный процесс, можно создавать новые рецепты. 
You can pass a model and a number of ingredients into the function and generate a recipe :)
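The generative process can also be sketched without a trained model: draw theta from a Dirichlet, then for each ingredient draw a topic from theta and a word from that topic's distribution. The alpha vector, the topic-word matrix phi, and the vocabulary below are all made up for illustration. (Note that the accompanying generate_recipe helper uses Python 2's `print w`; under Python 3 it would need `print(w)`.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model parameters, invented for illustration: 3 topics, 4 words.
alpha = np.array([0.5, 0.5, 0.5])            # Dirichlet prior over topics
vocab = ["salt", "chicken", "sugar", "eggs"]
phi = np.array([                             # p(word | topic), rows sum to 1
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.4, 0.4],
])

def generate_recipe(num_ingredients):
    # 1. theta_d ~ Dirichlet(alpha): topic distribution of the new document
    theta = rng.dirichlet(alpha)
    recipe = []
    for _ in range(num_ingredients):
        # 2a. t ~ theta_d: pick a topic for this ingredient
        t = rng.choice(len(alpha), p=theta)
        # 2b. w ~ phi_t: pick an ingredient from that topic's distribution
        recipe.append(rng.choice(vocab, p=phi[t]))
    return recipe

recipe = generate_recipe(5)
print(recipe)  # a list of 5 ingredients drawn from the toy model
```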