OpenBookProjects/ipynb | _data-sci-cases/PyData2015Paris-pandas_introduction.ipynb | mit |
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
pd.options.display.max_rows = 8
"""
Explanation: <CENTER>
<img src="img/PyDataLogoBig-Paris2015.png" width="50%">
<header>
<h1>Introduction to Pandas</h1>
<h3>April 3rd, 2015</h3>
<h2>Joris Van den Bossche</h2>
<p></p>
Source: <a href="https://github.com/jorisvandenbossche/2015-PyDataParis">https://github.com/jorisvandenbossche/2015-PyDataParis</a>
</header>
</CENTER>
About me: Joris Van den Bossche
PhD student at Ghent University and VITO, Belgium
bio-science engineer, air quality research
pandas core dev
->
https://github.com/jorisvandenbossche
@jorisvdbossche
Licensed under CC BY 4.0 Creative Commons
Content of this talk
Why do you need pandas?
Basic introduction to the data structures
Guided tour through some of the pandas features with a case study about air quality
If you want to follow along, this is a notebook that you can view or run yourself:
All materials (notebook, data, link to nbviewer): https://github.com/jorisvandenbossche/2015-PyDataParis
You need pandas > 0.15 (easy solution is using Anaconda)
Some imports:
End of explanation
"""
import airbase
data = airbase.load_data()
data
"""
Explanation: Let's start with a showcase
Case study: air quality in Europe
AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe
Starting from these hourly data for different stations:
End of explanation
"""
data['1999':].resample('A').plot(ylim=[0,100])
"""
Explanation: to answering questions about this data in a few lines of code:
Does the air pollution show a decreasing trend over the years?
End of explanation
"""
exceedances = data > 200
exceedances = exceedances.groupby(exceedances.index.year).sum()
ax = exceedances.loc[2005:].plot(kind='bar')
ax.axhline(18, color='k', linestyle='--')
"""
Explanation: How many exceedances of the limit values?
End of explanation
"""
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['FR04012'].mean().unstack(level=0)
data_weekend.plot()
"""
Explanation: What is the difference in diurnal profile between weekdays and weekend?
End of explanation
"""
s = pd.Series([0.1, 0.2, 0.3, 0.4])
s
"""
Explanation: We will come back to these examples and build them up step by step.
Why do you need pandas?
Why do you need pandas?
When working with tabular or structured data (like R dataframe, SQL table, Excel spreadsheet, ...):
Import data
Clean up messy data
Explore data, gain insight into data
Process and prepare your data for analysis
Analyse your data (together with scikit-learn, statsmodels, ...)
Pandas: data analysis in python
For data-intensive work in Python the Pandas library has become essential.
What is pandas?
Pandas can be thought of as NumPy arrays with labels for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.
Pandas can also be thought of as R's data.frame in Python.
Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ...
Its documentation: http://pandas.pydata.org/pandas-docs/stable/
Key features
Fast, easy and flexible input/output for a lot of different data formats
Working with missing data (.dropna(), pd.isnull())
Merging and joining (concat, join)
Grouping: groupby functionality
Reshaping (stack, pivot)
Powerful time series manipulation (resampling, timezones, ..)
Easy plotting
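As a small, hedged illustration of the missing-data handling mentioned above (toy values invented for the example):

```python
import numpy as np
import pandas as pd

# Invented toy data: one missing measurement
s = pd.Series([1.0, np.nan, 3.0])

print(pd.isnull(s).tolist())  # which entries are missing
print(s.dropna().tolist())    # the same series with missing entries removed
```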
Basic data structures
Pandas does this through two fundamental object types, both built upon NumPy arrays: the Series object, and the DataFrame object.
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
End of explanation
"""
s.index
"""
Explanation: Attributes of a Series: index and values
The series has a built-in concept of an index, which by default is the numbers 0 through N - 1
End of explanation
"""
s.values
"""
Explanation: You can access the underlying numpy array representation with the .values attribute:
End of explanation
"""
s[0]
"""
Explanation: We can access series values via the index, just like for NumPy arrays:
End of explanation
"""
s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])
s2
s2['c']
"""
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
"""
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3, 'United Kingdom': 64.9, 'Netherlands': 16.9})
population
population['France']
"""
Explanation: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value:
End of explanation
"""
population * 1000
"""
Explanation: but with the power of numpy arrays:
End of explanation
"""
population['Belgium']
population['Belgium':'Germany']
"""
Explanation: We can index or slice the populations as expected:
End of explanation
"""
population[['France', 'Netherlands']]
population[population > 20]
"""
Explanation: Many things you can do with numpy arrays can also be applied to pandas objects.
Fancy indexing, such as indexing with a list or boolean indexing:
End of explanation
"""
population / 100
"""
Explanation: Element-wise operations:
End of explanation
"""
population.mean()
"""
Explanation: A range of methods:
End of explanation
"""
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
"""
Explanation: Alignment!
Pay attention to alignment: operations between series will align on the index:
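A hedged aside with toy numbers: where labels don't overlap, `+` produces NaN; the `add` method with `fill_value` is one common way around that.

```python
import pandas as pd

s1 = pd.Series({'Belgium': 11.3, 'France': 64.3})
s2 = pd.Series({'France': 64.3, 'Germany': 81.3})

# 'Belgium' only appears in s1, so plain + gives NaN there
print((s1 + s2)['Belgium'])

# add(..., fill_value=0) treats the missing label as 0 instead
print(s1.add(s2, fill_value=0)['Belgium'])  # 11.3
```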
End of explanation
"""
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
"""
Explanation: DataFrames: Multi-dimensional Data
A DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects that share the same index.
<img src="img/dataframe.png" width=110%>
One of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
"""
countries.index
countries.columns
"""
Explanation: Attributes of the DataFrame
Besides an index attribute, a DataFrame also has a columns attribute:
End of explanation
"""
countries.dtypes
"""
Explanation: To check the data types of the different columns:
End of explanation
"""
countries.info()
"""
Explanation: An overview of that information can be given with the info() method:
End of explanation
"""
countries.values
"""
Explanation: A DataFrame also has a values attribute, but be careful: when you have heterogeneous data, all values will be upcast to a common type:
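A minimal sketch of that upcasting with invented data: mixing integers and strings forces the combined array to object dtype.

```python
import pandas as pd

df = pd.DataFrame({'num': [1, 2], 'text': ['a', 'b']})

# Per-column dtypes are preserved, but .values must pick one dtype for all
print(df.dtypes)        # per-column dtypes: integer and object
print(df.values.dtype)  # object -- everything upcast to the common type
```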
End of explanation
"""
countries = countries.set_index('country')
countries
"""
Explanation: If we don't like what the index looks like, we can reset it and set one of our columns:
End of explanation
"""
countries['area']
"""
Explanation: To access a Series representing a column in the data, use typical indexing syntax:
End of explanation
"""
countries['population']*1000000 / countries['area']
"""
Explanation: As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
Let's compute density of each country:
End of explanation
"""
countries['density'] = countries['population']*1000000 / countries['area']
countries
"""
Explanation: Adding a new column to the dataframe is very simple:
End of explanation
"""
countries[countries['density'] > 300]
"""
Explanation: We can use masking to select certain data:
End of explanation
"""
countries.sort_index(by='density', ascending=False)  # in pandas >= 0.17: countries.sort_values('density', ascending=False)
"""
Explanation: And we can do things like sorting the rows on the values in a column:
End of explanation
"""
countries.describe()
"""
Explanation: One useful method to use is the describe method, which computes summary statistics for each column:
End of explanation
"""
countries.plot()
"""
Explanation: The plot method can be used to quickly visualize the data in different ways:
End of explanation
"""
countries['population'].plot(kind='bar')
countries.plot(kind='scatter', x='population', y='area')
"""
Explanation: However, for this dataset, it does not tell us that much.
End of explanation
"""
countries = countries.drop(['density'], axis=1)
"""
Explanation: The available plotting types: ‘line’ (default), ‘bar’, ‘barh’, ‘hist’, ‘box’ , ‘kde’, ‘area’, ‘pie’, ‘scatter’, ‘hexbin’.
End of explanation
"""
countries['area']
"""
Explanation: Some notes on selecting data
One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. We now have to distinguish between:
selection by label
selection by position.
For a DataFrame, basic indexing selects the columns.
Selecting a single column:
End of explanation
"""
countries[['area', 'density']]
"""
Explanation: or multiple columns:
End of explanation
"""
countries['France':'Netherlands']
"""
Explanation: But, slicing accesses the rows:
End of explanation
"""
countries.loc['Germany', 'area']
countries.loc['France':'Germany', :]
countries.loc[countries['density']>300, ['capital', 'population']]
"""
Explanation: For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
End of explanation
"""
countries.iloc[0:2,1:3]
"""
Explanation: Selecting by position with iloc works similarly to indexing numpy arrays:
End of explanation
"""
countries.loc['Belgium':'Germany', 'population'] = 10
countries
"""
Explanation: The different indexing methods can also be used to assign data:
End of explanation
"""
from IPython.display import HTML
HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=700 height=350></iframe>')
"""
Explanation: There are many, many more interesting operations that can be done on Series and DataFrame objects, but rather than continue using this toy data, we'll instead move to a real-world example, and illustrate some of the advanced concepts along the way.
Case study: air quality data of European monitoring stations (AirBase)
AirBase (The European Air quality dataBase)
AirBase: hourly measurements of all air quality monitoring stations from Europe.
End of explanation
"""
pd.read  # type 'pd.read' + <TAB> in the notebook to list the available readers (read_csv, read_excel, ...)
countries.to  # type 'countries.to' + <TAB> to list the available writers (to_csv, to_excel, ...)
"""
Explanation: Importing and cleaning the data
Importing and exporting data with pandas
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
...
End of explanation
"""
!head -1 ./data/BETR8010000800100hour.1-1-1990.31-12-2012
"""
Explanation: Now for our case study
I downloaded some of the raw data files of AirBase and included it in the repo:
station code: BETR801, pollutant code: 8 (nitrogen dioxide)
End of explanation
"""
data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')
data.head()
"""
Explanation: Just reading the tab-delimited data:
End of explanation
"""
# column names: 'date', then for each hour a value column ('00'..'23') followed by a 'flag' column
colnames = ['date'] + [item for pair in zip(["{:02d}".format(i) for i in range(24)], ['flag']*24) for item in pair]
data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012",
sep='\t', header=None, na_values=[-999, -9999], names=colnames)
data.head()
"""
Explanation: Not really what we want.
Using some more options of read_csv:
End of explanation
"""
data = data.drop('flag', axis=1)
data
"""
Explanation: So what did we do:
specified that the values -999 and -9999 should be regarded as NaN
specified our own column names
For now, we disregard the 'flag' columns
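A self-contained sketch of the na_values behaviour on a little inline file (the data below is invented for the example):

```python
import io
import pandas as pd

raw = "date\tvalue\n1990-01-01\t-999\n1990-01-02\t42\n"

# -999 is read back as NaN rather than as a real measurement
df = pd.read_csv(io.StringIO(raw), sep='\t', na_values=[-999])
print(df['value'].isnull().tolist())  # [True, False]
```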
End of explanation
"""
df = pd.DataFrame({'A':['one', 'one', 'two', 'two'], 'B':['a', 'b', 'a', 'b'], 'C':range(4)})
df
"""
Explanation: Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index.
Intermezzo: reshaping your data with stack, unstack and pivot
The docs say:
Pivot a level of the (possibly hierarchical) column labels, returning a
DataFrame (or Series in the case of an object with a single level of
column labels) having a hierarchical index with a new inner-most level
of row labels.
<img src="img/stack.png" width=70%>
End of explanation
"""
df = df.set_index(['A', 'B'])
df
result = df['C'].unstack()
result
df = result.stack().reset_index(name='C')
df
"""
Explanation: To use stack/unstack, we need the values we want to shift from rows to columns or the other way around as the index:
End of explanation
"""
df.pivot(index='A', columns='B', values='C')
"""
Explanation: pivot is similar to unstack, but lets you specify column names:
End of explanation
"""
df = pd.DataFrame({'A':['one', 'one', 'two', 'two', 'one', 'two'], 'B':['a', 'b', 'a', 'b', 'a', 'b'], 'C':range(6)})
df
df.pivot_table(index='A', columns='B', values='C', aggfunc='count') #'mean'
"""
Explanation: pivot_table is similar to pivot, but can work with duplicate indices and lets you specify an aggregation function:
End of explanation
"""
data = data.set_index('date')
data_stacked = data.stack()
data_stacked
"""
Explanation: Back to our case study
We can now use stack to create a timeseries:
End of explanation
"""
data_stacked = data_stacked.reset_index(name='BETR801')
data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H")
data_stacked = data_stacked.drop(['date', 'level_1'], axis=1)
data_stacked
"""
Explanation: Now, let's combine the two levels of the index:
End of explanation
"""
import airbase
no2 = airbase.load_data()
"""
Explanation: For this talk, I put the above code in a separate function, and repeated this for some different monitoring stations:
End of explanation
"""
no2.head(3)
no2.tail()
"""
Explanation: FR04037 (PARIS 13eme): urban background site at Square de Choisy
FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia
BETR802: urban traffic site in Antwerp, Belgium
BETN029: rural background site in Houtem, Belgium
See http://www.eea.europa.eu/themes/air/interactive/no2
Exploring the data
Some useful methods:
head and tail
End of explanation
"""
no2.info()
"""
Explanation: info()
End of explanation
"""
no2.describe()
"""
Explanation: Getting some basic summary statistics about the data with describe:
End of explanation
"""
no2.plot(kind='box', ylim=[0,250])
no2['BETR801'].plot(kind='hist', bins=50)
no2.plot(figsize=(12,6))
"""
Explanation: Quickly visualizing the data
End of explanation
"""
no2[-500:].plot(figsize=(12,6))
"""
Explanation: This does not tell us too much...
We can select part of the data (e.g. the latest 500 data points):
End of explanation
"""
no2.index
"""
Explanation: Or we can use some more advanced time series features -> next section!
Working with time series data
When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available:
End of explanation
"""
no2["2010-01-01 09:00": "2010-01-01 12:00"]
"""
Explanation: Indexing a time series works with strings:
End of explanation
"""
no2['2012']
"""
Explanation: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2012:
End of explanation
"""
no2['2012-01':'2012-03']
"""
Explanation: Or all data of January up to March 2012:
End of explanation
"""
no2.index.hour
no2.index.year
"""
Explanation: Time and date components can be accessed from the index:
End of explanation
"""
no2.resample('D').head()
"""
Explanation: The power of pandas: resample
A very powerful method is resample: converting the frequency of the time series (e.g. from hourly to daily data).
The time series has a frequency of 1 hour. I want to change this to daily:
End of explanation
"""
no2.resample('D', how='max').head()  # in modern pandas: no2.resample('D').max().head()
"""
Explanation: By default, resample takes the mean as aggregation function, but other methods can also be specified:
End of explanation
"""
no2.resample('M').plot() # 'A'
# no2['2012'].resample('D').plot()
no2.loc['2009':, 'FR04037'].resample('M', how=['mean', 'median']).plot()  # modern: .resample('M').agg(['mean', 'median']).plot()
"""
Explanation: The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
These strings can also be combined with numbers, e.g. '10D'.
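A hedged sketch of such a combined frequency string on invented daily data (written against the current resample API, which uses `.mean()` instead of the `how=` keyword):

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2012-01-01', periods=30, freq='D')
s = pd.Series(np.arange(30.0), index=idx)

# '10D' groups the series into 10-day bins before aggregating
print(s.resample('10D').mean().tolist())  # [4.5, 14.5, 24.5]
```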
Further exploring the data:
End of explanation
"""
no2_1999 = no2['1999':]
no2_1999.resample('A').plot()
no2_1999.mean(axis=1).resample('A').plot(color='k', linestyle='--', linewidth=4)
"""
Explanation: Question: The evolution of the yearly averages, together with the overall mean of all stations
End of explanation
"""
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
df.groupby('key').aggregate('sum') # np.sum
df.groupby('key').sum()
"""
Explanation: Analysing the data
Intermezzo - the groupby operation (split-apply-combine)
By "group by" we are referring to a process involving one or more of the following steps
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
<img src="img/splitApplyCombine.png">
Similar to SQL GROUP BY
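As a hedged extension of the same toy example, groupby can also apply several aggregation functions in one pass:

```python
import pandas as pd

df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'],
                   'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})

# One pass, two aggregations per group
result = df.groupby('key')['data'].agg(['sum', 'mean'])
print(result.loc['A', 'sum'])   # 15
print(result.loc['A', 'mean'])  # 5.0
```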
The example of the image in pandas syntax:
End of explanation
"""
no2['month'] = no2.index.month
"""
Explanation: Back to the air quality data
Question: what does the typical monthly profile look like for the different stations?
First, we add a column to the dataframe that indicates the month (integer value of 1 to 12):
End of explanation
"""
no2.groupby('month').mean()
no2.groupby('month').mean().plot()
"""
Explanation: Now, we can calculate the mean of each month over the different years:
End of explanation
"""
no2.groupby(no2.index.hour).mean().plot()
"""
Explanation: Question: The typical diurnal profile for the different stations
End of explanation
"""
no2.index.weekday?
no2['weekday'] = no2.index.weekday
"""
Explanation: Question: What is the difference in the typical diurnal profile between week and weekend days?
End of explanation
"""
no2['weekend'] = no2['weekday'].isin([5, 6])
data_weekend = no2.groupby(['weekend', no2.index.hour]).mean()
data_weekend.head()
data_weekend_FR04012 = data_weekend['FR04012'].unstack(level=0)
data_weekend_FR04012.head()
data_weekend_FR04012.plot()
"""
Explanation: Add a column indicating week/weekend
End of explanation
"""
exceedances = no2 > 200
# group by year and count exceedances (sum of boolean)
exceedances = exceedances.groupby(exceedances.index.year).sum()
ax = exceedances.loc[2005:].plot(kind='bar')
ax.axhline(18, color='k', linestyle='--')
"""
Explanation: Question: How many exceedances of hourly values above the European limit of 200 µg/m³ are there?
End of explanation
"""
# add a weekday and week column
no2['weekday'] = no2.index.weekday
no2['week'] = no2.index.week
no2.head()
# pivot table so that the weekdays are the different columns
data_pivoted = no2['2012'].pivot_table(columns='weekday', index='week', values='FR04037')
data_pivoted.head()
box = data_pivoted.boxplot()
"""
Explanation: Question: Visualize the typical week profile for the different stations as boxplots.
(Tip: the boxplot method of a DataFrame expects the data for the different boxes in different columns.)
End of explanation
"""
no2[['BETR801', 'BETN029', 'FR04037', 'FR04012']].corr()
no2[['BETR801', 'BETN029', 'FR04037', 'FR04012']].resample('D').corr()
no2 = no2[['BETR801', 'BETN029', 'FR04037', 'FR04012']]
"""
Explanation: Exercise: Calculate the correlation between the different stations
End of explanation
"""
|
arsenovic/clifford | docs/tutorials/cga/clustering.ipynb | bsd-3-clause |
from clifford.g3c import *
print('e1*e1 ', e1*e1)
print('e2*e2 ', e2*e2)
print('e3*e3 ', e3*e3)
print('e4*e4 ', e4*e4)
print('e5*e5 ', e5*e5)
"""
Explanation: This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.
Example 2 Clustering Geometric Objects
In this example we will look at a few of the tools provided by the clifford package for (4,1) conformal geometric algebra (CGA) and see how we can use them in a practical setting to cluster geometric objects via the simple K-means clustering algorithm provided in clifford.tools
As before the first step in using the package for CGA is to generate and import the algebra:
End of explanation
"""
from clifford.tools.g3c import *
import numpy as np
def generate_random_object_cluster(n_objects, object_generator, max_cluster_trans=1.0, max_cluster_rot=np.pi/8):
""" Creates a cluster of random objects """
ref_obj = object_generator()
cluster_objects = []
for i in range(n_objects):
r = random_rotation_translation_rotor(maximum_translation=max_cluster_trans, maximum_angle=max_cluster_rot)
new_obj = apply_rotor(ref_obj, r)
cluster_objects.append(new_obj)
return cluster_objects
"""
Explanation: The tools submodule of the clifford package contains a wide array of algorithms and tools that can be useful for manipulating objects in CGA. In this case we will be generating a large number of objects and then segmenting them into clusters.
We first need an algorithm for generating a cluster of objects in space. We will construct this cluster by generating a random object and then repeatedly disturbing this object by some small fixed amount and storing the result:
End of explanation
"""
from pyganja import *
clustered_circles = generate_random_object_cluster(10, random_circle)
sc = GanjaScene()
for c in clustered_circles:
sc.add_object(c, rgb2hex([255,0,0]))
draw(sc, scale=0.05)
"""
Explanation: We can use this function to create a cluster and then we can visualise this cluster with pyganja.
End of explanation
"""
from clifford.tools.g3c import generate_random_object_cluster
"""
Explanation: This cluster generation function appears in clifford tools by default and it can be imported as follows:
End of explanation
"""
def generate_n_clusters( object_generator, n_clusters, n_objects_per_cluster ):
object_clusters = []
for i in range(n_clusters):
cluster_objects = generate_random_object_cluster(n_objects_per_cluster, object_generator,
max_cluster_trans=0.5, max_cluster_rot=np.pi / 16)
object_clusters.append(cluster_objects)
all_objects = [item for sublist in object_clusters for item in sublist]
return all_objects, object_clusters
"""
Explanation: Now that we can generate individual clusters we would like to generate many:
End of explanation
"""
from clifford.tools.g3c import generate_n_clusters
all_objects, object_clusters = generate_n_clusters(random_circle, 2, 5)
sc = GanjaScene()
for c in all_objects:
sc.add_object(c, rgb2hex([255,0,0]))
draw(sc, scale=0.05)
"""
Explanation: Again this function appears by default in clifford tools and we can easily visualise the result:
End of explanation
"""
from clifford.tools.g3c.object_clustering import n_clusters_objects
import time
def run_n_clusters( object_generator, n_clusters, n_objects_per_cluster, n_shotgunning):
all_objects, object_clusters = generate_n_clusters( object_generator, n_clusters, n_objects_per_cluster )
[new_labels, centroids, start_labels, start_centroids] = n_clusters_objects(n_clusters, all_objects,
initial_centroids=None,
n_shotgunning=n_shotgunning,
averaging_method='unweighted')
return all_objects, new_labels, centroids
"""
Explanation: Given that we can now generate multiple clusters of objects we can test algorithms for segmenting them.
The function run_n_clusters below generates a lot of objects distributed into n clusters and then attempts to segment the objects to recover the clusters.
End of explanation
"""
def visualise_n_clusters(all_objects, centroids, labels,
color_1=np.array([255, 0, 0]),
color_2=np.array([0, 255, 0])):
"""
Utility method for visualising several clusters and their respective centroids
using pyganja
"""
alpha_list = np.linspace(0, 1, num=len(centroids))
sc = GanjaScene()
for ind, this_obj in enumerate(all_objects):
alpha = alpha_list[labels[ind]]
cluster_color = (alpha * color_1 + (1 - alpha) * color_2).astype(int)  # np.int was removed in newer NumPy; plain int works
sc.add_object(this_obj, rgb2hex(cluster_color))
for c in centroids:
sc.add_object(c, Color.BLACK)
return sc
object_generator = random_circle
n_clusters = 3
n_objects_per_cluster = 10
n_shotgunning = 60
all_objects, labels, centroids = run_n_clusters(object_generator, n_clusters,
n_objects_per_cluster, n_shotgunning)
sc = visualise_n_clusters(all_objects, centroids, labels,
color_1=np.array([255, 0, 0]),
color_2=np.array([0, 255, 0]))
draw(sc, scale=0.05)
"""
Explanation: Lets try it!
End of explanation
"""
|
volodymyrss/3ML | examples/MULTINEST parallel demo.ipynb | bsd-3-clause |
from ipyparallel import Client
rc = Client(profile='mpi')
# Grab a view
view = rc[:]
# Activate parallel cell magics
view.activate()
"""
Explanation: Parallel MULTINEST with 3ML
J. Michael Burgess
MULTINEST
MULTINEST is a Bayesian posterior sampler that has two distinct advantages over traditional MCMC:
* Recovering multimodal posteriors
* In the case that the posterior is does not have a single maximum, traditional MCMC
may miss other modes of the posterior
* Full marginal likelihood computation
* This allows for direct model comparison via Bayes factors
To run the MULTINEST sampler in 3ML, one must have the following software installed:
* MULTINEST (http://xxx.lanl.gov/abs/0809.3437) (git it here: https://github.com/JohannesBuchner/MultiNest)
* pymultinest (https://github.com/JohannesBuchner/PyMultiNest)
Parallelization
MULTINEST can be run in a single instance, but it can be incredibly slow. Luckily, it can be built with MPI support, enabling it to run on a multicore workstation or cluster very efficiently.
There are multiple ways to invoke the parallel run of MULTINEST in 3ML: e.g., one can write a python script with all operations and invoke:
```bash
$> mpiexec -n <N> python my3MLscript.py
```
However, it is nice to be able to stay in the Jupyter environment with ipyparallel, which allows the user to easily switch between a single instance, desktop cores, and a cluster environment, all with the same code.
Setup
The user is expected to have an MPI distribution installed (open-mpi, mpich) and to have compiled MULTINEST against the MPI library. Additionally, the user should set up an ipyparallel profile. Instructions can be found here: http://ipython.readthedocs.io/en/2.x/parallel/parallel_mpi.html
Initialize the MPI engine
Details for launching ipcluster on a distributed cluster are not covered here, but everything is the same otherwise.
In the directory that you want to run 3ML in the Jupyter notebook launch and ipcontroller:
```bash
$> ipcontroller start --profile=mpi --ip='*'
```
Next, launch MPI with the desired number of engines:
```bash
$> mpiexec -n <N> ipengine --mpi=mpi4py --profile=mpi
```
Now, the user can head to the notebook and begin!
Running 3ML
First we get a client and and connect it to the running profile
End of explanation
"""
with view.sync_imports():
import threeML
import astromodels
"""
Explanation: Import 3ML and astromodels to the workers
End of explanation
"""
%%px
# Make GBM detector objects
src_selection = "0.-10."
nai0 = threeML.FermiGBM_TTE_Like('NAI0',
"glg_tte_n0_bn080916009_v01.fit",
"-10-0,100-200", # background selection
src_selection, # source interval
rspfile="glg_cspec_n0_bn080916009_v07.rsp")
nai3 = threeML.FermiGBM_TTE_Like('NAI3',"glg_tte_n3_bn080916009_v01.fit",
"-10-0,100-200",
src_selection,
rspfile="glg_cspec_n3_bn080916009_v07.rsp")
nai4 = threeML.FermiGBM_TTE_Like('NAI4',"glg_tte_n4_bn080916009_v01.fit",
"-10-0,100-200",
src_selection,
rspfile="glg_cspec_n4_bn080916009_v07.rsp")
bgo0 = threeML.FermiGBM_TTE_Like('BGO0',"glg_tte_b0_bn080916009_v01.fit",
"-10-0,100-200",
src_selection,
rspfile="glg_cspec_b0_bn080916009_v07.rsp")
# Select measurements
nai0.set_active_measurements("10.0-30.0", "40.0-950.0")
nai3.set_active_measurements("10.0-30.0", "40.0-950.0")
nai4.set_active_measurements("10.0-30.0", "40.0-950.0")
bgo0.set_active_measurements("250-43000")
# Set up 3ML likelihood object
triggerName = 'bn080916009'
ra = 121.8
dec = -61.3
data_list = threeML.DataList( nai0,nai3,nai4,bgo0 )
band = astromodels.Band()
GRB = threeML.PointSource( triggerName, ra, dec, spectral_shape=band )
model = threeML.Model( GRB )
# Set up Bayesian details
bayes = threeML.BayesianAnalysis(model, data_list)
band.K.prior = astromodels.Log_uniform_prior(lower_bound=1E-2, upper_bound=5)
band.xp.prior = astromodels.Log_uniform_prior(lower_bound=1E2, upper_bound=2E3)
band.alpha.prior = astromodels.Uniform_prior(lower_bound=-1.5,upper_bound=0.)
band.beta.prior = astromodels.Uniform_prior(lower_bound=-3.,upper_bound=-1.5)
"""
Explanation: Now we set up the analysis in the normal way, except for the following two caveats:
* we must call the threeML module explicitly because ipyparallel does not support from <> import *
* we use the %%px cell magic (or %px line magic) to perform operations on the workers
End of explanation
"""
%px samples = bayes.sample_multinest(n_live_points=400,resume=False)
"""
Explanation: Finally we call MULTINEST. If all is set up properly, MULTINEST will gather the distributed objects and quickly sample the posterior
End of explanation
"""
%%px --targets ::1
# Execute commands that allow for saving figures,
# grabbing samples, etc. (a cell magic must be the first line of its cell)
samples = bayes.raw_samples()
f=bayes.get_credible_intervals()
bayes.corner_plot(plot_contours=True, plot_density=False)
# Bring the raw samples local
raw_samples=view['samples'][0]
raw_samples['bn080916009.spectrum.main.Band.K']
"""
Explanation: Viewing the results
Now we need to bring the BayesianAnalysis object back home. Unfortunately, not all objects can be brought back. So you must save figures to the workers. Future implementations of 3ML will allow for saving of the results to a dedicated format which can then be viewed on the local machine. More soon!
End of explanation
"""
|
patrick-kidger/diffrax | examples/latent_ode.ipynb | apache-2.0 |
import time
import diffrax
import equinox as eqx
import jax
import jax.nn as jnn
import jax.numpy as jnp
import jax.random as jrandom
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import optax
matplotlib.rcParams.update({"font.size": 30})
"""
Explanation: Latent ODE
This example trains a Latent ODE.
In this case, it's on a simple dataset of decaying oscillators. That is, 2-dimensional time series that look like:
xx ***
** *
x* **
*x
x *
* * xxxxx
* x * xx xx *******
x x **
x * x * x * xxxxxxxx ******
x * x * x * xxx *xx *
x * xx ** x ** xx
x * x * x * xx ** xx
* x * x ** x * xxx
x * * x * xx **
x * x * xx xx* ***
x *x * xxx xxx *****
x x* * xx
x xx ******
xxxxx
The model is trained to generate samples that look like this.
What's really nice about this example is that we will take the underlying data to be irregularly sampled. We will have different observation times for different batch elements.
Most differential equation libraries will struggle with this, as they usually mandate that the differential equation be solved over the same timespan for all batch elements. Working around this can involve programming complexity like outputting at lots and lots of times (the union of all the observations times in the batch), or mathematical complexities like reparameterising the differentiating equation.
However Diffrax is capable of handling this without such issues! You can vmap over
different integration times for different batch elements.
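To make that concrete without depending on Diffrax itself, here is a hedged pure-Python sketch of the pattern: each batch element integrates the same ODE, dy/dt = -y, but up to its own end time. In Diffrax, the analogous step is simply vmapping the solve over `t1`.

```python
import math

def euler_solve(t1, y0=1.0, dt=1e-4):
    """Fixed-step Euler for dy/dt = -y on [0, t1] (illustration only)."""
    t, y = 0.0, y0
    while t < t1:
        y += dt * (-y)
        t += dt
    return y

# Different integration times for different batch elements
end_times = [0.5, 1.0, 2.0]
solutions = [euler_solve(t1) for t1 in end_times]

# Each solution agrees with the exact answer exp(-t1)
for t1, y in zip(end_times, solutions):
    assert abs(y - math.exp(-t1)) < 1e-3
```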
Reference:
bibtex
@incollection{rubanova2019latent,
title={{L}atent {O}rdinary {D}ifferential {E}quations for {I}rregularly-{S}ampled
{T}ime {S}eries},
author={Rubanova, Yulia and Chen, Ricky T. Q. and Duvenaud, David K.},
booktitle={Advances in Neural Information Processing Systems},
publisher={Curran Associates, Inc.},
year={2019},
}
This example is available as a Jupyter notebook here.
End of explanation
"""
class Func(eqx.Module):
scale: jnp.ndarray
mlp: eqx.nn.MLP
def __call__(self, t, y, args):
return self.scale * self.mlp(y)
"""
Explanation: The vector field. Note its overall structure of scalar * tanh(mlp(y)) which is a good structure for Latent ODEs. (Here the tanh is part of self.mlp.)
End of explanation
"""
class LatentODE(eqx.Module):
func: Func
rnn_cell: eqx.nn.GRUCell
hidden_to_latent: eqx.nn.Linear
latent_to_hidden: eqx.nn.MLP
hidden_to_data: eqx.nn.Linear
hidden_size: int
latent_size: int
def __init__(
self, *, data_size, hidden_size, latent_size, width_size, depth, key, **kwargs
):
super().__init__(**kwargs)
mkey, gkey, hlkey, lhkey, hdkey = jrandom.split(key, 5)
scale = jnp.ones(())
mlp = eqx.nn.MLP(
in_size=hidden_size,
out_size=hidden_size,
width_size=width_size,
depth=depth,
activation=jnn.softplus,
final_activation=jnn.tanh,
key=mkey,
)
self.func = Func(scale, mlp)
self.rnn_cell = eqx.nn.GRUCell(data_size + 1, hidden_size, key=gkey)
self.hidden_to_latent = eqx.nn.Linear(hidden_size, 2 * latent_size, key=hlkey)
self.latent_to_hidden = eqx.nn.MLP(
latent_size, hidden_size, width_size=width_size, depth=depth, key=lhkey
)
self.hidden_to_data = eqx.nn.Linear(hidden_size, data_size, key=hdkey)
self.hidden_size = hidden_size
self.latent_size = latent_size
# Encoder of the VAE
def _latent(self, ts, ys, key):
data = jnp.concatenate([ts[:, None], ys], axis=1)
hidden = jnp.zeros((self.hidden_size,))
for data_i in reversed(data):
hidden = self.rnn_cell(data_i, hidden)
context = self.hidden_to_latent(hidden)
mean, logstd = context[: self.latent_size], context[self.latent_size :]
std = jnp.exp(logstd)
latent = mean + jrandom.normal(key, (self.latent_size,)) * std
return latent, mean, std
# Decoder of the VAE
def _sample(self, ts, latent):
dt0 = 0.4 # selected as a reasonable choice for this problem
y0 = self.latent_to_hidden(latent)
sol = diffrax.diffeqsolve(
diffrax.ODETerm(self.func),
diffrax.Tsit5(),
ts[0],
ts[-1],
dt0,
y0,
saveat=diffrax.SaveAt(ts=ts),
)
return jax.vmap(self.hidden_to_data)(sol.ys)
@staticmethod
def _loss(ys, pred_ys, mean, std):
# -log p_θ with Gaussian p_θ
reconstruction_loss = 0.5 * jnp.sum((ys - pred_ys) ** 2)
# KL(N(mean, std^2) || N(0, 1))
variational_loss = 0.5 * jnp.sum(mean**2 + std**2 - 2 * jnp.log(std) - 1)
return reconstruction_loss + variational_loss
# Run both encoder and decoder during training.
def train(self, ts, ys, *, key):
latent, mean, std = self._latent(ts, ys, key)
pred_ys = self._sample(ts, latent)
return self._loss(ys, pred_ys, mean, std)
# Run just the decoder during inference.
def sample(self, ts, *, key):
latent = jrandom.normal(key, (self.latent_size,))
return self._sample(ts, latent)
"""
Explanation: Wrap up the differential equation solve into a model.
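For reference, the `variational_loss` term in `_loss` above is the closed-form KL divergence between the diagonal-Gaussian approximate posterior and a standard normal prior, the standard VAE identity, sketched here:

```latex
\mathrm{KL}\!\left(\mathcal{N}(\mu,\operatorname{diag}(\sigma^2)) \,\middle\|\, \mathcal{N}(0, I)\right)
= \frac{1}{2}\sum_i \left(\mu_i^2 + \sigma_i^2 - 2\log\sigma_i - 1\right)
```

which matches `0.5 * jnp.sum(mean**2 + std**2 - 2 * jnp.log(std) - 1)` term by term.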
End of explanation
"""
def get_data(dataset_size, *, key):
ykey, tkey1, tkey2 = jrandom.split(key, 3)
y0 = jrandom.normal(ykey, (dataset_size, 2))
t0 = 0
t1 = 2 + jrandom.uniform(tkey1, (dataset_size,))
ts = jrandom.uniform(tkey2, (dataset_size, 20)) * (t1[:, None] - t0) + t0
ts = jnp.sort(ts)
dt0 = 0.1
def func(t, y, args):
return jnp.array([[-0.1, 1.3], [-1, -0.1]]) @ y
def solve(ts, y0):
sol = diffrax.diffeqsolve(
diffrax.ODETerm(func),
diffrax.Tsit5(),
ts[0],
ts[-1],
dt0,
y0,
saveat=diffrax.SaveAt(ts=ts),
)
return sol.ys
ys = jax.vmap(solve)(ts, y0)
return ts, ys
def dataloader(arrays, batch_size, *, key):
dataset_size = arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in arrays)
indices = jnp.arange(dataset_size)
while True:
perm = jrandom.permutation(key, indices)
(key,) = jrandom.split(key, 1)
start = 0
end = batch_size
while start < dataset_size:
batch_perm = perm[start:end]
yield tuple(array[batch_perm] for array in arrays)
start = end
end = start + batch_size
"""
Explanation: Toy dataset of decaying oscillators.
By way of illustration we set this up as a differential equation and solve it using Diffrax as well. (Despite this being an autonomous linear ODE, for which a closed-form solution is actually available.)
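To see that the dynamics really decay and oscillate, here is a dependency-free sketch computing the eigenvalues of the drift matrix `[[-0.1, 1.3], [-1, -0.1]]` used in `func` above:

```python
import cmath

# Eigenvalues of a 2x2 matrix are the roots of l**2 - tr*l + det = 0.
tr = -0.1 + -0.1                       # trace
det = (-0.1) * (-0.1) - 1.3 * (-1)     # determinant = 1.31
disc = cmath.sqrt(tr * tr - 4 * det)
lam = (tr + disc) / 2
# Negative real part => exponential decay; nonzero imaginary part => oscillation.
print(lam)
```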
End of explanation
"""
def main(
dataset_size=10000,
batch_size=256,
lr=1e-2,
steps=250,
save_every=50,
hidden_size=16,
latent_size=16,
width_size=16,
depth=2,
seed=5678,
):
key = jrandom.PRNGKey(seed)
data_key, model_key, loader_key, train_key, sample_key = jrandom.split(key, 5)
ts, ys = get_data(dataset_size, key=data_key)
model = LatentODE(
data_size=ys.shape[-1],
hidden_size=hidden_size,
latent_size=latent_size,
width_size=width_size,
depth=depth,
key=model_key,
)
@eqx.filter_value_and_grad
def loss(model, ts_i, ys_i, key_i):
batch_size, _ = ts_i.shape
key_i = jrandom.split(key_i, batch_size)
loss = jax.vmap(model.train)(ts_i, ys_i, key=key_i)
return jnp.mean(loss)
@eqx.filter_jit
def make_step(model, opt_state, ts_i, ys_i, key_i):
value, grads = loss(model, ts_i, ys_i, key_i)
key_i = jrandom.split(key_i, 1)[0]
updates, opt_state = optim.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
return value, model, opt_state, key_i
optim = optax.adam(lr)
opt_state = optim.init(eqx.filter(model, eqx.is_inexact_array))
# Plot results
num_plots = 1 + (steps - 1) // save_every
if ((steps - 1) % save_every) != 0:
num_plots += 1
fig, axs = plt.subplots(1, num_plots, figsize=(num_plots * 8, 8))
axs[0].set_ylabel("x")
axs = iter(axs)
for step, (ts_i, ys_i) in zip(
range(steps), dataloader((ts, ys), batch_size, key=loader_key)
):
start = time.time()
value, model, opt_state, train_key = make_step(
model, opt_state, ts_i, ys_i, train_key
)
end = time.time()
print(f"Step: {step}, Loss: {value}, Computation time: {end - start}")
if (step % save_every) == 0 or step == steps - 1:
ax = next(axs)
# Sample over a longer time interval than we trained on. The model will be
# sufficiently good that it will correctly extrapolate!
sample_t = jnp.linspace(0, 12, 300)
sample_y = model.sample(sample_t, key=sample_key)
sample_t = np.asarray(sample_t)
sample_y = np.asarray(sample_y)
ax.plot(sample_t, sample_y[:, 0])
ax.plot(sample_t, sample_y[:, 1])
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel("t")
plt.savefig("latent_ode.png")
plt.show()
main()
"""
Explanation: The main entry point. Try running main() to train a model.
End of explanation
"""
|
YAtOff/python0-reloaded | week5/Booleans and if.ipynb | mit | seconds = 30
0 <= seconds <= 59
seconds = -1
0 <= seconds <= 59
"""
Explanation: The expression 0 <= seconds <= 59 is a boolean expression and evaluates to True or False.
End of explanation
"""
def valid_seconds(seconds):
if True:
return True
else:
return False
"""
Explanation: That is, the function above is equivalent to:
End of explanation
"""
def valid_seconds(seconds):
if False:
return True
else:
return False
"""
Explanation: when 0 <= seconds <= 59 is True, and to:
End of explanation
"""
def valid_seconds(seconds):
return 0 <= seconds <= 59
valid_seconds(30)
"""
Explanation: when 0 <= seconds <= 59 is False.
The easier way is for the function to simply return the value of the boolean expression as its result.
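A quick boundary check (with hypothetical input values) shows the direct-return style behaves exactly like the if/else versions above:

```python
def valid_seconds(seconds):
    # Returns the boolean expression directly -- no if/else needed.
    return 0 <= seconds <= 59

checks = [valid_seconds(s) for s in (-1, 0, 30, 59, 60)]
print(checks)  # [False, True, True, True, False]
```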
End of explanation
"""
|
as595/AllOfYourBases | MISC/NVSS_selection.ipynb | gpl-3.0 | %matplotlib inline
"""
Explanation: [171009 - AMS] Original script written
This script illustrates the observational selection bias in the P-D diagram distribution of radio galaxies shown in Fig. 7 of https://arxiv.org/abs/1704.00516.
We want our plots to appear in line with the script rather than as separate windows:
End of explanation
"""
import numpy as np # for array manipulation
import pylab as pl # for plotting
"""
Explanation: We'll need to import some libraries:
End of explanation
"""
logSize = np.arange(2.,4.,0.1) # kpc
"""
Explanation: Pick a range of sizes to test. Here I'm selecting sizes evenly spaced in log space, from 100 kpc up to (but not including) $10^4$ kpc, at a multiplicative spacing of $10^{0.1}$:
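The same grid can be written in plain Python, which makes the multiplicative spacing explicit (a sketch mirroring `np.arange(2., 4., 0.1)` above):

```python
# Each size is a factor 10**0.1 (~1.26x) larger than the previous one; the
# grid runs from 10**2.0 = 100 kpc up to 10**3.9 (arange excludes the 4.0 endpoint).
sizes = [10 ** (2.0 + 0.1 * i) for i in range(20)]
ratio = sizes[1] / sizes[0]
print(sizes[0], sizes[-1], ratio)
```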
End of explanation
"""
z = 0.01
"""
Explanation: Set a redshift:
End of explanation
"""
from astropy.cosmology import WMAP9 as cosmo
D_A = cosmo.kpc_proper_per_arcmin(z)
"""
Explanation: Calculate angular distance at this redshift, assuming some cosmology:
End of explanation
"""
AngSize = 10**(logSize)/D_A # arcmin
"""
Explanation: Use this to convert our physical sizes to angular sizes on the sky:
End of explanation
"""
Area = 2.*(np.pi*(AngSize/4.)**2) # arcmin^2
"""
Explanation: Work out the area for each source using our simple 2 circle model (note that AngSize is an array so this will also be an array):
End of explanation
"""
mpc2m = 3.09e22
D_L = cosmo.luminosity_distance(z)*mpc2m
"""
Explanation: Work out the luminosity distance at this redshift and convert from Mpc to metres:
End of explanation
"""
thresh = 2.5e-3 # Jy/beam
"""
Explanation: Use the NVSS 5$\sigma$ threshold as our limiting point. $\sigma = 0.5$mJy/beam.
End of explanation
"""
bm2amin = 1.13*np.pi*(0.75**2)
print(bm2amin)
"""
Explanation: Work out how many sq arcminutes per beam (beam FWHM is 45 arcsec = 0.75 arcmin):
End of explanation
"""
F = (thresh/bm2amin)*Area # Jy
"""
Explanation: Work out the integrated flux density for this source:
End of explanation
"""
Prad = 1e-26*F*4*np.pi*D_L**2 # W/Hz
"""
Explanation: Use the integrated flux density to work out the radio power:
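The `1e-26` factor converts janskys to SI units (1 Jy = $10^{-26}$ W m$^{-2}$ Hz$^{-1}$). Here is a standalone sanity check of the formula with round, hypothetical numbers:

```python
import math

mpc2m = 3.09e22              # metres per Mpc
F = 1.0                      # integrated flux density in Jy (hypothetical)
D_L = 10 * mpc2m             # a 10 Mpc luminosity distance, in metres
P = 1e-26 * F * 4 * math.pi * D_L ** 2   # W/Hz
print(P)                     # ~1.2e22 W/Hz
```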
End of explanation
"""
pl.subplot(111)
pl.plot(10**logSize,Prad)
pl.axis([200.,6000.,1e23,1e29])
pl.loglog()
pl.title(r'Detection threshold for $z=0.01$')
pl.ylabel(r'Radio Power [W/Hz]')
pl.xlabel(r'Size [kpc]')
pl.show()
"""
Explanation: Plot the output:
End of explanation
"""
|
mayankjohri/LetsExplorePython | Section 2 - Advance Python/Chapter S2.01 - Functional Programming/01_01_Functional_Programming_Introduction.ipynb | gpl-3.0 | # not so functional function
a = 0
def global_sum(x):
global a
x += a
return x
print(global_sum(1))
print(a)
a = 11
print(global_sum(1))
print(a)
# not so functional function
a = 0
def global_sum(x):
global a
return x + a
print(global_sum(x=1))
print(a)
a = 11
print(global_sum(x=1))
print(a)
"""
Explanation: Introduction to Functional Programming
What is Functional Programming
Functional programming is a programming paradigm that revolves around pure functions.
Pure function
A pure function is a function which can be represented as a mathematical expression. That means no side effects are present: no I/O operations, no global state changes, no database interactions.
<img src="files/PureFunction.png" width="500" alt="Pure Function Representation">
The output of a pure function depends ONLY on its inputs. Thus, if a pure function is called with the same inputs a million times, you get the same result every single time.
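A minimal sketch of that property, using a hypothetical function: calling a pure function repeatedly with the same input can only ever produce one distinct result.

```python
def square(x):
    # Pure: the result depends only on x.
    return x * x

results = {square(7) for _ in range(1000)}
print(results)  # {49}
```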
End of explanation
"""
# a better functional function
def better_sum(a, x):
return a+x
num = better_sum(1, 1)
print(num)
num = better_sum(1, 3)
print(num)
num = better_sum(1, 1)
print(num)
"""
Explanation: In the above example, the output of the function global_sum changed due to the value of a; thus it is not a pure function.
End of explanation
"""
|
letsgoexploring/linearsolve-package | docs/source/examples.ipynb | mit | # Import numpy, pandas, linearsolve, matplotlib.pyplot
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
# Input model parameters
parameters = pd.Series(dtype=float)
parameters['alpha'] = .35
parameters['beta'] = 0.99
parameters['delta'] = 0.025
parameters['rhoa'] = .9
parameters['sigma'] = 1.5
parameters['A'] = 1
# Funtion that evaluates the equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Household Euler equation
euler_eqn = p.beta*fwd.c**-p.sigma*(p.alpha*cur.a*fwd.k**(p.alpha-1)+1-p.delta) - cur.c**-p.sigma
# Goods market clearing
market_clearing = cur.c + fwd.k - (1-p.delta)*cur.k - cur.a*cur.k**p.alpha
# Exogenous technology
technology_proc = p.rhoa*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
euler_eqn,
market_clearing,
technology_proc
])
# Initialize the model
model = ls.model(equations = equations,
n_states=2,
n_exo_states = 1,
var_names=['a','k','c'],
parameters = parameters)
# Compute the steady state numerically
guess = [1,1,1]
model.compute_ss(guess)
# Find the log-linear approximation around the non-stochastic steady state and solve
model.approximate_and_solve()
# Compute impulse responses and plot
model.impulse(T=41,t0=5,shocks=None)
fig = plt.figure(figsize=(12,4))
ax1 =fig.add_subplot(1,2,1)
model.irs['e_a'][['a','k','c']].plot(lw='5',alpha=0.5,grid=True,ax=ax1).legend(loc='upper right',ncol=3)
ax2 =fig.add_subplot(1,2,2)
model.irs['e_a'][['e_a','a']].plot(lw='5',alpha=0.5,grid=True,ax=ax2).legend(loc='upper right',ncol=2)
"""
Explanation: Overview and Examples
A brief description of what linearsolve is for followed by examples.
What linearsolve Does
linearsolve defines a class - linearsolve.model - with several functions for approximating, solving, and simulating dynamic stochastic general equilibrium (DSGE) models. The equilibrium conditions for most DSGE models can be expressed as a vector function $f$:
\begin{align}
f(E_t X_{t+1}, X_t, \epsilon_{t+1}) = 0,
\end{align}
where 0 is an $n\times 1$ vector of zeros, $X_t$ is an $n\times 1$ vector of endogenous variables, and $\epsilon_{t+1}$ is an $m\times 1$ vector of exogenous structural shocks to the model. $E_tX_{t+1}$ denotes the expectation of the period $t+1$ endogenous variables based on the information available to agents in the model as of time period $t$.
linearsolve.model has methods for computing linear and log-linear approximations of the model given above and methods for solving and simulating the linear model.
Example 1: Quickly Simulate an RBC Model
Here I demonstrate how relatively straightforward it is to approximate, solve, and simulate a DSGE model using linearsolve. In the example that follows, I describe the procedure more carefully.
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha A_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta)\right]\
C_t + K_{t+1} & = A_t K_t^{\alpha} + (1-\delta)K_t\
\log A_{t+1} & = \rho_a \log A_{t} + \epsilon_{t+1}
\end{align}
In the block of code that immediately follows, I input the model, solve for the steady state, compute the log-linear approximation of the equilibirum conditions, and compute some impulse responses following a shock to technology $A_t$.
End of explanation
"""
# Input model parameters
parameters = pd.Series(dtype=float)
parameters['alpha'] = .35
parameters['beta'] = 0.99
parameters['delta'] = 0.025
parameters['rhoa'] = .9
parameters['sigma'] = 1.5
parameters['A'] = 1
"""
Explanation: Example 2: An RBC Model with More Details
Consider the equilibrium conditions for a basic RBC model without labor:
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha A_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta)\right]\
Y_t & = A_t K_t^{\alpha}\
I_t & = K_{t+1} - (1-\delta)K_t\
Y_t & = C_t + I_t\
\log A_t & = \rho_a \log A_{t-1} + \epsilon_t
\end{align}
In the nonstochastic steady state, we have:
\begin{align}
K & = \left(\frac{\alpha A}{1/\beta+\delta-1}\right)^{\frac{1}{1-\alpha}}\
Y & = AK^{\alpha}\
I & = \delta K\
C & = Y - I
\end{align}
Given values for the parameters $\beta$, $\sigma$, $\alpha$, $\delta$, and $A$, steady state values of capital, output, investment, and consumption are easily computed.
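As a quick numeric sketch in plain Python (using the parameter values set above), the steady-state formulas give:

```python
alpha, beta, delta, A = 0.35, 0.99, 0.025, 1.0

K = (alpha * A / (1 / beta + delta - 1)) ** (1 / (1 - alpha))  # ~34.4
Y = A * K ** alpha                                             # ~3.45
I = delta * K                                                  # ~0.86
C = Y - I                                                      # ~2.59
print(K, Y, I, C)
```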
Initializing the model in linearsolve
To initialize the model, we need to first set the model's parameters. We do this by creating a Pandas Series variable called parameters:
End of explanation
"""
# Define function to compute equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Household Euler equation
euler_eqn = p.beta*fwd.c**-p.sigma*(p.alpha*fwd.y/fwd.k+1-p.delta) - cur.c**-p.sigma
# Production function
production_fuction = cur.a*cur.k**p.alpha - cur.y
# Capital evolution
capital_evolution = fwd.k - (1-p.delta)*cur.k - cur.i
# Goods market clearing
market_clearing = cur.c + cur.i - cur.y
# Exogenous technology
technology_proc = cur.a**p.rhoa- fwd.a
# Stack equilibrium conditions into a numpy array
return np.array([
euler_eqn,
production_fuction,
capital_evolution,
market_clearing,
technology_proc
])
"""
Explanation: Next, we need to define a function that returns the equilibrium conditions of the model. The function will take as inputs two vectors: one vector of "current" variables and another of "forward-looking" or one-period-ahead variables. The function will return an array that represents the equilibrium conditions of the model. We'll enter each equation with all variables moved to one side of the equals sign. For example, here's how we'll enter the production function:
production_function = technology_current*capital_current**alpha - output_current
Here the variable production_function stores the production function equation set equal to zero. We can enter the equations in almost any way we want. For example, we could also have entered the production function this way:
production_function = 1 - output_current/technology_current/capital_current**alpha
One more thing to consider: the natural log in the equation describing the evolution of total factor productivity will create problems for the solution routine later on. So rewrite the equation as:
\begin{align}
A_{t+1} & = A_{t}^{\rho_a}e^{\epsilon_{t+1}}\
\end{align}
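A one-line numeric check (with arbitrary hypothetical values) confirms the power form agrees with the original log specification:

```python
import math

rho_a, A_t, eps = 0.9, 1.2, 0.05
log_form = rho_a * math.log(A_t) + eps               # log A_{t+1}, original form
power_form = math.log(A_t ** rho_a * math.exp(eps))  # rewritten power form
print(log_form, power_form)  # identical up to floating-point error
```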
So the complete system of equations that we enter into the program looks like:
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha Y_{t+1} /K_{t+1}+ 1 - \delta)\right]\
Y_t & = A_t K_t^{\alpha}\
I_t & = K_{t+1} - (1-\delta)K_t\
Y_t & = C_t + I_t\
A_{t+1} & = A_{t}^{\rho_a}e^{\epsilon_{t+1}}
\end{align}
Now let's define the function that returns the equilibrium conditions:
End of explanation
"""
# Initialize the model
rbc = ls.model(equations = equations,
n_states=2,
n_exo_states=1,
var_names=['a','k','c','y','i'],
parameters=parameters)
"""
Explanation: Notice that inside the function we have to define the variables of the model from the elements of the input vectors variables_forward and variables_current.
Initializing the model
To initialize the model, we need to specify the total number of state variables in the model, the number of state variables with exogenous shocks, the names of the endogenous variables, and the parameters of the model.
It is essential that the variable names are ordered in the following way: first the state variables with exogenous shocks, then the state variables without shocks, and finally the control variables. Ordering within the groups doesn't matter.
End of explanation
"""
# Compute the steady state numerically
guess = [1,1,1,1,1]
rbc.compute_ss(guess)
print(rbc.ss)
"""
Explanation: Steady state
Next, we need to compute the nonstochastic steady state of the model. The .compute_ss() method can be used to compute the steady state numerically. The method's default is to use scipy's fsolve() function, but other scipy root-finding functions can be used: root, broyden1, and broyden2. The optional argument options lets the user pass keywords directly to the optimization function. Check out the documentation for Scipy's nonlinear solvers here: http://docs.scipy.org/doc/scipy/reference/optimize.html
End of explanation
"""
# Steady state solution
p = parameters
K = (p.alpha*p.A/(1/p.beta+p.delta-1))**(1/(1-p.alpha))
C = p.A*K**p.alpha - p.delta*K
Y = p.A*K**p.alpha
I = Y - C
rbc.set_ss([p.A,K,C,Y,I])
print(rbc.ss)
"""
Explanation: Note that the steady state is returned as a Pandas Series. Alternatively, you could compute the steady state directly and then set it with the rbc.set_ss() method:
End of explanation
"""
# Find the log-linear approximation around the non-stochastic steady state
rbc.log_linear_approximation()
print('The matrix A:\n\n',np.around(rbc.a,4),'\n\n')
print('The matrix B:\n\n',np.around(rbc.b,4))
"""
Explanation: Log-linearization and solution
Now we use the .log_linear_approximation() method to find the log-linear approximation to the model's equilibrium conditions. That is, we'll transform the nonlinear model into a linear model in which all variables are expressed as log-deviations from the steady state. Specifically, we'll compute the matrices $A$ and $B$ that satisfy:
\begin{align}
A E_t\left[ x_{t+1} \right] & = B x_t + \left[ \begin{array}{c} \epsilon_{t+1} \ 0 \end{array} \right],
\end{align}
where the vector $x_{t}$ denotes the log deviation of the endogenous variables from their steady state values.
End of explanation
"""
# Solve the model
rbc.solve_klein(rbc.a,rbc.b)
# Display the output
print('The matrix F:\n\n',np.around(rbc.f,4),'\n\n')
print('The matrix P:\n\n',np.around(rbc.p,4))
"""
Explanation: Finally, we need to obtain the solution to the log-linearized model. The solution is a pair of matrices $F$ and $P$ that specify:
The current values of the non-state variables $u_{t}$ as a linear function of the previous values of the state variables $s_t$.
The future values of the state variables $s_{t+1}$ as a linear function of the previous values of the state variables $s_t$ and the future realisation of the exogenous shock process $\epsilon_{t+1}$.
\begin{align}
u_t & = Fs_t\
s_{t+1} & = Ps_t + \epsilon_{t+1}.
\end{align}
We use the .solve_klein() method to find the solution.
End of explanation
"""
# Compute impulse responses and plot
rbc.impulse(T=41,t0=1,shocks=None,percent=True)
print('Impulse responses to a 0.01 unit shock to A:\n\n',rbc.irs['e_a'].head())
"""
Explanation: Impulse responses
Once the model is solved, use the .impulse() method to compute impulse responses to exogenous shocks to the state. The method creates the .irs attribute, a dictionary whose keys are the names of the exogenous shocks and whose values are Pandas DataFrames with the computed impulse responses. You can supply your own values for the shocks, but the default is 0.01 for each exogenous shock.
End of explanation
"""
rbc.irs['e_a'][['a','k','c','y','i']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
rbc.irs['e_a'][['e_a','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
"""
Explanation: Plotting is easy.
End of explanation
"""
rbc.stoch_sim(T=121,drop_first=100,cov_mat=np.array([0.00763**2]),seed=0,percent=True)
rbc.simulated[['k','c','y','i']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=4)
rbc.simulated[['a']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=4)
rbc.simulated['e_a'].plot(lw='5',alpha=0.5,grid=True).legend(ncol=4)
"""
Explanation: Stochastic simulation
Creating a stochastic simulation of the model is straightforward with the .stoch_sim() method. In the following example, I create a 121 period (including t=0) simulation by first simulating the model for 221 periods and then dropping the first 100 values. The standard deviation of the shock to $A_t$ is set to 0.00763. The seed for the numpy random number generator is set to 0.
End of explanation
"""
# Input model parameters
beta = 0.99
sigma= 1
eta = 1
omega= 0.8
kappa= (sigma+eta)*(1-omega)*(1-beta*omega)/omega
rhor = 0.9
phipi= 1.5
phiy = 0
rhog = 0.5
rhou = 0.5
rhov = 0.9
Sigma = 0.001*np.eye(3)
# Store parameters
parameters = pd.Series({
'beta':beta,
'sigma':sigma,
'eta':eta,
'omega':omega,
'kappa':kappa,
'rhor':rhor,
'phipi':phipi,
'phiy':phiy,
'rhog':rhog,
'rhou':rhou,
'rhov':rhov
})
# Define function that computes equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Exogenous demand
g_proc = p.rhog*cur.g - fwd.g
# Exogenous inflation
u_proc = p.rhou*cur.u - fwd.u
# Exogenous monetary policy
v_proc = p.rhov*cur.v - fwd.v
# Euler equation
euler_eqn = fwd.y -1/p.sigma*(cur.i-fwd.pi) + fwd.g - cur.y
# NK Phillips curve evolution
phillips_curve = p.beta*fwd.pi + p.kappa*cur.y + fwd.u - cur.pi
# interest rate rule
interest_rule = p.phiy*cur.y+p.phipi*cur.pi + fwd.v - cur.i
# Fisher equation
fisher_eqn = cur.i - fwd.pi - cur.r
# Stack equilibrium conditions into a numpy array
return np.array([
g_proc,
u_proc,
v_proc,
euler_eqn,
phillips_curve,
interest_rule,
fisher_eqn
])
# Initialize the nk model
nk = ls.model(equations=equations,
n_states=3,
n_exo_states = 3,
var_names=['g','u','v','i','r','y','pi'],
parameters=parameters)
# Set the steady state of the nk model
nk.set_ss([0,0,0,0,0,0,0])
# Find the log-linear approximation around the non-stochastic steady state
nk.linear_approximation()
# Solve the nk model
nk.solve_klein(nk.a,nk.b)
"""
Explanation: Example 3: A New-Keynesian Model
Consider the new-Keynesian business cycle model from Walsh (2017), chapter 8 expressed in log-linear terms:
\begin{align}
y_t & = E_ty_{t+1} - \sigma^{-1} (i_t - E_t\pi_{t+1}) + g_t\
\pi_t & = \beta E_t\pi_{t+1} + \kappa y_t + u_t\
i_t & = \phi_y y_t + \phi_{\pi} \pi_t + v_t\
r_t & = i_t - E_t\pi_{t+1}\
g_{t+1} & = \rho_g g_{t} + \epsilon_{t+1}^g\
u_{t+1} & = \rho_u u_{t} + \epsilon_{t+1}^u\
v_{t+1} & = \rho_v v_{t} + \epsilon_{t+1}^v
\end{align}
where $y_t$ is the output gap (log-deviation of output from the natural rate), $\pi_t$ is the quarterly rate of inflation between $t-1$ and $t$, $i_t$ is the nominal interest rate on funds moving between period $t$ and $t+1$, $r_t$ is the real interest rate, $g_t$ is the exogenous component of demand, $u_t$ is an exogenous component of inflation, and $v_t$ is the exogenous component of monetary policy.
Since the model is already linear in logs, there is no need to log-linearize the equilibrium conditions. We still need the matrices $A$ and $B$, which the .linear_approximation() method (used in the code above) computes directly.
Initialize model and solve
End of explanation
"""
# Compute impulse responses
nk.impulse(T=11,t0=1,shocks=None)
# Create the figure and axes
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(3,1,1)
ax2 = fig.add_subplot(3,1,2)
ax3 = fig.add_subplot(3,1,3)
# Plot commands
nk.irs['e_g'][['g','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Demand shock',ax=ax1).legend(loc='upper right',ncol=5)
nk.irs['e_u'][['u','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Inflation shock',ax=ax2).legend(loc='upper right',ncol=5)
nk.irs['e_v'][['v','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Interest rate shock',ax=ax3).legend(loc='upper right',ncol=5)
"""
Explanation: Compute impulse responses and plot
Compute impulse responses of the endogenous variables to a one percent shock to each exogenous variable.
End of explanation
"""
# Compute stochastic simulation
nk.stoch_sim(T=151,drop_first=100,cov_mat=Sigma,seed=0)
# Create the figure and axes
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
# Plot commands
nk.simulated[['y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Output, inflation, and interest rates',ax=ax1).legend(ncol=4)
nk.simulated[['g','u','v']].plot(lw='5',alpha=0.5,grid=True,title='Exogenous demand, inflation, and policy',ax=ax2).legend(ncol=4,loc='lower right')
# Plot simulated exogenous shocks
nk.simulated[['e_g','g']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
nk.simulated[['e_u','u']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
nk.simulated[['e_v','v']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
"""
Explanation: Construct a stochastic simulation and plot
Contruct a 151 period stochastic simulation by first siumlating the model for 251 periods and then dropping the first 100 values. The seed for the numpy random number generator is set to 0.
End of explanation
"""
|
yw-fang/readingnotes | machine-learning/GitHub/Git_in_pycharm.ipynb | apache-2.0 | ssh-keygen -t rsa -b 4096 -C "fyuewen@hotmail.com"
"""
Explanation: 1. version control using git built in pycharm
When using pycharm in Ubuntu, I got an error associated id_rsa. I was clear that this error must be caused by my settings on the shsh keys. In this ubuntu, I have generated muliple ssh private/public key paris before, this makes the pycharm version control frustrated.
Up till now, pycharm itself does not support several keys. However, a plugin called intellij-plugins can help save the key and passphrase information. For you information, you can see the stackoverflow discussion here
Here, I will show you how to solve this problem using the .ssh/config file.
To make it clear, I genereate a new ssh key pair:
End of explanation
"""
ssh-add ~/.ssh/id_rsa_pycharm-git
ssh-add -l # to ensure the key is added
"""
Explanation: I named the private key file 'id_rsa_pycharm-git'; correspondingly, its public key is 'id_rsa_pycharm-git.pub'.
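The .ssh/config entry mentioned above then ties this key to the Git host, so PyCharm's bundled Git picks the right identity. A minimal sketch (the host name is an assumption; adjust it to match your remote):

```
# ~/.ssh/config  (hypothetical entry for this setup)
Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_pycharm-git
    IdentitiesOnly yes
```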
End of explanation
"""
|
Kaggle/learntools | notebooks/nlp/raw/ex3.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import spacy
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.nlp.ex3 import *
print("\nSetup complete")
# Load the large model to get the vectors
nlp = spacy.load('en_core_web_lg')
review_data = pd.read_csv('../input/nlp-course/yelp_ratings.csv')
review_data.head()
"""
Explanation: Vectorizing Language
Embeddings are both conceptually clever and practically effective.
So let's try them for the sentiment analysis model you built for the restaurant. Then you can find the most similar review in the data set given some example text. It's a task where you can easily judge for yourself how well the embeddings work.
End of explanation
"""
reviews = review_data[:100]
# We just want the vectors so we can turn off other models in the pipeline
with nlp.disable_pipes():
vectors = np.array([nlp(review.text).vector for idx, review in reviews.iterrows()])
vectors.shape
"""
Explanation: Here's an example of loading some document vectors.
Calculating 44,500 document vectors takes about 20 minutes, so we'll get only the first 100. To save time, we'll load pre-saved document vectors for the hands-on coding exercises.
End of explanation
"""
# Loading all document vectors from file
vectors = np.load('../input/nlp-course/review_vectors.npy')
"""
Explanation: The result is a matrix of 100 rows and 300 columns.
Why 100 rows?
Because we have 1 row for each review.
Why 300 columns?
This is the same length as word vectors. See if you can figure out why document vectors have the same length as word vectors (some knowledge of linear algebra or vector math would be needed to figure this out).
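As a hint, here's a sketch: spaCy's `doc.vector` is by default the average of the token vectors, and averaging vectors preserves their dimensionality (toy 3-dimensional "word vectors" below, purely hypothetical):

```python
token_vectors = [
    [1.0, 2.0, 3.0],   # vector for token 1
    [3.0, 4.0, 5.0],   # vector for token 2
]
dim = len(token_vectors[0])
doc_vector = [
    sum(vec[i] for vec in token_vectors) / len(token_vectors) for i in range(dim)
]
print(doc_vector)  # [2.0, 3.0, 4.0] -- same length as each token vector
```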
Go ahead and run the following cell to load in the rest of the document vectors.
End of explanation
"""
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(vectors, review_data.sentiment,
test_size=0.1, random_state=1)
# Create the LinearSVC model
model = LinearSVC(random_state=1, dual=False)
# Fit the model
____
# Uncomment and run to see model accuracy
# print(f'Model test accuracy: {model.score(X_test, y_test)*100:.3f}%')
# Uncomment to check your work
#q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
model = LinearSVC(random_state=1, dual=False)
model.fit(X_train, y_train)
q_1.assert_check_passed()
# Scratch space in case you want to experiment with other models
#second_model = ____
#second_model.fit(X_train, y_train)
#print(f'Model test accuracy: {second_model.score(X_test, y_test)*100:.3f}%')
"""
Explanation: 1) Training a Model on Document Vectors
Next you'll train a LinearSVC model using the document vectors. It runs pretty quick and works well in high dimensional settings like you have here.
After running the LinearSVC model, you might try experimenting with other types of models to see whether it improves your results.
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
"""
Explanation: Document Similarity
For the same tea house review, find the most similar review in the dataset using cosine similarity.
2) Centering the Vectors
Sometimes people center document vectors when calculating similarities. That is, they calculate the mean vector from all documents, and they subtract this from each individual document's vector. Why do you think this could help with similarity metrics?
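A toy sketch (assumed data, not the Yelp vectors) of why centering can matter: when every vector shares a large common component, raw cosine similarities all crowd near 1, while centering exposes the differences between documents:

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / np.sqrt(a.dot(a) * b.dot(b))

# Two documents that differ in one coordinate each, plus a large
# component shared by both (assumed data for illustration only).
common = np.full(300, 5.0)
a = common.copy(); a[0] += 1.0
b = common.copy(); b[1] += 1.0

raw = cosine_similarity(a, b)  # near 1.0: dominated by the shared component
mean = (a + b) / 2.0
centered = cosine_similarity(a - mean, b - mean)  # only the differences remain
print(round(raw, 4), round(centered, 4))
```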
Run the following line after you've decided your answer.
End of explanation
"""
review = """I absolutely love this place. The 360 degree glass windows with the
Yerba buena garden view, tea pots all around and the smell of fresh tea everywhere
transports you to what feels like a different zen zone within the city. I know
the price is slightly more compared to the normal American size, however the food
is very wholesome, the tea selection is incredible and I know service can be hit
or miss often but it was on point during our most recent visit. Definitely recommend!
I would especially recommend the butternut squash gyoza."""
def cosine_similarity(a, b):
return np.dot(a, b)/np.sqrt(a.dot(a)*b.dot(b))
review_vec = nlp(review).vector
## Center the document vectors
# Calculate the mean for the document vectors, should have shape (300,)
vec_mean = vectors.mean(axis=0)
# Subtract the mean from the vectors
centered = ____
# Calculate similarities for each document in the dataset
# Make sure to subtract the mean from the review vector
sims = ____
# Get the index for the most similar document
most_similar = ____
# Uncomment to check your work
#q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
review_vec = nlp(review).vector
## Center the document vectors
# Calculate the mean for the document vectors
vec_mean = vectors.mean(axis=0)
# Subtract the mean from the vectors
centered = vectors - vec_mean
# Calculate similarities for each document in the dataset
# Make sure to subtract the mean from the review vector
sims = np.array([cosine_similarity(review_vec - vec_mean, vec) for vec in centered])
# Get the index for the most similar document
most_similar = sims.argmax()
q_3.assert_check_passed()
print(review_data.iloc[most_similar].text)
"""
Explanation: 3) Find the most similar review
Given an example review below, find the most similar document within the Yelp dataset using the cosine similarity.
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
"""
Explanation: Even though there are many different sorts of businesses in our Yelp dataset, you should have found another tea shop.
4) Looking at similar reviews
If you look at other similar reviews, you'll see many coffee shops. Why do you think reviews for coffee are similar to the example review which mentions only tea?
End of explanation
"""
|
zklgame/CatEyeNets | test/two_layer_net.ipynb | mit | import os
os.chdir(os.getcwd() + '/..')
# Run some setup code for this notebook
import random
import numpy as np
import matplotlib.pyplot as plt
from utils.data_utils import load_CIFAR10
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from classifiers.neural_net import TwoLayerNet
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / np.maximum(1e-8, np.abs(x) + np.abs(y)))
# Create a small net and toy data to check implementations.
# set random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
"""
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
"""
scores = net.loss(X)
print('scores: ')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small, get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
"""
Explanation: Forward pass: compute scores
End of explanation
"""
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
"""
Explanation: Forward pass: compute loss
End of explanation
"""
from utils.gradient_check import eval_numerical_gradient
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
"""
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
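The idea behind the numeric check can be sketched with a centered difference (a simplified stand-in for `eval_numerical_gradient`; the helper name and test function below are illustrative, not the course code):

```python
import numpy as np

def numeric_gradient(f, x, h=1e-5):
    # Centered difference (f(x+h) - f(x-h)) / 2h, one coordinate at a time.
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h; fp = f(x)
        x[ix] = old - h; fm = f(x)
        x[ix] = old
        grad[ix] = (fp - fm) / (2.0 * h)
        it.iternext()
    return grad

# Sanity check on f(x) = sum(x**2), whose analytic gradient is 2x.
x = np.array([[1.0, -2.0], [3.0, 0.5]])
num_grad = numeric_gradient(lambda v: np.sum(v ** 2), x)
print(np.max(np.abs(num_grad - 2.0 * x)))  # should be tiny
```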
End of explanation
"""
net = init_toy_model()
stats = net.train(X, y, X, y, learning_rate=1e-1, reg=5e-6, num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
"""
Explanation: Train the network
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Split the data
num_training = 49000
num_validation = 1000
num_test = 1000
mask = range(num_training, num_training+num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Preprocessing: reshape the image data into rows
X_train = X_train.reshape(X_train.shape[0], -1)
X_val = X_val.reshape(X_val.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
# Normalize the data: subtract the mean rows
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
print(X_train.shape, X_val.shape, X_test.shape)
"""
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
"""
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, num_iters=1000, batch_size=200, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
"""
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
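The decay schedule itself is just a per-epoch multiplication; a minimal sketch (values mirror the training call below, the loop body is elided):

```python
learning_rate = 1e-4
learning_rate_decay = 0.95

for epoch in range(5):
    # ... run one epoch of SGD updates at the current learning_rate ...
    learning_rate *= learning_rate_decay

print(learning_rate)  # shrunk by a factor of 0.95 per epoch
```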
End of explanation
"""
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from utils.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
"""
Explanation: Debug the training
With the default parameters we provided above, you should get an accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
"""
input_size = 32 * 32 * 3
num_classes = 10
hidden_layer_size = [50]
learning_rates = [3e-4, 9e-4, 1e-3, 3e-3]
regularization_strengths = [7e-1, 8e-1, 9e-1, 1]
results = {}
best_model = None
best_val = -1
for hidden_size in hidden_layer_size:
for lr in learning_rates:
for reg in regularization_strengths:
model = TwoLayerNet(input_size, hidden_size, num_classes, std=1e-3)
stats = model.train(X_train, y_train, X_val, y_val,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, num_iters=5000, batch_size=200, verbose=True)
train_acc = (model.predict(X_train) == y_train).mean()
val_acc = (model.predict(X_val) == y_val).mean()
print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))
results[(hidden_size, lr, reg)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_model = model
print()
print()
print('best val_acc: %f' % (best_val))
old_lr = -1
for hidden_size, lr, reg in sorted(results):
if old_lr != lr:
old_lr = lr
print()
train_acc, val_acc = results[(hidden_size, lr, reg)]
print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))
for hidden_size, lr, reg in sorted(results):
train_acc, val_acc = results[(hidden_size, lr, reg)]
print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))
# visualize the weights of the best network
show_net_weights(best_model)
"""
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can with a fully-connected Neural Network. For every 1% above 52% on the test set we will award you one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, or adding features to the solver, etc.).
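One common alternative to the fixed grid used below is random search, sampling learning rate and regularization strength log-uniformly (the ranges here are assumptions for illustration, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = []
for _ in range(5):
    lr = 10.0 ** rng.uniform(-4.0, -2.5)   # learning rate in [1e-4, ~3e-3)
    reg = 10.0 ** rng.uniform(-1.0, 0.0)   # regularization in [0.1, 1.0)
    candidates.append((lr, reg))
    # ... train a TwoLayerNet with (lr, reg) and keep the best val_acc ...
for lr, reg in candidates:
    print('lr: %e, reg: %e' % (lr, reg))
```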
End of explanation
"""
test_acc = (best_model.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
"""
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation
"""
|
natashabatalha/PandExo | notebooks/HST_WFC3.ipynb | gpl-3.0 | import sys
sys.path.append('..')
import pandexo.engine.justdoit as jdi
"""
Explanation: HST's Transiting Exoplanet Noise Simulator
This file demonstrates how to predict the:
1. Transmission/emission spectrum S/N ratio
2. Observation start window
for any system observed with WFC3/IR.
Background information
Pandeia: ETC for JWST
PandExo: Exoplanet noise simulator for JWST
End of explanation
"""
exo_dict = jdi.load_exo_dict()
"""
Explanation: Edit Inputs
Load in a blank exoplanet dictionary
End of explanation
"""
#WASP-43
exo_dict['star']['jmag'] = 9.995 # J magnitude of the system
exo_dict['star']['hmag'] = 9.397 # H magnitude of the system
#WASP-43b
exo_dict['planet']['type'] = 'user' # user specified inputs
exo_dict['planet']['exopath'] = jdi.os.getcwd()+'/WASP43b-Eclipse_Spectrum.txt' # filename for model spectrum
exo_dict['planet']['w_unit'] = 'um' # wavelength unit
exo_dict['planet']['f_unit'] = 'fp/f*' # flux ratio unit (can also put "rp^2/r*^2")
exo_dict['planet']['depth'] = 4.0e-3 # flux ratio
exo_dict['planet']['i'] = 82.6 # Orbital inclination in degrees
exo_dict['planet']['ars'] = 5.13 # Semi-major axis / stellar radius
exo_dict['planet']['period'] = 0.8135 # Orbital period in days
exo_dict['planet']['transit_duration'] = 4170.0/60/60/24 # (optional if given above info) transit duration in days
exo_dict['planet']['w'] = 90 #(optional) longitude of periastron. Default is 90
exo_dict['planet']['ecc'] = 0 #(optional) eccentricity. Default is 0
"""
Explanation: Edit stellar and planet inputs
End of explanation
"""
inst_dict = jdi.load_mode_dict('WFC3 G141')
"""
Explanation: Step 2) Load in instrument dictionary
WFC3 G141
WFC3 G102
End of explanation
"""
exo_dict['observation']['noccultations'] = 5 # Number of transits/eclipses
inst_dict['configuration']['detector']['subarray'] = 'GRISM256' # GRISM256 or GRISM512
inst_dict['configuration']['detector']['nsamp'] = 10 # WFC3 N_SAMP, 1..15
inst_dict['configuration']['detector']['samp_seq'] = 'SPARS10' # WFC3 SAMP_SEQ, SPARS5 or SPARS10
inst_dict['strategy']['norbits'] = 3 # Number of HST orbits
inst_dict['strategy']['nchan'] = 15 # Number of spectrophotometric channels
inst_dict['strategy']['scanDirection'] = 'Forward' # Spatial scan direction, Forward or Round Trip
inst_dict['strategy']['schedulability'] = 30 # 30 for small/medium program, 100 for large program
inst_dict['strategy']['windowSize'] = 20 # (optional) Observation start window size in minutes. Default is 20 minutes.
inst_dict['strategy']['useFirstOrbit'] = True # (optional) Default is False, option to use first orbit
inst_dict['strategy']['calculateRamp'] = True # Enables ramp effect simulation for flux plot
inst_dict['strategy']['targetFluence'] = 30000 # Maximum pixel fluence level (in electrons)
"""
Explanation: Edit HST/WFC3 detector and observation inputs
End of explanation
"""
foo = jdi.run_pandexo(exo_dict, inst_dict, output_file='wasp43b.p')
foo['wfc3_TExoNS']['info']
inst_dict['configuration']['detector']['nsamp'] = None
inst_dict['configuration']['detector']['samp_seq'] = None
bar = jdi.run_pandexo(exo_dict, inst_dict, output_file='wasp43b.p')
bar['wfc3_TExoNS']['info']
inst_dict['strategy']['scanDirection'] = 'Round Trip'
hst = jdi.run_pandexo(exo_dict, inst_dict, output_file='wasp43b.p')
hst['wfc3_TExoNS']['info']
"""
Explanation: Run PandExo
jdi.run_pandexo(exo, inst, param_space=0, param_range=0, save_file=True,
                output_path=os.getcwd(), output_file='')
See the wiki Attributes page for a more thorough explanation of the inputs
End of explanation
"""
import pandexo.engine.justplotit as jpi
#using foo from above
#other keys include model=True/False
datawave, dataspec, dataerror, modelwave, modelspec = jpi.hst_spec(foo)
"""
Explanation: Plot Results
Plot simulated spectrum using specified file
End of explanation
"""
#using foo from above
obsphase1, obstr1, obsphase2, obstr2,rms = jpi.hst_time(foo)
"""
Explanation: Compute earliest and latest possible start times for given start window size
End of explanation
"""
obsphase1, counts1, obsphase2, counts2, noise = jpi.hst_simulated_lightcurve(foo)
"""
Explanation: Compute simulated lightcurves in fluence (unit: electrons)
if configuration option inst_dict['strategy']['calculateRamp'] is set as True, lightcurves will have ramp effect systematics
End of explanation
"""
hst['wfc3_TExoNS']['info']
"""
Explanation: Print important info for observation
End of explanation
"""
|
bioinf-jku/SNNs | TF_1_x/SelfNormalizingNetworks_MLP_MNIST.ipynb | gpl-3.0 | import tensorflow as tf
import numpy as np
from sklearn.preprocessing import StandardScaler
import numbers
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.layers import utils
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
print(tf.__version__)
# Parameters
learning_rate = 0.05
training_epochs = 15
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 784 # 1st layer number of features
n_hidden_2 = 784 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
dropoutRate = tf.placeholder(tf.float32)
is_training= tf.placeholder(tf.bool)
"""
Explanation: Tutorial on self-normalizing networks on the MNIST data set: multi-layer perceptrons
Author: Guenter Klambauer, 2017
Derived from: Aymeric Damien
End of explanation
"""
def selu(x):
with ops.name_scope('selu') as scope:
alpha = 1.6732632423543772848170429916717
scale = 1.0507009873554804934193349852946
return scale*tf.where(x>=0.0, x, alpha*tf.nn.elu(x))
"""
Explanation: (1) Definition of scaled exponential linear units (SELUs)
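A NumPy re-implementation sketch of the same activation (not the TensorFlow code above), plus a quick check of the self-normalizing property on standard-normal inputs; the constants are the fixed-point values from the SNN paper:

```python
import numpy as np

alpha = 1.6732632423543772848170429916717
scale = 1.0507009873554804934193349852946

def selu_np(x):
    # elu(x) = exp(x) - 1 for x < 0, so this mirrors scale*where(x>=0, x, alpha*elu(x)).
    return scale * np.where(x >= 0.0, x, alpha * (np.exp(x) - 1.0))

# For zero-mean, unit-variance inputs, SELU keeps mean ~0 and variance ~1.
rng = np.random.default_rng(42)
x = rng.normal(size=1_000_000)
y = selu_np(x)
print(float(y.mean()), float(y.var()))
```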
End of explanation
"""
def dropout_selu(x, rate, alpha= -1.7580993408473766, fixedPointMean=0.0, fixedPointVar=1.0,
noise_shape=None, seed=None, name=None, training=False):
"""Dropout to a value with rescaling."""
def dropout_selu_impl(x, rate, alpha, noise_shape, seed, name):
keep_prob = 1.0 - rate
x = ops.convert_to_tensor(x, name="x")
if isinstance(keep_prob, numbers.Real) and not 0 < keep_prob <= 1:
raise ValueError("keep_prob must be a scalar tensor or a float in the "
"range (0, 1], got %g" % keep_prob)
keep_prob = ops.convert_to_tensor(keep_prob, dtype=x.dtype, name="keep_prob")
keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())
alpha = ops.convert_to_tensor(alpha, dtype=x.dtype, name="alpha")
alpha.get_shape().assert_is_compatible_with(tensor_shape.scalar())
if tensor_util.constant_value(keep_prob) == 1:
return x
noise_shape = noise_shape if noise_shape is not None else array_ops.shape(x)
random_tensor = keep_prob
random_tensor += random_ops.random_uniform(noise_shape, seed=seed, dtype=x.dtype)
binary_tensor = math_ops.floor(random_tensor)
ret = x * binary_tensor + alpha * (1-binary_tensor)
a = tf.sqrt(fixedPointVar / (keep_prob *((1-keep_prob) * tf.pow(alpha-fixedPointMean,2) + fixedPointVar)))
b = fixedPointMean - a * (keep_prob * fixedPointMean + (1 - keep_prob) * alpha)
ret = a * ret + b
ret.set_shape(x.get_shape())
return ret
with ops.name_scope(name, "dropout", [x]) as name:
return utils.smart_cond(training,
lambda: dropout_selu_impl(x, rate, alpha, noise_shape, seed, name),
lambda: array_ops.identity(x))
"""
Explanation: (2) Definition of dropout variant for SNNs
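The mechanics can be sketched in NumPy (a simplified stand-in for the TF code above, with fixed point mean 0 and variance 1): dropped units are set to the SELU saturation value alpha', and an affine map restores the fixed-point mean and variance:

```python
import numpy as np

alpha_prime = -1.7580993408473766   # SELU saturation value, -scale*alpha
rate = 0.1
keep = 1.0 - rate

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)      # stand-in for activations at the fixed point
mask = rng.random(x.shape) < keep

dropped = np.where(mask, x, alpha_prime)
# Affine correction from the dropout_selu code, for mean 0 / variance 1.
a = np.sqrt(1.0 / (keep * ((1.0 - keep) * alpha_prime ** 2 + 1.0)))
b = -a * (1.0 - keep) * alpha_prime
out = a * dropped + b

print(float(out.mean()), float(out.var()))  # both close to the fixed point (0, 1)
```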
End of explanation
"""
# (1) Scale input to zero mean and unit variance
scaler = StandardScaler().fit(mnist.train.images)
# Tensorboard
logs_path = '~/tmp'
# Create model
def multilayer_perceptron(x, weights, biases, rate, is_training):
# Hidden layer with SELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
#netI_1 = layer_1
layer_1 = selu(layer_1)
layer_1 = dropout_selu(layer_1,rate, training=is_training)
# Hidden layer with SELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
#netI_2 = layer_2
layer_2 = selu(layer_2)
layer_2 = dropout_selu(layer_2,rate, training=is_training)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
"""
Explanation: (3) Input data scaled to zero mean and unit variance
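What `StandardScaler` does, sketched with plain NumPy (per-feature statistics computed column-wise on the training set; the toy matrix is an assumption):

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

mean = X.mean(axis=0)    # per-feature mean, like scaler.mean_
std = X.std(axis=0)      # per-feature std, like scaler.scale_
X_scaled = (X - mean) / std

print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # ~[1. 1.]
```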
End of explanation
"""
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1],stddev=np.sqrt(1/n_input))),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2],stddev=np.sqrt(1/n_hidden_1))),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes],stddev=np.sqrt(1/n_hidden_2)))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1],stddev=0)),
'b2': tf.Variable(tf.random_normal([n_hidden_2],stddev=0)),
'out': tf.Variable(tf.random_normal([n_classes],stddev=0))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases, rate=dropoutRate, is_training=is_training)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
# Initializing the variables
init = tf.global_variables_initializer()
# Create a histogramm for weights
tf.summary.histogram("weights2", weights['h2'])
tf.summary.histogram("weights1", weights['h1'])
# Create a summary to monitor cost tensor
tf.summary.scalar("loss", cost)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("accuracy", accuracy)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
# Launch the graph
gpu_options = tf.GPUOptions(allow_growth=True)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
sess.run(init)
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
batch_x = scaler.transform(batch_x)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y, dropoutRate: 0.05, is_training:True})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=","{:.9f}".format(avg_cost))
accTrain, costTrain = sess.run([accuracy, cost],
feed_dict={x: batch_x, y: batch_y,
dropoutRate: 0.0, is_training:False})
print("Train-Accuracy:", accTrain,"Train-Loss:", costTrain)
batch_x_test, batch_y_test = mnist.test.next_batch(512)
batch_x_test = scaler.transform(batch_x_test)
accTest, costVal = sess.run([accuracy, cost], feed_dict={x: batch_x_test, y: batch_y_test,
dropoutRate: 0.0, is_training:False})
print("Validation-Accuracy:", accTest,"Val-Loss:", costVal,"\n")
"""
Explanation: (4) Initialization with STDDEV of sqrt(1/n)
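Why stddev = sqrt(1/n): for standardized inputs, each pre-activation W·x then has approximately zero mean and unit variance, which is the regime the SELU fixed point assumes. A quick check with random data (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in = 784
W = rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_in, 256))

x = rng.normal(size=(2_000, n_in))   # standardized inputs
z = x @ W                            # pre-activations

# Var(z) ~= n_in * (1/n_in) * 1 = 1, independent of the layer width.
print(float(z.mean()), float(z.var()))
```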
End of explanation
"""
|
jlgelpi/BioPhysics | Notebooks/6m0j_check.ipynb | mit | %load_ext autoreload
%autoreload 2
"""
Explanation: Structure checking tutorial
A complete checking analysis of a single structure follows.
use .revert_changes() at any time to recover the original structure
Structure checking is a key step before setting up a protein system for simulations.
A number of common issues found in structures deposited at the Protein Data Bank may compromise the success of the simulation, or may suggest that longer equilibration procedures are necessary.
The biobb_structure_checking modules allow you to:
- Do basic manipulations on structures (selection of models, chains, alternative locations)
- Detect and fix amide assignments, wrong chirality
- Detect and fix protein backbone issues (missing fragments, and atoms, capping)
- Detect and fix missing side-chain atoms
- Add hydrogen atoms according to several criteria
- Detect and classify clashes
- Detect possible SS bonds
The biobb_structure_checking modules can also be used at the command line via biobb_structure_checking/bin/check_structure
End of explanation
"""
import biobb_structure_checking
import biobb_structure_checking.constants as cts
from biobb_structure_checking.structure_checking import StructureChecking
base_dir_path=biobb_structure_checking.__path__[0]
args = cts.set_defaults(base_dir_path,{'notebook':True})
"""
Explanation: Installation
Basic imports and initialization
End of explanation
"""
with open(args['commands_help_path']) as help_file:
print(help_file.read())
#TODO: prepare a specific help method
# print_help(command)
"""
Explanation: General help
End of explanation
"""
base_path = '/home/gelpi/DEVEL/BioPhysics/wdir/'
args['input_structure_path'] = base_path + '6m0j.cif'
args['output_structure_path'] = base_path + '6m0j_fixed.pdb'
args['output_structure_path_charges'] = base_path + '6m0j_fixed.pdbqt'
args['debug'] = False
args['verbose'] = False
"""
Explanation: Set input (PDB or local file, pdb or mmCif formats allowed) and output (local file, pdb format).
Use pdb:pdbid for downloading structure from PDB (RCSB)
End of explanation
"""
st_c = StructureChecking(base_dir_path, args)
"""
Explanation: Initializing checking engine, loading structure and showing statistics
End of explanation
"""
st_c.models()
"""
Explanation: models
Checks for the presence of models in the structure.
MD simulations require a single structure, although some structures (e.g. biounits) may be defined as a series of models, in such case all of them are usually required.
Use models('--select N') to select model number N for further analysis
End of explanation
"""
st_c.chains()
"""
Explanation: chains
Checks for chains (also obtained from print_stats), and allow to select one or more.
MD simulations are usually performed with complete structures. However input structure may contain several copies of the system, or contains additional chains like peptides or nucleic acids that may be removed.
Use chains('X,Y') to select chain(s) X and Y to proceed
End of explanation
"""
st_c.altloc()
"""
Explanation: altloc
Checks for the presence of residues with alternative locations. Atoms with alternative coordinates and their occupancy are reported.
MD simulations requires a single position for each atom.
Use altloc('occupancy | alt_ids | list of res:id') to select the alternative
End of explanation
"""
st_c.altloc('occupancy')
st_c.altloc()
"""
Explanation: We need to choose one of the alternative forms for each residue
End of explanation
"""
st_c.metals()
"""
Explanation: metals
Detects HETATM being metal ions allow to selectively remove them.
To remove use metals (' All | None | metal_type list | residue list ')
End of explanation
"""
st_c.ligands()
st_c.ligands('All')
st_c.ligands()
"""
Explanation: ligands
Detects HETATM (excluding Water molecules) to selectively remove them.
To remove use ligands('All | None | Residue List (by id, by num)')
End of explanation
"""
st_c.rem_hydrogen()
"""
Explanation: rem_hydrogen
Detects and remove hydrogen atoms.
MD setup can be done with the original H atoms, however to prevent from non standard labelling, remove them is safer.
To remove use rem_hydrogen('yes')
End of explanation
"""
st_c.water()
st_c.water("yes")
"""
Explanation: water
Detects water molecules and allows to remove them
Crystallographic water molecules may be relevant for keeping the structure, however in most cases only some of them are required. These can be later added using other methods (titration) or manually.
To remove water molecules use water('yes')
End of explanation
"""
st_c.amide()
"""
Explanation: amide
Amide terminal atoms in Asn ang Gln residues can be labelled incorrectly.
amide suggests possible fixes by checking the surrounding environment.
To fix use amide ('All | None | residue_list')
Note that the inversion of amide atoms may trigger additional contacts.
End of explanation
"""
st_c.amide('all')
"""
Explanation: Fix all amide residues and recheck
End of explanation
"""
st_c.amide('A42,A103')
st_c.amide('E394')
"""
Explanation: Comparing both checks, it becomes clear that GLN A42, GLN E498, ASN A103, and ASN A194 are getting new contacts as they have both changed; ASN E394 is worse, as it now has two contacts
End of explanation
"""
st_c.chiral()
"""
Explanation: chiral
Side chains of Thr and Ile are chiral; incorrect atom labelling leads to the wrong chirality.
To fix use chiral('All | None | residue_list')
End of explanation
"""
st_c.backbone()
st_c.backbone('--fix_atoms All --fix_chain none --add_caps none')
"""
Explanation: Backbone
Detects and fixes several problems with the backbone
use any of
--fix_atoms All|None|Residue List
--fix_chain All|None|Break list
--add_caps All|None|Terms|Breaks|Residue list
--no_recheck
--no_check_clashes
End of explanation
"""
st_c.fixside()
"""
Explanation: fixside
Detects and re-builds missing protein side chains.
To fix use fixside('All | None | residue_list')
End of explanation
"""
st_c.getss()
st_c.getss('all')
"""
Explanation: getss
Detects possible -S-S- bonds based on distance criteria.
Proper simulation requires those bonds to be correctly set. Use All|None|residueList to mark them
End of explanation
"""
st_c.add_hydrogen()
st_c.add_hydrogen('auto')
"""
Explanation: Add_hydrogens
Add Hydrogen Atoms. Auto: std changes at pH 7.0. His->Hie. pH: set pH value
list: Explicit list as [*:]HisXXHid, Interactive[_his]: Prompts for all selectable residues
Fixes missing side chain atoms unless --no_fix_side is set
Existing hydrogen atoms are removed before adding new ones unless --keep_h set.
End of explanation
"""
st_c.clashes()
"""
Explanation: clashes
Detects steric clashes based on distance criteria.
Contacts are classified in:
* Severe: Too close atoms, usually indicating superimposed structures or badly modelled regions. Should be fixed.
* Apolar: VdW collisions. Usually fixed during the simulation.
* Polar and ionic: Usually indicate wrong side chain conformations. Usually fixed during the simulation.
End of explanation
"""
st_c.checkall()
st_c._save_structure(args['output_structure_path'])
st_c.rem_hydrogen('yes')
#st_c.add_hydrogen('--add_charges --add_mode auto')
#Alternative way calling through command line
import os
os.system('check_structure -i ' + args['output_structure_path'] + ' -o ' + args['output_structure_path_charges'] + ' add_hydrogen --add_charges --add_mode auto')
#st_c._save_structure(args['output_structure_path_charges'])
#st_c.revert_changes()
"""
Explanation: Complete check in a single method
End of explanation
"""
|
eford/rebound | ipython_examples/Testparticles.ipynb | gpl-3.0 | import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-3, a=1, e=0.05)
sim.move_to_com()
sim.integrator = "whfast"
sim.dt = 0.05
sim.status()
"""
Explanation: Test particles
In this tutorial, we run a simulation with many test particles. A simulation with test particles can be much faster, because it scales as $\mathcal{O}(N)$ compared to a simulation with massive particles, which scales as $\mathcal{O}(N^2)$.
There are two types of test particles implemented in REBOUND. We first talk about real test particles, i.e. particles that have no mass and therefore do not perturb any other particle. In REBOUND, these are referred to as type 0.
Let's first set up two massive particles in REBOUND, move to the center of mass frame, and choose WHFast as the integrator.
End of explanation
"""
import numpy as np
N_testparticle = 1000
a_initial = np.linspace(1.1, 3, N_testparticle)
for a in a_initial:
sim.add(a=a,f=np.random.rand()*2.*np.pi) # mass is set to 0 by default, random true anomaly
"""
Explanation: Next, we'll add the test particles. We simply set the mass to zero. Note that we give the add() function no m argument, so it sets the mass to zero by default. We randomize the true anomaly of the particles and place them outside the orbit of the massive planet.
The test particles must be added after all massive planets have been added.
End of explanation
"""
sim.N_active = 2
"""
Explanation: Next, we set the N_active variable of REBOUND to the number of active particles in our simulation. Here, we have two active (massive) particles, the star and the planet.
End of explanation
"""
t_max = 200.*2.*np.pi
N_out = 10
xy = np.zeros((N_out, N_testparticle, 2))
times = np.linspace(0, t_max, N_out)
for i, time in enumerate(times):
sim.integrate(time)
for j, p in enumerate(sim.particles[2:]):
xy[i][j] = [p.x, p.y]
"""
Explanation: Next, let's do the simulation. We will run it for 200 orbits of the planet which, in our units of $G=1$, is $t_{\rm max} = 200\cdot2\pi$. While the simulation runs, we'll store the positions of all test particles at 10 evenly spaced times during the interval.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-3,3])
ax.set_ylim([-3,3])
plt.scatter(xy[:,:,0], xy[:,:,1], marker=".", linewidths=0);
"""
Explanation: We now plot the test particles' positions.
End of explanation
"""
orbits = sim.calculate_orbits()[1:]
a_final = [o.a for o in orbits]
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_yscale('log')
ax.set_xlabel(r"period ratio $r$")
ax.set_ylabel("relative semi-major axis change")
plt.plot(np.power(a_initial,1.5),(np.fabs(a_final-a_initial)+1.0e-16)/a_initial,marker=".");
"""
Explanation: One can see that some particles changed their orbits quite significantly, while others seem to stay roughly on circular orbits. To investigate this a bit further, we now calculate and plot the relative change of the test particles' semi-major axis over the duration of the simulation. We'll plot it as a function of the initial period ratio $r=P_{\rm test particle}/P_{\rm planet}$ for which we make use of Kepler's law, $P = 2\pi\sqrt{a^3/GM}$.
End of explanation
"""
e_final = np.array([o.e for o in orbits])
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
#ax.set_ylim([0,1])
ax.set_yscale('log')
ax.set_xlabel(r"period ratio $r$")
ax.set_ylabel("final eccentricity")
plt.plot(np.power(a_initial,1.5),e_final+1.0e-16,marker=".");
"""
Explanation: Very close to the planet test particles change their semi-major axis by order unity. These particles have a close encounter with the planet and get scattered.
We also see two peaks at $r=2$ and $r=3$. These correspond to mean motion resonances. We can also see the mean motion resonances by plotting the eccentricities of the particles.
End of explanation
"""
print(sim.calculate_orbits()[0].a)
"""
Explanation: Once again, we see peaks at $r=2$ and $r=3$, corresponding to the 2:1 and 3:1 mean motion resonance. You can even see a hint of an effect at $r=4$, the 4:1 mean motion resonance.
In the above example, the planet did not change its semi-major axis as the test particles have zero mass and do not affect any other particles.
End of explanation
"""
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-3, a=1, e=0.05)
sim.move_to_com()
sim.integrator = "whfast"
sim.dt = 0.05
N_testparticle = 1000
a_initial = np.linspace(1.1, 3, N_testparticle)
for a in a_initial:
sim.add(a=a,f=np.random.rand()*2.*np.pi, m=1e-7)
"""
Explanation: Let us change this assumption by allowing the test particles to have a small mass, so that they influence the planet. Test particles still do not influence other test particles. This setup is referred to as type 1 in REBOUND.
End of explanation
"""
sim.N_active = 2
sim.testparticle_type = 1
"""
Explanation: As above, we set N_active to the number of massive bodies. We also set the testparticle_type to 1, which allows interactions between test particles and massive particles, but not between test particles themselves. This is similar to what MERCURY calls small bodies.
End of explanation
"""
sim.integrate(t_max)
print(sim.calculate_orbits()[0].a)
"""
Explanation: If we integrate this simulation forwards in time and output the semi-major axis of the planet, we can see that it changed slightly from the initial $a=1$ due to interactions with the test particles.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_brainstorm_auditory.ipynb | bsd-3-clause
# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
"""
Explanation: Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker <http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300>_.
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
"""
use_precomputed = True
"""
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
use_precomputed = False running time of this script can be several
minutes even on a fast computer.
End of explanation
"""
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
"""
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
"""
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)
"""
Explanation: In the memory-saving mode we use preload=False, relying on the
memory-efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in memory.
End of explanation
"""
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,
ecg=True)
"""
Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
"""
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.annotations = annotations
del onsets, durations, descriptions
"""
Explanation: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
"""
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
"""
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
"""
raw.plot(block=True)
"""
Explanation: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
"""
if not use_precomputed:
meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)
raw.plot_psd(tmax=np.inf, picks=meg_picks)
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks=meg_picks)
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz
or 60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the
original 60 Hz artifact and its harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was low-pass filtered during the acquisition. In
memory-saving mode we do the filtering at the evoked stage, which is not
something you would usually do.
End of explanation
"""
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
"""
Explanation: We also low-pass filter the data at 100 Hz to remove the high-frequency components.
End of explanation
"""
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
"""
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
"""
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
"""
Explanation: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
End of explanation
"""
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
"""
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
"""
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False,
proj=True)
"""
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
"""
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs, picks
"""
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer the same as in the original
epochs collection. Investigation of the event timings reveals that the first
epoch from the second run corresponds to index 182.
End of explanation
"""
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
"""
Explanation: The averages for each condition are computed.
End of explanation
"""
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz
or 60 Hz). Here we low-pass filter the data at 40 Hz, which removes all
line artifacts (and high-frequency information). Normally this would be done
to the raw data (with :func:mne.io.Raw.filter), but to reduce the memory
consumption of this tutorial, we do it at the evoked stage. (At the raw stage,
you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
"""
evoked_std.plot(window_title='Standard', gfp=True)
evoked_dev.plot(window_title='Deviant', gfp=True)
"""
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
End of explanation
"""
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard')
evoked_dev.plot_topomap(times=times, title='Deviant')
"""
Explanation: Show activations as topography figures.
End of explanation
"""
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True)
"""
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
"""
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
"""
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
"""
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
"""
Explanation: The transformation is read from a file. For more information about
coregistering the data, see ch_interactive_analysis or
:func:mne.gui.coregistration.
End of explanation
"""
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
"""
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
create_bem_model, :func:mne.bem.make_watershed_bem.
End of explanation
"""
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
"""
Explanation: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
"""
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
"""
Explanation: Deviant condition.
End of explanation
"""
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
"""
Explanation: Difference.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/03_model_performance/c_custom_keras_estimator.ipynb | apache-2.0
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
"""
Explanation: Custom Estimator with Keras
Learning Objectives
- Learn how to create custom estimator using tf.keras
Introduction
Up until now we've been limited in our model architectures to premade estimators. But what if we want more control over the model? We can use the popular Keras API to create a custom model. Keras is a high-level API to build and train deep learning models. It is user-friendly and modular, and it makes writing custom building blocks of TensorFlow code much easier.
Once we've built a Keras model, we then convert it to an estimator using tf.keras.estimator.model_to_estimator(). This gives us access to all the flexibility of Keras for creating deep learning models, along with the production readiness of the estimator framework!
End of explanation
"""
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0], [40.0], [-74.0], [40.7]]
def read_dataset(csv_path):
def parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename: tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = parse_row)
return dataset
def create_feature_keras_input(features, label):
features = tf.feature_column.input_layer(features = features, feature_columns = create_feature_columns())
return features, label
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features, label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
#3. Create single feature tensor for input to Keras Model
dataset = dataset.map(map_func = create_feature_keras_input)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features, label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
#3. Create single feature tensor for input to Keras Model
dataset = dataset.map(map_func = create_feature_keras_input)
return dataset
"""
Explanation: Train and Evaluate input functions
For the most part, we can use the same train and evaluation input functions that we had in previous labs. Note the function create_feature_keras_input below. We will use this to create the first layer of the model. This function is called in turn during the train_input_fn and eval_input_fn as well.
End of explanation
"""
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
def create_feature_columns():
# One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# Cross features to get combination of day and hour
fc_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
# Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
feature_columns = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_day_hr), # 168 columns
fc_bucketized_plat, # 16 + 1 = 17 columns
fc_bucketized_plon, # 16 + 1 = 17 columns
fc_bucketized_dlat, # 16 + 1 = 17 columns
fc_bucketized_dlon, # 16 + 1 = 17 columns
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"), # 1 column
tf.feature_column.numeric_column(key = "londiff"), # 1 column
tf.feature_column.numeric_column(key = "euclidean_dist") # 1 column
]
return feature_columns
"""
Explanation: Feature Engineering
We'll use the same engineered features that we had in previous labs.
End of explanation
"""
num_feature_columns = 168 + (16 + 1) * 4 + 3
print("num_feature_columns = {}".format(num_feature_columns))
"""
Explanation: Calculate the number of feature columns that will be input to our Keras model
End of explanation
"""
def create_keras_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape = (num_feature_columns,), name = "dense_input"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense0"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense1"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense2"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense3"))
model.add(tf.keras.layers.Dense(units = 8, activation = "relu", name = "dense4"))
model.add(tf.keras.layers.Dense(units = 1, activation = None, name = "logits"))
def rmse(y_true, y_pred): # Root Mean Squared Error
return tf.sqrt(x = tf.reduce_mean(input_tensor = tf.square(x = y_pred - y_true)))
model.compile(
optimizer = tf.train.AdamOptimizer(),
loss = "mean_squared_error",
metrics = [rmse])
return model
"""
Explanation: Build Custom Keras Model
Now we can begin building our Keras model. Have a look at the guide here to see more explanation.
End of explanation
"""
# Create serving input function
def serving_input_fn():
feature_placeholders = {
"dayofweek": tf.placeholder(dtype = tf.int32, shape = [None]),
"hourofday": tf.placeholder(dtype = tf.int32, shape = [None]),
"pickuplon": tf.placeholder(dtype = tf.float32, shape = [None]),
"pickuplat": tf.placeholder(dtype = tf.float32, shape = [None]),
"dropofflon": tf.placeholder(dtype = tf.float32, shape = [None]),
"dropofflat": tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = {key: tensor for key, tensor in feature_placeholders.items()}
# Perform our feature engineering during inference as well
features, _ = create_feature_keras_input((add_engineered_features(features)), None)
return tf.estimator.export.ServingInputReceiver(features = {"dense_input": features}, receiver_tensors = feature_placeholders)
"""
Explanation: Serving input function
Once we've constructed our model in Keras, we next create the serving input function. This is also similar to what we have done in previous labs. Note that we use our create_feature_keras_input function again so that we perform our feature engineering during inference.
End of explanation
"""
def train_and_evaluate(output_dir):
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
estimator = tf.keras.estimator.model_to_estimator(
keras_model = create_keras_model(),
model_dir = output_dir,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn(csv_path = "./taxi-train.csv"),
max_steps = 500)
exporter = tf.estimator.LatestExporter(name = 'exporter', serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn(csv_path = "./taxi-valid.csv"),
steps = None,
start_delay_secs = 10, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 10, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(
estimator = estimator,
train_spec = train_spec,
eval_spec = eval_spec)
%%time
OUTDIR = "taxi_trained"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR)
"""
Explanation: Train and Evaluate
To train our model, we can use train_and_evaluate as we have before. Note that we use tf.keras.estimator.model_to_estimator to create our estimator. It takes as arguments the compiled keras model, the OUTDIR, and optionally a tf.estimator.Runconfig. Have a look at the documentation for tf.keras.estimator.model_to_estimator to make sure you understand how arguments are used.
End of explanation
"""
|
stubz/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print(text[3920:3960])
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dict_punc = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'--': '||dash||',
'\n': '||return||'
}
return dict_punc
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
#
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
# TODO: Implement Function
return (input, targets, learning_rate)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# Build a distinct dropout-wrapped LSTM cell for each layer; reusing one cell
# object (e.g. [drop] * 2) raises a variable-reuse error in TensorFlow >= 1.2
cells = []
for _ in range(2):  # num_layers = 2, as in the Anna Karenina example
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
cells.append(tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5))
cell = tf.contrib.rnn.MultiRNNCell(cells)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
# TODO: Implement Function
return (cell, initial_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
# TODO: Implement Function
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create an RNN using an RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, rnn_size)  # ties the embedding width to rnn_size; passing embed_dim through is the more general choice
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs,vocab_size,
weights_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.01),
biases_initializer=tf.zeros_initializer(),
activation_fn=None)
# TODO: Implement Function
return (logits, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
#n_batches = len(int_text)//batch_size
# ignore texts that do not fit into the last batch size
#mytext = int_text[:n_batches*batch_size]
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 500
# Batch Size
batch_size = 500
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 256  # note: build_nn above ties the embedding width to rnn_size, so this value is not actually used
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name('input:0')
InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')
FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')
ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')
return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
p = np.squeeze(probabilities)
idx = np.argsort(p)[-1]
return int_to_vocab[idx]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
jphall663/GWU_data_mining | 10_model_interpretability/src/mono_xgboost.ipynb | apache-2.0 | # imports
import h2o
from h2o.estimators.xgboost import H2OXGBoostEstimator
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import xgboost as xgb
# start h2o
h2o.init()
h2o.remove_all()
"""
Explanation: License
Copyright 2017 J. Patrick Hall, jphall@gwu.edu
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Monotonic Gradient Boosting using XGBoost
http://xgboost.readthedocs.io/en/latest//tutorials/monotonic.html
Monotonicity is an important facet of interpretability. Monotonicity constraints ensure that the modeled relationship between an input and the target moves in only one direction, i.e. as the input increases the prediction can only increase, or as the input increases the prediction can only decrease. Such monotonic relationships are usually easier to explain and understand than non-monotonic relationships.
Preliminaries: imports, start h2o, load and clean data
End of explanation
"""
# load clean data
path = '../../03_regression/data/train.csv'
frame = h2o.import_file(path=path)
# assign target and inputs
y = 'SalePrice'
X = [name for name in frame.columns if name not in [y, 'Id']]
"""
Explanation: Load and prepare data for modeling
End of explanation
"""
# determine column types
# impute
reals, enums = [], []
for key, val in frame.types.items():
if key in X:
if val == 'enum':
enums.append(key)
else:
reals.append(key)
_ = frame[reals].impute(method='median')
# split into training and validation
train, valid = frame.split_frame([0.7], seed=12345)
# for convenience create a tuple for xgboost monotone_constraints parameter
mono_constraints = tuple(int(i) for i in np.ones(shape=(int(1), len(reals))).tolist()[0])
"""
Explanation: Monotonic constraints are easier to understand for numeric inputs without missing values
End of explanation
"""
# Check log transform - looks good
%matplotlib inline
train['SalePrice'].log().as_data_frame().hist()
# Execute log transform
train['SalePrice'] = train['SalePrice'].log()
valid['SalePrice'] = valid['SalePrice'].log()
print(train[0:3, 'SalePrice'])
"""
Explanation: Train a monotonic predictive model
In this XGBoost GBM all the modeled relationships between the inputs and the target are forced to be monotonically increasing.
Log transform for better regression results and easy RMSLE in XGBoost
End of explanation
"""
ave_y = train['SalePrice'].mean()[0]
# XGBoost uses SVMLight data structure, not Numpy arrays or Pandas data frames
dtrain = xgb.DMatrix(train.as_data_frame()[reals],
train.as_data_frame()['SalePrice'])
dvalid = xgb.DMatrix(valid.as_data_frame()[reals],
valid.as_data_frame()['SalePrice'])
# tuning parameters
params = {
'objective': 'reg:linear',
'booster': 'gbtree',
'eval_metric': 'rmse',
'eta': 0.005,
'subsample': 0.1,
'colsample_bytree': 0.8,
'max_depth': 5,
'reg_alpha' : 0.01,
'reg_lambda' : 0.0,
'monotone_constraints':mono_constraints,
'base_score': ave_y,
'silent': 0,
'seed': 12345,
}
# watchlist is used for early stopping
watchlist = [(dtrain, 'train'), (dvalid, 'eval')]
# train model
xgb_model1 = xgb.train(params,
dtrain,
1000,
evals=watchlist,
early_stopping_rounds=50,
verbose_eval=True)
"""
Explanation: Train XGBoost with monotonicity Constraints
End of explanation
"""
_ = xgb.plot_importance(xgb_model1)
"""
Explanation: Plot variable importance
End of explanation
"""
def par_dep(xs, frame, model, resolution=20, bins=None):
""" Creates Pandas dataframe containing partial dependence for a single variable.
Args:
xs: Variable for which to calculate partial dependence.
frame: H2OFrame for which to calculate partial dependence.
model: XGBoost model for which to calculate partial dependence.
resolution: The number of points across the domain of xs for which to calculate partial dependence.
bins: Optional explicit values at which to calculate partial dependence; overrides resolution when given.
Returns:
Pandas dataframe containing partial dependence values.
"""
# don't show progress bars for parse
h2o.no_progress()
# init empty Pandas frame w/ correct col names
par_dep_frame = pd.DataFrame(columns=[xs, 'partial_dependence'])
# cache original data
col_cache = h2o.deep_copy(frame[xs], xid='col_cache')
# determine values at which to calculate partial dependency
if bins is None:
min_ = frame[xs].min()
max_ = frame[xs].max()
by = (max_ - min_)/resolution
bins = np.arange(min_, max_, by)
# calculate partial dependency
# by setting column of interest to constant
for j in bins:
frame[xs] = j
dframe = xgb.DMatrix(frame.as_data_frame(),)
par_dep_i = h2o.H2OFrame(model.predict(dframe).tolist())
par_dep_j = par_dep_i.mean()[0]
par_dep_frame = par_dep_frame.append({xs:j,
'partial_dependence': par_dep_j},
ignore_index=True)
# return input frame to original cached state
frame[xs] = h2o.get_frame('col_cache')
return par_dep_frame
"""
Explanation: Examine monotonic behavior with partial dependence and ICE
Partial dependence is used to view the global, average behavior of a variable under the monotonic model.
ICE is used to view the local behavior of a single instance and single variable under the monotonic model.
Overlaying partial dependence onto ICE in a plot is a convenient way to validate and understand both global and local monotonic behavior.
Helper function for calculating partial dependence
End of explanation
"""
par_dep_OverallCond = par_dep('OverallCond', valid[reals], xgb_model1)
par_dep_GrLivArea = par_dep('GrLivArea', valid[reals], xgb_model1)
par_dep_LotArea = par_dep('LotArea', valid[reals], xgb_model1)
"""
Explanation: Calculate partial dependence for 3 important variables
End of explanation
"""
def get_quantile_dict(y, id_, frame):
""" Returns the percentiles of a column y as the indices for another column id_.
Args:
y: Column in which to find percentiles.
id_: Id column that stores indices for percentiles of y.
frame: H2OFrame containing y and id_.
Returns:
Dictionary of percentile values and index column values.
"""
quantiles_df = frame.as_data_frame()
quantiles_df.sort_values(y, inplace=True)
quantiles_df.reset_index(inplace=True)
percentiles_dict = {}
percentiles_dict[0] = quantiles_df.loc[0, id_]
percentiles_dict[99] = quantiles_df.loc[quantiles_df.shape[0]-1, id_]
inc = quantiles_df.shape[0]//10
for i in range(1, 10):
percentiles_dict[i * 10] = quantiles_df.loc[i * inc, id_]
return percentiles_dict
"""
Explanation: Helper function for finding decile indices
End of explanation
"""
quantile_dict = get_quantile_dict('SalePrice', 'Id', valid)
"""
Explanation: Calculate deciles of SalePrice
End of explanation
"""
bins_OverallCond = list(par_dep_OverallCond['OverallCond'])
bins_GrLivArea = list(par_dep_GrLivArea['GrLivArea'])
bins_LotArea = list(par_dep_LotArea['LotArea'])
for i in sorted(quantile_dict.keys()):
col_name = 'Percentile_' + str(i)
par_dep_OverallCond[col_name] = par_dep('OverallCond',
valid[valid['Id'] == int(quantile_dict[i])][reals],
xgb_model1,
bins=bins_OverallCond)['partial_dependence']
par_dep_GrLivArea[col_name] = par_dep('GrLivArea',
valid[valid['Id'] == int(quantile_dict[i])][reals],
xgb_model1,
bins=bins_GrLivArea)['partial_dependence']
par_dep_LotArea[col_name] = par_dep('LotArea',
valid[valid['Id'] == int(quantile_dict[i])][reals],
xgb_model1,
bins=bins_LotArea)['partial_dependence']
"""
Explanation: Calculate values for ICE
End of explanation
"""
# OverallCond
fig, ax = plt.subplots()
par_dep_OverallCond.drop('partial_dependence', axis=1).plot(x='OverallCond', colormap='gnuplot', ax=ax)
par_dep_OverallCond.plot(title='Partial Dependence and ICE for OverallCond',
x='OverallCond',
y='partial_dependence',
style='r-',
linewidth=3,
ax=ax)
_ = plt.legend(bbox_to_anchor=(1.05, 0),
loc=3,
borderaxespad=0.)
# GrLivArea
fig, ax = plt.subplots()
par_dep_GrLivArea.drop('partial_dependence', axis=1).plot(x='GrLivArea', colormap='gnuplot', ax=ax)
par_dep_GrLivArea.plot(title='Partial Dependence and ICE for GrLivArea',
x='GrLivArea',
y='partial_dependence',
style='r-',
linewidth=3,
ax=ax)
_ = plt.legend(bbox_to_anchor=(1.05, 0),
loc=3,
borderaxespad=0.)
# LotArea
fig, ax = plt.subplots()
par_dep_LotArea.drop('partial_dependence', axis=1).plot(x='LotArea', colormap='gnuplot', ax=ax)
par_dep_LotArea.plot(title='Partial Dependence and ICE for LotArea',
x='LotArea',
y='partial_dependence',
style='r-',
linewidth=3,
ax=ax)
_ = plt.legend(bbox_to_anchor=(1.05, 0),
loc=3,
borderaxespad=0.)
"""
Explanation: Plot Partial Dependence and ICE
End of explanation
"""
h2o.cluster().shutdown(prompt=True)
"""
Explanation: Shutdown H2O
End of explanation
"""
|
ktmud/deep-learning | IMDB-keras/IMDB_In_Keras.ipynb | mit | # Imports
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
"""
Explanation: Analyzing IMDB Data in Keras
End of explanation
"""
# Loading the data (it's preloaded in Keras)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)
print(x_train.shape)
print(x_test.shape)
"""
Explanation: 1. Loading the data
This dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.
End of explanation
"""
print(x_train[0])
print(y_train[0])
"""
Explanation: 2. Examining the data
Notice that the data has already been pre-processed: every word is replaced by an integer id, so each review comes in as a sequence of numbers. In step 3 below these sequences are turned into (0,1)-vectors; for example, if the word 'the' is the first one in our dictionary and a review contains the word 'the', then there is a 1 in the corresponding entry of its vector.
The output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.
End of explanation
"""
# One-hot encoding the output into vector mode, each of length 1000
tokenizer = Tokenizer(num_words=1000)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train[0])
"""
Explanation: 3. One-hot encoding the output
Here, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.
End of explanation
"""
# One-hot encoding the output
num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
print(y_test.shape)
"""
Explanation: And we'll also one-hot encode the output.
End of explanation
"""
# TODO: Build the model architecture
# TODO: Compile the model using a loss function and an optimizer.
"""
Explanation: 4. Building the model architecture
Build a model here using sequential. Feel free to experiment with different layers and sizes! Also, experiment with adding dropout to reduce overfitting.
End of explanation
"""
# TODO: Run the model. Feel free to experiment with different batch sizes and number of epochs.
"""
Explanation: 5. Training the model
Run the model here. Experiment with different batch_size, and number of epochs!
End of explanation
"""
score = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: ", score[1])
"""
Explanation: 6. Evaluating the model
This will give you the accuracy of the model, as evaluated on the testing set. Can you get something over 85%?
End of explanation
"""
|
Kaggle/learntools | notebooks/sql_advanced/raw/ex2.ipynb | apache-2.0 | # Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql_advanced.ex2 import *
print("Setup Complete")
"""
Explanation: Introduction
Here, you'll use window functions to answer questions about the Chicago Taxi Trips dataset.
Before you get started, run the code cell below to set everything up.
End of explanation
"""
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "chicago_taxi_trips" dataset
dataset_ref = client.dataset("chicago_taxi_trips", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "taxi_trips" table
table_ref = dataset_ref.table("taxi_trips")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
"""
Explanation: The following code cell fetches the taxi_trips table from the chicago_taxi_trips dataset. We also preview the first five rows of the table. You'll use the table to answer the questions below.
End of explanation
"""
# Fill in the blank below
avg_num_trips_query = """
WITH trips_by_day AS
(
SELECT DATE(trip_start_timestamp) AS trip_date,
COUNT(*) as num_trips
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE trip_start_timestamp >= '2016-01-01' AND trip_start_timestamp < '2018-01-01'
GROUP BY trip_date
ORDER BY trip_date
)
SELECT trip_date,
____
OVER (
____
____
) AS avg_num_trips
FROM trips_by_day
"""
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
avg_num_trips_query = """
WITH trips_by_day AS
(
SELECT DATE(trip_start_timestamp) AS trip_date,
COUNT(*) as num_trips
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE trip_start_timestamp >= '2016-01-01' AND trip_start_timestamp < '2018-01-01'
GROUP BY trip_date
ORDER BY trip_date
)
SELECT trip_date,
AVG(num_trips)
OVER (
ORDER BY trip_date
ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING
) AS avg_num_trips
FROM trips_by_day
"""
q_1.check()
"""
Explanation: Exercises
1) How can you predict the demand for taxis?
Say you work for a taxi company, and you're interested in predicting the demand for taxis. Towards this goal, you'd like to create a plot that shows a rolling average of the daily number of taxi trips. Amend the (partial) query below to return a DataFrame with two columns:
- trip_date - contains one entry for each date from January 1, 2016, to December 31, 2017.
- avg_num_trips - shows the average number of daily trips, calculated over a window including the value for the current date, along with the values for the preceding 15 days and the following 15 days, as long as the days fit within the two-year time frame. For instance, when calculating the value in this column for January 5, 2016, the window will include the number of trips for the preceding 4 days, the current date, and the following 15 days.
This query is partially completed for you, and you need only write the part that calculates the avg_num_trips column. Note that this query uses a common table expression (CTE); if you need to review how to use CTEs, you're encouraged to check out this tutorial in the Intro to SQL micro-course.
End of explanation
"""
# Amend the query below
trip_number_query = """
SELECT pickup_community_area,
trip_start_timestamp,
trip_end_timestamp
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
trip_number_query = """
SELECT pickup_community_area,
trip_start_timestamp,
trip_end_timestamp,
RANK()
OVER (
PARTITION BY pickup_community_area
ORDER BY trip_start_timestamp
) AS trip_number
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
q_2.check()
"""
Explanation: 2) Can you separate and order trips by community area?
The query below returns a DataFrame with three columns from the table: pickup_community_area, trip_start_timestamp, and trip_end_timestamp.
Amend the query to return an additional column called trip_number which shows the order in which the trips were taken from their respective community areas. So, the first trip of the day originating from community area 1 should receive a value of 1; the second trip of the day from the same area should receive a value of 2. Likewise, the first trip of the day from community area 2 should receive a value of 1, and so on.
Note that there are many numbering functions that can be used to solve this problem (depending on how you want to deal with trips that started at the same time from the same community area); to answer this question, please use the RANK() function.
End of explanation
"""
# Fill in the blanks below
break_time_query = """
SELECT taxi_id,
trip_start_timestamp,
trip_end_timestamp,
TIMESTAMP_DIFF(
trip_start_timestamp,
____
OVER (
PARTITION BY ____
ORDER BY ____),
MINUTE) as prev_break
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
break_time_query = """
SELECT taxi_id,
trip_start_timestamp,
trip_end_timestamp,
TIMESTAMP_DIFF(
trip_start_timestamp,
LAG(trip_end_timestamp, 1) OVER (PARTITION BY taxi_id ORDER BY trip_start_timestamp),
MINUTE) as prev_break
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE DATE(trip_start_timestamp) = '2017-05-01'
"""
q_3.check()
"""
Explanation: 3) How much time elapses between trips?
The (partial) query in the code cell below shows, for each trip in the selected time frame, the corresponding taxi_id, trip_start_timestamp, and trip_end_timestamp.
Your task in this exercise is to edit the query to include an additional prev_break column that shows the length of the break (in minutes) that the driver had before each trip started (this corresponds to the time between trip_start_timestamp of the current trip and trip_end_timestamp of the previous trip). Partition the calculation by taxi_id, and order the results within each partition by trip_start_timestamp.
Some sample results are shown below, where all rows correspond to the same driver (or taxi_id). Take the time now to make sure that the values in the prev_break column make sense to you!
Note that the first trip of the day for each driver should have a value of NaN (not a number) in the prev_break column.
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/dev/n02_separating_the_test_set.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../')
"""
Explanation: In this notebook, the training and test sets will be defined.
End of explanation
"""
from sklearn.model_selection import TimeSeriesSplit
num_samples = 30
dims = 2
X = np.random.random((num_samples,dims))
y = np.array(range(num_samples))
tscv = TimeSeriesSplit(n_splits=3)
print(tscv)
for train_index, test_index in tscv.split(X):
print("TRAIN_indexes:", train_index, "TEST_indexes:", test_index)
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
"""
Explanation: Let's test the scikit learn example for TimeSeriesSplit (with some modifications)
End of explanation
"""
data_df = pd.read_pickle('../../data/data_df.pkl')
print(data_df.shape)
data_df.head(10)
"""
Explanation: It may be useful for validation purposes. The test set will be separated before, anyway. The criterion to follow is to always keep causality.
Let's get the data and preserve one part as the test set.
Note: The way the test set will be used is still not defined. Also, the definition of X and y may depend on the length of the base time interval used for training. But, in any case, it is good practice to separate a fraction of the data for testing, and that fraction will remain untouched regardless of those decisions.
End of explanation
"""
num_test_samples = 252 * 2
data_train_val_df, data_test_df = data_df.unstack().iloc[:-num_test_samples], data_df.unstack().iloc[-num_test_samples:]
def show_df_basic(df):
print(df.shape)
print('Starting value: %s\nEnding value: %s' % (df.index.get_level_values(0)[0], df.index.get_level_values(0)[-1]))
print(df.head())
show_df_basic(data_train_val_df)
show_df_basic(data_test_df)
"""
Explanation: I will save about two years worth of data for the test set (it wouldn't be correct to save a fixed fraction of the total set because the size of the "optimal" training set is still to be defined; I may end up using much less than the total dataset).
End of explanation
"""
data_test_df.loc[slice(None),(slice(None),'Close')].head()
"""
Explanation: I could select the Close values, for example, like below...
End of explanation
"""
data_test_df.xs('Close', level=1, axis=1).head()
"""
Explanation: Or like this...
End of explanation
"""
data_train_val_df = data_train_val_df.swaplevel(0, 1, axis=1).stack().unstack()
show_df_basic(data_train_val_df)
data_test_df = data_test_df.swaplevel(0, 1, axis=1).stack().unstack()
show_df_basic(data_test_df)
"""
Explanation: But I think it will be more clear if I swap the levels in the columns
End of explanation
"""
data_train_val_df['Close']
"""
Explanation: Now it's very easy to select one of the features:
End of explanation
"""
data_train_val_df.to_pickle('../../data/data_train_val_df.pkl')
data_test_df.to_pickle('../../data/data_test_df.pkl')
"""
Explanation: Let's pickle the data
End of explanation
"""
|
SinaraGharibyan/SinaraGharibyan.github.io | CB/Appendix2.ipynb | mit | from BeautifulSoup import *
import requests
url = "https://careercenter.am/ccidxann.php"
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page)
tables = soup.findAll("table")
my_table = tables[0]
rows = my_table.findAll('tr')
data_list = []
for i in rows:
columns = i.findAll('td')
for j in columns:
data_list.append(j.text)
even = []
for i in range(len(data_list)):
if i%2 == 0:
even.append(data_list[i])
list_manager=[]
for i in even:
manager=re.findall('.*\s*(?:M|m)anager\s*.*',i)
if len(manager)==1:
list_manager.append(manager)
for i in range(len(list_manager)):
list_manager[i]=list_manager[i][0]
len(list_manager)
list_marketing=[]
for i in even:
marketing=re.findall('.*\s*(?:M|m)arketing\s*.*',i)
if len(marketing)==1:
list_marketing.append(marketing)
for i in range(len(list_marketing)):
list_marketing[i]=list_marketing[i][0]
len(list_marketing )
"""
Explanation: In-demand professions
End of explanation
"""
a_tags=my_table.findAll('a')
list_link=[]
for i in a_tags:
    d=i.get('href')
    list_link.append(d)
new_list=list_link[1:]
new_link=[]
for i in new_list:
m='https://careercenter.am/'+i
new_link.append(m)
for i in new_link:
response=requests.get(i)
page=response.text
soup=BeautifulSoup(page)
list_soup=[]
for i in range(0,len(new_link)):
response=requests.get(new_link[i])
page=response.text
soup=BeautifulSoup(page)
list_soup.append(soup)
exp_list=[]
for i in list_soup:
m=i.findAll('p')
for j in m:
exp_list.append(j.text)
list_exp=[]
for i in exp_list:
exp=re.findall('[0-9]\syears\sof.*(?:E|e)xperience\s*',i)
if len(exp)==1:
list_exp.append(exp)
for i in range(len(list_exp)):
list_exp[i]=list_exp[i][0]
list_years=[]
for i in list_exp:
exp=re.findall('[0-9]',i)
if len(exp)==1:
list_years.append(exp)
for i in range(len(list_years)):
list_years[i]=list_years[i][0]
int_list_years=[]
for i in list_years:
m=int(i)
int_list_years.append(m)
sum=0
for j in int_list_years:
sum=sum+j
mean=sum/len(int_list_years)
print 'Average experience: ' + str(mean)
sum=0
for j in int_list_years:
sum=sum+j
mean=sum/len(new_link)
print 'Overall average experience: ' + str(mean)
"""
Explanation: The principle of the code is the same for the rest of the professions as well
Average experience
End of explanation
"""
list_english=[]
for i in exp_list:
exp=re.findall('\s*English\s*[a-z]*\s*[a-z]*',i)
if len(exp)==1:
list_english.append(exp)
for i in range(len(list_english)):
list_english[i]=list_english[i][0]
len(list_english)
print 'Jobs requiring English: ' + str(len(list_english))
list_russian=[]
for i in exp_list:
exp=re.findall('\s*Russian\s*[a-z]*\s*[a-z]*',i)
if len(exp)==1:
list_russian.append(exp)
print 'Jobs requiring Russian: ' + str(len(list_russian))
"""
Explanation: Required qualifications
End of explanation
"""
exp_company=[]
for i in list_soup:
m=i.findAll('p',attrs={'align':'center'})
for j in m:
exp_company.append(j.text)
list_zangi=[]
for i in exp_company:
exp=re.findall('Zangi',i)
if len(exp)==1:
list_zangi.append(exp)
print 'Job postings announced by Zangi: ' + str(len(list_zangi))
list_llc=[]
for i in exp_company:
exp=re.findall('.*\s*Tech[a-z]*',i)
if len(exp)==1:
list_llc.append(exp)
print 'Job postings announced by companies named "Tech": ' + str(len(list_llc))
"""
Explanation: The principle is the same for searching all of the other skills as well
Active employers
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/7aed4bc8cd1643f9a23125c34f543ae6/plot_59_head_positions.ipynb | bsd-3-clause | # Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
from os import path as op
import mne
print(__doc__)
data_path = op.join(mne.datasets.testing.data_path(verbose=True), 'SSS')
fname_raw = op.join(data_path, 'test_move_anon_raw.fif')
raw = mne.io.read_raw_fif(fname_raw, allow_maxshield='yes').load_data()
raw.plot_psd()
"""
Explanation: Extracting and visualizing subject head movement
Continuous head movement can be encoded during MEG recordings by use of
HPI coils that continuously emit sinusoidal signals. These signals can then be
extracted from the recording and used to estimate head position as a function
of time. Here we show an example of how to do this, and how to visualize
the result.
HPI frequencies
First let's load a short bit of raw data where the subject intentionally moved
their head during the recording. Its power spectral density shows five peaks
(most clearly visible in the gradiometers) corresponding to the HPI coil
frequencies, plus other peaks related to power line interference (60 Hz and
harmonics).
End of explanation
"""
chpi_amplitudes = mne.chpi.compute_chpi_amplitudes(raw)
"""
Explanation: Estimating continuous head position
First, let's extract the HPI coil amplitudes as a function of time:
End of explanation
"""
chpi_locs = mne.chpi.compute_chpi_locs(raw.info, chpi_amplitudes)
"""
Explanation: Second, let's compute time-varying HPI coil locations from these:
End of explanation
"""
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs, verbose=True)
"""
Explanation: Lastly, compute head positions from the coil locations:
End of explanation
"""
mne.viz.plot_head_positions(head_pos, mode='traces')
"""
Explanation: Note that these can then be written to disk or read from disk with
:func:mne.chpi.write_head_pos and :func:mne.chpi.read_head_pos,
respectively.
Visualizing continuous head position
We can plot as traces, which is especially useful for long recordings:
End of explanation
"""
mne.viz.plot_head_positions(head_pos, mode='field')
"""
Explanation: Or we can visualize them as a continuous field (with the vectors pointing
in the head-upward direction):
End of explanation
"""
|
DS-100/sp17-materials | sp17/labs/lab11/lab11_solution.ipynb | gpl-3.0 | !pip install -U sklearn
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn as skl
import sklearn.linear_model as lm
import scipy.io as sio
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab11.ok')
"""
Explanation: Lab 11: Regularization and Cross-Validation
End of explanation
"""
!head train.csv
"""
Explanation: Today's lab covers:
How to use regularization to avoid overfitting
How to use cross-validation to find the amount of regularization that produces a model with the least error for new data
Dammed Data
We've put our data into two files: train.csv and test.csv which contain
the training and test data, respectively. You are not allowed to train
on the test data.
The y values in the training data correspond to the amount of water that flows out of the dam on a particular day. There is only 1 feature: the increase in water level for the dam's reservoir on that day, which we'll call x.
End of explanation
"""
data = pd.read_csv('train.csv')
X = data[['X']].as_matrix()
y = data['y'].as_matrix()
X.shape, y.shape
_ = plt.plot(X[:, 0], y, '.')
plt.xlabel('Change in water level (X)')
plt.ylabel('Water flow out of dam (y)')
def plot_data_and_curve(curve_x, curve_y):
plt.plot(X[:, 0], y, '.')
plt.plot(curve_x, curve_y, '-')
plt.ylim(-20, 60)
plt.xlabel('Change in water level (X)')
plt.ylabel('Water flow out of dam (y)')
"""
Explanation: Let's load in the data:
End of explanation
"""
linear_clf = lm.LinearRegression() #SOLUTION
# Fit your classifier
linear_clf.fit(X, y)
# Predict a bunch of points to draw best fit line
all_x = np.linspace(-55, 55, 200).reshape(-1, 1)
line = linear_clf.predict(all_x)
plot_data_and_curve(all_x, line)
"""
Explanation: Question 1: As a warmup, let's fit a line to this data using sklearn.
We've imported sklearn.linear_model as lm, so you can use that instead of
typing out the whole module name. Running the cell should show the data
with your best fit line.
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
X_poly = PolynomialFeatures(degree=8).fit_transform(X) #SOLUTION
X_poly.shape
"""
Explanation: Question 2: If you had to guess, which has a larger effect on error for this dataset: bias or variance?
Explain briefly.
SOLUTION: Bias. Our data are curved up, but our line cannot model that curve, so bias is high. Variance is low because the complexity of our model is low relative to the size of the training set. That is, if we drew a new dataset from the population of days, we would probably get a similar line.
Question 3: Let's now add some complexity to our model by adding polynomial features.
Reference the sklearn docs on the PolynomialFeatures class. You should use this class to add polynomial features to X up to degree 8 using the fit_transform method.
The final X_poly data matrix should have shape (33, 9). Think about and discuss why.
End of explanation
"""
poly_clf = lm.LinearRegression() #SOLUTION
# Fit your classifier
poly_clf.fit(X_poly, y) #SOLUTION
# Set curve to your model's predictions on all_x
curve = poly_clf.predict(PolynomialFeatures(degree=8).fit_transform(all_x)) #SOLUTION
plot_data_and_curve(all_x, curve)
"""
Explanation: Question 4: Now, fit a linear regression to the data, using polynomial features.
Then, follow the code in Question 1 to make predictions for the values in all_x. You'll have to add polynomial features to all_x in order to make predictions.
Then, running this cell should plot the best fit curve using a degree 8 polynomial.
End of explanation
"""
def mse(predicted_y, actual_y):
return np.mean((predicted_y - actual_y) ** 2)
line_training_error = mse(linear_clf.predict(X), y) #SOLUTION
poly_training_error = mse(poly_clf.predict(PolynomialFeatures(degree=8).fit_transform(X)), y) #SOLUTION
line_training_error, poly_training_error
"""
Explanation: Question 5: Think about and discuss what you notice in the model's predictions.
Now, compute the mean squared training error for both the best fit line and polynomial. Again, you'll have to transform the training data for the polynomial regression before you can make predictions.
You should get training errors of around 52.8 and 5.23 for line and polynomial models, respectively. Why does the polynomial model get a lower training error than the linear model?
End of explanation
"""
from sklearn.pipeline import make_pipeline
poly_pipeline = make_pipeline(PolynomialFeatures(degree=8), lm.LinearRegression()) #SOLUTION
# Fit the pipeline on X and y
poly_pipeline.fit(X, y) #SOLUTION
# Compute the training error
pipeline_training_error = mse(poly_pipeline.predict(X), y) #SOLUTION
pipeline_training_error
"""
Explanation: Question 6: It's annoying to have to transform the data every time we want to use polynomial features. We can use a Pipeline to let us do both transformation and regression in one step.
Read the docs for make_pipeline and create a pipeline for polynomial regression called poly_pipeline. Then, fit it on X and y and compute the training error as in Question 5. The training errors should match.
End of explanation
"""
from sklearn.model_selection import train_test_split
np.random.seed(42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.33) #SOLUTION
X_train.shape, X_valid.shape
"""
Explanation: Nice! With pipelines, we can combine any number of transformations and treat the whole thing as a single classifier.
Question 7: Now, we know that a low training error doesn't necessarily mean your model is good. So, we'll hold out some points from the training data for a validation set. We'll use these held-out points to choose the best model.
Use the train_test_split function to split out one third of the training data for validation. Call the resulting datasets X_train, X_valid, y_train, y_valid.
X_train should have shape (22, 1). X_valid should have shape (11, 1)
End of explanation
"""
# Fit the linear classifier
linear_clf.fit(X_train, y_train) #SOLUTION
# Fit the polynomial pipeline
poly_pipeline.fit(X_train, y_train) #SOLUTION
X_train_line_error = mse(linear_clf.predict(X_train), y_train) #SOLUTION
X_valid_line_error = mse(linear_clf.predict(X_valid), y_valid) #SOLUTION
X_train_poly_error = mse(poly_pipeline.predict(X_train), y_train) #SOLUTION
X_valid_poly_error = mse(poly_pipeline.predict(X_valid), y_valid) #SOLUTION
X_train_line_error, X_valid_line_error, X_train_poly_error, X_valid_poly_error
"""
Explanation: Question 8: Now, set X_train_line_error, X_valid_line_error,
X_train_poly_error, X_valid_poly_error to the training and validation
errors for both linear and polynomial regression.
You'll have to call .fit on your classifiers/pipelines again since we're using
X_train and y_train instead of X and y.
You should see that the validation error for the polynomial fit is significantly
higher than the linear fit (152.6 vs 115.2).
End of explanation
"""
ridge_pipeline = make_pipeline(PolynomialFeatures(degree=8), lm.Ridge(normalize=True, alpha=1.)) #SOLUTION
# Fit your classifier
ridge_pipeline.fit(X_train, y_train) #SOLUTION
# Set curve to your model's predictions on all_x
ridge_curve = ridge_pipeline.predict(all_x) #SOLUTION
plot_data_and_curve(all_x, ridge_curve)
"""
Explanation: Question 9: Our 8 degree polynomial is overfitting our data.
To reduce overfitting, we can use regularization.
The usual cost function for linear regression is:
$$J(\theta) = (Y - X \theta)^T (Y - X \theta)$$
Edit the cell below to show the cost function of linear regressions with L2 regularization. Use
$\lambda $ as your regularization parameter.
$$J(\theta) = (Y - X \theta)^T (Y - X \theta) + \lambda \, \theta^T \theta$$
Now, explain why this cost function helps reduce overfitting.
SOLUTION: Adding regularization effectively restricts the set of possible polynomials we're allowed to use to fit the data. In particular, polynomials with large coefficients, which can be used to fit data very precisely, are discouraged.
Question 10: L2 regularization for linear regression is also known as
Ridge regression. Create a pipeline called ridge_pipeline that again
creates polynomial features with degree 8 and then uses the Ridge sklearn
classifier.
The alpha argument is the same as our $\lambda$. Leave it as the default (1.0). You should set normalize=True to normalize your data before fitting. Why do we have to do this?
Then, fit your pipeline on the data. The cell will then plot the curve of your
regularized classifier. You should notice the curve is significantly
smoother.
Then, fiddle around with the alpha value. What do you notice as you
increase alpha? Decrease alpha?
End of explanation
"""
ridge_train_error = mse(ridge_pipeline.predict(X_train), y_train) #SOLUTION
ridge_valid_error = mse(ridge_pipeline.predict(X_valid), y_valid) #SOLUTION
ridge_train_error, ridge_valid_error
"""
Explanation: Question 11: Compute the training and validation error for the ridge_pipeline.
How do the errors compare to the errors for the unregularized model? Why did each one go up/down?
End of explanation
"""
alphas = [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 10.0]
# Your code to find the best alpha
def compute_error(alpha):
pline = make_pipeline(PolynomialFeatures(degree=8),
lm.Ridge(normalize=True, alpha=alpha))
pline.fit(X_train, y_train)
return mse(pline.predict(X_valid), y_valid)
errors = [compute_error(alpha) for alpha in alphas]
best_alpha_idx = np.argmin(errors)
best_alpha, best_error = alphas[best_alpha_idx], errors[best_alpha_idx]
best_alpha, best_error
"""
Explanation: Question 12: Now we want to know: how do we choose the best alpha value?
This is where we use our validation set. We can try out a bunch of alphas and pick the one that gives us the least error on the validation set. Why can't we use the one that gives us the least error on the training set? The test set?
For each alpha in the given alphas list, fit a Ridge regression model to the training set and check its accuracy on the validation set.
Finally, set best_alpha to the best value. You should get a best alpha of 0.01 with a validation error of 15.7.
End of explanation
"""
best_pipeline = make_pipeline(PolynomialFeatures(degree=8), lm.Ridge(normalize=True, alpha=best_alpha)) #SOLUTION
best_pipeline.fit(X_train, y_train)
best_curve = best_pipeline.predict(all_x)
plot_data_and_curve(all_x, best_curve)
"""
Explanation: Question 13: Now, set best_pipeline to the pipeline with the degree 8 polynomial transform and the ridge regression model with the best value of alpha.
End of explanation
"""
test_data = pd.read_csv('test.csv')
X_test = test_data[['X']].as_matrix()
y_test = test_data['y'].as_matrix()
line_test_error = mse(linear_clf.predict(X_test), y_test)
poly_test_error = mse(poly_pipeline.predict(X_test), y_test)
best_test_error = mse(best_pipeline.predict(X_test), y_test)
line_test_error, poly_test_error, best_test_error
"""
Explanation: Now, run the cell below to find the test error of your simple linear model, your polynomial model, and your regularized polynomial model.
End of explanation
"""
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
"""
Explanation: Nice! You've use regularization and cross-validation to fit an accurate polynomial model to the dataset.
In the future, you'd probably want to use something like RidgeCV to automatically perform cross-validation, but it's instructive to do it yourself at least once.
Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab.
End of explanation
"""
|
compmech/meshless | notebooks/example_buckling_composite_plate.ipynb | bsd-2-clause | a = 0.5
b = 0.5
E1 = 49.627e9
E2 = 15.43e9
nu12 = 0.38
G12 = 4.8e9
G13 = 4.8e9
G23 = 4.8e9
laminaprop = (E1, E2, nu12, G12, G13, G23)
tmap = {
45: 0.143e-3,
-45: 0.143e-3,
0: 1.714e-3
}
X = 4
angles = [-45, +45, 0, +45, -45, 0]*X + [0, -45, +45, 0, +45, -45]*X
plyts = [tmap[angle] for angle in angles]
"""
Explanation: Buckling of a Composite Plate
Repreducing results for CASE A, AR1, simply supported from
Qiao Jie Yang. "Simplified approaches to buckling of composite plates". Master Thesis. Faculty of Mathematics and Natural Science, University of Oslo, May 2009.
https://www.researchgate.net/file.PostFileLoader.html?id=58ca91dff7b67e479264554f&assetKey=AS%3A472526997463041%401489670623513
See Table 6-1 of the above reference when comparing the results herein obtained.
Plate geometry and laminate data
End of explanation
"""
import numpy as np
from scipy.spatial import Delaunay
import matplotlib.pyplot as plt
xs = np.linspace(0, a, int(a // 0.025))
ys = np.linspace(0, b, int(b // 0.025))
points = np.array(np.meshgrid(xs, ys)).T.reshape(-1, 2)
tri = Delaunay(points)
plt.triplot(points[:, 0], points[:, 1], tri.simplices.copy())
plt.gca().set_aspect('equal')
plt.show()
"""
Explanation: Generating Mesh
End of explanation
"""
from scipy.sparse import coo_matrix
from meshless.composite.laminate import read_stack
from meshless.sparse import solve
from meshless.linear_buckling import lb
from meshless.espim.read_mesh import read_delaunay
from meshless.espim.plate2d_calc_k0 import calc_k0
from meshless.espim.plate2d_calc_kG import calc_kG
from meshless.espim.plate2d_add_k0s import add_k0s
mesh = read_delaunay(points, tri)
nodes = np.array(list(mesh.nodes.values()))
prop_from_nodes = True
nodes_xyz = np.array([n.xyz for n in nodes])
"""
Explanation: Using Meshless Package
End of explanation
"""
# applying heterogeneous properties
for node in nodes:
xyz = node.xyz
lam = read_stack(angles, plyts=plyts, laminaprop=laminaprop)
node.prop = lam
print('DEBUG lam.t', node.prop.t)
"""
Explanation: Applying laminate properties
End of explanation
"""
DOF = 5
def bc(K, mesh):
for node in nodes[nodes_xyz[:, 0] == xs.min()]:
for dof in [1, 3]:
j = dof-1
K[node.index*DOF+j, :] = 0
K[:, node.index*DOF+j] = 0
for node in nodes[nodes_xyz[:, 1] == ys.min()]:
for dof in [2, 3]:
j = dof-1
K[node.index*DOF+j, :] = 0
K[:, node.index*DOF+j] = 0
for node in nodes[nodes_xyz[:, 1] == ys.max()]:
for dof in [3]:
j = dof-1
K[node.index*DOF+j, :] = 0
K[:, node.index*DOF+j] = 0
for node in nodes[nodes_xyz[:, 0] == xs.max()]:
for dof in [3]:
j = dof-1
K[node.index*DOF+j, :] = 0
K[:, node.index*DOF+j] = 0
"""
Explanation: Defining Boundary Conditions
End of explanation
"""
#k0s_method = 'edge-based'
k0s_method = 'cell-based'
#k0s_method = 'cell-based-no-smoothing'
k0 = calc_k0(mesh, prop_from_nodes)
add_k0s(k0, mesh, prop_from_nodes, k0s_method)
bc(k0, mesh)
k0 = coo_matrix(k0)
"""
Explanation: Calculating Constitutive Stiffness Matrix
End of explanation
"""
def define_loads(mesh):
loads = []
load_nodes = nodes[(nodes_xyz[:, 0] == xs.max()) &
(nodes_xyz[:, 1] != ys.min()) &
(nodes_xyz[:, 1] != ys.max())]
fx = -1. / (nodes[nodes_xyz[:, 0] == xs.max()].shape[0] - 1)
for node in load_nodes:
loads.append([node, (fx, 0, 0)])
load_nodes = nodes[(nodes_xyz[:, 0] == xs.max()) &
((nodes_xyz[:, 1] == ys.min()) |
(nodes_xyz[:, 1] == ys.max()))]
fx = -1. / (nodes[nodes_xyz[:, 0] == xs.max()].shape[0] - 1) / 2
for node in load_nodes:
loads.append([node, (fx, 0, 0)])
return loads
n = k0.shape[0] // DOF
fext = np.zeros(n*DOF, dtype=np.float64)
loads = define_loads(mesh)
for node, force_xyz in loads:
fext[node.index*DOF + 0] = force_xyz[0]
print('Checking sum of forces: %s' % str(fext.reshape(-1, DOF).sum(axis=0)))
"""
Explanation: Defining Load and External Force Vector
End of explanation
"""
d = solve(k0, fext, silent=True)
total_trans = (d[0::DOF]**2 + d[1::DOF]**2)**0.5
print('Max total translation', total_trans.max())
"""
Explanation: Running Static Analysis
End of explanation
"""
kG = calc_kG(d, mesh, prop_from_nodes)
bc(kG, mesh)
kG = coo_matrix(kG)
"""
Explanation: Calculating Geometric Stiffness Matrix
End of explanation
"""
eigvals, eigvecs = lb(k0, kG, silent=True)
print('First 5 eigenvalues N/mm')
print('\n'.join(map(str, eigvals[0:5] / b / 1000)))
import matplotlib.pyplot as plt
from matplotlib import cm
ind0 = np.array([[n.index, i] for (i, n) in enumerate(nodes)])
ind0 = ind0[np.argsort(ind0[:, 0])]
nodes_in_k0 = nodes[ind0[:, 1]]
xyz = np.array([n.xyz for n in nodes_in_k0])
ind = np.lexsort((xyz[:, 1], xyz[:, 0]))
w = eigvecs[:, 0][2::DOF][ind]
xyz = xyz[ind]
plt.figure(dpi=100)
levels = np.linspace(w.min(), w.max(), 400)
plt.tricontourf(xyz[:, 0], xyz[:, 1], w, levels=levels, cmap=cm.gist_rainbow_r)
plt.gca().set_aspect('equal')
plt.show()
"""
Explanation: Running Linear Buckling Analysis
End of explanation
"""
|
LDSSA/learning-units | units/11-validation-metrics/practice/Exercise - Validation Metrics for Classification.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, \
recall_score, f1_score, roc_auc_score, roc_curve, confusion_matrix
from sklearn.model_selection import train_test_split
%matplotlib inline
def plot_roc_curve(roc_auc, fpr, tpr):
# Function to plot ROC Curve
# Inputs:
# roc_auc - AU ROC value (float)
# fpr - false positive rate (output of roc_curve()) array
# tpr - true positive rate (output of roc_curve()) array
plt.figure(figsize=(8,6))
lw = 2
plt.plot(fpr, tpr, color='orange', lw=lw, label='ROC curve (AUROC = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--', label='random')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
"""
Explanation: Exercise - Validation Metrics for Classification
Load the data (train and test data)
Fit the Logistic Regression
(ASSIGNMENT) Check the accuracy and the AU ROC
Visualize the ROC curve
Discuss metric results
NOTE: Run all cells up to Task 1 (do not make changes)
By: Hugo Lopes
Learning Unit 11
End of explanation
"""
df = pd.read_csv('../data/exercise_dataset_LU11.csv')
print('Shape:', df.shape)
df.head()
"""
Explanation: Load an example dataset
Data already prepared for a classifier
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(df.iloc[:, 1:],
df.iloc[:, 0],
test_size=0.33,
random_state=42)
"""
Explanation: Divide into Train and Test sets:
X_train: train data
y_train: target of train data
X_test: test data
y_test: target of test data
End of explanation
"""
# Code here:
"""
Explanation: Task 1: Fit the LogisticRegression() with the Train Set
End of explanation
"""
# Code here:
"""
Explanation: Task 2: Get the predictions & scores/probas on the Test Set
End of explanation
"""
# Code here for accuracy score, AU ROC:
# Code here for ROC curve:
# Call plot_roc_curve():
"""
Explanation: Task 3: Get the Accuracy score & AU ROC & ROC Curve
End of explanation
"""
|
nipunsadvilkar/ProbabilityForHackers | Introducing Random Variables.ipynb | mit | %matplotlib inline
import numpy as np
import pandas as pd
from itertools import product
# from IPython.core.display import HTML
# css = open('media/style-table.css').read() + open('media/style-notebook.css').read()
# HTML('<style>{}</style>'.format(css))
one_toss = np.array(['H', 'T'])
two_tosses = list(product(one_toss, repeat=2))
two_tosses
# For three tosses, just change the number of repetitions:
three_tosses = list(product(one_toss, repeat=3))
three_tosses
"""
Explanation: Random Variables
Frequently, when an experiment is performed, we are interested mainly in some function of the outcome as opposed to the actual outcome itself.
For instance,<br>
1) In the recent coin-flipping experiment, we may be interested in the total number of heads that occur and not care at all about the actual Head(H)–Tail(T) sequence that results. <br>
2) In throwing dice, we are often interested in the sum of the two dice and are not really concerned about the separate values of each die. That is, we may be interested in knowing that the sum is 7 and may not be concerned over whether the actual outcome was: (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), or (6, 1). <br>
Also, these quantities of interest, or, more formally, these real-valued functions defined on the sample space, are known as 'Random Variables'.
Let's do an experiment with Python to demonstrate
why we need Random Variables
and to show their importance.
End of explanation
"""
three_toss_probs = (1/8)*np.ones(8)
three_toss_space = pd.DataFrame({
'Omega':three_tosses,
'P(omega)':three_toss_probs
})
three_toss_space
"""
Explanation: As shown earlier in slide,<br>
A probability space $(\Omega, P)$ is an outcome space accompanied by the probabilities of all the outcomes.
<br>If you assume all eight outcomes of three tosses are equally likely, the probabilities are all 1/8:
End of explanation
"""
die = np.arange(1, 7, 1)
five_rolls = list(product(die, repeat=5))
# five_rolls = [list(i) for i in product(die, repeat=5)]
five_roll_probs = (1/6**5)*np.ones(6**5)
five_roll_space = pd.DataFrame({
'Omega':five_rolls,
'P(omega)':five_roll_probs
})
five_roll_space
"""
Explanation: As you can see above, product spaces (probability spaces) get large very quickly.
If we are tossing 10 times, the outcome space would consist of the $2^{10}$ sequences of 10 elements where each element is H or T. <br>
The outcomes are a pain to list by hand, but computers are good at saving us that kind of pain.
Lets take example of rolling die,<br>
If we roll a die 5 times, there are almost 8,000 possible outcomes:
End of explanation
"""
five_rolls_sum = pd.DataFrame({
'Omega':five_rolls,
'S(omega)':five_roll_space['Omega'].map(lambda val: sum(val)),
'P(omega)':five_roll_probs
})
five_rolls_sum
"""
Explanation: A Function on the Outcome Space
Suppose you roll a die five times and add up the number of spots you see. If that seems artificial, be patient for a moment and you'll soon see why it's interesting.
The sum of the rolls is a numerical function on the outcome space $\Omega$ of five rolls. The sum is thus a random variable. Let's call it $S$ . Then, formally,
$S: \Omega \rightarrow \{ 5, 6, \ldots, 30 \}$
The range of $S$ is the integers 5 through 30, because each die shows at least one and at most six spots. We can also use the equivalent notation
$\Omega \stackrel{S}{\rightarrow} \{ 5, 6, \ldots, 30 \}$
From a computational perspective, the elements of $\Omega$ are in the column omega of five_roll_space. Let's apply this function and create a larger table.
End of explanation
"""
five_rolls_sum[five_rolls_sum['S(omega)']==10]
"""
Explanation: Functions of Random Variables,
A random variable is a numerical function on $\Omega$ . Therefore by composition, a numerical function of a random variable is also a random variable.
For example, $S^2$ is a random variable, calculated as follows:
$S^2(\omega) = \big{(} S(\omega)\big{)}^2$
Thus for example $S^2(\text{[6 6 6 6 6]}) = 30^2 = 900$.
Events Determined by $S$
From the table five_rolls_sum it is hard to tell how many rows show a sum of 6, or 10, or any other value. To better understand the properties of $S$, we have to organize the information in five_rolls_sum.
For any subset $A$ of the range of $S$, define the event $\{S \in A\}$ as
$$
\{S \in A\} = \{\omega: S(\omega) \in A\}
$$
That is, $\{S \in A\}$ is the collection of all $\omega$ for which $S(\omega)$ is in $A$.
If that definition looks unfriendly, try it out in a special case. Take $A = \{5, 30\}$. Then $\{S \in A\}$ if and only if either all the rolls show 1 spot or all the rolls show 6 spots. So
$$
\{S \in A\} = \{\text{[1 1 1 1 1], [6 6 6 6 6]}\}
$$
It is natural to ask about the chance the sum is a particular value, say 10. That's not easy to read off the table, but we can access the corresponding rows:
End of explanation
"""
dist_S = five_rolls_sum.drop('Omega', axis=1).groupby('S(omega)', as_index=False).sum()
dist_S
"""
Explanation: There are 126 values of $\omega$ for which $S(\omega) = 10$. Since all the $\omega$ are equally likely, the chance that $S$ has the value 10 is 126/7776.
We are informal with notation and write $\{ S = 10 \}$ instead of $\{ S \in \{10\} \}$:
$$
P(S = 10) = \frac{126}{7776} = 1.62\%
$$
This is how Random Variables help us quantify the results of experiments for the purpose of analysis.
i.e., Random variables provide numerical summaries of the experiment in question. (Stats 110, Harvard; the paragraph below is from the same source.)
This definition is abstract but fundamental; one of the most important skills to
develop when studying probability and statistics is the ability to go back and forth
between abstract ideas and concrete examples. Relatedly, it is important to work
on recognizing the essential pattern or structure of a problem and how it connects to problems you have studied previously. We will often discuss stories that involve
tossing coins or drawing balls from urns because they are simple, convenient scenarios to work with, but many other problems are isomorphic: they have the same
essential structure, but in a different guise.
Since random variables are real-valued functions, we can now apply mathematical operations to them.
Looking at Distributions
The table below shows all the possible values of $S$ along with all their probabilities. It is called a "Probability Distribution Table" for $S$ .
End of explanation
"""
dist_S.iloc[:, 1].sum()  # .ix is deprecated in newer pandas; .iloc selects by position
"""
Explanation: The contents of the table – all the possible values of the random variable, along with all their probabilities – are called the probability distribution of $S$ , or just distribution of $S$ for short. The distribution shows how the total probability of 100% is distributed over all the possible values of $S$ .
Let's check this, to make sure that all the $\omega$ 's in the outcome space have been accounted for in the column of probabilities.
End of explanation
"""
s = dist_S.iloc[:, 0]    # the values of S
p_s = dist_S.iloc[:, 1]  # their probabilities
dist_S = pd.concat([s, p_s], axis=1)
dist_S
dist_S.plot(x="S(omega)", y="P(omega)", kind="bar")
"""
Explanation: That's 1 in a computing environment, and it is true in general for the distribution of any random variable.
Probabilities in a distribution are non-negative and sum to 1.
Visualising Distribution
End of explanation
"""
|
michrawson/nyu_ml_lectures | notebooks/03.1 Case Study - Supervised Classification of Handwritten Digits.ipynb | cc0-1.0 | from sklearn.datasets import load_digits
digits = load_digits()
"""
Explanation: Supervised Learning: Classification of Handwritten Digits
In this section we'll apply scikit-learn to the classification of handwritten
digits. This will go a bit beyond the iris classification we saw before: we'll
discuss some of the metrics which can be used in evaluating the effectiveness
of a classification model.
We'll work with the handwritten digits dataset which we saw in an earlier
section of the tutorial.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
"""
Explanation: We'll re-use some of our code from before to visualize the data and remind us what
we're looking at:
End of explanation
"""
from sklearn.decomposition import RandomizedPCA
# Note: RandomizedPCA was removed in scikit-learn >= 0.20; the equivalent
# there is PCA(n_components=2, svd_solver='randomized', random_state=1999).
pca = RandomizedPCA(n_components=2, random_state=1999)
proj = pca.fit_transform(digits.data)
plt.scatter(proj[:, 0], proj[:, 1], c=digits.target)
plt.colorbar()
"""
Explanation: Visualizing the Data
A good first-step for many problems is to visualize the data using one of the
Dimensionality Reduction techniques we saw earlier. We'll start with the
most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest
variance, and as such, can help give you a good idea of the structure of the
data set. Here we'll use RandomizedPCA, because it's faster for large N.
End of explanation
"""
from sklearn.manifold import Isomap
iso = Isomap(n_neighbors=5, n_components=2)
proj = iso.fit_transform(digits.data)
plt.scatter(proj[:, 0], proj[:, 1], c=digits.target)
plt.colorbar()
"""
Explanation: Here we see that the digits do cluster fairly well, so we can expect even
a fairly naive classification scheme to do a decent job separating them.
A weakness of PCA is that it produces a linear dimensionality reduction:
this may miss some interesting relationships in the data. If we want to
see a nonlinear mapping of the data, we can use one of the several
methods in the manifold module. Here we'll use Isomap (a concatenation
of Isometric Mapping) which is a manifold learning method based on
graph theory:
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import train_test_split
# Note: in scikit-learn >= 0.18 this lives in sklearn.model_selection
# split the data into training and validation sets
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, random_state=0)
# train the model
clf = GaussianNB()
clf.fit(X_train, y_train)
# use the model to predict the labels of the test data
predicted = clf.predict(X_test)
expected = y_test
"""
Explanation: It can be fun to explore the various manifold learning methods available,
and how the output depends on the various parameters used to tune the
projection.
In any case, these visualizations show us that there is hope: even a simple
classifier should be able to adequately identify the members of the various
classes.
Question: Given these projections of the data, which numbers do you think
a classifier might have trouble distinguishing?
Gaussian Naive Bayes Classification
For most classification problems, it's nice to have a simple, fast, go-to
method to provide a quick baseline classification. If the simple and fast
method is sufficient, then we don't have to waste CPU cycles on more complex
models. If not, we can use the results of the simple method to give us
clues about our data.
One good method to keep in mind is Gaussian Naive Bayes. It is a generative
classifier which fits an axis-aligned multi-dimensional Gaussian distribution to
each training label, and uses this to quickly give a rough classification. It
is generally not sufficiently accurate for real-world data, but can perform surprisingly well.
End of explanation
"""
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(X_test.reshape(-1, 8, 8)[i], cmap=plt.cm.binary,
interpolation='nearest')
# label the image with the target value
if predicted[i] == expected[i]:
ax.text(0, 7, str(predicted[i]), color='green')
else:
ax.text(0, 7, str(predicted[i]), color='red')
"""
Explanation: Question: why did we split the data into training and validation sets?
Let's plot the digits again with the predicted labels to get an idea of
how well the classification is working:
End of explanation
"""
matches = (predicted == expected)
print(matches.sum())
print(len(matches))
matches.sum() / float(len(matches))
"""
Explanation: Quantitative Measurement of Performance
We'd like to measure the performance of our estimator without having to resort
to plotting examples. A simple method might be to simply compare the number of
matches:
End of explanation
"""
print(clf.score(X_test, y_test))
"""
Explanation: We see that nearly 1500 of the 1800 predictions match the input. But there are other
more sophisticated metrics that can be used to judge the performance of a classifier:
several are available in the sklearn.metrics submodule.
We can also use clf.score as a helper method to calculate how well the classifier performs.
End of explanation
"""
|
jseabold/statsmodels | examples/notebooks/markov_autoregression.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import requests
from io import BytesIO
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
"""
Explanation: Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
"""
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('https://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp
# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))
# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
"""
Explanation: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:
$$
y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t
$$
Each period, the regime transitions according to the following matrix of transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
p_{01} & p_{11}
\end{bmatrix}
$$
where $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$.
The model class is MarkovAutoregression in the time-series part of statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.
After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
End of explanation
"""
fig, axes = plt.subplots(2, figsize=(7,7))
ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')
ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Smoothed probability of recession')
fig.tight_layout()
"""
Explanation: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
End of explanation
"""
print(res_hamilton.expected_durations)
"""
Explanation: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
End of explanation
"""
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')
dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()
# Plot the dataset
dta_kns[0].plot(title='Excess returns', figsize=(12, 3))
# Fit the model
mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)
res_kns = mod_kns.fit()
res_kns.summary()
"""
Explanation: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.
The model in question is:
$$
\begin{align}
y_t & = \varepsilon_t \\
\varepsilon_t & \sim N(0, \sigma_{S_t}^2)
\end{align}
$$
Since there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).
End of explanation
"""
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')
ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime for stock returns')
ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-variance regime for stock returns')
fig.tight_layout()
"""
Explanation: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
End of explanation
"""
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')
dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()
dta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio
dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100
dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()
# Plot the data
dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))
plt.figure()
dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));
"""
Explanation: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.
In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression of Hamilton (1989).
Each period, the regime now transitions according to the following matrix of time-varying transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00,t} & p_{10,t} \\
p_{01,t} & p_{11,t}
\end{bmatrix}
$$
where $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:
$$
p_{ij,t} = \frac{\exp\{ x_{t-1}' \beta_{ij} \}}{1 + \exp\{ x_{t-1}' \beta_{ij} \}}
$$
Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.
End of explanation
"""
mod_filardo = sm.tsa.MarkovAutoregression(
dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,
exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))
np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
"""
Explanation: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
"""
Explanation: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
End of explanation
"""
res_filardo.expected_durations[0].plot(
title='Expected duration of a low-production state', figsize=(12,3));
"""
Explanation: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
End of explanation
"""
|
xmnlab/pywim | notebooks/WeightEstimation.ipynb | mit | from IPython.display import display
from matplotlib import pyplot as plt
from scipy import integrate
import numpy as np
import pandas as pd
import peakutils
import sys
# local
sys.path.insert(0, '../')
from pywim.estimation.speed import speed_by_peak
from pywim.utils import storage
from pywim.utils.dsp import wave_curve
from pywim.utils.stats import iqr
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Weight-Estimation" data-toc-modified-id="Weight-Estimation-1"><span class="toc-item-num">1 </span>Weight Estimation</a></div><div class="lev2 toc-item"><a href="#Algorithm-Setup" data-toc-modified-id="Algorithm-Setup-11"><span class="toc-item-num">1.1 </span>Algorithm Setup</a></div><div class="lev2 toc-item"><a href="#Open-Raw-Data-File-(Synthetic)" data-toc-modified-id="Open-Raw-Data-File-(Synthetic)-12"><span class="toc-item-num">1.2 </span>Open Raw Data File (Synthetic)</a></div><div class="lev2 toc-item"><a href="#Data-cleaning" data-toc-modified-id="Data-cleaning-13"><span class="toc-item-num">1.3 </span>Data cleaning</a></div><div class="lev2 toc-item"><a href="#Speed-estimation" data-toc-modified-id="Speed-estimation-14"><span class="toc-item-num">1.4 </span>Speed estimation</a></div><div class="lev2 toc-item"><a href="#Wave-Curve-extration" data-toc-modified-id="Wave-Curve-extration-15"><span class="toc-item-num">1.5 </span>Wave Curve extration</a></div><div class="lev2 toc-item"><a href="#Weight-estimation" data-toc-modified-id="Weight-estimation-16"><span class="toc-item-num">1.6 </span>Weight estimation</a></div><div class="lev3 toc-item"><a href="#Estimation-by-Peak-Voltage" data-toc-modified-id="Estimation-by-Peak-Voltage-161"><span class="toc-item-num">1.6.1 </span>Estimation by Peak Voltage</a></div><div class="lev3 toc-item"><a href="#Estimation-by-Area-under-the-signal" data-toc-modified-id="Estimation-by-Area-under-the-signal-162"><span class="toc-item-num">1.6.2 </span>Estimation by Area under the signal</a></div><div class="lev3 toc-item"><a href="#Estimation-by-Re-sampling-of-area" data-toc-modified-id="Estimation-by-Re-sampling-of-area-163"><span class="toc-item-num">1.6.3 </span>Estimation by Re-sampling of area</a></div><div class="lev1 toc-item"><a href="#References" data-toc-modified-id="References-2"><span class="toc-item-num">2 </span>References</a></div>
# Weight Estimation
Weight estimation can differ respectively of the technology.
## Algorithm Setup
End of explanation
"""
f = storage.open_file('../data/wim_day_001_01_20170324.h5')
dset = f[list(f.keys())[0]]
df = storage.dataset_to_dataframe(dset)
# information on the file
paddle = len(max(dset.attrs, key=lambda v: len(v)))
print('METADATA')
print('='*80)
for k in dset.attrs:
print('{}:'.format(k).ljust(paddle, ' '), dset.attrs[k], sep='\t')
df.plot()
plt.grid(True)
plt.show()
"""
Explanation: Open Raw Data File (Synthetic)
End of explanation
"""
data_cleaned = df.copy()
for k in data_cleaned.keys():
# use the first 10 points as reference to correct the baseline
# in this case should work well
data_cleaned[k] -= data_cleaned[k].values[:10].mean()
data_cleaned.plot()
plt.grid(True)
plt.show()
"""
Explanation: Data cleaning
## use information from data cleaning report
End of explanation
"""
# calculates the speed for each pair of sensors by axles
speed = speed_by_peak.sensors_estimation(
data_cleaned, dset.attrs['sensors_distance']
)
display(speed)
"""
Explanation: Speed estimation
End of explanation
"""
curves = []
for k in data_cleaned.keys():
curves.append(
wave_curve.select_curve_by_threshold(
data_cleaned[k], threshold=1, delta_x=5
)
)
for c in curves[-1]:
# plot each axle measured
c.plot()
plt.grid(True)
plt.title(k)
plt.show()
"""
Explanation: Wave Curve extration
End of explanation
"""
def weigh_by_peak_signal_voltage(peaks: [float], cs: [float]):
"""
:param peaks: peak signal voltage array
:type peaks: np.array
:param cs: calibration factor array
:type cs: np.array
:returns: np.array
"""
return np.array(peaks * cs)
x = data_cleaned.index.values
for k in data_cleaned.keys():
y = data_cleaned[k].values
indexes = peakutils.indexes(y, thres=0.5, min_dist=30)
# calibration given by random function
c = np.random.randint(900, 1100, 1)
w = weigh_by_peak_signal_voltage(y[indexes], c)
print(k, w)
"""
Explanation: Weight estimation
\cite{kwon2007development} presents three approaches to weight estimation:
Peak voltage;
Area under the signal;
Re-sampling of area (proposed method).
In this study, these three methods will be implemented.
Estimation by Peak Voltage
According to \cite{kwon2007development}, the peak voltage generated by the
same vehicle does not change for different speeds; however, this assumption is
incorrect, since the peak will change if tire inflation pressure is not
constant. So, this method is mainly useful when accuracy is not critical.
The equation presented in that study is:
\begin{equation}\label{eq:weigh_by_peak}
w = \alpha \cdot \text{peak\_signal\_voltage}(x_i)
\end{equation}
where:
peak_signal_voltage($x_i$) is the peak voltage value of the digitized signal x(t);
and α is a calibration factor which must be determined using a known axle load.
End of explanation
"""
load = []
t = 1/dset.attrs['sample_rate']
print('t = ', t)
for axles_curve in curves:
# composite trapezoidal rule
load.append([
integrate.trapz(v, dx=t)
for v in axles_curve
])
print('\nLoad estimation:')
display(load)
# W = (v/L) * A * C
v = speed
a = load
l = 0.053 # sensor width
c = dset.attrs['calibration_constant']
w = []
for i, _load in enumerate(load):
# sensor data
_w = []
for j in range(len(_load)):
# axle data
_w.append((v[i][j]/l) * a[i][j] * c[i])
w.append(_w)
weight = np.matrix(w)
print('Axle estimated weight by each sensor:')
display(weight)
weight_axles = []
for i in range(weight.shape[0]):
v = pd.Series(weight[:, i].view(np.ndarray).flatten())
weight_axles.append(iqr.reject_outliers(v).mean())
print('Axle estimated weight:')
display(weight_axles)
gvw = sum(weight_axles)
print('Gross Vehicle Weigh:', gvw)
"""
Explanation: Estimation by Area under the signal
\cite{kwon2007development} presented the axle load computation method recommended
by Kistler \cite{kistler2004installation} that computes the axle loads using the area
under the signal curve and the speed of the vehicle traveling. A typical signal curve
can be viewed as:
\begin{figure}
\centerline{\includegraphics[width=10cm]{img/kistler-signal.png}}
\caption{\label{fig:kistler-signal} Raw data signal illustration. Source \cite{kistler2004installation}}
\end{figure}
The equation
presented by Kistler \cite{kistler2004installation} is:
\begin{equation}\label{eq:weigh_by_area_under_the_signal_1}
W = \frac{V \cdot C}{L} \cdot \int_{t_1 - \Delta t}^{t_2 + \Delta t} (x(t) - b(t)) \, dt
\end{equation}
where:
t1 and t2 are the points where the threshold crosses the start and the end of the signal;
$\Delta t$ is the average distance from t1 and t2 to the point where the signal is near the baseline;
C is a constant calibration factor;
L is the sensor width;
V is the speed (velocity) of the vehicle;
x(t) is the load signal;
b(t) is the baseline level.
There is also a digital form:
\begin{equation}\label{eq:weigh_by_area_under_the_signal_2}
W = \frac{V \cdot C}{L} \cdot \sum_i (x_i - b_i)
\end{equation}
End of explanation
"""
|
chrisbarnettster/cfg-analysis-on-heroku-jupyter | notebooks/notebooks/othernotebook.ipynb | mit | import numpy as np
np.random.seed(data_id)
data = np.random.randn(100)
"""
Explanation: Notebook arguments
data_id (int): Select which data file to load. Valid values: 0, 1, 2.
analysis_type (string): Which analysis type to perform. Valid values: 'a', 'b' and 'c'
Template Notebook
<p class=lead>This notebook executes (well, pretends to execute) one of the three analysis types
'a', 'b' or 'c' on one of the data files in a set (identified by an index).</p>
You can either run this notebook directly, or run it through the master notebook for batch processing.
Load the data
We mock loading some data. We use data_id as a seed to generate different data-sets.
In a real-world case, this could be the index into a list of data files.
End of explanation
"""
analysis_dict = dict(a=np.mean, b=np.max, c=np.min)
result = analysis_dict[analysis_type](data)
print('Result of analysis "%s" on dataset %d is %.3f.' % (analysis_type, data_id, result))
"""
Explanation: Processing
End of explanation
"""
|
arcyfelix/Courses | 18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb | apache-2.0 | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Time Series Exercise -
Follow along with the instructions in bold. Watch the solutions video if you get stuck!
The Data
Source: https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75#!ds=22ox&display=line
Monthly milk production: pounds per cow. Jan 62 - Dec 75
Import numpy pandas and matplotlib
End of explanation
"""
data = pd.read_csv("./data/monthly-milk-production.csv", index_col = 'Month')
"""
Explanation: Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'
End of explanation
"""
data.head()
"""
Explanation: Check out the head of the dataframe
End of explanation
"""
data.index = pd.to_datetime(data.index)
"""
Explanation: Make the index a time series by using:
milk.index = pd.to_datetime(milk.index)
End of explanation
"""
data.plot()
"""
Explanation: Plot out the time series data.
End of explanation
"""
data.info()
training_set = data.head(156)
test_set = data.tail(12)
"""
Explanation: Train Test Split
Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future)
Create a train/test split using indexing (hint: use .head(), .tail(), or .iloc[]). We don't want a random train/test split; we want the test set to be the last 12 months of data, with everything before it serving as the training set.
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
training_set = scaler.fit_transform(training_set)
test_set_scaled = scaler.transform(test_set)
"""
Explanation: Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
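The fit-on-train, transform-both idea can be sketched with plain NumPy (illustrative numbers only); this is the same min-max rule MinMaxScaler applies for its default feature range:

```python
import numpy as np

train = np.array([600.0, 700.0, 800.0, 900.0])  # made-up training values
test = np.array([850.0, 950.0])                 # made-up test values

# "Fit": compute statistics from the training data only
lo, hi = train.min(), train.max()

# "Transform": apply the training statistics to both sets
train_scaled = (train - lo) / (hi - lo)
test_scaled = (test - lo) / (hi - lo)  # test values may fall outside [0, 1]
```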
End of explanation
"""
def next_batch(training_data, batch_size, steps):
"""
INPUT: Data, Batch Size, Time Steps per batch
OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]
"""
# STEP 1: Use np.random.randint to set a random starting point index for the batch.
    # Remember that each batch needs to have the same number of steps in it.
# This means you should limit the starting point to len(data)-steps
random_start = np.random.randint(0, len(training_data) - steps)
# STEP 2: Now that you have a starting index you'll need to index the data from
# the random start to random start + steps + 1. Then reshape this data to be (1,steps+1)
# Create Y data for time series in the batches
y_batch = np.array(training_data[random_start : random_start + steps + 1]).reshape(1, steps+1)
# STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]
# You'll need to reshape these into tensors for the RNN to .reshape(-1,steps,1)
return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1)
"""
Explanation: Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!
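The key idea in the function is that a single window of length steps+1 yields both the inputs and the targets, shifted by one step. A stripped-down sketch with a deterministic start index (the start is fixed here only to keep the example reproducible; the real function draws it with np.random.randint):

```python
import numpy as np

steps = 4
series = np.arange(10.0)  # stand-in for the scaled training series
start = 2                 # in the real function this comes from np.random.randint

window = series[start:start + steps + 1].reshape(1, steps + 1)

X_batch = window[:, :-1].reshape(-1, steps, 1)  # values at t, ..., t+steps-1
y_batch = window[:, 1:].reshape(-1, steps, 1)   # values at t+1, ..., t+steps
```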
End of explanation
"""
import tensorflow as tf
"""
Explanation: Setting Up The RNN Model
Import TensorFlow
End of explanation
"""
num_inputs = 1
num_time_steps = 12
num_neurons = 100
num_outputs = 1
learning_rate = 0.03
num_train_iter = 4000
batch_size = 1
"""
Explanation: The Constants
Define the constants in a single cell. You'll need the following (in parentheses are the values I used in my solution, but you can play with some of these):
* Number of Inputs (1)
* Number of Time Steps (12)
* Number of Neurons per Layer (100)
* Number of Outputs (1)
* Learning Rate (0.03)
* Number of Iterations for Training (4000)
* Batch Size (1)
End of explanation
"""
X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])
y = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])
"""
Explanation: Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.
End of explanation
"""
cell = tf.contrib.rnn.OutputProjectionWrapper(tf.contrib.rnn.BasicLSTMCell(num_units = num_neurons, activation = tf.nn.relu), output_size = num_outputs)
"""
Explanation: Now create the RNN layer. You have complete freedom over this: use tf.contrib.rnn and choose anything you want (OutputProjectionWrapper, BasicRNNCell, BasicLSTMCell, MultiRNNCell, GRUCell, etc.). Keep in mind not every combination will work well! (If in doubt, the solutions used an OutputProjectionWrapper around a BasicLSTMCell with relu activation.)
End of explanation
"""
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype = tf.float32)
"""
Explanation: Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)
End of explanation
"""
# MSE
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
train = optimizer.minimize(loss)
"""
Explanation: Loss Function and Optimizer
Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.
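The loss itself is just the mean of the squared errors; a quick NumPy sanity check of the same quantity (made-up values):

```python
import numpy as np

pred = np.array([1.0, 2.0, 4.0])
target = np.array([1.0, 2.0, 2.0])

mse = np.mean((pred - target) ** 2)  # (0 + 0 + 4) / 3
```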
End of explanation
"""
init = tf.global_variables_initializer()
"""
Explanation: Initialize the global variables
End of explanation
"""
saver = tf.train.Saver()
"""
Explanation: Create an instance of tf.train.Saver()
End of explanation
"""
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction = 0.75)
with tf.Session() as sess:
# Run
sess.run(init)
for iteration in range(num_train_iter):
X_batch, Y_batch = next_batch(training_set, batch_size, num_time_steps)
sess.run(train, feed_dict = {X: X_batch, y: Y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict = {X: X_batch, y: Y_batch})
print(iteration, "\tMSE:", mse)
# Save Model for Later
saver.save(sess, "./checkpoints/ex_time_series_model")
"""
Explanation: Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.
End of explanation
"""
test_set
"""
Explanation: Predicting Future (Test Data)
Show the test_set (the last 12 months of your original complete data set)
End of explanation
"""
with tf.Session() as sess:
# Use your Saver instance to restore your saved rnn time series model
saver.restore(sess, "./checkpoints/ex_time_series_model")
    # Create a numpy array for your generative seed from the last 12 months of the
# training set data. Hint: Just use tail(12) and then pass it to an np.array
train_seed = list(training_set[-12:])
## Now create a for loop that
for iteration in range(12):
X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
train_seed.append(y_pred[0, -1, 0])
"""
Explanation: Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set!
Generative Session
NOTE: Recall that our model is really only trained to predict 1 time step ahead, asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back to the original model and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data. (Which has its limits due to the smaller size of our data set)
Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.
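Stripped of the TensorFlow details, the generative loop is just: predict one step, append the prediction to the seed, and slide the window forward. A sketch with a placeholder one-step model (a persistence forecast that simply repeats the last value, standing in for the trained RNN):

```python
import numpy as np

num_time_steps = 12
seed = list(np.linspace(0.0, 1.0, num_time_steps))  # stand-in for the last year of scaled data

def predict_one_step(window):
    # Placeholder model: persistence forecast (the real code runs the RNN here)
    return window[-1]

for _ in range(12):
    window = np.array(seed[-num_time_steps:])
    seed.append(predict_one_step(window))

generated = seed[num_time_steps:]  # the 12 generated values
```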
End of explanation
"""
train_seed
"""
Explanation: Show the result of the predictions.
End of explanation
"""
results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12, 1))
"""
Explanation: Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.
End of explanation
"""
test_set['Generated'] = results
"""
Explanation: Create a new column on the test_set called "Generated" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.
End of explanation
"""
test_set
"""
Explanation: View the test_set dataframe.
End of explanation
"""
test_set.plot()
"""
Explanation: Plot out the two columns for comparison.
End of explanation
"""
|
boffi/boffi.github.io | dati_2015/01/.ipynb_checkpoints/Resonance-checkpoint.ipynb | mit | def x_2z_over_dst(z):
w = 2*pi
# beta = 1, wn =w
wd = w*sqrt(1-z*z)
# Clough Penzien p. 43
A = z/sqrt(1-z*z)
def f(t):
return (cos(wd*t)+A*sin(wd*t))*exp(-z*w*t)-cos(w*t)
return pl.vectorize(f)
"""
Explanation: Resonant excitation
We want to study the behaviour of an undercritically damped SDOF system when it is
subjected to a harmonic force $p(t) = p_o \sin\omega_nt$, i.e., when the excitation frequency equals the free vibration frequency of the system.
Of course, $\beta=1$, $D(\beta,\zeta)|_{\beta=1}=\displaystyle\frac{1}{2\zeta}$
and $\theta=\pi/2$, hence $$\xi(t)=-\Delta_{st}\,\frac{1}{2\zeta}\cos\omega_nt.$$
Starting from rest conditions, we have
$$\frac{x(t)}{\Delta_{st}} = \exp(-\zeta\omega_n t)\left(
\frac{\omega_n}{2\omega_D}\sin(\omega_D t)
+\frac{1}{2\zeta}\cos(\omega_D t)\right) - \frac{1}{2\zeta}\cos(\omega_n t)$$
and, multiplying both sides by $2\zeta$,
\begin{align}
x(t)\frac{2\zeta}{\Delta_{st}} = \bar{x}(t) & =
\exp(-\zeta\omega_n t)\left(
\zeta\frac{\omega_n}{\omega_D}\sin(\omega_D t)
+\cos(\omega_D t)\right) - \cos(\omega_n t)\\
& = \exp(-\zeta\omega_n t)\left(
\frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_D t)
+\cos(\omega_D t)\right) - \cos(\omega_n t).
\end{align}
We have now a normalized function of time that grows, oscillating, from 0 to 1,
where the free parameters are just $\omega_n$ and $\zeta$.
To go further, we set arbitrarily $\omega_n=2\pi$ (our plots will be nicer...)
and have just a dependency on $t$ and $\zeta$.
Eventually, we define a function of $\zeta$ that returns a function of $t$ only,
here it is...
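As a quick numerical check of that function: once the transient has decayed, $\bar{x}(t)=2\zeta\,x(t)/\Delta_{st}$ should oscillate with unit amplitude, i.e., $x/\Delta_{st}$ settles to an amplitude of $1/2\zeta$. A self-contained sketch (using NumPy directly rather than the pl shortcuts) for $\zeta=0.05$ and $\omega_n=2\pi$:

```python
import numpy as np

z = 0.05                     # damping ratio
w = 2 * np.pi                # natural frequency, as chosen above
A = z / np.sqrt(1 - z * z)   # coefficient of the sine term
wd = w * np.sqrt(1 - z * z)  # damped frequency

t = np.linspace(50.0, 52.0, 2001)  # late times: the transient has died out
x_bar = (np.cos(wd * t) + A * np.sin(wd * t)) * np.exp(-z * w * t) - np.cos(w * t)

steady_amplitude = np.abs(x_bar).max()  # should be very close to 1
```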
End of explanation
"""
t = pl.linspace(0,20,1001)
print(t)
"""
Explanation: Above we compute some constants that depend on $\zeta$,
i.e., the damped frequency and the coefficient in
front of the sine term, then we define a function of time
in terms of these constants and of $\zeta$ itself.
Because we are going to use this function with a vector argument,
the last touch is to vectorize the function just before returning it
to the caller.
Plotting our results
We start by using a function defined in the pylab aka pl module to
generate a vector whose entries are 1001 equispaced real numbers, starting from
zero and up to 20, inclusive of both ends, and assigning the name t to this vector.
End of explanation
"""
zetas = (.02, .05, .10, .20)
print(zetas)
"""
Explanation: We want to see what happens for different values of $\zeta$, so we create
a list of values and assign the name zetas to this list.
End of explanation
"""
for z in zetas:
# call the function of zeta that returns
# a function of time, assign the name bar_x to this function
bar_x = x_2z_over_dst(z)
# do the plotting...
pl.plot(t,bar_x(t))
pl.ylim((-1.0, 1.0))
pl.title(r'$\zeta=%4.2f$'%(z,))
pl.show()
"""
Explanation: Now, the real plotting:
z takes in turn each of the values in zetas,
then we generate a function of time for the current z
we generate a plot with a line that goes through the point
(a(0),b(0)), (a(1),b(1)), (a(2),b(2)), ...
where, in our case, a is the vector t and b is the vector
returned from the vectorized function bar_x
we make a slight adjustement to the extreme values of the y-axis
of the plot
we give a title to the plot
we FORCE (pl.show()) the plot to be produced.
End of explanation
"""
t = pl.linspace(0,5,501)
for z in zetas:
# call the function of zeta that returns
# a function of time, assign the name bar_x to this function
bar_x = x_2z_over_dst(z)
# do the plotting...
pl.plot(t,bar_x(t)/2/z, label=r'$\zeta=%4.2f$'%(z,))
pl.legend(ncol=5,loc='lower center', fancybox=1, shadow=1, framealpha=.95)
pl.grid()
"""
Explanation: Wait a minute!
So, after all this work, we have that the greater the damping, the smaller the
number of cycles that's needed to reach the maximum value of the response...
Yes, it's exactly like that, and there is a reason. Think of it.
.
.
.
.
.
.
.
.
.
.
We have normalized the response functions to have always a maximum absolute
value of one, but in effect the max values are different, and a heavily damped
system needs fewer cycles to reach steady-state because the maximum value is much,
much smaller.
Let's plot the unnormalized (well, there's still the $\Delta_{st}$ normalization)
responses.
Note the differences with above:
we focus on a shorter interval of time and, in each step
we don't add a title
we don't force the creation of a distinct plot in each cycle,
we add a label to each curve
at the end of the cycle,
we ask for the generation of a legend that uses the labels
we specified to generate a, well, a legend for the curves
we ask to plot all the properly labeled curves using pl.plot().
End of explanation
"""
|
ProfessorKazarinoff/staticsite | content/code/pint/diffusion_problem_with_python_pint.ipynb | gpl-3.0 | import pint
from math import exp, sqrt
u = pint.UnitRegistry()
"""
Explanation: I was working through a diffusion problem and thought that Python and a package for dealing with units and unit conversions called pint would be useful.
I'm using the Anaconda distribution of Python, which comes with the Anaconda Prompt already installed. For help installing Anaconda, see a previous blog post: Installing Anaconda on Windows 10.
To use the pint package, I needed to install pint using the Anaconda Prompt:
```
pip install pint
```
The problem I'm working on involves the diffusion of nitrogen gas (N<sub>2</sub>) into a thin plate of iron.
Given:
When α-iron is put in a nitrogen atmosphere, the concentration of nitrogen in the α-iron, $C_{N}$ (in units of wt%) is a function of the nitrogen pressure $P_{N_2}$ and temperature $T$ according to the relationship:
$$C_{N} = 4.9 \times 10^{-3} \sqrt{P_{N_2}} \exp\left(-\frac{Q_n}{RT}\right) $$
Where:
$Q_n = 37,600 \frac{J}{mol}$
$R=8.31 \frac{J}{mol-K}$
$T$ is the temperature in Kelvin.
At 300 °C the nitrogen gas pressure on one side of an iron plate is 0.10 MPa. On the other side of the iron plate, the nitrogen gas pressure is 5.0 MPa. The iron plate is 1.5 mm thick. Assume the pre-exponential term $D_0$ and the activation energy of diffusion of nitrogen in α-iron, $Q_d$ are equal to the values below:
$D_0 = 5 \times 10^{-7} \frac{m^2}{s}$
$Q_d = 77,000 \frac{J}{mol} $
Find:
Calculate the diffusion flux, J through the plate using Fick's First Law of Diffusion:
$$ J = -D \frac{dC}{dx} $$
Solution:
We have a couple different quantities and a couple of different units to handle to solve this problem. We'll start out importing pint and creating a UnitRegistry object. We'll also need the exp (e raised to a power) and sqrt (square root) functions from the math module, part of the Python standard library.
End of explanation
"""
Q_ = u.Quantity
T = Q_(300, u.degC)
print('T = {}'.format(T))
T.ito('degK')
print('T = {}'.format(T))
T = T.magnitude * u.kelvin
print(T)
"""
Explanation: Let's start with the temperature, T = 300 °C.
Temperature units in °F and °C are relative units with an off-set scale. °C and °F are not multiplicative units. Non-multiplicatve units are handled by pint a little differently compared to regular multiplicative units.
To create a variable including a unit of degrees C, we instantiate a Quantity object and pass in the temperature in °C along with the unit (u.degC). We can convert the temperature to Kelvin (K) using the .ito method.
Since we want to do some multiplication, division and exponentiation with our temperature, we need to convert the temperature to a multiplicative unit. Pint has two versions of the temperature unit in Kelvin (K). There is the non-multiplicative type degK and the multiplicative type kelvin.
We convert the temperature variable T to the multiplicative type kelvin by pulling out the magnitude (the number part without the degK unit) from the T variable and multiplying it by the kelvin unit from pint.
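Without pint, the offset conversion is just the familiar formula below; the point of going through kelvin is that only the absolute scale supports multiplication and division:

```python
def celsius_to_kelvin(t_c):
    # Offset conversion: Celsius is not a multiplicative unit
    return t_c + 273.15

T_kelvin = celsius_to_kelvin(300)  # 573.15
```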
End of explanation
"""
Qn = 37600 * u.J/u.mol
R = 8.31 * u.J/(u.mol*u.kelvin)
"""
Explanation: Next we'll create variables for $Q_n = 37,600 \frac{J}{mol}$ and the universal gas contant $R=8.31 \frac{J}{mol-K}$
End of explanation
"""
PN1 = 0.10
PN2 = 5.0
"""
Explanation: Our first nitrogen pressure is 0.10 MPa and our second nitrogen pressure is 5.0 MPa, we'll make variables for both:
End of explanation
"""
CN1 = (4.9e-3)*sqrt(PN1)*exp(-Qn/(R*T))
print(CN1)
CN2 = (4.9e-3)*sqrt(PN2)*exp(-Qn/(R*T))
print(CN2)
"""
Explanation: Now we can calculate the two nitrogen concentrations in wt% using the equation:
$$C_{N} = 4.9 \times 10^{-3} \sqrt{P_{N_2}} \exp\left(-\frac{Q_n}{RT}\right)$$
where $P_{N_2}$ = 0.10 for one side of the iron plate and $P_{N_2}$ = 5.0 for the other side of the iron plate
End of explanation
"""
p=7.874*u.g/u.cm**3
p.ito(u.kg/u.m**3)
mFe = 1*u.kg
vFe = mFe/p
"""
Explanation: These values CN1 and CN2 are in units of wt% N in an iron-nitrogen "alloy" where almost all of the alloy is iron with only a small amount of nitrogen. To use Fick's First Law of Diffusion:
$$ J = -D \frac{dC}{dx} $$
We need a concentration gradient $dC$ in units of mass per unit volume like kg/m<sup>3</sup> or g/cm<sup>3</sup> not in units of wt %. Therefore we need to convert the two concentrations of nitrogen in iron, CN1 and CN2 from units of wt% to units of kg/m<sup>3</sup>.
To make the conversion between wt% and mass per unit volume we have to pick a sample mass of iron. This mass of iron will contain a mass of nitrogen (based on wt%). We can divide this mass of nitrogen by the volume of iron that corresponds to the mass of iron we picked. As long as we divide the mass of nitrogen by the volume of iron that contains that mass of nitrogen, we will end up with a unit conversion from wt% to kg/m<sup>3</sup> that works. So let's pick 1 kilogram of iron, and use the density of iron as 7.874 g/cm<sup>3</sup>.
We set a variable p to equal the density of iron in g/cm<sup>3</sup> and use the .ito() method to convert the density to units of kg/m<sup>3</sup>. Then we take the mass of iron that we picked (1 kg) and convert it to a volume of iron using the density p. This will give us the volume of 1 kg of iron in units of m<sup>3</sup>.
End of explanation
"""
mN1 = mFe*CN1*0.01
CN1 = mN1/vFe
print(CN1)
mN2 = mFe*CN2*0.01
CN2 = mN2/vFe
print(CN2)
"""
Explanation: Now we'll determine how many kg of nitrogen there are in 1 kg of iron given our concentrations CN1 and CN2 in wt%. Note that we have to multiply CN1 and CN2 by 0.01 because CN1 and CN2 are in units of %.
When we divide the mass of nitrogen by the volume of iron, we get a concentration of nitrogen in iron in units of kg/m<sup>3</sup>, which are the concentration units we need in order to use Fick's First Law of Diffusion.
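Numerically, the whole conversion collapses to (wt% / 100) × density. A plain-Python sketch with a made-up, hypothetical concentration value:

```python
rho_fe = 7874.0            # density of iron, kg/m^3
m_fe = 1.0                 # chosen sample mass of iron, kg
v_fe = m_fe / rho_fe       # volume of that sample, m^3

c_wt = 0.01                # hypothetical nitrogen concentration, wt%
m_n = m_fe * c_wt / 100.0  # kg of nitrogen in the sample

c_vol = m_n / v_fe         # concentration in kg/m^3
```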
End of explanation
"""
dC = CN2-CN1
dx = 1.5 *u.mm
dx.ito(u.m)
"""
Explanation: Back to Fick's First Law of Diffusion:
$$ J = -D \frac{dC}{dx} $$
The difference in concentration $dC$, is just the difference between the two concentrations CN1 and CN2 now that they are both in units of kg/m<sup>3</sup>. $dx$, the change in distance is the thickness of the plate, 1.5 mm. We'll convert the change in distance, $dx$ to units of meters using the ito() method.
End of explanation
"""
D0 = 5e-7 * u.m**2/u.s
Qd = 77000 * u.J/u.mol
"""
Explanation: Next we need to find the diffusion coefficient $D$. To do this, we need the pre-exponential term $D_0$ and the activation energy of diffusion $Q_d$.
From the beginning of the problem:
$D_0 = 5 \times 10^{-7} \frac{m^2}{s}$
$Q_d = 77,000 \frac{J}{mol} $
Let's assign these to variables with the applicable units.
End of explanation
"""
D = D0 * exp(-Qd/(R*T))
print(D)
"""
Explanation: To calculate the diffusion coefficient $D$, we use the equation relating $D$ to temperature $T$:
$$ D = D_0 \, e^{-\frac{Q_d}{RT}} $$
End of explanation
"""
J = -D*(dC/dx)
J
"""
Explanation: Now that we have $D$, $dC$ and $dx$, we can finally calculate diffusion flux, $J$ through the plate using Fick's First Law of Diffusion:
$$ J = -D \frac{dC}{dx} $$
End of explanation
"""
|
irazhur/StatisticalMethods | examples/SDSScatalog/FirstLook.ipynb | gpl-2.0 | %load_ext autoreload
%autoreload 2
import numpy as np
import SDSS
import pandas as pd
import matplotlib
%matplotlib inline
objects = "SELECT top 10000 \
ra, \
dec, \
type, \
dered_u as u, \
dered_g as g, \
dered_r as r, \
dered_i as i, \
petroR50_i AS size \
FROM PhotoObjAll \
WHERE \
((type = '3' OR type = '6') AND \
ra > 185.0 AND ra < 185.2 AND \
dec > 15.0 AND dec < 15.2)"
print(objects)
# Download data. This can take a while...
sdssdata = SDSS.select(objects)
sdssdata
"""
Explanation: A First Look at the SDSS Photometric "Galaxy" Catalog
The Sloan Digital Sky Survey imaged over 10,000 sq degrees of sky (about 25% of the total), automatically detecting, measuring and cataloging millions of "objects".
While the primary data products of the SDSS were (and still are) its spectroscopic surveys, the photometric survey provides an important testing ground for dealing with pure imaging surveys like those being carried out by DES and the one planned with LSST.
Let's download part of the SDSS photometric object catalog and explore it.
SDSS data release 12 (DR12) is described at the SDSS3 website and in the survey paper by Alam et al 2015.
We will use the SDSS DR12 SQL query interface. For help designing queries, the sample queries page is invaluable, and you will probably want to check out the links to the "schema browser" at some point as well. Notice the "check syntax only" button on the SQL query interface: this is very useful for debugging SQL queries.
Small test queries can be executed directly in the browser. Larger ones (involving more than a few tens of thousands of objects, or that involve a lot of processing) should be submitted via the CasJobs system. Try the browser first, and move to CasJobs when you need to.
End of explanation
"""
!mkdir -p downloads
sdssdata.to_csv("downloads/SDSSobjects.csv")
"""
Explanation: Notice:
* Some values are large and negative - indicating a problem with the automated measurement routine. We will need to deal with these.
* Sizes are "effective radii" in arcseconds. The typical resolution ("point spread function" effective radius) in an SDSS image is around 0.7".
Let's save this download for further use.
End of explanation
"""
# We'll use astronomical g-r color as the colorizer, and then plot
# position, magnitude, size and color against each other.
data = pd.read_csv("downloads/SDSSobjects.csv",usecols=["ra","dec","u","g",\
"r","i","size"])
# Filter out objects with bad magnitude or size measurements:
data = data[(data["u"] > 0) & (data["g"] > 0) & (data["r"] > 0) & (data["i"] > 0) & (data["size"] > 0)]
# Log size, and g-r color, will be more useful:
data['log_size'] = np.log10(data['size'])
data['g-r_color'] = data['g'] - data['r']
# Drop the things we're not so interested in:
del data['u'], data['g'], data['r'], data['size']
data.head()
# Get ready to plot:
pd.set_option('display.max_columns', None)
# !pip install --upgrade seaborn
import seaborn as sns
sns.set()
def plot_everything(data,colorizer,vmin=0.0,vmax=10.0):
# Truncate the color map to retain contrast between faint objects.
norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)
cmap = matplotlib.cm.jet
m = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
plot = pd.scatter_matrix(data, alpha=0.2,figsize=[15,15],color=m.to_rgba(data[colorizer]))
return
plot_everything(data,'g-r_color',vmin=-1.0, vmax=3.0)
"""
Explanation: Visualizing Data in N-dimensions
This is, in general, difficult.
Looking at all possible 1 and 2-dimensional histograms/scatter plots helps a lot.
Color coding can bring in a 3rd dimension (and even a 4th). Interactive plots and movies are also well worth thinking about.
<br>
Here we'll follow a multi-dimensional visualization example due to Josh Bloom at UC Berkeley:
End of explanation
"""
zoom = data.copy()
del zoom['ra'],zoom['dec'],zoom['g-r_color']
plot_everything(zoom,'i',vmin=15.0, vmax=21.5)
"""
Explanation: Size-magnitude
Let's zoom in and look at the objects' (log) sizes and magnitudes.
End of explanation
"""
|
aschaffn/phys202-2015-work | assignments/assignment05/InteractEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 2
Imports
End of explanation
"""
def plot_sine1(a,b):
x = np.arange(0, 4*np.pi,.01)
plt.plot(x, np.sin(a*x + b))
plt.xlim(0,4*np.pi)
plt.ylim(-1.1, 1.1)
plt.xlabel("x")
plt.xticks(np.arange(0,5)*np.pi, ['0','$\pi$', '$2\pi$', '$3\pi$', '$4\pi$'])
plt.ylabel("$f(x)$")
plt.title("$f(x) = \sin(ax+b)$")
# there's got to be a more slick way to paste labels
plot_sine1(5, 3.4)
"""
Explanation: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
"""
interact(plot_sine1, a = (0.,5.,.1), b=(-5.,5.,.1));
assert True # leave this for grading the plot_sine1 exercise
"""
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
"""
def plot_sine2(a, b, style='b-'):
x = np.arange(0, 4*np.pi,.01)
plt.plot(x, np.sin(a*x + b),style)
plt.xlim(0,4*np.pi)
plt.ylim(-1.1, 1.1)
plt.xlabel("x")
plt.xticks(np.arange(0,5)*np.pi, ['0','$\pi$', '$2\pi$', '$3\pi$', '$4\pi$'])
plt.ylabel("$f(x)$")
plt.title("$f(x) = \sin(ax+b)$")
plot_sine2(4.0, -1.0, 'r--')
mem = {'x':"apple", 'y':"peach"}
mem
"""
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
"""
interact(plot_sine2, a = (0.,5.,.1), b=(-5.,5.,.1), \
style = {"dotted blue line":'b.', "black circles":'ko', "red triangles":"r2"});
assert True # leave this for grading the plot_sine2 exercise
"""
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles.
End of explanation
"""
|
letsgoexploring/teaching | winter2017/econ129/python/Econ129_Class_03_Complete.ipynb | mit | # Create a variable that stores the string 'apple'
a = 'apple'
# Create a copy of a with the ps removed and reassign the value of a
a = a.replace('p','')
print(a)
"""
Explanation: Class 3: NumPy (and a quick string example)
Brief introduction to the NumPy module.
Preliminary example
I recently found myself needing to copy and paste names and email addresses from an email header. I required the names and emails to be formatted like this:
Name 1 Email 1
Name 2 Email 2
Name 3 Email 3
But what I had was this:
"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>
Sure, I could manually go through and delete the characters that aren't required. The manual approach would be fine for a small list, but the exercise would quickly become obnoxious as the list of names grows.
Python is great for modifying strings. The string method that we want to use is replace(). replace() has two required arguments: old and new. old is the substring that is to be replaced and new is what replaces the original substring. The replace() method does not change the value of the original string, but returns a new string.
For example, suppose that we want to remove every 'p' from the string 'apple'.
End of explanation
"""
# Create a variable that stores the string 'apple'
a = 'apple'
# Create a copy of a with the ps, l, and e removed and reassign the value of a
a = a.replace('p','').replace('l','').replace('e','')
print(a)
"""
Explanation: You can apply the replace() method multiple times:
End of explanation
"""
# Original character string
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'
# Remove <, >, and " from string and overwrite the original variable
string = string.replace('<','').replace('>','').replace('"','')
# Create a new variable called string_formatted with the commas replaced by the new line character '\n'
string_formatted = string.replace(', ','\n')
# Print string_formatted
print(string_formatted)
"""
Explanation: Now we have the tools to solve the email problem.
End of explanation
"""
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'
string = string.replace('<','').replace('>','').replace('"','').replace(',','')
for s in string.split():
if '@' in s:
print(s)
"""
Explanation: A related problem might be to extract only the email address from the original string. To do this, we can use the replace() method to remove the '<', '>', and ',' characters. Then we use the split() method to break the string apart at the spaces. Then we loop over the resulting list of strings and take only the strings with '@' characters in them.
End of explanation
"""
import numpy as np
"""
Explanation: Numpy
NumPy is a powerful Python module for scientific computing. Among other things, NumPy defines an N-dimensional array object that is especially convenient to use for plotting functions and for simulating and storing time series data. NumPy also defines many useful mathematical functions like, for example, the sine, cosine, and exponential functions and has excellent functions for probability and statistics including random number generators, and many cumulative density functions and probability density functions.
Importing NumPy
The standard way to import NumPy is under the abbreviated namespace np. This is for the sake of brevity.
End of explanation
"""
# Create a variable called a1 equal to a numpy array containing the numbers 1 through 5
a1 = np.array([1,2,3,4,5])
print(a1)
# Find the type of a1
print(type(a1))
# find the shape of a1
print(np.shape(a1))
# Use ndim to find the rank or number of dimensions of a1
print(np.ndim(a1))
# Create a variable called a2 equal to a 2-dimensionl numpy array containing the numbers 1 through 4
a2 = np.array([[1,2],[3,4]])
print(a2)
# find the shape of a2
print(np.shape(a2))
# Use ndim to find the rank or number of dimensions of a2
print(np.ndim(a2))
# Create a variable called a3 equal to an empty numpy array
a3 = np.array([])
print(a3)
# find the shape of a3
print(np.shape(a3))
# Use ndim to find the rank or number of dimensions of a3
print(np.ndim(a3))
"""
Explanation: NumPy arrays
A NumPy ndarray is a homogeneous multidimensional array. Here, homogeneous means that all of the elements of the array have the same type. An ndarray is a table of numbers (like a matrix, but with possibly more dimensions) indexed by a tuple of positive integers. The dimensions of NumPy arrays are called axes and the number of axes is called the rank. For this course, we will work almost exclusively with 1-dimensional arrays that are effectively vectors. Occasionally, we might run into a 2-dimensional array.
Basics
The most straightforward way to create a NumPy array is to call the array() function which takes as an argument a list. For example:
End of explanation
"""
# Create a variable called b that is equal to a numpy array containing the numbers 1 through 5
b = np.arange(1,6,1)
print(b)
# Create a variable called c that is equal to a numpy array containing the numbers 0 through 10
c = np.arange(11)
print(c)
"""
Explanation: Special functions for creating arrays
Numpy has several built-in functions that can assist you in creating certain types of arrays: arange(), zeros(), and ones(). Of these, arange() is probably the most useful because it allows you to create an array of numbers by specifying the initial value in the array, the maximum value in the array, and a step size between elements. arange() has three arguments: start, stop, and step:
arange([start,] stop[, step,])
The stop argument is required. The default for start is 0 and the default for step is 1. Note that the values in the created array will stop one increment below stop. That is, if arange() is called with stop equal to 9 and step equal to 0.5, then the last value in the returned array will be 8.5.
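arange() also accepts non-integer steps, which would be tedious to write out as a literal list. A small sketch:

```python
import numpy as np

# arange with a float step: starts at 0, stops below 2
print(np.arange(0, 2, 0.5))  # [0.  0.5 1.  1.5]
```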
End of explanation
"""
# Construct a 1x5 array of zeros
print(np.zeros(5))
# Construct a 2x2 array of ones
print(np.ones([2,2]))
"""
Explanation: The zeros() and ones() functions take as an argument the desired shape of the array to be returned and fill that array with either zeros or ones.
End of explanation
"""
# Define two 1-dimensional arrays
A = np.array([2,4,6])
B = np.array([3,2,1])
C = np.array([-1,3,2,-4])
# Multiply A by a constant
print(3*A)
# Exponentiate A
print(A**2)
# Add A and B together
print(A+B)
# Exponentiate A with B
print(A**B)
# Add A and C together
print(A+C)
"""
Explanation: Math with NumPy arrays
A nice aspect of NumPy arrays is that they are optimized for mathematical operations. The standard Python arithmetic operators +, -, *, /, and ** operate element-wise on NumPy arrays, as the following examples indicate.
End of explanation
"""
# Compute the sine of the values in A
print(np.sin(A))
"""
Explanation: The error in the preceding example arises because addition is element-wise and A and C don't have the same shape.
End of explanation
"""
# Use a for loop with a NumPy array to print the numbers 0 through 4
for x in np.arange(5):
print(x)
"""
Explanation: Iterating through Numpy arrays
NumPy arrays are iterable objects just like lists, strings, tuples, and dictionaries, which means that you can use for loops to iterate through their elements.
End of explanation
"""
# Set N equal to the number of terms to sum
N = 1000
# Initialize a variable called summation equal to 0
summation = 0
# loop over the numbers 1 through N
for n in np.arange(1,N+1):
summation = summation + 1/n**2
# Print the approximation and the exact solution
print('approx:',summation)
print('exact: ',np.pi**2/6)
"""
Explanation: Example: Basel problem
One of my favorite math equations is:
\begin{align}
\sum_{n=1}^{\infty} \frac{1}{n^2} & = \frac{\pi^2}{6}
\end{align}
We can use an iteration through a NumPy array to approximate the left-hand side and verify the validity of the expression.
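The explicit loop above can also be written without iteration, using NumPy's element-wise arithmetic on the whole array at once. A sketch of the vectorized equivalent:

```python
import numpy as np

N = 1000
n = np.arange(1, N + 1)
# square and reciprocate element-wise, then sum in one call
approx = np.sum(1 / n**2)
print('approx:', approx)
print('exact: ', np.pi**2 / 6)
```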
End of explanation
"""
|
mhdella/scipy_2015_sklearn_tutorial | notebooks/05.3 In Depth - Trees and Forests.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Estimators In Depth: Trees and Forests
End of explanation
"""
from figures import make_dataset
x, y = make_dataset()
X = x.reshape(-1, 1)
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth=5)
reg.fit(X, y)
X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))
y_fit_1 = reg.predict(X_fit)
plt.plot(X_fit.ravel(), y_fit_1, color='blue', label="prediction")
plt.plot(X.ravel(), y, '.k', label="training data")
plt.legend(loc="best")
"""
Explanation: Here we'll explore a class of algorithms based on decision trees.
Decision trees at their root are extremely intuitive. They
encode a series of "if" and "else" choices, similar to how a person might make a decision.
However, which questions to ask, and how to proceed for each answer is entirely learned from the data.
For example, if you wanted to create a guide to identifying an animal found in nature, you
might ask the following series of questions:
Is the animal bigger or smaller than a meter long?
bigger: does the animal have horns?
yes: are the horns longer than ten centimeters?
no: is the animal wearing a collar?
smaller: does the animal have two or four legs?
two: does the animal have wings?
four: does the animal have a bushy tail?
and so on. This binary splitting of questions is the essence of a decision tree.
One of the main benefits of tree-based models is that they require little preprocessing of the data.
They can work with variables of different types (continuous and discrete) and are invariant to scaling of the features.
Another benefit is that tree-based models are what is called "non-parametric", which means they don't have a fixed set of parameters to learn. Instead, a tree model can become more and more flexible if given more data.
In other words, the number of free parameters grows with the number of samples and is not fixed, as for example in linear models.
Decision Tree Regression
A decision tree is a simple binary classification tree that is
similar to nearest neighbor classification. It can be used as follows:
End of explanation
"""
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from figures import plot_2d_separator
X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
plot_2d_separator(clf, X, fill=True)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=60, alpha=.7)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=60)
"""
Explanation: A single decision tree allows us to estimate the signal in a non-parametric way,
but clearly has some issues. In some regions, the model shows high bias and
under-fits the data (seen in the long flat lines which don't follow the contours of the data),
while in other regions the model shows high variance and over-fits the data
(reflected in the narrow spikes which are influenced by noise in single points).
Decision Tree Classification
Decision tree classification works very similarly, by assigning all points within a leaf the majority class in that leaf:
End of explanation
"""
from figures import plot_tree_interactive
plot_tree_interactive()
"""
Explanation: There are many parameters that control the complexity of a tree, but the one that might be easiest to understand is the maximum depth. This limits how finely the tree can partition the input space, or how many "if-else" questions can be asked before deciding which class a sample lies in.
This parameter is important to tune for trees and tree-based models. The interactive plot below shows what underfitting and overfitting look like for this model. Having a max_depth of one is clearly an underfit model, while a depth of seven or eight clearly overfits. The maximum depth a tree can be grown at for this dataset is 8, at which point each leaf only contains samples from a single class. This is known as all leaves being "pure".
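The same effect can be seen numerically. A minimal sketch (on synthetic blob data, with import paths assuming a recent scikit-learn) comparing training and test accuracy as max_depth grows:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for depth in [1, 3, 5, None]:  # None grows the tree until all leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(depth, tree.score(X_train, y_train), tree.score(X_test, y_test))
```

Training accuracy keeps climbing as the depth increases, while test accuracy levels off or drops: the numerical signature of overfitting.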
End of explanation
"""
from figures import plot_forest_interactive
plot_forest_interactive()
"""
Explanation: Decision trees are fast to train, easy to understand, and often lead to interpretable models. However, single trees often tend to overfit the training data. Playing with the slider above you might notice that the model starts to overfit even before it has a good separation between the classes.
Therefore, in practice it is more common to combine multiple trees to produce models that generalize better. The most common methods for combining trees are random forests and gradient boosted trees.
Random Forests
Random forests are simply many trees, built on different random subsets of the data, and using different random subsets of the features for each split.
This makes the trees different from each other, and makes them overfit to different aspects. Then, their predictions are averaged, leading to a smoother estimate that overfits less.
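As a concrete sketch of that averaging effect (again on synthetic blob data; import paths assume a recent scikit-learn), a forest typically generalizes at least as well as one fully grown tree:

```python
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("single tree:", tree.score(X_test, y_test))
print("forest:    ", forest.score(X_test, y_test))
```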
End of explanation
"""
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200)
parameters = {'max_features':['sqrt', 'log2', 10],
              'max_depth':[5, 7, 9]}
clf_grid = GridSearchCV(rf, parameters, n_jobs=-1)
clf_grid.fit(X_train, y_train)
clf_grid.score(X_train, y_train)
clf_grid.score(X_test, y_test)
"""
Explanation: Selecting the Optimal Estimator via Cross-Validation
End of explanation
"""
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(n_estimators=100, max_depth=5, learning_rate=.2)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
"""
Explanation: Another option: Gradient Boosting
Another Ensemble method that can be useful is Boosting: here, rather than
looking at 200 (say) parallel estimators, We construct a chain of 200 estimators
which iteratively refine the results of the previous estimator.
The idea is that by sequentially applying very fast, simple models, we can get a
total model error which is better than any of the individual pieces.
End of explanation
"""
|
MegaShow/college-programming | Homework/Principles of Artificial Neural Networks/Week 5 CNN 1/Week5.ipynb | mit | import numpy as np
def convolution(img, kernel, padding=1, stride=1):
"""
img: input image with one channel
kernel: convolution kernel
"""
h, w = img.shape
kernel_size = kernel.shape[0]
# height and width of image with padding
ph, pw = h + 2 * padding, w + 2 * padding
padding_img = np.zeros((ph, pw))
padding_img[padding:h + padding, padding:w + padding] = img
# height and width of output image
result_h = (h + 2 * padding - kernel_size) // stride + 1
result_w = (w + 2 * padding - kernel_size) // stride + 1
result = np.zeros((result_h, result_w))
# convolution
x, y = 0, 0
for i in range(0, ph - kernel_size + 1, stride):
for j in range(0, pw - kernel_size + 1, stride):
roi = padding_img[i:i+kernel_size, j:j+kernel_size]
result[x, y] = np.sum(roi * kernel)
y += 1
y = 0
x += 1
return result
"""
Explanation: Week 5: CNN-1
Lab preparation
Get familiar with the Python language and the basics of NumPy and torch
Get familiar with the training process and optimization methods of neural networks
Building on the lecture material, understand convolution and convolutional neural networks (CNNs) and how they work
Know the basic structure of common CNN models, such as AlexNet, VGG, and ResNet
Lab procedure
1. Convolution and convolutional layers
Implementing convolution with NumPy
Convolutional and pooling layers in PyTorch
2. CNN
Implement and train a basic CNN
ResNet
VGG
Convolution
As we saw in the lab session, to convolve a 2-D image we slide the kernel across the image, multiplying and accumulating at each position (as shown in the figure above).
The convolution function below implements convolution for a 2-D single-channel image. It assumes the kernel is square; padding pads the four edges of the image with zeros, and stride is the step size of the sliding kernel window.
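As a quick self-contained sanity check of the sliding multiply-add idea, each output element is just the sum of an element-wise product between the kernel and the image patch under it:

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
kernel = np.array([[1, 0],
                   [0, 1]])  # picks out the main diagonal of each 2x2 patch

# no padding, stride 1: a 2x2 kernel on a 3x3 image gives a 2x2 output
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        out[i, j] = np.sum(img[i:i+2, j:j+2] * kernel)
print(out)  # [[6, 8], [12, 14]]; e.g. top-left: 1*1 + 5*1 = 6
```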
End of explanation
"""
from PIL import Image
import matplotlib.pyplot as plt
img = Image.open('pics/lena.jpg').convert('L')
plt.imshow(img, cmap='gray')
# a Laplace kernel
laplace_kernel = np.array([[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1]])
# Gauss kernel with kernel_size=3
gauss_kernel3 = (1/ 16) * np.array([[1, 2, 1],
[2, 4, 2],
[1, 2, 1]])
# Gauss kernel with kernel_size=5
gauss_kernel5 = (1/ 84) * np.array([[1, 2, 3, 2, 1],
[2, 5, 6, 5, 2],
[3, 6, 8, 6, 3],
[2, 5, 6, 5, 2],
[1, 2, 3, 2, 1]])
fig, ax = plt.subplots(1, 3, figsize=(12, 8))
laplace_img = convolution(np.array(img), laplace_kernel, padding=1, stride=1)
ax[0].imshow(Image.fromarray(laplace_img), cmap='gray')
ax[0].set_title('laplace')
gauss3_img = convolution(np.array(img), gauss_kernel3, padding=1, stride=1)
ax[1].imshow(Image.fromarray(gauss3_img), cmap='gray')
ax[1].set_title('gauss kernel_size=3')
gauss5_img = convolution(np.array(img), gauss_kernel5, padding=2, stride=1)
ax[2].imshow(Image.fromarray(gauss5_img), cmap='gray')
ax[2].set_title('gauss kernel_size=5')
"""
Explanation: Let's run a quick test of our convolution function on an image, filtering the image below with a Laplace kernel and with 3*3 and 5*5 Gaussian kernels.
End of explanation
"""
def myconv2d(features, weights, padding=0, stride=1):
"""
features: input, in_channel * h * w
weights: kernel, out_channel * in_channel * kernel_size * kernel_size
return output with out_channel
"""
in_channel, h, w = features.shape
out_channel, _, kernel_size, _ = weights.shape
# height and width of output image
output_h = (h + 2 * padding - kernel_size) // stride + 1
output_w = (w + 2 * padding - kernel_size) // stride + 1
output = np.zeros((out_channel, output_h, output_w))
# call convolution out_channel * in_channel times
for i in range(out_channel):
weight = weights[i]
for j in range(in_channel):
feature_map = features[j]
kernel = weight[j]
output[i] += convolution(feature_map, kernel, padding, stride)
return output
"""
Explanation: Above we implemented convolution with a single input channel and a single output channel. In a CNN, the convolutions used generally have multiple input and output channels; to implement multi-channel convolution, we simply call the convolution function above in a loop.
End of explanation
"""
input_data=[
[[0,0,2,2,0,1],
[0,2,2,0,0,2],
[1,1,0,2,0,0],
[2,2,1,1,0,0],
[2,0,1,2,0,1],
[2,0,2,1,0,1]],
[[2,0,2,1,1,1],
[0,1,0,0,2,2],
[1,0,0,2,1,0],
[1,1,1,1,1,1],
[1,0,1,1,1,2],
[2,1,2,1,0,2]]
]
weights_data=[[
[[ 0, 1, 0],
[ 1, 1, 1],
[ 0, 1, 0]],
[[-1, -1, -1],
[ -1, 8, -1],
[ -1, -1, -1]]
]]
# numpy array
input_data = np.array(input_data)
weights_data = np.array(weights_data)
# show the result
print(myconv2d(input_data, weights_data, padding=3, stride=3))
"""
Explanation: Next, let's test the myconv2d function we just wrote.
End of explanation
"""
import torch
import torch.nn.functional as F
input_tensor = torch.tensor(input_data).unsqueeze(0).float()
F.conv2d(input_tensor, weight=torch.tensor(weights_data).float(), bias=None, stride=3, padding=3)
"""
Explanation: PyTorch already provides implementations of convolution and convolutional layers. Given the same input, weights, stride, and padding, PyTorch's convolution should produce the same result as ours, which we can verify with the code below.
End of explanation
"""
def convolutionV2(img, kernel, padding=(0,0), stride=(1,1)):
h, w = img.shape
kh, kw = kernel.shape
# height and width of image with padding
ph, pw = h + 2 * padding[0], w + 2 * padding[1]
padding_img = np.zeros((ph, pw))
padding_img[padding[0]:h + padding[0], padding[1]:w + padding[1]] = img
# height and width of output image
result_h = (h + 2 * padding[0] - kh) // stride[0] + 1
result_w = (w + 2 * padding[1] - kw) // stride[1] + 1
result = np.zeros((result_h, result_w))
# convolution
x, y = 0, 0
for i in range(0, ph - kh + 1, stride[0]):
for j in range(0, pw - kw + 1, stride[1]):
roi = padding_img[i:i+kh, j:j+kw]
result[x, y] = np.sum(roi * kernel)
y += 1
y = 0
x += 1
return result
# test input
test_input = np.array([[1, 1, 2, 1],
[0, 1, 0, 2],
[2, 2, 0, 2],
[2, 2, 2, 1],
[2, 3, 2, 3]])
test_kernel = np.array([[1, 0], [0, 1], [0, 0]])
# output
print(convolutionV2(test_input, test_kernel, padding=(1, 0), stride=(1, 1)))
print(convolutionV2(test_input, test_kernel, padding=(2, 1), stride=(1, 2)))
"""
Explanation: Assignment:
The convolution implementation above only handles kernels whose height and width are equal, and scalar padding and stride. Write a convolutionV2 that also accepts kernels whose height and width differ, with padding and stride given as two-element tuples (the padding and stride along the two dimensions), and test your convolutionV2 with the test input below.
End of explanation
"""
import torch
import torch.nn as nn
x = torch.randn(1, 1, 32, 32)
conv_layer = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3, stride=1, padding=0)
y = conv_layer(x)
print(x.shape)
print(y.shape)
"""
Explanation: Convolutional layers
PyTorch provides convolutional and pooling layers for us to use.
The convolutional layer behaves like the convolution above, and the pooling layer works in a similar sliding-window fashion; the main purpose of a pooling layer is to shrink the size of the feature maps. Common variants are MaxPool (take the maximum over the sliding window) and AvgPool (take the mean over the sliding window).
End of explanation
"""
x = torch.randn(1, 1, 32, 32)
conv_layer = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=5, stride=2, padding=2)
y = conv_layer(x)
print(x.shape)
print(y.shape)
# input N * C * H * W
x = torch.randn(1, 1, 4, 4)
# maxpool
maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
y = maxpool(x)
# avgpool
avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
z = avgpool(x)
print(x)
print(y)
print(z)
"""
Explanation: Questions:
1. What are the sizes of the input and output tensors? How many parameters does this convolutional layer have?
2. If kernel_size=5, stride=2, padding=2, what is the size of the output tensor? Change the parameters in the code above, run the experiment, and answer.
3. If the input tensor size is N*C*H*W, and the convolutional layer on line 5 has in_channels=C, out_channels=Cout, kernel_size=k, stride=s, padding=p, what is the output tensor size?
Answers:
1. The input tensor has size $1 * 1 * 32 * 32$ and the output tensor has size $1 * 3 * 30 * 30$, which tells us each kernel has shape $1 * 3 * 3$ and there are 3 of them, so the layer has $3 * (1 * 3 * 3) = 27$ weights plus 3 biases, 30 parameters in total.
2. The output tensor has size $1 * 3 * 16 * 16$, verified in the code below.
3. The output tensor size is $N * C_{out} * ((H+2p-k)//s+1) * ((W+2p-k)//s+1)$.
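The output-size formula in answer 3 can be sanity-checked with a small helper (plain Python; the function name is illustrative):

```python
def conv_out_size(size, kernel_size, stride, padding):
    # output spatial size of a convolution along one dimension
    return (size + 2 * padding - kernel_size) // stride + 1

# question 1: 32x32 input, kernel_size=3, stride=1, padding=0 -> 30
print(conv_out_size(32, 3, 1, 0))  # 30
# question 2: kernel_size=5, stride=2, padding=2 -> 16
print(conv_out_size(32, 5, 2, 2))  # 16
```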
End of explanation
"""
import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
class MyCNN(nn.Module):
def __init__(self, image_size, num_classes):
super(MyCNN, self).__init__()
# conv1: Conv2d -> BN -> ReLU -> MaxPool
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
# conv2: Conv2d -> BN -> ReLU -> MaxPool
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
# fully connected layer
self.fc = nn.Linear(32 * (image_size // 4) * (image_size // 4), num_classes)
def forward(self, x):
"""
input: N * 3 * image_size * image_size
output: N * num_classes
"""
x = self.conv1(x)
x = self.conv2(x)
# view(x.size(0), -1): change tensor size from (N ,H , W) to (N, H*W)
x = x.view(x.size(0), -1)
output = self.fc(x)
return output
"""
Explanation: GPU
We can choose to train our model on a CPU or on a GPU.
The lab provides a 4-GPU server. To check how each GPU device is being used, open the Jupyter home page on the server, click New -> Terminal, and run nvidia-smi in the terminal to see the usage of each card, as in the figure below.
In that figure, the left column shows the device ids (0, 1, 2, 3), fan speed, temperature, performance state, and power consumption; the middle column shows the bus-id and memory usage; the right column shows GPU utilization and related information. Note the memory usage in the middle column: before training a model, we can pick a GPU device based on how much memory is free.
In this lab, replacing the 0 in torch.device('cuda:0') with the desired device id selects the corresponding GPU to run the program on.
CNN (convolutional neural networks)
A simple CNN
Next, let's build a simple CNN classifier.
The overall pipeline of this CNN is
convolution (Conv2d) -> BN (batch normalization) -> activation (ReLU) -> pooling (MaxPooling) ->
convolution (Conv2d) -> BN (batch normalization) -> activation (ReLU) -> pooling (MaxPooling) ->
fully connected layer (Linear) -> output.
End of explanation
"""
def train(model, train_loader, loss_func, optimizer, device):
"""
train model using loss_fn and optimizer in an epoch.
model: CNN networks
train_loader: a Dataloader object with training data
loss_func: loss function
device: train on cpu or gpu device
"""
total_loss = 0
# train the model using minibatch
for i, (images, targets) in enumerate(train_loader):
images = images.to(device)
targets = targets.to(device)
# forward
outputs = model(images)
loss = loss_func(outputs, targets)
# backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
total_loss += loss.item()
# every 100 iteration, print loss
if (i + 1) % 100 == 0:
print ("Step [{}/{}] Train Loss: {:.4f}"
.format(i+1, len(train_loader), loss.item()))
return total_loss / len(train_loader)
def evaluate(model, val_loader, device):
"""
model: CNN networks
val_loader: a Dataloader object with validation data
device: evaluate on cpu or gpu device
return classification accuracy of the model on val dataset
"""
# evaluate the model
model.eval()
# context-manager that disabled gradient computation
with torch.no_grad():
correct = 0
total = 0
for i, (images, targets) in enumerate(val_loader):
# device: cpu or gpu
images = images.to(device)
targets = targets.to(device)
outputs = model(images)
# return the maximum value of each row of the input tensor in the
# given dimension dim, the second return vale is the index location
# of each maxium value found(argmax)
_, predicted = torch.max(outputs.data, dim=1)
correct += (predicted == targets).sum().item()
total += targets.size(0)
accuracy = correct / total
print('Accuracy on Test Set: {:.4f} %'.format(100 * accuracy))
return accuracy
def save_model(model, save_path):
# save model
torch.save(model.state_dict(), save_path)
import matplotlib.pyplot as plt
def show_curve(ys, title):
"""
plot curlve for Loss and Accuacy
Args:
ys: loss or acc list
title: loss or accuracy
"""
x = np.array(range(len(ys)))
y = np.array(ys)
plt.plot(x, y, c='b')
plt.axis()
plt.title('{} curve'.format(title))
plt.xlabel('epoch')
plt.ylabel('{}'.format(title))
plt.show()
"""
Explanation: With that, a simple CNN model is written. As in the earlier class material, we now need code to train and evaluate the network.
End of explanation
"""
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
# mean and std of cifar10 in 3 channels
cifar10_mean = (0.49, 0.48, 0.45)
cifar10_std = (0.25, 0.24, 0.26)
# define transform operations of train dataset
train_transform = transforms.Compose([
# data augmentation
transforms.Pad(4),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32),
transforms.ToTensor(),
transforms.Normalize(cifar10_mean, cifar10_std)])
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(cifar10_mean, cifar10_std)])
# torchvision.datasets provide CIFAR-10 dataset for classification
train_dataset = torchvision.datasets.CIFAR10(root='./data/',
train=True,
transform=train_transform,
download=True)
test_dataset = torchvision.datasets.CIFAR10(root='./data/',
train=False,
transform=test_transform)
# Data loader: provides single- or multi-process iterators over the dataset.
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=100,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=100,
shuffle=False)
"""
Explanation: Preparing the data and training the model
Next, we train our CNN model on the CIFAR-10 dataset.
CIFAR-10: the dataset contains 60000 color images of size 32*32, divided into 10 classes with 6000 images per class. 50000 of them are used for training, organized into 5 training batches of 10000 images each; the remaining 10000 are used for testing and form a single batch. In this lab we train our model on CIFAR-10, which we can load directly with torchvision.datasets.CIFAR10.
End of explanation
"""
def fit(model, num_epochs, optimizer, device):
"""
train and evaluate an classifier num_epochs times.
We use optimizer and cross entropy loss to train the model.
Args:
model: CNN network
num_epochs: the number of training epochs
optimizer: optimize the loss function
"""
# loss and optimizer
loss_func = nn.CrossEntropyLoss()
model.to(device)
loss_func.to(device)
# log train loss and test accuracy
losses = []
accs = []
for epoch in range(num_epochs):
print('Epoch {}/{}:'.format(epoch + 1, num_epochs))
# train step
loss = train(model, train_loader, loss_func, optimizer, device)
losses.append(loss)
# evaluate step
accuracy = evaluate(model, test_loader, device)
accs.append(accuracy)
# show curve
show_curve(losses, "train loss")
show_curve(accs, "test accuracy")
# hyper parameters
num_epochs = 10
lr = 0.01
image_size = 32
num_classes = 10
# declare and define an objet of MyCNN
mycnn = MyCNN(image_size, num_classes)
print(mycnn)
# Device configuration, cpu, cuda:0/1/2/3 available
device = torch.device('cuda:0')
optimizer = torch.optim.Adam(mycnn.parameters(), lr=lr)
# start training on cifar10 dataset
fit(mycnn, num_epochs, optimizer, device)
"""
Explanation: During training, we use the cross-entropy loss function and the Adam optimizer to train our classifier network.
Read the code below and, at the To-Do markers, fill in the forward- and backward-propagation code to train the classification network, using what you learned earlier.
End of explanation
"""
# 3x3 convolution
def conv3x3(in_channels, out_channels, stride=1):
return nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
# Residual block
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, downsample=None):
super(ResidualBlock, self).__init__()
self.conv1 = conv3x3(in_channels, out_channels, stride)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(out_channels, out_channels)
self.bn2 = nn.BatchNorm2d(out_channels)
self.downsample = downsample
def forward(self, x):
"""
Defines the computation performed at every call.
x: N * C * H * W
"""
residual = x
# if the size of input x changes, using downsample to change the size of residual
if self.downsample:
residual = self.downsample(x)
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out += residual
out = self.relu(out)
return out
"""
Explanation: ResNet
Next, let's implement a more complex CNN.
ResNet is also called a residual network. The ResNet architecture uses two kinds of residual blocks: one chains two 3*3 convolutions together as a residual block, and the other chains three convolutions of sizes 1*1, 3*3, and 1*1. They are shown in the figure below.
Taking the left block as an example, we implement a ResidualBlock. Note that the two convolutions may make the output tensor's size differ from the input's; so that the two can be added, when the sizes differ we use a downsample module (passed in from outside) to keep the residual's size the same.
Now, fill in the code at the To-Do marker to complete the forward function below, finishing the ResidualBlock implementation, and run it.
End of explanation
"""
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=10):
"""
block: ResidualBlock or other block
layers: a list with 3 positive num.
"""
super(ResNet, self).__init__()
self.in_channels = 16
self.conv = conv3x3(3, 16)
self.bn = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# layer1: image size 32
self.layer1 = self.make_layer(block, 16, num_blocks=layers[0])
# layer2: image size 32 -> 16
self.layer2 = self.make_layer(block, 32, num_blocks=layers[1], stride=2)
# layer3: image size 16 -> 8
self.layer3 = self.make_layer(block, 64, num_blocks=layers[2], stride=2)
# global avg pool: image size 8 -> 1
self.avg_pool = nn.AvgPool2d(8)
self.fc = nn.Linear(64, num_classes)
def make_layer(self, block, out_channels, num_blocks, stride=1):
"""
make a layer with num_blocks blocks.
"""
downsample = None
if (stride != 1) or (self.in_channels != out_channels):
# use Conv2d with stride to downsample
downsample = nn.Sequential(
conv3x3(self.in_channels, out_channels, stride=stride),
nn.BatchNorm2d(out_channels))
# first block with downsample
layers = []
layers.append(block(self.in_channels, out_channels, stride, downsample))
self.in_channels = out_channels
# add num_blocks - 1 blocks
for i in range(1, num_blocks):
layers.append(block(out_channels, out_channels))
# return a layer containing layers
return nn.Sequential(*layers)
def forward(self, x):
out = self.conv(x)
out = self.bn(out)
out = self.relu(out)
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.avg_pool(out)
# view: here change output size from 4 dimensions to 2 dimensions
out = out.view(out.size(0), -1)
out = self.fc(out)
return out
resnet = ResNet(ResidualBlock, [2, 2, 2])
print(resnet)
"""
Explanation: Below is a ResNet implementation for the CIFAR-10 dataset.
The input first passes through a conv3x3, then through 3 layers that each contain several residual blocks (a layer may include multiple ResidualBlocks, as determined by the numbers in the layers list that is passed in), then through a global average pooling layer, and finally through a linear layer.
End of explanation
"""
# Hyper-parameters
num_epochs = 10
lr = 0.001
# Device configuration
device = torch.device('cuda:0')
# optimizer
optimizer = torch.optim.Adam(resnet.parameters(), lr=lr)
fit(resnet, num_epochs, optimizer, device)
"""
Explanation: Train the implemented ResNet with the fit function and observe how the results change.
End of explanation
"""
resnet = ResNet(ResidualBlock, [2, 2, 2])
num_epochs = 10
lr = 0.0009
device = torch.device('cuda:0')
optimizer = torch.optim.Adam(resnet.parameters(), lr=lr)
fit(resnet, num_epochs, optimizer, device)
"""
Explanation: Assignment
Try changing the learning rate lr and using the SGD or Adam optimizer; train for 10 epochs and improve the accuracy of ResNet on the test set.
End of explanation
"""
from torch import nn
class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        # The output of AdaptiveAvgPool2d is of size H x W, for any input size.
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc1 = nn.Linear(channel, channel // reduction)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(channel // reduction, channel)
        self.sigmoid = nn.Sigmoid()
    def forward(self, x):
        b, c, _, _ = x.shape
        out = self.avg_pool(x).view(b, c)
        out = self.fc1(out)
        # ReLU between the two FC layers, as specified in the assignment
        out = self.relu(out)
        out = self.fc2(out)
        out = self.sigmoid(out).view(b, c, 1, 1)
        return out * x
class SEResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, downsample=None, reduction=16):
super(SEResidualBlock, self).__init__()
self.conv1 = conv3x3(in_channels, out_channels, stride)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(out_channels, out_channels)
self.bn2 = nn.BatchNorm2d(out_channels)
self.se = SELayer(out_channels, reduction)
self.downsample = downsample
def forward(self, x):
residual = x
if self.downsample:
residual = self.downsample(x)
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out += residual
out = self.relu(out)
return out
se_resnet = ResNet(SEResidualBlock, [2, 2, 2])
print(se_resnet)
# Hyper-parameters
num_epochs = 10
lr = 0.001
# Device configuration
device = torch.device('cuda:0')
# optimizer
optimizer = torch.optim.Adam(se_resnet.parameters(), lr=lr)
fit(se_resnet, num_epochs, optimizer, device)
"""
Explanation: Assignment
The figure below shows an SE module embedded into a ResNet residual block.
Here, global pooling denotes a global pooling layer (pooling the input down to size 1*1), which turns a c*h*w input into a c*1*1 output. FC denotes a fully connected (linear) layer; a ReLU activation is used between the two FC layers, and a sigmoid activation follows them. Finally, the resulting c values are multiplied channel-wise with the original c*h*w input, giving a c*h*w output.
Fill in the code below to complete the SE-ResNet block implementation.
End of explanation
"""
import math
class VGG(nn.Module):
def __init__(self, cfg):
super(VGG, self).__init__()
self.features = self._make_layers(cfg)
# linear layer
self.classifier = nn.Linear(512, 10)
def forward(self, x):
out = self.features(x)
out = out.view(out.size(0), -1)
out = self.classifier(out)
return out
def _make_layers(self, cfg):
"""
cfg: a list define layers this layer contains
'M': MaxPool, number: Conv2d(out_channels=number) -> BN -> ReLU
"""
layers = []
in_channels = 3
for x in cfg:
if x == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
nn.BatchNorm2d(x),
nn.ReLU(inplace=True)]
in_channels = x
layers += [nn.AvgPool2d(kernel_size=1, stride=1)]
return nn.Sequential(*layers)
cfg = {
'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
vggnet = VGG(cfg['VGG11'])
print(vggnet)
# Hyper-parameters
num_epochs = 10
lr = 1e-3
# Device configuration
device = torch.device('cuda:0')
# optimizer
optimizer = torch.optim.Adam(vggnet.parameters(), lr=lr)
fit(vggnet, num_epochs, optimizer, device)
"""
Explanation: VGG
Next, let's read an implementation of the VGG network. VGGNet uses only 3*3 convolution kernels and 2*2 pooling kernels, and improves performance by continually deepening the network. VGG showed that increasing the depth of a convolutional network and using small kernels have a large effect on its final classification accuracy.
Below is a simplified VGG for training on CIFAR-10.
If you have time, read it and train it.
End of explanation
"""
|
obulpathi/datascience | scikit/titanic/notebooks/Section 1-1 - Filling-in Missing Values.ipynb | apache-2.0 | import pandas as pd
import numpy as np
df = pd.read_csv('../data/train.csv')
"""
Explanation: Section 1-1 - Filling-in Missing Values
In the previous section, we ended up with a smaller set of predictions because we chose to throw away rows with missing values. We build on this approach in this section by filling in the missing data with an educated guess.
We will only provide detailed descriptions on new concepts introduced.
Pandas - Extracting data
End of explanation
"""
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
"""
Explanation: Pandas - Cleaning data
End of explanation
"""
df.info()
"""
Explanation: Similar to the previous section, we review the data type and value counts.
End of explanation
"""
age_mean = df['Age'].mean()
df['Age'] = df['Age'].fillna(age_mean)
"""
Explanation: There are a number of ways that we could fill in the NaN values of the column Age. For simplicity, we'll do so by taking the average, or mean, of values of each column. We'll review as to whether taking the median would be a better choice in a later section.
End of explanation
"""
mode_embarked = df['Embarked'].mode()[0]
df['Embarked'] = df['Embarked'].fillna(mode_embarked)
df['Gender'] = df['Sex'].map({'female': 0, 'male': 1}).astype(int)
df['Port'] = df['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
df = df.drop(['Sex', 'Embarked'], axis=1)
cols = df.columns.tolist()
cols = [cols[1]] + cols[0:1] + cols[2:]
df = df[cols]
"""
Explanation: Exercise
Write the code to replace the NaN values by the median, instead of the mean.
Taking the average does not make sense for the column Embarked, as it is a categorical value. Instead, we shall replace the NaN values by the mode, or most frequently occurring value.
End of explanation
"""
df.info()
"""
Explanation: We now review details of our training data.
End of explanation
"""
train_data = df.values
"""
Explanation: Hence we have preserved all the rows of our data set, and can proceed to create a numerical array for Scikit-learn.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators = 100)
model = model.fit(train_data[0:,2:],train_data[0:,0])
"""
Explanation: Scikit-learn - Training the model
End of explanation
"""
df_test = pd.read_csv('../data/test.csv')
"""
Explanation: Scikit-learn - Making predictions
End of explanation
"""
df_test.info()
df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1)
"""
Explanation: We now review what needs to be cleaned in the test data.
End of explanation
"""
df_test['Age'] = df_test['Age'].fillna(age_mean)
"""
Explanation: As per our previous approach, we fill in the NaN values in the column Age with the mean.
End of explanation
"""
fare_means = df.pivot_table('Fare', index='Pclass', aggfunc='mean')
fare_means
"""
Explanation: For the column Fare, however, it makes sense to fill in the NaN values with the mean by the column Pclass, or Passenger class.
End of explanation
"""
df_test['Fare'] = df_test[['Fare', 'Pclass']].apply(lambda x:
fare_means[x['Pclass']] if pd.isnull(x['Fare'])
else x['Fare'], axis=1)
"""
Explanation: Here we created a pivot table by calculating the mean of the column Fare by each Pclass, which we will use to fill in our NaN values.
End of explanation
"""
df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male': 1}).astype(int)
df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3})
df_test = df_test.drop(['Sex', 'Embarked'], axis=1)
test_data = df_test.values
output = model.predict(test_data[:,1:])
"""
Explanation: This is one of the more complicated lines of code we'll encounter, so let's unpack this.
First, we look at each of the pairs (Fare, Pclass) (i.e. lambda x). From this pair, we check if the Fare part is NaN (i.e. if pd.isnull(x['Fare'])). If Fare is NaN, we look at the Pclass value of that pair (i.e. x['Pclass']), and replace the NaN value with the mean fare of that class (i.e. fare_means[x['Pclass']]). If Fare is not NaN, then we keep it the same (i.e. else x['Fare']).
End of explanation
"""
result = np.c_[test_data[:,0].astype(int), output.astype(int)]
df_result = pd.DataFrame(result[:,0:2], columns=['PassengerId', 'Survived'])
df_result.to_csv('../results/titanic_1-1.csv', index=False)
"""
Explanation: Pandas - Preparing for submission
End of explanation
"""
df_result.shape
"""
Explanation: Our submission now has 418 predictions, and we can proceed to make our first leaderboard entry.
https://www.kaggle.com/c/titanic-gettingStarted/submissions/attach
End of explanation
"""
|
tpin3694/tpin3694.github.io | python/parallel_processing.ipynb | mit | from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
"""
Explanation: Title: Parallel Processing
Slug: parallel_processing
Summary: Lightweight Parallel Processing in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
This tutorial is inspired by Chris Kiehl's great post on multiprocessing.
Preliminaries
End of explanation
"""
# Create a list of some data
data = range(29999)
"""
Explanation: Create Some Data
End of explanation
"""
# Create a function that takes a data point
def some_function(datum):
# and returns the datum raised to the power of itself
return datum**datum
"""
Explanation: Create An Operation To Execute On The Data
End of explanation
"""
%%time
# Create an empty list for the results
results = []
# For each value in the data
for datum in data:
# Append the output of the function when applied to that datum
results.append(some_function(datum))
"""
Explanation: Traditional Approach
End of explanation
"""
# Create a pool of workers equaling cores on the machine
pool = ThreadPool()
%%time
# Apply (map) some_function to the data using the pool of workers
results = pool.map(some_function, data)
# Close the pool
pool.close()
# Wait for the worker threads to finish
pool.join()
"""
Explanation: Parallelism Approach
End of explanation
"""
|
azhurb/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 0
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# Pixel channels are 8-bit values in [0, 255]; dividing by 255.0
# rescales them to [0, 1] while leaving the array shape unchanged
return np.array(x / 255.0)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(np.arange(10))
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return lb.transform(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name = 'x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, [None, n_classes], name = 'y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name = 'keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], tensor_shape[3], conv_num_outputs],mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
strides = [1, conv_strides[0], conv_strides[1], 1]
x = tf.nn.conv2d(x_tensor, weights, strides, padding='SAME')
x = tf.nn.bias_add(x, bias)
x = tf.nn.relu(x)
return tf.nn.max_pool(x, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
tensor_shape = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, [-1, tensor_shape[1]*tensor_shape[2]*tensor_shape[3]])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
tensor_shape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
fc = tf.add(tf.matmul(x_tensor, weight), bias)
fc = tf.nn.relu(fc)
return fc
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
tensor_shape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
return tf.add(tf.matmul(x_tensor, weight), bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(x, 128, [2, 2], [2, 2], [2, 2], [2, 2])
conv = conv2d_maxpool(conv, 128, [2, 2], [2, 2], [2, 2], [2, 2])
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc = fully_conn(flat, 256)
fc = tf.nn.dropout(fc, keep_prob)
fc = fully_conn(fc, 128)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 15
batch_size = 64
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
TomTranter/OpenPNM | examples/io_and_visualization/Quick Plotting in OpenPNM.ipynb | mit | import warnings
import scipy as sp
import numpy as np
import openpnm as op
%matplotlib inline
np.random.seed(10)
ws = op.Workspace()
ws.settings['loglevel'] = 40
np.set_printoptions(precision=4)
net = op.network.Cubic(shape=[5, 5, 1])
"""
Explanation: Producing Quick and Easy Plots of Topology within OpenPNM
The main way to visualize OpenPNM networks is Paraview, but this can be a bit a hassle when building a new network topology that needs quick feedback for troubleshooting. Starting in V1.6, OpenPNM offers two plotting functions for showing pore locations and the connections between them: openpnm.topotools.plot_coordinates and openpnm.topotools.plot_connections. This example demonstrates how to use these two methods.
Visualize pore and throats in a 2D network
Start by initializing OpenPNM and creating a network. For easier visualization we'll use a 2D network:
End of explanation
"""
net.add_boundary_pores(['left', 'right'])
"""
Explanation: Next we'll add boundary pores to two sides of the network, to better illustrate these plot commands:
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_coordinates(network=net, pores=net.pores('internal'), c='r')
fig.set_size_inches([5, 5])
ax = fig.gca() # This line is only needed in Jupyter notebooks
"""
Explanation: Now let's use plot_coordinates to plot the pore centers in a 3D plot, starting with the internal pores:
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
Ps = net.pores('*boundary')
fig = op.topotools.plot_coordinates(network=net, pores=Ps, fig=fig, c='b')
ax.get_figure() # This line is only needed in Jupyter notebooks
"""
Explanation: Note that the above call to plot_coordinates returns a figure handle fig. This can be passed into subsequent plotting methods to overlay points.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
Ts = net.find_neighbor_throats(pores=Ps)
fig = op.topotools.plot_connections(network=net, throats=Ts, fig=fig, c='b')
Ts = net.find_neighbor_throats(pores=net.pores('internal'), mode='xnor')
fig = op.topotools.plot_connections(network=net, throats=Ts, fig=fig, c='r')
ax.get_figure() # This line is only needed in Jupyter notebooks
"""
Explanation: Next, let's add lines to the above plot indicating the throat connections. Again, by reusing the fig object we can overlay more information:
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
net = op.network.Voronoi(num_points=100, shape=[1, 1, 1])
fig = op.topotools.plot_connections(network=net, c='g')
fig.set_size_inches([10, 10])
fig = op.topotools.plot_coordinates(network=net, c='r', fig=fig)
"""
Explanation: These two methods are meant for quick and rough visualizations. If you require high quality 3D images, you should use Paraview:
<img src="https://i.imgur.com/uSBVFi9.png" style="width: 60%" align="left"/>
Visualize in 3D too
The plot_connections and plot_coordinates methods also work in 3D.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(network=net, c='g', linewidth=3, alpha=0.5)
fig.set_size_inches([10, 10])
fig = op.topotools.plot_coordinates(network=net, c='r', s=np.random.rand(net.Np)*100, marker='x', fig=fig)
"""
Explanation: The above plot is a static image generated at a default angle. It is possible to get an interactive window that can be rotated and zoomed. This is done by entering %matplotlib notebook at the top of the notebook. To return to the default behavior, use %matplotlib inline.
Any arguments passed to either plot function that are not expected will be passed on to the matplotlib plot command which is used to generate these graphs. This means you can adjust the appearance to the extent that you can figure out which commands to send to plot. For instance, the following code creates fatter lines and makes them slightly transparent, then the markers are changed to an 'x' and their size is selected randomly.
End of explanation
"""
|
PythonFreeCourse/Notebooks | week02/4_Lists.ipynb | mit | prime_ministers = ['David Ben-Gurion', 'Moshe Sharett', 'David Ben-Gurion', 'Levi Eshkol', 'Yigal Alon', 'Golda Meir']
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the 'Learning Python' project: a snake drawn in yellow and blue, weaving between the letters of the course name. The slogan above it reads: a free project for learning programming in Hebrew.">
<p style="text-align: right; direction: rtl; float: right;">Lists</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">Definition</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
A list, as its name suggests, represents an <mark>ordered collection of values</mark>. Lists are the first Python data type we will meet whose <mark>purpose is to group values together</mark>.<br>
The idea is familiar from everyday life: a shopping list sorted alphabetically, or a list of this summer's concerts sorted by date.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Try to picture a list as a long corridor in which items of types we already know stand in line, one after another.<br>
Using the laser-pointer analogy for variables from last week, a list is a laser that points at a row of lasers, each of which points at some value.
</p>
<table style="font-size: 2rem; border: 0px solid black; border-spacing: 0px;">
<tr>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">4</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">5</td>
</tr>
<tbody>
<tr>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"David Ben-Gurion"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe Sharett"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"David Ben-Gurion"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Levi Eshkol"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Yigal Alon"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Golda Meir"</td>
</tr>
<tr style="background: #f5f5f5;">
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-6</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-5</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-4</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td>
</tr>
</tbody>
</table>
<br>
<p style="text-align: center; direction: rtl; clear: both; font-size: 1.8rem">
An example list: Israel's first 6 prime ministers, in order of service, from left to right
</p>
<p style="text-align: right; direction: rtl; float: right;">Examples</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>The names of Israel's prime ministers, in order of service.</li>
<li>The ages of the pupils in a class, from oldest to youngest.</li>
<li>The names of the vinyl records on my shelf, ordered from the leftmost record to the rightmost.</li>
<li>A list in which each item records whether the prime minister in the matching cell of the previous list wore glasses.</li>
<li>The items 42, 8675309, 73, <span dir="ltr" style="direction: ltr;">-40</span> and 186282, in that order.</li>
<li>A weather forecast for the next 7 days. Each item in the list is itself a list containing two items: the first is the expected average temperature, and the second is the expected average humidity.</li>
</ol>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
<strong>Exercise</strong>:
The lists shown above are <dfn>homogeneous lists</dfn> — lists whose items are all of the same type.<br>
For each of the lists in the examples, write down what type of data it would hold.
</p>
</div>
</div>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
<strong>Exercise</strong>:
Try to come up with 3 more lists you have encountered recently.</p>
</div>
</div>
## <p style="text-align: right; direction: rtl; float: right;">Lists in code</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Lists are one of the most enjoyable data types in Python, thanks to the enormous flexibility we get when programming with them.
</p>
### <span style="text-align: right; direction: rtl; float: right; clear: both;">Defining a list</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's use Python to define the list we met above – Israel's first 6 prime ministers:
</p>
End of explanation
"""
print(prime_ministers)
type(prime_ministers)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
What happened in this code?<br>
We opened the list definition with the character <code dir="ltr" style="direction: ltr;">[</code>.<br>
Right after it we inserted the list's items in the desired order, separating each item from the next with a comma (<code>,</code>).<br>
In our case each item is a string representing a prime minister, inserted into the list <mark>in order</mark> of service.<br>
Note that the list contains one item twice – so <mark>a list is a data structure that allows repetition</mark>.<br>
Finally, we closed the list definition with the character <code dir="ltr" style="direction: ltr;">]</code>.<br>
</p>
End of explanation
"""
numbers = [1, 2, 3, 4, 5, 6, 7]
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
We can define a list of the natural numbers up to 7:
</p>
End of explanation
"""
wtf = ['The cake is a', False, 42]
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
A <dfn>homogeneous list</dfn> is one in which the items in all cells are of the same type. Real-world lists are usually homogeneous.<br>
A <dfn>heterogeneous list</dfn> is one in which different cells may hold items of different types.<br>
The difference is purely semantic; Python does not distinguish between a heterogeneous list and a homogeneous one.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For the sake of example, let's define a heterogeneous list:
</p>
End of explanation
"""
empty_list = []
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
We can even define an empty list, with no items at all:</p>
End of explanation
"""
# Index 0 1 2 3 4 5
vinyls = ['Ecliptica', 'GoT Season 6', 'Lone Digger', 'Everything goes numb', 'Awesome Mix Vol. 1', 'Ultimate Sinatra']
"""
Explanation: <p style="text-align: right; direction: rtl; float: right;">Accessing list items</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Every cell in a list has a number that lets us refer to the item stored in that cell.<br>
It is like a laser with a name sticker ("prime ministers' names") pointing at a row of lasers, each labeled with a number describing its position in the row.<br>
The leftmost cell in the list is numbered 0, the cell after it (to its right) gets the number 1, and so on until the end of the list.<br>
Each cell's number is called its <dfn>position in the list</dfn>, or its <dfn>index</dfn>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's define the list of vinyl record names I have at home:
</p>
End of explanation
"""
print(vinyls[4])
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Assuming we are crazy about Guardians of the Galaxy, let's try to fetch Awesome Mix Vol. 1 from the list.<br>
To do that, we write the name of the list we want the item from, followed immediately by the item's position in square brackets.
</p>
End of explanation
"""
# 0 1 2 3 4 5
vinyls = ['Ecliptica', 'GoT Season 6', 'Lone Digger', 'Everything goes numb', 'Awesome Mix Vol. 1', 'Ultimate Sinatra']
# -6 -5 -4 -3 -2 -1
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl; clear: both;">
The first cell is numbered 0, not 1.<br>
There are good reasons for this, but the numbering will often feel unnatural and can create <dfn>bugs</dfn> — pieces of code that behave differently from what the programmer expected.<br>
As a consequence, the position of the last cell is not the length of the list, but the length of the list minus one.<br>
That is: in a list of 3 items, the number of the last cell is 2.
</p>
</div>
</div>
<figure>
<img src="images/list-of-vinyls.png" width="100%" style="display: block; margin-left: auto; margin-right: auto;" alt="A photo of 6 vinyl records on a carpet. Left to right: Ecliptica / Sonata Arctica, Game of Thrones Season 6 / Ramin Djawadi, Caravan Palace / Lone Digger, Everything goes numb / Streetlight Manifesto, Awesome Mix Vol. 1 / Guardians of the Galaxy, Ultimate Sinatra / Frank Sinatra. Above each record appears its index, from 0 for the leftmost record up to 5 for the rightmost. Below the records appears -1 for the rightmost record, and so on down to -6 for the leftmost.">
<figcaption style="text-align: center; direction: rtl; clear: both;">
The list of (some of) the vinyls on my shelf, ordered from the leftmost record to the rightmost.<br>
</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
As the picture shows, Python tries to help us and lets us access items from the end as well.<br>
Besides the regular numbering we saw earlier, items can be accessed from right to left using negative numbering.<br>
The last item gets the number <span style="direction: ltr" dir="ltr">-1</span>, the one before it (to its left) gets <span style="direction: ltr" dir="ltr">-2</span>, and so on.
</p>
End of explanation
"""
print(vinyls[-2])
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
If we want to reach the same record again, this time from the end, we can write it like this:
</p>
End of explanation
"""
type(vinyls[0])
print(vinyls[0] + ', By Sonata Arctica')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
It is worth remembering that the content of each cell is a value in every respect.<br>
It has a type, and we can apply to it the operations we have learned so far:
</p>
End of explanation
"""
# How many vinyl records do I have?
len(vinyls)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Finally, note that just like with strings, we can check the length of a list using the <code>len</code> function.
</p>
End of explanation
"""
print(vinyls)
vinyls[1] = 'GoT Season 7'
print(vinyls)
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl; clear: both;">
If we try to access a cell that does not exist, we get an <code>IndexError</code>.<br>
This usually happens when we forget to start counting from 0.<br>
If this error shows up while you are working with lists, look for the place in your code where you accessed a nonexistent cell.
</p>
</div>
</div>
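The off-by-one pitfall described above is easy to demonstrate. A short, self-contained sketch (the example list is shortened from the one used earlier in this notebook):

```python
vinyls = ['Ecliptica', 'GoT Season 6', 'Lone Digger']
print(vinyls[len(vinyls) - 1])  # the last valid index is len(vinyls) - 1, i.e. 2
try:
    print(vinyls[3])            # index 3 does not exist in a 3-item list
except IndexError:
    print('Oops - no such cell!')
```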
<p style="text-align: right; direction: rtl; float: right;">Assignment in lists</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Sometimes we want to change the value of an item in the list.<br>
We turn to one specific laser in our row of lasers and ask it to point at a new value:
</p>
End of explanation
"""
[1, 2, 3] + [4, 5, 6]
['a', 'b', 'c'] + ['easy', 'as'] + [1, 2, 3]
"""
Explanation: <p style="text-align: right; direction: rtl; float: right;">Arithmetic operators on lists</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Operators we met when we learned about strings work wonderfully on lists as well.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Just as <code>+</code> concatenates strings, it also knows how to concatenate lists:
</p>
End of explanation
"""
['wake up', 'go to school', 'sleep'] * 365
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And just as <code>*</code> repeats a string a certain number of times, it works the same way with lists:
</p>
End of explanation
"""
['Is', 'someone', 'getting'] + ['the', 'best,'] * 4 + ['of', 'you?']
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
We can also combine the two:
</p>
End of explanation
"""
[1, 2, 3] + 5
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Note that any operator you place next to a list applies <em>to the list only</em>, not to the items inside it.<br>
That is, <code dir="ltr" style="direction: ltr;">+ 5</code> will not add 5 to each item; it will fail, because Python does not know how to add a list and an integer.<br>
</p>
End of explanation
"""
prime_ministers = ['David Ben-Gurion', 'Moshe Sharett', 'David Ben-Gurion', 'Levi Eshkol', 'Yigal Alon', 'Golda Meir']
print(prime_ministers)
prime_ministers + ['Yitzhak Rabin']
print(prime_ministers)
print(prime_ministers)
prime_ministers = prime_ministers + ['Yitzhak Rabin']
print(prime_ministers)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Also note that applying an operator to a list does not change the list; it only returns a value.<br>
To actually change the list, we need to use an assignment:
</p>
End of explanation
"""
pupils_in_sunday = ['Moshe', 'Dukasit', 'Michelangelo']
pupils_in_monday = ['Moshe', 'Dukasit', 'Master Splinter']
pupils_in_tuesday = ['Moshe', 'Dukasit', 'Michelangelo']
pupils_in_wednesday = ['Moshe', 'Dukasit', 'Michelangelo', 'Master Splinter']
"""
Explanation: <p style="text-align: right; direction: rtl; float: right;">Comparison operators on lists</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's define the lists of people who were present in class on Sunday, Monday, Tuesday and Wednesday:
</p>
End of explanation
"""
print("Is it Monday? " + str(pupils_in_sunday == pupils_in_monday))
print("Is it Tuesday? " + str(pupils_in_sunday == pupils_in_tuesday))
print("Is it Wednesday? " + str(pupils_in_sunday == pupils_in_wednesday))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Lists support all the comparison operators we have learned so far.<br>
Let's start with the easiest one: on which day was the class roster identical to the roster on Sunday?
</p>
End of explanation
"""
print('Moshe' in pupils_in_tuesday)
# This is the same as:
print('Moshe' in ['Moshe', 'Dukasit', 'Michelangelo'])
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Was Moshe in class on Tuesday?
</p>
End of explanation
"""
'Master Splinter' not in pupils_in_tuesday
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's prove that Master Splinter skipped class that day:
</p>
End of explanation
"""
python_new_version = [3, 7, 2]
python_old_version = [2, 7, 16]
print(python_new_version > python_old_version)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And finally, let's check which version is newer:
</p>
End of explanation
"""
pupils_in_sunday = ['Moshe', 'Dukasit', 'Michelangelo']
pupils_in_monday = ['Moshe', 'Dukasit', 'Splinter']
pupils_in_tuesday = ['Moshe', 'Dukasit', 'Michelangelo']
pupils_in_wednesday = ['Moshe', 'Dukasit', 'Michelangelo', 'Splinter']
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
To compare two lists, Python tries to compare the first item of the first list with the first item of the second list.<br>
If there is a "tie", it moves on to the second item of each list, and so on until the end of the list.
</p>
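This element-by-element rule can be sketched directly (the example values are ours):

```python
# Decided by the first pair that differs: 3 > 2, so the rest is never compared
print([3, 7, 2] > [2, 7, 16])   # True
# When one list is a prefix of the other, the shorter list is "smaller"
print([1, 2] < [1, 2, 3])       # True
```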
<p style="text-align: right; direction: rtl; float: right;">A list of lists</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Sometimes real-life things are too complex to represent with a standard list.<br>
Often we find it easier to create a list in which every cell is a list in its own right.<br>This idea gives us a list of lists.<br>
Take, for example, the lists we defined above, describing who was present in class each day:
</p>
End of explanation
"""
pupils = [pupils_in_sunday, pupils_in_monday, pupils_in_tuesday, pupils_in_wednesday]
print(pupils)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
What we have is a list of days, which is easy to put into one big list:
</p>
End of explanation
"""
pupils = [['Moshe', 'Dukasit', 'Michelangelo'], ['Moshe', 'Dukasit', 'Splinter'], ['Moshe', 'Dukasit', 'Michelangelo'], ['Moshe', 'Dukasit', 'Michelangelo', 'Splinter']]
"""
Explanation: <table style="font-size: 1rem; border: 0px solid black; border-spacing: 0px;">
<tr>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">3</td>
</tr>
<tbody>
<tr>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">
<table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;">
<tr>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td>
</tr>
<tbody>
<tr>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Michelangelo"</td>
</tr>
<tr style="background: #f5f5f5;">
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td>
</tr>
</tbody>
</table>
</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">
<table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;">
<tr>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td>
</tr>
<tbody>
<tr>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Splinter"</td>
</tr>
<tr style="background: #f5f5f5;">
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td>
</tr>
</tbody>
</table>
</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">
<table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;">
<tr>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td>
</tr>
<tbody>
<tr>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Michelangelo"</td>
</tr>
<tr style="background: #f5f5f5;">
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td>
</tr>
</tbody>
</table>
</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">
<table style="font-size: 1.1rem; border: 0px solid black; border-spacing: 0px;">
<tr>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;">0</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">1</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;">3</td>
</tr>
<tbody>
<tr>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Moshe"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Dukasit"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Michelangelo"</td>
<td style="padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;">"Splinter"</td>
</tr>
<tr style="background: #f5f5f5;">
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-4</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr style="background: #f5f5f5;">
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;">-4</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-3</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-2</td>
<td style="padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;">-1</td>
</tr>
</tbody>
</table>
<br>
<p style="text-align: center; direction: rtl; clear: both; font-size: 1.8rem">
An example of a list of lists: pupil attendance from Sunday through Wednesday
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The line we wrote above is exactly identical to the following line, in which we define a single list containing the lists of pupils who were present in class each day.
</p>
End of explanation
"""
pupils[0]
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
We can get the list of pupils who were present on Sunday like this:
</p>
End of explanation
"""
pupils_in_sunday = pupils[0]
print(pupils_in_sunday[-1])
# Or simply:
print(pupils[0][-1])
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And the last pupil who was present on Sunday like this:
</p>
End of explanation
"""
print("pupils = " + str(pupils))
print("-" * 50)
print("1. 'Moshe' in pupils == " + str('Moshe' in pupils))
print("2. 'Moshe' in pupils[0] == " + str('Moshe' in pupils[0]))
print("3. ['Moshe', 'Splinter'] in pupils == " + str(['Moshe', 'Splinter'] in pupils))
print("4. ['Moshe', 'Splinter'] in pupils[-1] == " + str(['Moshe', 'Splinter'] in pupils[-1]))
print("5. ['Moshe', 'Dukasit', 'Splinter'] in pupils == " + str(['Moshe', 'Dukasit', 'Splinter'] in pupils))
print("6. ['Moshe', 'Dukasit', 'Splinter'] in pupils[0] == " + str(['Moshe', 'Dukasit', 'Splinter'] in pupils[0]))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
If you find this hard to picture, do it in stages.<br>
Check what <code>pupils</code> holds, then what <code>pupils[0]</code> returns, and then try to take its last item: <code>pupils[0][-1]</code>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
To understand better how a list of lists behaves, it is important to understand the results of the following boolean expressions.<br>
It is a bit confusing, but I trust you to hang in there:
</p>
End of explanation
"""
judges = ['Esther Hayut', 'Miriam Naor', 'Asher Grunis', 'Dorit Beinisch', 'Aharon Barak']
"""
Explanation: <ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>The boolean expression on line 1 returns <samp>False</samp>, because every item in the list <var>pupils</var> is a list, and none of them is the string <em>"Moshe"</em>.</li>
<li>The boolean expression on line 2 returns <samp>True</samp>, because the first item of <var>pupils</var> is a list that contains the string <em>"Moshe"</em>.</li>
<li>The boolean expression on line 3 returns <samp>False</samp>, because <var>pupils</var> contains no list with exactly these values. There is a list that contains these items, but the question was whether the big list (<var>pupils</var>) contains an item that is exactly equal to <code>['Moshe', 'Splinter']</code>.</li>
<li>The boolean expression on line 4 returns <samp>False</samp>, because the last list inside <var>pupils</var> has no item that is the list <code>["Moshe", "Splinter"]</code>.</li>
<li>The boolean expression on line 5 returns <samp>True</samp>, because there is a list directly inside <var>pupils</var> with exactly these values.</li>
<li>The boolean expression on line 6 returns <samp>False</samp>, because the first list inside <var>pupils</var> has no item that is this list.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Remember that for Python there is nothing special about a list of lists. It is simply a regular list, each of whose items is a list.<br>
As far as Python is concerned, there is no difference between such a list and any other list.
</p>
<p style="text-align: right; direction: rtl; float: right;">The term Iterable</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
On websites and in the Python documentation we will often meet the word <dfn>Iterable</dfn>.<br>
Throughout the course we will use this term many times to understand better how Python behaves.<br>
<mark>We define a value as <dfn>iterable</dfn> if it can be broken down into all of its items.</mark><br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
So far we know 2 variable types that meet the definition of iterables: lists and strings.<br>
A list can be broken down into all the items that make it up, and a string into all the characters that make it up.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Everything we can call an iterable has a lot in common:<br>
On a large portion of iterables we can apply operations that relate to all of their items, such as <code>len</code>, which shows the number of items in the value.<br>
On a large portion of iterables we can also use square brackets to access a specific item inside them.<br>
Later on we will learn about more things that most (or all) iterables share.
</p>
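What iterables have in common can be seen by applying the same two tools to a string and to a list (a small sketch; the values are ours):

```python
word = 'Moshe'
names = ['Moshe', 'Dukasit']
# len counts the items of either kind of iterable
print(len(word), len(names))   # 5 2
# and square brackets fetch a single item from each
print(word[0], names[0])       # M Moshe
```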
<p style="align: right; direction: rtl; float: right; clear: both;">Terms</p>
<dl style="text-align: right; direction: rtl; float: right; clear: both;">
<dt>List</dt><dd>A variable type whose purpose is to group other values in a particular order.</dd>
<dt>Cell</dt><dd>A place in a list that holds some element.</dd>
<dt>Position</dt><dd>The position of a cell is its distance from the first cell in the list, whose position is 0. It is a number whose purpose is to allow access to a specific cell in the list.</dd>
<dt>Index</dt><dd>A synonym for "position".</dd>
<dt>Element</dt><dd>A value located in a cell of a list. It can be retrieved by specifying the name of the list and the position of the cell in which it resides.</dd>
<dt>Homogeneous list</dt><dd>A list in which all the elements are of the same type.</dd>
<dt>Heterogeneous list</dt><dd>A list in which each element may be of a different type.</dd>
<dt>Iterable</dt><dd>A value composed of a sequence of other values.</dd>
</dl>
<p style="text-align: right; direction: rtl; float: right;">Summary</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>The number of elements in a list can be 0 (an empty list) or more.</li>
<li>The elements in a list have an order.</li>
<li>Each element in a list is numbered, starting from the first element, which is numbered 0, up to the last element, whose number is the length of the list minus one.</li>
<li>An element can be accessed both by its position and by its distance from the end of the list, by referring to its negative position.</li>
<li>Elements in a list may repeat themselves.</li>
<li>A list may contain elements of a single type only (a <dfn>homogeneous list</dfn>) or of several different types (a <dfn>heterogeneous list</dfn>).</li>
<li>The length of a list can change while the program is running.</li>
</ol>
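The points above can be demonstrated in a short snippet (the list is made up for illustration):

```python
mixed = [0, "one", 2.0, "one"]  # a heterogeneous list, with a repeated element

print(mixed[0])    # 0 -- the first element sits at position 0
print(mixed[3])    # one -- the last position is the length minus one
print(mixed[-1])   # one -- the same element, via its negative position

mixed = mixed + [True]  # the length can change while the program runs
print(len(mixed))  # 5
```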
## <p style="align: right; direction: rtl; float: right; clear: both;">Exercises</p>
### <p style="align: right; direction: rtl; float: right; clear: both;">Order in the Court!</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write code that sorts the list of court presidents in alphabetical order.<br>
This is indeed expected to be very cumbersome. In the future we will learn to write better code for this problem.<br>
Use indexes, and store values aside in variables.
</p>
End of explanation
"""
ice_cream_flavours = ['chocolate', 'vanilla', 'pistachio', 'banana']
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Bonus: write a piece of code that checks that the list (which contains 5 elements) is indeed sorted.
</p>
<p style="align: right; direction: rtl; float: right; clear: both;">What Kind of Flavor Is That, Anyway?</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Before you is a list of the names of ice-cream flavors sold at the neighborhood ice-cream stand.<br>
Take the user's favorite flavor as input, and print to the user whether their flavor is sold at the stand.
</p>
End of explanation
"""
rabanim = ['Rashi', 'Maimonides', 'Nachmanides', 'Rabbeinu Tam']
'Rashi' in rabanim
'RASHI' in rabanim
['Rashi'] in rabanim
['Rashi', 'Nachmanides'] in rabanim
'Bruria' in rabanim
rabanim + ['Gershom ben Judah']
'Gershom ben Judah' in rabanim
'3' in [1, 2, 3]
(1 + 5 - 3) in [1, 2, 3]
[1, 5, 3] > [1, 2, 3]
rabanim[0] in [rabanim[0] + rabanim[1]]
rabanim[0] in [rabanim[0]] + [rabanim[1]]
rabanim[-1] == rabanim[0] or rabanim[-1] == rabanim[1] or rabanim[-1] == rabanim[2] or rabanim[-1] == rabanim[3]
rabanim[-1] == rabanim[0] or rabanim[-1] == rabanim[1] or rabanim[-1] == rabanim[2] and rabanim[-1] != rabanim[3]
rabanim[-1] == rabanim[0] or rabanim[-1] == rabanim[1] or rabanim[-1] == rabanim[2] and rabanim[-1] == rabanim[3]
1 in [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
[1, 2, 3] in [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][2]
[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][3]
[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1] * 5
[[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1]] * 5
[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1]
[[1, 2, 3], [4, 5, 6], [7, 8, 9]][0][-1] == [[7, 8, 9], [4, 5, 6], [1, 2, 3]][2][2]
[[1, 2, 3]] in [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
[[1, 2, 3], [4, 5, 6]] in [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
[[1, 2, 3], [4, 5, 6]] in [[[1, 2, 3], [4, 5, 6]], [7, 8, 9]]
"""
Explanation: <p style="align: right; direction: rtl; float: right; clear: both;">What, Rashi?</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Before you are several expressions.<br>
For each one, write down for yourselves what the result of the expression will be, and only then run it.
</p>
End of explanation
"""
|
christofs/jupyter | .ipynb_checkpoints/compare-checkpoint.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: (This Jupyter notebook is live at: http://mybinder.org/repo/christofs/jupyter.)
Comparing Corpora
This Jupyter notebook explains some aspects of comparing corpora.
End of explanation
"""
loc = 10 # = arithmetic mean
scale = 20 # = standard deviation
"""
Explanation: First we load a dataset.
Motivation: Does a given word occur more often in Racine than in his contemporaries? Do Germans drink more beer than Poles?
The following tutorial focuses on how distributions can be generated and then checked with various statistical tests for how similar the two distributions are.
Put differently, one tests how likely it is that the two distributions under investigation actually come from one common distribution.
If the differences between two distributions can be explained solely by random fluctuations in sampling, then no statistically significant difference between the distributions can be detected.
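The role of random sampling fluctuation can be illustrated with the standard library alone; this is a plain-Python sketch with made-up parameters, independent of the scipy code used below:

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the sketch is reproducible

# Two samples drawn from the *same* normal distribution
# (mean 10, standard deviation 20):
sample1 = [random.gauss(10, 20) for _ in range(500)]
sample2 = [random.gauss(10, 20) for _ in range(500)]

# The sample means differ slightly even though the underlying
# distribution is identical -- exactly the kind of fluctuation
# a significance test has to account for.
print(mean(sample1))
print(mean(sample2))
```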
Generating distributions
First we have to generate the distributions that are then to be compared.
For this we can use a library that allows us to generate distributions with the desired properties: scipy.stats.
Documentation: http://docs.scipy.org/doc/scipy/reference/stats.html
We first define the parameters of the distribution:
End of explanation
"""
import scipy.stats as sts
Verteilung = sts.norm(loc, scale) # loc = arithmetic mean, scale = standard deviation
Werte = Verteilung.rvs(size=5000)
print("Ten individual values:", Werte[0:10])
"""
Explanation: We generate the distribution and many individual values.
For this we must (at the latest now) import the scipy.stats library.
We also take a look at some values of the distribution.
End of explanation
"""
# Histogram
plt.hist(Werte, 100)
plt.axis([-200, 200, 0, 200]) # x-min, x-max, y-min, y-max
plt.show()
# Boxplot
plt.boxplot(Werte, vert=False)
plt.show()
"""
Explanation: We visualize the values of the distribution as a histogram and as a boxplot.
Histogram: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist
Boxplot: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.boxplot
End of explanation
"""
from math import sqrt
mean, var, skew = Verteilung.stats(moments='mvs')
print("mean:", mean, "\nstdv:", sqrt(var), "\nskew:", skew)
"""
Explanation: As a check, we extract the key figures of the distribution:
End of explanation
"""
# First distribution
loc = 15 # = arithmetic mean
scale = 5 # = standard deviation
Verteilung = sts.norm(loc, scale) # loc = arithmetic mean, scale = standard deviation
Werte1 = Verteilung.rvs(size=500)
# Second distribution
loc = 25 # = arithmetic mean
scale = 5 # = standard deviation
Verteilung = sts.norm(loc, scale) # loc = arithmetic mean, scale = standard deviation
Werte2 = Verteilung.rvs(size=500)
"""
Explanation: Now it becomes possible to generate different distributions with different parameters.
They always follow a normal distribution, but with a specific mean and standard deviation.
Try it out!
(Hint: if necessary, you can adjust the axis scales in the histogram.)
Comparing two distributions
To compare two distributions, we first have to generate two distributions and then apply statistical tests that check these two distributions for how different they are.
So first we generate not one but two distributions, and accordingly set the parameters for each of them separately.
End of explanation
"""
# Two histograms
plt.hist(Werte1, 10, histtype="stepfilled", color=(1, 1, 0, 0.2))
plt.hist(Werte2, 10, histtype="stepfilled", color=(0, 0, 1, 0.2))
plt.show()
# Two boxplots
Daten = [Werte1, Werte2]
plt.boxplot(Daten, vert=False)
plt.show()
"""
Explanation: First we visualize the two distributions together, to get a first impression of how they relate to each other.
End of explanation
"""
MW1 = np.mean(Werte1)
MW2 = np.mean(Werte2)
Verhältnis = MW1/MW2
print("Mean 1 :", MW1, "\nMean 2 :", MW2, "\n Ratio :", Verhältnis)
"""
Explanation: Now we can apply statistical tests, several of them in fact. That way we will see how the different tests behave under different conditions.
1. The ratio of the means (VM)
End of explanation
"""
Welch = sts.ttest_ind(Werte1, Werte2, equal_var=False)
print("Statistic:", Welch[0], "\n P-value:", Welch[1])
"""
Explanation: 2. The t-test (Welch's t-test)
Documentation: http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ttest_ind.html
End of explanation
"""
Wilcoxon = sts.wilcoxon(Werte1, Werte2)
print("Statistic:", Wilcoxon[0], "\n P-value:", Wilcoxon[1])
"""
Explanation: 3. The Wilcoxon test (note: scipy.stats.wilcoxon implements the signed-rank test for paired samples; for two independent samples the rank-sum test is available as scipy.stats.ranksums)
End of explanation
"""
|
ucsd-ccbb/jupyter-genomics | notebooks/crispr/Dual CRISPR 5-Count Plots.ipynb | mit | g_timestamp = ""
g_dataset_name = "20160510_A549"
g_count_alg_name = "19mer_1mm_py"
g_fastq_counts_dir = '/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/interim/20160510_D00611_0278_BHK55CBCXX_A549'
g_fastq_counts_run_prefix = "19mer_1mm_py_20160615223822"
g_collapsed_counts_dir = "/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/processed/20160510_A549"
g_collapsed_counts_run_prefix = "20160510_A549_19mer_1mm_py_20160616101309"
g_combined_counts_dir = ""
g_combined_counts_run_prefix = ""
g_plots_dir = ""
g_plots_run_prefix = ""
g_code_location = "/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python"
"""
Explanation: Dual CRISPR Screen Analysis
Count Plots
Amanda Birmingham, CCBB, UCSD (abirmingham@ucsd.edu)
Instructions
To run this notebook reproducibly, follow these steps:
1. Click Kernel > Restart & Clear Output
2. When prompted, click the red Restart & clear all outputs button
3. Fill in the values for your analysis for each of the variables in the Input Parameters section
4. Click Cell > Run All
<a name = "input-parameters"></a>
Input Parameters
End of explanation
"""
%matplotlib inline
"""
Explanation: Matplotlib Display
End of explanation
"""
import sys
sys.path.append(g_code_location)
"""
Explanation: CCBB Library Imports
End of explanation
"""
# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
from ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp
g_timestamp = check_or_set(g_timestamp, get_timestamp())
g_collapsed_counts_dir = check_or_set(g_collapsed_counts_dir, g_fastq_counts_dir)
g_collapsed_counts_run_prefix = check_or_set(g_collapsed_counts_run_prefix, g_fastq_counts_run_prefix)
g_combined_counts_dir = check_or_set(g_combined_counts_dir, g_collapsed_counts_dir)
g_combined_counts_run_prefix = check_or_set(g_combined_counts_run_prefix, g_collapsed_counts_run_prefix)
g_plots_dir = check_or_set(g_plots_dir, g_combined_counts_dir)
g_plots_run_prefix = check_or_set(g_plots_run_prefix,
get_run_prefix(g_dataset_name, g_count_alg_name, g_timestamp))
print(describe_var_list(['g_timestamp','g_collapsed_counts_dir', 'g_collapsed_counts_run_prefix',
'g_combined_counts_dir', 'g_combined_counts_run_prefix', 'g_plots_dir',
'g_plots_run_prefix']))
from ccbbucsd.utilities.files_and_paths import verify_or_make_dir
verify_or_make_dir(g_collapsed_counts_dir)
verify_or_make_dir(g_combined_counts_dir)
verify_or_make_dir(g_plots_dir)
"""
Explanation: Automated Set-Up
End of explanation
"""
# %load -s get_counts_file_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/construct_counter.py
def get_counts_file_suffix():
return "counts.txt"
# %load -s get_collapsed_counts_file_suffix,get_combined_counts_file_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_combination.py
def get_collapsed_counts_file_suffix():
return "collapsed.txt"
def get_combined_counts_file_suffix():
return "counts_combined.txt"
"""
Explanation: Count File Suffixes
End of explanation
"""
# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_plots.py
# third-party libraries
import matplotlib.pyplot
import numpy
import pandas
# ccbb libraries
from ccbbucsd.utilities.analysis_run_prefixes import strip_run_prefix
from ccbbucsd.utilities.files_and_paths import build_multipart_fp, get_file_name_pieces, get_filepaths_by_prefix_and_suffix
# project-specific libraries
from ccbbucsd.malicrispr.count_files_and_dataframes import get_counts_df
__author__ = "Amanda Birmingham"
__maintainer__ = "Amanda Birmingham"
__email__ = "abirmingham@ucsd.edu"
__status__ = "prototype"
DEFAULT_PSEUDOCOUNT = 1
def get_boxplot_suffix():
return "boxplots.png"
def make_log2_series(input_series, pseudocount_val):
revised_series = input_series + pseudocount_val
log2_series = revised_series.apply(numpy.log2)
nan_log2_series = log2_series.replace([numpy.inf, -numpy.inf], numpy.nan)
return nan_log2_series.dropna().reset_index(drop=True)
# note that .reset_index(drop=True) is necessary as matplotlib boxplot function (perhaps among others)
# throws an error if the input series doesn't include an item with index 0--which can be the case if
# that first item was NaN and was dropped, and series wasn't reindexed.
def show_and_save_histogram(output_fp, title, count_data):
matplotlib.pyplot.figure(figsize=(20,20))
matplotlib.pyplot.hist(count_data)
matplotlib.pyplot.title(title)
matplotlib.pyplot.xlabel("log2(raw counts)")
matplotlib.pyplot.ylabel("Frequency")
matplotlib.pyplot.savefig(output_fp)
matplotlib.pyplot.show()
def show_and_save_boxplot(output_fp, title, samples_names, samples_data, rotation_val=0):
fig = matplotlib.pyplot.figure(1, figsize=(20,20))
ax = fig.add_subplot(111)
bp = ax.boxplot(samples_data)
ax.set_xticklabels(samples_names, rotation=rotation_val)
ax.set_xlabel("samples")
ax.set_ylabel("log2(raw counts)")
matplotlib.pyplot.title(title)
fig.savefig(output_fp, bbox_inches='tight')
matplotlib.pyplot.show()
def plot_raw_counts(input_dir, input_run_prefix, counts_suffix, output_dir, output_run_prefix, boxplot_suffix):
counts_fps_for_run = get_filepaths_by_prefix_and_suffix(input_dir, input_run_prefix, counts_suffix)
for curr_counts_fp in counts_fps_for_run:
_, curr_sample, _ = get_file_name_pieces(curr_counts_fp)
stripped_sample = strip_run_prefix(curr_sample, input_run_prefix)
count_header, curr_counts_df = get_counts_df(curr_counts_fp, input_run_prefix)
curr_counts_df.rename(columns={count_header:stripped_sample}, inplace=True)
count_header = stripped_sample
log2_series = make_log2_series(curr_counts_df[count_header], DEFAULT_PSEUDOCOUNT)
title = " ".join([input_run_prefix, count_header, "with pseudocount", str(DEFAULT_PSEUDOCOUNT)])
output_fp_prefix = build_multipart_fp(output_dir, [count_header, input_run_prefix])
boxplot_fp = output_fp_prefix + "_" + boxplot_suffix
show_and_save_boxplot(boxplot_fp, title, [count_header], log2_series)
hist_fp = output_fp_prefix + "_" + "hist.png"
show_and_save_histogram(hist_fp, title, log2_series)
def plot_combined_raw_counts(input_dir, input_run_prefix, combined_suffix, output_dir, output_run_prefix, boxplot_suffix):
output_fp = build_multipart_fp(output_dir, [output_run_prefix, boxplot_suffix])
combined_counts_fp = build_multipart_fp(input_dir, [input_run_prefix, combined_suffix])
combined_counts_df = pandas.read_table(combined_counts_fp)
samples_names = combined_counts_df.columns.values[1:] # TODO: remove hardcode
samples_data = []
for curr_name in samples_names:
log2_series = make_log2_series(combined_counts_df[curr_name], DEFAULT_PSEUDOCOUNT)
samples_data.append(log2_series.tolist())
title = " ".join([input_run_prefix, "all samples", "with pseudocount", str(DEFAULT_PSEUDOCOUNT)])
show_and_save_boxplot(output_fp, title, samples_names, samples_data, 90)
"""
Explanation: Count Plots Functions
End of explanation
"""
from ccbbucsd.utilities.files_and_paths import summarize_filenames_for_prefix_and_suffix
print(summarize_filenames_for_prefix_and_suffix(g_fastq_counts_dir, g_fastq_counts_run_prefix, get_counts_file_suffix()))
# this call makes one boxplot per raw fastq
plot_raw_counts(g_fastq_counts_dir, g_fastq_counts_run_prefix, get_counts_file_suffix(), g_plots_dir,
g_plots_run_prefix, get_boxplot_suffix())
"""
Explanation: Individual fastq Plots
End of explanation
"""
print(summarize_filenames_for_prefix_and_suffix(g_collapsed_counts_dir, g_collapsed_counts_run_prefix,
get_collapsed_counts_file_suffix()))
plot_raw_counts(g_collapsed_counts_dir, g_collapsed_counts_run_prefix, get_collapsed_counts_file_suffix(),
g_plots_dir, g_plots_run_prefix, get_boxplot_suffix())
"""
Explanation: Individual Sample Plots
End of explanation
"""
print(summarize_filenames_for_prefix_and_suffix(g_combined_counts_dir, g_combined_counts_run_prefix,
get_combined_counts_file_suffix()))
plot_combined_raw_counts(g_combined_counts_dir, g_combined_counts_run_prefix, get_combined_counts_file_suffix(),
g_plots_dir, g_plots_run_prefix, get_boxplot_suffix())
"""
Explanation: Combined Samples Plots
End of explanation
"""
|
planetlabs/notebooks | jupyter-notebooks/tasking-api/planet_tasking_api_order_edit_and_cancel.ipynb | apache-2.0 | # Import the os module in order to access environment variables
import os
#If you are running this notebook outside of the docker environment that comes with the repo, you can uncomment the next line to provide your API key
#os.environ['PL_API_KEY']=input('Please provide your API Key')
# Setup the API Key from the `PL_API_KEY` environment variable
PLANET_API_KEY = os.getenv('PL_API_KEY')
"""
Explanation: Tasking API Order Edit and Cancellation
Introduction
This tutorial is an introduction on how to edit and cancel tasking orders using Planet's Tasking API. It provides code samples on how to write simple Python code to do this.
The API reference documentation can be found at https://developers.planet.com/docs/tasking
Requirements
Software & Modules
This tutorial assumes familiarity with the Python programming language throughout. Familiarity with basic REST API concepts and usage is also assumed.
We'll be using a "Jupyter Notebook" (aka Python Notebook) to run through the examples.
To learn more about and get started with using Jupyter, visit: Jupyter and IPython.
For the best experience, download this notebook and run it on your system, and make sure to install the modules listed below first. You can also copy the examples' code to separate Python files and run them directly with Python on your system if you prefer.
Planet API Key
You should have an account on the Planet Platform to access the Tasking API. You may retrieve your API key from your account page, or from the "API Tab" in Planet Explorer.
Before continuing
It is highly recommended that before continuing with these examples you first complete the Tasking API Order Creation Notebook, which will give instructions on how to first create a tasking order. We will be creating a tasking order as part of this notebook but will not be going into details.
Overview
The basic workflow
Create the tasking order
Edit the tasking order
Cancel the tasking order
Examples on how to create tasking orders and view their captures can be found in the notebook planet_tasking_api_order_creation.ipynb
API Endpoints
This tutorial will cover the following API endpoint:
/order
Basic Setup
Before interacting with the Planet Tasking API using Python, we will set up our environment with some useful modules and helper functions.
We'll configure authentication to the Planet Tasking API
We'll use the requests Python module to make HTTP communication easier.
We'll use the json Python module to help us work with JSON responses from the API.
We'll use the pytz Python module to define the time frame for the order that we will be creating.
We'll create a function called p that will print Python dictionaries nicely.
Then we'll be ready to make our first call to the Planet Tasking API by hitting the base endpoint at https://api.planet.com/tasking/v2.
Let's start by configuring authentication:
Authentication
Authentication with the Planet Tasking API can be achieved using a valid Planet API key.
You can export your API Key as an environment variable on your system:
export PL_API_KEY="YOUR API KEY HERE"
Or add the variable to your path, etc.
To start our Python code, we'll setup an API Key variable from an environment variable to use with our requests:
End of explanation
"""
# Import helper modules
import json
import requests
import pytz
from time import sleep
from datetime import datetime, timedelta
# Helper function to print formatted JSON using the json module
def p(data):
print(json.dumps(data, indent=2))
# Setup Planet Tasking PLANET_API_HOST
TASKING_ORDERS_API_URL = "https://api.planet.com/tasking/v2/orders/"
# Setup the session
session = requests.Session()
# Authenticate
session.headers.update({
'Authorization': f'api-key {PLANET_API_KEY}',
'Content-Type': 'application/json'
})
"""
Explanation: Helper Modules and Functions
End of explanation
"""
# Define the name and coordinates for the order
name=input("Give the order a name")
latitude=float(input("Provide the latitude"))
longitude=float(input("Provide the longitude"))
# Because the geometry is GeoJSON, the coordinates must be longitude,latitude
order = {
'name': name,
'geometry': {
'type': 'Point',
'coordinates': [
longitude,
latitude
]
}
}
# Set a start and end time, giving the order a week to complete
tomorrow = datetime.now(pytz.utc) + timedelta(days=1)
one_week_later = tomorrow + timedelta(days=7)
datetime_parameters = {
'start_time': tomorrow.isoformat(),
'end_time': one_week_later.isoformat()
}
# Add use datetime parameters
order.update(datetime_parameters)
# The creation of an order is a POST request to the /orders endpoint
res = session.request('POST', TASKING_ORDERS_API_URL, json=order)
if res.status_code == 403:
    print('Your PLANET_API_KEY is valid, but you are not authorized.')
elif res.status_code == 401:
    print('Your PLANET_API_KEY is incorrect')
elif res.status_code == 201:
    print('Your order was created successfully')
print(f'Received status code {res.status_code} from the API. Please contact support.')
# Order created. Here is the response
p(res.json())
"""
Explanation: 1 | Creating an order
End of explanation
"""
# Get the response JSON and extract the ID of the order
response = res.json()
order_id = response["id"]
# Provide a new name for the order
new_name=input("Provide a new name for the order")
# Define the payload
edit_payload = {
'name': new_name
}
# Editing an order requires a PUT request to be made. The order id is concantenated to the end of the URl. E.g. https://api.planet.com/tasking/v2/orders/<ORDER_ID>
res = session.request('PUT', TASKING_ORDERS_API_URL + order_id, json=edit_payload)
p(res.json())
"""
Explanation: 2 | Editing the tasking order
Changing the tasking order that we have just created is possible but with a few restrictions that you need to be aware of. Only the name, the rank, the start time and the end time of a tasking order can be changed. The geometry of an existing tasking order cannot be altered after creation. If the geometry is wrong, then the only recourse is to cancel the tasking order and start anew.
With editing, the "when can a tasking order be edited" is just as important as the "what can be edited". A tasking order can have the following states: PENDING, REQUESTED, IN PROGRESS, FULFILLED, CANCELLED and EXPIRED. A tasking order can only be edited in the first three states (PENDING, REQUESTED and IN PROGRESS), and of these, the start time can only be edited while the tasking order is PENDING or REQUESTED. The start time cannot be changed once a tasking order is in progress.
With that, let's look at editing the order that we just created to give it a different name:
End of explanation
"""
# Get the response JSON and extract the ID of the order
response = res.json()
order_id = response["id"]
# Cancel an order by sending a DELETE request to the orders endpoint
res = session.request('DELETE', TASKING_ORDERS_API_URL + order_id)
p(res.json())
"""
Explanation: 3 | Cancelling the tasking order
As with editing a tasking order, cancellation has some limitations on when a tasking order can be cancelled. Tasking orders can only be cancelled when the order is in one of the following states: PENDING, IN PROGRESS, RECEIVED and REQUESTED.
Let's delete the tasking order that we have created:
End of explanation
"""
|
bsafdi/NPTFit | examples/Example10_HighLat_Analysis.ipynb | mit | # Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import corner
import matplotlib.pyplot as plt
from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import dnds_analysis # module for analysing the output
from NPTFit import psf_correction as pc # module for determining the PSF correction
from __future__ import print_function
"""
Explanation: Example 10: Analyzing the Results of the High-Lat Run
In this Example we analyze the results of an MPI run of NPTFit performed over high latitudes.
The example batch script we provide, Example10_HighLat_Batch.batch is for SLURM. This calls the run file Example10_HighLat_Run.py using MPI, and is an example of how to perform a more realistic analysis using NPTFit.
NB: The batch file must be run before this notebook.
NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.
In this example, we model the source-count function as a triply-broken power law. In detail, the source count function is then:
$$ \frac{dN}{dS} = A \left\{ \begin{array}{lc} \left( \frac{S}{S_{b,1}} \right)^{-n_1}, & S \geq S_{b,1} \\ \left(\frac{S}{S_{b,1}}\right)^{-n_2}, & S_{b,1} > S \geq S_{b,2} \\ \left( \frac{S_{b,2}}{S_{b,1}} \right)^{-n_2} \left(\frac{S}{S_{b,2}}\right)^{-n_3}, & S_{b,2} > S \geq S_{b,3} \\ \left( \frac{S_{b,2}}{S_{b,1}} \right)^{-n_2} \left( \frac{S_{b,3}}{S_{b,2}} \right)^{-n_3} \left(\frac{S}{S_{b,3}}\right)^{-n_4}, & S_{b,3} > S \end{array} \right. $$
and is thereby described by the following eight parameters:
$$ \theta = \left[ A, n_1, n_2, n_3, n_4, S_b^{(1)}, S_b^{(2)}, S_b^{(3)} \right]\,. $$
This provides an example of a more complicated source count function, and also explains why the run needs MPI.
End of explanation
"""
n = nptfit.NPTF(tag='HighLat_Example')
fermi_data = np.load('fermi_data/fermidata_counts.npy').astype(np.int32)
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
analysis_mask = cm.make_mask_total(band_mask = True, band_mask_range = 50)
n.load_mask(analysis_mask)
dif = np.load('fermi_data/template_dif.npy')
iso = np.load('fermi_data/template_iso.npy')
n.add_template(dif, 'dif')
n.add_template(iso, 'iso')
n.add_template(np.ones(len(iso)), 'iso_np', units='PS')
n.add_poiss_model('dif','$A_\mathrm{dif}$', [0,20], False)
n.add_poiss_model('iso','$A_\mathrm{iso}$', [0,5], False)
n.add_non_poiss_model('iso_np',
['$A^\mathrm{ps}_\mathrm{iso}$',
'$n_1$','$n_2$','$n_3$','$n_4$',
'$S_b^{(1)}$','$S_b^{(2)}$','$S_b^{(3)}$'],
[[-6,2],
[2.05,5],[1.0,3.5],[1.0,3.5],[-1.99,1.99],
[30,80],[1,30],[0.1,1]],
[True,False,False,False,False,False,False,False])
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary, df_rho_div_f_ary = pc_inst.f_ary, pc_inst.df_rho_div_f_ary
n.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=5)
"""
Explanation: Load in scan
We need to create an instance of nptfit.NPTF and load in the scan performed using MPI.
End of explanation
"""
n.load_scan()
"""
Explanation: Finally, load the completed scan performed using MPI.
End of explanation
"""
an = dnds_analysis.Analysis(n)
"""
Explanation: Analysis
As in Example 9 we first initialize the analysis module. We will provide the same basic plots as in that notebook, where more details on each option is provided.
End of explanation
"""
an.make_triangle()
"""
Explanation: 1. Make triangle plots
End of explanation
"""
print("Iso NPT Intensity",corner.quantile(an.return_intensity_arrays_non_poiss('iso_np'),[0.16,0.5,0.84]), "ph/cm^2/s")
print("Iso PT Intensity",corner.quantile(an.return_intensity_arrays_poiss('iso'),[0.16,0.5,0.84]), "ph/cm^2/s")
print("Dif PT Intensity",corner.quantile(an.return_intensity_arrays_poiss('dif'),[0.16,0.5,0.84]), "ph/cm^2/s")
"""
Explanation: 2. Get Intensities
End of explanation
"""
an.plot_source_count_median('iso_np',smin=0.01,smax=1000000,nsteps=10000,color='tomato',spow=2)
an.plot_source_count_band('iso_np',smin=0.01,smax=1000000,nsteps=10000,qs=[0.16,0.5,0.84],color='tomato',alpha=0.3,spow=2)
plt.yscale('log')
plt.xscale('log')
plt.xlim([1e-12,5e-6])
plt.ylim([5e-14,1e-11])
plt.tick_params(axis='x', length=5, width=2, labelsize=18)
plt.tick_params(axis='y', length=5, width=2, labelsize=18)
plt.ylabel('$F^2 dN/dF$ [counts cm$^{-2}$s$^{-1}$deg$^{-2}$]', fontsize=18)
plt.xlabel('$F$ [counts cm$^{-2}$ s$^{-1}$]', fontsize=18)
plt.title('High Latitudes Isotropic NPTF', y=1.02)
"""
Explanation: 3. Plot Source Count Distributions
End of explanation
"""
an.plot_intensity_fraction_non_poiss('iso_np', bins=100, color='tomato', label='Iso PS')
an.plot_intensity_fraction_poiss('iso', bins=100, color='cornflowerblue', label='Iso')
an.plot_intensity_fraction_poiss('dif', bins=100, color='plum', label='Dif')
plt.xlabel('Flux fraction (%)')
plt.legend(fancybox = True)
plt.xlim(0,80);
"""
Explanation: 4. Plot Intensity Fractions
End of explanation
"""
Aiso_poiss_post = an.return_poiss_parameter_posteriors('iso')
Adif_poiss_post = an.return_poiss_parameter_posteriors('dif')
f, axarr = plt.subplots(1, 2);
f.set_figwidth(8)
f.set_figheight(4)
axarr[0].hist(Aiso_poiss_post, histtype='stepfilled', color='cornflowerblue', bins=np.linspace(.5,1,30),alpha=0.4);
axarr[0].set_title('$A_\mathrm{iso}$')
axarr[1].hist(Adif_poiss_post, histtype='stepfilled', color='lightsalmon', bins=np.linspace(10,15,30),alpha=0.4);
axarr[1].set_title('$A_\mathrm{dif}$')
plt.setp([a.get_yticklabels() for a in axarr[:]], visible=False);
plt.tight_layout()
"""
Explanation: 5. Access Parameter Posteriors
Poissonian parameters
End of explanation
"""
Aiso_non_poiss_post, n_non_poiss_post, Sb_non_poiss_post = an.return_non_poiss_parameter_posteriors('iso_np')
f, axarr = plt.subplots(2, 4);
f.set_figwidth(16)
f.set_figheight(8)
axarr[0, 0].hist(Aiso_non_poiss_post, histtype='stepfilled', color='cornflowerblue', bins=np.linspace(0,.0001,30),alpha=0.4);
axarr[0, 0].set_title('$A_\mathrm{iso}^\mathrm{ps}$')
axarr[0, 1].hist(n_non_poiss_post[0], histtype='stepfilled', color='lightsalmon', bins=np.linspace(2,4,30),alpha=0.4);
axarr[0, 1].set_title('$n_1^\mathrm{iso}$')
axarr[0, 2].hist(n_non_poiss_post[1], histtype='stepfilled', color='lightsalmon', bins=np.linspace(1,3.5,30),alpha=0.4);
axarr[0, 2].set_title('$n_2^\mathrm{iso}$')
axarr[0, 3].hist(n_non_poiss_post[2], histtype='stepfilled', color='lightsalmon', bins=np.linspace(1,3.5,30),alpha=0.4);
axarr[0, 3].set_title('$n_3^\mathrm{iso}$')
axarr[1, 0].hist(n_non_poiss_post[3], histtype='stepfilled', color='lightsalmon', bins=np.linspace(-2,2,30),alpha=0.4);
axarr[1, 0].set_title('$n_4^\mathrm{iso}$')
axarr[1, 1].hist(Sb_non_poiss_post[0], histtype='stepfilled', color='plum', bins=np.linspace(30,80,30),alpha=0.4);
axarr[1, 1].set_title('$S_b^{(1), \mathrm{iso}}$')
axarr[1, 2].hist(Sb_non_poiss_post[1], histtype='stepfilled', color='plum', bins=np.linspace(1,30,30),alpha=0.4);
axarr[1, 2].set_title('$S_b^{(2), \mathrm{iso}}$')
axarr[1, 3].hist(Sb_non_poiss_post[2], histtype='stepfilled', color='plum', bins=np.linspace(0.1,1,30),alpha=0.4);
axarr[1, 3].set_title('$S_b^{(3), \mathrm{iso}}$')
plt.setp(axarr[0, 0], xticks=[x*.00005 for x in range(3)])
plt.setp(axarr[1, 0], xticks=[x*1.-2.0 for x in range(4)])
plt.setp(axarr[1, 3], xticks=[x*0.2+0.2 for x in range(5)])
plt.setp([a.get_yticklabels() for a in axarr[:, 0]], visible=False);
plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False);
plt.setp([a.get_yticklabels() for a in axarr[:, 2]], visible=False);
plt.setp([a.get_yticklabels() for a in axarr[:, 3]], visible=False);
plt.tight_layout()
"""
Explanation: Non-poissonian parameters
End of explanation
"""
|
prasants/pyds | 06.List_it_out.ipynb | mit | final = "It is with a heavy heart that I take up my pen to write these the last words in which I shall ever record the singular gifts by which my friend Mr. Sherlock Holmes was distinguished."
final = final.replace(".", "")
final = final.split(" ")
final
type(final)
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Lists" data-toc-modified-id="Lists-1"><span class="toc-item-num">1 </span>Lists</a></div><div class="lev2 toc-item"><a href="#Final-Problem" data-toc-modified-id="Final-Problem-11"><span class="toc-item-num">1.1 </span>Final Problem</a></div><div class="lev2 toc-item"><a href="#Indexing-and-Slicing-:-How-to-access-parts-of-a-list" data-toc-modified-id="Indexing-and-Slicing-:-How-to-access-parts-of-a-list-12"><span class="toc-item-num">1.2 </span>Indexing and Slicing : How to access parts of a list</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-121"><span class="toc-item-num">1.2.1 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#List-Functions" data-toc-modified-id="List-Functions-13"><span class="toc-item-num">1.3 </span>List Functions</a></div><div class="lev2 toc-item"><a href="#Finding-Items" data-toc-modified-id="Finding-Items-14"><span class="toc-item-num">1.4 </span>Finding Items</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-141"><span class="toc-item-num">1.4.1 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Numerical-Functions-with-Lists" data-toc-modified-id="Numerical-Functions-with-Lists-15"><span class="toc-item-num">1.5 </span>Numerical Functions with Lists</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-151"><span class="toc-item-num">1.5.1 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-16"><span class="toc-item-num">1.6 </span>Exercise</a></div><div class="lev1 toc-item"><a href="#Sets" data-toc-modified-id="Sets-2"><span class="toc-item-num">2 </span>Sets</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-201"><span class="toc-item-num">2.0.1 </span>Exercise</a></div><div class="lev1 toc-item"><a href="#Tuples" data-toc-modified-id="Tuples-3"><span class="toc-item-num">3 
</span>Tuples</a></div><div class="lev2 toc-item"><a href="#Tuples-and-Numbers" data-toc-modified-id="Tuples-and-Numbers-31"><span class="toc-item-num">3.1 </span>Tuples and Numbers</a></div><div class="lev1 toc-item"><a href="#Dictionaries" data-toc-modified-id="Dictionaries-4"><span class="toc-item-num">4 </span>Dictionaries</a></div><div class="lev2 toc-item"><a href="#Common-Dictionary-Operations" data-toc-modified-id="Common-Dictionary-Operations-41"><span class="toc-item-num">4.1 </span>Common Dictionary Operations</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-42"><span class="toc-item-num">4.2 </span>Exercise</a></div>
Collections of Items: Lists, Sets, Tuples, Dictionaries
# Lists
Python has many ways to store a collection of similar or dissimilar items.
You have already encountered lists, even though you haven't been formally introduced.
## Final Problem
End of explanation
"""
all_info = ["Sherlock", True, 42]
print(all_info)
print(type(all_info))
more_info = ["Watson", "Hooley", all_info]
print(more_info)
print(type(more_info))
"""
Explanation: <img src="images/pylists.jpg">
Lists are a great way to store items of multiple types.
End of explanation
"""
all_info = ["Sherlock", True, 42]
all_info[0]
all_info[1]
more_info = ["Watson", "Hooley", all_info]
more_info[-1]
more_info[-1][0]
a_list = [1,2,3,4,5,6,7,8,9,10]
a_list[0]
a_list[2:4]
a_list[1:]
a_list[-3:]
a_list[3:6]
a_list[0:3]
"""
Explanation: Indexing and Slicing : How to access parts of a list
End of explanation
"""
# Your code here
"""
Explanation: Exercise
Print out the second and last items of the list within the list more_info
all_info = ["Sherlock", True, 42]
more_info = ["Watson", "Hooley", all_info]
End of explanation
"""
incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
type(incubees)
incubees.append("Jian Yang")
incubees
brogrammers = ["Aly", "Jason"]
incubees.append(brogrammers)
incubees
incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
incubees.extend(brogrammers)
incubees
"""
Explanation: List Functions
These come in handy when working with larger data sets where you need to add to your existing information base.
End of explanation
"""
incubees1 = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
brogrammers1 = ["Aly", "Jason"]
incubees1.append(brogrammers1)
print(incubees1)
print(len(incubees1))
incubees2 = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
brogrammers2 = ["Aly", "Jason"]
incubees2.extend(brogrammers2)
print(incubees2)
print(len(incubees2))
incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
incubees.sort()
print(incubees)
incubees.reverse()
print(incubees)
incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
brogrammers = ["Aly", "Jason"]
print(incubees + brogrammers)
new_list = incubees + brogrammers
print (new_list*2)
"""
Explanation: Let's look at this again, comparing the two approaches.
End of explanation
"""
incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
incubees.index("Richard")
incubees.index("Dinesh")
incubees.index("Jian Yang")
incubees2 = incubees*2
incubees2.count("Richard")
incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"]
incubees
incubees.insert(0, "Jian Yang")
incubees
incubees.pop(0)
incubees
"""
Explanation: As a general rule though, if you're adding a single item, use append. Try what happens if you use the .extend method to add a single item, like "Jian Yang" to our list of original Incubees.
Finding Items
End of explanation
"""
passage = """It is with a heavy heart that I take up my pen to write these the last words in which I shall ever record the singular gifts by which my friend Mr. Sherlock Holmes was distinguished. In an incoherent and, as I deeply feel, an entirely inadequate fashion, I have endeavored to give some account of my strange experiences in his company from the chance which first brought us together at the period of the “Study in Scarlet,” up to the time of his interference in the matter of the “Naval Treaty”—an interference which had the unquestionable effect of preventing a serious international complication. It was my intention to have stopped there, and to have said nothing of that event which has created a void in my life which the lapse of two years has done little to fill. My hand has been forced, however, by the recent letters in which Colonel James Moriarty defends the memory of his brother, and I have no choice but to lay the facts before the public exactly as they occurred. I alone know the absolute truth of the matter, and I am satisfied that the time has come when no good purpose is to be served by its suppression. As far as I know, there have been only three accounts in the public press: that in the Journal de Geneve on May 6th, 1891, the Reuter’s despatch in the English papers on May 7th, and finally the recent letter to which I have alluded. Of these the first and second were extremely condensed, while the last is, as I shall now show, an absolute perversion of the facts. It lies with me to tell for the first time what really took place between Professor Moriarty and Mr. Sherlock Holmes.
It may be remembered that after my marriage, and my subsequent start in private practice, the very intimate relations which had existed between Holmes and myself became to some extent modified. He still came to me from time to time when he desired a companion in his investigation, but these occasions grew more and more seldom, until I find that in the year 1890 there were only three cases of which I retain any record. During the winter of that year and the early spring of 1891, I saw in the papers that he had been engaged by the French government upon a matter of supreme importance, and I received two notes from Holmes, dated from Narbonne and from Nimes, from which I gathered that his stay in France was likely to be a long one. It was with some surprise, therefore, that I saw him walk into my consulting-room upon the evening of April 24th. It struck me that he was looking even paler and thinner than usual."""
# Your code below
"""
Explanation: Exercise
Convert the passage below into a list after replacing all punctuation
Who is mentioned more times - Sherlock or Moriarty?
How many times does the author refer to himself? (Hint: Count the use of the word 'my')
End of explanation
"""
list_a = [21,14,7,19,15,47,42,55,97,92]
len(list_a)
sum(list_a)
max(list_a)
min(list_a)
spread = max(list_a) - min(list_a)  # use a name other than range, which would shadow the built-in
print(spread)
"""
Explanation: Numerical Functions with Lists
Lists are useful for more than just processing text.
End of explanation
"""
# Your code below:
"""
Explanation: Exercise
list_a = [21,14,7,19,15,47,42,55,97,92]
* Find the average of list_a
* Find the median of list_a
If you need a refresher on how to find the average (or mean) and median, we will cover that later.
End of explanation
"""
titanic = """CAPE RACE, N.F., April 15. -- The White Star liner Olympic reports by wireless this evening that the Cunarder Carpathia reached, at daybreak this morning, the position from which wireless calls for help were sent out last night by the Titanic after her collision with an iceberg. The Carpathia found only the lifeboats and the wreckage of what had been the biggest steamship afloat.
The Titanic had foundered at about 2:20 A.M., in latitude 41:46 north and longitude 50:14 west. This is about 30 minutes of latitude, or about 34 miles, due south of the position at which she struck the iceberg. All her boats are accounted for and about 655 souls have been saved of the crew and passengers, most of the latter presumably women and children. There were about 1,200 persons aboard the Titanic.
The Leyland liner California is remaining and searching the position of the disaster, while the Carpathia is returning to New York with the survivors.
It can be positively stated that up to 11 o'clock to-night nothing whatever had been received at or heard by the Marconi station here to the effect that the Parisian, Virginian or any other ships had picked up any survivors, other than those picked up by the Carpathia.
First News of the Disaster.
The first news of the disaster to the Titanic was received by the Marconi wireless station here at 10:25 o'clock last night (as told in yesterday's New York Times.) The Titanic was first heard giving the distress signal "C. Q. D.," which was answered by a number of ships, including the Carpathia, the Baltic and the Olympic. The Titanic said she had struck an iceberg and was in immediate need of assistance, giving her position as latitude 41:46 north and longitude 50:14 west.
At 10:55 o'clock the Titanic reported she was sinking by the head, and at 11:25 o'clock the station here established communication with the Allan liner Virginian, from Halifax to Liverpool, and notified her of the Titanic's urgent need of assistance and gave her the Titanic's position.
The Virginian advised the Marconi station almost immediately that she was proceeding toward the scene of the disaster.
At 11:36 o'clock the Titanic informed the Olympic that they were putting the women off in boats and instructed the Olympic to have her boats read to transfer the passangers.
The Titanic, during all this time, continued to give distress signals and to announce her position.
The wireless operator seemed absolutely cool and clear-headed, his sending throughout being steady and perfectly formed, and the judgment used by him was of the best.
The last signals heard from the Titanic were received at 12:27 A.M., when the Virginian reported having heard a few blurred signals which ended abruptly."""
# Your code here
"""
Explanation: Exercise
Let's combine lists, string methods and a bit of logic.
Strip the passage of all punctuation
How many times does the word 'Titanic' appear?
How many times does 'Carpathia' appear?
Slightly trickier question - how many words does each paragraph have? (Hint: Split the passage at "\n", then count the words for each paragraph)
End of explanation
"""
set_a = {1,2,3,4,5}
print(set_a)
"""
Explanation: Sets
Here's how you make a set.
End of explanation
"""
set_a = {1,2,3,4,5}
5 in set_a
6 in set_a
"""
Explanation: That's it. As simple as that.
So why do we have sets, as opposed to just using lists?
Sets are really fast when it comes to checking for membership. Here's how:
End of explanation
"""
set_b = {1,2,3}
print(set_a - set_b)
print(set_a.difference(set_b))
"""
Explanation: But wait, there's more!
set_a.add(x): add a value to a set
set_a.remove(x): remove a value from a set
set_a - set_b: return values in a but not in b.
set_a.difference(set_b): same as set_a - set_b
set_a | set_b: elements in a or b. Equivalent to set_a.union(set_b)
set_a & set_b: elements in both a and b. Equivalent to set_a.intersection(set_b)
set_a ^ set_b: elements in a or b but not both. Equivalent to set_a.symmetric_difference(set_b)
set_a <= set_b: tests whether every element in set_a is in set_b. Equivalent to set_a.issubset(set_b)
End of explanation
"""
pf1 = {"AA", "AAC", "AAP", "ABB", "AC", "ACCO", "AAPL", "AZO", "ZEN", "PX", "GS"}
pf2 = {"AA", "GRUB", "AAC", "GWR", "AAP", "C", "AC", "CVS"}
# Find the stocks in either pf1 or pf2, but not in both.
# Find the stocks in both portfolios
# Create a third portfolio named pf3, which has pf1 and pf2 combined
# Market conditions have changed, let's drop GRUB and CVS from pf3 and add IBM
"""
Explanation: Exercise
An analyst is looking at two portfolios, and wants to identify the unique ones.
pf1 = {"AA", "AAC", "AAP", "ABB", "AC", "ACCO", "AAPL", "AZO", "ZEN", "PX", "GS"}
pf2 = {"AA", "GRUB", "AAC", "GWR", "AAP", "C", "AC", "CVS"}
Write code for the following:
* Find the stocks in either pf1 or pf2, but not in both. (Hint: Symmetric Difference)
* Find the stocks in both portfolios (Hint: Intersection)
* Create a third portfolio named pf3, which has pf1 and pf2 combined (Hint: Union)
* Market conditions have changed, let's drop GRUB and CVS from pf3 and add IBM (Hint: set_a.remove(x) and set_a.add(x) )
End of explanation
"""
children = ("Meadow", "Anthony")
capos = ("Paulie", "Silvio", "Christopher", "Furio","Richie")
len(children)
len(capos)
capos
capos = list(capos)
capos
capos.append("Bobby")
capos
capos = tuple(capos)
capos
"""
Explanation: <img src="images/sets_easy.jpg">
Tuples
Pronounced too-puhl
We will keep this section very short, but will revisit tuples later once we have introduced some more advanced concepts.
For now, remember that a tuple is used when the values are fixed. In Python terms, it is what is referred to as 'immutable'.
<img src="images/tuples.jpg">
Examples:
End of explanation
"""
monthly_high = (115.20, 113.60, 117.15, 120.90, 118.25)
print("Monthly high is", max(monthly_high))
print("Monthly low is", min(monthly_high))
print("Range:", max(monthly_high)-min(monthly_high))
"""
Explanation: Tuples and Numbers
End of explanation
"""
dict_1 = {"a":1, "b":2, "c":3, "d":4}
print(dict_1)
fav_book = {
"title": "Crime and Punishment",
"author": "Fyodor Dostoyevsky",
"price": 10.95,
"pages": 400,
"source": "Amazon",
"awesome": True
}
fav_book["title"]
fav_book["awesome"]
# Rarely used in this manner
fav_book.get("price")
fav_book["weight"] = 42
print(fav_book)
"awesome" in fav_book
# Doesn't work! The in operator checks keys, not values, so this returns False
True in fav_book
"""
Explanation: <img src="images/commando.gif"> <img src="images/czech.gif">
Dictionaries
Dictionaries contain a key and a value. They are also referred to as dicts, maps, or hashes.
End of explanation
"""
dict_1 = {"a":1, "b":2, "c":3, "d":4}
print(dict_1)
dict_1.keys()
dict_1.values()
dict_1.pop("d")
print(dict_1)
fav_book.pop("awesome")
fav_book
"""
Explanation: Common Dictionary Operations
End of explanation
"""
a_dict = {"a":"e", "b":5, "c":3, "c": 4}
b_dict = {"c":5, "d":6}
a_set = set(a_dict)
b_set = set(b_dict)
a_set.intersection(b_set)
"""
Explanation: Exercise
Find the keys that the two dictionaries have in common.
End of explanation
"""
|
kubeflow/pipelines | components/gcp/dataflow/launch_template/sample.ipynb | apache-2.0 | %%capture --no-stderr
!pip3 install kfp --upgrade
"""
Explanation: Name
Data preparation by using a template to submit a job to Cloud Dataflow
Labels
GCP, Cloud Dataflow, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.
Details
Intended use
Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.
Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :----------| :----------| :---------- | :----------|
project_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | |
gcs_path | The path to a Cloud Storage bucket containing the job creation template. It must be a valid Cloud Storage URL beginning with 'gs://'. | No | GCSPath | | |
launch_parameters | The parameters that are required to launch the template. The schema is defined in LaunchTemplateParameters. The parameter jobName is replaced by a generated name. | Yes | Dict | A JSON object which has the same structure as LaunchTemplateParameters | None |
location | The regional endpoint to which the job request is directed. | Yes | GCPRegion | | None |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. | Yes | GCSPath | | None |
validate_only | If True, the request is validated but not executed. | Yes | Boolean | | False |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |
Input data schema
The input gcs_path must contain a valid Cloud Dataflow template. The template can be created by following the instructions in Creating Templates. You can also use Google-provided templates.
Output
Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.
Caution & requirements
To use the component, the following requirements must be met:
- Cloud Dataflow API is enabled.
- The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
- The Kubeflow user service account is a member of:
- roles/dataflow.developer role of the project.
- roles/storage.objectViewer role of the Cloud Storage Object gcs_path.
- roles/storage.objectCreator role of the Cloud Storage Object staging_dir.
Detailed description
You can execute the template locally by following the instructions in Executing Templates. See the sample code below to learn how to execute the template.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
"""
import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml')
help(dataflow_template_op)
"""
Explanation: Load the component using KFP SDK
End of explanation
"""
!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt
"""
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code.
In this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input:
End of explanation
"""
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Template'
OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)
"""
Explanation: Set sample parameters
End of explanation
"""
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch template pipeline',
description='Dataflow launch template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
gcs_path = 'gs://dataflow-templates/latest/Word_Count',
launch_parameters = json.dumps({
'parameters': {
'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt',
'output': OUTPUT_PATH
}
}),
location = '',
validate_only = 'False',
staging_dir = GCS_WORKING_DIR,
wait_interval = 30):
dataflow_template_op(
project_id = project_id,
gcs_path = gcs_path,
launch_parameters = launch_parameters,
location = location,
validate_only = validate_only,
staging_dir = staging_dir,
wait_interval = wait_interval)
"""
Explanation: Example pipeline that uses the component
End of explanation
"""
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
"""
Explanation: Compile the pipeline
End of explanation
"""
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
!gsutil cat $OUTPUT_PATH*
"""
Explanation: Inspect the output
End of explanation
"""
|
sspickle/sci-comp-notebooks | P05-DemonAlgorithm.ipynb | mit | import matplotlib.pyplot as pl
import numpy as np
#
# rand() returns a single random number:
#
print(np.random.rand())
#
# hist plots a histogram of an array of numbers
#
print(pl.hist(np.random.normal(size=1000)))
m=28*1.67e-27 # mass of a molecule (e.g., Nitrogen)
g=9.8 # grav field strength
kb=1.38e-23 # Boltzmann constant
demonE = 0.0 # initial demon energy
N=10000 # number of molecules
M=400000 # number of iterations
h=20000.0 # height scale
def setup(N=100,L=1.0):
y=L*np.random.rand(N) # put N particles at random heights (y) between 0 and L
return y
yarray = setup(N=1000,L=2.0)
pl.hist(yarray)
def shake(y, demonE, delta=0.1):
"""
Pass in the current demon energy as an argument.
delta is the size of change in y to generate, more or less.
    randomly choose a particle and change its position slightly (by about delta)
return the new demon energy and a boolean (was the change accepted?)
"""
ix = int(np.random.rand()*len(y))
deltaY = delta*np.random.normal()
deltaE = deltaY*m*g
accept=False
if deltaE < demonE and (y[ix]+deltaY>0):
demonE -= deltaE # take the energy from the demon, or give it if deltaE<0.
y[ix] += deltaY
accept=True
return demonE, accept
y = setup(N,L=h)
acceptCount = 0
demonList = []
for i in range(M):
demonE,accept = shake(y, demonE, delta=0.2*h)
demonList.append(demonE)
if accept:
acceptCount += 1
pl.title("Distribution of heights")
pl.xlabel("height (m)")
pl.ylabel("number in height range")
pl.hist(y,bins=40)
print(100.0*acceptCount/M, "percent accepted")
print("Averge height=%4.3fm" % (y.sum()/len(y),))
#
# Build a histogram of Demon Energies
#
pl.title("Distribution of Demon Energies")
pl.xlabel("Energy Ranges (J)")
pl.ylabel("Number in Energy Ranges")
ns, bins, patches = pl.hist(demonList, bins=60)
"""
Explanation: The Demon Algorithm
There are a number of approaches to complex problems involving large numbers of interactions where the objective is to find the "average" behavior of the system over a long period of time. We've seen that we can integrate Newton's 2nd Law to see the precise behavior of a multiparticle system over time. When we have a handful of objects in a system this works well. However, if we have thousands or millions of particles, it's not practical. Looking at "average" behavior however glosses over the precision of following each interaction and attempts only to see what happens on a less fine-grained scale. This means we sacrifice the hope of getting a detailed picture of a microscopic physical process, but achieve the reward of a more general understanding of the large scale consequences of that process. The demon algorithm is such an approach. It's a simple way to simulate the random exchange of energy between components of a system over time. Here's the basic idea:
Suppose we have a demon..
1 Make a small change to the system.
2 Compute $\Delta E$. If $\Delta E<0$ give it to the “demon” and accept the change.
3 If $\Delta E>0$ and the demon has that much energy available, accept the change and take the energy from the demon.
4 If the demon doesn’t have that much energy, then reject the change.
Example Problem
Compute the height distribution of nitrogen molecules near the Earth's surface. Assume T=const. and that the weight of a molecule is constant.
$$ PE(y) = m g y $$
so $\Delta E$ is just $m g \Delta y$.
Below is a sample program that uses the demon algorithm to approach this problem.
End of explanation
"""
#
# Use a "curve fit" to find the temperature of the demon
#
from scipy.optimize import curve_fit
def fLinear(x, m, b):
return m*x + b
energies = (bins[:-1]+bins[1:])/2.0
xvals = np.array(energies) # fit log(n) vs. energy
yvals = np.log(np.array(ns))
sig = 1.0/np.sqrt(np.array(ns))
#
# make initial estimates of slope and intercept.
#
m0 = (yvals[-1]-yvals[0])/(xvals[-1]-xvals[0])
b0 = yvals[0]-m0*xvals[0]
popt, pcov = curve_fit(fLinear, xvals, yvals, p0=(m0, b0), sigma=sig)
m=popt[0] # slope
dm=np.sqrt(pcov[0,0]) # sqrt(variance(slope))
b=popt[1] # int
db=np.sqrt(pcov[1,1]) # sqrt(variance(int))
Temp=-1.0/(m*kb) # temperature
dT = abs(dm*Temp/m)  # approx uncertainty in temp
print("slope=", m, "+/-", dm )
print("intercept=", b, "+/-", db)
print("Temperature=", Temp, "+/-", dT, "K")
pl.title("Demon Energy Distribution")
pl.xlabel("Energy (J)")
pl.ylabel("log(n) (number of demon visit to energy)")
pl.errorbar(xvals, yvals, sig, fmt='r.')
pl.plot(xvals,yvals,'b.',label="Demon Energies")
pl.plot(xvals,fLinear(xvals, m, b),'r-', label="Fit")
pl.legend()
"""
Explanation: Demonic Thermometer
You can easily see that the demon acts like a small thermometer. According to the Maxwell-Boltzmann distribution, the distribution of the demon's energy should go like:
$$P(E) = P_0 e^{-E/k_B T}$$
Where $P_0$ is basically the probability of having an energy of zero. (Actually, maybe a better way to think of it is as a normalization constant that's determined by the requirement that the total probability to have any energy is 1.0). The histogram of demon energies tells us the number of times the demon had various values of energy during the calculation. This is proportional to the probability that the demon had various energies. We can fit that probability to an exponential curve (or the log of the probability to a straight line) and from the slope of the line deduce the temperature!
See below how the code does exactly this.
End of explanation
"""
|
quantopian/research_public | notebooks/data/quandl.bundesbank_bbk01_wt5511/notebook.ipynb | apache-2.0 | # import the dataset
from quantopian.interactive.data.quandl import bundesbank_bbk01_wt5511 as dataset
# Since this data is public domain and provided by Quandl for free, there is no _free version of this
# data set, as found in the premium sets. This import gets you the entirety of this data set.
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
"""
Explanation: Quandl: Price of Gold
In this notebook, we'll take a look at the bundesbank_bbk01_wt5511 data set, available on Quantopian. This dataset spans from 1968 through the current day. It contains the value for the price of gold, as sourced by the Deutsche Bundesbank Data Repository. We access this data via the API provided by Quandl. See Quandl's detail page for this set for more information.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
"""
gold_df = odo(dataset, pd.DataFrame)
gold_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Price of Gold")
plt.title("Gold Price")
plt.legend().set_visible(False)
"""
Explanation: Let's go plot it for fun.
End of explanation
"""
small_df = odo(dataset[dataset.asof_date >= '2002-01-01'], pd.DataFrame)
small_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Price of Gold")
plt.title("Gold Price")
plt.legend().set_visible(False)
"""
Explanation: The data points between 2007 and 2015 are missing because the number of results is limited to 10,000. Let's narrow the timeframe to get a complete picture of recent prices.
End of explanation
"""
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold
"""
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different datasets work differently, but in the case of this data you can add it to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(gold.value.latest, 'value')
End of explanation
"""
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (gold,):
_print_fields(data)
print "---------------------------------------------------\n"
"""
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
"""
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(gold.value.latest, 'value')
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
"""
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
"""
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
# Import the datasets available
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import bundesbank_bbk01_wt5511 as gold
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Add pipeline factors
pipe.add(gold.value.latest, 'value')
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
"""
Explanation: Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation
"""
import os
import zipfile
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.">
<span style="text-align: right; direction: rtl; float: right;">חריגות – חלק 2</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
במחברת הקודמת התמודדנו לראשונה עם חריגות.<br>
למדנו לפרק הודעות שגיאה לרכיביהן ולחלץ מהן מידע מועיל, העמקנו בדרך הפעולה של Traceback ודיברנו על סוגי החריגות השונים בפייתון.<br>
ראינו לראשונה את מילות המפתח <code>try</code> ו־<code>except</code>, ולמדנו כיצד להשתמש בהן כדי לטפל בחריגות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
דיברנו על כך שטיפול בחריגות עשוי למנוע את קריסת התוכנית, וציינו גם שכדאי לבחור היטב באילו חריגות לטפל.<br>
הבהרנו שאם נטפל בחריגות ללא אבחנה, אנחנו עלולים ליצור "תקלים שקטים" שפייתון לא תדווח לנו עליהם ויהיו קשים לאיתור.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לבסוף, הצגנו כיצד השגיאות בפייתון הן בסך הכול מופע שנוצר ממחלקה שמייצגת את סוג החריגה.<br>
הראינו כיצד לקבל גישה למופע הזה מתוך ה־<code>except</code>, וראינו את עץ הירושה המרשים של סוגי החריגות בפייתון.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
במחברת זו נמשיך ללמוד על טיפול בחריגות.<br>
עד סוף המחברת תוכלו להתריע בעצמכם על חריגה וליצור סוגי חריגות משל עצמכם.<br>
זאת ועוד, תלמדו על יכולות מתקדמות יותר הנוגעות לטיפול בחריגות בפייתון, ועל הרגלי עבודה נכונים בכל הקשור בעבודה עם חריגות.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ניקוי משטחים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לעיתים חשוב לנו לוודא ששורת קוד תתבצע בכל מקרה, גם אם הכול סביב עולה באש.<br>
לרוב, זה קורה כאשר אנחנו פותחים משאב כלשהו (קובץ, חיבור לאתר אינטרנט) וצריכים למחוק או לסגור את המשאב בסוף הפעולה.<br>
במקרים כאלו, חשוב לנו שהשורה תתבצע אפילו אם הייתה התרעה על חריגה במהלך הרצת הקוד.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ננסה, לדוגמה, לכווץ את כל התמונות בתיקיית images לארכיון בעזרת המודול <var>zipfile</var>.<br>
אין מה לחשוש – המודול מובן יחסית וקל לשימוש.<br>
כל שנצטרך לעשות זה ליצור מופע של <var>ZipFile</var> ולהפעיל עליו את הפעולה <var>write</var> כדי לצרף לארכיון קבצים.<br>
אם אתם מרגישים נוח, זה הזמן לכתוב את הפתרון לכך בעצמכם. אם לא, ודאו שאתם מבינים היטב את התאים הבאים.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נתחיל ביבוא המודולים הרלוונטיים:
</p>
End of explanation
"""
def get_file_paths_from_folder(folder):
"""Yield paths for all the files in `folder`."""
for file in os.listdir(folder):
path = os.path.join(folder, file)
yield path
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
וככלי עזר, נכתוב generator שמקבל כפרמטר נתיב לתיקייה, ומחזיר את הנתיב לכל הקבצים שבה:
</p>
End of explanation
"""
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
our_zipfile.close()
zip_folder('images')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עכשיו נכתוב פונקציה שיוצרת קובץ ארכיון חדש, מוסיפה אליו את הקבצים שבתיקיית התמונות וסוגרת את קובץ הארכיון:
</p>
End of explanation
"""
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
try:
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
except Exception as error:
print(f"Critical failure occurred: {error}.")
our_zipfile.close()
zip_folder('NON_EXISTING_DIRECTORY')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אבל מה יקרה אם תיקיית התמונות גדולה במיוחד ונגמר לנו המקום בזיכרון של המחשב?<br>
מה יקרה אם אין לנו גישה לאחד הקבצים והקריאה של אותו קובץ תיכשל?<br>
נטפל במקרים שבהם פייתון תתריע על חריגה:
</p>
End of explanation
"""
try:
1 / 0
finally:
print("+-----------------+")
print("| Executed anyway |")
print("+-----------------+")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
התא למעלה מפר עיקרון חשוב שדיברנו עליו:<br>
עדיף שלא לתפוס את החריגה אם לא יודעים בדיוק מה הסוג שלה, למה היא התרחשה וכיצד לטפל בה.<br>
אבל רגע! אם לא נתפוס את החריגה, כיצד נוודא שהקוד שלנו סגר את קובץ הארכיון באופן מסודר לפני שהתוכנה קרסה?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
זה הזמן להכיר את מילת המפתח <code>finally</code>, שבאה אחרי ה־<code>except</code> או במקומו.<br>
השורות שכתובות ב־<code>finally</code> יתבצעו <em>תמיד</em>, גם אם הקוד קרס בגלל חריגה.<br>
שימוש ב־<code>finally</code> ייראה כך:
</p>
End of explanation
"""
def stubborn_finally_example():
try:
return True
finally:
print("This line will be executed anyway.")
stubborn_finally_example()
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
שימו לב שאף על פי שהקוד שנמצא בתוך ה־<code>try</code> קרס, ה־<code>finally</code> התבצע.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
למעשה, <code>finally</code> עקשן כל כך שהוא יתבצע אפילו אם היה <code>return</code>:
</p>
End of explanation
"""
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
try:
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
finally:
our_zipfile.close()
print(f"Is our_zipfiles closed?... {our_zipfile}")
zip_folder('images')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נשתמש במנגנון הזה כדי לוודא שקובץ הארכיון באמת ייסגר בסופו של דבר, ללא תלות במה שיקרה בדרך:
</p>
End of explanation
"""
zip_folder('NO_SUCH_DIRECTORY')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ונבדוק שזה יעבוד גם אם נספק תיקייה לא קיימת, לדוגמה:
</p>
End of explanation
"""
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
try:
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
except FileNotFoundError as err:
print(f"Critical error: {err}.\nArchive is probably incomplete.")
finally:
our_zipfile.close()
print(f"Is our_zipfiles closed?... {our_zipfile}")
zip_folder('NO_SUCH_DIRECTORY')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
יופי! עכשיו כשראינו התרעה על חריגת <var>FileNotFoundError</var> כשמשתמש הכניס נתיב לא תקין לתיקייה, ראוי שנטפל בה:
</p>
End of explanation
"""
def read_file(path):
try:
princess = open(path, 'r')
except FileNotFoundError as err:
print(f"Can't find file '{path}'.\n{err}.")
return None
else:
text = princess.read()
princess.close()
return text
print(read_file('resources/castle.txt'))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
יותר טוב!<br>
היתרון בצורת הכתיבה הזו הוא שגם אם תהיה התרעה על חריגה שאינה מסוג <var>FileNotFoundError</var> והתוכנה תקרוס,<br>
נוכל להיות בטוחים שקובץ הארכיון נסגר כראוי.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הכול בסדר</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
עד כה למדנו על 3 מילות מפתח שקשורות במנגנון לטיפול בחריגות של פייתון: <code>try</code>, <code>except</code> ו־<code>finally</code>.<br>
אלו רעיונות מרכזיים בטיפול בחריגות, ותוכלו למצוא אותם בצורות כאלו ואחרות בכל שפת תכנות עכשווית שמאפשרת טיפול בחריגות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אלא שבפייתון ישנה מילת מפתח נוספת שהתגנבה למנגנון הטיפול בחריגות: <code>else</code>.<br>
תחת מילת המפתח הזו יופיעו פעולות שנרצה לבצע רק אם הקוד שב־<code>try</code> רץ במלואו בהצלחה,<br>
או במילים אחרות: באף שלב לא הייתה התרעה על חריגה; אף לא <code>except</code> אחד התבצע.
</p>
End of explanation
"""
def read_file(path):
try:
princess = open(path, 'r')
text = princess.read()
princess.close()
return text
except FileNotFoundError as err:
print(f"Can't find file '{path}'.\n{err}.")
return None
print(read_file('resources/castle.txt'))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
"אבל רגע", ישאלו חדי העין מביניכם.<br>
"הרי המטרה היחידה של <code>else</code> היא להריץ קוד אם הקוד שב־<code>try</code> רץ עד סופו,<br>
אז למה שלא פשוט נכניס אותו כבר לתוך ה־<code>try</code>, מייד אחרי הקוד שרצינו לבצע?"<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
וזו שאלה שיש בה היגיון רב –<br>
הרי קוד שקורס ב־<code>try</code> ממילא גורם לכך שהקוד שנמצא אחריו ב־<code>try</code> יפסיק לרוץ.<br>
אז למה לא פשוט לשים שם את קוד ההמשך? מה רע בקטע הקוד הבא?
</p>
End of explanation
"""
def read_file(path):
try:
princess = open(path, 'r')
text = princess.read()
    except (FileNotFoundError, PermissionError) as err:
        print(f"Can't read file '{path}'.\n{err}.")
text = None
else:
princess.close()
finally:
return text
print(read_file('resources/castle.txt3'))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ההבדל הוא רעיוני בעיקרו.<br>
המטרה שלנו היא להעביר את הרעיון שמשתקף מהקוד שלנו לקוראו בצורה נהירה יותר, קצת כמו בספר טוב.<br>
מילת המפתח <code>else</code> תעזור לקורא להבין איפה חשבנו שעשויה להיות ההתרעה על החריגה,<br>
ואיפה אנחנו רוצים להמשיך ולהריץ קוד פייתון שקשור לאותו קוד.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ישנו יתרון נוסף בהפרדת הקוד ל־<code>try</code> ול־<code>else</code> –<br>
השיטה הזו עוזרת לנו להפריד בין הקוד שבו ייתפסו התרעות על חריגות, לבין הקוד שירוץ אחריו ושבו לא יטופלו חריגות.<br>
כיוון שהשורות שנמצאות בתוך ה־<code>else</code> לא נמצאות בתוך ה־<code>try</code>, פייתון לא תתפוס התרעות על חריגות שהתרחשו במהלך הרצתן.<br>
שיטה זו עוזרת לנו ליישם את כלל האצבע שמורה לנו לתפוס התרעות על חריגות באופן ממוקד – <br>
בעזרת <code>else</code> לא נתפוס התרעות על חריגות בקוד שבו לא התכוונו מלכתחילה לתפוס התרעות על חריגות.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה בשם <var>print_item</var> שמקבלת כפרמטר ראשון רשימה, וכפרמטר שני מספר ($n$).<br>
הפונקציה תדפיס את האיבר ה־$n$־י ברשימה.<br>
טפלו בכל ההתרעות על חריגות שעלולות להיווצר בעקבות הרצת הפונקציה.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לסיכום, ניצור קטע קוד שמשתמש בכל מילות המפתח שלמדנו בהקשר של טיפול בחריגות:
</p>
End of explanation
"""
raise ValueError("Just an example.")
"""
Explanation: <figure>
<img src="images/try_except_flow_full.svg?v=5" style="width: 700px; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה יש תרשים זרימה המציג כיצד פייתון קוראת את הקוד במבנה try-except-else-finally. התרשים בסגנון קומיקסי עם אימוג'ים. החץ ממנו נכנסים לתרשים הוא 'התחל ב־try' עם סמלון של דגל מרוצים, שמוביל לעיגול שבו כתוב 'הרץ את השורה המוזחת הבאה בתוך ה־try'. מתוך עיגול זה יש חץ לעצמו, שבו כתוב 'אין התראה על חריגה' עם סמלון של וי ירוק, וחץ נוסף שבו כתוב 'אין שורות נוספות ב־try' עם סמלון של וי ירוק שמוביל לעיגול 'הרץ את השורות המוזחות בתוך else, אם יש כזה'. מעיגול זה יוצא חץ נוסף ל'הרץ את השורות המוזחות בתוך finally, אם יש כאלו'. מהעיגול האחרון שהוזכר יוצא חץ כלפי מטה לכיוון מלל עם דגל מרוצים שעליו כתוב 'סוף'. מהעיגול הראשון שהוזכר, 'הרץ את השורה המוזחת הבאה בתוך ה־try', יוצא גם חץ שעליו כתוב 'התרעה על חריגה' עם סמלון של פיצוץ, ומוביל לעיגול שבו כתוב 'חפש except עם סוג החריגה'. מעיגול זה יוצאים שני חצים: הראשון 'לא קיים' (החץ אדום מקווקו), עם סמלון של איקס אדום שמוביל לעיגול ללא מוצא בו כתוב 'זרוק התרעה על חריגה', שמוביל (בעזרת חץ אדום מקווקו) לשרשרת עיגולים ללא מוצא. בראשון כתוב 'הרץ את השורות המוזחות בתוך finally, אם יש כזה', והוא מצביע בעזרת חץ אדום מקווקו על עיגול נוסף בו כתוב 'חדול מהרצת התוכנית'. על החץ השני שיוצא מ'חפש except עם סוג החריגה' כתוב 'קיים' עם סמלון של וי ירוק, והוא מוביל לעיגול 'הרץ את השורות המוזחות בתוך ה־except'. ממנו יש חץ לעיגול שתואר מקודם, 'הרץ את השורות המוזחות בתוך ה־finally, אם יש כזה', ומוביל לכיתוב 'סוף הטיפול בשגיאות. המשך בהרצת התוכנית.' עם דגל מרוץ. כל החצים באיור ירוקים פרט לחצים שהוזכרו כאדומים."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">
תרשים זרימה המציג כיצד פייתון קוראת את הקוד במבנה <code>try</code>, <code>except</code>, <code>else</code>, <code>finally</code>.
</figcaption>
</figure>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל ביניים: פותחים שעון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה בשם <var>estimate_read_time</var>, שמקבלת נתיב לקובץ, ומודדת בתוך כמה זמן פייתון קוראת את הקובץ.<br>
על הפונקציה להוסיף לקובץ בשם log.txt שורה שבה כתוב את שם הקובץ שניסיתם לקרוא, ובתוך כמה שניות פייתון קראה את הקובץ.<br>
הפונקציה תטפל בכל מקרי הקצה ובהתרעות על חריגות שבהם היא עלולה להיתקל.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת התרעה על חריגה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
עד כה התמקדנו בטיפול בהתרעות על חריגות שעלולות להיווצר במהלך ריצת התוכנית.<br>
בהגיענו לכתוב תוכניות גדולות יותר שמתכנתים אחרים ישתמשו בהן, לעיתים קרובות נרצה ליצור בעצמנו התרעות על חריגות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
התרעה על חריגה, כפי שלמדנו, היא דרך לדווח למתכנת שמשהו בעייתי התרחש בזמן ריצת התוכנית.<br>
נוכל ליצור התרעות כאלו בעצמנו, כדי להודיע על בעיות אפשריות למתכנתים שמשתמשים בקוד שלנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יצירת התרעה על חריגה היא עניין פשוט למדי שמורכב מ־3 חלקים:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>שימוש במילת המפתח <code>raise</code>.</li>
<li>ציון סוג החריגה שעליה אנחנו הולכים להתריע – <var>ValueError</var>, לדוגמה.</li>
<li>בסוגריים אחרי כן – הודעה שתתאר למתכנת שישתמש בקוד את הבעיה.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
זה ייראה כך:
</p>
End of explanation
"""
def _check_time_fields(hour, minute, second, microsecond, fold):
if not 0 <= hour <= 23:
raise ValueError('hour must be in 0..23', hour)
if not 0 <= minute <= 59:
raise ValueError('minute must be in 0..59', minute)
if not 0 <= second <= 59:
raise ValueError('second must be in 0..59', second)
if not 0 <= microsecond <= 999999:
raise ValueError('microsecond must be in 0..999999', microsecond)
if fold not in (0, 1):
raise ValueError('fold must be either 0 or 1', fold)
return hour, minute, second, microsecond, fold
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה דוגמה לקוד אמיתי שמממש התרעה על חריגה.<br>
<a href="https://github.com/python/cpython/blob/578c3955e0222ec7b3146197467fbb0fcfae12fe/Lib/datetime.py#L397">הקוד הבא</a> לקוח מהמודול <var>datetime</var>, והוא רץ בכל פעם <a href="https://github.com/python/cpython/blob/578c3955e0222ec7b3146197467fbb0fcfae12fe/Lib/datetime.py#L1589">שמבקשים ליצור</a> מופע חדש של תאריך.<br>
שימו לב כיצד יוצר המודול בודק את כל אחד מחלקי התאריך, ואם הערך חורג מהטווח שהוגדר – הוא מתריע על חריגה עם הודעת חריגה ממוקדת:
</p>
End of explanation
"""
DAYS = [
'Sunday', 'Monday', 'Tuesday', 'Wednesday',
'Thursday', 'Friday', 'Saturday',
]
def get_day_by_number(number):
    # Zero or negative numbers would otherwise index from the end of the list.
    if number < 1:
        raise ValueError("The number parameter must be between 1 and 7.")
    try:
        return DAYS[number - 1]
    except IndexError:
        raise ValueError("The number parameter must be between 1 and 7.")
for i in range(1, 9):
print(get_day_by_number(i))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מטרת הפונקציה היא להבין אם השעה שהועברה ל־<var>datetime</var> תקינה.<br>
בפונקציה, בודקים אם השעה היא מספר בטווח 0–23, אם מספר הדקות הוא מספר בטווח 0–59 וכן הלאה.<br>
אם אחד התנאים לא מתקיים – מתריעים למתכנת שניסה ליצור את מופע התאריך על חריגה.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הקוד משתמש בתעלול מבורך – ביצירת מופע ממחלקה של חריגה, אפשר להשתמש ביותר מפרמטר אחד.<br>
הפרמטר הראשון תמיד יוקדש להודעת השגיאה, אבל אפשר להשתמש בשאר הפרמטרים כדי להעביר מידע נוסף על החריגה.<br>
בדרך כלל מעבירים שם מידע על הערכים שגרמו לבעיה, או את הערכים עצמם.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל ביניים: סכו"ם</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בתור רשת לכלי עבודה אתם מנסים לספור את מלאי ה<b>ס</b>ולמות, <b>כ</b>רסומות <b>ומ</b>חרטות שקיימים אצלכם.<br>
כתבו מחלקה שמייצגת חנות (<var>Store</var>), ולה 3 תכונות:<br>
מספר הסולמות (<var>ladders</var>), מספר הכרסומות (<var>millings</var>) ומספר המחרטות (<var>lathes</var>) במלאי.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה בשם <var>count_inventory</var> שמקבלת רשימת מופעים של חנויות, ומחזירה את מספר הפריטים הכולל במלאי.<br>
צרו התרעות על חריגות במידת הצורך, בין אם במחלקה ובין אם בפונקציה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">טכניקות בניהול חריגות</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מיקוד החריגה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טכניקה מעניינת שמשתמשים בה מדי פעם היא ניסוח מחדש של התרעה על חריגה.<br>
נבחר לנהוג כך כשהניסוח מחדש יעזור לנו למקד את מי שישתמש בקוד שלנו.<br>
בטכניקה הזו נתפוס בעזרת <code>try</code> חריגה מסוג מסוים, וב־<code>except</code> ניצור התרעה חדשה על חריגה עם הודעת שגיאה משלנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה דוגמה:
</p>
End of explanation
"""
ADDRESS_BOOK = {
'Padfoot': '12 Grimmauld Place, London, UK',
'Jerry': 'Apartment 5A, 129 West 81st Street, New York, New York',
'Clark': '344 Clinton St., Apt. 3B, Metropolis, USA',
}
def get_address_by_name(name):
try:
return ADDRESS_BOOK[name]
except KeyError as err:
with open('errors.txt', 'a') as errors:
errors.write(str(err))
raise KeyError(str(err))
for name in ('Padfoot', 'Clark', 'Jerry', 'The Ink Spots'):
print(get_address_by_name(name))
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">טיפול והתרעה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טכניקה נוספת היא ביצוע פעולות מסוימות במהלך ה־<code>except</code>, והתרעה על החריגה מחדש.<br>
השימוש בטכניקה הזו נפוץ מאוד.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שימוש בה הוא מעין סיפור קצר בשלושה חלקים:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>תופסים את החריגה.</li>
<li>מבצעים פעולות רלוונטיות כמו:
<ul>
<li>מתעדים את התרחשות החריגה במקום חיצוני, כמו קובץ, או אפילו מערכת ייעודית לניהול שגיאות.</li>
<li>מבטלים את הפעולות שכן הספקנו לעשות לפני שהייתה התרעה על חריגה.</li>
</ul>
</li>
<li>מקפיצים מחדש את החריגה – את אותה חריגה בדיוק או אחת מדויקת יותר.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לדוגמה:
</p>
End of explanation
"""
def get_address_by_name(name):
try:
return ADDRESS_BOOK[name]
except KeyError as err:
with open('errors.txt', 'a') as errors:
errors.write(str(err))
raise
for name in ('Padfoot', 'Clark', 'Jerry', 'The Ink Spots'):
print(get_address_by_name(name))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
למעשה, הרעיון של התרעה מחדש על חריגה הוא כה נפוץ, שמפתחי פייתון יצרו עבורו מעין קיצור.<br>
אם אתם נמצאים בתוך <code>except</code> ורוצים לזרוק בדיוק את החריגה שתפסתם, פשוט כתבו <code>raise</code> בלי כלום אחריו:
</p>
End of explanation
"""
class AddressUnknownError(Exception):
pass
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת חריגה משלנו</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בתוכנות גדולות במיוחד נרצה ליצור סוגי חריגות משלנו.<br>
נוכל לעשות זאת בקלות אם נירש ממחלקה קיימת שמייצגת חריגה:
</p>
End of explanation
"""
def get_address_by_name(name):
try:
return ADDRESS_BOOK[name]
except KeyError:
raise AddressUnknownError(f"Can't find the address of {name}.")
for name in ('Padfoot', 'Clark', 'Jerry', 'The Ink Spots'):
print(get_address_by_name(name))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בשלב זה, נוכל להתריע על חריגה בעזרת סוג החריגה שיצרנו:
</p>
End of explanation
"""
class DrunkUserError(Exception):
"""Exception raised for errors in the input."""
def __init__(self, name, bac, *args, **kwargs):
super().__init__(*args, **kwargs)
self.name = name
self.bac = bac # Blood Alcohol Content
def __str__(self):
return (
f"{self.name} must not drriiiive!!! @_@"
f"\nBAC: {self.bac}"
)
def start_driving(username, blood_alcohol_content):
if blood_alcohol_content > 0.024:
raise DrunkUserError(username, blood_alcohol_content)
return True
start_driving("Kipik", 0.05)
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/tip.png" style="height: 50px !important;" alt="טיפ!" title="טיפ!">
</div>
<div style="width: 90%;">
נהוג לסיים את שמות המחלקות המייצגות חריגה במילה <em>Error</em>.
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
זכרו שהירושה כאן משפיעה על הדרך שבה תטופל החריגה שלכם.<br>
אם, נניח, <var>AddressUnknownError</var> הייתה יורשת מ־<var>KeyError</var>, ולא מ־<var>Exception</var>,<br>
זה אומר שכל מי שהיה עושה <code>except KeyError</code> היה תופס גם חריגות מסוג <var>AddressUnknownError</var>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יש לא מעט יתרונות ליצירת שגיאות משל עצמנו:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>המתכנתים שמשתמשים בפונקציה יכולים לתפוס התרעות ספציפיות יותר.</li>
<li>הקוד הופך לבהיר יותר עבור הקורא ועבור מי שמקבל את ההתרעה על החריגה.</li>
<li>בזכות רעיון הירושה, אפשר לספק לחריגות הללו התנהגות מותאמת אישית.</li>
</ol>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/deeper.svg?a=1" style="height: 50px !important;" alt="העמקה" title="העמקה">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
כבכל ירושה, תוכלו לדרוס את הפעולות <code>__init__</code> ו־<code>__str__</code> של מחלקת־העל שממנה ירשתם.<br>
דריסה כזו תספק לכם גמישות רבה בהגדרת החריגות שיצרתם ובשימוש בהן.
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה דוגמה קצרצרה ליצירת חריגה מותאמת אישית:
</p>
End of explanation
"""
def get_nth_char(string, n):
    n = n - 1  # string[0] is the first char (n = 1)
    if isinstance(string, (str, bytes)) and 0 <= n < len(string):
        return string[n]
    return ''
print(get_nth_char("hello", 1))
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">נימוסים והליכות</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טיפול בחריגות היא הדרך הטובה ביותר להגיב על התרחשויות לא סדירות ולנהל אותן בקוד הפייתון שאנחנו כותבים.<br>
כפי שכבר ראינו במחברות קודמות, בכלים מורכבים ומתקדמים יש יותר מקום לטעויות, וקווים מנחים יעזרו לנו להתנהל בצורה נכונה.<br>
נעבור על כמה כללי אצבע ורעיונות מועילים שיקלו עליכם לעבוד נכון עם חריגות:
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">טיפול ממוקד</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
באופן כללי, נעדיף להיות כמה שיותר ממוקדים בטיפול בחריגות.<br>
כשאנחנו מטפלים בחריגה, אנחנו יוצאים מנקודת הנחה שאנחנו יודעים מה הבעיה וכיצד יש לטפל בה.<br>
לדוגמה, אם משתמש הזין ערך שלא נתמך בקוד שלנו, נרצה לעצור את קריסת התוכנית ולבקש ממנו להזין ערך מתאים.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לא נרצה, לדוגמה, לתפוס התרעות על חריגות שלא התכוונו לתפוס מלכתחילה.<br>
אנחנו מעוניינים לטפל רק בבעיות שאנחנו יודעים שעלולות להתרחש.<br>
אם ישנה בעיה שאנחנו לא יודעים עליה – אנחנו מעדיפים שפייתון תצעק כדי שנדע שהיא קיימת.<br>
"השתקה" של בעיות שאנחנו לא יודעים על קיומן היא פתח לתקלים בלתי צפויים וחמורים אף יותר.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד, הנקודה הזו תבוא לידי ביטוי כשנכתוב אחרי ה־<code>except</code> את רשימת סוגי החריגות שבהן נטפל.<br>
נשתדל שלא לטפל ב־<var>Exception</var>, משום שאז נתפוס כל סוג חריגה שיורש ממנה (כמעט כולם).<br>
נשתדל גם לא לדחוס אחרי ה־<code>except</code> סוגי חריגות שאנחנו לא יודעים אם הם רלוונטיים או לא.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יתרה מזאת, טיפול בשגיאות יתבצע רק על קוד שאנחנו יודעים שעלול לגרום להתרעה על חריגה.<br>
קוד שלא קשור לחריגה שהולכת להתרחש – לא יהיה חלק מהליך הטיפול בשגיאות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד, הנקודה הזו תבוא לידי ביטוי בכך שבתוך ה־<code>try</code> יוזחו כמה שפחות שורות קוד.<br>
תחת ה־<code>try</code> נכתוב אך ורק את הקוד שעלול להתריע על חריגה, ושום דבר מעבר לו.<br>
כך נדע שאנחנו לא תופסים בטעות חריגות שלא התכוונו לתפוס מלכתחילה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">חריגות הן עבור המתכנת</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אנחנו מעוניינים שהמתכנת שישתמש בקוד יקבל התרעות על חריגות שיבהירו לו מהן הבעיות בקוד שכתב, ויאפשרו לו לטפל בהן.<br>
אם כתבנו מודול או פונקציה שמתכנת אחר הולך להשתמש בה, לדוגמה, נקפיד ליצור התרעות על חריגות שיעזרו לו לנווט בקוד שלנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לעומת המתכנת, אנחנו שואפים שמי שישתמש בתוכנית (הלקוח של המוצר, נניח) לעולם לא יצטרך להתמודד עם התרעות על חריגות.<br>
התוכנית לא אמורה לקרוס בגלל חריגה אף פעם, אלא לטפל בחריגה ולחזור לפעולה תקינה.<br>
אם החריגה קיצונית ומחייבת את הפסקת הריצה של התוכנית, עלינו לפעול בצורה אחראית:<br>
נבצע שמירה מסודרת של כמה שיותר פרטים על הודעת השגיאה, נסגור חיבורים למשאבים, נמחק קבצים שיצרנו ונכבה את התוכנה בצורה מסודרת.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">EAFP או LBYL</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בכל הקשור לשפות תכנות, ישנן שתי גישות נפוצות לטיפול במקרי קצה בתוכנית.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הגישה הראשונה נקראת <var>LBYL</var>, או Look Before You Leap ("הסתכל לפני שאתה קופץ").<br>
גישה זו דוגלת בבדיקת השטח לפני ביצוע כל פעולה.<br>
הפעולה תתבצע לבסוף, רק כשנהיה בטוחים שהרצתה חוקית ולא גורמת להתרעה על חריגה.<br>
קוד שכתב מי שדוגל בשיטה הזו מתאפיין בשימוש תדיר במילת המפתח <code>if</code>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הגישה השנייה נקראת <var>EAFP</var>, או Easier to Ask for Forgiveness than Permission ("קל יותר לבקש סליחה מלבקש רשות").<br>
גישה זו דוגלת בביצוע פעולות מבלי לבדוק לפני כן את היתכנותן, ותפיסה של התרעה על חריגה אם היא מתרחשת.<br>
קוד שכתב מי שדוגל בשיטה הזו מתאפיין בשימוש תדיר במבני <code>try-except</code>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה שתי דוגמאות להבדלים בגישות.<br>
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">דוגמה 1: מספר תו במחרוזת</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נכתוב פונקציה שמקבלת מחרוזת ומיקום ($n$), ומחזירה את התו במיקום ה־$n$־י במחרוזת.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לפניכם הקוד בגישת LBYL, ובו אנחנו מנסים לבדוק בזהירות אם אכן מדובר במחרוזת, ואם יש בה לפחות $n$ תווים.<br>
רק אחרי שאנחנו מוודאים שכל דרישות הקדם מתקיימות, אנחנו ניגשים לבצע את הפעולה.
</p>
End of explanation
"""
def get_nth_char(string, n):
    try:
        # A negative index would silently return a char from the end,
        # so we raise the IndexError ourselves for n < 1.
        if n < 1:
            raise IndexError('n must be at least 1')
        return string[n - 1]
    except (IndexError, TypeError) as e:
        print(e)
        return ''
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
והנה אותו קוד בגישת EAFP. הפעם פשוט ננסה לאחזר את התו, ונסמוך על מבנה ה־<code>try-except</code> שיתפוס עבורנו את החריגות:
</p>
End of explanation
"""
import os
import pathlib
def is_path_writable(filepath):
"""Return if the path is writable."""
path = pathlib.Path(filepath)
directory = path.parent
is_dir_writable = directory.is_dir() and os.access(directory, os.W_OK)
is_exists = path.exists()
is_file_writable = path.is_file() and os.access(path, os.W_OK)
return is_dir_writable and ((not is_exists) or is_file_writable)
def write_textfile(filepath, text):
"""Safely write `text` to `filepath`."""
if is_path_writable(filepath):
with open(filepath, 'w', encoding='utf-8') as f:
f.write(text)
return True
return False
write_textfile("not_worms.txt", "What the holy hand grenade was that?")
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">Example 2: writing to a file</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We'll program a function that receives a path to a file and a text, and writes the text to that file.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here is the code in the LBYL style, in which we carefully check that the file is indeed safe to write to.<br>
Only after verifying that we have access to it, that it really is a file, and that it can be written to, do we perform the write.
</p>
End of explanation
"""
import os
import pathlib
def write_textfile(filepath, text):
"""Safely write `text` to `filepath`."""
try:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(text)
except (ValueError, OSError) as e:
print(e)
return False
return True
write_textfile("not_worms.txt", "What the holy hand grenade was that?")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And here is the same code in the EAFP style. This time we simply try to write to the file, and rely on the <code>try-except</code> block to catch the exceptions for us:
</p>
End of explanation
"""
try:
# Code
...
except Exception:
pass
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Python programmers tend to favor the EAFP style.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Personal responsibility</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Handling an exception keeps the program from crashing, and may hide the fact that there was a problem in the program's flow.<br>
Usually that is excellent and exactly what we want, but programmers early in their careers may be tempted to abuse it.<br>
Here is an example of a code snippet that many trainees rely on when starting out:
</p>
End of explanation
"""
# Example 1
class PhoneNumberNotFound(Exception):
pass
# Example 2
def get_key(d, k, default=None):
try:
return d[k]
except:
return default
# Example 3
def write_file(path, text):
try:
f = open(path, 'w')
f.write(text)
f.close()
except IOError:
pass
# Example 4
PHONEBOOK = {'867-5309': 'Jenny'}
def get_name_by_phone(phonebook, phone_number):
if phone_number not in phonebook:
raise ValueError("person_number not in phonebook")
return phonebook[phone_number]
phone_number = input("Hi Mr. User!\nEnter phone:")
get_name_by_phone(PHONEBOOK, phone_number)
# Example 5
def my_sum(items):
try:
total = 0
for element in items:
total = total + element
return total
except TypeError:
return 0
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
This trick is called "exception silencing".<br>
In the vast majority of cases, it is not what we want.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Silencing an exception can cause a failure later in the program's run that will be very hard to track down.<br>
Often, such silencing indicates that the exception was caught too early.<br>
In those cases, it is better to handle the exception in the function that called the place where the exception was raised.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
If you ever find yourself silencing exceptions, stop and ask yourself whether that is really the best solution.<br>
Usually it is better to handle the exception and bring the program back to a valid state,<br>
or at least to record the exception's details in a file that logs the exceptions raised while the program was running.
</p>
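As a sketch of that alternative, using Python's standard <code>logging</code> module (the function name, log filename, and fallback value here are illustrative), we can record the exception instead of silencing it:

```python
import logging

# Send ERROR-level records (including tracebacks) to a log file.
logging.basicConfig(filename='errors.log', level=logging.ERROR)


def parse_port(raw_port):
    """Return raw_port as an int, logging and falling back to a default on failure."""
    try:
        return int(raw_port)
    except ValueError:
        # Record the full traceback instead of swallowing the exception.
        logging.exception("Invalid port %r; falling back to 8080", raw_port)
        return 8080


print(parse_port("80"))      # 80
print(parse_port("eighty"))  # 8080, and the traceback is written to errors.log
```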
<span style="text-align: right; direction: rtl; float: right; clear: both;">Exercises</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Lowering the bar</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here are some spectacularly dreadful code examples.<br>
Fix them so that they conform to the good manners we learned at the end of this notebook.<br>
Use the Internet as needed.
</p>
End of explanation
"""
def digest(key, data):
S = list(range(256))
j = 0
for i in range(256):
j = (j + S[i] + ord(key[i % len(key)])) % 256
S[i], S[j] = S[j], S[i]
j = 0
y = 0
for char in data:
j = (j + 1) % 256
y = (y + S[j]) % 256
S[j], S[y] = S[y], S[j]
yield chr(ord(char) ^ S[(S[j] + S[y]) % 256])
def decrypt(key, message):
return ''.join(digest(key, message))
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">Raising the bar</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function intended for the programmers of the "The Syndicate" company.<br>
The function receives as parameters a path to a file (<var>filepath</var>) and a line number (<var>line_number</var>).<br>
The function returns the contents of the line numbered <var>line_number</var> in the file whose path is <var>filepath</var>.<br>
Manage the errors well: whenever an exception is raised, write it to the file log.txt along with a timestamp and the message.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">My successful child</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Zaphnath Paaneah tried to pass along to Julius an interesting poem he had written.<br>
In our attempt to trace Zaphnath Paaneah's footsteps, we tried to get our hands on the message – only to discover that it is encrypted.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The resources directory contains two files: users.txt and passwords.txt.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Each line in the users.txt file looks like this:
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The first column represents the user's ID number, the second column represents their name, and the remaining columns represent identifying details about them.<br>
The columns are separated by the | character.
</p>
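The sample line below is hypothetical (the real column layout is in resources/users.txt), but it shows how one |-separated record can be split into its fields:

```python
# A made-up record in the users.txt format: id | name | identifying details
line = "1|Zaphnath Paaneah|dream interpreter\n"

# Strip the trailing newline, then split on the column separator.
fields = line.rstrip('\n').split('|')
print(fields)  # ['1', 'Zaphnath Paaneah', 'dream interpreter']
```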
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Each line in the passwords.txt file looks like this:
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The first two columns are user ID numbers, as defined in users.txt.<br>
The third column is the communication password between those two users.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write the following functions:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li><var>load_file</var> – loads a tabular file whose first line is a header and whose columns are separated from one another by the | character.<br>
The function returns a list of dictionaries. Each dictionary in the list represents a line in the file, and its keys are the field names from the header.</li>
<li><var>get_user_id</var> – receives a username and returns that user's ID number.</li>
<li><var>get_password</var> – receives the ID numbers of two users and returns the communication password between them.</li>
<li><var>decrypt_file</var> – receives a key and a path to a file, and decrypts the file using the <var>decrypt</var> function.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
To solve the riddle, find the communication password of the users Zaphnath Paaneah and Gaius Iulius Caesar.<br>
Use it to decrypt the secret message in the file message.txt.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Use this exercise to practice what you have learned about exception handling.
</p>
End of explanation
"""
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
# define function to load train, test, and validation datasets
def load_dataset(path):
data = load_files(path)
dog_files = np.array(data['filenames'])
dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
return dog_files, dog_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset('dogImages/train')
valid_files, valid_targets = load_dataset('dogImages/valid')
test_files, test_targets = load_dataset('dogImages/test')
# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("dogImages/train/*/"))]
# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
"""
Explanation: Artificial Intelligence Nanodegree
Convolutional Neural Networks
Project: Write an Algorithm for a Dog Identification App
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.
Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).
In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
Step 0: Import Datasets
Step 1: Detect Humans
Step 2: Detect Dogs
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Step 4: Use a CNN to Classify Dog Breeds (using Transfer Learning)
Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
Step 6: Write your Algorithm
Step 7: Test Your Algorithm
<a id='step0'></a>
Step 0: Import Datasets
Import Dog Dataset
In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the load_files function from the scikit-learn library:
- train_files, valid_files, test_files - numpy arrays containing file paths to images
- train_targets, valid_targets, test_targets - numpy arrays containing onehot-encoded classification labels
- dog_names - list of string-valued dog breed names for translating labels
End of explanation
"""
import random
random.seed(8675309)
# load filenames in shuffled human dataset
human_files = np.array(glob("lfw/*/*"))
random.shuffle(human_files)
# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
"""
Explanation: Import Human Dataset
In the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array human_files.
End of explanation
"""
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[3])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(img)
plt.show()
"""
Explanation: <a id='step1'></a>
Step 1: Detect Humans
We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory.
In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
End of explanation
"""
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
"""
Explanation: Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.
In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
Write a Human Face Detector
We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.
End of explanation
"""
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
detected_humans = 0
detected_dogs = 0
for human in human_files_short:
if face_detector(human):
detected_humans += 1
for dog in dog_files_short:
if face_detector(dog):
detected_dogs += 1
print(detected_humans)
print(detected_dogs)
"""
Explanation: (IMPLEMENTATION) Assess the Human Face Detector
Question 1: Use the code cell below to test the performance of the face_detector function.
- What percentage of the first 100 images in human_files have a detected human face?
99%
- What percentage of the first 100 images in dog_files have a detected human face?
11%
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.
Answer:
Percentage of human faces detected in the human files: 99%
Percentage of human faces detected in the dog files: 11%
End of explanation
"""
## (Optional) TODO: Report the performance of another
## face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.
"""
Explanation: Question 2: This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unneccessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?
Answer:
I don't think images without humans, or without a clear view of a human face, are a problem. We should estimate the probability that an image contains a human: if this probability is low we don't waste resources searching for a face, but if it is high enough we search for the face.
In the end we should provide more complete information to the user. For instance, we could return the total number of images, the number of images that contain humans, the number of images that we believe do not, and the total number of faces detected.
Asking the user to provide only human faces could be a pain. Consider the case where only twenty images in a huge dataset of thousands are not humans: we should not halt the search with a message like "please provide only human images", since the user probably cannot tell which images are the offending ones.
How could we detect humans in an image?
Let's think about what we know about how the human brain recognizes objects. Researchers at MIT’s Department of Brain and Cognitive Sciences suggest that the human brain represents visual information in a hierarchical way. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified. Deep neural networks work in a similar fashion, so I think we definitely need a deep network with many layers to filter out the very specific features that define the shape of a human body.
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on each of the datasets.
End of explanation
"""
from keras.applications.resnet50 import ResNet50
# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
"""
Explanation: <a id='step2'></a>
Step 2: Detect Dogs
In this section, we use a pre-trained ResNet-50 model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories. Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.
End of explanation
"""
from keras.preprocessing import image
from tqdm import tqdm
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
"""
Explanation: Pre-process the Data
When using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape
$$
(\text{nb_samples}, \text{rows}, \text{columns}, \text{channels}),
$$
where nb_samples corresponds to the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels for each image, respectively.
The path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape
$$
(1, 224, 224, 3).
$$
The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape
$$
(\text{nb_samples}, 224, 224, 3).
$$
Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. It is best to think of nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
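The shape bookkeeping can be checked with plain NumPy, no image loading needed; a zero-filled dummy array stands in here for the output of `image.img_to_array`:

```python
import numpy as np

# A dummy 224x224 RGB "image" in place of a decoded picture
x = np.zeros((224, 224, 3), dtype='float32')

# path_to_tensor's final step: prepend a samples axis
tensor = np.expand_dims(x, axis=0)
print(tensor.shape)  # (1, 224, 224, 3)

# paths_to_tensor's final step: stack the per-image 4D tensors
batch = np.vstack([tensor, tensor, tensor])
print(batch.shape)  # (3, 224, 224, 3)
```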
End of explanation
"""
from keras.applications.resnet50 import preprocess_input, decode_predictions
def ResNet50_predict_labels(img_path):
# returns prediction vector for image located at img_path
img = preprocess_input(path_to_tensor(img_path))
return np.argmax(ResNet50_model.predict(img))
"""
Explanation: Making Predictions with ResNet-50
Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in RGB as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function preprocess_input. If you're curious, you can check the code for preprocess_input here.
Now that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the predict method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the ResNet50_predict_labels function below.
By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this dictionary.
End of explanation
"""
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
prediction = ResNet50_predict_labels(img_path)
    return 151 <= prediction <= 268
"""
Explanation: Write a Dog Detector
While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive).
We use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
End of explanation
"""
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
detected_humans = 0
detected_dogs = 0
for human in human_files_short:
if dog_detector(human):
detected_humans += 1
for dog in dog_files_short:
if dog_detector(dog):
detected_dogs += 1
print(detected_humans)
print(detected_dogs)
"""
Explanation: (IMPLEMENTATION) Assess the Dog Detector
Question 3: Use the code cell below to test the performance of your dog_detector function.
- What percentage of the images in human_files_short have a detected dog?
- What percentage of the images in dog_files_short have a detected dog?
Answer:
Dogs detected in human_files_short: 0%
Dogs detected in dog_files_short: 99%
End of explanation
"""
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
"""
Explanation: <a id='step3'></a>
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
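The random-chance baseline can be made concrete with a quick computation:

```python
n_breeds = 133
baseline_accuracy = 1 / n_breeds

# The bar a trained classifier must clear
print(f"Random-guess accuracy: {baseline_accuracy:.2%}")  # Random-guess accuracy: 0.75%
```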
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
Pre-process the Data
We rescale the images by dividing every pixel in every image by 255.
End of explanation
"""
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
model = Sequential()
### TODO: Define your architecture.
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(GlobalAveragePooling2D())
model.add(Dense(133, activation='softmax'))  # softmax so the 133 outputs form a probability distribution (required by categorical_crossentropy)
model.summary()
"""
Explanation: (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
model.summary()
We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:
Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.
Answer: The network above stacks three convolution-plus-max-pooling blocks with increasing filter counts (16, 32, 64), so each block extracts progressively higher-level features while halving the spatial resolution. A global average pooling layer then collapses each feature map to a single value, keeping the parameter count small, and a final 133-node fully connected layer scores the dog breeds.
End of explanation
"""
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
"""
Explanation: Compile the Model
End of explanation
"""
from keras.callbacks import ModelCheckpoint
### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 10
### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
validation_data=(valid_tensors, valid_targets),
epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
"""
Explanation: (IMPLEMENTATION) Train the Model
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
You are welcome to augment the training data, but this is not a requirement.
End of explanation
"""
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
"""
Explanation: Load the Model with the Best Validation Loss
End of explanation
"""
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
"""
Explanation: Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
End of explanation
"""
bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
"""
Explanation: <a id='step4'></a>
Step 4: Use a CNN to Classify Dog Breeds
To reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN.
Obtain Bottleneck Features
End of explanation
"""
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))
VGG16_model.summary()
"""
Explanation: Model Architecture
The model uses the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
End of explanation
"""
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
"""
Explanation: Compile the Model
End of explanation
"""
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
verbose=1, save_best_only=True)
VGG16_model.fit(train_VGG16, train_targets,
validation_data=(valid_VGG16, valid_targets),
epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
"""
Explanation: Train the Model
End of explanation
"""
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
"""
Explanation: Load the Model with the Best Validation Loss
End of explanation
"""
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
"""
Explanation: Test the Model
Now, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.
End of explanation
"""
from extract_bottleneck_features import *
def VGG16_predict_breed(img_path):
# extract bottleneck features
bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
# obtain predicted vector
predicted_vector = VGG16_model.predict(bottleneck_feature)
# return dog breed that is predicted by the model
return dog_names[np.argmax(predicted_vector)]
"""
Explanation: Predict Dog Breed with the Model
End of explanation
"""
### TODO: Obtain bottleneck features from another pre-trained CNN.
"""
Explanation: <a id='step5'></a>
Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras:
- VGG-19 bottleneck features
- ResNet-50 bottleneck features
- Inception bottleneck features
- Xception bottleneck features
The files are encoded as such:
Dog{network}Data.npz
where {network}, in the above filename, can be one of VGG19, Resnet50, InceptionV3, or Xception. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the bottleneck_features/ folder in the repository.
(IMPLEMENTATION) Obtain Bottleneck Features
In the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:
bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz')
train_{network} = bottleneck_features['train']
valid_{network} = bottleneck_features['valid']
test_{network} = bottleneck_features['test']
End of explanation
"""
### TODO: Define your architecture.
"""
Explanation: (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
<your model's name>.summary()
Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
Answer:
End of explanation
"""
### TODO: Compile the model.
"""
Explanation: (IMPLEMENTATION) Compile the Model
End of explanation
"""
### TODO: Train the model.
"""
Explanation: (IMPLEMENTATION) Train the Model
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
You are welcome to augment the training data, but this is not a requirement.
End of explanation
"""
### TODO: Load the model weights with the best validation loss.
"""
Explanation: (IMPLEMENTATION) Load the Model with the Best Validation Loss
End of explanation
"""
### TODO: Calculate classification accuracy on the test dataset.
"""
Explanation: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
End of explanation
"""
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
"""
Explanation: (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc) that is predicted by your model.
Similar to the analogous function in Step 5, your function should have three steps:
1. Extract the bottleneck features corresponding to the chosen CNN model.
2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.
3. Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.
The functions to extract the bottleneck features can be found in extract_bottleneck_features.py, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function
extract_{network}
where {network}, in the above filename, should be one of VGG19, Resnet50, InceptionV3, or Xception.
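As a sketch of those three steps, assuming your chosen extractor and trained model already exist elsewhere in the notebook (the factory form below is just one way to make the dependencies explicit; `make_breed_predictor` is a hypothetical helper name, not part of the project starter code):

```python
import numpy as np

def make_breed_predictor(extract_features, model, dog_names, path_to_tensor):
    """Build a breed predictor from the pieces described above.

    All four arguments are assumed to exist elsewhere in the notebook:
    `extract_features` is e.g. your chosen extract_{network} function,
    `model` is your trained Keras model, `dog_names` is the list from
    Step 0, and `path_to_tensor` is the image loader defined earlier.
    """
    def predict_breed(img_path):
        # 1. bottleneck features for the chosen CNN
        bottleneck_feature = extract_features(path_to_tensor(img_path))
        # 2. predicted probability vector; argmax gives the breed index
        predicted_vector = model.predict(bottleneck_feature)
        # 3. map the index back to a breed name
        return dog_names[np.argmax(predicted_vector)]
    return predict_breed
```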
End of explanation
"""
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
"""
Explanation: <a id='step6'></a>
Step 6: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a dog is detected in the image, return the predicted breed.
- if a human is detected in the image, return the resembling dog breed.
- if neither is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 5 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!
(IMPLEMENTATION) Write your Algorithm
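One possible shape for the dispatch logic, as a sketch: it assumes the `face_detector` and `dog_detector` functions developed above plus whatever breed predictor you wrote in Step 5, passed in as callables so the pieces stay swappable (`classify_image` is a hypothetical name):

```python
def classify_image(img_path, dog_detector, face_detector, predict_breed):
    """Dispatch on what the detectors find, as described above.

    The three callables are assumed to be the dog_detector and
    face_detector built earlier and your Step 5 breed predictor.
    """
    if dog_detector(img_path):
        # a dog was found: report its predicted breed
        return "Dog detected! Predicted breed: {}".format(predict_breed(img_path))
    if face_detector(img_path):
        # a human was found: report the resembling breed
        return "Human detected! You resemble a: {}".format(predict_breed(img_path))
    # neither detector fired
    return "Error: neither a dog nor a human was detected."
```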
End of explanation
"""
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
"""
Explanation: <a id='step7'></a>
Step 7: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
(IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
Answer:
End of explanation
"""
|
AustinACM-SigKDD/SciKit_2015_11 | Pre-Model Workflow.ipynb | gpl-2.0 | %install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%load_ext watermark
%watermark -a "Jaya Zenchenko" -n -t -z -u -h -m -w -v -p scikit-learn,matplotlib,pandas,seaborn,numpy,scipy,conda
"""
Explanation: ACM SIGKDD Austin
Advanced Machine Learning with Python
Class 1: Pre-Model Workflow
Jaya Zenchenko
Nov 18th, 2015
We will primarily be working out of the Scikit Learn Cookbook. I love to use Jupyter notebooks, so this presentation is slides made from the notebook. I recently started using the live-reveal extension to make slides from my notebooks. You can download it and play around with the examples yourself later. Also, a reminder that this course is intended to be at the intermediate level.
Pre-Model Workflow : Scikit Learn Cookbook
Why Pre-process data?
Filling in Missing Values
Dealing with Numerical and Categorical Variables
Scaling/Normalizing
Pipeline for Pre-processing data
References
So we will be going over pre-model workflow primarily out of the scikit learn cookbook, but I'll also be including other examples not in the book. How many people here have used scikit-learn before? How many people here have had experience with data cleaning? I have had some experience with data preprocessing, and I have learned a lot by dealing with very messy data. I think the best way to understand the importance of data cleaning and preprocessing is by dealing with it often; you are always learning some new trick or finding some new aberration in the data. I would love for others to share, as we go through some of these sections, examples of atrocious data and how they worked around it.
Install Watermark for Reproducibility:
End of explanation
"""
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import numpy
import seaborn as sns
import pandas as pd
%matplotlib inline
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
sns.set()
iris = load_iris()
iris.feature_names
iris.data[0:5]
"""
Explanation: Ever since I discovered Watermark, I have loved it because it automatically documents the package versions and information about the machine I ran the code on. This is great for reproducibility when sharing your work/results with others.
Why Pre-Process the data?
Scikit Learn - Machine Learning Package for python
- Very easy to run machine learning algorithms
- More difficult to use the algorithms correctly
- Garbage in, garbage out!
So scikit-learn is a machine learning package for Python. I think it is the gold standard for the way packages should be in the scientific community. It is easy to pick up quickly and use to build awesome models and machine learning algorithms. So it's easy to use, but the real work is in two areas: first, we need to make sure our data is cleaned and properly formatted for the various machine learning methods; second, we need to make sure that the data looks the way it should for the given model. We want to make sure that we are satisfying the assumptions of the model before using it. Garbage in, garbage out. This is why we need to preprocess our data.
Various issues in the data can include:
- Noisy data (out-of-range, impossible combinations, human error, etc)
- Missing Data (General noise/error, MCAR, MNAR, MAR)
- Data of different types (numeric, categorical)
- Too many attributes
- Too much data
- Data does not fit the assumed characteristic for a given method
For each of these issues, we have different solutions:
- Imputing (filling in missing values)
- Binary Features
- Categorical
- Binarizing
- Correlation Analysis/Chi-Squared Test of Independance
- Aggregating
- Random Sampling
- Scaling/Normalizing
So what kinds of issues can we find in our data? Data has been known to "lie". It comes from various sources such as people, sensors, the internet, and all these have been known to lie sometimes. We can have noisy data, missing data, different types of data, too much data, and data values not as expected. I'll be going over a few examples of these today, primarily ways of filling in missing data, dealing with data of different types, and scaling and normalizing.
Example Datasets:
Download Data Sets:
scikit learn datasets
UCI Machine Learning Repository
Kaggle Data Sets
Local government/open data sets
Create Data Sets:
Create data with specific properties (distributions, number of clusters, noise, etc)
Scikit-learn has many built-in data sets; it's a good place to start playing with these techniques and the other methods we'll cover in future classes. Other data sets include the UCI Machine Learning repository, Kaggle data sets, and local government/open data sets.
Another option is to create your own fake data set with different properties (distribution, number of clusters, noise, etc). This can be a good approach to test out different algorithms and their performance on data sets that behave according to their assumptions.
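For example, scikit-learn's `make_blobs` can generate clustered data with a chosen number of centers and a controlled noise level (a minimal sketch):

```python
from sklearn.datasets import make_blobs

# 300 samples, 2 features, 3 clusters, with cluster_std controlling the noise
X, y = make_blobs(n_samples=300, n_features=2, centers=3,
                  cluster_std=1.5, random_state=0)
print(X.shape, y.shape)  # (300, 2) (300,)
```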
Remember, when using real data, to always spend enough time on exploratory data analysis to understand the data before applying different methods.
Download Example Data set from Scikit Learn:
End of explanation
"""
df = sns.load_dataset("iris")
df.head()
"""
Explanation: I like seaborn's visualization capability and integration with pandas so I'm going to download the dataset from there instead.
End of explanation
"""
numpy.random.seed(seed=40)
sample_idx = numpy.random.random_integers(0,df.shape[0]-1, 10 )
feature_idx = numpy.random.random_integers(0,df.shape[1]-2, 10)
print("sample_idx", sample_idx)
print("feature_idx", feature_idx)
for idx, jdx in zip(sample_idx, feature_idx):
df.ix[idx, jdx] = None
df.head(15)
"""
Explanation: Filling in Missing Values:
Let's randomly select samples to remove so that we have missing values in our data.
End of explanation
"""
imputer = preprocessing.Imputer()
imputer
"""
Explanation: scikit-learn has an Imputer function. This has a fit_transform function and can be part of a pre-processing pipeline.
End of explanation
"""
imputed_df = df.copy()
imputed_data = imputer.fit_transform(df.ix[:,0:4])
imputed_df.ix[:,0:4] = imputed_data
imputed_df.head(15)
print(df.mean())
df.groupby('species').mean()
"""
Explanation: The strategy parameter can take the values 'mean', 'median', or 'most_frequent'. You can also tell the Imputer what the missing value looks like in the data; sometimes people use -1 or 99999.
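To see what strategy='median' computes under the hood, here is the same substitution done by hand with NumPy:

```python
import numpy as np

# per-column median of the observed values, substituted for the missing entries
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])
col_medians = np.nanmedian(X, axis=0)            # [2.0, 6.0]
X_filled = np.where(np.isnan(X), col_medians, X)  # broadcast the fill values
print(X_filled)
```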
End of explanation
"""
sns.pairplot(df, hue="species")
plt.show()
"""
Explanation: Now something to think about might be whether we want to set the mean of all the data as the imputed value. Looking at the data by the given species, for petal length, the means vary vastly between the 3 species. So we may want to impute using the mean within the species and not over the whole data set.
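One way to impute within each species is a pandas groupby/transform; here is a minimal sketch on a hypothetical mini-frame with the same shape of problem:

```python
import pandas as pd

# fill each missing petal_length with the mean of its own species,
# not the global mean
df = pd.DataFrame({
    "species": ["setosa", "setosa", "virginica", "virginica"],
    "petal_length": [1.4, None, 5.5, None],
})
df["petal_length"] = (df.groupby("species")["petal_length"]
                        .transform(lambda s: s.fillna(s.mean())))
print(df)
```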
End of explanation
"""
sns.set(style="ticks")
exercise = sns.load_dataset("exercise")
exercise.head()
"""
Explanation: Looking at this data, another method could be to use clustering or regression to model the data without missing values and then see where the data with some missing feature values would be. The thing to note about imputing the data in a more specialized way like this is that it would need to be applied to the data directly, and the Imputer could not be part of the pipeline.
Dealing with Different Types (Numeric, Categorical):
Binarizing:
Binarizing is the process of converting a variable to a 0 or 1 given a certain threshold.
To show an example for binarizing, I wanted to have a data set with both categorical and numeric data. I downloaded an 'exercise' dataset from seaborn.
End of explanation
"""
exercise.pulse.hist()
plt.title('Histogram of Pulse')
plt.show()
exercise.ix[:,'high_pulse'] = preprocessing.binarize(exercise.pulse, threshold=120)[0]
exercise.head()
exercise[exercise.high_pulse==1].head()
"""
Explanation: Let's look at a histogram of the numeric variable - pulse:
End of explanation
"""
encoder = preprocessing.LabelEncoder()
exercise.diet.unique()
encoder.fit_transform(exercise.diet)
"""
Explanation: Obviously this is a simple example; we could just as easily do this with a one-line pandas call, but again the advantage of doing this through scikit-learn is that it can be part of the pipeline.
pandas one liner: exercise['high_pulse'] = exercise.pulse>120
Create numerical features from the Categorical:
Since we can't just plug the exercise data as is into a machine learning algorithm, we need to transform the data so that it only contains numerical data. There are two primary ways of doing this: one is to create a numeric value for each of the categories in a given column; the other is to create new features, very similar to what is called "creating dummy variables" in statistics.
LabelEncoder()
OneHotEncoder()
End of explanation
"""
exercise.diet.cat.codes.head()
"""
Explanation: OneHotEncoder expects the data to be numeric, so LabelEncoder would need to be applied first to convert everything to a numeric value.
Here is a way to do it in pandas. Because the 1 and 0 labels don't carry any intrinsic meaning, it doesn't matter which category gets the 1 label and which gets the 0 label.
End of explanation
"""
exercise_numeric_df = exercise.copy()
exercise.columns
exercise.head()
"""
Explanation: Let's make a deep copy of the exercise data frame so we can start modifying it.
End of explanation
"""
cat_columns = ['diet','kind', 'time']
# Pandas: exercise_numeric_df[cat_columns] = exercise[cat_columns].apply(lambda x: x.cat.codes)
exercise_numeric_df[cat_columns] = exercise[cat_columns].apply(lambda x: encoder.fit_transform(x))
exercise_numeric_df.head()
"""
Explanation: Let's identify the categorical columns:
End of explanation
"""
one_hot_encoder = preprocessing.OneHotEncoder(categorical_features=[2,4,5])
one_hot_encoder
exercise_numeric_encoded_matrix = one_hot_encoder.fit_transform(exercise_numeric_df.values)
exercise_numeric_encoded_matrix.toarray()[0:10,:]
exercise_numeric_encoded_matrix.shape
"""
Explanation: Now we need to convert the diet, kind, and time columns into "dummy variables":
End of explanation
"""
pd.get_dummies(exercise).head()
pd.get_dummies(exercise).shape
"""
Explanation: It's much easier to visualize what is happening using pandas, so I'll include that here as well.
End of explanation
"""
exercise_numeric_encoded_matrix
standard_scaler = preprocessing.StandardScaler(with_mean=True)
standard_scaler
exercise_numeric_encoded_matrix.toarray()[0:5,0:8]
exercise_data_scaled = standard_scaler.fit_transform(exercise_numeric_encoded_matrix.toarray()[:,0:8])
numpy.mean(exercise_data_scaled, axis=0)
numpy.linalg.norm(exercise_data_scaled[0,:])
normalizer = preprocessing.Normalizer()
normalizer
exercise_data_scaled_normalized = normalizer.fit_transform(exercise_data_scaled)
numpy.linalg.norm(exercise_data_scaled_normalized[0,:])
exercise_data_scaled_normalized[0:5,:]
"""
Explanation: So most of the time I do go through the pandas approach because it's more readable, and then at the end I'll use the .values attribute to get the values out of the data frame. Pandas might be the way to start exploring the data quickly, but once the algorithm is finalized, the scikit-learn pipeline can be what goes into production.
Scaling and Normalizing:
StandardScaler() - z score normalization - subtract the mean, divide by the std.
MinMaxScaler() - data is scaled to a fixed range, usually 0 to 1.
Normalizer() - normalized to have length 1
Z-score normalization standardizes the data to zero mean and unit variance, which is an assumption for many algorithms. Standardizing the features so that they are centered around 0 with a standard deviation of 1 is not only important if we are comparing measurements that have different units, but is also a general requirement for many machine learning algorithms. Min-max scaling transforms the data to a fixed range, which results in smaller standard deviations and can suppress the effect of outliers. Z-score standardization is performed more frequently than min-max scaling; however, min-max scaling is used in image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range).
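The arithmetic behind the two scalers is simple enough to write out directly (a NumPy sketch; note how the outlier dominates the min-max range):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])   # 10 is an outlier
# z-score: zero mean, unit (population) standard deviation
z = (x - x.mean()) / x.std()
# min-max: squashed into [0, 1]; the outlier pins the max,
# so the remaining values land near [0, 0.11, 0.22, 0.33]
mm = (x - x.min()) / (x.max() - x.min())
print(z.round(3))
print(mm.round(3))
```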
End of explanation
"""
from sklearn import pipeline
"""
Explanation: Example of pipeline:
preprocessing_pipeline = pipeline.Pipeline([('impute_missing', imputer), ('cat_to_numeric', label_encoder), ('one_hot_encoding', one_hot_encoding), ('standard_scaler', standard_scaler), ('normalizer', normalizer)])
preprocessing_pipeline.fit_transform(X)
FunctionTransformer() - Can create your own transformer function to include in the pipeline
The last item can be an estimator with fit_predict and score functions
End of explanation
"""
my_function = preprocessing.FunctionTransformer(func=lambda x: x.toarray()[:,0:8], \
validate=True, accept_sparse=True, pass_y=False)
preprocessing_pipeline = pipeline.Pipeline([('one_hot_encoding', one_hot_encoder), \
('my_function', my_function), \
('standard_scaler', standard_scaler), \
('normalizer', normalizer)])
preprocessing_pipeline.fit_transform(exercise_numeric_df.values)[0:5,:]
"""
Explanation: I was excited to find the FunctionTransformer functionality; it means we can create our own modification of the data. It is newly available in scikit-learn v0.17, which was just recently released.
End of explanation
"""
|
CNR-Engineering/TelTools | notebook/Handle Serafin files.ipynb | gpl-3.0 | from pyteltools.slf import Serafin
with Serafin.Read('../scripts_PyTelTools_validation/data/Yen/fis_yen-exp.slf', 'en') as resin:
# Read header (SerafinHeader is stored in `header` attribute of `Serafin` class)
resin.read_header()
# Display a summary
print(resin.header.summary())
# Get time (in seconds) and display it
resin.get_time()
print(resin.time)
"""
Explanation: Main classes to deal with:
- SerafinHeader
- Read (derived from Serafin)
- Write (derived from Serafin)
Read Telemac file
Read a binary Selafin file.
Automatic detection of precision (single or double) and endianness (big or little endian).
End of explanation
"""
import numpy as np
from pyteltools.slf import Serafin
with Serafin.Read('../scripts_PyTelTools_validation/data/Yen/fis_yen-exp.slf', 'en') as resin:
resin.read_header()
# Copy header
output_header = resin.header.copy()
# Change some header attributes if required
#output_header.toggle_endianness()
#output_header.to_single_precision()
values = np.empty((output_header.nb_var, output_header.nb_nodes), dtype=output_header.np_float_type)
with Serafin.Write('/tmp/test.slf', 'fr', overwrite=True) as resout:
resout.write_header(output_header)
# Copy all frames
for time_index, time in enumerate(resin.time):
for i, var_ID in enumerate(output_header.var_IDs):
values[i, :] = resin.read_var_in_frame(time_index, var_ID)
resout.write_entire_frame(output_header, time, values)
"""
Explanation: Write Telemac file
End of explanation
"""
|
rldotai/rlbench | rlbench/off_policy_comparison-short.ipynb | gpl-3.0 | def compute_value_dct(theta_lst, features):
return [{s: np.dot(theta, x) for s, x in features.items()} for theta in theta_lst]
def compute_values(theta_lst, X):
return [np.dot(X, theta) for theta in theta_lst]
def compute_errors(value_lst, error_func):
return [error_func(v) for v in value_lst]
def rmse_factory(true_values, d=None):
true_values = np.ravel(true_values)
# sensible default for weighting distribution
if d is None:
d = np.ones_like(true_values)
else:
d = np.ravel(d)
assert(len(d) == len(true_values))
# the actual root-mean square error
def func(v):
diff = true_values - v
return np.sqrt(np.mean(d*diff**2))
return func
"""
Explanation: True Values
The "true" values can be computed analytically in this case, so we did so.
We can also compute the distribution for weighting the errors.
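As a quick sanity check of the error function, here is `rmse_factory` in use (the definition is repeated inside the snippet so it runs standalone; the second call shows how the weighting distribution `d` zeroes out a state's contribution):

```python
import numpy as np

def rmse_factory(true_values, d=None):
    # same definition as in the cell above, repeated so this snippet is standalone
    true_values = np.ravel(true_values)
    d = np.ones_like(true_values) if d is None else np.ravel(d)
    def func(v):
        diff = true_values - v
        return np.sqrt(np.mean(d * diff**2))
    return func

err = rmse_factory([0.0, 1.0])
print(err(np.zeros(2)))            # sqrt((0 + 1)/2) ~= 0.7071

err_weighted = rmse_factory([0.0, 1.0], d=[1.0, 0.0])
print(err_weighted(np.zeros(2)))   # second state carries no weight: 0.0
```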
End of explanation
"""
# define the experiment
num_states = 8
num_features = 6
num_active = 3
num_runs = 10
max_steps = 10000
# set up environment
env = chicken.Chicken(num_states)
# Define the target policy
pol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})
# Define the behavior policy
pol_mu = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})
# state-dependent gamma
gm_dct = {s: 0.9 for s in env.states}
gm_dct[0] = 0
gm_func = parametric.MapState(gm_dct)
gm_p_func = parametric.MapNextState(gm_dct)
# set up algorithm parameters
update_params = {
'alpha': 0.02,
'beta': 0.002,
'gm': gm_func,
'gm_p': gm_p_func,
'lm': 0.0,
'lm_p': 0.0,
'interest': 1.0,
}
# Run all available algorithms
data = dict()
for name, alg in algos.algo_registry.items():
print(name)
run_lst = []
for i in range(num_runs):
print("Run: %d"%i, end="\r")
episode_data = dict()
# Want to use random features
phi = features.RandomBinary(num_features, num_active)
episode_data['features'] = {s: phi(s) for s in env.states}
# Set up the agent
_update_params = update_params.copy()
if name == 'ETD':
_update_params['alpha'] = 0.002
agent = OffPolicyAgent(alg(phi.length), pol_pi, pol_mu, phi, _update_params)
# Run the experiment
episode_data['steps'] = run_contextual(agent, env, max_steps)
run_lst.append(episode_data)
data[name] = run_lst
# True values & associated stationary distribution
theta_ls = np.array([ 0.4782969, 0.531441 , 0.59049, 0.6561, 0.729, 0.81, 0.9, 1.])
d_pi = np.ones(num_states)/num_states
D_pi = np.diag(d_pi)
# define the error/objective function
err_func = rmse_factory(theta_ls, d=d_pi)
baseline = err_func(np.zeros(num_states))
for name, experiment in data.items():
print(name)
errors = []
for episode in experiment:
        feats = episode['features']
        X = np.array([feats[k] for k in sorted(feats.keys())])
        steps = episode['steps']
thetas = list(pluck('theta', steps))
# compute the values at each step
val_lst = compute_values(thetas, X)
# compute the errors at each step
err_lst = compute_errors(val_lst, err_func)
errors.append(err_lst)
# calculate the average error
clipped_errs = np.clip(errors, 0, 100)
avg_err = np.mean(clipped_errs, axis=0)
# plot the errors
fig, ax = plt.subplots()
ax.plot(avg_err)
# format the graph
ax.set_ylim(1e-2, 2)
ax.axhline(baseline, c='red')
ax.set_yscale('log')
plt.show()
"""
Explanation: Comparing the Errors
For each algorithm, we get the associated experiment, and calculate the errors at each timestep, averaged over the runs performed with that algorithm.
End of explanation
"""
|
antoniomezzacapo/qiskit-tutorial | community/terra/qis_intro/entanglement_testing.ipynb | apache-2.0 | # Imports
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from qiskit.tools.visualization import plot_histogram, qx_color_scheme
from qiskit.wrapper.jupyter import *
from qiskit import IBMQ, Aer
from qiskit.backends.ibmq import least_busy
IBMQ.load_accounts()
# use simulator to learn more about entangled quantum states where possible
sim_backend = Aer.get_backend('qasm_simulator')
sim_shots = 8192
# use device to test entanglement
device_shots = 1024
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
device_coupling = device_backend.configuration()['coupling_map']
print("the best backend is " + device_backend.name() + " with coupling " + str(device_coupling))
"""
Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Testing Entanglement
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
Contributors
Jay Gambetta, Antonio Córcoles, Anna Phan
Entanglement
In creating entanglement, we introduced you to the quantum concept of entanglement. We made the quantum state $|\psi\rangle= (|00\rangle+|11\rangle)/\sqrt{2}$ and showed that (accounting for experimental noise) the system has perfect correlations in both the computational and superposition bases. This means if $q_0$ is measured in state $|0\rangle$, we know $q_1$ is in the same state; likewise, if $q_0$ is measured in state $|+\rangle$, we know $q_1$ is also in the same state.
To understand the implications of this in more detail, we will look at the following topics in this notebook:
* Two-Qubit Correlated Observables, where we learn more about two-qubit observables
* CHSH Inequality, where we use the observables to compare quantum mechanics to hidden variable models on two qubits
* Two-, Three-, and Four-Qubit GHZ States, where we create entangled states over more qubits
* Mermin's Test and the Three Box Game, where we compare quantum mechanics to hidden variable models on three qubits
Two-Qubit Correlated Observables<a id='section1'></a>
An observable is a Hermitian matrix where the real eigenvalues represent the outcome of the experiment, and the eigenvectors are the states to which the system is projected under measurement. That is, an observable $A$ is given by
$$ A = \sum_j a_j|a_j\rangle\langle a_j|$$
where $|a_j\rangle$ is the eigenvector of the observable with result $a_j$. The expectation value of this observable is given by
$$\langle A \rangle = \sum_j a_j |\langle \psi |a_j\rangle|^2 = \sum_j a_j \mathrm{Pr}(a_j|\psi).$$
We can see there is the standard relationship between average (expectation value) and probability.
For a two-qubit system, the following are important two-outcome ($\pm1$) single-qubit observables:
$$ Z= |0\rangle\langle 0| - |1\rangle\langle 1|$$
$$ X= |+\rangle\langle +| - |-\rangle\langle -|$$
These are also commonly referred to as the Pauli $Z$ and $X$ operators. These can be further extended to the two-qubit space to give
$$\langle I\otimes Z\rangle =\mathrm{Pr}(00|\psi) - \mathrm{Pr}(01|\psi) + \mathrm{Pr}(10|\psi)- \mathrm{Pr}(11|\psi)$$
$$\langle Z\otimes I\rangle =\mathrm{Pr}(00|\psi) + \mathrm{Pr}(01|\psi) - \mathrm{Pr}(10|\psi)- \mathrm{Pr}(11|\psi)$$
$$\langle Z\otimes Z\rangle =\mathrm{Pr}(00|\psi) - \mathrm{Pr}(01|\psi) - \mathrm{Pr}(10|\psi)+ \mathrm{Pr}(11|\psi)$$
$$\langle I\otimes X\rangle =\mathrm{Pr}(++|\psi) - \mathrm{Pr}(+-|\psi) + \mathrm{Pr}(-+|\psi)- \mathrm{Pr}(--|\psi)$$
$$\langle X\otimes I\rangle =\mathrm{Pr}(++|\psi) + \mathrm{Pr}(+-|\psi) - \mathrm{Pr}(-+|\psi)- \mathrm{Pr}(--|\psi)$$
$$\langle X\otimes X\rangle =\mathrm{Pr}(++|\psi) - \mathrm{Pr}(+-|\psi) - \mathrm{Pr}(-+|\psi)+ \mathrm{Pr}(--|\psi)$$
$$\langle Z\otimes X\rangle =\mathrm{Pr}(0+|\psi) - \mathrm{Pr}(0-|\psi) - \mathrm{Pr}(1+|\psi)+ \mathrm{Pr}(1-|\psi)$$
$$\langle X\otimes Z\rangle =\mathrm{Pr}(+0|\psi) - \mathrm{Pr}(+1|\psi) - \mathrm{Pr}(-0|\psi)+ \mathrm{Pr}(-1|\psi)$$
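As a numeric sanity check of the $\langle Z\otimes Z\rangle$ formula, the correlated expectation can be computed from outcome probabilities in a few lines of plain Python. This mirrors the observable dictionaries used in the code cells (`correlated_expectation` is a hypothetical helper, not a Qiskit API):

```python
def correlated_expectation(probs):
    """probs maps two-bit outcome strings ('00', '01', ...) to Pr(outcome).

    The parity of the two bits decides the sign of each probability,
    exactly as in the expansion of <Z (x) Z> above.
    """
    sign = lambda bits: 1 if bits.count('1') % 2 == 0 else -1
    return sum(sign(bits) * p for bits, p in probs.items())

# perfect Bell-state statistics in the computational basis
print(correlated_expectation({'00': 0.5, '11': 0.5}))   # 1.0
```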
End of explanation
"""
# Creating registers
q = QuantumRegister(2)
c = ClassicalRegister(2)
# quantum circuit to make an entangled bell state
bell = QuantumCircuit(q, c)
bell.h(q[0])
bell.cx(q[0], q[1])
# quantum circuit to measure q in the standard basis
measureZZ = QuantumCircuit(q, c)
measureZZ.measure(q[0], c[0])
measureZZ.measure(q[1], c[1])
bellZZ = bell+measureZZ
# quantum circuit to measure q in the superposition basis
measureXX = QuantumCircuit(q, c)
measureXX.h(q[0])
measureXX.h(q[1])
measureXX.measure(q[0], c[0])
measureXX.measure(q[1], c[1])
bellXX = bell+measureXX
# quantum circuit to measure ZX
measureZX = QuantumCircuit(q, c)
measureZX.h(q[0])
measureZX.measure(q[0], c[0])
measureZX.measure(q[1], c[1])
bellZX = bell+measureZX
# quantum circuit to measure XZ
measureXZ = QuantumCircuit(q, c)
measureXZ.h(q[1])
measureXZ.measure(q[0], c[0])
measureXZ.measure(q[1], c[1])
bellXZ = bell+measureXZ
circuits = [bellZZ,bellXX,bellZX,bellXZ]
circuit_drawer(bellZZ,style=qx_color_scheme())
circuit_drawer(bellXX,style=qx_color_scheme())
circuit_drawer(bellZX,style=qx_color_scheme())
circuit_drawer(bellXZ,style=qx_color_scheme())
%%qiskit_job_status
HTMLProgressBar()
job = execute(circuits, backend=device_backend, coupling_map=device_coupling, shots=device_shots)
result = job.result()
observable_first ={'00': 1, '01': -1, '10': 1, '11': -1}
observable_second ={'00': 1, '01': 1, '10': -1, '11': -1}
observable_correlated ={'00': 1, '01': -1, '10': -1, '11': 1}
print('IZ = ' + str(result.average_data(bellZZ,observable_first)))
print('ZI = ' + str(result.average_data(bellZZ,observable_second)))
print('ZZ = ' + str(result.average_data(bellZZ,observable_correlated)))
print('IX = ' + str(result.average_data(bellXX,observable_first)))
print('XI = ' + str(result.average_data(bellXX,observable_second)))
print('XX = ' + str(result.average_data(bellXX,observable_correlated)))
print('ZX = ' + str(result.average_data(bellZX,observable_correlated)))
print('XZ = ' + str(result.average_data(bellXZ,observable_correlated)))
"""
Explanation: Recall that to make the Bell state $|\psi\rangle= (|00\rangle+|11\rangle)/\sqrt{2}$ from the initial state $|00\rangle$, the quantum circuit first applies a Hadamard on $q_0$, followed by a CNOT from $q_0$ to $q_1$. Using Qiskit, this can done by using the script below to measure the above expectation values; we run four different experiments with measurements in the standard basis, superposition basis, and a combination of both.
End of explanation
"""
CHSH = lambda x : x[0]+x[1]+x[2]-x[3]
measure = [measureZZ, measureZX, measureXX, measureXZ]
# Theory
sim_chsh_circuits = []
sim_x = []
sim_steps = 30
for step in range(sim_steps):
    theta = 2.0*np.pi*step/sim_steps
bell_middle = QuantumCircuit(q,c)
bell_middle.ry(theta,q[0])
for m in measure:
sim_chsh_circuits.append(bell+bell_middle+m)
sim_x.append(theta)
job = execute(sim_chsh_circuits, backend=sim_backend, shots=sim_shots)
result = job.result()
sim_chsh = []
circ = 0
for x in range(len(sim_x)):
temp_chsh = []
for m in range(len(measure)):
temp_chsh.append(result.average_data(sim_chsh_circuits[circ].name,observable_correlated))
circ += 1
sim_chsh.append(CHSH(temp_chsh))
# Experiment
real_chsh_circuits = []
real_x = []
real_steps = 10
for step in range(real_steps):
    theta = 2.0*np.pi*step/real_steps
bell_middle = QuantumCircuit(q,c)
bell_middle.ry(theta,q[0])
for m in measure:
real_chsh_circuits.append(bell+bell_middle+m)
real_x.append(theta)
%%qiskit_job_status
HTMLProgressBar()
job = execute(real_chsh_circuits, backend=device_backend, coupling_map=device_coupling, shots=device_shots)
result = job.result()
real_chsh = []
circ = 0
for x in range(len(real_x)):
temp_chsh = []
for m in range(len(measure)):
temp_chsh.append(result.average_data(real_chsh_circuits[circ].name,observable_correlated))
circ += 1
real_chsh.append(CHSH(temp_chsh))
plt.plot(sim_x, sim_chsh, 'r-', real_x, real_chsh, 'bo')
plt.plot([0, 2*np.pi], [2, 2], 'b-')
plt.plot([0, 2*np.pi], [-2, -2], 'b-')
plt.grid()
plt.ylabel('CHSH', fontsize=20)
plt.xlabel(r'$Y(\theta)$', fontsize=20)
plt.show()
"""
Explanation: Here we see that for the state $|\psi\rangle= (|00\rangle+|11\rangle)/\sqrt{2}$, expectation values (within experimental errors) are
Observable | Expected value |Observable | Expected value|Observable | Expected value
------------- | ------------- | ------------- | ------------- | ------------- | -------------
ZZ | 1 |XX | 1 | ZX | 0
ZI | 0 |XI | 0 | XZ | 0
IZ | 0 |IX | 0 | |
How do we explain these observations? Here we introduce the concept of a hidden variable model. Suppose there is a hidden variable $\lambda$, and make these two assumptions:
Locality: No information can travel faster than the speed of light. There is a hidden variable $\lambda$ that defines all the correlations so that $$\langle A\otimes B\rangle = \sum_\lambda P(\lambda) A(\lambda) B(\lambda).$$
Realism: All observables have a definite value independent of the measurement ($A(\lambda)=\pm1$ etc.).
then can we describe these observations? --- The answer is yes!
Assume $\lambda$ consists of two bits, with each of the four values occurring randomly with probability 1/4. The following predefined table then reproduces all of the above observables:
$\lambda$ | Z (qubit 1) |Z (qubit 2) | X (qubit 1)| X (qubit 2)
------------- | ------------- | ------------- | ------------- | -------------
00 | 1 | 1 | 1 | 1
01 | 1 | 1 |-1 |-1
10 |-1 |-1 |-1 |-1
11 |-1 |-1 | 1 | 1
Thus, with a purely classical hidden variable model, we are able to reconcile the measured observations we had for this particular Bell state. However, there are some states for which this model will not hold. This was first observed by John Stewart Bell in 1964. He proposed a theorem that suggests that there are no hidden variables in quantum mechanics. At the core of Bell's theorem is the famous Bell inequality. Here we'll use a refined version of this inequality (known as the CHSH inequality, derived by John Clauser, Michael Horne, Abner Shimony, and Richard Holt in 1969) to demonstrate Bell's proposal.
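The table can be checked directly: averaging each product over the four equally likely values of $\lambda$ reproduces the measured expectation values.

```python
# Rows: (Z qubit1, Z qubit2, X qubit1, X qubit2) for each hidden value of lambda
table = {
    '00': (1, 1, 1, 1),
    '01': (1, 1, -1, -1),
    '10': (-1, -1, -1, -1),
    '11': (-1, -1, 1, 1),
}

def hv_expectation(f):
    """Average f over the four equally likely hidden-variable values."""
    return sum(f(*row) for row in table.values()) / len(table)

print(hv_expectation(lambda z1, z2, x1, x2: z1 * z2))  # 1.0  (ZZ)
print(hv_expectation(lambda z1, z2, x1, x2: x1 * x2))  # 1.0  (XX)
print(hv_expectation(lambda z1, z2, x1, x2: z1 * x2))  # 0.0  (ZX)
print(hv_expectation(lambda z1, z2, x1, x2: z1))       # 0.0  (ZI)
```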
CHSH Inequality <a id='section2'></a>
In the CHSH inequality, we measure the correlator of four observables: $A$ and $A'$ on $q_0$, and $B$ and $B'$ on $q_1$, which have eigenvalues $\pm 1$. The CHSH inequality says that no local hidden variable theory can have
$$|C|>2$$
where
$$C = \langle B\otimes A\rangle + \langle B\otimes A'\rangle+\langle B'\otimes A'\rangle-\langle B'\otimes A\rangle.$$
What would this look like with some hidden variable model under the locality and realism assumptions from above? $C$ then becomes
$$C = \sum_\lambda P(\lambda) \left\{ B(\lambda) \left[ A(\lambda)+A'(\lambda)\right] + B'(\lambda) \left[ A'(\lambda)-A(\lambda)\right] \right\},$$
and $[A(\lambda)+A'(\lambda)]=\pm 2$ (or $0$) while $[A'(\lambda)-A(\lambda)]=0$ (or $\pm 2$), respectively; whenever one bracket is $\pm 2$, the other vanishes. Each term in the average therefore has magnitude at most $2$, so $|C|\leq 2$, and noise will only make this smaller.
If we measure a number greater than 2, the above assumptions cannot be valid. (This is a perfect example of one of those astonishing counterintuitive ideas one must accept in the quantum world.) For simplicity, we choose these observables to be
$$C = \langle Z\otimes Z\rangle + \langle Z\otimes X\rangle+\langle X\otimes X\rangle-\langle X\otimes Z\rangle.$$
$Z$ is measured in the computational basis, and $X$ in the superposition basis ($H$ is applied before measurement). The input state $$|\psi(\theta)\rangle = I\otimes Y(\theta)\frac{|00\rangle + |11\rangle}{\sqrt{2}} = \frac{\cos(\theta/2)|00\rangle + \cos(\theta/2)|11\rangle+\sin(\theta/2)|01\rangle-\sin(\theta/2)|10\rangle}{\sqrt{2}}$$ is swept vs. $\theta$ (think of this as allowing us to prepare a set of states varying in the angle $\theta$).
Note that the following demonstration of CHSH is not loophole-free.
End of explanation
"""
print(real_chsh)
"""
Explanation: The resulting graph created by running the previous cell compares the simulated data (sinusoidal line) and the data from the real experiment. The graph also gives lines at $\pm 2$ for reference. Did you violate the hidden variable model?
Here is the saved CHSH data.
End of explanation
"""
# 2 - qubits
# quantum circuit to make GHZ state
q2 = QuantumRegister(2)
c2 = ClassicalRegister(2)
ghz = QuantumCircuit(q2, c2)
ghz.h(q2[0])
ghz.cx(q2[0],q2[1])
# quantum circuit to measure q in standard basis
measureZZ = QuantumCircuit(q2, c2)
measureZZ.measure(q2[0], c2[0])
measureZZ.measure(q2[1], c2[1])
ghzZZ = ghz+measureZZ
measureXX = QuantumCircuit(q2, c2)
measureXX.h(q2[0])
measureXX.h(q2[1])
measureXX.measure(q2[0], c2[0])
measureXX.measure(q2[1], c2[1])
ghzXX = ghz+measureXX
circuits2 = [ghzZZ, ghzXX]
circuit_drawer(ghzZZ,style=qx_color_scheme())
circuit_drawer(ghzXX,style=qx_color_scheme())
job2 = execute(circuits2, backend=sim_backend, shots=sim_shots)
result2 = job2.result()
plot_histogram(result2.get_counts(ghzZZ))
plot_histogram(result2.get_counts(ghzXX))
# 3 - qubits
# quantum circuit to make GHZ state
q3 = QuantumRegister(3)
c3 = ClassicalRegister(3)
ghz3 = QuantumCircuit(q3, c3)
ghz3.h(q3[0])
ghz3.cx(q3[0],q3[1])
ghz3.cx(q3[1],q3[2])
# quantum circuit to measure q in standard basis
measureZZZ = QuantumCircuit(q3, c3)
measureZZZ.measure(q3[0], c3[0])
measureZZZ.measure(q3[1], c3[1])
measureZZZ.measure(q3[2], c3[2])
ghzZZZ = ghz3+measureZZZ
measureXXX = QuantumCircuit(q3, c3)
measureXXX.h(q3[0])
measureXXX.h(q3[1])
measureXXX.h(q3[2])
measureXXX.measure(q3[0], c3[0])
measureXXX.measure(q3[1], c3[1])
measureXXX.measure(q3[2], c3[2])
ghzXXX = ghz3+measureXXX
circuits3 = [ghzZZZ, ghzXXX]
circuit_drawer(ghzZZZ,style=qx_color_scheme())
circuit_drawer(ghzXXX,style=qx_color_scheme())
job3 = execute(circuits3, backend=sim_backend, shots=sim_shots)
result3 = job3.result()
plot_histogram(result3.get_counts(ghzZZZ))
plot_histogram(result3.get_counts(ghzXXX))
# 4 - qubits
# quantum circuit to make GHZ state
q4 = QuantumRegister(4)
c4 = ClassicalRegister(4)
ghz4 = QuantumCircuit(q4, c4)
ghz4.h(q4[0])
ghz4.cx(q4[0],q4[1])
ghz4.cx(q4[1],q4[2])
# The next five gates implement a CNOT from qubit 2 to qubit 3: an inverted
# CNOT (3 -> 2) sandwiched between Hadamards reverses its direction,
# presumably to respect the device coupling map
ghz4.h(q4[3])
ghz4.h(q4[2])
ghz4.cx(q4[3],q4[2])
ghz4.h(q4[3])
ghz4.h(q4[2])
# quantum circuit to measure q in standard basis
measureZZZZ = QuantumCircuit(q4, c4)
measureZZZZ.measure(q4[0], c4[0])
measureZZZZ.measure(q4[1], c4[1])
measureZZZZ.measure(q4[2], c4[2])
measureZZZZ.measure(q4[3], c4[3])
ghzZZZZ = ghz4+measureZZZZ
measureXXXX = QuantumCircuit(q4, c4)
measureXXXX.h(q4[0])
measureXXXX.h(q4[1])
measureXXXX.h(q4[2])
measureXXXX.h(q4[3])
measureXXXX.measure(q4[0], c4[0])
measureXXXX.measure(q4[1], c4[1])
measureXXXX.measure(q4[2], c4[2])
measureXXXX.measure(q4[3], c4[3])
ghzXXXX = ghz4+measureXXXX
circuits4 = [ghzZZZZ, ghzXXXX]
circuit_drawer(ghzZZZZ,style=qx_color_scheme())
circuit_drawer(ghzXXXX,style=qx_color_scheme())
job4 = execute(circuits4, backend=sim_backend, shots=sim_shots)
result4 = job4.result()
plot_histogram(result4.get_counts(ghzZZZZ))
plot_histogram(result4.get_counts(ghzXXXX))
"""
Explanation: Despite the presence of loopholes in our demonstration, we can see that this experiment is compatible with quantum mechanics as a theory with no local hidden variables. See the original experimental demonstrations of this test with superconducting qubits here and here.
Two-, Three-, and Four-Qubit GHZ States<a id='section3'></a>
What does entanglement look like beyond two qubits? An important set of maximally entangled states are known as GHZ states (named after Greenberger, Horne, and Zeilinger). These are the states of the form
$|\psi\rangle = \left (|0...0\rangle+|1...1\rangle\right)/\sqrt{2}$. The Bell state previously described is merely a two-qubit version of a GHZ state. The next cells prepare GHZ states of two, three, and four qubits.
End of explanation
"""
# quantum circuit to make GHZ state
q3 = QuantumRegister(3)
c3 = ClassicalRegister(3)
ghz3 = QuantumCircuit(q3, c3)
ghz3.h(q3[0])
ghz3.cx(q3[0],q3[1])
ghz3.cx(q3[0],q3[2])
# quantum circuit to measure q in standard basis
measureZZZ = QuantumCircuit(q3, c3)
measureZZZ.measure(q3[0], c3[0])
measureZZZ.measure(q3[1], c3[1])
measureZZZ.measure(q3[2], c3[2])
ghzZZZ = ghz3+measureZZZ
circuits5 = [ghzZZZ]
circuit_drawer(ghzZZZ,style=qx_color_scheme())
job5 = execute(circuits5, backend=sim_backend, shots=sim_shots)
result5 = job5.result()
plot_histogram(result5.get_counts(ghzZZZ))
"""
Explanation: Mermin's Test and the Three Box Game<a id='section4'></a>
In case the violation of Bell's inequality (CHSH) by two qubits is not enough to convince you to believe in quantum mechanics, we can generalize to a more stringent set of tests with three qubits, which can give a single-shot violation (rather than taking averaged statistics). A well-known three-qubit case is Mermin's inequality, which is a test we can perform on GHZ states.
An example of a three-qubit GHZ state is $|\psi\rangle = \left (|000\rangle+|111\rangle\right)/\sqrt{2}$. You can see this is a further generalization of a Bell state and, if measured, should give $|000\rangle$ half the time and $|111 \rangle$ the other half of the time.
End of explanation
"""
MerminM = lambda x : x[0]*x[1]*x[2]*x[3]
observable ={'000': 1, '001': -1, '010': -1, '011': 1, '100': -1, '101': 1, '110': 1, '111': -1}
# quantum circuit to measure q XXX
measureXXX = QuantumCircuit(q3, c3)
measureXXX.h(q3[0])
measureXXX.h(q3[1])
measureXXX.h(q3[2])
measureXXX.measure(q3[0], c3[0])
measureXXX.measure(q3[1], c3[1])
measureXXX.measure(q3[2], c3[2])
ghzXXX = ghz3+measureXXX
# quantum circuit to measure q XYY
measureXYY = QuantumCircuit(q3, c3)
measureXYY.s(q3[1]).inverse()
measureXYY.s(q3[2]).inverse()
measureXYY.h(q3[0])
measureXYY.h(q3[1])
measureXYY.h(q3[2])
measureXYY.measure(q3[0], c3[0])
measureXYY.measure(q3[1], c3[1])
measureXYY.measure(q3[2], c3[2])
ghzXYY = ghz3+measureXYY
# quantum circuit to measure q YXY
measureYXY = QuantumCircuit(q3, c3)
measureYXY.s(q3[0]).inverse()
measureYXY.s(q3[2]).inverse()
measureYXY.h(q3[0])
measureYXY.h(q3[1])
measureYXY.h(q3[2])
measureYXY.measure(q3[0], c3[0])
measureYXY.measure(q3[1], c3[1])
measureYXY.measure(q3[2], c3[2])
ghzYXY = ghz3+measureYXY
# quantum circuit to measure q YYX
measureYYX = QuantumCircuit(q3, c3)
measureYYX.s(q3[0]).inverse()
measureYYX.s(q3[1]).inverse()
measureYYX.h(q3[0])
measureYYX.h(q3[1])
measureYYX.h(q3[2])
measureYYX.measure(q3[0], c3[0])
measureYYX.measure(q3[1], c3[1])
measureYYX.measure(q3[2], c3[2])
ghzYYX = ghz3+measureYYX
circuits6 = [ghzXXX, ghzYYX, ghzYXY, ghzXYY]
circuit_drawer(ghzXXX,style=qx_color_scheme())
circuit_drawer(ghzYYX,style=qx_color_scheme())
circuit_drawer(ghzYXY,style=qx_color_scheme())
circuit_drawer(ghzXYY,style=qx_color_scheme())
%%qiskit_job_status
HTMLProgressBar()
job6 = execute(circuits6, backend=device_backend, coupling_map=device_coupling, shots=device_shots)
result6 = job6.result()
temp=[]
temp.append(result6.average_data(ghzXXX,observable))
temp.append(result6.average_data(ghzYYX,observable))
temp.append(result6.average_data(ghzYXY,observable))
temp.append(result6.average_data(ghzXYY,observable))
print(MerminM(temp))
"""
Explanation: Suppose we have three independent systems, $\{A, B, C\}$, for which we can query two particular questions (observables), $X$ and $Y$. In each case, either query can give $+1$ or $-1$. Consider whether it is possible to choose some state of the three boxes such that we can satisfy the following four conditions: $X_A Y_B Y_C = 1$, $Y_A X_B Y_C = 1$, $Y_A Y_B X_C = 1$, and $X_A X_B X_C = -1$. Classically, this can be shown to be impossible... but a three-qubit GHZ state can in fact satisfy all four conditions.
End of explanation
"""
|
wcmckee/wcmckee | artcgallery.ipynb | mit | import os
import arrow
import getpass
raw = arrow.now()
myusr = getpass.getuser()
galpath = ('/home/{}/git/artcontrolme/galleries/'.format(myusr))
popath = ('/home/{}/git/artcontrolme/posts/'.format(myusr))
# Grouped as @staticmethod helpers so the methods can call each other
# via DayStuff.<name> without needing an instance.
class DayStuff():
    @staticmethod
    def getUsr():
        return getpass.getuser()
    @staticmethod
    def reTime():
        return raw
    @staticmethod
    def getYear():
        return raw.strftime("%Y")
    @staticmethod
    def getMonth():
        return raw.strftime("%m")
    @staticmethod
    def getDay():
        return raw.strftime("%d")
    @staticmethod
    def Fullday():
        return DayStuff.getYear() + '/' + DayStuff.getMonth() + '/' + DayStuff.getDay()
    @staticmethod
    def fixDay():
        return raw.strftime('%Y/%m/%d')
    @staticmethod
    def listPath():
        return os.listdir(popath)
    @staticmethod
    def galyrPath():
        return '{}{}'.format(galpath, DayStuff.getYear())
    @staticmethod
    def galmonPath():
        return '{}{}/{}'.format(galpath, DayStuff.getYear(), DayStuff.getMonth())
    @staticmethod
    def galdayPath():
        return '{}{}/{}/{}'.format(galpath, DayStuff.getYear(), DayStuff.getMonth(), DayStuff.getDay())
    @staticmethod
    def galleryList():
        return os.listdir(galpath)
    @staticmethod
    def galyrList():
        return os.listdir(DayStuff.galyrPath())
    @staticmethod
    def galmonList():
        return os.listdir(DayStuff.galmonPath())
    @staticmethod
    def galdayList():
        return os.listdir(DayStuff.galdayPath())
    @staticmethod
    def checkYear():
        if DayStuff.getYear() not in DayStuff.galleryList():
            os.mkdir(DayStuff.galyrPath())
    @staticmethod
    def checkMonth():
        if DayStuff.getMonth() not in DayStuff.galyrList():
            os.mkdir(DayStuff.galmonPath())
    @staticmethod
    def checkDay():
        if DayStuff.getDay() not in DayStuff.galmonList():
            os.mkdir(DayStuff.galdayPath())
#def makeDay
#DayStuff.getUsr()
#DayStuff.getYear()
#DayStuff.getMonth()
#DayStuff.getDay()
#DayStuf
#DayStuff.Fullday()
#DayStuff.postPath()
#DayStuff.
#DayStuff.galmonPath()
#DayStuff.galdayPath()
#DayStuff.galyrList()
#getDay()
#getMonth()
#galleryList()
#DayStuff.checkDay()
#DayStuff.galyrList()
#DayStuff.galmonList()
#DayStuff.checkDay()
#DayStuff.checl
#DayStuff.checkMonth()
#DayStuff.galyrList()
#listPath()
#if getYear() not in galleryList():
# os.mkdir('{}{}'.format(galleryPath(), getYear()))
#galleryPath()
#fixDay()
#galleryPath()
#Fullday()
#getDay()
#getYear()
#getMonth()
#getusr()
#yraw = raw.strftime("%Y")
#mntaw = raw.strftime("%m")
#dytaw = raw.strftime("%d")
#fulda = yraw + '/' + mntaw + '/' + dytaw
#fultim = fulda + ' ' + raw.strftime('%H:%M:%S')
#arnow = arrow.now()
#curyr = arnow.strftime('%Y')
#curmon = arnow.strftime('%m')
#curday = arnow.strftime('%d')
#galerdir = ('/home/wcmckee/github/artcontrolme/galleries/')
#galdir = os.listdir('/home/wcmckee/github/artcontrolme/galleries/')
#galdir
#mondir = os.listdir(galerdir + curyr)
#daydir = os.listdir(galerdir + curyr + '/' + curmon )
#daydir
#galdir#
#mondir
#daydir
#if curyr in galdir:
# pass
#else:
# os.mkdir(galerdir + curyr)
#if curmon in mondir:
# pass
#else:
# os.mkdir(galerdir + curyr + '/' + curmon)
#fulldaypath = (galerdir + curyr + '/' + curmon + '/' + curday)
#if curday in daydir:
# pass
#else:
# os.mkdir(galerdir + curyr + '/' + curmon + '/' + curday)
#galdir
#mondir
#daydir
#str(arnow.date())
#nameofblogpost = input('Post name: ')
"""
Explanation: <h3>artcontrol gallery</h3>
Create a gallery for artcontrol artwork, using a Year / Month / Day folder layout.
Create a blog post for each day there is new artwork: list the files for that day and create a markdown file in posts that contains the artwork, with the name of each piece followed by its versions (line, bw, color).
Write a message about each piece of artwork.
End of explanation
"""
#daypost = open('/home/{}/github/artcontrolme/posts/{}.md'.format(getusr(), nameofblogpost), 'w')
#daymetapost = open('/home/{}/github/artcontrolme/posts/{}.meta'.format(getUsr(), nameofblogpost), 'w')
#daymetapost.write('.. title: ' + nameofblogpost + ' \n' + '.. slug: ' + nameofblogpost + ' \n' + '.. date: ' + fultim + ' \n' + '.. author: wcmckee')
#daymetapost.close()
#todayart = os.listdir(fulldaypath)
#titlewor = list()
#titlewor
"""
Explanation: Check whether that blog post name already exists; if so, raise an error and ask for something more unique.
Input writers for each art piece: show the art, then ask for input, appending the input below the artwork. Give the art a name, which is appended above it.
"""
#galpath = ('/galleries/' + curyr + '/' + curmon + '/' + curday + '/')
#galpath
#todayart.sort()
#todayart
#for toar in todayart:
# daypost.write(('!' + '[' + toar.strip('.png') + '](' + galpath + toar + ')\n'))
#daypost.close()
"""
Explanation:
End of explanation
"""
|
mdiaz236/DeepLearningFoundations | sentiment-rnn/.ipynb_checkpoints/Sentiment RNN-checkpoint.ipynb | mit | import numpy as np
import tensorflow as tf
from collections import Counter
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# Create your dictionary that maps vocab words to integers here
unique_words = sorted(set(words))  # sorted, so the mapping is deterministic
vocab_to_int = {word: i for i, word in enumerate(unique_words, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
# (as a numpy array, so y[:, None] works later during training)
labels = np.array([1 if label == 'positive' else 0 for label in labels.split()])
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out that review with 0 length, keeping reviews and labels aligned
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
def pad_take(review, n=200):
    """Truncate to the first n tokens, or left-pad with zeros up to length n."""
    if len(review) >= n:
        return np.array(review[:n])
    else:
        return np.append(np.zeros(n - len(review), dtype=int), review)
seq_len = 200
features = np.array([pad_take(review, seq_len) for review in reviews_ints])
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab_to_int) + 1  # +1 to account for the 0 padding token
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
    # Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
rflamary/POT | docs/source/auto_examples/plot_otda_mapping_colors_images.ipynb | mit | # Authors: Remi Flamary <remi.flamary@unice.fr>
# Stanislas Chambon <stan.chambon@gmail.com>
#
# License: MIT License
import numpy as np
from scipy import ndimage
import matplotlib.pylab as pl
import ot
r = np.random.RandomState(42)
def im2mat(I):
"""Converts and image to matrix (one pixel per line)"""
return I.reshape((I.shape[0] * I.shape[1], I.shape[2]))
def mat2im(X, shape):
"""Converts back a matrix to an image"""
return X.reshape(shape)
def minmax(I):
return np.clip(I, 0, 1)
"""
Explanation: OT for image color adaptation with mapping estimation
OT for domain adaptation with image color adaptation [6] with mapping
estimation [8].
[6] Ferradans, S., Papadakis, N., Peyre, G., & Aujol, J. F. (2014). Regularized
discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3),
1853-1882.
[8] M. Perrot, N. Courty, R. Flamary, A. Habrard, "Mapping estimation for
discrete optimal transport", Neural Information Processing Systems (NIPS),
2016.
End of explanation
"""
# Loading images
# (note: scipy.ndimage.imread was removed in SciPy >= 1.2; with a recent
# SciPy, use imageio.imread or matplotlib's pl.imread instead)
I1 = ndimage.imread('../data/ocean_day.jpg').astype(np.float64) / 256
I2 = ndimage.imread('../data/ocean_sunset.jpg').astype(np.float64) / 256
X1 = im2mat(I1)
X2 = im2mat(I2)
# training samples
nb = 1000
idx1 = r.randint(X1.shape[0], size=(nb,))
idx2 = r.randint(X2.shape[0], size=(nb,))
Xs = X1[idx1, :]
Xt = X2[idx2, :]
"""
Explanation: Generate data
End of explanation
"""
# EMDTransport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
transp_Xs_emd = ot_emd.transform(Xs=X1)
Image_emd = minmax(mat2im(transp_Xs_emd, I1.shape))
# SinkhornTransport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=X1)
Image_sinkhorn = minmax(mat2im(transp_Xs_sinkhorn, I1.shape))
ot_mapping_linear = ot.da.MappingTransport(
mu=1e0, eta=1e-8, bias=True, max_iter=20, verbose=True)
ot_mapping_linear.fit(Xs=Xs, Xt=Xt)
X1tl = ot_mapping_linear.transform(Xs=X1)
Image_mapping_linear = minmax(mat2im(X1tl, I1.shape))
ot_mapping_gaussian = ot.da.MappingTransport(
mu=1e0, eta=1e-2, sigma=1, bias=False, max_iter=10, verbose=True)
ot_mapping_gaussian.fit(Xs=Xs, Xt=Xt)
X1tn = ot_mapping_gaussian.transform(Xs=X1) # use the estimated mapping
Image_mapping_gaussian = minmax(mat2im(X1tn, I1.shape))
"""
Explanation: Domain adaptation for pixel distribution transfer
End of explanation
"""
pl.figure(1, figsize=(6.4, 3))
pl.subplot(1, 2, 1)
pl.imshow(I1)
pl.axis('off')
pl.title('Image 1')
pl.subplot(1, 2, 2)
pl.imshow(I2)
pl.axis('off')
pl.title('Image 2')
pl.tight_layout()
"""
Explanation: Plot original images
End of explanation
"""
pl.figure(2, figsize=(6.4, 5))
pl.subplot(1, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 2], c=Xs)
pl.axis([0, 1, 0, 1])
pl.xlabel('Red')
pl.ylabel('Blue')
pl.title('Image 1')
pl.subplot(1, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 2], c=Xt)
pl.axis([0, 1, 0, 1])
pl.xlabel('Red')
pl.ylabel('Blue')
pl.title('Image 2')
pl.tight_layout()
"""
Explanation: Plot pixel values distribution
End of explanation
"""
pl.figure(3, figsize=(10, 5))
pl.subplot(2, 3, 1)
pl.imshow(I1)
pl.axis('off')
pl.title('Im. 1')
pl.subplot(2, 3, 4)
pl.imshow(I2)
pl.axis('off')
pl.title('Im. 2')
pl.subplot(2, 3, 2)
pl.imshow(Image_emd)
pl.axis('off')
pl.title('EmdTransport')
pl.subplot(2, 3, 5)
pl.imshow(Image_sinkhorn)
pl.axis('off')
pl.title('SinkhornTransport')
pl.subplot(2, 3, 3)
pl.imshow(Image_mapping_linear)
pl.axis('off')
pl.title('MappingTransport (linear)')
pl.subplot(2, 3, 6)
pl.imshow(Image_mapping_gaussian)
pl.axis('off')
pl.title('MappingTransport (gaussian)')
pl.tight_layout()
pl.show()
"""
Explanation: Plot transformed images
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Dissolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, the method used to calculate the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled?
End of explanation
"""
|
dlsun/symbulate | tutorial/gs_rv.ipynb | mit | from symbulate import *
%matplotlib inline
"""
Explanation: Getting Started with Symbulate
Section 2. Random Variables
<a id='contents'></a>
<Probability Spaces | Contents | Multiple random variables and joint distributions>
Every time you start Symbulate, you must first run (SHIFT-ENTER) the following commands.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
X.sim(10000)
"""
Explanation: This section provides an introduction to the Symbulate commands for simulating and summarizing values of a random variable.
<a id='counting_numb_heads'></a>
Example 2.1: Counting the number of Heads in a sequence of coin flips
In Example 1.7 we simulated the value of the number of Heads in a sequence of five coin flips. In that example, we simulated the individual coin flips (with 1 representing Heads and 0 Tails) and then used .apply() with the sum function to count the number of Heads. The following Symbulate commands achieve the same goal by defining an RV, X, which measures the number of Heads for each outcome.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: The number of Heads in five coin flips is a random variable: a function that takes as an input an outcome of a probability space and returns a real number. The first argument of RV is the probability space on which the RV is defined, e.g., sequences of five 1/0s. The second argument is the function which maps outcomes in the probability space to real numbers, e.g., the sum of the 1/0 values. Values of an RV can be simulated with .sim().
<a id='sum_of_two_dice'></a>
Exercise 2.2: Sum of two dice
After defining an appropriate BoxModel probability space, define an RV X representing the sum of two six-sided fair dice, and simulate 10000 values of X.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
sims = X.sim(10000)
sims.tabulate()
"""
Explanation: Solution
<a id='dist_of_five_flips'></a>
Example 2.3: Summarizing simulation results with tables and plots
In Example 2.1 we defined a RV, X, the number of Heads in a sequence of five coin flips. Simulated values of a random variable can be summarized using .tabulate() (with normalize=False (default) for frequencies (counts) or True for relative frequencies (proportions)).
End of explanation
"""
sims.plot()
"""
Explanation: The table above can be used to approximate the distribution of the number of Heads in five coin flips. The distribution of a random variable specifies the possible values that the random variable can take and their relative likelihoods. The distribution of a random variable can be visualized using .plot().
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: By default, .plot() displays relative frequencies (proportions). Use .plot(normalize=False) to display frequencies (counts).
<a id='dist_of_sum_of_two_dice'></a>
Exercise 2.4: The distribution of the sum of two dice rolls
Continuing Exercise 2.2 summarize with a table and a plot the distribution of the sum of two rolls of a fair six-sided die.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
sims = X.sim(10000)
sims.count_leq(3)/10000
"""
Explanation: Solution
<a id='prob_of_three_heads'></a>
Example 2.5: Estimating probabilities from simulations
There are several other tools for summarizing simulations, like the count functions. For example, the following commands approximate P(X <= 3) for Example 2.1, the probability that in five coin flips at most three of the flips land on Heads.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: <a id='prob_of_10_two_dice'></a>
Exercise 2.6: Estimating probabilities for the sum of two dice rolls
Continuing Exercise 2.2, estimate P(X >= 10), the probability that the sum of two fair six-sided dice is at least 10.
End of explanation
"""
Y = RV(Binomial(5, 0.5))
Y.sim(10000).plot()
"""
Explanation: Solution
<a id='sim_from_binom'></a>
Example 2.7: Specifying a RV by its distribution
The plot in Example 2.3 displays the approximate distribution of the random variable X, the number of Heads in five flips of a fair coin. This distribution is called the Binomial distribution with n=5 trials (flips) and a probability that each trial (flip) results in success (1 i.e. Heads) equal to p=0.5.
In the above examples the RV X was explicitly defined on the probability space P - i.e. the BoxModel for the outcomes (1 or 0) of the five individual flips - via the sum function. This setup implied a Binomial(5, 0.5) distribution for X.
In many situations the distribution of an RV is assumed or specified directly, without mention of the underlying probability space or the function defining the random variable. For example, a problem might state "let Y have a Binomial distribution with n=5 and p=0.5". The RV command can also be used to define a random variable by specifying its distribution, as in the following.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
X.sim(10000).plot(jitter=True)
Y = RV(Binomial(5, 0.5))
Y.sim(10000).plot(jitter=True)
"""
Explanation: By definition, a random variable must always be a function defined on a probability space. Specifying a random variable by specifying its distribution, as in Y = RV(Binomial(5, 0.5)), has the effect of defining the probability space to be the distribution of the random variable and the function defined on this space to be the identity (f(x) = x). However, it is more appropriate to think of such a specification as defining a random variable with the given distribution on an unspecified probability space through an unspecified function.
For example, the random variable $X$ in each of the following situations has a Binomial(5, 0.5) distribution.
- $X$ is the number of Heads in five flips of a fair coin
- $X$ is the number of Tails in five flips of a fair coin
- $X$ is the number of even numbers rolled in five rolls of a fair six-sided die
- $X$ is the number of boys in a random sample of five births
Each of these situations involves a different probability space (coins, dice, births) with a random variable which counts according to different criteria (Heads, Tails, evens, boys). These examples illustrate that knowledge that a random variable has a specific distribution (e.g. Binomial(5, 0.5)) does not necessarily convey any information about the underlying observational units or variable being measured. This is why we say a specification like X = RV(Binomial(5, 0.5)) defines a random variable X on an unspecified probability space via an unspecified function.
The following code compares the two methods for defining a random variable with a Binomial(5, 0.5) distribution. (The jitter=True option offsets the vertical lines so they do not coincide.)
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: In addition to Binomial, many other commonly used distributions are built in to Symbulate.
<a id='discrete_unif_dice'></a>
Exercise 2.8: Simulating from a discrete Uniform model
A random variable has a DiscreteUniform distribution with parameters a and b if it is equally likely to be any of the integers between a and b (inclusive). Let X be the roll of a fair six-sided die. Define an RV X by specifying an appropriate DiscreteUniform distribution, then simulate 10000 values of X and summarize its approximate distribution in a plot.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = 5 - X
Y.sim(10000).tabulate()
"""
Explanation: Solution
<a id='numb_tails'></a>
Example 2.9: Random variables versus distributions
Continuing Example 2.1, if X is the random variable representing number of Heads in five coin flips then Y = 5 - X is random variable representing the number of Tails.
End of explanation
"""
outcome = (1, 0, 0, 1, 0)
X(outcome)
Y(outcome)
"""
Explanation: It is important not to confuse a random variable with its distribution. Note that X and Y are two different random variables; they measure different things. For example, if the outcome of the flips is (1, 0, 0, 1, 0) then X = 2 but Y = 3. The following code illustrates how an RV can be called as a function to return its value for a particular outcome in the probability space.
End of explanation
"""
X.sim(10000).plot(jitter=True)
Y.sim(10000).plot(jitter=True)
"""
Explanation: In fact, in this example the values of X and Y are unequal for every outcome in the probability space. However, while X and Y are two different random variables, they do have the same distribution.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
X.sim(10000).mean()
"""
Explanation: See Example 2.7 for further comments about the difference between random variables and distributions.
<a id='expected_value_numb_of_heads'></a>
Example 2.10: Expected value of the number of heads in five coin flips
The expected value, or probability-weighted average value, of an RV can be approximated by simulating many values of the random variable and finding the sample mean (i.e. average) using .mean(). Continuing Example 2.1, the following code estimates the expected value of the number of Heads in five coin flips.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: Over many sets of five coin flips, we expect that there will be on average about 2.5 Heads per set. Note that 2.5 is not the number of Heads we would expect in a single set of five coin flips.
<a id='expected_value_sum_of_dice'></a>
Exercise 2.11: Expected value of the sum of two dice rolls
Continuing Exercise 2.2, approximate the expected value of the sum of two six-sided dice rolls. (Bonus: interpret the value as an appropriate long run average.)
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
sims = X.sim(10000)
sims.sd()
"""
Explanation: Solution
<a id='sd_numb_of_heads'></a>
Example 2.12: Standard deviation of the number of Heads in five coin flips
The expected value of an RV is its long run average, while the standard deviation of an RV measures the average degree to which individual values of the RV vary from the expected value. The standard deviation of an RV can be approximated from simulated values with .sd(). Continuing Example 2.1, the following code estimates the standard deviation of the number of Heads in five coin flips.
End of explanation
"""
sims.var()
"""
Explanation: Inspecting the plot in Example 2.3 we see there are many simulated values of 2 and 3, which are 0.5 units away from the expected value of 2.5. There are relatively fewer values of 0 and 5 which are 2.5 units away from the expected value of 2.5. Roughly, the simulated values are on average 1.1 units away from the expected value.
Variance is the square of the standard deviation and can be approximated with .var().
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: <a id='sd_sum_of_dice'></a>
Exercise 2.13: Standard deviation of the sum of two dice rolls
Continuing Exercise 2.2, approximate the standard deviation of the sum of two six-sided dice rolls. (Bonus: interpret the value.)
End of explanation
"""
X = RV(Normal(mean=69.1, sd=2.9))
sims = X.sim(10000)
"""
Explanation: Solution
<a id='dist_of_normal'></a>
Example 2.14: Continuous random variables
The RVs we have seen so far have been discrete. A discrete random variable can take at most countably many distinct values. For example, the number of Heads in five coin flips can only take values 0, 1, 2, 3, 4, 5.
A continuous random variable can take any value in some interval of real numbers. For example, if X represents the height of a randomly selected U.S. adult male then X is a continuous random variable. Many continuous random variables are assumed to have a Normal distribution. The following simulates values of the RV X assuming it has a Normal distribution with mean 69.1 inches and standard deviation 2.9 inches.
End of explanation
"""
sims.plot()
"""
Explanation: The same simulation tools are available for both discrete and continuous RVs. Calling .plot() for a continuous RV produces a histogram which displays frequencies of simulated values falling in interval "bins".
End of explanation
"""
X.sim(10000).plot(bins=60)
"""
Explanation: The number of bins can be set using the bins= option in .plot()
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: It is not recommended to use .tabulate() with continuous RVs as almost all simulated values will only occur once.
<a id='sim_unif'></a>
Exercise 2.15: Simulating from a (continuous) uniform distribution
The continuous analog of a BoxModel is a Uniform distribution which produces "equally likely" values in an interval with endpoints a and b. (What would you expect the plot of such a distribution to look like?)
Let X be a random variable which has a Uniform distribution on the interval [0, 1]. Define an appropriate RV and use simulation to display its approximate distribution. (Note that the underlying probability space is unspecified.)
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = X.apply(sqrt)
Y.sim(10000).plot()
"""
Explanation: Solution
<a id='sqrt_ex'></a>
Example 2.16: Transformations of random variables
In Example 2.9 we defined a new random variable Y = 5 - X (the number of Tails) by transforming the RV X (the number of Heads). A transformation of an RV is also an RV. If X is an RV, define a new random variable Y = g(X) using X.apply(g). The resulting Y behaves like any other RV.
Note that for arithmetic operations and many common math functions (such as exp, log, sin) you can simply call g(X) rather than X.apply(g).
Continuing Example 2.1, let $X$ represent the number of Heads in five coin flips and define the random variable $Y = \sqrt{X}$. The plot below approximates the distribution of $Y$; note that the possible values of $Y$ are 0, 1, $\sqrt{2}$, $\sqrt{3}$, 2, and $\sqrt{5}$.
End of explanation
"""
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = sqrt(X)
Y.sim(10000).plot()
"""
Explanation: The following code uses a g(X) definition rather than X.apply(g).
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: <a id='dif_normal'></a>
Exercise 2.17 Function of a RV that has a Uniform distribution
In Example 2.15 we encountered uniform distributions. Let $U$ be a random variable which has a Uniform distribution on the interval [0, 1]. Use simulation to display the approximate distribution of the random variable $Y = -\log(U)$.
End of explanation
"""
def number_switches(x):
count = 0
    for i in range(1, len(x)):
if x[i] != x[i-1]:
count += 1
return count
number_switches((1, 1, 1, 0, 0, 1, 0, 1, 1, 1))
"""
Explanation: Solution
<a id='Numb_distinct'></a>
Example 2.18: Number of switches between Heads and Tails in coin flips
RVs can be defined or transformed through user defined functions. As an example, let Y be the number of times a sequence of five coin flips switches between Heads and Tails (not counting the first toss). For example, for the outcome (0, 1, 0, 0, 1), a switch occurs on the second, third, and fifth flips, so Y = 3. We define the random variable Y by first defining a function that takes as an input a list of values and returns as an output the number of times a switch from the previous value occurs in the sequence. (Defining functions is one area where some familiarity with Python is helpful.)
End of explanation
"""
P = BoxModel([1, 0], size=5)
Y = RV(P, number_switches)
outcome = (0, 1, 0, 0, 1)
Y(outcome)
"""
Explanation: Now we can use the number_switches function to define the RV Y on the probability space corresponding to five flips of a fair coin.
End of explanation
"""
Y.sim(10000).plot()
"""
Explanation: An RV defined or transformed through a user-defined function behaves like any other RV.
End of explanation
"""
def number_distinct_values(x):
return len(set(x))
number_distinct_values((1, 1, 4))
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: <a id='Numb_alterations'></a>
Exercise 2.19: Number of distinct faces rolled in 6 rolls
Let X count the number of distinct faces rolled in 6 rolls of a fair six-sided die. For example, if the result of the rolls is (3, 3, 3, 3, 3, 3) then X = 1; if (6, 4, 5, 4, 6, 6) then X=3; etc. Use the number_distinct_values function defined below to define the RV X on an appropriate probability space. Then simulate values of X and plot its approximate distribution. (The number_distinct_values function takes as an input a list of values and returns as an output the number of distinct values in the list. We have used the Python functions set and len.)
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: Solution
Additional Exercises
<a id='ev_max_of_dice'></a>
Exercise 2.20: Max of two dice rolls
1) Approximate the distribution of the max of two six-sided dice rolls.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: 2) Approximate the probability that the max of two six-sided dice rolls is greater than or equal to 5.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: 3) Approximate the mean and standard deviation of the max of two six-sided dice rolls.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: Hint
Solution
<a id='var_transformed_unif'></a>
Exercise 2.21: Transforming a random variable
Let $X$ have a Uniform distribution on the interval [0, 3] and let $Y = 2\cos(X)$.
1) Approximate the distribution of $Y$.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: 2) Approximate the probability that $Y$ is less than 1.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: 3) Approximate the mean and standard deviation of $Y$.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: Hint
Solution
<a id='log_normal'></a>
Exercise 2.22: Function of a random variable.
Let $X$ be a random variable which has a Normal(0,1) distribution. Let $Y = e^X$.
1) Use simulation to display the approximate distribution of $Y$.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: 2) Approximate the probability that $Y$ is greater than 2.
End of explanation
"""
### Type your commands in this cell and then run using SHIFT-ENTER.
### Type your commands in this cell and then run using SHIFT-ENTER.
"""
Explanation: 3) Approximate the mean and standard deviation of $Y$.
End of explanation
"""
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
X.sim(10000)
"""
Explanation: Hint
Solution
<a id='hints'></a>
Hints for Additional Exercises
<a id='hint_ev_max_of_dice'></a>
Exercise 2.20: Hint
In Exercise 2.2 we simulated the sum of two six-sided dice rolls. Define an RV using the max function to return the larger of the two rolls. In Example 2.5 we estimated the probability of a random variable taking a value. In Example 2.10 we applied the .mean() function to return the long run expected average. In Example 2.12 we estimated the standard deviation.
Back
<a id='hint_var_transformed_unif'></a>
Exercise 2.21: Hint
Example 2.9 introduces transformations. In Exercise 2.15 we simulated an RV that had a Uniform distribution. In Example 2.5 we estimated the probabilities for a RV. In Example 2.10 we applied the .mean() function to return the long run expected average. In Example 2.12 we estimated the standard deviation.
Back
<a id='hint_log_normal'></a>
Exercise 2.22: Hint
In Example 2.14 we simulated an RV with a Normal distribution. In Example 2.9 we defined a random variable as a function of another random variable. In Example 2.5 we estimated the probability of a random variable taking a value. In Example 2.10 we applied the .mean() function to return the long run expected average. In Example 2.12 we estimated the standard deviation.
Back
Solutions to Exercises
<a id='sol_sum_of_two_dice'></a>
Exercise 2.2: Solution
End of explanation
"""
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
sims = X.sim(10000)
sims.tabulate(normalize=True)
sims.plot()
"""
Explanation: Back
<a id='sol_dist_of_sum_of_two_dice'></a>
Exercise 2.4: Solution
End of explanation
"""
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
sims = X.sim(10000)
sims.count_geq(10) / 10000
"""
Explanation: Back
<a id='sol_prob_of_10_two_dice'></a>
Exercise 2.6: Solution
End of explanation
"""
X = RV(DiscreteUniform(a=1, b=6))
X.sim(10000).plot(normalize=True)
"""
Explanation: Back
<a id='sol_expected_discrete_unif_dice'></a>
Exercise 2.8: Solution
End of explanation
"""
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
X.sim(10000).mean()
"""
Explanation: Back
<a id='sol_expected_value_sum_of_dice'></a>
Exercise 2.11: Solution
End of explanation
"""
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
X.sim(10000).sd()
"""
Explanation: Over many pairs of rolls of fair six-sided dice, we expect that on average the sum of the two rolls will be about 7.
Back
<a id='sol_sd_sum_of_dice'></a>
Exercise 2.13: Solution
End of explanation
"""
X = RV(Uniform(a=0, b=1))
X.sim(10000).plot()
"""
Explanation: Over many pairs of rolls of fair six-sided dice, the values of the sum are on average roughly 2.4 units away from the expected value of 7.
Back
<a id='sol_sim_unif'></a>
Exercise 2.15: Solution
End of explanation
"""
U = RV(Uniform(a=0, b=1))
Y = -log(U)
Y.sim(10000).plot()
"""
Explanation: Back
<a id='sol_dif_normal'></a>
Exercise 2.17: Solution
End of explanation
"""
def number_distinct_values(x):
return len(set(x))
P = BoxModel([1,2,3,4,5,6], size=6)
X = RV(P, number_distinct_values)
X.sim(10000).plot()
"""
Explanation: Note that the RV has an Exponential(1) distribution.
Back
<a id='sol_Numb_alterations'></a>
Exercise 2.19: Solution
End of explanation
"""
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, max)
sims = X.sim(10000)
sims.plot()
"""
Explanation: Back
<a id='sol_ev_max_of_dice'></a>
Exercise 2.20: Solution
1) Approximate the distribution of the max of two six-sided dice rolls.
End of explanation
"""
sims.count_geq(5)/10000
"""
Explanation: 2) Approximate the probability that the max of two six-sided dice rolls is greater than or equal to 5.
End of explanation
"""
sims.mean()
sims.sd()
"""
Explanation: 3) Approximate the mean and standard deviation of the max of two six-sided dice rolls.
End of explanation
"""
X = RV(Uniform(0, 3))
Y = 2 * cos(X)
sims = Y.sim(10000)
sims.plot()
"""
Explanation: Back
<a id='sol_var_transformed_unif'></a>
Exercise 2.21: Solution
1) Approximate the distribution of $Y$.
End of explanation
"""
X = RV(Uniform(0, 3))
Y = 2 * X.apply(cos)
sims = Y.sim(10000)
sims.plot()
"""
Explanation: Alternatively,
End of explanation
"""
sims.count_lt(1)/10000
"""
Explanation: 2) Approximate the probability that Y is less than 1.
End of explanation
"""
sims.mean()
sims.sd()
"""
Explanation: 3) Approximate the mean and standard deviation of Y.
End of explanation
"""
X = RV(Normal(0, 1))
Y = exp(X)
sims = Y.sim(10000)
sims.plot()
"""
Explanation: Back
<a id='sol_log_normal'></a>
Exercise 2.22: Solution
1) Use simulation to display the approximate distribution of Y.
End of explanation
"""
sims.count_gt(2)/10000
"""
Explanation: 2) Approximate the probability that Y is greater than 2.
End of explanation
"""
sims.mean()
sims.sd()
"""
Explanation: 3) Approximate the mean and standard deviation of Y.
End of explanation
"""
|
edhenry/notebooks | Breadth First Search.ipynb | mit | class Vertex:
def __init__(self, key):
# unique ID for vertex
self.id = key
# dict of connected nodes
self.connected_to = {}
def add_neighbor(self, neighbor, weight=0):
# Add an entry to the connected_to dict with a given
# weight
self.connected_to[neighbor] = weight
def __str__(self):
# override __str__ for printing
return(str(self.id) + ' connected to: ' + str([x.id for x in self.connected_to]))
def get_connections(self):
# return keys from connected_to dict
return self.connected_to.keys()
def get_id(self):
# return vertex id's
return self.id
    def get_weight(self, neighbor):
        # return the weight of the edge from this vertex to a given neighbor
        return self.connected_to[neighbor]
class Graph:
def __init__(self):
# dictionary of vertices
self.vertices_list = {}
# vertex count
self.num_vertices = 0
def add_vertex(self, key):
# increment counter when adding vertex
self.num_vertices = self.num_vertices + 1
new_vertex = Vertex(key)
self.vertices_list[key] = new_vertex
return new_vertex
def get_vertex(self, n):
# check if vertex exists, return if True
if n in self.vertices_list:
return self.vertices_list[n]
else:
return None
def __contains__(self, n):
# override __contains__ to list all vertices in Graph object
return n in self.vertices_list
def add_edge(self, s, f, cost=0):
        # add edge to graph; s = start node; f = end node
if s not in self.vertices_list:
nv = self.add_vertex(s)
if f not in self.vertices_list:
nv = self.add_vertex(f)
self.vertices_list[s].add_neighbor(self.vertices_list[f], cost)
def get_vertices(self):
# return keys of vertices in Graph
return self.vertices_list.keys()
def __iter__(self):
# override __iter__ to return iterable of vertices
return iter(self.vertices_list.values())
node_names = ["A", "B", "C",
"D", "E", "F",
"G"]
# Instantiate graph object and add vertices
g = Graph()
for i in node_names:
g.add_vertex(i)
# add a bunch of edges between vertices
g.add_edge('A','B')
g.add_edge('B','C')
g.add_edge('C','E')
g.add_edge('E','D')
g.add_edge('D','B')
g.add_edge('E','F')
g.add_edge('B','E')
for v in g:
for w in v.get_connections():
print("(%s, %s)" % (v.get_id(), w.get_id()))
# list our vertices
for i in node_names:
print(g.get_vertex(i))
from collections import deque
def breadth_first_search(starting_node, goal_node):
visited_nodes = set()
queue = deque([starting_node])
while len(queue) > 0:
node = queue.pop()
if node in visited_nodes:
continue
visited_nodes.add(node)
        if node.get_id() == goal_node.get_id():
return True
for n in node.connected_to:
if n not in visited_nodes:
queue.appendleft(n)
return False
"""
Explanation: Breadth First Search
In this notebook / blog post we will explore breadth first search, which is an algorithm for searching a given graph for the lowest cost path to a goal state $G$.
The cost is intentionally abstract, as it can be defined however you'd like: the fewest vertices traversed to reach $G$, or the lowest sum of the weights of the edges between a given state and the goal state $G$.
Some quick notational and fundamental review of the definition of a graph is below :
Vertex
End state, also called a node, of a given path through a graph $G$
Can also house additional information known as a payload
Edge
Also called an arc, the element that connects two vertices within a graph
Can be either one way or two way; one way = directed graph or digraph
Weight
A value assigned to an edge to denote "cost" of traversing that edge between two vertices
With these definitions we can formally define a graph $G$ as $G = (V,E)$.
$V$ is a set of vertices and $E$ is a set of edges, respectively.
Each edge is a tuple $(v,w)$ where $v,w \in V$; a weight can be added as a third component to represent the cost of traversing that edge.
Path
A sequence of edges that connect two vertices.
Formally defined as ${w_{1},w_{2},...,w_{n}}$ such that $(w_{i},w_{i+1}) \in E \ \ \ \forall 1 \le i \le n-1$
There are great libraries that provide Graph ADT's, but in this example we'll implement a Graph class ourselves. It will be useful in understanding a graph and how we can use it.
We'll define two classes to support this effort, a Vertex class, which will represent a given vertex being added to the graph, and a Graph class which holds the master list of vertices.
End of explanation
"""
breadth_first_search(g.get_vertex('A'), g.get_vertex('G'))
"""
Explanation: Using the breadth_first_search implementation that we've written above, we can ask whether a path exists between two nodes in the graph. Our function will return True or False accordingly.
End of explanation
"""
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
edges = [('A','B'),('B','C'),('C','E'),
('E','D'),('D','B'),('E','F'),
('B','E')]
networkx_graph = nx.Graph()
for node in node_names:
networkx_graph.add_node(node)
networkx_graph.add_edges_from(edges)
nx.draw_networkx(networkx_graph)
"""
Explanation: Past creating our own Vertex and Graph objects that we can use to assemble our own graphs, we can use libraries like NetworkX to create graphs and implement algorithms, like breadth first search, over them.
End of explanation
"""
networkx_graph_1 = nx.dense_gnm_random_graph(10,10)
nx.draw_networkx(networkx_graph_1)
"""
Explanation: But the library also has the added ability to generate random graphs for us. In this case, dense_gnm_random_graph() will generate a random graph $G_{n,m}$ where $n$ is the node count and $m$ is the number of edges randomly distributed throughout the graph.
End of explanation
"""
# quick hack to traverse the iterables returned
for node in nx.nodes(networkx_graph_1):
neighbors = []
for neighbor in nx.all_neighbors(networkx_graph_1, node):
neighbors.append(neighbor)
print("Node %s has neighbors : %s" % (node, neighbors))
"""
Explanation: The networkx library tends to return iterators for each object within the graph context, such as the graph itself, the nodes within a graph, or the neighbors of a particular node. This is useful because traversal algorithms such as breadth first search tend to operate iteratively.
nodes returns an iterable for the nodes in a graph
all_neighbors returns an interable for all neighbors of a passed in graph and specific node
End of explanation
"""
[[neighbor for neighbor in nx.all_neighbors(networkx_graph_1, node)] for node in nx.nodes(networkx_graph_1)]
"""
Explanation: Or, just because, here's a list comprehension that can do the same thing, and that actually shows off a bit of Python's nested list comprehension functionality. It is possible to push the print function into the list comprehension below, but that only works in Python 3+ and is not considered pythonic -- so I'm leaving it to return the nested arrays that a list comprehension normally would.
End of explanation
"""
print(list(nx.bfs_edges(networkx_graph_1, 0)))
"""
Explanation: The networkx library also includes many, many algorithm implementations already, so we can utilize its built-in breadth-first search algorithm, as we see below. We're able to print a traversal of the graph starting at node 0, showing the entire path taken through the graph.
End of explanation
"""
print(list(nx.dfs_edges(networkx_graph_1, 0)))
"""
Explanation: Much like we see above, the networkx library also has a built-in depth-first search algorithm that will traverse the graph and return a list of edge tuples in the order they are traversed. I will save a depth first search implementation over our custom Graph object for future posts.
End of explanation
"""
|
mrustl/flopy | examples/Notebooks/flopy3_multi-component_SSM.ipynb | bsd-3-clause | import os
import numpy as np
from flopy import modflow, mt3d, seawat
"""
Explanation: FloPy
Using FloPy to simplify the use of the MT3DMS SSM package
A multi-component transport demonstration
End of explanation
"""
nlay, nrow, ncol = 10, 10, 10
perlen = np.zeros((10), dtype=float) + 10  # the np.float alias was removed in NumPy 1.24; use the builtin
nper = len(perlen)
ibound = np.ones((nlay, nrow, ncol), dtype=int)  # likewise, use the builtin int instead of np.int
botm = np.arange(-1,-11,-1)
top = 0.
"""
Explanation: First, we will create a simple model structure
End of explanation
"""
model_ws = 'data'
modelname = 'ssmex'
mf = modflow.Modflow(modelname, model_ws=model_ws)
dis = modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol,
perlen=perlen, nper=nper, botm=botm, top=top,
steady=False)
bas = modflow.ModflowBas(mf, ibound=ibound, strt=top)
lpf = modflow.ModflowLpf(mf, hk=100, vka=100, ss=0.00001, sy=0.1)
oc = modflow.ModflowOc(mf)
pcg = modflow.ModflowPcg(mf)
rch = modflow.ModflowRch(mf)
"""
Explanation: Create the MODFLOW packages
End of explanation
"""
itype = mt3d.Mt3dSsm.itype_dict()
print(itype)
print(mt3d.Mt3dSsm.get_default_dtype())
ssm_data = {}
"""
Explanation: We'll track the cell locations for the SSM data using the MODFLOW boundary conditions.
Get a dictionary (dict) that has the SSM itype for each of the boundary types.
End of explanation
"""
ghb_data = {}
print(modflow.ModflowGhb.get_default_dtype())
ghb_data[0] = [(4, 4, 4, 0.1, 1.5)]
ssm_data[0] = [(4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[0].append((k, i, 0, 0.0, 100.0))
ssm_data[0].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))
ghb_data[5] = [(4, 4, 4, 0.25, 1.5)]
ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[5].append((k, i, 0, -0.5, 100.0))
ssm_data[5].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))
"""
Explanation: Add a general head boundary (ghb). The boundary head (bhead) is 0.1 for the first 5 stress periods, with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. From stress period 6 onward, bhead is increased to 0.25, the comp_1 concentration is reduced to 0.5, and the comp_2 concentration is increased to 200.0.
End of explanation
"""
wel_data = {}
print(modflow.ModflowWel.get_default_dtype())
wel_data[0] = [(0, 4, 8, 10.0)]
ssm_data[0].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
ssm_data[5].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
"""
Explanation: Add an injection well. The injection rate (flux) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING: since we changed the SSM data in stress period 6, we need to add the well to the ssm_data for stress period 6.
End of explanation
"""
ghb = modflow.ModflowGhb(mf, stress_period_data=ghb_data)
wel = modflow.ModflowWel(mf, stress_period_data=wel_data)
"""
Explanation: Add the GHB and WEL packages to the mf MODFLOW object instance.
End of explanation
"""
mt = mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=model_ws)
btn = mt3d.Mt3dBtn(mt, sconc=0, ncomp=2, sconc2=50.0)
adv = mt3d.Mt3dAdv(mt)
ssm = mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)
gcg = mt3d.Mt3dGcg(mt)
"""
Explanation: Create the MT3DMS packages
End of explanation
"""
print(ssm.stress_period_data.dtype)
"""
Explanation: Let's verify that stress_period_data has the right dtype
End of explanation
"""
swt = seawat.Seawat(modflowmodel=mf, mt3dmodel=mt,
modelname=modelname, namefile_ext='nam_swt', model_ws=model_ws)
vdf = seawat.SeawatVdf(swt, mtdnconc=0, iwtable=0, indense=-1)
mf.write_input()
mt.write_input()
swt.write_input()
"""
Explanation: Create the SEAWAT packages
End of explanation
"""
fname = modelname + '.vdf'
f = open(os.path.join(model_ws, fname),'r')
lines = f.readlines()
f.close()
f = open(os.path.join(model_ws, fname),'w')
for line in lines:
f.write(line)
for kper in range(nper):
f.write("-1\n")
f.close()
"""
Explanation: And finally, modify the vdf package to fix indense.
End of explanation
"""
|
agile-geoscience/striplog | docs/tutorial/01_Basics.ipynb | apache-2.0 | import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import striplog
striplog.__version__
# If you get a lot of warnings here, try running this block again.
from striplog import Legend, Lexicon, Interval, Component
legend = Legend.builtin('NSDOE')
lexicon = Lexicon.default()
"""
Explanation: Striplog basics
This notebooks looks at the main striplog object. For the basic objects it depends on, see Basic objects.
First, import anything we might need.
End of explanation
"""
from striplog import Striplog
print(Striplog.__doc__)
"""
Explanation: Making a striplog
End of explanation
"""
imgfile = "M-MG-70_14.3_135.9.png"
strip = Striplog.from_img(imgfile, 14.3, 135.9, legend=legend)
strip
strip.plot(legend, ladder=True, aspect=5)
"""
Explanation: Here is one of the images we will convert into striplogs:
<img src="M-MG-70_14.3_135.9.png" width=50 style="float:left" />
End of explanation
"""
print(strip[:5])
strip.unique
"""
Explanation: Representations of a striplog
There are several ways to inspect a striplog:
print prints the contents of the striplog
top shows us a list of the primary lithologies in the striplog, in order of cumulative thickness
plot makes a plot of the striplog with coloured bars
End of explanation
"""
depth = 0
list_of_int = []
for i in strip.unique:
list_of_int.append(Interval(depth, depth+i[1], components=[i[0]]))
depth += i[1]
Striplog(list_of_int).plot(legend, aspect=3)
"""
Explanation: It's easy enough to visualize this. Perhaps this should be a method...
End of explanation
"""
strip.plot()
"""
Explanation: Plot
If you call plot() on a Striplog you'll get random colours (one per rock type in the striplog), and preset aspect ratio of 10.
End of explanation
"""
strip.plot(legend, ladder=True, aspect=5, ticks=5)
hashy_csv = """colour,width,hatch,component colour,component grainsize,component lithology
#dddddd,1,---,grey,,siltstone,
#dddddd,2,xxx,,,anhydrite,
#dddddd,3,...,grey,vf-f,sandstone,
#dddddd,4,+--,,,dolomite,
#dddddd,5,ooo,,,volcanic,
#dddddd,6,---,red,,siltstone,
#dddddd,7,,,,limestone,
"""
hashy = Legend.from_csv(text=hashy_csv)
strip.plot(hashy, ladder=True, aspect=6, lw=1)
"""
Explanation: For more control, you can pass some parameters. You'll probably always want to pass a legend.
End of explanation
"""
print(strip[:3])
print(strip[-1].primary.summary())
for i in strip[:5]:
print(i.summary())
len(strip)
import numpy as np
np.array([d.top.z for d in strip[5:13]])
"""
Explanation: Manipulating a striplog
Again, the object is indexable and iterable.
End of explanation
"""
indices = [2,4,6]
strip[indices].plot(legend, aspect=5)
"""
Explanation: You can even index into it with an iterable, like a list of indices. The result is a striplog.
End of explanation
"""
strip[1:3]
rock = strip.find('sandstone')[1].components[0]
rock2 = Component({'lithology':'shale', 'colour':'grey'})
iv = Interval(top=300, base=350, description='', components=[rock, rock2])
strip[-3:-1] + Striplog([iv])
del strip[4]
strip.plot(aspect=5)
"""
Explanation: Slicing and indexing
Slicing returns a new striplog:
End of explanation
"""
print(strip.to_las3())
strip.source
csv_string = """top, base, lithology
200.000, 230.329, Anhydrite
230.329, 233.269, Grey vf-f sandstone
233.269, 234.700, Anhydrite
234.700, 236.596, Dolomite
236.596, 237.911, Red siltstone
237.911, 238.723, Anhydrite
238.723, 239.807, Grey vf-f sandstone
239.807, 240.774, Red siltstone
240.774, 241.122, Dolomite
241.122, 241.702, Grey siltstone
241.702, 243.095, Dolomite
243.095, 246.654, Grey vf-f sandstone
246.654, 247.234, Dolomite
247.234, 255.435, Grey vf-f sandstone
255.435, 258.723, Grey siltstone
258.723, 259.729, Dolomite
259.729, 260.967, Grey siltstone
260.967, 261.354, Dolomite
261.354, 267.041, Grey siltstone
267.041, 267.350, Dolomite
267.350, 274.004, Grey siltstone
274.004, 274.313, Dolomite
274.313, 294.816, Grey siltstone
294.816, 295.397, Dolomite
295.397, 296.286, Limestone
296.286, 300.000, Volcanic
"""
strip2 = Striplog.from_csv(text=csv_string, lexicon=lexicon)
"""
Explanation: Read or write CSV or LAS3
End of explanation
"""
Component.from_text('Grey vf-f sandstone', lexicon)
las3 = """~Lithology_Parameter
LITH . : Lithology source {S}
LITHD. MD : Lithology depth reference {S}
~Lithology_Definition
LITHT.M : Lithology top depth {F}
LITHB.M : Lithology base depth {F}
LITHN. : Lithology name {S}
~Lithology_Data | Lithology_Definition
200.000, 230.329, Anhydrite
230.329, 233.269, Grey vf-f sandstone
233.269, 234.700, Anhydrite
234.700, 236.596, Dolomite
236.596, 237.911, Red siltstone
237.911, 238.723, Anhydrite
238.723, 239.807, Grey vf-f sandstone
239.807, 240.774, Red siltstone
240.774, 241.122, Dolomite
241.122, 241.702, Grey siltstone
241.702, 243.095, Dolomite
243.095, 246.654, Grey vf-f sandstone
246.654, 247.234, Dolomite
247.234, 255.435, Grey vf-f sandstone
255.435, 258.723, Grey siltstone
258.723, 259.729, Dolomite
259.729, 260.967, Grey siltstone
260.967, 261.354, Dolomite
261.354, 267.041, Grey siltstone
267.041, 267.350, Dolomite
267.350, 274.004, Grey siltstone
274.004, 274.313, Dolomite
274.313, 294.816, Grey siltstone
294.816, 295.397, Dolomite
295.397, 296.286, Limestone
296.286, 300.000, Volcanic
"""
strip3 = Striplog.from_las3(las3, lexicon)
print(strip3)
"""
Explanation: Notice the warning about a missing term in the lexicon.
End of explanation
"""
|
angelmtenor/data-science-keras | simple_tickets.ipynb | mit | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import helper
import keras
helper.info_gpu()
helper.reproducible(seed=9) # setup reproducible results from run to run using Keras
%matplotlib inline
"""
Explanation: Simple Tickets prediction with DNN
Predicting the number of tickets requested by different clients
Supervised Learning. Regression
Data taken from Udacity's problem solving with advanced analytics
Here a neural network is effectively applied to a simple problem usually solved with linear models
End of explanation
"""
data_path = 'data/simple_tickets_data.csv'
target = ['Average Number of Tickets']
df = pd.read_csv(data_path)
print("rows: {} \ncolumns: {} \ntarget: {}".format(*df.shape, target))
"""
Explanation: 1. Data Processing and Exploratory Data Analysis
End of explanation
"""
df.head()
"""
Explanation: Show the data
End of explanation
"""
df.describe(percentiles=[0.5])
"""
Explanation: Numerical data
End of explanation
"""
helper.missing(df);
"""
Explanation: Missing values
End of explanation
"""
df = df.drop('Client ID', axis='columns')
"""
Explanation: Transform the data
Remove irrelevant features
End of explanation
"""
numerical = ['Number of Employees', 'Value of Contract', 'Average Number of Tickets']
df = helper.classify_data(df, target, numerical)
pd.DataFrame(dict(df.dtypes), index=["Type"])[df.columns].head() # show data types
"""
Explanation: Classify variables
End of explanation
"""
helper.show_categorical(df)
"""
Explanation: Visualize the data
Categorical features
End of explanation
"""
helper.show_target_vs_categorical(df, target)
"""
Explanation: Target vs Categorical features
End of explanation
"""
helper.show_numerical(df, kde=True)
"""
Explanation: Numerical features
End of explanation
"""
helper.show_target_vs_numerical(df, target, point_size=20)
"""
Explanation: Target vs Numerical features
End of explanation
"""
g = sns.PairGrid(df, y_vars=target, x_vars=['Number of Employees', 'Value of Contract'],
size=7, hue='Industry', aspect=1.5)
g.map(sns.regplot).add_legend();
#sns.pairplot(df, hue = 'Industry', vars=['Number of Employees', 'Value of Contract'] +
# targets, size = 4)
"""
Explanation: Target vs All features
End of explanation
"""
helper.show_correlation(df, target, figsize=(7,4))
"""
Explanation: These figures suggest that a simple linear model could be used to make accurate predictions
Correlation between numerical features and target
End of explanation
"""
droplist = [] # features to drop
# For the model 'data' instead of 'df'
data = df.copy()
data.drop(droplist, axis='columns', inplace=True)
data.head(3)
"""
Explanation: 2. Neural Network model
Select the features
End of explanation
"""
data, scale_param = helper.scale(data)
"""
Explanation: Scale numerical variables
Shift and scale numerical variables to a standard normal distribution. The scaling factors are saved to be used for predictions.
End of explanation
"""
data, dict_dummies = helper.replace_by_dummies(data, target)
model_features = [f for f in data if f not in target] # sorted neural network inputs
data.head(3)
"""
Explanation: Create dummy features
Replace categorical features (no target) with dummy features
End of explanation
"""
test_size = 0.2
random_state = 0
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, test_size=test_size, random_state=random_state)
# Separate the data into features and target (x=features, y=target)
x_train, y_train = train.drop(target, axis=1).values, train[target].values
x_test, y_test = test.drop(target, axis=1).values, test[target].values
"""
Explanation: Split the data into training and test set
Note on data leakage: the test set is hidden while training the model, but it was seen when preprocessing the dataset (scaling and dummy encoding)
No validation set will be used here (300 samples)
End of explanation
"""
print("train size \t X:{} \t Y:{}".format(x_train.shape, y_train.shape))
print("test size \t X:{} \t Y:{} ".format(x_test.shape, y_test.shape))
"""
Explanation: One-hot encoding of the output is not needed for regression
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
def build_nn(input_size, output_size, summary=False):
input_nodes = input_size
weights = keras.initializers.RandomNormal(stddev=0.001)
model = Sequential()
model.add(
Dense(
input_nodes,
input_dim=input_size,
activation='tanh',
kernel_initializer=weights,
bias_initializer=weights))
model.add(Dense(1, activation=None, kernel_initializer=weights, bias_initializer=weights))
model.compile(loss='mean_squared_error', optimizer='adam')
if summary:
model.summary()
return model
"""
Explanation: Build the Neural Network for Regression
End of explanation
"""
from time import time
model_path = os.path.join("models", "simple_tickets.h5")
def train_nn(model, x_train, y_train, validation_data=None, path=False, show=True):
"""
    Train the neural network model. If no validation_data is provided, a
    validation split will be used
"""
if show:
print('Training ....')
#callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=0)]
t0 = time()
history = model.fit(
x_train,
y_train,
epochs=30,
batch_size=16,
validation_split=0,
validation_data=validation_data,
callbacks=None,
verbose=0)
if show:
print("time: \t {:.1f} s".format(time() - t0))
helper.show_training(history)
if path:
model.save(path)
print("\nModel saved at", path)
return history
model = None
model = build_nn(x_train.shape[1], y_train.shape[1], summary=False)
train_nn(model, x_train, y_train, validation_data=None, path=model_path);
from sklearn.metrics import r2_score
ypred_train = model.predict(x_train)
#ypred_val = model.predict(x_val)
print('Training R2-score: \t{:.3f}'.format(r2_score(y_train, ypred_train)))
#print('Validation R2-score: \t{:.3f}'.format(r2_score(y_val, ypred_val)))
"""
Explanation: Train the Neural Network
End of explanation
"""
# model = keras.models.load_model(model_path)
# print("Model loaded:", model_path)
def evaluate_nn(model, x_test, y_test):
score = model.evaluate(x_test, y_test, verbose=0)
print("\nTest loss:\t\t{:.4f}".format(score))
ypred_test = model.predict(x_test)
print('\nTest R2-score: \t\t{:.3f}'.format(r2_score(y_test, ypred_test)))
evaluate_nn(model, x_test, y_test)
"""
Explanation: Evaluate the model
End of explanation
"""
def predict_nn(model, x_test, target):
""" Return a dataframe with actual and predicted targets in original scale"""
for t in target:
pred = model.predict(x_test, verbose=0)
restore_pred = pred * scale_param[t][1] + scale_param[t][0]
restore_pred = restore_pred.round()
restore_y = y_test * scale_param[t][1] + scale_param[t][0]
restore_y = restore_y.round()
pred_label = 'Predicted_' + t
error_label = t + ' error (%)'
pred_df = pd.DataFrame({
t: np.squeeze(restore_y),
pred_label: np.squeeze(restore_pred)
})
pred_df[error_label] = ((pred_df[pred_label] - pred_df[t]) * 100 / pred_df[t]).round(1)
print(t, ". Prediction error:")
print("Mean: \t {:.2f}%".format(pred_df[error_label].mean()))
print("Stddev: {:.2f}%".format(pred_df[error_label].std()))
sns.distplot(pred_df[error_label])
plt.xlim(xmin=-600, xmax=600)
return pred_df
pred_df = predict_nn(model, x_test, target)
pred_df.head()
"""
Explanation: Make predictions
End of explanation
"""
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(x_train, y_train)
pred = reg.predict(x_test)
t=target[0]
restore_pred = pred * scale_param[t][1] + scale_param[t][0]
restore_pred = restore_pred.round()
restore_y = y_test * scale_param[t][1] + scale_param[t][0]
restore_y = restore_y.round()
pred_label = 'Predicted_' + t
error_label = t + ' error (%)'
pred_df = pd.DataFrame({
t: np.squeeze(restore_y),
pred_label: np.squeeze(restore_pred)
})
pred_df[error_label] = ((pred_df[pred_label] - pred_df[t]) * 100 / pred_df[t]).round(1)
print(t, ". Prediction error:")
print("Mean: \t {:.2f}%".format(pred_df[error_label].mean()))
print("Stddev: {:.2f}%".format(pred_df[error_label].std()))
sns.distplot(pred_df[error_label])
plt.xlim(xmin=-600, xmax=600)
"""
Explanation: The prediction error (%) can be especially high when the number of tickets is low. The absolute error could be a better indicator here.
Compare with linear regression
End of explanation
"""
helper.ml_regression(x_train, y_train[:,0], x_test, y_test[:,0])
"""
Explanation: The mean and standard deviation of the error is higher with the linear model.
Compare with classical ML
End of explanation
"""
|
diegocavalca/Studies | books/deep-learning-with-python/2.1-a-first-look-at-a-neural-network.ipynb | cc0-1.0 | from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
"""
Explanation: A first look at a neural network
This notebook contains the code samples found in Chapter 2, Section 1 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
We will now take a look at a first concrete example of a neural network, which makes use of the Python library Keras to learn to classify
hand-written digits. Unless you already have experience with Keras or similar libraries, you will not understand everything about this
first example right away. You probably haven't even installed Keras yet. Don't worry, that is perfectly fine. In the next chapter, we will
review each element in our example and explain them in detail. So don't worry if some steps seem arbitrary or look like magic to you!
We've got to start somewhere.
The problem we are trying to solve here is to classify grayscale images of handwritten digits (28 pixels by 28 pixels), into their 10
categories (0 to 9). The dataset we will use is the MNIST dataset, a classic dataset in the machine learning community, which has been
around for almost as long as the field itself and has been very intensively studied. It's a set of 60,000 training images, plus 10,000 test
images, assembled by the National Institute of Standards and Technology (the NIST in MNIST) in the 1980s. You can think of "solving" MNIST
as the "Hello World" of deep learning -- it's what you do to verify that your algorithms are working as expected. As you become a machine
learning practitioner, you will see MNIST come up over and over again, in scientific papers, blog posts, and so on.
The MNIST dataset comes pre-loaded in Keras, in the form of a set of four Numpy arrays:
End of explanation
"""
train_images.shape
len(train_labels)
train_labels
"""
Explanation: train_images and train_labels form the "training set", the data that the model will learn from. The model will then be tested on the
"test set", test_images and test_labels. Our images are encoded as Numpy arrays, and the labels are simply an array of digits, ranging
from 0 to 9. There is a one-to-one correspondence between the images and the labels.
Let's have a look at the training data:
End of explanation
"""
test_images.shape
len(test_labels)
test_labels
"""
Explanation: Let's have a look at the test data:
End of explanation
"""
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
"""
Explanation: Our workflow will be as follow: first we will present our neural network with the training data, train_images and train_labels. The
network will then learn to associate images and labels. Finally, we will ask the network to produce predictions for test_images, and we
will verify if these predictions match the labels from test_labels.
Let's build our network -- again, remember that you aren't supposed to understand everything about this example just yet.
End of explanation
"""
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
"""
Explanation: The core building block of neural networks is the "layer", a data-processing module which you can conceive as a "filter" for data. Some
data comes in, and comes out in a more useful form. Precisely, layers extract representations out of the data fed into them -- hopefully
representations that are more meaningful for the problem at hand. Most of deep learning really consists of chaining together simple layers
which will implement a form of progressive "data distillation". A deep learning model is like a sieve for data processing, made of a
succession of increasingly refined data filters -- the "layers".
Here our network consists of a sequence of two Dense layers, which are densely-connected (also called "fully-connected") neural layers.
The second (and last) layer is a 10-way "softmax" layer, which means it will return an array of 10 probability scores (summing to 1). Each
score will be the probability that the current digit image belongs to one of our 10 digit classes.
To make our network ready for training, we need to pick three more things, as part of "compilation" step:
A loss function: this is how the network will be able to measure how good a job it is doing on its training data, and thus how it will be able to steer itself in the right direction.
able to steer itself in the right direction.
An optimizer: this is the mechanism through which the network will update itself based on the data it sees and its loss function.
Metrics to monitor during training and testing. Here we will only care about accuracy (the fraction of the images that were correctly
classified).
The exact purpose of the loss function and the optimizer will be made clear throughout the next two chapters.
End of explanation
"""
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
"""
Explanation: Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in
the [0, 1] interval. Previously, our training images for instance were stored in an array of shape (60000, 28, 28) of type uint8 with
values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.
End of explanation
"""
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
"""
Explanation: We also need to categorically encode the labels, a step which we explain in chapter 3:
End of explanation
"""
network.fit(train_images, train_labels, epochs=5, batch_size=128)
"""
Explanation: We are now ready to train our network, which in Keras is done via a call to the fit method of the network:
we "fit" the model to its training data.
End of explanation
"""
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
"""
Explanation: Two quantities are being displayed during training: the "loss" of the network over the training data, and the accuracy of the network over
the training data.
We quickly reach an accuracy of 0.989 (i.e. 98.9%) on the training data. Now let's check that our model performs well on the test set too:
End of explanation
"""
|
Alexoner/mooc | cs231n/assignment3/q3.ipynb | apache-2.0 | # A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from time import time
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Transfer learning
In the previous exercise we introduced the TinyImageNet-100-A dataset, and combined a handful of pretrained models on this dataset to improve our classification performance.
In this exercise we will explore several ways to adapt one of these same pretrained models to the TinyImageNet-100-B dataset, which does not share any images or object classes with TinyImage-100-A. We will see that we can use a pretrained classfier together with a small amount of training data from TinyImageNet-100-B to achieve reasonable performance on the TinyImageNet-100-B validation set.
End of explanation
"""
# Load the TinyImageNet-100-B dataset
from cs231n.data_utils import load_tiny_imagenet, load_models
tiny_imagenet_b = 'cs231n/datasets/tiny-imagenet-100-B'
class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_b)
# Zero-mean the data
mean_img = np.mean(X_train, axis=0)
X_train -= mean_img
X_val -= mean_img
X_test -= mean_img
# We will use a subset of the TinyImageNet-B training data
mask = np.random.choice(X_train.shape[0], size=5000, replace=False)
X_train = X_train[mask]
y_train = y_train[mask]
# Load a pretrained model; it is a five layer convnet.
models_dir = 'cs231n/datasets/tiny-100-A-pretrained'
model = load_models(models_dir)['model1']
"""
Explanation: Load data and model
You should already have downloaded the TinyImageNet-100-A and TinyImageNet-100-B datasets along with the pretrained models. Run the cell below to load (a subset of) the TinyImageNet-100-B dataset and one of the models that was pretrained on TinyImageNet-100-A.
TinyImageNet-100-B contains 50,000 training images in total (500 per class for all 100 classes) but for this exercise we will use only 5,000 training images (50 per class on average).
End of explanation
"""
for names in class_names:
print ' '.join('"%s"' % name for name in names)
"""
Explanation: TinyImageNet-100-B classes
In the previous assignment we printed out a list of all classes in TinyImageNet-100-A. We can do the same on TinyImageNet-100-B; if you compare with the list in the previous exercise you will see that there is no overlap between the classes in TinyImageNet-100-A and TinyImageNet-100-B.
End of explanation
"""
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(class_names), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(y_train == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = X_train[train_idx] + mean_img
img = img.transpose(1, 2, 0).astype('uint8')
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(class_names[class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
"""
Explanation: Visualize Examples
Similar to the previous exercise, we can visualize examples from the TinyImageNet-100-B dataset. The images are similar to TinyImageNet-100-A, but the images and classes in the two datasets are disjoint.
End of explanation
"""
from cs231n.classifiers.convnet import five_layer_convnet
# These should store extracted features for the training and validation sets
# respectively.
#
# More concretely, X_train_feats should be an array of shape
# (X_train.shape[0], 512) where X_train_feats[i] is the 512-dimensional
# feature vector extracted from X_train[i] using model.
#
# Similarly X_val_feats should have shape (X_val.shape[0], 512) and
# X_val_feats[i] should be the 512-dimensional feature vector extracted from
# X_val[i] using model.
X_train_feats = None
X_val_feats = None
# Use our pre-trained model to extract features on the subsampled training set
# and the validation set.
################################################################################
# TODO: Use the pretrained model to extract features for the training and #
# validation sets for TinyImageNet-100-B. #
# #
# HINT: Similar to computing probabilities in the previous exercise, you #
# should split the training and validation sets into small batches to avoid #
# using absurd amounts of memory. #
################################################################################
X_train_feats = five_layer_convnet(X_train, model, y=None, reg=0.0,
extract_features=True)
X_val_feats = five_layer_convnet(X_val, model, y=None, reg=0.0,
extract_features=True)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
"""
Explanation: Extract features
ConvNets tend to learn generalizable high-level image features. For the five layer ConvNet architecture, we will use the (rectified) activations of the first fully-connected layer as our high-level image features.
Open the file cs231n/classifiers/convnet.py and modify the five_layer_convnet function to return features when the extract_features flag is True. This should be VERY simple.
Once you have done that, fill in the cell below, which should use the pretrained model in the model variable to extract features from all images in the training and validation sets.
End of explanation
"""
from cs231n.classifiers.k_nearest_neighbor import KNearestNeighbor
# Predicted labels for X_val using a k-nearest-neighbor classifier trained on
# the features extracted from X_train. knn_y_val_pred[i] = c indicates that
# the kNN classifier predicts that X_val[i] has label c.
knn_y_val_pred = None
################################################################################
# TODO: Use a k-nearest neighbor classifier to compute knn_y_val_pred. #
# You may need to experiment with k to get the best performance. #
################################################################################
knn = KNearestNeighbor()
knn.train(X_train_feats, y_train)
knn_y_val_pred = knn.predict(X_val_feats, k=25)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
print 'Validation set accuracy: %f' % np.mean(knn_y_val_pred == y_val)
"""
Explanation: kNN with ConvNet features
A simple way to implement transfer learning is to use a k-nearest neighbor classifier. However, instead of computing the distance between images using their pixel values as we did in Assignment 1, we will say that the distance between a pair of images is equal to the L2 distance between their feature vectors extracted using our pretrained ConvNet.
Implement this idea in the cell below. You can use the KNearestNeighbor class in the file cs231n/classifiers/k_nearest_neighbor.py.
End of explanation
"""
dists = knn.compute_distances_no_loops(X_val_feats)
num_imgs = 5
neighbors_to_show = 6
query_idxs = np.random.randint(X_val.shape[0], size=num_imgs)
next_subplot = 1
first_row = True
for query_idx in query_idxs:
query_img = X_val[query_idx] + mean_img
query_img = query_img.transpose(1, 2, 0).astype('uint8')
plt.subplot(num_imgs, neighbors_to_show + 1, next_subplot)
plt.imshow(query_img)
plt.gca().axis('off')
if first_row:
plt.title('query')
next_subplot += 1
o = np.argsort(dists[query_idx])
for i in xrange(neighbors_to_show):
img = X_train[o[i]] + mean_img
img = img.transpose(1, 2, 0).astype('uint8')
plt.subplot(num_imgs, neighbors_to_show + 1, next_subplot)
plt.imshow(img)
plt.gca().axis('off')
if first_row:
plt.title('neighbor %d' % (i + 1))
next_subplot += 1
first_row = False
"""
Explanation: Visualize neighbors
Recall that the kNN classifier computes the distance between all of its training instances and all of its test instances. We can use this distance matrix to help understand what the ConvNet features care about; specifically, we can select several random images from the validation set and visualize their nearest neighbors in the training set.
You will see that the nearest neighbors are often quite far away from each other in pixel space; for example, two images that show the same object from different perspectives may appear nearby in ConvNet feature space.
Since the following cell selects random validation images, you can run it several times to get different results.
End of explanation
"""
from cs231n.classifiers.linear_classifier import Softmax
softmax_y_train_pred = None
softmax_y_val_pred = None
################################################################################
# TODO: Train a softmax classifier to predict a TinyImageNet-100-B class from #
# features extracted from our pretrained ConvNet. Use this classifier to make #
# predictions for the TinyImageNet-100-B training and validation sets, and #
# store them in softmax_y_train_pred and softmax_y_val_pred. #
# #
# You may need to experiment with number of iterations, regularization, and #
# learning rate in order to get good performance. The softmax classifier #
# should achieve a higher validation accuracy than the kNN classifier. #
################################################################################
softmax = Softmax()
# NOTE: the input X of the softmax classifier is an array of shape D x N
softmax.train(X_train_feats.T, y_train,
learning_rate=1e-2, reg=1e-4, num_iters=1000)
y_train_pred = softmax.predict(X_train_feats.T)
y_val_pred = softmax.predict(X_val_feats.T)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
print y_val_pred.shape, y_train_pred.shape
train_acc = np.mean(y_train == y_train_pred)
val_acc = np.mean(y_val_pred == y_val)
print train_acc, val_acc
"""
Explanation: Softmax on ConvNet features
Another way to implement transfer learning is to train a linear classifier on top of the features extracted from our pretrained ConvNet.
In the cell below, train a softmax classifier on the features extracted from the training set of TinyImageNet-100-B and use this classifier to predict on the validation set for TinyImageNet-100-B. You can use the Softmax class in the file cs231n/classifiers/linear_classifier.py.
End of explanation
"""
from cs231n.classifier_trainer import ClassifierTrainer
# Make a copy of the pretrained model
model_copy = {k: v.copy() for k, v in model.iteritems()}
# Initialize the weights of the last affine layer using the trained weights from
# the softmax classifier above
model_copy['W5'] = softmax.W.T.copy().astype(model_copy['W5'].dtype)
model_copy['b5'] = np.zeros_like(model_copy['b5'])
# Fine-tune the model. You will need to adjust the training parameters to get good results.
trainer = ClassifierTrainer()
learning_rate = 1e-4
reg = 1e-1
dropout = 0.5
num_epochs = 2
finetuned_model = trainer.train(X_train, y_train, X_val, y_val,
model_copy, five_layer_convnet,
learning_rate=learning_rate, reg=reg, update='rmsprop',
dropout=dropout, num_epochs=num_epochs, verbose=True)[0]
"""
Explanation: Fine-tuning
We can improve our classification results on TinyImageNet-100-B further by fine-tuning our ConvNet. In other words, we will train a new ConvNet with the same architecture as our pretrained model, and use the weights of the pretrained model as an initialization to our new model.
Usually when fine-tuning you would re-initialize the weights of the final affine layer randomly, but in this case we will initialize the weights of the final affine layer using the weights of the trained softmax classifier from above.
In the cell below, use fine-tuning to improve your classification performance on TinyImageNet-100-B. You should be able to outperform the softmax classifier from above using fewer than 5 epochs over the training data.
You will need to adjust the learning rate and regularization to achieve good fine-tuning results.
End of explanation
"""
|
willettk/insight | notebooks/Probability tutorial.ipynb | apache-2.0 | def compare(analytic,N,f):
errval = err(f,N)
successes = sum(f)
print "Analytic prediction: {:.0f}%.".format(analytic*100.)
print "Monte Carlo: {:.0f} +- {:.0f}%.".format(successes/float(N)*100.,errval*100.)
def err(fx,N):
# http://www.northeastern.edu/afeiguin/phys5870/phys5870/node71.html
f2 = [x*x for x in fx]
return np.sqrt((1./N * sum(f2) - (1./N * sum(fx))**2)/float(N))
"""
Explanation: Probability tutorial
Problems by Peter Komar
18 Jul 2016
Sample problems from Peter Komar; after trying to solve everything analytically, run Monte Carlo simulations to see if I'm right.
End of explanation
"""
import numpy as np
from numpy.random import binomial
# Default is 1000 trials each
N = 1000
p_rain_sat = 0.5
p_rain_sun = 0.2
p_light_sat = 0.9
p_heavy_sat = 0.1
p_light_sun = 1.0
p_heavy_sun = 0.0
f = []
for i in range(N):
# Light rain on Saturday?
rain_sat = binomial(1,p_rain_sat)
if rain_sat:
light_sat = binomial(1,p_light_sat)
else:
light_sat = 0
# Light rain on Sunday?
rain_sun = binomial(1,p_rain_sun)
if rain_sun:
light_sun = binomial(1,p_light_sun)
else:
light_sun = 0
if light_sat and light_sun:
f.append(1)
else:
f.append(0)
compare(9/100.,N,f)
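For reference, the analytic value multiplies the independent Saturday and Sunday events:

```latex
P(\text{light both days})
  = \big(\underbrace{0.5 \times 0.9}_{\text{light Sat}}\big)
    \times
    \big(\underbrace{0.2 \times 1.0}_{\text{light Sun}}\big)
  = 0.45 \times 0.2 = 0.09
```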
"""
Explanation: Forward probability
Question 1
Q1: What is the probability of light rain on both days?
End of explanation
"""
f = []
for i in range(N):
# Light rain on either day?
rain_sat = binomial(1,p_rain_sat)
rain_sun = binomial(1,p_rain_sun)
if rain_sat or rain_sun:
f.append(1)
else:
f.append(0)
compare(60/100.,N,f)
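For reference, the analytic value is the complement of no rain on either day:

```latex
P(\text{rain during weekend}) = 1 - (1 - 0.5)(1 - 0.2) = 1 - 0.4 = 0.6
```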
"""
Explanation: Q2: What is the probability of rain during the weekend?
End of explanation
"""
from random import randint
f = []
for i in range(N):
# Draw candy from bag 1
r1 = randint(0,6)
if r1 < 3:
candy1 = "taffy"
else:
candy1 = "caramel"
# Draw candy from bag 2
r2 = randint(0,5)
if r2 == 0:
candy2 = "taffy"
else:
candy2 = "caramel"
if candy1 is not candy2:
f.append(1)
else:
f.append(0)
compare(19/42.,N,f)
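For reference, with bag 1 holding 3 taffy and 4 caramel and bag 2 holding 1 taffy and 5 caramel (as encoded by the randint draws above), the analytic value is:

```latex
P(\text{different})
  = \tfrac{3}{7}\cdot\tfrac{5}{6} + \tfrac{4}{7}\cdot\tfrac{1}{6}
  = \tfrac{15}{42} + \tfrac{4}{42} = \tfrac{19}{42}
```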
"""
Explanation: Question 2
Q1: With what probability are the two drawn pieces of candy different?
End of explanation
"""
f = []
for i in range(N):
# Choose the bag
bag = binomial(1,0.5)
if bag:
# Bag 1
# First draw
r1 = randint(0,6)
if r1 < 3:
candy1 = "taffy"
else:
candy1 = "caramel"
# Second draw
r2 = randint(0,5)
if candy1 is "taffy":
if r2 < 2:
candy2 = "taffy"
else:
candy2 = "caramel"
else:
if r2 < 3:
candy2 = "taffy"
else:
candy2 = "caramel"
else:
# Bag 2
# First draw
r1 = randint(0,5)
if r1 < 2:
candy1 = "taffy"
else:
candy1 = "caramel"
# Second draw
r2 = randint(0,4)
if candy1 is "caramel":
if r2 < 4:
candy2 = "caramel"
else:
candy2 = "taffy"
else:
candy2 = "caramel"
if candy1 is not candy2:
f.append(1)
else:
f.append(0)
compare(23/42.,N,f)
"""
Explanation: Q2: With what probability are the two drawn pieces of candy different if they are drawn from the same (but randomly chosen) bag?
End of explanation
"""
p_H = 0.5
f = []
for i in range(N):
# Flip coin 1
c1 = binomial(1,p_H)
# Flip coin 2
c2 = binomial(1,p_H)
# Flip coin 3
c3 = binomial(1,p_H)
total_heads = c1 + c2 + c3
# Three heads
if total_heads == 3:
reward = 100
if total_heads == 2:
reward = 40
if total_heads == 1:
reward = 0
if total_heads == 0:
reward = -200
f.append(reward)
print "Analytic: {:.2f} +- {:.0f}".format(20/8.,82)
print "Monte Carlo: {:.2f} +- {:.0f}".format(np.mean(f),np.std(f))
"""
Explanation: Question 3
Q: What is the expectation value and standard deviation of the reward?
End of explanation
"""
n = 10
f = []
for i in range(N):
line = range(n)
np.random.shuffle(line)
# Assume Potter, Granger, Weasley correspond to 0, 1, and 2
indices = [line.index(person) for person in (0,1,2)]
if max(indices) - min(indices) == 2:
f.append(1)
compare(1/15.,N,f)
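For reference, treating the trio as one block gives 8 possible block positions, 3! internal orders, and 7! arrangements of the others:

```latex
P = \frac{8 \cdot 3! \cdot 7!}{10!}
  = \frac{8 \cdot 6}{10 \cdot 9 \cdot 8} = \frac{1}{15}
```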
"""
Explanation: Question 4
Q1: What is the probability that Potter, Granger, and Weasley are standing next to each other?
End of explanation
"""
f = []
for i in range(N):
line = range(n)
np.random.shuffle(line)
# Assume Potter, Granger, Weasley correspond to 0, 1, and 2
indices = [line.index(person) for person in (0,1,2)]
if max(indices) - min(indices) == 2:
f.append(1)
else:
# Shift line halfway around and check again
line = list(np.roll(line,n//2))
indices = [line.index(person) for person in (0,1,2)]
if max(indices) - min(indices) == 2:
f.append(1)
compare(1/12.,N,f)
"""
Explanation: Q2: What is the probability that Potter, Granger, and Weasley are standing next to each other if the line is a circle?
End of explanation
"""
f = []
for i in range(N):
guys = ['a','b','c','d','e']
gals = ['alpha','beta','gamma','delta','epsilon']
np.random.shuffle(guys)
np.random.shuffle(gals)
if guys.index('c') == gals.index('gamma'):
f.append(1)
compare(1./5,N,f)
"""
Explanation: Question 5
Q: What is the probability that c dances with gamma?
End of explanation
"""
f = []
for i in range(N):
fellows = range(21)
np.random.shuffle(fellows)
# Derrick = 0, Gaurav = 1
group_derrick = fellows.index(0)//7
group_gaurav = fellows.index(1)//7
if group_derrick == group_gaurav:
f.append(1)
compare(0.30,N,f)
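For reference, given Derrick's placement, 6 of the 20 remaining slots fall in his group of 7:

```latex
P = \frac{6}{20} = 0.30
```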
"""
Explanation: Question 6
Q: What is the probability that Derrick and Gaurav end up in the same group?
End of explanation
"""
f = []
for i in range(N):
a,b,c,d = 0,0,0,0
for candy in range(10):
selection = randint(0,3)
if selection == 0:
a += 1
if selection == 1:
b += 1
if selection == 2:
c += 1
if selection == 3:
d += 1
if a == 0:
f.append(1)
compare(0.75**10,N,f)
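For reference, each of the 10 candies independently lands in one of the 4 stockings, so:

```latex
P(A \text{ empty}) = \left(\tfrac{3}{4}\right)^{10} \approx 0.0563
```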
"""
Explanation: Question 7
Q: What is the probability that stocking A gets no candy?
End of explanation
"""
n = 20
f = []
for i in range(N):
throws = np.random.randint(1,11,n)
counts = np.bincount(throws)
if counts[1] == 2:
f.append(1)
analytic = 10**(np.log10(190) + 18*np.log10(9) - 20)
compare(analytic,N,f)
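For reference, the analytic value is a binomial probability, which the code above evaluates in log space:

```latex
P = \binom{20}{2}\left(\tfrac{1}{10}\right)^{2}\left(\tfrac{9}{10}\right)^{18}
  = \frac{190 \cdot 9^{18}}{10^{20}} \approx 0.285
```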
"""
Explanation: Question 8
Q1: What is the probability that we get two 1s in the first twenty throws?
End of explanation
"""
n = 10
f = []
for i in range(N):
throws = np.random.randint(1,11,n)
counts = np.bincount(throws)
if counts[1] == 1 and throws[-1] == 1:
f.append(1)
analytic = 0.9**9 * 0.1
compare(analytic,N,f)
"""
Explanation: Q2: What is the probability that we get the first 1 in the tenth throw?
End of explanation
"""
n = 30
f = []
for i in range(N):
throws = np.random.randint(1,11,n)
counts = np.bincount(throws)
if counts[1] == 3 and throws[-1] == 1:
f.append(1)
analytic = (29*28/2. * 0.9**27 * 0.1**2) * 0.1
compare(analytic,N,f)
"""
Explanation: Q3: What is the probability that we get the third 1 on the thirtieth throw?
End of explanation
"""
|
DJCordhose/ai | notebooks/talks/2017_mcubed/nn-intro.ipynb | mit | import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
import pandas as pd
print(pd.__version__)
assert StrictVersion(pd.__version__) >= StrictVersion('0.20.0')
"""
Explanation: Introduction to Neural Networks
How manual coding works
Opposing basic Idea of Supervised Machine Learning
Hope: System can generalize to previously unknown data and situations
Common Use Case: Classification
Types of Machine Learning
AI vs Machine Learning (ML)
NVIDIA Blog: What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
iris.data[0]
print(iris.DESCR)
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
CMAP = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
pd.plotting.scatter_matrix(iris_df, c=iris.target, edgecolor='black', figsize=(15, 15), cmap=CMAP)
plt.show()
"""
Explanation: One of the Classics: Classify the Iris Type by the Sizes of its Flowers
Solving Iris with Neural Networks
First we load the data set and get an impression
https://en.wikipedia.org/wiki/Iris_flower_data_set
End of explanation
"""
# keras.layers.Input?
from keras.layers import Input
inputs = Input(shape=(4, ))
# keras.layers.Dense?
from keras.layers import Dense
# just linear activation (like no activation function at all)
fc = Dense(3)(inputs)
from keras.models import Model
model = Model(input=inputs, output=fc)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# this is just random stuff, no training has taken place so far
model.predict(np.array([[ 5.1, 3.5, 1.4, 0.2]]))
"""
Explanation: https://en.wikipedia.org/wiki/File:Petal-sepal.jpg
The artificial Neuron
<h1 style="color: red">Question: What kind of equation is this? What is the graph of such a function?</h1>
The Classic: A fully connected network with a hidden layer
End of explanation
"""
inputs = Input(shape=(4, ))
fc = Dense(10)(inputs)
predictions = Dense(3, activation='softmax')(fc)
model = Model(input=inputs, output=predictions)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.predict(np.array([[ 5.1, 3.5, 1.4, 0.2]]))
"""
Explanation: This is the output of all 3 neurons, but what we really want is an iris category
Softmax activation turns each output into a value between 0 and 1, with all outputs adding up to 1
the interpretation is the likelihood of each category
End of explanation
"""
X = np.array(iris.data)
y = np.array(iris.target)
X.shape, y.shape
y[100]
# tiny little pieces of feature engeneering
from keras.utils.np_utils import to_categorical
num_categories = 3
y = to_categorical(y, num_categories)
y[100]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42, stratify=y)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
!rm -r tf_log
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# https://keras.io/callbacks/#tensorboard
# To start tensorboard
# tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log
# open http://localhost:6006
%time model.fit(X_train, y_train, epochs=500, validation_split=0.2, callbacks=[tb_callback])
# %time model.fit(X_train, y_train, epochs=500, validation_split=0.2)
"""
Explanation: <h1 style="color: red">Question: What is the minimum amount of hidden neurons to solve this categorization task?</h1>
Now we have likelyhoods for categories, but still our model is totally random
Training
training is performed using Backpropagation
each pair of ground truth input and output is passed through network
difference between expected output (ground truth) and actual result is summed up and forms loss function
loss function is to be minimized
optimizer defines strategy to minimize loss
Optimizers: Adam and RMSprop seem nice
http://cs231n.github.io/neural-networks-3/#ada
End of explanation
"""
model.predict(np.array([[ 5.1, 3.5, 1.4, 0.2]]))
X[0], y[0]
train_loss, train_accuracy = model.evaluate(X_train, y_train)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test)
test_loss, test_accuracy
"""
Explanation: Evaluation
End of explanation
"""
# Keras format
model.save('nn-iris.hdf5')
"""
Explanation: Save Model in Keras Format
End of explanation
"""
|
mediagit2016/workcamp-maschinelles-lernen-grundlagen | wc-arbeiten-tf-10-aufgabe.ipynb | gpl-3.0 | # import the pandas library
# import matplotlib.pyplot as plt
# upload the file "daten.csv" to your hub
# load the file "daten.csv" into a DataFrame df
# read in the files
# inspect the first rows of the DataFrame df
# create a scatter plot
# import tensorflow as tf
# load the libraries
# import the Keras model Sequential()
# import the Keras layers Dense and Activation
# extract the data and labels from the DataFrame
x_input = df[['x1','x2']].values
y_input = df[['label']].values
# print the data values
"""
Explanation: <h1>ANN - First Steps with TensorFlow - Binary Classification</h1>
End of explanation
"""
# initialize the neural network
# add a Dense layer
# compile the model
# check the configuration
# train the model
epoch_num = 1000
# batch_num = 56 could optionally be added as an example
history = nn.fit(x_input, y_input, epochs=epoch_num, verbose = 1)
# evaluate the results
# test with data
x_test=[[2,3],[6,4],[5,5]]
ergebnis = nn.predict(x_test)
print(ergebnis)
history_dict = history.history
history_dict.keys()
acc = history_dict['accuracy']
#val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
#val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, acc, 'r', label='Accuracy')
# b is for "solid blue line"
#plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training loss and accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss / Accuracy')
plt.legend()
plt.show()
"""
Explanation: <h2>First Neural Network - Single Layer</h2>
End of explanation
"""
# initialize a new network nn2
# add the layers
# compile the new model
nn2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# print the model structure
# train the model
epoch_num = 50
history = nn2.fit(x_input,y_input, epochs=epoch_num)
# evaluate the results
nn2.evaluate(x_input, y_input)
history_dict = history.history
acc = history_dict['accuracy']
#val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
#val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, acc, 'r', label='Accuracy')
# b is for "solid blue line"
#plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training loss and accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss / Accuracy')
plt.legend()
plt.show()
"""
Explanation: <h2>Initializing a Second Neural Network - Multi Layer</h2>
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mpi-m/cmip6/models/mpi-esm-1-2-hr/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-hr', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-HR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario experiments (cf. Table 12.1 of IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is radiative forcing from aerosol-cloud interactions computed from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
jeffcarter-github/MachineLearningLibrary | MachineLearningLibrary/Cluster/kmeans_example.ipynb | mit | from __future__ import print_function, division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
from KMeans import KMeans
"""
Explanation: This notebook is designed for the exploration of the K-Means algorithm...
1. Arbitrary data sets can be created.
2. The K-Means algorithm can be run with different initializations ('forgy', 'random', 'kmeans++').
End of explanation
"""
np.random.seed(3)
# controls distance between clusters...
d_scalar = 2.0
# control size of clusters...
s_scalar = 0.5
# number of dimensions...
dimensions = 4
# data points per cluster...
n_data = 500
# actual number of clusters...
n_clusters = 5
# create data offsets
c = [d_scalar * np.random.randn(1, dimensions) for i in range(n_clusters)]
# create scaled data with offsets
x = [s_scalar * np.random.randn(n_data, dimensions) - c[i] for i in range(n_clusters)]
X = np.concatenate(x, axis=0)
plt.figure()
plt.subplot(221)
plt.title('2D Cluster Slice')
for i in range(n_clusters):
plt.scatter(x[i][:,0], x[i][:,1], marker='.', alpha=0.25)
plt.xlabel('X_0')
plt.ylabel('X_1')
plt.subplot(222)
plt.title('2D Cluster Slice')
for i in range(n_clusters):
plt.scatter(x[i][:,0], x[i][:,2], marker='.', alpha=0.25)
plt.xlabel('X_0')
plt.ylabel('X_2')
plt.subplot(223)
for i in range(n_clusters):
plt.scatter(x[i][:,0], x[i][:,3], marker='.', alpha=0.25)
plt.xlabel('X_0')
plt.ylabel('X_3')
plt.subplot(224)
for i in range(n_clusters):
plt.scatter(x[i][:,1], x[i][:,2], marker='.', alpha=0.25)
plt.xlabel('X_1')
plt.ylabel('X_2')
plt.tight_layout()
"""
Explanation: Create Data...
End of explanation
"""
trial_clusters = [2, 3, 4, 6, 8, 10, 15, 20]
inertia_lst = []
for i in trial_clusters:
kmeans = KMeans(n_clusters=i, init='kmeans++', max_iter=100)
kmeans.fit(X)
inertia_lst.append(kmeans.inertia)
plt.figure()
plt.plot(trial_clusters, inertia_lst, 'o--')
plt.xlim(min(trial_clusters) - 1, max(trial_clusters) + 1)
plt.ylabel('inertia')
plt.xlabel('n_clusters')
"""
Explanation: Cluster via KMeans
End of explanation
"""
|
Biles430/FPF_PIV | PIV_092117.ipynb | mit | import pandas as pd
import numpy as np
import PIV as piv
import time_series as ts
import time
import sys
import h5py
from scipy.signal import medfilt
import matplotlib.pyplot as plt
import hotwire as hw
import imp
from datetime import datetime
%matplotlib inline
now = datetime.now()
#for setting movie
import time
import pylab as pl
from IPython import display
# import functions to be run
imp.reload(ts)
imp.reload(piv)
imp.reload(hw)
%run 'air_prop.py'
%run 'piv_outer.py'
%run 'piv_readin.py'
%run 'piv_inner.py'
"""
Explanation: PIV analysis and Plotting
Code set using data from PIV experiments on 09-21-17<br>
$U_\infty = 4.5 (400rpm)$<br>
Test 0: <br>
Data taken at 500Hz continuously<br>
Test 1: <br>
Data taken at 500Hz for 100 images on a 1Hz loop<br>
Test 2: <br>
Data taken at 500Hz for 500 images on a .5Hz loop<br>
Test 3: <br>
Data taken at 500Hz continuously<br><br>
Laser Power = 14amps <br>
Last updated: 09-26-17 <br>
Code Structure: <br>
- import libraries <br>
- run analysis codes <br>
- read in data <br>
- plot outer <br>
- plot inner <br>
End of explanation
"""
## DATA SET READ IN ##
#data sets taken continuously (test_0, test_3)
#Parameter set
date = '092117_0'
data_delimiter = '\t'
num_images = 10917
sizex = 128
sizey = 129
walloffset = 2 #mm
side_error = 5
#determine file name
file_name = dict()
for j in range(1, num_images+1):
file_name[j] = '/B' + str('{0:05}'.format(j)) + '.txt'
#list name of data set folders
base_name = dict()
#List the base name for each test to be read in and analyzed, names taken directly from folder
base_name[0] = '/media/drummond/My Passport/DATA/FPF/test_092117/Cam_Date=170921_Time=120913_TR_SeqPIV_MP(1x16x16_50ov_ImgCorr)=unknown'
base_name[1] = '/media/drummond/My Passport/DATA/FPF/test_092117/Cam_Date=170921_Time=140859_TR_SeqPIV_MP(1x16x16_50ov_ImgCorr)=unknown'
[u, v, x, y, bad_im] = piv_readin(date, file_name, base_name, num_images, data_delimiter, sizex, sizey, walloffset, side_error)
## DATA SET READ IN ##
#data set taken on cycle, 100 images every 1hz (test_1)
#Parameter set
date = '092117_1'
data_delimiter = '\t'
num_images = 10907
sizex = 128
sizey = 129
walloffset = 2 #mm
side_error = 5
#determine file name
file_name = dict()
for j in range(1, num_images+1):
file_name[j] = '/B' + str('{0:05}'.format(j)) + '.txt'
#list name of data set folders
base_name = dict()
#List the base name for each test to be read in and analyzed, names taken directly from folder
base_name[0] = '/media/drummond/My Passport/DATA/FPF/test_092117/Cam_Date=170921_Time=124152_TR_SeqPIV_MP(1x16x16_50ov_ImgCorr)=unknown'
piv_readin(date, file_name, base_name, num_images, data_delimiter, sizex, sizey, walloffset, side_error)
## DATA SET READ IN ##
#data set taken on cycle, 500 images every .5hz (test_1)
#Parameter set
date = '092117_2'
data_delimiter = '\t'
num_images = 10520
sizex = 128
sizey = 129
walloffset = 2 #mm
side_error = 5
#determine file name
file_name = dict()
for j in range(1, num_images+1):
file_name[j] = '/B' + str('{0:05}'.format(j)) + '.txt'
#list name of data set folders
base_name = dict()
#List the base name for each test to be read in and analyzed, names taken directly from folder
base_name[0] = '/media/drummond/My Passport/DATA/FPF/test_092117/Cam_Date=170921_Time=130741_TR_SeqPIV_MP(1x16x16_50ov_ImgCorr)=unknown'
piv_readin(date, file_name, base_name, num_images, data_delimiter, sizex, sizey, walloffset, side_error)
"""
Explanation: Read in and Filter Datasets
End of explanation
"""
# Plot Outer Normalized Data
date = '092117'
legend = [r'$Re_{\theta}=$30288, Cont.', r'$Re_{\theta}=$30288, 100im', r'$Re_{\theta}=$30288, 500im']
num_tests = 3
piv_outer(date, num_tests, legend)
"""
Explanation: Mean Velocity Plots
End of explanation
"""
##Plot Inner Normalized Data##
date = '092117'
num_tests = 3
utau = .15
legend = [r'$Re_{\theta}=$30288, Cont.', r'$Re_{\theta}=$30288, 100im', r'$Re_{\theta}=$30288, 500im']
piv_inner(date, num_tests, utau, legend)
"""
Explanation: Inner Normalized Plots
End of explanation
"""
## Control Volume Analysis ##
umean = np.nanmean(u[0], axis=0)
vmean = np.nanmean(v[0], axis=0)
mean_vel = np.sqrt(umean**2 + vmean**2)
#print(np.shape(mean_vel))
cv_in = np.trapz(mean_vel[:, 0], x = y)*-1
cv_out = np.trapz(mean_vel[:, -1], x = y)*-1
cv_delta = cv_out - cv_in
vel_out_y = cv_delta / (x[-1] - x[0])
plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
plt.semilogx(y, mean_vel[:, 0], '-xb')
plt.semilogx(y, mean_vel[:, -1], '-xr')
plt.legend(['Control Volume In', 'Control Volume Out'])
plt.ylabel('Velocity Magnitude (m/sec)')
plt.xlabel('Wall Normal Position (m)')
"""
Explanation: Control Volume Analysis
Procedure: <br>
1). Create mean velocity field from mean u and v velocity fields ($(u^2 + v^2)^{1/2}$)<br>
2). Integrate left side of image (control volume in) <br>
3). Integrate right side of image (control volumne out) <br>
4). Calculate difference (control volume delta) and divide by streamwise length of FOV
End of explanation
"""
freq = 500 #hz
pixel_size = 0.0002 #mm/pixel
#calculate displacement in x dir
x_disp = u[0]*(1/freq)
#organize into 1-d vector
x_disp = np.array(np.reshape(x_disp, [1, 127*68*10917]))[0]
#calculate in pixel disp
x_disp = x_disp / pixel_size
#plot
plt.figure(num=None, figsize=(10, 8), dpi=100, facecolor='w', edgecolor='k')
plt.hist(x_disp[0:1000000], bins=5000, range=[20, 40], normed=True)
plt.title('Streamwise Velocity pixel displacement PDF')
plt.xlabel('Pixel Displacement')
plt.ylabel('Normalized Counts')
plt.show()
np.shape(u)
#calculate displacement in y dir
y_disp = v[0]*(1/freq)
#organize into 1-d vector
y_disp = np.array(np.reshape(y_disp, [1, 127*68*10917]))[0]
#calculate in pixel disp
y_disp = y_disp / pixel_size
#plot
plt.figure(num=None, figsize=(10, 8), dpi=100, facecolor='w', edgecolor='k')
plt.hist(y_disp[:1000000], bins=5000, normed=True)
plt.title('Wall-normal Velocity pixel displacement PDF')
plt.xlabel('Pixel Displacement')
plt.ylabel('Normalized Counts')
plt.show()
"""
Explanation: The control volume input is {{print('%.4g'%(cv_in))}} $m^2/sec$ <br>
The control volume ouput is {{print('%.4g'%(cv_out))}} $m^2/sec$ <br>
Giving a difference of {{print('%.4g'%(cv_delta))}} <br>
For which the top length of the control volume is {{ x[-1] - x[0] }}m <br>
Giving the average v velocity to be {{print('%.4g'%(vel_out_y))}} $m/sec$
Pixel Locking
Procedure: Take masked and filtered datasets <br>
1). Convert into 1-D vector of all velocities (10520 images x 127 rows x 97 columns) <br>
2). Convert into displacement using known image frequency ($500hz$) <br>
3). Convert into # of pixel displacement by using calibration size ($.2mm/pixel$)
End of explanation
"""
np.shape(u)
"""
Explanation: Autocorrelation Plot
Procedure: <br>
- working to examine when each point in the velocity field becomes time independent <br>
- in development
End of explanation
"""
|
robertoalotufo/ia898 | deliver/tutorial-python.ipynb | mit | a = 3
print (type(a) )
b = 3.14
print (type(b) )
c = 3 + 4j
print (type(c) )
d = False
print (type(d) )
print (a + b )
print (b * c )
print (c / a )
"""
Explanation: Introduction to Python
Python is a very powerful language widely used in image processing and machine learning. Most deep neural network libraries provide a Python interface.
Variable types in Python, with emphasis on sequence types
Python is a high-level, interpreted, imperative, object-oriented programming language
with dynamic and strong typing, which also has the following characteristics:
- Variables are not pre-declared, and their types are determined dynamically.
- Blocks are delimited by indentation only; there are no delimiters such as BEGIN and END or { and }.
- It offers high-level data types: strings, lists, tuples, dictionaries, files, classes.
- It is object-oriented.
It is a modern language suited to developing both general-purpose and scientific applications. For scientific
applications, Python has a very important and efficient package for processing multidimensional arrays: NumPy.
Natively, Python supports the following variable types:
| Variable type | Description | Example syntax |
|---------------|---------------------------------------------|-------------------------|
| int | Integer variable | a = 103458 |
| float | Floating-point variable | pi = 3.14159265 |
| bool | Boolean variable - True or False | a = False |
| complex | Complex-number variable | c = 2+3j |
| str | String of ASCII characters | a = "Exemplo" |
| list | Heterogeneous list whose values can change | lista = [4,'eu',1] |
| tuple | Immutable heterogeneous tuple | tupla = (1,'eu',2) |
| dict | Associative set of values | dic = {1:'eu',2:'você'} |
Numeric types:
Declaring integer, boolean, floating-point and complex variables and performing some
simple operations
"""
nome1 = 'Faraday'
nome2 = "Maxwell"
print ('string do tipo:', type(nome1), 'nome1:', nome1, "comprimento:", len(nome1) )
"""
Explanation: Note that in operations involving elements of different types, the language
converts the elements to the appropriate type according to the following hierarchy: complex > float > integer
Sequence types:
Python has three main sequence types: lists, tuples, and strings.
Strings:
Strings can be declared using single, double, or triple quotes.
Strings are immutable vectors of characters. The length of a string
can be computed using len.
End of explanation
"""
print ('Primeiro caractere de ', nome1, ' é: ', nome1[0] )
print ('Último caractere de ', nome1, ' é: ', nome1[-1] )
print ('Repetindo-se strings 3 vezes', 3 * nome1 )
"""
Explanation: A string is an immutable vector of characters. A single character can be indexed, and Python's
consistent rules for handling sequences, such as slicing and the various forms of indexing, can be applied.
In Python the first element is always indexed as zero, so a string of 5 characters
is indexed from 0 to 4. Elements can also be indexed from right to left using negative
indices; thus, the last element of the vector can be indexed with index -1.
End of explanation
"""
lista1 = [1, 1.1, 'um'] # Lists can contain elements of different types.
lista2 = [3+4j, lista1] # A list can even contain other lists as elements!
print ('tipo da lista1=', type(lista1) )
print ('lista2=', lista2 )
lista2[1] = 'Faraday' # Unlike strings, new values can be assigned to list elements.
print ('lista2=', lista2 )
lista3 = lista1 + lista2 # Concatenating 2 lists
print ('lista3=',lista3 )
print ('concatenando 2 vezes:',2*lista3 )
"""
Explanation: Lists:
A list is a sequence of elements of different types that can be indexed, modified, and operated on. Lists are defined as comma-separated elements enclosed in square brackets.
End of explanation
"""
# Declaring tuples
tupla1 = () # empty tuple
tupla2 = ('Gauss',) # tuple with a single element. Note the comma.
tupla3 = (1.1, 'Ohm', 3+4j)
tupla4 = 3, 'aqui', True
print ('tupla1=', tupla1 )
print ('tupla2=', tupla2 )
print ('tupla3=', tupla3 )
print ('tupla4=', tupla4 )
print ('tipo da tupla3=', type(tupla3) )
"""
Explanation: Tuples:
A tuple is similar to a list, but its values are immutable. A tuple is a sequence of comma-separated
objects that may, optionally, be enclosed in parentheses. A tuple containing
a single element must be followed by a comma.
note: Understanding tuples is very important, and they will be used extensively in this course, since many parameters of NumPy's ndarray are set using tuples.
End of explanation
"""
s = 'abcdefg'
print ('s=',s )
print ('s[0:2] =', s[0:2] ) # characters from position 0 (inclusive) to 2 (exclusive)
print ('s[2:5] =', s[2:5] ) # characters from position 2 (inclusive) to 5 (exclusive)
"""
Explanation: Slicing in sequence types
Besides being indexable, sequence types such as lists, tuples, and strings also allow
selecting subsets through the concept of slicing.
For example:
End of explanation
"""
s = 'abcdefg'
print ('s=',s )
print ('s[:2] =', s[:2] ) # characters from the beginning to 2 (exclusive)
print ('s[2:] =', s[2:] ) # characters from position 2 (inclusive) to the end of the string
print ('s[-2:] =', s[-2:] )# last 2 characters
"""
Explanation: When the start is zero or the end is the length of the string, it can be omitted. See the
examples:
End of explanation
"""
s = 'abcdefg'
print ('s=',s )
print ('s[2:5]=', s[2:5] )
print ('s[0:5:2]=',s[0:5:2] )
print ('s[::2]=', s[::2] )
print ('s[:5]=', s[:5] )
print ('s[3:]=', s[3:] )
print ('s[::-1]=', s[::-1] )
"""
Explanation: Note that the start position is always inclusive and the end position is always exclusive.
This is done so that the concatenation of s[:i] and s[i:] equals s.
Slicing also allows a third, optional value: the step.
For those familiar with the C language, the 3 slicing parameters are similar to the for loop:
|for statement | slicing |
|-------------------------------------------|-----------------------|
|for (i=start; i < end; i += step) a[i] | a[start:end:step] |
Here are examples of indexing with slicing on a 7-character string, indexed from 0 to 6:
|slice | indices | explanation |
|---------|-------------|---------------------------------------|
| 0:5 |0,1,2,3,4 |goes from 0 to 4, which is less than 5 |
| 2:5 |2,3,4 |goes from 2 to 4 |
| 0:5:2 |0,2,4 |goes from 0 to 4, in steps of 2 |
| ::2 |0,2,4,6 |goes from the start to the end in steps of 2 |
| :5 |0,1,2,3,4 |goes from the start to 4, which is less than 5 |
| 3: |3,4,5,6 |goes from 3 to the end |
| ::-1 |6,5,4,3,2,1,0|goes from the end (6) to the start |
See these examples applied to the string 'abcdefg':
End of explanation
"""
s = "abc"
s1,s2,s3 = s
print ('s1:',s1 )
print ('s2:',s2 )
print ('s3:',s3 )
list = [1,2,3]
t = 8,9,True
print ('list=',list )
list = t
print ('list=',list )
(_,a,_) = t
print ('a=',a )
"""
Explanation: This slicing concept will be essential in this course. It can be applied to strings, tuples, lists and, above all, to NumPy's ndarray. Make sure you understand it fully.
Assignment in sequence types
Strings, tuples, and lists can all be unpacked through assignment.
What matters is that the mapping be consistent. Remember that the only sequence
that is mutable, i.e. that can be modified by assignment, is the list.
Study the examples below:
End of explanation
"""
s = 'formatação inteiro:%d, float:%f, string:%s' % (5, 3.2, 'alo')
print (s )
"""
Explanation: String formatting for printing
A string can be formatted with a syntax similar to sprintf in C/C++, in the form:
string % tuple. We will use this pattern a lot to put captions on images. Examples:
End of explanation
"""
dict1 = {'blue':135,'green':12.34,'red':'ocean'} # defining a dictionary
print(type(dict1))
print(dict1)
print(dict1['blue'])
print(dict1.keys()) # Show the dictionary keys
del dict1['blue'] # Delete the element with key 'blue'
print(dict1.keys()) # Show the dictionary keys after the element with key 'blue' has been deleted
"""
Explanation: Other types
There are also Dictionaries and Sets, but they will not be used during this course.
Dictionaries:
Dictionaries can be defined as associative lists that, instead of associating their elements
with numeric indices, associate them with keywords.
Declaring dictionaries and performing some simple operations
End of explanation
"""
lista1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
lista2 = ['red', 'blue', 'green','red','red']
conjunto1 = set(lista1) # Defining a set
conjunto2 = set(lista2)
print(conjunto1) # Note that repeated elements are counted only once
print(type(conjunto1))
print(conjunto1 | conjunto2) # Union of 2 sets
"""
Explanation: Sets
Sets are collections of elements that have no ordering and do not contain
repeated elements.
Declaring sets and performing some simple operations
End of explanation
"""
# Ex1: the last print statement executes regardless of the value of x
x = -1
if x<0:
print('x é menor que zero!')
elif x==0:
print('x é igual a zero')
else:
print('x é maior que zero!')
print('Esta frase é escrita independentemente do valor de x')
# Ex2: the last two print statements are inside the block due to
# indentation, so they execute only if x is greater than zero
if x<0:
print('x é menor que zero!')
elif x==0:
print('x é igual a zero')
else:
print('x é maior que zero!')
print('Esta frase é escrita apenas para x maior que zero')
"""
Explanation: The importance of indentation in the Python language:
Unlike other languages that use keywords such as begin and end, or braces {, }, to delimit
their blocks (if, for, while, etc.), the Python language uses code indentation to determine which statements
are nested within a block, so indentation is of fundamental importance in Python.
End of explanation
"""
browsers = ["Safari", "Firefox", "Google Chrome", "Opera", "IE"]
for browser in browsers:
print (browser )
"""
Explanation: Loops
Loop for
Iterating over a list of strings
End of explanation
"""
numbers = [1,10,20,30,40,50]
sum = 0
for number in numbers:
sum = sum + number
print (sum )
"""
Explanation: Iterating over a list of integers
End of explanation
"""
word = "computer"
for letter in word:
print (letter )
"""
Explanation: Iterating over the characters of a string
End of explanation
"""
browsers = ["Safari", "Firefox", "Google Chrome", "Opera", "IE"]
i = 0
while i < len(browsers) and i>=0: # Two conditions for the loop to continue
print (browsers[i] )
i = i + 1
"""
Explanation: Iterating over an iterator - xrange
The while loop
The while loop executes until a stop condition is reached.
End of explanation
"""
for x in range(1, 4):
for y in range(1, 3):
print ('%d * %d = %d' % (x, y, x*y) )
print ('Dentro do primeiro for, mas fora do segundo' )
"""
Explanation: Nested loops
Loops can be nested; note that indentation is important to determine which loop a statement belongs to.
End of explanation
"""
def soma( x, y):
s = x + y
return s
"""
Explanation: Functions in Python
Syntax for defining functions
Functions in Python use the keyword def followed by the function name and the parameters in parentheses,
ending with a colon, as in the following example where the function soma is defined to return the sum of
its two parameters. Note that the function body is indented relative to the function definition:
End of explanation
"""
r = soma(50, 20)
print (r )
"""
Explanation: To call the soma function, simply use it by name, passing the parameters as arguments to the function. See the following example.
End of explanation
"""
def soma( x, y, squared=False):
if squared:
s = (x + y)**2
else:
s = (x + y)
return s
"""
Explanation: Function parameters
There are two kinds of parameters: positional and keyword. Positional parameters are those
identified by the order in which they appear in the function's parameter list. Keyword
parameters are identified by name=. Keyword parameters can also be passed
positionally, but they have the advantage that, if not passed, they take the
stated default value. See the example below:
End of explanation
"""
print ('soma(2, 3):', soma(2, 3) )
print ('soma(2, 3, False):', soma(2, 3, False) )
print ('soma(2, 3, True):', soma(2, 3, True) )
print ('soma(2, 3, squared= True):', soma(2, 3, squared= True) )
"""
Explanation: Note that the parameters x and y are positional and will be the first 2 arguments
of the function call. The third parameter is a keyword parameter and therefore optional; it can be
used either positionally or explicitly with the keyword. The great
advantage of this scheme is that a function can have a large number of keyword parameters and,
when calling it, only the desired parameters need to be given explicitly.
See the examples:
End of explanation
"""
|
jwjohnson314/data-801 | notebooks/stay_classy.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
dir(list)
class Rectangle(object):
"""
    rectangular objects - requires a 2 x 5 np.array corresponding to points in the plane
traversed counterclockwise - first same as last
"""
def __init__(self, coords=None):
"""
C++/Java/Fortran/etc. programmers - this is like the constructor
"""
self.coords = coords
def plot(self, **kwargs):
"""
class method - generally public in Python
"""
plt.fill_between(self.coords[0],self.coords[1], **kwargs)
unit_square = Rectangle(coords=np.array([[0, 1, 1, 0], [0, 0, 1, 1]]))
dir(unit_square)
unit_square.plot(lw=5)
fig, ax = plt.subplots()
unit_square.plot(lw=5)
plt.ylim(-1, 2)
plt.xlim(-1, 2);
class Rectangle(object):
"""
rectangular objects
requires a 2 x 5 np.array corresponding to four points in the plane
traversed counterclockwise
two sides parallel to horizontal axis
first set of coordinates same as last
"""
def __init__(self, coords=None):
self.coords = coords
def plot(self, **kwargs):
"""
basic mechanism to plot the rectangle
"""
plt.fill_between(self.coords[0],self.coords[1], **kwargs)
def get_area(self):
"""
basic area function
Document class methods just like any other function
"""
return (np.max(self.coords[0]) - np.min(self.coords[0])) * (np.max(self.coords[1] - np.min(self.coords[1])))
rect = Rectangle(coords=np.array([[0, 2, 2, 0, 0], [0, 0, 1, 1, 0]]))
print('rectangle area = %d' % rect.get_area())
rect.plot(lw=5)
dir(rect)
rect.__dir__()
dir(rect).sort() == rect.__dir__().sort()
"""
Explanation: Custom data types
Python is an object oriented programming language. OOP is an important programming paradigm that you need to know about in order to fully understand Python.
Fundamental ideas:
- Objects
- Inheritance
Objects and classes
The 'class' keyword allows us to define custom data types.
End of explanation
"""
rect.plot?
"""
Explanation: Check out our documentation:
End of explanation
"""
rect.__dict__
"""
Explanation: Classes are dicts in a sense - hence the magic `__dict__` attribute.
End of explanation
"""
# objects are the most basic Python types
class EuclideanShape2D(object):
"""
generic base class for shapes
all shapes have area"""
def __init__(self):
pass
def get_area(self):
pass
def plot(self):
print('no plot method defined - TO DO')
pass
object?
class Rectangle(EuclideanShape2D):
"""
rectangular objects
requires a 2 x 5 np.array corresponding to four points in the plane
traversed counterclockwise
two sides parallel to horizontal axis
"""
def __init__(self, coords=None):
self.coords = coords
def plot(self, **kwargs):
"""
basic mechanism to plot the rectangle
"""
plt.fill_between(self.coords[0],self.coords[1], **kwargs)
def get_area(self):
"""
basic area function
"""
return (np.max(self.coords[0]) - np.min(self.coords[0])) * (np.max(self.coords[1] - np.min(self.coords[1])))
class Circle(EuclideanShape2D):
"""
circular objects
requires a center and a radius
"""
def __init__(self, center=None, radius=None):
self.center = center
self.radius = radius
def plot(self, **kwargs):
c = plt.Circle(self.center, self.radius)
fig, ax = plt.subplots(figsize=(6,6))
ax.set_ylim(self.center[1] - self.radius - 1, self.center[1] + self.radius + 1)
ax.set_xlim(self.center[0] - self.radius - 1, self.center[1] + self.radius + 1)
ax.add_artist(c)
def get_area(self):
return np.pi*self.radius**2
def get_circumference(self):
return 2*np.pi*self.radius
circy = Circle(center=(1,1), radius=3)
circy.plot(lw=5)
circy.get_circumference()
class Triangle(EuclideanShape2D):
"""
triangle class
requires a 2 x 3 np array corresponding to three points in the plane
assumes base is parallel to x-axis
starting point is lower left"""
def __init__(self, coords):
self.coords = coords
def get_area(self):
return 0.5 * (self.coords[0, 1] - self.coords[0, 0]) * (self.coords[1,2] - self.coords[1,1])
# no plotting method defined - what happens?
tri = Triangle(coords=np.array([[0, 2, 1], [0, 0, 3]]))
tri.get_area()
tri.plot()
from scipy.stats import bernoulli
?bernoulli
dir(bernoulli)
"""
Explanation: Inheritance
Classes share many common attributes and methods. And just like in real life, there are general and specialized types of things.
End of explanation
"""
bernoulli.__doc__
"""
Explanation: lots of stuff - what do the underscores and double-underscores all mean? https://www.python.org/dev/peps/pep-0008/#method-names-and-instance-variables
End of explanation
"""
class SpecialList:
'''
A class slightly extending the capabilities of list.
'''
def __init__(self, values=None):
if values is None:
self.values = []
else:
self.values = values
def __len__(self):
return len(self.values)
def __getitem__(self, key):
return self.values[key]
def __setitem__(self, key, value):
self.values[key] = value
def __delitem__(self, key):
del self.values[key]
def __iter__(self):
return iter(self.values)
def __reversed__(self):
return SpecialList(reversed(self.values))
def append(self, value):
self.values.append(value)
def head(self, n=5):
return self.values[:n]
def tail(self, n=5):
return self.values[-n:]
x = SpecialList(values=list(range(100)))
x.head()
x.head(10)
len(x)
y=iter(x)
next(y)
next(y)
del x[5] # behavior defined by __delitem__!
x.head(10)
next(y)
"""
Explanation: Here is how we can use some of these magic methods to build a custom data structure - a list with additional capabilities, like 'head' and 'tail.'
End of explanation
"""
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def ll(act, pred, threshold=1e-15):
pred = np.maximum(pred, threshold)
pred = np.minimum(1 - threshold, pred)
return (-1 / len(act)) * np.sum(((1 - act) * np.log(1 - pred) + act * np.log(pred)))
class SGDClassifier(object):
"""
sgd_classifier
random initialization
"""
def __init__(self, data=None, label=None, alpha=None, max_epochs=None):
# don't introduce a new class attribute outside of init!
self.data=data
self.label = label
if data is not None:
self.w = np.random.randn(self.data.shape[1]) / np.sqrt(self.data.shape[1]) # Xavier initialization
else:
self.w = None
if alpha is None:
self.alpha = 0.0001
else:
self.alpha = alpha
if max_epochs is None:
self.max_epochs = 10000
else:
self.max_epochs = max_epochs
self.train_losses=[]
self.val_losses=[]
def fit(self):
if self.data is None:
print('No data, nothing to fit')
return
if self.label is None:
print('No labels, can\'t fit')
return
# cross-validation
train_idx = np.random.choice(range(self.data.shape[0]), replace=False,
size=int(np.floor(0.8 * self.data.shape[0])))
val_idx = [i for i in range(self.data.shape[0]) if i not in train_idx]
trainX = self.data[train_idx, :]
valX = self.data[val_idx, :]
trainy = self.label[train_idx]
valy = self.label[val_idx]
n = trainX.shape[0]
for i in range(self.max_epochs):
# Update weights - where the magic happens
for j in range(n):
self.w += self.alpha * (trainy[j] -
sigmoid(np.dot(trainX[j, :], self.w))) * trainX[j, :]
if i % 100 == 0:
current_train_loss = ll(trainy, sigmoid(np.dot(trainX, self.w)))
self.train_losses.append(current_train_loss)
current_val_loss = ll(valy, sigmoid(np.dot(valX, self.w)))
self.val_losses.append(current_val_loss)
print('epoch {}: train loss {:.5f}\tvalidation loss {:.5f}'.format(i, current_train_loss, current_val_loss))
def predict(self):
# TO_DO
pass
def plot(self, **kwargs):
if self.data.shape[1] != 3:
print('wrong dimensions for plot != 2')
return
x = np.linspace(-5, 5, 100)
plt.plot(x, -self.w[0] / self.w[2] - (self.w[1] /self.w[2]) * x, **kwargs)
plt.scatter(self.data[:500, 1], self.data[:500, 2], color='red')
plt.scatter(self.data[500:, 1], self.data[500:, 2], color='blue')
plt.ylim(-5, 5)
# grab our synthetic data from last time
data = np.genfromtxt('../data/synthetic_data.txt', delimiter=' ')
label = np.genfromtxt('../data/label.txt')
d =SGDClassifier(data=data, label=label, alpha=0.0001, max_epochs=500)
whos
d.w
d.fit()
d.plot(color='steelblue', lw=5)
d.w
d.val_losses[-1]
SGDClassifier(data=data, label=label).w
# messier data
x1 = np.random.normal(loc=1, scale=3, size=500)
x2 = np.random.normal(loc=3, scale=3, size=500)
y1 = np.random.normal(loc=1, scale=3, size=500)
y2 = np.random.normal(loc=3, scale=3, size=500)
x = np.hstack([x1, x2])
y = np.hstack([y1, y2])
ones = np.ones(1000)
data = np.vstack([ones, x, y])
data = data.T
lab1 = np.zeros(500)
lab2 = np.ones(500)
labs = np.hstack([lab1, lab2]).T
d2 = SGDClassifier(data, labs, max_epochs=500)
d2.fit()
d2.plot(lw=5)
d2.val_losses[-1]
"""
Explanation: A more real-world example
Here is an example of a class built for stochastic gradient descent. First we need our helper functions.
End of explanation
"""
|
GoogleCloudPlatform/analytics-componentized-patterns | retail/recommendation-system/bqml-mlops/kfp_tutorial.ipynb | apache-2.0 | # CHANGE the following settings
BASE_IMAGE='gcr.io/your-image-name'
MODEL_STORAGE = 'gs://your-bucket-name/folder-name' #Must include a folder in the bucket, otherwise, model export will fail
BQ_DATASET_NAME="hotel_recommendations" #This is the name of the target dataset where you model and predictions will be stored
PROJECT_ID="your-project-id" #This is your GCP project ID that can be found in the GCP console
KFPHOST="your-ai-platform-pipeline-url" # Kubeflow Pipelines URL, can be found from settings button in CAIP Pipelines
REGION='your-project-region' #For example, us-central1, note that Vertex AI endpoint deployment region must match MODEL_STORAGE bucket region
ENDPOINT_NAME='your-vertex-ai-endpoint-name'
DEPLOY_COMPUTE='your-endpoint-compute-size'#For example, n1-standard-4
DEPLOY_IMAGE='us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.0-82:latest' #Do not change, BQML XGBoost is currently compatible with 0.82
"""
Explanation: Tutorial Overview
This is a two-part tutorial. Part one walks you through a complete end-to-end machine learning use case using Google Cloud Platform. You will learn how to build a hybrid recommendation model with an embedding technique using Google BigQuery Machine Learning, from the book “BigQuery: The Definitive Guide”, a highly recommended book written by BigQuery and ML expert Valliappa Lakshmanan. We will not cover in detail typical machine learning steps such as data exploration and cleaning, feature selection, and feature engineering (other than the embedding technique shown here). We encourage readers to do so and see if they can improve model quality and performance. Instead, we will mostly focus on showing you how to orchestrate the entire machine learning process with Kubeflow on Google AI Platform Pipelines. In PART TWO, you will learn how to set up a CI/CD pipeline with Google Cloud Source Repositories and Google Cloud Build.
The use case is to predict the propensity of booking for any user/hotel combination. The intuition behind the embedding layer with matrix factorization is that if we can find similar hotels that are close in the embedding space, we will achieve higher accuracy in predicting whether the user will book the hotel.
Prerequisites
Download the Expedia Hotel Recommendation Dataset from Kaggle. You will be mostly working with the train.csv dataset for this tutorial
Upload the dataset to BigQuery by following the how-to guide Loading CSV Data
Follow the how-to guide create flex slots, reservation and assignment in BigQuery for training ML models. <strong>Make sure to create Flex slots and not month/year slots so you can delete them after the tutorial.</strong>
Build and push a docker image using this dockerfile as the base image for the Kubeflow pipeline components.
Create an instance of AI Platform Pipelines by following the how-to guide Setting up AI Platform Pipelines.
Create or use a Google Cloud Storage bucket to export the finalized model to.
Change the following cell to reflect your setup
End of explanation
"""
from typing import NamedTuple
import json
import os
def run_bigquery_ddl(project_id: str, query_string: str, location: str) -> NamedTuple(
'DDLOutput', [('created_table', str), ('query', str)]):
"""
Runs BigQuery query and returns a table/model name
"""
print(query_string)
from google.cloud import bigquery
from google.api_core.future import polling
from google.cloud import bigquery
from google.cloud.bigquery import retry as bq_retry
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query_string, retry=bq_retry.DEFAULT_RETRY)
job._retry = polling.DEFAULT_RETRY
while job.running():
from time import sleep
sleep(0.1)
print('Running ...')
tblname = job.ddl_target_table
tblname = '{}.{}'.format(tblname.dataset_id, tblname.table_id)
print('{} created in {}'.format(tblname, job.ended - job.started))
from collections import namedtuple
result_tuple = namedtuple('DDLOutput', ['created_table', 'query'])
return result_tuple(tblname, query_string)
"""
Explanation: Create BigQuery function
Create a generic BigQuery function that runs a BigQuery query and returns the table/model created. This will be re-used to return BigQuery results for all the different segments of the BigQuery process in the Kubeflow Pipeline. You will see later in the tutorial where this function is passed as a parameter (ddlop) to other functions to perform certain BigQuery operations.
End of explanation
"""
def train_matrix_factorization_model(ddlop, project_id, dataset):
query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.my_implicit_mf_model_quantiles_demo_binary_prod`
OPTIONS
(model_type='matrix_factorization',
feedback_type='implicit',
user_col='user_id',
item_col='hotel_cluster',
rating_col='rating',
l2_reg=30,
num_factors=15) AS
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
""".format(project_id = project_id, dataset = dataset)
return ddlop(project_id, query, 'US')
def evaluate_matrix_factorization_model(project_id, mf_model, location='US')-> NamedTuple('MFMetrics', [('msqe', float)]):
query = """
SELECT * FROM ML.EVALUATE(MODEL `{project_id}.{mf_model}`)
""".format(project_id = project_id, mf_model = mf_model)
print(query)
from google.cloud import bigquery
import json
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('MFMetrics', ['msqe'])
return result_tuple(metrics_df.loc[0].to_dict()['mean_squared_error'])
"""
Explanation: Creating the model
We will start by training a matrix factorization model that will allow us to understand the latent relationships between users and hotel clusters. We are doing this because the matrix factorization approach can only find latent relationships between a user and a hotel. However, there are other intuitively useful predictors (such as is_mobile, location, etc.) that can improve the model's performance. So we can feed the resulting weights/factors as features, along with the other features, to train the final XGBoost model.
End of explanation
"""
def create_user_features(ddlop, project_id, dataset, mf_model):
    #Feature engineering for users
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.user_features_prod` AS
WITH u as
(
select
user_id,
count(*) as total_visits,
count(distinct user_location_city) as distinct_cities,
sum(distinct site_name) as distinct_sites,
sum(is_mobile) as total_mobile,
sum(is_booking) as total_bookings,
FROM `{project_id}.{dataset}.hotel_train`
GROUP BY 1
)
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM
u JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'user_id' AND feature = CAST(u.user_id AS STRING)
""".format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
def create_hotel_features(ddlop, project_id, dataset, mf_model):
    #Feature engineering for hotels
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.hotel_features_prod` AS
WITH h as
(
select
hotel_cluster,
count(*) as total_cluster_searches,
count(distinct hotel_country) as distinct_hotel_countries,
sum(distinct hotel_market) as distinct_hotel_markets,
sum(is_mobile) as total_mobile_searches,
sum(is_booking) as total_cluster_bookings,
FROM `{project_id}.{dataset}.hotel_train`
group by 1
)
SELECT
h.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS hotel_factors
FROM
h JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'hotel_cluster' AND feature = CAST(h.hotel_cluster AS STRING)
""".format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
"""
Explanation: Creating embedding features for users and hotels
We will use the matrix factorization model to create the corresponding user factors and hotel factors, and embed them together with additional features such as total visits and distinct cities to create a new training dataset for an XGBoost classifier, which will try to predict the likelihood of booking for any user/hotel combination. Also note that we aggregated and grouped the original dataset by user_id.
End of explanation
"""
def combine_features(ddlop, project_id, dataset, mf_model, hotel_features, user_features):
#Combine user and hotel embedding features with the rating associated with each combination
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.total_features_prod` AS
with ratings as(
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
)
select
h.* EXCEPT(hotel_cluster),
u.* EXCEPT(user_id),
IFNULL(rating,0) as rating
from `{hotel_features}` h, `{user_features}` u
LEFT OUTER JOIN ratings r
ON r.user_id = u.user_id AND r.hotel_cluster = h.hotel_cluster
""".format(project_id = project_id, dataset = dataset, mf_model=mf_model, hotel_features=hotel_features, user_features=user_features)
return ddlop(project_id, query, 'US')
"""
Explanation: Function below combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier. Note the target variable is rating which is converted into a binary classfication.
End of explanation
"""
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_hotels`(h ARRAY<FLOAT64>)
RETURNS
STRUCT<
h1 FLOAT64,
h2 FLOAT64,
h3 FLOAT64,
h4 FLOAT64,
h5 FLOAT64,
h6 FLOAT64,
h7 FLOAT64,
h8 FLOAT64,
h9 FLOAT64,
h10 FLOAT64,
h11 FLOAT64,
h12 FLOAT64,
h13 FLOAT64,
h14 FLOAT64,
h15 FLOAT64
> AS (STRUCT(
h[OFFSET(0)],
h[OFFSET(1)],
h[OFFSET(2)],
h[OFFSET(3)],
h[OFFSET(4)],
h[OFFSET(5)],
h[OFFSET(6)],
h[OFFSET(7)],
h[OFFSET(8)],
h[OFFSET(9)],
h[OFFSET(10)],
h[OFFSET(11)],
h[OFFSET(12)],
h[OFFSET(13)],
h[OFFSET(14)]
));
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_users`(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)]
));
"""
Explanation: We will create a couple of BigQuery user-defined functions (UDF) to convert arrays to a struct and its array elements are the fields in the struct. <strong>Be sure to change the BigQuery dataset name to your dataset name. </strong>
End of explanation
"""
def train_xgboost_model(ddlop, project_id, dataset, total_features):
#Combine user and hotel embedding features with the rating associated with each combination
query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.recommender_hybrid_xgboost_prod`
OPTIONS(model_type='boosted_tree_classifier', input_label_cols=['rating'], AUTO_CLASS_WEIGHTS=True)
AS
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
""".format(project_id = project_id, dataset = dataset, total_features=total_features)
return ddlop(project_id, query, 'US')
def evaluate_class(project_id, dataset, class_model, total_features, location='US')-> NamedTuple('ClassMetrics', [('roc_auc', float)]):
query = """
SELECT
*
FROM ML.EVALUATE(MODEL `{class_model}`, (
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
))
""".format(dataset = dataset, class_model = class_model, total_features = total_features)
print(query)
from google.cloud import bigquery
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('ClassMetrics', ['roc_auc'])
return result_tuple(metrics_df.loc[0].to_dict()['roc_auc'])
"""
Explanation: Train XGBoost model and evaluate it
End of explanation
"""
def export_bqml_model(project_id, model, destination) -> NamedTuple('ModelExport', [('destination', str)]):
import subprocess
#command='bq extract -destination_format=ML_XGBOOST_BOOSTER -m {}:{} {}'.format(project_id, model, destination)
model_name = '{}:{}'.format(project_id, model)
print (model_name)
subprocess.run(['bq', 'extract', '-destination_format=ML_XGBOOST_BOOSTER', '-m', model_name, destination], check=True)
from collections import namedtuple
result_tuple = namedtuple('ModelExport', ['destination'])
return result_tuple(destination)
def deploy_bqml_model_vertexai(project_id, region, model_name, endpoint_name, model_dir, deploy_image, deploy_compute):
from google.cloud import aiplatform
parent = "projects/" + project_id + "/locations/" + region
client_options = {"api_endpoint": "{}-aiplatform.googleapis.com".format(region)}
clients = {}
#upload the model to Vertex AI
clients['model'] = aiplatform.gapic.ModelServiceClient(client_options=client_options)
model = {
"display_name": model_name,
"metadata_schema_uri": "",
"artifact_uri": model_dir,
"container_spec": {
"image_uri": deploy_image,
"command": [],
"args": [],
"env": [],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": ""
}
}
upload_model_response = clients['model'].upload_model(parent=parent, model=model)
print("Long running operation on uploading the model:", upload_model_response.operation.name)
model_info = clients['model'].get_model(name=upload_model_response.result(timeout=180).model)
#Create an endpoint on Vertex AI to host the model
clients['endpoint'] = aiplatform.gapic.EndpointServiceClient(client_options=client_options)
create_endpoint_response = clients['endpoint'].create_endpoint(parent=parent, endpoint={"display_name": endpoint_name})
print("Long running operation on creating endpoint:", create_endpoint_response.operation.name)
endpoint_info = clients['endpoint'].get_endpoint(name=create_endpoint_response.result(timeout=180).name)
#Deploy the model to the endpoint
dmodel = {
"model": model_info.name,
"display_name": 'deployed_'+model_name,
"dedicated_resources": {
"min_replica_count": 1,
"max_replica_count": 1,
"machine_spec": {
"machine_type": deploy_compute,
"accelerator_count": 0,
}
}
}
traffic = {
'0' : 100
}
deploy_model_response = clients['endpoint'].deploy_model(endpoint=endpoint_info.name, deployed_model=dmodel, traffic_split=traffic)
print("Long running operation on deploying the model:", deploy_model_response.operation.name)
deploy_model_result = deploy_model_response.result()
"""
Explanation: Export XGBoost model and host it as a model endpoint on Vertex AI
One of the nice features of BigQuery ML is the ability to import and export machine learning models. In the function defined below, we are going to export the trained XGBoost model to a Google Cloud Storage bucket. We will later have Google Cloud AI Platform host this model as an endpoint for predictions. It is worth mentioning that you can host this model on any platform that supports Booster (XGBoost 0.82). Check out the documentation for more information on exporting BigQuery ML models and their formats.
End of explanation
"""
import kfp.dsl as dsl
import kfp.components as comp
import time
@dsl.pipeline(
name='Training pipeline for hotel recommendation prediction',
description='Training pipeline for hotel recommendation prediction'
)
def training_pipeline(project_id = PROJECT_ID):
import json
#Minimum threshold for model metric to determine if model will be deployed for prediction
mf_msqe_threshold = 0.5
class_auc_threshold = 0.8
#Defining function containers
ddlop = comp.func_to_container_op(run_bigquery_ddl, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery'])
evaluate_class_op = comp.func_to_container_op(evaluate_class, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery','pandas'])
evaluate_mf_op = comp.func_to_container_op(evaluate_matrix_factorization_model, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery','pandas'])
export_bqml_model_op = comp.func_to_container_op(export_bqml_model, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery'])
deploy_bqml_model_op = comp.func_to_container_op(deploy_bqml_model_vertexai, base_image=BASE_IMAGE, packages_to_install=['google-cloud-aiplatform'])
#############################
#Defining pipeline execution graph
dataset = BQ_DATASET_NAME
#Train matrix factorization model
mf_model_output = train_matrix_factorization_model(ddlop, PROJECT_ID, dataset).set_display_name('train matrix factorization model')
mf_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
mf_model = mf_model_output.outputs['created_table']
#Evaluate matrix factorization model
mf_eval_output = evaluate_mf_op(PROJECT_ID, mf_model).set_display_name('evaluate matrix factorization model')
mf_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
with dsl.Condition(mf_eval_output.outputs['msqe'] < mf_msqe_threshold):
#Create features for classification model
user_features_output = create_user_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create user factors features')
user_features = user_features_output.outputs['created_table']
user_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
hotel_features_output = create_hotel_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create hotel factors features')
hotel_features = hotel_features_output.outputs['created_table']
hotel_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
total_features_output = combine_features(ddlop, PROJECT_ID, dataset, mf_model, hotel_features, user_features).set_display_name('combine all features')
total_features = total_features_output.outputs['created_table']
total_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#Train XGBoost model
class_model_output = train_xgboost_model(ddlop, PROJECT_ID, dataset, total_features).set_display_name('train XGBoost model')
class_model = class_model_output.outputs['created_table']
class_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
class_eval_output = evaluate_class_op(project_id, dataset, class_model, total_features).set_display_name('evaluate XGBoost model')
class_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
with dsl.Condition(class_eval_output.outputs['roc_auc'] > class_auc_threshold):
#Export model
export_destination_output = export_bqml_model_op(project_id, class_model, MODEL_STORAGE).set_display_name('export XGBoost model')
export_destination_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
export_destination = export_destination_output.outputs['destination']
deploy_model = deploy_bqml_model_op(PROJECT_ID, REGION, class_model, ENDPOINT_NAME, MODEL_STORAGE, DEPLOY_IMAGE, DEPLOY_COMPUTE).set_display_name('Deploy XGBoost model')
deploy_model.execution_options.caching_strategy.max_cache_staleness = 'P0D'
"""
Explanation: Defining the Kubeflow Pipelines (KFP)
Now we have the necessary functions defined, we are now ready to create a workflow using Kubeflow Pipeline. The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL).
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
The pipeline performs the following steps -
* Trains a Matrix Factorization model
* Evaluates the trained Matrix Factorization model; if the Mean Squared Error is below the threshold, it continues to the next step, otherwise the pipeline stops
* Engineers new user factors feature with the Matrix Factorization model
* Engineers new hotel factors feature with the Matrix Factorization model
* Combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier
* Trains a XGBoost classifier
* Evaluates the trained XGBoost model; if the ROC AUC score is above the threshold, it continues to the next step, otherwise the pipeline stops
* Exports the XGBoost model to a Google Cloud Storage bucket
* Deploys the XGBoost model from the Google Cloud Storage bucket to Google Cloud AI Platform for prediction
End of explanation
"""
pipeline_func = training_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
import kfp
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
client = kfp.Client(KFPHOST)
experiment = client.create_experiment('hotel_recommender_experiment')
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submitting pipeline runs
You can trigger pipeline runs using the KFP SDK API or the KFP CLI. To submit a run using the KFP SDK, execute the following code. Notice how the pipeline's parameters are passed to the pipeline run.
End of explanation
"""
|
jdhp-docs/python-notebooks | ai_ml_multilayer_perceptron_fr.ipynb | mit | STR_CUR = r"i" # Couche courante
STR_PREV = r"j" # Layer immediately upstream of the current layer (i.e. toward the network's input layer)
STR_NEXT = r"k" # Layer immediately downstream of the current layer (i.e. toward the network's output layer)
STR_EX = r"\eta" # Current example (*sample* or *feature*), i.e. the vector of current network inputs
STR_POT = r"x" # *Activation potential* of neuron $i$ for example $\ex$
STR_POT_CUR = r"x_i" # *Activation potential* of neuron $i$ for example $\ex$
STR_WEIGHT = r"w"
STR_WEIGHT_CUR = r"w_{ij}" # Weight of the connection between neuron $j$ and neuron $i$
STR_ACTIVTHRES = r"\theta" # *Activation threshold* of neuron $i$
STR_ACTIVFUNC = r"f" # *Activation function* of neuron $i$
STR_ERRFUNC = r"E" # *Objective function* or *error function*
STR_LEARNRATE = r"\epsilon" # *Learning rate* (step size)
STR_LEARNIT = r"n" # Iteration (cycle/epoch) number of the learning process
STR_SIGOUT = r"y" # Output signal of neuron $i$ for example $\ex$
STR_SIGOUT_CUR = r"y_i"
STR_SIGOUT_PREV = r"y_j"
STR_SIGOUT_DES = r"d" # Desired output (*label*) of neuron $i$ for example $\ex$
STR_SIGOUT_DES_CUR = r"d_i"
STR_WEIGHTS = r"W" # Network weight matrix (in practice there is one matrix, of potentially different size, per layer)
STR_ERRSIG = r"\Delta" # *Error signal* of neuron $i$ for example $\ex$
def tex(tex_str):
return r"$" + tex_str + r"$"
%matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
nnfig.draw_synapse(ax, (0, -6), (10, 0))
nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0))
nnfig.draw_synapse(ax, (0, 6), (10, 0), label=tex(STR_WEIGHT_CUR), label_position=0.5, fontsize=14)
nnfig.draw_synapse(ax, (10, 0), (12, 0))
nnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)
plt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)
plt.text(x=10, y=1.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=0, y=0, s=r"$\vdots$", fontsize=14)
plt.text(x=-2.5, y=0, s=tex(STR_SIGOUT_PREV), fontsize=14)
plt.text(x=13, y=0, s=tex(STR_SIGOUT_CUR), fontsize=14)
plt.text(x=9.2, y=-1.8, s=tex(STR_POT_CUR), fontsize=14)
nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")
plt.show()
"""
Explanation: Multilayer Perceptron
TODO
Shallow vs. deep learning, further reading:
- https://www.miximum.fr/blog/introduction-au-deep-learning-2/
- https://sciencetonnante.wordpress.com/2016/04/08/le-deep-learning/
- https://www.technologies-ebusiness.com/enjeux-et-tendances/le-deep-learning-pas-a-pas
- http://scholar.google.fr/scholar_url?url=https://arxiv.org/pdf/1404.7828&hl=fr&sa=X&scisig=AAGBfm07Y2UDlPpbninerh4gxHUj2SJfDQ&nossl=1&oi=scholarr&sqi=2&ved=0ahUKEwjfxMu7jKnUAhUoCsAKHR_RDlkQgAMIKygAMAA
Main Python implementations
Scikit-learn: http://scikit-learn.org/stable/modules/neural_networks_supervised.html
...
Notes from the Dunod book
Notations
The following notations are detailed throughout the document:
$\newcommand{\cur}{i}$
$\cur$: current layer
$\newcommand{\prev}{j}$
$\newcommand{\prevcur}{{\cur\prev}}$
$\prev$: layer immediately upstream of the current layer (i.e. toward the network's input layer)
$\newcommand{\next}{k}$
$\newcommand{\curnext}{{\next\cur}}$
$\next$: layer immediately downstream of the current layer (i.e. toward the network's output layer)
$\newcommand{\ex}{\eta}$
$\ex$: current example (sample or feature), i.e. the vector of current network inputs
$\newcommand{\pot}{x}$
$\pot_\cur$: activation potential of neuron $i$ for the current example
$\newcommand{\weight}{w}$
$\newcommand{\wcur}{{\weight_{\cur\prev}}}$
$\wcur$: weight of the connection between neuron $j$ and neuron $i$
$\newcommand{\activthres}{\theta}$
$\activthres_\cur$: activation threshold of neuron $i$
$\newcommand{\activfunc}{f}$
$\activfunc_\cur$: activation function of neuron $i$
$\newcommand{\errfunc}{E}$
$\errfunc$: objective function or error function
$\newcommand{\learnrate}{\epsilon}$
$\learnrate$: learning rate (step size)
$\newcommand{\learnit}{n}$
$\learnit$: iteration (cycle/epoch) number of the learning process
$\newcommand{\sigout}{y}$
$\sigout_\cur$: output signal of neuron $i$ for the current example
$\newcommand{\sigoutdes}{d}$
$\sigoutdes_\cur$: desired output (label) of neuron $i$ for the current example
$\newcommand{\weights}{\boldsymbol{W}}$
$\weights$: network weight matrix (in practice there is one matrix, of potentially different size, per layer)
$\newcommand{\errsig}{\Delta}$
$\errsig_i$: error signal of neuron $i$ for the current example
End of explanation
"""
%matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
nnfig.draw_synapse(ax, (0, -6), (10, 0))
nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0))
nnfig.draw_synapse(ax, (0, 6), (10, 0))
nnfig.draw_synapse(ax, (0, -6), (10, -4))
nnfig.draw_synapse(ax, (0, -2), (10, -4))
nnfig.draw_synapse(ax, (0, 2), (10, -4))
nnfig.draw_synapse(ax, (0, 6), (10, -4))
nnfig.draw_synapse(ax, (0, -6), (10, 4))
nnfig.draw_synapse(ax, (0, -2), (10, 4))
nnfig.draw_synapse(ax, (0, 2), (10, 4))
nnfig.draw_synapse(ax, (0, 6), (10, 4))
nnfig.draw_synapse(ax, (10, -4), (12, -4))
nnfig.draw_synapse(ax, (10, 0), (12, 0))
nnfig.draw_synapse(ax, (10, 4), (12, 4))
nnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)
nnfig.draw_neuron(ax, (10, -4), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (10, 4), 1, ag_func="sum", tr_func="sigmoid")
plt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)
plt.text(x=10, y=7.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=0, y=0, s=r"$\vdots$", fontsize=14)
plt.text(x=9.7, y=-6.1, s=r"$\vdots$", fontsize=14)
plt.text(x=9.7, y=5.8, s=r"$\vdots$", fontsize=14)
plt.text(x=12.5, y=4, s=tex(STR_SIGOUT + "_1"), fontsize=14)
plt.text(x=12.5, y=0, s=tex(STR_SIGOUT + "_2"), fontsize=14)
plt.text(x=12.5, y=-4, s=tex(STR_SIGOUT + "_3"), fontsize=14)
plt.text(x=16, y=4, s=tex(STR_ERRFUNC + "_1 = " + STR_SIGOUT + "_1 - " + STR_SIGOUT_DES + "_1"), fontsize=14)
plt.text(x=16, y=0, s=tex(STR_ERRFUNC + "_2 = " + STR_SIGOUT + "_2 - " + STR_SIGOUT_DES + "_2"), fontsize=14)
plt.text(x=16, y=-4, s=tex(STR_ERRFUNC + "_3 = " + STR_SIGOUT + "_3 - " + STR_SIGOUT_DES + "_3"), fontsize=14)
plt.text(x=16, y=-8, s=tex(STR_ERRFUNC + " = 1/2 ( " + STR_ERRFUNC + "^2_1 + " + STR_ERRFUNC + "^2_2 + " + STR_ERRFUNC + "^2_3 + \dots )"), fontsize=14)
plt.show()
"""
Explanation: $$
\pot_\cur = \sum_\prev \wcur \sigout_\prev
$$
$$
\sigout_\cur = \activfunc(\pot_\cur)
$$
$$
\weights = \begin{pmatrix}
\weight_{11} & \cdots & \weight_{1m} \\
\vdots & \ddots & \vdots \\
\weight_{n1} & \cdots & \weight_{nm}
\end{pmatrix}
$$
Miscellaneous
The MLP can approximate any continuous function with arbitrary precision, depending on the number of neurons in the hidden layer.
Weight initialization: usually small random values.
TODO: what is the difference between:
* a feedback network
* a recurrent network
Objective function (or error function)
Objective function: $\errfunc \left( \weights \left( \learnit \right) \right)$
$\learnit$: current learning iteration $(1, 2, ...)$
Typically, the objective (error) function is the sum of the squared errors of the output neurons.
$$
\errfunc = \frac12 \sum_{\cur \in \Omega} \left[ \sigout_\cur - \sigoutdes_\cur \right]^2
$$
$\Omega$: the set of output neurons
Is the $\frac12$ just there to simplify the derivative computations?
End of explanation
"""
import numpy as np

fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(4, 4))
x = np.arange(10, 30, 0.1)
y = (x - 20)**2 + 2
ax.set_xlabel(r"Weights $" + STR_WEIGHTS + "$", fontsize=14)
ax.set_ylabel(r"Objective function $" + STR_ERRFUNC + "$", fontsize=14)
# See http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params
ax.tick_params(axis='both', # changes apply to the x and y axis
which='both', # both major and minor ticks are affected
bottom='on', # ticks along the bottom edge are on
top='off', # ticks along the top edge are off
left='on', # ticks along the left edge are on
right='off', # ticks along the right edge are off
labelbottom='off', # labels along the bottom edge are off
labelleft='off') # labels along the left edge are off
ax.set_xlim(left=10, right=25)
ax.set_ylim(bottom=0, top=5)
ax.plot(x, y);
"""
Explanation: Learning
Weight update
$$
\weights(\learnit + 1) = \weights(\learnit) - \learnrate \nabla \errfunc \left( \weights(\learnit) \right)
$$
$- \learnrate \nabla \errfunc \left( \weights(\learnit) \right)$: moves in the direction opposite to the gradient (steepest descent)
with $\nabla \errfunc \left( \weights(\learnit) \right)$: gradient of the objective function at the point $\weights$
$\learnrate > 0$: learning rate (step size)
$$
\begin{align}
\delta_{\wcur} & = \wcur(\learnit + 1) - \wcur(\learnit) \\
& = - \learnrate \frac{\partial \errfunc}{\partial \wcur}
\end{align}
$$
$$
\Leftrightarrow \wcur(\learnit + 1) = \wcur(\learnit) - \learnrate \frac{\partial \errfunc}{\partial \wcur}
$$
Each presentation of the full set of examples = one learning cycle (or epoch)
Stopping criterion: when the value of the objective function stabilizes (or when the problem is solved with the desired precision)
"usually there is only one local minimum" (proof ???)
"otherwise, the simplest approach is to restart the learning several times with different initial weights and keep the best matrix $\weights$ (the one that minimizes $\errfunc$)"
End of explanation
"""
%matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0), label=tex(STR_WEIGHT + "_{" + STR_NEXT + STR_CUR + "}"), label_position=0.5, fontsize=14)
nnfig.draw_synapse(ax, (10, 0), (12, 0))
nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)
plt.text(x=0, y=3.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=10, y=3.5, s=tex(STR_NEXT), fontsize=14)
plt.text(x=0, y=-0.2, s=r"$\vdots$", fontsize=14)
nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")
plt.show()
"""
Explanation: Incremental (or partial) learning (Eng. incremental learning):
the weights $\weights$ are adjusted after the presentation of a single example
("this is not a true gradient descent").
It is better for avoiding local minima, especially if the examples are
shuffled at the start of each cycle
Batch (deferred) learning (Eng. batch learning):
TODO...
Is the objective function $\errfunc$ a multivariate function,
or is it an aggregation of the per-example errors?
TODO: delta rule / generalised delta rule
Backpropagation of the gradient
Backpropagation of the gradient:
a method for computing the gradient of the objective function $\errfunc$ efficiently.
Intuition:
Backpropagation is only one method among others for solving the optimisation problem over the weights $\weights$. This optimisation problem could just as well be solved with evolutionary algorithms, for example.
In fact, the real interest of backpropagation (and what explains its fame) is that it formulates the weight-optimisation problem through a particularly efficient analytical expression that cleverly eliminates a large number of redundant computations (somewhat in the manner of dynamic programming): when the weights are optimised by gradient descent, certain terms (the error signals $\errsig$) appear many times in the full analytical expression of the gradient. Backpropagation ensures that these terms are computed only once.
Note that the problem could also have been solved by a gradient descent in which the gradient $\frac{\partial \errfunc}{\partial\wcur(\learnit)}$ is computed by numerical approximation (e.g. the finite-difference method), but that would be much slower and much less efficient...
Principle:
the weights are modified using the error signals $\errsig$.
$$
\wcur(\learnit + 1) = \wcur(\learnit) \underbrace{- \learnrate \frac{\partial \errfunc}{\partial \wcur(\learnit)}}_{\delta_\prevcur}
$$
$$
\begin{align}
\delta_\prevcur & = - \learnrate \frac{\partial \errfunc}{\partial \wcur(\learnit)} \\
& = - \learnrate \errsig_\cur \sigout_\prev
\end{align}
$$
In the case of batch learning, the error corresponding to each example is computed; their individual contributions to the weight updates are summed
Supervised learning works better with linear output neurons (activation function $\activfunc$ = identity function) "because the error signals propagate better".
Binary input data should be chosen in $\{-1,1\}$ rather than $\{0,1\}$, because a zero signal does not contribute to learning.
Vocabulary:
- marginal error: TODO
Error signals $\errsig_\cur$ for the output neurons $(\cur \in \Omega)$
$$
\errsig_\cur = \activfunc'(\pot_\cur)[\sigout_\cur - \sigoutdes_\cur]
$$
Error signals $\errsig_\cur$ for the hidden neurons $(\cur \not\in \Omega)$
$$
\errsig_\cur = \activfunc'(\pot_\cur) \sum_\next \weight_\curnext \errsig_\next
$$
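A minimal numeric sketch of these two formulas (an illustration added here, not the notebook's own code; the sigmoid is assumed as the activation function $\activfunc$):

```python
import numpy as np

def sigmoid(p):
    return 1.0 / (1.0 + np.exp(-p))

def d_sigmoid(p):
    s = sigmoid(p)
    return s * (1.0 - s)

def delta_output(p_o, s_o, s_des):
    """Error signal of an output neuron: f'(p_o) * (s_o - s_o_desired)."""
    return d_sigmoid(p_o) * (s_o - s_des)

def delta_hidden(p_j, w_jk, deltas_k):
    """Error signal of a hidden neuron: f'(p_j) * sum_k w_jk * delta_k."""
    return d_sigmoid(p_j) * np.dot(w_jk, deltas_k)
```

The hidden-layer formula is where the savings come from: each $\errsig_\next$ is computed once in the layer above and then reused by every neuron that feeds into it.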
End of explanation
"""
%matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
HSPACE = 6
VSPACE = 4
# Synapse #####################################
# Layer 1-2
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_1"), label_position=0.4)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), color="lightgray")
# Layer 2-3
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + "_2"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_3"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), color="lightgray")
# Layer 3-4
nnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_4"), label_position=0.4)
nnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_5"), label_position=0.4, label_offset_y=-0.8)
# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True, line_color="lightgray")
# Layer 2
nnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid", line_color="lightgray")
# Layer 3
nnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid")
# Layer 4
nnfig.draw_neuron(ax, (3*HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")
# Text ########################################
# Layer 1 (input)
plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
# Layer 2
plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_1"), fontsize=12)
plt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_1"), fontsize=12)
# Layer 3
plt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_2"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_2"), fontsize=12)
plt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + "_3"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + "_3"), fontsize=12)
# Layer 4
plt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+2, y=-0.3,
s=tex(STR_ERRFUNC + " = (" + STR_SIGOUT + "_o - " + STR_SIGOUT_DES + "_o)^2/2"),
fontsize=12)
plt.show()
"""
Explanation: More detail: computing $\errsig_\cur$
In the following example we only consider the weights $\weight_1$, $\weight_2$, $\weight_3$, $\weight_4$ and $\weight_5$, to simplify the demonstration.
End of explanation
"""
def sigmoid(x, _lambda=1.):
y = 1. / (1. + np.exp(-_lambda * x))
return y
%matplotlib inline
x = np.linspace(-5, 5, 300)
y1 = sigmoid(x, 1.)
y2 = sigmoid(x, 5.)
y3 = sigmoid(x, 0.5)
plt.plot(x, y1, label=r"$\lambda=1$")
plt.plot(x, y2, label=r"$\lambda=5$")
plt.plot(x, y3, label=r"$\lambda=0.5$")
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.legend()
plt.title("Fonction sigmoïde")
plt.axis([-5, 5, -0.5, 2]);
"""
Explanation: Note: $\weight_1$ influences $\pot_2$ and $\pot_3$ in addition to $\pot_1$ and $\pot_o$.
Computing $\frac{\partial \errfunc}{\partial \weight_4}$
recall:
$$
\begin{align}
\errfunc &= \frac12 \left( \sigout_o - \sigoutdes_o \right)^2 \tag{1} \\
\sigout_o &= \activfunc(\pot_o) \tag{2} \\
\pot_o &= \sigout_2 \weight_4 + \sigout_3 \weight_5 \tag{3} \\
\end{align}
$$
that is:
$$
\errfunc = \frac12 \left( \activfunc \left( \sigout_2 \weight_4 + \sigout_3 \weight_5 \right) - \sigoutdes_o \right)^2
$$
hence, applying the chain rule for composite functions:
$$
\frac{\partial \errfunc}{\partial \weight_4} =
\frac{\partial \pot_o}{\partial \weight_4}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
$$
from (1), (2) and (3) we deduce:
$$
\begin{align}
\frac{\partial \pot_o}{\partial \weight_4} &= \sigout_2 \\
\frac{\partial \sigout_o}{\partial \pot_o} &= \activfunc'(\pot_o) \\
\frac{\partial \errfunc}{\partial \sigout_o} &= \sigout_o - \sigoutdes_o \\
\end{align}
$$
the error signal is therefore:
$$
\begin{align}
\errsig_o &=
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o} \\
&= \activfunc'(\pot_o) [\sigout_o - \sigoutdes_o]
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_5}$
$$
\frac{\partial \errfunc}{\partial \weight_5} =
\frac{\partial \pot_o}{\partial \weight_5}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
$$
with:
$$
\begin{align}
\frac{\partial \pot_o}{\partial \weight_5} &= \sigout_3 \\
\frac{\partial \sigout_o}{\partial \pot_o} &= \activfunc'(\pot_o) \\
\frac{\partial \errfunc}{\partial \sigout_o} &= \sigout_o - \sigoutdes_o \\
\errsig_o &=
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o} \\
&= \activfunc'(\pot_o) [\sigout_o - \sigoutdes_o]
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_2}$
$$
\frac{\partial \errfunc}{\partial \weight_2} =
\frac{\partial \pot_2}{\partial \weight_2}
%
\underbrace{
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_2}
$$
with:
$$
\begin{align}
\frac{\partial \pot_2}{\partial \weight_2} &= \sigout_1 \\
\frac{\partial \sigout_2}{\partial \pot_2} &= \activfunc'(\pot_2) \\
\frac{\partial \pot_o}{\partial \sigout_2} &= \weight_4 \\
\errsig_2 &=
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\errsig_o \\
&= \activfunc'(\pot_2) \weight_4 \errsig_o
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_3}$
$$
\frac{\partial \errfunc}{\partial \weight_3} =
\frac{\partial \pot_3}{\partial \weight_3}
%
\underbrace{
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_3}
$$
with:
$$
\begin{align}
\frac{\partial \pot_3}{\partial \weight_3} &= \sigout_1 \\
\frac{\partial \sigout_3}{\partial \pot_3} &= \activfunc'(\pot_3) \\
\frac{\partial \pot_o}{\partial \sigout_3} &= \weight_5 \\
\errsig_3 &=
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\errsig_o \\
&= \activfunc'(\pot_3) \weight_5 \errsig_o
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_1}$
$$
\frac{\partial \errfunc}{\partial \weight_1} =
\frac{\partial \pot_1}{\partial \weight_1}
%
\underbrace{
\frac{\partial \sigout_1}{\partial \pot_1}
\left(
\frac{\partial \pot_2}{\partial \sigout_1}
\underbrace{
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_2}
+
\frac{\partial \pot_3}{\partial \sigout_1}
\underbrace{
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_3}
\right)
}_{\errsig_1}
$$
with:
$$
\begin{align}
\frac{\partial \pot_1}{\partial \weight_1} &= \sigout_i \\
\frac{\partial \sigout_1}{\partial \pot_1} &= \activfunc'(\pot_1) \\
\frac{\partial \pot_2}{\partial \sigout_1} &= \weight_2 \\
\frac{\partial \pot_3}{\partial \sigout_1} &= \weight_3 \\
\errsig_1 &=
\frac{\partial \sigout_1}{\partial \pot_1}
\left(
\frac{\partial \pot_2}{\partial \sigout_1}
\errsig_2
+
\frac{\partial \pot_3}{\partial \sigout_1}
\errsig_3
\right) \\
&=
\activfunc'(\pot_1) \left( \weight_2 \errsig_2 + \weight_3 \errsig_3 \right)
\end{align}
$$
Activation functions: sigmoid functions ("S"-shaped)
The sigmoid ("S"-shaped) function is defined by:
$$f(x) = \frac{1}{1 + e^{-x}}$$
for every real $x$.
It can be generalised to any function of the form:
$$f(x) = \frac{1}{1 + e^{-\lambda x}}$$
End of explanation
"""
def d_sigmoid(x, _lambda=1.):
e = np.exp(-_lambda * x)
y = _lambda * e / np.power(1 + e, 2)
return y
%matplotlib inline
x = np.linspace(-5, 5, 300)
y1 = d_sigmoid(x, 1.)
y2 = d_sigmoid(x, 5.)
y3 = d_sigmoid(x, 0.5)
plt.plot(x, y1, label=r"$\lambda=1$")
plt.plot(x, y2, label=r"$\lambda=5$")
plt.plot(x, y3, label=r"$\lambda=0.5$")
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.legend()
plt.title("Fonction dérivée de la sigmoïde")
plt.axis([-5, 5, -0.5, 2]);
"""
Explanation: Derivative:
$$
f'(x) = \frac{\lambda e^{-\lambda x}}{(1+e^{-\lambda x})^{2}}
$$
which can also be written
$$
\frac{\mathrm{d} y}{\mathrm{d} x} = \lambda y (1-y)
$$
where $y$ ranges from 0 to 1.
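The identity $\frac{\mathrm{d} y}{\mathrm{d} x} = \lambda y (1-y)$ can be checked numerically against a finite-difference derivative (a quick sanity check added here, not part of the original notebook):

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def d_sigmoid(x, lam=1.0):
    y = sigmoid(x, lam)
    return lam * y * (1.0 - y)  # dy/dx = lambda * y * (1 - y)

x = np.linspace(-5, 5, 1001)
numeric = np.gradient(sigmoid(x, lam=2.0), x)  # finite-difference estimate
analytic = d_sigmoid(x, lam=2.0)
```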
End of explanation
"""
def tanh(x):
y = np.tanh(x)
return y
x = np.linspace(-5, 5, 300)
y = tanh(x)
plt.plot(x, y)
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.title("Fonction tangente hyperbolique")
plt.axis([-5, 5, -2, 2]);
"""
Explanation: Hyperbolic tangent
End of explanation
"""
def d_tanh(x):
y = 1. - np.power(np.tanh(x), 2)
return y
x = np.linspace(-5, 5, 300)
y = d_tanh(x)
plt.plot(x, y)
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.title("Fonction dérivée de la tangente hyperbolique")
plt.axis([-5, 5, -2, 2]);
"""
Explanation: Derivative:
$$
\tanh' = \frac{1}{\cosh^{2}} = 1-\tanh^{2}
$$
End of explanation
"""
# Define the activation function and its derivative
activation_function = tanh
d_activation_function = d_tanh
def init_weights(num_input_cells, num_output_cells, num_cell_per_hidden_layer, num_hidden_layers=1):
"""
The returned `weights` object is a list of weight matrices,
where weight matrix at index $i$ represents the weights between
layer $i$ and layer $i+1$.
Numpy array shapes for e.g. num_input_cells=2, num_output_cells=2,
num_cell_per_hidden_layer=3 (before taking the bias into account):
- in: (2,)
- in+bias: (3,)
- w[0]: (3,3)
- w[0]+bias: (3,4)
- w[1]: (3,2)
- w[1]+bias: (4,2)
- out: (2,)
"""
# TODO:
# - should the weights w_ij be positive?
# - is a normal distribution more appropriate than a uniform one?
# - what value of sigma is recommended?
W = []
# Weights between the input layer and the first hidden layer
W.append(np.random.uniform(low=0., high=1., size=(num_input_cells + 1, num_cell_per_hidden_layer + 1)))
# Weights between hidden layers (if there are more than one hidden layer)
for layer in range(num_hidden_layers - 1):
W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_cell_per_hidden_layer + 1)))
# Weights between the last hidden layer and the output layer
W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_output_cells)))
return W
def evaluate_network(weights, input_signal): # TODO: find a better name
# Add the bias on the input layer
input_signal = np.concatenate([input_signal, [-1]])
assert input_signal.ndim == 1
assert input_signal.shape[0] == weights[0].shape[0]
# Compute the output of the first hidden layer
p = np.dot(input_signal, weights[0])
output_hidden_layer = activation_function(p)
# Compute the output of the intermediate hidden layers
# TODO: check this
num_layers = len(weights)
for n in range(num_layers - 2):
p = np.dot(output_hidden_layer, weights[n + 1])
output_hidden_layer = activation_function(p)
# Compute the output of the output layer
p = np.dot(output_hidden_layer, weights[-1])
output_signal = activation_function(p)
return output_signal
def compute_gradient():
# TODO
pass
weights = init_weights(num_input_cells=2, num_output_cells=2, num_cell_per_hidden_layer=3, num_hidden_layers=1)
print(weights)
#print(weights[0].shape)
#print(weights[1].shape)
input_signal = np.array([.1, .2])
input_signal
evaluate_network(weights, input_signal)
"""
Explanation: Logistic function
Functions whose expression is
$$
f(t) = K \frac{1}{1+ae^{-rt}}
$$
where $K$ and $r$ are positive reals and $a$ an arbitrary real.
Sigmoid functions are a special case of logistic functions with $a > 0$.
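As a quick check (an illustration added here, not from the original notebook), the sigmoid is recovered from the logistic function with $K = 1$, $a = 1$ and $r = \lambda$:

```python
import numpy as np

def logistic(t, K=1.0, a=1.0, r=1.0):
    return K / (1.0 + a * np.exp(-r * t))

def sigmoid(t, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * t))

t = np.linspace(-5, 5, 11)
# The sigmoid is the logistic function with K = 1, a = 1, r = lambda:
same = np.allclose(logistic(t, K=1.0, a=1.0, r=2.0), sigmoid(t, lam=2.0))
```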
Python implementation
End of explanation
"""
|
robertoalotufo/ia898 | dev/2017-02-28-RAL-Revisao-de-Algebra-Linear.ipynb | mit | import numpy as np
from numpy.random import randn
"""
Explanation: Linear Algebra Review
End of explanation
"""
A = np.array([[123, 343, 100],
[ 33, 0, -50]])
print (A )
print (A.shape )
print (A.shape[0] )
print (A.shape[1] )
B = np.array([[5, 3, 2, 1, 4],
[0, 2, 1, 3, 8]])
print (B )
print (B.shape )
print (B.shape[0] )
print (B.shape[1] )
"""
Explanation: Matrices
$$ A = \begin{bmatrix} 123 & 343 & 100\\
33 & 0 & -50 \end{bmatrix} $$
End of explanation
"""
print ('A=\n', A )
for i in range(A.shape[0]):
for j in range(A.shape[1]):
print ('A[%d,%d] = %d' % (i,j, A[i,j]) )
"""
Explanation: $$ A = \begin{bmatrix} 123 & 343 & 100\\
33 & 0 & -50 \end{bmatrix} =
\begin{bmatrix} a_{0,0} & a_{0,1} & a_{0,2}\\
a_{1,0} & a_{1,1} & a_{1,2} \end{bmatrix} $$
$a_{i,j}$ is the element in the $i$-th row and $j$-th column.
In NumPy, for a two-dimensional matrix, the first dimension is the number of rows, shape[0], and
the second dimension is the number of columns, shape[1].
The first index i of A[i,j] is the row index and the second index j is the column
index.
End of explanation
"""
B = np.array([[3],
[5]])
print ('B=\n', B )
print ('B.shape:', B.shape )
"""
Explanation: Matrix as a column vector
A column vector is a two-dimensional matrix with a single column: it has shape (n,1), i.e. n rows and 1 column.
End of explanation
"""
A = (10*randn(2,3)).astype(int)
B = randn(2,3)
C = A + B
print ('A=\n',A )
print ('B=\n',B )
print ('C=\n',C )
"""
Explanation: Matrix addition
$$ C = A + B $$
$$ c_{i,j} = a_{i,j} + b_{i,j} $$ for all elements of $A$, $B$ and $C$.
It is important that the dimensions of these three matrices are equal.
End of explanation
"""
print ('A=\n', A )
print()
print ('4 * A=\n', 4 * A )
"""
Explanation: Matrix multiplication
Multiplying a matrix by a scalar
$$ \beta A = \begin{bmatrix} \beta a_{0,0} & \beta a_{0,1} & \ldots & \beta a_{0,m-1}\\
\beta a_{1,0} & \beta a_{1,1} & \ldots & \beta a_{1,m-1} \\
\vdots & \vdots & \ddots & \vdots \\
\beta a_{n-1,0} & \beta a_{n-1,1} & \ldots & \beta a_{n-1,m-1}
\end{bmatrix} $$
End of explanation
"""
|
DominikDitoIvosevic/Uni | STRUCE/2018/SU-2018-LAB02-0036477171.ipynb | mit | # Učitaj osnovne biblioteke...
import sklearn
import mlutils
import matplotlib.pyplot as plt
%pylab inline
"""
Explanation: Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Machine Learning 2018/2019
http://www.fer.unizg.hr/predmet/su
Laboratory exercise 2: Linear discriminative models
Version: 1.2
Last updated: 26 October 2018
(c) 2015-2018 Jan Šnajder, Domagoj Alagić
Published: 26 October 2018
Submission deadline: 5 November 2018 at 07:00
Instructions
This laboratory exercise consists of six tasks. Follow the instructions given in the text cells below. Completing the exercise amounts to filling in this notebook: inserting one or more cells below each task's text, writing the appropriate code, and evaluating the cells.
Make sure you fully understand the code you have written. When submitting the exercise, you must be able, at the assistant's (or demonstrator's) request, to modify and re-evaluate your code. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what was covered in the lectures. Below some tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions in the notebook). Therefore, do not limit yourself to merely solving the task; feel free to experiment. That is precisely the purpose of these exercises.
You must work on the exercises independently. You may consult others about the general approach, but in the end you must complete the exercise yourself. Otherwise the exercise is pointless.
End of explanation
"""
from sklearn.linear_model import LinearRegression, RidgeClassifier
from sklearn.metrics import accuracy_score
"""
Explanation: Tasks
1. Linear regression as a classifier
In the first laboratory exercise we used the linear regression model for, naturally, regression. However, the linear regression model can also be used for classification. Although this sounds somewhat counter-intuitive, it is actually quite simple. Namely, the goal is to learn a function $f(\mathbf{x})$ that predicts the value $1$ for positive examples and the value $0$ for negative examples. In that case, $f(\mathbf{x})=0.5$ represents the boundary between the classes, i.e. examples for which $h(\mathbf{x})\geq 0.5$ are classified as positive, while the rest are classified as negative.
Classification using linear regression is implemented in the RidgeClassifier class. In the following subtasks, train this model on the given data and plot the resulting boundary between the classes. Turn off regularisation ($\alpha = 0$, i.e. alpha=0). Also print the accuracy of your classification model (you may use the function metrics.accuracy_score). Visualise the datasets using the helper function plot_clf_problem(X, y, h=None), which is available in the helper package mlutils (you can download the file mlutils.py from the course web page). X and y represent the input examples and labels, while h represents the model's prediction function (e.g. model.predict).
The goal of this task is to examine how the linear regression classification model behaves on linearly separable and non-separable data.
End of explanation
"""
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, 0, 0, 0])
clf = RidgeClassifier().fit(seven_X, seven_y)
predicted_y = clf.predict(seven_X)
score = accuracy_score(y_pred=predicted_y, y_true=seven_y)
print(score)
mlutils.plot_2d_clf_problem(X=seven_X, y=seven_y, h=clf.predict)
"""
Explanation: (a)
First, try the built-in model on the linearly separable dataset seven ($N=7$).
End of explanation
"""
lr = LinearRegression().fit(seven_X, seven_y)
predicted_y_2 = lr.predict(seven_X)
mlutils.plot_2d_clf_problem(X=seven_X, y=seven_y, h= lambda x : lr.predict(x) >= 0.5)
"""
Explanation: To convince yourself that the implementation you just tried is nothing but ordinary linear regression, write code that reaches the same solution using only the LinearRegression class. The prediction function, which you pass as the third argument h to the function plot_2d_clf_problem, can be defined with a lambda expression: lambda x : model.predict(x) >= 0.5.
End of explanation
"""
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, 0)
lr2 = LinearRegression().fit(outlier_X, outlier_y)
predicted_y_2 = lr2.predict(outlier_X)
mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : lr2.predict(x) >= 0.5)
"""
Explanation: Q: How would the boundary between the classes be defined if we used the class labels $-1$ and $1$ instead of $0$ and $1$?
(b)
Try the same on the linearly separable dataset outlier ($N=8$):
End of explanation
"""
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, 0)
lr3 = LinearRegression().fit(unsep_X, unsep_y)
predicted_y_2 = lr3.predict(unsep_X)
mlutils.plot_2d_clf_problem(X=unsep_X, y=unsep_y, h= lambda x : lr3.predict(x) >= 0.5)
"""
Explanation: Q: Why does the model not achieve full accuracy even though the data are linearly separable?
(c)
Finally, try the same on the linearly non-separable dataset unsep ($N=8$):
End of explanation
"""
from sklearn.datasets import make_classification
x, y = make_classification(n_samples=100, n_informative=2, n_redundant=0, n_repeated=0, n_features=2, n_classes=3, n_clusters_per_class=1)
mlutils.plot_2d_clf_problem(X=x, y=y, h=None)
"""
Explanation: Q: It is obvious why the model cannot achieve full accuracy on this dataset. However, do you think the problem lies in the model or in the data? Argue your position.
2. Multiclass classification
There are several ways in which binary classifiers can be used for multiclass classification. The most common is the so-called one-vs-rest (OVR) scheme, in which one classifier $h_j$ is trained for each of the $K$ classes. Each classifier $h_j$ is trained to separate the examples of class $j$ from the examples of all other classes, and an example is classified into the class $j$ for which $h_j(\mathbf{x})$ is largest.
Using the function datasets.make_classification, generate a random two-dimensional dataset with three classes and plot it using the function plot_2d_clf_problem. For simplicity, assume that there are no redundant features and that each class is clustered into exactly one group.
End of explanation
"""
fig = plt.figure(figsize=(5,15))
fig.subplots_adjust(wspace=0.2)
y_ovo1 = [ 0 if i == 0 else 1 for i in y]
lrOvo1 = LinearRegression().fit(x, y_ovo1)
fig.add_subplot(3,1,1)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo1, h= lambda x : lrOvo1.predict(x) >= 0.5)
y_ovo2 = [ 0 if i == 1 else 1 for i in y]
lrOvo2 = LinearRegression().fit(x, y_ovo2)
fig.add_subplot(3,1,2)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo2, h= lambda x : lrOvo2.predict(x) >= 0.5)
y_ovo3 = [ 0 if i == 2 else 1 for i in y]
lrOvo3 = LinearRegression().fit(x, y_ovo3)
fig.add_subplot(3,1,3)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo3, h= lambda x : lrOvo3.predict(x) >= 0.5)
"""
Explanation: Train three binary classifiers, $h_1$, $h_2$ and $h_3$, and plot the boundaries between the classes (three plots). Then define $h(\mathbf{x})=\mathrm{argmax}_j h_j(\mathbf{x})$ (write your own predict function that does this) and plot the class boundaries for that model. Then convince yourself that you would get an identical result by directly applying the RidgeClassifier model, since for a multiclass problem that model internally implements the one-vs-rest scheme.
Q: An alternative scheme is the one called one-vs-one (OVO). What is the advantage of the OVR scheme over OVO? And vice versa?
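One possible sketch of the $h(\mathbf{x})=\mathrm{argmax}_j h_j(\mathbf{x})$ combination (an illustration, not the official solution: it uses plain least squares instead of the sklearn classes, and the three Gaussian blobs are made-up data):

```python
import numpy as np

rng = np.random.RandomState(0)
# Hypothetical 3-class toy data: three well-separated 2-D blobs.
X = np.vstack([rng.randn(30, 2) + c for c in ([0, 0], [5, 0], [0, 5])])
y = np.repeat([0, 1, 2], 30)

Phi = np.hstack([np.ones((len(X), 1)), X])  # add a bias column
# One least-squares "regressor" per class, trained on 0/1 targets (OVR):
T = (y[:, None] == np.arange(3)).astype(float)
W = np.linalg.lstsq(Phi, T, rcond=None)[0]

def predict(X_new):
    scores = np.hstack([np.ones((len(X_new), 1)), X_new]).dot(W)  # h_j(x) per class
    return np.argmax(scores, axis=1)                              # argmax over j

pred = predict(X)
```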
End of explanation
"""
def sigm(alpha):
def f(x):
return 1 / (1 + exp(-alpha*x))
return f
ax = list(range(-10, 10))
ay1 = list(map(sigm(1), ax))
ay2 = list(map(sigm(2), ax))
ay3 = list(map(sigm(4), ax))
fig = plt.figure(figsize=(5,15))
p1 = fig.add_subplot(3, 1, 1)
p1.plot(ax, ay1)
p2 = fig.add_subplot(3, 1, 2)
p2.plot(ax, ay2)
p3 = fig.add_subplot(3, 1, 3)
p3.plot(ax, ay3)
"""
Explanation: 3. Logistic regression
This task deals with a probabilistic discriminative model, logistic regression, which, despite its name, is a classification model.
Logistic regression is a typical representative of so-called generalised linear models, which are of the form $h(\mathbf{x})=f(\mathbf{w}^\intercal\tilde{\mathbf{x}})$. Logistic regression uses the so-called logistic (sigmoid) function $\sigma (x) = \frac{1}{1 + \textit{exp}(-x)}$ for the function $f$.
(a)
Define the logistic (sigmoid) function $\mathrm{sigm}(x)=\frac{1}{1+\exp(-\alpha x)}$ and plot it for $\alpha\in\{1,2,4\}$.
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures as PolyFeat
from sklearn.metrics import log_loss
def loss_function(h_x, y):
return -y * np.log(h_x) - (1 - y) * np.log(1 - h_x)
def lr_h(x, w):
Phi = PolyFeat(1).fit_transform(x.reshape(1,-1))
return sigm(1)(Phi.dot(w))
def cross_entropy_error(X, y, w):
Phi = PolyFeat(1).fit_transform(X)
return log_loss(y, sigm(1)(Phi.dot(w)))
def lr_train(X, y, eta = 0.01, max_iter = 2000, alpha = 0, epsilon = 0.0001, trace= False):
w = zeros(shape(X)[1] + 1)
N = len(X)
w_trace = [];
error = epsilon**-1
for i in range(0, max_iter):
dw0 = 0; dw = zeros(shape(X)[1]);
new_error = 0
for j in range(0, N):
h = lr_h(X[j], w)
dw0 += h - y[j]
dw += (h - y[j])*X[j]
new_error += loss_function(h, y[j])
if abs(error - new_error) < epsilon:
print('stagnacija na i = ', i)
break
else: error = new_error
w[0] -= eta*dw0
w[1:] = w[1:] * (1-eta*alpha) - eta*dw
w_trace.extend(w)
if trace:
return w, w_trace
else: return w
"""
Explanation: Q: Why is the sigmoid function a suitable choice for the activation function of a generalised linear model?
<br/>
Q: What influence does the factor $\alpha$ have on the shape of the sigmoid? What does that mean for the logistic regression model (i.e. how does the model output depend on the norm of the weight vector $\mathbf{w}$)?
(b)
Implement the function
lr_train(X, y, eta=0.01, max_iter=2000, alpha=0, epsilon=0.0001, trace=False)
for training a logistic regression model by gradient descent (batch version). The function takes a labelled training set (example matrix X and label vector y) and returns an $(n+1)$-dimensional weight vector of type ndarray. If trace=True, the function additionally returns the list (or matrix) of weight vectors $\mathbf{w}^0,\mathbf{w}^1,\dots,\mathbf{w}^k$ generated through all iterations of the optimisation, from 0 to $k$. The optimisation should run until max_iter iterations are reached, or until the difference in the cross-entropy error between two iterations falls below the value epsilon. The parameter alpha represents the regularisation factor.
We recommend defining a helper function lr_h(x,w) that gives the prediction for example x with the given weights w. We also recommend a function cross_entropy_error(X,y,w) that computes the cross-entropy error of the model on the labelled set (X,y) with those same weights.
NB: Pay attention to whether the way the labels are defined ($\{+1,-1\}$ or $\{1,0\}$) is compatible with the computation of the loss function in the optimisation algorithm.
"""
trained = lr_train(seven_X, seven_y)
print(cross_entropy_error(seven_X, seven_y, trained))
print(trained)
h3c = lambda x: lr_h(x, trained) > 0.5
figure()
mlutils.plot_2d_clf_problem(seven_X, seven_y, h3c)
"""
Explanation: (c)
Using the lr_train function, train a logistic regression model on the seven dataset, plot the resulting boundary between the classes, and compute the cross-entropy error.
NB: Make sure you give the model a sufficient number of iterations.
End of explanation
"""
from sklearn.metrics import zero_one_loss
eta = [0.005, 0.01, 0.05, 0.1]
[w3d, w3d_trace] = lr_train(seven_X, seven_y, trace=True)
Phi = PolyFeat(1).fit_transform(seven_X)
h_3d = lambda x: x >= 0.5
error_unakrs = []
errror_classy = []
errror_eta = []
for k in range(0, len(w3d_trace), 3):
error_unakrs.append(cross_entropy_error(seven_X, seven_y, w3d_trace[k:k+3]))
errror_classy.append(zero_one_loss(seven_y, h_3d(sigm(1)(Phi.dot(w3d_trace[k:k+3])))))
for i in eta:
err = []
[w3, w3_trace] = lr_train(seven_X, seven_y, i, trace=True)
for j in range(0, len(w3_trace), 3):
err.append(cross_entropy_error(seven_X, seven_y, w3_trace[j:j+3]))
errror_eta.append(err)
figure(figsize(12, 15))
subplots_adjust(wspace=0.1)
subplot(2,1,1)
grid()
plot(error_unakrs); plot(errror_classy);
subplot(2,1,2)
grid()
for i in range(0, len(eta)):
plot(errror_eta[i], label = 'eta = ' + str(i))
legend(loc = 'best');
"""
Explanation: Q: Which stopping criterion was triggered?
Q: Why is the obtained cross-entropy error not equal to zero?
Q: How would you verify that the optimisation procedure has really found the hypothesis that minimises the training error? What does that depend on?
Q: How would you modify the code if you wanted the optimisation to be carried out by stochastic gradient descent (online learning)?
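One possible answer, sketched (an illustration, not the official solution; the function name is made up): the inner loop updates $\mathbf{w}$ after every single example, reshuffling the examples at the start of each epoch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lr_train_sgd(X, y, eta=0.05, n_epochs=500, seed=0):
    """Logistic regression trained by stochastic (online) gradient descent."""
    rng = np.random.RandomState(seed)
    Phi = np.hstack([np.ones((len(X), 1)), X])  # prepend the bias feature
    w = np.zeros(Phi.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):       # reshuffle every epoch
            h = sigmoid(Phi[i].dot(w))
            w -= eta * (h - y[i]) * Phi[i]      # per-example gradient step
    return w

seven_X = np.array([[2, 1], [2, 3], [1, 2], [3, 2], [5, 2], [5, 4], [6, 3]])
seven_y = np.array([1, 1, 1, 1, 0, 0, 0])
w_sgd = lr_train_sgd(seven_X, seven_y)
```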
(d)
Plot on one graph the cross-entropy error (expectation of the logistic loss) and the classification error (expectation of the 0-1 loss) on the seven dataset over the iterations of the optimisation procedure. Use the weight trace of the lr_train function from task (b) (option trace=True). On a second graph, plot the cross-entropy error as a function of the number of iterations for different learning rates, $\eta\in\{0.005,0.01,0.05,0.1\}$.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
reg3e = LogisticRegression(max_iter=2000, tol=0.0001, C=0.01**-1, solver='lbfgs').fit(seven_X,seven_y)
h3e = lambda x : reg3e.predict(x)
figure(figsize(7, 7))
mlutils.plot_2d_clf_problem(seven_X,seven_y, h3e)
"""
Explanation: Q: Why is the cross-entropy error larger than the classification error? Is this always the case with logistic regression, and why?
Q: Which learning rate $\eta$ would you choose and why?
(e)
Familiarise yourself with the class linear_model.LogisticRegression, which implements logistic regression. Compare the model's result on the seven dataset with the result you obtain with your own implementation of the algorithm.
NB: Since the built-in implementation uses more advanced versions of function optimisation, it is quite likely that your solutions will not match exactly, but the overall performance of the models should. Again, pay attention to the number of iterations and the strength of the regularisation.
End of explanation
"""
logReg4 = LogisticRegression(solver='liblinear').fit(outlier_X, outlier_y)
mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : logReg4.predict(x) >= 0.5)
"""
Explanation: 4. Analysis of Logistic Regression
(a)
Using the built-in implementation of logistic regression, check how logistic regression copes with outliers. Use the outlier dataset from the first task. Plot the boundary between the classes.
Q: Why does the result differ from the one obtained by the linear-regression classification model from the first task?
End of explanation
"""
[w4b, w4b_trace] = lr_train(seven_X, seven_y, trace = True)
w0_4b = []; w1_4b = []; w2_4b = [];
for i in range(0, len(w4b_trace), 3):
w0_4b.append(w4b_trace[i])
w1_4b.append(w4b_trace[i+1])
w2_4b.append(w4b_trace[i+2])
h_gl = []
for i in range(0, len(seven_X)):
h = []
for j in range(0, len(w4b_trace), 3):
h.append(lr_h(seven_X[i], w4b_trace[j:j+3]))
h_gl.append(h)
figure(figsize=(7, 14))
subplot(2,1,1)
grid()
for i in range(0, len(h_gl)):
plot(h_gl[i], label = 'x' + str(i))
legend(loc = 'best') ;
subplot(2,1,2)
grid()
plot(w0_4b); plot(w1_4b); plot(w2_4b);
legend(['w0', 'w1', 'w2'], loc = 'best');
"""
Explanation: (b)
Train a logistic regression model on the seven dataset and, on two separate plots, show, across the iterations of the optimization algorithm, (1) the model output $h(\mathbf{x})$ for all seven examples and (2) the values of the weights $w_0$, $w_1$, $w_2$.
End of explanation
"""
unsep_y = np.append(seven_y, 0)
[w4c, w4c_trace] = lr_train(unsep_X, unsep_y, trace = True)
w0_4c = []; w1_4c = []; w2_4c = [];
for i in range(0, len(w4c_trace), 3):
w0_4c.append(w4c_trace[i])
w1_4c.append(w4c_trace[i+1])
w2_4c.append(w4c_trace[i+2])
h_gl = []
for i in range(0, len(unsep_X)):
h = []
for j in range(0, len(w4c_trace), 3):
h.append(lr_h(unsep_X[i], w4c_trace[j:j+3]))
h_gl.append(h)
figure(figsize=(7, 14))
subplots_adjust(wspace=0.1)
subplot(2,1,1)
grid()
for i in range(0, len(h_gl)):
plot(h_gl[i], label = 'x' + str(i))
legend(loc = 'best') ;
subplot(2,1,2)
grid()
plot(w0_4c); plot(w1_4c); plot(w2_4c);
legend(['w0', 'w1', 'w2'], loc = 'best');
"""
Explanation: (c)
Repeat the experiment from subtask (b) using the linearly inseparable dataset unsep from the first task.
Q: Compare the plots for the linearly separable and linearly inseparable cases and comment on the difference.
End of explanation
"""
from numpy.linalg import norm
alpha5 = [0, 1, 10, 100]
err_gl = []; norm_gl = [];
for a in alpha5:
[w5, w5_trace] = lr_train(seven_X, seven_y, alpha = a, trace = True)
err = []; L2_norm = [];
for k in range(0, len(w5_trace), 3):
err.append(cross_entropy_error(seven_X, seven_y, w5_trace[k:k+3]))
        L2_norm.append(norm(w5_trace[k:k+3]))
err_gl.append(err)
norm_gl.append(L2_norm)
figure(figsize=(7, 14))
subplot(2,1,1)
grid()
for i in range(0, len(err_gl)):
plot(err_gl[i], label = 'alpha = ' + str(alpha5[i]) )
legend(loc = 'best') ;
subplot(2,1,2)
grid()
for i in range(0, len(err_gl)):
plot(norm_gl[i], label = 'alpha = ' + str(alpha5[i]) )
legend(loc = 'best');
"""
Explanation: 5. Regularized Logistic Regression
Train a logistic regression model on the seven dataset with different L2-regularization factors, $\alpha\in\{0,1,10,100\}$. On two separate plots, show (1) the cross-entropy error and (2) the L2-norm of the vector $\mathbf{w}$ across the iterations of the optimization algorithm.
Q: Are the shapes of the curves as expected, and why?
Q: Which value of $\alpha$ would you choose, and why?
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
[x6, y6] = make_classification(n_samples=100, n_features=2, n_redundant=0, n_classes=2, n_clusters_per_class=2)
figure(figsize=(7, 5))
mlutils.plot_2d_clf_problem(x6, y6)
d = [2,3]
j = 1
figure(figsize=(12, 4))
subplots_adjust(wspace=0.1)
for i in d:
subplot(1,2,j)
poly = PolynomialFeatures(i)
Phi = poly.fit_transform(x6)
model = LogisticRegression(solver='lbfgs')
model.fit(Phi, y6)
h = lambda x : model.predict(poly.transform(x))
mlutils.plot_2d_clf_problem(x6, y6, h)
title('d = ' + str(i))
j += 1
# Vaš kôd ovdje...
"""
Explanation: 6. Logistic Regression with a Feature Mapping
Study the datasets.make_classification function. Generate and plot a two-class dataset with a total of $N=100$ two-dimensional ($n=2$) examples, with two clusters per class (n_clusters_per_class=2). It is unlikely that a dataset generated this way will be linearly separable, but that is not a problem, because we can map the examples into a higher-dimensional feature space using the preprocessing.PolynomialFeatures class, as we did with linear regression in the first lab exercise. Train a logistic regression model using a polynomial feature mapping of degree $d=2$ and degree $d=3$, and plot the resulting boundaries between the classes. You may use your own implementation, but for speed linear_model.LogisticRegression is recommended. Choose the regularization factor as you like.
NB: As before, use the plot_2d_clf_problem function to display the boundary between the classes. Pass the original dataset to the function as arguments, and perform the mapping into the feature space inside the prediction function h, as follows:
End of explanation
"""
|
bobmyhill/burnman | tutorial/tutorial_02_composition_class.ipynb | gpl-2.0 | from burnman import Composition
olivine_composition = Composition({'MgO': 1.8,
'FeO': 0.2,
'SiO2': 1.}, 'weight')
"""
Explanation: <h1>The BurnMan Tutorial</h1>
Part 2: The Composition Class
This file is part of BurnMan - a thermoelastic and thermodynamic toolkit
for the Earth and Planetary Sciences
Copyright (C) 2012 - 2021 by the BurnMan team,
released under the GNU GPL v2 or later.
Introduction
This ipython notebook is the second in a series designed to introduce new users to the code structure and functionalities present in BurnMan.
<b>Demonstrates</b>
burnman.Composition: Defining Composition objects, converting between molar, weight and atomic amounts, changing component bases, and modifying compositions.
Everything in BurnMan and in this tutorial is defined in SI units.
The Composition class
It is quite common in petrology to want to perform simple manipulations on chemical compositions. These manipulations might include:
- converting between molar and weight percent of oxides or elements
- changing from one compositional basis to another (e.g. 'FeO' and 'Fe2O3' to 'Fe' and 'O')
- adding new chemical components to an existing composition in specific proportions with existing components.
These operations are easy to perform in Excel (for example), but errors are surprisingly common, and are even present in published literature. BurnMan's Composition class is designed to make some of these common tasks easy and hopefully less error prone. Composition objects are initialised with a dictionary of component amounts (in any format), followed by a string that indicates whether that composition is given in "molar" amounts or "weight" (more technically mass, but weight is a more commonly used word in chemistry).
End of explanation
"""
olivine_composition.print('molar', significant_figures=4,
normalization_component='SiO2', normalization_amount=1.)
olivine_composition.print('weight', significant_figures=4,
normalization_component='total', normalization_amount=1.)
olivine_composition.print('atomic', significant_figures=4,
normalization_component='total', normalization_amount=7.)
"""
Explanation: After initialization, the "print" method can be used to directly print molar, weight or atomic amounts. Optional variables control the print precision and normalization of amounts.
End of explanation
"""
KLB1 = Composition({'SiO2': 44.48,
'Al2O3': 3.59,
'FeO': 8.10,
'MgO': 39.22,
'CaO': 3.44,
'Na2O': 0.30}, 'weight')
"""
Explanation: Let's do something a little more complicated.
When we're making a starting mix for petrological experiments, we often have to add additional components. For example, we add iron as Fe2O3 even if we want a reduced oxide starting mix, because FeO is not a stable stoichiometric compound.
Here we show how to use BurnMan to create such mixes. In this case, let's say we want to create a KLB-1 starting mix (Takahashi, 1986). We know the weight proportions of the various oxides (including only components in the NCFMAS system):
End of explanation
"""
CO2_molar = KLB1.molar_composition['CaO'] + KLB1.molar_composition['Na2O']
O_molar = KLB1.molar_composition['FeO']*0.5
KLB1.add_components(composition_dictionary = {'CO2': CO2_molar,
'O': O_molar},
unit_type = 'molar')
"""
Explanation: However, this composition is not the composition we wish to make in the lab. We need to make the following changes:
- $\text{CaO}$ and $\text{Na}_2\text{O}$ should be added as $\text{CaCO}_3$ and $\text{Na}_2\text{CO}_3$.
- $\text{FeO}$ should be added as $\text{Fe}_2\text{O}_3$
First, we change the bulk composition to satisfy these requirements. The molar amounts of the existing components are stored in a dictionary "molar_composition", and can be used to determine the amounts of CO2 and O to add to the bulk composition:
End of explanation
"""
KLB1.change_component_set(['Na2CO3', 'CaCO3', 'Fe2O3', 'MgO', 'Al2O3', 'SiO2'])
KLB1.print('weight', significant_figures=4, normalization_amount=2.)
"""
Explanation: Then we can change the component set to the oxidised, carbonated compounds and print the desired starting compositions, for 2 g total mass:
End of explanation
"""
|
ML4DS/ML4all | P2.Numpy/P2_Numpy_basics_student.ipynb | mit | # Import numpy library
import numpy as np
"""
Explanation: Exercises about Numpy
Notebook version:
* 1.0 (Mar 15, 2016) - First version - UTAD version
* 1.1 (Sep 12, 2017) - Python3 compatible
* 1.2 (Sep 3, 2018) - Adapted to TMDE (only numpy exercises)
* 1.3 (Sep 4, 2019) - Spelling and structure revision.
Authors: Jerónimo Arenas García (jeronimo.arenas@uc3m.es),
Jesús Cid Sueiro (jcid@tsc.uc3m.es),
Vanessa Gómez Verdejo (vanessa@tsc.uc3m.es),
Óscar García Hinde (oghinnde@tsc.uc3m.es),
Simón Roca Sotelo (sroca@tsc.uc3m.es)
This notebook is an introduction to the Numpy library. Numpy adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. Here we will learn the basics on how to work with numpy arrays, and some of the most common operations which are needed when working with these data structures.
1. Importing numpy
End of explanation
"""
x = [5, 4, 3, 4]
print(type(x[0]))
"""
Explanation: 2. Numpy exercises
2.1. Create numpy arrays
The following code fragment defines variable x as a list of 4 integers, you can check that by printing the type of any element of x.
End of explanation
"""
# Create a list of floats containing the same elements as in x
x_f = []
for element in x:
# <FILL IN>
print(x_f)
print(type(x_f[0]))
"""
Explanation: If you want to apply a transformation over each element of this list you have to build a loop and operate over each element.
Exercise 1: Complete the following code to create a new list with the same elements as x, but where each element of the list is a float (you can use the float() function).
End of explanation
"""
# Numpy arrays can be created from numeric lists or using different numpy methods
y = np.arange(8) + 1
x = np.array(x_f)
z = np.array([[1, 2], [2, 1]])
# The arange() function generates vectors of equally spaced numbers. We can
# specify start and stop positions as well as the step length (the steps don't
# need to be integers!):
print('A vector that goes from 2 to 8 in steps of 2: ', np.arange(2, 9, 2))
# Numpy also has a linspace() function that works exactly like its Matlab
# counterpart:
print('\nA vector of length 5 that spans from 0 to 1 in constant increments:\n',
np.linspace(0, 1, 5))
# Check the different data types involved
print('\nThe type of variable x_f is ', type(x_f))
print('The type of variable x is ', type(x))
# Print the shapes of the numpy arrays
print('\nThe variable x has shape ', x.shape)
print('The variable y has shape ', y.shape)
print('The variable z has shape ', z.shape)
"""
Explanation: The output should be:
[5.0, 4.0, 3.0, 4.0]
<class 'float'>
Numpy is a python library that lets you work with data vectors and matrices (we will call them numpy arrays) and directly apply operations over these arrays without the need to operate element by element.
Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), np.eye(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of a numerical type by using np.array(my_list).
You can easily check the shape of any numpy array with the property .shape.
End of explanation
"""
my_array = np.arange(9).reshape((3, 3))
print(my_array)
print('the type is: ', type(my_array))
# Convert my_array to list
my_array_list = my_array.tolist()
print('\n', my_array_list)
print('the type is: ',type(my_array_list))
"""
Explanation: Note: Compare the shape of x and y with the shape of z and note the difference between 1-D and N-D numpy arrays (ndarrays). We will later review this issue in detail.
We can also convert a numpy array or matrix into a python list with the method np.tolist().
End of explanation
"""
# 1. Define a new 2x3 array named my_array with [1, 2, 3] in the first row and
# [4, 5, 6] in the second. Check the dimension of the array.
# my_array = <FILL IN>
print(my_array)
print('Its shape is: ', np.shape(my_array))
#2. Define a new 3x4 array named my_zeros with all its elements to zero
# my_zeros = <FILL IN>
print('A 3x4 vector of zeros:')
print(my_zeros)
#3. Define a new 4x2 array named my_ones with all its elements to one
# my_ones = <FILL IN>
print('A 4x2 vector of ones:')
print(my_ones)
#4. Modify the dimensions of my_ones to a 2x4 array using command np.reshape()
# my_ones2 = <FILL IN>
print('A 2x4 vector of ones:')
print(my_ones2)
#5. Define a new 4x4 identity array named my_eye
# my_eye = <FILL IN>
print('A 4x4 identity matrix:')
print(my_eye)
"""
Explanation: Exercise 2: Complete the following exercises:
End of explanation
"""
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3))
result = x1 + x2
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nAddition of x1 and x2 using the + operator:\n', result)
"""
Explanation: The output should be:
```
[[1 2 3]
[4 5 6]]
Its shape is: (2, 3)
A 3x4 vector of zeros:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
A 4x2 vector of ones:
[[1. 1.]
[1. 1.]
[1. 1.]
[1. 1.]]
A 2x4 vector of ones:
[[1. 1. 1. 1.]
[1. 1. 1. 1.]]
```
2.2 Numpy operations
We can perform all the usual numerical and matrix operations with numpy. In the case of matrix addition and subtraction, we can use the common "+" or "-" operators:
End of explanation
"""
# We can add two arrays:
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3))
result = np.add(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nAddition of x1 and x2 using built-in functions:\n', result)
# Or compute the difference:
result = np.subtract(x1, x2)
print('\nSubtraction of x1 and x2 using built-in functions:\n', result)
"""
Explanation: However, numpy provides us with built-in functions that guarantee that any errors and exceptions are handled properly:
End of explanation
"""
# We can add or subtract row and column vectors:
row_vect = np.ones((1, 3))
col_vect = np.ones((3, 1))
result = np.add(x1, row_vect)
print('x1:\n', x1, '\n\nrow_vect:\n', row_vect, '\n\ncol_vect:\n', col_vect)
print('\nAddition of a row vector:\n', result)
result = np.add(x1, col_vect)
print('\nAddition of a column vector:\n', result)
"""
Explanation: Whether you use the basic operators or the built-in functions will depend on the situation.
We can also add or subtract column or row vectors from arrays. Again, both the basic operators and the built in functions will perform the same operations. Unlike in Matlab, where this operation will raise an error, Python will automatically execute it row by row or column by column as appropriate:
End of explanation
"""
# We can perform element-wise multiplication by using the * operator:
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3)) * 2 # a 3x3 array with 2s in every cell
result = x1 * x2
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nElement-wise multiplication of x1 and x2 using the * operator:\n', result)
# or by using the built-in numpy function:
result = np.multiply(x1, x2)
print('\nElement-wise multiplication of x1 and x2 using built-in functions:\n', result)
"""
Explanation: Another key difference with Matlab is that the "*" operator won't give us matrix multiplication. It will instead compute an element-wise multiplication. Again, numpy has a built-in function for this purpose that will guarantee proper handling of errors:
End of explanation
"""
# We can perform matrix multiplication:
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3)) * 2 # a 3x3 array with 2s in every cell
result = np.matmul(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nProduct of x1 and x2:\n', result)
# Or the dot product between vectors:
v1 = np.arange(4)
v2 = np.arange(3, 7)
result = np.dot(v1, v2)
print('\nv1:\n', v1, '\n\nv2:\n', v2)
print('\nDot product of v1 and v2:\n', result)
"""
Explanation: Numpy also gives us functions to perform matrix multiplications and dot products.
End of explanation
"""
x1 = np.arange(9).reshape((3, 3))
x2 = np.ones((3, 3)) * 2 # a 3x3 array with 2s in every cell
result = np.matmul(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nProduct of x1 and x2 using np.matmul():\n', result)
result = np.dot(x1, x2)
print('\nProduct of x1 and x2 using np.dot():\n', result, '\n')
# Read the np.dot() documentation for more information:
help(np.dot)
"""
Explanation: Note that the np.dot() function is very powerful and can perform a number of different operations depending on the nature of the input arguments. For example, if we give it a pair of matrices of adequate dimensions, it will perform the same operation as np.matmul().
End of explanation
"""
# Examples of element-wise numerical operations:
x1 = np.arange(9).reshape((3, 3)) + 1
print('x1:\n', x1)
print('\nExponentiation of x1:\n', np.exp(x1))
print('\nLogarithm of x1:\n', np.log(x1))
print('\nSquare root of x1:\n', np.sqrt(x1))
"""
Explanation: We can also compute typical numerical operations, which will be applied element-wise:
End of explanation
"""
# Element-wise division of two matrices:
x1 = np.arange(9).reshape((3, 3)) * 3
x2 = np.ones((3, 3)) * 3
result = np.divide(x1, x2)
print('x1:\n', x1, '\n\nx2:\n', x2)
print('\nElement-wise division of x1 and x2:\n', result)
"""
Explanation: Element-wise division between matrices is performed using the ''/'' operator or the divide() built-in function:
End of explanation
"""
# Performing power operations with the ** operator:
x1 = np.arange(9).reshape((3, 3))
result = x1**2
print('x1:\n', x1)
print('\nRaising all elements in x1 to the power of 2 using the ** operator:\n', result)
result = x1**x1
print('\nRaising all elements in x1 to themselves using the ** operator:\n', result)
# Performing power operations with the power() function:
result = np.power(x1, 2)
print('\nRaising all elements in x1 to the power of 2 using the power() function:\n', result)
result = np.power(x1, x1)
print('\nRaising all elements in x1 to themselves using the power() function:\n', result)
"""
Explanation: We can use the "**" operator or the power() built-in function to raise elements from a matrix to a given power, or to raise elements of one matrix to positionally-corresponding powers in another matrix:
End of explanation
"""
# Three different ways of transposing a matrix:
x1 = np.arange(9).reshape((3, 3))
print('x1:\n', x1)
print('\nTranspose of x1 using the numpy function:\n', np.transpose(x1))
print('\nTranspose of x1 using the ndarray method:\n', x1.transpose())
print('\nTranspose of x1 using the abbreviated form:\n', x1.T)
print('\nAs expected, the three methods produce the same result!')
"""
Explanation: Finally, we can transpose a matrix by using the numpy.transpose() function, the ndarray.transpose() method or its abbreviated version, ndarray.T. We usually use the abbreviated version, but the other forms have their place in certain contexts. Check their documentations to see what options they offer:
End of explanation
"""
# Complete the following exercises. Print the partial results to visualize them.
# Create a 3x4 array called `y`. It's up to you to decide what it contains.
#y = <FILL IN>
# Create a column vector of length 3 called `x_col`.
#x_col = <FILL IN>
# Multiply the 2-D array `y` by 2
#y_by2 = <FILL IN>
# Multiply each of the columns in `y` by the column vector `x_col`
#z = <FILL IN>
# Obtain the matrix product of the transpose of x_col and y
#x_by_y = <FILL IN>
# Compute the sine of a vector that spans from -5 to 5 in increments of 0.5
#x = <FILL IN>
#x_sin = <FILL IN>
"""
Explanation: Exercise 3: In the next cell you'll find a few exercises for you to practice these operations.
End of explanation
"""
array1 = np.array([1,1,1])
print('array1:\n', array1)
array2 = np.ones((3,1))
print('\narray2:\n', array2)
"""
Explanation: 2.2. N-D numpy arrays
To correctly operate with numpy arrays we have to be aware of their dimensions. Are these two arrays equal?
End of explanation
"""
print('Shape of array1 :',array1.shape)
print('Number of dimensions of array1 :',array1.ndim)
print('Shape of array2 :',array2.shape)
print('Number of dimensions of array2 :',array2.ndim)
"""
Explanation: The answer is no. We can easily check this by examining their shapes and dimensions:
End of explanation
"""
x1 = np.arange(9).reshape((3, 3))
print('x1:\n', x1)
print('Its shape is: ', x1.shape)
print('\n Use the method flatten:')
print('x1.flatten(): ', x1.flatten())
print('Its shape is: ', x1.flatten().shape)
print('\n Use the method ravel:')
print('x1.ravel(): ', x1.ravel())
print('Its shape is:', x1.ravel().shape)
print('\n Use the method reshape:')
print('x1.reshape(-1): ', x1.reshape(-1))
print('Its shape is: ', x1.reshape(-1).shape)
# Note that here the method reshape is used to reorganize the array into a 1-D
# array. A more common use of reshape() is to simply redimension an array from
# shape (i, j) to shape (i', j') satisfying the condition i*j = i'*j'.
# For example:
print('\n A more common use of reshape():')
x1 = np.arange(12).reshape((4, 3))
print('x1:\n', x1)
print('Its shape is: ', x1.shape)
print('\nx1.reshape((2,6)):\n', x1.reshape((2,6)))
print('Its shape is: ', x1.reshape((2,6)).shape)
"""
Explanation: Effectively, array1 is a 1D array, whereas array2 is a 2D array. There are some methods that will let you modify the dimensions of an array. To go from a 2-D to 1-D array we have the methods flatten(), ravel() and reshape(). Check the result of the following code (you can use the help function to check the functionalities of each method).
End of explanation
"""
# Let's start with a 1-D array:
array1 = np.array([1,1,1])
print('1D array:\n',array1)
print('Its shape is: ', array1.shape)
# Let's turn it into a column vector (2-D array with dimension 3x1):
array2 = array1[:,np.newaxis]
print('\n2D array:\n',array2)
print('Its shape is: ', array2.shape)
# Let's turn it into a row vector (2-D array with dimension 1x3):
array3 = array1[np.newaxis,:]
print('\n2D array:\n',array3)
print('Its shape is: ', array3.shape)
"""
Explanation: Note: flatten() always returns a copy of the original vector, whereas ravel() and reshape() return a view of the original array whenever possible.
Sometimes we need to add a new dimension to an array, for example to turn a 1-D array into a 2-D column vector. For this we use np.newaxis.
End of explanation
"""
array1_1D = np.squeeze(array1)
print('1D array:\n',array1_1D)
print('Its shape is: ', array1_1D.shape)
array2_1D = np.squeeze(array2)
print('\n1D array:\n',array2_1D)
print('Its shape is: ', array2_1D.shape)
array3_1D = np.squeeze(array3)
print('\n1D array:\n',array3_1D)
print('Its shape is: ', array3_1D.shape)
"""
Explanation: We might also need to remove empty or unused dimensions. For this we have np.squeeze():
End of explanation
"""
# Given the following matrix and vector:
vect = np.arange(3)[:, np.newaxis]
mat = np.arange(9).reshape((3, 3))
# Apply the necessary transformation to vect so that you can perform the matrix
# multiplication np.matmul(vect, mat)
# vect = <FILL IN>
print(np.matmul(vect, mat))
"""
Explanation: Exercise 4: Complete the following exercise:
End of explanation
"""
x1 = np.arange(24).reshape((8, 3))
print(x1.shape)
print(np.mean(x1))
print(np.mean(x1,axis=0))
print(np.mean(x1,axis=1))
"""
Explanation: The output should be:
[15 18 21]
2.3. Numpy methods that can be carried out along different dimensions
Compare the result of the following commands:
End of explanation
"""
# Given the following list of heights:
heights = [1.60, 1.85, 1.68, 1.90, 1.78, 1.58, 1.62, 1.60, 1.70, 1.56]
# 1. Obtain a 2x5 array, called `h_array`, using the methods you learnt above.
# h_array = <FILL IN>
print('h_array: \n',h_array)
print('Its shape is: \n', h_array.shape)
# 2. Use method mean() to get the mean of each column, and the mean of each row.
# Store them in two vectors, named `mean_column` and `mean_row` respectively.
#mean_column = <FILL IN>
#mean_row = <FILL IN>
print('\nMean of each column: \n',mean_column)
print('Its shape is (it must coincide with number of columns):\n', mean_column.shape)
print('\nMean of each row: \n',mean_row)
print('Its shape is (it must coincide with number of rows):\n', mean_row.shape)
# 3. Obtain a 5x2 array by multiplying the mean vectors. You may need to create a
# new axis. The array name should be `new_array`
#new_array = <FILL IN>
print('\nNew array: \n',new_array)
print('New array shape: \n', new_array.shape)
"""
Explanation: Other numpy methods where you can specify the axis along which a certain operation should be carried out are:
np.median()
np.std()
np.var()
np.percentile()
np.sort()
np.argsort()
If the axis argument is not provided, the array is flattened before carrying out the corresponding operation.
Exercise 5: Complete the following exercises:
End of explanation
"""
my_array = np.array([[1, -1, 3, 3],[2, 2, 4, 6]])
print('Array 1:')
print(my_array)
print(my_array.shape)
my_array2 = np.ones((2,3))
print('Array 2:')
print(my_array2)
print(my_array2.shape)
# Vertically stack matrix my_array with itself
#ex1_res = <FILL IN>
print('Vertically stack:')
print(ex1_res)
# Horizontally stack matrix my_array and my_array2
#ex2_res = <FILL IN>
print('Horizontally stack:')
print(ex2_res)
# Transpose the vector `my_array`, and then stack a ones vector
# as the first column. Alternatively, you can stack a row, and then transpose.
# Just make sure that the final shape is (4,3). Name it `expanded`:
#ones_v = <FILL_IN>
ones_v = np.ones((4,1))
#expanded = <FILL IN>
print('Expanded array: \n',expanded)
print('Its shape is: \n', expanded.shape)
"""
Explanation: The output should be:
```
h_array:
[[1.6 1.85 1.68 1.9 1.78]
[1.58 1.62 1.6 1.7 1.56]]
Its shape is:
(2, 5)
Mean of each column:
[1.59 1.735 1.64 1.8 1.67 ]
Its shape is (it must coincide with number of columns):
(5,)
Mean of each row:
[1.762 1.612]
Its shape is (it must coincide with number of rows):
(2,)
New array:
[[2.80158 2.56308]
[3.05707 2.79682]
[2.88968 2.64368]
[3.1716 2.9016 ]
[2.94254 2.69204]]
New array shape:
(5, 2)
```
2.4. Concatenating arrays
Provided that the corresponding dimensions fit, horizontal and vertical stacking of matrices can be carried out with methods np.hstack() and np.vstack().
Exercise 6: Complete the following exercises to practice matrix concatenation:
End of explanation
"""
# Selecting specific elements from a vector:
vect = np.arange(10)+4
new_vect = vect[[1, 3, 7]] # remember that in python arrays start at 0
print('vect:\n', vect)
print('\nSelecting specific elements:\n', new_vect)
# Selecting a range of elements from a vector:
new_vect = vect[3:7]
print('\nSelecting a range of elements:\n', new_vect)
# Selecting a subarray from an array:
array = np.arange(12).reshape((3, 4))
new_array = array[2, 2:4]
print('\nArray:\n', array)
print('\nSelecting a subarray:\n', new_array)
"""
Explanation: The output should be:
Array 1:
[[ 1 -1 3 3]
[ 2 2 4 6]]
(2, 4)
Array 2:
[[1. 1. 1.]
[1. 1. 1.]]
(2, 3)
Vertically stack:
[[ 1 -1 3 3]
[ 2 2 4 6]
[ 1 -1 3 3]
[ 2 2 4 6]]
Horizontally stack:
[[ 1. -1. 3. 3. 1. 1. 1.]
[ 2. 2. 4. 6. 1. 1. 1.]]
Expanded array:
[[ 1. 1. 2.]
[ 1. -1. 2.]
[ 1. 3. 4.]
[ 1. 3. 6.]]
Its shape is:
(4, 3)
2.5. Slicing
In numpy, slicing means selecting and/or accessing specific array rows and columns.
Particular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along several different dimensions at once.
Let's look at some examples:
End of explanation
"""
X = np.arange(0,25).reshape((5,5))
print('X:\n',X)
# 1. Keep last row of matrix X
#X_sub1 = <FILL IN>
print('\nX_sub1: \n',X_sub1)
# 2. Keep first column of the three first rows of X
#X_sub2 = <FILL IN>
print('\nX_sub2: \n',X_sub2)
# 3. Keep first two columns of the three first rows of X
#X_sub3 = <FILL IN>
print('\nX_sub3: \n',X_sub3)
# 4. Invert the order of the rows of X
#X_sub4 = <FILL IN>
print('\nX_sub4: \n',X_sub4)
# 5. Keep odd columns (first, third...) of X
#X_sub5 = <FILL IN>
print('\nX_sub5: \n',X_sub5)
"""
Explanation: Exercise 7: Complete the following exercises:
End of explanation
"""
x = np.array([-3,-2,-1,0,1,2,3])
# Create a new vector `y` with the elements of x, but replacing by 0 each number whose
# absolute value is 2 or less.
y = np.copy(x) # CAUTION! Doing y = x will create two pointers to the same array.
condition = np.abs(x)<=2
y[np.where(condition)[0]]=0
# Note that np.where() returns a tuple of index arrays, one per dimension of
# the input; for a 1-D array the tuple has a single element. Read the
# np.where() docstring for more info.
print('Before conditioning: \n',x)
print('\nAfter conditioning: \n',y)
"""
Explanation: The output should be:
```
X:
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]]
X_sub1:
[20 21 22 23 24]
X_sub2:
[ 0 5 10]
X_sub3:
[[ 0 1]
[ 5 6]
[10 11]]
X_sub4:
[[20 21 22 23 24]
[15 16 17 18 19]
[10 11 12 13 14]
[ 5 6 7 8 9]
[ 0 1 2 3 4]]
X_sub5:
[[ 0 2 4]
[ 5 7 9]
[10 12 14]
[15 17 19]
[20 22 24]]
```
We have seen how slicing allows us to index over the different dimensions of a given array. In the previous examples we learned how to select the rows and columns we're interested in, but how can we select only the elements of an array that meet a specific condition?
Numpy provides us with the method np.where(condition). A common way of using this function is by setting a condition involving an array. For example, the condition x > 5 will give us the indexes in which x contains numbers higher than 5.
End of explanation
"""
#
x = np.arange(12).reshape((3, 4))
# Write your code here
# <SOL>
# </SOL>
print(x_new)
"""
Explanation: Exercise 8: Given array x below, select the subarray formed by the columns of x whose elements sum to more than 4:
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.0/examples/notebooks/generated/rolling_ls.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas_datareader as pdr
import seaborn
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS
seaborn.set_style("darkgrid")
pd.plotting.register_matplotlib_converters()
%matplotlib inline
"""
Explanation: Rolling Regression
Rolling OLS applies OLS across a fixed window of observations and then rolls
(moves or slides) the window across the data set. The key parameter is window,
which determines the number of observations used in each OLS regression. By
default, RollingOLS drops missing values in the window and so will estimate
the model using the available data points.
Estimated values are aligned so that models estimated using data points
$i+1, i+2, ... i+window$ are stored in location $i+window$.
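The window alignment can be sketched with a naive NumPy loop (the function name and toy data here are made up for illustration; this is not the statsmodels implementation):

```python
import numpy as np

def rolling_ols_params(y, X, window):
    # Naive rolling OLS: the estimate stored at row i uses observations
    # i - window + 1 .. i, so the first window - 1 rows stay nan-filled.
    params = np.full((len(y), X.shape[1]), np.nan)
    for i in range(window - 1, len(y)):
        sl = slice(i - window + 1, i + 1)
        params[i], *_ = np.linalg.lstsq(X[sl], y[sl], rcond=None)
    return params

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=100)
params = rolling_ols_params(y, X, window=60)
print(np.isnan(params[:59]).all())   # True: first window - 1 rows are nan
```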
Start by importing the modules that are used in this notebook.
End of explanation
"""
factors = pdr.get_data_famafrench("F-F_Research_Data_Factors", start="1-1-1926")[0]
factors.head()
industries = pdr.get_data_famafrench("10_Industry_Portfolios", start="1-1-1926")[0]
industries.head()
"""
Explanation: pandas-datareader is used to download data from
Ken French's website.
The two data sets downloaded are the 3 Fama-French factors and the 10 industry portfolios.
Data is available from 1926.
The data are monthly returns for the factors or industry portfolios.
End of explanation
"""
endog = industries.HiTec - factors.RF.values
exog = sm.add_constant(factors["Mkt-RF"])
rols = RollingOLS(endog, exog, window=60)
rres = rols.fit()
params = rres.params.copy()
params.index = np.arange(1, params.shape[0] + 1)
params.head()
params.iloc[57:62]
params.tail()
"""
Explanation: The first model estimated is a rolling version of the CAPM that regresses
the excess return of Technology sector firms on the excess return of the market.
The window is 60 months, and so results are available after the first 60 (window)
months. The first 59 (window - 1) estimates are all nan filled.
End of explanation
"""
fig = rres.plot_recursive_coefficient(variables=["Mkt-RF"], figsize=(14, 6))
"""
Explanation: We next plot the market loading along with a 95% point-wise confidence interval.
Setting alpha=False omits the constant column, if present.
End of explanation
"""
exog_vars = ["Mkt-RF", "SMB", "HML"]
exog = sm.add_constant(factors[exog_vars])
rols = RollingOLS(endog, exog, window=60)
rres = rols.fit()
fig = rres.plot_recursive_coefficient(variables=exog_vars, figsize=(14, 18))
"""
Explanation: Next, the model is expanded to include all three factors: the excess market, the size factor,
and the value factor.
End of explanation
"""
joined = pd.concat([factors, industries], axis=1)
joined["Mkt_RF"] = joined["Mkt-RF"]
mod = RollingOLS.from_formula("HiTec ~ Mkt_RF + SMB + HML", data=joined, window=60)
rres = mod.fit()
rres.params.tail()
"""
Explanation: Formulas
RollingOLS and RollingWLS both support model specification using the formula interface. The example below is equivalent to the 3-factor model estimated previously. Note that one variable is renamed to have a valid Python variable name.
End of explanation
"""
%timeit rols.fit()
%timeit rols.fit(params_only=True)
"""
Explanation: RollingWLS: Rolling Weighted Least Squares
The rolling module also provides RollingWLS which takes an optional weights input to perform rolling weighted least squares. It produces results that match WLS when applied to rolling windows of data.
Fit Options
Fit accepts other optional keywords to set the covariance estimator. Only two estimators are supported, 'nonrobust' (the classic OLS estimator) and 'HC0' which is White's heteroskedasticity robust estimator.
You can set params_only=True to only estimate the model parameters. This is substantially faster than computing the full set of values required to perform inference.
Finally, the parameter reset can be set to a positive integer to control estimation error in very long samples. RollingOLS avoids the full matrix product when rolling by only adding the most recent observation and removing the dropped observation as it rolls through the sample. Setting reset uses the full inner product every reset periods. In most applications this parameter can be omitted.
End of explanation
"""
res = RollingOLS(endog, exog, window=60, min_nobs=12, expanding=True).fit()
res.params.iloc[10:15]
res.nobs[10:15]
"""
Explanation: Expanding Sample
It is possible to expand the sample until sufficient observations are available for the full window length. In this example, we start once we have 12 observations available, and then increase the sample until we have 60 observations available. The first non-nan value is computed using 12 observations, the second 13, and so on. All other estimates are computed using 60 observations.
End of explanation
"""
|
aattaran/Machine-Learning-with-Python | Mini Project Student Admissions in Keras/imdb/Student_Admissions.ipynb | bsd-3-clause | import pandas as pd
data = pd.read_csv('student_data.csv')
data.head(5)
"""
Explanation: Predicting Student Admissions
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
Note: Thanks Adam Uccello, for helping us debug!
1. Load and visualize the data
To load the data, we will use a very useful data package called Pandas. You can read the Pandas documentation here:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
plot_points(data)
plt.show()
"""
Explanation: Let's plot the data and see how it looks.
End of explanation
"""
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
"""
Explanation: The data, based on only GRE and GPA scores, doesn't seem very separable. Maybe if we make a plot for each of the ranks, the boundaries will be more clear.
End of explanation
"""
import keras
from keras.utils import np_utils
# remove NaNs
data = data.fillna(0)
# One-hot encoding the rank
processed_data = pd.get_dummies(data, columns=['rank'])
# Normalizing the gre and the gpa scores to be in the interval (0,1)
processed_data["gre"] = processed_data["gre"]/800
processed_data["gpa"] = processed_data["gpa"]/4
# Splitting the data input into X, and the labels y
X = np.array(processed_data)[:,1:]
X = X.astype('float32')
y = keras.utils.to_categorical(data["admit"],2)
# Checking that the input and output look correct
print("Shape of X:", X.shape)
print("\nShape of y:", y.shape)
print("\nFirst 10 rows of X")
print(X[:10])
print("\nFirst 10 rows of y")
print(y[:10])
"""
Explanation: These plots look a bit more linearly separable, although not completely. But it seems that using a multi-layer perceptron with the rank, gre, and gpa as inputs, may give us a decent solution.
2. Process the data
We'll do the following steps to clean up the data for training:
- One-hot encode the rank
- Normalize the gre and the gpa scores, so they'll be in the interval (0,1)
- Split the data into the input X, and the labels y.
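The one-hot step can be sketched on a tiny made-up frame (hypothetical data, not the admissions file):

```python
import pandas as pd

toy = pd.DataFrame({"rank": [1, 2, 2, 4]})
dummies = pd.get_dummies(toy, columns=["rank"])
print(dummies.columns.tolist())   # one indicator column per observed rank value
```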
End of explanation
"""
# break training set into training and validation sets
(X_train, X_test) = X[50:], X[:50]
(y_train, y_test) = y[50:], y[:50]
# print shape of training set
print('x_train shape:', X_train.shape)
# print number of training, validation, and test images
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
"""
Explanation: 3. Split the data into training and testing sets
End of explanation
"""
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
# Note that filling out the empty rank as "0", gave us an extra column, for "Rank 0" students.
# Thus, our input dimension is 7 instead of 6.
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(7,)))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dropout(.1))
model.add(Dense(2, activation='softmax'))
# Compiling the model
model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
"""
Explanation: 4. Define the model architecture
End of explanation
"""
# Training the model
model.fit(X_train, y_train, epochs=200, batch_size=100, verbose=0)
"""
Explanation: 5. Train the model
End of explanation
"""
# Evaluating the model on the training and testing set
score = model.evaluate(X_train, y_train)
print("\n Training Accuracy:", score[1])
score = model.evaluate(X_test, y_test)
print("\n Testing Accuracy:", score[1])
"""
Explanation: 6. Score the model
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/e1c3654f77f904db443b548e9d93b8f9/50_decoding.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname)
raw.pick_types(meg='grad', stim=True, eog=True, exclude=())
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here low-pass at 20 so we can more heavily decimate, and allow
# the example to run faster. The 2 Hz high-pass helps improve CSP.
raw.load_data().filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=3,
verbose='error')
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
del raw
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2] # target: auditory left vs visual left
"""
Explanation: Decoding (MVPA)
.. include:: ../../links.inc
Design philosophy
Decoding (a.k.a. MVPA) in MNE largely follows the machine
learning API of the scikit-learn package.
Each estimator implements fit, transform, fit_transform, and
(optionally) inverse_transform methods. For more details on this design,
visit scikit-learn_. For additional theoretical insights into the decoding
framework in MNE, see :footcite:KingEtAl2018.
For ease of comprehension, we will denote instantiations of the class using
the same name as the class but in small caps instead of camel cases.
Let's start by loading data for a simple two-class problem:
sphinx_gallery_thumbnail_number = 6
End of explanation
"""
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(
Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
)
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
"""
Explanation: Transformation classes
Scaler
The :class:mne.decoding.Scaler will standardize the data based on channel
scales. In the simplest modes scalings=None or scalings=dict(...),
each data channel type (e.g., mag, grad, eeg) is treated separately and
scaled by a constant. This is the approach used by e.g.,
:func:mne.compute_covariance to standardize channel scales.
If scalings='mean' or scalings='median', each channel is scaled using
empirical measures. Each channel is scaled independently by the mean and
standand deviation, or median and interquartile range, respectively, across
all epochs and time points during :class:~mne.decoding.Scaler.fit
(during training). The :meth:~mne.decoding.Scaler.transform method is
called to transform data (training or test set) by scaling all time points
and epochs on a channel-by-channel basis. To perform both the fit and
transform operations in a single call, the
:meth:~mne.decoding.Scaler.fit_transform method may be used. To invert the
transform, :meth:~mne.decoding.Scaler.inverse_transform can be used. For
scalings='median', scikit-learn_ version 0.17+ is required.
<div class="alert alert-info"><h4>Note</h4><p>Using this class is different from directly applying
:class:`sklearn.preprocessing.StandardScaler` or
:class:`sklearn.preprocessing.RobustScaler` offered by
scikit-learn_. These scale each *classification feature*, e.g.
each time point for each channel, with mean and standard
deviation computed across epochs, whereas
:class:`mne.decoding.Scaler` scales each *channel* using mean and
standard deviation computed across all of its time points
and epochs.</p></div>
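The difference can be sketched in plain NumPy (an illustration of the axes involved, not MNE's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4, 50))            # epochs x channels x times

# Scaler(scalings='mean')-style: one mean/std per *channel*,
# computed across all epochs and time points
X_chan = (X - X.mean(axis=(0, 2), keepdims=True)) / X.std(axis=(0, 2), keepdims=True)

# sklearn StandardScaler-style: one mean/std per (channel, time) *feature*,
# computed across epochs only
X_feat = (X - X.mean(axis=0, keepdims=True)) / X.std(axis=0, keepdims=True)

print(X_chan.mean(axis=(0, 2)).round(6))    # ~0 for each channel
```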
Vectorizer
Scikit-learn API provides functionality to chain transformers and estimators
by using :class:sklearn.pipeline.Pipeline. We can construct decoding
pipelines and perform cross-validation and grid-search. However scikit-learn
transformers and estimators generally expect 2D data
(n_samples * n_features), whereas MNE transformers typically output data
with a higher dimensionality
(e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer
therefore needs to be applied between the MNE and the scikit-learn steps
like:
End of explanation
"""
csp = CSP(n_components=3, norm_trace=False)
clf_csp = make_pipeline(
csp,
LinearModel(LogisticRegression(solver='liblinear'))
)
scores = cross_val_multiscore(clf_csp, X, y, cv=5, n_jobs=1)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
"""
Explanation: PSDEstimator
The :class:mne.decoding.PSDEstimator
computes the power spectral density (PSD) using the multitaper
method. It takes a 3D array as input, converts it into 2D and computes the
PSD.
FilterEstimator
The :class:mne.decoding.FilterEstimator filters the 3D epochs data.
Spatial filters
Just like temporal filters, spatial filters provide weights to modify the
data along the sensor dimension. They are popular in the BCI community
because of their simplicity and ability to distinguish spatially-separated
neural activity.
Common spatial pattern
:class:mne.decoding.CSP is a technique to analyze multichannel data based
on recordings from two classes :footcite:Koles1991 (see also
https://en.wikipedia.org/wiki/Common_spatial_pattern).
Let $X \in R^{C\times T}$ be a segment of data with
$C$ channels and $T$ time points. The data at a single time point
is denoted by $x(t)$ such that $X=[x(t), x(t+1), ..., x(t+T-1)]$.
Common spatial pattern (CSP) finds a decomposition that projects the signal
in the original sensor space to CSP space using the following transformation:
\begin{align}x_{CSP}(t) = W^{T}x(t)
:label: csp\end{align}
where each column of $W \in R^{C\times C}$ is a spatial filter and each
row of $x_{CSP}$ is a CSP component. The matrix $W$ is also
called the de-mixing matrix in other contexts. Let
$\Sigma^{+} \in R^{C\times C}$ and $\Sigma^{-} \in R^{C\times C}$
be the estimates of the covariance matrices of the two conditions.
CSP analysis is given by the simultaneous diagonalization of the two
covariance matrices
\begin{align}W^{T}\Sigma^{+}W = \lambda^{+}
:label: diagonalize_p\end{align}
\begin{align}W^{T}\Sigma^{-}W = \lambda^{-}
:label: diagonalize_n\end{align}
where $\lambda^{C}$ is a diagonal matrix whose entries are the
eigenvalues of the following generalized eigenvalue problem
\begin{align}\Sigma^{+}w = \lambda \Sigma^{-}w
:label: eigen_problem\end{align}
Large entries in the diagonal matrix correspond to spatial filters that
give high variance in one class but low variance in the other; such
filters facilitate discrimination between the two classes.
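The generalized eigenvalue problem above can be checked numerically with SciPy (the toy covariance matrices are made up for this sketch):

```python
import numpy as np
from scipy.linalg import eigh

Sigma_pos = np.array([[2.0, 0.3], [0.3, 0.5]])   # condition "+" covariance
Sigma_neg = np.array([[0.5, 0.1], [0.1, 2.0]])   # condition "-" covariance

# solve Sigma_pos w = lambda * Sigma_neg w
evals, W = eigh(Sigma_pos, Sigma_neg)
w = W[:, -1]                                      # filter with the largest eigenvalue
print(np.allclose(Sigma_pos @ w, evals[-1] * (Sigma_neg @ w)))  # True
```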
.. topic:: Examples
* `ex-decoding-csp-eeg`
* `ex-decoding-csp-eeg-timefreq`
<div class="alert alert-info"><h4>Note</h4><p>The winning entry of the Grasp-and-lift EEG competition in Kaggle used
the :class:`~mne.decoding.CSP` implementation in MNE and was featured as
a [script of the week](sotw_).</p></div>
We can use CSP with these data with:
End of explanation
"""
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
"""
Explanation: Source power comodulation (SPoC)
Source Power Comodulation (:class:mne.decoding.SPoC)
:footcite:DahneEtAl2014 identifies the composition of
orthogonal spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of the CSP where the target is driven by a
continuous variable rather than a discrete variable. Typical applications
include extraction of motor patterns using EMG power or audio patterns using
sound envelope.
.. topic:: Examples
* `ex-spoc-cmc`
xDAWN
:class:mne.preprocessing.Xdawn is a spatial filtering method designed to
improve the signal to signal + noise ratio (SSNR) of the ERP responses
:footcite:RivetEtAl2009. Xdawn was originally
designed for P300 evoked potential by enhancing the target response with
respect to the non-target response. The implementation in MNE-Python is a
generalization to any type of ERP.
.. topic:: Examples
* `ex-xdawn-denoising`
* `ex-xdawn-decoding`
Effect-matched spatial filtering
The result of :class:mne.decoding.EMS is a spatial filter at each time
point and a corresponding time course :footcite:SchurgerEtAl2013.
Intuitively, the result gives the similarity between the filter at
each time point and the data vector (sensors) at that time point.
.. topic:: Examples
* `ex-ems-filtering`
Patterns vs. filters
When interpreting the components of the CSP (or spatial filters in general),
it is often more intuitive to think about how $x(t)$ is composed of
the different CSP components $x_{CSP}(t)$. In other words, we can
rewrite Equation :eq:csp as follows:
\begin{align}x(t) = (W^{-1})^{T}x_{CSP}(t)
:label: patterns\end{align}
The columns of the matrix $(W^{-1})^T$ are called spatial patterns.
This is also called the mixing matrix. The example ex-linear-patterns
discusses the difference between patterns and filters.
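A tiny numerical sketch of the round trip between filters and patterns (random matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))            # columns are spatial filters
patterns = np.linalg.inv(W).T          # columns of (W^{-1})^T are spatial patterns

x = rng.normal(size=3)                 # a sensor-space sample
x_csp = W.T @ x                        # filters: sensors -> components
x_back = patterns @ x_csp              # patterns: components -> sensors
print(np.allclose(x_back, x))          # True
```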
These can be plotted with:
End of explanation
"""
# We will train the classifier on all left visual vs auditory trials on MEG
clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver='liblinear')
)
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
# here we use cv=3 just for speed
scores = cross_val_multiscore(time_decod, X, y, cv=3, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
"""
Explanation: Decoding over time
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new
epochs. The :class:mne.decoding.SlidingEstimator will take as input a
pair of features $X$ and targets $y$, where $X$ has
more than 2 dimensions. For decoding over time the data $X$
is the epochs data of shape n_epochs × n_channels × n_times. As the
last dimension of $X$ is the time, an estimator will be fit
on every time instant.
This approach is analogous to SlidingEstimator-based approaches in fMRI,
where here we are interested in when one can discriminate experimental
conditions and therefore figure out when the effect of interest happens.
When working with linear models as estimators, this approach boils
down to estimating a discriminative spatial filter for each time instant.
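A from-scratch sketch of the sliding idea on synthetic data (assumes scikit-learn; MNE's SlidingEstimator does this, plus scoring options and parallelism, for you):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5, 20))       # epochs x channels x times
y = rng.integers(0, 2, size=60)
X[y == 1, 0, 10] += 3.0                # class difference only at time index 10

# fit and score one classifier per time instant
scores = np.array([
    cross_val_score(LogisticRegression(), X[:, :, t], y, cv=3).mean()
    for t in range(X.shape[-1])
])
print(scores.argmax())                 # the informative time point should score highest
```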
Temporal decoding
We'll use a Logistic Regression for a binary classification as machine
learning model.
End of explanation
"""
clf = make_pipeline(
StandardScaler(),
LinearModel(LogisticRegression(solver='liblinear'))
)
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked_time_gen = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked_time_gen.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
"""
Explanation: You can retrieve the spatial filters and spatial patterns if you explicitly
use a LinearModel
End of explanation
"""
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',
verbose=True)
# again, cv=3 just for speed
scores = cross_val_multiscore(time_gen, X, y, cv=3, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
"""
Explanation: Temporal generalization
Temporal generalization is an extension of the decoding over time approach.
It consists in evaluating whether the model estimated at a particular
time instant accurately predicts any other time instant. It is analogous to
transferring a trained model to a distinct learning problem, where the
problems correspond to decoding the patterns of brain activity recorded at
distinct time instants.
The object for temporal generalization is
:class:mne.decoding.GeneralizingEstimator. It expects as input $X$
and $y$ (similarly to :class:~mne.decoding.SlidingEstimator) but
generates predictions from each model for all time instants. The class
:class:~mne.decoding.GeneralizingEstimator is generic and will treat the
last dimension as the one to be used for generalization testing. For
convenience, here, we refer to it as different tasks. If $X$
corresponds to epochs data then the last dimension is time.
This runs the analysis used in :footcite:KingEtAl2014 and further detailed
in :footcite:KingDehaene2014:
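Conceptually, the generalization matrix is a nested version of a per-time-point loop, as in this self-contained sketch on synthetic data (a real analysis would cross-validate rather than score in-sample):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5, 12))       # epochs x channels x times
y = rng.integers(0, 2, size=60)
X[y == 1, 0, :] += 1.5                 # a sustained class difference

n_times = X.shape[-1]
scores = np.empty((n_times, n_times))  # rows: training time, cols: testing time
for t_train in range(n_times):
    clf = LogisticRegression().fit(X[:, :, t_train], y)
    for t_test in range(n_times):
        scores[t_train, t_test] = clf.score(X[:, :, t_test], y)
print(scores.shape)                    # (12, 12)
```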
End of explanation
"""
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
cbar = plt.colorbar(im, ax=ax)
cbar.set_label('AUC')
"""
Explanation: Plot the full (generalization) matrix:
End of explanation
"""
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs
fwd = mne.read_forward_solution(
meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif')
inv = mne.minimum_norm.make_inverse_operator(
evoked_time_gen.info, fwd, cov, loose=0.)
stc = mne.minimum_norm.apply_inverse(evoked_time_gen, inv, 1. / 9., 'dSPM')
del fwd, inv
"""
Explanation: Projecting sensor-space patterns to source space
If you use a linear classifier (or regressor) for your data, you can also
project these to source space. For example, using our evoked_time_gen
from before:
End of explanation
"""
brain = stc.plot(hemi='split', views=('lat', 'med'), initial_time=0.1,
subjects_dir=subjects_dir)
"""
Explanation: And this can be visualized using :meth:stc.plot <mne.SourceEstimate.plot>:
End of explanation
"""
|
HazyResearch/snorkel | tutorials/advanced/Structure_Learning.ipynb | apache-2.0 | from snorkel.learning import GenerativeModelWeights
from snorkel.learning.structure import generate_label_matrix
weights = GenerativeModelWeights(10)
for i in range(10):
weights.lf_accuracy[i] = 1.0
weights.dep_similar[0, 1] = 0.5
weights.dep_similar[2, 3] = 0.5
y, L = generate_label_matrix(weights, 10000)
"""
Explanation: Learning the Structure of Generative Models
In this notebook, we'll use structure learning to find the dependency structure of a generative model. You can do this for any label matrix!
See the blog post or the paper for more details.
Generating Some Data
We'll generate some data from a known model of noisy labels in which two pairs of labeling functions are correlated.
End of explanation
"""
from snorkel.learning.structure import DependencySelector
ds = DependencySelector()
deps = ds.select(L, threshold=0.05)
print(deps)
assert deps == set([(0, 1, 0), (2, 3, 0)])
"""
Explanation: Structure Learning
L is the label matrix produced by a LabelManager.
A few notes:
* The deps object is a collection of tuples specifying which labeling functions are related by which types of dependencies.
* The keyword argument threshold is a positive float that indicates how strong the dependency has to be for it to be returned in the collection. Too many dependencies? Turn it up. Too few? Turn it down.
* By default, the DependencySelector looks for pairwise correlations between labeling functions. Pass the keyword argument higher_order=True to the select method to also look for reinforcing and fixing dependencies (described in the data programming paper).
End of explanation
"""
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel()
gen_model.train(L, deps=deps)
print(gen_model.weights.lf_accuracy)
"""
Explanation: Using the Learned Structure
To incorporate the selected dependencies into your generative model, just pass them in as a keyword argument:
End of explanation
"""
|
NathanYee/ThinkBayes2 | code/blaster.ipynb | gpl-2.0 | from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Hist, Pmf, Cdf, Suite, Beta
import thinkplot
"""
Explanation: The Alien Blaster problem
This notebook presents solutions to exercises in Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
prior = Beta(2, 3)
thinkplot.Pdf(prior.MakePmf())
prior.Mean()
"""
Explanation: Part One
In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, $x$.
Based on previous tests, the distribution of $x$ in the population of designs is well-modeled by a beta distribution with parameters $\alpha=2$ and $\beta=3$. What is the average missile's probability of shooting down an alien?
End of explanation
"""
posterior = Beta(2, 3)  # start from the Beta(2, 3) prior
posterior.Update((2, 8))
posterior.MAP()
"""
Explanation: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP?
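For reference, the conjugate update is easy to check by hand (plain arithmetic, independent of thinkbayes2):

```python
a, b = 2, 3                                   # prior Beta(2, 3)
hits, misses = 2, 8
a_post, b_post = a + hits, b + misses         # posterior Beta(4, 11)
map_x = (a_post - 1) / (a_post + b_post - 2)  # mode of Beta(a, b): (a-1)/(a+b-2)
print(a_post, b_post, round(map_x, 3))        # 4 11 0.231
```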
End of explanation
"""
from scipy import stats
class AlienBlaster(Suite):
def Likelihood(self, data, hypo):
"""Computes the likeliood of data under hypo.
data: number of shots they took
hypo: probability of a hit, p
"""
n = data
x = hypo
# specific version for n=2 shots
likes = [x**4, (1-x)**4, (2*x*(1-x))**2]
# general version for any n shots
likes = [stats.binom.pmf(k, n, x)**2 for k in range(n+1)]
return np.sum(likes)
"""
Explanation: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
Write a class called AlienBlaster that inherits from Suite and provides a likelihood function that takes this data -- two shots and a tie -- and computes the likelihood of the data for each hypothetical value of $x$. If you would like a challenge, write a version that works for any number of shots.
End of explanation
"""
pmf = Beta(1, 1).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
"""
Explanation: If we start with a uniform prior, we can see what the likelihood function looks like:
End of explanation
"""
pmf = Beta(2, 3).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
"""
Explanation: A tie is most likely if they are both terrible shots or both very good.
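The U shape is easy to verify directly with the same sum of squared binomial terms the Likelihood method computes (a quick check outside the Suite machinery):

```python
from scipy.stats import binom

def p_tie(x, n=2):
    # probability that both blasters hit the same number of targets in n shots
    return sum(binom.pmf(k, n, x) ** 2 for k in range(n + 1))

for x in (0.1, 0.5, 0.9):
    print(x, round(p_tie(x), 3))   # ties are likelier at the extremes
```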
Is this data good or bad; that is, does it increase or decrease your estimate of $x$ for the Alien Blaster 10K?
Now let's run it with the specified prior and see what happens when we multiply the concave prior by the convex likelihood:
End of explanation
"""
prior.Mean(), blaster.Mean()
prior.MAP(), blaster.MAP()
"""
Explanation: The posterior mean and MAP are lower than in the prior.
End of explanation
"""
k = 3
n = 10
x1 = 0.3
x2 = 0.4
0.3 * stats.binom.pmf(k, n, x1) + 0.7 * stats.binom.pmf(k, n, x2)
"""
Explanation: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case).
Part Two
Suppose we have a stockpile of 3 Alien
Blaster 10Ks. After extensive testing, we have concluded that
the AB9000 hits the target 30% of the time, precisely, and the
AB10K hits the target 40% of the time.
If I grab a random weapon from the stockpile and shoot at 10 targets,
what is the probability of hitting exactly 3? Again, you can write a
number, mathematical expression, or Python code.
End of explanation
"""
def flip(p):
return np.random.random() < p
def simulate_shots(n, p):
return np.random.binomial(n, p)
ks = []
for i in range(1000):
if flip(0.3):
k = simulate_shots(n, x1)
else:
k = simulate_shots(n, x2)
ks.append(k)
"""
Explanation: The answer is a value drawn from the mixture of the two distributions.
Continuing the previous problem, let's estimate the distribution
of k, the number of successful shots out of 10.
Write a few lines of Python code to simulate choosing a random weapon and firing it.
Write a loop that simulates the scenario and generates random values of k 1000 times.
Store the values of k you generate and plot their distribution.
End of explanation
"""
pmf = Pmf(ks)
thinkplot.Hist(pmf)
len(ks), np.mean(ks)
"""
Explanation: Here's what the distribution looks like.
End of explanation
"""
xs = np.random.choice(a=[x1, x2], p=[0.3, 0.7], size=1000)
Hist(xs)
"""
Explanation: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs:
End of explanation
"""
ks = np.random.binomial(n, xs)
"""
Explanation: Then for each x we generate a k:
End of explanation
"""
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
"""
Explanation: And the results look similar.
End of explanation
"""
from thinkbayes2 import MakeBinomialPmf
pmf1 = MakeBinomialPmf(n, x1)
pmf2 = MakeBinomialPmf(n, x2)
metapmf = Pmf({pmf1:0.3, pmf2:0.7})
metapmf.Print()
"""
Explanation: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects:
End of explanation
"""
ks = [metapmf.Random().Random() for _ in range(1000)]
"""
Explanation: Here's how we can draw samples from the meta-Pmf:
End of explanation
"""
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
"""
Explanation: And here are the results, one more time:
End of explanation
"""
from thinkbayes2 import MakeMixture
mix = MakeMixture(metapmf)
thinkplot.Hist(mix)
mix.Mean()
"""
Explanation: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x.
We can compute the mixture analytically using thinkbayes2.MakeMixture:
```python
def MakeMixture(metapmf, label='mix'):
    """Make a mixture distribution.

    Args:
      metapmf: Pmf that maps from Pmfs to probs.
      label: string label for the new Pmf.

    Returns: Pmf object.
    """
    mix = Pmf(label=label)
    for pmf, p1 in metapmf.Items():
        for k, p2 in pmf.Items():
            mix[k] += p1 * p2
    return mix
```
The outer loop iterates through the Pmfs; the inner loop iterates through the items.
So p1 is the probability of choosing a particular Pmf; p2 is the probability of choosing a value from the Pmf.
In the example, each Pmf is associated with a value of x (probability of hitting a target). The inner loop enumerates the values of k (number of targets hit after 10 shots).
End of explanation
"""
from thinkbayes2 import Beta
beta = Beta(2, 3).MakePmf()
metapmf = Pmf()
thinkplot.hist(beta)
for x, prob in beta.Items():
    nested_pmf = MakeBinomialPmf(n, x)
    metapmf[nested_pmf] = prob
mix = MakeMixture(metapmf)
thinkplot.hist(mix)
mix.Mean()
beta.Mean()
"""
Explanation: Exercise: Assuming again that the distribution of x in the population of designs is well-modeled by a beta distribution with parameters α=2 and β=3, what is the distribution of k if I choose a random Alien Blaster and fire 10 shots?
End of explanation
"""
faneshion/MatchZoo | tutorials/model_tuning.ipynb | apache-2.0 | import matchzoo as mz
train_raw = mz.datasets.toy.load_data('train')
dev_raw = mz.datasets.toy.load_data('dev')
test_raw = mz.datasets.toy.load_data('test')
"""
Explanation: Model Tuning
End of explanation
"""
preprocessor = mz.models.DenseBaseline.get_default_preprocessor()
train = preprocessor.fit_transform(train_raw, verbose=0)
dev = preprocessor.transform(dev_raw, verbose=0)
test = preprocessor.transform(test_raw, verbose=0)
"""
Explanation: basic usage
A couple of things are needed by the tuner:
- a model with its parameters filled
- preprocessed training data
- preprocessed testing data
Since MatchZoo models have pre-defined hyper-spaces, the tuner can start tuning right away once you have the data ready.
prepare the data
End of explanation
"""
model = mz.models.DenseBaseline()
model.params['input_shapes'] = preprocessor.context['input_shapes']
model.params['task'] = mz.tasks.Ranking()
"""
Explanation: prepare the model
End of explanation
"""
tuner = mz.auto.Tuner(
    params=model.params,
    train_data=train,
    test_data=dev,
    num_runs=5
)
results = tuner.tune()
"""
Explanation: start tuning
End of explanation
"""
results['best']
results['best']['params'].to_frame()
"""
Explanation: view the best hyper-parameter set
End of explanation
"""
model.params.hyper_space
"""
Explanation: understanding hyper-space
model.params.hyper_space represents the model's hyper-parameter search space, which is the cross-product of the individual hyper-parameters' hyper-spaces. When a Tuner builds a model, for each hyper-parameter in model.params, if the hyper-parameter has a hyper-space, then a sample will be taken from that space. However, if the hyper-parameter does not have a hyper-space, then its default value will be used.
End of explanation
"""
def sample_and_build(params):
    sample = mz.hyper_spaces.sample(params.hyper_space)
    print('if sampled:', sample, '\n')
    params.update(sample)
    print('the built model will have:\n')
    print(params, '\n\n\n')

for _ in range(3):
    sample_and_build(model.params)
"""
Explanation: In a DenseBaseline model, only mlp_num_units, mlp_num_layers, and mlp_num_fan_out have pre-defined hyper-spaces. In other words, only these hyper-parameters will change values during a tuning. Other hyper-parameters, like mlp_activation_func, are fixed and will not change.
End of explanation
"""
print(model.params.get('mlp_num_units').hyper_space)
model.params.to_frame()[['Name', 'Hyper-Space']]
"""
Explanation: This is similar to the process of a tuner sampling model hyper-parameters, but with one key difference: a tuner's hyper-space is suggestive. This means the sampling process in a tuner is not truly random but skewed: scores of past samples affect future choices, so a tuner with more runs knows its hyper-space better and takes samples in a way that will likely yield better scores.
For more details, consult the tuner's backend, hyperopt, and the search algorithm the tuner uses: the Tree-structured Parzen Estimator (TPE).
Hyper-spaces can also be represented in a human-readable format.
End of explanation
"""
model.params.get('optimizer').hyper_space = mz.hyper_spaces.choice(['adam', 'adagrad', 'rmsprop'])
for _ in range(10):
    print(mz.hyper_spaces.sample(model.params.hyper_space))
"""
Explanation: setting hyper-space
What if I want the tuner to choose the optimizer among adam, adagrad, and rmsprop?
End of explanation
"""
model.params['mlp_num_layers'] = 2
model.params.get('mlp_num_layers').hyper_space = None
for _ in range(10):
    print(mz.hyper_spaces.sample(model.params.hyper_space))
"""
Explanation: What about setting mlp_num_layers to a fixed value of 2?
End of explanation
"""
tuner.num_runs = 2
tuner.callbacks.append(mz.auto.tuner.callbacks.SaveModel())
results = tuner.tune()
"""
Explanation: using callbacks
To save the model during the tuning process, use mz.auto.tuner.callbacks.SaveModel.
End of explanation
"""
best_model_id = results['best']['model_id']
mz.load_model(mz.USER_TUNED_MODELS_DIR.joinpath(best_model_id))
"""
Explanation: This will save all built models to your mz.USER_TUNED_MODELS_DIR, and can be loaded by:
End of explanation
"""
toy_embedding = mz.datasets.toy.load_embedding()
preprocessor = mz.models.DUET.get_default_preprocessor()
train = preprocessor.fit_transform(train_raw, verbose=0)
dev = preprocessor.transform(dev_raw, verbose=0)
params = mz.models.DUET.get_default_params()
params['task'] = mz.tasks.Ranking()
params.update(preprocessor.context)
params['embedding_output_dim'] = toy_embedding.output_dim
embedding_matrix = toy_embedding.build_matrix(preprocessor.context['vocab_unit'].state['term_index'])
load_embedding_matrix_callback = mz.auto.tuner.callbacks.LoadEmbeddingMatrix(embedding_matrix)
tuner = mz.auto.tuner.Tuner(
    params=params,
    train_data=train,
    test_data=dev,
    num_runs=1
)
tuner.callbacks.append(load_embedding_matrix_callback)
results = tuner.tune()
"""
Explanation: To load a pre-trained embedding layer into a built model during a tuning process, use mz.auto.tuner.callbacks.LoadEmbeddingMatrix.
End of explanation
"""
import numpy as np
class ValidateEmbedding(mz.auto.tuner.callbacks.Callback):
    def __init__(self, embedding_matrix):
        self._matrix = embedding_matrix

    def on_build_end(self, tuner, model):
        loaded_matrix = model.get_embedding_layer().get_weights()[0]
        if np.isclose(self._matrix, loaded_matrix).all():
            print("Yes! My embedding is correctly loaded!")
validate_embedding_matrix_callback = ValidateEmbedding(embedding_matrix)
tuner = mz.auto.tuner.Tuner(
    params=params,
    train_data=train,
    test_data=dev,
    num_runs=1,
    callbacks=[load_embedding_matrix_callback, validate_embedding_matrix_callback]
)
tuner.callbacks.append(load_embedding_matrix_callback)
results = tuner.tune()
"""
Explanation: make your own callbacks
To build your own callbacks, inherit mz.auto.tuner.callbacks.Callback and override the corresponding methods.
A run proceeds in the following way:
run start (callback)
build model
build end (callback)
fit and evaluate model
collect result
run end (callback)
This process is repeated for num_runs times in a tuner.
For example, say I want to verify if my embedding matrix is correctly loaded.
End of explanation
"""
infilect/ml-course1 | keras-notebooks/RNN/6.3-advanced-usage-of-recurrent-neural-networks.ipynb | mit | import os
data_dir = '/home/ubuntu/data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
print(header)
print(len(lines))
"""
Explanation: Advanced usage of recurrent neural networks
This notebook contains the code samples found in Chapter 6, Section 3 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
In this section, we will review three advanced techniques for improving the performance and generalization power of recurrent neural
networks. By the end of the section, you will know most of what there is to know about using recurrent networks with Keras. We will
demonstrate all three concepts on a weather forecasting problem, where we have access to a timeseries of data points coming from sensors
installed on the roof of a building, such as temperature, air pressure, and humidity, which we use to predict what the temperature will be
24 hours after the last data point collected. This is a fairly challenging problem that exemplifies many common difficulties encountered
when working with timeseries.
We will cover the following techniques:
Recurrent dropout, a specific, built-in way to use dropout to fight overfitting in recurrent layers.
Stacking recurrent layers, to increase the representational power of the network (at the cost of higher computational loads).
Bidirectional recurrent layers, which presents the same information to a recurrent network in different ways, increasing accuracy and
mitigating forgetting issues.
A temperature forecasting problem
Until now, the only sequence data we have covered has been text data, for instance the IMDB dataset and the Reuters dataset. But sequence
data is found in many more problems than just language processing. In all of our examples in this section, we will be playing with a weather
timeseries dataset recorded at the Weather Station at the Max-Planck-Institute for Biogeochemistry in Jena, Germany: http://www.bgc-jena.mpg.de/wetter/.
In this dataset, fourteen different quantities (such as air temperature, atmospheric pressure, humidity, wind direction, etc.) are recorded
every ten minutes, over several years. The original data goes back to 2003, but we limit ourselves to data from 2009-2016. This dataset is
perfect for learning to work with numerical timeseries. We will use it to build a model that takes as input some data from the recent past (a
few days worth of data points) and predicts the air temperature 24 hours in the future.
Let's take a look at the data:
End of explanation
"""
import numpy as np
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
    values = [float(x) for x in line.split(',')[1:]]
    float_data[i, :] = values
"""
Explanation: Let's convert all of these 420,551 lines of data into a Numpy array:
End of explanation
"""
from matplotlib import pyplot as plt
temp = float_data[:, 1] # temperature (in degrees Celsius)
plt.plot(range(len(temp)), temp)
plt.show()
"""
Explanation: For instance, here is the plot of temperature (in degrees Celsius) over time:
End of explanation
"""
plt.plot(range(1440), temp[:1440])
plt.show()
"""
Explanation: On this plot, you can clearly see the yearly periodicity of temperature.
Here is a narrower plot of the first ten days of temperature data (since the data is recorded every ten minutes, we get 144 data points
per day):
End of explanation
"""
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
"""
Explanation: On this plot, you can see daily periodicity, especially evident for the last 4 days. We can also note that this ten-day period must be
coming from a fairly cold winter month.
If we were trying to predict average temperature for the next month given a few month of past data, the problem would be easy, due to the
reliable year-scale periodicity of the data. But looking at the data over a scale of days, the temperature looks a lot more chaotic. So is
this timeseries predictable at a daily scale? Let's find out.
Preparing the data
The exact formulation of our problem will be the following: given data going as far back as lookback timesteps (a timestep is 10 minutes)
and sampled every step timesteps, can we predict the temperature in delay timesteps?
We will use the following parameter values:
lookback = 1440, i.e. our observations will go back 10 days.
step = 6, i.e. our observations will be sampled at one data point per hour.
delay = 144, i.e. our targets will be 24 hours in the future.
To get started, we need to do two things:
Preprocess the data to a format a neural network can ingest. This is easy: the data is already numerical, so we don't need to do any
vectorization. However each timeseries in the data is on a different scale (e.g. temperature is typically between -20 and +30, but
pressure, measured in mbar, is around 1000). So we will normalize each timeseries independently so that they all take small values on a
similar scale.
Write a Python generator that takes our current array of float data and yields batches of data from the recent past, alongside with a
target temperature in the future. Since the samples in our dataset are highly redundant (e.g. sample N and sample N + 1 will have most
of their timesteps in common), it would be very wasteful to explicitly allocate every sample. Instead, we will generate the samples on the
fly using the original data.
We preprocess the data by subtracting the mean of each timeseries and dividing by the standard deviation. We plan on using the first
200,000 timesteps as training data, so we compute the mean and standard deviation only on this fraction of the data:
End of explanation
"""
def generator(data, lookback, delay, min_index, max_index,
              shuffle=False, batch_size=128, step=6):
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    while 1:
        if shuffle:
            rows = np.random.randint(
                min_index + lookback, max_index, size=batch_size)
        else:
            if i + batch_size >= max_index:
                i = min_index + lookback
            rows = np.arange(i, min(i + batch_size, max_index))
            i += len(rows)

        samples = np.zeros((len(rows),
                            lookback // step,
                            data.shape[-1]))
        targets = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            indices = range(rows[j] - lookback, rows[j], step)
            samples[j] = data[indices]
            targets[j] = data[rows[j] + delay][1]
        yield samples, targets
"""
Explanation: Now here is the data generator that we will use. It yields a tuple (samples, targets) where samples is one batch of input data and
targets is the corresponding array of target temperatures. It takes the following arguments:
data: The original array of floating point data, which we just normalized in the code snippet above.
lookback: How many timesteps back should our input data go.
delay: How many timesteps in the future should our target be.
min_index and max_index: Indices in the data array that delimit which timesteps to draw from. This is useful for keeping a segment
of the data for validation and another one for testing.
shuffle: Whether to shuffle our samples or draw them in chronological order.
batch_size: The number of samples per batch.
step: The period, in timesteps, at which we sample data. We will set it 6 in order to draw one data point every hour.
End of explanation
"""
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
                      lookback=lookback,
                      delay=delay,
                      min_index=0,
                      max_index=200000,
                      shuffle=True,
                      step=step,
                      batch_size=batch_size)
val_gen = generator(float_data,
                    lookback=lookback,
                    delay=delay,
                    min_index=200001,
                    max_index=300000,
                    step=step,
                    batch_size=batch_size)
test_gen = generator(float_data,
                     lookback=lookback,
                     delay=delay,
                     min_index=300001,
                     max_index=None,
                     step=step,
                     batch_size=batch_size)

# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size

# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
"""
Explanation: Now let's use our abstract generator function to instantiate three generators, one for training, one for validation and one for testing.
Each will look at different temporal segments of the original data: the training generator looks at the first 200,000 timesteps, the
validation generator looks at the following 100,000, and the test generator looks at the remainder.
End of explanation
"""
np.mean(np.abs(preds - targets))
"""
Explanation: A common sense, non-machine learning baseline
Before we start leveraging black-box deep learning models to solve our temperature prediction problem, let's try out a simple common-sense
approach. It will serve as a sanity check, and it will establish a baseline that we will have to beat in order to demonstrate the
usefulness of more advanced machine learning models. Such common-sense baselines can be very useful when approaching a new problem for
which there is no known solution (yet). A classic example is that of unbalanced classification tasks, where some classes can be much more
common than others. If your dataset contains 90% of instances of class A and 10% of instances of class B, then a common sense approach to
the classification task would be to always predict "A" when presented with a new sample. Such a classifier would be 90% accurate overall,
and any learning-based approach should therefore beat this 90% score in order to demonstrate usefulness. Sometimes such an elementary
baseline can prove surprisingly hard to beat.
In our case, the temperature timeseries can safely be assumed to be continuous (the temperatures tomorrow are likely to be close to the
temperatures today) as well as periodical with a daily period. Thus a common sense approach would be to always predict that the temperature
24 hours from now will be equal to the temperature right now. Let's evaluate this approach, using the Mean Absolute Error metric (MAE).
Mean Absolute Error is simply equal to:
End of explanation
"""
def evaluate_naive_method():
    batch_maes = []
    for step in range(val_steps):
        samples, targets = next(val_gen)
        preds = samples[:, -1, 1]
        mae = np.mean(np.abs(preds - targets))
        batch_maes.append(mae)
    print(np.mean(batch_maes))

evaluate_naive_method()
"""
Explanation: Here's our evaluation loop:
End of explanation
"""
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Flatten(input_shape=(lookback // step, float_data.shape[-1])))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=val_steps)
"""
Explanation: It yields an MAE of 0.29. Since our temperature data has been normalized to be centered on 0 and have a standard deviation of one, this
number is not immediately interpretable. It translates to an average absolute error of 0.29 * temperature_std degrees Celsius, i.e.
2.57˚C. That's a fairly large average absolute error -- now the game is to leverage our knowledge of deep learning to do better.
A basic machine learning approach
In the same way that it is useful to establish a common sense baseline before trying machine learning approaches, it is useful to try
simple and cheap machine learning models (such as small densely-connected networks) before looking into complicated and computationally
expensive models such as RNNs. This is the best way to make sure that any further complexity we throw at the problem later on is legitimate
and delivers real benefits.
Here is a simple fully-connected model in which we start by flattening the data, then run it through two Dense layers. Note the lack of
activation function on the last Dense layer, which is typical for a regression problem. We use MAE as the loss. Since we are evaluating
on the exact same data and with the exact same metric as with our common sense approach, the results will be directly comparable.
End of explanation
"""
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Let's display the loss curves for validation and training:
End of explanation
"""
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32, input_shape=(None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=val_steps)
"""
Explanation: Some of our validation losses get close to the no-learning baseline, but not very reliably. This goes to show the merit of having had this baseline in the first place: it turns out not to be so easy to outperform. Our
common sense already contains a lot of valuable information that a machine learning model does not have access to.
You may ask, if there exists a simple, well-performing model to go from the data to the targets (our common sense baseline), why doesn't
the model we are training find it and improve on it? Simply put: because this simple solution is not what our training setup is looking
for. The space of models in which we are searching for a solution, i.e. our hypothesis space, is the space of all possible 2-layer networks
with the configuration that we defined. These networks are already fairly complicated. When looking for a solution within a space of
complicated models, the simple well-performing baseline might be unlearnable, even if it's technically part of the hypothesis space. That
is a pretty significant limitation of machine learning in general: unless the learning algorithm is hard-coded to look for a specific kind
of simple model, parameter learning can sometimes fail to find a simple solution to a simple problem.
A first recurrent baseline
Our first fully-connected approach didn't do so well, but that doesn't mean machine learning is not applicable to our problem. The approach
above consisted in first flattening the timeseries, which removed the notion of time from the input data. Let us instead look at our data
as what it is: a sequence, where causality and order matter. We will try a recurrent sequence processing model -- it should be the perfect
fit for such sequence data, precisely because it does exploit the temporal ordering of data points, unlike our first approach.
Instead of the LSTM layer introduced in the previous section, we will use the GRU layer, developed by Cho et al. in 2014. GRU layers
(which stands for "gated recurrent unit") work by leveraging the same principle as LSTM, but they are somewhat streamlined and thus cheaper
to run, albeit they may not have quite as much representational power as LSTM. This trade-off between computational expensiveness and
representational power is seen everywhere in machine learning.
End of explanation
"""
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Let's look at our results:
End of explanation
"""
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32,
                     dropout=0.2,
                     recurrent_dropout=0.2,
                     input_shape=(None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=40,
                              validation_data=val_gen,
                              validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Much better! We are able to significantly beat the common sense baseline, thus demonstrating the value of machine learning here, as well as
the superiority of recurrent networks compared to sequence-flattening dense networks on this type of task.
Our new validation MAE of ~0.265 (before we start significantly overfitting) translates to a mean absolute error of 2.35˚C after
de-normalization. That's a solid gain on our initial error of 2.57˚C, but we probably still have a bit of margin for improvement.
Using recurrent dropout to fight overfitting
It is evident from our training and validation curves that our model is overfitting: the training and validation losses start diverging
considerably after a few epochs. You are already familiar with a classic technique for fighting this phenomenon: dropout, consisting in
randomly zeroing-out input units of a layer in order to break happenstance correlations in the training data that the layer is exposed to.
How to correctly apply dropout in recurrent networks, however, is not a trivial question. It has long been known that applying dropout
before a recurrent layer hinders learning rather than helping with regularization. In 2015, Yarin Gal, as part of his Ph.D. thesis on
Bayesian deep learning, determined the proper way to use dropout with a recurrent network: the same dropout mask (the same pattern of
dropped units) should be applied at every timestep, instead of a dropout mask that would vary randomly from timestep to timestep. What's
more: in order to regularize the representations formed by the recurrent gates of layers such as GRU and LSTM, a temporally constant
dropout mask should be applied to the inner recurrent activations of the layer (a "recurrent" dropout mask). Using the same dropout mask at
every timestep allows the network to properly propagate its learning error through time; a temporally random dropout mask would instead
disrupt this error signal and be harmful to the learning process.
Yarin Gal did his research using Keras and helped build this mechanism directly into Keras recurrent layers. Every recurrent layer in Keras
has two dropout-related arguments: dropout, a float specifying the dropout rate for input units of the layer, and recurrent_dropout,
specifying the dropout rate of the recurrent units. Let's add dropout and recurrent dropout to our GRU layer and see how it impacts
overfitting. Because networks being regularized with dropout always take longer to fully converge, we train our network for twice as many
epochs.
End of explanation
"""
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32,
                     dropout=0.1,
                     recurrent_dropout=0.5,
                     return_sequences=True,
                     input_shape=(None, float_data.shape[-1])))
model.add(layers.GRU(64, activation='relu',
                     dropout=0.1,
                     recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=40,
                              validation_data=val_gen,
                              validation_steps=val_steps)
"""
Explanation: Great success; we are no longer overfitting during the first 30 epochs. However, while we have more stable evaluation scores, our best
scores are not much lower than they were previously.
Stacking recurrent layers
Since we are no longer overfitting yet we seem to have hit a performance bottleneck, we should start considering increasing the capacity of
our network. If you remember our description of the "universal machine learning workflow": it is generally a good idea to increase the
capacity of your network until overfitting becomes your primary obstacle (assuming that you are already taking basic steps to mitigate
overfitting, such as using dropout). As long as you are not overfitting too badly, then you are likely under-capacity.
Increasing network capacity is typically done by increasing the number of units in the layers, or adding more layers. Recurrent layer
stacking is a classic way to build more powerful recurrent networks: for instance, what currently powers the Google translate algorithm is
a stack of seven large LSTM layers -- that's huge.
To stack recurrent layers on top of each other in Keras, all intermediate layers should return their full sequence of outputs (a 3D tensor)
rather than their output at the last timestep. This is done by specifying return_sequences=True:
End of explanation
"""
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Let's take a look at our results:
End of explanation
"""
def reverse_order_generator(data, lookback, delay, min_index, max_index,
                            shuffle=False, batch_size=128, step=6):
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    while 1:
        if shuffle:
            rows = np.random.randint(
                min_index + lookback, max_index, size=batch_size)
        else:
            if i + batch_size >= max_index:
                i = min_index + lookback
            rows = np.arange(i, min(i + batch_size, max_index))
            i += len(rows)

        samples = np.zeros((len(rows),
                            lookback // step,
                            data.shape[-1]))
        targets = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            indices = range(rows[j] - lookback, rows[j], step)
            samples[j] = data[indices]
            targets[j] = data[rows[j] + delay][1]
        yield samples[:, ::-1, :], targets
train_gen_reverse = reverse_order_generator(
    float_data,
    lookback=lookback,
    delay=delay,
    min_index=0,
    max_index=200000,
    shuffle=True,
    step=step,
    batch_size=batch_size)
val_gen_reverse = reverse_order_generator(
    float_data,
    lookback=lookback,
    delay=delay,
    min_index=200001,
    max_index=300000,
    step=step,
    batch_size=batch_size)

model = Sequential()
model.add(layers.GRU(32, input_shape=(None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen_reverse,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen_reverse,
                              validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: We can see that the added layers does improve ours results by a bit, albeit not very significantly. We can draw two conclusions:
Since we are still not overfitting too badly, we could safely increase the size of our layers, in quest for a bit of validation loss
improvement. This does have a non-negligible computational cost, though.
Since adding a layer did not help us by a significant factor, we may be seeing diminishing returns to increasing network capacity at this
point.
Using bidirectional RNNs
The last technique that we will introduce in this section is called "bidirectional RNNs". A bidirectional RNN is a common RNN variant that
can offer higher performance than a regular RNN on certain tasks. It is frequently used in natural language processing -- you could call it
the Swiss army knife of deep learning for NLP.
RNNs are notably order-dependent, or time-dependent: they process the timesteps of their input sequences in order, and shuffling or
reversing the timesteps can completely change the representations that the RNN will extract from the sequence. This is precisely the reason
why they perform well on problems where order is meaningful, such as our temperature forecasting problem. A bidirectional RNN exploits
the order-sensitivity of RNNs: it simply consists of two regular RNNs, such as the GRU or LSTM layers that you are already familiar with,
each processing the input sequence in one direction (chronologically and antichronologically), then merging their representations. By
processing a sequence both ways, a bidirectional RNN can catch patterns that may have been overlooked by a one-direction RNN.
Remarkably, the fact that the RNN layers in this section have so far processed sequences in chronological order (older timesteps first) may
have been an arbitrary decision. At least, it's a decision we made no attempt at questioning so far. Could it be that our RNNs could have
performed well enough if it were processing input sequences in antichronological order, for instance (newer timesteps first)? Let's try
this in practice and see what we get. All we need to do is write a variant of our data generator, where the input sequences get reverted
along the time dimension (replace the last line with yield samples[:, ::-1, :], targets). Training the same one-GRU-layer network as we
used in the first experiment in this section, we get the following results:
End of explanation
"""
from keras.datasets import imdb
from keras.preprocessing import sequence
from keras import layers
from keras.models import Sequential
# Number of words to consider as features
max_features = 10000
# Cut texts after this number of words (among top max_features most common words)
maxlen = 500
# Load data
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# Reverse sequences
x_train = [x[::-1] for x in x_train]
x_test = [x[::-1] for x in x_test]
# Pad sequences
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
model = Sequential()
model.add(layers.Embedding(max_features, 128))
model.add(layers.LSTM(32))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
"""
Explanation: So the reversed-order GRU strongly underperforms even the common-sense baseline, indicating that in our case chronological processing is very important to the success of our approach. This makes perfect sense: the underlying GRU layer will typically be better at remembering the recent past than the distant past, and naturally the more recent weather data points are more predictive than older data points for our problem (that's precisely what makes the common-sense baseline fairly strong). Thus the chronological version of the layer is bound to outperform the reversed-order version. Importantly, this is generally not true for many other problems, including natural language: intuitively, the importance of a word in understanding a sentence does not usually depend on its position in the sentence. Let's try the same trick on the LSTM IMDB example from the previous section:
End of explanation
"""
from keras import backend as K
K.clear_session()
model = Sequential()
model.add(layers.Embedding(max_features, 32))
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.2)
"""
Explanation: We get near-identical performance to the chronological-order LSTM we tried in the previous section.
Thus, remarkably, on such a text dataset, reversed-order processing works just as well as chronological processing, confirming our hypothesis that, although word order does matter in understanding language, which order you use isn't crucial. Importantly, an RNN trained on reversed sequences will learn different representations than one trained on the original sequences, in much the same way that you would have quite different mental models if time flowed backwards in the real world -- if you lived a life where you died on your first day and were born on your last day. In machine learning, representations that are different yet useful are always worth exploiting, and the more they differ the better: they offer a new angle from which to look at your data, capturing aspects of the data that were missed by other approaches, and thus they can help boost performance on a task. This is the intuition behind "ensembling", a concept that we will introduce in the next chapter.
A bidirectional RNN exploits this idea to improve upon the performance of chronological-order RNNs: it looks at its input sequence both ways, obtaining potentially richer representations and capturing patterns that may have been missed by the chronological-order version alone.
To instantiate a bidirectional RNN in Keras, you use the Bidirectional layer, which takes as its first argument a recurrent layer instance. Bidirectional creates a second, separate instance of this recurrent layer, and uses one instance for processing the input sequences in chronological order and the other instance for processing the input sequences in reversed order. Let's try it on the IMDB sentiment analysis task:
End of explanation
"""
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Bidirectional(
layers.GRU(32), input_shape=(None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=40,
validation_data=val_gen,
validation_steps=val_steps)
"""
Explanation: It performs slightly better than the regular LSTM we tried in the previous section, going above 88% validation accuracy. It also seems to overfit faster, which is unsurprising since a bidirectional layer has twice as many parameters as a chronological LSTM. With some regularization, the bidirectional approach would likely be a strong performer on this task.
Now let's try the same approach on the weather prediction task:
End of explanation
"""
|
jorisvandenbossche/2015-EuroScipy-pandas-tutorial | solved - 02 - Data structures.ipynb | bsd-2-clause | df = pd.read_csv("data/titanic.csv")
df.head()
"""
Explanation: Tabular data
End of explanation
"""
df['Age'].hist()
"""
Explanation: Starting from reading this dataset, to answering questions about this data in a few lines of code:
What is the age distribution of the passengers?
End of explanation
"""
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
"""
Explanation: How does the survival rate of the passengers differ between sexes?
End of explanation
"""
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
"""
Explanation: Or how does it differ between the different classes?
End of explanation
"""
df['Survived'].sum() / df['Survived'].count()
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
"""
Explanation: Are young people more likely to survive?
End of explanation
"""
s = pd.Series([0.1, 0.2, 0.3, 0.4])
s
"""
Explanation: All the needed functionality for the above examples will be explained throughout this tutorial.
Data structures
Pandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
End of explanation
"""
s.index
"""
Explanation: Attributes of a Series: index and values
The series has a built-in concept of an index, which by default is the numbers 0 through N - 1
End of explanation
"""
s.values
"""
Explanation: You can access the underlying numpy array representation with the .values attribute:
End of explanation
"""
s[0]
"""
Explanation: We can access series values via the index, just like for NumPy arrays:
End of explanation
"""
s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])
s2
s2['c']
"""
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
"""
pop_dict = {'Germany': 81.3,
'Belgium': 11.3,
'France': 64.3,
'United Kingdom': 64.9,
'Netherlands': 16.9}
population = pd.Series(pop_dict)
population
"""
Explanation: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.
In fact, it's possible to construct a series directly from a Python dictionary:
End of explanation
"""
population['France']
"""
Explanation: We can index the populations like a dict as expected:
End of explanation
"""
population * 1000
"""
Explanation: but with the power of numpy arrays:
End of explanation
"""
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
"""
Explanation: DataFrames: Multi-dimensional Data
A DataFrame is a tabular data structure (a multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects that share the same index.
<img src="img/dataframe.png" width=110%>
One of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
"""
countries.index
countries.columns
"""
Explanation: Attributes of the DataFrame
Besides an index attribute, a DataFrame also has a columns attribute:
End of explanation
"""
countries.dtypes
"""
Explanation: To check the data types of the different columns:
End of explanation
"""
countries.info()
"""
Explanation: An overview of that information can be given with the info() method:
End of explanation
"""
countries.values
"""
Explanation: A DataFrame also has a values attribute, but beware: with heterogeneous data, all values will be upcast:
End of explanation
"""
countries = countries.set_index('country')
countries
"""
Explanation: If we don't like what the index looks like, we can set one of our columns as the index instead:
End of explanation
"""
countries['area']
"""
Explanation: To access a Series representing a column in the data, use typical indexing syntax:
End of explanation
"""
# redefining the example objects
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
countries = pd.DataFrame({'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})
"""
Explanation: Basic operations on Series/Dataframes
As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
End of explanation
"""
population / 100
countries['population'] / countries['area']
"""
Explanation: Elementwise-operations (like numpy)
Just like with numpy arrays, many operations are element-wise:
End of explanation
"""
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
"""
Explanation: Alignment! (unlike numpy)
However, pay attention to alignment: operations between series will align on the index:
End of explanation
"""
population.mean()
"""
Explanation: Reductions (like numpy)
The average population number:
End of explanation
"""
countries['area'].min()
"""
Explanation: The minimum area:
End of explanation
"""
countries.median()
"""
Explanation: For dataframes, often only the numeric columns are included in the result:
End of explanation
"""
population / population['Belgium'].mean()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population numbers relative to Belgium
</div>
End of explanation
"""
countries['population']*1000000 / countries['area']
countries['density'] = countries['population']*1000000 / countries['area']
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population density for each country and add this as a new column to the dataframe.
</div>
End of explanation
"""
countries.sort_values('density', ascending=False)
"""
Explanation: Some other useful methods
Sorting the rows of the DataFrame according to the values in a column:
End of explanation
"""
countries.describe()
"""
Explanation: One useful method to use is the describe method, which computes summary statistics for each column:
End of explanation
"""
countries.plot()
"""
Explanation: The plot method can be used to quickly visualize the data in different ways:
End of explanation
"""
countries['population'].plot(kind='bar')
"""
Explanation: However, for this dataset, it does not say that much:
End of explanation
"""
pd.read  # type 'pd.read' and press <TAB> to discover all the available readers
states.to  # similarly, '<your DataFrame>.to' + <TAB> lists the writers ('states' is a placeholder DataFrame)
"""
Explanation: You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'
Importing and exporting data
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
...
End of explanation
"""
|
tbphu/fachkurs_master_2016 | 07_modelling/20151201_ZombieApocalypse-Assignment.ipynb | mit | import numpy as np
# 1. initial conditions
# initial population
# initial zombie population
# initial death population
# initial condition vector
# 2. parameter values
# birth rate
# 'natural' death percent (per day)
# transmission percent (per day)
# resurect percent (per day)
# destroy percent (per day)
# 3. simulation time
# start time in days
# end time in days
# time grid, 1000 steps or data points (NUMPY!!!)
"""
Explanation: Ordinary Differential Equations - ODE
or 'How to Model the Zombie Apocalypse'
Jens Hahn - 01/12/2015
Content taken from:
Scipy Docs at http://scipy-cookbook.readthedocs.org/items/Zombie_Apocalypse_ODEINT.html
Munz et al. (2009): http://mysite.science.uottawa.ca/rsmith43/Zombies.pdf
Introduction
What is an ODE
Differential equations can be used to describe the time-dependent behaviour of a variable.
$$\frac{\text{d}\vec{x}}{\text{d}t} = \vec{f}(\vec{x}, t)$$
In our case, the variables stand for the numbers of infected (zombie) and uninfected humans in a population.
Of course, ODEs can also be used to describe the change of concentrations in a cell or any other continuous or quasi-continuous quantity.
In general, a first order ODE has two parts, the increasing (birth, formation,...) and the decreasing (death, degradation, ...) part:
$$\frac{\text{d}\vec{x}}{\text{d}t} = \sum \text{Rates}_{\text{production}} - \sum \text{Rates}_{\text{loss}}$$
You probably already know ways to solve a differential equation analytically, by 'separation of variables' in the homogeneous case or by 'variation of parameters' in the inhomogeneous case. Here, we want to discuss the use of numerical methods to solve your ODE system.
Solve the model
The zombie apokalypse model
Let's have a look at our equations:
Number of susceptible victims $S$:
$$\frac{\text{d}S}{\text{d}t} = \text{???}$$
Number of zombies $Z$:
$$\frac{\text{d}Z}{\text{d}t} = \text{???}$$
Number of people "killed" $R$:
$$\frac{\text{d}R}{\text{d}t} = \text{???}$$
Parameters:
P: the population birth rate
d: the chance of a natural death
B: the chance the “zombie disease” is transmitted (an alive person becomes a zombie)
G: the chance a dead person is resurrected into a zombie
A: the chance a zombie is totally destroyed by a human
Let's start
Before we start the simulation of our model, we have to define our system.
We start with our static information:
1. Initial conditions for our variables
2. Values of the paramters
3. Simulation time
4. Number of time points at which we want to have the values for our variables (the time grid). Use numpy!!
End of explanation
"""
# function 'f' or 'dxdt' to evaluate the changes of the system dy/dt = f(y, t)
"""
Explanation: In the second step, we write a small function f, that receives a list of the current values of our variables x and the current time t. The function has to evaluate the equations of our system or $\frac{\text{d}\vec{x}}{\text{d}t}$, respectively. Afterwards, it returns the values of the equations as another list.
Important
Since this function f is used by the solver, we are not allowed to change the input (arguments) or output (return value) of this function.
End of explanation
"""
# zombie apocalypse modeling
import matplotlib.pyplot as plt # for plotting
# plots inside the notebook
%matplotlib inline
from scipy.integrate import odeint # the integrator
# solve the DEs
result = odeint(f, y0, t)
S = result[:, 0]
Z = result[:, 1]
R = result[:, 2]
# plot results
plt.figure()
plt.plot(t, S, label='Humans')
plt.plot(t, Z, label='Zombies')
plt.plot(t, R, label='Dead Humans')
plt.xlabel('Days from outbreak')
plt.ylabel('Population')
plt.title('Zombie Apocalypse - No Init. Dead Pop.; No New Births.')
plt.ylim([0,500])
plt.legend(loc=0)
"""
Explanation: Last but not least, we need to import and call our solver. The result will be a matrix with our time courses as columns and the values at the specified time points. Since we have a value for every time point and every species, we can directly plot the results using matplotlib.
End of explanation
"""
|
google/jax-md | notebooks/minimization.ipynb | apache-2.0 | #@title Imports & Utils
!pip install jax-md
import numpy as onp
import jax.numpy as np
from jax.config import config
config.update('jax_enable_x64', True)
from jax import random
from jax import jit
from jax_md import space, smap, energy, minimize, quantity, simulate
from jax_md.colab_tools import renderer
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.grid(True)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
"""
Explanation: <a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/minimization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
N = 1000
dimension = 2
box_size = quantity.box_size_at_number_density(N, 0.8, dimension)
displacement, shift = space.periodic(box_size)
"""
Explanation: Harmonic Minimization
Here we demonstrate some simple example code showing how we might find the inherent structure for some initially random configuration of particles. Note that this code will work on CPU, GPU, or TPU out of the box.
First thing we need to do is set some parameters that define our simulation, including what kind of box we're using (specified using a metric function and a wrapping function).
End of explanation
"""
key = random.PRNGKey(0)
R = box_size * random.uniform(key, (N, dimension), dtype=np.float32)
# The system ought to be a 50:50 mixture of two types of particles, one
# large and one small.
sigma = np.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = np.where(np.arange(N) < N_2, 0, 1)
"""
Explanation: Next we need to generate some random positions as well as particle sizes.
End of explanation
"""
energy_fn = energy.soft_sphere_pair(displacement, species=species, sigma=sigma)
fire_init, fire_apply = minimize.fire_descent(energy_fn, shift)
fire_apply = jit(fire_apply)
fire_state = fire_init(R)
"""
Explanation: Then we need to construct our FIRE minimization function. Like all simulations in JAX MD, the FIRE optimizer is two functions: an init_fn that creates the state of the optimizer and an apply_fn that updates the state to a new state.
End of explanation
"""
E = []
trajectory = []
for i in range(200):
fire_state = fire_apply(fire_state)
E += [energy_fn(fire_state.position)]
trajectory += [fire_state.position]
R = fire_state.position
trajectory = np.stack(trajectory)
"""
Explanation: Now let's actually do the minimization, keeping track of the energy and particle positions as we go.
End of explanation
"""
metric = lambda R: space.distance(space.map_product(displacement)(R, R))
dr = metric(R)
plt.plot(np.min(dr[:N_2, :N_2] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{AA}$')
plt.plot(np.min(dr[:N_2, N_2:] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{AB}$')
plt.plot(np.min(dr[N_2:, N_2:] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{BB}$')
plt.legend()
format_plot('', 'min neighbor distance')
finalize_plot()
"""
Explanation: Let's plot the nearest distance for different species pairs. We see that particles on average have neighbors that are the right distance apart.
End of explanation
"""
ms = 45
R_plt = onp.array(fire_state.position)
plt.plot(R_plt[:N_2, 0], R_plt[:N_2, 1], 'o', markersize=ms * 0.5)
plt.plot(R_plt[N_2:, 0], R_plt[N_2:, 1], 'o', markersize=ms * 0.7)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
"""
Explanation: Now let's plot the system. It's nice and minimized!
End of explanation
"""
diameter = np.where(species, 1.4, 1.0)
color = np.where(species[:, None],
np.array([[1.0, 0.5, 0.05]]),
np.array([[0.15, 0.45, 0.8]]))
renderer.render(box_size,
{ 'particles': renderer.Disk(trajectory, diameter, color)},
buffer_size=50)
"""
Explanation: If we want, we can visualize the entire minimization.
End of explanation
"""
plt.plot(E, linewidth=3)
format_plot('step', '$E$')
finalize_plot()
"""
Explanation: Finally, let's plot the energy trajectory that we observe during FIRE minimization.
End of explanation
"""
|
tensorflow/docs-l10n | site/pt-br/r1/tutorials/keras/basic_text_classification.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
# keras.datasets.imdb is broken in TensorFlow 1.13 and 1.14 due to numpy 1.16.3
!pip install tf_nightly
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
"""
Explanation: Text classification with movie reviews
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
</table>
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary (two-class) classification, an important and widely applicable kind of machine learning problem.
We will use the IMDB dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. The reviews are split into 25,000 for training and 25,000 for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow. For a more advanced text-classification guide using tf.keras, see the MLCC Text Classification Guide.
End of explanation
"""
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
"""
Explanation: Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The code below downloads the IMDB dataset to your machine (or uses a cached copy if you have already downloaded it):
End of explanation
"""
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
"""
Explanation: The argument num_words=10000 keeps the 10,000 most frequently occurring words in the training data. The rarer words are discarded to keep the size of the data manageable.
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.
End of explanation
"""
print(train_data[0])
"""
Explanation: The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. This is what the first review looks like:
End of explanation
"""
len(train_data[0]), len(train_data[1])
"""
Explanation: Movie reviews can have different lengths. The code below shows the number of words in the first and second reviews. Since the inputs to a neural network must all be the same length, we will need to resolve this later.
End of explanation
"""
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
"""
Explanation: Convert the integers back to words
It may be useful to know how to convert integers back to text. Here, we will create a helper function to query a dictionary object that maps integers to strings:
End of explanation
"""
decode_review(train_data[0])
"""
Explanation: Now we can use the decode_review function to display the text of the first review:
End of explanation
"""
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
"""
Explanation: Prepare the data
The reviews (the arrays of integers) must be converted to tensors before being fed into the neural network. This conversion can be done in two ways:
Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network, a Dense layer that can handle floating-point vector data. This approach is memory intensive, though, requiring a num_words * num_reviews size matrix.
Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. We can use an embedding layer capable of handling this shape as the first layer in our network.
In this tutorial, we will use the second approach.
Since the movie reviews must be the same length, we will use the pad_sequences function to standardize the lengths:
End of explanation
"""
len(train_data[0]), len(train_data[1])
"""
Explanation: Let's look at the length of the examples now:
End of explanation
"""
print(train_data[0])
"""
Explanation: And inspect the (now padded) first review:
End of explanation
"""
# The input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
"""
Explanation: Build the model
The neural network is created by stacking layers; this requires two main architectural decisions:
How many layers will the model use?
How many hidden units will each layer have?
In this example, the input data consists of arrays of word indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
End of explanation
"""
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
"""
Explanation: As camadas são empilhadas sequencialmente para construir o classificador:
A primeira camada é uma camada Embedding layer (Embedding layer). Essa camada pega o vocabulário em inteiros e olha o vetor embedding em cada palavra-index. Esses vetores são aprendidos pelo modelo, ao longo do treinamento. Os vetores adicionam a dimensão ao array de saída. As dimensões resultantes são: (batch, sequence, embedding).
Depois, uma camada GlobalAveragePooling1D retorna um vetor de saída com comprimento fixo para cada exemplo fazendo a média da sequência da dimensão. Isso permite o modelo de lidar com entradas de tamanhos diferentes da maneira mais simples possível.
Esse vetor de saída com tamanho fixo passa por uma camada fully-connected (Dense) layer com 16 hidden units.
A última camada é uma densely connected com um único nó de saída. Usando uma função de ativação sigmoid, esse valor é um float que varia entre 0 e 1, representando a probabilidade, ou nível de confiança.
Hidden units
O modelo abaixo tem duas camadas intermediárias ou "hidden" (hidden layers), entre a entrada e saída. O número de saídas (unidades— units—, nós ou neurônios) é a dimensão do espaço representacional para a camada. Em outras palavras, a quantidade de liberdade que a rede é permitida enquanto aprende uma representação interna.
Se o modelo tem mais hidden units (um espaço representacional de maior dimensão), e/ou mais camadas, então a rede pode aprender representações mais complexas. Entretanto, isso faz com que a rede seja computacionamente mais custosa e pode levar o aprendizado de padrões não desejados— padrões que melhoram a performance com os dados de treinamento, mas não com os de teste. Isso se chama overfitting, e exploraremos mais tarde.
Função Loss e otimizadores (optimizer)
O modelo precisa de uma função loss e um otimizador (optimizer) para treinamento. Já que é um problema de classificação binário e o modelo tem com saída uma probabilidade (uma única camada com ativação sigmoide), usaremos a função loss binary_crossentropy.
Essa não é a única escolha de função loss, você poderia escolher, no lugar, a mean_squared_error. Mas, geralmente, binary_crossentropy é melhor para tratar probabilidades— ela mede a "distância" entre as distribuições de probabilidade, ou, no nosso caso, sobre a distribuição real e as previsões.
Later, when we explore regression problems (say, predicting the price of a house), we'll see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
End of explanation
"""
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
"""
Explanation: Create a validation set
When training, we want to check the accuracy of the model on data it has not seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune the model using only the training data, then use the test data just once to evaluate accuracy.)
End of explanation
"""
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
"""
Explanation: Train the model
Train the model for 40 epochs in mini-batches of 512 samples: that is, 40 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
End of explanation
"""
results = model.evaluate(test_data, test_labels, verbose=2)
print(results)
"""
Explanation: Evaluate the model
And let's see how the model performs. Two values will be returned: loss (a number that represents our error; lower values are better) and accuracy.
End of explanation
"""
history_dict = history.history
history_dict.keys()
"""
Explanation: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:
End of explanation
"""
import matplotlib.pyplot as plt
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# "b" is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()  # clear the figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
"""
Explanation: There are four entries: one for each metric that was monitored during training and validation. We can use them to plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
"""
|
fastai/course-v3 | nbs/dl2/11a_transfer_learning.ipynb | apache-2.0 | path = datasets.untar_data(datasets.URLs.IMAGEWOOF_160)
size = 128
bs = 64
tfms = [make_rgb, RandomResizedCrop(size, scale=(0.35,1)), np_to_float, PilRandomFlip()]
val_tfms = [make_rgb, CenterCrop(size), np_to_float]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
ll.valid.x.tfms = val_tfms
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=8)
len(il)
loss_func = LabelSmoothingCrossEntropy()
opt_func = adam_opt(mom=0.9, mom_sqr=0.99, eps=1e-6, wd=1e-2)
learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
def sched_1cycle(lr, pct_start=0.3, mom_start=0.95, mom_mid=0.85, mom_end=0.95):
phases = create_phases(pct_start)
sched_lr = combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds(phases, cos_1cycle_anneal(mom_start, mom_mid, mom_end))
return [ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
lr = 3e-3
pct_start = 0.5
cbsched = sched_1cycle(lr, pct_start)
learn.fit(40, cbsched)
st = learn.model.state_dict()
type(st)
', '.join(st.keys())
st['10.bias']
mdl_path = path/'models'
mdl_path.mkdir(exist_ok=True)
"""
Explanation: Serializing the model
Jump_to lesson 12 video
End of explanation
"""
torch.save(st, mdl_path/'iw5')
"""
Explanation: It's also possible to save the whole model, including the architecture, but it gets quite fiddly and we don't recommend it. Instead, just save the parameters, and recreate the model directly.
End of explanation
"""
pets = datasets.untar_data(datasets.URLs.PETS)
pets.ls()
pets_path = pets/'images'
il = ImageList.from_files(pets_path, tfms=tfms)
il
#export
def random_splitter(fn, p_valid): return random.random() < p_valid
random.seed(42)
sd = SplitData.split_by_func(il, partial(random_splitter, p_valid=0.1))
sd
n = il.items[0].name; n
re.findall(r'^(.*)_\d+.jpg$', n)[0]
def pet_labeler(fn): return re.findall(r'^(.*)_\d+.jpg$', fn.name)[0]
proc = CategoryProcessor()
ll = label_by_func(sd, pet_labeler, proc_y=proc)
', '.join(proc.vocab)
ll.valid.x.tfms = val_tfms
c_out = len(proc.vocab)
data = ll.to_databunch(bs, c_in=3, c_out=c_out, num_workers=8)
learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
learn.fit(5, cbsched)
"""
Explanation: Pets
Jump_to lesson 12 video
End of explanation
"""
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
st = torch.load(mdl_path/'iw5')
m = learn.model
m.load_state_dict(st)
cut = next(i for i,o in enumerate(m.children()) if isinstance(o,nn.AdaptiveAvgPool2d))
m_cut = m[:cut]
xb,yb = get_batch(data.valid_dl, learn)
pred = m_cut(xb)
pred.shape
ni = pred.shape[1]
#export
class AdaptiveConcatPool2d(nn.Module):
def __init__(self, sz=1):
super().__init__()
self.output_size = sz
self.ap = nn.AdaptiveAvgPool2d(sz)
self.mp = nn.AdaptiveMaxPool2d(sz)
def forward(self, x): return torch.cat([self.mp(x), self.ap(x)], 1)
nh = 40
m_new = nn.Sequential(
m_cut, AdaptiveConcatPool2d(), Flatten(),
nn.Linear(ni*2, data.c_out))
learn.model = m_new
learn.fit(5, cbsched)
"""
Explanation: Custom head
Jump_to lesson 12 video
End of explanation
"""
def adapt_model(learn, data):
cut = next(i for i,o in enumerate(learn.model.children())
if isinstance(o,nn.AdaptiveAvgPool2d))
m_cut = learn.model[:cut]
xb,yb = get_batch(data.valid_dl, learn)
pred = m_cut(xb)
ni = pred.shape[1]
m_new = nn.Sequential(
m_cut, AdaptiveConcatPool2d(), Flatten(),
nn.Linear(ni*2, data.c_out))
learn.model = m_new
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
for p in learn.model[0].parameters(): p.requires_grad_(False)
learn.fit(3, sched_1cycle(1e-2, 0.5))
for p in learn.model[0].parameters(): p.requires_grad_(True)
learn.fit(5, cbsched, reset_opt=True)
"""
Explanation: adapt_model and gradual unfreezing
Jump_to lesson 12 video
End of explanation
"""
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def apply_mod(m, f):
f(m)
for l in m.children(): apply_mod(l, f)
def set_grad(m, b):
if isinstance(m, (nn.Linear,nn.BatchNorm2d)): return
if hasattr(m, 'weight'):
for p in m.parameters(): p.requires_grad_(b)
apply_mod(learn.model, partial(set_grad, b=False))
learn.fit(3, sched_1cycle(1e-2, 0.5))
apply_mod(learn.model, partial(set_grad, b=True))
learn.fit(5, cbsched, reset_opt=True)
"""
Explanation: Batch norm transfer
Jump_to lesson 12 video
End of explanation
"""
learn.model.apply(partial(set_grad, b=False));
"""
Explanation: Pytorch already has an apply method we can use:
End of explanation
"""
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def bn_splitter(m):
def _bn_splitter(l, g1, g2):
if isinstance(l, nn.BatchNorm2d): g2 += l.parameters()
elif hasattr(l, 'weight'): g1 += l.parameters()
for ll in l.children(): _bn_splitter(ll, g1, g2)
g1,g2 = [],[]
_bn_splitter(m[0], g1, g2)
g2 += m[1:].parameters()
return g1,g2
a,b = bn_splitter(learn.model)
test_eq(len(a)+len(b), len(list(m.parameters())))
Learner.ALL_CBS
#export
from types import SimpleNamespace
cb_types = SimpleNamespace(**{o:o for o in Learner.ALL_CBS})
cb_types.after_backward
#export
class DebugCallback(Callback):
_order = 999
def __init__(self, cb_name, f=None): self.cb_name,self.f = cb_name,f
def __call__(self, cb_name):
if cb_name==self.cb_name:
if self.f: self.f(self.run)
else: set_trace()
#export
def sched_1cycle(lrs, pct_start=0.3, mom_start=0.95, mom_mid=0.85, mom_end=0.95):
phases = create_phases(pct_start)
sched_lr = [combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
for lr in lrs]
sched_mom = combine_scheds(phases, cos_1cycle_anneal(mom_start, mom_mid, mom_end))
return [ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
disc_lr_sched = sched_1cycle([0,3e-2], 0.5)
learn = cnn_learner(xresnet18, data, loss_func, opt_func,
c_out=10, norm=norm_imagenette, splitter=bn_splitter)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def _print_det(o):
print (len(o.opt.param_groups), o.opt.hypers)
raise CancelTrainException()
learn.fit(1, disc_lr_sched + [DebugCallback(cb_types.after_batch, _print_det)])
learn.fit(3, disc_lr_sched)
disc_lr_sched = sched_1cycle([1e-3,1e-2], 0.3)
learn.fit(5, disc_lr_sched)
"""
Explanation: Discriminative LR and param groups
Jump_to lesson 12 video
End of explanation
"""
!./notebook2script.py 11a_transfer_learning.ipynb
"""
Explanation: Export
End of explanation
"""
|
moonbury/pythonanywhere | github/RegressionAnalysisWithPython/Chap_6_ Achieving Generalization.ipynb | gpl-3.0 | import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
dataset = pd.DataFrame(boston.data, columns=boston.feature_names)
dataset['target'] = boston.target
observations = len(dataset)
variables = dataset.columns[:-1]
X = dataset.ix[:,:-1]
y = dataset['target'].values
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
print ("Train dataset sample size: %i" % len(X_train))
print ("Test dataset sample size: %i" % len(X_test))
X_train, X_out_sample, y_train, y_out_sample = train_test_split(X, y, test_size=0.40, random_state=101)
X_validation, X_test, y_validation, y_test = train_test_split(X_out_sample, y_out_sample, test_size=0.50, random_state=101)
print ("Train dataset sample size: %i" % len(X_train))
print ("Validation dataset sample size: %i" % len(X_validation))
print ("Test dataset sample size: %i" % len(X_test))
"""
Explanation: Achieving Generalization
Testing and cross-validation
Train-test split
End of explanation
"""
from sklearn.cross_validation import cross_val_score, KFold, StratifiedKFold
from sklearn.metrics import make_scorer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import numpy as np
def RMSE(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred)**2))
lm = LinearRegression()
cv_iterator = KFold(n=len(X), n_folds=10, shuffle=True, random_state=101)
edges = np.histogram(y, bins=5)[1]
binning = np.digitize(y, edges)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
second_order=PolynomialFeatures(degree=2, interaction_only=False)
third_order=PolynomialFeatures(degree=3, interaction_only=True)
over_param_X = second_order.fit_transform(X)
extra_over_param_X = third_order.fit_transform(X)
cv_score = cross_val_score(lm, over_param_X, y, cv=cv_iterator, scoring='mean_squared_error', n_jobs=1)
print (cv_score)
print ('Cv score: mean %0.3f std %0.3f' % (np.mean(np.abs(cv_score)), np.std(cv_score)))
cv_score = cross_val_score(lm, over_param_X, y, cv=stratified_cv_iterator, scoring='mean_squared_error', n_jobs=1)
print ('Cv score: mean %0.3f std %0.3f' % (np.mean(np.abs(cv_score)), np.std(cv_score)))
"""
Explanation: Cross validation
End of explanation
"""
import random
def Bootstrap(n, n_iter=3, random_state=None):
"""
Random sampling with replacement cross-validation generator.
For each iter a sample bootstrap of the indexes [0, n) is
generated and the function returns the obtained sample
and a list of all the excluded indexes.
"""
if random_state:
random.seed(random_state)
for j in range(n_iter):
bs = [random.randint(0, n-1) for i in range(n)]
out_bs = list({i for i in range(n)} - set(bs))
yield bs, out_bs
boot = Bootstrap(n=10, n_iter=5, random_state=101)
for train_idx, validation_idx in boot:
print (train_idx, validation_idx)
import numpy as np
boot = Bootstrap(n=len(X), n_iter=10, random_state=101)
lm = LinearRegression()
bootstrapped_coef = np.zeros((10,13))
for k, (train_idx, validation_idx) in enumerate(boot):
lm.fit(X.ix[train_idx,:],y[train_idx])
bootstrapped_coef[k,:] = lm.coef_
print(bootstrapped_coef[:,10])
print(bootstrapped_coef[:,6])
"""
Explanation: Valid options are ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'log_loss', 'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc'].
http://scikit-learn.org/stable/modules/model_evaluation.html
Bootstrapping
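Once you have bootstrap replicates of a statistic, a common next step is a percentile confidence interval. A self-contained sketch on toy data (the sample and the statistic here are illustrative, not drawn from the Boston dataset used below):

```python
import numpy as np

rng = np.random.RandomState(101)
sample = rng.normal(loc=5.0, scale=2.0, size=200)   # toy observed sample

n_iter = 1000
boot_means = np.empty(n_iter)
for k in range(n_iter):
    # resample with replacement, same size as the original sample
    resample = rng.choice(sample, size=len(sample), replace=True)
    boot_means[k] = resample.mean()

# 95% percentile confidence interval for the mean
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print('mean %.3f, 95%% CI [%.3f, %.3f]' % (sample.mean(), lower, upper))
```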
End of explanation
"""
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=3)
lm = LinearRegression()
lm.fit(X_train,y_train)
print ('Train (cases, features) = %s' % str(X_train.shape))
print ('Test (cases, features) = %s' % str(X_test.shape))
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(X_train)))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(X_test)))
from sklearn.preprocessing import PolynomialFeatures
second_order=PolynomialFeatures(degree=2, interaction_only=False)
third_order=PolynomialFeatures(degree=3, interaction_only=True)
lm.fit(second_order.fit_transform(X_train),y_train)
print ('(cases, features) = %s' % str(second_order.fit_transform(X_train).shape))
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(second_order.fit_transform(X_train))))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(second_order.fit_transform(X_test))))
lm.fit(third_order.fit_transform(X_train),y_train)
print ('(cases, features) = %s' % str(third_order.fit_transform(X_train).shape))
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(third_order.fit_transform(X_train))))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(third_order.fit_transform(X_test))))
"""
Explanation: Greedy selection of features
Controlling for over-parameterization
End of explanation
"""
try:
import urllib.request as urllib2
except:
import urllib2
import numpy as np
train_data = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_train.data'
validation_data = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_valid.data'
train_response = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_train.labels'
validation_response = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/madelon_valid.labels'
try:
Xt = np.loadtxt(urllib2.urlopen(train_data))
yt = np.loadtxt(urllib2.urlopen(train_response))
Xv = np.loadtxt(urllib2.urlopen(validation_data))
yv = np.loadtxt(urllib2.urlopen(validation_response))
except:
# In case downloading the data doesn't works,
# just manually download the files into the working directory
Xt = np.loadtxt('madelon_train.data')
yt = np.loadtxt('madelon_train.labels')
Xv = np.loadtxt('madelon_valid.data')
yv = np.loadtxt('madelon_valid.labels')
print ('Training set: %i observations %i feature' % (Xt.shape))
print ('Validation set: %i observations %i feature' % (Xv.shape))
from scipy.stats import describe
print (describe(Xt))
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
def visualize_correlation_matrix(data, hurdle = 0.0):
R = np.corrcoef(data, rowvar=0)
R[np.where(np.abs(R)<hurdle)] = 0.0
heatmap = plt.pcolor(R, cmap=mpl.cm.coolwarm, alpha=0.8)
heatmap.axes.set_frame_on(False)
plt.xticks(rotation=90)
plt.tick_params(axis='both', which='both', bottom='off', top='off', left = 'off',
right = 'off')
plt.colorbar()
plt.show()
visualize_correlation_matrix(Xt[:,100:150], hurdle=0.0)
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import LogisticRegression
logit = LogisticRegression()
logit.fit(Xt,yt)
from sklearn.metrics import roc_auc_score
print ('Training area under the curve: %0.3f' % roc_auc_score(yt,logit.predict_proba(Xt)[:,1]))
print ('Validation area under the curve: %0.3f' % roc_auc_score(yv,logit.predict_proba(Xv)[:,1]))
"""
Explanation: Madelon dataset
End of explanation
"""
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile=50)
selector.fit(Xt,yt)
variable_filter = selector.get_support()
plt.hist(selector.scores_, bins=50, histtype='bar')
plt.grid()
plt.show()
variable_filter = selector.scores_ > 10
print ("Number of filtered variables: %i" % np.sum(variable_filter))
from sklearn.preprocessing import PolynomialFeatures
interactions = PolynomialFeatures(degree=2, interaction_only=True)
Xs = interactions.fit_transform(Xt[:,variable_filter])
print ("Number of variables and interactions: %i" % Xs.shape[1])
logit.fit(Xs,yt)
Xvs = interactions.fit_transform(Xv[:,variable_filter])
print ('Validation area Under the Curve before recursive selection: %0.3f' % roc_auc_score(yv,logit.predict_proba(Xvs)[:,1]))
"""
Explanation: Univariate selection of features
End of explanation
"""
# Execution time: 3.15 s
from sklearn.feature_selection import RFECV
from sklearn.cross_validation import KFold
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)
lm = LinearRegression()
cv_iterator = KFold(n=len(X_train), n_folds=10, shuffle=True, random_state=101)
recursive_selector = RFECV(estimator=lm, step=1, cv=cv_iterator, scoring='mean_squared_error')
recursive_selector.fit(second_order.fit_transform(X_train),y_train)
print ('Initial number of features : %i' % second_order.fit_transform(X_train).shape[1])
print ('Optimal number of features : %i' % recursive_selector.n_features_)
a = second_order.fit_transform(X_train)
print (a)
essential_X_train = recursive_selector.transform(second_order.fit_transform(X_train))
essential_X_test = recursive_selector.transform(second_order.fit_transform(X_test))
lm.fit(essential_X_train, y_train)
print ('cases = %i features = %i' % essential_X_test.shape)
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(essential_X_train)))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(essential_X_test)))
edges = np.histogram(y, bins=5)[1]
binning = np.digitize(y, edges)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
essential_X = recursive_selector.transform(second_order.fit_transform(X))
cv_score = cross_val_score(lm, essential_X, y, cv=stratified_cv_iterator, scoring='mean_squared_error', n_jobs=1)
print ('Cv score: mean %0.3f std %0.3f' % (np.mean(np.abs(cv_score)), np.std(cv_score)))
"""
Explanation: Recursive feature selection
End of explanation
"""
from sklearn.linear_model import Ridge
ridge = Ridge(normalize=True)
# The following commented line is to show a logistic regression with L2 regularization
# lr_l2 = LogisticRegression(C=1.0, penalty='l2', tol=0.01)
ridge.fit(second_order.fit_transform(X), y)
lm.fit(second_order.fit_transform(X), y)
print ('Average coefficient: Non regularized = %0.3f Ridge = %0.3f' % (np.mean(lm.coef_), np.mean(ridge.coef_)))
print ('Min coefficient: Non regularized = %0.3f Ridge = %0.3f' % (np.min(lm.coef_), np.min(ridge.coef_)))
print ('Max coefficient: Non regularized = %0.3f Ridge = %0.3f' % (np.max(lm.coef_), np.max(ridge.coef_)))
"""
Explanation: Regularization
Ridge
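The shrinkage visible in the coefficient statistics above follows from ridge's closed-form solution, w = (X'X + alpha*I)^(-1) X'y: the alpha*I term pulls every coefficient toward zero. A hedged numpy sketch on toy data (the intercept is handled by simple centering; this is not sklearn's exact implementation):

```python
import numpy as np

def ridge_coef(X, y, alpha):
    Xc = X - X.mean(axis=0)   # center so the intercept can be ignored
    yc = y - y.mean()
    n_feat = X.shape[1]
    # solve (X'X + alpha*I) w = X'y
    return np.linalg.solve(Xc.T @ Xc + alpha * np.eye(n_feat), Xc.T @ yc)

rng = np.random.RandomState(101)
X = rng.randn(100, 5)
y = X @ np.array([3.0, -2.0, 0.5, 0.0, 1.0]) + rng.randn(100) * 0.1

w_ols   = ridge_coef(X, y, alpha=0.0)    # alpha=0 reduces to ordinary least squares
w_ridge = ridge_coef(X, y, alpha=100.0)  # heavy shrinkage
print(np.abs(w_ols).max(), np.abs(w_ridge).max())
```

With alpha=0 the true coefficients are recovered almost exactly; with a large alpha every coefficient is shrunk, mirroring the min/max comparison printed above.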
End of explanation
"""
from sklearn.grid_search import GridSearchCV
edges = np.histogram(y, bins=5)[1]
binning = np.digitize(y, edges)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
search = GridSearchCV(estimator=ridge, param_grid={'alpha':np.logspace(-4,2,7)}, scoring = 'mean_squared_error',
n_jobs=1, refit=True, cv=stratified_cv_iterator)
search.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search.best_params_['alpha'])
print ('Best CV mean squared error: %0.3f' % np.abs(search.best_score_))
search.grid_scores_
# Alternative: sklearn.linear_model.RidgeCV
from sklearn.linear_model import RidgeCV
auto_ridge = RidgeCV(alphas=np.logspace(-4,2,7), normalize=True, scoring = 'mean_squared_error', cv=None)
auto_ridge.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % auto_ridge.alpha_)
"""
Explanation: Grid search for optimal parameters
End of explanation
"""
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import expon
np.random.seed(101)
search_func=RandomizedSearchCV(estimator=ridge, param_distributions={'alpha':np.logspace(-4,2,100)}, n_iter=10,
scoring='mean_squared_error', n_jobs=1, iid=False, refit=True, cv=stratified_cv_iterator)
search_func.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search_func.best_params_['alpha'])
print ('Best CV mean squared error: %0.3f' % np.abs(search_func.best_score_))
"""
Explanation: Random Search
End of explanation
"""
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=1.0, normalize=True, max_iter=2*10**5)
#The following comment shows an example of L1 logistic regression
#lr_l1 = LogisticRegression(C=1.0, penalty='l1', tol=0.01)
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import expon
np.random.seed(101)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
search_func=RandomizedSearchCV(estimator=lasso, param_distributions={'alpha':np.logspace(-5,2,100)}, n_iter=10,
scoring='mean_squared_error', n_jobs=1, iid=False, refit=True, cv=stratified_cv_iterator)
search_func.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search_func.best_params_['alpha'])
print ('Best CV mean squared error: %0.3f' % np.abs(search_func.best_score_))
print ('Non-zero value coefficients: %i out of %i' % (np.sum(~(search_func.best_estimator_.coef_==0.0)),
                                                      len(search_func.best_estimator_.coef_)))
# Alternative: sklearn.linear_model.LassoCV
# Execution time: 54.9 s
from sklearn.linear_model import LassoCV
auto_lasso = LassoCV(alphas=np.logspace(-5,2,100), normalize=True, n_jobs=1, cv=None, max_iter=10**6)
auto_lasso.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % auto_lasso.alpha_)
"""
Explanation: Lasso
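The exact zeros that lasso produces in the coefficient vector come from the soft-thresholding operator, S(z, t) = sign(z) * max(|z| - t, 0), which the coordinate-descent solver applies to each coefficient. A small illustrative sketch (not sklearn's actual solver):

```python
import numpy as np

def soft_threshold(z, t):
    # Shrink z toward zero by t; values inside [-t, t] become exactly zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([-3.0, -0.5, 0.0, 0.2, 2.5])
shrunk = soft_threshold(z, 1.0)
print(shrunk)  # -> [-2., -0., 0., 0., 1.5]
```

Coefficients smaller in magnitude than the threshold are set exactly to zero, which is why lasso performs variable selection while ridge only shrinks.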
End of explanation
"""
# Execution time: 1min 3s
from sklearn.linear_model import ElasticNet
import numpy as np
elasticnet = ElasticNet(alpha=1.0, l1_ratio=0.15, normalize=True, max_iter=10**6, random_state=101)
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import expon
np.random.seed(101)
search_func=RandomizedSearchCV(estimator=elasticnet, param_distributions={'alpha':np.logspace(-5,2,100),
'l1_ratio':np.arange(0.0, 1.01, 0.05)}, n_iter=10,
scoring='mean_squared_error', n_jobs=1, iid=False, refit=True, cv=stratified_cv_iterator)
search_func.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search_func.best_params_['alpha'])
print ('Best l1_ratio: %0.5f' % search_func.best_params_['l1_ratio'])
print ('Best CV mean squared error: %0.3f' % np.abs(search_func.best_score_))
print ('Non-zero value coefficients: %i out of %i' % (np.sum(~(search_func.best_estimator_.coef_==0.0)),
                                                      len(search_func.best_estimator_.coef_)))
# Alternative: sklearn.linear_model.ElasticNetCV
from sklearn.linear_model import ElasticNetCV
auto_elastic = ElasticNetCV(alphas=np.logspace(-5,2,100), normalize=True, n_jobs=1, cv=None, max_iter=10**6)
auto_elastic.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % auto_elastic.alpha_)
print ('Best l1_ratio: %0.5f' % auto_elastic.l1_ratio_)
print(second_order.fit_transform(X).shape)
print(len(y))
print(second_order.fit_transform(X)[0])
print(y[0])
"""
Explanation: Elasticnet
End of explanation
"""
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import RandomizedLogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
threshold = 0.03
stability_selection = RandomizedLogisticRegression(n_resampling=300, n_jobs=1, random_state=101, scaling=0.15,
sample_fraction=0.50, selection_threshold=threshold)
interactions = PolynomialFeatures(degree=4, interaction_only=True)
model = make_pipeline(stability_selection, interactions, logit)
model.fit(Xt,yt)
print(Xt.shape)
print(yt.shape)
#print(Xt)
#print(yt)
#print(model.steps[0][1].all_scores_)
print ('Number of features picked by stability selection: %i' % np.sum(model.steps[0][1].all_scores_ >= threshold))
from sklearn.metrics import roc_auc_score
print ('Area Under the Curve: %0.3f' % roc_auc_score(yv,model.predict_proba(Xv)[:,1]))
"""
Explanation: Stability selection
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231 | solutions/kvn219/assignment2/ConvolutionalNetworks.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""
Element-wise relative error for floating point comparison.
Input:
- x: a numpy array of type float.
- y: a numpy array of type float.
Returns:
- highest relative error.
"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
"""
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
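A minimal sketch of such a naive implementation, written here for illustration with the function's documented interface (this is one possible solution, not the assignment's reference code):

```python
import numpy as np

def conv_forward_naive_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):              # each image
        for f in range(F):          # each filter
            for i in range(H_out):  # each output row
                for j in range(W_out):  # each output column
                    patch = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(patch * w[f]) + b[f]
    return out

# quick sanity check: a 3x3 all-ones filter just sums the 3x3 neighborhood
x = np.arange(9, dtype=float).reshape(1, 1, 3, 3)
w = np.ones((1, 1, 3, 3))
b = np.zeros(1)
out = conv_forward_naive_sketch(x, w, b, {'stride': 1, 'pad': 0})
print(out)  # [[[[36.]]]], the sum of 0..8
```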
You can test your implementation by running the following:
End of explanation
"""
from scipy.misc import imread, imresize
kitten, puppy = imread('./kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
"""
# x: inputs (N, C, H, W); w: filters (F, C, HH, WW); b: biases (F,)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
"""
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
"""
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
End of explanation
"""
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
"""
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
"""
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
"""
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
"""
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
"""
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
"""
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    e = rel_error(param_grad_num, grads[param_name])
    print '%s max relative error: %e' % (param_name, e)
"""
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
End of explanation
"""
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1,
batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
"""
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
"""
# Keep track of best parameters
best_model = None
solvers = {}
best_val = 0.0
# Train a really good model on CIFAR-10
size = 7
label = 'size_{}'.format(size)
model = ThreeLayerConvNet(weight_scale=0.001,
hidden_dim=500,
filter_size=size,
reg=0.001)
solver = Solver(model, data,
num_epochs=1,
batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
solvers[label] = solver
if solver.best_val_acc > best_val:
best_val = solver.best_val_acc
best_model = solver.model
best_solver = solver
print("done!")
# smaller filter, larger batch size
size = 3
label = 'size_{}'.format(size)
model = ThreeLayerConvNet(weight_scale=0.001,
hidden_dim=500,
filter_size=size,
reg=0.001)
solver = Solver(model, data,
num_epochs=1,
batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
solvers[label] = solver
if solver.best_val_acc > best_val:
best_val = solver.best_val_acc
best_model = solver.model
best_solver = solver
print("done!")
"""
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]xN - [affine]xM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
Baseline
End of explanation
"""
|
makism/dyfunconn | tutorials/EEG - 3 - Dynamic Connectivity.ipynb | bsd-3-clause | import numpy as np
import scipy
from scipy import io
eeg = np.load("data/eeg_eyes_opened.npy")
num_trials, num_channels, num_samples = np.shape(eeg)
eeg_ts = np.squeeze(eeg[0, :, :])
"""
Explanation: In this short tutorial, we will build and expand on the previous tutorials by computing the dynamic connectivity, using Time-Varying Functional Connectivity Graphs.
In the near future, the standard method of "sliding window" will be supported.
Load data
End of explanation
"""
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
from dyconnmap import tvfcg
from dyconnmap.fc import IPLV
"""
Explanation: Dynamic connectivity
As a first example, we are going to compute the dynamic connectivity of the EEG signals using the IPLV estimator.
End of explanation
"""
fb = [1.0, 4.0]
cc = 3.0
fs = 160.0
step = 32
"""
Explanation: First, setup the configuration options
* frequency band, fb
* sampling frequency, fs
* cycle-criterion, cc
* stepping samples, step
End of explanation
"""
estimator = IPLV(fb, fs)
"""
Explanation: Declare and instantiate the estimator we will use to compute the dynamic connectivity. In this case we again use IPLV.
Notes: As one might have noticed, the following line instantiates an object. We only need to pass two parameters, fb and fs.
End of explanation
"""
fcgs = tvfcg(eeg_ts, estimator, fb, fs, cc, step)
num_fcgs, num_rois, num_rois = np.shape(fcgs)
print(f"{num_fcgs} FCGs of shape {num_rois}x{num_rois}")
print(f"FCGs array data type is {fcgs.dtype}")
"""
Explanation: Now we are ready to estimate the dynamic functional connectivity.
End of explanation
"""
rfcgs = np.real(fcgs)
"""
Explanation: Because of the nature of the estimator, notice the FCG's data type; for compatibility reasons, it is np.complex128. We have to use np.real to get the real part.
End of explanation
"""
import matplotlib.pyplot as plt
slices = np.linspace(0, num_fcgs - 1, 5, dtype=np.int32)
num_slices = len(slices)
mtx_min = 0.0
mtx_max = np.max(rfcgs)
f, axes = plt.subplots(ncols=num_slices, figsize=(14, 14), dpi=100, sharey=True, sharex=False)
for i, s in enumerate(slices):
slice_mtx = rfcgs[s, :, :] + rfcgs[s, :, :].T
np.fill_diagonal(slice_mtx, 1.0)
cax = axes[i].imshow(slice_mtx, vmin=mtx_min, vmax=mtx_max, cmap=plt.cm.Spectral)
axes[i].set_title(f'Slice #{s}')
axes[i].set_xlabel("ROI")
axes[0].set_ylabel("ROI")
# move the colorbar to the side ;)
f.subplots_adjust(right=0.8)
cbar_ax = f.add_axes([0.82, 0.445, 0.0125, 0.115])
cb = f.colorbar(cax, cax=cbar_ax)
cb.set_label('Imaginary PLV')
"""
Explanation: Plot
Plot a few FCGs using the standard Matplotlib functions
End of explanation
"""
|
cesarcontre/Simulacion2017 | Modulo3/Clase22_ClasificacionBinaria.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
def fun_log(z):
return 1/(1+np.exp(-z))
z = np.linspace(-5, 5)
plt.figure(figsize = (8,6))
plt.plot(z, fun_log(z), lw = 2)
plt.xlabel('$z$')
plt.ylabel('$\sigma(z)$')
plt.grid()
plt.show()
"""
Explanation: Binary classification
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://cdn.pixabay.com/photo/2017/04/08/11/07/nerve-cell-2213009_960_720.jpg" width="400px" height="125px" />
What we will cover in this class are the basics of what is technically known by many sophisticated names: machine learning, classification with neural networks, among others.
Reference
- https://es.coursera.org/learn/neural-networks-deep-learning
0. Motivation
Many engineering applications come down to building models to:
- predict (weather, production forecasts, decision making),
- design (electrical machines, structures),
- among others.
Until about fifteen years ago, such models were built mostly from laws supported by strong evidence from nature (Newton's laws, Maxwell's equations, the laws of thermodynamics, among others).
In recent years, however, the huge amount of available data, together with technological advances in processing power, has led to models being built from data alone.
Examples:
- Traffic modeling (Waze/Google Maps).
- Social models (advertising, design of marketing strategies).
- Models of... personality?
If none of this strikes you as surprising, I invite you to take twenty minutes and read the following interview.
All of the above illustrates the powerful reach of information technologies. Today we will get a small taste of it: building a binary classification model from data alone.
1. Problem formulation
1.1 Basic idea
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/d/db/Logo_ITESO_normal.jpg" width="200px" height="70px" />
We present the basic idea of binary classification through an example.
The input is a digital image and the output is a label identifying this image as the ITESO logo (in which case the label takes the value one, '1') or not (in which case the label takes the value zero, '0').
We denote the output by $y$.
How does a computer store images? With the RGB color code.
<font color = red>
$$R=\left[\begin{array}{cccc}255 & 124 & \dots & 45\\ 235 & 224 & \dots & 135\\ \vdots & \vdots & & \vdots\\ 23 & 12 & \dots & 242\end{array}\right]$$
</font>
<font color = green>
$$G=\left[\begin{array}{cccc}255 & 154 & \dots & 42\\ 215 & 24 & \dots & 145\\ \vdots & \vdots & & \vdots\\ 0 & 112 & \dots & 232\end{array}\right]$$
</font>
<font color = blue>
$$B=\left[\begin{array}{cccc}255 & 231 & \dots & 145\\ 144 & 234 & \dots & 35\\ \vdots & \vdots & & \vdots\\ 5 & 52 & \dots & 42\end{array}\right]$$
</font>
Each matrix has a size matching the pixels of the image. If the image is $64px\times 64px$, each matrix will be $64\times 64$.
How can we then turn an image into an input? We stack every entry of every matrix into a feature vector $\boldsymbol{x}$:
$$\boldsymbol{x}=\left[\begin{array}{ccc} \text{vec}R & \text{vec}G & \text{vec}B \end{array}\right]^T=\left[\begin{array}{ccccccccc} 255 & 124 & \dots & 255 & 154 & \dots & 255 & 231 & \dots \end{array}\right]^T$$
The classification problem can then be summarized as: given an input vector $\boldsymbol{x}$ (here, a vector with the red, green, and blue intensities of each pixel of an image), predict whether the corresponding label $y$ takes the value $1$ or $0$ (whether it is the ITESO logo or not).
1.2 Notation
From here on we use the following notation.
A training example is represented by the ordered pair $(\boldsymbol{x},y)$, where $\boldsymbol{x}\in\mathbb{R}^n$ and $y\in\left\lbrace0,1\right\rbrace$.
We will have $m$ training examples, so our training set will be $\left\lbrace(\boldsymbol{x}^1,y^1),(\boldsymbol{x}^2,y^2),\dots,(\boldsymbol{x}^m,y^m)\right\rbrace$.
Moreover, to present the training inputs more compactly, we define the matrix
$$\boldsymbol{X}=\left[\begin{array}{c} {\boldsymbol{x}^1}^T \\ {\boldsymbol{x}^2}^T \\ \vdots \\ {\boldsymbol{x}^m}^T \end{array}\right]\in\mathbb{R}^{m\times n},$$
whose rows are the transposed training input vectors, and the vector
$$\boldsymbol{Y}=\left[\begin{array}{c} y^1 \\ y^2 \\ \vdots \\ y^m \end{array}\right]\in\mathbb{R}^{m},$$
whose components are the training labels (outputs).
2. Logistic regression
The idea, then, is: given a feature vector $\boldsymbol{x}$ (perhaps corresponding to an image we want to identify as the ITESO logo or not), we want to obtain a prediction $\hat{y}$, our estimate of $y$.
Formally, $\hat{y}=P(y=1|\boldsymbol{x})\in\left[0,1\right]$.
The regression parameters will be $\boldsymbol{\beta}=\left[\beta_0\quad \beta_1\quad \dots\quad \beta_n \right]^T\in\mathbb{R}^{n+1}.$
First idea: use a linear regressor
$$\hat{y}=\beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_nx_n=\left[1\quad \boldsymbol{x}^T\right]\boldsymbol{\beta}=\boldsymbol{x}_a^T\boldsymbol{\beta},$$
where $\boldsymbol{x}_a=\left[1\quad \boldsymbol{x}^T \right]^T\in\mathbb{R}^{n+1}$.
What is the problem? The dot product $\boldsymbol{\beta}^T\boldsymbol{x}_a$ is not constrained to lie between $0$ and $1$.
So we pass the linear regressor through a sigmoid (logistic function)
$$\sigma(z)=\frac{1}{1+e^{-z}}$$
End of explanation
"""
def reg_log(B,Xa):
return fun_log(Xa.dot(B))
"""
Explanation: We note that:
- If $z$ is large, $\sigma(z)\approx 1$.
- If $-z$ is large, $\sigma(z)\approx 0$.
- $\sigma(0)=0.5$.
Finally...
Logistic regressor: $\hat{y}=\sigma(\boldsymbol{x}_a^T\boldsymbol{\beta})$.
To handle all the training data at once, we define the matrix
$$\boldsymbol{X}_a=\left[\boldsymbol{1}_{m\times 1}\quad \boldsymbol{X}\right]=\left[\begin{array}{cc} 1 & {\boldsymbol{x}^1}^T \\ 1 & {\boldsymbol{x}^2}^T \\ \vdots & \vdots \\ 1 & {\boldsymbol{x}^m}^T \end{array}\right]\in\mathbb{R}^{m\times (n+1)}.$$
Thus,
$$\hat{\boldsymbol{Y}}=\left[\begin{array}{c} \hat{y}^1 \\ \hat{y}^2 \\ \vdots \\ \hat{y}^m \end{array}\right]=\sigma(\boldsymbol{X}_a\boldsymbol{\beta})$$
End of explanation
"""
data_file = 'ex2data1.txt'
data = pd.read_csv(data_file, header=None)
X = data.iloc[:,0:2].values
Y = data.iloc[:,2].values
X
Y
plt.figure(figsize = (8,6))
plt.scatter(X[:,0], X[:,1], c=Y)
plt.show()
"""
Explanation: 3. Cost functional
Now that we have defined the form of our classifier model, we must train the parameters $\boldsymbol{\beta}$ with the training examples.
That is, given $\left\lbrace(\boldsymbol{x}^1,y^1),(\boldsymbol{x}^2,y^2),\dots,(\boldsymbol{x}^m,y^m)\right\rbrace$, we want to find parameters $\boldsymbol{\beta}$ such that $\hat{y}^i=\sigma({\boldsymbol{x}_a^i}^T\boldsymbol{\beta})\approx y^i$ 'as closely as possible'.
We will pose this as an optimization problem.
First idea: minimize the squared error $\min_{\boldsymbol{\beta}} \frac{1}{m}\sum_{i=1}^m (\hat{y}^i-y^i)^2$. This is a non-convex optimization problem (explain).
Alternative: a cost function was therefore sought that makes the optimization problem convex. It is:
$$\min_{\boldsymbol{\beta}} \frac{1}{m}\sum_{i=1}^m -\left(y^i\log(\hat{y}^i)+(1-y^i)\log(1-\hat{y}^i)\right)$$
We do not intend to explain this whole function. But we can gain some intuition about why we use it. Fix an $i$ inside the sum and consider the term $-\left(y^i\log(\hat{y}^i)+(1-y^i)\log(1-\hat{y}^i)\right)$.
If $y^i=1$, what we want to minimize is $-\log(\hat{y}^i)$. That is, we want $\hat{y}^i=\sigma({\boldsymbol{x}_a^i}^T\boldsymbol{\beta})$ to be as large as possible, i.e. close to $1=y^i$.
If $y^i=0$, what we want to minimize is $-\log(1-\hat{y}^i)$. That is, we want $\hat{y}^i=\sigma({\boldsymbol{x}_a^i}^T\boldsymbol{\beta})$ to be as small as possible, i.e. close to $0=y^i$.
In either case, this objective function does what is required.
Example
The file ex2data1.txt contains points in the $(x,y)$ plane, classified with labels $1$ and $0$.
End of explanation
"""
import pyomo_utilities
B = pyomo_utilities.logreg_clas(X, Y)
"""
Explanation: Design a binary classifier using logistic regression.
The package pyomo_utilities.py has been updated and now contains the function logreg_clas(X, Y).
End of explanation
"""
B
x = np.arange(20, 110, 0.5)
y = np.arange(20, 110, 0.5)
Xm, Ym = np.meshgrid(x, y)
m,n = np.shape(Xm)
Xmr = np.reshape(Xm,(m*n,1))
Ymr = np.reshape(Ym,(m*n,1))
Xa = np.append(np.ones((len(Ymr),1)), Xmr, axis=1)
Xa = np.append(Xa,Ymr,axis=1)
Yg = reg_log(B,Xa)
Z = np.reshape(Yg, (m,n))
Z = np.round(Z)
plt.figure(figsize=(10,10))
plt.contour(Xm, Ym, Z)
plt.scatter(X[:, 0],X[:, 1], c=Y, edgecolors='w')
plt.show()
"""
Explanation: The classifier parameters are then:
End of explanation
"""
X = 10*np.random.random((100, 2))
Y = (X[:, 1] > X[:, 0]**2)*1
plt.figure(figsize = (8,6))
plt.scatter(X[:,0], X[:,1], c=Y)
plt.show()
"""
Explanation: Activity
Consider the following data and design a binary classifier via logistic regression.
Show the plot of the classifier's decision boundary together with the training points.
Open a new notebook, named Tarea8_ApellidoNombre, and upload it to Moodle in the space provided. If you do not finish during class, you have until Wednesday at 23:00.
The homework from last class is also due Wednesday (I forgot to enable the submission space).
End of explanation
"""
|