Per the notes, here is how we can convert our COO representation from before into a SciPy implementation.
A_coo = sp.coo_matrix((coo_vals, (coo_rows, coo_cols)))
20--sparse-plus-least-squares.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
Now measure the time to do a sparse matrix-vector multiply in the COO representation. How does it compare to the nested default dictionary approach?
x_np = np.array(x)
%timeit A_coo.dot(x_np)
Exercise. Repeat the same experiment for SciPy-based CSR.
# @YOUSE: Solution
A_csr = A_coo.tocsr()
%timeit A_csr.dot(x_np)
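To see why CSR can be faster than COO for matrix-vector products, here is a minimal pure-Python sketch of the matvec that `A_csr.dot(x)` performs, using hypothetical toy data (not the notebook's matrix): the row-pointer array lets each row's nonzeros be traversed contiguously.

```python
def csr_matvec(ptr, ind, vals, x):
    """y = A @ x for a CSR matrix given by (row pointers, column indices, values)."""
    n = len(ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # nonzeros of row i live in vals[ptr[i]:ptr[i+1]]
        for k in range(ptr[i], ptr[i + 1]):
            y[i] += vals[k] * x[ind[k]]
    return y

# 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form
ptr, ind, vals = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
print(csr_matvec(ptr, ind, vals, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

SciPy's compiled implementation does the same traversal in C, which is where the speedup over a dictionary-based representation comes from.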
Linear regression and least squares. Yay! Time for a new topic: linear regression by the method of least squares. For this topic, let's use the following dataset, a US crime dataset from 1960: http://cse6040.gatech.edu/fa15/uscrime.csv This dataset comes from: http://www.statsci.org/data/general/uscrime.html
df = pd.read_csv('uscrime.csv', skiprows=1)
display(df.head())
Each row of this dataset is a US State. The columns are described here: http://www.statsci.org/data/general/uscrime.html
import seaborn as sns
%matplotlib inline

# Look at a few relationships
sns.pairplot(df[['Crime', 'Wealth', 'Ed', 'U1']])
Start with ordinary linear regression to understand the tools
size = 200
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)

# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)

data = dict(x=x, y=y)
df = pd.DataFrame(data)
df.head()

fig = plt.figure(figsize=(7, 7))
ax = f...
Regression/Poisson Regression.ipynb
balarsen/pymc_learning
bsd-3-clause
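Before fitting this with a probabilistic model, it may help to recall the closed-form least-squares answer the synthetic data is built to recover. A minimal sketch (plain Python, toy data; not the notebook's code): for y ≈ a + b·x, the slope is the covariance of x and y over the variance of x, and the intercept follows from the means.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x, via means and (co)variances."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# noiseless data with intercept 1 and slope 2 recovers those values
xs = [i / 10 for i in range(11)]
ys = [1 + 2 * xi for xi in xs]
print(fit_line(xs, ys))  # approximately (1.0, 2.0)
```

With the noisy data above, the fitted (a, b) will scatter around (1, 2) rather than hit them exactly.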
Now look into this; something is not quite right with my understanding.
df = pd.read_csv('http://stats.idre.ucla.edu/stat/data/poisson_sim.csv', index_col=0)
df['x'] = df['math']
df['y'] = df['num_awards']
df.head()

df.plot(kind='scatter', x='math', y='num_awards')

with Model() as model:
    # specify glm and pass in data. The resulting linear model, its likelihood and
    # and all its ...
Load the MNIST database
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1  # normalize to -1 to 1
X_test = 2 * X_test - 1    # normalize to -1 to 1
Project/trained_mental_manipulations_ens_inhibition.ipynb
science-of-imagination/nengo-buffer
gpl-3.0
Each digit is represented by a one-hot vector, where the index of the 1 gives the digit
temp = np.diag([1] * 10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE = temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN = temp[7]
EIGHT = temp[8]
NINE = temp[9]
labels = [ZERO, ONE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE]
dim = 28
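As a quick sanity check on the encoding, here is a toy sketch (separate names, not the notebook's variables): the one-hot matrix is just the identity, and `np.argmax` recovers the digit from its vector.

```python
import numpy as np

# np.diag([1] * 10) is the 10x10 identity, same as np.eye(10)
onehots = np.eye(10)
SEVEN = onehots[7]

# argmax of a one-hot vector recovers the encoded digit
print(int(np.argmax(SEVEN)))  # 7
```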
Load the saved weight matrices that were created by training the model
label_weights = cPickle.load(open("label_weights5000.p", "rb"))
activity_to_img_weights = cPickle.load(open("activity_to_img_weights5000.p", "rb"))
rotated_clockwise_after_encoder_weights = cPickle.load(open("rotated_after_encoder_weights_clockwise5000.p", "rb"))
rotated_counter_after_encoder_weights = cPickle.load(op...
Functions to perform the inhibition of each ensemble
# A value of zero gives no inhibition
def inhibit_rotate_clockwise(t):
    if t < 1:
        return dim**2
    else:
        return 0

def inhibit_rotate_counter(t):
    if t < 1:
        return 0
    else:
        return dim**2

def inhibit_identity(t):
    if t < 1:
        return dim**2
    else:
        re...
The network where the mental imagery and rotation occurs. The state, seed, and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work. The number of neurons (n_hid) must be the same as was used for training. The input must be shown for a short period of time to be able to view ...
def add_manipulation(main_ens, weights, inhibition_func):
    # create ensemble for manipulation
    ens_manipulation = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params)
    # create node for inhibition
    inhib_manipulation = nengo.Node(inhibition_func)
    # Connect the main ensemble to each manipulatio...
The following is not part of the brain model; it is used to view the ensemble's output. Since it probes the neurons themselves, the output must be transformed from neuron activity to a visual image.
'''Animation for Probe output'''
fig = plt.figure()

output_acts = []
for act in sim.data[probe]:
    output_acts.append(np.dot(act, activity_to_img_weights))

def updatefig(i):
    im = pylab.imshow(np.reshape(output_acts[i], (dim, dim), 'F').T,
                      cmap=plt.get_cmap('Greys_r'), animated=True)
    return im,

ani = anim...
Pickle the probe's output if it takes a long time to run
# The filename includes the number of neurons and which digit is being rotated
filename = "mental_rotation_output_ONE_" + str(n_hid) + ".p"
cPickle.dump(sim.data[probe], open(filename, "wb"))
Testing
testing = np.dot(ONE, np.dot(label_weights, activity_to_img_weights))
plt.subplot(121)
pylab.imshow(np.reshape(testing, (dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))

# Get image
testing = np.dot(ONE, np.dot(label_weights, activity_to_img_weights))

# Get activity of image
_, testing_act = nengo.utils.ensemble.tuning_cur...
Just for fun
letterO = np.dot(ZERO, np.dot(label_weights, activity_to_img_weights))
plt.subplot(161)
pylab.imshow(np.reshape(letterO, (dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))

letterL = np.dot(SEVEN, label_weights)
for _ in range(30):
    letterL = np.dot(letterL, rotation_weights)
letterL = np.dot(letterL, activity_to_img_weigh...
Query Data. Grab the schedule page:
base_url = "https://pydata.org"
r = rq.get(base_url + "/berlin2018/schedule/")
bs = bs4.BeautifulSoup(r.text, "html.parser")
pydatabln_2018_schedule2cal/pydatabln2018_filter_and_overview.ipynb
nicoa/showcase
mit
Let's query every talk description:
data = {}
for ahref in tqdm_notebook(bs.find_all("a")):
    href = ahref.get("href") or ""  # guard against anchors without an href
    if 'schedule/presentation' in href:
        url = href
    else:
        continue
    data[url] = {}
    resp = bs4.BeautifulSoup(rq.get(base_url + url).text, "html.parser")
    title = resp.find("h2").text
    resp = resp.find_all(at...
Okay, make a dataframe and add some helpful columns:
df = pd.DataFrame.from_dict(data, orient='index')
df.reset_index(drop=True, inplace=True)

# Tutorials on Friday
df.loc[df.day_info == 'Friday', 'tutorial'] = True
df['tutorial'].fillna(False, inplace=True)

# time handling
df['time_from'], df['time_to'] = zip(*df.time_inf.str.split(u'\u2013'))
df.time_from = pd.to_datet...
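The time handling above hinges on one detail: the schedule strings use an en dash (`\u2013`), not a hyphen, between start and end times. A minimal sketch of that split, with a hypothetical helper name and made-up input:

```python
def split_times(time_inf):
    """Split a schedule string like '10:00\u201310:45' (en dash) into start/end."""
    start, end = time_inf.split('\u2013')
    return start.strip(), end.strip()

print(split_times('10:00\u201310:45'))  # ('10:00', '10:45')
```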
Visualize some of the data
plt.style.use('seaborn-darkgrid')
plt.rcParams['savefig.dpi'] = 200
plt.rcParams['figure.dpi'] = 120
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 10, 5
plt.rcParams['axes.labelsize'] = 17
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams[...
Data reading. The data file is located in the datafiles directory.
datadir = './datafiles/'
datafile = 'GL_PR_ML_EXRE0065_2010.nc'
PythonNotebooks/PlatformPlots/plot_CMEMS_vessel.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
We extract only the spatial coordinates:
with netCDF4.Dataset(datadir + datafile) as nc:
    lon = nc.variables['LONGITUDE'][:]
    lat = nc.variables['LATITUDE'][:]
print(lon.shape)
Location of the profiles In this first plot we want to see the location of the profiles obtained with the profiler.<br/> We create a Mercator projection using the coordinates we just read.
m = Basemap(projection='merc',
            llcrnrlat=lat.min()-0.5, urcrnrlat=lat.max()+0.5,
            llcrnrlon=lon.min()-0.5, urcrnrlon=lon.max()+0.5,
            lat_ts=0.5*(lat.min()+lat.max()),
            resolution='h')
Once we have the projection, the coordinates have to be changed into this projection:
lon2, lat2 = m(lon, lat)
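What `m(lon, lat)` does under the hood can be sketched with the spherical Mercator formulas: longitude maps linearly, latitude through ln(tan(π/4 + φ/2)). This is a conceptual stand-in (hypothetical function, standard Earth radius assumed), not Basemap's exact implementation:

```python
import math

def mercator(lon_deg, lat_deg, radius=6378137.0):
    """Spherical Mercator: linear in longitude, ln(tan(pi/4 + lat/2)) in latitude."""
    x = radius * math.radians(lon_deg)
    y = radius * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

print(mercator(0.0, 0.0))  # (0.0, 0.0): the equator/prime-meridian origin
```

The projection stretches distances toward the poles, which is why Basemap asks for `lat_ts`, the latitude of true scale.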
The locations of the vessel stations are added on a map with the coastline and the land mask.
mpl.rcParams.update({'font.size': 16})
fig = plt.figure(figsize=(8,8))
m.plot(lon2, lat2, 'ko', ms=2)
m.drawcoastlines(linewidth=0.5, zorder=3)
m.fillcontinents(zorder=2)
m.drawparallels(np.arange(-90., 91., 0.5), labels=[1,0,0,0], zorder=1)
m.drawmeridians(np.arange(-180., 181., 0.5), labels=[0,0,1,0], zorder=1)
plt.sho...
Profile plot. We read the temperature, salinity and depth variables.
with netCDF4.Dataset(datadir + datafile) as nc:
    depth = nc.variables['DEPH'][:]
    temperature = nc.variables['TEMP'][:]
    temperature_name = nc.variables['TEMP'].long_name
    temperature_units = nc.variables['TEMP'].units
    salinity = nc.variables['PSAL'][:]
    salinity_name = nc.variables['PSAL'].long_name...
We observe different types of profiles. As the covered region is rather small, this may be because the measurements were taken at different times of the year. The time variable will tell us.<br/> We create a plot of time versus temperature (first measurement of each profile).
dates = num2date(time, units=time_units)

fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(dates, temperature[:,0], 'ko')
fig.autofmt_xdate()
plt.ylabel("%s (%s)" % (temperature_name, temperature_units))
plt.show()
The graph confirms that we have data obtained during different periods:
* July-August 2010,
* December 2010.

T-S diagram. The x and y labels for the plot are taken directly from the netCDF variable attributes.
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(temperature, salinity, 'ko', markersize=2)
plt.xlabel("%s (%s)" % (temperature_name, temperature_units))
plt.ylabel("%s (%s)" % (salinity_name, salinity_units))
plt.ylim(32, 36)
plt.grid()
plt.show()
3-D plot We illustrate with a simple example how to have a 3-dimensional representation of the profiles.<br/> First we import the required modules.
from mpl_toolkits.mplot3d import Axes3D
Then the plot is easily obtained by specifying the coordinates (x, y, z) and the variables (salinity) to be plotted.
cmap = plt.cm.Spectral_r
norm = colors.Normalize(vmin=32, vmax=36)

fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
for ntime in range(0, nprofiles):
    ax.scatter(lon[ntime]*np.ones(ndepths), lat[ntime]*np.ones(ndepths),
               zs=-depth[ntime,:], zdir='z', s=20, c=salinity[ntime...
To get started we import pymrio
import pymrio
doc/source/notebooks/advanced_group_stressors.ipynb
konstantinstadler/pymrio
gpl-3.0
For the example here, we use the data from 2009:
wiod09 = pymrio.parse_wiod(path=wiod_folder, year=2009)
WIOD includes multiple material accounts, specified for the "Used" and "Unused" categories, as well as information on the total. We will use the latter to confirm our calculations:
wiod09.mat.F
To aggregate these with the pandas groupby function, we need to specify which rows belong to which group. Pymrio contains a helper function which builds such a matching dictionary. The matching can also use regular expressions to simplify the build:
groups = wiod09.mat.get_index(as_dict=True,
                              grouping_pattern={'.*_Used': 'Material Used',
                                                '.*_Unused': 'Material Unused'})
groups
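The grouping-pattern idea can be sketched in plain Python: map each index label to a group name via regular-expression matching, leaving unmatched labels as their own group. This is a hypothetical re-creation of the behavior (names and matching semantics assumed), not pymrio's actual code:

```python
import re

def apply_grouping(index, grouping_pattern):
    """Map each label to the first group whose regex fully matches it."""
    groups = {}
    for name in index:
        for pattern, group in grouping_pattern.items():
            if re.fullmatch(pattern, name):
                groups[name] = group
                break
        else:
            groups[name] = name  # unmatched labels keep their own name
    return groups

index = ['Iron_Used', 'Iron_Unused', 'Water']
pattern = {'.*_Used': 'Material Used', '.*_Unused': 'Material Unused'}
print(apply_grouping(index, pattern))
```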
Note that the grouping contains the rows which do not match any of the specified groups. This makes it easy to aggregate only parts of a specific stressor set. To omit these groups, include them in the matching pattern and provide None as the value. To have the aggregated data alongside the original data, we fir...
wiod09.mat_agg = wiod09.mat.copy(new_name='Aggregated material accounts')
Then we use the pymrio get_DataFrame iterator together with the pandas groupby and sum functions to aggregate the stressors. For the dataframe containing the unit information, we pass a custom function which concatenates non-unique unit strings.
for df_name, df in zip(wiod09.mat_agg.get_DataFrame(data=False, with_unit=True, with_population=False),
                       wiod09.mat_agg.get_DataFrame(data=True, with_unit=True, with_population=False)):
    if df_name == 'unit':
        wiod09.mat_agg.__dict__[df_name] = df.groupby(groups).apply(lambda x: ' & '.jo...
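The aggregation step itself can be illustrated with a tiny self-contained pandas example (made-up numbers and labels, not the WIOD data): data rows are summed within each group, while the unit strings for a group are concatenated with ' & '.

```python
import pandas as pd

# toy stressor data and units, indexed like a satellite account
F = pd.DataFrame({'reg1': [3.0, 2.0, 1.0]},
                 index=['Iron_Used', 'Iron_Unused', 'Water'])
unit = pd.Series(['kt', 'kt', 'Mm3'], index=F.index, name='unit')
groups = {'Iron_Used': 'Material Used',
          'Iron_Unused': 'Material Unused',
          'Water': 'Water'}

# numeric data: sum within each group
F_agg = F.groupby(groups).sum()
# units: concatenate the unique unit strings per group
unit_agg = unit.groupby(groups).apply(lambda x: ' & '.join(x.unique()))
print(F_agg)
print(unit_agg)
```

With a single stressor per group, as here, the unit column is unchanged; the join only matters when a group mixes units.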
Use with stressors including compartment information: The same regular-expression grouping can be used to aggregate stressor data which is given per compartment. To do so, the matching dict needs to consist of tuples corresponding to a valid index value in the DataFrames. Each position in the tuple is interpreted as ...
tt = pymrio.load_test()
tt.emissions.get_index(as_dict=True)
With that information, we can now build our own grouping dict, e.g.:
agg_groups = {('emis.*', '.*'): 'all emissions'}
group_dict = tt.emissions.get_index(as_dict=True, grouping_pattern=agg_groups)
group_dict
Which can then be used to aggregate the satellite account:
for df_name, df in zip(tt.emissions.get_DataFrame(data=False, with_unit=True, with_population=False),
                       tt.emissions.get_DataFrame(data=True, with_unit=True, with_population=False)):
    if df_name == 'unit':
        tt.emissions.__dict__[df_name] = df.groupby(group_dict).apply(lambda x: ' & '.join...
In this case we lose the information on the compartment. To reset the index, do:
import pandas as pd

tt.emissions.set_index(pd.Index(tt.emissions.get_index(), name='stressor'))
tt.emissions.F
To get some intuition about the data, let's plot the 100 labelled books, using the counts of the words "laser" and "love" as the x and y axes:
# commands prefaced by a % in Jupyter are called "magic"
# these "magic" commands allow us to do special things only related to jupyter
#   %matplotlib inline - allows one to display charts from the matplotlib library in a notebook
#   %load_ext autoreload - automatically reloads imported modules if they change
#   %autorel...
machine-learning/machine-learning.ipynb
YaleDHLab/lab-workshops
mit
This plot shows each of our 100 labelled books, positioned according to the counts of the words "laser" and "love" in the book, and colored by the book's genre label. Romance books are red; scifi books are blue. As we can see, the two genres appear pretty distinct here, which means we can expect pretty good classificat...
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

# create a KNN classifier using 3 as the value of K
clf = KNeighborsClassifier(3)

# "train" the classifier by showing it our labelled data
clf.fit(X, labels)

# predict the genre label of a new, unlabelled book
clf.predict(np.array([[14.2, 10.3]]))
For each observation we pass as input to <code>clf.predict()</code>, the function returns one label (either 0 or 1). In the snippet above, we pass in only a single observation, so we get only a single label back. The example observation above gets a label 1, which means the model thought this particular book was a work...
from sklearn.neighbors import KNeighborsClassifier

# import some custom helper code
import helpers

# create and train a KNN model
clf = KNeighborsClassifier(3)
clf.fit(X, labels)

# use a helper function to plot the trained classifier's decision boundary
helpers.plot_decision_boundary(clf, X, labels)

# add a title a...
For each pixel in the plot above, we retrieve the 3 closest points with known labels. We then use a majority vote of those labels to assign the label of the pixel. This is exactly analogous to predicting a label for unlabelled point&mdash;in both cases, we take a majority vote of the 3 closest points with known labels....
from IPython.display import IFrame

IFrame(src='https://s3.amazonaws.com/duhaime/blog/visualizations/isolation-forests.html',
       width=700, height=640)
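The majority vote described above can be sketched in a few lines of plain Python, with hypothetical 2-D points standing in for the book counts (this is a toy re-implementation, not scikit-learn's classifier):

```python
from collections import Counter

def knn_predict(X, labels, query, k=3):
    """Label a query point by majority vote of its k nearest neighbours."""
    nearest = sorted(range(len(X)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], query)))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# two well-separated toy clusters
X = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, labels, (7.5, 8.5)))  # 1
```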
If we run the simulation above a number of times, we should see the "outlier" point is consistently isolated quickly, while it usually takes more iterations to isolate the other points. This is the chief intuition behind the Isolation Forests outlier classification strategy&mdash;outliers are isolated quickly because t...
from sklearn.ensemble import IsolationForest
from sklearn.datasets.samples_generator import make_blobs
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

# seed a random number generator for consistent random values
rng = np.random.RandomState(1)

# generate 100 "training" data obs...
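The isolation intuition can be made concrete with a deterministic toy: repeatedly split the value range and count how many splits it takes until a point sits alone. This sketch uses midpoint splits instead of the random splits an Isolation Forest actually uses, so the numbers are illustrative only:

```python
def isolation_depth(values, target, depth=0):
    """Count midpoint splits until `target` is the only value left on its side."""
    if len(values) == 1:
        return depth
    mid = (min(values) + max(values)) / 2
    side = [v for v in values if (v <= mid) == (target <= mid)]
    return isolation_depth(side, target, depth + 1)

data = [1.0, 1.1, 1.2, 1.3, 10.0]  # 10.0 is the outlier
print(isolation_depth(data, 10.0))  # 1: isolated by the very first split
print(isolation_depth(data, 1.1))   # takes several more splits
```

The outlier's large gap from the rest of the data means the first split already separates it, mirroring the short average path lengths Isolation Forests assign to anomalies.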
In just a few lines of code, we can create, train, and deploy a machine learning model for detecting outliers in high-dimensional data! Dimension Reduction So far we've seen data with observations in two dimensions (the scifi vs. romance books example) and observations in 50 dimensions (the word vector example). While ...
import pandas as pd
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import CountVectorizer
from nltk import ngrams
from requests import get

def get_passages(url, chunk_size=1000):
    text = BeautifulSoup(
        get(url).text, 'html.parser'
    ).get_text().lower()
    words = ''.join([c for c in text if c.is...
Now that we have represented each passage of 1000 words with a high-dimensional vector, let's project those vectors down into two dimensions to visualize the similarity between our three author's styles:
from matplotlib.lines import Line2D
from umap import UMAP

X = vec.fit_transform(austen + dickens + mystery).toarray()
projected = UMAP(random_state=2).fit_transform(X)

labels = ['green' for i in range(len(austen))] + \
         ['orange' for i in range(len(dickens))] + \
         ['purple' for i in range(len(mystery...
As we can see, the new points in purple have strong overlap with the green points, suggesting that the mystery author has a style quite similar to that of Austen. There's a good reason for that&mdash;the purple text is <i>Pride and Prejudice and Zombies</i>, which adapts the language and plot of Jane Austen's classic n...
from zipfile import ZipFile
from collections import defaultdict
from urllib.request import urlretrieve
import numpy as np
import json, os, codecs

# download the vector files we'll use
if not os.path.exists('glove.6B.50d.txt'):
    urlretrieve('http://nlp.stanford.edu/data/glove.6B.zip', 'glove.6B.zip')

# unzip the dow...
As we can see above, <code>words</code> is just a list of words. For each of those words, <code>vectors</code> contains a corresponding 50-dimensional vector (or list of 50 numbers). Those vectors indicate the semantic meaning of a word. In other words, if the English language were a 50 dimensional vector space, each w...
from sklearn.cluster import KMeans

# cluster the word vectors
kmeans = KMeans(n_clusters=20, random_state=0).fit(np.array(vectors))

# `kmeans.labels_` is an array whose `i-th` member identifies the group to which
# the `i-th` word in `words` is assigned
groups = defaultdict(list)
for idx, i in enumerate(kmeans.labels_...
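The grouping step above, inverting `kmeans.labels_` into cluster-to-words lists, can be sketched with toy stand-in labels (made-up words and cluster ids, not real KMeans output):

```python
from collections import defaultdict

words = ['cat', 'dog', 'car', 'bus']
cluster_ids = [0, 0, 1, 1]  # stand-in for kmeans.labels_

# invert the per-word cluster ids into cluster -> list of words
groups = defaultdict(list)
for word, cid in zip(words, cluster_ids):
    groups[cid].append(word)
print(dict(groups))  # {0: ['cat', 'dog'], 1: ['car', 'bus']}
```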
Monitoring with transformations. Very often it is useful to apply specific transformations to the state variables before applying the observation model of a monitor. Additionally, it can be useful to apply other transformations to the monitor's output. The pre_expr and post_expr attributes of the Monitor classes allow fo...
sim = simulator.Simulator(
    model=models.Generic2dOscillator(),
    connectivity=connectivity.Connectivity(load_default=True),
    coupling=coupling.Linear(),
    integrator=integrators.EulerDeterministic(),
    monitors=Raw(pre_expr='V;W;V**2;W-V', post_expr=';;sin(mon);exp(mon)'))
sim.configure()

ts, ys = [], []...
tvb/simulator/demos/Monitoring with transformations.ipynb
rajul/tvb-library
gpl-2.0
Plotting the results demonstrates the effect of the transformations of the state variables through the monitor. Here, a Raw monitor was used to make the effects clear, but the pre- and post-expressions can be provided to any of the Monitors.
figure(figsize=(7, 5), dpi=600)

subplot(311)
plot(t, v[:, 0, 0], 'k')
plot(t, w[:, 0, 0], 'k')
ylabel('$V(t), W(t)$')
grid(True, axis='x')
xticks(xticks()[0], [])

subplot(312)
plot(t, sv2[:, 0, 0], 'k')
ylabel('$\\sin(G(V^2(t)))$')
grid(True, axis='x')
xticks(xticks()[0], [])

subplot(313)
plot(t, ewmv[:, 0, 0], 'k')...
Load Data
import knifey
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The data dimensions have already been defined in the knifey module, so we just need to import the ones we need.
from knifey import img_size, img_size_flat, img_shape, num_classes, num_channels
Set the directory for storing the data-set on your computer.
# knifey.data_dir = "data/knifey-spoony/"
The Knifey-Spoony data-set is about 22 MB and will be downloaded automatically if it is not located in the given path.
knifey.maybe_download_and_extract()
Now load the data-set. This scans the sub-directories for all *.jpg images and puts the filenames into two lists for the training-set and test-set. This does not actually load the images.
dataset = knifey.load()
Get the class-names.
class_names = dataset.class_names class_names
Training and Test-Sets. This function returns the file-paths for the images, the class-numbers as integers, and the class-numbers as one-hot encoded arrays called labels. In this tutorial we will actually use the integer class-numbers and call them labels. This may be a little confusing, but you can always add print-stat...
image_paths_train, cls_train, labels_train = dataset.get_training_set()
Print the first image-path to see if it looks OK.
image_paths_train[0]
Get the test-set.
image_paths_test, cls_test, labels_test = dataset.get_test_set()
Print the first image-path to see if it looks OK.
image_paths_test[0]
The Knifey-Spoony data-set has now been loaded and consists of 4700 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
print("Size of:")
print("- Training-set:\t\t{}".format(len(image_paths_train)))
print("- Test-set:\t\t{}".format(len(image_paths_test)))
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None, smooth=True):
    assert len(images) == len(cls_true)

    # Create figure with sub-plots.
    fig, axes = plt.subplots(3, 3)

    # Adjust vertical spacing.
    if cls_pred is None:
        hspace = 0.3
    else:
        hspace = 0.6
    fig.subplots_adjust(hspace=hspa...
Helper-function for loading images This dataset does not load the actual images, instead it has a list of the images in the training-set and another list for the images in the test-set. This helper-function loads some image-files.
def load_images(image_paths):
    # Load the images from disk.
    images = [imread(path) for path in image_paths]

    # Convert to a numpy array and return it.
    return np.asarray(images)
Plot a few images to see if data is correct
# Load the first images from the test-set.
images = load_images(image_paths=image_paths_test[0:9])

# Get the true classes for those images.
cls_true = cls_test[0:9]

# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=True)
Create TFRecords TFRecords is the binary file-format used internally in TensorFlow which allows for high-performance reading and processing of datasets. For this small dataset we will just create one TFRecords file for the training-set and another for the test-set. But if your dataset is very large then you can split i...
path_tfrecords_train = os.path.join(knifey.data_dir, "train.tfrecords")
path_tfrecords_train
File-path for the TFRecords file holding the test-set.
path_tfrecords_test = os.path.join(knifey.data_dir, "test.tfrecords")
path_tfrecords_test
Helper-function for printing the conversion progress.
def print_progress(count, total):
    # Percentage completion.
    pct_complete = float(count) / total

    # Status-message.
    # Note the \r which means the line should overwrite itself.
    msg = "\r- Progress: {0:.1%}".format(pct_complete)

    # Print it.
    sys.stdout.write(msg)
    sys.stdout.flush()
Helper-function for wrapping an integer so it can be saved to the TFRecords file.
def wrap_int64(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
Helper-function for wrapping raw bytes so they can be saved to the TFRecords file.
def wrap_bytes(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
This is the function for reading images from disk and writing them along with the class-labels to a TFRecords file. This loads and decodes the images to numpy-arrays and then stores the raw bytes in the TFRecords file. If the original image-files are compressed e.g. as jpeg-files, then the TFRecords file may be many ti...
def convert(image_paths, labels, out_path):
    # Args:
    #   image_paths   List of file-paths for the images.
    #   labels        Class-labels for the images.
    #   out_path      File-path for the TFRecords output file.

    print("Converting: " + out_path)

    # Number of images. Used when printing the progr...
Note the 4 function calls required to write the data-dict to the TFRecords file. In the original code-example from the Google Developers, these 4 function calls were actually nested. The design-philosophy for TensorFlow generally seems to be: If one function call is good, then 4 function calls are 4 times as good, and ...
convert(image_paths=image_paths_train, labels=cls_train, out_path=path_tfrecords_train)
Convert the test-set to a TFRecords-file:
convert(image_paths=image_paths_test, labels=cls_test, out_path=path_tfrecords_test)
Input Functions for the Estimator The TFRecords files contain the data in a serialized binary format which needs to be converted back to images and labels of the correct data-type. We use a helper-function for this parsing:
def parse(serialized):
    # Define a dict with the data-names and types we expect to
    # find in the TFRecords file.
    # It is a bit awkward that this needs to be specified again,
    # because it could have been written in the header of the
    # TFRecords file instead.
    features = \
        {
            'ima...
Helper-function for creating an input-function that reads from TFRecords files for use with the Estimator API.
def input_fn(filenames, train, batch_size=32, buffer_size=2048):
    # Args:
    #   filenames:   Filenames for the TFRecords files.
    #   train:       Boolean whether training (True) or testing (False).
    #   batch_size:  Return batches of this size.
    #   buffer_size: Read buffers of this size. The random shuffling ...
This is the input-function for the training-set for use with the Estimator API:
def train_input_fn():
    return input_fn(filenames=path_tfrecords_train, train=True)
This is the input-function for the test-set for use with the Estimator API:
def test_input_fn():
    return input_fn(filenames=path_tfrecords_test, train=False)
Input Function for Predicting on New Images An input-function is also needed for predicting the class of new data. As an example we just use a few images from the test-set. You could load any images you want here. Make sure they are the same dimensions as expected by the TensorFlow model, otherwise you need to resize t...
some_images = load_images(image_paths=image_paths_test[0:9])
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
These images are now stored as numpy arrays in memory, so we can use the standard input-function for the Estimator API. Note that the images are loaded as uint8 data, but they must be input to the TensorFlow graph as floats, so we do a type-cast.
predict_input_fn = tf.estimator.inputs.numpy_input_fn( x={"image": some_images.astype(np.float32)}, num_epochs=1, shuffle=False)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
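The uint8-to-float32 cast mentioned above can be illustrated with plain NumPy; the tiny array below is just a stand-in for the loaded images:

```python
import numpy as np

# A stand-in for images loaded as 8-bit pixel data.
some_images = np.array([[0, 127, 255]], dtype=np.uint8)

# The graph expects floats, so cast before building the input-function.
as_floats = some_images.astype(np.float32)
```

The cast changes only the dtype; the pixel values themselves are preserved.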
The class-numbers are not actually used in the input-function, as they are not needed for prediction. However, the true class-numbers are needed when we plot the images further below.
some_images_cls = cls_test[0:9]
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Pre-Made / Canned Estimator When using a pre-made Estimator, we need to specify the input features for the data. In this case we want to input images from our data-set which are numeric arrays of the given shape.
feature_image = tf.feature_column.numeric_column("image", shape=img_shape)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
You can have several input features which would then be combined in a list:
feature_columns = [feature_image]
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
In this example we want to use a 3-layer DNN with 512, 256 and 128 units respectively.
num_hidden_units = [512, 256, 128]
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
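In a fully-connected network like this, each layer of `n_out` units adds `n_in * n_out` weights plus `n_out` biases. A small helper makes the layer sizes concrete; note that the flattened input size of 100 below is a hypothetical stand-in, not the actual size implied by `img_shape`:

```python
def dnn_param_count(n_inputs, hidden_units, n_classes):
    """Count weights and biases in a fully-connected classifier."""
    total, n_in = 0, n_inputs
    for n_out in list(hidden_units) + [n_classes]:
        total += n_in * n_out + n_out  # weight matrix + bias vector
        n_in = n_out
    return total

# Hypothetical flattened input size; the real one depends on img_shape.
params = dnn_param_count(n_inputs=100, hidden_units=[512, 256, 128], n_classes=3)
```

Most of the parameters sit in the first layer, so the input size dominates the model's memory footprint.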
The DNNClassifier then constructs the neural network for us. We can also specify the activation function and various other parameters (see the docs). Here we just specify the number of classes and the directory where the checkpoints will be saved.
model = tf.estimator.DNNClassifier(feature_columns=feature_columns, hidden_units=num_hidden_units, activation_fn=tf.nn.relu, n_classes=num_classes, model_dir="./checkpoints_tutoria...
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Training We can now train the model for a given number of iterations. This automatically loads and saves checkpoints so we can continue the training later.
model.train(input_fn=train_input_fn, steps=200)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Evaluation Once the model has been trained, we can evaluate its performance on the test-set.
result = model.evaluate(input_fn=test_input_fn) result print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Predictions The trained model can also be used to make predictions on new data. Note that the TensorFlow graph is recreated and the checkpoint is reloaded every time we make predictions on new data. If the model is very large then this could add a significant overhead. It is unclear why the Estimator is designed this way.
predictions = model.predict(input_fn=predict_input_fn) cls = [p['classes'] for p in predictions] cls_pred = np.array(cls, dtype='int').squeeze() cls_pred plot_images(images=some_images, cls_true=some_images_cls, cls_pred=cls_pred)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Predictions for the Entire Test-Set It appears that the model may be classifying all images as 'spoony'. So let us look at the predictions for the entire test-set. We can do this simply by using its input-function:
predictions = model.predict(input_fn=test_input_fn) cls = [p['classes'] for p in predictions] cls_pred = np.array(cls, dtype='int').squeeze()
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The test-set contains 530 images in total and they have all been predicted as class 2 (spoony). So this model does not work at all for classifying the Knifey-Spoony dataset.
np.sum(cls_pred == 2)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
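A quick way to see whether the predictions have collapsed onto a single class is to count the occurrences of each class-number, e.g. with `np.bincount`. The `cls_pred` array below is fabricated to imitate the collapsed model described above:

```python
import numpy as np

# Fabricated predictions imitating a collapsed model: everything is class 2.
cls_pred = np.full(530, 2, dtype=int)

# One count per class-number 0, 1, 2.
counts = np.bincount(cls_pred, minlength=3)
```

If the counts are as lopsided as here, the model has learned nothing useful about the class boundaries.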
New Estimator If you cannot use one of the built-in Estimators, then you can create an arbitrary TensorFlow model yourself. To do this, you first need to create a function which defines the following: The TensorFlow model, e.g. a Convolutional Neural Network. The output of the model. The loss-function used to improve ...
def model_fn(features, labels, mode, params): # Args: # # features: This is the x-arg from the input_fn. # labels: This is the y-arg from the input_fn. # mode: Either TRAIN, EVAL, or PREDICT # params: User-defined hyper-parameters, e.g. learning-rate. # Reference to the tensor n...
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
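The `mode` argument determines which parts of the model function are exercised. A framework-free sketch of this dispatch — using plain strings in place of `tf.estimator.ModeKeys`, dicts in place of `EstimatorSpec`, and a toy "model" — looks roughly like this:

```python
def model_fn(features, labels, mode, params):
    """Sketch of an Estimator-style model function: build the model,
    then return only what the current mode needs."""
    # Stand-in for the network output: one fake logit per example.
    logits = [sum(f) * params["learning_rate"] for f in features]
    predictions = [1 if z > 0 else 0 for z in logits]

    if mode == "predict":
        return {"predictions": predictions}

    # TRAIN and EVAL both need a loss; here a toy 0/1 error count.
    loss = sum(p != y for p, y in zip(predictions, labels))
    if mode == "eval":
        return {"loss": loss}
    return {"loss": loss, "train_op": "minimize(loss)"}  # mode == "train"

spec = model_fn([[1.0, 2.0], [-3.0, 1.0]], [1, 0], "train",
                {"learning_rate": 1e-4})
```

The key point is that prediction skips the loss entirely, which is why `labels` may be absent in that mode.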
Create an Instance of the Estimator We can specify hyper-parameters e.g. for the learning-rate of the optimizer.
params = {"learning_rate": 1e-4}
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
We can then create an instance of the new Estimator. Note that we don't provide feature-columns here, as they are inferred automatically from the data-functions when model_fn() is called. It is unclear from the TensorFlow documentation why it is necessary to specify the feature-columns when using DNNClassifier in the example above.
model = tf.estimator.Estimator(model_fn=model_fn, params=params, model_dir="./checkpoints_tutorial18-2/")
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Training Now that our new Estimator has been created, we can train it.
model.train(input_fn=train_input_fn, steps=200)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Predictions The model can also be used to make predictions on new data.
predictions = model.predict(input_fn=predict_input_fn) cls_pred = np.array(list(predictions)) cls_pred plot_images(images=some_images, cls_true=some_images_cls, cls_pred=cls_pred)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Predictions for the Entire Test-Set To get the predicted classes for the entire test-set, we just use its input-function:
predictions = model.predict(input_fn=test_input_fn) cls_pred = np.array(list(predictions)) cls_pred
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The Convolutional Neural Network predicts different classes for the images, although most have just been classified as 0 (forky), so the accuracy is horrible.
np.sum(cls_pred == 0) np.sum(cls_pred == 1) np.sum(cls_pred == 2)
18_TFRecords_Dataset_API.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
From the 1st module. Test: each draw from the deck results in a value between 1 and 10 (uniformly distributed)
import matplotlib.pyplot as plt %matplotlib notebook plt.figure(1) values = [] for i in range(0,100000): values.append(Card().absolute_value) # values.append(random.randint(1,10)) plt.title('Test; Each draw from the deck results in a value between 1 and 10 (uniformly distributed)') plt.hist(values) ...
Joe #2 Monte-Carlo Control in Easy21/easy21 tests.ipynb
analog-rl/Easy21
mit
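Without the notebook's Card class, the same uniformity check can be sketched with the standard library: draw many values uniformly from 1 to 10 and inspect the empirical frequencies (the 0.02 tolerance per bucket is an arbitrary choice for this sketch):

```python
import random
from collections import Counter

rng = random.Random(42)
draws = [rng.randint(1, 10) for _ in range(100_000)]
freqs = Counter(draws)

# Each of the ten values should appear with frequency close to 1/10.
uniform_ok = all(abs(freqs[v] / len(draws) - 0.1) < 0.02
                 for v in range(1, 11))
```

A histogram of `draws`, as in the cell above, should come out essentially flat.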
Each draw from the deck results in a colour of red (probability 1/3) or black (probability 2/3).
import matplotlib.pyplot as plt %matplotlib notebook plt.figure(2) values = [] for i in range(0,100000): if (Card().is_black): values.append(0.6666666) else: values.append(0.3333333) plt.title('Test; red (probability 1/3) or black (probability 2/3)') plt.hist(values) # , c='g', s=...
Joe #2 Monte-Carlo Control in Easy21/easy21 tests.ipynb
analog-rl/Easy21
mit
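The colour rule can likewise be checked without the Card class by sampling colours directly — red with probability 1/3, black otherwise — and comparing the observed fraction against the expectation (the 0.02 tolerance is again an arbitrary choice):

```python
import random

rng = random.Random(0)
n = 100_000
# Draw a colour per card: red with probability 1/3, black otherwise.
reds = sum(rng.random() < 1 / 3 for _ in range(n))

red_fraction = reds / n
```

With this many draws the observed red fraction should sit very close to 1/3.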
Test: If the player’s sum exceeds 21, or becomes less than 1, then she “goes bust” and loses the game (reward -1)
def play_test_player_bust(): s = State(Card(True),Card(True)) a = Actions.hit e = Environment() while not s.term: s, r = e.step(s, a) # print ("state = %s, %s, %s" % (s.player, s.dealer, s.term)) return s, r import matplotlib.pyplot as plt %matplotlib notebook plt.figure(3) va...
Joe #2 Monte-Carlo Control in Easy21/easy21 tests.ipynb
analog-rl/Easy21
mit
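The bust condition being tested — a sum above 21 or below 1 — can be captured in a small predicate. This is a restatement of the rule for illustration, not the notebook's Environment code:

```python
def is_bust(player_sum):
    """Easy21 bust rule: a player loses if the sum leaves [1, 21]."""
    return player_sum > 21 or player_sum < 1

# Boundary cases: 0 and 22 are bust, 1 and 21 are not.
results = [is_bust(s) for s in (0, 1, 21, 22)]
```

Checking the boundaries explicitly guards against the common off-by-one mistake of treating 1 or 21 as bust.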
Test: If the player sticks then the dealer starts taking turns. The dealer always sticks on any sum of 17 or greater, and hits otherwise. If the dealer goes bust, then the player wins; otherwise, the outcome – win (reward +1), lose (reward -1), or draw (reward 0) – is decided by which player has the largest sum.
def play_test_player_stick(): s = State(Card(True),Card(True)) e = Environment() a = Actions.stick while not s.term: s, r = e.step(s, a) # print ("state = %s, %s, %s" % (s.player, s.dealer, s.term)) return s, r import matplotlib.pyplot as plt %matplotlib not...
Joe #2 Monte-Carlo Control in Easy21/easy21 tests.ipynb
analog-rl/Easy21
mit
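The dealer's fixed policy — hit below 17, stick at 17 or above — can be simulated independently of the notebook's Environment class. Per the Easy21 rules, black cards add their value and red cards subtract it, with black twice as likely; the simulation below is a sketch of that rule, not the notebook's implementation:

```python
import random

def dealer_turn(start, rng):
    """Play out the dealer: hit until the sum reaches 17 or goes bust."""
    total = start
    while 1 <= total < 17:
        value = rng.randint(1, 10)
        # Black (probability 2/3) adds; red (probability 1/3) subtracts.
        total += value if rng.random() < 2 / 3 else -value
        if total > 21 or total < 1:
            return total  # bust
    return total

rng = random.Random(7)
finals = [dealer_turn(rng.randint(1, 10), rng) for _ in range(1000)]
```

Every simulated game should therefore end with the dealer either bust or holding a sum in [17, 21].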