| markdown | code | path | repo_name | license |
|---|---|---|---|---|
In such a case, the condition evaluates to False and the print call included in the indented statement is simply skipped. However, showing (or rather creating) no output at all is not always desirable, and in the majority of use cases, there will definitely be a pool of two or more possibilities that need to be taken i... | if n > 0:
print("Larger than zero.")
else:
print("Smaller than or equal to zero.") | docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb | marburg-open-courseware/gmoc | mit |
At this point, be aware that the lines including if and else are not indented, whereas the related code bodies are. Due to the black-and-white nature of such an if-else statement, exactly one out of two possible blocks is executed. Starting from the top,
the if statement is evaluated and returns False (because n is n... | if n > 0:
print("Larger than zero.")
elif n < 0:
print("Smaller than zero.")
else:
print("Exactly zero.") | docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb | marburg-open-courseware/gmoc | mit |
And similarly, | p = 0
if p > 0:
print("Larger than zero.")
elif p < 0:
print("Smaller than zero.")
else:
print("Exactly zero.") | docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb | marburg-open-courseware/gmoc | mit |
Of course, indented blocks can have more than one statement, i.e. consist of multiple indented lines of code. In addition, they can embrace, or be embraced by, for or while loops. For example, if we wanted to count all the non-negative entries in a list, the following code snippet would be a proper solution that relies... | x = [0, 3, -6, -2, 7, 1, -4]
## set a counter
n = 0
for i in range(len(x)):
# if a non-negative integer is found, increment the counter by 1
if x[i] >= 0:
print("The value at position", i, "is larger than or equal to zero.")
n += 1
# else do not increment the counter
else:
... | docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb | marburg-open-courseware/gmoc | mit |
<hr>
Brief digression: continue and break
There are (at least) two keywords that allow for an even finer control of what happens inside a for loop, viz.
continue and
break.
As the name implies, continue moves directly on to the next iteration of a loop without executing the remaining code body. | for i in range(5):
if i in [1, 3]:
continue
print(i) | docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb | marburg-open-courseware/gmoc | mit |
break, on the other hand, breaks out of the innermost loop. Here, neither (i) the remaining code body following the break statement in the current iteration nor (ii) any outstanding iterations are executed. | for i in range(5):
if i == 2:
break
print(i) | docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb | marburg-open-courseware/gmoc | mit |
Breadth-First Search
Breadth-first search (BFS) is an algorithm that can find the closest members in a graph that match a certain search criterion.
BFS requires that we model our problem as a graph (nodes connected through edges). BFS can be applied to directed and undirected graphs, where it can be used to answer typ... | graph = {}
graph['You'] = ['Elijah', 'Marissa', 'Nikolai', 'Cassidy']
graph['Elijah'] = ['You']
graph['Marissa'] = ['You']
graph['Nikolai'] = ['John', 'Thomas', 'You']
graph['Cassidy'] = ['John', 'You']
graph['John'] = ['Cassidy', 'Nikolai']
graph['Thomas'] = ['Nikolai', 'Mario']
graph['Mario'] = ['Thomas'] | ipython_nbs/search/breadth-first-search.ipynb | rasbt/algorithms_in_ipython_notebooks | gpl-3.0 |
The Queue data structure
Next, let's set up a simple queue data structure. Of course, we could also use a regular Python list like a queue (using .insert(0, x) and .pop()), but this way, our breadth-first search implementation is perhaps more illustrative. For more information about queues, please see the Queues and Deques n... | class QueueItem():
def __init__(self, value, pointer=None):
self.value = value
self.pointer = pointer
class Queue():
def __init__(self):
self.last = None
self.first = None
self.length = 0
def enqueue(self, value):
item = QueueItem(value, None)
if... | ipython_nbs/search/breadth-first-search.ipynb | rasbt/algorithms_in_ipython_notebooks | gpl-3.0 |
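The enqueue and dequeue bodies are cut off in the dump above, so here is a minimal linked-list queue sketch along the same lines; it is a hypothetical completion, not necessarily the author's exact code:

```python
class QueueItem():
    def __init__(self, value, pointer=None):
        self.value = value      # payload stored in this node
        self.pointer = pointer  # link to the next node in the queue

class Queue():
    def __init__(self):
        self.last = None
        self.first = None
        self.length = 0

    def enqueue(self, value):
        # append a new item at the tail of the linked list
        item = QueueItem(value, None)
        if self.last is not None:
            self.last.pointer = item
        else:
            self.first = item
        self.last = item
        self.length += 1

    def dequeue(self):
        # remove and return the value at the head of the linked list
        if self.first is None:
            return None
        item = self.first
        self.first = item.pointer
        if self.first is None:
            self.last = None
        self.length -= 1
        return item.value
```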
Implementing breadth-first search to find the shortest path
Now, back to the graph, where we want to identify the closest connection that owns a truck, which can be helpful for moving (if we are allowed to borrow it, that is):
<img src="images/breadth-first-search/friend-graph-2.jpg" alt="" style="width: 600px;"/> | graph = {}
graph['You'] = ['Elijah', 'Marissa', 'Nikolai', 'Cassidy']
graph['Elijah'] = ['You']
graph['Marissa'] = ['You']
graph['Nikolai'] = ['John', 'Thomas', 'You']
graph['Cassidy'] = ['John', 'You']
graph['John'] = ['Cassidy', 'Nikolai']
graph['Thomas'] = ['Nikolai', 'Mario']
graph['Mario'] = ['Thomas'] | ipython_nbs/search/breadth-first-search.ipynb | rasbt/algorithms_in_ipython_notebooks | gpl-3.0 |
For simplicity, let's assume we have a function that checks whether a person owns a pick-up truck. (Say, Mario owns a pick-up truck; the check function knows it, but we don't.) | def has_truck(person):
if person == 'Mario':
return True
else:
return False | ipython_nbs/search/breadth-first-search.ipynb | rasbt/algorithms_in_ipython_notebooks | gpl-3.0 |
Now, the breadth_first_search implementation below will check our closest neighbors first; then it will check the neighbors of our neighbors, and so forth. We will make use of both the graph we constructed and the Queue data structure that we implemented. Also, note that we are keeping track of people we already checke... | def breadth_first_search(graph):
# initialize queue
queue = Queue()
for person in graph['You']:
queue.enqueue(person)
people_checked = set()
degree = 0
while queue.length:
person = queue.dequeue()
if has_truck(person):
return person
el... | ipython_nbs/search/breadth-first-search.ipynb | rasbt/algorithms_in_ipython_notebooks | gpl-3.0 |
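Since the function body is truncated above, here is a hedged sketch of a complete version; the `degree` bookkeeping from the original is omitted, and the visited-set handling is my assumption about how the author avoids re-checking people:

```python
def breadth_first_search(graph):
    # seed the queue with our direct connections
    queue = Queue()
    for person in graph['You']:
        queue.enqueue(person)

    people_checked = set()
    while queue.length:
        person = queue.dequeue()
        if person in people_checked:
            continue  # already examined via a shorter path
        if has_truck(person):
            return person
        people_checked.add(person)
        # enqueue this person's neighbors for the next "ring" of the search
        for neighbor in graph[person]:
            queue.enqueue(neighbor)
    return None

print(breadth_first_search(graph))  # expected: 'Mario'
```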
Length range
Print out the gene names for all genes between 90 and 110 bases long. | import csv
with open('data.csv') as csvfile:
raw_data = csv.reader(csvfile)
for row in raw_data:
if len(row[1]) >= 90 and len(row[1]) <= 110:  # 'and', not 'or': both bounds must hold
print(row[2]) | Week_06/Week06 - 01 - Homework Solutions.ipynb | biof-309-python/BIOF309-2016-Fall | mit |
AT content
Print out the gene names for all genes whose AT content is less than 0.5 and whose expression level is greater than 200. | def is_at_rich(dna):
length = len(dna)
a_count = dna.upper().count('A')
t_count = dna.upper().count('T')
at_content = (a_count + t_count) / length
return at_content < 0.5
import csv
with open('data.csv') as csvfile:
raw_data = csv.reader(csvfile)
for row in raw_data:
if is_at_rich(r... | Week_06/Week06 - 01 - Homework Solutions.ipynb | biof-309-python/BIOF309-2016-Fall | mit |
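The loop above is cut off; a hedged completion follows. I am assuming the expression level lives in column 3 of data.csv (species, sequence, gene name, expression level), which matches the columns used elsewhere in these solutions but is not confirmed by the truncated text:

```python
import csv

with open('data.csv') as csvfile:
    raw_data = csv.reader(csvfile)
    for row in raw_data:
        # row[1]: DNA sequence, row[2]: gene name, row[3]: expression level (assumed)
        if is_at_rich(row[1]) and int(row[3]) > 200:
            print(row[2])
```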
Complex condition
Print out the gene names for all genes whose name begins with “k” or “h” except those belonging to Drosophila melanogaster. | import csv
with open('data.csv') as csvfile:
raw_data = csv.reader(csvfile)
for row in raw_data:
if (row[2].startswith('k') or row[2].startswith('h')) and row[0] != 'Drosophila melanogaster':
print(row[2]) | Week_06/Week06 - 01 - Homework Solutions.ipynb | biof-309-python/BIOF309-2016-Fall | mit |
High low medium
For each gene, print out a message giving the gene name and saying whether its AT content is high (greater than 0.65), low (less than 0.45) or medium (between 0.45 and 0.65). | def at_percentage(dna):
length = len(dna)
a_count = dna.upper().count('A')
t_count = dna.upper().count('T')
at_content = (a_count + t_count) / length
return at_content
import csv
with open('data.csv') as csvfile:
raw_data = csv.reader(csvfile)
for row in raw_data:
at_percent = at_pe... | Week_06/Week06 - 01 - Homework Solutions.ipynb | biof-309-python/BIOF309-2016-Fall | mit |
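The truncated loop presumably classifies each gene by its AT content; a minimal sketch, assuming the same data.csv layout as above:

```python
import csv

with open('data.csv') as csvfile:
    raw_data = csv.reader(csvfile)
    for row in raw_data:
        at_percent = at_percentage(row[1])
        if at_percent > 0.65:
            level = 'high'
        elif at_percent < 0.45:
            level = 'low'
        else:
            level = 'medium'
        print('AT content for', row[2], 'is', level)
```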
Define the model to be trained | import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_... | site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb | tensorflow/docs-l10n | apache-2.0 |
Set up the remote executor
By default, TFF executes all computations locally. In this step, we instruct TFF to connect to the Kubernetes service we set up above. Be sure to copy your service's IP address here. | import grpc
ip_address = '0.0.0.0' #@param {type:"string"}
port = 80 #@param {type:"integer"}
channels = [grpc.insecure_channel(f'{ip_address}:{port}') for _ in range(10)]
tff.backends.native.set_remote_execution_context(channels) | site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb | tensorflow/docs-l10n | apache-2.0 |
Run the training | evaluate() | site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb | tensorflow/docs-l10n | apache-2.0 |
The toy data created above consists of 4 Gaussian blobs, having 200 points each, centered around the vertices of a rectangle. Let's plot it for convenience. | import matplotlib.pyplot as plt
%matplotlib inline
figure,axis = plt.subplots(1,1)
axis.plot(rectangle[0], rectangle[1], 'o', color='r', markersize=5)
axis.set_xlim(-5,15)
axis.set_ylim(-50,150)
axis.set_title('Toy data : Rectangle')
plt.show() | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
With data at our disposal, it is time to apply KMeans to it using the KMeans class in Shogun. First we construct Shogun features from our data: | train_features = sg.create_features(rectangle) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Next we specify the number of clusters we want and create a distance object specifying the distance metric to be used over our data for our KMeans training: | # number of clusters
k = 2
# distance metric over feature matrix - Euclidean distance
distance = sg.create_distance('EuclideanDistance')
distance.init(train_features, train_features) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Next, we create a KMeans object with our desired inputs/parameters and train: | # KMeans object created
kmeans = sg.create_machine("KMeans", k=k, distance=distance)
# KMeans training
kmeans.train() | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Now that training has been done, let's get the cluster centers and the label for each data point: | # cluster centers
centers = kmeans.get("cluster_centers")
# Labels for data points
result = kmeans.apply() | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Finally let us plot the centers and the data points (in different colours for different clusters): | def plotResult(title = 'KMeans Plot'):
figure,axis = plt.subplots(1,1)
for i in range(totalPoints):
if result.get("labels")[i]==0.0:
axis.plot(rectangle[0,i], rectangle[1,i], 'go', markersize=3)
else:
axis.plot(rectangle[0,i], rectangle[1,i], 'yo', markersize=3)
axis.... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
<b>Note:</b> You might not always get the perfect result. That is an inherent flaw of the KMeans algorithm. In subsequent sections, we will discuss techniques which allow us to counter this.<br>
Now that we have already worked out a simple KMeans implementation, it's time to understand certain specifics of KMeans implement... | initial_centers = np.array([[0.,10.],[50.,50.]])
# initial centers passed
kmeans = sg.create_machine("KMeans", k=k, distance=distance, initial_centers=initial_centers) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Now, let's first get results by repeating the rest of the steps: | # KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get("cluster_centers")
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Hand initialized KMeans Results 1') | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
The other way to initialize centers by hand is as follows: | new_initial_centers = np.array([[5.,5.],[0.,100.]])
# set new initial centers
kmeans.put("initial_centers", new_initial_centers) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Let's complete the rest of the code to get results. | # KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get("cluster_centers")
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Hand initialized KMeans Results 2') | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Note the difference that initial cluster centers can have on the final result.
Initializing using the KMeans++ algorithm
In Shogun, a user can also use the <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">KMeans++ algorithm</a> for center initialization. Using KMeans++ for center initialization is beneficial because it redu... | # set flag for using KMeans++
kmeans = sg.create_machine("KMeans", k=k, distance=distance, kmeanspp=True) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Completing rest of the steps to get result: | # KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get("cluster_centers")
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('KMeans with KMeans++ Results') | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Training Methods
Shogun offers 2 training methods for KMeans clustering:<ul><li><a href='http://en.wikipedia.org/wiki/K-means_clustering#Standard_algorithm'>Classical Lloyd's training</a> (default)</li><li><a href='http://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf'>mini-batch KMeans training</a></li></ul>Lloyd'... | # set training method to mini-batch
kmeans = sg.create_machine("KMeansMiniBatch", k=k, distance=distance) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Completing the code to get results: | # KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get("cluster_centers")
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Mini-batch KMeans Results') | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Applying KMeans on Real Data
In this section we see how useful KMeans can be in classifying the different varieties of the Iris plant. For this purpose, we make use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. There are 3 varieties of ... | with open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as f:
feats = []
# read data from file
for line in f:
words = line.rstrip().split(',')
feats.append([float(i) for i in words[0:4]])
# create observation matrix
obsmatrix = np.array(feats).T
# plot the data
figure,axis = plt.sub... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
In the above plot we see that the data points labelled Iris Sentosa form a nice separate cluster of their own. But in the case of the other 2 varieties, while the data points of the same label do form clusters of their own, there is some mixing between the clusters at the boundary. Now let us apply the KMeans algorithm and see how wel... | def apply_kmeans_iris(data):
# wrap to Shogun features
train_features = sg.create_features(data)
# number of cluster centers = 3
k = 3
# distance function features - euclidean
distance = sg.create_distance('EuclideanDistance')
distance.init(train_features, train_features)
# initialize... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
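The helper above is truncated; reusing only the Shogun calls already demonstrated earlier in this notebook, a plausible completion looks like this (a sketch, not the author's verbatim code):

```python
def apply_kmeans_iris(data):
    # wrap to Shogun features
    train_features = sg.create_features(data)
    # number of cluster centers = 3 (one per Iris variety)
    k = 3
    # Euclidean distance over the feature matrix
    distance = sg.create_distance('EuclideanDistance')
    distance.init(train_features, train_features)
    # train KMeans and collect centers and labels
    kmeans = sg.create_machine("KMeans", k=k, distance=distance)
    kmeans.train(train_features)
    centers = kmeans.get("cluster_centers")
    result = kmeans.apply()
    return centers, result

centers, result = apply_kmeans_iris(obsmatrix)
```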
Now let us create a 2-D plot of the clusters formed making use of the two most important features (petal length and petal width) and compare it with the earlier plot depicting the actual labels of data points. | # plot the clusters over the original points in 2 dimensions
figure,axis = plt.subplots(1,1)
for i in range(150):
if result.get("labels")[i]==0.0:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ro', markersize=5)
elif result.get("labels")[i]==1.0:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'go', markersi... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
From the above plot, it can be inferred that the accuracy of the KMeans algorithm is very high for the Iris dataset. Don't believe me? Alright, then let us make use of one of Shogun's clustering evaluation techniques to formally validate the claim. But before that, we have to label each sample in the dataset with a label corre... | # first 50 are iris sentosa labelled 0, next 50 are iris versicolour labelled 1 and so on
labels = np.concatenate((np.zeros(50),np.ones(50),2.*np.ones(50)),0)
# bind labels assigned to Shogun multiclass labels
ground_truth = sg.create_labels(np.array(labels,dtype='float64')) | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Now we can compute the clustering accuracy making use of the ClusteringAccuracy class in Shogun. | def analyzeResult(result):
# shogun object for clustering accuracy
AccuracyEval = sg.create_evaluation("ClusteringAccuracy")
# evaluates clustering accuracy
accuracy = AccuracyEval.evaluate(result, ground_truth)
# find out which sample points differ from actual labels (or ground truth)
compar... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
In the above plot, wrongly clustered data points are marked in red. We see that the Iris Sentosa plants are perfectly clustered without error. The Iris Versicolour plants and Iris Virginica plants are also clustered with high accuracy, but there are some plant samples of either class that have been clustered with the w... | def apply_pca_to_data(target_dims):
train_features = sg.create_features(obsmatrix)
submean = sg.create_transformer("PruneVarSubMean", divide_by_std=False)
submean.fit(train_features)
submean.transform(train_features)
preprocessor = sg.create_transformer("PCA", target_dim=target_dims)
preprocesso... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Next, let us get an idea of the data in 1-D by plotting it. | figure,axis = plt.subplots(1,1)
# First 50 data belong to Iris Sentosa, plotted in green
axis.plot(oneD_matrix[0,0:50], np.zeros(50), 'go', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(oneD_matrix[0,50:100], np.zeros(50), 'ro', markersize=5)
# Last 50 data belong to Iris Virginica, ... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Now that we have the results, the inevitable step is to check how good these results are. | (diff,accuracy_1d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_1d))
# plot the difference between ground truth and predicted clusters
figure,axis = plt.subplots(1,1)
axis.plot(oneD_matrix[0,:],np.zeros(150),'x',color='black', markersize=5)
axis.plot(oneD_matrix[0,diff],np.zeros(len(diff)),'x',color='r',... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
2-Dimensional Representation
We follow the same steps as above and get the clustering accuracy.
STEP 1 : Apply PCA and plot the data (plotting is optional) | twoD_matrix = apply_pca_to_data(2)
figure,axis = plt.subplots(1,1)
# First 50 data belong to Iris Sentosa, plotted in green
axis.plot(twoD_matrix[0,0:50], twoD_matrix[1,0:50], 'go', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(twoD_matrix[0,50:100], twoD_matrix[1,50:100], 'ro', mar... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
STEP 3: Get the accuracy of the results | (diff,accuracy_2d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_2d))
# plot the difference between ground truth and predicted clusters
figure,axis = plt.subplots(1,1)
axis.plot(twoD_matrix[0,:],twoD_matrix[1,:],'x',color='black', markersize=5)
axis.plot(twoD_matrix[0,diff],twoD_matrix[1,diff],'x',color='... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
STEP 3: Get the accuracy of the results. In this step, the 'difference' plot positions data points based on petal length
and petal width in the original data. This will enable us to visually compare these results with those of KMeans applied
to 4-dimensional data (i.e. our first result on the Iris dataset). | (diff,accuracy_3d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_3d))
# plot the difference between ground truth and predicted clusters
figure,axis = plt.subplots(1,1)
axis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)
axis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', mark... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
Finally, let us plot clustering accuracy vs. number of dimensions to consolidate our results. | from scipy.interpolate import interp1d
x = np.array([1, 2, 3, 4])
y = np.array([accuracy_1d, accuracy_2d, accuracy_3d, accuracy_4d])
f = interp1d(x, y)
xnew = np.linspace(1,4,10)
plt.plot(x,y,'o',xnew,f(xnew),'-')
plt.xlim([0,5])
plt.xlabel('no. of dims')
plt.ylabel('Clustering Accuracy')
plt.title('PCA Results')
plt.... | doc/ipython-notebooks/clustering/KMeans.ipynb | geektoni/shogun | bsd-3-clause |
iii. Explain what private and public do
The private and public keywords (and also protected) restrict access to class members.
A private member variable or function cannot be accessed, or even viewed, from outside the class. Only the class and friend functions can access private members.
A public member is accessible from any... | //*Quote from comments:*
// This structure type is private to the class, and used as a form of
// linked list in order to contain the actual (static) data stored by the Stack class | Stack.ipynb | chapman-cs510-2016f/cw-12-redyellow | mit |
iv. Explain what size_t is used for
It is a type that can represent the size of any object in bytes: size_t is the type returned by the sizeof operator and is widely used in the standard library to represent sizes and counts.
//From: http://www.cplusplus.com/reference/cstring/size_t/ | //*Quote from comments:*
// Size method
// Specifying const tells the compiler that the method will not change the
// internal state of the instance of the class | Stack.ipynb | chapman-cs510-2016f/cw-12-redyellow | mit |
v. Explain why this code avoids the use of C pointers
First, raw pointers must under no circumstances own memory; that means you must remember to delete what they point to after use.
Second, most uses of pointers in C++ are unnecessary. C++ has very strong support for value semantics, so you can use smart pointers, container classes, design patterns... | //*Quote from comments:*
// However, by using the "unique_ptr" type above, we carefully avoid any
// explicit memory allocation by using the allocation pre-defined inside the
// unique_ptr itself. By using memory-safe structures in this way, we are using
// the "Rule of Zero" and simplifying our life by... | Stack.ipynb | chapman-cs510-2016f/cw-12-redyellow | mit |
ix. Explain what a list initializer does
A constructor is a special non-static member function of a class that is used to initialize objects of its class type.
In the definition of a constructor of a class, member initializer list specifies the initializers for direct and virtual base subobjects and non-static data membe... | //*Quote from comments*
// Implementation of default constructor
Stack::Stack()
: depth(0) // internal depth is 0
, head(nullptr) // internal linked list is null to start
{};
// The construction ": var1(val1), var2(val2) {}" is called a
// "list initializer" for a constructor, and is the preferred
// wa... | Stack.ipynb | chapman-cs510-2016f/cw-12-redyellow | mit |
x. Explain what the "Rule of Zero" is, and how it relates to the "Rule of Three"
Rule of Zero: Classes that have custom destructors, copy/move constructors or copy/move assignment operators should deal exclusively with ownership (which follows from the Single Responsibility Principle). Other classes should not have cus... | //*Quote from comments:*
// Normally we would have to implement the following things in C++ here:
// 1) Class Destructor : to deallocate memory when a Stack is deleted
// ~Stack();
//
// 2) Copy Constructor : to define what Stack b(a) does when a is a Stack
// ... | Stack.ipynb | chapman-cs510-2016f/cw-12-redyellow | mit |
I accomplished the above by running this command at the command prompt:
THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32' jupyter notebook | #import theano
from theano import function, config, sandbox, shared
import theano.tensor as T
import numpy as np
import scipy
import time | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
More theano setup in jupyter notebook boilerplate | print( theano.config.device )
print( theano.config.lib.cnmem) # cf. http://deeplearning.net/software/theano/library/config.html
print( theano.config.print_active_device)# Print active device at when the GPU device is initialized.
import os, sys
os.getcwd()
os.listdir( os.getcwd() )
%run gpu_test.py THEANO_FLAGS='mo... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
sample data boilerplate | # Load the diabetes dataset
diabetes = sklearn.datasets.load_diabetes()
diabetes_X = diabetes.data
diabetes_Y = diabetes.target
#diabetes_X1 = diabetes_X[:,np.newaxis,2]
diabetes_X1 = diabetes_X[:,np.newaxis, 2].astype(theano.config.floatX)
#diabetes_Y = diabetes_Y.reshape( diabetes_Y.shape[0], 1)
diabetes_Y = diabe... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Linear regression
cf. Linear Regression In Theano
1_linear_regression.py from github Newmu/Theano-Tutorials
Train on $m$ input data points | m_lin = diabetes_X1.shape[0] | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
input, output variables $x$, $y$ for Theano | #x1 = T.vector('x1') # X1, input data, with only 1 feature, i.e. X \in \mathbb{R}^N, d=1
#ylin = T.vector('ylin') # target variable for linear regression, so that Y \in \mathbb{R}
x1 = T.scalar('x1') # X1, input data, with only 1 feature, i.e. X \in \mathbb{R}^N, d=1
ylin = T.scalar('ylin') # target variable for l... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Parameters (for a linear slope)
$$
(\theta^0, \theta^1) \in \mathbb{R}^2
$$ | thet0_init_val = np.random.randn()
thet1_init_val = np.random.randn()
thet0 = theano.shared( value=thet0_init_val, name='thet0', borrow=True) # \theta^0
thet1 = theano.shared( thet1_init_val, name='thet1', borrow=True) # \theta^1
| theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
hypothesis function $h_{\theta}$
$$
h_{\theta}(x) = \theta_1 x + \theta_0
$$ | #h_thet = T.dot( thet1, x1) + thet0
# whereas, Newmu uses
h_thet = thet1 * x1 + thet0 | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Cost function $J(\theta)$ | # roshansanthosh uses
#Jthet = T.sum( T.pow(h_thet-ylin,2))/(2*m_lin)
# whereas, Newmu uses
# Jthet = T.mean( T.sqr( thet_1*x1 + thet_0 - ylin ))
Jthet = T.mean( T.pow( h_thet-ylin,2))/2
#Jthet = sandbox.cuda.basic_ops.gpu_from_host( T.mean(
# sandbox.cuda.basic_ops.gpu_from_host( T.pow( h_thet-ylin,2))))/2 | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
$$
\text{grad}_{\theta}J(\theta) = ( \text{grad}_{\theta^0} J , \text{grad}_{\theta^1} J )
$$ | grad_thet0 = T.grad(Jthet, thet0)
grad_thet1 = T.grad(Jthet, thet1)
# so-called "learning rate"
gamma = 0.01 | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
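The `updates` pairs in the training function below implement the standard gradient-descent step with learning rate $\gamma$:

$$
\theta^j := \theta^j - \gamma \, \text{grad}_{\theta^j} J(\theta), \qquad j = 0, 1
$$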
Note that "updates (iterable over pairs (shared_variable, new_expression) List, tuple or dict.) – expressions for new SharedVariable values" cf. Theano doc | train_lin = theano.function(inputs = [x1,ylin], outputs=Jthet,
updates=[[thet1,thet1-gamma*grad_thet1],[thet0,thet0-gamma*grad_thet0]])
test_lin = theano.function([x1],h_thet)
#X1_lin_in = shared( diabetes_X1 ,'float32')
#Y_lin_out = shared( diabetes_Y, 'float32')
training_steps = 1000 # 1... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Linear Algebra and theano
cf. Week 1, Linear Algebra Review, Coursera, Machine Learning with Ng
I'll take this opportunity to provide a dictionary between the syntax of linear algebra math and numpy.
Essentially, what I did was take Coursera's Week 1, Linear Algebra Review, and then translate the math into theano, a... | A = T.matrix('A')
B = T.matrix('B')
#matadd = function([A,B], A+B)
#matadd = function([A,B],sandbox.cuda.basic_ops.gpu_from_host(A+B) )
# Note: we are just defining the expressions, nothing is evaluated here!
C = sandbox.cuda.basic_ops.gpu_from_host(A+B)
matadd = function([A,B], C)
#A = T.dmatrix('A')
#B = T.dmatrix... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
The way to do it, i.e. to "force" computation onto the GPU, is like this (cf. Speeding up your Neural Network with Theano and the GPU - Wild ML): | np.random.randn( *A_eg_CPU.shape )
C_out = theano.shared( np.random.randn( *A_eg_CPU.shape).astype('float32') )
C_out.type()
#A_in = shared( A_eg_CPU, "float32")
#A_in = shared( A_eg_CPU, "float32")
A_in = shared( A_eg_CPU.astype("float32"), "float32")
B_in = shared( B_eg_CPU.astype("float32"), "float32")
#C_out_GP... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Notice how DIFFERENT this setup or syntax is: we have to set up tensor or matrix shared variables A_in, B_in, which are then used to define the theano function, theano.function. "By using shared variables we ensure that they are present in the GPU memory". cf. Linear Algebra Shootout: NumPy vs. Theano vs. TensorFlow | print( matadd_GPU.maker.fgraph.toposort() )
#if np.any([isinstance(C_out_GPU.op, tensor.Elemwise ) and
if np.any([isinstance( C_out_GPU.op, T.Elemwise ) and
('Gpu' not in type( C_out_GPU.op).__name__) for x in matadd_GPU.maker.fgraph.toposort()]) :
print('Used the cpu')
else:
print('Used the gpu')... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Bottom Line: there are 2 ways of doing linear algebra on the GPU
symbolic computation with the usual arguments
$$
A + B = C \in \text{Mat}_{\mathbb{R}}(M,N)
$$
$ \forall \, A, B \in \text{Mat}_{\mathbb{R}}(M,N)$ | A = T.matrix('A')
B = T.matrix('B')
C = sandbox.cuda.basic_ops.gpu_from_host( A + B ) # vs.
# C = A + B # this will result in an output array on the host, as opposed to CudaNdarray on device
matadd = function([A,B], C)
print( matadd.maker.fgraph.toposort() )
matadd( A_eg_CPU.astype("float32"), B_eg_CPU.astype("flo... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
with shared variables | A_in = shared( A_eg_CPU.astype("float32"), "float32") # initialize with the input values, A_eg_CPU, anyway
B_in = shared( B_eg_CPU.astype("float32"), "float32") # initialize with the input values B_eg_CPU, anyway
# C_out = A_in + B_in # this version will output to the host as a numpy.ndarray
# indeed, reading the gr... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Scalar Multiplication (on the GPU)
cf. Scalar Multiplication of Linear Algebra Review, coursera, Machine Learning Intro by Ng | A_2 = np.array( [[4,5],[1,7] ])
a = T.scalar('a')
F = sandbox.cuda.basic_ops.gpu_from_host( a*A )
scalarmul = theano.function([a,A],F)
print( scalarmul.maker.fgraph.toposort() )
scalarmul( np.float32( 2.), A_2.astype("float32")) | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Composition: confirming that you can compose scalar multiplication with matrix (i.e. ring) addition
Being able to compose operations is very important in math | scalarmul( np.float32(2.), matadd( A_eg_CPU.astype("float32"), B_eg_CPU.astype("float32") ) )
u = T.vector('u')
v = T.vector('v')
w = sandbox.cuda.basic_ops.gpu_from_host( u + v)
vecadd = theano.function( [u,v],w)
t = sandbox.cuda.basic_ops.gpu_from_host( a * u)
scalarmul_vec = theano.function([a,u], t)
print(veca... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
This was the computer equivalent of the mathematical expression:
$$
\left[ \begin{matrix} 4 \\ 6 \\ 7 \end{matrix} \right] / 2 - 3 \left[ \begin{matrix} 2 \\ 1 \\ 0 \end{matrix} \right]
$$
sAxy or A-V multiplication or so-called "Gemv", or Matrix Multiplication on a vector, or a linear transformation on an R-module, or vec... | B_out = sandbox.cuda.basic_ops.gpu_from_host( T.dot(A,v))
AVmul = theano.function([A,v], B_out)
print(AVmul.maker.fgraph.toposort())
AVmul( np.array([[1,0,3],[2,1,5],[3,1,2]]).astype("float32"), np.array([1,6,2]).astype("float32"))
AVmul( np.array([[1,0,0],[0,1,0],[0,0,1]]).astype("float32"), np.array([1,6,2]).astype... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
AB or Gemm or Matrix Multiplication, i.e. Ring multiplication
i.e.
$$
A*B = C
$$ | C_f = sandbox.cuda.basic_ops.gpu_from_host( T.dot(A,B))
matmul = theano.function([A,B], C_f)
print( matmul.maker.fgraph.toposort())
matmul( np.array( [[1,3],[2,4],[0,5]] ).astype("float32"), np.array([[1,0],[2,3]]).astype("float32") ) | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Inverse and Transpose
cf. Inverse and Transpose | Ainverse = sandbox.cuda.basic_ops.gpu_from_host( T.inv(A))  # note: T.inv is the elementwise reciprocal, not the matrix inverse
Ainv = theano.function([A], Ainverse)
print(Ainv.maker.fgraph.toposort())
Atranspose = sandbox.cuda.basic_ops.gpu_from_host( A.T)
AT = theano.function([A],Atranspose)
print(AT.maker.fgraph.toposort()) | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Summation, sum, mean, scan
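This heading has no code attached in the notebook dump, so here is a small assumed illustration of Theano reductions and `theano.scan` (a sketch with names of my choosing; it presumes `floatX=float32` as configured at the top of this notebook):

```python
import numpy as np
import theano
import theano.tensor as T

v = T.vector('v')
# plain reductions
reductions = theano.function([v], [T.sum(v), T.mean(v)])
print(reductions(np.arange(5).astype('float32')))  # [10.0, 2.0]

# scan expresses symbolic loops; here, a running (cumulative) sum
csum, updates = theano.scan(fn=lambda x, acc: acc + x,
                            sequences=v,
                            outputs_info=np.float32(0.))
cumsum = theano.function([v], csum)
print(cumsum(np.arange(5).astype('float32')))  # [ 0.  1.  3.  6. 10.]
```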
Linear Regression (again), via Coursera's Machine Learning Intro by Ng, Programming Exercise 1 for Week 2
Boilerplate, load sample data | linregdata = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)
X_linreg_training = linregdata.as_matrix([0]) # pandas.DataFrame.as_matrix convert frame to its numpy-array representation
y_linreg_training = linregdata.as_matrix([1])
m_linreg_training = len(y_linreg_training) # number of ... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Try representing $\theta$, parameters or "weights", of size $|\theta|$ which should be equal to the number of features $n$ (or $d$). | # theta_linreg = T.vector('theta_linreg')
d = X_linreg_training.shape[1] # d = features
# Declare Theano symbolic variables
X = T.matrix('x')
y = T.vector('y') | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Preprocess training data (due to numpy's treatment of arrays) (note: this is not needed if you use pandas to choose which column(s) you want to make into a numpy array) | #X_linreg_training = X_linreg_training.reshape( m_linreg_training,1)
#y_linreg_training = y_linreg_training.reshape( m_linreg_training,1)
# Instead, the training data X and test data values y are going to be represented by Theano symbolic variable above
#X_linreg = theano.shared(X_linreg_training.astype("float32"),"fl... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Preprocess X to include intercepts | input_X_linreg = np.hstack( ( np.ones((m_linreg_training,1)), X_linreg_training ) ).astype("float32")
y_linreg_training_processed = y_linreg_training.reshape( m_linreg_training,).astype("float32")
J_History = [0 for iter in range(num_iters)]
for iter in range(num_iters):
predicted_vals_out, J_out = \
grad... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Denny Britz's way:
http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/
Speeding up your Neural Network with Theano and the GPU
and his jupyter notebook
https://github.com/dennybritz/nn-theano/blob/master/nn-theano-gpu.ipynb
nn-theano/nn-theano-gpu.ipynb | input_X_linreg.shape
# GPU NOTE: Conversion to float32 to store them on the GPU!
X = theano.shared( input_X_linreg.astype('float32'), name='X' )
y = theano.shared( y_linreg_training.astype('float32'), name='y')
# GPU NOTE: Conversion to float32 to store them on the GPU!
theta = theano.shared( np.vstack(theta_0).ast... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Testing the Linear Regression with (Batch) Gradient Descent classes in ./ML/ | import sys
import os
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( os.getcwd() + '/ML' )
from linreg_gradDes import LinearReg, LinearReg_loaded
#from ML import LinearReg, LinearReg_loaded | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Boilerplate for sample input data | linregdata1 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)
linregdata1.as_matrix([0]).shape
linregdata1.as_matrix([1]).shape
features = linregdata1.as_matrix([0]).shape[1]
numberoftraining = linregdata1.as_matrix([0]).shape[0]
LinReg_housing = LinearReg( features, numberoftraining , ... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Other (sample) datasets
Consider feature normalization | def featureNormalize(X):
"""
FEATURENORMALIZE Normalizes the features in X
FEATURENORMALIZE(X) returns a normalized version of X where
the mean value of each feature is 0 and the standard deviation
is 1. This is often a good preprocessing step to do when
working with learning algorithms.... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
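Only the docstring of featureNormalize survives above; its body follows directly from the description (zero mean, unit standard deviation per feature). A hedged sketch:

```python
import numpy as np

def featureNormalize(X):
    """Normalize each column (feature) of X to zero mean and unit
    standard deviation; also return mu and sigma so that test data
    can be normalized with the training statistics."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    X_norm = (X - mu) / sigma
    return X_norm, mu, sigma
```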
Diabetes data from sklearn, sci-kit learn | # Load the diabetes dataset
diabetes = sklearn.datasets.load_diabetes()
diabetes_X = diabetes.data
diabetes_Y = diabetes.target
#diabetes_X1 = diabetes_X[:,np.newaxis,2]
diabetes_X1 = diabetes_X[:,np.newaxis, 2].astype(theano.config.floatX)
#diabetes_Y = diabetes_Y.reshape( diabetes_Y.shape[0], 1)
diabetes_Y = np.vs... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Multiple number of features case: | features = diabetes_X.shape[1]
LinReg_diabetes = LinearReg( features, numberoftraining, 0.01)
processed_X = LinReg_diabetes.preprocess_X( diabetes_X )
%time LinReg_diabetes.build_model( processed_X, diabetes_Y.flatten(), 10000)
LinRegloaded_diabetes = LinearReg_loaded( diabetes_X, diabetes_Y,
... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
ex2 Linear Regression, on d=2 features | data_ex1data2 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data2.txt', header=None)
X_ex1data2 = data_ex1data2.iloc[:,0:2]
y_ex1data2 = data_ex1data2.iloc[:,2]
m_ex1data2 = y_ex1data2.shape[0]
X_ex1data2=X_ex1data2.values.astype(np.float32)
y_ex1data2=y_ex1data2.values.reshape((m_ex1data2,1)).astype(np.floa... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Multi-class Classification
cf. ex3, Programming Exercise 3: Multi-class Classification and Neural Networks, Machine Learning
1 Multi-class Classification | os.getcwd()
os.listdir( './coursera_Ng/machine-learning-ex3/' )
os.listdir( './coursera_Ng/machine-learning-ex3/ex3' )
# Load saved matrices from file
multiclscls_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3data1.mat') | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
import the classes from ML | import sys
import os
os.getcwd()
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( os.getcwd() + '/ML' )
from gradDes import LogReg
# Test case for Cost function J_{\theta} with regularization
theta_t = np.vstack( np.array( [-2, -1, 1, 2]) )
X_t = np.array( [i/10. for i in range(1,16)]).reshape((3,5)).T
#X_t... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Neural Networks
Model representation
cf. 2 Neural Networks, 2.1 Model representation, ex3.pdf | os.getcwd()
os.listdir( './coursera_Ng/machine-learning-ex3/' )
os.listdir( './coursera_Ng/machine-learning-ex3/ex3/' ) | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
$ \Theta_1, \Theta_2 $ | # Load saved matrices from file
nn3_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3weights.mat')
print( nn3_data.keys() )
print( type( nn3_data['Theta1']) )
print( type( nn3_data['Theta2']) )
print( nn3_data['Theta1'].shape )
print( nn3_data['Theta2'].shape )
Theta1[0] | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Feedforward | %load_ext tikzmagic | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
$$
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=3em, column sep=4em, minimum width=2em]
{
\mathbb{R}^{s_l} & \mathbb{R}^{s_l + 1} & \mathbb{R}^{s_{l+1}} & \mathbb{R}^{s_{l+1}} \\
a^{(l)} & (a_0^{(l)} = 1, a^{(l)}) & z^{(l+1)} & g(z^{(l+1)}) = a^{(l+1)} \\
};
\path[->]
(m-1-1) edge... | np.random.seed(0)
s_l = 400 # (layer) size of layer l, i.e. number of nodes, units in layer l
s_lp1 = 25
al = theano.shared( np.random.randn(s_l+1,1).astype('float32'), name="al")
#alp1 = theano.shared( np.random.randn(s_lp1,1).astype('float32'), name="al")
#Thetal = theano.shared( np.random.randn( s_lp1,s_l+1).astype(... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
From Deep Learning Tutorials of LISA lab of University of Montreal; logistic_sgd.py, mlp.py | %env
os.getcwd()
print( sys.path )
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( '../DeepLearningTutorials/code/' )
#from logistic_sgd import LogisticRegression, load_data, sgd_optimization_mnist, predict
import logistic_sgd
MNIST_MTLdat = logistic_sgd.load_data("../DeepLearningTutorials/data/mnist.pkl.... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
NN.py, load NN.py for Layer class for Neural Net for Multiple Layers | import sys
import os
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( os.getcwd() + '/ML' )
from NN import Layer, cost_functional, cost_functional_noreg, gradientDescent_step
| theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Boilerplate sample data, from Coursera's Machine Learning Introduction | # Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')
ex4data1.keys()
print( ex4data1['X'].shape )
print( ex4data1['y'].shape )
test_rng = np.random.RandomState(1234)
#Theta1 = Layer( test_rng, 1, 400,25, 5000)
#help(Thet... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Sanity check using ex4.m, Exercise 4 or Programming Exercise 4 from Coursera's Machine Learning Introduction by Ng | Theta_testvals = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4weights.mat')
print( Theta_testvals.keys() )
print( Theta_testvals['Theta1'].shape )
print( Theta_testvals['Theta2'].shape )
Theta1_testval = Theta_testvals['Theta1'][:,1:]
b1_testval = Theta_testvals['Theta1'][:,0:1]
print( Theta1_testval.sh... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
For $\Theta^{(2)}$, the key to connecting $\Theta^{(2)}$ with $\Theta^{(1)}$ is to set the argument in class Layer with al=Theta1.alp1, | Theta2 = Layer( test_rng, 2, 25,10,5000, al=Theta1.alp1 , activation=T.nnet.sigmoid)
Theta2.Theta.set_value( Theta2_testval.astype('float32'))
Theta2.b.set_value( b2_testval.astype('float32'))
h_test = Theta2.alp1
J = sandbox.cuda.basic_ops.gpu_from_host(
T.mean( T.sum(
- y_sh_var * T.log( h_test ) - ( n... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Summary for Neural Net with Multiple Layers for logistic regression (but can be extended to linear regression)
Load boilerplate training data: | sys.path.append( os.getcwd() + '/ML' )
from NN import Layer, cost_functional, cost_functional_noreg, gradientDescent_step, MLP
# Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')
# recall that whereas the original labels... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Testing on MNIST, from University of Montreal, Deep Learning Tutorial, data | K=10
m = len(train_set[1])
y_train_prob = [np.zeros(K) for row in train_set[1]] # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
y_train_prob[i][ train_set[1][i]] = 1
y_train_prob = np.array(y_train_prob).T.astype(theano.config.floatX) # size dims. (K,m)
print( y_train_prob.shape )
print( ... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Save the model; cf. Getting Started, DeepLearning 0.1 documentation, Loading and Saving Models | import cPickle
save_file = open('./saved_models/MNIST_MTL_log_reg','wb')
for Thet in MNIST_MTL.Thetas:
cPickle.dump( Thet.Theta.get_value(borrow=True), save_file,-1) # the -1 is for HIGHEST priority
cPickle.dump( Thet.b.get_value(borrow=True), save_file,-1)
save_file.close()
MNIST_MTL.Thetas[0].al.set_value... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
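The natural counterpart, reloading the pickled parameters in the same order they were dumped, would look roughly like this (an assumed sketch; it presumes the network was rebuilt with the same architecture first):

```python
import cPickle

load_file = open('./saved_models/MNIST_MTL_log_reg', 'rb')
for Thet in MNIST_MTL.Thetas:
    # parameters were dumped as (Theta, b) pairs per layer, so load in that order
    Thet.Theta.set_value(cPickle.load(load_file), borrow=True)
    Thet.b.set_value(cPickle.load(load_file), borrow=True)
load_file.close()
```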
cf. Glass Classification | gls_data = pd.read_csv( "./kaggle/glass.csv")
gls_data.describe()
gls_data.get_values().shape
X_gls = gls_data.get_values()[:,:-1]
print(X_gls.shape)
y_gls = gls_data.get_values()[:,-1]
print(y_gls.shape)
print( y_gls[:10])
X_gls_train = gls_data.get_values()[:-14,:-1]
print(X_gls_train.shape)
y_gls_train = gls_data... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
GPU test | test_gen_conn = theano.function([], outputs=sandbox.cuda.basic_ops.gpu_from_host(a2p1), givens={ Thetab1.al : X42test })
test_gen_conn()
test_gen_conn = theano.function([], outputs=sandbox.cuda.basic_ops.gpu_from_host(a2p1), givens={ Thetab1.al : X43test })
test_gen_conn() | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Summary for Neural Net with Multiple Layers for logistic regression (but can be extended to linear regression) | sys.path.append( os.getcwd() + '/ML' )
from NN import MLP
# Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')
# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training ... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Testing on University of Montreal LISA lab MNIST data | import gzip
import six.moves.cPickle as pickle
with gzip.open("../DeepLearningTutorials/data/mnist.pkl.gz", 'rb') as f:
try:
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
except:
train_set, valid_set, test_set = pickle.load(f)
K=10
m = len(train_set[1])
y_train_prob = [np.z... | theano_ML.ipynb | ernestyalumni/MLgrabbag | mit |
Reading in markers, calculating decompressed length: We use the (very awesome) itertools module to do the iterating and filtering for us.
We use an iterator to go over the input values, so that we can use itertools functions such as takewhile, which selects characters as long as a condition is fulfilled.
def takewhil... | from itertools import islice, takewhile
import re
numbers = re.compile(r'(\d+)')
def decompress(data_iterator):
'''parses markers and returns index of last character and length of decompressed data'''
count = 0
index = 0
while True:
# handle single tokens that decompress to length 1 until st... | 2016/python3/Day09.ipynb | coolharsh55/advent-of-code | mit |
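As a tiny assumed illustration (not part of the original solution) of how takewhile drives the parsing: it yields characters until the predicate fails, consuming the failing character from the shared iterator in the process, which is exactly what lets the marker parser pick up right after the '(':

```python
from itertools import takewhile

it = iter('abc(3x2)def')
plain = list(takewhile(lambda ch: ch != '(', it))
print(plain)     # ['a', 'b', 'c'] -- three plain characters counted as-is
print(next(it))  # '3' -- the '(' itself was consumed by takewhile's failed test
```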
Part Two
Apparently, the file actually uses version two of the format.
In version two, the only difference is that markers within decompressed data are decompressed. This, the documentation explains, provides much more substantial compression capabilities, allowing many-gigabyte files to be stored in only a few kilobyt... | def decompress(data_iterator):
count = 0
'''parses markers and returns index of last character and length of decompressed data'''
while(True):
# handle all single characters
count += len(list(takewhile(lambda character: character != '(', data_iterator)))
# marker occurs here, extract... | 2016/python3/Day09.ipynb | coolharsh55/advent-of-code | mit |
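The version-two body is cut off above. Since markers inside repeated sections must themselves be expanded, the length can be computed recursively without ever materializing the output. A hedged string-based sketch (the original works on an iterator instead):

```python
import re

marker = re.compile(r'\((\d+)x(\d+)\)')

def decompressed_length_v2(data):
    """Length of data after fully recursive (version two) decompression."""
    length = 0
    i = 0
    while i < len(data):
        m = marker.match(data, i)
        if m:
            span, repeat = int(m.group(1)), int(m.group(2))
            start = m.end()
            # recursively expand the section covered by this marker
            length += repeat * decompressed_length_v2(data[start:start + span])
            i = start + span
        else:
            length += 1  # a plain character decompresses to itself
            i += 1
    return length

print(decompressed_length_v2('X(8x2)(3x3)ABCY'))  # 20, per the puzzle's example
```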