View the results of the parameters for each class after the training. You can see that they look like the corresponding numbers. | PlotParameters(model) | DL0110EN/3.3.2lab_predicting _MNIST_using_Softmax.ipynb | atlury/deep-opencl | lgpl-3.0 |
Plot the first five misclassified samples: | count = 0
for x, y in validation_dataset:
    z = model(x.reshape(-1, 28 * 28))
    _, yhat = torch.max(z, 1)
    if yhat != y:
        show_data((x, y))
        plt.show()
        print("yhat:", yhat)
        count += 1
    if count >= 5:
        break
| DL0110EN/3.3.2lab_predicting _MNIST_using_Softmax.ipynb | atlury/deep-opencl | lgpl-3.0 |
Initializing the Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br>
More example networks are available in the networks folder of the Batfish repository. | # Initialize a network and snapshot
NETWORK_NAME = "example_network"
SNAPSHOT_NAME = "example_snapshot"
SNAPSHOT_PATH = "networks/example"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True) | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
The network snapshot that we initialized above is illustrated below. You can download/view devices' configuration files here.
All of the information we will show you in this notebook is dynamically computed by Batfish based on the configuration files for the network devices.
View Routing Tables for ALL devices and ALL VRFs
Batfish makes all routing tables in the network easily accessible. Let's take a look at how you can retrieve the specific information you want. | # Get routing tables for all nodes and VRFs
routes_all = bf.q.routes().answer().frame() | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
We are not going to print this table as it has a large number of entries.
View Routing Tables for default VRF on AS1 border routers
There are 2 ways that we can get the desired subset of data:
Option 1) Only request that information from Batfish by passing parameters into the routes() question. This is useful when you need to reduce the amount of data being returned, but is limited to regex filtering on VRF, Node, Protocol, and Network.
Option 2) Filter the output of the routes() question using the Pandas APIs. | ?bf.q.routes
# Get the routing table for the 'default' VRF on border routers of as1
# using BF parameters
routes_as1border = bf.q.routes(nodes="/as1border/", vrfs="default").answer().frame()
# Get the routing table for the 'default' VRF on border routers of as1
# using Pandas filtering
routes_as1border = routes_all[(routes_all['Node'].str.contains('as1border')) & (routes_all['VRF'] == 'default')]
routes_as1border | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
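As an illustration of Option 2, the same Pandas boolean-mask filter can be exercised on a small hand-made frame. The node and VRF values below are invented for illustration; only the `Node` and `VRF` column names from the real `routes()` answer are mimicked.

```python
import pandas as pd

# Toy stand-in for routes_all with made-up rows.
routes_all_toy = pd.DataFrame({
    'Node': ['as1border1', 'as1border2', 'as2core1'],
    'VRF':  ['default', 'default', 'mgmt'],
})

# Same boolean-mask filter as used above on the real frame.
subset = routes_all_toy[
    routes_all_toy['Node'].str.contains('as1border')
    & (routes_all_toy['VRF'] == 'default')
]
```

The `&` operator combines the two element-wise masks; each condition must be parenthesized because `&` binds more tightly than comparison operators.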
View BGP learnt routes for default VRF on AS1 border routers | # Getting BGP routes in the routing table for the 'default' VRF on border routers of as1
# using BF parameters
routes_as1border_bgp = bf.q.routes(nodes="/as1border/", vrfs="default", protocols="bgp").answer().frame()
# Getting BGP routes in the routing table for the 'default' VRF on border routers of as1
# using Pandas filtering
routes_as1border_bgp = routes_all[(routes_all['Node'].str.contains('as1border')) & (routes_all['VRF'] == 'default') & (routes_all['Protocol'] == 'bgp')]
routes_as1border_bgp | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
View BGP learnt routes for ALL VRFs on ALL routers with Metric >=50
We cannot pass in metric as a parameter to Batfish, so this task is best handled with the Pandas API. | routes_filtered = routes_all[(routes_all['Protocol'] == 'bgp') & (routes_all['Metric'] >= 50)]
routes_filtered | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
View the routing entries for network 1.0.2.0/24 on ALL routers in ALL VRFs | # grab the route table entry for network 1.0.2.0/24 from all routers in all VRFs
# using BF parameters
routes_filtered = bf.q.routes(network="1.0.2.0/24").answer().frame()
# grab the route table entry for network 1.0.2.0/24 from all routers in all VRFs
# using Pandas filtering
routes_filtered = routes_all[routes_all['Network'] == "1.0.2.0/24"]
routes_filtered | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
Using Pandas filtering, it is easy to retrieve the list of nodes that have the network in the routing table for at least 1 VRF. This type of processing should always be done using the Pandas APIs. | # Get the list of nodes that have the network 1.0.2.0/24 in at least 1 VRF
# the .unique function removes duplicate entries that would have been returned if the network was in multiple VRFs on a node or there were
# multiple route entries for the network (ECMP)
print(sorted(routes_filtered["Node"].unique())) | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
Now we will retrieve the list of nodes that do NOT have this prefix in their routing table. This is easy to do with the Pandas groupby and filter functions. | # Group all routes by Node and filter for those that don't have '1.0.2.0/24'
routes_filtered = routes_all.groupby('Node').filter(lambda x: all(x['Network'] != '1.0.2.0/24'))
# Get the unique node names and sort the list
print(sorted(routes_filtered["Node"].unique())) | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
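The groupby/filter pattern can be checked on a tiny hand-made frame (node and network values here are invented for illustration):

```python
import pandas as pd

# Two nodes: r1 has the prefix in its table, r2 does not.
df = pd.DataFrame({
    'Node':    ['r1', 'r1', 'r2'],
    'Network': ['1.0.2.0/24', '2.0.0.0/8', '2.0.0.0/8'],
})

# Keep only the groups (nodes) where no row matches the prefix.
missing = df.groupby('Node').filter(lambda x: all(x['Network'] != '1.0.2.0/24'))
```

The lambda runs once per node group; `filter` keeps the rows of every group for which it returns True.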
The only devices that do not have a route to 1.0.2.0/24 are the 2 hosts in the snapshot. This is expected, as they should just have a default route. Let's verify that. | routes_all[routes_all['Node'].str.contains('host')] | jupyter_notebooks/Introduction to Route Analysis.ipynb | batfish/pybatfish | apache-2.0 |
In the numpy package, the terminology used for vectors, matrices and higher-dimensional data sets is array. Let's load numpy and some other modules too. | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid') | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Showcases
Roll the dice
You like to play board games, but you want to better know your chances of rolling a certain combination with two dice: | def mydices(throws):
    """
    Function to create the distribution of the sum of two dice.

    Parameters
    ----------
    throws : int
        Number of throws with the dice
    """
    # randint draws integers; the upper bound is exclusive, so use 7.
    # (np.random.uniform(1, 6, ...) would give continuous floats, not dice rolls.)
    stone1 = np.random.randint(1, 7, throws)
    stone2 = np.random.randint(1, 7, throws)
    total = stone1 + stone2
    return plt.hist(total, bins=11)  # 11 possible sums (2..12); we use matplotlib to show a histogram

mydices(100) # test this out with multiple options | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
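The simulated histogram can be compared against the exact distribution, obtained by enumerating all 36 equally likely outcomes with broadcasting:

```python
import numpy as np

faces = np.arange(1, 7)
# Broadcasting builds the full 6x6 table of pairwise sums.
sums = (faces[:, None] + faces[None, :]).ravel()
values, counts = np.unique(sums, return_counts=True)
probs = counts / counts.sum()
# A sum of 7 is the most likely outcome, with probability 6/36.
```

With more throws, the simulated histogram should approach this exact triangular shape.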
Cartesian2Polar
Consider a random 10x2 matrix representing cartesian coordinates; how do we convert them to polar coordinates? | # random numbers (X, Y in 2 columns)
Z = np.random.random((10,2))
X, Y = Z[:,0], Z[:,1]
# distance
R = np.sqrt(X**2 + Y**2)
# angle
T = np.arctan2(Y, X) # Array of angles in radians
Tdegree = T*180/(np.pi) # If you like degrees more
# NEXT PART (now for illustration)
#plot the cartesian coordinates
plt.figure(figsize=(14, 6))
ax1 = plt.subplot(121)
ax1.plot(Z[:,0], Z[:,1], 'o')
ax1.set_title("Cartesian")
# plot the polar coordinates
ax2 = plt.subplot(122, polar=True)
ax2.plot(T, R, 'o')
ax2.set_title("Polar") | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
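A quick round-trip check: converting the polar coordinates back with cos/sin should recover the original cartesian points.

```python
import numpy as np

X = np.array([1.0, 0.5, 0.2])
Y = np.array([0.0, 0.5, 0.9])
R = np.sqrt(X**2 + Y**2)
T = np.arctan2(Y, X)
# Back to cartesian.
X2, Y2 = R * np.cos(T), R * np.sin(T)
```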
Speed
Memory-efficient container that provides fast numerical operations: | L = range(1000)
%timeit [i**2 for i in L]
a = np.arange(1000)
%timeit a**2
#More information about array?
np.array? | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Creating numpy arrays
There are a number of ways to initialize new numpy arrays, for example from
a Python list or tuples
using functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.
reading data from files
From lists
For example, to create new vector and matrix arrays from Python lists we can use the numpy.array function. | # a vector: the argument to the array function is a Python list
V = np.array([1, 2, 3, 4])
V
# a matrix: the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
M | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The v and M objects are both of the type ndarray that the numpy module provides. | type(V), type(M) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property. | V.shape
M.shape | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The number of elements in the array is available through the ndarray.size property: | M.size | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Equivalently, we could use the function numpy.shape and numpy.size | np.shape(M)
np.size(M) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Using the dtype (data type) property of an ndarray, we can see what type the data of an array has (always fixed for each array, cfr. Matlab): | M.dtype | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
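Numpy infers a single dtype for the whole array from its contents; a single float promotes every element to float64:

```python
import numpy as np

a = np.array([1, 2, 3])     # all integers -> integer dtype
b = np.array([1, 2, 3.0])   # one float -> float64 for all elements
```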
We get an error if we try to assign a value of the wrong type to an element in a numpy array: | #M[0,0] = "hello" #uncomment this cell
f = np.array(['Bonjour', 'Hello', 'Hallo',])
f | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument: | M = np.array([[1, 2], [3, 4]], dtype=complex) #np.float64, np.float, np.int64
print(M, '\n', M.dtype) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Since Numpy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype functions (see also the similar asarray function). This always create a new array of new type: | M = np.array([[1, 2], [3, 4]], dtype=float)
M2 = M.astype(int)
M2 | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Common type that can be used with dtype are: int, float, complex, bool, object, etc.
We can also explicitly define the bit size of the data types, for example: int64, int16, float64, float128, complex128.
Higher order is also possible: | C = np.array([[[1], [2]], [[3], [4]]])
print(C.shape)
C
C.ndim # number of dimensions | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Using array-generating functions
For larger arrays it is impractical to initialize the data manually using explicit Python lists. Instead, we can use one of the many functions in numpy that generate arrays of different forms. Some of the more common are:
arange | # create a range
x = np.arange(0, 10, 1) # arguments: start, stop, step
x
x = np.arange(-1, 1, 0.1)
x | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
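Note that arange excludes the stop value, and with a float step the element count follows from rounding, which can be surprising; linspace (next section) is the safer choice when the endpoints matter:

```python
import numpy as np

x = np.arange(-1, 1, 0.1)    # stop value 1 is NOT included
y = np.linspace(-1, 1, 21)   # both endpoints ARE included
```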
linspace and logspace | # using linspace, both end points ARE included
np.linspace(0, 10, 25)
np.logspace(0, 10, 10, base=np.e)
plt.plot(np.logspace(0, 10, 10, base=np.e), np.random.random(10), 'o')
plt.xscale('log') | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
random data | # uniform random numbers in [0,1]
np.random.rand(5,5)
# standard normal distributed random numbers
np.random.randn(5,5) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
zeros and ones | np.zeros((3,3))
np.ones((3,3)) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
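Related constructors worth knowing are full (constant-filled arrays) and the `*_like` helpers, which copy the shape and dtype of an existing array:

```python
import numpy as np

a = np.full((2, 3), 7)   # 2x3 array filled with 7
b = np.zeros_like(a)     # zeros with the same shape/dtype as a
c = np.ones_like(a)      # ones with the same shape/dtype as a
```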
<div class="alert alert-success">
<b>EXERCISE</b>: Create a vector with values ranging from 10 to 49 with steps of 1
</div> | np.arange(10, 50, 1) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
<div class="alert alert-success">
<b>EXERCISE</b>: Create a 3x3 identity matrix (look into docs!)
</div> | np.identity(3)
np.eye(3) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
<div class="alert alert-success">
<b>EXERCISE</b>: Create a 3x3x3 array with random values
</div> | np.random.random((3, 3, 3)) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
File I/O
Numpy is capable of reading and writing text and binary formats. However, since most data sources provide information in a format with headings, mixed dtypes, ..., we will use the power of Pandas for reading/writing text files.
Comma-separated values (CSV)
Writing to a csvfile with numpy is done with the savetxt-command: | a = np.random.random(40).reshape((20, 2))
np.savetxt("random-matrix.csv", a, delimiter=",") | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
To read data from such file into Numpy arrays we can use the numpy.genfromtxt function. For example, | a2 = np.genfromtxt("random-matrix.csv", delimiter=',')
a2 | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Numpy's native file format
This binary format is useful when storing and reading back numpy array data. Use the functions numpy.save and numpy.load: | np.save("random-matrix.npy", a)
!file random-matrix.npy
np.load("random-matrix.npy") | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Manipulating arrays
Indexing
<center>MATLAB-USERS:<br> PYTHON STARTS AT 0!
We can index elements in an array using the square bracket and indices: | V
# V is a vector, and has only one dimension, taking one index
V[0]
V[-1:] #-2, -2:,...
# a is a matrix, or a 2 dimensional array, taking two indices
# the first dimension corresponds to rows, the second to columns.
a[1, 1] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array) | a[1] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The same thing can be achieved with using : instead of an index: | a[1, :] # row 1
a[:, 1] # column 1 | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
We can assign new values to elements in an array using indexing: | a[0, 0] = 1
a[:, 1] = -1
a | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Index slicing
Index slicing is the technical name for the syntax M[lower:upper:step] to extract part of an array: | A = np.array([1, 2, 3, 4, 5])
A
A[1:3] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Array slices are mutable: if they are assigned a new value the original array from which the slice was extracted is modified: | A[1:3] = [-2,-3]
A | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
We can omit any of the three parameters in M[lower:upper:step]: | A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
A[-3:] # the last three elements | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
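The step can also be negative, which walks the array backwards:

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
A[::-1]   # reversed view of the array
```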
<div class="alert alert-success">
<b>EXERCISE</b>: Create a null vector of size 10 and adapt it in order to make the fifth element a value 1
</div> | vec = np.zeros(10)
vec[4] = 1.
vec | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Fancy indexing
Fancy indexing is the name for when an array or list is used in place of an index: | a = np.arange(0, 100, 10)
a[[2, 3, 2, 4, 2]] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
In more dimensions: | A = np.arange(25).reshape(5,5)
A
row_indices = [1, 2, 3]
A[row_indices]
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
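Note that pairing row and column lists as above selects the individual elements (1,1), (2,2) and (3,-1). To get the full submatrix of all row/column combinations instead, np.ix_ builds the required open mesh:

```python
import numpy as np

A = np.arange(25).reshape(5, 5)
sub = A[np.ix_([1, 2, 3], [1, 2, 4])]   # 3x3 submatrix
```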
We can also index with masks: if the index mask is a Numpy array with data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element: | B = np.array([n for n in range(5)]) # range is pure python => Exercise: Make this shorter with pure numpy
B
row_mask = np.array([True, False, True, False, False])
B[row_mask]
# same thing
row_mask = np.array([1,0,1,0,0], dtype=bool)
B[row_mask] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
This feature is very useful to conditionally select elements from an array, using for example comparison operators: | AR = np.random.randint(0, 20, 15)
AR
AR%3 == 0
extract_from_AR = AR[AR%3 == 0]
extract_from_AR
x = np.arange(0, 10, 0.5)
x
mask = (5 < x) * (x < 7.5) # We actually multiply two masks here (boolean 0 and 1 values)
mask
x[mask] | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
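Multiplying the two masks works because True/False behave as 1/0, but the elementwise boolean operators & and | state the intent more clearly:

```python
import numpy as np

x = np.arange(0, 10, 0.5)
mask = (5 < x) & (x < 7.5)   # elementwise AND of the two masks
x[mask]
```

The parentheses around each comparison are required, since & binds more tightly than < in Python.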
<div class="alert alert-success">
<b>EXERCISE</b>: Swap the first two rows of the 2-D array `A`
</div> | A = np.arange(25).reshape(5,5)
A
#SWAP
A[[0, 1]] = A[[1, 0]]
A | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
<div class="alert alert-success">
<b>EXERCISE</b>: Change all even numbers of `AR` into zero-values.
</div> | AR = np.random.randint(0, 20, 15)
AR
AR[AR%2==0] = 0.
AR | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
<div class="alert alert-success">
<b>EXERCISE</b>: Change all even positions of matrix `AR` into zero-values
</div> | AR = np.random.randint(1, 20, 15)
AR
AR[1::2] = 0
AR | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Some more extraction functions
The where function gives the indices where a condition holds: | x = np.arange(0, 10, 0.5)
np.where(x>5.) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
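np.where also has a three-argument form that acts as an elementwise ternary, picking from two alternatives:

```python
import numpy as np

x = np.arange(5)
result = np.where(x > 2, x, 0)   # keep x where x > 2, else 0
```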
With the diag function we can also extract the diagonal and subdiagonals of an array: | np.diag(A) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The take function is similar to fancy indexing described above: | x.take([1, 5]) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Linear algebra
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations.
Scalar-array operations
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers. | v1 = np.arange(0, 5)
v1 * 2
v1 + 2
A = np.arange(25).reshape(5,5)
A * 2
np.sin(A) #np.log(A), np.arctan,... | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Element-wise array-array operations
When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations: | A * A # element-wise multiplication
v1 * v1 | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row: | A.shape, v1.shape
A * v1 | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Consider the speed difference with pure python: | a = np.arange(10000)
%timeit a + 1
l = range(10000)
%timeit [i+1 for i in l]
#logical operators:
a1 = np.arange(0, 5, 1)
a2 = np.arange(5, 0, -1)
a1>a2 # >, <=,...
# cfr.
np.all(a1>a2) # any | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Basic operations on numpy arrays (addition, etc.) are elementwise. Nevertheless, it is also possible to do operations on arrays of different sizes if Numpy can transform these arrays so that they all have the same size: this conversion is called broadcasting. | A, v1
A*v1
x, y = np.arange(5), np.arange(5).reshape((5, 1)) # a row and a column array
distance = np.sqrt(x ** 2 + y ** 2)
distance
#let's put this in a figure:
plt.pcolor(distance)
plt.colorbar() | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Matrix algebra
What about matrix multiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments: | np.dot(A, A)
np.dot(A, v1) #check the difference with A*v1 !!
np.dot(v1, v1) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
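Since Python 3.5, the @ operator performs the same matrix multiplication as np.dot:

```python
import numpy as np

A = np.arange(4).reshape(2, 2)
v = np.array([1, 2])

A @ A   # matrix-matrix product, same as np.dot(A, A)
A @ v   # matrix-vector product, same as np.dot(A, v)
```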
Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra. You can also get inverse of matrices, determinant,...
We won't go deeper here on pure matrix calculation, but for more information, check the related functions: inner, outer, cross, kron, tensordot. Try for example help(kron).
Calculations
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays. | a = np.random.random(40) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Different frequently used operations can be done: | print ('Mean value is', np.mean(a))
print ('Median value is', np.median(a))
print ('Std is', np.std(a))
print ('Variance is', np.var(a))
print ('Min is', a.min())
print ('Element of minimum value is', a.argmin())
print ('Max is', a.max())
print ('Sum is', np.sum(a))
print ('Prod', np.prod(a))
print ('Cumsum is', np.cumsum(a)[-1])
print ('CumProd of 5 first elements is', np.cumprod(a)[4])
print ('Unique values in this array are:', np.unique(np.random.randint(1,6,10)))
print ('85% Percentile value is: ', np.percentile(a, 85))
a = np.random.random(40)
print(a.argsort())
a.sort() #sorts in place!
print(a.argsort()) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
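Related: np.sort returns a sorted copy, while the .sort() method (used above) sorts in place:

```python
import numpy as np

a = np.array([3, 1, 2])
b = np.sort(a)   # sorted copy; a itself is untouched at this point
a.sort()         # now a is sorted in place
```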
Calculations with higher-dimensional data
When functions such as min, max, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave: | m = np.random.rand(3,3)
m
# global max
m.max()
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.
<div class="alert alert-success">
<b>EXERCISE</b>: Rescale the 5x5 matrix `Z` to values between 0 and 1:
</div> | Z = np.random.uniform(5.0, 15.0, (5,5))
Z
# RESCALE:
(Z - Z.min())/(Z.max() - Z.min()) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Reshaping, resizing and stacking arrays
The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays. | A = np.arange(25).reshape(5,5)
n, m = A.shape
B = A.reshape((1,n*m))
B | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
We can also use the function flatten to turn a higher-dimensional array into a vector. This function creates a copy of the data (see the section on views and copies below). | B = A.flatten()
B | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
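The related ravel method returns a view when possible, so writes through it show up in the original array, whereas flatten always copies:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
r = A.ravel()    # view (when the memory layout allows)
r[0] = 99        # visible in A
f = A.flatten()  # always an independent copy
f[1] = -1        # A is NOT affected
```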
Stacking and repeating arrays
Using the functions repeat, tile, vstack, hstack, and concatenate, we can create larger vectors and matrices from smaller ones:
tile and repeat | a = np.array([[1, 2], [3, 4]])
# repeat each element 3 times
np.repeat(a, 3)
# tile the matrix 3 times
np.tile(a, 3) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
concatenate | b = np.array([[5, 6]])
np.concatenate((a, b), axis=0)
np.concatenate((a, b.T), axis=1) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
hstack and vstack | np.vstack((a,b))
np.hstack((a,b.T)) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
IMPORTANT!: View and Copy
To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important, for example, when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference). | A = np.array([[1, 2], [3, 4]])
A
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0,0] = 10
B
A | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called "deep copy" using the function copy: | B = np.copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B
A | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The reshape function also just takes a view: | arr = np.arange(8)
arr_view = arr.reshape(2, 4)
print('Before\n', arr_view)
arr[0] = 1000
print('After\n', arr_view)
arr.flatten()[2] = 10 #Flatten creates a copy!
arr | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Using arrays in conditions
When using arrays in conditions in, for example, if statements and other boolean expressions, one needs to use any or all, which require that any or all elements in the array evaluate to True: | M
if (M > 5).any():
    print("at least one element in M is larger than 5")
else:
    print("no element in M is larger than 5")

if (M > 5).all():
    print("all elements in M are larger than 5")
else:
    print("not all elements in M are larger than 5") | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Some extra applications:
Polynomial fit | b_data = np.genfromtxt("./data/bogota_part_dataset.csv", skip_header=3, delimiter=',')
plt.scatter(b_data[:,2], b_data[:,3])
x, y = b_data[:,1], b_data[:,3]
t = np.polyfit(x, y, 2) # fit a 2nd-degree polynomial to the data; t holds the coefficients, highest power first
t
x.sort()
plt.plot(x, y, 'o')
plt.plot(x, t[0]*x**2 + t[1]*x + t[2], '-') | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
---
<div class="alert alert-success">
<b>EXERCISE</b>: Make a fourth order fit between the fourth and fifth column of `b_data`
</div> | x, y = b_data[:,3], b_data[:,4]
t = np.polyfit(x, y, 4) # fit a 4th-degree polynomial to the data; t holds the coefficients, highest power first
t
x.sort()
plt.plot(x, y, 'o')
plt.plot(x, t[0]*x**4 + t[1]*x**3 + t[2]*x**2 + t[3]*x +t[4], '-') | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
---
However, when doing some kind of regression, we would like to automatically get more information about the fit characteristics. Statsmodels is a library that provides this functionality; we will come back to this type of regression problem later.
Moving average function | def moving_average(a, n=3):
    ret = np.cumsum(a, dtype=float)
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n

print(moving_average(b_data, n=3)) | notebooks/python_recap/05-numpy.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/yamnet"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/yamnet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/yamnet.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/yamnet.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/yamnet/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Sound classification with YAMNet
YAMNet is a deep net that predicts 521 audio event classes from the AudioSet-YouTube corpus it was trained on. It employs the
Mobilenet_v1 depthwise-separable
convolution architecture. | import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import csv
import matplotlib.pyplot as plt
from IPython.display import Audio
from scipy.io import wavfile
import scipy.signal  # needed by ensure_sample_rate below | examples/colab/yamnet.ipynb | tensorflow/hub | apache-2.0 |
Load the Model from TensorFlow Hub.
Note: to read the documentation, just follow the model's URL. | # Load the model.
model = hub.load('https://tfhub.dev/google/yamnet/1') | examples/colab/yamnet.ipynb | tensorflow/hub | apache-2.0 |
The labels file will be loaded from the model's assets; its path is available via model.class_map_path().
You will load it into the class_names variable. | # Find the name of the class with the top score when mean-aggregated across frames.
def class_names_from_csv(class_map_csv_text):
    """Returns list of class names corresponding to score vector."""
    class_names = []
    with tf.io.gfile.GFile(class_map_csv_text) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            class_names.append(row['display_name'])
    return class_names

class_map_path = model.class_map_path().numpy()
class_names = class_names_from_csv(class_map_path) | examples/colab/yamnet.ipynb | tensorflow/hub | apache-2.0 |
Add a method to verify that a loaded audio clip is at the proper sample rate (16 kHz) and to convert it otherwise; a wrong sample rate would affect the model's results. | def ensure_sample_rate(original_sample_rate, waveform,
                       desired_sample_rate=16000):
    """Resample waveform if required."""
    if original_sample_rate != desired_sample_rate:
        desired_length = int(round(float(len(waveform)) /
                                   original_sample_rate * desired_sample_rate))
        waveform = scipy.signal.resample(waveform, desired_length)
    return desired_sample_rate, waveform
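The resampled length is just the duration-preserving ratio of the two rates. With illustrative numbers (2 seconds of audio at 44.1 kHz):

```python
original_sample_rate = 44100
desired_sample_rate = 16000
n_samples = 88200   # 2 seconds of audio at 44.1 kHz

# Same arithmetic as inside ensure_sample_rate.
desired_length = int(round(float(n_samples) /
                           original_sample_rate * desired_sample_rate))
# 2 seconds at 16 kHz -> 32000 samples
```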
Downloading and preparing the sound file
Here you will download a wav file and listen to it.
If you have a file already available, just upload it to colab and use it instead.
Note: the expected audio file should be a mono WAV file at a 16 kHz sample rate.

!curl -O https://storage.googleapis.com/audioset/speech_whistling2.wav
!curl -O https://storage.googleapis.com/audioset/miaow_16k.wav
# wav_file_name = 'speech_whistling2.wav'
wav_file_name = 'miaow_16k.wav'
sample_rate, wav_data = wavfile.read(wav_file_name, 'rb')
sample_rate, wav_data = ensure_sample_rate(sample_rate, wav_data)
# Show some basic information about the audio.
duration = len(wav_data)/sample_rate
print(f'Sample rate: {sample_rate} Hz')
print(f'Total duration: {duration:.2f}s')
print(f'Size of the input: {len(wav_data)}')
# Listening to the wav file.
Audio(wav_data, rate=sample_rate)
The wav_data needs to be normalized to values in [-1.0, 1.0] (as stated in the model's documentation).

waveform = wav_data / tf.int16.max
Executing the Model
Now the easy part: using the data already prepared, you just call the model and get the scores, embeddings, and spectrogram.
The scores are the main result you will use.
You will use the spectrogram for some visualizations later.

# Run the model, check the output.
scores, embeddings, spectrogram = model(waveform)
scores_np = scores.numpy()
spectrogram_np = spectrogram.numpy()
inferred_class = class_names[scores_np.mean(axis=0).argmax()]
print(f'The main sound is: {inferred_class}')
Visualization
YAMNet also returns some additional information that we can use for visualization.
Let's take a look at the waveform, the spectrogram, and the top classes inferred.

plt.figure(figsize=(10, 6))
# Plot the waveform.
plt.subplot(3, 1, 1)
plt.plot(waveform)
plt.xlim([0, len(waveform)])
# Plot the log-mel spectrogram (returned by the model).
plt.subplot(3, 1, 2)
plt.imshow(spectrogram_np.T, aspect='auto', interpolation='nearest', origin='lower')
# Plot and label the model output scores for the top-scoring classes.
mean_scores = np.mean(scores, axis=0)
top_n = 10
top_class_indices = np.argsort(mean_scores)[::-1][:top_n]
plt.subplot(3, 1, 3)
plt.imshow(scores_np[:, top_class_indices].T, aspect='auto', interpolation='nearest', cmap='gray_r')
# patch_padding = (PATCH_WINDOW_SECONDS / 2) / PATCH_HOP_SECONDS
# values from the model documentation
patch_padding = (0.025 / 2) / 0.01
plt.xlim([-patch_padding-0.5, scores.shape[0] + patch_padding-0.5])
# Label the top_N classes.
yticks = range(0, top_n, 1)
plt.yticks(yticks, [class_names[top_class_indices[x]] for x in yticks])
_ = plt.ylim(-0.5 + np.array([top_n, 0]))
That sort of makes sense
$x$ increases quickly, then hits an upper bound
How quickly?
What parameters of the system affect this?
What are the precise dynamics?
What about $f(x)=-x$?

with model:
def feedback(x):
return -x
conn.function = feedback
sim = nengo.Simulator(model)
sim.run(.5)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);

(Source: SYDE 556 Lecture 5 Dynamics.ipynb, from the celiasmith/syde556 repo, GPL-2.0 license.)
That also makes sense. What if we nudge it away from zero?

from nengo.utils.functions import piecewise
with model:
stim = nengo.Node(piecewise({0:1, .2:-1, .4:0}))
nengo.Connection(stim, ensA)
sim = nengo.Simulator(model)
sim.run(.6)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);
With an input of 1, $x=0.5$
With an input of -1, $x=-0.5$
With an input of 0, it goes back to $x=0$
Does this make sense?
Why / why not?
And why that particular timing/curvature?
What about $f(x)=x^2$?

with model:
stim.output = piecewise({.1:.2, .2:.4, .5:0})
def feedback(x):
return x*x
conn.function = feedback
sim = nengo.Simulator(model)
sim.run(.6)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);
Well that's weird
Stable at $x=0$ with no input
Stable at .2
Unstable at .4, shoots up high
Something very strange happens around $x=1$ when the input is turned off (why decay if $f(x) = x^2$?)
Why is this happening?
Making sense of dynamics
Let's go back to something simple
Just a single feed-forward neural population
Encode $x$ into current, compute spikes, decode filtered spikes into $\hat{x}$
Instead of a constant input, let's change the input
Change it suddenly from zero to one to get a sense of what's happening with changes.

import nengo
from nengo.utils.functions import piecewise
model = nengo.Network(seed=4)
with model:
stim = nengo.Node(piecewise({.3:1}))
ensA = nengo.Ensemble(100, dimensions=1)
def feedback(x):
return x
nengo.Connection(stim, ensA)
#conn = nengo.Connection(ensA, ensA, function=feedback)
stim_p = nengo.Probe(stim)
ensA_p = nengo.Probe(ensA, synapse=0.01)
sim = nengo.Simulator(model)
sim.run(1)
plot(sim.trange(), sim.data[ensA_p], label="$\hat{x}$")
plot(sim.trange(), sim.data[stim_p], label="$x$")
legend()
ylim(-.2,1.5);
This was supposed to compute $f(x)=x$
For a constant input, that works
But we get something else when there's a change in the input
What is this difference?
What affects it?

with model:
ensA_p = nengo.Probe(ensA, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(1)
plot(sim.trange(), sim.data[ensA_p], label="$\hat{x}$")
plot(sim.trange(), sim.data[stim_p], label="$x$")
legend()
ylim(-.2,1.5);
The time constant of the post-synaptic filter
We're not getting $f(x)=x$
Instead we're getting $f(x(t))=x(t)*h(t)$

tau = 0.03
with model:
ensA_p = nengo.Probe(ensA, synapse=tau)
sim = nengo.Simulator(model)
sim.run(1)
stim_filt = nengo.Lowpass(tau).filt(sim.data[stim_p], dt=sim.dt)
plot(sim.trange(), sim.data[ensA_p], label="$\hat{x}$")
plot(sim.trange(), sim.data[stim_p], label="$x$")
plot(sim.trange(), stim_filt, label="$h(t)*x(t)$")
legend()
ylim(-.2,1.5);
So there are dynamics and filtering going on, since there is always a synaptic filter on a connection
Why isn't it exactly the same?
Recurrent connections are dynamic as well (i.e. passing past information to future state of the population)
Let's take a look more carefully
Recurrent connections
So a connection actually approximates $f(x(t))*h(t)$
So what does a recurrent connection do?
Also $x(t) = f(x(t))*h(t)$
where
$$
h(t) = \begin{cases}
e^{-t/\tau} &\mbox{if } t > 0 \\
0 &\mbox{otherwise}
\end{cases}
$$
How can we work with this?
General rule of thumb: convolutions are annoying, so let's get rid of them
We could do a Fourier transform
$X(\omega)=F(\omega)H(\omega)$
But, since we are studying the response of a system (rather than a continuous signal), there's a more general and appropriate transform that makes life even easier:
Laplace transform (it is more general because $s = a + j\omega$)
The Laplace transform of our equations are:
$X(s)=F(s)H(s)$
$H(s)={1 \over {1+s\tau}}$
Rearranging:
$X(s)=F(s){1 \over {1+s\tau}}$
$X(s)(1+s\tau) = F(s)$
$X(s) + X(s)s\tau = F(s)$
$sX(s) = {1 \over \tau} (F(s)-X(s))$
Convert back into the time domain (inverse Laplace):
${dx \over dt} = {1 \over \tau} (f(x(t))-x(t))$
Dynamics
This says that if we introduce a recurrent connection, we end up implementing a differential equation
So what happened with $f(x)=x+1$?
$\dot{x} = {1 \over \tau} (x+1-x)$
$\dot{x} = {1 \over \tau}$
What about $f(x)=-x$?
$\dot{x} = {1 \over \tau} (-x-x)$
$\dot{x} = {-2x \over \tau}$
Consistent with the figures above: with inputs of $\pm 1$, the steady state satisfies $0 = -2x \pm 1$, so $x=\pm 0.5$
And $f(x)=x^2$?
$\dot{x} = {1 \over \tau} (x^2-x)$
Consistent with the figure: at an input of 0.2, the fixed points satisfy $0=x^2-x+0.2$, i.e. $x \approx 0.28$ and $x \approx 0.72$; for an input of 0.4 the solutions are imaginary (no fixed point, so $x$ runs away).
For zero input the fixed points are $x = 0$ (stable) and $x = 1$ (unstable) ... what if we push $x$ over 1 before turning off the input?
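A quick numeric check of these fixed points (plain numpy, outside the neural model): the equilibria of $\dot{x} = {1 \over \tau}(x^2 - x + u)$ are the roots of $x^2 - x + u$.

```python
import numpy as np

# Fixed points of dx/dt = (1/tau)(x^2 - x + u) are the roots of x^2 - x + u = 0.
def fixed_points(u):
    """Return the (possibly complex) roots of x**2 - x + u."""
    return np.roots([1.0, -1.0, u])

print(fixed_points(0.2))  # two real fixed points near 0.72 and 0.28
print(fixed_points(0.4))  # complex roots -> no fixed point, x runs away
print(fixed_points(0.0))  # 0 and 1: stable at 0, unstable at 1
```

This matches the behaviour in the plot: the state sticks near 0.28 for small input, and shoots off once the fixed points vanish.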
Synthesis
What if there's some differential equation we really want to implement?
We want $\dot{x} = f(x)$
So we do a recurrent connection of $f'(x)=\tau f(x)+x$
The resulting model will end up implementing $\dot{x} = {1 \over \tau} (\tau f(x)+x-x)=f(x)$
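A minimal sanity check of this synthesis rule, sketched with plain Euler integration rather than neurons (the function names here are just for illustration):

```python
import numpy as np

# Pure-numpy sketch (no neurons): feeding f'(x) = tau*f(x) + x through a
# first-order synapse with time constant tau implements dx/dt = f(x).
# With f(x) = -x, the state should decay as x(t) = exp(-t).
tau = 0.05
dt = 1e-4
T = 2.0

f = lambda x: -x
fprime = lambda x: tau * f(x) + x

x = 1.0
for _ in range(int(T / dt)):
    # synapse dynamics: dx/dt = (f'(x) - x) / tau
    x += dt * (fprime(x) - x) / tau

print(x)  # ~0.135, close to exp(-2)
```

Note the decay rate is set by the target dynamics $f(x)$, not by the synaptic $\tau$ — that's exactly what the cancellation in the derivation buys us.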
Inputs
What happens if there's an input as well?
We'll call the input $u$ from another population, and it is also computing some function $g(u)$
$x(t) = f(x(t))*h(t)+g(u(t))*h(t)$
Follow the same derivation steps
$\dot{x} = {1 \over \tau} (f(x)-x + g(u))$
So if you have some input that you want added to $\dot{x}$, you need to scale it by $\tau$
This lets us do any differential equation of the form $\dot{x}=f(x)+g(u)$
A derivation
Linear systems
Let's take a step back and look at just linear systems
The book shows that we can implement any equation of the form
$\dot{x}(t) = A x(t) + B u(t)$
Where $A$ and $x$ are a matrix and vector -- giving a standard control theoretic structure
<img src="files/lecture5/control_sys.png" width="600">
Our goal is to convert this to a structure which has $h(t)$ as the transfer function instead of the standard $\int$
<img src="files/lecture5/control_sysh.png" width="600">
Using Laplace on the standard form gives:
$sX(s) = A X(s) + B U(s)$
Laplace on the 'neural control' form gives (as before where $F(s) = A'X(s) + B'U(s)$):
$X(s) = {1 \over {1 + s\tau}} (A'X(s) + B'U(s))$
$X(s) + \tau sX(s) = (A'X(s) + B'U(s))$
$sX(s) = {1 \over \tau} (A'X(s) + B'U(s) - X(s))$
$sX(s) = {1 \over \tau} ((A' - I) X(s) + B'U(s))$
Making the 'standard' and 'neural' equations equal to one another, we find that for any system with a given A and B, the A' and B' of the equivalent neural system are given by:
$A' = \tau A + I$ and
$B' = \tau B$
where $I$ is the identity matrix
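This mapping is easy to wrap in a helper (a hypothetical utility for illustration, not part of Nengo's API):

```python
import numpy as np

# Convert a standard linear system dx/dt = Ax + Bu into the recurrent/input
# matrices (A', B') for a population with synaptic time constant tau:
# A' = tau*A + I, B' = tau*B.
def neural_lti(A, B, tau):
    A = np.atleast_2d(A)
    B = np.atleast_2d(B)
    return tau * A + np.eye(A.shape[0]), tau * B

# A pure integrator dx/dt = u:  A = 0, B = 1  ->  A' = 1, B' = tau
Ap, Bp = neural_lti([[0.0]], [[1.0]], tau=0.1)
print(Ap, Bp)  # [[1.]] [[0.1]]
```

The integrator example below is exactly this case: a feedback transform of 1 and an input transform of $\tau$.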
This is nice because lots of engineers think of the systems they build in these terms (i.e. as linear control systems).
Nonlinear systems
In fact, these same steps can be taken to account for nonlinear control systems as well:
$\dot{x}(t) = f(x(t),u(t),t)$
For a neural system with transfer function $h(t)$:
$X(s) = H(s)F'(X(s),U(s),s)$
$X(s) = {1 \over {1 + s\tau}} F'(X(s),U(s),s)$
$sX(s) = {1 \over \tau} (F'(X(s),U(s),s) - X(s))$
This gives the general result (slightly more general than what we saw earlier):
$F'(X(s),U(s),s) = \tau(F(X(s),U(s),s)) + X(s)$
Applications
Eye control
Part of the brainstem called the nuclei prepositus hypoglossi
Input is eye velocity $v$
Output is eye position $x$
$\dot{x}=v$
This is an integrator ($x$ is the integral of $v$)
It's a linear system, so, to get it in the standard control form $\dot{x}=Ax+Bu$ we have:
$A=0$
$B=1$
So that means we need $A'=\tau 0 + I = 1$ and $B'=\tau 1 = \tau$
<img src="files/lecture5/eye_sys.png" width="400">

import nengo
from nengo.utils.functions import piecewise
from nengo.utils.ensemble import tuning_curves
tau = 0.01
model = nengo.Network('Eye control', seed=8)
with model:
stim = nengo.Node(piecewise({.3:1, .6:0 }))
velocity = nengo.Ensemble(100, dimensions=1)
position = nengo.Ensemble(20, dimensions=1)
def feedback(x):
return 1*x
conn = nengo.Connection(stim, velocity)
conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)
conn = nengo.Connection(position, position, function=feedback, synapse=tau)
stim_p = nengo.Probe(stim)
position_p = nengo.Probe(position, synapse=.01)
velocity_p = nengo.Probe(velocity, synapse=.01)
sim = nengo.Simulator(model)
sim.run(1)
x, A = tuning_curves(position, sim)
plot(x,A)
figure()
plot(sim.trange(), sim.data[stim_p], label = "stim")
plot(sim.trange(), sim.data[position_p], label = "position")
plot(sim.trange(), sim.data[velocity_p], label = "velocity")
legend(loc="best");
That's pretty good... the area under the input is about equal to the magnitude of the output.
But, in order to be a perfect integrator, we'd need exactly $x=1\times x$
We won't get exactly that
Neural implementations are always approximations
Two forms of error:
$E_{distortion}$, the decoding error
$E_{noise}$, the random noise error
What will they do?
Distortion error
<img src="files/lecture5/integrator_error.png">
What affects this?

import nengo
from nengo.dists import Uniform
from nengo.utils.ensemble import tuning_curves
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(100, dimensions=1, max_rates=Uniform(100,200))
connection = nengo.Connection(neurons, neurons)
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
xhat = numpy.dot(A, d)
x, A = tuning_curves(neurons, sim)
plot(x,A)
figure()
plot(x, xhat-x)
axhline(0, color='k')
xlabel('$x$')
ylabel('$\hat{x}-x$');
We can think of the distortion error as introducing a bunch of local attractors into the representation
Any 'downward' x-crossing will be a stable point ('upwards' is unstable).
There will be a tendency to drift towards one of these even if the input is zero.
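One way to see these attractors numerically — here with a made-up sinusoidal distortion curve standing in for the real decoding error (NOT decoders from an actual network):

```python
import numpy as np

# The integrator's effective fixed points are the zero crossings of
# e(x) = xhat(x) - x; 'downward' crossings (negative slope) are stable
# attractors, 'upward' crossings are unstable.
x = np.linspace(-0.99, 0.99, 2000)
e = 0.02 * np.sin(6 * np.pi * x)            # hypothetical decoding error
crossing = np.diff(np.sign(e)) != 0         # intervals containing a zero
slope = np.diff(e) / np.diff(x)
stable = x[:-1][crossing & (slope < 0)]     # attractors (downward crossings)
unstable = x[:-1][crossing & (slope > 0)]   # repellers (upward crossings)
print(len(stable), len(unstable))           # 6 5
```

With more neurons the error curve shrinks and its crossings move, so the spurious attractors get weaker and the integrator drifts less.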
Noise error
What will random noise do?
Push the representation back and forth
What if it is small?
What if it is large?
What will changing the post-synaptic time constant $\tau$ do?
How does that interact with noise?
Real neural integrators
But real eyes aren't perfect integrators
If you get someone to look at something, then turn off the lights but tell them to keep looking in the same direction, their eye will drift back to centre (with about a 70 s time constant)
How do we implement that?
$\dot{x}=-{1 \over \tau_c}x + v$
$\tau_c$ is the time constant of that return to centre
$A'=\tau {-1 \over \tau_c}+1$
$B' = \tau$

import nengo
from nengo.utils.functions import piecewise
tau = 0.1
tau_c = 2.0
model = nengo.Network('Eye control', seed=5)
with model:
stim = nengo.Node(piecewise({.3:1, .6:0 }))
velocity = nengo.Ensemble(100, dimensions=1)
position = nengo.Ensemble(200, dimensions=1)
def feedback(x):
return (-tau/tau_c + 1)*x
conn = nengo.Connection(stim, velocity)
conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)
conn = nengo.Connection(position, position, function=feedback, synapse=tau)
stim_p = nengo.Probe(stim)
position_p = nengo.Probe(position, synapse=.01)
velocity_p = nengo.Probe(velocity, synapse=.01)
sim = nengo.Simulator(model)
sim.run(5)
plot(sim.trange(), sim.data[stim_p], label = "stim")
plot(sim.trange(), sim.data[position_p], label = "position")
plot(sim.trange(), sim.data[velocity_p], label = "velocity")
legend(loc="best");
That also looks right. Note that as $\tau_c \rightarrow \infty$ this will approach the integrator.
Humans (a) and Goldfish (b)
Humans have more neurons doing this than goldfish (~1000 vs ~40)
They also have slower decay (70 s vs. 10 s).
Why do these fit together?
<img src="files/lecture5/integrator_decay.png">
Controlled Integrator
What if we want an integrator where we can adjust the decay on-the-fly?
Separate input telling us what the decay constant $d$ should be
$\dot{x} = -d x + v$
So there are two inputs: $v$ and $d$
This is no longer in the standard $Ax + Bu$ form. Sort of...
Let $A = -d(t)$, so it's not a matrix
But it is of the more general form: ${dx \over dt}=f(x)+g(u)$
We need to compute a nonlinear function of an input ($d$) and the state variable ($x$)
How can we do this?
Going to 2D so we can compute the nonlinear function
Let's have the state variable be $[x, d]$
<img src="files/lecture5/controlled_integrator.png" width="600">

import nengo
from nengo.utils.functions import piecewise
tau = 0.1
model = nengo.Network('Controlled integrator', seed=1)
with model:
vel = nengo.Node(piecewise({.2:1.5, .5:0 }))
dec = nengo.Node(piecewise({.7:.2, .9:0 }))
velocity = nengo.Ensemble(100, dimensions=1)
decay = nengo.Ensemble(100, dimensions=1)
position = nengo.Ensemble(400, dimensions=2)
def feedback(x):
return -x[1]*x[0]+x[0], 0
conn = nengo.Connection(vel, velocity)
conn = nengo.Connection(dec, decay)
conn = nengo.Connection(velocity, position[0], transform=tau, synapse=tau)
conn = nengo.Connection(decay, position[1], synapse=0.01)
conn = nengo.Connection(position, position, function=feedback, synapse=tau)
position_p = nengo.Probe(position, synapse=.01)
velocity_p = nengo.Probe(velocity, synapse=.01)
decay_p = nengo.Probe(decay, synapse=.01)
sim = nengo.Simulator(model)
sim.run(1)
plot(sim.trange(), sim.data[decay_p])
lineObjects = plot(sim.trange(), sim.data[position_p])
plot(sim.trange(), sim.data[velocity_p])
legend(('decay','position','decay','velocity'),loc="best");
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/controlled_integrator.py.cfg")
Other fun functions
Oscillator
$F = -kx = m \ddot{x}$ let $\omega = \sqrt{\frac{k}{m}}$
$\frac{d}{dt} \begin{bmatrix}
\omega x \\
\dot{x}
\end{bmatrix}
=
\begin{bmatrix}
0 & \omega \\
-\omega & 0
\end{bmatrix}
\begin{bmatrix}
x_0 \\
x_1
\end{bmatrix}$
Therefore, with the above (taking $\omega = 1$), $\dot{x}=[x_1, -x_0]$

import nengo
model = nengo.Network('Oscillator')
freq = -.5
with model:
stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])
osc = nengo.Ensemble(200, dimensions=2)
def feedback(x):
return x[0]+freq*x[1], -freq*x[0]+x[1]
nengo.Connection(osc, osc, function=feedback, synapse=.01)
nengo.Connection(stim, osc)
osc_p = nengo.Probe(osc, synapse=.01)
sim = nengo.Simulator(model)
sim.run(.5)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[osc_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[osc_p][:,0],sim.data[osc_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/oscillator.py.cfg")
Lorenz Attractor (a chaotic attractor)
$\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \over 3}(x_2+28)-28]$

import nengo
model = nengo.Network('Lorenz Attractor', seed=3)
tau = 0.1
sigma = 10
beta = 8.0/3
rho = 28
def feedback(x):
dx0 = -sigma * x[0] + sigma * x[1]
dx1 = -x[0] * x[2] - x[1]
dx2 = x[0] * x[1] - beta * (x[2] + rho) - rho
return [dx0 * tau + x[0],
dx1 * tau + x[1],
dx2 * tau + x[2]]
with model:
lorenz = nengo.Ensemble(2000, dimensions=3, radius=60)
nengo.Connection(lorenz, lorenz, function=feedback, synapse=tau)
lorenz_p = nengo.Probe(lorenz, synapse=tau)
sim = nengo.Simulator(model)
sim.run(14)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[lorenz_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[lorenz_p][:,0],sim.data[lorenz_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/lorenz.py.cfg")
Note: This is not the original Lorenz attractor.
The original is $\dot{x}=[10x_1-10x_0, x_0 (28-x_2)-x_1, x_0 x_1 - {8 \over 3}(x_2)]$
Why change it to $\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \over 3}(x_2+28)-28]$?
What's being changed here?
Oscillators with different paths
Since we can implement any function, we're not limited to linear oscillators
What about a "square" oscillator?
Instead of the value going in a circle, it traces out a square
$$
{\dot{x}} = \begin{cases}
[r, 0] &\mbox{if } |x_1|>|x_0| \wedge x_1>0 \\
[-r, 0] &\mbox{if } |x_1|>|x_0| \wedge x_1<0 \\
[0, -r] &\mbox{if } |x_1|<|x_0| \wedge x_0>0 \\
[0, r] &\mbox{if } |x_1|<|x_0| \wedge x_0<0
\end{cases}
$$

import nengo
model = nengo.Network('Square Oscillator')
tau = 0.02
r=6
def feedback(x):
if abs(x[1])>abs(x[0]):
if x[1]>0: dx=[r, 0]
else: dx=[-r, 0]
else:
if x[0]>0: dx=[0, -r]
else: dx=[0, r]
return [tau*dx[0]+x[0], tau*dx[1]+x[1]]
with model:
stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])
square_osc = nengo.Ensemble(1000, dimensions=2)
nengo.Connection(square_osc, square_osc, function=feedback, synapse=tau)
nengo.Connection(stim, square_osc)
square_osc_p = nengo.Probe(square_osc, synapse=tau)
sim = nengo.Simulator(model)
sim.run(2)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[square_osc_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[square_osc_p][:,0],sim.data[square_osc_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model) #do config
Does this do what you expect?
How is it affected by:
Number of neurons?
Post-synaptic time constant?
Decoding filter time constant?
Speed of oscillation (r)?
What about this shape?

import nengo
model = nengo.Network('Heart Oscillator')
tau = 0.02
r=4
def feedback(x):
return [-tau*r*x[1]+x[0], tau*r*x[0]+x[1]]
def heart_shape(x):
theta = np.arctan2(x[1], x[0])
r = 2 - 2 * np.sin(theta) + np.sin(theta)*np.sqrt(np.abs(np.cos(theta)))/(np.sin(theta)+1.4)
return -r*np.cos(theta), r*np.sin(theta)
with model:
stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])
heart_osc = nengo.Ensemble(1000, dimensions=2)
heart = nengo.Ensemble(100, dimensions=2, radius=4)
nengo.Connection(stim, heart_osc)
nengo.Connection(heart_osc, heart_osc, function=feedback, synapse=tau)
nengo.Connection(heart_osc, heart, function=heart_shape, synapse=tau)
heart_p = nengo.Probe(heart, synapse=tau)
sim = nengo.Simulator(model)
sim.run(4)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[heart_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[heart_p][:,0],sim.data[heart_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model) #do config
We are doing things differently here
The actual $x$ value is a normal circle oscillator
The heart shape is a function of $x$
But that's just a different decoder
Would it be possible to do an oscillator where $x$ followed this shape?
How could we tell them apart in terms of neural behaviour?
Controlled Oscillator
Change the frequency of the oscillator on-the-fly
$\dot{x}=[x_1 x_2, -x_0 x_2]$

import nengo
from nengo.utils.functions import piecewise
model = nengo.Network('Controlled Oscillator')
tau = 0.1
freq = 20
def feedback(x):
return x[1]*x[2]*freq*tau+1.1*x[0], -x[0]*x[2]*freq*tau+1.1*x[1], 0
with model:
stim = nengo.Node(lambda t: [20,20] if t<.02 else [0,0])
freq_ctrl = nengo.Node(piecewise({0:1, 2:.5, 6:-1}))
ctrl_osc = nengo.Ensemble(500, dimensions=3)
nengo.Connection(ctrl_osc, ctrl_osc, function=feedback, synapse=tau)
nengo.Connection(stim, ctrl_osc[0:2])
nengo.Connection(freq_ctrl, ctrl_osc[2])
ctrl_osc_p = nengo.Probe(ctrl_osc, synapse=0.01)
sim = nengo.Simulator(model)
sim.run(8)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[ctrl_osc_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[ctrl_osc_p][:,0],sim.data[ctrl_osc_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/controlled_oscillator.py.cfg")
Sums of squares functions
Let's start by writing a set of functions for calculating sums of squared deviations from the mean (also called "sums of squared differences"), or "sums-of-squares" for short.

def sum_squares_total(groups):
"""Calculate total sum of squares ignoring groups.
groups should be a sequence of np.arrays representing the samples asssigned
to their respective groups.
"""
allobs = np.ravel(groups) # np.ravel collapses arrays or lists into a single list
grandmean = np.mean(allobs)
return np.sum((allobs - grandmean)**2)
def sum_squares_between(groups):
"""Between group sum of squares"""
ns = np.array([len(g) for g in groups])
grandmean = np.mean(np.ravel(groups))
groupmeans = np.array([np.mean(g) for g in groups])
return np.sum(ns * (groupmeans - grandmean)**2)
def sum_squares_within(groups):
"""Within group sum of squares"""
groupmeans = np.array([np.mean(g) for g in groups])
group_sumsquares = []
for i in range(len(groups)):
groupi = np.asarray(groups[i])
groupmeani = groupmeans[i]
group_sumsquares.append(np.sum((groupi - groupmeani)**2))
return np.sum(group_sumsquares)
def degrees_freedom(groups):
"""Calculate the """
N = len(np.ravel(groups))
k = len(groups)
return (k-1, N - k, N - 1)
def ANOVA_oneway(groups):
index = ['BtwGroup', 'WithinGroup', 'Total']
cols = ['df', 'SumSquares','MS','F','pval']
df = degrees_freedom(groups)
ss = sum_squares_between(groups), sum_squares_within(groups), sum_squares_total(groups)
ms = ss[0]/df[0], ss[1]/df[1], ""
F = ms[0]/ms[1], "", ""
pval = stats.f.sf(F[0], df[0], df[1]), "", ""
tbl = pd.DataFrame(index=index, columns=cols)
tbl.index.name = 'Source'
tbl.df = df
tbl.SumSquares = ss
tbl.MS = ms
tbl.F = F
tbl.pval = pval
return tbl
def ANOVA_R2(anovatbl):
SSwin = anovatbl.SumSquares[1]
SStot = anovatbl.SumSquares[2]
    return (1.0 - (SSwin/SStot))

(Source: 2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb, from the Bio204-class/bio204-notebooks repo, CC0-1.0 license.)
Simulate ANOVA under the null hypothesis of no difference in group means.

## simulate one way ANOVA under the null hypothesis of no
## difference in group means
groupmeans = [0, 0, 0, 0]
k = len(groupmeans) # number of groups
groupstds = [1] * k # standard deviations equal across groups
n = 25 # sample size
# generate samples
samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]
allobs = np.concatenate(samples)
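As a self-contained sanity check (re-implementing the sums inline rather than relying on the helpers above), the decomposition $SS_{total} = SS_{between} + SS_{within}$ should hold exactly, and the resulting F statistic should match scipy's built-in one-way ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = [rng.normal(0, 1, 25) for _ in range(4)]  # 4 groups under the null

allobs = np.concatenate(samples)
grand = allobs.mean()
ss_t = np.sum((allobs - grand) ** 2)
ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in samples)
ss_w = sum(np.sum((g - g.mean()) ** 2) for g in samples)

# the ANOVA identity: total = between + within
print(np.isclose(ss_t, ss_b + ss_w))  # True

# F statistic agrees with scipy's one-way ANOVA
k, N = len(samples), len(allobs)
F = (ss_b / (k - 1)) / (ss_w / (N - k))
F_scipy, p = stats.f_oneway(*samples)
print(np.isclose(F, F_scipy))  # True
```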