2.6 Question 6
liste1 = [1, 2, 3]
liste2 = liste1
liste2.append(4)
print(liste1)
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
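The point of this question is that `liste2 = liste1` copies the reference, not the list, so mutating `liste2` also changes `liste1`. A minimal sketch of the expected behaviour (the `liste3` copy is my addition for contrast):

```python
liste1 = [1, 2, 3]
liste2 = liste1          # both names refer to the SAME list object
liste2.append(4)
assert liste1 == [1, 2, 3, 4]   # the "original" changed too

# To get an independent copy, use list.copy() (or a full slice)
liste3 = liste1.copy()
liste3.append(5)
assert liste1 == [1, 2, 3, 4]   # unchanged this time
assert liste3 == [1, 2, 3, 4, 5]
```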
2.7 Question 7
x = 8
liste1 = []
for it in range(x // 2):
    liste1.append(x - it)
print(liste1)
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
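Tracing the loop: `range(8 // 2)` is `range(4)`, so `it` takes the values 0 through 3 and the list collects `8 - it` each time. A sketch of the expected answer:

```python
x = 8
liste1 = []
for it in range(x // 2):   # it = 0, 1, 2, 3
    liste1.append(x - it)  # appends 8, 7, 6, 5
assert liste1 == [8, 7, 6, 5]
```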
3) Code debugging In this part, the provided code does not work: either it raises an error, or it does not produce the expected results. The goal is to fix it. 3.1 Exercise 1: Syntax error 3.1.1 Exercise 1.1
# Broken version: unbalanced parenthesis in the condition
def cube_si_abs_plus_grande_que_un( x ):
    if abs( x*x*x >= 1.0 :
        print("Erreur")
        return
    return x*x*x

# Fixed version
def cube_si_abs_plus_grande_que_un(x):
    if abs(x*x*x) <= 1.0:  # malformed condition fixed
        print("Erreur")
        return
    return x*x*x

print(cube_si_abs_plus_grande_que_un(3))
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
3.1.2 Exercise 1.2
# Broken version: int(n) is not valid in a parameter list
def factorielle( int( n ) ):
    res = 1
    for it in range(1, n+1):
        res = res * it
    return res

print(factorielle(10))

# Fixed version
def factorielle(n):
    res = 1
    for it in range(1, n+1):
        res = res * it
    return res

print(factorielle(10))
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
3.1.3 Exercise 1.3
def somme_carre(n):
    res = 0
    i = 0
    while i < n:
        i += 1
        res += i*i
    return res

somme_carre(10)
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
3.1.4 Exercise 1.4
from math import srqt  # typo in the module name

def somme_sqrt(n):
    res = 0
    i = 0
    while i < n  # missing colon (and don't declare variables on the same line)
    i += 1  # indentation error
        res += sqrt(i)
    return res

from math import sqrt  # typo fixed

def somme_sqrt(n):
    res = 0
    i = 0
    while (i < n):
        i += 1
        r...
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
3.2 Exercise 3
# Broken version: range(n) stops at n-1
def somme_racine_cubique(n):
    res = 0
    for it in range(n):
        res += it**(1/3)
    return res

somme_racine_cubique(8)

# Fixed version: range(n+1) includes n
def somme_racine_cubique(n):
    res = 0
    for it in range(n+1):
        res += it**(1/3)
    return res

somme_racine_cubique(8)
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
4) Code analysis In this part, the code works; the goal is to understand its effect, or to be able to simplify it. 4.1 Exercise 1
def f(liste):
    for i in range(len(liste)):
        for j in range(i):
            if liste[i][j] != 0:
                return False
    return True

a = f([[1, 1, 1], [0, 1, 1], [0, 0, 1]])
print('Est ce que la matrice contient des 0 sous la diagonale ? {}'.format(a))
a = f([[1, 0, 0], [0, 1, 0], ...
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
4.2 Exercise 2
def f(n):
    if n == 0:
        return False
    return not f(n - 1)

a = f(12)
b = f(8)
c = f(13)
print(a, b, c)

for it in range(50):
    print('le nombre {0} est il impair ? {1}'.format(it, f(it)))
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
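The recursive function flips a boolean `n` times starting from `False`, so it is a (very expensive) parity test. A sketch of the simplification the exercise is after:

```python
def f(n):
    if n == 0:
        return False
    return not f(n - 1)

# Simplified equivalent: f(n) is True exactly when n is odd
def f_simplified(n):
    return n % 2 == 1

# The two agree on small inputs
for it in range(50):
    assert f(it) == f_simplified(it)
```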
4.3 Exercise 3
def f(n):
    res = 0
    for it in range(n):
        if it % 5 == 1:
            res = res + it
        if it % 7 == 1:
            res = res + it
        if it % 9 == 1:
            res = res + it
    return res

for it in range(50):
    print('Si on applique à {} la fonction on obtient {} '.format(it, f(it)))

def ...
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
5) Coding 5.1 Exercise 1: sum of even multiples and evaluation of their performance...
from time import clock  # NOTE: time.clock was removed in Python 3.8; use time.perf_counter instead

def duree(fonction, n=10):
    debut = clock()
    fonction(n)
    fin = clock()
    return fin - debut

def somme_n_entiers1(n):
    ref = 0
    l = range(n+1)
    for it in l:
        if it % 2 == 0:
            ref += it
    return ref

def somme_n_entiers2(n):
    it = 0
    ref = 0
    l = range(n+1)
    ...
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
5.2 Exercise 2: searching for multiples of 11
def Test_11(My_Liste):
    for it in My_Liste:
        if it % 11 == 0:
            return True  # the break that followed this return was unreachable and has been removed
    return False

Test_11([1, 2, 3, 22])
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
6 Successive differences algorithm
def div_euclidiene(a, b):
    q, r = 0, a
    while r >= b:
        q, r = q + 1, r - b
    return (q, r)

a = 546
b = 34
quotient, reste = div_euclidiene(a, b)
print("La Division euclidienne peut s'ecrire \n {0} = {1} x {2} + {3}".format(a, b, quotient, reste))
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
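The loop implements Euclidean division by repeated subtraction: it subtracts `b` from the remainder until it is smaller than `b`, counting subtractions in `q`. A quick sanity check against Python's built-in `divmod`:

```python
def div_euclidienne(a, b):
    q, r = 0, a
    while r >= b:           # subtract b until the remainder is smaller than b
        q, r = q + 1, r - b
    return (q, r)

# Agrees with the built-in quotient/remainder: 546 = 34 * 16 + 2
assert div_euclidienne(546, 34) == divmod(546, 34) == (16, 2)
```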
Bandwidth-limited ops: have to pull in more cache lines for the pointers; poor locality causes pipeline stalls.
size = 10 ** 7  # ints will be un-interned past 256

print('create a list 1, 2, ...', size)

def create_list():
    return list(range(size))

def create_array():
    return np.arange(size, dtype=int)

compare_times(create_list, create_array)

print('deep copies (no pre-allocation)')
# Shallow copy is cheap for both!
size = 10 ** ...
presentation.ipynb
vlad17/np-learn
apache-2.0
Flop-limited ops: can't engage the VPU on non-contiguous memory, so you won't saturate the computational capabilities of your CPU (note that your numpy may not be vectorized anyway, but the "saturate CPU" part still holds).
print('square out-of-place')

def square_lists(src, dst):
    for i, v in enumerate(src):
        dst[i] = v * v

def square_arrays(src, dst):
    np.square(src, out=dst)

compare_times(square_lists, square_arrays, create_lists, create_arrays)

# Caching and SSE can have huge cumulative effects
print('square in-pl...
presentation.ipynb
vlad17/np-learn
apache-2.0
Memory consumption List representation uses 8 extra bytes for every value (assuming 64-bit here and henceforth)!
from pympler import asizeof

size = 10 ** 4
print('list kb', asizeof.asizeof(list(range(size))) // 1024)
print('array kb', asizeof.asizeof(np.arange(size, dtype=int)) // 1024)
presentation.ipynb
vlad17/np-learn
apache-2.0
Disclaimer Regular python lists are still useful! They do a lot of things arrays can't: List comprehensions [x * x for x in range(10) if x % 2 == 0] Ragged nested lists [[1, 2, 3], [1, [2]]] The NumPy Array doc Abstraction We know what an array is -- a contiguous chunk of memory holding an indexed list of things from...
n0 = np.array(3, dtype=float)
n1 = np.stack([n0, n0, n0, n0])
n2 = np.stack([n1, n1])
n3 = np.stack([n2, n2])

for x in [n0, n1, n2, n3]:
    print('ndim', x.ndim, 'shape', x.shape)
    print(x)
presentation.ipynb
vlad17/np-learn
apache-2.0
Axes are read LEFT to RIGHT: an array of shape (n0, n1, ..., nN-1) has axis 0 with length n0, etc. Detour: Formal Representation Warning, these are pretty useless definitions unless you want to understand np.einsum, which is only at the end anyway. Formally, a NumPy array can be viewed as a mathematical object. If: Th...
original = np.arange(10)

# shallow copies
s1 = original[:]
s2 = s1.view()
s3 = original[:5]

print(original)
original[2] = -1
print('s1', s1)
print('s2', s2)
print('s3', s3)

id(original), id(s1.base), id(s2.base), id(s3.base), original.base
presentation.ipynb
vlad17/np-learn
apache-2.0
Dtypes $F$ (our dtype) can be (doc): boolean integral floating-point complex floating-point any structure (record array) of the above, e.g. complex integral values The dtype can also be unicode, a date, or an arbitrary object, but those don't form fields. This means that most NumPy functions aren't useful for this dat...
# Names are pretty intuitive for basic types
i16 = np.arange(100, dtype=np.uint16)
i64 = np.arange(100, dtype=np.uint64)
print('i16', asizeof.asizeof(i16), 'i64', asizeof.asizeof(i64))

# We can use arbitrary structures for our own types
# For example, exact Gaussian (complex) integers
gauss = np.dtype([('re', np.int...
presentation.ipynb
vlad17/np-learn
apache-2.0
Indexing doc Probably the most creative, unique part of the entire library. This is what makes the NumPy ndarray better than any other array. An index returns an ndarray view based on the original ndarray. Basic Indexing
x = np.arange(10)

# start:stop:step
# inclusive start, exclusive stop
print(x)
print(x[2:6:2])
print(id(x), id(x[2:6:2].base))

# Default start is 0, default end is length, default step is 1
print(x[:3])
print(x[7:])

# Don't worry about overshooting
print(x[:100])
print(x[7:2:1])

# Negatives wrap around (taken mod l...
presentation.ipynb
vlad17/np-learn
apache-2.0
Basic indices let us access hyper-rectangles with strides: <img src="assets/slices.png" alt="http://www.scipy-lectures.org/intro/numpy/numpy.html" width="300"> Advanced Indexing Arbitrary combinations of basic indexing. GOTCHA: All advanced index results are copies, not views.
m = np.arange(4 * 5).reshape(4, 5)

# 1D advanced index
display('m')
display('m[[1,2,1],:]')

print('original indices')
print(' rows', np.arange(m.shape[0]))
print(' cols', np.arange(m.shape[1]))
print('new indices')
print(' rows', ([1, 2, 1]))
print(' cols', np.arange(m.shape[1]))

# 2D advanced index
display('m')...
presentation.ipynb
vlad17/np-learn
apache-2.0
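The copy-vs-view gotcha mentioned above is easy to demonstrate: a basic slice aliases the original data, while an advanced index with the same elements does not. A minimal sketch:

```python
import numpy as np

x = np.arange(10)
basic = x[2:6]          # basic slice: a VIEW into x
adv = x[[2, 3, 4, 5]]   # advanced index: a COPY

x[2] = -1
assert basic[0] == -1   # the view sees the mutation
assert adv[0] == 2      # the copy does not
```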
Why on earth would you do the above? Selection, sampling, algorithms that are based on offsets of arrays (i.e., basically all of them). What's going on? Advanced indexing is best thought of in the following way: A typical ndarray, x, with shape (n0, ..., nN-1) has N corresponding indices. (range(n0), ..., range(nN-1))...
# GOTCHA: accidentally invoking advanced indexing
display('x')
display('x[(0, 0, 1),]')  # advanced
display('x[(0, 0, 1)]')   # basic
# best policy: don't parenthesize when you want basic
presentation.ipynb
vlad17/np-learn
apache-2.0
The above covers the case of one advanced index and the rest being basic. One other common situation that comes up in practice is every index is advanced. Recall array x with shape (n0, ..., nN-1). Let indj be integer ndarrays all of the same shape (say, (m0, ..., mM-1)). Then x[ind0, ... indN-1] has shape (m0, ..., mM...
display('m')
display('m[[1,2],[3,4]]')

# ix_: only applies to 1D indices; computes the cross product
display('m[np.ix_([1,2],[3,4])]')

# r_: concatenates slices and all forms of indices
display('m[0, np.r_[:2, slice(3, 1, -1), 2]]')

# Boolean arrays are converted to integers where they're true
# Then they're treated...
presentation.ipynb
vlad17/np-learn
apache-2.0
Indexing Applications
# Data cleanup / filtering
x = np.array([1, 2, 3, np.nan, 2, 1, np.nan])
b = ~np.isnan(x)
print(x)
print(b)
print(x[b])

# Selecting labelled data (e.g. for plotting)
%matplotlib inline
import matplotlib.pyplot as plt

# From DBSCAN sklearn ex
from sklearn.datasets.samples_generator import make_blobs
X, labels = mak...
presentation.ipynb
vlad17/np-learn
apache-2.0
Array Creation and Initialization doc If unspecified, default dtype is usually float, with an exception for arange.
display('np.linspace(4, 8, 2)')
display('np.arange(4, 8, 2)')  # GOTCHA

plt.plot(np.linspace(1, 4, 10), np.logspace(1, 4, 10))
plt.show()

shape = (4, 2)
print(np.zeros(shape))  # init to zero. Use np.ones or np.full accordingly

# [GOTCHA] np.empty won't initialize anything; it will just grab the first available chunk ...
presentation.ipynb
vlad17/np-learn
apache-2.0
Extremely extensive random generation. Remember to seed! Transposition: under the hood. So far, we've just been looking at the abstraction that NumPy offers. How does it actually keep things contiguous in memory? We have a base array, which is one long contiguous array from 0 to size - 1.
x = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print(x.shape)
print(x.size)

# Use ravel() to get the underlying flat array; ndarray.flatten() will give you a copy
print(x)
print(x.ravel())

# np.transpose or *.T will reverse axes
print('transpose', x.shape, '->', x.T.shape)

# rollaxis pulls the argument axis to axis 0, keeping ...
presentation.ipynb
vlad17/np-learn
apache-2.0
Transposition Example: Kronecker multiplication Based on Saatci 2011 (PhD thesis). Recall the tensor product over vector spaces $V \otimes W$ from before. If $V$ has basis $\textbf{v}_i$ and $W$ has $\textbf{w}_j$, we can define the tensor product over elements $\nu\in V,\omega\in W$ as follows. Let $\nu= \sum_{i=1}^n\...
# Kronecker demo
A = np.array([[1, 1/2], [-1/2, -1]])
B = np.identity(2)

f, axs = plt.subplots(2, 2)
# Guess what a 2x2 axes subplot type is?
print(type(axs))

# Use of numpy for convenience: arbitrary object flattening
for ax in axs.ravel():
    ax.axis('off')
ax1, ax2, ax3, ax4 = axs.ravel()
ax1.imshow(A, vmi...
presentation.ipynb
vlad17/np-learn
apache-2.0
Ufuncs and Broadcasting doc
# A ufunc is the most common way to modify arrays
# In its simplest form, an n-ary ufunc takes in n numpy arrays
# of the same shape, and applies some standard operation to "parallel elements"
a = np.arange(6)
b = np.repeat([1, 2], 3)
print(a)
print(b)
print(a + b)
print(np.add(a, b))

# If any of the arguments are o...
presentation.ipynb
vlad17/np-learn
apache-2.0
Aliasing You can save on allocations and copies by providing the output array to write into. Aliasing occurs when all or part of the input is repeated in the output. Ufuncs allow aliasing.
# Example: generating random symmetric matrices
A = np.random.randint(0, 10, size=(3, 3))
print(A)
A += A.T  # this operation is WELL-DEFINED, even though A is changing
print(A)
# Above is sugar for np.add(A, A.T, out=A)

x = np.arange(10)
print(x)
np.subtract(x[:5], x[5:], x[:5])
print(x)
presentation.ipynb
vlad17/np-learn
apache-2.0
[GOTCHA]: If it's not a ufunc, aliasing is VERY BAD: Search for "In general the rule" in this discussion. Ufunc aliasing is safe since this pr
x = np.arange(2 * 2).reshape(2, 2)
try:
    x.dot(np.arange(2), out=x)
    # GOTCHA: some other functions won't warn you!
except ValueError as e:
    print(e)
presentation.ipynb
vlad17/np-learn
apache-2.0
Configuration and Hardware Acceleration NumPy works quickly because it can perform vectorization by linking to C functions that were built for your particular system. [GOTCHA] There are two different high-level ways in which NumPy uses hardware to accelerate your computations. Ufunc When you perform a built-in ufunc: *...
# Great resources to learn einsum:
# https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/
# http://ajcr.net/Basic-guide-to-einsum/

# Examples of how it's general:
np.random.seed(1234)
x = np.random.randint(-10, 11, size=(2, 2, 2))
print(x)

# Swap axes
print(np.einsum('ijk->kji', x))

# Sum [...
presentation.ipynb
vlad17/np-learn
apache-2.0
General Einsum Approach Again, lots of visuals in this blog post. [GOTCHA] You can't use more than 52 different letters. But if you find yourself writing np.einsum with more than 52 active dimensions, you should probably make two np.einsum calls. If you have dimensions for which nothing happens, then ... can be used ...
# Let the contiguous blocks of letters be words
# If they're on the left, they're argument words; on the right, result words.
np.random.seed(1234)
x = np.random.randint(-10, 11, 3 * 2 * 2 * 1).reshape(3, 2, 2, 1)
y = np.random.randint(-10, 11, 3 * 2 * 2).reshape(3, 2, 2)
z = np.random.randint(-10, 11, 2 * 3).reshape(2...
presentation.ipynb
vlad17/np-learn
apache-2.0
Neural Nets with Einsum Original post <table> <tr> <th> <img src="assets/mlp1.png" alt="https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/" width="600" > </th><th> <img src="assets/mlp2.png" alt="https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/" width="600" > </th...
np.random.seed(1234)
a = 3
b = 300
Bs = np.random.randn(10, a, a)
Ds = np.random.randn(10, b)  # just the diagonal
z = np.random.randn(a * b)

def quadratic_impl():
    K = np.zeros((a * b, a * b))
    for B, D in zip(Bs, Ds):
        K += np.kron(B, np.diag(D))
    return K.dot(z)

def einsum_impl():
    # Ellipses tr...
presentation.ipynb
vlad17/np-learn
apache-2.0
Regression with TensorFlow We have trained a linear regression model in TensorFlow and used it to predict housing prices. However, the model didn't perform as well as we would have liked it to. In this lab, we will build a neural network to try to tackle the same regression problem and see if we can get better results....
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Once you are done, use the kaggle command to download the file into the lab.
!kaggle datasets download camnugent/california-housing-prices
!ls
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We now have a file called california-housing-prices.zip that we can load into a DataFrame.
import pandas as pd

housing_df = pd.read_csv('california-housing-prices.zip')
housing_df
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Next we can define which columns are features and which is the target. We'll also make a separate list of our numeric columns.
target_column = 'median_house_value'
feature_columns = [c for c in housing_df.columns if c != target_column]
numeric_feature_columns = [c for c in feature_columns if c != 'ocean_proximity']

target_column, feature_columns, numeric_feature_columns
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We also reduced the value of our targets by a factor in the previous lab. This reduction in magnitude was done to help the model train faster. Let's do that again.
TARGET_FACTOR = 100000
housing_df[target_column] /= TARGET_FACTOR
housing_df[target_column].describe()
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
And we filled in some missing total_bedrooms values.
has_all_data = housing_df[~housing_df['total_bedrooms'].isna()]
sums = has_all_data[['total_bedrooms', 'total_rooms']].sum().tolist()
bedrooms_to_total_rooms_ratio = sums[0] / sums[1]

missing_total_bedrooms_idx = housing_df['total_bedrooms'].isna()
housing_df.loc[missing_total_bedrooms_idx, 'total_bedrooms'] = hous...
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 1: Standardization Previously when we worked with this dataset, we normalized the feature data in order to get it ready for the model. Normalization was the process of making all of the data fit between 0.0 and 1.0 by subtracting the minimum of each column from each data point in that column and then dividing ...
# Your Code Goes Here
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
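One possible approach to the exercise (a sketch, not the lab's answer key): standardization subtracts each column's mean and divides by its standard deviation, leaving every column with mean 0 and standard deviation 1. Since `housing_df` is loaded elsewhere, the `df` below is a hypothetical stand-in for the numeric feature columns.

```python
import pandas as pd

# Hypothetical stand-in frame for housing_df[numeric_feature_columns]
df = pd.DataFrame({'rooms': [1.0, 2.0, 3.0], 'income': [10.0, 20.0, 30.0]})

# Standardize: zero mean, unit (sample) standard deviation per column
standardized = (df - df.mean()) / df.std()
```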
One-Hot Encoding The ocean_proximity column will not work with the neural network model that we are planning to build. Neural networks expect numeric values, but ocean_proximity contains string values. Let's remind ourselves which values it contains:
sorted(housing_df['ocean_proximity'].unique())
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
There are five string values. In our linear regression Colab we told TensorFlow to treat these values as a categorical column. Each string was converted to a whole number that represented their position in a vocabulary list: 0, 1, 2, 3, or 4. For neural networks it is common to see another strategy called one-hot encod...
for op in sorted(housing_df['ocean_proximity'].unique()):
    op_col = op.lower().replace(' ', '_').replace('<', '')
    housing_df[op_col] = (housing_df['ocean_proximity'] == op).astype(int)
    feature_columns.append(op_col)

feature_columns.remove('ocean_proximity')
housing_df
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 2: Split the Data We want to hold out some of the data for validation. Using standard Python or a library, split the data. Put 20% of the data in a DataFrame called testing_df and the other 80% in a DataFrame called training_df. Be sure to shuffle the data before splitting. Print the number of records in testi...
# Your Code Goes Here
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
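One way to do the shuffle-and-split (a sketch with a hypothetical stand-in `df`, since `housing_df` is built earlier in the lab): `DataFrame.sample(frac=1.0)` shuffles the rows, then integer slicing takes the first 80% for training and the rest for testing.

```python
import pandas as pd

# Hypothetical stand-in for housing_df
df = pd.DataFrame({'x': range(100), 'y': range(100)})

# Shuffle, then split 80/20
shuffled = df.sample(frac=1.0, random_state=42)
split = int(len(shuffled) * 0.8)
training_df = shuffled.iloc[:split]
testing_df = shuffled.iloc[split:]

print(len(training_df), len(testing_df))
```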
Building the Model We will build the model using TensorFlow 2. Let's enable it and go ahead and load up TensorFlow.
%tensorflow_version 2.x
import tensorflow as tf
tf.__version__
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
When we built a TensorFlow LinearRegressor in a previous lab, we were using a pre-configured model. For our neural network regressor, we will build the model ourselves using the Keras API of TensorFlow. We'll build a sequential model where one layer feeds into the next. Each layer will be densely connected, which means...
from tensorflow import keras
from tensorflow.keras import layers

# Create the Sequential model.
model = keras.Sequential()

# Determine the "input shape", which is the number
# of features that we will feed into the model.
input_shape = len(feature_columns)

# Create a layer that accepts our features and outputs
# a s...
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Above we have basically recreated our linear regression from an earlier lab. We have all of our inputs directly mapping to a single output. We didn't choose an activation function, and the default activation function for a Dense layer is a linear function $f(x) = x$. Note that the way we built this model was pretty ver...
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(layers=[
    layers.Dense(1, input_shape=[len(feature_columns)])
])
model.summary()
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Also notice that the layers are named dense_1, dense_2, etc. If you don't supply a name for a layer, TensorFlow will provide a name for you. In small models, this isn't a problem, but you might want to have a meaningful layer name in larger models. Even in simple models, is dense_2 a good name for the first layer in a ...
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(layers=[
    layers.Dense(
        1,
        input_shape=[len(feature_columns)],
        # Name your layer here
    )
])
model.summary()
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Which class did the parameter that you used originate from? Your answer goes here Making a Deep Neural Network Where neural networks really get powerful is when you add hidden layers. These hidden layers can find complex patterns in your data. Let's create a model with a few hidden layers. We'll add two layers with ...
from tensorflow import keras
from tensorflow.keras import layers

feature_count = len(feature_columns)

model = keras.Sequential([
    layers.Dense(64, input_shape=[feature_count]),
    layers.Dense(64),
    layers.Dense(1)
])
model.summary()
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We now have a deep neural network model. The model has 13 input nodes. These nodes feed into our first hidden layer of 64 nodes. The first line of our model summary tells us that we have 64 nodes and 896 parameters. The node count in 'Output Shape' makes sense, but what about the 'Param #' of 896? Remember that we have...
model.compile(
    loss='mse',
    optimizer='Adam',
    metrics=['mae', 'mse'],
)
model.summary()
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Training the Model We can now train the model using the fit() method. Training is performed for a specified number of epochs. An epoch is a full pass over the training data. In this case, we are asking to train over the full dataset 50 times. In order to get the data into the model, we don't have to write an input func...
EPOCHS = 50

model.fit(
    training_df[feature_columns],
    training_df[target_column],
    epochs=EPOCHS,
    validation_split=0.2,
)
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Validating the Model We can now see how well our model performs on our validation test set. In order to get the model to make predictions, we use the predict method.
predictions = model.predict(testing_df[feature_columns])
predictions
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Notice that the predictions are lists of lists. This is because neural networks can return more than one prediction per input. We set this network up to have a single final node, but could have had more. Exercise 4: Calculating RMSE At this point we have the predicted values from our test features and the actual values...
# Your Code Goes here
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
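One way to compute the RMSE (a sketch with hypothetical values, since the real `predictions` and targets come from the model above): flatten the nested prediction lists, then take the square root of the mean squared difference from the actual values.

```python
import numpy as np

# Hypothetical values: model.predict returns a list of single-element lists
predictions = np.array([[1.0], [2.0], [3.0]])
actual = np.array([1.0, 2.0, 5.0])

# RMSE: flatten the nested predictions, then sqrt of the mean squared error
rmse = np.sqrt(np.mean((predictions.flatten() - actual) ** 2))
```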
Improving the Model In the exercise above, you likely got a root mean squared error very close to the error we got in the linear regression lab. What's going on? I thought deep learning models were supposed to be really, really good! Deep learning models can be really good, but they often require a bit of hyperparamete...
# Your Code Goes Here
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Visualizing Training At this point, we have a pretty solid neural network regression model. It performs better than our linear regression model, though it does take a while to train. Training time is largely a product of two factors: the size of the model and the number of epochs. Larger models take longer to train. That 
model = keras.Sequential([
    layers.Dense(64, input_shape=[feature_count]),
    layers.Dense(64),
    layers.Dense(1)
])
model.compile(
    loss='mse',
    optimizer='Adam',
    metrics=['mae', 'mse'],
)

EPOCHS = 5
history = model.fit(
    training_df[feature_columns],
    training_df[target_column],
    epochs=EPOCHS,
    verbose=0,...
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Notice that the history.history contains our model's loss (loss), mean absolute error (mae), mean squared error (mse), validation loss (val_loss), validation mean absolute error (val_mae), and validation mean squared error (val_mse) at each epoch. It would be useful to plot the error over time. In the next exercise, yo...
model = keras.Sequential([
    layers.Dense(64, input_shape=[feature_count]),
    layers.Dense(64),
    layers.Dense(1)
])
model.compile(
    loss='mse',
    optimizer='Adam',
    metrics=['mae', 'mse'],
)

EPOCHS = 100
history = model.fit(
    training_df[feature_columns],
    training_df[target_column],
    epochs=EPOCHS,
    verbose=...
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Student Solution
# Your Code Goes Here
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Interpreting Loss Visualizations We have now created a visualization that should look something like this: But how do we interpret this visualization? The blue line is the mean squared error for the training data. You can see it plummeting fast as the model quickly learns. The orange line is the validation data. This ...
model = keras.Sequential([
    layers.Dense(64, input_shape=[feature_count]),
    layers.Dense(64),
    layers.Dense(1)
])
model.compile(
    loss='mse',
    optimizer='Adam',
    metrics=['mae', 'mse'],
)

EPOCHS = 1000
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(
    training_...
content/03_regression/08_regression_with_tensorflow/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We load in the position and box information created in the intro notebook. If you haven't run that notebook, this line will not work! (You don't have to read the wall of text, just run the cells...)
import numpy as np

pos = np.loadtxt('data/positions.dat')
box = np.loadtxt('data/box.dat')

print('Read {:d} positions.'.format(pos.shape[0]))
print('x min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[0], pos.max(0)[0]))
print('y min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[1], pos.max(0)[1]))
print('z min/max: {:+4.2f}/{:...
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
Round 1: Vectorized Operations We need to re-implement the potential energy function in numpy.
import numpy as np

def potentialEnergyFunk(r, width=1.0, height=10.0):
    '''
    Calculates the (soft) potential energy between two atoms

    Parameters
    ----------
    r: ndarray (float)
        separation distances between two atoms
    height: float
        breadth of the potential i.e. where the potential ...
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
We can plot the potential energy again just to make sure this function behaves as expected.
%%opts Curve [width=600,show_grid=True,height=350]

dr = 0.05           # spacing of r points
rmax = 10.0         # maximum r value
pts = int(rmax/dr)  # number of r points
r = np.arange(dr, rmax, dr)

def plotFunk(width, height, label='dynamic'):
    U = potentialEnergyFunk(r, width, height)
    return hv.Curve((r, U), kdims=['S...
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
Runtime profiling!
%%prun -D prof/numpy1.prof
energy = calcTotalEnergy1(pos, box)

with open('energy/numpy1.dat', 'w') as f:
    f.write('{}\n'.format(energy))
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
Memory profiling!
memprof = %memit -o calcTotalEnergy1(pos, box)
usage = memprof.mem_usage[0]
incr = memprof.mem_usage[0] - memprof.baseline

with open('prof/numpy1.memprof', 'w') as f:
    f.write('{}\n{}\n'.format(usage, incr))
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
Round 2: Less is More This is good, but can we do better? With this implementation, we are actually calculating twice as many potential energies as we need! Let's reimplement the above to see if we can speed up this function (and possibly reduce the memory usage).
from math import sqrt

def calcTotalEnergy2(pos, box):
    '''
    Parameters
    ----------
    pos: ndarray, size (N,3), (float)
        array of cartesian coordinate positions
    box: ndarray, size (3), (float)
        simulation box dimensions
    '''
    # sanity check
    assert box.shape[0] == 3
    ...
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
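A common way to visit each pair of atoms exactly once is `np.triu_indices`, which enumerates the strict upper triangle of the N x N pair matrix. A sketch of the idea (not the tutorial's actual `calcTotalEnergy2`; the array names here are my own):

```python
import numpy as np

# Hypothetical positions: 5 atoms in 3D
rng = np.random.default_rng(0)
pos = rng.random((5, 3))

# Indices of each unordered pair (i < j): N*(N-1)/2 pairs, each counted once
i, j = np.triu_indices(len(pos), k=1)
disp = pos[i] - pos[j]                  # displacement vectors, one per pair
r = np.sqrt((disp ** 2).sum(axis=1))    # pair separation distances
```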
Memory profiling!
memprof = %memit -o calcTotalEnergy2(pos, box)
usage = memprof.mem_usage[0]
incr = memprof.mem_usage[0] - memprof.baseline

with open('prof/numpy2.memprof', 'w') as f:
    f.write('{}\n{}\n'.format(usage, incr))
02-numpy.ipynb
martintb/pe_optimization_tutorial
mit
The Epochs data structure: discontinuous data This tutorial covers the basics of creating and working with :term:epoched <epochs> data. It introduces the :class:~mne.Epochs data structure in detail, including how to load, query, subselect, export, and plot data from an :class:~mne.Epochs object. For more informat...
import os
import mne
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
:class:~mne.Epochs objects are a data structure for representing and analyzing equal-duration chunks of the EEG/MEG signal. :class:~mne.Epochs are most often used to represent data that is time-locked to repeated experimental events (such as stimulus onsets or subject button presses), but can also be used for storing s...
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=60)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
As we saw in the tut-events-vs-annotations tutorial, we can extract an events array from :class:~mne.io.Raw objects using :func:mne.find_events:
events = mne.find_events(raw, stim_channel='STI 014')
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
<div class="alert alert-info"><h4>Note</h4><p>We could also have loaded the events from file, using :func:`mne.read_events`:: sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_aud...
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
You'll see from the output that:

- all 320 events were used to create epochs
- baseline correction was automatically applied (by default, baseline is defined as the time span from tmin to 0, but can be customized with the baseline parameter)
- no additional metadata was provided (see tut-epochs-metadata for detai...
print(epochs)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
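The baseline correction mentioned above amounts to subtracting, per epoch and channel, the mean over the baseline samples (tmin up to 0) from every time point. A minimal NumPy sketch of that idea — an illustration, not MNE's implementation:

```python
import numpy as np

times = np.linspace(-0.3, 0.7, 101)        # seconds
epochs_arr = np.ones((3, 2, 101)) * 5.0    # 3 epochs, 2 channels, constant signal

baseline_mask = (times >= -0.3) & (times <= 0)
baseline_mean = epochs_arr[:, :, baseline_mask].mean(axis=-1, keepdims=True)
corrected = epochs_arr - baseline_mean
print(corrected.max())   # a constant signal minus its own baseline mean is 0
```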
Notice that the Event IDs are in quotes; since we didn't provide an event dictionary, the :class:mne.Epochs constructor created one automatically and used the string representation of the integer Event IDs as the dictionary keys. This is more clear when viewing the event_id attribute:
print(epochs.event_id)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This time let's pass preload=True and provide an event dictionary; our provided dictionary will get stored as the event_id attribute and will make referencing events and pooling across event types easier:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
              'visual/right': 4, 'face': 5, 'buttonpress': 32}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
                    preload=True)
print(epochs.event_id)
del raw  # we're done with raw, free up some memory
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Notice that the output now mentions "1 bad epoch dropped". In the tutorial section tut-reject-epochs-section we saw how you can specify channel amplitude criteria for rejecting epochs, but here we haven't specified any such criteria. In this case, it turns out that the last event was too close to the end of the (cropped) ...
print(epochs.drop_log[-4:])
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
<div class="alert alert-info"><h4>Note</h4><p>If you forget to provide the event dictionary to the :class:`~mne.Epochs` constructor, you can add it later by assigning to the ``event_id`` attribute::

    epochs.event_id = event_dict</p></div>

Basic visualization of Epochs objects

The :class:~mne.Epochs obj...
epochs.plot(n_epochs=10)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Notice that the individual epochs are sequentially numbered along the bottom axis and are separated by vertical dashed lines. Epoch plots are interactive (similar to :meth:raw.plot() <mne.io.Raw.plot>) and have many of the same interactive controls as :class:~mne.io.Raw plots. Horizontal and vertical scrollbars a...
print(epochs['face'])
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can also pool across conditions easily, thanks to how MNE-Python handles the / character in epoch labels (using what is sometimes called "tag-based indexing"):
# pool across left + right
print(epochs['auditory'])
assert len(epochs['auditory']) == (len(epochs['auditory/left']) +
                                   len(epochs['auditory/right']))

# pool across auditory + visual
print(epochs['left'])
assert len(epochs['left']) == (len(epochs['auditory/left']) + ...
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
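The '/'-based pooling can be sketched as set containment over tags: a label matches a query if every tag in the query appears among the label's tags. This is a simplified assumption about the mechanism, not MNE internals:

```python
def matches(label, query):
    """Return True if every '/'-separated tag in query is among label's tags."""
    return set(query.split('/')) <= set(label.split('/'))

labels = ['auditory/left', 'auditory/right', 'visual/left', 'visual/right']
print([lab for lab in labels if matches(lab, 'auditory')])
# ['auditory/left', 'auditory/right']
print([lab for lab in labels if matches(lab, 'left')])
# ['auditory/left', 'visual/left']
```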
You can also pool conditions by passing multiple tags as a list. Note that MNE-Python will not complain if you ask for tags not present in the object, as long as it can find some match: the below example is parsed as (inclusive) 'right' or 'bottom', and you can see from the output that it selects only auditory/right an...
print(epochs[['right', 'bottom']])
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
However, if no match is found, an error is returned:
try:
    print(epochs[['top', 'bottom']])
except KeyError:
    print('Tag-based selection with no matches raises a KeyError!')
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Selecting epochs by index

:class:~mne.Epochs objects can also be indexed with integers, :term:slices <slice>, or lists of integers. This method of selection ignores event labels, so if you want the first 10 epochs of a particular type, you can select the type first, then use integers or slices:
print(epochs[:10])    # epochs 0-9
print(epochs[1:8:2])  # epochs 1, 3, 5, 7
print(epochs['buttonpress'][:4])            # first 4 "buttonpress" epochs
print(epochs['buttonpress'][[0, 1, 2, 3]])  # same as previous line
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Selecting, dropping, and reordering channels

You can use the :meth:~mne.Epochs.pick, :meth:~mne.Epochs.pick_channels, :meth:~mne.Epochs.pick_types, and :meth:~mne.Epochs.drop_channels methods to modify which channels are included in an :class:~mne.Epochs object. You can also use :meth:~mne.Epochs.reorder_channels for t...
epochs_eeg = epochs.copy().pick_types(meg=False, eeg=True)
print(epochs_eeg.ch_names)

new_order = ['EEG 002', 'STI 014', 'EOG 061', 'MEG 2521']
epochs_subset = epochs.copy().reorder_channels(new_order)
print(epochs_subset.ch_names)
del epochs_eeg, epochs_subset
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Changing channel name and type

You can change the name or type of a channel using :meth:~mne.Epochs.rename_channels or :meth:~mne.Epochs.set_channel_types. Both methods take :class:dictionaries <dict> where the keys are existing channel names, and the values are the new name (or type) for that channel. Existing c...
epochs.rename_channels({'EOG 061': 'BlinkChannel'})
epochs.set_channel_types({'EEG 060': 'ecg'})
print(list(zip(epochs.ch_names, epochs.get_channel_types()))[-4:])

# let's set them back to the correct values before moving on
epochs.rename_channels({'BlinkChannel': 'EOG 061'})
epochs.set_channel_types({'EEG 060': 'eeg...
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Selection in the time domain

To change the temporal extent of the :class:~mne.Epochs, you can use the :meth:~mne.Epochs.crop method:
shorter_epochs = epochs.copy().crop(tmin=-0.1, tmax=0.1, include_tmax=True)

for name, obj in dict(Original=epochs, Cropped=shorter_epochs).items():
    print('{} epochs has {} time samples'
          .format(name, obj.get_data().shape[-1]))
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Cropping removed part of the baseline. When printing the cropped :class:~mne.Epochs, MNE-Python will inform you about the time period that was originally used to perform baseline correction by displaying the string "baseline period cropped after baseline correction":
print(shorter_epochs)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
However, if you wanted to expand the time domain of an :class:~mne.Epochs object, you would need to go back to the :class:~mne.io.Raw data and recreate the :class:~mne.Epochs with different values for tmin and/or tmax. It is also possible to change the "zero point" that defines the time values in an :class:~mne.Epochs ...
# shift times so that first sample of each epoch is at time zero
later_epochs = epochs.copy().shift_time(tshift=0., relative=False)
print(later_epochs.times[:3])

# shift times by a relative amount
later_epochs.shift_time(tshift=-7, relative=True)
print(later_epochs.times[:3])
del shorter_epochs, later_epochs
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
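The two shift modes reduce to simple arithmetic on the time axis: an absolute shift makes the first sample fall at tshift, while a relative shift adds tshift to every time point. A NumPy sketch under that assumption (illustrative, not MNE code):

```python
import numpy as np

times = np.arange(-0.3, 0.705, 0.01)   # -0.3 .. 0.7 s, 10 ms steps

# absolute shift: first sample lands at tshift (here tshift = 0)
absolute = times - times[0] + 0.0
# relative shift: every time point moves by tshift (here tshift = -7)
relative = absolute - 7

print(absolute[:3])
print(relative[:3])
```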
Note that although time shifting respects the sampling frequency (the spacing between samples), it does not enforce the assumption that there is a sample occurring at exactly time=0.

Extracting data in other forms

The :meth:~mne.Epochs.get_data method returns the epoched data as a :class:NumPy array <numpy.ndarray&g...
eog_data = epochs.get_data(picks='EOG 061')
meg_data = epochs.get_data(picks=['mag', 'grad'])
channel_4_6_8 = epochs.get_data(picks=slice(4, 9, 2))

for name, arr in dict(EOG=eog_data, MEG=meg_data, Slice=channel_4_6_8).items():
    print('{} contains {} channels'.format(name, arr.shape[1]))
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Note that if your analysis requires repeatedly extracting single epochs from an :class:~mne.Epochs object, epochs.get_data(item=2) will be much faster than epochs[2].get_data(), because it avoids the step of subsetting the :class:~mne.Epochs object first. You can also export :class:~mne.Epochs data to :class:Pandas Dat...
df = epochs.to_data_frame(index=['condition', 'epoch', 'time'])
df.sort_index(inplace=True)
print(df.loc[('auditory/left', slice(0, 10), slice(100, 107)),
             'EEG 056':'EEG 058'])
del df
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
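The MultiIndex slicing used above works on any pandas DataFrame with those index levels; a toy frame with made-up values shows the same pattern of selecting by condition, an inclusive epoch range, and an inclusive time range:

```python
import pandas as pd

# Toy long-format frame with the index levels (condition, epoch, time);
# the channel values are invented for the example.
df = pd.DataFrame(
    {'EEG 056': [1.0, 2.0, 3.0, 4.0]},
    index=pd.MultiIndex.from_tuples(
        [('auditory/left', 0, 100), ('auditory/left', 0, 107),
         ('auditory/left', 11, 100), ('visual/left', 0, 100)],
        names=['condition', 'epoch', 'time']),
)
df.sort_index(inplace=True)   # label slicing needs a sorted MultiIndex
sel = df.loc[('auditory/left', slice(0, 10), slice(100, 107)), :]
print(sel)
```

Note that label-based slices on a MultiIndex are inclusive at both ends, unlike positional slices.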
See the tut-epochs-dataframe tutorial for many more examples of the :meth:~mne.Epochs.to_data_frame method.

Loading and saving Epochs objects to disk

:class:~mne.Epochs objects can be loaded and saved in the .fif format just like :class:~mne.io.Raw objects, using the :func:mne.read_epochs function and the :meth:~mne.Ep...
epochs.save('saved-audiovisual-epo.fif', overwrite=True)
epochs_from_file = mne.read_epochs('saved-audiovisual-epo.fif', preload=False)
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The MNE-Python naming convention for epochs files is that the file basename (the part before the .fif or .fif.gz extension) should end with -epo or _epo, and a warning will be issued if the filename you provide does not adhere to that convention. As a final note, be aware that the class of the epochs object is differen...
print(type(epochs))
print(type(epochs_from_file))
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
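The naming convention described above can be checked with a small helper; this is a sketch of the rule as stated (basename ends in -epo or _epo before the .fif / .fif.gz extension), not MNE's actual validation code:

```python
def follows_epochs_convention(fname):
    """Check the -epo/_epo basename convention for epochs filenames."""
    for ext in ('.fif.gz', '.fif'):
        if fname.endswith(ext):
            base = fname[:-len(ext)]
            return base.endswith('-epo') or base.endswith('_epo')
    return False

print(follows_epochs_convention('saved-audiovisual-epo.fif'))   # True
print(follows_epochs_convention('saved-audiovisual.fif'))       # False
```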
In almost all cases this will not require changing anything about your code. However, if you need to do type checking on epochs objects, you can test against the base class that these classes are derived from:
print(all([isinstance(epochs, mne.BaseEpochs), isinstance(epochs_from_file, mne.BaseEpochs)]))
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Iterating over Epochs

Iterating over an :class:~mne.Epochs object will yield :class:arrays <numpy.ndarray> rather than single-trial :class:~mne.Epochs objects:
for epoch in epochs[:3]:
    print(type(epoch))
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
If you want to iterate over :class:~mne.Epochs objects, you can use an integer index as the iterator:
for index in range(3):
    print(type(epochs[index]))
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
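The contrast between the two access styles can be reproduced with a toy container that yields plain arrays on iteration but returns a container object on integer indexing. This mimics the behaviour described above; it is not MNE code:

```python
import numpy as np

class ToyEpochs:
    def __init__(self, data):
        self.data = np.asarray(data)   # shape (n_epochs, n_channels, n_times)

    def __getitem__(self, idx):
        # indexing returns a new container, like epochs[index]
        return ToyEpochs(self.data[idx:idx + 1])

    def __iter__(self):
        # iteration yields raw ndarrays, one per epoch
        return iter(self.data)

ep = ToyEpochs(np.zeros((3, 2, 5)))
print(type(next(iter(ep))))   # ndarray
print(type(ep[0]))            # ToyEpochs
```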
Step 1: Fit the Initial Random Forest

Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
breast_cancer = load_breast_cancer()  # don't shadow the load_breast_cancer function
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(
    n_estimators=10)
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
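A minimal sketch of this step using scikit-learn's RandomForestClassifier directly; the train/test split and hyperparameters below are illustrative choices, not the notebook's exact setup (which wraps this in generate_rf_example):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          random_state=0)

# every feature enters with equal weight; iRF later reweights them
rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_tr, y_tr)
print(round(rf.score(X_te, y_te), 2))
```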
Check out the data
print("Training feature dimensions", X_train.shape, sep = ":\n")
print("\n")
print("Training outcome dimensions", y_train.shape, sep = ":\n")
print("\n")
print("Test feature dimensions", X_test.shape, sep = ":\n")
print("\n")
print("Test outcome dimensions", y_test.shape, sep = ":\n")
print("\n")
print("first 5 rows of...
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Step 2: Get all Random Forest and Decision Tree Data

Extract into a single dictionary the random forest data and that of all its decision trees. This is as required for RIT purposes
all_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf,
                                              X_train=X_train,
                                              y_train=y_train,
                                              X_test=X_test,
                                              y_test=y_test)
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Step 3: Get the RIT data and produce RITs
all_rit_tree_data = irf_utils.get_rit_tree_data(
    all_rf_tree_data=all_rf_tree_data,
    bin_class_type=1,
    random_state=12,
    M=10,
    max_depth=3,
    noisy_split=False,
    num_splits=2)
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Perform Manual CHECKS on the irf_utils

These should be converted to unit tests and checked with nosetests -v test_irf_utils.py

Step 4: Plot some Data

List Ranked Feature Importances
# Print the feature ranking
print("Feature ranking:")

feature_importances_rank_idx = all_rf_tree_data['feature_importances_rank_idx']
feature_importances = all_rf_tree_data['feature_importances']

for f in range(X_train.shape[1]):
    print("%d. feature %d (%f)" % (f + 1,
                                   feature_im...
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
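The ranking logic behind the truncated loop above can be sketched with argsort on toy importances (the values are made up for illustration; the real notebook reads them from all_rf_tree_data):

```python
import numpy as np

feature_importances = np.array([0.1, 0.4, 0.2, 0.3])
rank_idx = np.argsort(feature_importances)[::-1]   # indices, descending importance

for rank, feat in enumerate(rank_idx, start=1):
    print("%d. feature %d (%f)" % (rank, feat, feature_importances[feat]))
```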
Plot Ranked Feature Importances
# Plot the feature importances of the forest
feature_importances_std = all_rf_tree_data['feature_importances_std']
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1])
        , feature_importances[feature_importances_rank_idx]
        , color="r"
        , yerr = feature_importances_std[featu...
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Decision Tree 0 (First) - Get output

Check the output against the decision tree graph
# Now plot the trees individually
irf_jupyter_utils.draw_tree(decision_tree = all_rf_tree_data['rf_obj'].estimators_[0])
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Compare to our dict of extracted data from the tree
irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0'])

# Count the number of samples passing through the leaf nodes
sum(all_rf_tree_data['dtree0']['tot_leaf_node_values'])
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Check output against the diagram
irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0']['all_leaf_paths_features'])
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
Yu-Group/scikit-learn-sandbox
mit
Data Imports

These are the imports and files which will be referenced for the report
from matplotlib import pyplot as plt
from yank.reports import notebook
%matplotlib inline

report = notebook.HealthReportData(store_directory, **analyzer_kwargs)
report.report_version()
Yank/reports/YANK_Health_Report_Template.ipynb
choderalab/yank
mit