Our model in step 3 of the previous chapter was simple. It multiplied our inputs by constants (the values of $\theta$) and added them up. That is classic linear stuff. With only a slight modification of that model we can easily extend regression to any number of variables -- even millions of them!
Load the Data
# Load up the housing price data we used before
import os
import pandas as pd

path = os.getcwd() + '/Data/ex1data2.txt'
data2 = pd.read_csv(path, header=None, names=['Size', 'Bedrooms', 'Price'])
data2.head()
Chapter-4-Non-Linear-Regression.ipynb
jsub10/Machine-Learning-By-Example
gpl-3.0
Visualize the Data
We can visualize the entire dataset as follows.
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style='whitegrid', context='notebook')
cols = ['Size', 'Bedrooms', 'Price']
sns.pairplot(data2[cols], size=2.5)
plt.show()
Exercise 4-1
Based on the visuals above, how would you describe the data? Write a short paragraph describing the data.
Use Size as the Key Variable
Paradoxically, to demonstrate how multivariate non-linear regression works, we'll strip our original dataset down to one that has just Size and Price; the Bedrooms part of the data is removed. This simplifies things so that we can easily visualize what's going on by focusing on a single variable -- the size of the house. We'll turn this into a multi-variable situation in just a bit.
# Just checking on the type of object data2 is ... good to remind ourselves
type(data2)

# First drop the Bedrooms column from the data set -- we're not going to be using it for the rest of this notebook
data3 = data2.drop('Bedrooms', axis=1)
data3.head()

# Visualize this simplified data set
data3.plot.scatter(x='Size', y='Price', figsize=(8, 5))
How Polynomials Fit the Data
Let's visualize the fit for various degrees of polynomial functions.
import numpy as np
import matplotlib.pyplot as plt

# Because Price is about 100 times Size, first normalize the data
data3Norm = (data3 - data3.mean()) / data3.std()
data3Norm.head()

X = data3Norm['Size']
y = data3Norm['Price']

# fit the data with a 2nd degree polynomial
z2 = np.polyfit(X, y, 2)
p2 = np.poly1d(z2)  # construct the polynomial (note: that's a one in "poly1d")

# fit the data with a 3rd degree polynomial
z3 = np.polyfit(X, y, 3)
p3 = np.poly1d(z3)  # construct the polynomial

# fit the data with a 4th degree polynomial
z4 = np.polyfit(X, y, 4)
p4 = np.poly1d(z4)  # construct the polynomial

# fit the data with an 8th degree polynomial - just for the heck of it :-)
z8 = np.polyfit(X, y, 8)
p8 = np.poly1d(z8)  # construct the polynomial

# fit the data with a 16th degree polynomial - just for the heck of it :-)
z16 = np.polyfit(X, y, 16)
p16 = np.poly1d(z16)  # construct the polynomial

xx = np.linspace(-2, 4, 100)
plt.figure(figsize=(8, 5))
plt.plot(X, y, 'o', label='data')
plt.xlabel('Size')
plt.ylabel('Price')
plt.plot(xx, p2(xx), 'g-', label='2nd degree poly')
plt.plot(xx, p3(xx), 'y-', label='3rd degree poly')
# plt.plot(xx, p4(xx), 'r-', label='4th degree poly')
plt.plot(xx, p8(xx), 'c-', label='8th degree poly')
# plt.plot(xx, p16(xx), 'm-', label='16th degree poly')
plt.legend(loc=2)
plt.axis([-2, 4, -1.5, 3])  # Use for higher degrees of polynomials
Steps 1 and 2: Define the Inputs and the Outputs
# Add a column of 1s to the X input (keeps the notation simple)
data3Norm.insert(0, 'x0', 1)
data3Norm.head()

data3Norm.insert(2, 'Size^2', np.power(data3Norm['Size'], 2))
data3Norm.head()

data3Norm.insert(3, 'Size^3', np.power(data3Norm['Size'], 3))
data3Norm.head()

data3Norm.insert(4, 'Size^4', np.power(data3Norm['Size'], 4))
data3Norm.head()
We now have 4 input variables -- they're various powers of the one input variable we started with.
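The same $[1, x, x^{2}, x^{3}, x^{4}]$ design matrix can also be built in a single call with NumPy's np.vander -- an alternative sketch, not what the notebook does:

```python
import numpy as np

# Hypothetical example values; the notebook uses the normalized Size column.
x = np.array([0.5, 1.0, 2.0])

# increasing=True orders the columns as x^0, x^1, ..., x^4,
# matching the x0, Size, Size^2, Size^3, Size^4 layout built above.
X_poly = np.vander(x, N=5, increasing=True)
```

This replaces the four separate insert calls with one vectorized operation.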
X3 = data3Norm.iloc[:, 0:5]
y3 = data3Norm.iloc[:, 5]
Step 3: Define the Model
We're going to turn this one (independent) variable data set consisting of Size values into a dataset that will be represented by a multi-variate, polynomial model. First let's define the kind of model we're interested in. In the expressions below $x$ represents the Size of a house, and the model says that the price of the house is a polynomial function of size.
Here's a second-degree polynomial model:
Model p2 = $h_{\theta}(x) = \theta_{0}x_{0} + \theta_{1}x + \theta_{2}x^{2}$
Here's a third-degree polynomial model:
Model p3 = $h_{\theta}(x) = \theta_{0}x_{0} + \theta_{1}x + \theta_{2}x^{2} + \theta_{3}x^3$
And here's a fourth-degree polynomial model:
Model p4 = $h_{\theta}(x) = \theta_{0}x_{0} + \theta_{1}x + \theta_{2}x^{2} + \theta_{3}x^3 + \theta_{4}x^4$
Our models are more complicated than before, but $h_{\theta}(x)$ is still the same calculation as before because our inputs have been transformed to represent $x^{2}$, $x^{3}$, and $x^{4}$. We'll use Model p4 for the rest of the calculations. It's a legitimate question to ask how to decide which model to choose. We'll answer that question a few chapters later.
Step 4: Define the Parameters of the Model
$\theta_{0}$, $\theta_{1}$, $\theta_{2}$, $\theta_{3}$, and $\theta_{4}$ are the parameters of the model. Unlike our example of the boiling water in Chapter 1, these parameters can each take on an infinite number of values. $\theta_{0}$ is called the bias value. With this model, we know exactly how to transform an input into an output -- that is, once the values of the parameters are given. Let's pick a value of X from the dataset, fix specific values for $\theta_{0}$, $\theta_{1}$, $\theta_{2}$, $\theta_{3}$, and $\theta_{4}$, and see what we get for the value of y.
Specifically, let $\begin{bmatrix} \theta_{0} \\ \theta_{1} \\ \theta_{2} \\ \theta_{3} \\ \theta_{4} \end{bmatrix} = \begin{bmatrix} -10 \\ 1 \\ 0 \\ 5 \\ -1 \end{bmatrix}$ This means $\theta_{0}$ is -10, $\theta_{1}$ is 1, and so on. Let's try out X * $\theta$ for the first few rows of X.
# Outputs generated by our model for the first 5 inputs with the specific theta values below
theta_test = np.matrix('-10;1;0;5;-1')
outputs = np.matrix(X3.iloc[0:5, :]) * theta_test
outputs

# Compare with the first few values of the output
y3.head()
That's quite a bit off from the actual values; so we know that the values for $\theta$ in theta_test must be quite far from the optimal values for $\theta$ -- the values that will minimize the cost of getting it wrong.
Step 5: Define the Cost of Getting it Wrong
Our cost function is exactly the same as it was before for the single variable case. The cost of getting it wrong is defined as a function $J(\theta)$: $$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})^2$$ The only difference from what we had before is the addition of the various $\theta$s and $x$s $$h_{\theta}(X) = \theta_{0} * x_{0}\ +\ \theta_{1} * x_{1} +\ \theta_{2} * x_{2} +\ \theta_{3} * x_{3} +\ \theta_{4} * x_{4}$$ where $x_{2} = x_{1}^{2}$, $x_{3} = x_{1}^{3}$, and $x_{4} = x_{1}^{4}$.
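The computeCost helper called in the next cell was written in an earlier chapter and isn't defined in this notebook. A minimal vectorized sketch consistent with $J(\theta)$ above (the original implementation may differ in details, e.g. it may expect np.matrix inputs) looks like this:

```python
import numpy as np

def computeCost(X, y, theta):
    """J(theta) = 1/(2m) * sum((h_theta(x^(i)) - y^(i))^2).

    Sketch assumes X is an (m, n) array, y an (m,) vector, theta an (n,) vector.
    """
    m = len(y)
    errors = X @ theta - y              # h_theta(x^(i)) - y^(i) for every example
    return float(errors @ errors) / (2 * m)
```

With a perfect fit the cost is 0; otherwise it is the mean squared error halved.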
# Compute the cost for a given set of theta values over the entire dataset
# Get X and y into matrix form
computeCost(np.matrix(X3.values), np.matrix(y3.values), theta_test)
We don't know yet if this is high or low -- we'll have to try out a whole bunch of $\theta$ values. Or better yet, we can pick an iterative method and implement it.
Steps 6 and 7: Pick an Iterative Method to Minimize the Cost of Getting it Wrong and Implement It
Once again, the method that will "learn" the optimal values for $\theta$ is gradient descent. We don't have to do a thing to the function we wrote before for gradient descent. Let's use it to find the minimum cost and the values of $\theta$ that result in that minimum cost.
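Like computeCost, the gradientDescent function comes from the earlier chapter. A minimal batch-gradient-descent sketch consistent with how it is called below (written here with plain NumPy arrays; the notebook's version uses np.matrix) is:

```python
import numpy as np

def gradientDescent(X, y, theta, alpha, iters):
    """Batch gradient descent sketch: repeatedly step theta against the
    gradient of J(theta). Returns the final theta and per-iteration costs."""
    m = len(y)
    costs = []
    for _ in range(iters):
        errors = X @ theta - y                          # prediction errors
        theta = theta - (alpha / m) * (X.T @ errors)    # simultaneous update of all thetas
        costs.append(float(errors @ errors) / (2 * m))  # cost before this update
    return theta, costs
```

On a toy linear problem the recorded cost falls monotonically toward zero and theta approaches the true slope.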
theta_init = np.matrix('-1;0;1;0;-1')

# Run gradient descent
alpha = 0.00001
iters = 5000
theta_opt, cost_min = gradientDescent(np.matrix(X3.values), np.matrix(y3.values), theta_init, alpha, iters)

# The value of theta from the last iteration above
theta_opt

# The minimum cost
cost_min[-1]
Step 8: The Results
Let's make some predictions based on the values of $\theta_{opt}$. We're using our 4th-order polynomial as the model.
size = 2
size_nonnorm = (size * data3.std()[0]) + data3.mean()[0]

price = (theta_opt[0] * 1) + (theta_opt[1] * size) + (theta_opt[2] * np.power(size, 2)) \
        + (theta_opt[3] * np.power(size, 3)) + (theta_opt[4] * np.power(size, 4))
price[0, 0]

# Transform the normalized price back into the real price
price_mean = data3.mean()[1]
price_std = data3.std()[1]
price_pred = (price[0, 0] * price_std) + price_mean
price_pred

size_nonnorm
data3.mean()[1]
Define the analysis region and view on a map First, we define our area of interest using latitude and longitude coordinates. Our test region is near Richmond, NSW, Australia. The first line defines the lower-left corner of the bounding box and the second line defines the upper-right corner of the bounding box. GeoJSON format uses a specific order: (longitude, latitude), so be careful when entering the coordinates.
import math
import numpy as np
import folium

# Define the bounding box using corners
min_lon, min_lat = (150.62, -33.69)  # Lower-left corner (longitude, latitude)
max_lon, max_lat = (150.83, -33.48)  # Upper-right corner (longitude, latitude)
bbox = (min_lon, min_lat, max_lon, max_lat)
latitude = (min_lat, max_lat)
longitude = (min_lon, max_lon)

def _degree_to_zoom_level(l1, l2, margin=0.0):
    degree = abs(l1 - l2) * (1 + margin)
    if degree != 0:
        zoom_level_float = math.log(360 / degree) / math.log(2)
        zoom_level_int = int(zoom_level_float)
    else:
        zoom_level_int = 18
    return zoom_level_int

def display_map(latitude=None, longitude=None):
    margin = -0.5
    zoom_bias = 0
    lat_zoom_level = _degree_to_zoom_level(*latitude, margin=margin) + zoom_bias
    lon_zoom_level = _degree_to_zoom_level(*longitude, margin=margin) + zoom_bias
    zoom_level = min(lat_zoom_level, lon_zoom_level)
    center = [np.mean(latitude), np.mean(longitude)]
    map_hybrid = folium.Map(location=center,
                            zoom_start=zoom_level,
                            tiles="http://mt1.google.com/vt/lyrs=y&z={z}&x={x}&y={y}",
                            attr="Google")
    line_segments = [(latitude[0], longitude[0]), (latitude[0], longitude[1]),
                     (latitude[1], longitude[1]), (latitude[1], longitude[0]),
                     (latitude[0], longitude[0])]
    map_hybrid.add_child(folium.features.PolyLine(locations=line_segments, color='red', opacity=0.8))
    map_hybrid.add_child(folium.features.LatLngPopup())
    return map_hybrid

# Plot bounding box on a map
f = folium.Figure(width=600, height=600)
m = display_map(latitude, longitude)
f.add_child(m)
notebooks/Data_Challenge/LandCover.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Discover and load the data for analysis Using the pystac_client we can search the Planetary Computer's STAC endpoint for items matching our query parameters. We will look for data tiles (1-degree square) that intersect our bounding box.
import pystac_client

stac = pystac_client.Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")
search = stac.search(bbox=bbox, collections=["io-lulc"])
items = list(search.get_items())
print('Number of data tiles intersecting our bounding box:', len(items))
Next, we'll load the data into an xarray DataArray using stackstac and then "clip" the data to only the pixels within our region (bounding box). There are also several other <b>important settings for the data</b>: We have changed the projection to EPSG=4326, which is standard latitude-longitude in degrees. We have specified the spatial resolution of each pixel to be 10 meters, which is the baseline accuracy for this data. After creating the DataArray, we will need to mosaic the raster chunks across the time dimension (remember, they're all from a single synthesized "time" from 2020) and drop the single band dimension. Finally, we will read the actual data by calling .compute(). In the end, the dataset will include land cover classifications (10 total) at 10-meter spatial resolution.
import numpy as np
import stackstac
import planetary_computer as pc
from pystac.extensions.raster import RasterExtension as raster

item = next(search.get_items())
items = [pc.sign(item).to_dict() for item in search.get_items()]
nodata = raster.ext(item.assets["data"]).bands[0].nodata

# Define the pixel resolution for the final product
# Define the scale according to our selected crs, so we will use degrees
resolution = 10  # meters per pixel
scale = resolution / 111320.0  # degrees per pixel for crs=4326

data = stackstac.stack(
    items,              # use only the data from our search results
    epsg=4326,          # use common lat-lon coordinates
    resolution=scale,   # use degrees for crs=4326
    dtype=np.ubyte,     # matches the data versus the default float64
    fill_value=nodata,  # fill voids with the nodata value
    bounds_latlon=bbox  # clip to our bounding box
)

land_cover = stackstac.mosaic(data, dim="time", axis=None).squeeze().drop("band").compute()
Land Cover Map Now we will create a land cover classification map. The source GeoTIFFs contain a colormap and the STAC metadata contains the class names. We'll open one of the source files just to read this metadata and construct the right colors and names for our plot.
import rasterio
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Create a custom colormap using the file metadata
class_names = land_cover.coords["label:classes"].item()["classes"]
class_count = len(class_names)

with rasterio.open(pc.sign(item.assets["data"].href)) as src:
    colormap_def = src.colormap(1)  # get the metadata colormap for band 1
    colormap = [np.array(colormap_def[i]) / 255 for i in range(class_count)]  # matplotlib color format

cmap = ListedColormap(colormap)

image = land_cover.plot(size=8, cmap=cmap, add_colorbar=False, vmin=0, vmax=class_count)
cbar = plt.colorbar(image)
cbar.set_ticks(range(class_count))
cbar.set_ticklabels(class_names)
plt.gca().set_aspect('equal')
plt.title('Land Cover Classification')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()
Save the output data in a GeoTIFF file
filename = "Land_Cover_sample2.tiff"

# Set the dimensions of the file in pixels
height = land_cover.shape[0]
width = land_cover.shape[1]

# Define the Coordinate Reference System (CRS) to be common Lat-Lon coordinates
# Define the transformation using our bounding box so the Lat-Lon information is written to the GeoTIFF
gt = rasterio.transform.from_bounds(min_lon, min_lat, max_lon, max_lat, width, height)
land_cover.rio.write_crs("epsg:4326", inplace=True)
land_cover.rio.write_transform(transform=gt, inplace=True)

# Create the GeoTIFF output file using the defined parameters
with rasterio.open(filename, 'w', driver='GTiff', width=width, height=height,
                   crs='epsg:4326', transform=gt, count=1, compress='lzw',
                   dtype=np.ubyte) as dst:
    dst.write(land_cover, 1)

# Show the location and size of the new output file
!ls *.tiff -lah
How will the participants use this data? The GeoTIFF file will contain the Lat-Lon coordinates of each pixel and will also contain the land class for each pixel. Since the FrogID data is also Lat-Lon position, it is possible to find the closest pixel using code similar to what is demonstrated below. Once this pixel is found, then the corresponding land class can be used for modeling species distribution. In addition, participants may want to consider proximity to specific land classes. For example, there may be a positive correlation with land classes such as trees, grass or water and there may be a negative correlation with land classes such as built-up area or bare soil. These are the possible <b>land classifications</b>, reported below:<br> 1 = water, 2 = trees, 3 = grass, 4 = flooded vegetation, 5 = crops<br> 6 = scrub, 7 = built-up (urban), 8 = bare soil, 9 = snow/ice, 10=clouds
# This is an example for a specific Lon-Lat location randomly selected within our sample region
values = land_cover.sel(x=150.71, y=-33.51, method="nearest").values
print("This is the land classification for the closest pixel: ", values)
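The "proximity to specific land classes" idea can be sketched with a distance map. The helper below is hypothetical (not part of this notebook) and brute-force, suitable only for small grids; for a full raster, scipy.ndimage.distance_transform_edt would be the practical choice:

```python
import numpy as np

def distance_to_class(class_array, target_class, pixel_size=10.0):
    """Distance (in meters) from each pixel to the nearest pixel of target_class.

    Brute-force sketch: computes the distance from every pixel to every
    target-class pixel and keeps the minimum. pixel_size converts pixel
    units to meters (10 m matches the land cover resolution).
    """
    rows, cols = np.indices(class_array.shape)
    targets = np.argwhere(class_array == target_class)  # (k, 2) target coordinates
    d2 = ((rows[..., None] - targets[:, 0]) ** 2 +
          (cols[..., None] - targets[:, 1]) ** 2)       # squared pixel distances
    return np.sqrt(d2.min(axis=-1)) * pixel_size

# Toy 3x3 grid: water (class 1) in two corners, trees (class 2) elsewhere
grid = np.array([[1, 2, 2],
                 [2, 2, 2],
                 [2, 2, 1]])
dist_to_water = distance_to_class(grid, target_class=1)
```

A distance-to-water (or distance-to-built-up) layer like this could then be sampled at each FrogID location the same way as the land class itself.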
Overview Building, compiling, and running expressions with Theano What is Theano? First steps with Theano Configuring Theano Working with array structures Wrapping things up – a linear regression example Choosing activation functions for feedforward neural networks Logistic function recap Estimating probabilities in multi-class classification via the softmax function Broadening the output spectrum by using a hyperbolic tangent Training neural networks efficiently using Keras Summary <br> <br>
from IPython.display import Image
code/ch13/ch13.ipynb
wei-Z/Python-Machine-Learning
mit
Building, compiling, and running expressions with Theano Depending on your system setup, it is typically sufficient to install Theano via pip install Theano For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
Image(filename='./images/13_01.png', width=500)
<br> <br> What is Theano? ... First steps with Theano Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
import theano
from theano import tensor as T

# initialize
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0

# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)

# execute
net_input(2.0, 1.0, 0.5)
<br> <br> Configuring Theano Configuring Theano. For more options, see - http://deeplearning.net/software/theano/library/config.html - http://deeplearning.net/software/theano/library/floatX.html
print(theano.config.floatX)
theano.config.floatX = 'float32'
To change the float type globally, execute export THEANO_FLAGS=floatX=float32 in your bash shell. Or execute Python script as THEANO_FLAGS=floatX=float32 python your_script.py Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html Note that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.
print(theano.config.device)
You can run a Python script on CPU via: THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py or GPU via THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py It may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc Or, create a .theanorc file manually with the following contents [global] floatX = float32 device = gpu <br> <br> Working with array structures
import numpy as np

# initialize
# if you are running Theano in 64-bit mode,
# you need to use dmatrix instead of fmatrix
x = T.fmatrix(name='x')
x_sum = T.sum(x, axis=0)

# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)

# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))

# execute (NumPy array)
ary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
Updating shared arrays. More info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html
# initialize
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]], dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]

# compile
net_input = theano.function(inputs=[x], updates=update, outputs=z)

# execute
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
    print('z%d:' % i, net_input(data))
We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, the dataset is transferred from the CPU to the GPU multiple times -- for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
# initialize
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]], dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]

# compile
net_input = theano.function(inputs=[], updates=update, givens={x: data}, outputs=z)

# execute
for i in range(5):
    print('z:', net_input())
<br> <br> Wrapping things up: A linear regression example Creating some training data.
import numpy as np

X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
                      [5.0], [6.0], [7.0], [8.0], [9.0]],
                     dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
                      6.3, 6.6, 7.4, 8.0, 9.0],
                     dtype=theano.config.floatX)
Implementing the training function.
import theano
from theano import tensor as T
import numpy as np

def train_linreg(X_train, y_train, eta, epochs):
    costs = []

    # Initialize arrays
    eta0 = T.fscalar('eta0')
    y = T.fvector(name='y')
    X = T.fmatrix(name='X')
    w = theano.shared(np.zeros(shape=(X_train.shape[1] + 1),
                               dtype=theano.config.floatX),
                      name='w')

    # calculate cost
    net_input = T.dot(X, w[1:]) + w[0]
    errors = y - net_input
    cost = T.sum(T.pow(errors, 2))

    # perform gradient update
    gradient = T.grad(cost, wrt=w)
    update = [(w, w - eta0 * gradient)]

    # compile model
    train = theano.function(inputs=[eta0],
                            outputs=cost,
                            updates=update,
                            givens={X: X_train, y: y_train})

    for _ in range(epochs):
        costs.append(train(eta))

    return costs, w
Plotting the sum of squared errors cost vs epochs.
%matplotlib inline
import matplotlib.pyplot as plt

costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)

plt.plot(range(1, len(costs) + 1), costs)
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
Making predictions.
def predict_linreg(X, w):
    Xt = T.matrix(name='X')
    net_input = T.dot(Xt, w[1:]) + w[0]
    predict = theano.function(inputs=[Xt],
                              givens={w: w},
                              outputs=net_input)
    return predict(X)

plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
         predict_linreg(X_train, w),
         color='gray', marker='o', markersize=4, linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
<br> <br> Choosing activation functions for feedforward neural networks ... Logistic function recap The logistic function, often just called the "sigmoid function," is in fact a special case of a sigmoid function. Net input $z$: $$z = w_1x_{1} + \dots + w_mx_{m} = \sum_{j=1}^{m} x_{j}w_{j} = \mathbf{w}^T\mathbf{x}$$ Logistic activation function: $$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$ Output range: (0, 1)
# note that the first element (X[0] = 1) denotes the bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])

def net_input(X, w):
    z = X.dot(w)
    return z

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_activation(X, w):
    z = net_input(X, w)
    return logistic(z)

print('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])
Now, imagine an MLP with 3 hidden units plus 1 bias unit in the hidden layer. The output layer consists of 3 output units.
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that the first column (W[:, 0]) holds the bias-unit weights
W = np.array([[1.1, 1.2, 1.3, 0.5],
              [0.1, 0.2, 0.4, 0.1],
              [0.2, 0.5, 2.1, 1.9]])

# A : array, shape = [n_hidden_units+1, n_samples]
# Activation of hidden layer.
# note that the first element (A[0][0] = 1) is the bias unit
A = np.array([[1.0], [0.1], [0.3], [0.7]])

# Z : array, shape = [n_output_units, n_samples]
# Net input of the output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)

y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
<br> <br> Estimating probabilities in multi-class classification via the softmax function The softmax function is a generalization of the logistic function that allows us to compute meaningful class probabilities in multi-class settings (multinomial logistic regression). The input to the function is the result of K distinct linear functions, and the predicted probability for the j-th class given a sample vector x is: $$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$ Output range: (0, 1)
def softmax(z):
    return np.exp(z) / np.sum(np.exp(z))

def softmax_activation(X, w):
    z = net_input(X, w)
    return softmax(z)

y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
y_probas.sum()

y_class = np.argmax(Z, axis=0)
y_class
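One caveat worth noting: a naive np.exp-based softmax overflows for large net inputs. A numerically stable variant (an addition, not from the book) subtracts the maximum first, which leaves the result mathematically unchanged:

```python
import numpy as np

def softmax_stable(z):
    """Numerically stable softmax: exp(z - max(z)) avoids overflow
    because softmax is invariant to shifting z by a constant."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()
```

Even for net inputs around 1000, where np.exp alone would overflow, the result is still a valid probability distribution.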
<br> <br> Broadening the output spectrum using a hyperbolic tangent Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function. $$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$ Output range: (-1, 1)
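The "rescaled version of the logistic function" claim can be checked numerically: $\phi_{tanh}(z) = 2\,\phi_{logistic}(2z) - 1$. A quick verification:

```python
import numpy as np

# tanh(z) equals the logistic function evaluated at 2z,
# stretched to (-1, 1) and re-centered at 0
z = np.linspace(-5, 5, 101)
logistic_2z = 1.0 / (1.0 + np.exp(-2 * z))
rescaled = 2 * logistic_2z - 1
```

The rescaled curve matches np.tanh(z) to machine precision across the whole range.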
def tanh(z):
    e_p = np.exp(z)
    e_m = np.exp(-z)
    return (e_p - e_m) / (e_p + e_m)

import matplotlib.pyplot as plt
%matplotlib inline

z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)

# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)

plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act, linewidth=2, color='black', label='tanh')
plt.plot(z, log_act, linewidth=2, color='lightgreen', label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()

Image(filename='./images/13_05.png', width=700)
<br> <br> Training neural networks efficiently using Keras Loading MNIST 1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/ train-images-idx3-ubyte.gz: training set images (9912422 bytes) train-labels-idx1-ubyte.gz: training set labels (28881 bytes) t10k-images-idx3-ubyte.gz: test set images (1648877 bytes) t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes) 2) Unzip those files 3) Copy the unzipped files to a directory ./mnist
import os
import struct
import numpy as np

def load_mnist(path, kind='train'):
    """Load MNIST data from `path`"""
    labels_path = os.path.join(path, '%s-labels-idx1-ubyte' % kind)
    images_path = os.path.join(path, '%s-images-idx3-ubyte' % kind)

    with open(labels_path, 'rb') as lbpath:
        magic, n = struct.unpack('>II', lbpath.read(8))
        labels = np.fromfile(lbpath, dtype=np.uint8)

    with open(images_path, 'rb') as imgpath:
        magic, num, rows, cols = struct.unpack(">IIII", imgpath.read(16))
        images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)

    return images, labels

X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))

X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
Multi-layer Perceptron in Keras Once you have Theano installed, Keras can be installed via pip install Keras In order to run the following code via GPU, you can execute the Python script that was placed in this directory via THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
import theano

theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
One-hot encoding of the class variable:
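One-hot encoding is also easy to do in plain NumPy; the helper below is a hypothetical sketch (not the Keras utility used next) that shows what the transformation produces:

```python
import numpy as np

def one_hot(y, n_classes=None):
    """Turn integer class labels into one-hot row vectors."""
    y = np.asarray(y, dtype=int)
    if n_classes is None:
        n_classes = y.max() + 1
    out = np.zeros((len(y), n_classes))
    out[np.arange(len(y)), y] = 1.0   # set a single 1 per row at the label's index
    return out
```

Each row has exactly one 1, in the column matching the class label.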
from keras.utils import np_utils

print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])

from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD

np.random.seed(1)

model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
                output_dim=50,
                init='uniform',
                activation='tanh'))
model.add(Dense(input_dim=50,
                output_dim=50,
                init='uniform',
                activation='tanh'))
model.add(Dense(input_dim=50,
                output_dim=y_train_ohe.shape[1],
                init='uniform',
                activation='softmax'))

sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, y_train_ohe,
          nb_epoch=50,
          batch_size=300,
          verbose=1,
          validation_split=0.1,
          show_accuracy=True)

y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])

train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))

y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
1b. Download Associations, if necessary The NCBI gene2go file contains numerous species. We will select mouse shortly.
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations

fin_gene2go = download_ncbi_associations()
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
2. Load Ontologies, Associations and Background gene set 2a. Load Ontologies
from goatools.obo_parser import GODag

obodag = GODag("go-basic.obo")
2b. Load Associations
from __future__ import print_function
from goatools.anno.genetogo_reader import Gene2GoReader

# Read NCBI's gene2go. Store annotations in a list of namedtuples
objanno = Gene2GoReader(fin_gene2go, taxids=[10090])

# Get namespace2association where:
#   namespace is:
#     BP: biological_process
#     MF: molecular_function
#     CC: cellular_component
#   association is a dict:
#     key: NCBI GeneID
#     value: a set of GO IDs associated with that gene
ns2assoc = objanno.get_ns2assc()

for nspc, id2gos in ns2assoc.items():
    print("{NS} {N:,} annotated mouse genes".format(NS=nspc, N=len(id2gos)))
2c. Load Background gene set In this example, the background is all mouse protein-coding genes. Follow the instructions in the background_genes_ncbi notebook to download a set of background population genes from NCBI.
from genes_ncbi_10090_proteincoding import GENEID2NT as GeneID2nt_mus

print(len(GeneID2nt_mus))
3. Initialize a GOEA object The GOEA object holds the Ontologies, Associations, and background. Numerous studies can then be run without needing to re-load the above items. In this case, we only run one GOEA.
from goatools.goea.go_enrichment_ns import GOEnrichmentStudyNS

goeaobj = GOEnrichmentStudyNS(
        GeneID2nt_mus.keys(), # List of mouse protein-coding genes
        ns2assoc, # geneid/GO associations
        obodag, # Ontologies
        propagate_counts = False,
        alpha = 0.05, # default significance cut-off
        methods = ['fdr_bh']) # default multipletest correction method
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
4. Read study genes ~400 genes from the Nature paper supplemental table 4
# Data will be stored in this variable import os geneid2symbol = {} # Get xlsx filename where data is stored ROOT = os.path.dirname(os.getcwd()) # go up 1 level from current working directory din_xlsx = os.path.join(ROOT, "goatools/test_data/nbt_3102/nbt.3102-S4_GeneIDs.xlsx") # Read data if os.path.isfile(din_xlsx): import xlrd book = xlrd.open_workbook(din_xlsx) pg = book.sheet_by_index(0) for r in range(pg.nrows): symbol, geneid, pval = [pg.cell_value(r, c) for c in range(pg.ncols)] if geneid: geneid2symbol[int(geneid)] = symbol print('{N} genes READ: {XLSX}'.format(N=len(geneid2symbol), XLSX=din_xlsx)) else: raise RuntimeError('FILE NOT FOUND: {XLSX}'.format(XLSX=din_xlsx))
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
5. Run Gene Ontology Enrichment Analysis (GOEA) You may choose to keep all results or just the significant results. In this example, we choose to keep only the significant results.
# 'p_' means "pvalue". 'fdr_bh' is the multipletest method we are currently using. geneids_study = geneid2symbol.keys() goea_results_all = goeaobj.run_study(geneids_study) goea_results_sig = [r for r in goea_results_all if r.p_fdr_bh < 0.05]
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
5a. Quietly Run Gene Ontology Enrichment Analysis (GOEA) GOEAs can be run quietly using prt=None: goea_results = goeaobj.run_study(geneids_study, prt=None) No output is printed if prt=None:
goea_quiet_all = goeaobj.run_study(geneids_study, prt=None)
goea_quiet_sig = [r for r in goea_quiet_all if r.p_fdr_bh < 0.05]
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
Print customized results summaries Example 1: Significant v All GOEA results
print('{N} of {M:,} results were significant'.format( N=len(goea_quiet_sig), M=len(goea_quiet_all)))
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
Example 2: Enriched v Purified GOEA results
print('Significant results: {E} enriched, {P} purified'.format( E=sum(1 for r in goea_quiet_sig if r.enrichment=='e'), P=sum(1 for r in goea_quiet_sig if r.enrichment=='p')))
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
Example 3: Significant GOEA results by namespace
import collections as cx ctr = cx.Counter([r.NS for r in goea_quiet_sig]) print('Significant results[{TOTAL}] = {BP} BP + {MF} MF + {CC} CC'.format( TOTAL=len(goea_quiet_sig), BP=ctr['BP'], # biological_process MF=ctr['MF'], # molecular_function CC=ctr['CC'])) # cellular_component
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
6. Write results to an Excel file and to a text file
goeaobj.wr_xlsx("nbt3102.xlsx", goea_results_sig) goeaobj.wr_txt("nbt3102.txt", goea_results_sig)
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
7. Plot all significant GO terms Plotting all significant GO terms produces a messy spaghetti plot. Such a plot can be useful sometimes because you can open it and zoom and scroll around. But sometimes it is just too messy to be of use. The "{NS}" in "nbt3102_{NS}.png" indicates that you will see three plots, one for "biological_process"(BP), "molecular_function"(MF), and "cellular_component"(CC)
from goatools.godag_plot import plot_gos, plot_results, plot_goid2goobj plot_results("nbt3102_{NS}.png", goea_results_sig)
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
7a. These plots are likely to be messy The Cellular Component plot is the smallest plot... 7b. So make a smaller sub-plot This plot contains GOEA results:
* GO terms colored by P-value:
  * pval < 0.005 (light red)
  * pval < 0.01 (light orange)
  * pval < 0.05 (yellow)
  * pval > 0.05 (grey): study terms that are not statistically significant
* GO terms with study gene counts printed, e.g., "32 genes"
# Plot subset starting from these significant GO terms goid_subset = [ 'GO:0003723', # MF D04 RNA binding (32 genes) 'GO:0044822', # MF D05 poly(A) RNA binding (86 genes) 'GO:0003729', # MF D06 mRNA binding (11 genes) 'GO:0019843', # MF D05 rRNA binding (6 genes) 'GO:0003746', # MF D06 translation elongation factor activity (5 genes) ] plot_gos("nbt3102_MF_RNA_genecnt.png", goid_subset, # Source GO ids obodag, goea_results=goea_results_all) # Use pvals for coloring
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
7c. Add study gene Symbols to plot e.g., 11 genes: Calr, Eef1a1, Pabpc1
plot_gos("nbt3102_MF_RNA_Symbols.png",
    goid_subset, # Source GO ids
    obodag,
    goea_results=goea_results_all, # use pvals for coloring
    # We can further configure the plot...
    id2symbol=geneid2symbol, # Print study gene Symbols, not Entrez GeneIDs
    study_items=6, # Print at most 6 gene Symbols per GO term
    items_p_line=3, # Print 3 genes per line
    )
notebooks/goea_nbt3102.ipynb
tanghaibao/goatools
bsd-2-clause
Create Your Own Visualizations! Instructions: 1. Install tensor2tensor and train up a Transformer model following the instructions in the repository https://github.com/tensorflow/tensor2tensor. 2. Update cell 3 to point to your checkpoint; it is currently set up to read from the default checkpoint location that would be created from following the instructions above. 3. If you used custom hyperparameters then update cell 4. 4. Run the notebook!
import os import tensorflow as tf from tensor2tensor import problems from tensor2tensor.bin import t2t_decoder # To register the hparams set from tensor2tensor.utils import registry from tensor2tensor.utils import trainer_lib from tensor2tensor.visualization import attention from tensor2tensor.visualization import visualization %%javascript require.config({ paths: { d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min' } });
v0.5.0/google/research_v3.32/gnmt-tpuv3-32/code/gnmt/model/t2t/tensor2tensor/visualization/TransformerVisualization.ipynb
mlperf/training_results_v0.5
apache-2.0
HParams
# PUT THE MODEL YOU WANT TO LOAD HERE! CHECKPOINT = os.path.expanduser('~/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu') # HParams problem_name = 'translate_ende_wmt32k' data_dir = os.path.expanduser('~/t2t_data/') model_name = "transformer" hparams_set = "transformer_base_single_gpu"
v0.5.0/google/research_v3.32/gnmt-tpuv3-32/code/gnmt/model/t2t/tensor2tensor/visualization/TransformerVisualization.ipynb
mlperf/training_results_v0.5
apache-2.0
Visualization
visualizer = visualization.AttentionVisualizer(hparams_set, model_name, data_dir, problem_name, beam_size=1) tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step') sess = tf.train.MonitoredTrainingSession( checkpoint_dir=CHECKPOINT, save_summaries_secs=0, ) input_sentence = "I have two dogs." output_string, inp_text, out_text, att_mats = visualizer.get_vis_data_from_string(sess, input_sentence) print(output_string)
v0.5.0/google/research_v3.32/gnmt-tpuv3-32/code/gnmt/model/t2t/tensor2tensor/visualization/TransformerVisualization.ipynb
mlperf/training_results_v0.5
apache-2.0
Interpreting the Visualizations The layers dropdown allows you to view the different Transformer layers, 0-indexed of course. Tip: The first layer, last layer and 2nd to last layer are usually the most interpretable. The attention dropdown allows you to select different pairs of encoder-decoder attentions: All: Shows all types of attentions together. NOTE: There is no relation between heads of the same color - between the decoder self attention and decoder-encoder attention since they do not share parameters. Input - Input: Shows only the encoder self-attention. Input - Output: Shows the decoder’s attention on the encoder. NOTE: Every decoder layer attends to the final layer of the encoder, so the visualization will show the attention on the final encoder layer regardless of what layer is selected in the drop down. Output - Output: Shows only the decoder self-attention. NOTE: The visualization might be slightly misleading in the first layer since the text shown is the target of the decoder; the input to the decoder at layer 0 is this text with a GO symbol prepended. The colored squares represent the different attention heads. You can hide or show a given head by clicking on its color. Double clicking a color will hide all other colors, double clicking on a color when it’s the only head showing will show all the heads again. You can hover over a word to see the individual attention weights for just that position. Hovering over the words on the left will show what that position attended to. Hovering over the words on the right will show what positions attended to it.
attention.show(inp_text, out_text, *att_mats)
v0.5.0/google/research_v3.32/gnmt-tpuv3-32/code/gnmt/model/t2t/tensor2tensor/visualization/TransformerVisualization.ipynb
mlperf/training_results_v0.5
apache-2.0
Exercise 1
import numpy as np

def cosine_dist(u, v, axis):
    """Return the cosine distance (1 minus cosine similarity) between two vectors."""
    return 1 - (u*v).sum(axis)/(np.sqrt((u**2).sum(axis))*np.sqrt((v**2).sum(axis)))

u = np.array([1,2,3])
v = np.array([4,5,6])
homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb
cliburn/sta-663-2017
mit
Note 1: We write the dot product as the sum of element-wise products. This allows us to generalize when u, v are matrices rather than vectors. The norms in the denominator are calculated in the same way.
u @ v (u * v).sum()
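As a quick sanity check, the element-wise formulation agrees with the same quantity written using np.dot and np.linalg.norm:

```python
import numpy as np

u = np.array([1., 2., 3.])
v = np.array([4., 5., 6.])

# Cosine distance as a sum of element-wise products, as in cosine_dist above
d = 1 - (u * v).sum() / (np.sqrt((u**2).sum()) * np.sqrt((v**2).sum()))

# The same quantity via the dot product and vector norms
d_ref = 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

assert np.isclose(d, d_ref)
print(round(d, 6))
```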
homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb
cliburn/sta-663-2017
mit
Note 2: Broadcasting
M = np.array([[1.,2,3],[4,5,6]]) M.shape
homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb
cliburn/sta-663-2017
mit
Note 2A: Broadcasting for M as collection of row vectors. How we broadcast and which axis to broadcast over are determined by the need to end up with a 2x2 matrix.
M[None,:,:].shape, M[:,None,:].shape (M[None,:,:] + M[:,None,:]).shape cosine_dist(M[None,:,:], M[:,None,:], 2)
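As an independent check (a sketch using an explicit double loop), the broadcast expression really does compute all pairwise row distances, the same quantity as cosine_dist(M[None,:,:], M[:,None,:], 2):

```python
import numpy as np

def cdist_rows(M):
    """Pairwise cosine distance between rows of M, via explicit loops."""
    n = M.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            u, v = M[i], M[j]
            out[i, j] = 1 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return out

M = np.array([[1., 2, 3], [4, 5, 6]])

# Broadcast version: (1,2,3) * (2,1,3) -> (2,2,3), summed over axis 2
broadcast = 1 - (M[None, :, :] * M[:, None, :]).sum(2) / (
    np.sqrt((M[None, :, :]**2).sum(2)) * np.sqrt((M[:, None, :]**2).sum(2)))

assert np.allclose(broadcast, cdist_rows(M))
```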
homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb
cliburn/sta-663-2017
mit
Note 2B: Broadcasting for M as a collection of column vectors. How we broadcast and which axis to broadcast over are determined by the need to end up with a 3x3 matrix.
M[:,None,:].shape, M[:,:,None].shape (M[:,None,:] + M[:,:,None]).shape cosine_dist(M[:,None,:], M[:,:,None], 0)
homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb
cliburn/sta-663-2017
mit
Exercise 2 Note 1: Using collections.Counter and pandas.DataFrame reduces the amount of code to write. Exercise 3
M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 1, 0, 0, 0, 0], [0, 1, 1, 2, 0, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 1, 1]]) M.shape U, s, V = np.linalg.svd(M, full_matrices=False) U.shape, s.shape, V.shape s[2:] = 0 M2 = U @ np.diag(s) @ V from scipy.stats import spearmanr r2 = spearmanr(M2)[0] r2 r2[np.tril_indices_from(r2[:5, :5], -1)] r2[np.tril_indices_from(r2[5:, 5:], -1)]
homework/07_Linear_Algebra_Applications_Solutions_Explanation.ipynb
cliburn/sta-663-2017
mit
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
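One possible approach (a sketch, not the only valid solution) uses collections.Counter so that the most frequent word gets the lowest id, a common convention for embedding lookups:

```python
from collections import Counter

def create_lookup_tables_sketch(text):
    """Map words to ids and back, most frequent words first."""
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables_sketch(['the', 'cat', 'sat', 'on', 'the', 'mat'])
assert v2i['the'] == 0                            # most frequent word gets id 0
assert all(i2v[i] == w for w, i in v2i.items())   # the two dicts round-trip
```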
import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
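For illustration, one possible mapping could look like the sketch below (the exact token strings are a free choice, as long as they cannot be confused with real words):

```python
def token_lookup_sketch():
    """Map each punctuation symbol to an unambiguous token."""
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||',
    }

tokens = token_lookup_sketch()
text = 'bye!'
for symbol, token in tokens.items():
    text = text.replace(symbol, ' {} '.format(token))
assert text.split() == ['bye', '||Exclamation_Mark||']
```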
def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ # TODO: Implement Function return None, None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
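One way to sketch this reshaping in plain NumPy (an illustration of the drop-and-wrap behaviour described above, not necessarily the expected implementation):

```python
import numpy as np

def get_batches_sketch(int_text, batch_size, seq_length):
    # Keep only as many words as fill complete batches; drop the rest
    n_batches = len(int_text) // (batch_size * seq_length)
    xdata = np.array(int_text[:n_batches * batch_size * seq_length])
    # Targets are the inputs shifted by one; the final target wraps
    # around to the first input value
    ydata = np.roll(xdata, -1)
    x = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x, y)))

batches = get_batches_sketch(list(range(1, 21)), 3, 2)
print(batches.shape)       # (3, 2, 3, 2)
print(batches[-1][1][-1])  # the wrap-around target row
```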
def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters:

Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the sequence length.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches between progress printouts.
# Number of Epochs num_epochs = None # Batch Size batch_size = None # RNN Size rnn_size = None # Embedding Dimension Size embed_dim = None # Sequence Length seq_length = None # Learning Rate learning_rate = None # Show stats for every n number of batches show_every_n_batches = None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
""" DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved')
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ # TODO: Implement Function return None, None, None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
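A minimal sketch: sampling from the distribution (rather than always taking the argmax) keeps the generated script varied:

```python
import numpy as np

def pick_word_sketch(probabilities, int_to_vocab):
    """Sample the next word id according to its predicted probability."""
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

# Hypothetical 3-word vocabulary, just for illustration
int_to_vocab = {0: 'homer', 1: 'marge', 2: 'bart'}
word = pick_word_sketch(np.array([0.1, 0.8, 0.1]), int_to_vocab)
assert word in int_to_vocab.values()
```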
def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word)
tv-script-generation/dlnd_tv_script_generation.ipynb
gatmeh/Udacity-deep-learning
mit
First try just searching for "glider"
url = 'https://data.ioos.us/gliders/erddap/search/advanced.csv?page=1&itemsPerPage=1000&searchFor={}'.format('glider') dft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution','Dataset ID']) dft.head()
ERDDAP/GliderDAC_Search.ipynb
rsignell-usgs/notebook
mit
Now search for all temperature data in the specified bounding box and temporal extent
start = '2000-01-01T00:00:00Z' stop = '2017-02-22T00:00:00Z' lat_min = 39. lat_max = 41.5 lon_min = -72. lon_max = -69. standard_name = 'sea_water_temperature' endpoint = 'https://data.ioos.us/gliders/erddap/search/advanced.csv' import pandas as pd base = ( '{}' '?page=1' '&itemsPerPage=1000' '&searchFor=' '&protocol=(ANY)' '&cdm_data_type=(ANY)' '&institution=(ANY)' '&ioos_category=(ANY)' '&keywords=(ANY)' '&long_name=(ANY)' '&standard_name={}' '&variableName=(ANY)' '&maxLat={}' '&minLon={}' '&maxLon={}' '&minLat={}' '&minTime={}' '&maxTime={}').format url = base( endpoint, standard_name, lat_max, lon_min, lon_max, lat_min, start, stop ) print(url) dft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution', 'Dataset ID']) print('Glider Datasets Found = {}'.format(len(dft))) dft
ERDDAP/GliderDAC_Search.ipynb
rsignell-usgs/notebook
mit
Define a function that returns a Pandas DataFrame based on the dataset ID. The ERDDAP request variables (e.g. pressure, temperature) are hard-coded here, so this routine should be modified for other ERDDAP endpoints or datasets
def download_df(glider_id):
    from pandas import DataFrame, read_csv
#    from urllib.error import HTTPError
    uri = ('https://data.ioos.us/gliders/erddap/tabledap/{}.csv'
           '?trajectory,wmo_id,time,latitude,longitude,depth,pressure,temperature'
           '&time>={}'
           '&time<={}'
           '&latitude>={}'
           '&latitude<={}'
           '&longitude>={}'
           '&longitude<={}').format

    url = uri(glider_id, start, stop, lat_min, lat_max, lon_min, lon_max)
    print(url)
    # Not sure if returning an empty df is the best idea.
    try:
        df = read_csv(url, index_col='time', parse_dates=True, skiprows=[1])
    except Exception:
        df = pd.DataFrame()
    return df

# concatenate the dataframes for each dataset into one single dataframe
df = pd.concat(list(map(download_df, dft['Dataset ID'].values)))
print('Total Data Values Found: {}'.format(len(df)))
df.head()
df.tail()
ERDDAP/GliderDAC_Search.ipynb
rsignell-usgs/notebook
mit
Plot the trajectories with Cartopy (a Basemap replacement)
%matplotlib inline import matplotlib.pyplot as plt import cartopy.crs as ccrs from cartopy.feature import NaturalEarthFeature bathym_1000 = NaturalEarthFeature(name='bathymetry_J_1000', scale='10m', category='physical') fig, ax = plt.subplots( figsize=(9, 9), subplot_kw=dict(projection=ccrs.PlateCarree()) ) ax.coastlines(resolution='10m') ax.add_feature(bathym_1000, facecolor=[0.9, 0.9, 0.9], edgecolor='none') dx = dy = 0.5 ax.set_extent([lon_min-dx, lon_max+dx, lat_min-dy, lat_max+dy]) g = df.groupby('trajectory') for glider in g.groups: traj = df[df['trajectory'] == glider] ax.plot(traj['longitude'], traj['latitude'], label=glider) gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=2, color='gray', alpha=0.5, linestyle='--') ax.legend();
ERDDAP/GliderDAC_Search.ipynb
rsignell-usgs/notebook
mit
SQL CREATE TABLE presidents (first_name, last_name, year_of_birth); INSERT INTO presidents VALUES ('George', 'Washington', 1732); INSERT INTO presidents VALUES ('John', 'Adams', 1735); INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743); INSERT INTO presidents VALUES ('James', 'Madison', 1751); INSERT INTO presidents VALUES ('James', 'Monroe', 1758); INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784); INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809); INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858); INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913); INSERT INTO presidents VALUES ('Barack', 'Obama', 1961);
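For reference, the same table can be built without the %%read_sql magic using Python's standard-library sqlite3 (a plain-Python sketch, with only a few of the rows above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE presidents (first_name, last_name, year_of_birth)')
rows = [('George', 'Washington', 1732),
        ('Abraham', 'Lincoln', 1809),
        ('Barack', 'Obama', 1961)]
# Parameter substitution (?) avoids building SQL strings by hand
conn.executemany('INSERT INTO presidents VALUES (?, ?, ?)', rows)

later = conn.execute(
    'SELECT last_name FROM presidents WHERE year_of_birth > 1825').fetchall()
print(later)  # [('Obama',)]
```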
%%read_sql temp CREATE TABLE presidents (first_name, last_name, year_of_birth); INSERT INTO presidents VALUES ('George', 'Washington', 1732); INSERT INTO presidents VALUES ('John', 'Adams', 1735); INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743); INSERT INTO presidents VALUES ('James', 'Madison', 1751); INSERT INTO presidents VALUES ('James', 'Monroe', 1758); INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784); INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809); INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858); INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913); INSERT INTO presidents VALUES ('Barack', 'Obama', 1961); %%read_sql df SELECT * FROM presidents df
notebooks/05-SQL-Example.ipynb
jbwhit/jupyter-best-practices
mit
Inline magic
later_presidents = %read_sql SELECT * FROM presidents WHERE year_of_birth > 1825 later_presidents %%read_sql later_presidents SELECT * FROM presidents WHERE year_of_birth > 1825
notebooks/05-SQL-Example.ipynb
jbwhit/jupyter-best-practices
mit
Through pandas directly
birthyear = 1800 %%read_sql df1 SELECT first_name, last_name, year_of_birth FROM presidents WHERE year_of_birth > {birthyear} df1 coal = pd.read_csv("../data/coal_prod_cleaned.csv") coal.head() coal.to_sql('coal', con=sqlite_engine, if_exists='append', index=False) %%read_sql example SELECT * FROM coal example.head() example.columns
notebooks/05-SQL-Example.ipynb
jbwhit/jupyter-best-practices
mit
Some global data
symbol = '^GSPC' capital = 10000 #start = datetime.datetime(1900, 1, 1) start = datetime.datetime(*pf.SP500_BEGIN) end = datetime.datetime.now()
examples/280.pyfolio-integration/strategy.ipynb
fja05680/pinkfish
mit
Define Strategy Class - sell in May and go away
class Strategy:

    def __init__(self, symbol, capital, start, end):
        self.symbol = symbol
        self.capital = capital
        self.start = start
        self.end = end

        self.ts = None
        self.rlog = None
        self.tlog = None
        self.dbal = None
        self.stats = None

    def _algo(self):
        pf.TradeLog.cash = self.capital

        for i, row in enumerate(self.ts.itertuples()):
            date = row.Index.to_pydatetime()
            end_flag = pf.is_last_row(self.ts, i)

            # Buy (at the close on first trading day in Nov).
            if self.tlog.shares == 0:
                if row.month == 11 and row.first_dotm:
                    self.tlog.buy(date, row.close)
            # Sell (at the close on first trading day in May).
            else:
                if ((row.month == 5 and row.first_dotm) or end_flag):
                    self.tlog.sell(date, row.close)

            # Record daily balance
            self.dbal.append(date, row.close)

    def run(self):
        # Fetch and select timeseries.
        self.ts = pf.fetch_timeseries(self.symbol)
        self.ts = pf.select_tradeperiod(self.ts, self.start, self.end, use_adj=True)

        # Add calendar columns.
        self.ts = pf.calendar(self.ts)

        # Finalize timeseries.
        self.ts, self.start = pf.finalize_timeseries(self.ts, self.start,
            dropna=True, drop_columns=['open', 'high', 'low'])

        # Create tlog and dbal objects
        self.tlog = pf.TradeLog(self.symbol)
        self.dbal = pf.DailyBal()

        # Run algorithm, get logs
        self._algo()
        self._get_logs()
        self._get_stats()

    def _get_logs(self):
        self.rlog = self.tlog.get_log_raw()
        self.tlog = self.tlog.get_log()
        self.dbal = self.dbal.get_log(self.tlog)

    def _get_stats(self):
        self.stats = pf.stats(self.ts, self.tlog, self.dbal, self.capital)
examples/280.pyfolio-integration/strategy.ipynb
fja05680/pinkfish
mit
Run Strategy
s = Strategy(symbol, capital, start, end) s.run()
examples/280.pyfolio-integration/strategy.ipynb
fja05680/pinkfish
mit
Pyfolio Returns Tear Sheet (create_returns_tear_sheet() seems to be a bit broken in Pyfolio, see: https://github.com/quantopian/pyfolio/issues/520)
# Filter warnings
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

# Convert pinkfish data to Empyrical format
returns = s.dbal['close'].pct_change()
returns.index = returns.index.tz_localize('UTC')

benchmark_rets = benchmark.dbal['close'].pct_change()
benchmark_rets.index = benchmark_rets.index.tz_localize('UTC')

live_start_date = '2010-01-01'

# Uncomment to select the tear sheet you are interested in.
#pyfolio.create_returns_tear_sheet(returns, benchmark_rets=benchmark_rets, live_start_date=live_start_date)
pyfolio.create_simple_tear_sheet(returns, benchmark_rets=benchmark_rets)
#pyfolio.create_interesting_times_tear_sheet(returns, benchmark_rets=benchmark_rets)
examples/280.pyfolio-integration/strategy.ipynb
fja05680/pinkfish
mit
Permutation t-test on source data with spatio-temporal clustering. Tests if the evoked response is significantly different between conditions across subjects (simulated here using one subject's data). The multiple comparisons problem is addressed with a cluster-level permutation test across space and time.
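Before diving into the MNE machinery, it helps to see the idea underneath it: under the null hypothesis, the sign of each subject's paired difference is exchangeable, so we can build a null distribution by randomly flipping signs. The following is a toy, NumPy-only illustration of that one-sample sign-flip permutation test (it is independent of MNE's actual implementation, which additionally clusters over space and time):

```python
import numpy as np

def sign_flip_permutation_test(diffs, n_perm=1000, seed=0):
    """One-sample permutation t-test: randomly flip the sign of each
    subject's difference score and compare the observed mean against
    the resulting permutation distribution (two-sided)."""
    rng = np.random.RandomState(seed)
    diffs = np.asarray(diffs, dtype=float)
    obs = diffs.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diffs.size)
        # Count permutations at least as extreme as the observed mean.
        if abs((signs * diffs).mean()) >= abs(obs):
            count += 1
    return count / n_perm  # permutation p-value

# A strong, consistent effect should give a small p-value...
p_strong = sign_flip_permutation_test([1.1, 0.9, 1.2, 1.0, 0.8, 1.3, 1.05])
# ...while inconsistent noise should not.
p_noise = sign_flip_permutation_test([0.3, -0.2, 0.1, -0.4, 0.2, -0.1, 0.05])
print(p_strong, p_noise)
```

Note that with only 7 subjects there are just 2^7 = 128 sign assignments, which is exactly why the text below warns that the minimum attainable two-sided p-value is limited.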
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#          Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)

import os.path as op

import numpy as np
from numpy.random import randn
from scipy import stats as stats

import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
                 grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
                       summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample

print(__doc__)
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'

tmin = -0.2
tmax = 0.3  # Use a lower tmax to reduce multiple comparisons

# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read epochs for all channels, removing a bad one
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1  # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
                     baseline=(None, 0), reject=reject, preload=True)

event_id = 3  # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
                     baseline=(None, 0), reject=reject, preload=True)

# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Transform to source space
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"  # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]

# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50)
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50)
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)

# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Transform to common cortical space
# Normally you would read in estimates across several subjects and morph
# them to the same cortical space (e.g. fsaverage). For example purposes,
# we will simulate this by just having each "subject" have the same
# response (just noisy in source space) here. Note that for 7 subjects
# with a two-sided statistical test, the minimum significance under a
# permutation test is only p = 1/(2 ** 6) = 0.015, which is large.
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)

# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]

# It's a good idea to spatially smooth the data, and for visualization
# purposes, let's morph these to fsaverage, which is a grade 5 source space
# with vertices 0:10242 for each hemisphere. Usually you'd have to morph
# each subject's data separately (and you might want to use morph_data
# instead), but here since all estimates are on 'sample' we can use one
# morph matrix for all the heavy lifting.
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
                                 fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]

# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X)  # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)

# Finally, we want to compare the overall activity levels in each condition,
# the diff is taken along the last axis (condition). The negative sign makes
# it so condition1 > condition2 shows up as "red blobs" (instead of blue).
X = np.abs(X)  # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1]  # make paired contrast
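The reshape-dot-reshape trick used above (collapsing the trailing axes, applying the morph matrix once, then restoring the shape) works because a linear map over the vertex axis is independent of time and subject. A small dense stand-in demonstrates the equivalence (the sizes and the random matrix here are made up for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
n_src, n_dst, n_times, n_subj = 6, 4, 3, 2

morph = rng.rand(n_dst, n_src)           # stand-in for the (sparse) morph matrix
X = rng.randn(n_src, n_times, n_subj)

# Collapse trailing axes, apply the matrix once, restore the shape.
X2 = morph @ X.reshape(n_src, n_times * n_subj)
X2 = X2.reshape(n_dst, n_times, n_subj)

# Same result as morphing each (time, subject) slice separately.
ref = np.einsum('ds,stj->dtj', morph, X)
print(np.allclose(X2, ref))  # True
```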
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute statistic
# To use an algorithm optimized for spatio-temporal clustering, we
# just pass the spatial connectivity matrix (instead of spatio-temporal)
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))

# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])

# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
    spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=2,
                                       threshold=t_threshold)

# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Visualize the clusters
print('Visualizing clusters.')

# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
                                             vertices=fsave_vertices,
                                             subject='fsaverage')

# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(hemi='both', subjects_dir=subjects_dir,
                                 time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('clusters.png')
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
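The utils.create_lookup_tables helper isn't shown in this notebook; the following is a plausible sketch of what such a helper does (an assumption about the course's utils module, not its actual source):

```python
from collections import Counter

def create_lookup_tables(words):
    """Build word<->integer lookup tables, with ids assigned in
    descending frequency order (0 = most frequent word)."""
    word_counts = Counter(words)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

vocab_to_int, int_to_vocab = create_lookup_tables(
    'the quick fox jumps over the lazy dog the fox'.split())
print(vocab_to_int['the'])  # 'the' is the most frequent word, so it gets id 0
```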
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words]
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
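If you want to check your work against something concrete, one possible implementation of the formula above (a sketch, not necessarily the course's reference solution) is:

```python
import random
from collections import Counter

def subsample(int_words, threshold=1e-5, seed=123):
    """Drop each word with probability P(w) = 1 - sqrt(t / f(w)),
    where f(w) is the word's relative frequency in the corpus."""
    random.seed(seed)
    total = len(int_words)
    counts = Counter(int_words)
    freqs = {word: count / total for word, count in counts.items()}
    p_drop = {word: 1 - (threshold / freqs[word]) ** 0.5 for word in counts}
    # Keep a word when a uniform draw exceeds its drop probability.
    return [word for word in int_words if random.random() > p_drop[word]]

# A frequent word (id 0) gets aggressively thinned; a rare one mostly survives.
corpus = [0] * 9990 + [1] * 10
train_words = subsample(corpus)
print(len(corpus), len(train_words))
```

Note that words rarer than the threshold get a negative drop probability and are therefore always kept.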
## Your code here

train_words = # The final subsampled word list
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
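One way the random-window idea from Mikolov et al. can be coded up (a sketch under the assumption that clipping at the ends of the list is acceptable, not necessarily the reference solution):

```python
import random

def get_target(words, idx, window_size=5):
    """Pick R in [1, window_size], then return the R words before and
    the R words after position idx (clipped at the list boundaries)."""
    R = random.randint(1, window_size)
    start = max(idx - R, 0)
    stop = idx + R
    # The target word itself (at idx) is excluded from the window.
    return words[start:idx] + words[idx + 1:stop + 1]

random.seed(0)
print(get_target(list(range(10)), idx=5, window_size=2))
```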
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    
    # Your code here
    
    return
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
train_graph = tf.Graph()
with train_graph.as_default():
    inputs =
    labels =
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
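Conceptually, the lookup that tf.nn.embedding_lookup performs is just row indexing into the weight matrix. A NumPy sketch of the same operation (with made-up toy sizes):

```python
import numpy as np

n_vocab, n_embedding = 5, 3
rng = np.random.RandomState(0)
# Uniform random initialization in [-1, 1), as suggested above.
emb_matrix = rng.uniform(-1, 1, size=(n_vocab, n_embedding))

word_ids = np.array([2, 0, 2])   # a batch of tokenized input words
embed = emb_matrix[word_ids]     # "lookup" = fancy row indexing
print(embed.shape)               # (3, 3): one embedding row per input id
```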
n_vocab = len(int_to_vocab)
n_embedding = # Number of embedding features
with train_graph.as_default():
    embedding = # create embedding weight matrix here
    embed = # use tf.nn.embedding_lookup to get the hidden layer output
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
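The mechanics can be illustrated without TensorFlow: compute the cross-entropy over only the true class plus a handful of sampled negative classes, instead of the full vocabulary. This toy NumPy version ignores the sampling-probability correction that tf.nn.sampled_softmax_loss applies internally, so it is only a sketch of the idea:

```python
import numpy as np

def sampled_softmax_loss_toy(hidden, softmax_w, softmax_b, label,
                             n_sampled, rng):
    """Cross-entropy over {true class} plus n_sampled random negatives,
    instead of the full output vocabulary."""
    n_classes = softmax_w.shape[0]
    negatives = rng.choice([c for c in range(n_classes) if c != label],
                           size=n_sampled, replace=False)
    classes = np.concatenate(([label], negatives))
    # Logits for just the sampled subset of classes.
    logits = softmax_w[classes] @ hidden + softmax_b[classes]
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[0]  # true class sits at position 0

rng = np.random.RandomState(0)
hidden = rng.randn(10)
softmax_w = rng.randn(1000, 10)  # [n_vocab, n_hidden], matching TF's convention
softmax_b = np.zeros(1000)
loss = sampled_softmax_loss_toy(hidden, softmax_w, softmax_b, label=3,
                                n_sampled=100, rng=rng)
print(loss)
```

The payoff is that each update touches only 101 rows of the output weights here, rather than all 1000.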
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = # create softmax weight matrix here
    softmax_b = # create softmax biases here
    
    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss
    
    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
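The similarity computation below boils down to normalizing the embedding rows to unit length and taking a matrix product, which yields cosine similarities. In NumPy terms (with made-up toy sizes):

```python
import numpy as np

rng = np.random.RandomState(42)
emb = rng.randn(50, 8)  # toy [n_vocab, n_embedding] matrix

norm = np.sqrt(np.sum(emb ** 2, axis=1, keepdims=True))
normalized = emb / norm                   # rows now have unit length

valid = np.array([0, 3, 7])               # validation word ids
similarity = normalized[valid] @ normalized.T  # cosine similarity per row

# Each validation word is most similar to itself (cosine ~ 1).
print(similarity[0].argmax(), similarity[0].max())
```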
with train_graph.as_default():
    ## From Thushan Ganegedara's implementation
    valid_size = 16  # Random set of words to evaluate similarity on.
    valid_window = 100
    # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
    valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
    valid_examples = np.append(valid_examples,
                               random.sample(range(1000, 1000+valid_window), valid_size//2))

    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # We use the cosine distance:
    norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
    normalized_embedding = embedding / norm
    valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
    similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))

# If the checkpoints directory doesn't exist:
!mkdir checkpoints
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
epochs = 10
batch_size = 1000
window_size = 10

with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    iteration = 1
    loss = 0
    sess.run(tf.global_variables_initializer())

    for e in range(1, epochs+1):
        batches = get_batches(train_words, batch_size, window_size)
        start = time.time()
        for x, y in batches:

            feed = {inputs: x,
                    labels: np.array(y)[:, None]}
            train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)

            loss += train_loss

            if iteration % 100 == 0:
                end = time.time()
                print("Epoch {}/{}".format(e, epochs),
                      "Iteration: {}".format(iteration),
                      "Avg. Training loss: {:.4f}".format(loss/100),
                      "{:.4f} sec/batch".format((end-start)/100))
                loss = 0
                start = time.time()

            if iteration % 1000 == 0:
                ## From Thushan Ganegedara's implementation
                # note that this is expensive (~20% slowdown if computed every 500 steps)
                sim = similarity.eval()
                for i in range(valid_size):
                    valid_word = int_to_vocab[valid_examples[i]]
                    top_k = 8  # number of nearest neighbors
                    nearest = (-sim[i, :]).argsort()[1:top_k+1]
                    log = 'Nearest to %s:' % valid_word
                    for k in range(top_k):
                        close_word = int_to_vocab[nearest[k]]
                        log = '%s %s,' % (log, close_word)
                    print(log)

            iteration += 1

    save_path = saver.save(sess, "checkpoints/text8.ckpt")
    embed_mat = sess.run(normalized_embedding)
embeddings/Skip-Gram_word2vec.ipynb
d-k-b/udacity-deep-learning
mit
Spatially visualize active layer thickness:
fig = plt.figure(figsize=(8, 4.5))
ax = fig.add_axes([0.05, 0.05, 0.9, 0.85])

m = Basemap(llcrnrlon=-145.5, llcrnrlat=1., urcrnrlon=-2.566, urcrnrlat=46.352,
            rsphere=(6378137.00, 6356752.3142),
            resolution='l', area_thresh=1000., projection='lcc',
            lat_1=50., lon_0=-107., ax=ax)

X, Y = m(LONS, LATS)

m.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m.drawparallels(np.arange(-80, 81, 20), labels=[1, 1, 0, 0])
m.drawmeridians(np.arange(0, 360, 60), labels=[0, 0, 0, 1])

clev = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
cs = m.contourf(X, Y, ALT, clev, cmap=plt.cm.PuBu_r, extend='both')
cbar = m.colorbar(cs)
cbar.set_label('m')
plt.show()

# print(x._values["ALT"][:])
ALT2 = np.reshape(ALT, np.size(ALT))
ALT2 = ALT2[np.where(~np.isnan(ALT2))]

print('Simulated ALT:')
print('Max:', np.nanmax(ALT2), 'm', '75% =', np.percentile(ALT2, 75))
print('Min:', np.nanmin(ALT2), 'm', '25% =', np.percentile(ALT2, 25))

plt.hist(ALT2)
notebooks/Ku_2D.ipynb
permamodel/permamodel
mit