Logarithmic axis The axis scale can be adapted with the xscale and yscale settings.
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
r = te.loadTestModel('feedback.xml')
r.integrator.variable_step_size = True
s = r.simulate(0, 50)
r.plot(s, logx=True, xlim=[10E-4, 10E2], title="Logarithmic x-Axis with grid", ylabel="concentration");
examples/notebooks/core/tellurium_plotting.ipynb
kirichoi/tellurium
apache-2.0
Mine titles and abstracts for topics
# import dependencies
import pandas as pd
from textblob import TextBlob
#import spacy
#nlp = spacy.load('en')

# run once if you need to download the nltk corpora, ignore otherwise
import nltk
nltk.download()

df = pd.read_csv('../a.csv')
p = df[df.columns[1]].values.astype(str)

# get topical nouns for title and abstract using natural language processing
r = []
for i in range(len(p)):
    # get topical nouns with textblob
    blob1 = TextBlob(repr(p[i]))
    keywords1 = blob1.noun_phrases
    r.append(keywords1)

# save parsed data
import json
open('r.json', 'w').write(json.dumps(r))
# load if saved previously
#pubdict = json.loads(open('pubdict2.json', 'r').read())

b = []
for i in r:
    for z in i:
        b.append(z.replace('research', '').replace('development', '')
                  .replace('\r', '').replace('\n', ''))
open('b.json', 'w').write(json.dumps(b))
test/gcrf-hub/wordcloud/wordcloud.ipynb
csaladenes/csaladenes.github.io
mit
Minimal steps: queue = [A]; next = A, queue = [B,C]; next = B, queue = [I,D,E,C]; next = I -> STOP. A minimum of 3 iterations is needed to find node I. Question II (a)
import pandas as pd

adjacency = pd.DataFrame({'A': [0,1,1,0,0,0,0,0,0],
                          'B': [0,0,0,1,1,0,0,0,1],
                          'C': [0,0,0,0,0,1,1,0,0],
                          'D': [0,0,0,0,1,0,0,1,0],
                          'E': [0,0,0,0,0,0,0,0,1],
                          'F': [1,0,0,0,0,0,1,0,0],
                          'G': [0,0,0,0,0,0,0,0,0],
                          'H': [0,0,0,0,0,0,0,0,0],
                          'I': [0,0,0,0,0,1,0,0,0]},
                         index=['A','B','C','D','E','F','G','H','I'])
adjacency = adjacency.T
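The breadth-first trace above can be sketched in plain Python. This is a minimal sketch using a dict-based adjacency list that mirrors the edges encoded in the matrix; the function name is illustrative. Note the "3 iterations" traced above assumes the luckiest queue ordering, while a plain FIFO queue dequeues more nodes before reaching I:

```python
from collections import deque

# Adjacency list matching the (transposed) matrix above
graph = {'A': ['B', 'C'], 'B': ['D', 'E', 'I'], 'C': ['F', 'G'],
         'D': ['E', 'H'], 'E': ['I'], 'F': ['A', 'G'],
         'G': [], 'H': [], 'I': ['F']}

def bfs_iterations(start, query):
    """Count how many dequeue steps a FIFO BFS needs to reach `query`."""
    queue = deque([start])
    visited = set()
    steps = 0
    while queue:
        node = queue.popleft()
        steps += 1
        if node == query:
            return steps
        if node in visited:
            continue
        visited.add(node)
        queue.extend(n for n in graph[node] if n not in visited)
    return None  # query is not reachable from start

print(bfs_iterations('A', 'I'))  # 6 with plain FIFO ordering
```

With the best-case ordering (B's neighbour I checked first), the search terminates after 3 dequeues, as in the trace.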
Algorithmic_Basics_of_Bioinformatics/Algorithmic Basics of Bioinformatics Tutorial Sheet 3.ipynb
Oli4/lsi-material
mit
Question II (b) A graph is acyclic if it contains no cycle, and directed if the edges have a direction. This graph contains a cycle (A->C->F->A), so it is not acyclic, but the edges have a direction, so it is directed. Question II (c) Yes, there are Eulerian cycles in the graph. Eulerian cycles use every edge exactly once. For example A->C->F->A or A->B->D->E->I->F->A. Question II (d) Yes, there are Hamiltonian cycles in the graph. Hamiltonian cycles use every node exactly once. For example A->C->F->A or A->B->D->E->I->F->A. Question II (e) The concept of cliques is only defined for undirected graphs. If the graph in Fig. 1 were undirected, ACF, CFG, BDE and BEI would be maximal cliques. Question II (f) As the graph contains cycles, a topological ordering is only possible by violating some of the edges. Question III
def dfs(matrix, query, start):
    # Return True if the query was found
    if query == start:
        return True
    # Return False if the node was already visited (dropped from the matrix)
    elif start not in matrix.index:
        return False
    # Return False if there are no outgoing edges
    elif 1 not in matrix.loc[start].values:
        return False
    # Call the function for all unvisited neighbouring nodes
    else:
        mask = matrix.loc[start].values == 1
        neighbours = list(matrix.loc[start][mask].index)
        matrix = matrix.drop(start)
        return any(dfs(matrix, query, n) for n in neighbours)

dfs(adjacency, 'I', 'A')
dfs(adjacency, 'N', 'A')
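The cycle claimed in Question II (b) can also be checked mechanically with networkx. This is a sketch that re-declares the edge list of Fig. 1 as encoded in the adjacency matrix above:

```python
import networkx as nx

# Edges of Fig. 1 as encoded in the adjacency matrix above
edges = [('A','B'), ('A','C'), ('B','D'), ('B','E'), ('B','I'),
         ('C','F'), ('C','G'), ('D','E'), ('D','H'), ('E','I'),
         ('F','A'), ('F','G'), ('I','F')]
G = nx.DiGraph(edges)

print(nx.is_directed_acyclic_graph(G))  # False: the graph contains a cycle
print(nx.find_cycle(G, source='A'))     # one cycle reachable from A, as a list of edges
```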
Algorithmic_Basics_of_Bioinformatics/Algorithmic Basics of Bioinformatics Tutorial Sheet 3.ipynb
Oli4/lsi-material
mit
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||QuotationMark||',
        ';': '||Semicolon||',
        '!': '||Exclamationmark||',
        '?': '||Questionmark||',
        '(': '||LeftParentheses||',
        ')': '||RightParentheses||',
        '--': '||Dash||',
        '\n': '||Return||',
    }

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    inputs = tf.placeholder(tf.int32, shape=[None, None], name="input")
    targets = tf.placeholder(tf.int32, shape=[None, None])
    learning_rate = tf.placeholder(tf.float32)
    return inputs, targets, learning_rate

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize the cell state using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initial state)
    """
    n_layers = 1
    # keep_prob = 0.5
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    # drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    cell = tf.contrib.rnn.MultiRNNCell([lstm] * n_layers)
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
    return cell, initial_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    embed = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(
        outputs, vocab_size,
        weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),
        biases_initializer=tf.zeros_initializer(),
        activation_fn=None)
    # logits = tf.nn.sigmoid(logits)
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_batches = len(int_text) // (batch_size * seq_length)
    # Drop the last few characters to make only full batches
    xdata = np.array(int_text[: n_batches * batch_size * seq_length])
    ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
    return np.array(list(zip(x_batches, y_batches)))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches between progress printouts.
# Number of Epochs
num_epochs = 30
# Batch Size
batch_size = 500
# RNN Size
rnn_size = 500
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.03
# Show stats for every n number of batches
show_every_n_batches = 10

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    choice = np.random.choice(len(int_to_vocab), 1, p=probabilities)
    return int_to_vocab[choice[0]]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
tv-script-generation/dlnd_tv_script_generation.ipynb
yuanotes/deep-learning
mit
If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked streams are automatically instantiated and dynamically supply the plot size, x_range, and y_range from the Bokeh plot to the operation based on your current viewport as you zoom or pan:
datashade.streams

# Exercise: Plot the taxi pickup locations ('pickup_x' and 'pickup_y' columns)
# Warning: Don't try to display hv.Points() directly; it's too big! Use datashade() for any display
# Optional: Change the cmap on the datashade operation to inferno
from datashader.colors import inferno
points = hv.Points(ddf, kdims=['pickup_x', 'pickup_y'])
datashade(points, cmap=inferno)
solutions/07-working-with-large-datasets-with-solutions.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Adding a tile source Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it:
%opts RGB [xaxis=None yaxis=None]
import geoviews as gv
from bokeh.models import WMTSTileSource

url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = WMTSTileSource(url=url)
gv.WMTS(wmts) * datashade(points)

%opts RGB [xaxis=None yaxis=None]
# Exercise: Overlay the taxi pickup data on top of the Wikipedia tile source
wiki_url = 'https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png'
wmts = WMTSTileSource(url=wiki_url)
gv.WMTS(wmts) * datashade(points)
solutions/07-working-with-large-datasets-with-solutions.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Aggregating with a variable So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables, including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade operation. Here we use the ds.mean aggregator to compute the average cost of a trip at a dropoff location:
selected = points.select(total_amount=(None, 1000))
selected.data = selected.data.persist()
gv.WMTS(wmts) * datashade(selected, aggregator=ds.mean('total_amount'))

# Exercise: Use the ds.min or ds.max aggregator to visualize ``tip_amount`` by dropoff location
# Optional: Eliminate outliers by using select
selected = points.select(tip_amount=(None, 1000))
selected.data = selected.data.persist()
gv.WMTS(wmts) * datashade(selected, aggregator=ds.max('tip_amount'))  # Try using ds.min
solutions/07-working-with-large-datasets-with-solutions.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Grouping by a variable Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day:
%opts Image [width=600 height=500 logz=True xaxis=None yaxis=None]
taxi_ds = hv.Dataset(ddf)
grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'], dynamic=True)
aggregate(grouped).redim.values(hour=range(24))

%%opts Image [width=300 height=200 xaxis=None yaxis=None]
# Exercise: Facet the trips in the morning hours as an NdLayout using aggregate(grouped.layout())
# Hint: You can reuse the existing grouped variable or select a subset before using the .to method
taxi_ds = hv.Dataset(ddf).select(hour=(2, 8))
taxi_ds.data = taxi_ds.data.persist()
grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'])
aggregate(grouped.layout()).cols(3)
solutions/07-working-with-large-datasets-with-solutions.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
Loading the model
from code_beatrix.ai import DLImageSegmentation
model = DLImageSegmentation(fLOG=print)
_doc/notebooks/ai/image_segmentation.ipynb
sdpython/code_beatrix
mit
On a small image
img = 'images/Tesla_circa_1890c.jpg'
feat, pred = model.predict(img)
pred.shape
viz = model.plot(img, pred)  # img or feat
import skimage.io as skio
skio.imshow(viz)
_doc/notebooks/ai/image_segmentation.ipynb
sdpython/code_beatrix
mit
On a resized image
from PIL import Image
img = 'images/Tesla_circa_1890c.jpg'
pilimg = Image.open(img)
si = pilimg.size
pilimg2 = pilimg.resize((si[0]//2, si[1]//2))
from skimage.io._plugins.pil_plugin import pil_to_ndarray
skimg = pil_to_ndarray(pilimg2)
skimg.shape
feat, pred = model.predict(skimg)
pred.shape
viz = model.plot(skimg, pred)
skio.imshow(viz)
_doc/notebooks/ai/image_segmentation.ipynb
sdpython/code_beatrix
mit
On a large image
img = 'images/h2015_2.jpg'
pilimg = Image.open(img)
si = pilimg.size
pilimg2 = pilimg.resize((si[0]//2, si[1]//2))
skimg = pil_to_ndarray(pilimg2)
skimg.shape
skio.imshow(skimg)
feat, pred = model.predict(skimg)
pred.shape
viz = model.plot(feat, pred)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(14, 12))
ax.imshow(viz)
_doc/notebooks/ai/image_segmentation.ipynb
sdpython/code_beatrix
mit
Now let's find the celebrities. The most basic centrality is degree centrality, which counts the edges incident to a node (the sum of in- and out-degree in the case of a directed graph).
dg_centrality = nx.degree_centrality(g_fb)
sorted_dg_centrality = sorted(dg_centrality.items(), key=operator.itemgetter(1), reverse=True)
sorted_dg_centrality[:10]
Centralities.ipynb
noppanit/social-network-analysis
mit
We can see that node 107 has the highest degree centrality, which means it has the highest number of connected nodes. We can verify this by getting the degree of node 107 to see how many friends it has.
nx.degree(g_fb, [107])
Centralities.ipynb
noppanit/social-network-analysis
mit
Node 107 has 1045 friends, and we can divide that by the number of possible neighbours (the number of nodes minus one) to get the normalized degree centrality.
# normalize by the number of possible neighbours (n - 1), as nx.degree_centrality does
float(nx.degree(g_fb, [107]).values()[0]) / (g_fb.number_of_nodes() - 1)
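On a toy graph the relationship degree / (n - 1) can be checked directly. This is a minimal sketch, independent of the Facebook data:

```python
import networkx as nx

# Star graph: node 0 is connected to nodes 1..4
G = nx.star_graph(4)

dc = nx.degree_centrality(G)
n = G.number_of_nodes()

# The centre has degree 4 out of n - 1 = 4 possible neighbours
assert dc[0] == G.degree(0) / float(n - 1)
print(dc[0])  # 1.0
```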
Centralities.ipynb
noppanit/social-network-analysis
mit
Degree centrality might be the easiest number to calculate, but it only shows the number of connected nodes, which in a real social network might not be very useful: you might have a million followers, but if the majority of them are bots, the number doesn't tell you anything new. Now let's try betweenness centrality, which counts all of the shortest paths going through each node. If you have the highest number of shortest paths going through you, you might be considered a bridge of your entire network. Nodes with high betweenness are important in communication and information diffusion. We will be using multiprocessing so we can parallelize the computation and distribute the load.
from multiprocessing import Pool
import itertools

def partitions(nodes, n):
    "Partitions the nodes into n subsets"
    nodes_iter = iter(nodes)
    while True:
        partition = tuple(itertools.islice(nodes_iter, n))
        if not partition:
            return
        yield partition

def btwn_pool(G_tuple):
    return nx.betweenness_centrality_source(*G_tuple)

def between_parallel(G, processes=None):
    p = Pool(processes=processes)
    part_generator = 4 * len(p._pool)
    node_partitions = list(partitions(G.nodes(), int(len(G) / part_generator)))
    num_partitions = len(node_partitions)
    bet_map = p.map(btwn_pool,
                    zip([G] * num_partitions,
                        [True] * num_partitions,
                        [None] * num_partitions,
                        node_partitions))
    bt_c = bet_map[0]
    for bt in bet_map[1:]:
        for n in bt:
            bt_c[n] += bt[n]
    return bt_c
Centralities.ipynb
noppanit/social-network-analysis
mit
Let's try it with multiple processes.
start = timeit.default_timer()
bt = between_parallel(g_fb)
stop = timeit.default_timer()

top = 10
max_nodes = sorted(bt.iteritems(), key=lambda v: -v[1])[:top]
bt_values = [5] * len(g_fb.nodes())
bt_colors = [0] * len(g_fb.nodes())
for max_key, max_val in max_nodes:
    bt_values[max_key] = 150
    bt_colors[max_key] = 2

print 'It takes {} seconds to finish'.format(stop - start)
print max_nodes
Centralities.ipynb
noppanit/social-network-analysis
mit
Now let's try with just one processor
start = timeit.default_timer()
bt = nx.betweenness_centrality(g_fb)
stop = timeit.default_timer()

top = 10
max_nodes = sorted(bt.iteritems(), key=lambda v: -v[1])[:top]
bt_values = [5] * len(g_fb.nodes())
bt_colors = [0] * len(g_fb.nodes())
for max_key, max_val in max_nodes:
    bt_values[max_key] = 150
    bt_colors[max_key] = 2

print 'It takes {} seconds to finish'.format(stop - start)
print max_nodes
Centralities.ipynb
noppanit/social-network-analysis
mit
Page rank We're going to try the PageRank algorithm. This is very similar to Google's PageRank, which uses incoming links to determine the "popularity" of a page.
g_fb_pr = nx.pagerank(g_fb)
top = 10
max_pagerank = sorted(g_fb_pr.iteritems(), key=lambda v: -v[1])[:top]
max_pagerank
Centralities.ipynb
noppanit/social-network-analysis
mit
We can see that now the score is different: node 3437 is more popular than node 107. Who is a "gray cardinal"? There's another metric we can use to measure the most influential node, called eigenvector centrality. Put simply, if you're well connected to a lot of important people, you're important (and influential) as well.
g_fb_eg = nx.eigenvector_centrality(g_fb)
top = 10
max_eg = sorted(g_fb_eg.iteritems(), key=lambda v: -v[1])[:top]
max_eg
Centralities.ipynb
noppanit/social-network-analysis
mit
Now we get quite a different result: node 1912 is connected to more important people in the entire network, which means that node is more influential than the rest of the network. iGraph with SNAP Facebook Dataset Networkx is easy to install and great to start with. However, as it's written in pure Python, it's quite slow. I'm going to try igraph, which is C-based. I'm hoping this will yield the same results, but faster.
from igraph import *
import timeit

igraph_fb = Graph.Read_Edgelist('facebook_combined.txt', directed=False)
print igraph_fb.summary()
Centralities.ipynb
noppanit/social-network-analysis
mit
Betweenness
def betweenness_centralization(G):
    vnum = G.vcount()
    if vnum < 3:
        raise ValueError("graph must have at least three vertices")
    denom = (vnum - 1) * (vnum - 2)
    temparr = [2 * i / denom for i in G.betweenness()]
    return temparr

start = timeit.default_timer()
igraph_betweenness = betweenness_centralization(igraph_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)

igraph_betweenness.sort(reverse=True)
print igraph_betweenness[:10]
Centralities.ipynb
noppanit/social-network-analysis
mit
Closeness
start = timeit.default_timer()
igraph_closeness = igraph_fb.closeness()
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)

igraph_closeness.sort(reverse=True)
print igraph_closeness[:10]
Centralities.ipynb
noppanit/social-network-analysis
mit
Eigenvector centrality
start = timeit.default_timer()
igraph_eg = igraph_fb.evcent()
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)

igraph_eg.sort(reverse=True)
print igraph_eg[:10]
Centralities.ipynb
noppanit/social-network-analysis
mit
PageRank
start = timeit.default_timer()
igraph_pr = igraph_fb.pagerank()
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)

igraph_pr.sort(reverse=True)
print igraph_pr[:10]
Centralities.ipynb
noppanit/social-network-analysis
mit
We can see that igraph yields similar results to networkx but is a lot quicker on the same machine. Graph-tool with SNAP Facebook Dataset I'm going to try another library, graph-tool, which is supposed to be faster than both networkx and igraph. Graph-tool is also C-based and has OpenMP enabled, so many of its algorithms run in parallel.
import sys
from graph_tool.all import *
import timeit

show_config()

graph_tool_fb = Graph(directed=False)
with open('facebook_combined.txt', 'r') as f:
    for line in f:
        # vertex ids in the edge list are integers
        source, target = (int(v) for v in line.split())
        graph_tool_fb.add_edge(source, target)

print graph_tool_fb.num_vertices()
print graph_tool_fb.num_edges()
Centralities.ipynb
noppanit/social-network-analysis
mit
Betweenness
start = timeit.default_timer()
vertex_betweenness, edge_betweenness = betweenness(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
vertex_betweenness.a[107]
Centralities.ipynb
noppanit/social-network-analysis
mit
Closeness
start = timeit.default_timer()
v_closeness = closeness(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
v_closeness.a[107]
Centralities.ipynb
noppanit/social-network-analysis
mit
Eigenvector
start = timeit.default_timer()
v_eigenvector = eigenvector(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
Centralities.ipynb
noppanit/social-network-analysis
mit
Page Rank
start = timeit.default_timer()
v_pagerank = pagerank(graph_tool_fb)
stop = timeit.default_timer()
print 'It takes {} seconds to finish'.format(stop - start)
Centralities.ipynb
noppanit/social-network-analysis
mit
Information diffusion modelling I'm going to use an information diffusion model to simulate how information travels in the graph.
%matplotlib inline
import random as r
import networkx as nx
import matplotlib.pyplot as plot

class Person(object):
    def __init__(self, id):
        # Start with a single initial preference
        self.id = id
        self.i = r.random()
        self.a = self.i
        # we value initial opinion and subsequent information equally
        self.alpha = 0.8

    def __str__(self):
        return str(self.id)

    def step(self):
        # loop through the neighbors and aggregate their preferences
        neighbors = g[self]
        # all nodes in the list of neighbors are equally weighted, including self
        w = 1 / float(len(neighbors) + 1)
        s = w * self.a
        for node in neighbors:
            s += w * node.a
        # update my beliefs = initial belief plus sum of all influences
        self.a = (1 - self.alpha) * self.i + self.alpha * s

density = 0.9
g = nx.Graph()

## create a network of Person objects
for i in range(10):
    p = Person(i)
    g.add_node(p)

## this will be a simple random graph, every pair of nodes has an
## equal probability of connection
for x in g.nodes():
    for y in g.nodes():
        if r.random() <= density:
            g.add_edge(x, y)

## draw the resulting graph and color the nodes by their value
col = [n.a for n in g.nodes()]
pos = nx.spring_layout(g)
nx.draw_networkx(g, pos=pos, node_color=col)

## repeat for 30 time periods
for i in range(30):
    ## iterate through all nodes in the network and tell them to make a step
    for node in g.nodes():
        node.step()

## collect new attitude data, print it to the terminal and plot it.
col = [n.a for n in g.nodes()]
print col
plot.plot(col)

class Influencer(Person):
    def __init__(self, id):
        self.id = id
        self.i = r.random()
        self.a = 1  ## opinion is strong and immovable

    def step(self):
        pass

influencers = 2
connections = 4

## add the influencers to the network and connect each to `connections` other nodes
for i in range(influencers):
    inf = Influencer("Inf" + str(i))
    for x in range(connections):
        g.add_edge(r.choice(g.nodes()), inf)

## repeat for 30 time periods
for i in range(30):
    ## iterate through all nodes in the network and tell them to make a step
    for node in g.nodes():
        node.step()

## collect new attitude data, print it to the terminal and plot it.
col = [n.a for n in g.nodes()]
#print col
plot.plot(col)
Centralities.ipynb
noppanit/social-network-analysis
mit
Networkx Independent Cascade Model
import copy
import networkx as nx
import random

def independent_cascade(G, seeds, steps=0):
    """Return the active nodes of each diffusion step by the independent cascade model

    Parameters
    ----------
    G : graph
        A NetworkX graph
    seeds : list of nodes
        The seed nodes for diffusion
    steps : integer
        The number of steps to diffuse. If steps <= 0, the diffusion runs until
        no more nodes can be activated. If steps > 0, the diffusion runs for at
        most "steps" rounds

    Returns
    -------
    layer_i_nodes : list of list of activated nodes
      layer_i_nodes[0]: the seeds
      layer_i_nodes[k]: the nodes activated at the kth diffusion step

    Notes
    -----
    When node v in G becomes active, it has a *single* chance of activating
    each currently inactive neighbor w with probability p_{vw}

    Examples
    --------
    >>> DG = nx.DiGraph()
    >>> DG.add_edges_from([(1, 2), (1, 3), (1, 5), (2, 1), (3, 2), (4, 2), (4, 3), \
    >>>   (4, 6), (5, 3), (5, 4), (5, 6), (6, 4), (6, 5)], act_prob=0.2)
    >>> H = nx.independent_cascade(DG, [6])

    References
    ----------
    [1] David Kempe, Jon Kleinberg, and Eva Tardos.
        Influential nodes in a diffusion model for social networks.
        In Automata, Languages and Programming, 2005.
    """
    if type(G) == nx.MultiGraph or type(G) == nx.MultiDiGraph:
        raise Exception(
            "independent_cascade() is not defined for graphs with multiedges.")

    # make sure the seeds are in the graph
    for s in seeds:
        if s not in G.nodes():
            raise Exception("seed", s, "is not in graph")

    # change to directed graph
    if not G.is_directed():
        DG = G.to_directed()
    else:
        DG = copy.deepcopy(G)

    # init activation probabilities
    for e in DG.edges():
        if 'act_prob' not in DG[e[0]][e[1]]:
            DG[e[0]][e[1]]['act_prob'] = 0.1
        elif DG[e[0]][e[1]]['act_prob'] > 1:
            raise Exception("edge activation probability:",
                            DG[e[0]][e[1]]['act_prob'], "cannot be larger than 1")

    # perform diffusion
    A = copy.deepcopy(seeds)  # prevent side effect
    if steps <= 0:
        # perform diffusion until no more nodes can be activated
        return _diffuse_all(DG, A)
    # perform diffusion for at most "steps" rounds
    return _diffuse_k_rounds(DG, A, steps)

def _diffuse_all(G, A):
    tried_edges = set()
    layer_i_nodes = []
    layer_i_nodes.append([i for i in A])  # prevent side effect
    while True:
        len_old = len(A)
        (A, activated_nodes_of_this_round, cur_tried_edges) = \
            _diffuse_one_round(G, A, tried_edges)
        layer_i_nodes.append(activated_nodes_of_this_round)
        tried_edges = tried_edges.union(cur_tried_edges)
        if len(A) == len_old:
            break
    return layer_i_nodes

def _diffuse_k_rounds(G, A, steps):
    tried_edges = set()
    layer_i_nodes = []
    layer_i_nodes.append([i for i in A])
    while steps > 0 and len(A) < len(G):
        len_old = len(A)
        (A, activated_nodes_of_this_round, cur_tried_edges) = \
            _diffuse_one_round(G, A, tried_edges)
        layer_i_nodes.append(activated_nodes_of_this_round)
        tried_edges = tried_edges.union(cur_tried_edges)
        if len(A) == len_old:
            break
        steps -= 1
    return layer_i_nodes

def _diffuse_one_round(G, A, tried_edges):
    activated_nodes_of_this_round = set()
    cur_tried_edges = set()
    for s in A:
        for nb in G.successors(s):
            if nb in A or (s, nb) in tried_edges or (s, nb) in cur_tried_edges:
                continue
            if _prop_success(G, s, nb):
                activated_nodes_of_this_round.add(nb)
            cur_tried_edges.add((s, nb))
    activated_nodes_of_this_round = list(activated_nodes_of_this_round)
    A.extend(activated_nodes_of_this_round)
    return A, activated_nodes_of_this_round, cur_tried_edges

def _prop_success(G, src, dest):
    return random.random() <= G[src][dest]['act_prob']

run_times = 10
G = nx.DiGraph()
G.add_edge(1, 2, act_prob=.5)
G.add_edge(2, 1, act_prob=.5)
G.add_edge(1, 3, act_prob=.2)
G.add_edge(3, 1, act_prob=.2)
G.add_edge(2, 3, act_prob=.3)
G.add_edge(2, 4, act_prob=.5)
G.add_edge(3, 4, act_prob=.1)
G.add_edge(3, 5, act_prob=.2)
G.add_edge(4, 5, act_prob=.2)
G.add_edge(5, 6, act_prob=.6)
G.add_edge(6, 5, act_prob=.6)
G.add_edge(6, 4, act_prob=.3)
G.add_edge(6, 2, act_prob=.4)
nx.draw_networkx(G)

independent_cascade(G, [1], steps=0)

n_A = 0.0
for i in range(run_times):
    A = independent_cascade(G, [1], steps=1)
    print A
    for layer in A:
        n_A += len(layer)
n_A / run_times
#assert_almost_equal(n_A / run_times, 1.7, places=1)
Centralities.ipynb
noppanit/social-network-analysis
mit
<div class="alert alert-success"> <b>EXERCISE</b>: Use groupby() to plot the number of "Hamlet" films made each decade. </div>
hamlet = titles[titles['title'] == 'Hamlet']
hamlet.groupby(hamlet.year // 10 * 10).size().plot(kind='bar')
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s? </div>
cast1950 = cast[cast.year // 10 == 195] cast1950 = cast1950[cast1950.n == 1] cast1950.groupby(['year', 'type']).size()
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: List the 10 actors/actresses that have the most leading roles (n=1) since the 1990's. </div>
cast1990 = cast[cast['year'] >= 1990] cast1990 = cast1990[cast1990.n == 1] cast1990.groupby('name').size().nlargest(10)
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: List each of the characters that Frank Oz has portrayed at least twice. </div>
c = cast c = c[c.name == 'Frank Oz'] g = c.groupby(['character']).size() g[g > 1].sort_values()
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: Add a column to the `cast` dataframe that indicates the number of roles for the film. </div>
cast['n_total'] = cast.groupby('title')['n'].transform('max') cast.head()
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: Calculate the ratio of leading actor and actress roles to the total number of leading roles per decade. </div> Tip: you can do this with a groupby in two steps: first calculate the counts, then the ratios.
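The two-step pattern from the tip can be sketched on a toy frame first (the numbers below are hypothetical, not the real `cast` counts):

```python
import pandas as pd

# Toy stand-in for per-decade, per-type leading-role counts (hypothetical values)
counts = pd.Series(
    [30, 10, 25, 15],
    index=pd.MultiIndex.from_product([[1950, 1960], ['actor', 'actress']],
                                     names=['year', 'type']),
)

# Step 1: per-decade totals, broadcast back to each row with transform('sum')
totals = counts.groupby(level='year').transform('sum')

# Step 2: divide the counts by the totals to get within-decade ratios
ratios = counts / totals
```

Each decade's ratios sum to 1, which is a handy sanity check after the real computation too.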
leading = cast[cast['n'] == 1] sums_decade = leading.groupby([cast['year'] // 10 * 10, 'type']).size() sums_decade #sums_decade.groupby(level='year').transform(lambda x: x / x.sum()) ratios_decade = sums_decade / sums_decade.groupby(level='year').transform('sum') ratios_decade ratios_decade[:, 'actor'].plot() ratios_decade[:, 'actress'].plot()
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
For an overview of all string methods, see: http://pandas.pydata.org/pandas-docs/stable/api.html#string-handling <div class="alert alert-success"> <b>EXERCISE</b>: We already plotted the number of 'Hamlet' films released each decade, but not all titles are exactly called 'Hamlet'. Give an overview of the titles that contain 'Hamlet', and that start with 'Hamlet': </div>
hamlets = titles[titles['title'].str.contains('Hamlet')] hamlets['title'].value_counts() hamlets = titles[titles['title'].str.match('Hamlet')] hamlets['title'].value_counts()
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
<div class="alert alert-success"> <b>EXERCISE</b>: List the 10 movie titles with the longest name. </div>
title_longest = titles['title'].str.len().nlargest(10) title_longest pd.options.display.max_colwidth = 210 titles.loc[title_longest.index]
solved - 04b - Advanced groupby operations.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
We need two new columns for $x$ and $y$ error. We can calculate these by first calculating the theoretical coordinate and then subtracting it from the observed coordinate.
dynamixel_range = 300.0 gear_ratio = 76.0 / 32.0 l_1 = 15.25 l_2 = 13.75 x_offset = 17.0 y_offset = -2.97 gripper_offset = 0.5 def f_with_theta(s, theta_prime_1, theta_prime_2): theta_1 = ((1023.0-s[:,0])/1023.0) * np.deg2rad(dynamixel_range / gear_ratio) + theta_prime_1 theta_2 = ((1023.0-s[:,1])/1023.0) * np.deg2rad(dynamixel_range / gear_ratio) + theta_prime_2 x = x_offset + np.cos(theta_1) * l_1 + np.cos(theta_1 + theta_2) * l_2 + np.cos(theta_1 + theta_2 + np.pi/2.0) * gripper_offset y = y_offset + np.sin(theta_1) * l_1 + np.sin(theta_1 + theta_2) * l_2 + np.sin(theta_1 + theta_2 + np.pi/2.0) * gripper_offset return np.array([x, y]).T
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Optimization of Theta Values $\theta'_1$ and $\theta'_2$ were measured by hand to be 4° and 40° respectively. To find a more accurate measurement, we will brute-force values close to those and find the two values that result in the smallest mean squared error when comparing the theoretical results to the known values in the dataset. This will provide a better starting point when solving for $\epsilon$.
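`scipy.optimize.brute` simply evaluates the objective over a grid of slices and returns the best point (polished by a local minimizer by default). A minimal sketch on a toy quadratic with a known minimum, not the arm model:

```python
from scipy.optimize import brute

def objective(t):
    # Toy objective: minimum is known to be at t = (1.0, 2.0)
    return (t[0] - 1.0) ** 2 + (t[1] - 2.0) ** 2

# Search a small window around each coordinate, like slice_for_value() above
best = brute(objective, [slice(0.5, 1.5, 0.05), slice(1.5, 2.5, 0.05)])
```

The same pattern is used above with `mse` as the objective and the hand-measured angles as the center of each slice.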
from sklearn.metrics import mean_squared_error from scipy.optimize import brute def mse(theta): xy = f_with_theta(data.values[:,0:2], theta[0], theta[1]) return np.array([mean_squared_error(data.x, xy[:,0]), mean_squared_error(data.y, xy[:,1])]).mean() theta_prime_1_estimate = np.deg2rad(4.0) theta_prime_2_estimate = np.deg2rad(40.0) def slice_for_value(value): inc = value * 0.2 return slice(value - inc, value + inc) t = brute(mse, [slice_for_value(theta_prime_1_estimate), slice_for_value(theta_prime_2_estimate)]) print "θ′: {}".format(np.rad2deg(t)) def f(s): return f_with_theta(s, t[0], t[1]) theoretical_xy = f(data.values[:,0:2]) data["x_theory"] = theoretical_xy[:,0] data["y_theory"] = theoretical_xy[:,1] data["x_error"] = data.x - data.x_theory data["y_error"] = data.y - data.y_theory data.head()
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Visualizing Error Below is a visualization of the error. The x and y axis on the graph represent inches from the bottom left corner of the chessboard. The actual $x,y$ is in blue, and the theoretical values are red. The equation is pretty accurate, but in the far corners, there is noticable error.
import matplotlib.pyplot as plt def scatter(x, y): plt.scatter(data.x, data.y, color='blue') plt.scatter(x, y, color='red', marker=',') plt.show() scatter(data.x_theory, data.y_theory)
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Below is another way to visualize the error. This time the x and y axes represent servo values.
from matplotlib import cm from matplotlib.colors import Normalize plt.scatter(data.s1, data.s2, c=data.x_error, cmap=cm.coolwarm, norm=Normalize(-0.5, 0.5), linewidth=0.4, s=40.0) plt.xlim(0,1023) plt.ylim(0,1023) plt.show() plt.scatter(data.s1, data.s2, c=data.y_error, cmap=cm.coolwarm, norm=Normalize(-0.5, 0.5), linewidth=0.4, s=40.0) plt.xlim(0,1023) plt.ylim(0,1023) plt.show()
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Model The model is a simple polynomial regression.
from sklearn.model_selection import cross_val_score from sklearn.model_selection import ShuffleSplit from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures from sklearn.pipeline import Pipeline S_train = data.values[:,0:2] z_train = data.values[:,6:8] poly = PolynomialFeatures(degree=3) x_error_model = Pipeline([('poly', poly), ('linear', LinearRegression(normalize=True))]) y_error_model = Pipeline([('poly', poly), ('linear', LinearRegression(normalize=True))]) cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0) cvs_x = cross_val_score(x_error_model, S_train, y=z_train[:, 0], scoring='neg_mean_squared_error', cv=cv) cvs_y = cross_val_score(y_error_model, S_train, y=z_train[:, 1], scoring='neg_mean_squared_error', cv=cv) print "Cross validation error for x: {}".format(cvs_x) print "Cross validation error for y: {}".format(cvs_y) x_error_model.fit(S_train, z_train[:, 0]) y_error_model.fit(S_train, z_train[:, 1]) def predict_error(s): return np.array([x_error_model.predict(s), y_error_model.predict(s)]).T
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Below is a plot of the predict_error function with the real error plotted in circles on top.
from itertools import product s_plot = np.array(list(product(np.linspace(0.0, 1023.0, 32), np.linspace(0.0, 1023.0, 32)))) predicted_error = predict_error(s_plot) x_error_predicted = predicted_error[:,0] y_error_predicted = predicted_error[:,1] plt.scatter(s_plot[:,0], s_plot[:,1], c=x_error_predicted, marker=',', cmap=cm.coolwarm, norm=Normalize(-0.5, 0.5), linewidth=0.0, s=120.0) plt.scatter(data.s1, data.s2, c=data.x_error, cmap=cm.coolwarm, norm=Normalize(-0.5, 0.5), linewidth=0.4, s=40.0) plt.xlim(-20,1043) plt.ylim(-20,1043) plt.show() plt.scatter(s_plot[:,0], s_plot[:,1], c=y_error_predicted, marker=',',cmap=cm.coolwarm, norm=Normalize(-0.5, 0.5), linewidth=0.0, s=120.0) plt.scatter(data.s1, data.s2, c=data.y_error, cmap=cm.coolwarm, norm=Normalize(-0.5, 0.5), linewidth=0.4, s=40.0) plt.xlim(-20,1043) plt.ylim(-20,1043) plt.show()
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Building the Final Equation Using the predict_error function, $f'(s)$ can now be created.
def f_prime(s): return f(s) + predict_error(s)
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
To visualize the difference $f'(s)$ makes, we can, once again, plot the predicted $x,y$ (red) on top of the real $x,y$ (blue) for each given $s_1,s_2$.
xy_theoretical_with_error = f_prime(data.values[:,0:2]) data['x_theory_with_error'] = xy_theoretical_with_error[:,0] data['y_theory_with_error'] = xy_theoretical_with_error[:,1] scatter(data.x_theory_with_error, data.y_theory_with_error)
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Now that we have a working $f'(s)$, we can set out to accomplish our original goal, to create $g(x, y)$. The function $f'$ takes $s_1, s_2$ and returns $x, y$. The function $g$ takes $x, y$ and returns $s_1, s_2$. In order to find $g$, we just need to invert $f'$! But $f'$ doesn't invert easily, and we actually don't need $g$ to work over all values of $s_1,s_2$–only when $s_1,s_2 \in {0, 1, 2, ..., 1023}$. So instead, let's solve for every possible $s_1,s_2$ combination, and then create a lookup tree with the results. We can then query the lookup tree to find the closest $s_1,s_2$ for any given point $x,y$.
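The lookup idea in miniature: map every discrete input through the forward function, index the outputs with a KD-tree, and invert by nearest-neighbor query. This sketch uses a toy linear map in place of the arm's $f'$:

```python
import numpy as np
from sklearn.neighbors import KDTree

# Forward map over a small discrete input grid
s = np.arange(10, dtype=float).reshape(-1, 1)   # toy "servo" values 0..9
pts = 2.0 * s + 1.0                             # stand-in for f'(s)

tree = KDTree(pts, metric='euclidean')

def g_toy(xy):
    # Nearest stored output -> corresponding input
    idx = tree.query(np.atleast_2d(xy), return_distance=False)
    return s[idx.ravel()]
```

Querying `g_toy([[7.1]])` returns the input whose forward image is closest to 7.1, exactly the role `g(pts)` plays below over the full 1024×1024 servo grid.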
srange = np.array(list(product(range(1024), range(1024)))) pts = f_prime(srange) from sklearn.neighbors import KDTree tree = KDTree(pts, metric='euclidean') def g(pts): return np.array(srange[tree.query(pts, return_distance=False)]).reshape(-1,2)
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Visualizing the Final Equation Below are two plots visualizing the function $g(x,y)$ over the entire 18"x18" chessboard. The first plot is $s_1$ and the second is $s_2$.
pts_grid = np.array(list(product(np.linspace(0.0, 18.0, 32), np.linspace(0.0, 18.0, 32)))) projected_s = g(pts_grid) plt.scatter(pts_grid[:,0], pts_grid[:,1], c=projected_s[:,0], marker=',', cmap=cm.coolwarm, norm=Normalize(0, 1023), linewidth=0.0, s=120.0) plt.xlim(0,18) plt.ylim(0,18) plt.colorbar() plt.title("s1") plt.show() plt.scatter(pts_grid[:,0], pts_grid[:,1], c=projected_s[:,1], marker=',', cmap=cm.coolwarm, norm=Normalize(0, 1023), linewidth=0.0, s=120.0) plt.xlim(0,18) plt.ylim(0,18) plt.colorbar() plt.title("s2") plt.show()
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Saving the Model The $x,y$ associated with each $s_1,s_2$ from 0 to 1023 is saved to disk so the lookup tree can be created on demand for the Raspberry Turk to use.
path = project.path('data', 'processed', 'arm_movement_engine_pts.npy') with open(path, 'w') as f: np.save(f, pts)
notebooks/arm_error_correction.ipynb
joeymeyer/raspberryturk
mit
Data Pre-Processing First, let's load the dataset and add a binary affair column.
# load dataset dta = sm.datasets.fair.load_pandas().data # add "affair" column: 1 represents having affairs, 0 represents not dta['affair'] = (dta.affairs > 0).astype(int)
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Data Exploration
dta.groupby('affair').mean()
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
We can see that on average, women who have affairs rate their marriages lower, which is to be expected. Let's take another look at the rate_marriage variable.
dta.groupby('rate_marriage').mean()
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
An increase in age, yrs_married, and children appears to correlate with a declining marriage rating. Data Visualization
# show plots in the notebook %matplotlib inline
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Let's start with histograms of education and marriage rating.
# histogram of education
dta.educ.hist()
plt.title('Histogram of Education')
plt.xlabel('Education Level')
plt.ylabel('Frequency')
plt.show()

# histogram of marriage rating
dta.rate_marriage.hist()
plt.title('Histogram of Marriage Rating')
plt.xlabel('Marriage Rating')
plt.ylabel('Frequency')
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Let's take a look at the distribution of marriage ratings for those having affairs versus those not having affairs.
# barplot of marriage rating grouped by affair (True or False) pd.crosstab(dta.rate_marriage, dta.affair.astype(bool)).plot(kind='bar') plt.title('Marriage Rating Distribution by Affair Status') plt.xlabel('Marriage Rating') plt.ylabel('Frequency')
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Logistic Regression Let's go ahead and run logistic regression on the entire data set, and see how accurate it is!
# instantiate a logistic regression model, and fit with X and y model = LogisticRegression() model = model.fit(X, y) # check the accuracy on the training set model.score(X, y)
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
73% accuracy seems good, but what's the null error rate?
# what percentage had affairs? y.mean()
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Only 32% of the women had affairs, which means that you could obtain 68% accuracy by always predicting "no". So we're doing better than the null error rate, but not by much. Let's examine the coefficients to see what we learn.
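The null error rate is just the frequency of the majority class; a quick sanity check on toy labels (not the survey data):

```python
import numpy as np

y = np.array([0, 0, 0, 1, 1])        # 40% positives in this toy vector

p = y.mean()                          # positive-class rate
null_accuracy = max(p, 1 - p)         # accuracy of always predicting the majority class
```

For the affairs data, `y.mean()` is about 0.32, so always predicting "no" scores about 0.68, the 68% baseline quoted above.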
# examine the coefficients trans = np.transpose(model.coef_) zipi = list(zip(X.columns, trans)) pd.DataFrame(zipi)
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Increases in marriage rating and religiousness correspond to a decrease in the likelihood of having an affair. For both the wife's occupation and the husband's occupation, the lowest likelihood of having an affair corresponds to the baseline occupation (student), since all of the dummy coefficients are positive. Model Evaluation Using a Validation Set So far, we have trained and tested on the same set. Let's instead split the data into a training set and a testing set.
# evaluate the model by splitting into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) model2 = LogisticRegression() model2.fit(X_train, y_train)
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
We now need to predict class labels for the test set. We will also generate the class probabilities, just to take a look.
# predict class labels for the test set predicted = model2.predict(X_test) print(predicted) # generate class probabilities probs = model2.predict_proba(X_test) print(probs)
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
As you can see, the classifier is predicting a 1 (having an affair) any time the probability in the second column is greater than 0.5. Now let's generate some evaluation metrics.
# generate evaluation metrics print(metrics.accuracy_score(y_test, predicted)) print(metrics.roc_auc_score(y_test, probs[:, 1]))
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
The accuracy is 73%, which is the same as we experienced when training and predicting on the same data. We can also see the confusion matrix and a classification report with other metrics.
print(metrics.confusion_matrix(y_test, predicted)) print(metrics.classification_report(y_test, predicted))
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Model Evaluation Using Cross-Validation Now let's try 10-fold cross-validation, to see if the accuracy holds up more rigorously.
# evaluate the model using 10-fold cross-validation scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10) print(scores) print(scores.mean())
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Looks good. It's still performing at 73% accuracy. Predicting the Probability of an Affair Just for fun, let's predict the probability of an affair for a random woman not present in the dataset. She's a 25-year-old teacher who graduated college, has been married for 3 years, has 1 child, rates herself as strongly religious, rates her marriage as fair, and her husband is a farmer.
model.predict_proba(np.array([[1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 3, 25, 3, 1, 4, 16]]))
Jupyter/Logistic.ipynb
davidgutierrez/HeartRatePatterns
gpl-3.0
Create a raw data set, then compute season and apply basic filters (also export to CSV file)
raw_data = pynsqd.NSQData().data clean_data = ( raw_data .query("primary_landuse != 'Unknown'") .query("parameter in @nsqd_params") .query("fraction == 'Total'") .query("epa_rain_zone == 1") .assign(station='outflow') .assign(cvcparam=lambda df: df['parameter'].apply(get_cvc_parameter)) .assign(season=lambda df: df['start_date'].apply(wqio.utils.getSeason)) .drop('parameter', axis=1) .rename(columns={'cvcparam': 'parameter'}) .pipe(fix_nsqd_bacteria_units) .query("primary_landuse == 'Residential'") )
examples/medians/0 - Setup NSQD Median computation.ipynb
Geosyntec/pycvc
bsd-3-clause
Show the sample counts for each parameter
clean_data.groupby(by=['parameter', 'season']).size().unstack(level='season')
examples/medians/0 - Setup NSQD Median computation.ipynb
Geosyntec/pycvc
bsd-3-clause
Export TSS to a CSV file
( clean_data .query("parameter == 'Total Suspended Solids'") .to_csv('NSQD_Res_TSS.csv', index=False) )
examples/medians/0 - Setup NSQD Median computation.ipynb
Geosyntec/pycvc
bsd-3-clause
Identifier for storing these features on disk and referring to them later.
feature_list_id = 'wmd'
notebooks/feature-wmd.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Build Features
def wmd(pair): return embedding_model.wmdistance(pair[0], pair[1]) wmds = kg.jobs.map_batch_parallel( tokens, item_mapper=wmd, batch_size=1000, ) wmds = np.array(wmds).reshape(-1, 1) X_train = wmds[:len(tokens_train)] X_test = wmds[len(tokens_train):] print('X_train:', X_train.shape) print('X_test: ', X_test.shape)
notebooks/feature-wmd.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Save features
feature_names = [ 'wmd', ] project.save_features(X_train, X_test, feature_names, feature_list_id)
notebooks/feature-wmd.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting. 2 - L2 Regularization The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from: $$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$ To: $$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$ Let's modify your cost and observe the consequences. Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use : python np.sum(np.square(Wl)) Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
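The L2 penalty term by itself is just the scaled sum of squared weight entries. On tiny hypothetical weight matrices:

```python
import numpy as np

# Hypothetical weights for two layers, plus hyperparameters
W1 = np.array([[1.0, 2.0]])
W2 = np.array([[3.0]])
lambd, m = 0.1, 4

# lambda/(2m) * sum over all layers of the squared entries
l2_cost = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
```

Here the squared entries sum to 1 + 4 + 9 = 14, so the penalty is 0.1/8 · 14 = 0.175; this term is simply added to the cross-entropy cost.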
# GRADED FUNCTION: compute_cost_with_regularization

def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """
    Implement the cost function with L2 regularization. See formula (2) above.
    
    Arguments:
    A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    parameters -- python dictionary containing parameters of the model
    
    Returns:
    cost - value of the regularized loss function (formula (2))
    """
    m = Y.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    W3 = parameters["W3"]
    
    cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
    
    ### START CODE HERE ### (approx. 1 line)
    L2_regularization_cost = lambd / (2 * m) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    ### END CODE HERE ###
    
    cost = cross_entropy_cost + L2_regularization_cost
    
    return cost

A3, Y_assess, parameters = compute_cost_with_regularization_test_case()

print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
CourseraDeepLearningSpecialization/2.HyperparameterTrainingRegularizationAndOptimization/Week1/Exercises/Regularization.ipynb
bhattacharjee/courses
mit
Expected Output: <table> <tr> <td> **cost** </td> <td> 1.78648594516 </td> </tr> </table> Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
# GRADED FUNCTION: backward_propagation_with_regularization

def backward_propagation_with_regularization(X, Y, cache, lambd):
    """
    Implements the backward propagation of our baseline model to which we added an L2 regularization.
    
    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation()
    lambd -- regularization hyperparameter, scalar
    
    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
    
    dZ3 = A3 - Y
    
    ### START CODE HERE ### (approx. 1 line)
    dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
    ### END CODE HERE ###
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
    
    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
    ### END CODE HERE ###
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
    
    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
    ### END CODE HERE ###
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
    
    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}
    
    return gradients

X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()

grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
CourseraDeepLearningSpecialization/2.HyperparameterTrainingRegularizationAndOptimization/Week1/Exercises/Regularization.ipynb
bhattacharjee/courses
mit
Observations: - The value of $\lambda$ is a hyperparameter that you can tune using a dev set. - L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias. What is L2-regularization actually doing?: L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. <font color='blue'> What you should remember -- the implications of L2-regularization on: - The cost computation: - A regularization term is added to the cost - The backpropagation function: - There are extra terms in the gradients with respect to weight matrices - Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 3 - Dropout Finally, dropout is a widely used regularization technique that is specific to deep learning. It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means! <!-- To understand drop-out, consider this conversation with a friend: - Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!" - Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?" - You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution." 
!--> <center> <video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls> </video> </center> <br> <caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep_prob$ or keep it with probability $keep_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption> <center> <video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls> </video> </center> <caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption> When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 3.1 - Forward propagation with dropout Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. Instructions: You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps: 1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$. 2.
Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if the entry is less than 0.5) or 0 (if the entry is greater than or equal to 0.5) you would do: X = (X &lt; 0.5). Note that 0 and 1 are respectively equivalent to False and True. 3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values. 4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
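Steps 1–4 can be seen on a tiny activation matrix (seeded so the mask is reproducible; the values are toy ones, not network activations):

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.5
A1 = np.ones((2, 3))                            # toy activations

D1 = np.random.rand(A1.shape[0], A1.shape[1])   # Step 1: random matrix, same shape as A1
D1 = (D1 < keep_prob)                           # Step 2: boolean keep-mask (True = keep)
A1 = A1 * D1                                    # Step 3: zero out the dropped units
A1 = A1 / keep_prob                             # Step 4: inverted-dropout rescaling
```

After rescaling, every surviving activation is divided by `keep_prob`, so the expected value of each entry matches the no-dropout case.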
# GRADED FUNCTION: forward_propagation_with_dropout

def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    """
    Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
    
    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob - probability of keeping a neuron active during drop-out, scalar
    
    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1,1)
    cache -- tuple, information stored for computing the backward propagation
    """
    
    np.random.seed(1)
    
    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]
    
    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)        # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(A1.shape[0], A1.shape[1])    # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = D1 < keep_prob                              # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = A1 * D1                                     # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob                              # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(A2.shape[0], A2.shape[1])    # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = D2 < keep_prob                              # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = A2 * D2                                     # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob                              # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)
    
    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
    
    return A3, cache

X_assess, parameters = forward_propagation_with_dropout_test_case()

A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
CourseraDeepLearningSpecialization/2.HyperparameterTrainingRegularizationAndOptimization/Week1/Exercises/Regularization.ipynb
bhattacharjee/courses
mit
Expected Output: <table> <tr> <td> **A3** </td> <td> [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] </td> </tr> </table> 3.2 - Backward propagation with dropout Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. Instruction: Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps: 1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1. 2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
# GRADED FUNCTION: backward_propagation_with_dropout

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.
    
    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob - probability of keeping a neuron active during drop-out, scalar
    
    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    
    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
    
    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA2 = dA2 * D2              # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 / keep_prob       # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
    
    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA1 = dA1 * D1              # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 / keep_prob       # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
    
    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}
    
    return gradients

X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()

gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)

print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
CourseraDeepLearningSpecialization/2.HyperparameterTrainingRegularizationAndOptimization/Week1/Exercises/Regularization.ipynb
bhattacharjee/courses
mit
Imagine that in the above program, 23 is the temperature, which was read by some sensor or entered manually by the user, and Normal is the program's response.
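The temperature check described above could be sketched like this; the 10 and 30 degree thresholds are illustrative assumptions, not taken from the original program.

```python
temperature = 23  # value read from a sensor or entered by the user

# Pick a response based on which range the temperature falls into.
if temperature < 10:
    response = "Cold"
elif temperature < 30:
    response = "Normal"
else:
    response = "Hot"

print(response)  # Normal
```

Only the first true branch runs, so a reading of 23 falls through the `< 10` test and stops at `< 30`.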
a = "apple" b = "banana" c = "Mango" if a == "apple": print("apple") elif b == "Mango": print("mango") elif c == "Mango": print("My Mango farm") x = list(range(10)) a = 0 x.reverse() print(x) while x: a = x.pop() print(a) print(a) x = list(range(10)) print(x) a = 0 x.reverse() while x: a = x.pop(0) print(a) print(a) items = "This is a test" for count, item in enumerate(items[1:], start=11): print(item, count, end=" ") if 'i' in item: break else: print("\nFinished") print(count)
Section 1 - Core Python/Chapter 04 - Control Flow/3.1 Compound Statements.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
4. Model Training This notebook demonstrates how to train a Propensity Model using BigQuery ML. Requirements Input features used for training need to be stored as a BigQuery table. This can be done using the 2. ML Data Preparation notebook. Install and import required modules
# Uncomment to install required python modules # !sh ../utils/setup.sh # Add custom utils module to Python environment import os import sys sys.path.append(os.path.abspath(os.pardir)) from gps_building_blocks.cloud.utils import bigquery as bigquery_utils from utils import model from utils import helpers
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
Set parameters
configs = helpers.get_configs('config.yaml') dest_configs, run_id_configs = configs.destination, configs.run_id # GCP project ID PROJECT_ID = dest_configs.project_id # Name of the BigQuery dataset DATASET_NAME = dest_configs.dataset_name # To distinguish the separate runs of the training pipeline RUN_ID = run_id_configs.train # BigQuery table name containing model development dataset FEATURES_DEV_TABLE = f'features_dev_table_{RUN_ID}' # BigQuery table name containing model testing dataset FEATURES_TEST_TABLE = f'features_test_table_{RUN_ID}' # Output model name to save in BigQuery MODEL_NAME = f'propensity_model_{RUN_ID}' bq_utils = bigquery_utils.BigQueryUtils(project_id=PROJECT_ID)
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
Next, let's configure modeling options. Model and features configuration Model options can be configured in detail based on BigQuery ML specifications listed in The CREATE MODEL statement.

NOTE: Propensity modeling supports only the following four types of models available in BigQuery ML:
- LOGISTIC_REG
- AUTOML_CLASSIFIER
- BOOSTED_TREE_CLASSIFIER
- DNN_CLASSIFIER

In order to use specific model options, you can add options to the following configuration exactly as listed in The CREATE MODEL statement. For example, if you want to train AUTOML_CLASSIFIER with BUDGET_HOURS=1, you can specify it as:

```python
params = {
    'model_type': 'AUTOML_CLASSIFIER',
    'budget_hours': 1
}
```
# Read in Features table schema to select feature names for model training sql = ("SELECT column_name " f"FROM `{PROJECT_ID}.{DATASET_NAME}`.INFORMATION_SCHEMA.COLUMNS " f"WHERE table_name='{FEATURES_DEV_TABLE}';") print(sql) features_schema = bq_utils.run_query(sql).to_dataframe() # Columns to remove from the feature list to_remove = ['window_start_ts', 'window_end_ts', 'snapshot_ts', 'user_id', 'label', 'key', 'data_split'] # Selected features for model training training_features = [v for v in features_schema['column_name'] if v not in to_remove] print('Number of training features:', len(training_features)) print(training_features) # Set parameters for AUTOML_CLASSIFIER model FEATURE_COLUMNS = training_features TARGET_COLUMN = 'label' params = { 'model_path': f'{PROJECT_ID}.{DATASET_NAME}.{MODEL_NAME}', 'features_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{FEATURES_DEV_TABLE}', 'feature_columns': FEATURE_COLUMNS, 'target_column': TARGET_COLUMN, 'MODEL_TYPE': 'AUTOML_CLASSIFIER', 'BUDGET_HOURS': 1.0, # Enable data_split_col if you want to use custom data split. # Details on AUTOML data split column: # https://cloud.google.com/automl-tables/docs/prepare#split # 'DATA_SPLIT_COL': 'data_split', 'OPTIMIZATION_OBJECTIVE': 'MAXIMIZE_AU_ROC' }
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
Train the model First, we initialize PropensityModel with config parameters.
propensity_model = model.PropensityModel(bq_utils=bq_utils, params=params)
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
The next cell triggers the model training job in BigQuery, which takes some time to finish depending on dataset size and model complexity. Set verbose=True if you want to verify training query details.
propensity_model.train(verbose=False)
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
The following cell allows you to see detailed information about the input features used to train a model. It provides the following columns:
- input — The name of the column in the input training data.
- min — The sample minimum. This column is NULL for non-numeric inputs.
- max — The sample maximum. This column is NULL for non-numeric inputs.
- mean — The average. This column is NULL for non-numeric inputs.
- stddev — The standard deviation. This column is NULL for non-numeric inputs.
- category_count — The number of categories. This column is NULL for non-categorical columns.
- null_count — The number of NULLs.

For more details refer to the help page.
propensity_model.get_feature_info()
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
Evaluate the model This section helps to do a quick model evaluation to get the following model metrics:

- recall
- accuracy
- f1_score
- log_loss
- roc_auc

Two optional parameters can be specified for evaluation:

- eval_table: BigQuery table containing the evaluation dataset
- threshold: Custom probability threshold to be used for evaluation (to binarize the predictions). The default value is 0.5.

If neither of these options is specified, the model is evaluated on the evaluation dataset split during training with the default threshold of 0.5.

NOTE: This evaluation provides basic model performance metrics. For a thorough evaluation, refer to the 5. Model Evaluation notebook.

TODO(): Add sql code to calculate the proportion of positive examples in the evaluation dataset to be used as the threshold.
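To make the role of the threshold concrete, here is a small pure-Python sketch of how predicted probabilities are binarized before a metric such as accuracy is computed. The probabilities and labels are made up for illustration.

```python
# Hypothetical predicted probabilities and true labels, for illustration only.
probs = [0.20, 0.70, 0.55, 0.40]
labels = [0, 1, 1, 0]
threshold = 0.5

# Binarize the predictions at the threshold, then score them.
preds = [1 if p >= threshold else 0 for p in probs]
accuracy = sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
print(preds, accuracy)  # [0, 1, 1, 0] 1.0
```

Raising or lowering the threshold trades precision against recall, which is why a custom value (such as the proportion of positives) can be worth passing in.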
# Model performance on the model development dataset on which the final # model has been trained EVAL_TABLE_NAME = FEATURES_DEV_TABLE eval_params = { 'eval_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{EVAL_TABLE_NAME}', 'threshold': 0.5 } propensity_model.evaluate(eval_params, verbose=False) # Model performance on the held out test dataset EVAL_TABLE_NAME = FEATURES_TEST_TABLE eval_params = { 'eval_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{EVAL_TABLE_NAME}', 'threshold': 0.5 } propensity_model.evaluate(eval_params, verbose=False)
packages/propensity/04.model_training.ipynb
google/compass
apache-2.0
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.

You can get the &lt;EOS&gt; word id by doing:

```python
target_vocab_to_int['<EOS>']
```

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ def helper(text, vocab_to_int, extra=None): sentences = text.split('\n') arr = [] for sentence in sentences: ids = [ vocab_to_int[word] for word in sentence.split()] if extra: ids.append(extra) arr.append(ids) return arr list_source_ids = helper(source_text, source_vocab_to_int) list_target_ids = helper(target_text, target_vocab_to_int, target_vocab_to_int['<EOS>']) return list_source_ids, list_target_ids """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() target_vocab_to_int['<UNK>']
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions:

- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model

Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.

Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ input_text = tf.placeholder(tf.int32, [None, None], name="input") targets = tf.placeholder(tf.int32, [None, None]) lr = tf.placeholder(tf.float32) kp = tf.placeholder(tf.float32, name="keep_prob") return input_text, targets, lr, kp """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # Encoder cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) enc_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) _, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32) return enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function outputs = tf.nn.dropout(train_pred, keep_prob=keep_prob) train_logits = output_fn(outputs) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length - 1, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) # Dropout not needed for inference! # drop = tf.nn.dropout(inference_logits, keep_prob) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer.

- Create an RNN cell for decoding using rnn_size and num_layers.
- Create the output function using a lambda to transform its input, logits, to class logits.
- Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
- Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # Decoder RNNs rnn_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(rnn_cell, output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) training_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) decoding_scope.reuse_variables() inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return training_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Build the Neural Network Apply the functions you implemented above to:

- Apply embedding to the input data for the encoder.
- Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
- Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
- Apply embedding to the target data for the decoder.
- Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) training_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return training_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model)
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters:

- Set epochs to the number of epochs.
- Set batch_size to the batch size.
- Set rnn_size to the size of the RNNs.
- Set num_layers to the number of layers.
- Set encoding_embedding_size to the size of the embedding for the encoder.
- Set decoding_embedding_size to the size of the embedding for the decoder.
- Set learning_rate to the learning rate.
- Set keep_probability to the Dropout keep probability.
# Number of Epochs epochs = 20 # Batch Size batch_size = 1370 # RNN Size rnn_size = 250 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 256 decoding_embedding_size = 256 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.75
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit