Choose the model with the highest topic coherence.
model, tc = max(model_list, key=lambda x: x[1])
print('Topic coherence: %.3e' % tc)
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
We save the model, to avoid having to train it again, and also show how to load it again.
# Save model.
model.save('/tmp/model.atmodel')

# Load model.
model = AuthorTopicModel.load('/tmp/model.atmodel')
Explore author-topic representation

Now that we have trained a model, we can start exploring the authors and the topics.

First, let's simply print the most important words in the topics. Below we have printed topic 0. As we can see, each topic is associated with a set of words, and each word has a probability of being expressed under that topic.
model.show_topic(0)
Below, we have given each topic a label based on what each topic seems to be about intuitively.
topic_labels = ['Circuits', 'Neuroscience', 'Numerical optimization', 'Object recognition',
                'Math/general', 'Robotics', 'Character recognition',
                'Reinforcement learning', 'Speech recognition', 'Bayesian modelling']
Rather than just calling model.show_topics(num_topics=10), we format the output a bit so it is easier to get an overview.
for topic in model.show_topics(num_topics=10):
    print('Label: ' + topic_labels[topic[0]])
    words = ''
    for word, prob in model.show_topic(topic[0]):
        words += word + ' '
    print('Words: ' + words)
    print()
These topics are by no means perfect. They have problems such as chained topics, intruded words, random topics, and unbalanced topics (see Mimno and co-authors 2011). They will do for the purposes of this tutorial, however. Below, we use the model[name] syntax to retrieve the topic distribution for an author. Each topic has a probability of being expressed given the particular author, but only the ones above a certain threshold are shown.
model['YannLeCun']
Let's print the top topics of some authors. First, we make a function to help us do this more easily.
from pprint import pprint

def show_author(name):
    print('\n%s' % name)
    print('Docs:', model.author2doc[name])
    print('Topics:')
    pprint([(topic_labels[topic[0]], topic[1]) for topic in model[name]])
Below, we print some high profile researchers and inspect them. Three of these, Yann LeCun, Geoffrey E. Hinton and Christof Koch, are spot on. Terrence J. Sejnowski's results are surprising, however. He is a neuroscientist, so we would expect him to get the "neuroscience" label. This may indicate that Sejnowski works with the neuroscience aspects of visual perception, or perhaps that we have labeled the topic incorrectly, or perhaps that this topic simply is not very informative.
show_author('YannLeCun')
show_author('GeoffreyE.Hinton')
show_author('TerrenceJ.Sejnowski')
show_author('ChristofKoch')
Simple model evaluation methods

We can compute the per-word bound, which is a measure of the model's predictive performance (you could also say that it is the reconstruction error).

To do that, we need the doc2author dictionary, which we can build automatically.
from gensim.models import atmodel

doc2author = atmodel.construct_doc2author(model.corpus, model.author2doc)
Now let's evaluate the per-word bound.
# Compute the per-word bound.

# Number of words in corpus.
corpus_words = sum(cnt for document in model.corpus for _, cnt in document)

# Compute bound and divide by number of words.
perwordbound = model.bound(model.corpus, author2doc=model.author2doc,
                           doc2author=model.doc2author) / corpus_words
print(perwordbound)
We can evaluate the quality of the topics by computing the topic coherence, as in the LDA class. Use this to e.g. find out which of the topics are poor quality, or as a metric for model selection.
%time top_topics = model.top_topics(model.corpus)
Plotting the authors

Now we're going to produce the kind of Pacific-archipelago-looking plot below. The goal of this plot is to give you a way to explore the author-topic representation in an intuitive manner.

We take all the author-topic distributions (stored in model.state.gamma) and embed them in a 2D space. To do this, we reduce the dimensionality of this data using t-SNE. t-SNE is a method that attempts to reduce the dimensionality of a dataset while maintaining the distances between the points. That means that if two authors are close together in the plot below, their topic distributions are similar.

In the cell below, we transform the author-topic representation into the t-SNE space. You can increase the smallest_author value if you do not want to view all the authors with few documents.
%%time
from sklearn.manifold import TSNE

tsne = TSNE(n_components=2, random_state=0)
smallest_author = 0  # Ignore authors with fewer documents than this.
authors = [model.author2id[a] for a in model.author2id.keys()
           if len(model.author2doc[a]) >= smallest_author]
_ = tsne.fit_transform(model.state.gamma[authors, :])  # Result stored in tsne.embedding_.
We are now ready to make the plot. Note that if you run this notebook yourself, you will see a different graph. The random initialization of the model will be different, and the result will thus be different to some degree. You may find an entirely different representation of the data, or it may show the same interpretation slightly differently. If you can't see the plot, you are probably viewing this tutorial in a Jupyter Notebook. View it in an nbviewer instead at http://nbviewer.jupyter.org/github/rare-technologies/gensim/blob/develop/docs/notebooks/atmodel_tutorial.ipynb.
# Tell Bokeh to display plots inside the notebook.
from bokeh.io import output_notebook
from bokeh.models import HoverTool
from bokeh.plotting import figure, show, ColumnDataSource

output_notebook()

x = tsne.embedding_[:, 0]
y = tsne.embedding_[:, 1]
author_names = [model.id2author[a] for a in authors]

# Radius of each point corresponds to the number of documents
# attributed to that author.
scale = 0.1
author_sizes = [len(model.author2doc[a]) for a in author_names]
radii = [size * scale for size in author_sizes]

source = ColumnDataSource(
    data=dict(
        x=x,
        y=y,
        author_names=author_names,
        author_sizes=author_sizes,
        radii=radii,
    )
)

# Add author names and sizes to mouse-over info.
hover = HoverTool(
    tooltips=[
        ("author", "@author_names"),
        ("size", "@author_sizes"),
    ]
)

p = figure(tools=[hover, 'crosshair,pan,wheel_zoom,box_zoom,reset,save,lasso_select'])
p.scatter('x', 'y', radius='radii', source=source, fill_alpha=0.6, line_color=None)
show(p)
The circles in the plot above are individual authors, and their sizes represent the number of documents attributed to the corresponding author. Hovering your mouse over a circle will show the author's name and size.

Large clusters of authors tend to reflect some overlap in interest. We see that the model tends to put duplicate authors close together. For example, Terrence J. Sejnowski and T. J. Sejnowski are the same person, and their vectors end up in the same place (see about $(-10, -10)$ in the plot).

At about $(-15, -10)$ we have a cluster of neuroscientists like Christof Koch and James M. Bower.

As discussed earlier, the "object recognition" topic was assigned to Sejnowski. If we get the topics of the other authors in Sejnowski's neighborhood, like Peter Dayan, we also get this same topic. Furthermore, we see that this cluster is close to the "neuroscience" cluster discussed above, which is further indication that this topic is about visual perception in the brain.

Other clusters include a reinforcement learning cluster at about $(-5, 8)$, and a Bayesian modelling cluster at about $(8, -12)$.

Similarity queries

In this section, we are going to set up a system that takes the name of an author and yields the authors that are most similar. This functionality could be used as a component in an information retrieval system (i.e. a search engine of some kind), or in an author prediction system, i.e. a system that takes an unlabelled document and predicts the author(s) who wrote it.

We simply need to search for the closest vector in the author-topic space. In this sense, the approach is similar to the t-SNE plot above.

Below we illustrate a similarity query using a built-in similarity framework in Gensim.
from gensim.similarities import MatrixSimilarity

# Generate a similarity object for the transformed corpus.
index = MatrixSimilarity(model[list(model.id2author.values())])

# Get similarities to some author.
author_name = 'YannLeCun'
sims = index[model[author_name]]
This framework uses the cosine distance, but we want to use the Hellinger distance. The Hellinger distance is a natural way of measuring the distance (i.e. dis-similarity) between two probability distributions. Its discrete version is defined as

$$H(p, q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^K (\sqrt{p_i} - \sqrt{q_i})^2},$$

where $p$ and $q$ are both topic distributions for two different authors. We define the similarity as

$$S(p, q) = \frac{1}{1 + H(p, q)}.$$

In the cell below, we prepare everything we need to perform similarity queries based on the Hellinger distance.
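Before wiring this into Gensim, here is a standalone numeric sanity check of the two formulas above (my own sketch, using numpy only; the tutorial's own code uses gensim's matutils.hellinger for the same distance):

```python
# Standalone check of the Hellinger distance and the derived similarity
# (illustrative sketch; not part of the original notebook).
import numpy as np

def hellinger(p, q):
    """Discrete Hellinger distance between two probability vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2)

def hellinger_similarity(p, q):
    return 1.0 / (1.0 + hellinger(p, q))

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(hellinger(p, p))             # identical distributions -> 0.0
print(hellinger_similarity(p, p))  # -> 1.0
print(hellinger([1, 0], [0, 1]))   # disjoint distributions -> 1.0
```

Since $0 \leq H(p, q) \leq 1$ for probability distributions, the similarity $S$ always lies between 1/2 and 1.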
# Make a function that returns similarities based on the Hellinger distance.
from gensim import matutils
import pandas as pd

# Make a list of all the author-topic distributions.
author_vecs = [model.get_author_topics(author) for author in model.id2author.values()]

def similarity(vec1, vec2):
    '''Get similarity between two vectors.'''
    dist = matutils.hellinger(matutils.sparse2full(vec1, model.num_topics),
                              matutils.sparse2full(vec2, model.num_topics))
    sim = 1.0 / (1.0 + dist)
    return sim

def get_sims(vec):
    '''Get similarity of vector to all authors.'''
    sims = [similarity(vec, vec2) for vec2 in author_vecs]
    return sims

def get_table(name, top_n=10, smallest_author=1):
    '''
    Get table with similarities, author names, and author sizes.
    Return `top_n` authors as a dataframe.
    '''
    # Get similarities.
    sims = get_sims(model.get_author_topics(name))

    # Arrange author names, similarities, and author sizes in a list of tuples.
    table = []
    for author_id, sim in enumerate(sims):
        author_name = model.id2author[author_id]
        author_size = len(model.author2doc[author_name])
        if author_size >= smallest_author:
            table.append((author_name, sim, author_size))

    # Make dataframe and retrieve top authors.
    df = pd.DataFrame(table, columns=['Author', 'Score', 'Size'])
    df = df.sort_values('Score', ascending=False)[:top_n]

    return df
Now we can find the most similar authors to some particular author. We use the Pandas library to print the results in nice-looking tables.
get_table('YannLeCun')
As before, we can specify the minimum author size.
get_table('JamesM.Bower', smallest_author=3)
Serialized corpora

The AuthorTopicModel class accepts serialized corpora, that is, corpora that are stored on the hard drive rather than in memory. This is usually done when the corpus is too big to fit in memory. There are, however, some caveats to this functionality, which we will discuss here. As these caveats make this functionality less than ideal, it may be improved in the future. It is not necessary to read this section if you don't intend to use serialized corpora. In the following, an explanation is given, followed by an example and a summary.

If the corpus is serialized, the user must specify serialized=True. The input corpus can then be any type of iterable or generator. The model will take the input corpus and serialize it in the MmCorpus format, which is supported in Gensim.

The user must specify the path where the model should serialize all input documents, for example serialization_path='/tmp/model_serializer.mm'. To avoid accidentally overwriting some important data, the model will raise an error if a file already exists at serialization_path; in this case, either choose another path, or delete the old file.

When you want to train on new data and call model.update(corpus, author2doc), all the old data and the new data have to be re-serialized. This can of course be quite computationally demanding, so it is recommended that you do this only when necessary; that is, wait until you have as much new data as possible to update, rather than updating the model for every new document.
%time model_ser = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \
                                   author2doc=author2doc, random_state=1, serialized=True, \
                                   serialization_path='/tmp/model_serialization.mm')

# Delete the file, once you're done using it.
import os
os.remove('/tmp/model_serialization.mm')
Feed sequences to the TM
# Create sequences.
allSequences = []
for s in range(numSequences):
    if s % 2 == 0:
        sequence = generateRandomSequence(symbolsPerSequence, tm.numberOfColumns(), sparsity)
        allSequences.append(sequence)
    else:
        sequenceHO = generateHOSequence(sequence, symbolsPerSequence, tm.numberOfColumns(), sparsity)
        allSequences.append(sequenceHO)

spikeTrains = np.zeros((tm.numberOfCells(), totalTS), dtype="uint32")
columnUsage = np.zeros(tm.numberOfColumns(), dtype="uint32")
spikeCount = np.zeros(totalTS, dtype="uint32")
ts = 0

entropyX = []
entropyY = []

negPCCX_cells = []
negPCCY_cells = []

numSpikesX = []
numSpikesY = []
numSpikes = 0

negPCCX_cols = []
negPCCY_cols = []

traceX = []
traceY = []

# Randomly generate the indices of the columns to keep track of during simulation.
colIndicesLarge = np.random.permutation(tm.numberOfColumns())[0:125]  # keep track of 125 columns = 1000 cells

for epoch in range(epochs):
    # Shuffle sequences.
    print ""
    print "Epoch: " + str(epoch)
    seqIndices = np.random.permutation(np.arange(numSequences))

    for s in range(numSequences):
        tm.reset()
        if s > 0 and s % 100 == 0:
            print str(s) + " sequences processed"
        for symbol in range(symbolsPerSequence):
            tm.compute(allSequences[seqIndices[s]][symbol], learn=True)
            for cell in tm.getActiveCells():
                spikeTrains[cell, ts] = 1
                numSpikes += 1
                spikeCount[ts] += 1

            # Obtain active columns:
            activeColumnsIndices = [tm.columnForCell(i) for i in tm.getActiveCells()]
            currentColumns = [1 if i in activeColumnsIndices else 0 for i in range(tm.numberOfColumns())]
            for col in np.nonzero(currentColumns)[0]:
                columnUsage[col] += 1

            if ts > 0 and ts % int(totalTS * 0.1) == 0:
                numSpikesX.append(ts)
                numSpikesY.append(numSpikes)
                numSpikes = 0

                # print "++ Analyzing correlations (cells at random) ++"
                subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), ts, 1000)
                (corrMatrix, numNegPCC) = computePWCorrelations(subSpikeTrains, removeAutoCorr=True)
                negPCCX_cells.append(ts)
                negPCCY_cells.append(numNegPCC)
                bins = 300
                plt.hist(corrMatrix.ravel(), bins, alpha=0.5)
                plt.xlim(-0.05, 0.1)
                plt.xlabel("PCC")
                plt.ylabel("Frequency")
                plt.savefig("cellsHist_" + str(ts))
                plt.close()

                traceX.append(ts)
                # traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.5))
                # traceY.append(np.std(corrMatrix))
                # traceY.append(sum(1 for i in corrMatrix.ravel() if i > -0.05 and i < 0.1))
                traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.0))

                entropyX.append(ts)
                entropyY.append(computeEntropy(subSpikeTrains))

                # print "++ Analyzing correlations (whole columns) ++"
                # First the LARGE subsample of columns:
                subSpikeTrains = subSampleWholeColumn(spikeTrains, colIndicesLarge, tm.getCellsPerColumn(), ts, 1000)
                (corrMatrix, numNegPCC) = computePWCorrelationsWithinCol(subSpikeTrains, True, tm.getCellsPerColumn())
                negPCCX_cols.append(ts)
                negPCCY_cols.append(numNegPCC)

                # print "++ Generating histogram ++"
                plt.hist(corrMatrix.ravel(), alpha=0.5)
                plt.xlabel("PCC")
                plt.ylabel("Frequency")
                plt.savefig("colsHist_" + str(ts))
                plt.close()

            ts += 1

print "*** DONE ***"

plt.plot(traceX, traceY)
plt.xlabel("Time")
plt.ylabel("Positive PCC Count")
plt.savefig("positivePCCTrace")
plt.close()

sparsityTraceX = []
sparsityTraceY = []
for i in range(totalTS - 1000):
    sparsityTraceX.append(i)
    sparsityTraceY.append(np.mean(spikeCount[i:1000 + i]) / tm.numberOfCells())

plt.plot(sparsityTraceX, sparsityTraceY)
plt.xlabel("Time")
plt.ylabel("Sparsity")
plt.savefig("sparsityTrace")
plt.close()

# Plot trace of negative PCCs.
plt.plot(negPCCX_cells, negPCCY_cells)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cells")
plt.close()

plt.plot(negPCCX_cols, negPCCY_cols)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cols")
plt.close()

plt.plot(numSpikesX, numSpikesY)
plt.xlabel("Time")
plt.ylabel("Num Spikes")
plt.savefig("numSpikesTrace")
plt.close()

# Plot entropy.
plt.plot(entropyX, entropyY)
plt.xlabel("Time")
plt.ylabel("Entropy")
plt.savefig("entropyTM")
plt.close()

plt.hist(columnUsage)
plt.xlabel("Number of times active")
plt.ylabel("Number of columns")
plt.savefig("columnUsage")
plt.close()
projects/neural_correlations/EXP2-HighOrder/NeuCorr_Exp2.ipynb
subutai/htmresearch
agpl-3.0
ISI analysis (with Poisson model too)
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), 0, 0)
isi = computeISI(subSpikeTrains)

# Print ISI distribution of TM.
# bins = np.linspace(np.min(isi), np.max(isi), 50)
bins = 100
plt.hist(isi, bins)
plt.xlim(0, 1000)
# plt.xlim(89500, 92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiTM")
plt.close()

print np.mean(isi)
print np.std(isi)
print np.std(isi) / np.mean(isi)

# Generate spike distribution.
spikeCount = []
for cell in range(np.shape(subSpikeTrains)[0]):
    spikeCount.append(np.count_nonzero(subSpikeTrains[cell, :]))

bins = 25
plt.hist(spikeCount, bins)
plt.xlabel("Spike Count")
plt.ylabel("Number of cells")
plt.savefig("spikesHistTM")
plt.close()

# firingRate = 18
firingRate = np.mean(subSpikeTrains) * 1000
print "firing rate: " + str(firingRate)
pSpikeTrain = poissonSpikeGenerator(firingRate, np.shape(subSpikeTrains)[1], np.shape(subSpikeTrains)[0])

isi = computeISI(pSpikeTrain)

# Print ISI distribution of Poisson model.
# bins = np.linspace(np.min(isi), np.max(isi), 50)
bins = 100
plt.hist(isi, bins)
plt.xlim(0, 600)
# plt.xlim(89500, 92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiPOI")
plt.close()

print np.mean(isi)
print np.std(isi)
print np.std(isi) / np.mean(isi)

# Generate spike distribution.
spikeCount = []
for cell in range(np.shape(pSpikeTrain)[0]):
    spikeCount.append(np.count_nonzero(pSpikeTrain[cell, :]))

bins = 25
plt.hist(spikeCount, bins)
plt.xlabel("Spike Count")
plt.ylabel("Number of cells")
plt.savefig("spikesHistPOI")
plt.close()
Raster Plots
subSpikeTrains = subSample(spikeTrains, 100, tm.numberOfCells(), -1, 1000)
rasterPlot(subSpikeTrains, "TM")

pSpikeTrain = poissonSpikeGenerator(firingRate, np.shape(subSpikeTrains)[1], np.shape(subSpikeTrains)[0])
rasterPlot(pSpikeTrain, "Poisson")
Quick Accuracy Test
simpleAccuracyTest("random", tm, allSequences)
Elad Plot
# Sample from both TM_SpikeTrains and Poisson_SpikeTrains: 10 cells for 1000 (?) timesteps.
wordLength = 10
firingRate = np.mean(subSpikeTrains) * 1000

# Generate all 2^N strings:
binaryStrings = list(itertools.product([0, 1], repeat=wordLength))

trials = 10

x = []  # observed
y = []  # predicted by random model

for t in range(trials):
    print "Trial: " + str(t)
    # Sample from spike trains.
    subSpikeTrains = subSample(spikeTrains, wordLength, tm.numberOfCells(), 0, 0)
    pSpikeTrain = poissonSpikeGenerator(firingRate, np.shape(subSpikeTrains)[1], np.shape(subSpikeTrains)[0])

    for i in range(2**wordLength):
        if i == 0:
            continue
        # if i % 100 == 0:
        #     print str(i) + " words processed"
        binaryWord = np.array(binaryStrings[i], dtype="uint32")
        x.append(countInSample(binaryWord, subSpikeTrains))
        y.append(countInSample(binaryWord, pSpikeTrain))
    # print "**All words processed**"
    # print ""

print "*** DONE ***"

plt.loglog(x, y, 'bo', basex=10)
plt.xlabel("Observed")
plt.ylabel("Predicted")
plt.plot(x, x, 'k-')
plt.xlim(0, np.max(x))
plt.savefig("EladPlot")
plt.close()
Save TM
saveTM(tm)

# To load the TM back from the file, do:
with open('tm.nta', 'rb') as f:
    proto2 = TemporalMemoryProto_capnp.TemporalMemoryProto.read(f, traversal_limit_in_words=2**61)
tm = TM.read(proto2)
Analysis of input
overlapMatrix = inputAnalysis(allSequences, "random", tm.numberOfColumns())

# Show heatmap of overlap matrix.
plt.imshow(overlapMatrix, cmap='spectral', interpolation='nearest')
cb = plt.colorbar()
cb.set_label('Overlap Score')
plt.savefig("overlapScore_heatmap")
plt.close()
# plt.show()

# Generate histogram.
bins = 60
(n, bins, patches) = plt.hist(overlapMatrix.ravel(), bins, alpha=0.5)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist")

plt.xlim(0.1, 1.0)
plt.ylim(0, 200000)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist_ZOOM")
plt.close()

flag = False
for i in range(numSequences * symbolsPerSequence):
    for j in range(numSequences * symbolsPerSequence):
        if overlapMatrix[i, j] == 1:
            print i, j
            flag = True
            break
    if flag == True:
        break

print overlapMatrix[1, 11]
print allSequences[0][1]
print allSequences[1][1]
print percentOverlap(allSequences[0][1], allSequences[1][1], tm.numberOfColumns())
1. First let's make some fake data
# A handful of sites.
sites = ['org', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K']

# Non-symmetric distances.
distances = dict(((a, b), np.random.randint(10, 50)) for a in sites for b in sites if a != b)
05-routes-and-schedules/traveling_salesman.ipynb
cochoa0x1/integer-programming-with-python
mit
The model

We are going to model this problem as an integer program. Let's imagine we have a tour through our sites, and site i is in the tour followed by site j. We can model this with a binary variable $x_{i,j}$ that should be 1 only when site i is connected to site j.

$$x_{i,j} = \begin{cases} 1, & \text{if site } i \text{ comes exactly before } j \text{ in the tour} \\ 0, & \text{otherwise} \end{cases}$$

This holds for all i,j combinations except where i = j.

Each site is visited exactly once, so if we fix j and look at all the $x_{i,j}$, these represent the connections <b>into</b> that site j, so only one of them can be 1. We can express this equivalently by requiring that the sum of those $x_{i,j}$ equal 1 for each fixed j, i.e.,

$$\sum_{i \neq j} x_{i,j} = 1 \quad \forall j$$

Alternatively, there should be one and only one way of exiting a site, so we also have

$$\sum_{j \neq i} x_{i,j} = 1 \quad \forall i$$

So we have our variables; what should the objective be? Our objective is the total tour distance:

$$\sum_{i \neq j} x_{i,j} \, Distance(i,j)$$

This is a lot of variables! If we have $N$ sites then we are creating $N^2 - N$ binary variables, and in general the more integer variables you have, the harder the problem gets, often exponentially harder.
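As a quick sanity check on the variable count (a standalone snippet, not part of the notebook), we can enumerate the ordered pairs of distinct sites that become binary variables, using the same `sites` list as above:

```python
# One binary x[i,j] is created per ordered pair of distinct sites,
# i.e. N^2 - N variables (illustrative check only).
sites = ['org', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K']
pairs = [(a, b) for a in sites for b in sites if a != b]
N = len(sites)
print(len(pairs), N**2 - N)  # both 132 for the 12 sites above
```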
# Create the problem.
prob = LpProblem("salesman", LpMinimize)

# Indicator variable: 1 if site i is connected to site j in the tour.
x = LpVariable.dicts('x', distances, 0, 1, LpBinary)

# The objective.
cost = lpSum([x[(i, j)] * distances[(i, j)] for (i, j) in distances])
prob += cost

# Constraints.
for k in sites:
    # Every site has exactly one inbound connection.
    prob += lpSum([x[(i, k)] for i in sites if (i, k) in x]) == 1
    # Every site has exactly one outbound connection.
    prob += lpSum([x[(k, i)] for i in sites if (k, i) in x]) == 1
Subtours and why we need way more constraints

There is still something missing in our solution. Let's imagine we have 6 sites A, B, C, D, E, F. Does the following satisfy our existing constraints?

$$A \to B \to C \to A \quad \text{and} \quad D \to E \to F \to D$$

Each site is visited only once and has only one inbound and one outbound connection, so yes, it does. The problem is what is known as a subtour. We need to require that all of our sites are on the same tour!

A common brute-force way of doing this is to require that every possible subset be connected, but this requires an exponential number of constraints because of the way the number of subsets grows. Instead we will introduce $N$ new dummy variables $u_{i}$, which will track the order in which site i is visited:

$$u_{i} : \text{order site } i \text{ is visited}$$

Consider what $u_{i}-u_{j}$ should be. It should depend on what $x_{i,j}$ is. If the sites are connected, the delta should be exactly -1. If they are not connected, it could be anything up to N-1, because the tour only has N-1 steps:

$$u_{i}-u_{j} \leq N(1-x_{i,j}) - 1$$

We need to add this for every possible site connection, except for the site we start at. This adds on the order of $N^2$ more constraints.
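To see concretely why this constraint forbids subtours, here is a small brute-force check (my own sketch, not from the notebook): around any closed subtour the differences $u_i - u_j$ sum to zero, yet each connected pair must have a difference of at most -1, so no assignment of visit orders can satisfy all of them.

```python
# Brute-force check that the ordering constraints rule out a subtour
# A -> B -> C -> A among N = 6 sites (illustrative sketch only).
from itertools import permutations

N = 6
cycle = [('A', 'B'), ('B', 'C'), ('C', 'A')]  # edges with x[i,j] = 1
nodes = ['A', 'B', 'C']

def feasible(u):
    # With x[i,j] = 1 the constraint reads u_i - u_j <= N*(1-1) - 1 = -1.
    return all(u[i] - u[j] <= -1 for i, j in cycle)

# Try every assignment of distinct visit orders to the three nodes.
any_ok = any(feasible(dict(zip(nodes, perm)))
             for perm in permutations(range(N), 3))
print(any_ok)  # False: no ordering satisfies all three constraints
```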
# We need to keep track of the order in the tour to eliminate the possibility of subtours.
u = LpVariable.dicts('u', sites, 0, len(sites) - 1, LpInteger)

# Subtour elimination.
N = len(sites)
for i in sites:
    for j in sites:
        if i != j and (i != 'org' and j != 'org') and (i, j) in x:
            prob += u[i] - u[j] <= N * (1 - x[(i, j)]) - 1

%time prob.solve()
print(LpStatus[prob.status])
And the result:
sites_left = sites.copy()
org = 'org'
tour = []
tour.append(sites_left.pop(sites_left.index(org)))

while len(sites_left) > 0:
    for k in sites_left:
        if x[(org, k)].varValue == 1:
            tour.append(sites_left.pop(sites_left.index(k)))
            org = k
            break

tour.append('org')

tour_legs = [distances[(tour[i - 1], tour[i])] for i in range(1, len(tour))]

print('Found optimal tour!')
print(' -> '.join(tour))
The total tour length:
sum(tour_legs)
Attribute Information:

- No: row number
- year: year of data in this row
- month: month of data in this row
- day: day of data in this row
- hour: hour of data in this row
- pm2.5: PM2.5 concentration (ug/m^3)
- DEWP: Dew Point (°C)
- TEMP: Temperature (°C)
- PRES: Pressure (hPa)
- cbwd: Combined wind direction
- Iws: Cumulated wind speed (m/s)
- Is: Cumulated hours of snow
- Ir: Cumulated hours of rain
pm2 = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/00381/PRSA_data_2010.1.1-2014.12.31.csv',
                  na_values='NA')
pm2.columns = ['id', 'year', 'month', 'day', 'hour', 'pm2', 'dew_point', 'temperature',
               'pressure', 'wind_dir', 'wind_speed', 'hours_snow', 'hours_rain']
pm2.head()
pm2.info()
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
There are over 2000 samples with the PM2.5 value missing; since this is the value to predict, I am going to drop them.
pm2.dropna(inplace=True)
pm2.describe().T
pm2.describe(include=['O'])
pm2.wind_dir.value_counts()
2 - Suppose our data became corrupted after we downloaded it and values were missing. Randomly insert 5000 NaN into the dataset across all the columns.
# Setting the seed.
np.random.seed(0)

# Creating an array of dimension equal to the number of cells of the dataframe,
# with exactly 5000 ones.
dim = pm2.shape[0] * pm2.shape[1]
arr = np.array([0] * (dim - 5000) + [1] * 5000)

# Shuffling and reshaping the array.
np.random.shuffle(arr)
arr = arr.reshape(pm2.shape[0], pm2.shape[1])

# Looping through all the values and setting the corresponding position
# in the dataframe to NaN.
it = np.nditer(arr, flags=['multi_index'])
while not it.finished:
    if it[0] == 1:
        pm2.iloc[it.multi_index[0], it.multi_index[1]] = np.nan
    it.iternext()

# Solution: inserted NaNs on all columns at random.
data_na = pm2.copy()
nrow = data_na.shape[0]
for col in data_na:
    rows = np.random.randint(0, nrow, 5000)
    data_na[col].iloc[rows] = np.nan

pm2.info()
3 - Which variables lend themselves to be in a regression model? Select those variables, and then fit a regression model for each of the following imputation strategies, commenting on your results.

- Dropping all rows with at least 1 NA
- Dropping all rows with at least 3 NA
- Imputing 0
- Mean
- Median
- Mode
# I'm dropping wind_dir and id.
regr_cols = ['year', 'month', 'day', 'hour', 'dew_point', 'temperature', 'pressure',
             'wind_speed', 'hours_snow', 'hours_rain', 'pm2']
pm2_regr = pm2.loc[:, regr_cols]
# In the solution there is no year, month, day and hour;
# also, he discards hours_snow and hours_rain (though they aren't binary or categorical).

# from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
from sklearn.linear_model import LinearRegression

lr = LinearRegression()

# Just a note to self:
# X = pm2_regr.iloc[:, :-1]
# y = pm2_regr.iloc[:, -1]
# Xtrain, Xtest, ytrain, ytest = train_test_split(pm2_regr.iloc[:, :-1], pm2_regr.iloc[:, -1],
#                                                 test_size=0.2, random_state=0)

pm2_regr1 = pm2_regr.dropna(thresh=7)  # same as dropna without thresh
# thresh is the number of non-NaN columns required to maintain the rows
pm2_regr1 = pm2_regr.dropna(thresh=5)
pm2_regr1.info()
Dropping all rows with at least 1 NA:
lr.fit(pm2_regr.dropna().iloc[:, :-1], pm2_regr.dropna().iloc[:, -1])
lr.score(pm2_regr.dropna().iloc[:, :-1], pm2_regr.dropna().iloc[:, -1])
Dropping all rows with at least 3 NAs gets me an error, because NaNs remain in some rows:
lr.fit(pm2_regr.dropna(thresh=5).iloc[:, :-1], pm2_regr.dropna(thresh=5).iloc[:, -1])
lr.score(pm2_regr.dropna(thresh=5).iloc[:, :-1], pm2_regr.dropna(thresh=5).iloc[:, -1])
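One way to make the thresh variant fit without an error (a sketch on a hypothetical toy frame, using plain pandas `fillna` rather than sklearn's Imputer) is to drop the rows with too many NAs first, then fill whatever NaNs remain with each column's median:

```python
# Toy illustration: dropna(thresh=...) keeps rows with enough non-NaN
# values, then fillna(median) clears the leftover NaNs so a regression
# can be fit. (Hypothetical data, not the PM2.5 frame.)
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0, np.nan],
                   'b': [np.nan, 2.0, 3.0, np.nan],
                   'c': [1.0, 2.0, np.nan, np.nan]})

kept = df.dropna(thresh=2)           # keep rows with at least 2 non-NaN values
filled = kept.fillna(kept.median())  # impute the remaining holes column-wise
print(filled.isna().sum().sum())     # 0: safe to pass to LinearRegression
```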
Imputing 0:
lr.fit(pm2_regr.fillna(0).iloc[:, :-1], pm2_regr.fillna(0).iloc[:, -1])
lr.score(pm2_regr.fillna(0).iloc[:, :-1], pm2_regr.fillna(0).iloc[:, -1])
Imputing the mean:
imp = Imputer(strategy='mean')
pm2_regr_mean = imp.fit_transform(pm2_regr)
lr.fit(pm2_regr_mean[:, :-1], pm2_regr_mean[:, -1])
lr.score(pm2_regr_mean[:, :-1], pm2_regr_mean[:, -1])
The median:
imp = Imputer(strategy='median')
pm2_regr_median = imp.fit_transform(pm2_regr)
lr.fit(pm2_regr_median[:, :-1], pm2_regr_median[:, -1])
lr.score(pm2_regr_median[:, :-1], pm2_regr_median[:, -1])
And the mode:
imp = Imputer(strategy='most_frequent')
pm2_regr_mode = imp.fit_transform(pm2_regr)
lr.fit(pm2_regr_mode[:, :-1], pm2_regr_mode[:, -1])
lr.score(pm2_regr_mode[:, :-1], pm2_regr_mode[:, -1])
The best result I get is from simply dropping all rows with NAs; mean and median give similar performance, while the mode is the worst imputation (surprisingly worse than imputing 0, which is quite arbitrary). Overall, no strategy yields good results; I suspect this fit is just poor in general. 4 - Given the results in part (3), and your own ingenuity, come up with a new imputation strategy and try it out. Comment on your results. I'm going to drop rows with NAs in the columns year, month, day, hour and pm2; I'm imputing the median for all other columns:
pm2_regr_imp = pm2_regr.dropna(subset=['year', 'month', 'day', 'hour', 'pm2'])

imp = Imputer(strategy='median')
pm2_regr_imp = imp.fit_transform(pm2_regr_imp)

lr.fit(pm2_regr_imp[:, :-1], pm2_regr_imp[:, -1])
lr.score(pm2_regr_imp[:, :-1], pm2_regr_imp[:, -1])
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
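Another strategy worth sketching (my own idea, not from the exercise solutions) is group-wise imputation: fill each gap with the median of rows sharing the same month, so seasonal structure is preserved. The toy frame below is a hypothetical stand-in for `pm2_regr`:

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame: month is always known, dew_point is sometimes missing.
toy = pd.DataFrame({'month': [1, 1, 1, 2, 2, 2],
                    'dew_point': [10.0, np.nan, 14.0, 20.0, 22.0, np.nan]})

# Impute each gap with the median of its own month rather than a global median.
toy['dew_point'] = toy.groupby('month')['dew_point'] \
                      .transform(lambda s: s.fillna(s.median()))
print(toy)
```

The same `groupby(...).transform` pattern would apply per feature column on the real data.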
The result is slightly better than simply imputing the mean or median, but still worse than dropping all NAs. Categorical Data Sometimes your data will contain categorical variables which need to be handled carefully depending on the machine learning algorithm you choose to use. Encoding categorical variables comes in two flavors: ordinal (ordered) and nominal (unordered) features. In this exercise, you'll further explore the Beijing PM2.5 dataset, this time using categorical variables. 1 - Which variables are categorical? Encode them properly, taking care to ensure that they are properly classified as either ordinal or nominal. There is one categorical variable:
pm2.describe(include=['O'])
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
The variable is nominal, so I'm going to use one-hot encoding:
# for simplicity I'm using pandas function
pm2_enc = pd.get_dummies(pm2)
pm2_enc = pm2_enc.loc[:, regr_cols[:-1] + ['wind_dir_NE', 'wind_dir_NW', 'wind_dir_SE', 'wind_dir_cv'] + regr_cols[-1:]].dropna()

# from solutions using sklearn:
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

l_enc = LabelEncoder()
oh_enc = OneHotEncoder(sparse=False)

# change categorical data labels to integers
data_sub = pm2.copy()
data_sub.wind_dir = l_enc.fit_transform(data_sub.wind_dir)

# one-hot encode
dummies = pd.DataFrame(oh_enc.fit_transform(data_sub.wind_dir.values.reshape(-1, 1)),
                       columns=l_enc.classes_)

# join with original df
data_sub = data_sub.drop('wind_dir', axis=1)
data_sub = data_sub.join(dummies)
data_sub.head()
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
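For completeness, an ordinal feature would be handled differently: instead of dummies, an explicit integer mapping preserves the category order. A minimal sketch with a hypothetical severity band (not a column of this dataset):

```python
import pandas as pd

# Hypothetical ordinal feature with a natural order.
bands = pd.Series(['low', 'high', 'medium', 'low'])

# An explicit mapping preserves the ordering, unlike one-hot encoding.
order = {'low': 0, 'medium': 1, 'high': 2}
encoded = bands.map(order)
print(encoded.tolist())  # → [0, 2, 1, 0]
```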
2 - Perform a multilinear regression, using the classified data, removing the NA values. Comment on your results.
lr.fit(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
lr.score(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
The results are a bit better than before, but performance is still very poor. 3 - Create a new encoding for days in which it rained, snowed, neither, and both, and then rerun the regression. Are the results any better?
# hours_snow and hours_rain are cumulative across days, so I'm taking the max for each day to see if it snowed
days = pm2_enc.groupby(['year', 'month', 'day'])['hours_snow', 'hours_rain'].max()

# creating columns for the encodings
days['snow'] = pd.Series(days['hours_snow'] > 0, dtype='int')
days['rain'] = pd.Series(days['hours_rain'] > 0, dtype='int')
days['rain_snow'] = pd.Series((days['hours_rain'] > 0) & (days['hours_snow'] > 0), dtype='int')
days['no_rain_snow'] = pd.Series((days['hours_rain'] == 0) & (days['hours_snow'] == 0), dtype='int')

# resetting index and dropping hours_snow and hours_rain
days.reset_index(inplace=True)
days.drop(['hours_snow', 'hours_rain'], inplace=True, axis=1)

# joining the dataframe with the new columns to the original one
pm2_enc = pm2_enc.merge(days, left_on=['year', 'month', 'day'], right_on=['year', 'month', 'day'])
pm2_enc.info()

lr.fit(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
lr.score(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Wow, now the fit is perfect! 4 - Create a new encoding for the quartile that a day falls under by wind speed and rerun the regression. Comment on your results.
# using pandas cut and subtracting 0.1 to include the min values
pm2_enc['wind_speed_quartile'] = pd.cut(pm2_enc.wind_speed,
                                        bins=list(pm2_enc.wind_speed.quantile([0]) - 0.1) +
                                             list(pm2_enc.wind_speed.quantile([0.25, 0.5, 0.75, 1])),
                                        labels=[0.25, 0.5, 0.75, 1])

# from solutions: using np.percentile:
quartile = np.percentile(data_sub['wind_speed_quartile'], [25, 50, 75, 100])
cat = []
for row in range(len(data_sub)):
    wind_speed = data_sub['wind_speed_quartile'].iloc[row]
    # elif chain so each row gets exactly one label
    if wind_speed <= quartile[0]:
        cat.append('1st')
    elif wind_speed <= quartile[1]:
        cat.append('2nd')
    elif wind_speed <= quartile[2]:
        cat.append('3rd')
    elif wind_speed <= quartile[3]:
        cat.append('4th')
data_sub['wind_quart'] = cat
# and then create dummies...

# transforming the column in numeric
pm2_enc.wind_speed_quartile = pd.to_numeric(pm2_enc.wind_speed_quartile)

lr.fit(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
lr.score(pm2_enc.iloc[:, :-1], pm2_enc.iloc[:, -1])
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
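Instead of building the bin edges by hand with `pd.cut`, pandas offers `pd.qcut`, which bins by quantile directly and includes the minimum without any edge fiddling. A small sketch on a toy stand-in for the wind speed column:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the wind_speed column: 20 evenly spread values.
speeds = pd.Series(np.arange(1, 21, dtype=float))

# qcut splits into quantile bins directly; labels play the role of the quartiles.
quartile = pd.qcut(speeds, q=4, labels=[0.25, 0.5, 0.75, 1])
print(quartile.value_counts())
```

Each of the four bins ends up with the same number of observations, which is exactly the quartile encoding the exercise asks for.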
The accuracy has gone down again after adding this new column; this may be because it adds useless noise to the data, or because the binning is too coarse. 5 - Create a new encoding for deciles of the DEWP variable. Then select the row containing the highest temperature, and using Pandas category data type, select all rows in a lesser DEWP decile than this row.
# using pandas cut and subtracting 0.1 to include the min values
pm2_enc['dew_point_decile'] = pd.cut(pm2_enc.dew_point,
                                     bins=list(pm2_enc.dew_point.quantile([0]) - 0.1) +
                                          list(pm2_enc.dew_point.quantile([0.1, 0.2, 0.3, 0.4, 0.5,
                                                                           0.6, 0.7, 0.8, 0.9, 1])))

# from solutions: not using cut but creating new column and then:
data_sub.dew_dec = pd.Categorical(data_sub.dew_dec, categories=data_sub.dew_dec.unique(), ordered=True)

decile = pm2_enc.iloc[pm2_enc.temperature.argmax()].dew_point_decile
print(decile)
pm2_enc.loc[pm2_enc.dew_point_decile < decile]
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
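The `<` comparison in the last line works because `pd.cut` returns an ordered categorical. A minimal sketch of the same mechanism on hypothetical decile labels (not this dataset's values):

```python
import pandas as pd

# Hypothetical ordered bands standing in for the dew-point deciles.
bands = pd.Series(pd.Categorical(['d1', 'd3', 'd2', 'd1'],
                                 categories=['d1', 'd2', 'd3'], ordered=True))

# With ordered=True, comparison against a category works directly.
mask = bands < 'd3'
print(mask.tolist())  # → [True, False, True, True]
```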
Feature Scaling Many of the machine learning algorithms we have at our disposal require that the features be on the same scale in order to work properly. In this exercise, you'll test out a few techniques with and without feature scaling and observe the outcomes. 1 - Head over to the Machine Learning Repository, download the Wine Dataset, and put it in a dataframe, being sure to label the columns properly.
wine = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
wine.columns = ['class', 'alcohol', 'malic_acid', 'ash', 'alcalinity_ash', 'magnesium',
                'total_phenols', 'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins',
                'color_intensity', 'hue', 'OD280_OD315', 'proline']
wine.head()
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
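As a quick refresher on the two rescalings used below: normalization maps to [0, 1] via x' = (x − min)/(max − min), while standardization maps to zero mean and unit variance via z = (x − μ)/σ. A tiny sketch on a made-up array:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0]])

# Normalization squeezes values into [0, 1]: x' = (x - min) / (max - min)
x_norm = MinMaxScaler().fit_transform(x)

# Standardization centers on 0 with unit variance: z = (x - mean) / std
x_std = StandardScaler().fit_transform(x)

print(x_norm.ravel())
print(x_std.ravel())
```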
2 - Fit a Nearest Neighbors model to the data, using a normalized data set, a standardized data set, and the original. Split into test and train sets and compute the accuracy of the classifications and comment on your results.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import train_test_split
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Original dataset:
Xtrain, Xtest, ytrain, ytest = train_test_split(wine.iloc[:, 1:], wine.iloc[:, 0],
                                                test_size=0.3, random_state=0)

knn = KNeighborsClassifier()
knn.fit(Xtrain, ytrain)
print(knn.score(Xtrain, ytrain))
print(knn.score(Xtest, ytest))
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Normalized dataset:
mms = MinMaxScaler()
Xtrain_norm = mms.fit_transform(Xtrain)
Xtest_norm = mms.transform(Xtest)

knn.fit(Xtrain_norm, ytrain)
print(knn.score(Xtrain_norm, ytrain))
print(knn.score(Xtest_norm, ytest))
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Standardized dataset:
ssc = StandardScaler()
Xtrain_std = ssc.fit_transform(Xtrain)
Xtest_std = ssc.transform(Xtest)

knn.fit(Xtrain_std, ytrain)
print(knn.score(Xtrain_std, ytrain))
print(knn.score(Xtest_std, ytest))
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
The accuracy is much better on the normalized or standardized dataset, with the latter generalizing slightly better: K-Nearest Neighbors is sensitive to feature scaling. 3 - Fit a Naive Bayes model to the data, using a normalized data set, a standardized data set, and the original. Comment on your results.
from sklearn.naive_bayes import GaussianNB
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Original dataset:
gnb = GaussianNB()
gnb.fit(Xtrain, ytrain)
print(gnb.score(Xtrain, ytrain))
print(gnb.score(Xtest, ytest))
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Normalized dataset:
gnb.fit(Xtrain_norm, ytrain)
print(gnb.score(Xtrain_norm, ytrain))
print(gnb.score(Xtest_norm, ytest))
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Standardized dataset:
gnb.fit(Xtrain_std, ytrain)
print(gnb.score(Xtrain_std, ytrain))
print(gnb.score(Xtest_std, ytest))
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
For this algorithm there is no difference at all, so scaling the data isn't necessary. Feature Selection With many datasets, you will find yourself in a situation where not all of the provided features are relevant to your model and it may be best to discard them. This is a very complex topic, involving many techniques, a few of which we will explore in this exercise, using the Boston housing data. 1 - From sklearn import the load_boston package, and put the data into a data frame with the proper column names, and then split into training and testing sets.
from sklearn.datasets import load_boston

boston = load_boston()
boston_df = pd.DataFrame(boston.data, columns=boston.feature_names)
boston_target = boston.target

from sklearn.model_selection import train_test_split

Xtrain, Xtest, ytrain, ytest = train_test_split(boston_df, boston_target,
                                                test_size=0.3, random_state=0)
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
2 - Fit a series of least squares multilinear regression models to the data, and use the F-Statistic to select the K best features for values of k ranging from 1 to the total number of features. Plot the MSE for each model against the test set and print the best features for each iteration. Comment on your results.
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.metrics import mean_squared_error

# in the solutions he uses f_regression and not f_classif
# also, best features are obtained by cols[sel.get_support()] with cols = Xtrain.columns
# and lr is instantiated with normalize=True
from sklearn.feature_selection import f_regression
from sklearn.linear_model import LinearRegression

mse = []
cols = Xtrain.columns
lr = LinearRegression(normalize=True)

# looping through the number of features desired and storing the results in mse
for k in range(1, boston_df.shape[1]+1):
    # using SelectKBest with the F-statistic as the score
    sel = SelectKBest(score_func=f_regression, k=k)
    # fitting the selector
    sel.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = sel.transform(Xtrain)
    Xtest_k = sel.transform(Xtest)
    # fitting linear regression model and printing out the k best features
    lr.fit(Xtrain_k, ytrain)
    print('Top {} features {}'.format(sel.k, cols[sel.get_support()]))
    mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))

mse

mse = []
# looping through the number of features desired and storing the results in mse
for k in range(1, boston_df.shape[1]+1):
    # using SelectKBest with the F-statistic as the score
    sel = SelectKBest(score_func=f_classif, k=k)
    # fitting the selector
    sel.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = sel.transform(Xtrain)
    Xtest_k = sel.transform(Xtest)
    # fitting linear regression model and printing out the k best features
    lr.fit(Xtrain_k, ytrain)
    print('Top {} features {}'.format(k, pd.Series(sel.scores_, index=Xtrain.columns).\
                                         sort_values(ascending=False).\
                                         head(k).index.values))
    mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))

import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
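As I understand it, `f_regression` scores each feature with a univariate F-test built from the feature/target correlation: F = r²/(1 − r²) · (n − 2). A small sketch on random data checking that reconstruction against sklearn (variable names are mine):

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.rand(50, 2)
y = 3 * X[:, 0] + rng.rand(50)

F, p = f_regression(X, y)

# Manual reconstruction for the first feature: F = r^2 / (1 - r^2) * (n - 2)
r = np.corrcoef(X[:, 0], y)[0, 1]
F_manual = r**2 / (1 - r**2) * (len(y) - 2)
print(F[0], F_manual)
```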
The MSE keeps going down as features are added; there is a large gain once the 11th feature is included. 3 - Do the same as in part (2), this time using recursive feature selection.
from sklearn.feature_selection import RFE

mse = []
# looping through the number of features desired and storing the results in mse
for k in range(1, boston_df.shape[1]+1):
    # using Recursive Feature Selection with linear regression as estimator
    sel = RFE(estimator=lr, n_features_to_select=k)
    # fitting the selector
    sel.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = sel.transform(Xtrain)
    Xtest_k = sel.transform(Xtest)
    # fitting linear regression model and printing out the k best features
    lr.fit(Xtrain_k, ytrain)
    print('Top {} features {}'.format(k, pd.Series(sel.support_, index=Xtrain.columns).\
                                         sort_values(ascending=False).\
                                         head(k).index.values))
    mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))

plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
The MSE keeps going down as features are added, but after the sixth feature there isn't much improvement. 4 - Fit a Ridge Regression model to the data and use recursive feature elimination and SelectFromModel in sklearn to select your features. Generate the same plots and best features as in parts (2) and (3) and comment and compare your results to what you have found previously.
# in solutions he doesn't use select from model but repeats the previous exercise only using ridge instead
# ok no, it does both
# in selectfrommodel it uses c_vals = np.arange(0.1, 2.1, 0.1) to loop through and the threshold is set to
# str(c) + '*mean' for c in c_vals
# also, he always fits the ridge model
from sklearn.linear_model import Ridge
from sklearn.feature_selection import SelectFromModel

# fitting ridge regression
ridge = Ridge()
c_vals = np.arange(0.1, 2.1, 0.1)
cols = Xtrain.columns

mse = []
# looping through the possible thresholds from above and storing the results in mse
for c in c_vals:
    # using SelectFromModel with the ridge scores from above
    selfrmod = SelectFromModel(ridge, threshold=str(c) + '*mean')
    # fitting the selector
    selfrmod.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = selfrmod.transform(Xtrain)
    Xtest_k = selfrmod.transform(Xtest)
    # fitting the ridge model and printing out the selected features
    ridge.fit(Xtrain_k, ytrain)
    print('c={} features {}'.format(c, cols[selfrmod.get_support()]))
    mse.append(mean_squared_error(ridge.predict(Xtest_k), ytest))

mse

import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(16, 8))
plt.plot(c_vals, mse)
plt.title('MSE for different thresholds')
plt.xlabel('c')
plt.ylabel('MSE');

from sklearn.linear_model import Ridge
# from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel

# fitting ridge regression
ridge = Ridge()
ridge.fit(Xtrain, ytrain)

# storing features importance
coef = ridge.coef_

mse = []
# looping through the possible thresholds from above and storing the results in mse
for k, thresh in enumerate(sorted(coef, reverse=True)):
    # using SelectFromModel with the ridge scores from above
    selfrmod = SelectFromModel(ridge, threshold=thresh)
    # fitting the selector
    selfrmod.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = selfrmod.transform(Xtrain)
    Xtest_k = selfrmod.transform(Xtest)
    # fitting linear regression model and printing out the k best features
    lr.fit(Xtrain_k, ytrain)
    print('Top {} features {}'.format(k+1, pd.Series(ridge.coef_, index=Xtrain.columns).\
                                             sort_values(ascending=False).\
                                             head(k+1).index.values))
    mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))

plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
After the fourth feature there is no improvement. Also, the MSE looks better than in all the previous trials. 5 - L1 regularization can also be used for model selection. Choose an algorithm in sklearn and repeat part (4) using model selection via regularization.
# again, in solutions he uses the c_vals as before and he fits the lasso
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectFromModel

# fitting lasso regression
lasso = LassoCV()
c_vals = np.arange(0.1, 2.1, 0.1)
cols = Xtrain.columns

mse = []
# looping through the possible thresholds from above and storing the results in mse
for c in c_vals:
    # using SelectFromModel with the lasso scores from above
    selfrmod = SelectFromModel(lasso, threshold=str(c) + '*mean')
    # fitting the selector
    selfrmod.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = selfrmod.transform(Xtrain)
    Xtest_k = selfrmod.transform(Xtest)
    # fitting the lasso model and printing out the selected features
    lasso.fit(Xtrain_k, ytrain)
    print('c={} features {}'.format(c, cols[selfrmod.get_support()]))
    mse.append(mean_squared_error(lasso.predict(Xtest_k), ytest))

mse

import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(16, 8))
plt.plot(c_vals, mse)
plt.title('MSE for different thresholds')
plt.xlabel('c')
plt.ylabel('MSE');

from sklearn.linear_model import LassoCV

# fitting lasso regression
lasso = LassoCV()
lasso.fit(Xtrain, ytrain)

# storing features importance
coef = lasso.coef_

mse = []
# looping through the possible thresholds from above and storing the results in mse
for k, thresh in enumerate(sorted(coef, reverse=True)):
    # using SelectFromModel with the lasso scores from above
    selfrmod = SelectFromModel(lasso, threshold=thresh)
    # fitting the selector
    selfrmod.fit(Xtrain, ytrain)
    # transforming train and test sets
    Xtrain_k = selfrmod.transform(Xtrain)
    Xtest_k = selfrmod.transform(Xtest)
    # fitting linear regression model and printing out the k best features
    lr.fit(Xtrain_k, ytrain)
    print('Top {} features {}'.format(k+1, pd.Series(lasso.coef_, index=Xtrain.columns).\
                                             sort_values(ascending=False).\
                                             head(k+1).index.values))
    mse.append(mean_squared_error(lr.predict(Xtest_k), ytest))

plt.figure(figsize=(16, 8))
plt.plot(range(1, len(mse)+1), mse)
plt.title('MSE for models with different number of features')
plt.xlabel('Number of Features')
plt.ylabel('MSE');
Data Preprocessing/Preprocessing_exercise.ipynb
aleph314/K2
gpl-3.0
Load training data
df = pd.read_csv('../facies_vectors.csv')
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Build features In the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this: $$PHIND = \sqrt{\frac{NPHI^2 + DPHI^2}{2}}$$ and it is assumed here that DeltaPHI is: $$DeltaPHI = NPHI - DPHI$$ The functions below use the relationships from the above equations (...two equations, two unknowns...) to estimate NPHI and DPHI (and consequently RHOB). Once we have RHOB, we can use it combined with PE to estimate apparent grain density (RHOMAA) and apparent photoelectric capture cross-section (UMAA), which are useful in lithology estimations from well logs.
def estimate_dphi(df):
    return ((4*(df['PHIND']**2) - (df['DeltaPHI']**2))**0.5 - df['DeltaPHI']) / 2

def estimate_rhob(df):
    return (2.71 - (df['DPHI_EST']/100) * 1.71)

def estimate_nphi(df):
    return df['DPHI_EST'] + df['DeltaPHI']

def compute_rhomaa(df):
    return (df['RHOB_EST'] - (df['PHIND'] / 100)) / (1 - df['PHIND'] / 100)

def compute_umaa(df):
    return ((df['PE'] * df['RHOB_EST']) - (df['PHIND']/100 * 0.398)) / (1 - df['PHIND'] / 100)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
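The quadratic-formula inversion above can be sanity-checked with a round trip: pick hypothetical porosities, build PHIND and DeltaPHI forward from the two stated equations, then invert and confirm DPHI (and hence NPHI) come back exactly.

```python
import numpy as np

def estimate_dphi_scalar(phind, delta_phi):
    # same algebra as estimate_dphi above, applied to scalars
    return ((4 * phind**2 - delta_phi**2) ** 0.5 - delta_phi) / 2

# Hypothetical porosities (in %), not values from the contest data.
nphi, dphi = 18.0, 10.0
phind = np.sqrt((nphi**2 + dphi**2) / 2)   # PHIND = sqrt((NPHI^2 + DPHI^2)/2)
delta_phi = nphi - dphi                    # DeltaPHI = NPHI - DPHI

# Invert and check the round trip.
dphi_back = estimate_dphi_scalar(phind, delta_phi)
nphi_back = dphi_back + delta_phi
print(dphi_back, nphi_back)  # → 10.0 18.0
```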
Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about.
df['DPHI_EST'] = df.apply(lambda x: estimate_dphi(x), axis=1).astype(float)
df['RHOB_EST'] = df.apply(lambda x: estimate_rhob(x), axis=1)
df['NPHI_EST'] = df.apply(lambda x: estimate_nphi(x), axis=1)
df['RHOMAA_EST'] = df.apply(lambda x: compute_rhomaa(x), axis=1)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Regress missing PE values
pe = df.dropna()
PE = pe['PE'].values
wells = pe['Well Name'].values

drop_list_pe = ['Formation', 'Well Name', 'Facies', 'Depth', 'PE', 'RELPOS']
fv_pe = pe.drop(drop_list_pe, axis=1).values

X_pe = preprocessing.StandardScaler().fit(fv_pe).transform(fv_pe)
y_pe = PE

reg = neighbors.KNeighborsRegressor(n_neighbors=40, weights='distance')
logo = LeaveOneGroupOut()
f1knn_pe = []

for train, test in logo.split(X_pe, y_pe, groups=wells):
    well_name = wells[test[0]]
    reg.fit(X_pe[train], y_pe[train])
    score = reg.fit(X_pe[train], y_pe[train]).score(X_pe[test], y_pe[test])
    print("{:>20s}  {:.3f}".format(well_name, score))
    f1knn_pe.append(score)

print("-Average leave-one-well-out F1 Score: %6f" % (np.mean(f1knn_pe)))
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Apply regression model to missing PE values and merge back into dataframe:
reg.fit(X_pe, y_pe)

fv_apply = df.drop(drop_list_pe, axis=1).values
X_apply = preprocessing.StandardScaler().fit(fv_apply).transform(fv_apply)

df['PE_EST'] = reg.predict(X_apply)
df.PE = df.PE.combine_first(df.PE_EST)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Compute UMAA for lithology model
df['UMAA_EST'] = df.apply(lambda x: compute_umaa(x), axis=1)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Umaa Rhomaa plot Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite).
df[df.GR < 125].plot(kind='scatter', x='UMAA_EST', y='RHOMAA_EST', c='GR', figsize=(8,6))
plt.ylim(3.1, 2.2)
plt.xlim(0.0, 17.0)
plt.plot([4.8, 9.0, 13.8, 4.8], [2.65, 2.87, 2.71, 2.65], c='r')
plt.plot([4.8, 11.9, 13.8, 4.8], [2.65, 3.06, 2.71, 2.65], c='g')
plt.scatter([4.8], [2.65], s=50, c='r')
plt.scatter([9.0], [2.87], s=50, c='r')
plt.scatter([13.8], [2.71], s=50, c='r')
plt.scatter([11.9], [3.06], s=50, c='g')
plt.text(2.8, 2.65, 'Quartz', backgroundcolor='w')
plt.text(14.4, 2.71, 'Calcite', backgroundcolor='w')
plt.text(9.6, 2.87, 'Dolomite', backgroundcolor='w')
plt.text(12.5, 3.06, 'Illite', backgroundcolor='w')
plt.text(7.0, 2.55, "gas effect", ha="center", va="center", rotation=-55, size=8,
         bbox=dict(boxstyle="larrow,pad=0.3", fc="pink", ec="red", lw=2))
plt.text(15.0, 2.78, "barite?", ha="center", va="center", rotation=0, size=8,
         bbox=dict(boxstyle="rarrow,pad=0.3", fc="yellow", ec="orange", lw=2))
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Here I use matrix inversion to "solve" the ternary plot for each lithologic component. Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The "pull" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed.
# QTZ-CAL-CLAY
ur1 = inversion.UmaaRhomaa()
ur1.set_dol_uma(11.9)
ur1.set_dol_rhoma(3.06)

# QTZ-CAL-DOL
ur2 = inversion.UmaaRhomaa()

df['UR_QTZ'] = np.nan
df['UR_CLY'] = np.nan
df['UR_CAL'] = np.nan
df['UR_DOL'] = np.nan

df.ix[df.GR >= 40, 'UR_QTZ'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR >= 40, 'UR_CLY'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR >= 40, 'UR_CAL'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR >= 40, 'UR_DOL'] = 0
df.ix[df.GR < 40, 'UR_QTZ'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR < 40, 'UR_DOL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR < 40, 'UR_CAL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR < 40, 'UR_CLY'] = 0
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
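The `inversion.UmaaRhomaa` helper isn't shown here, but the "solving" the text describes presumably amounts to a 3×3 mixing system: the fractions sum to 1, and UMAA and RHOMAA are fraction-weighted averages of the endpoint values read off the plot above. A minimal sketch of that linear system (endpoint coordinates from the plot; the fractions are hypothetical):

```python
import numpy as np

# Endpoint (UMAA, RHOMAA) values for quartz, calcite, dolomite, as on the plot above.
U = np.array([4.8, 13.8, 9.0])
R = np.array([2.65, 2.71, 2.87])

# Forward-mix hypothetical fractions, then solve A @ f = [umaa, rhomaa, 1] to recover them.
f_true = np.array([0.5, 0.3, 0.2])
umaa = U @ f_true
rhomaa = R @ f_true

A = np.vstack([U, R, np.ones(3)])          # rows: UMAA mix, RHOMAA mix, closure (sum to 1)
f = np.linalg.solve(A, np.array([umaa, rhomaa, 1.0]))
print(f)  # → [0.5 0.3 0.2]
```

Points outside the triangle yield negative "fractions", which is one way to spot the gas and barite effects noted above.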
Plot facies by formation to see if the Formation feature will be useful
facies_colors = ['#F4D03F', '#F5B041', '#DC7633', '#6E2C00',
                 '#1B4F72', '#2E86C1', '#AED6F1', '#A569BD', '#196F3D']

fms = df.Formation.unique()
fig, ax = plt.subplots(int(len(fms) / 2), 2, sharey=True, sharex=True, figsize=(5,10))
for i, fm in enumerate(fms):
    facies_counts = df[df.Formation == fm]['Facies'].value_counts().sort_index()
    colors = [facies_colors[i-1] for i in facies_counts.index]
    ax[int(i/2), i%2].bar(facies_counts.index, height=facies_counts, color=colors)
    ax[int(i/2), i%2].set_title(fm, size=8)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Group formations by similar facies distributions
fm_groups = [['A1 SH', 'B1 SH', 'B2 SH', 'B3 SH', 'B4 SH'],
             ['B5 SH', 'C SH'],
             ['A1 LM', 'C LM'],
             ['B1 LM', 'B3 LM', 'B4 LM'],
             ['B2 LM', 'B5 LM']]

fm_group_dict = {fm: i for i, l in enumerate(fm_groups) for fm in l}
df['FM_GRP'] = df.Formation.map(fm_group_dict)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Make dummy variables from the categorical Formation feature
df = pd.get_dummies(df, prefix='FM_GRP', columns=['FM_GRP'])
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Compute Archie water saturation
def archie(df):
    return np.sqrt(0.08 / ((df.PHIND ** 2) * (10 ** df.ILD_log10)))

df['SW'] = df.apply(lambda x: archie(x), axis=1)
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
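For reference, the helper above is a special case of Archie's equation, with constants I'm inferring from the code: $a\,R_w = 0.08$, $m = n = 2$, $R_t = 10^{ILD\_log10}$, and PHIND used directly as the porosity term $\phi$:

```latex
S_w = \left( \frac{a \, R_w}{\phi^m \, R_t} \right)^{1/n}
\quad\xrightarrow{\;a R_w = 0.08,\ m = n = 2\;}\quad
S_w = \sqrt{\frac{0.08}{\phi^2 \, R_t}}
```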
Get distances between wells
# modified from jesper
latlong = pd.DataFrame({"SHRIMPLIN": [37.978076, -100.987305],        #
                        "ALEXANDER D": [37.6747257, -101.1675259],    #
                        "SHANKLE": [38.0633799, -101.3920543],        #
                        "LUKE G U": [37.4499614, -101.6121913],       #
                        "KIMZEY A": [37.12289, -101.39697],           #
                        "CROSS H CATTLE": [37.9105826, -101.6464517], #
                        "NOLAN": [37.7866294, -101.0451641],          #?
                        "NEWBY": [37.3172442, -101.3546995],          #
                        "CHURCHMAN BIBLE": [37.3497658, -101.1060761],#?
                        "STUART": [37.4857262, -101.1391063],         #
                        "CRAWFORD": [37.1893654, -101.1494994],       #?
                        "Recruit F9": [0, 0]})

def haversine(lon1, lat1, lon2, lat2):
    """
    Calculate the great circle distance between two points
    on the earth (specified in decimal degrees)
    """
    # convert decimal degrees to radians
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    # haversine formula
    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
    c = 2 * asin(sqrt(a))
    km = 6367 * c
    return km

def get_lat(df):
    return latlong[df['Well Name']][0]

def get_long(df):
    return latlong[df['Well Name']][1]
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
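A couple of quick sanity checks on the haversine helper: the distance from a point to itself should be exactly zero, the function should be symmetric, and two of the wells listed above should come out tens of kilometres apart. The function is restated here so the sketch is self-contained:

```python
from math import radians, cos, sin, asin, sqrt

def haversine(lon1, lat1, lon2, lat2):
    # same great-circle formula as above
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
    return 6367 * 2 * asin(sqrt(a))

# SHRIMPLIN vs ALEXANDER D, using the coordinates from the table above
d_ab = haversine(-100.987305, 37.978076, -101.1675259, 37.6747257)
d_ba = haversine(-101.1675259, 37.6747257, -100.987305, 37.978076)
print(d_ab)
```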
Add latitude and longitude as features, add distances to every other well as features
df['LAT'] = df.apply(lambda x: get_lat(x), axis=1)
df['LON'] = df.apply(lambda x: get_long(x), axis=1)

dist_dict = {}
for k in latlong:
    dict_name = k + '_DISTANCES'
    k_dict = {}
    lat1 = latlong[k][0]
    lon1 = latlong[k][1]
    for l in latlong:
        lat2 = latlong[l][0]
        lon2 = latlong[l][1]
        if l == 'Recruit F9':
            dist = haversine(0, 0, 0, 0)
        elif k == "Recruit F9":
            dist = haversine(0, 0, 0, 0)
        else:
            dist = haversine(lon1, lat1, lon2, lat2)
        k_dict[l] = dist
    dist_dict[dict_name] = k_dict

for i in dist_dict:
    df[i] = np.nan
    for j in dist_dict[i]:
        df.loc[df['Well Name'] == j, i] = dist_dict[i][j]
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
First guess at facies using KNN
df0 = df[(df.PHIND <= 40) & (df['Well Name'] != 'CROSS H CATTLE')]
facies = df0['Facies'].values
wells = df0['Well Name'].values

keep_list0 = ['GR', 'ILD_log10', 'PHIND', 'PE', 'NM_M', 'RELPOS', 'RHOB_EST', 'UR_CLY', 'UR_CAL']
fv0 = df0[keep_list0].values

clf0 = neighbors.KNeighborsClassifier(n_neighbors=56, weights='distance')

X0 = preprocessing.StandardScaler().fit(fv0).transform(fv0)
y0 = facies

logo = LeaveOneGroupOut()
f1knn0 = []

clf0.fit(X0, y0)
X1 = preprocessing.StandardScaler().fit(df[keep_list0].values).transform(df[keep_list0].values)
knn_pred = clf0.predict(X1)
df['KNN_FACIES'] = knn_pred
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Fit RandomForest model and apply LeavePGroupsOut test There is some bad log data in this dataset which I'd guess is due to rugose hole. PHIND gets as high as 80%, which is certainly spurious, so I'll remove data with cross-plot porosity greater than 40% from the dataset. CROSS H CATTLE well also looks pretty different from the others so I'm going to remove it from the training set.
df1 = df.dropna()
df1 = df1[(df1['Well Name'] != 'CROSS H CATTLE') & (df.PHIND < 40.0)]

facies = df1['Facies'].values
wells = df1['Well Name'].values

drop_list = ['Formation', 'Well Name', 'Facies', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',
             'UMAA_EST', 'UR_QTZ', 'PE_EST', 'Recruit F9_DISTANCES', 'KIMZEY A_DISTANCES',
             'NEWBY_DISTANCES', 'ALEXANDER D_DISTANCES', 'NOLAN_DISTANCES', 'FM_GRP_3']
fv = df1.drop(drop_list, axis=1).values

X = preprocessing.StandardScaler().fit(fv).transform(fv)
y = facies

ne_grid = [150]
mf_grid = [10]
md_grid = [None]
msl_grid = [5]
mss_grid = [20]
keys = ['n_estimators', 'max_features', 'max_depth', 'min_samples_leaf', 'min_samples_split']
param_sets = itertools.product(ne_grid, mf_grid, md_grid, msl_grid, mss_grid)
param_grid = [dict(zip(keys, i)) for i in param_sets]

clf_list = []
for i, d in enumerate(param_grid):
    clf = ensemble.RandomForestClassifier(n_estimators=d['n_estimators'],
                                          class_weight='balanced',
                                          min_samples_leaf=d['min_samples_leaf'],
                                          min_samples_split=d['min_samples_split'],
                                          max_features=d['max_features'],
                                          max_depth=d['max_depth'],
                                          n_jobs=-1)
    lpgo = LeavePGroupsOut(n_groups=2)
    f1rfc = []
    for train, test in lpgo.split(X, y, groups=wells):
        clf.fit(X[train], y[train])
        score = clf.fit(X[train], y[train]).score(X[test], y[test])
        f1rfc.append(score)
    print("Average leave-two-wells-out F1 Score: %6f" % (np.mean(f1rfc)))
    clf_list.append((clf, np.mean(f1rfc)))

np.max([i[1] for i in clf_list])

list(zip(df1.drop(drop_list, axis=1).columns, clf.feature_importances_))
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
Apply model to validation dataset Load validation data (vd), build features, and use the classifier from above to predict facies. Ultimately the PE_EST curve seemed to be slightly more predictive than the PE curve itself, so I use it instead of PE in the classifier, which means I need to compute it for the validation data as well.
# refit model to entire training set clf.fit(X, y) # load validation data vd = pd.read_csv('../validation_data_nofacies.csv') # compute extra log data features vd['DPHI_EST'] = vd.apply(lambda x: estimate_dphi(x), axis=1).astype(float) vd['RHOB_EST'] = vd.apply(lambda x: estimate_rhob(x), axis=1) vd['NPHI_EST'] = vd.apply(lambda x: estimate_nphi(x), axis=1) vd['RHOMAA_EST'] = vd.apply(lambda x: compute_rhomaa(x), axis=1) # predict missing PE values drop_list_vd = ['Formation', 'Well Name', 'Depth', 'PE', 'RELPOS'] fv_vd = vd.drop(drop_list_vd, axis=1).values X_vd = preprocessing.StandardScaler().fit(fv_vd).transform(fv_vd) vd['PE_EST'] = reg.predict(X_vd) vd.PE = vd.PE.combine_first(vd.PE_EST) vd['UMAA_EST'] = vd.apply(lambda x: compute_umaa(x), axis=1) # Estimate lithology using Umaa Rhomaa solution vd['UR_QTZ'] = np.nan vd['UR_CLY'] = np.nan vd['UR_CAL'] = np.nan vd['UR_DOL'] = np.nan vd.ix[vd.GR >= 40, 'UR_QTZ'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR >= 40, 'UR_CLY'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR >= 40, 'UR_CAL'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR >= 40, 'UR_DOL'] = 0 vd.ix[vd.GR < 40, 'UR_QTZ'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR < 40, 'UR_DOL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR < 40, 'UR_CAL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR < 40, 'UR_CLY'] = 0 # Formation grouping vd['FM_GRP'] = vd.Formation.map(fm_group_dict) vd = pd.get_dummies(vd, prefix='FM_GRP', columns=['FM_GRP']) # Water saturation vd['SW'] = vd.apply(lambda x: archie(x), axis=1) # Lat-long features vd['LAT'] = vd.apply(lambda x: get_lat(x), axis=1) vd['LON'] = vd.apply(lambda x: get_long(x), axis=1) for i in dist_dict: vd[i] = np.nan for j 
in dist_dict[i]: vd.loc[vd['Well Name'] == j, i] = dist_dict[i][j] # Compute first guess at facies with KNN X2 = preprocessing.StandardScaler().fit(vd[keep_list0].values).transform(vd[keep_list0].values) vd['KNN_FACIES'] = clf0.predict(X2) # Apply final model drop_list1 = ['Formation', 'Well Name', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI', 'UMAA_EST', 'UR_QTZ', 'PE', 'Recruit F9_DISTANCES', 'KIMZEY A_DISTANCES', 'NEWBY_DISTANCES', 'ALEXANDER D_DISTANCES', 'NOLAN_DISTANCES', 'FM_GRP_3'] fv_vd1 = vd.drop(drop_list1, axis=1).values X_vd1 = preprocessing.StandardScaler().fit(fv_vd1).transform(fv_vd1) vd_predicted_facies = clf.predict(X_vd1) vd['Facies'] = vd_predicted_facies vd.to_csv('RFC_submission_3_predictions.csv') vd_predicted_facies
dagrha/RFC_submission_3_dagrha.ipynb
seg/2016-ml-contest
apache-2.0
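A side note on the scaling step: the cell above fits a fresh StandardScaler on the validation features. A common alternative is to learn the mean and standard deviation on the training data only and reuse them on new data; a minimal numpy sketch of that pattern, with made-up numbers:

```python
import numpy as np

# Toy "training" and "validation" feature matrices (made-up values)
train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
valid = np.array([[2.0, 20.0], [4.0, 40.0]])

# Learn the scaling parameters on the training data only...
mu = train.mean(axis=0)
sigma = train.std(axis=0)

# ...then apply the same parameters to both sets
train_scaled = (train - mu) / sigma
valid_scaled = (valid - mu) / sigma

print(train_scaled.mean(axis=0))  # ~[0. 0.] by construction
```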
For each of 61 large towns in England and Wales, the average annual mortality per 100,000 population (based on 1958–1964 data) and the calcium concentration in the drinking water (in parts per million) are known. The higher the calcium concentration, the harder the water. The towns are additionally divided into northern and southern ones.
water_data = pd.read_table('water.txt') water_data.info() water_data.describe() water_data.head()
4 Stats for data analysis/Homework/1 test mean conf int/Test Mean conf int.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Construct a 95% confidence interval for the mean annual mortality in large towns. What is its lower bound? Round the answer to 4 decimal places.
mort_mean = water_data['mortality'].mean() print('Mean mortality: %f' % mort_mean) from statsmodels.stats.weightstats import _tconfint_generic mort_mean_std = water_data['mortality'].std() / np.sqrt(water_data['mortality'].shape[0]) print('Mortality 95%% interval: %s' % str(_tconfint_generic(mort_mean, mort_mean_std, water_data['mortality'].shape[0] - 1, 0.05, 'two-sided')))
4 Stats for data analysis/Homework/1 test mean conf int/Test Mean conf int.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Using the data from the previous question, construct a 95% confidence interval for the mean annual mortality across all southern towns. What is its upper bound? Round the answer to 4 decimal places.
water_data_south = water_data[water_data.location == 'South'] mort_mean_south = water_data_south['mortality'].mean() print('Mean south mortality: %f' % mort_mean_south) mort_mean_south_std = water_data_south['mortality'].std() / np.sqrt(water_data_south['mortality'].shape[0]) print('Mortality south 95%% interval: %s' % str(_tconfint_generic(mort_mean_south, mort_mean_south_std, water_data_south['mortality'].shape[0] - 1, 0.05, 'two-sided')))
4 Stats for data analysis/Homework/1 test mean conf int/Test Mean conf int.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
On the same data, construct a 95% confidence interval for the mean annual mortality across all northern towns. Does this interval overlap with the previous one? What conclusion do you think can be drawn from that?
water_data_north = water_data[water_data.location == 'North'] mort_mean_north = water_data_north['mortality'].mean() print('Mean north mortality: %f' % mort_mean_north) mort_mean_north_std = water_data_north['mortality'].std() / np.sqrt(water_data_north['mortality'].shape[0]) print('Mortality north 95%% interval: %s' % str(_tconfint_generic(mort_mean_north, mort_mean_north_std, water_data_north['mortality'].shape[0] - 1, 0.05, 'two-sided')))
4 Stats for data analysis/Homework/1 test mean conf int/Test Mean conf int.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Do the 95% confidence intervals for the mean water hardness in the northern and southern towns overlap?
hardness_mean_south = water_data_south['hardness'].mean() print('Mean south hardness: %f' % hardness_mean_south) hardness_mean_north = water_data_north['hardness'].mean() print('Mean north hardness: %f' % hardness_mean_north) hardness_mean_south_std = water_data_south['hardness'].std() / np.sqrt(water_data_south['hardness'].shape[0]) print('Hardness south 95%% interval: %s' % str(_tconfint_generic(hardness_mean_south, hardness_mean_south_std, water_data_south['hardness'].shape[0] - 1, 0.05, 'two-sided'))) hardness_mean_north_std = water_data_north['hardness'].std() / np.sqrt(water_data_north['hardness'].shape[0]) print('Hardness north 95%% interval: %s' % str(_tconfint_generic(hardness_mean_north, hardness_mean_north_std, water_data_north['hardness'].shape[0] - 1, 0.05, 'two-sided')))
4 Stats for data analysis/Homework/1 test mean conf int/Test Mean conf int.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
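Whether two confidence intervals intersect can be checked mechanically once both pairs of endpoints are printed. A small sketch — the endpoint numbers below are hypothetical placeholders, not the computed answers:

```python
def intervals_overlap(a, b):
    """Return True if two closed intervals (low, high) share any point."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical confidence intervals, only to demonstrate the check
south = (1320.1, 1433.5)
north = (1586.5, 1680.6)
print(intervals_overlap(south, north))  # False: these two are disjoint
```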
Recall the formula for the confidence interval for the mean of a normally distributed random variable with variance σ²: $$\bar{X}_n \pm z_{1-\alpha/2}\frac{\sigma}{\sqrt{n}}$$ With σ=1, what sample size is needed to estimate the mean with precision ±0.1 at a 95% confidence level?
from scipy import stats np.ceil((stats.norm.ppf(1-0.05/2) / 0.1)**2)
4 Stats for data analysis/Homework/1 test mean conf int/Test Mean conf int.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
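To sanity-check the answer above, we can plug the resulting n back into the half-width formula $z_{1-\alpha/2}\,\sigma/\sqrt{n}$ and confirm it lands just under the requested precision — a quick numerical check, nothing more:

```python
import numpy as np
from scipy import stats

# Half-width of a normal-based confidence interval: z * sigma / sqrt(n)
sigma = 1.0
precision = 0.1
z = stats.norm.ppf(1 - 0.05 / 2)  # ~1.96 for a 95% level

n = int(np.ceil((z * sigma / precision) ** 2))
half_width = z * sigma / np.sqrt(n)

print(n)           # 385
print(half_width)  # just under 0.1
```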
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
    if not os.path.isfile(file):
        print('Downloading ' + file + '...')
        urlretrieve(url, file)
        print('Download Finished')

# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')

# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
    'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
    'notMNIST_test.zip file is corrupted. Remove the file and try again.'

# Wait until you see that all files have been downloaded.
print('All files downloaded.')

def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
    features = []
    labels = []

    with ZipFile(file) as zipf:
        # Progress Bar
        filenames_pbar = tqdm(zipf.namelist(), unit='files')

        # Get features and labels from all files
        for filename in filenames_pbar:
            # Check if the file is a directory
            if not filename.endswith('/'):
                with zipf.open(filename) as image_file:
                    image = Image.open(image_file)
                    image.load()
                    # Load image data as 1 dimensional array
                    # We're using float32 to save on memory space
                    feature = np.array(image, dtype=np.float32).flatten()

                # Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0] features.append(feature) labels.append(label) return np.array(features), np.array(labels) # Get the features and labels from the zip files train_features, train_labels = uncompress_features_labels('notMNIST_train.zip') test_features, test_labels = uncompress_features_labels('notMNIST_test.zip') # Limit the amount of data to work with a docker container docker_size_limit = 150000 train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit) # Set flags for feature engineering. This will prevent you from skipping an important step. is_features_normal = False is_labels_encod = False # Wait until you see that all features and labels have been uncompressed. print('All features and labels uncompressed.')
intro-to-tensorflow/intro_to_tensorflow.ipynb
snegirigens/DLND
mit
<img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here.
# Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): """ Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data """ # TODO: Implement Min-Max scaling for grayscale image data a = 0.1 b = 0.9 Xmin = 0 Xmax = 255 return a + (image_data - Xmin) * (b - a) / (Xmax - Xmin) ### DON'T MODIFY ANYTHING BELOW ### # Test Cases np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])), [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9], decimal=3) np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])), [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078, 0.896862745098, 0.9]) if not is_features_normal: train_features = normalize_grayscale(train_features) test_features = normalize_grayscale(test_features) is_features_normal = True print('Tests Passed!') if not is_labels_encod: # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit(train_labels) train_labels = encoder.transform(train_labels) test_labels = encoder.transform(test_labels) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 train_labels = train_labels.astype(np.float32) test_labels = test_labels.astype(np.float32) is_labels_encod = True print('Labels One-Hot Encoded') assert is_features_normal, 'You skipped the step to normalize the features' assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels' # Get randomized datasets for training and validation train_features, valid_features, train_labels, valid_labels = train_test_split( train_features, train_labels, test_size=0.05, 
random_state=832289) print('Training features and labels randomized and split.') # Save the data for easy access pickle_file = 'notMNIST.pickle' if not os.path.isfile(pickle_file): print('Saving data to pickle file...') try: with open('notMNIST.pickle', 'wb') as pfile: pickle.dump( { 'train_dataset': train_features, 'train_labels': train_labels, 'valid_dataset': valid_features, 'valid_labels': valid_labels, 'test_dataset': test_features, 'test_labels': test_labels, }, pfile, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise print('Data cached in pickle file.')
intro-to-tensorflow/intro_to_tensorflow.ipynb
snegirigens/DLND
mit
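The Min-Max formula from Problem 1 is easy to spot-check on the range endpoints: the minimum pixel value should map to a and the maximum to b, with the midpoint landing halfway between. A standalone sketch of the same formula:

```python
import numpy as np

def min_max_scale(x, a=0.1, b=0.9, x_min=0.0, x_max=255.0):
    """Min-Max scaling: X' = a + (X - Xmin) * (b - a) / (Xmax - Xmin)."""
    return a + (x - x_min) * (b - a) / (x_max - x_min)

pixels = np.array([0.0, 127.5, 255.0])
print(min_max_scale(pixels))  # [0.1 0.5 0.9]
```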
Checkpoint All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
%matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_dataset'] train_labels = pickle_data['train_labels'] valid_features = pickle_data['valid_dataset'] valid_labels = pickle_data['valid_labels'] test_features = pickle_data['test_dataset'] test_labels = pickle_data['test_labels'] del pickle_data # Free up memory print('Data and modules loaded.')
intro-to-tensorflow/intro_to_tensorflow.ipynb
snegirigens/DLND
mit
Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%"> For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
# All the pixels in the image (28 * 28 = 784) features_count = 784 # All the labels labels_count = 10 # TODO: Set the features and labels tensors features = tf.placeholder (tf.float32, [None, features_count]) labels = tf.placeholder (tf.float32, [None, labels_count]) # TODO: Set the weights and biases tensors weights = tf.Variable (tf.truncated_normal ([features_count, labels_count], dtype=tf.float32)) biases = tf.Variable (tf.zeros (labels_count, dtype=tf.float32)) ### DON'T MODIFY ANYTHING BELOW ### #Test Cases from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert isinstance(weights, Variable), 'weights must be a TensorFlow variable' assert isinstance(biases, Variable), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect' assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect' assert biases._variable._shape == (10), 'The shape of biases is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' # Feed dicts for training, validation, and test session train_feed_dict = {features: train_features, labels: train_labels} valid_feed_dict = {features: valid_features, labels: valid_labels} test_feed_dict = {features: test_features, labels: test_labels} # Linear Function WX + b logits = tf.matmul(features, weights) + biases prediction = tf.nn.softmax(logits) # Cross entropy cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss loss = 
tf.reduce_mean(cross_entropy) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Test Cases with tf.Session() as session: session.run(init) session.run(loss, feed_dict=train_feed_dict) session.run(loss, feed_dict=valid_feed_dict) session.run(loss, feed_dict=test_feed_dict) biases_data = session.run(biases) assert not np.count_nonzero(biases_data), 'biases must be zeros' print('Tests Passed!') # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function created.')
intro-to-tensorflow/intro_to_tensorflow.ipynb
snegirigens/DLND
mit
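The accuracy operation defined above has a direct numpy analogue, which can help make the argmax comparison concrete. The arrays below are toy values, purely for illustration:

```python
import numpy as np

# Toy softmax outputs and one-hot labels (made up for illustration)
predictions = np.array([[0.1, 0.9],
                        [0.8, 0.2],
                        [0.3, 0.7]])
labels = np.array([[0, 1],
                   [1, 0],
                   [1, 0]])

# Same idea as tf.equal(tf.argmax(...)) followed by tf.reduce_mean
is_correct = np.argmax(predictions, axis=1) == np.argmax(labels, axis=1)
accuracy = is_correct.astype(np.float32).mean()
print(accuracy)  # 2 of 3 correct -> ~0.667
```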
<img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:

Configuration 1
* Epochs: 1
* Learning Rate:
  * 0.8
  * 0.5
  * 0.1
  * 0.05
  * 0.01

Configuration 2
* Epochs:
  * 1
  * 2
  * 3
  * 4
  * 5
* Learning Rate: 0.2

The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
# Change if you have memory restrictions batch_size = 128 # TODO: Find the best parameters for each configuration epochs = 1 learning_rate = 0.5 ### DON'T MODIFY ANYTHING BELOW ### # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements use for graphing loss and accuracy log_batch_step = 50 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: batch_features, labels: batch_labels}) # Log every 50 batches if not batch_i % log_batch_step: # Calculate Training and Validation accuracy training_accuracy = session.run(accuracy, feed_dict=train_feed_dict) validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) # Log batches previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) # Check accuracy against Validation data validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) loss_plot = plt.subplot(211) loss_plot.set_title('Loss') loss_plot.plot(batches, loss_batch, 'g') loss_plot.set_xlim([batches[0], batches[-1]]) acc_plot = plt.subplot(212) acc_plot.set_title('Accuracy') acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy') 
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy') acc_plot.set_ylim([0, 1.0]) acc_plot.set_xlim([batches[0], batches[-1]]) acc_plot.legend(loc=4) plt.tight_layout() plt.show() print('Validation accuracy at {}'.format(validation_accuracy))
intro-to-tensorflow/intro_to_tensorflow.ipynb
snegirigens/DLND
mit
Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
### DON'T MODIFY ANYTHING BELOW ### # The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels}) # Check accuracy against Test data test_accuracy = session.run(accuracy, feed_dict=test_feed_dict) assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy) print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
intro-to-tensorflow/intro_to_tensorflow.ipynb
snegirigens/DLND
mit
I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape

    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]

def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
              learning_rate=0.001, grad_clip=5, sampling=False):

    if sampling == True:
        batch_size, num_steps = 1, 1

    tf.reset_default_graph()

    # Declare placeholders we'll feed into the graph
    with tf.name_scope('inputs'):
        inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
        x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')

    with tf.name_scope('targets'):
        targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
        y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
        y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])

    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    # Build the RNN layers
    with tf.name_scope("RNN_layers"):
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)

    with tf.name_scope("RNN_init_state"):
        initial_state = cell.zero_state(batch_size, tf.float32)

    # Run the data through the RNN layers
    with tf.name_scope("RNN_forward"):
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)

    final_state = state

    # Reshape output so it's a bunch of rows, one row for each cell output
    with tf.name_scope('sequence_reshape'):
        seq_output = tf.concat(outputs, axis=1, name='seq_output')
        output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')

    # Now connect the RNN outputs to a softmax layer and calculate the cost
    with tf.name_scope('logits'):
        softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
                                name='softmax_w')
        softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
        logits = tf.matmul(output, softmax_w) + softmax_b

    with tf.name_scope('predictions'):
        preds = 
tf.nn.softmax(logits, name='predictions') with tf.name_scope('cost'): loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss') cost = tf.reduce_mean(loss, name='cost') # Optimizer for training, using gradient clipping to control exploding gradients with tf.name_scope('train'): tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph
tensorboard/Anna_KaRNNa_Name_Scoped.ipynb
retnuh/deep-learning
mit
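The same get_batch generator defined above can be exercised on a tiny array to see the sliding window in action — each yielded window is batch_size rows by num_steps columns:

```python
import numpy as np

def get_batch(arrs, num_steps):
    """Yield successive windows of num_steps columns from each array."""
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size / num_steps)
    for b in range(n_batches):
        yield [x[:, b * num_steps: (b + 1) * num_steps] for x in arrs]

# Two sequences of 10 steps each, windows of 5 steps
data = np.arange(20).reshape(2, 10)
windows = [w[0] for w in get_batch([data], num_steps=5)]
print(len(windows))   # 2 windows
print(windows[0][0])  # first row of the first window: [0 1 2 3 4]
```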
Write out the graph for TensorBoard
model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
tensorboard/Anna_KaRNNa_Name_Scoped.ipynb
retnuh/deep-learning
mit
Training Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
!mkdir -p checkpoints/anna epochs = 10 save_every_n = 200 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/anna20.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: 0.5, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) tf.train.get_checkpoint_state('checkpoints/anna')
tensorboard/Anna_KaRNNa_Name_Scoped.ipynb
retnuh/deep-learning
mit
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32,(None,real_dim),name='input_real') inputs_z = tf.placeholder(tf.float32,(None,z_dim),name='input_z') return inputs_real, inputs_z
gan_mnist/Intro_to_GANs_Exercises.ipynb
ClementPhil/deep-learning
mit
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.

Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
```python
with tf.variable_scope('scope_name', reuse=False):
    # code here
```
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.

Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$ f(x) = max(\alpha * x, x) $$

Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.

Exercise: Implement the generator network in the function below.
You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits:
    '''
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.contrib.layers.fully_connected(z, n_units, activation_fn=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output; the output layer must have out_dim units
        logits = tf.contrib.layers.fully_connected(h1, out_dim, activation_fn=None)
        out = tf.tanh(logits)

        return out, logits
gan_mnist/Intro_to_GANs_Exercises.ipynb
ClementPhil/deep-learning
mit
## Discriminator

The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.

Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.contrib.layers.fully_connected(x, n_units, activation_fn=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and sigmoid output
        logits = tf.contrib.layers.fully_connected(h1, 1, activation_fn=None)
        out = tf.sigmoid(logits)

        return out, logits
gan_mnist/Intro_to_GANs_Exercises.ipynb
ClementPhil/deep-learning
mit
## Build network

Now we're building the network from the functions defined above.

First is to get our inputs, `input_real`, `input_z` from `model_inputs` using the sizes of the input and `z`.

Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.

Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.

Exercise: Build the network from the functions you defined earlier.
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Build the generator network; g_model is the generator output
g_model, g_logits = generator(input_z, input_size)

# Build the discriminator networks; the second call reuses the first one's variables
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
gan_mnist/Intro_to_GANs_Exercises.ipynb
ClementPhil/deep-learning
mit
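The variable-sharing mechanics can feel abstract, so here is a hypothetical plain-Python analogy (our own names, not TensorFlow's API): a scope acts as a parameter store keyed by name, and a second call with `reuse=True` fetches the existing variable instead of creating a new one — just as the two discriminator calls above share one set of weights.

```python
_variables = {}  # stand-in for the graph's variable collection

def get_variable(scope, name, initial, reuse=False):
    """Create scope/name on first use; with reuse=True, return the existing value."""
    key = f"{scope}/{name}"
    if key in _variables:
        if not reuse:
            raise ValueError(f"Variable {key} already exists; pass reuse=True")
        return _variables[key]
    _variables[key] = initial
    return _variables[key]

# First call creates the weights; second call (reuse=True) fetches the same object,
# mirroring discriminator(input_real) followed by discriminator(g_model, reuse=True).
w_real = get_variable("discriminator", "w1", initial=[0.1, 0.2])
w_fake = get_variable("discriminator", "w1", initial=[9.9, 9.9], reuse=True)
print(w_real is w_fake)  # the two calls share one variable
```

Like TensorFlow, this sketch raises an error if a variable already exists and `reuse` was not requested, which catches accidental double-creation of weights.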
## Discriminator and Generator Losses

Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like

```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```

For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like

```python
labels = tf.ones_like(tensor) * (1 - smooth)
```

The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

Finally, the generator losses are using `d_logits_fake`, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.

Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
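To make the loss formula concrete, here is a plain-Python sketch of the sigmoid cross-entropy that `tf.nn.sigmoid_cross_entropy_with_logits` computes for a single logit, together with the smoothed real label (the helper names here are ours, not TensorFlow's):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_cross_entropy(logit, label):
    """Cross-entropy between a target label and sigmoid(logit)."""
    p = sigmoid(logit)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

smooth = 0.1
real_label = 1.0 * (1 - smooth)  # smoothed target for real images
fake_label = 0.0                 # target for fake images

# A confident "real" prediction (large positive logit) is cheap against the
# smoothed real label, but expensive when the target is "fake":
print(sigmoid_cross_entropy(4.0, real_label))
print(sigmoid_cross_entropy(4.0, fake_label))
```

Pushing the discriminator's fake logits toward the "real" label is exactly how the generator loss punishes outputs the discriminator can tell apart from data.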
# Calculate losses
smooth = 0.1  # label smoothing: real labels become 1 - smooth = 0.9

d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
gan_mnist/Intro_to_GANs_Exercises.ipynb
ClementPhil/deep-learning
mit
We start by making sure the computation is performed on GPU if available. prefer_gpu should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated.
from thinc.api import prefer_gpu prefer_gpu()
examples/03_pos_tagger_basic_cnn.ipynb
explosion/thinc
mit
We also define the following helper functions for loading the data, and training and evaluating a given model. Don't forget to call model.initialize with a batch of input and output data to initialize the model and fill in any missing shapes.
import ml_datasets from tqdm.notebook import tqdm from thinc.api import fix_random_seed fix_random_seed(0) def train_model(model, optimizer, n_iter, batch_size): (train_X, train_y), (dev_X, dev_y) = ml_datasets.ud_ancora_pos_tags() model.initialize(X=train_X[:5], Y=train_y[:5]) for n in range(n_iter): loss = 0.0 batches = model.ops.multibatch(batch_size, train_X, train_y, shuffle=True) for X, Y in tqdm(batches, leave=False): Yh, backprop = model.begin_update(X) d_loss = [] for i in range(len(Yh)): d_loss.append(Yh[i] - Y[i]) loss += ((Yh[i] - Y[i]) ** 2).sum() backprop(d_loss) model.finish_update(optimizer) score = evaluate(model, dev_X, dev_y, batch_size) print(f"{n}\t{loss:.2f}\t{score:.3f}") def evaluate(model, dev_X, dev_Y, batch_size): correct = 0 total = 0 for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y): Yh = model.predict(X) for yh, y in zip(Yh, Y): correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum() total += y.shape[0] return float(correct / total)
examples/03_pos_tagger_basic_cnn.ipynb
explosion/thinc
mit
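The `evaluate` function above counts, per token, whether the predicted tag (the argmax over tag scores) matches the gold tag. Here is a small plain-Python replica of that accuracy computation with toy scores rather than real model output (the helper names are ours, for illustration only):

```python
def argmax(row):
    """Index of the largest score in a row."""
    return max(range(len(row)), key=lambda i: row[i])

def tag_accuracy(pred_scores, gold_scores):
    """Fraction of tokens whose highest-scoring predicted tag matches the gold tag."""
    correct = sum(argmax(p) == argmax(g) for p, g in zip(pred_scores, gold_scores))
    return correct / len(gold_scores)

# Three tokens, three candidate tags each; the model gets two of three right.
pred = [[0.1, 0.8, 0.1], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]
gold = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(tag_accuracy(pred, gold))
```

This is the same comparison `evaluate` performs with `y.argmax(axis=1) == yh.argmax(axis=1)`, just written out with lists instead of arrays.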