Finishing the project

**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
!!jupyter nbconvert *.ipynb --to html
MIT
HMM Tagger.ipynb
luiscberrocal/hmm-tagger
Step 4: [Optional] Improving model performance

There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets, where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there ...
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown

nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
[nltk_data] Downloading package brown to
[nltk_data]     /Users/luiscberrocal/nltk_data...
[nltk_data]   Package brown is already up-to-date!
Skip-gram Word2Vec

In this notebook, I'll lead you through using PyTorch to implement the [Word2Vec algorithm](https://en.wikipedia.org/wiki/Word2vec) using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing w...
# read in the extracted text file
with open('data/text8') as f:
    text = f.read()

# print out the first 100 characters
print(text[:100])
anarchism originated as a term of abuse first used against early working class radicals including t
MIT
word2vec-embeddings/Negative_Sampling_Exercise.ipynb
Joonsoo/udacity-deep-learning-2019
Pre-processing

Here I'm fixing up the text to make training easier. This comes from the `utils.py` file. The `preprocess` function does a few things:
>* It converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems.
* It remove...
import utils

# get list of words
words = utils.preprocess(text)
print(words[:30])

# print some stats about this word data
print("Total words in text: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))  # `set` removes any duplicate words
Total words in text: 16680599
Unique words: 63641
Dictionaries

Next, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the `utils.py` file. `create_lookup_tables` takes in a list of words in a text and returns two dictionaries.
>* The integers are assigned in descending frequency order, ...
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(int_words[:30])
[5233, 3080, 11, 5, 194, 1, 3133, 45, 58, 155, 127, 741, 476, 10571, 133, 0, 27349, 1, 0, 102, 854, 2, 0, 15067, 58112, 1, 0, 150, 854, 3580]
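The notebook calls `utils.create_lookup_tables` without showing its body. A minimal sketch of what such a function plausibly does, given the "descending frequency order" the text describes (the function name matches the notebook, the implementation is my reconstruction):

```python
from collections import Counter

def create_lookup_tables(words):
    # The most frequent word gets id 0, the next gets id 1, and so on,
    # matching the "descending frequency order" described above.
    counts = Counter(words)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    int_to_vocab = dict(enumerate(sorted_vocab))
    vocab_to_int = {w: i for i, w in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

vocab_to_int, int_to_vocab = create_lookup_tables(
    ["the", "cat", "the", "dog", "the", "cat"])
```

With ids assigned this way, id 0 always means "most common word", which is why `int_words[:30]` above contains many small integers.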
Subsampling

Words that show up often, such as "the", "of", and "for", don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ i...
from collections import Counter
import random
import numpy as np

threshold = 1e-5
word_counts = Counter(int_words)
# print(list(word_counts.items())[0])  # dictionary of int_words, how many times they appear
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = ...
[5233, 3080, 194, 3133, 741, 10571, 27349, 854, 15067, 58112, 854, 3580, 194, 190, 58, 10712, 1324, 104, 2731, 708, 2757, 567, 7088, 247, 5233, 248, 44611, 2877, 792, 2621]
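The cell above is cut off at `p_drop = ...`. A self-contained sketch of Mikolov's subsampling rule, $P(\text{drop } w_i) = 1 - \sqrt{t / f(w_i)}$, on toy data (the real notebook applies this to `int_words` built from the corpus):

```python
import random
from collections import Counter

random.seed(1)
int_words = [0, 0, 0, 0, 1, 1, 2]   # toy token ids standing in for the corpus
threshold = 1e-5
counts = Counter(int_words)
total = len(int_words)
freqs = {w: c / total for w, c in counts.items()}
# P(drop w) = 1 - sqrt(threshold / freq(w)): frequent words are dropped more often
p_drop = {w: 1 - (threshold / freqs[w]) ** 0.5 for w in counts}
train_words = [w for w in int_words if random.random() > p_drop[w]]
```

Note that the drop probability grows with frequency, which is exactly why the subsampled output above has lost the very common low ids.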
Making batches

Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding _context_ and grab all the words in a window around that word, with size $C$. From [Mikolov et al.](https://...
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = words[start:idx] + words[idx+1:stop+1]
    return list(target_words)

# test your...
Input: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Target: [3, 4, 6, 7]
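Since the test at the end of the cell is truncated, here is a self-contained version of `get_target` with invariant checks (the exact targets vary because the window size R is drawn at random):

```python
import numpy as np

def get_target(words, idx, window_size=5):
    '''Grab all words in a random-sized window around the word at idx.'''
    R = np.random.randint(1, window_size + 1)
    start = max(idx - R, 0)
    stop = idx + R
    # everything in [start, stop] except the center word itself
    return words[start:idx] + words[idx + 1:stop + 1]

words = list(range(10))
targets = get_target(words, idx=5, window_size=3)
```

Whatever R is drawn, the center word is never its own target and the window stays inside the list, which matches the `Input`/`Target` example printed above.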
Generating Batches Here's a generator function that returns batches of input and target data for our model, using the `get_target` function from above. The idea is that it grabs `batch_size` words from a words list. Then for each of those batches, it gets the target words in a window.
def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    n_batches = len(words)//batch_size
    # only full batches
    words = words[:n_batches*batch_size]
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        ...
x [0, 0, 1, 1, 2, 2, 2, 3, 3, 3] y [1, 2, 0, 2, 0, 1, 3, 0, 1, 2]
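The body of the generator is truncated above; a sketch of the batching logic it describes, reusing `get_target` (names mirror the notebook, but everything past the truncation is my reconstruction):

```python
import numpy as np

def get_target(words, idx, window_size=5):
    R = np.random.randint(1, window_size + 1)
    return words[max(idx - R, 0):idx] + words[idx + 1:idx + R + 1]

def get_batches(words, batch_size, window_size=5):
    '''Yield (inputs, targets); each input word repeats once per target word.'''
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]   # keep only full batches
    for i in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[i:i + batch_size]
        for j in range(len(batch)):
            batch_y = get_target(batch, j, window_size)
            y.extend(batch_y)
            x.extend([batch[j]] * len(batch_y))
        yield x, y

x, y = next(get_batches(list(range(8)), batch_size=4, window_size=2))
```

Repeating each input once per target is what produces the `x [0, 0, 1, 1, ...]` pattern in the sample output above.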
---

Validation

Here, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them using the cosine similarity:

$$\mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b...
def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'):
    """ Returns the cosine similarity of validation words with words in the embedding matrix.
        Here, embedding should be a PyTorch embedding module.
    """
    # Here we're calculating the cosine similarity between some random...
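The notebook's version works on a PyTorch embedding module, but the formula itself is easy to check without torch. A NumPy sketch of the same computation, where rows of `emb` stand in for the embedding weights:

```python
import numpy as np

def cosine_similarity(emb, valid_idx):
    """cos(theta) = a.b / (|a||b|) between chosen rows and every row of emb."""
    valid = emb[valid_idx]                       # (n_valid, d)
    norms = np.linalg.norm(emb, axis=1)          # |b| for every word vector
    valid_norms = np.linalg.norm(valid, axis=1)  # |a| for the validation words
    return (valid @ emb.T) / (valid_norms[:, None] * norms[None, :])

emb = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
sims = cosine_similarity(emb, [0])
```

Scaling a vector leaves its cosine similarity unchanged (rows 0 and 2 are parallel, so their similarity is 1), which is why cosine rather than dot product is used to rank neighbors.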
---

SkipGram model

Define and train the SkipGram model.

> You'll need to define an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) and a final, softmax output layer.

An Embedding layer takes in a number of inputs, importantly:
* **num_embeddings** – the size of the dictionary of embeddings, or how many...
import torch
from torch import nn
import torch.optim as optim

class SkipGramNeg(nn.Module):
    def __init__(self, n_vocab, n_embed, noise_dist=None):
        super().__init__()
        self.n_vocab = n_vocab
        self.n_embed = n_embed
        self.noise_dist = noise_dist
        # define embeddin...
Training

Below is our training loop, and I recommend that you train on GPU, if available.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist*...
Epoch: 1/5 Loss: 6.832333087921143
would | yard, mlb, aspartame, supports, magazine
from | and, brown, falling, lenses, deposit
can | epistolary, ambitious, birds, bhutan, adherents
in | of, the, one, republic, corinth
system | yankees, simple, cueball, kemp, hague
two | of, a, one, the, greeks
this | dictate, the, em...
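The noise distribution built above raises the unigram frequencies to the 0.75 power (the exponent used in the word2vec paper) and renormalizes. That flattens the distribution so that rare words are drawn as negative samples more often than their raw frequency would allow. On toy frequencies:

```python
import numpy as np

freqs = np.array([0.5, 0.3, 0.2])            # toy unigram frequencies
noise_dist = freqs ** 0.75
noise_dist = noise_dist / noise_dist.sum()   # renormalize to a probability distribution
# the rarest word's share grows relative to the most common word's share
```

In the notebook this array is wrapped in `torch.from_numpy` and fed to `torch.multinomial` to draw negative samples.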
Visualizing the word vectors

Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to lear...
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# getting embeddings from the embedding layer of our model, by name
embeddings = model.in_embed.weight.to('cpu').data.numpy()
viz_words = 380
tsne = TSNE()
embed_tsne = tsne.fit_transform...
\begin{equation}\int_{S} K(x, y) y_2 dy\end{equation}

Ideas:
* there could be a bug in adaptive.hpp
* maybe recursive subdivision is better than gauss-kronrod for this type of problem.
* ~~kahan summation might be necessary. perhaps the adding and subtracting of the error causes problems?~~
* align the python numpy kernels...
from tectosaur2.nb_config import setup
setup()
import numpy as np
from tectosaur2 import gauss_rule, integrate_term
from tectosaur2.mesh import unit_circle
from tectosaur2.laplace2d import hypersingular
from tectosaur2.global_qbx import global_qbx_self

quad_rule = gauss_rule(10)
circle = unit_circle(quad_rule)
circle....
1 4.657536261689784 8.12064950692637e-05
2 5.704405858601769 7.705862049567358e-07
3 8.12835335669428 3.9674554669355544e-08
4 9.690547112513867 2.4950611021701263e-10
MIT
experiments/hypersingular_accuracy.ipynb
tbenthompson/tectosaur2
Analytic comparison Let's use the analytic solution for stress for slip on a line segment in a fullspace extending from y = -1 to y = 1. From page 35 of the Segall book.
import sympy as sp
import matplotlib.pyplot as plt
from tectosaur2 import panelize_symbolic_surface, pts_grid

t = sp.var('t')
fault = panelize_symbolic_surface(t, 0*t, t, quad_rule, n_panels=1)

def analytical_stress(obsx, obsy):
    rp = obsx ** 2 + (obsy + 1) ** 2
    ri = obsx ** 2 + (obsy - 1) ** 2
    sxz = -(1.0 /...
Return Forecasting: Read Historical Daily Yen Futures DataIn this notebook, you will load historical Dollar-Yen exchange rate futures data and apply time series analysis and modeling to determine whether there is any predictable behavior.
import pandas as pd
from pathlib import Path

# Futures contract on the Yen-dollar exchange rate:
# This is the continuous chain of the futures contracts that are 1 month to expiration
yen_futures = pd.read_csv(
    Path("yen.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
yen_futures.head()

# Trim the dataset to begin on January 1st, 1990
...
MIT
Scratch/ha_time_series_v1.1.ipynb
HassanAlam55/TimeSeriesHW-10
Return Forecasting: Initial Time-Series Plotting

Start by plotting the "Settle" price. Do you see any patterns, long-term and/or short-term?
# Plot just the "Settle" column from the dataframe:
# YOUR CODE HERE!
yen_futures['Settle'].plot(figsize=(12, 8))
---

Decomposition Using a Hodrick-Prescott Filter

Using a Hodrick-Prescott filter, decompose the Settle price into a trend and noise.
import statsmodels.api as sm

# Apply the Hodrick-Prescott Filter by decomposing the "Settle" price into two separate series:
# YOUR CODE HERE!
ts_noise, ts_trend = sm.tsa.filters.hpfilter(yen_futures['Settle'])

# Create a dataframe of just the settle price, and add columns for "noise" and "trend" series from above:
...
---

Forecasting Returns using an ARMA Model

Using futures Settle *Returns*, estimate an ARMA model.

1. ARMA: Create an ARMA model and fit it to the returns data. Note: Set the AR and MA ("p" and "q") parameters to p=2 and q=1: order=(2, 1).
2. Output the ARMA summary table and take note of the p-values of the lags. Based...
# Create a series using "Settle" price percentage returns, drop any NaNs, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf, -inf values with np.nan
returns = (yen_futures[["Settle"]].pct_change() * 100)
returns = returns.replace(-np.inf, np....
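The cell above is cut off mid-`replace`; the same pipeline (percentage returns, infinities swapped for NaN, NaNs dropped) can be sketched in plain NumPy on toy prices, with a zero price to force an `inf`:

```python
import numpy as np

settle = np.array([100.0, 102.0, 101.0, 0.0, 103.0])  # toy Settle prices
with np.errstate(divide='ignore'):
    pct = np.diff(settle) / settle[:-1] * 100          # pct_change() * 100
pct[~np.isfinite(pct)] = np.nan                        # inf/-inf -> NaN
returns = pct[~np.isnan(pct)]                          # dropna()
```

After cleaning, every remaining return is finite, which is what the ARMA fit downstream requires.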
---

Forecasting the Settle Price using an ARIMA Model

1. Using the *raw* Yen **Settle Price**, estimate an ARIMA model.
   1. Set P=5, D=1, and Q=1 in the model (e.g., ARIMA(df, order=(5,1,1))
   2. P = number of Auto-Regressive Lags, D = number of Differences (this is usually 1), Q = number of Moving Average Lags
2. Output the ARIMA ...
from statsmodels.tsa.arima_model import ARIMA

# Estimate an ARIMA Model:
# Hint: ARIMA(df, order=(p, d, q))
# YOUR CODE HERE!
model = ARIMA(yen_futures['Settle'], order=(5, 1, 1))

# Fit the model
# YOUR CODE HERE!
results = model.fit()

# Output model summary results:
results.summary()

# Plot the 5 Day Price Forecast
...
---

Volatility Forecasting with GARCH

Rather than predicting returns, let's forecast near-term **volatility** of Japanese Yen futures returns. Being able to accurately predict volatility will be extremely useful if we want to trade in derivatives or quantify our maximum loss.

Using futures Settle *Returns*, estimate an...
from arch import arch_model

# Estimate a GARCH model:
# YOUR CODE HERE!
model = arch_model(returns, mean="Zero", vol="GARCH", p=2, q=1)

# Fit the model
# YOUR CODE HERE!
res = model.fit(disp="off")

# Summarize the model results
# YOUR CODE HERE!
res.summary()

# Find the last day of the dataset
last_day = returns.index....
[NTDS'18]: test your installation[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](http://lts2.epfl.ch) This is a mini "test" Jupyter notebook to make sure the main packages we'll use are installed.Run it after following the [installation instructions](https://github.com/md...
!git --version
!python --version
!jupyter --version
!jupyter-notebook --version
# !jupyter-lab --version
!ipython --version
6.5.0
MIT
test_install.ipynb
Team36-ntds2018/ntds_2018
Python packages

If you get a `ModuleNotFoundError` error, try to run `conda install ` (in the `ntds_2018` environment, i.e., after `conda activate ntds_2018`).
import numpy as np
np.__version__
import scipy
scipy.__version__
import pandas as pd
pd.__version__
import matplotlib as mpl
mpl.__version__
import networkx as nx
nx.__version__
import pygsp
pygsp.__version__
Small test
%matplotlib inline

graph = pygsp.graphs.Logo()
graph.estimate_lmax()
filt = pygsp.filters.Heat(graph, tau=100)

DELTAS = [20, 30, 1090]
signal = np.zeros(graph.N)
signal[DELTAS] = 1
signal = filt.filter(signal)
graph.plot_signal(signal, highlight=DELTAS)
Helper code

Do not open this; it contains parts of the solutions.
import pandas as pd
import io
import numpy as np

def get_age_data():
    csv_data = io.StringIO("""
Ukazovateľ;;;1996;1997;1998;1999;2000;2001;2002;2003;2004;2005;2006;2007;2008;2009;2010;2011;2012;2013;2014;2015;2016;2017;2018
;;;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spolu;Spo...
MIT
notebooks/05_odhadovanie.ipynb
cedeerwe/slobodna-akademia
Task 1

In the pictures you can see a box of rubber ducks.
1. Estimate the number of ducks in the box as precisely as you can.
1. Give an interval that you are 95% confident contains the correct answer to part 1. (If I gave you a similar task 20 times, you should on average be wrong about your interval once.)

Solution

This type of ta...
data = get_age_data()  # cleans and processes the data you looked at on the web
relevant_data = data.iloc[1:20,:]  # shows the first through twentieth rows of the data
relevant_data
Let's also try plotting these data so that we can see the patterns in them better. Knowing how to properly visualize (and clean) data is a key skill that is often underrated among mathematicians.
import plotly.graph_objects as go

fig = go.Figure(
    go.Heatmap(
        x=relevant_data.columns,
        y=relevant_data.index[::-1],
        z=relevant_data.values[::-1,:],
        colorscale="viridis"
    )
)
fig.show()
It is quite clear that the group sizes do not change much from year to year. A good estimate of our result would therefore be the number of 16-year-olds in 2018, which is 50792. This number lies roughly midway between the first two estimates we obtained. Now that we have become familiar with our data, it is time to try writing down what we are tryin...
age16 = relevant_data.loc["16 rokov"].values[:-1]  # all 16-year-olds except the last year
age17 = relevant_data.loc["17 rokov"].values[1:]   # all 17-year-olds except the first year
pomery = np.divide(age17, age16)
pomery
We now obtain the average ratio by computing the mean of these numbers:
np.mean(pomery)
This result is a bit surprising. If we look at the ratios we obtained, 16 of them are below 1, two are exactly 1, and only 4 are greater than 1. Those four, however, carry more weight than all the ones below 1 combined, so no arithmetic error occurred. In most cases, though, the number of 17-year-olds is slightly smaller than the number of 16-year-o...
np.median(pomery)
That is more believable. If we predict the number of 17-year-olds from this number, we get $$50792 \cdot 0.9998356 \approx 50783.65.$$ So, based solely on the assumption that the cohort born in 2002 is no different from cohorts born in other years, we have obtained our first refined estimate of the result. If we wanted t...
50792 * np.min(pomery), 50792 * np.max(pomery)
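Alongside the min/max interval above, the point estimate from the text is just the 2018 count of 16-year-olds times the median ratio:

```python
# Point estimate: 2018 count of 16-year-olds times the median 17/16 ratio
age16_2018 = 50792
median_ratio = 0.9998356
estimate = age16_2018 * median_ratio   # ~50783.65, as in the text
```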
Extrapolation, continued

The last step is to verify whether our assumption is correct, i.e. whether the 2002 cohort is special in some way. We can check this in several ways. Let us look, one by one, at the decreases/increases between ages 0 and 1, 1 and 2, 2 and 3, 3 and 4, and so on. If, in all of these decreases, our ...
results = {}
for i_vek in range(1, len(relevant_data.index)):
    for i_rok in range(1, len(relevant_data.columns)):
        pomer = relevant_data.iloc[i_vek, i_rok] / relevant_data.iloc[i_vek-1, i_rok-1]
        narodeni = int(relevant_data.columns[i_rok]) - i_vek
        if narodeni not in results:
            result...
At first glance the red line (the ratios of counts at successive ages) looks completely normal, but then we notice a huge jump in the ninth year of life of the 2002 cohort, as well as a slightly larger drop in its 13th year. So did something special happen in the 9th year of life of the 2002 cohort?

Extrapolation, continued II

We can...
results = {}
for i_vek in range(1, len(relevant_data.index)):
    for i_rok in range(1, len(relevant_data.columns)):
        pomer = relevant_data.iloc[i_vek, i_rok] / relevant_data.iloc[i_vek-1, i_rok-1]
        narodeni = int(relevant_data.columns[i_rok]) - i_vek
        if narodeni not in results:
            result...
Classification

In classification, we predict categorical labels. In regression, we predict quantitative/numerical labels. The critical difference is that we can't take a difference between the predicted and actual category in classification, while we can take a difference between the predicted and actual numerical val...
import numpy as np
from sklearn import metrics

# generate our results
y_pred = np.zeros(100, dtype=np.int32)
y_pred[:12] = 1
y = np.zeros(100)
y[:8] = 1
y[-2:] = 1

print("precision: {:g}".format(metrics.precision_score(y, y_pred)))
print("recall: {:g}".format(metrics.recall_score(y, y_pred)))
print(metrics.classifica...
precision: 0.666667
recall: 0.8

              precision    recall  f1-score   support

         0.0       0.98      0.96      0.97        90
         1.0       0.67      0.80      0.73        10

    accuracy                           0.94       100
   macro avg       0.82      0.88      0.85       100
weighted avg       ...
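Recomputing the printed scores by hand from the confusion counts makes the definitions concrete (precision = TP/(TP+FP), recall = TP/(TP+FN)):

```python
# Same labels as the cell above: 12 predicted positives, 10 true positives
y_pred = [1] * 12 + [0] * 88
y_true = [1] * 8 + [0] * 90 + [1] * 2
tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))
fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))
precision = tp / (tp + fp)   # 8 / 12 = 0.667
recall = tp / (tp + fn)      # 8 / 10 = 0.8
```

These match the scikit-learn values above; note that true negatives appear in neither formula.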
MIT
5_ML_Classification.ipynb
jhonsonlee/basics-of-machine-learning
Probabilistic Classification Models

Some classification models do not directly predict a class for an observation but instead report a probability. For example, a model might predict that there's a 75% chance the observation is positive. For the preceding example, should we assign a positive or negative label? The natural ...
# generate data
np.random.seed(0)
y_proba = np.linspace(0, 1, 1000)
y_pred = (y_proba > 0.5).astype(np.int32)
y = np.random.binomial(1, y_proba)

print("accuracy: {}".format(metrics.accuracy_score(y, y_pred)))
precision, recall, threshold = metrics.precision_recall_curve(y, y_proba)
f1_score = 2*precision*recall/(preci...
In the above figure, we see how increasing the threshold led to higher precision but lower recall. The threshold that yielded the largest $F_1$ score was about 0.36. Any probabilistic model can achieve any arbitrary level of precision and recall by adjusting the threshold. As such, when comparing the performance of pro...
plt.plot(recall, precision)
plt.xlabel('recall')
plt.ylabel('precision')
plt.xlim([0, 1])
plt.ylim([0, 1]);
We want a model that has less tradeoff between precision and recall, resulting in a curve with less of a drop with increasing recall. Geometrically, it is better to have a model with a larger area under the curve, **AUC**, of its precision-recall plot. In `scikit-learn`, the AUC can be calculated using the `metrics.auc...
print("precision-recall AUC: {}".format(metrics.auc(recall, precision)))
print("receiver-operator AUC: {}".format(metrics.roc_auc_score(y, y_proba)))
precision-recall AUC: 0.833677363943477 receiver-operator AUC: 0.834057379672299
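`metrics.auc` is simply a trapezoidal integral of precision over recall; a hand-rolled version on a tiny illustrative curve (these numbers are toy values, not the notebook's):

```python
import numpy as np

recall = np.array([0.0, 0.5, 1.0])
precision = np.array([1.0, 0.8, 0.6])
# trapezoid rule: sum over intervals of width * average height
auc = np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2)
```

A flat curve (little precision lost as recall grows) pushes this integral toward 1, which is the geometric meaning of "less tradeoff" in the text above.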
In the example, the resulting model had similar values for the precision-recall AUC and the ROC AUC. In general, if your data is imbalanced (more observations of the negative class) or if you care more about false positives, you should rely on the AUC of the precision-recall curve. Note, the number of true negatives is not factored into calculating eithe...
p = np.linspace(1E-6, 1-1E-6, 1000)
y = 1
log_loss = -(y*np.log(p) + (1 - y)*np.log(1 - p))

plt.plot(p, log_loss)
plt.xlabel('probability')
plt.ylabel('log loss')
plt.legend(['$y$ = 1']);
Logistic regression

The logistic regression model is the classifier version of linear regression. It is a probabilistic model; it will predict probability values that can then be used to assign class labels. The model works by taking the output of a linear regression model and feeding it into a sigmoid or logistic functi...
x = np.linspace(-10, 10, 100)
s = 1/(1 + np.exp(-x))

plt.plot(x, s)
plt.xlabel('$x$')
plt.ylabel('$S(x)$');
The $\beta$ coefficients of the model are chosen to minimize the log loss. Unlike linear regression, there is no closed-form solution for the optimal coefficients. Instead, the coefficients are solved for using gradient descent.

Let's train a logistic regression model through `scikit-learn`. We'll first train a model and plot...
from sklearn.datasets import make_blobs

X, y = make_blobs(centers=[[1, 1], [-1, -1]], cluster_std=1.5, random_state=0)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.bwr)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$');

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='lbfgs')
clf.fit(X, y)...
Import Packages
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import tensorflow_datasets as tfds
Apache-2.0
IMDB Movie Reviews Sentiment Analysis using LSTM, GRU and CNN.ipynb
sproboticworks/ml-course
Load IMDB dataset
imdb, info = tfds.load("imdb_reviews", with_info=True, as_supervised=True)
train_data, test_data = imdb['train'], imdb['test']

training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = []

for s,l in train_data:
    training_sentences.append(str(s.numpy()))
    training_labels.append(l.numpy())
...
Tokenization
vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type = 'post'
oov_tok = "<OOV>"

tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(training_sentences)
padded = pad_sequences(se...
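The `pad_sequences` call above is truncated, so here is what post-truncation plus post-padding does in plain Python. Note that Keras's `pad_sequences` defaults to `padding='pre'`; the cell only shows `trunc_type='post'` being set, so the padding side here is an assumption:

```python
def pad_post(seq, maxlen, value=0):
    """Truncate from the end, then pad with `value` at the end up to maxlen."""
    seq = seq[:maxlen]                          # truncating='post'
    return seq + [value] * (maxlen - len(seq))  # padding='post' (assumed)

short = pad_post([5, 3, 8], maxlen=5)
longer = pad_post([1, 2, 3, 4, 5, 6], maxlen=5)
```

Every sequence ends up with length `maxlen`, which is what lets the reviews be stacked into one tensor for the Embedding layer.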
Build LSTM Model
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',op...
Train Model
num_epochs = 10
history = model.fit(padded, training_labels_final, epochs=num_epochs,
                    validation_data=(testing_padded, testing_labels_final))
Visualize the training graph
import matplotlib.pyplot as plt

def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_'+string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend(['training '+string, 'validation '+string])
    plt.show()

plot_graphs(history, 'accuracy')
plot_graphs(hi...
Using GRU Model
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',opt...
Using CNN Model
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Conv1D(128, 5, activation='relu'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
mod...
Download Embedding files
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape)  # shape: (vocab_size, embedding_dim)

import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
    word = reverse_word_index[word_num]
    embeddings = weights[w...
Predicting Sentiment in new Reviews
# Use the model to predict a review
fake_reviews = ["Awesome movie",
                "It's been a long time since I watched a good movie like this",
                "It was very dragging and boring till first half but it picked the pace during 2nd half",
                "Waste of money!!",
                "Sci-Fi movie of the year"]

print(fake_re...
AlexNet in Keras

Build a deep convolutional neural network to classify MNIST digits

Set seed for reproducibility
import numpy as np
np.random.seed(42)
Apache-2.0
Deep Learning/Tutorials/Alexnet_in_keras.ipynb
surajch77/ML-Examples
Load dependencies
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Flatten, MaxPooling2D, Conv2D
from keras.layers.normalization import BatchNormalization
from keras.activations import softmax, relu, tanh, sigmoid
from keras.callbacks import TensorBoard
Using TensorFlow backend.
Load and preprocess the data
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True)
Design Neural Network architecture
X.shape
Y.shape
Y[0]

model = Sequential()
model.add(Conv2D(96, kernel_size=(11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(256, kernel_size=(5, 5), strides=(1, 1), activation='relu'))
m...
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 54, 54, 96)        34944
________________________________________________________...
Compile the neural network
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Configure the Tensorboard
tb = TensorBoard("/home/suraj/Desktop/Anaconda/TensorflowLiveLessons/Tutorials/logs/alexnet")
Train the model
model.fit(X, Y, batch_size=64, epochs=10, verbose=1, validation_split=0.1, shuffle=True)
Train on 1224 samples, validate on 136 samples
Epoch 1/10
1224/1224 [==============================] - 50s 41ms/step - loss: 4.8063 - acc: 0.2018 - val_loss: 7.4674 - val_acc: 0.1985
Epoch 2/10
1224/1224 [==============================] - 47s 39ms/step - loss: 3.2963 - acc: 0.2794 - val_loss: 4.2938 - val_acc: 0.1029
E...
Linear Regression - to be submitted
# import required libraries
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

os.chdir(".../Chapter 3/Linear Regression")
os.getcwd()
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
Read data
df_housingdata = pd.read_csv("Final_HousePrices.csv")
df_housingdata.head(5)
We start by identifying our numeric and categorical variables.
df_housingdata.dtypes
df_housingdata.corr(method='pearson')
Besides the correlation between the variables, we'd also like to study the correlation between the predictor variables and the response variable.
correlation = df_housingdata.corr(method='pearson')

# Our response variable "SalePrice" is the last column. We remove its correlation with itself.
correlation_response = correlation.iloc[-1][:-1]

# variables sorted in descending order
correlation_response.sort_values(ascending=False)
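Pearson's r, the quantity `corr(method='pearson')` fills the matrix with, can be sanity-checked on toy columns:

```python
import numpy as np

# r = cov(x, y) / (std(x) * std(y)); a perfectly linear pair gives r = 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x
r = np.corrcoef(x, y)[0, 1]
```

A value near +1 or -1 flags a strong linear relationship with the response, which is why the sorted list above is a quick first pass at feature selection.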
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
To sort correlations by absolute values
correlation_response[abs(correlation_response).argsort()[::-1]]
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
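The same ordering can also be obtained with `sort_values` on the absolute values, which some readers may find clearer than the `argsort` idiom; a minimal sketch on a made-up Series:

```python
import pandas as pd

# Sorting by absolute value keeps strong negative correlations near
# the top, which a plain sort_values(ascending=False) would bury.
corr = pd.Series({'a': 0.2, 'b': -0.9, 'c': 0.5})
by_abs = corr.reindex(corr.abs().sort_values(ascending=False).index)
print(list(by_abs.index))  # ['b', 'c', 'a']
```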
Correlation
# Generate a mask for the upper triangle # np.zeros_like - returns an array of zeros with the same shape and type as the given array # In this case we pass the correlation matrix # we create a variable "mask" which is a 14 x 14 numpy array mask = np.zeros_like(correlation, dtype=bool) # We create a tuple with triu_...
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
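How the upper-triangle mask works can be seen on a tiny 3 x 3 matrix (the size here is illustrative; the housing correlation matrix is larger):

```python
import numpy as np

# triu_indices_from gives the (row, col) indices of the upper
# triangle, including the diagonal; setting those cells to True
# hides the redundant half of a symmetric matrix in a heatmap.
mask = np.zeros((3, 3), dtype=bool)
mask[np.triu_indices_from(mask)] = True
print(int(mask.sum()))  # 6 masked cells: 3 + 2 + 1
```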
See distribution of the target variable
# Setting the plot size fig, axis = plt.subplots(figsize=(7, 7)) # We use kde=True to plot the gaussian kernel density estimate sns.distplot(df_housingdata['SalePrice'], bins=50, kde=True)
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
We can also use JointGrid() from our seaborn package to plot a combination of plots
from scipy import stats g = sns.JointGrid(df_housingdata['GarageArea'], df_housingdata['SalePrice']) g = g.plot(sns.regplot, sns.distplot) g = g.annotate(stats.pearsonr)
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
Let us now scale our numeric variables
# create a variable to hold the names of the data types viz int16, in32 and so on num_cols = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] # Filter out variables with numeric data types df_numcols_only = df_housingdata.select_dtypes(include=num_cols) # Importing MinMaxScaler and initializing it from skl...
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
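MinMaxScaler's transform is simple enough to reproduce with plain NumPy, which makes the scaling formula explicit (toy data below, not the housing columns):

```python
import numpy as np

# Min-max scaling per column: x' = (x - min) / (max - min), so every
# column ends up in [0, 1] regardless of its original units.
X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_scaled[1])  # [0.5 0.5]
```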
Perform one-hot encoding on our categorical variables
# We exclude all numeric columns df_housingdata_catcol = df_housingdata.select_dtypes(exclude=num_cols) # Steps to one-hot encoding: # We iterate through each categorical column name # Create encoded variables for each categorical columns # Concatenate the encoded variables to the data frame # Remove the original cate...
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
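What `pd.get_dummies` does to a single categorical column, on a made-up example (the column name and levels are illustrative):

```python
import pandas as pd

# One indicator (0/1) column is created per category level, and the
# original categorical column is dropped from the result.
df = pd.DataFrame({'Street': ['Pave', 'Grvl', 'Pave']})
encoded = pd.get_dummies(df, columns=['Street'])
print(sorted(encoded.columns))  # ['Street_Grvl', 'Street_Pave']
```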
Linear model fitted by minimizing a regularized empirical loss with SGD
import numpy as np from sklearn.linear_model import SGDRegressor lin_model = SGDRegressor() # We fit our model with train data lin_model.fit(X_train, Y_train) # We use predict() to predict our values lin_model_predictions = lin_model.predict(X_test) # We check the coefficient of determination with score() print(lin...
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
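The per-sample update behind SGDRegressor can be sketched for squared loss, ignoring the regularization term; the data and learning rate below are made up:

```python
import numpy as np

# One SGD step for squared loss on a single sample (x, y):
# gradient = 2 * x * (w.x - y), then w <- w - lr * gradient
w = np.zeros(2)
x, y, lr = np.array([1.0, 2.0]), 3.0, 0.1
w -= lr * 2 * x * (w @ x - y)
print(w)  # [0.6 1.2]
```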
We change the hyper-parameters and compare the results
import numpy as np from sklearn.linear_model import SGDRegressor lin_model = SGDRegressor(alpha=0.0000001, max_iter=2000) # We fit our model with train data lin_model.fit(X_train, Y_train) # We use predict() to predict our values lin_model_predictions = lin_model.predict(X_test) # We check the coefficient of determ...
_____no_output_____
MIT
Chapter04/Linear regression/Chapter 3 - Linear Regression.ipynb
YMandCL/Ensemble-Machine-Learning-Cookbook
Industrial Defect Inspection with image segmentation In order to satisfy customers' needs, companies have to guarantee the quality of their products, which can often be achieved only by inspection of the finished product. Automatic visual defect detection has the potential to reduce the cost of quality assurance signi...
from IPython.display import Image %matplotlib inline Image('./userdata/images/WeaklySpervisedLearningforIndustrialOpticalInspection.jpg')
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
labeling data Each defect inside an image is bounded with an ellipse. The ellipse parameters are provided in a separate .txt file with the format shown below. [filename] \t \n[semi-major axis] \t [semi-minor axis] \t [rotation angle] \t [x-position of the centre of the ellipsoid] \t [y-position of the centre of th...
!cat './dataset/public_defects/Class1_def/labels.txt'
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
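A hypothetical parser for one record of the format described above; the exact field layout of `labels.txt` may differ, so treat this as a sketch:

```python
# Parse "<a>\t<b>\t<angle>\t<x>\t<y>" ellipse parameters that follow
# a filename line, per the format description above (hypothetical).
def parse_ellipse(filename_line, params_line):
    a, b, angle, cx, cy = (float(v) for v in params_line.split('\t'))
    return {'file': filename_line.strip(), 'semi_major': a,
            'semi_minor': b, 'angle': angle, 'cx': cx, 'cy': cy}

rec = parse_ellipse('1.png', '50.0\t30.0\t0.5\t256.0\t256.0')
print(rec['semi_major'], rec['cx'])  # 50.0 256.0
```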
Data Preprocessing/Exploration/Inspection
import matplotlib.pyplot as plt %matplotlib inline from coslib import plot_ellipse_seg_test plot_ellipse_seg_test('./dataset/public_defects/Class1_def/1.png') plot_ellipse_seg_test('./dataset/public_defects/Class2_def/1.png') plot_ellipse_seg_test('./dataset/public_defects/Class3_def/1.png') plot_ellipse_seg_test('./da...
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
Unet - Fully Convolutional Neural Network The U-Net is a convolutional network architecture for fast and precise segmentation of images. Up to now it has outperformed the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks...
Image('./userdata/images/Unet-model.jpg') img_rows = 512 img_cols = 512 from keras.models import Model from keras.layers import Input, merge, Conv2D, MaxPooling2D, UpSampling2D,Lambda, Conv2DTranspose, concatenate from keras.optimizers import Adam from keras.callbacks import ModelCheckpoint, LearningRateScheduler from ...
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
Learning curves
plt.figure(figsize=(20, 5)) plt.plot(model.history.history['loss'], label='Train loss') plt.plot(model.history.history['val_loss'], label='Val loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.figure(figsize=(20, 5)) plt.plot(model.history.history['IOU_calc'], label='Train IOU') plt.plot(model.history.his...
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
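The `IOU_calc` metric plotted above is intersection-over-union; a plain NumPy version on toy binary masks (the notebook's actual metric is computed on Keras tensors):

```python
import numpy as np

# IOU = |A intersect B| / |A union B| for binary segmentation masks.
def iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(iou(a, b))  # 1 shared pixel / 2 total pixels = 0.5
```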
Predict on testing data
predict = model.predict(X_test) import numpy as np import cv2 def predict_evaluation(pred, image, label): ''' ''' # transform gray image to rgb img = np.array(image, np.uint8) rgb_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # scale pred and mask's pixel range to 0~255 im_label = np.array(255...
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
Save model for later use
model_json_string = model.to_json() with open('./userdata/model.json', 'w') as f: f.write(model_json_string) model.save_weights('./userdata/model.h5') !ls ./userdata/ from coslib import convert_keras_to_pb convert_keras_to_pb('./userdata/', 'conv2d_19/Sigmoid') !ls ./userdata/
_____no_output_____
MIT
toturial/201801_Nvidia Training Data/案例分享-Jupyter with TensorFlow 展示解說程式/Industrial Defect Inspection with image segmentation - AI tech sharing.ipynb
TW-NCHC/TWGC
Data framework: the basic paradigm. The user implements one function, `define_experiment`, then runs `../../tools/data_framework/run_experiment.py`. It runs potentially many experimental trials (over all defined configurations), captures output, builds a sqlite database, queries it, produces plots, and produces html pages to dis...
import sys ; sys.path.append('../../tools/data_framework') ; from run_experiment import * print("Initialized.")
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
The 'hello world' of `run_experiment.sh`: defining a trivial experiment that compiles and runs a single command once and saves the output. We do `run_in_jupyter` and pass `define_experiment`. We could alternatively save `define_experiment` in a python file and run the equivalent `run_experiments.sh` command (described in co...
from _basic_functions import * def define_experiment(exp_dict, args): set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') ## working dir for compiling set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') ## working dir for running set_cmd_compile (exp_dict, 'make brown_ext_abtr...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Try the same thing from the command line! - create a file called `myexp.py` in this directory. - start it with `from _basic_functions import *`. - copy the `define_experiment` function above into `myexp.py`. - run `../../tools/data_framework/run_experiment.py myexp.py -cr` in the shell (starting from this directory). if you g...
def define_experiment(exp_dict, args): set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') ## working dir for compiling set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') ## working dir for running set_cmd_compile (exp_dict, 'make brown_ext_abtree_lf.debra') set_cmd_run ...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Data files (captured stdout/err). Every time the data_framework runs your "run command" (provided by `set_cmd_run`), the output is automatically saved in a `data file`. This is the output of that one run we executed.
print(shell_to_str('cat data/data000001.txt'))
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Running with varying `run param`s. Of course, running one command isn't very interesting... you could do that yourself. Instead, we want to run the command many times, with different arguments. To this end, we allow the user to specify `run param`s. The idea is as follows: - call `add_run_param` to make the data framewo...
def define_experiment(exp_dict, args): set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make -j6') ## -j specifies how many threads to compile with add_run_param (exp_dict, 'DS_TYPENAME', ['...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
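A sketch of what the framework does with `run param`s: every combination of the declared values is substituted into the run-command template. This mimics the behavior for illustration; it is not the framework's actual code, and the command template below is made up:

```python
from itertools import product

# Each {NAME} token in the template is replaced by one value from
# the corresponding param list; all combinations become runs.
params = {'DS_TYPENAME': ['brown_ext_ist_lf', 'bronson_pext_bst_occ'],
          'TOTAL_THREADS': [1, 4]}
template = './{DS_TYPENAME}.debra -nwork {TOTAL_THREADS} -t 1000'
cmds = [template.format(**dict(zip(params, combo)))
        for combo in product(*params.values())]
print(len(cmds))  # 2 typenames x 2 thread counts = 4 runs
```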
Extracting data fields from captured stdout/err. NOW we're going to EXTRACT data automatically from the generated data file(s). To do this, we must include the argument `-d`, which stands for `database creation`. Note 3 data files were produced this time: one for each value of `DS_TYPENAME`. Let's put those data files to ...
def define_experiment(exp_dict, args): set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make -j6') add_run_param (exp_dict, 'DS_TYPENAME', ['brown_ext_ist_lf', 'brown_ext_abtree_lf', 'bronso...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Querying the database. Note that we can simply **access** the last database we created, *WITHOUT rerunning* any experiments, by omitting all command line args in our `run_in_jupyter` call. Also note that you can accomplish the same thing from the **command line** by running `../../tools/data_framework/run_experiment.py m...
import sys ; sys.path.append('../../tools/data_framework') ; from run_experiment import * run_in_jupyter(define_experiment, cmdline_args='') df = select_to_dataframe('select * from data') df # run_in_jupyter call above has equivalent command: # [...]/run_experiment.py myexp.py
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Suppressing logging output in `run_in_jupyter`. If you want to call `run_in_jupyter` as above *without* seeing the `logging data` that was copied to stdout, you can disable the log output by calling `disable_tee_stdout()`. Note that logs will still be collected, but the output will **only** go to the log file `output_lo...
import sys ; sys.path.append('../../tools/data_framework') ; from run_experiment import * disable_tee_stdout() run_in_jupyter(define_experiment, cmdline_args='') df = select_to_dataframe('select * from data') enable_tee_stdout() ## remember to enable, or you won't get output where you DO expect it... df
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Running multiple trials. If you want to perform repeated trials of each experimental configuration, add a run_param called "`__trials`" and specify a list of trial numbers (as below). (The run_param doesn't *need* to be called `__trials` exactly, but if it is called `__trials` exactly, then extra sanity checks will be pe...
def define_experiment(exp_dict, args): set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make -j6') add_run_param (exp_dict, '__trials', [1, 2, 3]) add_run_param (exp_dict, 'DS_TYPENAM...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Querying the data (to see the multiple trials)
select_to_dataframe('select * from data')
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Extractors: mining data from arbitrary text. By default, when you call `add_data_field(exp_dict, 'XYZ')`, a field `'XYZ'` will be fetched from each data file using extractor `grep_line()`, which greps (searches) for a line of the form `'XYZ={arbitrary string}\n'`. If a field you want to extract is not stored that way in ...
def get_maxres(exp_dict, file_name, field_name): ## manually parse the maximum resident size from the output of `time` and add it to the data file maxres_kb_str = shell_to_str('grep "maxres" {} | cut -d" " -f6 | cut -d"m" -f1'.format(file_name)) return float(maxres_kb_str) / 1000
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
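The extraction that `get_maxres` delegates to the shell (`grep | cut | cut`) can also be written in pure Python; the sample line below is hypothetical, shaped like `time`-style output with a `maxresident` field:

```python
# Find the line containing "maxres", take the 6th space-separated
# field, strip everything from the first "m", convert KB to MB.
def get_maxres_py(text):
    for line in text.splitlines():
        if 'maxres' in line:
            return float(line.split(' ')[5].split('m')[0]) / 1000
    return None

sample = 'cmd 0.1user 0.2system 0:00.3elapsed 99%CPU 12345maxresident_kb'
print(get_maxres_py(sample))  # 12.345
```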
**Using** this extractor in `define_experiment`. We actually use this extractor by adding a data field and specifying it: `add_data_field (exp_dict, 'maxresident_mb', extractor=get_maxres)`
def get_maxres(exp_dict, file_name, field_name): ## manually parse the maximum resident size from the output of `time` and add it to the data file maxres_kb_str = shell_to_str('grep "maxres" {} | cut -d" " -f6 | cut -d"m" -f1'.format(file_name)) return float(maxres_kb_str) / 1000 def define_experiment(exp_...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Viewing the resulting data. Note the `maxresident_mb` column -- highlighted for emphasis using the Pandas DataFrame `style.applymap()` method.
df = select_to_dataframe('select * from data') df.style.applymap(lambda s: 'background-color: #b63f3f', subset=pd.IndexSlice[:, ['maxresident_mb']])
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Validators: *checking* extracted data. Suppose you want to run some basic *sanity checks* on fields you pull from data files. A `validator` function is a great way of having the data framework perform a basic check on values as they are extracted from data files. Pre-existing `validator` functions: - `is_positive` - `is_non...
def get_maxres(exp_dict, file_name, field_name): ## manually parse the maximum resident size from the output of `time` and add it to the data file maxres_kb_str = shell_to_str('grep "maxres" {} | cut -d" " -f6 | cut -d"m" -f1'.format(file_name)) return float(maxres_kb_str) / 1000 def define_experiment(exp_...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
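A `validator` in this style is essentially a predicate applied to each extracted value; a minimal sketch of what `is_positive` might look like (the framework's real signature and behavior may differ):

```python
# Hypothetical validator: accept a value if it parses as a number > 0.
def is_positive(value):
    try:
        return float(value) > 0
    except ValueError:
        return False

print(is_positive('12.3'), is_positive('-1'), is_positive('oops'))
```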
What happens when a field *fails* validation? We trigger a validation failure by specifying an obviously incorrect validator `is_equal('hello')`.
def get_maxres(exp_dict, file_name, field_name): ## manually parse the maximum resident size from the output of `time` and add it to the data file maxres_kb_str = shell_to_str('grep "maxres" {} | cut -d" " -f6 | cut -d"m" -f1'.format(file_name)) return float(maxres_kb_str) / 1000 def define_experiment(exp_...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Plotting results (for data with 3 dimensions). One of the main reasons I created the data framework was to make it stupid-easy to produce lots of graphs/plots. The main tool for doing this is the `add_plot_set` function. `add_plot_set()` can be used to cause a SET of plots to be rendered as images in the data directory. th...
def define_experiment(exp_dict, args): set_dir_tools (exp_dict, os.getcwd() + '/../../tools') ## tools library for plotting set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make -j6') ad...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Let's view the data and plot produced by the previous cell. (You have to run the previous cell before running the next one.)
from IPython.display import Image display(Image('data/throughput.png')) display(select_to_dataframe('select * from data'))
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Plotting data with a custom function. If you want full control over how your data is plotted, you can specify your own function as the `plot_type` argument. Your custom function will be called with keyword arguments: - `filename` -- the output filename for the plot image - `column_filters` -- the *current* values o...
def my_plot_func(filename, column_filters, data, series_name, x_name, y_name, exp_dict=None): print('## filename: {}'.format(filename)) print('## filters: {}'.format(column_filters)) print('## data:') print(data) def define_experiment(exp_dict, args): set_dir_tools (exp_dict, os.getcwd() + '/../...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
For example, we can plot this data *manually* using `Pandas`. Since we have `TWO trials` per combination of `DS_TYPENAME` and `TOTAL_THREADS`, we need to aggregate our data somehow before plotting. We can use the `pandas` `pivot_table()` function to compute the `mean` of the trials for each data point. Once we have a pivot t...
import pandas import matplotlib as mpl def my_plot_func(filename, column_filters, data, series_name, x_name, y_name, exp_dict=None): table = pandas.pivot_table(data, index=x_name, columns=series_name, values=y_name, aggfunc='mean') table.plot(kind='line') mpl.pyplot.savefig(filename) print('## SAVED FI...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
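How `pivot_table` collapses the two trials per configuration can be seen on a tiny made-up frame (the column names mirror the experiment's fields; the numbers are invented):

```python
import pandas as pd

# Duplicate (index, column) pairs -- here two trials -- are combined
# by aggfunc; 'mean' averages them into one point per configuration.
data = pd.DataFrame({'TOTAL_THREADS': [1, 1, 4, 4],
                     'DS_TYPENAME': ['a'] * 4,
                     'total_throughput': [10.0, 12.0, 30.0, 34.0]})
table = pd.pivot_table(data, index='TOTAL_THREADS',
                       columns='DS_TYPENAME',
                       values='total_throughput', aggfunc='mean')
print(table.loc[1, 'a'], table.loc[4, 'a'])  # 11.0 32.0
```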
Viewing the generated figure
from IPython.display import Image display(Image('data/throughput.png'))
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Producing *many* plots (for data with 5 dimensions). The real power of `add_plot_set` only starts to show once you want to plot *many* plots at once. So, let's add a couple of dimensions to our data: - key range (`MAXKEY` in the data file) - update rate (`INS_DEL_FRAC` in the data file) and use them to produce **multiple pl...
def define_experiment(exp_dict, args): set_dir_tools (exp_dict, os.getcwd() + '/../../tools') ## path to tools library set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make bin_dir={__dir_run...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Let's view the plots produced by the previous cell. Note you can click on the plots to "drill down" into the data.
show_html('data/throughput.html')
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
How about 4 dimensions? We just saw how to plot 3- and 5-dimensional data... Let's remove the `MAXKEY` column / data dimension to reduce the dimensionality of the data to 4. With only one column in the `varying_cols_list` and NO `row_field` specified in `add_page_set`, there will only be one row of plots. (So a strip of ...
def define_experiment(exp_dict, args): set_dir_tools (exp_dict, os.getcwd() + '/../../tools') ## path to tools library set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make bin_dir={__dir_run...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Let's view the plots produced by the previous cell
show_html('data/throughput.html')
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock
Plots and HTML for data with 6 dimensions. Note that we could have added more than 2 dimensions of data (resulting in data with 6+ dimensions), listing potentially many fields in `varying_cols_list`, and this simply would have resulted in *more plots*. Note that if we had **one** more dimension of data (6 dimensions in t...
def define_experiment(exp_dict, args): set_dir_tools (exp_dict, os.getcwd() + '/../../tools') ## path to tools library set_dir_compile (exp_dict, os.getcwd() + '/../../microbench') set_dir_run (exp_dict, os.getcwd() + '/../../microbench/bin') set_cmd_compile (exp_dict, 'make bin_dir={__dir_run...
_____no_output_____
MIT
setbench/setbench/microbench_experiments/tutorial/tutorial.ipynb
cmuparlay/flock