Benchmark: Training with In-Fold Generated Mean
encoding_df = (
    train_df
    .groupby('category', as_index=False)
    .agg({'y': np.mean})
    .rename(columns={'y': 'infold_mean'})
)

def get_target_and_features(df: pd.DataFrame) -> Tuple[pd.DataFrame, pd.Series]:
    merged_df = df.merge(encoding_df, on='category')
    X = merged_df[['x', 'infold_mean']]
    y = merged_df['y']
    return X, y

X_train, y_train = get_target_and_features(train_df)
X_test, y_test = get_target_and_features(test_df)

rgr = LinearRegression()
rgr.fit(X_train, y_train)

y_hat_train = rgr.predict(X_train)
r2_score(y_train, y_hat_train)

y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Overfitting is clearly present.

Out-of-Fold Target Encoding Estimator
X_train, y_train = train_df[['x', 'category']], train_df['y']
X_test, y_test = test_df[['x', 'category']], test_df['y']

splitter = KFold(shuffle=True, random_state=361)
rgr = OutOfFoldTargetEncodingRegressor(
    LinearRegression,
    # It is a type, not an instance of a class.
    dict(),
    # If needed, pass constructor arguments here as a dictionary.
    # Separating constructor arguments from the estimator makes code
    # involving tools such as `GridSearchCV` more consistent.
    splitter=splitter,
    # Define how to make folds for feature generation.
    smoothing_strength=0,
    # The new feature can be smoothed towards the unconditional aggregate.
    min_frequency=1,
    # The unconditional aggregate is used for sufficiently rare values.
    drop_source_features=True
    # To use or not to use features from conditioning.
)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
What follows is a wrong way to measure the train score: for predictions, the regressor uses features generated in-fold, not the out-of-fold features that were used for training.
rgr.fit(X_train, y_train, source_positions=[1])
y_hat_train = rgr.predict(X_train)
r2_score(y_train, y_hat_train)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Actually, this disparity between the training set and all other sets is the main reason why special estimators are implemented instead of making dsawl.target_encoding.TargetEncoder able to work inside sklearn.pipeline.Pipeline. If you want to use an estimator with target encoding inside a pipeline, pass the pipeline instance as the internal estimator, i.e., as the first argument. For more details, please go to Appendix II of this demo. Now, let us look at the right way to measure performance on the train set. In OutOfFoldTargetEncodingRegressor and OutOfFoldTargetEncodingClassifier, the fit_predict method is not just a combination of the fit and predict methods, because it is designed specifically for correct work with training sets.
y_hat_train = rgr.fit_predict(X_train, y_train, source_positions=[1])
r2_score(y_train, y_hat_train)

rgr.fit(X_train, y_train, source_positions=[1])
y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Thus, there is no blatant overfitting: the train set score and the test set score are close to each other (for some other random seeds the gap between them might be smaller, but, in any case, it is not too considerable even now). It is also worth highlighting that the test set score is significantly higher than that of the benchmark regressor trained with the mean generated in-fold.

Appendix I. Some Snippets

How to Run Grid Search?
from sklearn.model_selection import GridSearchCV

grid_params = {
    'estimator_params': [{}, {'fit_intercept': False}],
    'smoothing_strength': [0, 10],
    'min_frequency': [0, 10]
}
rgr = GridSearchCV(
    OutOfFoldTargetEncodingRegressor(),
    grid_params
)
rgr.fit(X_train, y_train, source_positions=[1])
rgr.best_params_
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Appendix II. Advanced Integration with Pipelines

To start with, recall that the key reason for difficulties with target encoding is the disparity between the training set and the other sets for which predictions are made. The training set is split into folds, whereas the other sets are not, because their targets are not used for target encoding. If correct measurement of train scores is not crucial for you, you can use this quick-and-impure trick.
from dsawl.target_encoding import TargetEncoder

class PipelinedTargetEncoder(TargetEncoder):
    pass

PipelinedTargetEncoder.fit_transform = PipelinedTargetEncoder.fit_transform_out_of_fold
# Now instances of `PipelinedTargetEncoder` can be used as transformers in pipelines
# and target encoding inside these pipelines is implemented in an out-of-fold fashion.
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Anyway, let us look at a way that allows working with train scores in a methodologically correct manner. Suppose that, for some reason, there is a need to learn a model from the dataset under consideration such that:
* first of all, both features are scaled,
* then the categorical feature is target-encoded,
* then squares, cubes, and interactions of order not higher than three between all terms are included as new features,
* and, finally, linear regression is run.

This is not so easy, because if a Pipeline instance is passed as the internal estimator, target encoding is the first transformation, yet in this case it must go after scaling. The snippet below demonstrates how to use OutOfFoldTargetEncodingRegressor inside a pipeline that meets the above specifications.
splitter = KFold(shuffle=True, random_state=361)
rgr = OutOfFoldTargetEncodingRegressor(
    Pipeline,
    {
        'steps': [
            ('poly_enrichment', PolynomialFeatures()),
            ('linear_model', LinearRegression())
        ],
        'poly_enrichment__degree': 3
    },
    splitter=splitter
)
pipeline = Pipeline([
    ('scaling', StandardScaler()),
    ('target_encoding_regression', rgr)
])

X_train = X_train.astype(np.float64)  # Avoid `DataConversionWarning`.
pipeline.fit(
    X_train, y_train,
    target_encoding_regression__source_positions=[1]
)
r2_score(
    y_train,
    pipeline.fit_predict(
        X_train, y_train,
        target_encoding_regression__source_positions=[1]
    )
)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Training and Test Data Sets

Each patient record is randomly assigned to a "training" data set (80%) or a "test" data set (20%). Best practice also reserves a cross-validation set (60% training, 20% cross-validation, 20% test).
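Such a random assignment can be made with scikit-learn's train_test_split; the sketch below uses synthetic data rather than the notebook's breast-cancer files, purely to illustrate the split:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the patient records: 100 rows, 9 features.
rng = np.random.RandomState(0)
X = rng.rand(100, 9)
y = rng.randint(0, 2, size=100)

# 80% training / 20% test, assigned at random.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)  # (80, 9) (20, 9)

# For the 60/20/20 best practice, split the training part once more:
# 0.25 of the remaining 80% gives the 20% cross-validation set.
X_tr, X_cv, y_tr, y_cv = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0)
```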
training_set = load_data("data/breast-cancer.train")
test_set = load_data("data/breast-cancer.test")

print("Training set has %d patients" % training_set.shape[0])
print("Test set has %d patients\n" % test_set.shape[0])

print(training_set.iloc[:, 0:6].head(3))
print()
print(training_set.iloc[:, 6:11].head(3))

training_set_malignant = training_set['malignant']
training_set_features = training_set.iloc[:, 1:10]
test_set_malignant = test_set['malignant']
test_set_features = test_set.iloc[:, 1:10]
15 min Intro to ML in Medicine.ipynb
massie/notebooks
apache-2.0
Linear Support Vector Machine Classification

This image shows how a support vector machine searches for a "Maximum-Margin Hyperplane" in 2-dimensional space. The breast cancer data set is 9-dimensional. Image by User:ZackWeinberg, based on PNG version by User:Cyc [<a href="http://creativecommons.org/licenses/by-sa/3.0">CC BY-SA 3.0</a>], <a href="https://commons.wikimedia.org/wiki/File%3ASvm_separating_hyperplanes_(SVG).svg">via Wikimedia Commons</a>

Using scikit-learn to predict malignant tumors
from sklearn.preprocessing import MinMaxScaler
from sklearn import svm

# (1) Scale the 'training set'
scaler = MinMaxScaler()
scaled_training_set_features = scaler.fit_transform(training_set_features)

# (2) Create the model
model = svm.LinearSVC(C=0.1)

# (3) Fit the model using the 'training set'
model.fit(scaled_training_set_features, training_set_malignant)

# (4) Scale the 'test set' using the same scaler as the 'training set'
scaled_test_set_features = scaler.transform(test_set_features)

# (5) Use the model to predict malignancy for the 'test set'
test_set_malignant_predictions = model.predict(scaled_test_set_features)
print(test_set_malignant_predictions)
15 min Intro to ML in Medicine.ipynb
massie/notebooks
apache-2.0
Evaluating performance of the model
from sklearn import metrics

accuracy = metrics.accuracy_score(test_set_malignant,
                                  test_set_malignant_predictions) * 100
((tn, fp), (fn, tp)) = metrics.confusion_matrix(test_set_malignant,
                                                test_set_malignant_predictions)

print("Accuracy: %.2f%%" % accuracy)
print("True Positives: %d, True Negatives: %d" % (tp, tn))
print("False Positives: %d, False Negatives: %d" % (fp, fn))
15 min Intro to ML in Medicine.ipynb
massie/notebooks
apache-2.0
We'll be using tables prepared from Cufflinks GTF output from GEO entries GSM847565 and GSM847566. These represent results from control and ATF3 knockdown experiments in the K562 human cell line. You can read more about the data on GEO; this example will be more about the features of metaseq than the biology. Let's get the example files:
%%bash
example_dir="metaseq-example"
mkdir -p $example_dir
(cd $example_dir \
  && wget --progress=dot:giga https://raw.githubusercontent.com/daler/metaseq-example-data/master/metaseq-example-data.tar.gz \
  && tar -xzf metaseq-example-data.tar.gz \
  && rm metaseq-example-data.tar.gz)

data_dir = 'metaseq-example/data'
control_filename = os.path.join(data_dir, 'GSM847565_SL2585.table')
knockdown_filename = os.path.join(data_dir, 'GSM847566_SL2592.table')
doc/source/example_session_2.ipynb
daler/metaseq
mit
We'd also like to add a title. But how do we access the top-most axes? Whenever the scatter method is called, the MarginalHistograms object created as a by-product of the plotting is stored in the marginal attribute. This, in turn, has a top_hists attribute, and we can grab the last one created. While we're at it, let's label the histogram axes as well.
# When `d2.scatter` is called, we get a `marginal` attribute.
top_axes = d2.marginal.top_hists[-1]
top_axes.set_title('Differential expression, ATF3 knockdown');

for ax in d2.marginal.top_hists:
    ax.set_ylabel('No.\ntranscripts', rotation=0, ha='right', va='center', size=8)
for ax in d2.marginal.right_hists:
    ax.set_xlabel('No.\ntranscripts', rotation=-90, ha='left', va='top', size=8)

fig = ax.figure
fig.savefig('expression-demo.png')
fig
doc/source/example_session_2.ipynb
daler/metaseq
mit
๊ฐ€์žฅ ๋งŽ์ด ์ถœํ˜„ํ•œ ๋ฐ”์ด๊ทธ๋žจ ์ถœ๋ ฅํ•˜๊ธฐ
from nltk.collocations import *

bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
finder = BigramCollocationFinder.from_words(
    nltk.corpus.genesis.words('english-web.txt'))
finder.nbest(bigram_measures.pmi, 10)
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
์ž…๋ ฅ๋œ ํ…์ŠคํŠธ๋กœ๋ถ€ํ„ฐ ๋ฐ”์ด๊ทธ๋žจ ์ƒ์„ฑ
from nltk import bigrams

text = "I do not like green eggs and ham, I do not like them Sam I am!"
tokens = nltk.wordpunct_tokenize(text)
finder = BigramCollocationFinder.from_words(tokens)
scored = finder.score_ngrams(bigram_measures.raw_freq)
sorted(bigram for bigram, score in scored)
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
๋ฐ”์ด๊ทธ๋žจ์˜ ๋นˆ๋„์ˆ˜ ์„ธ๊ธฐ
sorted(finder.nbest(trigram_measures.raw_freq, 2))
sorted(finder.ngram_fd.items(), key=lambda t: (-t[1], t[0]))[:10]
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
BOW (Bag of Words)

Using Scikit-learn's CountVectorizer

CountVectorizer performs the following three tasks:

* It converts a document into a list of tokens.
* It counts the occurrences of each token in each document.
* It converts each document into a BOW-encoded vector.

Take input text and put it into the bag of words (BOW): create a vector with one slot per word and assign each word a slot in alphabetical order, e.g., "and" goes into slot 0 and "document" into slot 1.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    'This is the first document.',
    'This is the second second document.',
    'And the third one.',
    'Is this the first document?',
    'The last document?',
]
vect = CountVectorizer()
vect.fit(corpus)
vect.vocabulary_
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Print the words in the bag and count how many times each word occurs in each sentence.
print(vect.get_feature_names())
vect.transform(corpus).toarray()
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Setting Stop Words

Stop words are words that can be ignored when building the vocabulary from documents. English articles and conjunctions, and Korean postpositions, typically belong to this category. They can be controlled with the stop_words argument.
vect = CountVectorizer(stop_words=["and", "is", "the", "this"]).fit(corpus)
vect.vocabulary_
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Rebuild the vectors with the stop words excluded.
vect.transform(corpus).toarray()
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
TF-IDF

TF-IDF (Term Frequency - Inverse Document Frequency) encoding does not count words as raw frequencies; instead, it reduces the weight of words that appear in every document, on the grounds that such words have little power to discriminate between documents. Concretely, for a document $d$ and a term $t$, it is computed as

$$ \text{tf-idf}(d, t) = \text{tf}(d, t) \cdot \text{idf}(t) $$

where

* $\text{tf}(d, t)$: term frequency, the number of occurrences of the term in the document
* $\text{idf}(t)$: inverse document frequency, a quantity inversely related to the number of documents containing the term

$$ \text{idf}(t) = \log \dfrac{n}{1 + \text{df}(t)} $$

* $n$: the total number of documents
* $\text{df}(t)$: the number of documents containing the term $t$
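To make the formula concrete, here is a direct computation on the toy corpus from above, using the idf definition exactly as written in the text. Note that scikit-learn's TfidfVectorizer uses a smoothed variant by default (smooth_idf=True adds one to the document counts and to the result, and rows are L2-normalized), so its numbers will not match this sketch exactly.

```python
import math

# Same toy corpus as above, lower-cased and stripped of punctuation by hand
# so that a plain whitespace split is enough for this illustration.
corpus = [
    'this is the first document',
    'this is the second second document',
    'and the third one',
    'is this the first document',
    'the last document',
]
docs = [d.split() for d in corpus]
n = len(docs)

def df(term):
    # number of documents that contain the term
    return sum(term in doc for doc in docs)

def tf_idf(doc, term):
    # tf-idf(d, t) = tf(d, t) * idf(t), with idf as defined in the text
    return doc.count(term) * math.log(n / (1 + df(term)))

# 'the' occurs in every document, so its idf is negative under this formula
print(math.log(n / (1 + df('the'))))
# 'second' occurs twice, but only in one document, so it scores highly
print(tf_idf(docs[1], 'second'))
```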
from sklearn.feature_extraction.text import TfidfVectorizer

tfidv = TfidfVectorizer().fit(corpus)
tfidv.transform(corpus).toarray()
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Example Application

The following code uses Scikit-Learn's text analyzers to find out how frequently particular words are used on a website.
from urllib.request import urlopen
import json
import string

from konlpy.utils import pprint
from konlpy.tag import Hannanum

hannanum = Hannanum()

#f = urlopen("https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/")
f = urlopen("https://github.com/ahhn/oss/raw/master/resources/Ngram_BOW_TF-IDF.ipynb")
notebook = json.loads(f.read())  # do not shadow the `json` module
cell = ["\n".join(c["source"]) for c in notebook["cells"] if c["cell_type"] == "markdown"]
docs = [
    w for w in hannanum.nouns(" ".join(cell))
    if (not w[0].isnumeric()) and (w[0] not in string.punctuation)
]
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
์—ฌ๊ธฐ์—์„œ๋Š” ํ•˜๋‚˜์˜ ๋ฌธ์„œ๊ฐ€ ํ•˜๋‚˜์˜ ๋‹จ์–ด๋กœ๋งŒ ์ด๋ฃจ์–ด์ ธ ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ CountVectorizer๋กœ ์ด ๋ฌธ์„œ ์ง‘ํ•ฉ์„ ์ฒ˜๋ฆฌํ•˜๋ฉด ๊ฐ ๋ฌธ์„œ๋Š” ํ•˜๋‚˜์˜ ์›์†Œ๋งŒ 1์ด๊ณ  ๋‚˜๋จธ์ง€ ์›์†Œ๋Š” 0์ธ ๋ฒกํ„ฐ๊ฐ€ ๋œ๋‹ค. ์ด ๋ฒกํ„ฐ์˜ ํ•ฉ์œผ๋กœ ๋นˆ๋„๋ฅผ ์•Œ์•„๋ณด์•˜๋‹ค.
import numpy as np
import matplotlib.pyplot as plt

vect = CountVectorizer().fit(docs)
count = vect.transform(docs).toarray().sum(axis=0)
idx = np.argsort(-count)
count = count[idx]
feature_name = np.array(vect.get_feature_names())[idx]

plt.bar(range(len(count)), count)
plt.show()

pprint(list(zip(feature_name, count))[:10])
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
SciPy 2016 Scikit-learn Tutorial

Out-of-core Learning - Large Scale Text Classification for Sentiment Analysis

Scalability Issues

The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.

The main scalability issues are:

* Memory usage of the text vectorizer: all the string representations of the features are loaded in memory.
* Parallelization problems for text feature extraction: the vocabulary_ would be a shared state, requiring complex synchronization and overhead.
* Impossibility to do online or out-of-core / streaming learning: the vocabulary_ needs to be learned from the data, so its size cannot be known before making one pass over the full dataset.

To better understand the issue, let's have a look at how the vocabulary_ attribute works. At fit time, the tokens of the corpus are uniquely identified by an integer index, and this mapping is stored in the vocabulary:
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
    "The cat sat on the mat.",
])
vectorizer.vocabulary_
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
Note: since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer. The load_files function loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries:
train.keys()
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case. Finally, let us train a LogisticRegression classifier on the IMDb training subset:
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

h_pipeline = Pipeline([
    ('vec', HashingVectorizer(encoding='latin-1')),
    ('clf', LogisticRegression(random_state=1)),
])
h_pipeline.fit(docs_train, y_train)

print('Train accuracy', h_pipeline.score(docs_train, y_train))
print('Validation accuracy', h_pipeline.score(docs_valid, y_valid))

import gc
del count_vec
del h_pipeline
gc.collect()
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
Out-of-Core Learning

Out-of-core learning is the task of training a machine learning model on a dataset that does not fit into memory or RAM. This requires the following conditions:

* a feature extraction layer with fixed output dimensionality
* knowing the list of all classes in advance (in this case, we only have positive and negative reviews)
* a machine learning algorithm that supports incremental learning (the partial_fit method in scikit-learn)

In the following sections, we will set up a simple batch-training function to train an SGDClassifier iteratively. But first, let us load the file names into a Python list:
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')

fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] + \
         [os.path.join(train_neg, f) for f in os.listdir(train_neg)]

fnames[:3]
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
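The first of the conditions listed above, a feature extraction layer with fixed output dimensionality, is exactly what HashingVectorizer provides: it needs no fit pass and no vocabulary_, and unseen documents hash into the same feature space. A small illustration (not part of the original notebook):

```python
from sklearn.feature_extraction.text import HashingVectorizer

vec = HashingVectorizer(n_features=2**10)  # output dimensionality is fixed up front

# `transform` works without any prior `fit`: there is no vocabulary to learn.
X1 = vec.transform(["The cat sat on the mat."])
X2 = vec.transform(["Completely unseen words still hash into place."])
print(X1.shape, X2.shape)  # both are (1, 1024)
```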
Now, we implement the batch_train function as follows:
from sklearn.base import clone

def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
    vec = HashingVectorizer(encoding='latin-1')
    idx = np.arange(labels.shape[0])
    c_clf = clone(clf)
    rng = np.random.RandomState(seed=random_seed)

    for _ in range(iterations):
        rnd_idx = rng.choice(idx, size=batchsize)
        documents = []
        for i in rnd_idx:
            with open(fnames[i], 'r') as f:
                documents.append(f.read())
        X_batch = vec.transform(documents)
        batch_labels = labels[rnd_idx]
        c_clf.partial_fit(X=X_batch, y=batch_labels, classes=[0, 1])

    return c_clf
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
Limitations of the Hashing Vectorizer

Using the Hashing Vectorizer makes it possible to implement streaming and parallel text classification, but it can also introduce some issues:

* The collisions can introduce too much noise in the data and degrade prediction quality.
* The HashingVectorizer does not provide "Inverse Document Frequency" reweighting (lack of a use_idf=True option).
* There is no easy way to invert the mapping and find the feature names from the feature index.

The collision issues can be controlled by increasing the n_features parameter.

The IDF weighting might be reintroduced by appending a TfidfTransformer instance to the output of the vectorizer. However, computing the idf_ statistic used for the feature reweighting will require at least one additional pass over the training set before being able to start training the classifier: this breaks the online learning scheme.

The lack of inverse mapping (the get_feature_names() method of TfidfVectorizer) is even harder to work around. That would require extending the HashingVectorizer class to add a "trace" mode to record the mapping of the most important features to provide statistical debugging information.

In the meantime, to debug feature extraction issues, it is recommended to use TfidfVectorizer(use_idf=False) on a small-ish subset of the dataset to simulate a HashingVectorizer() instance that has the get_feature_names() method and no collision issues.

Exercise

In our implementation of the batch_train function above, we randomly draw k training samples as a batch in each iteration, which can be considered as random subsampling with replacement. Can you modify the batch_train function so that it iterates over the documents without replacement, i.e., so that it uses each document exactly once per iteration?
# %load solutions/27_B-batchtrain.py
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
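The collision issue mentioned above can also be demonstrated directly: with a deliberately tiny n_features, there are more distinct tokens than hash buckets, so some tokens must share a slot (a pigeonhole argument; this snippet is an illustration, not part of the original notebook):

```python
from sklearn.feature_extraction.text import HashingVectorizer

# 20 distinct tokens hashed into only 8 buckets: collisions are guaranteed.
tokens = [
    'alpha', 'bravo', 'charlie', 'delta', 'echo', 'foxtrot', 'golf',
    'hotel', 'india', 'juliet', 'kilo', 'lima', 'mike', 'november',
    'oscar', 'papa', 'quebec', 'romeo', 'sierra', 'tango',
]
small = HashingVectorizer(n_features=8, norm=None, alternate_sign=False)
X = small.transform([' '.join(tokens)])
print(len(tokens), X.nnz)  # occupied buckets cannot exceed 8, so tokens collide
```

Increasing n_features spreads the tokens over more buckets and makes collisions correspondingly rarer.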
Download the dataset (movie.zip) and gold standard data (topicsMovie.txt and goldMovie.txt) from the link and plug in the locations below.
base_dir = os.path.join(os.path.expanduser('~'), "workshop/nlp/data/")
data_dir = os.path.join(base_dir, 'wiki-movie-subset')
if not os.path.exists(data_dir):
    raise ValueError("SKIP: Please download the movie corpus.")

ref_dir = os.path.join(base_dir, 'reference')
topics_path = os.path.join(ref_dir, 'topicsMovie.txt')
human_scores_path = os.path.join(ref_dir, 'goldMovie.txt')

%%time
texts = []
file_num = 0
preprocessed = 0
listing = os.listdir(data_dir)
for fname in listing:
    file_num += 1
    if 'disambiguation' in fname:
        continue  # discard disambiguation and redirect pages
    elif fname.startswith('File_'):
        continue  # discard images, gifs, etc.
    elif fname.startswith('Category_'):
        continue  # discard category articles
    # Not sure how to identify portal and redirect pages,
    # as well as pages about a single year.
    # As a result, this preprocessing differs from the paper.
    with open(os.path.join(data_dir, fname)) as f:
        for line in f:
            # lower-case all words
            lowered = line.lower()
            # remove punctuation and split into separate words
            words = re.findall(r'\w+', lowered, flags=re.UNICODE | re.LOCALE)
            texts.append(words)
    preprocessed += 1
    if file_num % 10000 == 0:
        print('PROGRESS: %d/%d, preprocessed %d, discarded %d' % (
            file_num, len(listing), preprocessed, (file_num - preprocessed)))

%%time
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
docs/notebooks/topic_coherence-movies.ipynb
macks22/gensim
lgpl-2.1
Cross-validate the numbers

According to the paper, the number of documents should be 108,952 with a vocabulary of 1,625,124. The difference here is due to a difference in preprocessing. However, the results obtained are still very similar.
print(len(corpus))
print(dictionary)

topics = []  # list of 100 topics
with open(topics_path) as f:
    topics = [line.split() for line in f if line]
len(topics)

human_scores = []
with open(human_scores_path) as f:
    for line in f:
        human_scores.append(float(line.strip()))
len(human_scores)
docs/notebooks/topic_coherence-movies.ipynb
macks22/gensim
lgpl-2.1
We then define some parameters and set up the problem for the Bayesian inference.
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)

# Add noise
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])

# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)

# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)

# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)

# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
    [0.001, 10, noise*0.1],
    [1.0, 1000, noise*100]
)

# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
In this example, we will pick some deliberately difficult starting points for the MCMC chains.
# Choose starting points for 3 MCMC chains
xs = [
    [0.7, 20, 2],
    [0.005, 900, 100],
    [0.01, 100, 500],
]
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
Let's run an Adaptive Covariance MCMC without doing any parameter transformation to check its performance.
# Create an MCMC routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)

# Add stopping criterion
mcmc.set_max_iterations(4000)

# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)

# Disable logging mode
mcmc.set_log_to_screen(False)

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')

# Discard warm up
chains = chains[:, 2000:, :]

# Look at distribution across all chains
pints.plot.pairwise(np.vstack(chains), kde=False,
                    parameter_names=[r'$r$', r'$K$', r'$\sigma$'])

# Show graphs
plt.show()
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
The MCMC samples are not ideal, because we've started the MCMC run from some difficult starting points. We can use MCMCSummary to inspect the efficiency of the MCMC run.
results = pints.MCMCSummary(chains=chains, time=mcmc.time(),
                            parameter_names=["r", "k", "sigma"])
print(results)
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
Now, we create a pints.Transformation object for log-transformation and re-run the MCMC to see if it makes any difference.
# Create parameter transformation
transformation = pints.LogTransformation(n_parameters=len(xs[0]))

# Create an MCMC routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC,
                            transformation=transformation)

# Add stopping criterion
mcmc.set_max_iterations(4000)

# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)

# Disable logging mode
mcmc.set_log_to_screen(False)

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')

# Discard warm up
chains = chains[:, 2000:, :]

# Look at distribution across all chains
pints.plot.pairwise(np.vstack(chains), kde=False,
                    parameter_names=[r'$r$', r'$K$', r'$\sigma$'])

# Show graphs
plt.show()
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
The MCMC samples obtained with the parameter transformation look very similar to those in another example notebook, in which we had good starting points and no parameter transformation. This is a good sign! It suggests the transformation did not mess anything up. Now we check the efficiency again:
results = pints.MCMCSummary(chains=chains, time=mcmc.time(),
                            parameter_names=["r", "k", "sigma"])
print(results)
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
Confirm Device
from jaxlib import xla_extension
import jax

key = jax.random.PRNGKey(1701)
arr = jax.random.normal(key, (1000,))
device = arr.device_buffer.device()
print(f"JAX device type: {device}")
assert isinstance(device, xla_extension.GpuDevice), "unexpected JAX device type"
tests/notebooks/colab_gpu.ipynb
google/jax
apache-2.0
First we'll need a way to track which party the Senate & President are part of. For now, let's just stick with the two major parties and create a Party enumeration. Enumerations group and give names to related constants, which can make the code more understandable.
from enum import Enum

class Party(Enum):
    D = 1
    R = 2

color_trans = {Party.D: 'blue', Party.R: 'red'}
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
We'll also make a class to represent each justice. When a new Justice is created for a party, they'll be given a randomly generated term of somewhere between 0 and 40 years.
class Justice:
    def __init__(self, party):
        self.party = party
        self.term = random.randint(0, 40)

    def __str__(self):
        return self.__repr__()

    def __repr__(self):
        return "{party}-{term}".format(party=self.party.name, term=self.term)
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
Our last class is Bench. This class represents the bench containing the Justices currently on the Supreme Court. When the Bench is first formed, it is empty. We care about modifying the Bench in a few ways:

* Filling all available seats with judges of a certain party (fill_seats).
* Adding years to determine if judges have vacated their seats (add_years).
* Getting the composition of the court at a particular year, used for displaying the data later (breakdown).

Empty seats in the court are represented by None, and judges are removed when they have <= 0 years left in their term.
class Bench:
    SIZE = 9

    def __init__(self):
        self.seats = [None] * self.SIZE

    def fill_seats(self, party):
        # loop through all seats
        for i in range(self.SIZE):
            if self.seats[i] is None:
                # if seat is empty, add new
                # justice of the correct party
                self.seats[i] = Justice(party)

    def add_years(self, num_years):
        for i in range(self.SIZE):
            if self.seats[i] is not None:
                # for occupied seats, remove the given
                # number of years from their remaining
                # term. If their term is less than 0
                # this means their seat should now
                # be empty again.
                self.seats[i].term -= num_years
                if self.seats[i].term <= 0:
                    self.seats[i] = None

    def breakdown(self):
        c = Counter([s.party.name if s is not None else "" for s in self.seats])
        return tuple(c[k] if k in c else 0 for k in [""] + [e.name for e in Party])

    def __repr__(self):
        return "\n".join(map(str, self.seats))
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
Last but not least, simulate is where the magic happens. This function loops over a supplied number of years, first determining if any judges have left their position. After that, it randomly picks the winning parties for any elections that are happening. After the elections, if the government is aligned, empty seats on the bench should be filled by that party.
def simulate(years):
    president_party = None
    senate_party = None
    bench = Bench()

    for year in range(years + 1):
        bench.add_years(1)

        if year % 2 == 0:
            senate_party = random.choice(list(Party))
        if year % 4 == 0:
            president_party = random.choice(list(Party))

        if president_party == senate_party:
            bench.fill_seats(president_party)

        yield year, bench.breakdown(), president_party, senate_party
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
run_simulation will execute the simulation for the supplied number of years and post-process the data to return the following information:

* years: an array of all the years that were simulated.
* bench_stacks: the stacked bar graph data for the composition of the court at each year.
* president_parties: an array of the president's party at each year.
* senate_parties: an array of the senate's party at each year.
* mean: an array with the cumulative moving averages of the number of vacancies.
def run_simulation(sim_years):
    years, benches, president_parties, senate_parties = zip(*list(simulate(sim_years)))
    bench_stacks = np.row_stack(zip(*benches))
    vacancies = bench_stacks[0]
    mean = np.cumsum(vacancies) / (np.asarray(years) + 1)
    return years, bench_stacks, president_parties, senate_parties, mean
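The mean returned by run_simulation is a cumulative moving average. A minimal numpy check of that formula on toy data (the vacancy counts here are made up for illustration):

```python
import numpy as np

# Hypothetical vacancy counts for four consecutive years
vacancies = np.array([2, 0, 4, 2])
years = np.arange(len(vacancies))  # 0, 1, 2, 3

# Same formula as in run_simulation: running total divided by years elapsed
cum_mean = np.cumsum(vacancies) / (years + 1.0)
print(cum_mean)  # [2. 1. 2. 2.]
```

Each entry is the average of all vacancy counts seen so far, which is why the curve settles down as more years accumulate.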
First, let's look at the result of our simulated supreme court over 200 years. Along the bottom, the parties of the Senate and President are shown. The height of each stack represents the number of seats that party holds, and the white space indicates vacancies.
sim_years = 200

years, bench_stacks, president_parties, senate_parties, _ = run_simulation(sim_years)
stacked_plot_bench_over_time_with_parties(years, bench_stacks, president_parties, senate_parties, color_trans, Party)
During the periods of alignment, the vacancies (white spaces) are filled. This just serves as visual confirmation that our simulation got that aspect correct. We can see that seats are continuously being vacated and filled, so we can't learn much from just this one plot. As an aside, it's difficult to look at this and not imagine potential storylines! In the beginning, an initially 100% Republican court slowly retires over time. In the meantime, the two parties fight for control of the Senate under continuous Democratic presidencies from years 25 to 50. But enough of that, let's get back to the numbers. Next, let's simulate over a longer time period, 1000 years, and view the cumulative average number of vacancies.
sim_years = 1000

years, bench_stacks, _, _, mean = run_simulation(sim_years)
stacked_plot_bench_over_time(years, bench_stacks, mean, color_trans, Party)
This simulation shows that we should expect a little less than 1 vacancy, about 0.7, per year. It also illustrates that as more data is added, the cumulative moving average becomes less variable. However, this is only a single simulation run, and could be an outlier. To see how likely different numbers of vacancies are, let's run a Monte Carlo experiment. In this experiment, we're going to run 1000 different simulations, each to 50000 years.
sim_years = 50000
sample_size = 1000

results = []
with Pool(processes=4) as p:
    with tqdm_notebook(total=sample_size) as pbar:
        for r in p.imap_unordered(run_simulation, itertools.repeat(sim_years, sample_size)):
            results.append(r)
            pbar.update(1)

years, _, _, _, means = zip(*results)
plot_sims(years[0], means, [0.5, 0.9])
This distribution shows that we should still expect to see ~0.71 vacancies per year over the long run, but it wouldn't be surprising to see 0.68 or 0.73 vacancies. As a thought experiment, let's see what happens if the Green Party suddenly launches into relevance and all 3 parties have an equal shot in elections. How would this impact the result?
class Party(Enum):
    D = 1
    R = 2
    G = 3

color_trans = {Party.D: 'blue', Party.R: 'red', Party.G: 'green'}

sim_years = 200
years, bench_stacks, president_parties, senate_parties, mean = run_simulation(sim_years)
stacked_plot_bench_over_time_with_parties(years, bench_stacks, president_parties, senate_parties, color_trans, Party)

sim_years = 1000
years, bench_stacks, president_parties, senate_parties, mean = run_simulation(sim_years)
stacked_plot_bench_over_time(years, bench_stacks, mean, color_trans, Party)
Unsurprisingly, adding more parties into the mix while still requiring an aligned government looks like it leads to even more vacancies. But, how do the numbers shake out?
sim_years = 50000
sample_size = 1000

results = []
with Pool(processes=4) as p:
    with tqdm_notebook(total=sample_size) as pbar:
        for r in p.imap_unordered(run_simulation, itertools.repeat(sim_years, sample_size)):
            results.append(r)
            pbar.update(1)

years, _, _, _, means = zip(*results)
plot_sims(years[0], means, [1, 1.8])
Then we can deploy the model using the gcloud CLI as before:
%%bash
# TODO 5
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=title_model_$TIMESTAMP
ENDPOINT_DISPLAYNAME=swivel_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
ARTIFACT_DIRECTORY=gs://${BUCKET}/${MODEL_DISPLAYNAME}/
echo $ARTIFACT_DIRECTORY
gsutil cp -r ${EXPORT_PATH}/* ${ARTIFACT_DIRECTORY}

# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
    --region=$REGION \
    --display-name=$MODEL_DISPLAYNAME \
    --container-image-uri=$IMAGE_URI \
    --artifact-uri=$ARTIFACT_DIRECTORY \
    --format="value(model)")
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"

# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
    --region=$REGION \
    --display-name=$ENDPOINT_DISPLAYNAME \
    --format="value(name)")
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"

# Deployment
DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
MIN_REPLICA_COUNT=1
MAX_REPLICA_COUNT=3

gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
    --region=$REGION \
    --model=$MODEL_RESOURCENAME \
    --display-name=$DEPLOYED_MODEL_DISPLAYNAME \
    --machine-type=$MACHINE_TYPE \
    --min-replica-count=$MIN_REPLICA_COUNT \
    --max-replica-count=$MAX_REPLICA_COUNT \
    --traffic-split=0=100
notebooks/text_models/solutions/reusable_embeddings_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's go ahead and hit our model:
%%writefile input.json
{
    "instances": [
        {"keras_layer_1_input": "hello"}
    ]
}
(3b) Exercise: Number of Unique Hosts How many unique hosts are there in the entire log? Think about the steps that you need to perform to count the number of different hosts in the log.
# TODO: Replace <FILL IN> with appropriate code
# HINT: Do you recall the tips from (3a)? Each of these <FILL IN> could be a transformation or action.
hosts = access_logs.map(lambda log: (log.host, 1))
uniqueHosts = hosts.reduceByKey(lambda a, b: a + b)
uniqueHostCount = uniqueHosts.count()
print 'Unique hosts: %d' % uniqueHostCount

# TEST Number of unique hosts (3b)
Test.assertEquals(uniqueHostCount, 54507, 'incorrect uniqueHostCount')
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
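The reduceByKey-then-count pattern above is one way to count distinct keys; what it computes can be sketched in plain Python over a toy list of hosts (the host names here are made up for illustration):

```python
# Hypothetical stand-in for the host field of each log record
hosts = ["a.com", "b.com", "a.com", "c.com", "b.com"]

# Pattern used above: build per-host counts, then count the keys
counts = {}
for h in hosts:
    counts[h] = counts.get(h, 0) + 1
unique_host_count = len(counts)

# A set gives the same distinct count in one step
assert unique_host_count == len(set(hosts))
print(unique_host_count)  # 3
```

In Spark the one-step version would be `access_logs.map(lambda log: log.host).distinct().count()`, which avoids building the per-host totals when only the distinct count is needed.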
(3c) Exercise: Number of Unique Daily Hosts For an advanced exercise, let's determine the number of unique hosts in the entire log on a day-by-day basis. This computation will give us counts of the number of unique daily hosts. We'd like a list sorted by increasing day of the month which includes the day of the month and the associated number of unique hosts for that day. Make sure you cache the resulting RDD dailyHosts so that we can reuse it in the next exercise. Think about the steps that you need to perform to count the number of different hosts that make requests each day. Since the log only covers a single month, you can ignore the month.
# TODO: Replace <FILL IN> with appropriate code
dayToHostPairTuple = access_logs.map(lambda log: ((log.date_time.day, log.host), 1))
dayGroupedHosts = dayToHostPairTuple.reduceByKey(lambda v1, v2: v1 + v2).map(lambda (k, v): k)
dayHostCount = dayGroupedHosts.map(lambda (k, v): (k, 1)).reduceByKey(lambda v1, v2: v1 + v2)
dailyHosts = (dayHostCount
              .sortByKey()
              .cache())
dailyHostsList = dailyHosts.take(30)
print 'Unique hosts per day: %s' % dailyHostsList

# TEST Number of unique daily hosts (3c)
Test.assertEquals(dailyHosts.count(), 21, 'incorrect dailyHosts.count()')
Test.assertEquals(dailyHostsList, [(1, 2582), (3, 3222), (4, 4190), (5, 2502), (6, 2537), (7, 4106), (8, 4406), (9, 4317), (10, 4523), (11, 4346), (12, 2864), (13, 2650), (14, 4454), (15, 4214), (16, 4340), (17, 4385), (18, 4168), (19, 2550), (20, 2560), (21, 4134), (22, 4456)], 'incorrect dailyHostsList')
Test.assertTrue(dailyHosts.is_cached, 'incorrect dailyHosts.is_cached')
(3d) Exercise: Visualizing the Number of Unique Daily Hosts Using the results from the previous exercise, use matplotlib to plot a "Line" graph of the unique hosts requests by day. daysWithHosts should be a list of days and hosts should be a list of number of unique hosts for each corresponding day. How could you convert an RDD into a list? See the collect() method.
# TODO: Replace <FILL IN> with appropriate code
daysWithHosts = dailyHosts.map(lambda (k, v): k).collect()
hosts = dailyHosts.map(lambda (k, v): v).collect()

# TEST Visualizing unique daily hosts (3d)
test_days = range(1, 23)
test_days.remove(2)
Test.assertEquals(daysWithHosts, test_days, 'incorrect days')
Test.assertEquals(hosts, [2582, 3222, 4190, 2502, 2537, 4106, 4406, 4317, 4523, 4346, 2864, 2650, 4454, 4214, 4340, 4385, 4168, 2550, 2560, 4134, 4456], 'incorrect hosts')

fig = plt.figure(figsize=(8,4.5), facecolor='white', edgecolor='white')
plt.axis([min(daysWithHosts), max(daysWithHosts), 0, max(hosts)+500])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('Hosts')
plt.plot(daysWithHosts, hosts)
pass
(3e) Exercise: Average Number of Daily Requests per Hosts Next, let's determine the average number of requests on a day-by-day basis. We'd like a list by increasing day of the month and the associated average number of requests per host for that day. Make sure you cache the resulting RDD avgDailyReqPerHost so that we can reuse it in the next exercise. To compute the average number of requests per host, get the total number of request across all hosts and divide that by the number of unique hosts. Since the log only covers a single month, you can skip checking for the month. Also to keep it simple, when calculating the approximate average use the integer value - you do not need to upcast to float
# TODO: Replace <FILL IN> with appropriate code
dayAndHostTuple = access_logs.map(lambda log: ((log.date_time.day, log.host), 1)).reduceByKey(lambda v1, v2: v1 + v2)
groupedByDay = dayAndHostTuple.map(lambda ((k, v), cnt): (k, (1, cnt))).reduceByKey(lambda v1, v2: (v1[0] + v2[0], v1[1] + v2[1]))
sortedByDay = groupedByDay.sortByKey()
avgDailyReqPerHost = (sortedByDay
                      .map(lambda (k, v): (k, v[1] / v[0]))
                      .cache())
avgDailyReqPerHostList = avgDailyReqPerHost.take(30)
print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList

# TEST Average number of daily requests per hosts (3e)
Test.assertEquals(avgDailyReqPerHostList, [(1, 13), (3, 12), (4, 14), (5, 12), (6, 12), (7, 13), (8, 13), (9, 14), (10, 13), (11, 14), (12, 13), (13, 13), (14, 13), (15, 13), (16, 13), (17, 13), (18, 13), (19, 12), (20, 12), (21, 13), (22, 12)], 'incorrect avgDailyReqPerHostList')
Test.assertTrue(avgDailyReqPerHost.is_cached, 'incorrect avgDailyReqPerHost.is_cache')
(3f) Exercise: Visualizing the Average Daily Requests per Unique Host Using the result avgDailyReqPerHost from the previous exercise, use matplotlib to plot a "Line" graph of the average daily requests per unique host by day. daysWithAvg should be a list of days and avgs should be a list of average daily requests per unique hosts for each corresponding day.
# TODO: Replace <FILL IN> with appropriate code
daysWithAvg = avgDailyReqPerHost.map(lambda (k, v): k).take(30)
avgs = avgDailyReqPerHost.map(lambda (k, v): v).take(30)

# TEST Average Daily Requests per Unique Host (3f)
Test.assertEquals(daysWithAvg, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect days')
Test.assertEquals(avgs, [13, 12, 14, 12, 12, 13, 13, 14, 13, 14, 13, 13, 13, 13, 13, 13, 13, 12, 12, 13, 12], 'incorrect avgs')

fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(daysWithAvg), 0, max(avgs)+2])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('Average')
plt.plot(daysWithAvg, avgs)
pass
(4b) Exercise: Listing 404 Response Code Records Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list up to 40 distinct endpoints that generate 404 errors - no endpoint should appear more than once in your list.
# TODO: Replace <FILL IN> with appropriate code
badEndpoints = badRecords.map(lambda log: (log.endpoint, 1))
badUniqueEndpoints = badEndpoints.reduceByKey(lambda v1, v2: 1).map(lambda (k, v): k)
badUniqueEndpointsPick40 = badUniqueEndpoints.take(40)
print '404 URLS: %s' % badUniqueEndpointsPick40

# TEST Listing 404 records (4b)
badUniqueEndpointsSet40 = set(badUniqueEndpointsPick40)
Test.assertEquals(len(badUniqueEndpointsSet40), 40, 'badUniqueEndpointsPick40 not distinct')
(4c) Exercise: Listing the Top Twenty 404 Response Code Endpoints Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty endpoints that generate the most 404 errors. Remember, top endpoints should be in sorted order
# TODO: Replace <FILL IN> with appropriate code
badEndpointsCountPairTuple = badRecords.map(lambda log: (log.endpoint, 1))
badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
badEndpointsTop20 = badEndpointsSum.takeOrdered(20, lambda (k, v): -1*v)
print 'Top Twenty 404 URLs: %s' % badEndpointsTop20

# TEST Top twenty 404 URLs (4c)
Test.assertEquals(badEndpointsTop20, [(u'/pub/winvn/readme.txt', 633), (u'/pub/winvn/release.txt', 494), (u'/shuttle/missions/STS-69/mission-STS-69.html', 431), (u'/images/nasa-logo.gif', 319), (u'/elv/DELTA/uncons.htm', 178), (u'/shuttle/missions/sts-68/ksc-upclose.gif', 156), (u'/history/apollo/sa-1/sa-1-patch-small.gif', 146), (u'/images/crawlerway-logo.gif', 120), (u'/://spacelink.msfc.nasa.gov', 117), (u'/history/apollo/pad-abort-test-1/pad-abort-test-1-patch-small.gif', 100), (u'/history/apollo/a-001/a-001-patch-small.gif', 97), (u'/images/Nasa-logo.gif', 85), (u'/shuttle/resources/orbiters/atlantis.gif', 64), (u'/history/apollo/images/little-joe.jpg', 62), (u'/images/lf-logo.gif', 59), (u'/shuttle/resources/orbiters/discovery.gif', 56), (u'/shuttle/resources/orbiters/challenger.gif', 54), (u'/robots.txt', 53), (u'/elv/new01.gif>', 43), (u'/history/apollo/pad-abort-test-2/pad-abort-test-2-patch-small.gif', 38)], 'incorrect badEndpointsTop20')
(4d) Exercise: Listing the Top Twenty-five 404 Response Code Hosts Instead of looking at the endpoints that generated 404 errors, let's look at the hosts that encountered 404 errors. Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty-five hosts that generate the most 404 errors.
# TODO: Replace <FILL IN> with appropriate code
errHostsCountPairTuple = badRecords.map(lambda log: (log.host, 1))
errHostsSum = errHostsCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
errHostsTop25 = errHostsSum.takeOrdered(25, lambda (k, v): -1*v)
print 'Top 25 hosts that generated errors: %s' % errHostsTop25

# TEST Top twenty-five 404 response code hosts (4d)
Test.assertEquals(len(errHostsTop25), 25, 'length of errHostsTop25 is not 25')
Test.assertEquals(len(set(errHostsTop25) - set([(u'maz3.maz.net', 39), (u'piweba3y.prodigy.com', 39), (u'gate.barr.com', 38), (u'm38-370-9.mit.edu', 37), (u'ts8-1.westwood.ts.ucla.edu', 37), (u'nexus.mlckew.edu.au', 37), (u'204.62.245.32', 33), (u'163.206.104.34', 27), (u'spica.sci.isas.ac.jp', 27), (u'www-d4.proxy.aol.com', 26), (u'www-c4.proxy.aol.com', 25), (u'203.13.168.24', 25), (u'203.13.168.17', 25), (u'internet-gw.watson.ibm.com', 24), (u'scooter.pa-x.dec.com', 23), (u'crl5.crl.com', 23), (u'piweba5y.prodigy.com', 23), (u'onramp2-9.onr.com', 22), (u'slip145-189.ut.nl.ibm.net', 22), (u'198.40.25.102.sap2.artic.edu', 21), (u'gn2.getnet.com', 20), (u'msp1-16.nas.mr.net', 20), (u'isou24.vilspa.esa.es', 19), (u'dial055.mbnet.mb.ca', 19), (u'tigger.nashscene.com', 19)])), 0, 'incorrect errHostsTop25')
(4e) Exercise: Listing 404 Response Codes per Day Let's explore the 404 records temporally. Break down the 404 requests by day (cache() the RDD errDateSorted) and get the daily counts sorted by day as a list. Since the log only covers a single month, you can ignore the month in your checks.
# TODO: Replace <FILL IN> with appropriate code
errDateCountPairTuple = badRecords.map(lambda log: (log.date_time.day, 1))
errDateSum = errDateCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
errDateSorted = (errDateSum
                 .sortByKey()
                 .cache())
errByDate = errDateSorted.take(30)
print '404 Errors by day: %s' % errByDate

# TEST 404 response codes per day (4e)
Test.assertEquals(errByDate, [(1, 243), (3, 303), (4, 346), (5, 234), (6, 372), (7, 532), (8, 381), (9, 279), (10, 314), (11, 263), (12, 195), (13, 216), (14, 287), (15, 326), (16, 258), (17, 269), (18, 255), (19, 207), (20, 312), (21, 305), (22, 288)], 'incorrect errByDate')
Test.assertTrue(errDateSorted.is_cached, 'incorrect errDateSorted.is_cached')
(4f) Exercise: Visualizing the 404 Response Codes by Day Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by day.
# TODO: Replace <FILL IN> with appropriate code
daysWithErrors404 = errDateSorted.map(lambda (k, v): k).take(30)
errors404ByDay = errDateSorted.map(lambda (k, v): v).take(30)

# TEST Visualizing the 404 Response Codes by Day (4f)
Test.assertEquals(daysWithErrors404, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect daysWithErrors404')
Test.assertEquals(errors404ByDay, [243, 303, 346, 234, 372, 532, 381, 279, 314, 263, 195, 216, 287, 326, 258, 269, 255, 207, 312, 305, 288], 'incorrect errors404ByDay')

fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(daysWithErrors404), 0, max(errors404ByDay)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('404 Errors')
plt.plot(daysWithErrors404, errors404ByDay)
pass
(4g) Exercise: Top Five Days for 404 Response Codes Using the RDD errDateSorted you cached in the part (4e), what are the top five days for 404 response codes and the corresponding counts of 404 response codes?
# TODO: Replace <FILL IN> with appropriate code
topErrDate = errDateSorted.takeOrdered(5, lambda (k, v): -1*v)
print 'Top Five dates for 404 requests: %s' % topErrDate

# TEST Five dates for 404 requests (4g)
Test.assertEquals(topErrDate, [(7, 532), (8, 381), (6, 372), (4, 346), (15, 326)], 'incorrect topErrDate')
(4h) Exercise: Hourly 404 Response Codes Using the RDD badRecords you cached in the part (4a) and by hour of the day and in increasing order, create an RDD containing how many requests had a 404 return code for each hour of the day (midnight starts at 0). Cache the resulting RDD hourRecordsSorted and print that as a list.
# TODO: Replace <FILL IN> with appropriate code
hourCountPairTuple = badRecords.map(lambda log: (log.date_time.hour, 1))
hourRecordsSum = hourCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
hourRecordsSorted = (hourRecordsSum
                     .sortByKey()
                     .cache())
errHourList = hourRecordsSorted.take(24)
print 'Top hours for 404 requests: %s' % errHourList

# TEST Hourly 404 response codes (4h)
Test.assertEquals(errHourList, [(0, 175), (1, 171), (2, 422), (3, 272), (4, 102), (5, 95), (6, 93), (7, 122), (8, 199), (9, 185), (10, 329), (11, 263), (12, 438), (13, 397), (14, 318), (15, 347), (16, 373), (17, 330), (18, 268), (19, 269), (20, 270), (21, 241), (22, 234), (23, 272)], 'incorrect errHourList')
Test.assertTrue(hourRecordsSorted.is_cached, 'incorrect hourRecordsSorted.is_cached')
(4i) Exercise: Visualizing the 404 Response Codes by Hour Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by hour.
# TODO: Replace <FILL IN> with appropriate code
hoursWithErrors404 = hourRecordsSorted.map(lambda (k, v): k).take(24)
errors404ByHours = hourRecordsSorted.map(lambda (k, v): v).take(24)

# TEST Visualizing the 404 Response Codes by Hour (4i)
Test.assertEquals(hoursWithErrors404, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], 'incorrect hoursWithErrors404')
Test.assertEquals(errors404ByHours, [175, 171, 422, 272, 102, 95, 93, 122, 199, 185, 329, 263, 438, 397, 318, 347, 373, 330, 268, 269, 270, 241, 234, 272], 'incorrect errors404ByHours')

fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(hoursWithErrors404), 0, max(errors404ByHours)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Hour')
plt.ylabel('404 Errors')
plt.plot(hoursWithErrors404, errors404ByHours)
pass
Create Data Frame
# Create feature matrix
X = np.array([[1, 2],
              [6, 3],
              [8, 4],
              [9, 5],
              [np.nan, 4]])
machine-learning/deleting_missing_values.ipynb
tpin3694/tpin3694.github.io
mit
Drop Missing Values Using NumPy
# Remove observations with missing values
X[~np.isnan(X).any(axis=1)]
Drop Missing Values Using pandas
# Load data as a data frame
df = pd.DataFrame(X, columns=['feature_1', 'feature_2'])

# Remove observations with missing values
df.dropna()
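By default dropna() removes any row containing at least one missing value; it also takes parameters that make the deletion less aggressive. A short sketch using the same small matrix as above:

```python
import numpy as np
import pandas as pd

# Same toy feature matrix as above; the last row has a NaN in feature_1
X = np.array([[1, 2], [6, 3], [8, 4], [9, 5], [np.nan, 4]])
df = pd.DataFrame(X, columns=['feature_1', 'feature_2'])

# Default: drop any row with at least one NaN (4 of 5 rows survive)
print(len(df.dropna()))                    # 4

# axis=1 drops *columns* containing NaN instead of rows
print(df.dropna(axis=1).columns.tolist())  # ['feature_2']

# thresh keeps rows with at least that many non-NaN values
print(len(df.dropna(thresh=1)))            # 5
```

The `subset` parameter similarly restricts the NaN check to specific columns, which is often safer than dropping on every column at once.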
Hidden cells Some cells contain code that is necessary but not interesting for the exercise at hand. These cells will typically be collapsed to let you focus on more interesting pieces of code. If you want to see their contents, double-click the cell. Whether you peek inside or not, you must run the hidden cells for the code inside to be interpreted. Try it now, the cell is marked RUN ME.
#@title "Hidden cell with boring code [RUN ME]"

def display_sinusoid():
    X = range(180)
    Y = [math.sin(x/10.0) for x in X]
    plt.plot(X, Y)

display_sinusoid()
courses/fast-and-lean-data-science/colab_intro.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Did it work? If not, run the collapsed cell marked RUN ME and try again! Accelerators Colaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators. You can choose your accelerator in Runtime > Change runtime type. The cell below is the standard boilerplate code that enables distributed training on GPUs or TPUs in Keras.
# Detect hardware
try:  # detect TPUs
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()  # TPU detection
    strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:  # detect GPUs
    strategy = tf.distribute.MirroredStrategy()  # for GPU or multi-GPU machines (works on CPU too)
    #strategy = tf.distribute.get_strategy()  # default strategy that works on CPU and single GPU
    #strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()  # for clusters of multi-GPU machines

# How many accelerators do we have ?
print("Number of accelerators: ", strategy.num_replicas_in_sync)

# To use the selected distribution strategy:
# with strategy.scope():
#     # --- define your (Keras) model here ---
#
# For distributed computing, the batch size and learning rate need to be adjusted:
# global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync  # num_replicas is 8 on a single TPU or N when running on N GPUs
# learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync
Notice how many times the print statements occurred and how the while loop kept going until the condition was no longer true, which occurred once x==10. It's important to note that once this occurred the code stopped. Let's see how we could add an else statement:
x = 0

while x < 10:
    print 'x is currently: ', x
    print ' x is still less than 10, adding 1 to x'
    x += 1
else:
    print 'All Done!'
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/While loops -checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
break, continue, pass We can use break, continue, and pass statements in our loops to add additional functionality for various cases. The three statements are defined by:

- break: Breaks out of the current closest enclosing loop.
- continue: Goes to the top of the closest enclosing loop.
- pass: Does nothing at all.

Thinking about break and continue statements, the general format of the while loop looks like this:

    while test:
        code statement
        if test:
            break
        if test:
            continue
    else:

break and continue statements can appear anywhere inside the loop's body, but we will usually put them further nested in conjunction with an if statement to perform an action based on some condition. Let's go ahead and look at some examples!
x = 0

while x < 10:
    print 'x is currently: ', x
    print ' x is still less than 10, adding 1 to x'
    x += 1
    if x == 3:
        print 'x==3'
    else:
        print 'continuing...'
        continue
Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()

# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()

# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)

# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]

print(np.where(y == 'TREA'))
print(np.where(y == 'PORNOGRAPHY/OBSCENE MAT'))

## Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare
## crimes from the data for quality issues.
#X_minus_trea = X[np.where(y != 'TREA')]
#y_minus_trea = y[np.where(y != 'TREA')]
#X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
#y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]

## Separate training, dev, and test data:
#test_data, test_labels = X_final[800000:], y_final[800000:]
#dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
#train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
#calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]

test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
#train_data, train_labels = X[100000:700000], y[100000:700000]
train_data, train_labels = X[:700000], y[:700000]
#calibrate_data, calibrate_labels = X[:100000], y[:100000]

# Create mini versions of the above sets
#mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
#mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
#mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
#mini_train_data, mini_train_labels = X[:20000], y[:20000]
mini_train_data, mini_train_labels = X[:200000], y[:200000]
#mini_calibrate_data, mini_calibrate_labels = X[19000:28000], y[19000:28000]
mini_dev_data, mini_dev_labels = X[430000:480000], y[430000:480000]

## Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
#crime_labels = list(set(y_final))
#crime_labels_mini_train = list(set(mini_train_labels))
#crime_labels_mini_dev = list(set(mini_dev_labels))
#crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
#print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev), len(crime_labels_mini_calibrate))

crime_labels = list(set(y))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
#crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
#print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev), len(crime_labels_mini_calibrate))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev))

print(len(train_data), len(train_labels))
print(len(dev_data), len(dev_labels))
print(len(mini_train_data), len(mini_train_labels))
print(len(mini_dev_data), len(mini_dev_labels))
print(len(test_data), len(test_labels))
#print(len(mini_calibrate_data), len(mini_calibrate_labels))
#print(len(calibrate_data), len(calibrate_labels))
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Logistic Regression Hyperparameter tuning: For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag') Model calibration: See above
#log_reg = LogisticRegression(penalty='l1').fit(mini_train_data, mini_train_labels)
#log_reg = LogisticRegression().fit(mini_train_data, mini_train_labels)
#eval_prediction_probabilities = log_reg.predict_proba(mini_dev_data)
#eval_predictions = log_reg.predict(mini_dev_data)
#print("Multi-class Log Loss:", log_loss(y_true = mini_dev_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev), "\n\n")

#columns = ['hour_of_day','dayofweek',\
#           'x','y','bayview','ingleside','northern',\
#           'central','mission','southern','tenderloin',\
#           'park','richmond','taraval','HOURLYDRYBULBTEMPF',\
#           'HOURLYRelativeHumidity','HOURLYWindSpeed',\
#           'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\
#           'Daylight']
##print(len(columns))

#allCoefs = pd.DataFrame(index=columns)
#for a in range(len(log_reg.coef_)):
#    #print(crime_labels_mini_dev[a])
#    #print(pd.DataFrame(log_reg.coef_[a], index=columns))
#    allCoefs[crime_labels_mini_dev[a]] = log_reg.coef_[a]
#    #print()
#allCoefs

#%matplotlib inline
#import matplotlib.pyplot as plt
#
#f = plt.figure(figsize=(15,8))
#allCoefs.plot(kind='bar', figsize=(15,8))
#plt.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
#plt.show()
LR with L1-Penalty Hyperparameter Tuning
lr_param_grid_1 = {'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0]}  # C must be strictly positive, so 0 is excluded
#lr_param_grid_1 = {'C': [0.0001, 0.01, 0.5, 5.0, 10.0]}
LR_l1 = GridSearchCV(LogisticRegression(penalty='l1'), param_grid=lr_param_grid_1, scoring='neg_log_loss')
LR_l1.fit(train_data, train_labels)
print('L1: best C value:', str(LR_l1.best_params_['C']))

LR_l1_prediction_probabilities = LR_l1.predict_proba(dev_data)
LR_l1_predictions = LR_l1.predict(dev_data)
print("L1 Multi-class Log Loss:", log_loss(y_true=dev_labels, y_pred=LR_l1_prediction_probabilities, labels=crime_labels), "\n\n")
Dataframe for Coefficients
columns = ['hour_of_day','dayofweek',
           'x','y','bayview','ingleside','northern',
           'central','mission','southern','tenderloin',
           'park','richmond','taraval','HOURLYDRYBULBTEMPF',
           'HOURLYRelativeHumidity','HOURLYWindSpeed',
           'HOURLYSeaLevelPressure','HOURLYVISIBILITY',
           'Daylight']

allCoefsL1 = pd.DataFrame(index=columns)
# GridSearchCV does not expose coef_ directly; read it from the refit best estimator
for a in range(len(LR_l1.best_estimator_.coef_)):
    allCoefsL1[crime_labels[a]] = LR_l1.best_estimator_.coef_[a]
allCoefsL1
LR with L2-Penalty Hyperparameter Tuning
lr_param_grid_2 = {'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0],  # C must be strictly positive, so 0 is excluded
                   'solver': ['liblinear', 'newton-cg', 'lbfgs', 'sag']}
LR_l2 = GridSearchCV(LogisticRegression(penalty='l2'), param_grid=lr_param_grid_2, scoring='neg_log_loss')
LR_l2.fit(train_data, train_labels)
print('L2: best C value:', str(LR_l2.best_params_['C']))
print('L2: best solver:', str(LR_l2.best_params_['solver']))

LR_l2_prediction_probabilities = LR_l2.predict_proba(dev_data)
LR_l2_predictions = LR_l2.predict(dev_data)
print("L2 Multi-class Log Loss:", log_loss(y_true=dev_labels, y_pred=LR_l2_prediction_probabilities, labels=crime_labels), "\n\n")
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Dataframe for Coefficients
columns = ['hour_of_day', 'dayofweek',
           'x', 'y', 'bayview', 'ingleside', 'northern',
           'central', 'mission', 'southern', 'tenderloin',
           'park', 'richmond', 'taraval', 'HOURLYDRYBULBTEMPF',
           'HOURLYRelativeHumidity', 'HOURLYWindSpeed',
           'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY',
           'Daylight']

allCoefsL2 = pd.DataFrame(index=columns)
# GridSearchCV does not expose coef_ directly; use the refit best estimator.
coefs_l2 = LR_l2.best_estimator_.coef_
for a in range(len(coefs_l2)):
    allCoefsL2[crime_labels[a]] = coefs_l2[a]

allCoefsL2
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Run baseline high- and low-resolution models

To illustrate the effect of parameterizations, we'll run two baseline models:

- a low-resolution model without parameterizations at nx=64 resolution (where $\Delta x$ is larger than the deformation radius $r_d$, preventing the model from fully resolving eddies), and
- a high-resolution model at nx=256 resolution (where $\Delta x$ is ~4x finer than the deformation radius, so eddies can be almost fully resolved).
%%time
year = 24*60*60*360.
base_kwargs = dict(dt=3600., tmax=5*year, tavestart=2.5*year, twrite=25000)

low_res = pyqg.QGModel(nx=64, **base_kwargs)
low_res.run()

%%time
high_res = pyqg.QGModel(nx=256, **base_kwargs)
high_res.run()
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Run Smagorinsky and backscatter parameterizations

Now we'll run two types of parameterization: one from Smagorinsky 1963, which models an effective eddy viscosity from subgrid stress, and one adapted from Jansen and Held 2014 and Jansen et al. 2015, which reinjects a fraction of the energy dissipated by Smagorinsky back into larger scales:
def run_parameterized_model(p):
    model = pyqg.QGModel(nx=64, parameterization=p, **base_kwargs)
    model.run()
    return model

%%time
smagorinsky = run_parameterized_model(
    pyqg.parameterizations.Smagorinsky(constant=0.08))

%%time
backscatter = run_parameterized_model(
    pyqg.parameterizations.BackscatterBiharmonic(smag_constant=0.08, back_constant=1.1))

def label_for(sim):
    return f"nx={sim.nx}, {sim.parameterization or 'unparameterized'}"

plt.figure(figsize=(15, 6))
plt.rcParams.update({'font.size': 12})

vlim = 2e-5
for i, sim in enumerate([high_res, low_res, smagorinsky, backscatter]):
    plt.subplot(2, 4, i+1,
                title=label_for(sim).replace(',', ",\n").replace('Biharmonic', ''))
    plt.imshow(sim.q[0], vmin=-vlim, vmax=vlim, cmap='bwr')
    plt.xticks([]); plt.yticks([])
    if i == 0:
        plt.ylabel("Upper PV\n[$s^{-1}$]", rotation=0, va='center', ha='right', fontsize=14)
    if i == 3:
        plt.colorbar()

vlim = 2e-2
for i, sim in enumerate([high_res, low_res, smagorinsky, backscatter]):
    plt.subplot(2, 4, i+5)
    plt.imshow((sim.u**2 + sim.v**2).sum(0), vmin=0, vmax=vlim, cmap='inferno')
    plt.xticks([]); plt.yticks([])
    if i == 0:
        plt.ylabel("KE density\n[$m^2 s^{-2}$]", rotation=0, va='center', ha='right', fontsize=14)
    if i == 3:
        plt.colorbar()

plt.tight_layout()
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Note how these are slightly slower than the baseline low-resolution model, but much faster than the high-resolution model. See the parameterizations API section and code for examples of how these parameterizations are defined!

Compute similarity metrics between parameterized and high-resolution simulations

To assist with evaluating the effects of parameterizations, we include helpers for computing similarity metrics between model diagnostics. Similarity metrics quantify the percentage closer a diagnostic is to high resolution than low resolution; values greater than 0 indicate improvement over low resolution (with 1 being the maximum), while values below 0 indicate worsening. We can compute these for all diagnostics for all four simulations:
sims = [high_res, backscatter, low_res, smagorinsky]

pd.DataFrame.from_dict([
    dict(Simulation=label_for(sim),
         **pyqg.diagnostic_tools.diagnostic_similarities(sim, high_res, low_res))
    for sim in sims])
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
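To make the scores above concrete, here is a minimal sketch of the idea behind such a similarity metric. This is an illustration of the concept only, not pyqg's actual implementation: the function name, the choice of norm, and the toy data are all assumptions.

```python
import numpy as np

def similarity(diag, diag_hi, diag_lo):
    """Hypothetical similarity score: how much closer a diagnostic is to the
    high-resolution reference than the low-resolution one is.
    1 = identical to high res, 0 = as far off as low res, < 0 = worse."""
    err = np.linalg.norm(np.asarray(diag) - np.asarray(diag_hi))
    baseline = np.linalg.norm(np.asarray(diag_lo) - np.asarray(diag_hi))
    return 1.0 - err / baseline

# High- and low-resolution diagnostics score 1 and 0 by construction:
hi = [1.0, 2.0, 3.0]
lo = [2.0, 4.0, 6.0]
print(similarity(hi, hi, lo))  # 1.0
print(similarity(lo, hi, lo))  # 0.0
```

A parameterized run whose diagnostic lands halfway between the two references would score 0.5 under this definition.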
Note that the high-resolution and low-resolution models themselves have similarity scores of 1 and 0 by definition. In this case, the backscatter parameterization is consistently closer to high-resolution than low-resolution, while the Smagorinsky parameterization is consistently further away. Let's plot some of the actual curves underlying these metrics to get a better sense:
def plot_kwargs_for(sim):
    kw = dict(label=label_for(sim).replace('Biharmonic', ''))
    kw['ls'] = (':' if sim.uv_parameterization else
                ('--' if sim.q_parameterization else '-'))
    kw['lw'] = (4 if sim.nx == 256 else 3)
    return kw

plt.figure(figsize=(16, 6))
plt.rcParams.update({'font.size': 16})

plt.subplot(121, title="KE spectrum")
for sim in sims:
    plt.loglog(
        *pyqg.diagnostic_tools.calc_ispec(sim, sim.get_diagnostic('KEspec').sum(0)),
        **plot_kwargs_for(sim))
plt.ylabel("[$m^2 s^{-2}$]")
plt.xlabel("[$m^{-1}$]")
plt.ylim(1e-2, 2e2)
plt.xlim(1e-5, 2e-4)
plt.legend(loc='lower left')

plt.subplot(122, title="Enstrophy spectrum")
for sim in sims:
    plt.loglog(
        *pyqg.diagnostic_tools.calc_ispec(sim, sim.get_diagnostic('Ensspec').sum(0)),
        **plot_kwargs_for(sim))
plt.ylabel("[$s^{-2}$]")
plt.xlabel("[$m^{-1}$]")
plt.ylim(1e-8, 2e-6)
plt.xlim(1e-5, 2e-4)

plt.tight_layout()
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
The backscatter model, though low-resolution, has energy and enstrophy spectra that more closely resemble those of the high-resolution model.
def plot_spectra(m):
    m_ds = m.to_dataset().isel(time=-1)

    diag_names_enstrophy = ['ENSflux', 'ENSgenspec', 'ENSfrictionspec',
                            'ENSDissspec', 'ENSparamspec']
    diag_names_energy = ['APEflux', 'APEgenspec', 'KEflux',
                         'KEfrictionspec', 'Dissspec', 'paramspec']
    bud_labels_list = [['APE gen', 'APE flux', 'KE flux', 'Bottom drag', 'Diss.', 'Param.'],
                       ['ENS gen', 'ENS flux', 'Dissipation', 'Friction', 'Param.']]
    title_list = ['Spectral Energy Transfer', 'Spectral Enstrophy Transfer']

    plt.figure(figsize=[15, 5])
    for p, diag_names in enumerate([diag_names_energy, diag_names_enstrophy]):
        bud = []
        for name in diag_names:
            kr, spec = pyqg.diagnostic_tools.calc_ispec(m, getattr(m_ds, name).data.squeeze())
            bud.append(spec.copy())

        plt.subplot(1, 2, p+1)
        [plt.semilogx(kr, term, label=label)
         for term, label in zip(bud, bud_labels_list[p])]
        plt.semilogx(kr, -np.vstack(bud).sum(axis=0), 'k--', label='Resid.')
        plt.legend(loc='best')
        plt.xlabel(r'k (m$^{-1}$)')
        plt.grid()
        plt.title(title_list[p])
    plt.tight_layout()

plot_spectra(backscatter)
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Create an array of points that represent a sine curve between 0 and 2$\pi$.
# Create the data to be plotted
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x)
training/05_LinearFits.ipynb
dedx/cpalice
mit
Plot the data over the full range as a dashed line and then overlay the section of the data that looks roughly linear, which we will try to fit with a straight line.
# Now plot it
plt.plot(x, y, 'b--')
plt.plot(x[110:180], y[110:180])  # subset of points that we will fit
plt.show()
training/05_LinearFits.ipynb
dedx/cpalice
mit
We need to define the function that we will try to fit to this data. In this example, we will use the equation for a straight line, which has two parameters, the slope $m$ and the y-intercept $b$.
# Define the fit function
def func(x, m, b):
    return (m*x + b)
training/05_LinearFits.ipynb
dedx/cpalice
mit
Before we can fit the data we need to make an initial guess at the slope and y-intercept which we can pass to the optimizer. It will start with those values and then keep trying small variations on them until it minimizes the least-squares difference between the data points we are trying to fit and points on the line described by those parameters.

Looking at the graph, the top-left of the solid blue curve will probably hit around $y$ = 2 when $x$ = 0 (the y-intercept). The slope is negative (decreasing y for increasing x) in the region we are fitting and it looks like the "rise" in $y$ (really it's a drop) over the "run" in $x$ appears to be about 1. Here's the parameter array we will pass to the optimizer. The order of the parameters has to match the order that they are called in the function we defined (func) so the slope comes first.
# Make initial guess at parameters, slope then y-intercept
p0 = [-1.0, 2.0]
training/05_LinearFits.ipynb
dedx/cpalice
mit
Now call the optimizer. It will return two arrays. The first is the set of optimized parameters and the second is a matrix that shows the covariance between the parameters. Don't worry about the details of the covariance matrix for now.
# Call the curve fitter and have it return the optimized parameters (popt)
# and covariance matrix (pcov)
popt, pcov = curve_fit(func, x[110:180], y[110:180], p0)
training/05_LinearFits.ipynb
dedx/cpalice
mit
The diagonal elements of the covariance matrix are related to the uncertainties in the optimized fit parameters --- they are the squares of the uncertainties, actually. Any off-diagonal elements that are non-zero tell you how correlated the parameters are. Values close to zero mean the parameters are totally uncorrelated to one another. Values close to one tell you that the parameters are tightly correlated, meaning that changing the value of one of them makes the value of the other one change by a lot.

In the case of a linear fit, changing the slope of the line will change where that line intersects the y-axis, so you would expect a high degree of correlation between the slope and the y-intercept. When you are trying to understand how well a theoretical model matches data and to extract parameters with some physical meaning, analyzing the covariance matrix is very important. For now, we just want the best-fit parameters and their uncertainties.
# Compute the parameter uncertainties from the covariance matrix
punc = np.zeros(len(popt))
for i in np.arange(0, len(popt)):
    punc[i] = np.sqrt(pcov[i, i])

# Print the result
print "optimal parameters: ", popt
print "uncertainties of parameters: ", punc
training/05_LinearFits.ipynb
dedx/cpalice
mit
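The correlation coefficients discussed above can be recovered by normalizing the covariance matrix by the outer product of the parameter uncertainties. A small sketch with a made-up covariance matrix (the numbers are illustrative, not taken from this fit):

```python
import numpy as np

# Made-up covariance matrix for two parameters (slope, intercept)
pcov = np.array([[4.0e-4, -1.9e-4],
                 [-1.9e-4, 1.0e-4]])

# 1-sigma uncertainties are the square roots of the diagonal elements
punc = np.sqrt(np.diag(pcov))

# Correlation matrix: ones on the diagonal, off-diagonal entries in [-1, 1]
corr = pcov / np.outer(punc, punc)
print(corr)  # off-diagonal ~ -0.95: slope and intercept strongly anti-correlated
```

An off-diagonal value this close to -1 is exactly what the text leads you to expect for a straight-line fit.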
Let's look at how the fit compares to the data by plotting them on top of one another. The fitresult array extends over the full range in x. You can see that a linear fit in the range of interest is pretty good, but it deviates quite significantly from the data (the sine curve) outside that range.
# Plot the fit result with the data
fitresult = func(x, popt[0], popt[1])
plt.plot(x, y, 'b--', label="data")
plt.plot(x, fitresult, 'g', label="fit")
plt.legend(loc="best")
plt.show()
training/05_LinearFits.ipynb
dedx/cpalice
mit
Quiz Question: How many predicted values in the test set are false positives?
false_positives = 1443
false_negatives = 1406
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Computing the cost of mistakes

Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)

Suppose you know the costs involved in each kind of mistake:

1. \$100 for each false positive.
2. \$1 for each false negative.
3. Correctly classified reviews incur no cost.

Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set?
false_positives * 100 + false_negatives * 1
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)
# Fraction of predicted positives that are false positives, i.e. 1 - precision
fpr = 1 - precision
print fpr
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?

Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?

Precision-recall tradeoff

In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve.

Varying the threshold

False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold.

Write a function called apply_threshold that accepts two things

* probabilities (an SArray of probability values)
* threshold (a float between 0 and 1).

The function should return an array, where each element is set to +1 or -1 depending on whether the corresponding probability exceeds threshold.
def apply_threshold(probabilities, threshold):
    ### YOUR CODE GOES HERE
    # +1 if >= threshold and -1 otherwise.
    predictions = probabilities >= threshold
    return predictions.apply(lambda x: 1 if x == 1 else -1)
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
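For readers without GraphLab Create, the same thresholding logic can be sketched in plain NumPy (the function name here is ours, not part of the assignment):

```python
import numpy as np

def apply_threshold_np(probabilities, threshold):
    # +1 where the probability meets the threshold, -1 otherwise
    return np.where(np.asarray(probabilities) >= threshold, 1, -1)

probs = np.array([0.2, 0.5, 0.9])
print(apply_threshold_np(probs, 0.5))   # [-1  1  1]
print(apply_threshold_np(probs, 0.95))  # [-1 -1 -1]
```

Raising the threshold flips borderline predictions to -1, which is exactly the conservative behavior the text asks for.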
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
for t, p in zip(threshold_values, precision_all):
    if p >= 0.965:
        print t, p
        break
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)
probabilities = model.predict(test_data, output_type='probability')
predictions = apply_threshold(probabilities, 0.98)
print graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions)
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.
for t, p in zip(threshold_values, precision_all):
    if p >= 0.965:
        print t, p
        break
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Load W3C prov json file, visualize it and generate the neo4j graph (relations)
#prov_doc_from_json = provio.get_provdoc('json', install_path+"/neo4j_prov/examples/wps-prov.json")
prov_doc_from_json = provio.get_provdoc('json', '/home/stephan/Repos/ENES-EUDAT/submission_forms/test/ingest_prov_1.json')
rels = provio.gen_graph_model(prov_doc_from_json)

print prov_doc_from_json.get_records()
print rels

provio.visualize_prov(prov_doc_from_json)
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Alternatively load W3C prov xml file and generate neo4j graph (relations)
prov_doc_from_xml = provio.get_provdoc('xml', install_path+"/neo4j_prov/examples/wps-prov.xml")
rels = provio.gen_graph_model(prov_doc_from_xml)

print prov_doc_from_xml.get_records()
print rels
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Connect to a neo4j graph endpoint and generate a new graph

Attention: previous graph(s) are deleted
from py2neo import Graph, Node, Relationship, authenticate

authenticate("localhost:7474", "neo4j", "prolog16")

# connect to authenticated graph database
graph = Graph("http://localhost:7474/db/data/")
graph.delete_all()

for rel in rels:
    graph.create(rel)
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Query the newly generated graph and display the result
%load_ext cypher
%matplotlib inline

results = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a)-[r]-(b) RETURN a, r, b
results.get_graph()
results.draw()
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
To help visualize large graphs, the JavaScript library vis.js from Almende B.V. is helpful (git clone git://github.com/almende/vis.git). A JavaScript visualization generator is therefore provided by the vis script (which I adapted from https://github.com/nicolewhite/neo4j-jupyter/tree/master/scripts)
from neo4j_prov.vis import draw

options = {"16": "label"}
result_iframe = draw(graph, options)
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
We import the word_tokenize function from the NLTK module. We convert the token list from each text to a set of types.
from nltk import word_tokenize

types1 = set(word_tokenize(text1))
types2 = set(word_tokenize(text2))
notebooks/Python for Text Similarities.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
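Once each text is reduced to a set of types, a simple similarity measure between them is the Jaccard coefficient: the size of the intersection divided by the size of the union. A minimal sketch with toy data (the real `types1`/`types2` above come from tokenized texts):

```python
# Jaccard similarity between two sets of types: |A & B| / |A | B|
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

toy_types1 = {"the", "cat", "sat", "down"}
toy_types2 = {"the", "dog", "sat", "down"}
print(jaccard(toy_types1, toy_types2))  # 0.6
```

The score ranges from 0 (no shared types) to 1 (identical type sets), so it can be compared directly across text pairs.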