Benchmark: Training with In-Fold Generated Mean
encoding_df = (
    train_df
    .groupby('category', as_index=False)
    .agg({'y': np.mean})
    .rename(columns={'y': 'infold_mean'})
)

def get_target_and_features(df: pd.DataFrame) -> Tuple[pd.DataFrame]:
    merged_df = df.merge(encoding_df, on='category')
    X = merged_df[['x', 'infold_mean']]
    y = merged_df...
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Clearly, overfitting is present.

Out-of-Fold Target Encoding Estimator
X_train, y_train = train_df[['x', 'category']], train_df['y']
X_test, y_test = test_df[['x', 'category']], test_df['y']

splitter = KFold(shuffle=True, random_state=361)
rgr = OutOfFoldTargetEncodingRegressor(
    LinearRegression,  # It is a type, not an instance of a class.
    dict(),  # If needed, pass constructor...
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
The snippet below shows a wrong way to measure the train score: for predictions, the regressor uses in-fold generated features, not the out-of-fold features that are used during training.
rgr.fit(X_train, y_train, source_positions=[1])
y_hat_train = rgr.predict(X_train)
r2_score(y_train, y_hat_train)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
In fact, this disparity between the training set and all other sets is the main reason why special estimators are implemented instead of making dsawl.target_encoding.TargetEncoder able to work inside sklearn.pipeline.Pipeline. If you want to use an estimator with target encoding inside a pipeline, pass pipeline instance as a...
y_hat_train = rgr.fit_predict(X_train, y_train, source_positions=[1])
r2_score(y_train, y_hat_train)

rgr.fit(X_train, y_train, source_positions=[1])
y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Thus, there is no blatant overfitting: the train set score and the test set score are close to each other (for some other random seeds the gap between them may be smaller, but even now it is not considerable). It is also worth highlighting that the test set score is significantly higher than that of the benchma...
from sklearn.model_selection import GridSearchCV

grid_params = {
    'estimator_params': [{}, {'fit_intercept': False}],
    'smoothing_strength': [0, 10],
    'min_frequency': [0, 10]
}
rgr = GridSearchCV(
    OutOfFoldTargetEncodingRegressor(),
    grid_params
)
rgr.fit(X_train, y_train, source_positions=[1])
rgr.b...
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Appendix II. Advanced Integration with Pipelines To start with, recall that the key reason for difficulties with target encoding is the disparity between the training set and the other sets for which predictions are made. The training set is split into folds, whereas the other sets are not, because their targets are not used for target ...
from dsawl.target_encoding import TargetEncoder

class PipelinedTargetEncoder(TargetEncoder):
    pass

PipelinedTargetEncoder.fit_transform = PipelinedTargetEncoder.fit_transform_out_of_fold
# Now instances of `PipelinedTargetEncoder` can be used as transformers in pipelines
# and target encoding inside these pipel...
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
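The out-of-fold scheme discussed above can also be sketched by hand, independently of dsawl. This is only a minimal illustration: the toy frame, the column name `oof_mean`, and the seed are all assumptions, not part of the library's API.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Hypothetical toy frame; column names mirror the demo ('category', 'y').
rng = np.random.RandomState(0)
df = pd.DataFrame({
    'category': rng.choice(list('abc'), size=30),
    'y': rng.normal(size=30),
})

# Out-of-fold mean encoding: each row's encoding is computed
# from the target values of the *other* folds only.
df['oof_mean'] = np.nan
splitter = KFold(n_splits=5, shuffle=True, random_state=361)
for fit_idx, enc_idx in splitter.split(df):
    fold_means = df.iloc[fit_idx].groupby('category')['y'].mean()
    df.loc[df.index[enc_idx], 'oof_mean'] = (
        df.iloc[enc_idx]['category'].map(fold_means).to_numpy()
    )

# Fall back to the global mean for categories unseen in the fit folds.
df['oof_mean'] = df['oof_mean'].fillna(df['y'].mean())
```

This way a row's own target never leaks into its encoding, which is exactly the disparity between train-time and predict-time features that the special estimators hide from the user.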
Anyway, let us look at a way that allows working with train scores in a methodologically right manner. Suppose that, for some reason, there is a need to learn a model from the dataset under consideration such that: * first of all, both features are scaled, * then the categorical feature is target-encoded, * then squares, cubes, ...
splitter = KFold(shuffle=True, random_state=361)
rgr = OutOfFoldTargetEncodingRegressor(
    Pipeline,
    {
        'steps': [
            ('poly_enrichment', PolynomialFeatures()),
            ('linear_model', LinearRegression())
        ],
        'poly_enrichment__degree': 3
    },
    splitter=splitter
)
pipeline...
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Training and Test Data Sets Each patient record is randomly assigned to a "training" data set (80%) or a "test" data set (20%). Best practice also includes a cross-validation set (60% training, 20% cross-validation, 20% test).
training_set = load_data("data/breast-cancer.train")
test_set = load_data("data/breast-cancer.test")

print("Training set has %d patients" % training_set.shape[0])
print("Test set has %d patients\n" % test_set.shape[0])
print(training_set.iloc[:, 0:6].head(3))
print()
print(training_set.iloc[:, 6:11].head(3))
train...
15 min Intro to ML in Medicine.ipynb
massie/notebooks
apache-2.0
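The 60/20/20 split described above can be produced with two calls to scikit-learn's `train_test_split`; this is a generic sketch on synthetic data, not the notebook's own `load_data` pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the patient records (names are illustrative).
X = np.arange(1000).reshape(-1, 1)
y = np.arange(1000) % 2

# First carve off the 20% test set, then split the remainder
# 75/25 to get 60% train / 20% cross-validation overall.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
X_train, X_cv, y_train, y_cv = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_cv), len(X_test))  # 600 200 200
```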
Linear Support Vector Machine Classification This image shows how support vector machine searches for a "Maximum-Margin Hyperplane" in 2-dimensional space. The breast cancer data set is 9-dimensional. Image by User:ZackWeinberg, based on PNG version by User:Cyc [<a href="http://creativecommons.org/licenses/by-sa/3.0">...
from sklearn.preprocessing import MinMaxScaler
from sklearn import svm

# (1) Scale the 'training set'
scaler = MinMaxScaler()
scaled_training_set_features = scaler.fit_transform(training_set_features)

# (2) Create the model
model = svm.LinearSVC(C=0.1)

# (3) Fit the model using the 'training set'
model.fit(scaled_trai...
15 min Intro to ML in Medicine.ipynb
massie/notebooks
apache-2.0
Evaluating performance of the model
from sklearn import metrics

accuracy = metrics.accuracy_score(test_set_malignant,
                                  test_set_malignant_predictions) * 100
((tn, fp), (fn, tp)) = metrics.confusion_matrix(test_set_malignant,
                                               test_set_malignant_predictions)
print("Accur...
15 min Intro to ML in Medicine.ipynb
massie/notebooks
apache-2.0
We'll be using tables prepared from Cufflinks GTF output from GEO entries GSM847565 and GSM847566. These represent results from control and ATF3 knockdown experiments in the K562 human cell line. You can read more about the data on GEO; this example will be more about the features of :mod:`metaseq` than the biology. Let's g...
%%bash
example_dir="metaseq-example"
mkdir -p $example_dir
(cd $example_dir \
    && wget --progress=dot:giga https://raw.githubusercontent.com/daler/metaseq-example-data/master/metaseq-example-data.tar.gz \
    && tar -xzf metaseq-example-data.tar.gz \
    && rm metaseq-example-data.tar.gz)

data_dir = 'metaseq-example/data'
co...
doc/source/example_session_2.ipynb
daler/metaseq
mit
We'd also like to add a title. But how do we access the top-most axes? Whenever the scatter method is called, the MarginalHistograms object created as a by-product of the plotting is stored in the marginal attribute. This, in turn, has a top_hists attribute, and we can grab the last one created. While we're at it, let'...
# When `d2.scatter` is called, we get a `marginal` attribute.
top_axes = d2.marginal.top_hists[-1]
top_axes.set_title('Differential expression, ATF3 knockdown');

for ax in d2.marginal.top_hists:
    ax.set_ylabel('No.\ntranscripts', rotation=0, ha='right', va='center', size=8)

for ax in d2.marginal.right_hists:
    ...
doc/source/example_session_2.ipynb
daler/metaseq
mit
Printing the most frequent bigrams
import nltk
from nltk.collocations import *

bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
finder = BigramCollocationFinder.from_words(
    nltk.corpus.genesis.words('english-web.txt'))
finder.nbest(bigram_measures.pmi, 10)
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Generating bigrams from input text
from nltk import bigrams

text = "I do not like green eggs and ham, I do not like them Sam I am!"
tokens = nltk.wordpunct_tokenize(text)
finder = BigramCollocationFinder.from_words(tokens)
scored = finder.score_ngrams(bigram_measures.raw_freq)
sorted(bigram for bigram, score in scored)
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
๋ฐ”์ด๊ทธ๋žจ์˜ ๋นˆ๋„์ˆ˜ ์„ธ๊ธฐ
sorted(finder.nbest(trigram_measures.raw_freq, 2))
sorted(finder.ngram_fd.items(), key=lambda t: (-t[1], t[0]))[:10]
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
BOW (Bag of Words): Using Scikit-learn's CountVectorizer CountVectorizer performs the following three steps: it converts each document into a list of tokens, counts how often each token appears in each document, and encodes each document as a BOW vector. Putting text into the bag of words means creating a vector with one slot per word and assigning words to slots in alphabetical order, e.g. 'and' goes into slot 0 and 'document' into slot 1.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    'This is the first document.',
    'This is the second second document.',
    'And the third one.',
    'Is this the first document?',
    'The last document?',
]
vect = CountVectorizer()
vect.fit(corpus)
vect.vocabulary_
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Print the words in the bag and count how often each word appears in each sentence
print(vect.get_feature_names())
vect.transform(corpus).toarray()
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Setting Stop Words Stop words are words that can be ignored when building the vocabulary from documents, typically English articles and conjunctions, or Korean particles. They can be controlled with the stop_words argument.
vect = CountVectorizer(stop_words=["and", "is", "the", "this"]).fit(corpus)
vect.vocabulary_
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Rebuild the vectors with the stop words excluded
vect.transform(corpus).toarray()
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
TF-IDF TF-IDF (Term Frequency - Inverse Document Frequency) encoding does not use raw word counts; instead, it shrinks the weights of words that appear in every document, on the grounds that such words discriminate poorly between documents. Concretely, for a document $d$ and a term $t$ it is computed as $$ \text{tf-idf}(d, t) = \text{tf}(d, t) \cdot \text{idf}(t) $$ where $\text{tf}(d, t)$: term frequency, the count of a particular word, and $\text{i...
from sklearn.feature_extraction.text import TfidfVectorizer

tfidv = TfidfVectorizer().fit(corpus)
tfidv.transform(corpus).toarray()
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Use Case The following code uses Scikit-Learn's text analyzers to find out how often particular words are used on a website.
from urllib.request import urlopen
import json
import string

from konlpy.utils import pprint
from konlpy.tag import Hannanum

hannanum = Hannanum()

#f = urlopen("https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/")
f = urlopen("https://github.com/ahhn/oss/raw/master/resources/Ngram_BO...
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
Here, each document consists of a single word. Therefore, when CountVectorizer processes this document collection, each document becomes a vector in which a single element is 1 and the rest are 0. Summing these vectors gives the word frequencies.
import numpy as np
import matplotlib.pyplot as plt

vect = CountVectorizer().fit(docs)
count = vect.transform(docs).toarray().sum(axis=0)
idx = np.argsort(-count)
count = count[idx]
feature_name = np.array(vect.get_feature_names())[idx]

plt.bar(range(len(count)), count)
plt.show()

pprint(list(zip(feature_name, count))[...
resources/Ngram_BOW_TF-IDF.ipynb
ahhn/oss
gpl-3.0
SciPy 2016 Scikit-learn Tutorial

Out-of-core Learning - Large Scale Text Classification for Sentiment Analysis

Scalability Issues

The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usa...
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
    "The cat sat on the mat.",
])
vectorizer.vocabulary_
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
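In contrast to the stateful vocabulary above, a HashingVectorizer needs no fitting at all; memory use is bounded by `n_features` regardless of corpus size. A minimal sketch (the sentences and `n_features` value are illustrative):

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Stateless: transform works immediately, with no fitted vocabulary_.
hv = HashingVectorizer(n_features=2**10, alternate_sign=False)
X = hv.transform([
    "The cat sat on the mat.",
    "The dog ate my homework.",
])
print(X.shape)  # (2, 1024)
```

The trade-off is that the mapping from columns back to words is lost, which the later section on limitations discusses.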
Note Since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer. The load_files function loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries:
train.keys()
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case. Finally, let us train a LogisticRegression classifier on the IMDb training subset:
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

h_pipeline = Pipeline([
    ('vec', HashingVectorizer(encoding='latin-1')),
    ('clf', LogisticRegression(random_state=1)),
])
h_pipeline.fit(docs_train, y_train)

print('Train accuracy', h_pipeline.score(docs_train, y_train))
...
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
Out-of-Core Learning

Out-of-core learning is the task of training a machine learning model on a dataset that does not fit into memory (RAM). This requires the following conditions:

* a feature extraction layer with fixed output dimensionality
* knowing the list of all classes in advance (in this case we only have positiv...
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')

fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] + \
         [os.path.join(train_neg, f) for f in os.listdir(train_neg)]
fnames[:3]
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
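The two conditions above can be combined into a minimal out-of-core loop: a stateless vectorizer plus an incremental learner with `partial_fit`. This is only a sketch with tiny in-memory "batches" and made-up sentences; with real data each batch would be read from disk.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Toy batches standing in for chunks streamed from disk.
batches = [
    (["good movie", "great film"], [1, 1]),
    (["terrible plot", "bad acting"], [0, 0]),
]

vec = HashingVectorizer(encoding='latin-1')  # stateless, fixed output dim
clf = SGDClassifier(random_state=1)

for docs, labels in batches:
    X_batch = vec.transform(docs)
    # classes must be declared up front so partial_fit can start incrementally
    clf.partial_fit(X_batch, labels, classes=[0, 1])

pred = clf.predict(vec.transform(["good movie"]))
```

This is the same structure the `batch_train` function below implements, with shuffling and many more iterations.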
Now, we implement the batch_train function as follows:
from sklearn.base import clone

def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
    vec = HashingVectorizer(encoding='latin-1')
    idx = np.arange(labels.shape[0])
    c_clf = clone(clf)
    rng = np.random.RandomState(seed=random_seed)
    for i in range(iterations):
        r...
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
Limitations of the Hashing Vectorizer Using the HashingVectorizer makes it possible to implement streaming and parallel text classification, but it can also introduce some issues: hash collisions can introduce too much noise in the data and degrade prediction quality, and the HashingVectorizer does not provide "Inverse Docume...
# %load solutions/27_B-batchtrain.py
scipy-2017-sklearn-master/notebooks/23 Out-of-core Learning Large Scale Text Classification.ipynb
RTHMaK/RPGOne
apache-2.0
Download the dataset (movie.zip) and gold standard data (topicsMovie.txt and goldMovie.txt) from the link and plug in the locations below.
base_dir = os.path.join(os.path.expanduser('~'), "workshop/nlp/data/")
data_dir = os.path.join(base_dir, 'wiki-movie-subset')
if not os.path.exists(data_dir):
    raise ValueError("SKIP: Please download the movie corpus.")

ref_dir = os.path.join(base_dir, 'reference')
topics_path = os.path.join(ref_dir, 'topicsMovie.t...
docs/notebooks/topic_coherence-movies.ipynb
macks22/gensim
lgpl-2.1
Cross-validate the numbers According to the paper, the number of documents should be 108,952 with a vocabulary of 1,625,124. The difference is due to differences in preprocessing; however, the results obtained are still very similar.
print(len(corpus))
print(dictionary)

topics = []  # list of 100 topics
with open(topics_path) as f:
    topics = [line.split() for line in f if line]
len(topics)

human_scores = []
with open(human_scores_path) as f:
    for line in f:
        human_scores.append(float(line.strip()))
len(human_scores)
docs/notebooks/topic_coherence-movies.ipynb
macks22/gensim
lgpl-2.1
We then define some parameters and set up the problem for the Bayesian inference.
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)

# Add noise
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])

# Get properties of the noise sam...
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
In this example, we will pick some deliberately difficult starting points for the MCMC chains.
# Choose starting points for 3 MCMC chains
xs = [
    [0.7, 20, 2],
    [0.005, 900, 100],
    [0.01, 100, 500],
]
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
Let's run an Adaptive Covariance MCMC without doing any parameter transformation to check its performance.
# Create an MCMC routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)

# Add stopping criterion
mcmc.set_max_iterations(4000)

# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)

# Disable logging mode
mcmc.set_log_to_screen(False)

# Run!
...
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
The MCMC samples are not ideal, because we've started the MCMC run from some difficult starting points. We can use MCMCSummary to inspect the efficiency of the MCMC run.
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=["r", "k", "sigma"]) print(results)
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
Now, we create a pints.Transformation object for log-transformation and re-run the MCMC to see if it makes any difference.
# Create parameter transformation
transformation = pints.LogTransformation(n_parameters=len(xs[0]))

# Create an MCMC routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs,
                            method=pints.HaarioBardenetACMC,
                            transformation=transformation)

# Add sto...
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
The MCMC samples using parameter transformation look very similar to those in another example notebook, where we had good starting points and no parameter transformation. This is a good sign! It suggests the transformation did not mess anything up. Now we check the efficiency again:
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=["r", "k", "sigma"]) print(results)
examples/sampling/transformed-parameters.ipynb
martinjrobins/hobo
bsd-3-clause
Confirm Device
from jaxlib import xla_extension
import jax

key = jax.random.PRNGKey(1701)
arr = jax.random.normal(key, (1000,))
device = arr.device_buffer.device()
print(f"JAX device type: {device}")
assert isinstance(device, xla_extension.GpuDevice), "unexpected JAX device type"
tests/notebooks/colab_gpu.ipynb
google/jax
apache-2.0
First we'll need a way to track which party the Senate & President belong to. For now, let's stick with the two major parties and create a Party enumeration. Enumerations group and name related constants, which makes the code easier to read.
from enum import Enum

class Party(Enum):
    D = 1
    R = 2

color_trans = {Party.D: 'blue', Party.R: 'red'}
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
We'll also make a class to represent each justice. When a new Justice is created for a party, they'll be given a randomly generated term of somewhere between 0 and 40 years.
import random

class Justice:
    def __init__(self, party):
        self.party = party
        self.term = random.randint(0, 40)

    def __str__(self):
        return self.__repr__()

    def __repr__(self):
        return "{party}-{term}".format(party=self.party.name, term=self.term)
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
Our last class is Bench. This class represents the bench that contains the Justices currently on the Supreme Court. When the Bench is first formed, it will be empty. We care about modifying the Bench in a few ways: Filling all available seats with judges of a certain party (fill_seats). Adding years to determin...
class Bench:
    SIZE = 9

    def __init__(self):
        self.seats = [None] * self.SIZE

    def fill_seats(self, party):
        # loop through all seats
        for i in range(self.SIZE):
            if self.seats[i] is None:
                # if seat is empty, add new
                # justice of th...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
Last but not least, simulate is where the magic happens. This function loops over a supplied number of years, first determining if any judges have left their position. After that, it randomly picks the winning parties for any elections that are happening. After the elections, if the government is aligned, empty seat...
def simulate(years):
    president_party = None
    senate_party = None
    bench = Bench()
    for year in range(years + 1):
        bench.add_years(1)
        if year % 2 == 0:
            senate_party = random.choice(list(Party))
        if year % 4 == 0:
            president_party = random...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
run_simulation will execute the simulation for the supplied number of years, and post-process the data to return the following information: years: an array of all the years that were simulated. bench_stacks: the stacked bar graph data for the composition of the court at each year. president_parties: an array of t...
def run_simulation(sim_years):
    years, benches, president_parties, senate_parties = zip(*list(simulate(sim_years)))
    bench_stacks = np.row_stack(zip(*benches))
    vacancies = bench_stacks[0]
    mean = np.cumsum(vacancies) / (np.asarray(years) + 1)
    return years, bench_stacks, president_parties, senate_partie...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
First, let's look at the result of our simulated supreme court over 200 years. Along the bottom, the parties of the Senate and President are shown. The height of each stack represents the number of seats that party holds, and the white space indicates vacancies.
sim_years = 200
years, bench_stacks, president_parties, senate_parties, _ = run_simulation(sim_years)
stacked_plot_bench_over_time_with_parties(years, bench_stacks, president_parties, senate_p...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
During the periods of alignment, the vacancies (white spaces) are filled. This serves as visual confirmation that our simulation got that aspect right. We can see that seats are continuously being vacated and filled, so we can't learn much from just this one plot. As an aside, it's difficult to look at this and...
sim_years = 1000
years, bench_stacks, _, _, mean = run_simulation(sim_years)
stacked_plot_bench_over_time(years, bench_stacks, mean, color_trans, Party)
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
This simulation shows that we should expect a little less than 1 vacancy, about 0.7, per year. It also illustrates that as more data is added, the cumulative moving average becomes less variable. However, this is only a single simulation run, and could be an outlier. To see how likely different numbers of vacancies a...
sim_years = 50000
sample_size = 1000
results = []
with Pool(processes=4) as p:
    with tqdm_notebook(total=sample_size) as pbar:
        for r in p.imap_unordered(run_simulation, itertools.repeat(sim_years, sample_size)):
            results.append(r)
            pbar.update(1)
years, _, _, _, means = zip(*results...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
This distribution shows that we should still expect to see ~0.71 vacancies per year over the long run, but it wouldn't be surprising to see 0.68 or 0.73 vacancies. As a thought experiment, let's see what happens if the Green Party suddenly launches into relevance and all 3 parties have an equal shot in elections. How w...
class Party(Enum):
    D = 1
    R = 2
    G = 3

color_trans = {Party.D: 'blue', Party.R: 'red', Party.G: 'green'}

sim_years = 200
years, bench_stacks, president_parties, senate_parties, mean = run_simulation(sim_years)
stacked_plot_bench_over_time_with_parties(years, benc...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
Unsurprisingly, adding more parties into the mix while still requiring an aligned government looks like it leads to even more vacancies. But, how do the numbers shake out?
sim_years = 50000
sample_size = 1000
results = []
with Pool(processes=4) as p:
    with tqdm_notebook(total=sample_size) as pbar:
        for r in p.imap_unordered(run_simulation, itertools.repeat(sim_years, sample_size)):
            results.append(r)
            pbar.update(1)
years, _, _, _, means = zip(*results...
FiveThirtyEightRiddler/2017-04-14/2017-04-26-empty_court_seats.ipynb
andrewzwicky/puzzles
mit
Then we can deploy the model using the gcloud CLI as before:
%%bash
# TODO 5
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=title_model_$TIMESTAMP
ENDPOINT_DISPLAYNAME=swivel_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
ARTIFACT_DIRECTORY=gs://${BUCKET}/${MODEL_DISPLAYNAME}/
echo $ARTIFACT_DIRECTORY
gsutil cp -r ${EXPORT_PATH}/* ${AR...
notebooks/text_models/solutions/reusable_embeddings_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's go ahead and hit our model:
%%writefile input.json
{
    "instances": [
        {"keras_layer_1_input": "hello"}
    ]
}
notebooks/text_models/solutions/reusable_embeddings_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
(3b) Exercise: Number of Unique Hosts How many unique hosts are there in the entire log? Think about the steps that you need to perform to count the number of different hosts in the log.
# TODO: Replace <FILL IN> with appropriate code
# HINT: Do you recall the tips from (3a)? Each of these <FILL IN> could be a transformation or action.
hosts = access_logs.map(lambda log: (log.host, 1))
uniqueHosts = hosts.reduceByKey(lambda a, b: a + b)
uniqueHostCount = uniqueHosts.count()
print 'Unique hosts: %d...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
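The RDD chain above mirrors this plain-Python counting logic (the toy log records and field names are illustrative stand-ins for the parsed Apache log objects):

```python
from collections import Counter

# Toy stand-ins for parsed log records; only the host field matters here.
logs = [{'host': 'a'}, {'host': 'b'}, {'host': 'a'}, {'host': 'c'}]

# map -> (host, 1); reduceByKey -> per-host totals; count -> number of keys.
per_host = Counter(log['host'] for log in logs)
unique_host_count = len(per_host)
print(unique_host_count)  # 3
```

In Spark, `access_logs.map(lambda log: log.host).distinct().count()` achieves the same result more directly.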
(3c) Exercise: Number of Unique Daily Hosts For an advanced exercise, let's determine the number of unique hosts in the entire log on a day-by-day basis. This computation will give us counts of the number of unique daily hosts. We'd like a list sorted by increasing day of the month which includes the day of the month a...
# TODO: Replace <FILL IN> with appropriate code
dayToHostPairTuple = access_logs.map(lambda log: ((log.date_time.day, log.host), 1))
dayGroupedHosts = dayToHostPairTuple.reduceByKey(lambda v1, v2: v1 + v2).map(lambda (k, v): k)
dayHostCount = dayGroupedHosts.map(lambda (k, v): (k, 1)).reduceByKey(lambda v1, v...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(3d) Exercise: Visualizing the Number of Unique Daily Hosts Using the results from the previous exercise, use matplotlib to plot a "Line" graph of the unique hosts requests by day. daysWithHosts should be a list of days and hosts should be a list of number of unique hosts for each corresponding day. * How could you con...
# TODO: Replace <FILL IN> with appropriate code
daysWithHosts = dailyHosts.map(lambda (k, v): k).collect()
hosts = dailyHosts.map(lambda (k, v): v).collect()

# TEST Visualizing unique daily hosts (3d)
test_days = range(1, 23)
test_days.remove(2)
Test.assertEquals(daysWithHosts, test_days, 'incorrect days')
Test.assertE...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(3e) Exercise: Average Number of Daily Requests per Hosts Next, let's determine the average number of requests on a day-by-day basis. We'd like a list by increasing day of the month and the associated average number of requests per host for that day. Make sure you cache the resulting RDD avgDailyReqPerHost so that we c...
# TODO: Replace <FILL IN> with appropriate code
dayAndHostTuple = access_logs.map(lambda log: ((log.date_time.day, log.host), 1)).reduceByKey(lambda v1, v2: v1 + v2)
groupedByDay = dayAndHostTuple.map(lambda ((k, v), cnt): (k, (1, cnt))).reduceByKey(lambda v1, v2: (v1[0] + v2[0], v1[1] + v2[1]))
sortedByDay = groupedByDay.s...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(3f) Exercise: Visualizing the Average Daily Requests per Unique Host Using the result avgDailyReqPerHost from the previous exercise, use matplotlib to plot a "Line" graph of the average daily requests per unique host by day. daysWithAvg should be a list of days and avgs should be a list of average daily requests per u...
# TODO: Replace <FILL IN> with appropriate code
daysWithAvg = avgDailyReqPerHost.map(lambda (k, v): k).take(30)
avgs = avgDailyReqPerHost.map(lambda (k, v): v).take(30)

# TEST Average Daily Requests per Unique Host (3f)
Test.assertEquals(daysWithAvg, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 2...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4b) Exercise: Listing 404 Response Code Records Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list up to 40 distinct endpoints that generate 404 errors - no endpoint should appear more than once in your list.
# TODO: Replace <FILL IN> with appropriate code
badEndpoints = badRecords.map(lambda log: (log.endpoint, 1))
badUniqueEndpoints = badEndpoints.reduceByKey(lambda v1, v2: 1).map(lambda (k, v): k)
badUniqueEndpointsPick40 = badUniqueEndpoints.take(40)
print '404 URLS: %s' % badUniqueEndpointsPick40

# TEST Listing 404 r...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4c) Exercise: Listing the Top Twenty 404 Response Code Endpoints Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty endpoints that generate the most 404 errors. Remember, top endpoints should be in sorted order.
# TODO: Replace <FILL IN> with appropriate code
badEndpointsCountPairTuple = badRecords.map(lambda log: (log.endpoint, 1))
badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
badEndpointsTop20 = badEndpointsSum.takeOrdered(20, lambda (k, v): -1 * v)
print 'Top Twenty 404 URLs: %s' % badEndpoi...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4d) Exercise: Listing the Top Twenty-five 404 Response Code Hosts Instead of looking at the endpoints that generated 404 errors, let's look at the hosts that encountered 404 errors. Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty-five ...
# TODO: Replace <FILL IN> with appropriate code
errHostsCountPairTuple = badRecords.map(lambda log: (log.host, 1))
errHostsSum = errHostsCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
errHostsTop25 = errHostsSum.takeOrdered(25, lambda (k, v): -1 * v)
print 'Top 25 hosts that generated errors: %s' % errHostsTop25

# ...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4e) Exercise: Listing 404 Response Codes per Day Let's explore the 404 records temporally. Break down the 404 requests by day (cache() the RDD errDateSorted) and get the daily counts sorted by day as a list. Since the log only covers a single month, you can ignore the month in your checks.
# TODO: Replace <FILL IN> with appropriate code
errDateCountPairTuple = badRecords.map(lambda log: (log.date_time.day, 1))
errDateSum = errDateCountPairTuple.reduceByKey(lambda v1, v2: v1 + v2)
errDateSorted = (errDateSum
                 .sortByKey()
                 .cache())
errByDate = errDateSorted.take(30)
prin...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4f) Exercise: Visualizing the 404 Response Codes by Day Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by day.
# TODO: Replace <FILL IN> with appropriate code
daysWithErrors404 = errDateSorted.map(lambda (k, v): k).take(30)
errors404ByDay = errDateSorted.map(lambda (k, v): v).take(30)

# TEST Visualizing the 404 Response Codes by Day (4f)
Test.assertEquals(daysWithErrors404, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4g) Exercise: Top Five Days for 404 Response Codes Using the RDD errDateSorted you cached in part (4e), what are the top five days for 404 response codes and the corresponding counts of 404 response codes?
# TODO: Replace <FILL IN> with appropriate code topErrDate = errDateSorted.takeOrdered(5, lambda (k,v):-1*v) print 'Top Five dates for 404 requests: %s' % topErrDate # TEST Five dates for 404 requests (4g) Test.assertEquals(topErrDate, [(7, 532), (8, 381), (6, 372), (4, 346), (15, 326)], 'incorrect topErrDate')
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4h) Exercise: Hourly 404 Response Codes Using the RDD badRecords you cached in part (4a), create an RDD containing how many requests had a 404 return code for each hour of the day (midnight starts at 0), in increasing order of hour. Cache the resulting RDD hourRecordsSorted and print that as a li...
# TODO: Replace <FILL IN> with appropriate code hourCountPairTuple = badRecords.map(lambda log: (log.date_time.hour, 1)) hourRecordsSum = hourCountPairTuple.reduceByKey(lambda v1, v2: v1+v2) hourRecordsSorted = (hourRecordsSum .sortByKey() .cache()) errHourList = hourRecord...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
(4i) Exercise: Visualizing the 404 Response Codes by Hour Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by hour.
# TODO: Replace <FILL IN> with appropriate code hoursWithErrors404 = hourRecordsSorted.map(lambda (k,v): k).take(24) errors404ByHours = hourRecordsSorted.map(lambda (k,v): v).take(24) # TEST Visualizing the 404 Response Codes by Hour (4i) Test.assertEquals(hoursWithErrors404, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12...
cs1001x_lab2.ipynb
StephenHarrington/spark
mit
Create Data Frame
# Create feature matrix X = np.array([[1, 2], [6, 3], [8, 4], [9, 5], [np.nan, 4]])
machine-learning/deleting_missing_values.ipynb
tpin3694/tpin3694.github.io
mit
Drop Missing Values Using NumPy
# Remove observations with missing values X[~np.isnan(X).any(axis=1)]
machine-learning/deleting_missing_values.ipynb
tpin3694/tpin3694.github.io
mit
Drop Missing Values Using pandas
# Load data as a data frame df = pd.DataFrame(X, columns=['feature_1', 'feature_2']) # Remove observations with missing values df.dropna()
machine-learning/deleting_missing_values.ipynb
tpin3694/tpin3694.github.io
mit
Hidden cells Some cells contain code that is necessary but not interesting for the exercise at hand. These cells will typically be collapsed to let you focus on more interesting pieces of code. If you want to see their contents, double-click the cell. Whether you peek inside or not, you must run the hidden cells for the...
#@title "Hidden cell with boring code [RUN ME]" import math import matplotlib.pyplot as plt def display_sinusoid(): X = range(180) Y = [math.sin(x/10.0) for x in X] plt.plot(X, Y) display_sinusoid()
courses/fast-and-lean-data-science/colab_intro.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Did it work? If not, run the collapsed cell marked RUN ME and try again! Accelerators Colaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators. You can choose your accelerator in Runtime > Change runtime type. The cell below is the standard boilerplate code that enables distributed training on GPUs or...
# Detect hardware try: # detect TPUs tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection strategy = tf.distribute.TPUStrategy(tpu) except ValueError: # detect GPUs strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines (works on CPU too) #strategy ...
courses/fast-and-lean-data-science/colab_intro.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Notice how many times the print statements occurred and how the while loop kept going until the True condition was met, which occurred once x==10. It's important to note that once this occurred the code stopped. Let's see how we could add an else statement:
x = 0 while x < 10: print 'x is currently: ',x print ' x is still less than 10, adding 1 to x' x+=1 else: print 'All Done!'
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/While loops -checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
break, continue, pass We can use break, continue, and pass statements in our loops to add additional functionality for various cases. The three statements are defined by: break: Breaks out of the current closest enclosing loop. continue: Goes to the top of the closest enclosing loop. pass: Does nothing at all. Thinkin...
x = 0 while x < 10: print 'x is currently: ',x print ' x is still less than 10, adding 1 to x' x+=1 if x ==3: print 'x==3' else: print 'continuing...' continue
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/While loops -checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
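The cell above exercises continue; as a companion sketch (not part of the original notebook, and using Python 3 print syntax), here is break cutting a loop short, with pass as a no-op placeholder:

```python
x = 0
while x < 10:
    x += 1
    if x == 3:
        print('breaking at x==3')
        break   # leave the loop immediately
    else:
        pass    # placeholder: do nothing and keep looping
print('loop ended with x =', x)  # loop ended with x = 3
```

Note that break skips any else clause attached to the while loop, which is another way to tell a "ran to completion" exit from an early one.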
Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above data_path = "./data/x_data_3.csv" df = pd.read_csv(data_path, header=0) x_data = df.drop('category', 1) y = df.category.as_matrix() # Impute missing values with mean values: #x_complete = df.fillna(df.mean()) x_comple...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Logistic Regression Hyperparameter tuning: For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag') Model calibration: See above
#log_reg = LogisticRegression(penalty='l1').fit(mini_train_data, mini_train_labels) #log_reg = LogisticRegression().fit(mini_train_data, mini_train_labels) #eval_prediction_probabilities = log_reg.predict_proba(mini_dev_data) #eval_predictions = log_reg.predict(mini_dev_data) #print("Multi-class Log Loss:", log_loss(y...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
LR with L1-Penalty Hyperparameter Tuning
lr_param_grid_1 = {'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0]} # C must be strictly positive for LogisticRegression #lr_param_grid_1 = {'C': [0.0001, 0.01, 0.5, 5.0, 10.0]} LR_l1 = GridSearchCV(LogisticRegression(penalty='l1'), param_grid=lr_param_grid_1, scoring='neg_log_loss') LR_l1.fit(train_data, train_labels) print('L1: best C value:', str(LR_l1.best_p...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Dataframe for Coefficients
columns = ['hour_of_day','dayofweek',\ 'x','y','bayview','ingleside','northern',\ 'central','mission','southern','tenderloin',\ 'park','richmond','taraval','HOURLYDRYBULBTEMPF',\ 'HOURLYRelativeHumidity','HOURLYWindSpeed',\ 'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\ ...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
LR with L2-Penalty Hyperparameter Tuning
lr_param_grid_2 = {'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0], \ 'solver':['liblinear','newton-cg','lbfgs', 'sag']} # C must be strictly positive for LogisticRegression LR_l2 = GridSearchCV(LogisticRegression(penalty='l2'), param_grid=lr_param_grid_2, scoring='neg_log_loss') LR_l2.fit(train_data, train_labels) print('L2: best C value:', str...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Dataframe for Coefficients
columns = ['hour_of_day','dayofweek',\ 'x','y','bayview','ingleside','northern',\ 'central','mission','southern','tenderloin',\ 'park','richmond','taraval','HOURLYDRYBULBTEMPF',\ 'HOURLYRelativeHumidity','HOURLYWindSpeed',\ 'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\ ...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_19_1530.ipynb
samgoodgame/sf_crime
mit
Run baseline high- and low-resolution models To illustrate the effect of parameterizations, we'll run two baseline models: a low-resolution model without parameterizations at nx=64 resolution (where $\Delta x$ is larger than the deformation radius $r_d$, preventing the model from fully resolving eddies), a high-res...
%%time year = 24*60*60*360. base_kwargs = dict(dt=3600., tmax=5*year, tavestart=2.5*year, twrite=25000) low_res = pyqg.QGModel(nx=64, **base_kwargs) low_res.run() %%time high_res = pyqg.QGModel(nx=256, **base_kwargs) high_res.run()
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Run Smagorinsky and backscatter parameterizations Now we'll run two types of parameterization: one from Smagorinsky 1963 which models an effective eddy viscosity from subgrid stress, and one adapted from Jansen and Held 2014 and Jansen et al. 2015, which reinjects a fraction of the energy dissipated by Smagorinsky back...
def run_parameterized_model(p): model = pyqg.QGModel(nx=64, parameterization=p, **base_kwargs) model.run() return model %%time smagorinsky = run_parameterized_model( pyqg.parameterizations.Smagorinsky(constant=0.08)) %%time backscatter = run_parameterized_model( pyqg.parameterizations.BackscatterB...
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Note how these are slightly slower than the baseline low-resolution model, but much faster than the high-resolution model. See the parameterizations API section and code for examples of how these parameterizations are defined! Compute similarity metrics between parameterized and high-resolution simulations To assist wi...
sims = [high_res, backscatter, low_res, smagorinsky] pd.DataFrame.from_dict([ dict(Simulation=label_for(sim), **pyqg.diagnostic_tools.diagnostic_similarities(sim, high_res, low_res)) for sim in sims])
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Note that the high-resolution and low-resolution models themselves have similarity scores of 1 and 0 by definition. In this case, the backscatter parameterization is consistently closer to high-resolution than low-resolution, while the Smagorinsky is consistently further. Let's plot some of the actual curves underlying...
def plot_kwargs_for(sim): kw = dict(label=label_for(sim).replace('Biharmonic','')) kw['ls'] = (':' if sim.uv_parameterization else ('--' if sim.q_parameterization else '-')) kw['lw'] = (4 if sim.nx==256 else 3) return kw plt.figure(figsize=(16,6)) plt.rcParams.update({'font.size': 16}) plt.subplot(121...
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
The backscatter model, though low-resolution, has energy and enstrophy spectra that more closely resemble those of the high-resolution model.
def plot_spectra(m): m_ds = m.to_dataset().isel(time=-1) diag_names_enstrophy = ['ENSflux', 'ENSgenspec', 'ENSfrictionspec', 'ENSDissspec', 'ENSparamspec'] diag_names_energy = ['APEflux', 'APEgenspec', 'KEflux', 'KEfrictionspec', 'Dissspec', 'paramspec'] bud_labels_list = [['APE gen','APE flux','KE flu...
docs/examples/parameterizations.ipynb
pyqg/pyqg
mit
Create an array of points that represent a sine curve between 0 and 2$\pi$.
#create the data to be plotted x = np.linspace(0, 2*np.pi, 300) y = np.sin(x)
training/05_LinearFits.ipynb
dedx/cpalice
mit
Plot the data over the full range as a dashed line and then overlay the section of the data that looks roughly linear, which we will try to fit with a straight line.
#Now plot it plt.plot(x,y,'b--') plt.plot(x[110:180], y[110:180]) #subset of points that we will fit plt.show()
training/05_LinearFits.ipynb
dedx/cpalice
mit
We need to define the function that we will try to fit to this data. In this example, we will use the equation for a straight line, which has two parameters, the slope $m$ and the y-intercept $b$.
#Define the fit function def func(x, m, b): return (m*x + b)
training/05_LinearFits.ipynb
dedx/cpalice
mit
Before we can fit the data we need to make an initial guess at the slope and y-intercept which we can pass to the optimizer. It will start with those values and then keep trying small variations on them until it minimizes the least-squares difference between the data points we are trying to fit and poin...
# Make initial guess at parameters, slope then y-intercept p0 = [-1.0, 2.0]
training/05_LinearFits.ipynb
dedx/cpalice
mit
Now call the optimizer. It will return two arrays. The first is the set of optimized parameters and the second is a matrix that shows the covariance between the parameters. Don't worry about the details of the covariance matrix for now.
#Call the curve fitter and have it return the optimized parameters (popt) and covariance matrix (pcov) popt, pcov = curve_fit(func, x[110:180], y[110:180], p0)
training/05_LinearFits.ipynb
dedx/cpalice
mit
The diagonal elements of the covariance matrix are the variances of the optimized fit parameters, so the parameter uncertainties are their square roots. Any off-diagonal elements that are non-zero tell you how correlated the parameters are. Values close to zero mean the parameters are totally uncorrelate...
#Compute the parameter uncertainties from the covariance matrix punc = np.zeros(len(popt)) for i in np.arange(0,len(popt)): punc[i] = np.sqrt(pcov[i,i]) #Print the result print "optimal parameters: ", popt print "uncertainties of parameters: ", punc
training/05_LinearFits.ipynb
dedx/cpalice
mit
Let's look at how the fit compares to the data by plotting them on top of one another. The fitresult array extends over the full range in x. You can see that a linear fit in the range of interest is pretty good, but it deviates quite significantly from the data (the sine curve) outside that range.
#plot the fit result with the data fitresult = func(x,popt[0],popt[1]) plt.plot(x,y,'b--',label="data") plt.plot(x,fitresult,'g',label="fit") plt.legend(loc="best") plt.show()
training/05_LinearFits.ipynb
dedx/cpalice
mit
Quiz Question: How many predicted values in the test set are false positives?
false_positives = 1443 false_negatives = 1406
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Computing the cost of mistakes Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews w...
false_positives * 100 + false_negatives * 1
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)
false_discovery_rate = 1 - precision # fraction of predicted positives that are false positives (not the false positive rate) print false_discovery_rate
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier? Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data? Precision-recall tradeoff In this part, we will explore the trade-off between preci...
def apply_threshold(probabilities, threshold): ### YOUR CODE GOES HERE # +1 if probability >= threshold and -1 otherwise. return probabilities.apply(lambda p: 1 if p >= threshold else -1)
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
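The same thresholding rule can be expressed library-agnostically with numpy instead of a GraphLab SArray; a sketch with made-up probabilities:

```python
import numpy as np

def apply_threshold(probabilities, threshold):
    # +1 where probability >= threshold, -1 otherwise
    return np.where(np.asarray(probabilities) >= threshold, 1, -1)

probs = [0.2, 0.5, 0.51, 0.98]
print(apply_threshold(probs, 0.5))   # [-1  1  1  1]
print(apply_threshold(probs, 0.9))   # [-1 -1 -1  1]
```

Raising the threshold flips borderline predictions to -1, which is exactly the lever the precision-recall tradeoff turns on.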
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
for t, p in zip(threshold_values, precision_all): if p >= 0.965: print t,p break
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)
probabilities = model.predict(test_data, output_type='probability') predictions = apply_threshold(probabilities, 0.98) print graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions)
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.
for t, p in zip(threshold_values, precision_all): if p >= 0.965: print t,p break
ml-classification/module-9-precision-recall-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Load W3C prov json file, visualize it and generate the neo4j graph (relations)
#prov_doc_from_json = provio.get_provdoc('json',install_path+"/neo4j_prov/examples/wps-prov.json") prov_doc_from_json = provio.get_provdoc('json','/home/stephan/Repos/ENES-EUDAT/submission_forms/test/ingest_prov_1.json') rels = provio.gen_graph_model(prov_doc_from_json) print prov_doc_from_json.get_records() print rel...
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Alternatively load W3C prov xml file and generate neo4j graph (relations)
prov_doc_from_xml = provio.get_provdoc('xml',install_path+"/neo4j_prov/examples/wps-prov.xml") rels = provio.gen_graph_model(prov_doc_from_xml) print prov_doc_from_xml.get_records() print rels
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Connect to a neo4j graph endpoint and generate new graph Attention: previous graph(s) are deleted
from py2neo import Graph, Node, Relationship, authenticate authenticate("localhost:7474", "neo4j", "prolog16") # connect to authenticated graph database graph = Graph("http://localhost:7474/db/data/") graph.delete_all() for rel in rels: graph.create(rel)
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
query the newly generated graph and display result
%load_ext cypher %matplotlib inline results = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a)-[r]-(b) RETURN a,r, b results.get_graph() results.draw()
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
To help with the visualization of large graphs, the JavaScript library vis.js from Almende B.V. is useful (git clone git://github.com/almende/vis.git). A JavaScript visualization generator is therefore provided by the vis script (which I adapted from https://github.com/nicolewhite/neo4j-jupyter/tree/master/scripts)
from neo4j_prov.vis import draw options = {"16":"label"} result_iframe = draw(graph,options)
neo4j_prov/notebooks/provio_intro.ipynb
stephank16/enes_graph_use_case
gpl-3.0
We import the word_tokenize function from the NLTK module. We convert the token list from each text into a set of types.
from nltk import word_tokenize types1 = set(word_tokenize(text1)) types2 = set(word_tokenize(text2))
notebooks/Python for Text Similarities.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
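With the two type sets in hand, a common similarity measure is the Jaccard coefficient: the size of the intersection over the size of the union. A minimal sketch on toy token sets (the sentences below are invented, not text1/text2 from this notebook, and simple whitespace splitting stands in for NLTK tokenization):

```python
types1 = set("the cat sat on the mat".split())
types2 = set("the cat lay on the rug".split())

# Jaccard similarity: |intersection| / |union|
jaccard = len(types1 & types2) / len(types1 | types2)
print(jaccard)  # 3 shared types out of 7 total, i.e. 3/7
```

Because it works on sets of types, Jaccard ignores token frequency; measures such as cosine similarity over count vectors would take frequency into account.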