Load the movie ids and titles for querying embeddings
!gsutil cp gs://cloud-samples-data/vertex-ai/matching-engine/swivel/movielens_25m/movies.csv ./movies.csv

movies = pd.read_csv("movies.csv")
print(f"Movie count: {len(movies.index)}")
movies.head()

# Change to your favourite movies.
query_movies = [
    "Lion King, The (1994)",
    "Aladdin (1992)",
    "Star Wars: E...
notebooks/official/matching_engine/intro-swivel.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Look up embedding by making an online prediction request
predictions = endpoint.predict(instances=input_items)
embeddings = predictions.predictions
print(len(embeddings))
Explore movie embedding similarities:
for idx1 in range(0, len(input_items) - 1, 2):
    item1 = input_items[idx1]
    title1 = query_movies[idx1]
    print(title1)
    print("==================")
    embedding1 = embeddings[idx1]
    for idx2 in range(0, len(input_items)):
        item2 = input_items[idx2]
        embedding2 = embeddings[idx2]
        sim...
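The similarity measure computed in the truncated loop above is typically cosine similarity between embedding vectors. A minimal, dependency-free sketch of that measure (toy vectors, not real Swivel embeddings):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical directions score 1.0, orthogonal directions 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # 0.0
```

Higher values mean the two movies sit closer together in the embedding space.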
Create the Swivel job for Wikipedia text embedding (Optional) This section shows you how to create embeddings for the movies in the Wikipedia dataset using Swivel. You need to complete the following steps: 1. Configure the Swivel template (using the text input_type) and create a pipeline job. 2. Run the following item embedd...
# Copy the wikipedia sample dataset
! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/wikipedia/* {SOURCE_DATA_PATH}/wikipedia

YOUR_PIPELINE_SUFFIX = "my-first-pipeline-wiki"  # @param {type:"string"}

!./swivel_template_configuration.sh -pipeline_suffix {YOUR_PIPELINE_SUFFIX} -project_id {PRO...
Submit the pipeline job through the aiplatform.PipelineJob object. After the job finishes successfully (~a few hours), you can view the trained model in your Cloud Storage browser. It will have the following format: {BUCKET_NAME}/{PROJECT_NUMBER}/swivel-{TIMESTAMP}/EmbTrainerComponent_-{SOME_NUMBER}/model/ You may...
! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/models/wikipedia/model {SOURCE_DATA_PATH}/wikipedia_model

SAVEDMODEL_DIR = os.path.join(SOURCE_DATA_PATH, "wikipedia_model/model")
embedding_model = tf.saved_model.load(SAVEDMODEL_DIR)
Explore the trained text embeddings Load the SavedModel to look up embeddings for items. Note the following: * The SavedModel expects a list of string inputs. * Each string input is treated as a list of space-separated tokens. * If the input is text, the string input is lowercased with punctuation removed. * An embeddin...
input_items = ["horror", "film", '"HORROR! Film"', "horror-film"]
output_embeddings = embedding_model(input_items)

horror_film_embedding = tf.math.reduce_mean(output_embeddings[:2], axis=0)
# Average of embeddings for 'horror' and 'film' equals that for '"HORROR! Film"'
# since preprocessing cleans punctuation and low...
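The averaging step above is just an element-wise mean over the token vectors, which can be checked independently of the SavedModel (toy vectors below stand in for the real embeddings):

```python
def mean_embedding(vectors):
    """Element-wise average of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

horror = [1.0, 2.0, 3.0]  # hypothetical embedding for "horror"
film = [3.0, 4.0, 5.0]    # hypothetical embedding for "film"
print(mean_embedding([horror, film]))  # [2.0, 3.0, 4.0]
```

This mirrors what `tf.math.reduce_mean(..., axis=0)` computes over the first two output rows.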
You can use the TensorBoard Embedding Projector to graphically represent high dimensional embeddings, which can be helpful in examining and understanding your embeddings. Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise...
# Delete endpoint resource
# If force is set to True, all deployed models on this Endpoint will be undeployed first.
endpoint.delete(force=True)

# Delete model resource
MODEL_RESOURCE_NAME = model.resource_name
! gcloud ai models delete $MODEL_RESOURCE_NAME --region $REGION --quiet

# Delete Cloud Storage objects that...
1.9 A brief introduction to interferometry and its history 1.9.1 The double-slit experiment The basics of interferometry date back to Thomas Young's double-slit experiment of 1801. In this experiment, a plate pierced by two parallel slits is illuminated by a monochromatic source of light. Due to the wave-like ...
def double_slit(p0=[0], a0=[1], baseline=1, d1=5, d2=5, wavelength=.1, maxint=None):
    """Renders a toy dual-slit experiment.
    'p0' is a list or array of source positions (drawn along the vertical axis)
    'a0' is an array of source intensities
    'baseline' is the distance between the slits
    'd1' and 'd2' are d...
1_Radio_Science/01_09_a_brief_introduction_to_interferometry.ipynb
landmanbester/fundamentals_of_interferometry
gpl-2.0
A.1 The Betelgeuse size measurement For fun, let us use our toy to re-create the Betelgeuse size measurement of 1920 by A.A. Michelson and F.G. Pease. Their experiment was set up as follows. The interferometer they constructed had movable outside mirrors, giving it a baseline that could be adjusted from a maximum of 6m...
arcsec = 1/3600.
interact(lambda extent_arcsec, baseline: michelson(p0=[0], a0=[1],
                                                   extent=extent_arcsec*arcsec,
                                                   maxint=1, baseline=baseline,
                                                   fov=1*arcsec),
         extent_arcsec=(0, 0.1, 0.001),
         baseline=(1e+4, 1e+7, 1e+4)) and None
To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens t...
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))

# Helper functions for TF...
downloads/articles/2017-09-14-Google-DeepDream-Python/deepdream.ipynb
leimao/leimao.github.io
gpl-2.0
<a id='naive'></a> Naive feature visualization Let's start with a naive way of visualizing these: image-space gradient ascent! Lei Mao: Unlike loss minimization in classification tasks, here we perform maximization to generate the pattern that a certain convolutional layer "likes" most. The basic principles of minimization and maxi...
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139  # picking some feature channel to visualize

# start with a gray image with a little noise
im...
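The cell above is TensorFlow 1.x graph code, but the underlying idea of gradient ascent is independent of the framework: repeatedly nudge the input in the direction that increases the objective. A toy 1-D sketch (the scalar `x` and quadratic `score` are hypothetical stand-ins for the image and the channel activation):

```python
def score(x):
    # Toy stand-in for 'mean channel activation': maximal at x = 3.
    return -(x - 3.0) ** 2

def grad(x):
    # Analytic gradient of the toy score.
    return -2.0 * (x - 3.0)

x = 0.1  # analogous to starting from a gray image with a little noise
for _ in range(100):
    x += 0.1 * grad(x)  # gradient *ascent*: step uphill on the score

print(round(x, 3))  # 3.0 (the maximiser of the toy score)
```

In the real notebook, `x` is a whole image array and the gradient comes from backpropagation through the network.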
<a id="multiscale"></a> Multiscale image generation Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale. With multiscale image gene...
def tffunc(*argtypes): '''Helper that transforms TF-graph generating function into a regular one. See "resize" function below. ''' placeholders = list(map(tf.placeholder, argtypes)) def wrap(f): out = f(*placeholders) def wrapper(*args, **kw): return out.eval(dict(zip(pla...
Is there a difference in the precision-recall values of different models?
query_dict = {'expansions__vectors__rep': 0, 'expansions__k':3, 'labelled':'amazon_grouped-tagged', 'expansions__use_similarity': 0, 'expansions__neighbour_strategy':'linear', 'expansions__vectors__dimensionality': 100, 'document_features_ev...
notebooks/error_analysis.ipynb
mbatchkarov/ExpLosion
bsd-3-clause
Overall, precision and recall are balanced and roughly equal. Better models are better in both P and R. What's in each cluster when using VQ?
path = '../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_100percent-rep0_Add.events.filtered.strings.kmeans2000' df = pd.read_hdf(path, key='clusters') counts = df.clusters.value_counts() g = sns.distplot(counts.values, kde_kws={'cut':True}) g.set(xlim=(0, None)) plt.title('Distribution of clu...
Find the smallest cluster and print the phrases it contains
df[df.clusters == counts.idxmin()].head(20)
df[df.clusters == 5]
# cluster 5 (negative sentiment), 2 (royalty), 8 (cheap, expensive) are very sensible
# cluster 3 ('arm'), 1 ('product'), 15 (hot), 16 (playing) are dominated by a single word (may contain multiple senses, e.g. hot water, hot waiter)
# cluster 6 (grand sla...
Large clusters form (this one has 934 unigrams and NPs) whose members share a word (e.g. bad occurs in 726 of them), and even though they are not pure (e.g. good also appears in that cluster 84 times), the vast majority of "bad" items ends up in cluster 5, which starts to correspond to negative sentiment. This can be something the cla...
from discoutils.thesaurus_loader import Vectors as vv

# not quite the same vectors (15% vs 100%), but that's all I've got on this machine
v = vv.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_15percent-rep0_Add.events.filtered.strings')
v.init_sims(n_neighbors=30)
v.get_nearest_ne...
Let's find out whether there is a positive-sentiment cluster
cluster_num = df.loc['good/J_guy/N'][0]
print(cluster_num)
df[df.clusters == cluster_num]
Counter([str(x).split('_')[0] for x in df[df.clusters == cluster_num].index]).most_common(10)

cluster_num = df.loc['good/J_movie/N'][0]
print(cluster_num)
df[df.clusters == cluster_num]
Counter([str(x).split('_')[1] for x in df[d...
Does the same hold for Turian vectors?
path = '../FeatureExtractionToolkit/socher_vectors/composed/AN_NN_turian_Socher.events.filtered.strings.kmeans2000'
ddf = pd.read_hdf(path, key='clusters')

cluster_num = ddf.loc['bad/J_guy/N'][0]
print(cluster_num)
ddf[ddf.clusters == cluster_num]
Counter([str(x).split('_')[1] for x in ddf[ddf.clusters == cluster_num]...
Is it OK to use accuracy instead of Averaged F1 score?
gaps = []
for r in Results.objects.filter(classifier=CLASSIFIER):
    gap = r.accuracy_mean - r.macrof1_mean
    if abs(gap) > 0.1:
        print(r.id.id)
    gaps.append(gap)
plt.hist(gaps);
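The gap being histogrammed is accuracy minus macro-averaged F1. On imbalanced data the two can diverge sharply, which a tiny hand computation shows (pure Python, no sklearn; the toy labels are invented for illustration):

```python
def accuracy(y_true, y_pred):
    """Fraction of exact matches."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]  # classifier that ignores the minority class
print(accuracy(y_true, y_pred))            # 0.75
print(round(macro_f1(y_true, y_pred), 3))  # 0.429
```

So a large |accuracy - macro F1| gap, as checked in the loop above, usually signals class imbalance being papered over.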
Are neighbours of words other words, and is there grouping by PoS tag?
from discoutils.thesaurus_loader import Vectors
from discoutils.tokens import DocumentFeature
from random import sample

v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-15perc.unigr.strings.rep0')
sampled_words = sample(list(v.keys()), 5000)
v.init_sims(n_neighbors=100)
data = []
for w...
Preliminary Report Read the following results/report. While you are reading it, think about whether the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform. A. Initial observations based on the plot above + Overall, rate of readmissions...
# A. Do you agree with the above analysis and recommendations? Why or why not?
import seaborn as sns

relevant_columns = clean_hospital_read_df[['Excess Readmission Ratio', 'Number of Discharges']][81:-3]
sns.regplot(relevant_columns['Number of Discharges'], relevant_columns['Excess Readmission Ratio'])
Statistics_Exercises/sliderule_dsi_inferential_statistics_exercise_3.ipynb
RyanAlberts/Springbaord-Capstone-Project
mit
<div class="span5 alert alert-info"> ### Exercise Include your work on the following **in this notebook and submit to your Github account**. A. Do you agree with the above analysis and recommendations? Why or why not? B. Provide support for your arguments and your own recommendations with a statistically sound anal...
rv = relevant_columns
print(rv[rv['Number of Discharges'] < 100][['Excess Readmission Ratio']].mean())
print('\nPercent of subset with excess readmission rate > 1: ',
      len(rv[(rv['Number of Discharges'] < 100) & (rv['Excess Readmission Ratio'] > 1)]) / len(rv[rv['Number of Discharges'] < 100]))
print('\n', ...
+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1. Verdict: Accurate.
+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1. Correction...
np.corrcoef(rv['Number of Discharges'], rv['Excess Readmission Ratio'])
You will work with the Housing Prices Competition for Kaggle Learn Users from the previous exercise. Run the next code cell without changes to load the training and test data in X and X_test. For simplicity, we drop categorical variables.
import pandas as pd
from sklearn.model_selection import train_test_split

# Read the data
train_data = pd.read_csv('../input/train.csv', index_col='Id')
test_data = pd.read_csv('../input/test.csv', index_col='Id')

# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['Sal...
notebooks/ml_intermediate/raw/ex5.ipynb
Kaggle/learntools
apache-2.0
Use the next code cell to print the first several rows of the data.
X.head()
So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use SimpleImputer() to replace missing values in the data, before using RandomForestRegressor() to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_es...
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer

my_pipeline = Pipeline(steps=[
    ('preprocessor', SimpleImputer()),
    ('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
You have also learned how to use pipelines in cross-validation. The code below uses the cross_val_score() function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the cv parameter.
from sklearn.model_selection import cross_val_score

# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
                              cv=5,
                              scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
Step 1: Write a useful function In this exercise, you'll use cross-validation to select parameters for a machine learning model. Begin by writing a function get_score() that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses: - the data in X and y to create folds, - Sim...
def get_score(n_estimators):
    """Return the average MAE over 3 CV folds of random forest model.

    Keyword argument:
    n_estimators -- the number of trees in the forest
    """
    # Replace this body with your own code
    pass

# Check your answer
step_1.check()

#%%RM_IF(PROD)%%
def get_score(n_estimators...
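One plausible way to fill in the body, combining the pipeline and `cross_val_score` patterns shown earlier (the random `X`/`y` below are synthetic stand-ins for the housing data, so the printed MAE is not the exercise's answer):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.rand(60, 4)  # stand-in for the housing features
y = rng.rand(60)     # stand-in for SalePrice

def get_score(n_estimators):
    """Average MAE over 3 CV folds for a forest with `n_estimators` trees."""
    my_pipeline = Pipeline(steps=[
        ('preprocessor', SimpleImputer()),
        ('model', RandomForestRegressor(n_estimators=n_estimators,
                                        random_state=0)),
    ])
    scores = -1 * cross_val_score(my_pipeline, X, y, cv=3,
                                  scoring='neg_mean_absolute_error')
    return scores.mean()

print(get_score(10))  # some small positive MAE
```

The structure (imputer, then model, scored with `neg_mean_absolute_error`) is what the exercise asks for; only the data is invented here.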
Step 2: Test different parameter values Now, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400. Store your results in a Python dictionary results, where results[i] ...
results = ____  # Your code here

# Check your answer
step_2.check()

#%%RM_IF(PROD)%%
results = {}
for i in range(1, 9):
    results[50*i] = get_score(50*i)
step_2.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_2.hint()
#_COMMENT_IF(PROD)_ step_2.solution()
Use the next cell to visualize your results from Step 2. Run the code without changes.
import matplotlib.pyplot as plt
%matplotlib inline

plt.plot(list(results.keys()), list(results.values()))
plt.show()
Step 3: Find the best parameter value Given the results, which value for n_estimators seems best for the random forest model? Use your answer to set the value of n_estimators_best.
n_estimators_best = ____

# Check your answer
step_3.check()

#%%RM_IF(PROD)%%
n_estimators_best = min(results, key=results.get)
step_3.assert_check_passed()

#%%RM_IF(PROD)%%
n_estimators_best = 200
step_3.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_3.hint()
#_CO...
Now let us write a custom function to run the xgboost model.
def runXGB(train_X, train_y, test_X, test_y=None, feature_names=None, seed_val=0, num_rounds=1000): param = {} param['objective'] = 'multi:softprob' param['eta'] = 0.1 param['max_depth'] = 6 param['silent'] = 1 param['num_class'] = 3 param['eval_metric'] = "mlogloss" param['min_child_wei...
xgboost.ipynb
shengqiu/renthop
gpl-2.0
Let us read the train and test files and store them.
data_path = "../input/"
train_file = data_path + "train.json"
test_file = data_path + "test.json"
train_df = pd.read_json(train_file)
test_df = pd.read_json(test_file)
print(train_df.shape)
print(test_df.shape)
We do not need any pre-processing for numerical features and so create a list with those features.
features_to_use = ["bathrooms", "bedrooms", "latitude", "longitude", "price"]
Now let us create some new features from the given features.
# count of photos #
train_df["num_photos"] = train_df["photos"].apply(len)
test_df["num_photos"] = test_df["photos"].apply(len)

# count of "features" #
train_df["num_features"] = train_df["features"].apply(len)
test_df["num_features"] = test_df["features"].apply(len)

# count of words present in description column #
t...
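The pattern in the cell above is `Series.apply(len)` on list- and string-valued columns. A minimal reproducible sketch on a toy frame (not the RentHop data; note that `"".split(" ")` still yields one empty token, mirroring the notebook's word count):

```python
import pandas as pd

toy = pd.DataFrame({
    "photos": [["a.jpg", "b.jpg"], [], ["c.jpg"]],
    "description": ["bright sunny room", "", "close to subway"],
})

# list-valued column: length of each list
toy["num_photos"] = toy["photos"].apply(len)
# string column: number of space-separated tokens
toy["num_description_words"] = toy["description"].apply(lambda x: len(x.split(" ")))

print(toy["num_photos"].tolist())             # [2, 0, 1]
print(toy["num_description_words"].tolist())  # [3, 1, 3]
```

Counting-based features like these are cheap and often surprisingly predictive in tabular competitions.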
We have 4 categorical features in our data: display_address, manager_id, building_id, street_address. So let us label encode these features.
categorical = ["display_address", "manager_id", "building_id", "street_address"]
for f in categorical:
    if train_df[f].dtype == 'object':
        #print(f)
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(train_df[f].values) + list(test_df[f].values))
        train_df[f] = lbl.transf...
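Fitting the encoder on train and test combined, as the cell above does, guarantees that categories appearing only in the test set still get a code. The core idea reduces to a sorted-vocabulary lookup; a pure-Python sketch of what `LabelEncoder` does (toy street names, hypothetical helper name):

```python
def fit_label_encoder(values):
    """Map each distinct value to an integer code, like sklearn's LabelEncoder."""
    classes = sorted(set(values))
    return {v: i for i, v in enumerate(classes)}

train_vals = ["8 Ave", "Main St", "8 Ave"]
test_vals = ["Broadway", "Main St"]  # "Broadway" never appears in train

# Fit on the union so test-only categories are still encodable.
codes = fit_label_encoder(train_vals + test_vals)
print([codes[v] for v in train_vals])  # [0, 2, 0]
print([codes[v] for v in test_vals])   # [1, 2]
```

Fitting on train alone would raise a KeyError (or, with sklearn, a ValueError) on "Broadway" at prediction time.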
We have a features column which is a list of string values, so we can first combine all the strings into a single string and then apply a count vectorizer on top of it.
train_df['features'] = train_df["features"].apply(lambda x: " ".join(["_".join(i.split(" ")) for i in x]))
test_df['features'] = test_df["features"].apply(lambda x: " ".join(["_".join(i.split(" ")) for i in x]))
print(train_df["features"].head())

tfidf = CountVectorizer(stop_words='english', max_features=200)
tr_sparse...
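The underscore-joining trick keeps multi-word amenities like "Hardwood Floors" as single vocabulary items once the vectorizer tokenizes on whitespace. The transformation itself is plain string work (toy amenity list for illustration):

```python
def join_features(feature_list):
    """Turn a list of phrases into one space-separated string of
    underscore-joined tokens, as fed to CountVectorizer."""
    return " ".join("_".join(phrase.split(" ")) for phrase in feature_list)

print(join_features(["Hardwood Floors", "Dogs Allowed", "Elevator"]))
# Hardwood_Floors Dogs_Allowed Elevator
```

Without the join, "Hardwood" and "Floors" would be counted as two unrelated tokens.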
Now let us stack both the dense and sparse features into a single dataset and also get the target variable.
train_X = sparse.hstack([train_df[features_to_use], tr_sparse]).tocsr()
test_X = sparse.hstack([test_df[features_to_use], te_sparse]).tocsr()

target_num_map = {'high': 0, 'medium': 1, 'low': 2}
train_y = np.array(train_df['interest_level'].apply(lambda x: target_num_map[x]))
print(train_X.shape, test_X.shape)
Now let us do some cross-validation to check the scores. Please run it locally to get the CV scores; it is commented out here to save time.
cv_scores = []
kf = model_selection.KFold(n_splits=5, shuffle=True, random_state=2016)
for dev_index, val_index in kf.split(range(train_X.shape[0])):
    dev_X, val_X = train_X[dev_index,:], train_X[val_index,:]
    dev_y, val_y = train_y[dev_index], train_y[val_index]
    preds, model = runXGB(dev_X, dev_y...
Now let us build the final model and get the predictions on the test set.
preds, model = runXGB(train_X, train_y, test_X, num_rounds=400)
out_df = pd.DataFrame(preds)
out_df.columns = ["high", "medium", "low"]
out_df["listing_id"] = test_df.listing_id.values
out_df.to_csv("xgb_starter2.csv", index=False)
Collate and output the results as a plain-text alignment table, as JSON, and as colored HTML
collationText = collate(json_input, output='table', layout='vertical')
print(collationText)

collationJSON = collate(json_input, output='json')
print(collationJSON)

collationHTML2 = collate(json_input, output='html2')
unit8/unit8-collatex-and-XML/CollateX and XML, Part 2.ipynb
DiXiT-eu/collatex-tutorial
gpl-3.0
Different forms of the network. The node-link network that we get from Source includes topological information, in addition to the geometries of the various nodes, links and catchments, and their attributes, such as node names. When we initially retrieve the network, with v.network(), we get an object that includes a numb...
network = v.network()
doc/examples/network/TopologicalQueries.ipynb
flowmatters/veneer-py
isc
e.g., find all outlet nodes
outlets = network.outlet_nodes().as_dataframe()
outlets[:10]
Feature id Other topological queries are based on the id attribute of features in the network. For example /network/nodes/187
upstream_features = network.upstream_features('/network/nodes/214').as_dataframe()
upstream_features
upstream_features.plot()
Partitioning the network The network.partition method can be very useful for a range of parameterisation and reporting needs. partition groups all features (nodes, links and catchments) in the network based on which of a series of key nodes those features drain through. partition adds a new property to each feature, nam...
network.partition?

gauge_names = network['features'].find_by_icon('/resources/GaugeNodeModel')._select(['name'])
gauge_names

network.partition(gauge_names, 'downstream_gauge')
dataframe = network.as_dataframe()
dataframe[:10]

## Path between two features
network.path_between?
network.path_between('/network/catchme...
The (College Student) Diet Problem Consider the canonical college student. After a hard afternoon's work of solving way too many partial differential equations, she emerges from her room to obtain sustenance for the day. She has a choice between getting chicken over rice (\$5) from the halal cart on her street ($r$), o...
fig = plt.figure()
axes = fig.add_subplot(1,1,1)

# define view
r_min = 0.0
r_max = 3.0
s_min = 0.0
s_max = 5.0
res = 50
r = numpy.linspace(r_min, r_max, res)

# plot axes
axes.axhline(0, color='#B3B3B3', linewidth=5)
axes.axvline(0, color='#B3B3B3', linewidth=5)

# plot constraints
c_1 = lambda x: 4 - 2*x
c_2 = lambd...
.ipynb_checkpoints/lpsm-checkpoint.ipynb
constellationcolon/simplexity
mit
We can visualise our diet problem on a graph of "Number of Subs vs. Number of Chicken or Rice", where lines each represent a constraint, and our cost function can be represented in shades of blue: the deeper the blue, the more we will spend on meals. The regions where we will satisfy our constraints will be the regions...
fig = plt.figure()
axes = fig.add_subplot(1,1,1)

# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')

# plot constraints
c_1 = lambda x: 4 - 2*x
c_2 = lambda x: 1 - x
c_3 = lambda x: - 0.25 * ( - 6 + 3*x )
c_1_line = axes.plot( r, c_1(r), label='Fibre' )  # 2r + s \geq 4
c_2_line = axes.plot( r, c_2(r), ...
In dual form, this is
\begin{align}
\text{minimise} \quad & -4y_1 - 3y_2 - 6y_3 \\
\text{subject to} \quad & 2y_1 + 3y_2 + 3y_3 = 5 \\
& y_1 + 3y_2 + 4y_3 = 7 \\
\text{and} \quad & \{ y_i \geq 0 \}_{i=1}^3
\end{align}
Which can be seen as minimising the objective function on th...
import pandas as pd
pd.set_option('display.notebook_repr_html', True)

def pivot(departing, entering, tab):
    dpi = tab[tab['basic_variable']==departing].index[0]  # index of the departing row
    # update basic variable
    tab['basic_variable'][dpi] = entering
    # normalise departing_row
    tab.ix[dpi,0:-1]...
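Stripped of the pandas bookkeeping in the truncated cell above, a simplex pivot is just row reduction on the tableau: normalise the pivot row, then eliminate the pivot column from every other row. A list-of-lists sketch (hypothetical helper, numeric columns only):

```python
def pivot_tableau(tab, pr, pc):
    """Pivot the tableau `tab` (a list of rows) on row `pr`, column `pc`."""
    piv = tab[pr][pc]
    # normalise the pivot row so the pivot element becomes 1
    tab[pr] = [x / piv for x in tab[pr]]
    # zero out the pivot column in every other row
    for r in range(len(tab)):
        if r != pr and tab[r][pc] != 0:
            factor = tab[r][pc]
            tab[r] = [a - factor * b for a, b in zip(tab[r], tab[pr])]
    return tab

tab = [[2.0, 1.0, 4.0],
       [1.0, 3.0, 6.0]]
print(pivot_tableau(tab, 0, 0))
# [[1.0, 0.5, 2.0], [0.0, 2.5, 4.0]]
```

The notebook's `pivot` does the same arithmetic while also tracking which variable is basic in each row.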
Bland's Rule This seemingly arbitrary rule will seem less arbitrary in just a while. Multiple Optimal Solutions So, given the graphical intuition we now have for how the simplex method works, is there ever a time when we would encounter more than one optimal solution for a given problem?
\begin{align}
\text{ma...
fig = plt.figure()
axes = fig.add_subplot(1,1,1)

# define view
x_1_min = 0.0
x_1_max = 3.0
x_2_min = 0.0
x_2_max = 5.0
res = 50

# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')

# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 4.0 - 2.0*x
c_2 = lambda x: (30.0 - 10.0*x)/...
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & 2x_1 + x_2 + s_1 = 4 \\
& 10x_1 + 14x_2 + s_2 = 30 \\
\text{and} \quad & x_1, x_2, s_1, s_2 \geq 0
\end{align}
c_1 = numpy.array([[ 2,  1, 1, 0,  4, 's_1']])
c_2 = numpy.array([[10, 14, 0, 1, 30, 's_2']])
z   = numpy.array([[-5, -7, 0, 0,  0, '']])
rows = numpy.concatenate((c_1, c_2, z), axis=0)
tableau_multiple = pd.DataFrame(rows, columns=['x_1','x_2','s_1','s_2','value', 'basic_variable'], index=['c_1','c_2','z'])
tablea...
Unbounded Optima
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 \leq 5 \\
& -\frac{1}{2}x_1 + x_2 \leq 7 \\
\text{and} \quad & x_1, x_2 \geq 0
\end{align}
fig = plt.figure()
axes = fig.add_subplot(1,1,1)

# define view
x_1_min = 0.0
x_1_max = 10.0
x_2_min = 0.0
x_2_max = 15.0
# res = 100

# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')

# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 5.0 + x
c_2 = lambda x: 7 + 0.5*x
c_1_l...
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 + s_1 = 5 \\
& -\frac{1}{2}x_1 + x_2 + s_2 = 7 \\
\text{and} \quad & x_1, x_2, s_1, s_2 \geq 0
\end{align}
c_1 = numpy.array([[  -1,  1, 1, 0, 5, 's_1']])
c_2 = numpy.array([[-0.5,  1, 0, 1, 7, 's_2']])
z   = numpy.array([[  -5, -7, 0, 0, 0, '']])
rows = numpy.concatenate((c_1, c_2, z), axis=0)
tableau_unbounded = pd.DataFrame(rows, columns=['x_1','x_2','s_1','s_2','value', 'basic_variable'], index=['c_1','c_2','z'])
...
We got an error! ValueError: attempt to get argmin of an empty sequence. Usually, errors are bad things, but in this case the error is trying to tell us something. In the code: return tab.ix[ratios[ratios>=0].idxmin(),'basic_variable'] This is telling us that no non-negative ratio was found! Why is this a problem fo...
display(tableau_unbounded)
\begin{gather}
z = 83 + 17 s_1 - 24 s_2 \\
x_1 = 4 + 2 s_1 - 2 s_2 \\
x_2 = 9 + s_1 - 2 s_2
\end{gather}
At this point, we want to pick $s_1$ as our entering variable because it has the most negative coefficient, and increasing the value of $s_1$ would most increase the value of $z$. Usually, increasing the value of $s_...
fig = plt.figure()
axes = fig.add_subplot(1,1,1)

# define view
x_1_min = 0.0
x_1_max = 3.0
x_2_min = 0.0
x_2_max = 5.0
# res = 100

# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')

# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 3.0 - x
c_2 = lambda x: -3.0 + x
c_3 = la...
\begin{align}
\text{maximise} \quad & 2x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 + s_1 = 3 \\
& x_1 - x_2 + s_2 = 3 \\
& x_2 + s_3 = 2 \\
\text{and} \quad & \{x_i\}_{i=1}^2, \{s_j\}_{j=1}^3 \geq 0
\end{align}
c_1 = numpy.array([[ 3,  1, 1, 0, 0, 6, 's_1']])
c_2 = numpy.array([[ 1, -1, 0, 1, 0, 2, 's_2']])
c_3 = numpy.array([[ 0,  1, 0, 0, 1, 3, 's_3']])
z   = numpy.array([[-2, -1, 0, 0, 0, 0, '']])
rows = numpy.concatenate((c_1, c_2, c_3, z), axis=0)
tableau_degenerate = pd.DataFrame(rows, columns=['x_1','x_2','s_1',...
You think you're moving, but you get nowhere. — Stop and Stare, OneRepublic As its name suggests, degeneracy is when you get a basic variable (that's supposed to have a non-zero value) with a value of 0, and you are able to modify the value of the objective function without moving on the simplex. In general, predictin...
tableaux_degenerate[1]
Without Bland's Rule, one could potentially choose to pivot on $s_2$, which will give us
pivot('s_2', 'x_2', tableaux_degenerate[1])
tableaux_degenerate[1]
Choosing $x_2$ to pivot back to seems like a good idea, right? Nope.
pivot('x_2', 's_2', tableaux_degenerate[1])
tableaux_degenerate[1]
Cycling, ladies and gentlemen, aka a slow spiral into insanity. $\epsilon$-perturbations Another earlier (and nowadays less popular) method for avoiding degeneracy is introducing $\epsilon$-perturbations into the problem. Recall that the standard system goes like
\begin{align}
\text{maximise} \quad & c^T x \\
\text...
c_1 = numpy.array([[   1,   0,  0, 1, 0, 0,     1, 's_1']])
c_2 = numpy.array([[  20,   1,  0, 0, 1, 0,   100, 's_2']])
c_3 = numpy.array([[ 200,  20,  1, 0, 0, 1, 10000, 's_3']])
z   = numpy.array([[-100, -10, -1, 0, 0, 0,     0, '']])
rows = numpy.concatenate((c_1, c_2, c_3, z), axis=0)
tableau_klee_minty = pd.Dat...
The rows contain the electricity used in each hour over a one-year period. Each row indicates the usage for the hour starting at the specified time, so 1/1/13 0:00 indicates the usage for the first hour of January 1st. Working with datetime data Let's take a closer look at our data:
nrg.head()
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
Both pandas and NumPy use the concept of dtypes as data types, and if no arguments are specified, date_time will take on an object dtype.
nrg.dtypes
# https://docs.python.org/3/library/functions.html#type
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iat.html
type(nrg.iat[0,0])
This will be an issue with any column that can't neatly fit into a single data type. Working with dates as strings is also an inefficient use of memory and programmer time (not to mention patience). This exercise will work with time series data, and the date_time column will be formatted as an array of datetime objects...
nrg['date_time'] = pd.to_datetime(nrg['date_time']) # https://stackoverflow.com/questions/29206612/difference-between-data-type-datetime64ns-and-m8ns nrg['date_time'].dtype
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
If you're curious about alternatives to the code above, check out pandas.PeriodIndex, which can store ordinal values indicating regular time periods. We now have a pandas.DataFrame called nrg that contains the data from our .csv file. Notice how the time is displayed differently in the date_time column.
nrg.head()
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
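If pandas.PeriodIndex sounds useful, here is a minimal sketch (daily frequency for brevity; with the energy data you would match its hourly spacing). Periods are stored as ordinals indicating regular, consecutive time spans rather than as individual timestamps.

```python
import pandas as pd

# Regular daily periods stored as ordinals rather than timestamps.
periods = pd.period_range("2013-01-01", periods=3, freq="D")
print(periods)
print(periods[1].ordinal - periods[0].ordinal)  # 1: consecutive ordinals
```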
Time for a timing decorator The code above is pretty straightforward, but how fast does it run? Let's find out by using a timing decorator called @timeit (an homage to Python's timeit). This decorator behaves like timeit.repeat(), but it also allows you to return the result of the function itself as well as get the ave...
from timer import timeit @timeit(repeat=3, number=10) def convert_with_format(nrg, column_name): return pd.to_datetime(nrg[column_name], format='%d/%m/%y %H:%M') nrg['date_time'] = convert_with_format(nrg, 'date_time')
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
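The timer module used above ships with the tutorial's source, not with Python. A minimal sketch of what such a decorator might look like (names and reporting format are illustrative): run the wrapped function `number` times per trial, over `repeat` trials, report the best per-call average, and still return the function's result.

```python
import functools
import time

def timeit(repeat=3, number=10):
    """Sketch of a timeit-style decorator that also returns the result."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            timings = []
            for _ in range(repeat):
                start = time.perf_counter()
                for _ in range(number):
                    result = func(*args, **kwargs)
                timings.append((time.perf_counter() - start) / number)
            print(f"Best of {repeat} trials: {min(timings):.6f} s per call")
            return result
        return wrapper
    return decorator

@timeit(repeat=2, number=5)
def square(x):
    return x * x

print(square(4))  # 16, after the timing line
```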
One easily overlooked detail is that the datetimes in the energy_consumption.csv file are not in ISO 8601 format: ISO 8601 would be YYYY-MM-DD HH:MM. If you don’t specify a format, pandas will fall back on the dateutil package to convert each string to a date. Conversely, if the raw datetime data is already in ISO 8601 format, pandas ca...
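The format string matters for correctness, not just speed. A hypothetical one-row sample shows why: without a format, dateutil guesses month-first, while this dataset is day-first.

```python
import pandas as pd

# '1/2/13' is ambiguous: February 1st (day-first, as in the CSV) or January 2nd?
s = pd.Series(["1/2/13 0:00"])
guessed = pd.to_datetime(s)                            # month-first guess
explicit = pd.to_datetime(s, format="%d/%m/%y %H:%M")  # day-first, as intended
print(guessed[0].month, explicit[0].month)  # 1 2
```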
nrg['cost_cents'] = nrg['energy_kwh'] * 28; nrg.head()
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
However, our hourly costs depend on the time of day. If you use a loop to do the conditional calculation, you are not using pandas the way it was intended. For the rest of this tutorial, you'll start with a sub-optimal solution and work your way up to a Pythonic approach that leverages the full power of pandas. Take a ...
# Create a function to apply the appropriate rate to the given hour: def apply_rate(kwh, hour): """ Calculates the cost of electricity for a given hour. """ if 0 <= hour < 7: rate = 12 elif 7 <= hour <= 17: rate = 20 elif 17 <= hour <= 24: rate = 28 else: # +1...
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
Now for a computationally expensive and non-Pythonic loop:
# Not the best way: @timeit(repeat=2, number = 10) def apply_rate_loop(nrg): """ Calculate the costs using a loop, and modify `nrg` dataframe in place. """ energy_cost_list = [] for i in range(len(nrg)): # Get electricity used and the corresponding rate. energy_used = nrg.iloc[i]['en...
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
You can consider the above to be an “antipattern” in pandas for several reasons. First, it initializes a list in which the outputs will be recorded. Second, it uses the opaque object range(0, len(df)) to loop through nrg, then applies apply_rate() and appends each result to a list used to make the new DataFrame column. Third, cha...
@timeit(repeat=2, number=10) def apply_rate_iterrows(nrg): energy_cost_list = [] for index, row in nrg.iterrows(): energy_used = row['energy_kwh'] hour = row['date_time'].hour energy_cost = apply_rate(energy_used, hour) energy_cost_list.append(energy_cost) nrg['cost_cents'] =...
NevadaDashboard/pythonic_pandas.ipynb
bgroveben/python3_machine_learning_projects
mit
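The fully vectorized endpoint of this progression replaces the loop entirely: bin each hour into a rate band with pd.cut in one pass. This is a sketch on a hypothetical 24-hour frame, with bin edges chosen to mirror apply_rate above (0–6 → 12, 7–17 → 20, 18–23 → 28).

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for nrg: one day of hourly readings
# (with the real data you would use nrg['date_time'].dt.hour).
df = pd.DataFrame({"hour": np.arange(24), "energy_kwh": np.ones(24)})

# One vectorized pass: each hour falls into exactly one rate band.
rates = pd.cut(df["hour"], bins=[-1, 6, 17, 23], labels=[12, 20, 28]).astype(int)
df["cost_cents"] = rates * df["energy_kwh"]
print(df["cost_cents"].iloc[[0, 7, 18]].tolist())  # [12.0, 20.0, 28.0]
```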
Clean Raw Annotations Load raw annotations
""" # v4_annotated user_blocked = [ 'annotated_onion_layer_5_rows_0_to_5000_raters_20', 'annotated_onion_layer_5_rows_0_to_10000', 'annotated_onion_layer_5_rows_0_to_10000_raters_3', 'annotated_onion_layer_5_rows_10000_to_50526_...
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Make random and blocked samples disjoint
df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts().value_counts() df.index = df.rev_id df['sample_count'] = df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts() df['sample_count'].value_counts() # just set them all to random df.loc[df['sample_count'] == 2, 'sample'] = 'random' df.drop_d...
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Tidy is_harassment_or_attack column
df = tidy_labels(df)
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Remap aggression score
df['aggression'] = df['aggression_score'].apply(map_aggression_score_to_2class)
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Remove answers to test questions
df = df.query('_golden == False') print('# annotations: ', df.shape[0])
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Remove annotations where revision could not be read
# remove all annotations for a revisions where more than 50% of annotators for that revision could not read the comment df = remove_na(df) print('# annotations: ', df.shape[0]) # remove all annotations where the annotator could not read the comment df = df.query('na==False') print('# annotations: ', df.shape[0])
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Examine aggression_score or is_harassment_or_attack input
df['aggression_score'].value_counts(dropna=False) df['is_harassment_or_attack'].value_counts(dropna=False)
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Drop NAs in aggression_score or is_harassment_or_attack input
df = df.dropna(subset = ['aggression_score', 'is_harassment_or_attack']) print('# annotations: ', df.shape[0])
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Remove ambivalent is_harassment_or_attack annotations An annotation is ambivalent if it was labeled as both an attack and not an attack
# remove all annotations from users who are ambivalent in 10% or more of revisions # we consider these users unreliable def ambivalent(s): return 'not_attack' in s and s!= 'not_attack' df['ambivalent'] = df['is_harassment_or_attack'].apply(ambivalent) non_ambivalent_workers = df.groupby('_worker_id', as_index = Fal...
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
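The ambivalence check above hinges on a substring test. A toy demonstration (hypothetical label strings; 'not_attack' combined with any attack label in the same annotation makes it ambivalent):

```python
import pandas as pd

def ambivalent(s):
    # Ambivalent: mentions 'not_attack' but carries other labels too.
    return 'not_attack' in s and s != 'not_attack'

labels = pd.Series(['not_attack', 'attack', 'not_attack attack'])
print(labels.apply(ambivalent).tolist())  # [False, False, True]
```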
Make sure that each rev was only annotated by the same worker once
df.groupby(['rev_id', '_worker_id']).size().value_counts() df = df.drop_duplicates(subset = ['rev_id', '_worker_id']) print('# annotations: ', df.shape[0])
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Filter out annotations for revisions with duplicated diff content
comments = df.drop_duplicates(subset = ['rev_id']) print(comments.shape[0]) u_comments = comments.drop_duplicates(subset = ['clean_diff']) print(u_comments.shape[0]) comments[comments.duplicated(subset = ['clean_diff'])].head(5) df = df.merge(u_comments[['rev_id']], how = 'inner', on = 'rev_id') print('# annotations...
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Check that labels are not None
df['recipient'].value_counts(dropna=False) df['attack'].value_counts(dropna=False) df['aggression'].value_counts(dropna=False)
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Remove annotations from all revisions that were annotated less than 8 times
counts = df['rev_id'].value_counts().to_frame() counts.columns = ['n'] counts['rev_id'] = counts.index counts.shape counts['n'].value_counts().head() counts_enough = counts.query("n>=8") counts_enough.shape df = df.merge(counts_enough[['rev_id']], how = 'inner', on = 'rev_id') print('# annotations: ', df.shape[0])
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
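An equivalent (and arguably clearer) way to apply the same ≥8-annotations threshold is groupby().filter(), shown here on a hypothetical toy frame:

```python
import pandas as pd

# Toy frame: rev_id 1 has 8 annotations, rev_id 2 only 3.
toy = pd.DataFrame({"rev_id": [1] * 8 + [2] * 3, "attack": 0})

# Keep only revisions with at least 8 annotations, in one call.
kept = toy.groupby("rev_id").filter(lambda g: len(g) >= 8)
print(sorted(kept["rev_id"].unique()))  # [1]
```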
Discard nuisance columns
df.columns cols = ['rev_id', '_worker_id', 'ns', 'sample', 'src','clean_diff', 'diff', 'insert_only', 'page_id', 'page_title', 'rev_comment', 'rev_timestamp', 'user_id', 'user_text', 'not_attack', 'other', 'quoting', 'recipient', 'third_party', 'attack', 'aggression', 'aggression_score'] df = df[...
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
Summary Stats
df.groupby(['ns', 'sample']).size() df.to_csv('../../data/annotations/clean/annotations.tsv', index=False, sep='\t') pd.read_csv('../../data/annotations/clean/annotations.tsv', sep='\t').shape
src/modeling/Clean Annotations.ipynb
ewulczyn/talk_page_abuse
apache-2.0
2) Depth-01 term, GO:0019012 (virion) has dcnt=0 through is_a relationships (default) GO:0019012, virion, has no GO terms below it through the is_a relationship, so the default value of dcnt will be zero, even though it is very high in the DAG at depth=01.
virion = 'GO:0019012' from goatools.gosubdag.gosubdag import GoSubDag gosubdag_r0 = GoSubDag(go_leafs, godag)
notebooks/relationships_change_dcnt_values.ipynb
tanghaibao/goatools
bsd-2-clause
Notice that dcnt=0 for GO:0019012, virion, even though it is very high in the DAG hierarchy (depth=1). This is because there are no GO IDs under GO:0019012 (virion) using the is_a relationship.
nt_virion = gosubdag_r0.go2nt[virion] print(nt_virion) print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
notebooks/relationships_change_dcnt_values.ipynb
tanghaibao/goatools
bsd-2-clause
3) Depth-01 term, GO:0019012 (virion) dcnt value is higher using all relationships Load all relationships into GoSubDag using relationships=True
gosubdag_r1 = GoSubDag(go_leafs, godag, relationships=True) nt_virion = gosubdag_r1.go2nt[virion] print(nt_virion) print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
notebooks/relationships_change_dcnt_values.ipynb
tanghaibao/goatools
bsd-2-clause
4) Depth-01 term, GO:0019012 (virion) dcnt value is higher using part_of relationships Load only the part_of relationship into GoSubDag using relationships={'part_of'}
gosubdag_partof = GoSubDag(go_leafs, godag, relationships={'part_of'}) nt_virion = gosubdag_partof.go2nt[virion] print(nt_virion) print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
notebooks/relationships_change_dcnt_values.ipynb
tanghaibao/goatools
bsd-2-clause
5) Descendants under GO:0019012 (virion)
virion_descendants = gosubdag_partof.rcntobj.go2descendants[virion] print('{N} descendants of virion were found'.format(N=len(virion_descendants)))
notebooks/relationships_change_dcnt_values.ipynb
tanghaibao/goatools
bsd-2-clause
6) Plot descendants of virion
from goatools.gosubdag.plot.gosubdag_plot import GoSubDagPlot # Limit plot of descendants to get a smaller plot virion_capsid_fiber = {'GO:0098033', 'GO:0098032'} nts = gosubdag_partof.prt_goids(virion_capsid_fiber, '{NS} {GO} dcnt({dcnt}) D-{depth:02} {GO_name}') # Limit plot size by choosi...
notebooks/relationships_change_dcnt_values.ipynb
tanghaibao/goatools
bsd-2-clause
Download
resp = requests.get('https://www.indiegogo.com/explore?filter_title=dayton')
scraping.ipynb
centaurustech/crowdfunding937
mit
Parse The BeautifulSoup library converts a raw string of HTML into a highly searchable object
soup = bs4.BeautifulSoup(resp.text, 'html.parser')
scraping.ipynb
centaurustech/crowdfunding937
mit
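A minimal self-contained illustration of the same parse-and-search workflow (an inline HTML snippet stands in for resp.text; class names mirror the real page):

```python
import bs4

# Inline snippet standing in for resp.text.
html = '<div class="i-project-card"><a class="i-project" href="/p/1">Demo</a></div>'
soup = bs4.BeautifulSoup(html, "html.parser")

# Search by tag and CSS class, then drill into attributes.
card = soup.find("div", class_="i-project-card")
print(card.a["href"])  # /p/1
```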
Inspecting the HTML, it looks like each project is described in a div of class i-project-card. For example:
proj0 = soup.find_all('div', class_='i-project-card')[0] proj0
scraping.ipynb
centaurustech/crowdfunding937
mit
We may want to drill into each individual project page for more details.
detail_link = proj0.find('a', class_='i-project') detail_link['href'] detail_url = 'https://www.indiegogo.com' + detail_link['href'] detail_url detail_resp = requests.get(detail_url) detail_soup = bs4.BeautifulSoup(detail_resp.text, 'html.parser') detail_soup
scraping.ipynb
centaurustech/crowdfunding937
mit
Kickstarter There's an undocumented API that can give us JSON.
kicks_raw = requests.get('http://www.kickstarter.com/projects/search.json?search=&term=dayton') import json data = json.loads(kicks_raw.text) data['projects'][0]
scraping.ipynb
centaurustech/crowdfunding937
mit
Regression Project We have learned about regression and how to build regression models using both scikit-learn and TensorFlow. Now we'll build a regression model from start to finish. We will acquire data and perform exploratory data analysis and data preprocessing. We'll build and tune our model and measure how well o...
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
content/03_regression/09_regression_project/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 2: EDA and Data Preprocessing Using as many code and text blocks as you need, download the dataset, explore it, and do any model-independent preprocessing that you think is necessary. Feel free to use any of the tools for data analysis and visualization that we have covered in this course so far. Be sure to do...
# Add code and text blocks to explore the data and explain your work
content/03_regression/09_regression_project/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Modeling Now that we understand our data a little better, we can build a model. We are trying to predict 'charges', which is a continuous variable. We'll use a regression model to predict 'charges'. Exercise 3: Modeling Using as many code and text blocks as you need, build a model that can predict 'charges' given the f...
# Add code and text blocks to build and validate a model and explain your work
content/03_regression/09_regression_project/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
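One possible starting point for the modeling exercise, sketched on synthetic data (the real insurance features require the EDA above; all names and coefficients here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic numeric features standing in for preprocessed insurance columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([3.0, 2.0, 1.0]) + rng.normal(scale=0.1, size=200)

# Hold out a test split, fit, and report R^2 (close to 1 on this easy data).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))
```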
The majority of machine learning algorithms assume that they are operating on a fully observed data set. In contrast, a great many data sets in the real world are missing some values. Sometimes, this missingness is missing at random (MAR), which means that there is no important pattern to the missingness, and sometim...
X = numpy.concatenate([numpy.random.normal(0, 1, size=(1000)), numpy.random.normal(6, 1, size=(1250))]) plt.title("Bimodal Distribution", fontsize=14) plt.hist(X, bins=numpy.arange(-3, 9, 0.1), alpha=0.6) plt.ylabel("Count", fontsize=14) plt.yticks(fontsize=12) plt.xlabel("Value", fontsize=14) plt.yticks(fontsize=12) ...
tutorials/old/Tutorial_9_Missing_Values.ipynb
jmschrei/pomegranate
mit
Even when the data is all drawn from a single Gaussian distribution, mean imputation is not a great idea. We can see that the standard deviation of the learned distribution is significantly smaller than the true standard deviation (of 1), whereas if the missing data is simply ignored the estimate is closer. This might...
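The variance shrinkage is easy to reproduce in isolation (a sketch with simulated data; 50% of values missing completely at random):

```python
import numpy

rng = numpy.random.default_rng(0)
X = rng.normal(0, 1, 2000)
missing = rng.random(2000) < 0.5          # MCAR mask: ~half the values

# Mean imputation: every missing entry becomes the observed mean.
imputed = X.copy()
imputed[missing] = X[~missing].mean()

# Ignoring the missing values keeps std near 1; imputation shrinks it
# toward sqrt(1 - fraction_missing) ~ 0.71 here.
print(round(X[~missing].std(), 2), round(imputed.std(), 2))
```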
n, d, steps = 1000, 10, 50 diffs1 = numpy.zeros(int(steps*0.86)) diffs2 = numpy.zeros(int(steps*0.86)) X = numpy.random.normal(6, 3, size=(n, d)) for k, size in enumerate(range(0, int(n*d*0.86), n*d // steps)): idxs = numpy.random.choice(numpy.arange(n*d), replace=False, size=size) i, j = idxs // d, idxs % d ...
tutorials/old/Tutorial_9_Missing_Values.ipynb
jmschrei/pomegranate
mit
In even the simplest case of Gaussian distributed data with a diagonal covariance matrix, it is more accurate to use the ignoring strategy rather than imputing the mean. When the data set is mostly unobserved the mean imputation strategy tends to do better in this case, but only because there is so little data for the ...
X = numpy.random.randn(100) X_nan = numpy.concatenate([X, [numpy.nan]*100]) print("Fitting only to observed values:") print(NormalDistribution.from_samples(X)) print() print("Fitting to observed and missing values:") print(NormalDistribution.from_samples(X_nan))
tutorials/old/Tutorial_9_Missing_Values.ipynb
jmschrei/pomegranate
mit