Optimization

If we want to find the "best-fit" hyperparameters, we should optimize an objective function. The two standard choices (as described in Chapter 5 of R&W) are the marginalized ln-likelihood and the cross-validation likelihood. George implements the former in the GP.log_likelihood method and its gradient with respect to the hyperparameters in GP.grad_log_likelihood:
import numpy as np
import george

gp = george.GP(kernel, mean=np.mean(y), fit_mean=True,
               white_noise=np.log(0.19**2), fit_white_noise=True)
gp.compute(t)
print(gp.log_likelihood(y))
print(gp.grad_log_likelihood(y))
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
We'll use a gradient-based optimization routine from SciPy to fit this model as follows:
# Define the objective function (negative log-likelihood in this case).
import scipy.optimize as op

def nll(p):
    gp.set_parameter_vector(p)
    ll = gp.log_likelihood(y, quiet=True)
    return -ll if np.isfinite(ll) else 1e25

# And the gradient of the objective function.
def grad_nll(p):
    gp.set_parameter_vector(p)
    return -gp.grad_log_likelihood(y, quiet=True)

# You need to compute the GP once before starting the optimization.
gp.compute(t)

# Print the initial ln-likelihood.
print(gp.log_likelihood(y))

# Run the optimization routine.
p0 = gp.get_parameter_vector()
results = op.minimize(nll, p0, jac=grad_nll, method="L-BFGS-B")

# Update the kernel and print the final log-likelihood.
gp.set_parameter_vector(results.x)
print(gp.log_likelihood(y))
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
Warning: An optimization routine like this should work on most problems, but the results can be very sensitive to your choice of initialization and algorithm. If the results are nonsense, try choosing a better initial guess or a different value of the method parameter in op.minimize.

We can plot our prediction of the CO2 concentration into the future using our optimized Gaussian process model by running:
x = np.linspace(max(t), 2025, 2000)
mu, var = gp.predict(y, x, return_var=True)
std = np.sqrt(var)

pl.plot(t, y, ".k")
pl.fill_between(x, mu+std, mu-std, color="g", alpha=0.5)
pl.xlim(t.min(), 2025)
pl.xlabel("year")
pl.ylabel("CO$_2$ in ppm");
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
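As a side note on the sensitivity warning: one common defensive strategy is to restart the optimizer from several random initial guesses and keep the best result. The sketch below illustrates this on a made-up multi-modal objective (a stand-in for the GP negative log-likelihood, not the george model itself); the objective function and bounds are hypothetical.

```python
import numpy as np
import scipy.optimize as op

# Toy objective with several local minima; stands in for the GP nll.
def nll(p):
    return float(np.sum(np.sin(3 * p) + 0.1 * p**2))

best = None
rng = np.random.default_rng(42)
for _ in range(8):
    p0 = rng.uniform(-5, 5, size=2)  # random restart
    res = op.minimize(nll, p0, method="L-BFGS-B")
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)
```

With a single start, L-BFGS-B can get stuck in whichever basin the initial guess lands in; the multi-start loop makes the result much less dependent on initialization.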
Sampling & Marginalization

The predictions made in the previous section take into account uncertainties due to the fact that a Gaussian process is stochastic, but they don't take into account any uncertainty in the values of the hyperparameters. This won't matter if the hyperparameters are very well constrained by the data, but in this case many of the parameters are actually poorly constrained. To take this effect into account, we can apply prior probability functions to the hyperparameters and marginalize using Markov chain Monte Carlo (MCMC). To do this, we'll use the emcee package. First, we define the probabilistic model:
def lnprob(p):
    # Trivial uniform prior.
    if np.any((-100 > p[1:]) + (p[1:] > 100)):
        return -np.inf

    # Update the kernel and compute the log-likelihood.
    gp.set_parameter_vector(p)
    return gp.log_likelihood(y, quiet=True)
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
In this function, we've applied a uniform prior between -100 and 100 on every parameter. In real life you should probably use something more intelligent, but this will work for this problem. The quiet argument in the call to GP.log_likelihood() means that the function will return -numpy.inf if the kernel is invalid or if there are any linear algebra errors (otherwise it would raise an exception). Then, we run the sampler (this will probably take a while if you want to repeat this analysis):
import emcee

gp.compute(t)

# Set up the sampler.
nwalkers, ndim = 36, len(gp)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)

# Initialize the walkers.
p0 = gp.get_parameter_vector() + 1e-4 * np.random.randn(nwalkers, ndim)

print("Running burn-in")
p0, _, _ = sampler.run_mcmc(p0, 200)

print("Running production chain")
sampler.run_mcmc(p0, 200);
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
After this run, you can plot 50 samples from the marginalized predictive probability distribution:
x = np.linspace(max(t), 2025, 250)
for i in range(50):
    # Choose a random walker and step.
    w = np.random.randint(sampler.chain.shape[0])
    n = np.random.randint(sampler.chain.shape[1])
    gp.set_parameter_vector(sampler.chain[w, n])

    # Plot a single sample.
    pl.plot(x, gp.sample_conditional(y, x), "g", alpha=0.1)

pl.plot(t, y, ".k")
pl.xlim(t.min(), 2025)
pl.xlabel("year")
pl.ylabel("CO$_2$ in ppm");
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
Extract keywords and their weights with TF-IDF and with TextRank, and display them in turn. (Unless you specify otherwise, 20 keywords are shown by default.)
import jieba
import jieba.analyse
from wordcloud import WordCloud

for keyword, weight in jieba.analyse.extract_tags(data, topK=30, withWeight=True):
    print('%s %s' % (keyword, weight))

for keyword, weight in jieba.analyse.textrank(data, topK=30, withWeight=True):
    print('%s %s' % (keyword, weight))

result = " ".join(jieba.cut(data))
print("Segmentation result: " + result[0:99])

wordcloud = WordCloud(
    background_color="white",                # background color
    max_words=200,                           # maximum number of words shown
    width=800,                               # canvas width in pixels (default 400)
    height=600,                              # canvas height in pixels (default 400)
    font_path=r"C:\Windows\Fonts\msyh.ttc",  # use the Microsoft YaHei font
).generate(result)

%pylab inline
import matplotlib.pyplot as plt
plt.imshow(wordcloud)
plt.axis("off")
jupyter_notebook/getKeyWord.ipynb
xiaoxiaoyao/MyApp
unlicense
Interpretation:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.

Optional questions:

Note: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.

Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)

<font color='blue'> You've learnt to:
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.

Nice work!

5) Performance on other datasets

If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()

datasets = {"noisy_circles": noisy_circles,
            "noisy_moons": noisy_moons,
            "blobs": blobs,
            "gaussian_quantiles": gaussian_quantiles}

### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###

X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])

# make blobs binary
if dataset == "blobs":
    Y = Y % 2

# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v2.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Visualizing Data
strat_train_set_copy = strat_train_set.copy()

housing.plot(kind="scatter", x='longitude', y='latitude')
housing.plot(kind="scatter", x='longitude', y='latitude', alpha=0.1)

strat_train_set_copy.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
                          s=strat_train_set_copy.population/100,
                          c=strat_train_set_copy.median_house_value,
                          cmap=plt.get_cmap("jet"), label="population",
                          figsize=(15, 15), colorbar=True)
plt.legend()

corr_matrix = strat_train_set_copy.corr()
corr_matrix.median_house_value.sort_values(ascending=False)

from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))

strat_train_set_copy.plot.scatter(x="median_income", y="median_house_value", alpha=0.1)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Experimenting with Attribute Combinations
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"] = housing["population"] / housing["households"]
housing.info()

corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.5 Prepare the Data for Machine Learning Algorithms
housing = strat_train_set.drop('median_house_value', axis=1)
housing_labels = strat_train_set['median_house_value'].copy()
housing.info()

housing.dropna(subset=['total_bedrooms']).info()
housing.drop('total_bedrooms', axis=1).info()
housing['total_bedrooms'].fillna(housing['total_bedrooms'].median()).describe()

from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
imputer.statistics_
imputer.strategy
housing.drop("ocean_proximity", axis=1).median().values

X = imputer.transform(housing_num)
X
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Handling Text and Categorical Attributes
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()

housing_cat = housing.ocean_proximity
housing_cat.describe()
housing_cat.value_counts()

housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
type(housing_cat_encoded)
print(encoder.classes_)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
One hot encoding
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()

print(housing_cat_encoded.shape)
print(type(housing_cat_encoded))
(housing_cat_encoded.reshape(-1, 1)).shape

housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
type(housing_cat_1hot)
housing_cat_1hot.toarray()
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Combine
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer(sparse_output=False)
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
type(housing_cat_1hot)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Custom Transformers
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6

housing.head()
housing.iloc[:, 3]

X = housing.values  # This can be achieved by iloc, using .values
housing.iloc[:, [rooms_ix, bedrooms_ix, households_ix, population_ix]].head()

rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]

np.c_[X, rooms_per_household, population_per_household]
np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]

from sklearn.base import BaseEstimator, TransformerMixin

rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    def __init__(self, add_bedrooms_per_room=False):
        self.add_bedrooms_per_room = add_bedrooms_per_room
    def fit(self, X, y=None):
        return self
    def transform(self, X, y=None):
        rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
        population_per_household = X[:, population_ix] / X[:, households_ix]
        if self.add_bedrooms_per_room:
            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
            return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
        else:
            return np.c_[X, rooms_per_household, population_per_household]

attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(X)
print(housing_extra_attribs.shape)
print(housing.shape)

# Convert back to data frame -- My way
new_columns = housing.columns.append(
    pd.Index(['rooms_per_household', 'population_per_household'])
)
new_columns
housing_extra_attribs_df = pd.DataFrame(housing_extra_attribs, columns=new_columns)
housing_extra_attribs_df.head()
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.5.4 Feature Scaling
housing.describe()
housing.total_rooms.describe()

from sklearn.preprocessing import MinMaxScaler
scalar = MinMaxScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)),
             columns=["total_rooms"])["total_rooms"].describe()

from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)),
             columns=["total_rooms"])["total_rooms"].describe()
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.5.5 Transformation Pipeline
from sklearn.pipeline import Pipeline

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attr_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler())
])

# I want to verify that the pipelined version
# does the same thing as the separated steps.
num_pipeline_stage1 = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
])
X_pipeline = num_pipeline_stage1.fit_transform(housing_num)
X = imputer.transform(housing_num)
X_pipeline
np.array_equal(X, X_pipeline)

num_pipeline_stage2 = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attr_adder', CombinedAttributesAdder()),
])
Y = attr_adder.fit_transform(X)
Y_pipeline = num_pipeline_stage2.fit_transform(housing_num)
np.array_equal(Y, Y_pipeline)

num_pipeline_stage3 = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attr_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler())
])
Z = scalar.fit_transform(Y)
Z.std(), Z.mean()
Z_pipeline = num_pipeline_stage3.fit_transform(housing_num)
np.array_equal(Z, Z_pipeline)

from sklearn.base import BaseEstimator, TransformerMixin

class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_names].values

class CustomizedLabelBinarizer(BaseEstimator, TransformerMixin):
    def __init__(self, sparse_output=False):
        self.encode = LabelBinarizer(sparse_output=sparse_output)
    def fit(self, X, y=None):
        return self.encode.fit(X)
    def transform(self, X):
        return self.encode.transform(X)

num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]

num_pipeline = Pipeline([
    ('selector', DataFrameSelector(num_attribs)),
    ('imputer', SimpleImputer(strategy="median")),
    ('attr_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler()),
])
cat_pipeline = Pipeline([
    ('selector', DataFrameSelector(cat_attribs)),
    ('label_binarizer', CustomizedLabelBinarizer()),
])

# LabelBinarizer().fit_transform(DataFrameSelector(cat_attribs).fit_transform(housing))
# num_pipeline.fit_transform(housing)
# cat_pipeline.fit_transform(housing)

from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
    ('num_pipeline', num_pipeline),
    ('cat_pipeline', cat_pipeline),
])

housing_prepared = full_pipeline.fit_transform(housing)
print(housing_prepared.shape)
housing_prepared
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.6.1 Training and Evaluating on the Training Set
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)

some_data = housing[:5]
some_data
some_labels = housing_labels[:5]
some_labels
some_data_prepared = full_pipeline.transform(some_data)
some_data_prepared

print(f'Prediction:\t{lin_reg.predict(some_data_prepared)}')
print(f'Labels:\t\t{list(some_labels)}')

from sklearn.metrics import mean_squared_error
housing_prediction = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_prediction, housing_labels)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Tree model
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)

tree_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(tree_predictions, housing_labels)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.6.2 Better Evaluation Using Cross-Validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores

def display_scores(scores):
    print(f'Scores: {scores}')
    print(f'Mean: {scores.mean()}')
    print(f'STD: {scores.std()}')

display_scores(rmse_scores)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Random Forest
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)

forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
                                scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)

forest_prediction = forest_reg.predict(housing_prepared)
forest_rmse = np.sqrt(mean_squared_error(forest_prediction, housing_labels))
forest_rmse
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Ex03

Try adding a transformer in the preparation pipeline to select only the most important attributes. The importance of each feature is shown below in 2.7.4

```python
sorted(zip(feature_importances, attributes), reverse=True)
[(0.32649798665134971, 'median_income'),
 (0.15334491760305854, 'INLAND'),
 (0.11305529021187399, 'pop_per_hhold'),
 (0.07793247662544775, 'bedrooms_per_room'),
 (0.071415642259275158, 'longitude'),
 (0.067613918945568688, 'latitude'),
 (0.060436577499703222, 'rooms_per_hhold'),
 (0.04442608939578685, 'housing_median_age'),
 (0.018240254462909437, 'population'),
 (0.01663085833886218, 'total_rooms'),
 (0.016607686091288865, 'total_bedrooms'),
 (0.016345876147580776, 'households'),
 (0.011216644219017424, '<1H OCEAN'),
 (0.0034668118081117387, 'NEAR OCEAN'),
 (0.0026848388432755429, 'NEAR BAY'),
 (8.4130896890070617e-05, 'ISLAND')]
```

Based on the ranking, I will select the following 3:
* median_income
* INLAND
* pop_per_hhold
from sklearn.base import BaseEstimator, TransformerMixin

class EX3NumSelector(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        X['pop_per_hhold'] = X['population'] / X['households']
        return X[['median_income', 'pop_per_hhold', 'longitude']].values

class EX3CatSelector(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        Y = housing['ocean_proximity']
        Y[Y != 'INLAND'] = 'NON_INLAND'
        return Y.values

num_sel = EX3NumSelector()
num_sel.fit_transform(housing)
cat_sel = EX3CatSelector()
cat_sel.fit_transform(housing)

num_pipeline = Pipeline([
    ('selector', EX3NumSelector()),
    ('imputer', SimpleImputer(strategy="median")),
    ('std_scaler', StandardScaler()),
])
cat_pipeline = Pipeline([
    ('selector', EX3CatSelector()),
    ('label_binarizer', CustomizedLabelBinarizer()),
])
full_pipeline = FeatureUnion(transformer_list=[
    ('num_pipeline', num_pipeline),
    ('cat_pipeline', cat_pipeline),
])

housing_prepared = full_pipeline.fit_transform(housing)
print(housing_prepared.shape)
housing_prepared

forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
                                scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Conclusions of Ex03

With only 3 features, we do see a performance degradation. Adding one more feature, 'longitude', improves things a little, which makes sense.

2.7.1 Grid Search
# from sklearn.model_selection import GridSearchCV
# param_grid = [
#     {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
#     {'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}
# ]
# forest_reg = RandomForestRegressor()
# grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
# grid_search.fit(housing_prepared, housing_labels)
# grid_search.best_params_
# grid_search.best_estimator_
# cvres = grid_search.cv_results_
# for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
#     print(np.sqrt(-mean_score), params)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.7.4 Analyze the best models and their errors
# feature_importances = grid_search.best_estimator_.feature_importances_
# feature_importances
# extra_attribs = ['rooms_per_hhold', 'pop_per_hhold']
# cat_one_hot_attribs = list(encoder.classes_)
# cat_one_hot_attribs
# attributes = num_attribs + extra_attribs + cat_one_hot_attribs
# attributes, len(attributes)
# sorted(zip(feature_importances, attributes), reverse=True)
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
2.7.5 Evaluate Your System on the Test Set
# final_model = grid_search.best_estimator_
# X_test = strat_test_set.drop("median_house_value", axis=1)
# y_test = strat_test_set.median_house_value.copy()
# X_test_prepared = full_pipeline.transform(X_test)
# final_predictions = final_model.predict(X_test_prepared)
# final_mse = mean_squared_error(final_predictions, y_test)
# final_rmse = np.sqrt(final_mse)
# final_rmse
HandsOnML/ch02/ex03.ipynb
eroicaleo/LearningPython
mit
Universal Sentence Encoder-Lite demo

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
  <td>
    <a href="https://tfhub.dev/google/universal-sentence-encoder-lite/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
  </td>
</table>

This Colab illustrates how to use the Universal Sentence Encoder-Lite for the sentence similarity task. This module is very similar to the Universal Sentence Encoder, with the only difference being that you need to run SentencePiece processing on your input sentences.

The Universal Sentence Encoder makes getting sentence-level embeddings as easy as it has historically been to look up the embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence-level meaning similarity, as well as to enable better performance on downstream classification tasks using less supervised training data.

Getting started

Setup
# Install seaborn for pretty visualizations
!pip3 install --quiet seaborn

# Install the SentencePiece package, needed for Universal Sentence Encoder Lite.
# We'll use it for all the text processing and sentence feature ID lookup.
!pip3 install --quiet sentencepiece

from absl import logging
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub
import sentencepiece as spm
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
Load the module from TF-Hub
module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-lite/2")

input_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None])
encodings = module(
    inputs=dict(
        values=input_placeholder.values,
        indices=input_placeholder.indices,
        dense_shape=input_placeholder.dense_shape))
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
Load SentencePiece model from the TF-Hub Module The SentencePiece model is conveniently stored inside the module's assets. It has to be loaded in order to initialize the processor.
with tf.Session() as sess:
    spm_path = sess.run(module(signature="spm_path"))

sp = spm.SentencePieceProcessor()
with tf.io.gfile.GFile(spm_path, mode="rb") as f:
    sp.LoadFromSerializedProto(f.read())
print("SentencePiece model loaded at {}.".format(spm_path))

def process_to_IDs_in_sparse_format(sp, sentences):
    # A utility method that processes sentences with the SentencePiece processor
    # 'sp' and returns the results in a tf.SparseTensor-like format:
    # (values, indices, dense_shape)
    ids = [sp.EncodeAsIds(x) for x in sentences]
    max_len = max(len(x) for x in ids)
    dense_shape = (len(ids), max_len)
    values = [item for sublist in ids for item in sublist]
    indices = [[row, col] for row in range(len(ids)) for col in range(len(ids[row]))]
    return (values, indices, dense_shape)
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
Test the module with a few examples
# Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
    "Universal Sentence Encoder embeddings also support short paragraphs. "
    "There is no hard limit on how long the paragraph is. Roughly, the longer "
    "the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]

values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages)

# Reduce logging output.
logging.set_verbosity(logging.ERROR)

with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    message_embeddings = session.run(
        encodings,
        feed_dict={input_placeholder.values: values,
                   input_placeholder.indices: indices,
                   input_placeholder.dense_shape: dense_shape})

    for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
        print("Message: {}".format(messages[i]))
        print("Embedding size: {}".format(len(message_embedding)))
        message_embedding_snippet = ", ".join(
            (str(x) for x in message_embedding[:3]))
        print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
Semantic Textual Similarity (STS) task example The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of the encodings.
def plot_similarity(labels, features, rotation):
    corr = np.inner(features, features)
    sns.set(font_scale=1.2)
    g = sns.heatmap(
        corr,
        xticklabels=labels,
        yticklabels=labels,
        vmin=0,
        vmax=1,
        cmap="YlOrRd")
    g.set_xticklabels(labels, rotation=rotation)
    g.set_title("Semantic Textual Similarity")

def run_and_plot(session, input_placeholder, messages):
    values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages)
    message_embeddings = session.run(
        encodings,
        feed_dict={input_placeholder.values: values,
                   input_placeholder.indices: indices,
                   input_placeholder.dense_shape: dense_shape})
    plot_similarity(messages, message_embeddings, 90)
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
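The heat map above is driven by `np.inner(features, features)`. As a standalone illustration of that step (using made-up, L2-normalized vectors rather than actual encoder output):

```python
import numpy as np

# Three made-up unit-length "embeddings" in 2-D.
vecs = np.array([[1.0, 0.0],
                 [0.6, 0.8],
                 [0.0, 1.0]])

# Pairwise similarity matrix: entry [i, j] is the inner product
# of vectors i and j; the diagonal is 1 for unit vectors.
corr = np.inner(vecs, vecs)
print(corr)
```

Because the encoder's embeddings are approximately unit-length, this inner product behaves like a cosine similarity, which is why the heat map is bounded to roughly [0, 1] for related sentences.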
Similarity visualized

Here we show the similarity in a heat map. The final graph is a square matrix where each entry [i, j] is colored based on the inner product of the encodings for sentences i and j.
messages = [
    # Smartphones
    "I like my phone",
    "My phone is not good.",
    "Your cellphone looks great.",

    # Weather
    "Will it snow tomorrow?",
    "Recently a lot of hurricanes have hit the US",
    "Global warming is real",

    # Food and health
    "An apple a day, keeps the doctors away",
    "Eating strawberries is healthy",
    "Is paleo better than keto?",

    # Asking about age
    "How old are you?",
    "what is your age?",
]

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    run_and_plot(session, input_placeholder, messages)
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluation: STS (Semantic Textual Similarity) Benchmark

The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements.

Download data
import pandas
import scipy
import math

def load_sts_dataset(filename):
    # Loads a subset of the STS dataset into a DataFrame. In particular both
    # sentences and their human rated similarity score.
    sent_pairs = []
    with tf.gfile.GFile(filename, "r") as f:
        for line in f:
            ts = line.strip().split("\t")
            # (sent_1, sent_2, similarity_score)
            sent_pairs.append((ts[5], ts[6], float(ts[4])))
    return pandas.DataFrame(sent_pairs, columns=["sent_1", "sent_2", "sim"])

def download_and_load_sts_data():
    sts_dataset = tf.keras.utils.get_file(
        fname="Stsbenchmark.tar.gz",
        origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
        extract=True)
    sts_dev = load_sts_dataset(
        os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"))
    sts_test = load_sts_dataset(
        os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"))
    return sts_dev, sts_test

sts_dev, sts_test = download_and_load_sts_data()
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
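The benchmark's final metric is a Pearson correlation between machine scores and human judgements. As a standalone sketch of that comparison, with made-up score vectors standing in for the model output and the STS gold labels:

```python
import numpy as np
from scipy import stats

# Made-up model similarity scores vs. human judgements (illustrative only).
model_scores = np.array([0.10, 0.40, 0.35, 0.80, 0.90])
human_scores = np.array([0.00, 0.50, 0.30, 0.70, 1.00])

# Pearson r measures the linear agreement between the two score vectors.
r, p_value = stats.pearsonr(model_scores, human_scores)
print(r, p_value)
```

An r close to 1 means the model ranks and spaces sentence pairs much like human raters do; the actual benchmark run below does the same computation on the full dev or test split.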
Build evaluation graph
sts_input1 = tf.sparse_placeholder(tf.int64, shape=(None, None))
sts_input2 = tf.sparse_placeholder(tf.int64, shape=(None, None))

# For evaluation we use exactly normalized rather than
# approximately normalized.
sts_encode1 = tf.nn.l2_normalize(
    module(
        inputs=dict(values=sts_input1.values,
                    indices=sts_input1.indices,
                    dense_shape=sts_input1.dense_shape)),
    axis=1)
sts_encode2 = tf.nn.l2_normalize(
    module(
        inputs=dict(values=sts_input2.values,
                    indices=sts_input2.indices,
                    dense_shape=sts_input2.dense_shape)),
    axis=1)

sim_scores = -tf.acos(tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1))
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluate sentence embeddings
#@title Choose dataset for benchmark
dataset = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}

values1, indices1, dense_shape1 = process_to_IDs_in_sparse_format(sp, dataset['sent_1'].tolist())
values2, indices2, dense_shape2 = process_to_IDs_in_sparse_format(sp, dataset['sent_2'].tolist())
similarity_scores = dataset['sim'].tolist()

def run_sts_benchmark(session):
    """Returns the similarity scores."""
    scores = session.run(
        sim_scores,
        feed_dict={
            sts_input1.values: values1,
            sts_input1.indices: indices1,
            sts_input1.dense_shape: dense_shape1,
            sts_input2.values: values2,
            sts_input2.indices: indices2,
            sts_input2.dense_shape: dense_shape2,
        })
    return scores

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    scores = run_sts_benchmark(session)

pearson_correlation = scipy.stats.pearsonr(scores, similarity_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
    pearson_correlation[0], pearson_correlation[1]))
For high dpi displays.
%config InlineBackend.figure_format = 'retina'
examples/6_p_scale_test_Dorogokupets2007_Au.ipynb
SHDShim/pytheos
apache-2.0
0. General note
This example compares pressures calculated by pytheos with those from the original publication for the gold scale of Dorogokupets 2007.
1. Global setup
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
3. Compare
eta = np.linspace(1., 0.65, 8)
print(eta)

dorogokupets2007_au = eos.gold.Dorogokupets2007()
help(dorogokupets2007_au)
dorogokupets2007_au.print_equations()
dorogokupets2007_au.print_parameters()

v0 = 67.84742110765599
dorogokupets2007_au.three_r

v = v0 * eta
temp = 2500.
p = dorogokupets2007_au.cal_p(v, temp * np.ones_like(v))
<img src='./tables/Dorogokupets2007_Au.png'>
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
    print("{0: .3f} {1: .2f} ".format(eta_i, p_i))

v = dorogokupets2007_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print(1. - (v / v0))
Depending on the dimensionality of the data, Pandas stores it as either a Series or a DataFrame. Both are introduced below.
<a id="01"/>Series: a word-frequency table
A Python dictionary can be stored as a Pandas Series. The dictionary has the form {index_1: value_1, index_2: value_2, index_3: value_3, ...}.
import pandas as pd

adjs = {'affordable': 1, 'comfortable': 3, 'comparable': 1, 'different': 1,
        'disappointed': 1, 'fantastic': 1, 'good': 8, 'great': 15}
ser = pd.Series(adjs)
tutorials/PandasTutorial.ipynb
chi-hung/PythonTutorial
mit
Use [ ] to take the first three entries of the series.
ser[:3]
Back to the index
<a id="02"/>Inspect the series index with the index attribute
ser.index
Back to the index
<a id="03"/>Extracting series entries with loc[] and iloc[]
Using [index label] retrieves the value associated with that index label.
ser['comfortable']

for ind in ser.index:
    print(ind, ser[ind])
ser[label] is equivalent to ser.loc[label]:
for ind in ser.index:
    print(ind, ser.loc[ind])
iloc[position] retrieves the value at that integer position, i.e. the entry's rank within the series.
for i, ind in enumerate(ser.index):
    print(ind, ser[ind])
    print(ind, ser.iloc[i])
    print('------------')
Back to the index
<a id="04"/>Processing a series with apply()
ser
Use apply() to add 1 to every value in the series.
ser.apply(lambda x:x+1)
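For a simple element-wise operation like this, ordinary vectorized arithmetic gives the same result and is usually faster than apply(); a minimal sketch on a small hand-made series:

```python
import pandas as pd

ser = pd.Series({'good': 8, 'great': 15})
via_apply = ser.apply(lambda x: x + 1)
vectorized = ser + 1  # same values, without a Python-level loop
print(via_apply.equals(vectorized))  # True
```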
Back to the index
<a id="05"/>Plotting the word frequencies
ser.plot.bar()
The above is equivalent to:
ser.plot(kind='bar')
Back to the index
<a id="06"/>Series: a Gaussian distribution
We can also pass Pandas a plain, index-free sequence (not a dictionary):
import numpy as np

ser = pd.Series(np.random.normal(0, 1, 1000))
ser[:5]
Since no index was supplied, one is generated automatically as a running number starting from 0.
ser.shape
Back to the index
<a id="07"/>Plotting a box plot
ser.plot.box()
The result above is equivalent to:
ser.plot(kind='box')
Earlier we used pd.Series(data) and let the index be generated automatically starting from 0. If we want to supply our own index, we can use pd.Series(data, index) to build a series with its own index.
ser = pd.Series(np.random.normal(0, 1, 1000),
                index=np.random.choice(['A', 'B'], 1000))
ser.head(5)
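To summarize the Series constructors seen so far, a minimal sketch of all three input styles on tiny hand-made data:

```python
import numpy as np
import pandas as pd

# 1) A dict: keys become the index.
s1 = pd.Series({'a': 1, 'b': 2})
# 2) A 1-D sequence without an index: a running number becomes the index.
s2 = pd.Series([10, 20, 30])
# 3) Two equal-length sequences: one is the data, the other its index.
s3 = pd.Series(np.array([1.5, 2.5]), index=['x', 'y'])

print(list(s1.index))  # ['a', 'b']
print(list(s2.index))  # [0, 1, 2]
print(list(s3.index))  # ['x', 'y']
```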
Recap
* Given a dictionary (key: value), the keys become the index.
* Given a 1-D sequence without an index, a running number becomes the index.
* Given two 1-D sequences of equal length, one is the data and the other is its index.

Back to the index
<a id="08"/>DataFrame
Below we build a data frame with three columns and an automatically numbered index. We can pass {column_name_1: column_1, column_name_2: column_2, column_name_3: column_3, ...} to pandas.DataFrame() to construct a data frame.
a = [1, 2, 3] * 2
b = [1, 2, 3, 10, 20, 30]
c = np.array(b) * 2
df = pd.DataFrame({'col1': a, 'col2': b, 'col3': c})
df
Appending [column name] directly to the df object selects that column.
df['col2']
Back to the index
<a id="09"/>Selecting data with loc[] or iloc[]
Use loc or iloc when you want to select not only columns but also a specific range of the index. loc[index labels, column names] extracts a sub-range of the data frame.
Example: select the rows with index 0, 1, 2 in column col2.
df.loc[0:2,'col2']
In the example above the index is numeric, so the slice $0:2$ selects the rows with index $0$ through $2$ ($0,1,2$).
Example: select the rows with index 0, 1, 2 in columns col1 and col2.
df.loc[0:2, ['col1', 'col2']]
df
Select the rows whose value in column 'col1' equals 1.
df['col1'] == 1
df[df['col1'] == 1]
First pick the rows with the condition df['col1']==1, then specify 'col2' as the column to select.
df.loc[df['col1']==1,'col2']
You can also use iloc[row positions, column positions] to select data by integer position.
df
df.iloc[0:2, 1]
Back to the index
<a id="ex00"/>Exercise 0: select the paths whose digit is 7, and report how many such rows there are.
import os

def filePathsGen(rootPath):
    paths = []
    dirs = []
    for dirPath, dirNames, fileNames in os.walk(rootPath):
        for fileName in fileNames:
            fullPath = os.path.join(dirPath, fileName)
            paths.append((int(dirPath[len(rootPath)]), fullPath))
        dirs.append(dirNames)
    return dirs, paths

dirs, paths = filePathsGen('mnist/')  # load the image paths
dfPath = pd.DataFrame(paths, columns=['class', 'path'])  # store the paths in a DataFrame
dfPath.head(5)  # look at the first 5 rows

# Complete the following code:
dfPath[...]
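One possible solution, sketched here on a tiny hand-made stand-in for dfPath (the real frame is built from the mnist/ folder by filePathsGen above):

```python
import pandas as pd

# Toy stand-in for dfPath, just to illustrate the boolean-mask idiom.
dfPath = pd.DataFrame({'class': [7, 3, 7, 0],
                       'path': ['a.png', 'b.png', 'c.png', 'd.png']})

sevens = dfPath[dfPath['class'] == 7]  # rows whose digit is 7
print(sevens['path'].tolist())  # ['a.png', 'c.png']
print(len(sevens))              # 2
```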
Back to the index
<a id="10"/>groupby()
Pandas can perform all sorts of operations equivalent to SQL queries; see the comparison between Pandas and SQL commands: http://pandas.pydata.org/pandas-docs/stable/comparison_with_sql.html
Let us start with GROUP BY, a staple of SQL queries: how is it done in pandas?
df
Group by column 'col1'.
grouped=df.groupby('col1')
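For reference, the SQL statement SELECT col1, SUM(col2) FROM df GROUP BY col1 corresponds to one chained pandas expression (sketched on the same small frame as above):

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3] * 2,
                   'col2': [1, 2, 3, 10, 20, 30]})
# Equivalent of: SELECT col1, SUM(col2) FROM df GROUP BY col1
totals = df.groupby('col1')['col2'].sum()
print(totals.to_dict())  # {1: 11, 2: 22, 3: 33}
```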
Back to the index
<a id="11"/>Inspecting the groupby result with the indices attribute
grouped.indices
The grouped object is now a collection of three groups: group 1 holds the rows with index 0 and 3, group 2 the rows with index 1 and 4, and group 3 the rows with index 2 and 5.
Back to the index
<a id="12"/>We can pull the groups out of the collection one by one with a for loop
for name, group in grouped:
    print(name)
    print(group)
    print('--------------')
Unpack the groups directly with *.
print(*grouped)
Back to the index
<a id="13"/>Use get_group(2) to extract group 2 from the collection
grouped.get_group(2)
Back to the index
<a id="14"/>After groupby, use sum() to total each group
grouped.sum()
Back to the index
<a id="15"/>Or use describe() for simple statistics on each group
grouped.describe()
Back to the index
<a id="16"/>Generating a random time series to understand groupby, aggregate, transform, and filter
We now build a randomly generated time series and use it to learn how, after grouping (groupby), the resulting groups can be aggregated (aggregate), transformed (transform), or filtered (filter) to obtain the result we want. First, generate $365\times2$ dates.
dates = pd.date_range('2017/1/1', periods=365*2,freq='D')
Build a time series indexed by the dates just generated. The first 365 random numbers are samples drawn from the normal distribution $N(\mu=0,\sigma=1)$, while the last 365 are samples drawn from $N(\mu=6,\sigma=1)$. Below, we horizontally stack the two sequences of random numbers and feed the result to pd.Series().
dat = pd.Series(np.hstack((np.random.normal(0, 1, 365),
                           np.random.normal(6, 1, 365))),
                dates)
dat[:5]
Next we group the series by year.
grouped = pd.Series(dat).groupby(lambda x: x.year)
grouped.indices
As expected, since the 2017 and 2018 data were generated from different random distributions, the two years plot as clearly separated curves.
for name, group in grouped:
    group.plot(label=name, legend=True)
Plotting histograms shows that the two data sets do indeed look normally distributed, one centered near x=0 and the other near x=6.
Method 1
grouped.plot.hist(15)
Method 2 (with labels for the two data sets)
for name, group in grouped:
    group.plot.hist(15, label=name, legend=True)
Box plot
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2)
for idx, (name, group) in enumerate(grouped):
    group.plot(kind='box', label=name, ax=axes[idx])
Next we want a grouped box plot. Here we hit a snag: the boxplot() method exists only on DataFrame, not on Series, so we have to convert the Series to a DataFrame.
Note: we want boxplot() because a single line of it can group the data and draw the box plot automatically.
dat.name='random variables'
Convert the series dat to a DataFrame and turn the date index into a column.
datNew=dat.to_frame().reset_index()
Extract the year from the date column and store it in a column named 'year'.
datNew['year']=datNew['index'].apply(lambda x:x.year)
Delete the date column that used to be the index.
del datNew['index']
We end up with a new data frame that can be grouped by the 'year' column.
datNew[:5]
Finally, draw the box plot with boxplot(), grouping by year.
datNew.boxplot(by='year')
The box plot shows that the 2017 data have a mean near 0 while the 2018 data have a mean near 6, as expected. Drawing it with seaborn may look nicer:
import seaborn as sns

sns.boxplot(data=datNew, x='year', y='random variables', width=0.3)
dat[:5]
Back to the index
<a id="17"/>After grouping the data by year, transform it with transform()
What we want to do here is standardize the normally distributed random numbers, i.e. convert $x\sim N(\mu,\sigma)$ into $x_{new}=\frac{x-\mu}{\sigma}\sim N(0,1)$.
https://en.wikipedia.org/wiki/Standard_score
grouped = dat.groupby(lambda x: x.year)
transformed = grouped.transform(lambda x: (x - x.mean()) / x.std())
transformed[:5]
len(transformed)
Plot the series before and after the transformation for comparison.
compare = pd.DataFrame({'before transformation': dat,
                        'after transformation': transformed})
compare.plot()
After the transformation, the 2018 data follow the same distribution as the 2017 data.
Back to the index
<a id="18"/>After groupby, besides transforming with transform(), we can aggregate with agg() to compute each group's mean and standard deviation
Group the original data.
groups=dat.groupby(lambda x:x.year)
Group the transformed data.
groupsTrans=transformed.groupby(lambda x:x.year)
Compute each group's mean for the original data.
groups.agg(np.mean)
Compute each group's standard deviation for the original data.
groups.agg(np.std)
Compute each group's mean for the transformed data.
groupsTrans.agg(np.mean)
Compute each group's standard deviation for the transformed data.
groupsTrans.agg(np.std)
Back to the index
<a id="19"/>After groupby, use filter() to keep the groups that satisfy a predicate
Say we want to find the groups whose mean is less than 5. From the earlier plots, or from how the data were generated, we know the 2017 group has mean below five, so the condition np.abs(x.mean())<5 filters out exactly the 2017 group's data.
filtered = groups.filter(lambda x: np.abs(x.mean()) < 5)
len(filtered)
filtered[:5]
Likewise, we know the 2018 group has mean above five, so the condition np.abs(x.mean())>5 selects the 2018 group's data.
filtered = groups.filter(lambda x: np.abs(x.mean()) > 5)
len(filtered)
filtered[:5]
With np.abs(x.mean())>6, we will typically find no group at all (the 2018 sample mean is only approximately 6).
filtered = groups.filter(lambda x: np.abs(x.mean()) > 6)
len(filtered)
With np.abs(x.mean())<6, we typically keep the data of every group.
filtered = groups.filter(lambda x: np.abs(x.mean()) < 6)
len(filtered)
filtered[:5]
filtered[-1-5:-1]
Back to the index
<a id="20"/>Handling missing values (NaN)
dat = pd.Series(np.random.normal(0, 1, 6))
dat
Replace the value at index 2 with NaN.
dat[2] = np.nan
dat
Shift every value in the series forward by one position. This turns the first entry into NaN.
shifted = dat.shift(1)
shifted
Fill each NaN with the next value after it (backward fill).
shifted.bfill()
Fill each NaN with the value before it (forward fill).
shifted.ffill()
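Besides bfill() and ffill(), filling with a constant via fillna() and linear interpolate() are common alternatives, sketched here on a small hand-made series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.fillna(0).tolist())      # [1.0, 0.0, 3.0] -- replace NaN with a constant
print(s.interpolate().tolist())  # [1.0, 2.0, 3.0] -- linear interpolation
```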