The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.

Overview
* Dealing with missing data
  * Eliminating samples or features with missing values
  * Imputing missing values
* Understanding the scikit-learn estimator API
* Handling categorical data
  * Mapping ordinal features
  * Encoding class labels
  * Performing one-hot encoding on nominal features
* Partitioning a dataset into training and test sets
* Bringing features onto the same scale
* Selecting meaningful features
  * Sparse solutions with L1 regularization
  * Sequential feature selection algorithms
* Assessing feature importance with random forests
* Summary
from IPython.display import Image
%matplotlib inline

# Added version check for the scikit-learn 0.18 API changes
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
code/ch04/ch04.ipynb
1iyiwei/pyml
mit
Dealing with missing data

The training data might be incomplete for various reasons:
* data collection errors
* measurements not applicable
* etc.

Most machine learning algorithms/implementations cannot robustly deal with missing data, so we need to handle missing data before training models. We use the pandas (Python data analysis) library for dealing with missing data in the examples below.
import pandas as pd
from io import StringIO

csv_data = '''A,B,C,D
1.0,2.0,3.0,4.0
5.0,6.0,,8.0
10.0,11.0,12.0,'''

# If you are using Python 2.7, you need
# to convert the string to unicode:
# csv_data = unicode(csv_data)

df = pd.read_csv(StringIO(csv_data))
df
The columns (A, B, C, D) are features. The rows (0, 1, 2) are samples. Missing values become NaN (not a number).
df.isnull()
df.isnull().sum(axis=0)
Eliminating samples or features with missing values

One simple strategy is to eliminate samples (table rows) or features (table columns) with missing values, based on various criteria.
# the default is to drop samples/rows
df.dropna()

# but we can also elect to drop features/columns
df.dropna(axis=1)

# only drop rows where all columns are NaN
df.dropna(how='all')

# drop rows that have fewer than 4 non-NaN values
df.dropna(thresh=4)

# only drop rows where NaN appears in specific columns (here: 'C')
df.dropna(subset=['C'])
Dropping data might not be desirable, as the resulting data set might become too small.

Imputing missing values

Interpolating missing values from existing ones can better preserve the original data.

Impute: the process of replacing missing data with substituted values <a href="https://en.wikipedia.org/wiki/Imputation_(statistics)">in statistics</a>
from sklearn.preprocessing import Imputer

# strategy options include mean, median, and most_frequent
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(df)
imputed_data = imr.transform(df.values)
imputed_data
For example, 7.5 is the average of 3 and 12. 6 is the average of 4 and 8.
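These averages can be checked directly in pandas (a small illustration added here, not part of the original notebook): `fillna` with the per-column means reproduces the mean-strategy imputation.

```python
import pandas as pd
from io import StringIO

csv_data = '''A,B,C,D
1.0,2.0,3.0,4.0
5.0,6.0,,8.0
10.0,11.0,12.0,'''
df = pd.read_csv(StringIO(csv_data))

# column means skip the missing entries by default
means = df.mean()          # C: (3 + 12) / 2 = 7.5, D: (4 + 8) / 2 = 6.0
filled = df.fillna(means)  # same result as the mean-strategy imputer
print(filled)
```

This makes explicit that each NaN is replaced by the mean of the non-missing values in its own column.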
df.values
We can do better than this by selecting only the most similar rows for interpolation, instead of all rows. This is how a recommendation system could work, e.g. predicting your potential rating of a movie or book you have not seen, based on item ratings from you and other users.

<i>Programming Collective Intelligence: Building Smart Web 2.0 Applications, by Toby Segaran</i>
* very good reference book: recommendation systems, search engines, etc.
* I didn't choose it as one of the text/reference books as the code/data is a bit out of date

<a href="https://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325/ref=sr_1_1?s=books&ie=UTF8&qid=1475564389&sr=1-1&keywords=collective+intelligence"> <img src="https://images-na.ssl-images-amazon.com/images/I/51LolW3DugL._SX379_BO1,204,203,200_.jpg" width=25% align=right> </a>

Understanding the scikit-learn estimator API

Transformer class for data transformation
* imputer

Key methods
* fit() for fitting from (training) data
* transform() for transforming future data based on the fitted data

Good API designs are consistent. For example, the fit() method has similar meanings for different classes, such as transformer and estimator.

Transformer <img src='./images/04_04.png' width=80%>

Estimator <img src='./images/04_05.png' width=80%>

Handling different types of data

There are different types of feature data: numerical and categorical. Numerical features are numbers and often "continuous" like real numbers. Categorical features are "discrete", and can be either nominal or ordinal.
* Ordinal values are discrete but carry some numerical meaning, such as ordering, and thus can be sorted.
* Nominal values have no numerical meaning.

In the example below:
* color is nominal (no numerical meaning)
* size is ordinal (can be sorted in some way)
* price is numerical

A given dataset can contain features of different types. It is important to handle them carefully.
For example, do not treat nominal values as numbers without proper mapping.
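To make the fit()/transform() contract described above concrete, here is a minimal hand-rolled transformer (the class name MeanImputer is ours, for illustration only): it memorizes column means at fit time and reuses them on any later data.

```python
import numpy as np

class MeanImputer:
    """Minimal illustration of the scikit-learn transformer API."""

    def fit(self, X):
        # learn the statistics from the training data only
        self.means_ = np.nanmean(np.array(X, dtype=float), axis=0)
        return self

    def transform(self, X):
        # apply the *fitted* statistics to any data, including future data
        X = np.array(X, dtype=float)
        rows, cols = np.where(np.isnan(X))
        X[rows, cols] = self.means_[cols]
        return X

X_train = np.array([[1.0, 2.0], [np.nan, 4.0]])
imp = MeanImputer().fit(X_train)        # means learned: [1.0, 3.0]
print(imp.transform([[np.nan, np.nan]]))  # fills with the training means
```

The point of the shared API is exactly this separation: statistics come from fit() on the training data, and transform() applies them unchanged to test or future data.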
import pandas as pd

df = pd.DataFrame([['green', 'M', 10.1, 'class1'],
                   ['red', 'L', 13.5, 'class2'],
                   ['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'price', 'classlabel']
df
Data conversion

For some estimators, such as decision trees, that handle one feature at a time, it is OK to keep the features as they are. However, for estimators that need to handle multiple features together, we need to convert them into compatible forms before proceeding:
1. convert categorical values into numerical values
2. scale/normalize numerical values

Mapping ordinal features

Ordinal features can be converted into numbers, but the conversion often depends on semantics and thus needs to be specified manually (by a human) instead of automatically (by a machine). In the example below, we can map sizes into numbers. Intuitively, larger sizes should map to larger values. Exactly which values to map to is often a judgment call. Below, we use a Python dictionary to define the mapping.
size_mapping = {'XL': 3, 'L': 2, 'M': 1}
df['size'] = df['size'].map(size_mapping)
df

inv_size_mapping = {v: k for k, v in size_mapping.items()}
df['size'].map(inv_size_mapping)
Encoding class labels

Class labels often need to be represented as integers for machine learning libraries
* not ordinal, so any integer mapping will do
* but a good idea and convention is to use consecutive small values like 0, 1, ...
import numpy as np

class_mapping = {label: idx for idx, label in
                 enumerate(np.unique(df['classlabel']))}
class_mapping

# forward map
df['classlabel'] = df['classlabel'].map(class_mapping)
df

# inverse map
inv_class_mapping = {v: k for k, v in class_mapping.items()}
df['classlabel'] = df['classlabel'].map(inv_class_mapping)
df
We can use LabelEncoder in scikit-learn to convert class labels automatically.
from sklearn.preprocessing import LabelEncoder

class_le = LabelEncoder()
y = class_le.fit_transform(df['classlabel'].values)
y
class_le.inverse_transform(y)
Performing one-hot encoding on nominal features

However, unlike class labels, we cannot just convert nominal features (such as colors) directly into integers. A common mistake is to map nominal features into numerical values, e.g. for colors
* blue $\rightarrow$ 0
* green $\rightarrow$ 1
* red $\rightarrow$ 2
X = df[['color', 'size', 'price']].values

color_le = LabelEncoder()
X[:, 0] = color_le.fit_transform(X[:, 0])
X
For categorical features, it is important to keep the mapped values "equal distance" from one another
* unless you have good reasons otherwise

For example, for the colors red, green, blue, we want to convert them to values so that each color has equal distance from every other. This cannot be done in 1D but is doable in 2D (how? think about it). One-hot encoding is a straightforward way to make this just work, by mapping an n-value nominal feature into an n-dimensional binary vector:
* blue $\rightarrow$ (1, 0, 0)
* green $\rightarrow$ (0, 1, 0)
* red $\rightarrow$ (0, 0, 1)
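A quick numeric check (added here for illustration) that one-hot codes are indeed equidistant:

```python
import numpy as np

# one-hot codes for blue, green, red
codes = np.eye(3)

# every pair of distinct colors is the same Euclidean distance apart
for i in range(3):
    for j in range(i + 1, 3):
        print(np.linalg.norm(codes[i] - codes[j]))  # sqrt(2) for every pair
```

In 2D the three colors could sit at the corners of an equilateral triangle; one-hot encoding achieves the same equidistance mechanically in n dimensions.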
from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder(categorical_features=[0])
ohe.fit_transform(X).toarray()

# automatic conversion via the get_dummies method in pandas
pd.get_dummies(df[['price', 'color', 'size']])
df
Partitioning a dataset into training and test sets

Training set: to train the models. Test set: to evaluate the trained models. Separate the two to avoid over-fitting
* well trained models should generalize to unseen, test data

Validation set: for tuning hyper-parameters
* parameters are trained by algorithms
* hyper-parameters are selected by humans
* will talk about this later

Wine dataset

A dataset to classify wines based on 13 features and 178 samples.
wine_data_remote = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data'
wine_data_local = '../datasets/wine/wine.data'

df_wine = pd.read_csv(wine_data_local, header=None)

df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
                   'Alcalinity of ash', 'Magnesium', 'Total phenols',
                   'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
                   'Color intensity', 'Hue',
                   'OD280/OD315 of diluted wines', 'Proline']

print('Class labels', np.unique(df_wine['Class label']))
df_wine.head()
<hr> Note: If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/wine/wine.data. Or you could fetch it via
df_wine = pd.read_csv('https://raw.githubusercontent.com/1iyiwei/pyml/master/code/datasets/wine/wine.data',
                      header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
                   'Alcalinity of ash', 'Magnesium', 'Total phenols',
                   'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
                   'Color intensity', 'Hue',
                   'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
How do we allocate training and test proportions? We want as much training data as possible for an accurate model, and as much test data as possible for a reliable evaluation. Usual ratios are 60:40, 70:30, or 80:20. Larger datasets can devote a larger portion to training
* e.g. 90:10

Other partitions are possible
* we will talk about this later in model evaluation and parameter tuning
if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split

X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.3, random_state=0)

print(X.shape)

import numpy as np
print(np.unique(y))
Bringing features onto the same scale

Most machine learning algorithms behave better when features are on similar scales. Exceptions:
* decision trees
* random forests

Example: two features, on scales [0, 1] and [0, 100000]. Think about what happens when we apply
* perceptron
* KNN

Two common approaches

Normalization (min-max scaler): $$\frac{x-x_{min}}{x_{max}-x_{min}}$$

Standardization (standard scaler): $$\frac{x-x_\mu}{x_\sigma}$$
* $x_\mu$: mean of the x values
* $x_\sigma$: standard deviation of the x values

Standardization is more common, as normalization is sensitive to outliers.
from sklearn.preprocessing import MinMaxScaler

mms = MinMaxScaler()
X_train_norm = mms.fit_transform(X_train)
X_test_norm = mms.transform(X_test)

from sklearn.preprocessing import StandardScaler

stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
Toy case: standardization versus normalization
ex = pd.DataFrame([0, 1, 2, 3, 4, 5])

# standardize
ex[1] = (ex[0] - ex[0].mean()) / ex[0].std(ddof=0)

# Please note that pandas uses ddof=1 (sample standard deviation)
# by default, whereas NumPy's std method and the StandardScaler
# use ddof=0 (population standard deviation)

# normalize
ex[2] = (ex[0] - ex[0].min()) / (ex[0].max() - ex[0].min())

ex.columns = ['input', 'standardized', 'normalized']
ex
Selecting meaningful features

Overfitting is a common problem for machine learning.
* the model fits the training data too closely and fails to generalize to real data
* the model is too complex for the given training data

<img src="./images/03_06.png" width=80%>

Ways to address overfitting
* Collect more training data (to make overfitting less likely)
* Reduce the model complexity explicitly, such as the number of parameters
* Reduce the model complexity implicitly via regularization
* Reduce data dimensionality, which forces model reduction

The amount of data should be sufficient relative to the model complexity.

Objective

We can sum up both the loss and regularization terms as the total objective: $$\Phi(\mathbf{X}, \mathbf{T}, \Theta) = L\left(\mathbf{X}, \mathbf{T}, \mathbf{Y}=f(\mathbf{X}, \Theta)\right) + P(\Theta)$$ During training, the goal is to optimize the parameters $\Theta$ with respect to the given training data $\mathbf{X}$ and $\mathbf{T}$: $$argmin_\Theta \; \Phi(\mathbf{X}, \mathbf{T}, \Theta)$$ And hope the trained model will generalize well to future data.

Loss

Every machine learning task has a goal, which can be formalized as a loss function: $$L(\mathbf{X}, \mathbf{T}, \mathbf{Y})$$ , where $\mathbf{T}$ is some form of target or auxiliary information, such as:
* labels for supervised classification
* number of clusters for unsupervised clustering
* environment for reinforcement learning

Regularization

In addition to the objective, we often care about the simplicity of the model, for better efficiency and generalization (avoiding over-fitting). The complexity of the model can be measured by another penalty function: $$P(\Theta)$$ Common penalty functions include the number and/or magnitude of parameters.

Regularization

For the weight vector $\mathbf{w}$ of some model (e.g.
perceptron or SVM)

$L_2$: $\|\mathbf{w}\|_2^2 = \sum_{k} w_k^2$

$L_1$: $\|\mathbf{w}\|_1 = \sum_{k} \left|w_k\right|$

$L_1$ tends to produce sparser solutions than $L_2$
* more zero weights
* more like feature selection

<img src='./images/04_12.png' width=80%> <img src='./images/04_13.png' width=80%>

We are more likely to bump into sharp corners of an object. Experiment: drop a circle and a square onto a flat floor. What is the probability of hitting any point on the shape? <img src="./images/sharp_and_round.svg" width=80%> How about a non-flat floor, e.g. concave or convex with different curvatures?

Regularization in scikit-learn

Many ML models support regularization with different
* methods (e.g. $L_1$ and $L_2$)
* strengths (the $C$ value is inversely proportional to the regularization strength)
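The sparsity effect can be illustrated with the standard one-dimensional update rules for the two penalties (a sketch assuming a simple quadratic loss; this example is ours, not from the original notebook): the L1 soft-threshold sets small weights to exactly zero, while L2 shrinkage only scales them down.

```python
import numpy as np

w = np.array([0.05, -0.3, 1.2, -0.02, 0.8])
lam = 0.1

# L2 shrinkage: scales every weight, none become exactly zero
w_l2 = w / (1.0 + lam)

# L1 soft-thresholding: weights with magnitude below lam become exactly zero
w_l1 = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

print(w_l2)  # all five entries remain nonzero
print(w_l1)  # the 0.05 and -0.02 entries become 0
```

This is the algebraic counterpart of the "sharp corners" picture: the corners of the L1 ball sit on the coordinate axes, where some weights are exactly zero.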
from sklearn.linear_model import LogisticRegression

# L1 regularization
lr = LogisticRegression(penalty='l1', C=0.1)
lr.fit(X_train_std, y_train)

# compare training and test accuracy to see if there is overfitting
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy:', lr.score(X_test_std, y_test))

# 3 sets of parameters due to one-versus-rest with 3 classes
lr.intercept_

# 13 coefficients for the 13 wine features; notice many of them are 0
lr.coef_

# L2 regularization
lr = LogisticRegression(penalty='l2', C=0.1)
lr.fit(X_train_std, y_train)

# compare training and test accuracy to see if there is overfitting
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy:', lr.score(X_test_std, y_test))

# notice the disappearance of 0 coefficients due to L2
lr.coef_
Plot regularization

$C$ is inversely proportional to the regularization strength.
import matplotlib.pyplot as plt

fig = plt.figure()
ax = plt.subplot(111)

colors = ['blue', 'green', 'red', 'cyan',
          'magenta', 'yellow', 'black',
          'pink', 'lightgreen', 'lightblue',
          'gray', 'indigo', 'orange']

weights, params = [], []
for c in np.arange(-4, 6):
    lr = LogisticRegression(penalty='l1', C=10**c, random_state=0)
    lr.fit(X_train_std, y_train)
    weights.append(lr.coef_[1])
    params.append(10**c)

weights = np.array(weights)

for column, color in zip(range(weights.shape[1]), colors):
    plt.plot(params, weights[:, column],
             label=df_wine.columns[column + 1],
             color=color)

plt.axhline(0, color='black', linestyle='--', linewidth=3)
plt.xlim([10**(-5), 10**5])
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.xscale('log')
plt.legend(loc='upper left')
ax.legend(loc='upper center',
          bbox_to_anchor=(1.38, 1.03),
          ncol=1, fancybox=True)
# plt.savefig('./figures/l1_path.png', dpi=300)
plt.show()
Dimensionality reduction

$L_1$ regularization implicitly selects features by zeroing out weights.

Feature selection
* explicit - you specify how many features to select, and the algorithm picks the most relevant (not necessarily most important) ones
* forward, backward
* next topic

Note: 2 important features might be highly correlated, and thus it is relevant to select only 1.

Feature extraction
* implicit
* can build new features, not just select original ones
* e.g. PCA
* next chapter

Sequential feature selection algorithms

Feature selection is a way to reduce input data dimensionality. You can think of it as reducing the number of columns of the input data table/frame. How do we decide which features/columns to keep? Intuitively, we want to keep relevant ones and remove the rest. We can select these features sequentially, either forward or backward.

Backward selection

Sequential backward selection (SBS) is a simple heuristic. The basic idea is to start with $n$ features, consider all possible $n-1$ feature subsets, and remove the feature that matters the least for model training. We then continue reducing the number of features ($n-2, n-3, \cdots$) until reaching the desired number of features.
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.metrics import accuracy_score

if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split


class SBS():
    def __init__(self, estimator, k_features, scoring=accuracy_score,
                 test_size=0.25, random_state=1):
        self.scoring = scoring
        self.estimator = clone(estimator)
        self.k_features = k_features
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        X_train, X_test, y_train, y_test = \
            train_test_split(X, y, test_size=self.test_size,
                             random_state=self.random_state)

        dim = X_train.shape[1]
        self.indices_ = tuple(range(dim))
        self.subsets_ = [self.indices_]
        score = self._calc_score(X_train, y_train,
                                 X_test, y_test, self.indices_)
        self.scores_ = [score]

        while dim > self.k_features:
            scores = []
            subsets = []

            for p in combinations(self.indices_, r=dim - 1):
                score = self._calc_score(X_train, y_train,
                                         X_test, y_test, p)
                scores.append(score)
                subsets.append(p)

            best = np.argmax(scores)
            self.indices_ = subsets[best]
            self.subsets_.append(self.indices_)
            dim -= 1

            self.scores_.append(scores[best])
        self.k_score_ = self.scores_[-1]

        return self

    def transform(self, X):
        return X[:, self.indices_]

    def _calc_score(self, X_train, y_train, X_test, y_test, indices):
        self.estimator.fit(X_train[:, indices], y_train)
        y_pred = self.estimator.predict(X_test[:, indices])
        score = self.scoring(y_test, y_pred)
        return score
Below we apply the SBS class above, using the KNN classifier, which can suffer from the curse of dimensionality.
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=2)

# selecting features
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)

# plotting performance of feature subsets
k_feat = [len(k) for k in sbs.subsets_]

plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.1])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
plt.tight_layout()
# plt.savefig('./sbs.png', dpi=300)
plt.show()

# list the 5 most important features
k5 = list(sbs.subsets_[8])  # subsets_[8] has 13 - 8 = 5 features
print(df_wine.columns[1:][k5])

knn.fit(X_train_std, y_train)
print('Training accuracy:', knn.score(X_train_std, y_train))
print('Test accuracy:', knn.score(X_test_std, y_test))

knn.fit(X_train_std[:, k5], y_train)
print('Training accuracy:', knn.score(X_train_std[:, k5], y_train))
print('Test accuracy:', knn.score(X_test_std[:, k5], y_test))
Note the improved test accuracy from fitting lower dimensional training/test data.

Forward selection

This is essentially the reverse of backward selection; we will leave this as an exercise.

Assessing Feature Importances with Random Forests

Recall
* a decision tree is built by splitting nodes
* each node split is chosen to maximize information gain
* a random forest is a collection of decision trees with randomly selected features

The information gain (or impurity loss) at each node can measure the importance of the feature being split.
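As a hint for the forward-selection exercise, here is a minimal sketch (the function name sfs and its signature are ours, mirroring the SBS class above but greedily adding instead of removing features):

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def sfs(estimator, X_train, y_train, X_test, y_test, k_features):
    """Greedily add the feature that improves held-out accuracy the most."""
    selected = []
    remaining = list(range(X_train.shape[1]))
    while len(selected) < k_features:
        scores = []
        for f in remaining:
            trial = selected + [f]
            est = clone(estimator)
            est.fit(X_train[:, trial], y_train)
            scores.append(accuracy_score(y_test, est.predict(X_test[:, trial])))
        # keep the candidate feature with the best score
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```

Like SBS, this evaluates each candidate on a held-out split; unlike SBS, it starts from the empty set and grows it one feature at a time.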
# feature_importances_ from the random forest classifier records this info
from sklearn.ensemble import RandomForestClassifier

feat_labels = df_wine.columns[1:]

forest = RandomForestClassifier(n_estimators=10000,
                                random_state=0,
                                n_jobs=-1)
forest.fit(X_train, y_train)

importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]

for f in range(X_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30,
                            feat_labels[indices[f]],
                            importances[indices[f]]))

plt.title('Feature Importances')
plt.bar(range(X_train.shape[1]),
        importances[indices],
        color='lightblue',
        align='center')
plt.xticks(range(X_train.shape[1]),
           feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
# plt.savefig('./random_forest.png', dpi=300)
plt.show()

threshold = 0.15
if False:  # Version(sklearn_version) < '0.18':
    X_selected = forest.transform(X_train, threshold=threshold)
else:
    from sklearn.feature_selection import SelectFromModel
    sfm = SelectFromModel(forest, threshold=threshold, prefit=True)
    X_selected = sfm.transform(X_train)
X_selected.shape
Now, let's print the 3 features that met the threshold criterion for feature selection that we set earlier (note that this code snippet does not appear in the actual book but was added to this notebook later for illustrative purposes):
for f in range(X_selected.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30,
                            feat_labels[indices[f]],
                            importances[indices[f]]))
Testing LRISb spectrum
import json

import arclines
from astropy.table import Table

test_arc_path = arclines.__path__[0]+'/data/sources/'
src_file = 'lrisb_600_4000_PYPIT.json'
with open(test_arc_path+src_file, 'r') as f:
    pypit_fit = json.load(f)
spec = pypit_fit['spec']

tbl = Table()
tbl['spec'] = spec
tbl.write(arclines.__path__[0]+'/tests/files/LRISb_600_spec.ascii', format='ascii')
docs/nb/Match_script.ipynb
PYPIT/arclines
bsd-3-clause
Run

arclines_match LRISb_600_spec.ascii 4000. 1.26 CdI,HgI,ZnI

Kastr
test_arc_path = '/Users/xavier/local/Python/PYPIT-development-suite/REDUX_OUT/Kast_red/600_7500_d55/MF_kast_red/'
src_file = 'MasterWaveCalib_A_01_aa.json'
with open(test_arc_path+src_file, 'r') as f:
    pypit_fit = json.load(f)
spec = pypit_fit['spec']

tbl = Table()
tbl['spec'] = spec
tbl.write(arclines.__path__[0]+'/tests/files/Kastr_600_7500_spec.ascii', format='ascii')
Run

arclines_match Kastr_600_7500_spec.ascii 7500. 2.35 HgI,NeI,ArI

Check dispersion
pix = np.array(pypit_fit['xfit'])*(len(spec)-1.)
wave = np.array(pypit_fit['yfit'])
disp = (wave-np.roll(wave, 1))/(pix-np.roll(pix, 1))
disp
Ran without UNKNWN -- Excellent

LRISr -- 600/7500
test_arc_path = arclines.__path__[0]+'/data/sources/'
src_file = 'lrisr_600_7500_PYPIT.json'
with open(test_arc_path+src_file, 'r') as f:
    pypit_fit = json.load(f)
spec = pypit_fit['spec']

tbl = Table()
tbl['spec'] = spec
tbl.write(arclines.__path__[0]+'/tests/files/LRISr_600_7500_spec.ascii', format='ascii')
Run

arclines_match LRISr_600_7500_spec.ascii 7000. 1.6 ArI,HgI,KrI,NeI,XeI

Need UNKNOWNS for better performance

arclines_match LRISr_600_7500_spec.ascii 7000. 1.6 ArI,HgI,KrI,NeI,XeI --unknowns

Am scanning now (successfully)

RCS at MMT
range(5,-1,-1)
Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The returned object should be the same shape as x.
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Pixel values are in [0, 255]; divide by 255 so the maximum maps to 1.0
    # (the original /256 left the maximum at 255/256, not quite 1)
    return np.array(x, dtype=np.float32) / 255.


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # labels are always 0 to 9, so fix the width instead of inferring it
    class_num = 10
    num = len(x)
    out = np.zeros((num, class_num))
    for i in range(num):
        # label k sets column k (the original x[i]-1 shifted label 0 into column 9)
        out[i, x[i]] = 1
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
  * Return a TF Placeholder
  * Set the shape using image_shape with batch size set to None.
  * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
  * Return a TF Placeholder
  * Set the shape using n_classes with batch size set to None.
  * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
  * Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size.
import tensorflow as tf


def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # prepend None so the batch size stays dynamic
    shape = (None, ) + image_shape
    return tf.placeholder(tf.float32, shape=shape, name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    shape = (None, n_classes)
    return tf.placeholder(tf.float32, shape=shape, name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
  * We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
  * We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides,
                   pool_ksize, pool_strides, maxpool=True):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    input_channel = x_tensor.get_shape().as_list()[-1]
    weights_size = conv_ksize + (input_channel,) + (conv_num_outputs,)

    # conv2d and max_pool expect 4-D strides/ksizes: (batch, height, width, channel)
    conv_strides = (1,) + conv_strides + (1,)
    pool_ksize = (1,) + pool_ksize + (1,)
    pool_strides = (1,) + pool_strides + (1,)

    weights = tf.Variable(tf.random_normal(weights_size, stddev=0.01))
    biases = tf.Variable(tf.zeros(conv_num_outputs))

    out = tf.nn.conv2d(x_tensor, weights, conv_strides, padding='SAME')
    out = out + biases
    out = tf.nn.relu(out)
    if maxpool:
        out = tf.nn.max_pool(out, pool_ksize, pool_strides, padding='SAME')
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
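As a quick sanity check on this layer (an illustration, not part of the project code): with 'same' padding, the output spatial size of a convolution or pooling op is ceil(input_size / stride), so a 32x32 CIFAR image keeps its size through a stride-1 convolution and is halved by a 2x2 max pool with stride 2:

```python
import math

def same_pad_out(size, stride):
    # with 'SAME' padding the output size depends only on input size and stride
    return math.ceil(size / stride)

after_conv = same_pad_out(32, 1)           # stride-1 convolution keeps 32
after_pool = same_pad_out(after_conv, 2)   # 2x2 max pool, stride 2 -> 16
print(after_conv, after_pool)              # -> 32 16
```

Stacking three such conv/pool layers takes 32x32 down to 4x4, which is the spatial size the flatten layer below receives.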
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    num, height, width, channel = tuple(x_tensor.get_shape().as_list())
    new_shape = (-1, height * width * channel)
    return tf.reshape(x_tensor, new_shape)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function num, dim = x_tensor.get_shape().as_list() weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs))) biases = tf.Variable(tf.zeros(num_outputs)) return tf.nn.relu(tf.matmul(x_tensor, weights) + biases) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    num, dim = x_tensor.get_shape().as_list()
    # note: stddev must be passed by keyword here; the second positional
    # argument of tf.random_normal is the mean, not the standard deviation
    weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))
    biases = tf.Variable(tf.zeros(num_outputs))
    return tf.matmul(x_tensor, weights) + biases


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_ksize7 = (7, 7)
    conv_ksize5 = (5, 5)
    conv_ksize3 = (3, 3)
    conv_strides1 = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    channels = [32, 128, 512]

    out = x
    out = conv2d_maxpool(out, channels[0], conv_ksize7, conv_strides1, pool_ksize, pool_strides, maxpool=True)
    out = conv2d_maxpool(out, channels[1], conv_ksize5, conv_strides1, pool_ksize, pool_strides, maxpool=True)
    out = conv2d_maxpool(out, channels[2], conv_ksize3, conv_strides1, pool_ksize, pool_strides, maxpool=True)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    out = flatten(out)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    # note: removing this fully connected layer improved performance in some experiments
    out = fully_conn(out, 256)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    out = tf.nn.dropout(out, keep_prob)
    out = output(out, 10)

    # TODO: return output
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network.
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function feed_dict = {keep_prob: keep_probability, x: feature_batch, y: label_batch} session.run(optimizer, feed_dict=feed_dict) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    # prints loss, training accuracy and validation accuracy;
    # training accuracy is included to make overfitting easier to spot
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    train_accuracy = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    # validation accuracy is computed in chunks to limit memory use,
    # weighting each chunk by its size
    batch = feature_batch.shape[0]
    num_valid = valid_features.shape[0]
    val_accuracy = 0
    for i in range(0, num_valid, batch):
        end_i = min(i + batch, num_valid)
        batch_accuracy = session.run(accuracy, feed_dict={
            x: valid_features[i:end_i], y: valid_labels[i:end_i], keep_prob: 1.0})
        batch_accuracy *= (end_i - i)
        val_accuracy += batch_accuracy
    val_accuracy /= num_valid
    print('loss is {}, train_accuracy is {}, val_accuracy is {}'.format(loss, train_accuracy, val_accuracy))
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
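One detail worth noting in print_stats above: the validation set is scored in chunks, and each chunk's accuracy is weighted by its size before dividing by the total, so the result equals the plain accuracy over the whole set even when the last chunk is smaller than batch. A tiny standalone check with made-up correctness flags (not model outputs):

```python
# made-up per-example correctness flags for a small "validation set"
correct = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 8 of 10 correct
batch = 4                                   # the last chunk only has 2 examples

total = 0.0
for i in range(0, len(correct), batch):
    chunk = correct[i:i + batch]
    chunk_accuracy = sum(chunk) / len(chunk)
    total += chunk_accuracy * len(chunk)    # weight each chunk by its size
weighted_accuracy = total / len(correct)

plain_accuracy = sum(correct) / len(correct)
print(weighted_accuracy, plain_accuracy)    # -> 0.8 0.8
```

Without the weighting, the small final chunk would be over-represented in a simple mean of per-chunk accuracies.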
Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node using dropout
# TODO: Tune Parameters epochs = 10 batch_size = 128 keep_probability = 0.8
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
luofan18/deep-learning
mit
Generations
def print_sample(sample, best_bleu=None): enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>']) gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>']) print('Input: '+ enc_input + '\n') print('Gend: ' + sample['generated'] + '\n') print('True: ' + gold + '\n') if best_bleu is not None: cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>']) print('Closest BLEU Match: ' + cbm + '\n') print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n') print('\n') for i, sample in enumerate(report['train_samples']): print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None) for i, sample in enumerate(report['valid_samples']): print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None) for i, sample in enumerate(report['test_samples']): print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
report_notebooks/encdec_noing10_200_512_04drb.ipynb
kingb12/languagemodelRNN
mit
BLEU Analysis
def print_bleu(bleu_struct):
    print('Overall Score: ', bleu_struct['score'], '\n')
    print('1-gram Score: ', bleu_struct['components']['1'])
    print('2-gram Score: ', bleu_struct['components']['2'])
    print('3-gram Score: ', bleu_struct['components']['3'])
    print('4-gram Score: ', bleu_struct['components']['4'])

# Training Set BLEU Scores
print_bleu(report['train_bleu'])

# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])

# Test Set BLEU Scores
print_bleu(report['test_bleu'])

# All Data BLEU Scores
print_bleu(report['combined_bleu'])
report_notebooks/encdec_noing10_200_512_04drb.ipynb
kingb12/languagemodelRNN
mit
N-pairs BLEU Analysis This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations.
# Training Set BLEU n-pairs Scores print_bleu(report['n_pairs_bleu_train']) # Validation Set n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_valid']) # Test Set n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_test']) # Combined n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_all']) # Ground Truth n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_gold'])
report_notebooks/encdec_noing10_200_512_04drb.ipynb
kingb12/languagemodelRNN
mit
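The n-pairs numbers above were precomputed in the report. As a rough, self-contained illustration of the idea — using a simple n-gram precision rather than the full BLEU formula, with made-up sentences standing in for the model's generations — one can sample random pairs and average their overlap:

```python
import random

def ngram_precision(candidate, reference, n=2):
    # fraction of the candidate's n-grams that also appear in the reference
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = [tuple(cand[i:i + n]) for i in range(len(cand) - n + 1)]
    ref_ngrams = {tuple(ref[i:i + n]) for i in range(len(ref) - n + 1)}
    if not cand_ngrams:
        return 0.0
    return sum(g in ref_ngrams for g in cand_ngrams) / len(cand_ngrams)

# made-up generations standing in for model outputs
generations = [
    'the cat sat on the mat',
    'the dog sat on the rug',
    'a bird flew over the house',
]
random.seed(0)
pairs = [random.sample(generations, 2) for _ in range(1000)]
avg_overlap = sum(ngram_precision(a, b) for a, b in pairs) / len(pairs)
print(avg_overlap)  # a high average would indicate hyper-common generations
```

A diverse model should score near zero on such random pairings, just as the ground-truth baseline does.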
Alignment Analysis This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores.
print('Average (Train) Generated Score: ', report['average_alignment_train'])
print('Average (Valid) Generated Score: ', report['average_alignment_valid'])
print('Average (Test) Generated Score: ', report['average_alignment_test'])
print('Average (All) Generated Score: ', report['average_alignment_all'])
print('Average Gold Score: ', report['average_alignment_gold'])
report_notebooks/encdec_noing10_200_512_04drb.ipynb
kingb12/languagemodelRNN
mit
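The report values above were precomputed; for reference, a minimal pure-Python Smith-Waterman local-alignment scorer looks like this (the match/mismatch/gap scores here are chosen for illustration — the report's actual scoring parameters are not shown):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    # dynamic-programming table; scores can restart at 0 (local alignment)
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# token-level alignment between two similar sentences
s1 = 'the cat sat on the mat'.split()
s2 = 'the cat lay on the mat'.split()
print(smith_waterman(s1, s2))  # -> 9 (five matches, one mismatch)
```

Averaging such scores over sampled pairs of generations gives the kind of statistic reported above: near-identical, hyper-common outputs align strongly and push the average up.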
Then, we decide on how many believers of each cultural option (religion) we want to start with.
A = 65     # initial number of believers A
B = N - A  # initial number of believers B
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Finally, we want to update these quantities at every time step depending on some variation:
t = 0 MAX_TIME = 100 while t < MAX_TIME: A = A + variation B = B - variation # advance time to next iteration t = t + 1
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Type the code in the code tab. Keep in mind that indents are important. Run the code by hitting F5 (or click on a green triangle). What happens? Well, nothing happens or, rather, we get an error. python NameError: name variation is not defined Indeed, we have not defined what we mean by 'variation'. So let's calculate it: the number of people switching traits depends on a comparison between payoffs. For example, if B has a higher payoff than A then we get something like this: \begin{equation} \Delta_{A\to B} = A · (payoff_{A\to B} - payoff_{B\to A}) \end{equation} So the proportion of population A that switches to B is proportional to the difference between payoffs. As we mentioned, the payoff of a trait is determined by the population exhibiting the competing trait as well as by its intrinsic attractiveness. To define the payoff we need to implement the following competition equations: \begin{equation} Payoff_{B\to A} = \frac{A_t}{N}\ \frac {T_A} {(T_A + T_B)}\ \end{equation} \begin{equation} Payoff_{A\to B} = \frac{B_t}{N}\ \frac {T_B} {(T_A + T_B)}\ \end{equation} Let's look at the equations a bit more closely. The first term is the proportion of the entire population N holding a particular cultural trait ($\frac {A_t}{N}$ for $A$ and $\frac {B_t}{N}$ for $B$), while the second term is the balance between the attractiveness of both ideas ($T_A$ and $T_B$), expressed as the attractiveness of the given trait with respect to the total 'available' attractiveness ($T_A + T_B$). You have probably noticed that these two equations have the same structure and only differ in what is put into them. Therefore, to avoid unnecessary hassle, we will create a 'universal' function that can be used for both. Type the code below at the beginning of your script:
def payoff(believers, Tx,Ty): proportionBelievers = believers/N attraction = Tx/(Ty + Tx) return proportionBelievers * attraction
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Let's break it down a little. First we define the function and give it the input - the number of believers and the two values that define how attractive each cultural option is. python def payoff(believers, Tx, Ty): Then we calculate two values: percentage of population sharing this cultural option python proportionBelievers = believers/N how attractive the option is python attraction = Tx/(Ty + Tx) Look at the equations above, the first element is just the $\frac {A_t}{N}$ and $\frac {B_t}{N}$ part and the second is this bit: $\frac {T_A} {(T_A + T_B)}$. Finally we return the results of the calculations. python return proportionBelievers * attraction Voila! We have implemented the equation into Python code. Now, let's modify the main loop to call the function - we need to do it twice to get the payoff for changing from A to B and from B to A. This is repeated during each iteration of the loop, so each time we can pass different values into it. To get the payoffs for switching from A to B and from B to A we have to add the calls to 'payoff' at the beginning of our loop:
while t < MAX_TIME: variationBA = payoff(A, Ta, Tb) variationAB = payoff(B, Tb, Ta) A = A + variation B = B - variation # advance time to next iteration t = t + 1
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
That is, we pass the number of believers and the attractiveness of the traits. The order in which we pass the input variables into a function is important. In the first line, computing the B to A transmission, A becomes 'believers', Ta becomes 'Tx' and Tb becomes 'Ty'. In the second line, computing the A to B transmission, B becomes 'believers', Tb becomes 'Tx' and Ta becomes 'Ty'. The obvious problem after this change is that we have not defined the 'attractiveness' of each trait. To do so add their definitions at the beginning of the script around other definitions (N, A, B, MAX_TIME, etc).
Ta = 1.0 # initial attractiveness of option A Tb = 2.0 # initial attractiveness of option B alpha = 0.1 # strength of the transmission process
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
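With these starting values we can check the payoff function by hand before running the full loop (plain arithmetic mirroring the payoff function defined above):

```python
N = 100
A = 65
B = N - A    # 35
Ta = 1.0
Tb = 2.0

# payoff for switching B -> A: share of A times relative attractiveness of A
payoff_BA = (A / N) * (Ta / (Ta + Tb))   # 0.65 * 1/3
# payoff for switching A -> B: share of B times relative attractiveness of B
payoff_AB = (B / N) * (Tb / (Ta + Tb))   # 0.35 * 2/3
print(round(payoff_BA, 4), round(payoff_AB, 4))  # -> 0.2167 0.2333
```

Since payoff_AB is larger, the difference between the two payoffs is negative and believers flow from A to B, which matches the curves we will plot at the end of the tutorial.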
We can now calculate the difference between the perceived payoffs during this time step. To do so, we need to first see which one did better (A or B).
difference = variationBA - variationAB
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Now, if the difference between the two is negative then we know that during this time step B is more attractive than A. Conversely, if it is positive then A seems better than B. What is left is to see how many people move based on this difference between payoffs. We can express it in the main while loop, like this:
# B -> A if difference > 0: variation = difference*B # A -> B else: variation = difference*A # update the population A = A + variation B = B - variation
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
We can use an additional term to control how strong the transmission from A to B and back is - we will call it alpha (α). This parameter will multiply the change before we modify populations A and B:
variation = alpha*variation
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
And we need to add it to the rest of the parameters of our model:
# temporal dimension MAX_TIME = 100 t = 0 # initial time # init populations N = 100 # population size A = 65 # initial population of believers A B = N-A # initial population of believers B # additional params Ta = 1.0 # initial attractiveness of option A Tb = 2.0 # initial attractiveness of option B alpha = 0.1 # strength of the transmission process
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
You should bear in mind that the main loop code should be located after the definition of the transmission function and the initialization of variables (because they are used here). After all the edits you have done it should look like this:
while t < MAX_TIME: # calculate the payoff for change of believers A and B in the current time step variationBA = payoff(A, Ta, Tb) variationAB = payoff(B, Tb, Ta) difference = variationBA - variationAB # B -> A if difference > 0: variation = difference*B # A -> B else: variation = difference*A # control the pace of change with alpha variation = alpha*variation # update the population A = A + variation B = B - variation # advance time to next iteration t = t + 1
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
OK, we have all the elements ready now and if you run the code the computer will churn all the numbers somewhere in the background. However, it produces no output so we have no idea what is actually happening. Let's solve this by visualising the flow of believers from one option to another. First, we will create two empty lists. Second, we will add the initial populations to them. Then, at each timestep, we will add the current number of believers to these lists and finally, at the end of the simulation run we will plot them to see how they changed over time. Start with creating two empty lists. Add the following code right after all the variable definitions at the beginning of the code before the while loop:
# initialise the list used for plotting believersA = [] believersB = [] # add the initial populations believersA.append(A) believersB.append(B)
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
The whole initialisation/definition block at the beginning of your code should look like this:
# initialisation MAX_TIME = 100 t = 0 # initial time N = 100 # population size A = 65 # initial proportion of believers A B = N-A # initial proportion of believers B Ta = 1.0 # initial attractiveness of option A Tb = 2.0 # initial attractiveness of option B alpha = 0.1 # strength of the transmission process # initialise the list used for plotting believersA = [] believersB = [] # add the initial populations believersA.append(A) believersB.append(B)
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
We just added the initial number of believers to their respective lists. However, we also need to do this at the end of each time step. Add the following code at the end of the while loop - remember to align the indents with the previous line!
believersA.append(A) believersB.append(B)
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
The whole while-loop block should now look like this:
while t < MAX_TIME: # calculate the payoff for change of believers A and B in the current time step variationBA = payoff(A, Ta, Tb) variationAB = payoff(B, Tb, Ta) difference = variationBA - variationAB # B -> A if difference > 0: variation = difference*B # A -> B else: variation = difference*A # control the pace of change with alpha variation = alpha*variation # update the population A = A + variation B = B - variation # save the values to a list for plotting believersA.append(A) believersB.append(B) # advance time to next iteration t = t + 1
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Finally, let's plot the results. First, we will import Python's plotting library, Matplotlib, and use a predefined plotting style. Add these two lines at the beginning of your script:
import matplotlib.pyplot as plt # plotting library plt.style.use('ggplot') # makes the graphs look pretty
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Finally, let's plot! Plotting in Python is as easy as saying 'please plot this data for me'. Type these two lines at the very end of your code. We only want to plot the results once the simulation has finished so make sure these are not inside the while loop - that is, ensure this block of code is not indented. Run the code!
%matplotlib inline # plot the results plt.plot(believersA) plt.plot(believersB)
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
You can see how over time one set of believers increases while the other decreases. This is not particularly surprising as the attractiveness Tb is higher than Ta. However, can you imagine a configuration where this is not enough to sway the population? Have a go at setting the variables to different initial values to see what combination can counteract this pattern. 1. set the initial value of A to 5, 10, 25, 50, 75 2. set the MAX_TIME to 1000, 3. set the Ta and Tb to 1.0, 10.0, 0.1, 0.01, etc., 4. set alpha to 0.01, and 1.0 You can try all sorts of configurations to see how quickly the population shifts from one option to another or what are the minimum values of each variable that prevent it. However, we can make the model more interesting if we allow the attractiveness of each option to change through time. To do so let's define a new function. Add the following line at the beginning of the while loop (remember indentation!).
Ta, Tb = attractiveness(Ta, Tb)
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Ta and Tb will then be modified based on some dynamics we want to model. Let's define the 'attractiveness' function. We have already done it once for the 'payoff' function so it should be a piece of cake. At each time step we will slightly modify the attractiveness of each trait using a kernel K that we will define. This can be expressed as: \begin{equation} T_{A, t+1} = {T_A} + {K_a} \end{equation} \begin{equation} T_{B, t+1} = {T_B} + {K_b} \end{equation} K can have several shapes such as: * Fixed traits with $K_a = K_b = 0$ * A gaussian stochastic process such as $K = N (0, 1)$ * A combination (e.g., $K_a = N (0, 1)$ and $K_b = 1/3)$ Let's start with a simple case scenario, such as a) $T_a$ will increase each step by a fixed $K_a$ and b) $K_b$ is equal to zero (so $T_b$ will be fixed over the whole simulation).
def attractiveness(Ta, Tb): Ka = 0.1 Kb = 0 Ta = Ta + Ka Tb = Tb + Kb return Ta, Tb
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
First, we define the function and give it the input values. python def attractiveness(Ta, Tb): Then, we establish how much the attractiveness of each trait changes (i.e., define $K_a$ and $K_b$) python Ka = 0.1 Kb = 0 And plug them into the equations: python Ta = Ta + Ka Tb = Tb + Kb Finally, we return the new values: python return Ta, Tb This is how the function is defined. The main loop will now look like this:
while t < MAX_TIME: # update attractiveness Ta, Tb = attractiveness(Ta, Tb) # calculate the payoff for change of believers A and B in the current time step variationBA = payoff(A, Ta, Tb) variationAB = payoff(B, Tb, Ta) difference = variationBA - variationAB # B -> A if difference > 0: variation = difference*B # A -> B else: variation = difference*A # control the pace of change with alpha variation = alpha*variation # update the population A = A + variation B = B - variation # save the values to a list for plotting believersA.append(A) believersB.append(B) # advance time to next iteration t = t + 1
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
If you plot this you will get a different result than previously:
plt.plot(believersA) plt.plot(believersB)
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Have a go at changing the values of $K_a$ and $K_b$ and see what happens. Can you see any equilibrium where both traits coexist? There are a number of functions we can use to dynamically change the 'attractiveness' of each trait. Try the following ones:
import numpy as np # stick this line at the beginning of the script alongside other 'imports'

def attractiveness2(Ta, Tb):
    # temporal autocorrelation with stochasticity (normal distribution)
    # we get 2 samples from a normal distribution N(0,1)
    Ka, Kb = np.random.normal(0, 1, 2)
    # compute the difference between Ks
    diff = Ka-Kb
    # apply difference of Ks to attractiveness
    Ta += diff
    Tb -= diff
    return Ta, Tb

def attractiveness3(Ta, Tb):
    # anti-conformism dynamics (more population means less attractiveness)
    # both values initialized at 0
    Ka = 0
    Kb = 0
    # first we sample from a gamma distribution whose mean equals the last population size of A
    diffPop = np.random.gamma(believersA[t])
    # we subtract from this value the same computation for population B
    diffPop = diffPop - np.random.gamma(believersB[t])
    # if B is larger then we need to increase the attractiveness of A
    if diffPop < 0:
        Ka = -diffPop
    # else A is larger and we need to increase the attractiveness of B
    else:
        Kb = diffPop
    # change current values
    Ta = Ta + Ka
    Tb = Tb + Kb
    return Ta, Tb
doc/DH2016tutorial.ipynb
xrubio/simulationdh
gpl-3.0
Interact with SVG display SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
from IPython.display import SVG

s = """
<svg width="100" height="100">
    <circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
assignments/assignment06/InteractEx05.ipynb
LimeeZ/phys292-2015-work
mit
Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
from IPython.display import SVG, display

def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.
    
    Parameters
    ----------
    width : int
        The width of the svg drawing area in px.
    height : int
        The height of the svg drawing area in px.
    cx : int
        The x position of the center of the circle in px.
    cy : int
        The y position of the center of the circle in px.
    r : int
        The radius of the circle in px.
    fill : str
        The fill color of the circle.
    """
    # triple quotes allow a multi-line template string
    template = """
    <svg width="%d" height="%d">
        <circle cx="%d" cy="%d" r="%d" fill="%s" />
    </svg>
    """
    svg = template % (width, height, cx, cy, r, fill)
    display(SVG(svg))

draw_circle(cx=10, cy=10, r=10, fill='blue')

assert True # leave this to grade the draw_circle function
assignments/assignment06/InteractEx05.ipynb
LimeeZ/phys292-2015-work
mit
Use interactive to build a user interface for exploring the draw_circle function: width: a fixed value of 300px height: a fixed value of 300px cx/cy: a slider in the range [0,300] r: a slider in the range [0,50] fill: a text area in which you can type a color's name Save the return value of interactive to a variable named w.
from ipywidgets import interactive, fixed

w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx=[0, 300], cy=[0, 300], r=[0, 50], fill='red')
w

c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
assignments/assignment06/InteractEx05.ipynb
LimeeZ/phys292-2015-work
mit
Use the display function to show the widgets created by interactive:
display(w) assert True # leave this to grade the display of the widget
assignments/assignment06/InteractEx05.ipynb
LimeeZ/phys292-2015-work
mit
Vertex AI AutoML tables regression Installation Install the latest (preview) version of the Vertex SDK.
! pip3 install -U google-cloud-aiplatform --user
notebooks/community/migration/UJ4 AutoML for structured data with Vertex AI Regression.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Restart the Kernel Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
import os

if not os.getenv("AUTORUN"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
notebooks/community/migration/UJ4 AutoML for structured data with Vertex AI Regression.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
AutoML constants Next, setup constants unique to AutoML Table classification datasets and training: Dataset Schemas: Tells the managed dataset service which type of dataset it is. Data Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated). Dataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., classification) to train the model for.
# Tabular Dataset type TABLE_SCHEMA = "google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml" # Tabular Labeling type IMPORT_SCHEMA_TABLE_CLASSIFICATION = ( "gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml" ) # Tabular Training task TRAINING_TABLE_CLASSIFICATION_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
notebooks/community/migration/UJ4 AutoML for structured data with Vertex AI Regression.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clients The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex). You will use several clients in this tutorial, so set them all up upfront. Dataset Service for managed datasets. Model Service for managed models. Pipeline Service for training. Endpoint Service for deployment. Job Service for batch jobs and custom training. Prediction Service for serving. Note: Prediction has a different service endpoint.
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}


def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client


def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client


def create_endpoint_client():
    client = aip.EndpointServiceClient(client_options=client_options)
    return client


def create_prediction_client():
    client = aip.PredictionServiceClient(client_options=client_options)
    return client


def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client


clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()

for client in clients.items():
    print(client)

IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"

! gsutil cat $IMPORT_FILE | head -n 10
Example output: Age,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome,Deposit 58,management,married,tertiary,no,2143,yes,no,unknown,5,may,261,1,-1,0,unknown,1 44,technician,single,secondary,no,29,yes,no,unknown,5,may,151,1,-1,0,unknown,1 33,entrepreneur,married,secondary,no,2,yes,yes,unknown,5,may,76,1,-1,0,unknown,1 47,blue-collar,married,unknown,no,1506,yes,no,unknown,5,may,92,1,-1,0,unknown,1 33,unknown,single,unknown,no,1,no,no,unknown,5,may,198,1,-1,0,unknown,1 35,management,married,tertiary,no,231,yes,no,unknown,5,may,139,1,-1,0,unknown,1 28,management,single,tertiary,no,447,yes,yes,unknown,5,may,217,1,-1,0,unknown,1 42,entrepreneur,divorced,tertiary,yes,2,yes,no,unknown,5,may,380,1,-1,0,unknown,1 58,retired,married,primary,no,121,yes,no,unknown,5,may,50,1,-1,0,unknown,1 Create a dataset projects.locations.datasets.create Request
DATA_SCHEMA = TABLE_SCHEMA

metadata = {
    "input_config": {
        "gcs_source": {
            "uri": [IMPORT_FILE],
        }
    }
}

dataset = {
    "display_name": "bank_" + TIMESTAMP,
    "metadata_schema_uri": "gs://" + DATA_SCHEMA,
    "metadata": json_format.ParseDict(metadata, Value()),
}

print(
    MessageToJson(
        aip.CreateDatasetRequest(
            parent=PARENT,
            dataset=dataset,
        ).__dict__["_pb"]
    )
)
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "dataset": { "displayName": "bank_20210226015209", "metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml", "metadata": { "input_config": { "gcs_source": { "uri": [ "gs://cloud-ml-tables-data/bank-marketing.csv" ] } } } } } Call
request = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
Example output: { "name": "projects/116273516712/locations/us-central1/datasets/7748812594797871104", "displayName": "bank_20210226015209", "metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/tabular_1.0.0.yaml", "labels": { "aiplatform.googleapis.com/dataset_metadata_schema": "TABLE" }, "metadata": { "inputConfig": { "gcsSource": { "uri": [ "gs://cloud-ml-tables-data/bank-marketing.csv" ] } } } }
# create_dataset returns a long-running operation; wait for it with result()
result = request.result()

# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]

print(dataset_id)
Train a model projects.locations.trainingPipelines.create Request
TRAINING_SCHEMA = TRAINING_TABLE_CLASSIFICATION_SCHEMA

TRANSFORMATIONS = [
    {"auto": {"column_name": "Age"}},
    {"auto": {"column_name": "Job"}},
    {"auto": {"column_name": "MaritalStatus"}},
    {"auto": {"column_name": "Education"}},
    {"auto": {"column_name": "Default"}},
    {"auto": {"column_name": "Balance"}},
    {"auto": {"column_name": "Housing"}},
    {"auto": {"column_name": "Loan"}},
    {"auto": {"column_name": "Contact"}},
    {"auto": {"column_name": "Day"}},
    {"auto": {"column_name": "Month"}},
    {"auto": {"column_name": "Duration"}},
    {"auto": {"column_name": "Campaign"}},
    {"auto": {"column_name": "PDays"}},
    {"auto": {"column_name": "POutcome"}},
]

task = Value(
    struct_value=Struct(
        fields={
            "disable_early_stopping": Value(bool_value=False),
            "prediction_type": Value(string_value="regression"),
            "target_column": Value(string_value="Deposit"),
            "train_budget_milli_node_hours": Value(number_value=1000),
            "transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
        }
    )
)

training_pipeline = {
    "display_name": "bank_" + TIMESTAMP,
    "input_data_config": {
        "dataset_id": dataset_short_id,
        "fraction_split": {
            "training_fraction": 0.8,
            "validation_fraction": 0.1,
            "test_fraction": 0.1,
        },
    },
    "model_to_upload": {
        "display_name": "flowers_" + TIMESTAMP,
    },
    "training_task_definition": TRAINING_SCHEMA,
    "training_task_inputs": task,
}

print(
    MessageToJson(
        aip.CreateTrainingPipelineRequest(
            parent=PARENT,
            training_pipeline=training_pipeline,
        ).__dict__["_pb"]
    )
)
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "trainingPipeline": { "displayName": "bank_20210226015209", "inputDataConfig": { "datasetId": "7748812594797871104", "fractionSplit": { "trainingFraction": 0.8, "validationFraction": 0.1, "testFraction": 0.1 } }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml", "trainingTaskInputs": { "transformations": [ { "auto": { "column_name": "Age" } }, { "auto": { "column_name": "Job" } }, { "auto": { "column_name": "MaritalStatus" } }, { "auto": { "column_name": "Education" } }, { "auto": { "column_name": "Default" } }, { "auto": { "column_name": "Balance" } }, { "auto": { "column_name": "Housing" } }, { "auto": { "column_name": "Loan" } }, { "auto": { "column_name": "Contact" } }, { "auto": { "column_name": "Day" } }, { "auto": { "column_name": "Month" } }, { "auto": { "column_name": "Duration" } }, { "auto": { "column_name": "Campaign" } }, { "auto": { "column_name": "PDays" } }, { "auto": { "column_name": "POutcome" } } ], "prediction_type": "regression", "disable_early_stopping": false, "train_budget_milli_node_hours": 1000.0, "target_column": "Deposit" }, "modelToUpload": { "displayName": "flowers_20210226015209" } } } Call
request = clients["pipeline"].create_training_pipeline( parent=PARENT, training_pipeline=training_pipeline )
Example output: { "name": "projects/116273516712/locations/us-central1/trainingPipelines/3147717072369221632", "displayName": "bank_20210226015209", "inputDataConfig": { "datasetId": "7748812594797871104", "fractionSplit": { "trainingFraction": 0.8, "validationFraction": 0.1, "testFraction": 0.1 } }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml", "trainingTaskInputs": { "targetColumn": "Deposit", "trainBudgetMilliNodeHours": "1000", "transformations": [ { "auto": { "columnName": "Age" } }, { "auto": { "columnName": "Job" } }, { "auto": { "columnName": "MaritalStatus" } }, { "auto": { "columnName": "Education" } }, { "auto": { "columnName": "Default" } }, { "auto": { "columnName": "Balance" } }, { "auto": { "columnName": "Housing" } }, { "auto": { "columnName": "Loan" } }, { "auto": { "columnName": "Contact" } }, { "auto": { "columnName": "Day" } }, { "auto": { "columnName": "Month" } }, { "auto": { "columnName": "Duration" } }, { "auto": { "columnName": "Campaign" } }, { "auto": { "columnName": "PDays" } }, { "auto": { "columnName": "POutcome" } } ], "predictionType": "regression" }, "modelToUpload": { "displayName": "flowers_20210226015209" }, "state": "PIPELINE_STATE_PENDING", "createTime": "2021-02-26T01:57:51.364312Z", "updateTime": "2021-02-26T01:57:51.364312Z" }
# The full unique ID for the training pipeline
training_pipeline_id = request.name
# The short numeric ID for the training pipeline
training_pipeline_short_id = training_pipeline_id.split("/")[-1]

print(training_pipeline_id)
projects.locations.trainingPipelines.get Call
request = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
Example output: { "name": "projects/116273516712/locations/us-central1/trainingPipelines/3147717072369221632", "displayName": "bank_20210226015209", "inputDataConfig": { "datasetId": "7748812594797871104", "fractionSplit": { "trainingFraction": 0.8, "validationFraction": 0.1, "testFraction": 0.1 } }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml", "trainingTaskInputs": { "trainBudgetMilliNodeHours": "1000", "transformations": [ { "auto": { "columnName": "Age" } }, { "auto": { "columnName": "Job" } }, { "auto": { "columnName": "MaritalStatus" } }, { "auto": { "columnName": "Education" } }, { "auto": { "columnName": "Default" } }, { "auto": { "columnName": "Balance" } }, { "auto": { "columnName": "Housing" } }, { "auto": { "columnName": "Loan" } }, { "auto": { "columnName": "Contact" } }, { "auto": { "columnName": "Day" } }, { "auto": { "columnName": "Month" } }, { "auto": { "columnName": "Duration" } }, { "auto": { "columnName": "Campaign" } }, { "auto": { "columnName": "PDays" } }, { "auto": { "columnName": "POutcome" } } ], "targetColumn": "Deposit", "predictionType": "regression" }, "modelToUpload": { "displayName": "flowers_20210226015209" }, "state": "PIPELINE_STATE_PENDING", "createTime": "2021-02-26T01:57:51.364312Z", "updateTime": "2021-02-26T01:57:51.364312Z" }
while True:
    response = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            break
    else:
        model_id = response.model_to_upload.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

print(model_id)
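This polling pattern recurs for every long-running call in this notebook (training, batch prediction). Purely as an illustration — the helper below is not part of the Vertex SDK, and `get_fn`, the state sets, and the timeout are our own assumptions — the loop can be factored into a reusable function:

```python
import time


def wait_for_state(get_fn, name, done_states, failed_states,
                   poll_secs=60, timeout_secs=3600):
    """Poll a resource getter until the resource reaches a terminal state.

    `get_fn` is any client method taking `name=` and returning an object
    with a `.state` attribute (a hypothetical wrapper, not an SDK API).
    """
    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        response = get_fn(name=name)
        if response.state in done_states:
            return response
        if response.state in failed_states:
            raise RuntimeError(f"Job ended in failed state: {response.state}")
        time.sleep(poll_secs)
    raise TimeoutError(f"Gave up waiting on {name} after {timeout_secs}s")
```

With such a helper, each of the `while True:` loops in this notebook collapses to a single call plus result handling.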
Evaluate the model projects.locations.models.evaluations.list Call
request = clients["model"].list_model_evaluations(parent=model_id)
Response
import json

model_evaluations = [json.loads(MessageToJson(me.__dict__["_pb"])) for me in request]

# The evaluation slice
evaluation_slice = request.model_evaluations[0].name

print(json.dumps(model_evaluations, indent=2))
Example output: [ { "name": "projects/116273516712/locations/us-central1/models/3936304403996213248/evaluations/6323797633322037836", "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml", "metrics": { "rSquared": 0.39799774, "meanAbsolutePercentageError": 9.791032, "rootMeanSquaredError": 0.24675915, "rootMeanSquaredLogError": 0.10022795, "meanAbsoluteError": 0.12842195 }, "createTime": "2021-02-26T03:39:42.254525Z" } ] projects.locations.models.evaluations.get Call
request = clients["model"].get_model_evaluation(name=evaluation_slice)
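For context on what the returned numbers mean, the headline regression metrics (rootMeanSquaredError, meanAbsoluteError, rSquared) follow their standard definitions and can be reproduced from first principles. A pure-Python sanity-check sketch — the function name is ours, not part of the SDK:

```python
import math


def regression_metrics(y_true, y_pred):
    """Compute RMSE, MAE, and R-squared from their textbook definitions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_true = sum(y_true) / n
    ss_res = sum(e * e for e in errors)            # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return {"meanAbsoluteError": mae, "rootMeanSquaredError": rmse, "rSquared": r2}
```

Running this against held-out predictions of your own should produce values on the same scale as the evaluation output below.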
Example output: { "name": "projects/116273516712/locations/us-central1/models/3936304403996213248/evaluations/6323797633322037836", "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml", "metrics": { "meanAbsolutePercentageError": 9.791032, "rootMeanSquaredLogError": 0.10022795, "rSquared": 0.39799774, "meanAbsoluteError": 0.12842195, "rootMeanSquaredError": 0.24675915 }, "createTime": "2021-02-26T03:39:42.254525Z" } Make batch predictions Make the batch input file Let's now make a batch input file, which you store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use CSV in this tutorial.
! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv
! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv
! cut -d, -f1-16 tmp.csv > batch.csv

gcs_input_uri = "gs://" + BUCKET_NAME + "/test.csv"
! gsutil cp batch.csv $gcs_input_uri

! gsutil cat $gcs_input_uri
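The shell pipeline above keeps the header plus the last ten rows and cuts off the target column. The same file could be assembled in Python; a standard-library sketch that works on a local copy of the CSV (the function name and file paths are illustrative, not from the notebook):

```python
import csv


def make_batch_input(source_csv, dest_csv, target_column="Deposit", n_rows=10):
    """Build a batch-prediction input: header plus the last n_rows data rows,
    with the target column removed (mirrors the gsutil/cut pipeline)."""
    with open(source_csv, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    drop = header.index(target_column)
    keep = [i for i in range(len(header)) if i != drop]
    out_rows = [[r[i] for i in keep] for r in [header] + body[-n_rows:]]
    with open(dest_csv, "w", newline="") as f:
        csv.writer(f).writerows(out_rows)
    return out_rows
```

You would still need `gsutil cp` (or a GCS client) to move the result into the bucket before submitting the batch job.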
Example output: Age,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome 53,management,married,tertiary,no,583,no,no,cellular,17,nov,226,1,184,4,success 34,admin.,single,secondary,no,557,no,no,cellular,17,nov,224,1,-1,0,unknown 23,student,single,tertiary,no,113,no,no,cellular,17,nov,266,1,-1,0,unknown 73,retired,married,secondary,no,2850,no,no,cellular,17,nov,300,1,40,8,failure 25,technician,single,secondary,no,505,no,yes,cellular,17,nov,386,2,-1,0,unknown 51,technician,married,tertiary,no,825,no,no,cellular,17,nov,977,3,-1,0,unknown 71,retired,divorced,primary,no,1729,no,no,cellular,17,nov,456,2,-1,0,unknown 72,retired,married,secondary,no,5715,no,no,cellular,17,nov,1127,5,184,3,success 57,blue-collar,married,secondary,no,668,no,no,telephone,17,nov,508,4,-1,0,unknown 37,entrepreneur,married,secondary,no,2971,no,no,cellular,17,nov,361,2,188,11,other projects.locations.batchPredictionJobs.create Request
batch_prediction_job = {
    "display_name": "bank_" + TIMESTAMP,
    "model": model_id,
    "input_config": {
        "instances_format": "csv",
        "gcs_source": {
            "uris": [gcs_input_uri],
        },
    },
    "output_config": {
        "predictions_format": "csv",
        "gcs_destination": {
            "output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/",
        },
    },
    "dedicated_resources": {
        "machine_spec": {
            "machine_type": "n1-standard-2",
            "accelerator_count": 0,
        },
        "starting_replica_count": 1,
        "max_replica_count": 1,
    },
}

print(
    MessageToJson(
        aip.CreateBatchPredictionJobRequest(
            parent=PARENT,
            batch_prediction_job=batch_prediction_job,
        ).__dict__["_pb"]
    )
)
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "batchPredictionJob": { "displayName": "bank_20210226015209", "model": "projects/116273516712/locations/us-central1/models/3936304403996213248", "inputConfig": { "instancesFormat": "csv", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210226015209/test.csv" ] } }, "outputConfig": { "predictionsFormat": "csv", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015209/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 } } } Call
request = clients["job"].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job )
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4417450692310990848", "displayName": "bank_20210226015209", "model": "projects/116273516712/locations/us-central1/models/3936304403996213248", "inputConfig": { "instancesFormat": "csv", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210226015209/test.csv" ] } }, "outputConfig": { "predictionsFormat": "csv", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015209/batch_output/" } }, "state": "JOB_STATE_PENDING", "createTime": "2021-02-26T09:35:43.113270Z", "updateTime": "2021-02-26T09:35:43.113270Z" }
# The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]

print(batch_job_id)
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4417450692310990848", "displayName": "bank_20210226015209", "model": "projects/116273516712/locations/us-central1/models/3936304403996213248", "inputConfig": { "instancesFormat": "csv", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210226015209/test.csv" ] } }, "outputConfig": { "predictionsFormat": "csv", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015209/batch_output/" } }, "state": "JOB_STATE_PENDING", "createTime": "2021-02-26T09:35:43.113270Z", "updateTime": "2021-02-26T09:35:43.113270Z" }
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name."""
    folders = !gsutil ls $gcs_out_dir
    latest_subfolder = ""
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        # compare subfolder names to subfolder names, not to full paths
        if subfolder.startswith("prediction-") and subfolder > latest_subfolder:
            latest_subfolder = subfolder
            latest = folder[:-1]
    return latest


while True:
    response = clients["job"].get_batch_prediction_job(name=batch_job_id)
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", response.state)
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        folder = get_latest_predictions(
            response.output_config.gcs_destination.output_uri_prefix
        )
        ! gsutil ls $folder/prediction*
        ! gsutil cat $folder/prediction*
        break
    time.sleep(60)
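Once the job succeeds, each predictions file is a CSV whose final column holds the predicted target (`predicted_Deposit` in the example output). A small sketch for pulling those values out of the file contents — the helper name is ours, and the column name is taken from this notebook's output:

```python
import csv
import io


def parse_predictions(csv_text, value_column="predicted_Deposit"):
    """Parse a batch-prediction results CSV and return the predicted values."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [float(row[value_column]) for row in reader]
```

In practice you would feed this the text downloaded via `gsutil cat` or a GCS client.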
Example output: gs://migration-ucaip-trainingaip-20210226015209/batch_output/prediction-flowers_20210226015209-2021-02-26T09:35:43.034287Z/predictions_1.csv Age,Balance,Campaign,Contact,Day,Default,Duration,Education,Housing,Job,Loan,MaritalStatus,Month,PDays,POutcome,Previous,predicted_Deposit 72,5715,5,cellular,17,no,1127,secondary,no,retired,no,married,nov,184,success,3,1.6232702732086182 23,113,1,cellular,17,no,266,tertiary,no,student,no,single,nov,-1,unknown,0,1.3257474899291992 34,557,1,cellular,17,no,224,secondary,no,admin.,no,single,nov,-1,unknown,0,1.0801490545272827 25,505,2,cellular,17,no,386,secondary,no,technician,yes,single,nov,-1,unknown,0,1.2516863346099854 73,2850,1,cellular,17,no,300,secondary,no,retired,no,married,nov,40,failure,8,1.5064295530319214 37,2971,2,cellular,17,no,361,secondary,no,entrepreneur,no,married,nov,188,other,11,1.1924527883529663 57,668,4,telephone,17,no,508,secondary,no,blue-collar,no,married,nov,-1,unknown,0,1.1636843681335449 Make online predictions projects.locations.endpoints.create Request
endpoint = {"display_name": "bank_" + TIMESTAMP}

print(
    MessageToJson(
        aip.CreateEndpointRequest(
            parent=PARENT,
            endpoint=endpoint,
        ).__dict__["_pb"]
    )
)
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "endpoint": { "displayName": "bank_20210226015209" } } Call
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
Example output: { "name": "projects/116273516712/locations/us-central1/endpoints/6899338707271155712" }
# create_endpoint returns a long-running operation; wait for it with result()
result = request.result()

# The fully qualified ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]

print(endpoint_id)
projects.locations.endpoints.deployModel Request
deployed_model = {
    "model": model_id,
    "display_name": "bank_" + TIMESTAMP,
    "dedicated_resources": {
        "min_replica_count": 1,
        "machine_spec": {"machine_type": "n1-standard-2"},
    },
}

traffic_split = {"0": 100}

print(
    MessageToJson(
        aip.DeployModelRequest(
            endpoint=endpoint_id,
            deployed_model=deployed_model,
            traffic_split=traffic_split,
        ).__dict__["_pb"]
    )
)
Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/6899338707271155712", "deployedModel": { "model": "projects/116273516712/locations/us-central1/models/3936304403996213248", "displayName": "bank_20210226015209", "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "minReplicaCount": 1 } }, "trafficSplit": { "0": 100 } } Call
request = clients["endpoint"].deploy_model( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split )
Example output: { "deployedModel": { "id": "7646795507926302720" } }
# deploy_model returns a long-running operation; wait for it with result()
result = request.result()

# The numeric ID for the deployed model
deploy_model_id = result.deployed_model.id

print(deploy_model_id)
projects.locations.endpoints.predict Prepare data item for online prediction
INSTANCE = {
    "Age": "58",
    "Job": "management",
    "MaritalStatus": "married",
    "Education": "tertiary",
    "Default": "no",
    "Balance": "2143",
    "Housing": "yes",
    "Loan": "no",
    "Contact": "unknown",
    "Day": "5",
    "Month": "may",
    "Duration": "261",
    "Campaign": "1",
    "PDays": "-1",
    "Previous": 0,
    "POutcome": "unknown",
}
Request
instances_list = [INSTANCE]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]

request = aip.PredictRequest(
    endpoint=endpoint_id,
)
request.instances.append(instances)

print(MessageToJson(request.__dict__["_pb"]))
Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/6899338707271155712", "instances": [ [ { "Education": "teritary", "MaritalStatus": "married", "Balance": "2143", "Contact": "unknown", "Housing": "yes", "Previous": 0.0, "Loan": "no", "Duration": "261", "Default": "no", "Day": "5", "POutcome": "unknown", "Age": "58", "Month": "may", "PDays": "-1", "Campaign": "1", "Job": "managment" } ] ] } Call
request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances)
Example output: { "predictions": [ { "upper_bound": 1.685426712036133, "value": 1.007092595100403, "lower_bound": 0.06719603389501572 } ], "deployedModelId": "7646795507926302720" } projects.locations.endpoints.undeployModel Call
request = clients["endpoint"].undeploy_model( endpoint=endpoint_id, deployed_model_id=deploy_model_id, traffic_split={} )
Example output: {} Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial.
delete_dataset = True
delete_model = True
delete_endpoint = True
delete_pipeline = True
delete_batchjob = True
delete_bucket = True

# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
    if delete_dataset:
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the model using the Vertex AI fully qualified identifier for the model
try:
    if delete_model:
        clients["model"].delete_model(name=model_id)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
    if delete_endpoint:
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline
try:
    if delete_pipeline:
        clients["pipeline"].delete_training_pipeline(name=training_pipeline_id)
except Exception as e:
    print(e)

# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
    if delete_batchjob:
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r gs://$BUCKET_NAME
Customize what happens in Model.fit <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/customizing_what_happens_in_fit.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Introduction When you're doing supervised learning, you can use fit() and everything works smoothly. When you need to write your own training loop from scratch, you can use the GradientTape and take control of every little detail. But what if you need a custom training algorithm, but you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing? A core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience. When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. 
You will then be able to call fit() as usual -- and it will be running your own learning algorithm. Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models. Let's see how that works. Setup Requires TensorFlow 2.2 or later.
import tensorflow as tf
from tensorflow import keras
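A minimal sketch of the pattern the guide describes — subclass keras.Model and override train_step. This mirrors the standard Keras override example for TF 2.x; the exact attribute names (compiled_loss, compiled_metrics) may differ in later Keras versions:

```python
import tensorflow as tf
from tensorflow import keras


class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data passed to fit() for this batch.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # forward pass
            # Compute the loss configured via compile().
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients and update weights.
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update the metrics configured via compile().
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```

After compiling a CustomModel, calling fit() runs this custom step per batch while keeping callbacks, distribution, and logging intact.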
site/en-snapshot/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0