# Introduction to Neural Nets

This Colab builds a deep neural network to perform more sophisticated regression than the linear models in the earlier Colabs.

**Learning Objectives:** After doing this Colab, you'll know how to do the following:

* Create a simple deep neural network.
* Tune the hyperparameters for a simple deep neural network.

## The Dataset

Like several of the previous Colabs, this Colab uses the California Housing Dataset.

## Use the right version of TensorFlow

The following hidden code cell ensures that the Colab will run on TensorFlow 2.x.
```python
#@title Run on TensorFlow 2.x
%tensorflow_version 2.x
from __future__ import absolute_import, division, print_function, unicode_literals
```
ml/cc/exercises/intro_to_neural_nets.ipynb
google/eng-edu
apache-2.0
## Import relevant modules

The following hidden code cell imports the modules needed to run the code in the rest of this Colab.
```python
#@title Import relevant modules
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from matplotlib import pyplot as plt
import seaborn as sns

# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format

print("Imported modules.")
```
## Load the dataset

Like most of the previous Colab exercises, this exercise uses the California Housing Dataset. The following code cell loads the separate .csv files and creates the following two pandas DataFrames:

* `train_df`, which contains the training set
* `test_df`, which contains the test set
```python
train_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
train_df = train_df.reindex(np.random.permutation(train_df.index))  # shuffle the examples
test_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv")
```
## Normalize values

When building a model with multiple features, the values of each feature should cover roughly the same range. The following code cell normalizes the datasets by converting each raw value to its Z-score. (For more information about Z-scores, see the Classification exercise.)
```python
#@title Convert raw values to their Z-scores

# Calculate the Z-scores of each column in the training set:
train_df_mean = train_df.mean()
train_df_std = train_df.std()
train_df_norm = (train_df - train_df_mean) / train_df_std

# Calculate the Z-scores of each column in the test set.
test_df_mean = test_df.mean()
test_df_std = test_df.std()
test_df_norm = (test_df - test_df_mean) / test_df_std

print("Normalized the values.")
```
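As a quick sanity check (a numpy sketch with made-up values, not the housing data), a Z-scored column should come out with mean ≈ 0 and sample standard deviation ≈ 1:

```python
import numpy as np

# Hypothetical values standing in for one column of train_df.
col = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Z-score with the sample standard deviation (ddof=1), matching pandas' .std().
z = (col - col.mean()) / col.std(ddof=1)

print(z.mean(), z.std(ddof=1))
```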
## Represent data

The following code cell creates a feature layer containing three features:

* latitude X longitude (a feature cross)
* median_income
* population

This code cell specifies the features that you'll ultimately train the model on and how each of those features will be represented. The transformations (collected in `my_feature_layer`) don't actually get applied until you pass a DataFrame to it, which will happen when we train the model.
```python
# Create an empty list that will eventually hold all created feature columns.
feature_columns = []

# We scaled all the columns, including latitude and longitude, into their
# Z scores. So, instead of picking a resolution in degrees, we're going
# to use resolution_in_Zs. A resolution_in_Zs of 1 corresponds to
# a full standard deviation.
resolution_in_Zs = 0.3  # 3/10 of a standard deviation.

# Create a bucket feature column for latitude.
latitude_as_a_numeric_column = tf.feature_column.numeric_column("latitude")
latitude_boundaries = list(np.arange(int(min(train_df_norm['latitude'])),
                                     int(max(train_df_norm['latitude'])),
                                     resolution_in_Zs))
latitude = tf.feature_column.bucketized_column(latitude_as_a_numeric_column, latitude_boundaries)

# Create a bucket feature column for longitude.
longitude_as_a_numeric_column = tf.feature_column.numeric_column("longitude")
longitude_boundaries = list(np.arange(int(min(train_df_norm['longitude'])),
                                      int(max(train_df_norm['longitude'])),
                                      resolution_in_Zs))
longitude = tf.feature_column.bucketized_column(longitude_as_a_numeric_column, longitude_boundaries)

# Create a feature cross of latitude and longitude.
latitude_x_longitude = tf.feature_column.crossed_column([latitude, longitude], hash_bucket_size=100)
crossed_feature = tf.feature_column.indicator_column(latitude_x_longitude)
feature_columns.append(crossed_feature)

# Represent median_income as a floating-point value.
median_income = tf.feature_column.numeric_column("median_income")
feature_columns.append(median_income)

# Represent population as a floating-point value.
population = tf.feature_column.numeric_column("population")
feature_columns.append(population)

# Convert the list of feature columns into a layer that will later be fed into
# the model.
my_feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
```
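For intuition, bucketizing just maps each continuous value to a bucket index. A minimal numpy sketch with made-up Z-scores (`np.digitize` stands in for the feature-column machinery, and the boundary values are illustrative):

```python
import numpy as np

# Hypothetical Z-scored latitudes (not the real dataset).
latitudes_z = np.array([-1.7, -0.2, 0.05, 0.31, 1.4])

# Bucket boundaries every 0.3 standard deviations, like resolution_in_Zs above.
boundaries = np.arange(-2.0, 2.0, 0.3)

# np.digitize maps each value to the index of the bucket it falls into.
bucket_ids = np.digitize(latitudes_z, boundaries)
print(bucket_ids)
```

Nearby Z-scores land in the same bucket, which is what lets the model learn one weight per neighborhood-sized region instead of one weight per raw coordinate.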
## Build a linear regression model as a baseline

Before creating a deep neural net, find a baseline loss by running a simple linear regression model that uses the feature layer you just created.
```python
#@title Define the plotting function.
def plot_the_loss_curve(epochs, mse):
  """Plot a curve of loss vs. epoch."""

  plt.figure()
  plt.xlabel("Epoch")
  plt.ylabel("Mean Squared Error")

  plt.plot(epochs, mse, label="Loss")
  plt.legend()
  plt.ylim([mse.min() * 0.95, mse.max() * 1.03])
  plt.show()

print("Defined the plot_the_loss_curve function.")

#@title Define functions to create and train a linear regression model
def create_model(my_learning_rate, feature_layer):
  """Create and compile a simple linear regression model."""
  # Most simple tf.keras models are sequential.
  model = tf.keras.models.Sequential()

  # Add the layer containing the feature columns to the model.
  model.add(feature_layer)

  # Add one linear layer to the model to yield a simple linear regressor.
  model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))

  # Construct the layers into a model that TensorFlow can execute.
  model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=my_learning_rate),
                loss="mean_squared_error",
                metrics=[tf.keras.metrics.MeanSquaredError()])

  return model

def train_model(model, dataset, epochs, batch_size, label_name):
  """Feed a dataset into the model in order to train it."""

  # Split the dataset into features and label.
  features = {name: np.array(value) for name, value in dataset.items()}
  label = np.array(features.pop(label_name))
  history = model.fit(x=features, y=label, batch_size=batch_size,
                      epochs=epochs, shuffle=True)

  # Get details that will be useful for plotting the loss curve.
  epochs = history.epoch
  hist = pd.DataFrame(history.history)
  mse = hist["mean_squared_error"]  # mean squared error (not RMSE) per epoch

  return epochs, mse

print("Defined the create_model and train_model functions.")
```
Run the following code cell to invoke the functions defined in the preceding two code cells. (Ignore the warning messages.)

Note: Because we've scaled all the input data, including the label, the resulting loss values will be much lower than in previous models.

Note: Depending on the version of TensorFlow, running this cell might generate WARNING messages. Please ignore these warnings.
```python
# The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 15
batch_size = 1000
label_name = "median_house_value"

# Establish the model's topography.
my_model = create_model(learning_rate, my_feature_layer)

# Train the model on the normalized training set.
epochs, mse = train_model(my_model, train_df_norm, epochs, batch_size, label_name)
plot_the_loss_curve(epochs, mse)

test_features = {name: np.array(value) for name, value in test_df_norm.items()}
test_label = np.array(test_features.pop(label_name))  # isolate the label
print("\n Evaluate the linear regression model against the test set:")
my_model.evaluate(x=test_features, y=test_label, batch_size=batch_size)
```
## Define a deep neural net model

The `create_model` function defines the topography of the deep neural net, specifying the following:

* The number of layers in the deep neural net.
* The number of nodes in each layer.

The `create_model` function also defines the activation function of each layer.
```python
def create_model(my_learning_rate, my_feature_layer):
  """Create and compile a deep neural net model."""
  # Most simple tf.keras models are sequential.
  model = tf.keras.models.Sequential()

  # Add the layer containing the feature columns to the model.
  model.add(my_feature_layer)

  # Describe the topography of the model by calling the tf.keras.layers.Dense
  # method once for each layer. We've specified the following arguments:
  #   * units specifies the number of nodes in this layer.
  #   * activation specifies the activation function (Rectified Linear Unit).
  #   * name is just a string that can be useful when debugging.

  # Define the first hidden layer with 20 nodes.
  model.add(tf.keras.layers.Dense(units=20,
                                  activation='relu',
                                  name='Hidden1'))

  # Define the second hidden layer with 12 nodes.
  model.add(tf.keras.layers.Dense(units=12,
                                  activation='relu',
                                  name='Hidden2'))

  # Define the output layer.
  model.add(tf.keras.layers.Dense(units=1, name='Output'))

  model.compile(optimizer=tf.keras.optimizers.Adam(lr=my_learning_rate),
                loss="mean_squared_error",
                metrics=[tf.keras.metrics.MeanSquaredError()])

  return model
```
## Define a training function

The `train_model` function trains the model from the input features and labels. The `tf.keras.Model.fit` method performs the actual training. The `x` parameter of the `fit` method is very flexible, enabling you to pass feature data in a variety of ways. The following implementation passes a Python dictionary in which:

* The keys are the names of each feature (for example, `longitude`, `latitude`, and so on).
* The value of each key is a NumPy array containing the values of that feature.

Note: Although you are passing every feature to `model.fit`, most of those values will be ignored. Only the features accessed by `my_feature_layer` will actually be used to train the model.
```python
def train_model(model, dataset, epochs, label_name, batch_size=None):
  """Train the model by feeding it data."""

  # Split the dataset into features and label.
  features = {name: np.array(value) for name, value in dataset.items()}
  label = np.array(features.pop(label_name))
  history = model.fit(x=features, y=label, batch_size=batch_size,
                      epochs=epochs, shuffle=True)

  # The list of epochs is stored separately from the rest of history.
  epochs = history.epoch

  # To track the progression of training, gather a snapshot
  # of the model's mean squared error at each epoch.
  hist = pd.DataFrame(history.history)
  mse = hist["mean_squared_error"]

  return epochs, mse
```
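The features/label split used above can be checked on a tiny synthetic DataFrame (made-up values, not the housing data):

```python
import numpy as np
import pandas as pd

# Tiny synthetic stand-in for train_df_norm.
dataset = pd.DataFrame({"latitude": [0.1, -0.3],
                        "median_house_value": [1.2, -0.4]})

# Same pattern as train_model: a dict of per-feature arrays, with the
# label column popped out of the dict.
features = {name: np.array(value) for name, value in dataset.items()}
label = np.array(features.pop("median_house_value"))

print(sorted(features.keys()))  # only the feature columns remain
print(label)                    # the label values, as a NumPy array
```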
## Call the functions to build and train a deep neural net

Okay, it is time to actually train the deep neural net. If time permits, experiment with the three hyperparameters to see if you can reduce the loss against the test set.
```python
# The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 20
batch_size = 1000

# Specify the label
label_name = "median_house_value"

# Establish the model's topography.
my_model = create_model(learning_rate, my_feature_layer)

# Train the model on the normalized training set. We're passing the entire
# normalized training set, but the model will only use the features
# defined by the feature_layer.
epochs, mse = train_model(my_model, train_df_norm, epochs, label_name, batch_size)
plot_the_loss_curve(epochs, mse)

# After building a model against the training set, test that model
# against the test set.
test_features = {name: np.array(value) for name, value in test_df_norm.items()}
test_label = np.array(test_features.pop(label_name))  # isolate the label
print("\n Evaluate the new model against the test set:")
my_model.evaluate(x=test_features, y=test_label, batch_size=batch_size)
```
## Task 1: Compare the two models

How did the deep neural net perform against the baseline linear regression model?
```python
#@title Double-click to view a possible answer

# Assuming that the linear model converged and the deep neural net model
# also converged, please compare the test set loss for each.
#
# In our experiments, the loss of the deep neural network model was
# consistently lower than that of the linear regression model, which
# suggests that the deep neural network model will make better
# predictions than the linear regression model.
```
## Task 2: Optimize the deep neural network's topography

Experiment with the number of layers of the deep neural network and the number of nodes in each layer. Aim to achieve both of the following goals:

* Lower the loss against the test set.
* Minimize the overall number of nodes in the deep neural net.

The two goals may be in conflict.
```python
#@title Double-click to view a possible answer

# Many answers are possible. We noticed the following trends:
#
# * Two layers outperformed one layer, but three layers did not perform
#   significantly better than two layers. In other words, two layers
#   seemed best.
#
# * Setting the topography as follows produced reasonably good results
#   with relatively few nodes:
#     * 10 nodes in the first layer.
#     * 6 nodes in the second layer.
#
#   As the number of nodes in each layer dropped below the preceding,
#   test loss increased. However, depending on your application, hardware
#   constraints, and the relative pain inflicted by a less accurate model,
#   a smaller network (for example, 6 nodes in the first layer and 4 nodes
#   in the second layer) might be acceptable.
```
## Task 3: Regularize the deep neural network (if you have enough time)

Notice that the model's loss against the test set is much higher than the loss against the training set. In other words, the deep neural network is overfitting to the data in the training set. To reduce overfitting, regularize the model. The course has suggested several different ways to regularize a model, including:

* L1 regularization
* L2 regularization
* Dropout regularization

Your task is to experiment with one or more regularization mechanisms to bring the test loss closer to the training loss (while still keeping test loss relatively low).

Note: When you add a regularization function to a model, you might need to tweak other hyperparameters.

### Implementing L1 or L2 regularization

To use L1 or L2 regularization on a hidden layer, specify the `kernel_regularizer` argument to `tf.keras.layers.Dense`. Assign one of the following methods to this argument:

* `tf.keras.regularizers.l1` for L1 regularization
* `tf.keras.regularizers.l2` for L2 regularization

Each of the preceding methods takes an `l` parameter, which adjusts the regularization rate. Assign a decimal value between 0 and 1.0 to `l`; the higher the decimal, the greater the regularization. For example, the following applies L2 regularization at a strength of 0.01:

```
model.add(tf.keras.layers.Dense(units=20,
                                activation='relu',
                                kernel_regularizer=tf.keras.regularizers.l2(l=0.01),
                                name='Hidden1'))
```

### Implementing Dropout regularization

You implement dropout regularization as a separate layer in the topography. For example, the following code demonstrates how to add a dropout regularization layer between the first hidden layer and the second hidden layer:

```
model.add(tf.keras.layers.Dense( ... ))  # define first hidden layer
model.add(tf.keras.layers.Dropout(rate=0.25))
model.add(tf.keras.layers.Dense( ... ))  # define second hidden layer
```

The `rate` parameter to `tf.keras.layers.Dropout` specifies the fraction of nodes that the model should drop out during training.
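For intuition about what the regularization rate does: L2 regularization simply adds `l * sum(w**2)` to the training loss, penalizing large weights. A minimal numpy sketch with made-up weights and errors (not the model above):

```python
import numpy as np

# Hypothetical layer weights and per-example prediction errors.
weights = np.array([0.5, -1.0, 2.0])
errors = np.array([0.1, -0.2, 0.05])

l2_rate = 0.01  # plays the role of the `l` parameter

mse = np.mean(errors ** 2)                      # data-fit term
l2_penalty = l2_rate * np.sum(weights ** 2)     # regularization term

total_loss = mse + l2_penalty
print(mse, l2_penalty, total_loss)
```

Raising `l2_rate` makes the penalty term dominate, pushing the optimizer toward smaller weights at the cost of a slightly worse fit to the training data.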
```python
#@title Double-click for a possible solution

# The following "solution" uses L2 regularization to bring training loss
# and test loss closer to each other. Many, many other solutions are possible.

def create_model(my_learning_rate, my_feature_layer):
  """Create and compile a deep neural net with L2 regularization."""
  # Discard any pre-existing version of the model.
  model = None

  # Most simple tf.keras models are sequential.
  model = tf.keras.models.Sequential()

  # Add the layer containing the feature columns to the model.
  model.add(my_feature_layer)

  # Describe the topography of the model.

  # Implement L2 regularization in the first hidden layer.
  model.add(tf.keras.layers.Dense(units=20,
                                  activation='relu',
                                  kernel_regularizer=tf.keras.regularizers.l2(0.04),
                                  name='Hidden1'))

  # Implement L2 regularization in the second hidden layer.
  model.add(tf.keras.layers.Dense(units=12,
                                  activation='relu',
                                  kernel_regularizer=tf.keras.regularizers.l2(0.04),
                                  name='Hidden2'))

  # Define the output layer.
  model.add(tf.keras.layers.Dense(units=1, name='Output'))

  model.compile(optimizer=tf.keras.optimizers.Adam(lr=my_learning_rate),
                loss="mean_squared_error",
                metrics=[tf.keras.metrics.MeanSquaredError()])

  return model

# Call the new create_model function and the other (unchanged) functions.

# The following variables are the hyperparameters.
learning_rate = 0.007
epochs = 140
batch_size = 1000
label_name = "median_house_value"

# Establish the model's topography.
my_model = create_model(learning_rate, my_feature_layer)

# Train the model on the normalized training set.
epochs, mse = train_model(my_model, train_df_norm, epochs, label_name, batch_size)
plot_the_loss_curve(epochs, mse)

test_features = {name: np.array(value) for name, value in test_df_norm.items()}
test_label = np.array(test_features.pop(label_name))  # isolate the label
print("\n Evaluate the new model against the test set:")
my_model.evaluate(x=test_features, y=test_label, batch_size=batch_size)
```
Then we can start with some imports.
```python
import pandas as pd
from sklearn.datasets import fetch_covtype

import ray
from ray import tune
from ray.air import RunConfig
from ray.train.xgboost import XGBoostTrainer
from ray.tune.tune_config import TuneConfig
from ray.tune.tuner import Tuner
```
doc/source/ray-air/examples/analyze_tuning_results.ipynb
ray-project/ray
apache-2.0
We'll define a utility function to create a Ray Dataset from the Sklearn dataset. We expect the target column to be in the dataframe, so we'll add it to the dataframe manually.
```python
def get_training_data() -> ray.data.Dataset:
    data_raw = fetch_covtype()
    df = pd.DataFrame(data_raw["data"], columns=data_raw["feature_names"])
    df["target"] = data_raw["target"]
    return ray.data.from_pandas(df)

train_dataset = get_training_data()
```
Let's take a look at the schema here:
print(train_dataset)
Since we'll be training a multiclass prediction model, we have to pass some information to XGBoost. For instance, XGBoost expects us to provide the number of classes, and multiclass-enabled evaluation metrics. For a good overview of commonly used hyperparameters, see our tutorial in the docs.
```python
# XGBoost specific params
params = {
    "tree_method": "approx",
    "objective": "multi:softmax",
    "eval_metric": ["mlogloss", "merror"],
    "num_class": 8,
    "min_child_weight": 2,
}
```
With these parameters in place, we'll create a Ray AIR XGBoostTrainer. Note a few things here. First, we pass in a scaling_config to configure the distributed training behavior of each individual XGBoost training job. Here, we want to distribute training across 2 workers. The label_column specifies which column in the dataset contains the target values. params are the XGBoost training params defined above - we can tune these later! The datasets dict contains the dataset we would like to train on. Lastly, we pass the number of boosting rounds to XGBoost.
```python
trainer = XGBoostTrainer(
    scaling_config={"num_workers": 2},
    label_column="target",
    params=params,
    datasets={"train": train_dataset},
    num_boost_round=10,
)
```
We can now create the Tuner with a search space to override some of the default parameters in the XGBoost trainer. Here, we just want to tune the XGBoost max_depth and min_child_weight parameters. Note that we specifically specified min_child_weight=2 in the default XGBoost trainer - this value will be overwritten during tuning. We configure Tune to minimize the train-mlogloss metric. In random search, this doesn't affect the evaluated configurations, but it will affect our default results fetching for analysis later. By the way, the name train-mlogloss is provided by the XGBoost library - train is the name of the dataset and mlogloss is the metric we passed in the XGBoost params above. Trainables can report any number of results (in this case we report 2), but most search algorithms only act on one of them - here we chose the mlogloss.
```python
tuner = Tuner(
    trainer,
    run_config=RunConfig(verbose=1),
    param_space={
        "params": {
            "max_depth": tune.randint(2, 8),
            "min_child_weight": tune.randint(1, 10),
        },
    },
    tune_config=TuneConfig(num_samples=8, metric="train-mlogloss", mode="min"),
)
```
Let's run the tuning. This will take a few minutes to complete.
results = tuner.fit()
Now that we obtained the results, we can analyze them. For instance, we can fetch the best observed result according to the configured metric and mode and print it:
```python
# This will fetch the best result according to the `metric` and `mode` specified
# in the `TuneConfig` above:
best_result = results.get_best_result()
print("Best result error rate", best_result.metrics["train-merror"])
```
For more sophisticated analysis, we can get a pandas dataframe with all trial results:
```python
df = results.get_dataframe()
print(df.columns)
```
As an example, let's group the results per min_child_weight parameter and fetch the minimal obtained values:
```python
groups = df.groupby("config/params/min_child_weight")
mins = groups.min()

for min_child_weight, row in mins.iterrows():
    print("Min child weight", min_child_weight,
          "error", row["train-merror"],
          "logloss", row["train-mlogloss"])
```
As you can see in our example run, a min child weight of 2 showed the lowest error, with a train-merror of 0.196929. That's the same result that `results.get_best_result()` gave us! `results.get_dataframe()` returns the last reported results per trial. If you want to obtain the best ever observed results, you can pass the `filter_metric` and `filter_mode` arguments to `results.get_dataframe()`. In our example, we'll filter for the minimum ever observed train-merror for each trial:
```python
df_min_error = results.get_dataframe(filter_metric="train-merror", filter_mode="min")
df_min_error["train-merror"]
```
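The distinction between last-reported and best-ever values can be illustrated with a small pandas sketch (synthetic numbers, not this run's results):

```python
import pandas as pd

# Made-up per-epoch results for two hypothetical trials.
history = pd.DataFrame({
    "trial": ["a", "a", "a", "b", "b", "b"],
    "train-merror": [0.30, 0.22, 0.25, 0.28, 0.26, 0.24],
})

# What get_dataframe() returns by default: the last reported value per trial.
last_reported = history.groupby("trial")["train-merror"].last()

# What filter_metric/filter_mode="min" would select: the best ever observed.
best_ever = history.groupby("trial")["train-merror"].min()

print(last_reported.tolist())
print(best_ever.tolist())
```

Trial "a" improved mid-run and then regressed, so its last-reported error (0.25) is worse than its best-ever error (0.22); trial "b" ended on its best value, so the two agree.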
## Confirm TensorFlow can see the GPU

Simply select "GPU" in the Accelerator drop-down in Notebook Settings (either through the Edit menu or the command palette at cmd/ctrl-shift-P).
```python
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  # raise SystemError('GPU device not found')
  print('GPU device not found')
else:
  print('Found GPU at: {}'.format(device_name))

# GPU count and name
!nvidia-smi -L
```
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
## Load the Dataset

### Download and Extract the Dataset
```python
# Download the dataset
!curl -O https://selbystorage.s3-us-west-2.amazonaws.com/research/office_2/office_2.tar.gz

data_set = 'office_2'
tar_file = data_set + '.tar.gz'

# Unzip the .tgz file
# -x for extract
# -v for verbose
# -z for gnuzip
# -f for file (should come last, just before the file name)
# -C to extract the zipped contents to a different directory
!tar -xvzf $tar_file
```
### Parse the CSV File
```python
# Define path to csv file
csv_path = data_set + '/interpolated.csv'

# Load the CSV file into a pandas dataframe
df = pd.read_csv(csv_path, sep=",")

# Print the dimensions
print("Dataset Dimensions:")
print(df.shape)

# Print the first 5 lines of the dataframe for review
print("\nDataset Summary:")
df.head(5)
```
## Clean and Pre-process the Dataset

### Remove Unnecessary Columns
```python
# Remove 'index' and 'frame_id' columns
df.drop(['index', 'frame_id'], axis=1, inplace=True)

# Verify new dataframe dimensions
print("Dataset Dimensions:")
print(df.shape)

# Print the first 5 lines of the new dataframe for review
print("\nDataset Summary:")
df.head(5)
```
Detect Missing Data
```python
# Detect Missing Values
print("Any Missing Values?: {}".format(df.isnull().values.any()))

# Total Sum
print("\nTotal Number of Missing Values: {}".format(df.isnull().sum().sum()))

# Sum Per Column
print("\nTotal Number of Missing Values per Column:")
print(df.isnull().sum())
```
Remove Zero Throttle Values
```python
# Determine if any throttle values are zeroes
print("Any 0 throttle values?: {}".format(df['speed'].eq(0).any()))

# Determine number of 0 throttle values:
print("\nNumber of 0 throttle values: {}".format(df['speed'].eq(0).sum()))

# Remove rows with 0 throttle values
if df['speed'].eq(0).any():
    df = df.query('speed != 0')

# Reset the index
df.reset_index(inplace=True, drop=True)

# Verify new dataframe dimensions
print("\nNew Dataset Dimensions:")
print(df.shape)
df.head(5)
```
View Label Statistics
```python
# Steering Command Statistics
print("\nSteering Command Statistics:")
print(df['angle'].describe())

# Throttle Command Statistics
print("\nThrottle Command Statistics:")
print(df['speed'].describe())
```
View Histogram of Steering Commands
```python
#@title Select the number of histogram bins
num_bins = 25 #@param {type:"slider", min:5, max:50, step:1}

hist, bins = np.histogram(df['angle'], num_bins)
center = (bins[:-1] + bins[1:]) * 0.5
plt.bar(center, hist, width=0.05)
#plt.plot((np.min(df['angle']), np.max(df['angle'])), (samples_per_bin, samples_per_bin))

#@title Normalize the Histogram { run: "auto" }
# Cap the number of samples per bin (150-300 for RGB)
hist = True #@param {type:"boolean"}
remove_list = []
samples_per_bin = 200

if hist:
  for j in range(num_bins):
    list_ = []
    for i in range(len(df['angle'])):
      if df.loc[i, 'angle'] >= bins[j] and df.loc[i, 'angle'] <= bins[j+1]:
        list_.append(i)
    random.shuffle(list_)
    # Keep only the overflow beyond samples_per_bin for removal.
    list_ = list_[samples_per_bin:]
    remove_list.extend(list_)

print('removed:', len(remove_list))
df.drop(df.index[remove_list], inplace=True)
df.reset_index(inplace=True)
df.drop(['index'], axis=1, inplace=True)
print('remaining:', len(df))

hist, _ = np.histogram(df['angle'], num_bins)
plt.bar(center, hist, width=0.05)
plt.plot((np.min(df['angle']), np.max(df['angle'])), (samples_per_bin, samples_per_bin))
```
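The bin-capping idea above can also be sketched in plain numpy (synthetic angles, and `np.digitize` replaces the manual per-bin membership loop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical steering angles heavily clustered at zero (not the rover data).
angles = np.concatenate([np.zeros(50), rng.uniform(-1.0, 1.0, 10)])

num_bins = 4
samples_per_bin = 5

# Assign each angle to a bin, then keep at most samples_per_bin per bin.
edges = np.linspace(angles.min(), angles.max(), num_bins + 1)
bin_ids = np.clip(np.digitize(angles, edges) - 1, 0, num_bins - 1)

keep = []
for b in range(num_bins):
    idx = np.where(bin_ids == b)[0]
    rng.shuffle(idx)
    keep.extend(idx[:samples_per_bin])  # cap this bin

balanced = angles[np.sort(np.asarray(keep))]
print(len(angles), "->", len(balanced))
```

The dominant straight-ahead bin shrinks to the cap while sparse bins keep everything, flattening the label distribution so the model isn't biased toward predicting zero steering.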
View a Sample Image
```python
# View a Single Image
index = random.randint(0, df.shape[0]-1)

img_name = data_set + '/' + df.loc[index, 'filename']
angle = df.loc[index, 'angle']

center_image = cv2.imread(img_name)
center_image_mod = cv2.resize(center_image, (320, 180))
# OpenCV loads images as BGR; convert to RGB for matplotlib.
center_image_mod = cv2.cvtColor(center_image_mod, cv2.COLOR_BGR2RGB)

# Crop the image
height_min = 75
height_max = center_image_mod.shape[0]
width_min = 0
width_max = center_image_mod.shape[1]
crop_img = center_image_mod[height_min:height_max, width_min:width_max]

plt.subplot(2, 1, 1)
plt.imshow(center_image_mod)
plt.grid(False)
plt.xlabel('angle: {:.2}'.format(angle))
plt.show()

plt.subplot(2, 1, 2)
plt.imshow(crop_img)
plt.grid(False)
plt.xlabel('angle: {:.2}'.format(angle))
plt.show()
```
View Multiple Images
```python
# Number of Images to Display
num_images = 4

# Display the images
for i in range(num_images):
  index = random.randint(0, df.shape[0]-1)
  image_path = df.loc[index, 'filename']
  angle = df.loc[index, 'angle']
  img_name = data_set + '/' + image_path

  image = cv2.imread(img_name)
  image = cv2.resize(image, (320, 180))
  # OpenCV loads images as BGR; convert to RGB for matplotlib.
  image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

  # Subplot indices must be integers, so use floor division.
  plt.subplot(num_images//2, num_images//2, i+1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image, cmap=plt.cm.binary)
  plt.xlabel('angle: {:.3}'.format(angle))
```
## Split the Dataset

### Define an ImageDataGenerator to Augment Images
```python
# Create image data augmentation generator and choose augmentation types
datagen = ImageDataGenerator(
    #rotation_range=20,
    zoom_range=0.15,
    #width_shift_range=0.1,
    #height_shift_range=0.2,
    #shear_range=10,
    brightness_range=[0.5, 1.0],
    #horizontal_flip=True,
    #vertical_flip=True,
    #channel_shift_range=100.0,
    fill_mode="reflect")
```
View Image Augmentation Examples
```python
# Load the image
index = random.randint(0, df.shape[0]-1)
img_name = data_set + '/' + df.loc[index, 'filename']
original_image = cv2.imread(img_name)
# OpenCV loads images as BGR; convert to RGB for matplotlib.
original_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB)
original_image = cv2.resize(original_image, (320, 180))
label = df.loc[index, 'angle']

# Convert to numpy array and expand dimension to one sample
data = img_to_array(original_image)
test = expand_dims(data, 0)

# Prepare iterator
it = datagen.flow(test, batch_size=1)

# Generate batch of images and convert to unsigned integers for viewing
batch = it.next()
image_aug = batch[0].astype('uint8')

print("Augmenting a Single Image: \n")
plt.subplot(2, 1, 1)
plt.imshow(original_image)
plt.grid(False)
plt.xlabel('angle: {:.2}'.format(label))
plt.show()

plt.subplot(2, 1, 2)
plt.imshow(image_aug)
plt.grid(False)
plt.xlabel('angle: {:.2}'.format(label))
plt.show()

print("Multiple Augmentations: \n")
# Generate samples and plot
for i in range(num_images):
  # Subplot indices must be integers, so use floor division.
  plt.subplot(num_images//2, num_images//2, i+1)
  # Generate a batch of images and convert for viewing
  batch = it.next()
  image = batch[0].astype('uint8')
  # Plot raw pixel data
  plt.imshow(image)
  # Show the figure
  plt.show()
```
Define a Data Generator
```python
def generator(samples, batch_size=32, aug=0):
  num_samples = len(samples)
  while 1:  # Loop forever so the generator never terminates
    for offset in range(0, num_samples, batch_size):
      batch_samples = samples[offset:offset + batch_size]

      images = []
      angles = []
      for batch_sample in batch_samples:
        if batch_sample[5] != "filename":
          name = data_set + '/' + batch_sample[3]
          center_image = cv2.imread(name)
          # OpenCV loads images as BGR; convert to RGB.
          center_image = cv2.cvtColor(center_image, cv2.COLOR_BGR2RGB)
          center_image = cv2.resize(center_image, (320, 180))  # resize from 720x1280 to 180x320
          angle = float(batch_sample[4])

          if not aug:
            images.append(center_image)
            angles.append(angle)
          else:
            data = img_to_array(center_image)
            sample = expand_dims(data, 0)
            it = datagen.flow(sample, batch_size=1)
            batch = it.next()
            image_aug = batch[0].astype('uint8')

            # Randomly flip half of the images (and negate their angles).
            if random.random() < .5:
              image_aug = np.fliplr(image_aug)
              angle = -1 * angle

            images.append(image_aug)
            angles.append(angle)

      X_train = np.array(images)
      y_train = np.array(angles)
      yield sklearn.utils.shuffle(X_train, y_train)
```
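The batching pattern inside `generator` can be isolated from the image handling; a sketch over plain integers:

```python
def batcher(samples, batch_size=3):
    # Yield successive slices forever, like the image generator above.
    while True:
        for offset in range(0, len(samples), batch_size):
            yield samples[offset:offset + batch_size]

gen = batcher(list(range(7)), batch_size=3)
print(next(gen))  # -> [0, 1, 2]
print(next(gen))  # -> [3, 4, 5]
print(next(gen))  # -> [6]  (last batch may be short)
```

Because the outer `while` never exits, Keras can keep requesting batches for as many epochs as it likes without the generator being exhausted.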
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Split the Dataset
samples = []
samples = df.values.tolist()
sklearn.utils.shuffle(samples)
train_samples, validation_samples = train_test_split(samples, test_size=0.2)

print("Number of training samples: ", len(train_samples))
print("Number of validation samples: ", len(validation_samples))
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Define Training and Validation Data Generators
batch_size_value = 32
img_aug = 0

train_generator = generator(train_samples, batch_size=batch_size_value, aug=img_aug)
validation_generator = generator(
    validation_samples, batch_size=batch_size_value, aug=0)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Compile and Train the Model Build the Model
# Initialize the model
model = Sequential()

# Trim image to only see the section with road
# (top_crop, bottom_crop), (left_crop, right_crop)
model.add(Cropping2D(cropping=((height_min, 0), (width_min, 0)),
                     input_shape=(180, 320, 3)))

# Preprocess incoming data, centered around zero with small standard deviation
model.add(Lambda(lambda x: (x / 255.0) - 0.5))

# Nvidia model
model.add(Convolution2D(24, (5, 5), activation="relu", name="conv_1", strides=(2, 2)))
model.add(Convolution2D(36, (5, 5), activation="relu", name="conv_2", strides=(2, 2)))
model.add(Convolution2D(48, (5, 5), activation="relu", name="conv_3", strides=(2, 2)))
model.add(SpatialDropout2D(.5, dim_ordering='default'))
model.add(Convolution2D(64, (3, 3), activation="relu", name="conv_4", strides=(1, 1)))
model.add(Convolution2D(64, (3, 3), activation="relu", name="conv_5", strides=(1, 1)))
model.add(Flatten())
model.add(Dense(1164))
model.add(Dropout(.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(50, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(10, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(1))

model.compile(loss='mse', optimizer=Adam(lr=0.001),
              metrics=['mse', 'mae', 'mape', 'cosine'])

# Print model summary
model.summary()
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Setup Checkpoints
# checkpoint
model_path = './model'
!if [ -d $model_path ]; then echo 'Directory Exists'; else mkdir $model_path; fi

filepath = model_path + "/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
                             save_best_only=True, mode='auto', period=1)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Setup Early Stopping to Prevent Overfitting
# The patience parameter is the number of epochs to wait for improvement
early_stop = EarlyStopping(monitor='val_loss', patience=10)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Reduce Learning Rate When a Metric has Stopped Improving
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Setup Tensorboard
# Clear any logs from previous runs !rm -rf ./Graph/ # Launch Tensorboard !pip install -U tensorboardcolab from tensorboardcolab import * tbc = TensorBoardColab() # Configure the Tensorboard Callback tbCallBack = TensorBoard(log_dir='./Graph', histogram_freq=1, write_graph=True, write_grads=True, write_images=True, batch_size=batch_size_value, update_freq='epoch')
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Load Existing Model
load = True #@param {type:"boolean"} if load: # Returns a compiled model identical to the previous one !curl -O https://selbystorage.s3-us-west-2.amazonaws.com/research/office_2/model.h5 !mv model.h5 model/ model_path_full = model_path + '/' + 'model.h5' model = load_model(model_path_full) print("Loaded previous model: {} \n".format(model_path_full)) else: print("No previous model loaded \n")
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Train the Model
# Define step sizes (fit_generator expects whole numbers of batches)
STEP_SIZE_TRAIN = int(np.ceil(len(train_samples) / batch_size_value))
STEP_SIZE_VALID = int(np.ceil(len(validation_samples) / batch_size_value))

# Define number of epochs
n_epoch = 5

# Define callbacks
# callbacks_list = [TensorBoardColabCallback(tbc)]
# callbacks_list = [TensorBoardColabCallback(tbc), early_stop]
# callbacks_list = [TensorBoardColabCallback(tbc), early_stop, checkpoint]
callbacks_list = [TensorBoardColabCallback(tbc), early_stop, checkpoint, reduce_lr]

# Fit the model
history_object = model.fit_generator(
    generator=train_generator,
    steps_per_epoch=STEP_SIZE_TRAIN,
    validation_data=validation_generator,
    validation_steps=STEP_SIZE_VALID,
    callbacks=callbacks_list,
    use_multiprocessing=True,
    epochs=n_epoch)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Save the Model
# Save model
model_path_full = model_path + '/'
model.save(model_path_full + 'model.h5')

with open(model_path_full + 'model.json', 'w') as output_json:
    output_json.write(model.to_json())
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Evaluate the Model Plot the Training Results
# Plot the training and validation loss for each epoch
print('Generating loss chart...')
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.savefig(model_path + '/model.png')

# Done
print('Done.')
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Print Performance Metrics
scores = model.evaluate_generator(validation_generator, STEP_SIZE_VALID,
                                  use_multiprocessing=True)
metrics_names = model.metrics_names

for i in range(len(model.metrics_names)):
    print("Metric: {} - {}".format(metrics_names[i], scores[i]))
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Compute Prediction Statistics
# Define image loading function
def load_images(dataframe):
    # initialize images array
    images = []
    for i in dataframe.index.values:
        name = data_set + '/' + dataframe.loc[i, 'filename']
        center_image = cv2.imread(name)
        center_image = cv2.resize(center_image, (320, 180))
        images.append(center_image)
    return np.array(images)

# Load images
test_size = 200
df_test = df.sample(frac=1).reset_index(drop=True)
df_test = df_test.head(test_size)
test_images = load_images(df_test)

batch_size = 32
preds = model.predict(test_images, batch_size=batch_size, verbose=1)
#print("Preds: {} \n".format(preds))

testY = df_test.iloc[:, 4].values
#print("Labels: {} \n".format(testY))

df_testY = pd.Series(testY)
df_preds = pd.Series(preds.flatten())  # call flatten(); passing the bare method was a bug

# Replace 0 angle values to avoid division by zero
if df_testY.eq(0).any():
    df_testY.replace(0, 0.0001, inplace=True)

# Calculate the difference
diff = preds.flatten() - df_testY
percentDiff = (diff / df_testY) * 100  # use the zero-free series in the denominator
absPercentDiff = np.abs(percentDiff)

# compute the mean and standard deviation of the absolute percentage difference
mean = np.mean(absPercentDiff)
std = np.std(absPercentDiff)
print("[INFO] mean: {:.2f}%, std: {:.2f}%".format(mean, std))

# Compute the mean and standard deviation of the difference
print(diff.describe())

# Plot a histogram of the prediction errors
num_bins = 25
hist, bins = np.histogram(diff, num_bins)
center = (bins[:-1] + bins[1:]) * 0.5
plt.bar(center, hist, width=0.05)
plt.title('Histogram of Predicted Error')
plt.xlabel('Steering Angle')
plt.ylabel('Number of predictions')
plt.xlim(-2.0, 2.0)
plt.plot(np.min(diff), np.max(diff))

# Plot a scatter plot of the error
plt.scatter(testY, preds)
plt.xlabel('True Values ')
plt.ylabel('Predictions ')
plt.axis('equal')
plt.axis('square')
plt.xlim([-1.75, 1.75])
plt.ylim([-1.75, 1.75])
plt.plot([-1.75, 1.75], [-1.75, 1.75], color='k', linestyle='-', linewidth=.1)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Plot a Prediction
# Plot the image with the actual and predicted steering angle index = random.randint(0,df_test.shape[0]-1) img_name = data_set + '/' + df_test.loc[index,'filename'] center_image = cv2.imread(img_name) center_image = cv2.cvtColor(center_image,cv2.COLOR_RGB2BGR) center_image_mod = cv2.resize(center_image, (320,180)) #resize from 720x1280 to 180x320 plt.imshow(center_image_mod) plt.grid(False) plt.xlabel('Actual: {:.2f} Predicted: {:.2f}'.format(df_test.loc[index,'angle'],float(preds[index]))) plt.show()
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Visualize the Network Show the Model Summary
model.summary()
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Access Individual Layers
# Create a mapping of layer name to layer details
# layers_info maps a layer name to its characteristics
layers_info = {}
for i in model.layers:
    layers_info[i.name] = i.get_config()

# layer_weights maps every layer name to its corresponding weights
layer_weights = {}
for i in model.layers:
    layer_weights[i.name] = i.get_weights()

pprint.pprint(layers_info['conv_5'])
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Visualize the filters
# Visualize the first filter of each convolution layer layers = model.layers layer_ids = [2,3,4,6,7] #plot the filters fig,ax = plt.subplots(nrows=1,ncols=5) for i in range(5): ax[i].imshow(layers[layer_ids[i]].get_weights()[0][:,:,:,0][:,:,0],cmap='gray') ax[i].set_title('Conv'+str(i+1)) ax[i].set_xticks([]) ax[i].set_yticks([])
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
Visualize the Saliency Map
!pip install -I scipy==1.2.* !pip install git+https://github.com/raghakot/keras-vis.git -U # import specific functions from keras-vis package from vis.utils import utils from vis.visualization import visualize_saliency, visualize_cam, overlay # View a Single Image index = random.randint(0,df.shape[0]-1) img_name = data_set + '/' + df.loc[index,'filename'] sample_image = cv2.imread(img_name) sample_image = cv2.cvtColor(sample_image,cv2.COLOR_RGB2BGR) sample_image_mod = cv2.resize(sample_image, (320,180)) plt.imshow(sample_image_mod) layer_idx = utils.find_layer_idx(model, 'conv_5') grads = visualize_saliency(model, layer_idx, filter_indices=None, seed_input=sample_image_mod, grad_modifier='absolute', backprop_modifier='guided') plt.imshow(grads, alpha = 0.6)
rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb
wilselby/diy_driverless_car_ROS
bsd-2-clause
I like this way of importing libraries: if some libraries are not already installed, the system will exit. There is room for improvement here, though — if a library does not exist, it is possible to install it automatically, provided we run the code as admin or with enough permissions. The Notifier Class
class Notifier(object):

    suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']

    def __init__(self, **kwargs):
        self.threshold = None
        self.path = None
        self.list = None
        self.email_sender = None
        self.email_password = None
        self.gmail_smtp = None
        self.gmail_smtp_port = None
        self.text_subtype = None
        self.cap_reached = False
        self.email_subject = None
        for (key, value) in kwargs.iteritems():
            if hasattr(self, key):
                setattr(self, key, value)
        self._log = init_log()
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
We initialize the class as an object with a few attributes; this object has a threshold above which an email is triggered to a recipient list. The object looks at the size of each subdirectory in path. You need to create an email address and add some variables to your environment (discussed later).
@property
def loggy(self):
    return self._log
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
We need to expose the logging capabilities set up by the logging helper we imported (see its code later). This will allow us to log from within the class itself.
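In isolation, the read-only property pattern looks like this (the class and logger names here are illustrative, not from QuotaWatcher):

```python
import logging

class Worker(object):
    def __init__(self):
        # keep the logger private, expose it read-only below
        self._log = logging.getLogger('worker')

    @property
    def loggy(self):
        return self._log

w = Worker()
print(w.loggy.name)  # worker
```

Because `loggy` is a property with no setter, callers can log through it but cannot accidentally replace the underlying logger.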
@staticmethod
def load_recipients_emails(emails_file):
    recipients = [line.rstrip('\n') for line in open(emails_file)
                  if not line[0].isspace()]
    return recipients
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
We need to load the emails from a file created by the user. Usually I create 2 files: development_list, containing only email addresses I use for testing, and production_list, containing the addresses I want to notify in production.
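The same skip-blank-lines idea can be exercised on an in-memory file (the addresses below are made up):

```python
import io

def load_recipients(lines):
    # keep non-blank lines, stripping trailing newlines
    return [line.rstrip('\n') for line in lines if not line[0].isspace()]

sample = io.StringIO(u"alice@example.com\n\nbob@example.com\n")
print(load_recipients(sample))  # ['alice@example.com', 'bob@example.com']
```

The blank middle line starts with a newline character, so `line[0].isspace()` is true and it is dropped.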
@staticmethod
def load_message_content(message_template_file, table):
    template_file = open(message_template_file, 'rb')
    template_file_content = template_file.read().replace(
        "{{table}}", table.get_string())
    template_file.close()
    return template_file_content
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
Inspired by MVC apps, we load the message body from a template. The template contains a placeholder called {{table}} that is replaced with the table of subdirectories and their respective sizes.
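The substitution itself is a plain string replace; a minimal sketch (the template text and table contents here are invented):

```python
template = ("Disk usage report\n"
            "\n"
            "{{table}}\n"
            "\n"
            "Please clean up your directories.")

table_str = "Directory    Size\nalice        1.96 TB"

# inject the rendered table into the placeholder
message = template.replace("{{table}}", table_str)
print(message)
```

Keeping the wording in a template file means the email copy can be edited without touching the code.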
def notify_user(self, email_receivers, table, template): """This method sends an email :rtype : email sent to specified members """ # Create the message input_file = os.path.join( os.path.dirname(__file__), "templates/" + template + ".txt") content = self.load_message_content(input_file, table) msg = MIMEText(content, self.text_subtype) msg["Subject"] = self.email_subject msg["From"] = self.email_sender msg["To"] = ','.join(email_receivers) try: smtpObj = SMTP(self.gmail_smtp, self.gmail_smtp_port) # Identify yourself to GMAIL ESMTP server. smtpObj.ehlo() # Put SMTP connection in TLS mode and call ehlo again. smtpObj.starttls() smtpObj.ehlo() # Login to service smtpObj.login(user=self.email_sender, password=self.email_password) # Send email smtpObj.sendmail(self.email_sender, email_receivers, msg.as_string()) # close connection and session. smtpObj.quit() except SMTPException as error: print "Error: unable to send email : {err}".format(err=error)
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
notify_user is the function that will send an email to the users upon request. It loads the message body template and injects the table in it.
@staticmethod def du(path): """disk usage in kilobytes""" # return subprocess.check_output(['du', '-s', # path]).split()[0].decode('utf-8') try: p1 = subprocess.Popen(('ls', '-d', path), stdout=subprocess.PIPE) p2 = subprocess.Popen((os.environ["GNU_PARALLEL"], '--no-notice', 'du', '-s', '2>&1'), stdin=p1.stdout, stdout=subprocess.PIPE) p3 = subprocess.Popen( ('grep', '-v', '"Permission denied"'), stdin=p2.stdout, stdout=subprocess.PIPE) output = p3.communicate()[0] except subprocess.CalledProcessError as e: raise RuntimeError("command '{0}' return with error (code {1}): {2}".format( e.cmd, e.returncode, e.output)) # return ''.join([' '.join(hit.split('\t')) for hit in output.split('\n') # if len(hit) > 0 and not "Permission" in hit and output[0].isdigit()]) result = [' '.join(hit.split('\t')) for hit in output.split('\n')] for line in result: if line and len(line.split('\n')) > 0 and "Permission" not in line and line[0].isdigit(): return line.split(" ")[0]
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
This is a wrapper around the familiar du command. I use GNU parallel in case there are many subdirectories and we don't want to wait for sequential processing. Note that we could have achieved this with multithreading as well.
def du_h(self, nbytes):
    if nbytes == 0:
        return '0 B'
    i = 0
    while nbytes >= 1024 and i < len(self.suffixes) - 1:
        nbytes /= 1024.
        i += 1
    # '%'-interpolation; the original called .format() on a '%'-style
    # template, which returned the template string unchanged
    f = ('%.2f' % nbytes).rstrip('0').rstrip('.')
    return '%s %s' % (f, self.suffixes[i])
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
I didn't want to use the -h flag because we may want to sum up subdirectory sizes or do other postprocessing, so we'd rather keep them in a unified unit. For a more human-readable format, we can use the du_h() method.
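A standalone version of that conversion, re-implemented here outside the class for illustration:

```python
SUFFIXES = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']

def human_readable(nbytes):
    """Convert a byte count to a human-readable string."""
    if nbytes == 0:
        return '0 B'
    i = 0
    while nbytes >= 1024 and i < len(SUFFIXES) - 1:
        nbytes /= 1024.0
        i += 1
    # trim trailing zeros and a dangling decimal point
    f = ('%.2f' % nbytes).rstrip('0').rstrip('.')
    return '%s %s' % (f, SUFFIXES[i])

print(human_readable(0))     # 0 B
print(human_readable(2048))  # 2 KB
print(human_readable(1536))  # 1.5 KB
```

The `i < len(SUFFIXES) - 1` guard stops the loop at PB, so absurdly large inputs still render instead of running off the end of the suffix list.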
@staticmethod
def list_folders(given_path):
    user_list = []
    for path in os.listdir(given_path):
        if not os.path.isfile(os.path.join(given_path, path)) and not path.startswith(".") and not path.startswith("archive"):
            user_list.append(path)
    return user_list
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
We need at some point to return a list of subdirectories; each will be passed through the same function (du).
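The filtering rules (skip plain files, dot-directories, and anything starting with archive) can be checked against a throwaway directory tree:

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'alice'))         # a normal user directory
os.mkdir(os.path.join(root, '.hidden'))       # skipped: starts with '.'
os.mkdir(os.path.join(root, 'archive_2014'))  # skipped: starts with 'archive'
open(os.path.join(root, 'notes.txt'), 'w').close()  # skipped: a plain file

dirs = [p for p in os.listdir(root)
        if not os.path.isfile(os.path.join(root, p))
        and not p.startswith('.')
        and not p.startswith('archive')]
print(dirs)  # ['alice']
```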
def notify(self): global cap_reached self._log.info("Loading recipient emails...") list_of_recievers = self.load_recipients_emails(self.list) paths = self.list_folders(self.path) paths = [self.path + user for user in paths] sizes = [] for size in paths: try: self._log.info("calculating disk usage for " + size + " ...") sizes.append(int(self.du(size))) except Exception, e: self._log.exception(e) sizes.append(0) # sizes = [int(du(size).split(' ')[0]) for size in paths] # convert kilobytes to bytes sizes = [int(element) * 1000 for element in sizes] table = PrettyTable(["Directory", "Size"]) table.align["Directory"] = "l" table.align["Size"] = "r" table.padding_width = 5 table.border = False for account, size_of_account in zip(paths, sizes): if int(size_of_account) > int(self.threshold): table.add_row( ["*" + os.path.basename(account) + "*", "*" + self.du_h(size_of_account) + "*"]) self.cap_reached = True else: table.add_row([os.path.basename(account), self.du_h(size_of_account)]) # notify Admins table.add_row(["TOTAL", self.du_h(sum(sizes))]) table.add_row(["Usage", str(sum(sizes) / 70000000000000)]) self.notify_user(list_of_recievers, table, "karey") if self.cap_reached: self.notify_user(list_of_recievers, table, "default_size_limit") def run(self): self.notify()
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
Finally we create the function that brings the whole protocol together:

- Read the list of receivers
- Load the path we want to look into
- For each subdirectory, calculate its size and append it to a list
- Create a table to be populated row by row
- Add the subdirectories and their sizes
- Calculate the total size of the subdirectories
- If one of the subdirectories is larger than the specified threshold, trigger the email
- Report the usage as a percentage
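The flagging step in notify() boils down to comparing each size to the threshold; a sketch with made-up sizes:

```python
threshold = 2500000000000  # 2.5 TB in bytes

sizes = {
    'alice': 1960000000000,  # under the cap
    'fong':  9670000000000,  # over the cap
}

rows = []
cap_reached = False
for account in sorted(sizes):
    size = sizes[account]
    if size > threshold:
        # offenders are wrapped in asterisks, as in the email table
        rows.append('*%s*' % account)
        cap_reached = True
    else:
        rows.append(account)

print(rows)         # ['alice', '*fong*']
print(cap_reached)  # True
```

If any account trips the threshold, `cap_reached` flips once and stays set, which is what later decides whether the alert email goes out.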
def arguments(): """Defines the command line arguments for the script.""" main_desc = """Monitors changes in the size of dirs for a given path""" parser = ArgumentParser(description=main_desc) parser.add_argument("path", default=os.path.expanduser('~'), nargs='?', help="The path to monitor. If none is given, takes the home directory") parser.add_argument("list", help="text file containing the list of persons to be notified, one per line") parser.add_argument("-s", "--notification_subject", default=None, help="Email subject of the notification") parser.add_argument("-t", "--threshold", default=2500000000000, help="The threshold that will trigger the notification") parser.add_argument("-v", "--version", action="version", version="%(prog)s {0}".format(__version__), help="show program's version number and exit") return parser
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
The program takes into account: the path to examine, the list of emails in a file, the subject of the alert, and the threshold that triggers the email (here by default 2.5 TB).
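The parser can be exercised without a real command line by handing parse_args() an argv list (the values below are examples):

```python
from argparse import ArgumentParser

parser = ArgumentParser(description='Monitors changes in the size of dirs')
parser.add_argument('path', nargs='?', default='.')
parser.add_argument('list')
parser.add_argument('-t', '--threshold', type=int, default=2500000000000)

args = parser.parse_args(['/data', 'emails.txt', '-t', '1000'])
print(args.path, args.list, args.threshold)  # /data emails.txt 1000
```

Note the `type=int` on the threshold: the real script accepts the flag as a string and casts it later, but converting at parse time catches malformed values immediately.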
def main(): args = arguments().parse_args() notifier = Notifier() loggy = notifier.loggy # Set parameters loggy.info("Starting QuotaWatcher session...") loggy.info("Setting parameters ...") notifier.list = args.list notifier.threshold = args.threshold notifier.path = args.path # Configure the app try: loggy.info("Loading environment variables ...") notifier.email_sender = os.environ["NOTIFIER_SENDER"] notifier.email_password = os.environ["NOTIFIER_PASSWD"] notifier.gmail_smtp = os.environ["NOTIFIER_SMTP"] notifier.gmail_smtp_port = os.environ["NOTIFIER_SMTP_PORT"] notifier.text_subtype = os.environ["NOTIFIER_SUBTYPE"] notifier.email_subject = args.notification_subject notifier.cap_reached = False except Exception, e: loggy.exception(e) notifier.run() loggy.info("End of QuotaWatcher session")
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
Note that in main we load some environment variables that you should specify in advance. It is up to the user to fill these out. It is preferable to declare them as environment variables: most of the time they are confidential, so we'd rather not show them here.

That's it! This is an example of the log output:

```
2015-07-03 10:40:46,968 - quota_logger - INFO - Starting QuotaWatcher session...
2015-07-03 10:40:46,969 - quota_logger - INFO - Setting parameters ...
2015-07-03 10:40:46,969 - quota_logger - INFO - Loading environment variables ...
2015-07-03 10:40:46,969 - quota_logger - INFO - Loading recipient emails...
2015-07-03 10:40:47,011 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/amcpherson ...
2015-07-03 11:21:09,442 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/andrewjlroth ...
2015-07-03 15:31:41,500 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/asteif ...
2015-07-03 15:40:34,268 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/clefebvre ...
2015-07-03 15:42:47,483 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/dgrewal ...
2015-07-03 16:01:30,588 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/fdorri ...
2015-07-03 16:03:43,850 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/fong ...
2015-07-03 16:16:13,781 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/gha ...
2015-07-03 16:16:38,673 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jding ...
2015-07-03 16:16:50,820 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/cdesouza ...
2015-07-03 16:16:52,585 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jrosner ...
2015-07-03 16:27:30,684 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jtaghiyar ...
2015-07-03 16:28:16,982 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/kareys ...
2015-07-03 19:21:07,607 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/hfarahani ...
2015-07-03 19:22:07,618 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jzhou ...
2015-07-03 19:38:28,147 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/pipelines ...
2015-07-03 19:53:20,771 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/projects ...
2015-07-03 20:52:45,001 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/raniba ...
2015-07-03 20:59:50,543 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/tfunnell ...
2015-07-03 21:00:47,216 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/ykwang ...
2015-07-03 21:03:30,277 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/azhang ...
2015-07-03 21:03:30,820 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/softwares ...
2015-07-03 21:03:42,679 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/sjewell ...
2015-07-03 21:03:51,711 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/kastonl ...
2015-07-03 21:04:52,536 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/amazloomian ...
2015-07-03 21:07:43,501 - quota_logger - INFO - End of QuotaWatcher session
```

And the triggered email will look like:

```
THIS IS AN ALERT MESSAGE : DISK USAGE SPIKE

This is a warning message about the disk usage relative to the Shahlab group at GSC
We detected a spike > 2.5 T for some accounts and here is a list of the space usage per account reported today

     Directory          Size
     amcpherson         1.96 TB
     andrewjlroth       390.19 GB
     asteif             2.05 TB
     clefebvre          16.07 GB
     dgrewal            1.61 TB
     fdorri             486.49 GB
     *fong*             *9.67 TB*
     gha                50.7 GB
     jding              638.72 GB
     cdesouza           56.15 GB
     jrosner            1.82 TB
     jtaghiyar          253.84 GB
     *kareys*           *11.26 TB*
     hfarahani          1.09 TB
     jzhou              1.19 TB
     pipelines          2.1 TB
     *projects*         *4.09 TB*
     raniba             2.03 TB
     tfunnell           1.02 TB
     ykwang             1.71 TB
     azhang             108.4 MB
     softwares          34.67 GB
     sjewell            24.53 GB
     kastonl            118.51 GB
     amazloomian        1.71 TB
     TOTAL              45.34 TB
     Usage              71.218%

Please do the necessary to remove temporary files and take the time to clean up your working directories

Thank you for your cooperation

(am a cron job, don't reply to this message, if you have questions ask Ali)

PS : This is a very close estimation, some directories may have strict permissions, for an accurate disk usage please make sure that you set your files permissions so that anyone can see them.
```

The logger
import logging
import datetime

def init_log():
    current_time = datetime.datetime.now()
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(current_time.isoformat() + '_quotawatcher.log')
    handler.setLevel(logging.INFO)
    # create a logging format
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
QuotaWatcher.ipynb
radaniba/QuotaWatcher
gpl-2.0
To keep track of the information we'll set up a config variable and print it to the screen for bookkeeping.
import os import shutil from datetime import datetime from ioos_tools.ioos import parse_config config = parse_config("config.yaml") # Saves downloaded data into a temporary directory. save_dir = os.path.abspath(config["run_name"]) if os.path.exists(save_dir): shutil.rmtree(save_dir) os.makedirs(save_dir) fmt = "{:*^64}".format print(fmt("Saving data inside directory {}".format(save_dir))) print(fmt(" Run information ")) print("Run date: {:%Y-%m-%d %H:%M:%S}".format(datetime.utcnow())) print("Start: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["start"])) print("Stop: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["stop"])) print( "Bounding box: {0:3.2f}, {1:3.2f}," "{2:3.2f}, {3:3.2f}".format(*config["region"]["bbox"]) )
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
To interface with the IOOS catalog we will use the Catalogue Service for the Web (CSW) endpoint and Python's OWSLib library. The cell below creates the Filter Encoding Specification (FES) with the configuration we specified in cell [2]. The filter is composed of:

- or to catch any of the standard names;
- not some names we do not want to show up in the results;
- a date range and bounding box for the time-space domain of the search.
def make_filter(config): from owslib import fes from ioos_tools.ioos import fes_date_filter kw = dict( wildCard="*", escapeChar="\\", singleChar="?", propertyname="apiso:Subject" ) or_filt = fes.Or( [fes.PropertyIsLike(literal=("*%s*" % val), **kw) for val in config["cf_names"]] ) not_filt = fes.Not([fes.PropertyIsLike(literal="GRIB-2", **kw)]) begin, end = fes_date_filter(config["date"]["start"], config["date"]["stop"]) bbox_crs = fes.BBox(config["region"]["bbox"], crs=config["region"]["crs"]) filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])] return filter_list filter_list = make_filter(config)
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
We need to wrap the OWSlib.csw.CatalogueServiceWeb object with a custom function, get_csw_records, to be able to paginate over the results. In the cell below we loop over all the catalog returns and extract the OPeNDAP endpoints.
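At its core, get_csw_records is a pagination loop: request a page of records, append them, and stop when a short page comes back. The idea, sketched with a dummy page-fetcher standing in for the CSW request:

```python
def fetch_page(start, page_size, total=25):
    # stand-in for one CSW request; returns record ids for this page
    stop = min(start + page_size, total)
    return list(range(start, stop))

records, start, page_size = [], 0, 10
while True:
    page = fetch_page(start, page_size)
    records.extend(page)
    if len(page) < page_size:  # last page reached
        break
    start += page_size

print(len(records))  # 25
```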
from ioos_tools.ioos import get_csw_records, service_urls from owslib.csw import CatalogueServiceWeb dap_urls = [] print(fmt(" Catalog information ")) for endpoint in config["catalogs"]: print("URL: {}".format(endpoint)) try: csw = CatalogueServiceWeb(endpoint, timeout=120) except Exception as e: print("{}".format(e)) continue csw = get_csw_records(csw, filter_list, esn="full") OPeNDAP = service_urls(csw.records, identifier="OPeNDAP:OPeNDAP") odp = service_urls( csw.records, identifier="urn:x-esri:specification:ServiceType:odp:url" ) dap = OPeNDAP + odp dap_urls.extend(dap) print("Number of datasets available: {}".format(len(csw.records.keys()))) for rec, item in csw.records.items(): print("{}".format(item.title)) if dap: print(fmt(" DAP ")) for url in dap: print("{}.html".format(url)) print("\n") # Get only unique endpoints. dap_urls = list(set(dap_urls))
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
We found 10 dataset endpoints but only 9 of them have the proper metadata for us to identify the OPeNDAP endpoint, those that contain either OPeNDAP:OPeNDAP or urn:x-esri:specification:ServiceType:odp:url scheme. Unfortunately we lost the COAWST model in the process. The next step is to ensure there are no observations in the list of endpoints. We want only the models for now.
from ioos_tools.ioos import is_station from timeout_decorator import TimeoutError # Filter out some station endpoints. non_stations = [] for url in dap_urls: try: if not is_station(url): non_stations.append(url) except (IOError, OSError, RuntimeError, TimeoutError) as e: print("Could not access URL {}.html\n{!r}".format(url, e)) dap_urls = non_stations print(fmt(" Filtered DAP ")) for url in dap_urls: print("{}.html".format(url))
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
Now we have a nice list of all the models available in the catalog for the domain we specified. We still need to find the observations for the same domain. To accomplish that we will use the pyoos library and search the SOS CO-OPS services using virtually the same configuration options as the catalog search.
from pyoos.collectors.coops.coops_sos import CoopsSos collector_coops = CoopsSos() collector_coops.set_bbox(config["region"]["bbox"]) collector_coops.end_time = config["date"]["stop"] collector_coops.start_time = config["date"]["start"] collector_coops.variables = [config["sos_name"]] ofrs = collector_coops.server.offerings title = collector_coops.server.identification.title print(fmt(" Collector offerings ")) print("{}: {} offerings".format(title, len(ofrs)))
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
To make it easier to work with the data we extract the time-series as pandas tables and interpolate them to a common 1-hour interval index.
import pandas as pd from ioos_tools.ioos import collector2table data = collector2table( collector=collector_coops, config=config, col="water_surface_height_above_reference_datum (m)", ) df = dict( station_name=[s._metadata.get("station_name") for s in data], station_code=[s._metadata.get("station_code") for s in data], sensor=[s._metadata.get("sensor") for s in data], lon=[s._metadata.get("lon") for s in data], lat=[s._metadata.get("lat") for s in data], depth=[s._metadata.get("depth") for s in data], ) pd.DataFrame(df).set_index("station_code") index = pd.date_range( start=config["date"]["start"].replace(tzinfo=None), end=config["date"]["stop"].replace(tzinfo=None), freq="1H", ) # Preserve metadata with `reindex`. observations = [] for series in data: _metadata = series._metadata series.index = series.index.tz_localize(None) obs = series.reindex(index=index, limit=1, method="nearest") obs._metadata = _metadata observations.append(obs)
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
The next cell saves those time-series as CF-compliant netCDF files on disk, to make it easier to access them later.
import iris
from ioos_tools.tardis import series2cube

attr = dict(
    featureType="timeSeries",
    Conventions="CF-1.6",
    standard_name_vocabulary="CF-1.6",
    cdm_data_type="Station",
    comment="Data from http://opendap.co-ops.nos.noaa.gov",
)

cubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])

outfile = os.path.join(save_dir, "OBS_DATA.nc")
iris.save(cubes, outfile)
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
We still need to read the model data from the list of endpoints we found. The next cell takes care of that. We use iris and a set of custom functions from the ioos_tools library that download only the data in the domain we requested.
from ioos_tools.ioos import get_model_name
from ioos_tools.tardis import is_model, proc_cube, quick_load_cubes
from iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError

print(fmt(" Models "))
cubes = dict()
for k, url in enumerate(dap_urls):
    print("\n[Reading url {}/{}]: {}".format(k + 1, len(dap_urls), url))
    try:
        cube = quick_load_cubes(url, config["cf_names"], callback=None, strict=True)
        if is_model(cube):
            cube = proc_cube(
                cube,
                bbox=config["region"]["bbox"],
                time=(config["date"]["start"], config["date"]["stop"]),
                units=config["units"],
            )
        else:
            print("[Not model data]: {}".format(url))
            continue
        mod_name = get_model_name(url)
        cubes.update({mod_name: cube})
    except (
        RuntimeError,
        ValueError,
        ConstraintMismatchError,
        CoordinateNotFoundError,
        IndexError,
    ) as e:
        print("Cannot get cube for: {}\n{}".format(url, e))
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
Now we can match each observation time-series with its closest grid point (within 0.08 degrees) on each model. This is a complex and laborious task! If you are running this interactively, grab a coffee and sit comfortably :-) Note that we are also saving the model time-series to files that align with the observations we saved before.
import iris from ioos_tools.tardis import ( add_station, ensure_timeseries, get_nearest_water, make_tree, ) from iris.pandas import as_series for mod_name, cube in cubes.items(): fname = "{}.nc".format(mod_name) fname = os.path.join(save_dir, fname) print(fmt(" Downloading to file {} ".format(fname))) try: tree, lon, lat = make_tree(cube) except CoordinateNotFoundError: print("Cannot make KDTree for: {}".format(mod_name)) continue # Get model series at observed locations. raw_series = dict() for obs in observations: obs = obs._metadata station = obs["station_code"] try: kw = dict(k=10, max_dist=0.08, min_var=0.01) args = cube, tree, obs["lon"], obs["lat"] try: series, dist, idx = get_nearest_water(*args, **kw) except RuntimeError as e: print("Cannot download {!r}.\n{}".format(cube, e)) series = None except ValueError: status = "No Data" print("[{}] {}".format(status, obs["station_name"])) continue if not series: status = "Land " else: raw_series.update({station: series}) series = as_series(series) status = "Water " print("[{}] {}".format(status, obs["station_name"])) if raw_series: # Save cube. for station, cube in raw_series.items(): cube = add_station(cube, station) try: cube = iris.cube.CubeList(raw_series.values()).merge_cube() except MergeError as e: print(e) ensure_timeseries(cube) try: iris.save(cube, fname) except AttributeError: # FIXME: we should patch the bad attribute instead of removing everything. cube.attributes = {} iris.save(cube, fname) del cube print("Finished processing [{}]".format(mod_name))
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
With the matched sets of model and observation time-series it is relatively easy to compute skill score metrics on them. In cells [13] to [16] we compute both the mean bias and the root mean square error (RMSE) of the time-series.
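Both metrics are simple to state. Here is a minimal pure-Python sketch on toy numbers (the values and the names `bias`/`rmse_val` are ours, not ioos_tools'; the real `apply_skill` can additionally de-mean the series and filter tides):

```python
# Hypothetical toy series, just to illustrate the two metrics.
obs = [1.0, 1.5, 2.0, 1.8]   # observed water levels (m)
mod = [1.2, 1.4, 2.3, 1.9]   # modeled water levels (m)

# Mean bias: average signed model-minus-observation difference.
bias = sum(m - o for m, o in zip(mod, obs)) / len(obs)

# RMSE: square root of the mean squared difference.
rmse_val = (sum((m - o) ** 2 for m, o in zip(mod, obs)) / len(obs)) ** 0.5

print(bias)      # positive means the model overestimates on average
print(rmse_val)
```

A positive mean bias with a small RMSE suggests a systematic offset; a near-zero bias with a large RMSE suggests phase or amplitude errors instead.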
from ioos_tools.ioos import stations_keys


def rename_cols(df, config):
    cols = stations_keys(config, key="station_name")
    return df.rename(columns=cols)


from ioos_tools.ioos import load_ncs
from ioos_tools.skill_score import apply_skill, mean_bias

dfs = load_ncs(config)

df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
skill_score = dict(mean_bias=df.to_dict())

# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")

from ioos_tools.skill_score import rmse

dfs = load_ncs(config)

df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
skill_score["rmse"] = df.to_dict()

# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")

import pandas as pd

# Stringify keys.
for key in skill_score.keys():
    skill_score[key] = {str(k): v for k, v in skill_score[key].items()}

mean_bias = pd.DataFrame.from_dict(skill_score["mean_bias"])
mean_bias = mean_bias.applymap("{:.2f}".format).replace("nan", "--")

skill_score = pd.DataFrame.from_dict(skill_score["rmse"])
skill_score = skill_score.applymap("{:.2f}".format).replace("nan", "--")
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
Last but not least, we can assemble a GIS map (cells [17-23]) with the time-series plots for the observations and models and the corresponding skill scores.
import folium from ioos_tools.ioos import get_coordinates def make_map(bbox, **kw): line = kw.pop("line", True) zoom_start = kw.pop("zoom_start", 5) lon = (bbox[0] + bbox[2]) / 2 lat = (bbox[1] + bbox[3]) / 2 m = folium.Map( width="100%", height="100%", location=[lat, lon], zoom_start=zoom_start ) if line: p = folium.PolyLine( get_coordinates(bbox), color="#FF0000", weight=2, opacity=0.9, ) p.add_to(m) return m bbox = config["region"]["bbox"] m = make_map(bbox, zoom_start=8, line=True, layers=True) all_obs = stations_keys(config) from glob import glob from operator import itemgetter import iris from folium.plugins import MarkerCluster iris.FUTURE.netcdf_promote = True big_list = [] for fname in glob(os.path.join(save_dir, "*.nc")): if "OBS_DATA" in fname: continue cube = iris.load_cube(fname) model = os.path.split(fname)[1].split("-")[-1].split(".")[0] lons = cube.coord(axis="X").points lats = cube.coord(axis="Y").points stations = cube.coord("station_code").points models = [model] * lons.size lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist()) big_list.extend(lista) big_list.sort(key=itemgetter(3)) df = pd.DataFrame(big_list, columns=["name", "lon", "lat", "station"]) df.set_index("station", drop=True, inplace=True) groups = df.groupby(df.index) locations, popups = [], [] for station, info in groups: sta_name = all_obs[station] for lat, lon, name in zip(info.lat, info.lon, info.name): locations.append([lat, lon]) popups.append("[{}]: {}".format(name, sta_name)) MarkerCluster(locations=locations, popups=popups, name="Cluster").add_to(m) titles = { "coawst_4_use_best": "COAWST_4", "pacioos_hycom-global": "HYCOM", "NECOFS_GOM3_FORECAST": "NECOFS_GOM3", "NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST": "NECOFS_MassBay", "NECOFS_FVCOM_OCEAN_BOSTON_FORECAST": "NECOFS_Boston", "SECOORA_NCSU_CNAPS": "SECOORA/CNAPS", "roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best": "ESPRESSO Avg", "roms_2013_da-ESPRESSO_Real-Time_v2_History_Best": "ESPRESSO Hist", "OBS_DATA": 
"Observations", } from itertools import cycle from bokeh.embed import file_html from bokeh.models import HoverTool, Legend from bokeh.palettes import Category20 from bokeh.plotting import figure from bokeh.resources import CDN from folium import IFrame # Plot defaults. colors = Category20[20] colorcycler = cycle(colors) tools = "pan,box_zoom,reset" width, height = 750, 250 def make_plot(df, station): p = figure( toolbar_location="above", x_axis_type="datetime", width=width, height=height, tools=tools, title=str(station), ) leg = [] for column, series in df.iteritems(): series.dropna(inplace=True) if not series.empty: if "OBS_DATA" not in column: bias = mean_bias[str(station)][column] skill = skill_score[str(station)][column] line_color = next(colorcycler) kw = dict(alpha=0.65, line_color=line_color) else: skill = bias = "NA" kw = dict(alpha=1, color="crimson") line = p.line( x=series.index, y=series.values, line_width=5, line_cap="round", line_join="round", **kw ) leg.append(("{}".format(titles.get(column, column)), [line])) p.add_tools( HoverTool( tooltips=[ ("Name", "{}".format(titles.get(column, column))), ("Bias", bias), ("Skill", skill), ], renderers=[line], ) ) legend = Legend(items=leg, location=(0, 60)) legend.click_policy = "mute" p.add_layout(legend, "right") p.yaxis[0].axis_label = "Water Height (m)" p.xaxis[0].axis_label = "Date/time" return p def make_marker(p, station): lons = stations_keys(config, key="lon") lats = stations_keys(config, key="lat") lon, lat = lons[station], lats[station] html = file_html(p, CDN, station) iframe = IFrame(html, width=width + 40, height=height + 80) popup = folium.Popup(iframe, max_width=2650) icon = folium.Icon(color="green", icon="stats") marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon) return marker dfs = load_ncs(config) for station in dfs: sta_name = all_obs[station] df = dfs[station] if df.empty: continue p = make_plot(df, station) marker = make_marker(p, station) marker.add_to(m) 
folium.LayerControl().add_to(m) def embed_map(m): from IPython.display import HTML m.save("index.html") with open("index.html") as f: html = f.read() iframe = '<iframe srcdoc="{srcdoc}" style="width: 100%; height: 750px; border: none"></iframe>' srcdoc = html.replace('"', "&quot;") return HTML(iframe.format(srcdoc=srcdoc)) embed_map(m)
notebooks/2018-03-15-ssh-skillscore.ipynb
ioos/notebooks_demos
mit
The SQL for the original data source. The data has been sampled, and one record was modified, because the server apparently had some problems that day, causing traffic to drop sharply.
sql = """
SELECT date, count(distinct cookie_pta) as uv
from TABLE_DATE_RANGE(pixinsight.article_visitor_log_1_100_,
                      TIMESTAMP('2017-01-01'),
                      CURRENT_TIMESTAMP())
where venue = 'pixnet'
group by date
order by date
"""

from os import environ

# load and plot dataset
import pandas as pd
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot
import matplotlib.dates as mdates

%matplotlib notebook
# %matplotlib inline

# load dataset
def parser(x):
    return datetime.strptime(x, '%Y%m%d')

series = pd.read_gbq(sql, project_id=environ['PROJECT_ID'], verbose=False,
                     private_key=environ['GOOGLE_KEY'])
# header=0, parse_dates=[0], index_col='date', squeeze=True, date_parser=parser

series['date'] = pd.to_datetime(series['date'], format='%Y%m%d')
series.index = series['date']
del series['date']

# summarize first few rows
print(series.head())
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Scale the data to the 0-1 range so it can serve as input and output (because sigmoid lies between 0 and 1).
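A minimal sketch of what MinMaxScaler's fit/transform does on a 1-D series (toy values; the real scaler also handles 2-D arrays, custom feature ranges, and `inverse_transform`):

```python
# Min-max scaling maps each value v to (v - min) / (max - min).
vals = [10.0, 20.0, 15.0, 30.0]

lo, hi = min(vals), max(vals)                        # "fit": learn the data range
vals_scaled = [(v - lo) / (hi - lo) for v in vals]   # "transform" into [0, 1]

print(vals_scaled)
```

Keeping the fitted `lo`/`hi` around is what makes the inverse transform (used later to plot predictions in the original units) possible.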
from sklearn.preprocessing import scale, MinMaxScaler

scaler = MinMaxScaler()

x = series.values
x = x.reshape([x.shape[0], 1])

scaler.fit(x)
x_scaled = scaler.transform(x)

pyplot.figure()
pyplot.plot(x_scaled)
pyplot.show()
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Generate the (x, y) pairs. For example, if the step size is set to 4 days, one training example is 4 consecutive days of traffic, and those 4 days are then used to predict the traffic on the 5th day. In the figure below, green marks the training data (the previous 4 days) and blue marks the part to be predicted. <img align="left" width="50%" src="./imgs/sequence_uv.png" />
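The split can be sketched with plain lists (toy data; the notebook's `window_stack` builds the same pairs with numpy):

```python
# Toy illustration of the sliding-window split: each window of step_size
# consecutive values is the input, and the value right after it is the target.
traffic = [10, 11, 12, 13, 14, 15, 16]
step_size = 4

xs = [traffic[i:i + step_size] for i in range(len(traffic) - step_size)]
ys = [traffic[i + step_size] for i in range(len(traffic) - step_size)]

print(xs[0], '->', ys[0])  # the first 4 days predict the 5th day
```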
# Look back at every record from the previous 30 days
step_size = 15

print("Original data length: {}".format(x_scaled.shape))

def window_stack(a, stepsize=1, width=3):
    return np.hstack( a[i:1+i-width or None:stepsize] for i in range(0,width) )

import numpy as np

train_x = window_stack(x_scaled, stepsize=1, width=step_size)

# Drop the last row, because it has no future value to validate against
train_x = train_x[:-1]

train_x.shape

# Note: never use the last day of each row as part of the training input data
train_y = np.array([i for i in x_scaled[step_size:]])
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Check that the generated training data does not include the testing data.
train_y.shape

train_x[0]
train_x[1]

train_y[0]
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Design Graph
# reshape input to be [samples, time steps, features]
trainX = np.reshape(train_x, (train_x.shape[0], step_size, 1))

from keras import Sequential
from keras.layers import LSTM, Dense

# create and fit the LSTM network
model = Sequential()
# input_shape is (step_size, feature_dim)
model.add(LSTM(4, input_shape=(step_size, 1), unroll=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

model.summary()
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Do not look at the last 30 records (hold them out for validation).
validation_size = 60
val_loss = []
loss = []

for _ in range(400):
    history = model.fit(trainX[:-1*validation_size], train_y[:-1*validation_size],
                        epochs=1, shuffle=False,
                        validation_data=(trainX[-1*validation_size:],
                                         train_y[-1*validation_size:]))
    loss.append(history.history['loss'])
    val_loss.append(history.history['val_loss'])
    model.reset_states()
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Take a look at the error rate curve.
pyplot.figure()
pyplot.plot(loss)
pyplot.plot(val_loss)
pyplot.show()
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Take a look at how well the fitted curve matches the data.
predict_y = model.predict(trainX)

train_y.shape

pyplot.figure()
pyplot.plot(scaler.inverse_transform(predict_y))
pyplot.plot(scaler.inverse_transform(train_y))
pyplot.show()
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
Predict on the last 60 days of data and look at the results.
predict_y = model.predict(trainX[-1*validation_size:])
predict_y = scaler.inverse_transform(predict_y)

predict_y.shape

pyplot.figure()
pyplot.plot(x[-1*(validation_size+1):-1])
pyplot.plot(predict_y)
pyplot.show()
Keras_LSTM2.ipynb
texib/deeplearning_homework
mit
We need a lot of building blocks from Lasagne to build the network.
import lasagne
from lasagne.utils import floatX
from lasagne.layers import InputLayer
from lasagne.layers import Conv2DLayer as ConvLayer  # can be replaced with dnn layers
from lasagne.layers import BatchNormLayer
from lasagne.layers import Pool2DLayer as PoolLayer
from lasagne.layers import NonlinearityLayer
from lasagne.layers import ElemwiseSumLayer
from lasagne.layers import DenseLayer
from lasagne.nonlinearities import rectify, softmax
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Helper modules; some of them will help us download and plot images.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = 8, 6 import io import urllib import skimage.transform from IPython.display import Image import pickle
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Build Lasagne model BatchNormalization issue in caffe Caffe doesn't have a correct BN layer as described in https://arxiv.org/pdf/1502.03167.pdf: * it can collect the dataset's mean ($\hat{\mu}$) and variance ($\hat{\sigma}^2$) * it can't fit the $\gamma$ and $\beta$ parameters that scale and shift the standardized distribution of a feature in the following formula: $\hat{x}_i = \dfrac{x_i - \hat{\mu}_i}{\sqrt{\hat{\sigma}_i^2 + \epsilon}}\cdot\gamma + \beta$ To fix this issue, <a href="https://github.com/KaimingHe/deep-residual-networks">here</a> the authors use such a BN layer followed by a Scale layer, which can fit the scale and shift parameters but can't standardize the data: <pre>
layer {
	bottom: "res2a_branch1"
	top: "res2a_branch1"
	name: "bn2a_branch1"
	type: "BatchNorm"
	batch_norm_param {
		use_global_stats: true
	}
}

layer {
	bottom: "res2a_branch1"
	top: "res2a_branch1"
	name: "scale2a_branch1"
	type: "Scale"
	scale_param {
		bias_term: true
	}
}
</pre> In Lasagne we have a correct BN layer, so we do not need to use such a trick. Replicated blocks Simple blocks ResNet contains a lot of similar replicated blocks; let's call them simple blocks. Each has one of two architectures: * Convolution $\rightarrow$ BN $\rightarrow$ Nonlinearity * Convolution $\rightarrow$ BN http://ethereon.github.io/netscope/#/gist/2f702ea9e05900300462102a33caff9c
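The full BN transform (standardize, then scale by $\gamma$ and shift by $\beta$) can be sketched for a single feature; the values, $\gamma$, and $\beta$ below are made up purely for illustration:

```python
# Toy forward pass of the batch-norm formula above, for one feature.
xs = [1.0, 2.0, 3.0, 4.0]
eps = 1e-5
gamma, beta = 2.0, 0.5  # learned scale and shift (arbitrary here)

mu = sum(xs) / len(xs)
var = sum((v - mu) ** 2 for v in xs) / len(xs)

# Standardize (what caffe's BatchNorm layer can do)...
x_hat = [(v - mu) / (var + eps) ** 0.5 for v in xs]
# ...then scale and shift (what the extra Scale layer adds).
bn_out = [gamma * v + beta for v in x_hat]

print(bn_out)
```

After standardization `x_hat` has (near) zero mean and unit variance, so the output distribution's mean ends up at `beta` and its spread is controlled by `gamma`.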
Image(filename='images/head.png', width='40%')
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
We can increase, decrease, or keep the same dimensionality of the data using such blocks. In ResNet-50 only several transformations are used. Keep shape with 1x1 convolution We can apply a nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply a simple block with the following parameters (look at the origin of the network after the first pool layer): * num_filters: same as the parent has * filter_size: 1 * stride: 1 * pad: 0
Image(filename='images/conv1x1.png', width='40%')
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Keep shape with 3x3 convolution We can also apply a nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply a simple block with the following parameters (look at the middle of any residual block): * num_filters: same as the parent has * filter_size: 3x3 * stride: 1 * pad: 1
Image(filename='images/conv3x3.png', width='40%')
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Increase shape using number of filters We can nonlinearly increase the shape from (None, 64, 56, 56) to (None, 256, 56, 56) if we apply a simple block with the following parameters (look at the last simple block of any residual block): * num_filters: four times greater than the parent has * filter_size: 1x1 * stride: 1 * pad: 0
Image(filename='images/increase_fn.png', width='40%')
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Decrease shape using number of filters We can nonlinearly decrease the shape from (None, 256, 56, 56) to (None, 64, 56, 56) if we apply a simple block with the following parameters (look at the first simple block of any residual block without a left branch): * num_filters: four times less than the parent has * filter_size: 1x1 * stride: 1 * pad: 0
Image(filename='images/decrease_fn.png', width='40%')
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Decrease shape using number of filters and stride We can also nonlinearly decrease the shape from (None, 256, 56, 56) to (None, 128, 28, 28) if we apply a simple block with the following parameters (look at the first simple block of any residual block with a left branch): * num_filters: two times less than the parent has * filter_size: 1x1 * stride: 2 * pad: 0
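All of these shape changes follow the standard convolution output-size formula, out = (in + 2·pad − filter) / stride + 1. A quick sanity check (the helper name is ours, not Lasagne's):

```python
# Standard convolution output-size arithmetic used implicitly in the
# shape transformations above.
def conv_out(size, filt, stride, pad):
    return (size + 2 * pad - filt) // stride + 1

print(conv_out(56, 1, 1, 0))  # 1x1, stride 1, pad 0 -> 56 (shape kept)
print(conv_out(56, 3, 1, 1))  # 3x3, stride 1, pad 1 -> 56 (shape kept)
print(conv_out(56, 1, 2, 0))  # 1x1, stride 2, pad 0 -> 28 (spatial size halved)
```

The channel dimension, by contrast, is set directly by `num_filters`, which is why the blocks above change it by simple ratios of the parent's filter count.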
Image(filename='images/decrease_fnstride.png', width='40%')
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
The following function creates a simple block.
def build_simple_block(incoming_layer, names,
                       num_filters, filter_size, stride, pad,
                       use_bias=False, nonlin=rectify):
    """Creates stacked Lasagne layers ConvLayer -> BN -> (ReLu)

    Parameters:
    ----------
    incoming_layer : instance of Lasagne layer
        Parent layer

    names : list of string
        Names of the layers in block

    num_filters : int
        Number of filters in convolution layer

    filter_size : int
        Size of filters in convolution layer

    stride : int
        Stride of convolution layer

    pad : int
        Padding of convolution layer

    use_bias : bool
        Whether to use bias in convolution layer

    nonlin : function
        Nonlinearity type of Nonlinearity layer

    Returns
    -------
    tuple: (net, last_layer_name)
        net : dict
            Dictionary with stacked layers
        last_layer_name : string
            Last layer name
    """
    net = []
    net.append((
        names[0],
        ConvLayer(incoming_layer, num_filters, filter_size, pad, stride,
                  flip_filters=False, nonlinearity=None) if use_bias
        else ConvLayer(incoming_layer, num_filters, filter_size, stride, pad, b=None,
                       flip_filters=False, nonlinearity=None)
    ))

    net.append((
        names[1],
        BatchNormLayer(net[-1][1])
    ))
    if nonlin is not None:
        net.append((
            names[2],
            NonlinearityLayer(net[-1][1], nonlinearity=nonlin)
        ))

    return dict(net), net[-1][0]
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Residual blocks ResNet also contains several residual blocks built from simple blocks. Each of them has two branches; the left branch sometimes contains a simple block, sometimes not. Each block ends with an Elementwise sum layer followed by a ReLu nonlinearity. http://ethereon.github.io/netscope/#/gist/410e7e48fa1e5a368ee7bca5eb3bf0ca
Image(filename='images/left_branch.png', width='40%') Image(filename='images/no_left_branch.png', width='40%') simple_block_name_pattern = ['res%s_branch%i%s', 'bn%s_branch%i%s', 'res%s_branch%i%s_relu'] def build_residual_block(incoming_layer, ratio_n_filter=1.0, ratio_size=1.0, has_left_branch=False, upscale_factor=4, ix=''): """Creates two-branch residual block Parameters: ---------- incoming_layer : instance of Lasagne layer Parent layer ratio_n_filter : float Scale factor of filter bank at the input of residual block ratio_size : float Scale factor of filter size has_left_branch : bool if True, then left branch contains simple block upscale_factor : float Scale factor of filter bank at the output of residual block ix : int Id of residual block Returns ------- tuple: (net, last_layer_name) net : dict Dictionary with stacked layers last_layer_name : string Last layer name """ net = {} # right branch net_tmp, last_layer_name = build_simple_block( incoming_layer, map(lambda s: s % (ix, 2, 'a'), simple_block_name_pattern), int(lasagne.layers.get_output_shape(incoming_layer)[1]*ratio_n_filter), 1, int(1.0/ratio_size), 0) net.update(net_tmp) net_tmp, last_layer_name = build_simple_block( net[last_layer_name], map(lambda s: s % (ix, 2, 'b'), simple_block_name_pattern), lasagne.layers.get_output_shape(net[last_layer_name])[1], 3, 1, 1) net.update(net_tmp) net_tmp, last_layer_name = build_simple_block( net[last_layer_name], map(lambda s: s % (ix, 2, 'c'), simple_block_name_pattern), lasagne.layers.get_output_shape(net[last_layer_name])[1]*upscale_factor, 1, 1, 0, nonlin=None) net.update(net_tmp) right_tail = net[last_layer_name] left_tail = incoming_layer # left branch if has_left_branch: net_tmp, last_layer_name = build_simple_block( incoming_layer, map(lambda s: s % (ix, 1, ''), simple_block_name_pattern), int(lasagne.layers.get_output_shape(incoming_layer)[1]*4*ratio_n_filter), 1, int(1.0/ratio_size), 0, nonlin=None) net.update(net_tmp) left_tail = 
net[last_layer_name] net['res%s' % ix] = ElemwiseSumLayer([left_tail, right_tail], coeffs=1) net['res%s_relu' % ix] = NonlinearityLayer(net['res%s' % ix], nonlinearity=rectify) return net, 'res%s_relu' % ix
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Gathering everything together Create the head of the network (everything before the first residual block).
net = {}
net['input'] = InputLayer((None, 3, 224, 224))

sub_net, parent_layer_name = build_simple_block(
    net['input'], ['conv1', 'bn_conv1', 'conv1_relu'],
    64, 7, 3, 2, use_bias=True)
net.update(sub_net)

net['pool1'] = PoolLayer(net[parent_layer_name], pool_size=3, stride=2, pad=0,
                         mode='max', ignore_border=False)
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Create four groups of residual blocks
block_size = list('abc')
parent_layer_name = 'pool1'
for c in block_size:
    if c == 'a':
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1, 1, True, 4, ix='2%s' % c)
    else:
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='2%s' % c)
    net.update(sub_net)

block_size = list('abcd')
for c in block_size:
    if c == 'a':
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='3%s' % c)
    else:
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='3%s' % c)
    net.update(sub_net)

block_size = list('abcdef')
for c in block_size:
    if c == 'a':
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='4%s' % c)
    else:
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='4%s' % c)
    net.update(sub_net)

block_size = list('abc')
for c in block_size:
    if c == 'a':
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='5%s' % c)
    else:
        sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='5%s' % c)
    net.update(sub_net)
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit
Create the tail of the network (everything after the last residual block).
net['pool5'] = PoolLayer(net[parent_layer_name], pool_size=7, stride=1, pad=0,
                         mode='average_exc_pad', ignore_border=False)
net['fc1000'] = DenseLayer(net['pool5'], num_units=1000, nonlinearity=None)
net['prob'] = NonlinearityLayer(net['fc1000'], nonlinearity=softmax)

print('Total number of layers: {}'.format(len(lasagne.layers.get_all_layers(net['prob']))))
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
matthijsvk/multimodalSR
mit