In the one dimensional case we used the location matrix or $LM$ array to link local node numbers in elements to equations. With the $IEN$ and $ID$ arrays the $LM$ matrix is strictly redundant, as $LM(a, e) = ID(IEN(e, a))$. However, it's still standard to construct it:
LM = numpy.zeros_like(IEN.T)
for e in range(IEN.shape[0]):
    for a in range(IEN.shape[1]):
        LM[a, e] = ID[IEN[e, a]]
LM
FEEG6016 Simulation and Modelling/08-Finite-Elements-Lab-2.ipynb
ngcm/training-public
mit
Function representation and shape functions We're going to want to write our unknown functions $T, w$ in terms of shape functions. These are easiest to write down for a single reference element, in the same way as we did for the one dimensional case where our reference element used the coordinates $\xi$. In two dimensi...
corners = numpy.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
pyplot.plot(corners[:, 0], corners[:, 1], linewidth=2)
pyplot.xlabel(r"$\xi_0$")
pyplot.ylabel(r"$\xi_1$")
pyplot.axis('equal')
pyplot.ylim(-0.1, 1.1);
FEEG6016 Simulation and Modelling/08-Finite-Elements-Lab-2.ipynb
ngcm/training-public
mit
The shape functions on this triangle are \begin{align} N_0(\xi_0, \xi_1) &= 1 - \xi_0 - \xi_1, \\ N_1(\xi_0, \xi_1) &= \xi_0, \\ N_2(\xi_0, \xi_1) &= \xi_1. \end{align} The derivatives are all either $0$ or $\pm 1$. As soon as we have the shape functions, our weak form becomes $$ \sum_A T_A \int_{\Omega} \text{d}...
def generate_2d_grid(Nx):
    """
    Generate a triangular grid covering the plate :math:`[0,1]^2` with Nx (pairs of) triangles in each dimension.

    Parameters
    ----------
    Nx : int
        Number of triangles in any one dimension (so the total number on the plate is :math:`2 Nx^2`)

    Return...
FEEG6016 Simulation and Modelling/08-Finite-Elements-Lab-2.ipynb
ngcm/training-public
mit
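As a minimal sketch of the relations above (the function names are illustrative, not taken from the lab code), the linear shape functions and their constant derivatives on the reference triangle can be evaluated as:

```python
import numpy

def shape_functions(xi):
    """Evaluate the three linear shape functions at reference coordinates xi = (xi0, xi1)."""
    xi0, xi1 = xi
    return numpy.array([1.0 - xi0 - xi1, xi0, xi1])

def shape_function_derivatives():
    """dN_a/dxi_b is constant on the reference triangle: each row is (dN_a/dxi0, dN_a/dxi1)."""
    return numpy.array([[-1.0, -1.0],
                        [ 1.0,  0.0],
                        [ 0.0,  1.0]])

# The shape functions form a partition of unity at every point,
# e.g. at the centroid each N_a equals 1/3:
print(shape_functions((1/3, 1/3)))
```

Note how each derivative entry is $0$ or $\pm 1$, as stated above, and how the three functions sum to one everywhere on the element.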
Deploying a Kubeflow Cluster on Google Cloud Platform (GCP) This notebook provides instructions for setting up a Kubeflow cluster on GCP using the command-line interface (CLI). For additional help, see the guide to deploying Kubeflow using the CLI. There are two possible alternatives: - The first alternative is to depl...
work_directory_name = 'kubeflow'
! mkdir -p $work_directory_name
%cd $work_directory_name
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Download kfctl Download kfctl to your working directory. The default version used is v0.7.0, but you can find the latest release here.
## Download kfctl v0.7.0
! curl -LO https://github.com/kubeflow/kubeflow/releases/download/v0.7.0/kfctl_v0.7.0_linux.tar.gz

## Unpack the tar ball
! tar -xvf kfctl_v0.7.0_linux.tar.gz
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
If you are using AI Platform Notebooks, your environment is already authenticated. Skip the following cell.
## Create user credentials
! gcloud auth application-default login
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Set up environment variables Set up environment variables to use while installing Kubeflow. Replace variable placeholders (for example, <VARIABLE NAME>) with the correct values for your environment.
# Set your GCP project ID and the zone where you want to create the Kubeflow deployment
%env PROJECT=<ADD GCP PROJECT HERE>
%env ZONE=<ADD GCP ZONE TO LAUNCH KUBEFLOW CLUSTER HERE>

# google cloud storage bucket
%env GCP_BUCKET=gs://<ADD STORAGE LOCATION HERE>

# Use the following kfctl configuration file for authentic...
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Configure gcloud and add kfctl to your path.
! gcloud config set project ${PROJECT}
! gcloud config set compute/zone ${ZONE}

# Set the path to the base directory where you want to store one or more
# Kubeflow deployments. For example, /opt/.
# Here we use the current working directory as the base directory
# Then set the Kubeflow application directory for thi...
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Create service account
! gcloud iam service-accounts create ${SA_NAME}
! gcloud projects add-iam-policy-binding ${PROJECT} \
    --member serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com \
    --role 'roles/owner'
! gcloud iam service-accounts keys create key.json \
    --iam-account ${SA_NAME}@${PROJECT}.iam.gserviceaccount.com
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Set GOOGLE_APPLICATION_CREDENTIALS
import os

key_path = os.getenv('BASE_DIR') + "/" + 'key.json'
%env GOOGLE_APPLICATION_CREDENTIALS=$key_path
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Setup and deploy Kubeflow
! mkdir -p ${KF_DIR}
%cd ${KF_DIR}
! kfctl apply -V -f ${CONFIG_URI}
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Install Kubeflow Pipelines SDK
%%capture
# Install the SDK (Uncomment the code if the SDK is not installed before)
! pip3 install 'kfp>=0.1.36' --quiet --user
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Sanity Check: Check the ingress created
! kubectl -n istio-system describe ingress
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
kubeflow/pipelines
apache-2.0
Train / Test Split Let's keep some held-out data to be able to measure the generalization performance of our model.
from sklearn.model_selection import train_test_split

data = np.asarray(digits.data, dtype='float32')
target = np.asarray(digits.target, dtype='int32')

X_train, X_test, y_train, y_test = train_test_split(
    data, target, test_size=0.15, random_state=37)

X_train.shape
X_test.shape
y_train.shape
y_test.shape
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Preprocessing of the Input Data Make sure that all input variables are approximately on the same scale via input normalization:
from sklearn import preprocessing

# mean = 0 ; standard deviation = 1.0
scaler = preprocessing.StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# print(scaler.mean_)
# print(scaler.scale_)

X_train.shape
X_train.mean(axis=0)
X_train.std(axis=0)
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Let's display one of the transformed samples (after feature standardization):
sample_index = 45
plt.figure(figsize=(3, 3))
plt.imshow(X_train[sample_index].reshape(8, 8),
           cmap=plt.cm.gray_r, interpolation='nearest')
plt.title("transformed sample\n(standardization)");
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
The scaler object makes it possible to recover the original sample:
plt.figure(figsize=(3, 3))
plt.imshow(scaler.inverse_transform(X_train[sample_index:sample_index + 1]).reshape(8, 8),
           cmap=plt.cm.gray_r, interpolation='nearest')
plt.title("original sample");

print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Preprocessing of the Target Data To train a first neural network we also need to turn the target variable into a vector "one-hot-encoding" representation. Here are the labels of the first samples in the training set encoded as integers:
y_train[:3]
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Keras provides a utility function to convert integer-encoded categorical variables to one-hot encoded values:
from tensorflow.keras.utils import to_categorical

Y_train = to_categorical(y_train)
Y_train[:3]
Y_train.shape
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
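For intuition, the same one-hot encoding can be written in plain NumPy as an identity-matrix lookup (a sketch of the idea, not how to_categorical is implemented internally):

```python
import numpy as np

def one_hot(y, num_classes=None):
    """Map integer labels to one-hot rows by indexing into an identity matrix."""
    y = np.asarray(y, dtype='int64')
    if num_classes is None:
        num_classes = y.max() + 1
    return np.eye(num_classes, dtype='float32')[y]

# Row a has a 1 in column y[a] and 0 elsewhere:
print(one_hot([0, 2, 1]))
```

Each row sums to one, and argmax recovers the original integer label.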
Feed Forward Neural Networks with Keras Objectives of this section: Build and train a first feedforward network using Keras https://www.tensorflow.org/guide/keras/overview Experiment with different optimizers, activations, size of layers, initializations A First Keras Model We can now build and train our first fee...
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import optimizers

input_dim = X_train.shape[1]
hidden_dim = 100
output_dim = Y_train.shape[1]

model = Sequential()
model.add(Dense(hidden_dim, input_dim=input_dim, activation="tanh"))
model.add(Dense(output_...
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Visualizing the Convergence Let's wrap the keras history info into a pandas dataframe for easier plotting:
import pandas as pd

history_df = pd.DataFrame(history.history)
history_df["epoch"] = history.epoch

fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(12, 6))
history_df.plot(x="epoch", y=["loss", "val_loss"], ax=ax0)
history_df.plot(x="epoch", y=["accuracy", "val_accuracy"], ax=ax1);
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Monitoring Convergence with Tensorboard Tensorboard is a built-in neural network monitoring tool.
%load_ext tensorboard

!rm -rf tensorboard_logs

import datetime
from tensorflow.keras.callbacks import TensorBoard

model = Sequential()
model.add(Dense(hidden_dim, input_dim=input_dim, activation="tanh"))
model.add(Dense(output_dim, activation="softmax"))
model.compile(optimizer=optimizers.SGD(learning_rate=0.1), ...
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
b) Exercises: Impact of the Optimizer Try to decrease the learning rate value by 10 or 100. What do you observe? Try to increase the learning rate value to make the optimization diverge. Configure the SGD optimizer to enable a Nesterov momentum of 0.9 Notes: The keras API documentation is available at: https:/...
optimizers.SGD?

# %load solutions/keras_sgd_and_momentum.py
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Replace the SGD optimizer by the Adam optimizer from keras and run it with the default parameters. Hint: use optimizers.<TAB> to tab-complete the list of implemented optimizers in Keras. Add another hidden layer and use the "Rectified Linear Unit" for each hidden layer. Can you still train the model with Ad...
# %load solutions/keras_adam.py
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Exercises: Forward Pass and Generalization Compute predictions on test set using model.predict(...) Compute average accuracy of the model on the test set: the fraction of test samples for which the model makes a prediction that matches the true label.
# %load solutions/keras_accuracy_on_test_set.py
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Let us decompose how we got the predictions. First, we call the model on the data to get the last layer (softmax) outputs directly as a tensorflow Tensor:
predictions_tf = model(X_test)
predictions_tf[:5]
type(predictions_tf), predictions_tf.shape
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
We can use the tensorflow API to check that for each row, the probabilities sum to 1:
import tensorflow as tf

tf.reduce_sum(predictions_tf, axis=1)[:5]
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
We can also extract the label with the highest probability using the tensorflow API:
predicted_labels_tf = tf.argmax(predictions_tf, axis=1)
predicted_labels_tf[:5]
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
We can compare those labels to the expected labels to compute the accuracy with the Tensorflow API. Note however that we need an explicit cast from boolean to floating point values to be able to compute the mean accuracy when using the tensorflow tensors:
accuracy_tf = tf.reduce_mean(tf.cast(predicted_labels_tf == y_test, tf.float64))
accuracy_tf
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Also note that it is possible to convert tensors to numpy arrays if one prefers to use numpy:
accuracy_tf.numpy()
predicted_labels_tf[:5]
predicted_labels_tf.numpy()[:5]
(predicted_labels_tf.numpy() == y_test).mean()
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Home Assignment: Impact of Initialization Let us now study the impact of a bad initialization when training a deep feed forward network. By default Keras dense layers use the "Glorot Uniform" initialization strategy to initialize the weight matrices: each weight coefficient is randomly sampled from [-scale, scale] sca...
from tensorflow.keras import initializers

normal_init = initializers.TruncatedNormal(stddev=0.01)

model = Sequential()
model.add(Dense(hidden_dim, input_dim=input_dim, activation="tanh",
                kernel_initializer=normal_init))
model.add(Dense(hidden_dim, activation="tanh", kernel_initializer...
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Let's have a look at the parameters of the first layer after initialization but before any training has happened:
model.layers[0].weights

w = model.layers[0].weights[0].numpy()
w
w.std()

b = model.layers[0].weights[1].numpy()
b

history = model.fit(X_train, Y_train, epochs=15, batch_size=32)

plt.figure(figsize=(12, 4))
plt.plot(history.history['loss'], label="Truncated Normal init")
plt.legend();
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Once the model has been fit, the weights have been updated and notably the biases are no longer 0:
model.layers[0].weights
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Questions: Try the following initialization schemes and see whether the SGD algorithm can successfully train the network or not: a very small e.g. stddev=1e-3 a larger scale e.g. stddev=1 or 10 initialize all weights to 0 (constant initialization) What do you observe? Can you find an explanation for those ...
# %load solutions/keras_initializations.py

# %load solutions/keras_initializations_analysis.py
labs/01_keras/Intro Keras.ipynb
m2dsupsdlclass/lectures-labs
mit
Run the default solution on dev
Pw = Pdist(data=datafile("data/count_1w.txt"))
segmenter = Segment(Pw)  # note that the default solution for this homework ignores the unigram counts

output_full = []
with open("data/input/dev.txt") as f:
    for line in f:
        output = " ".join(segmenter.segment(line.strip()))
        output_full.append(output)
pri...
zhsegment/default.ipynb
anoopsarkar/nlp-class-hw
apache-2.0
Evaluate the default output
import sys

from zhsegment_check import fscore

with open('data/reference/dev.out', 'r') as refh:
    ref_data = [str(x).strip() for x in refh.read().splitlines()]
tally = fscore(ref_data, output_full)
print("score: {:.2f}".format(tally), file=sys.stderr)
zhsegment/default.ipynb
anoopsarkar/nlp-class-hw
apache-2.0
Importing a file I will import this file from the agreg/ sub-folder.
import agreg.memoisation
Test_for_Binder__access_local_packages.ipynb
Naereen/notebooks
mit
Author - @SauravMaheshkar Install Dependencies Install the latest version of the Trax Library.
!pip install -q -U trax
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Introduction Named-entity recognition (NER) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. ...
import trax                 # Our Main Library
from trax import layers as tl
import os                   # For os dependent functionalities
import numpy as np          # For scientific computing
import pandas as pd         # For basic data analysis
import random as rnd        # For using random functions
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Pre-Processing Loading the Dataset Let's load the ner_dataset.csv file into a dataframe and see what it looks like
data = pd.read_csv("/kaggle/input/entity-annotated-corpus/ner_dataset.csv", encoding='ISO-8859-1')
data = data.fillna(method='ffill')
data.head()
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Creating a Vocabulary File We can see there's a column for the words in each sentence. Thus, we can extract this column using .loc and store it into a .txt file using the .savetxt() function from numpy.
## Extract the 'Word' column from the dataframe
words = data.loc[:, "Word"]

## Convert into a text file using the .savetxt() function
np.savetxt(r'words.txt', words.values, fmt="%s")
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Creating a Dictionary for Vocabulary Here, we create a Dictionary for our vocabulary by reading through all the sentences in the dataset.
vocab = {}
with open('words.txt') as f:
    for i, l in enumerate(f.read().splitlines()):
        vocab[l] = i
print("Number of words:", len(vocab))
vocab['<PAD>'] = len(vocab)
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Extracting Sentences from the Dataset For extracting sentences from the dataset and creating (X,y) pairs for training.
class Get_sentence(object):
    def __init__(self, data):
        self.n_sent = 1
        self.data = data
        agg_func = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(),
                                                            s["POS"].values.tolist(),
                                                            s["...
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Making a Batch Generator Here, we create a batch generator for training.
def data_generator(batch_size, x, y, pad, shuffle=False, verbose=False):
    num_lines = len(x)
    lines_index = [*range(num_lines)]
    if shuffle:
        rnd.shuffle(lines_index)
    index = 0
    while True:
        buffer_x = [0] * batch_size
        buffer_y = [0] * batch_size
        max_len = 0
        ...
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Splitting into Test and Train
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1)
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Building the Model The Reformer Model In this notebook, we use the Reformer, a more efficient variant of the Transformer that uses reversible layers and locality-sensitive hashing. You can read the original paper here. Locality-Sensitive Hashing The biggest problem that one might encounter while using Transformers, for ...
def NERmodel(tags, vocab_size=35181, d_model=50):
    model = tl.Serial(
        # tl.Embedding(vocab_size, d_model),
        trax.models.reformer.Reformer(vocab_size, d_model, ff_activation=tl.LogSoftmax),
        tl.Dense(tags),
        tl.LogSoftmax()
    )
    return model

model = NERmodel(tags=17)
print(model)
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
Train the Model
from trax.supervised import training

rnd.seed(33)

batch_size = 64

train_generator = trax.data.inputs.add_loss_weights(
    data_generator(batch_size, x_train, y_train, vocab['<PAD>'], True),
    id_to_mask=vocab['<PAD>'])

eval_generator = trax.data.inputs.add_loss_weights(
    data_generator(batch_size, x_test, y_te...
trax/examples/NER_using_Reformer.ipynb
google/trax
apache-2.0
2. Density based plots with matplotlib In this section, we will be looking at density based plots. Plots like these address a problem with big data: how does one visualise a plot with 10,000+ data points while avoiding overplotting?
PRSA.head()
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
Source : https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
plt.plot(PRSA.TEMP, PRSA["pm2.5"], 'o', color="steelblue", alpha=0.5)
plt.ylabel("$\mu g/m^3$")
plt.title("PM 2.5 readings as a function of temperature (Celsius)")
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
As one can see, there's not much one can say about the structure of the data because every point below 400 $\mu g/m^3$ is totally filled up with blue color. It is here that a density plot helps mitigate this problem. The central idea is that individual data points are not so important in as much as they contribute to...
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Histogram of dew point readings")
ax.set_xlabel("Dew point (C$^\circ$)")

# Enter plotting code below here:
ax.hist(PRSA.DEWP)
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
3.1 Understanding the plotting code So how was this produced? We initialized a figure object using plt.figure. Next we created an axis within the figure using the command fig.add_subplot. We passed the argument 111 to the function, which is shorthand for creating only one axes. With an ax object created, we now ...
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Histogram of dew point readings")
ax.set_xlabel("Dew point (C$^\circ$)")

# Try typing in ax.hist(PRSA.DEWP, ...) with various customization options.
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
More options can be found on the documentation page. 3.3 Histograms with weights The hist function is not only used to plot histograms. In essence, it is a function used to plot rectangular patches on an axis. Thus, we may use hist to plot bar charts. In fact, we may use it to plot stacked bar charts, which is something se...
bike_sharing.head()
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
Source: https://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset As you can see in the dataset above, we need to plot weekday on the x-axis and have cnt as the y-axis. How can we plot this using hist? The problem is that if we pass the weekday array to hist, we end up counting the frequency of each day in the datas...
# Please run this cell before proceeding
group = bike_sharing.groupby("weathersit")
weathers = [w_r for w_r, _ in group]
day_data = [group.get_group(weather).weekday for weather in weathers]
weights_data = [group.get_group(weather).cnt for weather in weathers]
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
What we need to do is to group the data set by weather rating and create a list of array data to be passed to hist. We can do this efficiently using the groupby method on data frames and list comprehension statements. Now we have three lists: One for weather rating, one for the day of the week and one more for the dail...
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.set_title("Distribution of daily bike rental counts by weather conditions", y=1.05)
ax2.set_xlabel("Day")
ax2.set_ylabel("Bike rental counts")
ax2.hist(         ,  # day_data
         label=   ,  # weathers
         weights= ,  # weig...
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
As expected, bike rentals are low in bad weather. However, notice the variation within a week for the bad weather category. It is quite clear that people do not go biking on a bad weather Sunday since they have a choice not to go out! Let's rerun the cell above by replacing the ax.hist function with the following code...
PRSA = PRSA.dropna()  # drop missing data

fig3, ax3 = plt.subplots(figsize=(8,6))
ax3.grid(b="off")
ax3.set_facecolor("white")
ax3.set_title("PM2.5 readings distribution by temperature")
ax3.set_ylabel("$\mu g$/$m^3$")
ax3.set_xlabel("Daily temperature (C$^\circ$)")
ax3.hist2d(PRSA.TEMP, PRSA["pm2.5"], bins=5...
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
This chart is more informative than a simple scatterplot. For one, we now know that there are two modes in the distribution of temperature and pollutant. Furthermore, there is more variation in pollution levels in the colder seasons as compared to warmer days. Let's try to understand how this plot was created. fig3, ...
img
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
If you observe, img is a tuple of length 4. The last entry of the tuple is the image data. Next, after the last line of the ax3.hist2d command, add in the function cbar = plt.colorbar(img[3], ax=ax3) This means that we are now going to plot a color bar in ax3 (that's what ax=ax3 means) using the image data from our 2d histo...
fig4, ax4 = plt.subplots(figsize=(9,6))
ax4.grid(b="off")
ax4.set_facecolor("white")
ax4.set_title("PM2.5 distribution by temperature")
ax4.set_ylabel("$\mu$g/$m^3$")
ax4.set_xlabel("Temperature (C$^\circ$)")
img = ax4.hexbin(PRSA.TEMP, PRSA["pm2.5"],
                 gridsize=55,
                 mincnt=5,
                 # bins="log"...
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
However, hexbin plots differ from the square 2d histograms in more ways than the type of tiling used. We may use hexbin plots to investigate how a dependent variable depends on 2 independent variables. Just as we passed bin frequencies to the weights parameter in hist, we pass the third dependent variable to the C para...
# Paste or type in your code here
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
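To see the shape of such a call before trying it on the PRSA data, here is a hedged, self-contained sketch with synthetic stand-ins for the temperature, pressure and PM2.5 columns (the variable names and values are illustrative only):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-ins for the real PRSA.TEMP, PRSA.PRES and PRSA["pm2.5"] columns
rng = np.random.default_rng(0)
temp = rng.uniform(-10, 30, 2000)
pres = rng.uniform(1000, 1040, 2000)
pm25 = rng.uniform(1, 300, 2000)

fig, ax = plt.subplots()
# C supplies the third variable; each hexagon is colored by the aggregate
# (np.mean here, which matches the default reduce_C_function) of the pm25
# values whose (temp, pres) points fall inside that hexagon.
img = ax.hexbin(temp, pres, C=pm25, gridsize=20, reduce_C_function=np.mean)
fig.colorbar(img, ax=ax)
```

The returned PolyCollection's array holds one aggregated value per drawn hexagon, which is what the colorbar maps to colors.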
The color for each hexagon is determined by the mean value of each PM2.5 readings corresponding to the pressure and temperature readings contained in each hexagon. 4.3.2 Changing the aggregation function for each hexbin However, the aggregation function on each hexagon can be changed by specifying another function to ...
fig5, ax5 = plt.subplots(figsize=(8,6))
ax5.grid(b="off")
ax5.set_facecolor("white")
ax5.set_title("PM2.5 pollutants as a function of temperature\nand atmospheric pressure")
ax5.set_xlabel("Temperature (C$^\circ$)")
ax5.set_ylabel("Pressure (hPa)")
img = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA["pm2.5"], gridsize=(3...
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
4.3.3 Changing the bins parameter Besides controlling the number of hexagons, we can also bin the hexagons so that hexagons within the same bin share the same color. This helps us further smooth out the plot and avoid overplotting.
fig5, ax5 = plt.subplots(figsize=(8,6))
ax5.grid(b="off")
ax5.set_facecolor("white")
ax5.set_title("PM2.5 pollutants as a function of temperature\nand atmospheric pressure", y=1.05)
ax5.set_xlabel("Temperature (C$^\circ$)")
ax5.set_ylabel("Pressure (hPa)")
img = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA["pm2.5"], gri...
Day 2 - Unit 3.2.ipynb
uliang/First-steps-with-the-Python-language
mit
ARIMA Example 1: ARIMA As can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so tha...
import requests
from io import BytesIO

import pandas as pd
import statsmodels.api as sm

# Dataset
wpi1 = requests.get('https://www.stata-press.com/data/r12/wpi1.dta').content
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t

# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))
res = mod.fit(disp=False)
print(res.summary())
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
Thus the maximum likelihood estimates imply that for the process above, we have: $$ \Delta y_t = 0.1050 + 0.8740 \Delta y_{t-1} - 0.4206 \epsilon_{t-1} + \epsilon_{t} $$ where $\epsilon_{t} \sim N(0, 0.5226)$. Finally, recall that $c = (1 - \phi_1) \beta_0$, and here $c = 0.1050$ and $\phi_1 = 0.8740$. To compare with ...
# Dataset
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
data['ln_wpi'] = np.log(data['wpi'])
data['D.ln_wpi'] = data['ln_wpi'].diff()

# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))

# Levels
axes[0].plot(data.index._mpl_repr(), data['wpi'], '-')
axes[0].set(title='US Wholesale Price Index')

# L...
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
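The relation $c = (1 - \phi_1) \beta_0$ can be checked with a line of arithmetic, plugging in the estimates quoted above to recover the implied mean $\beta_0$ of the differenced series:

```python
# Estimated intercept c and AR coefficient phi_1 from the fitted summary above
c, phi1 = 0.1050, 0.8740

# Invert c = (1 - phi_1) * beta_0 to recover the mean of the differenced series
beta0 = c / (1 - phi1)
print(round(beta0, 4))  # 0.8333
```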
To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model: python mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1)) The order argument is a tuple of the form (AR specification, Integration order, MA specific...
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1))
res = mod.fit(disp=False)
print(res.summary())
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
ARIMA Example 3: Airline Model In the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may be instead that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \...
# Dataset
air2 = requests.get('https://www.stata-press.com/data/r12/air2.dta').content
data = pd.read_stata(BytesIO(air2))
data.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS')
data['lnair'] = np.log(data['air'])

# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['lnair'], orde...
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a nu...
# Dataset
friedman2 = requests.get('https://www.stata-press.com/data/r12/friedman2.dta').content
data = pd.read_stata(BytesIO(friedman2))
data.index = data.time

# Variables
endog = data.loc['1959':'1981', 'consump']
exog = sm.add_constant(data.loc['1959':'1981', 'm2'])

# Fit the model
mod = sm.tsa.statespace.SARIMAX(...
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
ARIMA Postestimation: Example 1 - Dynamic Forecasting Here we describe some of the post-estimation capabilities of Statsmodels' SARIMAX. First, using the model from the example above, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considerin...
# Dataset
raw = pd.read_stata(BytesIO(friedman2))
raw.index = raw.time
data = raw.loc[:'1981']

# Variables
endog = data.loc['1959':, 'consump']
exog = sm.add_constant(data.loc['1959':, 'm2'])
nobs = endog.shape[0]

# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog.loc[:'1978-01-01'], exog=exog.loc[:'1978-01-01'], ...
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).
mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))
res = mod.filter(fit_res.params)
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values). With no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sampl...
# In-sample one-step-ahead predictions
predict = res.get_prediction()
predict_ci = predict.conf_int()
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogen...
# Dynamic predictions
predict_dy = res.get_prediction(dynamic='1978-01-01')
predict_dy_ci = predict_dy.conf_int()
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')

# Plot data points
data.loc['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed')

# Plot predictions
predict.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='r--', ...
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.
# Prediction error

# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')

# In-sample one-step-ahead predictions and 95% confidence intervals
predict_error = predict.predicted_mean - endog
predict_error.loc['1977-10-01':].plot(ax=ax, label='One...
examples/notebooks/statespace_sarimax_stata.ipynb
ChadFulton/statsmodels
bsd-3-clause
The vocabulary is used at transform time to build the occurrence matrix:
X = vectorizer.transform([
    "The cat sat on the mat.",
    "This cat is a nice cat.",
]).toarray()

print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Let's refit with a slightly larger corpus:
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
    "The cat sat on the mat.",
    "The quick brown fox jumps over the lazy dog.",
])
vectorizer.vocabulary_
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
The vocabulary_ attribute grows (roughly logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words, hence this would require some kind of shared data structure or synchronization barrier, which is complicated to set up, especiall...
X = vectorizer.transform([
    "The cat sat on the mat.",
    "This cat is a nice cat.",
]).toarray()

print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
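The parallelism problem can be made concrete with a small sketch: fit two vectorizers independently, as two parallel workers would, and observe that their vocabularies assign incompatible column indices to the same word.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Fit two vectorizers independently, as two parallel workers would
v1 = CountVectorizer().fit(["the cat sat on the mat"])
v2 = CountVectorizer().fit(["the quick brown fox"])

# The shared word "the" lands on a different column index in each
# vocabulary, so the resulting feature matrices are not compatible
# without merging the vocabularies afterwards.
print(v1.vocabulary_["the"], v2.vocabulary_["the"])
```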
The IMDb movie dataset To illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task: sentiment analysis on text documents. The goal is to tell apart negative from positive movie reviews from the Internet Movie Database (IMDb). In ...
import os

train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
test_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'test')
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Now, let's load them into our active session via scikit-learn's load_files function
from sklearn.datasets import load_files

train = load_files(container_path=train_path, categories=['pos', 'neg'])
test = load_files(container_path=test_path, categories=['pos', 'neg'])
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
<div class="alert alert-warning"> <b>NOTE</b>: <ul> <li> Since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer. </li> </ul> </div> The load_files function loaded the datasets into sklearn.datasets.base.Bunch objects...
train.keys()
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
In particular, we are only interested in the data and target arrays.
import numpy as np

for label, data in zip(('TRAINING', 'TEST'), (train, test)):
    print('\n\n%s' % label)
    print('Number of documents:', len(data['data']))
    print('\n1st document:\n', data['data'][0])
    print('\n1st label:', data['target'][0])
    print('\nClass names:', data['target_names'])
    print('Clas...
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
As we can see above, the 'target' array consists of integers 0 and 1, where 0 stands for negative and 1 stands for positive. The Hashing Trick Remember the bag of words representation using a vocabulary based vectorizer: <img src="figures/bag_of_words.svg" width="100%"> To work around the limitations of the vocabulary-bas...
from sklearn.utils.murmurhash import murmurhash3_bytes_u32

# encode for python 3 compatibility
for word in "the cat sat on the mat".encode("utf-8").split():
    print("{0} => {1}".format(word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20, which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary based vectorizer, both for parallelizability and online / out-of-core lear...
from sklearn.feature_extraction.text import HashingVectorizer

h_vectorizer = HashingVectorizer(encoding='latin-1')
h_vectorizer
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
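Statelessness can be demonstrated directly: two independently constructed HashingVectorizer instances share no state, yet produce identical outputs for the same input, since the hash function alone determines the column index of each token.

```python
from sklearn.feature_extraction.text import HashingVectorizer

# No fit, no shared vocabulary: the same token always hashes to the
# same column, so two independent instances agree exactly.
h1 = HashingVectorizer(n_features=2 ** 20)
h2 = HashingVectorizer(n_features=2 ** 20)

X1 = h1.transform(["the cat sat on the mat"])
X2 = h2.transform(["the cat sat on the mat"])
print((X1 - X2).nnz)  # 0: no differing entries
```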
It shares the same "preprocessor", "tokenizer" and "analyzer" infrastructure:
analyzer = h_vectorizer.build_analyzer()
analyzer('This is a test sentence.')
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method: there is no need to fit as HashingVectorizer is a stateless transformer:
docs_train, y_train = train['data'], train['target']
docs_valid, y_valid = test['data'][:12500], test['target'][:12500]
docs_test, y_test = test['data'][12500:], test['target'][12500:]
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collisions on most classification problems while keeping linear models reasonably sized (1M weights in the coef_ attribute):
h_vectorizer.transform(docs_train)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Now, let's compare the computational efficiency of the HashingVectorizer to the CountVectorizer:
h_vec = HashingVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 h_vec.fit(docs_train, y_train)

count_vec = CountVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 count_vec.fit(docs_train, y_train)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case. Finally, let us train a LogisticRegression classifier on the IMDb training subset:
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

h_pipeline = Pipeline([
    ('vec', HashingVectorizer(encoding='latin-1')),
    ('clf', LogisticRegression(random_state=1)),
])
h_pipeline.fit(docs_train, y_train)

print('Train accuracy', h_pipeline.score(docs_train, y_train))
...
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Out-of-Core learning Out-of-Core learning is the task of training a machine learning model on a dataset that does not fit into memory. This requires the following conditions: a feature extraction layer with fixed output dimensionality knowing the list of all classes in advance (in this case we only have positiv...
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')

fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] + \
         [os.path.join(train_neg, f) for f in os.listdir(train_neg)]

fnames[:3]
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Next, let us create the target label array:
y_train = np.zeros((len(fnames), ), dtype=int)
y_train[:12500] = 1
np.bincount(y_train)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Now, we implement the batch_train function as follows:
from sklearn.base import clone

def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
    vec = HashingVectorizer(encoding='latin-1')
    idx = np.arange(labels.shape[0])
    c_clf = clone(clf)
    rng = np.random.RandomState(seed=random_seed)

    for i in range(iterations):
        r...
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Note that we are not using LogisticRegression as in the previous section, but we will use an SGDClassifier with a logistic cost function instead. SGD stands for stochastic gradient descent, an optimization algorithm that optimizes the weight coefficients iteratively sample by sample, which allows us to feed the data to ...
from sklearn.linear_model import SGDClassifier

sgd = SGDClassifier(loss='log', random_state=1)
sgd = batch_train(clf=sgd, fnames=fnames, labels=y_train)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Finally, let us evaluate its performance:
vec = HashingVectorizer(encoding='latin-1')
sgd.score(vec.transform(docs_test), y_test)
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Limitations of the Hashing Vectorizer Using the Hashing Vectorizer makes it possible to implement streaming and parallel text classification but can also introduce some issues: The collisions can introduce too much noise in the data and degrade prediction quality, The HashingVectorizer does not provide "Inverse Docume...
# %load solutions/23_batchtrain.py
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
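The collision issue can be made concrete with a small sketch that deliberately shrinks the feature space so that distinct tokens are forced to share columns:

```python
from sklearn.feature_extraction.text import HashingVectorizer

# 10 distinct tokens hashed into only 8 columns: by the pigeonhole
# principle at least two tokens collide, and their counts are mixed
# together in a shared column, adding noise to the representation.
h = HashingVectorizer(n_features=8, alternate_sign=False)
X = h.transform(["one two three four five six seven eight nine ten"])
print(X.nnz)  # at most 8 non-zero columns for 10 distinct tokens
```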
Train Model
adult = fetch_adult()
adult.keys()

data = adult.data
target = adult.target
feature_names = adult.feature_names
category_map = adult.category_map
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Note that for your own datasets you can use our utility function gen_category_map to create the category map:
from alibi.utils.data import gen_category_map
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Define shuffled training and test set
np.random.seed(0)
data_perm = np.random.permutation(np.c_[data, target])
data = data_perm[:, :-1]
target = data_perm[:, -1]

idx = 30000
X_train, Y_train = data[:idx, :], target[:idx]
X_test, Y_test = data[idx + 1:, :], target[idx + 1:]
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Create feature transformation pipeline Create feature pre-processor. Needs to have 'fit' and 'transform' methods. Different types of pre-processing can be applied to all or part of the features. In the example below we will standardize ordinal features and apply one-hot-encoding to categorical features. Ordinal feature...
ordinal_features = [x for x in range(len(feature_names)) if x not in list(category_map.keys())]
ordinal_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),
                                      ('scaler', StandardScaler())])
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Categorical features:
categorical_features = list(category_map.keys())
categorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),
                                          ('onehot', OneHotEncoder(handle_unknown='ignore'))])
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Combine and fit:
preprocessor = ColumnTransformer(transformers=[('num', ordinal_transformer, ordinal_features), ('cat', categorical_transformer, categorical_features)])
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Train Random Forest model Fit on pre-processed (imputing, OHE, standardizing) data.
np.random.seed(0)
clf = RandomForestClassifier(n_estimators=50)
model = Pipeline(steps=[("preprocess", preprocessor), ("model", clf)])
model.fit(X_train, Y_train)
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Define predict function
def predict_fn(x):
    return model.predict(x)

# predict_fn = lambda x: clf.predict(preprocessor.transform(x))

print('Train accuracy: ', accuracy_score(Y_train, predict_fn(X_train)))
print('Test accuracy: ', accuracy_score(Y_test, predict_fn(X_test)))

dump(model, 'model.joblib')
print(get_minio().fput_object(MINIO_M...
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Train Explainer
model.predict(X_train)
explainer = AnchorTabular(predict_fn, feature_names, categorical_names=category_map)
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0