```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('austin_weather.csv')
df.head()
df.info()
```
<h2>Scatter Plot Visualization for Quantitative Comparison</h2>
In this task we will examine DewPointAvg (F) against HumidityAvg (%), TempAvg (F), and WindAvg (MPH).
Note that our data is not yet ready for analysis: among other issues, the dtype of DewPointAvg (F), HumidityAvg (%), and WindAvg (MPH) is object, even though the values are numeric. Therefore:
- Convert those columns to the float data type
Steps:
- You will not be able to convert the dtype directly, because these columns contain '-' values that cannot be cast to float. First replace every '-' with NaN using the .replace() method. See the documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html
- Fill the NaN values with the previous value in the column. Use the .fillna() method with the method argument set to 'ffill'. See the documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
- Now convert the dtype to float using the .astype() method. See the documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html
After this, part of the data is ready to be used for analysis. Then:
Create a quantitative-comparison scatter plot that produces a figure like the one below:
Notes:
- the colormap is 'coolwarm'
- color each data point by the value of the TempAvgF column
- size each data point by the value of the WindAvgMPH column, multiplied by 20 so the markers are more visible
Share the insights that can be drawn from this quantitative-comparison visualization!
```
# Replace placeholder '-' values with NaN so the columns can be cast to float
df = df.replace('-', np.nan)
tabel = pd.DataFrame(data=df, columns=['DewPointAvgF', 'HumidityAvgPercent', 'TempAvgF', 'WindAvgMPH'])
# fillna() returns a new DataFrame by default, so assign the result back
tabel = tabel.fillna(method='ffill')
flo = tabel.astype('float')
flo.info()
fig,ax = plt.subplots(figsize=(15,8))
temp = flo['TempAvgF']
wind = flo['WindAvgMPH']
data = ax.scatter(flo['HumidityAvgPercent'], flo['DewPointAvgF'], c=temp, cmap='coolwarm', s=wind*20, alpha=0.7)
ax.set_xlabel('HumidityAvg %')
ax.set_ylabel('DewPointAvg F')
ax.set_title('Austin Weather')
fig.colorbar(data)
plt.show()
```

The stronger the red, the higher the TempAvgF value in the comparison;
conversely, the stronger the blue, the lower it is.
The transparency (alpha) helps distinguish overlapping data points.
Judging by the density, most data points cluster around the highest humidity values, roughly ±90.
- leaky relu / elu
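For reference, the leaky ReLU and ELU activations noted above can be sketched in a few lines of NumPy (a minimal illustration of the formulas only, not the framework implementations; the function names are ours):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # identity for x > 0, small linear slope alpha for x <= 0
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # identity for x > 0, smooth exponential saturation alpha*(e^x - 1) for x <= 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, 0.0, 3.0])
print(leaky_relu(x))  # [-0.02  0.    3.  ]
print(elu(x))         # [-0.86466472  0.          3.        ]
```

Unlike plain ReLU, both keep a non-zero gradient for negative inputs, which helps avoid "dead" units.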
## Planet Kaggle competition
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fast_gen import *
from learner import *
from pt_models import *
from dataset_pt import *
from sgdr_pt import *
from planet import *
bs=64; f_model = resnet34
path = "/data/jhoward/fast/planet/"
torch.cuda.set_device(1)
n=len(list(open(f'{path}train_v2.csv')))-1
data=get_data_pad(f_model, path, 256, 64, n, 0)
learn = Learner.pretrained_convnet(f_model, data, metrics=[f2])
```
### Train
```
learn.fit(0.2, 1, cycle_len=1)
learn.sched.plot_lr()
learn.unfreeze()
learn.fit([0.01,0.05,0.2], 12, cycle_len=4)
learn.fit([1e-4,1e-3,0.01], 4)
```
### Evaluate
```
name = '170809'
def load_cycle_cv(cv, cycle):
    data = get_data_zoom(f_model, path, 256, 64, n, cv)
    learn.set_data(data)
    learn.load_cycle(f'{name}_{cv}', cycle)
    return data
data = load_cycle_cv(0,1)
val = learn.predict()
f2(val,data.val_y)
f2(learn.TTA(),data.val_y)
def get_labels(a): return [data.classes[o] for o in a.nonzero()[0]]
lbls = test>0.2
idx=9
print(get_labels(lbls[idx]))
PIL.Image.open(path+data.test_dl.dataset.fnames[idx]).convert('RGB')
res = [get_labels(o) for o in lbls]
data.test_dl.dataset.fnames[:5]
outp = pd.DataFrame({'image_name': [f[9:-4] for f in data.test_dl.dataset.fnames],
                     'tags': [' '.join(l) for l in res]})
outp.head()
outp.to_csv('tmp/subm.gz', compression='gzip', index=None)
from IPython.display import FileLink
FileLink('tmp/subm.gz')
def cycle_preds(name, cycle, n_tta=4, is_test=False):
    learn.load_cycle(name, cycle)
    return learn.TTA(n_tta, is_test=is_test)

def cycle_cv_preds(cv, n_tta=4, is_test=False):
    data = get_data_pad(f_model, path, 256, 64, n, cv)
    learn.set_data(data)
    return [cycle_preds(f'{name}_{cv}', i, is_test=is_test) for i in range(5)]
```
- check dogs and cats
- get resize working again with new path structure
```
%%time
preds_arr = []
for i in range(5):
    print(i)
    preds_arr.append(cycle_cv_preds(i, is_test=True))

def all_cycle_cv_preds(end_cycle, start_cycle=0, n_tta=4, is_test=False):
    return [cycle_cv_preds(i, is_test=is_test) for i in range(start_cycle, end_cycle)]
np.savez_compressed(f'{path}tmp/test_preds', preds_arr)
preds_avg = [np.mean(o,0) for o in preds_arr]
test = np.mean(preds_avg,0)
%time preds_arr = all_cycle_cv_preds(5)
[f2(preds_arr[0][o],data.val_y) for o in range(5)]
preds_avg = [np.mean(o,0) for o in preds_arr]
ys = [get_data_zoom(f_model, path, 256, 64, n, cv).val_y for cv in range(5)]
f2s = [f2(o,y) for o,y in zip(preds_avg,ys)]; f2s
ots = [opt_th(o,y) for o,y in zip(preds_avg,ys)]; ots
np.mean(ots)
np.mean(f2s,0)
```
### End
# Convolutional Neural Networks for Image Classification
+ [Load MNIST dataset](#load)
+ [Visualizing the Image Data](#visualizing)
+ [PreProcessing Data](#preprocessing)
+ [1. Encoding Labels using one hot encoding](#encoding)
+ [2. Normalizing Data](#normalizing)
+ [3. Reshaping the Data](#reshaping)
+ [Training the Model](#training)
+ [Which parameters should we set based on our data and can we play around](#parameters)
+ [Model Evaluation](#evaluation)
+ [Predicting a given image](#predicting)
--------
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
-----
# <a name='load'>Load MNIST data set</a>
```
from tensorflow.keras.datasets import mnist
```
As the data is already prepared in split format, we just need to use tuple unpacking.
```
(x_train, y_train), (x_test, y_test) =mnist.load_data()
```
-----
# <a name='visualizing'>Visualizing the Image Data</a>
```
x_train.shape
# We can see that there are 60,000 images with 28x28 dimensions
single_image = x_train[0]
single_image
plt.imshow(single_image);
```
Actually the image is in grayscale.
**So why do we see these purple and yellow colors here?**
Matplotlib has different colormaps, and the default one used to display images is `viridis`.
Therefore, the darkest values appear purple and the lightest appear yellow when the image is rendered with the `viridis` colormap.
### We can change the cmap
```
plt.imshow(single_image, cmap='gray');
```
----
# <a name='preprocessing'>PreProcessing Data</a>
# <a name='encoding'>1) Encoding Labels using one hot encoding</a>
```
y_train
```
If we look at the y_train labels, the first entry is `5`, so our image displays a 5, which matches the correct label.
```
from tensorflow.keras.utils import to_categorical
```
### Before One hot encoding,
```
y_train.shape
```
### After one hot encoding,
```
y_example = to_categorical(y_train)
y_example.shape
```
Now we can see that the label values 1, 2, 3, etc. are one-hot encoded into switched-on/off values.
Example: if the digit is 4, it is encoded as `0000100000` — the position at index 4 is switched on.
```
y_example[0] # for value 5
```
## Encoding Y_test labels
+ `num_classes`: number of unique classes
Example for digits it will be 10 unique classes (0,1,2,3,4,5,6,7,8,9)
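For intuition, `to_categorical` is equivalent to indexing an identity matrix with the labels (a minimal NumPy sketch for illustration, not the Keras implementation):

```python
import numpy as np

labels = np.array([5, 0, 4])   # first few MNIST labels
one_hot = np.eye(10)[labels]   # row i of eye(10) has a 1 only at index i
print(one_hot.shape)           # (3, 10)
print(one_hot[0])              # 1.0 at index 5, zeros elsewhere
```

Each row sums to 1, with the single "on" position marking the class.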
```
y_cat_test = to_categorical(y_test, num_classes=10)
y_cat_train = to_categorical(y_train, num_classes=10)
```
-------
# <a name='normalizing'>2) Normalizing Data</a>
**We can see that a single image is represented by values of `0-255`**
```
single_image
single_image.min()
single_image.max()
```
As we know, images are made up of values 0-255, and future images will be made up of 0-255 values too, so the scale will not change.
**So we can simply divide by 255 to scale the values between 0 and 1.**
```
x_train = x_train/255
x_test = x_test/255
scaled_image = x_train[0]
scaled_image
scaled_image.min(), scaled_image.max()
```
------
# <a name='reshaping'>3) Reshaping the Data</a>
Right now our data is 60,000 images stored in 28 by 28 pixel array formation.
This is correct for a CNN, but **we need to add one more dimension to show we're dealing with a single channel** (since the images are grayscale, only showing values from 0-255 on one channel); **a color image would have 3 channels**.
```
x_train.shape
```
#### Reshape to include channel dimension (in this case, 1 channel)
```
# batch size, width, height, color_channels
x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)
```
-------
# <a name='training'>Training the Model</a>
### 4 main hyperparameters for Conv2D
- **filters**: the more classes you try to predict, the more filters you should add (powers of 2 are common; 32 is a reasonable starting point).
- **kernel_size**: typical sizes are 2x2, 4x4, etc.; you can keep expanding it based on your data. 4x4 is a reasonable starting point.
- **strides**: how big a step the kernel takes as it moves across the image. With our 28x28 images and a stride equal to the 4x4 kernel, the kernel covers the width in 28/4 = 7 steps.
- **padding**: 'valid' or 'same'
https://stackoverflow.com/questions/37674306/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t
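The effect of these hyperparameters on the output shape can be sanity-checked with the standard convolution shape arithmetic (an illustrative sketch under stated assumptions; `conv_output_size` is our own helper, not a Keras function):

```python
import math

def conv_output_size(n, kernel, stride=1, padding='valid'):
    """Spatial output size of a conv layer along one dimension."""
    if padding == 'same':
        # 'same' zero-pads the borders so the size becomes ceil(n / stride)
        return math.ceil(n / stride)
    # 'valid' uses no zero-padding at the borders
    return (n - kernel) // stride + 1

# 28x28 MNIST image, 4x4 kernel, stride 1, 'valid' padding (as in the model below)
print(conv_output_size(28, 4))                  # 25
# the same kernel moved with stride 4 scans the width in 28/4 = 7 steps
print(conv_output_size(28, 4, stride=4))        # 7
print(conv_output_size(28, 4, padding='same'))  # 28
```

This matches the 25x25 feature maps you will see in `model.summary()` below.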
```
28*28
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, Flatten
model = Sequential()
# One Set, as our model is simple we will use one set only
# Convolutional Layer
model.add(Conv2D(filters=32, kernel_size=(4,4), input_shape=(28,28, 1), activation='relu')) # our images are 28x28 with 1 color channel, we also use default padding and stride
# Pooling Layer
model.add(MaxPool2D(pool_size=(2,2)))
# Flattening the image from 28x28 to 784 before the final layer
model.add(Flatten())
# 128 NEURONS IN DENSE HIDDEN LAYER (YOU CAN CHANGE THIS NUMBER OF NEURONS)
model.add(Dense(128, activation='relu'))
# LAST LAYER IS THE CLASSIFIER, THUS 10 POSSIBLE CLASSES
# Multiclassfication problem => softmax
model.add(Dense(10, activation='softmax'))
# https://keras.io/metrics/ => can refer various metrics avaliable
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```
# <a name='parameters'>Which parameters </a>
+ should we set based on our data?
+ can we play around ?
### Parameters to set based on our data

### Parameters that we can play around

### Check model summary
```
model.summary()
```
### Add in Early Stopping
```
from tensorflow.keras.callbacks import EarlyStopping
# we can also use val_accuracy
early_stopping = EarlyStopping(monitor='val_loss', patience=1, verbose=1)
```
# Train the Model
```
model.fit(x_train, y_cat_train,
          validation_data=(x_test, y_cat_test),
          epochs=10,
          callbacks=[early_stopping])
```
------
# <a name='evaluation'>Model Evaluation </a>
```
model.metrics_names
```
As we have set accuracy too, there will be 2 metrics.
```
metrics = pd.DataFrame(model.history.history)
metrics.head()
metrics[['loss', 'val_loss']].plot();
metrics[['accuracy', 'val_accuracy']].plot();
```
------
```
model.evaluate(x_test, y_cat_test, verbose=0)
```
The `loss` and `accuracy` values above are essentially the same as the very last epoch's results.
-----
```
from sklearn.metrics import classification_report, confusion_matrix
predictions = np.argmax(model.predict(x_test), axis=-1)
```
### Comparing with actual y_test values (no longer categorical encoded values)
```
print(classification_report(y_test, predictions))
confusion_matrix(y_test, predictions)
plt.figure(figsize=(10, 6))
sns.heatmap(confusion_matrix(y_test, predictions), annot=True);
```
--------
# <a name='predicting'>Predicting a given image</a>
```
my_number = x_test[0]
plt.imshow(my_number.reshape(28,28));
# need to reshape the image
# Number of image, width, height, number of color channels
np.argmax(model.predict(my_number.reshape(1, 28, 28, 1)), axis=-1)
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Array/array_sorting.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Array/array_sorting.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Array/array_sorting.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as geemap
except:
    import geemap

# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Define an arbitrary region of interest as a point.
roi = ee.Geometry.Point(-122.26032, 37.87187)
# Use these bands.
bandNames = ee.List(['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11'])
# Load a Landsat 8 collection.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
    .select(bandNames) \
    .filterBounds(roi) \
    .filterDate('2014-06-01', '2014-12-31') \
    .map(lambda image: ee.Algorithms.Landsat.simpleCloudScore(image))
# Convert the collection to an array.
array = collection.toArray()
# Label of the axes.
imageAxis = 0
bandAxis = 1
# Get the cloud slice and the bands of interest.
bands = array.arraySlice(bandAxis, 0, bandNames.length())
clouds = array.arraySlice(bandAxis, bandNames.length())
# Sort by cloudiness.
sorted = bands.arraySort(clouds)
# Get the least cloudy images, 20% of the total.
numImages = sorted.arrayLength(imageAxis).multiply(0.2).int()
leastCloudy = sorted.arraySlice(imageAxis, 0, numImages)
# Get the mean of the least cloudy images by reducing along the image axis.
mean = leastCloudy.arrayReduce(**{
    'reducer': ee.Reducer.mean(),
    'axes': [imageAxis]
})
# Turn the reduced array image into a multi-band image for display.
meanImage = mean.arrayProject([bandAxis]).arrayFlatten([bandNames])
Map.centerObject(ee.FeatureCollection(roi), 12)
Map.addLayer(meanImage, {'bands': ['B5', 'B4', 'B2'], 'min': 0, 'max': 0.5}, 'Mean Image')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Shallow networks with Keras on CIFAR10
Modify your MLP version from the previous exercise towards Convolutional Neural Networks.
## Loading the packages
```
# First, import TF and get its version.
import tensorflow as tf
tf_version = tf.__version__
# Check if version >=2.0.0 is used
if not tf_version.startswith('2.'):
    print('WARNING: TensorFlow >= 2.0.0 will be used in this course.\nYour version is {}'.format(tf_version) + '.\033[0m')
else:
    print('OK: TensorFlow >= 2.0.0' + '.\033[0m')
# check tensorflow installation to see if we have GPU support
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
import numpy as np
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix
# ... import here the different keras libraries you need
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import utils
%matplotlib inline
```
## Loading the raw data
```
def show_imgs(X):
    plt.figure(1)
    k = 0
    for i in range(0, 5):
        for j in range(0, 5):
            plt.subplot2grid((5, 5), (i, j))
            plt.imshow(X[k])
            k = k + 1
            plt.axis('off')
    plt.show()
# Load data & split data between train and test sets
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
show_imgs(X_train)
print("X-training: ", X_train.shape)
print("y-training: ", y_train.shape)
print("X-test: ", X_test.shape)
print("y-test: ", y_test.shape)
# Don't reshape
#X_train = X_train.reshape(50000, 32*32*3) #change the shape towards (50000, 32*32*3)
#X_test = X_test.reshape(10000, 32*32*3) #idem (10000, 32*32*3)
X_train = X_train.astype('float32') #change the type towards float32
X_test = X_test.astype('float32') #idem
X_train /= 255.0 #normalize the range to be between 0.0 and 1.0
X_test /= 255.0 #idem
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(np.unique(y_train))
n_classes = 10
# Conversion to class vectors
Y_train = utils.to_categorical(y_train, n_classes)
Y_test = utils.to_categorical(y_test, n_classes)
print(Y_train[:10])
```
## CNN
### Define the network
```
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Activation, Dropout, BatchNormalization
epochs = 30
batches = 32
D = X_train.shape[1] # dimension of input sample - 32*32*3 CIFAR10
# Basic model
model = Sequential(name="simple_cnn")
model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=1, padding='same', input_shape=(32, 32, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(Conv2D(filters=128, kernel_size=(5, 5), strides=1, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=2))
model.add(Flatten())
model.add(Dense(n_classes, activation='softmax'))
model.summary()
```
### Compile and train the network
```
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(),
              optimizer='adam',
              metrics=['accuracy'])
log = model.fit(X_train,
                Y_train,
                batch_size=batches,
                epochs=epochs,
                validation_data=(X_test, Y_test))
```
## Evaluate the network
### Loss evolution during training
This can be done first looking at the history of the training (output of the `fit()` function).
```
f = plt.figure(figsize=(12,4))
ax1 = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax1.plot(log.history['loss'], label='Training loss')
ax1.plot(log.history['val_loss'], label='Testing loss')
ax1.legend()
ax1.grid()
ax2.plot(log.history['accuracy'], label='Training acc')
ax2.plot(log.history['val_accuracy'], label='Val acc')
ax2.legend()
ax2.grid()
```
### Model evaluation
We can compute the overall performance on test set calling the `evaluate()` function on the model. The function returns the loss and the metrics used to compile the models.
```
loss_test, metric_test = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', loss_test)
print('Test accuracy:', metric_test)
```
### Notes
It is confirmed that the simple CNN improves performance by more than ~10% compared to the initial two-layer MLP.
```
import pandas as pd
print(pd.DataFrame([
    {
        "Architecture": "Layer 1: CONV D=32, w=h=3, S=1, P='same'; Layer 2: MaxPooling2D S=2; Layer 3: DENSE D=10",
        "train": 0.7758,
        "test": 0.6531
    }, {
        "Architecture": "Layer 1: CONV D=32, w=h=3, S=1, P='same'; Layer 2: MaxPooling2D S=2; Layer 3: DENSE D=64; Layer 3: DENSE D=64; Layer 3: DENSE D=10",
        "train": 0.9298,
        "test": 0.6366
    }, {
        "Architecture": "Layer 1: CONV D=32, w=h=3, S=1, P='same'; Layer 2: CONV D=64, w=h=3, S=1, P='same'; Layer 3: CONV D=128, w=h=3, S=1, P='same'; Layer 4: MaxPooling2D S=2; Layer 5: MaxPooling2D S=2; Layer 6: DENSE D=64; Layer 7: DENSE D=64; Layer 8: DENSE D=10",
        "train": 0.9603,
        "test": 0.6855
    }, {
        "Architecture": "CONV D=32, w=h=3, S=1, P='same'; BATCHNORM; DROPOUT(0.4); CONV D=64, w=h=5, S=1, P='same'; BATCHNORM; DROPOUT(0.4); MaxPooling2D S=2; Layer 8: DENSE D=10",
        "train": 0.8801,
        "test": 0.6980
    }, {
        "Architecture": "CONV D=64, w=h=3, S=1, P='same'; BATCHNORM; DROPOUT(0.4); CONV D=128, w=h=5, S=1, P='same'; BATCHNORM; DROPOUT(0.4); MaxPooling2D S=2; Layer 8: DENSE D=10",
        "train": 0.9424,
        "test": 0.7106
    }
]).to_string())
```
# Download observational data from Frost
[Frost](https://frost.met.no/index.html) is an API which gives access to MET Norway's archive of historical weather and climate data.
## Get access
- to access the API you need to [create a user](https://frost.met.no/auth/requestCredentials.html)
## How to use Frost
- [basic introduction](https://frost.met.no/howto.html) to help you learn to use Frost
- [Examples](https://frost.met.no/examples2.html) of how to use Frost
## How to find the variable?
- [Browse weather elements](https://frost.met.no/elementtable)
The following script is based on the [example](https://frost.met.no/python_example.html) provided by Frost documentation
New client credentials have been successfully created for email address franziska.hellmuth@geo.uio.no. Your client ID is:
```
client_id = 'd2e8db6e-9f6b-4cff-a337-3accf09bc8d8'
# import packages
import requests
import pandas as pd
import xarray as xr
import numpy as np
```
To find station information use the Norwegian Centre for Climate services: <https://seklima.met.no/stations/>
## We will use Andøya airport
- Municipality: Andøy
- County: Nordland
- Station number (id): SN87110
- Height above mean sea level: 10 m
- Latitude: 69.3073º N
- Longitude: 16.1312º E
- Operating period: 01.01.1958 - now
- WMO number: 1010
- WIGOS number: 0-20000-0-01010
- Station holder: Met.no, Avinor
```
station = 'SN87110' # based on the information taken from seklima
```
Define the variables to be downloaded after you [browsed weather elements](https://frost.met.no/elementtable)
```
_xx = xr.open_dataset('/scratch/franzihe/output/Met-No_obs/SN87110/air_pressure_at_sea_level_202104.nc')
elements = [
    'air_temperature',
    'wind_speed',
    'wind_from_direction',
    'air_pressure_at_sea_level',
    'sum(precipitation_amount PT1H)',
    # 'sum(precipitation_amount P1D)',   # error when downloading
    # 'sum(precipitation_amount PT12H)', # error when downloading
    'cloud_area_fraction',
    'cloud_area_fraction1',
    'cloud_area_fraction2',
    'cloud_area_fraction3',
    'cloud_base_height1',
    'cloud_base_height2',
    'cloud_base_height3',
]
reference_time = [
    '2021-03-01/2021-03-31',
    '2021-04-01/2021-04-30',
]  # start and end of data which shall be retrieved
for ref_time in reference_time:
    for var in elements:
        # retrieve data from Frost using the requests.get function.
        # Define endpoint and parameters
        endpoint = 'https://frost.met.no/observations/v0.jsonld'
        parameters = {
            'sources': station,
            'elements': var,
            'referencetime': ref_time,
        }
        # Issue an HTTP GET request
        r = requests.get(endpoint, parameters, auth=(client_id, ''))
        # Extract JSON data
        json = r.json()
        # Check if the request worked, print out any errors
        if r.status_code == 200:
            data = json['data']
            print('{} retrieved from frost.met.no!'.format(var))
        else:
            print('Error! Returned status code %s' % r.status_code)
            print('Message: %s' % json['error']['message'])
            print('Reason: %s' % json['error']['reason'])

        # Build a DataFrame with all of the observations in a table format
        # (DataFrame.append was removed in pandas 2.0, so collect rows and concat once)
        rows = []
        for i in range(len(data)):
            row = pd.DataFrame(data[i]['observations'])
            row['referenceTime'] = data[i]['referenceTime']
            row['sourceId'] = data[i]['sourceId']
            rows.append(row)
        df = pd.concat(rows, ignore_index=True)

        # make a shorter and more readable table, you can use the code below.
        # These additional columns will be kept
        columns = ['sourceId', 'referenceTime', 'value', 'unit', 'timeOffset', 'timeResolution', 'level']
        try:
            df = df[columns].copy()
        except KeyError:
            columns = ['sourceId', 'referenceTime', 'value', 'unit', 'timeOffset', 'timeResolution']
            df = df[columns].copy()

        # Convert the time value to something Python understands
        df['referenceTime'] = pd.to_datetime(df['referenceTime'])
        if var == 'air_pressure_at_sea_level' or var == 'sum(precipitation_amount PT1H)':
            print('hourly data retrieved')
            df.drop(df[df['timeResolution'] != 'PT1H'].index, inplace=True)
            df.drop(df[df['timeOffset'] != 'PT0H'].index, inplace=True)
        elif var == 'cloud_area_fraction2' or var == 'cloud_area_fraction3':
            df.drop(df[df['timeResolution'] != 'PT30M'].index, inplace=True)
            df.drop(df[df['timeOffset'] != 'PT20M'].index, inplace=True)
        else:
            # select only 10-minute time resolution and timeOffset to be at 0H
            df.drop(df[df['timeResolution'] != 'PT10M'].index, inplace=True)
            df.drop(df[df['timeOffset'] != 'PT0H'].index, inplace=True)

        # reset the index to start at zero
        df.reset_index(drop=True, inplace=True)
        # rename the columns (useful for later when we create the xarray)
        df.rename(columns={'value': var}, inplace=True)

        # create xarray DataArray and assign units
        try:
            dsx = df.to_xarray().drop_vars(['unit', 'timeOffset', 'timeResolution', 'level', 'sourceId'])
        except ValueError:
            dsx = df.to_xarray().drop_vars(['unit', 'timeOffset', 'timeResolution', 'sourceId'])
        attrs = {'units': ''}
        dsx['referenceTime'].assign_attrs(attrs)
        dsx['index'] = pd.to_datetime(dsx['referenceTime'].values)
        # rename index
        dsx = dsx.rename({'index': 'time'})
        # remove variable referenceTime
        dsx = dsx.drop('referenceTime')
        dsx['time'] = pd.DatetimeIndex(dsx['time'].values)

        # assign attributes to variable
        try:
            dsx[var] = dsx[var].assign_attrs({'units': df['unit'][0], df['level'][0]['levelType']: str(df['level'][0]['value']) + df['level'][0]['unit']})
        except KeyError:
            dsx[var] = dsx[var].assign_attrs({'units': df['unit'][0]})

        # assign attributes to dataset
        dsx = dsx.assign_attrs({
            'Municipality': 'Andøy',
            'County': 'Nordland',
            'Height above mean sea level': '10 m',
            'Station number (id)': 'SN87110',
            'Latitude': 69.3073,
            'Longitude': 16.1312,
            'WMO number': 1010})

        save_file = '/scratch/franzihe/output/Met-No_obs/{}/{}_{}{}.nc'.format(station, var, ref_time.split('-')[0], ref_time.split('-')[1])
        dsx.to_netcdf(path=save_file)
        print('File saved: {}'.format(save_file))
```
# Model Training
## SQL Analysis Problem
```
!wget https://github.com/claranet-coast/sql-analytics-problem/archive/master.zip
!unzip master.zip -d /project
!pip install -q tensorflow-text
!pip install -q tf-models-official
import pandas as pd
import json
import os
import re
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from sklearn.model_selection import train_test_split
from gensim.models import FastText
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import GRU, Dense, Embedding,concatenate, Input, Dropout, Activation, GlobalAveragePooling1D, Bidirectional, GlobalMaxPooling1D
from tensorflow.keras.models import Sequential,Model
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
from keras import backend as K
from google.colab import files
from pickle import dump
import statistics
```
## Data Preprocessing
- remove text between /* */ comments
- remove dedicated newline characters
- replace dedicated date, time, id, and number strings with corresponding tags
The task is a regression analysis.<br>
The values of the target variable are very small, which may eventually lead to negative predictions.<br>
To avoid that, a logarithmic transformation has been applied to the target variable.<br>
Predicted values are converted back to the real scale using the exponential function.
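The round trip described above can be sketched as follows (a minimal illustration with made-up values, not the notebook's actual targets):

```python
import numpy as np

# small positive query times (seconds), like the target variable here
y = np.array([0.002, 0.015, 0.4])

# train on the log scale so the model cannot produce negative times
y_log = np.log(y)

# convert predictions back to the real scale with the exponential function
y_back = np.exp(y_log)
print(np.allclose(y, y_back))  # True
```

Since `exp` is always positive, predictions made on the log scale can never map back to negative query times.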
```
def clean(s):
    s = re.sub(r'\s+', ' ', s)
    try:
        s = re.search(r'(.*)/(.*)', s).group(2)
    except AttributeError:
        pass
    return s.lower()

def extendedClean(s):
    # replace the combined datetime pattern first, otherwise the separate
    # date/time substitutions below would consume it
    s = re.sub(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', "|datetime|", s)
    s = re.sub(r'\d{4}-\d{2}-\d{2}', "|date|", s)
    s = re.sub(r'\d{2}:\d{2}:\d{2}', "|time|", s)
    s = re.sub(r'\d{13}', "|datetime|", s)
    s = re.sub(r'\d{8,12}', "|id|", s)
    s = re.sub(r'\d{2,7}', "|number|", s)
    return s
with open('/project/sql-analytics-problem-master/data/slow_log.json') as json_file:
    data = json.load(json_file)
logData=pd.json_normalize(data)
preprocessed=logData['sql_text'].map(lambda x: clean(x))
preprocessed.head(5)
y=logData['query_time'].map(lambda x: float(str(x)[6:]))
y.describe()
y= np.log(y)
y.describe()
```
# Support Vector Regressor
- Count word vectorizer
- SVR
- Crossvalidation
```
prep= preprocessed.map(lambda x: extendedClean(x))
prep=prep.map(lambda x: x.replace(",", " , ").replace("("," ( ").replace(")"," ) ").replace("="," = "))
print(prep[0])
vectorizer = CountVectorizer(analyzer='word', ngram_range=(1,1))
X = vectorizer.fit_transform(prep)
scaler=StandardScaler(with_mean=False)
X=scaler.fit_transform(X)
clf=SVR(C=1.0, epsilon=0.2)
scores = cross_val_score(clf, X, y, cv=5, scoring="neg_mean_squared_error")
scores
print("Average MSE of SVR Model: " + str(statistics.mean([-2.60929857, -0.48768693, -1.02234861, -0.7812262 , -0.7548222 ])))
# custom keras metric to calculate MSE on real data
def metr(y_true, y_pred):
    '''custom MSE to monitor real data'''
    return K.mean(K.square(K.exp(y_pred) - K.exp(y_true)))
es=tf.keras.callbacks.EarlyStopping(monitor='val_loss',patience=5, restore_best_weights=True)
epochs = 50
```
## BERT model training
The BERT model requires raw text input.
```
#for bert
X_train, X_test, y_train, y_test = train_test_split(preprocessed, y, test_size=0.25, random_state=42)
bert_model = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3', name="encoder")
bert_preprocess = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', name="preprocess")

def build_bert():
    text_input = Input(shape=(), dtype=tf.string, name='text')
    preprocessing_layer = bert_preprocess
    encoder_inputs = preprocessing_layer(text_input)
    encoder = bert_model
    outputs = encoder(encoder_inputs)
    net = outputs['pooled_output']
    # net = tf.keras.layers.Dense(128, activation="relu", name='net')(net)
    net = Dropout(0.2)(net)
    net = Dense(1, activation="linear", name='output')(net)
    return Model(text_input, net)
bert = build_bert()
bert.summary()
bert.compile(optimizer=tf.optimizers.Adam(learning_rate=0.001),loss="mse",metrics=[metr])
hist= bert.fit(x=X_train,y=y_train,validation_data=(X_test,y_test),epochs=epochs, callbacks=[es])
fig = plt.figure(figsize=(15,4))
ax1 = plt.subplot2grid((1,2),(0,0))
ax1.plot(hist.history['metr'])
ax1.plot(hist.history['loss'])
ax1.set_title('Model training')
ax1.set_ylabel('MSE')
ax1.set_xlabel('epoch')
ax1.legend(['Real', 'Log'], loc='upper left')
ax1 = plt.subplot2grid((1,2),(0,1))
ax1.plot(hist.history['val_metr'])
ax1.plot(hist.history['val_loss'])
ax1.set_title('Model validation')
ax1.set_ylabel('MSE')
ax1.set_xlabel('epoch')
ax1.legend(['Real', 'Log'], loc='upper left')
plt.show()
```
## RNN + FastText embeddings training
- FastText embeddings
- RNN model
Additional preprocessing to keep the `,` `(` `)` `=` characters as separate tokens
```
maxlen= max([len(i.split()) for i in preprocessed])
#extendedClean
addPreproc= preprocessed.map(lambda x: extendedClean(x))
addPreproc=addPreproc.map(lambda x: x.replace(",", " , ").replace("("," ( ").replace(")"," ) ").replace("="," = "))
tokenized=[tf.keras.preprocessing.text.text_to_word_sequence(i,filters='') for i in addPreproc]
tokenized[10:15]
modEmbed = FastText(tokenized, size=300, window=5, min_count=1, workers=4, word_ngrams=1)
#words = list(modEmbed.wv.vocab)
print(modEmbed)
#modEmbed.train(prepped,total_examples=len(prepped),epochs=3, verbose=1)
embedding_matrix = np.zeros((len(modEmbed.wv.vocab) + 1, 300))
for i, vec in enumerate(modEmbed.wv.vectors):
    embedding_matrix[i] = vec
features = embedding_matrix.shape[0]
tokenizer = Tokenizer(num_words = features)
# fit the tokenizer on our text
tokenizer.fit_on_texts(addPreproc)
print(addPreproc[0])
# get all words that the tokenizer knows
word_index = tokenizer.word_index
# put the tokens in a matrix
X = tokenizer.texts_to_sequences(addPreproc)
print(X[0])
X = pad_sequences(X)
print(X.shape)
print(X[0])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
inp = Input(shape=(X.shape[1], ))
emb=Embedding(len(modEmbed.wv.vocab)+1 ,300,weights=[embedding_matrix],input_length=X.shape[1],trainable=False)(inp)
d=Dropout(0.25)(emb)
d=Bidirectional(GRU(500))(d)
out=Dense(1,activation="linear")(d)
rnn = Model(inputs=inp, outputs=out)
rnn.summary()
rnn.compile(loss='mse' ,optimizer=tf.optimizers.Adam(learning_rate=0.001),metrics=[metr])
hist=rnn.fit(x=X_train,y=y_train,validation_data=(X_test,y_test),epochs=epochs, callbacks=[es])
fig = plt.figure(figsize=(15,4))
ax1 = plt.subplot2grid((1,2),(0,0))
ax1.plot(hist.history['metr'])
ax1.plot(hist.history['loss'])
ax1.set_title('Model training')
ax1.set_ylabel('MSE')
ax1.set_xlabel('epoch')
ax1.legend(['Real', 'Log'], loc='upper left')
ax1 = plt.subplot2grid((1,2),(0,1))
ax1.plot(hist.history['val_metr'])
ax1.plot(hist.history['val_loss'])
ax1.set_title('Model validation')
ax1.set_ylabel('MSE')
ax1.set_xlabel('epoch')
ax1.legend(['Real', 'Log'], loc='upper left')
plt.show()
#check if the best weights been restored
rnn.evaluate(X_test,y_test)
x=" ping"
text= clean(x)
text= extendedClean(text)
tokenized=text.replace(",", " , ").replace("("," ( ").replace(")"," ) ").replace("="," = ")
X=tokenizer.texts_to_sequences([tokenized])
print(X)
X = pad_sequences(X, maxlen=244)  # pad to the same length as the training sequences
np.exp(rnn.predict(X))
from google.colab import files
rnn.save('modelRnn.h5')
files.download('modelRnn.h5')
dump(tokenizer, open('tokenizer.pkl', 'wb'))
files.download('tokenizer.pkl')
```
## Result
#### Models MSE (log transformation applied)
SVR : 1.13<br>
BERT: 1.15<br>
RNN: 0.76<br>
# Quantum Walk
```
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from scipy.linalg import expm
%matplotlib inline
```
Here we perform a continuous time quantum walk (CTQW) on a complete graph with four nodes (denoted as $K_4$). We will be following [this](https://www.nature.com/articles/ncomms11511) paper.
```
G = nx.complete_graph(4)
nx.draw_networkx(G)
```
The spectrum of complete graphs is quite simple -- one eigenvalue equal to $N-1$ (where $N$ is the number of nodes) and the remaining equal to -1:
```
A = nx.adjacency_matrix(G).toarray()
eigvals, _ = np.linalg.eigh(A)
print(eigvals)
```
For the CTQW the usual hamiltonian is the adjacency matrix $A$. We modify it slightly by adding the identity, i.e. we take $\mathcal{H} = A + I$. This will reduce the number of gates we need to apply, since the eigenvectors with 0 eigenvalue will not acquire a phase.
```
hamil = A + np.eye(4)
```
It turns out that $K_n$ graphs are Hadamard diagonalizable, allowing us to write $\mathcal{H} = Q \Lambda Q^\dagger$, where $Q = H \otimes H$. Let's check that this works.
```
had = np.sqrt(1/2) * np.array([[1, 1], [1, -1]])
Q = np.kron(had, had)
Q.conj().T.dot(hamil).dot(Q)
```
The time evolution operator $e^{-iHt}$ is also diagonalized by the same transformation. In particular we have
$$
Q^\dagger e^{-iHt}Q = \begin{pmatrix}
e^{-i4t} & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$
Which is just a [CPHASE00](http://docs.rigetti.com/en/stable/apidocs/autogen/pyquil.gates.CPHASE00.html#pyquil.gates.CPHASE00) gate with an angle of $-4t$.
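This identity is easy to check numerically. The sketch below rebuilds `hamil` and `Q` from the definitions above so it runs standalone:

```python
import numpy as np
from scipy.linalg import expm

# Rebuild H = A + I for K_4 (which is the all-ones matrix) and Q = H (x) H
A = np.ones((4, 4)) - np.eye(4)
hamil = A + np.eye(4)
had = np.sqrt(1/2) * np.array([[1, 1], [1, -1]])
Q = np.kron(had, had)

t = 0.7
lhs = Q.conj().T @ expm(-1j * hamil * t) @ Q
rhs = np.diag([np.exp(-4j * t), 1, 1, 1])   # the CPHASE00(-4t) matrix
print(np.allclose(lhs, rhs))  # True
```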
```
from pyquil import Program
from pyquil.api import WavefunctionSimulator
from pyquil.gates import H, X, CPHASE00
from pyquil.latex import display
wfn_sim = WavefunctionSimulator()
def k_4_ctqw(t):
    # Change to diagonal basis
    p = Program(H(0), H(1), X(0), X(1))
    # Time evolve
    p += CPHASE00(-4*t, 0, 1)
    # Change back to computational basis
    p += Program(X(0), X(1), H(0), H(1))
    return p
display(k_4_ctqw(1))
```
Let's compare the quantum walk with a classical random walk. The classical time evolution operator is $e^{-(\mathcal{T} - I) t}$ where $\mathcal{T}$ is the transition matrix of the graph.
We choose as our initial condition $\left| \psi(0) \right\rangle = \left| 0 \right\rangle$, that is the walker starts on the first node. Therefore, due to symmetry, the probability of occupation of all nodes besides $\left| 0 \right\rangle$ is the same.
```
T = A / np.sum(A, axis=0)
time = np.linspace(0, 4, 40)
quantum_probs = np.zeros((len(time), 4))
classical_probs = np.zeros((len(time), 4))
for i, t in enumerate(time):
    p = k_4_ctqw(t)
    wvf = wfn_sim.wavefunction(p)
    vec = wvf.amplitudes
    quantum_probs[i] = np.abs(vec)**2
    classical_ev = expm((T-np.eye(4))*t)
    classical_probs[i] = classical_ev[:, 0]
f, (ax1, ax2) = plt.subplots(2, sharex=True, sharey=True)
ax1.set_title("Quantum evolution")
ax1.set_ylabel('p')
ax1.plot(time, quantum_probs[:, 0], label='Initial node')
ax1.plot(time, quantum_probs[:, 1], label='Remaining nodes')
ax1.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax2.set_title("Classical evolution")
ax2.set_xlabel('t')
ax2.set_ylabel('p')
ax2.plot(time, classical_probs[:, 0], label='Initial node')
ax2.plot(time, classical_probs[:, 1], label='Remaining nodes')
```
As expected the quantum walk exhibits coherent oscillations whilst the classical walk converges to the stationary distribution $p_i = \frac{d_i}{\sum_j d_j} = \frac{1}{4}$.
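The stationary distribution can be checked numerically: at large $t$, $e^{-(\mathcal{T}-I)t}$ approaches the projector onto the uniform distribution. A quick check for $K_4$ (rebuilding $\mathcal{T}$ so the snippet runs standalone):

```python
import numpy as np
from scipy.linalg import expm

A = np.ones((4, 4)) - np.eye(4)   # adjacency matrix of K_4
T = A / A.sum(axis=0)             # column-stochastic transition matrix
P = expm((T - np.eye(4)) * 50.0)  # classical evolution at large t
print(P[:, 0])                    # approximately [0.25, 0.25, 0.25, 0.25]
```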
We can readily generalize this scheme to any $K_{2^n}$ graphs.
```
def k_2n_ctqw(n, t):
    p = Program()
    # Change to diagonal basis
    for i in range(n):
        p += Program(H(i), X(i))
    # Create and apply CPHASE00; for K_{2^n} the nonzero eigenvalue of A + I is 2^n
    big_cphase00 = np.diag(np.ones(2**n)) + 0j
    big_cphase00[0, 0] = np.exp(-1j*(2**n)*t)
    p.defgate("BIG-CPHASE00", big_cphase00)
    args = tuple(["BIG-CPHASE00"] + list(range(n)))
    p.inst(args)
    # Change back to computational basis
    for i in range(n):
        p += Program(X(i), H(i))
    return p

def k_2n_crw(n, t):
    G = nx.complete_graph(2**n)
    A = nx.adjacency_matrix(G).toarray()
    T = A / A.sum(axis=0)
    classical_ev = expm((T-np.eye(2**n))*t)
    return classical_ev[:, 0]
time = np.linspace(0, 4, 40)
quantum_probs = np.zeros((len(time), 8))
classical_probs = np.zeros((len(time), 8))
for i, t in enumerate(time):
    p = k_2n_ctqw(3, t)
    wvf = wfn_sim.wavefunction(p)
    vec = wvf.amplitudes
    quantum_probs[i] = np.abs(vec)**2
    classical_probs[i] = k_2n_crw(3, t)
f, (ax1, ax2) = plt.subplots(2, sharex=True, sharey=True)
ax1.set_title("Quantum evolution")
ax1.set_ylabel('p')
ax1.plot(time, quantum_probs[:, 0], label='Initial node')
ax1.plot(time, quantum_probs[:, 1], label='Remaining nodes')
ax1.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax2.set_title("Classical evolution")
ax2.set_xlabel('t')
ax2.set_ylabel('p')
ax2.plot(time, classical_probs[:, 0], label='Initial node')
ax2.plot(time, classical_probs[:, 1], label='Remaining nodes')
```
# Amazon SageMaker Autopilot Candidate Definition Notebook
This notebook was automatically generated by the AutoML job **automl-dm-16-23-03-55**.
This notebook allows you to customize the candidate definitions and execute the SageMaker Autopilot workflow.
The dataset has **2** columns and the column named **star_rating** is used as
the target column. This is being treated as a **MulticlassClassification** problem. The dataset also has **5** classes.
This notebook will build a **[MulticlassClassification](https://en.wikipedia.org/wiki/Multiclass_classification)** model that
**maximizes** the "**ACCURACY**" quality metric of the trained models.
The "**ACCURACY**" metric provides the percentage of times the model predicted the correct class.
As part of the AutoML job, the input dataset has been randomly split into two pieces, one for **training** and one for
**validation**. This notebook helps you inspect and modify the data transformation approaches proposed by Amazon SageMaker Autopilot. You can interactively
train the data transformation models and use them to transform the data. Finally, you can execute a multiple algorithm hyperparameter optimization (multi-algo HPO)
job that helps you find the best model for your dataset by jointly optimizing the data transformations and machine learning algorithms.
<div class="alert alert-info"> 💡 <strong> Available Knobs</strong>
Look for sections like this for recommended settings that you can change.
</div>
---
## Contents
1. [Sagemaker Setup](#Sagemaker-Setup)
1. [Downloading Generated Candidates](#Downloading-Generated-Modules)
1. [SageMaker Autopilot Job and Amazon Simple Storage Service (Amazon S3) Configuration](#SageMaker-Autopilot-Job-and-Amazon-Simple-Storage-Service-(Amazon-S3)-Configuration)
1. [Candidate Pipelines](#Candidate-Pipelines)
1. [Generated Candidates](#Generated-Candidates)
1. [Selected Candidates](#Selected-Candidates)
1. [Executing the Candidate Pipelines](#Executing-the-Candidate-Pipelines)
1. [Run Data Transformation Steps](#Run-Data-Transformation-Steps)
1. [Multi Algorithm Hyperparameter Tuning](#Multi-Algorithm-Hyperparameter-Tuning)
1. [Model Selection and Deployment](#Model-Selection-and-Deployment)
1. [Tuning Job Result Overview](#Tuning-Job-Result-Overview)
1. [Model Deployment](#Model-Deployment)
---
## Sagemaker Setup
Before you launch the SageMaker Autopilot jobs, we'll set up the environment for Amazon SageMaker:
- Check environment & dependencies.
- Create a few helper objects/functions to organize input/output data and SageMaker sessions.
**Minimal Environment Requirements**
- Jupyter: Tested on `JupyterLab 1.0.6`, `jupyter_core 4.5.0` and `IPython 6.4.0`
- Kernel: `conda_python3`
- Dependencies required
- `sagemaker-python-sdk>=2.19.0`
- Use `!pip install sagemaker==2.19.0` to download this dependency.
- Kernel may need to be restarted after download.
- Expected Execution Role/permission
- S3 access to the bucket that stores the notebook.
### Downloading Generated Modules
Download the generated data transformation modules and a SageMaker Autopilot helper module used by this notebook.
These artifacts will be downloaded to the **automl-dm-16-23-03-55-artifacts** folder.
```
!mkdir -p automl-dm-16-23-03-55-artifacts
!aws s3 sync s3://sagemaker-us-east-1-405759480474/models/autopilot/automl-dm-16-23-03-55/sagemaker-automl-candidates/pr-1-f08b7007254e43bb8d5d10af988f5e93db4f7a6b7f194a32b39f2c0733/generated_module automl-dm-16-23-03-55-artifacts/generated_module --only-show-errors
!aws s3 sync s3://sagemaker-us-east-1-405759480474/models/autopilot/automl-dm-16-23-03-55/sagemaker-automl-candidates/pr-1-f08b7007254e43bb8d5d10af988f5e93db4f7a6b7f194a32b39f2c0733/notebooks/sagemaker_automl automl-dm-16-23-03-55-artifacts/sagemaker_automl --only-show-errors
import sys
sys.path.append("automl-dm-16-23-03-55-artifacts")
```
### SageMaker Autopilot Job and Amazon Simple Storage Service (Amazon S3) Configuration
The following configuration has been derived from the SageMaker Autopilot job. These items configure where this notebook will
look for generated candidates, and where input and output data is stored on Amazon S3.
```
from sagemaker_automl import uid, AutoMLLocalRunConfig
# Where the preprocessed data from the existing AutoML job is stored
BASE_AUTOML_JOB_NAME = "automl-dm-16-23-03-55"
BASE_AUTOML_JOB_CONFIG = {
"automl_job_name": BASE_AUTOML_JOB_NAME,
"automl_output_s3_base_path": "s3://sagemaker-us-east-1-405759480474/models/autopilot/automl-dm-16-23-03-55",
"data_transformer_image_repo_version": "0.2-1-cpu-py3",
"algo_image_repo_versions": {"xgboost": "1.0-1-cpu-py3"},
"algo_inference_image_repo_versions": {"xgboost": "1.0-1-cpu-py3"},
}
# Path conventions of the output data storage path from the local AutoML job run of this notebook
LOCAL_AUTOML_JOB_NAME = "automl-dm--notebook-run-{}".format(uid())
LOCAL_AUTOML_JOB_CONFIG = {
"local_automl_job_name": LOCAL_AUTOML_JOB_NAME,
"local_automl_job_output_s3_base_path": "s3://sagemaker-us-east-1-405759480474/models/autopilot/automl-dm-16-23-03-55/{}".format(
LOCAL_AUTOML_JOB_NAME
),
"data_processing_model_dir": "data-processor-models",
"data_processing_transformed_output_dir": "transformed-data",
"multi_algo_tuning_output_dir": "multi-algo-tuning",
}
AUTOML_LOCAL_RUN_CONFIG = AutoMLLocalRunConfig(
role="arn:aws:iam::405759480474:role/mod-caf61d640fbd4ba7-SageMakerExecutionRole-1U3FI8J98QOSN",
base_automl_job_config=BASE_AUTOML_JOB_CONFIG,
local_automl_job_config=LOCAL_AUTOML_JOB_CONFIG,
security_config={"EnableInterContainerTrafficEncryption": False, "VpcConfig": {}},
)
AUTOML_LOCAL_RUN_CONFIG.display()
```
## Candidate Pipelines
The `AutoMLLocalRunner` keeps track of selected candidates and automates many of the steps needed to execute feature engineering and tuning steps.
```
from sagemaker_automl import AutoMLInteractiveRunner, AutoMLLocalCandidate
automl_interactive_runner = AutoMLInteractiveRunner(AUTOML_LOCAL_RUN_CONFIG)
```
### Generated Candidates
The SageMaker Autopilot Job has analyzed the dataset and has generated **3** machine learning
pipeline(s) that use **1** algorithm(s). Each pipeline contains a set of feature transformers and an
algorithm.
<div class="alert alert-info"> 💡 <strong> Available Knobs</strong>
1. The resource configuration: instance type & count
1. Select candidate pipeline definitions by cells
1. The linked data transformation script can be reviewed and updated. Please refer to the [README.md](./automl-dm-16-23-03-55-artifacts/generated_module/README.md) for detailed customization instructions.
</div>
**[dpp0-xgboost](automl-dm-16-23-03-55-artifacts/generated_module/candidate_data_processors/dpp0.py)**: This data transformation strategy first transforms 'text' features using [MultiColumnTfidfVectorizer](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/feature_extraction/text.py). It merges all the generated features and applies [RobustStandardScaler](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/preprocessing/data.py). The
transformed data will be used to tune a *xgboost* model. Here is the definition:
```
automl_interactive_runner.select_candidate(
{
"data_transformer": {
"name": "dpp0",
"training_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
"volume_size_in_gb": 50,
},
"transform_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
},
"transforms_label": True,
"transformed_data_format": "application/x-recordio-protobuf",
"sparse_encoding": True,
},
"algorithm": {
"name": "xgboost",
"training_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
},
},
}
)
```
**[dpp1-xgboost](automl-dm-16-23-03-55-artifacts/generated_module/candidate_data_processors/dpp1.py)**: This data transformation strategy first transforms 'text' features using [MultiColumnTfidfVectorizer](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/feature_extraction/text.py). It merges all the generated features and applies [RobustPCA](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/decomposition/robust_pca.py) followed by [RobustStandardScaler](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/preprocessing/data.py). The
transformed data will be used to tune a *xgboost* model. Here is the definition:
```
automl_interactive_runner.select_candidate(
{
"data_transformer": {
"name": "dpp1",
"training_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
"volume_size_in_gb": 50,
},
"transform_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
},
"transforms_label": True,
"transformed_data_format": "text/csv",
"sparse_encoding": False,
},
"algorithm": {
"name": "xgboost",
"training_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
},
},
}
)
```
**[dpp2-xgboost](automl-dm-16-23-03-55-artifacts/generated_module/candidate_data_processors/dpp2.py)**: This data transformation strategy first transforms 'text' features using [MultiColumnTfidfVectorizer](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/feature_extraction/text.py). It merges all the generated features and applies [RobustStandardScaler](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/preprocessing/data.py). The
transformed data will be used to tune a *xgboost* model. Here is the definition:
```
automl_interactive_runner.select_candidate(
{
"data_transformer": {
"name": "dpp2",
"training_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
"volume_size_in_gb": 50,
},
"transform_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
},
"transforms_label": True,
"transformed_data_format": "application/x-recordio-protobuf",
"sparse_encoding": True,
},
"algorithm": {
"name": "xgboost",
"training_resource_config": {
"instance_type": "ml.m5.4xlarge",
"instance_count": 1,
},
},
}
)
```
### Selected Candidates
You have selected the following candidates (please run the cell below and click on the feature transformer links for details):
```
automl_interactive_runner.display_candidates()
```
The feature engineering pipeline is built from the generated trainable data transformer Python modules, such as [dpp0.py](automl-dm-16-23-03-55-artifacts/generated_module/candidate_data_processors/dpp0.py), which have been downloaded to the local file system, and consists of two SageMaker jobs:
1. A **training** job to train the data transformers
2. A **batch transform** job to apply the trained transformation to the dataset to generate algorithm-compatible data
The transformers and their training pipeline are built using the open-source **[sagemaker-scikit-learn-container][]** and **[sagemaker-scikit-learn-extension][]**.
[sagemaker-scikit-learn-container]: https://github.com/aws/sagemaker-scikit-learn-container
[sagemaker-scikit-learn-extension]: https://github.com/aws/sagemaker-scikit-learn-extension
## Executing the Candidate Pipelines
Each candidate pipeline consists of two steps, feature transformation and algorithm training.
For efficiency, first execute the feature transformation step, which will generate a featurized dataset on S3
for each pipeline.
After each featurized dataset is prepared, execute a multi-algorithm tuning job that will run tuning jobs
in parallel for each pipeline. This tuning job will execute training jobs to find the best set of
hyper-parameters for each pipeline, as well as finding the overall best performing pipeline.
### Run Data Transformation Steps
Now you are ready to start executing all the data transformation steps. The cell below may take some time to finish,
so feel free to go grab a cup of coffee. To expedite the process you can set the number of `parallel_jobs` to be up to 10.
Please check the account limits before increasing the number of jobs to run in parallel.
```
automl_interactive_runner.fit_data_transformers(parallel_jobs=7)
```
### Multi Algorithm Hyperparameter Tuning
Now that the algorithm compatible transformed datasets are ready, you can start the multi-algorithm model tuning job
to find the best predictive model. The following algorithm training job configuration for each
algorithm is auto-generated by the AutoML Job as part of the recommendation.
<div class="alert alert-info"> 💡 <strong> Available Knobs</strong>
1. Hyperparameter ranges
2. Objective metrics
3. Recommended static algorithm hyperparameters.
Please refer to [Xgboost tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html) and [Linear learner tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner-tuning.html) for detailed explanations of the parameters.
</div>
The AutoML recommendation job has recommended the following hyperparameters, objectives and accuracy metrics for
the algorithm and problem type:
```
ALGORITHM_OBJECTIVE_METRICS = {
"xgboost": "validation:accuracy",
}
STATIC_HYPERPARAMETERS = {
"xgboost": {
"objective": "multi:softprob",
"save_model_on_termination": "true",
"num_class": 5,
},
}
```
The following tunable hyperparameters search ranges are recommended for the Multi-Algo tuning job:
```
from sagemaker.parameter import CategoricalParameter, ContinuousParameter, IntegerParameter
ALGORITHM_TUNABLE_HYPERPARAMETER_RANGES = {
"xgboost": {
"num_round": IntegerParameter(2, 1024, scaling_type="Logarithmic"),
"max_depth": IntegerParameter(2, 8, scaling_type="Logarithmic"),
"eta": ContinuousParameter(1e-3, 1.0, scaling_type="Logarithmic"),
"gamma": ContinuousParameter(1e-6, 64.0, scaling_type="Logarithmic"),
"min_child_weight": ContinuousParameter(1e-6, 32.0, scaling_type="Logarithmic"),
"subsample": ContinuousParameter(0.5, 1.0, scaling_type="Linear"),
"colsample_bytree": ContinuousParameter(0.3, 1.0, scaling_type="Linear"),
"lambda": ContinuousParameter(1e-6, 2.0, scaling_type="Logarithmic"),
"alpha": ContinuousParameter(1e-6, 2.0, scaling_type="Logarithmic"),
},
}
```
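For intuition on the `scaling_type` choices above: `Logarithmic` is used for ranges whose useful values span several orders of magnitude (like `eta` from `1e-3` to `1.0`), sampling them uniformly in log space. A plain-NumPy illustration of the difference (not the actual SageMaker sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
low, high = 1e-3, 1.0

linear = rng.uniform(low, high, 10_000)                              # "Linear" scaling
log_uniform = np.exp(rng.uniform(np.log(low), np.log(high), 10_000)) # "Logarithmic" scaling

# Linear sampling almost never lands below 1e-2; log-uniform covers each decade evenly
print((linear < 1e-2).mean(), (log_uniform < 1e-2).mean())
```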
#### Prepare Multi-Algorithm Tuner Input
To use the multi-algorithm HPO tuner, prepare some inputs and parameters. Prepare a dictionary whose keys are the names of the trained pipeline candidates and whose values are, respectively:
1. Estimators for the recommended algorithm
2. Hyperparameters search ranges
3. Objective metrics
```
multi_algo_tuning_parameters = automl_interactive_runner.prepare_multi_algo_parameters(
objective_metrics=ALGORITHM_OBJECTIVE_METRICS,
static_hyperparameters=STATIC_HYPERPARAMETERS,
hyperparameters_search_ranges=ALGORITHM_TUNABLE_HYPERPARAMETER_RANGES,
)
```
Below you prepare the input data for the multi-algo tuner:
```
multi_algo_tuning_inputs = automl_interactive_runner.prepare_multi_algo_inputs()
```
#### Create Multi-Algorithm Tuner
With the recommended Hyperparameter ranges and the transformed dataset, create a multi-algorithm model tuning job
that coordinates hyper parameter optimizations across the different possible algorithms and feature processing strategies.
<div class="alert alert-info"> 💡 <strong> Available Knobs</strong>
1. Tuner strategy: [Bayesian](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Bayesian_optimization), [Random Search](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Random_search)
2. Objective type: `Minimize`, `Maximize`, see [optimization](https://en.wikipedia.org/wiki/Mathematical_optimization)
3. Max job count: the maximum number of training jobs the HPO tuner launches to run experiments. Note the default value is **250**,
which is the default of the managed flow.
4. Parallelism: the number of jobs that will be executed in parallel. A higher value will expedite the tuning process.
Please check the account limits before increasing the number of jobs to run in parallel.
5. Please use a different tuning job name if you re-run this cell after applying customizations.
</div>
```
from sagemaker.tuner import HyperparameterTuner
base_tuning_job_name = "{}-tuning".format(AUTOML_LOCAL_RUN_CONFIG.local_automl_job_name)
tuner = HyperparameterTuner.create(
base_tuning_job_name=base_tuning_job_name,
strategy="Bayesian",
objective_type="Maximize",
max_parallel_jobs=7,
max_jobs=250,
**multi_algo_tuning_parameters,
)
```
#### Run Multi-Algorithm Tuning
Now you are ready to start running the **Multi-Algo Tuning** job. After the job is finished, store the tuning job name, which you will use to select models in the next section.
The tuning process will take some time, please track the progress in the Amazon SageMaker Hyperparameter tuning jobs console.
```
from IPython.display import display, Markdown
# Run tuning
tuner.fit(inputs=multi_algo_tuning_inputs, include_cls_metadata=None)
tuning_job_name = tuner.latest_tuning_job.name
display(
Markdown(
f"Tuning Job {tuning_job_name} started, please track the progress from [here](https://{AUTOML_LOCAL_RUN_CONFIG.region}.console.aws.amazon.com/sagemaker/home?region={AUTOML_LOCAL_RUN_CONFIG.region}#/hyper-tuning-jobs/{tuning_job_name})"
)
)
# Wait for tuning job to finish
tuner.wait()
```
## Model Selection and Deployment
This section guides you through the model selection process. Afterward, you construct an inference pipeline
on Amazon SageMaker to host the best candidate.
Because you executed the feature transformation and algorithm training in two separate steps, you now need to manually
link each trained model with the feature transformer that it is associated with. When running a regular Amazon
SageMaker Autopilot job, this will automatically be done for you.
### Tuning Job Result Overview
The performance of each candidate pipeline can be viewed as a Pandas dataframe. For more interactive usage, please
refer to [model tuning monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-monitor.html).
```
from pprint import pprint
from sagemaker.analytics import HyperparameterTuningJobAnalytics
SAGEMAKER_SESSION = AUTOML_LOCAL_RUN_CONFIG.sagemaker_session
SAGEMAKER_ROLE = AUTOML_LOCAL_RUN_CONFIG.role
tuner_analytics = HyperparameterTuningJobAnalytics(tuner.latest_tuning_job.name, sagemaker_session=SAGEMAKER_SESSION)
df_tuning_job_analytics = tuner_analytics.dataframe()
# Sort the tuning job analytics by the final metrics value
df_tuning_job_analytics.sort_values(
by=["FinalObjectiveValue"], inplace=True, ascending=False if tuner.objective_type == "Maximize" else True
)
# Show detailed analytics for the top 20 models
df_tuning_job_analytics.head(20)
```
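Note that the sort direction above flips with the objective type: descending for `Maximize`, ascending for `Minimize`. A tiny illustration with a hypothetical analytics frame (only the column name `FinalObjectiveValue` matches the real output):

```python
import pandas as pd

df = pd.DataFrame({
    "TrainingJobName": ["job-a", "job-b", "job-c"],
    "FinalObjectiveValue": [0.81, 0.93, 0.77],
})
objective_type = "Maximize"  # would come from tuner.objective_type

# Best job first: descending when maximizing, ascending when minimizing
best_first = df.sort_values(
    by=["FinalObjectiveValue"], ascending=(objective_type != "Maximize")
)
print(best_first.iloc[0]["TrainingJobName"])  # job-b
```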
The best training job can be selected as below:
<div class="alert alert-info"> 💡 <strong>Tips: </strong>
You can select an alternative job by taking a value from the `TrainingJobName` column above and assigning it to `best_training_job` below
</div>
```
attached_tuner = HyperparameterTuner.attach(tuner.latest_tuning_job.name, sagemaker_session=SAGEMAKER_SESSION)
best_training_job = attached_tuner.best_training_job()
print("Best Multi Algorithm HPO training job name is {}".format(best_training_job))
```
### Linking Best Training Job with Feature Pipelines
Finally, deploy the best training job to Amazon SageMaker along with its companion feature engineering models.
At the end of the section, you get an endpoint that's ready to serve online inference or start batch transform jobs!
Deploy a [PipelineModel](https://sagemaker.readthedocs.io/en/stable/pipeline.html) that has multiple containers of the following:
1. Data Transformation Container: a container built from the model we selected and trained during the data transformer sections
2. Algorithm Container: a container built from the trained model we selected above from the best HPO training job.
3. Inverse Label Transformer Container: a container that converts numerical intermediate prediction value back to non-numerical label value.
Get both the best data transformation model and the algorithm model from the best training job, and create a pipeline model:
```
from sagemaker.estimator import Estimator
from sagemaker import PipelineModel
from sagemaker_automl import select_inference_output
# Get a data transformation model from chosen candidate
best_candidate = automl_interactive_runner.choose_candidate(df_tuning_job_analytics, best_training_job)
best_data_transformer_model = best_candidate.get_data_transformer_model(
role=SAGEMAKER_ROLE, sagemaker_session=SAGEMAKER_SESSION
)
# Our first data transformation container will always return recordio-protobuf format
best_data_transformer_model.env["SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT"] = "application/x-recordio-protobuf"
# Add environment variable for sparse encoding
if best_candidate.data_transformer_step.sparse_encoding:
best_data_transformer_model.env["AUTOML_SPARSE_ENCODE_RECORDIO_PROTOBUF"] = "1"
# Get an algo model from the candidate's chosen training job
algo_estimator = Estimator.attach(best_training_job)
best_algo_model = algo_estimator.create_model(**best_candidate.algo_step.get_inference_container_config())
# The final pipeline model is composed of the data transformation models, the algo model, and an
# inverse label transform model if we need to map the intermediates back to non-numerical values
model_containers = [best_data_transformer_model, best_algo_model]
if best_candidate.transforms_label:
model_containers.append(
best_candidate.get_data_transformer_model(
transform_mode="inverse-label-transform", role=SAGEMAKER_ROLE, sagemaker_session=SAGEMAKER_SESSION
)
)
# This model can emit response ['predicted_label', 'probability', 'labels', 'probabilities']. To enable the model to emit one or more
# of the response content, pass the keys to `output_key` keyword argument in the select_inference_output method.
model_containers = select_inference_output(
"MulticlassClassification", model_containers, output_keys=["predicted_label"]
)
pipeline_model = PipelineModel(
name="AutoML-{}".format(AUTOML_LOCAL_RUN_CONFIG.local_automl_job_name),
role=SAGEMAKER_ROLE,
models=model_containers,
vpc_config=AUTOML_LOCAL_RUN_CONFIG.vpc_config,
)
```
### Deploying Best Pipeline
<div class="alert alert-info"> 💡 <strong> Available Knobs</strong>
1. You can customize the initial instance count and instance type used to deploy this model.
2. Endpoint name can be changed to avoid conflict with existing endpoints.
</div>
Finally, deploy the model to SageMaker to make it functional.
```
pipeline_model.deploy(
initial_instance_count=1, instance_type="ml.m5.2xlarge", endpoint_name=pipeline_model.name, wait=True
)
```
Congratulations! Now you can visit the SageMaker
[endpoint console page](https://us-east-1.console.aws.amazon.com/sagemaker/home?region=us-east-1#/endpoints) to find the deployed endpoint (it will take a few minutes to be in service).
<div class="alert alert-warning">
<strong>To rerun this notebook, delete or change the name of your endpoint!</strong> <br>
If you rerun this notebook, you'll run into an error on the last step because the endpoint already exists. You can either delete the endpoint from the endpoint console page or you can change the <code>endpoint_name</code> in the previous code block.
</div>
```
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import seaborn as sns
connectivity_min, connectivity_max = 0,65
# random_input_span = (1.2,2.8)
random_input_span = (9.5,13.5)
total_time = 100
alpha = 20
alpha_folder = 'alpha_' + str(alpha)
current_models = ['IF','Rotational']
neuron_model = current_models[1]
model_folder_name = neuron_model+'_ensembles'
version = '_cluster_computed'
```
# Capture related ensembles
```
def list_folders_in_path(path):
return [ name for name in os.listdir( path ) if os.path.isdir( os.path.join(path, name) ) ]
num_neurons = 10000
target_networks_name = 'N{}_T{}_I{}_{}'.format(num_neurons,total_time,random_input_span[0],random_input_span[1]) + version
target_path = os.path.join(model_folder_name,target_networks_name)
all_g_folders = list_folders_in_path(target_path)
desired_g_folders = all_g_folders
# delay_folder_name = 'd_{}'.format(delay)
sigma_glossary_dict = {}
amin_saman_param_glossary_dict = {}
field_period_glossary_dict = {}
field_max_intensity_mod_glossary_dict = {}
for g_folder in desired_g_folders:
available_d_folders = list_folders_in_path(os.path.join(target_path,g_folder))
g = float( g_folder.split('_')[1] ) #folder names are g_# d_#
sigma_glossary_dict[g] = {}
amin_saman_param_glossary_dict[g] = {}
field_period_glossary_dict[g] = {}
field_max_intensity_mod_glossary_dict[g] = {}
for d_folder in available_d_folders:
delay = float( d_folder.split('_')[1] ) #folder names are d_#
g_d_alpha_path = os.path.join(target_path, g_folder, d_folder, alpha_folder)
try:
g_ensembles_list = list_folders_in_path(g_d_alpha_path)
sigma_glossary_dict[g].update( {delay:[]} )
amin_saman_param_glossary_dict[g].update( {delay:[]} )
field_period_glossary_dict[g].update( {delay:[]} )
field_max_intensity_mod_glossary_dict[g].update( {delay:[]} )
except: #if the given connectivity and delay has not been measured even once
continue
for ensemble_num in g_ensembles_list:
ensemble_path = os.path.join(g_d_alpha_path, ensemble_num)
with open( os.path.join(ensemble_path,'sigma.txt') ) as file:
sigma = float( file.readline() )
sigma_glossary_dict[g][delay].append( sigma )
with open( os.path.join(ensemble_path,'field_properties.txt') ) as file:
info_line = file.readline()
field_period = float( info_line.split(',')[0] )
max_intensity_mod = float( info_line.split(',')[1] )
field_period_glossary_dict[g][delay].append( field_period )
field_max_intensity_mod_glossary_dict[g][delay].append( max_intensity_mod )
if neuron_model == 'Rotational': #if not does not exist
with open( os.path.join(ensemble_path,'amin_saman_param.txt') ) as file:
amin_saman_param = float( file.readline() )
amin_saman_param_glossary_dict[g][delay].append( amin_saman_param )
sigma_glossary_dict[g][delay] = np.mean(sigma_glossary_dict[g][delay])
field_period_glossary_dict[g][delay] = np.mean(np.abs( field_period_glossary_dict[g][delay] ) )
field_max_intensity_mod_glossary_dict[g][delay] = np.mean(field_max_intensity_mod_glossary_dict[g][delay])
if neuron_model == 'Rotational':amin_saman_param_glossary_dict[g][delay] = np.mean(amin_saman_param_glossary_dict[g][delay])
def dict_to_dataframe(input_dict):
table = pd.DataFrame.from_dict(input_dict)
table.index.name = 'delay'
table.columns.name = 'connectivity'
table = table.sort_index(axis=1)
return table
```
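`dict_to_dataframe` turns the nested `{connectivity: {delay: value}}` dictionaries into a delay-by-connectivity table with the columns sorted; a toy check with made-up sigma values:

```python
import pandas as pd

def dict_to_dataframe(input_dict):
    table = pd.DataFrame.from_dict(input_dict)  # outer keys become columns
    table.index.name = 'delay'
    table.columns.name = 'connectivity'
    return table.sort_index(axis=1)

# hypothetical averaged sigmas: {connectivity g: {delay d: mean sigma}}
toy = {0.6: {1.0: 0.9, 2.0: 0.4}, 0.2: {1.0: 0.1, 2.0: 0.2}}
table = dict_to_dataframe(toy)
print(table.columns.tolist())  # [0.2, 0.6]
```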
# Sigma dataframe
```
sigma_table = dict_to_dataframe(sigma_glossary_dict)
sigma_table
ax_sigma = sns.heatmap(sigma_table, annot=False, vmax = 1)
ax_sigma.set_title('Sigma as an Order parameter')
ax_sigma.invert_yaxis()
fig = ax_sigma.get_figure()
fig.savefig(os.path.join(target_path, 'sigma_phase_space.png'), dpi = 1000)
```
# Amin Saman Parameter
```
if neuron_model == 'Rotational':
amin_saman_param_table = dict_to_dataframe(amin_saman_param_glossary_dict)
ax_field_period = sns.heatmap(amin_saman_param_table)
ax_field_period.set_title('AminSaman as an Order parameter ')
fig = ax_field_period.get_figure()
fig.savefig(os.path.join(target_path, 'amin_saman_phase_space.png'))
```
# Field period dataframe
```
field_period_table = dict_to_dataframe(field_period_glossary_dict)
field_period_table
ax_field_period = sns.heatmap(np.log(field_period_table.abs()), annot=False, vmax = 2, vmin = -2)
ax_field_period.set_title('Logarithm of field period time')
ax_field_period.invert_yaxis()
fig = ax_field_period.get_figure()
fig.savefig(os.path.join(target_path, 'field_period_phase_space.png'))
max_intensity_table = dict_to_dataframe(field_max_intensity_mod_glossary_dict)
max_intensity_table
%matplotlib notebook
from mpl_toolkits.mplot3d import Axes3D
d_arr = max_intensity_table.index
g_arr = max_intensity_table.columns
bars_pos = np.array([np.tile(g_arr, len(d_arr)), np.repeat(d_arr, len(g_arr)), [0]*(len(d_arr)*len(g_arr))])
dd_arr = d_arr[1] - d_arr[0]
dg_arr = g_arr[1] - g_arr[0]
dmax_intensity = max_intensity_table.to_numpy().flatten()
cmap = plt.cm.get_cmap('magma') # Get desired colormap - you can change this!
period_arr = field_period_table.to_numpy().flatten()
max_height = np.max(period_arr) # get range of values so we can normalize
min_height = np.min(period_arr)
# scale each log-period to [0,1] using that range, and get its rgba value
rgba = [cmap((np.log(k) - np.log(min_height)) / (np.log(max_height) - np.log(min_height))) for k in period_arr]
fig = plt.figure() #create a canvas, tell matplotlib it's 3d
ax = fig.add_subplot(111, projection='3d')
ax.bar3d(bars_pos[0], bars_pos[1], bars_pos[2], dg_arr, dd_arr, dmax_intensity, color=rgba)
```
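The `bars_pos` construction above relies on `np.tile`/`np.repeat` enumerating every (g, d) grid position exactly once; a small check of that layout with toy axes:

```python
import numpy as np

g_arr = np.array([0.1, 0.2, 0.3])  # hypothetical connectivity values
d_arr = np.array([1.0, 2.0])       # hypothetical delays

x = np.tile(g_arr, len(d_arr))     # the g axis repeats as a whole block per delay
y = np.repeat(d_arr, len(g_arr))   # each delay repeats once per g
pairs = list(zip(x, y))
print(len(pairs))  # 6
```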
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-Data" data-toc-modified-id="Load-Data-1"><span class="toc-item-num">1 </span>Load Data</a></span><ul class="toc-item"><li><span><a href="#Exploration-Data" data-toc-modified-id="Exploration-Data-1.1"><span class="toc-item-num">1.1 </span>Exploration Data</a></span></li><li><span><a href="#Modeling-Data" data-toc-modified-id="Modeling-Data-1.2"><span class="toc-item-num">1.2 </span>Modeling Data</a></span></li></ul></li><li><span><a href="#Data-Exploration" data-toc-modified-id="Data-Exploration-2"><span class="toc-item-num">2 </span>Data Exploration</a></span><ul class="toc-item"><li><span><a href="#Full-Data-Visualization" data-toc-modified-id="Full-Data-Visualization-2.1"><span class="toc-item-num">2.1 </span>Full Data Visualization</a></span></li><li><span><a href="#Modeling-Data-Preparation" data-toc-modified-id="Modeling-Data-Preparation-2.2"><span class="toc-item-num">2.2 </span>Modeling Data Preparation</a></span></li></ul></li><li><span><a href="#Standard-Models" data-toc-modified-id="Standard-Models-3"><span class="toc-item-num">3 </span>Standard Models</a></span><ul class="toc-item"><li><span><a href="#Two-Model" data-toc-modified-id="Two-Model-3.1"><span class="toc-item-num">3.1 </span>Two Model</a></span></li><li><span><a href="#Interaction-Term" data-toc-modified-id="Interaction-Term-3.2"><span class="toc-item-num">3.2 </span>Interaction Term</a></span></li><li><span><a href="#Class-Transformations" data-toc-modified-id="Class-Transformations-3.3"><span class="toc-item-num">3.3 </span>Class Transformations</a></span><ul class="toc-item"><li><span><a href="#Binary-Transformation" data-toc-modified-id="Binary-Transformation-3.3.1"><span class="toc-item-num">3.3.1 </span>Binary Transformation</a></span></li><li><span><a href="#Quaternary-Transformation" data-toc-modified-id="Quaternary-Transformation-3.3.2"><span class="toc-item-num">3.3.2 </span>Quaternary 
Transformation</a></span></li></ul></li><li><span><a href="#Reflective-Uplift" data-toc-modified-id="Reflective-Uplift-3.4"><span class="toc-item-num">3.4 </span>Reflective Uplift</a></span></li><li><span><a href="#Pessimistic-Uplift" data-toc-modified-id="Pessimistic-Uplift-3.5"><span class="toc-item-num">3.5 </span>Pessimistic Uplift</a></span></li></ul></li><li><span><a href="#Evaluation" data-toc-modified-id="Evaluation-4"><span class="toc-item-num">4 </span>Evaluation</a></span><ul class="toc-item"><li><span><a href="#Iterations" data-toc-modified-id="Iterations-4.1"><span class="toc-item-num">4.1 </span>Iterations</a></span></li><li><span><a href="#Visual" data-toc-modified-id="Visual-4.2"><span class="toc-item-num">4.2 </span>Visual</a></span></li><li><span><a href="#Iterated-Evaluation-and-Variance" data-toc-modified-id="Iterated-Evaluation-and-Variance-4.3"><span class="toc-item-num">4.3 </span>Iterated Evaluation and Variance</a></span></li></ul></li></ul></div>
**Mayo PBC Dataset**
A dataset on medical trials to combat primary biliary cholangitis (PBC, formerly known as primary biliary cirrhosis) of the liver from the Mayo Clinic.
If using this notebook in [Google Colab](https://colab.research.google.com/github/andrewtavis/causeinfer/blob/main/examples/medical_mayo_pbc.ipynb), you can activate GPUs by following `Edit > Notebook settings > Hardware accelerator` and selecting `GPU`.
```
# pip install causeinfer -U
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from causeinfer import utils
from causeinfer.data import mayo_pbc
from causeinfer.standard_algorithms.two_model import TwoModel
from causeinfer.standard_algorithms.interaction_term import InteractionTerm
from causeinfer.standard_algorithms.binary_transformation import BinaryTransformation
from causeinfer.standard_algorithms.quaternary_transformation import (
QuaternaryTransformation,
)
from causeinfer.standard_algorithms.reflective import ReflectiveUplift
from causeinfer.standard_algorithms.pessimistic import PessimisticUplift
from causeinfer.evaluation import qini_score, auuc_score
from causeinfer.evaluation import plot_cum_effect, plot_cum_gain, plot_qini
from causeinfer.evaluation import plot_batch_responses, signal_to_noise
from causeinfer.evaluation import iterate_model, eval_table
pd.set_option("display.max_rows", 16)
pd.set_option("display.max_columns", None)
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:99% !important; }</style>"))
```
# Load Data
```
mayo_pbc.download_mayo_pbc()
```
## Exploration Data
```
# The full mostly unformatted dataset is loaded
data_raw = mayo_pbc.load_mayo_pbc(
file_path="datasets/mayo_pbc.text", format_covariates=False, normalize=False
)
df_full = pd.DataFrame(data_raw["dataset_full"], columns=data_raw["dataset_full_names"])
display(df_full.head())
df_full.shape
```
## Modeling Data
```
# The formatted dataset is loaded
data_mayo_pbc = mayo_pbc.load_mayo_pbc(
file_path="datasets/mayo_pbc.text", format_covariates=True, normalize=True
)
df = pd.DataFrame(
data_mayo_pbc["dataset_full"], columns=data_mayo_pbc["dataset_full_names"]
)
display(df.head())
df.shape
# Covariates, treatments and responses are loaded separately
X = data_mayo_pbc["features"]
# Response: 0 = the patient is alive, 1 = liver transplant, 2 = deceased
y = data_mayo_pbc["response"]
w = data_mayo_pbc["treatment"]
```
# Data Exploration
```
sns.set(style="whitegrid")
```
## Full Data Visualization
```
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(20, 5))
fontsize = 20
utils.plot_unit_distributions(
df=df_full, variable="histologic_stage", treatment=None, bins=None, axis=ax1,
),
ax1.set_xlabel("Histologic Stage", fontsize=fontsize)
ax1.set_ylabel("Counts", fontsize=fontsize)
ax1.axes.set_title("Breakdown of Histologic Stage", fontsize=fontsize * 1.5)
ax1.tick_params(labelsize=fontsize / 1.5)
ax1.set_xticklabels(ax1.get_xticklabels())
utils.plot_unit_distributions(
df=df_full, variable="histologic_stage", treatment="treatment", bins=None, axis=ax2,
)
ax2.set_xlabel("Histologic Stage", fontsize=fontsize)
ax2.set_ylabel("Counts", fontsize=fontsize)
ax2.axes.set_title(
"Breakdown of Histologic Stage and Treatment", fontsize=fontsize * 1.5
)
ax2.tick_params(labelsize=fontsize / 1.5)
ax2.set_xticklabels(ax2.get_xticklabels())
# plt.savefig('outputs_images/mayo_breakdown_histologic_stage.png', dpi=150)
plt.show()
# 0=male, 1=female
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(20, 5))
fontsize = 20
utils.plot_unit_distributions(
df=df_full, variable="sex", treatment=None, bins=None, axis=ax1,
),
ax1.set_xlabel("Sex", fontsize=fontsize)
ax1.set_ylabel("Counts", fontsize=fontsize)
ax1.axes.set_title("Breakdown of Sex", fontsize=fontsize * 1.5)
ax1.tick_params(labelsize=fontsize / 1.5)
ax1.set_xticklabels(ax1.get_xticklabels())
utils.plot_unit_distributions(
df=df_full, variable="sex", treatment="treatment", bins=None, axis=ax2,
)
ax2.set_xlabel("Sex", fontsize=fontsize)
ax2.set_ylabel("Counts", fontsize=fontsize)
ax2.axes.set_title("Breakdown of Sex and Treatment", fontsize=fontsize * 1.5)
ax2.tick_params(labelsize=fontsize / 1.5)
ax2.set_xticklabels(ax2.get_xticklabels())
# plt.savefig('outputs_images/mayo_breakdown_sex.png', dpi=150)
plt.show()
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(20, 5))
fontsize = 20
utils.plot_unit_distributions(
df=df_full, variable="status", treatment=None, bins=None, axis=ax1,
),
ax1.set_xlabel("Status", fontsize=fontsize)
ax1.set_ylabel("Counts", fontsize=fontsize)
ax1.axes.set_title("Breakdown of Status", fontsize=fontsize * 1.5)
ax1.tick_params(labelsize=fontsize / 1.5)
ax1.set_xticklabels(ax1.get_xticklabels())
utils.plot_unit_distributions(
df=df_full, variable="status", treatment="treatment", bins=None, axis=ax2,
)
ax2.set_xlabel("Status", fontsize=fontsize)
ax2.set_ylabel("Counts", fontsize=fontsize)
ax2.axes.set_title("Breakdown of Status and Treatment", fontsize=fontsize * 1.5)
ax2.tick_params(labelsize=fontsize / 1.5)
ax2.set_xticklabels(ax2.get_xticklabels())
# plt.savefig('outputs_images/mayo_breakdown_status.png', dpi=150)
plt.show()
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(20, 5))
fontsize = 20
utils.plot_unit_distributions(
df=df_full, variable="age", treatment=None, bins=25, axis=ax1,
),
ax1.set_xlabel("Age", fontsize=fontsize)
ax1.set_ylabel("Counts", fontsize=fontsize)
ax1.axes.set_title("Breakdown of Age", fontsize=fontsize * 1.5)
ax1.tick_params(labelsize=fontsize / 1.5)
ax1.set_xticklabels(ax1.get_xticklabels(), rotation=30)
utils.plot_unit_distributions(
df=df_full, variable="age", treatment="treatment", bins=25, axis=ax2,
)
ax2.set_xlabel("Age", fontsize=fontsize)
ax2.set_ylabel("Counts", fontsize=fontsize)
ax2.axes.set_title("Breakdown of Age and Treatment", fontsize=fontsize * 1.5)
ax2.tick_params(labelsize=fontsize / 1.5)
ax2.set_xticklabels(ax2.get_xticklabels(), rotation=30)
# plt.savefig('outputs_images/mayo_breakdown_age.png', dpi=150)
plt.show()
pd.crosstab(df["treatment"], df["status"], margins=True, normalize=True)
df["alive"] = [1 if i == 0 else 0 for i in df["status"]]
df["transplant"] = [1 if i == 1 else 0 for i in df["status"]]
df["deceased"] = [1 if i == 2 else 0 for i in df["status"]]
df.pivot_table(
values=["alive", "transplant", "deceased"],
index="treatment",
aggfunc=[np.mean],
margins=True,
)
```
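The pivot table above reports the share of each outcome per treatment arm, with `margins=True` adding an overall `All` row; a toy version of the same pattern on hypothetical data:

```python
import pandas as pd

toy = pd.DataFrame({
    "treatment": [0, 0, 1, 1],
    "alive":     [1, 0, 1, 1],
})
tab = toy.pivot_table(values="alive", index="treatment", aggfunc="mean", margins=True)
print(tab.loc["All", "alive"])  # 0.75 — overall share of alive units
```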
## Modeling Data Preparation
```
# Counts for response
alive_indexes = [i for i, e in enumerate(y) if e == 0]
transplant_indexes = [i for i, e in enumerate(y) if e == 1]
deceased_indexes = [i for i, e in enumerate(y) if e == 2]
transplant_deceased_indexes = transplant_indexes + deceased_indexes
print(len(alive_indexes))
print(len(transplant_indexes))
print(len(deceased_indexes))
print(len(transplant_deceased_indexes))
y = np.array([1 if i in transplant_deceased_indexes else 0 for i, e in enumerate(y)])
# Counts for treatment
control_indexes = [i for i, e in enumerate(w) if e == 0]
treatment_indexes = [i for i, e in enumerate(w) if e == 1]
print(len(control_indexes))
print(len(treatment_indexes))
X_control = X[control_indexes]
y_control = y[control_indexes]
w_control = w[control_indexes]
X_treatment = X[treatment_indexes]
y_treatment = y[treatment_indexes]
w_treatment = w[treatment_indexes]
# Over-sampling of control
X_os, y_os, w_os = utils.over_sample(
X_1=X_control,
y_1=y_control,
w_1=w_control,
sample_2_size=len(X_treatment),
shuffle=True,
)
X_split = np.append(X_os, X_treatment, axis=0)
y_split = np.append(y_os, y_treatment, axis=0)
w_split = np.append(w_os, w_treatment, axis=0)
X_split.shape, y_split.shape, w_split.shape # Should all be equal in the first dimension
X_train, X_test, y_train, y_test, w_train, w_test = utils.train_test_split(
X_split,
y_split,
w_split,
percent_train=0.7,
random_state=42,
maintain_proportions=True,
)
X_train.shape, X_test.shape, y_train.shape, y_test.shape, w_train.shape, w_test.shape
print(np.array(np.unique(y_train, return_counts=True)).T)
print(np.array(np.unique(y_test, return_counts=True)).T)
print(np.array(np.unique(w_train, return_counts=True)).T)
print(np.array(np.unique(w_test, return_counts=True)).T)
sn_ratio = signal_to_noise(y=y_split, w=w_split)
sn_ratio
```
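The multiclass response above is collapsed to a binary unfavorable-outcome label (transplant or deceased become 1); the same step in plain numpy:

```python
import numpy as np

status = np.array([0, 1, 2, 0, 2])  # 0 = alive, 1 = transplant, 2 = deceased
y_bin = (status > 0).astype(int)    # unfavorable outcome: transplant or deceased
print(y_bin.tolist())  # [0, 1, 1, 0, 1]
```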
The signal-to-noise ratio suggests, at a base level, that causal inference may be of little use with this dataset.
# Standard Models
The following cells present single iteration modeling, with analysis being done over multiple iterations.
## Two Model
```
tm = TwoModel(
treatment_model=RandomForestClassifier(), control_model=RandomForestClassifier()
)
tm.fit(X=X_train, y=y_train, w=w_train)
tm_probas = tm.predict_proba(X=X_test)
tm_probas[:5]
```
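The two-model approach (often called a T-learner) fits separate classifiers on treated and control units and scores uplift as the difference of their predicted probabilities. A self-contained sketch on synthetic data — this illustrates the idea, not causeinfer's internals:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
w = rng.integers(0, 2, size=400)  # hypothetical treatment assignment
# outcome with a positive treatment effect baked in
y = ((X[:, 0] + 0.8 * w + rng.normal(scale=0.5, size=400)) > 0).astype(int)

model_t = RandomForestClassifier(random_state=0).fit(X[w == 1], y[w == 1])
model_c = RandomForestClassifier(random_state=0).fit(X[w == 0], y[w == 0])
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]
print(uplift.shape)  # (400,)
```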
## Interaction Term
```
it = InteractionTerm(model=RandomForestClassifier())
it.fit(X=X_train, y=y_train, w=w_train)
it_probas = it.predict_proba(X=X_test)
it_probas[:5]
```
## Class Transformations
### Binary Transformation
```
bt = BinaryTransformation(model=RandomForestClassifier(), regularize=False)
bt.fit(X=X_train, y=y_train, w=w_train)
bt_probas = bt.predict_proba(X=X_test)
bt_probas[:5]
```
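One common formulation of the binary class transformation (not necessarily causeinfer's exact internals) relabels a unit as favorable when it is a treated responder or an untreated non-responder; under balanced assignment, uplift is then approximately 2·P(z=1|x) − 1. A sketch of the label construction:

```python
import numpy as np

y = np.array([1, 0, 1, 0])  # responses
w = np.array([1, 1, 0, 0])  # treatment indicators
z = y * w + (1 - y) * (1 - w)  # 1 for treated responders and untreated non-responders
print(z.tolist())  # [1, 0, 0, 1]
```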
### Quaternary Transformation
```
qt = QuaternaryTransformation(model=RandomForestClassifier(), regularize=False)
qt.fit(X=X_train, y=y_train, w=w_train)
qt_probas = qt.predict_proba(X=X_test)
qt_probas[:5]
```
## Reflective Uplift
```
ru = ReflectiveUplift(model=RandomForestClassifier())
ru.fit(X=X_train, y=y_train, w=w_train)
ru_probas = ru.predict_proba(X=X_test)
ru_probas[:5]
```
## Pessimistic Uplift
```
pu = PessimisticUplift(model=RandomForestClassifier())
pu.fit(X=X_train, y=y_train, w=w_train)
pu_probas = pu.predict_proba(X=X_test)
pu_probas[:5]
```
# Evaluation
## Iterations
```
# New models instantiated with a more expansive scikit-learn base model (assign individually)
tm = TwoModel(
treatment_model=RandomForestClassifier(
n_estimators=200, criterion="gini", bootstrap=True
),
control_model=RandomForestClassifier(
n_estimators=200, criterion="gini", bootstrap=True
),
)
it = InteractionTerm(
model=RandomForestClassifier(n_estimators=200, criterion="gini", bootstrap=True)
)
bt = BinaryTransformation(
model=RandomForestClassifier(n_estimators=200, criterion="gini", bootstrap=True),
regularize=False,
)
qt = QuaternaryTransformation(
model=RandomForestClassifier(n_estimators=200, criterion="gini", bootstrap=True),
regularize=False,
)
ru = ReflectiveUplift(
model=RandomForestClassifier(n_estimators=200, criterion="gini", bootstrap=True)
)
pu = PessimisticUplift(
model=RandomForestClassifier(n_estimators=200, criterion="gini", bootstrap=True)
)
n = 200
model_eval_dict = {}
model_eval_dict["Mayo PBC"] = {}
model_eval_dict
for dataset in model_eval_dict.keys():
for model in [tm, it, bt, qt, ru, pu]:
(
avg_preds,
all_preds,
avg_eval,
eval_variance,
eval_sd,
all_evals,
) = iterate_model(
model=model,
X_train=X_train,
y_train=y_train,
w_train=w_train,
X_test=X_test,
y_test=y_test,
w_test=w_test,
tau_test=None,
n=n,
pred_type="predict_proba",
eval_type="qini",
normalize_eval=False,
verbose=False, # Progress bar
)
model_eval_dict[dataset].update(
{
str(model)
.split(".")[-1]
.split(" ")[0]: {
"avg_preds": avg_preds,
"all_preds": all_preds,
"avg_eval": avg_eval,
"eval_variance": eval_variance,
"eval_sd": eval_sd,
"all_evals": all_evals,
}
}
)
# Treatment and control probability subtraction
tm_effects = [
proba[0] - proba[1]
for proba in model_eval_dict["Mayo PBC"]["TwoModel"]["avg_preds"]
]
# Treatment interaction and control interaction probability subtraction
it_effects = [
proba[0] - proba[1]
for proba in model_eval_dict["Mayo PBC"]["InteractionTerm"]["avg_preds"]
]
# Binary favorable and unfavorable class probability subtraction
bt_effects = [
proba[0] - proba[1]
for proba in model_eval_dict["Mayo PBC"]["BinaryTransformation"]["avg_preds"]
]
# Quaternary favorable and unfavorable class probability subtraction
qt_effects = [
proba[0] - proba[1]
for proba in model_eval_dict["Mayo PBC"]["QuaternaryTransformation"]["avg_preds"]
]
# Reflective favorable and unfavorable class probability subtraction
ru_effects = [
proba[0] - proba[1]
for proba in model_eval_dict["Mayo PBC"]["ReflectiveUplift"]["avg_preds"]
]
# Pessimistic favorable and unfavorable class probability subtraction
pu_effects = [
proba[0] - proba[1]
for proba in model_eval_dict["Mayo PBC"]["PessimisticUplift"]["avg_preds"]
]
```
## Visual
```
visual_eval_dict = {
"y_test": y_test,
"w_test": w_test,
"two_model": tm_effects,
"interaction_term": it_effects,
"binary_trans": bt_effects,
"quaternary_trans": qt_effects,
"reflective": ru_effects,
"pessimistic": pu_effects,
}
df_visual_eval = pd.DataFrame(visual_eval_dict, columns=visual_eval_dict.keys())
display(df_visual_eval.head())
df_visual_eval.shape
models = [col for col in visual_eval_dict.keys() if col not in ["y_test", "w_test"]]
# fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=False, figsize=(20,5))
plot_cum_effect(
df=df_visual_eval,
n=100,
models=models,
percent_of_pop=True,
outcome_col="y_test",
treatment_col="w_test",
random_seed=42,
figsize=(10, 5),
fontsize=20,
axis=None,
legend_metrics=False,
)
# plot_batch_responses(df=df_visual_eval, n=10, models=models,
# outcome_col='y_test', treatment_col='w_test', normalize=False,
# figsize=None, fontsize=15, axis=ax2)
plt.savefig("./mayo_cum_effect.png", dpi=150)
# fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=False, figsize=(20, 5))
plot_cum_gain(
df=df_visual_eval,
n=100,
models=models,
percent_of_pop=True,
outcome_col="y_test",
treatment_col="w_test",
normalize=False,
random_seed=42,
figsize=None,
fontsize=20,
axis=None,
legend_metrics=True,
)
plt.savefig("./mayo_auuc.png", dpi=150)
plot_qini(
df=df_visual_eval,
n=100,
models=models,
percent_of_pop=True,
outcome_col="y_test",
treatment_col="w_test",
normalize=False,
random_seed=42,
figsize=None,
fontsize=20,
axis=None,
legend_metrics=True,
)
# plt.savefig("./mayo_qini.png", dpi=150)
```
## Iterated Evaluation and Variance
```
# Qini
df_model_eval = eval_table(model_eval_dict, variances=True, annotate_vars=True)
df_model_eval
```
```
!pip --quiet install transformers
!pip --quiet install tokenizers
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/'
MODEL_BASE_PATH = COLAB_BASE_PATH + 'Models/Files/198-roBERTa_base/'
```
## Dependencies
```
import json, glob, warnings
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('display.max_colwidth', 120)
```
# Load data
```
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64_clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
display(k_fold.head())
```
# Model parameters
```
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
with open(MODEL_BASE_PATH + 'config.json') as json_file:
config = json.load(json_file)
config
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h11 = hidden_states[-2]
x = layers.Dropout(.1)(h11)
x_start = layers.Dense(1)(x)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Dense(1)(x)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
return model
```
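Each output head above collapses the per-token hidden states to a single logit per position (`Dense(1)` then `Flatten`) and softmaxes across positions, so the argmax marks the predicted span boundary. In plain numpy terms, with hypothetical logits:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

token_logits = np.array([0.1, 2.3, 0.4, -1.0])  # hypothetical per-token logits from the head
start_probs = softmax(token_logits)
start_idx = int(np.argmax(start_probs))
print(start_idx)  # 1
```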
# Make predictions
```
for n_fold in range(config['N_FOLDS']):
n_fold +=1
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
# Load model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
# Make predictions
model.load_weights(MODEL_BASE_PATH + model_path)
predict_eval_df(k_fold, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
```
# Model evaluation
```
#@title
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
#@title
k_fold['jaccard_mean'] = (k_fold['jaccard_fold_1'] + k_fold['jaccard_fold_2'] +
                          k_fold['jaccard_fold_3'] + k_fold['jaccard_fold_4'] +
                          k_fold['jaccard_fold_5']) / 5
display(k_fold[['text', 'selected_text', 'sentiment', 'text_tokenCnt',
'selected_text_tokenCnt', 'jaccard', 'jaccard_mean']].head(15))
```
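The `jaccard` columns measure word-level overlap between the prediction and `selected_text`; a minimal implementation of the usual competition definition (the degenerate case of two empty strings is left unhandled):

```python
def jaccard(str1, str2):
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a & b
    return len(c) / (len(a) + len(b) - len(c))

print(jaccard("i am happy today", "happy today"))  # 0.5
```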
## Post-processing evaluation
```
#@title
k_fold_post = k_fold.copy()
k_fold_post.loc[k_fold_post['sentiment'] == 'neutral', 'selected_text'] = k_fold_post["text"]
print('\nImpute neutral')
display(evaluate_model_kfold(k_fold_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_post = k_fold.copy()
k_fold_post.loc[k_fold_post['text_wordCnt'] <= 3, 'selected_text'] = k_fold_post["text"]
print('\nImpute <= 3')
display(evaluate_model_kfold(k_fold_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_post = k_fold.copy()
k_fold_post['selected_text'] = k_fold_post['selected_text'].apply(lambda x: x.replace('!!!!', '!') if len(x.split())==1 else x)
k_fold_post['selected_text'] = k_fold_post['selected_text'].apply(lambda x: x.replace('...', '.') if len(x.split())==1 else x)  # collapse '...' before '..' so the longer run is matched first
k_fold_post['selected_text'] = k_fold_post['selected_text'].apply(lambda x: x.replace('..', '.') if len(x.split())==1 else x)
print('\nImpute noise')
display(evaluate_model_kfold(k_fold_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_post = k_fold.copy()
k_fold_post.loc[k_fold_post['sentiment'] == 'neutral', 'selected_text'] = k_fold_post["text"]
k_fold_post.loc[k_fold_post['text_wordCnt'] <= 3, 'selected_text'] = k_fold_post["text"]
print('\nImpute neutral and <= 3')
display(evaluate_model_kfold(k_fold_post, config['N_FOLDS']).head(1).style.applymap(color_map))
k_fold_post = k_fold.copy()
k_fold_post.loc[k_fold_post['sentiment'] == 'neutral', 'selected_text'] = k_fold_post["text"]
k_fold_post.loc[k_fold_post['text_wordCnt'] <= 3, 'selected_text'] = k_fold_post["text"]
k_fold_post['selected_text'] = k_fold_post['selected_text'].apply(lambda x: x.replace('!!!!', '!') if len(x.split())==1 else x)
k_fold_post['selected_text'] = k_fold_post['selected_text'].apply(lambda x: x.replace('...', '.') if len(x.split())==1 else x)  # collapse '...' before '..' so the longer run is matched first
k_fold_post['selected_text'] = k_fold_post['selected_text'].apply(lambda x: x.replace('..', '.') if len(x.split())==1 else x)
print('\nImpute neutral and <= 3 and impute noise')
display(evaluate_model_kfold(k_fold_post, config['N_FOLDS']).head(1).style.applymap(color_map))
```
# Error analysis
## 10 worst predictions
```
#@title
k_fold['jaccard_mean'] = (k_fold['jaccard_fold_1'] + k_fold['jaccard_fold_2'] +
                          k_fold['jaccard_fold_3'] + k_fold['jaccard_fold_4'] +
                          k_fold['jaccard_fold_5']) / 5
display(k_fold[['text', 'selected_text', 'sentiment', 'jaccard', 'jaccard_mean',
'prediction_fold_1', 'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
```
# Sentiment
```
#@title
print('\n sentiment == neutral')
display(k_fold[k_fold['sentiment'] == 'neutral'][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
print('\n sentiment == positive')
display(k_fold[k_fold['sentiment'] == 'positive'][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
print('\n sentiment == negative')
display(k_fold[k_fold['sentiment'] == 'negative'][['text', 'selected_text',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(10))
```
# text_tokenCnt
```
#@title
print('\n text_tokenCnt <= 3')
display(k_fold[k_fold['text_tokenCnt'] <= 3][['text', 'selected_text', 'sentiment',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
print('\n text_tokenCnt >= 45')
display(k_fold[k_fold['text_tokenCnt'] >= 45][['text', 'selected_text', 'sentiment',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
```
# selected_text_tokenCnt
```
#@title
print('\n selected_text_tokenCnt <= 3')
display(k_fold[k_fold['selected_text_tokenCnt'] <= 3][['text', 'selected_text', 'sentiment',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
print('\n selected_text_tokenCnt >= 45')
display(k_fold[k_fold['selected_text_tokenCnt'] >= 45][['text', 'selected_text', 'sentiment',
'jaccard_mean', 'prediction_fold_1',
'prediction_fold_2', 'prediction_fold_3',
'prediction_fold_4', 'prediction_fold_5']].sort_values(by=['jaccard_mean']).head(5))
```
# Jaccard histogram
```
#@title
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
sns.distplot(k_fold['jaccard_mean'], ax=ax).set_title(f"Overall [{len(k_fold)}]")
sns.despine()
plt.show()
```
## By sentiment
```
#@title
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(20, 15), sharex=False)
sns.distplot(k_fold[k_fold['sentiment'] == 'neutral']['jaccard_mean'], ax=ax1).set_title(f"Neutral [{len(k_fold[k_fold['sentiment'] == 'neutral'])}]")
sns.distplot(k_fold[k_fold['sentiment'] == 'positive']['jaccard_mean'], ax=ax2).set_title(f"Positive [{len(k_fold[k_fold['sentiment'] == 'positive'])}]")
sns.distplot(k_fold[k_fold['sentiment'] == 'negative']['jaccard_mean'], ax=ax3).set_title(f"Negative [{len(k_fold[k_fold['sentiment'] == 'negative'])}]")
sns.despine()
plt.show()
```
## By text token count
```
#@title
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 10), sharex=False)
sns.distplot(k_fold[k_fold['text_tokenCnt'] <= 3]['jaccard_mean'], ax=ax1).set_title(f"text_tokenCnt <= 3 [{len(k_fold[k_fold['text_tokenCnt'] <= 3])}]")
sns.distplot(k_fold[k_fold['text_tokenCnt'] >= 45]['jaccard_mean'], ax=ax2).set_title(f"text_tokenCnt >= 45 [{len(k_fold[k_fold['text_tokenCnt'] >= 45])}]")
sns.despine()
plt.show()
```
## By selected_text token count
```
#@title
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 10), sharex=False)
sns.distplot(k_fold[k_fold['selected_text_tokenCnt'] <= 3]['jaccard_mean'], ax=ax1).set_title(f"selected_text_tokenCnt <= 3 [{len(k_fold[k_fold['selected_text_tokenCnt'] <= 3])}]")
sns.distplot(k_fold[k_fold['selected_text_tokenCnt'] >= 45]['jaccard_mean'], ax=ax2).set_title(f"selected_text_tokenCnt >= 45 [{len(k_fold[k_fold['selected_text_tokenCnt'] >= 45])}]")
sns.despine()
plt.show()
```
In looking at the flux results, I had a hunch that something was off. The flux was correct on my test flow fields, so I speculated the problem was in the flow approximations of the calcium. So I've been digging into the idiosyncrasies of the flow calculations and I have found what I believe to be an issue, hopefully THE issue.
---
Take frames 79 and 80 from the TSeries-01292015-1540_site3_0.75ISO_AL_VDNstd3 data set... (Frame 79 on left, frame 80 on right)
<img src="files/media/issues_with_flow/frames79and80.png" height="90%" width="90%">
We can clearly see from this raw data that there appears to be inward calcium movement in the top process. So I began to examine how accurately the optical flow algorithm captures this. From our discussions, the implementation that I WAS using went as follows
```python
# ...
data = loadData(dataPath) # Load the data
f0 = percentile(data, 10.0, axis=0) # Used for calculating relative fluorescence
relData = (data-f0)/f0                    # Relative fluorescence
blurData = gauss(relData, (1.2,0.75,0.75)) # Blurring stds are (time,y,x)
# ---- Then calculate the optical flow ----
# ...
prev = blurData[0]
for i,curr in enumerate(blurData[1:]):
flow = optFlow(prev, curr, pyr_scale, levels, winSz, itrs, polyN, polyS, flg)
xflow[i] = flow[:,:,0]
yflow[i] = flow[:,:,1]
prev = curr
```
So we see here that the flow is calculated using the blurred relative fluorescence. Below I show frames 79 and 80 from the preblurred relative fluorescence, `relData`, and the post-blurred relative fluorescence, `blurData`.
<img src="files/media/issues_with_flow/preblur_rc_frames79and80.png" height="92%" width="92%">
<img src="files/media/issues_with_flow/rcframes79and80.png" height="92%" width="92%">
I see a poor representation of the movement of calcium in the top process as compared with the original data. This can be seen from the flow approximations between these two frames. If you examine the flow more closely (I went back and forth between overlaid flows and images), you can see that the apparent inward flow in the top process is not aligned on top of the process, and it is not at what visually appears to be the right angle. Below we compare the flow determined from both the preblurred relative calcium and the blurred relative calcium.
<img src="files/media/issues_with_flow/preblurred_rc_based_raw_flow.png" height="96%" width="96%">
<img src="files/media/issues_with_flow/rc_based_raw_flow.png" height="96%" width="96%">
So I decided to try the calculations again on the regular blurred fluorescence images, i.e. I simply removed the relative fluorescence calculation. The two blurred raw frames 79 and 80 are shown below...
<img src="files/media/issues_with_flow/rawframes79and80.png" height="92%" width="92%">
If I hadn't saved the image all goofy, it would be pretty clear that this better represents the calcium movement in the top process. (I am emailing you all of the figures so you can examine and compare them at your leisure.) So I was hoping that the flow calculations would be much better; however, they were not. Below is the flow approximation.
<img src="files/media/issues_with_flow/raw_flow.png" height="96%" width="96%">
After I carefully examined this, it seemed to me that all of the alleged "correct" flows are in here; however, there is also much more garbage. This is because the flow calculations look for displacement and ignore magnitude. So the seemingly strong flows around the periphery are allegedly the dynamics of the faint calcium fluorescence. Then I remembered my previous idea of incorporating the original data set into the flux, and thought I could incorporate the original dataset into the flow calculations instead. So I compared the results of scaling the flow by the original blurred calcium intensities versus scaling it by the blurred relative calcium intensities. The idea here is that the "strong" flows around the periphery would be smothered by the low amplitude of the calcium underneath, while the strong calcium movements associated with high calcium intensity would be accentuated. It turns out that this was the case...
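A minimal sketch of what I mean by scaling (assuming `flow` has shape (H, W, 2) as returned by the optical flow call and `img` is the blurred frame; the normalization choice here is mine and could be done differently):

```python
import numpy as np

def scale_flow(flow, img):
    # Weight each flow vector by the (normalized) intensity underneath it, so
    # flows over faint background are smothered while flows over bright
    # calcium are accentuated.
    w = img / img.max()
    return flow * w[..., np.newaxis]  # broadcast weights over the x/y components
```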
Here is the scaled flow based on the relative fluorescence. The first is scaled by calcium intensity; the second is scaled by relative calcium intensity.
<img src="files/media/issues_with_flow/preblurred_rc_based_flow_scaled_by_calcium.png" height="96%" width="96%">
<img src="files/media/issues_with_flow/preblurred_rc_based_flow_scaled_by_relative_calcium.png" height="96%" width="96%">
And now the scaled flow based on the regular fluorescence. The first is scaled by calcium intensity; the second is scaled by relative calcium intensity.
<img src="files/media/issues_with_flow/flow_scaled_by_calcium.png" height="96%" width="96%">
<img src="files/media/issues_with_flow/flow_scaled_by_relative_calcium.png" height="96%" width="96%">
So I see that scaling via the regular calcium seems to "sharpen" the flow within the processes and capture activity in the soma, while scaling via the relative fluorescence broadens the flow around the processes and silences the flow in the soma. Given our objective of measuring "flow" in the processes, offhand I think calculating the flow from the regular data and scaling it with the regular data produces the best results. Or perhaps I need to choose a better f0 to calculate the relative fluorescence. What do you think?
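One idea for a better f0 (a sketch of my own, untested; the window size is arbitrary): instead of a single global 10th percentile, use a sliding-window percentile along time so the baseline tracks slow drift:

```python
import numpy as np

def rolling_f0(data, window=11, q=10.0):
    # data: (T, H, W) movie. Take the q-th percentile within a centered
    # window along the time axis at each frame.
    T = data.shape[0]
    f0 = np.empty(data.shape, dtype=float)
    for t in range(T):
        lo = max(0, t - window // 2)
        hi = min(T, t + window // 2 + 1)
        f0[t] = np.percentile(data[lo:hi], q, axis=0)
    return f0
```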
# Soil Moisture Active Passive (SMAP) Level 4 Data demo
In this demo we are downloading data using the Planet OS Package API, which lets us work with larger amounts of data in less time than the Raster API.
We are showing the Portugal and Spain droughts, which might have been a cause of the more than 600 wildfires that happened during summer and autumn 2017. Note that the dates in the demo have changed, so the description might not be fully accurate.
```
import time
import os
from package_api import download_data
import xarray as xr
from netCDF4 import Dataset, num2date
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import matplotlib
import datetime
import warnings
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
```
<font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
```
API_key = open('APIKEY').read().strip()
```
Here we define the area we are interested in, the time range for which we want the data, the dataset key to use, and the variable name.
```
def get_start_end(days):
date = datetime.datetime.now() - datetime.timedelta(days=days)
time_start = date.strftime('%Y-%m-%d') + 'T16:00:00'
time_end = date.strftime('%Y-%m-%d') + 'T21:00:00'
return time_start,time_end
latitude_south = 15.9; latitude_north = 69.5
longitude_west = -17.6; longitude_east = 38.6
area = 'europe'
days = 6
time_start,time_end = get_start_end(days)
dataset_key = 'nasa_smap_spl4smau'
variable = 'Analysis_Data__sm_surface_analysis'
```
This cell determines the working directory; we need it to know where we are going to save the data. No worries, we will delete the file after using it!
```
folder = os.path.realpath('.') + '/'
```
Now we define a function for making images.
```
def make_image(lon,lat,data,date,latitude_north, latitude_south,longitude_west, longitude_east,unit,**kwargs):
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'i', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(lon,lat)
lonmap,latmap = m(lons,lats)
if len(kwargs) > 0:
fig=plt.figure(figsize=(10,8))
plt.subplot(221)
m.drawcoastlines()
m.drawcountries()
c = m.pcolormesh(lonmap,latmap,data,vmin = 0.01,vmax = 0.35)
plt.title(date)
plt.subplot(222)
m.drawcoastlines()
m.drawcountries()
plt.title(kwargs['date_later'])
m.pcolormesh(lonmap,latmap,kwargs['data_later'],vmin = 0.01,vmax = 0.35)
else:
fig=plt.figure(figsize=(9,7))
m.drawcoastlines()
m.drawcountries()
c = m.pcolormesh(lonmap,latmap,data,vmin = 0.01,vmax = 0.35)
plt.title(date)
cbar = plt.colorbar(c)
cbar.set_label(unit)
plt.show()
```
Here we download data using the Package API. If you are interested in how the data is downloaded, see the file named `package_api.py` in the notebook folder.
```
try:
package_key = download_data(folder,dataset_key,API_key,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,variable,area)
except:
days = 7
time_start,time_end = get_start_end(days)
package_key = download_data(folder,dataset_key,API_key,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,variable,area)
```
Now that we have the data, we read it in using xarray:
```
filename_europe = package_key + '.nc'
data = xr.open_dataset(filename_europe)
surface_soil_moisture_data = data.Analysis_Data__sm_surface_analysis
unit = surface_soil_moisture_data.units
surface_soil_moisture = data.Analysis_Data__sm_surface_analysis.values[0,:,:]
surface_soil_moisture = np.ma.masked_where(np.isnan(surface_soil_moisture),surface_soil_moisture)
latitude = data.lat; longitude = data.lon
lat = latitude.values
lon = longitude.values
date = str(data.time.values[0])[:-10]
```
Here we make an image using the function defined above.
In this image we can see how dry the Iberian peninsula (Portugal and Spain) was during the wildfires in October. On 15th October strong winds from Hurricane Ophelia quickly spread flames along the Iberian coast. We can see that on that day the Iberian peninsula's soil moisture was comparable with that of African deserts.
```
make_image(lon,lat,surface_soil_moisture,date,latitude_north, latitude_south,longitude_west, longitude_east,unit)
```
So let's look at Portugal and Spain a little bit closer. For that we need to define the area, and we will slice the data from this area.
```
iberia_west = -10; iberia_east = 3.3
iberia_south = 35; iberia_north = 45
lon_ib = longitude.sel(lon=slice(iberia_west,iberia_east)).values
lat_ib = latitude.sel(lat=slice(iberia_north,iberia_south)).values
soil_ib = surface_soil_moisture_data.sel(lat=slice(iberia_north,iberia_south),lon=slice(iberia_west,iberia_east)).values[0,:,:]
soil_ib = np.ma.masked_where(np.isnan(soil_ib),soil_ib)
```
Let's also download some data from a later date.
```
days2 = days -1
time_start, time_end = get_start_end(days2)
try:
package_key_iberia = download_data(folder,dataset_key,API_key,iberia_west,iberia_east,iberia_south,iberia_north,time_start,time_end,variable,area)
except:
days2 = days - 2
time_start, time_end = get_start_end(days2)
package_key_iberia = download_data(folder,dataset_key,API_key,iberia_west,iberia_east,iberia_south,iberia_north,time_start,time_end,variable,area)
filename_iberia = package_key_iberia + '.nc'
data_later = xr.open_dataset(filename_iberia)
soil_data_later = data_later.Analysis_Data__sm_surface_analysis
soil_later = data_later.Analysis_Data__sm_surface_analysis.values[0,:,:]
soil_later = np.ma.masked_where(np.isnan(soil_later),soil_later)
latitude_ib = data_later.lat; longitude_ib = data_later.lon
lat_ibl = latitude_ib.values
lon_ibl = longitude_ib.values
date_later = str(data_later.time.values[0])[:-10]
```
Now we make two images of the same area, Portugal and Spain.
On the left image we can see soil moisture values on 15th October, and on the right image we can see soil moisture on 21st October.
We can see that the land got a little bit wetter within a few days. It even helped firefighters get the wildfires under control.
```
make_image(lon_ib,lat_ib,soil_ib,date,iberia_north, iberia_south,iberia_west, iberia_east, unit, data_later = soil_later,date_later = date_later)
```
Finally, let's delete files we downloaded:
```
if os.path.exists(filename_europe):
os.remove(filename_europe)
if os.path.exists(filename_iberia):
os.remove(filename_iberia)
```
# Basic Tic-Tac-Toe
### Group Members and Roles
- Group Member 1 (Role)
- Group Member 2 (Role)
- Group Member 3 (Role)
### Intro
In this activity, we'll work in groups to write a very simple version of the game [Tic-Tac-Toe](https://en.wikipedia.org/wiki/Tic-tac-toe). As you may know, Tic-Tac-Toe is a two-player game.
- The game starts with an empty 3x3 grid:
```
. . .
. . .
. . .
```
- Player 1 places an X in one of the 9 grid squares.
- Player 2 places an O in one of the remaining 8 grid squares.
- The players alternate until no grid squares remain.
- A player wins if any row, column, or diagonal contains 3 copies of their symbol; if no player wins, the game ends in a draw. Here is an example of a game in which the O player has won (on the diagonal) after the 6th turn.
```
O X .
X O .
X . O
```
In this activity, we won't worry about evaluating which player has won -- instead, we'll focus on writing code that will allow players to place their symbols and visualize the grid.
This activity will ask you to write a single, large chunk of code, which may appear somewhat disorganized. In future lectures, we'll discuss *functions*, which offer convenient ways to break large operations into simpler operations that are easier to manage.
As is often the case in programming, we'll start from the end and work our way back toward the beginning. In Parts (A)-(C), I will give you the code block, and your task is to describe in words what it does. Note: the operation of several of these code blocks may appear somewhat cryptic in isolation -- just interpret what you see.
**Note**: you are free to run the code! But you'll need to reason about the structure of the code, in addition to observing the output, in order to understand what is going on.
## § Part (A)
```
while True:
current_move = input("Player, please enter your move:")
if current_move == "I win!":
print("good job")
break
```
## Solution
\[Your explanation of the code here\]
## § Part (B)
```
n = 3
grid = [["." for i in range(n)] for j in range(n)]
```
## Solution
\[Your explanation of the code here\]
## § Part (C)
For this code to run, you'll need to run the code in Part (B) if you haven't done so already.
*Hint: By default, `print(a)` will add a newline (`\n`) after printing `a`. You can modify this behavior (to print rows) by using `print(a, end = " ")`.*
```
for i in range(n):
for j in range(n):
print(grid[i][j], end = " ")
print("\n")
```
## Solution
\[Your explanation of the code here\]
## § Part (D)
Now we are going to ask you to start synthesizing and modifying the code shown in parts **(A)--(C)** to produce a functioning Tic-Tac-Toe game.
Use your understanding of the code shown in parts **(A)--(C)** to create a block of code which will:
- Initialize the empty `grid` as in **(B)**.
- Prompt the user until the user says "I win!", as in **(A)**.
- If the user gives any other input, print the grid, as in **(C)**.
***Hint***: use `if-else` to check whether to terminate the loop or print the grid. <br>
You will not need to write much code (after you copy-paste from **(A)--(C)**).
At this stage, it is not necessary to request or use any other input from the user.
```
# your solution here
## first, initialize the empty grid using the code from (b)
## then, until the user types "I win!":
### prompt the user for input
### if the input is "I win!", exit the loop
### otherwise, print the grid using the code from (c)
```
## § Part (E)
Next, study the following code block. What does it do?
```
turn = 0
while turn < 9:
if turn % 2 == 0:
player = "X"
else:
player = "O"
current_move = input("Player " + player + ", please enter your move:")
turn += 1
```
Once you feel comfortable with the above, copy your code from **(D)** into the code block below, and begin to modify it using the ideas illustrated immediately above.
Your new code should include a `turn` variable which keeps track of whose turn it is. Additionally, your code should prompt each player ("Player X" and "Player O") on the appropriate turns.
The code block above has most of the required pieces -- you just need to slot them in.
```
# your solution here
```
## § Part (F)
Now, allow both players to make moves. If Player X enters two integers separated by a space, then place an "X" in the corresponding grid position. You may assume that both players will always input either "I win!" or two nonnegative integers no larger than `n-1`. For now, you may assume that the players will not attempt to make a move on a square that is already occupied.
***Hint***: the following block illustrates how to get two integer coordinates from the user input.
```python
current_move = "1 1"
coords = current_move.split()
```
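Keep in mind that `split()` returns strings; to use them as grid indices you would still convert them to integers (variable names here are illustrative, not part of the required solution):

```python
current_move = "1 2"
coords = current_move.split()               # ['1', '2'] -- still strings
row, col = int(coords[0]), int(coords[1])   # integer grid indices
```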
```
# your solution here
```
## § Part (G)
Now modify your code so that, if the player attempts to make a move in a space that is already occupied, then ask the player for a different move.
***Hint***: `continue` will return immediately to the beginning of the `while`-loop, and may be useful for reprompting after invalid inputs.
```
# your solution here
```
## § Part (H)
Add comments to the important parts of your code and turn it in. Your comments should make clear which parts of your code are:
- Initializing the grid.
- Updating the grid.
- Prompting the user.
- Checking for the validity of moves.
- Keeping track of turns.
```
# your solution here
```
# Example Session
Here is an example of the session output in a complete solution. In the final move, Player X types "I win!" instead of making their winning move at (2, 1).
```
Player X, please enter your move: 1 1
. . .
. X .
. . .
Player O, please enter your move: 0 0
O . .
. X .
. . .
Player X, please enter your move: 0 1
O X .
. X .
. . .
Player O, please enter your move: 2 0
O X .
. X .
O . .
Player X, please enter your move: I win!
good job
```
# A GENTLE INTRODUCTION TO TORCH.AUTOGRAD
https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py
- Author: Israel Oliveira [\[e-mail\]](mailto:'Israel%20Oliveira%20'<prof.israel@gmail.com>)
```
%load_ext watermark
%config Completer.use_jedi = False
import pandas as pd
#import matplotlib.pyplot as plt
#%matplotlib inline
#from IPython.core.pylabtools import figsize
#figsize(12, 8)
#import seaborn as sns
#sns.set_theme()
#pd.set_option("max_columns", None)
#pd.set_option("max_rows", None)
#from IPython.display import Markdown, display
#def md(arg):
# display(Markdown(arg))
#from pandas_profiling import ProfileReport
# report = ProfileReport(#DataFrame here#, minimal=True)
# report.to
#import pyarrow.parquet as pq
# df = pq.ParquetDataset(path_to_folder_with_parquets, filesystem=None).read_pandas().to_pandas()
# Run this cell before close.
%watermark -d --iversion -b -r -g -m -v
!cat /proc/cpuinfo |grep 'model name'|head -n 1 |sed -e 's/model\ name/CPU/'
!free -h |cut -d'i' -f1 |grep -v total
import torch, torchvision
model = torchvision.models.resnet18(pretrained=True)
data = torch.rand(1, 3, 64, 64)
labels = torch.rand(1, 1000)
prediction = model(data)
prediction.shape
loss = (prediction - labels).sum()
loss.backward() # backward pass
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
optim.step() #gradient descent
import torch
a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
Q = 3*a**3 - b**2
Q
external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)
print(9*a**2 == a.grad)
print(-2*b == b.grad)
x = torch.rand(5, 5)
y = torch.rand(5, 5)
z = torch.rand((5, 5), requires_grad=True)
a = x + y
print(f"Does `a` require gradients? : {a.requires_grad}")
b = x + z
print(f"Does `b` require gradients?: {b.requires_grad}")
from torch import nn, optim
model = torchvision.models.resnet18(pretrained=True)
# Freeze all the parameters in the network
for param in model.parameters():
param.requires_grad = False
model.fc
model.fc = nn.Linear(512, 10)
model.fc
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 3x3 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
```
$$
\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) +
\sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)
$$
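As a side note (not part of the original tutorial), the `16 * 6 * 6` expected by `fc1` can be verified with the usual output-size formula `floor((H + 2*padding - kernel) / stride) + 1`, applied to each convolution and pooling step:

```python
def out_size(h, kernel, stride=1, padding=0):
    # Spatial output size of a convolution or pooling layer.
    return (h + 2 * padding - kernel) // stride + 1

h = 32                                # input is 1x1x32x32
h = out_size(h, kernel=3)             # conv1 -> 30
h = out_size(h, kernel=2, stride=2)   # max pool -> 15
h = out_size(h, kernel=3)             # conv2 -> 13
h = out_size(h, kernel=2, stride=2)   # max pool -> 6
print(h)                              # 6, hence fc1 expects 16 * 6 * 6 features
```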
```
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
net.zero_grad()
out.backward(torch.randn(1, 10))
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
loss.grad_fn
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
```
<a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C3/W2/ungraded_labs/C3_W2_Lab_3_imdb_subwords.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Ungraded Lab: Subword Tokenization with the IMDB Reviews Dataset
In this lab, you will look at a pre-tokenized dataset that is using subword text encoding. This is an alternative to word-based tokenization which you have been using in the previous labs. You will see how it works and its implications on preparing your data and training your model.
Let's begin!
## Download the IMDB reviews plain text and tokenized datasets
First, you will download the [IMDB Reviews](https://www.tensorflow.org/datasets/catalog/imdb_reviews) dataset from Tensorflow Datasets. You will get two configurations:
* `plain_text` - this is the default and the one you used in Lab 1 of this week
* `subwords8k` - a pre-tokenized dataset (i.e. instead of sentences of type string, it will already give you the tokenized sequences). You will see how this looks in later sections.
```
import tensorflow_datasets as tfds
# Download the plain text default config
imdb_plaintext, info_plaintext = tfds.load("imdb_reviews", with_info=True, as_supervised=True)
# Download the subword encoded pretokenized dataset
imdb_subwords, info_subwords = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
```
## Compare the two datasets
As mentioned, the data types returned by the two datasets will be different. For the default, it will be strings as you also saw in Lab 1. Notice the description of the `text` key below and the sample sentences:
```
# Print description of features
info_plaintext.features
# Take 2 training examples and print the text feature
for example in imdb_plaintext['train'].take(2):
print(example[0].numpy())
```
For `subwords8k`, the dataset is already tokenized so the data type will be integers. Notice that the `text` features also include an `encoder` field and has a `vocab_size` of around 8k, hence the name.
```
# Print description of features
info_subwords.features
```
If you print the results, you will not see string sentences but a sequence of tokens:
```
# Take 2 training examples and print its contents
for example in imdb_subwords['train'].take(2):
print(example)
```
You can get the `encoder` object included in the download and use it to decode the sequences above. You'll see that you will arrive at the same sentences provided in the `plain_text` config:
```
# Get the encoder
tokenizer_subwords = info_subwords.features['text'].encoder
# Take 2 training examples and decode the text feature
for example in imdb_subwords['train'].take(2):
print(tokenizer_subwords.decode(example[0]))
```
*Note: The documentation for the encoder can be found [here](https://www.tensorflow.org/datasets/api_docs/python/tfds/deprecated/text/SubwordTextEncoder) but don't worry if it's marked as deprecated. As mentioned, the objective of this exercise is just to show the characteristics of subword encoding.*
## Subword Text Encoding
From previous labs, the number of tokens in the sequence is the same as the number of words in the text (i.e. word tokenization). The following cells show a review of this process.
```
# Get the train set
train_data = imdb_plaintext['train']
# Initialize sentences list
training_sentences = []
# Loop over all training examples and save to the list
for s,_ in train_data:
training_sentences.append(s.numpy().decode('utf8'))
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
vocab_size = 10000
oov_tok = '<OOV>'
# Initialize the Tokenizer class
tokenizer_plaintext = Tokenizer(num_words = 10000, oov_token=oov_tok)
# Generate the word index dictionary for the training sentences
tokenizer_plaintext.fit_on_texts(training_sentences)
# Generate the training sequences
sequences = tokenizer_plaintext.texts_to_sequences(training_sentences)
```
The cell above uses a `vocab_size` of 10000 but you'll find that it's easy to find OOV tokens when decoding using the lookup dictionary it created. See the result below:
```
# Decode the first sequence using the Tokenizer class
tokenizer_plaintext.sequences_to_texts(sequences[0:1])
```
For binary classifiers, this might not have a big impact but you may have other applications that will benefit from avoiding OOV tokens when training the model (e.g. text generation). If you want the tokenizer above to not have OOVs, then the `vocab_size` will increase to more than 88k. This can slow down training and bloat the model size. The encoder also won't be robust when used on other datasets which may contain new words, thus resulting in OOVs again.
```
# Total number of words in the word index dictionary
len(tokenizer_plaintext.word_index)
```
*Subword text encoding* gets around this problem by using parts of the word to compose whole words. This makes it more flexible when it encounters uncommon words. See what these subwords look like for this particular encoder:
```
# Print the subwords
print(tokenizer_subwords.subwords)
```
If you use it on the previous plain text sentence, you'll see that it won't have any OOVs even if it has a smaller vocab size (only 8k compared to 10k above):
```
# Encode the first plaintext sentence using the subword text encoder
tokenized_string = tokenizer_subwords.encode(training_sentences[0])
print(tokenized_string)
# Decode the sequence
original_string = tokenizer_subwords.decode(tokenized_string)
# Print the result
print (original_string)
```
Subword encoding can even perform well on words that are not commonly found in movie reviews. See first the result when using the plain text tokenizer. As expected, it will show many OOVs:
```
# Define sample sentence
sample_string = 'TensorFlow, from basics to mastery'
# Encode using the plain text tokenizer
tokenized_string = tokenizer_plaintext.texts_to_sequences([sample_string])
print ('Tokenized string is {}'.format(tokenized_string))
# Decode and print the result
original_string = tokenizer_plaintext.sequences_to_texts(tokenized_string)
print ('The original string: {}'.format(original_string))
```
Then compare to the subword text encoder:
```
# Encode using the subword text encoder
tokenized_string = tokenizer_subwords.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
# Decode and print the results
original_string = tokenizer_subwords.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
```
As you may notice, the sentence is correctly decoded. The downside is that the token sequence is much longer. Instead of only 5 tokens when using word encoding, you ended up with 11. The mapping for this sentence is shown below:
```
# Show token to subword mapping:
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer_subwords.decode([ts])))
```
## Training the model
You will now train your model using this pre-tokenized dataset. Since these are already saved as sequences, you can jump straight to making uniform sized arrays for the train and test sets. These are also saved as `tf.data.Dataset` type so you can use the [`padded_batch()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#padded_batch) method to create batches and pad the arrays into a uniform size for training.
```
BUFFER_SIZE = 10000
BATCH_SIZE = 64
# Get the train and test splits
train_data, test_data = imdb_subwords['train'], imdb_subwords['test'],
# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)
# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)
```
Next, you will build the model. You can just use the architecture from the previous lab.
```
import tensorflow as tf
# Define dimensionality of the embedding
embedding_dim = 64
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer_subwords.vocab_size, embedding_dim),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Print the model summary
model.summary()
```
Similarly, you can use the same parameters for training. In Colab, it will take around 20 seconds per epoch (without an accelerator) and you will reach around 94% training accuracy and 88% validation accuracy.
```
num_epochs = 10
# Set the training parameters
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
# Start training
history = model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
```
## Visualize the results
You can use the cell below to plot the training results. See if you can improve it by tweaking the parameters such as the size of the embedding and number of epochs.
```
import matplotlib.pyplot as plt
# Plot utility
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# Plot the accuracy and results
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
```
## Wrap Up
In this lab, you saw how subword text encoding can be a robust technique to avoid out-of-vocabulary tokens. It can decode uncommon words it hasn't seen before even with a relatively small vocab size. Consequently, it results in longer token sequences when compared to full word tokenization. Next week, you will look at other architectures that you can use when building your classifier. These will be recurrent neural networks and convolutional neural networks.
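The out-of-vocabulary robustness described above comes from greedy longest-match splitting: an unseen word is broken into known pieces, falling back to single characters when nothing longer matches. A toy illustration of that idea (the vocabulary here is hypothetical, not the IMDB subword vocabulary):

```python
# Toy greedy subword tokenizer: take the longest vocabulary entry that
# matches at the current position, falling back to a single character.
# No word is ever mapped to an OOV token, but unseen words produce
# longer token sequences than full-word tokenization would.
def subword_tokenize(word, vocab):
    pieces = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

vocab = {"token", "iz", "er", "un", "believ", "able"}
print(subword_tokenize("tokenizer", vocab))     # ['token', 'iz', 'er']
print(subword_tokenize("unbelievable", vocab))  # ['un', 'believ', 'able']
```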
# "Global Land Cover" Widget
This widget is a donut chart which shows the land cover breakdown of a particular region using the classifications found in the Global Land Cover layer in GFW.
The donut chart should display data for each of the classification types in area (ha) and relative area (%), as well as the year the data was collected (currently only 2015, but we can expand this to cover the last 20 years of global cover data)
User Variables:
1. Admin-0, -1 and -2 region
2. area or %
3. year (WIP)
Tabs: ['Land Cover']
```
import os
import ee
import json
import requests
import requests_cache
from pprint import pprint
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
ee.Initialize()
#Import Global Metadata etc
%run '0.Importable_Globals.ipynb'
# http://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-PUG-v2.5.pdf
# http://maps.elie.ucl.ac.be/CCI/viewer/download.php#usertool
"""Note that there exist much finer resolution breakdowns than shown here...
maybe we could give users the option to see a more detailed breakdown if they wish?"""
global_class_dict = {
'class_0': 'No data',
'class_10': 'Agriculture',
'class_11': 'Agriculture',
'class_12': 'Agriculture',
'class_20': 'Agriculture',
'class_30': 'Agriculture',
'class_40': 'Agriculture',
'class_50': 'Forest',
'class_51': 'Forest',
'class_52': 'Forest',
'class_60': 'Forest',
'class_61': 'Forest',
'class_62': 'Forest',
'class_70': 'Forest',
'class_71': 'Forest',
'class_72': 'Forest',
'class_80': 'Forest',
'class_81': 'Forest',
'class_82': 'Forest',
'class_90': 'Forest',
'class_100': 'Shrubland',
'class_110': 'Shrubland',
'class_120': 'Shrubland',
'class_121': 'Shrubland',
'class_122': 'Shrubland',
'class_130': 'Grassland',
'class_140': 'Sparse vegetation',
'class_150': 'Sparse vegetation',
'class_151': 'Sparse vegetation',
'class_152': 'Sparse vegetation',
'class_153': 'Sparse vegetation',
'class_160': 'Wetland',
'class_170': 'Wetland',
'class_180': 'Wetland',
'class_190': 'Settlement',
'class_200': 'Bare',
'class_201': 'Bare',
'class_202': 'Bare',
'class_210': 'Water',
'class_220': 'Permanent Snow and Ice'
}
# http://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-PUG-v2.5.pdf
# http://maps.elie.ucl.ac.be/CCI/viewer/download.php#usertool
"""These are the original land class categories"""
og_classes = {
'class_0': 'No data',
'class_10': 'Rainfed Cropland',
'class_11': 'Herbaceous Rainfed Cropland',
'class_12': 'Tree or Shrub Rainfed Cropland',
'class_20': 'Irrigated Cropland',
'class_30': 'Mosaic Cropland',
'class_40': 'Mosaic Natural Vegetation',
'class_50': 'Evergreen Broadleaf Treecover',
'class_51': 'Closed Evergreen Broadleaf Treecover (>40%)',
'class_52': 'Open Evergreen Broadleaf Treecover (<40%)',
'class_60': 'Deciduous Broadleaf Treecover (>15%)',
'class_61': 'Closed Deciduous Broadleaf Treecover (>40%)',
'class_62': 'Open Deciduous Broadleaf Treecover (<40%)',
'class_70': 'Evergreen Needleleaf Treecover (>15%)',
'class_71': 'Closed Evergreen Needleleaf Treecover (>40%)',
'class_72': 'Open Evergreen Needleleaf Treecover (<40%)',
'class_80': 'Deciduous Needleleaf Treecover',
'class_81': 'Closed Deciduous Needleleaf Treecover (>40%)',
'class_82': 'Open Deciduous Needleleaf Treecover (<40%)',
'class_90': 'Mixedleaf Treecover',
'class_100': 'Mosaic Tree and Shrub',
'class_110': 'Mosaic Herbaceous cover',
'class_120': 'Shrubland',
'class_121': 'Evergreen Shrubland',
'class_122': 'Deciduous Shrubland',
'class_130': 'Grassland',
'class_140': 'Lichens and mosses',
'class_150': 'Sparse vegetation',
'class_151': 'Sparse Treecover',
'class_152': 'Sparse Shrub',
'class_153': 'Sparse Herbaceous cover',
'class_160': 'Freshwater Flooded Treecover',
'class_170': 'Saltwater Flooded Treecover',
'class_180': 'Flooded Shrubland',
'class_190': 'Urban Areas',
'class_200': 'Bare Areas',
'class_201': 'Consolidated Bare Areas',
'class_202': 'Unconsolidated Bare Areas',
'class_210': 'Water Bodies',
'class_220': 'Permanent Snow and Ice'
}
# Make the query and return data
def classification_query(year=2015, adm0='BRA', adm1=None, adm2 = None):
if adm2:
print('Request for adm2 area')
sql = (f"SELECT * "
f"FROM global_land_cover_adm2 "
f"WHERE iso = '{adm0}' "
f"AND adm1 = {adm1} "
f"AND adm2 = {adm2} ")
elif adm1:
print('Request for adm1 area')
sql = (f"SELECT * "
f"FROM global_land_cover_adm2 "
f"WHERE iso = '{adm0}' "
f"AND adm1 = {adm1} ")
elif adm0:
print('Request for adm0 area')
sql = (f"SELECT * "
f"FROM global_land_cover_adm2 "
f"WHERE iso = '{adm0}' ")
account = 'wri-01'
urlCarto = "https://{0}.carto.com/api/v2/sql".format(account)
sql = {"q": sql}
r = requests.get(urlCarto, params=sql)
print(r.url,'\n')
# pprint(r.json())
data = r.json().get('rows')
return data
# Sum data together for each category and calculate area (ha) and area (%) for each
def buildData(data):
areas = {}
#Sum all class categories together
for d in data:
for k, v in d.items():
if 'class_' in k:
if global_class_dict[k] not in areas:
areas[global_class_dict[k]] = 0
areas[global_class_dict[k]] += v
# get total area of region
total = 0
for k,v in areas.items():
total += v
# build data up by calculating area and %
class_data = []
other = 0
for k,v in areas.items():
if v/total >= 0.001:
class_data.append({
'class': k,
'area_ha': v * 300 * 300 * 1e-4,
'area_%': 100 * v / total
})
# exclude categories with zero area, and collect small (<0.1%) areas together as 'Other'
elif v/total < 0.001 and v > 0:
other += v
class_data.append({
'class': 'Other',
'area_ha': other * 300 * 300 * 1e-4,
'area_%': 100 * other / total
})
return sorted(class_data, key=lambda k: k.get('area_ha'), reverse=True)
def showPie(plot_data):
areaId_to_name = None
if adm2:
tmp = get_admin2_json(iso=adm0, adm1=adm1)
areaId_to_name ={}
for row in tmp:
areaId_to_name[row.get('adm2')] = row.get('name')
if adm1 and not adm2:
tmp = get_admin1_json(iso=adm0)
areaId_to_name={}
for row in tmp:
areaId_to_name[row.get('adm1')] = row.get('name')
if adm0 and not adm1 and not adm2:
title = (f"{iso_to_countries[adm0]}")
if adm0 and adm1 and not adm2:
title = (f"{areaId_to_name[adm1]}")
if adm0 and adm1 and adm2:
title = (f"{areaId_to_name[adm2]}")
labels = [d.get('class') + f" ({round(d.get('area_%'),2)}%)" for d in plot_data]
sizes = [d.get('area_ha') for d in plot_data]
fig1, ax1 = plt.subplots(figsize=(5,5))
ax1.pie(sizes, shadow=False, startangle=90)
ax1.axis('equal')
plt.legend(labels, loc="best",bbox_to_anchor=(1.1, 1))
centre_circle = plt.Circle((0,0),0.75,color='white', fc='white',linewidth=0.5)
fig1 = plt.gcf()
fig1.gca().add_artist(centre_circle)
plt.title('Land cover for ' + title)
plt.show()
def getSentence(plot_data):
areaId_to_name = None
if adm2:
tmp = get_admin2_json(iso=adm0, adm1=adm1)
areaId_to_name ={}
for row in tmp:
areaId_to_name[row.get('adm2')] = row.get('name')
if adm1 and not adm2:
tmp = get_admin1_json(iso=adm0)
areaId_to_name={}
for row in tmp:
areaId_to_name[row.get('adm1')] = row.get('name')
if adm0 and not adm1 and not adm2:
title = (f"{iso_to_countries[adm0]}")
if adm0 and adm1 and not adm2:
title = (f"{areaId_to_name[adm1]}")
if adm0 and adm1 and adm2:
title = (f"{areaId_to_name[adm2]}")
print(f"The land use of ", end="")
print(f"{title} in {year} is mostly {plot_data[0].get('class')}, ", end="")
print(f"covering an area of {int(plot_data[0].get('area_ha'))}ha.", end="")
```
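As a sanity check on the unit conversion above: `buildData()` treats each `class_*` value as a count of 300 m × 300 m pixels (the ESA CCI resolution), and since 1 ha = 10,000 m², each pixel contributes 9 ha:

```python
# Pixel-to-hectare conversion as used in buildData():
# one 300 m x 300 m pixel covers 90,000 m^2 = 9 ha.
def pixels_to_ha(pixel_count):
    return pixel_count * 300 * 300 * 1e-4

print(pixels_to_ha(1))     # 9.0
print(pixels_to_ha(1000))  # 9000.0
```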
***
### Examples
```
adm0 = 'BRA'
adm1 = None
adm2 = None
year = 2015
data = classification_query(adm0=adm0, adm1=adm1,adm2=adm2)
data = buildData(data)
showPie(data)
getSentence(data)
adm0 = 'BRA'
adm1 = 1
adm2 = None
year = 2015
data = classification_query(adm0=adm0, adm1=adm1,adm2=adm2)
data = buildData(data)
showPie(data)
getSentence(data)
adm0 = 'GBR'
adm1 = 1
adm2 = None
year = 2015
data = classification_query(adm0=adm0, adm1=adm1,adm2=adm2)
data = buildData(data)
showPie(data)
getSentence(data)
adm0 = 'RUS'
adm1 = None
adm2 = None
year = 2015
data = classification_query(adm0=adm0, adm1=adm1,adm2=adm2)
data = buildData(data)
showPie(data)
getSentence(data)
adm0 = 'IDN'
adm1 = None
adm2 = None
year = 2015
data = classification_query(adm0=adm0, adm1=adm1,adm2=adm2)
data = buildData(data)
showPie(data)
getSentence(data)
adm0 = 'IDN'
adm1 = 1
adm2 = 1
year = 2015
data = classification_query(adm0=adm0, adm1=adm1,adm2=adm2)
data = buildData(data)
showPie(data)
getSentence(data)
```
```
import sys
import os
import torch
import yaml
from easydict import EasyDict as edict
from pytorch_transformers.tokenization_bert import BertTokenizer
from vilbert.datasets import ConceptCapLoaderTrain, ConceptCapLoaderVal
from vilbert.vilbert import VILBertForVLTasks, BertConfig, BertForMultiModalPreTraining
from vilbert.task_utils import LoadDatasetEval
import numpy as np
import matplotlib.pyplot as plt
import PIL
# These imports are required by FeatureExtractor below (cfg, nms,
# build_detection_model, to_image_list, load_state_dict); adjust the
# package path if maskrcnn_benchmark is vendored under another directory.
from maskrcnn_benchmark.config import cfg
from maskrcnn_benchmark.layers import nms
from maskrcnn_benchmark.modeling.detector import build_detection_model
from maskrcnn_benchmark.structures.image_list import to_image_list
from maskrcnn_benchmark.utils.model_serialization import load_state_dict
from PIL import Image
import cv2
import argparse
import glob
from types import SimpleNamespace
import pdb
%matplotlib inline
dir(SimpleNamespace)
class FeatureExtractor:
MAX_SIZE = 1333
MIN_SIZE = 800
def __init__(self):
self.args = self.get_parser()
self.detection_model = self._build_detection_model()
def get_parser(self):
parser = SimpleNamespace(model_file= 'save/resnext_models/model_final.pth',
config_file='save/resnext_models/e2e_faster_rcnn_X-152-32x8d-FPN_1x_MLP_2048_FPN_512_train.yaml',
batch_size=1,
num_features=100,
feature_name="fc6",
confidence_threshold=0,
background=False,
partition=0)
return parser
def _build_detection_model(self):
cfg.merge_from_file(self.args.config_file)
cfg.freeze()
model = build_detection_model(cfg)
checkpoint = torch.load(self.args.model_file, map_location=torch.device("cpu"))
load_state_dict(model, checkpoint.pop("model"))
model.to("cuda")
model.eval()
return model
def _image_transform(self, path):
img = Image.open(path)
im = np.array(img).astype(np.float32)
# IndexError: too many indices for array, grayscale images
if len(im.shape) < 3:
im = np.repeat(im[:, :, np.newaxis], 3, axis=2)
im = im[:, :, ::-1]
im -= np.array([102.9801, 115.9465, 122.7717])
im_shape = im.shape
im_height = im_shape[0]
im_width = im_shape[1]
im_size_min = np.min(im_shape[0:2])
im_size_max = np.max(im_shape[0:2])
# Scale based on minimum size
im_scale = self.MIN_SIZE / im_size_min
# Prevent the biggest axis from being more than max_size
# If bigger, scale it down
if np.round(im_scale * im_size_max) > self.MAX_SIZE:
im_scale = self.MAX_SIZE / im_size_max
im = cv2.resize(
im, None, None, fx=im_scale, fy=im_scale, interpolation=cv2.INTER_LINEAR
)
img = torch.from_numpy(im).permute(2, 0, 1)
im_info = {"width": im_width, "height": im_height}
return img, im_scale, im_info
def _process_feature_extraction(
self, output, im_scales, im_infos, feature_name="fc6", conf_thresh=0
):
batch_size = len(output[0]["proposals"])
n_boxes_per_image = [len(boxes) for boxes in output[0]["proposals"]]
score_list = output[0]["scores"].split(n_boxes_per_image)
score_list = [torch.nn.functional.softmax(x, -1) for x in score_list]
feats = output[0][feature_name].split(n_boxes_per_image)
cur_device = score_list[0].device
feat_list = []
info_list = []
for i in range(batch_size):
dets = output[0]["proposals"][i].bbox / im_scales[i]
scores = score_list[i]
max_conf = torch.zeros((scores.shape[0])).to(cur_device)
conf_thresh_tensor = torch.full_like(max_conf, conf_thresh)
start_index = 1
# Column 0 of the scores matrix is for the background class
if self.args.background:
start_index = 0
for cls_ind in range(start_index, scores.shape[1]):
cls_scores = scores[:, cls_ind]
keep = nms(dets, cls_scores, 0.5)
max_conf[keep] = torch.where(
# Better than max one till now and minimally greater than conf_thresh
(cls_scores[keep] > max_conf[keep])
& (cls_scores[keep] > conf_thresh_tensor[keep]),
cls_scores[keep],
max_conf[keep],
)
sorted_scores, sorted_indices = torch.sort(max_conf, descending=True)
num_boxes = (sorted_scores[: self.args.num_features] != 0).sum()
keep_boxes = sorted_indices[: self.args.num_features]
feat_list.append(feats[i][keep_boxes])
bbox = output[0]["proposals"][i][keep_boxes].bbox / im_scales[i]
# Predict the class label using the scores
objects = torch.argmax(scores[keep_boxes][start_index:], dim=1)
cls_prob = torch.max(scores[keep_boxes][start_index:], dim=1)
info_list.append(
{
"bbox": bbox.cpu().numpy(),
"num_boxes": num_boxes.item(),
"objects": objects.cpu().numpy(),
"image_width": im_infos[i]["width"],
"image_height": im_infos[i]["height"],
"cls_prob": scores[keep_boxes].cpu().numpy(),
}
)
return feat_list, info_list
def get_detectron_features(self, image_paths):
img_tensor, im_scales, im_infos = [], [], []
for image_path in image_paths:
im, im_scale, im_info = self._image_transform(image_path)
img_tensor.append(im)
im_scales.append(im_scale)
im_infos.append(im_info)
# Image dimensions should be divisible by 32, to allow convolutions
# in detector to work
current_img_list = to_image_list(img_tensor, size_divisible=32)
current_img_list = current_img_list.to("cuda")
with torch.no_grad():
output = self.detection_model(current_img_list)
feat_list = self._process_feature_extraction(
output,
im_scales,
im_infos,
self.args.feature_name,
self.args.confidence_threshold,
)
return feat_list
def _chunks(self, array, chunk_size):
for i in range(0, len(array), chunk_size):
yield array[i : i + chunk_size]
def _save_feature(self, file_name, feature, info):
file_base_name = os.path.basename(file_name)
file_base_name = file_base_name.split(".")[0]
info["image_id"] = file_base_name
info["features"] = feature.cpu().numpy()
file_base_name = file_base_name + ".npy"
np.save(os.path.join(self.args.output_folder, file_base_name), info)
def extract_features(self, image_path):
features, infos = self.get_detectron_features([image_path])
return features, infos
def tokenize_batch(batch):
return [tokenizer.convert_tokens_to_ids(sent) for sent in batch]
def untokenize_batch(batch):
return [tokenizer.convert_ids_to_tokens(sent) for sent in batch]
def detokenize(sent):
""" Roughly detokenizes (mainly undoes wordpiece) """
new_sent = []
for i, tok in enumerate(sent):
if tok.startswith("##"):
new_sent[len(new_sent) - 1] = new_sent[len(new_sent) - 1] + tok[2:]
else:
new_sent.append(tok)
return new_sent
def printer(sent, should_detokenize=True):
if should_detokenize:
sent = detokenize(sent)[1:-1]
print(" ".join(sent))
# write an arbitrary string for a given sentence.
import _pickle as cPickle
def prediction(question, features, spatials, segment_ids, input_mask, image_mask, co_attention_mask, task_tokens, ):
vil_prediction, vil_prediction_gqa, vil_logit, vil_binary_prediction, vil_tri_prediction, vision_prediction, vision_logit, linguisic_prediction, linguisic_logit, attn_data_list = model(
question, features, spatials, segment_ids, input_mask, image_mask, co_attention_mask, task_tokens, output_all_attention_masks=True
)
height, width = img.shape[0], img.shape[1]
logits = torch.max(vil_prediction, 1)[1].data # argmax
# Load VQA label to answers:
label2ans_path = os.path.join('save', "VQA" ,"cache", "trainval_label2ans.pkl")
vqa_label2ans = cPickle.load(open(label2ans_path, "rb"))
answer = vqa_label2ans[logits[0].item()]
print("VQA: " + answer)
# Load GQA label to answers:
label2ans_path = os.path.join('save', "gqa", "cache", "trainval_label2ans.pkl")
logits_gqa = torch.max(vil_prediction_gqa, 1)[1].data
gqa_label2ans = cPickle.load(open(label2ans_path, "rb"))
answer = gqa_label2ans[logits_gqa[0].item()]
print("GQA: " + answer)
# vil_binary_prediction NLVR2, 0: False 1: True Task 12
logits_binary = torch.max(vil_binary_prediction, 1)[1].data
print("NLVR: " + str(logits_binary.item()))
# vil_entailment:
label_map = {0:"contradiction", 1:"neutral", 2:"entailment"}
logits_tri = torch.max(vil_tri_prediction, 1)[1].data
print("Entailment: " + str(label_map[logits_tri.item()]))
# vil_logit:
logits_vil = vil_logit[0].item()
print("ViL_logit: %f" %logits_vil)
# grounding:
logits_vision = torch.max(vision_logit, 1)[1].data
grounding_val, grounding_idx = torch.sort(vision_logit.view(-1), 0, True)
examples_per_row = 5
ncols = examples_per_row
nrows = 1
figsize = [12, ncols*20] # figure size, inches
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize)
for i, axi in enumerate(ax.flat):
idx = grounding_idx[i]
val = grounding_val[i]
box = spatials[0][idx][:4].tolist()
y1 = int(box[1] * height)
y2 = int(box[3] * height)
x1 = int(box[0] * width)
x2 = int(box[2] * width)
patch = img[y1:y2,x1:x2]
axi.imshow(patch)
axi.axis('off')
axi.set_title(str(i) + ": " + str(val.item()))
plt.axis('off')
plt.tight_layout(True)
plt.show()
def custom_prediction(query, task, features, infos):
tokens = tokenizer.encode(query)
tokens = tokenizer.add_special_tokens(tokens)
segment_ids = [0] * len(tokens)
input_mask = [1] * len(tokens)
max_length = 37
if len(tokens) < max_length:
# Pad the tokens and masks (appended at the end) up to max_length
padding = [0] * (max_length - len(tokens))
tokens = tokens + padding
input_mask += padding
segment_ids += padding
text = torch.from_numpy(np.array(tokens)).cuda().unsqueeze(0)
input_mask = torch.from_numpy(np.array(input_mask)).cuda().unsqueeze(0)
segment_ids = torch.from_numpy(np.array(segment_ids)).cuda().unsqueeze(0)
task = torch.from_numpy(np.array(task)).cuda().unsqueeze(0)
num_image = len(infos)
feature_list = []
image_location_list = []
image_mask_list = []
for i in range(num_image):
image_w = infos[i]['image_width']
image_h = infos[i]['image_height']
feature = features[i]
num_boxes = feature.shape[0]
g_feat = torch.sum(feature, dim=0) / num_boxes
num_boxes = num_boxes + 1
feature = torch.cat([g_feat.view(1,-1), feature], dim=0)
boxes = infos[i]['bbox']
image_location = np.zeros((boxes.shape[0], 5), dtype=np.float32)
image_location[:,:4] = boxes
image_location[:,4] = (image_location[:,3] - image_location[:,1]) * (image_location[:,2] - image_location[:,0]) / (float(image_w) * float(image_h))
image_location[:,0] = image_location[:,0] / float(image_w)
image_location[:,1] = image_location[:,1] / float(image_h)
image_location[:,2] = image_location[:,2] / float(image_w)
image_location[:,3] = image_location[:,3] / float(image_h)
g_location = np.array([0,0,1,1,1])
image_location = np.concatenate([np.expand_dims(g_location, axis=0), image_location], axis=0)
image_mask = [1] * (int(num_boxes))
feature_list.append(feature)
image_location_list.append(torch.tensor(image_location))
image_mask_list.append(torch.tensor(image_mask))
features = torch.stack(feature_list, dim=0).float().cuda()
spatials = torch.stack(image_location_list, dim=0).float().cuda()
image_mask = torch.stack(image_mask_list, dim=0).byte().cuda()
co_attention_mask = torch.zeros((num_image, num_boxes, max_length)).cuda()
prediction(text, features, spatials, segment_ids, input_mask, image_mask, co_attention_mask, task)
# =============================
# ViLBERT part
# =============================
feature_extractor = FeatureExtractor()
args = SimpleNamespace(from_pretrained= "save/multitask_model/pytorch_model_9.bin",
bert_model="bert-base-uncased",
config_file="config/bert_base_6layer_6conect.json",
max_seq_length=101,
train_batch_size=1,
do_lower_case=True,
predict_feature=False,
seed=42,
num_workers=0,
baseline=False,
img_weight=1,
distributed=False,
objective=1,
visual_target=0,
dynamic_attention=False,
task_specific_tokens=True,
tasks='1',
save_name='',
in_memory=False,
batch_size=1,
local_rank=-1,
split='mteval',
clean_train_sets=True
)
config = BertConfig.from_json_file(args.config_file)
with open('./vilbert_tasks.yml', 'r') as f:
task_cfg = edict(yaml.safe_load(f))
task_names = []
for i, task_id in enumerate(args.tasks.split('-')):
task = 'TASK' + task_id
name = task_cfg[task]['name']
task_names.append(name)
timeStamp = args.from_pretrained.split('/')[-1] + '-' + args.save_name
config = BertConfig.from_json_file(args.config_file)
default_gpu=True
if args.predict_feature:
config.v_target_size = 2048
config.predict_feature = True
else:
config.v_target_size = 1601
config.predict_feature = False
if args.task_specific_tokens:
config.task_specific_tokens = True
if args.dynamic_attention:
config.dynamic_attention = True
config.visualization = True
num_labels = 3129
if args.baseline:
model = BaseBertForVLTasks.from_pretrained(
args.from_pretrained, config=config, num_labels=num_labels, default_gpu=default_gpu
)
else:
model = VILBertForVLTasks.from_pretrained(
args.from_pretrained, config=config, num_labels=num_labels, default_gpu=default_gpu
)
model.eval()
cuda = torch.cuda.is_available()
if cuda: model = model.cuda(0)
tokenizer = BertTokenizer.from_pretrained(
args.bert_model, do_lower_case=args.do_lower_case
)
# 1: VQA, 2: GenomeQA, 4: Visual7w, 7: Retrieval COCO, 8: Retrieval Flickr30k
# 9: refcoco, 10: refcoco+ 11: refcocog, 12: NLVR2, 13: VisualEntailment, 15: GQA, 16: GuessWhat,
image_path = 'demo/1.jpg'
features, infos = feature_extractor.extract_features(image_path)
img = PIL.Image.open(image_path).convert('RGB')
img = torch.tensor(np.array(img))
plt.axis('off')
plt.imshow(img)
plt.show()
query = "swimming elephant"
task = [9]
custom_prediction(query, task, features, infos)
```
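The resize rule in `FeatureExtractor._image_transform` above — scale so the short side reaches `MIN_SIZE` unless that would push the long side past `MAX_SIZE`, in which case scale by the long side instead — can be checked in isolation (plain `round` stands in for `np.round` here):

```python
# Detector-style image scaling: short side targets MIN_SIZE, capped so
# the long side never exceeds MAX_SIZE after scaling.
MIN_SIZE, MAX_SIZE = 800, 1333

def compute_scale(height, width):
    im_size_min = min(height, width)
    im_size_max = max(height, width)
    scale = MIN_SIZE / im_size_min
    # If the long side would exceed MAX_SIZE, scale down by the long side
    if round(scale * im_size_max) > MAX_SIZE:
        scale = MAX_SIZE / im_size_max
    return scale

# 600x800: short side drives the scale (800/600), long side stays within 1333
print(compute_scale(600, 800))
# 600x1600: 800/600 would give a 2133 px long side, so the cap applies (1333/1600)
print(compute_scale(600, 1600))
```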
# Dynamics 365 Business Central Troubleshooting Guide (TSG) - Extensions
This notebook contains Kusto queries that can help getting to the root cause of an issue with extensions for one or more environments.
NB! Some of the signal used in this notebook is only available in newer versions of Business Central, so check the version of your environment if some sections do not return any data. The signal documentation states in which version a given signal was introduced.
## 1\. Get setup: Load up Python libraries and connect to Application Insights
First you need to set the notebook Kernel to Python3,
load the KQLmagic module (did you install it? If not, go here to get install instructions: https://github.com/microsoft/BCTech/tree/master/samples/AppInsights/TroubleShootingGuides)
and connect to your Application Insights resource (get appid and appkey from the API access page in the Application Insights portal)
```
# load the KQLmagic module
%reload_ext Kqlmagic
# Connect to the Application Insights API
%kql appinsights://appid='<add app id from the Application Insights portal>';appkey='<add API key from the Application Insights portal>'
```
## 2\. Define filters
This workbook is designed for troubleshooting extensions. Please provide values for aadTenantId, environmentName, and extensionId (or use a config file).
You can also specify limits to the period of time that the analysis should include.
```
# Add values for AAD tenant id, environment name, and extension id.
# It is possible to leave one or more values blank (if you want to analyze across all values of the parameter)
# You can either use configuration file (INI file format) or set filters directly.
# If you specify a config file, then variables set here takes precedence over manually set filter variables
# config file name and directory (full path)
configFile = "c:\\tmp\\notebook.ini"
# Add AAD tenant id and environment name here (or leave blank)
aadTenantId = ""
environmentName = ""
extensionId = ""
# date filters for the analysis
# use YYYY-MM-DD format for the dates (ISO 8601)
startDate = "2021-11-20"
endDate = "2022-01-01"
# Do not edit this code section
import configparser
config = configparser.ConfigParser()
config.read(configFile)
if bool(config.defaults()):
if config.has_option('DEFAULT', 'aadTenantId'):
aadTenantId = config['DEFAULT']['aadTenantId']
if config.has_option('DEFAULT', 'environmentName'):
environmentName = config['DEFAULT']['environmentName']
if config.has_option('DEFAULT', 'extensionId'):
extensionId = config['DEFAULT']['extensionId']
if config.has_option('DEFAULT', 'startDate'):
startDate = config['DEFAULT']['startDate']
if config.has_option('DEFAULT', 'endDate'):
endDate = config['DEFAULT']['endDate']
print("Using these parameters for the analysis:")
print("----------------------------------------")
print("aadTenantId " + aadTenantId)
print("environmentName " + environmentName)
print("extensionId " + extensionId)
print("startDate " + startDate)
print("endDate " + endDate)
```
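The cell above reads its overrides from the `[DEFAULT]` section of an INI file. A hypothetical `notebook.ini` and the lookup behavior it relies on (note that `configparser` option names are case-insensitive, so `environmentName` in the file is still found):

```python
import configparser

# A hypothetical notebook.ini; any key present here overrides the
# manually set filter variables, per the precedence logic above.
ini_text = """
[DEFAULT]
aadTenantId = 00000000-0000-0000-0000-000000000000
environmentName = Production
startDate = 2021-11-20
endDate = 2022-01-01
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

print(config['DEFAULT']['environmentName'])          # Production
print(config.has_option('DEFAULT', 'extensionId'))   # False -> manual value kept
```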
# Analyze extension events
Now you can run Kusto queries to look for possible root causes for issues about extensions.
Either click **Run All** above to run all sections, or scroll down to the type of analysis you want to do and manually run queries
## Extension event overview
Event telemetry docs:
* https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/telemetry-extension-lifecycle-trace
* https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/telemetry-extension-update-trace
KQL samples: https://github.com/microsoft/BCTech/blob/master/samples/AppInsights/KQL/RawData/ExtensionLifecycle.kql
```
%%kql
//
// extension event types stats
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId in ('RT0010', 'LC0010', 'LC0011', 'LC0012', 'LC0013', 'LC0014', 'LC0015', 'LC0016', 'LC0017', 'LC0018', 'LC0019', 'LC0020', 'LC0021', 'LC0022', 'LC0023')
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, eventId=tostring(customDimensions.eventId)
| extend eventMessageShort= strcat( case(
eventId=='RT0010', 'Update failed (upgrade code)'
, eventId=='LC0010', 'Install succeeded'
, eventId=='LC0011', 'Install failed'
, eventId=='LC0012', 'Synch succeeded'
, eventId=='LC0013', 'Synch failed'
, eventId=='LC0014', 'Publish succeeded'
, eventId=='LC0015', 'Publish failed'
, eventId=='LC0016', 'Un-install succeeded'
, eventId=='LC0017', 'Un-install failed'
, eventId=='LC0018', 'Un-publish succeeded'
, eventId=='LC0019', 'Un-publish failed'
, eventId=='LC0020', 'Compilation succeeded'
, eventId=='LC0021', 'Compilation failed'
, eventId=='LC0022', 'Update succeeded'
, eventId=='LC0023', 'Update failed (other)'
, 'Unknown message'
), " (", eventId, ')' )
| summarize count=count() by eventType=eventMessageShort
| order by eventType
| render barchart with (title='Extension lifecycle event overview', legend=hidden)
%%kql
//
// top 100 extension events
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId in ('RT0010', 'LC0010', 'LC0011', 'LC0012', 'LC0013', 'LC0014', 'LC0015', 'LC0016', 'LC0017', 'LC0018', 'LC0019', 'LC0020', 'LC0021', 'LC0022', 'LC0023')
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, extensionName=tostring( customDimensions.extensionName )
, eventId=tostring(customDimensions.eventId)
| extend eventMessageShort= strcat( case(
eventId=='RT0010', 'Update failed (upgrade code)'
, eventId=='LC0010', 'Install succeeded'
, eventId=='LC0011', 'Install failed'
, eventId=='LC0012', 'Synch succeeded'
, eventId=='LC0013', 'Synch failed'
, eventId=='LC0014', 'Publish succeeded'
, eventId=='LC0015', 'Publish failed'
, eventId=='LC0016', 'Un-install succeeded'
, eventId=='LC0017', 'Un-install failed'
, eventId=='LC0018', 'Un-publish succeeded'
, eventId=='LC0019', 'Un-publish failed'
, eventId=='LC0020', 'Compilation succeeded'
, eventId=='LC0021', 'Compilation failed'
, eventId=='LC0022', 'Update succeeded'
, eventId=='LC0023', 'Update failed (other)'
, 'Unknown message'
), " (", eventId, ')' )
| project timestamp, eventMessageShort, extensionName, aadTenantId, environmentName, extensionId
| order by aadTenantId, environmentName, extensionId, timestamp asc
| limit 100
```
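The queries above make `endDate` inclusive by filtering against `endDate + 24h - 1ms`, i.e. the last representable millisecond of that day. The same trick expressed in Python datetime terms (illustration only, not part of the TSG):

```python
from datetime import datetime, timedelta

# Turn a date-only upper bound into an inclusive end-of-day timestamp,
# mirroring: todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
end_date = datetime.fromisoformat("2022-01-01")
upper_bound = end_date + timedelta(hours=24) - timedelta(milliseconds=1)

print(upper_bound)  # 2022-01-01 23:59:59.999000
```

Without this adjustment, `timestamp <= endDate` would exclude every event on the end date itself, since the bare date parses as midnight.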
## Extension failures
Event telemetry docs:
* https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/telemetry-extension-lifecycle-trace
* https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/telemetry-extension-update-trace
```
%%kql
//
// extension event failure overview
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId in ('RT0010', 'LC0011', 'LC0013', 'LC0015', 'LC0017', 'LC0019', 'LC0021', 'LC0023')
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, eventId=tostring(customDimensions.eventId)
| extend eventMessageShort= strcat( case(
eventId=='RT0010', 'Update failed (upgrade code)'
, eventId=='LC0011', 'Install failed'
, eventId=='LC0013', 'Synch failed'
, eventId=='LC0015', 'Publish failed'
, eventId=='LC0017', 'Un-install failed'
, eventId=='LC0019', 'Un-publish failed'
, eventId=='LC0021', 'Compilation failed'
, eventId=='LC0023', 'Update failed (other)'
, 'Unknown message'
), " (", eventId, ')' )
| summarize count=count() by eventType=eventMessageShort
| order by eventType
| render barchart with (title='Failure type overview', xtitle="", legend=hidden)
%%kql
//
// top 100 latest extension event failure details
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId in ('RT0010', 'LC0011', 'LC0013', 'LC0015', 'LC0017', 'LC0019', 'LC0021', 'LC0023')
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, eventId=tostring(customDimensions.eventId)
, extensionName=tostring(customDimensions.extensionName)
| extend eventMessageShort= strcat( case(
eventId=='RT0010', 'Update failed (upgrade code)'
, eventId=='LC0011', 'Install failed'
, eventId=='LC0013', 'Synch failed'
, eventId=='LC0015', 'Publish failed'
, eventId=='LC0017', 'Un-install failed'
, eventId=='LC0019', 'Un-publish failed'
, eventId=='LC0021', 'Compilation failed'
, eventId=='LC0023', 'Update failed (other)'
, 'Unknown message'
), " (", eventId, ')' )
| project timestamp, extensionName, eventType=eventMessageShort
, version=customDimensions.extensionVersion
, failureReason=customDimensions.failureReason
, aadTenantId, environmentName, extensionId
| order by timestamp desc
| limit 100
%%kql
//
// top 20 latest update failures (due to upgrade code)
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId == 'RT0010'
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, eventId=tostring(customDimensions.eventId)
, extensionName=tostring(customDimensions.extensionName)
| project timestamp, extensionName
, version=customDimensions.extensionVersion
, targetedVersion =customDimensions.extensionTargetedVersion
, failureType =customDimensions.failureType
, alStackTrace =customDimensions.alStackTrace
, companyName = customDimensions.companyName
, extensionPublisher = customDimensions.extensionPublisher
, aadTenantId, environmentName, extensionId
| order by timestamp desc
| limit 20
%%kql
//
// top 20 latest synch failures
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId == 'LC0013'
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, eventId=tostring(customDimensions.eventId)
, extensionName=tostring(customDimensions.extensionName)
| project timestamp, extensionName
, version=customDimensions.extensionVersion
, failureReason=customDimensions.failureReason
, publishedAs = customDimensions.extensionPublishedAs
, extensionPublisher = customDimensions.extensionPublisher
, extensionScope = customDimensions.extensionScope
, extensionSynchronizationMode = customDimensions.extensionSynchronizationMode
, aadTenantId, environmentName, extensionId
| order by timestamp desc
| limit 20
%%kql
//
// top 20 latest compilation failures
//
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
let _extensionId = extensionId;
let _startDate = startDate;
let _endDate = endDate;
traces
| where 1==1
and timestamp >= todatetime(_startDate)
and timestamp <= todatetime(_endDate) + totimespan(24h) - totimespan(1ms)
and (_aadTenantId == '' or customDimensions.aadTenantId == _aadTenantId)
and (_environmentName == '' or customDimensions.environmentName == _environmentName )
and (_extensionId == '' or customDimensions.extensionId == _extensionId)
and customDimensions.eventId == 'LC0021'
| extend aadTenantId=tostring( customDimensions.aadTenantId)
, environmentName=tostring( customDimensions.environmentName )
, extensionId=tostring( customDimensions.extensionId )
, eventId=tostring(customDimensions.eventId)
, extensionName=tostring(customDimensions.extensionName)
| project timestamp, extensionName
, version=customDimensions.extensionVersion
, failureReason=customDimensions.failureReason
, compilationResult = customDimensions.extensionCompilationResult
, compilationDependencyList = customDimensions.extensionCompilationDependencyList
, publisher = customDimensions.extensionPublisher
, publishedAs = customDimensions.extensionPublishedAs
, extensionScope = customDimensions.extensionScope
, aadTenantId, environmentName, extensionId
| order by timestamp desc
| limit 20
```
| github_jupyter |
## Import Template
```
import numpy as np
import pandas as pd
import seaborn as sn
import tensorflow as tf
from tensorflow import keras
from matplotlib import pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
%matplotlib inline
df = pd.read_csv("9_Customer_churn.csv")
df.sample(5)
```
## Data Cleaning Phase
```
df.drop("customerID", axis="columns", inplace = True)
df.dtypes # we want to convert total charges to float as well
df.TotalCharges.values
print(df.TotalCharges.dtype)
pd.to_numeric(df.TotalCharges, errors="coerce") # coerce non-numeric entries (blank strings) to NaN
# rows without total charges
df[pd.to_numeric(df.TotalCharges, errors="coerce").isnull()]
# drop them since it's a small size in the sample
df1 = df[df.TotalCharges != " "].copy() # .copy() avoids SettingWithCopyWarning on the assignments below
# df1.shape
df1.TotalCharges = pd.to_numeric(df1.TotalCharges)
df1.TotalCharges.dtype
# Get a good visual about tenure vs Churn
tenure_churn_no = df1[df1.Churn == "No"].tenure
tenure_churn_yes = df1[df1.Churn == "Yes"].tenure
plt.xlabel("Tenure")
plt.ylabel("Num of Customers")
plt.title("Tenure vs. Customer Churn")
plt.hist([tenure_churn_yes, tenure_churn_no], color=["green", "red"], label=["Churn: Yes", "Churn: No"])
plt.legend()
# Get another good visual about Monthly Charges vs Churn
mc_churn_no = df1[df1.Churn == "No"].MonthlyCharges
mc_churn_yes = df1[df1.Churn == "Yes"].MonthlyCharges
plt.xlabel("Monthly Charges")
plt.ylabel("Num of Customers")
plt.title("Monthly Charges vs. Customer Churn")
plt.hist([mc_churn_yes, mc_churn_no], color=["green", "red"], label=["Churn: Yes", "Churn: No"])
plt.legend()
# List out unique values in each column
def print_uni_col(df):
for col in df:
# if df1[col].dtype == "object":
print(f"{col}: {df[col].unique()}")
# remove redundant param (cvt to no)
df1.replace("No internet service", "No", inplace=True) # inplace=True modifies df1 in place instead of returning a copy
df1.replace("No phone service", "No", inplace=True)
df1["gender"].replace({"Female": 1, "Male": 0}, inplace=True)
# cvt yes/no to 1/0
yes_no_columns = ['Partner','Dependents','PhoneService','MultipleLines','OnlineSecurity','OnlineBackup',
'DeviceProtection','TechSupport','StreamingTV','StreamingMovies','PaperlessBilling','Churn']
for col in yes_no_columns:
df1[col].replace({'Yes': 1,'No': 0},inplace=True)
# Applying One hot encoding for categorical columns (created 7 more columns)
df2 = pd.get_dummies(data=df1, columns=['InternetService','Contract','PaymentMethod'])
df2.dtypes # all numbers no objects
# Scale values to the 0-1 range
cols_to_scale = ['tenure','MonthlyCharges','TotalCharges']
scaler = MinMaxScaler()
df2[cols_to_scale] = scaler.fit_transform(df2[cols_to_scale])
print_uni_col(df2) # successfully scaled
```
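For reference, `MinMaxScaler` applies the linear map x' = (x - min) / (max - min) to each column independently, which is why the scaled columns all land in the 0-1 range. Here is a tiny self-contained check of that formula on made-up values (not the churn data):

```
import numpy as np

# min-max scaling by hand: x' = (x - min) / (max - min)
x = np.array([1.0, 5.0, 3.0, 9.0])
scaled = (x - x.min()) / (x.max() - x.min())
print(scaled)  # values 0, 0.5, 0.25, 1
```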
## Create data sets and Model
```
# split train and test sets
X = df2.drop("Churn", axis="columns")
y = df2["Churn"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=5)
X_train.shape
model = keras.Sequential([
keras.layers.Dense(26, input_shape=(26, ), activation="relu"), # Input Layer / hidden layer
# keras.layers.Dense(50, activation="relu"),
# keras.layers.Dense(10, activation="relu"),
keras.layers.Dense(1, activation="sigmoid") # Output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```
## Train the model
```
# Sanity-check a few epochs first to confirm accuracy is increasing, then train for 100
model.fit(X_train, y_train, epochs=100)
```
## Test and Visualize the results
```
model.evaluate(X_test, y_test)
yp = model.predict(X_test)
yp[yp > 0.5] = 1
yp[yp <= 0.5] = 0
# print prediction and actual side by side
np.c_[yp[:10], y_test[:10]]
# cm & reports
print(classification_report(y_test, yp))
# plot it
cm = tf.math.confusion_matrix(labels=y_test, predictions=yp)
plt.figure(figsize=(10, 7))
sn.heatmap(cm, annot=True, fmt="d")
plt.xlabel("Predicted")
plt.ylabel("Actual")
```
<a href="https://colab.research.google.com/github/yukinaga/numpy_matplotlib/blob/main/section_3/01_various_graphs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Various Graphs with matplotlib
Let's draw various kinds of graphs using matplotlib.
## ●Importing matplotlib
To draw graphs, we import matplotlib's pyplot module.
pyplot supports graph drawing.
Since we use NumPy arrays to manipulate the data, we also import NumPy.
```
import numpy as np
import matplotlib.pyplot as plt
# For practice
```
## ●The linspace function
NumPy's linspace function is often used when drawing graphs with matplotlib.
The linspace function divides an interval into 50 evenly spaced values and returns them as a NumPy array.
This array is often used as the x-axis values of a graph.
```
import numpy as np
x = np.linspace(-5, 5)  # 50 evenly spaced values from -5 to 5
print(x)
print(len(x))  # number of elements in x
# For practice
```
We use this array to approximate continuously varying x-axis values.
## ●Drawing a graph
As an example, let's draw a straight line using pyplot.
We generate the x-coordinate data as an array with NumPy's linspace function, then multiply it by a constant to obtain the y-coordinates.
Then we plot the x and y data with pyplot's plot function, and display the graph with show.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-5, 5)  # from -5 to 5
y = 2 * x  # multiply x by 2 to get the y-coordinates
plt.plot(x, y)
plt.show()
# For practice
```
## ●Decorating a graph
Let's display axis labels, a graph title, and a legend, and change the line style to make a richer graph.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-5, 5)
y_1 = 2 * x
y_2 = 3 * x
# Axis labels
plt.xlabel("x value")
plt.ylabel("y value")
# Graph title
plt.title("My Graph")
# Plot: specify legend labels and line styles
plt.plot(x, y_1, label="y1")
plt.plot(x, y_2, label="y2", linestyle="dashed")
plt.legend() # show the legend
plt.show()
# For practice
```
## ●Displaying a scatter plot
The scatter function displays a scatter plot.
The following code draws a scatter plot from x and y coordinates.
```
import numpy as np
import matplotlib.pyplot as plt
x_1 = np.random.rand(50) - 1.0 # shift the coordinates left by 1.0
y_1 = np.random.rand(50)
x_2 = np.random.rand(50)
y_2 = np.random.rand(50)
plt.scatter(x_1, y_1, marker="+") # plot the scatter chart
plt.scatter(x_2, y_2, marker="*")
plt.show()
# For practice
```
## ●Displaying a bar chart
The bar function displays a bar chart.
The following code draws two bar charts stacked on top of each other.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3, 4, 5])
y_1 = np.array([100, 300, 500, 400, 200])
y_2 = np.array([200, 400, 300, 100, 500])
plt.bar(x, y_1, color="blue") # bar chart
plt.bar(x, y_2, bottom=y_1, color="orange") # stack a second bar chart on top
plt.show()
# For practice
```
## ●Displaying a pie chart
The pie function displays a pie chart.
The following code draws a pie chart.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.array([500, 400, 300, 200, 100])
labels = ["Lion", "Tiger", "Leopard", "Jaguar", "Others"] # labels
plt.pie(x, labels=labels, counterclock=False, startangle=90) # pie chart
plt.show()
# For practice
```
## ●Displaying an image
pyplot's imshow function can display an array as an image.
The following code is a sample that displays an array as an image.
```
import numpy as np
import matplotlib.pyplot as plt
img = np.array([[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10,11],
[12,13,14,15]])
plt.imshow(img, "gray") # display in grayscale
plt.colorbar() # show the color bar
plt.show()
# For practice
```
In this case, 0 represents black, 15 represents white, and values in between represent intermediate shades.
A color bar can also be displayed.
## ●Exercise
Add code to the cell below to draw a scatter plot.
The `random.randn` function returns random numbers that follow a normal distribution.
```
import numpy as np
import matplotlib.pyplot as plt
x_1 = np.random.randn(200) - 1.5 # shift the coordinates left by 1.5
y_1 = np.random.randn(200)
x_2 = np.random.randn(200) + 1.5 # shift the coordinates right by 1.5
y_2 = np.random.randn(200)
# Plot the scatter chart
plt.scatter( # ← add code here
plt.scatter( # ← add code here
plt.show()
```
## ●Example answer
Below is an example answer.
Refer to it when you are really stuck, or when checking your work.
```
import numpy as np
import matplotlib.pyplot as plt
x_1 = np.random.randn(200) - 1.5 # shift the coordinates left by 1.5
y_1 = np.random.randn(200)
x_2 = np.random.randn(200) + 1.5 # shift the coordinates right by 1.5
y_2 = np.random.randn(200)
# Plot the scatter chart
plt.scatter(x_1, y_1, marker="+") # ← code added here
plt.scatter(x_2, y_2, marker="*") # ← code added here
plt.show()
```
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
plt.style.use('fivethirtyeight')
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='sans-serif')
```
# Module 3 - Project: pendulum solutions
In this notebook, you will compare two solutions for the motion of a swinging pendulum
1. linearized solution for $\ddot{\theta}(t) = -\frac{g}{L}\theta$
2. numerical solution for $\ddot{\theta}(t) = -\frac{g}{L}\sin\theta$
You can choose your own pendulum length $L$ for the analysis. Try tying a string to an object and recording the swinging of the pendulum. You can estimate the initial angle and its period of oscillation.
## 1. Linear solution
The linear solution is the first order Taylor series approximation for the equation of motion
$\ddot{\theta}(t) = -\frac{g}{L}\sin\theta \approx -\frac{g}{L}\theta$
The solution to this simple harmonic oscillator function is
$\theta(t) = \theta_0 \cos\omega t + \frac{\dot{\theta}_0}{\omega}\sin\omega t$
where $\theta_0$ is the initial angle, $\dot{\theta}_0$ is the initial angular velocity [in rad/s], and $\omega=\sqrt{\frac{g}{L}}$.
Here, you can plot the solution for
- $L = 1~m$
- $g = 9.81~m/s^2$
- $\theta_0 = \frac{\pi}{3} = [60^o]$
- $\dot{\theta}_0 = 0~rad/s$
```
g = 9.81 # m/s/s
L = 1 # m
w = np.sqrt(g/L) # rad/s
t = np.linspace(0, 4*np.pi/w) # 2 time periods of motion
theta0 = np.pi/3
dtheta0 = 0
theta = theta0*np.cos(w*t) + dtheta0/w*np.sin(w*t)
plt.plot(t, theta*180/np.pi)
plt.xlabel('time (s)')
plt.ylabel(r'$\theta(t)$ [degrees]')
```
In the plot above, you should see harmonic motion. If it starts from rest at $60^o$ the pendulum will swing to $-60^o$ in $\frac{T}{2} =\frac{\pi}{\omega}$ seconds.
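That half-period claim can be checked directly from the linear solution. Here is a quick self-contained sketch using the same $L$, $g$, and $\theta_0$ as above:

```
import numpy as np

# Check: starting from rest at 60 degrees, the linearized pendulum
# reaches -60 degrees after half a period, T/2 = pi/omega
g = 9.81                 # m/s/s
L = 1                    # m
w = np.sqrt(g/L)         # rad/s
half_period = np.pi/w    # T/2, about 1.0 s for L = 1 m

theta0 = np.pi/3         # 60 degrees, starting from rest
theta_half = theta0*np.cos(w*half_period)  # only the cosine term, since dtheta0 = 0
print(np.degrees(theta_half))  # close to -60
```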
## 2. Numerical solution
In the numerical solution, there is no need to linearize the equation of motion. You _do_ need to create a __state__ variable as such
$\mathbf{x} = [x_1,~x_2] = [\theta,~\dot{\theta}]$
now, you can describe 2 first order equations in a function as such
1. $\dot{x}_1 = x_2$
2. $\dot{x}_2 = \ddot{\theta} =-\frac{g}{L}\sin\theta$
putting this in a Python function it becomes
```
g = 9.81
L = 1
def pendulum(t, x):
'''pendulum equations of motion for theta and dtheta/dt
arguments
---------
t: current time
x: current state variable [theta, dtheta/dt]
outputs
-------
dx: current derivative of state variable [dtheta/dt, ddtheta/ddt]'''
dx = np.zeros(len(x))
dx[0] = x[1]
dx[1] = -g/L*np.sin(x[0])
return dx
```
This function `pendulum` defines the differential equation. Next, integrate with `solve_ivp` to find $\theta(t)$ as such.
> __Note__: Here, I am using some previously defined values:
> - `theta0`: the initial angle
> - `dtheta0`: the initial angular velocity
> - `t`: the timesteps from the previous linear analysis $t = (0...2T)$
```
from scipy.integrate import solve_ivp
sol = solve_ivp(pendulum, [0, t.max()], [theta0, 0], t_eval = t)
plt.plot(sol.t, sol.y[0]*180/np.pi)
plt.xlabel('time (s)')
plt.ylabel(r'$\theta(t)$ [degrees]')
```
## Comparing results
At first glance, it would seem both solutions are _close enough_, but here you can plot them together and compare.
```
plt.plot(t, theta*180/np.pi)
plt.plot(sol.t, sol.y[0]*180/np.pi)
plt.xlabel('time (s)')
plt.ylabel(r'$\theta(t)$ [degrees]')
```
You should notice a difference in time period between the two solutions, which means the natural frequencies are different.
You can also consider angular velocity and angular acceleration, below a set of 3 plots are shown for $\theta(t),~\dot{\theta}(t),~and~\ddot{\theta}(t)$.
```
dtheta = -theta0*w*np.sin(w*t) + dtheta0*np.cos(w*t)
ddtheta = -theta0*w**2*np.cos(w*t) - dtheta0*w*np.sin(w*t)
ddtheta_numerical = -g/L*np.sin(sol.y[0]) # using equation of motion
plt.figure(figsize= (10, 11))
plt.subplot(311)
plt.plot(t, theta*180/np.pi)
plt.plot(sol.t, sol.y[0]*180/np.pi)
plt.ylabel(r'$\theta(t)$ [degrees]')
plt.subplot(312)
plt.plot(sol.t, sol.y[1])
plt.plot(t, dtheta)
plt.ylabel(r'$\dot{\theta}(t)$ (rad/s)')
plt.subplot(313)
plt.plot(t, ddtheta)
plt.plot(t, ddtheta_numerical)
plt.xlabel('time (s)')
plt.ylabel(r'$\ddot{\theta}(t)$ (rad/s$^2$)')
```
### Question prompt
- Where do you see the biggest difference between the linearized solution and the numerical solution?
- What are ways you can quantify the error between the two results? Is one solution more _accurate_?
- What other calculations can you compare between these two solutions?
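As a starting point for the second question, one simple error measure is the root-mean-square (RMS) difference between the two angle histories on a common time grid. The sketch below is self-contained (it rebuilds both solutions rather than reusing the variables above); RMS is only one possible choice of norm:

```
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1
w = np.sqrt(g/L)
t = np.linspace(0, 4*np.pi/w)   # 2 linear time periods
theta0 = np.pi/3

# linearized solution, starting from rest
theta_lin = theta0*np.cos(w*t)

# numerical solution of the full nonlinear equation
sol = solve_ivp(lambda t, x: [x[1], -g/L*np.sin(x[0])],
                [0, t.max()], [theta0, 0], t_eval=t)

# RMS difference between the two solutions, in degrees
rmse = np.sqrt(np.mean((theta_lin - sol.y[0])**2))*180/np.pi
print(rmse)
```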
```
a = 6
a
a + 2
a = 'hi'
a
len(a)
a + str(len(a))
#foo
import sys
def main():
print 'Hello there', sys.argv[1]
if __name__ == '__main__':
main()
def repeat(s, exclaim):
result = s + s + s
if exclaim:
result = result + '!!!'
return result
def main():
print repeat('Yay', False)
print repeat('Woo Hoo', True)
def main():
if name == 'Guido':
print repeeeet(name) + '!!!'
else:
print repeat(name)
s = 'hi'
print s[1]
print len(s)
print s + ' there'
pi = 3.14
##text = 'The value of pi is ' + pi
text = 'The value of pi is ' + str(pi)
raw = r'this\t\n and that'
print raw
multi = """It was the best of times.
It was the worst of times."""
print multi
text = "%d little pigs come out, or I'll %s, and I'll %s, and I'll blow your %s down." % (3, 'huff', 'puff', 'house')
text = ("%d little pigs come out, or I'll %s, and I'll %s, and I'll blow your %s down."
% (3, 'huff', 'puff', 'house'))
text = (
"%d little pigs come out, "
"or I'll %s, and I'll %s, "
"and I'll blow your %s down."
% (3, 'huff', 'puff', 'house'))
ustring = u'A unicode \u018e string \xf1'
ustring
s = ustring.encode('utf-8')
s
t = unicode(s, 'utf-8')
t == ustring
#if speed >= 80:
# print 'License and registration please'
# if mood == 'terrible' or speed >= 100:
# print 'You have the right to remain silent.'
# elif mood == 'bad' or speed >= 90:
# print "I'm going to have to write you a ticket."
# write_ticket()
# else:
# print "Let's try to keep it under 80 ok?"
#if speed >= 80: print 'You are so busted'
#else: print 'Have a nice day'
colors = ['red', 'blue', 'green']
print colors[0]
print colors[2]
print len(colors)
b = colors
squares = [1, 4, 9, 16]
sum = 0
for num in squares:
sum += num
print sum
list = ['larry', 'curly', 'moe']
if 'curly' in list:
print 'yay'
for i in range(100):
print i
i = 0
while i < len(a):
print a[i]
i = i + 3
list = ['larry', 'curly', 'moe']
list.append('shemp')
list.insert(0, 'xxx')
list.extend(['yyy', 'zzz'])
print list
print list.index('curly')
list.remove('curly')
list.pop(1)
print list
list = [1, 2, 3]
print list.append(4)
print list
list = []
list.append('a')
list.append('b')
list = ['a', 'b', 'c', 'd']
print list[1:-1]
list[0:2] = 'z'
print list
a = [5, 1, 4, 3]
print sorted(a)
print a
strs = ['aa', 'BB', 'zz', 'CC']
print sorted(strs)
print sorted(strs, reverse=True)
strs = ['ccc', 'aaaa', 'd', 'bb']
print sorted(strs, key=len)
print sorted(strs, key=str.lower)
strs = ['xc', 'zb', 'yd', 'wa']
def MyFn(s):
return s[-1]
print sorted(strs, key=MyFn)
#alist.sort()
#alist = blist.sort()
tuple = (1, 2, 'hi')
print len(tuple)
print tuple[2]
#tuple[2] = 'bye'
tuple = (1, 2, 'bye')
tuple = ('hi',)
(x, y, z) = (42, 13, "hike")
print z
#(err_string, err_code) = Foo()
nums = [1, 2, 3, 4]
squares = [n*n for n in nums]
strs = ['hello', 'and', 'goodbye']
shouting = [s.upper() + '!!!' for s in strs]
nums = [2, 8, 1, 6]
small = [n for n in nums if n<=2]
fruits = ['apple', 'cherry', 'banana', 'lemon']
afruits = [s.upper() for s in fruits if 'a' in s]
dict = {}
dict['a'] = 'alpha'
dict['g'] = 'gamma'
dict['o'] = 'omega'
print dict
print dict['a']
dict['a'] = 6
'a' in dict
#print dict['z']
if 'z' in dict: print dict['z']
print dict.get('z')
for key in dict: print key
for key in dict.keys(): print key
print dict.keys()
print dict.values()
for key in sorted(dict.keys()):
print key, dict[key]
print dict.items()
for k, v in dict.items(): print k, '>', v
hash = {}
hash['word'] = 'garfield'
hash['count'] = 42
s = 'I want %(count)d copies of %(word)s' % hash
var = 6
del var
list = ['a', 'b', 'c', 'd']
del list[0]
del list[-2:]
print list
dict = {'a':1, 'b':2, 'c':3}
del dict['b']
print dict
#f = open('foo.txt', 'rU')
#for line in f:
# print line,
#
#f.close()
import codecs
#f = codecs.open('foo.txt', 'rU', 'utf-8')
#for line in f:
import re
#match = re.search(pat, str)
str = 'an example word:cat!!'
match = re.search(r'word:\w\w\w', str)
if match:
print 'found', match.group()
else:
print 'did not find'
match = re.search(r'iii', 'piiig')
match = re.search(r'igs', 'piiig')
match = re.search(r'..g', 'piiig')
match = re.search(r'\d\d\d', 'p123g')
match = re.search(r'\w\w\w', '@@abcd!!')
match = re.search(r'pi+', 'piiig')
match = re.search(r'i+', 'piigiiii')
match = re.search(r'\d\s*\d\s*\d', 'xx1 2 3xx')
match= re.search(r'\d\s*\d\s*\d', 'xx12 3xx')
match = re.search(r'\d\s*\d\s*\d', 'xx123xx')
match = re.search(r'^b\w+', 'foobar')
match = re.search(r'b\w+', 'foobar')
str = 'purple alice-b@google.com monkey dishwasher'
match = re.search(r'\w+@\w+', str)
if match:
print match.group()
match = re.search(r'[\w.-]+@[\w.-]+', str)
if match:
print match.group()
str = 'purple alice-b@google.com monkey dishwasher'
match = re.search(r'([\w.-]+)@([\w.-]+)', str)
if match:
print match.group()
print match.group(1)
print match.group(2)
str = 'purple alice@google.com, blah monkey bob@abc.com blah dishwasher'
emails = re.findall(r'[\w\.-]+@[\w\.-]+', str)
for email in emails:
print email
#f = open('test.txt', 'r')
#strings = re.findall(r'some pattern', f.read())
str = 'purple alice@google.com, blah monkey bob@abc.com blah dishwasher'
tuples = re.findall(r'([\w\.-]+)@([\w\.-]+)', str)
print tuples
for tuple in tuples:
print tuple[0]
print tuple[1]
str = 'purple alice@google.com, blah monkey bob@abc.com blah dishwasher'
print re.sub(r'([\w\.-]+)@([\w\.-]+)', r'\1@yo-yo-dyne.com', str)
import os
import commands
def printdir(dir):
filenames = os.listdir(dir)
for filename in filenames:
print filename
print os.path.join(dir, filename)
print os.path.abspath(os.path.join(dir, filename))
def listdir(dir):
cmd = 'ls -l ' + dir
print "Command to run:", cmd
(status, output) = commands.getstatusoutput(cmd)
if status:
sys.stderr.write(output)
sys.exit(status)
print output
#try:
# f = open(filename, 'rU')
# text = f.read()
# f.close()
#except IOError:
# sys.stderr.write('problem reading:' + filename)
import urllib
def wget(url):
ufile = urllib.urlopen(url)
info = ufile.info()
if info.gettype() == 'text/html':
print 'base url:' + ufile.geturl()
text = ufile.read()
print text
def wget2(url):
try:
ufile = urllib.urlopen(url)
if ufile.info().gettype() == 'text/html':
print ufile.read()
except IOError:
print 'problem reading url:', url
```
# Wrapping a template library
A template library is a *C++* library that consists only of template classes meant to be instantiated.
Wrapping such libraries therefore requires **AutoWIG** to be able to consider various *C++* template class instantiations during the `Parse` step.
It is therefore required to install the `clanglite` `parser`.
The **Standard Template Library (STL)** library is a *C++* library that provides a set of common *C++* template classes such as containers and associative arrays.
These classes can be used with any built-in or user-defined type that supports some elementary operations (e.g., copying, assignment).
It is divided in four components called algorithms, containers, functional and iterators.
**STL** containers (e.g., `std::vector`, `std::set`) are used in many *C++* libraries.
In such a case, it does not seem relevant that every wrapped *C++* library contains wrappers for usual **STL** containers (e.g., `std::vector< double >`, `std::set< int >`).
We therefore proposed *Python* bindings for some sequence containers (e.g., `vector` of the `std` namespace) and associative containers (e.g., `set`, `unordered_set` of the `std` namespace).
These template instantiations are done for various *C++* fundamental types (e.g., `int`, `unsigned long int`, `double`) and the `string` of the `std` namespace.
For ordered associative containers only the `std::less` comparator was used.
For the complete procedure, refer to the `AutoWIG.py` file located at the root of the **STL** [repository](https://github.com/StatisKit/STL).
We here aim at presenting how template libraries can be wrapped.
First, we need:
* to detect if the operating system (OS) is a Windows OS or a Unix OS.
```
import platform
is_windows = any(platform.win32_ver())
```
On Windows OSes, the Visual Studio version used to compile the future wrappers must be given.
If the **SCons** tool is used, however, this version is already known.
* to import **subprocess**.
```
import subprocess
```
* to detect the **Git** repository root
```
import os
GIT_ROOT = subprocess.check_output('git rev-parse --show-toplevel', shell=True).decode()
GIT_ROOT = GIT_ROOT.replace('/', os.sep).strip()
GIT_ROOT = os.path.join(GIT_ROOT, 'share', 'git', 'STL')
from devops_tools import describe
os.environ['GIT_DESCRIBE_VERSION'] = describe.git_describe_version(GIT_ROOT)
os.environ['GIT_DESCRIBE_NUMBER'] = describe.git_describe_number(GIT_ROOT)
os.environ['DATETIME_DESCRIBE_VERSION'] = describe.datetime_describe_version(GIT_ROOT)
os.environ['DATETIME_DESCRIBE_NUMBER'] = describe.datetime_describe_number(GIT_ROOT)
```
In this notebook, we do not need to import **AutoWIG** since **SCons** is configured to use the **Boost.Python** tool installed with **AutoWIG** that can be used to generate wrappers (see the `../git/STL/src/cpp/SConscript` file).
```
SCONSCRIPT = os.path.join(GIT_ROOT, 'src', 'cpp', 'SConscript')
!pygmentize {SCONSCRIPT}
```
The controller is registered in the `../git/STL/src/cpp/AutoWIG.py` file
```
AUTOWIG = os.path.join(GIT_ROOT, 'src', 'cpp', 'AutoWIG.py')
!pygmentize {AUTOWIG}
```
Then, in addition to the **STL** library, the **StatisKit.STL** library has to be installed in order to have access to some functionalities.
To do so, we use available **Conda** recipes.
```
subprocess.call('conda remove libstatiskit_stl -y', shell=True)
SCONSIGN = os.path.join(GIT_ROOT, '.sconsign.dblite')
if is_windows:
subprocess.call('del ' + SCONSIGN, shell=True)
else:
subprocess.call('rm ' + SCONSIGN, shell=True)
CONDA_RECIPE = os.path.join(GIT_ROOT, 'etc', 'conda', 'libstatiskit_stl')
subprocess.check_call('conda build ' + CONDA_RECIPE + ' -c statiskit -c defaults --override-channels',
shell=True)
subprocess.check_call('conda install -y libstatiskit_stl --use-local -c statiskit -c defaults --override-channels',
shell=True)
```
As presented below, in order to wrap a template library, the user needs to write headers containing aliases for desired template class instantiations (see the `../git/STL/src/cpp/STL.h` file).
```
TYPEDEFS = os.path.join(GIT_ROOT, 'src', 'cpp', 'STL.h')
!pygmentize {TYPEDEFS}
```
Once these preliminaries are done, we can proceed to the actual generation of wrappers for the **STL** library.
To do so, we need then to install the *C++* headers.
This is done using the `cpp` target in **SCons**.
```
if is_windows:
subprocess.call('del ' + SCONSIGN, shell=True)
else:
subprocess.call('rm ' + SCONSIGN, shell=True)
subprocess.check_call('scons cpp -C ' + GIT_ROOT, shell=True)
```
Once the headers have been installed in the system, we parse headers with relevant compilation flags.
This is done using the `autowig` target in **SCons**.
```
if is_windows:
subprocess.call('del ' + SCONSIGN, shell=True)
else:
subprocess.call('rm ' + SCONSIGN, shell=True)
subprocess.check_call('scons autowig -C ' + GIT_ROOT, shell=True)
```
Here is the list of the generated wrappers (untracked files).
```
!git -C {GIT_ROOT} status
```
And here, we present the wrappers generated for the `std::vector< int >` class.
```
WRAPPER = os.path.join(GIT_ROOT, 'src', 'py', 'wrapper',
'wrapper_6b9ae5eac40858c9a0f5e6e21c15d1d3.cpp')
!pygmentize {WRAPPER}
```
Once the wrappers are written on disk, we need to compile and install the *Python* bindings.
To do so, we use available **Conda** recipes.
```
subprocess.call('conda remove python-statiskit_stl -y', shell=True)
if is_windows:
subprocess.call('del ' + SCONSIGN, shell=True)
else:
subprocess.call('rm ' + SCONSIGN, shell=True)
CONDA_RECIPE = os.path.join(GIT_ROOT, 'etc', 'conda', 'python-statiskit_stl')
subprocess.check_call('conda build ' + CONDA_RECIPE + ' -c statiskit -c defaults --override-channels',
shell=True)
subprocess.check_call('conda install -y python-statiskit_stl --use-local -c statiskit -c defaults --override-channels',
shell=True)
```
Finally, we can hereafter use the *C++* library in the *Python* interpreter.
```
from statiskit.stl import VectorInt
v = VectorInt()
v.push_back(-1)
v.push_back(0)
v.push_back(1)
v
list(v)
v[0]
v[0] = -2
v[0]
```
Here is a report concerning objects wrapped using this notebook.
```
import fp17
import os
import pickle
with open(os.path.join(os.environ['SITE_SCONS'],
'site_autowig',
'ASG',
'statiskit_stl.pkl'), 'rb') as filehandler:
asg = pickle.load(filehandler)
fp17.report(asg)
```
```
#hide
#colab
# attach gdrive holding repo
from google.colab import drive
drive.mount('/content/drive')
#default_exp multi_core.lr_find
```
# Multi Core LR Find XLA Extensions
> Classes to replace LRFinder, and patches to Learner, to support running lr_find using multi-core TPUs
Modifications to the existing `LRFinder` callback are needed in order to run `lr_find` on multiple TPU cores. An equivalent `xla_lr_find` method is patched onto `Learner` so it can run on multiple TPU cores.
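Underneath, this kind of patching is ordinary Python attribute assignment (fastcore's `@patch` decorator wraps the same idea). Here is a minimal illustrative sketch with stand-in names, not the actual fastai `Learner` or the real `xla_lr_find` logic:

```
# Attach a new method to an existing class after its definition --
# the same mechanism used to patch xla_lr_find onto Learner.
class Learner:                  # stand-in class, not fastai's Learner
    def __init__(self, lr=1e-3):
        self.lr = lr

def xla_lr_find(self):          # hypothetical placeholder logic
    return self.lr * 10

Learner.xla_lr_find = xla_lr_find   # the "patch": plain attribute assignment

learn = Learner()
print(learn.xla_lr_find())      # about 0.01
```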
```
#hide
#colab
!pip install -Uqq cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.7-cp36-cp36m-linux_x86_64.whl
#hide
#colab
# !pip install -Uqq git+https://github.com/fastai/fastai.git
!pip install -Uqq fastai --upgrade
#hide
#colab
!pip install -qqq nbdev
#hide
#colab
# !pip install -Uqq git+https://github.com/butchland/fastai_xla_extensions.git
#hide
#colab
# !pip install -Uqq git+https://github.com/butchland/my_timesaver_utils.git
#hide
#colab
!curl -s https://course19.fast.ai/setup/colab | bash
#hide
#colab
%cd /content
!ln -s /content/drive/MyDrive/fastai_xla_extensions fastai_xla_extensions
#hide
!pip freeze | grep torch
!pip freeze | grep fast
!pip freeze | grep timesaver
!pip freeze | grep nbdev
# hide
# start of kernel
#hide
from nbdev.showdoc import *
#hide
# colab
%cd /content/fastai_xla_extensions
#exporti
from fastai_xla_extensions.utils import xla_imported
from fastai_xla_extensions.misc_utils import *
from fastai_xla_extensions.multi_core.base import *
from fastai_xla_extensions.multi_core.learner import *
from fastai_xla_extensions.multi_core.callback import *
#hide
#colab
%cd /content
#exporti
try:
import torch_xla
except:
pass
#exporti
if xla_imported():
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
#hide
#local
# fake out torch_xla modules if not running on xla supported envs
if not xla_imported():
# replace torch xla modules with fake equivalents
from types import SimpleNamespace
torch_xla = SimpleNamespace (
)
from typing import Union,BinaryIO
import os
import pickle
import torch.cuda
def fake_opt_step(opt,barrier=False):
opt.step()
def fake_device(n=None, devkind=None):
gpu_available = torch.cuda.is_available()
if gpu_available:
return torch.device(torch.cuda.current_device())
return torch.device('cpu')
def fake_save(obj, f: Union[str, os.PathLike, BinaryIO],
master_only=True, global_master=False):
return torch.save(obj,f,pickle_module=pickle,
pickle_protocol=2,
_use_new_zipfile_serialization=True)
def fake_rate():
return 230.20
def fake_global_rate():
return 830.10
def fake_add(*args,**kwargs):
pass
def fake_RateTracker():
return SimpleNamespace(
rate = fake_rate,
global_rate = fake_global_rate,
add = fake_add
)
def fake_xrt_world_size():
return 1
def fake_get_ordinal():
return 0
xm = SimpleNamespace(
optimizer_step = fake_opt_step,
xla_device = fake_device,
save = fake_save,
RateTracker = fake_RateTracker,
master_print = print,
xrt_world_size = fake_xrt_world_size,
get_ordinal = fake_get_ordinal
)
def fake_metrics_report():
return "Fake Metrics Report \n\n\n\n"
met = SimpleNamespace (
metrics_report = fake_metrics_report
)
class FakeParallelLoader:
def __init__(self, loader, *args):
self.loader = loader
def per_device_loader(self,device):
return self.loader
pl = SimpleNamespace(
ParallelLoader = FakeParallelLoader
)
def fake_MpModelWrapper(o):
return o
def fake_run(f,*args, **kwargs):
return f(*args,**kwargs)
def fake_MpSerialExecutor():
return SimpleNamespace(
run = fake_run
)
def fake_spawn(f, args=None, nprocs=0, start_method=None):
return f(0,*args)
xmp = SimpleNamespace (
MpModelWrapper = fake_MpModelWrapper,
MpSerialExecutor = fake_MpSerialExecutor,
spawn = fake_spawn
)
xu = SimpleNamespace (
)
#exporti
# from fastai.vision.all import *
# from fastai_xla_extensions.all import *
#export
from fastai.callback.core import Callback
from fastai.learner import CancelValidException
class SkipValidationCallback(Callback):
order,run_valid = -9, False
# raise CancelValidException before XLATrainingCallback.before_validate
# to prevent call to wrap_parallel_loader on before_validate
def before_validate(self):
raise CancelValidException()
def after_cancel_validate(self):
if getattr(self.learn,'inner_xla', False):
xm.mark_step()
#export
from fastai.callback.schedule import ParamScheduler, SchedExp
class XLALRFinder(ParamScheduler):
"Training with exponentially growing learning rate"
def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
if is_listy(start_lr):
self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
self.num_it,self.stop_div = num_it,stop_div
self.skip_batch = False
self.num_losses = 0
def before_fit(self):
super().before_fit()
# no need to save orig weights
# since learner instances are transient on spawned procs
# self.learn.save('_tmp')
self.best_loss = float('inf')
self.skip_batch = False
self.num_losses = 0
# dont report losses while running lrfind (override sync_recorder)
# run after sync_recorder.before_fit (sync_recorder.order == 55)
# while param scheduler order == 60
if getattr(self.learn,'inner_xla',False) \
and xm.is_master_ordinal() and hasattr(self.learn, 'sync_recorder'):
self.learn.logger = noop
self.learn.sync_recorder._sync_stats_log = noop
def before_batch(self):
if self.skip_batch:
return
self._update_val(self.train_iter/self.num_it)
def after_batch(self):
if self.skip_batch:
return
super().after_batch()
smooth_loss = self.smooth_loss.item() # move xla tensor to cpu
self.num_loss = len(self.recorder.losses)
if smooth_loss < self.best_loss:
self.best_loss = smooth_loss
# handle continuation of batch iteration until all batches exhausted
if smooth_loss > 4*self.best_loss and self.stop_div:
self.skip_batch = True
return
if self.train_iter >= self.num_it:
self.skip_batch = True
return
def after_fit(self):
# no need to load old weights since these will be transient
# self.learn.opt.zero_grad()
# Need to zero the gradients of the model before detaching the optimizer for future fits
# tmp_f = self.path/self.model_dir/'_tmp.pth'
# if tmp_f.exists():
# self.learn.load('_tmp', with_opt=True)
# os.remove(tmp_f)
if not getattr(self.learn,'inner_xla', False):
return # skip if not on spawned process
if not xm.is_master_ordinal(): return
if not self.skip_batch: # completed w/o copying lrs and losses from recorder to plot_data
self.num_loss = len(self.recorder.losses)
self.recorder.losses = self.recorder.losses[: self.num_loss]
self.recorder.lrs = self.recorder.lrs[: self.num_loss]
num_iters = len(self.recorder.iters)
for i, iter in enumerate(self.recorder.iters):
if iter >= self.num_it:
num_iters = i + 1
break
self.recorder.iters = self.recorder.iters[:num_iters]
self.recorder.values = self.recorder.values[:num_iters]
self.recorder.dump_attrs() # rewrite updated attrs
#export
def xla_run_lr_find(rank, learner_args, add_args, lr_find_args, ctrl_args):
'run xla lr_find on spawned processes'
xm.rendezvous('start_run_lrfind')
# print(f'xla {rank} : start run lrfind')
sync_valid = True
learner = make_xla_child_learner(rank, sync_valid, learner_args, add_args, ctrl_args)
num_it = lr_find_args['num_it']
n_epoch = num_it//len(learner.dls.train) + 1
lr_find_cb = XLALRFinder(**lr_find_args)
skip_valid_cb = SkipValidationCallback()
with learner.no_logging():
learner.fit(n_epoch, cbs=[lr_find_cb, skip_valid_cb])
#export
from fastai.learner import Learner
from fastai.callback.schedule import SuggestedLRs
from fastai.basics import patch
from fastai.torch_core import tensor
@patch
def get_suggested_lrs(self:Learner, num_it):
'compute Suggested LRs'
lrs,losses = tensor(self.recorder.lrs[num_it//10:-5]),tensor(self.recorder.losses[num_it//10:-5])
if len(losses) == 0: return
lr_min = lrs[losses.argmin()].item()
grads = (losses[1:]-losses[:-1]) / (lrs[1:].log()-lrs[:-1].log())
lr_steep = lrs[grads.argmin()].item()
return SuggestedLRs(lr_min/10.,lr_steep)
#hide_input
show_doc(Learner.get_suggested_lrs)
#export
from fastai.learner import Learner
from fastcore.basics import patch
from fastcore.meta import delegates
from fastcore.foundation import L
from fastai.callback.progress import ProgressCallback
@patch
@delegates(Learner.lr_find, but='num_cores,start_method')
def xla_lr_find(self:Learner, num_cores=8, start_method='fork', **kwargs):
'multi core xla equivalent of `lr_find`'
# default params for lr_find
lr_find_args = {
'start_lr': 1e-7,
'end_lr': 10.,
'num_it': 100,
'stop_div': True
}
has_progress = 'progress' in L(self.cbs).attrgot('name')
show_plot = True
suggestions = True
# remove show_plot and suggestions param
if 'show_plot' in kwargs:
show_plot = kwargs.pop('show_plot')
if 'suggestions' in kwargs:
suggestions = kwargs.pop('suggestions')
# override default with kwargs
lr_find_args = {**lr_find_args, **kwargs}
ctrl_args = self.pre_xla_fit()
learner_args, add_args = self.pack_learner_args()
xmp.spawn(xla_run_lr_find,
args=(learner_args, add_args, lr_find_args, ctrl_args),
nprocs=num_cores,
start_method=start_method)
# self.recorder.reload_attrs()
# self.recorder.reload_hps()
# if has_progress and 'progress' not in L(self.cbs).attrgot('name'):
# self.add_cbs([ProgressCallback])
self.post_xla_fit(ctrl_args)
if show_plot:
self.recorder.plot_lr_find()
if suggestions:
return self.get_suggested_lrs(lr_find_args['num_it'])
```
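The arithmetic inside `get_suggested_lrs` — the minimum-loss LR backed off by 10x, plus the LR where the loss descends most steeply — can be sanity-checked with a plain-numpy re-implementation on synthetic data (a mirror of the logic above, not fastai's API):

```python
import numpy as np

def suggested_lrs(lrs, losses, num_it):
    # mirror get_suggested_lrs: drop the warm-up points and the last few
    lrs = np.asarray(lrs)[num_it // 10:-5]
    losses = np.asarray(losses)[num_it // 10:-5]
    lr_min = lrs[losses.argmin()] / 10.0            # min-loss LR, backed off 10x
    grads = np.diff(losses) / np.diff(np.log(lrs))  # loss slope in log-LR space
    lr_steep = lrs[grads.argmin()]                  # point of steepest descent
    return lr_min, lr_steep

# synthetic sweep: loss is a parabola in log10(lr) with its minimum at lr = 1e-3
lrs = np.logspace(-7, 1, 100)
losses = (np.log10(lrs) + 3) ** 2
lr_min, lr_steep = suggested_lrs(lrs, losses, num_it=100)
print(lr_min, lr_steep)
```

On this curve the steepest-descent suggestion sits well below the backed-off minimum, which is the usual shape of an LR-finder plot.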
## Test out routines
```
#colab
from fastai.vision.all import *
path = untar_data(URLs.MNIST_TINY)
# path = untar_data(URLs.MNIST)
#colab
data = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
get_y=parent_label,
# splitter=GrandparentSplitter(train_name='training', valid_name='testing'),
splitter=GrandparentSplitter(),
item_tfms=Resize(28),
batch_tfms=[]
)
#colab
# dls = data.dataloaders(path, bs=64)
dls = data.dataloaders(path, bs=8)
#colab
learner = cnn_learner(dls, resnet18, metrics=accuracy, concat_pool=False, pretrained=False)
#colab
learner.unfreeze()
#colab
learner.xla_fit_one_cycle(5,lr_max=slice(8e-4))
#colab
# %%time
learner.xla_lr_find(stop_div=True,end_lr=100, num_it=400)
# learner.xla_lr_find()
#colab
# learner.xla_fit_one_cycle(5, lr_max=slice(0.026))
```
<a href="https://colab.research.google.com/github/vaibhavmishra1/Deep_Learning_with_Keras/blob/master/Deep_Dream_in_Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from keras.applications import inception_v3
from keras import backend as K
K.set_learning_phase(0)
model = inception_v3.InceptionV3(weights='imagenet',include_top=False)
model.summary()
layer_contributions = {
'mixed2': 0.2,
'mixed3': 3.,
'mixed4': 2.,
'mixed5': 1.5,
}
layer_dict = dict([(layer.name, layer) for layer in model.layers])
loss = K.variable(0.)
for layer_name in layer_contributions:
coeff = layer_contributions[layer_name]
activation = layer_dict[layer_name].output
scaling = K.prod(K.cast(K.shape(activation), 'float32'))
loss += coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :])) / scaling
dream = model.input
grads = K.gradients(loss, dream)[0]
grads /= K.maximum(K.mean(K.abs(grads)), 1e-7)
outputs = [loss, grads]
fetch_loss_and_grads = K.function([dream], outputs)
def eval_loss_and_grads(x):
outs = fetch_loss_and_grads([x])
loss_value = outs[0]
grad_values = outs[1]
return loss_value, grad_values
def gradient_ascent(x, iterations, step, max_loss=None):
for i in range(iterations):
loss_value, grad_values = eval_loss_and_grads(x)
if max_loss is not None and loss_value > max_loss:
break
print('...Loss value at', i, ':', loss_value)
x += step * grad_values
return x
import scipy
from keras.preprocessing import image
def resize_img(img, size):
img = np.copy(img)
factors = (1,float(size[0]) / img.shape[1],float(size[1]) / img.shape[2],1)
return scipy.ndimage.zoom(img, factors, order=1)
def save_img(img, fname):
pil_img = deprocess_image(np.copy(img))
scipy.misc.imsave(fname, pil_img)
def preprocess_image(image_path):
img = image.load_img(image_path)
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = inception_v3.preprocess_input(img)
return img
def deprocess_image(x):
if K.image_data_format() == 'channels_first':
x = x.reshape((3, x.shape[2], x.shape[3]))
x = x.transpose((1, 2, 0))
else:
x = x.reshape((x.shape[1], x.shape[2], 3))
x /= 2.
x += 0.5
x *= 255.
x = np.clip(x, 0, 255).astype('uint8')
return x
import numpy as np
step = 0.01
num_octave = 3
octave_scale = 1.4
iterations = 20
max_loss = 10.
base_image_path = '/content/rose-blue-flower-rose-blooms-67636.jpeg'
img = preprocess_image(base_image_path)
original_shape = img.shape[1:3]
successive_shapes = [original_shape]
for i in range(1, num_octave):
shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape])
successive_shapes.append(shape)
successive_shapes = successive_shapes[::-1]
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])
import matplotlib.pyplot as plt
for shape in successive_shapes:
print('Processing image shape', shape)
print(img.shape)
img = resize_img(img, shape)
img = gradient_ascent(img,
iterations=iterations,
step=step,
max_loss=max_loss)
upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
same_size_original = resize_img(original_img, shape)
lost_detail = same_size_original - upscaled_shrunk_original_img
img += lost_detail
shrunk_original_img = resize_img(original_img, shape)
img2=img[0,:,:,:]
print("image shape=",img2.shape)
plt.imshow(img2)
```
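Note that `scipy.misc.imsave`, used in the `save_img` helper above, was removed in SciPy 1.2. A drop-in replacement sketch using Pillow (assuming Pillow is installed):

```python
import os
import tempfile

import numpy as np
from PIL import Image

def save_img(img, fname):
    # expects a uint8 HxWx3 array, i.e. the output of deprocess_image
    Image.fromarray(np.asarray(img, dtype='uint8')).save(fname)

# quick check on a tiny black image
path = os.path.join(tempfile.gettempdir(), 'black.png')
save_img(np.zeros((4, 4, 3), dtype='uint8'), path)
print(Image.open(path).size)
```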
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Register Model and deploy as Webservice
This example shows how to deploy a Webservice in step-by-step fashion:
1. Register Model
2. Deploy Model as Webservice
## Prerequisites
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
```
### Register Model
You can add tags and descriptions to your Models. Note you need to have a `sklearn_regression_model.pkl` file in the current directory. This file is generated by the 01 notebook. The below call registers that file as a Model with the same name `sklearn_regression_model.pkl` in the workspace.
Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric.
```
from azureml.core.model import Model
model = Model.register(model_path="sklearn_regression_model.pkl",
model_name="sklearn_regression_model.pkl",
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
```
### Create Environment
You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered as part of the Webservice deployment. Note, however, that only Environments created with azureml-defaults version 1.0.48 or later will work with this new handling.
More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb).
```
from azureml.core import Environment
env = Environment.from_conda_specification(name='deploytocloudenv', file_path='myenv.yml')
# This is optional at this point
# env.register(workspace=ws)
```
## Create Inference Configuration
There is now support for a source directory: you can upload an entire folder from your local machine as dependencies for the Webservice.
Note: in that case, your entry_script, conda_file, and extra_docker_file_steps paths are relative paths to the source_directory path.
Sample code for using a source directory:
```python
inference_config = InferenceConfig(source_directory="C:/abc",
runtime= "python",
entry_script="x/y/score.py",
conda_file="env/myenv.yml",
extra_docker_file_steps="helloworld.txt")
```
- source_directory = holds the source path as a string; this entire folder gets added to the image, so it's easy to access any files within this folder or its subfolders
- runtime = which runtime to use for the image. Currently supported runtimes are 'spark-py' and 'python'
- entry_script = contains the logic specific to initializing your model and running predictions
- conda_file = manages conda and python package dependencies
- extra_docker_file_steps = optional: any extra steps you want to inject into the Docker file
```
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```
### Deploy Model as Webservice on Azure Container Instance
Note that the service creation can take a few minutes.
```
from azureml.core.webservice import AciWebservice, Webservice
from azureml.exceptions import WebserviceException
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
aci_service_name = 'aciservice1'
try:
# if you want to get the existing service, use the command below
# since the ACI name needs to be unique within the subscription, delete the existing ACI if any
# we use aci_service_name to create the Azure ACI
service = Webservice(ws, name=aci_service_name)
if service:
service.delete()
except WebserviceException as e:
print()
service = Model.deploy(ws, aci_service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
```
#### Test web service
```
import json
test_sample = json.dumps({'data': [
[1,2,3,4,5,6,7,8,9,10],
[10,9,8,7,6,5,4,3,2,1]
]})
test_sample_encoded = bytes(test_sample, encoding='utf8')
prediction = service.run(input_data=test_sample_encoded)
print(prediction)
```
#### Delete ACI to clean up
```
service.delete()
```
### Model Profiling
You can also take advantage of the profiling feature to estimate CPU and memory requirements for models.
```python
profile = Model.profile(ws, "profilename", [model], inference_config, test_sample)
profile.wait_for_profiling(True)
profiling_results = profile.get_results()
print(profiling_results)
```
### Model Packaging
If you want to build a Docker image that encapsulates your model and its dependencies, you can use the model packaging option. The output image will be pushed to your workspace's ACR.
You must include an Environment object in your inference configuration to use `Model.package()`.
```python
package = Model.package(ws, [model], inference_config)
package.wait_for_creation(show_output=True) # Or show_output=False to hide the Docker build logs.
package.pull()
```
Instead of a fully-built image, you can also generate a Dockerfile and download all the assets needed to build an image on top of your Environment.
```python
package = Model.package(ws, [model], inference_config, generate_dockerfile=True)
package.wait_for_creation(show_output=True)
package.save("./local_context_dir")
```
# KAIM 2018
## Ensemble Methods - Bagging and Boosting
### Anand Subramanian
```
import numpy as np
import pylab as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_hastie_10_2
import warnings
import pandas as pd
import seaborn as sns
sns.set_style('darkgrid')
warnings.filterwarnings("ignore", category=DeprecationWarning)
np.random.seed(8088)
class Bagging(object):
def __init__(self, model, x_data, y_data):
self.x_data = np.array(x_data)
self.y_data = np.array(y_data)
self.N = self.x_data.shape[0]
self.model = model
def bagging_train(self, Num_learners = 1):
self.learners = []
for k in range(Num_learners):
learner = self.model()
for i in range(10):
# Bootstrap data
idx = np.random.randint(self.N, size = int(self.N/Num_learners))
x_train = self.x_data[idx, :]
y_train = self.y_data[idx]
# Train Classifiers
train_model = learner.fit(x_train, y_train)
self.learners.append(learner)
def get_predictions(self,x_test, y_test):
y_pred = np.empty([x_test.shape[0], 0])
for i, learner in enumerate(self.learners):
y_pred = np.hstack((y_pred, learner.predict(x_test).reshape(-1,1)))
# Plurality Voting
y_pred += 1
preds = []
for row in y_pred:
preds.append(np.argmax(np.bincount(row.astype(int)))-1)
self.test_accuracy = 100*(y_test == preds).sum()/y_test.shape[0]
return preds, self.test_accuracy
"""==================================================================================================="""
class Adaboost(object):
def __init__(self, model, x_data, y_data):
self.x_data = x_data
self.y_data = y_data
self.N = self.x_data.shape[0]
self.weights = np.ones(self.N)/self.N
self.eps = []
self.alpha = []
self.learners = []
self.model = model
def boost(self, Num_learners= 50):
self.weights = np.ones(self.N)/self.N
self.eps = []
self.alpha = []
self.learners = []
for k in range(Num_learners):
learner = self.model(max_depth = 1, random_state = 1)
#Train the classifier
train_model = learner.fit(self.x_data, self.y_data, sample_weight=self.weights)
# Get predictions
y_pred = learner.predict(self.x_data)
e_k = np.sum((y_pred != self.y_data)*self.weights)
self.eps.append(e_k)
alpha_k = 0.5*np.log((1 - e_k) / float(e_k))
self.alpha.append(alpha_k)
# Update the weights
I = np.array([1.0 if x == True else -1.0 for x in (y_pred != self.y_data)])
self.weights = np.multiply(self.weights, np.exp(alpha_k*I))
self.weights = self.weights/ (np.sum(self.weights))
# added learner to the list
self.learners.append(learner)
def get_predictions(self, x_test, y_test):
y_pred = np.zeros(x_test.shape[0])
for i, learner in enumerate(self.learners):
#print(learner.predict(x_test).shape, y_pred.shape, self.alpha[i])
y_pred += self.alpha[i]*learner.predict(x_test)
y_pred = np.sign(y_pred)
# calculate test accuracy
self.test_accuracy = 100*(y_test == y_pred).sum()/y_test.shape[0]
return y_pred, self.test_accuracy
x, y = make_hastie_10_2()
df = pd.DataFrame(x)
df['Y'] = y
# Split into training and test set
train, test = train_test_split(df, test_size = 0.2)
X_train, Y_train = train.iloc[:,:-1], train.iloc[:,-1]
X_test, Y_test = test.iloc[:,:-1], test.iloc[:,-1]
# Bagging
clf_tree = DecisionTreeClassifier()
ensemble = Bagging(DecisionTreeClassifier,X_train, Y_train)
ensemble.bagging_train(Num_learners= 70)
y_pred, test_accuracy = ensemble.get_predictions(X_test, Y_test)
print('Test Accuracy of Bagged Decision Trees : %.4f %% '%test_accuracy)
# Boosting
clf_tree = DecisionTreeClassifier(max_depth = 1, random_state = 1)
ensemble = Adaboost(DecisionTreeClassifier,X_train, Y_train)
# Without Boosting (Single Classifier)
ensemble.boost(Num_learners= 1)
y_pred, test_accuracy = ensemble.get_predictions(X_test, Y_test)
print('Test Accuracy of Decision Stump : %.4f %% '%test_accuracy)
# With Boosting (Multiple Classifiers)
ensemble.boost(Num_learners= 400)
y_pred, test_accuracy =ensemble.get_predictions(X_test, Y_test)
print('Test Accuracy of Boosted Decision Trees : %.4f %% '%test_accuracy)
```
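For comparison, scikit-learn ships ready-made versions of both ensembles. A sketch on the same Hastie data (`BaggingClassifier` and `AdaBoostClassifier` are real sklearn classes; the hyperparameters mirror the ones used above):

```python
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_hastie_10_2(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# 70 bagged full-depth trees, as in the Bagging class above
bag = BaggingClassifier(DecisionTreeClassifier(),
                        n_estimators=70, random_state=0).fit(X_tr, y_tr)
# 400 boosted decision stumps, as in the Adaboost class above
boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                           n_estimators=400, random_state=0).fit(X_tr, y_tr)

print('bagging accuracy :', bag.score(X_te, y_te))
print('boosting accuracy:', boost.score(X_te, y_te))
```

The built-in versions are a useful baseline against which to check the hand-rolled implementations above.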
# Notebook Segmentation
The setup is guaranteed to work on **Christofari** with an **NVIDIA Tesla V100** and the **jupyter-cuda10.1-tf2.3.0-pt1.6.0-gpu:0.0.82** image
## Detectron2 - advanced level
This notebook demonstrates training an instance segmentation model for text in school notebooks using the detectron2 framework.\
**Augmentations** were applied + the **X101-FPN** model.
# 0. Installing libraries
Installation of the libraries under which this baseline runs.
```
!nvidia-smi
# !pip install gdown
# !gdown https://drive.google.com/uc?id=1VOojDMJe7RAxryQ2QKXrqA7CvhsnzJ_z
# ^ competition data https://ods.ai/competitions/nto_final_21-22/data
# %%capture
# !unzip -u /home/jovyan/nto_final_data.zip
# !mv data/train_segmentation data/train
# !pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# !pip install git+https://github.com/facebookresearch/detectron2.git
# !pip install opencv-python
# !pip install tensorflow==2.1.0
```
## 1. Load the libraries needed to build and train the model
```
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore")
import detectron2
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.data.datasets import register_coco_instances, load_coco_json
from detectron2.data import detection_utils as utils
from detectron2.engine import DefaultTrainer
from detectron2.evaluation.evaluator import DatasetEvaluator
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.modeling import build_model
from detectron2.evaluation import COCOEvaluator
import detectron2.data.transforms as T
from detectron2.data import build_detection_train_loader, build_detection_test_loader
import torch, torchvision
from tqdm import tqdm
import numpy as np
import gc, cv2, random, json, os, copy
import shutil
from IPython.display import Image
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
from matplotlib import pyplot as plt
import logging
logger = logging.getLogger('detectron2')
logger.setLevel(logging.CRITICAL)
def clear_cache():
'''Function for clearing garbage out of memory'''
gc.collect()
torch.cuda.empty_cache()
gc.collect()
```
Before moving on to loading the data, let's check whether GPU resources are available to us.
```
print('GPU: ' + str(torch.cuda.is_available()))
```
# 2. Validation dataset
To validate our models, it is worth carving a validation dataset out of the training data. We split the dataset into two parts - one for training and one for validation - by simply creating two new annotation files into which we separately write the original annotation information.
```
# Load the train annotations
with open('data/train/annotations.json') as f:
annotations = json.load(f)
len(annotations['images']) # number of images in the set
# Empty dict for the validation annotations
annotations_val = {}
# The category list is the same as in train
annotations_val['categories'] = annotations['categories']
# Empty dict for the new train annotations
annotations_train = {}
# The category list is the same as in train
annotations_train['categories'] = annotations['categories']
# Put every 110th image from the original train into validation, and the rest into the new train
annotations_val['images'] = []
annotations_train['images'] = []
for num, img in enumerate(annotations['images']):
if num % 110 == 0:
annotations_val['images'].append(img)
else:
annotations_train['images'].append(img)
# Put into the validation annotation list only those annotations that belong to the validation images.
# And into the new train annotation list - only those that belong to it
val_img_id = [i['id'] for i in annotations_val['images']]
train_img_id = [i['id'] for i in annotations_train['images']]
annotations_val['annotations'] = []
annotations_train['annotations'] = []
for annot in annotations['annotations']:
if annot['image_id'] in val_img_id:
annotations_val['annotations'].append(annot)
elif annot['image_id'] in train_img_id:
annotations_train['annotations'].append(annot)
else:
print('Annotation not found in either set')
# the dataset contains a junk image, 41_3.JPG; it needs to be removed
for i, element in enumerate(annotations_train["images"]):
if element["file_name"] == "41_3.JPG":
print(element["id"])
del annotations_train["images"][i]
for i, element in enumerate(annotations_train["annotations"]):
if element["image_id"] == 405:
print("Done")
del annotations_train["annotations"][i]
try: os.remove("data/train/images/41_3.JPG")
except: pass
clear_cache() # never hurts (almost)
```
Done! The annotations for the validation and the new training set are ready; now we simply save them in json format and place them in their folders. We name the files **annotations_new.json** so that the new annotation set for train (without the val subset) does not overwrite the original annotations.
```
if not os.path.exists('data/val'):
os.makedirs('data/val')
if not os.path.exists('data/val/images'):
os.makedirs('data/val/images')
```
Copy the images that belong to validation into the val/images folder
```
for i in annotations_val['images']:
shutil.copy('data/train/images/' + i['file_name'], 'data/val/images/')
```
Write the new annotation files for train and val.
```
with open('data/val/annotations_new.json', 'w') as outfile:
json.dump(annotations_val, outfile)
with open('data/train/annotations_new.json', 'w') as outfile:
json.dump(annotations_train, outfile)
```
# 3. Dataset registration
Register the datasets in detectron2 so they can later be fed to model training.
```
for d in ['train', 'val']:
DatasetCatalog.register("my_dataset_" + d, lambda d=d: load_coco_json("./data/{}/annotations_new.json".format(d),
image_root= "./data/train/images",\
dataset_name="my_dataset_" + d, extra_annotation_keys=['bbox_mode']))
```
After registration we can load the datasets to inspect them by eye. First, load the training set into **dataset_dicts_train**
```
dataset_dicts_train = DatasetCatalog.get("my_dataset_train")
train_metadata = MetadataCatalog.get("my_dataset_train")
```
And the test set into **dataset_dicts_val**
```
dataset_dicts_val = DatasetCatalog.get("my_dataset_val")
val_metadata = MetadataCatalog.get("my_dataset_val")
```
Let's look at the size of the resulting sets - in python this is done with the **len()** function
```
print('Training set size (images): {}'.format(len(dataset_dicts_train)))
print('Test set size (images): {}'.format(len(dataset_dicts_val)))
```
So we have **922** images for training and **9** for quality checking.
**Let's look at the annotated photos from the validation set**
```
@interact
def show_images(file=range(len(dataset_dicts_val))):
example = dataset_dicts_val[file]
image = utils.read_image(example["file_name"], format="RGB")
plt.figure(figsize=(3,3),dpi=200)
visualizer = Visualizer(image[:, :, ::-1], metadata=val_metadata, scale=0.5)
vis = visualizer.draw_dataset_dict(example)
plt.imshow(vis.get_image()[:, :,::-1])
plt.show()
```
# 4. Model training
**4.1. Defining the configuration**
Before we start working with the model itself, we need to define its parameters and the training specification
We create a configuration and load the model architecture with weights pretrained for object detection (on COCO - a dataset containing $80$ popular object categories and more than $300000$ images).
```
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml")
```
In general, you can also look at other architectures in the [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md).
Now we set the parameters of the model itself and of the training process
```
# Here we determine the minimum width and height
# across the images, so that we can increase the resolution
# of the input image relative to them without losing quality
height, width = 10000, 10000
for element in annotations_train["images"]:
height = min(height, element["height"])
width = min(width, element["width"])
print(height, width)
# Load the names of the training and test sets into the config
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
# once every EVAL_PERIOD iterations we call the DatasetEvaluator class
cfg.TEST.EVAL_PERIOD = 1000
# It often makes sense to make the images slightly smaller so that
# training runs faster. Therefore we can specify the sizes to which the smallest
# and the largest side of the original image will be resized.
cfg.INPUT.MIN_SIZE_TRAIN = 2160
cfg.INPUT.MAX_SIZE_TRAIN = 3130
cfg.INPUT.MIN_SIZE_TEST = cfg.INPUT.MIN_SIZE_TRAIN
cfg.INPUT.MAX_SIZE_TEST = cfg.INPUT.MAX_SIZE_TRAIN
# We also have to tell the model the detection confidence below which it ignores a result.
# That is, if it finds an object in the picture, but the probability of a correct detection is below 0.1,
# it will not report that it found anything.
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.1
# We also have to specify the channel order of the input image. Note that it is Blue Green Red (BGR),
# not the usual RGB. This is a peculiarity of this model.
cfg.INPUT.FORMAT = 'BGR'
# For faster data loading into the model we load in parallel, using 3 worker processes
cfg.DATALOADER.NUM_WORKERS = 3
# The next parameter sets the number of images in the batch on which
# the model performs one training iteration (one weight update).
# The smaller it is, the faster training runs
cfg.SOLVER.IMS_PER_BATCH = 1
# Let's also set the learning rate
cfg.SOLVER.BASE_LR = 0.01
# Tell the model after how many training steps the learning rate should be decreased
cfg.SOLVER.STEPS = (1500,)
# The factor by which the learning rate is decreased is set by the following expression
cfg.SOLVER.GAMMA = 0.1
# Set the total number of training iterations.
cfg.SOLVER.MAX_ITER = 17000
# Specify the number of classes in our dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
# Set how many training steps to wait between saving the model weights to a file. We will later be able
# to load this file to test our trained model on new data.
cfg.SOLVER.CHECKPOINT_PERIOD = cfg.TEST.EVAL_PERIOD
# Set the maximum number of words per page
cfg.TEST.DETECTIONS_PER_IMAGE = 1000
# And specify the name of the folder where model checkpoints and training-process information are saved.
cfg.OUTPUT_DIR = './output'
# If this folder does not exist, create it
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
# If we want to delete the checkpoints of previous models, run this command.
#%rm output/*
class custom_mapper:
def __init__(self, cfg):
self.transform_list = [
T.ResizeShortestEdge(
[cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST],
cfg.INPUT.MAX_SIZE_TEST),
T.RandomBrightness(0.9, 1.1),
T.RandomContrast(0.9, 1.1),
T.RandomSaturation(0.9, 1.1),
T.RandomLighting(0.9)
]
print(f"[custom_mapper]: {self.transform_list}")
def __call__(self, dataset_dict):
dataset_dict = copy.deepcopy(dataset_dict)
image = utils.read_image(dataset_dict["file_name"], format="BGR")
image, transforms = T.apply_transform_gens(self.transform_list, image)
dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
annos = [
utils.transform_instance_annotations(obj, transforms, image.shape[:2])
for obj in dataset_dict.pop("annotations")
if obj.get("iscrowd", 0) == 0
]
instances = utils.annotations_to_instances(annos, image.shape[:2])
dataset_dict["instances"] = utils.filter_empty_instances(instances)
return dataset_dict
def f1_loss(y_true, y_pred):
tp = np.sum(y_true & y_pred)
tn = np.sum(~y_true & ~y_pred)
fp = np.sum(~y_true & y_pred)
fn = np.sum(y_true & ~y_pred)
epsilon = 1e-7
precision = tp / (tp + fp + epsilon)
recall = tp / (tp + fn + epsilon)
f1 = 2 * precision*recall / ( precision + recall + epsilon)
return f1
CHECKPOINTS_RESULTS = []
class F1Evaluator(DatasetEvaluator):
def __init__(self):
self.loaded_true = np.load('data/train/binary.npz')
self.val_predictions = {}
self.f1_scores = []
def reset(self):
self.val_predictions = {}
self.f1_scores = []
def process(self, inputs, outputs):
for input, output in zip(inputs, outputs):
filename = input["file_name"].split("/")[-1]
if filename != "41_3.JPG":
true = self.loaded_true[filename].reshape(-1)
prediction = output['instances'].pred_masks.cpu().numpy()
mask = np.add.reduce(prediction)
mask = (mask > 0).reshape(-1)
self.f1_scores.append(f1_loss(true, mask))
def evaluate(self):
global CHECKPOINTS_RESULTS
result = np.mean(self.f1_scores)
CHECKPOINTS_RESULTS.append(result)
return {"meanF1": result}
class AugTrainer(DefaultTrainer):
@classmethod
def build_train_loader(cls, cfg):
return build_detection_train_loader(cfg, mapper=custom_mapper(cfg))
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
return F1Evaluator()
```
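The `f1_loss` helper defined above can be sanity-checked in isolation; a minimal sketch, where the toy boolean arrays stand in for flattened ground-truth and predicted masks:

```
import numpy as np

def f1_loss(y_true, y_pred):
    # Same metric as in F1Evaluator: F1 score over flattened boolean masks.
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    epsilon = 1e-7
    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)
    return 2 * precision * recall / (precision + recall + epsilon)

y_true = np.array([True, True, False, False])
y_pred = np.array([True, False, True, False])
# tp = 1, fp = 1, fn = 1 -> precision = 0.5, recall = 0.5, F1 = 0.5
print(round(f1_loss(y_true, y_pred), 3))
```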
**4.2. Training the model**
Training is started by the three lines of code below. Warnings may appear during training; they can safely be ignored, as they only report information about the run.
```
%rm output/*
trainer = AugTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
print()
del trainer
clear_cache()
!ls ./output
```
Use the trained model to check quality on the validation set.
```
RESULTS_PER_EPOCH = list(enumerate(CHECKPOINTS_RESULTS, start=1))
RESULTS_PER_EPOCH
# file with validation results for each evaluation run
with open("CHECKPOINTS_RESULTS.txt", "w") as f:
f.write(str(RESULTS_PER_EPOCH))
```
```
!pip install jupyter_dash
!pip install dash_extensions
!pip install dash_bootstrap_components
!pip install wordcloud
from jupyter_dash import JupyterDash
import dash_html_components as html
import dash_core_components as dcc
from dash.dependencies import Output,Input
from dash_extensions import Lottie
import dash_bootstrap_components as dbc
import plotly.express as px
import pandas as pd
from datetime import date
import calendar
from wordcloud import WordCloud
import os
import base64
url_connections = "https://assets9.lottiefiles.com/private_files/lf30_5ttqPi.json"
url_companies = "https://assets9.lottiefiles.com/packages/lf20_EzPrWM.json"
url_msg_in = "https://assets9.lottiefiles.com/packages/lf20_8wREpI.json"
url_msg_out = "https://assets2.lottiefiles.com/packages/lf20_Cc8Bpg.json"
url_reactions = "https://assets2.lottiefiles.com/packages/lf20_nKwET0.json"
url_messages = "https://assets3.lottiefiles.com/packages/lf20_kOcCeF.json"
options = dict(loop=True, autoPlay=True, rendererSettings=dict(preserveAspectRatio='xMidYMid slice'))
#connections
df_cnt = pd.read_csv("Connections.csv")
df_cnt["Connected On"] = pd.to_datetime(df_cnt["Connected On"])
df_cnt["month"] = df_cnt["Connected On"].dt.month
df_cnt["month"] = df_cnt['month'].apply(lambda x : calendar.month_abbr[x])
#invitations
df_invite = pd.read_csv("Invitations.csv")
df_invite["Sent At"] = pd.to_datetime(df_invite["Sent At"])
#reactions
df_react = pd.read_csv("Reactions.csv")
df_react["Date"] = pd.to_datetime(df_react["Date"])
#messages
df_msg = pd.read_csv("messages.csv")
df_msg["DATE"] = pd.to_datetime(df_msg["DATE"])
#image_filename = os.path.join(os.getcwd(), 'linkedin-logo2.png')
#print(image_filename)
image_filename = 'LI-Logo.png' # replace with your own image
encoded_image = base64.b64encode(open(image_filename, 'rb').read())
app = JupyterDash(__name__, external_stylesheets = [dbc.themes.DARKLY]) #theme must always be capitalized
app.layout = dbc.Container([
dbc.Row([
dbc.Col([
dbc.Card([
dbc.CardImg(src='data:image/png;base64,{}'.format(encoded_image))#LinkedIn Logo
],className='mb-2'), #mb - margin bottom (the number indicates the amt of space)
dbc.Card([
dbc.CardBody([
dbc.CardLink("Anuhya Bhagavatula", target = "_blank",
href = "https://linkedin.com/in/anuhyabs")
])
])
], width = 4),
dbc.Col([
dbc.Card([
dbc.CardBody([
dcc.DatePickerRange( #dash core component
id = 'my-date-picker-range',
min_date_allowed=date(1995, 1, 1),
max_date_allowed=date(2050, 1, 1),
start_date=date(2018,1,1),
end_date=date(2021,12,1),
className='mt-2 ml-5'
),
])
],color="secondary",inverse=True, style={'height':'18vh'}),
], width = 8),
],className='mb-2 mt-2'), #className is the Bootstrap attribute that helps specify utility spacing
dbc.Row([
dbc.Col([
dbc.Card([
dbc.CardHeader(Lottie(options = options, width="67%", height = "67%", url = url_connections)),
dbc.CardBody([
html.H6('Connections'),
html.H2(id='content-connections', children="000")
], style={'textAlign':'center'})
])
], width = 2),
dbc.Col([
dbc.Card([
dbc.CardHeader(Lottie(options = options, width="32%", height = "32%", url = url_companies)),
dbc.CardBody([
html.H6('Companies'),
html.H2(id='content-companies', children="000")
], style={'textAlign':'center'})
])
], width = 2),
dbc.Col([
dbc.Card([
dbc.CardHeader(Lottie(options = options, width="25%", height = "25%", url = url_msg_in)),
dbc.CardBody([
html.H6('Invites Received'),
html.H2(id='content-invites-received', children="000")
], style={'textAlign':'center'})
])
], width = 2),
dbc.Col([
dbc.Card([
dbc.CardHeader(Lottie(options = options, width="53%", height = "53%", url = url_msg_out)),
dbc.CardBody([
html.H6('Invites Sent'),
html.H2(id='content-invites-sent', children="000")
], style={'textAlign':'center'})
])
], width = 2),
dbc.Col([
dbc.Card([
dbc.CardHeader(Lottie(options = options, width="25%", height = "25%", url = url_reactions)),
dbc.CardBody([
html.H6('Reactions'),
html.H2(id='content-reactions', children="000")
], style={'textAlign':'center'})
])
], width = 2),
dbc.Col([
dbc.Card([
dbc.CardHeader(Lottie(options = options, width="25%", height = "25%", url = url_messages)),
dbc.CardBody([
html.H6('Messages'),
html.H2(id='content-messages', children="000")
], style={'textAlign':'center'})
])
], width = 2),
],className='mb-2'),
dbc.Row([
dbc.Col([
dbc.Card([
dbc.CardBody([
dcc.Graph(id='line-chart',figure={})
])
])
], width = 7),
dbc.Col([
dbc.Card([
dbc.CardBody([
dcc.Graph(id='bar-chart',figure={})
])
])
], width = 5),
],className='mb-2'),
dbc.Row([
dbc.Col([
dbc.Card([
dbc.CardBody([
dcc.Graph(id='TBD',figure={})
])
])
], width = 3),
dbc.Col([
dbc.Card([
dbc.CardBody([
dcc.Graph(id='pie-chart',figure={})
])
])
], width = 4),
dbc.Col([
dbc.Card([
dbc.CardBody([
dcc.Graph(id='wordcloud',figure={})
])
])
], width = 5),
],className='mb-2')
],fluid = True)
#fluid - False: narrows the content to a single column and leaves a bunch of space on the sides
#fluid - True: fills up the entire width (an area of around 12 columns)
@app.callback(
Output('content-connections','children'),
Output('content-companies','children'),
Output('content-invites-received','children'),
Output('content-invites-sent','children'),
Output('content-reactions','children'),
Output('content-messages','children'),
Input('my-date-picker-range','start_date'),
Input('my-date-picker-range','end_date'),
)
def update_small_cards(start_date, end_date):
#connections
dff_c = df_cnt.copy()
dff_c = dff_c[(dff_c['Connected On']>=start_date) & (dff_c['Connected On']<=end_date)]
connection_num = len(dff_c)
#companies
company_num = len(dff_c['Company'].unique())
#invitations
dff_i = df_invite.copy()
dff_i = dff_i[(dff_i['Sent At']>=start_date) & (dff_i['Sent At']<=end_date)]
in_num = len(dff_i[dff_i['Direction']=='INCOMING'])
out_num = len(dff_i[dff_i['Direction']=='OUTGOING'])
#reactions
dff_r = df_react.copy()
dff_r = dff_r[(dff_r['Date']>= start_date) & (dff_r['Date'] <= end_date)]
react_num = len(dff_r)
#messages
dff_m = df_msg.copy()
dff_m = dff_m[(dff_m['DATE']>= start_date) & (dff_m['DATE']<= end_date)]
msg_num = len(dff_m)
return connection_num, company_num, in_num, out_num, react_num, msg_num
@app.callback(
Output('line-chart','figure'),
Input('my-date-picker-range','start_date'),
Input('my-date-picker-range','end_date'),
)
def update_lineChart(start_date, end_date):
dff = df_cnt.copy()
dff = dff[(dff['Connected On']>= start_date) & (dff['Connected On']<= end_date)]
dff = dff[["month"]].value_counts()
dff = dff.to_frame()
dff.reset_index(inplace=True)
dff.rename(columns={0:'Total Connections'},inplace=True)
fig_line = px.line(dff,x='month',y='Total Connections', template='ggplot2',
title="Total connections by Month Name") #ggplot2 theme has grey bckg and white grid lines
fig_line.update_traces(mode="lines+markers", fill='tozeroy', line={'color':'blue'})
fig_line.update_layout(margin=dict(l = 20, r = 20, t = 30, b = 20))
return fig_line
@app.callback(
Output('bar-chart','figure'),
Input('my-date-picker-range','start_date'),
Input('my-date-picker-range','end_date'),
)
def update_barChart(start_date, end_date):
dff = df_cnt.copy()
dff = dff[(dff['Connected On']>= start_date) & (dff['Connected On']<= end_date)]
dff = dff[["Company"]].value_counts().head(6)
dff = dff.to_frame()
dff.reset_index(inplace=True)
dff.rename(columns={0:'Total Connections'},inplace=True)
fig_bar = px.bar(dff,x='Total Connections',y='Company', template='ggplot2', orientation ='h',
title="Total connections by Company") #ggplot2 theme has grey bckg and white grid lines
#fig_bar.update_yaxes(tickangle=45)
fig_bar.update_layout(margin=dict(l = 20, r = 20, t = 30, b = 20))
fig_bar.update_traces(marker_color='blue')
return fig_bar
@app.callback(
Output('TBD','figure'),
Input('my-date-picker-range','start_date'),
Input('my-date-picker-range','end_date'),
)
def update_TBD(start_date, end_date):
dff = df_react.copy()
dff = dff[(dff['Date']>= start_date) & (dff['Date']<= end_date)]
dff = dff[["Type"]].value_counts()
dff = dff.to_frame()
dff.reset_index(inplace=True)
dff.rename(columns={0:'Total Connections'},inplace=True)
fig_tbd = px.bar(dff,x='Type',y='Total Connections', template='ggplot2',
title="Total Reactions") #ggplot2 theme has grey bckg and white grid lines
#fig_bar.update_yaxes(tickangle=45)
fig_tbd.update_layout(margin=dict(l = 20, r = 20, t = 30, b = 20))
fig_tbd.update_traces(marker_color='blue')
return fig_tbd
@app.callback(
Output('pie-chart','figure'),
Input('my-date-picker-range','start_date'),
Input('my-date-picker-range','end_date'),
)
def update_pieChart(start_date, end_date):
dff = df_msg.copy()
dff = dff[(dff['DATE']>= start_date) & (dff['DATE']<= end_date)]
msg_sent = len(dff[dff['FROM']=='Anuhya Bhagavatula'])
msg_rcvd = len(dff[dff['FROM']!='Anuhya Bhagavatula'])
fig_pie = px.pie(names=['Sent','Received'], values=[msg_sent, msg_rcvd], template='ggplot2',
title="Messages Sent & Received") #ggplot2 theme has grey bckg and white grid lines
fig_pie.update_layout(margin=dict(l = 20, r = 20, t = 30, b = 20))
fig_pie.update_traces(marker_colors=['blue','red'])
return fig_pie
@app.callback(
Output('wordcloud','figure'),
Input('my-date-picker-range','start_date'),
Input('my-date-picker-range','end_date'),
)
def update_wordcloud(start_date, end_date):
dff = df_cnt.copy()
dff = dff.Position[(dff['Connected On']>=start_date) & (dff['Connected On']<=end_date)].astype(str)
myWordcloud = WordCloud(
background_color = 'white',
height = 275
).generate(' '.join(dff))
fig_wc = px.imshow(myWordcloud, template='ggplot2',
title="Total connections by Position") #ggplot2 theme has grey bckg and white grid lines
fig_wc.update_layout(margin=dict(l = 20, r = 20, t = 30, b = 20))
fig_wc.update_xaxes(visible=False)
fig_wc.update_yaxes(visible=False)
return fig_wc
app.run_server(mode='jupyterlab',port=8001)
```
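A note on the filtering used in the callbacks above: each one compares a `datetime64` column against the plain `'YYYY-MM-DD'` strings that the `DatePickerRange` supplies, which works because pandas coerces the strings for the comparison. A minimal sketch on made-up data:

```
import pandas as pd

# Hypothetical stand-in for one of the dashboard's dataframes.
df_demo = pd.DataFrame({
    'Connected On': pd.to_datetime(['2017-05-01', '2019-03-15', '2020-11-30']),
    'Company': ['A', 'B', 'C'],
})

# Date strings in the format delivered by dcc.DatePickerRange.
start_date, end_date = '2018-01-01', '2021-12-01'

# pandas coerces the strings to timestamps for the comparison.
mask = (df_demo['Connected On'] >= start_date) & (df_demo['Connected On'] <= end_date)
print(len(df_demo[mask]))  # 2
```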
## 1. Comparison between Hybrid Strategies (by changing Exploration part) and DATE only
```
import numpy as np
import pandas as pd
import glob
import csv
import traceback
import datetime
import os
pd.options.display.max_columns=50
results = glob.glob('../results/performances/fldsh-*') # fld4- or quick- or www21- or fld-result- or fld4-result- or fldsh or flds
list1, list2 = zip(*sorted(zip([os.stat(result).st_size for result in results], results)))
```
### Collecting Result Files: Results of Individual Experiments
```
import matplotlib.pyplot as plt
from collections import defaultdict
%matplotlib inline
full_results = defaultdict(list)
# Retrieving results
num_logs = len([i for i in list1 if i > 1000])
count= 0
for i in range(1,num_logs+1):
try:
df = pd.read_csv(list2[-i])
var = 'norm-revenue'
rolling_mean7 = df[var].rolling(window=7).mean()
rolling_mean13 = df[var].rolling(window=13).mean()
filename = list2[-i][list2[-i].index('16'):list2[-i].index('16')+10]
info = ','.join(list(df[['data', 'sampling', 'subsamplings']].iloc[0]))
full_results[info].append(rolling_mean13)
count += 1
# Draw individual figures
# plt.figure()
# plt.title(info+','+filename)
# plt.plot(df['numWeek'], df[var], color='skyblue', label='Weekly')
# plt.plot(df['numWeek'], rolling_mean7, color='teal', label='MA (7 weeks)')
# plt.plot(df['numWeek'], rolling_mean13, color='blue', label='MA (13 weeks)')
# plt.legend(loc='upper left')
# plt.ylabel(var)
# plt.xlabel('numWeeks')
# plt.show()
except:
print('loading error:', list2[-i])
continue
print(count)
# plt.close()
```
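The moving-average smoothing applied above with `.rolling(window=13).mean()` can be illustrated on a short toy series:

```
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
# A window of 3 averages each value with its two predecessors;
# the first window-1 entries are NaN while the window is incomplete.
ma = s.rolling(window=3).mean()
print(ma.tolist())  # [nan, nan, 2.0, 3.0, 4.0]
```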
### Synthetic data Simulation Results - Hybrid
```
full_results.keys()
plt.figure()
# info = ','.join(list(df[['data', 'samplings']].iloc[0]))
result_one_dataset = [key for key in full_results.keys() if 'synthetic' in key]
print('The number of trials for each setting (Results are averaged):')
for key in result_one_dataset:
avg_result = pd.concat([*full_results[key]], axis=1).mean(axis=1)
# print(full_results[key])
# print(pd.concat([*full_results[key]], axis=1)) # Check current running status: debug purpose
print(key, len(full_results[key]), round(np.mean(avg_result[-1:]), 4))
plt.plot(avg_result.index, avg_result, label=key)
# # printing test_illicit_rate
# tir = pd.read_csv(list2[-1])['test_illicit_rate'].rolling(window=7).mean()
# plt.plot(tir.index, tir, label='Test illicit rate (ref)')
plt.title('<Synthetic> Train: 1 month, Valid: 28 days, Test: 7 days, fast linear decay')
plt.legend(loc='lower left')
plt.ylabel(var)
plt.ylim(0,0.4)
plt.xlabel('numWeeks')
plt.show()
plt.close()
f = plt.figure()
result_one_dataset = [key for key in full_results.keys() if 'synthetic' in key]
print('The number of trials for each setting (Results are averaged):')
for key in ['synthetic,DATE,-', 'synthetic,hybrid,DATE/random', 'synthetic,hybrid,DATE/badge', 'synthetic,hybrid,DATE/bATE', 'synthetic,hybrid,DATE/gATE']: # result_one_dataset:
avg_result = pd.concat([*full_results[key]], axis=1).mean(axis=1)
if key == 'synthetic,hybrid,DATE/gATE':
plt.plot(avg_result.index, avg_result, label='DATE 90% + gATE 10% → 0.292')
if key == 'synthetic,hybrid,DATE/random':
plt.plot(avg_result.index, avg_result, label='DATE 90% + Random 10% → 0.291')
if key == 'synthetic,hybrid,DATE/badge':
plt.plot(avg_result.index, avg_result, label='DATE 90% + BADGE 10% → 0.287')
if key == 'synthetic,hybrid,DATE/bATE':
plt.plot(avg_result.index, avg_result, label='DATE 90% + bATE 10% → 0.292')
if key == 'synthetic,DATE,-':
plt.plot(avg_result.index, avg_result, label='DATE 100% → 0.303')
plt.legend(loc='lower left', fontsize=12.5)
plt.vlines(23, 0, 1, linestyles ="dotted", colors ="k")
plt.ylabel(var, fontsize=15)
plt.xlabel('Year', fontsize=15)
plt.xticks(ticks=[-5,8,21,34,47], labels=['2013', '','','', 14], fontsize=15)
plt.yticks(fontsize=15)
plt.title('Synthetic', fontsize=15)
plt.ylabel('Norm-Rev@10%', fontsize=15)
plt.ylim(0,0.4)
plt.show()
plt.close()
f.savefig("hybrid-s.pdf", bbox_inches='tight')
f = plt.figure()
result_one_dataset = [key for key in full_results.keys() if 'synthetic' in key]
print('The number of trials for each setting (Results are averaged):')
for key in result_one_dataset:
avg_result = pd.concat([*full_results[key]], axis=1).mean(axis=1)
# if key == 'synthetic,hybrid,DATE/random':
# plt.plot(avg_result.index, avg_result, label='Hybrid with 10% Exploration')
if key == 'synthetic,DATE,-':
plt.plot(avg_result.index, avg_result, label='No Exploration (Full Exploitation)')
plt.legend(loc='upper left', fontsize=12.5)
plt.vlines(23, 0, 1, linestyles ="dotted", colors ="k")
plt.ylabel(var, fontsize=15)
plt.xlabel('Year', fontsize=15)
plt.xticks(ticks=[-5,8,21,34,47], labels=['2013', '','','', 14], fontsize=15)
plt.yticks(fontsize=0.1, color='white')
plt.title('Synthetic', fontsize=15)
plt.ylabel('Norm-Rev@10%', fontsize=15)
plt.ylim(0,0.4)
plt.show()
plt.close()
f.savefig("exploitation-not-fail-s-only-notick.pdf", bbox_inches='tight')
```
## Integrating different data sets
It is often the case that we are confronted with multiple data sets from different sources, and need to bring them together so that we can operate on that larger collection of data. Pandas provides a variety of mechanisms for such integration — including merging, joining, and concatenation, as described in more detail [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html). We will explore one such approach here, namely concatenation. Often associated with such a process are various sorts of data manipulations required to get the final, aggregated data into a useful form for further analysis.
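As a small, self-contained preview of the concatenation approach used below (the frames and numbers are made up for illustration):

```
import pandas as pd

jan = pd.DataFrame({'widgets': [10, 12]}, index=['wk1', 'wk2'])
feb = pd.DataFrame({'widgets': [11, 9]}, index=['wk1', 'wk2'])

# Row-wise concatenation; the keys become the outer level of a MultiIndex.
combined = pd.concat({'Jan': jan, 'Feb': feb}, axis=0, keys=['Jan', 'Feb'])
print(combined.loc[('Feb', 'wk1'), 'widgets'])  # 11
```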
We'll first explore the multiyear sales data that we have examined previously in the video, diving into the code and data in some greater detail.
```
# First, let's do some imports and configuration
import pandas as pd
%matplotlib inline
```
### Step 1.
We can see by inspecting the contents of the sales_directory that we have sales data from 2000 through 2017, one file for each year. Execute the code cell below, and examine the output.
```
# examine contents of sales_directory
%ls sales_directory
```
We previously showed in a video a function that executes a number of steps to read in each of these data files and store them as dataframes in a dictionary containing the entire group.
Let's revisit that function, presented in the code cell below. Execute the code cell, and then we'll examine what's going on in a bit more detail below.
```
import glob
def read_multiyear_sales_data(directory):
sales = {}
# get names of all files matching this name
salesfiles = glob.glob(directory + '/sales*.csv')
for filename in salesfiles:
# parse year from filename
stop = filename.find('.csv')
start = stop-4
year = filename[start:stop]
# read in dataframe for this year
df = pd.read_csv(filename, index_col='Month')
# store dataframe in a dictionary
sales[year] = df
return sales
sales_by_year = read_multiyear_sales_data('sales_directory')
```
The function defined, ```read_multiyear_sales_data```, reads all the salesdata csv files in a specified directory, through the following steps:
1. initialize an empty dictionary (```sales```)
2. get the names of all files of interest (```salesfiles```)
3. extract the year from the filename (```year```)
4. read in the dataframe for that year (```df```)
5. store the dataframe in the ```sales``` dictionary as the value paired to the year
6. return filled dictionary
We will examine some of these steps in more detail below.
### Step 2.
The ```glob``` module in the Python standard library provides the ability to list all the files whose filenames match some specified form, where we can specify wildcard characters that can match multiple filenames. The key function is ```glob.glob```.
The code cell below imports the glob module, and then assigns to the variable ```salesfiles``` all those files in the sales_directory that have a name of the form ```salesdata*.csv```, where ```*``` can represent any sequence of characters (including none at all).
After you execute the cell, inspect the variable `salesfiles`. You should notice that salesfiles is a Python list, containing a sequence of strings. You might also notice that the list does not necessarily contain strings in their natural chronological order (i.e., from 2000 to 2017). ```glob.glob``` does not guarantee any particular ordering, so if we want some particular ordering, we need to carry that out explicitly, as discussed further down in this exercise.
```
# import glob module from Python standard library
import glob
salesfiles = glob.glob('sales_directory/salesdata*.csv')
print(salesfiles)
```
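Since ```glob.glob``` does not guarantee an ordering, a chronological listing can be obtained by sorting; with the year as the only varying part of each name, lexicographic order coincides with chronological order. A small sketch:

```
import glob

# Sort the real listing (empty if the directory is absent).
salesfiles = sorted(glob.glob('sales_directory/salesdata*.csv'))

# The same idea on a synthetic, shuffled list of names:
names = ['salesdata_2011.csv', 'salesdata_2000.csv', 'salesdata_2017.csv']
print(sorted(names))  # ['salesdata_2000.csv', 'salesdata_2011.csv', 'salesdata_2017.csv']
```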
### Step 3.
Let's say you were only interested in sales data going back to 2010 (i.e., 2010-2017). In the code cell below, write an expression that uses ```glob.glob``` to return the set of filenames just for those years, and assign the result to the variable ```sales2010s_files```. Print the value of ```sales2010s_files```.
```
import glob
# files for 2010 through 2017 all match the pattern 'salesdata_201*.csv'
sales2010s_files = glob.glob("sales_directory/salesdata_201*.csv")
for file in sales2010s_files:
    print(file)
```
## Self-Check
Run the cell below to test the correctness of your code in the cell above.
```
# Run this self-test cell to check your code; do not add code or delete code in this cell
from jn import testFileNames
try:
print(testFileNames(sales2010s_files))
except Exception as e:
print("Error!\n" + str(e))
```
### Step 4.
The function ```read_multiyear_sales_data``` contains a few lines dedicated to extracting the year from the name of a particular file. Manipulations of this sort — small bits of code to extract some useful metadata — are common in a variety of data science applications. Let's examine that code in some more detail. The code in question reads as follows:
<pre>
# parse year from filename
stop = filename.find('.csv')
start = stop-4
year = filename[start:stop]
</pre>
The variable ```filename``` changes each time through the for loop (```for filename in salesfiles:```), acquiring successive values of strings contained in ```salesfiles```. In the code presented below, we won't iterate through all filenames, but just consider one particular file in the loop (e.g., 'sales_directory/salesdata_2010.csv'), setting filename to that string.
The variable filename is assigned to a string, so the expression ```filename.find('.csv')``` is a call to the ```find``` method on a string object. We are doing this so that we can locate the correct position in the string from which to extract the year information — the expression ```filename.find('.csv')``` returns the position of the string ```'.csv'``` within the larger string ```filename```. You can understand this by calling up the documentation on this method by executing the code cell below -- the documentation should appear in a small panel near the bottom of this page. Once you are done reading the documentation, you can close the small panel by clicking on the "X" in the upper right corner of the panel.
```
str.find?
```
Execute the code cell below and examine the printed output. Then proceed.
```
# parse year from filename
filename = 'sales_directory/salesdata_2010.csv'
stop = filename.find('.csv')
start = stop-4
year = filename[start:stop]
print(start, stop, year)
```
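An alternative that does not depend on where the year sits relative to ```'.csv'``` is a regular expression; this sketch assumes the filename contains a four-digit year and no other four-digit run before it:

```
import re

filename = 'sales_directory/salesdata_2010.csv'
match = re.search(r'\d{4}', filename)  # first run of four digits
year = match.group(0) if match else None
print(year)  # 2010
```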
### Step 5.
Imagine instead that the filenames in the sales_directory had a different naming convention, e.g., 'year_2010_salesdata.csv','year_2011_salesdata.csv', etc. In the code cell below, write new code to extract the year from the filename, and print the values of ```start```, ```stop```, and ```year```. There is more than one correct way to solve this problem, but you'll want to verify that the values are correct regardless the specific approach you use. To be specific about the filename you are trying to parse, begin the code cell below by defining <code>filename = 'sales_directory/year_2012_salesdata.csv'</code>.
```
# YOUR CODE HERE
# parse year from filename
filename = 'sales_directory/year_2012_salesdata.csv'
# the year is the four characters immediately after 'year_'
start = filename.find('year_') + len('year_')
stop = start + 4
year = filename[start:stop]
print(start, stop, year)
```
## Self-Check
Run the cell below to test the correctness of your code in the cell above.
```
# Run this self-test cell to check your code; do not add code or delete code in this cell
from jn import testExtractingFilename
try:
print(testExtractingFilename(filename, start, stop, year))
except Exception as e:
print("Error!\n" + str(e))
```
### Step 6.
We also previously showed in a video a function that executed a number of steps in order to convert the separate dataframes stored in the sales dictionary created above into one big dataframe, using the concatenation capabilities of pandas (```pd.concat```).
Let's revisit a slightly revised version of that function, presented in the code cell below.
Execute the code cell, and then we'll examine what's going on in a bit more detail below.
```
def make_dataframe_from_sales_data(sales):
# concatenate sales data
df = pd.concat(sales, axis=0, keys=sorted(sales.keys()), names=['Year', 'Month'])
# convert month strings to numbers
lookup = {'Jan': '01', 'Feb': '02', 'Mar': '03',
'Apr': '04', 'May': '05', 'Jun': '06',
'Jul': '07', 'Aug': '08', 'Sep': '09',
'Oct': '10', 'Nov': '11', 'Dec': '12'}
df = df.rename(index=lookup)
# convert the (year, month) MultiIndex to a 'year-month' index
df.index = ["-".join(x) for x in df.index.ravel()]
# convert the 'year-month' strings to datetime objects
df.index = pd.to_datetime(df.index)
return df
sales_df = make_dataframe_from_sales_data(sales_by_year)
```
The function defined, ```make_dataframe_from_sales_data```, converts the multiple sales dataframes into one big dataframe through the following sequence of steps:
1. concatenate the separate dataframes in the sales dictionary into a new dataframe (```df```)
2. rename the index of the dataframe ```df``` so that month abbreviation strings are converted to number strings
3. reconfigure the index of the dataframe ```df``` so that each index label is a combined "year-month" string (e.g., "2010-07")
4. convert the index of the dataframe ```df``` to consist of datetime objects (timestamps) for further processing
We'll go through these steps in more detail below, but let's first have a peek at what the ```sales_by_year``` dictionary looks like so that we can better understand the operations on it. Execute the code cell below to first print the keys of the dictionary, and then to print the values associated with one of those keys. Each of the keys is a year, and each value is the sales data for that year.
```
print(sales_by_year.keys())
print(sales_by_year['2010'])
```
### Step 7.
The first step of the function above does the following:
<pre>
df = pd.concat(sales, axis=0, keys=sorted(sales.keys()), names=['Year', 'Month'])
</pre>
The first argument in the call to ```pd.concat``` is a dictionary that maps year names to associated dataframes. The name for that dictionary internal to the function is ```sales```, and of course we are free to name it whatever we want inside the function. When we called our ```make_dataframe_from_sales_data``` function above, we passed in the dictionary that we had computed named ```sales_by_year```.
Each separate dataframe consists of a sequence of rows, one for each month of the year, and we want to concatenate each of those rows, one after the other. Therefore, we want to concatenate row-wise, or along ```axis=0```, which is the second argument passed to the ```concat``` function.
We observed previously that the dictionary keys were not in chronological or lexicographical order, but we would like our new dataframe to progress chronologically from 2000 through 2017. We can accomplish this by sorting the dictionary keys as part of the concatenation process, as is indicated in the third argument passed to the ```concat``` function.
The last argument passed to the ```concat``` function reads ```names=['Year', 'Month']```, which is just a way of indicating what we want to call the index of the new concatenated dataframe. The role of these names will become more apparent below.
Execute the following code cells below, so that you can create a concatenated dataframe and then examine its contents and summary information. As noted, internally the function refers to the dictionary as ```sales```, which acquires its value when we pass an argument to the function. But the object ```sales``` does not currently exist in this notebook, so we can create it by assigning it to the ```sales_by_year``` dictionary that we created previously.
```
sales = sales_by_year
df = pd.concat(sales, axis=0, keys=sorted(sales.keys()), names=['Year', 'Month'])
df
df.info()
```
### Step 8.
You can see from the summary information printed above that the index of the dataframe ```df``` is a MultiIndex, also known as a hierarchical index. (These are described in more detail [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html).) This is because there are two pieces of information that are being used to index each row, the Year and the Month. That information is organized hierarchically: for each Year (first level of MultiIndex), there are 12 Months (second level of the MultiIndex). Together the Year and Month make up a multilevel index for each row.
In the code cell below, print the index of the dataframe to examine its contents.
```
# YOUR CODE HERE
df.index
```
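To make the structure of such an index easier to see, the same kind of hierarchical index can be built directly on toy data:

```
import pandas as pd

# Build a (Year, Month) MultiIndex from the product of two levels.
idx = pd.MultiIndex.from_product([['2000', '2001'], ['Jan', 'Feb']],
                                 names=['Year', 'Month'])
df_small = pd.DataFrame({'sales': [5, 7, 6, 8]}, index=idx)
# Selecting by the outer level (Year) returns all months of that year.
print(df_small.loc['2000'])
```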
### Step 9.
Ultimately, we want to convert this Year-Month MultiIndex into a single-level index composed of dates (i.e., datetime objects), since Pandas can then make better use of that information. The somewhat obscure code in our function definition does that in a series of steps:
<pre>
# convert month strings to numbers
lookup = {'Jan': '01', 'Feb': '02', 'Mar': '03',
'Apr': '04', 'May': '05', 'Jun': '06',
'Jul': '07', 'Aug': '08', 'Sep': '09',
'Oct': '10', 'Nov': '11', 'Dec': '12'}
df = df.rename(index=lookup)
# convert the (year, month) MultiIndex to a 'year-month' index
df.index = ["-".join(x) for x in df.index.ravel()]
# convert the 'year-month' strings to datetime objects
df.index = pd.to_datetime(df.index)
</pre>
As noted, these steps do the following:
* rename the index of the dataframe ```df``` so that month abbreviation strings are converted to number strings
* reconfigure the index of the dataframe ```df``` so that each index label is a combined "year-month" string (e.g., "2010-07")
* convert the index of the dataframe ```df``` to consist of datetime objects (timestamps) for further processing
Each of these steps is broken out in successive code cells. Execute each cell in turn, and print the dataframe index after each transformation so that you can understand the transformation that is taking place. If necessary, look up available documentation on what the various pieces do (e.g., ```df.rename```, ```"-".join```, ```pd.to_datetime```).
```
# convert month strings to numbers
lookup = {'Jan': '01', 'Feb': '02', 'Mar': '03',
'Apr': '04', 'May': '05', 'Jun': '06',
'Jul': '07', 'Aug': '08', 'Sep': '09',
'Oct': '10', 'Nov': '11', 'Dec': '12'}
df = df.rename(index=lookup)
# convert the (year, month) MultiIndex to a 'year-month' index
df.index = ["-".join(x) for x in df.index.ravel()]
# convert the 'year-month' strings to datetime objects
df.index = pd.to_datetime(df.index)
```
Examine the resulting dataframe and print its summary information to verify that the data have been processed into a suitable format.
```
df
df.info()
```
### Step 10.
Now that the sales data have all been integrated into one big, chronologically ordered dataframe, we can examine it to look for temporal trends.
Let's first plot the data, which is something we also did in the video. In the code cell below, plot the sales data in the dataframe using its ```plot``` method. You should be able to observe substantial seasonal variation throughout each year.
<i>Note: a self-check will not accompany this exercise.</i><br>
Your plot should look like this: <br><img src="IntegratingDSStep10.png" width=400 height=400 align="left"/>
```
# YOUR CODE HERE
df.plot()
```
### Step 11.
While the detailed data represented in the plot of the dataframe is useful, the within-year seasonal variation makes it more difficult to discern longer timescale trends. Fortunately, having gone through the trouble to convert the (Year, Month) data to proper datetime objects, Pandas is now able to operate on that information.
In the video, we demonstrated the code shown in the cell below. Execute the code cell below and inspect the resulting plot.
```
df.resample('Y').sum().plot()
```
The code above computes and plots the total amount of sales of each item type <i>per year</i>. The raw data in the dataframe represents sales per month, but we can <i>resample</i> along the time axis, at a different frequency. The argument to the ```resample``` method in the code above is 'Y', which instructs the method to resample at the time scale of a year. Other frequencies (referred to as "offset aliases") are possible with other values of that argument, as described in [this documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases).
Resampling is similar to a groupby operation, in that rows in a dataframe are split into subgroups (based on the sampling frequency), such that an aggregating function can be applied to each subgroup. In the plot above, the aggregating function we used was ```sum```, since we were interested in computing the total number of sales in each product category over each year.
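Conceptually, resampling by calendar year produces the same totals as grouping by the year component of the index; a sketch with synthetic monthly data (not the notebook's sales file) illustrating the equivalence:

```python
import numpy as np
import pandas as pd

# synthetic monthly series standing in for the sales data
monthly = pd.DataFrame({'sales': np.arange(36)},
                       index=pd.date_range('2010-01-31', periods=36, freq='M'))

yearly = monthly.resample('Y').sum()                 # one row per calendar year
by_year = monthly.groupby(monthly.index.year).sum()  # equivalent grouping

# identical totals; only the index labels differ (timestamps vs. years)
assert list(yearly['sales']) == list(by_year['sales'])
assert int(yearly['sales'].iloc[0]) == sum(range(12))
```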
In the code cell below, write an expression to compute and plot the mean sales in each product category over each quarter of the year (aligned with the end of each calendar quarter). Consult the link provided above to determine the appropriate string alias to resample over quarters instead of years.
<i>Hint: Use the same pattern as the code in the cell above.</i>
<i>Note: a self-check will not accompany this exercise.</i><br>
Your plot should look like this: <br><img src="IntegratingDSStep11.png" width=400 height=400 align="left"/>
```
# YOUR CODE HERE
df.resample('Q').mean().plot()
```
```
# look at tools/set_up_magics.ipynb
yandex_metrica_allowed = True ; get_ipython().run_cell('# one_liner_str\n\nget_ipython().run_cell_magic(\'javascript\', \'\', \n \'// setup cpp code highlighting\\n\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-c++src"] = {\\\'reg\\\':[/^%%cpp/]} ;\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-cmake"] = {\\\'reg\\\':[/^%%cmake/]} ;\'\n)\n\n# creating magics\nfrom IPython.core.magic import register_cell_magic, register_line_magic\nfrom IPython.display import display, Markdown, HTML\nimport argparse\nfrom subprocess import Popen, PIPE\nimport random\nimport sys\nimport os\nimport re\nimport signal\nimport shutil\nimport shlex\nimport glob\nimport time\n\n@register_cell_magic\ndef save_file(args_str, cell, line_comment_start="#"):\n parser = argparse.ArgumentParser()\n parser.add_argument("fname")\n parser.add_argument("--ejudge-style", action="store_true")\n args = parser.parse_args(args_str.split())\n \n cell = cell if cell[-1] == \'\\n\' or args.no_eof_newline else cell + "\\n"\n cmds = []\n with open(args.fname, "w") as f:\n f.write(line_comment_start + " %%cpp " + args_str + "\\n")\n for line in cell.split("\\n"):\n line_to_write = (line if not args.ejudge_style else line.rstrip()) + "\\n"\n if line.startswith("%"):\n run_prefix = "%run "\n if line.startswith(run_prefix):\n cmds.append(line[len(run_prefix):].strip())\n f.write(line_comment_start + " " + line_to_write)\n continue\n comment_prefix = "%" + line_comment_start\n if line.startswith(comment_prefix):\n cmds.append(\'#\' + line[len(comment_prefix):].strip())\n f.write(line_comment_start + " " + line_to_write)\n continue\n raise Exception("Unknown %%save_file subcommand: \'%s\'" % line)\n else:\n f.write(line_to_write)\n f.write("" if not args.ejudge_style else line_comment_start + r" line without \\n")\n for cmd in cmds:\n if cmd.startswith(\'#\'):\n display(Markdown("\\#\\#\\#\\# `%s`" % cmd[1:]))\n else:\n display(Markdown("Run: `%s`" % cmd))\n 
get_ipython().system(cmd)\n\n@register_cell_magic\ndef cpp(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef cmake(fname, cell):\n save_file(fname, cell, "#")\n\n@register_cell_magic\ndef asm(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef makefile(fname, cell):\n fname = fname or "makefile"\n assert fname.endswith("makefile")\n save_file(fname, cell.replace(" " * 4, "\\t"))\n \n@register_line_magic\ndef p(line):\n line = line.strip() \n if line[0] == \'#\':\n display(Markdown(line[1:].strip()))\n else:\n try:\n expr, comment = line.split(" #")\n display(Markdown("`{} = {}` # {}".format(expr.strip(), eval(expr), comment.strip())))\n except:\n display(Markdown("{} = {}".format(line, eval(line))))\n \n \ndef show_log_file(file, return_html_string=False):\n obj = file.replace(\'.\', \'_\').replace(\'/\', \'_\') + "_obj"\n html_string = \'\'\'\n <!--MD_BEGIN_FILTER-->\n <script type=text/javascript>\n var entrance___OBJ__ = 0;\n var errors___OBJ__ = 0;\n function halt__OBJ__(elem, color)\n {\n elem.setAttribute("style", "font-size: 14px; background: " + color + "; padding: 10px; border: 3px; border-radius: 5px; color: white; "); \n }\n function refresh__OBJ__()\n {\n entrance___OBJ__ -= 1;\n if (entrance___OBJ__ < 0) {\n entrance___OBJ__ = 0;\n }\n var elem = document.getElementById("__OBJ__");\n if (elem) {\n var xmlhttp=new XMLHttpRequest();\n xmlhttp.onreadystatechange=function()\n {\n var elem = document.getElementById("__OBJ__");\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\n if (elem && xmlhttp.readyState==4) {\n if (xmlhttp.status==200)\n {\n errors___OBJ__ = 0;\n if (!entrance___OBJ__) {\n if (elem.innerHTML != xmlhttp.responseText) {\n elem.innerHTML = xmlhttp.responseText;\n }\n if (elem.innerHTML.includes("Process finished.")) {\n halt__OBJ__(elem, "#333333");\n } else {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n }\n }\n 
return xmlhttp.responseText;\n } else {\n errors___OBJ__ += 1;\n if (!entrance___OBJ__) {\n if (errors___OBJ__ < 6) {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n } else {\n halt__OBJ__(elem, "#994444");\n }\n }\n }\n }\n }\n xmlhttp.open("GET", "__FILE__", true);\n xmlhttp.setRequestHeader("Cache-Control", "no-cache");\n xmlhttp.send(); \n }\n }\n \n if (!entrance___OBJ__) {\n entrance___OBJ__ += 1;\n refresh__OBJ__(); \n }\n </script>\n\n <p id="__OBJ__" style="font-size: 14px; background: #000000; padding: 10px; border: 3px; border-radius: 5px; color: white; ">\n </p>\n \n </font>\n <!--MD_END_FILTER-->\n <!--MD_FROM_FILE __FILE__.md -->\n \'\'\'.replace("__OBJ__", obj).replace("__FILE__", file)\n if return_html_string:\n return html_string\n display(HTML(html_string))\n\n \nclass TInteractiveLauncher:\n tmp_path = "./interactive_launcher_tmp"\n def __init__(self, cmd):\n try:\n os.mkdir(TInteractiveLauncher.tmp_path)\n except:\n pass\n name = str(random.randint(0, 1e18))\n self.inq_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".inq")\n self.log_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".log")\n \n os.mkfifo(self.inq_path)\n open(self.log_path, \'w\').close()\n open(self.log_path + ".md", \'w\').close()\n\n self.pid = os.fork()\n if self.pid == -1:\n print("Error")\n if self.pid == 0:\n exe_cands = glob.glob("../tools/launcher.py") + glob.glob("../../tools/launcher.py")\n assert(len(exe_cands) == 1)\n assert(os.execvp("python3", ["python3", exe_cands[0], "-l", self.log_path, "-i", self.inq_path, "-c", cmd]) == 0)\n self.inq_f = open(self.inq_path, "w")\n interactive_launcher_opened_set.add(self.pid)\n show_log_file(self.log_path)\n\n def write(self, s):\n s = s.encode()\n assert len(s) == os.write(self.inq_f.fileno(), s)\n \n def get_pid(self):\n n = 100\n for i in range(n):\n try:\n return int(re.findall(r"PID = (\\d+)", open(self.log_path).readline())[0])\n except:\n if i + 1 == 
n:\n raise\n time.sleep(0.1)\n \n def input_queue_path(self):\n return self.inq_path\n \n def wait_stop(self, timeout):\n for i in range(int(timeout * 10)):\n wpid, status = os.waitpid(self.pid, os.WNOHANG)\n if wpid != 0:\n return True\n time.sleep(0.1)\n return False\n \n def close(self, timeout=3):\n self.inq_f.close()\n if not self.wait_stop(timeout):\n os.kill(self.get_pid(), signal.SIGKILL)\n os.waitpid(self.pid, 0)\n os.remove(self.inq_path)\n # os.remove(self.log_path)\n self.inq_path = None\n self.log_path = None \n interactive_launcher_opened_set.remove(self.pid)\n self.pid = None\n \n @staticmethod\n def terminate_all():\n if "interactive_launcher_opened_set" not in globals():\n globals()["interactive_launcher_opened_set"] = set()\n global interactive_launcher_opened_set\n for pid in interactive_launcher_opened_set:\n print("Terminate pid=" + str(pid), file=sys.stderr)\n os.kill(pid, signal.SIGKILL)\n os.waitpid(pid, 0)\n interactive_launcher_opened_set = set()\n if os.path.exists(TInteractiveLauncher.tmp_path):\n shutil.rmtree(TInteractiveLauncher.tmp_path)\n \nTInteractiveLauncher.terminate_all()\n \nyandex_metrica_allowed = bool(globals().get("yandex_metrica_allowed", False))\nif yandex_metrica_allowed:\n display(HTML(\'\'\'<!-- YANDEX_METRICA_BEGIN -->\n <script type="text/javascript" >\n (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)};\n m[i].l=1*new Date();k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)})\n (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym");\n\n ym(59260609, "init", {\n clickmap:true,\n trackLinks:true,\n accurateTrackBounce:true\n });\n </script>\n <noscript><div><img src="https://mc.yandex.ru/watch/59260609" style="position:absolute; left:-9999px;" alt="" /></div></noscript>\n <!-- YANDEX_METRICA_END -->\'\'\'))\n\ndef make_oneliner():\n html_text = \'("В этот ноутбук встроен код Яндекс Метрики для сбора статистики 
использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "")\'\n html_text += \' + "<""!-- MAGICS_SETUP_PRINTING_END -->"\'\n return \'\'.join([\n \'# look at tools/set_up_magics.ipynb\\n\',\n \'yandex_metrica_allowed = True ; get_ipython().run_cell(%s);\' % repr(one_liner_str),\n \'display(HTML(%s))\' % html_text,\n \' #\'\'MAGICS_SETUP_END\'\n ])\n \n\n');display(HTML(("В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "") + "<""!-- MAGICS_SETUP_PRINTING_END -->")) #MAGICS_SETUP_END
```
# Assembler x86
1) There is no recording of this seminar, but there is a recording of the second seminar, which covers x86-64 assembly.
2) If you did not attend the first seminar, you are better off studying the x86-64 materials.
## Highlights
* Few registers
* Many instructions
* A lot of legacy
* Many calling conventions
* Several different syntaxes
[Yakovlev's reading](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/asm/x86_basics)
On today's agenda:
* <a href="#regs" style="color:#856024"> Registers </a>
* <a href="#syntax" style="color:#856024"> Syntaxes </a>
* <a href="#clamp" style="color:#856024"> The clamp function </a>
* <a href="#memory" style="color:#856024"> Working with memory </a>
* <a href="#mul" style="color:#856024"> Interesting facts </a>
* <a href="#hw" style="color:#856024"> Notes on the homework </a>
# <a name="regs"></a> Registers
A bit of history
| Year | Registers | Width | First processor | Comment |
|------|--------------------|----------|------------------|-------------|
| 1974 | a, b, c, d | 8 bit | Intel 8080 | |
| 1978 | ax, bx, cx, dx | 16 bit | Intel 8086 | X - eXtended ([a rather unreliable source](https://stackoverflow.com/a/892948))|
| 1985 | eax, ebx, ecx, edx | 32 bit | Intel 80386 | E - extended |
| 2003 | rax, rbx, rcx, rdx | 64 bit | AMD Opteron | R - (surprisingly) register |
This is how it looks today in 64-bit processors
<table width="800px" border="1" style="text-align:center; font-family: Courier New; font-size: 10pt">
<tbody><tr>
<td colspan="8" width="25%" style="background:lightgrey">RAX
</td>
<td colspan="8" width="25%" style="background:lightgrey">RCX
</td>
<td colspan="8" width="25%" style="background:lightgrey">RDX
</td>
<td colspan="8" width="25%" style="background:lightgrey">RBX
</td></tr>
<tr>
<td colspan="4" width="12.5%"></td>
<td colspan="4" width="12.5%" style="background:lightgrey">EAX
</td>
<td colspan="4" width="12.5%"></td>
<td colspan="4" width="12.5%" style="background:lightgrey">ECX
</td>
<td colspan="4" width="12.5%"></td>
<td colspan="4" width="12.5%" style="background:lightgrey">EDX
</td>
<td colspan="4" width="12.5%"></td>
<td colspan="4" width="12.5%" style="background:lightgrey">EBX
</td></tr>
<tr>
<td colspan="6" width="18.75%"></td>
<td colspan="2" width="6.25%" style="background:lightgrey">AX
</td>
<td colspan="6" width="18.75%"></td>
<td colspan="2" width="6.25%" style="background:lightgrey">CX
</td>
<td colspan="6" width="18.75%"></td>
<td colspan="2" width="6.25%" style="background:lightgrey">DX
</td>
<td colspan="6" width="18.75%"></td>
<td colspan="2" width="6.25%" style="background:lightgrey">BX
</td></tr>
<tr>
<td colspan="6" width="18.75%"></td>
<td width="3.125%" style="background:lightgrey">AH</td>
<td width="3.125%" style="background:lightgrey">AL
</td>
<td colspan="6" width="18.75%"></td>
<td width="3.125%" style="background:lightgrey">CH</td>
<td width="3.125%" style="background:lightgrey">CL
</td>
<td colspan="6" width="18.75%"></td>
<td width="3.125%" style="background:lightgrey">DH</td>
<td width="3.125%" style="background:lightgrey">DL
</td>
<td colspan="6" width="18.75%"></td>
<td width="3.125%" style="background:lightgrey">BH</td>
<td width="3.125%" style="background:lightgrey">BL
</td></tr></tbody></table>
(In reality, things are far from that simple. [stackoverflow](https://stackoverflow.com/a/25456097))
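The nesting shown in the table is plain bit aliasing; a small sketch (the 64-bit value is chosen arbitrarily) of how eax, ax, ah and al map onto the low bits of rax:

```python
# arbitrary 64-bit value standing in for rax
rax = 0x1122334455667788

eax = rax & 0xFFFFFFFF   # low 32 bits
ax = rax & 0xFFFF        # low 16 bits
al = rax & 0xFF          # lowest byte
ah = (rax >> 8) & 0xFF   # second-lowest byte

assert eax == 0x55667788
assert ax == 0x7788
assert al == 0x88
assert ah == 0x77
```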
x86 registers and their odd names
* EAX - Accumulator Register
* EBX - Base Register
* ECX - Counter Register
* EDX - Data Register
* ESI - Source Index
* EDI - Destination Index
* EBP - Base Pointer
* ESP - Stack Pointer
Registers in x86:
<br> `eax`, `ebx`, `ecx`, `edx` - general-purpose registers.
<br> `esp` - pointer to the top of the stack
<br> `ebp` - pointer to the start of the stack frame (though it can, with care, be used as a general-purpose register)
<br> `esi`, `edi` - quirky registers intended for array copying; essentially general-purpose, but limited in what they can do.
The return value is placed in the `eax` register.
A called function **must preserve on the stack the values of the general-purpose registers** `ebx`, `ebp`, `esi` and `edi`.
Arguments can be passed to a function in various ways, depending on the conventions adopted by the ABI (see Yakovlev's reading).
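In the 32-bit cdecl convention used in the listings below, the arguments sit above the return address on the stack, so the first argument is read from [esp + 4]. A toy model of that layout (the values and the return address are hypothetical):

```python
import struct

# hypothetical snapshot of the 32-bit stack right after `call clamp(7, 10, 20)`:
# [esp] holds the return address; the arguments follow, 4 bytes each
stack = struct.pack('<Iiii', 0xDEADBEEF, 7, 10, 20)

def load_dword(offset):
    # mov reg, DWORD PTR [esp + offset]
    return struct.unpack_from('<i', stack, offset)[0]

assert load_dword(4) == 7     # first argument at [esp + 4]
assert load_dword(8) == 10    # second at [esp + 8]
assert load_dword(12) == 20   # third at [esp + 12]
```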
```
%%save_file asm_filter_useless
%run chmod +x asm_filter_useless
#!/bin/bash
grep -v "^\s*\." | grep -v "^[0-9]"
```
# <a name="syntax"></a> Syntaxes
### AT&T
```
%%cpp att_example.c
%run gcc -m32 -masm=att -O3 att_example.c -S -o att_example.S
%run cat att_example.S | ./asm_filter_useless
#include <stdint.h>
int32_t sum(int32_t a, int32_t b) {
return a + b;
}
```
### Intel
DWORD PTR denotes a double-word operand. A word is 16 bits; the term took hold in the era of 16-bit processors, when a register held exactly 16 bits, and that amount of data came to be called a word. So in our case a dword (double word) is 2*16 = 32 bits = 4 bytes (an ordinary int).
https://habr.com/ru/post/344896/
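As a quick cross-check of those sizes, Python's `struct` module reports the same byte widths for word/dword/qword integers:

```python
import struct

# byte widths of word / dword / qword integers
assert struct.calcsize('<H') == 2   # word: 16 bits
assert struct.calcsize('<I') == 4   # dword: 32 bits
assert struct.calcsize('<Q') == 8   # qword: 64 bits
```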
```
%%cpp intel_example.c
%run gcc -m32 -masm=intel -O3 intel_example.c -S -o intel_example.S
%run cat intel_example.S | ./asm_filter_useless
#include <stdint.h>
int32_t sum(int32_t a, int32_t b) {
return a + b;
}
```
On `endbr32`, see [Introduction to hardware stack protection / Habr](https://habr.com/ru/post/494000/) and [control-flow-enforcement-technology](https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf)
TLDR: to make attackers' lives harder, there is a special processor mode in which a jump to an instruction that is not an `endbr*` triggers an exception and terminates the program.
# <a name="clamp"></a> Writing the clamp function in three ways
```
%%asm clamp_disasm.S
.intel_syntax noprefix
.text
.globl clamp
clamp:
// x - esp + 4
// a - esp + 8
// b - esp + 12
mov edx, DWORD PTR [esp+4] // edx = x
mov eax, DWORD PTR [esp+8] // eax = a
cmp edx, eax // x ? a
jl .L2 // if (x < a)
cmp edx, DWORD PTR [esp+12] // x ? b
mov eax, edx // eax = x
cmovg eax, DWORD PTR [esp+12] // if (x > b) eax = b
.L2:
rep ret
%%asm clamp_if.S
.intel_syntax noprefix
.text
.globl clamp
clamp:
mov edx, DWORD PTR [esp + 4] // X
mov eax, DWORD PTR [esp + 8] // A
cmp edx, eax
jl return_eax // return A if X < A
mov eax, DWORD PTR [esp + 12] // B
cmp edx, eax
jg return_eax // return B if X > B
mov eax, edx
return_eax:
ret
%%asm clamp_cmov.S
.intel_syntax noprefix
.text
.globl clamp
clamp:
mov eax, DWORD PTR [esp + 4] // X
mov edx, DWORD PTR [esp + 8] // A
cmp eax, edx
cmovl eax, edx // if (X < A) X = A
mov edx, DWORD PTR [esp + 12] // B
cmp eax, edx
cmovg eax, edx // if (X > B) X = B
ret
%%cpp clamp_test.c
// compile and test using all three asm clamp implementations
%run gcc -m32 -masm=intel -O2 clamp_disasm.S clamp_test.c -o clamp_test.exe
%run ./clamp_test.exe
%run gcc -m32 -masm=intel -O2 clamp_if.S clamp_test.c -o clamp_if_test.exe
%run ./clamp_if_test.exe
%run gcc -m32 -masm=intel -O2 clamp_cmov.S clamp_test.c -o clamp_cmov_test.exe
%run ./clamp_cmov_test.exe
#include <stdint.h>
#include <stdio.h>
#include <assert.h>
int32_t clamp(int32_t a, int32_t b, int32_t c);
int main() {
assert(clamp(1, 10, 20) == 10);
assert(clamp(100, 10, 20) == 20);
assert(clamp(15, 10, 20) == 15);
fprintf(stderr, "All is OK");
return 0;
}
```
The same thing, written as an inline assembly block
```
%%cpp clamp_inline_test.c
%run gcc -m32 -masm=intel -O2 clamp_inline_test.c -o clamp_inline_test.exe
%run ./clamp_inline_test.exe
#include <stdint.h>
#include <stdio.h>
#include <assert.h>
int32_t clamp(int32_t a, int32_t b, int32_t c);
__asm__(R"(
clamp:
mov eax, DWORD PTR [esp + 4]
mov edx, DWORD PTR [esp + 8]
cmp eax, edx
cmovl eax, edx
mov edx, DWORD PTR [esp + 12]
cmp eax, edx
cmovg eax, edx
ret
)");
int main() {
assert(clamp(1, 10, 20) == 10);
assert(clamp(100, 10, 20) == 20);
assert(clamp(15, 10, 20) == 15);
fprintf(stderr, "All is OK");
return 0;
}
```
# <a name="memory"></a> Working with memory
Given n and x, compute $\sum_{i=0}^{n - 1} (-1)^i \cdot x[i]$
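Before reading the assembly, a plain Python reference for the same alternating sum (a sketch, not part of the original notebook):

```python
def my_sum(x):
    # sum of (-1)**i * x[i], the same contract as the asm routine below
    return sum((-1) ** i * v for i, v in enumerate(x))

assert my_sum([100, 2, 200, 3]) == 100 - 2 + 200 - 3
assert my_sum([100, 2, 200]) == 100 - 2 + 200
assert my_sum([100]) == 100
```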
```
%%asm my_sum.S
.intel_syntax noprefix
.text
.globl my_sum
my_sum:
push ebx
mov eax, 0
mov edx, DWORD PTR [esp + 8]
mov ebx, DWORD PTR [esp + 12]
cmp edx, 0
start_loop:
jle return_eax
add eax, DWORD PTR [ebx]
add ebx, 4
dec edx // and compare
jle return_eax
sub eax, DWORD PTR [ebx]
add ebx, 4
dec edx // and write compare with 0 flags
jmp start_loop
return_eax:
pop ebx
ret
%%cpp my_sum_test.c
%run gcc -g3 -m32 -masm=intel my_sum_test.c my_sum.S -o my_sum_test.exe
%run ./my_sum_test.exe
#include <stdint.h>
#include <stdio.h>
#include <assert.h>
int32_t my_sum(int32_t n, int32_t* x);
int main() {
int32_t x[] = {100, 2, 200, 3};
assert(my_sum(sizeof(x) / sizeof(int32_t), x) == 100 - 2 + 200 - 3);
int32_t y[] = {100, 2, 200};
assert(my_sum(sizeof(y) / sizeof(int32_t), y) == 100 - 2 + 200);
int32_t z[] = {100};
assert(my_sum(sizeof(z) / sizeof(int32_t), z) == 100);
printf("SUCCESS");
return 0;
}
```
# <a name="mul"></a> Fun and educational part
```
%%cpp mul.c
%run gcc -m32 -masm=intel -O1 mul.c -S -o mul.S
%run cat mul.S | ./asm_filter_useless
#include <stdint.h>
int32_t mul(int32_t a) {
return a * 128;
}
%%cpp div_0.c
%run gcc -m64 -masm=intel -O3 div_0.c -S -o div_0.S
%run cat div_0.S | ./asm_filter_useless
#include <stdint.h>
uint32_t div(uint32_t a) {
return a / 3;
}
uint32_t div2(uint32_t a, uint32_t b) {
return a / b;
}
%%cpp div.c
%run gcc -m32 -masm=intel -O3 div.c -S -o div.S
%run cat div.S | ./asm_filter_useless
#include <stdint.h>
int32_t div(int32_t a) {
return a / 4;
}
uint32_t udiv(uint32_t a) {
return a / 4;
}
```
Signed division by 4 biases the dividend by 3 (divisor minus 1) before the arithmetic shift, so the result rounds toward zero. Tracing 4-bit values as x -> x + 3 -> (x + 3) >> 2:
```
1111 -> 0010 -> 0     (-1 / 4 == 0)
1110 -> 0001 -> 0     (-2 / 4 == 0)
1101 -> 0000 -> 0     (-3 / 4 == 0)
1100 -> 1111 -> -1    (-4 / 4 == -1)
1011 -> 1110 -> -1    (-5 / 4 == -1)
```
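The same bias-then-shift trick, sketched in Python (Python's `>>` on ints is an arithmetic shift, so it models `sar` directly):

```python
def div4_signed(a):
    # signed a / 4, rounding toward zero: bias negative values by 3,
    # then arithmetic-shift right by 2
    bias = 3 if a < 0 else 0
    return (a + bias) >> 2

# matches C's truncating semantics for int32_t / 4
for a in (-9, -5, -4, -3, -1, 0, 1, 7, 9):
    assert div4_signed(a) == int(a / 4)
```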
```
time(NULL)
```
# <a name="inline"></a> Inline ASM
http://asm.sourceforge.net/articles/linasm.html
```
%%cpp simdiv.c
%run gcc -m32 -masm=intel -O3 simdiv.c -o simdiv.exe
%run ./simdiv.exe
#include <stdint.h>
#include <assert.h>
int32_t simdiv(int32_t a) {
uint32_t eax = ((uint32_t)a >> 31) + a;
__asm__("sar %0" : "=a"(eax) : "a"(eax));
return eax;
}
int main() {
assert(simdiv(1) == 0);
assert(simdiv(5) == 2);
assert(simdiv(-1) == 0);
assert(simdiv(-5) == -2);
}
```
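Restating what `simdiv` computes (a sketch, not the notebook's code): `((uint32_t)a >> 31)` extracts the sign bit, which serves as the rounding bias, so the arithmetic shift divides by 2 rounding toward zero:

```python
def simdiv(a):
    # the sign bit (1 for negative a, 0 otherwise) biases the dividend
    # so the arithmetic shift rounds toward zero instead of toward -inf
    sign = 1 if a < 0 else 0
    return (a + sign) >> 1

assert simdiv(1) == 0
assert simdiv(5) == 2
assert simdiv(-1) == 0
assert simdiv(-5) == -2
```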
## <a name="hw"></a> Notes on the homework
```
import json
import math
from datetime import datetime

import numpy as np
from numpy import *  # wildcard import kept: later cells may rely on bare numpy names

import pandas as pd
from pandas.io.json import json_normalize
import pandas_datareader.data as web

import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6
plt.style.use('ggplot')

from ipywidgets import *
from IPython.display import display

from pymongo import MongoClient
import plotly.plotly as py
import plotly.graph_objs as go

from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from influxdb import DataFrameClient
client = DataFrameClient('34.217.33.66', 32589, 'root', 'root', 'k8s')
mongoclient = MongoClient('34.217.33.66', 27017, username="MongoServer3", password="cloud@SecureMongod75")
def getAllNodeNames():
queryResult = client.query("SHOW TAG VALUES FROM uptime WITH KEY=nodename;")
nodeNames_temp = list(queryResult.get_points())
dfnodeNames = pd.DataFrame(nodeNames_temp)
allNodeNames = dfnodeNames[:]["value"]
return allNodeNames
def getNamespaceNames(node):
nsQuery = client.query("SHOW TAG VALUES FROM uptime WITH KEY=namespace_name WHERE nodename = '"+node+"';")
nsQuery_temp = list(nsQuery.get_points())
dfnsNames = pd.DataFrame(nsQuery_temp)
allnsNames = dfnsNames[:]["value"]
return allnsNames
def getAllPodNames(node,ns_name):
queryResult = client.query("SHOW TAG VALUES FROM uptime WITH KEY = pod_name WHERE namespace_name = '"+ns_name+"' AND nodename = '"+node+"';")
podNames_temp = list(queryResult.get_points())
dfpodNames = pd.DataFrame(podNames_temp)
allpodNames = dfpodNames[:]["value"]
return allpodNames
def getCPUUtilizationNode(node):
queryResult = client.query('SELECT * FROM "cpu/node_utilization" where nodename = \''+node+'\' AND type=\'node\';')
dfcpuUtilization = pd.DataFrame(queryResult['cpu/node_utilization'])
return dfcpuUtilization
def getCPUUtilizationPod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "cpu/usage_rate" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfcpuUtilization = pd.DataFrame(queryResult['cpu/usage_rate'])
return dfcpuUtilization
def getCPUUtilizationPodContainer(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "cpu/usage_rate" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod_container\';')
dfcpuUtilization = pd.DataFrame(queryResult['cpu/usage_rate'])
return dfcpuUtilization
def prepareCpuUtilization(node,ns_name, pod_name):
cpuUtilization = getCPUUtilizationNode(node)
podCpuUtilization = getCPUUtilizationPod(node,ns_name, pod_name)
containercpuUtilization = getCPUUtilizationPodContainer(node,ns_name, pod_name)
    plt.plot(cpuUtilization.index, cpuUtilization['value'] * 1000, 'r')  # node CPU utilization, scaled by 1000 for comparison
    plt.plot(podCpuUtilization.index, podCpuUtilization['value'], 'b')  # pod CPU usage rate
    plt.plot(containercpuUtilization.index, containercpuUtilization['value'], 'g')  # container CPU usage rate
plt.show()
def getMemoryUtilizationNode(node):
queryResult = client.query('SELECT * FROM "memory/node_utilization" where nodename = \''+node+'\' AND type=\'node\';')
dfmemUtilization = pd.DataFrame(queryResult['memory/node_utilization'])
return dfmemUtilization
def getMemoryUtilizationPod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "memory/usage" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['memory/usage'])
return dfmemUtilization
def getMemoryUtilizationPodContainer(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "memory/usage" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod_container\';')
dfmemUtilization = pd.DataFrame(queryResult['memory/usage'])
return dfmemUtilization
def prepareMemoryUtilization(node,ns_name, pod_name):
memoryUtilization = getMemoryUtilizationNode(node)
podMemoryUtilization = getMemoryUtilizationPod(node,ns_name, pod_name)
containerMemoryUtilization = getMemoryUtilizationPodContainer(node,ns_name, pod_name)
    plt.plot(memoryUtilization.index, memoryUtilization['value'] * 1000000000, 'r')  # node memory utilization, scaled by 1e9 for comparison
    plt.plot(podMemoryUtilization.index, podMemoryUtilization['value'], 'b')  # pod memory usage
    plt.plot(containerMemoryUtilization.index, containerMemoryUtilization['value'], 'g')  # container memory usage
plt.show()
def getNetworkTxRatePod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/tx_rate" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/tx_rate'])
return dfmemUtilization
def getNetworkTxPod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/tx" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/tx'])
return dfmemUtilization
def getNetworkTxErrorsPod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/tx_errors" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/tx_errors'])
return dfmemUtilization
def getNetworkTxErrorsRatePod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/tx_errors_rate" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/tx_errors_rate'])
return dfmemUtilization
def prepareNetworkTxRateUtilization(node,ns_name, pod_name):
podNetworTxRate = getNetworkTxRatePod(node,ns_name, pod_name)
podNetworTx = getNetworkTxPod(node,ns_name, pod_name)
podNetworkError = getNetworkTxErrorsPod(node,ns_name, pod_name)
podNetworkErrorRate = getNetworkTxErrorsRatePod(node,ns_name, pod_name)
    plt.plot(podNetworTxRate.index, podNetworTxRate['value'], 'b')  # pod network tx rate
    #plt.plot(podNetworTx.index, podNetworTx['value'], 'g')  # cumulative tx (disabled)
    #plt.plot(podNetworkError.index, podNetworkError['value'], 'y')  # tx errors (disabled)
    plt.plot(podNetworkErrorRate.index, podNetworkErrorRate['value'], 'r')  # tx error rate
plt.show()
def getNetworkRxRatePod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/rx_rate" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/rx_rate'])
return dfmemUtilization
def getNetworkRxPod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/rx" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/rx'])
return dfmemUtilization
def getNetworkRxErrorsPod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/rx_errors" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/rx_errors'])
return dfmemUtilization
def getNetworkRxErrorsRatePod(node,ns_name, pod_name):
queryResult = client.query('SELECT * FROM "network/rx_errors_rate" where nodename = \''+node+'\' AND pod_name = \''+pod_name+'\' AND namespace_name = \''+ns_name+'\' AND type=\'pod\';')
dfmemUtilization = pd.DataFrame(queryResult['network/rx_errors_rate'])
return dfmemUtilization
def prepareNetworkRxRateUtilization(node,ns_name, pod_name):
podNetworRxRate = getNetworkRxRatePod(node,ns_name, pod_name)
podNetworRx = getNetworkRxPod(node,ns_name, pod_name)
podNetworkError = getNetworkRxErrorsPod(node,ns_name, pod_name)
podNetworkErrorRate = getNetworkRxErrorsRatePod(node,ns_name, pod_name)
    plt.plot(podNetworRxRate.index, podNetworRxRate['value'], 'b')  # pod network rx rate
    #plt.plot(podNetworRx.index, podNetworRx['value'], 'g')  # cumulative rx (disabled)
    #plt.plot(podNetworkError.index, podNetworkError['value'], 'y')  # rx errors (disabled)
    plt.plot(podNetworkErrorRate.index, podNetworkErrorRate['value'], 'r')  # rx error rate
plt.show()
allNodeNames = getAllNodeNames()
allNodeNames
nsNames = getNamespaceNames(allNodeNames[3])
allPodNamesLarge = getAllPodNames (allNodeNames[3], nsNames[0])
allPodNamesSmall = getAllPodNames (allNodeNames[2], nsNames[0])
allPodNames2xLarge = getAllPodNames (allNodeNames[1], nsNames[0])
allPodNamesMedium = getAllPodNames (allNodeNames[4], nsNames[0])
#prepareMemoryUtilization(allNodeNames[3],nsNames[0], allPodNamesLarge[3])
#prepareCpuUtilization(allNodeNames[3],nsNames[0], allPodNamesLarge[3])
#prepareNetworkTxRateUtilization(allNodeNames[3],nsNames[0], allPodNamesLarge[3])
#prepareNetworkRxRateUtilization(allNodeNames[3],nsNames[0], allPodNamesLarge[3])
def getMongoDf(dbName, collectionName):
db = mongoclient[dbName]
    # use the requested collection (the original hardcoded db.kubeserver)
    datapoints = list(db[collectionName].find({}))
dfMongo = json_normalize(datapoints)
dfMongo = dfMongo.drop(['intermediate'], axis=1)
dfMongo = dfMongo.drop(['aggregate.codes.200'], axis=1)
dfMongo = dfMongo.drop(['_id'], axis=1)
if('aggregate.errors.ECONNRESET' in dfMongo.columns):
dfMongo = dfMongo.drop(['aggregate.errors.ECONNRESET'], axis=1)
if('aggregate.errors.ECONNREFUSED' in dfMongo.columns):
dfMongo = dfMongo.drop(['aggregate.errors.ECONNREFUSED'], axis=1)
if('aggregate.errors.ESOCKETTIMEDOUT' in dfMongo.columns):
dfMongo = dfMongo.drop(['aggregate.errors.ETIMEDOUT'], axis=1, errors='ignore')  # note: the guard checks ESOCKETTIMEDOUT but drops ETIMEDOUT; errors='ignore' avoids a KeyError when ETIMEDOUT is absent
dfMongo = dfMongo.drop(['aggregate.latency.p95', 'aggregate.latency.p99',
                        'aggregate.rps.count',
                        #'aggregate.rps.mean',
                        'aggregate.scenarioDuration.max', 'aggregate.scenarioDuration.min',
                        'aggregate.scenarioDuration.p95', 'aggregate.scenarioDuration.p99',
                        'aggregate.latency.max', 'aggregate.latency.min',
                        'aggregate.matches', 'aggregate.requestsCompleted'], axis=1)
return dfMongo
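# Sketch (illustrative field names, not the real schema): json_normalize flattens
# nested Mongo documents into dotted column names, which is how the
# 'aggregate.latency.median'-style columns handled above are produced.
import pandas as pd
exampleDocs = [
    {"aggregate": {"latency": {"median": 12.5}, "scenariosCreated": 10}},
    {"aggregate": {"latency": {"median": 15.0}, "scenariosCreated": 12}},
]
exampleFlat = pd.json_normalize(exampleDocs)
print(list(exampleFlat.columns))  # includes 'aggregate.latency.median'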
def assignValues(node,ns_name, pod_name,dbName,collectionName):
cpuUtilization = getCPUUtilizationNode(node)
podCpuUtilization = getCPUUtilizationPod(node,ns_name, pod_name)
containercpuUtilization = getCPUUtilizationPodContainer(node,ns_name, pod_name)
memoryUtilization = getMemoryUtilizationNode(node)
podMemoryUtilization = getMemoryUtilizationPod(node,ns_name, pod_name)
containerMemoryUtilization = getMemoryUtilizationPodContainer(node,ns_name, pod_name)
podNetworTxRate = getNetworkTxRatePod(node,ns_name, pod_name)
podNetworTx = getNetworkTxPod(node,ns_name, pod_name)
podNetworkError = getNetworkTxErrorsPod(node,ns_name, pod_name)
podNetworkErrorRate = getNetworkTxErrorsRatePod(node,ns_name, pod_name)
df = getMongoDf(dbName,collectionName)
df2 = pd.DataFrame()
df = df.reset_index(drop=True)
df2 = df2.reset_index(drop=True)
df['cpuNode'] = pd.Series(cpuUtilization['value'].values)
df['cpuPod'] = pd.Series(podCpuUtilization['value'].values / 1000)
df['cpuContainer'] = pd.Series(containercpuUtilization['value'].values / 1000)
df['memNode'] = pd.Series(memoryUtilization['value'].values)
df['memPod'] = pd.Series(podMemoryUtilization['value'].values / 1000000000)
df['memContainer'] = pd.Series(containerMemoryUtilization['value'].values / 1000000000)
df['networkTx'] = pd.Series(podNetworTxRate['value'].values)
df['networkTxError'] = pd.Series(podNetworkErrorRate['value'].values)
df['networkTx'] = df['networkTx'].values /df['networkTx'].max()
df['aggregate.scenariosCreated'] = df['aggregate.scenariosCreated'].values / df['aggregate.scenariosCreated'].max()
df['aggregate.scenariosCompleted'] = df['aggregate.scenariosCompleted'].values /df['aggregate.scenariosCompleted'].max()
df['aggregate.latency.median'] = df['aggregate.latency.median'].values /df['aggregate.latency.median'].max()
#df2['latencyMedian'] = pd.Series(df['aggregate.latency.median'].values)
df['aggregate.scenarioDuration.median'] = df['aggregate.scenarioDuration.median'].values / df['aggregate.scenarioDuration.median'].max()
df['aggregate.rps.mean'] = df['aggregate.rps.mean'].values / df['aggregate.rps.mean'].max()
#df3 = pd.concat([df2, df[['aggregate.latency.median', 'aggregate.scenarioDuration.median']]], axis=1)
return df
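# Sketch: assignValues above rescales several columns by dividing by the column
# maximum, so the largest observed value maps to 1.0 (toy numbers, not measurements).
import pandas as pd
exampleSeries = pd.Series([5.0, 10.0, 20.0])
exampleNorm = exampleSeries / exampleSeries.max()
print(exampleNorm.tolist())  # [0.25, 0.5, 1.0]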
newdf = assignValues(allNodeNames[3],nsNames[0], allPodNamesLarge[3],'t2large', 'kubeserver')
newdf = newdf.fillna(0)
newdf['core'] = 2
newdf['mem'] = 8
newdfSmall = assignValues(allNodeNames[2],nsNames[0], allPodNamesSmall[3],'t2small', 'kubeserver')
newdfSmall = newdfSmall.fillna(0)
newdfSmall['core'] = 1
newdfSmall['mem'] = 2
newdfMedium = assignValues(allNodeNames[4],nsNames[0], allPodNamesMedium[3],'t2medium', 'kubeserver')
newdfMedium = newdfMedium.fillna(0)
newdfMedium['core'] = 2
newdfMedium['mem'] = 4
newdf2xLarge = assignValues(allNodeNames[1],nsNames[0], allPodNames2xLarge[3],'t22xlarge', 'kubeserver')
newdf2xLarge = newdf2xLarge.fillna(0)
newdf2xLarge['core'] = 8
newdf2xLarge['mem'] = 16
newdf.head()
#plt.plot(newdf['aggregate.timestamp'], newdf['memNode']*1000, 'b') # plotting t, b separately
#plt.plot(newdf['aggregate.timestamp'], newdf['aggregate.scenariosCompleted'], 'g') # plotting t, b separately
#plt.plot(newdf['aggregate.timestamp'], newdf['cpuNode']*1000, 'r') # plotting t, b separately
#plt.plot(podNetworkError.index, podNetworkError['value'], 'y') # plotting t, b separately
#plt.plot(podNetworkErrorRate.index, podNetworkErrorRate['value'], 'r') # plotting t, b separately
#plt.show()
from pandas import read_csv
newdf.to_csv('LRdatalarge1.csv')
newdfSmall.to_csv('LRdatasmall1.csv')
newdfMedium.to_csv('LRdatamedium1.csv')
newdf2xLarge.to_csv('LRdata2xlarge1.csv')
datasetLarge = read_csv('LRdatalarge1.csv', header=0, index_col=0)
datasetSmall = read_csv('LRdatasmall1.csv', header=0, index_col=0)
datasetMedium = read_csv('LRdatamedium1.csv', header=0, index_col=0)
dataset2xLarge = read_csv('LRdata2xlarge1.csv', header=0, index_col=0)
dfLarge = pd.DataFrame(datasetLarge)
dfSmall = pd.DataFrame(datasetSmall)
dfMedium = pd.DataFrame(datasetMedium)
df2xLarge = pd.DataFrame(dataset2xLarge)
# Shared drop list; the per-size frames differ only in the error column present
# (ESOCKETTIMEDOUT vs ETIMEDOUT) and in whether mem/core are kept as features.
baseDropCols = ['aggregate.timestamp', 'aggregate.scenariosCompleted', 'memPod', 'memContainer', 'networkTx', 'networkTxError', 'aggregate.latency.median', 'aggregate.scenarioDuration.median', 'memNode', 'aggregate.rps.mean', 'cpuPod', 'cpuContainer']
d2dfsmall = dfSmall.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT', 'mem', 'core'], axis=1)
d3dfsmall = dfSmall.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT', 'mem'], axis=1)
d4dfsmall = dfSmall.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT'], axis=1)
d2dflarge = dfLarge.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT', 'mem', 'core'], axis=1)
d3dflarge = dfLarge.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT', 'mem'], axis=1)
d4dflarge = dfLarge.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT'], axis=1)
d2dfmedium = dfMedium.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT', 'mem', 'core'], axis=1)
d3dfmedium = dfMedium.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT', 'mem'], axis=1)
d4dfmedium = dfMedium.drop(baseDropCols + ['aggregate.errors.ESOCKETTIMEDOUT'], axis=1)
d2df2xlarge = df2xLarge.drop(baseDropCols + ['aggregate.errors.ETIMEDOUT', 'mem', 'core'], axis=1)
d3df2xlarge = df2xLarge.drop(baseDropCols + ['aggregate.errors.ETIMEDOUT', 'mem'], axis=1)
d4df2xlarge = df2xLarge.drop(baseDropCols + ['aggregate.errors.ETIMEDOUT'], axis=1)
```
# 2D Linear Regression
```
#X = testdf.drop(['cpuNode','aggregate.timestamp', 'cpuPod', 'cpuContainer', 'memPod', 'memContainer','networkTx', 'networkTxError'], axis=1)
#X = testdf.drop(['cpuNode','aggregate.timestamp', 'memPod', 'memContainer','networkTx', 'networkTxError', 'aggregate.latency.median', 'aggregate.scenarioDuration.median', 'memNode', 'aggregate.rps.mean', 'cpuPod', 'cpuContainer'], axis=1)
toDropX = ['cpuNode']
d3df = d3dfsmall[:50]
X = d3df.drop(toDropX, axis=1)
y = d3df['cpuNode']
Xlarge = d3dflarge.drop(toDropX, axis=1)
ylarge = d3dflarge['cpuNode']
Xmedium = d3dfmedium.drop(toDropX, axis=1)
ymedium = d3dfmedium['cpuNode']
X2xlarge = d3df2xlarge.drop(toDropX, axis=1)
y2xlarge = d3df2xlarge['cpuNode']
#X_train =X[:20]
#X_test =X.tail(10)
#y_train =y[:20]
#y_test =y.tail(10)
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet, SGDRegressor
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
a = 0.3
for name,met in [
('linear regression', LinearRegression()),
#('lasso', Lasso(fit_intercept=True, alpha=a)),
#('ridge', Ridge(fit_intercept=True, alpha=a)),
#('elastic-net', ElasticNet(fit_intercept=True, alpha=a))
]:
met.fit(X_train,y_train)
y_predictsmall = met.predict(X_test)
y_predictlarge = met.predict(Xlarge)
y_predictmedium = met.predict(Xmedium)
y_predict2xlarge = met.predict(X2xlarge)
# The mean squared error
print ("a= ", a)
print("Mean squared error small: %.2f"
% mean_squared_error(y_test, y_predictsmall))
print("Mean squared error large: %.2f"
% mean_squared_error(ylarge, y_predictlarge))
print("Mean squared error medium: %.2f"
% mean_squared_error(ymedium, y_predictmedium))
print("Mean squared error 2xlarge: %.2f"
% mean_squared_error(y2xlarge, y_predict2xlarge))
# Explained variance score: 1 is perfect prediction
print('Variance score small: %.2f' % r2_score(y_test, y_predictsmall))
# Explained variance score: 1 is perfect prediction
print('Variance score large: %.2f' % r2_score(ylarge, y_predictlarge))
# Explained variance score: 1 is perfect prediction
print('Variance score medium: %.2f' % r2_score(ymedium, y_predictmedium))
# Explained variance score: 1 is perfect prediction
print('Variance score 2xlarge: %.2f' % r2_score(y2xlarge, y_predict2xlarge))
print('met',met)
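# Sketch of the fit/predict/score pattern used above, on synthetic data
# (values are illustrative; an exactly linear target gives a near-perfect fit).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
exampleRng = np.random.RandomState(0)
exampleX = exampleRng.rand(40, 1)
exampleY = 3.0 * exampleX[:, 0] + 0.5
exampleModel = LinearRegression().fit(exampleX, exampleY)
examplePred = exampleModel.predict(exampleX)
print("MSE: %.4f" % mean_squared_error(exampleY, examplePred))
print("R2:  %.4f" % r2_score(exampleY, examplePred))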
#from sklearn.preprocessing import StandardScaler
#scaler = StandardScaler()
#scaler.fit(X_train)
#x_s = scaler.transform(X_train)
#sgdreg = SGDRegressor(penalty='l2', alpha=0.15, max_iter=200)
# Compute RMSE on training data
#sgdreg.fit(x_s,y_train)
#y_predict = sgdreg.predict(X_test)
#X_predict = sgdreg.predict(X_train)
#print("Mean squared error test: %.2f"
# % mean_squared_error(y_test, y_predict))
#print("Mean squared error train: %.2f"
# % mean_squared_error(y_train, X_predict))
# Explained variance score: 1 is perfect prediction
#print('Variance score: %.2f' % r2_score(y_test, y_predict))
pd.DataFrame(list(zip(y_predictsmall, y_test)), columns = ['predict', 'test']).head()  # pair predictions with actuals (zipping the DataFrame X directly would iterate its column labels)
# Plot outputs
#plt.scatter(X_predict, y_train, color='pink')
#plt.scatter(X_train['aggregate.scenariosCompleted'], y_train, color='blue')
#plt.scatter(X_train['memNode'], y_train, color='red')
#plt.scatter(X_train['aggregate.latency.median'], y_train, color='green')
plt.scatter(X_test['aggregate.scenariosCreated'], y_predictsmall, color='red')
plt.scatter(Xlarge['aggregate.scenariosCreated'], y_predictlarge, color='blue')
plt.scatter(Xmedium['aggregate.scenariosCreated'], y_predictmedium, color='green')
plt.scatter(X2xlarge['aggregate.scenariosCreated'], y_predict2xlarge, color='yellow')
#plt.plot(X_test, y_predict, color='blue')
plt.show()
plt.scatter(X_test['aggregate.scenariosCreated'], y_test, color='red')
plt.scatter(Xlarge['aggregate.scenariosCreated'], ylarge, color='blue')
plt.scatter(Xmedium['aggregate.scenariosCreated'], ymedium, color='green')
plt.scatter(X2xlarge['aggregate.scenariosCreated'], y2xlarge, color='yellow')
#plt.plot(X_test, y_predict, color='blue')
plt.show()
dfMy = testdf.set_index(pd.DatetimeIndex(testdf['aggregate.timestamp']))
dfMy = dfMy.drop(['aggregate.timestamp'], axis=1)
from pandas.plotting import lag_plot
lag_plot(dfMy)
onlyCPU = dfMy['cpuNode']
values = pd.DataFrame(onlyCPU.values)
dataframe = pd.concat([values.shift(1), values], axis=1)
result = dataframe.corr()
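# Sketch: the lag-1 correlation above pairs each value with its shifted copy;
# a strictly increasing toy series has lag-1 correlation of ~1.0.
import pandas as pd
exampleValues = pd.DataFrame([1.0, 2.0, 3.0, 4.0, 5.0])
exampleLagged = pd.concat([exampleValues.shift(1), exampleValues], axis=1)
exampleLagged.columns = ['t-1', 't']
exampleCorr = exampleLagged.corr()
print(exampleCorr.loc['t-1', 't'])  # ~1.0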
from pandas.plotting import autocorrelation_plot
from matplotlib import pyplot
autocorrelation_plot(onlyCPU)
pyplot.show()
from statsmodels.graphics.tsaplots import plot_acf
plot_acf(onlyCPU, lags=31)
pyplot.show()
from statsmodels.tsa.ar_model import AR
# train autoregression
testingData = onlyCPU.values
train, test = testingData[1:len(onlyCPU)-20], testingData[len(onlyCPU)-20:]
modelAR = AR(train)
model_fit = modelAR.fit()
print('Lag: %s' % model_fit.k_ar)
print('Coefficients: %s' % model_fit.params)
# make predictions
predictions = model_fit.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
for i in range(len(predictions)):
print('predicted=%f, expected=%f' % (predictions[i], test[i]))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot results
pyplot.plot(test, color='blue')
pyplot.plot(predictions, color='red')
pyplot.show()
modelAR = AR(train)
model_fit = modelAR.fit()
window = model_fit.k_ar
coef = model_fit.params
# walk forward over time steps in test
history = train[len(train)-window:]
history = [history[i] for i in range(len(history))]
predictions = list()
for t in range(len(test)):
length = len(history)
lag = [history[i] for i in range(length-window,length)]
yhat = coef[0]
for d in range(window):
yhat += coef[d+1] * lag[window-d-1]
obs = test[t]
predictions.append(yhat)
history.append(obs)
print('predicted=%f, expected=%f' % (yhat, obs))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot
pyplot.plot(test, color='blue')
pyplot.plot(predictions, color='red')
pyplot.show()
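# Sketch of the manual AR forecast arithmetic in the walk-forward loop above:
# yhat = intercept + sum(coef[d+1] * lag[window-d-1]). The coefficients here are
# made up for illustration, not taken from model_fit.
exampleCoef = [0.5, 0.6, 0.3]   # [intercept, lag-1 weight, lag-2 weight] (illustrative)
exampleHistory = [1.0, 2.0]     # [y(t-2), y(t-1)]
exampleWindow = 2
exampleLag = exampleHistory[-exampleWindow:]
exampleYhat = exampleCoef[0]
for d in range(exampleWindow):
    exampleYhat += exampleCoef[d + 1] * exampleLag[exampleWindow - d - 1]
print(exampleYhat)  # 0.5 + 0.6*2.0 + 0.3*1.0 = 2.0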
dfKeras = newdf.set_index(pd.DatetimeIndex(newdf['aggregate.timestamp']))
dfKeras = dfKeras.drop(['aggregate.timestamp'], axis=1)
dfKeras['core'] = 2
dfKeras['mem'] = 8
dfKerasSmall = newdfSmall.set_index(pd.DatetimeIndex(newdfSmall['aggregate.timestamp']))
dfKerasSmall = dfKerasSmall.drop(['aggregate.timestamp'], axis=1)
dfKerasSmall['core'] = 1
dfKerasSmall['mem'] = 2
dfKeras2xLarge = newdf2xLarge.set_index(pd.DatetimeIndex(newdf2xLarge['aggregate.timestamp']))
dfKeras2xLarge = dfKeras2xLarge.drop(['aggregate.timestamp'], axis=1)
dfKeras2xLarge['core'] = 8
dfKeras2xLarge['mem'] = 16
dfKerasMedium = newdfMedium.set_index(pd.DatetimeIndex(newdfMedium['aggregate.timestamp']))
dfKerasMedium = dfKerasMedium.drop(['aggregate.timestamp'], axis=1)
dfKerasMedium['core'] = 2
dfKerasMedium['mem'] = 4
# mark all NA values with 0
dfKeras.fillna(0, inplace=True)
#dfKeras = dfKeras[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKeras = dfKeras[['cpuNode', 'core','mem','aggregate.scenariosCreated']]
dfKerasSmall.fillna(0, inplace=True)
#dfKerasSmall = dfKerasSmall[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKerasSmall = dfKerasSmall[['cpuNode', 'core','mem','aggregate.scenariosCreated']]
dfKeras2xLarge.fillna(0, inplace=True)
#dfKeras2xLarge = dfKeras2xLarge[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKeras2xLarge = dfKeras2xLarge[['cpuNode', 'core', 'mem','aggregate.scenariosCreated']]
dfKerasMedium.fillna(0, inplace=True)
#dfKerasMedium = dfKerasMedium[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKerasMedium = dfKerasMedium[['cpuNode', 'core','mem','aggregate.scenariosCreated']]
#dfKeras= dfKeras.append(dfKerasMedium)
#dfKeras= dfKeras.append(dfKeras2xLarge)
#dfKeras= dfKeras.append(dfKerasSmall)
dfKeras.head()
dfKeras.to_csv('datalarge1.csv')
dfKerasSmall.to_csv('datasmall1.csv')
dfKeras2xLarge.to_csv('data2xlarge1.csv')
dfKerasMedium.to_csv('dataMedium1.csv')
from pandas import read_csv
from matplotlib import pyplot
# load dataset
dataset = read_csv('datalarge1.csv', header=0, index_col=0)
values = dataset.values
# specify columns to plot
groups = [0, 1, 2, 3]
i = 1
# plot each column
pyplot.figure()
for group in groups:
pyplot.subplot(len(groups), 1, i)
pyplot.plot(values[:, group])
pyplot.title(dataset.columns[group], y=0.5, loc='right')
i += 1
pyplot.show()
from sklearn import preprocessing
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
n_vars = 1 if type(data) is list else data.shape[1]
df = pd.DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = pd.concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
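# Sketch: series_to_supervised is built from shifted copies of the frame; the
# core shift-and-concat move on a toy two-column frame, one lag plus current step.
import pandas as pd
exampleDf = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})
exampleSup = pd.concat([exampleDf.shift(1), exampleDf], axis=1)
exampleSup.columns = ['a(t-1)', 'b(t-1)', 'a(t)', 'b(t)']
exampleSup = exampleSup.dropna()  # the first row has no t-1 observation
print(exampleSup.shape)  # (3, 4)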
# load dataset
dataset = read_csv('datalarge1.csv', header=0, index_col=0)
values = dataset.values
# integer encode direction
encoder = preprocessing.LabelEncoder()
values[:,3] = encoder.fit_transform(values[:,3])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, 1, 1)
reframed.drop(reframed.columns[[ 5,6,7]], axis=1, inplace=True)
print(reframed.head())
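# Sketch: MinMaxScaler maps each column's minimum to 0 and maximum to 1, and the
# fitted scaler is what later inverts the forecasts; a round trip on toy data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
exampleData = np.array([[10.0, 1.0], [20.0, 2.0], [30.0, 3.0]])
exampleScaler = MinMaxScaler(feature_range=(0, 1))
exampleScaled = exampleScaler.fit_transform(exampleData)
exampleRestored = exampleScaler.inverse_transform(exampleScaled)
print(exampleScaled[:, 0].tolist())  # [0.0, 0.5, 1.0]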
# load dataset
datasetSmall = read_csv('datasmall1.csv', header=0, index_col=0)
valuesSmall = datasetSmall.values
# integer encode direction
encoderSmall = preprocessing.LabelEncoder()
valuesSmall[:,3] = encoderSmall.fit_transform(valuesSmall[:,3])
# ensure all data is float
valuesSmall = valuesSmall.astype('float32')
# normalize features
scalerSmall = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaledSmall = scalerSmall.fit_transform(valuesSmall)
# frame as supervised learning
reframedSmall = series_to_supervised(scaledSmall, 1, 1)
reframedSmall.drop(reframedSmall.columns[[5,6,7]], axis=1, inplace=True)
print(reframedSmall.head())
# load dataset
dataset2xLarge = read_csv('data2xlarge1.csv', header=0, index_col=0)
values2xLarge = dataset2xLarge.values
# integer encode direction
encoder2xLarge = preprocessing.LabelEncoder()
values2xLarge[:,3] = encoder2xLarge.fit_transform(values2xLarge[:,3])
# ensure all data is float
values2xLarge = values2xLarge.astype('float32')
# normalize features
scaler2xLarge = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaled2xLarge = scaler2xLarge.fit_transform(values2xLarge)
# frame as supervised learning
reframed2xLarge = series_to_supervised(scaled2xLarge, 1, 1)
reframed2xLarge.drop(reframed2xLarge.columns[[ 5,6,7]], axis=1, inplace=True)
print(reframed2xLarge.head())
# load dataset
datasetmedium = read_csv('dataMedium1.csv', header=0, index_col=0)
valuesmedium = datasetmedium.values
# integer encode direction
encodermedium = preprocessing.LabelEncoder()
valuesmedium[:,3] = encodermedium.fit_transform(valuesmedium[:,3])
# ensure all data is float
valuesmedium = valuesmedium.astype('float32')
# normalize features
scalermedium = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaledmedium = scalermedium.fit_transform(valuesmedium)
# frame as supervised learning
reframedmedium = series_to_supervised(scaledmedium, 1, 1)
reframedmedium.drop(reframedmedium.columns[[5,6,7]], axis=1, inplace=True)
print(reframedmedium.head())
valuesLarge = reframed.values
valuesSmall = reframedSmall.values
valuesmedium = reframedmedium.values
n_train_samp = 50
train = valuesLarge#[:n_train_samp, :]
test = valuesLarge#[n_train_samp:, :]
num_features=6
dim = num_features - 1
test.shape
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
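# Sketch: Keras LSTMs expect input shaped [samples, timesteps, features], hence
# the reshape above; the same reshape on a small array with a single timestep.
import numpy as np
exampleArr = np.arange(12).reshape(4, 3)  # 4 samples, 3 features
exampleArr3d = exampleArr.reshape((exampleArr.shape[0], 1, exampleArr.shape[1]))
print(exampleArr3d.shape)  # (4, 1, 3)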
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
model = Sequential()
model.add(LSTM(2, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=50, batch_size=5, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
# make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
# invert scaling for forecast
from numpy import concatenate
from math import sqrt  # used for the RMSE below
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)  # this split came from the large dataset, so invert with its scaler rather than scalerSmall
inv_yhat = inv_yhat[:,0]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, 1:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)  # invert with the large-dataset scaler that produced this split
inv_y = inv_y[:,0]
# calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
values2xLarge = reframed2xLarge.values
valuesmedium = reframedmedium.values
test2xLarge = values2xLarge
testmedium = valuesmedium
test_X2xLarge, test_y2xLarge = test2xLarge[:, :-1], test2xLarge[:, -1]
# reshape input to be 3D [samples, timesteps, features]
test_X2xLarge = test_X2xLarge.reshape((test_X2xLarge.shape[0], 1, test_X2xLarge.shape[1]))
print(test_X2xLarge.shape, test_y2xLarge.shape)
test_Xmedium, test_ymedium = testmedium[:, :-1], testmedium[:, -1]
# reshape input to be 3D [samples, timesteps, features]
test_Xmedium = test_Xmedium.reshape((test_Xmedium.shape[0], 1, test_Xmedium.shape[1]))
print(test_Xmedium.shape, test_ymedium.shape)
# make a prediction
yhat2xLarge = model.predict(test_X2xLarge)
test_X2xLarge = test_X2xLarge.reshape((test_X2xLarge.shape[0], test_X2xLarge.shape[2]))
# invert scaling for forecast
inv_yhat2xLarge = concatenate((yhat2xLarge, test_X2xLarge[:, 1:]), axis=1)
inv_yhat2xLarge = scaler2xLarge.inverse_transform(inv_yhat2xLarge)
inv_yhat2xLarge = inv_yhat2xLarge[:,0]
# invert scaling for actual
test_y2xLarge = test_y2xLarge.reshape((len(test_y2xLarge), 1))
inv_y2xLarge = concatenate((test_y2xLarge, test_X2xLarge[:, 1:]), axis=1)
inv_y2xLarge = scaler2xLarge.inverse_transform(inv_y2xLarge)
inv_y2xLarge = inv_y2xLarge[:,0]
# calculate RMSE
rmse2xLarge = sqrt(mean_squared_error(inv_y2xLarge, inv_yhat2xLarge))
print('Test RMSE: %.3f' % rmse2xLarge)
# make a prediction
yhatmedium = model.predict(test_Xmedium)
test_Xmedium = test_Xmedium.reshape((test_Xmedium.shape[0], test_Xmedium.shape[2]))
# invert scaling for forecast
inv_yhatmedium = concatenate((yhatmedium, test_Xmedium[:, 1:]), axis=1)
inv_yhatmedium = scalermedium.inverse_transform(inv_yhatmedium)
inv_yhatmedium = inv_yhatmedium[:,0]
# invert scaling for actual
test_ymedium = test_ymedium.reshape((len(test_ymedium), 1))
inv_ymedium = concatenate((test_ymedium, test_Xmedium[:, 1:]), axis=1)
inv_ymedium = scalermedium.inverse_transform(inv_ymedium)
inv_ymedium = inv_ymedium[:,0]
# calculate RMSE
rmsemedium = sqrt(mean_squared_error(inv_ymedium, inv_yhatmedium))
print('Test RMSE: %.3f' % rmsemedium)
pyplot.plot(inv_y, label = 'Actual')
pyplot.plot(inv_yhat, label = 'Predicted')
pyplot.legend()
pyplot.show()
pyplot.plot(inv_ymedium, label = 'Actual')
pyplot.plot(inv_yhatmedium, label = 'Predicted')
pyplot.legend()
pyplot.show()
pyplot.plot(inv_y2xLarge, label = 'Actual')
pyplot.plot(inv_yhat2xLarge, label = 'Predicted')
pyplot.legend()
pyplot.show()
mongoclient2 = MongoClient('localhost', 27017)
def getMongoDf2(dbName, collectionName):
db = mongoclient2[dbName]  # use the second client created above
collection_name = collectionName
datapoints = list(db[collection_name].find({}))  # query the requested collection ('testKube' at the call sites) rather than the hard-coded kubeserver
dfMongo = json_normalize(datapoints)
dfMongo = dfMongo.drop(['intermediate'], axis=1)
dfMongo = dfMongo.drop(['aggregate.codes.200'], axis=1)
dfMongo = dfMongo.drop(['_id'], axis=1)
if('aggregate.errors.ECONNRESET' in dfMongo.columns):
dfMongo = dfMongo.drop(['aggregate.errors.ECONNRESET'], axis=1)
if('aggregate.errors.ECONNREFUSED' in dfMongo.columns):
dfMongo = dfMongo.drop(['aggregate.errors.ECONNREFUSED'], axis=1)
if('aggregate.errors.ESOCKETTIMEDOUT' in dfMongo.columns):
dfMongo = dfMongo.drop(['aggregate.errors.ETIMEDOUT'], axis=1, errors='ignore')  # note: the guard checks ESOCKETTIMEDOUT but drops ETIMEDOUT; errors='ignore' avoids a KeyError when ETIMEDOUT is absent
dfMongo = dfMongo.drop(['aggregate.latency.p95', 'aggregate.latency.p99',
                        'aggregate.rps.count',
                        #'aggregate.rps.mean',
                        'aggregate.scenarioDuration.max', 'aggregate.scenarioDuration.min',
                        'aggregate.scenarioDuration.p95', 'aggregate.scenarioDuration.p99',
                        'aggregate.latency.max', 'aggregate.latency.min',
                        'aggregate.matches', 'aggregate.requestsCompleted'], axis=1)
return dfMongo
def assignValues2(node,ns_name, pod_name,dbName,collectionName):
df = getMongoDf2(dbName,collectionName)
df2 = pd.DataFrame()
df = df.reset_index(drop=True)
df2 = df2.reset_index(drop=True)
df['aggregate.scenariosCreated'] = df['aggregate.scenariosCreated'].values / df['aggregate.scenariosCreated'].max()
df['aggregate.scenariosCompleted'] = df['aggregate.scenariosCompleted'].values / df['aggregate.scenariosCompleted'].max()
df['aggregate.latency.median'] = df['aggregate.latency.median'].values / df['aggregate.latency.median'].max()  # write to df (df2 was discarded on return), matching assignValues above
#df2['requestDurationMedian'] = pd.Series(df['aggregate.scenarioDuration.median'].values)
#df3 = pd.concat([df2, df[['aggregate.latency.median', 'aggregate.scenarioDuration.median']]], axis=1)
return df
newdfmongoLarge = assignValues2(allNodeNames[3],nsNames[0], allPodNamesLarge[3],'t2large', 'testKube')
newdfmongoLarge = newdfmongoLarge.fillna(0)
newdf2Small = assignValues2(allNodeNames[2],nsNames[0], allPodNamesSmall[3],'t2small', 'testKube')
newdf2Small = newdf2Small.fillna(0)
newdf2Medium = assignValues2(allNodeNames[4],nsNames[0], allPodNamesMedium[3],'t2medium', 'testKube')
newdf2Medium = newdf2Medium.fillna(0)
newdf22xLarge = assignValues2(allNodeNames[1],nsNames[0], allPodNames2xLarge[3],'t22xlarge', 'testKube')
newdf22xLarge = newdf22xLarge.fillna(0)
dfKerasSmall2 = newdf2Small.set_index(pd.DatetimeIndex(newdf2Small['aggregate.timestamp']))
dfKerasSmall2 = dfKerasSmall2.drop(['aggregate.timestamp'], axis=1)
dfKerasSmall2['core'] = 1
dfKerasSmall2['mem'] = 2
dfKerasSmall2['memNode'] = 0
dfKerasSmall2['cpuNode'] = 0
dfKeras2xLarge2 = newdf22xLarge.set_index(pd.DatetimeIndex(newdf22xLarge['aggregate.timestamp']))
dfKeras2xLarge2 = dfKeras2xLarge2.drop(['aggregate.timestamp'], axis=1)
dfKeras2xLarge2['core'] = 8
dfKeras2xLarge2['mem'] = 16
dfKeras2xLarge2['memNode'] = 0
dfKeras2xLarge2['cpuNode'] = 0
dfKerasMedium2 = newdf2Medium.set_index(pd.DatetimeIndex(newdf2Medium['aggregate.timestamp']))
dfKerasMedium2 = dfKerasMedium2.drop(['aggregate.timestamp'], axis=1)
dfKerasMedium2['core'] = 2
dfKerasMedium2['mem'] = 4
dfKerasMedium2['memNode'] = 0
dfKerasMedium2['cpuNode'] = 0
dfKeraslarge2 = newdfmongoLarge.set_index(pd.DatetimeIndex(newdfmongoLarge['aggregate.timestamp']))
dfKeraslarge2 = dfKeraslarge2.drop(['aggregate.timestamp'], axis=1)
dfKeraslarge2['core'] = 2
dfKeraslarge2['mem'] = 8
dfKeraslarge2['memNode'] = 0
dfKeraslarge2['cpuNode'] = 0
# mark all NA values with 0
dfKeraslarge2.fillna(0, inplace=True)
#dfKeraslarge2 = dfKeraslarge2[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKeraslarge2 = dfKeraslarge2[['cpuNode', 'core','mem','aggregate.scenariosCreated']]
dfKerasSmall2.fillna(0, inplace=True)
#dfKerasSmall2 = dfKerasSmall2[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKerasSmall2 = dfKerasSmall2[['cpuNode', 'core','mem','aggregate.scenariosCreated']]
#dfKeras.append(dfKerasMedium)  # no-op: DataFrame.append returns a new frame and the result was never assigned
dfKeras2xLarge2.fillna(0, inplace=True)
#dfKeras2xLarge2 = dfKeras2xLarge2[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKeras2xLarge2 = dfKeras2xLarge2[['cpuNode', 'core','mem','aggregate.scenariosCreated']]
dfKerasMedium2.fillna(0, inplace=True)
#dfKerasMedium2 = dfKerasMedium2[['cpuNode', 'core','aggregate.scenariosCreated', 'mem']]
dfKerasMedium2 = dfKerasMedium2[['cpuNode', 'core', 'mem','aggregate.scenariosCreated']]
#dfKeras=dfKeras.append(dfKeras2xLarge)
dfKeraslarge2.head()
dfKeraslarge2.to_csv('datalarge2.csv')
dfKerasSmall2.to_csv('datasmall2.csv')
dfKeras2xLarge2.to_csv('data2xlarge2.csv')
dfKerasMedium2.to_csv('dataMedium2.csv')
# load dataset
dataset2 = read_csv('datalarge2.csv', header=0, index_col=0)
values2 = dataset2.values
# integer encode direction
encoder2 = preprocessing.LabelEncoder()
values2[:,3] = encoder2.fit_transform(values2[:,3])
# ensure all data is float
values2 = values2.astype('float32')
# normalize features
scaler2 = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaled2 = scaler2.fit_transform(values2)
# frame as supervised learning
reframed2 = series_to_supervised(scaled2, 1, 1)
reframed2.drop(reframed2.columns[[5,6,7]], axis=1, inplace=True)
print(reframed2.head())
# load dataset
datasetSmall2 = read_csv('datasmall2.csv', header=0, index_col=0)
valuesSmall2 = datasetSmall2.values
# integer encode direction
encoderSmall2 = preprocessing.LabelEncoder()
valuesSmall2[:,3] = encoderSmall2.fit_transform(valuesSmall2[:,3])
# ensure all data is float
valuesSmall2 = valuesSmall2.astype('float32')
# normalize features
scalerSmall2 = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaledSmall2 = scalerSmall2.fit_transform(valuesSmall2)
# frame as supervised learning
reframedSmall2 = series_to_supervised(scaledSmall2, 1, 1)
reframedSmall2.drop(reframedSmall2.columns[[5,6,7]], axis=1, inplace=True)
print(reframedSmall2.head())
# load dataset
dataset2xLarge2 = read_csv('data2xlarge2.csv', header=0, index_col=0)
values2xLarge2 = dataset2xLarge2.values
# integer encode direction
encoder2xLarge2 = preprocessing.LabelEncoder()
values2xLarge2[:,3] = encoder2xLarge2.fit_transform(values2xLarge2[:,3])
# ensure all data is float
values2xLarge2 = values2xLarge2.astype('float32')
# normalize features
scaler2xLarge2 = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaled2xLarge2 = scaler2xLarge2.fit_transform(values2xLarge2)
# frame as supervised learning
reframed2xLarge2 = series_to_supervised(scaled2xLarge2, 1, 1)
reframed2xLarge2.drop(reframed2xLarge2.columns[[5,6,7]], axis=1, inplace=True)
print(reframed2xLarge2.head())
# load dataset
datasetmedium2 = read_csv('dataMedium2.csv', header=0, index_col=0)
valuesmedium2 = datasetmedium2.values
# integer encode direction
encodermedium2= preprocessing.LabelEncoder()
valuesmedium2[:,3] = encodermedium2.fit_transform(valuesmedium2[:,3])
# ensure all data is float
valuesmedium2 = valuesmedium2.astype('float32')
# normalize features
scalermedium2 = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaledmedium2 = scalermedium2.fit_transform(valuesmedium2)
# frame as supervised learning
reframedmedium2 = series_to_supervised(scaledmedium2, 1, 1)
reframedmedium2.drop(reframedmedium2.columns[[5,6,7]], axis=1, inplace=True)
print(reframedmedium2.head())
values2xLarge2 = reframed2xLarge2.values
valuesmedium2 = reframedmedium2.values
valuessmall2 = reframedSmall2.values
valuesLarge2 = reframed2.values
test2xLarge2 = values2xLarge2
testmedium2= valuesmedium2
testsmall2 = valuessmall2
testlarge2= valuesLarge2
test_X2xLarge2, test_y2xLarge2 = test2xLarge2[:, :-1], test2xLarge2[:, -1]
# reshape input to be 3D [samples, timesteps, features]
test_X2xLarge2 = test_X2xLarge2.reshape((test_X2xLarge2.shape[0], 1, test_X2xLarge2.shape[1]))
print(test_X2xLarge2.shape, test_y2xLarge2.shape)
test_Xmedium2, test_ymedium2 = testmedium2[:, :-1], testmedium2[:, -1]
# reshape input to be 3D [samples, timesteps, features]
test_Xmedium2 = test_Xmedium2.reshape((test_Xmedium2.shape[0], 1, test_Xmedium2.shape[1]))
print(test_Xmedium2.shape, test_ymedium2.shape)
test_XLarge2, test_yLarge2 = testlarge2[:, :-1], testlarge2[:, -1]
# reshape input to be 3D [samples, timesteps, features]
test_XLarge2 = test_XLarge2.reshape((test_XLarge2.shape[0], 1, test_XLarge2.shape[1]))
print(test_XLarge2.shape, test_yLarge2.shape)
test_Xsmall2, test_ysmall2 = testsmall2[:, :-1], testsmall2[:, -1]
# reshape input to be 3D [samples, timesteps, features]
test_Xsmall2 = test_Xsmall2.reshape((test_Xsmall2.shape[0], 1, test_Xsmall2.shape[1]))
print(test_Xsmall2.shape, test_ysmall2.shape)
# make a prediction
yhat2xLarge2 = model.predict(test_X2xLarge2)
test_X2xLarge2 = test_X2xLarge2.reshape((test_X2xLarge2.shape[0], test_X2xLarge2.shape[2]))
# invert scaling for forecast
inv_yhat2xLarge2 = concatenate((yhat2xLarge2, test_X2xLarge2[:, 1:]), axis=1)
inv_yhat2xLarge2 = scaler2xLarge2.inverse_transform(inv_yhat2xLarge2)
inv_yhat2xLarge2 = inv_yhat2xLarge2[:,0]
# invert scaling for actual
test_y2xLarge2 = test_y2xLarge2.reshape((len(test_y2xLarge2), 1))
inv_y2xLarge2 = concatenate((test_y2xLarge2, test_X2xLarge2[:, 1:]), axis=1)
inv_y2xLarge2 = scaler2xLarge2.inverse_transform(inv_y2xLarge2)
inv_y2xLarge2 = inv_y2xLarge2[:,0]
# calculate RMSE
rmse2xLarge2 = sqrt(mean_squared_error(inv_y2xLarge2, inv_yhat2xLarge2))
print('Test RMSE: %.3f' % rmse2xLarge2)
# make a prediction
yhatmedium2 = model.predict(test_Xmedium2)
test_Xmedium2 = test_Xmedium2.reshape((test_Xmedium2.shape[0], test_Xmedium2.shape[2]))
# invert scaling for forecast
inv_yhatmedium2 = concatenate((yhatmedium2, test_Xmedium2[:, 1:]), axis=1)
inv_yhatmedium2 = scalermedium2.inverse_transform(inv_yhatmedium2)
inv_yhatmedium2= inv_yhatmedium2[:,0]
# invert scaling for actual
test_ymedium2 = test_ymedium2.reshape((len(test_ymedium2), 1))
inv_ymedium2 = concatenate((test_ymedium2, test_Xmedium2[:, 1:]), axis=1)
inv_ymedium2 = scalermedium2.inverse_transform(inv_ymedium2)
inv_ymedium2 = inv_ymedium2[:,0]
# calculate RMSE
rmsemedium2 = sqrt(mean_squared_error(inv_ymedium2, inv_yhatmedium2))
print('Test RMSE: %.3f' % rmsemedium2)
# make a prediction
yhatsmall2 = model.predict(test_Xsmall2)
test_Xsmall2 = test_Xsmall2.reshape((test_Xsmall2.shape[0], test_Xsmall2.shape[2]))
# invert scaling for forecast
inv_yhatsmall2 = concatenate((yhatsmall2, test_Xsmall2[:, 1:]), axis=1)
inv_yhatsmall2 = scalerSmall2.inverse_transform(inv_yhatsmall2)
inv_yhatsmall2= inv_yhatsmall2[:,0]
# invert scaling for actual
test_ysmall2 = test_ysmall2.reshape((len(test_ysmall2), 1))
inv_ysmall2 = concatenate((test_ysmall2, test_Xsmall2[:, 1:]), axis=1)
inv_ysmall2 = scalerSmall2.inverse_transform(inv_ysmall2)
inv_ysmall2 = inv_ysmall2[:,0]
# calculate RMSE
rmsesmall2 = sqrt(mean_squared_error(inv_ysmall2, inv_yhatsmall2))
print('Test RMSE: %.3f' % rmsesmall2)
pyplot.plot(inv_ysmall2, label = 'Actual')
pyplot.plot(inv_yhatsmall2, label = 'Predicted')
pyplot.legend()
pyplot.show()
pyplot.plot(inv_ymedium2, label = 'Actual')
pyplot.plot(inv_yhatmedium2, label = 'Predicted')
pyplot.legend()
pyplot.show()
pyplot.plot(inv_y2xLarge2, label = 'Actual')
pyplot.plot(inv_yhat2xLarge2, label = 'Predicted')
pyplot.legend()
pyplot.show()
```
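The `series_to_supervised` helper called repeatedly in the cell above is assumed to be defined earlier in the notebook; a minimal sketch consistent with how it is called here (`n_in=1`, `n_out=1`, pandas `shift`-based framing) might look like:

```python
import pandas as pd

def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    """Frame a (multivariate) time series as a supervised-learning table.

    Columns var{j}(t-i) hold lagged inputs; var{j}(t) and var{j}(t+i)
    hold the forecast outputs.
    """
    df = pd.DataFrame(data)
    n_vars = df.shape[1]
    cols, names = [], []
    # input sequence (t-n_in, ..., t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [f'var{j+1}(t-{i})' for j in range(n_vars)]
    # forecast sequence (t, t+1, ..., t+n_out-1)
    for i in range(n_out):
        cols.append(df.shift(-i))
        names += [f'var{j+1}(t)' if i == 0 else f'var{j+1}(t+{i})'
                  for j in range(n_vars)]
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    if dropnan:
        agg.dropna(inplace=True)
    return agg
```

With 4 scaled features and `n_in=n_out=1` this produces 8 columns, which is why columns `[5, 6, 7]` (the non-target output columns) are dropped in the cells above, leaving column 4 as the prediction target.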
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/landsat_radiance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/landsat_radiance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Algorithms/landsat_radiance.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/landsat_radiance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
| github_jupyter |
## Download latest Jars
```
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-21.06.1.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/21.06.1/cudf-21.06.1.jar
wget -O rapids-4-spark_2.12-21.06.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/21.06.0/rapids-4-spark_2.12-21.06.0.jar
wget -O xgboost4j_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.3.0-0.1.0/xgboost4j_3.0-1.3.0-0.1.0.jar
wget -O xgboost4j-spark_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.3.0-0.1.0/xgboost4j-spark_3.0-1.3.0-0.1.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
```
### Create a Directory for your init script
```
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-21.06.1.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-21.06.0.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
```
### Confirm your init script is in the new directory
```
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
```
### Download the Mortgage Dataset into your local machine and upload Data using import Data
```
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
```
### Next steps
1. Edit your cluster, adding an initialization script from `dbfs:/databricks/init_scripts/init.sh` in the "Advanced Options" under "Init Scripts" tab
2. Reboot the cluster
3. Go to "Libraries" tab under your cluster and install `dbfs:/FileStore/jars/xgboost4j-spark_3.0-1.3.0-0.1.0.jar` in your cluster by selecting the "DBFS" option for installing jars
4. Import the mortgage example notebook from `https://github.com/NVIDIA/spark-xgboost-examples/blob/spark-3/examples/notebooks/python/mortgage-gpu.ipynb`
5. Inside the mortgage example notebook, update the data paths
`train_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/small-train.csv')`
`trans_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/small-trans.csv')`
| github_jupyter |
# BERT Classifier
- Group 14
- Kaggle Team Name: Yellow Submarine
- Score: 0.59839, Rank: 4
- Team members (Student Name & ID & kaggle display name):
- YUAN Yanzhe: 20728555 (bighead)
- LIU Donghua: 20731100 (Danhuang)
- SHEN Dinghui: 20726478 (mengnan)
- Run on: Google Colab
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/MyDrive/Colab Notebooks/6000_proj1')
!pip install transformers
!pip install nltk
import nltk
nltk.download('stopwords')
# machine learning packages
import os
import re
import math
import pandas as pd
import numpy as np
from collections import defaultdict
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score
import nltk
from nltk.corpus import stopwords
# deep learning packages
import torch
from torch import nn as nn
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
import transformers
from transformers import BertTokenizer
from transformers import BertModel
from transformers import BertForSequenceClassification
from transformers import AdamW
from transformers import get_linear_schedule_with_warmup
# settings
stopwords = set(stopwords.words("english"))
if torch.cuda.is_available():
device = torch.device("cuda")
print(f'There are {torch.cuda.device_count()} GPU(s) available.')
print('Device name:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
```
## 1. Data Preprocessing
- lowercasing
- regex/character-replacement cleaning
- stopword removal
```
# preprocessing
def load_data(file_name):
df = pd.read_csv(file_name)
return df, df['text'], df['context'], df['impact_label']
def preprocessing(text, lower_word=True, re_word=True, stop_word=True):
# input: text:str
# output: preprocessed sentence:str
# lower_word
if lower_word:
text = text.lower()
# re_word
if re_word:
text = text.strip('[]').replace("'", "").replace("'", "").replace("%", " percent ")\
.replace("$", " dollar ").replace('&', ' and ').replace("-", " ")
#.replace('.','').replace('"', "")
# stop_word
if stop_word:
text = ' '.join([word for word in text.split() if word not in stopwords])
# remove whitespace
text = re.sub(r'\s+', ' ', text).strip()
return text
# Load Data
train_raw, train_raw_text, train_raw_context, train_raw_label = load_data('data/train.csv')
valid_raw, valid_raw_text, valid_raw_context, valid_raw_label = load_data('data/valid.csv')
test_raw, test_raw_text, test_raw_context, test_raw_label = load_data('data/test.csv')
train_raw.head()
#valid_raw.head()
#test_raw.head()
# Do Preprocessing
# train and valid data
train_text = [preprocessing(text, lower_word=True, re_word=True, stop_word=True) for text in train_raw_text]
valid_text = [preprocessing(text, lower_word=True, re_word=True, stop_word=True) for text in valid_raw_text]
train_context = [preprocessing(text, lower_word=True, re_word=True, stop_word=True) for text in train_raw_context]
valid_context = [preprocessing(text, lower_word=True, re_word=True, stop_word=True) for text in valid_raw_context]
class_to_label = {'NOT_IMPACTFUL':0, 'MEDIUM_IMPACT':1, 'IMPACTFUL':2}
train_label = train_raw_label.map(class_to_label).tolist()
valid_label = valid_raw_label.map(class_to_label).tolist()
print(len(train_text))
print(len(train_context))
print(len(train_label))
print(len(valid_text))
print(len(valid_context))
print(len(valid_label))
# test data
test_text = [preprocessing(text, lower_word=True, re_word=True, stop_word=True) for text in test_raw_text]
test_context = [preprocessing(text, lower_word=True, re_word=True, stop_word=True) for text in test_raw_context]
test_label = test_raw_label.map({'UNKNOWN':-1})
print(len(test_text))
print(len(test_context))
```
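As a quick sanity check on the `preprocessing` function defined above, here is a self-contained version of the same logic, using a small stand-in stopword set instead of the NLTK download:

```python
import re

stopwords = {"the", "a", "is", "of"}  # stand-in for nltk's English stopword list

def preprocessing(text, lower_word=True, re_word=True, stop_word=True):
    # lowercase
    if lower_word:
        text = text.lower()
    # literal character replacements (same as the notebook version)
    if re_word:
        text = (text.strip('[]').replace("'", "").replace("%", " percent ")
                    .replace("$", " dollar ").replace('&', ' and ').replace("-", " "))
    # drop stopwords
    if stop_word:
        text = ' '.join(w for w in text.split() if w not in stopwords)
    # collapse whitespace
    return re.sub(r'\s+', ' ', text).strip()

print(preprocessing("The price is 50% off & rising"))
# → price 50 percent off and rising
```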
## EDA
- class imbalance
- max_length
```
# See Stats
print('The shape of train data: ', train_raw.shape)
print('The shape of valid data: ', valid_raw.shape)
#print('The shape of test data: ', test_raw.shape)
print('\n# of NULL value in train data:\n', train_raw.isnull().sum())
print('# of NULL value in valid data:\n', valid_raw.isnull().sum())
#print('# of NULL value in test data:\n', test_raw.isnull().sum())
train_imbalance = train_raw.impact_label.value_counts()
valid_imbalance = valid_raw.impact_label.value_counts()
print('\nimbalance in train data\n', train_imbalance)
print('imbalance in valid data\n', valid_imbalance)
# determine class weight
# from sklearn.utils.class_weight import compute_class_weight
# class_weight = compute_class_weight('balanced',np.unique(train_label+valid_label),train_label+valid_label).astype(np.float64)
# class_weight = torch.FloatTensor(class_weight)
test_raw.head()
# find MAX_LENGTH
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
all_data = train_context+train_text\
+valid_context+valid_text\
+test_context+test_text
sents_encode = [tokenizer.encode(sent, add_special_tokens=True) for sent in all_data]
sent_lens = [len(sent) for sent in sents_encode]
print('Max length: ', max(sent_lens))
import seaborn as sns
sns.histplot(sent_lens)
plt.xlim([0,512])
plt.xlabel('Sent Len');
```
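The commented-out class-weight computation above uses `sklearn.utils.class_weight`; the same 'balanced' heuristic (weight = n_samples / (n_classes × class_count)) can be sketched with plain NumPy, which avoids the keyword-argument changes in newer sklearn versions:

```python
import numpy as np

def balanced_class_weights(labels):
    """'balanced' heuristic: n_samples / (n_classes * class_count)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# hypothetical example: labels 0/1/2 with counts 6/3/1
print(balanced_class_weights([0] * 6 + [1] * 3 + [2]))
```

The resulting dict can be turned into a `torch.FloatTensor` and passed as the `weight` argument of `nn.CrossEntropyLoss` to counteract the class imbalance observed above.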
## Data Preprocessing Method 2
```
# df1 = pd.read_csv("train.csv")
# df2 = pd.read_csv("valid.csv")
# df = df1.append(df2)
# df.shape
# plt.style.use('ggplot')
# sns.countplot(df.impact_label)
# plt.xlabel('label')
# def to_numerical(label):
# if label == "IMPACTFUL":
# return 2
# elif label == "MEDIUM_IMPACT":
# return 1
# else:
# return 0
# df["impact_label"] = df.impact_label.apply(to_numerical)
# def processtext(x):
# text = x
# text = text.replace('[','').replace(']','')
# text = text.replace('\"','\'')
# return text
# def process_stance(x):
# text = x
# text = text.replace('[','').replace(']','')
# return text
# df.context = df.context.apply(lambda x : processtext(x))
# df.stance_label = df.stance_label.apply(lambda x : process_stance(x))
# def merge_stance1(text, stance_label): # combine the text with the last stance label word in 'stance_label'
# label_list = stance_label.split(',')
# label_list.insert(0, label_list[-1])
# label_list.pop()
# output_list = text + label_list[0]
# return output_list
# def merge_stance2(context, stance_label): # combine all the sentence in context with the corresponding stance label
# label_list = stance_label.split(',')
# label_list.insert(0, label_list[-1])
# label_list.pop()
# context_list = context.split("\', \'")
# output_list = []
# num = len(label_list) - 1
# for i in range(num):
# output_list.append(context_list[i] + label_list[i+1])
# return output_list
# df['text_with_stance'] = df[["text", "stance_label"]].apply(lambda x : merge_stance1(x["text"], x["stance_label"]), axis=1)
# df['context_with_stance'] = df[["context", "stance_label"]].apply(lambda x : merge_stance2(x["context"], x["stance_label"]), axis=1)
# df['context_with_stance'] = df['context_with_stance'].str.join(' ')
# df.index = range(len(df))
# pd.DataFrame(df,columns = ['text_with_stance', 'context_with_stance', 'impact_label']).to_csv("train1.csv")
# df_test = pd.read_csv("test.csv")
# df_test['text'] = df_test.text.apply(lambda x : processtext(x))
# df_test['context'] = df_test.context.apply(lambda x : processtext(x))
# df_test.stance_label = df_test.stance_label.apply(lambda x : process_stance(x))
# df_test['context_with_stance'] = df_test[["context", "stance_label"]].apply(lambda x : merge_stance2(x["context"], x["stance_label"]), axis=1)
# df_test['text_with_stance'] = df_test[["text", "stance_label"]].apply(lambda x : merge_stance1(x["text"], x["stance_label"]), axis=1)
# df_test['context_with_stance'] = df_test['context_with_stance'].str.join(' ')
# df_test.index = range(len(df_test))
# pd.DataFrame(df_test,columns = ['text_with_stance', 'context_with_stance']).to_csv('test1.csv', index = False)
```
## 2. Model: BERT
### 2.1 bert pipeline
- Load Data
    - class BertDataPreprocessor: converts each text/context/label triple into BERT input tensors.
    - def data_loader: wraps the dataset in a PyTorch DataLoader.
- Define Model
- BERT (pooler_output/last_hidden_state)
- dropout
- linear
- Train Model
- train and eval
- train on the whole dataset
```
# Hyperparameters: for tuning
PRETRAINED_MODEL = 'bert-base-uncased' # 'xlnet-base-cased','distilbert'
MAX_LENGTH = 256 # 512
BATCH_SIZE = 32
EPOCHS = 8
bert_model = BertModel.from_pretrained(PRETRAINED_MODEL)
tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL)
```
##### We also merge the train and valid data into a single pool and re-split it into new train and valid sets
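A minimal sketch of that merge-and-resplit step (the function name, 80/20 ratio, and seed are illustrative, not from the notebook):

```python
import random

def merge_and_resplit(train_items, valid_items, valid_frac=0.2, seed=42):
    """Pool two labeled lists and re-split them into new train/valid sets."""
    pooled = list(train_items) + list(valid_items)
    rng = random.Random(seed)          # fixed seed for a reproducible split
    rng.shuffle(pooled)
    n_valid = int(len(pooled) * valid_frac)
    return pooled[n_valid:], pooled[:n_valid]

# hypothetical example with toy (text, context, label) rows
rows = [("t%d" % i, "c%d" % i, i % 3) for i in range(10)]
new_train, new_valid = merge_and_resplit(rows[:7], rows[7:])
print(len(new_train), len(new_valid))  # → 8 2
```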
```
# Data Loader: transfer data into BERT input
class BertDataPreprocessor():
def __init__(self, text, context, label, tokenizer, max_len):
self.text = text
self.context = context
self.label = label
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.text)
def __getitem__(self, item):
context = self.context[item]
text = self.text[item]
label = self.label[item]
encoding = self.tokenizer.encode_plus(
#context,
text,
context,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
padding='max_length',
truncation=True,
return_attention_mask=True,
return_tensors='pt',
)
return {'context':context, 'text':text, 'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'label': torch.tensor(label, dtype=torch.long)}
# token_type_ids = encoded_pair['token_type_ids'].squeeze(0)
def data_loader(text, context, label, tokenizer, max_len, batch_size):
data = BertDataPreprocessor(
text=text,
context=context,
label=label,
tokenizer=tokenizer,
max_len=max_len
)
return DataLoader(data, batch_size=batch_size,num_workers=4)
# ---Train Model 1.0: train and eval---
train_data_loader = data_loader(train_text, train_context, train_label, tokenizer, MAX_LENGTH, BATCH_SIZE)
valid_data_loader = data_loader(valid_text, valid_context, valid_label, tokenizer, MAX_LENGTH, BATCH_SIZE)
test_data_loader = data_loader(test_text, test_context, test_label, tokenizer, MAX_LENGTH, BATCH_SIZE)
# ---Train Model 2.0: train the whole dataset---
# train_text_full = train_text + valid_text
# train_context_full = train_context + valid_context
# train_label_full = train_label + valid_label
# train_data_loader = data_loader(train_text_full, train_context_full, train_label_full, tokenizer, MAX_LENGTH, BATCH_SIZE)
# test_data_loader = data_loader(test_text, test_context, test_label, tokenizer, MAX_LENGTH, BATCH_SIZE)
# check output
data = next(iter(train_data_loader))
data.keys()
print(data['input_ids'].shape)
print(data['attention_mask'].shape)
print(data['label'].shape)
# Define Model
class BertClassifier(nn.Module):
def __init__(self, pretrained_model, num_class):
super(BertClassifier,self).__init__()
self.bert = pretrained_model #BertModel.from_pretrained('bert-base-uncased')
self.dropout = nn.Dropout(p=0.5)
self.linear = nn.Sequential(
nn.Linear(self.bert.config.hidden_size, 128),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(128, num_class)
)
# self.linear = nn.Linear(self.bert.config.hidden_size, num_class)
def forward(self, input_ids, attention_mask):
res = self.bert(input_ids=input_ids, attention_mask=attention_mask)
res = self.dropout(res['last_hidden_state'][:,0,:]) # res['last_hidden_state'][:,0,:], res['pooler_output']
res = self.linear(res)
return res
# settings
net = BertClassifier(bert_model, len(class_to_label)).to(device)
#print(net)
optimizor = AdamW(net.parameters(), lr=2e-5, correct_bias=False)
loss_func = nn.CrossEntropyLoss().to(device)
total_steps = len(train_data_loader) * EPOCHS
scheduler = get_linear_schedule_with_warmup(optimizer=optimizor, num_warmup_steps=0, num_training_steps=total_steps)
# def set_seed(seed):
# """ Set all seeds to make results reproducible """
# torch.manual_seed(seed)
# torch.cuda.manual_seed_all(seed)
# torch.backends.cudnn.deterministic = True
# torch.backends.cudnn.benchmark = False
# np.random.seed(seed)
# random.seed(seed)
# os.environ['PYTHONHASHSEED'] = str(seed)
# def train_test_split(data_df, test_size=0.2, shuffle=True, random_state=None):
# if shuffle:
# data_df = reset(data_df, random_state=random_state)
# train = data_df[int(len(data_df)*test_size):].reset_index(drop = True)
# test = data_df[:int(len(data_df)*test_size)].reset_index(drop = True)
# return train, test
def train_model(net, data_loader, optimizor, loss_func, scheduler, num_data, device):
    net = net.train() # enable dropout and batch-norm training behavior
preds, total_l, correct = [], [], 0
for data in data_loader:
# data
input_ids, attention_mask = data['input_ids'].to(device), data['attention_mask'].to(device)
y = data['label'].to(device)
# pipeline
output = net(input_ids=input_ids, attention_mask=attention_mask)
_, pred = torch.max(output,dim=1) # values, indices = torch.max(), use indices to reflect pred_labels
loss = loss_func(output,y)
loss.backward()
nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)
optimizor.step()
scheduler.step()
optimizor.zero_grad()
# record stats
preds.append(pred)
correct += torch.sum(pred==y)
total_l.append(loss.item())
return correct.double()/num_data, np.mean(total_l), torch.cat(preds, dim=0).cpu()
def eval_model(net, data_loader, loss_func, num_data, device):
net = net.eval()
preds, total_l, correct = [], [], 0
with torch.no_grad():
for data in data_loader:
# data
input_ids, attention_mask = data['input_ids'].to(device), data['attention_mask'].to(device)
y = data['label'].to(device)
# pipeline: just foward
output = net(input_ids=input_ids, attention_mask=attention_mask)
_, pred = torch.max(output,dim=1)
loss = loss_func(output,y)
# record stats
preds.append(pred)
correct += torch.sum(pred==y)
total_l.append(loss.item())
return correct.double()/num_data, np.mean(total_l), torch.cat(preds, dim=0).cpu()
def test_model(net, data_loader):
net = net.eval()
preds, pred_probs = [],[]
with torch.no_grad():
for data in data_loader:
# data
input_ids, attention_mask = data['input_ids'].to(device), data['attention_mask'].to(device)
y = data['label'].to(device)
# pipeline
output = net(input_ids=input_ids, attention_mask=attention_mask)
_, pred = torch.max(output,dim=1)
preds.extend(pred.cpu())
#pred_prob = nn.functional.softmax(output, dim=1)
#pred_probs.extend(pred_prob)
predictions = torch.stack(preds).cpu().numpy()
#prediction_probs = torch.stack(prediction_probs).cpu()
return predictions
# Train Model 1.0: train and eval
train_history, lowest_loss, best_f1 = defaultdict(list), 10, 0
for epoch in range(EPOCHS):
    # train
train_acc, train_loss, train_preds = train_model(net, train_data_loader, optimizor, loss_func,\
scheduler, len(train_text), device)
# show stats
train_f1 = f1_score(train_label, train_preds, average='macro')
print('--------------')
print('Epoch: %d/%d \n Train_loss: %f, Train_acc: %.4f%%, Train_F1_Macro: %f'\
%((epoch+1), EPOCHS, train_loss, train_acc*100, train_f1))
# validate
valid_acc, valid_loss, valid_preds = eval_model(net, valid_data_loader, loss_func, len(valid_text), device)
# show stats
valid_f1 = f1_score(valid_label, valid_preds, average='macro')
print('Epoch: %d/%d \n Valid_loss: %f, Valid_acc: %.4f%%, Valid_F1_Macro: %f'\
%((epoch+1), EPOCHS, valid_loss, valid_acc*100, valid_f1))
train_history['train_acc'].append(train_acc)
train_history['train_f1'].append(train_f1)
train_history['train_loss'].append(train_loss)
train_history['val_acc'].append(valid_acc)
train_history['valid_f1'].append(valid_f1)
train_history['val_loss'].append(valid_loss)
if valid_f1 > best_f1:
torch.save(net.state_dict(), 'model_saved/bert_clf_329_f1_1.h')
best_f1 = valid_f1
if valid_loss < lowest_loss:
torch.save(net.state_dict(), 'model_saved/bert_clf_329_loss_1.h')
lowest_loss = valid_loss
# # Train Model 2.0: train the whole dataset (train+valid)
# for epoch in range(EPOCHS):
# #     train
# train_acc, train_loss, train_preds = train_model(net, train_data_loader, optimizor, loss_func,\
# scheduler, len(train_text_full), device)
# # show stats
# print('--------------')
# train_f1 = f1_score(train_label, train_preds, average='macro')
# print('Epoch: %d/%d \n Train_loss: %f, Train_acc: %.4f%%, Train_F1_Macro: %f'\
# %((epoch+1), EPOCHS, train_loss, train_acc*100, train_f1))
#torch.save(net.state_dict(), 'model_saved/bert_clf_0.h')
# Save Model
# torch.save(net.state_dict(), 'model_saved/bert_clf_0.h')
```
### 2.2 predict and submit
```
# Make Predictions
net = BertClassifier(bert_model, len(class_to_label))
net.load_state_dict(torch.load('model_saved/bert_clf_329_f1_1.h'))  # best-F1 checkpoint saved during training
net = net.to(device)
predictions = test_model(net,test_data_loader)
# check output
print(len(predictions),type(predictions))
# Submit
submission = pd.read_csv('data/submission_sample.csv')
submission['pred'] = pd.Series(predictions)
submission.to_csv('3300.csv', index=False)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/cxbxmxcx/EvolutionaryDeepLearning/blob/main/EDL_6_4_Keras_GA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Install DEAP
!pip install deap --quiet
!pip install livelossplot --quiet
#@title Imports
import tensorflow as tf
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib.pyplot as plt
from IPython.display import clear_output
from livelossplot import PlotLossesKeras
#DEAP
from deap import algorithms
from deap import base
from deap import benchmarks
from deap import creator
from deap import tools
import random
#@title Dataset Parameters { run: "auto" }
number_samples = 400 #@param {type:"slider", min:100, max:1000, step:25}
difficulty = 1 #@param {type:"slider", min:1, max:5, step:1}
problem = "moons" #@param ["classification", "blobs", "gaussian quantiles", "moons", "circles"]
number_features = 2
number_classes = 2
middle_layer = 25 #@param {type:"slider", min:5, max:25, step:1}
epochs = 50 #@param {type:"slider", min:5, max:50, step:1}
def load_data(problem):
if problem == "classification":
clusters = 1 if difficulty < 3 else 2
informs = 1 if difficulty < 4 else 2
data = sklearn.datasets.make_classification(
n_samples = number_samples,
n_features=number_features,
n_redundant=0,
class_sep=1/difficulty,
n_informative=informs,
n_clusters_per_class=clusters)
if problem == "blobs":
data = sklearn.datasets.make_blobs(
n_samples = number_samples,
n_features=number_features,
centers=number_classes,
cluster_std = difficulty)
if problem == "gaussian quantiles":
data = sklearn.datasets.make_gaussian_quantiles(mean=None,
cov=difficulty,
n_samples=number_samples,
n_features=number_features,
n_classes=number_classes,
shuffle=True,
random_state=None)
if problem == "moons":
data = sklearn.datasets.make_moons(
n_samples = number_samples)
if problem == "circles":
data = sklearn.datasets.make_circles(
n_samples = number_samples)
return data
data = load_data(problem)
X, Y = data
# Input Data
plt.figure("Input Data")
plt.scatter(X[:, 0], X[:, 1], c=Y, s=40, cmap=plt.cm.Spectral)
#@title Helper function to show prediction results
def show_predictions(model, X, Y, name=""):
""" display the labeled data X and a surface of prediction of model """
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1))
X_temp = np.c_[xx.flatten(), yy.flatten()]
Z = model.predict(X_temp)
plt.figure("Predictions " + name)
plt.contourf(xx, yy, Z.reshape(xx.shape), cmap=plt.cm.Spectral)
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(X[:, 0], X[:, 1],c=Y, s=40, cmap=plt.cm.Spectral)
#@title Logistic Regression with SKLearn
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, Y)
show_predictions(clf, X, Y, "Logistic regression")
LR_predictions = clf.predict(X)
print("Logistic Regression accuracy : ", np.sum(LR_predictions == Y) / Y.shape[0])
#@title Setup Keras Model
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(16, activation='relu', input_shape=(X.shape[1],)),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
optimizer = tf.keras.optimizers.Adam(learning_rate=.001)
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
trainableParams = np.sum([np.prod(v.get_shape()) for v in model.trainable_weights])
print(f"Trainable parameters: {trainableParams}")
model.fit(X, Y, epochs=epochs,
callbacks=[PlotLossesKeras()],
verbose=0)
model.evaluate(X,Y)
show_predictions(model, X, Y, "Keras with Adam")
print("Neural Network accuracy : ", model.evaluate(X,Y)[1])
def print_parameters():
for layer in model.layers:
for na in layer.get_weights():
print(na)
def set_parameters(individual):
idx = 0
tensors=[]
for layer in model.layers:
for na in layer.get_weights():
size = na.size
sh = na.shape
t = individual[idx:idx+size]
t = np.array(t)
t = np.reshape(t, sh)
idx += size
tensors.append(t)
model.set_weights(tensors)
number_of_genes = trainableParams
print(number_of_genes)
individual = np.ones(number_of_genes)
set_parameters(individual)
print("Neural Network accuracy : ", model.evaluate(X,Y)[1])
print_parameters()
show_predictions(model, X, Y, "Neural Network")
print("Neural Network accuracy : ", model.evaluate(X,Y)[1])
#@title Setting up the Creator
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))  # negative weight: minimize 1/accuracy
creator.create("Individual", list, fitness=creator.FitnessMin)
#@title Create Individual and Population
def uniform(low, up, size=None):
try:
return [random.uniform(a, b) for a, b in zip(low, up)]
except TypeError:
return [random.uniform(a, b) for a, b in zip([low] * size, [up] * size)]
toolbox = base.Toolbox()
toolbox.register("attr_float", uniform, -1, 1, number_of_genes)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.attr_float)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("select", tools.selTournament, tournsize=5)
toolbox.register("mate", tools.cxBlend, alpha=.5)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=.1, indpb=.25)
def clamp(num, min_value, max_value):
return max(min(num, max_value), min_value)
def evaluate(individual):
set_parameters(individual)
print('.', end='')
return 1/clamp(model.evaluate(X,Y, verbose=0)[1], .00001, 1),
toolbox.register("evaluate", evaluate)
#@title Optimize the Weights { run: "auto" }
MU = 35 #@param {type:"slider", min:5, max:1000, step:5}
NGEN = 100 #@param {type:"slider", min:100, max:1000, step:10}
RGEN = 1 #@param {type:"slider", min:1, max:100, step:1}
CXPB = .6
MUTPB = .3
random.seed(64)
pop = toolbox.population(n=MU)
hof = tools.HallOfFame(1)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)
best = None
history = []
for g in range(NGEN):
pop, logbook = algorithms.eaSimple(pop, toolbox,
cxpb=CXPB, mutpb=MUTPB, ngen=RGEN, stats=stats, halloffame=hof, verbose=False)
best = hof[0]
clear_output()
print(f"Gen ({(g+1)*RGEN})")
show_predictions(model, X, Y, "Neural Network")
print("Neural Network accuracy : ", model.evaluate(X,Y)[1])
plt.show()
set_parameters(best)
show_predictions(model, X, Y, "Best Neural Network")
plt.show()
acc = model.evaluate(X,Y)[1]
print("Best Neural Network accuracy : ", acc)
if acc > .99: #stop condition
break
```
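The `set_parameters` helper above works by slicing one flat gene vector into per-layer chunks and reshaping each chunk to a layer's weight shape. The round trip can be sketched without Keras; the shapes below are illustrative, standing in for a tiny 2-3-1 network:

```python
# Slice one flat "genome" into per-layer chunks, mirroring set_parameters.
from math import prod

shapes = [(2, 3), (3,), (3, 1), (1,)]              # illustrative layer shapes
flat = list(range(sum(prod(s) for s in shapes)))   # stand-in gene vector

chunks, idx = [], 0
for sh in shapes:
    size = prod(sh)
    chunks.append(flat[idx:idx + size])            # genes for this layer
    idx += size

assert idx == len(flat)                            # every gene used exactly once
assert [len(c) for c in chunks] == [prod(s) for s in shapes]
```

Because `idx` advances by exactly `size` per layer, the genome length must equal the trainable parameter count, which is why `number_of_genes` is set from it.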
| github_jupyter |
# **Linear Regression Implementation from Scratch**
```
%matplotlib inline
from IPython import display
from matplotlib import pyplot as plt
import torch
import random
```
# **Generating Data Sets**
* Randomly generate $\mathbf{X}\in\mathbb{R}^{1000\times 2}$
* Use ground truth: weight $\mathbf{w}=[2,-3.4]^T$ and bias $b=4.2$.
* Generate label by $\mathbf{y} = \mathbf{X} \mathbf{w} +b +\epsilon$ with noise $\epsilon$ obeying a normal distribution with a mean of 0 and a standard deviation of 0.01.
```
num_inputs = 2
num_examples = 1000
true_w = torch.tensor([2, -3.4])
true_b = 4.2
features = torch.normal(0, 1, size=(num_examples, num_inputs))
labels = torch.matmul(features, true_w) + true_b
labels += torch.normal(0, 0.01, size=(labels.shape))
```
# **Visualize the Second Feature and Label**
```
display.set_matplotlib_formats('svg')
plt.figure(figsize=(6, 3))
print(features.shape, labels.shape)
plt.scatter(features[:, 1].numpy(), labels.numpy(), 1);
```
# **Reading Data**
Iterate over the data set, yielding `batch_size` random examples on each pass.
```
def data_iter(batch_size, features, labels):
num_examples = len(features)
indices = list(range(num_examples))
# The examples are read at random, in no particular order
random.shuffle(indices)
for i in range(0, num_examples, batch_size):
j = torch.tensor(indices[i: min(i + batch_size, num_examples)])
yield features[j,:], labels[j]
```
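The shuffle-then-slice pattern inside `data_iter` can be checked with plain Python, independent of PyTorch: every example lands in exactly one minibatch, and the last batch holds the remainder.

```python
import random

num_examples, batch_size = 25, 10
indices = list(range(num_examples))
random.shuffle(indices)  # read examples in random order, as data_iter does

batches = [indices[i:i + batch_size]
           for i in range(0, num_examples, batch_size)]

assert [len(b) for b in batches] == [10, 10, 5]     # final partial batch
assert sorted(i for b in batches for i in b) == list(range(num_examples))
```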
# **Print a Small Data Batch**
```
batch_size = 10
for X, y in data_iter(batch_size, features, labels):
print(X, y)
break
```
# **Initialize Model Parameters**
Weights are initialized to normal random numbers using a mean of 0 and a standard deviation of 0.01, with the bias $b$ set to zero.
```
w = torch.normal(0, 0.01, size=(num_inputs, 1))
b = torch.zeros(size=(1,))
```
# **Attach Gradients to Parameters**
```
w.requires_grad_(True)
b.requires_grad_(True)
```
# **Define the Linear Model**
```
def linreg(X, w, b):
return torch.matmul(X, w) + b
```
# **Define the Loss Function**
```
def squared_loss(y_hat, y):
return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2
```
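A quick scalar check of the definition: with y_hat = 3 and y = 1 the loss is (3 − 1)²/2 = 2. The plain-Python analogue of `squared_loss`:

```python
def squared_loss_scalar(y_hat, y):
    # 0.5 * (y_hat - y)^2, the scalar form of squared_loss
    return (y_hat - y) ** 2 / 2

assert squared_loss_scalar(3.0, 1.0) == 2.0
assert squared_loss_scalar(1.0, 1.0) == 0.0
```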
# **Define the Optimization Algorithm**
```
def sgd(params, lr, batch_size):
for param in params:
# param[:] = param - lr * param.grad / batch_size
param.data.sub_(lr*param.grad/batch_size)
param.grad.data.zero_()
```
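The update rule implemented by `sgd` is param ← param − lr · grad / batch_size; dividing by the batch size turns a summed minibatch gradient into an average. A scalar sanity check:

```python
def sgd_step(param, grad, lr, batch_size):
    # mirrors param.data.sub_(lr * param.grad / batch_size) for one scalar
    return param - lr * grad / batch_size

# lr=0.1 with a gradient summed over a batch of 10: step of 0.05
assert sgd_step(1.0, 5.0, lr=0.1, batch_size=10) == 0.95
```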
# **Training**
```
lr = 0.1 # Learning rate
num_epochs = 3 # Number of iterations
net = linreg # Our fancy linear model
loss = squared_loss # 0.5 (y-y')^2
w = torch.normal(0, 0.01, size=(num_inputs, 1))
b = torch.zeros(size=(1,))
w.requires_grad_(True)
b.requires_grad_(True)
for epoch in range(num_epochs):
for X, y in data_iter(batch_size, features, labels):
with torch.enable_grad():
l = loss(net(X, w, b), y) # Minibatch loss in X and y
        l.sum().backward()  # Compute gradient on l w.r.t. [w, b]; sgd then averages by batch_size
sgd([w, b], lr, batch_size) # Update parameters using their gradient
train_l = loss(net(features, w, b), labels)
print('epoch %d, loss %f' % (epoch + 1, train_l.mean().item()))
```
# **Evaluate the Trained Model**
```
print('Error in estimating w', true_w - w.reshape(true_w.shape))
print('Error in estimating b', true_b - b)
print(w)
print(b)
```
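For intuition about what the trained parameters should converge to, the one-feature analogue has a closed-form least-squares answer: w = Cov(x, y)/Var(x) and b = ȳ − w·x̄. A tiny noise-free example in plain Python, with values chosen so the result is exact:

```python
# Noise-free 1-D data generated with true w = 2, b = 4: y = 2x + 4
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 4 for x in xs]

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
# Closed-form least squares: w = Cov(x, y) / Var(x), b = mean(y) - w * mean(x)
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - w * mx

assert abs(w - 2.0) < 1e-12 and abs(b - 4.0) < 1e-12
```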
```
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
from __future__ import division, unicode_literals, print_function
import warnings
warnings.filterwarnings('ignore')
import spacy
import plac
import ujson as json
import numpy
import pandas as pd
import en_core_web_md
import en_vectors_glove_md
from pathlib import Path
from keras.utils.np_utils import to_categorical
from keras.callbacks import EarlyStopping, ModelCheckpoint
from sklearn.model_selection import train_test_split
try:
import cPickle as pickle
except ImportError:
import pickle
from spacy_hook import get_embeddings, get_word_ids
from spacy_hook import create_similarity_pipeline
from keras_decomposable_attention import build_model
def get_quora_data(src_train, src_test):
df_train = pd.read_csv(src_train)
df_train.dropna(inplace = True)
df_tr, df_val = train_test_split(df_train, test_size = 0.15, random_state = 111)
return df_tr, df_val
def evaluate(dev_loc):
dev_texts1, dev_texts2, dev_labels = read_snli(dev_loc)
nlp = spacy.load('en',
create_pipeline=create_similarity_pipeline)
total = 0.
correct = 0.
for text1, text2, label in zip(dev_texts1, dev_texts2, dev_labels):
doc1 = nlp(text1)
doc2 = nlp(text2)
sim = doc1.similarity(doc2)
if sim.argmax() == label.argmax():
correct += 1
total += 1
return correct, total
def train_mine(shape, settings, savename):
train_texts1, train_texts2, train_labels = df_tr['question1'], df_tr['question2'], to_categorical(df_tr['is_duplicate'])
dev_texts1, dev_texts2, dev_labels = df_val['question1'], df_val['question2'], to_categorical(df_val['is_duplicate'])
print("Loading spaCy")
#nlp = spacy.load('en')
nlp = en_core_web_md.load()
#nlp = en_vectors_glove_md.load()
assert nlp.path is not None
print("Compiling network")
model = build_model(get_embeddings(nlp.vocab), shape, settings)
print("Processing texts...")
Xs = []
for texts in (train_texts1, train_texts2, dev_texts1, dev_texts2):
Xs.append(get_word_ids(list(nlp.pipe(texts, n_threads=20, batch_size=20000)),
max_length=shape[0],
rnn_encode=settings['gru_encode'],
tree_truncate=settings['tree_truncate']))
train_X1, train_X2, dev_X1, dev_X2 = Xs
print(settings)
callbacks = [ModelCheckpoint('{}.h5'.format(savename),
monitor='val_loss',
verbose = 0, save_best_only = True),
EarlyStopping(monitor='val_loss', patience = 10, verbose = 1)]
model.fit([train_X1, train_X2],train_labels,
validation_data=([dev_X1, dev_X2], dev_labels), class_weight = class_weight,
nb_epoch=settings['nr_epoch'],
batch_size=settings['batch_size'], callbacks = callbacks)
return model
src_train_raw = '../../../data/train.csv'
src_test_raw = '../../../data/test.csv'
src_train = '../../../features/df_train_spacylemmat_fullclean.csv'
src_test = '../../../features/df_test_spacylemmat_fullclean.csv'
settings = {
'lr': 0.0005,
'dropout': 0.2,
'batch_size': 128,
'nr_epoch': 100,
'tree_truncate': True,
'gru_encode': False,
}
max_length = 170
nr_hidden = 256
shape = (max_length, nr_hidden, 2)
print(shape)
re_weight = True
if re_weight:
class_weight = {0: 1.309028344, 1: 0.472001959}
else:
class_weight = None
```
On fullclean data:

    settings = {
        'lr': 0.0005,
        'dropout': 0.2,
        'batch_size': 128,
        'nr_epoch': 100,
        'tree_truncate': False,
        'gru_encode': False,
    }
    max_length = 170
    nr_hidden = 256

val_loss (fullclean): 0.3533; with tree_truncate enabled: 0.3483
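The hard-coded `class_weight` values in the setup cell compensate for the mismatch between the training positive rate and the (assumed) evaluation positive rate. The sketch below shows one generic way to derive such weights; the rates are illustrative assumptions and the result does not necessarily reproduce the exact constants above:

```python
# If the training set has positive rate p_obs but evaluation assumes
# p_target, weighting positives by p_target/p_obs and negatives by
# (1 - p_target)/(1 - p_obs) makes the weighted positive rate equal p_target.
p_obs, p_target = 0.37, 0.165           # illustrative assumed rates

w1 = p_target / p_obs                   # weight for class 1
w0 = (1 - p_target) / (1 - p_obs)       # weight for class 0

weighted_pos_rate = w1 * p_obs / (w1 * p_obs + w0 * (1 - p_obs))
assert abs(weighted_pos_rate - p_target) < 1e-12
```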
```
df_tr, df_val = get_quora_data(src_train, src_test)
train_mine(shape, settings, 'decomposable_encoreweb_0.0005LR_treetrunc_170len_fullclean_reweight')
```
# Smoking Status Classification
Spark NLP v 2.4.5
Spark NLP-JSL v 2.4.6
[Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/clinical_text_classification/1.Smoking_Status_Classification.ipynb)
```
# template for license_key.json
license_keys = {'secret':"xxx",
'SPARK_NLP_LICENSE': 'aaa',
'JSL_OCR_LICENSE': 'bbb',
'AWS_ACCESS_KEY_ID':"ccc",
'AWS_SECRET_ACCESS_KEY':"ddd",
'JSL_OCR_SECRET':"eee"}
import os
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
secret = license_keys['secret']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
! python -m pip install --upgrade spark-nlp-jsl==2.4.6 --extra-index-url https://pypi.johnsnowlabs.com/$secret
# Install Spark NLP
! pip install --ignore-installed -q spark-nlp==2.4.5
import sparknlp
print (sparknlp.version())
import json
import os
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
def start(secret):
builder = SparkSession.builder \
.appName("Spark NLP Licensed") \
.master("local[*]") \
.config("spark.driver.memory", "8G") \
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
.config("spark.kryoserializer.buffer.max", "900M") \
.config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.11:2.4.5") \
.config("spark.jars", "https://pypi.johnsnowlabs.com/"+secret+"/spark-nlp-jsl-2.4.6.jar")
return builder.getOrCreate()
# spark = start(secret) # if you want to start the session with custom params as in start function above
# sparknlp_jsl.start(secret)
import os
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
from sparknlp.common import *
import sparknlp_jsl
spark = sparknlp_jsl.start('xxx')
import xmltodict
with open('data/smoker/smokers_surrogate_test_all_groundtruth_version2.xml', 'r') as f:
text = f.read()
xml = xmltodict.parse(text)
xml['ROOT']['RECORD'][0].keys()
xml['ROOT']['RECORD'][0]['TEXT'][:1000]
len(xml['ROOT']['RECORD'])
import pandas as pd
test_df = pd.DataFrame([(t['TEXT'],t['SMOKING']['@STATUS']) for t in xml['ROOT']['RECORD']], columns = ['text','label'])
test_df.label.value_counts()
import xmltodict
with open('data/smoker/smokers_surrogate_train_all_version2.xml', 'r') as f:
text = f.read()
xml = xmltodict.parse(text)
train_df = pd.DataFrame([(t['TEXT'],t['SMOKING']['@STATUS']) for t in xml['ROOT']['RECORD']], columns = ['text','label'])
train_df.label.value_counts()
print (train_df[train_df.label=='PAST SMOKER']['text'].values[0][:100])
train_df.head()
test_df.head()
print (test_df.text[0][:1000])
spark_train_df = spark.createDataFrame(train_df.append(test_df))
from pyspark.sql import functions as F
# create a monotonically increasing id
spark_train_df = spark_train_df.withColumn("id", F.monotonically_increasing_id())
spark_train_df.show(1)
spark_train_df.select('label').show(10)
rules = '''
(no|non|not|never|negative)\W*(smoker|smoking|smoked|tobacco), xxx
denies\W*smoking, xxx
nonsmoker, xxx
(tobacco|smoke|smoking|nicotine)\W*(never|no), xxx
doesn\'t smoke, xxx
'''
with open('data/smoker/smoking_regex_rules.txt', 'w') as f:
f.write(rules)
sparknlp_jsl.version()
entities = ['smoke', 'secondhand', 'thirdhand', 'pipes',
'cigs', 'tobacco', 'cigarettes', 'cigar', 'cigars',
'tobaco', 'cigarette', 'hookah', 'nutcrackers',
'nicotine','nicotene', 'nicoderm', 'nictoine']
with open ('data/smoker/smoker_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
document = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentence = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentence')\
.setCustomBounds(['\n'])
regex_matcher = RegexMatcher()\
.setInputCols('sentence')\
.setStrategy("MATCH_ALL")\
.setOutputCol("nonsmoker_regex_matches")\
.setExternalRules(path='data/smoker/smoking_regex_rules.txt', delimiter=',')
token = Tokenizer()\
.setInputCols(['sentence'])\
.setOutputCol('token')
entity_extractor = TextMatcher() \
.setInputCols(["sentence",'token'])\
.setOutputCol("smoker_entities")\
.setEntities('data/smoker/smoker_entities.txt')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = NerDLModel.pretrained('ner_clinical', 'en', 'clinical/models')\
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("clinical_ner")
clinical_converter = NerConverter()\
.setInputCols(["sentence", "token", "clinical_ner"])\
.setOutputCol("clinical_ner_chunk")
bionlp_model = NerDLModel.pretrained('ner_bionlp', 'en', 'clinical/models')\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("bio_ner")
bionlp_converter = NerConverter()\
.setInputCols(["sentence", "token", "bio_ner"])\
.setOutputCol("bio_ner_chunk")
posology_ner_model = NerDLModel.pretrained('ner_posology_large', 'en', 'clinical/models')\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("posology_ner")
posology_converter = NerConverter()\
.setInputCols(["sentence", "token", "posology_ner"])\
.setOutputCol("posology_ner_chunk")
risk_ner_model = NerDLModel.pretrained('ner_risk_factors', 'en', 'clinical/models')\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("risk_ner")
risk_converter = NerConverter()\
.setInputCols(["sentence", "token", "risk_ner"])\
.setOutputCol("risk_ner_chunk")\
.setWhiteList(['SMOKER'])
risk_assertion_dl = sparknlp_jsl.annotators.AssertionDLModel.pretrained('assertion_dl', 'en', 'clinical/models')\
.setInputCols(["sentence", "risk_ner_chunk", "embeddings"])\
.setOutputCol("assertion")
cell_ner_model = NerDLModel.pretrained('ner_cellular', 'en', 'clinical/models')\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("cell_ner")
cell_converter = NerConverter()\
.setInputCols(["sentence", "token", "cell_ner"])\
.setOutputCol("cell_ner_chunk")
ner_pipeline = Pipeline(
stages = [
document,
sentence,
regex_matcher,
token,
entity_extractor,
embeddings,
clinical_ner,
clinical_converter,
bionlp_model,
bionlp_converter,
posology_ner_model,
posology_converter,
risk_ner_model,
risk_converter,
risk_assertion_dl,
cell_ner_model,
cell_converter,
])
empty_data = spark.createDataFrame([[""]]).toDF("text")
model = ner_pipeline.fit(empty_data)
print ('Spark NLP pipeline is built')
lm = LightPipeline(model)
match_df = model.transform(spark_train_df)
lm.fullAnnotate(match_df.select('text').take(1)[0][0][:200])
lm.fullAnnotate('He is a nonsmoker. He quit cigar a year ago')
#ann_results = lm.fullAnnotate(list(text_df['text'])[0])
lm.annotate('He is a nonsmoker. He quit cigar a year ago')
match_df.select('text').take(1)
match_df.show(2)
match_df.select("smoker_entities.metadata").take(1)
match_df.select('sentence').take(1)
match_df.select('id','label','assertion.result', 'nonsmoker_regex_matches.result', 'smoker_entities.result').show(3)
from pyspark.sql import functions as F
pandas_df = match_df.select('id','label','nonsmoker_regex_matches','smoker_entities','assertion',
F.explode(F.arrays_zip('clinical_ner_chunk.result',"clinical_ner_chunk.metadata",
'bio_ner_chunk.result',"bio_ner_chunk.metadata",
'posology_ner_chunk.result',"posology_ner_chunk.metadata",
'risk_ner_chunk.result',"risk_ner_chunk.metadata",
'cell_ner_chunk.result',"cell_ner_chunk.metadata",
)).alias("cols")) \
.select('id','label','nonsmoker_regex_matches.result','smoker_entities.result','assertion.result',
F.expr("cols['0']").alias("clinical_token"),
F.expr("cols['1'].entity").alias("clinical_entity"),
F.expr("cols['2']").alias("bionlp_token"),
F.expr("cols['3'].entity").alias("bionlp_entity"),
F.expr("cols['4']").alias("posology_token"),
F.expr("cols['5'].entity").alias("posology_entity"),
F.expr("cols['6']").alias("risk_token"),
F.expr("cols['7'].entity").alias("risk_entity"),
F.expr("cols['8']").alias("cell_token"),
F.expr("cols['9'].entity").alias("cell_entity")).toPandas()
pandas_df.head()
pandas_df.columns = ['id', 'label', 'nonsmoker_regex_matches', 'smoker_entities', 'assertion', 'clinical_token',
'clinical_entity', 'bionlp_token', 'bionlp_entity', 'posology_token',
'posology_entity', 'risk_token', 'risk_entity', 'cell_token',
'cell_entity']
pandas_df.to_pickle('data/smoker_features_df.pickle')
import pandas as pd
pandas_df= pd.read_pickle('data/smoker_features_df.pickle')
pandas_df.columns
pandas_df.shape
pandas_df.head()
pandas_df.assertion.value_counts()
pandas_df.nonsmoker_regex_matches.apply(lambda x: 1 if len(x)>0 else 0).sum()
pandas_df.nonsmoker_regex_matches.apply(lambda x: len(x)).sum()
#assertion_scores = {'absent':}
from collections import Counter
def get_assertion_stats(ass):
    # `ass` is already a flat list of assertion labels for one row, so count
    # the labels directly (the original built an unused character-level list)
    return dict(Counter(ass))
adf = pandas_df.assertion.apply(lambda x: get_assertion_stats(x)).value_counts().reset_index()
dic = {'present':0,
'absent':0,
'associated_with_someone_else':0}
for i,row in adf.iterrows():
try:
k = list(row['index'].keys())[0]
dic[k]= dic[k]+row['index'][k]*row['assertion']
except:
pass
dic
pandas_df.assertion.apply(lambda x: get_assertion_stats(x)).value_counts()
#{k: x.get(k, 0) + y.get(k, 0) for k in set(x) | set(y)}
pandas_df.nonsmoker_regex_matches = pandas_df.nonsmoker_regex_matches.apply(lambda x: len(x))
pandas_df.smoker_entities = pandas_df.smoker_entities.apply(lambda x: len(x))
pandas_df.smoker_entities.value_counts()
pandas_df.nonsmoker_regex_matches.value_counts()
pids = pandas_df['id'].unique()
parsed_dict = {}
xx=[]
for i in pids:
temp_dict = pandas_df[pandas_df.id==i]['clinical_entity'].value_counts().to_dict()
temp_dict.update(pandas_df[pandas_df.id==i]['bionlp_entity'].value_counts().to_dict())
temp_dict.update(pandas_df[pandas_df.id==i]['posology_entity'].value_counts().to_dict())
temp_dict.update(pandas_df[pandas_df.id==i]['cell_entity'].value_counts().to_dict())
temp_dict.update(pandas_df[pandas_df.id==i]['risk_entity'].value_counts().to_dict())
adf = pandas_df[pandas_df.id==i]['assertion'].apply(lambda x: get_assertion_stats(x)).value_counts().reset_index()
dic = {'present':0,
'absent':0,
'associated_with_someone_else':0}
for j,row in adf.iterrows():
try:
k = list(row['index'].keys())[0]
dic[k]= dic[k]+row['index'][k]*row['assertion']
except:
pass
temp_dict.update(dic)
temp_dict['smoker_entities'] = pandas_df[pandas_df.id==i]['smoker_entities'].sum()
temp_dict['nonsmoker_regex_matches'] = pandas_df[pandas_df.id==i]['nonsmoker_regex_matches'].sum()
temp_dict['id']=i
xx.append(temp_dict)
stats_df = pd.DataFrame(xx)
stats_df.columns = ['entity_{}'.format(c) for c in stats_df.columns]
# note: after the prefixing above the id column is 'entity_id' (there is no
# 'entity_pid' column to rename), and the merge below relies on 'entity_id'
stats_df.entity_associated_with_someone_else.sum()
stats_df.head()
stats_df['entity_SMOKER'].sum()
stats_df.entity_smoker_entities.sum()
stats_df.columns
stats_df.shape
pandas_df[['id','label']].drop_duplicates()
stats_df.entity_id.value_counts()
model_df = pandas_df[['id','label']].drop_duplicates().merge(stats_df, left_on='id', right_on='entity_id').fillna(0)
model_df = model_df[model_df.label!='SMOKER'].reset_index(drop=True)
model_df.head()
from sklearn.model_selection import train_test_split
X=model_df.drop(['label','entity_id', 'id'], axis=1) # Features
y=model_df['label'] # Labels
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
#Import Random Forest Model
from sklearn.ensemble import RandomForestClassifier
#Create a Gaussian Classifier
clf=RandomForestClassifier(n_estimators=100)
#Train the model using the training sets y_pred=clf.predict(X_test)
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))
import pandas as pd
feature_imp = pd.Series(clf.feature_importances_,index=X.columns).sort_values(ascending=False)
feature_imp
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from sklearn.linear_model import LogisticRegression
lr_clf = LogisticRegression(random_state=0)
lr_model = lr_clf.fit(X_train, y_train)
y_pred=lr_model.predict(X_test)
print(metrics.classification_report(y_test, y_pred))
```
## Creating TfIdf features and appending to scalar features
```
train_df_text = spark_train_df.toPandas()
model_df_text = train_df_text.merge(model_df.drop(['label','entity_id'], axis=1))
import string
model_df_text['text'] = model_df_text['text'].apply(lambda x: ''.join([i for i in x if not i.isdigit()]).lower().replace('\n',' '))
model_df_text['text'].head()
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer(max_features=1000, min_df= 5, norm='l2', ngram_range=(1, 3), stop_words='english')
tfidf_matrix = vect.fit_transform(model_df_text['text'])
df = pd.DataFrame(tfidf_matrix.toarray(), columns = vect.get_feature_names())
X = pd.concat([df, model_df_text], axis=1)
print ('train with tfidf:', X.shape)
X.columns
X.head()
X['text'].head()
X_train, X_test, y_train, y_test = train_test_split(X.drop(['text','id', 'label'], axis=1), X.label, test_size=0.2)
#Import Random Forest Model
from sklearn.ensemble import RandomForestClassifier
#Create a Gaussian Classifier
clf=RandomForestClassifier(n_estimators=1000)
#Train the model using the training sets y_pred=clf.predict(X_test)
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
print(metrics.classification_report(y_test, y_pred))
import pandas as pd
feature_imp = pd.Series(clf.feature_importances_,index=X.drop(['text','id', 'label'], axis=1).columns).sort_values(ascending=False)
feature_imp
```
## with feature selection (select best K)
```
from sklearn.feature_selection import SelectKBest, f_classif
selector = SelectKBest(f_classif, k = 20)
sub_X = selector.fit_transform(X.drop(['text','id', 'label'], axis=1), X.label)
sub_X
X_train, X_test, y_train, y_test = train_test_split(sub_X, X.label, test_size=0.2)
clf=RandomForestClassifier(n_estimators=100)
#Train the model using the training sets y_pred=clf.predict(X_test)
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
print(metrics.classification_report(y_test, y_pred))
model_df_text.columns
```
## Using smoke-related sentences for TfIDF
```
text_df = match_df.select('id','text').toPandas()
text_df.head()
sent_dic={}
for m, row in text_df.iterrows():
ann = lm.fullAnnotate(row['text'])
for i in ann:
sent_ids = [str(j.metadata['sentence']) for j in i['risk_ner_chunk']]
sent_ids.extend([str(j.metadata['sentence']) for j in i['nonsmoker_regex_matches']])
sent_ids.extend([str(j.metadata['sentence']) for j in i['smoker_entities']])
sentences = [j.result for j in i['sentence'] if j.metadata['sentence'] in sent_ids]
print (sentences)
sent_dic[str(row['id'])] = sentences
print (m)
model_df_text['smoke_sents'] = model_df_text['id'].apply(lambda x: ' '.join(sent_dic[str(x)]))
import pandas as pd
model_df_text.to_pickle('data/smoker_model_withText.pickle')   # save the features
model_df_text = pd.read_pickle('data/smoker_model_withText.pickle')  # reload (round trip)
model_df_text.label = model_df_text.label.replace('SMOKER','CURRENT SMOKER')
model_df_text.head()
model_df_text['smoke_sents']
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer(max_features=100, min_df= 5, norm='l2', ngram_range=(1, 3), stop_words=None)
tfidf_matrix = vect.fit_transform(model_df_text['smoke_sents'])
df = pd.DataFrame(tfidf_matrix.toarray(), columns = vect.get_feature_names())
X = pd.concat([df, model_df_text], axis=1)
print ('train with tfidf:', X.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X.drop(['text','id', 'label','smoke_sents'], axis=1), X.label, test_size=0.2)
#Import Random Forest Model
from sklearn.ensemble import RandomForestClassifier
#Create a Gaussian Classifier
clf=RandomForestClassifier(n_estimators=100)
#Train the model using the training sets y_pred=clf.predict(X_test)
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred))
import pandas as pd
feature_imp = pd.Series(clf.feature_importances_,index=X.drop(['text','id', 'label','smoke_sents'], axis=1).columns).sort_values(ascending=False)
feature_imp.head(20)
from sklearn.linear_model import LogisticRegression
lr_clf = LogisticRegression(random_state=0)
lr_model = lr_clf.fit(X_train, y_train)
y_pred=lr_model.predict(X_test)
print(metrics.classification_report(y_test, y_pred))
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Visualization/visualizing_geometries.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/visualizing_geometries.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/visualizing_geometries.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Create a planar polygon.
planarPolygon = ee.Geometry(polygon, {}, False)
polygon = ee.FeatureCollection(polygon)
planarPolygon = ee.FeatureCollection(planarPolygon)
# Display the polygons by adding them to the map.
Map.centerObject(polygon)
Map.addLayer(polygon, {'color': 'FF0000'}, 'geodesic polygon')
Map.addLayer(planarPolygon, {'color': '000000'}, 'planar polygon')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# ETL Processes
Use this notebook to develop the ETL process for each of your tables before completing the `etl.py` file to load the whole datasets.
```
import os
import glob
import psycopg2
import pandas as pd
from sql_queries import *
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
cur = conn.cursor()
def get_files(filepath):
all_files = []
for root, dirs, files in os.walk(filepath):
files = glob.glob(os.path.join(root,'*.json'))
for f in files :
all_files.append(os.path.abspath(f))
return all_files
```
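`get_files` recursively collects every `*.json` file under a directory. A self-contained check using only the standard library and a temporary directory (the helper is repeated here so the snippet runs on its own):

```python
import glob
import os
import tempfile

def get_files(filepath):
    # walk the tree and collect absolute paths of all *.json files
    all_files = []
    for root, dirs, files in os.walk(filepath):
        files = glob.glob(os.path.join(root, '*.json'))
        for f in files:
            all_files.append(os.path.abspath(f))
    return all_files

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, 'sub'))
    for name in ('a.json', os.path.join('sub', 'b.json'), 'notes.txt'):
        open(os.path.join(tmp, name), 'w').close()
    found = sorted(os.path.basename(p) for p in get_files(tmp))

# Only the .json files are collected, including those in subdirectories.
assert found == ['a.json', 'b.json']
```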
# Process `song_data`
In this first part, you'll perform ETL on the first dataset, `song_data`, to create the `songs` and `artists` dimensional tables.
Let's perform ETL on a single song file and load a single record into each table to start.
- Use the `get_files` function provided above to get a list of all song JSON files in `data/song_data`
- Select the first song in this list
- Read the song file and view the data
```
song_files = get_files('data/song_data')
filepath = song_files[0]
df = pd.read_json(filepath, lines=True)
df.head()
#df[['song_id', 'title', 'artist_id', 'year', 'duration']].values[0]
df[['artist_id', 'artist_name', 'artist_location', 'artist_latitude', 'artist_longitude']].values[0]
```
## #1: `songs` Table
#### Extract Data for Songs Table
- Select columns for song ID, title, artist ID, year, and duration
- Use `df.values` to select just the values from the dataframe
- Index to select the first (only) record in the dataframe
- Convert the array to a list and set it to `song_data`
```
song_data = df[['song_id', 'title', 'artist_id', 'year', 'duration']].values[0]
song_data
```
#### Insert Record into Song Table
Implement the `song_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song into the `songs` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `songs` table in the sparkify database.
```
cur.execute(song_table_insert, song_data)
conn.commit()
```
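The notebook asks you to implement `song_table_insert` in `sql_queries.py` but never shows it. A minimal sketch of what it could look like, with column names matching the `song_data` selection above; the `ON CONFLICT` clause is one common choice for idempotent re-runs, not necessarily the project's reference solution:

```python
# Hypothetical sketch of sql_queries.song_table_insert
song_table_insert = """
    INSERT INTO songs (song_id, title, artist_id, year, duration)
    VALUES (%s, %s, %s, %s, %s)
    ON CONFLICT (song_id) DO NOTHING;
"""

# One %s placeholder per value in song_data
assert song_table_insert.count('%s') == 5
```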
Run `test.ipynb` to see if you've successfully added a record to this table.
## #2: `artists` Table
#### Extract Data for Artists Table
- Select columns for artist ID, name, location, latitude, and longitude
- Use `df.values` to select just the values from the dataframe
- Index to select the first (only) record in the dataframe
- Convert the array to a list and set it to `artist_data`
```
artist_data = df[['artist_id', 'artist_name', 'artist_location', 'artist_latitude', 'artist_longitude']].values[0]
artist_data
```
#### Insert Record into Artist Table
Implement the `artist_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song's artist into the `artists` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `artists` table in the sparkify database.
```
cur.execute(artist_table_insert, artist_data)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added a record to this table.
# Process `log_data`
In this part, you'll perform ETL on the second dataset, `log_data`, to create the `time` and `users` dimensional tables, as well as the `songplays` fact table.
Let's perform ETL on a single log file and load a single record into each table.
- Use the `get_files` function provided above to get a list of all log JSON files in `data/log_data`
- Select the first log file in this list
- Read the log file and view the data
```
log_files = get_files('data/log_data/')
log_files[0]
filepath = log_files[0]
df = pd.read_json(filepath, lines=True)
df.head()
```
## #3: `time` Table
#### Extract Data for Time Table
- Filter records by `NextSong` action
- Convert the `ts` timestamp column to datetime
- Hint: the current timestamp is in milliseconds
- Extract the timestamp, hour, day, week of year, month, year, and weekday from the `ts` column and set `time_data` to a list containing these values in order
- Hint: use pandas' [`dt` attribute](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.html) to easily access datetime-like properties.
- Specify labels for these columns and set to `column_labels`
- Create a dataframe, `time_df,` containing the time data for this file by combining `column_labels` and `time_data` into a dictionary and converting this into a dataframe
```
df = df[df['page']=='NextSong']
df.head(2)
t = pd.to_datetime(df['ts'], unit='ms')
t.head(2)
#time.dt.hour
#time.dt.second
#time.dt.quarter
time_data = [df.ts.values, t.dt.hour.values, t.dt.day.values, t.dt.weekofyear.values, t.dt.month.values, t.dt.year.values, t.dt.weekday.values]
column_labels = ['start_time', 'hour', 'day', 'week', 'month', 'year', 'weekday']
time_df = pd.DataFrame(dict(zip(column_labels, time_data)))
time_df.head()
```
#### Insert Records into Time Table
Implement the `time_table_insert` query in `sql_queries.py` and run the cell below to insert records for the timestamps in this log file into the `time` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `time` table in the sparkify database.
```
print(time_table_insert)
for i, row in time_df.iterrows():
#print(list(row))
cur.execute(time_table_insert, list(row))
conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
## #4: `users` Table
#### Extract Data for Users Table
- Select columns for user ID, first name, last name, gender and level and set to `user_df`
```
user_df = df[['userId', 'firstName', 'lastName', 'gender', 'level']]
user_df.count()
```
#### Insert Records into Users Table
Implement the `user_table_insert` query in `sql_queries.py` and run the cell below to insert records for the users in this log file into the `users` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `users` table in the sparkify database.
```
user_table_insert
for i, row in user_df.iterrows():
cur.execute(user_table_insert, row)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
## #5: `songplays` Table
#### Extract Data for Songplays Table
This one is a little more complicated since information from the songs table, artists table, and original log file are all needed for the `songplays` table. Since the log file does not specify an ID for either the song or the artist, you'll need to get the song ID and artist ID by querying the songs and artists tables to find matches based on song title, artist name, and song duration time.
- Implement the `song_select` query in `sql_queries.py` to find the song ID and artist ID based on the title, artist name, and duration of a song.
- Select the timestamp, user ID, level, song ID, artist ID, session ID, location, and user agent and set to `songplay_data`
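As a hedged sketch (table and column names assumed from the steps above, not taken from `sql_queries.py`), `song_select` might join the two dimension tables like this:

```
# Hypothetical sketch of song_select -- the real query belongs in sql_queries.py
song_select = """
    SELECT songs.song_id, artists.artist_id
    FROM songs
    JOIN artists ON songs.artist_id = artists.artist_id
    WHERE songs.title = %s AND artists.name = %s AND songs.duration = %s;
"""
```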
#### Insert Records into Songplays Table
- Implement the `songplay_table_insert` query and run the cell below to insert records for the songplay actions in this log file into the `songplays` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `songplays` table in the sparkify database.
```
df.head(2)
print("song_select :", song_select, sep='\n')
print("songplay_table_insert :", songplay_table_insert, sep='\n')
for index, row in df.iterrows():
# get songid and artistid from song and artist tables
#cur.execute(song_select, (row.song.replace("'", ""), row.artist, row.length))
cur.execute(song_select, (row.song, row.artist, row.length))
results = cur.fetchone()
if results:
songid, artistid = results
else:
songid, artistid = None, None
songplay_data = [1, row.ts, row.userId, row.level, songid, artistid, row.sessionId, row.location, row.userAgent]
# insert songplay record
cur.execute(songplay_table_insert, songplay_data)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
# Close Connection to Sparkify Database
```
conn.close()
```
# Implement `etl.py`
Use what you've completed in this notebook to implement `etl.py`.
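A minimal sketch of the driver loop `etl.py` typically needs (the function name, file layout, and per-file callbacks here are assumptions for illustration, not requirements):

```
import os
import glob

def process_data(cur, conn, filepath, func):
    """Walk filepath, collect all JSON files, and apply func to each one."""
    all_files = []
    for root, dirs, files in os.walk(filepath):
        for f in glob.glob(os.path.join(root, '*.json')):
            all_files.append(os.path.abspath(f))
    print('{} files found in {}'.format(len(all_files), filepath))
    for i, datafile in enumerate(all_files, 1):
        func(cur, datafile)  # e.g. process_song_file or process_log_file
        conn.commit()
        print('{}/{} files processed.'.format(i, len(all_files)))
```

The same loop can then be called once for `data/song_data` and once for `data/log_data`.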
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
input_width = 28
input_height = 28
input_channels = 1
n_input = 784
n_conv1 = 32
n_conv2 = 64
conv1_k = 5
conv2_k = 5
n_hidden = 1024
n_out = 10
pooling_window_size = 2
weights = {
"wc1" : tf.Variable(tf.random_normal([conv1_k, conv1_k ,input_channels, n_conv1])),
"wc2" : tf.Variable(tf.random_normal([conv2_k, conv2_k ,n_conv1, n_conv2])),
"wh1" : tf.Variable(tf.random_normal([input_width//4 * input_height//4 * n_conv2, n_hidden])),
"wout": tf.Variable(tf.random_normal([n_hidden, n_out]))
}
biases = {
"bc1" : tf.Variable(tf.random_normal([n_conv1])),
"bc2" : tf.Variable(tf.random_normal([n_conv2])),
"bh1" : tf.Variable(tf.random_normal([n_hidden])),
"bout": tf.Variable(tf.random_normal([n_out]))
}
def conv(x, weights, bias, strides = 1):
out = tf.nn.conv2d(x, weights, padding="SAME", strides=[1, strides, strides, 1])
out = tf.nn.bias_add(out, bias)
return tf.nn.relu(out)
def maxpooling(x, k = 2):
return tf.nn.max_pool(x, padding="SAME", ksize=[1, k, k, 1], strides=[1, k, k, 1])
def cnn(x, weights, biases):
x = tf.reshape(x, shape = [-1,input_height, input_width, 1])
conv1 = conv(x, weights["wc1"], biases["bc1"])
conv1 = maxpooling(conv1, k = pooling_window_size)
conv2 = conv(conv1, weights["wc2"], biases["bc2"])
conv2 = maxpooling(conv2, k = pooling_window_size)
hidden_input = tf.reshape(conv2, shape=[-1, input_width//4 * input_height//4 * n_conv2])
hidden_output = tf.add(tf.matmul(hidden_input, weights["wh1"]), biases['bh1'])
hidden_output = tf.nn.relu(hidden_output)
out = tf.add(tf.matmul(hidden_output, weights["wout"]), biases['bout'])
return out
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_out])
pred = cnn(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimize = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
batch_size = 100
for i in range(25):
num_batches = int(mnist.train.num_examples/batch_size)
total_cost = 0
for j in range(num_batches):
batch_x, batch_y = mnist.train.next_batch(batch_size)
_ , c = sess.run([optimize, cost], feed_dict={x:batch_x, y:batch_y})
total_cost += c
print(total_cost)
with tf.Session() as sess:
writer = tf.summary.FileWriter('./graphs', sess.graph)
```
# Import Dataset
3 methods
1. from local storage
- using file()
- using pandas
2. from cloud storage
3. from URL
- using pandas
- using scraping
---
##### In Google Colab
```
import pandas as pd
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
dataset_path = 'drive/My Drive/path-to-dataset-in-drive-filesystem/social-protection-and-labor-indicators-for-west-bank-and-gaza-1.csv'
df = pd.read_csv(dataset_path)
df
```
##### Non-Common Format
```
dataset_path = 'drive/My Drive/path-to-folder-in-drive-filesystem/iris.data'
df = pd.read_csv(dataset_path, sep=',')
df
```
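##### From a URL
`pd.read_csv` accepts a URL or any file-like object just like a local path. The sketch below uses `io.StringIO` as an offline stand-in for a remote file; the column names and values are made up for illustration, and the URL form would simply be `pd.read_csv('https://example.com/iris.data')`.

```
import io
import pandas as pd

# io.StringIO stands in for a file fetched over HTTP; passing a URL string
# to pd.read_csv works the same way (pandas downloads it for you).
csv_text = "sepal_length,sepal_width,species\n5.1,3.5,setosa\n4.9,3.0,setosa\n"
df = pd.read_csv(io.StringIO(csv_text))
df
```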
##### Datasets Resources
- Kaggle
- UCI
- data.world
- worldbank
---

### Face Mask Detection - `TensorFlow`
In this notebook we are going to create a simple `model` that will detect whether a person is wearing a mask or not, using the image data downloaded from **Kaggle**
* [Dataset](https://www.kaggle.com/sumansid/facemask-dataset)
```
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.preprocessing import image
import os
import numpy as np
```
### Dataset
* Our data has the following structure.
```
data
mask
-img1
..
nomask
- img2
..
```
```
class Paths:
ROOT = "data"
MASK = 'mask'
NO_MASK = 'nomask'
```
### Creating an Image Dataset
* Creating an image dataset using the `ImageDataGenerator` and `flow_from_directory`.
```
data_gen = keras.preprocessing.image.ImageDataGenerator(
rescale=1.0 / 255,
data_format = "channels_last",
dtype = tf.float32,
validation_split = .15
)
train_ds = data_gen.flow_from_directory(
Paths.ROOT,
target_size=(96, 96),
color_mode='grayscale',
class_mode='categorical',
batch_size=32,
shuffle=True,
seed=42,
subset="training",
interpolation='nearest',
)
valid_ds = data_gen.flow_from_directory(
Paths.ROOT,
target_size=(96, 96),
color_mode='grayscale',
class_mode='categorical',
batch_size=32,
shuffle=True,
seed=42,
subset="validation",
interpolation='nearest',
)
```
### Class Names
```
class_names = train_ds.class_indices
class_names
classes = dict([(value, key) for (key, value) in class_names.items()])
classes
for batch in train_ds:
break
def plot_images(images_and_classes, labels, cols=5):
rows = 3
fig = plt.figure()
fig.set_size_inches(cols * 2, rows * 2)
labels = np.argmax(labels, axis=1)
for i, (image, label) in enumerate(zip(images_and_classes, labels)):
plt.subplot(rows, cols, i + 1)
plt.axis('off')
plt.imshow(image, cmap="gray")
plt.title(classes[label], color ='g' if classes[label] == "mask" else 'r', fontsize=16 )
plot_images(batch[0][:24], batch[1][:24], cols=8)
```
### Creating a simple `NN`.
```
seq_model = keras.Sequential([
keras.layers.Input(shape=(96, 96, 1), name="input_layer"),
keras.layers.Conv2D(64, (3, 3), padding="same", name="conv_1", activation="relu"),
keras.layers.BatchNormalization(name="bn_1"),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(128, (3, 3), padding="same", name="conv_2", activation="relu"),
keras.layers.BatchNormalization(name="bn_2"),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(64, (3, 3), padding="same", name="conv_3", activation="relu"),
keras.layers.BatchNormalization(name="bn_3"),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Flatten(name="flatten_layer"),
keras.layers.Dense(512, activation="relu"),
keras.layers.BatchNormalization(name="bn_4"),
keras.layers.Dense(2, activation='softmax')
])
seq_model.compile(
loss = keras.losses.CategoricalCrossentropy(from_logits=False),
optimizer = 'adam',
metrics=["acc"]
)
print(seq_model.summary())
lr_reduction = keras.callbacks.ReduceLROnPlateau(
monitor='val_acc',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001
)
early_stopping = keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=5,
verbose=0, mode='auto'
)
history = seq_model.fit(
train_ds,
validation_data = valid_ds,
verbose =1,
batch_size =32,
epochs =10,
shuffle = True,
callbacks=[lr_reduction, early_stopping]
)
```
### Visualising the `model`'s training history
Our model is overfitting, which means it may not perform well on the `test` data. We are not too concerned about this issue for now; we will look more in depth at:
* Regularization and Dropout
later on. For now we want to do transfer learning on the `VGG16` model.
```
import pandas as pd
df = pd.DataFrame(history.history)
epochs = np.arange(1, 7)
plt.plot(epochs,df["acc"], label="acc")
plt.plot(epochs,df["val_acc"], label="val_acc")
plt.plot(epochs,df["loss"], label="loss")
plt.plot(epochs,df["val_loss"], label="val_loss")
plt.title("Model Training History", fontsize=16)
plt.legend()
plt.xlabel("epochs", fontsize=15)
plt.show()
```
THE **MODEL** IS **POOR**.
### Transfer Learning - `VGG16`
```
from tensorflow.keras.applications.vgg16 import VGG16
```
### Creating new dataset.
* The VGG16 model expects images to have `3` color channels, so we are going to regenerate the image datasets.
```
train_ds_vgg = data_gen.flow_from_directory(
Paths.ROOT,
target_size=(96, 96),
color_mode='rgb',
class_mode='categorical',
batch_size=32,
shuffle=True,
seed=42,
subset="training",
interpolation='nearest',
)
valid_ds_vgg = data_gen.flow_from_directory(
Paths.ROOT,
target_size=(96, 96),
color_mode='rgb',
class_mode='categorical',
batch_size=32,
shuffle=True,
seed=42,
subset="validation",
interpolation='nearest',
)
vgg = VGG16(input_shape=(96, 96, 3), weights='imagenet', include_top=False)
vgg.summary()
```
### Freezing the model.
* We got `14,714,688` trainable parameters, so we set each layer's `trainable` attribute to `False` because we don't want to retrain the pretrained layers.
```
for layer in vgg.layers:
layer.trainable = False
vgg.summary()
```
### Fine-tuning
Now we want to add a Flatten layer to the model and then add our output layer with 2 classes. We can do it as follows using the `Sequential` API.
```
model = keras.Sequential(name="model")
for layer in vgg.layers:
model.add(layer)
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(512, activation="relu"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(2, activation="softmax"))
model.summary()
model.compile(
loss = keras.losses.CategoricalCrossentropy(from_logits=False),
optimizer = 'adam',
metrics=["acc"]
)
lr_reduction = keras.callbacks.ReduceLROnPlateau(
monitor='val_acc',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001
)
early_stopping = keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=5,
verbose=0, mode='auto'
)
history = model.fit(
train_ds_vgg,
validation_data = valid_ds_vgg,
verbose =1,
batch_size =32,
epochs =10,
shuffle = True,
callbacks=[lr_reduction, early_stopping]
)
```
### Visualising the model's training history after transfer learning.
```
df = pd.DataFrame(history.history)
epochs = np.arange(1, 7)
plt.plot(epochs,df["acc"], label="acc")
plt.plot(epochs,df["val_acc"], label="val_acc")
plt.plot(epochs,df["loss"], label="loss")
plt.plot(epochs,df["val_loss"], label="val_loss")
plt.title("Model Training History", fontsize=16)
plt.legend()
plt.xlabel("epochs", fontsize=15)
plt.show()
```
### Making Predictions and plotting them.
```
for data in valid_ds_vgg:
break
predictions = np.argmax(model(data[0]).numpy(), axis=1).astype("int32")
y_true = np.argmax(data[1], axis=1).astype('int32')
predictions, y_true
def plot_predictions_images(images_and_classes, labels_true, labels_pred, cols=5):
rows = 3
fig = plt.figure()
fig.set_size_inches(cols * 2, rows * 2)
for i, (image, label_true, label_pred) in enumerate(zip(images_and_classes, labels_true.astype("int32"), labels_pred)):
plt.subplot(rows, cols, i + 1)
plt.axis('off')
plt.imshow(image, cmap="gray")
plt.title(classes[label_pred], color ='g' if label_true == label_pred else 'r', fontsize=16 )
plot_predictions_images(data[0][:24], y_true[:24], predictions[:24], cols=8)
```
### The `confusion_matrix`.
```
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(y_true, y_pred, classes=None, figsize=(5, 5), text_size=20):
cm = confusion_matrix(y_true, y_pred)
cm_norm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
n_classes = cm.shape[0]
fig, ax = plt.subplots(figsize=figsize)
cax = ax.matshow(cm, cmap=plt.cm.Blues)
fig.colorbar(cax)
if classes:
labels = classes
else:
labels = np.arange(cm.shape[0])
ax.set(title="Confusion Matrix",
xlabel="Predicted label",
ylabel="True label",
xticks=np.arange(n_classes),
yticks=np.arange(n_classes),
xticklabels=labels,
yticklabels=labels,
)
ax.yaxis.label.set_color('green')
ax.xaxis.label.set_color('green')
ax.xaxis.set_label_position("bottom")
ax.xaxis.tick_bottom()
threshold = (cm.max() + cm.min()) / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, f"{cm[i, j]} ({cm_norm[i, j]*100:.1f}%)",
horizontalalignment="center",
color="white" if cm[i, j] > threshold else "black",
size=text_size)
plot_confusion_matrix(y_true, predictions, list(class_names.keys()), figsize=(8, 8))
```
### Conclusion.
* The model predicts a person with a mask very well; it shows almost no confusion between the two classes.
## Astrophysics background
It is very common in Astrophysics to work with sky pixels. The sky is tessellated into patches with specific properties, and a sky map is then a collection of intensity values, one for each pixel. The most common pixelization used in Cosmology is [HEALPix](http://healpix.jpl.nasa.gov).
Measurements from telescopes are then represented as an array of pixels that encode the pointing of the instrument at each timestamp and the measurement output.
## Sample timeline
```
import pandas as pd
import numba
import numpy as np
```
For simplicity let's assume we have a sky with 50K pixels:
```
NPIX = 50000
```
And we have 50 million measurement from our instrument:
```
NTIME = int(50 * 1e6)
```
The pointing of our instrument is an array of pixels, random in our sample case:
```
pixels = np.random.randint(0, NPIX-1, NTIME)
```
Our data are also random:
```
timeline = np.random.randn(NTIME)
```
## Create a map of the sky with pandas
One of the most common operations is to sum all of our measurements in a sky map, so the value of each pixel in our sky map will be the sum of each individual measurement.
The easiest way is to use the `groupby` operation in `pandas`:
```
timeline_pandas = pd.Series(timeline, index=pixels)
timeline_pandas.head()
%time m = timeline_pandas.groupby(level=0).sum()
```
## Create a map of the sky with numba
We would like to improve the performance of this operation using `numba`, which allows to produce automatically C-speed compiled code from pure python functions.
First we need to develop a pure python version of the code, test it, and then have `numba` optimize it:
```
def groupby_python(index, value, output):
for i in range(index.shape[0]):
output[index[i]] += value[i]
m_python = np.zeros_like(m)
%time groupby_python(pixels, timeline, m_python)
np.testing.assert_allclose(m_python, m)
```
Pure Python is slower than the `pandas` version, which is implemented in Cython.
### Optimize the function with numba.jit
`numba.jit` takes an input function and creates a compiled version that does not depend on slow Python calls; this is enforced by `nopython=True` (`numba` throws an error if the function cannot be run in `nopython` mode).
```
groupby_numba = numba.jit(groupby_python, nopython=True)
m_numba = np.zeros_like(m)
%time groupby_numba(pixels, timeline, m_numba)
np.testing.assert_allclose(m_numba, m)
```
Performance improvement is about 100x compared to Python and 20x compared to Pandas, pretty good!
## Use numba.jit as a decorator
The exact same result is obtained if we use `numba.jit` as a decorator:
```
@numba.jit(nopython=True)
def groupby_numba(index, value, output):
for i in range(index.shape[0]):
output[index[i]] += value[i]
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os.path as op
from csv import writer
import math
import cmath
import pickle
import tensorflow as tf
from tensorflow import keras
from keras.models import Model,Sequential,load_model
from keras.layers import Input, Embedding
from keras.layers import Dense, Bidirectional
from keras.layers.recurrent import LSTM
import keras.metrics as metrics
import itertools
from tensorflow.python.keras.utils.data_utils import Sequence
from decimal import Decimal
from keras.layers import Conv1D,MaxPooling1D,Flatten,Dense
A1=np.empty((0,5),dtype='float32')
U1=np.empty((0,7),dtype='float32')
node=['150','149','147','144','142','140','136','61']
mon=['Apr','Mar','Aug','Jun','Jul','Sep','May','Oct']
for j in node:
for i in mon:
inp= pd.read_csv('../../data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[1,2,3,15,16],low_memory=False)
out= pd.read_csv('../../data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[5,6,7,8,17,18,19],low_memory=False)
inp=np.array(inp,dtype='float32')
out=np.array(out,dtype='float32')
A1=np.append(A1, inp, axis=0)
U1=np.append(U1, out, axis=0)
print(A1)
print(U1)
from sklearn.decomposition import SparsePCA
import warnings
scaler_obj1=SparsePCA()
scaler_obj2=SparsePCA()
X1=scaler_obj1.fit_transform(A1)
Y1=scaler_obj2.fit_transform(U1)
warnings.filterwarnings(action='ignore', category=UserWarning)
from Hybrid_Model import HybridModel
# Splitting Data into training and testing dataset
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(X1,Y1,test_size=0.25,random_state=42)
hybrid_model=HybridModel()
hybrid_model.load_models()
hybrid_model.fit_machine_learning_model(x_test,y_test)
new_x_test=x_test[:,np.newaxis,:]
new_y_test=y_test[:,np.newaxis,:]
hybrid_model.fit_neural_network_model(new_x_test,new_y_test)
res=hybrid_model.predict()
res
df2=pd.DataFrame(y_test)
df2.head(5)
df=pd.DataFrame(res)
df.head(5)
from sklearn.metrics import r2_score,mean_absolute_error,mean_squared_error
r2_score_value=r2_score(y_test,res,multioutput='variance_weighted')
mae_value=mean_absolute_error(y_test,res)
mse_value=mean_squared_error(y_test,res)
rmse_value=np.sqrt(mse_value)
print("R2 score:",r2_score_value)
print("Mean absolute error:",mae_value)
print("Mean Squared Error:",mse_value)
print("Root Mean Squared Error:",rmse_value)
from matplotlib import style
pollutant_labels=['NO2','O3','NO','CO','PM1','PM2.5','PM10']
style.use('ggplot')
for i in range(0,7):
plt.figure(figsize=[12,10])
plt.plot(y_test[:10,i],linewidth=3, markersize=12)
plt.plot(res[:10,i],linewidth=2, markersize=12)
plt.xlabel('X')
plt.ylabel(pollutant_labels[i])
plt.show()
#completed
```
```
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
import numpy as np
import seaborn as sns
print(sns.__version__)
```
## Experiment setup
Groups of students are randomly assigned to one of the meal plans and their attention span is measured.
A group of 15 students was randomly assigned across three meal plans:
no breakfast, light breakfast, and full breakfast.
Their attention spans (in minutes) were recorded during a morning reading period.
We assume 3 populations with 3 means, a common variance sigma^2, each normally distributed.
Total sample size n = 15 (5 observations in each sample i).
```
experiment = {'No breakfast' : [8,7,9,13,10],
'Light breakfast' : [14,16,12,17,11],
'Full breakfast' : [10,12,16,15,12]}
exp_df = pd.DataFrame(experiment)
exp_df.index.name = "observation"
exp_df
import researchpy as rp
rp.summary_cont(exp_df)
# normality test histogram
exp_df.plot.hist()
# Reform the dataframe
stack_df = exp_df.stack().reset_index()
stack_df = stack_df.rename(columns={'level_0': 'id',
'level_1': 'treatment',
0:'span'})
display(stack_df)
# show dist
# sns.displot(stack_df, x='span', hue='treatment', kind="kde", fill=True)
ax = sns.boxplot(y=stack_df["span"], x=stack_df["treatment"])
ax.set_title('Boxplot')
```
You could use the library below to compute the confidence intervals, but the best estimate of the common variance uses all the information from the entire set of measurements. This is why you should ALWAYS use the POOLED variance:
variance = s^2 = MSE with df = n - k, and pooled STD = np.sqrt(MSE)
The techniques below are faster than computing the full analysis-of-variance table by hand.
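As a small sketch of the pooled-variance computation (using the same breakfast data as above):

```
import numpy as np

def pooled_variance(groups):
    """Pooled variance s^2 = SSE / (n - k) across k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    sse = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum()
              for g in groups)
    return sse / (n - k)

groups = [[8, 7, 9, 13, 10], [14, 16, 12, 17, 11], [10, 12, 16, 15, 12]]
s2 = pooled_variance(groups)  # this equals the MSE of the one-way ANOVA
s = np.sqrt(s2)               # pooled standard deviation
```

With these three samples, SSE = 71.2 and n - k = 12, so s^2 = 71.2/12.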
```
# THIS METHOD IS FOR RAW SAMPLE DATA (NOT SUMMARY MEANS)
import numpy as np
# to check
# Confidence interval
confidence_level = 0.95
# With raw samples (not a summary mean), the degrees of freedom are n - 1
mean_list, std_list, ci_list = [], [], []
for col_name in exp_df:
col_values = exp_df[col_name].values
sample_size = len(col_values)
degrees_freedom = sample_size - 1
sample_mean = np.mean(col_values)
# Standard error of the mean (SEM) = sigma / sqrt(n)
sample_standard_error = stats.sem(col_values)
print('sample_standard_error =', sample_standard_error,
'or s/np.sqrt(n_t) =', np.std(col_values, ddof=1)/np.sqrt(sample_size))
confidence_interval = stats.t.interval(alpha=confidence_level,
df=degrees_freedom,
loc=sample_mean,
scale=sample_standard_error)
std_list.append(sample_standard_error)
ci_list.append(confidence_interval)
mean_list.append(sample_mean)
CI_df = pd.DataFrame([exp_df.columns.values, mean_list, std_list, ci_list]).transpose()
CI_df.columns = ['treatment',
'mean',
'std error',
'CI']
CI_df.loc[:,'CI'] = CI_df.loc[:,'CI'].map(lambda x: (x[0].round(2), x[1].round(2)))
CI_df = CI_df.sort_values(by=['mean'])
display(CI_df)
graph = sns.displot(stack_df, x='span', hue='treatment', kind="kde", fill=True)
for CI in CI_df['CI'].values:
plt.axvline(CI[0], linestyle='--')
plt.axvline(CI[1], linestyle='--')
plt.show()
# Normality test: quantile-quantile plot.
# Shows the theoretical quantiles of a
# normal distribution vs the sample distribution quantiles.
from statsmodels.graphics.gofplots import qqplot
for treat in exp_df.columns:
qqplot(exp_df[treat].values, line='s')
plt.title(f'{treat} q-q plot')
plt.show()
# Hypothesis test: the null hypothesis assumes the samples are drawn from a Gaussian
# distribution.
# p <= alpha: reject H0, not normal.
# p > alpha: fail to reject H0, normal.
# Shapiro-Wilk test
from scipy.stats import shapiro
for col, val in exp_df.iteritems():
print(col)
stat, p = shapiro(val)
print('Statistics=%.3f, p=%.3f' % (stat, p))
# if p > 0.05 we fail to reject the null hypothesis -> Gaussian
# Extension, see D’Agostino’s K^2 Test or Anderson-Darling Test
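# Sketch (assumption): Anderson-Darling as an alternative normality check,
# using the same three samples as exp_df above. If the statistic is below the
# critical value at the 5% level, the data are consistent with normality.
from scipy.stats import anderson
for treat, vals in [('No breakfast', [8, 7, 9, 13, 10]),
                    ('Light breakfast', [14, 16, 12, 17, 11]),
                    ('Full breakfast', [10, 12, 16, 15, 12])]:
    result = anderson(vals)
    print(treat, 'A2=%.3f' % result.statistic,
          '5%% crit=%.3f' % result.critical_values[2])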
# Levene test for homogeneity of variance
stats.levene(*[exp_df[treat] for treat in exp_df.columns])
# if p-value > 0.05: no statistically significant difference in their variability
# ANOVA one way
import scipy.stats as stats
F, p = stats.f_oneway(*[exp_df[treat] for treat in exp_df.columns])
print(f'F= {F}, p={p}')
# Interpretation
# F_stat = 4.9 to compare with critical value of test statistic F_alpha
# p-value = 0.027 < 0.05 we can say at 95% of confidence that
# breakfast load has an effect on attention span.
# ANOVA the hard way
# n1 = n2 = n3
k = len(exp_df.iloc[0]) # Number of columns
list_n = exp_df.count().values
n = np.sum(exp_df.count().values) # Total number of observations
# square the grand total (sum of all values from all groups) and divide by the total number of observations
CM = (exp_df.sum().sum())**2 / n
# print('CM', CM)
# total SS
TSS = (exp_df**2).sum().sum() - CM
dof_tss = n - 1
# SST between
SST = ((exp_df.sum()**2).values / list_n).sum() - CM
dof_sst = k - 1
MST = SST / dof_sst
# SSE within
SSE = TSS - SST
dof_mse = n - k
MSE = SSE / dof_mse
# Test statistic
F = MST/ MSE
# p-value
p = stats.f.sf(F, dof_sst, dof_mse)
# Eta
et_sq = SST / TSS
# Omega squared
om_sq = (SST - dof_sst * MSE) / (TSS + MSE)
print('SST', SST, 'dof', dof_sst,'MST', MST)
print('SSE', SSE, 'dof', dof_mse,'MSE', MSE)
print('TSS', TSS, 'dof', dof_tss, 'F', F)
print('p value', p)
print('eta squared', et_sq, 'omega_squared', om_sq)
alpha = 0.05
F_alpha = stats.f.ppf(q=1-alpha, dfn=dof_sst, dfd=dof_mse)
F_alpha
# CHECK THE RESULTS of DF in analys variance table.
# F stat and pvalue from statsmodel
import statsmodels.api as sm
from statsmodels.formula.api import ols
# Reform the dataframe
stack_df = exp_df.stack().reset_index()
stack_df = stack_df.rename(columns={'level_0': 'id',
'level_1': 'treatment',
0:'span'})
# print(exp_df_2)
mod = ols('span ~ treatment', data=stack_df).fit()
aov_table = sm.stats.anova_lm(mod, typ=2)
print(aov_table)
# et_sq = SST / TSS
et_sq = aov_table['sum_sq'][0]/(aov_table['sum_sq'][0]+aov_table['sum_sq'][1])
print('et_sq', et_sq)
# confidence interval for the difference of one means
mean0 = exp_df.mean()[0]
var = MSE
s = np.sqrt(var)
# from t distribution at 95% LOC and df = n-k = 15-3 = 12
t_alpha = 2.179
CI = t_alpha * s * np.sqrt(1/5)
print('CI for no breakfast:', mean0 - CI, mean0 + CI)
# confidence interval for the difference of two means
mean1 = exp_df.mean()[1]
mean2 = exp_df.mean()[2]
var = MSE
diff_mean = mean1-mean2
# from t distribution at 95% LOC and df = n-k = 15-3 = 12
t_alpha = 2.179
CI = t_alpha * s * np.sqrt((1/5) + (1/5))
print('CI light VS full:', diff_mean - CI, diff_mean + CI)
# Tukey multi comparison method
from statsmodels.stats.multicomp import (pairwise_tukeyhsd,
MultiComparison)
# Set up the data for comparison (creates a specialised object)
MultiComp = MultiComparison(stack_df['span'],
stack_df['treatment'])
# Show all pair-wise comparisons:
# Print the comparisons
print(MultiComp.tukeyhsd().summary())
# Confidence intervals
import math
import numpy as np
mu = 0
variance = 300
sigma = math.sqrt(variance)
x = np.linspace(mu - 3*sigma*3, mu + 3*sigma*3, 100)
plt.plot(x, stats.norm.pdf(x, mu, sigma), color='blue', label='full breakfast')
plt.axvline(mu, color='blue')
mu2 = mu+8
plt.plot(x, stats.norm.pdf(x, mu2, sigma), color='orange', label='light breakfast')
plt.axvline(mu2, color='orange')
mu3 = mu+16
plt.plot(x, stats.norm.pdf(x, mu3, sigma), color='green', label='no breakfast')
plt.axvline(mu3, color='green')
plt.title('Same mean but the variability within the groups make the difference of the mean difficult to see')
plt.legend()
plt.show()
mu = 0
variance = 2
sigma = math.sqrt(variance)
x = np.linspace(mu - 4*sigma*3, mu + 6*sigma*3, 100)
plt.plot(x, stats.norm.pdf(x, mu, sigma), color='blue', label='full breakfast')
plt.axvline(mu, color='blue')
mu2 = mu+8
plt.plot(x, stats.norm.pdf(x, mu2, sigma), color='orange', label='light breakfast')
plt.axvline(mu2, color='orange')
plt.title('Same mean with "small" variability between groups')
mu3 = mu+16
plt.plot(x, stats.norm.pdf(x, mu3, sigma), color='green', label='no breakfast')
plt.axvline(mu3, color='green')
plt.legend()
plt.show()
```
```
import random
import pandas as pd
import numpy as np
from scipy import stats
# from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build
from google.oauth2.service_account import Credentials
CREDENTIALS_PATH_GOOGLE = '../google-credentials.json'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
SPREADSHEET = '1b75J-QTGrujSgF9r0_JPOKkcXAwzFVwpETOAyVBw8ak'
# Load service account credentials.
__credentials = Credentials.from_service_account_file(CREDENTIALS_PATH_GOOGLE, scopes=SCOPES)
# Creates Google Sheets API (v4/latest) service.
service = build('sheets', 'v4', credentials=__credentials)
# Gets values from Ach! Musik: Notations sheet.
values = service.spreadsheets().values().get(spreadsheetId=SPREADSHEET, range='Notations').execute()['values']
headers = values.pop(0)
# Format data as pd.DataFrame
data = pd.DataFrame(values, columns=headers)
data
# Saving as csv for later use
data.to_csv("../data/achmusik.csv", index=False, decimal=",")
data = pd.read_csv("../data/achmusik.csv")
data
# Getting the decimals right -- commas to points and no more Nones
data = data.set_index(["genre", "sub_genre", "artist", "album", "song"])
data.fillna(value="", inplace=True)
for i in range(data.columns.size):
data[data.columns[i]] = data[data.columns[i]].str.replace(",", ".")
data[data.columns[i]] = pd.to_numeric(data[data.columns[i]], errors='coerce')
kept_people = ["Qu", "Gr", "Vi", "Ro"]
default_grade = 5
# Keeping only present people at the hypothetical party!
data = data.filter(kept_people)
# Hard to do this shit inplace -- if no grades at all, give it a chance to play with default grade
data = data.dropna(how="all").append(data[data.isnull().all(axis=1)].fillna(default_grade))
data
```
## Score voting
```
COUNT_FACTOR = .3
COUNT_INHIB = len(kept_people) // 2
MIN_SCORE = 6
PLAYLIST_SIZE = 200
ELIMINATING_GRADE = 4.9
# To avoid having to hard-code the number of columns when the next cell is re-run, we initialize the columns
data["mean"] = 0
data["count"] = 0
data["score"] = 0
data["rank"] = 0
# Mean of all notes for each track
data["mean"] = data[data.columns[:-4]].mean(axis=1)
# Amount of notes for each track
data["count"] = data.count(axis=1) - 4
# Helping songs graded by more people in the group
data["score"] = data["mean"] + (COUNT_FACTOR * (data["count"] - COUNT_INHIB))
# Truncating to keep only the acceptable songs
data = data[data["score"] > MIN_SCORE]
data = data.sort_values("score", ascending=False)
data["rank"] = data["score"].rank(method="min")
data
# Removing tracks with at least one grade under the minimum required
data = data[data[data.columns[:-4]].min(axis=1) > ELIMINATING_GRADE]
data
if PLAYLIST_SIZE < 1:
playlist = data.sample(frac=PLAYLIST_SIZE, weights="rank")
else:
playlist = data.sample(n=PLAYLIST_SIZE, weights="rank")
playlist
```
## genre matrix and distances
```
DEFAULT_TRANSITION = "4,0"
DEFAULT_THRESHOLD = 7
transitions = pd.read_csv("../data/transitions.csv").fillna(DEFAULT_TRANSITION)
transitions.index = transitions["Unnamed: 0"]
transitions.drop("Unnamed: 0", axis=1, inplace=True)
# Getting the decimals right -- commas to points and no more Nones
for i in range(transitions.columns.size):
transitions[transitions.columns[i]] = transitions[transitions.columns[i]].str.replace(",", ".")
transitions[transitions.columns[i]] = pd.to_numeric(transitions[transitions.columns[i]], errors='coerce')
playlist.reset_index(inplace=True)
# This is so horribly un-optimized... May the Python Lords forgive me.
shuffled_playlist = playlist.iloc[0:1].drop(playlist.columns[5:], axis=1)
playlist.drop(0, inplace=True)
current_genre = shuffled_playlist.iloc[0]["genre"]
current_artist = shuffled_playlist.iloc[0]["artist"]
threshold = 8
chain = 0
chain_factor = .6
desperation_factor = .6
remove_indices = []
while playlist.size > 0:
for row in playlist.iterrows():
if (transitions[current_genre][row[1]["genre"]] + (chain * chain_factor) > threshold and row[1]["artist"] != current_artist) or threshold < 0:
# Song accepted -- increment or reset chain
if current_genre == row[1]["genre"]:
chain += 1
else:
current_genre = row[1]["genre"]
chain = 0
# Add song to shuffled playlist and its index to a list for further removal
shuffled_playlist = shuffled_playlist.append(playlist.loc[row[0]].drop(playlist.columns[5:]))
remove_indices.append(row[0])
threshold = DEFAULT_THRESHOLD
# Removing songs that were added during the for loop
if remove_indices:
playlist.drop(remove_indices, inplace=True)
remove_indices = []
else:
threshold -= desperation_factor
shuffled_playlist
shuffled_playlist.to_csv("../test.csv", index=False)
transitions
```
# Playground
```
data = data.reset_index()
COL = ["Qu", "Gr", "Vi"]
BY = "artist"
AMNT = 10
best = data[[BY, *COL]].dropna(how="any").groupby(BY).filter(lambda x: len(x) >= AMNT).groupby(BY).mean()[COL]
best[COL].mean(axis=1).sort_values(ascending=False).head(10)
```
- `./bin/sample_service.py` # run the service
- `locust -f benchmark/locustfile.py --host=http://127.0.0.1:8890` # run locust
- open the web UI at http://127.0.0.1:8089/
Docs at https://docs.locust.io/en/stable/quickstart.html
- `locust -f benchmark/sleep50_locust.py --master --host=http://127.0.0.1:8890`
- `locust -f benchmark/sleep50_locust.py --slave --host=http://127.0.0.1:8890`
Go (boomer) client:
```
locust -f benchmark/dummy.py --master --master-bind-host=127.0.0.1 --master-bind-port=5557
cd benchmark/boomer
go build -o a.out http_benchmark.go
./a.out --url=http://127.0.0.1:8890/sleep50 --master-port=5557 --rpc=zeromq
```
```
run_locust_master = 'locust -f dummy.py --master --master-bind-host=127.0.0.1 --master-bind-port=5557 --csv=foobar --no-web -c 100 -r 100 -n 1000'
run_locust_slave = './a.out --url=http://127.0.0.1:8890/sleep50 --master-port=5557 --rpc=zeromq'
import os
os.system('./run_locust.sh 10')
import pandas as pd
pd.read_csv('foobar_distribution.csv')
pd.read_csv('foobar_requests.csv')
import tqdm
import time
def run_experiment(rps: list, users: list, duration_sec=7):
all_distributions = []
all_requests = []
for rp, n_users in tqdm.tqdm(zip(rps, users)):
#n_users = int(rp / 10 + 2)
#n_users = 5
print(rp, n_users)
os.system(f'./run_locust.sh {n_users} {duration_sec} {rp}')
time.sleep(1)
distribution = pd.read_csv('foobar_distribution.csv')
distribution['rps'] = rp
all_distributions.append(distribution)
requests = pd.read_csv('foobar_requests.csv')
requests['rps'] = rp
all_requests.append(requests)
print(requests)
distributions = pd.concat(all_distributions)
distributions = distributions[distributions.Name != 'None Total']
requests = pd.concat(all_requests)
requests = requests[requests.Name != 'Total']
return distributions, requests
def report(requests, distributions, experiment):
return pd.DataFrame({
'experiment': experiment,
#'me': requests['Name'],
'rps': requests['Requests/s'],
'load': requests['rps'],
'50': distributions['50%'],
'75': distributions['75%'],
'95': distributions['95%'],
'99': distributions['99%'],
'100': distributions['100%'],
})
distributions, requests = run_experiment(users=[10, 20, 30], rps=[100, 200, 300], duration_sec=10)
r = report(requests, distributions, 'aiohttp uvloop')
r
distributions, requests = run_experiment(users=[50], rps=[1000], duration_sec=10)
r2 = report(requests, distributions, 'aiohttp uvloop')
r2
requests
distributions
```
# Nearest Centroid Classification with StandardScaler and QuantileTransformer
This code template is for a classification task using a simple NearestCentroid classifier with the feature rescaling technique StandardScaler and the feature transformation technique QuantileTransformer in a pipeline. The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members.
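As a rough illustration of the idea (a pure-Python sketch, not the scikit-learn implementation), nearest-centroid classification reduces to computing one per-class mean and then picking the closest centroid:

```python
import math

def fit_centroids(X, y):
    """Compute one centroid (feature-wise mean) per class label."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict_nearest_centroid(centroids, point):
    """Assign the point to the class whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# Toy data: two well-separated clusters.
X = [[0.0, 0.0], [1.0, 0.0], [9.0, 9.0], [10.0, 10.0]]
y = ['a', 'a', 'b', 'b']
centroids = fit_centroids(X, y)
print(predict_nearest_centroid(centroids, [0.5, 0.5]))  # -> a
```

The sklearn version adds distance-metric options and centroid shrinking, but the decision rule is this simple.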
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder,StandardScaler,QuantileTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path = ""
```
List of features required for model training.
```
#x_values
features = []
```
Target feature for prediction.
```
#y_value
target = ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to lower the computational cost of modelling and, in some cases, to improve the model's performance.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below contains functions that remove null values, if any exist, and convert string classes in the dataset by encoding them as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
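To make the duplication idea concrete, here is a minimal pure-Python stand-in (not the imblearn implementation; the function name and seed are illustrative):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=123):
    """Duplicate minority-class rows (sampled with replacement) until every class matches the majority count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        rows = [x for x, lab in zip(X, y) if lab == label]
        for _ in range(target - n):
            X_out.append(rng.choice(rows))
            y_out.append(label)
    return X_out, y_out

X = [[0], [1], [2], [3], [4]]
y = [0, 0, 0, 0, 1]
X_res, y_res = random_oversample(X, y)
print(Counter(y_res))  # both classes now have 4 examples
```

RandomOverSampler does essentially this (plus bookkeeping for arrays/DataFrames and sampling strategies).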
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Data Rescaling
StandardScaler standardizes features by removing the mean and scaling to unit variance.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) for the parameters
### Feature Transformation
The quantile transformer provides an automatic way to transform a numeric input variable to have a different data distribution such as the uniform or normal distribution which in turn, can be used as input to a predictive model.
The quantile function, also called the percent-point function (PPF), is the inverse of the cumulative distribution function (CDF).
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html) for the parameters
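The core of the transformation can be sketched in a few lines (a simplified rank-based illustration, not the sklearn implementation, which fits and interpolates a quantile grid):

```python
def quantile_uniform(values):
    """Map each value to its empirical CDF position in [0, 1] (rank-based)."""
    order = sorted(values)
    n = len(values)
    # index of the first occurrence; real implementations handle ties and interpolation more carefully
    return [order.index(v) / (n - 1) for v in values]

# The outlier 100.0 is pulled in to 1.0; only the relative order of the values survives.
print(quantile_uniform([1.0, 5.0, 2.0, 100.0]))  # [0.0, 0.666..., 0.333..., 1.0]
```

This is why the transform is robust to outliers: distances between values are discarded, ranks are kept.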
### Model
The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members. In effect, this makes it similar to the label updating phase of the KMeans algorithm. It also has no parameters to choose, making it a good baseline classifier. It does, however, suffer on non-convex classes, as well as when classes have drastically different variances, as equal variance in all dimensions is assumed.
#### Tuning Parameter
> **metric** : The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter. The centroids for the samples corresponding to each class is the point from which the sum of the distances of all samples that belong to that particular class are minimized. If the “manhattan” metric is provided, this centroid is the median and for all other metrics, the centroid is now set to be the mean.
> **shrink_threshold**: Threshold for shrinking centroids to remove features.
```
# Build Model here
model = make_pipeline(StandardScaler(),QuantileTransformer(),NearestCentroid())
model.fit(x_train, y_train)
```
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
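In the multi-label case, this metric can be sketched as the fraction of samples whose whole label set matches exactly (a toy illustration, not the sklearn code):

```python
def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose entire label set is predicted exactly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# The middle sample gets one of its two labels wrong, so it counts as a full miss.
print(subset_accuracy([[1, 0], [0, 1], [1, 1]], [[1, 0], [1, 1], [1, 1]]))  # -> 0.666...
```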
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were correct and how many were not.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
- f1-score:- Harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
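For a single positive class, the quantities above can be computed directly (a small sketch; `precision_recall_f1` is an illustrative helper, not a sklearn function):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 (the harmonic mean of the two) for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 2 true positives, 1 false positive, 1 false negative.
print(precision_recall_f1([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]))
```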
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Aishwarya Guntoju, Github: [Profile](https://github.com/DSAishwaryaG)
# Generating MIDI with the LSTM Model
Running this notebook generates a 30-second MIDI file using the trained neural network
### Imports
```
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import pickle
import numpy
from music21 import instrument, note, stream, chord
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import BatchNormalization as BatchNorm
```
Change the values below in order to tweak the generation behavior
```
# Config
weights_filename = 'weights-improvement-100-0.4802-bigger.hdf5'
sequence_len = 128 # which is around 30 seconds long at 128 bpm
```
Construct the Network
This function is the same one used in train.ipynb and train.py
```
def create_network(network_input, n_vocab):
""" create the structure of the neural network """
model = Sequential()
model.add(LSTM(512, input_shape=(network_input.shape[1], network_input.shape[2]),
recurrent_dropout=0.3, return_sequences=True))
model.add(LSTM(512, return_sequences=True, recurrent_dropout=0.3,))
model.add(LSTM(512))
model.add(BatchNorm())
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(BatchNorm())
model.add(Dropout(0.3))
model.add(Dense(n_vocab))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# Load the weights to each node
model.load_weights(weights_filename)
return model
```
The below cell prepares the sequences used by the Neural Network
```
def prepare_sequences(notes, pitchnames, n_vocab):
# map between notes and integers and back
note_to_int = dict((note, number) for number, note in enumerate(pitchnames))
memory_length = 16
network_input = []
output = []
for i in range(0, len(notes) - memory_length, 1):
sequence_in = notes[i:i + memory_length]
sequence_out = notes[i + memory_length]
network_input.append([note_to_int[char] for char in sequence_in])
output.append(note_to_int[sequence_out])
n_patterns = len(network_input)
# reshape the input into a format compatible with LSTM layers
normalized_input = numpy.reshape(network_input, (n_patterns, memory_length, 1))
# normalize input
normalized_input = normalized_input / float(n_vocab)
return (network_input, normalized_input)
```
The cell below converts the prediction output to notes and creates a MIDI file from them
```
def create_midi(prediction_output):
offset = 0
output_notes = []
# create note and chord objects based on the values generated by the model
for pattern in prediction_output:
# pattern is a chord
if ('.' in pattern) or pattern.isdigit():
notes_in_chord = pattern.split('.')
notes = []
for current_note in notes_in_chord:
new_note = note.Note(int(current_note))
new_note.storedInstrument = instrument.Piano()
notes.append(new_note)
new_chord = chord.Chord(notes)
new_chord.offset = offset
output_notes.append(new_chord)
# pattern is a note
else:
new_note = note.Note(pattern)
new_note.offset = offset
new_note.storedInstrument = instrument.Piano()
output_notes.append(new_note)
# increase offset each iteration so that notes do not stack
offset += 0.5
midi_stream = stream.Stream(output_notes)
midi_stream.write('midi', fp='test_output.mid')
```
The below cell generates notes from the neural network based on a sequence of notes
```
def generate_notes(model, network_input, pitchnames, n_vocab):
# pick a random sequence from the input as a starting point for the prediction
start = numpy.random.randint(0, len(network_input)-1)
int_to_note = dict((number, note) for number, note in enumerate(pitchnames))
pattern = network_input[start]
prediction_output = []
# generate X notes (X is determined by sequence_len)
for note_index in range(sequence_len):
prediction_input = numpy.reshape(pattern, (1, len(pattern), 1))
prediction_input = prediction_input / float(n_vocab)
prediction = model.predict(prediction_input, verbose=0)
index = numpy.argmax(prediction)
result = int_to_note[index]
prediction_output.append(result)
pattern.append(index)
pattern = pattern[1:len(pattern)]
return prediction_output
```
The cell below contains the function used to generate a piano midi file
```
def generate():
#load the notes used to train the model
with open('data/notes', 'rb') as filepath:
notes = pickle.load(filepath)
# Get all pitch names
pitchnames = sorted(set(item for item in notes))
# Get the vocabulary size
n_vocab = len(set(notes))
network_input, normalized_input = prepare_sequences(notes, pitchnames, n_vocab)
model = create_network(normalized_input, n_vocab)
prediction_output = generate_notes(model, network_input, pitchnames, n_vocab)
create_midi(prediction_output)
if __name__ == '__main__':
generate()
print("Generated midi file.")
```
# Linear ordinary differential equations with constant coefficients (inhomogeneous case)
* Author: Gen Kuroki (黒木玄)
* Date: 2019-04-23~2019-06-05
* Repository: https://github.com/genkuroki/DifferentialEquations
$
\newcommand\ds{\displaystyle}
\newcommand\Z{{\mathbb Z}}
\newcommand\R{{\mathbb R}}
\newcommand\C{{\mathbb C}}
\newcommand\eps{\varepsilon}
\newcommand\QED{\text{□}}
\newcommand\d{\partial}
\newcommand\real{\operatorname{Re}}
\newcommand\imag{\operatorname{Im}}
\newcommand\tr{\operatorname{tr}}
$
This file can also be viewed on [nbviewer](https://nbviewer.jupyter.org/github/genkuroki/DifferentialEquations/blob/master/08-1%20Linear%20inhomogeneous%20ODEs%20with%20constant%20coefficients.ipynb).
For a brief introduction to the [Julia language](https://julialang.org/) and the [Jupyter environment](https://jupyter.org/), see:
* [JuliaとJupyterのすすめ](https://nbviewer.jupyter.org/github/genkuroki/msfd28/blob/master/msfd28genkuroki.ipynb?flush_cached=true)
For how to set up a [Julia](https://julialang.org/) environment, see:
* [Julia v1.1.0 の Windows 8.1 へのインストール](https://nbviewer.jupyter.org/github/genkuroki/msfd28/blob/master/install.ipynb)
For how to set up a [Wolfram Language](http://www.wolfram.com/language/fast-introduction-for-programmers/ja/) environment, see:
* [Free Wolfram EngineをJupyterで使う方法](https://nbviewer.jupyter.org/github/genkuroki/msfd28/blob/master/Free%20Wolfram%20Engine.ipynb)
**Note:** The output shown in this notebook is obtained only after applying the fix described in [Free Wolfram EngineをJupyterで使う方法](https://nbviewer.jupyter.org/github/genkuroki/msfd28/blob/master/Free%20Wolfram%20Engine.ipynb) to the [toOutText function in OutputHandlingUtilities.wl](https://github.com/WolframResearch/WolframLanguageForJupyter/blob/master/WolframLanguageForJupyter/Resources/OutputHandlingUtilities.wl#L123-L136); for example, parts wrapped in `<pre>`~`</pre>` are changed to be wrapped in `$$`~`$$`.
```
JupyterImageResolution = 84;
JupyterOutTextForm = "TeX";
TeX[x_] := ToString[TeXForm[x]]
TeX[x_, y__] := StringJoin[TeX[x], TeX[y]]
TeXRaw[x__, y_] := StringJoin[x, TeX[y]]
MappedBy[x_] := x
MappedBy[x_, F___, G_] := MappedBy[x, F] // G
SetAttributes[TeXEq, HoldFirst]
TeXEq[x_] := TeX[HoldForm[x] == MappedBy[x, ReleaseHold, FullSimplify]]
TeXEq[x_, F__] := TeX[HoldForm[x] == MappedBy[x, ReleaseHold, F]]
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#非斉次の定数係数線形常微分方程式" data-toc-modified-id="非斉次の定数係数線形常微分方程式-1"><span class="toc-item-num">1 </span>Inhomogeneous linear ODEs with constant coefficients</a></span><ul class="toc-item"><li><span><a href="#斉次な場合の一般解" data-toc-modified-id="斉次な場合の一般解-1.1"><span class="toc-item-num">1.1 </span>General solution in the homogeneous case</a></span><ul class="toc-item"><li><span><a href="#例:-斉次調和振動子" data-toc-modified-id="例:-斉次調和振動子-1.1.1"><span class="toc-item-num">1.1.1 </span>Example: homogeneous harmonic oscillator</a></span></li></ul></li><li><span><a href="#非斉次な場合の一般解" data-toc-modified-id="非斉次な場合の一般解-1.2"><span class="toc-item-num">1.2 </span>General solution in the inhomogeneous case</a></span><ul class="toc-item"><li><span><a href="#非斉次な調和振動子" data-toc-modified-id="非斉次な調和振動子-1.2.1"><span class="toc-item-num">1.2.1 </span>Inhomogeneous harmonic oscillator</a></span></li><li><span><a href="#共振" data-toc-modified-id="共振-1.2.2"><span class="toc-item-num">1.2.2 </span>Resonance</a></span></li><li><span><a href="#共振再論" data-toc-modified-id="共振再論-1.2.3"><span class="toc-item-num">1.2.3 </span>Resonance revisited</a></span></li></ul></li></ul></li><li><span><a href="#非斉次な波動方程式-(より高級な話題)" data-toc-modified-id="非斉次な波動方程式-(より高級な話題)-2"><span class="toc-item-num">2 </span>Inhomogeneous wave equation (a more advanced topic)</a></span><ul class="toc-item"><li><span><a href="#解の表示" data-toc-modified-id="解の表示-2.1"><span class="toc-item-num">2.1 </span>Representation of the solution</a></span></li><li><span><a href="#Fourier変換に関する公式" data-toc-modified-id="Fourier変換に関する公式-2.2"><span class="toc-item-num">2.2 </span>Formulas for the Fourier transform</a></span></li><li><span><a href="#非斉次な波動方程式のFourier変換による解法" data-toc-modified-id="非斉次な波動方程式のFourier変換による解法-2.3"><span class="toc-item-num">2.3 </span>Solving the inhomogeneous wave equation by Fourier transform</a></span></li></ul></li></ul></div>
## Inhomogeneous linear ODEs with constant coefficients
A homogeneous linear ordinary differential equation with constant coefficients is an equation, for constants $p_0,p_1,\ldots,p_{n-1}$, of the form
$$
u^{(n)}(x) + p_{n-1}u^{(n-1)}(x) + \cdots + p_1 u'(x) + p_0 u(x) = 0
\tag{$*$}
$$
An inhomogeneous linear ordinary differential equation with constant coefficients is an equation, for constants $p_0,p_1,\ldots,p_{n-1}$ and an arbitrary function $f(x)$, of the form
$$
u^{(n)}(x) + p_{n-1}u^{(n-1)}(x) + \cdots + p_1 u'(x) + p_0 u(x) = f(x)
\tag{$**$}
$$
It differs from the homogeneous case in that the right-hand side is $f(x)$ rather than $0$. Setting
$$
w(x) = \begin{bmatrix}
u_0(x) \\ u_1(x) \\ \vdots \\ u_{n-2}(x) \\ u_{n-1}(x) \\
\end{bmatrix} =
\begin{bmatrix}
u(x) \\ u'(x) \\ \vdots \\ u^{(n-2)}(x) \\ u^{(n-1)}(x) \\
\end{bmatrix}, \qquad
b(x) = \begin{bmatrix}
0 \\ 0 \\ \vdots \\ 0 \\ f(x) \\
\end{bmatrix}
$$
then, since
$$
u_0' = u_1,\; \ldots,\; u_{n-2}' = u_{n-1},\;
u_{n-1}' = u^{(n)} = -p_{n-1}u_{n-1}-\cdots-p_1 u_1 - p_0 u_0,
$$
the equation is rewritten as follows:
$$
\frac{dw(x)}{dx} = Aw(x) + b(x).
\tag{$**'$}
$$
Here
$$
A = \begin{bmatrix}
0 & 1 & & & \\
& 0 & \ddots & & \\
& & \ddots & 1 & \\
-p_0&-p_1& \cdots & -p_{n-2} & -p_{n-1} \\
\end{bmatrix}.
$$
The homogeneous equation ($*$) is likewise rewritten as
$$
\frac{dw(x)}{dx} = Aw(x).
\tag{$*'$}
$$
Hence the treatment of equations ($*$) and ($**$) reduces to that of differential equations of the forms ($*'$) and ($**'$).
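The reduction to a first-order system can be sketched numerically in Python (a minimal illustration, independent of this notebook's Wolfram code; the explicit-Euler integrator and step size are arbitrary choices):

```python
import math

def companion(p):
    """Companion matrix A for u^(n) + p[n-1]u^(n-1) + ... + p[0]u = 0, p = [p_0, ..., p_{n-1}]."""
    n = len(p)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0          # u_i' = u_{i+1}
    A[n - 1] = [-c for c in p]     # u_{n-1}' = -p_{n-1}u_{n-1} - ... - p_0 u_0
    return A

def euler_step(A, w, b, h):
    """One explicit-Euler step of w' = A w + b."""
    n = len(w)
    return [w[i] + h * (sum(A[i][j] * w[j] for j in range(n)) + b[i]) for i in range(n)]

# u'' = -u (p_0 = 1, p_1 = 0) with u(0) = 1, u'(0) = 0, so u(t) = cos t.
A = companion([1.0, 0.0])
w, h = [1.0, 0.0], 1e-4
for _ in range(10000):          # integrate up to t = 1
    w = euler_step(A, w, [0.0, 0.0], h)
print(w[0], math.cos(1.0))      # the two values should roughly agree
```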
### General solution in the homogeneous case
Let $A$ be an $n\times n$ matrix with constant entries, let $w(x)$ be an $n$-dimensional column vector whose entries are functions of $x$, and let $c$ be an $n$-dimensional column vector with constant entries.
The solution of the initial value problem for the linear differential equation
$$
\frac{dw(x)}{dx} = Aw(x), \quad w(0) = c
$$
is expressed in the following form:
$$
w(x) = e^{xA}c.
$$
#### Example: homogeneous harmonic oscillator
Assume $\omega \ne 0$, and let $a,b$ be given constants.
The initial value problem
$$
\ddot u(t) = -\omega^2 u(t), \quad u(0) = a, \quad \dot u(0) = b
$$
is rewritten, by setting $w(t) = \begin{bmatrix}u(t) \\ \dot u(t)\end{bmatrix}$, $c=\begin{bmatrix} a \\ b \end{bmatrix}$, $A=\begin{bmatrix}
0 & 1 \\
-\omega^2 & 0 \\
\end{bmatrix}$, in the form
$$
\frac{dw(t)}{dt}=Aw(t), \quad w(0) = c.
$$
In this case, writing $E$ for the $2\times 2$ identity matrix, we have
$$
A^{2k} = (-\omega^2)^k E, \quad A^{2k+1} = (-\omega^2)^k A
$$
and therefore
$$
\begin{aligned}
e^{tA} &=
\sum_{k=0}^\infty \frac{t^{2k}}{(2k)!}(-\omega^2)^k E +
\sum_{k=0}^\infty \frac{t^{2k+1}}{(2k+1)!}(-\omega^2)^k A
\\ &=
\begin{bmatrix}
\cos(\omega t) & \sin(\omega t)/\omega \\
-\omega\sin(\omega t) & \cos(\omega t) \\
\end{bmatrix}.
\end{aligned}
$$
Hence the solution of $\dot w(t)=Aw(t)$, $w(0) = c$ is expressed as
$$
\begin{aligned}
w(t) = \begin{bmatrix}
u(t) \\ \dot u(t)
\end{bmatrix} &=
\begin{bmatrix}
\cos(\omega t) & \sin(\omega t)/\omega \\
-\omega\sin(\omega t) & \cos(\omega t) \\
\end{bmatrix}
\begin{bmatrix}
a \\
b \\
\end{bmatrix}
\\ &=
\begin{bmatrix}
a\cos(\omega t) + b\sin(\omega t)/\omega \\
-a\omega\sin(\omega t) + b\cos(\omega t) \\
\end{bmatrix}
\end{aligned}
$$
In particular,
$$
u(t) = a\cos(\omega t) + b\frac{\sin(\omega t)}{\omega}.
$$
$\QED$
```
Clear[a,b,omega,t]
u = a Cos[omega t] + b Sin[omega t]/omega;
u // TeXEq
u/.t->0 // TeXEq
D[u,t]/.t->0 // TeXEq
D[u,{t,2}] + omega^2 u // TeXEq
```
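As a complementary numeric check (a pure-Python sketch, separate from the symbolic Wolfram verification above; the 30-term truncation is an arbitrary but ample choice), the partial sums of $\sum_k t^kA^k/k!$ reproduce the closed form for $e^{tA}$:

```python
import math

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expm2(A, t, terms=30):
    """Truncated exponential series: sum over k of (tA)^k / k! for a 2x2 matrix A."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term (identity)
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = matmul2(term, [[t * a / k for a in row] for row in A])  # term <- term * (tA)/k
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

omega, t = 2.0, 0.7
A = [[0.0, 1.0], [-omega**2, 0.0]]
closed_form = [[math.cos(omega * t), math.sin(omega * t) / omega],
               [-omega * math.sin(omega * t), math.cos(omega * t)]]
E = expm2(A, t)
print(max(abs(E[i][j] - closed_form[i][j]) for i in range(2) for j in range(2)))  # tiny (near machine precision)
```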
### General solution in the inhomogeneous case
Let $b(x)$ be an $n$-dimensional column vector whose entries are given functions, and consider the initial value problem for the inhomogeneous linear differential equation with constant coefficients
$$
\frac{dw(x)}{dx} = Aw(x) + b(x), \quad w(0) = c.
$$
This can be solved by the so-called **variation of constants**. In the homogeneous case, i.e. when $b(x)=0$, the solution is written as
$$
w(x) = e^{xA}c.
$$
For the inhomogeneous case, let us look for a solution of the form
$$
w(x) = e^{xA}c(x),
$$
with the constant vector $c$ replaced by a vector of functions $c(x)$. Because the constant $c$ is allowed to vary as $c(x)$, this method is called **variation of constants**.
Substituting $w(x)=e^{xA}c(x)$ into $w'(x)=Aw(x)+b(x)$ gives
$$
w'(x)=Ae^{xA}c(x) + e^{xA}c'(x) = Aw(x)+e^{xA}c'(x) = Aw(x) + b(x),
$$
so if $c(x)$ satisfies $e^{xA}c'(x)=b(x)$, $c(0)=c$, then $w(x)$ satisfies $w'(x)=Aw(x)+b(x)$, $w(0)=c$. The equation $e^{xA}c'(x)=b(x)$, $c(0)=c$ for $c(x)$ is equivalent to $c'(x) = e^{-xA}b(x)$, $c(0)=c$, and is solved by
$$
c(x) = c + \int_0^x e^{-yA}b(y)\,dy.
$$
Substituting this into $w(x)=e^{xA}c(x)$ gives
$$
w(x) = e^{xA}c + \int_0^x e^{(x-y)A}b(y)\,dy.
$$
One can check directly that this satisfies $w'(x)=Aw(x)+b(x)$, $w(0)=c$. Note that applying the general identity, for $F(x,z)$,
$$
\frac{d}{dx}F(x,x) = F_x(x,x) + F_z(x,x)
$$
to $F(x,z)=\int_0^x e^{(x-y)A}b(y)\,dy$ yields
$$
\frac{d}{dx} \int_0^x e^{(x-y)A}b(y)\,dy =
A \int_0^x e^{(x-y)A}b(y)\,dy + b(x).
$$
This solves the desired inhomogeneous linear differential equation with constant coefficients.
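A quick numeric sanity check of this formula in the scalar case $n=1$ (a hedged Python sketch; the trapezoid rule and step count are arbitrary choices): for $w'=2w+1$, $w(0)=0$, the formula must reproduce the exact solution $(e^{2x}-1)/2$.

```python
import math

def duhamel_scalar(a, b, x, c0, n=2000):
    """w(x) = e^{xa} c0 + integral_0^x e^{(x-y)a} b(y) dy, integral by the trapezoid rule."""
    h = x / n
    total = 0.0
    for k in range(n + 1):
        y = k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.exp((x - y) * a) * b(y)
    return math.exp(x * a) * c0 + h * total

# w' = 2w + 1, w(0) = 0 has the exact solution w(x) = (e^{2x} - 1)/2.
approx = duhamel_scalar(2.0, lambda y: 1.0, 1.0, 0.0)
exact = (math.exp(2.0) - 1.0) / 2.0
print(approx, exact)  # the two values agree closely
```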
#### Inhomogeneous harmonic oscillator
Assume $\omega \ne 0$, let $a,b$ be given constants, and let $f(t)$ be a given function.
The initial value problem
$$
\ddot u(t) = -\omega^2 u(t) + f(t), \quad u(0) = a, \quad \dot u(0) = b
$$
is rewritten, by setting $w(t) = \begin{bmatrix}u(t) \\ \dot u(t)\end{bmatrix}$, $c=\begin{bmatrix} a \\ b \end{bmatrix}$, $g(t)=\begin{bmatrix} 0 \\ f(t)\end{bmatrix}$, $A=\begin{bmatrix}
0 & 1 \\
-\omega^2 & 0 \\
\end{bmatrix}$, in the form
$$
\frac{dw(t)}{dt}=Aw(t) + g(t), \quad w(0) = c.
$$
In this case,
$$
A^{2k} =
\begin{bmatrix}
(-\omega^2)^k & 0 \\
0 & (-\omega^2)^k \\
\end{bmatrix},
\quad
A^{2k+1} =
\begin{bmatrix}
0 & (-\omega^2)^k \\
(-\omega^2)^{k+1} & 0 \\
\end{bmatrix}
$$
so that
$$
e^{tA} =
\begin{bmatrix}
\sum_{k=0}^\infty (-1)^k\frac{(\omega t)^{2k}}{(2k)!}
& \frac{1}{\omega}\sum_{k=0}^\infty (-1)^k\frac{(\omega t)^{2k+1}}{(2k+1)!}
\\ -\omega\sum_{k=0}^\infty (-1)^k\frac{(\omega t)^{2k+1}}{(2k+1)!}
& \sum_{k=0}^\infty (-1)^k\frac{(\omega t)^{2k}}{(2k)!} \\
\end{bmatrix} =
\begin{bmatrix}
\cos(\omega t) & \sin(\omega t)/\omega \\
-\omega\sin(\omega t) & \cos(\omega t) \\
\end{bmatrix}
$$
as we derived earlier.
```
Clear[omega, t]
A = {{0, 1}, {-omega^2, 0}};
A // TeXEq
MatrixExp[t A] // TeXEq
```
Let us look for a solution of the form $w(t)=e^{tA}c(t)$. Since
$$
\dot w(t) = Ae^{tA}c(t) + e^{tA}\dot c(t) = Aw(t) + e^{tA}\dot c(t),
$$
if $\dot c(t) = e^{-tA}g(t)$ and $c(0)=c$, then $w(t)$ is the desired solution. Such a $c(t)$ can be written as
$$
c(t) = c + \int_0^t e^{-sA}g(s)\,ds.
$$
Hence the desired solution can be written as
$$
w(t) = e^{tA}c + \int_0^t e^{(t-s)A}g(s)\,ds.
$$
And since
$$
e^{tA}c =
\begin{bmatrix}
a\cos(\omega t) + b\sin(\omega t)/\omega \\
-a\omega\sin(\omega t) + b\cos(\omega t) \\
\end{bmatrix},
\quad
e^{(t-s)A}g(s) =
\begin{bmatrix}
\sin(\omega(t-s))/\omega \\
\cos(\omega(t-s)) \\
\end{bmatrix}
f(s)
$$
the first component $u(t)$ of $w(t)$ is expressed as follows:
$$
u(t) = a\cos(\omega t) + b\frac{\sin(\omega t)}{\omega} +
\int_0^t \frac{\sin(\omega(t-s))}{\omega} f(s)\,ds.
$$
Let us verify directly that this is indeed the desired solution:
$$
\begin{aligned}
u(0) &= a,
\\
\dot u(t) &= -\omega a \sin(\omega t) + \omega b \frac{\cos(\omega t)}{\omega} +
\omega\int_0^t \frac{\cos(\omega(t-s))}{\omega} f(s)\,ds,
\\
\dot u(0) &= b,
\\
\ddot u(t) &= -\omega^2 a\cos(\omega t) - \omega^2 b\frac{\sin(\omega t)}{\omega} -
\omega^2\int_0^t \frac{\sin(\omega(t-s))}{\omega} f(s)\,ds + f(t)
\\ & = -
\omega^2 u(t) + f(t).
\end{aligned}
$$
The inhomogeneous harmonic oscillator is a harmonic oscillator subject to an external force $f(t)$.
```
Clear[a,b,omega,s,t]
u = a Cos[omega t] + b Sin[omega t]/omega + Integrate[Sin[omega (t-s)]/omega * f[s], {s, 0, t}];
u // TeXEq
u/.t->0 // TeXEq
D[u, t] // TeXEq
D[u, t]/.t->0 // TeXEq
D[u, {t,2}] // TeXEq
```
#### Resonance
Let us examine how the solution
$$
u(t) = a\cos(\omega t) + b\frac{\sin(\omega t)}{\omega} +
\int_0^t \frac{\sin(\omega(t-s))}{\omega} f(s)\,ds
$$
of $\ddot u(t) = -\omega^2 u(t) + f(t)$ behaves when $f(t) = \cos(\alpha t), \sin(\alpha t)$. For this we use the trigonometric identities
$$
\begin{aligned}
&
\sin x\cdot \cos y = \frac{1}{2}(\sin(x-y) + \sin(x+y)),
\\ &
\sin x\cdot \sin y = \frac{1}{2}(\cos(x-y) - \cos(x+y))
\end{aligned}
$$
From these identities,
$$
\begin{aligned}
&
\frac{\sin(\omega(t-s))}{\omega}\cos(\alpha s) =
\frac{1}{2\omega}(\sin(\omega t - (\omega+\alpha)s) + \sin(\omega t - (\omega-\alpha)s)),
\\ &
\frac{\sin(\omega(t-s))}{\omega}\sin(\alpha s) =
\frac{1}{2\omega}(\cos(\omega t - (\omega+\alpha)s) - \cos(\omega t - (\omega-\alpha)s)).
\end{aligned}
$$
Hence, when $\alpha\ne\pm\omega$,
$$
\begin{aligned}
\int_0^t \frac{\sin(\omega(t-s))}{\omega}\cos(\alpha s)\,ds &=
\frac{1}{2\omega}\left[
\frac{\cos(\omega t - (\omega+\alpha)s)}{\omega+\alpha} +
\frac{\cos(\omega t - (\omega-\alpha)s)}{\omega-\alpha}
\right]_{s=0}^{s=t}
\\ & =
\frac{1}{2\omega}\left(
\frac{\cos(\alpha t)}{\omega+\alpha} +
\frac{\cos(\alpha t)}{\omega-\alpha} -
\frac{\cos(\omega t)}{\omega+\alpha} -
\frac{\cos(\omega t)}{\omega-\alpha}
\right)
\\ & =
\frac{\cos(\alpha t)}{\omega^2-\alpha^2} - \frac{\cos(\omega t)}{\omega^2-\alpha^2}
\\ &=
-\frac{\cos(\alpha t)-\cos(\omega t)}{\alpha^2-\omega^2},
\\
\int_0^t \frac{\sin(\omega(t-s))}{\omega}\sin(\alpha s)\,ds &=
\frac{1}{2\omega}\left[
\frac{-\sin(\omega t - (\omega+\alpha)s)}{\omega+\alpha} -
\frac{-\sin(\omega t - (\omega-\alpha)s)}{\omega-\alpha}
\right]_{s=0}^{s=t}
\\ & =
\frac{1}{2\omega}\left(
\frac{\sin(\alpha t)}{\omega+\alpha} +
\frac{\sin(\alpha t)}{\omega-\alpha} +
\frac{\sin(\omega t)}{\omega+\alpha} -
\frac{\sin(\omega t)}{\omega-\alpha}
\right)
\\ & =
\frac{\sin(\alpha t)}{\omega^2-\alpha^2} -
\frac{\alpha}{\omega}\frac{\sin(\omega t)}{\omega^2-\alpha^2}
\\ & = -\frac{\alpha}{\omega}
\frac{(\omega/\alpha)\sin(\alpha t) - \sin(\omega t)}{\alpha^2-\omega^2}.
\end{aligned}
$$
The amplitudes of these functions grow as $\alpha^2$ approaches $\omega^2$.
In the case $\alpha=\omega$, taking the limit $\alpha\to\omega$ in the formulas above gives
$$
\begin{aligned}
\int_0^t \frac{\sin(\omega(t-s))}{\omega}\cos(\omega s)\,ds &=
\frac{t\sin(\omega t)}{2\omega},
\\
\int_0^t \frac{\sin(\omega(t-s))}{\omega}\sin(\omega s)\,ds &=
\frac{-t\cos(\omega t) + (1/\omega)\sin(\omega t)}{2\omega}.
\end{aligned}
$$
The formulas for the case $\alpha=-\omega$ are obtained from these by replacing $\omega$ with $-\omega$.
Thus, when $\alpha=\pm\omega$, terms appear whose amplitude grows in proportion to the time $t$. That is, when the period of the external force applied to the harmonic oscillator equals the oscillator's period $2\pi/|\omega|$, the amplitude of the forced harmonic oscillator grows without bound, in proportion to time.
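The linear growth at resonance is easy to observe numerically (a rough Python sketch; trapezoid-rule integration of the forcing integral with $f(s)=\cos(\alpha s)$):

```python
import math

def forced_response(omega, alpha, t, n=4000):
    """integral_0^t sin(omega(t - s))/omega * cos(alpha s) ds, by the trapezoid rule."""
    h = t / n
    total = 0.0
    for k in range(n + 1):
        s = k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.sin(omega * (t - s)) / omega * math.cos(alpha * s)
    return h * total

omega, t = 1.0, 40.0
# At resonance (alpha = omega) the closed form is t*sin(omega t)/(2 omega): amplitude grows like t.
print(forced_response(omega, omega, t), t * math.sin(omega * t) / (2 * omega))
```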
```
Sin[x-y] + Sin[x+y] // HoldForm // TeXEq[#, TrigExpand]&
Cos[x-y] - Cos[x+y] // HoldForm // TeXEq[#, TrigExpand]&
Integrate[Sin[omega (t-s)] Cos[alpha s] / omega, {s, 0, t}] // TeXEq
Integrate[Sin[omega (t-s)] Cos[omega s] / omega, {s, 0, t}] // TeXEq
Integrate[Sin[omega (t-s)] Cos[-omega s] / omega, {s, 0, t}] // TeXEq
Integrate[Sin[omega (t-s)] Sin[alpha s] / omega, {s, 0, t}] // TeXEq
Integrate[Sin[omega (t-s)] Sin[omega s] / omega, {s, 0, t}] // TeXEq
Integrate[Sin[omega (t-s)] Sin[-omega s] / omega, {s, 0, t}] // TeXEq
```
**Summary:** When $\alpha,\omega\ne 0$, the solution of the initial value problem
$$
\ddot u(t) = -\omega^2 u(t) + A\cos(\alpha t) + B\sin(\alpha t), \quad
u(0)=q, \quad \dot u(0)=p
\tag{$*$}
$$
takes, when $\alpha\ne\pm\omega$, the form
$$
u(t) = a\cos(\omega t) + b\sin(\omega t) + c\cos(\alpha t) + d\sin(\alpha t),
$$
and, when $\alpha=\pm\omega$, the form
$$
u(t) = a\cos(\omega t) + b\sin(\omega t) + c\,t\cos(\omega t)+ d\,t\sin(\omega t).
$$
**Proof:** The case $\alpha\ne\pm\omega$: set
$$
u(t) = a\cos(\omega t) + b\sin(\omega t) + c\cos(\alpha t) + d\sin(\alpha t)
$$
Then
$$
\begin{aligned}
u(0) &= a + c
\\
\dot u(t) &= -
\omega a\sin(\omega t) + \omega b\cos(\omega t) -
\alpha c\sin(\alpha t) + \alpha d\cos(\alpha t),
\\
\dot u(0) &= \omega b + \alpha d,
\\
\ddot u(t) &= -
\omega^2 a\cos(\omega t) - \omega^2 b\sin(\omega t) -
\alpha^2 c\cos(\alpha t) - \alpha^2 d\sin(\alpha t)
\\ & =
-\omega^2 u(t) + (\omega^2-\alpha^2)c\cos(\alpha t) + (\omega^2-\alpha^2)d\sin(\alpha t).
\end{aligned}
$$
Comparing this with ($*$), we obtain
$$
a + c = q, \quad \omega b + \alpha d = p, \quad
(\omega^2-\alpha^2)c = A, \quad (\omega^2-\alpha^2)d = B.
$$
That is,
$$
a = q - \frac{A}{\omega^2-\alpha^2}, \quad
b = \frac{1}{\omega}\left(p - \frac{\alpha B}{\omega^2-\alpha^2}\right), \quad
c = \frac{A}{\omega^2-\alpha^2}, \quad
d = \frac{B}{\omega^2-\alpha^2}.
$$
With $a,b,c,d$ chosen in this way, it is easy to verify that the above $u(t)$ solves ($*$).
The case $\alpha=\omega$: setting
$$
u(t) = a\cos(\omega t) + b\sin(\omega t) + c\,t\cos(\omega t) + d\,t\sin(\omega t)
$$
we compute
$$
\begin{aligned}
u(0) &= a,
\\
\dot u(t) &= -
\omega a\sin(\omega t) + \omega b\cos(\omega t) +
c(-\omega t\sin(\omega t) + \cos(\omega t)) + d(\omega t\cos(\omega t) + \sin(\omega t)),
\\
\dot u(0) &= \omega b + c,
\\
\ddot u(t) &= -
\omega^2 a\cos(\omega t) - \omega^2 b\sin(\omega t) +
c(-\omega^2 t\cos(\omega t) - 2\omega \sin(\omega t)) + d(-\omega^2 t\sin(\omega t) + 2\omega\cos(\omega t)),
\\ & =
-\omega^2 u(t) + 2\omega d\cos(\omega t) - 2\omega c\sin(\omega t).
\end{aligned}
$$
Comparing this with ($*$), we obtain
$$
a = q, \quad \omega b+c = p, \quad 2\omega d = A, \quad -2\omega c=B.
$$
That is,
$$
a = q, \quad b = \frac{p}{\omega} + \frac{B}{2\omega^2}, \quad c = -\frac{B}{2\omega}, \quad d = \frac{A}{2\omega}.
$$
With $a,b,c,d$ chosen in this way, it is easy to verify that the above $u(t)$ solves ($*$). The case $\alpha=-\omega$ is similar. $\QED$
```
Clear[A,B,a,b,p,q,alpha,omega]
u = (q - A/(omega^2 - alpha^2)) Cos[omega t] + (p - alpha B/(omega^2-alpha^2))/omega Sin[omega t] + A/(omega^2-alpha^2) Cos[alpha t] + B/(omega^2-alpha^2) Sin[alpha t];
u // TeXEq
u/.t->0 // TeXEq
D[u, t] // TeXEq
D[u,t]/.t->0 // TeXEq
D[u,t,t] + omega^2 u // TeXEq
Clear[A,B,a,b,p,q,alpha,omega]
u = q Cos[omega t] + (p/omega + B/(2 omega^2)) Sin[omega t] - B/(2 omega) t Cos[omega t] + A/(2 omega) t Sin[omega t];
u // TeXEq
u/.t->0 // TeXEq
D[u,t] // TeXEq
D[u,t]/.t->0 // TeXEq
D[u, t, t] + omega^2 u // TeXEq
```
**Example:** The solution of $\ddot u(t) = -u(t) + \cos(\sqrt{2}\;t)$, $u(0)=1$, $\dot u(0)=0$ is
$$
u(t) = 2\cos t - \cos(\sqrt{2}\;t).
$$
```
u = 2 Cos[t] - Cos[Sqrt[2] t];
u // TeXEq
u/.t->0 // TeXEq
D[u,t] // TeXEq
D[u,t]/.t->0 // TeXEq
D[u,t,t] + u // TeXEq
Plot[u, {t,0,100}]
```
**Example:** The solution of $\ddot u(t) = -u(t) + \sin(\sqrt{2}\;t)$, $u(0)=1$, $\dot u(0)=0$ is
$$
u(t) = \cos t + \sqrt{2}\sin t - \sin(\sqrt{2}\;t).
$$
```
u = Cos[t] + Sqrt[2] Sin[t] - Sin[Sqrt[2] t];
u // TeXEq
u/.t->0 // TeXEq
D[u,t] // TeXEq
D[u,t]/.t->0 // TeXEq
D[u,t,t] + u // TeXEq
Plot[u, {t,0,100}]
```
**Example:** The solution of $\ddot u(t) = -u(t) + \cos t$, $u(0)=1$, $\dot u(0)=0$ is
$$
u(t) = \cos t + \frac{t}{2}\sin t.
$$
```
u = Cos[t] + t/2 Sin[t];
u // TeXEq
u/.t->0 // TeXEq
D[u,t] // TeXEq
D[u,t]/.t->0 // TeXEq
D[u,t,t] + u // TeXEq
Plot[u, {t,0,100}]
```
**Example:** The solution of $\ddot u(t) = -u(t) + \sin t$, $u(0)=1$, $\dot u(0)=0$ is
$$
u(t) = \cos t + \frac{1}{2}\sin t - \frac{t}{2}\cos t.
$$
This can be found as follows. Setting
$$
u(t) = a\cos t + b\sin t + c\;t\cos t + d\;t\sin t
$$
we compute
$$
\begin{aligned}
u(0) &= a,
\\
\dot u(t) &= -a\sin t + b\cos t + c(-t\sin t + \cos t) + d(t\cos t + \sin t),
\\
\dot u(0) &= b+c,
\\
\ddot u(t) &= -a\cos t - b\sin t + c(-t\cos t -2\sin t) + d(-t\sin t + 2\cos t)
\\ & =
-u(t) + 2d\cos t -2c\sin t.
\end{aligned}
$$
Comparing this with $\ddot u(t) = -u(t) + \sin t$, $u(0)=1$, $\dot u(0)=0$ yields $a=1$, $d=0$, $c=-1/2$, $b=1/2$.
```
u = Cos[t] + 1/2 Sin[t] - t/2 Cos[t];
u // TeXEq
u/.t->0 // TeXEq
D[u,t] // TeXEq
D[u,t]/.t->0 // TeXEq
D[u,t,t] + u // TeXEq
Plot[u, {t,0,100}]
```
#### Resonance revisited
Assume $\alpha,\omega\ne 0$ and write $\d = d/dt$.
The summary of the previous section means that every solution of the constant-coefficient differential equation
$$
\ddot u(t) + \omega^2 u(t) = A\cos(\alpha t) + B\sin(\alpha t)
\tag{$*$}
$$
is also a solution of the following differential equation:
$$
(\d^2+\omega^2)(\d^2+\alpha^2)u(t) = 0.
\tag{$**$}
$$
In the case $\omega^2\ne\alpha^2$, the vector space of all solutions of ($**$) admits the basis
$$
\cos(\omega t), \quad \sin(\omega t), \quad
\cos(\alpha t), \quad \sin(\alpha t)
$$
while in the case $\omega^2=\alpha^2$ it admits the basis
$$
\cos(\omega t), \quad \sin(\omega t), \quad
t\cos(\omega t), \quad t\sin(\omega t)
$$
as we saw above.
Is this agreement a coincidence? Of course not. Let us explain why below.
The left-hand side of equation ($*$) can be written as $(\d^2+\omega^2)u(t)$, and its right-hand side is a solution of the differential equation $(\d^2+\alpha^2)v(t)=0$; hence each solution $u(t)$ of ($*$) gives rise to a pair $u(t),v(t)$ satisfying
$$
(\d^2+\omega^2)u(t) = v(t), \quad
(\d^2+\alpha^2)v(t) = 0
$$
Substituting the former equation into the latter yields ($**$). This shows that every solution of ($*$) is a solution of ($**$).
Had we argued this way from the start, it would have been nearly obvious that the solutions of ($*$) admit the description given in the previous section's summary.
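This factorization argument is easy to machine-check (a Python/SymPy sketch; the original notebook uses Mathematica for such checks):

```python
import sympy as sp

t = sp.symbols('t')
omega, alpha = sp.symbols('omega alpha', nonzero=True)

def L(expr, k):
    # apply the operator d^2/dt^2 + k^2 to expr
    return sp.diff(expr, t, 2) + k**2 * expr

r1 = sp.simplify(L(L(sp.cos(alpha * t), omega), alpha))      # cos(alpha t) solves (**)
r2 = sp.simplify(L(L(t * sp.sin(omega * t), omega), omega))  # t sin(omega t) solves (**) when alpha = omega
print(r1, r2)  # 0 0
```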
## The inhomogeneous wave equation (a more advanced topic)
The content of this section runs ahead of the main development, so readers encountering it for the first time may skip it; it will not be too late to return once it becomes necessary. Still, the author finds the material practical and encourages readers to follow the computations.
**Point:** The inhomogeneous harmonic oscillator is essentially the spatial Fourier transform of the inhomogeneous wave equation. Since the inhomogeneous harmonic oscillator has been solved completely, the inverse Fourier transform solves the inhomogeneous wave equation completely as well. $\QED$
### Representation of the solution
The following problem for $u=u(t,x)$ is called the initial value problem for the inhomogeneous wave equation:
$$
u_{tt}(t,x) = u_{xx}(t,x) + f(t,x), \quad u(0,x)=a(x), \quad u_t(0,x)=b(x).
$$
Its solution can be written as follows:
$$
u(t,x) = \frac{a(x+t)+a(x-t)}{2} +
\frac{1}{2}\int_{x-t}^{x+t} b(y)\,dy +
\frac{1}{2}\int_0^t ds\,\int_{x-(t-s)}^{x+(t-s)} f(s,y)\,dy.
\tag{$*$}
$$
That this actually satisfies the conditions above can be checked directly:
$$
\begin{aligned}
u(0,x) &= a(x),
\\
u_t(t,x) &= \frac{a'(x+t)-a'(x-t)}{2} +
\frac{b(x+t)+b(x-t)}{2}
\\ &+
\int_0^t\frac{f(s,x+(t-s))+f(s,x-(t-s))}{2}\,ds,
\\
u_t(0,x) &= b(x),
\\
u_{tt}(t,x) &= \frac{a''(x+t)+a''(x-t)}{2} +
\frac{b'(x+t)-b'(x-t)}{2}
\\ &+
\int_0^t\frac{f'(s,x+(t-s))-f'(s,x-(t-s))}{2}\,ds + f(t,x),
\\
u_x(t,x) &= \frac{a'(x+t)+a'(x-t)}{2} +
\frac{b(x+t)-b(x-t)}{2}
\\ &+
\int_0^t\frac{f(s,x+(t-s))-f(s,x-(t-s))}{2}\,ds,
\\
u_{xx}(t,x) &= \frac{a''(x+t)+a''(x-t)}{2} +
\frac{b'(x+t)-b'(x-t)}{2}
\\ &+
\int_0^t\frac{f'(s,x+(t-s))-f'(s,x-(t-s))}{2}\,ds.
\end{aligned}
$$
Hence $u_{tt}(t,x) = u_{xx}(t,x) + f(t,x)$.
```
Clear[a,b,f,s,t,x,y]
u = (a[x+t]+a[x-t])/2 + (1/2) Integrate[b[y],{y,x-t,x+t}] + (1/2) Integrate[Integrate[f[s,y],{y,x-(t-s),x+(t-s)}],{s,0,t}];
u // TeXRaw["u=",#]&
u/.t->0 // TeXRaw["u(0,x) = ", #]&
D[u,t] // TeXRaw["u_t = ",#]&
D[u,t]/.t->0 // TeXRaw["u_t(0,x) =", #]&
utt = D[u,{t,2}];
utt // TeXRaw["u_{tt} = ", #]&
uxx = D[u,{x,2}];
uxx // TeXRaw["u_{xx} = ", #]&
utt - uxx// TeXRaw["u_{tt}-u_{xx}=", # // Simplify]&
```
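The same direct verification can be carried out symbolically in Python with SymPy, for concrete (illustrative, assumed) choices of $a$, $b$, $f$:

```python
import sympy as sp

t, x, s, y = sp.symbols('t x s y')
a = sp.sin(x)               # assumed initial displacement a(x)
bfun = sp.cos(y)            # assumed initial velocity b(y)
f = sp.exp(-s) * sp.sin(y)  # assumed source term f(s, y)

# the solution formula (*) for these data
u = ((a.subs(x, x + t) + a.subs(x, x - t)) / 2
     + sp.integrate(bfun, (y, x - t, x + t)) / 2
     + sp.integrate(sp.integrate(f, (y, x - (t - s), x + (t - s))), (s, 0, t)) / 2)

# u_tt - u_xx - f(t, x) should vanish identically
residual = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.exp(-t) * sp.sin(x))
print(residual)
```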
How, then, could the solution formula ($*$), given above out of thin air, have been discovered?
Fourier transforming the inhomogeneous wave equation in $x$ reduces it to the inhomogeneous harmonic oscillator of the previous section, and the inverse Fourier transform recovers the solution formula ($*$). In the following sections we carry this out.
### Formulas for the Fourier transform
For a function $f(x)$ whose absolute value, together with the absolute values of its derivatives, decays sufficiently fast as $|x|\to\infty$, define the Fourier transform $\hat{f}(\omega)$ by
$$
\mathscr{F}[f](\omega) = \hat{f}(\omega) = \int_{-\infty}^\infty e^{-i\omega x}f(x)\,dx
$$
Then the inverse transform recovers $f(x)$ from $\hat{f}(\omega)$:
$$
f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{i\omega x}\hat{f}(\omega)\,d\omega.
$$
By integration by parts,
$$
\mathscr{F}[f'](\omega) =
\int_{-\infty}^\infty e^{-i\omega x}f'(x)\,dx = -
\int_{-\infty}^\infty (e^{-i\omega x})' f(x)\,dx = i\omega\hat{f}(\omega).
$$
Hence
$$
\mathscr{F}[f''](\omega) = -\omega^2\hat{f}(\omega).
\tag{1}
$$
For simplicity, assume $t>0$. When $f(x)$ is
$$
f(x) = \frac{1}{2}\chi_{[-t,t]}(x) =
\begin{cases}
1/2 & (-t\leqq x\leqq t) \\
0 & (\text{otherwise})
\end{cases}
$$
its transform is
$$
\hat{f}(\omega) = \frac{1}{2}\int_{-t}^t e^{-i\omega x}\,dx = \frac{\sin(\omega t)}{\omega}
$$
and therefore
$$
\frac{1}{2}\chi_{[-t,t]}(x) =
\frac{1}{2\pi}\int_{-\infty}^\infty e^{i\omega x}\frac{\sin(\omega t)}{\omega}\,d\omega.
$$
Substituting $x-y$ for $x$ on both sides, multiplying by $f(y)$, and integrating over $y$, we obtain
$$
\frac{1}{2}\int_{x-t}^{x+t} f(y)\,dy =
\frac{1}{2\pi}\int_{-\infty}^\infty e^{i\omega x}\frac{\sin(\omega t)}{\omega}\hat{f}(\omega)\,d\omega.
\tag{2}
$$
Differentiating both sides with respect to $t$ gives
$$
\frac{f(x+t)+f(x-t)}{2} =
\frac{1}{2\pi}\int_{-\infty}^\infty e^{i\omega x}\cos(\omega t)\hat{f}(\omega)\,d\omega.
\tag{3}
$$
```
Plot[(HeavisidePi[x/(2t)]/2)/.t->-2, {x,-4,4}, Exclusions->None]
Sqrt[2 Pi] FourierTransform[HeavisidePi[x/(2t)]/2, x, omega] // TeXEq
1/Sqrt[2 Pi] InverseFourierTransform[Abs[t] Sinc[omega t], omega, x] // TeXEq
F = 1/Sqrt[2 Pi] InverseFourierTransform[Abs[t] Sinc[omega t], omega, x];
Plot[F/.t->-2, {x,-4,4}]
```
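The transform of the box function can also be checked numerically (a Python sketch with SciPy; the particular values of $t$ and $\omega$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

t_val, omega_val = 2.0, 1.3   # arbitrary illustrative values
# hat f(omega) = (1/2) * integral_{-t}^{t} e^{-i omega x} dx, split into real and imaginary parts
re, _ = quad(lambda xx: 0.5 * np.cos(omega_val * xx), -t_val, t_val)
im, _ = quad(lambda xx: -0.5 * np.sin(omega_val * xx), -t_val, t_val)
print(re - np.sin(omega_val * t_val) / omega_val)  # essentially 0
print(im)                                          # essentially 0: the box is even
```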
### Solving the inhomogeneous wave equation by Fourier transform
The initial value problem for the inhomogeneous wave equation
$$
u_{tt}(t,x) = u_{xx}(t,x) + f(t,x), \quad u(0,x)=a(x), \quad u_t(0,x)=b(x)
$$
is rewritten, by (1), as
$$
\hat{u}_{tt}(t,\omega) = -\omega^2\hat{u}(t,\omega) + \hat{f}(t,\omega),
\quad \hat{u}(0,\omega)=\hat{a}(\omega), \quad \hat{u}_t(0,\omega)=\hat{b}(\omega).
$$
This is an inhomogeneous harmonic oscillator, and hence can be solved as follows:
$$
\hat{u}(t,\omega) =
\hat{a}(\omega)\cos(\omega t) +
\hat{b}(\omega)\frac{\sin(\omega t)}{\omega} +
\int_0^t \frac{\sin(\omega(t-s))}{\omega}\hat{f}(s,\omega)\,ds.
$$
Multiplying both sides of this identity by $e^{i\omega x}$, integrating over $\omega$, and dividing by $2\pi$, we obtain from (2) and (3),
$$
u(t,x) =
\frac{a(x+t)+a(x-t)}{2} +
\frac{1}{2}\int_{x-t}^{x+t} b(y)\, dy +
\frac{1}{2}\int_0^t ds\int_{x-(t-s)}^{x+(t-s)} f(s,y)\,dy.
$$
This is the desired representation of the solution.
In particular, in the homogeneous case the solution of the initial value problem for the ordinary wave equation can be written
$$
u(t,x) =
\frac{a(x+t)+a(x-t)}{2} +
\frac{1}{2}\int_{x-t}^{x+t} b(y)\, dy
$$
Setting $B(x)=\int b(x)\,dx$, $f(x)=(a(x)+B(x))/2$, and $g(x)=(a(x)-B(x))/2$, this is rewritten as
$$
u(t,x) = f(x+t) + g(x-t)
$$
It is easy to check that this satisfies $u_{tt}=u_{xx}$.
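That $u(t,x) = f(x+t) + g(x-t)$ solves $u_{tt}=u_{xx}$ for arbitrary smooth $f,g$ can be confirmed symbolically (a SymPy sketch):

```python
import sympy as sp

t, x = sp.symbols('t x')
F, G = sp.Function('f'), sp.Function('g')
u = F(x + t) + G(x - t)
# both second derivatives equal f''(x+t) + g''(x-t), so the difference vanishes
residual = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2))
print(residual)  # 0
```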
# SKJ quotient analysis plots
```
ivunitsall = ['°C','°C','m','psu',
'umol/kg','kPa','m',
'log(mg/m$^3$)','m','cm']
ivnicenamesall = ['Sea surface temperature (SST)','T$_{100m}$',
'Thermocline depth (TCD)','Sea surface salinity (SSS)',
'O$_{2,100m}$','PO$_{2,100m}$','Tuna hypoxic depth (THD)',
'log(Chorophyll a)','Mixed layer depth (MLD)',
'Sea surface height anomaly (SSHA)']
ivshortnicenamesall = ['SST','T$_{100m}$','TCD','SSS',
'O$_{2,100m}$','PO$_{2,100m}$','THD',
'log(CHL)','MLD','SSHA']
dvnicenamesall = ['Skipjack CPUE','Skipjack CPUE','Skipjack CPUE','Skipjack CPUE',
'Skipjack CPUE','Skipjack CPUE','Skipjack CPUE',
'Skipjack CPUE','Skipjack CPUE','Skipjack CPUE']
ivall = [iv_sstskjcp, iv_temp100skjcp, iv_tcdskjcp, iv_sssskjcp,
iv_o2100skjcp, iv_po2100skjcp, iv_thdskjcp,
iv_logchlskjcp, iv_mldskjcp, iv_sshaskjcp]
binedgesall = [binedges_sstskjcp, binedges_temp100skjcp, binedges_tcdskjcp, binedges_sssskjcp,
binedges_o2100skjcp, binedges_po2100skjcp, binedges_thdskjcp,
binedges_logchlskjcp, binedges_mldskjcp, binedges_sshaskjcp]
bincentersall = [bincenters_sstskjcp, bincenters_temp100skjcp, bincenters_tcdskjcp, bincenters_sssskjcp,
bincenters_o2100skjcp, bincenters_po2100skjcp, bincenters_thdskjcp,
bincenters_logchlskjcp, bincenters_mldskjcp, bincenters_sshaskjcp]
ivcountsall = [ivcounts_sstskjcp, ivcounts_temp100skjcp, ivcounts_tcdskjcp, ivcounts_sssskjcp,
ivcounts_o2100skjcp, ivcounts_po2100skjcp, ivcounts_thdskjcp,
ivcounts_logchlskjcp, ivcounts_mldskjcp, ivcounts_sshaskjcp]
dvcountsall = [dvcounts_sstskjcp, dvcounts_temp100skjcp, dvcounts_tcdskjcp, dvcounts_sssskjcp,
dvcounts_o2100skjcp, dvcounts_po2100skjcp, dvcounts_thdskjcp,
dvcounts_logchlskjcp, dvcounts_mldskjcp, dvcounts_sshaskjcp]
dvquotall = [dvquot_sstskjcp, dvquot_temp100skjcp, dvquot_tcdskjcp, dvquot_sssskjcp,
dvquot_o2100skjcp, dvquot_po2100skjcp, dvquot_thdskjcp,
dvquot_logchlskjcp, dvquot_mldskjcp, dvquot_sshaskjcp]
qlimsreplaceTall = [qlimsreplaceT_sstskjcp, qlimsreplaceT_temp100skjcp, qlimsreplaceT_tcdskjcp, qlimsreplaceT_sssskjcp,
qlimsreplaceT_o2100skjcp, qlimsreplaceT_po2100skjcp, qlimsreplaceT_thdskjcp,
qlimsreplaceT_logchlskjcp, qlimsreplaceT_mldskjcp, qlimsreplaceT_sshaskjcp]
qlimsreplaceFall = [qlimsreplaceT_sstskjcp, qlimsreplaceT_temp100skjcp, qlimsreplaceT_tcdskjcp, qlimsreplaceT_sssskjcp,
qlimsreplaceT_o2100skjcp, qlimsreplaceT_po2100skjcp, qlimsreplaceT_thdskjcp,
qlimsreplaceT_logchlskjcp, qlimsreplaceT_mldskjcp, qlimsreplaceT_sshaskjcp]
plotbarorhist='bar'; plotlegend=1;
plotqlimsreplaceT=1; plotqlimsreplaceF=0
nrows=5; ncols=2
fig,axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16,18))
isp = 0 # subplot index
for yax in range(0,nrows):
for xax in range(0,ncols):
ivunits = ivunitsall[isp]
ivnicename = ivnicenamesall[isp]
ivshortnicename = ivshortnicenamesall[isp]
dvnicename = dvnicenamesall[isp]
iv = ivall[isp]
binedges = binedgesall[isp]; bincenters = bincentersall[isp]
ivcounts = ivcountsall[isp]; dvcounts = dvcountsall[isp]
dvquot = dvquotall[isp]
qlimsreplaceT = qlimsreplaceTall[isp]
qlimsreplaceF = qlimsreplaceFall[isp]
if ncols>1 and nrows>1:
ax = axes[yax][xax]
elif ncols==1 and nrows>1:
ax = axes[yax]
elif ncols>1 and nrows==1:
ax = axes[xax]
exec(open('helper_scripts/plot_qa.py').read())
ax.text(-0.03, 1.03, string.ascii_uppercase[isp],
transform=ax.transAxes, size=13, weight='bold')
isp = isp+1
fig.tight_layout()
fig.savefig(figpath + 'S8_fig.pdf',
bbox_inches='tight', pad_inches = 0, dpi = 300)
fig.savefig(figpath + 'S8_fig.png',
bbox_inches='tight', pad_inches = 0, dpi = 300)
```
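A side note on the axes-selection branching used above: `numpy.atleast_1d(...).ravel()` yields a flat array of `Axes` for any grid shape, so a single loop can replace the nested `yax`/`xax` loops and the `if`/`elif` chain. A minimal sketch (panel titles are placeholders):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

nrows, ncols = 5, 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 18))
# atleast_1d + ravel gives a flat Axes array for any grid shape
# (including nrows == ncols == 1), so no shape-dependent branching is needed.
for isp, ax in enumerate(np.atleast_1d(axes).ravel()):
    ax.set_title(f'panel {isp}')
print(len(np.atleast_1d(axes).ravel()))  # 10
```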
# BET quotient analysis plots
```
ivunitsall = ['°C','°C','m','psu',
'umol/kg','kPa','m',
'log(mg/m$^3$)','m','cm']
ivnicenamesall = ['Sea surface temperature (SST)','T$_{100m}$',
'Thermocline depth (TCD)','Sea surface salinity (SSS)',
'O$_{2,100m}$','PO$_{2,100m}$','Tuna hypoxic depth (THD)',
'log(Chorophyll a)','Mixed layer depth (MLD)',
'Sea surface height anomaly (SSHA)']
ivshortnicenamesall = ['SST','T$_{100m}$','TCD','SSS',
'O$_{2,100m}$','PO$_{2,100m}$','THD',
'log(CHL)','MLD','SSHA']
dvnicenamesall = ['Bigeye CPUE','Bigeye CPUE','Bigeye CPUE','Bigeye CPUE',
'Bigeye CPUE','Bigeye CPUE','Bigeye CPUE',
'Bigeye CPUE','Bigeye CPUE','Bigeye CPUE']
ivall = [iv_sstbetcp, iv_temp100betcp, iv_tcdbetcp, iv_sssbetcp,
iv_o2100betcp, iv_po2100betcp, iv_thdbetcp,
iv_logchlbetcp, iv_mldbetcp, iv_sshabetcp]
binedgesall = [binedges_sstbetcp, binedges_temp100betcp, binedges_tcdbetcp, binedges_sssbetcp,
binedges_o2100betcp, binedges_po2100betcp, binedges_thdbetcp,
binedges_logchlbetcp, binedges_mldbetcp, binedges_sshabetcp]
bincentersall = [bincenters_sstbetcp, bincenters_temp100betcp, bincenters_tcdbetcp, bincenters_sssbetcp,
bincenters_o2100betcp, bincenters_po2100betcp, bincenters_thdbetcp,
bincenters_logchlbetcp, bincenters_mldbetcp, bincenters_sshabetcp]
ivcountsall = [ivcounts_sstbetcp, ivcounts_temp100betcp, ivcounts_tcdbetcp, ivcounts_sssbetcp,
ivcounts_o2100betcp, ivcounts_po2100betcp, ivcounts_thdbetcp,
ivcounts_logchlbetcp, ivcounts_mldbetcp, ivcounts_sshabetcp]
dvcountsall = [dvcounts_sstbetcp, dvcounts_temp100betcp, dvcounts_tcdbetcp, dvcounts_sssbetcp,
dvcounts_o2100betcp, dvcounts_po2100betcp, dvcounts_thdbetcp,
dvcounts_logchlbetcp, dvcounts_mldbetcp, dvcounts_sshabetcp]
dvquotall = [dvquot_sstbetcp, dvquot_temp100betcp, dvquot_tcdbetcp, dvquot_sssbetcp,
dvquot_o2100betcp, dvquot_po2100betcp, dvquot_thdbetcp,
dvquot_logchlbetcp, dvquot_mldbetcp, dvquot_sshabetcp]
qlimsreplaceTall = [qlimsreplaceT_sstbetcp, qlimsreplaceT_temp100betcp, qlimsreplaceT_tcdbetcp, qlimsreplaceT_sssbetcp,
qlimsreplaceT_o2100betcp, qlimsreplaceT_po2100betcp, qlimsreplaceT_thdbetcp,
qlimsreplaceT_logchlbetcp, qlimsreplaceT_mldbetcp, qlimsreplaceT_sshabetcp]
qlimsreplaceFall = [qlimsreplaceT_sstbetcp, qlimsreplaceT_temp100betcp, qlimsreplaceT_tcdbetcp, qlimsreplaceT_sssbetcp,
qlimsreplaceT_o2100betcp, qlimsreplaceT_po2100betcp, qlimsreplaceT_thdbetcp,
qlimsreplaceT_logchlbetcp, qlimsreplaceT_mldbetcp, qlimsreplaceT_sshabetcp]
plotbarorhist='bar'; plotlegend=1;
plotqlimsreplaceT=1; plotqlimsreplaceF=0
nrows=5; ncols=2
fig,axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16,18))
isp = 0 # subplot index
for yax in range(0,nrows):
for xax in range(0,ncols):
ivunits = ivunitsall[isp]
ivnicename = ivnicenamesall[isp]
ivshortnicename = ivshortnicenamesall[isp]
dvnicename = dvnicenamesall[isp]
iv = ivall[isp]
binedges = binedgesall[isp]; bincenters = bincentersall[isp]
ivcounts = ivcountsall[isp]; dvcounts = dvcountsall[isp]
dvquot = dvquotall[isp]
qlimsreplaceT = qlimsreplaceTall[isp]
qlimsreplaceF = qlimsreplaceFall[isp]
if ncols>1 and nrows>1:
ax = axes[yax][xax]
elif ncols==1 and nrows>1:
ax = axes[yax]
elif ncols>1 and nrows==1:
ax = axes[xax]
exec(open('helper_scripts/plot_qa.py').read())
ax.text(-0.03, 1.03, string.ascii_uppercase[isp],
transform=ax.transAxes, size=13, weight='bold')
isp = isp+1
fig.tight_layout()
fig.savefig(figpath + 'S9_fig.pdf',
bbox_inches='tight', pad_inches = 0, dpi = 300)
fig.savefig(figpath + 'S9_fig.png',
bbox_inches='tight', pad_inches = 0, dpi = 300)
```
```
#Library Import and Missing value Handling
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer  # Imputer was removed from sklearn.preprocessing; SimpleImputer replaces it
df = pd.read_csv('cancer.csv')
df.replace('?', 0, inplace=True)
#print((df[[6]] == 0).sum())
df[[6]] = df[[6]].replace(0, np.NaN)
# Fill missing values with mean column values
values = df.values
imputer = SimpleImputer(strategy='mean')
transformed_values = imputer.fit_transform(values)
# Count the number of NaN values in each column
print(np.isnan(transformed_values).sum())
# Making a dataframe and saving it
preprocessed_df = pd.DataFrame(data = transformed_values)
preprocessed_df.to_csv('Preprocessed.csv')
# Load the preprocessed dataset
dataframe = pd.read_csv('Preprocessed.csv')
#dataframe
# Drop the ID i.e. 0th Column
dataframe.drop(dataframe.columns[[0]],axis = 1, inplace = True)
dataframe
# Normalize the Dataset
from sklearn.preprocessing import MinMaxScaler
array = dataframe.values
scaler = MinMaxScaler(feature_range=(0, 1))
normalized_array = scaler.fit_transform(array)
normalized_array.shape
```
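One caveat about the scaling step above: fitting `MinMaxScaler` on the full dataset before cross-validation lets the held-out folds influence the scaling parameters. A sketch of the leakage-free pattern, shown on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=9, random_state=7)  # stand-in data
# With the scaler inside the Pipeline it is refit on each training fold,
# so the held-out fold never influences the scaling parameters.
pipe = Pipeline([('scale', MinMaxScaler()),
                 ('cart', DecisionTreeClassifier(random_state=7))])
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
scores = cross_val_score(pipe, X, y, cv=kfold)
print(scores.mean())
```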
Ensembles in Action
```
# Bagged Decision Trees for Classification
from sklearn import model_selection
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
array = normalized_array
X = array[:,0:9]
Y = array[:,9]
seed = 7
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
cart = DecisionTreeClassifier()
num_trees = 100
model = BaggingClassifier(base_estimator=cart, n_estimators=num_trees, random_state=seed)
results = model_selection.cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
# AdaBoost Classification
from sklearn.ensemble import AdaBoostClassifier
seed = 7
num_trees = 30
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
model = AdaBoostClassifier(n_estimators=num_trees, random_state=seed)
results = model_selection.cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
# Stochastic Gradient Boosting Classification
from sklearn.ensemble import GradientBoostingClassifier
seed = 7
num_trees = 100
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
model = GradientBoostingClassifier(n_estimators=num_trees, random_state=seed)
results = model_selection.cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
# Voting Ensemble for Classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
# create the sub models
estimators = []
model1 = LogisticRegression()
estimators.append(('logistic', model1))
model2 = DecisionTreeClassifier()
estimators.append(('cart', model2))
model3 = SVC()
estimators.append(('svm', model3))
# create the ensemble model
ensemble = VotingClassifier(estimators)
results = model_selection.cross_val_score(ensemble, X, Y, cv=kfold)
print(results.mean())
```
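The ensemble above uses hard (majority) voting, the `VotingClassifier` default. Soft voting averages predicted class probabilities instead, which requires every estimator to expose `predict_proba` (for `SVC`, pass `probability=True`). A sketch on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=9, random_state=7)  # stand-in data
# 'soft' voting averages class probabilities; SVC needs probability=True
# so that it exposes predict_proba to the ensemble.
soft = VotingClassifier(estimators=[
    ('logistic', LogisticRegression(max_iter=1000)),
    ('cart', DecisionTreeClassifier(random_state=7)),
    ('svm', SVC(probability=True, random_state=7)),
], voting='soft')
score = cross_val_score(soft, X, y, cv=10).mean()
print(score)
```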
```
def lrelu(x, leak=0.2, name="lrelu", alt_relu_impl=False):
with tf.variable_scope(name):
if alt_relu_impl:
f1 = 0.5 * (1 + leak)
f2 = 0.5 * (1 - leak)
# lrelu = 1/2 * (1 + leak) * x + 1/2 * (1 - leak) * |x|
return f1 * x + f2 * abs(x)
else:
return tf.maximum(x, leak*x)
def instance_norm(x):
with tf.variable_scope("instance_norm"):
epsilon = 1e-5
mean, var = tf.nn.moments(x, [1, 2], keep_dims=True)
scale = tf.get_variable('scale',[x.get_shape()[-1]],
initializer=tf.truncated_normal_initializer(mean=1.0, stddev=0.02))
offset = tf.get_variable('offset',[x.get_shape()[-1]],initializer=tf.constant_initializer(0.0))
out = scale*tf.div(x-mean, tf.sqrt(var+epsilon)) + offset
return out
def general_conv2d(inputconv, o_d=64, f_h=7, f_w=7, s_h=1, s_w=1, stddev=0.02, padding="VALID", name="conv2d", do_norm=True, do_relu=True, relufactor=0):
with tf.variable_scope(name):
conv = tf.contrib.layers.conv2d(inputconv, o_d, f_w, s_w, padding, activation_fn=None, weights_initializer=tf.truncated_normal_initializer(stddev=stddev),biases_initializer=tf.constant_initializer(0.0))
if do_norm:
conv = instance_norm(conv)
# conv = tf.contrib.layers.batch_norm(conv, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True, scope="batch_norm")
if do_relu:
if(relufactor == 0):
conv = tf.nn.relu(conv,"relu")
else:
conv = lrelu(conv, relufactor, "lrelu")
return conv
def general_deconv2d(inputconv, outshape, o_d=64, f_h=7, f_w=7, s_h=1, s_w=1, stddev=0.02, padding="VALID", name="deconv2d", do_norm=True, do_relu=True, relufactor=0):
with tf.variable_scope(name):
conv = tf.contrib.layers.conv2d_transpose(inputconv, o_d, [f_h, f_w], [s_h, s_w], padding, activation_fn=None, weights_initializer=tf.truncated_normal_initializer(stddev=stddev),biases_initializer=tf.constant_initializer(0.0))
if do_norm:
conv = instance_norm(conv)
# conv = tf.contrib.layers.batch_norm(conv, decay=0.9, updates_collections=None, epsilon=1e-5, scale=True, scope="batch_norm")
if do_relu:
if(relufactor == 0):
conv = tf.nn.relu(conv,"relu")
else:
conv = lrelu(conv, relufactor, "lrelu")
return conv
def build_resnet_block(inputres, dim, name="resnet"):
with tf.variable_scope(name):
out_res = tf.pad(inputres, [[0, 0], [1, 1], [1, 1], [0, 0]], "REFLECT")
out_res = general_conv2d(out_res, dim, 3, 3, 1, 1, 0.02, "VALID","c1")
out_res = tf.pad(out_res, [[0, 0], [1, 1], [1, 1], [0, 0]], "REFLECT")
out_res = general_conv2d(out_res, dim, 3, 3, 1, 1, 0.02, "VALID","c2",do_relu=False)
return tf.nn.relu(out_res + inputres)
def generator(inputgen, name="generator"):
with tf.variable_scope(name):
f = 7
ks = 3
ngf = 64
pad_input = tf.pad(inputgen,[[0, 0], [ks, ks], [ks, ks], [0, 0]], "REFLECT")
o_c1 = general_conv2d(pad_input, ngf, f, f, 1, 1, 0.02,name="c1")
o_c2 = general_conv2d(o_c1, ngf*2, ks, ks, 2, 2, 0.02,"SAME","c2")
o_c3 = general_conv2d(o_c2, ngf*4, ks, ks, 2, 2, 0.02,"SAME","c3")
o_r1 = build_resnet_block(o_c3, ngf*4, "r1")
o_r2 = build_resnet_block(o_r1, ngf*4, "r2")
o_r3 = build_resnet_block(o_r2, ngf*4, "r3")
o_r4 = build_resnet_block(o_r3, ngf*4, "r4")
o_r5 = build_resnet_block(o_r4, ngf*4, "r5")
o_r6 = build_resnet_block(o_r5, ngf*4, "r6")
o_c4 = general_deconv2d(o_r6, [1,64,64,ngf*2], ngf*2, ks, ks, 2, 2, 0.02,"SAME","c4")
o_c5 = general_deconv2d(o_c4, [1,128,128,ngf], ngf, ks, ks, 2, 2, 0.02,"SAME","c5")
o_c5_pad = tf.pad(o_c5,[[0, 0], [ks, ks], [ks, ks], [0, 0]], "REFLECT")
o_c6 = general_conv2d(o_c5_pad, 3, f, f, 1, 1, 0.02,"VALID","c6",do_relu=False)
# Adding the tanh layer
out_gen = tf.nn.tanh(o_c6,"t1")
return out_gen
def discriminator(inputdisc, name="discriminator"):
with tf.variable_scope(name):
f = 4
ndf = 64
o_c1 = general_conv2d(inputdisc, ndf, f, f, 2, 2, 0.02, "SAME", "c1", do_norm=False, relufactor=0.2)
o_c2 = general_conv2d(o_c1, ndf*2, f, f, 2, 2, 0.02, "SAME", "c2", relufactor=0.2)
o_c3 = general_conv2d(o_c2, ndf*4, f, f, 2, 2, 0.02, "SAME", "c3", relufactor=0.2)
o_c4 = general_conv2d(o_c3, ndf*8, f, f, 1, 1, 0.02, "SAME", "c4",relufactor=0.2)
o_c5 = general_conv2d(o_c4, 1, f, f, 1, 1, 0.02, "SAME", "c5",do_norm=False,do_relu=False)
return o_c5
def gen_image_pool(num_gens, genimg, gen_pool):
    ''' Saves the generated image to the corresponding pool of images.
    At the start it keeps filling the pool until it is full; thereafter, with
    probability 1/2, a randomly chosen stored image is returned and replaced by the new one.'''
pool_size = 50
if(num_gens < pool_size):
gen_pool[num_gens] = genimg
return genimg
else :
p = random.random()
if p > 0.5:
random_id = random.randint(0,pool_size-1)
temp = gen_pool[random_id]
gen_pool[random_id] = genimg
return temp
else :
return genimg
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
train_a_files = tf.train.match_filenames_once("/Users/zjucx/Documents/Github/GAN/dataset/input/monet2photo/trainA/*.jpg")
train_b_files = tf.train.match_filenames_once("/Users/zjucx/Documents/Github/GAN/dataset/input/monet2photo/trainB/*.jpg")
train_a_queue = tf.train.string_input_producer(train_a_files)
train_b_queue = tf.train.string_input_producer(train_b_files)
image_reader = tf.WholeFileReader()
_, image_a = image_reader.read(train_a_queue)
_, image_b = image_reader.read(train_b_queue)
image_A = tf.subtract(tf.div(tf.image.resize_images(tf.image.decode_jpeg(image_a),[256,256]),127.5),1)
image_B = tf.subtract(tf.div(tf.image.resize_images(tf.image.decode_jpeg(image_b),[256,256]),127.5),1)
input_A = tf.placeholder(tf.float32, [1, 256, 256, 3], name="input_A")
input_B = tf.placeholder(tf.float32, [1, 256, 256, 3], name="input_B")
gen_A_pool = tf.placeholder(tf.float32, [None, 256, 256, 3], name="gen_A_pool")
gen_B_pool = tf.placeholder(tf.float32, [None, 256, 256, 3], name="gen_B_pool")
with tf.variable_scope("Model") as scope:
gena = generator(input_B, name="g_B")
genb = generator(input_A, name="g_A")
dica = discriminator(input_A, name="d_A")
dicb = discriminator(input_B, name="d_B")
scope.reuse_variables()
cyca = generator(genb, name="g_B")
cycb = generator(gena, name="g_A")
dic_gana = discriminator(gena, name="d_A")
dic_ganb = discriminator(genb, name="d_B")
scope.reuse_variables()
dic_gen_A_pool = discriminator(gen_A_pool, "d_A")
dic_gen_B_pool = discriminator(gen_B_pool, "d_B")
d_loss_a = (tf.reduce_mean(tf.squared_difference(dica, 1)) + tf.reduce_mean(tf.square(dic_gen_A_pool)))/2
d_loss_b = (tf.reduce_mean(tf.squared_difference(dicb, 1)) + tf.reduce_mean(tf.square(dic_gen_B_pool)))/2
cyc_loss = tf.reduce_mean(tf.abs(input_A-cyca)) + tf.reduce_mean(tf.abs(input_B-cycb))
g_loss_a = cyc_loss*10 + tf.reduce_mean(tf.squared_difference(dic_ganb, 1))
g_loss_b = cyc_loss*10 + tf.reduce_mean(tf.squared_difference(dic_gana, 1))
optimizer = tf.train.AdamOptimizer(0.0002, beta1=0.5)
model_vars = tf.trainable_variables()
d_A_vars = [var for var in model_vars if 'd_A' in var.name]
g_A_vars = [var for var in model_vars if 'g_A' in var.name]
d_B_vars = [var for var in model_vars if 'd_B' in var.name]
g_B_vars = [var for var in model_vars if 'g_B' in var.name]
d_A_trainer = optimizer.minimize(d_loss_a, var_list=d_A_vars)
d_B_trainer = optimizer.minimize(d_loss_b, var_list=d_B_vars)
g_A_trainer = optimizer.minimize(g_loss_a, var_list=g_A_vars)
g_B_trainer = optimizer.minimize(g_loss_b, var_list=g_B_vars)
#for var in model_vars: print(var.name)
with tf.control_dependencies([g_A_trainer, d_B_trainer, g_B_trainer, d_A_trainer]):
optimizers = tf.no_op(name='optimizers')
gena_pool = np.zeros((50, 1, 256, 256, 3))
genb_pool = np.zeros((50, 1, 256, 256, 3))
A_input = np.zeros((100, 1, 256, 256, 3))
B_input = np.zeros((100, 1, 256, 256, 3))
with tf.Session() as sess:
sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
print(tf.size(train_a_files))
# Loading images into the tensors
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
for idx in range(0, 100):
imga = sess.run(image_A).reshape((1, 256, 256, 3))
imgb = sess.run(image_B).reshape((1, 256, 256, 3))
A_input[idx] = imga
B_input[idx] = imgb
coord.request_stop()
coord.join(threads)
num_gen_inputs = 0
for epoch in range(0,101):
for idx in range(0, 100):
imggenb, imggena = sess.run([genb, gena],feed_dict={input_A:A_input[idx], input_B:B_input[idx]})
# train
_, a, b, c, d = (
sess.run(
[optimizers, g_loss_a, d_loss_b, g_loss_b, d_loss_a],
feed_dict={input_A:A_input[idx], input_B:B_input[idx],
gen_A_pool: gen_image_pool(num_gen_inputs, imggena, gena_pool),
gen_B_pool: gen_image_pool(num_gen_inputs, imggenb, genb_pool)}
)
)
num_gen_inputs += 1
# Optimizing the G_A network
#_, a, imggena, imggenb = sess.run([g_A_trainer, g_loss_a, genb, gena],feed_dict={input_A:A_input[idx], input_B:B_input[idx]})
#_, b = sess.run([d_B_trainer, d_loss_b],feed_dict={input_A:A_input[idx], input_B:B_input[idx]})
#_, c = sess.run([g_B_trainer, g_loss_b],feed_dict={input_A:A_input[idx], input_B:B_input[idx]})
#_, d = sess.run([d_A_trainer, d_loss_a],feed_dict={input_A:A_input[idx], input_B:B_input[idx]})
print("epoch:%d idx:%d g_A_trainer:%f d_B_trainer:%f g_B_trainer:%f d_A_trainer%f"%(epoch, idx, a, b, c, d))
plt.subplot(141); plt.imshow(((A_input[idx].reshape((256, 256, 3))+1)*127.5).astype(np.uint8))
plt.subplot(142); plt.imshow(((B_input[idx].reshape((256, 256, 3))+1)*127.5).astype(np.uint8))
plt.subplot(143); plt.imshow(((imggena.reshape((256, 256, 3))+1)*127.5).astype(np.uint8))
plt.subplot(144); plt.imshow(((imggenb.reshape((256, 256, 3))+1)*127.5).astype(np.uint8))
plt.show()
```
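The alternative leaky-ReLU branch in `lrelu` above relies on the identity $\tfrac12(1+\text{leak})\,x + \tfrac12(1-\text{leak})\,|x| = \max(x, \text{leak}\cdot x)$, which a quick NumPy check confirms:

```python
import numpy as np

leak = 0.2
x = np.linspace(-3.0, 3.0, 101)
f1, f2 = 0.5 * (1 + leak), 0.5 * (1 - leak)
alt = f1 * x + f2 * np.abs(x)     # the alt_relu_impl branch
ref = np.maximum(x, leak * x)     # the direct branch
print(np.allclose(alt, ref))  # True
```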
## Logistic Regression using Newton's Method and KFold
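For reference, the iteration implemented below is Newton's method on the log-likelihood $\ell(\theta)$. In the conventional sign convention (the implementation below appears to use the sigmoid of $-X\theta$, so it effectively learns the negated weights, with predictions unaffected), one Newton step reads

$$
\theta \;\leftarrow\; \theta + (X^\top S X)^{-1}\, X^\top\bigl(y - \sigma(X\theta)\bigr),
\qquad
S = \operatorname{diag}\bigl(\sigma(x_i^\top\theta)\,(1-\sigma(x_i^\top\theta))\bigr),
$$

where $X^\top S X$ is the negative Hessian of $\ell$, and the small $\epsilon I$ added in the code keeps it invertible.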
```
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
dataset = load_iris()
X = dataset.data
y = dataset.target
target_names = list(dataset.target_names)
print(target_names)
# Change to binary class
y = (y > 0).astype(int)
y
class LogReg:
"""
This implementation of Logistic Regression uses the Newton's Method for optimization.
"""
def __init__(self, num_iters=20, tolerance = 1e-10, epsilon = 10e-8, threshold=0.5, verbose=False):
self.num_iters = num_iters
self.tolerance = tolerance
self.epsilon = epsilon # subtracted to make hessian invertible
self.threshold = threshold
self.verbose = verbose
def add_ones(self, X):
return np.concatenate((np.ones((len(X),1)), X), axis = 1)
    def sigmoid(self, X, theta):
        # Note: exp(+X@theta) gives sigma(-X@theta); the fitted theta is the negative
        # of the conventional weights, but gradient, Hessian, and predictions stay consistent.
        return 1/(1 + np.exp(X@theta))
    def get_hessian_inv(self, X, theta):
        # Hessian of the log-likelihood: X^T diag(h) X with per-sample weights h_i = s_i * (1 - s_i)
        h = (self.sigmoid(X, theta) * (1 - self.sigmoid(X, theta))).ravel()
        hessian = (X.T * h) @ X
        hessian_inv = np.linalg.inv(hessian + self.epsilon*np.eye(X.T.shape[0]))
        return hessian_inv
def cost(self, X, y_true):
return np.mean((X@self.theta - y_true)**2)
def fit(self, X, y):
X = X.copy()
X = self.add_ones(X)
y = y.reshape(-1, 1)
self.theta = np.zeros((len(X[0]), 1))
current_iter = 1
norm = 1
while (norm >= self.tolerance and current_iter < self.num_iters):
old_theta = self.theta.copy()
grad = -X.T@(y - self.sigmoid(X, self.theta))
grad= grad.reshape(-1, 1)
hessian_inv = self.get_hessian_inv(X, self.theta)
self.theta = self.theta + hessian_inv@grad
if self.verbose:
print(f'cost for {current_iter} iteration : {self.cost(X, y)}')
norm = np.linalg.norm(old_theta - self.theta)
current_iter += 1
return self.theta
def evaluate(self, X, y):
"""
Returns mse loss for a dataset evaluated on the hypothesis
"""
X = self.add_ones(X)
return self.cost(X, y)
def predict(self, X):
prob = self.predict_proba(X)
return (prob >= self.threshold).astype(int)
def predict_proba(self, X):
"""
Returns probability of predictions.
"""
X = self.add_ones(X)
return self.sigmoid(X, self.theta)
logreg = LogReg()
logreg.fit(X, y)
predictions = logreg.predict(X)
predictions = predictions.squeeze()
np.sum(y == predictions) / len(y)
class KFoldCrossVal:
"""
Performs k-fold cross validation on each combination of hyperparameter set
Input
............
X : Features (m, n)
y : target (m, 1)
hyperparameter_Set : Dictionary of hyperparameters for k-fold
num_of_folds: Number of folds, k; default:10
verbose: Checks whether to print parameters on every iteration; Boolean; Default: False
"""
def __init__(self, hyperparameter_set, num_of_folds=10, verbose=True):
self.hyperparameter_set = hyperparameter_set
self.k = num_of_folds
self.verbose = verbose
import sys
if ('numpy' not in sys.modules) and ('np' not in dir()): #import numpy if not done
import numpy as np
#def get_model_no(self):
def shuffle_data(self, X, y):
shuffle_arr = np.random.permutation(len(X))
X_shuffled = X[shuffle_arr]
y_shuffled = y[shuffle_arr].reshape(-1, 1)
return X_shuffled, y_shuffled
def get_kfold_arr_index(self, subset_size, last_index):
array_indexes = [0]
for fold_no in range(self.k):
if fold_no != (self.k-1):
array_indexes.append((fold_no+1)*subset_size)
            elif fold_no == (self.k - 1):  # the last fold absorbs any examples
                array_indexes.append(last_index)  # left over when m is not divisible by k
return array_indexes
def get_split_data_fold(self, X, y, array_indexes, fold_no):
start = array_indexes[fold_no]
end = array_indexes[fold_no+1]
X_val = X[start: end]
y_val = y[start: end]
        X_train = np.delete(X, range(start, end), axis=0)  # drop the whole validation block, not just its endpoints
        y_train = np.delete(y, range(start, end)).reshape(-1,1)
return X_train, y_train, X_val, y_val
def get_hyperparameter_sets(self, hyperparameter_dict):
"""
Converts the hyperparameter dictionary into all possible combinations of hyperparameters
Return
..............
Array of hyperparameter set
"""
import itertools
parameter_keys = hyperparameter_dict.keys()
parameter_values = hyperparameter_dict.values()
parameter_array = []
for params in itertools.product(*parameter_values):
parameter_array.append(params)
parameter_sets = []
for parameter_values in parameter_array:
parameter_set = dict(zip(parameter_keys, parameter_values))
parameter_sets.append(parameter_set)
return parameter_sets
def evaluate(self, X, y):
models = self.get_hyperparameter_sets(self.hyperparameter_set)
#print(f'Performing k-fold for {len(models)} models and {len(models) * self.k} cross validations' )
m = len(X)
generalization_mse = []
X, y = self.shuffle_data(X, y)
subset_size = int(m/self.k)
        array_indexes = self.get_kfold_arr_index(subset_size, m)  # the last fold ends at index m
for hyperparameters in models:
model = LogReg(**hyperparameters)
fold_mse_arr = []
            for fold_no in range(self.k):  # use all k folds
X_train, y_train, X_val, y_val = self.get_split_data_fold(X, y, array_indexes, fold_no)
model.fit(X_train, y_train)
mse = model.evaluate(X_val, y_val)
fold_mse_arr.append(mse)
cv_mse = np.mean(fold_mse_arr)
if self.verbose:
print(f'{hyperparameters}, mse: {cv_mse}')
generalization_mse.append(cv_mse)
lowest_gen_mse_index = np.argmin(generalization_mse)
lowest_mse = generalization_mse[lowest_gen_mse_index]
best_model = models[lowest_gen_mse_index]
return lowest_mse, best_model
hyp = {
'num_iters': [1, 2, 3, 4],
'tolerance': [1e-3, 1e-5, 1e-7, 1e-10],
'epsilon': [10e-6, 10e-8, 10e-10],
'threshold': [0.45, 0.5]
}
kcv = KFoldCrossVal(hyp, 5, verbose=False)
kcv.evaluate(X, y)
```
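The hyperparameter grid expansion inside `get_hyperparameter_sets` can be reproduced standalone; a minimal sketch of the same `itertools.product` idiom:

```python
import itertools

def expand_grid(hyperparameter_dict):
    """Return every combination of hyperparameter values as a list of dicts."""
    keys = list(hyperparameter_dict.keys())
    return [dict(zip(keys, values))
            for values in itertools.product(*hyperparameter_dict.values())]

grid = expand_grid({'num_iters': [1, 2], 'threshold': [0.45, 0.5]})
# 2 values x 2 values -> 4 candidate models
```

Each dict in `grid` can be splatted directly into a model constructor, as done with `LogReg(**hyperparameters)` above.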
### Seed
```
import matplotlib.pyplot as plt
from time import time
import pandas as pd
import cv2 as cv
seed_value= 0
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
import random
random.seed(seed_value)
import numpy as np
np.random.seed(seed_value)
import tensorflow as tf
tf.set_random_seed(seed_value)
import keras
from keras.models import Sequential, Model
from keras.layers import Input, Flatten, Dense, Dropout, Convolution2D, Conv2D, MaxPooling2D, Lambda, GlobalMaxPooling2D, GlobalAveragePooling2D, BatchNormalization, Activation, AveragePooling2D, Concatenate
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.utils import np_utils
from keras import backend as K
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
```
### Read Dataset
```
def getOneHot(a):
b = np.zeros((a.size, a.max()+1))
b[np.arange(a.size),a] = 1
return b
def getData(path, folders):
path+='/'
labelNames = folders
labels = []
images = []
temp = []
for i in range(len(folders)):
randomImages = []
imgAddress = path + folders[i]
l = os.listdir(imgAddress)
for img in l:
frame = cv.imread(imgAddress+'/'+img)
temp.append((i,frame))
random.shuffle(temp)
for x in temp:
temp1, temp2 = (x)
labels.append(temp1)
images.append(temp2)
labels = getOneHot(np.asarray(labels))
images = np.asarray(images)
return images, labels
def preprocess(a):
b = []
for action_frame in a:
hsv = cv.cvtColor(action_frame, cv.COLOR_BGR2HSV)
lower_color = np.array([0, 10, 60])
upper_color = np.array([20, 150, 255])
mask = cv.inRange(hsv, lower_color, upper_color)
res = cv.bitwise_and(action_frame,action_frame, mask= mask)
b.append(res)
return b
labelNames = ['next', 'previous', 'stop']
x_train, y_train = getData('completeData', labelNames)
x_train = np.asarray(preprocess(x_train))
```
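The `getOneHot` helper above relies on NumPy fancy indexing: row `i` receives a 1 in column `a[i]`. A toy check of the same idiom:

```python
import numpy as np

def get_one_hot(a):
    """One-hot encode an integer label array via fancy indexing."""
    b = np.zeros((a.size, a.max() + 1))
    b[np.arange(a.size), a] = 1
    return b

labels = np.array([0, 2, 1])
one_hot = get_one_hot(labels)
# each row contains a single 1, in the column given by the label
```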
### Show images
```
def showImg(images, labels, labelNames):
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
        n = random.randint(0, len(images) - 1)  # randint is inclusive at both ends
plt.imshow(images[n])
plt.xlabel(labelNames[np.argmax(labels[n])])
plt.show()
print("Training data:")
showImg(x_train, y_train, labelNames)
```
### CNN architecture
```
def get_model(layer1Filter = 16, layer2Filter = 32, layer3Filter = 64, layer4Output = 400, optim = 'adam'):
x = Input((50, 50, 3))
model = BatchNormalization(axis = 3)(x)
model = Convolution2D(filters = layer1Filter, kernel_size = (3,3), activation='relu')(model)
model = MaxPooling2D()(model)
model = Dropout(0.5, seed = seed_value)(model)
model = BatchNormalization(axis = 3)(model)
model = Convolution2D(filters = layer2Filter, kernel_size = (3,3), activation='relu')(model)
model = MaxPooling2D()(model)
model = Dropout(0.5, seed = seed_value)(model)
model = BatchNormalization(axis = 3)(model)
model = Convolution2D(filters = layer3Filter, kernel_size = (3,3), activation='relu')(model)
model = MaxPooling2D()(model)
model = Dropout(0.5, seed = seed_value)(model)
model = Flatten()(model)
model = Dense(layer4Output , activation = 'relu')(model)
model = Dropout(0.5, seed = seed_value)(model)
model = Dense(3, activation = 'softmax')(model)
model = Model(input = x, output = model)
if optim == 'adam':
opt = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
else:
opt = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
    model.compile(opt, loss='categorical_crossentropy', metrics=['accuracy'])  # one-hot labels with a softmax output
return model
def get_callbacks(name_weights, patience_lr):
mcp_save = ModelCheckpoint(name_weights, save_best_only=True, monitor='loss', mode='min')
reduce_lr_loss = ReduceLROnPlateau(monitor='loss', factor=0.75, patience=patience_lr, verbose=1, epsilon=1e-4)
return [mcp_save, reduce_lr_loss]
def train_model(x_train, y_train, layer4Output, optim, epochs):
name_weights = "final_model_weights_complete.h5"
model = get_model(layer4Output = layer4Output, optim = optim)
callbacks = get_callbacks(name_weights = name_weights, patience_lr=2)
model.fit(x = x_train, y = y_train, epochs = epochs, callbacks=callbacks)
return model
model = train_model(x_train, y_train, 400, 'adam', 20)
```
### Prediction
```
cap = cv.VideoCapture(0)
while True:
ret, frame = cap.read()
if not ret:
print("Unable to capture video")
break
frame = cv.flip( frame, 1 )
frame2 = cv.resize(frame, (50, 50))
a = []
a.append(frame2)
a = np.array(preprocess(a))
prob = model.predict(a)
pred = np.argmax(prob)
if prob[0][pred] < 0.9:
s = "Background " + str(prob[0])
else:
s = labelNames[pred] + " " + str(prob[0][pred])
font = cv.FONT_HERSHEY_SIMPLEX
cv.putText(frame,s,(40,40),font,0.70,(0,0,255),2)
cv.imshow('frame', frame)
cv.imshow('frame2', a[0])
if cv.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv.destroyAllWindows()
```
### Using trained model
```
def load_trained_model(weights_path):
model= get_model()
model.load_weights(weights_path)
return model
model = load_trained_model("/home/ankit/Desktop/Acad/Sem/Sem5/COL780/Assn/4/weightsComplete.h5")
```
# BlackHoles@Home Tutorial: Creating a `BOINC` app using the `WrapperApp`
## Author: Leo Werneck
## This tutorial notebook demonstrates how to write a program that runs in the `BOINC` infrastructure using the `WrapperApp`
## <font color=red>**WARNING**:</font> this tutorial notebook is currently incompatible with Windows
## Introduction:
The [BlackHoles@Home](http://blackholesathome.net/) project allows users to volunteer CPU time so a large number of binary black hole simulations can be performed. The objective is to create a large catalog of [gravitational waveforms](https://en.wikipedia.org/wiki/Gravitational_wave), which can be used by observatories such as [LIGO](https://www.ligo.org), [VIRGO](https://www.virgo-gw.eu), and, in the future, [LISA](https://lisa.nasa.gov) to infer the source of a detected gravitational wave.
BlackHoles@Home is designed to run on the [BOINC](https://boinc.berkeley.edu) infrastructure (alongside [Einstein@Home](https://einsteinathome.org/) and [many other great projects](https://boinc.berkeley.edu/projects.php)), enabling anyone with a computer to contribute to the construction of the largest numerical relativity gravitational wave catalogs ever produced.
### Additional Reading Material:
* [BOINC's Wiki page](https://boinc.berkeley.edu/trac/wiki)
* [BOINC's WrapperApp Wiki page](https://boinc.berkeley.edu/trac/wiki/WrapperApp)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This tutorial explains how to use the `BOINC` wrapper application to run a simple program. The structure of this notebook is as follows:
1. [Step 1](#introduction): Introduction
1. [Step 2](#compiling_wrapper_app): Compiling the `BOINC` wrapper app for your platform
1. [Step 3](#using_wrapper_app): Using the `BOINC` wrapper app
1. [Step 3.a](#the_main_application): The main application
1. [Step 3.b](#compiling_the_main_application): Compiling the main application
1. [Step 3.c](#job_xml): The `job.xml` file
1. [Step 3.c.i](#simple_job_xml): A very simple `job.xml` file
1. [Step 3.c.ii](#job_xml_output_redirect_and_zip): Redirecting and zipping output files
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='introduction'></a>
# Step 1: Introduction \[Back to [top](#toc)\]
$$\label{introduction}$$
The [`WrapperApp`](https://boinc.berkeley.edu/trac/wiki/WrapperApp) is the simplest way of converting an existing program into a `BOINC` compatible application. The program that will be actually running is the `WrapperApp` and it will take care of:
* Interfacing with the `BOINC` libraries
* Running the original program
* Handling input/output files
Let us assume a simple `BOINC` application, which is made out of only one program, `bhah_test_app`. The directory of this application should then contain the following files:
* The application file `bhah_test_app` with the name format `appname_version_platform`.
* The `WrapperApp` file with the name format `WrapperAppname_version_platform`.
* The `WrapperApp` configuration file, which we will typically call `appname_version_job.xml`.
* The application version file, which is called `version.xml`.
We note that the application we will create in this tutorial notebook is analogous to the native `BOINC` application we create in [this tutorial notebook](Tutorial-BlackHolesAtHome-BOINC_applications-Native_applications.ipynb), and thus reading that tutorial notebook is also recommended.
<a id='compiling_wrapper_app'></a>
# Step 2: Compiling the `BOINC` wrapper app for your platform \[Back to [top](#toc)\]
$$\label{compiling_wrapper_app}$$
```
# Step 2: Compiling the BOINC wrapper app
# Step 2.a: Load needed Python modules
import os,sys
# Step 2.b: Add NRPy's root directory to the sys.path()
sys.path.append("..")
# Step 2.c: Load NRPy+'s command line helper module
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
# Step 2.d: Set the path to the BOINC source code
path_to_boinc = "~/bhah/boinc"
current_path = os.getcwd()
# Step 2.e: Check the platform and adjust the compilation command accordingly
if sys.platform == "linux":
wrapper_compile = "make"
elif sys.platform == "darwin":
wrapper_compile = "source BuildMacWrapper.sh"
else:
print("Unsupported platform: "+sys.platform)
sys.exit(1)
# Step 2.f: Compile the wrapper app
!cd $path_to_boinc/samples/wrapper && $wrapper_compile
# Step 2.g: Copy the wrapper app to the current working directory
!cp $path_to_boinc/samples/wrapper/wrapper $current_path
```
<a id='using_wrapper_app'></a>
# Step 3: Using the `BOINC` wrapper app \[Back to [top](#toc)\]
$$\label{using_wrapper_app}$$
Once we have everything set up, using the wrapper app is as simple as running
```bash
$: ./wrapper
```
The following steps will describe how to set the configuration files so that the wrapper app works as you expect it to.
<a id='the_main_application'></a>
## Step 3.a: The main application \[Back to [top](#toc)\]
$$\label{the_main_application}$$
Let us start by writing a simple application to run under the `BOINC` wrapper app. To showcase some additional configuration features of the wrapper, we will make our main application slightly more complicated than a simple "Hello World!" program.
This application takes any number of command line arguments and then prints them to `stdout`, `stderr`, and an output text file.
```
%%writefile simple_app.c
// Step 0: Load all the necessary C header files
#include <stdio.h>
#include <stdlib.h>
// Program description: this program is just a slightly
// more complicated version of the
// "Hello World!" program, where
// we will be taking some command
// line inputs and printing them to
// stdout, stderr, and an output file.
int main( int argc, char** argv ) {
// Step 1: Check correct usage
if( argc == 1 ) {
fprintf(stderr,"(ERROR) Correct usage is ./simple_app <command_line_arguments>\n");
exit(1);
}
// Step 2: Print all command line arguments to
// stdout, stderr, and an output file
//
// Step 2.a: Open the output file
// Step 2.a.i: Set the output file name
char filename[100] = "output_file.txt";
// Step 2.a.ii: Open the file
FILE* filept = fopen(filename,"w");
// Step 2.a.iii: Check everything is OK
if( !filept ) {
    fprintf(stderr,"(ERROR) Could not open file %s.\n",filename);
exit(1);
}
// Step 2.b: Print an information line
fprintf(stdout,"(INFO) Got the following command line arguments:");
fprintf(stderr,"(INFO) Got the following command line arguments:");
fprintf(filept,"(INFO) Got the following command line arguments:");
// Step 2.c: Loop over the command line arguments, printing
// them to stdout, stderr, and our output file
for(int i=1;i<argc;i++) {
fprintf(stdout," %s",argv[i]);
fprintf(stderr," %s",argv[i]);
fprintf(filept," %s",argv[i]);
}
// Step 2.d: Add a line break to the output
fprintf(stdout,"\n");
fprintf(stderr,"\n");
fprintf(filept,"\n");
  // Step 2.e: Close the output file
fclose(filept);
// All done!
return 0;
}
```
<a id='compiling_the_main_application'></a>
## Step 3.b: Compiling the main application \[Back to [top](#toc)\]
$$\label{compiling_the_main_application}$$
We now compile the main application using NRPy+'s `cmdline_helper` module.
```
cmd.C_compile("simple_app.c","simple_app")
```
<a id='job_xml'></a>
## Step 3.c: The `job.xml` file \[Back to [top](#toc)\]
$$\label{job_xml}$$
Let's see what happens if we try running the wrapper app:
```
!rm -f job.xml
cmd.Execute("wrapper")
```
As can be seen above, the `BOINC` wrapper application requests an input file, `job.xml`, to be present in the current working directory. We will now set up a `job.xml` file for the wrapper app in a way that it works correctly with our `simple_app`. A `job.xml` has the following syntax:
```xml
<job_desc>
<task>
...task_options...
</task>
...additional_options...
</job_desc>
```
All the configurations for the wrapper application are enclosed by the `job_desc` environment. To configure the wrapper to work with our specific application, we provide the `task_options`, while `additional_options` can be provided for additional configuration, as we will see.
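Such a file can also be generated programmatically with Python's standard library; a sketch (the task values are placeholders of our choosing):

```python
import xml.etree.ElementTree as ET

# build <job_desc><task>...</task></job_desc> element by element
job = ET.Element('job_desc')
task = ET.SubElement(job, 'task')
ET.SubElement(task, 'application').text = 'simple_app'
ET.SubElement(task, 'command_line').text = '1 2 3 4'
xml_str = ET.tostring(job, encoding='unicode')
# xml_str holds the job description as a string, ready to write to job.xml
```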
<a id='simple_job_xml'></a>
### Step 3.c.i: A very simple `job.xml` file \[Back to [top](#toc)\]
$$\label{simple_job_xml}$$
First, let us start with a very basic configuration: let us ask the wrapper to run our simple application using the command line arguments `1 2 3 4 testing hello world 4 3 2 1`. This is achieved with the following `job.xml` file:
```
%%writefile job.xml
<job_desc>
<task>
<application>simple_app</application>
<command_line>1 2 3 4 testing hello world 4 3 2 1</command_line>
</task>
</job_desc>
```
Let us now copy everything into a new, fresh directory and run our wrapper application.
```
!rm -rf wrapper_app_test
cmd.mkdir("wrapper_app_test")
!cp wrapper simple_app job.xml wrapper_app_test && cd wrapper_app_test && ./wrapper && ls
```
Note that after execution, we see the output `(INFO) Got the following command line arguments: 1 2 3 4 testing hello world 4 3 2 1` printed to `stdout`. If we examine the file `output_file.txt`, we will see the same output:
```
!cat output_file.txt
```
The `stderr.txt` file is automatically generated by the `BOINC` wrapper app and contains all the output that was sent to `stderr`. Alongside the expected output, it also holds some extra information generated by the wrapper app itself:
```
!cat stderr.txt
```
Additionally, we see that the wrapper application has created two more files: `boinc_finish_called` and `wrapper_checkpoint.txt`. For our purposes, the `wrapper_checkpoint.txt` file is irrelevant, so we will ignore it for now. The `boinc_finish_called` file contains the numerical value returned by our program, `simple_app`. As is usual in `C`, a return value of `0` indicates success, while a non-zero value indicates an error:
```
!cat boinc_finish_called
```
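The return-code convention just described can be checked programmatically; a small helper sketch (the function name is ours):

```python
import os

def run_succeeded(result_dir):
    """True if boinc_finish_called exists and records a zero exit status."""
    path = os.path.join(result_dir, 'boinc_finish_called')
    if not os.path.exists(path):
        return False  # the wrapper never finished
    with open(path) as f:
        return f.read().strip() == '0'
```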
<a id='job_xml_output_redirect_and_zip'></a>
### Step 3.c.ii: Redirecting and zipping output files \[Back to [top](#toc)\]
$$\label{job_xml_output_redirect_and_zip}$$
Now that we have seen the simplest possible case, let us look at something slightly more complicated. The following `job.xml` file asks the wrapper app to perform the following tasks:
1. Run the `simple_app` application with command line arguments `1 2 3 4 testing hello world 4 3 2 1`
1. Redirect all `stdout` output to the file `simple_app.out`
1. Redirect all `stderr` output to the file `simple_app.err`
1. Zip all the output files we have seen before into a single file: `output.zip`
```
%%writefile job.xml
<job_desc>
<task>
<application>simple_app</application>
<command_line>1 2 3 4 testing hello world 4 3 2 1</command_line>
<stdout_filename>simple_app.out</stdout_filename>
<stderr_filename>simple_app.err</stderr_filename>
</task>
<zip_output>
<zipfilename>output.zip</zipfilename>
<filename>simple_app.out</filename>
<filename>simple_app.err</filename>
<filename>output_file.txt</filename>
<filename>boinc_finish_called</filename>
<filename>wrapper_checkpoint.txt</filename>
</zip_output>
</job_desc>
```
Now let us see what happens when we run the wrapper app:
```
!rm -rf wrapper_app_test
cmd.mkdir("wrapper_app_test")
!cp wrapper simple_app job.xml wrapper_app_test && cd wrapper_app_test && ./wrapper && ls
```
Notice that we now have the output files `simple_app.out` and `simple_app.err`, as expected. The file `stderr.txt` is still present by default. We also have all our output files neatly collected into a single zip file, `output.zip`. Note that zipping the output is done not to reduce its overall size, but because it is easier to communicate the output files back to the `BOINC` server.
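On the receiving end, the zip archive can be unpacked with Python's standard library; a sketch assuming the file names used above:

```python
import zipfile

def unpack_output(zip_path, dest_dir):
    """Extract the wrapper's zipped result files and return their names."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()
```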
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-BlackHolesAtHome-BOINC_applications-Using_the_WrapperApp.pdf](Tutorial-BlackHolesAtHome-BOINC_applications-Using_the_WrapperApp.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!cp ../latex_nrpy_style.tplx .
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BlackHolesAtHome-BOINC_applications-Using_the_WrapperApp")
!rm -f latex_nrpy_style.tplx
```
### Training workflow for DLScore version 3 <br>
Changes: <br>
<ul>
<li>Censoring of the predictions added (the code calls it "sensoring"). I don't think it helps, just giving it a try!</li>
</ul>
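The censoring step referred to above simply clips predictions to the range of the observed targets; a standalone sketch:

```python
import numpy as np

def censor(true, pred):
    """Clip predictions to the [min, max] range of the true targets."""
    return np.clip(pred, np.min(true), np.max(true))

true = np.array([2.0, 5.0, 8.0])
pred = np.array([1.0, 6.0, 9.5])
clipped = censor(true, pred)
# -> array([2., 6., 8.])
```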
```
from __future__ import print_function
import numpy as np
import pandas as pd
import keras
from keras import metrics
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras import backend as K
from keras import regularizers
from keras import initializers
from keras.callbacks import EarlyStopping
from keras.utils.training_utils import multi_gpu_model
from keras.utils import plot_model
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
import random
import os.path
import itertools
import pickle
import json
from tqdm import *
import glob
import re
import csv
import multiprocessing as mp
random.seed(12345)
# Sensoring outliers
def sensoring(true, pred):
""" Sensor the predicted data to get rid of outliers"""
mn = np.min(true)
mx = np.max(true)
pred = np.minimum(pred, mx)
pred = np.maximum(pred, mn)
return pred
def split_data(x, y, pdb_ids, valid_size=0.1, test_size=0.1):
"""Converts the pandas dataframe into a matrix.
Splits the data into train, test and validations set.
Returns numpy arrays"""
# Load the indices of the non-zero columns.
# The same indices need to be used during the evaluation of test data
#with open("nonzero_column_indices.pickle", "rb") as f:
# non_zero_columns = pickle.load(f)
# Filter the zero columns out
#data = data[:, non_zero_columns]
pdb_ids = np.array(pdb_ids)
# Validation set
val_count = int(x.shape[0]*valid_size) # Number of examples to take
    val_ids = np.random.choice(x.shape[0], val_count, replace=False)  # select distinct rows at random
val_x = x[val_ids, :]
val_y = y[val_ids]
# Save the pdb ids of the validation set in disk
with open('val_pdb_ids.pickle', 'wb') as f:
pickle.dump(pdb_ids[val_ids], f)
# Remove validation set from data
mask = np.ones(x.shape[0], dtype=bool)
mask[val_ids] = False
x = x[mask, :]
y = y[mask]
pdb_ids = pdb_ids[mask]
# Test set
test_count = int(x.shape[0]*test_size)
    test_ids = np.random.choice(x.shape[0], test_count, replace=False)  # select distinct rows at random
test_x = x[test_ids, :]
test_y = y[test_ids]
# Save the pdb ids of the test set in disk
with open('test_pdb_ids.pickle', 'wb') as f:
pickle.dump(pdb_ids[test_ids], f)
# Remove test set from data
mask = np.ones(x.shape[0], dtype=bool)
mask[test_ids] = False
x = x[mask, :]
y = y[mask]
return x, y, val_x, val_y, test_x, test_y
def train_test_split(x, y, pdb_ids, test_size=0.1):
"""Converts the pandas dataframe into a matrix.
Splits the data into train, test and validations set.
Returns numpy arrays"""
# Load the indices of the non-zero columns.
# The same indices need to be used during the evaluation of test data
#with open("nonzero_column_indices.pickle", "rb") as f:
# non_zero_columns = pickle.load(f)
# Filter the zero columns out
#data = data[:, non_zero_columns]
pdb_ids = np.array(pdb_ids)
# Test set
test_count = int(x.shape[0]*test_size)
test_ids = np.random.choice(x.shape[0], test_count)
test_x = x[test_ids, :]
test_y = y[test_ids]
# Save the pdb ids of the test set in disk
with open('test_pdb_ids.pickle', 'wb') as f:
pickle.dump(pdb_ids[test_ids], f)
# Remove test set from data
mask = np.ones(x.shape[0], dtype=bool)
mask[test_ids] = False
x = x[mask, :]
y = y[mask]
return x, y, test_x, test_y
# Build the model
def get_model(x_size, hidden_layers, dr_rate=0.5, l2_lr=0.01):
model = Sequential()
model.add(Dense(hidden_layers[0], activation="relu", kernel_initializer='normal', input_shape=(x_size,)))
model.add(Dropout(0.2))
for i in range(1, len(hidden_layers)):
model.add(Dense(hidden_layers[i],
activation="relu",
kernel_initializer='normal',
kernel_regularizer=regularizers.l2(l2_lr),
bias_regularizer=regularizers.l2(l2_lr)))
model.add(Dropout(dr_rate))
model.add(Dense(1, activation="linear"))
return(model)
# def get_hidden_layers():
# x = [128, 256, 512, 768, 1024, 2048]
# hl = []
# for i in range(1, len(x)):
# hl.extend([p for p in itertools.product(x, repeat=i+1)])
# return hl
def run(output_dir, serial=0):
if serial:
print('Running in parallel')
else:
print('Running standalone')
# Create the output directory
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
# Preprocess the data
pdb_ids = []
x = []
y = []
with open('Data_new.csv', 'r') as f:
reader = csv.reader(f)
next(reader, None) # Skip the header
for row in reader:
pdb_ids.append(str(row[0]))
x.append([float(i) for i in row[1:349]])
y.append(float(row[349]))
x = np.array(x, dtype=np.float32)
y = np.array(y, dtype=np.float32)
# Normalize the data
mean = np.mean(x, axis=0)
std = np.std(x, axis=0) + 0.00001
x_n = (x - mean) / std
# Write things down
transform = {}
transform['std'] = std
transform['mean'] = mean
with open(output_dir + 'transform.pickle', 'wb') as f:
pickle.dump(transform, f)
# Read the 'best' hidden layers
with open("best_hidden_layers.pickle", "rb") as f:
hidden_layers = pickle.load(f)
# Determine if running all alone or in parts (if in parts, assuming 8)
if serial:
chunk_size = (len(hidden_layers)//8) + 1
hidden_layers = [hidden_layers[i*chunk_size:i*chunk_size+chunk_size] for i in range(8)][serial-1]
# Network parameters
epochs = 100
batch_size = 128
keras_callbacks = [EarlyStopping(monitor='val_mean_squared_error',
min_delta = 0,
patience=20,
verbose=0)
]
# Split the data into training and test set
train_x, train_y, test_x, test_y = train_test_split(x_n, y, pdb_ids, test_size=0.1)
#train_x, train_y, val_x, val_y, test_x, test_y = split_data(x_n, y, pdb_ids)
pbar = tqdm_notebook(total=len(hidden_layers),
desc='GPU: ' + str(serial))
for i in range(len(hidden_layers)):
if serial:
model_name = 'model_' + str(serial) + '_' + str(i)
else:
model_name = 'model_' + str(i)
# Set dynamic memory allocation in a specific gpu
config = K.tf.ConfigProto()
config.gpu_options.allow_growth = True
if serial:
config.gpu_options.visible_device_list = str(serial-1)
K.set_session(K.tf.Session(config=config))
# Build the model
model = get_model(train_x.shape[1], hidden_layers=hidden_layers[i])
# Save the model
with open(output_dir + model_name + ".json", "w") as json_file:
json_file.write(model.to_json())
if not serial:
# If not running with other instances then use 4 GPUs
model = multi_gpu_model(model, gpus=4)
model.compile(
loss='mean_squared_error',
optimizer=keras.optimizers.Adam(lr=0.001),
metrics=[metrics.mse])
#Save the initial weights
ini_weights = model.get_weights()
# 10 fold cross validation
kf = KFold(n_splits=10)
val_fold_score = 0.0
train_fold_score = 0.0
for _i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
# Reset the weights
model.set_weights(ini_weights)
# Train the model
train_info = model.fit(train_x[train_index], train_y[train_index],
batch_size=batch_size,
epochs=epochs,
shuffle=True,
verbose=0,
validation_split=0.1,
#validation_data=(train_x[valid_index], train_y[valid_index]),
callbacks=keras_callbacks)
current_val_predict = sensoring(train_y[valid_index], model.predict(train_x[valid_index])).flatten()
current_val_r2 = pearsonr(current_val_predict, train_y[valid_index])[0]
# If the current validation score is better then save it
if current_val_r2 > val_fold_score:
val_fold_score = current_val_r2
# Save the predicted values for both the training set
train_predict = sensoring(train_y[train_index], model.predict(train_x[train_index])).flatten()
train_fold_score = pearsonr(train_predict, train_y[train_index])[0]
# Save the training history
with open(output_dir + 'history_' + model_name + '_' + str(_i) + '.pickle', 'wb') as f:
pickle.dump(train_info.history, f)
# Save the results
dict_r = {}
dict_r['hidden_layers'] = hidden_layers[i]
dict_r['pearsonr_train'] = train_fold_score
dict_r['pearsonr_valid'] = val_fold_score
pred = sensoring(test_y, model.predict(test_x)).flatten()
dict_r['pearsonr_test'] = pearsonr(pred, test_y)[0]
#pred = sensoring(test_x, test_y, model.predict(test_x)).flatten()
# Write the result in a file
with open(output_dir + 'result_' + model_name + '.pickle', 'wb') as f:
pickle.dump(dict_r, f)
# Save the model weights
model.save_weights(output_dir + "weights_" + model_name + ".h5")
# Clear the session and the model from the memory
del model
K.clear_session()
pbar.update()
output_dir = 'dl_networks_04/'
jobs = [mp.Process(target=run, args=(output_dir, i)) for i in range(1, 9, 1)]
for j in jobs:
j.start()
```
### Result Analysis
```
# Get the network number and pearson coffs. of train, test and validation set in a list (in order)
model_files = sorted(glob.glob(output_dir + 'model_*'))
weight_files = sorted(glob.glob(output_dir + 'weights_*'))
result_files = sorted(glob.glob(output_dir + 'result_*'))
models = []
r2 = []
hidden_layers = []
weights = []
# net_layers = []
for mod, res, w in zip(model_files, result_files, weight_files):
models.append(mod)
weights.append(w)
with open(res, 'rb') as f:
r = pickle.load(f)
coeff = [r['pearsonr_train'], r['pearsonr_test'], r['pearsonr_valid']]
r2.append(coeff)
hidden_layers.append(r['hidden_layers'])
```
Sort the indices according to the validation result
```
r2_ar = np.array(r2)
sorted_indices = list((-r2_ar)[:, 2].argsort())
sorted_r2 = [r2[i] for i in sorted_indices]
sorted_r2[:5]
sorted_models = [models[i] for i in sorted_indices]
sorted_models[:5]
sorted_weights = [weights[i] for i in sorted_indices]
sorted_weights[:5]
```
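The descending ordering above works by negating the array before `argsort`; a toy illustration:

```python
import numpy as np

scores = np.array([0.71, 0.93, 0.85])
order = (-scores).argsort()  # indices that sort scores in descending order
ranked = scores[order]
# order -> [1, 2, 0]; ranked -> [0.93, 0.85, 0.71]
```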
Save the lists in the disk
```
with open(output_dir + 'sorted_models.pickle', 'wb') as f:
pickle.dump(sorted_models, f)
with open(output_dir + 'sorted_r2.pickle', 'wb') as f:
pickle.dump(sorted_r2, f)
with open(output_dir + 'sorted_weights.pickle', 'wb') as f:
pickle.dump(sorted_weights, f)
```
# Optimization of an X-Gate for a Transmon Qubit
```
# NBVAL_IGNORE_OUTPUT
%load_ext watermark
import sys
import os
import qutip
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
from scipy.fftpack import fft
from scipy.interpolate import interp1d
%watermark -v --iversions
```
$\newcommand{tr}[0]{\operatorname{tr}}
\newcommand{diag}[0]{\operatorname{diag}}
\newcommand{abs}[0]{\operatorname{abs}}
\newcommand{pop}[0]{\operatorname{pop}}
\newcommand{aux}[0]{\text{aux}}
\newcommand{opt}[0]{\text{opt}}
\newcommand{tgt}[0]{\text{tgt}}
\newcommand{init}[0]{\text{init}}
\newcommand{lab}[0]{\text{lab}}
\newcommand{rwa}[0]{\text{rwa}}
\newcommand{bra}[1]{\langle#1\vert}
\newcommand{ket}[1]{\vert#1\rangle}
\newcommand{Bra}[1]{\left\langle#1\right\vert}
\newcommand{Ket}[1]{\left\vert#1\right\rangle}
\newcommand{Braket}[2]{\left\langle #1\vphantom{#2} \mid #2\vphantom{#1}\right\rangle}
\newcommand{op}[1]{\hat{#1}}
\newcommand{Op}[1]{\hat{#1}}
\newcommand{dd}[0]{\,\text{d}}
\newcommand{Liouville}[0]{\mathcal{L}}
\newcommand{DynMap}[0]{\mathcal{E}}
\newcommand{identity}[0]{\mathbf{1}}
\newcommand{Norm}[1]{\lVert#1\rVert}
\newcommand{Abs}[1]{\left\vert#1\right\vert}
\newcommand{avg}[1]{\langle#1\rangle}
\newcommand{Avg}[1]{\left\langle#1\right\rangle}
\newcommand{AbsSq}[1]{\left\vert#1\right\vert^2}
\newcommand{Re}[0]{\operatorname{Re}}
\newcommand{Im}[0]{\operatorname{Im}}$
In the previous examples, we have only optimized for state-to-state
transitions, i.e., for a single objective. This example shows the optimization
of a simple quantum gate, which requires multiple objectives to be fulfilled
simultaneously (one for each state in the logical basis). We consider a
superconducting "transmon" qubit and implement a single-qubit Pauli-X gate.
**Note**: This notebook uses some parallelization features (`parallel_map`/`multiprocessing`). Unfortunately, on Windows (and macOS with Python >= 3.8), `multiprocessing` does not work correctly for functions defined in a Jupyter notebook (due to the [spawn method](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) being used on Windows, instead of Unix-`fork`, see also https://stackoverflow.com/questions/45719956). We can use the third-party [loky](https://loky.readthedocs.io/) library to fix this, but this significantly increases the overhead of multi-process parallelization. The use of parallelization here is for illustration only and makes no guarantee of actually improving the runtime of the optimization.
```
krotov.parallelization.set_parallelization(use_loky=True)
```
## The transmon Hamiltonian
The effective Hamiltonian of a single transmon depends on the capacitive energy
$E_C=e^2/2C$ and the Josephson energy $E_J$, an energy due to the Josephson
junction working as a nonlinear inductor periodic with the flux $\Phi$. In the
so-called transmon limit, the ratio between these two energies lies around
$E_J / E_C \approx 45$. The Hamiltonian for the transmon is
$$
\op{H}_{0} = 4 E_C (\hat{n}-n_g)^2 - E_J \cos(\hat{\Phi})
$$
where $\hat{n}$ is the number operator, which counts the relative number of Cooper pairs
capacitively stored in the junction, and $n_g$ is the effective offset charge
measured in Cooper pair charge units. The equation can be written in a truncated
charge basis defined by the number operator $\op{n} \ket{n} = n \ket{n}$ such
that
$$
\op{H}_{0}
= 4 E_C \sum_{j=-N} ^N (j-n_g)^2 \ket{j} \bra{j}
- \frac{E_J}{2} \sum_{j=-N}^{N-1}
(\ket{j+1} \bra{j} + \ket{j} \bra{j+1}).
$$
A voltage $V(t)$ applied to the circuit couples to the charge Hamiltonian
$\op{q}$, which in the (truncated) charge basis reads
$$
\op{H}_1 = \op{q} = \sum_{j=-N}^{N} (-2j) \ket{j} \bra{j}\,.
$$
The factor 2 is due to the charge carriers in a superconductor being Cooper
pairs. The total Hamiltonian is
$$
\op{H} = \op{H}_{0} + V(t) \cdot \op{H}_{1}
$$
We use a Gaussian voltage profile as the guess pulse:
```
tlist = np.linspace(0, 10, 1000)
def eps0(t, args):
T = tlist[-1]
return 4 * np.exp(-40.0 * (t / T - 0.5) ** 2)
def plot_pulse(pulse, tlist, xlimit=None):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, None) for t in tlist])
ax.plot(tlist, pulse)
ax.set_xlabel('time (ns)')
ax.set_ylabel('pulse amplitude')
if xlimit is not None:
ax.set_xlim(xlimit)
plt.show(fig)
plot_pulse(eps0, tlist)
```
The complete Hamiltonian is instantiated as
```
def transmon_hamiltonian(Ec=0.386, EjEc=45, nstates=8, ng=0.0, T=10.0):
"""Transmon Hamiltonian
Args:
Ec: capacitive energy
EjEc: ratio `Ej` / `Ec`
nstates: defines the maximum and minimum states for the basis. The
truncated basis will have a total of ``2*nstates + 1`` states
ng: offset charge
T: gate duration
"""
Ej = EjEc * Ec
n = np.arange(-nstates, nstates + 1)
up = np.diag(np.ones(2 * nstates), k=-1)
do = up.T
H0 = qutip.Qobj(np.diag(4 * Ec * (n - ng) ** 2) - Ej * (up + do) / 2.0)
H1 = qutip.Qobj(-2 * np.diag(n))
return [H0, [H1, eps0]]
H = transmon_hamiltonian()
```
We define the logical basis $\ket{0_l}$ and $\ket{1_l}$ (not to be confused with
the charge states $\ket{n=0}$ and $\ket{n=1}$) as the eigenstates of the drift
Hamiltonian $\op{H}_0$ with the lowest energy. The optimization goal is to find a
potential $V_{opt}(t)$ such that the time evolution up to a given final time $T$ implements an
X-gate on this logical basis.
```
def logical_basis(H):
H0 = H[0]
eigenvals, eigenvecs = scipy.linalg.eig(H0.full())
ndx = np.argsort(eigenvals.real)
E = eigenvals[ndx].real
V = eigenvecs[:, ndx]
psi0 = qutip.Qobj(V[:, 0])
psi1 = qutip.Qobj(V[:, 1])
w01 = E[1] - E[0] # Transition energy between states
print("Energy of qubit transition is %.3f" % w01)
return psi0, psi1
psi0, psi1 = logical_basis(H)
```
We also introduce the projectors $P_i = \ket{\psi _i}\bra{\psi _i}$ for the logical
states $\ket{\psi _i} \in \{\ket{0_l}, \ket{1_l}\}$
```
proj0 = qutip.ket2dm(psi0)
proj1 = qutip.ket2dm(psi1)
```
## Optimization target
The key insight for the realization of a quantum gate $\Op{O}$ is that
(by virtue of linearity)
$$\ket{\Psi(t=0)} \rightarrow \ket{\Psi(t=T)}
= \Op{U}(T, \epsilon(t))\ket{\Psi(0)}
= \Op{O} \ket{\Psi(0)}
$$
is fulfilled for an arbitrary state $\Ket{\Psi(t=0)}$ if and only if
$\Op{U}(T, \epsilon(t))\ket{k} = \Op{O} \ket{k}$ for every state $\ket{k}$ in the
logical basis, for the time evolution operator $\Op{U}(T, \epsilon(t))$ from
$t=0$ to $t=T$ under the same control $\epsilon(t)$.
The function `krotov.gate_objectives` automatically sets up the corresponding
objectives $\forall \ket{k}: \ket{k} \rightarrow \Op{O} \ket{k}$:
```
objectives = krotov.gate_objectives(
basis_states=[psi0, psi1], gate=qutip.operators.sigmax(), H=H
)
objectives
```
## Dynamics of the guess pulse
```
guess_dynamics = [
objectives[x].mesolve(tlist, e_ops=[proj0, proj1]) for x in [0, 1]
]
def plot_population(result):
'''Representation of the expected values for the initial states'''
fig, ax = plt.subplots()
ax.plot(result.times, result.expect[0], label='0')
ax.plot(result.times, result.expect[1], label='1')
ax.legend()
ax.set_xlabel('time')
ax.set_ylabel('population')
plt.show(fig)
plot_population(guess_dynamics[0])
plot_population(guess_dynamics[1])
```
## Optimization
We define the desired shape of the update and the factor $\lambda_a$, and then start the optimization
```
def S(t):
"""Scales the Krotov methods update of the pulse value at the time t"""
return krotov.shapes.flattop(
t, t_start=0.0, t_stop=10.0, t_rise=0.5, func='sinsq'
)
pulse_options = {H[1][1]: dict(lambda_a=1, update_shape=S)}
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.expm,
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(
J_T=krotov.functionals.J_T_re,
show_g_a_int_per_pulse=True,
unicode=False,
),
check_convergence=krotov.convergence.Or(
krotov.convergence.value_below(1e-3, name='J_T'),
krotov.convergence.delta_below(1e-5),
krotov.convergence.check_monotonic_error,
),
iter_stop=5,
parallel_map=(
krotov.parallelization.parallel_map,
krotov.parallelization.parallel_map,
krotov.parallelization.parallel_map_fw_prop_step,
),
)
```
(this takes a while ...)
```
dumpfile = "./transmonxgate_opt_result.dump"
if os.path.isfile(dumpfile):
opt_result = krotov.result.Result.load(dumpfile, objectives)
else:
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.expm,
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(
J_T=krotov.functionals.J_T_re,
show_g_a_int_per_pulse=True,
unicode=False,
),
check_convergence=krotov.convergence.Or(
krotov.convergence.value_below(1e-3, name='J_T'),
krotov.convergence.delta_below(1e-5),
krotov.convergence.check_monotonic_error,
),
iter_stop=1000,
parallel_map=(
qutip.parallel_map,
qutip.parallel_map,
krotov.parallelization.parallel_map_fw_prop_step,
),
continue_from=opt_result
)
opt_result.dump(dumpfile)
opt_result
def plot_convergence(result):
fig, ax = plt.subplots()
ax.semilogy(result.iters, np.array(result.info_vals))
ax.set_xlabel('OCT iteration')
ax.set_ylabel('error')
plt.show(fig)
plot_convergence(opt_result)
```
## Optimized pulse and dynamics
We obtain the following optimized pulse:
```
plot_pulse(opt_result.optimized_controls[0], tlist)
```
The oscillations in the control shape indicate non-negligible spectral broadening:
```
def plot_spectrum(pulse, tlist, xlim=None):
if callable(pulse):
pulse = np.array([pulse(t, None) for t in tlist])
dt = tlist[1] - tlist[0]
n = len(tlist)
w = np.fft.fftfreq(n, d=dt/(2.0*np.pi))
# the factor 2π in the normalization means that
# the spectrum is in units of angular frequency,
# which is normally what we want
spectrum = np.fft.fft(pulse) / n
# normalizing the spectrum with n means that
# the y-axis is independent of dt
# we assume a real-valued pulse, so we throw away
# the half of the spectrum with negative frequencies
w = w[range(int(n / 2))]
spectrum = np.abs(spectrum[range(int(n / 2))])
fig, ax = plt.subplots()
ax.plot(w, spectrum, '-o')
ax.set_xlabel(r'$\omega$')
ax.set_ylabel('amplitude (arb. units)')
if xlim is not None:
ax.set_xlim(*xlim)
plt.show(fig)
plot_spectrum(opt_result.optimized_controls[0], tlist, xlim=(0, 40))
```
Lastly, we verify that the pulse produces the desired dynamics $\ket{0_l} \rightarrow \ket{1_l}$ and $\ket{1_l} \rightarrow \ket{0_l}$:
```
opt_dynamics = [
opt_result.optimized_objectives[x].mesolve(tlist, e_ops=[proj0, proj1])
for x in [0, 1]
]
plot_population(opt_dynamics[0])
plot_population(opt_dynamics[1])
```
Since the optimized pulse shows some oscillations (cf. the spectrum above),
it is a good idea to check for any discretization error. To this end, we also
propagate the optimization result using the same propagator that
was used in the optimization (instead of `qutip.mesolve`). The main difference
between the two propagations is that `mesolve` assumes piecewise constant
pulses that switch between two points in `tlist`, whereas `propagate` assumes
that pulses are constant on the intervals of `tlist`, and thus switches *on*
the points in `tlist`.
```
opt_dynamics2 = [
opt_result.optimized_objectives[x].propagate(
tlist, e_ops=[proj0, proj1], propagator=krotov.propagators.expm
)
for x in [0, 1]
]
```
The difference between the two propagations gives an indication of the error
due to the choice of the piecewise constant time discretization. If this error
were unacceptably large, we would need a smaller time step.
```
# NBVAL_IGNORE_OUTPUT
# Note: the particular error value may depend on the version of QuTiP
print(
"Time discretization error = %.1e" %
abs(opt_dynamics2[0].expect[1][-1] - opt_dynamics[0].expect[1][-1])
)
```
# Looking at data - Distribution
$Def$
**Statistics**: The science of learning from Data.
**Cases**: the objects described by a set of data.
> **e.g.**
>Customers, companies, experimental subjects, or other objects.
**Variable**: a special characteristic of a case.
**Label**: a special variable used in some data sets to distinguish between cases.
$\odot$
Different cases can have different values of a variable.$\square$
>**e.g.**
>
| Number | Name | Album | Genre |
|:------:|:----:|:---------:|:-------:|
| 1 | ABC | Jackson 5 | Pop |
| 2 | XYZ | Jackson 6 | Country |
| 3 | 123 | Jackson 5 | Pop |
>
>1. Label: {1, 2, 3, ...}
>2. Variables: "Number", "Name", "Album", "Genre"
>3. Cases, totally 3 cases, each is a row
## Variables
We construct a set of data by first deciding which *cases* or units we want to study. For each case, we record information about characteristics that we call *variables*.
Two kinds:
1. Categorical Variable: Places individual into one of several groups or categories
2. Quantitative Variable: Takes numerical values for which arithmetic operations make sense
$\dagger$
NOT ALL numeric variables are quantitative; the arithmetic **operations must make sense**.
## Displaying
The nature of the variable decides the graphical tools.
1. Categorical Variable
- Bar graphs
- Pie charts
2. Quantitative Variable
- Histograms
- Stemplots
- Time plots
>| ID | Name | Grade | totalPoints |
|:---:|:----:|:-----:|:-----------:|
| 101 | ABC | A | 956 |
| 102 | XYZ | F | 125 |
| 103 | 123 | C | 693 |
>
>Here ID, Name, Grade are categorical variables, and only totalPoints is the quantitative variable.
**Distribution**: which tells us the values that a variable takes and how often it takes each value.
### Categorical Variable
| Pie Chart | Bar Graph |
|:---------------------:|:---------------:|
|  |  |
|PCs show the distribution of a categorical variable as a “pie” whose slices are sized by the counts or percents for the categories.|BGs represent categories as bars whose heights show the category counts or percents.|
| proportion of the whole | counts or percents |
### Stemplots
Separate each observation into a stem and a leaf that are then plotted to display the distribution while maintaining the original values of the variable.
Data like these: 16, 43, 38, 48, 42, 23, 36, 35, 37, 34, 25, 28, 26, 43, 51, 33, 40, 35, 41, 42, can be displayed in this way:

1. Write the *stems*.
2. Go through the data and write each *leaf* on the proper stem.
3. Rearrange the leaves
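The three steps above can be sketched in Python (a minimal sketch, using the data listed above):

```
# Build a stemplot: split each value into a tens-digit stem
# and a ones-digit leaf, then sort the leaves on each stem.
data = [16, 43, 38, 48, 42, 23, 36, 35, 37, 34,
        25, 28, 26, 43, 51, 33, 40, 35, 41, 42]

def stemplot(values):
    """Return a dict mapping each stem to its sorted list of leaves."""
    stems = {}
    for v in sorted(values):
        stems.setdefault(v // 10, []).append(v % 10)
    return stems

for stem, leaves in sorted(stemplot(data).items()):
    print(stem, "|", " ".join(str(leaf) for leaf in leaves))
```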
### Histograms
Show the distribution of a quantitative variable by bars. The height of a bar represents the number of individuals whose values fall within the corresponding class.
$\odot$
For large datasets and/or quantitative variables that take many values.$\square$
>**e.g.**
>
| Figures | Plot |
|:-------------------:|:----------:|
|||
> Here $-<$ means greater than or equal to the left endpoint but strictly less than the right endpoint; briefly, from the left endpoint up to (but not including) the right.
### Time plots
The behavior over time.

### Examine Distributions
**Outlier**: an individual that falls outside the overall pattern.
And about the symmetricity:
| **Symmetric** | **Left-skewed** | **Right-skewed** |
|:-------------------------:|:------------------------:|:-------------------------:|
|  |||
|Bell curve: the right and left sides of the graph are approximately mirror images of each other.|A curve with a longer ***left*** tail: the left side of the graph (containing the half of the observations with smaller values) is much longer than the right side.|A curve with a longer ***right*** tail: the right side of the graph (containing the half of the observations with larger values) is much longer than the left side.|
## Describing Distributions with Numbers
### Measure for the center
**Mean**: Average
$$\bar{x} = \frac{1} {n} \sum x_i = \frac{\textrm{sum of observations}} {n}$$
**Median**: The midpoint of a distribution, the number such that half of the observations are smaller and the other half are larger.
$\odot$
The mean cannot resist the influence of extreme observations (outliers); it is not a resistant measure of center, while the median is unaffected by them.$\square$
1. If n is *odd*, the median M is the center observation in the ordered list.
2. If n is *even*, the median M is the average of the two center observations in the ordered list.
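As a quick illustration (with made-up numbers), the two center measures can be computed as follows; note how the outlier 100 drags the mean but not the median:

```
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s, n = sorted(xs), len(xs)
    if n % 2 == 1:
        return s[n // 2]                    # odd n: the center observation
    return (s[n // 2 - 1] + s[n // 2]) / 2  # even n: average the two center ones

data = [1, 2, 3, 100]      # made-up data with one extreme value
print(mean(data))          # 26.5: dragged up by the outlier
print(median(data))        # 2.5: resistant to it
```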
$Compare$
| **Symmetric** | **Left-skewed** | **Right-skewed** |
|:-------------------------:|:------------------------:|:-------------------------:|
|  |||
|$\mathrm{Mean} = \mathrm{Median}$|$\mathrm{Mean} < \mathrm{Median}$|$\mathrm{Mean} > \mathrm{Median}$|
### Measure for the spread, the quartiles
1. First quartile $Q_1$: the median of the observations located to the left of the median in the ordered list.
2. Third quartile $Q_3$: the median of the observations located to the right of the median in the ordered list.
3. **Interquartile range** (IQR): $IQR = Q_3 - Q_1$
### Five-Number Summary
The five-number summary of a distribution consists of the smallest observation, the first quartile, the median, the third quartile, and the largest observation, written in order from smallest to largest.
1. $\min$
2. $Q_1$
3. $M=Q_2$
4. $Q_3$
5. $\max$
### 1.5 × IQR Rule
$Conclusion$
We call an observation an outlier if it falls more than 1.5 × IQR above the third quartile or below the first quartile.
> **e.g.**
> For the New York travel time data:
>
| Stems | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|:------:|:-:|:-----------:|:-------:|:---:|:-----:|:-:|:-----:|:-:|:-:|
| Leaves | 5 | 0,0,5,5,5,5 | 0,0,0,5 | 0,0 | 0,0,5 | | 0,0,5 | | 5 |
>
>$$
\begin{align*}
Q_1 &= 15 \\
Q_3 &= 42.5\\
IQR &= 27.5\\
1.5 \times IQR &= 41.25 \\
\end{align*}
$$
>
>So the data that is not an outlier must lie in range:
>
$$\left[Q_1 - 1.5 \times IQR,\; Q_3 + 1.5 \times IQR \right] = [-26.25, 83.75]$$
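The same computation can be reproduced in Python, using the travel times read off the stemplot above, with quartiles computed as medians of the lower and upper halves:

```
# New York travel times from the stemplot above (n = 20, already sorted).
times = [5, 10, 10, 15, 15, 15, 15, 20, 20, 20,
         25, 30, 30, 40, 40, 45, 60, 60, 65, 85]

def median(xs):
    s, n = sorted(xs), len(xs)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

q1 = median(times[:10])            # median of the lower half
q3 = median(times[10:])            # median of the upper half
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(q1, q3, iqr)                 # 15.0 42.5 27.5
print(lo, hi)                      # -26.25 83.75
print([t for t in times if t < lo or t > hi])   # [85]
```

Only the longest time, 85 minutes, falls outside $[Q_1 - 1.5 \times IQR, Q_3 + 1.5 \times IQR]$ and is flagged as an outlier.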
### Measure for the spread, the Standard Deviation $S_x$, or $\sigma$
**standard deviation**: measures the average distance of the observations from their mean.
1. $S_x$ measures spread (deviation) about the mean and should be used only when the mean is the measure of center.
2. $S_x = 0$ only when all observations have the same value and there is no spread. Otherwise, $S_x > 0$.
3. $S_x$ is influenced by outliers.
4. $S_x$ has the same units of measurement as the original observations.
$\odot$
A measure of spread looks at how far each observation is from the mean.$\square$
$\dagger$
A deviation is just one observation minus the mean: no square, no absolute value.
$$\boxed{ \mathrm{Variance} = S_{x}^{2} = \frac {1} {\boxed{\mathbf{n-1}}} \sum_{i=1}^{n} \left( x_i - \bar{x} \right) ^{2} }$$
And the *standard deviation* is just the square root of the *variance*.
$\odot$
The sum of the *deviations* equals $0$.$\square$
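A minimal sketch of the variance formula (with made-up numbers); the deviations always sum to zero, which is why they are squared:

```
# Sample variance with the n - 1 divisor.
def variance(xs):
    n = len(xs)
    xbar = sum(xs) / n
    dev = [x - xbar for x in xs]      # deviations from the mean
    print(sum(dev))                   # always 0 (up to rounding)
    return sum(d * d for d in dev) / (n - 1)

data = [3, 4, 5, 8]                   # made-up data, mean 5
print(variance(data))                 # 14/3, so the sd is sqrt(14/3)
```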
$Compare$
When shall we use
1. Mean and standard deviation
2. Median and interquartile range
$Answer$
1. The median and IQR are usually better than the mean and standard deviation for describing a **skewed distribution** or **a distribution with outliers**.
2. Use mean and standard deviation ONLY for reasonably symmetric distributions that ***DO NOT*** have outliers.
### Changing the Unit of Measurement
For example, we can apply a linear transformation: $X_{\textrm{new}} = a + b \cdot X$.
1. Multiplying each observation by a positive number $b$ multiplies both measures of center (mean, median) and spread ($IQR$, $s$) by $b$.
2. Adding the same number $a$ (positive or negative) to each observation adds $a$ to measures of center and to quartiles, but it does not change measures of spread ($IQR$, $s$).
$\dagger$
And variance will be multiplied by $b^2$.
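These rules are easy to verify numerically (a sketch with arbitrary data and $a = 10$, $b = 2$):

```
import statistics as st

x = [2, 4, 4, 4, 5, 5, 7, 9]          # arbitrary data, mean 5
a, b = 10, 2
x_new = [a + b * xi for xi in x]      # x_new = 10 + 2x

print(st.mean(x), st.mean(x_new))     # center: 5 -> 10 + 2*5 = 20
print(st.stdev(x_new) / st.stdev(x))  # spread scales by b = 2; a drops out
print(st.variance(x_new) / st.variance(x))  # variance scales by b**2 = 4
```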
## Density Curves and Normal Distributions
### Density curve
1. Always on or above the horizontal axis
2. With an area of exactly 1 underneath it
A density curve describes the overall pattern of a distribution. The area under the curve and above any range of values on the horizontal axis is the proportion of all observations that fall in that range.
And for their mean and median:
1. The median of a density curve is the *equal-areas point*―the point that divides the area under the curve in half.
2. The mean of a density curve is the *balance point*, that is, the point at which the curve would balance if made of solid material.
3. The median and the mean are the same for a symmetric density curve. They both lie at the center of the curve. The mean of a skewed curve is pulled away from the median in the direction of the long tail.
>
The mean and standard deviation of the actual distribution represented by the density curve are denoted by $\mu$ and $\sigma$.
### Normal Distribution
Given the mean $\mu$ and the standard deviation $\sigma$, we have a Normal curve that is symmetric, single-peaked, and bell-shaped.
1. The mean of a Normal distribution is the center of the symmetric Normal curve.
2. The standard deviation is the distance from the center to the change-of-curvature points on either side.
3. We abbreviate the Normal distribution with mean $\mu$ and standard deviation $\sigma$ as $N(\mu,\sigma)$.
$Conclusion$
1. Approximately 68% of the observations fall within $\sigma$ of $\mu$.
2. Approximately 95% of the observations fall within $2\sigma$ of $\mu$.
3. Approximately 99.7% of the observations fall within $3\sigma$ of $\mu$.
### Standardizing Observations
$Conclusion$
If a variable $X$ has a Normal distribution with mean $\mu$ and standard deviation $\sigma$, then the standardized value of $X$, or its **z-score**, is $Z = (X-\mu)/\sigma$, which follows the standard Normal distribution, $N(0,1)$.
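For instance (hypothetical numbers): an exam score of 82 from a Normal distribution with $\mu = 70$ and $\sigma = 8$ standardizes to

```
mu, sigma = 70, 8      # hypothetical population mean and sd
x = 82
z = (x - mu) / sigma
print(z)               # 1.5: the score lies 1.5 sd above the mean
```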
### Normal Quantile Plots
One way to assess if a distribution is indeed approximately Normal.
The z-scores of the corresponding Normal quantiles are plotted on the x-axis against the original data, which are plotted on the y-axis of the Normal quantile plot.
1. If the distribution is indeed Normal, the plot will show a straight line, indicating a good match between the data and a Normal distribution.
2. Systematic deviations from a straight line indicate a non-Normal distribution. Outliers appear as points that are far away from the overall pattern of the plot.
>**e.g.**
>
>| Good fit to a straight line | Curved pattern |
>|:---------------------------:|:--------------:|
>|  |  |
>| The distribution is close to Normal.|The data are **right** skewed.|
# Looking at Data - Relationship
## Relationships
$Def$
Two variables measured on the same cases are **associated** if knowing the value of one of the variables tells you something that you would not otherwise know about the value of the other variable.
1. Response variable: measures an outcome of a study.
2. Explanatory variable: explains or causes changes in the response variable.
## Scatterplots
The most useful graph for displaying the relationship between two quantitative variables.
It shows the relationship between two quantitative variables measured on the same individuals. The values of one variable appear on the horizontal axis, and the values of the other variable appear on the vertical axis. **Each individual corresponds to one point on the graph**.
$\odot$
1. Plot the explanatory variable on the X axis, and the response variable on the Y axis.
2. Label and Scale the axes.
3. Plot each point individually.$\square$
### Interpreting
Focus on the *direction*, *form*, and *strength* of the relationship, and point out the *outliers* if exist.
|Positive relationship| Negative relationship|
|:------------------:|:---------------------:|
|above-average values of one tend to accompany above-average values<br/> of the other, and below-average values also tend to occur together|above-average values of one tend to accompany below-average values of the other|
|||
|Slope going up|Slope going down|
>**e.g.**
>
>1. There is a moderately **strong** (*relationship strength*), **positive** (*relationship direction*), **linear** (*relationship form*) relationship between body weight and backpack weight.
>2. It appears that lighter hikers are carrying lighter backpacks.
### Adding categorical variables
Using a different dot to represent another category. And make a legend of them.
### Nonlinear relationship

## Correlation
$r$, correlation: measures the strength of the **linear relationship** between two **quantitative** variables.
$$r = \frac{1} {n-1} \sum\left( \frac{x_i - \bar{x}} {s_x} \cdot \frac{y_i - \bar{y}} {s_y} \right)$$
$Property$
1. $r$ is always a number between $–1$ and $1$.
2. $r > 0$ indicates a positive association.
3. $r < 0$ indicates a negative association.
4. Values of $r$ near $0$ indicate a very weak linear relationship.
5. The strength of the linear relationship increases as $r$ moves away from $0$ toward $–1$ or $1$.
6. The extreme values $r = –1$ and $r = 1$ occur only in the case of a perfect linear relationship.
7. Correlation makes no distinction between explanatory and response variables.
8. $r$ has **no units** and does not change when we change the units of measurement of $x$, $y$, or both.
$\odot$
1. Requires both variables to be quantitative.
2. Can't describe curved relationships between variables.
3. Not resistant, and can be strongly affected by a few outlying observations.
4. Not a complete summary of two-variable data.$\square$
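The definition of $r$ translates directly into code; a sketch with made-up data, checked against NumPy's built-in:

```
import numpy as np

def corr(x, y):
    """r from the definition: average product of standardized values."""
    n = len(x)
    zx = (x - x.mean()) / x.std(ddof=1)   # standardized x values
    zy = (y - y.mean()) / y.std(ddof=1)   # standardized y values
    return (zx * zy).sum() / (n - 1)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # made-up data
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])
print(corr(x, y))                 # a moderately strong positive r (~0.775)
print(np.corrcoef(x, y)[0, 1])    # NumPy agrees
```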
## Least-Squares Regression
### Regression Lines
$Def$
A straight line that describes how a response variable $y$ changes as an explanatory variable $x$ changes.
**Regression equation**: $\hat{y} = b_0 + b_1 x$
Here $x$ is the value of the explanatory variable, $\hat{y}$ is the *predicted value* of the response variable for a *given* $x$, $b_1$ is the *slope*, and $b_0$ is the *intercept*, the value of $\hat{y}$ when $x=0$.
$\dagger$
Interpolation is OK; be careful with prediction beyond the observed data. **Extrapolation** is the use of a regression line for prediction far outside the range of values of the explanatory variable $x$.
### Least-Squares Regression Line
$Def$
the line that minimizes the sum of the squares of the vertical distances of the data points from the line. And the equation is $\hat{y} = b_0 + b_1 x$ with *slope* and *intercept*:
$$b_1 = r \frac{s_y} {s_x}, b_0 = \bar{y} - b_1 \bar{x}$$
Here $s_x$ and $s_y$ are the standard deviations of $x$ and $y$, and $r$ is their correlation.
$\odot$
1. A change of one standard deviation in $x$ corresponds to a change of $r \times $ standard deviations in $y$.
2. The LSRL always passes through $(\bar{x}, \bar{y})$.
3. The distinction between explanatory and response variables is essential. And if we reverse the roles of the two variables, we get a different LSRL.$\square$
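The slope and intercept formulas can be checked against `np.polyfit` (a sketch with made-up data):

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # made-up data
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])

r = np.corrcoef(x, y)[0, 1]
b1 = r * y.std(ddof=1) / x.std(ddof=1)   # slope: r * s_y / s_x
b0 = y.mean() - b1 * x.mean()            # intercept: y-bar - b1 * x-bar
print(b1, b0)                            # 0.6 and 2.2
print(np.polyfit(x, y, 1))               # the same line, [slope, intercept]
```

By construction, $b_0 = \bar{y} - b_1 \bar{x}$ forces the fitted line through $(\bar{x}, \bar{y})$.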
### Correlation and Regression
The two variables play different roles in regression. The **square of the correlation**, $r^2$, is the fraction of the variation in the values of $y$ that is explained by the least-squares regression of $y$ on $x$; it is also called the **coefficient of determination**.
## Cautions About Correlation and Regression
### Residuals
$Def$
Observed $y$ minus predicted $y$, $i.e.$, $y - \hat{y}$.
**Residual Plots**
1. Ideally there should be a “random” scatter around zero.
2. Residual patterns suggest deviations from a linear relationship.
>**e.g.**
>
| Regression | Residual Plot |
|:-----------------------------:|:-----------------------------:|
|  |  |
### Outliers and Influential Points
$Def$
The observation that lies outside the overall pattern of the other observations.
**Influential**: an observation is influential if removing it would markedly change the result of the calculation.
$\odot$
If an observation is an outlier in the $x$ direction, it is often influential; if it is an outlier in the $y$ direction, it often has a large residual.$\square$
### Cautions
1. Both Correlation and Regression describe linear relationships.
2. Both are affected by outliers.
3. Beware of extrapolation, in predicting y when x is outside the range of observed x’s.
4. Beware of **lurking variables**: These have an important effect on the relationship among the variables in a study, but are not included in the study.
5. Correlation does not imply causation!
## Data Analysis for Two-Way Tables
A Two-way table describes two categorical variables, organizing counts according to a **row variable** and a **column variable**. Each combination of values for these two variables is called a **cell**.
>
What we do to the two-way table?
1. Transform them into a **contingency table**, i.e., present each combination of categories with its corresponding frequency.
2. The inside entries are the **JOINT probability distribution**.
3. The entries of the *last row* and the *last column* are called the **MARGINAL probability distribution**: the distribution of values of one variable alone among all individuals described by the table.
>
### Marginal distribution and conditional distribution
| Marginal | Conditional |
|:-----------------------------:|:-----------------------------:|
|  |  |
|Given the joint distribution, sum over the other variable to obtain the distribution of each variable alone|Given the joint distribution, consider the distribution under a certain condition|
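With pandas, all three distributions can be read off a small two-way table of counts (hypothetical data):

```
import pandas as pd

# Hypothetical two-way table of counts: row variable vs. column variable.
counts = pd.DataFrame(
    {"Accepted": [25, 5], "Rejected": [15, 35]},
    index=["Men", "Women"],
)
n = counts.values.sum()                   # 80 individuals in total

joint = counts / n                        # joint distribution
marginal = joint.sum(axis=1)              # marginal distribution of the row variable
conditional = counts.div(counts.sum(axis=1), axis=0)  # conditional on each row
print(joint, marginal, conditional, sep="\n\n")
```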
### Simpson’s Paradox
$Def$
An association or comparison that holds for all of several groups can reverse direction when the data are combined to form a single group. This reversal is called Simpson’s paradox.
The lurking variable creates subgroups, and failure to take these subgroups into consideration can lead to misleading conclusions regarding the association between the two variables.
>**e.g.**
>
And after considering the lurking variable,
>
> Our finding is that, before accounting for the variable "School", men are accepted at a higher percentage than women; however, the conclusion is reversed once "School" is considered.
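The reversal can be reproduced numerically; a sketch with hypothetical admission counts in the spirit of the example above:

```
# (admitted, applied) per school and gender -- hypothetical counts
admitted = {
    "School A": {"Men": (80, 100), "Women": (9, 10)},
    "School B": {"Men": (2, 10), "Women": (25, 100)},
}

def pooled_rate(pairs):
    accepted = sum(a for a, _ in pairs)
    applied = sum(n for _, n in pairs)
    return accepted / applied

for school, groups in admitted.items():
    men_a, men_n = groups["Men"]
    women_a, women_n = groups["Women"]
    # women have the higher acceptance rate within EACH school
    print(school, men_a / men_n, women_a / women_n)

men_overall = pooled_rate([g["Men"] for g in admitted.values()])
women_overall = pooled_rate([g["Women"] for g in admitted.values()])
print(men_overall, women_overall)   # yet men are higher when pooled
```

The lurking variable (School) creates subgroups with very different sizes and acceptance rates, which is exactly what drives the reversal.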
## The Question of Causation因果关系
$\odot$
Association, however strong, does NOT imply causation.$\square$
In the following subsection, dashed lines show an association, solid arrows show a cause-and-effect link. $x$ is explanatory, $y$ is response, and $z$ is a lurking variable.
| Common Response | Confounding | Causation |
|:-----------------------------:|:-----------------------------:|:-----------:|
|  |  |  |
|The observed relationship between the variables can be explained by a lurking variable. Both $x$ and $y$ may change in response to changes in $z$.|Two variables' effects on a response variable cannot be distinguished from each other. The confounded variables may be either explanatory variables or lurking variables.|A properly conducted experiment may establish causation.|
|Most students who have high SAT scores ($x$) in high school have high GPAs ($y$) in their first year of college. And "ability and knowledge" is the lurking variable.|Religious people live longer than nonreligious people, but they also take better care of themselves and are less likely to smoke or be overweight.|Established only by a properly conducted experiment, e.g., a randomized trial showing that a treatment changes the response.|
Criteria for Causation
1. Strong association.
2. Consistent association:
    1. The connection happens in repeated trials.
    2. The connection happens under varying conditions.
3. Higher doses are associated with stronger responses.
4. The alleged cause does precede the effect.
5. The alleged cause is plausible.
# Producing Data
## Sources of Data
1. *Anecdotal data* represent individual cases that often come to our attention because they are striking in some way.
2. *Available data* are data that were produced in the past for some other purpose but that may help answer a present question inexpensively.
3. a *sample of individuals* is selected from a larger *population of individuals*.
$Compare$
An observational study observes individuals and measures variables of interest but does not attempt to influence the responses. The purpose is to describe some group or situation.
An experiment deliberately imposes some treatment on individuals to measure their responses. The purpose is to study whether the treatment causes a change in the response.
Experiments don’t just observe individuals or ask them questions. They actively impose some treatment in order to measure the response.
***
Well-designed experiments take steps to avoid **confounding**.
Confounding occurs when two variables are associated in such a way that their effects on a response variable cannot be distinguished from each other.
A *lurking variable* is a variable that is not among the explanatory or response variables in a study but that may influence the response variable.
## Design of Experiments
## Sampling Design
**population**: entire group of individuals about which we want information.
**sample**: part of the population from which we actually collect information.
We use information from a *sample* to draw conclusions about the entire *population*. We collect data from a **representative sample** so that we can make an inference about the whole population.
The design of a sample is **biased** if it *systematically* favors certain outcomes.
**convenience sample**: Choosing individuals simply because they are easy to reach.
**voluntary response sample**: people who choose themselves by responding to a general appeal. $\odot$ These often show great bias because people with stronger opinions are more likely to respond.$\square$
**Random sampling**: the use of chance to select a sample, is the *central principle* of statistical sampling.
**simple random sample** (SRS) of size $n$: $n$ individuals from the population chosen in such a way that every set of $n$ individuals has an equal chance to be the sample actually selected.
We can get random numbers generated by a computer or calculator to choose samples. Or use *table of random digits*.
**probability sample**: a sample chosen by chance.
To select a **stratified random sample**, first *classify* the population into groups of similar individuals, called **strata**. Then choose a *separate SRS in each stratum* and combine these SRSs to form the full sample.
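Both designs can be sketched with NumPy's random generator (a hypothetical population of 100 individuals split into two strata):

```
import numpy as np

rng = np.random.default_rng(0)
population = np.arange(100)                  # hypothetical population of 100
strata = [population[:60], population[60:]]  # two strata of similar individuals

# SRS: every set of 10 individuals is equally likely to be chosen.
srs = rng.choice(population, size=10, replace=False)

# Stratified sample: a separate SRS of 5 inside each stratum, combined.
stratified = np.concatenate(
    [rng.choice(stratum, size=5, replace=False) for stratum in strata]
)
print(sorted(srs.tolist()))
print(sorted(stratified.tolist()))           # five from each stratum
```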
Some source of error:
1. *Undercoverage*: when some groups in the population are left out of the process of choosing the sample.
2. *Nonresponse*: when an individual chosen for the sample can’t be contacted or refuses to participate.
3. *Response bias*: a systematic pattern of incorrect responses in a sample survey.
4. *Wording of questions*: often the **most important influence** on the answers.
## Toward Statistical Inference
A **parameter** is a number that describes some characteristic of the population. In statistical practice, the value of a parameter is not known because we cannot examine the entire population.
A **statistic** is a number that describes some characteristic of a sample. The value of a statistic can be computed directly from the sample data. We often use a statistic to estimate an unknown parameter.
*S*tatistics come from *s*amples, and *p*arameters come from *p*opulations.
Knowing the **sampling distribution** is what allows us to perform statistical inference.
The **population distribution** of a variable is the distribution of values of the variable among all individuals in the population.
The **sampling distribution** of a statistic is the distribution of values taken by the statistic in all possible samples of the same size from the same population.
**Bias** concerns the center of the sampling distribution. A statistic used to estimate a parameter is **unbiased** if the mean of its sampling distribution is equal to the true value of the parameter being estimated.
The **variability of a statistic** is described by the spread of its sampling distribution. This spread is determined by the sampling design and the sample size n. Statistics from larger probability samples have smaller spreads.

To reduce bias, use random sampling; to reduce the variability of a statistic from an SRS, use a larger sample.
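A small simulation (illustrative numbers only) makes both points concrete: sample means from repeated random samples center on the population mean, and larger samples spread less:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=50, scale=10, size=10_000)

def sample_means(n, reps=1000):
    # the mean of each of `reps` random samples of size n
    return np.array([rng.choice(population, size=n, replace=False).mean()
                     for _ in range(reps)])

small, large = sample_means(10), sample_means(100)

# both center near the population mean (low bias), but the
# larger samples have a visibly smaller spread (low variability)
print(round(small.std(), 2), round(large.std(), 2))
```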
**inference**: Draw conclusions about a population on the basis of sample data
Why *Random Sampling*?
1. To *eliminate bias* in selecting samples from the list of available individuals.
2. The laws of probability allow trustworthy inference about the population.
- Results from random samples come with a margin of error that sets bounds on the size of the likely error.
- Larger random samples give better information about the population than smaller samples.
## Ethics
When collecting data from people.
**Basic Data Ethics**
1. The organization that carries out the study must have an **institutional review board** that reviews all planned studies in advance in order to protect the subjects from possible harm.
- Reviews the plan of study
- Can require changes
- Reviews the consent form
- Monitors progress at least once a year
2. All individuals who are subjects in a study must give their **informed consent** before data are collected: they must be told the nature of the research and any risk of harm it might bring.
3. All individual data must be kept **confidential**. Only statistical summaries for groups of subjects may be made public.
Who can't give informed consent?
- Prison inmates
- Very young children
- People with mental disorders
### Confidentiality
1. All individual data must be kept confidential. **Only** *statistical summaries* may be made public.
2. Not the same as **anonymity**
3. Separate the identity of the subjects from the rest of the data immediately!
### Clinical Trials
- Randomized comparative experiments are the only way to see the true effects of new treatments.
- Most benefits of clinical trials go to future patients. We must balance future benefits against present risks.
- The interests of the subject must always prevail over the interests of science and society.
### Behavioral and Social Science Experiments
1. These experiments rely on hiding the true purpose of the study.
2. Subjects would change their behavior if told in advance what investigators were looking for.
3. “Ethical Principles”: consent, unless a study merely observes behavior in a public space.
# Probability: The Study of Randomness
## Randomness
We call a *phenomenon* **random** if individual outcomes are uncertain but there is nonetheless a regular distribution of outcomes in a large number of repetitions.
The **probability** of any outcome of a chance process is the proportion of times the outcome would occur in a very long series of repetitions.
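For example (a hedged sketch, not from the text), simulating many coin flips shows the proportion of heads settling near the probability $0.5$:

```python
import numpy as np

rng = np.random.default_rng(1)
flips = rng.integers(0, 2, size=100_000)          # 0 = tails, 1 = heads
prop = np.cumsum(flips) / np.arange(1, flips.size + 1)

# the running proportion is erratic early on, then stabilizes near 0.5
print(round(prop[9], 2), round(prop[-1], 3))
```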
**independent trial**: the outcome of a new trial is not influenced by the result of the previous trial
## Probability Models
The **sample space** $S$ of a chance process is the set of **all possible outcomes**.
An **event** is an outcome or a set of outcomes of a random phenomenon. That is, an event is a **subset** of the *sample space*.
**Probability** is a function that assigns a *measure (number)* in $\left[0, 1\right]$ to an *event*: $P(\text{event}) = p$, where $0 \leq p \leq 1$.
$Rule$
1. Any probability is a number between $0$ and $1$.
2. All possible outcomes together must have probability $1$.
3. If two events have no outcomes in common, the probability that one or the other occurs is the *sum* of their individual probabilities.
4. The probability that an event does not occur is $1$ minus the probability that the event does occur.
or in math languages
1. for event $A$, $0 \leq P(A) \leq 1$
2. for sample space $S$, $P(S) = 1$
3. for disjoint events $A$ and $B$, $P(A \text{ OR } B) = P(A) + P(B)$
4. the complement of any event $A$ is the event that $A$ doesn't occur, denoted $A^c$; $P(A^c) = 1 - P(A)$
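These rules can be checked empirically with a quick simulation of a fair die (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
rolls = rng.integers(1, 7, size=200_000)
p = {i: np.mean(rolls == i) for i in range(1, 7)}

total = sum(p.values())                           # rule 2: P(S) = 1
p_1_or_2 = np.mean((rolls == 1) | (rolls == 2))   # rule 3: disjoint events
p_not_6 = np.mean(rolls != 6)                     # rule 4: complement
print(round(total, 3), round(p_1_or_2, 3), round(p_not_6, 3))
```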
Other rules
Two events $A$ and $B$ are **independent** if knowing that one occurs does not change the probability that the other occurs. We have $P(A \text{ AND } B) = P(A) \times P(B)$.
## Random Variables
A **random variable** is a **function** that assigns numerical values (state space) to each of the outcomes in the *sample space*.
The probability distribution of a random variable gives its possible values and their probabilities.
The complete information about a random variable is contained in its **distribution function**.
### Discrete Random Variable
For discrete random variable $X$, it takes a fixed set of possible values with gaps between.
| Value | $x_1$ | $x_2$ | $x_3$ | $\dots$ |
|:-------------|:-----:|:-----:|:-----:|:-------:|
| Probability | $p_1$ | $p_2$ | $p_3$ | $\dots$ |
1. Every probability $p_i$ is a number between $0$ and $1$.
2. The sum of the probabilities is $1$.
>**e.g.** Tossing a Die, $X$ is the $r.v.$ of number.
>
>*state space*: $\{1, 2, 3, 4, 5, 6\}$, *distribution func of* $X$: $P(X = i) = \displaystyle \frac{1} {6}, i = 1 , 2, \dots, 6$
### Continuous Random Variable
For continuous Random Variable $Y$, it takes on all values in an interval of numbers.
It has *infinitely many possible values* and *only* **intervals of values** have positive probability.
A **continuous probability model** assigns probabilities as areas *under a **density curve***. The area under the curve and above any range of values is the probability of an outcome in that range.
### Normal Probability Models
After standardizing, $\displaystyle z = \frac{x - \mu} {\sigma}$, we have $Z \sim \mathrm{N}(\mu = 0, \sigma = 1)$.
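A small illustration (the population parameters are made-up numbers): standardize an observation and look up its cumulative probability with `scipy.stats`:

```python
from scipy import stats

mu, sigma = 64.0, 2.7     # assumed N(64, 2.7) population
x = 68.0                  # one observation

z = (x - mu) / sigma      # standardized value
print(round(z, 2), round(stats.norm.cdf(z), 3))  # area to the left of z
```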
## Means and Variances of Random Variables
For discrete $r.v.$, the mean is $\mu_X = \sum x_i p_i$, a weighted average.
$\odot$ The expected value does not need to be a possible value of $X$.$\square$
and the variance is $\sigma_X^2 = \mathrm{Var}(X) = \sum (x_i - \mu_X)^2 p_i$
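Applying both formulas to a fair die (a minimal sketch):

```python
import numpy as np

x = np.arange(1, 7)              # possible values of X
p = np.full(6, 1 / 6)            # probabilities p_i

mu = np.sum(x * p)               # mu_X = sum of x_i * p_i
var = np.sum((x - mu) ** 2 * p)  # sigma_X^2 = sum of (x_i - mu_X)^2 * p_i
print(round(mu, 4), round(var, 4))
```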
### The Law of Large Numbers
As the number of observations drawn (sample size, $n$) increases, the sample mean $\bar{x}$ of the observed values gets closer and closer to the mean $\mu$ of the population.
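For instance (an illustrative simulation), the mean of die rolls drifts toward $\mu = 3.5$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
rolls = rng.integers(1, 7, size=1_000_000)

# sample mean after the first n observations, for growing n
for n in (100, 10_000, 1_000_000):
    print(n, round(rolls[:n].mean(), 4))
```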
### Some rules
For $r.v.$ $X$ and $Y$
1. $\mu_{a + bX} = a + b\cdot \mu_X$
2. $\mu_{X+Y} = \mu_X + \mu_Y$
3. $\sigma_{a + bX}^2 = b^2 \cdot \sigma_X^2$
4. if independent, $\sigma_{X + Y}^2 = \sigma_X^2 + \sigma_Y^2$
5. if with correlation $\rho$, $\sigma_{X + Y}^2 = \sigma_X^2 + \sigma_Y^2 + 2 \rho \sigma_X \sigma_Y, \sigma_{X - Y}^2 = \sigma_X^2 + \sigma_Y^2 - 2 \rho \sigma_X \sigma_Y$
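These rules can be sanity-checked by simulation with independent normals (made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(2.0, 3.0, size=500_000)
Y = rng.normal(-1.0, 4.0, size=500_000)   # drawn independently of X
a, b = 5.0, 2.0

# rule 1: mu_{a+bX} = a + b * mu_X (exact for sample means)
print(round((a + b * X).mean(), 3), round(a + b * X.mean(), 3))

# rule 4: for independent X and Y, Var(X+Y) ~= Var(X) + Var(Y)
print(round((X + Y).var(), 1), round(X.var() + Y.var(), 1))
```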
# Spatial Analysis
CARTOframes provides the `Dataset` class (e.g. `Dataset(your_query)`) for performing analysis and returning the results as a pandas dataframe, and the `Layer` class for visualizing an analysis as a map layer. Both run the queries against a **PostgreSQL** database with **PostGIS**. CARTO also provides more advanced spatial analysis through the crankshaft extension.
In this guide, we will analyze McDonald’s locations in US Census tracts using spatial analysis functionality in CARTO.
You can download the datasets directly from their sources:
- [NYC Census Tracts Data](https://www.census.gov/cgi-bin/geo/shapefiles/index.php?year=2018&layergroup=Census+Tracts)
- [NYC McDonalds](https://data.cityofnewyork.us/Health/McDonald-s/kyws-ad2t)
Or follow these steps to download them directly from the CARTOframes account.
```
from cartoframes.auth import Credentials
from cartoframes.viz import Map, Layer, Legend, Source
from cartoframes.data import Dataset
## Add here your CARTO credentials:
credentials = Credentials(base_url='https://your_user_name.carto.com', api_key='your_api_key')
cf_credentials = Credentials(base_url='https://cartoframes.carto.com', api_key='default_public')
mcdonalds_nyc_data = Dataset('mcdonalds_nyc', credentials=cf_credentials)
mcdonalds_nyc_data_df = mcdonalds_nyc_data.download()
mcdonalds_nyc = Dataset(mcdonalds_nyc_data_df)
mcdonalds_nyc.upload(table_name='mcdonalds_nyc', if_exists='replace', credentials=credentials)
nyc_census_tracts_data = Dataset('nyc_census_tracts', credentials=cf_credentials)
nyc_census_tracts_data_df = nyc_census_tracts_data.download()
nyc_census_tracts = Dataset(nyc_census_tracts_data_df)
nyc_census_tracts.upload(table_name='nyc_census_tracts', if_exists='replace', credentials=credentials)
```
### Example 1
Find the number of McDonald’s in each census tract in New York City.
```
mcdonalds_per_census_tracts = Dataset('''
SELECT
tracts.geom_refs AS FIPS_code,
tracts.the_geom as the_geom,
COUNT(mcd.*) AS num_mcdonalds
FROM nyc_census_tracts As tracts, mcdonalds_nyc As mcd
WHERE ST_Intersects(tracts.the_geom, mcd.the_geom)
GROUP BY tracts.geom_refs, tracts.the_geom
ORDER BY num_mcdonalds DESC
''', credentials=credentials)
# Show first five entries of results
# Including FIPS code (unique digital identifier for census tracts) and the number of McDonald's
# Sorted by the number of McDonald's in descending order
mcdonalds_per_census_tracts = mcdonalds_per_census_tracts.download()
mcdonalds_per_census_tracts.head()
```
### Example 2
Build 100 meter buffer area for each McDonald’s by updating the geometry.
```
# create a new table and save it as 'nyc_mcdonalds_buffer_100m'.
nyc_mcdonalds_buffer_100m = Dataset(
'''
SELECT name, id, address, city, zip,
ST_Buffer(the_geom::geography, 100)::geometry AS the_geom
FROM mcdonalds_nyc
''',
credentials=credentials)
# Show the first entry.
# 'geometry' is Polygon type now.
nyc_mcdonalds_buffer_100m.upload(table_name='nyc_mcdonalds_buffer_100m', if_exists='replace', credentials=credentials)
nyc_mcdonalds_buffer_100m_df = nyc_mcdonalds_buffer_100m.download()
nyc_mcdonalds_buffer_100m_df.iloc[0, :]
```
To show the results of this query on a map, we can do the following:
```
from cartoframes.viz import Map, Layer
Map([
Layer('nyc_mcdonalds_buffer_100m', 'color: lightgray width: 20', credentials=credentials),
Layer('mcdonalds_nyc', 'color: red width: 3', credentials=credentials),
],
viewport={'zoom': 13.00, 'lat': 40.74, 'lng': -73.98}
)
```
### Example 3
Apply k-means (k=5) spatial clustering for all McDonald’s in NYC, and visualize different clusters by color. Note: for more complicated queries, it is best to create a temporary table from the query and then visualize it.
```
k_means_dataset = Dataset('''
SELECT
row_number() OVER () AS cartodb_id,
c.cluster_no,
c.the_geom,
ST_Transform(c.the_geom, 3857) AS the_geom_webmercator
FROM
((SELECT *
FROM cdb_crankshaft.cdb_kmeans(
'SELECT the_geom, cartodb_id, longitude, latitude FROM mcdonalds_nyc', 5)
) AS a
JOIN
mcdonalds_nyc AS b
ON a.cartodb_id = b.cartodb_id
) AS c
''',
credentials=credentials)
k_means_dataset.upload(table_name='mcdonalds_clusters', if_exists='replace', credentials=credentials)
Map(Layer('mcdonalds_clusters', 'color: ramp($cluster_no, Prism)', credentials=credentials))
```
# Latent Factor DBCM Analysis
This example highlights several advanced modeling strategies.
**Dynamic Binary Cascade Model**
The first is a Dynamic Binary Cascade Model (DBCM), which is the combination of a Dynamic Count Mixture Model (DCMM) and a binary cascade (Berry et al. 2020). When modeling sales in a retail setting, the DCMM (Berry and West, 2019) is used to model the number of transactions involving an item, and the binary cascade models the quantity purchased by each customer. The DCMM models the transactions as a mixture of a Bernoulli and Poisson DGLM:
\begin{equation} \label{eqn-dcmm}
z_t \sim Ber(\pi_t) \text{ and } y_t \mid z_t =
\begin{cases}
0, & \text{if } z_t = 0,\\
1 + x_t, \quad x_t \sim Po(\mu_t), & \text{if }z_t = 1
\end{cases}
\end{equation}
where $\pi_t$ and $\mu_t$ vary according to the dynamics of independent Bernoulli and Poisson DGLMs respectively:
\begin{equation*}
\text{logit}(\pi_t) = \mathbf{F}_{ber, t}^{'}\boldsymbol{\theta}_{ber, t} \qquad \text{and} \qquad \text{log}(\mu_t) = \mathbf{F}_{Po, t}^{'} \boldsymbol{\theta}_{Po, t}
\end{equation*}
A DCMM is useful to capture a higher prevalence of $0$ outcomes than in a standard Poisson DGLM.
Transactions are related to sales by modeling the number of units bought in each transaction. Recognizing that sales outliers often occur due to shoppers buying many units of an item, the probability of each quantity is modeled with a binary cascade. Let $n_{r,t}$ be the number of transactions with more than $r$ units. Then $n_{r,t} | n_{r-1, t} \sim Bin(n_{r-1,t}, \pi_{r,t})$ is defined by a binomial DGLM. This cascade of binomial DGLMs represents the sequence of conditional probabilities for purchasing $n$ units or greater, given that the shopper has bought $n-1$. This implies the following expression for sales $y_t$:
\begin{equation}\label{eqn-dbcm}
y_t =
\begin{cases}
0, & \text{if } z_t = 0,\\
\sum_{r=1:d} r(n_{r-1, t} - n_{r,t}) + e_t, & \text{if }z_t = 1,
\end{cases}
\end{equation}
where $d$ is the predefined length of the cascade and $e_t$ represents excess units greater than $d$. The cascade enables modeling of very small probabilities, in the rare cases when shoppers purchase large quantities of an item on a single grocery store trip.
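As a rough illustration of the cascade mechanics (fixed, made-up probabilities; in the DBCM these evolve over time through the Bernoulli and binomial DGLMs above):

```python
import numpy as np

rng = np.random.default_rng(5)

pi = 0.8                  # P(z_t = 1): any transactions occur today
pi_r = [0.3, 0.2, 0.1]    # cascade: P(> r units | > r - 1 units)
d = len(pi_r)

def simulate_day(n_transactions=20):
    if rng.random() >= pi:                 # z_t = 0: no sales today
        return 0
    # n[r] = number of transactions with more than r units
    n = [n_transactions]
    for p_r in pi_r:
        n.append(rng.binomial(n[-1], p_r))
    # y_t = sum_r r * (n_{r-1} - n_r); the n[d] transactions with
    # more than d units would contribute the excess e_t (ignored here)
    return sum(r * (n[r - 1] - n[r]) for r in range(1, d + 1))

daily_sales = [simulate_day() for _ in range(7)]
print(daily_sales)
```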
**Latent Factors**
The second strategy in this example is the use of *latent factors* for multiscale modeling. This example features simulated data from a retail sales setting. There is data for an item, and also for total sales at the store. The total sales are smoother and more predictable than the item level sales. A model is fit to the total sales which includes a day-of-week seasonal effect. This seasonal effect is extracted from the model on total sales, and used as a predictor in the item level model.
Modeling effects at different levels of a hierarchy - aka multiscale modeling - can improve forecast accuracy, because the sales of an individual item can be very noisy, making it difficult to learn accurate patterns.
**Copula Forecasting**
In this example we are focusing on forecasting retail sales *1* through *14* days into the future. To do this, we will simulate from the joint forecast distribution *1:14* days into the future. This is accomplished in an efficient manner through the use of a Copula, which accounts for dependence in the forecasts across days into the future.
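The general idea can be sketched with a Gaussian copula (illustrative correlation structure and marginals, not PyBATS's internal implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

k, nsamps = 14, 5000
idx = np.arange(k)
corr = 0.6 ** np.abs(idx[:, None] - idx[None, :])  # nearby horizons more dependent

# correlated normals -> uniforms -> arbitrary marginals (Poisson here)
z = rng.multivariate_normal(np.zeros(k), corr, size=nsamps)
u = stats.norm.cdf(z)
samples = stats.poisson.ppf(u, mu=10.0)   # joint forecast samples, (nsamps, k)
print(samples.shape)
```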
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pybats.analysis import analysis, analysis_dbcm
from pybats.latent_factor import seas_weekly_lf
from pybats.shared import load_dbcm_latent_factor_example
from pybats.plot import plot_data_forecast
from pybats.point_forecast import median
from pybats.loss_functions import MAD
```
We start by loading the simulated retail sales data.
The dataframe 'totaldata' has the total daily sales in a store, along with a predictor, which represents a measure of average price.
The dataframe 'data' is the sales of a single item, and has two predictors. 'X_transaction' is a measure of price, and will be used as a predictor in the DCMM. 'X_cascade' is a binary indicator of whether the item is being promoted within the store, and is used as a predictor in the binary cascade within the DBCM.
```
data = load_dbcm_latent_factor_example()
totaldata, data = data.values()
totaldata['Y'] = np.log(totaldata['Y'] + 1)
totaldata.head()
data.head()
```
Here we define the hyperparameters for the analysis:
- *rho* is a discount factor which calibrates the forecast distribution. A value of *rho* less than $1$ increases the forecast variance.
- *k* is the number of days ahead to forecast.
- *nsamps* is the number of forecast samples to draw.
- *prior_length* is the number of days of data to use when defining the priors for model coefficients.
```
#Define hyper parameters
rho = .2 # rho must be greater than 0, and is typically < 1. A smaller rho widens the forecast distribution.
k = 14 # Number of days ahead that we will forecast
nsamps = 200
prior_length = 21
```
Next, we define the window of time that we want to forecast over. For each day within this time period, the model will sample from the path (joint) forecast distribution *1:k* days into the future.
```
# Define the forecast range
T = len(totaldata)
forecast_end_date = totaldata.index[-k]
forecast_start_date = forecast_end_date - pd.DateOffset(days=365)
```
The first analysis is run on the 'totaldata', which is used to learn a latent factor. In this case, we fit a normal DLM to the log of total sales in order to learn the weekly season effect, which is stored in the latent factor.
```
# Get multiscale signal (a latent factor) from higher level log-normal model
latent_factor = analysis(totaldata['Y'].values, totaldata['X'].values, k,
forecast_start_date, forecast_end_date, dates=totaldata.index,
seasPeriods=[7], seasHarmComponents=[[1,2,3]],
family="normal", ret=['new_latent_factors'], new_latent_factors= [seas_weekly_lf.copy()],
prior_length=prior_length)
```
The second analysis is performed on 'data', which is the simulated sales of a single item. The latent factor is used to inform on the sales of this item. On each day, a copula is used to draw joint forecast samples from *1:k* days ahead.
```
# Update and forecast the model
mod, forecast_samples = analysis_dbcm(data['Y_transaction'].values.reshape(-1), data['X_transaction'].values.reshape(-1,1),
data[['mt1', 'mt2', 'mt3', 'mt4']].values, data['X_cascade'].values.reshape(-1,1),
data['excess'].values,
forecast_start=forecast_start_date, forecast_end=forecast_end_date,
prior_length=prior_length, k=k,
nsamps=nsamps, rho=rho,
latent_factor=latent_factor, dates = data.index,
delregn_pois=.98)
```
Finally, we can examine the results, first by plotting both the *1-* and *14-* step ahead forecasts:
```
forecast = median(forecast_samples)
horizon = 1
plot_length = 50
fig, ax = plt.subplots(figsize=(8,4))
start_date = forecast_end_date + pd.DateOffset(horizon - plot_length)
end_date = forecast_end_date + pd.DateOffset(horizon - 1)
ax = plot_data_forecast(fig, ax, data.loc[start_date:end_date].Sales,
forecast[-plot_length:,horizon - 1],
forecast_samples[:,-plot_length:,horizon - 1],
data.loc[start_date:end_date].index,
linewidth = 2)
horizon = 14
plot_length = 50
fig, ax = plt.subplots(figsize=(8,4))
start_date = forecast_end_date + pd.DateOffset(horizon - plot_length)
end_date = forecast_end_date + pd.DateOffset(horizon - 1)
ax = plot_data_forecast(fig, ax, data.loc[start_date:end_date].Sales,
forecast[-plot_length:,horizon - 1],
forecast_samples[:,-plot_length:,horizon - 1],
data.loc[start_date:end_date].index,
linewidth = 2)
```
We can also look at the mean absolute deviation (MAD) between the forecast median and the observations over the forecast horizons. Interestingly, there is only a small increase in the MAD at longer forecast horizons.
```
# Mean absolute deviation at increasing forecast horizons
horizons = list(range(1, k+1))
list(map(lambda k: MAD(data.loc[forecast_start_date + pd.DateOffset(k-1):forecast_end_date + pd.DateOffset(k-1)].Sales,
forecast[:,k-1]),
horizons))
```
```
import keras
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute
from keras.layers import Concatenate, Reshape, Softmax, Conv2DTranspose, Embedding, Multiply
from keras.callbacks import ModelCheckpoint, EarlyStopping, Callback
from keras import regularizers
from keras import backend as K
import keras.losses
import tensorflow as tf
from tensorflow.python.framework import ops
import isolearn.keras as iso
import numpy as np
import tensorflow as tf
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
import pandas as pd
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
import isolearn.io as isoio
import isolearn.keras as isol
from genesis.visualization import *
from genesis.generator import *
from genesis.predictor import *
from genesis.optimizer import *
import sklearn
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from scipy.stats import pearsonr
import seaborn as sns
from matplotlib import colors
import editdistance
def subselect_list(li, ixs) :
return [
li[ixs[k]] for k in range(len(ixs))
]
class IdentityEncoder(iso.SequenceEncoder) :
def __init__(self, seq_len, channel_map) :
super(IdentityEncoder, self).__init__('identity', (seq_len, len(channel_map)))
self.seq_len = seq_len
self.n_channels = len(channel_map)
self.encode_map = channel_map
self.decode_map = {
ix: nt for nt, ix in self.encode_map.items()
}
def encode(self, seq) :
encoding = np.zeros((self.seq_len, self.n_channels))
for i in range(len(seq)) :
if seq[i] in self.encode_map :
channel_ix = self.encode_map[seq[i]]
encoding[i, channel_ix] = 1.
return encoding
def encode_inplace(self, seq, encoding) :
for i in range(len(seq)) :
if seq[i] in self.encode_map :
channel_ix = self.encode_map[seq[i]]
encoding[i, channel_ix] = 1.
def encode_inplace_sparse(self, seq, encoding_mat, row_index) :
raise NotImplementedError()
def decode(self, encoding) :
seq = ''
for pos in range(0, encoding.shape[0]) :
argmax_nt = np.argmax(encoding[pos, :])
max_nt = np.max(encoding[pos, :])
seq += self.decode_map[argmax_nt]
return seq
def decode_sparse(self, encoding_mat, row_index) :
raise NotImplementedError()
#Plot joint histograms
def plot_joint_histo(measurements, labels, x_label, y_label, colors=None, n_bins=50, figsize=(6, 4), legend_outside=False, save_fig=False, fig_name="default_1", fig_dpi=150, min_val=None, max_val=None, max_y_val=None) :
min_hist_val = np.min(measurements[0])
max_hist_val = np.max(measurements[0])
for i in range(1, len(measurements)) :
min_hist_val = min(min_hist_val, np.min(measurements[i]))
max_hist_val = max(max_hist_val, np.max(measurements[i]))
if min_val is not None :
min_hist_val = min_val
if max_val is not None :
max_hist_val = max_val
hists = []
bin_edges = []
means = []
for i in range(len(measurements)) :
hist, b_edges = np.histogram(measurements[i], range=(min_hist_val, max_hist_val), bins=n_bins, density=True)
hists.append(hist)
bin_edges.append(b_edges)
means.append(np.mean(measurements[i]))
bin_width = bin_edges[0][1] - bin_edges[0][0]
f = plt.figure(figsize=figsize)
for i in range(len(measurements)) :
if colors is not None :
plt.bar(bin_edges[i][1:] - bin_width/2., hists[i], width=bin_width, linewidth=2, edgecolor='black', color=colors[i], label=labels[i])
else :
plt.bar(bin_edges[i][1:] - bin_width/2., hists[i], width=bin_width, linewidth=2, edgecolor='black', label=labels[i])
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim(min_hist_val, max_hist_val)
if max_y_val is not None :
plt.ylim(0, max_y_val)
plt.xlabel(x_label, fontsize=14)
plt.ylabel(y_label, fontsize=14)
if colors is not None :
for i in range(len(measurements)) :
plt.axvline(x=means[i], linewidth=2, color=colors[i], linestyle="--")
if not legend_outside :
plt.legend(fontsize=14, loc='upper left')
else :
plt.legend(fontsize=14, bbox_to_anchor=(1.04,1), loc="upper left")
plt.tight_layout()
if save_fig :
plt.savefig(fig_name + ".eps")
plt.savefig(fig_name + ".svg")
plt.savefig(fig_name + ".png", dpi=fig_dpi, transparent=True)
plt.show()
#Plot joint comparisons (violin / strip / bar)
def plot_joint_cmp(measurements, labels, y_label, plot_type='violin', colors=None, figsize=(6, 4), legend_outside=False, save_fig=False, fig_name="default_1", fig_dpi=150, min_y_val=None, max_y_val=None) :
f = plt.figure(figsize=figsize)
sns_g = None
if colors is not None :
if plot_type == 'violin' :
sns_g = sns.violinplot(data=measurements, palette=colors, scale='width') #, x=labels
elif plot_type == 'strip' :
sns_g = sns.stripplot(data=measurements, palette=colors, alpha=0.1, jitter=0.3, linewidth=2, edgecolor='black') #, x=labels
for i in range(len(measurements)) :
plt.plot([i, i+1], [np.median(measurements[i]), np.median(measurements[i])], linewidth=2, color=colors[i], linestyle="--")
elif plot_type == 'bar' :
for i in range(len(measurements)) :
plt.bar([i], [np.percentile(measurements[i], 95)], width=0.4, color=colors[i], label=str(i) + ") " + labels[i], linewidth=2, edgecolor='black')
plt.bar([i+0.2], [np.percentile(measurements[i], 80)], width=0.4, color=colors[i], linewidth=2, edgecolor='black')
plt.bar([i+0.4], [np.percentile(measurements[i], 50)], width=0.4, color=colors[i], linewidth=2, edgecolor='black')
else :
if plot_type == 'violin' :
sns_g = sns.violinplot(data=measurements, scale='width') #, x=labels
elif plot_type == 'strip' :
sns_g = sns.stripplot(data=measurements, alpha=0.1, jitter=0.3, linewidth=2, edgecolor='black') #, x=labels
elif plot_type == 'bar' :
for i in range(len(measurements)) :
plt.bar([i], [np.percentile(measurements[i], 95)], width=0.25, label=str(i) + ") " + labels[i], linewidth=2, edgecolor='black')
plt.bar([i+0.125], [np.percentile(measurements[i], 80)], width=0.25, linewidth=2, edgecolor='black')
plt.bar([i+0.25], [np.percentile(measurements[i], 50)], width=0.25, linewidth=2, edgecolor='black')
plt.xticks(np.arange(len(labels)), fontsize=14)
plt.yticks(fontsize=14)
#plt.xlim(min_hist_val, max_hist_val)
if min_y_val is not None and max_y_val is not None :
plt.ylim(min_y_val, max_y_val)
plt.ylabel(y_label, fontsize=14)
if plot_type not in ['violin', 'strip'] :
if not legend_outside :
plt.legend(fontsize=14, loc='upper left')
else :
plt.legend(fontsize=14, bbox_to_anchor=(1.04,1), loc="upper left")
else :
if not legend_outside :
f.get_axes()[0].legend(fontsize=14, loc="upper left", labels=[str(label_i) + ") " + label for label_i, label in enumerate(labels)])
else :
f.get_axes()[0].legend(fontsize=14, bbox_to_anchor=(1.04,1), loc="upper left", labels=[str(label_i) + ") " + label for label_i, label in enumerate(labels)])
plt.tight_layout()
if save_fig :
plt.savefig(fig_name + ".eps")
plt.savefig(fig_name + ".svg")
plt.savefig(fig_name + ".png", dpi=fig_dpi, transparent=True)
plt.show()
#Load generated data from models to be evaluated
def load_sequences(file_path, split_on_tab=True, seq_template=None, max_n_sequences=1000000, select_best_fitness=False, predictor=None, batch_size=32) :
seqs = []
with open(file_path, "rt") as f :
for l in f.readlines() :
l_strip = l.strip()
seq = l_strip
if split_on_tab :
seq = l_strip.split("\t")[0]
if seq_template is not None :
seq = ''.join([
seq_template[j] if seq_template[j] != 'N' else seq[j]
for j in range(len(seq))
])
seqs.append(seq)
if select_best_fitness and predictor is not None :
onehots = np.expand_dims(np.concatenate([
np.expand_dims(acgt_encoder.encode(seq), axis=0) for seq in seqs
], axis=0), axis=-1)
#Predict fitness
score_pred = predictor.predict(x=[onehots[..., 0]], batch_size=batch_size)
score_pred = np.ravel(score_pred[:, 5])
sort_index = np.argsort(score_pred)[::-1]
seqs = [
seqs[sort_index[i]] for i in range(len(seqs))
]
return seqs[:max_n_sequences]
#Metric helper functions
def compute_latent_manhattan_distance(latent_vecs) :
shuffle_index = np.arange(latent_vecs.shape[0])
shuffle_index = shuffle_index[::-1]#np.random.shuffle(shuffle_index)
latent_vecs_shuffled = latent_vecs[shuffle_index]
latent_dists = np.sum(np.abs(latent_vecs - latent_vecs_shuffled), axis=-1)
mean_latent_distance = np.mean(latent_dists)
return latent_dists, mean_latent_distance
def compute_latent_cosine_distance(latent_vecs) :
shuffle_index = np.arange(latent_vecs.shape[0])
shuffle_index = shuffle_index[::-1]#np.random.shuffle(shuffle_index)
latent_vecs_shuffled = latent_vecs[shuffle_index]
latent_cosines = np.sum(latent_vecs * latent_vecs_shuffled, axis=-1) / (np.sqrt(np.sum(latent_vecs**2, axis=-1)) * np.sqrt(np.sum(latent_vecs_shuffled**2, axis=-1)))
latent_cosines = 1. - latent_cosines
mean_latent_cosine = np.mean(latent_cosines)
return latent_cosines, mean_latent_cosine
def compute_edit_distance(onehots, opt_len=100) :
shuffle_index = np.arange(onehots.shape[0])
shuffle_index = shuffle_index[::-1]#np.random.shuffle(shuffle_index)
seqs = [acgt_encoder.decode(onehots[i, :, :, 0]) for i in range(onehots.shape[0])]
seqs_shuffled = [seqs[shuffle_index[i]] for i in range(onehots.shape[0])]
edit_distances = np.ravel([float(editdistance.eval(seq_1, seq_2)) for seq_1, seq_2 in zip(seqs, seqs_shuffled)])
edit_distances /= opt_len
mean_edit_distance = np.mean(edit_distances)
return edit_distances, mean_edit_distance
#Evaluate metrics for each model
def compute_metrics(seqs, n_seqs_to_test=960, batch_size=64, opt_len=90) :
n_seqs_to_test = min(len(seqs), n_seqs_to_test)
onehots = np.expand_dims(np.concatenate([
np.expand_dims(acgt_encoder.encode(seq), axis=0) for seq in seqs
], axis=0), axis=-1)
#Predict fitness
score_pred, dense_pred = saved_predictor_w_dense.predict(x=[onehots[:n_seqs_to_test, :, :, 0]], batch_size=batch_size)
score_pred = np.ravel(score_pred[:, 5])
#Compare pair-wise latent distances
dense_dists, _ = compute_latent_manhattan_distance(dense_pred)
#Compare pair-wise latent cosine similarities
dense_cosines, _ = compute_latent_cosine_distance(dense_pred)
#Compare pair-wise edit distances
edit_dists, _ = compute_edit_distance(onehots[:n_seqs_to_test], opt_len=opt_len)
return score_pred, dense_dists, dense_cosines, edit_dists
def load_data(data_name, valid_set_size=0.05, test_set_size=0.05) :
#Load cached dataframe
cached_dict = pickle.load(open(data_name, 'rb'))
x_train = cached_dict['x_train']
y_train = cached_dict['y_train']
x_test = cached_dict['x_test']
y_test = cached_dict['y_test']
g_nt = np.zeros((1, 1, 1, 4))
g_nt[0, 0, 0, 2] = 1.
x_train = np.concatenate([x_train, np.tile(g_nt, (x_train.shape[0], 1, 15, 1))], axis=2)
x_test = np.concatenate([x_test, np.tile(g_nt, (x_test.shape[0], 1, 15, 1))], axis=2)
return x_train, x_test
def load_predictor_model(model_path) :
saved_model = Sequential()
# sublayer 1
saved_model.add(Conv1D(48, 3, padding='same', activation='relu', input_shape=(145, 4), name='dragonn_conv1d_1_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_1_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_1_copy'))
saved_model.add(Conv1D(64, 3, padding='same', activation='relu', name='dragonn_conv1d_2_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_2_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_2_copy'))
saved_model.add(Conv1D(100, 3, padding='same', activation='relu', name='dragonn_conv1d_3_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_3_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_3_copy'))
saved_model.add(Conv1D(150, 7, padding='same', activation='relu', name='dragonn_conv1d_4_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_4_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_4_copy'))
saved_model.add(Conv1D(300, 7, padding='same', activation='relu', name='dragonn_conv1d_5_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_5_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_5_copy'))
saved_model.add(MaxPooling1D(3))
# sublayer 2
saved_model.add(Conv1D(200, 7, padding='same', activation='relu', name='dragonn_conv1d_6_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_6_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_6_copy'))
saved_model.add(Conv1D(200, 3, padding='same', activation='relu', name='dragonn_conv1d_7_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_7_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_7_copy'))
saved_model.add(Conv1D(200, 3, padding='same', activation='relu', name='dragonn_conv1d_8_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_8_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_8_copy'))
saved_model.add(MaxPooling1D(4))
# sublayer 3
saved_model.add(Conv1D(200, 7, padding='same', activation='relu', name='dragonn_conv1d_9_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_9_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_9_copy'))
saved_model.add(MaxPooling1D(4))
saved_model.add(Flatten())
saved_model.add(Dense(100, activation='relu', name='dragonn_dense_1_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_10_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_10_copy'))
saved_model.add(Dense(12, activation='linear', name='dragonn_dense_2_copy'))
saved_model.compile(
loss= "mean_squared_error",
optimizer=keras.optimizers.SGD(lr=0.1)
)
saved_model.load_weights(model_path)
return saved_model
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage() :
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
contain_tf_gpu_mem_usage()
sequence_template = 'N' * 145
problem_prefix = "mpradragonn_genesis_sv40_max_activity"
n_seqs_to_test = 4000
#Specify the file path to the pre-trained predictor network
saved_predictor_model_path = '../../../seqprop/examples/mpradragonn/pretrained_deep_factorized_model.hdf5'
saved_predictor = load_predictor_model(saved_predictor_model_path)
acgt_encoder = IdentityEncoder(145, {'A':0, 'C':1, 'G':2, 'T':3})
#Get latent space predictor
saved_predictor_w_dense = Model(
inputs = saved_predictor.inputs,
outputs = saved_predictor.outputs + [saved_predictor.get_layer('dragonn_dense_1_copy').output]
)
saved_predictor_w_dense.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1))
#Build random data
random_sequences = [
''.join([
sequence_template[j] if sequence_template[j] != 'N' else np.random.choice(['A', 'C', 'G', 'T'])
for j in range(len(sequence_template))
]) for i in range(n_seqs_to_test)
]
#Trajectory comparison configuration
traj_dirs = [
"samples/basinhopping_mpradragonn_max_activity_512_sequences_1000_iters/",
"samples/genesis_mpradragonn_max_activity_sv40_25000_updates_similarity_margin_05_earthmover_weight_01_target_35_singlesample/",
"samples/genesis_mpradragonn_max_activity_sv40_25000_updates_similarity_margin_03_earthmover_weight_01_target_35_singlesample/"
]
traj_file_funcs = [
lambda i: "intermediate_iter_" + str((i+1) * 100) + ".txt",
lambda i: "intermediate_epoch_" + str(i) + "_960_sequences.txt" if i < 10 else "intermediate_epoch_" + str((i-9)*10) + "_960_sequences.txt",
lambda i: "intermediate_epoch_" + str(i) + "_960_sequences.txt" if i < 10 else "intermediate_epoch_" + str((i-9)*10) + "_960_sequences.txt"
]
traj_scale_generator_touch_funcs = [
lambda i: (i+1) * 100,
lambda i: (i+1) * 100 * 64 if i < 11 else (i-10)*10 * 100 * 64,
lambda i: (i+1) * 100 * 64 if i < 11 else (i-10)*10 * 100 * 64
]
traj_names = [
"Simulated Annealing (1000 iters)",
"DEN Earthm (seq margin 0.5)",
"DEN Earthm (seq margin 0.3)"
]
#Load and predict sequence trajectory data
def load_and_aggregate_score(file_path, agg_mode='mean', perc=50, split_on_tab=True, seq_template=None, predictor=None, batch_size=32, max_n_sequences=960) :
seqs = []
print("Processing '" + str(file_path) + "'...")
try :
with open(file_path, "rt") as f :
for l in f.readlines() :
l_strip = l.strip()
seq = l_strip
if split_on_tab :
seq = l_strip.split("\t")[0]
if seq_template is not None :
seq = ''.join([
seq_template[j] if seq_template[j] != 'N' else seq[j]
for j in range(len(seq))
])
seqs.append(seq)
if len(seqs) > max_n_sequences :
seqs = seqs[:max_n_sequences]
score_pred, _, _, edit_dists = compute_metrics(
seqs,
n_seqs_to_test=len(seqs),
batch_size=batch_size,
opt_len=np.sum([1 if seq_template[j] == 'N' else 0 for j in range(len(seq_template))])
)
if agg_mode == "mean" :
return np.mean(score_pred), np.mean(edit_dists), np.mean(edit_dists), score_pred
elif agg_mode == "perc" :
return np.percentile(score_pred, perc), np.percentile(edit_dists, perc), np.percentile(edit_dists, perc), score_pred
else :
return np.mean(score_pred), np.mean(edit_dists), np.mean(edit_dists), score_pred
except FileNotFoundError :
return np.nan, np.nan, np.nan, np.zeros(max_n_sequences)
max_n_files = 250
traj_ys = [
[
load_and_aggregate_score(
traj_dirs[model_i] + traj_file_funcs[model_i](file_i),
agg_mode='perc',
seq_template=sequence_template,
predictor=saved_predictor,
batch_size=32,
max_n_sequences=512 if model_i == 0 else 960
)
for file_i in range(max_n_files)
]
for model_i in range(len(traj_dirs))
]
traj_gen_xs = [
[
traj_scale_generator_touch_funcs[model_i](file_i)
for file_i in range(max_n_files)
]
for model_i in range(len(traj_dirs))
]
#Clean up trajectories and convert to numpy arrays
traj_raw = []
for model_i in range(len(traj_dirs)) :
#traj_ys[model_i] = np.array(list(zip(*traj_ys[model_i])))
traj_ys[model_i] = list(zip(*traj_ys[model_i]))
traj_raw.append(np.array(traj_ys[model_i][3]))
traj_ys[model_i] = np.array(traj_ys[model_i][:3])
traj_gen_xs[model_i] = np.array(traj_gen_xs[model_i])
isnan_index = np.nonzero(np.isnan(traj_ys[model_i][0]))[0]
first_isnan_ix = None
if len(isnan_index) > 0 :
first_isnan_ix = isnan_index[0]
if first_isnan_ix is not None :
traj_ys[model_i] = traj_ys[model_i][:, :first_isnan_ix]
traj_gen_xs[model_i] = traj_gen_xs[model_i][:first_isnan_ix]
#Plot overlapping training trajectories
def plot_trajectories(traj_indices, traj_names, iteration_scales, iteration_constants, add_zeros, iterations, measures, model_names, measure_ix, x_label, y_label, colors=None, figsize=(6, 4), legend_outside=False, save_fig=False, fig_name="default_1", fig_dpi=150, min_x_val=0, max_x_val=None, min_y_val=None, max_y_val=None, log10_scale=False) :
f = plt.figure(figsize=figsize)
max_iter_val = 0
ls = []
for i, [model_ix, traj_name] in enumerate(zip(traj_indices, traj_names)) :
#for model_ix, [iters, all_meas] in enumerate(zip(iterations, measures)) :
iters, all_meas = iterations[model_ix], measures[model_ix]
meas = np.zeros(all_meas[measure_ix, :].shape)
meas[:] = all_meas[measure_ix, :]
iters_copy = np.zeros(iters.shape)
iters_copy[:] = iters[:]
if add_zeros[i] is not None :
iters_copy = np.concatenate([np.array([0]), iters_copy], axis=0)
meas = np.concatenate([np.array([add_zeros[i]]), meas], axis=0)
iters_copy[1:] = iters_copy[1:] * iteration_scales[i] + iteration_constants[i]
if log10_scale :
iters_copy[1:] = np.log10(iters_copy[1:])
max_iter_val = max(max_iter_val, np.max(iters_copy))
l1 = None
if colors is not None :
l1 = plt.plot(iters_copy, meas, color=colors[model_ix], linewidth=2, label=traj_name)
else :
l1 = plt.plot(iters_copy, meas, linewidth=2, label=traj_name)
ls.append(l1[0])
if log10_scale :
plt.xticks(np.arange(int(max_iter_val) + 1), 10**np.arange(int(max_iter_val) + 1), fontsize=14, rotation=45)
else :
plt.xticks(fontsize=14, rotation=45)
plt.yticks(fontsize=14)
if max_x_val is not None :
plt.xlim(min_x_val, max_x_val)
else :
plt.xlim(min_x_val, max_iter_val)
if min_y_val is not None and max_y_val is not None :
plt.ylim(min_y_val, max_y_val)
plt.xlabel(x_label, fontsize=14)
plt.ylabel(y_label, fontsize=14)
if not legend_outside :
plt.legend(handles=ls, fontsize=14, loc='upper left')
else :
plt.legend(handles=ls, fontsize=14, bbox_to_anchor=(1.04,1), loc="upper left")
plt.tight_layout()
if save_fig :
plt.savefig(fig_name + ".eps")
plt.savefig(fig_name + ".svg")
plt.savefig(fig_name + ".png", dpi=fig_dpi, transparent=True)
plt.show()
#Plot trajectory data
experiment_suffix = "_traj_comparisons_singlesample"
model_colors = ['indigo', 'black', 'dimgrey']
figsize = (12, 4)
#Generator time scale
plot_trajectories(
[0, 0, 0, 2, 2, 2],
[
"Simulated Annealing (1000 iters) - 1,000 Seqs",
"Simulated Annealing (1000 iters) - 100,000 Seqs",
"Simulated Annealing (1000 iters) - 10,000,000 Seqs",
"DEN Earthm - 1,000 Seqs",
"DEN Earthm - 100,000 Seqs",
"DEN Earthm - 10,000,000 Seqs",
],
[1000.0, 100000.0, 10000000.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1000.0, 100000.0, 10000000.0],
[0, 0, 0, 0, 0, 0],
traj_gen_xs,
traj_ys,
traj_names,
0,
'Generator calls',
'Fitness score',
colors=model_colors,
min_x_val=3,
#max_x_val=40000,
min_y_val=0,
max_y_val=4,
figsize=figsize,
save_fig=True,
fig_name=problem_prefix + experiment_suffix + "_fitness_log_logscale",
legend_outside=True,
log10_scale=True
)
#Plot trajectory data
experiment_suffix = "_traj_comparisons_singlesample"
model_colors = ['indigo', 'black', 'dimgrey']
figsize = (12, 4)
#Generator time scale
plot_trajectories(
[0, 0, 0, 1, 1, 1],
[
"Simulated Annealing (1000 iters) - 1,000 Seqs",
"Simulated Annealing (1000 iters) - 100,000 Seqs",
"Simulated Annealing (1000 iters) - 10,000,000 Seqs",
"DEN Earthm - 1,000 Seqs",
"DEN Earthm - 100,000 Seqs",
"DEN Earthm - 10,000,000 Seqs",
],
[1000.0, 100000.0, 10000000.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1000.0, 100000.0, 10000000.0],
[0, 0, 0, 0, 0, 0],
traj_gen_xs,
traj_ys,
traj_names,
0,
'Generator calls',
'Fitness score',
colors=model_colors,
min_x_val=3,
#max_x_val=40000,
min_y_val=0,
max_y_val=4,
figsize=figsize,
save_fig=True,
fig_name=problem_prefix + experiment_suffix + "_fitness_log_logscale_2",
legend_outside=True,
log10_scale=True
)
from scipy.interpolate import interp1d
def plot_bars(traj_indices, traj_names, iteration_scales, iteration_constants, add_zeros, iterations, measures, model_names, measure_ix, x_label, y_label, colors=None, figsize=(6, 4), legend_outside=False, save_fig=False, fig_name="default_1", fig_dpi=150, min_x_val=0, max_x_val=None, min_y_val=None, max_y_val=None, log10_scale=False) :
max_iter_val = 0
max_meas_val = -np.inf
iter_interps = []
meas_interps = []
for i, [model_ix, traj_name] in enumerate(zip(traj_indices, traj_names)) :
#for model_ix, [iters, all_meas] in enumerate(zip(iterations, measures)) :
iters, all_meas = iterations[model_ix], measures[model_ix]
meas = np.zeros(all_meas[measure_ix, :].shape)
meas[:] = all_meas[measure_ix, :]
iters_copy = np.zeros(iters.shape)
iters_copy[:] = iters[:]
if add_zeros[i] is not None :
iters_copy = np.concatenate([np.array([0]), iters_copy], axis=0)
meas = np.concatenate([np.array([add_zeros[i]]), meas], axis=0)
iters_copy[1:] = iters_copy[1:] * iteration_scales[i] + iteration_constants[i]
max_iter_val = max(max_iter_val, np.max(iters_copy))
max_meas_val = max(max_meas_val, np.max(meas))
f_interp = interp1d(iters_copy, meas)
iter_interp = np.linspace(iters_copy[0], iters_copy[-1], 1000)
meas_interp = f_interp(iter_interp)
if log10_scale :
iter_interp[1:] = np.log10(iter_interp[1:])
iter_interps.append(iter_interp)
meas_interps.append(meas_interp)
if log10_scale :
max_iter_val = np.log10(max_iter_val)
meas_perc_50 = 0.5 * max_meas_val
meas_perc_80 = 0.8 * max_meas_val
meas_perc_95 = 0.95 * max_meas_val
meas_perc_99 = 0.99 * max_meas_val
f = plt.figure(figsize=figsize)
for i, [model_ix, traj_name] in enumerate(zip(traj_indices, traj_names)) :
iter_interp = iter_interps[i]
meas_interp = meas_interps[i]
first_iter_perc_50_ind = np.nonzero(meas_interp >= meas_perc_50)[0]
first_iter_perc_50_ix = iter_interp[first_iter_perc_50_ind[0]] if len(first_iter_perc_50_ind) > 0 else max_iter_val
first_iter_perc_80_ind = np.nonzero(meas_interp >= meas_perc_80)[0]
first_iter_perc_80_ix = iter_interp[first_iter_perc_80_ind[0]] if len(first_iter_perc_80_ind) > 0 else max_iter_val
first_iter_perc_95_ind = np.nonzero(meas_interp >= meas_perc_95)[0]
first_iter_perc_95_ix = iter_interp[first_iter_perc_95_ind[0]] if len(first_iter_perc_95_ind) > 0 else max_iter_val
first_iter_perc_99_ind = np.nonzero(meas_interp >= meas_perc_99)[0]
first_iter_perc_99_ix = iter_interp[first_iter_perc_99_ind[0]] if len(first_iter_perc_99_ind) > 0 else max_iter_val
if colors is not None :
#plt.bar([model_ix + 0.25 * int(i % 3)], [first_iter_perc_99_ix], width=0.25, color=colors[model_ix][3], edgecolor='black', linewidth=1, label=model_names[model_ix])
#plt.bar([model_ix + 0.25 * int(i % 3)], [first_iter_perc_95_ix], width=0.25, color=colors[model_ix][2], edgecolor='black', linewidth=1, label=model_names[model_ix])
plt.bar([model_ix + 0.25 * int(i % 3)], [first_iter_perc_80_ix], width=0.25, color=colors[model_ix][1], edgecolor='black', linewidth=1, label=model_names[model_ix])
plt.bar([model_ix + 0.25 * int(i % 3)], [first_iter_perc_50_ix], width=0.25, color=colors[model_ix][0], edgecolor='black', linewidth=1, label=model_names[model_ix])
plt.xticks([], [])
if log10_scale :
plt.yticks(np.arange(int(max_iter_val) + 1), 10**np.arange(int(max_iter_val) + 1), fontsize=14, rotation=45)
else :
plt.yticks(fontsize=14, rotation=45)
plt.yticks(fontsize=14)
if min_x_val is not None and max_x_val is not None :
plt.xlim(min_x_val, max_x_val)
if min_y_val is not None and max_y_val is not None :
plt.ylim(min_y_val, max_y_val)
plt.xlabel(x_label, fontsize=14)
plt.ylabel(y_label, fontsize=14)
if not legend_outside :
plt.legend(fontsize=14, loc='upper left')
else :
plt.legend(fontsize=14, bbox_to_anchor=(1.04,1), loc="upper left")
plt.tight_layout()
if save_fig :
plt.savefig(fig_name + ".eps")
plt.savefig(fig_name + ".svg")
plt.savefig(fig_name + ".png", dpi=fig_dpi, transparent=True)
plt.show()
experiment_suffix = "_traj_comparisons_bars_singlesample"
model_colors = [
[
'violet',
'mediumorchid',
'darkviolet',
'indigo'
],
[
'yellow',
'gold',
'orange',
'darkorange'
]
]
figsize = (12, 6)
#Generator time scale
plot_bars(
[0, 0, 0, 1, 1, 1],
[
"Simulated Annealing (1000 iters) - 1,000 Seqs",
"Simulated Annealing (1000 iters) - 100,000 Seqs",
"Simulated Annealing (1000 iters) - 10,000,000 Seqs",
"DEN Earthm - 1,000 Seqs",
"DEN Earthm - 100,000 Seqs",
"DEN Earthm - 10,000,000 Seqs",
],
[1000.0, 100000.0, 10000000.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1000.0, 100000.0, 10000000.0],
[0, 0, 0, 0, 0, 0],
traj_gen_xs,
traj_ys,
traj_names,
0,
'Generative Algorithm',
'Generator Calls',
colors=model_colors,
#min_x_val=3,
#max_x_val=40000,
min_y_val=3,
max_y_val=10.5,
figsize=figsize,
save_fig=True,
fig_name=problem_prefix + experiment_suffix + "_fitness_log_logscale_2",
legend_outside=True,
log10_scale=True
)
```
# Working With Jupyter Notebooks
Jupyter notebooks are interactive online notebooks that run directly in your web browser and can both display text and run code (such as Python).
In our course we will use Jupyter to provide some interactive content where you can learn new material and practice it at the same time, with instant feedback.
## The cell structure
Every notebook is made up of **cells**. You can navigate between cells using the arrow keys or by clicking on them with the mouse pointer.
Cells can either be **code cells** or **text cells** (using a formatting language called *Markdown*).
All cells you have seen in this notebook so far have been text cells. They can display text with various formatting, such as *italics* or **bold**, as well as math formulas, for example $\dfrac{x^2-9}{x+3}$.
If you double-click a text cell, or select it with the keyboard or your mouse and press `ENTER`, you will see the raw format (markdown) version of the text. If you press
> `Shift + Enter`,
the text cell will be formatted and display nicely.
---
Cells can also be code cells and contain lines of code in a *programming language*, typically *Python*:
```
for i in range(10):
print("I love math!")
```
If you select a code cell and press
> `Shift + Enter`,
Jupyter will run the code in that cell. Any output that the computation produces will be shown **right beneath the code cell**. Try that with the code cell above. Select it and press `Shift + Enter`.
We will use code cells mainly for *two purposes*:
1. **to perform mathematical computations**
2. **to enter and evaluate answers to problems**
## Initializing the notebook
The following code cell initializes the next part of the notebook. In particular, it loads the problems that you will be asked to solve. So please go ahead and run this cell using `Shift + Enter`.
```
"""
Notebook Initialization
"""
from cyllene import *
%initialize
```
---
<!-- The notebooks we will be using will have their code cells hidden by default. You can make them visible, if you want, by clicking on the `eye` button  in the toolbar above. This button will toggle code cells from hidden to visible and back. -->
<!-- ### Go ahead and press the 'eye' button to make all code cells invisible.
(If you know some Python and you would like to experiment, you can of course make all code sections visible and play around with the code. You can't "break" anything. For example, in the code cell above, change the 10 to a 4 and hit `SHIFT+ENTER`.) -->
## Computing with Jupyter notebooks
You can use code cells like a powerful calculator. Give it a try.
Go ahead and execute (i.e. `Shift+Enter`) the cell below.
```
2021*(17-6)
```
As you see, you enter calculations into code cells pretty much like you would enter them into a calculator, using `*` for multiplication.
We use `/` for division and `^` for exponentiation.
```
2^7/64
```
You can also use `**` for exponentiation. (This is actually the standard notation in Python.)
```
2**8
```
As usual, exponentiation binds stronger than the other operations. If you want to change that, you have to use parentheses `(,)`.
```
2^(7/64)
```
## Answering problems
We also use cells to enter answers to problems.
_Answer cells start with the line `%%answer`, followed by the problem name. You input your answer in the line below that and, as usual, press `Shift+Enter`._
### Problem 1
What is the smallest [*sphenic*](https://en.wikipedia.org/wiki/Sphenic_number) number?
```
%%answer Problem 1
30
```
Try solving the next problem yourself.
### Problem 2
$ \dfrac{2}{3} - \dfrac{1}{2} = $ ?
(_Fractions are entered as `a/b`._)
```
%%answer Problem 2
```
### Problem 3
$ 1.01+0.22 = $ ?
(*Decimal fractions are simply entered in the form `X.XXX`.*)
```
%%answer Problem 3
```
---
## Entering algebraic expressions
We will also need to enter algebraic expressions containing variables, such as $x$, along with operations like multiplication and exponentiation. The basic format remains pretty much the same.
When entering answers, as usual in mathematical notation, you may omit the multiplication symbol `*`.
### Problem 4
Simplify: $\qquad 5x - 3x$
```
%%answer Problem 4
```
---
The general template to input fractional expressions like $\dfrac{x^2-9}{x+3}$ is `(...)/(...)`.
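For example, the fraction $\dfrac{x^2-9}{x+3}$ shown at the start of this notebook would be entered as:

```
(x^2-9)/(x+3)
```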
### Problem 5
Enter the fraction $\qquad \dfrac{x+3}{x^2-1}$.
```
%%answer Problem 5
```
---
## Generating new problems with `Shift+Enter`
If you come across a problem heading starting with the 🔄 symbol, it means you can generate new versions of this problem as often as you like. Simply select the problem cell and hit `Shift + Enter`. Give it a try below.
### 🔄 More Practice
```
"""
Run this cell to generate a new problem
"""
generate_problem('6')
```

```
%%answer Problem 6
```
# Training Facemask Detection CNN for DPU compilation
In this notebook we show how to train a Convolutional Neural Network (CNN) for deployment on the DPU (ZCU104).
Steps:
* Loading and pre-processing the facemask dataset
* Training a CNN with Keras and Tensorflow
* Freezing the trained model
* Quantizing and evaluating the quantized model
* Compiling for DPU using the Vitis AI compiler
**Note**:
* This notebook should be run on a proper X86 machine
In my case:
* Distributor ID: Ubuntu
* Description: Ubuntu 18.04.3
* Release: 18.04.3
* Please make sure the following has been done in terminal before you start this notebook:
conda activate vitis-ai-tensorflow
yes | pip install matplotlib keras==2.2.5
```
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import random
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
import keras
from keras.layers import Dense, Conv2D, InputLayer, Flatten, MaxPool2D
from keras.preprocessing.image import ImageDataGenerator,img_to_array,load_img
from keras.applications.mobilenet_v2 import preprocess_input
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
```
## Load dataset
The facemask dataset contains 1916 images with a facemask and 1930 without. We have to preprocess the images before training.
```
print("The number of images with facemask labelled 'yes':",len(os.listdir('facemask-dataset/yes')))
print("The number of images without facemask labelled 'no':",len(os.listdir('facemask-dataset/no')))
```
## Preprocessing Dataset
We are going to preprocess the images: convert them to grayscale, resize them to 64x64, normalize the pixel values (here via MobileNetV2's `preprocess_input`, which scales them to [-1, 1]), and convert them to NumPy float32 arrays.
```
import cv2
import os
data = []
labels = []
mylist = os.listdir("facemask-dataset") # Set Dataset Folder
for x in mylist:
mylist2 = os.listdir("facemask-dataset/"+str(x))
label = str(x)
for y in mylist2:
# extract the class label from the folder name
# load the input image (64x64) and preprocess it
image = load_img("facemask-dataset/"+str(x)+"/"+str(y),color_mode="grayscale", target_size=(64, 64))
image = img_to_array(image)
image = preprocess_input(image)
# update the data and labels lists, respectively
data.append(image)
labels.append(label)
# convert the data and labels to NumPy arrays
data = np.array(data, dtype="float32")
labels = np.array(labels)
# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(x_train, x_test,y_train, y_test) = train_test_split(data, labels,
test_size=0.20, stratify=labels, random_state=42)
print('Training data: {}, {}'.format(x_train.shape, y_train.shape))
print('Test data: {}, {}'.format(x_test.shape, y_test.shape))
#The structure that we must obtain is the following:
# For training and test datasets: (#data, 64, 64, 1)
# For labels: (#labels, 2)
#Example:
#Training data: (3091, 64, 64, 1), (3091, 2)
#Test data: (773, 64, 64, 1), (773, 2)
```
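A subtlety in the labels cell above: with only two classes, `LabelBinarizer` returns a single 0/1 column, so `to_categorical` is what expands it to the `(n, 2)` one-hot shape the softmax head expects. A NumPy-only sketch of that expansion (toy labels, not the real dataset):

```
import numpy as np

# Stand-in for the LabelBinarizer output: with only two classes
# it is a single 0/1 column of shape (n, 1)
binarized = np.array([[0], [1], [1], [0]])

# to_categorical squeezes the trailing 1-dim and expands to one-hot (n, 2);
# np.eye(2) indexing reproduces that expansion
one_hot = np.eye(2)[binarized.ravel()]
print(one_hot.shape)  # (4, 2)
```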
## Let's check if the data set and labels are correct with a sample of 6 images
```
SampleSize=6
fig, axs = plt.subplots(1, SampleSize, figsize=(10, 10))
plt.tight_layout()
for i in range(SampleSize):
axs[i].imshow(x_train[i].reshape(64, 64), 'gray')
if(y_train[i][0]==0):
tempLabel = "Facemask ON"
else:
tempLabel = "Facemask OFF"
axs[i].set_title('{}'.format(tempLabel))
```
## Model Creation
Create a sequential model by passing a list of layers. Because mask detection is a
binary classification problem, two convolutional layers are enough to solve it.
# NOTE: Do not rename the input layer or the output layer.
```
model = keras.models.Sequential([
InputLayer(input_shape=(64, 64, 1), name='input_data'),
Conv2D(64, (3,3), activation='relu'),
MaxPool2D(pool_size=(2,2)),
Conv2D(128, (3,3), activation='relu'),
MaxPool2D(pool_size=(2,2)),
Flatten(),
Dense(64, activation='relu'),
Dense(2, activation='softmax', name='output_logits')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.summary()
```
Configure the model for training: choose desired optimizer, loss function
and metrics to observe over the training period.
```
model.compile(optimizer='adam',
loss="binary_crossentropy", # For binary classifiers always use binary_crossentropy
metrics=['accuracy'])
```
# Early stopping to avoid overfitting
```
from keras.callbacks import EarlyStopping
es = EarlyStopping(monitor='val_loss', mode='auto', verbose=1)
```
Now we can train the model.
```
history = model.fit(x_train,y_train,validation_data=(x_test, y_test),epochs=30, verbose=1 ,callbacks=[es])
```
We can inspect the training results by plotting the collected data in the
`history` object.
```
fig, axs = plt.subplots(1, 2, figsize=(12, 4))
axs[0].plot(history.history['loss'])
axs[0].set_title('Training loss')
axs[0].set(xlabel='Epochs', ylabel='Loss')
axs[1].plot(history.history['acc'])
axs[1].set_title('Training accuracy')
axs[1].set(xlabel='Epochs', ylabel='Accuracy')
plt.show()
loss, accuracy = model.evaluate(x_test,y_test)
print("Test loss: {}".format(loss))
print("Test accuracy: {}".format(accuracy))
```
The cell above also evaluates the trained model on the test dataset.
## Save checkpoint
```
saver = tf.train.Saver()
tf_session = keras.backend.get_session()
input_graph_def = tf_session.graph.as_graph_def()
save_path = saver.save(tf_session, './checkpoint.ckpt')
tf.train.write_graph(input_graph_def,
'./', 'face_binary.pb', as_text=False)
```
As well as saving the checkpoint we also need to make a note of the input
and output nodes of the graph for freezing and quantization. We made our
lives a bit easier by naming the input and output layers, which results
in our input and output nodes being named `input_data` and
`output_logits/Softmax` respectively. You can check the node names in the
list defined below.
# DO NOT RENAME ANYTHING
```
nodes_names = [node.name for node in
tf.get_default_graph().as_graph_def().node]
print(nodes_names)
```
## Freeze Tensorflow graph
The Vitis AI flow requires a frozen model for quantization. We can obtain a binary
protobuf file of our frozen model by using the Tensorflow `freeze_graph` utility.
```
!freeze_graph \
--input_graph face_binary.pb \
--input_checkpoint checkpoint.ckpt \
--input_binary true \
--output_graph frozen.pb \
--output_node_names output_logits/Softmax
```
## Quantization
We will save some of our training data as calibration data for
`vai_q_tensorflow`, then use that along with the frozen graph to quantize our model.
`vai_q_tensorflow inspect` can be used to confirm available input and output
node names.
```
!vai_q_tensorflow inspect --input_frozen_graph=frozen.pb
np.savez('./calib_data.npz', data = x_train[:1000])
```
We will save a portion of our training data for quantization.
The recommended number is around 100-1000 images (1000 in this case).
```
%%writefile input_func.py
import numpy as np
data = np.load('calib_data.npz')['data']
batch_size=10 # Images per batch
def calib_input(iter):
calib_data = data[iter*batch_size:(iter+1)*batch_size]
return {'input_data': calib_data}
```

```
!vai_q_tensorflow quantize \
--input_frozen_graph frozen.pb \
--input_fn input_func.calib_input \
--output_dir quantized \
--input_nodes input_data \
--output_nodes output_logits/Softmax \
--input_shapes ?,64,64,1 \
--calib_iter 100 # Number of calibration iterations (100 x batch_size = 1000 images)
```
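The calibration settings above are linked: `calib_iter` times `batch_size` should equal the number of saved calibration images (here 100 × 10 = 1000). A standalone sketch of the batching logic, using a stand-in array in place of `calib_data.npz`:

```
import numpy as np

batch_size = 10
data = np.zeros((1000, 64, 64, 1))  # stand-in for the saved calibration images

def calib_input(it):
    # Same slicing as input_func.py: one batch per calibration iteration
    return {'input_data': data[it * batch_size:(it + 1) * batch_size]}

n_iter = len(data) // batch_size  # 100 iterations cover all 1000 images
print(n_iter, calib_input(99)['input_data'].shape[0])
```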
## Evaluate quantized model
The quantizer produces a special model called `quantize_eval_model.pb`,
which we can load like a regular Tensorflow binary graph and
evaluate its performance.
Because we already have a graph definition in our session, we need to
reset the default graph so as to not interfere with the graph we are
about to load from the frozen model.
```
tf.reset_default_graph()
```
In order to evaluate a quantized model we have to import `tensorflow.contrib.decent_q`,
otherwise the model evaluation will error out. We will use standard
Tensorflow 1 to set up the graph for evaluation.
In the next cell, we will read in a frozen binary graph and add the
accuracy metric.
```
import tensorflow.contrib.decent_q
with tf.gfile.GFile('quantized/quantize_eval_model.pb', "rb") as f:
graph = tf.GraphDef()
graph.ParseFromString(f.read())
tf.import_graph_def(graph,name = '')
input_data = tf.get_default_graph().get_tensor_by_name('input_data'+':0')
labels = tf.placeholder(tf.int64, shape=[None,])
logits = tf.get_default_graph().get_tensor_by_name(
'output_logits/Softmax'+':0')
nn_output = tf.argmax(logits, 1)
correct_prediction = tf.equal(nn_output, labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
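Note that the cell above only builds the accuracy metric; computing it still requires a session run along the lines of `sess.run(accuracy, feed_dict={input_data: x_test, labels: np.argmax(y_test, axis=1)})` (hypothetical, assuming `x_test`/`y_test` from the earlier cells). The metric itself reduces to the following NumPy computation, shown standalone with toy logits:

```
import numpy as np

# Toy softmax outputs for 4 samples / 2 classes, plus true class indices
logits = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labels = np.array([0, 1, 1, 1])

predictions = np.argmax(logits, axis=1)    # mirrors tf.argmax(logits, 1)
accuracy = np.mean(predictions == labels)  # mirrors tf.reduce_mean(tf.cast(...))
print(accuracy)  # 0.75 (3 of 4 correct)
```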
## Compilation
Now that we are satisfied with our quantized model accuracy, we can compile it and move onto the target.
This example targets the ZCU104.
You can target a different architecture by
specifying its configuration json file with the `--arch` flag.
/opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU104/arch.json #ZCU104
```
!sudo chmod -R 777 *
!vai_c_tensorflow \
--frozen_pb quantized/deploy_model.pb \
--arch /opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU104/arch.json \
--output_dir . \
--net_name face_binary_classifier
```
```
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
def calculateSimpleInterest(currentTotal,deltaT,apy):
interest = currentTotal*apy/365*deltaT
return interest
def compoundValue(currentTotal,deltaT,apy,fee):
### currentTotal in coins. deltaT in days. apy in decimal. fee in coins.
interest = calculateSimpleInterest(currentTotal,deltaT,apy)
newTotal = currentTotal+interest-fee
return newTotal
def simulateTime(principal,deltaT,apy,fee,timeLength):
### principal in coins. deltaT in days. apy in decimal. fee in coins. timeLength in years
compoundingEvents = int(timeLength/(deltaT/365.))
activeTotal = principal
for event in range(1,compoundingEvents+1):
activeTotal = compoundValue(activeTotal,deltaT,apy,fee)
#Account for interest gained during the final part of the time frame. Assumes you pull out at the end of the year no matter what.
fractionalEvent = timeLength/(deltaT/365)%1
#Approximating the end of the timeframe as getting a percentage of the growth over the next deltaT. Crude because it'll overestimate for small fractions but it should work.
activeTotal = activeTotal + fractionalEvent*calculateSimpleInterest(activeTotal,deltaT,apy)
return activeTotal
vectorizedSimulateTime = np.vectorize(simulateTime)
#Sanity checking formulas
print("The calculated total:", simulateTime(10,365,0.1,0.0014,1))
print("The interest should be 1. It is:", calculateSimpleInterest(10,365,0.1))
print("The total should be:", 10+1-0.0014,"\n")
print("If there's no interest and no fees:",simulateTime(10,365,0,0,1))
print("If there's no fees:",simulateTime(10,365,0.1,0,1),"\n")
print("Approximately continuous with no fees:",simulateTime(10,0.001,0.1,0,1), ". Should be:", 10*np.exp(0.1*1))
print("The difference: ",simulateTime(10,0.001,0.1,0,1)-10*np.exp(0.1*1)," can be attributed to numerical error/continuous approximation error.")
```
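As one more standalone sanity check (a sketch independent of the cells above), the no-fee case can be compared against the closed-form discrete-compounding formula `A = P*(1 + r*deltaT/365)**n`:

```
# Closed-form discrete compounding with no fees: A = P*(1 + r*dt/365)**n
P, r, dt, years = 10.0, 0.10, 365.0, 1.0
n = int(years / (dt / 365.0))                # number of compounding events
closed_form = P * (1 + r * dt / 365.0) ** n
print(round(closed_form, 6))  # 11.0 -> matches the no-fee sanity check above
```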
# **EDIT THESE PARAMETERS**
```
### Maximum finding parameters:
principal = 50
apy = 0.1
fee = 0.004 # Important to include the total fees involved in the whole process including the redelegation of the reward
timeLength=1 # Years
# The minimum deltaT is the amount of time to accumulate 1 fee's worth of a token
minimumDeltaT=fee/(principal*apy/365)
# Negating function to find the minimum
neg_simulateTime = lambda deltaT: -1*simulateTime(principal,deltaT,apy,fee,timeLength)
brent_deltaT_o = opt.minimize_scalar(neg_simulateTime,bracket=(minimumDeltaT,timeLength))
print("Optimal re-investing time:", brent_deltaT_o.x)
print("Due to approximations, take this with a grain of salt. It's more useful to look at the graphs below.")
endpoint=timeLength*365
x = np.linspace(minimumDeltaT,endpoint,num=2000)
y = vectorizedSimulateTime(principal,x,apy,fee,timeLength)
plt.plot(x,y)
plt.xlim((-20,endpoint))
plt.title("Coins vs deltaT")
plt.xlabel("deltaT (days)")
plt.ylabel("Coins")
plt.show()
plt.plot(x,y)
plt.title("Zoomed in")
plt.xlabel("deltaT (days)")
plt.ylabel("Coins")
plt.xlim((-10,endpoint/4))
plt.plot(x,y)
plt.title("Zoomed in")
plt.xlabel("deltaT (days)")
plt.ylabel("Coins")
plt.xlim((-10,200))
```
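To make the `minimumDeltaT` bound above concrete, here is a worked check with the same parameters (principal 50, apy 0.1, fee 0.004):

```
# Break-even interval: days needed for simple interest to cover one fee
principal, apy, fee = 50.0, 0.10, 0.004
daily_interest = principal * apy / 365   # coins accrued per day
min_dt = fee / daily_interest            # days to earn back one fee
print(round(min_dt, 3))  # 0.292 -> compounding more often than this loses coins
```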
# Recommendations with IBM
In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
## Table of Contents
I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
II. [Rank Based Recommendations](#Rank)<br>
III. [User-User Based Collaborative Filtering](#User-User)<br>
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
V. [Matrix Factorization](#Matrix-Fact)<br>
VI. [Extras & Concluding](#conclusions)
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
%matplotlib inline
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df
# Show df_content to get an idea of the data
df_content
```
### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
```
# Number of articles
num_articles = df['article_id'].nunique()
print('Number of unique articles read by users:', num_articles)
# Number of readers
num_readers = df['email'].nunique()
print('Number of readers:', num_readers)
# What is the distribution of how many articles a user interacts with in the dataset?
df_grouping_user = df.groupby(['email']).count()['title']
ax = df_grouping_user.plot.hist(align='mid',bins=70, range=[0,90])
ax.set_xticks(np.arange(0, 90, 5), minor=False)
plt.xlabel('Number of articles')
plt.ylabel('Number of readers');
```
On the abscissa of the histogram we have the number of articles and on the ordinate the number of readers. This shows that the majority of readers have read only a few articles. For a more granular (per-user) depiction of how many and which articles each user reads, we provide the following table:
```
# Provide a visual and descriptive statistics to assist with giving a look at the number of times each user
# interacts with an article.
# This table shows how many times each user has interacted with an article
df.groupby(['email', 'article_id']).count()
```
A histogram for each user would be too resource intensive; the table above summarizes that information. We encourage the reader to take this dataframe and explore some of the users.
```
# Fill in the median and maximum number of user-article interactions below
median_val = df_grouping_user.median() # 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = df_grouping_user.max() # The maximum number of user-article interactions by any 1 user is ______.
email_of_assiduous_reader = df.groupby(['email']).count()['title']\
[df.groupby(['email']).count()['title']==max_views_by_user].index[0]
print('50% of individuals interact with {} number of articles or fewer.'.format(median_val))
print('The maximum number of user-article interactions by any 1 user is {}.'.format(max_views_by_user))
print('The hash of the most assiduous reader\'s email is:', email_of_assiduous_reader)
```
We see from the histogram that the dataset is strongly positively skewed; let us check the mean and the std. The mean should be larger than the median, and the larger the difference, the greater the skewness. The std should be large as well.
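As a quick illustrative check with toy data (not the real dataset): in a right-skewed sample the mean exceeds the median, and pandas' `skew()` comes out positive.

```python
import pandas as pd

# Toy right-skewed sample: many small values, a few very large ones
s = pd.Series([1] * 50 + [2] * 30 + [5] * 15 + [50] * 5)
print(s.median())          # small
print(round(s.mean(), 2))  # pulled up by the heavy right tail
print(round(s.skew(), 2))  # positive for right skew
```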
```
average_number_of_articles_read = df_grouping_user.mean()
std_number_of_articles_read = df_grouping_user.std()
skew_number_of_articles_read = df_grouping_user.skew()
print('The mean is', average_number_of_articles_read)
print('The std is', std_number_of_articles_read)
print('The skewness is', skew_number_of_articles_read)
```
Indeed, the dataset is very skewed; even the skewness score itself is very high.
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
```
# Find and explore duplicate articles
duplicate_articles = df_content.duplicated(subset=['article_id'])
print('There are {} duplicate articles'.format(duplicate_articles.value_counts()[True]))
df_content.loc[duplicate_articles == True]
```
The titles of these duplicates do not reveal any obvious pattern. Now it is time to remove them from df_content.
```
# Remove any rows that have the same article_id - only keep the first
df_content.drop_duplicates(subset=['article_id'], inplace=True, keep='first')
print(df_content.duplicated(subset=['article_id']).value_counts())
print('Now there are 0 duplicate articles.')
```
`3.` Use the cells below to find:
**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
**c.** The number of unique users in the dataset. (excluding null values) <br>
**d.** The number of user-article interactions in the dataset.
```
# We check the unique values of the articles read by users:
unique_articles = df['article_id'].nunique() # The number of unique articles that have at least one interaction
# We already deleted the duplicates, so we just need to count the total number of articles in df_content:
total_articles = df_content.shape[0] # The number of unique articles on the IBM platform
# We check the unique emails:
unique_users = df['email'].nunique() # The number of unique users
# we need to count the number of rows in df
user_article_interactions = df.shape[0] # The number of user-article interactions
print('Unique articles', unique_articles)
print('Total articles', total_articles)
print('Unique users', unique_users)
print('User article interactions', user_article_interactions)
```
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
```
# For truncating in case there are more decimal numbers
def truncate(n, decimals=0):
multiplier = 10 ** decimals
return int(n * multiplier) / multiplier
# Value counts provides an ordered list, so the first element is the most viewed article
most_viewed_article_id = str(truncate(df['article_id'].value_counts().index[0], 1)) # The most viewed article in the dataset as a string with one value following the decimal
max_views = df['article_id'].value_counts().max() # The most viewed article in the dataset was viewed how many times?
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
print(sol_1_dict)
```
### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
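Since popularity here is just an interaction count, `value_counts()` already gives the ranking; a toy sketch with hypothetical article ids:

```python
import pandas as pd

# Each row is one user-article interaction; popularity = row count per article
toy = pd.DataFrame({'article_id': [1, 2, 1, 3, 1, 2]})
top_2 = toy['article_id'].value_counts().index[:2].tolist()
print(top_2)  # article 1 (3 interactions), then article 2 (2 interactions)
```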
`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
```
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
    # Ids of the top n articles, ordered by interaction count
    top_article_ids = df['article_id'].value_counts().index[:n]
    # Map each article_id to its title (first occurrence), keeping the popularity order
    id_to_title = df.drop_duplicates(subset=['article_id']).set_index('article_id')['title']
    top_articles = id_to_title.loc[top_article_ids].tolist()
    return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article ids
'''
    # Create list of top article ids as strings with one decimal
    top_articles = np.array(df['article_id'].value_counts().index[:n])
top_articles_str = []
for top_article in top_articles:
top_articles_str.append(str(truncate(top_article, 1)))
return top_articles_str # Return the top article ids
print(get_top_articles(10))
print(get_top_article_ids(10))
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
```
### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
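A minimal sketch of the reshaping with toy data: repeated interactions must still become a single 1, which capping at 1 via `clip` handles.

```python
import pandas as pd

# Toy interaction log: user 1 read article 10 twice
toy = pd.DataFrame({'user_id':    [1, 1, 2, 2, 2],
                    'article_id': [10, 10, 10, 20, 30]})
ui = (toy.groupby(['user_id', 'article_id'])
         .size()                 # interaction counts
         .unstack(fill_value=0)  # users as rows, articles as columns
         .clip(upper=1))         # cap repeated interactions at 1
print(ui)
```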
```
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
# We first drop the duplicates
user_item = df.drop_duplicates(subset=['user_id', 'article_id'], inplace=False, keep='first')
# Then we create the item matrix
user_item = user_item.groupby(['user_id', 'article_id']).size().unstack().fillna(0).astype(int)
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
```
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
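The dot product of two binary interaction vectors simply counts the articles both users have seen; a tiny numpy illustration:

```python
import numpy as np

# Binary interaction vectors over the same 5 articles
u1 = np.array([1, 0, 1, 1, 0])
u2 = np.array([1, 1, 1, 0, 0])
u3 = np.array([0, 1, 0, 0, 1])
print(u1 @ u2)  # 2 shared articles
print(u1 @ u3)  # 0 shared articles
```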
```
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered
'''
# array with the binary values for the article interactions
user_to_recommend_interactions = user_item.loc[user_id, :]
users_recommenders_interactions = user_item.drop([user_id])
    # We get the scores through the dot product. A Series, easy to order. We only need to transpose the matrix
user_scores = user_to_recommend_interactions.dot(users_recommenders_interactions.T)
# We sort values
most_similar_users = user_scores.sort_values(ascending=False).index.tolist()
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
```
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
```
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
article_names = df.loc[df['article_id'].isin(article_ids)]['title'].unique().tolist()
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the doc_full_name column in df_content)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
# Find the user in the dataframe
article_ids_series = user_item.loc[user_id, :]
# find the articles the user has seen (1)
article_ids_series = article_ids_series[article_ids_series == 1]
    # Extract the ids from the index of the created Series. Careful to convert them into strings and a list
article_ids = article_ids_series.index.astype('str').tolist()
article_names = get_article_names(article_ids)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
# We get the list of items already seen by the user.
viewed_article_ids, viewed_article_names = get_user_articles(user_id, user_item)
viewed_article_ids_set = set(viewed_article_ids)
# we have a list of similar users from largest similarity to lowest
similar_user_ids = find_similar_users(user_id, user_item)
recs = []
for similar_user_id in similar_user_ids:
potential_rec_article_ids, potential_rec_article_names = get_user_articles(similar_user_id, user_item)
potential_rec_article_ids_set = set(potential_rec_article_ids)
# We get the ids from the other user that the user to be recommended does not have
recommendations_set = potential_rec_article_ids_set - viewed_article_ids_set
for value in recommendations_set:
recs.append(value)
if len(recs) >= m:
break
return recs[:m] # return your recommendations for this user_id
# Check Results
get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
```
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
## Note: I do not return a column called neighbor_id, as it is more convenient for me to keep it in the index; it serves the same purpose (and is more accessible)
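The two-level ordering above reduces to a single `sort_values` call with two keys; a toy example with hypothetical neighbor ids:

```python
import pandas as pd

# Sort by similarity first, with total interactions as the tie-breaker
neighbors = pd.DataFrame({'similarity':       [3, 5, 3],
                          'num_interactions': [40, 10, 90]},
                         index=[101, 102, 103])
ordered = neighbors.sort_values(['similarity', 'num_interactions'],
                                ascending=[False, False])
print(ordered.index.tolist())  # [102, 103, 101]
```

User 102 wins on similarity alone; the tie between 101 and 103 is broken by their interaction counts.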
```
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
                    num_interactions - the number of articles viewed by the user
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
# First we find the similarity
# array with the binary values for the article interactions
user_to_recommend_interactions = user_item.loc[user_id, :]
users_recommenders_interactions = user_item.drop([user_id])
    # We get the scores through the dot product. A Series, easy to order. We only need to transpose the matrix
user_scores = user_to_recommend_interactions.dot(users_recommenders_interactions.T)
# Second we get the number of interactions per user.
    # We do not drop duplicates because we want the total number of interactions
# This provides you with how many interactions a user has had
user_interactions = df.groupby(['article_id', 'user_id']).size().unstack().sum()
# Now we organize them in a series with the same order as the ordered list of scores
user_interactions = user_interactions.loc[user_scores.index]
# We combine them into a dataframe
user_scores_interactions_df = pd.concat([user_scores, user_interactions], axis=1)
user_scores_interactions_df.columns = ['similarity', 'num_interactions']
# we sort them based on the 2 attributes
neighbors_df = user_scores_interactions_df.sort_values(['similarity', 'num_interactions'], ascending=[False, False])
return neighbors_df # Return the dataframe specified in the doc_string
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
        * Choose the articles with the most total interactions
before choosing those with fewer total interactions.
'''
# We get the list of items already seen by the user.
viewed_article_ids, viewed_article_names = get_user_articles(user_id, user_item)
viewed_article_ids_set = set(viewed_article_ids)
neighbors_df = get_top_sorted_users(user_id, df, user_item)
closest_neighbors = neighbors_df.index
article_interactions = df.groupby(['user_id', 'article_id']).size().unstack().sum()
article_interactions = pd.DataFrame({'article_id':article_interactions.index, 'num_interactions':article_interactions.values})
recs = []
for closest_neighbor in closest_neighbors:
potential_rec_article_ids, potential_rec_article_names = get_user_articles(closest_neighbor, user_item)
potential_rec_article_ids_set = set(potential_rec_article_ids)
# We get the ids from the other user that the user to be recommended does not have
recommendations_set = potential_rec_article_ids_set - viewed_article_ids_set
        # We order the recommendations by the number of interactions the articles had.
        # We only go through this part if there are articles to recommend.
new_recs = []
if recommendations_set != set():
for value in recommendations_set:
new_recs.append(value)
# Sort the values according to the number of interactions. We only select the ones from the closest neighbor
article_interactions_sorted = article_interactions.loc[article_interactions['article_id'].isin(new_recs)]\
.sort_values(by=['num_interactions'],ascending=False)
# We now save the new recs
for value in article_interactions_sorted['article_id']:
recs.append(value)
# We remove duplicates from the list, as 2 users might yield the same recommendation.
        # We maintain the order. Ref: https://www.w3schools.com/python/python_howto_remove_duplicates.asp
recs = list(dict.fromkeys(recs))
# We check if we already have enough recommendations
if len(recs) >= m:
break
recs = recs[:m]
rec_names = get_article_names(recs, df)
return recs, rec_names
a, b = user_user_recs_part2(1, m=10)
# The next two values must be the same. otherwise we included non-unique recommendations.
# For large values of m (like 1000), this non-uniqueness is noticeable
print(len(a))
print(pd.DataFrame(a, columns=['article_ids'])['article_ids'].nunique())
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
```
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
```
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).index[0] # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).index[9] # Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
```
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
**Neither, as they both depend on the articles the user has previously read. If the user is new, there is no way to compute similarity. We could make recommendations purely content-based, driven by the first things the new user liked. But before that, at the very beginning, I would use knowledge-based recommendation, so we can get some input from the user about what they like and offer content based on it. Once they start reading, we can use content-based and user-based recommendations. Nonetheless, the function we have already written that could be used is get_top_articles().**
`7.` Using your existing functions, provide the top 10 recommended articles you would provide for the a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
```
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = get_top_article_ids(10)# Your recommendations here
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
```
### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>
Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information.
`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
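As one possible starting point (a toy sketch with made-up titles, not tied to the real **df_content** columns), articles can be ranked by Jaccard word overlap between titles; a TF-IDF vectorizer with cosine similarity would be the more standard choice.

```python
import re
import numpy as np

def simple_content_recs(query_title, titles, n=3):
    '''Rank titles by Jaccard word overlap with the query (toy content-based recs).'''
    tokenize = lambda s: set(re.findall(r'[a-z]+', s.lower()))
    q = tokenize(query_title)
    scores = [len(q & tokenize(t)) / max(len(q | tokenize(t)), 1) for t in titles]
    order = np.argsort(scores)[::-1]  # highest overlap first
    return [titles[i] for i in order[:n]]

titles = ['deep learning with tensorflow', 'intro to machine learning',
          'cooking with python', 'machine learning with spark']
print(simple_content_recs('machine learning basics', titles, n=2))
```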
```
def make_content_recs():
'''
INPUT:
OUTPUT:
'''
```
`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
**Write an explanation of your content based recommendation system here.**
`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
# make recommendations for a brand new user
# make a recommendations for a user who only has interacted with article id '1427.0'
```
### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>
In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
```
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
```
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
```
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix)
u.shape, s.shape, vt.shape
user_item_matrix.isnull().sum().sum()
```
**It is different because, luckily, we do not have NaN values, so traditional SVD can be applied directly. Also, we do not have ratings; we only have interactions (binary values).**
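A toy illustration of the truncation used in the next cell: keep only k singular values/vectors and reconstruct a rank-k approximation of the matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
M = (rng.random((6, 5)) > 0.5).astype(float)  # toy binary interaction matrix, no NaNs
u, s, vt = np.linalg.svd(M)
k = 2
M_hat = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]  # rank-k reconstruction
err = np.abs(M - np.around(M_hat)).sum()       # disagreements after rounding back to 0/1
print(M_hat.shape, err)
```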
`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
```
print(u.shape)
print(s.shape)
print(vt.shape)
# Learning what slicing does
print(u[:, :10].shape)
print(s[:10].shape)
print(vt[:10, :].shape)
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
```
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
* How many users can we make predictions for in the test set?
* How many users are we not able to make predictions for because of the cold start problem?
* How many articles can we make predictions for in the test set?
* How many articles are we not able to make predictions for because of the cold start problem?
```
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
user_item_train = create_user_item_matrix(df_train)
user_item_test = create_user_item_matrix(df_test)
test_idx = df_test['user_id']
test_arts = df_test['article_id']
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
```
### For evaluation
```
# The users who are both in train and test, can be predicted in test
len(set(df_test.user_id.unique()).intersection(set(df_train.user_id.unique())))
# the users who are in test and not in train, have the cold start problem
len(set(df_test.user_id.unique()) - set(df_train.user_id.unique()))
# The articles who are both in test and train can be used to make predictions
len(set(df_test.article_id.unique()).intersection(set(df_train.article_id.unique())))
# the articles who are in test and not in train, have the cold start problem
len(set(df_test.article_id.unique()) - set(df_train.article_id.unique()))
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?': c,# letter here,
'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,# letter here,
'How many movies can we make predictions for in the test set?': b,# letter here,
'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d# letter here
}
t.sol_4_test(sol_4_dict)
```
`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
```
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below
# Check sizes
print('TRAIN')
print(u_train.shape)
print(s_train.shape)
print(vt_train.shape)
# We obtain u and vt for the test set from the u and vt of the train set.
# If we used SVD on the test matrix directly, it would not provide us with
# the necessary number of total latent features.
users_to_pred = set(df_test.user_id.unique()).intersection(set(df_train.user_id.unique()))
users_to_pred_bool = user_item_train.index.isin(users_to_pred)
# We take some rows but all the columns
u_test = u_train[users_to_pred_bool, :]
# For the columns we must take another approach
# The articles who are both in test and train can be used to make predictions
articles_to_pred = set(df_test.article_id.unique()).intersection(set(df_train.article_id.unique()))
articles_to_pred_bool = user_item_train.columns.isin(articles_to_pred)
# We take all rows but some columns
vt_test = vt_train[:, articles_to_pred_bool]
print('TEST')
print(u_test.shape)
print(s_train.shape)
print(vt_test.shape)
# Use these cells to see how well you can use the training
# decomposition to predict on test data
num_latent_feats = np.arange(10,700+10,20)
sum_errs_train = []
sum_errs_test = []
for k in num_latent_feats:
# restructure with k latent features
s_new_train, u_new_train, vt_new_train = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]
u_new_test, vt_new_test = u_test[:, :k], vt_test[:k, :]
# take dot product
user_item_est_train = np.around(np.dot(np.dot(u_new_train, s_new_train), vt_new_train))
user_item_est_test = np.around(np.dot(np.dot(u_new_test, s_new_train), vt_new_test))
# compute error for each prediction to actual value
diffs_train = np.subtract(user_item_train, user_item_est_train)
# We have to adjust the size because we are only testing 20 users
diffs_test = np.subtract(user_item_test.loc[user_item_test.index.isin(users_to_pred)], user_item_est_test)
# total errors and keep track of them
err_train = np.sum(np.sum(np.abs(diffs_train)))
sum_errs_train.append(err_train)
err_test = np.sum(np.sum(np.abs(diffs_test)))
sum_errs_test.append(err_test)
data_points_train = user_item_train.size
data_points_test = user_item_test.loc[user_item_test.index.isin(users_to_pred)].size
plt.plot(num_latent_feats, \
1 - np.array(sum_errs_train)/data_points_train, label='TRAIN');
plt.plot(num_latent_feats, \
1 - np.array(sum_errs_test)/data_points_test, label='TEST');
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
plt.legend();
```
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
**The test accuracy keeps dropping as more latent features are added, due to overfitting on the training data, so I would use fewer than 100. The overall accuracy is good. However, there were more than 600 users we could not make predictions for, so the algorithm on its own seriously lacks coverage. We could combine it with other recommendation types, such as content-based and knowledge-based recommendations, to improve the overall system. To judge whether any of these recommenders actually improves on how users currently find articles, an online A/B test comparing user engagement against the current system would give the most direct evidence.**
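One hedged way to make that comparison concrete — assuming we could run an online experiment and log, for each group, how many recommended articles were shown and clicked — is a two-proportion z-test on the click-through rates. The function name and all counts below are illustrative, not from this project:

```python
import math

def two_proportion_z_test(clicks_a, shown_a, clicks_b, shown_b):
    """Two-sided z-test for the difference between two click-through rates."""
    p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
    # pooled proportion under the null hypothesis of equal CTRs
    p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal: 2 * sf(|z|) = erfc(|z| / sqrt(2))
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# hypothetical counts: current article ranking (A) vs. SVD-based recommendations (B)
z, p = two_proportion_z_test(clicks_a=120, shown_a=4000, clicks_b=165, shown_b=4000)
print(z, p)
```

With these made-up numbers the uplift is statistically significant, but in practice the test duration and sample size would need to be planned up front.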
```
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
```
<p><font size="6"><b>Visualisation: Seaborn </b></font></p>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:jorisvandenbossche@gmail.com>, <mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
# Seaborn
[Seaborn](https://seaborn.pydata.org/) is a Python data visualization library:
* Built on top of Matplotlib, but providing
1. High level functions.
2. Support for _tidy data_, which became famous due to the `ggplot2` R package.
3. Attractive and informative statistical graphics out of the box.
* Interacts well with Pandas
```
import seaborn as sns
```
## Introduction
We will use the Titanic example data set:
```
titanic = pd.read_csv('data/titanic.csv')
titanic.head()
```
Let's consider the following question:
>*For each class on the Titanic and each gender, what was the average age?*
Hence, we should compute the *mean* of the `Age` column for the male and female groups of the `Sex` column in combination with the groups of the `Pclass` column. In Pandas terminology:
```
age_stat = titanic.groupby(["Pclass", "Sex"])["Age"].mean().reset_index()
age_stat
```
Providing this data in a bar chart with pure Pandas is still partly supported:
```
age_stat.plot(kind='bar')
## A possible other way of plotting this could be using groupby again:
#age_stat.groupby('Pclass').plot(x='Sex', y='Age', kind='bar') # (try yourself by uncommenting)
```
but with mixed results.
__Seaborn__ provides another level of abstraction to visualize such *grouped* plots with different categories:
```
sns.catplot(data=age_stat,
x="Sex", y="Age",
col="Pclass", kind="bar")
```
Check <a href="#this_is_tidy">here</a> for a short recap about `tidy` data.
<div class="alert alert-info">
**Remember**
- Seaborn is especially suitable for these so-called <a href="http://vita.had.co.nz/papers/tidy-data.pdf">tidy</a> dataframe representations.
- The [Seaborn tutorial](https://seaborn.pydata.org/tutorial/data_structure.html#long-form-vs-wide-form-data) provides a very good introduction to tidy (also called _long-form_) data.
- You can use __Pandas column names__ as input for the visualisation functions of Seaborn.
</div>
## Interaction with Matplotlib
Seaborn builds on top of Matplotlib/Pandas, adding an additional layer of convenience.
Topic-wise, Seaborn provides three main modules, i.e. type of plots:
- __relational__: understanding how variables in a dataset relate to each other
- __distribution__: specialize in representing the distribution of datapoints
- __categorical__: visualize a relationship involving categorical data (i.e. plot something _for each category_)
The organization looks like this:

We first check out the top commands of each of the types of plots: `relplot`, `displot`, `catplot`, each returning a Matplotlib `Figure`:
### Figure level functions
Let's start from: _What is the relation between Age and Fare?_
```
# A relation between variables in a Pandas DataFrame -> `relplot`
sns.relplot(data=titanic, x="Age", y="Fare")
```
Extend to: _Is the relation between Age and Fare different for people who survived?_
```
sns.relplot(data=titanic, x="Age", y="Fare",
hue="Survived")
```
Extend to: _Is the relation between Age and Fare different for people who survived and/or for the gender of the passengers?_
```
age_fare = sns.relplot(data=titanic, x="Age", y="Fare",
hue="Survived",
col="Sex")
```
The function returns a Seaborn `FacetGrid`, which is related to a Matplotlib `Figure`:
```
type(age_fare), type(age_fare.fig)
```
As we are dealing here with 2 subplots, the `FacetGrid` consists of two Matplotlib `Axes`:
```
age_fare.axes, type(age_fare.axes.flatten()[0])
```
Hence, we can still apply all the power of Matplotlib, but start from the convenience of Seaborn.
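As a minimal sketch of that combination — using a small inline DataFrame, since the idea is independent of the Titanic data — the returned `FacetGrid` exposes its Matplotlib `Axes`, so standard Matplotlib calls still apply:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

demo = pd.DataFrame({
    "x": [1, 2, 3, 1, 2, 3],
    "y": [2, 4, 6, 1, 2, 3],
    "group": ["a", "a", "a", "b", "b", "b"],
})
g = sns.relplot(data=demo, x="x", y="y", col="group")
# drop down to Matplotlib for fine-grained control of each subplot
for ax in g.axes.flatten():
    ax.axhline(3, color="grey", linestyle="--")  # reference line per subplot
    ax.set_xlabel("x value")
g.fig.suptitle("Seaborn figure, tweaked with Matplotlib", y=1.05)
```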
<div class="alert alert-info">
**Remember**
The `Figure` level Seaborn functions:
- Support __faceting__ by data variables (split up in subplots using a categorical variable)
- Return a Matplotlib `Figure`, hence the output can NOT be part of a larger Matplotlib Figure
</div>
### Axes level functions
In 'technical' terms, when working with Seaborn functions, it is important to understand at which level they operate: `Axes-level` or `Figure-level`:
- __axes-level__ functions plot data onto a single `matplotlib.pyplot.Axes` object and return the `Axes`
- __figure-level__ functions return a Seaborn object, `FacetGrid`, which manages a `matplotlib.figure.Figure`
Remember the Matplotlib `Figure`, `axes` and `axis` anatomy explained in [visualization_01_matplotlib](visualization_01_matplotlib.ipynb)?
Each plot module has a single `Figure`-level function (the top command in the scheme), which offers a unitary interface to its various `Axes`-level functions.
We can ask the same question: _Is the relation between Age and Fare different for people who survived?_
```
scatter_out = sns.scatterplot(data=titanic, x="Age", y="Fare", hue="Survived")
type(scatter_out)
```
But we can't use the `col`/`row` options for facetting:
```
# sns.scatterplot(data=titanic, x="Age", y="Fare", hue="Survived", col="Sex") # uncomment to check the output
```
We can use these functions to create custom combinations of plots:
```
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 6))
sns.scatterplot(data=titanic, x="Age", y="Fare", hue="Survived", ax=ax0)
sns.violinplot(data=titanic, x="Survived", y="Fare", ax=ax1) # boxplot, stripplot,.. as alternative to represent distribution per category
```
__Note!__ Check the similarity with the _best of both worlds_ approach:
1. Prepare with Matplotlib
2. Plot using Seaborn
3. Further adjust specific elements with Matplotlib if needed
<div class="alert alert-info">
**Remember**
The `Axes` level Seaborn functions:
- Do NOT support faceting by data variables
- Return a Matplotlib `Axes`, hence the output can be used in combination with other Matplotlib `Axes` in the same `Figure`
</div>
### Summary statistics
Aggregations such as `count`, `mean` are embedded in Seaborn (similar to other 'Grammar of Graphics' packages such as ggplot in R and plotnine/altair in Python). We can do these operations directly on the original `titanic` data set in a single coding step:
```
sns.catplot(data=titanic, x="Survived", col="Pclass",
kind="count")
```
To use another statistical function to apply on each of the groups, use the `estimator`:
```
sns.catplot(data=titanic, x="Sex", y="Age", col="Pclass", kind="bar",
estimator=np.mean)
```
## Exercises
<div class="alert alert-success">
**EXERCISE**
- Make a histogram of the age, split up in two subplots by the `Sex` of the passengers.
- Put both subplots underneath each other.
- Use the `height` and `aspect` arguments of the plot function to adjust the size of the figure.
<details><summary>Hints</summary>
- When interested in a histogram, i.e. the distribution of data, use the `displot` module
- A split into subplots is requested using a variable of the DataFrame (facetting), so use the `Figure`-level function instead of the `Axes` level functions.
- Link a column name to the `row` argument for splitting into subplots row-wise.
</details>
```
sns.displot(data=titanic, x="Age", row="Sex", aspect=3, height=2)
```
<div class="alert alert-success">
**EXERCISE**
Make a violin plot showing the `Age` distribution in each of the `Pclass` categories comparing for `Sex`:
- Use the `Pclass` column to create a violin plot for each of the classes. To do so, link the `Pclass` column to the `x-axis`.
- Use a different color for the `Sex`.
- Check the behavior of the `split` argument and apply it to compare male/female.
- Use the `sns.despine` function to remove the boundaries around the plot.
<details><summary>Hints</summary>
- Have a look at https://seaborn.pydata.org/examples/grouped_violinplots.html for inspiration.
</details>
```
# Figure based
sns.catplot(data=titanic, x="Pclass", y="Age",
hue="Sex", split=True,
palette="Set2", kind="violin")
sns.despine(left=True)
# Axes based
sns.violinplot(data=titanic, x="Pclass", y="Age",
hue="Sex", split=True,
palette="Set2")
sns.despine(left=True)
```
## Some more Seaborn functionalities to remember
Whereas `relplot`, `catplot` and `displot` represent the main components of the Seaborn library, more useful functions are available. You can check the [gallery](https://seaborn.pydata.org/examples/index.html) yourself, but let's introduce a few of them:
__jointplot()__ and __pairplot()__
`jointplot()` and `pairplot()` are Figure-level functions and create figures with specific subplots by default:
```
# joined distribution plot
sns.jointplot(data=titanic, x="Fare", y="Age",
hue="Sex", kind="scatter") # kde
sns.pairplot(data=titanic[["Age", "Fare", "Sex"]], hue="Sex") # Also called scattermatrix plot
```
__heatmap()__
Plot rectangular data as a color-encoded matrix.
```
titanic_age_summary = titanic.pivot_table(columns="Pclass", index="Sex",
values="Age", aggfunc="mean")
titanic_age_summary
sns.heatmap(data=titanic_age_summary, cmap="Reds")
```
__lmplot() regressions__
`Figure` level function to generate a regression model fit across a FacetGrid:
```
g = sns.lmplot(
data=titanic, x="Age", y="Fare",
hue="Survived", col="Survived", # hue="Pclass"
)
```
# Exercises data set road casualties
The [Belgian road casualties data set](https://statbel.fgov.be/en/themes/mobility/traffic/road-accidents) contains data about the number of victims involved in road accidents.
The script `load_casualties.py` in the `data` folder contains the routine to download the individual years of data, clean up the data and concatenate the individual years.
`%run` is an ['IPython magic'](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-run) function to run a Python file as if you would run it from the command line. Run `%run ./data/load_casualties.py --help` to check the input arguments required to run the script. As data is available since 2005, we download 2005 till 2020.
__Note__ As the script downloads the individual files, it can take a while to run it the first time.
```
# RUN THIS CELL TO PREPARE THE ROAD CASUALTIES DATA SET
%run ./data/load_casualties.py 2005 2020
```
When successful, the `casualties.csv` data is available in the `data` folder:
```
casualties = pd.read_csv("./data/casualties.csv", parse_dates=["datetime"])
```
The data contains the following columns:
- datetime: Date and time of the casualty.
- week_day: Weekday of the datetime.
- n_victims: Number of victims
- n_victims_ok: Number of victims without injuries
- n_slightly_injured: Number of slightly injured
- n_seriously_injured: Number of severely injured
- n_dead_30days: Number of dead within 30 days
- road_user_type: Road user, vehicle
- victim_type: Type of victim (pedestrian, driver, passenger,...)
- gender
- age
- road_type: Regional road, Motorway or Municipal road
- build_up_area: Outside or inside built-up area
- light_conditions: Day or night (with or without road lights), or dawn
- refnis_municipality: Postal reference ID number of municipality
- municipality: Municipality name
- refnis_region: Postal reference ID number of region
- region: Flemish Region, Walloon Region or Brussels-Capital Region
<div class="alert alert-success">
**EXERCISE**
Create a barplot with the number of victims ("n_victims") for each hour of the day. Before plotting, calculate the victims for each hour of the day with Pandas and assign it to the variable `victims_hour_of_day`. Update the column names to respectively "Hour of the day" and "Number of victims".
Use the `height` and `aspect` to adjust the figure width/height.
<details><summary>Hints</summary>
- The sum of victims _for each_ hour of the day requires `groupby`. One can create a new column with the hour of the day or pass the hour directly to `groupby`.
- The `.dt` accessor provides access to all kinds of datetime information.
- `rename` requires a dictionary with a mapping of the old vs new names.
- A bar plot is in seaborn one of the `catplot` options.
</details>
```
victims_hour_of_day = casualties.groupby(casualties["datetime"].dt.hour)["n_victims"].sum().reset_index()
victims_hour_of_day = victims_hour_of_day.rename(
columns={"datetime": "Hour of the day", "n_victims": "Number of victims"}
)
sns.catplot(data=victims_hour_of_day,
x="Hour of the day",
y="Number of victims",
kind="bar",
aspect=4,
height=3
)
```
<div class="alert alert-success">
**EXERCISE**
Create a barplot with the number of victims ("n_victims") for each hour of the day for each category in the gender column. Before plotting, calculate the victims for each hour of the day and each gender with Pandas and assign it to the variable `victims_gender_hour_of_day`.
Create a separate subplot for each gender category in a separate row and apply the `rocket` color palette.
Make sure to include the `NaN` values of the "gender" column as a separate subplot, called _"unknown"_ without changing the `casualties` DataFrame data.
<details><summary>Hints</summary>
- The sum of victims _for each_ hour of the day requires `groupby`. Groupby accepts multiple inputs to group on multiple categories together.
- `groupby` accepts a parameter `dropna=False`; alternatively, `fillna` is a useful function to replace the values in the gender column with the value "unknown".
- The `.dt` accessor provides access to all kinds of datetime information.
- Link the "gender" column with the `row` parameter to create a facet of rows.
- Use the `height` and `aspect` to adjust the figure width/height.
</details>
```
victims_gender_hour_of_day = casualties.groupby([casualties["datetime"].dt.hour, "gender"],
dropna=False)["n_victims"].sum().reset_index()
victims_gender_hour_of_day.head()
sns.catplot(data=victims_gender_hour_of_day.fillna("unknown"),
x="datetime",
y="n_victims",
row="gender",
palette="rocket",
kind="bar",
aspect=4,
height=3)
```
<div class="alert alert-success">
**EXERCISE**
Compare the number of victims for each day of the week for casualties that happened in "Flemish Region" on a "Motorway" with a "Passenger car" with the victim the "Driver" and of age 30 till 39.
Use a bar plot to compare the victims for each day of the week with Seaborn directly (do not use the `groupby`).
__Note__ The `week_day` is converted to an __ordered__ categorical variable. This ensures the days are sorted correctly in Seaborn.
<details><summary>Hints</summary>
- The first part of the exercise is filtering the data. Combine the statements with `&` and do not forget to provide the necessary brackets. The `.isin()` method to create a boolean condition might be useful for the age selection.
- Whereas using `groupby` to get to the counts is perfectly correct, using the `estimator` in Seaborn gives the same result.
__Note__ The `estimator=np.sum` is less performant than using Pandas `groupby`. After filtering the data set, the summation with Seaborn is a feasible option.
</details>
```
# Convert weekday to Pandas categorical data type
casualties["week_day"] = pd.Categorical(
casualties["week_day"],
categories=["Monday", "Tuesday", "Wednesday",
"Thursday", "Friday", "Saturday", "Sunday"],
ordered=True
)
fl_motorway_30s = casualties[(casualties["region"] == "Flemish Region") &
                             (casualties["road_type"] == "Motorway") &
                             (casualties["road_user_type"] == "Passenger car") &
                             (casualties["victim_type"] == "Driver") &
                             (casualties["age"].isin(["30 - 34", "35 - 39"]))
                             ]
sns.catplot(data=fl_motorway_30s,
x="week_day",
y="n_victims",
estimator=np.sum,
ci=None,
kind="bar",
color="#900C3F",
height=3,
aspect=4)
```
<div class="alert alert-success">
**EXERCISE**
Compare the relative number of deaths within 30 days (in relation to the total number of victims) in between the following "road_user_type"s: "Bicycle", "Passenger car", "Pedestrian", "Motorbike" for the year 2019 and 2020:
- Filter the data for the years 2019 and 2020.
- Filter the data on the road user types "Bicycle", "Passenger car", "Pedestrian" and "Motorbike". Call the new variable `compare_dead_30`.
- Count for each combination of year and road_user_type the total victims and the total deaths within 30 days victims.
- Calculate the percentage deaths within 30 days (add a new column "dead_prop").
- Use a horizontal bar chart to plot the results with the "road_user_type" on the y-axis and a separate color for each year.
<details><summary>Hints</summary>
- By setting `datetime` as the index, slicing time series can be done using strings to filter data on the years 2019 and 2020.
- Use `isin()` to filter "road_user_type" categories used in the exercise.
- Count _for each_... Indeed, use `groupby` with 2 inputs, "road_user_type" and the year of `datetime`.
- Deriving the year from the datetime: When having an index, use `compare_dead_30.index.year`, otherwise `compare_dead_30["datetime"].dt.year`.
- Dividing columns works element-wise in Pandas.
- A horizontal bar chart in seaborn is a matter of defining `x` and `y` inputs correctly.
</details>
```
# filter the data
compare_dead_30 = casualties.set_index("datetime")["2019":"2021"]
compare_dead_30 = compare_dead_30[compare_dead_30["road_user_type"].isin(
["Bicycle", "Passenger car", "Pedestrian", "Motorbike"])]
# Sum the victims and the deaths within 30 days for each year/road-user-type combination
compare_dead_30 = compare_dead_30.groupby(
["road_user_type", compare_dead_30.index.year])[["n_dead_30days", "n_victims"]].sum().reset_index()
# create a new column with the percentage of deaths
compare_dead_30["dead_prop"] = compare_dead_30["n_dead_30days"]/compare_dead_30["n_victims"] * 100
sns.catplot(data=compare_dead_30,
x="dead_prop",
y="road_user_type",
kind="bar",
hue="datetime"
)
```
<div class="alert alert-success">
**EXERCISE**
Create a line plot of the __monthly__ number for each of the categories of victims ('n_victims_ok', 'n_dead_30days', 'n_slightly_injured' and 'n_seriously_injured') as a function of time:
- Create a new variable `monthly_victim_counts` that contains the monthly sum of 'n_victims_ok', 'n_dead_30days', 'n_slightly_injured' and 'n_seriously_injured'.
- Create a line plot of the `monthly_victim_counts` using Seaborn. Choose any [color palette](https://seaborn.pydata.org/tutorial/color_palettes.html).
- Create an `area` plot (line plot with the individual categories stacked on each other) using Pandas.
What happens with the data registration since 2012?
<details><summary>Hints</summary>
- Monthly statistics from a time series requires `resample` (with - in this case - `sum`), which also takes the `on` parameter to specify the datetime column (instead of using the index of the DataFrame).
- Apply the resampling on the `["n_victims_ok", "n_slightly_injured", "n_seriously_injured", "n_dead_30days"]` columns only.
- Seaborn line plots works without tidy data when NOT providing `x` and `y` argument. It also works using tidy data. To 'tidy' the data set, `.melt()` can be used, see [pandas_07_reshaping.ipynb](pandas_07_reshaping.ipynb).
- Pandas plot method works on the non-tidy data set with `plot.area()` .
__Note__ Seaborn does not have an area plot.
</details>
```
monthly_victim_counts = casualties.resample("M", on="datetime")[
["n_victims_ok", "n_slightly_injured", "n_seriously_injured", "n_dead_30days"]
].sum()
sns.relplot(
data=monthly_victim_counts,
kind="line",
palette="colorblind",
height=3, aspect=4,
)
# Optional solution with tidy data representation (providing x and y)
monthly_victim_counts_melt = monthly_victim_counts.reset_index().melt(
id_vars="datetime", var_name="victim_type", value_name="count"
)
sns.relplot(
data=monthly_victim_counts_melt,
x="datetime",
y="count",
hue="victim_type",
kind="line",
palette="colorblind",
height=3, aspect=4,
)
# Pandas area plot
monthly_victim_counts.plot.area(colormap='Reds', figsize=(15, 5))
```
<div class="alert alert-success">
**EXERCISE**
Make a line plot of the daily victims (column "n_victims") in 2020. Can you explain the counts from March till May?
<details><summary>Hints</summary>
- To get the line plot of 2020 with daily counts, the data preparation steps are:
- Filter data on 2020. By defining `datetime` as the index, slicing time series can be done using strings.
- Resample to daily counts. Use `resample` with the sum on column "n_victims".
- Create a line plot. Do you prefer Pandas or Seaborn?
</details>
```
# Using Pandas
daily_total_counts_2020 = casualties.set_index("datetime")["2020":"2021"].resample("D")["n_victims"].sum()
daily_total_counts_2020.plot.line(figsize=(12, 3))
# Using Seaborn
sns.relplot(data=daily_total_counts_2020,
kind="line",
aspect=4, height=3)
```
<div class="alert alert-success">
**EXERCISE**
Combine the following two plots in a single Matplotlib figure:
- (left) The empirical cumulative distribution of the _weekly_ proportion of victims that died (`n_dead_30days` / `n_victims`) with a separate color for each "light_conditions".
- (right) The empirical cumulative distribution of the _weekly_ proportion of victims that died (`n_dead_30days` / `n_victims`) with a separate color for each "road_type".
Prepare the data for both plots separately with Pandas and use the variable `weekly_victim_dead_lc` and `weekly_victim_dead_rt`.
<details><summary>Hints</summary>
- The plot can not be made by a single Seaborn Figure-level plot. Create a Matplotlib figure first and use the __axes__ based functions of Seaborn to plot the left and right Axes.
- The data for both subplots need to be prepared separately, by `groupby` once on "light_conditions" and once on "road_type".
- Weekly sums (`resample`) _for each_ (`groupby`) "light_conditions" or "road_type"? Yes, you need to combine both here.
- [`sns.ecdfplot`](https://seaborn.pydata.org/generated/seaborn.ecdfplot.html#seaborn.ecdfplot) creates empirical cumulative distribution plots.
</details>
```
# weekly proportion of deadly victims for each light condition
weekly_victim_dead_lc = (
casualties
.groupby("light_conditions")
.resample("W", on="datetime")[["datetime", "n_victims", "n_dead_30days"]]
.sum()
.reset_index()
)
weekly_victim_dead_lc["dead_prop"] = weekly_victim_dead_lc["n_dead_30days"] / weekly_victim_dead_lc["n_victims"] * 100
# .. and the same for each road type
weekly_victim_dead_rt = (
casualties
.groupby("road_type")
.resample("W", on="datetime")[["datetime", "n_victims", "n_dead_30days"]]
.sum()
.reset_index()
)
weekly_victim_dead_rt["dead_prop"] = weekly_victim_dead_rt["n_dead_30days"] / weekly_victim_dead_rt["n_victims"] * 100
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 5))
sns.ecdfplot(data=weekly_victim_dead_lc, x="dead_prop", hue="light_conditions", ax=ax0)
sns.ecdfplot(data=weekly_victim_dead_rt, x="dead_prop", hue="road_type", ax=ax1)
```
<div class="alert alert-success">
**EXERCISE**
You wonder if there is a relation between the number of victims per day and the minimal daily temperature. A data set with minimal daily temperatures for the year 2020 is available in the `./data` subfolder: `daily_min_temperature_2020.csv`.
- Read the file `daily_min_temperature_2020.csv` and assign output to the variable `daily_min_temp_2020`.
- Combine the daily (minimal) temperatures with the `daily_total_counts_2020` variable
- Create a regression plot with Seaborn.
Does it make sense to present the data as a regression plot?
<details><summary>Hints</summary>
- `pd.read_csv` has a `parse_dates` parameter to load the `datetime` column as a Timestamp data type.
- `pd.merge` needs a (common) key to link the data.
- `sns.lmplot` or `sns.jointplot` are both seaborn functions to create scatter plots with a regression. Joint plot adds the marginal distributions.
</details>
```
# available (see previous exercises)
daily_total_counts_2020 = casualties.set_index("datetime")["2020": "2021"].resample("D")["n_victims"].sum()
daily_min_temp_2020 = pd.read_csv("./data/daily_min_temperature_2020.csv",
parse_dates=["datetime"])
daily_with_temp = daily_total_counts_2020.reset_index().merge(daily_min_temp_2020, on="datetime")
g = sns.jointplot(
data=daily_with_temp, x="air_temperature", y="n_victims", kind="reg"
)
```
# Need more Seaborn inspiration?
<div class="alert alert-info" style="font-size:18px">
__Remember__
[Seaborn gallery](https://seaborn.pydata.org/examples/index.html) and package [documentation](https://seaborn.pydata.org/index.html)
</div>
<a id='this_is_tidy'></a>
# Recap: what is `tidy`?
If you're wondering what *tidy* data representations are, you can read the scientific paper by Hadley Wickham, http://vita.had.co.nz/papers/tidy-data.pdf.
Here, we just introduce the main principle very briefly:
Compare:
#### un-tidy
| WWTP | Treatment A | Treatment B |
|:------|-------------|-------------|
| Destelbergen | 8. | 6.3 |
| Landegem | 7.5 | 5.2 |
| Dendermonde | 8.3 | 6.2 |
| Eeklo | 6.5 | 7.2 |
*versus*
#### tidy
| WWTP | Treatment | pH |
|:------|:-------------:|:-------------:|
| Destelbergen | A | 8. |
| Landegem | A | 7.5 |
| Dendermonde | A | 8.3 |
| Eeklo | A | 6.5 |
| Destelbergen | B | 6.3 |
| Landegem | B | 5.2 |
| Dendermonde | B | 6.2 |
| Eeklo | B | 7.2 |
This is sometimes also referred to as *wide* versus *long* format for a specific variable... Seaborn (and other grammar of graphics libraries) work better on `tidy` (long-format) data, as it better supports `groupby`-like operations!
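The reshaping itself is a one-liner in Pandas; a minimal sketch with the WWTP table above:

```python
import pandas as pd

# the un-tidy (wide) table from above
untidy = pd.DataFrame({
    "WWTP": ["Destelbergen", "Landegem", "Dendermonde", "Eeklo"],
    "Treatment A": [8.0, 7.5, 8.3, 6.5],
    "Treatment B": [6.3, 5.2, 6.2, 7.2],
})
# melt: one row per (WWTP, Treatment) observation -> tidy / long format
tidy = untidy.melt(id_vars="WWTP", var_name="Treatment", value_name="pH")
tidy["Treatment"] = tidy["Treatment"].str.replace("Treatment ", "")
print(tidy)
```

The resulting 8-row frame matches the tidy table above, ready for the `x`/`y`/`hue` arguments of Seaborn.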
<div class="alert alert-info" style="font-size:16px">
**Remember:**
A tidy data set is setup as follows:
- Each <code>variable</code> forms a <b>column</b> and contains <code>values</code>
- Each <code>observation</code> forms a <b>row</b>
- Each type of <code>observational unit</code> forms a <b>table</b>.
</div>
# Lesson 5: Vector Calculus
## 3D Plots and Implementation of Gaussian Quadrature
### Objectives:
1. Get to know the basic vector elements that can be worked with in Python.
2. Implement Gaussian Quadrature using the above.
### 1. Some classic plots from Multivariate Calculus.
#### 1.1. The Helix.
We can plot one of the best-known three-dimensional curves very easily using the powerful libraries available in Python. Let us study the problem of plotting the parametric curve:
$$x(t)=\cos(t) \\
y(t)=\sin(t) \\
z(t)=t$$
commonly known as the helix, which is the result of giving the 2-dimensional circle a height component.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = plt.axes(projection='3d')
# sample the helix x = cos(t), y = sin(t), z = t
t = np.linspace(0, 4 * np.pi, 100)
x = np.cos(t)
y = np.sin(t)
z = t
ax.scatter(x, y, z, c=z)  # color each point by its height
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot(x, y, z, '-b')
```
#### 1.2. The Cylinder.
The cylinder is obtained by sweeping a circle of fixed radius along the z axis. In Cartesian coordinates its locus is given by:
$$x^2+y^2 = r^2 , \quad r>0 $$
```
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x=np.linspace(-1, 1, 100)
z=np.linspace(-2, 2, 100)
Xc, Zc=np.meshgrid(x, z)
Yc = np.sqrt(1-Xc**2)
rstride = 20
cstride = 10
ax.plot_surface(Xc, Yc, Zc, alpha=0.2, rstride=rstride, cstride=cstride)
ax.plot_surface(Xc, -Yc, Zc, alpha=0.2, rstride=rstride, cstride=cstride)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
plt.show()
```
#### 1.3. The Lorenz Attractor
The Lorenz attractor is a three-dimensional dynamical system discovered by Edward Lorenz while studying convection in the Earth's atmosphere. The dynamical system that describes it is the following:
$$ \frac{dx}{dt} = a ( y - x ) \\
\frac{dy}{dt} = x ( b - z ) - y \\
\frac{dz}{dt} = xy - cz$$
Its plot is shown below:
```
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
a, b, c = 10., 28., 8. / 3.
def lorenz_map(X, dt = 1e-2):
X_dt = np.array([a * (X[1] - X[0]),
X[0] * (b - X[2]) - X[1],
X[0] * X[1] - c * X[2]])
return X + dt * X_dt
points = np.zeros((10000, 3))
X = np.array([.1, .0, .0])
for i in range(points.shape[0]):
points[i], X = X, lorenz_map(X)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in matplotlib 3.6
ax.plot(points[:, 0], points[:, 1], points[:, 2], c = 'k')
plt.show()
```
### 1.4. Scalar Field
We can plot a scalar field quite easily using the matplotlib and mpl_toolkits libraries. Below is a 3-dimensional surface plot of the radially symmetric sinc function.
```
import numpy as np
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
x = np.linspace(-3, 3, 256)
y = np.linspace(-3, 3, 256)
X, Y = np.meshgrid(x, y)
Z = np.sinc(np.sqrt(X ** 2 + Y ** 2))
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in matplotlib 3.6
ax.plot_surface(X, Y, Z, color='w')
plt.show()
```
#### 1.5. The Torus.
The torus, or donut, is one of the most famous parametric figures. The surface is described by the equations:
$$
x = (R + r \cos\alpha )\cos \beta \\
y = (R + r \cos\alpha ) \sin \beta \\
z = r\sin \alpha
$$
where R is the outer radius, r the inner radius, $ \alpha $ the latitude with respect to the xz plane, and $ \beta $ the angle of rotation about the z axis.
```
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# Generate torus mesh
angle = np.linspace(0, 2 * np.pi, 32)
theta, phi = np.meshgrid(angle, angle)
r, R = .25, 1.
X = (R + r * np.cos(phi)) * np.cos(theta)
Y = (R + r * np.cos(phi)) * np.sin(theta)
Z = r * np.sin(phi)
# Display the mesh
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in matplotlib 3.6
ax.set_xlim3d(-1, 1)
ax.set_ylim3d(-1, 1)
ax.set_zlim3d(-1, 1)
ax.plot_surface(X, Y, Z, color = 'w', rstride = 1, cstride = 1)
plt.show()
```
### 2. Elements of Vector Calculus
#### 2.1. Computing Arc Length.
One of the essential elements of vector calculus is the computation of the arc length of a curve in space. For this, we can use Python's powerful libraries and carry out the calculations quite simply.
Let $\vec{r}: [a,b] \rightarrow \mathbb{R}^3$ be a $\mathcal{C}^1$-parametrization of a piecewise-regular curve $\gamma$. Let $\mathcal{P}$ denote a partition of $[a , b]$ given by:
$$ \mathcal{P}=\{a=t_0, t_1, t_2, ..., t_n = b\} $$
Let us further write $\Delta t_i = t_i - t_{i-1}$ for $i = 1, 2, ..., n$ and $\delta = \delta (\mathcal{P})=\max_{i}(t_i-t_{i-1})$. If $\mathcal{P}_n(\delta)$ is the polygonal obtained by joining the points $\vec{r}(t_i)$ with $\vec{r}(t_{i+1})$, $i=0, 1, ...,n-1$, then the smaller $\delta$ is, the better the length of the polygonal $\mathcal{P}_n(\delta)$ approximates $\gamma$:
$$ l(\mathcal{P}_n(\delta)) = \sum_{i=1}^n\Vert \vec{r}(t_i)-\vec{r}(t_{i-1})\Vert = \sum_{i=1}^n \Vert \frac{\vec{r}(t_i)-\vec{r}(t_{i-1})}{t_i-t_{i-1}}\Vert \Delta t_i $$
Then, taking $\delta \rightarrow 0$, we obtain the length of the curve $\gamma$:
$$ l ( \mathcal{P}_n (\delta) ) \rightarrow l( \gamma ) = \int_{a}^b \Vert \frac{d\vec{r}}{dt}(t)\Vert dt$$
Another way to express this: given a function $f(x)$ continuous on $[a, b]$, the length of the curve from a to b is:
$$ L = \int_a^b \sqrt{1+(f'(x))^2}dx $$
If the curve is given parametrically by $x = x(t), y = y(t)$ with $a = x(\alpha), b = x ( \beta ), t \in [\alpha , \beta]$, then we can make the following change of variable:
$$ x = x(t) , dx = x'(t)dt $$ $$ f'(x) = \frac{dy}{dx} = \frac{dy}{dt} \frac{dt}{dx} = \frac{dy / dt}{dx / dt} = \frac{y'(t)}{x'(t)} $$
$$ \Rightarrow L = \int_{a}^b \sqrt{1 + (f'(x))^2}dx = \int_{\alpha}^{\beta} \sqrt{1 + ( \frac{y'(t)}{x'(t)} )^2}x'(t)dt = \int_{\alpha}^{\beta} \sqrt{(x'(t))^2+(y'(t))^2}dt $$
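As a quick numerical check of the formula $L = \int_a^b \sqrt{1+(f'(x))^2}\,dx$, here is a sketch (the function and interval are chosen arbitrarily) that computes the arc length of $f(x)=x^2$ on $[0,1]$ with `scipy.integrate.quad` and compares it against the closed form $\frac{\sqrt{5}}{2}+\frac{1}{4}\operatorname{asinh}(2)$:

```python
import numpy as np
from scipy import integrate

# arc length of f(x) = x^2 on [0, 1]: integrand sqrt(1 + (f'(x))^2) with f'(x) = 2x
integrand = lambda x: np.sqrt(1 + (2 * x) ** 2)
L, err = integrate.quad(integrand, 0, 1)

# closed-form antiderivative evaluated at the endpoints
exact = np.sqrt(5) / 2 + np.arcsinh(2) / 4
print(L, exact)  # both ≈ 1.4789
```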
#### Example:
Write a program to compute the arc length of the cycloid:
$$ x = k(t-\sin(t)), y = k(1- \cos(t)), t \in [0, 2\pi] $$ with k a positive real constant.
```
import numpy as np
from scipy import integrate
k = 1.2
t0 = 0
tn = 2 * np.pi
# integrand: sqrt(x'(t)^2 + y'(t)^2) = sqrt(k^2 (1 - cos t)^2 + k^2 sin^2 t)
arc = lambda z: np.sqrt(k ** 2 * (1 - np.cos(z)) ** 2 + k ** 2 * np.sin(z) ** 2)
integrate.quad(arc, t0, tn)  # the exact arc length of one arch is 8k
```
The integral above was computed with SciPy's powerful integrate library, whose `quad` routine evaluates a given integral numerically via adaptive Gauss–Kronrod quadrature (the Fortran QUADPACK library).
#### Challenge:
Extend the code above into a function that computes the arc length for arbitrary start and end values of $t$ as well as of the shape constant k.
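One possible shape for such a function (the name `cycloid_arc_length` is illustrative, not part of the lesson):

```python
import numpy as np
from scipy import integrate

def cycloid_arc_length(k, t_start, t_end):
    """Arc length of the cycloid x = k(t - sin t), y = k(1 - cos t) on [t_start, t_end]."""
    integrand = lambda t: np.sqrt(k**2 * (1 - np.cos(t))**2 + k**2 * np.sin(t)**2)
    length, _ = integrate.quad(integrand, t_start, t_end)
    return length

print(cycloid_arc_length(1.2, 0, 2 * np.pi))  # one full arch has exact length 8k = 9.6
```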
#### 2.2. Symbolic Integration of Line Integrals.
What can be done if one wants to use Python to verify a theoretical, rather than numerical, result for a line integral? The answer lies in the SymPy library, which enables symbolic computation in the language. In particular, for integration, the integrate module lets us compute both definite and indefinite integrals. A brief example of how to use this library follows.
#### Example:
Compute the line integral of the function $f( x, y ) = x^2y^2$ over a circle of unit radius.
```
from sympy import *
t, x, y = symbols("t, x, y")
C = Curve([cos(t), sin(t)], (t, 0, 2 * pi))
line_integrate(x**2 * y**2, C, [x, y])
```
#### 2.3. Multiple Integration
To evaluate integrals in higher dimensions, Python provides powerful extensions of quadrature. The routines are called dblquad and tplquad for the two- and three-dimensional cases respectively, both contained in SciPy. For an n-dimensional integral the method can be extended via the nquad module, for integrals of the form $\int...\int_D f(\vec{x})d\vec{x}$ over a suitable domain $D$.
For double integration, dblquad can be used to compute integrals of the form $\int_a^b\int_{g(x)}^{h(x)}f(x,y)\,dy\,dx$ with the syntax $dblquad(f,a,b,g,h)$, where $f, g, h$ are functions and $a, b$ constants. (Note that SciPy calls the integrand as $f(y, x)$, with the inner variable first.)
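A minimal sketch of the n-dimensional case mentioned above, using `scipy.integrate.nquad` to evaluate $\int_0^1\int_0^1\int_0^1 (x+y+z)\,dz\,dy\,dx$ (the integrand and box domain are chosen arbitrarily; the exact value is $3/2$):

```python
from scipy.integrate import nquad

# integrand of three variables; the ranges argument is one (lower, upper) pair per variable
f = lambda x, y, z: x + y + z
result, error = nquad(f, [(0, 1), (0, 1), (0, 1)])
print(result)  # → 1.5
```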
#### Example:
Compute the integral of the function $e^{-(x^2+y^2)}$ over the unit square $[0,1]\times[0,1]$.
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
def f(x, y):
return np.exp(-x**2-y**2)
fig, ax = plt.subplots(figsize=(6,5))
x = y = np.linspace(-1.25, 1.25, 75)
X, Y = np.meshgrid(x, y)
c = ax.contour(X, Y, f(X,Y), 15, cmap = mpl.cm.RdBu, vmin = -1, vmax = 1)
bound_rect = plt.Rectangle((0, 0), 1, 1, facecolor = "grey")
ax.add_patch(bound_rect)
ax.axis('tight')
ax.set_xlabel('$x$', fontsize = 18)
ax.set_ylabel('$y$', fontsize = 18)
```
The figure above shows the region of integration we will work with. The code for the integration follows.
```
import numpy as np
from scipy.integrate import dblquad
def f(x, y):
return np.exp(-x**2-y**2)
a, b = 0, 1
g = lambda x : 0
h = lambda x : 1
dblquad(f, a, b, g, h)  # ≈ 0.5577; f is symmetric, so SciPy's (y, x) argument order does not matter here
```
## Appendix
### The Gaussian Quadrature Method.
Gaussian quadrature is one of the most widely used numerical techniques for approximating definite integrals of functions. It essentially consists of using higher-order polynomials to approximate the function of interest. The method approximates the integral by:
$$ I_n(f)=\sum_{j=1}^n w_{j,n}f(x_{j,n})\approx\int_a^bw(x)f(x)dx $$
where the $w_{j,n}$ are called **weights** and $w(x)$ is a nonnegative, integrable weight function on $[a,b]$. The points $x_{j,n}$ are called **nodes** in the interval $[a,b]$. The essential idea is to choose both weights and nodes so that $I_n(f)$ equals $I(f)$ for polynomials of degree as high as possible.
To get an intuitive idea of how to construct $I_n(f)$, consider the special case of computing the integral:
$$\int_{-1}^1f(x)dx=\sum_{j=1}^nw_jf(x_j)$$
where $w(x)=1$. One way to choose the weights and nodes is to seek to make the error at the n-th step vanish:
$$E_n(f)=\int_{-1}^1f(x)dx-\sum_{j=1}^nw_jf(x_j)$$
for as large a class of polynomials as possible. To derive equations, first note that:
$$ E_n(a_0+a_1x+a_2x^2+...+a_mx^m )=a_0E_n(1)+a_1E_n(x)+...+a_mE_n(x^m) $$
that is, the error is a linear operator. Thus, $E_n(f)=0$ will hold for every polynomial of degree at most m if, and only if:
$$E_n(x^i)=0, \forall i=0,1,...,m$$
The following cases can then be distinguished:
#### **Case 1**: n=1. Since there are two parameters to determine, $w_1$ and $x_1$, we require:
$$ E_n(1)=0 $$ and $$ E_n(x)=0 $$
That is:
$$ \int_{-1}^1 1\,dx-w_1=0 $$ $$ \int_{-1}^1 x\,dx-w_1x_1=0 $$
$$ \Rightarrow w_1=2 \;\; \text{and} \;\; x_1=0$$
$$ \int_{-1}^1f(x)dx \approx 2f(0) $$
which is nothing more than the midpoint rule.
#### **Case 2**: n=2. In this case we must determine $w_1,w_2, x_1, x_2$, so 4 constraints are required:
$$ E_n(x^i)=\int_{-1}^1x^idx-[w_1x^i_1+w_2x^i_2 ]=0, i=0, 1, 2, 3 $$
or, equivalently:
$$ w_1+w_2=2 $$
$$ w_1x_1+w_2x_2=0 $$
$$ w_1x_1^2+w_2x_2^2=\frac{2}{3}$$
$$ w_1x_1^3+w_2x_2^3=0 $$
Solving this system leads to the unique formula:
$$ \int_{-1}^1f(x)dx \approx f(-\frac{\sqrt{3}}{3})+f(\frac{\sqrt{3}}{3}) $$
which has degree of precision 3.
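The two-point rule above can be checked with NumPy: `numpy.polynomial.legendre.leggauss` returns exactly these nodes $\pm\sqrt{3}/3$ with unit weights. As a sketch, we verify that the rule integrates an (arbitrarily chosen) cubic exactly:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

nodes, weights = leggauss(2)          # nodes ±1/sqrt(3), weights both 1
f = lambda x: x**3 + x**2             # test cubic
approx = np.sum(weights * f(nodes))   # two-point Gauss rule on [-1, 1]
exact = 2.0 / 3.0                     # ∫_{-1}^{1} (x^3 + x^2) dx
print(approx, exact)                  # agree to machine precision
```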
#### **Case 3**: For general n there are **2n** parameters $\{x_j\}$ and $\{w_j\}$ to determine, and we can expect a formula that uses n nodes and attains degree of precision $2n-1$. The equations to solve are:
$$E_n(x^i)=0, i=0, 1, 2, ...2n-1$$
or equivalently:
$$\sum_{j=1}^nw_jx_j^i=0, i= 1, 3, 5,...,2n-1$$
$$\sum_{j=1}^nw_jx_j^i=\frac{2}{i+1}, i=0, 2, 4,...,2n-2 $$
This is a nonlinear system whose solution turns out to be nontrivial. The following idea is used: let $\{\phi_n(x) \mid n\geq 0 \}$ be the polynomials orthogonal with respect to the weight $w(x)\geq 0$ on the interval $(a,b)$. Denote their roots by:
$$a<x_1<...<x_n<b$$
and write:
$$\phi_n(x)=A_nx^n+... \;, \quad a_n=\frac{A_{n+1}}{A_n} $$
$$\gamma_n=\int_a^bw(x)[\phi_n(x)]^2dx $$
The solution is given by the following Theorem:
**Theorem**: For each $n\geq 1$ there exists a unique numerical integration formula with degree of precision $2n-1$. Assuming $f(x)$ is $2n$ times continuously differentiable on $[a,b]$, the formula for $I_n(f)$ and its error are given by:
$$\int_a^bw(x)f(x)dx=\sum_{j=1}^nw_jf(x_j)+\frac{\gamma_n}{A^2_n(2n)!}f^{(2n)}(\eta)$$
for some $a < \eta < b$. The nodes $x_j$ are the zeros of $\phi_n(x)$ and the weights are given by:
$$ w_j=\frac{-a_n\gamma_n}{\phi_n'(x_j)\phi_{n+1}(x_j)} ,j=1,...,n$$
**Proof**: See Atkinson, pp. 272-276.
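For an arbitrary interval $[a,b]$ with $w(x)=1$, the rule on $[-1,1]$ is applied through the affine change of variable $x = \frac{b-a}{2}t + \frac{a+b}{2}$, scaling the weights by the Jacobian $\frac{b-a}{2}$. A sketch (the test integrand $e^{-x}$ on $[0,2]$ is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f on [a, b]."""
    t, w = leggauss(n)                       # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)    # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))  # scale weights by the Jacobian

f = lambda x: np.exp(-x)
approx = gauss_legendre(f, 0.0, 2.0, 5)
exact = 1.0 - np.exp(-2.0)                   # ∫_0^2 e^{-x} dx
print(approx, exact)
```

Already with n = 5 (degree of precision 9) the smooth integrand is reproduced to many digits, illustrating the $2n-1$ precision result of the theorem.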
# Flopy MODFLOW-2005 Boundary Conditions
Flopy has a new way to enter boundary conditions for some MODFLOW packages. These changes are substantial. Boundary conditions can now be entered as a list of boundaries, as a numpy recarray, or as a dictionary. These different styles are described in this notebook.
Flopy also now requires zero-based input. This means that **all boundaries are entered in zero-based layer, row, and column indices**. This means that older Flopy scripts will need to be modified to account for this change. If you are familiar with Python, this should be natural, but if not, then it may take some time to get used to zero-based numbering. Flopy users submit all information in zero-based form, and Flopy converts this to the one-based form required by MODFLOW.
The following MODFLOW-2005 packages are affected by this change:
* Well
* Drain
* River
* General-Head Boundary
* Time-Variant Constant Head
This notebook explains the different ways to enter these types of boundary conditions.
```
#begin by importing flopy
import os
import sys
import numpy as np
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('flopy version: {}'.format(flopy.__version__))
```
## List of Boundaries
Boundary condition information is passed to a package constructor as stress_period_data. In its simplest form, stress_period_data can be a list of individual boundaries, which themselves are lists. The following shows a simple example for a MODFLOW River Package boundary:
```
stress_period_data = [
[2, 3, 4, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 5, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 6, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
]
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
```
If we look at the River Package created here, you see that the layer, row, and column numbers have been increased by one.
```
!head -n 10 'data/test.riv'
```
If this model has more than one stress period, Flopy assumes that this boundary condition information applies until the end of the simulation.
```
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=3)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!head -n 10 'data/test.riv'
```
## Recarray of Boundaries
Numpy allows the use of recarrays, which are numpy arrays in which each column of the array may be given a different type. Boundary conditions can be entered as recarrays. Information on the structure of the recarray for a boundary condition package can be obtained from that particular package. The structure of the recarray is contained in the dtype.
```
riv_dtype = flopy.modflow.ModflowRiv.get_default_dtype()
print(riv_dtype)
```
Now that we know the structure of the recarray that we want to create, we can create a new one as follows.
```
stress_period_data = np.zeros((3), dtype=riv_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
```
We can then fill the recarray with our boundary conditions.
```
stress_period_data[0] = (2, 3, 4, 10.7, 5000., -5.7)
stress_period_data[1] = (2, 3, 5, 10.7, 5000., -5.7)
stress_period_data[2] = (2, 3, 6, 10.7, 5000., -5.7)
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!head -n 10 'data/test.riv'
```
As before, if we have multiple stress periods, then this recarray will apply to all of them.
```
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=3)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!head -n 10 'data/test.riv'
```
## Dictionary of Boundaries
The power of the new functionality in Flopy3 is the ability to specify a dictionary for stress_period_data. If specified as a dictionary, the key is the stress period number (**as a zero-based number**), and the value is either a nested list, an integer value of 0 or -1, or a recarray for that stress period.
Let's say that we want to use the following schedule for our rivers:
0. No rivers in stress period zero
1. Rivers specified by a list in stress period 1
2. No rivers
3. No rivers
4. No rivers
5. Rivers specified by a recarray
6. Same recarray rivers
7. Same recarray rivers
8. Same recarray rivers
```
sp1 = [
[2, 3, 4, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 5, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 6, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
]
print(sp1)
riv_dtype = flopy.modflow.ModflowRiv.get_default_dtype()
sp5 = np.zeros((3), dtype=riv_dtype)
sp5 = sp5.view(np.recarray)
sp5[0] = (2, 3, 4, 20.7, 5000., -5.7)
sp5[1] = (2, 3, 5, 20.7, 5000., -5.7)
sp5[2] = (2, 3, 6, 20.7, 5000., -5.7)
print(sp5)
sp_dict = {0:0, 1:sp1, 2:0, 5:sp5}
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=8)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=sp_dict)
m.write_input()
!head -n 10 'data/test.riv'
```
## MODFLOW Auxiliary Variables
Flopy works with MODFLOW auxiliary variables by allowing the recarray to contain additional columns of information. The auxiliary variables must be specified as package options as shown in the example below.
In this example, we also add a string in the last column of the list in order to name each boundary condition. In this case, however, we do not include boundname as an auxiliary variable as MODFLOW would try to read it as a floating point number.
```
#create an empty array with an iface auxiliary variable at the end
riva_dtype = [('k', '<i8'), ('i', '<i8'), ('j', '<i8'),
('stage', '<f4'), ('cond', '<f4'), ('rbot', '<f4'),
('iface', '<i4'), ('boundname', object)]
riva_dtype = np.dtype(riva_dtype)
stress_period_data = np.zeros((3), dtype=riva_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (2, 3, 4, 10.7, 5000., -5.7, 1, 'riv1')
stress_period_data[1] = (2, 3, 5, 10.7, 5000., -5.7, 2, 'riv2')
stress_period_data[2] = (2, 3, 6, 10.7, 5000., -5.7, 3, 'riv3')
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data, dtype=riva_dtype, options=['aux iface'])
m.write_input()
!head -n 10 'data/test.riv'
```
## Working with Unstructured Grids
Flopy can create an unstructured grid boundary condition package for MODFLOW-USG. This can be done by specifying a custom dtype for the recarray. The following shows an example of how that can be done.
```
#create an empty array based on nodenumber instead of layer, row, and column
rivu_dtype = [('nodenumber', '<i8'), ('stage', '<f4'), ('cond', '<f4'), ('rbot', '<f4')]
rivu_dtype = np.dtype(rivu_dtype)
stress_period_data = np.zeros((3), dtype=rivu_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (77, 10.7, 5000., -5.7)
stress_period_data[1] = (245, 10.7, 5000., -5.7)
stress_period_data[2] = (450034, 10.7, 5000., -5.7)
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data, dtype=rivu_dtype)
m.write_input()
print(workspace)
!head -n 10 'data/test.riv'
```
## Combining two boundary condition packages
```
ml = flopy.modflow.Modflow(modelname="test",model_ws=workspace)
dis = flopy.modflow.ModflowDis(ml,10,10,10,10)
sp_data1 = {3: [1, 1, 1, 1.0],5:[1,2,4,4.0]}
wel1 = flopy.modflow.ModflowWel(ml, stress_period_data=sp_data1)
ml.write_input()
!head -n 10 'data/test.wel'
sp_data2 = {0: [1, 1, 3, 3.0],8:[9,2,4,4.0]}
wel2 = flopy.modflow.ModflowWel(ml, stress_period_data=sp_data2)
ml.write_input()
!head -n 10 'data/test.wel'
```
Now we create a third wel package, using the ```MfList.append()``` method:
```
wel3 = flopy.modflow.ModflowWel(ml,stress_period_data=\
wel2.stress_period_data.append(
wel1.stress_period_data))
ml.write_input()
!head -n 10 'data/test.wel'
```
We ran a Nadaraya-Watson photo-z algorithm from astroML's implementation trained on four photometry bands from DES's science verification data release. This notebook produces a comparison of the photometric redshift estimates reported by DES (described in Bonnett et al. 2015), our Nadaraya-Watson redshift estimates based on DES's photometry, and the SDSS-confirmed spectroscopic redshifts for 258 SDSS DR7 and 919 DR12 quasars that were imaged in the DES science verification survey. We also indicate the object classification that DES applied to all objects to provide insight into how the methods perform on point sources vs. extended sources in the DES catalog.
Match summary: <br>
Dr12 quasars matched to DES sva1 gold catalog to obtain DES photometry, then matched to spAll-DR12.fits at https://data.sdss.org/sas/dr12/env/BOSS_SPECTRO_REDUX/ to obtain DCR offsets, then matched to GAIA dr2 to obtain proper motion data.
Dr7 quasars matched to DES sva1 gold catalog to obtain DES photometry, then matched to the DR7 PhotoObj table to obtain DCR offsets, then matched to GAIA dr2 to obtain proper motion data
919 out of 297301 quasars from SDSS DR12 survive the matching process <br>
258 out of 105783 quasars from SDSS DR7 survive the matching process <br>
```
import numpy as np
from astropy.table import Table
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import classification_report
from astroML.linear_model import NadarayaWatson
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import palettable
import richardsplot as rplot
%matplotlib inline
```
For the following code, the same process is repeated once for each of the four DES photo-z methods. The cells are marked with a comment at the top indicating which method each cell applies to. In general, only the ANNZ method cells are commented and should be used as the primary reference point.
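The Nadaraya-Watson estimator used in those cells is just a kernel-weighted average of the training redshifts. As a minimal, self-contained NumPy sketch on synthetic data (this is not the astroML implementation the notebook actually uses, and the stand-in features/targets are invented for illustration):

```python
import numpy as np

def nadaraya_watson(X_train, y_train, X_query, h=0.05):
    """Gaussian-kernel weighted average of y_train, evaluated at each query point."""
    # pairwise squared distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-0.5 * d2 / h ** 2)        # Gaussian kernel weights
    return (K @ y_train) / K.sum(axis=1)  # normalized weighted average

# synthetic check: 4 columns stand in for the 4 photometry bands
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))
y = X.sum(axis=1)                         # stand-in for the spectroscopic redshift
pred = nadaraya_watson(X, y, X[:10], h=0.05)
print(np.abs(pred - y[:10]).max())        # small: queries coincide with training points
```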
```
#ANNZ
#read in data tables that contain des data and sdss dr7 and dr12 quasar data
dr7_annz = Table.read('dr7Q+sva1gold+offset+gaia_annz.fits')
dr12_annz = Table.read('dr12Q+sva1gold+spectro+gaia_annz.fits')
dr7_annz = dr7_annz.filled()
dr12_annz = dr12_annz.filled()
#BPZ
dr7_bpz = Table.read('dr7Q+sva1gold+offset+gaia_bpz.fits')
dr12_bpz = Table.read('dr12Q+sva1gold+spectro+gaia_bpz.fits')
dr7_bpz = dr7_bpz.filled()
dr12_bpz = dr12_bpz.filled()
#SKYNET
dr7_skynet = Table.read('dr7Q+sva1gold+offset+gaia_skynet.fits')
dr12_skynet = Table.read('dr12Q+sva1gold+spectro+gaia_skynet.fits')
dr7_skynet = dr7_skynet.filled()
dr12_skynet = dr12_skynet.filled()
#TPZ
dr7_tpz = Table.read('dr7Q+sva1gold+offset+gaia_tpz.fits')
dr12_tpz = Table.read('dr12Q+sva1gold+spectro+gaia_tpz.fits')
dr7_tpz = dr7_tpz.filled()
dr12_tpz = dr12_tpz.filled()
#ANNZ
#stack the four photometry bands, the DES photometric redshift, and the DES object classification for the dr7 quasars
X1_annz = np.vstack([ dr7_annz['MAG_AUTO_G'], dr7_annz['MAG_AUTO_R'], dr7_annz['MAG_AUTO_I'], dr7_annz['MAG_AUTO_Z'], dr7_annz['Z_MEAN'], dr7_annz['MODEST_CLASS'] ]).T
#take the sdss dr7 spec-z
y1_annz = np.array(dr7_annz['z_1'])
#repeat the last two lines for dr12 quasars in DES sva1
X2_annz = np.vstack([ dr12_annz['MAG_AUTO_G'], dr12_annz['MAG_AUTO_R'], dr12_annz['MAG_AUTO_I'], dr12_annz['MAG_AUTO_Z'], dr12_annz['Z_MEAN'], dr12_annz['MODEST_CLASS'] ]).T
y2_annz = np.array(dr12_annz['Z_PIPE_1']) #this is the dr12 spec-z
#combine our two sets of quasars together
X_annz = np.concatenate((X1_annz, X2_annz))
y_annz = np.concatenate((y1_annz, y2_annz))
#split our quasars into test and training sets, 4/5 and 1/5 respectively
X_train_annz, X_test_annz, y_train_annz, y_test_annz = train_test_split(X_annz, y_annz, test_size=0.2, random_state=84)
#make some empty arrays to separate the photometry bands into
X_traintrue_annz = np.empty((X_train_annz.shape[0], X_train_annz.shape[1]-2), dtype=float)
X_testtrue_annz = np.empty((X_test_annz.shape[0], X_test_annz.shape[1]-2), dtype=float)
#more empty arrays to hold des photo-z and object class for plotting purposes
DesZs_annz = np.empty((X_test_annz.shape[0], 1), dtype=float)
ModestClass_annz = np.empty((X_test_annz.shape[0], 1), dtype=int)
#loop over the training set to separate out the photometries
for i in range(len(X_train_annz)):
X_traintrue_annz[i] = X_train_annz[i][:4] #just the photometry
#loop over each element in the test set to get photometry, des photo-z and des object class
for i in range(len(X_test_annz)):
X_testtrue_annz[i] = X_test_annz[i][:4] #just the photometry
DesZs_annz[i] = X_test_annz[i][4] #the DES photo-z
ModestClass_annz[i] = X_test_annz[i][5] #the DES object classification
#train our model
model_annz = NadarayaWatson('gaussian', 0.05) #gaussian kernel, width of 0.05
#fit the model to our training set, training on 4 DES photometry bands and sdss spec-z
model_annz.fit(X_traintrue_annz, y_train_annz)
#predict a redshift for all quasars in the test set
pred_annz = model_annz.predict(X_testtrue_annz)
#bpz
#stack the four photometry bands, the DES photometric redshift, and the DES object classification for the dr7 quasars
X1_bpz = np.vstack([ dr7_bpz['MAG_AUTO_G'], dr7_bpz['MAG_AUTO_R'], dr7_bpz['MAG_AUTO_I'], dr7_bpz['MAG_AUTO_Z'], dr7_bpz['Z_MEAN'], dr7_bpz['MODEST_CLASS'] ]).T
#take the sdss dr7 spec-z
y1_bpz = np.array(dr7_bpz['z_1'])
#repeat the last two lines for dr12 quasars in DES sva1
X2_bpz = np.vstack([ dr12_bpz['MAG_AUTO_G'], dr12_bpz['MAG_AUTO_R'], dr12_bpz['MAG_AUTO_I'], dr12_bpz['MAG_AUTO_Z'], dr12_bpz['Z_MEAN'], dr12_bpz['MODEST_CLASS'] ]).T
y2_bpz = np.array(dr12_bpz['Z_PIPE_1']) #this is the dr12 spec-z
#combine our two sets of quasars together
X_bpz = np.concatenate((X1_bpz, X2_bpz))
y_bpz = np.concatenate((y1_bpz, y2_bpz))
#split our quasars into test and training sets, 4/5 and 1/5 respectively
X_train_bpz, X_test_bpz, y_train_bpz, y_test_bpz = train_test_split(X_bpz, y_bpz, test_size=0.2, random_state=84)
#make some empty arrays to separate the photometry bands into
X_traintrue_bpz = np.empty((X_train_bpz.shape[0], X_train_bpz.shape[1]-2), dtype=float)
X_testtrue_bpz = np.empty((X_test_bpz.shape[0], X_test_bpz.shape[1]-2), dtype=float)
#more empty arrays to hold des photo-z and object class for plotting purposes
DesZs_bpz = np.empty((X_test_bpz.shape[0], 1), dtype=float)
ModestClass_bpz = np.empty((X_test_bpz.shape[0], 1), dtype=int)
#loop over the training set to separate out the photometries
for i in range(len(X_train_bpz)):
X_traintrue_bpz[i] = X_train_bpz[i][:4] #just the photometry
#loop over each element in the test set to get photometry, des photo-z and des object class
for i in range(len(X_test_bpz)):
X_testtrue_bpz[i] = X_test_bpz[i][:4] #just the photometry
DesZs_bpz[i] = X_test_bpz[i][4] #the DES photo-z
ModestClass_bpz[i] = X_test_bpz[i][5] #the DES object classification
#train our model
model_bpz = NadarayaWatson('gaussian', 0.05) #gaussian kernel, width of 0.05
#fit the model to our training set, training on 4 DES photometry bands and sdss spec-z
model_bpz.fit(X_traintrue_bpz, y_train_bpz)
#predict a redshift for all quasars in the test set
pred_bpz = model_bpz.predict(X_testtrue_bpz)
#skynet
#stack the four photometry bands, the DES photometric redshift, and the DES object classification for the dr7 quasars
X1_skynet = np.vstack([ dr7_skynet['MAG_AUTO_G'], dr7_skynet['MAG_AUTO_R'], dr7_skynet['MAG_AUTO_I'], dr7_skynet['MAG_AUTO_Z'], dr7_skynet['Z_MEAN'], dr7_skynet['MODEST_CLASS'] ]).T
#take the sdss dr7 spec-z
y1_skynet = np.array(dr7_skynet['z_1'])
#repeat the last two lines for dr12 quasars in DES sva1
X2_skynet = np.vstack([ dr12_skynet['MAG_AUTO_G'], dr12_skynet['MAG_AUTO_R'], dr12_skynet['MAG_AUTO_I'], dr12_skynet['MAG_AUTO_Z'], dr12_skynet['Z_MEAN'], dr12_skynet['MODEST_CLASS'] ]).T
y2_skynet = np.array(dr12_skynet['Z_PIPE_1']) #this is the dr12 spec-z
#combine our two sets of quasars together
X_skynet = np.concatenate((X1_skynet, X2_skynet))
y_skynet = np.concatenate((y1_skynet, y2_skynet))
#split our quasars into test and training sets, 4/5 and 1/5 respectively
X_train_skynet, X_test_skynet, y_train_skynet, y_test_skynet = train_test_split(X_skynet, y_skynet, test_size=0.2, random_state=84)
#make some empty arrays to separate the photometry bands into
X_traintrue_skynet = np.empty((X_train_skynet.shape[0], X_train_skynet.shape[1]-2), dtype=float)
X_testtrue_skynet = np.empty((X_test_skynet.shape[0], X_test_skynet.shape[1]-2), dtype=float)
#more empty arrays to hold des photo-z and object class for plotting purposes
DesZs_skynet = np.empty((X_test_skynet.shape[0], 1), dtype=float)
ModestClass_skynet = np.empty((X_test_skynet.shape[0], 1), dtype=int)
#loop over the training set to separate out the photometries
for i in range(len(X_train_skynet)):
X_traintrue_skynet[i] = X_train_skynet[i][:4] #just the photometry
#loop over each element in the test set to get photometry, des photo-z and des object class
for i in range(len(X_test_skynet)):
X_testtrue_skynet[i] = X_test_skynet[i][:4] #just the photometry
DesZs_skynet[i] = X_test_skynet[i][4] #the DES photo-z
ModestClass_skynet[i] = X_test_skynet[i][5] #the DES object classification
#train our model
model_skynet = NadarayaWatson('gaussian', 0.05) #gaussian kernel, width of 0.05
#fit the model to our training set, training on 4 DES photometry bands and sdss spec-z
model_skynet.fit(X_traintrue_skynet, y_train_skynet)
#predict a redshift for all quasars in the test set
pred_skynet = model_skynet.predict(X_testtrue_skynet)
#tpz
#stack the four photometry bands, the DES photometric redshift, and the DES object classification for the dr7 quasars
X1_tpz = np.vstack([ dr7_tpz['MAG_AUTO_G'], dr7_tpz['MAG_AUTO_R'], dr7_tpz['MAG_AUTO_I'], dr7_tpz['MAG_AUTO_Z'], dr7_tpz['Z_MEAN'], dr7_tpz['MODEST_CLASS'] ]).T
#take the sdss dr7 spec-z
y1_tpz = np.array(dr7_tpz['z_1'])
#repeat the last two lines for dr12 quasars in DES sva1
X2_tpz = np.vstack([ dr12_tpz['MAG_AUTO_G'], dr12_tpz['MAG_AUTO_R'], dr12_tpz['MAG_AUTO_I'], dr12_tpz['MAG_AUTO_Z'], dr12_tpz['Z_MEAN'], dr12_tpz['MODEST_CLASS'] ]).T
y2_tpz = np.array(dr12_tpz['Z_PIPE_1']) #this is the dr12 spec-z
#combine our two sets of quasars together
X_tpz = np.concatenate((X1_tpz, X2_tpz))
y_tpz = np.concatenate((y1_tpz, y2_tpz))
#split our quasars into test and training sets, 4/5 and 1/5 respectively
X_train_tpz, X_test_tpz, y_train_tpz, y_test_tpz = train_test_split(X_tpz, y_tpz, test_size=0.2, random_state=84)
#make some empty arrays to separate the photometry bands into
X_traintrue_tpz = np.empty((X_train_tpz.shape[0], X_train_tpz.shape[1]-2), dtype=float)
X_testtrue_tpz = np.empty((X_test_tpz.shape[0], X_test_tpz.shape[1]-2), dtype=float)
#more empty arrays to hold des photo-z and object class for plotting purposes
DesZs_tpz = np.empty((X_test_tpz.shape[0], 1), dtype=float)
ModestClass_tpz = np.empty((X_test_tpz.shape[0], 1), dtype=int)
#loop over the training set to separate out the photometries
for i in range(len(X_train_tpz)):
X_traintrue_tpz[i] = X_train_tpz[i][:4] #just the photometry
#loop over each element in the test set to get photometry, des photo-z and des object class
for i in range(len(X_test_tpz)):
X_testtrue_tpz[i] = X_test_tpz[i][:4] #just the photometry
DesZs_tpz[i] = X_test_tpz[i][4] #the DES photo-z
ModestClass_tpz[i] = X_test_tpz[i][5] #the DES object classification
#train our model
model_tpz = NadarayaWatson('gaussian', 0.05) #gaussian kernel, width of 0.05
#fit the model to our training set, training on the 4 DES photometry bands and sdss spec-z
model_tpz.fit(X_traintrue_tpz, y_train_tpz)
#predict a redshift for all quasars in the test set
pred_tpz = model_tpz.predict(X_testtrue_tpz)
#ANNZ
#create some empty arrays to hold objects based on MODEST_CLASS from DES
stars_annz = np.empty(shape=(0,3))
gals_annz = np.empty(shape=(0,3))
uns_annz = np.empty(shape=(0,3))
#Loop through object classifications and sort objects accordingly, grabbing the Nadaraya-Watson prediction, Des's photo-z
#and the sdss spec-z
for i in range(len(ModestClass_annz)):
if ModestClass_annz[i] == 2:
stars_annz = np.append(stars_annz, [[pred_annz[i], DesZs_annz[i], y_test_annz[i]]], axis = 0)
elif ModestClass_annz[i] == 1:
gals_annz = np.append(gals_annz, [[pred_annz[i], DesZs_annz[i], y_test_annz[i]]], axis = 0)
else:
uns_annz = np.append(uns_annz, [[pred_annz[i], DesZs_annz[i], y_test_annz[i]]], axis = 0)
#BPZ
stars_bpz = np.empty(shape=(0,3))
gals_bpz = np.empty(shape=(0,3))
uns_bpz = np.empty(shape=(0,3))
for i in range(len(ModestClass_bpz)):
if ModestClass_bpz[i] == 2:
stars_bpz = np.append(stars_bpz, [[pred_bpz[i], DesZs_bpz[i], y_test_bpz[i]]], axis = 0)
elif ModestClass_bpz[i] == 1:
gals_bpz = np.append(gals_bpz, [[pred_bpz[i], DesZs_bpz[i], y_test_bpz[i]]], axis = 0)
else:
uns_bpz = np.append(uns_bpz, [[pred_bpz[i], DesZs_bpz[i], y_test_bpz[i]]], axis = 0)
#Skynet
stars_skynet = np.empty(shape=(0,3))
gals_skynet = np.empty(shape=(0,3))
uns_skynet = np.empty(shape=(0,3))
for i in range(len(ModestClass_skynet)):
if ModestClass_skynet[i] == 2:
stars_skynet = np.append(stars_skynet, [[pred_skynet[i], DesZs_skynet[i], y_test_skynet[i]]], axis = 0)
elif ModestClass_skynet[i] == 1:
gals_skynet = np.append(gals_skynet, [[pred_skynet[i], DesZs_skynet[i], y_test_skynet[i]]], axis = 0)
else:
uns_skynet = np.append(uns_skynet, [[pred_skynet[i], DesZs_skynet[i], y_test_skynet[i]]], axis = 0)
#TPZ
stars_tpz = np.empty(shape=(0,3))
gals_tpz = np.empty(shape=(0,3))
uns_tpz = np.empty(shape=(0,3))
for i in range(len(ModestClass_tpz)):
if ModestClass_tpz[i] == 2:
stars_tpz = np.append(stars_tpz, [[pred_tpz[i], DesZs_tpz[i], y_test_tpz[i]]], axis = 0)
elif ModestClass_tpz[i] == 1:
gals_tpz = np.append(gals_tpz, [[pred_tpz[i], DesZs_tpz[i], y_test_tpz[i]]], axis = 0)
else:
uns_tpz = np.append(uns_tpz, [[pred_tpz[i], DesZs_tpz[i], y_test_tpz[i]]], axis = 0)
#plotting with MODEST_CLASS
plt.figure(figsize=(16,16))
plt.subplot(221)
#note that stars_annz.T[0] is our photo-z prediction, stars_annz.T[1] is the DES prediction, and stars_annz.T[2] is the zspec
plt.scatter(stars_annz.T[0], stars_annz.T[2], s=25, facecolor='none', edgecolor='blue')
plt.scatter(stars_annz.T[1], stars_annz.T[2], s=10, c='blue')
#same for gals_annz and uns_annz
plt.scatter(gals_annz.T[0], gals_annz.T[2], s=25, facecolor='none', edgecolor='orange')
plt.scatter(gals_annz.T[1], gals_annz.T[2], s=10, c='orange')
#black points (undetermined objects) carry the legend tags for open circles and closed points
legendhelp1_annz = plt.scatter(uns_annz.T[0], uns_annz.T[2], s=25, facecolor='none', edgecolor='k', label = 'NW photo-z')
legendhelp2_annz = plt.scatter(uns_annz.T[1], uns_annz.T[2], s=10, c='k', label = 'DES ANNZ photo-z')
plt.plot([0,1,2,3,4,5], 'r') #plot a one-to-one line for reference
plt.xlim(0,5)
plt.ylim(0,5)
plt.xlabel('Photo-z Estimation')
plt.ylabel('SDSS z-spec')
plt.title('ANNZ')
#colored patches for the legend
orange_patch = mpatches.Patch(color='orange', label='DES Galaxy')
blue_patch = mpatches.Patch(color='blue', label='DES Star')
black_patch = mpatches.Patch(color='k', label='DES Undetermined')
plt.legend(handles=[blue_patch, orange_patch, black_patch, legendhelp1_annz, legendhelp2_annz], loc=1)
plt.subplot(222)
plt.scatter(stars_bpz.T[0], stars_bpz.T[2], s=25, facecolor='none', edgecolor='blue')
plt.scatter(stars_bpz.T[1], stars_bpz.T[2], s=10, c='blue')
plt.scatter(gals_bpz.T[0], gals_bpz.T[2], s=25, facecolor='none', edgecolor='orange')
plt.scatter(gals_bpz.T[1], gals_bpz.T[2], s=10, c='orange')
legendhelp1_bpz = plt.scatter(uns_bpz.T[0], uns_bpz.T[2], s=25, facecolor='none', edgecolor='k', label = 'NW photo-z')
legendhelp2_bpz = plt.scatter(uns_bpz.T[1], uns_bpz.T[2], s=10, c='k', label = 'DES BPZ photo-z')
plt.plot([0,1,2,3,4,5], 'r')
plt.xlim(0,5)
plt.ylim(0,5)
plt.xlabel('Photo-z Estimation')
plt.ylabel('SDSS z-spec')
plt.title('BPZ')
plt.legend(handles=[blue_patch, orange_patch, black_patch, legendhelp1_bpz, legendhelp2_bpz], loc=1)
plt.subplot(223)
plt.scatter(stars_skynet.T[0], stars_skynet.T[2], s=25, facecolor='none', edgecolor='blue')
plt.scatter(stars_skynet.T[1], stars_skynet.T[2], s=10, c='blue')
plt.scatter(gals_skynet.T[0], gals_skynet.T[2], s=25, facecolor='none', edgecolor='orange')
plt.scatter(gals_skynet.T[1], gals_skynet.T[2], s=10, c='orange')
legendhelp1_skynet = plt.scatter(uns_skynet.T[0], uns_skynet.T[2], s=25, facecolor='none', edgecolor='k', label = 'NW photo-z')
legendhelp2_skynet = plt.scatter(uns_skynet.T[1], uns_skynet.T[2], s=10, c='k', label = 'DES Skynet photo-z')
plt.plot([0,1,2,3,4,5], 'r')
plt.xlim(0,5)
plt.ylim(0,5)
plt.xlabel('Photo-z Estimation')
plt.ylabel('SDSS z-spec')
plt.title('Skynet')
plt.legend(handles=[blue_patch, orange_patch, black_patch, legendhelp1_skynet, legendhelp2_skynet], loc=1)
plt.subplot(224)
plt.scatter(stars_tpz.T[0], stars_tpz.T[2], s=25, facecolor='none', edgecolor='blue')
plt.scatter(stars_tpz.T[1], stars_tpz.T[2], s=10, c='blue')
plt.scatter(gals_tpz.T[0], gals_tpz.T[2], s=25, facecolor='none', edgecolor='orange')
plt.scatter(gals_tpz.T[1], gals_tpz.T[2], s=10, c='orange')
legendhelp1_tpz = plt.scatter(uns_tpz.T[0], uns_tpz.T[2], s=25, facecolor='none', edgecolor='k', label = 'NW photo-z')
legendhelp2_tpz = plt.scatter(uns_tpz.T[1], uns_tpz.T[2], s=10, c='k', label = 'DES TPZ photo-z')
plt.plot([0,1,2,3,4,5], 'r')
plt.xlim(0,5)
plt.ylim(0,5)
plt.xlabel('Photo-z Estimation')
plt.ylabel('SDSS z-spec')
plt.title('TPZ')
plt.legend(handles=[blue_patch, orange_patch, black_patch, legendhelp1_tpz, legendhelp2_tpz], loc=1)
```
With respect to the above plots: each panel compares one of DES's photo-z methods to astroML's Nadaraya-Watson implementation trained on the four DES photometry bands: g, r, i, and z. Open circles are NW photo-z estimates; dots are DES-method photo-z estimates. The colors indicate the DES MODEST_CLASS object classification for each object: blue points were classified as stars, orange points as galaxies, and black points as undetermined objects. The red line is the one-to-one line along which a photo-z estimate would be exactly correct. We find that the DES photo-z methods do poorly for objects that are not dominated by the host galaxy, i.e. the quasars that DES classifies as stars. We also find that the NW method handles low-redshift quasars well, but shows large scatter for higher-redshift objects while remaining "correct" on average.
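To put numbers on the scatter described above, a common photo-z quality metric is the normalized median absolute deviation of Δz/(1+z) together with the catastrophic-outlier fraction. A sketch, where `z_phot`/`z_spec` are toy stand-ins for the `pred_*` and `y_test_*` arrays built in this notebook:

```python
import numpy as np

def photoz_scatter(z_phot, z_spec, outlier_cut=0.15):
    """sigma_NMAD of dz/(1+z) and the fraction of catastrophic outliers."""
    dz = (z_phot - z_spec) / (1 + z_spec)
    sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    outlier_frac = np.mean(np.abs(dz) > outlier_cut)
    return sigma_nmad, outlier_frac

# toy example: small scatter plus one catastrophic outlier
z_spec = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
z_phot = z_spec + np.array([0.01, -0.02, 0.015, -0.01, 1.5])
sigma, frac = photoz_scatter(z_phot, z_spec)
print(sigma, frac)
```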
# Inverse Translation of a Stochastic Process
Author: Lohit Vandanapu
Date: May 14, 2019
In this example, a Gaussian stochastic process is first translated into a stochastic process of a different distribution, and subsequently these translated samples are translated back to Gaussian samples with the InverseTranslation class.
Import the necessary libraries. Here we import standard libraries such as numpy and matplotlib, but we also need to import the InverseTranslation and Translation classes from the StochasticProcess module of UQpy.
```
from UQpy.StochasticProcess import Translation, InverseTranslation
from UQpy.StochasticProcess import SRM
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
```
First, we generate Gaussian stochastic processes using the Spectral Representation Method.
```
n_sim = 10000  # number of samples
T = 100  # total time (1 / T = dw)
nt = 256  # number of discretized time points
F = 1 / T * nt / 2  # maximum frequency (Hz)
nw = 128  # number of discretized frequency points
dt = T / nt
t = np.linspace(0, T - dt, nt)
dw = F / nw
w = np.linspace(0, F - dw, nw)
S = 125 / 4 * w ** 2 * np.exp(-5 * w)
SRM_object = SRM(n_sim, S, dw, nt, nw, case='uni')
samples = SRM_object.samples
def S_to_R(S, w, t):
dw = w[1] - w[0]
fac = np.ones(len(w))
fac[1: len(w) - 1: 2] = 4
fac[2: len(w) - 2: 2] = 2
fac = fac * dw / 3
R = np.zeros(len(t))
for i in range(len(t)):
R[i] = 2 * np.dot(fac, S * np.cos(w * t[i]))
return R
R = S_to_R(S, w, t)
```
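The `S_to_R` helper above recovers the autocorrelation function from the one-sided power spectrum via the Wiener–Khinchin relation, integrating with composite Simpson weights (the 1, 4, 2, …, 4, 1 pattern). A quick standalone sanity check against a flat spectrum whose transform is known in closed form (not part of the original notebook):

```python
import numpy as np

def s_to_r(S, w, t):
    # Composite Simpson weights 1, 4, 2, ..., 4, 1 scaled by dw/3 (odd point count)
    dw = w[1] - w[0]
    fac = np.ones(len(w))
    fac[1:-1:2] = 4
    fac[2:-2:2] = 2
    fac *= dw / 3
    return np.array([2 * np.dot(fac, S * np.cos(w * ti)) for ti in t])

# A flat one-sided spectrum S(w) = S0 on [0, W] has R(t) = 2 * S0 * sin(W t) / t
W, S0 = 4.0, 0.5
w = np.linspace(0, W, 1001)
t = np.array([0.5, 1.0, 2.0])
R_num = s_to_r(S0 * np.ones_like(w), w, t)
R_ana = 2 * S0 * np.sin(W * t) / t
print(R_num, R_ana)
```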
We translate the samples into Uniform samples on the interval [0, 1] (the `Uniform(0, 1)` distribution below has `loc=0`, `scale=1`).
```
from UQpy.Distributions import Uniform
distribution = Uniform(0, 1)
samples = samples.flatten()[:, np.newaxis]
print(samples.shape)
Translate_object = Translation(distribution=distribution, time_duration=dt, frequency_interval=dw, number_time_intervals=nt, number_frequency_intervals=nw, auto_correlation_function_gaussian=R, samples_gaussian=samples)
samples_ng = Translate_object.samples_non_gaussian
R_ng = Translate_object.auto_correlation_function_non_gaussian
r_ng = Translate_object.correlation_function_non_gaussian
```
Plotting the actual and translated autocorrelation functions
```
fig1 = plt.figure()
plt.plot(R, label='Gaussian')
plt.plot(R_ng, label='Uniform')
plt.title('Autocorrelation Functions')
plt.legend()
plt.show()
InverseTranslate_object = InverseTranslation(distribution=distribution, time_duration=dt, frequency_interval=dw, number_time_intervals=nt, number_frequency_intervals=nw, auto_correlation_function_non_gaussian=R_ng, samples_non_gaussian=samples_ng)
samples_g = InverseTranslate_object.samples_gaussian
S_g = InverseTranslate_object.power_spectrum_gaussian
R_g = InverseTranslate_object.auto_correlation_function_gaussian
r_g = InverseTranslate_object.correlation_function_gaussian
fig2 = plt.figure()
plt.plot(r_g, label='Inverse Translated')
plt.plot(R, label='Original')
plt.title('Autocorrelation Functions')
plt.legend()
plt.show()
```
# Quantization of Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Introduction
[Digital signal processors](https://en.wikipedia.org/wiki/Digital_signal_processor) and general-purpose processors can only perform arithmetic operations within a limited number range. So far we have considered discrete signals with continuous amplitude values. These cannot be handled by processors in a straightforward manner. [Quantization](https://en.wikipedia.org/wiki/Quantization_%28signal_processing%29) is the process of mapping a continuous amplitude to a countable set of amplitude values. This also covers the *requantization* of a signal from a large countable set of amplitude values to a smaller set. Scalar quantization is an instantaneous and memoryless operation. It can be applied to the continuous-amplitude signal, also referred to as the *analog signal*, or to the (time-)discrete signal. The quantized discrete signal is termed the *digital signal*. The connections between the different domains are illustrated in the following for time-dependent signals.

### Model of the Quantization Process
In order to quantify the effects of quantizing a continuous amplitude signal, a model of the quantization process is formulated. We restrict our considerations to a discrete real-valued signal $x[k]$. In order to map the continuous amplitude to a quantized representation the following model is used
\begin{equation}
x_Q[k] = g( \; \lfloor \, f(x[k]) \, \rfloor \; )
\end{equation}
where $g(\cdot)$ and $f(\cdot)$ denote real-valued mapping functions, and $\lfloor \cdot \rfloor$ a rounding operation. The quantization process can be split into two stages
1. **Forward quantization**
The mapping $f(x[k])$ maps the signal $x[k]$ such that it is suitable for the rounding operation. This may be a scaling of the signal or a non-linear mapping. The result of the rounding operation is an integer number $\lfloor \, f(x[k]) \, \rfloor \in \mathbb{Z}$, which is termed as *quantization index*.
2. **Inverse quantization**
The mapping $g(\cdot)$ maps the quantization index to the quantized value $x_Q[k]$ such that it is an approximation of $x[k]$. This may be a scaling or a non-linear operation.
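As a minimal sketch of this two-stage model — using an illustrative uniform quantizer with step size Δ, i.e. $f(x) = x/\Delta$ and $g(i) = i \cdot \Delta$, rather than anything specific to this notebook:

```python
import numpy as np

def uniform_quantizer(x, delta):
    """Two-stage model: forward quantization f(x) = x/delta plus rounding,
    then inverse quantization g(i) = i*delta."""
    index = np.round(x / delta).astype(int)  # quantization index (an integer)
    xQ = index * delta                       # quantized approximation of x
    return xQ, index

x = np.array([0.12, -0.34, 0.51])
xQ, idx = uniform_quantizer(x, delta=0.25)
# with rounding to the nearest step, |xQ - x| is bounded by delta/2
print(xQ, idx)
```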
The quantization error/quantization noise $e[k]$ is defined as
\begin{equation}
e[k] = x_Q[k] - x[k]
\end{equation}
Rearranging yields that the quantization process can be modeled by adding the quantization noise to the discrete signal

#### Example
In order to illustrate the introduced model, the quantization of one period of a sine signal is considered
\begin{equation}
x[k] = \sin[\Omega_0 k]
\end{equation}
using $f(x[k]) = 3 \cdot x[k]$ and $g(i) = \frac{1}{3} \cdot i$. The rounding is realized by the [nearest integer function](https://en.wikipedia.org/wiki/Nearest_integer_function). The quantized signal is then given as
\begin{equation}
x_Q[k] = \frac{1}{3} \cdot \lfloor \, 3 \cdot \sin[\Omega_0 k] \, \rfloor
\end{equation}
For ease of illustration the signals are not shown by stem plots.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
N = 1024 # length of signal
# generate signal
x = np.sin(2*np.pi/N * np.arange(N))
# quantize signal
xi = np.round(3 * x)
xQ = 1/3 * xi
e = xQ - x
# plot (quantized) signals
fig, ax1 = plt.subplots(figsize=(10,4))
ax2 = ax1.twinx()
ax1.plot(x, 'r', label=r'discrete signal $x[k]$')
ax1.plot(xQ, 'b', label=r'quantized signal $x_Q[k]$')
ax1.plot(e, 'g', label=r'quantization error $e[k]$')
ax1.set_xlabel('k')
ax1.set_ylabel(r'$x[k]$, $x_Q[k]$, $e[k]$')
ax1.axis([0, N, -1.2, 1.2])
ax1.legend()
ax2.set_ylim([-3.6, 3.6])
ax2.set_ylabel('quantization index')
ax2.grid()
```
**Exercise**
* Investigate the quantization noise $e[k]$. Is its amplitude bounded?
* If you were to represent the quantization index (shown on the right side) by a binary number, how many bits would you need?
* Try out other rounding operations like `np.floor()` and `np.ceil()` instead of `np.round()`. What changes?
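To experiment with the last point, the rounding operation in the example can simply be swapped out; a small sketch of the three operators on the same values:

```python
import numpy as np

x = np.array([-0.8, -0.2, 0.2, 0.8])
xi_round = np.round(3 * x)  # round to the nearest integer
xi_floor = np.floor(3 * x)  # always round down
xi_ceil = np.ceil(3 * x)    # always round up
print(xi_round, xi_floor, xi_ceil)
```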
### Properties
Without knowledge of the quantization error $e[k]$, the signal $x[k]$ cannot be reconstructed exactly knowing only its quantization index or quantized representation $x_Q[k]$. The quantization error $e[k]$ itself depends on the signal $x[k]$. Therefore, quantization is in general an irreversible process. The mapping from $x[k]$ to $x_Q[k]$ is furthermore non-linear, since the superposition principle does not hold in general. Summarizing, quantization is an inherently irreversible and non-linear process.
### Applications
Quantization has widespread applications in Digital Signal Processing. For instance in
* [Analog-to-Digital conversion](https://en.wikipedia.org/wiki/Analog-to-digital_converter)
* [Lossy compression](https://en.wikipedia.org/wiki/Lossy_compression) of signals (speech, music, video, ...)
* Storage and Transmission ([Pulse-Code Modulation](https://en.wikipedia.org/wiki/Pulse-code_modulation), ...)
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2017*.
```
import sys
import os
# current working directory
path = os.getcwd()
# parent directory
parent = os.path.join(path, os.pardir)
sys.path.append(os.path.abspath(parent))
from bayes_opt1 import BayesianOptimization
from bayes_opt1 import UtilityFunction
from plot_gp_function import plot_gp
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
%matplotlib inline
from scipy.stats import norm
from scipy.optimize import minimize
from plot_gp_function import plot_convergence
from plot_gp_function import plot_simple_regret
```
# Comparative analysis on acquisition functions
```
xx = np.linspace(-2, 10, 1000).reshape(-1,1)
def target(xx):
return np.exp(-(xx - 2)**2) + np.exp(-(xx - 6)**2/10) + 1/ (xx**2 + 1)
params = {'ucb': {'kappa': 5, 'xi': 0}, 'ei': {'kappa': 5, 'xi': 0}, 'poi': {'kappa': 5,'xi': 0.01}, 'kg': {'kappa': 5, 'xi': 0}}
optimizers = {}
for acq in params.keys():
optimizers[acq] = BayesianOptimization(target, {'x': (min(xx), max(xx))}, random_state=27)
def update_optimizers(n_iter, optimizers, init_points=2):
if len(list(optimizers.values())[0]._space.target) == 0:
for acq in optimizers.keys():
optimizers[acq].maximize(init_points=init_points, n_iter=0, acq=acq)
else:
for acq in optimizers.keys():
optimizers[acq].maximize(init_points=0, n_iter=n_iter, acq=acq)
return optimizers
optimizers = update_optimizers(5, optimizers)
plot_gp(optimizers, xx, target, params)
optimizers = update_optimizers(5, optimizers)
plot_gp(optimizers, xx, target, params)
optimizers = update_optimizers(5, optimizers)
plot_gp(optimizers, xx, target, params)
optimizers = update_optimizers(5, optimizers)
plot_gp(optimizers, xx, target, params)
optimizers = update_optimizers(5, optimizers)
plot_gp(optimizers, xx, target, params)
```
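For reference, three of the acquisition functions compared here (UCB, EI, and POI) have simple closed forms and can be sketched by hand as follows; this is a simplified illustration, not the library's own implementation (which may, e.g., guard against σ = 0), and the knowledge-gradient variant `kg` is more involved and omitted:

```python
import numpy as np
from scipy.stats import norm

def ucb(mu, sigma, kappa=5.0):
    # Upper Confidence Bound: exploit the mean, explore via kappa * sigma
    return mu + kappa * sigma

def ei(mu, sigma, y_best, xi=0.0):
    # Expected Improvement over the incumbent y_best, shifted by xi
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def poi(mu, sigma, y_best, xi=0.01):
    # Probability of Improvement over y_best, shifted by xi
    return norm.cdf((mu - y_best - xi) / sigma)

mu, sigma = np.array([1.0, 1.2]), np.array([0.5, 0.1])
print(ucb(mu, sigma), ei(mu, sigma, y_best=1.1), poi(mu, sigma, y_best=1.1))
```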
# Plot convergences
```
params = {'ucb': {'kappa': 5, 'xi': 0}, 'ei': {'kappa': 5, 'xi': 0}, 'poi': {'kappa': 5,'xi': 0.01}, 'kg': {'kappa': 5, 'xi': 0}}
optimizers = {}
for acq in params.keys():
optimizers[acq] = BayesianOptimization(target, {'x': (min(xx), max(xx))}, random_state=27)
optimizers = update_optimizers(0, optimizers)
plot_convergence(optimizers, xx, target, params)
```
# Plot of the simple regret
```
params = {'ucb': {'kappa': 5, 'xi': 0}, 'ei': {'kappa': 5, 'xi': 0}, 'poi': {'kappa': 5,'xi': 0.01}, 'kg': {'kappa': 5, 'xi': 0}}
optimizers = {}
for acq in params.keys():
optimizers[acq] = BayesianOptimization(target, {'x': (min(xx), max(xx))}, random_state=27)
optimizers = update_optimizers(0, optimizers)
plot_simple_regret(optimizers, xx, target, params, dim=1)
```
# Objective function affected by noise
```
xx = np.linspace(-2, 10, 10000).reshape(-1,1)
def target(xx):
return np.exp(-(xx - 2)**2) + np.exp(-(xx - 6)**2/10) + 1/ (xx**2 + 1)
optimizerNoise = BayesianOptimization(target, {'x': (min(xx), max(xx))}, random_state=27, noise = 0.5)
optimizerNoise.maximize(init_points=2, n_iter=0, kappa=5)
optimizerNoise.maximize(init_points=0, n_iter=3, kappa=5)
```
# Multidimensional regret
```
def grid_construction(pbounds, n_grid):
dim = pbounds.shape[0]
    init = []
    for i in range(dim):
        init.append(np.linspace(pbounds[i, 0], pbounds[i, 1], n_grid))  # one axis per dimension
grid = np.meshgrid(*init) # list with p matrices n_grid x n_grid
for g in range(len(grid)):
grid[g] = grid[g].reshape(-1,1)
grid = np.stack(grid, axis = -1)
    grid = grid[:, 0, :]  # array of shape (n_grid**dim, dim): each row is one grid point
return grid
def black_box_function(x, y):
return -x ** 2 - (y - 1) ** 2 + 1
optimizer1 = BayesianOptimization(
f_temp=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
optimizer2 = BayesianOptimization(
f_temp=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
optimizer3 = BayesianOptimization(
f_temp=black_box_function,
pbounds={'x': (-2, 2), 'y': (-3, 3)},
verbose=2,
random_state=1,
)
grid = grid_construction(optimizer1._space.bounds, 100) # PAY ATTENTION: the density of the grid influences the result of the regret
# Remember to take enough points
optimizer1.maximize(init_points=2, n_iter=0, kappa=2.5)
optimizer2.maximize(init_points=2, n_iter=0,acq='ei')
optimizer3.maximize(init_points=2, n_iter=0,acq='poi')
optimizers={'ucb': optimizer1, 'ei': optimizer2, 'poi': optimizer3}
params = {'ucb': {'kappa': 2.5, 'xi': 0}, 'ei': {'kappa': 5, 'xi': 0}, 'poi': {'kappa': 5,'xi': 0.01}}
plot_simple_regret(optimizers, grid, black_box_function, params, dim=2)
```
<img align="left" style="padding-right:10px;" width="150" src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/6c/Star_Wars_Logo.svg/320px-Star_Wars_Logo.svg.png" />
*prepared by Ferran Carrascosa Mallafrè.*
__[Open in Colab](https://colab.research.google.com/github/griu/init_python_b1/blob/master/Ejercicios_Python_II.ipynb)__
# Environment setup
Padawan! When you log in to Colab, prepare the environment by running the following code.
```
if 'google.colab' in str(get_ipython()):
!git clone https://github.com/griu/init_python_b1.git /content/init_python_b1
!git -C /content/init_python_b1 pull
%cd /content/init_python_b1
```
# Exercise 2
For Exercise 2, we add the planets data to the data from Exercise 1.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()  # for the plot style
entidades = ['planets','starships','vehicles','people','species']
entidades_df = {x: pd.read_pickle('www/' + x + '_df.pkl') for x in entidades}
# People data
people_df = entidades_df['people'][["height","mass","birth_year","gender","homeworld"]].dropna()
# planetas
planets_df = entidades_df['planets'][["orbital_period","url"]].dropna()
planets_df.head()
```
## Exercise 2.1.
Build a function that says "good morning", "good afternoon", or "good evening" depending on the time of day.
> Tip 1: To test the function, give it an input parameter with default value `datetime.now()` (first import it with `from datetime import datetime`).
> Tip 2: You can extract the hour from a datetime with `.hour`.
```
# Solution:
```
## Exercise 2.2.
In the personajes_df data frame, compute the IMC (body mass index) again and create a new variable that bins the IMC variable as defined in the following table:

| Weight category | IMC interval |
| --------------- | ------------ |
| Underweight | < 18.5 |
| Normal | >= 18.5 and < 25 |
| Overweight | >= 25 and < 30 |
| Obese | >= 30 |

> Tip: use `pd.cut(..., right=False)` and modify the labels with `.cat.categories`.
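A generic illustration of the hinted technique on toy data (not the exercise solution):

```python
import pandas as pd

imc = pd.Series([17.0, 22.3, 27.8, 31.5])
bins = [0, 18.5, 25, 30, float("inf")]
labels = ["Underweight", "Normal", "Overweight", "Obese"]
# right=False makes the intervals closed on the left: [0, 18.5), [18.5, 25), ...
groups = pd.cut(imc, bins=bins, labels=labels, right=False)
print(groups.tolist())
```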
```
# Solution:
```
## Exercise 2.3.
Show the frequencies of the new variable defined in 2.2.
```
# Solution:
```
## Exercise 2.4.
Now bin age into 5 equiprobable groups.
Show the counts (frequencies) of the new variable on screen.
> Tip: Look up the help for [pd.qcut()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html)
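A generic illustration of `pd.qcut()` on toy data (not the exercise solution): unlike `pd.cut()`, it chooses the bin edges so that each group holds roughly the same number of observations.

```python
import numpy as np
import pandas as pd

edad = pd.Series(np.arange(100))  # toy "ages" 0..99
grupos = pd.qcut(edad, q=5)       # 5 equiprobable groups
print(grupos.value_counts().sort_index())  # 20 observations per group
```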
```
# Solution:
```
## Exercise 2.5.
Present the cross-tabulation of age groups (ex. 2.4) by IMC groups (ex. 2.2).
Which age group has the largest number of Underweight characters?
```
# Solution:
```
## Exercise 2.6.
Compute a summary table (data frame) with the mean IMC for each age group computed in Exercise 2.4.
Show the new summary table on screen.
```
# Solution:
```
## Exercise 2.7.
Present the data from ex. 2.6 as a line chart with age on the x axis and mean IMC on the y axis.
> Tip: For the x axis of the line chart, you can compute in 2.6, alongside the mean IMC, the median age of each age group.
```
# Solution:
```
## Exercise 2.8.
Compute the ratio of each IMC over the median IMC of its age group (defined in Exercise 2.4) using the groupby(...).apply(...) function.
> Tip: first create a function that returns `x / np.nanmedian(x)`.
Present the data as a boxplot of the new variable: [pd.boxplot()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.boxplot.html).
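A generic illustration of the hinted pattern on toy data (not the exercise solution); `group_keys=False` keeps the original row index on the result:

```python
import numpy as np
import pandas as pd

def ratio_to_median(x):
    return x / np.nanmedian(x)

toy = pd.DataFrame({"group": ["a", "a", "a", "b", "b"],
                    "value": [1.0, 2.0, 3.0, 10.0, 30.0]})
# within each group, divide every value by that group's median
ratios = toy.groupby("group", group_keys=False)["value"].apply(ratio_to_median)
print(ratios.sort_index().tolist())
```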
```
# Solution:
```
## Exercise 2.9.
Which planet has the lowest mean IMC among its characters?
Which character(s) come from that planet?
```
# Solution:
```
## Exercise 2.10.
Convert the following strings to datetime using the [datetime.strptime()](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) function (check the help if needed) from the datetime library:
- "1 january, 2020"
- "15-feb.-2017"
- "20190701 22:30" # July 1, 2019
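A generic illustration of the format codes, using made-up date strings rather than the ones from the exercise:

```python
from datetime import datetime

dt1 = datetime.strptime("2020-12-31 22:30", "%Y-%m-%d %H:%M")
dt2 = datetime.strptime("3 March, 2021", "%d %B, %Y")  # %B = full month name
print(dt1, dt2)
```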
```
# Solution:
```
```
import argparse
import os
import time
import math
import torch
import torch.nn as nn
from torch.autograd import Variable
from torchtext import data as d
from torchtext import datasets
from torchtext.vocab import GloVe
import model
is_cuda = torch.cuda.is_available()
is_cuda
TEXT = d.Field(lower=True, batch_first=True,)
# make splits for data
train, valid, test = datasets.WikiText2.splits(TEXT,root='data')
batch_size=20
bptt_len=30
clip = 0.25
lr = 20
log_interval = 200
(len(valid[0].text)//batch_size)*batch_size
len(valid[0].text)
train[0].text = train[0].text[:(len(train[0].text)//batch_size)*batch_size]
valid[0].text = valid[0].text[:(len(valid[0].text)//batch_size)*batch_size]
test[0].text = test[0].text[:(len(test[0].text)//batch_size)*batch_size]
len(valid[0].text)
# print information about the data
print('train.fields', train.fields)
print('len(train)', len(train))
print('vars(train[0])', vars(train[0])['text'][0:10])
TEXT.build_vocab(train)
print('len(TEXT.vocab)', len(TEXT.vocab))
train_iter, valid_iter, test_iter = d.BPTTIterator.splits((train, valid, test), batch_size=batch_size, bptt_len=bptt_len, device=0,repeat=False)
class RNNModel(nn.Module):
def __init__(self,ntoken,ninp,nhid,nlayers,dropout=0.5,tie_weights=False):
super().__init__()
        self.drop = nn.Dropout(dropout)
self.encoder = nn.Embedding(ntoken,ninp)
self.rnn = nn.LSTM(ninp,nhid,nlayers,dropout=dropout)
self.decoder = nn.Linear(nhid,ntoken)
if tie_weights:
self.decoder.weight = self.encoder.weight
self.init_weights()
self.nhid = nhid
self.nlayers = nlayers
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange,initrange)
self.decoder.bias.data.fill_(0)
self.decoder.weight.data.uniform_(-initrange,initrange)
def forward(self,input,hidden):
emb = self.drop(self.encoder(input))
output,hidden = self.rnn(emb,hidden)
output = self.drop(output)
s = output.size()
decoded = self.decoder(output.view(s[0]*s[1],s[2]))
return decoded.view(s[0],s[1],decoded.size(1)),hidden
def init_hidden(self,bsz):
weight = next(self.parameters()).data
return(Variable(weight.new(self.nlayers,bsz,self.nhid).zero_()),Variable(weight.new(self.nlayers,bsz,self.nhid).zero_()))
criterion = nn.CrossEntropyLoss()
len(valid_iter.dataset[0].text)
emsize = 200
nhid=200
nlayers=2
dropout = 0.2
ntokens = len(TEXT.vocab)
lstm = RNNModel(ntokens, emsize, nhid, nlayers, dropout, tie_weights=True)
if is_cuda:
lstm = lstm.cuda()
def repackage_hidden(h):
"""Wraps hidden states in new Variables, to detach them from their history."""
if type(h) == Variable:
return Variable(h.data)
else:
return tuple(repackage_hidden(v) for v in h)
def evaluate(data_source):
# Turn on evaluation mode which disables dropout.
lstm.eval()
total_loss = 0
hidden = lstm.init_hidden(batch_size)
for batch in data_source:
data, targets = batch.text,batch.target.view(-1)
output, hidden = lstm(data, hidden)
output_flat = output.view(-1, ntokens)
total_loss += len(data) * criterion(output_flat, targets).data
hidden = repackage_hidden(hidden)
return total_loss[0]/(len(data_source.dataset[0].text)//batch_size)
def trainf():
# Turn on training mode which enables dropout.
lstm.train()
total_loss = 0
start_time = time.time()
hidden = lstm.init_hidden(batch_size)
for i,batch in enumerate(train_iter):
data, targets = batch.text,batch.target.view(-1)
# Starting each batch, we detach the hidden state from how it was previously produced.
# If we didn't, the model would try backpropagating all the way to start of the dataset.
hidden = repackage_hidden(hidden)
lstm.zero_grad()
output, hidden = lstm(data, hidden)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
torch.nn.utils.clip_grad_norm(lstm.parameters(), clip)
for p in lstm.parameters():
p.data.add_(-lr, p.grad.data)
total_loss += loss.data
if i % log_interval == 0 and i > 0:
cur_loss = total_loss[0] / log_interval
elapsed = time.time() - start_time
(print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} | ms/batch {:5.2f} | loss {:5.2f} | ppl {:8.2f}'.format(epoch, i, len(train_iter), lr,elapsed * 1000 / log_interval, cur_loss, math.exp(cur_loss))))
total_loss = 0
start_time = time.time()
# Loop over epochs.
best_val_loss = None
epochs = 40
for epoch in range(1, epochs+1):
epoch_start_time = time.time()
trainf()
val_loss = evaluate(valid_iter)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
val_loss, math.exp(val_loss)))
print('-' * 89)
if not best_val_loss or val_loss < best_val_loss:
best_val_loss = val_loss
else:
# Anneal the learning rate if no improvement has been seen in the validation dataset.
lr /= 4.0
```
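The training loop above reports perplexity as `exp(cross-entropy)`; a quick numerical illustration of that relationship:

```python
import math

# The cross-entropy of a uniform prediction over V tokens is ln(V),
# so its perplexity exp(ln V) recovers V: the model is "as confused as"
# a V-way random guess.
V = 1000
cross_entropy = -math.log(1.0 / V)
perplexity = math.exp(cross_entropy)
print(round(perplexity))  # 1000
```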
# Neural Machine Translation
- Input is a sentence (sequence) in English
- Output is the corresponding sequence in German
- Encoder-decoder model with a bidirectional GRU encoder, attention, and a GRU decoder
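The encoder-decoder internals live in `src/neural_network.py`; as a rough NumPy sketch of the additive (Bahdanau-style) attention step such a decoder typically performs — all weights random and the shapes purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, enc_dim, dec_dim, attn_dim = 7, 16, 16, 8    # illustrative sizes

enc_outputs = rng.normal(size=(T, enc_dim))      # one encoder state per source token
dec_state = rng.normal(size=(dec_dim,))          # current decoder hidden state

W1 = rng.normal(size=(enc_dim, attn_dim))
W2 = rng.normal(size=(dec_dim, attn_dim))
v = rng.normal(size=(attn_dim,))

# score(s, h_t) = v . tanh(W1 h_t + W2 s); softmax over the T source positions
scores = np.tanh(enc_outputs @ W1 + dec_state @ W2) @ v
weights = np.exp(scores - scores.max())
weights /= weights.sum()
context = weights @ enc_outputs                  # attention-weighted sum of states
print(weights.shape, context.shape)
```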
## Import needed libraries
```
import tensorflow as tf
import numpy as np
# Import local libraries
import src.text_processing as text_processing
import src.dictionary as dictionary
import src.neural_network as neural_network
# Update python files
%load_ext autoreload
%autoreload 2
```
## Data processing
### Read dataset
```
# Read file containing english and german translations
data = text_processing.load_doc("./dataset/ENG_to_GER.txt")
# Split data into english and german
english_sentences, german_sentences = text_processing.prepare_data(data)
# Check and print number of sentences from one language to the other
assert(len(english_sentences) == len(german_sentences))
print(english_sentences.shape)
# Example of sentence with translation
print(english_sentences[20])
print(german_sentences[20])
```
### Split dataset (training + validation)
```
# Split percentage of training and validation
split_percentage = 0.85
# Count how many samples into training dataset
total_dataset = len(english_sentences)
train_dataset = int(total_dataset * split_percentage)
# Set random seed to have always same training and validation split
np.random.seed(42)
train_indices = np.random.choice(total_dataset, train_dataset, replace=False)
# Get training data for the two languages
training_english = english_sentences[train_indices]
training_german = german_sentences[train_indices]
# Get validation data
validation_english = np.delete(english_sentences, train_indices)
validation_german = np.delete(german_sentences, train_indices)
print("Training samples: " + str(training_english.shape[0]))
print("Validation samples: " + str(validation_english.shape[0]))
```
### Create dictionaries for the two languages
```
# Calculate longest sentence in the two languages
english_max_length = text_processing.max_length_sentence(training_english)
german_max_length = text_processing.max_length_sentence(training_german) + 2 # + 2 to account for the <START> and <END> tokens
print("Longest sentence in English has " + str(english_max_length) + " tokens.")
print("Longest sentence in German has " + str(german_max_length) + " tokens.")
print()
# Create dictionaries
english_dictionary = dictionary.LanguageDictionary(training_english, english_max_length)
german_dictionary = dictionary.LanguageDictionary(training_german, german_max_length)
# Calculate size of the dictionaries
english_dictionary_size = len(english_dictionary.index_to_word)
german_dictionary_size = len(german_dictionary.index_to_word)
print("English dictionary size: " + str(english_dictionary_size))
print("German dictionary size: " + str(german_dictionary_size))
# Save dictionaries
text_processing.save_dump(english_dictionary, "./dumps/eng_dict.pickle")
text_processing.save_dump(german_dictionary, "./dumps/ger_dict.pickle")
```
### Prepare sequences for the Neural Network
```
# Prepare sequences of training data
train_source_input, train_target_input = text_processing.prepare_sequences(training_english,
training_german,
english_dictionary,
german_dictionary)
# Prepare sequences of validation data
val_source_input, val_target_input = text_processing.prepare_sequences(validation_english,
validation_german,
english_dictionary,
german_dictionary)
# Check if same number of samples
assert(len(train_source_input) == len(train_target_input))
assert(len(val_source_input) == len(val_target_input))
# Print shapes data
print("Training samples : " + str(len(train_source_input)))
print(train_source_input.shape)
print(train_target_input.shape)
print("Validation samples : " + str(len(val_source_input)))
print(val_source_input.shape)
print(val_target_input.shape)
```
### Print sample input data in English, German and next word to be predicted in German
```
print(train_source_input[0])
print(train_target_input[0])
print("SOURCE => " + english_dictionary.indices_to_text(train_source_input[0]))
print("TARGET => " + german_dictionary.indices_to_text(train_target_input[0]))
```
## Neural Network
### Parameters
```
epochs = 200
batch_size = 128
embedding_size = 256
lstm_hidden_units = 192
lr = 1e-3
keep_dropout_prob = 0.7
```
### Create Seq2seq neural network graph
```
tf.reset_default_graph()
# Placeholders
input_sequence = tf.placeholder(tf.int32, (None, english_dictionary.max_length_sentence), 'inputs')
output_sequence = tf.placeholder(tf.int32, (None, None), 'output')
target_labels = tf.placeholder(tf.int32, (None, None), 'targets')
keep_prob = tf.placeholder(tf.float32, (None), 'dropout_prob')
decoder_outputs_tensor = tf.placeholder(tf.float32, (None, german_dictionary.max_length_sentence - 1,
lstm_hidden_units * 2), 'output')
# Create graph for the network
logits, dec_output, mask = neural_network.create_network(input_sequence,
output_sequence,
keep_prob,
decoder_outputs_tensor,
english_dictionary_size,
german_dictionary_size,
embedding_size,
lstm_hidden_units)
```
### Set the loss function, optimizer and other useful tensors
```
# Cross entropy loss after softmax of logits
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=target_labels) * mask
loss = tf.reduce_mean(ce)
# Using Adam optimizer for the update of the weights of the network with gradient clipping
optimizer = tf.train.AdamOptimizer(learning_rate=lr) #.minimize(loss)
gradients, variables = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
optimize = optimizer.apply_gradients(zip(gradients, variables))
# Useful tensors
scores = tf.nn.softmax(logits)
predictions = tf.to_int32(tf.argmax(scores, axis=2))
correct_mask = tf.to_float(tf.equal(predictions, target_labels))
accuracy = tf.contrib.metrics.accuracy(predictions, target_labels, weights=mask)
```
### Training of the network
```
# Training and validation data variables
training_overfit = False
best_val_accuracy = 0
consecutive_validation_without_saving = 0
indices = list(range(len(train_source_input)))
print("Number of iterations per epoch: " + str((len(train_source_input) // batch_size) + 1))
# Start session and initialize variables in the graph
with tf.Session() as sess:
    saver = tf.train.Saver()
    sess.run(tf.global_variables_initializer())
    for i in range(epochs):
        # Vectors accumulating accuracy and loss during one epoch
        total_accuracies, total_losses = [], []
        # Shuffle data to not train the network always with the same order
        np.random.shuffle(indices)
        train_source_input = train_source_input[indices]
        train_target_input = train_target_input[indices]
        # Iterate over mini-batches
        for j in range(0, len(train_source_input), batch_size):
            dec_out_tmp = neural_network.get_decoder_outputs(sess, dec_output, input_sequence, output_sequence,
                                                             decoder_outputs_tensor, keep_prob, keep_dropout_prob,
                                                             len(train_source_input[j:j+batch_size]), german_dictionary.max_length_sentence - 1,
                                                             lstm_hidden_units, train_source_input[j:j+batch_size],
                                                             train_target_input[j:j+batch_size, :-1])
            _, avg_accuracy, avg_loss = sess.run([optimize, accuracy, loss], feed_dict={
                input_sequence: train_source_input[j:j+batch_size],
                output_sequence: train_target_input[j:j+batch_size, :-1],
                target_labels: train_target_input[j:j+batch_size, 1:],
                keep_prob: keep_dropout_prob,
                decoder_outputs_tensor: dec_out_tmp })
            # Accumulate values for this mini-batch iteration
            total_losses.append(avg_loss)
            total_accuracies.append(avg_accuracy)
            # Statistics on validation set
            if (j // batch_size + 1) % 250 == 0:
                # Accumulate validation statistics
                val_accuracies, val_losses = [], []
                for k in range(0, len(val_source_input), batch_size):
                    dec_out_tmp = neural_network.get_decoder_outputs(sess, dec_output, input_sequence,
                                                                     output_sequence, decoder_outputs_tensor, keep_prob, 1.0,
                                                                     len(val_source_input[k:k+batch_size]), german_dictionary.max_length_sentence - 1,
                                                                     lstm_hidden_units, val_source_input[k:k+batch_size], val_target_input[k:k+batch_size, :-1])
                    avg_accuracy, avg_loss = sess.run([accuracy, loss], feed_dict={
                        input_sequence: val_source_input[k:k+batch_size],
                        output_sequence: val_target_input[k:k+batch_size, :-1],
                        target_labels: val_target_input[k:k+batch_size, 1:],
                        keep_prob: 1.0,
                        decoder_outputs_tensor: dec_out_tmp })
                    val_losses.append(avg_loss)
                    val_accuracies.append(avg_accuracy)
                # Average validation accuracy over batches
                final_val_accuracy = np.mean(val_accuracies)
                # Save model if validation accuracy improved
                if final_val_accuracy > best_val_accuracy:
                    consecutive_validation_without_saving = 0
                    best_val_accuracy = final_val_accuracy
                    print("VALIDATION loss: " + str(np.mean(val_losses)) + ", accuracy: " + str(final_val_accuracy))
                    save_path = saver.save(sess, "./checkpoints/model.ckpt")
                else:
                    # Count every validation check without improvement
                    consecutive_validation_without_saving += 1
                    # Stop if many consecutive validation checks showed no accuracy improvement
                    if consecutive_validation_without_saving >= 10:
                        training_overfit = True
                        break
        # Epoch statistics
        print("Epoch: " + str(i+1) + ", AVG loss: " + str(np.mean(np.array(total_losses))) +
              ", AVG accuracy: " + str(np.mean(np.array(total_accuracies))) + "\n")
        if training_overfit:
            print("Early stopping")
            break
```
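The early-stopping logic used in the loop above can be isolated into a small sketch (hypothetical helper names, not from the notebook): save whenever validation accuracy improves, and stop once a fixed number of consecutive checks pass without improvement.

```python
def early_stopping_trace(val_accuracies, patience=10):
    """Return the check index at which training would stop, or None if it never stops."""
    best, without_improvement = 0.0, 0
    for step, acc in enumerate(val_accuracies):
        if acc > best:
            best, without_improvement = acc, 0  # a checkpoint would be saved here
        else:
            without_improvement += 1
            if without_improvement >= patience:
                return step
    return None

# Accuracy improves for three checks, then plateaus: stop after `patience` flat checks.
trace = [0.1, 0.2, 0.3] + [0.3] * 12
print(early_stopping_trace(trace, patience=10))  # → 12
```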
## Testing network
### Rebuild the graph quickly if you only want to run this part of the notebook
```
# Load dictionaries from pickle
english_dictionary = text_processing.load_dump("./dumps/eng_dict.pickle")
german_dictionary = text_processing.load_dump("./dumps/ger_dict.pickle")
tf.reset_default_graph()
embedding_size = 256
lstm_hidden_units = 192
# Placeholders
input_sequence = tf.placeholder(tf.int32, (None, english_dictionary.max_length_sentence), 'inputs')
output_sequence = tf.placeholder(tf.int32, (None, None), 'output')
target_labels = tf.placeholder(tf.int32, (None, None), 'targets')
keep_prob = tf.placeholder(tf.float32, (None), 'dropout_prob')
decoder_outputs_tensor = tf.placeholder(tf.float32, (None, german_dictionary.max_length_sentence - 1,
lstm_hidden_units * 2), 'output')
# Create graph for the network
logits, dec_output, mask = neural_network.create_network(input_sequence,
output_sequence,
keep_prob,
decoder_outputs_tensor,
len(english_dictionary.index_to_word),
len(german_dictionary.index_to_word),
embedding_size,
lstm_hidden_units)
# Predictions
scores = tf.nn.softmax(logits)
predictions = tf.to_int32(tf.argmax(scores, axis=2))
```
### Perform test predictions
```
with tf.Session() as sess:
    saver = tf.train.Saver()
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, "./checkpoints/model.ckpt")
    test_source_sentence = ["Could you please come here and explain me what to do"]
    for source_sentence in test_source_sentence:
        # Normalize & tokenize (cut if longer than max_length_source)
        source_preprocessed = text_processing.preprocess_sentence(source_sentence)
        # Convert to numbers
        source_encoded = english_dictionary.text_to_indices(source_preprocessed)
        # Add padding
        source_input = text_processing.pad_sentence(source_encoded, english_dictionary.max_length_sentence)
        # Starting target sentence in German
        target_sentence = [["<START>"]]
        target_encoded = german_dictionary.text_to_indices(target_sentence[0])
        i = 0
        word_predicted = 0
        while word_predicted != 2:  # If <END> (index 2), stop
            target_encoded_pad = text_processing.pad_sentence(target_encoded,
                                                              german_dictionary.max_length_sentence - 1,
                                                              pad_before=False)
            dec_out_tmp = neural_network.get_decoder_outputs(
                sess,
                dec_output,
                input_sequence,
                output_sequence,
                decoder_outputs_tensor,
                keep_prob,
                1.0,
                1,
                german_dictionary.max_length_sentence - 1,
                lstm_hidden_units,
                [source_input],
                [target_encoded_pad])
            # Perform prediction
            pred = sess.run(predictions, feed_dict={ input_sequence: [source_input],
                                                     output_sequence: [target_encoded_pad],
                                                     keep_prob: 1.0,
                                                     decoder_outputs_tensor: dec_out_tmp })
            # Accumulate
            target_encoded.append(pred[0][i])
            word_predicted = pred[0][i]
            if i > german_dictionary.max_length_sentence:
                break
            i += 1
        print(english_dictionary.indices_to_text(source_input) + " => "
              + german_dictionary.indices_to_text(target_encoded))
```
# Import TensorFlow 2.x.
```
try:
    %tensorflow_version 2.x
except:
    pass
import tensorflow as tf
import tensorflow.keras.layers as layers
import tensorflow.keras.models as models
import numpy as np
np.random.seed(7)
print(tf.__version__)
```
# Import TensorFlow datasets.
* MNIST dataset
```
import tensorflow_datasets as tfds
```
# Load MNIST dataset.
* train split
* test split
```
train_dataset, test_dataset = tfds.load(name="mnist", split=['train', 'test'], as_supervised=True)
```
### Normalize dataset images.
```
def _normalize_image(image, label):
    image = tf.cast(image, tf.float32) / 255.
    return (image, label)
```
### Create dataset batches.
```
buffer_size = 1024
batch_size = 32
train_dataset = train_dataset.shuffle(buffer_size).batch(batch_size)
train_dataset = train_dataset.map(_normalize_image)
test_dataset = test_dataset.batch(batch_size)
test_dataset = test_dataset.map(_normalize_image)
```
# Create the model.
* No activation (or default linear activation) on last layer
* L2 normalized embeddings.
```
model = models.Sequential([
layers.Conv2D(filters=128, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1)),
layers.Conv2D(filters=128, kernel_size=2, padding='same', activation='relu'),
layers.MaxPooling2D(pool_size=2),
layers.Dropout(0.3),
layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'),
layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'),
layers.MaxPooling2D(pool_size=2),
layers.Dropout(0.3),
layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
layers.MaxPooling2D(pool_size=2),
layers.Dropout(0.3),
layers.Flatten(),
layers.Dense(256, activation=None), # No activation on final dense layer
layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)) # L2 normalized embeddings
])
```
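The final `Lambda` layer matters for the triplet loss defined below: embeddings are L2-normalized so that squared Euclidean distance is bounded and directly related to cosine similarity. A quick NumPy illustration of the same operation:

```python
import numpy as np

x = np.array([[3.0, 4.0], [0.5, 0.5]])
norms = np.linalg.norm(x, axis=1, keepdims=True)
x_unit = x / norms                        # same effect as tf.math.l2_normalize(x, axis=1)
print(np.linalg.norm(x_unit, axis=1))     # every row now has unit norm
```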
### Show model summary.
```
model.summary()
```
# Train the model.
```
def pairwise_distance(feature, squared=False):
    """Computes the pairwise distance matrix with numerical stability.
    output[i, j] = || feature[i, :] - feature[j, :] ||_2
    Args:
      feature: 2-D Tensor of size [number of data, feature dimension].
      squared: Boolean, whether or not to square the pairwise distances.
    Returns:
      pairwise_distances: 2-D Tensor of size [number of data, number of data].
    """
    # yapf: disable
    pairwise_distances_squared = tf.math.add(
        tf.math.reduce_sum(
            tf.math.square(feature),
            axis=[1],
            keepdims=True),
        tf.math.reduce_sum(
            tf.math.square(tf.transpose(feature)),
            axis=[0],
            keepdims=True)) - 2.0 * tf.matmul(feature, tf.transpose(feature))
    # yapf: enable
    # Deal with numerical inaccuracies. Set small negatives to zero.
    pairwise_distances_squared = tf.math.maximum(pairwise_distances_squared, 0.0)
    # Get the mask where the zero distances are at.
    error_mask = tf.math.less_equal(pairwise_distances_squared, 0.0)
    # Optionally take the sqrt.
    if squared:
        pairwise_distances = pairwise_distances_squared
    else:
        pairwise_distances = tf.math.sqrt(
            pairwise_distances_squared +
            tf.cast(error_mask, dtype=tf.dtypes.float32) * 1e-16)
    # Undo conditionally adding 1e-16.
    pairwise_distances = tf.math.multiply(
        pairwise_distances,
        tf.cast(tf.math.logical_not(error_mask), dtype=tf.dtypes.float32))
    num_data = tf.shape(feature)[0]
    # Explicitly set diagonals to zero.
    mask_offdiagonals = tf.ones_like(pairwise_distances) - tf.linalg.diag(
        tf.ones([num_data]))
    pairwise_distances = tf.math.multiply(pairwise_distances, mask_offdiagonals)
    return pairwise_distances
def _masked_maximum(data, mask, dim=1):
    """Computes the axis wise maximum over chosen elements.
    Args:
      data: 2-D float `Tensor` of size [n, m].
      mask: 2-D Boolean `Tensor` of size [n, m].
      dim: The dimension over which to compute the maximum.
    Returns:
      masked_maximums: N-D `Tensor`.
        The maximized dimension is of size 1 after the operation.
    """
    axis_minimums = tf.math.reduce_min(data, dim, keepdims=True)
    masked_maximums = tf.math.reduce_max(
        tf.math.multiply(data - axis_minimums, mask), dim,
        keepdims=True) + axis_minimums
    return masked_maximums
def _masked_minimum(data, mask, dim=1):
    """Computes the axis wise minimum over chosen elements.
    Args:
      data: 2-D float `Tensor` of size [n, m].
      mask: 2-D Boolean `Tensor` of size [n, m].
      dim: The dimension over which to compute the minimum.
    Returns:
      masked_minimums: N-D `Tensor`.
        The minimized dimension is of size 1 after the operation.
    """
    axis_maximums = tf.math.reduce_max(data, dim, keepdims=True)
    masked_minimums = tf.math.reduce_min(
        tf.math.multiply(data - axis_maximums, mask), dim,
        keepdims=True) + axis_maximums
    return masked_minimums
def triplet_semihard_loss(y_true, y_pred, margin=1.0):
    """Computes the triplet loss with semi-hard negative mining.
    Args:
      y_true: 1-D integer `Tensor` with shape [batch_size] of
        multiclass integer labels.
      y_pred: 2-D float `Tensor` of embedding vectors. Embeddings should
        be l2 normalized.
      margin: Float, margin term in the loss definition.
    """
    labels, embeddings = y_true, y_pred
    # Reshape label tensor to [batch_size, 1].
    lshape = tf.shape(labels)
    labels = tf.reshape(labels, [lshape[0], 1])
    # Build pairwise squared distance matrix.
    pdist_matrix = pairwise_distance(embeddings, squared=True)
    # Build pairwise binary adjacency matrix.
    adjacency = tf.math.equal(labels, tf.transpose(labels))
    # Invert so we can select negatives only.
    adjacency_not = tf.math.logical_not(adjacency)
    batch_size = tf.size(labels)
    # Compute the mask.
    pdist_matrix_tile = tf.tile(pdist_matrix, [batch_size, 1])
    mask = tf.math.logical_and(
        tf.tile(adjacency_not, [batch_size, 1]),
        tf.math.greater(pdist_matrix_tile,
                        tf.reshape(tf.transpose(pdist_matrix), [-1, 1])))
    mask_final = tf.reshape(
        tf.math.greater(
            tf.math.reduce_sum(
                tf.cast(mask, dtype=tf.dtypes.float32), 1, keepdims=True),
            0.0), [batch_size, batch_size])
    mask_final = tf.transpose(mask_final)
    adjacency_not = tf.cast(adjacency_not, dtype=tf.dtypes.float32)
    mask = tf.cast(mask, dtype=tf.dtypes.float32)
    # negatives_outside: smallest D_an where D_an > D_ap.
    negatives_outside = tf.reshape(
        _masked_minimum(pdist_matrix_tile, mask), [batch_size, batch_size])
    negatives_outside = tf.transpose(negatives_outside)
    # negatives_inside: largest D_an.
    negatives_inside = tf.tile(
        _masked_maximum(pdist_matrix, adjacency_not), [1, batch_size])
    semi_hard_negatives = tf.where(mask_final, negatives_outside,
                                   negatives_inside)
    loss_mat = tf.math.add(margin, pdist_matrix - semi_hard_negatives)
    mask_positives = tf.cast(
        adjacency, dtype=tf.dtypes.float32) - tf.linalg.diag(
            tf.ones([batch_size]))
    # In lifted-struct the authors multiply by 0.5 for the upper triangular;
    # in semihard, all positive pairs except the diagonal are taken.
    num_positives = tf.math.reduce_sum(mask_positives)
    triplet_loss = tf.math.truediv(
        tf.math.reduce_sum(
            tf.math.maximum(tf.math.multiply(loss_mat, mask_positives), 0.0)),
        num_positives)
    return triplet_loss
class TripletSemiHardLoss(tf.keras.losses.Loss):
    """Computes the triplet loss with semi-hard negative mining.
    The loss encourages the positive distances (between a pair of embeddings
    with the same labels) to be smaller than the minimum negative distance
    among which are at least greater than the positive distance plus the
    margin constant (called semi-hard negative) in the mini-batch.
    If no such negative exists, uses the largest negative distance instead.
    See: https://arxiv.org/abs/1503.03832.
    We expect labels `y_true` to be provided as 1-D integer `Tensor` with shape
    [batch_size] of multi-class integer labels. And embeddings `y_pred` must be
    2-D float `Tensor` of l2 normalized embedding vectors.
    Args:
      margin: Float, margin term in the loss definition. Default value is 1.0.
      name: Optional name for the op.
    """
    def __init__(self, margin=1.0, name=None, **kwargs):
        super(TripletSemiHardLoss, self).__init__(
            name=name, reduction=tf.keras.losses.Reduction.NONE)
        self.margin = margin

    def call(self, y_true, y_pred):
        return triplet_semihard_loss(y_true, y_pred, self.margin)

    def get_config(self):
        config = {
            "margin": self.margin,
        }
        base_config = super(TripletSemiHardLoss, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
```
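The `pairwise_distance` helper above relies on the identity ||a − b||² = ||a||² + ||b||² − 2 a·b; a NumPy sketch checks that expansion against a direct computation:

```python
import numpy as np

rng = np.random.default_rng(1)
feat = rng.normal(size=(6, 3))
# Same expansion as in pairwise_distance: row norms + column norms - 2 * Gram matrix
sq = (feat**2).sum(1, keepdims=True) + (feat**2).sum(1) - 2.0 * feat @ feat.T
sq = np.maximum(sq, 0.0)  # clamp tiny negatives from round-off
# Direct computation for comparison
direct = ((feat[:, None, :] - feat[None, :, :])**2).sum(-1)
print(np.allclose(sq, direct))  # → True
```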
### Compile the model.
```
model.compile(optimizer=tf.keras.optimizers.Adam(0.001), loss=TripletSemiHardLoss())
```
### Train the model.
```
history = model.fit(train_dataset, epochs=10)
```
# Evaluate the model.
* Create the embeddings for the test dataset.
```
embeddings = model.predict(test_dataset)
```
# Save the embeddings for visualization in the embedding projector.
```
import io
np.savetxt("embeddings-vecs.tsv", embeddings, delimiter='\t')
meta_file = io.open('embeddings-meta.tsv', 'w', encoding='utf-8')
for image, labels in tfds.as_numpy(test_dataset):
    for label in labels:
        meta_file.write(str(label) + "\n")
meta_file.close()
```
# Visualize using Embedding Projector.
Generated embedding vector and metadata files can be loaded and visualized using Embedding Projector available [here](https://projector.tensorflow.org).
```
npzfile = np.load('../datasets/0627-8slot-503.npz')
alloc, rt_50, rt_99, rps=npzfile['alloc'], npzfile['rt_50'], npzfile['rt_99'], npzfile['rps']
alloc_list = alloc.tolist()
alloc[35], rps[35]
for index in range(len(alloc_list)):
    print(index, alloc_list[index])
def reproduce_x_to_profile_setting_list(array):  # 0627
    # input: array, shape: (2, 9), e.g., [[2. 1. 0. 4. 0. 1. 0. 0. 0.]]
    # output: [[1, 1, 2, 4, 4, 4, 4, 6]]
    summary_setting_list = None
    for item in array:
        setting = []
        for i in range(len(item)):
            if item[i] != 0:
                for num in range(int(item[i])):
                    setting.append(i+1)
        if len(setting) > 8:  # over allocated
            print("[ERROR]: Too many containers: ", item)
        else:
            for i in range(8-len(setting)):
                setting.insert(0, 0)  # add 0 in the first place
        # now the setting is well sorted
        if summary_setting_list is None:
            summary_setting_list = [setting]
        else:
            summary_setting_list.append(setting)
    return summary_setting_list
input_a = reproduce_x_to_profile_setting_list(np.array([[1.0, 0.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0]]))
input_a
'-'.join([str(i) for i in input_a[0]])
import numpy as np
item_1=np.array([1,1,2,1,0,0,1,2,0],dtype=int)
item_1=np.array([0, 1, 0, 0, 2, 0, 0, 0, 1])
# item_1=np.array([1.,1.,2.,0.,0.,4.,0.,0.,0.], dtype=int)
item_1 # Luping's input: 9: R1 R2 M1 M2 + 5
item_2=r1r2m1m2_to_mms_local_remote_1d(item_1); item_2 # 9: CPU, Redis, MMS_L, MMS_R + 5
item_3=reproduce_x_to_profile_setting([item_2])[0]; item_3 # 8 loc: [] [] [] []
def eight_to_nine(in_array):
    out_array = np.array([0]*9)
    for item in in_array:
        if item != 0:
            out_array[int(item-1)] += 1
    return out_array
item_4=eight_to_nine(item_3); item_4 # alloc_match
total_rps = 0
try:
    index = alloc_list.index(item_4.tolist())
    rps_breakdown = rps[index]
    rps_sum = [r * n for (r, n) in zip(rps_breakdown, item_4)]
    rps_sum = sum(np.nan_to_num(np.array(rps_sum)))
    total_rps += rps_sum
    print("luping:", item_1)
    print(" query:", item_4)
    print("[rps/container]:", rps_breakdown)
except ValueError:
    print("Not Found:", item_4, " (from)", item_1)
mms_local_remote_to_r1r2m1m2_1d(item_1, rps_breakdown)
query_results_r1r2m1m2('results-sim-spare8081.npz', ar_sim_in[0], debug=True)
query_results_r1r2m1m2('results-sim-spare8081.npz', [[1,1,2,0,0,4,0,0,0]], debug=True)
reproduce_x_to_profile_setting([[0, 2, 2, 0, 0, 4, 0, 0, 0]], r1r2m1m2=True)
def query_results_r1r2m1m2(npz_file_path, query_allocation_array_list, debug=False):
    npzfile = np.load(npz_file_path)
    alloc, rt_50, rt_99, rps = npzfile['alloc'], npzfile['rt_50'], npzfile['rt_99'], npzfile['rps']
    alloc_list = alloc.tolist()
    total_rps = 0
    for item_1 in query_allocation_array_list:
        item_2 = r1r2m1m2_to_mms_local_remote_1d(item_1)  # 9: CPU, Redis, MMS_L, MMS_R + 5
        item_3 = reproduce_x_to_profile_setting([item_2])[0]  # 8 loc: [] [] [] []
        def eight_to_nine(in_array):
            out_array = np.array([0]*9)
            for item in in_array:
                if item != 0:
                    out_array[int(item-1)] += 1
            return out_array
        item_4 = eight_to_nine(item_3)  # alloc_match
        if debug:
            print("[DEBUG] luping:", item_1)
            print("[DEBUG]  query:", item_4)
        try:
            index = alloc_list.index(item_4.tolist())
            temp_rps_breakdown = rps[index]
            if debug:
                print("[DEBUG] temp rps:", np.nan_to_num(temp_rps_breakdown))
            rps_breakdown = mms_local_remote_to_r1r2m1m2_1d(item_1, np.nan_to_num(temp_rps_breakdown))
            rps_sum = [r * n for (r, n) in zip(rps_breakdown, item_1)]
            rps_sum = sum(np.nan_to_num(np.array(rps_sum)))
            total_rps += rps_sum
            print("%s throughput: %.4f" % (item_1, rps_sum))
            print("\t[rps/cntr] %s" % rps_breakdown)
        except ValueError:
            print("Not Found:", item_4, " (from)", item_1)
    print("Total Throughput: %.4f" % total_rps)
for i in range(5):
    print('--------', i+1, '--------')
    query_results_r1r2m1m2('results-sim-spare8081.npz', ar_sim_in[i])
alloc_list
def reproduce_x_to_profile_setting(array, r1r2m1m2=False, with_redis=True, verbose=True):
    # date: 2019-0617
    # input: array, shape: (2, 9), e.g., [[2. 1. 0. 4. 0. 1. 0. 0. 0.]]
    # output: [[2, 1, 1, 4, 4, 4, 4, 6]]
    summary_setting = None
    for item in array:
        if r1r2m1m2:
            item = r1r2m1m2_to_mms_local_remote_1d(item)  # 9: CPU, Redis, MMS_L, MMS_R + 5
        if with_redis:
            setting = [2]
            item[1] -= 1  # put 1 Redis at top
        else:
            setting = []
        for i in range(len(item)):
            if item[i] != 0:
                for num in range(item[i]):
                    setting.append(i+1)
        if len(setting) > 8:  # over allocated
            print("[ERROR]: Too many containers: ", item)
        else:
            for i in range(8-len(setting)):
                setting.insert(1, 0)  # add 0 after the first place
        if summary_setting is None:
            summary_setting = [setting]
        else:
            summary_setting.append(setting)
    return summary_setting
def mms_local_remote_to_r1r2m1m2_1d(input_x, input_y):
    """
    input_x  = [1, 0, 1, 0, 1, 1, 0, 1, 3]  # R1, R2, M1, M2, XXXXX
    output_x = [0, 1, 1, 0, 1, 1, 0, 1, 3]  # CPU, Redis, MMS_LOCAL, MMS_REMOTE, XXXXX
    input_y  = [0.0000, 0.8218, 4.0633, 0.0000, 2.6703, 1.0034, 0.0000, 0.8029, 0.5026]  # CPU, Redis, MMS_LOCAL, MMS_REMOTE, XXXXX
    output_y = [0.8218, 0.0000, 4.0633, 0.0000, 2.6703, 1.0034, 0.0000, 0.8029, 0.5026]  # R1, R2, M1, M2, XXXXX
    """
    output_y = input_y.copy()  # input_y[-5:] unchanged
    if input_x[0] >= 1:  # if R1 exists
        output_y[0] = input_y[1]  # R1 = Redis
        if input_x[2] >= 1:  # if R1 & M1 co-exist
            output_y[2] = input_y[2]  # M1 = MMS_LOCAL
        else:
            output_y[2] = 0  # M1 = 0
    else:  # if R1 does not exist
        output_y[0] = 0  # R1 = 0
        if input_x[2] >= 1:  # if M1 exists
            output_y[2] = input_y[3]  # M1 = MMS_REMOTE
        else:
            output_y[2] = 0  # M1 = 0
    if input_x[1] >= 1:  # if R2 exists
        output_y[1] = input_y[1]  # R2 = Redis
        if input_x[3] >= 1:  # if R2 & M2 co-exist
            output_y[3] = input_y[2]  # M2 = MMS_LOCAL
        else:
            output_y[3] = 0  # M2 = 0
    else:  # if R2 does not exist
        output_y[1] = 0  # R2 = 0
        if input_x[3] >= 1:  # if M2 exists
            output_y[3] = input_y[3]  # M2 = MMS_REMOTE
        else:
            output_y[3] = 0  # M2 = 0
    return output_y
def r1r2m1m2_to_mms_local_remote_1d(input_x):
    """
    input_x  = [1, 0, 1, 0, 1, 1, 0, 1, 3]  # R1, R2, M1, M2, XXXXX
    output_x = [0, 1, 1, 0, 1, 1, 0, 1, 3]  # CPU, Redis, MMS_LOCAL, MMS_REMOTE, XXXXX
    """
    output_x = input_x.copy()  # input_x[-5:] unchanged
    num_redis = input_x[0] + input_x[1]  # Redis = R1 + R2
    num_mms_local = 0
    num_mms_remote = 0
    if input_x[0] >= 1:  # if R1 exists
        num_mms_local += input_x[2]  # MMS_LOCAL += M1
    else:
        num_mms_remote += input_x[2]  # MMS_REMOTE += M1
    if input_x[1] >= 1:  # if R2 exists
        num_mms_local += input_x[3]  # MMS_LOCAL += M2
    else:
        num_mms_remote += input_x[3]  # MMS_REMOTE += M2
    output_x[0] = 0  # Sysbench CPU records discarded
    output_x[1] = num_redis
    output_x[2] = num_mms_local
    output_x[3] = num_mms_remote
    return output_x
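# A standalone sanity check (not from the original notebook): a compact
# restatement of the R1/R2/M1/M2 -> CPU/Redis/MMS_LOCAL/MMS_REMOTE mapping
# above, verified against the docstring example. An MXNet instance is counted
# as local when its co-located Redis exists, remote otherwise.
def _r1r2_to_local_remote_demo(x):
    y = x.copy()
    y[0] = 0                                                        # CPU slot cleared
    y[1] = x[0] + x[1]                                              # Redis = R1 + R2
    y[2] = (x[2] if x[0] >= 1 else 0) + (x[3] if x[1] >= 1 else 0)  # MMS_LOCAL
    y[3] = (x[2] if x[0] < 1 else 0) + (x[3] if x[1] < 1 else 0)    # MMS_REMOTE
    return y
assert _r1r2_to_local_remote_demo([1, 0, 1, 0, 1, 1, 0, 1, 3]) == [0, 1, 1, 0, 1, 1, 0, 1, 3]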
alloc
def query_results(npz_file_path, query_allocation_array_list):
    npzfile = np.load(npz_file_path)
    alloc, rt_50, rt_99, rps = npzfile['alloc'], npzfile['rt_50'], npzfile['rt_99'], npzfile['rps']
    alloc_list = alloc.tolist()
    total_rps = 0
    for query_allocation_array in query_allocation_array_list:
        try:
            # query_allocation_array = reproduce_x_to_profile_setting([query_allocation_array])[0]
            index = alloc_list.index(query_allocation_array)
            rps_breakdown = rps[index]
            rps_sum = [r * n for (r, n) in zip(rps_breakdown, query_allocation_array)]
            rps_sum = sum(np.nan_to_num(np.array(rps_sum)))
            total_rps += rps_sum
            print("%s throughput: %.4f" % (query_allocation_array, rps_sum))
            print("\t[rps/cntr] %s" % rps_breakdown)
        except ValueError:
            print("%s throughput: nan" % (query_allocation_array))
            print("\t xxxxx [NOT FOUND] xxxxx")
            rps_breakdown = [np.nan] * 9
            rps_sum = np.nan
    print("Total Throughput: %.4f" % total_rps)
ar_444_in = [[[2, 1, 0, 4, 0, 1, 0, 0, 0],[1, 1, 0, 0, 0, 3, 0, 3, 0],[0, 2, 0, 0, 0, 0, 2, 1, 0],[1, 1, 0, 0, 0, 0, 0, 0, 1],[0, 1, 1, 0, 2, 0, 1, 0, 0],[0, 1, 1, 0, 2, 0, 1, 0, 0],[0, 2, 0, 0, 0, 0, 0, 0, 1],[0, 1, 2, 0, 0, 0, 0, 0, 1],[0, 3, 0, 0, 0, 0, 0, 0, 1]],[[1, 1, 1, 0, 1, 0, 1, 0, 0],[1, 1, 1, 0, 1, 0, 0, 1, 0],[1, 1, 1, 0, 0, 1, 0, 1, 0],[1, 1, 0, 1, 0, 1, 0, 1, 0],[0, 2, 0, 1, 0, 1, 0, 1, 0],[0, 2, 0, 1, 0, 1, 0, 0, 1],[0, 2, 0, 1, 0, 0, 1, 0, 1],[0, 2, 0, 0, 1, 0, 1, 0, 1],[0, 1, 1, 0, 1, 0, 1, 0, 1]],[[4, 1, 0, 0, 0, 0, 0, 0, 0],[0, 5, 0, 0, 0, 0, 0, 0, 0],[0, 1, 4, 0, 0, 0, 0, 0, 0],[0, 1, 0, 4, 0, 0, 0, 0, 0],[0, 1, 0, 0, 4, 0, 0, 0, 0],[0, 1, 0, 0, 0, 4, 0, 0, 0],[0, 1, 0, 0, 0, 0, 4, 0, 0],[0, 1, 0, 0, 0, 0, 0, 4, 0],[0, 1, 0, 0, 0, 0, 0, 0, 4]],[[1, 1, 1, 0, 0, 0, 0, 2, 0],[0, 2, 0, 1, 2, 0, 1, 0, 0],[2, 1, 0, 0, 1, 0, 1, 0, 0],[0, 2, 1, 0, 0, 0, 2, 1, 1],[0, 2, 0, 2, 0, 2, 0, 0, 1],[0, 1, 0, 0, 0, 1, 0, 0, 1],[1, 1, 0, 0, 0, 0, 0, 0, 0],[0, 1, 2, 0, 1, 1, 0, 0, 1],[0, 2, 0, 1, 0, 0, 0, 1, 0]],[[2, 1, 1, 0, 0, 1, 1, 1, 0],[0, 1, 0, 1, 0, 0, 1, 0, 0],[0, 1, 0, 0, 0, 0, 0, 2, 0],[0, 1, 0, 0, 0, 1, 0, 0, 1],[1, 1, 2, 1, 0, 1, 0, 0, 1],[0, 3, 0, 0, 0, 0, 1, 1, 0],[1, 1, 0, 0, 0, 0, 0, 0, 2],[0, 2, 1, 1, 1, 0, 0, 0, 0],[0, 2, 0, 1, 3, 1, 1, 0, 0]]]
ar_sim_in=[[[0,1,0,0,2,0,0,0,1],[0,1,0,0,0,0,2,0,1],[0,1,0,0,0,0,0,0,1],[0,1,0,0,0,0,0,0,1],[1,1,2,0,0,4,0,0,0],[1,1,2,1,0,0,1,2,0],[0,1,0,0,0,0,0,0,0],[1,1,0,3,0,0,1,2,0],[1,5,0,0,2,0,0,0,0]],[[1,1,1,0,1,0,1,0,0],[1,1,1,0,1,0,0,1,0],[1,1,1,0,0,1,0,1,0],[1,1,0,1,0,1,0,1,0],[0,2,0,1,0,1,0,1,0],[0,2,0,1,0,1,0,0,1],[0,2,0,1,0,0,1,0,1],[0,2,0,0,1,0,1,0,1],[0,1,1,0,1,0,1,0,1]],[[4,1,0,0,0,0,0,0,0],[0,5,0,0,0,0,0,0,0],[0,1,4,0,0,0,0,0,0],[0,1,0,4,0,0,0,0,0],[0,1,0,0,4,0,0,0,0],[0,1,0,0,0,4,0,0,0],[0,1,0,0,0,0,4,0,0],[0,1,0,0,0,0,0,4,0],[0,1,0,0,0,0,0,0,4]],[[1,1,1,0,0,0,0,2,0],[0,2,0,1,2,0,1,0,0],[2,1,0,0,1,0,1,0,0],[0,2,1,0,0,0,2,1,1],[0,2,0,2,0,2,0,0,1],[0,1,0,0,0,1,0,0,1],[1,1,0,0,0,0,0,0,0],[0,1,2,0,1,1,0,0,1],[0,2,0,1,0,0,0,1,0]],[[2,1,1,0,0,1,1,1,0],[0,1,0,1,0,0,1,0,0],[0,1,0,0,0,0,0,2,0],[0,1,0,0,0,1,0,0,1],[1,1,2,1,0,1,0,0,1],[0,3,0,0,0,0,1,1,0],[1,1,0,0,0,0,0,0,2],[0,2,1,1,1,0,0,0,0],[0,2,0,1,3,1,1,0,0]]]
npzfile_444='results-444.npz'
npzfile_sim='results-sim.npz'
for i in range(5):
    print('--------', i+1, '--------')
    query_results(npzfile_444, ar_444_in[i])
from pyscript_utils import reproduce_x_to_profile_setting
# query = [0, 1, 0, 0, 0, 0, 0, 0, 1]
query = [0, 2, 2, 0, 0, 4, 0, 0, 0]
query_input = reproduce_x_to_profile_setting([query])
query_input
filename = str(query_input[0]).replace(', ', '-')[1:-1]
filename = 'results/'+filename+'.csv'
filename
import pandas as pd
src=pd.read_csv(filename)
src
WORKLOAD = ["SYS-CPU", "Redis", "MXNet-ResNet-Local", "MXNet-ResNet-Remote", "SYS-MEM", "SYS-FILEIO", "Stress-CPU", "Stress-MEM", "Stress-FILE"]
output_array = np.array([np.nan]*9)
for worki in range(len(WORKLOAD)):
    related_rows = src[src['App'] == WORKLOAD[worki]]
    output_array[worki] = related_rows['Requests/s'].median()
print(query, "throughput:", sum(np.nan_to_num(output_array)))
print("\t[rps/cntr]", output_array)
WORKLOAD = ["SYS-CPU", "Redis", "MXNet-ResNet-Local", "MXNet-ResNet-Remote", "SYS-MEM", "SYS-FILEIO", "Stress-CPU", "Stress-MEM", "Stress-FILE"]
related_rows = src[ src['App'] == WORKLOAD[1]]
def query_results_check_csv_r1r2m1m2(query=[0, 2, 2, 0, 0, 4, 0, 0, 0]):
    # the printed labels are not exact, but the right file is located
    save_query = query.copy()
    query_input = reproduce_x_to_profile_setting([query], r1r2m1m2=True)
    print("query_input 8 containers with [CPU Redis ...] (not R1 R2 ..):", query_input)
    filename = str(query_input[0]).replace(', ', '-')[1:-1]
    filename = 'results/' + filename + '.csv'
    try:
        src = pd.read_csv(filename)
    except:
        print("CSV file", filename, "not found")
    WORKLOAD = ["SYS-CPU", "Redis", "MXNet-ResNet-Local", "MXNet-ResNet-Remote", "SYS-MEM", "SYS-FILEIO", "Stress-CPU", "Stress-MEM", "Stress-FILE"]
    output_array = np.array([np.nan]*9)
    for worki in range(len(WORKLOAD)):
        related_rows = src[src['App'] == WORKLOAD[worki]]
        output_array[worki] = related_rows['Requests/s'].median()
    print(save_query, "throughput:", sum(np.nan_to_num(output_array)))
    print("\t[rps/cntr]", output_array)
    return src
# query_results_check_csv_r1r2m1m2([0,1,0,0,0,1,0,0,1])
# query_results_check_csv_r1r2m1m2([1, 1, 2, 0, 0, 4, 0, 0, 0])
query_results_check_csv_r1r2m1m2([0, 1, 0, 0, 0, 1, 0, 0, 1])
related_rows
related_rows.median()['Requests/s']
inarray=[[0, 1, 0, 0, 0, 0, 0, 0, 1],[0, 1, 0, 0, 0, 0, 0, 0, 1],[1, 1, 2, 0, 0, 4, 0, 0, 0],[1, 1, 2, 1, 0, 0, 1, 2, 0],[1, 1, 0, 3, 0, 0, 1, 2, 0],[1, 5, 0, 0, 2, 0, 0, 0, 0],[1, 1, 1, 0, 1, 0, 1, 0, 0],[1, 1, 1, 0, 1, 0, 0, 1, 0],[1, 1, 1, 0, 0, 1, 0, 1, 0],[1, 1, 0, 1, 0, 1, 0, 1, 0],[0, 2, 0, 1, 0, 1, 0, 1, 0],[0, 2, 0, 1, 0, 1, 0, 0, 1],[0, 2, 0, 1, 0, 0, 1, 0, 1],[0, 1, 1, 0, 1, 0, 1, 0, 1],[4, 1, 0, 0, 0, 0, 0, 0, 0],[0, 5, 0, 0, 0, 0, 0, 0, 0],[0, 1, 4, 0, 0, 0, 0, 0, 0],[1, 1, 1, 0, 0, 0, 0, 2, 0],[0, 2, 0, 1, 2, 0, 1, 0, 0],[2, 1, 0, 0, 1, 0, 1, 0, 0],[0, 2, 1, 0, 0, 0, 2, 1, 1],[0, 2, 0, 2, 0, 2, 0, 0, 1],[0, 1, 0, 0, 0, 1, 0, 0, 1],[1, 1, 0, 0, 0, 0, 0, 0, 0],[0, 1, 2, 0, 1, 1, 0, 0, 1],[0, 2, 0, 1, 0, 0, 0, 1, 0],[2, 1, 1, 0, 0, 1, 1, 1, 0],[0, 1, 0, 1, 0, 0, 1, 0, 0],[0, 1, 0, 0, 0, 1, 0, 0, 1],[1, 1, 2, 1, 0, 1, 0, 0, 1],[1, 1, 0, 0, 0, 0, 0, 0, 2],[0, 2, 0, 1, 3, 1, 1, 0, 0]]
inarray
inarray_back = inarray.copy()
import os
for query in inarray:
    print("in: ", query)
    query_input = reproduce_x_to_profile_setting([query], with_redis=False)
    filename = str(query_input[0]).replace(', ', '-')[1:-1]
    filename = 'results/' + filename + '.csv'
    if not os.path.exists(filename):
        print(query)
        print(filename)
ar_sim_in=[[0,1,0,0,2,0,0,0,1],[0,1,0,0,0,0,2,0,1],[0,1,0,0,0,0,0,0,1],[0,1,0,0,0,0,0,0,1],[1,1,2,0,0,4,0,0,0],[1,1,2,1,0,0,1,2,0],[0,1,0,0,0,0,0,0,0],[1,1,0,3,0,0,1,2,0],[1,5,0,0,2,0,0,0,0],[1,1,1,0,1,0,1,0,0],[1,1,1,0,1,0,0,1,0],[1,1,1,0,0,1,0,1,0],[1,1,0,1,0,1,0,1,0],[0,2,0,1,0,1,0,1,0],[0,2,0,1,0,1,0,0,1],[0,2,0,1,0,0,1,0,1],[0,2,0,0,1,0,1,0,1],[0,1,1,0,1,0,1,0,1],[4,1,0,0,0,0,0,0,0],[0,5,0,0,0,0,0,0,0],[0,1,4,0,0,0,0,0,0],[0,1,0,4,0,0,0,0,0],[0,1,0,0,4,0,0,0,0],[0,1,0,0,0,4,0,0,0],[0,1,0,0,0,0,4,0,0],[0,1,0,0,0,0,0,4,0],[0,1,0,0,0,0,0,0,4],[1,1,1,0,0,0,0,2,0],[0,2,0,1,2,0,1,0,0],[2,1,0,0,1,0,1,0,0],[0,2,1,0,0,0,2,1,1],[0,2,0,2,0,2,0,0,1],[0,1,0,0,0,1,0,0,1],[1,1,0,0,0,0,0,0,0],[0,1,2,0,1,1,0,0,1],[0,2,0,1,0,0,0,1,0],[2,1,1,0,0,1,1,1,0],[0,1,0,1,0,0,1,0,0],[0,1,0,0,0,0,0,2,0],[0,1,0,0,0,1,0,0,1],[1,1,2,1,0,1,0,0,1],[0,3,0,0,0,0,1,1,0],[1,1,0,0,0,0,0,0,2],[0,2,1,1,1,0,0,0,0],[0,2,0,1,3,1,1,0,0]]
output_xs = r1r2m1m2_to_mms_local_remote(np.array(ar_sim_in))
output_xs
def reproduce_x_to_profile_setting(array, with_redis=True, verbose=True):
    # date: 2019-0617
    # input: array, shape: (2, 9), e.g., [[2. 1. 0. 4. 0. 1. 0. 0. 0.]]
    # output: [[2, 1, 1, 4, 4, 4, 4, 6]]
    summary_setting = None
    for item in array:
        if with_redis:
            setting = [2]
            item[1] -= 1  # put 1 Redis at the top
        else:
            setting = []
        for i in range(len(item)):
            if item[i] != 0:
                for num in range(item[i]):
                    setting.append(i+1)
        if len(setting) > 8:  # over-allocated
            print("[ERROR]: Too many containers: ", item)
        else:
            for i in range(8-len(setting)):
                setting.insert(1, 0)  # add 0 after the first place
        if summary_setting is None:
            summary_setting = [setting]
        else:
            summary_setting.append(setting)
    return summary_setting
reproduce_x_to_profile_setting(output_xs)
```
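The expansion and padding logic documented in the comments of `reproduce_x_to_profile_setting` can be sanity-checked with a simplified standalone copy. The helper below (`to_profile_setting`, single-item version) is a hypothetical rewrite for illustration, not code from the original notebook:

```python
# Simplified standalone sketch of the expansion/padding logic described in
# reproduce_x_to_profile_setting's comments (hypothetical helper, one item).
def to_profile_setting(item, with_redis=True):
    item = list(item)
    setting = []
    if with_redis:
        setting.append(2)   # one Redis container pinned to the top slot
        item[1] -= 1
    for i, count in enumerate(item):
        setting.extend([i + 1] * count)   # workload index i maps to id i+1
    while len(setting) < 8:
        setting.insert(1, 0)              # pad with zeros after the first slot
    return setting

print(to_profile_setting([2, 1, 0, 4, 0, 1, 0, 0, 0]))
# matches the docstring example: [2, 1, 1, 4, 4, 4, 4, 6]
```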
| github_jupyter |
<font size="4" face="verdana" color="red"> Brainery Byte
<hr></font>
<font size="6" face="verdana" color="blue"> <b>Building an Interactive User Interface <br>in Jupyter Notebook</b> <br>
>> Building A Shopping List <<
<br>
<br>
</font><p>
<font size="4" face="verdana" color="black">
R211014 <br>
Silvia Mazzoni, 2021 <br>
silviamazzoni@yahoo.com <br>
<br>
The objective of this workbook is to help you assemble the building blocks of an interactive user interface.<br>
<p>
<font size="4" face="verdana" color="black">
<b>Widgets:</b>
<p>Widgets are what makes Jupyter Notebooks so awesome!
Here is where I get my info on building <a href = "https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html"> WIDGETS </a></p>
<br>
<ol type="1">
<li>Text Entry</li>
<li>Pull-Down Menus</li>
<li>Radio Buttons</li>
<li>Checkboxes</li>
<li>Action Buttons</li>
<li>Accordion boxes</li>
<li>Embedded Videos</li>
<li>Sliders </li>
<br>
</ol>
</font>
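Under the hood, every widget type in this list follows the same `observe` event pattern that the rest of this notebook builds on. A display-free sketch of that pattern, using a stand-in class rather than a real ipywidgets widget:

```python
# Stand-in for an ipywidgets widget: it holds a value and notifies observers
# with a change dict, mirroring the {'old', 'new', 'name'} shape that real
# .observe() handlers receive. This is a sketch, not the ipywidgets class.
class FakeWidget:
    def __init__(self, value):
        self.value = value
        self._handlers = []
    def observe(self, handler, names='value'):
        self._handlers.append(handler)
    def set(self, new_value):   # a real widget does this on user input
        change = {'old': self.value, 'new': new_value, 'name': 'value'}
        self.value = new_value
        for handler in self._handlers:
            handler(change)

log = []
w = FakeWidget('Costco')
w.observe(lambda change: log.append(change['new']), names='value')
w.set('Ralphs')
print(log)  # ['Ralphs']
```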
<font size="3" face="verdana" color="black">
<br>
<b>NOTES:</b>
<ol>
<li> This is where you put notes.</li>
<ol type="1">
<li> <a href = "https://www.silviasbrainery.com"> Here is a hyperlink to my web site</a> with additional text</li>
<li> <a href = "https://youtu.be/CTqG3GbB0i0"> Here is a hyperlink to a video</a></li>
</ol>
<li> Here is another line in the outline </li>
</ol>
</font>
<BR>
When you are done, you can write simple text.
Because we are working in Binder, and Binder sessions are meant to be ephemeral, you cannot save any changes you make to your Jupyter Notebook. If you do make changes or notes, download the notebook to your own computer by clicking File > Download as > Notebook (.ipynb). To run it, you will need the appropriate software to run Jupyter Notebooks in Python, and you will need to pip install OpenSeesPy and eSEESminiPy in your Python configuration. You may view my videos on how to install Anaconda, Jupyter Notebooks, and OpenSeesPy (https://www.youtube.com/c/silviasbrainery).
This code has been developed by Silvia Mazzoni. Please acknowledge this in your scripts, when applicable.
```
#Here is an example of embedding a video into your Notebook
from IPython.lib.display import YouTubeVideo
YouTubeVideo('8AFhbeVl3qY')
```
# Initialize Python:
```
# pip install these packages if you are running this locally
from ipywidgets import widgets, Output
from ipywidgets import interact, interactive, fixed, interact_manual, Layout
from IPython.display import display
from IPython.display import clear_output
from IPython.display import HTML
from IPython.display import Image
from IPython.display import Javascript
%%javascript
IPython.OutputArea.auto_scroll_threshold = 100000;
```
## Initialize Widget Data
```
AllWidgetData = {}
outBucketData = {}
```
## Create a header for your UI. Put it into a widget so you can display it anywhere
```
outBucketData['Header'] = Output()
def createHeader():
    label_TopHeader = widgets.HTML(value = """<font color="#D94496"><h1>My Interactive Shopping List</h1></font>
<p>
Developed by Silvia Mazzoni<br>
Let's do some shopping. """)
    thisContainer = widgets.VBox(display='wrap',flex_wrap='wrap',children=[label_TopHeader],border='1px solid blue')
    return thisContainer
def displayHeader(thisOut):
    thisContainer = createHeader()
    thisOut.clear_output()
    with thisOut:
        display(thisContainer)
    display(thisOut)
displayHeader(outBucketData['Header'])
```
# Create a nicely formatted section header
```
def makeSectionHeaderWidget(text):
    thisWidget = widgets.HTML(value = """<font color="#D47088"><h3>""" + text + """</h3></font>""")
    return thisWidget
```
## Create Pull-Down Menu Widget
```
def makeDropdownMenu(defValue,MenuArray,descrip):
    thisWidget = widgets.Dropdown(
        options=MenuArray,
        value=defValue,
        description=descrip,
        continuous_update=True,
        disabled=False,
    )
    return thisWidget
```
## Store Data
```
StoreData = {}
StoreData['Costco'] = {'Address': 'here','Travel Time':10}
StoreData['TraderJoes'] = {'Address': 'there','Travel Time':22}
StoreData['Ralphs'] = {'Address': 'somewhere','Travel Time':5}
```
## Create Pull-Down Menu for Store
```
SelectStore_Default = 'Costco'
outBucketData['StoreSelector'] = Output()
def updateStoreData(thisStore):
    AllWidgetData['MyStore'].value = 'My Store: ' + thisStore
    AllWidgetData['Address'].value = 'Address: ' + str(StoreData[thisStore]['Address'])
    AllWidgetData['MyStoreTravelTime'].value = 'Travel Time: ' + str(StoreData[thisStore]['Travel Time'])
def SelectStore_eventhandler(change):
    thisStore = change.new
    updateStoreData(thisStore)
def makeStoreSelector():
    thisSectionHeader = makeSectionHeaderWidget('Store')
    SelectStore_Options = StoreData.keys()
    SelectStore_Descript = 'Select Store'
    AllWidgetData['SelectStore'] = makeDropdownMenu(SelectStore_Default,SelectStore_Options,SelectStore_Descript)
    thisContainer = widgets.VBox(children=[thisSectionHeader,AllWidgetData['SelectStore']])
    AllWidgetData['SelectStore'].observe(SelectStore_eventhandler, names='value')
    return thisContainer
def displayStoreSelector(thisOut):
    thisContainer = makeStoreSelector()
    thisOut.clear_output()
    with thisOut:
        display(thisContainer)
    display(thisOut)
displayStoreSelector(outBucketData['StoreSelector'])
```
## Display Store Data
```
outBucketData['StoreData'] = Output()
def makeStoreData(thisStore):
    thisSectionHeader = makeSectionHeaderWidget('Store Info')
    AllWidgetData['MyStore'] = widgets.Label(value='My Store: ' + thisStore)
    AllWidgetData['Address'] = widgets.Label(value='Address: ' + str(StoreData[thisStore]['Address']))
    AllWidgetData['MyStoreTravelTime'] = widgets.Label(value='Travel Time: ' + str(StoreData[thisStore]['Travel Time']))
    thisContainer = widgets.VBox(children=[thisSectionHeader,AllWidgetData['MyStore'],AllWidgetData['Address'],AllWidgetData['MyStoreTravelTime']])
    return thisContainer
def DisplayStoreData(thisStore,thisOut):
    thisContainer = makeStoreData(thisStore)
    thisOut.clear_output()
    with thisOut:
        display(thisContainer)
    display(thisOut)
DisplayStoreData(SelectStore_Default,outBucketData['StoreData'])
```
# Shopping Items
```
ShoppingItems = {'FRUITS':{'Bananas':{'price':10,'default':0},
'Apples':{'price':25,'default':1},
'Oranges':{'price':33,'default':6}},
'CARBS':{'Rice':{'price':37,'default':3},
'Pasta':{'price':27,'default':2},
'Bread':{'price':15,'default':1}}}
```
## Create Checkbox
```
def makeCheckbox(defValue,descrip):
    thisWidget = widgets.Checkbox(
        value=defValue,
        description=descrip,
        disabled=False,
        indent=False
    )
    return thisWidget
```
## Create Slider for quantity
```
def makeFloatSlider(defValue,RangeArray,descrip):
    # note the argument order: RangeArray = [min, step, max]
    thisSlider = widgets.FloatSlider(
        value=defValue,
        min=RangeArray[0],
        max=RangeArray[2],
        step=RangeArray[1],
        description=descrip,
        disabled=False,
        continuous_update=True,
        orientation='horizontal',
        readout=True
    )
    return thisSlider
```
## Select Items and Amount:
```
outBucketData['ShoppingItems'] = Output()
defValue = False
def makeShoppingItemsDisplay():
    thisSectionHeader = makeSectionHeaderWidget('Shopping Items')
    thisWidgetList_H = []
    for ItemType,ItemData in ShoppingItems.items():
        typeLabelWidget = widgets.Label(value=ItemType)
        thisWidgetList_here = []
        for thisItem,thisItemData in ItemData.items():
            thisPrice = ShoppingItems[ItemType][thisItem]['price']
            thisWidget = makeCheckbox(defValue,thisItem + ' (' + str(thisPrice) + '/unit)')
            thisDefaultAmt = ShoppingItems[ItemType][thisItem]['default']
            thisLabel = '(def=' + str(thisDefaultAmt) + ')'
            AmountRange = [0,1,20]
            thisSliderWidget = makeFloatSlider(thisDefaultAmt,AmountRange,thisLabel)
            thisContainer = widgets.HBox(display='wrap',flex_wrap='wrap',children=[thisSliderWidget,thisWidget])
            thisWidgetList_here.append(thisContainer)
            AllWidgetData[thisItem+'CheckBox'] = thisWidget
            AllWidgetData[thisItem+'Amount'] = thisSliderWidget
        thisContainer = widgets.VBox(display='wrap',flex_wrap='wrap',children=[typeLabelWidget,*thisWidgetList_here])
        thisWidgetList_H.append(thisContainer)
    thisHBox = widgets.VBox(display='wrap',flex_wrap='wrap',children=thisWidgetList_H)
    thisContainer = widgets.VBox(display='wrap',flex_wrap='wrap',children=[thisSectionHeader,thisHBox])
    return thisContainer
def displayShoppingItemsDisplay(thisOut):
    thisContainer = makeShoppingItemsDisplay()
    thisOut.clear_output()
    with thisOut:
        display(thisContainer)
    display(thisOut)
displayShoppingItemsDisplay(outBucketData['ShoppingItems'])
```
# Create Radio Buttons for Hot Items
```
HotItems = {'Chicken':{'Whole':10,'Half':7,'None':0},'Ribs':{'FullRack':12,'HalfRack':8,'None':0}}
display(HotItems)
def makeRadioButtons(defValue,MenuArray,descrip):
    thisWidget = widgets.RadioButtons(
        options=MenuArray,
        value=defValue,
        # layout={'width': 'max-content'},  # if the items' names are long
        description=descrip,
        disabled=False
    )
    return thisWidget
outBucketData['HotItems'] = Output()
PriceWidget = {}
def HotItems_eventhandler(change):
    updateHotItemsDisplay()
def updateHotItemsDisplay():
    TotalHotPrice = 0
    for ItemType,ItemData in HotItems.items():
        thisItemValue = AllWidgetData[ItemType].value
        thisItemPrice = HotItems[ItemType][thisItemValue]
        PriceWidget[ItemType].value = ItemType + ' Price: ' + str(thisItemPrice)
        TotalHotPrice = TotalHotPrice + float(thisItemPrice)
    PriceWidget['TotalHotPrice'].value = 'Total HotItems Price: ' + str(TotalHotPrice)
def makeHotItemsDisplay():
    thisSectionHeader = makeSectionHeaderWidget('Hot Items')
    thisWidgetList_V = []
    TotalHotPrice = 0
    for ItemType,ItemData in HotItems.items():
        theseOptions = ItemData.keys()
        thisDef = 'None'
        AllWidgetData[ItemType] = makeRadioButtons(thisDef,theseOptions,ItemType)
        PriceWidget[ItemType] = widgets.Label(value=ItemType + ' Price: ' + str(HotItems[ItemType][thisDef]), fontsize=20, color='#ff0000', border='3 px solid red')
        thisContainer = widgets.HBox(children=[AllWidgetData[ItemType],PriceWidget[ItemType]])
        thisWidgetList_V.append(thisContainer)
        TotalHotPrice = TotalHotPrice + HotItems[ItemType][thisDef]
        AllWidgetData[ItemType].observe(HotItems_eventhandler, names='value')
    PriceWidget['TotalHotPrice'] = widgets.Label(value='Total HotItems Price: ' + str(TotalHotPrice), fontsize=20, color='#ff0000', border='3 px solid red')
    thisContainer = widgets.VBox(children=[thisSectionHeader,*thisWidgetList_V,PriceWidget['TotalHotPrice']])
    return thisContainer
def displayHotItemsDisplay(thisOut):
    thisContainer = makeHotItemsDisplay()
    thisOut.clear_output()
    with thisOut:
        display(thisContainer)
    display(thisOut)
displayHotItemsDisplay(outBucketData['HotItems'])
```
## Entry Widget for Extras
```
def makeTextEntryWidget(defValue,descrip,placeholder=''):
    thisWidget = widgets.Text(
        value=defValue,
        placeholder=placeholder,
        description=descrip,
        # layout=widgets.Layout(width='75%', height='40px'),
        continuous_update=True,
        disabled=False
    )
    return thisWidget
def makeFloatEntryWidget(defValue,descrip,placeholder=''):
    thisWidget = makeTextEntryWidget(str(defValue),descrip,placeholder)
    return thisWidget
outBucketData['Extras'] = Output()
ExtrasLabelWidget = {}
ExtrasPriceWidget = {}
def makeExtrasDisplay():
    thisSectionHeader = makeSectionHeaderWidget('Extras')
    thisExtraWidgetList = [thisSectionHeader]
    for i in range(5):
        ExtrasLabelWidget[str(i)] = makeTextEntryWidget('','','Enter Item Description')
        ExtrasPriceWidget[str(i)] = makeTextEntryWidget('','','Enter Item Price')
        thisContainer = widgets.HBox(children=[ExtrasLabelWidget[str(i)],ExtrasPriceWidget[str(i)]])
        thisExtraWidgetList.append(thisContainer)
    thisContainer = widgets.VBox(children=thisExtraWidgetList)
    return thisContainer
def displayExtrasDisplay(thisOut):
    thisContainer = makeExtrasDisplay()
    thisOut.clear_output()
    with thisOut:
        display(thisContainer)
    display(thisOut)
displayExtrasDisplay(outBucketData['Extras'])
```
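The notebook stops before totaling the basket. One way the stored widgets could be combined into a total, sketched here with plain stand-in objects instead of live widgets — the `StubWidget` class and the hard-coded values are illustrative only, but the dictionary keys mirror the notebook's `'<Item>CheckBox'` / `'<Item>Amount'` naming:

```python
# Stand-in with a .value attribute, mimicking the ipywidgets stored in
# AllWidgetData by the cells above (this is a sketch, not live widgets).
class StubWidget:
    def __init__(self, value):
        self.value = value

ShoppingItems = {'FRUITS': {'Apples': {'price': 25, 'default': 1}},
                 'CARBS':  {'Rice':   {'price': 37, 'default': 3}}}
AllWidgetData = {'ApplesCheckBox': StubWidget(True),
                 'ApplesAmount':   StubWidget(2),
                 'RiceCheckBox':   StubWidget(True),
                 'RiceAmount':     StubWidget(3)}

total = 0.0
for group in ShoppingItems.values():
    for item, info in group.items():
        if AllWidgetData[item + 'CheckBox'].value:       # item is selected
            total += info['price'] * AllWidgetData[item + 'Amount'].value
print(total)  # 25*2 + 37*3 = 161.0
```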
## Precision-Recall Curves
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
precision_recall_curve,
plot_precision_recall_curve,
average_precision_score,
auc,
)
from yellowbrick.classifier import PrecisionRecallCurve
```
## Load data
```
# load data
data = pd.read_csv('../kdd2004.csv')
# remap target class to 0 and 1
data['target'] = data['target'].map({-1:0, 1:1})
data.head()
# data size
data.shape
# imbalanced target
data.target.value_counts() / len(data)
# separate dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(
data.drop(labels=['target'], axis=1), # drop the target
data['target'], # just the target
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
```
## Train ML models
### Random Forests
```
rf = RandomForestClassifier(n_estimators=100, random_state=39, max_depth=2, n_jobs=4)
rf.fit(X_train, y_train)
y_train_rf = rf.predict_proba(X_train)[:,1]
y_test_rf = rf.predict_proba(X_test)[:,1]
```
### Logistic Regression
```
logit = LogisticRegression(random_state=0, max_iter=1000)
logit.fit(X_train, y_train)
y_train_logit = logit.predict_proba(X_train)[:,1]
y_test_logit = logit.predict_proba(X_test)[:,1]
```
## Precision-Recall Curve - Plot
[plot_precision_recall_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_precision_recall_curve.html)
Note: `plot_precision_recall_curve` was deprecated in scikit-learn 1.0 and removed in 1.2; in newer versions, use `PrecisionRecallDisplay.from_estimator` instead.
```
rf_disp = plot_precision_recall_curve(rf, X_test, y_test)
logit_disp = plot_precision_recall_curve(logit, X_test, y_test)
ax = plt.gca()
rf_disp.plot(ax=ax, alpha=0.8)
logit_disp.plot(ax=ax, alpha=0.8)
```
## Area under the PR Curve
[precision_recall_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html#sklearn.metrics.precision_recall_curve)
[auc](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.auc.html#sklearn.metrics.auc)
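Before running it on real model scores, the `auc(recall, precision)` call can be understood on a hand-sized example: it simply integrates precision over recall with the trapezoidal rule. The three curve points below are made up for illustration:

```python
def trapezoid_auc(xs, ys):
    # trapezoidal rule over the points (xs must be sorted ascending)
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

recall    = [0.0, 0.5, 1.0]   # hypothetical PR-curve points
precision = [1.0, 0.8, 0.6]
print(round(trapezoid_auc(recall, precision), 10))  # 0.8
```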
```
# random forests
# first find precision and recall at various thresholds
precision, recall, thresholds = precision_recall_curve(y_test, y_test_rf)
# then, using these values, determine the area under the curve
auc_rf = auc(recall, precision)
print('Area under PR Curve Random Forests: ', auc_rf)
# logistic regression
# first find precision and recall at various thresholds
precision, recall, thresholds = precision_recall_curve(y_test, y_test_logit)
# then, using these values, determine the area under the curve
auc_logit = auc(recall, precision)
print('Area under PR Curve Logistic Regression: ', auc_logit)
```
## Average Precision Score
It is another way of summarizing the PR Curve.
[average_precision_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score)
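The definition behind `average_precision_score` — AP = Σₙ (Rₙ − Rₙ₋₁)·Pₙ over the ranked predictions — can be reproduced by hand on a toy example. The pure-Python helper below is illustrative, not sklearn's implementation:

```python
def average_precision(y_true, scores):
    # AP = sum over ranks of (recall_n - recall_{n-1}) * precision_n,
    # walking the examples in order of decreasing score.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(y_true)
    ap, tp, prev_recall = 0.0, 0, 0.0
    for k, i in enumerate(order, start=1):
        tp += y_true[i]
        recall, precision = tp / n_pos, tp / k
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

print(round(average_precision([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.3]), 4))  # 0.8056
```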
```
# random forests
ap_rf = average_precision_score(y_test, y_test_rf)
print('Average Precision Random Forests: ', ap_rf)
# logistic regression
ap_logit = average_precision_score(y_test, y_test_logit)
print('Average Precision Logistic Regression: ', ap_logit)
```
### Yellowbrick
https://www.scikit-yb.org/en/latest/api/classifier/prcurve.html
```
visualizer = PrecisionRecallCurve(rf, classes=[0, 1])
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Finalize and show the figure
visualizer = PrecisionRecallCurve(logit, classes=[0, 1])
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Finalize and show the figure
```
## Additional Reading
https://towardsdatascience.com/on-roc-and-precision-recall-curves-c23e9b63820c
There is a lot of information in the sklearn documentation as well.
<!-- Group05 -->
<html>
<body>
<!-- To make the size of the font bigger for presentations, change the following command from +1 to +2 -->
<font size="+1">
<p style='font-size: 36px;font-family: Arial;font-style: italic;font-weight: bold;color: #FF00FF;background-color: #80FFFF;text-align: center;'>
Abstract Algebra: An Interactive Approach, 2e
</p>
<p style='font-family: Geneva;font-style: italic;color: #0000FF;background-color: #FFFFFF;'>
©2015 This notebook is provided with the textbook, "Abstract Algebra: An Interactive Approach, 2nd Ed." by William Paulsen. Users of this notebook are encouraged to buy the textbook.
</p>
<p style='font-size: 36px;font-family: New York;font-weight: bold;color: #000000;background-color: #FFFFFF;text-align: center;border: 1px;border-style:
solid;border-color: #000000;'>
Chapter 5<br><br>
Permutation Groups
</p>
<p style='text-align: center;'>Initialization: This cell MUST be evaluated first:</p>
```
load('absalgtext2.sage')
```
<br>
<a href="#sec51">Symmetric Groups</a><br>
<a href="#sec52">Cycles</a><br>
<a href="#sec53">Cayley's Theorem</a><br>
<a href="#sec54">Numbering the Permutations</a><br>
<a href="#sec5p"><em>SageMath</em> Interactive Problems</a><br>
<a name="sec51" id="sec51"></a>
<h1>Symmetric Groups</h1>
<br>
In this chapter we will explore one class of finite groups that has important applications. These groups are called the <em>permutation groups</em>,
or <em>symmetric groups</em>. We have already seen <em>S</em><sub>3</sub>, the permutation group on 3 objects.<br><br>
Recall that we used 3 different colored books to illustrate this group. We can easily generalize this to consider the group of all permutations of <em>n</em> objects. For example, with four books the beginning position would be<br>
```
InitBooks(4)
```
<br>
We have seen several ways of rearranging the books. We can swap the first two books with the command<br>
```
MoveBooks(First)
```
<br>
We can also swap the last two books:<br>
```
MoveBooks(Last)
```
<br>
We can move the first book to the end, sliding the other books to the left.<br>
```
MoveBooks(Left)
```
<br>
The opposite procedure is to move the last book to the beginning.<br>
```
MoveBooks(Right)
```
<br>
We can also reverse the order of the books.<br>
```
MoveBooks(Rev)
```
<br>
Finally, we can leave the books alone.<br>
```
MoveBooks(Stay)
```
<br>
For three books, any permutation can be obtained by just one of these six commands.
But it is apparent that with four books, there are even more ways to rearrange the books. One possible operation would be to move the left-most book two positions to
the right. This can be accomplished by the sequence<br>
```
MoveBooks(Left, Last)
```
<br>
EXPERIMENT:<br>
Using just the six commands mentioned above, put the books back into their original positions (Red, Green, Purple, Orange).
Try doing this using the fewest number of commands.<br>
<br>
This experiment shows that it can be somewhat tricky to produce a given permutation using only a few commands. The more books there are, the more of a puzzle
this can be. Let us introduce a notation for a permutation of books that explicitly states where each book ends up.<br><br>
One natural way to do this is to number the books in consecutive order, and determine the numbers in the final position. For example, if we put the books in their
original order, <br>
```
InitBooks(4)
```
<br>
and then shift the books to the left,<br>
```
MoveBooks(Left)
```
<br>
we find that if the books started in 1, 2, 3, 4 order, the final position will be 2, 3, 4, 1. Here, book 1 is red, book 2 is green, book 3 is purple, and book 4 is orange, so the final position of the books (green, purple, orange, red) translates to 2, 3, 4, 1. We write the ending position below the starting position, as follows.
<br><br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
We can now multiply the permutations by first performing one action, and then the other. For example, doing the operation <strong>Left</strong>, followed
by <strong>Last</strong>, leaves us with the books in the order
<p style='text-align: center;'>green purple red orange</p><br>
which, when compared to the original position, translates to 2, 3, 1, 4, giving the permutation<br><br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="center">4</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
We write <strong>Left</strong>·<strong>Last</strong> as a product of two permutations<br><br>
<table align="center" width="500" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
<td rowspan = "2" align="center">·</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
<td rowspan = "2" align="center">=</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
<td align="right" valign="top">⎝</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">4</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="center">4</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
If we did these operations in the other order, we would get<br><br>
<table align="center" width="500" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
<td rowspan = "2" align="center">·</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
<td rowspan = "2" align="center">=</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">4</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">4</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
Obviously, <strong>Left</strong> · <strong>Last</strong> does not equal <strong>Last</strong> · <strong>Left</strong>.<br><br>
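Both products above are easy to check mechanically. In the plain-Python sketch below, a permutation is just the bottom row of the two-line notation as a tuple; the `compose` helper is ours for illustration, not the SageMath `P()` notation introduced later in this section:

```python
def compose(f, g):
    # the permutation x -> f(g(x)), with bottom rows as 1-indexed tuples
    return tuple(f[g[i] - 1] for i in range(len(f)))

Left = (2, 3, 4, 1)   # move the first book to the end
Last = (1, 2, 4, 3)   # swap the last two books

# books rearranged Left-then-Last corresponds to the product Left·Last
print(compose(Left, Last))  # (2, 3, 1, 4)
print(compose(Last, Left))  # (2, 4, 3, 1)
```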
We can also view a permutation as a <em>function</em> whose domain is a subset of the integers. Consider the functions <em>ƒ</em>(<em>x</em>) and
<em>ϕ</em>(<em>x</em>) given by the table<br><br>
<table align = "center" width="300" border="0">
<tr>
<td align = "center"><em>ƒ</em>(1) = 2</td>
<td align = "center"><em>ϕ</em>(1) = 2</td>
</tr>
<tr>
<td align = "center"><em>ƒ</em>(2) = 3</td>
<td align = "center"><em>ϕ</em>(2) = 3</td>
</tr>
<tr>
<td align = "center"><em>ƒ</em>(3) = 1</td>
<td align = "center"><em>ϕ</em>(3) = 4</td>
</tr>
<tr>
<td align = "center"><em>ƒ</em>(4) = 4</td>
<td align = "center"><em>ϕ</em>(4) = 1.</td>
</tr>
</table>
<br>
We could denote these two functions by the same type of notation, as follows:<br><br>
<table align="center" border="0" cellspacing="0" cellpadding="0">
<tr>
<td rowspan = "2" align="right"><em>ƒ</em>(<em>x</em>) =</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
<td rowspan = "2" align="right"><em>ϕ</em>(<em>x</em>) =</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="center">4</td>
<td align="left" valign="top">⎠</td>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
Notice that each column shows the value of <em>ƒ</em>(<em>x</em>) or <em>ϕ</em>(<em>x</em>) directly beneath <em>x</em>. Since the range and domain of
these functions are the same, we can consider the composition of the two functions. Let us consider <em>ƒ</em>·<em>ϕ</em>:<br>
<p style='text-align: center;'><em>ƒ</em>(<em>ϕ</em>(1)) = <em>ƒ</em>(2) = 3<br>
<em>ƒ</em>(<em>ϕ</em>(2)) = <em>ƒ</em>(3) = 1<br>
<em>ƒ</em>(<em>ϕ</em>(3)) = <em>ƒ</em>(4) = 4<br>
<em>ƒ</em>(<em>ϕ</em>(4)) = <em>ƒ</em>(1) = 2</p>
So the composition function <em>ƒ</em>(<em>ϕ</em>(<em>x</em>)), that is, of doing <em>ϕ</em> first, and then <em>ƒ</em>, can be written as<br><br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="center">4</td>
<td align="center">2</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
We can write this permutation as <em>ƒ</em>·<em>ϕ</em>, so we have that
<p style='text-align: center;'><em>ƒ</em>(<em>ϕ</em>(<em>x</em>)) = (<em>ƒ</em>·<em>ϕ</em>)(<em>x</em>).</p>
<br>
There is something curious here. When we view permutations as ways to rearrange a set of objects, such as books, the permutations are multiplied from left to right, which
is the natural order. But when we view permutations as functions, the permutations are multiplied from right to left, which is again the natural order for function
composition.
<br><br>
DEFINITION 5.1<br>
For the set {1, 2, 3, … <em>n</em>}, we define the group of permutations on the set by <em>S<sub>n</sub></em>. That is, <em>S<sub>n</sub></em> is the set of functions
which are one-to-one and onto on the set {1, 2, 3, … <em>n</em>}. The group operation is function composition.<br><br>
EXPERIMENT: <br>Consider the following product:
<table align="center" width="320" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="left" valign="bottom">⎞</td>
<td rowspan = "2" align="center">·</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">5</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
<td align="right" valign="top">⎝</td>
<td align="center">4</td>
<td align="center">3</td>
<td align="center">5</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
First work this product by viewing the permutations as two ways to rearrange 5 books. If you work the rearrangements from left to right, what is the resulting
rearrangement of the books? Now work this product by viewing the permutations as functions, working from right to left. Do you get the same answer? Which of the
two methods was easier? If you work the problem viewing the permutations as functions, and worked from left to right, would you get the right answer?<br>
<br>
<em>SageMath</em> can easily work with permutations. The top line of the permutation is only for convenience since the bottom line contains all of the information about
the permutation. Thus, we can write these permutations like this:
<p style='text-align: center;'>P(5,4,1,2,3) · P(4,3,5,1,2).</p>
In general, a permutation in <em>S<sub>n</sub></em> can be written
<p style='text-align: center;'>P(<em>x</em><sub>1</sub>, <em>x</em><sub>2</sub>, <em>x</em><sub>3</sub>, … <em>x<sub>n</sub></em>).</p>
Note that the numbers <em>x</em><sub>1</sub>, <em>x</em><sub>2</sub>, <em>x</em><sub>3</sub>, … <em>x<sub>n</sub></em> must all be different and must consist of
the numbers from 1 to <em>n</em>. This permutation corresponds to the function
<p style='text-align: center;'><em>ƒ</em>(1) = <em>x</em><sub>1</sub><br>
<em>ƒ</em>(2) = <em>x</em><sub>2</sub><br>
<em>ƒ</em>(3) = <em>x</em><sub>3</sub><br>
………<br>
<em>ƒ</em>(<em>n</em>) = <em>x<sub>n</sub></em>.</p>
This notation is a little more difficult since one must count four numbers to determine <em>ƒ</em>(4). Yet <em>SageMath</em> prefers this notation for its
conciseness. Let's try using <em>SageMath</em> to check the above product: <br>
```
P(5,4,1,2,3)*P(4,3,5,1,2)
```
<br>
Is this the answer you got? We do not have to define this group the way we have been defining the other groups in these notebooks. The symmetric group is pre-loaded in
the initialization. Let us determine the product of these two permutations multiplied in the other order:<br>
```
P(4,3,5,1,2)*P(5,4,1,2,3)
```
<br>
What happened to the 5? Since the composition function maps 5 to itself, <em>SageMath</em> can safely drop the 5, treating this as a permutation on four objects
instead. To save space <em>SageMath</em> displays only up to the last number which is out of position. All of the necessary information is still in the
permutation.<br><br>
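For readers without the notebook at hand, the bottom row of the two-line notation can be modeled in plain Python as a tuple. The <strong>compose</strong> helper below is our own stand-in for the <strong>*</strong> operator, not part of SageMath:

```python
# A permutation P(x1, ..., xn) stored as a tuple: p[i-1] is the image of i.
def compose(f, g):
    """Product f*g, working right to left as with functions: i -> f(g(i))."""
    return tuple(f[g[i] - 1] for i in range(len(g)))

f = (5, 4, 1, 2, 3)   # P(5,4,1,2,3)
g = (4, 3, 5, 1, 2)   # P(4,3,5,1,2)
print(compose(f, g))  # (2, 1, 3, 5, 4)
print(compose(g, f))  # (2, 1, 4, 3, 5) -- here 5 is fixed, which SageMath would not display
```

Reversing the order of the factors changes the answer, confirming that this group is not abelian.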
Since all permutations in <em>S</em><sub>4</sub> can be expressed in terms of some combinations of the <strong>Left</strong> and <strong>Last</strong> book rearrangements,
we can find all of the elements of <em>S</em><sub>4</sub>.<br>
```
S4 = Group(P(2, 3, 4, 1), P(1, 2, 4, 3)); S4
len(S4)
```
<br>
Note that the identity element of <em>S</em><sub>4</sub> is denoted by <strong>P()</strong>, since the corresponding function leaves all objects fixed.
We can determine the size of the group <em>S<sub>n</sub></em> in general, by counting the number of one-to-one and onto functions from the set {1, 2, 3, … <em>n</em>} to
itself. We have <em>n</em> choices for <em>f</em>(1), but then there will be only <em>n</em> − 1 choices for <em>f</em>(2), <em>n</em> − 2 choices for <em>f</em>(3), and
so on. Thus, the size of the group <em>S<sub>n</sub></em> is given by
<p style='text-align: center;'><em>n</em>! = <em>n</em>·(<em>n</em> − 1)·(<em>n</em> − 2)·(<em>n</em> − 3)·… ·
2·1</p>
The number <em>n</em>! is read "<em>n</em> factorial," and represents the product of the first <em>n</em> numbers. Here is a short table of <em>n</em>!:<br><br>
<table align="center" width="180" border="0">
<tr>
<td align="right">1!</td>
<td align="center">=</td>
<td>1</td>
</tr>
<tr>
<td align="right">2!</td>
<td align="center">=</td>
<td>2</td>
</tr>
<tr>
<td align="right">3!</td>
<td align="center">=</td>
<td>6</td>
</tr>
<tr>
<td align="right">4!</td>
<td align="center">=</td>
<td>24</td>
</tr>
<tr>
<td align="right">5!</td>
<td align="center">=</td>
<td>120</td>
</tr>
<tr>
<td align="right">6!</td>
<td align="center">=</td>
<td>720</td>
</tr>
<tr>
<td align="right">7!</td>
<td align="center">=</td>
<td>5040</td>
</tr>
<tr>
<td align="right">8!</td>
<td align="center">=</td>
<td>40320</td>
</tr>
<tr>
<td align="right">9!</td>
<td align="center">=</td>
<td>362880</td>
</tr>
<tr>
<td align="right">10!</td>
<td align="center">=</td>
<td>3628800</td>
</tr>
</table>
<br>
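The entries of this table can be reproduced with a short plain-Python computation (a sketch independent of SageMath):

```python
from math import prod

def factorial(n):
    """n! = n * (n-1) * ... * 2 * 1, the number of elements of S_n."""
    return prod(range(1, n + 1))

for n in range(1, 11):
    print(f"{n}! = {factorial(n)}")
```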
From this table we see that 4! = 24, so this verifies that <em>S</em><sub>4</sub> has 24 elements. One can see that the size
of <em>S<sub>n</sub></em> grows very quickly. <em>S</em><sub>1</sub> has only one element, so it is the trivial group. <em>S</em><sub>2</sub> has two elements, so this
must be isomorphic to <em>Z</em><sub>2</sub>. We have already seen <em>S</em><sub>3</sub> was isomorphic to Terry's group. Now we find that <em>S</em><sub>4</sub> has
24 elements. Since the octahedral group also has 24 elements, we could ask whether these two groups are isomorphic. The octahedral group can be reloaded by the commands<br>
```
InitGroup("e")
AddGroupVar("a", "b", "c")
Define(a^2, e)
Define(b^3, e)
Define(c^4, e)
Define(b*a, a*b^2)
Define(c*a, a*b*c)
Define(c*b, a*c^2)
Oct = Group();Oct
```
Let us try to find an isomorphism. We discovered that there is a copy of <em>S</em><sub>3</sub> inside of the octahedral group, generated by the elements <em>a</em> and
<em>b</em>. Let us begin there. In <em>S</em><sub>3</sub>, the element <em>a</em> represented an element of order 2, such
as <strong>P(2, 1)</strong>, and <em>b</em> represented an element
of order 3, such as <strong>P(2, 3, 1)</strong>. Let us construct a possible isomorphism. The command<br>
```
F = Homomorph(Oct, S4)
```
<br>
defines <strong>F</strong> to be a homomorphism. We can tell <em>SageMath</em> where the elements <em>a</em> and <em>b</em> are mapped to:<br>
```
HomoDef(F, a, P(2,1) )
HomoDef(F, b, P(2,3,1) )
```
<br>
This should be consistent, since
<p style='text-align: center;'><em>ƒ</em>(<em>b</em>)·<em>ƒ</em>(<em>a</em>) =
<em>ƒ</em>(<em>a</em>)·<em>ƒ</em>(<em>b</em>)·<em>ƒ</em>(<em>b</em>).</p>
The way to have <em>SageMath</em> check that <em>ƒ</em> is so far defined consistently is to find whether <em>ƒ</em> is a homomorphism on
the <em>subgroup</em> containing <em>a</em> and <em>b</em>. If we enter<br>
```
FinishHomo(F)
```
<br>
then <em>SageMath</em> indicates that so far the homomorphism is consistent, but it still is not defined for the whole group <strong>Oct</strong>. We can find the range
of <strong>F</strong> so far with the command<br>
```
HomoRange(F)
```
<br>
This shows that we have defined the homomorphism for 6 elements.<br><br>
We now need to determine a value for <em>ƒ</em>(<em>c</em>) that makes an isomorphism. Note that <em>ƒ</em>(<em>c</em>) must be
of order 4. We want to find all elements of <em>S</em><sub>4</sub> which are of order 4 so we will be able to choose one for <em>ƒ</em>(<em>c</em>).<br><br>
Finding all of the elements of a given order is not hard using <em>SageMath</em>. This is because <em>SageMath</em> can raise all of the elements to a given power in one
step. Here is every element squared:<br>
```
[ x^2 for x in S4]
```
<br>
Note that many of the elements, when squared, yield the identity element. In fact, elements 1, 2, 3, 6, 7, 8, 15, 17, 22, and 24 (counting from the left) have
<em>x</em><sup>2</sup> = <em>e</em>. The first element is the identity, but the others must be elements of order 2. Thus there are nine elements
of <em>S</em><sub>4</sub> that are of order 2.<br><br>
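A brute-force cross-check of this count, written in plain Python rather than the text's SageMath notation, squares every permutation of {1, 2, 3, 4}:

```python
from itertools import permutations

identity = (1, 2, 3, 4)

def compose(f, g):
    # Right-to-left product of permutation tuples: i -> f(g(i)).
    return tuple(f[g[i] - 1] for i in range(len(g)))

# Non-identity elements whose square is the identity have order 2.
order_two = [p for p in permutations(identity)
             if compose(p, p) == identity and p != identity]
print(len(order_two))  # 9
```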
We can use the same method to find the elements of order 4. Let us raise all elements to the fourth power:<br>
```
[ x^4 for x in S4]
```
<br>
So elements 1, 2, 3, 6, 7, 8, 10, 11, 14, 15, 17, 18, 19, 22, 23, and 24 have the fourth power equaling the identity. Many of these are of order 2, as we have seen
before. But the squares of elements 10, 11, 14, 18, 19, and 23 are not the identity. Therefore, we have six elements of order 4. Comparing this with the original
group<br>
```
S4
```
<br>
tells us the six elements:<br>
<p style='text-align: center;'>P(4, 1, 2, 3)<br>
P(2, 4, 1, 3)<br>
P(3, 1, 4, 2)<br>
P(4, 3, 1, 2)<br>
P(2, 3, 4, 1)<br>
P(3, 4, 2, 1)</p>
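A plain-Python brute-force check (our own sketch, not a SageMath command) confirms that exactly six permutations in <em>S</em><sub>4</sub> have order 4:

```python
from itertools import permutations

identity = (1, 2, 3, 4)

def compose(f, g):
    return tuple(f[g[i] - 1] for i in range(len(g)))

def power(p, k):
    """Compose p with itself k times."""
    result = identity
    for _ in range(k):
        result = compose(p, result)
    return result

# Order 4 means x^4 = e but x^2 != e (ruling out orders 1 and 2).
order_four = [p for p in permutations(identity)
              if power(p, 4) == identity and power(p, 2) != identity]
print(len(order_four))  # 6
```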
If the octahedral group is isomorphic to <em>S</em><sub>4</sub>, then the element <em>c</em> must map to one of these six elements. But which one? Let us try the
first permutation, <nobr>P(4, 1, 2, 3).</nobr> The entire homomorphism would be defined by the commands<br>
```
F = Homomorph(Oct, S4)
HomoDef(F, a, P(2, 1) )
HomoDef(F, b, P(2, 3, 1) )
HomoDef(F, c, P(4, 1, 2, 3) )
FinishHomo(F)
```
<br>
Unfortunately, this does not produce a homomorphism, since <em>SageMath</em> found a contradiction. But perhaps by defining <strong>F(c)</strong> to be one of the other
six elements of <em>S</em><sub>4</sub> which is of order 4, we can produce a homomorphism.<br><br>
EXPERIMENT:<br>
By replacing P(4, 1, 2, 3) with one of the other elements of <em>S</em><sub>4</sub> which is of order 4:
<p style='text-align: center;'>P(2, 4, 1, 3)<br>
P(3, 1, 4, 2)<br>
P(4, 3, 1, 2)<br>
P(2, 3, 4, 1)<br>
P(3, 4, 2, 1)</p>
see if you can find one which creates a homomorphism.<br>
<br>
Once we have found a homomorphism, we want to see that <em>ƒ</em> is an isomorphism by showing that the kernel of <em>ƒ</em> is just the identity.<br>
```
Kernel(F)
```
<br>
Since the kernel is the identity element, <em>ƒ</em> is an isomorphism. <br><br>
Finally, we can check that the image of <em>ƒ</em> is <em>S</em><sub>4</sub>. The image is generated by the command<br>
```
Image(F, Oct)
len(_)
```
<br>
Since there are 24 elements here, all of which are in <em>S</em><sub>4</sub>, this must be <em>S</em><sub>4</sub>. Therefore, we have shown
that <em>S</em><sub>4</sub> is isomorphic to the octahedral group. So we immediately see a relationship between one of the groups we have been working
with and the permutation groups. Later in this chapter, we will see an application of the permutation groups that will apply to <em>any</em> finite group.<br><br>
Since a permutation is a function from the integers 1, 2, 3, … <em>n</em> onto themselves, we can use the circle graphs that we have used before to visualize
these permutations. For example, to draw a picture of the permutation P(5,4,1,2,3), use the command:<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3))
```
<br>
Notice that this forms one triangle in red, which maps 1 to 5, 5 to 3, and then 3 back to 1. There is also a green "double arrow" that maps 2 to 4 and
back to 2. So this circle graph reveals some additional structure to the permutation which we will study later. For now, let us study the geometrical significance
of multiplying two permutations together.<br><br>
We can graph two or more permutations simultaneously by adding the additional permutations to the <em>SageMath</em> command.<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3), P(4,3,5,1,2))
```
<br>
Here, the red arrows represent the permutation P(5,4,1,2,3), while the green arrows represent P(4,3,5,1,2).<br><br>
EXPERIMENT:<br>
On the above circle graph, imagine what would happen if for each of the numbers {1,2,3,4,5}, one first traveled through a green arrow,
and then traveled through a red arrow. Does this form a permutation on the set {1,2,3,4,5}? If so, which permutation is it? Is it the same permutation as
P(5,4,1,2,3)·P(4,3,5,1,2) ?<br>
<br>
The following command graphs <em>three</em> permutations simultaneously:<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3), P(4,3,5,1,2), P(2,1,3,5,4))
```
<br>
This circle graph answers the questions raised in the last experiment. In this graph, if one travels through a green arrow, and immediately travels through a red arrow, one ends
up exactly in the same place as if one traveled just a purple arrow. Thus, this "arrow composition" results in the permutation P(2,1,3,5,4), which is the product
of P(5,4,1,2,3) and P(4,3,5,1,2). Note that the arrows are like functions, in that we apply the arrow of the second permutation first, and then the arrow for the first permutation.<br><br>
<em>SageMath</em> can help us to find the inverse of a permutation. The inverse of P(5,4,1,2,3) would be<br>
```
P(5,4,1,2,3)^-1
```
<br>
We can make a circle graph of this new permutation.<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3)^-1 )
```
<br>
How does this circle graph compare to the circle graph of the original permutation?<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3) )
```
<br>
It is easy to see that the graphs are the same, except that the arrows are all going in the opposite direction. (Of course, when the direction is reversed on the
green "double arrow," it remains a double arrow.) Thus, we see that<br><br>
<table align="center" width="320" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="left" valign="bottom">⎞<sup>-1</sup></td>
<td rowspan = "2" align="center">=</td>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">5</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="left" valign="top">⎠ </td>
<td align="right" valign="top">⎝</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="center">2</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
We can check this using <em>SageMath</em> to see if the product of the two really is the identity:<br>
```
P(5,4,1,2,3)*P(3,4,5,2,1)
```
<br>
We noted before that <em>SageMath</em> displays only up to the last number not in position. The identity element has all of its numbers in position, so <em>SageMath</em> does
not display any of them! Therefore, <strong>P()</strong> represents the identity element of this group.<br><br>
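In our plain-Python model, the inverse of a permutation tuple is found by reading the two-line notation upward: wherever the permutation sends <em>i</em>, the inverse sends that image back to <em>i</em>. (This is a sketch of the idea, not SageMath's own implementation.)

```python
def invert(p):
    """Inverse of a permutation tuple: if p sends i to p[i-1], send p[i-1] back to i."""
    inv = [0] * len(p)
    for i, image in enumerate(p, start=1):
        inv[image - 1] = i
    return tuple(inv)

def compose(f, g):
    return tuple(f[g[i] - 1] for i in range(len(g)))

p = (5, 4, 1, 2, 3)
print(invert(p))              # (3, 4, 5, 2, 1)
print(compose(p, invert(p)))  # (1, 2, 3, 4, 5), the identity
```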
Because we can think of a permutation as a function of positive integers, we can evaluate a permutation at a given number. For example, the permutation
<br><br>
<table align="center" width="180" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">5</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
evaluated at 2 gives us 4. Since the <em>SageMath</em> notation for a function of <em>x</em> is <strong>F(x)</strong>, the <em>SageMath</em> command for evaluating this
permutation at 2 is:<br>
```
P(5,4,1,2,3)(2)
```
<br>
We can evaluate this permutation at 7 as well.<br>
```
P(5,4,1,2,3)(7)
```
<br>
Since this permutation does not mention the number 7, <em>SageMath</em> assumes that it is fixed.<br><br>
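The convention that unmentioned numbers are fixed is easy to mimic in a plain-Python sketch:

```python
def evaluate(p, k):
    """Apply the permutation tuple p to k; numbers beyond len(p) are fixed."""
    return p[k - 1] if k <= len(p) else k

p = (5, 4, 1, 2, 3)
print(evaluate(p, 2))  # 4
print(evaluate(p, 7))  # 7, since 7 is not mentioned by the permutation
```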
In spite of the simplicity of the notations for a permutation, we will find that there is yet another notation that is even more concise. We will study
this in the next section.<br><br>
<a name="sec52" id="sec52"></a>
<h1>Cycles</h1>
<br>
Throughout this chapter, we have been using the circle graphs to represent a permutation in <em>S<sub>n</sub></em>. For example, the permutation P(5,4,1,2,3) could
be diagrammed by the following graph:<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3) )
```
<br>
We can immediately see from this graph the red triangle connecting 1, 5, and 3, and the green "double arrow" connecting 2 and 4. In this section we will
consider the significance of the two different colors of arrows.<br><br>
We begin by noticing that some of the permutations have circle graphs that are all in one color. Consider the graph of the permutation P(4,5,2,3,1):<br>
```
CircleGraph([1,2,3,4,5], P(4,5,2,3,1) )
```
<br>
This circle graph consists entirely of red arrows. These arrows indicate that the permutation can be expressed by a single chain
<p style='text-align: center;'>1 → 4 → 3 → 2 → 5 → 1.</p>
This can be read, "1 goes to 4 which goes to 3 which goes to 2 which goes to 5 which goes back to 1."
Any permutation which can be expressed as a chain like this one is called a <em>cycle</em>.<br><br>
Here is another example of a circle graph of a permutation.<br>
```
CircleGraph([1,2,3,4,5,6], P(2,4,1,6,5,3) )
```
<br>
In this graph, there is a green loop mapping 5 to itself, but all of the <em>straight</em> arrows are the same color, red. We can still represent this
permutation by a single chain:
<p style='text-align: center;'>1 → 2 → 4 → 6 → 3 → 1.</p>
This chain states where each number goes except for the number 5. However, if we stipulate that all numbers that are not mentioned
in the chain map to themselves, we have expressed the permutation P(2,4,1,6,5,3) as a single chain, and hence this is also a cycle. <br>
<br>
<br>DEFINITION 5.2<br>
Any permutation that can be expressed as a single chain is called a <em>cycle</em>. A cycle that moves exactly <em>r</em> of the numbers is called an
<em>r</em>-<em>cycle</em>.<br><br>
Whenever a permutation can be expressed as an <em>r</em>-cycle, it is easier to read the chain than the permutation. For example, to find where 4 is mapped to in
the permutation P(2,4,1,6,5,3), one must count 4 numbers from the left, whereas in the notation
<p style='text-align: center;'>1 → 2 → 4 → 6 → 3 → 1.</p>
one only has to spot the 4 and see that it maps to 6. We can further simplify the chains by using a more compact notation:
<p style='text-align: center;'>(1 2 4 6 3)</p>
Here, each number is mapped to the next number in the chain. The last number always maps back to the first number. This notation is called the
<em>cycle notation</em> of the permutation.<br><br>
In general, the <em>r</em>-cycle (<em>i</em><sub>1</sub> <em>i</em><sub>2</sub> <em>i</em><sub>3</sub> ··· <em>i<sub>r</sub></em>) represents
the permutation that maps <em>i</em><sub>1</sub> to <em>i</em><sub>2</sub>, <em>i</em><sub>2</sub> to <em>i</em><sub>3</sub>, etc., and finally <em>i<sub>r</sub></em> back
to <em>i</em><sub>1</sub>. Notice that
<p style='text-align: center;'>(<em>i</em><sub>1</sub> <em>i</em><sub>2</sub> <em>i</em><sub>3</sub> ··· <em>i<sub>r</sub></em>)<sup>-1</sup> =
(<em>i<sub>r</sub></em> <em>i</em><sub><em>r</em>−1</sub> ··· <em>i</em><sub>3</sub> <em>i</em><sub>2</sub> <em>i</em><sub>1</sub>).</p>
so the inverse of an <em>r</em>-cycle will always be an <em>r</em>-cycle. However, the product of two cycles is not always a cycle, merely a permutation. The
identity element can be written as the 0-cycle ( ).<br><br>
A 1-cycle is actually impossible, since if one number is not fixed by a permutation, then the number that it maps to cannot be fixed.
Thus, a non-identity permutation must move at least two numbers. We say that
an <nobr><em>r</em>-cycle</nobr> is a <em>nontrivial cycle</em> if <em>r</em> > 1.<br><br>
Because the cycle notation is in general easier to use than standard permutation notation, we would like to find a way to write any permutation in terms of cycles.
If we look back at the first circle graph of a permutation,<br>
```
CircleGraph([1,2,3,4,5], P(5,4,1,2,3) )
```
<br>
we see that this permutation cannot be expressed as a single chain. However, the two different colored arrows suggest the following idea. Suppose we considered
one permutation consisting mainly of the red triangle (with 2 and 4 mapping to themselves), and another permutation consisting mainly of the green double arrow,
with 1, 3, and 5 mapping to themselves. These two permutations are given by<br>
```
CircleGraph([1,2,3,4,5], P(5,2,1,4,3) )
```
and
```
CircleGraph([1,2,3,4,5], P(1,4,3,2,5) )
```
<br>
EXPERIMENT:<br>
Is there a way to multiply the permutations P(5,2,1,4,3) and P(1,4,3,2,5) to produce the original permutation P(5,4,1,2,3)? Which order do the permutations need to
be multiplied?<br>
<br>
To help explain why the permutations acted the way that they did in the experiment, try graphing both permutations on the same graph:<br>
```
CircleGraph([1,2,3,4,5], P(5,2,1,4,3), P(1,4,3,2,5) )
```
<br>
One can see the similarity between this graph and the graph of P(5,4,1,2,3). Let us write the permutations P(5,2,1,4,3) and P(1,4,3,2,5) in cycle notation:
<p style='text-align: center;'>P(5,2,1,4,3) = (1 5 3),&nbsp;&nbsp; P(1,4,3,2,5) = (2 4).</p>
<br>Notice that when these two permutations are written as cycles, there are no numbers in common between these two cycles. We give a name to such pairs of cycles.<br><br>
DEFINITION 5.3<br>
Two cycles
<p style='text-align: center;'>(<em>i</em><sub>1</sub> <em>i</em><sub>2</sub> <em>i</em><sub>3</sub> ··· <em>i<sub>r</sub></em>) and
(<em>j</em><sub>1</sub> <em>j</em><sub>2</sub> <em>j</em><sub>3</sub> ··· <em>j<sub>s</sub></em>)</p>
are <em>disjoint</em> if none of the <em>i</em>'s are equal to any of the <em>j</em>'s.<br><br>
We saw that (1 5 3) and (2 4) are disjoint. It is fairly clear that two disjoint cycles will commute with one another, as demonstrated in the experiment.<br><br>
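We can confirm this commutativity numerically with a plain-Python sketch, where <strong>cycle_to_perm</strong> is our own helper (not a SageMath command) that expands a cycle into a permutation tuple:

```python
def cycle_to_perm(cycle, n):
    """Expand a cycle such as (1, 5, 3) into a permutation tuple of length n."""
    p = list(range(1, n + 1))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a - 1] = b  # each entry maps to the next; the last wraps to the first
    return tuple(p)

def compose(f, g):
    return tuple(f[g[i] - 1] for i in range(len(g)))

a = cycle_to_perm((1, 5, 3), 5)  # the cycle (1 5 3), i.e. P(5,2,1,4,3)
b = cycle_to_perm((2, 4), 5)     # the cycle (2 4),   i.e. P(1,4,3,2,5)
print(compose(a, b) == compose(b, a))  # True: disjoint cycles commute
```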
Can we express other permutations as a product of cycles? Consider the permutation
<br><br>
<table align="center" width="250" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="center">6</td>
<td align="center">7</td>
<td align="center">8</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">4</td>
<td align="center">6</td>
<td align="center">1</td>
<td align="center">8</td>
<td align="center">2</td>
<td align="center">5</td>
<td align="center">7</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
Here is the circle graph of this permutation:<br>
```
CircleGraph([1,2,3,4,5,6,7,8], P(4,6,1,8,2,5,7,3) )
```
<br>
We see from the circle graph that this permutation maps 1 into 4, 4 into 8, 8 into 3, and 3 back into 1. So the cycle (1 4 8 3) describes part of the permutation.
However, we also have 2 mapping to 6, which maps to 5, which maps back to 2. So the cycle (2 6 5) is also a component of the permutation. Finally, 7 maps to itself,
so it will not be part of any cycle. Thus we can describe this permutation as
<p style='text-align: center;'>(1 4 8 3)·(2 6 5).</p>
We can imitate this process for any permutation. As a result, we have the following lemma:
<p />
<a name="lem51ret" id="lem51ret"></a>
LEMMA 5.1<br>
Let <em>x</em> be an element of <em>S<sub>n</sub></em> which is not the identity. Then <em>x</em> can be written as a product of nontrivial disjoint cycles.
This representation of <em>x</em> is unique up to the rearrangement of the cycles.<br><br>
<a href="#lem51">Click here for the proof.</a>
<p />
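The procedure behind Lemma 5.1 (follow each number until the chain closes up, then restart from the smallest unvisited number) can be sketched in plain Python:

```python
def cycle_decomposition(p):
    """Write a permutation tuple as a list of nontrivial disjoint cycles (Lemma 5.1)."""
    seen, cycles = set(), []
    for start in range(1, len(p) + 1):
        if start in seen:
            continue
        cycle, k = [], start
        while k not in seen:   # follow the chain until it closes up
            seen.add(k)
            cycle.append(k)
            k = p[k - 1]
        if len(cycle) > 1:     # fixed points are trivial 1-cycles; drop them
            cycles.append(tuple(cycle))
    return cycles

print(cycle_decomposition((4, 6, 1, 8, 2, 5, 7, 3)))  # [(1, 4, 8, 3), (2, 6, 5)]
```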
Since any permutation can be written succinctly in terms of cycles, we are given another way to express any permutation. <em>SageMath</em> uses the notation
<p style='text-align: center;'>C(<em>i</em>, <em>j</em>, <em>k</em>, … )</p>
to denote the cycle (<em>i</em> <em>j</em> <em>k</em> …). <em>SageMath</em> can multiply two cycles together. For example, to multiply (2 3 4 5)·(1 2 4), type<br>
```
C(2,3,4,5)*C(1,2,4)
```
<br>
Notice that <em>SageMath</em> forms the answer as a product of 2 disjoint cycles, without the times sign between them. We call this the <em>cycle decomposition</em> of the permutation. Can we see why this is the product? Remember to work from right to left in
multiplying permutations or cycles.<br><br>
We can convert back and forth between the permutation notation and the cycles. The commands
<p style='text-align: center;'>PermToCycle( P(………))   and   CycleToPerm( C(………))</p>
tell <em>SageMath</em> to switch between the two notations. Thus, to convert our answer to a permutation, we type<br>
```
CycleToPerm( C(1,3,4)*C(2,5) )
```
<br>
Likewise, <em>SageMath</em> can use the procedure described in Lemma 5.1 to find the cycle decomposition of the permutation
<br><br>
<table align="center" width="250" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">5</td>
<td align="center">6</td>
<td align="center">7</td>
<td align="center">8</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">4</td>
<td align="center">6</td>
<td align="center">1</td>
<td align="center">8</td>
<td align="center">2</td>
<td align="center">5</td>
<td align="center">7</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
by the command<br>
```
PermToCycle( P(4,6,1,8,2,5,7,3) )
```
<br>
To enter the identity element in <em>SageMath</em>, use <strong>C( )</strong>, which corresponds to the 0-cycle (). <br><br>
We may even mix the two notations within an expression:<br>
```
C(1,2,3)*P(3,1,2,5,4)*C(4,5)
```
<br>
Whenever <em>SageMath</em> encounters a mixture like this, it puts the answer in terms of cycles. In this case the answer is the identity permutation,
so <em>SageMath</em> returned <strong>( )</strong>.<br><br>
We can evaluate a cycle or a product of cycles at a given number, just as we did for permutations. For example, to determine where the cycle (1 4 8 3) sends the
number 3, type<br>
```
C(1,4,8,3)(3)
```
This evaluates the cycle at 3, giving the value 1. We can also form a circle graph of a cycle as we would a permutation:<br>
```
CircleGraph([1,2,3,4,5,6,7,8], C(1,4,8,3) )
```
<br>
This even works for a product of cycles.<br>
```
CircleGraph([1,2,3,4,5,6,7,8], C(1,4,8,3)*C(2,6,5) )
```
<br>
However, to evaluate a product of cycles at a given number, we need an extra pair of parentheses:<br>
```
(C(1,4,8,3)*C(2,6,5))(2)
```
<br>
Although <em>SageMath</em> works faster using the standard permutation notation, cycles are more succinct in most cases and more readable. Thus, for large operations that
could take time, such as checking that a function is a homomorphism, it will be much faster using the <strong>P</strong>(………) notation.<br><br>
We mentioned that there are no permutations that move just one element, but the permutations which move exactly 2 elements will be important. We will give these 2-cycles
a special name.<br><br>
DEFINITION 5.4<br>
A <em>transposition</em> is a 2-cycle (<em>i</em><sub>1</sub> <em>i</em><sub>2</sub>), where <em>i</em><sub>1</sub> ≠ <em>i</em><sub>2</sub>.<br><br>
We can find the number of transpositions in <em>S<sub>n</sub></em> as follows: <em>i</em><sub>1</sub> can be any of the <em>n</em> integers,
and <em>i</em><sub>2</sub> can be any of the <em>n</em> − 1 integers left over.
Thus, there are
<p style='text-align: center;'><em>n</em>(<em>n</em> − 1) = <em>n</em><sup>2</sup> − <em>n</em></p>
ways of forming an ordered pair (<em>i</em><sub>1</sub>, <em>i</em><sub>2</sub>) with
<em>i</em><sub>1</sub> unequal to <em>i</em><sub>2</sub>. However, the transposition (<em>i</em><sub>1</sub> <em>i</em><sub>2</sub>) is the same as the transposition
(<em>i</em><sub>2</sub> <em>i</em><sub>1</sub>). Thus, by counting ordered pairs, we have counted each transposition twice. Therefore, to find the number of
transpositions, we divide that count by 2 to get<br><br>
<table align="center" border="0" cellspacing="0" cellpadding="0">
<tr>
<td height="33" valign="bottom"><em>n</em><sup>2</sup> − <em>n</em></td>
<td rowspan = "2" align="right">.</td>
</tr>
<tr>
<td valign="top" align="center"><font style="text-decoration:overline"> 2 </font></td>
</tr>
</table>
<br>
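This count can be verified by brute force in plain Python, tallying the permutations that move exactly two numbers:

```python
from itertools import permutations

def count_transpositions(n):
    """Count the permutations of {1, ..., n} that move exactly two numbers."""
    identity = tuple(range(1, n + 1))
    return sum(1 for p in permutations(identity)
               if sum(p[i] != i + 1 for i in range(n)) == 2)

for n in range(2, 7):
    print(n, count_transpositions(n), n * (n - 1) // 2)  # the two counts agree
```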
<a name="lem52ret" id="lem52ret"></a>
<br>LEMMA 5.2<br>
For <em>n</em> > 1, the set of transpositions in <em>S<sub>n</sub></em> generates <em>S<sub>n</sub></em>.<br><br>
<a href="#lem52">Click here for the proof.</a>
<p />
Of course, a particular permutation can be expressed as a product of transpositions in more than one way. But an important property of the symmetric groups is that the
number of transpositions used to represent a given permutation will always have the same parity, that is, even or odd. To show this,
we will first prove the following lemma.
<p />
<a name="lem53ret" id="lem53ret"></a>
LEMMA 5.3<br>
The product of an odd number of transpositions in <em>S<sub>n</sub></em> cannot equal the identity element.<br><br>
<a href="#lem53">Click here for the proof.</a>
<p />
We can use this lemma to prove the following theorem.<br>
<p />
<a name="theor51ret" id="theor51ret"></a>
THEOREM 5.1: The Signature Theorem<br>For the symmetric group <em>S<sub>n</sub></em>, define the function
<p style='text-align: center;'><em>σ</em>: <em>S<sub>n</sub></em> → ℤ</p>
by
<p style='text-align: center;'><em>σ</em>(<em>x</em>) = (−1)<sup><em>N</em>(<em>x</em>)</sup>,</p>
where <em>N</em>(<em>x</em>) is the minimum number of transpositions needed to express <em>x</em> as a product of transpositions. Then this function, called
the <em>signature function</em>, is a homomorphism from <em>S<sub>n</sub></em> to the integers {−1, 1}.<br><br>
<a href="#theor51">Click here for the proof.</a>
<p />
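Since an <em>r</em>-cycle can be written as a product of <em>r</em> − 1 transpositions, the signature can be computed from the cycle decomposition. The following plain-Python sketch (our own, not the SageMath <strong>Signature</strong> command) does exactly that:

```python
def signature(p):
    """Signature of a permutation tuple: each r-cycle contributes (-1)^(r - 1)."""
    seen, sign = set(), 1
    for start in range(1, len(p) + 1):
        if start in seen:
            continue
        length, k = 0, start
        while k not in seen:   # walk one cycle and measure its length
            seen.add(k)
            length += 1
            k = p[k - 1]
        sign *= (-1) ** (length - 1)
    return sign

print(signature((4, 3, 5, 1, 2)))  # -1: a 2-cycle times a 3-cycle
```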
With <em>SageMath</em>, we can compute the signature function on both permutations and products of cycles, using the <strong>Signature</strong> command.<br>
```
Signature( P(4,3,5,1,2) )
Signature( C(1,4,2,7)*C(6,7,3) )
```
<br>
Try this out with several permutations. You will notice that the signature will always be ±1.<br><br>
<br>
The signature of an <em>r</em>-cycle will be −1 if <em>r</em> is even, and +1 if <em>r</em> is odd. The fact that this function is a homomorphism has some important ramifications.<br><br>
DEFINITION 5.5<br>
A permutation is an <em>alternating permutation</em> or an <em>even permutation</em> if the signature of the permutation is 1. A permutation is an
<em>odd permutation</em> if it is not even, that is, if the signature is −1. The set of all alternating permutations of order <em>n</em> is written
<em>A<sub>n</sub></em>.
<p />
<a name="cor51ret" id="cor51ret"></a>
COROLLARY 5.1<br>
The set of all alternating permutations <em>A<sub>n</sub></em> is a normal subgroup of <em>S<sub>n</sub></em>. If <em>n</em> > 1, then
<em>S<sub>n</sub></em>/<em>A<sub>n</sub></em> is isomorphic to <em>Z</em><sub>2</sub>.<br><br>
<a href="#cor51">Click here for the proof.</a>
<p />
Finally, we can ask which permutations generate the group <em>A<sub>n</sub></em>. Since the set of 2-cycles generates <em>S<sub>n</sub></em>, it is not too
surprising that <em>A<sub>n</sub></em> can also be generated by cycles.
<p />
<a name="prop51ret" id="prop51ret"></a>
PROPOSITION 5.1<br>
For <em>n</em> > 2, the alternating group <em>A<sub>n</sub></em> is generated by the set of 3-cycles.<br><br>
<a href="#prop51">Click here for the proof.</a>
<p />
Let us use this proposition to find the elements of <em>A</em><sub>4</sub>. We know that this is generated by 3-cycles, and has 24/2 = 12 elements. Let us see if
two cycles are enough to give us twelve elements.<br>
```
Group( C(1,2,3), C(1,2,4) )
len(_)
```
<br>
Since this gives us 12 elements, this is <em>A</em><sub>4</sub>. Eight of the twelve elements are 3-cycles. Do you recognize the other 4 elements?<br><br>
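As a plain-Python cross-check, we can close the two 3-cycles under composition ourselves, a brute-force sketch of what the SageMath <strong>Group</strong> command does:

```python
def compose(f, g):
    return tuple(f[g[i] - 1] for i in range(len(g)))

def generate(gens):
    """Close a set of permutation tuples under composition."""
    elements = set(gens)
    while True:
        new = {compose(a, b) for a in elements for b in elements} - elements
        if not new:
            return elements
        elements |= new

# The 3-cycles (1 2 3) and (1 2 4) as permutation tuples on {1, 2, 3, 4}.
A4 = generate([(2, 3, 1, 4), (2, 4, 3, 1)])
print(len(A4))  # 12
```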
<a name="sec53" id="sec53"></a>
<h1>Cayley's Theorem</h1>
<br>
In the last section, we used the circle graph to illustrate a given permutation. The circle graphs produced had the property that every point on the circle had
exactly one arrow which points to it. In Chapter 3 we mentioned that such a graph was <em>one-to-one</em> and <em>onto</em>. Where else have we seen circle graphs
with this property?<br><br>
Although we have seen circle graphs that are one-to-one and onto in several places, many of them were in Chapter 3 when we were working with cosets. To illustrate,
the following command loads the quaternionic group <em>Q</em> discovered in the last chapter.<br>
```
Q = InitQuaternions(); Q
```
<br>
Recall the multiplication table for <em>Q</em> is given by:<br>
```
MultTable(Q)
```
<br>
We can now look at the circle graph of <strong>LeftMult</strong>(<em>x</em>) and <strong>RightMult</strong>(<em>x</em>) for
different elements <em>x</em> in <em>Q</em>.<br>
```
CircleGraph(Q, LeftMult(i) )
CircleGraph(Q, RightMult(i) )
```
<br>
Look carefully. These two circle graphs are not the same, even though they are very similar. Notice that these circle graphs are one-to-one and onto,
just as the graphs of a permutation. Could we view these circle graphs as permutations in <em>S</em><sub>8</sub>? Suppose we numbered the elements of the group, starting
at the top and working clockwise. That is, we would assign
<br><br>
<table align = "center" border="0">
<tr>
<td align="right">1) </td>
<td>1</td>
</tr>
<tr>
<td align="right">2) </td>
<td><em>i</em></td>
</tr>
<tr>
<td align="right">3) </td>
<td><em>j</em></td>
</tr>
<tr>
<td align="right">4) </td>
<td><em>k</em></td>
</tr>
<tr>
<td align="right">5) </td>
<td>−1</td>
</tr>
<tr>
<td align="right">6) </td>
<td>−<em>i</em></td>
</tr>
<tr>
<td align="right">7) </td>
<td>−<em>j</em></td>
</tr>
<tr>
<td align="right">8) </td>
<td>−<em>k</em></td>
</tr>
</table>
<br>
We can then create graphs of permutations that simulate the two graphs we see above.<br>
```
CircleGraph([1,2,3,4,5,6,7,8], C(1,2,5,6)*C(3,8,7,4) )
CircleGraph([1,2,3,4,5,6,7,8], C(1,2,5,6)*C(3,4,7,8) )
```
<br>
EXPERIMENT:<br>
Replace the <em>i</em> in the two circle graphs on <em>Q</em> with other elements of <em>Q</em>, such as <em>j</em>, −1, and <em>k</em>. Can you find the
permutations in <em>S</em><sub>8</sub> which produce the same circle graphs? Remember that there will be two permutations for each element of <em>Q</em>: one
that simulates <strong>LeftMult(x)</strong>, and one that simulates <strong>RightMult(x)</strong>. Are there any elements for which the graphs
for <strong>LeftMult(x)</strong> and <strong>RightMult(x)</strong> are the same?<br>
<br>
We see from this experiment that each element of <em>Q</em> will correspond to two elements of <em>S</em><sub>8</sub>, one that simulates <strong>LeftMult(x)</strong>,
which we will call <em>ƒ</em>(<em>x</em>), and one that simulates <strong>RightMult(x)</strong>, called <em>ϕ</em>(<em>x</em>). Here is a table of these permutations:<br>
<table align="center" width="550" border="0">
<tr>
<th width="60" scope="col"><em>x</em></th>
<th width="250" scope="col"><em>ƒ</em>(<em>x</em>)<br> LeftMult(x)</th>
<th width="250" scope="col"><em>ϕ</em>(<em>x</em>)<br> RightMult(x)</th>
</tr>
<tr>
<th scope="row">1</th>
<td align="center">( )</td>
<td align="center">( )</td>
</tr>
<tr>
<th scope="row"><em>i</em></th>
<td align="center">(1 2 5 6)(3 8 7 4)</td>
<td align="center">(1 2 5 6)(3 4 7 8)</td>
</tr>
<tr>
<th scope="row"><em>j</em></th>
<td align="center">(1 3 5 7)(2 4 6 8)</td>
<td align="center">(1 3 5 7)(2 8 6 4)</td>
</tr>
<tr>
<th scope="row"><em>k</em></th>
<td align="center">(1 4 5 8)(2 7 6 3)</td>
<td align="center">(1 4 5 8)(2 3 6 7)</td>
</tr>
<tr>
<th scope="row">−1</th>
<td align="center">(1 5)(2 6)(3 7)(4 8)</td>
<td align="center">(1 5)(2 6)(3 7)(4 8)</td>
</tr>
<tr>
<th scope="row">−<em>i</em></th>
<td align="center">(1 6 5 2)(3 4 7 8)</td>
<td align="center">(1 6 5 2)(3 8 7 4)</td>
</tr>
<tr>
<th scope="row">−<em>j</em></th>
<td align="center">(1 7 5 3)(2 8 6 4)</td>
<td align="center">(1 7 5 3)(2 4 6 8)</td>
</tr>
<tr>
<th scope="row">−<em>k</em></th>
<td align="center">(1 8 5 4)(2 3 6 7)</td>
<td align="center">(1 8 5 4)(2 7 6 3)</td>
</tr>
</table>
<br>
By labeling the permutations by <em>ƒ</em>(<em>x</em>) and <em>ϕ</em>(<em>x</em>), we emphasize that these two functions map elements of <em>Q</em> to elements
of <em>S</em><sub>8</sub>. So here is the natural question: is either of these functions a homomorphism?<br><br>
EXAMPLE:<br>
Use <em>SageMath</em> to see whether either of the two functions <em>ƒ</em> or <em>ϕ</em> is a homomorphism.<br><br>
Let's begin by testing <em>ƒ</em>.
Normally, in defining a homomorphism, we first determine the domain group and the target group. But in this case the target group is <em>S</em><sub>8</sub>, which
has 40320 elements. Rather than having <em>SageMath</em> construct all of the elements of this group, which would take an unreasonable amount of time, we can find the
range of the homomorphism by determining the group generated by <em>ƒ</em>(<em>i</em>) and <em>ƒ</em>(<em>j</em>).<br>
```
Q = InitQuaternions(); Q
T = Group(C(1,2,5,6)*C(3,8,7,4), C(1,3,5,7)*C(2,4,6,8)); T
```
<br>
Now we can try to create the homomorphism.<br>
```
F = Homomorph(Q, T)
HomoDef(F, i, C(1,2,5,6)*C(3,8,7,4) )
HomoDef(F, j, C(1,3,5,7)*C(2,4,6,8) )
HomoDef(F, k, C(1,4,5,8)*C(2,7,6,3) )
FinishHomo(F)
```
So this is not a homomorphism. What about <em>ϕ</em>?<br>
<br>
EXPERIMENT:<br>
See if <em>ϕ</em> is a homomorphism using the above table. If it seems to produce a homomorphism, use <strong>GraphHomo</strong> to verify that it is one-to-one
and onto.<br>
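This can also be checked without <em>SageMath</em>. The following plain Python sketch (an illustration only, independent of the textbook's commands) encodes the quaternion multiplication rules, numbers the elements 1, <em>i</em>, <em>j</em>, <em>k</em>, −1, −<em>i</em>, −<em>j</em>, −<em>k</em> as above, and tests whether <em>ƒ</em>(<em>x</em>) and <em>ϕ</em>(<em>x</em>) respect products:<br>
```
# Plain Python sketch (not the textbook's SageMath package): check whether
# f(g): x -> x*g (simulating LeftMult) and phi(g): x -> g*x (simulating
# RightMult) give homomorphisms from Q into the permutations of its 8 elements.
from itertools import product

UNITS = ['1', 'i', 'j', 'k']
# multiplication of the unit quaternions, returning (sign, unit)
BASE = {
    ('1','1'): (1,'1'), ('1','i'): (1,'i'), ('1','j'): (1,'j'), ('1','k'): (1,'k'),
    ('i','1'): (1,'i'), ('i','i'): (-1,'1'), ('i','j'): (1,'k'), ('i','k'): (-1,'j'),
    ('j','1'): (1,'j'), ('j','i'): (-1,'k'), ('j','j'): (-1,'1'), ('j','k'): (1,'i'),
    ('k','1'): (1,'k'), ('k','i'): (1,'j'), ('k','j'): (-1,'i'), ('k','k'): (-1,'1'),
}
Q = [(1, u) for u in UNITS] + [(-1, u) for u in UNITS]  # 1, i, j, k, -1, -i, -j, -k

def mult(a, b):
    sign, unit = BASE[(a[1], b[1])]
    return (a[0] * b[0] * sign, unit)

index = {g: n for n, g in enumerate(Q)}

def phi(g):                  # simulates RightMult(g): x -> g*x
    return tuple(index[mult(g, x)] for x in Q)

def f(g):                    # simulates LeftMult(g): x -> x*g
    return tuple(index[mult(x, g)] for x in Q)

def compose(p, q):           # right-to-left composition: apply q first, then p
    return tuple(p[q[n]] for n in range(len(q)))

phi_homo = all(phi(mult(a, b)) == compose(phi(a), phi(b)) for a, b in product(Q, Q))
f_homo = all(f(mult(a, b)) == compose(f(a), f(b)) for a, b in product(Q, Q))
print(phi_homo, f_homo)      # True False
```
So <em>ϕ</em> preserves products, while <em>ƒ</em> only reverses them: <em>ƒ</em>(<em>a</em>·<em>b</em>) = <em>ƒ</em>(<em>b</em>)·<em>ƒ</em>(<em>a</em>), which fails to be a homomorphism because <em>Q</em> is non-abelian.<br>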
<br>
So apparently, <em>ϕ</em>(<em>x</em>) is a homomorphism from <em>Q</em> to <em>S</em><sub>8</sub>. In fact, it is clear that this is one-to-one, so
<em>ϕ</em>(<em>x</em>) is an isomorphism from <em>Q</em> onto a subgroup of <em>S</em><sub>8</sub>. Will <strong>RightMult</strong> produce an isomorphism for any
group? The answer is yes, and the proof reveals an important property of permutation groups. You should be able to use the <strong>RightMult</strong> function to prove
the following theorem:
<p />
<a name="theor52ret" id="theor52ret"></a>
THEOREM 5.2:Cayley's Theorem<br>
Every finite group of order <em>n</em> is isomorphic to a subgroup of <em>S<sub>n</sub></em>.<br><br>
<a href="#theor52">Click here for the proof.</a>
<p />
Although this theorem shows that every finite group can be considered as a subgroup of a symmetric group, the theorem can be extended to infinite groups as well.
Of course, we then must consider <em>infinite</em> symmetric groups, whose elements are permutations of an infinite collection of objects. We might have a difficult
time expressing some of these permutations! For example, if we had a library with an infinite number of books, we could not begin to describe every possible way to
rearrange the books. Some of the permutations can be described explicitly as one-to-one and onto functions. However, most of the permutations in an infinite symmetric
group are not expressible using a finite number of words or symbols. Problems 10 through 12 of §5.2 reveal some of the unusual properties of infinite symmetric groups.
Fortunately, we will mainly work with finite symmetric groups.
<br><br>
Although Cayley's theorem (5.2) shows that any finite group <em>G</em> is isomorphic to a subgroup of <em>S<sub>n</sub></em>, where <em>n</em> is the size of the group <em>G</em>,
we often can find a smaller symmetric group that contains an isomorphic copy of <em>G</em>.<br><br>
EXAMPLE:<br>
Consider the group <em>D</em><sub>4</sub>, introduced in the last chapter, for which <nobr><em>a</em><sup>4</sup> =
<em>b</em><sup>2</sup> = <em>e</em>,</nobr> and
<em>b</em>·<em>a</em> = <em>a</em><sup>3</sup>·<em>b</em>. Consider the effects of <strong>RightMult</strong> on the set of cosets, using
a <em>non-normal</em> subgroup.<br>
```
InitGroup("e")
AddGroupVar("a", "b")
Define(a^4, e)
Define(b^2, e)
Define(b*a, a^3*b)
D4 = Group()
D4
```
<br>
Let us consider a <em>non-normal</em> subgroup of <em>D</em><sub>4</sub>,<br>
```
H = Group(b); H
```
<br>
We saw in Cayley's theorem (5.2) that <strong>RightMult</strong> applied to the elements of the group produced a homomorphism. What if we applied <strong>RightMult</strong>
to the <em>cosets</em> of the subgroup? Recall that <strong>RightMult(g)</strong> can be thought of as a
function <em>p<sub>g</sub></em>(<em>x</em>) = <em>g</em>·<em>x</em>,
that is, the argument of the function is multiplied on the right of <em>g</em>. If we apply this function to a left coset of <em>H</em>, we have
<em>p<sub>g</sub></em>(<em>x H</em>) = <em>g</em>·<em>x H</em>, which yields another left coset. (Right cosets won't work here, since
<em>p<sub>g</sub></em>(<em>H x</em>) = <em>g</em>·<em>H x</em>, which is neither a left nor right coset.) Let us first create a list of left cosets.<br>
```
L = LftCoset(D4, H); L
```
<br>
What happens if we multiply each of the cosets by a fixed element of the group, say <em>a</em>?<br>
```
CircleGraph(L, RightMult(a) )
```
<br>
We see that each coset is mapped to another coset. Once again, we have a one-to-one and onto mapping, which we can treat as a permutation. Let us first number the
left cosets of <em>H</em>.<br>
<table align="center" border="0">
<tr>
<td align="right">1 )</td>
<td align="left"> { <em>e</em>, <em>b</em> }</td>
</tr>
<tr>
<td align="right">2 )</td>
<td align="left"> { <em>a</em>, <em>a</em>·<em>b</em> }</td>
</tr>
<tr>
<td align="right">3 )</td>
<td align="left"> { <em>a</em><sup>2</sup>, <em>a</em><sup>2</sup>·<em>b</em> }</td>
</tr>
<tr>
<td align="right">4 )</td>
<td align="left"> { <em>a</em><sup>3</sup>, <em>a</em><sup>3</sup>·<em>b</em> }</td>
</tr>
</table>
<br>
Then the circle graph shows that <strong>RightMult</strong>(<em>a</em>) acts on the cosets as the permutation (1 2 3 4).
If we try this with the element <em>b</em> instead,<br>
```
CircleGraph(L, RightMult(b) )
```
<br>
the circle graph shows that <strong>RightMult</strong>(<em>b</em>) acts on the cosets as the permutation (2 4).<br>
<br>
EXPERIMENT:<br>
Try replacing the elements <em>a</em> and <em>b</em> with other elements of the group. Do the
circle graphs produce permutations?<br>
<br>
Since we have a permutation for each element of the group, the natural question is whether we have a homomorphism between the group
<em>D</em><sub>4</sub> and the permutation group. <em>SageMath</em> can check for us.<br>
```
S4 = Group(C(1,2), C(1,2,3), C(1,2,3,4) ); S4
F = Homomorph(D4, S4)
HomoDef(F, a, C(1,2,3,4) )
HomoDef(F, b, C(2,4) )
FinishHomo(F)
```
<br>
So we indeed have a homomorphism, just as in Cayley's theorem. What is the kernel of this homomorphism?<br>
```
Kernel(F)
```
<br>
This proves that <em>D</em><sub>4</sub> is isomorphic to a subgroup of <em>S</em><sub>4</sub>. Note that this is a much stronger result than Cayley's theorem (5.2),
which only says that <em>D</em><sub>4</sub> is isomorphic to a subgroup of the larger group <em>S</em><sub>8</sub>.<br><br>
We can now see the subgroup of <em>S</em><sub>4</sub> isomorphic to <em>D</em><sub>4</sub>.<br>
```
Image(F, D4)
MultTable(_)
```
<br>
The multiplication table reveals a non-abelian group with 5 elements of order 2, so this is indeed isomorphic to <em>D</em><sub>4</sub>.
We can generalize this procedure to produce the following result:
<p />
<a name="theor53ret" id="theor53ret"></a>
THEOREM 5.3: Generalized Cayley's Theorem<br>
Let <em>G</em> be a finite group of order <em>n</em>, and <em>H</em> a subgroup of order <em>m</em>. Then there is a homomorphism
from <em>G</em> to <em>S<sub>k</sub></em>, with <em>k</em> = <em>n</em>/<em>m</em>, and whose kernel is a subgroup of <em>H</em>.<br><br>
<a href="#theor53">Click here for the proof.</a>
<p />
We see one application of this theorem in the case of <em>D</em><sub>4</sub>. Since <em>H</em> was a subgroup of order 2 which was <em>not</em> normal, the only
normal subgroup of <em>G</em> that is contained in <em>H</em> is the trivial subgroup. Thus, the homomorphism is an isomorphism, and we find a copy
of <em>D</em><sub>4</sub> inside of <em>S</em><sub>4</sub> instead of having to look in the larger group <em>S</em><sub>8</sub>. This idea can be applied
whenever we can find a subgroup of <em>G</em> that does not contain any nontrivial normal subgroups of <em>G</em>.<br><br>
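The same computation can be sketched in plain Python (an illustration only, not the textbook's commands): represent <em>D</em><sub>4</sub> as permutations of the four corners of a square, with <em>a</em> = (1 2 3 4) and <em>b</em> = (2 4), form the left cosets of <em>H</em> = {<em>e</em>, <em>b</em>}, and let each group element act on the cosets.<br>
```
# Plain Python sketch (not SageMath): D4 acting on the left cosets of H = {e, b}.
from itertools import product

def compose(p, q):                 # right-to-left: apply q first, then p
    return tuple(p[q[n] - 1] for n in range(len(q)))

e = (1, 2, 3, 4)
a = (2, 3, 4, 1)                   # the rotation (1 2 3 4)
b = (1, 4, 3, 2)                   # the reflection (2 4)

D4 = {e, a, b}                     # close {a, b} under multiplication
while True:
    new = {compose(x, y) for x, y in product(D4, D4)} - D4
    if not new:
        break
    D4 |= new

H = {e, b}
cosets = sorted({frozenset(compose(x, h) for h in H) for x in D4}, key=sorted)
pos = {c: n for n, c in enumerate(cosets)}

def coset_perm(g):                 # the permutation of cosets xH -> (g x)H
    return tuple(pos[frozenset(compose(g, x) for x in c)] for c in cosets)

# F(g) = coset_perm(g) should be a homomorphism from D4 into S4 ...
homo = all(coset_perm(compose(g1, g2)) ==
           tuple(coset_perm(g1)[n] for n in coset_perm(g2))
           for g1, g2 in product(D4, D4))
# ... with trivial kernel, so D4 embeds in S4
kernel = [g for g in D4 if coset_perm(g) == tuple(range(len(cosets)))]
print(len(D4), len(cosets), homo, kernel == [e])   # 8 4 True True
```
Since the kernel is trivial, the homomorphism is one-to-one, exactly as the <em>SageMath</em> computation showed.<br>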
But there is another important ramification of this theorem. We can prove the existence of a normal subgroup of a group, knowing only the order of the group!
<p />
<a name="cor52ret" id="cor52ret"></a>
COROLLARY 5.2<br>
Let <em>G</em> be a finite group, and <em>H</em> any subgroup of <em>G</em>. Then <em>H</em> contains a subgroup <em>N</em>, which is a normal subgroup of <em>G</em>,
such that |<em>G</em>| divides (|<em>G</em>|/|<em>H</em>|)! · |<em>N</em>|.<br><br>
<a href="#cor52">Click here for the proof.</a>
<p />
Here is an example of how we can prove the existence of a nontrivial normal subgroup, using just the order of the group. Suppose we have a group <em>G</em>
of order 108. Suppose that <em>G</em> has a subgroup of order 27. (We will find in §7.4 that all groups of order 108 must have a subgroup of order 27.) Using
|<em>G</em>|=108 and |<em>H</em>|=27, we find that <em>G</em> must contain a normal subgroup <em>N</em> such that
<p style='text-align: center;'>108 divides (108/27)! ·|<em>N</em>|=24·|<em>N</em>|.</p>
But this means that |<em>N</em>| must be a multiple of 9. Since <em>N</em> is a subgroup of <em>H</em>, which has order 27, we see that <em>N</em> is of
order 9 or 27. Hence, we have proven that <em>G</em> contains a normal subgroup of either order 9 or 27.
This will go a long way in finding the possible group structures of <em>G</em>, using only the size of the group <em>G</em>.<br><br>
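The divisibility step is easy to verify directly. Here is a plain Python sketch (illustrative only), testing which subgroup orders |<em>N</em>| dividing 27 satisfy the corollary's condition:<br>
```
# Which orders |N| dividing 27 satisfy 108 | (108/27)! * |N| ?
from math import factorial

G, H = 108, 27
q = factorial(G // H)                 # (108/27)! = 4! = 24
candidates = [n for n in (1, 3, 9, 27) if (q * n) % G == 0]
print(candidates)                     # [9, 27]
```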
<a name="sec54" id="sec54"></a>
<h1>Numbering the Permutations</h1>
<br>
We have seen from Cayley's theorem that any group can be represented as a subgroup of a symmetric group. In turn, most of the permutations in the symmetric group
can be written succinctly as a product of disjoint cycles. So naturally, we can express any group in terms of cycles.<br><br>
For example, we saw using Cayley's theorem a copy of the quaternionic group <em>Q</em> as a subgroup of <em>S<sub>8</sub></em>. It was generated by the elements
<p style='text-align: center;'><em>ϕ</em>(<em>i</em>) = (1 2 5 6)(3 4 7 8)<br>
<em>ϕ</em>(<em>j</em>) = (1 3 5 7)(2 8 6 4).</p>
In fact, the entire group is given by<br>
```
Q = Group( C(1,2,5,6)*C(3,4,7,8) , C(1,3,5,7)*C(2,8,6,4) ); Q
```
<br>
To compare notation, let us convert these elements to permutations.<br>
```
[ CycleToPerm(x) for x in Q ]
```
<br>
Which method is best? For small groups, using cycles would be a good choice, because the results are easy to read. But for larger groups (say over 100 elements, and
yes, we will be working with groups that large in the next chapter) having <em>SageMath</em> write out all of the elements in terms of cycles would be time consuming
and messy. It would be nice to have a succinct way to describe each permutation using some kind of abbreviation.<br><br>
The clue for seeing how such an abbreviation is possible is to look again at the group <em>S</em><sub>4</sub>, using permutation notation.<br>
```
S4 = Group( P(2,1), P(2,3,1), P(2,3,4,1) ); S4
```
<br>
Notice that <em>SageMath</em> sorts the elements, first listing the identity, then the transposition which exchanges 1 and 2, then the permutations which change
only the first three elements, and finally the permutations which move the fourth element. But if we look closely we can find even more patterns.
For example, the last six elements are the six elements which map 4 into 1. Can you see any other patterns?<br><br>
<em>SageMath</em> uses a predefined order to list all of the permutations. As a result, we can number the permutations as follows:<br><br>
<table align="center" border="0">
<tr>
<td align="right">1<sup>st</sup> permutation = </td>
<td align="left"> <em>P</em>( )</td>
</tr>
<tr>
<td align="right">2<sup>nd</sup> permutation = </td>
<td align="left"> <em>P</em>(2, 1)</td>
</tr>
<tr>
<td align="right">3<sup>rd</sup> permutation = </td>
<td align="left"> <em>P</em>(1, 3, 2)</td>
</tr>
<tr>
<td align="right">4<sup>th</sup> permutation = </td>
<td align="left"> <em>P</em>(3, 1, 2)</td>
</tr>
<tr>
<td align="right">5<sup>th</sup> permutation = </td>
<td align="left"> <em>P</em>(2, 3, 1)</td>
</tr>
<tr>
<td align="right">6<sup>th</sup> permutation = </td>
<td align="left"> <em>P</em>(3, 2, 1)</td>
</tr>
<tr>
<td align="right">7<sup>th</sup> permutation = </td>
<td align="left"> <em>P</em>(1, 2, 4, 3)</td>
</tr>
<tr>
<td align="right">……… </td>
<td align="left">………</td>
</tr>
<tr>
<td align="right">24<sup>th</sup> permutation = </td>
<td align="left"> <em>P</em>(4, 3, 2, 1)</td>
</tr>
</table>
<br>
In this list the first 2 elements give the group <em>S</em><sub>2</sub>, the first 6 give <em>S</em><sub>3</sub>, and the first 24 elements give <em>S</em><sub>4</sub>.
This pattern can be extended to higher order permutations, so that the first <em>n!</em> permutations give the group <em>S<sub>n</sub></em>.<br><br>
The advantage of sorting the permutations in this way is that <em>SageMath</em> can quickly find the <em>n</em><sup>th</sup> permutation without having to find any of the
previous permutations. For example, to find out what the 2000<sup>th</sup> permutation would be on this list, type<br>
```
NthPerm(2000)
```
<br>
<em>SageMath</em> can also determine where a given permutation is on the list of permutations. The command <strong>PermToInt</strong>( P(………) ) converts
a permutation to a number.<br>
```
PermToInt(P(4,1,7,6,3,2,5))
```
<br>
We now have found a way of abbreviating permutations. Rather than spelling out where each element is mapped, we can give a single number that describes where the
permutation is on the list of permutations. This will be called the <em>integer representation</em> of the permutation. Although this representation hides most of
the information about the permutation, <em>SageMath</em> can quickly recover all of the information needed to do group operations.<br><br>
For example, suppose we want to multiply the 3<sup>rd</sup> permutation with the 21<sup>st</sup>. We could enter the command<br>
```
NthPerm(3) * NthPerm(21)
```
<br>
We could convert this back to a number as follows:<br>
```
PermToInt( NthPerm(3) * NthPerm(21) )
```
<br>
So the 3<sup>rd</sup> permutation times the 21<sup>st</sup> permutation gives the 23<sup>rd</sup> permutation. If we multiplied in the other order, we would have<br>
```
PermToInt( NthPerm(21) * NthPerm(3) )
```
<br>
So the 21<sup>st</sup> permutation times the 3<sup>rd</sup> permutation gives the 19<sup>th</sup> permutation.<br>
<br>
EXPERIMENT:<br>
Try entering in different numbers in place of 21 and 3, to see what happens.<br>
<br>
<em>SageMath</em> provides an abbreviation for the permutations. By setting the variable <strong>DisplayPermInt</strong> to true, permutations will be displayed as
their integer counterparts.<br>
```
DisplayPermInt = true
P(4,1,7,6,3,2,5)
```
<br>
Now many of <em>SageMath</em>'s operations can be done with the integer abbreviation of the elements. For example, we found that the quaternionic group <em>Q</em> was
isomorphic to a subgroup of <em>S</em><sub>8</sub>, generated by the elements
<p style='text-align: center;'>(1 2 5 6)(3 4 7 8) and (1 3 5 7)(2 8 6 4).</p>
We first convert these cycles to permutations, which will display as integers.<br>
```
CycleToPerm( C(1,2,5,6)*C(3,4,7,8) )
CycleToPerm( C(1,3,5,7)*C(2,8,6,4) )
```
<br>
So we find that the quaternionic group contains the 25827<sup>th</sup> and 14805<sup>th</sup> permutations. Now we can form the group using these two permutations as generators.<br>
```
Q = Group(NthPerm(25827), NthPerm(14805)); Q
```
<br>
So the entire group can be given on a single line in a way which expresses the entire structure of the group. We can see the multiplication table of the group<br>
```
MultTable(Q)
```
<br>
and see that the group is isomorphic to <em>Q</em>. This integer representation is succinct enough to form such a table, and has many other advantages over cycle
notation, especially when we are working with large subgroups of the symmetric groups. Not only is it much easier to identify two elements as being the same,
but the elements also take up less room to display.<br><br>
We can return to the standard notation for permutations by setting <strong>DisplayPermInt</strong> to false.<br>
```
DisplayPermInt = false
MultTable(Q)
```
<br>
This multiplication table is harder to read. Thus, it is worthwhile to introduce the new representation.<br><br>
There are simple algorithms to convert from the permutation representation to the integer representation and back without a computer. We begin by presenting a method of converting a permutation to an integer.<br><br>
EXAMPLE:<br>
Demonstrate without <em>SageMath</em> that P(4, 1, 7, 6, 3, 2, 5) is the 2000<sup>th</sup> permutation.<br><br>
For each number in the permutation, we count how many numbers further left are larger than that number. For example, the 4 has no numbers further left, so the count
would be 0. The 3, however, has three numbers to the left of it which are larger, namely 4, 7, and 6.
Here are the results of these counts.
<p style='text-align: center;'>P(4, 1, 7, 6, 3, 2, 5)<br>
 0  1  0  1  3  4  2</p>
Next, we multiply the count in the <em>n</em><sup>th</sup> position by (<em>n</em> − 1)!, add the products together, and finally add 1. Thus,
<p style='text-align: center;'>0 · 0! + 1 · 1! + 0 · 2! + 1 · 3! + 3 · 4! + 4 · 5! + 2 · 6! + 1 = 2000.</p>
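This count-and-weight procedure is easy to carry out mechanically. Here is a plain Python sketch of the algorithm (an illustration, not <em>SageMath</em>'s <strong>PermToInt</strong>):<br>
```
from math import factorial

def perm_to_int(perm):
    # for the entry in position p (1-indexed), count how many entries
    # further left are larger, weight that count by (p-1)!, sum, add 1
    total = 0
    for p, value in enumerate(perm):             # p is 0-indexed here
        count = sum(1 for left in perm[:p] if left > value)
        total += count * factorial(p)
    return total + 1

print(perm_to_int([4, 1, 7, 6, 3, 2, 5]))        # 2000
```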
<br>
A similar algorithm reverses the procedure, and determines the <em>n</em><sup>th</sup> permutation.<br><br>
EXAMPLE:<br>
Determine the 4000<sup>th</sup> permutation without <em>SageMath</em>.<br><br>
We begin by subtracting 1, then using the division algorithm to successively divide by 2, 3, 4, etc., until the quotient is 0.<br><br>
<table align="center" border="0">
<tr>
<td align="right">3999 = 2 ·</td>
<td align="right">1999 + 1</td>
</tr>
<tr>
<td align="right">1999 = 3 ·</td>
<td align="right">666 + 1</td>
</tr>
<tr>
<td align="right">666 = 4 ·</td>
<td align="right">166 + 2</td>
</tr>
<tr>
<td align="right">166 = 5 ·</td>
<td align="right">33 + 1</td>
</tr>
<tr>
<td align="right">33 = 6 ·</td>
<td align="right">5 + 3</td>
</tr>
<tr>
<td align="right">5 = 7 ·</td>
<td align="right">0 + 5</td>
</tr>
</table>
<br>
Since the last division was by <em>n</em> = 7, the permutation is in <em>S</em><sub>7</sub>. We will use the remainders to determine the permutation, starting from the
last remainder, and working towards the first. We start with a list of numbers from 1 to <em>n</em>:
<p style='text-align: center;'>{1, 2, 3, 4, 5, 6, 7}</p>
For each remainder <em>m</em>, we
consider the (<em>m</em> + 1)<sup>st</sup> largest number in the list which has not been crossed out. Since the last remainder is 5, we take the 6<sup>th</sup> largest number, which is 2. This eliminates 2 from the list.<br><br>
<table align="center" border="0">
<tr>
<td align="right">3999 = 2 ·</td>
<td align="right">1999 + 1</td>
<td></td>
</tr>
<tr>
<td align="right">1999 = 3 ·</td>
<td align="right">666 + 1</td>
<td></td>
</tr>
<tr>
<td align="right">666 = 4 ·</td>
<td align="right">166 + 2</td>
<td></td>
</tr>
<tr>
<td align="right">166 = 5 ·</td>
<td align="right">33 + 1</td>
<td></td>
</tr>
<tr>
<td align="right">33 = 6 ·</td>
<td align="right">5 + 3</td>
<td></td>
</tr>
<tr>
<td align="right">5 = 7 ·</td>
<td align="right">0 + 5</td>
<td align="left"> ⟹ 2</td>
</tr>
</table>
<p style='text-align: center;'>{1, <font style="text-decoration:line-through">2</font>, 3, 4, 5, 6, 7}</p>
Here is the result after processing two more remainders.
<table align="center" border="0">
<tr>
<td align="right">3999 = 2 ·</td>
<td align="right">1999 + 1</td>
<td></td>
</tr>
<tr>
<td align="right">1999 = 3 ·</td>
<td align="right">666 + 1</td>
<td></td>
</tr>
<tr>
<td align="right">666 = 4 ·</td>
<td align="right">166 + 2</td>
<td></td>
</tr>
<tr>
<td align="right">166 = 5 ·</td>
<td align="right">33 + 1</td>
<td align="left"> ⟹ 6</td>
</tr>
<tr>
<td align="right">33 = 6 ·</td>
<td align="right">5 + 3</td>
<td align="left"> ⟹ 4</td>
</tr>
<tr>
<td align="right">5 = 7 ·</td>
<td align="right">0 + 5</td>
<td align="left"> ⟹ 2</td>
</tr>
</table>
<p style='text-align: center;'>{1, <font style="text-decoration:line-through">2</font>, 3, <font style="text-decoration:line-through">4</font>, 5, <font style="text-decoration:line-through">6</font>, 7}</p>
The next remainder is 2, so we take the 3<sup>rd</sup> largest number which is not crossed out, which is 3. Continuing, we get the following.<br><br>
<table align="center" border="0">
<tr>
<td align="right">3999 = 2 ·</td>
<td align="right">1999 + 1</td>
<td align="left"> ⟹ 1</td>
</tr>
<tr>
<td align="right">1999 = 3 ·</td>
<td align="right">666 + 1</td>
<td align="left"> ⟹ 5</td>
</tr>
<tr>
<td align="right">666 = 4 ·</td>
<td align="right">166 + 2</td>
<td align="left"> ⟹ 3</td>
</tr>
<tr>
<td align="right">166 = 5 ·</td>
<td align="right">33 + 1</td>
<td align="left"> ⟹ 6</td>
</tr>
<tr>
<td align="right">33 = 6 ·</td>
<td align="right">5 + 3</td>
<td align="left"> ⟹ 4</td>
</tr>
<tr>
<td align="right">5 = 7 ·</td>
<td align="right">0 + 5</td>
<td align="left"> ⟹ 2</td>
</tr>
</table>
<p style='text-align: center;'>{<font style="text-decoration:line-through">1</font>, <font style="text-decoration:line-through">2</font>, <font style="text-decoration:line-through">3</font>, <font style="text-decoration:line-through">4</font>, <font style="text-decoration:line-through">5</font>, <font style="text-decoration:line-through">6</font>, 7}</p>
The only number not crossed out is 7, which becomes the first number in the permutation. The rest of the permutation can be read from the new numbers from top to bottom, producing P(7, 1, 5, 3, 6, 4, 2).<br><br>
EXPERIMENT:<br>
Use the method of the previous example to verify that P(7, 1, 5, 3, 6, 4, 2) is indeed the 4000<sup>th</sup> permutation.<br><br>
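The entire division-and-crossing-out procedure can be captured in a short plain Python sketch (an illustration, not <em>SageMath</em>'s <strong>NthPerm</strong>):<br>
```
def nth_perm(n):
    # divide n-1 successively by 2, 3, 4, ... collecting the remainders
    q, d, remainders = n - 1, 2, []
    while q > 0:
        q, m = divmod(q, d)
        remainders.append(m)
        d += 1
    size = d - 1 if remainders else 1      # last divisor = size of permutation
    available = list(range(1, size + 1))
    picks = []
    # from the last remainder to the first, take the (m+1)-st largest
    # number that has not been crossed out
    for m in reversed(remainders):
        value = sorted(available, reverse=True)[m]
        available.remove(value)
        picks.append(value)
    # the one number never crossed out comes first; the rest are read
    # from the top of the table down, i.e. first remainder first
    return available + picks[::-1]

print(nth_perm(4000))                      # [7, 1, 5, 3, 6, 4, 2]
```
Running this on 2000 likewise reproduces P(4, 1, 7, 6, 3, 2, 5).<br>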
The simple algorithms for converting permutations to integers and back make this association more natural. It also explains why <em>SageMath</em> is able to convert
permutations so quickly.
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<h1>Proofs:</h1>
<a name="lem51" id="lem51"></a>
Proof of Lemma 5.1:<br><br>
Let us say that <em>x</em> fixes the integer <em>i</em> if <em>x</em>(<em>i</em>) = <em>i</em>. We will use induction on the number of integers not left fixed
by <em>x</em>, denoted by <em>m</em>. Because <em>x</em> is not the identity, there is at least one integer not fixed by <em>x</em>. In fact, <em>m</em> must
be at least 2, for the first integer must have somewhere to go.<br><br>
If <em>m</em> = 2, then only two numbers <em>i</em><sub>1</sub> and <em>i</em><sub>2</sub> are moved. Since these are the only two integers not fixed, <em>x</em> must be
a 2-cycle (<em>i</em><sub>1</sub> <em>i</em><sub>2</sub>).<br><br>
We now will assume by induction that the lemma is true whenever the number of integers not left fixed by <em>x</em> is fewer than <em>m</em>. Let <em>i</em><sub>1</sub>
be one integer that is not fixed, and let <em>i</em><sub>2</sub> = <em>x</em>(<em>i</em><sub>1</sub>). Then <em>x</em>(<em>i</em><sub>2</sub>) cannot be
<em>i</em><sub>2</sub> for <em>x</em> is one-to-one, and if <em>x</em>(<em>i</em><sub>2</sub>) is not <em>i</em><sub>1</sub>, we define
<em>i</em><sub>3</sub> = <em>x</em>(<em>i</em><sub>2</sub>). Likewise, <em>x</em>(<em>i</em><sub>3</sub>) cannot be either <em>i</em><sub>2</sub> or
<em>i</em><sub>3</sub>, since <em>x</em> is one-to-one. If <em>x</em>(<em>i</em><sub>3</sub>) is not <em>i</em><sub>1</sub>, we define
<em>i</em><sub>4</sub> = <em>x</em>(<em>i</em><sub>3</sub>).<br><br>
Eventually this process must stop, for there are only <em>m</em> elements that are not fixed by <em>x</em>. Thus, there must be some value <em>k</em> such that
<em>x</em>(<em>i<sub>k</sub></em>) = <em>i</em><sub>1</sub>. Define the permutation <em>y</em> to be the <em>k</em>-cycle
(<em>i</em><sub>1</sub> <em>i</em><sub>2</sub> <em>i</em><sub>3</sub> … <em>i<sub>k</sub></em>). <br>
Then <em>x</em>·<em>y</em><sup>-1</sup> fixes
all of the integers fixed by <em>x</em>, along with <em>i</em><sub>1</sub>, <em>i</em><sub>2</sub>, <em>i</em><sub>3</sub>, … <em>i<sub>k</sub></em>.
By induction, since there are fewer integers not fixed by <em>x</em>·<em>y</em><sup>-1</sup> than by <em>x</em>, <em>x</em>·<em>y</em><sup>-1</sup>
can be expressed by a series of nontrivial disjoint cycles
<em>c</em><sub>1</sub>·<em>c</em><sub>2</sub>·<em>c</em><sub>3</sub>·…·<em>c<sub>t</sub></em>. Moreover, the integers appearing
in <em>c</em><sub>1</sub>·<em>c</em><sub>2</sub>·<em>c</em><sub>3</sub>·…·<em>c<sub>t</sub></em> are just those that are not fixed
by <em>x</em>·<em>y</em><sup>-1</sup>. Thus, <em>c</em><sub>1</sub>, <em>c</em><sub>2</sub>, <em>c</em><sub>3</sub>, …, <em>c<sub>t</sub></em>
are disjoint from <em>y</em>. Finally, we have
<p style='text-align: center;'><em>x</em> =
<em>y</em>·<em>c</em><sub>1</sub>·<em>c</em><sub>2</sub>·<em>c</em><sub>3</sub>·…·<em>c<sub>t</sub></em>.</p>
Therefore, <em>x</em> can be written as a product of disjoint nontrivial cycles. By induction, every permutation besides the identity can be written as a product
of nontrivial disjoint cycles.<br><br>
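The induction in this proof is constructive, and can be sketched in a few lines of plain Python (illustrative only): starting from each integer not yet visited and not fixed, follow <em>x</em> around the orbit until it returns.<br>
```
def disjoint_cycles(perm):
    # perm maps i to perm[i-1]; trace each unvisited, non-fixed integer
    # around its orbit, exactly as in the proof
    seen, cycles = set(), []
    for start in range(1, len(perm) + 1):
        if start in seen or perm[start - 1] == start:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i - 1]
        cycles.append(tuple(cycle))
    return cycles

print(disjoint_cycles([4, 1, 7, 6, 3, 2, 5]))   # [(1, 4, 6, 2), (3, 7, 5)]
```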
For the uniqueness, suppose that a permutation <em>x</em> has two ways of being written in terms of nontrivial disjoint cycles:
<p style='text-align: center;'><em>x</em> = <em>c</em><sub>1</sub>·<em>c</em><sub>2</sub>·<em>c</em><sub>3</sub>·…·<em>c<sub>r</sub></em>
= <em>d</em><sub>1</sub>·<em>d</em><sub>2</sub>·<em>d</em><sub>3</sub>·…·<em>d<sub>s</sub></em>.</p>
For any integer <em>i</em><sub>1</sub> not fixed by <em>x</em>, one and only one cycle must contain <em>i</em><sub>1</sub>. Suppose that cycle is
<em>c<sub>j</sub></em> = (<em>i</em><sub>1</sub>, <em>i</em><sub>2</sub>, <em>i</em><sub>3</sub>, … <em>i<sub>q</sub></em>). But by the way we constructed
the cycles above, this cycle must also be one of the <em>d<sub>k</sub></em>'s. Thus, each cycle <em>c<sub>j</sub></em> is equal to
<em>d<sub>k</sub></em> for some <em>k</em>. By symmetry, each <em>d<sub>k</sub></em> is equal to <em>c<sub>j</sub></em> for some <em>j</em>. Thus, the two ways of
writing <em>x</em> in terms of nontrivial disjoint cycles are merely rearrangements of the cycles.<br><br>
<a href="#lem51ret">Return to text</a>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="lem52" id="lem52"></a>
Proof of Lemma 5.2:<br><br>
We need to show that every element of <em>S<sub>n</sub></em> can be written as a product of transpositions. The identity element can be written as <nobr>(1 2)(1 2),</nobr> so we let
<em>x</em> be a permutation that is not the identity. By Lemma 5.1, we can express <em>x</em> as a product of nontrivial disjoint cycles:
<p style='text-align: center;'><em>x</em> = (<em>i</em><sub>1</sub> <em>i</em><sub>2</sub> <em>i</em><sub>3</sub> …
<em>i<sub>r</sub></em>)·(<em>j</em><sub>1</sub> <em>j</em><sub>2</sub> … <em>j<sub>s</sub></em>)·(<em>k</em><sub>1</sub> <em>k</em><sub>2</sub>
… <em>k<sub>t</sub></em>)·….</p>
Now, consider the product of transpositions
<p style='text-align: center;'>(<em>i</em><sub>1</sub> <em>i</em><sub>2</sub>)·(<em>i</em><sub>2</sub>
<em>i</em><sub>3</sub>)· ⋯ ·(<em>i</em><sub><em>r</em>−1</sub> <em>i<sub>r</sub></em>)·(<em>j</em><sub>1</sub>
<em>j</em><sub>2</sub>)·(<em>j</em><sub>2</sub> <em>j</em><sub>3</sub>)· ⋯ ·(<em>j</em><sub><em>s</em>−1</sub>
<em>j<sub>s</sub></em>)·(<em>k</em><sub>1</sub> <em>k</em><sub>2</sub>)· ⋯ ·(<em>k</em><sub><em>t</em>−1</sub> <em>k<sub>t</sub></em>)· ⋯ .</p>
Note that this product is equal to <em>x</em>. (Recall that we are working from right to left.) Therefore, we have expressed every element of <em>S<sub>n</sub></em>
as a product of transpositions.<br><br>
<a href="#lem52ret">Return to text</a>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="lem53" id="lem53"></a>
Proof of Lemma 5.3:<br><br>
Since <em>S</em><sub>2</sub> only contains one transposition, (1 2), raising this to an odd power will not be the identity element, so the lemma is true for the case
<em>n</em> = 2. So by induction we can assume that the lemma is true for <em>S</em><sub><em>n</em>−1</sub>. Suppose that there is an odd number of transpositions
producing the identity in <em>S<sub>n</sub></em>. Then we can find such a product that uses the fewest number of transpositions, say <em>k</em> transpositions,
with <em>k</em> odd. At least one transposition will
involve moving <em>n</em>, since the lemma is true for <em>S</em><sub><em>n</em>−1</sub>.
Suppose that the <em>m</em><sup>th</sup> transposition is the last one that
moves <em>n</em>. If <em>m</em> = 1, then only the first transposition moves <em>n</em>, so the product cannot be the identity. We
will now use induction on <em>m</em>. That is,
we will assume that no product of <em>k</em> transpositions can be the identity for a smaller <em>m</em>. But then the (<em>m</em> − 1)<sup>st</sup>
and <em>m</em><sup>th</sup> transpositions are one of the four possibilities
<p style='text-align: center;'>(<em>n</em> <em>x</em>)(<em>n</em> <em>x</em>), (<em>n</em> <em>x</em>)(<em>n</em> <em>y</em>),
(<em>x</em> <em>y</em>)(<em>n</em> <em>x</em>), or (<em>y</em> <em>z</em>)(<em>n</em> <em>x</em>)</p>
for some <em>x</em>, <em>y</em>, and <em>z</em>. In the first case, the two transpositions cancel, so we can form a product using fewer transpositions, contradicting
our choice of <em>k</em>. In the other three cases, we can replace the pair with another pair,
<p style='text-align: center;'>(<em>n</em> <em>x</em>)(<em>n</em> <em>y</em>) = (<em>n</em> <em>y</em>)(<em>x</em> <em>y</em>); 
(<em>x</em> <em>y</em>)(<em>n</em> <em>x</em>) = (<em>n</em> <em>y</em>)(<em>x</em> <em>y</em>); 
(<em>y</em> <em>z</em>)(<em>n</em> <em>x</em>) = (<em>n</em> <em>x</em>)(<em>y</em> <em>z</em>);</p>
for which <em>m</em> is smaller. Thus, there is no odd product of transpositions in <em>S<sub>n</sub></em> equaling the identity.<br><br>
<a href="#lem53ret">Return to text</a>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="theor51" id="theor51"></a>
Proof of Theorem 5.1:<br><br>
By Lemma 5.2, every element of <em>S<sub>n</sub></em> can be written as a product of transpositions, so <em>σ</em>(<em>x</em>) is well defined. Obviously this
maps <em>S<sub>n</sub></em> to {−1, 1}, so we only need to establish that this is a homomorphism. Suppose that
<p style='text-align: center;'><em>σ</em>(<em>x</em>·<em>y</em>) ≠ <em>σ</em>(<em>x</em>)·<em>σ</em>(<em>y</em>).</p>
Then <em>N</em>(<em>x</em>·<em>y</em>) − (<em>N</em>(<em>x</em>) + <em>N</em>(<em>y</em>)) would be an odd number. Since
<em>N</em>(<em>x</em><sup>-1</sup>) = <em>N</em>(<em>x</em>), we would also have <em>N</em>(<em>x</em>·<em>y</em>) + (<em>N</em>(<em>y</em><sup>-1</sup>) +
<em>N</em>(<em>x</em><sup>-1</sup>))
being an odd number. But then we would have three sets of transpositions, totaling an odd number, which when strung together produce
<em>x</em>·<em>y</em>·<em>y</em><sup>-1</sup>·<em>x</em><sup>-1</sup> = ( ). This contradicts Lemma 5.3, so in fact
<em>σ</em>(<em>x</em>·<em>y</em>) = <em>σ</em>(<em>x</em>)·<em>σ</em>(<em>y</em>) for all <em>x</em> and <em>y</em> in
<em>S<sub>n</sub></em>.<br><br>
<a href="#theor51ret">Return to text</a>
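The parity argument above can also be checked numerically. The sketch below (plain Python, not part of the text; helper names are ours) decomposes a permutation into transpositions cycle by cycle, composing right to left as the text does, and verifies that the resulting sign is multiplicative on <em>S</em><sub>4</sub>:

```python
from itertools import permutations

def transpositions(perm):
    # Split each cycle (a1 a2 ... ak) into the k-1 transpositions
    # (a1 ak)...(a1 a2), mirroring the decomposition in Lemma 5.2.
    seen, result = set(), []
    for start in range(len(perm)):
        x, cycle = start, []
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        result.extend((cycle[0], a) for a in cycle[1:])
    return result

def sign(perm):
    # sigma(x) = (-1)^(number of transpositions)
    return 1 if len(transpositions(perm)) % 2 == 0 else -1

def compose(p, q):
    # apply q first, then p (right-to-left convention)
    return tuple(p[q[i]] for i in range(len(q)))

# sigma(x.y) == sigma(x).sigma(y) for every pair in S_4
for x in permutations(range(4)):
    for y in permutations(range(4)):
        assert sign(compose(x, y)) == sign(x) * sign(y)
print("sign is a homomorphism on S_4")
```

By Lemma 5.3 the parity does not depend on which decomposition into transpositions is chosen, so any decomposition scheme would give the same `sign`.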
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="cor51" id="cor51"></a>
Proof of Corollary 5.1:<br><br>
Clearly <em>A<sub>n</sub></em> is a normal subgroup of <em>S<sub>n</sub></em>, since <em>A<sub>n</sub></em> is the kernel of the signature homomorphism.
Also if <em>n</em> > 1, then <em>S<sub>n</sub></em> contains at least one transposition whose signature would be −1. Thus, the image of the
homomorphism is {1, −1}. This group is isomorphic to <em>Z</em><sub>2</sub>. Then by the first isomorphism theorem (4.1),
<em>S<sub>n</sub></em>/<em>A<sub>n</sub></em> is isomorphic to <em>Z</em><sub>2</sub>.<br><br>
<a href="#cor51ret">Return to text</a>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="prop51" id="prop51"></a>
Proof of Proposition 5.1:<br><br>
Since every 3-cycle is a product of two transpositions, every 3-cycle is in <em>A<sub>n</sub></em>. Thus, it is sufficient to show that every element
in <em>A<sub>n</sub></em> can be expressed in terms of 3-cycles. We have already seen that any element can be expressed as a product of an even number of
transpositions. Suppose we group these in pairs as follows:
<p style='text-align: center;'><em>x</em> = [(<em>i</em><sub>1</sub> <em>j</em><sub>1</sub>)·(<em>k</em><sub>1</sub> <em>l</em><sub>1</sub>)] ·
[(<em>i</em><sub>2</sub> <em>j</em><sub>2</sub>)·(<em>k</em><sub>2</sub> <em>l</em><sub>2</sub>)] ·
… · [(<em>i<sub>r</sub></em> <em>j<sub>r</sub></em>)·(<em>k<sub>r</sub></em> <em>l<sub>r</sub></em>)].</p>
If we could convert each pair of transpositions into 3-cycles, we would have the permutation <em>x</em> expressed as a product of 3-cycles.
There are three cases to consider:<br><br>
Case 1: The integers <em>i<sub>m</sub></em>, <em>j<sub>m</sub></em>, <em>k<sub>m</sub></em>, and <em>l<sub>m</sub></em> are all distinct. In this case,
<p style='text-align: center;'>(<em>i<sub>m</sub></em> <em>j<sub>m</sub></em>)·(<em>k<sub>m</sub></em> <em>l<sub>m</sub></em>) =
(<em>i<sub>m</sub></em> <em>k<sub>m</sub></em> <em>l<sub>m</sub></em>)·(<em>i<sub>m</sub></em> <em>j<sub>m</sub></em> <em>l<sub>m</sub></em>).</p>
<br>
Case 2: Three of the four integers <em>i<sub>m</sub></em>, <em>j<sub>m</sub></em>, <em>k<sub>m</sub></em>, and <em>l<sub>m</sub></em> are distinct. The four
combinations that would produce this situation are
<p style='text-align: center;'><em>i<sub>m</sub></em> = <em>k<sub>m</sub></em>, <em>i<sub>m</sub></em> = <em>l<sub>m</sub></em>, <em>j<sub>m</sub></em> =
<em>k<sub>m</sub></em>, or <em>j<sub>m</sub></em> = <em>l<sub>m</sub></em></p>
However, these four possibilities are essentially the same, so we only have to check one of these four combinations: <em>i<sub>m</sub></em> = <em>k<sub>m</sub></em>.
Then we have
<p style='text-align: center;'>(<em>i<sub>m</sub></em> <em>j<sub>m</sub></em>)·(<em>i<sub>m</sub></em> <em>l<sub>m</sub></em>) =
(<em>i<sub>m</sub></em> <em>l<sub>m</sub></em> <em>j<sub>m</sub></em>).</p>
<br>
Case 3: Only two of the four integers <em>i<sub>m</sub></em>, <em>j<sub>m</sub></em>, <em>k<sub>m</sub></em>, and <em>l<sub>m</sub></em> are distinct.
Then we must either have <em>i<sub>m</sub></em> = <em>k<sub>m</sub></em> and <em>j<sub>m</sub></em> = <em>l<sub>m</sub></em>, or
<em>i<sub>m</sub></em> = <em>l<sub>m</sub></em> and <em>j<sub>m</sub></em> = <em>k<sub>m</sub></em>. In either case, we have
<p style='text-align: center;'>(<em>i<sub>m</sub></em> <em>j<sub>m</sub></em>)·(<em>k<sub>m</sub></em> <em>l<sub>m</sub></em>) =
( ) = (1 2 3)·(1 3 2).</p>
In all three cases, we were able to express a pair of transpositions in terms of a product of one or two 3-cycles. Therefore, the permutation <em>x</em> can be
written as a product of 3-cycles.<br><br>
<a href="#prop51ret">Return to text</a>
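Case 1 can be checked directly. The following sketch (plain Python; the helpers are ours, composing right to left as in the text) verifies (<em>i</em> <em>j</em>)·(<em>k</em> <em>l</em>) = (<em>i</em> <em>k</em> <em>l</em>)·(<em>i</em> <em>j</em> <em>l</em>) for one choice of distinct <em>i</em>, <em>j</em>, <em>k</em>, <em>l</em>:

```python
def transposition(n, i, j):
    # permutation of {0,...,n-1} swapping i and j
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def cycle3(n, a, b, c):
    # the 3-cycle (a b c): a -> b, b -> c, c -> a
    p = list(range(n))
    p[a], p[b], p[c] = b, c, a
    return tuple(p)

def compose(p, q):
    # apply q first, then p (right-to-left convention)
    return tuple(p[q[x]] for x in range(len(q)))

n = 4
i, j, k, l = 0, 1, 2, 3
left = compose(transposition(n, i, j), transposition(n, k, l))
right = compose(cycle3(n, i, k, l), cycle3(n, i, j, l))
assert left == right
print("(i j)(k l) == (i k l)(i j l)")
```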
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="theor52" id="theor52"></a>
Proof of Theorem 5.2:<br><br>
Let <em>G</em> be a group of order <em>n</em>. For each <em>g</em> in <em>G</em>, define the mapping
<p style='text-align: center;'><em>p<sub>g</sub></em> : <em>G</em> → <em>G</em> by <em>p<sub>g</sub></em>(<em>x</em>) = <em>g</em>·<em>x</em>.</p>
For a given <em>g</em>, if <em>p<sub>g</sub></em>(<em>x</em>) = <em>p<sub>g</sub></em>(<em>y</em>), then <em>g</em>·<em>x</em> = <em>g</em>·<em>y</em>,
so <em>x</em> = <em>y</em>. Hence, <em>p<sub>g</sub></em> is a one-to-one mapping. Since <em>G</em> is a finite group, we can use the pigeonhole principle to show that
<em>p<sub>g</sub></em> is also onto, and hence is a permutation of the elements of <em>G</em>.<br><br>
We now can consider the mapping <em>ϕ</em> from <em>G</em> to the symmetric group <em>S</em><sub>|<em>G</em>|</sub> on the elements of <em>G</em>, given by
<p style='text-align: center;'><em>ϕ</em>(<em>g</em>) = <em>p<sub>g</sub></em>.</p>
Now, consider two elements <em>ϕ</em>(<em>g</em>) and <em>ϕ</em>(<em>h</em>). The product of these is the mapping
<p style='text-align: center;'><em>x</em> → (<em>p<sub>g</sub></em>·<em>p<sub>h</sub></em>)(<em>x</em>) = <em>p<sub>g</sub></em>(<em>p<sub>h</sub></em>(<em>x</em>)) = <em>p<sub>g</sub></em>(<em>h</em>·<em>x</em>) =
<em>g</em>·(<em>h</em>·<em>x</em>) = (<em>g</em>·<em>h</em>)·<em>x</em>.</p>
Since this is the same as <em>ϕ</em>(<em>g</em>·<em>h</em>), <em>ϕ</em> is a homomorphism.<br><br>
The element <em>g</em> will be in the kernel of the homomorphism <em>ϕ</em> only if <em>p<sub>g</sub></em>(<em>x</em>) is the identity permutation.
This means that <em>g</em>·<em>x</em> = <em>x</em> for all elements <em>x</em> in <em>G</em>. Thus, the kernel consists just of the identity element
of <em>G</em>, and hence <em>ϕ</em> is an isomorphism. Therefore, <em>G</em> is isomorphic to a subgroup of <em>S</em><sub>|<em>G</em>|</sub>.<br><br>
<a href="#theor52ret">Return to text</a>
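The construction in this proof can be made concrete for a small group. The sketch below (plain Python; <em>Z</em><sub>6</sub> is chosen purely for illustration) builds <em>p<sub>g</sub></em> for each <em>g</em>, and checks that <em>g</em> → <em>p<sub>g</sub></em> is an injective homomorphism, embedding <em>Z</em><sub>6</sub> in <em>S</em><sub>6</sub>:

```python
# Regular representation of Z_6 (integers mod 6 under addition):
# each g becomes the permutation x -> g + x (mod 6).
n = 6
perms = {g: tuple((g + x) % n for x in range(n)) for g in range(n)}

def compose(p, q):
    # apply q first, then p (right-to-left convention)
    return tuple(p[q[x]] for x in range(len(q)))

# phi is a homomorphism: phi(g+h) == phi(g) composed with phi(h)
for g in range(n):
    for h in range(n):
        assert perms[(g + h) % n] == compose(perms[g], perms[h])

# phi is one-to-one: distinct elements give distinct permutations
assert len(set(perms.values())) == n
print("Z_6 embeds in S_6 via left translation")
```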
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="theor53" id="theor53"></a>
Proof of Theorem 5.3:<br><br>
Let <em>Q</em> be the set of left cosets <em>G</em>/<em>H</em>. For each <em>g</em> in <em>G</em>, define the mapping
<p style='text-align: center;'><em>p<sub>g</sub></em> : <em>Q</em> → <em>Q</em> by  <em>p<sub>g</sub></em>(<em>x H</em>) = <em>g</em>·<em>x H</em>.</p>
Note that this is well defined, since if <em>x H</em> = <em>y H</em>, then <em>g</em>·<em>x H</em> = <em>g</em>·<em>y H</em>.<br><br>
For a given <em>g</em>, if <em>p<sub>g</sub></em>(<em>x H</em>) = <em>p<sub>g</sub></em>(<em>y H</em>), then <em>g</em>·<em>x H</em> = <em>g</em>·<em>y H</em>,
so <em>x H</em> = <em>y H</em>. Hence, <em>p<sub>g</sub></em> is a one-to-one mapping. Since <em>Q</em> is a finite set, by the
pigeonhole principle, <em>p<sub>g</sub></em> must also be onto, and hence is a permutation of the elements of <em>Q</em>.<br><br>
We now can consider the mapping <em>ϕ</em> from <em>G</em> to the symmetric group <em>S</em><sub>|<em>Q</em>|</sub> on the elements of <em>Q</em>, given by
<p style='text-align: center;'><em>ϕ</em>(<em>g</em>) = <em>p<sub>g</sub></em>.</p>
Now, consider two elements <em>ϕ</em>(<em>g</em>) and <em>ϕ</em>(<em>h</em>). The product of these is the mapping
<p style='text-align: center;'><em>x H</em> → (<em>p<sub>g</sub></em>·<em>p<sub>h</sub></em>)(<em>x H</em>) =
<em>p<sub>g</sub></em>(<em>p<sub>h</sub></em>(<em>x H</em>)) = <em>p<sub>g</sub></em>(<em>h</em>·<em>x H</em>) =
<em>g</em>·(<em>h</em>·<em>x H</em>) = (<em>g</em>·<em>h</em>)·<em>x H</em>.</p>
Since this is the same as <em>ϕ</em>(<em>g</em>·<em>h</em>), <em>ϕ</em> is a homomorphism.<br><br>
Finally, we must show that the kernel of <em>ϕ</em> is a subgroup of <em>H</em>. The element <em>g</em> will be in the kernel of the homomorphism <em>ϕ</em>
only if <em>p<sub>g</sub></em>(<em>x H</em>) is the identity permutation. This means that <em>g</em>·<em>x H</em> = <em>x H</em>
for all left cosets <em>x H</em> in <em>Q</em>. In particular, the left coset <em>e</em>·<em>H</em> = <em>H</em> is in <em>Q</em>, so
<em>g</em>·<em>H</em> = <em>H</em>. This can only happen if <em>g</em> is in <em>H</em>. Thus, the kernel is a subgroup of <em>H</em>. We have found a
homomorphism <em>ϕ</em> from the group <em>G</em> to the group <em>S</em><sub>|<em>Q</em>|</sub>, whose kernel is a subgroup of <em>H</em>.<br><br>
<a href="#theor53ret">Return to text</a>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="cor52" id="cor52"></a>
Proof of Corollary 5.2:<br><br>
By the generalized Cayley's theorem (5.3), there is a homomorphism <em>ϕ</em> from <em>G</em> to <em>S<sub>k</sub></em>, where <em>k</em> = |<em>G</em>|/|<em>H</em>|. Furthermore, the
kernel is a subgroup of <em>H</em>. If we let <em>N</em> be the kernel, and let <em>I</em> be the image of the homomorphism, we have by the first isomorphism
theorem (4.1) that
<p style='text-align: center;'><em>G</em>/<em>N</em> ≈ <em>I</em>.</p>
In particular, |<em>G</em>|/|<em>N</em>| = |<em>I</em>|, and |<em>I</em>| is a factor
of |<em>S<sub>k</sub></em>| = <em>k</em>!. This means that |<em>G</em>| is a factor of <em>k</em>!· |<em>N</em>|.<br><br>
<a href="#cor52ret">Return to text</a>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<a name="sec5p" id="sec5p"></a>
<h1><em>SageMath</em> Problems</h1>
<br>
For Problems 17 through 20: Determine how the following permutations can be expressed in terms of the book
rearrangements <strong>First</strong>, <strong>Last</strong>, <strong>Left</strong>, <strong>Right</strong>, and <strong>Rev</strong>.<br>
<br>
§5.1 #17)<br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">1</td>
<td align="center">3</td>
<td align="center">2</td>
<td align="center">4</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
§5.1 #18)<br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">4</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
§5.1 #19)<br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">3</td>
<td align="center">1</td>
<td align="center">4</td>
<td align="center">2</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
§5.1 #20)<br>
<table align="center" width="160" border="0" cellspacing="0" cellpadding="0">
<tr>
<td align="right" valign="bottom">⎛</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="left" valign="bottom">⎞</td>
</tr>
<tr>
<td align="right" valign="top">⎝</td>
<td align="center">2</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="center">3</td>
<td align="left" valign="top">⎠</td>
</tr>
</table>
<br>
§5.2 #18)<br>
Use <em>SageMath</em> to find a pair of 3-cycles whose product is a 3-cycle. Can there be a product of two 4-cycles that yields a 4-cycle?<br>
<br>
§5.2 #19)<br>
The <em>cycle structure</em> of a permutation is the number of 2-cycles, 3-cycles, etc. it contains when written as a product of disjoint cycles. For example, (1 2 3)(4 5) and (3 4 5)(1 2) have the same cycle structure. Consider the elements<br>
```
a = C(1, 2, 3); a
b = C(1, 4, 2, 5, 6, 7); b
```
<br>
Predict the cycle structure of <em>a</em><sup>2</sup>, <em>a</em><sup>3</sup>, <em>b</em><sup>2</sup>, <em>b</em><sup>3</sup>, and <em>b</em><sup>6</sup>. Check your answers with <em>SageMath</em>.<br>
<br>
§5.2 #20)<br>
Calculate <em>a</em>·<em>b</em> from Problem 19. Predict the cycle structure of (<em>a</em>·<em>b</em>)<sup>2</sup>,
(<em>a</em>·<em>b</em>)<sup>3</sup>, and (<em>a</em>·<em>b</em>)<sup>4</sup>, and verify your predictions with <em>SageMath</em>.<br>
<br>
§5.2 #21)<br>
Calculate <em>a</em>·<em>b</em>·<em>a</em><sup>-1</sup> from Problem 19. Notice that it has the same cycle structure as <em>b</em>. Try this with other
random permutations. Does <em>a</em>·<em>b</em>·<em>a</em><sup>-1</sup> always have the same cycle structure as <em>b</em>? How do Problems 16 and 17
explain what is happening?<br>
<br>
§5.3 #20)<br>
Use Cayley's theorem (5.2) to find a subgroup of <em>S</em><sub>12</sub> that is isomorphic to <em>Z</em><sub>21</sub><sup>*</sup>.<br>
<br>
§5.3 #21)<br>
Use Cayley's theorem (5.2) to find a subgroup of <em>S</em><sub>12</sub> that is isomorphic to the following group:<br>
```
InitGroup("e")
AddGroupVar("a", "b")
Define(a^3, e)
Define(b^4, e)
Define(b*a, a^2*b)
G = Group(); G
```
<br>
§5.3 #22)<br>
Use the generalized Cayley's theorem (5.3) to find a subgroup of <em>S</em><sub>8</sub> that is isomorphic to the following group:<br>
```
InitGroup("e")
AddGroupVar("a", "b")
Define(a^2, e)
Define(b^8, e)
Define(b*a, a*b^5)
G = Group(); G
```
<br>
Hint: Find a subgroup of order 2 that is not normal.<br>
<br>
§5.4 #19)<br>
Find the elements of <em>A</em><sub>4</sub> converted to the integer representation. Is there a pattern as to which positive integers correspond to the
even permutations, and which correspond to odd? Does the pattern continue to <em>A</em><sub>5</sub>?<br>
<br>
§5.4 #20)<br>
Use <em>SageMath</em> to find all elements of <em>S</em><sub>7</sub> whose square is P(3,5,1,7,6,2,4).<br>
(Hint: Use a "for" loop to test all of the elements of <em>S</em><sub>7</sub>):<br>
```
for i in range(1, 5041):
    if NthPerm(i)^2 == P(3,5,1,7,6,2,4):
        print(NthPerm(i))
```
<br>
§5.4 #21)<br>
Use <em>SageMath</em> to find all elements of <em>S</em><sub>6</sub> whose cube is P(3,5,6,1,2,4). (See the hint for Problem 20.)<br>
</font>
</body>
</html>
# Shortcuts to usefulness
Our tutorial materials help take a dedicated person from non-quantumness to the ability to do interesting and useful things with Qiskit.
But for hackathons, game jams or other events where people might get their first taste, we need to provide a shortcut.
The *CreativeQiskit* tools do this by making it easier to do simple creative projects with Qiskit, and to tempt people into starting to play with the source code.
For example, `twobit` is inspired by a variant of 'Quantum Battleships'. It presents some simple qubit behaviour in a simple object.
# How to use `twobit`
```
import sys
sys.path.append('../')
import CreativeQiskit
```
A qubit is the quantum version of a bit. So you can use it to store boolean values. That's basically what a `twobit` object does.
```
b = CreativeQiskit.twobit()
```
We can prepare our bit with the value `True` or `False`, and then read it out again using the `Z_value()` method.
```
b.prepare({'Z':True})
print(" bit value =",b.Z_value() )
b.prepare({'Z':False})
print(" bit value =",b.Z_value() )
```
You probably notice the `Z` in all the lines above. This is because, though you can only store one bit in a single qubit, there are many ways to do it. So when preparing the bit, and when reading out the value, you need to specify what method is used. In the above, we used what is known as the `Z` basis.
The `twobit` object also supports the use of the so-called `X` basis.
```
b.prepare({'X':True})
print(" bit value =",b.X_value() )
b.prepare({'X':False})
print(" bit value =",b.X_value() )
```
These two ways of storing a bit are completely incompatible. If you encode using the `Z` basis and read out using the `X` (or vice-versa) you'll get a random result.
```
print(" Here are 10 trials, each with True encoded in the Z basis. The values read out with X are:\n")
for trial in range(1,11):
    b.prepare({'Z':True})
    message = " Try " + str(trial) + ": "
    message += str( b.X_value() )
    print( message )
```
Once you read a value out for a given basis, the qubit forgets anything that was encoded within it before the readout. So though encoding with `Z` and then reading out with `X` gives a random value, that value will remain if you read out using `X` again.
Below we do the same as before, but this time the readout is done 5 times during each individual trail, instead of just once.
```
for trial in range(1,11):
    message = " Try " + str(trial) + ": "
    b.prepare({'Z':True})
    for repeat in range(5):
        message += str( b.X_value() ) + ", "
    print(message)
```
This behaviour is exactly why `Z_value()` and `X_value()` need to be methods of the objects rather than just attributes. If they were attributes, it would imply that they can both have well defined values at the same time, which we can just look at whenever we want without changing the object. But this is not the case. Instead, the action of extracting the values requires the object to run a process, known as measurement, which can change what is going on inside the object. That's why it needs a method.
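A rough classical caricature of this behaviour can make the point concrete. The sketch below is plain Python with made-up names; it is not a quantum simulation and not the actual `twobit` implementation, just a toy model of "reading in the other basis randomizes, and the random result then sticks":

```python
import random

class TwoBitSketch:
    """Toy classical model: a value lives in one basis; reading it in the
    other basis returns a random result, which then persists on rereads."""
    def __init__(self):
        self.basis, self.value = None, None

    def prepare(self, basis, value):
        self.basis, self.value = basis, value

    def read(self, basis):
        if self.basis != basis:                  # incompatible readout:
            self.basis = basis                   # forget the old encoding
            self.value = random.choice([True, False])
        return self.value

b = TwoBitSketch()
b.prepare('Z', True)
assert b.read('Z') is True                       # same basis: deterministic
first = b.read('X')                              # other basis: random...
assert all(b.read('X') == first for _ in range(5))  # ...then repeatable
```

Note that `read` has to be a method that can mutate the object, for exactly the reason given above: extracting a value is a process, not a lookup.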
The `X_value()`, `Z_value()` and `value()` methods all have the standard kwargs `device`, `noisy` and `shots` as explained in [the README](README.md). When noise is present, the wrong output might be returned with a small probability. Some mitigation is done to make this less likely. This can be turned off by setting the `mitigate` kwarg (which defaults to `True`) to `False`. Large values of `shots` (such as 1024) allow the mitigation to work better than for smaller values. At the extreme of `shots=1`, the mitigation becomes powerless.
For example, here is the same setup as above (10 samples, each with 5 repeated readouts) but with unmitigated noise.
```
b = CreativeQiskit.twobit()
for trial in range(1,11):
    message = " Try " + str(trial) + ": "
    b.prepare({'Z':True})
    for repeat in range(5):
        message += str( b.X_value(noisy=0.2,mitigate=False) ) + ", "
    print(message)
```
And now with mitigated noise.
```
b = CreativeQiskit.twobit()
for trial in range(1,11):
    message = " Try " + str(trial) + ": "
    b.prepare({'Z':True})
    for repeat in range(5):
        message += str( b.X_value(noisy=True) ) + ", "
    print(message)
```
In the game [Battleships with complementary measurements](https://medium.com/@decodoku/how-to-program-a-quantum-computer-part-2-f0d3eee872fe), this behaviour is used to implement the attacks that can be made on ships. There are two kinds of attack, which correspond to calling `value()` with either `Z` or `X`.
A ship is destroyed when the result is `True`. If `False`, the ship survives the attack. It also becomes immune to it, since another identical call to `value()` will give the same result. So for any hope of success, the other type of attack must be used. If the ship again survives, it will have forgotten its immunity to the previous attack type. So switching between attacks will ensure victory in the end.
*Note: The following cell is interactive so you'll need to run it yourself*
```
ship = CreativeQiskit.twobit()
destroyed = False
while not destroyed:
    basis = input('\n > Choose a torpedo type (Z or X)...\n ')
    destroyed = ship.value(basis)
    if destroyed:
        print('\n *Ship destroyed!*')
    else:
        print('\n *Attack failed!*')
print('\n **Mission complete!**')
```
Note that the `prepare()` method was never called above. This results in the first use of `value()` giving a random outcome, regardless of whether `X` or `Z` was used. Also, rather than using the `X_value()` or `Z_value()` methods, we used `value()` with a kwarg to specify `X` or `Z`. This is just a shortcut provided to write nicer programs in cases like these.
Finally, it should be noted that there is a third basis alongside `X` and `Z`. As you could probably guess, it's called `Y`. It is also fully functional, and is even used in the case where `prepare()` is not called to provide an initial state that is random for both `X` and `Z`. So this object should really be called `three_bit`. But it's not, because everyone always ignores poor `Y`.
# Detect and Mitigate Unfairness in Models
Machine learning models can incorporate unintentional bias, which can lead to issues with *fairness*. For example, a model that predicts the likelihood of diabetes might work well for some age groups, but not for others - subjecting a subset of patients to unnecessary tests, or depriving them of tests that would confirm a diabetes diagnosis.
In this notebook, you'll use the **Fairlearn** package to analyze a model and explore disparity in prediction performance for different subsets of patients based on age.
> **Note**: Integration with the Fairlearn package is in preview at this time. You may experience some unexpected errors.
## Important - Considerations for fairness
> This notebook is designed as a practical exercise to help you explore the Fairlearn package and its integration with Azure Machine Learning. However, there are a great number of considerations that an organization or data science team must discuss related to fairness before using the tools. Fairness is a complex *sociotechnical* challenge that goes beyond simply running a tool to analyze models.
>
> Microsoft Research has co-developed a [fairness checklist](https://www.microsoft.com/en-us/research/publication/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai/) that provides a great starting point for the important discussions that need to take place before a single line of code is written.
## Install the required SDKs
To use the Fairlearn package with Azure Machine Learning, you need the Azure Machine Learning and Fairlearn Python packages, so run the following cell to verify that the **azureml-contrib-fairness** package is installed.
```
!pip show azureml-contrib-fairness
```
You'll also need the **fairlearn** package itself, and the **raiwidgets** package (which is used by Fairlearn to visualize dashboards). Run the following cell to install them.
```
!pip install --upgrade fairlearn==0.7.0 raiwidgets
```
## Train a model
You'll start by training a classification model to predict the likelihood of diabetes. In addition to splitting the data into training and test sets of features and labels, you'll extract *sensitive* features that are used to define subpopulations of the data for which you want to compare fairness. In this case, you'll use the **Age** column to define two categories of patient: those over 50 years old, and those 50 or younger.
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# load the diabetes dataset
print("Loading Data...")
data = pd.read_csv('data/diabetes.csv')
# Separate features and labels
features = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']
X, y = data[features].values, data['Diabetic'].values
# Get sensitive features
S = data[['Age']].astype(int)
# Change value to represent age groups
S['Age'] = np.where(S.Age > 50, 'Over 50', '50 or younger')
# Split data into training set and test set
X_train, X_test, y_train, y_test, S_train, S_test = train_test_split(X, y, S, test_size=0.20, random_state=0, stratify=y)
# Train a classification model
print("Training model...")
diabetes_model = DecisionTreeClassifier().fit(X_train, y_train)
print("Model trained.")
```
Now that you've trained a model, you can use the Fairlearn package to compare its behavior for different sensitive feature values. In this case, you'll:
- Use the fairlearn **selection_rate** function to return the selection rate (percentage of positive predictions) for the overall population.
- Use **scikit-learn** metric functions to calculate overall accuracy, recall, and precision metrics.
- Use a **MetricFrame** to calculate selection rate, accuracy, recall, and precision for each age group in the **Age** sensitive feature. Note that a mix of **fairlearn** and **scikit-learn** metric functions are used to calculate the performance values.
```
from fairlearn.metrics import selection_rate, MetricFrame
from sklearn.metrics import accuracy_score, recall_score, precision_score
# Get predictions for the withheld test data
y_hat = diabetes_model.predict(X_test)
# Get overall metrics
print("Overall Metrics:")
# Get selection rate from fairlearn
overall_selection_rate = selection_rate(y_test, y_hat) # Get selection rate from fairlearn
print("\tSelection Rate:", overall_selection_rate)
# Get standard metrics from scikit-learn
overall_accuracy = accuracy_score(y_test, y_hat)
print("\tAccuracy:", overall_accuracy)
overall_recall = recall_score(y_test, y_hat)
print("\tRecall:", overall_recall)
overall_precision = precision_score(y_test, y_hat)
print("\tPrecision:", overall_precision)
# Get metrics by sensitive group from fairlearn
print('\nMetrics by Group:')
metrics = {'selection_rate': selection_rate,
'accuracy': accuracy_score,
'recall': recall_score,
'precision': precision_score}
group_metrics = MetricFrame(metrics=metrics,
y_true=y_test,
y_pred=y_hat,
sensitive_features=S_test['Age'])
print(group_metrics.by_group)
```
From these metrics, you should be able to discern that a larger proportion of the older patients are predicted to be diabetic. *Accuracy* should be more or less equal for the two groups, but a closer inspection of *precision* and *recall* indicates some disparity in how well the model predicts for each age group.
In this scenario, consider *recall*. This metric indicates the proportion of positive cases that were correctly identified by the model. In other words, of all the patients who are actually diabetic, how many did the model find? The model does a better job of this for patients in the older age group than for younger patients.
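The per-group numbers can also be reproduced by hand, which makes the definitions concrete. The sketch below uses small made-up arrays (not the diabetes data) and computes selection rate and recall for each group directly, matching what `MetricFrame` reports:

```python
import numpy as np

# Toy data: true labels, predictions, and a sensitive feature per row
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])
group = np.array(['Over 50', '50 or younger', 'Over 50', 'Over 50',
                  '50 or younger', 'Over 50', '50 or younger', 'Over 50'])

stats = {}
for g in np.unique(group):
    mask = group == g
    yt, yp = y_true[mask], y_pred[mask]
    sel = yp.mean()              # selection rate: share predicted positive
    rec = yp[yt == 1].mean()     # recall: share of true positives found
    stats[str(g)] = (sel, rec)
    print(f"{g}: selection_rate={sel:.2f}, recall={rec:.2f}")
```

In this toy example the model finds every diabetic patient over 50 but none of the younger ones, which is the kind of recall disparity the dashboard surfaces.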
It's often easier to compare metrics visually. To do this, you'll use the Fairlearn fairness dashboard:
1. Run the cell below to generate a dashboard from the model you created previously.
2. When the widget is displayed, use the **Get started** link to start configuring your visualization.
3. Select the sensitive features you want to compare (in this case, there's only one: **Age**).
4. Select the model performance metric you want to compare (in this case, it's a binary classification model so the options are *Accuracy*, *Balanced accuracy*, *Precision*, and *Recall*). Start with **Recall**.
5. Select the type of fairness comparison you want to view. Start with **Demographic parity difference**.
6. View the dashboard visualization, which shows:
- **Disparity in performance** - how the selected performance metric compares for the subpopulations, including *underprediction* (false negatives) and *overprediction* (false positives).
- **Disparity in predictions** - A comparison of the number of positive cases per subpopulation.
7. Edit the configuration to compare the predictions based on different performance and fairness metrics.
```
from raiwidgets import FairnessDashboard
# View this model in Fairlearn's fairness dashboard, and see the disparities which appear:
FairnessDashboard(sensitive_features=S_test,
y_true=y_test,
y_pred={"diabetes_model": diabetes_model.predict(X_test)})
```
The results show a much higher selection rate for patients over 50 than for younger patients. However, in reality, age is a genuine factor in diabetes, so you would expect more positive cases among older patients.
If we base model performance on *accuracy* (in other words, the percentage of predictions the model gets right), then it seems to work more or less equally for both subpopulations. However, based on the *precision* and *recall* metrics, the model tends to perform better for patients who are over 50 years old.
Let's see what happens if we exclude the **Age** feature when training the model.
```
# Separate features and labels
ageless = features.copy()
ageless.remove('Age')
X2, y2 = data[ageless].values, data['Diabetic'].values
# Split data into training set and test set
X_train2, X_test2, y_train2, y_test2, S_train2, S_test2 = train_test_split(X2, y2, S, test_size=0.20, random_state=0, stratify=y2)
# Train a classification model
print("Training model...")
ageless_model = DecisionTreeClassifier().fit(X_train2, y_train2)
print("Model trained.")
# View this model in Fairlearn's fairness dashboard, and see the disparities which appear:
FairnessDashboard(sensitive_features=S_test2,
y_true=y_test2,
y_pred={"ageless_diabetes_model": ageless_model.predict(X_test2)})
```
Explore the model in the dashboard.
When you review *recall*, note that the disparity has reduced, but the overall recall has also reduced because the model now significantly underpredicts positive cases for older patients. Even though **Age** was not a feature used in training, the model still exhibits some disparity in how well it predicts for older and younger patients.
In this scenario, simply removing the **Age** feature slightly reduces the disparity in *recall*, but increases the disparity in *precision* and *accuracy*. This underlines one of the key difficulties in applying fairness to machine learning models - you must be clear about what *fairness* means in a particular context, and optimize for that.
## Register the model and upload the dashboard data to your workspace
You've trained the model and reviewed the dashboard locally in this notebook; but it might be useful to register the model in your Azure Machine Learning workspace and create an experiment to record the dashboard data so you can track and share your fairness analysis.
Let's start by registering the original model (which included **Age** as a feature).
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
```
from azureml.core import Workspace, Experiment, Model
import joblib
import os
# Load the Azure ML workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
# Save the trained model
model_file = 'diabetes_model.pkl'
joblib.dump(value=diabetes_model, filename=model_file)
# Register the model
print('Registering model...')
registered_model = Model.register(model_path=model_file,
model_name='diabetes_classifier',
workspace=ws)
model_id= registered_model.id
print('Model registered.', model_id)
```
Now you can use the FairLearn package to create binary classification group metric sets for one or more models, and use an Azure Machine Learning experiment to upload the metrics.
> **Note**: This may take a while, and may result in some warning messages (which you can ignore). When the experiment has completed, the dashboard data will be downloaded and displayed to verify that it was uploaded successfully.
```
from fairlearn.metrics._group_metric_set import _create_group_metric_set
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
# Create a dictionary of model(s) you want to assess for fairness
sf = { 'Age': S_test.Age}
ys_pred = { model_id:diabetes_model.predict(X_test) }
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=ys_pred,
sensitive_features=sf,
prediction_type='binary_classification')
exp = Experiment(ws, 'mslearn-diabetes-fairness')
print(exp)
run = exp.start_logging()
# Upload the dashboard to Azure Machine Learning
try:
dashboard_title = "Fairness insights of Diabetes Classifier"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
# To test the dashboard, you can download it
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
print(downloaded_dict)
finally:
run.complete()
```
The preceding code downloaded the metrics generated in the experiment just to confirm that it completed successfully. The real benefit of uploading the metrics to an experiment is that you can now view the FairLearn dashboard in Azure Machine Learning studio.
Run the cell below to see the experiment details, and click the **View Run details** link in the widget to see the run in Azure Machine Learning studio. Then view the **Fairness** tab of the experiment run to view the dashboard for the fairness ID assigned to the metrics you uploaded, which behaves the same way as the widget you viewed previously in this notebook.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
You can also find the fairness dashboard by selecting a model in the **Models** page of Azure Machine Learning studio and reviewing its **Fairness** tab. This enables your organization to maintain a log of fairness analysis for the models you train and register.
## Mitigate unfairness in the model
Now that you've analyzed the model for fairness, you can use any of the *mitigation* techniques supported by the FairLearn package to find a model that balances predictive performance and fairness.
In this exercise, you'll use the **GridSearch** feature, which trains multiple models in an attempt to minimize the disparity of predictive performance for the sensitive features in the dataset (in this case, the age groups). You'll optimize the models by applying the **EqualizedOdds** parity constraint, which tries to ensure that models exhibit similar true and false positive rates for each sensitive feature grouping.
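Equalized odds compares the true positive rate (recall) and the false positive rate across groups. On hypothetical predictions (not this notebook's model), the per-group rates can be computed directly:

```python
import numpy as np

def rates(y_true, y_pred):
    """True positive rate and false positive rate of binary predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return y_pred[y_true == 1].mean(), y_pred[y_true == 0].mean()

# Hypothetical (y_true, y_pred) pairs for two sensitive-feature groups
groups = {'under 50': ([1, 1, 0, 0, 0], [1, 0, 0, 0, 1]),
          '50 and over': ([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])}
group_rates = {name: rates(yt, yp) for name, (yt, yp) in groups.items()}
for name, (tpr, fpr) in group_rates.items():
    print(f"{name}: TPR={tpr:.2f}, FPR={fpr:.2f}")
# Equalized odds asks for both rates to match across groups;
# here neither does, so the constraint is violated.
```

GridSearch explores models that trade a little overall performance for smaller gaps in these two rates.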
> *This may take some time to run*
```
from fairlearn.reductions import GridSearch, EqualizedOdds
import joblib
import os
print('Finding mitigated models...')
# Train multiple models
sweep = GridSearch(DecisionTreeClassifier(),
constraints=EqualizedOdds(),
grid_size=20)
sweep.fit(X_train, y_train, sensitive_features=S_train.Age)
models = sweep.predictors_
# Save the models and get predictions from them (plus the original unmitigated one for comparison)
model_dir = 'mitigated_models'
os.makedirs(model_dir, exist_ok=True)
model_name = 'diabetes_unmitigated'
print(model_name)
joblib.dump(value=diabetes_model, filename=os.path.join(model_dir, '{0}.pkl'.format(model_name)))
predictions = {model_name: diabetes_model.predict(X_test)}
i = 0
for model in models:
i += 1
model_name = 'diabetes_mitigated_{0}'.format(i)
print(model_name)
joblib.dump(value=model, filename=os.path.join(model_dir, '{0}.pkl'.format(model_name)))
predictions[model_name] = model.predict(X_test)
```
Now you can use the FairLearn dashboard to compare the mitigated models:
Run the following cell and then use the wizard to visualize **Age** by **Recall**.
```
FairnessDashboard(sensitive_features=S_test,
y_true=y_test,
y_pred=predictions)
```
The models are shown on a scatter plot. You can compare the models by measuring the disparity in predictions (in other words, the selection rate) or the disparity in the selected performance metric (in this case, *recall*). In this scenario, we expect disparity in selection rates (because we know that age *is* a factor in diabetes, with more positive cases in the older age group). What we're interested in is the disparity in predictive performance, so select the option to measure **Disparity in recall**.
The chart shows clusters of models with the overall *recall* metric on the X axis, and the disparity in recall on the Y axis. Therefore, the ideal model (with high recall and low disparity) would be at the bottom right corner of the plot. You can choose the right balance of predictive performance and fairness for your particular needs, and select an appropriate model to see its details.
An important point to reinforce is that applying fairness mitigation to a model is a trade-off between overall predictive performance and disparity across sensitive feature groups - generally you must sacrifice some overall predictive performance to ensure that the model predicts fairly for all segments of the population.
> **Note**: Viewing the *precision* metric may result in a warning that precision is being set to 0.0 due to no predicted samples - you can ignore this.
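The model-selection step described above can be mechanized. Given each swept model's overall recall and recall disparity, one simple rule (using hypothetical numbers, not this sweep's actual output) picks the lowest-disparity model whose recall stays acceptable:

```python
import numpy as np

# Hypothetical (overall recall, recall disparity) for each swept model
recalls = np.array([0.90, 0.85, 0.80, 0.75])
disparities = np.array([0.30, 0.20, 0.08, 0.05])

min_recall = 0.78  # minimum acceptable overall recall
candidates = np.where(recalls >= min_recall)[0]
best = candidates[np.argmin(disparities[candidates])]
print(best, recalls[best], disparities[best])  # model 2: recall 0.80, disparity 0.08
```

The threshold encodes how much overall performance you are willing to sacrifice; any other rule (e.g. a weighted score) expresses a different fairness/performance trade-off.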
## Upload the mitigation dashboard metrics to Azure Machine Learning
As before, you might want to keep track of your mitigation experimentation. To do this, you can:
1. Register the models found by the GridSearch process.
2. Compute the performance and disparity metrics for the models.
3. Upload the metrics in an Azure Machine Learning experiment.
```
# Register the models
registered_model_predictions = dict()
for model_name, prediction_data in predictions.items():
model_file = os.path.join(model_dir, model_name + ".pkl")
registered_model = Model.register(model_path=model_file,
model_name=model_name,
workspace=ws)
registered_model_predictions[registered_model.id] = prediction_data
# Create a group metric set for binary classification based on the Age feature for all of the models
sf = { 'Age': S_test.Age}
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=registered_model_predictions,
sensitive_features=sf,
prediction_type='binary_classification')
exp = Experiment(ws, "mslearn-diabetes-fairness")
print(exp)
run = exp.start_logging()
RunDetails(run).show()
# Upload the dashboard to Azure Machine Learning
try:
dashboard_title = "Fairness Comparison of Diabetes Models"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
finally:
run.complete()
```
> **Note**: A warning that precision is being set to 0.0 due to no predicted samples may be displayed - you can ignore this.
When the experiment has finished running, click the **View Run details** link in the widget to view the run in Azure Machine Learning studio (you may need to scroll past the initial output to see the widget), and view the FairLearn dashboard on the **fairness** tab.
```
import sys
import os
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import grad
# device = torch.device('cuda')
device = torch.device('cpu')
sys.path.append('utils_discrete')
from utils_discrete import preprocess_data_discrete_identification, plot_results_discrete_identification
# Load the dataset and preprocess it
data_path = 'data/burgers_shock.mat'
N0 = 200
N1 = 200
noise = 0.0
idx_t0 = 10
idx_t1 = 90
x, t, u_exact, lb, ub, dt, q, x0, u0, x1, u1, IRK_alphas, IRK_betas = preprocess_data_discrete_identification(data_path, idx_t0, idx_t1,
N0, N1, noise)
x0 = torch.tensor(x0, dtype = torch.float, requires_grad = True, device = device)
u0 = torch.tensor(u0, dtype = torch.float, requires_grad = True, device = device)
x1 = torch.tensor(x1, dtype = torch.float, requires_grad = True, device = device)
u1 = torch.tensor(u1, dtype = torch.float, requires_grad = True, device = device)
dt = torch.tensor(dt, dtype = torch.float, requires_grad = True, device = device)
IRK_alphas = torch.tensor(IRK_alphas, dtype = torch.float, requires_grad = True, device = device)
IRK_betas = torch.tensor(IRK_betas, dtype = torch.float, requires_grad = True, device = device)
class NeuralNet(nn.Module):
def __init__(self, layers, lb, ub):
super().__init__()
self.lb = torch.tensor(lb, dtype = torch.float, device = device)
self.ub = torch.tensor(ub, dtype = torch.float, device = device)
# Layer module
self.layers = nn.ModuleList()
# Make the neural network
input_dim = layers[0]
for output_dim in layers[1:]:
self.layers.append(nn.Linear(input_dim, output_dim))
nn.init.xavier_normal_(self.layers[-1].weight)
input_dim = output_dim
def forward(self, X):
x = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
for layer in self.layers[:-1]:
x = torch.tanh(layer(x))
outputs = self.layers[-1](x)
return outputs
class PINN:
def __init__(self, dt, x0, u0, x1, u1, lb, ub, IRK_alphas, IRK_betas, nu, layers = [2, 20, 20, 20, 1], lr = 1e-2, device = torch.device('cpu')):
self.nu = nu
self.dt = dt
self.x0 = x0
self.u0 = u0
self.x1 = x1
self.u1 = u1
self.IRK_alphas = IRK_alphas
self.IRK_betas = IRK_betas
self.net_u = NeuralNet(layers, lb, ub)
self.net_u.to(device)
        # Physical parameters to optimize
self.lmbda = nn.ParameterList()
self.lmbda.append(nn.Parameter(0.0*torch.ones(1, device = device)))
self.lmbda.append(nn.Parameter(-6.0*torch.ones(1, device = device)))
params = list(self.lmbda) + list(self.net_u.parameters())
self.optimizer = torch.optim.Adam(params, lr = lr, betas=(0.9, 0.999))
def forward_gradients(self, y, x):
temp1 = torch.ones(y.size(), device = device, requires_grad = True)
temp2 = torch.ones(x.size(), device = device)
g = grad(y, x, grad_outputs = temp1, create_graph = True)[0]
dy = grad(g, temp1, grad_outputs = temp2, create_graph = True)[0]
del temp1, temp2
return dy
def net_u0(self):
u = self.net_u(self.x0)
u_x = self.forward_gradients(u, self.x0)
u_xx = self.forward_gradients(u_x, self.x0)
f = - self.lmbda[0]*u*u_x + torch.exp(self.lmbda[1])*u_xx
u0 = u - self.dt*torch.matmul(f,self.IRK_alphas.T)
return u0
def net_u1(self):
u = self.net_u(self.x1)
u_x = self.forward_gradients(u, self.x1)
u_xx = self.forward_gradients(u_x, self.x1)
f = - self.lmbda[0]*u*u_x + torch.exp(self.lmbda[1])*u_xx
u1 = u - self.dt*torch.matmul(f,(self.IRK_alphas-self.IRK_betas).T)
return u1
def loss_f(self, u0, u0_pred, u1, u1_pred):
return torch.mean(torch.square(u0-u0_pred)) + torch.mean(torch.square(u1-u1_pred))
def optimizer_step(self):
        # Zero the grads for the model parameters in the optimizer
self.optimizer.zero_grad()
# Compute the losses and backpropagate the losses
loss = self.loss_f(self.u0, self.net_u0(), self.u1, self.net_u1())
loss.backward()
        # Take one iteration step with the given optimizer
self.optimizer.step()
return loss.item()
def fit(self, epochs = 1):
for epoch in range(epochs):
loss_value = self.optimizer_step()
if epoch % 100 == 99:
print(f'Epoch {epoch+1}: Training Loss = {loss_value}')
if epoch % 1000 == 999:
print(f'Lambda 1 = {self.lmbda[0].item()}, Lambda 2 = {torch.exp(self.lmbda[1]).item()}\n')
def predict(self):
return self.net_u0(), self.net_u1()
pinn = PINN(dt, x0, u0, x1, u1, lb, ub, IRK_alphas, IRK_betas, nu = (0.01/np.pi), layers = [1, 50, 50, 50, q], lr = 1e-3)
pinn.fit(epochs = 80000)
# Plot the results
%matplotlib widget
device = torch.device('cpu')
plot_results_discrete_identification(x, t, x0.to(device).detach().numpy(), x1.to(device).detach().numpy(), u_exact,
u0.to(device).detach().numpy(), u1.to(device).detach().numpy(),
idx_t0, idx_t1, lb, ub,
pinn.lmbda[0].to(device).detach().item(), torch.exp(pinn.lmbda[1]).to(device).detach().item())
```
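The `forward_gradients` method above uses the double-backward trick: differentiating once with respect to a dummy cotangent that itself requires grad, then differentiating the result with respect to that cotangent, yields a forward-mode Jacobian-vector product from two reverse-mode passes. A standalone check on a function with a known derivative:

```python
import torch
from torch.autograd import grad

# y = x**3 elementwise, so dy/dx = 3*x**2 for each sample
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 3

v = torch.ones_like(y, requires_grad=True)            # dummy cotangent
g = grad(y, x, grad_outputs=v, create_graph=True)[0]  # g = J^T v, a function of v
dy = grad(g, v, grad_outputs=torch.ones_like(x), create_graph=True)[0]  # J @ 1
print(dy)  # equals 3*x**2, i.e. [3., 12., 27.]
```

For an elementwise map the Jacobian is diagonal, so this recovers exactly the per-sample derivatives that `net_u0` and `net_u1` need for the spatial terms `u_x` and `u_xx`.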
```
from pflow.particle_filter import BootstrapFilter, ObservationBase, FilterState, LikelihoodMethodBase, ProposalMethodBase
from pflow.base import BaseReweight
from pflow.optimal_transport.transportation_plan import Transport
from pflow.resampling.systematic import SystematicResampling
from pflow.optimal_transport.recentering import LearnBest, IncrementalLearning
from joblib import Parallel, delayed
import pykalman
import numpy as np
import torch
import tqdm
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
```
### SSM definition
```
class ProposalMethod(ProposalMethodBase):
def __init__(self, bias, state_size):
locs = torch.zeros(state_size, requires_grad=False) + bias
self._dist = torch.distributions.MultivariateNormal(locs, scale_tril=torch.eye(state_size))
def apply(self, state, _observation):
x = state.x
sample = self._dist.rsample((x.shape[0],))
x_proposed = 0.5 * x + sample
return FilterState(x=x_proposed, logw=state.logw, n=state.n, loglik=state.loglik)
class Observation(ObservationBase):
__slots__ = ['y']
def __init__(self, y):
self.y = y
class LikelihoodMethod(LikelihoodMethodBase):
def __init__(self, obs_matrix):
locs = torch.zeros(obs_matrix.shape[0], requires_grad=False)
self._obs_matrix = obs_matrix
self._dist = torch.distributions.MultivariateNormal(loc=locs,
                                                            scale_tril=torch.eye(obs_matrix.shape[0]))
def apply(self, state, observation, log=True):
distance = (self._obs_matrix @ state.x.T).T - observation.y.unsqueeze(0)
log_probs = self._dist.log_prob(distance)
if log:
return log_probs
else:
return log_probs.exp()
class NoResampling(BaseReweight):
def apply(self, x, w, logw):
return x, logw
_ = torch.random.manual_seed(0)
import math
import random
random.seed(42)
def autoregressive(transition_matrix, observation_matrix, seed=42):
np.random.seed(seed)
x = np.zeros(transition_matrix.shape[1])
yield (observation_matrix @ x).squeeze() + np.random.randn(observation_matrix.shape[0])
while True:
x = (transition_matrix @ x).squeeze() + np.random.randn(x.shape[0])
yield (observation_matrix @ x).squeeze() + np.random.randn(observation_matrix.shape[0])
# 1D
def ar_pf(values,
lamda,
observation_matrix,
epsilon=0.5,
n=100,
seed=1234,
scaling=0.75,
reach=None,
min_neff=0.5,
resampling_method=None):
states = []
observations = []
torch.random.manual_seed(seed)
initial_x = torch.zeros((n, observation_matrix.shape[1]))
initial_x.requires_grad=False
torch_observation = torch.tensor(observation_matrix.astype(np.float32))
initial_w = torch.full((n,), 1/n, requires_grad=True)
initial_log_lik = torch.tensor(0., requires_grad=True)
state = FilterState(x=initial_x, logw=initial_w.log(), n=n, loglik=initial_log_lik)
likelihood_method = LikelihoodMethod(torch_observation)
if resampling_method is None:
if epsilon > 0:
resampling_method = Transport(epsilon=epsilon, scaling=scaling, reach=reach)
else:
resampling_method = SystematicResampling()
boot = BootstrapFilter(proposal_method=ProposalMethod(lamda, observation_matrix.shape[1]),
likelihood_method=likelihood_method,
reweighting_method=resampling_method,
min_neff=min_neff)
n_obs = 0
for val in values:
n_obs += 1
obs = Observation(torch.tensor(val, requires_grad=False))
state = boot.update(state, obs)
observations.append(obs)
states.append(state)
state = boot.predict(state, None)
return -state.loglik / n_obs, states, observations
linspace = np.linspace(-1, 1, 5)
n_experiments = 100
linspace
T = 150
transition_matrix = np.array([[0.5]])
observation_matrix = np.array([[1.]])
observations_lists = []
for i in range(2):
autoregressive_gen = autoregressive(transition_matrix, observation_matrix, seed=i)
ar = [next(autoregressive_gen) for _ in range(T)]
observations_lists.append(np.asanyarray(ar[:]).astype(np.float32))
data = observations_lists[0]
N = 50
sys_ll = []
sys_grads = []
for value in tqdm.tqdm(linspace):
lamda_tensor = torch.tensor(value, requires_grad=True)
gradient_for_seed = []
likelihood_for_seed = []
for i in range(n_experiments):
res = ar_pf(data, lamda_tensor, observation_matrix, 0., n=N, min_neff=0.5, seed=12345 + i)
likelihood_for_seed.append(-res[0].detach().cpu().numpy().sum()*T)
grad = torch.autograd.grad(-res[0] * T, lamda_tensor)[0].detach().cpu().numpy()
gradient_for_seed.append(grad.sum())
sys_ll.append(likelihood_for_seed)
sys_grads.append(gradient_for_seed)
reg_ll = {}
reg_grads = {}
for eps in [0.25, 0.5, 0.75]:
reg_ll_list = []
reg_grad_list = []
for value in tqdm.tqdm(linspace):
lamda_tensor = torch.tensor(value, requires_grad=True)
gradient_for_seed = []
likelihood_for_seed = []
for i in range(n_experiments):
res = ar_pf(data, lamda_tensor, observation_matrix, eps, scaling=0.33, n=N, min_neff=0.5, seed=12345 + i)
likelihood_for_seed.append(-res[0].detach().cpu().numpy().sum()*T)
grad = torch.autograd.grad(-res[0] * T, lamda_tensor)[0].detach().cpu().numpy()
gradient_for_seed.append(grad.sum())
reg_ll_list.append(likelihood_for_seed)
reg_grad_list.append(gradient_for_seed)
reg_ll[eps] = reg_ll_list
reg_grads[eps] = reg_grad_list
learned_ll = {}
learned_grads = {}
eps = 0.5
for start_from_systematic, start_from_regularised in [[True, False], [False, True]]:
learned_ll_list = []
learned_grad_list = []
for value in tqdm.tqdm(linspace):
lamda_tensor = torch.tensor(value, requires_grad=True)
gradient_for_seed = []
likelihood_for_seed = []
for i in range(n_experiments):
resampling_method = LearnBest(0.1, {'scaling': 0.33}, learning_rate=1., optimizer_kwargs={},
schedule_kwargs={'gamma': 0.75, 'step_size':1}, n_steps=5,
start_from_regularised=start_from_regularised, start_from_systematic=start_from_systematic,
jitter=0.,
optim_class_name='SGD',
scheduler_class_name='StepLR')
res = ar_pf(data, lamda_tensor, observation_matrix,None, scaling=None, n=N, min_neff=0.5, seed=12345 + i, resampling_method=resampling_method)
likelihood_for_seed.append(-res[0].detach().cpu().numpy().sum()*T)
grad = torch.autograd.grad(-res[0] * T, lamda_tensor)[0].detach().cpu().numpy()
gradient_for_seed.append(grad.sum())
learned_ll_list.append(likelihood_for_seed)
learned_grad_list.append(gradient_for_seed)
learned_ll[eps] = learned_ll_list
learned_grads[eps] = learned_grad_list
reg_ll = {k: pd.DataFrame(data=np.array(v).T, columns=linspace) for k, v in reg_ll.items()}
reg_grads = {k: pd.DataFrame(data=np.array(v).T, columns=linspace) for k, v in reg_grads.items()}
ll_for_kalman = []
grads_for_kalman = []
for value in tqdm.tqdm(linspace):
kf = pykalman.KalmanFilter(observation_covariance=[[1.]],
transition_covariance=[[1.]],
transition_matrices=[[0.5]],
transition_offsets=[value],
initial_state_mean=[0.],
initial_state_covariance=[[0.]])
kf_eps = pykalman.KalmanFilter(observation_covariance=[[1.]],
transition_covariance=[[1.]],
transition_matrices=[[0.5]],
transition_offsets=[value + 1e-4],
initial_state_mean=[0.],
initial_state_covariance=[[0.]])
ll = kf.loglikelihood(data)
ll_eps = kf_eps.loglikelihood(data)
ll_for_kalman.append(ll)
grads_for_kalman.append(1e4*(ll_eps-ll))
sys_ll = np.array(sys_ll).T
sys_grads = np.array(sys_grads).T
sys_grads_df = pd.DataFrame(data=sys_grads, columns=linspace).describe().T
sys_grads_df['var'] = sys_grads_df['std']**2
sys_grads_df = sys_grads_df[['mean', 'var']]
sys_grads_df.columns = pd.MultiIndex.from_product([['systematic'], sys_grads_df.columns])
sys_ll_df = pd.DataFrame(data=sys_ll, columns=linspace).describe().T
sys_ll_df['var'] = sys_ll_df['std']**2
sys_ll_df = sys_ll_df[['mean', 'var']]
sys_ll_df.columns = pd.MultiIndex.from_product([['systematic'], sys_ll_df.columns])
grads_for_kalman_df = pd.DataFrame(index=linspace, data=grads_for_kalman, columns=['KF'])
grads_for_kalman_df.columns = pd.MultiIndex.from_tuples([('KF', 'mean')])
lls_for_kalman_df = pd.DataFrame(index=linspace, data=ll_for_kalman, columns=['KF'])
lls_for_kalman_df.columns = pd.MultiIndex.from_tuples([('KF', 'mean')])
reg_ll_df = pd.concat( reg_ll, axis=1 ).describe().T
reg_ll_df['var'] = reg_ll_df['std']**2
reg_ll_df = reg_ll_df[['mean', 'var' ]].unstack(0)
reg_ll_df = reg_ll_df.swaplevel(0, 1, axis=1).sort_index(axis=1)
reg_ll_df = reg_ll_df.rename({0.25: '$\epsilon = 0.25$', 0.5: '$\epsilon = 0.5$', 0.75: '$\epsilon = 0.75$'}, level=0, axis=1)
reg_grad_df = pd.concat( reg_grads, axis=1 ).describe().T
reg_grad_df['var'] = reg_grad_df['std']**2
reg_grad_df = reg_grad_df[['mean', 'var' ]].unstack(0)
reg_grad_df = reg_grad_df.swaplevel(0, 1, axis=1).sort_index(axis=1)
reg_grad_df = reg_grad_df.rename({0.25: '$\epsilon = 0.25$', 0.5: '$\epsilon = 0.5$', 0.75: '$\epsilon = 0.75$'}, level=0, axis=1)
likelihoods = pd.concat([ lls_for_kalman_df, reg_ll_df, sys_ll_df ], axis = 1).round(1)
gradients_df = pd.concat([ grads_for_kalman_df, reg_grad_df, sys_grads_df ], axis = 1).round(1)
for groupname, group in likelihoods.groupby(level=1, axis=1, squeeze=True):
print(group.droplevel(1, axis=1).to_latex(escape=False))
print('\n')
for groupname, group in gradients_df.groupby(level=1, axis=1, squeeze=True):
print(group.droplevel(1, axis=1).to_latex(escape=False))
print('\n')
pd.concat( reg_grads, axis=1 ).describe().T
pd.concat( reg_ll, axis=1 ).describe().T
pd.Series(index=linspace, data=ll_for_kalman)
reg_grads[0.25].describe().T
# Grads for regularised transport
pd.DataFrame(data=sys_grads, columns=linspace).describe().T
# Grads for systematic resampling
gradients_for_learnt = {}
ll_for_learnt = {}
for start_from_systematic, start_from_regularised in [[True, False], [False, True]]:
gradients = []
likelihoods = []
for data in tqdm.tqdm(observations_lists):
gradients_data = []
likelihoods_data = []
for value in linspace:
lamda_tensor = torch.tensor(value, requires_grad=True)
gradient_for_seed = []
likelihood_for_seed = []
for i in range(n_experiments):
learnt_resampling_method = LearnBest(0.1, {'scaling': 0.5}, learning_rate=1., optimizer_kwargs={},
schedule_kwargs={'gamma': 0.9, 'step_size':1}, n_steps=5,
                                             start_from_regularised=start_from_regularised, start_from_systematic=start_from_systematic,
jitter=0.,
optim_class_name='SGD',
scheduler_class_name='StepLR')
res = ar_pf(data, lamda_tensor, observation_matrix, None, scaling=None, n=15, min_neff=0.5, seed=12345 + i, resampling_method=learnt_resampling_method)
likelihood_for_seed.append(-res[0].detach().cpu().numpy().sum()*T)
grad = torch.autograd.grad(-res[0] * T, lamda_tensor)[0].detach().cpu().numpy()
gradient_for_seed.append(grad.sum())
gradients_data.append(gradient_for_seed)
likelihoods_data.append(likelihood_for_seed)
likelihoods.append(likelihoods_data)
gradients.append(gradients_data)
gradients_for_learnt[(start_from_systematic, start_from_regularised)] = gradients
ll_for_learnt[(start_from_systematic, start_from_regularised)] = likelihoods
ll_for_kalman = []
grads_for_kalman = []
for data in tqdm.tqdm(observations_lists):
gradients_data = []
likelihoods_data = []
for value in linspace:
kf = pykalman.KalmanFilter(observation_covariance=[[1.]],
transition_covariance=[[1.]],
transition_matrices=[[0.5]],
transition_offsets=[value],
initial_state_mean=[0.],
initial_state_covariance=[[0.]])
kf_eps = pykalman.KalmanFilter(observation_covariance=[[1.]],
transition_covariance=[[1.]],
transition_matrices=[[0.5]],
transition_offsets=[value + 1e-4],
initial_state_mean=[0.],
initial_state_covariance=[[0.]])
ll = kf.loglikelihood(data)
ll_eps = kf_eps.loglikelihood(data)
likelihoods_data.append(ll)
gradients_data.append(1e4*(ll_eps-ll))
ll_for_kalman.append(likelihoods_data)
grads_for_kalman.append(gradients_data)
fig, axes = plt.subplots(1, 2, figsize=(20, 10), sharex=True, sharey=True)
for i, ax in enumerate(axes.flatten()):
systematic_grad = gradients_for_systematic[15][ i ]
ax.plot(linspace, np.mean(systematic_grad, 1), label = 'systematic')
ax.plot(linspace, grads_for_kalman[1], label = 'KF', color = 'k', linestyle='--')
for eps, transport_gradients in gradients_for_transport.items():
ax.plot(linspace, np.mean(transport_gradients[i], 1), label = 'reg: {}'.format(eps))
# for (start_from_systematic, start_from_regularised), learnt_gradients in gradients_for_learnt.items():
# label = 'from systematic' if start_from_systematic else 'from regularised'
# ax.plot(linspace, np.mean(learnt_gradients[i], 1), label = label)
ax.axhline(0, linestyle='-', color='k')
_ = ax.legend()
fig, axes = plt.subplots(1, 2, figsize=(20, 10), sharex=True, sharey=True)
for i, ax in enumerate(axes.flatten()):
systematic_ll = ll_for_systematic[15][ i ]
ax.plot(linspace, np.mean(systematic_ll, 1), label = 'systematic')
ax.plot(linspace, ll_for_kalman[1], label = 'KF', color = 'k', linestyle='--')
# for eps, ll_transport in ll_for_transport.items():
# ax.plot(linspace, np.mean(ll_transport[i], 1), label = 'reg: {}'.format(eps), linewidth = 1)
for (start_from_systematic, start_from_regularised), learnt_ll in ll_for_learnt.items():
label = 'from systematic' if start_from_systematic else 'from regularised'
ax.plot(linspace, np.mean(learnt_ll[i], 1), label = label)
_ = ax.legend()
fig, axes = plt.subplots(1, 2, figsize=(20, 10), sharex=True, sharey=True)
sub_slice = slice(200, 300)
for i, ax in enumerate(axes.flatten()):
systematic_ll = ll_for_systematic[15][i]
ax.plot(linspace[sub_slice], np.mean(systematic_ll, 1)[sub_slice], label = 'systematic')
ax.plot(linspace[sub_slice], ll_for_kalman[1][sub_slice], label = 'KF', color = 'k', linestyle='--')
for eps, ll_transport in ll_for_transport.items():
ax.plot(linspace[sub_slice], np.mean(ll_transport[i], 1)[sub_slice], label = 'reg: {}'.format(eps), linewidth = 1)
_ = ax.legend()
```
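The `1e4*(ll_eps - ll)` expressions in the Kalman-filter cells above are forward-difference approximations of the log-likelihood gradient with step size `1e-4` (and `1e6` with step `1e-6` later on). In isolation, the estimator is just:

```python
import numpy as np

def forward_diff(f, x, h=1e-4):
    """Forward-difference gradient estimate: (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Sanity check against a known derivative: d/dx sin(x) = cos(x)
g = forward_diff(np.sin, 0.0)
print(g)  # close to cos(0) = 1
```

These finite-difference Kalman gradients serve as the exact (differentiable-free) baseline that the particle-filter gradients - systematic, regularised transport, and learned - are compared against.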
### Let's learn
```
n_iter = 20
lr = 1.
lamda_tensor = torch.tensor(0., requires_grad=True)
lambda_values_sys = [0.]
autoregressive_gen = autoregressive(transition_matrix, observation_matrix, seed=5)
for _ in tqdm.trange(n_iter):
data = observations_lists[0]
temp_res = ar_pf(data, lamda_tensor, observation_matrix, 0., n=10, min_neff=0.5)
grad = torch.autograd.grad(temp_res[0], lamda_tensor)
lamda_tensor.data -= lr * grad[0]
lambda_values_sys.append(lamda_tensor.detach().cpu().numpy().sum())
n_iter = 20
lr = 1.
lamda_tensor = torch.tensor(0., requires_grad=True)
lambda_values_transport = [0.]
autoregressive_gen = autoregressive(transition_matrix, observation_matrix, seed=5)
for _ in tqdm.trange(n_iter):
ar = [next(autoregressive_gen) for _ in range(200)]
data = observations_lists[0]
temp_res = ar_pf(data, lamda_tensor, observation_matrix, 0.5, scaling=0.5, n=10, min_neff=0.5, seed=666)
grad = torch.autograd.grad(temp_res[0], lamda_tensor)
lamda_tensor.data -= lr * grad[0]
lambda_values_transport.append(lamda_tensor.detach().cpu().numpy().sum())
plt.plot(lambda_values_sys)
plt.plot(lambda_values_transport)
n_iter = 20
lr = 0.01
lamda_val = 0.
lamda_val_list = [lamda_val]
autoregressive_gen = autoregressive(transition_matrix, observation_matrix, seed=1)
for _ in tqdm.trange(n_iter):
ar = [next(autoregressive_gen) for _ in range(200)]
data = observations_lists[0]
kf = pykalman.KalmanFilter(observation_covariance=[[1.]],
transition_covariance=[[1.]],
transition_matrices=[[0.5]],
transition_offsets=[lamda_val],
initial_state_mean=[0.],
initial_state_covariance=[[0.]])
kf_eps = pykalman.KalmanFilter(observation_covariance=[[1.]],
transition_covariance=[[1.]],
transition_matrices=[[0.5]],
transition_offsets=[lamda_val + 1e-6],
initial_state_mean=[0.],
initial_state_covariance=[[0.]])
ll = -kf.loglikelihood(data)
ll_eps = -kf_eps.loglikelihood(data)
grad = (ll_eps - ll) * 1e6
lamda_val -= lr * grad
lamda_val_list.append(lamda_val)
plt.plot(lamda_val_list)
plt.plot(phi_tensors_ot, label = 'learn OT')
plt.plot(phi_tensors_sys, label = 'learn Sys')
plt.plot(phi_tensors_kf, label = 'learn KF')
plt.legend()
alpha = 0.42
d = 10
transition_matrix = np.asarray([[alpha ** (abs(i-j) + 1) for j in range(1, d+1)] for i in range(1, d+1)])
n_obs = 1
sparse_observation = np.eye(n_obs, d, dtype=float)
def weighted_avg_and_std(values, weights):
"""
Return the weighted average and standard deviation.
values, weights -- Numpy ndarrays with the same shape.
"""
average = np.average(values, weights=weights, axis=0)
# Fast and numerically precise:
variance = np.average((values-average)**2, weights=weights, axis=0)
return (average, np.sqrt(variance))
def plot_pf_component(pf_res, linspace, component, ax, label, show=None):
pf_state, pf_weight = zip(*[(l.x.detach().cpu().numpy(), l.w.detach().cpu().numpy()) for l in pf_res[1]])
pf_weight = np.stack(pf_weight, axis=0)
pf_state = np.stack(pf_state, axis=0)
pf_mean, pf_std = weighted_avg_and_std(pf_state[:, :, component].T, pf_weight.T)
ax.plot(linspace[: show], pf_mean[: show], label = label)
ax.fill_between(linspace[: show],
pf_mean[: show] - 2 * pf_std[: show],
pf_mean[: show] + 2 * pf_std[: show], alpha = 0.3)
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 5), sharex=True, sharey=True)
show = 100
linspace = np.arange(0, T)
axes[0, 0].plot(linspace[: show], ar_kf[0][:, 0].squeeze()[: show], label='Kalman Filter')
axes[0, 0].fill_between(linspace[: show],
ar_kf[0][:, 0].squeeze()[: show] - 2*np.sqrt(ar_kf[1][:, 0, 0])[: show],
ar_kf[0][:, 0].squeeze()[: show] + 2*np.sqrt(ar_kf[1][:, 0, 0])[: show], alpha = 0.3)
plot_pf_component(ar_10, linspace, 0, axes[1, 0], 'particle filter, eps = 0.1', show)
plot_pf_component(ar_100, linspace, 0, axes[1, 1], 'particle filter, eps = 1.00', show)
plot_pf_component(ar_systematic, linspace, 0, axes[0, 1], 'particle filter, systematic', show)
for ax in axes.flatten():
ax.legend(loc='upper right')
# axes[0].set_title('AR(1)')
# fig.savefig('AR_PF_optimalTransport.png')
linspace = np.linspace(0.25, 0.75, 250)
epsilons = [0.1, 0.15, 0.2, 0.25, 0.3, 0.5, 0.75, 1.]
results_for_smoothness_ot_dict = {}
results_for_smoothness_ot_grad_dict = {}
for eps in epsilons:
results_for_smoothness_ot_dict[eps] = []
results_for_smoothness_ot_grad_dict[eps] = []
for val in tqdm.tqdm(linspace):
phi_tensor = torch.tensor(val, requires_grad=True)
temp_res = ar_pf(observations, phi_tensor, log_noise, log_error, eps, n=100, scaling=0.9, min_neff=0.5, reach=None)
# temp_res = run_pf(flat, log_sigma_val, dt_tensor, eps, n=100, scaling=0.9, min_neff=0.5, reach=5.)
results_for_smoothness_ot_grad_dict[eps].append(torch.autograd.grad(temp_res[0], phi_tensor)[0].detach().cpu().numpy())
results_for_smoothness_ot_dict[eps].append(temp_res[0].detach().cpu().numpy())
results_for_smoothness_systematic_dict = {}
results_for_smoothness_systematic_grad_dict = {}
for n in [100]:
results_for_smoothness_systematic_dict[n] = []
results_for_smoothness_systematic_grad_dict[n] = []
for val in tqdm.tqdm(linspace):
phi_tensor = torch.tensor(val, requires_grad=True)
temp_res = ar_pf(observations, phi_tensor, log_noise, log_error, 0., n=100, min_neff=0.5)
results_for_smoothness_systematic_grad_dict[n].append(torch.autograd.grad(temp_res[0], phi_tensor)[0].detach().cpu().numpy())
results_for_smoothness_systematic_dict[n].append(temp_res[0].detach().cpu().numpy())
# results_for_learnt_ot_dict = {}
# eps = 0.1
# adam_kwargs = {'lr': 0.5}
# methods = {#'incremental': IncrementalLearning(eps, {'scaling': 0.5}, adam_kwargs, 4, 5),
# 'one-shot': LearnBest(eps, {'scaling': 0.75}, adam_kwargs, 10)}
# for name, method in methods.items():
# results_for_learnt_ot_dict[name] = []
# for val in tqdm.tqdm(linspace):
# log_sigma_tensor = torch.tensor(val, requires_grad=False)
# temp_res = run_pf(flat, log_sigma_tensor, dt_tensor, eps, n=250, reweighting_method=method)
# results_for_learnt_ot_dict[name].append(temp_res[0].detach().cpu().numpy())
results_for_kalman = []
grads_for_kalman = []
for val in tqdm.tqdm(linspace):
kf = pykalman.KalmanFilter(observation_covariance=[[0.01]],
transition_covariance=[[0.01]],
transition_matrices=[[val]],
initial_state_mean=[0.])
kf_eps = pykalman.KalmanFilter(observation_covariance=[[0.01]],
transition_covariance=[[0.01]],
transition_matrices=[[val + 1e-4]],
initial_state_mean=[0.])
ll = -kf.loglikelihood(observations)/len(observations)
ll_eps = -kf_eps.loglikelihood(observations)/len(observations)
results_for_kalman.append(ll)
grads_for_kalman.append(1e4*(ll_eps-ll))
def plot_gradient(linspace, values, gradients, k, ax, line):
x = np.take(linspace, k)
y = np.take(values, k)
v = np.take(gradients, k)
u = [1]*len(k)
ax.quiver(x, y, u, v, scale=20, zorder=3, color=line.get_color(),
width=0.007, headwidth=3., headlength=4.)
fig, ax = plt.subplots(ncols=1, figsize=(15, 4), sharey=False, sharex=False)
fig.suptitle('Likelihood function comparison', y = 1.01)
for eps, lst in results_for_smoothness_ot_dict.items():
if eps >= 0.25:
arr = np.asanyarray(lst)
ax.plot(linspace, arr - arr.min(), label=f'Biased OT: eps={eps}')
arr_sys = np.asanyarray(results_for_smoothness_systematic_dict[100])
ax.plot(linspace, arr_sys - arr_sys.min(), label=f'Systematic resampling', alpha=0.5)
arr_kf = np.asanyarray(results_for_kalman)
ax.plot(linspace, arr_kf - arr_kf.min(), label = 'Kalman filter', linestyle = '--')
# zoom = slice(0, 250)
# grads_locs = [25, 100, 175]
# for eps, lst in results_for_smoothness_ot_dict.items():
# l, = axes[1].plot(linspace[zoom], lst[zoom], label=f'Biased OT: eps={eps}')
# plot_gradient(linspace, lst, results_for_smoothness_ot_grad_dict[eps], grads_locs, axes[1], l)
# l, = axes[1].plot(linspace[zoom], results_for_smoothness_systematic_dict[100][zoom], label=f'Systematic resampling', alpha=0.5)
# plot_gradient(linspace, results_for_smoothness_systematic_dict[100], results_for_smoothness_systematic_grad_dict[100], grads_locs, axes[1], l)
# axes[1].plot(linspace[zoom], results_for_kalman[zoom], label = 'Kalman filter', linestyle = '--')
# for ax in axes.flatten():
ax.set_ylabel('$\\frac{-loglik}{n_{samples}}$', rotation=0, y=1, fontsize=15)
ax.set_xlabel('$\\phi$', rotation=0, y=1, fontsize=15)
ax.legend()
fig.savefig('ar_likelihood.png')
fig, ax = plt.subplots(figsize=(15, 4), sharey=False, sharex=True)
fig.suptitle('Likelihood function gradient comparison', y = 1.01)
for eps, grads in results_for_smoothness_ot_grad_dict.items():
if eps >= 0.25:
ax.plot(linspace, np.stack(grads), label = f'Biased OT: eps={eps}')
ax.plot(linspace, np.stack(results_for_smoothness_systematic_grad_dict[100]), label='Systematic Resampling')
ax.plot(linspace, np.stack(grads_for_kalman), label='KF', linestyle='--')
ax.legend()
fig.savefig('ar_likelihoodGradient.png')
fig, ax = plt.subplots(figsize=(15, 4), sharey=False, sharex=True)
fig.suptitle('Antiderivative (cumsum) gradient comparison', y = 1.01)
for eps, grads in results_for_smoothness_ot_grad_dict.items():
if eps >= 0.25:
ax.plot(linspace, np.cumsum(np.stack(grads)), label = f'Biased OT: eps={eps}')
ax.plot(linspace, np.cumsum(np.stack(results_for_smoothness_systematic_grad_dict[100])), label='Systematic Resampling')
ax.plot(linspace, np.cumsum(np.stack(grads_for_kalman)), label='KF', linestyle='--')
ax.legend()
fig.savefig('ar_likelihoodGradientCumsum.png')  # distinct name so the gradient figure above is not overwritten
```
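The Kalman-filter gradients collected in `grads_for_kalman` above are forward finite-difference estimates with step `1e-4` (the `1e4*(ll_eps-ll)` expression). A standalone sketch of that estimator, for a generic scalar function and purely for illustration:

```python
def forward_diff(f, x, h=1e-4):
    """Forward finite-difference estimate of f'(x): (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# d/dx x^2 at x = 3 is exactly 6; the estimate is accurate to O(h).
grad = forward_diff(lambda x: x * x, 3.0)
```

This mirrors the two Kalman filters built with transition matrices `val` and `val + 1e-4`: the gradient is the scaled difference of their negative log-likelihoods.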
# Ridge/LASSO polynomial regression with linear and random sampling
* Input variable space is constructed using random sampling/cluster pick/uniform sampling
* Linear fit is often inadequate, but higher-order polynomial fits often lead to overfitting, i.e. the model learns spurious, flawed relationships between input and output
* Ridge and LASSO regression are used with varying model complexity (degree of polynomial)
* Model score is obtained on a test set, and the average score over a number of runs is compared for linear and random sampling
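Before the full scikit-learn pipeline below, the shrinkage effect of the L2 penalty can be illustrated with the closed-form ridge solution. This is a pure-NumPy sketch on synthetic data, not the pipeline used in this notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = X @ np.array([3., -2., 0.5, 0., 1.]) + rng.normal(scale=0.1, size=40)

def ridge_coefs(X, y, alpha):
    """Closed-form ridge solution: (X^T X + alpha * I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# A stronger penalty shrinks the coefficient norm, keeping the model bounded.
small_penalty_norm = np.linalg.norm(ridge_coefs(X, y, 0.01))
large_penalty_norm = np.linalg.norm(ridge_coefs(X, y, 100.0))
```

The same bounded-coefficients idea is what `RidgeCV`/`LassoCV` provide in the pipelines below, with the penalty strength chosen by cross-validation.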
### Import libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
### Global variables for the program
```
N_points = 41 # Number of points for constructing function
x_min = 1 # Min of the range of x (feature)
x_max = 10 # Max of the range of x (feature)
noise_mean = 0 # Mean of the Gaussian noise adder
noise_sd = 2 # Std.Dev of the Gaussian noise adder
ridge_alpha = tuple([10**(x) for x in range(-3,0,1) ]) # Alpha (regularization strength) of ridge regression
lasso_eps = 0.001
lasso_nalpha=20
lasso_iter=1000
degree_min = 2
degree_max = 8
```
### Generate feature and output vector following a non-linear function
The ground truth (originating) function is
$$ y = f(x) = x^2 \sin(x)\, e^{-0.1x} + \psi(x) $$
where $\psi(x)$ is Gaussian noise:
$$ \psi(x) \sim \mathcal{N}(\mu, \sigma^2), \qquad f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}} $$
```
x_smooth = np.array(np.linspace(x_min,x_max,1001))
# Linearly spaced sample points
X=np.array(np.linspace(x_min,x_max,N_points))
# Samples drawn from uniform random distribution
X_sample = x_min+np.random.rand(N_points)*(x_max-x_min)
def func(x):
result = x**2*np.sin(x)*np.exp(-(1/x_max)*x)
return (result)
noise_x = np.random.normal(loc=noise_mean,scale=noise_sd,size=N_points)
y = func(X)+noise_x
y_sampled = func(X_sample)+noise_x
df = pd.DataFrame(data=X,columns=['X'])
df['Ideal y']=df['X'].apply(func)
df['y']=y
df['X_sampled']=X_sample
df['y_sampled']=y_sampled
df.head()
```
### Plot the function(s), both the ideal characteristic and the observed output (with process and observation noise)
```
df.plot.scatter('X','Ideal y',title='Ideal y',grid=True,edgecolors=(0,0,0),c='blue',s=40,figsize=(10,5))
plt.plot(x_smooth,func(x_smooth),'k')
df.plot.scatter('X_sampled',y='y_sampled',title='Randomly sampled y',
grid=True,edgecolors=(0,0,0),c='orange',s=40,figsize=(10,5))
plt.plot(x_smooth,func(x_smooth),'k')
df.plot.scatter('X',y='y',title='Linearly sampled y',grid=True,edgecolors=(0,0,0),c='orange',s=40,figsize=(10,5))
plt.plot(x_smooth,func(x_smooth),'k')
```
### Import scikit-learn librares and prepare train/test splits
```
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LassoCV
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import AdaBoostRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn releases
from sklearn.pipeline import make_pipeline
X_train, X_test, y_train, y_test = train_test_split(df['X'], df['y'], test_size=0.33)
X_train=X_train.values.reshape(-1,1)
X_test=X_test.values.reshape(-1,1)
```
### Polynomial model with Ridge/LASSO regularization (pipelined) with linearly spaced samples
**Regularization prevents over-fitting by penalizing high-valued coefficients, i.e. keeping them bounded.**
```
linear_sample_score = []
poly_degree = []
for degree in range(degree_min,degree_max+1):
#model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=ridge_alpha,normalize=True,cv=5))
model = make_pipeline(PolynomialFeatures(degree), LassoCV(eps=lasso_eps,n_alphas=lasso_nalpha,
max_iter=lasso_iter,normalize=True,cv=5))
#model = make_pipeline(PolynomialFeatures(degree), LinearRegression(normalize=True))
model.fit(X_train, y_train)
y_pred = np.array(model.predict(X_train))
test_pred = np.array(model.predict(X_test))
RMSE=np.sqrt(np.sum(np.square(y_pred-y_train)))
test_score = model.score(X_test,y_test)
linear_sample_score.append(test_score)
poly_degree.append(degree)
print("Test score of model with degree {}: {}\n".format(degree,test_score))
#plt.figure()
#plt.title("RMSE: {}".format(RMSE),fontsize=10)
#plt.suptitle("Polynomial of degree {}".format(degree),fontsize=15)
#plt.xlabel("X training values")
#plt.ylabel("Fitted and training values")
#plt.scatter(X_train,y_pred)
#plt.scatter(X_train,y_train)
plt.figure()
plt.title("Predicted vs. actual for polynomial of degree {}".format(degree),fontsize=15)
plt.xlabel("Actual values")
plt.ylabel("Predicted values")
plt.scatter(y_test,test_pred)
plt.plot(y_test,y_test,'r',lw=2)
linear_sample_score
```
### Modeling with randomly sampled data set
```
X_train, X_test, y_train, y_test = train_test_split(df['X_sampled'], df['y_sampled'], test_size=0.33)
X_train=X_train.values.reshape(-1,1)
X_test=X_test.values.reshape(-1,1)
random_sample_score = []
poly_degree = []
for degree in range(degree_min,degree_max+1):
#model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=ridge_alpha,normalize=True,cv=5))
model = make_pipeline(PolynomialFeatures(degree), LassoCV(eps=lasso_eps,n_alphas=lasso_nalpha,
max_iter=lasso_iter,normalize=True,cv=5))
#model = make_pipeline(PolynomialFeatures(degree), LinearRegression(normalize=True))
model.fit(X_train, y_train)
y_pred = np.array(model.predict(X_train))
test_pred = np.array(model.predict(X_test))
RMSE=np.sqrt(np.sum(np.square(y_pred-y_train)))
test_score = model.score(X_test,y_test)
random_sample_score.append(test_score)
poly_degree.append(degree)
print("Test score of model with degree {}: {}\n".format(degree,test_score))
#plt.figure()
#plt.title("RMSE: {}".format(RMSE),fontsize=10)
#plt.suptitle("Polynomial of degree {}".format(degree),fontsize=15)
#plt.xlabel("X training values")
#plt.ylabel("Fitted and training values")
#plt.scatter(X_train,y_pred)
#plt.scatter(X_train,y_train)
plt.figure()
plt.title("Predicted vs. actual for polynomial of degree {}".format(degree),fontsize=15)
plt.xlabel("Actual values")
plt.ylabel("Predicted values")
plt.scatter(y_test,test_pred)
plt.plot(y_test,y_test,'r',lw=2)
random_sample_score
df_score = pd.DataFrame(data={'degree':[d for d in range(degree_min,degree_max+1)],
'Linear sample score':linear_sample_score,
'Random sample score':random_sample_score})
df_score
plt.figure(figsize=(8,5))
plt.grid(True)
plt.plot(df_score['degree'],df_score['Linear sample score'],lw=2)
plt.plot(df_score['degree'],df_score['Random sample score'],lw=2)
plt.xlabel ("Model Complexity: Degree of polynomial",fontsize=20)
plt.ylabel ("Model Score: R^2 score on test set",fontsize=15)
plt.legend(fontsize=15)
```
#### Checking the regularization strength from the cross-validated model pipeline
```
m=model.steps[1][1]
m.alpha_
```
Copyright (c) Microsoft Corporation.
Licensed under the MIT License.
# FWI demo based on:
This project ports devito (https://github.com/opesci/devito) into Azure and runs tutorial notebooks at:
https://nbviewer.jupyter.org/github/opesci/devito/blob/master/examples/seismic/tutorials/
In this notebook we run the devito demo [notebooks](https://nbviewer.jupyter.org/github/opesci/devito/blob/master/examples/seismic/tutorials/) mentioned above by using an [AzureML estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) with a custom docker image. The docker image and associated docker file were created in a previous notebook.
<a id='devito_in_AzureML_demoing_modes'></a>
#### This notebook is used as a control plane to submit experimentation jobs running devito in Azure in two modes (see [remote run azureml python script file invoking devito](#devito_demo_mode)):
- [Mode 1](#devito_demo_mode_1):
- uses custom code (slightly modified graphing functions save images to files too)
- experimentation job is defined by the devito code that is packaged as a py file to be run on an Azure remote compute target
- experimentation job can be used to track metrics or other artifacts (images)
- Mode 2:
    - papermill is invoked via the CLI or via its Python API to run unedited devito demo notebooks (https://github.com/opesci/devito/tree/master/examples/seismic/tutorials) on the remote compute target and retrieve the results as saved notebooks, which are then available in the Azure portal.
```
# Allow multiple displays per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import sys, os
import shutil
import urllib
import azureml.core
from azureml.core import Workspace, Experiment
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.runconfig import MpiConfiguration
# from azureml.core.datastore import Datastore
# from azureml.data.data_reference import DataReference
# from azureml.pipeline.steps import HyperDriveStep
# from azureml.pipeline.core import Pipeline, PipelineData
# from azureml.train.dnn import TensorFlow
from azureml.train.estimator import Estimator
from azureml.widgets import RunDetails
import platform
print("Azure ML SDK Version: ", azureml.core.VERSION)
platform.platform()
os.getcwd()
def add_path_to_sys_path(path_to_append):
if not (any(path_to_append in paths for paths in sys.path)):
sys.path.append(path_to_append)
auxiliary_files_dir = os.path.join(*(['.', 'src']))
paths_to_append = [os.path.join(os.getcwd(), auxiliary_files_dir)]
[add_path_to_sys_path(crt_path) for crt_path in paths_to_append]
import project_utils
prj_consts = project_utils.project_consts()
dotenv_file_path = os.path.join(*(prj_consts.DOTENV_FILE_PATH))
dotenv_file_path
%load_ext dotenv
workspace_config_dir = os.path.join(*(prj_consts.AML_WORKSPACE_CONFIG_DIR))
workspace_config_file = prj_consts.AML_WORKSPACE_CONFIG_FILE_NAME
workspace_config_dir
%dotenv $dotenv_file_path
script_folder = prj_consts.AML_EXPERIMENT_DIR + ['devito_tutorial']
devito_training_script_file = '01_modelling.py' # hardcoded in file azureml_training_script_full_file_name below
azureml_training_script_file = 'azureml_'+devito_training_script_file
experimentName = '020_AzureMLEstimator'
os.makedirs(os.path.join(*(script_folder)), exist_ok=True)
script_path = os.path.join(*(script_folder))
training_script_full_file_name = os.path.join(script_path, devito_training_script_file)
azureml_training_script_full_file_name = os.path.join(script_path, azureml_training_script_file)
training_script_full_file_name
azureml_training_script_full_file_name
```
<a id='devito_demo_mode_1'></a>
##### devito in Azure ML demo mode 1
Create devito demo script based on
https://nbviewer.jupyter.org/github/opesci/devito/blob/master/examples/seismic/tutorials/01_modelling.ipynb
[Back](#devito_in_AzureML_demoing_modes) to summary of modes od demoing devito in AzureML.
The main purpose of this script is to extend the _plot_velocity()_ and _plot_shotrecord()_ devito [plotting functions](https://github.com/opesci/devito/blob/master/examples/seismic/plotting.py) to allow them to work in batch mode, i.e. save output to a file.
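The show-then-optionally-save pattern used in the extended plotting functions below can be sketched in isolation. The file name here is hypothetical, and the Agg backend is forced so the sketch runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend: no display needed in batch mode
import matplotlib.pyplot as plt
import os
import tempfile

def plot_series(data, file=None):
    """Plot data; if `file` is given, also persist the figure to disk."""
    plt.plot(data)
    if file is not None:
        plt.savefig(file)  # save the current figure for offline inspection
    plt.clf()  # reset figure state so successive calls do not overlay

out_file = os.path.join(tempfile.gettempdir(), 'demo_plot.png')
plot_series([0, 1, 4, 9], file=out_file)
```

The devito-specific functions below follow the same shape: plot as usual, then `savefig` when a `file` argument is supplied, then `clf` to keep repeated calls independent.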
```
%%writefile $training_script_full_file_name
import numpy as np
import os, argparse
from examples.seismic import Model
from examples.seismic import TimeAxis
from examples.seismic import Receiver
from devito import TimeFunction
from devito import Eq, solve
from devito import Operator
# try:
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.axes_grid1 import make_axes_locatable
mpl.rc('font', size=16)
mpl.rc('figure', figsize=(8, 6))
# except:
# plt = None
# cm = None
# "all" plotting utils in devito do not save to file, so we extend them here
# https://github.com/opesci/devito/blob/master/examples/seismic/plotting.py
def plot_velocity(model, source=None, receiver=None, colorbar=True, file=None):
"""
Plot a two-dimensional velocity field from a seismic `Model`
object. Optionally also includes point markers for sources and receivers.
Parameters
----------
model : Model
Object that holds the velocity model.
source : array_like or float
Coordinates of the source point.
receiver : array_like or float
Coordinates of the receiver points.
colorbar : bool
Option to plot the colorbar.
"""
domain_size = 1.e-3 * np.array(model.domain_size)
extent = [model.origin[0], model.origin[0] + domain_size[0],
model.origin[1] + domain_size[1], model.origin[1]]
plot = plt.imshow(np.transpose(model.vp.data), animated=True, cmap=cm.jet,
vmin=np.min(model.vp.data), vmax=np.max(model.vp.data),
extent=extent)
plt.xlabel('X position (km)')
plt.ylabel('Depth (km)')
# Plot source points, if provided
if receiver is not None:
plt.scatter(1e-3*receiver[:, 0], 1e-3*receiver[:, 1],
s=25, c='green', marker='D')
# Plot receiver points, if provided
if source is not None:
plt.scatter(1e-3*source[:, 0], 1e-3*source[:, 1],
s=25, c='red', marker='o')
# Ensure axis limits
plt.xlim(model.origin[0], model.origin[0] + domain_size[0])
plt.ylim(model.origin[1] + domain_size[1], model.origin[1])
# Create aligned colorbar on the right
if colorbar:
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(plot, cax=cax)
cbar.set_label('Velocity (km/s)')
plt.show()
if file is not None:
plt.savefig(file)
print('plotted image saved as {} file'.format(file))
plt.clf()
def plot_shotrecord(rec, model, t0, tn, colorbar=True, file=None):
"""
Plot a shot record (receiver values over time).
Parameters
----------
rec :
Receiver data with shape (time, points).
model : Model
object that holds the velocity model.
t0 : int
Start of time dimension to plot.
tn : int
End of time dimension to plot.
"""
scale = np.max(rec) / 10.
extent = [model.origin[0], model.origin[0] + 1e-3*model.domain_size[0],
1e-3*tn, t0]
plot = plt.imshow(rec, vmin=-scale, vmax=scale, cmap=cm.gray, extent=extent)
plt.xlabel('X position (km)')
plt.ylabel('Time (s)')
# Create aligned colorbar on the right
if colorbar:
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(plot, cax=cax)
plt.show()
if file is not None:
plt.savefig(file)
print('plotted image saved as {} file'.format(file))
plt.clf()
def main(output_folder):
# 1. Define the physical problem
# The first step is to define the physical model:
# - physical dimensions of interest
# - velocity profile of this physical domain
# Define a physical size
shape = (101, 101) # Number of grid point (nx, nz)
spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km
origin = (0., 0.) # What is the location of the top left corner. This is necessary to define
# the absolute location of the source and receivers
# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:, :51] = 1.5
v[:, 51:] = 2.5
# With the velocity and model size defined, we can create the seismic model that
# encapsulates this properties. We also define the size of the absorbing layer as 10 grid points
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=2, nbpml=10)
plot_velocity(model,
file= os.path.join(*( [output_folder,'output000.png'])))
# 2. Acquisition geometry
t0 = 0. # Simulation starts a t=0
tn = 1000. # Simulation last 1 second (1000 ms)
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
from examples.seismic import RickerSource
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20. # Depth is 20m
# We can plot the time signature to see the wavelet
# src.show()
# Create symbol for 101 receivers
rec = Receiver(name='rec', grid=model.grid, npoint=101, time_range=time_range)
# Prescribe even spacing for receivers along the x-axis
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=101)
rec.coordinates.data[:, 1] = 20. # Depth is 20m
# We can now show the source and receivers within our domain:
# Red dot: Source location
# Green dots: Receiver locations (every 4th point)
plot_velocity(model, source=src.coordinates.data,
receiver=rec.coordinates.data[::4, :],
file= os.path.join(*( [output_folder,'output010.png'])))
# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=2)
# We can now write the PDE
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
# The PDE representation is as on paper
pde
# This discrete PDE can be solved in a time-marching way updating u(t+dt) from the previous time step
# Devito as a shortcut for u(t+dt) which is u.forward. We can then rewrite the PDE as
# a time marching updating equation known as a stencil using customized SymPy functions
stencil = Eq(u.forward, solve(pde, u.forward))
# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)
# Create interpolation expression for receivers
rec_term = rec.interpolate(expr=u.forward)
op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
op(time=time_range.num-1, dt=model.critical_dt)
plot_shotrecord(rec.data, model, t0, tn,
file= os.path.join(*( [output_folder,'output020.png'])))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--output_folder', type=str, nargs='?', \
dest='output_folder', help='output artifacts location',\
default='.')
args = parser.parse_args()
main(args.output_folder)
```
##### Get experimentation docker image for devito
```
docker_repo_name = os.getenv('ACR_NAME')+'.azurecr.io' # or os.getenv('DOCKER_LOGIN')
docker_image_name = os.getenv('EXPERIMENTATION_DOCKER_IMAGE_NAME')
image_version = os.getenv('EXPERIMENTATION_DOCKER_IMAGE_TAG')
if image_version!="":
docker_image_name = docker_image_name +':'+ image_version
full_docker_image_name = docker_repo_name + '/' + docker_image_name
docker_image_name
full_docker_image_name
```
Extract/decide the python path in custom docker image that corresponds to desired conda environment. Without this, AzureML tries to create a separate environment.
```
get_Python_path_command='docker run -i --rm --name fwi01_azureml_container02 '+ \
full_docker_image_name + \
' /bin/bash -c "which python" '
get_Python_path_command
import subprocess
python_path_in_docker_image = subprocess.check_output(get_Python_path_command,shell=True,stderr=subprocess.STDOUT).\
decode('utf-8').strip()
python_path_in_docker_image
```
<a id='devito_demo_mode'></a>
#### Create azureml_script_file that invokes:
- the custom-edited, devito-specific training_script_file
- unedited devito notebooks via papermill (invoked via the CLI and via the papermill Python API)
[Back](#devito_in_AzureML_demoing_modes) to notebook summary.
```
%%writefile $azureml_training_script_full_file_name
import argparse
import os
os.system('conda env list')
import azureml.core;
from azureml.core.run import Run
print(azureml.core.VERSION)
parser = argparse.ArgumentParser()
parser.add_argument('--output_folder', type=str, dest='output_folder', help='output artifacts location')
args = parser.parse_args()
print('args.output_folder is {} but it will be ignored since AzureML_tracked ./outputs will be used'.format(args.output_folder))
# get the Azure ML run object
run = Run.get_context()
# ./outputs/ folder is autotracked so should get uploaded at the end of the run
output_dir_AzureML_tracked = './outputs'
crt_dir = os.getcwd()
cli_command= \
'cd /devito; /opt/conda/envs/fwi01_conda_env/bin/python '+ crt_dir +'/01_modelling.py' + \
' --output_folder '+ crt_dir + output_dir_AzureML_tracked+ '/' + \
' > '+ crt_dir + output_dir_AzureML_tracked + '/01_modelling.log'
# + \
# ' 2>&1 ' + crt_dir +'/'+ output_dir_AzureML_tracked + '/devito_cli_py.log'
print('Running devito from cli on 01_modelling.py----BEGIN-----:')
print(cli_command); print('\n');os.system(cli_command)
print('Running devito from cli on 01_modelling.py----END-----:\n\n')
cli_command= \
'cd /devito; papermill ' + \
'./examples/seismic/tutorials/02_rtm.ipynb '+\
crt_dir +'/outputs/02_rtm_output.ipynb ' + \
'--log-output --no-progress-bar --kernel python3 ' + \
' > '+ crt_dir + output_dir_AzureML_tracked + '/02_rtm_output.log'
# + \
# ' 2>&1 ' + crt_dir +'/'+ output_dir_AzureML_tracked + '/papermill_cli.log'
# FIXME - activate right conda env for running papermill from cli
activate_right_conda_env_fixed = False
if activate_right_conda_env_fixed:
print('Running papermill from cli on 02_rtm.ipynb----BEGIN-----:')
print(cli_command); print('\n');os.system(cli_command)
print('Running papermill from cli on 02_rtm.ipynb----END-----:\n\n')
print('Running papermill from Python API on 03_fwi.ipynb----BEGIN-----:')
import papermill as pm
os.chdir('/devito')
pm.execute_notebook(
'./examples/seismic/tutorials/03_fwi.ipynb',
crt_dir +'/outputs/03_fwi_output.ipynb'
)
print('Running papermill from Python API on 03_fwi.ipynb----END-----:')
print('Running papermill from Python API on 04_dask.ipynb----BEGIN-----:')
import papermill as pm
os.chdir('/devito')
pm.execute_notebook(
'./examples/seismic/tutorials/04_dask.ipynb',
crt_dir +'/outputs/04_dask_output.ipynb'
)
print('Running papermill from Python API on 04_dask.ipynb----END-----:')
os.system('pwd')
os.system('ls -l /')
os.system('ls -l ./')
os.system('ls -l ' +crt_dir + output_dir_AzureML_tracked)
run.log('training_message01: ', 'finished experiment')
print('\n')
script_path=os.path.join(*(script_folder))
os.listdir(script_path)
```
## Initialize workspace
Initialize a workspace object from persisted configuration. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure the config file is present at .\config.json
```
ws = Workspace.from_config(
path=os.path.join(os.getcwd(),
os.path.join(*([workspace_config_dir, '.azureml', workspace_config_file]))))
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id[0:4], sep = '\n')
```
## Create an Azure ML experiment
Let's create an experiment, named by the `experimentName` variable defined earlier, and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.
```
exp = Experiment(workspace=ws, name=experimentName)
```
## Retrieve or create a Azure Machine Learning compute
Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.
If we could not find the compute with the given name in the previous cell, then we will create a new compute here. This process is broken down into the following steps:
1. Create the configuration
2. Create the Azure Machine Learning compute
**This process will take a few minutes and provides only sparse output along the way. Please make sure to wait until the call returns before moving to the next cell.**
```
gpu_cluster_name = os.getenv('GPU_CLUSTER_NAME')
gpu_cluster_name = 'gpuclstfwi07'
gpu_cluster_name
# Verify that cluster does not exist already
max_nodes_value = 2
try:
gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print("Found existing gpu cluster")
except ComputeTargetException:
print("Could not find ComputeTarget cluster!")
# # Create a new gpucluster using code below
# # Specify the configuration for the new cluster
# compute_config = AmlCompute.provisioning_configuration(vm_size="Standard_NC6",
# min_nodes=0,
# max_nodes=max_nodes_value)
# # Create the cluster with the specified name and configuration
# gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
# # Wait for the cluster to complete, show the output log
# gpu_cluster.wait_for_completion(show_output=True)
# for demo purposes, show how cluster properties can be altered post-creation
gpu_cluster.update(min_nodes=0, max_nodes=max_nodes_value, idle_seconds_before_scaledown=1200)
```
#### Create an Azure ML SDK estimator with custom docker image
```
# use a custom Docker image
from azureml.core.container_registry import ContainerRegistry
image_name = docker_image_name
# you can also point to an image in a private ACR
image_registry_details = ContainerRegistry()
image_registry_details.address = docker_repo_name
image_registry_details.username = os.getenv('ACR_USERNAME')
image_registry_details.password = os.getenv('ACR_PASSWORD')
# don't let the system build a new conda environment
user_managed_dependencies = True
# compute_target can also be set to 'local' to run in a local Docker container (requires a local Docker engine); here we submit to the GPU cluster.
script_params = {
'--output_folder': 'some_folder'
}
# distributed_training_conf = MpiConfiguration()
# distributed_training_conf.process_count_per_node = 2
est = Estimator(source_directory=script_path,
compute_target=gpu_cluster,#'local', #gpu_cluster,
entry_script=azureml_training_script_file,
script_params=script_params,
use_docker=True,
custom_docker_image=image_name,
# uncomment below line to use your private ACR
image_registry_details=image_registry_details,
user_managed=user_managed_dependencies,
distributed_training=None,
node_count=1
)
est.run_config.environment.python.interpreter_path = python_path_in_docker_image
run = exp.submit(est)
RunDetails(run).show()
```
The link above to the current experiment run in the Azure portal shows the tracked metrics, as well as the images and output notebooks that azureml_training_script_full_file_name saves under {run_dir}/outputs on the remote compute target; AzureML automatically uploads that folder to the run history pages.
```
response = run.wait_for_completion(show_output=False)
import time
from collections import Counter
#wait till all jobs finished
def wait_for_run_list_to_finish(the_run_list):
finished_status_list = ['Completed', 'Failed']
printing_counter = 0
start_time = time.time()
while (not all((crt_queried_job.get_status() in finished_status_list) for crt_queried_job in the_run_list)):
time.sleep(2)
printing_counter+= 1
print('print {0:.0f}, time {1:.3f} seconds: {2}'.format(printing_counter, time.time() - start_time,
str(Counter([crt_queried_job.get_status() for crt_queried_job in the_run_list]))), end="\r")
# final status
print('Final print {0:.0f}, time {1:.3f} seconds: {2}'.format(printing_counter, time.time() - start_time,
str(Counter([crt_queried_job.get_status() for crt_queried_job in the_run_list]))), end="\r")
wait_for_run_list_to_finish([run])
import datetime, math
def get_run_duration(azureml_exp_run):
run_details = azureml_exp_run.get_details()
run_duration = datetime.datetime.strptime(run_details['endTimeUtc'], "%Y-%m-%dT%H:%M:%S.%fZ") - \
datetime.datetime.strptime(run_details['startTimeUtc'], "%Y-%m-%dT%H:%M:%S.%fZ")
return run_duration.total_seconds()
run_duration = get_run_duration(run)
run_seconds, run_minutes = math.modf(run_duration/60)
print('run_duration in seconds {}'.format(run_duration))
print('run_duration= {0:.0f}m {1:.3f}s'.format(run_minutes, run_seconds*60))
import time
from IPython.display import clear_output
no_of_nodes = int(20)
no_of_jobs = int(no_of_nodes*10)
job_counter = 0
print_cycle = 20
run_list = []
submit_time_list = []
for crt_nodes in range(no_of_nodes, (no_of_nodes+1)):
gpu_cluster.update(min_nodes=0, max_nodes=crt_nodes, idle_seconds_before_scaledown=1200)
clust_start_time = time.time()
for crt_job in range(1, no_of_jobs):
job_counter+= 1
start_time = time.time()
run = exp.submit(est)
end_time = time.time()
run_time = end_time - start_time
run_list.append(run)
submit_time_list.append(run_time)
print('Counter{}: submission of job {} on {} nodes took {} seconds '.format(job_counter, crt_job, crt_nodes, run_time))
print('run list length {}'.format(len(run_list)))
if ((job_counter-1) % print_cycle) == 0:
clear_output()
print('Showing details for run {}'.format(job_counter))
RunDetails(run).show()
# [all_jobs_done = True if (('Completed'==crt_queried_job.get_status()) for crt_queried_job in run_list)]
import numpy as np
np.asarray(submit_time_list)
np.histogram(np.asarray(submit_time_list), bins=np.linspace(6.0, 10.0, num=10), density=False)
def wait_for_run_list_to_finish(the_run_list, plot_results=True):
finished_status_list = ['Completed', 'Failed']
printing_counter = 0
start_time = time.time()
while (not all((crt_queried_job.get_status() in finished_status_list) for crt_queried_job in the_run_list)):
time.sleep(2)
printing_counter+= 1
crt_status = Counter([crt_queried_job.get_status() for crt_queried_job in the_run_list])
print('print {0:.0f}, time {1:.3f} seconds: {2}'.format(printing_counter, time.time() - start_time,
str(crt_status)), end="\r")
if plot_results:
# import numpy as np
import matplotlib.pyplot as plt
plt.bar(crt_status.keys(), crt_status.values())
plt.show()
# indexes = np.arange(len(labels))
# width = 1
# plt.bar(indexes, values, width)
# plt.xticks(indexes + width * 0.5, labels)
# plt.show()
# from pandas import Series
# crt_status = Series([crt_queried_job.get_status() for crt_queried_job in the_run_list])
# status_counts = crt_status.value_counts().sort_index()
# print('print {0:.0f}, time {1:.3f} seconds: {2}'.format(printing_counter, time.time() - start_time,
# str(status_counts)), end="\r")
# final status
print('Final print {0:.0f}, time {1:.3f} seconds: {2}'.format(printing_counter, time.time() - start_time,
str(Counter([crt_queried_job.get_status() for crt_queried_job in the_run_list]))), end="\r")
wait_for_run_list_to_finish(run_list, plot_results=False)
run_durations = [get_run_duration(crt_queried_job) for crt_queried_job in run_list]
run_statuses = [crt_queried_job.get_status() for crt_queried_job in run_list]
run_durations = np.asarray(run_durations)
run_statuses = np.asarray(run_statuses)
extreme_k = 20
#longest runs
indices = np.argsort(run_durations)[-extreme_k:]
indices
print(run_durations[indices])
print(run_statuses[indices])
#shortest runs
indices = np.argsort(run_durations)[0:extreme_k]
indices
print(run_durations[indices])
print(run_statuses[indices])
#run_durations histogram - counts and bins
np.histogram(run_durations, bins=np.linspace(50, 200, num=10), density=False)
print('Finished running 030_ScaleJobsUsingAzuremL_GeophysicsTutorial_FWI_Azure_devito!')
```
# Analyzing and predicting Service Request Types in DC
The flow adopted in this notebook is as follows:
> 1. Read in the datasets using ArcGIS API for Python
> 2. Merge datasets
> 3. Construct model that predicts service type
> 4. How many requests does each neighborhood make?
> 5. What kind of requests does each neighborhood mostly make?
> 6. Next Steps
The datasets used in this notebook are:
1. __`City Service Requests in 2018`__
2. __`Neighborhood Clusters`__
These datasets can be found at [opendata.dc.gov](http://opendata.dc.gov/)
We start by importing the ArcGIS package to load the data using a service URL
```
import arcgis
from arcgis.features import *
```
### 1.1 Read in service requests for 2018
[Link](http://opendata.dc.gov/datasets/city-service-requests-in-2018/geoservice?geometry=-77.49%2C38.811%2C-76.534%2C38.998) to Service Requests 2018 dataset
```
requests_url = 'https://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_DATA/ServiceRequests/MapServer/9'
requests_layer = FeatureLayer(requests_url)
requests_layer
# Extract all the data and display number of rows
requests_features = requests_layer.query()
print('Total number of rows in the dataset: ')
print(len(requests_features.features))
```
This dataset is updated at runtime, so the number of rows may vary from run to run.
```
# store as dataframe
requests = requests_features.sdf
# View first 5 rows
requests.head()
```
### 1.2 Read in Neighborhood Clusters dataset
[Link](http://opendata.dc.gov/datasets/neighborhood-clusters) to this dataset
```
neighborhood_url = 'https://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_DATA/Administrative_Other_Boundaries_WebMercator/MapServer/17'
neighborhood_layer = FeatureLayer(neighborhood_url)
neighborhood_layer
# Extract all the data and display number of rows
neighborhood_features = neighborhood_layer.query()
print('Total number of rows in the dataset: ')
print(len(neighborhood_features.features))
# store as dataframe
neighborhood = neighborhood_features.sdf
# View first 5 rows
neighborhood.head()
```
We now __merge__ the two datasets
```
# Connect to the GIS
from arcgis.gis import GIS
gis = GIS('http://dcdev.maps.arcgis.com/', 'username', 'password')
# Perform spatial join between CBG layer and the service areas created for all time durations
requests_with_neighborhood = arcgis.features.analysis.join_features(requests_url, neighborhood_url, spatial_relationship='Intersects', output_name='serviceRequests_Neighborhood_DC_1')
requests_with_neighborhood.share(everyone=True)
requests_with_neighborhood_url = str(requests_with_neighborhood.url)+'/0/'
layer = FeatureLayer(requests_with_neighborhood_url)
features = layer.query()
print('Total number of rows in the dataset: ')
print(len(features.features))
merged = features.sdf
merged.head()
```
### 3. Construct model that predicts service type
The variables used to build the model are:
> 1. City Quadrant
> 2. Neighborhood cluster
> 3. Ward (Geographical unit)
> 4. Organization acronym
> 5. Status Code
### 3.1 Data preprocessing
```
quads = ['NE', 'NW', 'SE', 'SW']
def generate_quadrant(x):
'''Function that extracts quadrant from street address'''
try:
temp = x[-2:]
if temp in quads:
return temp
else:
return 'NaN'
except:
return 'NaN'
merged['QUADRANT'] = merged['STREETADDRESS'].apply(generate_quadrant)
merged['QUADRANT'].head()
merged['QUADRANT'].unique()
merged['CLUSTER'] = merged['NAME'].apply(lambda x: x[8:])
merged['CLUSTER'].head()
merged['CLUSTER'] = merged['CLUSTER'].astype(int)
merged['ORGANIZATIONACRONYM'].unique()
merged['STATUS_CODE'].unique()
```
Let's extract the number of possible outcomes, i.e. the number of unique values of the target variable, and also take a look at the values
```
len(merged['SERVICETYPECODEDESCRIPTION'].unique())
requests['SERVICETYPECODEDESCRIPTION'].unique()
```
### 3.2 Model building
```
# Import necessary packages
from sklearn.preprocessing import *
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# Convert categorical (text) fields to numbers
number = LabelEncoder()
merged['SERVICETYPE_NUMBER'] = number.fit_transform(merged['SERVICETYPECODEDESCRIPTION'].astype('str'))
merged['STATUS_CODE_NUMBER'] = number.fit_transform(merged['STATUS_CODE'].astype('str'))
# Extract desired fields
data = merged[['SERVICETYPECODEDESCRIPTION', 'SERVICETYPE_NUMBER', 'QUADRANT', 'CLUSTER', 'WARD', 'ORGANIZATIONACRONYM', 'STATUS_CODE', 'STATUS_CODE_NUMBER']]
data.reset_index(inplace=True)
data.head()
```
Let's binarize (one-hot encode) the values in fields `QUADRANT` (4 categories) and `ORGANIZATIONACRONYM` (8 categories)
Wondering why we are not doing it for `CLUSTER`? Because of the nomenclature of [adjacent clusters](http://opendata.dc.gov/datasets/neighborhood-clusters), the cluster numbers are meaningful as they stand.
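If one-hot encoding is new to you, here is a tiny standalone pandas sketch (made-up data, but the same column name as above) showing what `get_dummies` produces:

```python
import pandas as pd

# A toy frame with one categorical column, like QUADRANT above
demo = pd.DataFrame({'QUADRANT': ['NE', 'SW', 'NE', 'NW']})

# Each category becomes its own 0/1 indicator column
encoded = pd.get_dummies(demo, columns=['QUADRANT'])
print(list(encoded.columns))  # ['QUADRANT_NE', 'QUADRANT_NW', 'QUADRANT_SW']
```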
```
import pandas as pd
data = pd.get_dummies(data=data, columns=['QUADRANT', 'ORGANIZATIONACRONYM'])
data.head()
# Extract input dataframe
model_data = data.drop(['SERVICETYPECODEDESCRIPTION', 'SERVICETYPE_NUMBER', 'STATUS_CODE'], axis=1)
model_data.head()
def handle_ward(x):
    '''Map any ward value outside the accepted range to 0'''
    accept = range(1, 9)  # DC wards are numbered 1-8; a bare range (not [range(...)]) so membership works for ints
    if x not in accept:
        return 0
    else:
        return x
model_data['WARD'] = model_data['WARD'].apply(handle_ward)
# Define independent and dependent variables
y = data['SERVICETYPE_NUMBER'].values
X = model_data.values
# Split data into training and test samples of 70%-30%
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = .3, random_state=522, stratify=y)
# n_estimators = number of trees in the forest
# min_samples_leaf = minimum number of samples required to be at a leaf node for the tree
rf = RandomForestClassifier(n_estimators=2500, min_samples_leaf=5, random_state=522)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
print(y_pred)
print('Accuracy: ', accuracy_score(y_test, y_pred))
```
### 3.3 Alternate model, excluding the department codes
```
data = merged[['SERVICETYPECODEDESCRIPTION', 'SERVICETYPE_NUMBER', 'QUADRANT', 'CLUSTER', 'WARD', 'ORGANIZATIONACRONYM', 'STATUS_CODE', 'STATUS_CODE_NUMBER']]
data.reset_index(inplace=True)
data.head()
data1 = pd.get_dummies(data=data,columns=['QUADRANT'])
data1.head()
model_data1 = data1.drop(['SERVICETYPECODEDESCRIPTION', 'SERVICETYPE_NUMBER', 'STATUS_CODE', 'ORGANIZATIONACRONYM'], axis=1)
model_data1.head()
model_data1['WARD'] = model_data1['WARD'].apply(handle_ward)
y = data['SERVICETYPE_NUMBER'].values
X = model_data1.values
# Split data into training and test samples of 70%-30%
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = .3, random_state=522, stratify=y)
# n_estimators = number of trees in the forest
# min_samples_leaf = minimum number of samples required to be at a leaf node for the tree
rf = RandomForestClassifier(n_estimators=2500, min_samples_leaf=5, random_state=522)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
print(y_pred)
print('Accuracy: ', accuracy_score(y_test, y_pred))
```
A drop in accuracy from __68.39%__ to __48.78%__ demonstrates the importance of using the correct predictors.
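One way to quantify how much each predictor contributes is the fitted forest's `feature_importances_` attribute. A self-contained sketch on synthetic data (the feature names here are illustrative, not from the dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(522)
n = 500
org = rng.randint(0, 8, n)      # informative feature: determines the class
noise = rng.randint(0, 8, n)    # uninformative feature
y = org % 3
X = np.column_stack([org, noise])

rf = RandomForestClassifier(n_estimators=100, random_state=522).fit(X, y)
for name, imp in zip(['org', 'noise'], rf.feature_importances_):
    print(f'{name}: {imp:.3f}')  # the informative feature dominates
```

On the real model, pairing `rf.feature_importances_` with `model_data.columns` would show which predictors (for instance, the organization dummies) carry the signal.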
### 4. How many requests does each neighborhood make?
```
# Count of service requests per cluster
cluster_count = merged.groupby('NAME').size().reset_index(name='counts')
cluster_count.head()
# merge with original file
neighborhood = pd.merge(neighborhood, cluster_count, on='NAME')
neighborhood.head()
temp = neighborhood.sort_values(['counts'], ascending=[False])
temp[['NAME', 'NBH_NAMES', 'counts']]
# Viewing the map
search_result = gis.content.search("Neighborhood_Service_Requests")
search_result[0]
```
### 5. What kind of requests does each neighborhood mostly make?
```
import scipy.stats
merged.columns
df = merged[['NAME', 'SERVICECODEDESCRIPTION']]
# Extract the most frequently occurring service request type, and its count
df1 = df.groupby('NAME').agg(lambda x: scipy.stats.mode(x)[0][0])
df2 = df.groupby('NAME').agg(lambda x: scipy.stats.mode(x)[1][0])
df1.reset_index(inplace=True)
df2.reset_index(inplace=True)
df2 = df2.rename(columns={'SERVICECODEDESCRIPTION':'SERVICECODEDESCRIPTION_COUNT'})
# merge the two datasets
final_df = pd.merge(df1, df2, on='NAME')
final_df.head()
# merge it with neighborhood clusters
neighborhood_data = pd.merge(neighborhood, final_df, on='NAME')
# view the map
search_result = gis.content.search("Neighborhood_Service_DC")
search_result[0]
```
## Image网 Submission `128x128`
This contains a submission for the Image网 leaderboard in the `128x128` category.
In this notebook we:
1. Train on 1 pretext task:
- Train a network with a contrastive learning objective on Image网's `/train`, `/unsup` and `/val` images.
2. Train on 4 downstream tasks:
- We load the pretext weights and train for `5` epochs.
- We load the pretext weights and train for `20` epochs.
- We load the pretext weights and train for `80` epochs.
- We load the pretext weights and train for `200` epochs.
Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
```
import json
import torch
import numpy as np
from functools import partial
from fastai2.basics import *
from fastai2.vision.all import *
torch.cuda.set_device(1)
```
## Pretext Task: Contrastive Learning
```
# Chosen parameters
lr=2e-2
sqrmom=0.99
mom=0.95
beta=0.
eps=1e-6
bs=64
sa=1
m = xresnet34
act_fn = Mish
pool = MaxPool
nc=20
source = untar_data(URLs.IMAGEWANG_160)
len(get_image_files(source/'unsup')), len(get_image_files(source/'train')), len(get_image_files(source/'val'))
def get_dbunch(size, bs, workers=8):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
files = get_image_files(source, folders=['unsup', 'val'])
tfms = [[PILImage.create, ToTensor, Resize(size)],
[lambda x: x.parent.name, Categorize()]]
# dsets = Datasets(files, tfms=tfms, splits=GrandparentSplitter(train_name='unsup', valid_name='val')(files))
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
size = 128
bs = 256
dbunch = get_dbunch(160, bs)
# dbunch.c = nc
dbunch.c = 128
len(dbunch.train.dataset)
dbunch.show_batch()
#export
from pytorch_metric_learning import losses
class XentLoss(losses.NTXentLoss):
def forward(self, output1, output2):
stacked = torch.cat((output1, output2), dim=0)
labels = torch.arange(output1.shape[0]).repeat(2)
return super().forward(stacked, labels, None)
class ContrastCallback(Callback):
run_before=Recorder
def __init__(self, size=256, aug_targ=None, aug_pos=None, temperature=0.1):
self.aug_targ = ifnone(aug_targ, get_aug_pipe(size, min_scale=0.7))
self.aug_pos = ifnone(aug_pos, get_aug_pipe(size, min_scale=0.4))
self.temperature = temperature
def update_size(self, size):
pipe_update_size(self.aug_targ, size)
pipe_update_size(self.aug_pos, size)
def begin_fit(self):
self.old_lf = self.learn.loss_func
self.old_met = self.learn.metrics
self.learn.metrics = []
self.learn.loss_func = losses.NTXentLoss(self.temperature)
def after_fit(self):
        self.learn.loss_func = self.old_lf  # restore the original loss function
self.learn.metrics = self.old_met
def begin_batch(self):
xb, = self.learn.xb
xb_targ = self.aug_targ(xb)
xb_pos = self.aug_pos(xb)
self.learn.xb = torch.cat((xb_targ, xb_pos), dim=0),
self.learn.yb = torch.arange(xb_targ.shape[0]).repeat(2),
#export
def pipe_update_size(pipe, size):
for tf in pipe.fs:
if isinstance(tf, RandomResizedCropGPU):
tf.size = size
#export
def get_aug_pipe(size, min_scale=0.4, stats=imagenet_stats, erase=True, **kwargs):
tfms = [Normalize.from_stats(*stats), *aug_transforms(size=size, min_scale=min_scale, **kwargs)]
if erase: tfms.append(RandomErasing(p=0.5, max_count=1, sh=0.2))
return Pipeline(tfms)
m_part = partial(m, c_out=nc, act_cls=torch.nn.ReLU, sa=sa, pool=pool)
save_name = 'imagewang_contrast_simple_dogsonly'
aug = get_aug_pipe(size, min_scale=0.3, mult=1, max_lighting=0.4, stats=imagenet_stats)
aug2 = get_aug_pipe(size, min_scale=0.25, mult=2, max_lighting=0.3, stats=imagenet_stats)
cbs = ContrastCallback(size=size, aug_targ=aug, aug_pos=aug2)
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[], loss_func=CrossEntropyLossFlat(), cbs=cbs, pretrained=False,
config={'ps':0.0, 'concat_pool':False}
)
learn.model
learn.unfreeze()
learn.fit_flat_cos(30, 2e-2, wd=1e-2)
torch.save(learn.model[0].state_dict(), f'{save_name}.pth')
# learn.save(save_name)
```
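A note on the labelling trick inside `XentLoss`/`ContrastCallback` above: after stacking the two augmented views, `torch.arange(B).repeat(2)` assigns the same label to both views of each image, which is how `NTXentLoss` recognizes positive pairs. A NumPy sketch of the indexing (`np.tile` behaves like a 1-D tensor's `.repeat`):

```python
import numpy as np

B = 4  # batch size
labels = np.tile(np.arange(B), 2)  # equivalent to torch.arange(B).repeat(2)
print(labels)  # [0 1 2 3 0 1 2 3]

# In the stacked batch of 2*B rows, row i (target view) and row i+B
# (positive view) share a label, so the loss pulls them together.
for i in range(B):
    assert labels[i] == labels[i + B]
```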
## Downstream Task: Image Classification
```
def get_dbunch(size, bs, workers=8, dogs_only=True):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
if dogs_only:
dog_categories = [f.name for f in (source/'val').ls()]
dog_train = get_image_files(source/'train', folders=dog_categories)
valid = get_image_files(source/'val')
files = dog_train + valid
splits = [range(len(dog_train)), range(len(dog_train), len(dog_train)+len(valid))]
else:
files = get_image_files(source)
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[lambda x: x.parent.name, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
def do_train(size=128, bs=64, epochs=5, runs=5, dogs_only=False, save_name=None):
dbunch = get_dbunch(size, bs, dogs_only=dogs_only)
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch})
if save_name is not None:
state_dict = torch.load(f'{save_name}.pth')
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, 2e-2, wd=1e-2)
```
### 5 Epochs
```
epochs = 5
runs = 5
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
```
## Dogs only
```
do_train(epochs=epochs, runs=runs, dogs_only=True, save_name=save_name)
```
## Random weights - ACC = 0.337999
```
do_train(epochs=epochs, runs=1, dogs_only=False, save_name=None)
```
### 20 Epochs
```
epochs = 20
runs = 3
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
do_train(epochs=epochs, runs=runs, dogs_only=True, save_name=save_name)
do_train(epochs=epochs, runs=1, dogs_only=False, save_name=None)
```
## 80 epochs
```
epochs = 80
runs = 1
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch})
learn.unfreeze()
learn.fit_flat_cos(epochs, 2e-2, wd=1e-3)
```
Accuracy: **62.18%**
### 200 epochs
```
epochs = 200
runs = 1
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
                        config={'custom_head':ch})
    # load the pretext weights saved earlier (mirrors do_train above)
    state_dict = torch.load(f'{save_name}.pth')
    learn.model[0].load_state_dict(state_dict)
    learn.freeze()
    learn.fit_flat_cos(epochs, lr, wd=1e-2)
```
Accuracy: **62.03%**
<a href="https://colab.research.google.com/github/wesleybeckner/technology_fundamentals/blob/main/C2%20Statistics%20and%20Model%20Creation/SOLUTIONS/SOLUTION_Tech_Fun_C1_P2_Game_AI%2C_OOP_and_Agents_PART_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Technology Fundamentals Course 2, Project Part 2: Building Agents and Object Oriented Programming
**Instructor**: Wesley Beckner
**Contact**: wesleybeckner@gmail.com
**Teaching Assistants**: Varsha Bang, Harsha Vardhan
**Contact**: vbang@uw.edu, harshav@uw.edu
<br>
---
<br>
In part II of our tic-tac-toe and AI journey, we're going to take all the functions we've defined so far and make them object oriented!
<br>
---
<br>
<a name='top'></a>
<a name='x.0'></a>
## 2.0 Preparing Environment and Importing Data
[back to top](#top)
<a name='x.0.1'></a>
### 2.0.1 Import Packages
[back to top](#top)
```
def visualize_board(board_values):
"""
Visualizes the board during gameplay
Parameters
----------
board_values : list
The values ('X', 'O', or ' ' at each board location)
Returns
-------
None
"""
print(
"|{}|{}|{}|\n|{}|{}|{}|\n|{}|{}|{}|\n".format(*board_values)
)
def init_board():
"""
Initializes an empty board for the start of gameplay
Parameters
----------
None
Returns
-------
board : dict
a dictionary with keys 1-9 and single space (' ') string as values
"""
return {1: ' ',
2: ' ',
3: ' ',
4: ' ',
5: ' ',
6: ' ',
7: ' ',
8: ' ',
9: ' ',}
# the keys on the game board where, if filled completely with X's or O's a
# winner has occurred
win_patterns = [[1,2,3], [4,5,6], [7,8,9],
[1,4,7], [2,5,8], [3,6,9],
[1,5,9], [7,5,3]]
def check_winning(board):
"""
Checks if the game has a winner
Parameters
----------
board : dict
the tictactoe board as a dictionary
Returns
-------
win_statement : str
defaults to an empty string if no winner. Otherwise 'X' Won! or 'O' Won!
"""
for pattern in win_patterns:
values = [board[i] for i in pattern]
if values == ['X', 'X', 'X']:
return "'X' Won!"
elif values == ['O', 'O', 'O']:
return "'O' Won!"
return ''
def tic_tac_toe():
"""
The tictactoe game engine. Runs the while loop that handles the game
Parameters
----------
None
Returns
-------
None
"""
print("'X' will go first!")
board = init_board()
while True:
for player in (['X', 'O']):
visualize_board(board.values())
move = int(input("{}, what's your move?".format(player)))
if board[move] != ' ':
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
else:
board[move] = player
winner = check_winning(board)
if winner == '':
continue
else:
print(winner)
break
if winner != '':
break
```
<a name='x.1'></a>
## 2.1 OOP
[back to top](#top)
Notice how we have so many functions with calls to our main object `board`. Let's try to organize this into a more object oriented scheme.
We'll also want to write a function that recognizes when a stalemate has been reached!
### 2.1.1 Thinking in Objects
It's helpful to think of how our code can be divided into useful segments that can then be extended, interfaced, used elsewhere, etc.
It's just like when we were playing with our pokeball and pokemon objects. In that case, it made sense to make two separate objects: one for pokemon and one for pokeballs.
Can you think of any way that would make sense to divide our code into objects? I can think of two.
### 2.1.2 class TicTacToe
the first object will be one that handles our board and all of its methods and attributes. In this class called `TicTacToe` we will have the attributes:
* `winner`, initialized as an empty string, and updates at the conclusion of a game with 'X', 'O', or 'Stalemate'
* `start_player` initialized as an empty string and updates at the start of a game with 'X' or 'O'
* `board` initialized as our empty board dictionary
* `win_patterns` the list of lists containing the winning patterns of the game
and then we will have three different methods, each of which takes one parameter, `self`
* `visualize_board`
* `check_winning`
* `check_stalemate` : a new function. Returns "It's a stalemate!" and sets `self.winner = "Stalemate"` (note there is a bug in the way this is currently written, we will move along for now and work through a debugging tutorial later this week!)
#### Q1 Attributes of TicTacToe
Within class TicTacToe, define the attributes described above
```
class TicTacToe:
# create winner and start_player parameters with default values as empty
# strings within __init__
def __init__(self, winner='', start_player=''):
##################################
########### Attributes ###########
##################################
# set self.winner and self.start_player with the parameters from __init__
self.winner = winner
self.start_player = start_player
# set self.board as a dictionary with ' ' as values and 1-9 as keys
self.board = {1: ' ',
2: ' ',
3: ' ',
4: ' ',
5: ' ',
6: ' ',
7: ' ',
8: ' ',
9: ' ',}
# set self.win_patterns with the 8 winning patterns (a list of lists)
self.win_patterns = [[1,2,3], [4,5,6], [7,8,9],
[1,4,7], [2,5,8], [3,6,9],
[1,5,9], [7,5,3]]
```
#### Q2 Methods of TicTacToe
Here now we will define the methods of `TicTacToe`. Paste your attributes from the cell above into the cell below so that your changes carry over.
```
class TicTacToe:
# create winner and start_player parameters with default values as empty
# strings within __init__
def __init__(self, winner='', start_player=''):
##################################
########### Attributes ###########
##################################
# set self.winner and self.start_player with the parameters from __init__
self.winner = winner
self.start_player = start_player
# set self.board as a dictionary with ' ' as values and 1-9 as keys
self.board = {1: ' ',
2: ' ',
3: ' ',
4: ' ',
5: ' ',
6: ' ',
7: ' ',
8: ' ',
9: ' ',}
# set self.win_patterns with the 8 winning patterns (a list of lists)
self.win_patterns = [[1,2,3], [4,5,6], [7,8,9],
[1,4,7], [2,5,8], [3,6,9],
[1,5,9], [7,5,3]]
###############################
########### METHODS ###########
###############################
# the other functions are now passed self
# define visualize_board and update the board
# object with self.board (and maybe self.board.values() depending on how your
# visualize_board function is written)
def visualize_board(self):
"""
Visualizes the board during gameplay
Parameters
----------
board_values : list
The values ('X', 'O', or ' ' at each board location)
Returns
-------
None
"""
print(
"|{}|{}|{}|\n|{}|{}|{}|\n|{}|{}|{}|\n".format(*self.board.values())
)
# define check_winning and similarly update win_patterns,
# board, and winner to be accessed via the self. Be sure to update the
# attribute self.winner with the appropriate winner in the function
def check_winning(self):
"""
Checks if the game has a winner
Parameters
----------
board : dict
the tictactoe board as a dictionary
Returns
-------
win_statement : str
defaults to an empty string if no winner. Otherwise 'X' Won! or 'O' Won!
"""
for pattern in self.win_patterns:
values = [self.board[i] for i in pattern]
if values == ['X', 'X', 'X']:
self.winner = 'X'
return "'X' Won!"
elif values == ['O', 'O', 'O']:
self.winner = 'O'
return "'O' Won!"
return ''
# here the definition of check_stalemate is given
def check_stalemate(self):
if ' ' not in self.board.values():
self.winner = 'Stalemate'
return "It's a stalemate!"
```
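To make the bug flagged above concrete: when the winning move is also the move that fills the board, a later call to `check_stalemate` overwrites the recorded winner. A minimal standalone reproduction (re-declaring only the relevant pieces of the class):

```python
class TicTacToe:
    """Only the pieces of the class needed to reproduce the bug."""
    def __init__(self):
        self.winner = ''
        self.board = dict.fromkeys(range(1, 10), ' ')
        self.win_patterns = [[1, 2, 3], [4, 5, 6], [7, 8, 9],
                             [1, 4, 7], [2, 5, 8], [3, 6, 9],
                             [1, 5, 9], [7, 5, 3]]

    def check_winning(self):
        for pattern in self.win_patterns:
            values = [self.board[i] for i in pattern]
            if values == ['X'] * 3:
                self.winner = 'X'
                return "'X' Won!"
            if values == ['O'] * 3:
                self.winner = 'O'
                return "'O' Won!"
        return ''

    def check_stalemate(self):
        if ' ' not in self.board.values():
            self.winner = 'Stalemate'
            return "It's a stalemate!"


game = TicTacToe()
# A finished game where 'X' completes the top row on the ninth move:
# X X X / O O X / X O O
for key, mark in zip(range(1, 10), 'XXXOOXXOO'):
    game.board[key] = mark

game.check_winning()
winner_before = game.winner              # 'X' -- correct
game.check_stalemate()
print(winner_before, '->', game.winner)  # X -> Stalemate: the win is lost
```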
### 2.1.3 The Game Engine (just a function for now)
Next we'll create a function that runs game play using TicTacToe as an object that it passes around. I've already done the heavy lifting of replacing references to attributes (board, win_patterns) and methods (visualize_board, check_winning) to pass through the `TicTacToe` object. I also added the option for the user to quit the game by typing in `'q'` to the input line if they would like.
#### Q3 Add Condition for Stalemate
```
def play_game():
print("'X' will go first!")
tic_tac_toe = TicTacToe()
while True:
for player in (['X', 'O']):
tic_tac_toe.visualize_board()
move = input("{}, what's your move?".format(player))
####################################################################
# we're going to allow the user to quit the game from the input line
####################################################################
if move in ['q', 'quit']:
tic_tac_toe.winner = 'F'
                print('quitting the game')
break
move = int(move)
if tic_tac_toe.board[move] != ' ':
while True:
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
move = int(move)
if tic_tac_toe.board[move] != ' ':
continue
else:
break
tic_tac_toe.board[move] = player
            # the winner variable will now be checked within the board object
tic_tac_toe.check_winning()
##############################
# CALL check_stalemate() BELOW
##############################
tic_tac_toe.check_stalemate()
if tic_tac_toe.winner == '':
continue
##########################################################################
# write an elif statement that checks if self.winner is 'Stalemate' and
# subsequently visualizes the board and breaks out of the while loop
# also print out check_stalemate so the returned string is shown to the
# user
##########################################################################
elif tic_tac_toe.winner == 'Stalemate':
tic_tac_toe.visualize_board()
print(tic_tac_toe.check_stalemate())
break
else:
print(tic_tac_toe.check_winning())
tic_tac_toe.visualize_board()
break
if tic_tac_toe.winner != '':
break
```
Let's test our new module
```
play_game()
```
```
#========================================================================
# Copyright 2019 Science Technology Facilities Council
# Copyright 2019 University of Manchester
#
# This work is part of the Core Imaging Library developed by Science Technology
# Facilities Council and University of Manchester
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#=========================================================================
# please, ignore this cell for now
# here we import and set-up some utilities for this notebook
# some imports
# to fix compatibility issues
# between different versions of Python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# import utilities
# to minimise amount of code one needs to write to plot data,
# we implemented some plotting utilities
from utilities import plotter2D
# a small function to generate a sinogram
# imports
from ccpi.framework import ImageData, ImageGeometry
from ccpi.framework import AcquisitionData, AcquisitionGeometry
from ccpi.astra.operators import AstraProjectorSimple
import numpy as np
import matplotlib.pyplot as plt
# simulate 'ideal', i.e. noise-free, sino
def get_ideal_sino(data, N, n_angles):
# get ImageGeometry
ig = data.geometry
# Create AcquisitionGeometry
angles = np.linspace(0, np.pi, n_angles, dtype=np.float32)
ag = AcquisitionGeometry(geom_type="parallel",
dimension="2D",
angles=angles,
pixel_num_h=N)
dev = "cpu"
Aop = AstraProjectorSimple(ig, ag, dev)
sino = Aop.direct(data)
return sino.as_array()
```
# Framework basics
The goal of this notebook is to cover the Framework building blocks. We do not expect participants to step through all the cells. We suggest using this notebook as a reference for other exercises or for independent experimentation in the future.
## CT geometry
In conventional CT systems, an object is placed between a source emitting X-rays and a detector array measuring the X-ray transmission images of the incident X-rays. Typically, either the object is placed on a rotating sample stage and rotates with respect to the source-detector assembly, or the source-detector gantry rotates with respect to the stationary object. This arrangement results in so-called *circular scanning trajectory*. Depending on source and detector types, there are three conventional data acquisition geometries:
- parallel geometry (2D or 3D),
- fan-beam geometry, and
- cone-beam geometry.
*Parallel geometry*
Parallel beams of X-rays are emitted onto a 1D (single pixel row) or 2D detector array. This geometry is common for synchrotron sources. 2D parallel geometry is illustrated below.
<img src="figures/parallel.png" width=500 height=500 align="left">
3D parallel geometry is a stack of 2D slices each having 2D parallel geometry.
<img src="figures/parallel3d.png" width=600 height=600 align="left">
*Fan-beam geometry*
A single point-like X-ray source emits a cone beam that is collimated down to a fan illuminating a 1D detector pixel row. Collimation greatly reduces the amount of scatter radiation reaching the detector. Fan-beam geometry is used when scattering has a significant influence on image quality or when a single-slice reconstruction is sufficient.
<img src="figures/fan.png" width=500 height=500 align="left">
*Cone-beam geometry*
A single point-like X-ray source emits a cone beam onto 2D detector array. Cone-beam geometry is mainly used in lab-based CT instruments.
<img src="figures/cone.png" width=600 height=600 align="left">
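The line integrals behind these geometries can be simulated without any CT toolkit. Below is a rough NumPy/SciPy sketch of a parallel-beam projection, purely as an illustration of the idea (this is not how the Framework or ASTRA compute projections):

```python
import numpy as np
from scipy.ndimage import rotate

def parallel_projection(image, angle_deg):
    """One parallel-beam projection: rotate the object, then sum
    along the beam direction; each column becomes a detector pixel."""
    rotated = rotate(image, angle_deg, reshape=False, order=1)
    return rotated.sum(axis=0)

# A tiny phantom: a bright square on an empty background
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0

# One projection per angle gives a sinogram in [angle, horizontal] order
sinogram = np.stack([parallel_projection(phantom, a)
                     for a in np.linspace(0, 180, 90, endpoint=False)])
print(sinogram.shape)  # (90, 64)
```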
### Parallel geometry
#### AcquisitionGeometry and AcquisitionData
In the Framework, we implemented `AcquisitionGeometry` class to hold acquisition parameters and `ImageGeometry` to hold geometry of a reconstructed volume. Corresponding data arrays are wrapped as `AcquisitionData` and `ImageData` classes, respectively.
The simplest (of course from image processing point of view, not from physical implementation) geometry is the parallel geometry. Geometrical parameters for parallel geometry are depicted below:
<a id='2d_parallel_geometry'></a>
<img src="figures/parallel_geometry.png" width=600 height=600 align="left">
In the Framework, we define `AcquisitionGeometry` as follows.
```
# imports
from ccpi.framework import AcquisitionGeometry
import numpy as np
# acquisition angles
n_angles = 90
angles = np.linspace(0, np.pi, n_angles, dtype=np.float32)
# number of pixels in detector row
N = 256
# pixel size
pixel_size_h = 1
# create AcquisitionGeometry
ag_par = AcquisitionGeometry(geom_type='parallel',
dimension='2D',
angles=angles,
pixel_num_h=N,
pixel_size_h=pixel_size_h)
print('Acquisition geometry:\n{}'.format(ag_par))
```
`AcquisitionGeometry` contains only metadata; the actual data is wrapped in the `AcquisitionData` class. The `AcquisitionGeometry` class also holds information about the arrangement of the actual acquisition data array. We use the `dimension_labels` attribute to label the axes. The expected dimension labels are shown below:
<a id='2d_parallel_labels'></a>
<img src="figures/parallel_data.png" width=300 height=300 align="left">
The default order of dimensions for `AcquisitionData` is `[angle, horizontal]`, meaning that the number of elements along 0 and 1 axes in the acquisition data array is expected to be `n_angles` and `N`, respectively.
```
print('Dimension labels:\n{}'.format(ag_par.dimension_labels))
```
<a id='acquisition_data'></a>
To have consistent `AcquisitionData` and `AcquisitionGeometry`, we recommend allocating `AcquisitionData` using the `allocate` method of the `AcquisitionGeometry` class:
```
# imports
from ccpi.framework import AcquisitionData
# allocate AcquisitionData
ad_par = ag_par.allocate()
print('Dimensions and Labels = {}, {}'.format(ad_par.shape, ad_par.dimension_labels))
```
Now we can pass an actual sinogram to `AcquisitionData`:
```
# imports
import matplotlib.pyplot as plt
%matplotlib inline
from ccpi.framework import TestData
import os, sys
# load test image
# initialise loader
loader = TestData(data_dir=os.path.join(sys.prefix, "share", "ccpi"))
# load data
data = loader.load(TestData.SIMPLE_PHANTOM_2D, size = (N, N))
# scale data
data *= 2.5 / N
ad_par.fill(get_ideal_sino(data, N, n_angles))
# show sinogram
plotter2D([data, ad_par],
["Image", "Sinogram"],
fix_range=False,
stretch_y=False)
```
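The helpers `get_ideal_sino` and `plotter2D` used above are course-specific utilities defined elsewhere in the training material. A minimal parallel-beam sinogram can nevertheless be sketched with `scipy.ndimage.rotate`: for each projection angle, rotate the image and sum along one axis (a discrete approximation of the Radon transform; function and variable names here are illustrative, not part of the Framework).

```python
import numpy as np
from scipy.ndimage import rotate

def ideal_sinogram(image, angles_deg):
    # Discrete parallel-beam Radon transform: rotate, then sum columns.
    return np.stack([
        rotate(image, angle, reshape=False, order=1).sum(axis=0)
        for angle in angles_deg
    ])

# Small square phantom; one projection row per angle -> shape (n_angles, N)
phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0
sino = ideal_sinogram(phantom, np.linspace(0, 180, 10, endpoint=False))
```

The resulting array already has the `[angle, horizontal]` layout expected by `AcquisitionData`.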
#### ImageGeometry and ImageData
To store reconstruction results, we implemented two classes: `ImageGeometry` and `ImageData`. Similar to `AcquisitionData` and `AcquisitionGeometry`, we first define 2D [`ImageGeometry`](#2d_parallel_geometry) and then allocate [`ImageData`](#2d_parallel_labels).
```
# imports
from ccpi.framework import ImageData, ImageGeometry
# define 2D ImageGeometry
# given AcquisitionGeometry ag_par, default parameters for corresponding ImageData
ig_par = ImageGeometry(voxel_num_y=ag_par.pixel_num_h,
voxel_size_x=ag_par.pixel_size_h,
voxel_num_x=ag_par.pixel_num_h,
voxel_size_y=ag_par.pixel_size_h)
# allocate ImageData filled with 0 values with the specific geometry
im_data1 = ig_par.allocate()
# allocate ImageData filled with random values with the specific geometry
im_data2 = ig_par.allocate('random', seed=5)
print('Allocate with zeros \n{}\n'.format(im_data1.as_array()))
print('Allocate with random numbers in [0,1] \n{}\n'.format(im_data2.as_array()))
print('Dimensions and Labels = {} , {}'.format(im_data1.shape, im_data1.dimension_labels))
```
The default parameters are recommended to fully exploit the detector resolution, but they can be chosen based on other considerations. For instance, to reconstruct on a coarser grid:
```
ig_par1 = ImageGeometry(voxel_num_y=ag_par.pixel_num_h // 2,
voxel_size_x=ag_par.pixel_size_h * 2,
voxel_num_x=ag_par.pixel_num_h // 2,
voxel_size_y=ag_par.pixel_size_h * 2)
im_data3 = ig_par1.allocate('random', seed=5)
print('Dimensions and Labels = {} , {}'.format(im_data3.shape, im_data3.dimension_labels))
```
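Halving the voxel count while doubling the voxel size corresponds to 2×2 binning of the image. A minimal numpy sketch (names illustrative):

```python
import numpy as np

def rebin2x2(img):
    # Average non-overlapping 2x2 blocks; assumes even dimensions.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
coarse = rebin2x2(img)   # shape (2, 2); the overall mean is unchanged
```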
### 3D parallel, fan-beam and cone-beam geometries
Fan-beam, cone-beam and 3D (multi-slice) parallel geometry can be set up similarly to 2D parallel geometry.
#### 3D parallel geometry
Geometrical parameters and dimension labels for 3D parallel beam geometry:
<img src="figures/parallel3d_geometry.png" width=700 height=700 align="left">
<img src="figures/parallel3d_data.png" width=600 height=600 align="left">
```
# set-up 3D parallel beam AcquisitionGeometry
# physical pixel size
pixel_size_h = 1
ag_par_3d = AcquisitionGeometry(geom_type='parallel',
dimension='3D',
angles=angles,
pixel_num_h=N,
pixel_size_h=pixel_size_h,
pixel_num_v=N,
pixel_size_v=pixel_size_h)
print('3D parallel-beam acquisition geometry:\n{}'.format(ag_par_3d))
print('Dimension labels:\n{}'.format(ag_par_3d.dimension_labels))
```
Given `ag_par_3d` acquisition geometry, default `ImageGeometry` parameters can be set up as follows:
```
# set-up 3D parallel beam ImageGeometry
ig_par_3d = ImageGeometry(voxel_num_x=ag_par_3d.pixel_num_h,
voxel_size_x=ag_par_3d.pixel_size_h,
voxel_num_y=ag_par_3d.pixel_num_h,
voxel_size_y=ag_par_3d.pixel_size_h,
voxel_num_z=ag_par_3d.pixel_num_v,
voxel_size_z=ag_par_3d.pixel_size_v)
print('3D parallel-beam image geometry:\n{}'.format(ig_par_3d))
print('Dimension labels:\n{}'.format(ig_par_3d.dimension_labels))
```
<a id='fan_beam_geometry'></a>
#### Fan-beam geometry
Geometrical parameters and dimension labels for fan-beam geometry:
<img src="figures/fan_geometry.png" width=700 height=700 align="left">
<img src="figures/fan_data.png" width=450 height=450 align="left">
```
# set-up fan-beam AcquisitionGeometry
# distance from source to center of rotation
dist_source_center = 200.0
# distance from center of rotation to detector
dist_center_detector = 300.0
# physical pixel size
pixel_size_h = 2
ag_fan = AcquisitionGeometry(geom_type='cone',
dimension='2D',
angles=angles,
pixel_num_h=N,
pixel_size_h=pixel_size_h,
dist_source_center=dist_source_center,
dist_center_detector=dist_center_detector)
print('Fan-beam acquisition geometry:\n{}'.format(ag_fan))
print('Dimension labels:\n{}'.format(ag_fan.dimension_labels))
```
Given `ag_fan` acquisition geometry, default `ImageGeometry` parameters can be set up as follows:
```
# set-up fan-beam ImageGeometry
# calculate geometrical magnification
mag = (ag_fan.dist_source_center + ag_fan.dist_center_detector) / ag_fan.dist_source_center
ig_fan = ImageGeometry(voxel_num_x=ag_fan.pixel_num_h,
voxel_size_x=ag_fan.pixel_size_h / mag,
voxel_num_y=ag_fan.pixel_num_h,
voxel_size_y=ag_fan.pixel_size_h / mag)
print('Fan-beam image geometry:\n{}'.format(ig_fan))
print('Dimension labels:\n{}'.format(ig_fan.dimension_labels))
```
<a id='cone_beam_geometry'></a>
#### Cone-beam geometry
Geometrical parameters and dimension labels for cone-beam geometry:
<img src="figures/cone_data.png" width=650 height=650 align="left">
<img src="figures/cone_geometry.png" width=800 height=800 align="left">
```
# set-up cone-beam geometry
# distance from source to center of rotation
dist_source_center = 200.0
# distance from center of rotation to detector
dist_center_detector = 300.0
# physical pixel size
pixel_size_h = 2
ag_cone = AcquisitionGeometry(geom_type='cone',
dimension='3D',
angles=angles,
pixel_num_h=N,
pixel_size_h=pixel_size_h,
pixel_num_v=N,
pixel_size_v=pixel_size_h,
dist_source_center=dist_source_center,
dist_center_detector=dist_center_detector)
print('Cone-beam acquisition geometry:\n{}'.format(ag_cone))
print('Dimension labels:\n{}'.format(ag_cone.dimension_labels))
```
Conventionally, the voxel size is related to the detector pixel size through a geometrical magnification:
$$F = \frac{r_1 + r_2}{r_1}$$
where $r_1$ and $r_2$ are `dist_source_center` and `dist_center_detector`, respectively.
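Plugging in the distances used in this notebook ($r_1 = 200$, $r_2 = 300$, detector pixel size 2) as a quick numeric check:

```python
# Quick check of the magnification formula F = (r1 + r2) / r1
r1 = 200.0          # dist_source_center
r2 = 300.0          # dist_center_detector
pixel_size_h = 2.0  # physical detector pixel size

F = (r1 + r2) / r1
voxel_size = pixel_size_h / F  # detector pixel size scaled back to the object
```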
Given `ag_cone` acquisition geometry, default `ImageGeometry` parameters can be calculated as follows:
```
# cone-beam ImageGeometry
# calculate geometrical magnification
mag = (ag_cone.dist_source_center + ag_cone.dist_center_detector) / ag_cone.dist_source_center
ig_cone = ImageGeometry(voxel_num_x = ag_cone.pixel_num_h,
voxel_size_x = ag_cone.pixel_size_h / mag,
voxel_num_y = ag_cone.pixel_num_h,
voxel_size_y = ag_cone.pixel_size_h / mag,
voxel_num_z = ag_cone.pixel_num_v,
voxel_size_z = ag_cone.pixel_size_v / mag)
print('Cone-beam image geometry:\n{}'.format(ig_cone))
print('Dimension labels:\n{}'.format(ig_cone.dimension_labels))
```
## Manipulating AcquisitionData and ImageData
### Methods
`AcquisitionData` and `ImageData` inherit from the same parent `DataContainer` class and therefore behave the same way.
The following algebraic operations are defined for both `AcquisitionData` and `ImageData`:
binary operations (between two DataContainers or scalar and DataContainer)
- \+ addition
- \- subtraction
- \/ division
- \* multiplication
- \** power
- maximum
- minimum
in-place operations
- \+=
- \-=
- \*=
- \**=
- /=
unary operations
- abs
- sqrt
- sign
- conjugate
reductions
- minimum
- maximum
- sum
- norm
- dot product
Here are examples of the operations listed above for `AcquisitionData`:
```
a = ag_par.allocate(2)
b = ag_par.allocate(3)
print('a \n{}\n'.format(a.as_array()))
print('b \n{}\n'.format(b.as_array()))
## binary operations
c = a + b
print('a + b \n{}\n'.format(c.as_array()))
# or alternatively
c = a.add(b)
print('a + b \n{}\n'.format(c.as_array()))
d = 3 ** a
print('3 ** a \n{}\n'.format(d.as_array()))
e = a ** b
print('a ** b \n{}\n'.format(e.as_array()))
f = a.maximum(b)
print('max(a,b) \n{}\n'.format(f.as_array()))
## in-place operations
b **= b
print('b ** b \n{}\n'.format(b.as_array()))
a += b
print('a + b \n{}\n'.format(a.as_array()))
# or alternatively
a.add(b, out = a)
print('a + b \n{}\n'.format(a.as_array()))
## unary operation
g = a.sign()
print('sign(a) \n{}\n'.format(g.as_array()))
## reductions
h = a.norm()
print('norm(a) \n{}\n'.format(h))
i = a.dot(b)
print('dot(a,b) \n{}\n'.format(i))
```
A few examples for `ImageData`:
```
a = ig_par.allocate(2)
b = ig_par.allocate(3)
print('a \n{}\n'.format(a.as_array()))
print('b \n{}\n'.format(b.as_array()))
c = a + b
print('a + b \n{}\n'.format(c.as_array()))
b **= b
print('b ** b \n{}\n'.format(b.as_array()))
a += b
print('a + b \n{}\n'.format(a.as_array()))
g = a.sign()
print('sign(a) \n{}\n'.format(g.as_array()))
h = a.norm()
print('norm(a) \n{}\n'.format(h))
```
<a id='image_data'></a>
The Framework provides a number of test images:
- BOAT = 'boat.tiff'
- CAMERA = 'camera.png'
- PEPPERS = 'peppers.tiff'
- RESOLUTION_CHART = 'resolution_chart.tiff'
- SIMPLE_PHANTOM_2D = 'hotdog'
- SHAPES = 'shapes.png'
Here we load a 'hotdog' image (a simple CT phantom consisting of 2 materials).
```
# imports
from ccpi.framework import TestData
import os, sys
# initialise loader
loader = TestData(data_dir=os.path.join(sys.prefix, 'share','ccpi'))
data = loader.load(TestData.SIMPLE_PHANTOM_2D, size=(200,200))
# get ImageGeometry
ig = data.geometry
print('ImageGeometry:\n{}'.format(ig))
print('Dimensions and Labels = {}, {}'.format(data.shape, data.dimension_labels))
# plot data
plotter2D(data,
"Image",
cmap="gray")
```
`AcquisitionData` and `ImageData` provide a simple `subset` method to produce a subset of the data based on the axes we would like to keep.
```
# transpose data using subset method
data_subset = data.subset(['horizontal_y', 'horizontal_x'])
plotter2D(data_subset,
"Transposed image",
cmap = "gray")
# extract single row
data_profile = data_subset.subset(horizontal_y=100)
plt.plot(data_profile.as_array())
plt.show()
```
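In plain numpy terms, the two subset calls above amount to a transpose followed by fixing one index (a sketch only; the Framework's `subset` additionally keeps the geometry metadata in sync):

```python
import numpy as np

data = np.arange(12, dtype=float).reshape(3, 4)  # axes: (horizontal_x, horizontal_y)
transposed = data.T      # like subset(['horizontal_y', 'horizontal_x'])
profile = transposed[1]  # like subset(horizontal_y=1): fix one index, get a 1D profile
```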
A middle slice of [cone-beam data](#cone_beam_geometry) has [fan-beam geometry](#fan_beam_geometry). Get middle slice from cone-beam `AcquisitionData` using the subset method:
```
# allocate cone-beam acquisition data
ad_cone = ag_cone.allocate('random', seed=5)
print('3D cone image geometry:\n {}'.format(ad_cone.geometry))
# extract middle-slice
# AcquisitionData container
ad_middle_slice = ad_cone.subset(vertical=ad_cone.geometry.pixel_num_v // 2)
# and corresponding AcquisitionGeometry
ag_middle_slice = ad_middle_slice.geometry
print('Dimensions and Labels = {}, {}\n'.format(ad_middle_slice.shape, ad_middle_slice.dimension_labels))
print('Middle slice acquisition geometry:\n {}'.format(ag_middle_slice))
```
### Read/write AcquisitionData and ImageData
The Framework provides classes to read and write `AcquisitionData` and `ImageData` as NEXUS files.
```
# imports
from ccpi.io import NEXUSDataWriter, NEXUSDataReader
# initialise NEXUS Writer
writer = NEXUSDataWriter()
writer.set_up(file_name='/home/sirfuser/tmp_nexus.nxs',
data_container=ad_middle_slice)
# write data
writer.write_file()
# read data
# initialize NEXUS reader
reader = NEXUSDataReader()
reader.set_up(nexus_file='/home/sirfuser/tmp_nexus.nxs')
# load data
ad1 = reader.load_data()
# get AcquisitionGeometry
ag1 = reader.get_geometry()
print('Dimensions and Labels = {}, {}\n'.format(ad1.shape, ad1.dimension_labels))
print('Acquisition geometry:\n {}'.format(ag1))
```
### Processor
A `Processor` takes a `DataContainer` as input and returns either another `DataContainer` or a number. The aim of this class is to simplify the writing of processing pipelines.
#### Resizer
Quite often we need to either crop or downsample data; `Resizer` provides a convenient way to perform these operations for both `ImageData` and `AcquisitionData`. Here we use the image from this [example](#image_data).
```
# imports
from ccpi.processors import Resizer
# show test image
plotter2D(data,
"Test image",
cmap="gray")
# crop ImageData along 1st dimension
# initialise Resizer
resizer_crop = Resizer(binning = [1, 1], roi = [-1, (20,180)])
# pass DataContainer
resizer_crop.input = data
data_cropped = resizer_crop.process()
# get new ImageGeometry
ig_data_cropped = data_cropped.geometry
plotter2D([data, data_cropped],
["Original image", "Cropped image"],
cmap="gray")
print('Original image, dimensions and Labels = {}, {}\n'.format(data.shape, data.dimension_labels))
print('Cropped image, dimensions and Labels = {}, {}\n'.format(data_cropped.shape, data_cropped.dimension_labels))
print('Original ImageGeometry:\n{}'.format(ig))
print('Cropped ImageGeometry:\n{}'.format(ig_data_cropped))
# re-bin ImageData
# initialise Resizer
resizer_rebin = Resizer(binning = [3, 5], roi = -1)
# pass the image
resizer_rebin.input = data
data_rebinned = resizer_rebin.process()
# get new ImageGeometry
ig_data_rebinned = data_rebinned.geometry
plotter2D([data, data_rebinned],
["Original image", "Rebinned image"],
cmap="gray")
print('Original image, dimensions and Labels = {}, {}\n'.format(data.shape, data.dimension_labels))
print('Rebinned image, dimensions and Labels = {}, {}\n'.format(data_rebinned.shape, data_rebinned.dimension_labels))
print('Original ImageGeometry:\n{}'.format(ig))
print('Rebinned ImageGeometry:\n{}'.format(ig_data_rebinned))
```
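`Resizer`'s `roi` argument maps onto plain slicing: `roi = [-1, (20, 180)]` keeps all of axis 0 and a 160-pixel range of axis 1. A numpy sketch of the crop (the exact endpoint convention is `Resizer`'s; here it is assumed half-open):

```python
import numpy as np

data = np.zeros((200, 200))
# roi = [-1, (20, 180)]: axis 0 untouched, axis 1 cropped to [20, 180)
cropped = data[:, 20:180]
```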
#### Calculation of Center of Rotation
In an ideally aligned CT instrument, the orthogonal projection of the axis of rotation onto the detector coincides with the vertical midline of the detector. This is rarely achieved in practice due to misalignment and/or kinematic errors in the positioning of CT instrument components. A slight offset of the center of rotation from its theoretical position contributes to a loss of resolution; in more severe cases, it causes severe artifacts (double borders) in the reconstructed volume. `CenterOfRotationFinder` estimates the offset of the center of rotation from its theoretical position. In the current release of the Framework, `CenterOfRotationFinder` supports only parallel geometry. Here we use the sinogram from this [example](#acquisition_data).
`CenterOfRotationFinder` is based on [Nghia Vo's method](https://doi.org/10.1364/OE.22.019078).
```
# imports
from ccpi.processors import CenterOfRotationFinder
plotter2D(ad_par,
"Sinogram acquired with parallel geometry",
cmap="gray")
# initialise CenterOfRotationFinder
cor = CenterOfRotationFinder()
cor.set_input(ad_par)
center_of_rotation = cor.get_output()
print(center_of_rotation)
```
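Vo's method is considerably more robust, but the core idea for parallel geometry can be sketched simply: the 180° projection is a mirror image of the 0° projection, and a centre-of-rotation offset of c pixels shifts the mirrored profile by 2c, so cross-correlation recovers c. This is an illustrative sketch, not the Framework's implementation:

```python
import numpy as np

def cor_offset(p0, p180):
    # Mirror the 180-degree projection, then find the lag that best
    # aligns it with the 0-degree projection; the offset is half that lag.
    q = p180[::-1]
    corr = np.correlate(p0 - p0.mean(), q - q.mean(), mode='full')
    lag = corr.argmax() - (len(p0) - 1)
    return lag / 2.0

# Synthetic check: 64-pixel detector, rotation axis 4 px off the midline,
# so a feature at pixel 20 in p0 appears at 2*35.5 - 20 = 51 in p180.
x = np.arange(64)
p0 = np.exp(-(x - 20.0) ** 2 / 8.0)
p180 = np.exp(-(x - 51.0) ** 2 / 8.0)
```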
## Multi-channel data
Both `AcquisitionGeometry`/`AcquisitionData` and `ImageGeometry`/`ImageData` can be defined for multi-channel (spectral) CT data using the `channels` attribute.
```
# multi-channel fan-beam geometry
ag_fan_mc = AcquisitionGeometry(geom_type='cone',
dimension='2D',
angles=angles,
pixel_num_h=N,
pixel_size_h=1,
dist_source_center=200,
dist_center_detector=300,
channels=10)
print('Multi-channel fan-beam geometry:\n{}'.format(ag_fan_mc))
print('Number of channels:{}\n'.format(ag_fan_mc.channels))
# define multi-channel 2D ImageGeometry
ig3 = ImageGeometry(voxel_num_y=5,
voxel_num_x=4,
channels=2)
# create random integer in [0, max_value] ImageData with the specific geometry
im_data3 = ig3.allocate('random_int', seed=10, max_value=500)
print('Allocate with random integers \n{}\n'.format(im_data3.as_array()))
print('Dimensions and Labels = {}, {}'.format(im_data3.shape, im_data3.dimension_labels))
```
All the methods described for single-channel data in the sections above are also defined for multi-channel data.
```
# allocate multi-channel fan-beam acquisition data
ad_fan_mc = ag_fan_mc.allocate()
# extract single channel from multi-channel fan-beam data using subset method
ad_fan_sc = ad_fan_mc.subset(channel=5)
ag_fan_sc = ad_fan_sc.geometry
print('Geometry:\n{}'.format(ag_fan_sc))
print('Dimensions and Labels = {}, {}'.format(ad_fan_sc.shape, ad_fan_sc.dimension_labels))
```
# Dota 2 - Regression Analysis
* Trying to predict the total score of two teams based on full match data
# RUN THIS BLOCK ONCE
```
%%capture
!conda update --all -y
!pip install --upgrade setuptools
!pip install xgboost
!pip install keras
!pip install tensorflow
# API IMPORTS
from __future__ import print_function
import time
from pprint import pprint
import json
import ast
from sklearn.exceptions import DataConversionWarning
# BASIC IMPORTS
import warnings
warnings.filterwarnings("ignore", category=DataConversionWarning)
import pandas as pd
import numpy as np
import datetime as dt
import seaborn as sns
import matplotlib.pyplot as plt
from math import sqrt
from statistics import mean, stdev
from scipy.stats import iqr
#SKLEARN IMPORTS
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
# KERAS/TENSORFLOW IMPORTS
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from keras.callbacks import EarlyStopping
# XGBOOST IMPORTS
from xgboost import XGBClassifier
```
# Basic Stats
```
def simple_stats(df):
    sorted(df['total_score'])
    q1, q3 = np.percentile(df['total_score'], [25, 75])
    inter_range = iqr(df['total_score'])
    lower_bound = q1 - (1.5 * inter_range)
    upper_bound = q3 + (1.5 * inter_range)
    print("IQR: ", inter_range)
    print("LOW: ", lower_bound)
    print("HIGH: ", upper_bound)
    print("Q1: ", q1)
    print("Q3: ", q3)
    plt.violinplot(df['total_score'], showmedians=True)
    plt.ylabel("Total Score")
    plt.xticks([])
    plt.show()
```
## Compute Accuracy with Range
```
def accuracy_calc(yTest, predictions, spread):
    num_tests = yTest.size
    correct = 0
    for l1, l2 in zip(yTest, predictions):
        if abs(l1 - l2) < spread:
            correct = correct + 1
    return correct / num_tests
```
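For example, with plain lists rather than numpy arrays, the same idea reads (a standalone re-implementation for illustration):

```python
def accuracy_within(y_true, y_pred, spread):
    # Fraction of predictions within +/- spread of the true value.
    hits = sum(abs(t - p) < spread for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

score = accuracy_within([50, 60, 70], [52, 80, 69], 5)  # 2 of 3 within +/-5
```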
# Linear Regression
```
def lin_reg(df):
    x = df.drop(['total_score'], axis=1)
    y = df['total_score']
    xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=2/5)
    linearRegressor = LinearRegression()
    linearRegressor.fit(xTrain, yTrain)
    yPrediction = linearRegressor.predict(xTest)
    predictions = [round(value) for value in yPrediction]
    accuracy = accuracy_score(yTest, predictions)
    print("LINEAR REGRESSION")
    print("---------------------------------")
    print("Exact Answer Accuracy: %.2f%%" % (accuracy * 100.0))
    print("Accuracy with +/- 5 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 5) * 100.0))
    print("Accuracy with +/- 10 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 10) * 100.0))
    print("Accuracy with +/- 15 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 15) * 100.0))
    print("Root Mean Squared Error: %.2f" % sqrt(mean_squared_error(yTest, predictions)))
    print()
    print()
```
# Random Forest Regression
```
def rand_for(df):
    x = df.drop(['total_score'], axis=1)
    y = df['total_score']
    xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=2/5)
    forestRegressor = RandomForestRegressor(n_estimators=2000, max_depth=10)
    forestRegressor.fit(xTrain, yTrain)
    yPrediction = forestRegressor.predict(xTest)
    predictions = [round(value) for value in yPrediction]
    accuracy = accuracy_score(yTest, predictions)
    print("RANDOM FOREST REGRESSION")
    print("---------------------------------")
    print("Accuracy: %.2f%%" % (accuracy * 100.0))
    print("Accuracy with +/- 5 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 5) * 100.0))
    print("Accuracy with +/- 10 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 10) * 100.0))
    print("Accuracy with +/- 15 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 15) * 100.0))
    print("Root Mean Squared Error: %.2f" % sqrt(mean_squared_error(yTest, predictions)))
    print()
    print()
```
# XGBoost
```
def xg_reg(df):
    x = df.drop(['total_score'], axis=1)
    y = df['total_score']
    xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=2/5)
    # fit model on training data
    model = XGBClassifier(scale_pos_weight=1,
                          learning_rate=0.01,
                          subsample=0.8,
                          objective='reg:squarederror',
                          n_estimators=2000,
                          reg_alpha=0.3,
                          max_depth=40,
                          gamma=10)
    model.fit(xTrain, yTrain)
    # make predictions for test data
    yPrediction = model.predict(xTest)
    predictions = [round(value) for value in yPrediction]
    # evaluate predictions
    accuracy = accuracy_score(yTest, predictions)
    print("XGBOOST REGRESSION")
    print("---------------------------------")
    print("Accuracy: %.2f%%" % (accuracy * 100.0))
    print("Accuracy with +/- 5 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 5) * 100.0))
    print("Accuracy with +/- 10 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 10) * 100.0))
    print("Accuracy with +/- 15 Spread: %.2f%%" % (accuracy_calc(yTest, yPrediction, 15) * 100.0))
    print("Root Mean Squared Error: %.2f" % sqrt(mean_squared_error(yTest, predictions)))
    print()
    print()
```
# Neural Network
* Inspiration from: https://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/
```
def larger_model():
    # create model
    model = Sequential()
    model.add(Dense(10, input_dim=10, kernel_initializer='normal', activation='relu'))
    model.add(Dense(6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

def model_2():
    # create model
    model = Sequential()
    model.add(Dense(10, input_dim=10, kernel_initializer='normal', activation='relu'))
    model.add(Dense(50, kernel_initializer='normal', activation='relu'))
    model.add(Dense(75, kernel_initializer='normal', activation='relu'))
    model.add(Dense(15, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

def nn_reg(df):
    x = df.drop(['patch', 'total_score'], axis=1)
    y = df['total_score'].values
    sc = MinMaxScaler()
    x = sc.fit_transform(x.astype(float))
    y = y.reshape(-1, 1)
    y = sc.fit_transform(y.astype(float))
    es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=20)
    xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=2/5)
    model = KerasRegressor(build_fn=larger_model, batch_size=5, epochs=5000, verbose=0, callbacks=[es])
    model.fit(xTrain, yTrain)
    # make predictions for test data
    yPrediction = model.predict(xTest)
    # evaluate predictions
    #accuracy = accuracy_score(yTest, yPrediction)
    print("NEURAL NETWORK REGRESSION")
    print("---------------------------------")
    #print("Accuracy: %.2f%%" % (accuracy * 100.0))
    print("Root Mean Squared Error: %.2f" % sqrt(mean_squared_error(yTest, yPrediction)))
    print()
    print()

def nn_reg_model2(df):
    x = df.drop(['patch', 'total_score'], axis=1)
    y = df['total_score'].values
    sc = MinMaxScaler()
    x = sc.fit_transform(x.astype(float))
    y = y.reshape(-1, 1)
    y = sc.fit_transform(y.astype(float))
    es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=20)
    xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=2/5)
    model = KerasRegressor(build_fn=model_2, batch_size=5, epochs=5000, verbose=0, callbacks=[es])
    model.fit(xTrain, yTrain)
    # make predictions for test data
    yPrediction = model.predict(xTest)
    # evaluate predictions
    #accuracy = accuracy_score(yTest, yPrediction)
    print("NEURAL NETWORK REGRESSION - Model 2")
    print("---------------------------------")
    #print("Accuracy: %.2f%%" % (accuracy * 100.0))
    print("Root Mean Squared Error: %.2f" % sqrt(mean_squared_error(yTest, yPrediction)))
    print()
    print()
```
# Automate All Tests
```
def run_all(df):
    simple_stats(df)
    lin_reg(df)
    rand_for(df)
    xg_reg(df)
    nn_reg(df)
    nn_reg_model2(df)
```
### Data Dropping and Cleaning
* Removing all columns except for team scores and draft information
```
def drop_bulk(df):
    df = df.drop(axis=1, columns=['Unnamed: 0', 'Unnamed: 0.1', 'barracks_status_dire', 'barracks_status_radiant',
                                  'cluster', 'cosmetics', 'duration', 'radiant_xp_adv', 'region', 'series_id',
                                  'series_type', 'skill', 'start_time', 'throw', 'tower_status_dire',
                                  'tower_status_radiant', 'version', 'engine', 'first_blood_time', 'game_mode',
                                  'league', 'leagueid', 'lobby_type', 'loss', 'match_id', 'match_seq_num',
                                  'objectives', 'draft_timings', 'dire_team', 'radiant_gold_adv',
                                  'radiant_team', 'radiant_win'])
    df = df.dropna(axis=0)
    return df

def create_total(df):
    df["total_score"] = df["dire_score"] + df["radiant_score"]
    return df

def drop_bots(df):
    # Dropping rows that have a total score of 0 (extremely unlikely scenario, arguably impossible)
    # Dropping rows that do not have 10 human players participating
    # (means bots are playing and we do not want these included)
    df = df[df.total_score != 0]
    df = df[df.human_players == 10]
    return df

def collect_picks(row):
    json_string = row["picks_bans"]
    json_string = ast.literal_eval(json_string.replace("'", '"'))
    data = json.loads(json.dumps(json_string))
    picks = [data[v]['hero_id'] for v in range(len(data)) if data[v]['is_pick'] == True]
    return picks

def picks_wrapper(df):
    heroes_lists = []
    for index, row in df.iterrows():
        row = row.copy()
        picks = collect_picks(row)
        heroes_lists.append(picks)
    df['heroes'] = heroes_lists
    df = df.drop(axis=1, columns=['picks_bans'])
    return df

def columnize_heroes(df):
    df = df.drop(axis=1, columns=['dire_score', 'human_players', 'radiant_score'])
    df[['h1','h2','h3','h4','h5','h6','h7','h8','h9','h10']] = pd.DataFrame(df.heroes.values.tolist(),
                                                                            index=df.index)
    df = df.drop(axis=1, columns=['heroes'])
    return df

def clean(df):
    print(df.shape)
    # Call other functions
    df = drop_bulk(df)
    df = create_total(df)
    df = drop_bots(df)
    df = picks_wrapper(df)
    df = columnize_heroes(df)
    print(df.shape)
    return df
```
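`collect_picks` parses the stringified `picks_bans` list into hero ids. A self-contained check of the same idea on a synthetic two-entry draft (the hero ids here are made up; `picks_from_string` is an illustrative rename):

```python
import ast

def picks_from_string(picks_bans_str):
    # picks_bans is stored as the repr of a list of dicts; literal_eval
    # parses it, and we keep hero_id for pick (not ban) entries.
    entries = ast.literal_eval(picks_bans_str)
    return [e['hero_id'] for e in entries if e['is_pick']]

raw = "[{'is_pick': True, 'hero_id': 14, 'team': 0}, {'is_pick': False, 'hero_id': 86, 'team': 1}]"
picks = picks_from_string(raw)
```

Note that `ast.literal_eval` already yields Python objects, so the `json.dumps`/`json.loads` round-trip in the notebook's version is effectively a no-op.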
## Read CSVs
```
df_prof = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/raw_match_csvs/professional_raw_match_full.csv")
df_prof = clean(df_prof)
df_prof = df_prof.drop(['Unnamed: 0.1.1', 'Unnamed: 0.1.1.1', 'Unnamed: 0.1.1.1.1'], axis=1)
df_prof.head()
```
### See which patch versions we have
```
print(df_prof['patch'].unique())
```
### Get an idea for the amount of matches we have for each patch version
```
for i in range(min(df_prof['patch'].astype('int64')), max(df_prof['patch'].astype('int64')) + 1):
    print(df_prof.loc[df_prof['patch'] == i].shape)
#run_all(df_prof)
```
### Let's run our tests on patch versions where we have more than 2000 data points
```
for i in range(min(df_prof['patch'].astype('int64')), max(df_prof['patch'].astype('int64')) + 1):
    if df_prof.loc[df_prof['patch'] == i].shape[0] > 2000:
        print("PATCH VERSION: ", i)
        print("Dataset shape for patch version: ", df_prof.loc[df_prof['patch'] == i].shape)
        print("---------------------------------------------")
        run_all(df_prof.loc[df_prof['patch'] == i])
```
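The per-patch counting loop can equivalently be written with `groupby` (sketched on a toy frame):

```python
import pandas as pd

df = pd.DataFrame({'patch': [40, 40, 41, 41, 41],
                   'total_score': [50, 60, 55, 65, 70]})
counts = df.groupby('patch').size()
big_patches = counts[counts > 2].index.tolist()  # patches with > 2 matches
```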
Analysis of data from:
De Neuter, N., Bartholomeus, E., Elias, G., Keersmaekers, N., Suls, A., Jansens, H., … Ogunjimi, B. (2018). Memory CD4 + T cell receptor repertoire data mining as a tool for identifying cytomegalovirus serostatus. Genes & Immunity, 1. <https://doi.org/10.1038/s41435-018-0035-y>
Here's their description of the data:
> In this study, we collected peripheral blood samples from 9 CMV seropositive and 24 CMV seronegative healthy Belgian adults. We sequenced TCRβ sequences from the CD4+CD45RO+ lymphocyte population only, as opposed to the CD4+CD45RO+/− and CD8+CD45RO+/− lymphocyte populations collected in the original study [15], and thus focused solely on the immune signal within the CD4+ memory repertoire. After removal of out of frame TCR sequences, 2,204,828 distinct TCRβ sequences were obtained, with a mean of 66 813 sequences per individual.
I took these 33 repertoires and randomly split them into 22 training sets and then 11 testing sets.
The training sets were mixed and 10% used for actual training, with 90% available for validation.
(Note that the original data set has a few replicate samples from some of the repertoires, but I only selected one of the two replicates).
### Hey you, edit this path 👇 to point to your clone of the vampire repository
```
vampire_path = '/home/ematsen/re/vampire/'
suppressMessages(library(cowplot))
library(jsonlite)
library(latex2exp)
library(tools)
library(devtools)
suppressMessages(devtools::load_all(paste0(vampire_path, 'vampire/R/sumrep')))
theme_set(theme_minimal())
source_colors = c(basic = "#fc8d62", count_match = "#66c2a5", OLGA.Q ="#8da0cb", data = "#A3A3A3", train = "#444444")
json_path = 'input/_output_deneuter-2019-02-07/deneuter-2019-02-07.json'
data_dir = paste0(dirname(json_path),'/')
data_path = function(path) paste0(data_dir, path)
train_dir = str_replace(json_path, '.json', '.train/')
output_dir = 'output/'
data_json = fromJSON(json_path)
test_samples = lapply(
data_json$test_paths,
function(path) tools::file_path_sans_ext(basename(path)))
system(paste('mkdir -p ', output_dir))
output_path = function(path) paste0(output_dir, path)
# This notebook makes nice versions of plots in this directory:
normalizePath(output_dir)
```
The following command should run if the data is in the right place.
```
summarized = data_path('summarized.agg.csv')
system(paste('ls', summarized), intern=TRUE)
```
---
We will be comparing the VAE methods to OLGA + the thymic Q multiplier suggested by Thierry.
For simplicity we will be referring to this method as "OLGA.Q".
---
Here we set the regularization parameter beta.
```
our_beta = 0.75
fit_dir = paste0(train_dir, our_beta, '/')
fit_path = function(path) paste0(fit_dir, path)
```
---
Let's compare the CDR3 length distribution between the various programs and the data sets.
The programs will appear with thick colored lines, while the thin gray lines represent the test data sets.
```
prep_sumrep = function(path) {
df = read.csv(path, stringsAsFactors=FALSE)
colnames(df)[colnames(df) == 'amino_acid'] = 'junction_aa'
data.table(df)
}
named_summary = function(summary_fun, summary_name, path, data_source, data_group) {
df = data.frame(summary_fun(path))
colnames(df) = c(summary_name)
df$source = data_source
df$group = data_group
df
}
prep_summaries_general = function(data_paths, olga_str, basic_str, count_match_str, summary_fun, summary_name) {
aux = function(path, data_source, data_group) {
named_summary(summary_fun, summary_name, path, data_source, data_group)
}
data_df = do.call(rbind,
lapply(
data_paths,
function(path) aux(path, 'data', path)
))
df = rbind(
aux(fit_path(olga_str), 'olga', 'olga'),
aux(fit_path(basic_str), 'basic', 'basic'),
aux(fit_path(count_match_str), 'count_match', 'count_match')
)
df = rbind(df, data_df)
df$size = 1-as.numeric(df$source == 'data')
df
}
prep_summaries = function(summary_fun, summary_name) {
test_head_csvs =
lapply(test_samples, function(sample) data_path(paste0(sample, '/', sample, '.head.csv')))
prep_summaries_general(test_head_csvs, 'olga-generated.csv', 'basic/vae-generated.csv',
'count_match/vae-generated.csv', summary_fun, summary_name)
}
plot_summaries = function(df, summary_name, binwidth=1) {
theme_set(theme_minimal(base_size=18))
df[df$source == 'olga', 'source'] = 'OLGA.Q'
p = ggplot(df,
aes_string(summary_name, color='source', group='group', size='size')) +
geom_freqpoly(aes(y=..density..), binwidth=binwidth) +
scale_size(range=c(0.2, 1.2), guide='none') +
theme(legend.justification=c(0,1), legend.position=c(0,1)) +
scale_color_manual(values=source_colors) +
xlab(gsub('_', ' ', summary_name)) +
ylab('frequency')
ggsave(output_path(paste0(summary_name, '.svg')), width=8, height=4.5)
p
}
plot_summaries(prep_summaries(
function(path) getCDR3LengthDistribution(prep_sumrep(path), by_amino_acid = TRUE),
'CDR3_length'), 'CDR3_length')
plot_summaries(prep_summaries(
function(path) getChargeDistribution(prep_sumrep(path), column='junction_aa'),
'charge'),
'charge')
```
For each CDR3 sequence, let's look at the distance to the nearest neighbor CDR3 sequence.
```
plot_summaries(prep_summaries(
function(path) getNearestNeighborDistribution(prep_sumrep(path), column='junction_aa', tol=1e-6),
'nearest_neighbor_Levenshtein_distance'),
'nearest_neighbor_Levenshtein_distance')
```
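The nearest-neighbor summary above is computed by the sumrep package; as a hedged illustration of the underlying idea (not sumrep's implementation), here is a Python sketch that computes, for each sequence, the Levenshtein distance to its closest neighbor. The CDR3 strings are made up.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def nearest_neighbor_distances(seqs):
    # For each sequence, the edit distance to its closest other sequence.
    return [min(levenshtein(s, t) for j, t in enumerate(seqs) if j != i)
            for i, s in enumerate(seqs)]

seqs = ["CASSLGETQYF", "CASSLGDTQYF", "CASSPGQGYEQYF"]
print(nearest_neighbor_distances(seqs))
```

Note this all-pairs scan is quadratic in the number of sequences; sumrep subsamples large repertoires for exactly this reason.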
Now let's explore the distribution of pairwise distances between the CDR3 sequences.
Let's look at divergences from the test sets for TCR sequences generated by the various programs.
Each data point in these plots is a divergence between the simulated set of sequences and a collection of sequences from the test set.
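The specific divergence stored in summarized.agg.csv is defined by the sumrep pipeline, not in this notebook; as an illustration of the general idea, a Jensen–Shannon divergence between two empirical summary distributions can be computed as below (in Python; the histograms are invented).

```python
from math import log2

def jensen_shannon(p, q):
    # p, q: probability vectors over the same support.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence in bits; 0 * log(0) treated as 0.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy CDR3-length histograms, normalized to probabilities.
obs = [4, 10, 6]
sim = [5, 9, 6]
p = [x / sum(obs) for x in obs]
q = [x / sum(sim) for x in sim]
print(round(jensen_shannon(p, q), 4))
```

Jensen–Shannon divergence is symmetric and bounded in [0, 1] (in bits), which makes it convenient for comparing a simulated repertoire against each test set on a common scale.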
```
df = read.csv(summarized, stringsAsFactors=FALSE)
facet_labeller = function(s) {
s = sub("sumdiv_","",s)
s = gsub("_"," ",s)
s = sub("distance","dist",s)
s
}
compare_model_divergences = function(df, beta) {
df$synthetic = TRUE
df[df$model == 'train', ]$synthetic = FALSE
df = df[df$beta == beta,]
df[df$model == 'olga', 'model'] = 'OLGA.Q'
id_vars = c('test_set', 'model', 'synthetic')
measure_vars = grep('sumdiv_', colnames(df), value=TRUE)
df = df[c(id_vars, measure_vars)]
theme_set(theme_minimal())
ggplot(
melt(df, id_vars, measure_vars, variable.name='divergence_name', value.name='divergence'),
aes_string('model', 'divergence', color='model', shape='synthetic')
) + geom_point(position = position_jitterdodge(dodge.width=0.5, jitter.width=0.5)) +
facet_wrap(vars(divergence_name), nrow=3, scales='free', labeller=as_labeller(facet_labeller)) +
scale_y_log10() +
scale_shape_manual(values=c(3, 16)) +
theme(axis.text.x=element_blank(), axis.title.x = element_blank()) +
scale_color_manual(values=source_colors) +
labs(color='data source')
}
compare_model_divergences(df, our_beta)
ggsave(output_path('sumrep_divergences.svg'), width=8, height=4.5)
```
---
Now let's look at a more sophisticated way of evaluating sequences, namely Ppost.
If a synthetically generated sequence doesn't look like a real VDJ recombination, then Ppost will be low.
```
get_ppost = function(path) read.csv(path)$Ppost
test_ppost_csvs =
lapply(test_samples,
function(sample) paste0(train_dir, sample, '.head/ppost.csv'))
summaries = prep_summaries_general(
test_ppost_csvs, 'olga-generated.ppost.csv', 'basic/vae-generated.ppost.csv',
'count_match/vae-generated.ppost.csv', get_ppost, 'Ppost')
summaries$log_Ppost = log(summaries$Ppost)
plot_summaries(summaries, 'log_Ppost') + coord_cartesian(xlim=c(-50, -10)) + xlab(TeX("$\\log(P_{OLGA.Q})$"))
ggsave(output_path('log_Ppost.svg'), width=8, height=4.5)
```
What does Pvae look like?
```
get_pvae = function(path) read.csv(path)$log_p_x
test_pvae_csvs =
lapply(test_samples,
function(sample) fit_path(paste0('basic/',sample,'.head/test.pvae.csv')))
summaries = prep_summaries_general(
test_pvae_csvs, 'basic/olga-generated.pvae.csv', 'basic/vae-generated.pvae.csv',
'count_match/vae-generated.pvae.csv', get_pvae, 'log_Pvae')
plot_summaries(summaries, 'log_Pvae') + coord_cartesian(xlim=c(-50, -10)) + xlab(TeX("$\\log(P_{VAE})$"))
ggsave(output_path('log_Pvae.svg'), width=8, height=4.5)
```
Here we see the surprising result that Pvae is very similar between the VAE-generated sequences and the real sequences. That means that the VAE is not just memorizing the input sequences.
Also, we see that the VAE can tell the difference between OLGA-generated sequences and real sequences.
---
#### Latent space visualization
```
# This file is made by running the bottom cells of the python.ipynb notebook.
df = read.csv('output/pcs.csv')
v_genes = sort(unique(df$v_gene))
popular = data.frame(sort(table(df$v_gene), decreasing=TRUE))[1:7, 1]
v_plot = ggplot(df[df$v_gene %in% popular,], aes(pc_1, pc_2, color=v_gene)) +
geom_point(alpha=0.4) +
coord_fixed() +
scale_color_discrete(name="V gene") +
xlab("") + ylab("PC 2")
j_plot = ggplot(df, aes(pc_1, pc_2, color=j_gene)) +
geom_point(alpha=0.4) +
scale_color_discrete(name="J gene") +
coord_fixed() +
xlab("PC 1") + ylab("PC 2")
p = plot_grid(v_plot, j_plot, labels = c("(a)", "(b)"), ncol=1)
ggsave('output/gene_pca.png', plot=p, width=8, height=8)
df = read.csv('output/pcs_topgenes.csv')
p = ggplot(df, aes(pc_1, pc_2)) + ggtitle('TCRBV30-01 / TCRBJ01-02')
data.frame(number = seq_along(colnames(df)), name = colnames(df))
p + geom_point(aes_string(color = 'test_set'))
p + geom_point(aes_string(color = 'cdr3_length')) + scale_colour_viridis_c(option = "plasma")
lapply(colnames(df)[seq(5,20)], function(s) p + geom_point(aes_string(color = s)))
```
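The pcs.csv file loaded above is produced by the bottom cells of python.ipynb, which are not shown here. As a hedged sketch of how principal components of latent embeddings could be computed, the following NumPy snippet uses random data as a stand-in for the real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))          # stand-in for latent embeddings
Z = Z - Z.mean(axis=0)                  # center before PCA
cov = Z.T @ Z / (len(Z) - 1)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # reorder to descending variance
pcs = Z @ eigvecs[:, order[:2]]         # project onto first two PCs
print(pcs.shape)                        # (200, 2)
```

In practice the projected coordinates would be written out with the V gene, J gene, and CDR3 length of each sequence attached, which is what the plotting code above expects to find in pcs.csv.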
---
# Visualize and analyze differential expression data in a network
In analysis of differential expression data, it is often useful to analyze properties of the local neighborhood of specific genes. I developed a simple interactive tool for this purpose, which takes as input differential expression data and gene interaction data (from http://www.genemania.org/). The network is then plotted in an interactive widget, where the node properties, edge properties, and layout can be mapped to different network properties. The interaction type (of the 6 options from GeneMania) can also be selected.
This tool will also serve as an example for how to create, modify, visualize and analyze weighted and unweighted gene interaction networks using the highly useful and flexible python package NetworkX (https://networkx.github.io/)
This tool is most useful if you have a reasonably small list of genes (~100) with differential expression data, and want to explore properties of their interconnections and their local neighborhoods.
```
# import some useful packages
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
```
### Import a real network (from this experiment http://www.ncbi.nlm.nih.gov/sites/GDSbrowser?acc=GDS4419)
This experiment contains fold change information for genes in an experiment studying 'alveolar macrophage response to bacterial endotoxin lipopolysaccharide exposure in vivo'. We selected a list of genes from the experiment which had high differential expression, and were enriched for 'immune response' and 'response to external biotic stimulus' in the gene ontology. This experiment and gene list were selected purely as examples for how to use this tool for an initial exploration of differential expression data.
NOTE: change paths/filenames in this cell to apply network visualizer to other datasets. Network format comes from genemania (e.g. columns are 'Entity 1', 'Entity 2', 'Weight', 'Network_group', 'Networks')
NOTE: File format is tsv, and needs to contain columns for 'IDENTIFIER', 'DiffExp', and 'absDiffExp'. Other columns are optional
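To make the expected layout concrete, here is a minimal standard-library sketch of parsing such a file; the gene symbols and fold-change values are invented for illustration.

```python
import csv
import io

# Hypothetical minimal example of the expected TSV layout; only the
# 'IDENTIFIER', 'DiffExp', and 'absDiffExp' columns are required.
tsv_text = (
    "IDENTIFIER\tDiffExp\tabsDiffExp\n"
    "CXCL10\t2.5\t2.5\n"
    "CCL13\t-1.2\t1.2\n"
)
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
print([row["IDENTIFIER"] for row in rows])  # ['CXCL10', 'CCL13']
```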
```
dataDE = pd.read_csv('DE_data/DE_ayyagari_data_genename_foldchange.csv',sep='\t')
print(dataDE.head())
# genes in dataDE
gene_list = list(dataDE['IDENTIFIER'])
# only use the average fold-change (because there are multiple entries for some genes)
#dataDE_mean = dataDE.DiffExp.groupby(dataDE['IDENTIFIER']).mean()
dataDE_mean = dataDE['fold_change']
dataDE_mean.index=gene_list
print(dataDE_mean)
# load the gene-gene interactions (from genemania)
#filename = 'DE_data/DE_experiment_interactions.txt'
filename = 'DE_data/DE_ayyagari_interactions.txt'
DE_network = pd.read_csv(filename, sep='\t', header=6)
DE_network.columns = ['Entity 1','Entity 2', 'Weight','Network_group','Networks']
# create the graph, and add some edges (and nodes)
G_DE = nx.Graph()
idxCE = DE_network['Network_group']=='Co-expression'
edge_list = zip(list(DE_network['Entity 1'][idxCE]),list(DE_network['Entity 2'][idxCE]))
G_DE.add_edges_from(edge_list)
print('number of edges = ' + str(len(G_DE.edges())))
print('number of nodes = '+ str(len(G_DE.nodes())))
# create version with weighted edges
G_DE_w = nx.Graph()
edge_list_w = zip(list(DE_network['Entity 1']),list(DE_network['Entity 2']),list(DE_network['Weight']))
G_DE_w.add_weighted_edges_from(edge_list_w)
```
## Import plotting tool
(and reload it if changes have been made)
```
import imp
import plot_network
imp.reload(plot_network)
```
# Run the plotting tool on prepared data
### Description of options:
- **focal_node_name**: Select gene to focus on (a star will be drawn on this node)
- **edge_threshold**: Change the number of edges included in the network by moving the edge_threshold slider. Higher values of edge_threshold mean fewer edges will be included in the graph (and may improve interpretability). The threshold is applied to the 'Weight' column of DE_network, so the less strongly weighted edges are dropped as the threshold increases
- **network_algo**: Select the network algorithm to apply to the graph. Choices are:
- 'spl' (shortest path length): Plot the network in a circular tree layout, with the focal gene at the center, with nodes color-coded by log fold-change.
- 'clustering coefficient': Plot the network in a circular tree layout, with nodes color-coded by the local clustering coefficient (see https://en.wikipedia.org/wiki/Clustering_coefficient).
- 'pagerank': Plot the network in a spring layout, with nodes color-coded by page rank score (see https://en.wikipedia.org/wiki/PageRank for algorithm description)
- 'community': Group the nodes into communities using the Louvain modularity maximization algorithm, which finds groups of nodes that optimize modularity (a metric comparing the number of edges within communities to the number of edges between communities; see https://en.wikipedia.org/wiki/Modularity_(networks) for more information). The nodes are then color-coded by these communities, and the total modularity of the partition is printed above the graph (the maximal value is 1, indicating a perfectly modular network with no edges connecting communities). Below the network, the average fold-change in each community is shown with box plots, where the focal node's community is indicated by a white star and the colors of the boxes correspond to the colors of the communities above.
- **map_degree**: Choose whether to map the node degree to node size
- **plot_border_col**: Choose whether to plot the log fold-change as the node border color
- **draw_shortest_paths**: If checked, draw the shortest paths between the focal node and all other nodes in blue transparent line. More opaque lines indicate that section of path was traveled more often.
- **coexpression, colocalization, other, physical_interactions, predicted_interactions, shared_protein_domain**: Select whether to include interactions of these types (types come from GeneMania- http://pages.genemania.org/data/)
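The local clustering coefficient behind the 'clustering coefficient' option can be sketched in a few lines of plain Python (adjacency is a dict of neighbor sets; the gene names below are just placeholders borrowed from the examples section):

```python
def clustering_coefficient(adj, node):
    # Fraction of a node's neighbor pairs that are themselves connected.
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2 * links / (k * (k - 1))

adj = {
    "ADA":  {"MX1", "CD44"},
    "MX1":  {"ADA", "CD44"},
    "CD44": {"ADA", "MX1", "CD80"},
    "CD80": {"CD44"},
}
print(clustering_coefficient(adj, "ADA"))  # 1.0: its two neighbors are linked
```

In the tool itself this quantity comes from NetworkX (`nx.clustering`), which implements the same definition.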
```
from IPython.html.widgets import interact  # on modern installs: from ipywidgets import interact
from IPython.html import widgets  # on modern installs: import ipywidgets as widgets
import matplotlib.colorbar as cb
import seaborn as sns
import community
# import network plotting module
from plot_network import *
# temporary graph variable
Gtest = nx.Graph()
# check whether you have differential expression data
diff_exp_analysis=True
# replace G_DE_w with G_DE in these two lines if unweighted version is desired
Gtest.add_nodes_from(G_DE_w.nodes())
Gtest.add_edges_from(G_DE_w.edges(data=True))
# prep border colors
nodes = Gtest.nodes()
#gene_list = gene_list
if diff_exp_analysis:
diff_exp = dataDE_mean
genes_intersect = np.intersect1d(gene_list,nodes)
    border_cols = pd.Series(index=nodes, dtype=float)
for i in genes_intersect:
if diff_exp[i]=='Unmeasured':
border_cols[i] = np.nan
else:
border_cols[i] = diff_exp[i]
else: # if no differential expression data
border_cols = [None]
numnodes = len(Gtest)
# make these three global to feed into widget
global Gtest
global border_cols
global DE_network
def plot_network_shell(focal_node_name,edge_thresh=.5,network_algo='spl', map_degree=True,
plot_border_col=False, draw_shortest_paths=True,
coexpression=True, colocalization=True, other=False,physical_interactions=False,
predicted_interactions=False,shared_protein_domain=False):
# this is the main plotting function, called from plot_network module
fig = plot_network(Gtest, border_cols, DE_network,
focal_node_name, edge_thresh, network_algo, map_degree, plot_border_col, draw_shortest_paths,
coexpression, colocalization, other, physical_interactions, predicted_interactions, shared_protein_domain)
return fig
# threshold slider parameters
min_thresh = np.min(DE_network['Weight'])
max_thresh = np.max(DE_network['Weight']/10)
thresh_step = (max_thresh-min_thresh)/1000.0
interact(plot_network_shell, focal_node_name=list(np.sort(nodes)),
         edge_thresh=widgets.FloatSlider(min=min_thresh, max=max_thresh, step=thresh_step, value=min_thresh, description='edge threshold'),
network_algo = ['community','clustering_coefficient','pagerank','spl']);
```
# Some examples
First let's look at the graph when 'spl' (shortest path length) is selected as the network algo. ADA is the focal node in this case, and it has 4 nearest neighbors (MX1, CD44, FITM1, and CD80). Note that CD44 connects the focal node ADA to many other nodes in the network, as indicated by its opaque blue line. Also note that there is only one gene with a negative fold change in this gene set (CCL13). The white nodes are genes added by GeneMania: the 20 genes nearest to the input gene list.

## Community detection
When the network_algo button is switched to 'community', the Louvain modularity maximization algorithm runs on the network and partitions the nodes into communities which maximize the modularity. In this case (with CXCL10 as the focal node), the nodes are partitioned into 5 groups, with the three largest groups indicated by red, green, and teal circles. While you can see some support for this grouping by eye, the overall graph modularity is 0.33, which is a relatively low value. This means that although groups were found in the graph, the graph itself is not very modular. As a rule of thumb, very modular graphs have modularities of about 0.5 or 0.6.

Below the graph, there is a panel showing the average fold change for the nodes in each community. Since most of the nodes in the input gene list have positive fold changes here, all communities also have positive average fold changes. Were the input gene list to have fewer large fold changes, this panel would let you see whether a particular grouping of nodes had significantly higher (or lower) levels of differential expression than alternative groupings.

---
This notebook demonstrates the interface of the developed environment.
Replace the variable *PATH_TO_ROOT* with your local repository path to run the notebook.
```
import sys
PATH_TO_ROOT = 'C:/Users/walter/Desktop/git/AlphaBuilding-ResCommunity'
sys.path.insert(0,PATH_TO_ROOT)
import pandas as pd
import numpy as np
from gym_AlphaBuilding.envs import residential
from bin.util.distribution import utility
from bin.util.weather import noaa_weather
# Retrieve the Thermal Time Constant and Equivalent Temperature inferred from the Ecobee DYD Dataset
# Users can use their own data or use the provided function (different value for different state)
city = 'Berkeley'
state = 'CA'
ttc = utility.get_ttc(state)
teq = utility.get_teq(state)
# Retrieve the comfort temperature zone using the provided function
# Four methods supported: 'ASHRAE PMV', 'ASHRAE adaptive', 'Wang2020', and 'ResStock'
tsp, trange = utility.get_comfort_temp('cooling', 'Wang2020')
SAMPLE_SIZE = 100
STEP_SIZE = 15 # min
start_date = '2018-07-01'
final_date = '2018-08-01'
year = 2018
sim_horizon = (start_date, final_date)
# Download weather data from NOAA weather stations
# If you want to use your own weather data, skip this step
address = '{}, {}'.format(city, state)
station_ids, station_names = noaa_weather.find_closest_weather_station(noaa_weather.geocode_address(address))
# You might need to try a couple of weather stations, because some weather stations have large missing rate
weather = noaa_weather.download_weather(station_ids[0], year)
weather = weather.tz_convert('America/Los_Angeles').tz_localize(None) # remove tz-awareness
# truncate and resample the weather data to fit the simulation horizon and time step
weather_h = weather.resample('1H').mean() # hourly average to remove noise
# ambient_temp = weather_h.resample('{}min'.format(STEP_SIZE)).interpolate()[['Temperature']]
weather_h = weather_h.resample('{}min'.format(STEP_SIZE)).interpolate()
weather_h = weather_h.truncate(*sim_horizon)
# Make the time index match the simulation horizon and step size
weather_h = weather_h.resample('{}min'.format(STEP_SIZE)).interpolate()
weather_h.head()
# Set up the environment
env = residential.AlphaResEnv(sampleSize = SAMPLE_SIZE,
stepSize = STEP_SIZE,
simHorizon = (start_date, final_date),
ambientWeather = weather_h,
ttc = ttc,
teq = teq,
tsp = tsp,
trange = trange,
hvacMode = 'cooling only')
# Get the parameter of the environment
env.getParameter()
# Reset the environment to start simulation
obs = env.reset()
print(f'The length of the returned obs is {len(obs)}, which equals 3 + {SAMPLE_SIZE}')
# Select the control action: 1 for heating, 2 for cooling, 0 for free floating
acts = np.ones(100)*2
# One step simulation using the selected control action
obs, reward, done, comments = env.step(acts)
# Returned observations
obs
## Returned comments, including
# Summed electricity consumption during the last time step (15min in this case), unit: kWh
# Summed uncomfortable degree hours during the last time step, unit: degC*h
# Internal heat gain for each TCL
# Solar heat gain for each TCL
# Modelling uncertainty for each TCL, unit: degC
# Measurement error for each TCL, unit: degC
comments
# Returned reward, calculated as the weighted sum of energy consumption and uncomfortable degree hours using
# the costWeight input
# Users can use the observations and comments to re-calculate the rewards
reward
# Flag indicating whether the episode has ended or not
done
```
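The environment's internal thermal model is not shown in this notebook. A common assumption for thermostatically controlled loads parameterized by a thermal time constant (ttc) and an equivalent temperature (teq) is a first-order response, sketched below; the function name and the free-floating behavior are illustrative, and the actual AlphaResEnv dynamics may differ.

```python
from math import exp

def tcl_free_float(t_indoor, t_eq, ttc_hours, dt_hours):
    # First-order decay of indoor temperature toward the equivalent
    # temperature over one time step of length dt_hours.
    return t_eq + (t_indoor - t_eq) * exp(-dt_hours / ttc_hours)

t = 30.0  # degC, initial indoor temperature
for _ in range(8):  # eight 15-minute steps = 2 hours of free floating
    t = tcl_free_float(t, t_eq=24.0, ttc_hours=2.5, dt_hours=0.25)
print(round(t, 2))  # decays toward 24.0 from above
```

Under this assumption, a larger ttc means the home holds its temperature longer after HVAC switches off, which is exactly the flexibility such an environment is meant to capture.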
---