##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Transfer learning with a pretrained ConvNet
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network.
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You can either use the pre-trained model as-is, or use transfer learning to customize it to a given task.
The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, it will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.
In this notebook, you will try two ways to customize a pre-trained model:
1. Feature extraction: Use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pre-trained model so that you can repurpose the feature maps learned previously for the dataset.
You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final classification part of the pre-trained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained.
1. Fine-tuning: Unfreeze a few of the top layers of a frozen model base and jointly train both the newly added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model to make them more relevant for the specific task.
You will follow the general machine learning workflow:
1. Examine and understand the data
1. Build an input pipeline (in this case, using Keras ImageDataGenerator)
1. Compose the model
* Load the pre-trained base model (and pre-trained weights)
* Stack the classification layers on top
1. Train the model
1. Evaluate the model
```
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import matplotlib.pyplot as plt
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
keras = tf.keras
```
## Data preprocessing
### Data download
Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.
The `tfds` package is the easiest way to load pre-defined data. If you have your own data and want to load it with TensorFlow, see [loading image data](../load_data/images.ipynb).
```
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
```
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) sets of 80%, 10%, and 10% of the data, respectively.
```
(raw_train, raw_validation, raw_test), metadata = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
```
The resulting `tf.data.Dataset` objects contain (image, label) pairs, where the images have variable shape and 3 channels, and the labels are scalars.
```
print(raw_train)
print(raw_validation)
print(raw_test)
```
Show the first two images and labels from the training set:
```
get_label_name = metadata.features['label'].int2str
for image, label in raw_train.take(2):
plt.figure()
plt.imshow(image)
plt.title(get_label_name(label))
```
### Format the data
Use the `tf.image` module to format the images for the task.
Resize the images to a fixed input size, and rescale the input channels to a range of `[-1, 1]`.
<!-- TODO(markdaoust): fix the keras_applications preprocessing functions to work in tf2 -->
```
IMG_SIZE = 160 # All images will be resized to 160x160
def format_example(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
```
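As a quick standalone check of the rescaling in `format_example`: dividing by 127.5 and subtracting 1 maps 8-bit pixel values from `[0, 255]` onto `[-1, 1]` (a small numpy sketch, independent of the tutorial's dataset):

```python
import numpy as np

# 8-bit pixel values at the extremes and the midpoint
pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)

# The same rescaling used in format_example: [0, 255] -> [-1, 1]
scaled = (pixels / 127.5) - 1
print(scaled)  # [-1.  0.  1.]
```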
Apply this function to each item in the dataset using the map method:
```
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)
```
Now shuffle and batch the data.
```
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
```
Inspect a batch of data:
```
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
```
## Create the base model from the pre-trained convnets
You will create the base model from the MobileNet V2 model developed at Google. It is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories such as `jackfruit` and `syringe`. This base of knowledge will help us classify cats and dogs from our specific dataset.
First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will use the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer retains more generality than the final/top layer.
First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the **include_top=False** argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
```
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
```
This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. See what it does to an example batch of images:
```
feature_batch = base_model(image_batch)
print(feature_batch.shape)
```
## Feature extraction
In this step, you will freeze the convolutional base created in the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
### Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False freezes all of them.
```
base_model.trainable = False
# Let's take a look at the base model architecture
base_model.summary()
```
### Add a classification head
To generate predictions from the block of features, average over the 5x5 spatial locations using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features to a single 1280-element vector per image.
```
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
```
Apply a `tf.keras.layers.Dense` layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a `logit`, or raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
```
prediction_layer = keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
```
Now stack the feature extractor and these two layers using a `tf.keras.Sequential` model:
```
model = tf.keras.Sequential([
base_model,
global_average_layer,
prediction_layer
])
```
### Compile the model
You must compile the model before training it. Since there are two classes and the model provides a linear output, use the binary cross-entropy loss with `from_logits=True`.
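To illustrate what `from_logits=True` means, here is a small numpy sketch (numerically naive, and not the Keras implementation) that applies a sigmoid to the raw logit and then computes the binary cross-entropy:

```python
import numpy as np

def bce_from_logit(logit, label):
    """Binary cross-entropy for a single raw logit (naive sketch)."""
    p = 1.0 / (1.0 + np.exp(-logit))  # sigmoid turns the logit into P(class 1)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# A positive logit predicts class 1, so the loss is small when the label is 1
print(round(bce_from_logit(2.0, 1), 4))  # 0.1269
print(round(bce_from_logit(2.0, 0), 4))  # 2.1269
```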
```
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
```
The 2.5M parameters of MobileNet are frozen, but there are 1.2K _trainable_ parameters in the Dense layer. These are divided between two tf.Variable objects: the weights and the biases.
```
len(model.trainable_variables)
```
### Train the model
After training for 10 epochs, you should see ~96% accuracy.
```
initial_epochs = 10
validation_steps = 20
loss0, accuracy0 = model.evaluate(validation_batches, steps=validation_steps)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
history = model.fit(train_batches,
epochs=initial_epochs,
validation_data=validation_batches)
```
### Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNet V2 base model as a fixed feature extractor.
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
```
Note: The validation metrics are clearly better than the training metrics because layers like `tf.keras.layers.BatchNormalization` and `tf.keras.layers.Dropout` affect accuracy during training. They are turned off when calculating validation loss.
To a lesser extent, it is also because training metrics report the average over an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.
## Fine tuning
In the feature-extraction experiment, you only trained a few layers on top of the MobileNet V2 base model. The weights of the pre-trained network were **not** updated during training.
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset.
Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights of the classifier) and your pre-trained model will forget what it has learned.
Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features become increasingly specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwriting the generic learning.
### Un-freeze the top layers of the model
All you need to do is unfreeze the `base_model` and set the bottom layers to be un-trainable. Then you should recompile the model (necessary for these changes to take effect), and resume training.
```
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
```
### Compile the model
Compile the model using a much lower learning rate.
```
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10),
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
```
### Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
```
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_batches,
epochs=total_epochs,
initial_epoch = history.epoch[-1],
validation_data=validation_batches)
```
Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNet V2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting because the new training set is relatively small and similar to the original MobileNet V2 dataset.
After fine tuning, the model nearly reaches 98% accuracy.
```
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
```
## Summary:
* **Using a pre-trained model for feature extraction**: When working with a small dataset, it is common practice to take advantage of features learned by a model trained on a larger dataset in the same domain. This is done by instantiating the pre-trained model and adding a fully-connected classifier on top. The pre-trained model is "frozen" and only the weights of the classifier get updated during training.
In this case, the convolutional base extracts all the features associated with each image, and you train a classifier that determines the image class from that set of extracted features.
* **Fine-tuning a pre-trained model**: To further improve performance, you can repurpose the top-level layers of the pre-trained model to the new dataset via fine-tuning.
In this case, you tune the weights such that the model learns high-level features specific to the dataset. This technique is usually recommended when the training dataset is large and very similar to the original dataset that the pre-trained model was trained on.
A project by the **paranormal** team for the homework assignment of the **МТС.Тета** Summer School, Machine Learning track
#### Loading and configuring the required libraries
```
import pickle
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from scipy.stats import pointbiserialr
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
sns.set_theme(style='whitegrid', palette='deep')
warnings.filterwarnings('ignore')
```
## 1. Data analysis
### 1.1. Dataset preprocessing
```
# load the data
data = pd.read_csv('data/diabetes.csv')
# drop duplicates
data = data.drop_duplicates()
# bring the column names to a consistent style
data.columns = [c.replace(' ', '_').lower() for c in data.columns]
# replace 'Female', 'No' and 'Negative' with 0, and 'Male', 'Yes' and 'Positive' with 1
data = data.replace(["Yes", 'No', 'Male', 'Female', 'Positive', 'Negative'], [1, 0, 1, 0, 1, 0])
# keep a copy of the loaded data in a separate dataset
df_diabetes = data.copy()
df_diabetes.head(5)
```
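As a standalone sketch of the two preprocessing steps above, applied to a toy frame (hypothetical column names, not the actual diabetes data):

```python
import pandas as pd

# Toy frame with hypothetical columns, not the actual dataset
toy = pd.DataFrame({'Gender': ['Male', 'Female'],
                    'Polyuria': ['Yes', 'No'],
                    'Class': ['Positive', 'Negative']})

# Same two steps as above: normalize column names, then recode to 0/1
toy.columns = [c.replace(' ', '_').lower() for c in toy.columns]
toy = toy.replace(['Yes', 'No', 'Male', 'Female', 'Positive', 'Negative'],
                  [1, 0, 1, 0, 1, 0])
print(toy.to_dict('list'))  # {'gender': [1, 0], 'polyuria': [1, 0], 'class': [1, 0]}
```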
#### Variables:
- polyuria: excessive urine production
- polydipsia: insatiable thirst
- sudden weight loss
- weakness
- polyphagia: excessive appetite
- genital thrush
- visual blurring
- itching
- irritability
- delayed healing (of wounds)
- partial paresis: partial loss of muscle strength
- muscle stiffness
- alopecia: hair loss
- obesity
### 1.2. Exploratory data analysis
```
print('Unique values of the variables')
for col in df_diabetes.columns:
print(col, df_diabetes[col].unique())
df_diabetes.info()
```
<div class="alert alert-block alert-info"><b>
There are no missing values, so no missing-value handling is needed.
</b></div>
```
df_diabetes.describe()
```
<div class="alert alert-block alert-info"><b>
<p>The dataset contains patients aged 16 to 90; the median age is 48 and the mean is 48.9 years.</p>
<p>All other variables are binary.</p>
<p>The target class is reasonably balanced: 69% vs. 31%.</p>
</b></div>
```
sns.histplot(df_diabetes['age'], bins=20, kde=True);  # distplot is deprecated in recent seaborn
sns.displot(data=data, x='age', hue='class', kde=True);
sns.pairplot(df_diabetes, hue='class', corner=True);
sns.pairplot(df_diabetes[['gender', 'class']], hue='class');
round(df_diabetes[df_diabetes['class'] == 1].groupby(['gender'])['weakness'].count() / df_diabetes[df_diabetes['class'] == 1]['weakness'].count() * 100, 2)
fig, ax = plt.subplots(figsize=(15,12))
sns.heatmap(df_diabetes.corr(method='pearson'), center=0, square=False, annot=True, ax=ax);
pointbiserialr(df_diabetes.iloc[:, 1], df_diabetes.age)
```
<div class="alert alert-block alert-info"><b>
Key findings
</b></div>
<div class="alert alert-block alert-info"><b>
1) Diabetes, especially type 2, is more common among men than among women. https://www.news-medical.net/health/Diabetes-in-Men-versus-Women.aspx
2) The target class correlates strongly with the polyuria and polydipsia variables. https://www.jdrf.org/t1d-resources/about/symptoms/frequent-urination/
3) The target class also correlates with sudden weight loss. https://www.medicinenet.com/is_weight_loss_caused_by_diabetes_dangerous/ask.htm
4) The features listed in the source data (polyuria, polydipsia, sudden weight loss, weakness, excessive appetite, obesity, itching, etc.) are symptoms of diabetes mellitus. Note that the more advanced the stage of diabetes, the more pronounced the symptoms become.
5) Polyuria is included as a feature, but nighttime bed-wetting is also possible. Features such as numbness and tingling in the hands and feet, increased sweating, fatigue, lack of energy, severe tiredness, and dry mouth caused by thirst could also be added.
6) A model can be built on the available data. In the future, the symptoms listed above could be added, and the geography of data collection could be expanded.
7) The features do not contradict one another; the data are consistent with the hypothesis.
</b></div>
## 2. Modeling
<div class="alert alert-block alert-info"><b>
To solve the task, we tried several machine learning methods, including logistic regression, gradient boosting, and random forest. Random forest achieved the best result on our data by the F1 metric. Only the code of the final model is shown here.
</b></div>
```
X, y = df_diabetes.drop('class', axis=1), df_diabetes['class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, shuffle=True)
param_grid = {
'n_estimators': np.arange(5, 51, 15),
'max_depth': np.arange(5, 51, 15),
'min_samples_split': np.arange(2, 11, 4),
'min_samples_leaf': np.arange(1, 10, 4),
'max_samples': np.arange(0.1, 0.99, 0.23),
}
%%time
rf = RandomForestClassifier(n_jobs=-1, random_state=42)
cv = GridSearchCV(rf, param_grid, cv=3).fit(X_train, y_train)
cv.best_params_
cv.best_estimator_
y_pred = cv.best_estimator_.predict(X_test)
conf_mat = confusion_matrix(y_test, y_pred)
ax = plt.subplot()
sns.heatmap(conf_mat / np.sum(conf_mat), annot=True, fmt='.2%', cmap='Blues', ax=ax)
ax.set_title('Confusion Matrix')
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.xaxis.set_ticklabels(['healthy', 'sick'])
ax.yaxis.set_ticklabels(['healthy', 'sick'])
def print_metrics(y_true, y_pred):
print(f'f1_score: {f1_score(y_true, y_pred):.4f}')
print(f'recall_score: {recall_score(y_true, y_pred):.4f}')
print(f'precision_score: {precision_score(y_true, y_pred):.4f}')
print_metrics(y_test, y_pred)
```
<div class="alert alert-block alert-info"><b>
The obtained F1 score matches the expected model quality.
</b></div>
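For reference, the F1 score reported by `print_metrics` is the harmonic mean of precision and recall; a minimal sketch with made-up confusion-matrix counts (not the model's actual results):

```python
# Hypothetical counts: true positives, false positives, false negatives
tp, fp, fn = 40, 5, 3

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # equals 2*tp / (2*tp + fp + fn)
print(round(f1, 4))  # 0.9091
```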
## 3. Saving the model to a binary file
```
model_path = 'random_forest_diabet.pkl'
with open(model_path, 'wb') as file:
pickle.dump(cv.best_estimator_, file)
```
## 4. Loading the model and checking the metrics
```
with open(model_path, 'rb') as file:
loaded_model = pickle.load(file)
loaded_model
print_metrics(y_test, loaded_model.predict(X_test))
```
# Testing HLS Module
The HLS module simply copies the input image to the output image (passthrough).
The project builds on the VDMA demo.
## Project sources can be found here
[HLS Passthrough Demo](https://github.com/CospanDesign/pynq-hdl/tree/master/Projects/Simple%20HLS%20VDMA)
```
import cv2
import numpy as np
def cvtcolor_rgb2yuv422(rgb):
yuv422 = np.zeros((rgb.shape[0], rgb.shape[1], 2), dtype=np.uint8)
yuv444 = cv2.cvtColor(rgb, cv2.COLOR_BGR2YUV)
# chroma subsampling: yuv444 -> yuv422;
for row in range(yuv444.shape[0]):
for col in range(0, yuv444.shape[1], 2):
p0_in = yuv444[row, col]
p1_in = yuv444[row, col + 1]
p0_out = [p0_in[0], p0_in[1]]
p1_out = [p1_in[0], p0_in[2]]
yuv422[row, col] = p0_out
yuv422[row, col + 1] = p1_out
return yuv422
```
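The per-pixel Python loops above are slow on 1080p frames; a vectorized numpy equivalent of the chroma-subsampling step (a sketch assuming the same YUV444 channel layout as the loop) is:

```python
import numpy as np

def yuv444_to_yuv422(yuv444):
    """Vectorized 4:4:4 -> 4:2:2 subsampling matching the loop above."""
    yuv422 = np.empty((yuv444.shape[0], yuv444.shape[1], 2), dtype=np.uint8)
    yuv422[..., 0] = yuv444[..., 0]          # keep Y for every pixel
    yuv422[:, 0::2, 1] = yuv444[:, 0::2, 1]  # even columns carry U from the even pixel
    yuv422[:, 1::2, 1] = yuv444[:, 0::2, 2]  # odd columns carry V from the even pixel
    return yuv422
```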
# Open and Convert the Image to a usable format
Open the image and convert it to YUV422
Perform the conversion in a separate cell from the one below, because the conversion takes a long time.
```
# %matplotlib inline
from matplotlib import pyplot as plt
#Create a YUV422 Image So we don't need to keep regenerating it
IMAGE_FILE = "../data/test_1080p.bmp"
image_in = cv2.imread(IMAGE_FILE)
image_yuv = cvtcolor_rgb2yuv422(image_in)
#SHOW IMAGE
image_out = cv2.cvtColor(image_yuv, cv2.COLOR_YUV2BGR_YUYV)
plt.imshow(image_out)
plt.show()
```
# Perform the Image Processing
1. Program the FPGA.
2. Configure the Egress and Ingress Video DMA cores to take in images with the same width and height as the opened image.
3. Configure the Image Processor.
4. Send the image down to memory accessible by the FPGA.
5. Initiate the VDMA transfer.
6. Wait for the transfer to finish.
7. Read back and display the image.
```
# %matplotlib inline
from time import sleep
from pynq import Overlay
from pynq.drivers import VDMA
from image_processor import ImageProcessor
import cv2
from matplotlib import pyplot as plt
from IPython.display import Image
import numpy as np
#Constants
BITFILE_NAME = "hls_passthrough.bit"
EGRESS_VDMA_NAME = "SEG_axi_vdma_0_Reg"
INGRESS_VDMA_NAME = "SEG_axi_vdma_1_Reg"
HLS_NAME = "SEG_image_filter_0_Reg"
# Set Debug to true to enable debug messages from the VDMA core
DEBUG = False
#DEBUG = True
# Set Verbose to true to dump a lot of messages about
VERBOSE = False
#VERBOSE = True
#These can be set between 0 - 2, the VDMA can also be configured for up to 32 frames in 32-bit memspace and 16 in 64-bit memspace
EGRESS_FRAME_INDEX = 0
INGRESS_FRAME_INDEX = 0
IMAGE_WIDTH = image_yuv.shape[1]
IMAGE_HEIGHT = image_yuv.shape[0]
print ("Image Size: %dx%d" % (IMAGE_WIDTH, IMAGE_HEIGHT))
#Download Images
ol = Overlay(BITFILE_NAME)
ol.download()
vdma_egress = VDMA(name = EGRESS_VDMA_NAME, debug = DEBUG)
vdma_ingress = VDMA(name = INGRESS_VDMA_NAME, debug = DEBUG)
image_processor = ImageProcessor(HLS_NAME)
image_processor.set_image_width(IMAGE_WIDTH)
image_processor.set_image_height(IMAGE_HEIGHT)
image_processor.enable(True)
#print ("Image Processor Enabled? %s" % image_processor.is_enabled())
#Set the size of the image
vdma_egress.set_image_size(IMAGE_WIDTH, IMAGE_HEIGHT, color_depth = 2)
vdma_ingress.set_image_size(IMAGE_WIDTH, IMAGE_HEIGHT, color_depth = 2)
#The above functions created the video frames
#Populate the frame
frame = vdma_egress.get_frame(EGRESS_FRAME_INDEX)
frame.set_bytearray(bytearray(image_yuv.astype(np.int8).tobytes()))
print ("Frame width, height: %d, %d" % (frame.width, frame.height))
print ("")
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("")
print ("Enabling One of the Engine")
#Open Up the Ingress Side
vdma_ingress.start_ingress_engine( continuous = False,
num_frames = 1,
frame_index = INGRESS_FRAME_INDEX,
interrupt = False)
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
print ("")
print ("Enabling Both Engines")
#Quick Start
vdma_egress.start_egress_engine( continuous = False,
num_frames = 1,
frame_index = EGRESS_FRAME_INDEX,
interrupt = False)
print ("")
print ("Both of the engines should be halted after transferring one frame")
#XXX: I think this sleep isn't needed, but the core erroneously reports that an engine isn't finished even though it is.
#XXX: This sleep line can be commented out but the egress core may report it is not finished.
sleep(0.1)
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
print ("Egress WIP: %d" % vdma_egress.get_wip_egress_frame())
print ("Ingress WIP: %d" % vdma_ingress.get_wip_ingress_frame())
#Check to see if the egress frame point progressed
print ("")
print ("Disabling both engines")
#Disable both
vdma_egress.stop_egress_engine()
vdma_ingress.stop_ingress_engine()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Egress Error: 0x%08X" % vdma_egress.get_egress_error())
print ("Ingress Error: 0x%08X" % vdma_ingress.get_ingress_error())
frame = vdma_ingress.get_frame(INGRESS_FRAME_INDEX)
#frame.save_as_jpeg("./image.jpg")
image_yuv_out = np.ndarray( shape = (IMAGE_HEIGHT, IMAGE_WIDTH, 2),
dtype=np.uint8,
buffer = frame.get_bytearray())
image_rgb_out = cv2.cvtColor(image_yuv_out, cv2.COLOR_YUV2BGR_YUYV)
#SHOW IMAGE
plt.imshow(image_rgb_out)
plt.show()
```
# Developing an AI application
Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below.
<img src='assets/Flowers.png' width=500px>
The project is broken down into multiple steps:
* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content
We'll lead you through each part which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
```
# Imports here
```
## Load the data
Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook; otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training set, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels, as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.
The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1.
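As a numpy sketch of the per-channel normalization described above (assuming pixel values already scaled to `[0, 1]`; in the project itself this step is typically handled by `torchvision.transforms.Normalize`):

```python
import numpy as np

means = np.array([0.485, 0.456, 0.406])
stds = np.array([0.229, 0.224, 0.225])

# A dummy 224x224 RGB image with every value at 0.5
image = np.full((224, 224, 3), 0.5)

# Broadcasting subtracts and divides per color channel
normalized = (image - means) / stds
print(normalized[0, 0])  # approximately [0.0655, 0.1964, 0.4178]
```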
```
data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# TODO: Define your transforms for the training, validation, and testing sets
data_transforms =
# TODO: Load the datasets with ImageFolder
image_datasets =
# TODO: Using the image datasets and the transforms, define the dataloaders
dataloaders =
```
### Label mapping
You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
```
import json
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
```
# Building and training the classifier
Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro to
GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module.
**Note for Workspace users:** If your network is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. Typically this happens with wide dense layers after the convolutional layers. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with `ls -lh`), you should reduce the size of your hidden layers and train again.
```
# TODO: Build and train your network
```
## Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
```
# TODO: Do validation on the test set
```
## Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
```model.class_to_idx = image_datasets['train'].class_to_idx```
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
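One possible checkpoint layout, sketched with a tiny stand-in network (the keys, layer sizes, and hyperparameters are illustrative; include whatever your rebuild code will need):

```python
import torch
from torch import nn, optim

# Tiny stand-in network; in the project this would be your trained model.
model = nn.Sequential(nn.Linear(4, 3), nn.LogSoftmax(dim=1))
model.class_to_idx = {'rose': 0, 'tulip': 1, 'daisy': 2}   # illustrative mapping
optimizer = optim.Adam(model.parameters(), lr=0.001)

checkpoint = {
    'epochs': 5,
    'class_to_idx': model.class_to_idx,
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
}
torch.save(checkpoint, 'checkpoint.pth')
```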
```
# TODO: Save the checkpoint
```
## Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
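A hedged sketch of such a function, demonstrated as a round trip with a tiny stand-in network (the architecture and checkpoint keys are illustrative and must match whatever you actually saved):

```python
import torch
from torch import nn

def load_checkpoint(path):
    """Rebuild the model from a checkpoint; the architecture must match what was saved."""
    ckpt = torch.load(path)
    model = nn.Sequential(nn.Linear(4, 3), nn.LogSoftmax(dim=1))
    model.load_state_dict(ckpt['model_state'])
    model.class_to_idx = ckpt['class_to_idx']
    return model

# Round-trip demo with a tiny stand-in network:
original = nn.Sequential(nn.Linear(4, 3), nn.LogSoftmax(dim=1))
original.class_to_idx = {'rose': 0, 'tulip': 1, 'daisy': 2}
torch.save({'model_state': original.state_dict(),
            'class_to_idx': original.class_to_idx}, 'ckpt.pth')
rebuilt = load_checkpoint('ckpt.pth')
```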
```
# TODO: Write a function that loads a checkpoint and rebuilds the model
```
# Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network.
## Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training.
First, resize the images so the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.resize) methods. Then you'll need to crop out the center 224x224 portion of the image.
Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1, so you'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so: `np_image = np.array(pil_image)`.
As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.
And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
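Putting those steps together, one possible sketch of `process_image` (the synthetic solid-color test image at the end is just for a quick shape check, not part of the project data):

```python
import numpy as np
from PIL import Image

def process_image(pil_image):
    '''Scales, crops, and normalizes a PIL image; returns a (3, 224, 224) NumPy array.'''
    # Resize so the shortest side is 256 px, keeping the aspect ratio
    w, h = pil_image.size
    if w < h:
        pil_image = pil_image.resize((256, int(256 * h / w)))
    else:
        pil_image = pil_image.resize((int(256 * w / h), 256))
    # Center-crop the 224x224 portion
    w, h = pil_image.size
    left, top = (w - 224) // 2, (h - 224) // 2
    pil_image = pil_image.crop((left, top, left + 224, top + 224))
    # 0-255 integers -> 0-1 floats, then normalize per channel
    np_image = np.array(pil_image) / 255.0
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    np_image = (np_image - mean) / std
    # Move the color channel to the first dimension
    return np_image.transpose((2, 0, 1))

# Quick shape check on a synthetic image:
img = Image.new('RGB', (640, 480), color=(124, 116, 104))
out = process_image(img)
print(out.shape)   # (3, 224, 224)
```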
```
def process_image(image):
''' Scales, crops, and normalizes a PIL image for a PyTorch model,
returns a Numpy array
'''
# TODO: Process a PIL image for use in a PyTorch model
```
To check your work, the function below converts a PyTorch tensor back to a displayable image. If your `process_image` function works, running the output through this function should return the original image (except for the cropped-out portions).
```
def imshow(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension
# but matplotlib assumes it is the third dimension
image = image.numpy().transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
```
## Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
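The index-to-class inversion can be sketched like this; the `class_to_idx` mapping below is hypothetical, and NumPy's `argsort` stands in for PyTorch's `x.topk(k)` to keep the example self-contained:

```python
import numpy as np

# x.topk(k) in PyTorch returns (values, indices); the same idea with NumPy:
probs = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
k = 3
top_idx = np.argsort(probs)[::-1][:k]          # indices of the k largest values
top_p = probs[top_idx]

# Invert a (hypothetical) class_to_idx mapping to translate indices into labels.
class_to_idx = {'1': 0, '10': 1, '100': 2, '101': 3, '102': 4}
idx_to_class = {v: c for c, v in class_to_idx.items()}
classes = [idx_to_class[i] for i in top_idx]
print(top_p.tolist())   # [0.4, 0.3, 0.15]
print(classes)          # ['10', '101', '102']
```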
Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
```
def predict(image_path, model, topk=5):
''' Predict the class (or classes) of an image using a trained deep learning model.
'''
# TODO: Implement the code to predict the class from an image file
```
## Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
<img src='assets/inference_example.png' width=300px>
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
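A minimal sketch of that lookup; the two-entry JSON written here is purely illustrative (the real `cat_to_name.json` maps all 102 categories to flower names):

```python
import json

# Illustrative two-entry mapping file, standing in for the real cat_to_name.json:
with open('cat_to_name.json', 'w') as f:
    json.dump({'70': 'example flower A', '3': 'example flower B'}, f)

with open('cat_to_name.json') as f:
    cat_to_name = json.load(f)

classes = ['70', '3']                      # class labels as returned by predict()
names = [cat_to_name[c] for c in classes]
print(names)   # ['example flower A', 'example flower B']
```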
```
# TODO: Display an image along with the top 5 classes
```
# Example: CanvasXpress heatmap Chart No. 11
This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/heatmap-11.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="heatmap11",
data={
"y": {
"vars": [
"V1",
"V2",
"V3",
"V4",
"V5"
],
"smps": [
"S1",
"S2",
"S3",
"S4",
"S5",
"S6",
"S7",
"S8",
"S9",
"S10"
],
"data": [
[
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
[
10,
9,
8,
7,
6,
5,
4,
3,
2,
1
],
[
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
[
10,
9,
8,
7,
6,
5,
4,
3,
2,
1
],
[
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
]
]
}
},
config={
"colorSpectrum": [
"#f0f0f0",
"#bdbdbd",
"#636363",
"#000000"
],
"graphType": "Heatmap",
"showHeatmapIndicator": False,
"showLegend": False,
"sizeBy": "Size",
"sizeByContinuous": True,
"sizeByData": "data",
"title": "A good old Northern Blot"
},
width=613,
height=613,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="heatmap_11.html")
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
defaulter_df = pd.read_csv("Default.csv")
defaulter_df.head()
print("Size of the data : ", defaulter_df.shape)
print("Target variable frequency distribution : \n", defaulter_df["default"].value_counts())
X = defaulter_df[["balance", "income"]]
y = defaulter_df["default"]
#### Train-test Split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state = 42)
print("Size of training data : ", X_train.shape[0])
print("Size of test data : ", X_test.shape[0])
```
#### Normalization
```
from sklearn.preprocessing import MinMaxScaler
min_max = MinMaxScaler()
min_max.fit(X_train)
train_transformed = min_max.transform(X_train)
transformed = min_max.transform(X_test)
transformed
X_train["balance_normalized"] = train_transformed[:,0]
X_train["income_normalized"] = train_transformed[:,1]
X_train.head()
X_test["balance_normalized"] = transformed[:,0]
X_test["income_normalized"] = transformed[:,1]
X_test.head()
```
### Fitting kNN
#### 1. k = 3
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 3, metric = "euclidean")
knn.fit(X_train[["balance_normalized","income_normalized"]], y_train)
# OR
#knn.fit(train_transformed, y_train)
predictions = knn.predict(X_test[["balance_normalized","income_normalized"]])
#OR
#predictions = knn.predict(transformed)
predictions
y_test
```
#### Computing accuracy
```
from sklearn.metrics import accuracy_score
test_accuracy = accuracy_score(y_test, predictions)
print("Accuracy on test data :", test_accuracy)
train_predictions = knn.predict(X_train[["balance_normalized","income_normalized"]])
train_accuracy = accuracy_score(y_train, train_predictions)
print("Accuracy on training data :", train_accuracy)
```
#### 2. k = 5
```
knn_5 = KNeighborsClassifier(n_neighbors = 5, metric = "euclidean")
knn_5.fit(X_train[["balance_normalized","income_normalized"]], y_train)
predictions = knn_5.predict(X_test[["balance_normalized","income_normalized"]])
test_accuracy = accuracy_score(y_test, predictions)
print("Accuracy on test data :", test_accuracy)
train_predictions = knn_5.predict(X_train[["balance_normalized","income_normalized"]])
train_accuracy = accuracy_score(y_train, train_predictions)
print("Accuracy on training data :", train_accuracy)
```
### Finding Optimal value of k
```
train_accuracies = []
test_accuracies = []
for i in range(1,16,2):
knn = KNeighborsClassifier(n_neighbors = i, metric = "euclidean")
knn.fit(X_train[["balance_normalized","income_normalized"]], y_train)
predictions = knn.predict(X_test[["balance_normalized","income_normalized"]])
train_predictions = knn.predict(X_train[["balance_normalized","income_normalized"]])
test_accuracy = accuracy_score(y_test, predictions)
test_accuracies.append(test_accuracy)
train_accuracy = accuracy_score(y_train, train_predictions)
train_accuracies.append(train_accuracy)
k_values = list(range(1,16,2))
plt.plot(k_values, train_accuracies)
plt.plot(k_values, test_accuracies)
plt.legend(["train_accuracy", "test_accuracy"])
```
#### Fitting with initial optimal value of k
```
knn = KNeighborsClassifier(n_neighbors = 9, metric = "euclidean")
knn.fit(X_train[["balance_normalized","income_normalized"]], y_train)
predictions = knn.predict(X_test[["balance_normalized","income_normalized"]])
#train_predictions = knn.predict(X_train)
test_accuracy = accuracy_score(y_test, predictions)
print("Accuracy on test data :", test_accuracy)
train_predictions = knn.predict(X_train[["balance_normalized","income_normalized"]])
train_accuracy = accuracy_score(y_train, train_predictions)
print("Accuracy on training data :", train_accuracy)
```
#### Validation Split
#### 1. Simple train and validation split
```
x_train, x_val, y_train_new, y_val = train_test_split(X_train[["balance_normalized","income_normalized"]],y_train, test_size = 0.2, random_state = 42)
print("Size of training data: ", x_train.shape[0])
print("Size of validation data : ", x_val.shape[0])
train_accuracies = []
val_accuracies = []
for i in range(1,16,2):
knn = KNeighborsClassifier(n_neighbors = i, metric = "euclidean")
knn.fit(x_train, y_train_new)
val_predictions = knn.predict(x_val)
val_accuracy = accuracy_score(y_val, val_predictions)
val_accuracies.append(val_accuracy)
train_predictions = knn.predict(x_train)
train_accuracy = accuracy_score(y_train_new, train_predictions)
train_accuracies.append(train_accuracy)
k_values = list(range(1,16,2))
plt.plot(k_values, train_accuracies)
plt.plot(k_values, val_accuracies)
plt.legend(["train_accuracy", "validation_accuracy"])
# fitting with optimal value of k
knn = KNeighborsClassifier(n_neighbors = 5, metric = "euclidean")
knn.fit(x_train, y_train_new)
val_predictions = knn.predict(x_val)
val_accuracy = accuracy_score(y_val, val_predictions)
val_accuracy
predictions = knn.predict(X_test[["balance_normalized","income_normalized"]])
test_accuracy = accuracy_score(y_test, predictions)
test_accuracy
```
#### 2. Cross Validation
```
from sklearn.model_selection import cross_validate
knn = KNeighborsClassifier(n_neighbors = 5, metric = "euclidean")
cv_results = cross_validate(knn, X_train[["balance_normalized","income_normalized"]], y_train, cv=4, return_train_score =True)
cv_results
print("Training data average accuracy :", cv_results["train_score"].mean()*100)
print("Validation data average accuracy :", cv_results["test_score"].mean()*100)
knn.fit(X_train[["balance_normalized","income_normalized"]], y_train)
predictions = knn.predict(X_test[["balance_normalized","income_normalized"]])
test_accuracy = accuracy_score(y_test, predictions)
test_accuracy
```
### Hyper-parameter tuning using GridSearch
```
from sklearn.model_selection import GridSearchCV
knn = KNeighborsClassifier(metric = "euclidean")
param_grid = {"n_neighbors" : np.arange(1,16,2)}
knn_with_gs = GridSearchCV(knn, param_grid, return_train_score = True, verbose =1, scoring = "accuracy")
knn_with_gs.fit(X_train[["balance_normalized","income_normalized"]], y_train)
knn_with_gs.cv_results_
tuned_df = pd.DataFrame(knn_with_gs.cv_results_)
tuned_df = tuned_df[["param_n_neighbors","mean_train_score", "mean_test_score"]]
tuned_df
knn_11 = KNeighborsClassifier(n_neighbors = 11, metric = "euclidean")
knn_11.fit(X_train[["balance_normalized","income_normalized"]], y_train)
predictions = knn_11.predict(X_test[["balance_normalized","income_normalized"]])
test_accuracy = accuracy_score(y_test, predictions)
test_accuracy
```
### Evaluation measures
#### Confusion Matrix
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predictions)
print("Confusion Matrix : \n", cm)
pd.DataFrame(cm, columns = ["No", "Yes"], index = ["No", "Yes"])
```
#### Precision and Recall
```
from sklearn.metrics import precision_score, recall_score
precision_score(y_test,predictions, pos_label = "Yes")
precision_score(y_test,predictions, pos_label = "No")
recall_score(y_test,predictions, pos_label = "Yes")
recall_score(y_test,predictions, pos_label = "No")
```
#### F1-score
```
from sklearn.metrics import f1_score
f1_score(y_test,predictions,pos_label = "No")
f1_score(y_test,predictions,pos_label = "Yes")
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
```
# Tutorial-IllinoisGRMHD: InitSymBound.C
## Authors: Leo Werneck & Zach Etienne
<font color='red'>**This module is currently under development**</font>
## In this tutorial module we explain the symmetry conditions used by the `IllinoisGRMHD` codes. This module will likely be absorbed by another one once we finish documenting the code.
### Required and recommended citations:
* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).
* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).
* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This module is organized as follows
0. [Step 0](#src_dir): **Source directory creation**
1. [Step 1](#introduction): **Introduction**
1. [Step 2](#initsymbound__c): **`InitSymBound.C`**
1. [Step n-1](#code_validation): **Code validation**
1. [Step n](#latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
<a id='src_dir'></a>
# Step 0: Source directory creation \[Back to [top](#toc)\]
$$\label{src_dir}$$
We will now use the [cmdline_helper.py NRPy+ module](Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
```
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__InitSymBound__C = os.path.join(IGM_src_dir_path,"InitSymBound.C")
```
<a id='introduction'></a>
# Step 1: Introduction \[Back to [top](#toc)\]
$$\label{introduction}$$
<a id='initsymbound__c'></a>
# Step 2: `InitSymBound.C` \[Back to [top](#toc)\]
$$\label{initsymbound__c}$$
```
%%writefile $outfile_path__InitSymBound__C
/*
Set the symmetries for the IllinoisGRMHD variables
*/
#include "cctk.h"
#include <cstdio>
#include <cstdlib>
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "Symmetry.h"
#include "IllinoisGRMHD_headers.h"
extern "C" void IllinoisGRMHD_InitSymBound(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if( ( CCTK_EQUALS(Matter_BC,"frozen") && !CCTK_EQUALS(EM_BC,"frozen") ) ||
( !CCTK_EQUALS(Matter_BC,"frozen") && CCTK_EQUALS(EM_BC,"frozen") ) )
CCTK_VError(VERR_DEF_PARAMS,"If Matter_BC or EM_BC is set to FROZEN, BOTH must be set to frozen!");
if ((cctk_nghostzones[0]<3 || cctk_nghostzones[1]<3 || cctk_nghostzones[2]<3))
CCTK_VError(VERR_DEF_PARAMS,"ERROR: The version of PPM in this thorn requires 3 ghostzones. You only have (%d,%d,%d) ghostzones!",cctk_nghostzones[0],cctk_nghostzones[1],cctk_nghostzones[2]);
if(cctk_iteration==0) {
CCTK_VInfo(CCTK_THORNSTRING,"Setting Symmetry = %s... at iteration = %d",Symmetry,cctk_iteration);
int sym[3];
if(CCTK_EQUALS(Symmetry,"none")) {
/* FIRST SET NO SYMMETRY OPTION */
sym[0] = 1; sym[1] = 1; sym[2] = 1;
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::grmhd_conservatives");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_Ax");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_Ay");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_Az");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_psi6phi");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::grmhd_primitives_allbutBi");
} else if(CCTK_EQUALS(Symmetry,"equatorial")) {
/* THEN SET EQUATORIAL SYMMETRY OPTION */
// Set default to no symmetry, which is correct for scalars and most vectors:
sym[0] = 1; sym[1] = 1; sym[2] = 1;
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::grmhd_conservatives");
// Don't worry about the wrong sym values since A_{\mu} is staggered
// and we're going to impose the symmetry separately
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_Ax");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_Ay");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_Az");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::em_psi6phi");
SetCartSymGN(cctkGH,sym,"IllinoisGRMHD::grmhd_primitives_allbutBi");
// Then set unstaggered B field variables
sym[2] = -Sym_Bz;
SetCartSymVN(cctkGH, sym,"IllinoisGRMHD::Bx");
SetCartSymVN(cctkGH, sym,"IllinoisGRMHD::By");
sym[2] = Sym_Bz;
SetCartSymVN(cctkGH, sym,"IllinoisGRMHD::Bz");
sym[2] = -1;
SetCartSymVN(cctkGH, sym,"IllinoisGRMHD::mhd_st_z");
SetCartSymVN(cctkGH, sym,"IllinoisGRMHD::vz");
} else {
CCTK_VError(VERR_DEF_PARAMS,"IllinoisGRMHD_initsymbound: Should not be here; picked an impossible symmetry.");
}
}
}
```
<a id='code_validation'></a>
# Step n-1: Code validation \[Back to [top](#toc)\]
$$\label{code_validation}$$
First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
```
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/InitSymBound.C"
original_IGM_file_name = "InitSymBound-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
    # Write the original IllinoisGRMHD source code to file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
        # Write the original IllinoisGRMHD source code to file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__InitSymBound__C = !diff $original_IGM_file_path $outfile_path__InitSymBound__C
if Validation__InitSymBound__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for InitSymBound.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for InitSymBound.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__InitSymBound__C:
print(diff_line)
```
<a id='latex_pdf_output'></a>
# Step n: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-IllinoisGRMHD__InitSymBound.pdf](Tutorial-IllinoisGRMHD__InitSymBound.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
```
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__InitSymBound.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__InitSymBound.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__InitSymBound.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__InitSymBound.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
### Here are simple examples of plotting a nomogram, ROC curves, calibration curves, and decision curves on the training and test datasets using the R language.
```
# Library and data
library(rms)
library(pROC)
library(rmda)
train <-read.csv("E:/Experiments/YinjunDong/nomogram/EGFR-nomogram.csv")
test <-read.csv("E:/Experiments/YinjunDong/nomogram/EGFR-nomogram-test.csv")
# Nomogram
dd=datadist(train)
options(datadist="dd")
f1 <- lrm(EGFR~ Rad
+Smoking
+Type
,data = train,x = TRUE,y = TRUE)
nom <- nomogram(f1, fun=plogis,fun.at=c(.001, .01, seq(.1,.9, by=.4)), lp=F, funlabel="EGFR Mutations")
plot(nom)
# ROC train
f2 <- glm(EGFR~ Rad
+Smoking
+Type
,data = train,family = "binomial")
pre <- predict(f2, type='response')
plot.roc(train$EGFR, pre,
main="ROC Curve", percent=TRUE,
print.auc=TRUE,
ci=TRUE, ci.type="bars",
of="thresholds",
thresholds="best",
print.thres="best",
col="blue"
#,identity=TRUE
,legacy.axes=TRUE,
print.auc.x=50,
print.auc.y=50
)
# ROC test
pre1 <- predict(f2,newdata = test)
plot.roc(test$EGFR, pre1,
main="ROC Curve", percent=TRUE,
print.auc=TRUE,
ci=TRUE, ci.type="bars",
of="thresholds",
thresholds="best",
print.thres="best",
col="blue",legacy.axes=TRUE,
print.auc.x=50,
print.auc.y=50
)
# Calibration Curve train
rocplot1 <- roc(train$EGFR, pre)
ci.auc(rocplot1)
cal <- calibrate(f1, method = "boot", B = 1000)
plot(cal, xlab = "Nomogram Predicted Survival", ylab = "Actual Survival",main = "Calibration Curve")
# Calibration Curve test
rocplot2 <- roc(test$EGFR,pre1)
ci.auc(rocplot2)
f3 <- lrm(test$EGFR ~ pre1,x = TRUE,y = TRUE)
cal2 <- calibrate(f3, method = "boot", B = 1000)
plot(cal2, xlab = "Nomogram Predicted Survival", ylab = "Actual Survival",main = "Calibration Curve")
# Decision Curve train
Rad<- decision_curve(EGFR~
Rad, data = train, family = binomial(link ='logit'),
thresholds= seq(0,1, by = 0.01),
confidence.intervals =0.95,study.design = 'case-control',
population.prevalence = 0.3)
Clinical<- decision_curve(EGFR~
Smoking+Type, data = train, family = binomial(link ='logit'),
thresholds= seq(0,1, by = 0.01),
confidence.intervals =0.95,study.design = 'case-control',
population.prevalence = 0.3)
clinical_Rad<- decision_curve(EGFR~ Rad
+Smoking+Type, data = train,
family = binomial(link ='logit'), thresholds = seq(0,1, by = 0.01),
confidence.intervals= 0.95,study.design = 'case-control',
population.prevalence= 0.3)
List<- list(Clinical,Rad,clinical_Rad)
plot_decision_curve(List,curve.names= c('Clinical','Rad-Score','Nomogram'),
cost.benefit.axis =FALSE,col = c('green','red','blue'),
confidence.intervals =FALSE,standardize = FALSE,
#legend.position = "none"
legend.position = "bottomleft"
)
# Decision Curve test
Rad1<- decision_curve(EGFR~
Rad, data = test, family = binomial(link ='logit'),
thresholds= seq(0,1, by = 0.01),
confidence.intervals =0.95,study.design = 'case-control',
population.prevalence = 0.3)
Clinical1<- decision_curve(EGFR~
Smoking+Type, data = test, family = binomial(link ='logit'),
thresholds= seq(0,1, by = 0.01),
confidence.intervals =0.95,study.design = 'case-control',
population.prevalence = 0.3)
clinical_Rad1<- decision_curve(EGFR~ Rad
+Smoking+Type, data = test,
family = binomial(link ='logit'), thresholds = seq(0,1, by = 0.01),
confidence.intervals= 0.95,study.design = 'case-control',
population.prevalence= 0.3)
List1<- list(Clinical1, Rad1, clinical_Rad1)
plot_decision_curve(List1,curve.names= c('Clinical','Rad-Score','Nomogram'),
cost.benefit.axis =FALSE,col = c('green','red','blue'),
confidence.intervals =FALSE,standardize = FALSE,
legend.position = "bottomleft")
```
# DREAMER Dominance EMI-GRU 48_16
Adapted from Microsoft's notebooks, available at https://github.com/microsoft/EdgeML authored by Dennis et al.
```
import pandas as pd
import numpy as np
from tabulate import tabulate
import os
import datetime as datetime
import pickle as pkl
import pathlib
from __future__ import print_function
import os
import sys
import tensorflow as tf
import numpy as np
# Making sure edgeml is part of python path
sys.path.insert(0, '../../')
# Select which GPU to use ('0' = first GPU; set to '' to force CPU processing).
os.environ['CUDA_VISIBLE_DEVICES'] ='0'
np.random.seed(42)
tf.set_random_seed(42)
# MI-RNN and EMI-RNN imports
from edgeml.graph.rnn import EMI_DataPipeline
from edgeml.graph.rnn import EMI_GRU
from edgeml.trainer.emirnnTrainer import EMI_Trainer, EMI_Driver
import edgeml.utils
import keras.backend as K
cfg = K.tf.ConfigProto()
cfg.gpu_options.allow_growth = True
K.set_session(K.tf.Session(config=cfg))
# Network parameters for our GRU + FC Layer
NUM_HIDDEN = 128
NUM_TIMESTEPS = 48
ORIGINAL_NUM_TIMESTEPS = 128
NUM_FEATS = 16
FORGET_BIAS = 1.0
NUM_OUTPUT = 5
USE_DROPOUT = True
KEEP_PROB = 0.75
# For dataset API
PREFETCH_NUM = 5
BATCH_SIZE = 32
# Number of epochs in *one iteration*
NUM_EPOCHS = 2
# Number of iterations in *one round*. After each iteration,
# the model is dumped to disk. At the end of the current
# round, the best model among all the dumped models in the
# current round is picked up.
NUM_ITER = 4
# A round consists of multiple training iterations and a belief
# update step using the best model from all of these iterations
NUM_ROUNDS = 10
LEARNING_RATE=0.001
# A staging directory to store models
MODEL_PREFIX = '/home/sf/data/DREAMER/Dominance/48_16/models/GRU/model-gru'
```
# Loading Data
```
# Loading the data
path='/home/sf/data/DREAMER/Dominance/Fast_GRNN/48_16/'
x_train, y_train = np.load(path + 'x_train.npy'), np.load(path + 'y_train.npy')
x_test, y_test = np.load(path + 'x_test.npy'), np.load(path + 'y_test.npy')
x_val, y_val = np.load(path + 'x_val.npy'), np.load(path + 'y_val.npy')
# BAG_TEST, BAG_TRAIN, BAG_VAL represent bag_level labels. These are used for the label update
# step of EMI/MI RNN
BAG_TEST = np.argmax(y_test[:, 0, :], axis=1)
BAG_TRAIN = np.argmax(y_train[:, 0, :], axis=1)
BAG_VAL = np.argmax(y_val[:, 0, :], axis=1)
NUM_SUBINSTANCE = x_train.shape[1]
print("x_train shape is:", x_train.shape)
print("y_train shape is:", y_train.shape)
print("x_val shape is:", x_val.shape)
print("y_val shape is:", y_val.shape)
```
# Computation Graph
```
# Define the linear secondary classifier
def createExtendedGraph(self, baseOutput, *args, **kwargs):
W1 = tf.Variable(np.random.normal(size=[NUM_HIDDEN, NUM_OUTPUT]).astype('float32'), name='W1')
B1 = tf.Variable(np.random.normal(size=[NUM_OUTPUT]).astype('float32'), name='B1')
y_cap = tf.add(tf.tensordot(baseOutput, W1, axes=1), B1, name='y_cap_tata')
self.output = y_cap
self.graphCreated = True
def restoreExtendedGraph(self, graph, *args, **kwargs):
y_cap = graph.get_tensor_by_name('y_cap_tata:0')
self.output = y_cap
self.graphCreated = True
def feedDictFunc(self, keep_prob=None, inference=False, **kwargs):
if inference is False:
feedDict = {self._emiGraph.keep_prob: keep_prob}
else:
feedDict = {self._emiGraph.keep_prob: 1.0}
return feedDict
EMI_GRU._createExtendedGraph = createExtendedGraph
EMI_GRU._restoreExtendedGraph = restoreExtendedGraph
if USE_DROPOUT is True:
EMI_Driver.feedDictFunc = feedDictFunc
inputPipeline = EMI_DataPipeline(NUM_SUBINSTANCE, NUM_TIMESTEPS, NUM_FEATS, NUM_OUTPUT)
emiGRU = EMI_GRU(NUM_SUBINSTANCE, NUM_HIDDEN, NUM_TIMESTEPS, NUM_FEATS,
useDropout=USE_DROPOUT)
emiTrainer = EMI_Trainer(NUM_TIMESTEPS, NUM_OUTPUT, lossType='xentropy',
stepSize=LEARNING_RATE)
tf.reset_default_graph()
g1 = tf.Graph()
with g1.as_default():
# Obtain the iterators to each batch of the data
x_batch, y_batch = inputPipeline()
# Create the forward computation graph based on the iterators
y_cap = emiGRU(x_batch)
# Create loss graphs and training routines
emiTrainer(y_cap, y_batch)
```
# EMI Driver
```
with g1.as_default():
emiDriver = EMI_Driver(inputPipeline, emiGRU, emiTrainer)
emiDriver.initializeSession(g1)
y_updated, modelStats = emiDriver.run(numClasses=NUM_OUTPUT, x_train=x_train,
y_train=y_train, bag_train=BAG_TRAIN,
x_val=x_val, y_val=y_val, bag_val=BAG_VAL,
numIter=NUM_ITER, keep_prob=KEEP_PROB,
numRounds=NUM_ROUNDS, batchSize=BATCH_SIZE,
numEpochs=NUM_EPOCHS, modelPrefix=MODEL_PREFIX,
fracEMI=0.5, updatePolicy='top-k', k=1)
```
# Evaluating the trained model
```
# Early Prediction Policy: We make an early prediction based on the predicted class's
# probability. If the predicted class probability > minProb at some step, we make
# a prediction at that step.
def earlyPolicy_minProb(instanceOut, minProb, **kwargs):
assert instanceOut.ndim == 2
classes = np.argmax(instanceOut, axis=1)
prob = np.max(instanceOut, axis=1)
index = np.where(prob >= minProb)[0]
if len(index) == 0:
assert (len(instanceOut) - 1) == (len(classes) - 1)
return classes[-1], len(instanceOut) - 1
index = index[0]
return classes[index], index
def getEarlySaving(predictionStep, numTimeSteps, returnTotal=False):
predictionStep = predictionStep + 1
predictionStep = np.reshape(predictionStep, -1)
totalSteps = np.sum(predictionStep)
maxSteps = len(predictionStep) * numTimeSteps
savings = 1.0 - (totalSteps / maxSteps)
if returnTotal:
return savings, totalSteps
return savings
k = 2
predictions, predictionStep = emiDriver.getInstancePredictions(x_test, y_test, earlyPolicy_minProb,
minProb=0.99, keep_prob=1.0)
bagPredictions = emiDriver.getBagPredictions(predictions, minSubsequenceLen=k, numClass=NUM_OUTPUT)
print('Accuracy at k = %d: %f' % (k, np.mean((bagPredictions == BAG_TEST).astype(int))))
mi_savings = (1 - NUM_TIMESTEPS / ORIGINAL_NUM_TIMESTEPS)
emi_savings = getEarlySaving(predictionStep, NUM_TIMESTEPS)
total_savings = mi_savings + (1 - mi_savings) * emi_savings
print('Savings due to MI-RNN : %f' % mi_savings)
print('Savings due to Early prediction: %f' % emi_savings)
print('Total Savings: %f' % (total_savings))
# A slightly more detailed analysis method is provided.
df = emiDriver.analyseModel(predictions, BAG_TEST, NUM_SUBINSTANCE, NUM_OUTPUT)
```
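As a quick sanity check on the savings arithmetic above, here is a standalone computation with made-up numbers: MI-RNN savings come from the shortened subinstance length, and early-prediction savings apply only to the fraction of steps that remain.

```python
# Hypothetical numbers for illustration only.
num_timesteps = 48            # subinstance length
original_num_timesteps = 128  # full bag length
emi_savings = 0.25            # fraction of remaining steps skipped by early prediction

mi_savings = 1 - num_timesteps / original_num_timesteps      # 0.625
total_savings = mi_savings + (1 - mi_savings) * emi_savings  # 0.625 + 0.375 * 0.25
print(total_savings)  # 0.71875
```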
## Picking the best model
```
devnull = open(os.devnull, 'w')  # sink for redirected graph-loading output
for val in modelStats:
round_, acc, modelPrefix, globalStep = val
emiDriver.loadSavedGraphToNewSession(modelPrefix, globalStep, redirFile=devnull)
predictions, predictionStep = emiDriver.getInstancePredictions(x_test, y_test, earlyPolicy_minProb,
minProb=0.99, keep_prob=1.0)
bagPredictions = emiDriver.getBagPredictions(predictions, minSubsequenceLen=k, numClass=NUM_OUTPUT)
print("Round: %2d, Validation accuracy: %.4f" % (round_, acc), end='')
print(', Test Accuracy (k = %d): %f, ' % (k, np.mean((bagPredictions == BAG_TEST).astype(int))), end='')
mi_savings = (1 - NUM_TIMESTEPS / ORIGINAL_NUM_TIMESTEPS)
emi_savings = getEarlySaving(predictionStep, NUM_TIMESTEPS)
total_savings = mi_savings + (1 - mi_savings) * emi_savings
print("Total Savings: %f" % total_savings)
params = {
"NUM_HIDDEN" : 128,
"NUM_TIMESTEPS" : 48, #subinstance length.
"ORIGINAL_NUM_TIMESTEPS" : 128,
"NUM_FEATS" : 16,
"FORGET_BIAS" : 1.0,
"NUM_OUTPUT" : 5,
"USE_DROPOUT" : 1, # '1' -> True. '0' -> False
"KEEP_PROB" : 0.75,
"PREFETCH_NUM" : 5,
"BATCH_SIZE" : 32,
"NUM_EPOCHS" : 2,
"NUM_ITER" : 4,
"NUM_ROUNDS" : 10,
"LEARNING_RATE" : 0.001,
"MODEL_PREFIX" : '/home/sf/data/DREAMER/Dominance/model-gru'
}
gru_dict = {**params}
gru_dict["k"] = k
gru_dict["accuracy"] = np.mean((bagPredictions == BAG_TEST).astype(int))
gru_dict["total_savings"] = total_savings
gru_dict["y_test"] = BAG_TEST
gru_dict["y_pred"] = bagPredictions
# A slightly more detailed analysis method is provided.
df = emiDriver.analyseModel(predictions, BAG_TEST, NUM_SUBINSTANCE, NUM_OUTPUT)
print(tabulate(df, headers=list(df.columns), tablefmt='grid'))
dirname = "/home/sf/data/DREAMER/Dominance/GRU/"
pathlib.Path(dirname).mkdir(parents=True, exist_ok=True)
now = datetime.datetime.now()
filename = ''.join([str(now.year), "-", str(now.month), "-", str(now.day), "|", str(now.hour), "-", str(now.minute)])
# Save the dictionary containing the params and the results.
pkl.dump(gru_dict, open(dirname + filename + ".pkl", mode='wb'))
print("Results for this run have been saved at", dirname + filename + ".pkl")
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
import joblib
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
import datetime
import math
pd.set_option('display.max_rows', 10000)
%matplotlib inline
%reload_ext tensorboard
np.random.seed(42)
tf.random.set_seed(42)
def create_split(df, pct_train, pct_val, batch_size, window_size):
length = df.shape[0]
temp_train_size = find_batch_gcd(math.floor(pct_train * length), batch_size)
test_size = length - temp_train_size
train_size = find_batch_gcd(math.floor((1 - pct_val) * temp_train_size), batch_size)
val_size = temp_train_size - train_size
df_train = df[:- val_size - test_size]
df_val = df[- val_size - test_size - window_size:- test_size]
df_test = df[- test_size - window_size:]
return df_train, df_val, df_test
def find_batch_gcd(length, batch_size):
while length % batch_size != 0:
length -= 1
return length
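# Sketch (assumption: equivalent to find_batch_gcd above): the loop simply trims
# `length` down to the nearest multiple of `batch_size`, so a closed form is:
def find_batch_gcd_closed_form(length, batch_size):
    # Integer division then multiplication drops the remainder.
    return (length // batch_size) * batch_size
# e.g. find_batch_gcd_closed_form(100, 32) == 96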
def create_dataset(df, window_size):
X, y = [], []
for i in range(len(df) - window_size):
v = df.iloc[i:(i + window_size)].values
X.append(v)
y.append(df["Close"].iloc[i + window_size])
return np.array(X), np.array(y)
def create_multi_pred_dataset(df, window_size, time_steps):
X, y = [], []
for i in range(len(df) - window_size - time_steps - 1):
v = df.iloc[i:(i + window_size)].values
X.append(v)
y.append(df["Close"].iloc[i + window_size:i + window_size + time_steps].values)
return np.array(X), np.array(y)
def create_model(nodes, optimizer, dropout, X_train):
model = Sequential()
model.add(LSTM(nodes[0], input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
model.add(LSTM(nodes[1], return_sequences=True))
model.add(LSTM(nodes[2]))
model.add(Dropout(dropout))
model.add(Dense(nodes[3]))
model.compile(loss="mse", optimizer=optimizer, metrics=['mae'])
return model
def flatten_prediction(pred, pred_count, time_steps):
print(pred_count, pred.shape[0])
pred = pred[::time_steps]
pred = pred.flatten()
if pred_count < pred.shape[0]:
pred = pred[:pred_count - pred.shape[0]]
return pred
def evaluate_forecast(pred, actual):
mse = mean_squared_error(pred, actual)
print("Test Mean Squared Error:", mse)
mae = mean_absolute_error(pred, actual)
print("Test Mean Absolute Error:", mae)
return
def train_model(pair, batch_size, window_size, nodes_arr, optimizer, dropout, epochs):
series = pd.read_csv("../data/processed/{}_processed.csv".format(pair))
buy = pair[:3]
sell = pair[3:]
series = series[series.shape[0] % batch_size:]
close = series[['Real Close']]
series = series.drop(['Time', 'Real Close'], axis=1)
series = series[['Close', 'EMA_10', 'EMA_50', 'RSI', 'A/D Index',
'{} Interest Rate'.format(buy), '{} Interest Rate'.format(sell), '{}_CPI'.format(buy), '{}_CPI'.format(sell),
'{} Twitter Sentiment'.format(buy), '{} Twitter Sentiment'.format(sell),
'{} News Sentiment'.format(buy), '{} News Sentiment'.format(sell),
#'EUR_GDP', 'USD_GDP', 'EUR_PPI', 'USD_PPI', 'USD Unemployment Rate', 'EUR Unemployment Rate'
]]
df_train, df_val, df_test = create_split(series, 0.75, 0.1, batch_size, window_size)
print(f'df_train.shape {df_train.shape}, df_validation.shape {df_val.shape}, df_test.shape {df_test.shape}')
closeScaler = MinMaxScaler(feature_range=(0, 1))
featureScaler = MinMaxScaler(feature_range=(0, 1))
df_train = df_train.copy()
df_val = df_val.copy()
df_test = df_test.copy()
df_train.loc[:, ['Close']] = closeScaler.fit_transform(df_train[['Close']])
df_train.loc[:, ~df_train.columns.isin(['Close'])] = featureScaler.fit_transform(df_train.loc[:, ~df_train.columns.isin(['Close'])])
df_val.loc[:, ['Close']] = closeScaler.transform(df_val[['Close']])
df_val.loc[:, ~df_val.columns.isin(['Close'])] = featureScaler.transform(df_val.loc[:, ~df_val.columns.isin(['Close'])])
df_test.loc[:, ['Close']] = closeScaler.transform(df_test[['Close']])
df_test.loc[:, ~df_test.columns.isin(['Close'])] = featureScaler.transform(df_test.loc[:, ~df_test.columns.isin(['Close'])])
#X_train, y_train = create_dataset(df_train, window_size)
#X_val, y_val = create_dataset(df_val, window_size)
#X_test, y_test = create_dataset(df_test, window_size)
X_train, y_train = create_multi_pred_dataset(df_train, window_size, nodes_arr[3])
X_val, y_val = create_multi_pred_dataset(df_val, window_size, nodes_arr[3])
print(X_train.shape)
print(y_train.shape)
print(X_val.shape)
print(y_val.shape)
#X_test, y_test = create_multi_pred_dataset(df_test, window_size, nodes_arr[3])
model = create_model(nodes_arr, optimizer, dropout, X_train)
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
log_dir = "logs/tuning/" + current_time
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, update_freq='epoch', profile_batch=0, histogram_freq=1)
history = model.fit(X_train, y_train,
validation_data=(X_val, y_val),
epochs=epochs,
batch_size=batch_size,
shuffle=False,
#callbacks=[tensorboard_callback]
)
return model, closeScaler, featureScaler
def visualize_loss(history):
fig = plt.figure(figsize=(16, 10))
ax1 = fig.subplots(1)
ax1.set_title('Model Loss')
ax1.set(xlabel='Epoch', ylabel='Loss')
ax1.plot(history.history['loss'], label='Train Loss')
ax1.plot(history.history['val_loss'], label='Val Loss')
ax1.legend()
def test_model(pair, window_size, batch_size, time_steps, model, scaler, fScaler):
buy = pair[:3]
sell = pair[3:]
series = pd.read_csv("../data/processed/{}_processed.csv".format(pair))
series = series[series.shape[0] % batch_size:]
close = series[['Time', 'Real Close', 'Close']]
close = close.copy()
close['PrevClose'] = close['Close'].shift(1)
series = series.drop(['Time', 'Real Close'], axis=1)
series = series[['Close', 'EMA_10', 'EMA_50', 'RSI', 'A/D Index',
'{} Interest Rate'.format(buy), '{} Interest Rate'.format(sell), '{}_CPI'.format(buy), '{}_CPI'.format(sell),
'{} Twitter Sentiment'.format(buy), '{} Twitter Sentiment'.format(sell),
'{} News Sentiment'.format(buy), '{} News Sentiment'.format(sell),
#'EUR_GDP', 'USD_GDP', 'EUR Unemployment Rate', 'USD Unemployment Rate', 'EUR_PPI', 'USD_PPI'
]]
df_train, df_val, df_test = create_split(series, 0.75, 0.1, batch_size, window_size)
print(f'df_train.shape {df_train.shape}, df_validation.shape {df_val.shape}, df_test.shape {df_test.shape}')
df_test = df_test.copy()
df_test.loc[:, ['Close']] = scaler.transform(df_test[['Close']])
df_test.loc[:, ~df_test.columns.isin(['Close'])] = fScaler.transform(df_test.loc[:, ~df_test.columns.isin(['Close'])])
X_test, y_test = create_dataset(df_test, window_size)
#X_test, y_test = create_multi_pred_dataset(df_test, window_size, 5)
y_pred = model.predict(X_test)
multi_pred = flatten_prediction(y_pred, y_test.shape[0], time_steps)
evaluate_forecast(multi_pred, y_test)
#mse = model.evaluate(X_test, y_test)
#print("Test Mean Squared Error:", mse)
index = [i for i in range(multi_pred.shape[0])]
df_predicted = pd.DataFrame(scaler.inverse_transform(multi_pred.reshape(-1, 1)), columns=['Close'], index=index)
df_actual = pd.DataFrame(scaler.inverse_transform(y_test.reshape(-1, 1)), columns=['Close'], index=index)
df = pd.DataFrame(close[-multi_pred.shape[0] - window_size:])
df.reset_index(inplace=True, drop=True)
#print(df_test[['Close']][:20])
#print(scaler.inverse_transform(df_test[['Close']])[:20])
#print(scaler.inverse_transform(y_test.reshape(-1, 1))[:20])
df = df[window_size:]
df.reset_index(inplace=True, drop=True)
#print(df[:20])
df['rip'] = df_actual['Close']
#df_predicted['Close'] = df['Real Close'].mul(np.exp(df_predicted['Close'].shift(-1))).shift(1)
df_actual = df['Real Close'].mul(np.exp(df['Close']).shift(-1)).shift(1)
print(df[:20])
print(df_actual[:20])
#evaluate_forecast(df_predicted['Close'].iloc[1:], df_actual['Close'].iloc[1:])
#return df_predicted, df_actual
#index = [i for i in range(y_pred.shape[0])]
#df_predicted = pd.DataFrame(scaler.inverse_transform(y_pred), columns=['Close'], index=index)
#df_actual = pd.DataFrame(scaler.inverse_transform(y_test.reshape(-1, 1)), columns=['Close'], index=index)
#df = pd.DataFrame(close['Real Close'][-y_pred.shape[0] - window_size:-window_size])
#df.reset_index(inplace=True, drop=True)
#df_predicted['Close'] = df['Real Close'].mul(np.exp(df_predicted['Close'].shift(-1))).shift(1)
#df_actual['Close'] = df['Real Close'].mul(np.exp(df_actual['Close'].shift(-1))).shift(1)
#df_predicted['Close'] = df_predicted['Close']
#df_actual['Close'] = df_actual['Close']
#return df_predicted, df_actual
def visualize_prediction(df_predicted, df_actual):
fig = plt.figure(figsize=(16, 10))
ax1 = fig.subplots(1)
ax1.set_title('Predicted Closing Price')
ax1.set(xlabel='Time', ylabel='Close')
ax1.plot(df_actual['Close'][:100], label='Actual')
ax1.plot(df_predicted['Close'][:100], label='Prediction')
ax1.legend()
batch_size = 32
window_size = 10
nodes = [80, 64, 32, 5]
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0005)
dropout = 0.2
epochs = 1
model, closeScaler, featureScaler = train_model("EURUSD", batch_size, window_size, nodes, optimizer, dropout, epochs)
cool = test_model("EURUSD", window_size, batch_size, 5, model, closeScaler, featureScaler)
from TwitterAPI import TwitterAPI, TwitterPager
import datetime
from dotenv import load_dotenv
import re
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import os
#from sentiment_keyword_defs import SENTIMENT_KEYWORDS
load_dotenv()
import time
import functools
def timeit(func):
@functools.wraps(func)
    def newfunc(*args, **kwargs):
        startTime = time.time()
        result = func(*args, **kwargs)
        elapsedTime = time.time() - startTime
        print('function [{}] finished in {} ms'.format(
            func.__name__, int(elapsedTime * 1000)))
        return result  # propagate the wrapped function's return value
return newfunc
sentiment_keyword = {
"usd": {
"positive": [
"usd/",
"u.s.",
"greenback",
"buck",
"barnie",
"america",
"united states",
],
"negative": ["/usd", "cable"],
},
"aud": {
"positive": ["aud/", "gold", "aussie", "australia"],
"negative": ["/aud"],
},
"gbp": {
"positive": [
"gbp/",
"sterling",
"pound",
"u.k.",
"united kingdom",
"cable",
"guppy",
],
"negative": ["/gbp"],
},
"nzd": {
"positive": ["nzd/", "gold", "kiwi", "new zealand"],
"negative": ["/nzd"],
},
"cad": {"positive": ["cad/", "oil", "loonie", "canada"], "negative": ["/cad"]},
"chf": {"positive": ["chf/", "swiss"], "negative": ["/chf"]},
"jpy": {"positive": ["jpy/", "asian", "japan"], "negative": ["/jpy", "guppy"]},
"eur": {"positive": ["eur/", "fiber", "euro"], "negative": ["/eur"]},
}
api = TwitterAPI(consumer_key=os.getenv("TWITTER_CONSUMER_KEY"), consumer_secret=os.getenv("TWITTER_CONSUMER_SECRET"),
access_token_key=os.getenv("TWITTER_ACCESS_TOKEN_KEY"), access_token_secret=os.getenv("TWITTER_ACCESS_TOKEN_SECRET"), api_version='2')
@timeit
def get_twitter_data(start_time):
pager = TwitterPager(api, 'tweets/search/recent', {
'query': 'from:FXstreetNews OR from:forexcom',
'tweet.fields': 'public_metrics,created_at',
'start_time': str(start_time),
'max_results': 100
}
)
tweet_data = []
for item in pager.get_iterator(new_tweets=False):
tweet_data.append({"text": item['text'], "created_at": item['created_at']})
print(item)
return tweet_data
@timeit
def tweet_sentiment(tweets):
sid = SentimentIntensityAnalyzer()
for tweet_data in tweets:
tweet_data["text"] = remove_pattern(tweet_data["text"], "RT @[\w]*:")
tweet_data["text"] = remove_pattern(tweet_data["text"], "@[\w]*")
tweet_data["text"] = remove_pattern(tweet_data["text"], "https?://[A-Za-z0-9./]*")
tweet_data["text"] = tweet_data["text"].replace("[^a-zA-Z]", " ")
tweet_data["text"] = tweet_data["text"].replace("\n", " ")
tweet_data["score"] = sid.polarity_scores(tweet_data["text"])["compound"]
return tweets
def remove_pattern(input_text, pattern):
"""
Finds patterns in posts and substitutes them with blank space.
Args:
input_text: String representing a twitter post
pattern: Regex pattern to search for in twitter post
Returns:
String with pattern stripped.
"""
    match = re.findall(pattern, input_text)
    for i in match:
        # The matched text is a literal string, so escape it before reusing it as a pattern.
        input_text = re.sub(re.escape(i), "", input_text)
    return input_text
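# Self-contained sketch of what remove_pattern does on a hypothetical tweet;
# each regex match is removed from the text as a literal substring.
import re
def _remove_pattern_demo(text, pattern):
    for m in re.findall(pattern, text):
        text = text.replace(m, "")
    return text
# _remove_pattern_demo("RT @news: EUR rallies", r"@[\w]*:") -> "RT  EUR rallies"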
tweets = get_twitter_data((datetime.datetime.now() - datetime.timedelta(hours=48)).isoformat("T") + "Z")
tweet_sentiment(tweets)
print(tweets)
sentiment = tweet_sentiment(tweets)
sentiment
pd.set_option('display.max_rows', 100)
@timeit
def combine_dates(tweets):
"""
Merge sentiment scores according to date.
Args:
tweets: Dataframe containing countries and their sentiment scores at a certain time
Returns:
Dataframe with a country's sentiment score with sequential time.
"""
currencies = ["eur", "usd", "jpy", "cad", "gbp", "aud", "nzd", "chf"]
length = 1
for i in range(1, len(tweets.index)):
current = tweets.at[i, "Time"]
if current == tweets.at[i - length, "Time"] and i == len(tweets.index) - 1:
for currency in currencies:
tweets.at[i - length, currency.upper()] = (
tweets[currency.upper()].iloc[i - length : i].mean()
)
elif current == tweets.at[i - length, "Time"]:
length += 1
elif length > 1:
for currency in currencies:
tweets.at[i - length, currency.upper()] = (
tweets[currency.upper()].iloc[i - length : i].mean()
)
length = 1
tweets.drop_duplicates(subset=["Time"], inplace=True)
return tweets
@timeit
def country_sentiment_df(tweets, start, window):
tweet_df = pd.DataFrame()
tweet_df['Time'] = [datetime.datetime.strptime(tweet['created_at'], "%Y-%m-%dT%H:%M:%S.%fZ") for tweet in tweets]
tweet_df['Time'] = tweet_df['Time'].dt.strftime("%Y-%m-%d %H:%M:00")
tweet_df['Twitter_Sentiment'] = [tweet['score'] for tweet in tweets]
tweet_df['Post'] = [tweet['text'].lower() for tweet in tweets]
country_df = pd.DataFrame()
for currency in sentiment_keyword:
for entity in sentiment_keyword[currency]["positive"]:
currency_df = tweet_df[tweet_df['Post'].str.contains(entity)]
currency_df = currency_df[{"Time", "Twitter_Sentiment"}]
currency_df = currency_df.rename(
columns={"Twitter_Sentiment": currency.upper()}
)
if country_df.empty:
country_df = currency_df
elif not currency.upper() in country_df.columns:
country_df = country_df.merge(currency_df, how="outer", on="Time")
else:
country_df = country_df.merge(
currency_df, how="outer", on=["Time", currency.upper()]
)
for entity in sentiment_keyword[currency]['negative']:
currency_df = tweet_df[tweet_df['Post'].str.contains(entity)]
currency_df = currency_df[{"Time", "Twitter_Sentiment"}]
currency_df["Twitter_Sentiment"] = currency_df[
"Twitter_Sentiment"
].transform(lambda score: -score)
currency_df = currency_df.rename(
columns={"Twitter_Sentiment": currency.upper()}
)
if country_df.empty:
country_df = currency_df
elif not currency.upper() in country_df.columns:
country_df = country_df.merge(currency_df, how="outer", on="Time")
else:
country_df = country_df.merge(
currency_df, how="outer", on=["Time", currency.upper()]
)
print(country_df)
time_frame = pd.date_range(
start=start, freq="1T", end=str(datetime.datetime.now())
)
time_frame = pd.DataFrame(time_frame, columns=["Time"])
time_frame["Time"] = time_frame["Time"].dt.strftime("%Y-%m-%d %H:%M:%S")
country_df = country_df.reset_index(drop=True)
country_df = combine_dates(country_df)
country_df = time_frame.merge(country_df, how="left", on="Time")
country_df = country_df.sort_values(by="Time", ascending=True)
for currency in sentiment_keyword:
country_df[currency.upper()] = (
country_df[currency.upper()].rolling(window, min_periods=1).mean()
)
country_df = country_df.fillna(0)
return country_df
nice = country_sentiment_df(sentiment, str(datetime.datetime.strftime(datetime.datetime.now() - datetime.timedelta(hours=27), "%Y-%m-%d %H:%M:00")), 60)
nice[-500:-400]
generate_fake_ohlc_data(nice)
import tensorflow as tf
import pandas as pd
import numpy as np
import joblib
import random
import datetime
import talib
from talib import abstract  # EMA, RSI, AD used in generate_technical_indicators
@timeit
def generate_twitter_sentiment(hours, window):
tweets = get_twitter_data((datetime.datetime.now() - datetime.timedelta(hours=hours)).isoformat("T") + "Z")
sentiment = tweet_sentiment(tweets)
    # country_sentiment_df expects a start-time string, not a raw hour count.
    start = str(datetime.datetime.strftime(datetime.datetime.now() - datetime.timedelta(hours=hours), "%Y-%m-%d %H:%M:00"))
    return country_sentiment_df(sentiment, start, window)
@timeit
def generate_technical_indicators(pair_df):
pair_df["EMA_10"] = pd.DataFrame(abstract.EMA(pair_df["Close"], timeperiod=10))#960))
pair_df["EMA_50"] = pd.DataFrame(abstract.EMA(pair_df["Close"], timeperiod=50))#4800))
pair_df["RSI"] = pd.DataFrame(abstract.RSI(pair_df["Close"], timeperiod=14))
pair_df["A/D Index"] = pd.DataFrame(
abstract.AD(
pair_df["High"], pair_df["Low"], pair_df["Close"], pair_df["Volume"]
)
)
pair_df["A/D Index"] = pair_df["A/D Index"] - pair_df["A/D Index"].shift(1)
pair_df = stationary_log_returns(pair_df)
return pair_df
@timeit
def stationary_log_returns(pair_df):
"""
Calculates log returns for EMA and closing price to make data stationary.
Args:
pair_df: Dataframe containing OHLC data, Time, and technical indicators
Returns:
Dataframe with EMA and closing prices substituted with log returns
"""
pair_df = pair_df.copy()
pair_df["Real Close"] = pair_df["Close"]
pair_df["Close"] = np.log(pair_df["Close"] / pair_df["Close"].shift(1))
pair_df["EMA_10"] = np.log(pair_df["EMA_10"] / pair_df["EMA_10"].shift(1))
pair_df["EMA_50"] = np.log(pair_df["EMA_50"] / pair_df["EMA_50"].shift(1))
return pair_df
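# Quick check of the log-return transform above on hypothetical prices:
# log(p_t / p_{t-1}) is stationary, and compounding the returns recovers the
# final price from the first one.
import numpy as np
_p = np.array([100.0, 110.0, 99.0])
_log_ret = np.log(_p[1:] / _p[:-1])  # [log(1.1), log(0.9)]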
# Used for testing
@timeit
def generate_fake_ohlc_data(data_df):
data_df.loc[0, 'Close'] = random.random()*0.001 + 1
data_df.loc[0, 'Open'] = random.random()*0.001 + 1
data_df.loc[0, 'High'] = data_df.loc[0, 'Open'] + random.random()*0.001 if data_df.loc[0, 'Open'] > data_df.loc[0, 'Close'] else data_df.loc[0, 'Close'] + random.random()*0.001
data_df.loc[0, 'Low'] = data_df.loc[0, 'Open'] - random.random()*0.001 if data_df.loc[0, 'Open'] < data_df.loc[0, 'Close'] else data_df.loc[0, 'Close'] - random.random()*0.001
data_df.loc[0, 'Volume'] = random.randrange(10, 100, 1)
for i in range(1, len(data_df)):
data_df.loc[i, 'Open'] = data_df.loc[i - 1, 'Close'] + (random.random() - 0.5)*0.001
data_df.loc[i, 'Close'] = data_df.loc[i, 'Open'] + (random.random() - 0.5)*0.001
data_df.loc[i, 'Volume'] = random.randrange(10, 100, 1)
if data_df.loc[i, 'Open'] > data_df.loc[i, 'Close']:
data_df.loc[i, 'High'] = data_df.loc[i, 'Open'] + random.random()*0.001
data_df.loc[i, 'Low'] = data_df.loc[i, 'Close'] - random.random()*0.001
else:
data_df.loc[i, 'High'] = data_df.loc[i, 'Close'] + random.random()*0.001
data_df.loc[i, 'Low'] = data_df.loc[i, 'Open'] - random.random()*0.001
return data_df
@timeit
def configure_time(minutes, dataframe, start):
time_frame = pd.date_range(
start=start,
freq="{}T".format(minutes),
end=str(datetime.datetime.now())
)
time_frame = pd.DataFrame(time_frame, columns=["Time"])
time_frame["Time"] = time_frame["Time"].dt.strftime("%Y-%m-%d %H:%M:%S")
time_frame["Time"] = pd.to_datetime(time_frame["Time"], utc=True)
configured_df = time_frame.merge(dataframe, how="inner", on="Time")
return configured_df
@timeit
def generate_prediction(pair, window_size, time_steps):
fScaler = joblib.load("../scalers/{}/features.bin".format(pair))
scaler = joblib.load("../scalers/{}/close.bin".format(pair))
model = tf.keras.models.load_model("../models/{}".format(pair))
buy = pair[:3]
sell = pair[3:]
twitter_df = generate_twitter_sentiment(48, 60)
#ohlc_df = generate_fake_ohlc_data(twitter_df)
#technical_analysis_df = generate_technical_indicators(ohlc_df)
#inference_df = configure_time(15, technical_analysis_df, technical_analysis_df.loc[0, 'Time'])
return twitter_df
print(generate_prediction("EURUSD", 96, 4))
```
# Interactive Demo for Metrics
* command line executables: see README.md
* algorithm documentation: [metrics.py API & Algorithm Documentation](metrics.py_API_Documentation.ipynb)
* **make sure you have enabled interactive widgets via:**
```
sudo jupyter nbextension enable --py --sys-prefix widgetsnbextension
```
* **make sure you use the correct Kernel** matching your evo Python version (otherwise use the menu Kernel->Change Kernel)
...some modules and settings for this demo:
```
from __future__ import print_function
from evo.tools import log
log.configure_logging()
from evo.tools import plot
from evo.tools.plot import PlotMode
from evo.core.metrics import PoseRelation, Unit
from evo.tools.settings import SETTINGS
# temporarily override some package settings
SETTINGS.plot_figsize = [6, 6]
SETTINGS.plot_split = True
SETTINGS.plot_usetex = False
# magic plot configuration
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib notebook
# interactive widgets configuration
import ipywidgets
check_opts_ape = {"align": False, "correct_scale": False, "show_plot": True}
check_boxes_ape=[ipywidgets.Checkbox(description=desc, value=val) for desc, val in check_opts_ape.items()]
check_opts_rpe = {"align": False, "correct_scale": False, "all_pairs": False, "show_plot": True}
check_boxes_rpe=[ipywidgets.Checkbox(description=desc, value=val) for desc, val in check_opts_rpe.items()]
delta_input = ipywidgets.FloatText(value=1.0, description='delta', disabled=False, color='black')
du_selector=ipywidgets.Dropdown(
options={u.value: u for u in Unit},
value=Unit.frames, description='delta_unit'
)
pm_selector=ipywidgets.Dropdown(
options={p.value: p for p in PlotMode},
value=PlotMode.xy, description='plot_mode'
)
pr_selector=ipywidgets.Dropdown(
options={p.value: p for p in PoseRelation},
value=PoseRelation.translation_part, description='pose_relation'
)
```
---
## Load trajectories
```
from evo.tools import file_interface
from evo.core import sync
```
**Load KITTI files** with entries of the first three rows of $\mathrm{SE}(3)$ matrices per line (no timestamps):
```
traj_ref = file_interface.read_kitti_poses_file("../test/data/KITTI_00_gt.txt")
traj_est = file_interface.read_kitti_poses_file("../test/data/KITTI_00_ORB.txt")
```
**...or load a ROS bagfile** with `geometry_msgs/PoseStamped` topics:
```
try:
import rosbag
bag_handle = rosbag.Bag("../test/data/ROS_example.bag")
traj_ref = file_interface.read_bag_trajectory(bag_handle, "groundtruth")
traj_est = file_interface.read_bag_trajectory(bag_handle, "ORB-SLAM")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
except ImportError as e:
print(e) # ROS not found
```
**... or load TUM files with** 3D position and orientation quaternion per line ($x$ $y$ $z$ $q_x$ $q_y$ $q_z$ $q_w$):
```
traj_ref = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_groundtruth.txt")
traj_est = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_ORB_kf_mono.txt")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
print(traj_ref)
print(traj_est)
```
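As an aside, each line of a TUM trajectory file is just whitespace-separated floats; in the actual files a timestamp precedes the pose. A minimal parser sketch, independent of evo's `file_interface` (names are illustrative):

```python
def parse_tum_line(line):
    # timestamp tx ty tz qx qy qz qw
    vals = [float(v) for v in line.split()]
    return {"t": vals[0], "xyz": vals[1:4], "quat_xyzw": vals[4:8]}

print(parse_tum_line("1311868164.36 0.1 0.2 0.3 0.0 0.0 0.0 1.0"))
```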
---
## APE
Algorithm and API explanation: [see here](metrics.py_API_Documentation.ipynb#ape_math)
### Interactive APE Demo
***Run the code below, configure the parameters in the GUI and press the update button.***
(uses the trajectories loaded above)
```
import evo.main_ape as main_ape
import evo.common_ape_rpe as common
count = 0
results = []
def callback_ape(pose_relation, align, correct_scale, plot_mode, show_plot):
global results, count
est_name="APE Test #{}".format(count)
result = main_ape.ape(traj_ref, traj_est, est_name=est_name,
pose_relation=pose_relation, align=align, correct_scale=correct_scale)
count += 1
results.append(result)
if show_plot:
fig = plt.figure()
ax = plot.prepare_axis(fig, plot_mode)
plot.traj(ax, plot_mode, traj_ref, style="--", alpha=0.5)
plot.traj_colormap(
ax, result.trajectories[est_name], result.np_arrays["error_array"], plot_mode,
min_map=result.stats["min"], max_map=result.stats["max"])
_ = ipywidgets.interact_manual(callback_ape, pose_relation=pr_selector, plot_mode=pm_selector,
**{c.description: c.value for c in check_boxes_ape})
```
---
## RPE
Algorithm and API explanation: [see here](metrics.py_API_Documentation.ipynb#rpe_math)
### Interactive RPE Demo
***Run the code below, configure the parameters in the GUI and press the update button.***
(uses the trajectories loaded above, alignment only useful for visualization here)
```
import evo.main_rpe as main_rpe
count = 0
results = []
def callback_rpe(pose_relation, delta, delta_unit, all_pairs, align, correct_scale, plot_mode, show_plot):
global results, count
est_name="RPE Test #{}".format(count)
result = main_rpe.rpe(traj_ref, traj_est, est_name=est_name,
pose_relation=pose_relation, delta=delta, delta_unit=delta_unit,
all_pairs=all_pairs, align=align, correct_scale=correct_scale,
support_loop=True)
count += 1
results.append(result)
if show_plot:
fig = plt.figure()
ax = plot.prepare_axis(fig, plot_mode)
plot.traj(ax, plot_mode, traj_ref, style="--", alpha=0.5)
plot.traj_colormap(
ax, result.trajectories[est_name], result.np_arrays["error_array"], plot_mode,
min_map=result.stats["min"], max_map=result.stats["max"])
_ = ipywidgets.interact_manual(callback_rpe, pose_relation=pr_selector, plot_mode=pm_selector,
delta=delta_input, delta_unit=du_selector,
**{c.description: c.value for c in check_boxes_rpe})
```
Do stuff with the result objects:
```
import pandas as pd
from evo.tools import pandas_bridge
df = pd.DataFrame()
for result in results:
df = pd.concat((df, pandas_bridge.result_to_df(result)), axis="columns")
df
df.loc["stats"]
```
```
import json
import requests
import spacy
import nltk
from collections import Counter
import sys
sys.path.append("..")
with open('../data/comment_data/headphoneadvice_360.json') as f:
c_ha = json.load(f)
len(c_ha)
with open('../data/comment_data/audiophile_360.json') as f:
c_a = json.load(f)
len(c_a)
with open('../data/comment_data/budgetaudiophile_360.json') as f:
c_ba = json.load(f)
len(c_ba)
```
## Increase total comment number
Instead of making sure the words best, advice, and recommendations are in the post, let's instead:
1) Make 3 requests:
- best + keyword
- advice + keyword
- recommendations + keyword
2) Remove duplicates
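The duplicate-removal step can be sketched as follows: merge the three result lists and keep the first occurrence of each submission `id` (field name as in the Pushshift responses used below):

```python
def dedupe_submissions(*result_lists):
    # Keep one submission per id across all query results, preserving order.
    seen = set()
    unique = []
    for results in result_lists:
        for sub in results:
            if sub['id'] not in seen:
                seen.add(sub['id'])
                unique.append(sub)
    return unique
```

Note that `get_more_subs` below concatenates the three lists without this step; the `set(sub_ids)` check afterwards shows how many duplicates remain.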
```
from crowdrank import ingester
subreddit = 'headphoneadvice'
lookback_days = 360
num_posts = 500
def get_subs_short(subreddit, lookback_days, num_posts):
print("Looking at {}".format(subreddit))
h_page = requests.get('http://api.pushshift.io/reddit/search/submission/?subreddit={}&q=best+advice+recommendations&before={}d&size={}&sort_type=score'.format(subreddit,lookback_days, num_posts))
# TBD: Put all this data on S3
# Save submissions for later
#out_file = dumppath + "{}/{}_{}.{}".format("submission_data", subreddit, lookback_days, "json")
submissions_data = json.loads(h_page.text)
return submissions_data
# headphoneadvice, 85 submissions
submissions_data = get_subs_short(subreddit, lookback_days, num_posts)
len(submissions_data['data'])
submissions_data['data'][0]
comments_list = []
%%time
# For 85 submissions... ~ 6 mins
for h_d in submissions_data['data']:
comments_list.append(ingester.get_assoc_comments(h_d))
print('end')
cmts_per_sub = [len(comments_list[i]) for i in range(len(comments_list))]
# Headphoneadvice
sum(cmts_per_sub)
# Audiophile ~ 5s
subs_data_audiophile = get_subs_short('audiophile', 360, 500)
comments_list_audiophile = []
%%time
# For 33 submissions... ~ 1m38s mins
for h_d in subs_data_audiophile['data']:
comments_list_audiophile.append(ingester.get_assoc_comments(h_d))
print('end')
# How many total comments:
cmts_per_sub_a = [len(comments_list_audiophile[i]) for i in range(len(comments_list_audiophile))]
sum(cmts_per_sub_a)
len(subs_data_baudiophile['data'])
# Baudiophile ~ 5s
subs_data_baudiophile = get_subs_short('budgetaudiophile', 360, 500)
%%time
# For 85 submissions... ~ 2m44s
comments_list_baudiophile = []
for h_d in subs_data_baudiophile['data']:
comments_list_baudiophile.append(ingester.get_assoc_comments(h_d))
print('end')
# How many total comments:
cmts_per_sub_ba = [len(comments_list_baudiophile[i]) for i in range(len(comments_list_baudiophile))]
sum(cmts_per_sub_ba)
sr = 'audiophile'
lookback_days = 360
comments_path = "../data/comment_data/{}_{}.json".format(sr, lookback_days)
with open(comments_path) as f:
comments_2D = json.load(f)
num_comments = sum([len(comments) for comments in comments_2D])
num_comments
# from crowdrank import ingester
# ingester.count_comments('audiophile', lookback_days)
def get_more_subs(subreddit, keyword, lookback_days, num_posts):
# Expanding the search, do 3 queries for keyword + advice_synonym
# and then add the results
print("Looking at {}".format(subreddit))
#
h_page_best = requests.get('http://api.pushshift.io/reddit/search/submission/?subreddit={}&q={}+best&before={}d&size={}&sort_type=score'.format(subreddit, keyword, lookback_days, num_posts))
h_page_recc = requests.get('http://api.pushshift.io/reddit/search/submission/?subreddit={}&q={}+recommendations&before={}d&size={}&sort_type=score'.format(subreddit, keyword, lookback_days, num_posts))
h_page_advice = requests.get('http://api.pushshift.io/reddit/search/submission/?subreddit={}&q={}+advice&before={}d&size={}&sort_type=score'.format(subreddit, keyword, lookback_days, num_posts))
# TBD: Put all this data on S3
# Save submissions for later
#out_file = dumppath + "{}/{}_{}.{}".format("submission_data", subreddit, lookback_days, "json")
submissions_data_best = json.loads(h_page_best.text)
submissions_data_recc = json.loads(h_page_recc.text)
submissions_data_advice = json.loads(h_page_advice.text)
combined_subs = submissions_data_best['data'] + submissions_data_recc['data'] + submissions_data_advice['data']
#comments_list = []
#for h_d in combined_subs:
# comments = ingester.get_assoc_comments(h_d)
# comments_list.append(comments)
return combined_subs
subreddit, lookback_days, num_posts = 'headphoneadvice', 360, 500
h_page_best = requests.get('http://api.pushshift.io/reddit/search/submission/?subreddit={}&q=headphones+recommendations&before={}d&size={}&sort_type=score'.format(subreddit, lookback_days, num_posts))
subs_best = json.loads(h_page_best.text)
type(subs_best['data'])
c_list = get_more_subs('headphoneadvice', 'headphones', 360, 500)
sub_ids = [sub['id'] for sub in c_list]
print(len(sub_ids))
print(len(list(set(sub_ids))))
len(c_list)
c_list = get_more_subs('headphoneadvice', 'headphones', 360, 500)
sub_ids = [sub['id'] for sub in c_list]
print(len(set(sub_ids)))
c_list = get_more_subs('audiophile', 'headphones', 360, 500)
sub_ids = [sub['id'] for sub in c_list]
print(len(set(sub_ids)))
c_list = get_more_subs('budgetaudiophile', 'headphones', 360, 500)
sub_ids = [sub['id'] for sub in c_list]
print(len(sub_ids))
print(len(set(sub_ids)))
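# The three keyword queries in get_more_subs can return overlapping
# submissions, as the id counts above show. A small helper (hypothetical,
# not part of the original notebook) to deduplicate by submission id
# while preserving order:
def dedupe_submissions(subs):
    seen = set()
    unique = []
    for sub in subs:
        if sub['id'] not in seen:
            seen.add(sub['id'])
            unique.append(sub)
    return unique
# e.g. dedupe_submissions(c_list) keeps one entry per unique id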
with open('../data/comment_data/headphoneadvice_360.json') as f:
c_ha = json.load(f)
comments_ha = []
for c in c_ha:
comments_ha.extend(c)
len(comments_ha)
comments_ha_body = [c['body'] for c in comments_ha]
len(comments_ha_body)
len(set(comments_ha_body))
```
## Viz for comparing rankings:
```
# We're going to look at the top 99% of the data's rankings (if we have only 9 opinions on a brand, omit it)
import pandas as pd
import seaborn as sns
df_ranking = pd.read_csv("../data/results/Headphones_360.csv")
df_ranking.columns = ['Brand', 'Sentiment', 'Popularity', 'Variance']
sns.scatterplot(x='Sentiment', y='Popularity', data=df_ranking)
```
## Comparing CrowdRank to other rankings
```
df_ranking
df_ranking.Brand.values
ranking_string= '''
Bose
Sony
Sennheiser
Audio-technica
Beyerdynamic
JBL
Plantronics
Beats
Jaybird
Jabra
Anker
Skullcandy
SteelSeries
Logitech
HyperX
'''
#
ranking_haddict_string = '''
Sennheiser
JBL
Bose
Sony
Skullcandy
AKG
Beats
Audio-technica
Bang&Olufsen
Beyerdynamic
Marshall
Samsung
Xiaomi
Jabra
Bowers&Wilkins
Philips
Shure
V-MODA
RHA-Technologies
HyperX
'''
rtings_top = ranking_string.split()
rtings_top
import numpy as np
import pandas as pd
ranking_comp = np.array([df_ranking.Brand.values, rtings_top ])
print(ranking_comp)
df_compared = pd.DataFrame({'CrowdRank' : [b.capitalize() for b in df_ranking.Brand.values],
'RTings': rtings_top +['NaN' for i in range(len(df_ranking) - len(rtings_top))],
'HeadphonesAddict' : [s_i for s_i in ranking_haddict_string.split()] })
df_compared.index += 1
df_compared
# Top-k accuracy:
# In top-5 accuracy you give yourself credit for having the right answer if the right answer appears in your top five guesses:
# Top-15 accuracy:
# For CR & RTs: 10/15 = 67 %
# FOR HAs & RTs: 7/15 = 47%
# For CR & HAs: 9/15 = 60%
# Jaccard Similarity for average over lap at diff depths:
# For CR & RTs, at depth 15:
# [HyperX, Bose, SteelSeries, Logitech, Sennheiser, Jabra, Beyerdynamic, Sony, Skullcandy, JBL],
# [Denon, Klipsch, Philips, Samsung, AKG,]
# Jacc Sim = 10/15
# For CR & HAs, at depth 15:
#
# NO: [Hyperx, Steelseries, Denon, Klipsch, Philips, Logitech, ]
# Y : [Sennheiser, Jabra, Samsung, Beyerdynamic, Bose, Akg, Sony, Skullcandy, JBL]
# 9/15
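# Sketch of the overlap metric computed by hand in the comments above:
# the fraction of brands shared by the top k entries of two rankings
# (intersection over depth k, as in the 10/15 and 9/15 figures).
# Hypothetical helper, added for illustration.
def topk_overlap(ranking_a, ranking_b, k):
    a, b = set(ranking_a[:k]), set(ranking_b[:k])
    return len(a & b) / k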
# Ones which appear in all 3:
# Sony, Bose, Sennheiser, Audio-technica, BeyerDynamic, JBL, Jabra, Skullcandy, HyperX
# Sony: 13, 2, 4 -->
# Bose: 11, 1, 3 -->
# Sennheiser: 7, 3, 1
# A-T: 16, 4, 8
# Beyerdynamic: 10, 5, 5
# JBL: 15, 6, 2
# Jabra: 8, 9, 14
# Skullcandy: 14, 12, 5
# HyperX: 1, 15, 20
df_ranking_by_pop = df_ranking.sort_values(by=['Popularity'], ascending=False)
df_compared_pop = pd.DataFrame({'CrowdRank' : [b.capitalize() for b in df_ranking_by_pop.Brand.values],
'RTings': rtings_top +['NaN' for i in range(len(df_ranking) - len(rtings_top))],
'HeadphonesAddict' : [s_i for s_i in ranking_haddict_string.split()] })
df_compared_pop.index +=1
df_compared_pop
# Top-15 accuracy (Using CrowdRank_Popularity):
# NO: [AKG, Apple, Philips, Samsung]
# For CR & RTs: 11/15 = 73.3 %
# FOR HAs & RTs: 7/15 = 47%
# No: [Apple, Logitech, HyperX, Philips, Jaybird, Steelseries]
# For CR & HAs: 9/15 = 60%
# 4 brands are not very common
df = pd.DataFrame({'CR & RTs': ['67%', '73%'], 'CR & HAs': ['60%', '60%'], 'RTs & HAs': ['47%', '47%']})
#df.index.name = 'Top-15 Accuracy'
df
```
```
# default_exp callback.core
#export
from fastai.data.all import *
from fastai.optimizer import *
#hide
from nbdev.showdoc import *
#export
_all_ = ['CancelFitException', 'CancelEpochException', 'CancelTrainException', 'CancelValidException', 'CancelBatchException']
```
# Callback
> Basic callbacks for Learner
## Events
Callbacks can occur at any of these times: *before_fit before_epoch before_train before_batch after_pred after_loss before_backward after_backward after_step after_cancel_batch after_batch after_cancel_train after_train before_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
```
# export
_events = L.split('before_fit before_epoch before_train before_batch after_pred after_loss \
before_backward after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train before_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
```
To ensure that you are referring to an event (that is, the name of one of the times when callbacks are called) that exists, and to get tab completion of event names, use `event`:
```
test_eq(event.after_backward, 'after_backward')
```
## Callback -
```
#export
_inner_loop = "before_batch after_pred after_loss before_backward after_backward after_step after_cancel_batch after_batch".split()
#export
@funcs_kwargs(as_method=True)
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn,run,run_train,run_valid = 'learn',None,True,True,True
_methods = _events
def __init__(self, **kwargs): assert not kwargs, f'Passed unknown events: {kwargs}'
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
_run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
(self.run_valid and not getattr(self, 'training', False)))
res = None
if self.run and _run: res = getattr(self, event_name, noop)()
if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
return res
def __setattr__(self, name, value):
if hasattr(self.learn,name):
warn(f"You are setting an attribute ({name}) that also exists in the learner, so you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you want to set it in the learner.")
super().__setattr__(name, value)
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
```
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions; looping through the data, we:
- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients
Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:
- `before_fit`: called before doing anything, ideal for initial setup.
- `before_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `before_train`: called at the beginning of the training part of an epoch.
- `before_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `before_backward`: called after the loss has been computed, but only in training mode (i.e. when the backward pass will be used)
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `before_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
```
show_doc(Callback.__call__)
```
One way to define callbacks is through subclassing:
```
class _T(Callback):
def call_me(self): return "maybe"
test_eq(_T()("call_me"), "maybe")
```
Another way is by passing the callback function to the constructor:
```
def cb(self): return "maybe"
_t = Callback(before_fit=cb)
test_eq(_t(event.before_fit), "maybe")
show_doc(Callback.__getattr__)
```
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
```
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
```
Note that it only works to get the value of the attribute, if you want to change it, you have to manually access it with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. It also issues a warning that something is probably wrong:
```
learn.a
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
```
A proper version needs to write `self.learn.a = self.a + 1`:
```
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
```
### TrainEvalCallback -
```
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
run_valid = False
def before_fit(self):
"Set the iter and epoch counters to 0, put the model on the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
if hasattr(self.dls, 'device'): self.model.to(self.dls.device)
if hasattr(self.model, 'reset'): self.model.reset()
def after_batch(self):
"Update the iter counter (in training mode)"
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def before_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def before_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
```
This `Callback` is automatically added in every `Learner` at initialization.
```
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.before_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.before_train)
# export
if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback]
```
### GatherPredsCallback -
```
#export
#TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors.
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0):
store_attr("with_input,with_loss,save_preds,save_targs,concat_dim")
def before_batch(self):
if self.with_input: self.inputs.append((self.learn.to_detach(self.xb)))
def before_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs = []
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
if not hasattr(self, 'pred'): return
preds,targs = self.learn.to_detach(self.pred),self.learn.to_detach(self.yb)
if self.save_preds is None: self.preds.append(preds)
else: (self.save_preds/str(self.iter)).save_array(preds)
if self.save_targs is None: self.targets.append(targs)
else: (self.save_targs/str(self.iter)).save_array(targs[0])
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(self.learn.to_detach(loss))
def after_validate(self):
"Concatenate all recorded tensors"
if not hasattr(self, 'preds'): return
if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim))
if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim))
if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))
if self.with_loss: self.losses = to_concat(self.losses)
def all_tensors(self):
res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets]
if self.with_input: res = [self.inputs] + res
if self.with_loss: res.append(self.losses)
return res
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.before_validate)
show_doc(GatherPredsCallback.after_batch)
show_doc(GatherPredsCallback.after_validate)
#export
class FetchPredsCallback(Callback):
"A callback to fetch predictions during the training loop"
remove_on_fetch = True
def __init__(self, ds_idx=1, dl=None, with_input=False, with_decoded=False, cbs=None, reorder=True):
self.cbs = L(cbs)
store_attr('ds_idx,dl,with_input,with_decoded,reorder')
def after_validate(self):
to_rm = L(cb for cb in self.learn.cbs if getattr(cb, 'remove_on_fetch', False))
with self.learn.removed_cbs(to_rm + self.cbs) as learn:
self.preds = learn.get_preds(ds_idx=self.ds_idx, dl=self.dl,
with_input=self.with_input, with_decoded=self.with_decoded, inner=True, reorder=self.reorder)
show_doc(FetchPredsCallback, title_level=3)
```
When writing a callback, the following attributes of `Learner` are available:
- `model`: the model used for training/validation
- `data`: the underlying `DataLoaders`
- `loss_func`: the loss function used
- `opt`: the optimizer used to update the model parameters
- `opt_func`: the function used to create the optimizer
- `cbs`: the list containing all `Callback`s
- `dl`: current `DataLoader` used for iteration
- `x`/`xb`: last input drawn from `self.dl` (potentially modified by callbacks). `xb` is always a tuple (potentially with one element) and `x` is detuplified. You can only assign to `xb`.
- `y`/`yb`: last target drawn from `self.dl` (potentially modified by callbacks). `yb` is always a tuple (potentially with one element) and `y` is detuplified. You can only assign to `yb`.
- `pred`: last predictions from `self.model` (potentially modified by callbacks)
- `loss`: last computed loss (potentially modified by callbacks)
- `n_epoch`: the number of epochs in this training
- `n_iter`: the number of iterations in the current `self.dl`
- `epoch`: the current epoch index (from 0 to `n_epoch-1`)
- `iter`: the current iteration index in `self.dl` (from 0 to `n_iter-1`)
The following attributes are added by `TrainEvalCallback` and should be available unless you went out of your way to remove that callback:
- `train_iter`: the number of training iterations done since the beginning of this training
- `pct_train`: from 0. to 1., the percentage of training iterations completed
- `training`: flag to indicate if we're in training mode or not
The following attribute is added by `Recorder` and should be available unless you went out of your way to remove that callback:
- `smooth_loss`: an exponentially-averaged version of the training loss
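The idea behind `smooth_loss` can be sketched as a debiased exponential moving average (a simplified illustration; fastai's actual `Recorder` implementation may differ in details):

```python
class SmoothValue:
    "Exponentially weighted moving average with bias correction for early steps"
    def __init__(self, beta=0.98):
        self.beta, self.n, self.mov_avg = beta, 0, 0.0
    def add(self, val):
        self.n += 1
        self.mov_avg = self.beta * self.mov_avg + (1 - self.beta) * val
        return self.mov_avg / (1 - self.beta ** self.n)   # debiased estimate
```

Feeding in a stream of losses yields a smoothed curve; thanks to the bias correction, a constant stream is returned unchanged from the first step.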
## Callbacks control flow
It happens that we may want to skip some of the steps of the training loop: in gradient accumulation, we don't always want to do the step/zeroing of the grads for instance. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with a strategy of early stopping, we want to be able to completely interrupt the training loop.
This is made possible by raising specific exceptions the training loop will look for (and properly catch).
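The mechanism can be sketched with plain Python exception handling (a simplified illustration of the idea, not fastai's actual implementation):

```python
class CancelBatchException(Exception): pass
class CancelFitException(Exception): pass

def fit(batches, one_batch):
    "Minimal loop: raising an exception skips a batch or stops training entirely"
    try:
        for b in batches:
            try:
                one_batch(b)
            except CancelBatchException:
                pass   # skip the rest of this batch; "after_batch" work still runs
            # ... "after_batch" work would go here ...
    except CancelFitException:
        pass           # interrupt training; jump straight to "after_fit" work
    # ... "after_fit" work would go here ...
```

Raising `CancelBatchException` inside `one_batch` abandons only that batch, while `CancelFitException` ends the whole loop.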
```
#export
_ex_docs = dict(
CancelBatchException="Skip the rest of this batch and go to `after_batch`",
CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
CancelFitException="Interrupts training and go to `after_fit`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
```
You can detect that one of those exceptions has occurred and add code that executes right after with the following events:
- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
<a id='ar1'></a>
<div id="qe-notebook-header" align="right" style="text-align:right;">
<a href="https://quantecon.org/" title="quantecon.org">
<img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
</a>
</div>
# AR1 Processes
<a id='index-0'></a>
## Contents
- [AR1 Processes](#AR1-Processes)
- [Overview](#Overview)
- [The AR(1) Model](#The-AR(1)-Model)
- [Stationarity and Asymptotic Stability](#Stationarity-and-Asymptotic-Stability)
- [Ergodicity](#Ergodicity)
- [Exercises](#Exercises)
- [Solutions](#Solutions)
## Overview
In this lecture we are going to study a very simple class of stochastic
models called AR(1) processes.
These simple models are used again and again in economic research to represent the dynamics of series such as
- labor income
- dividends
- productivity, etc.
AR(1) processes can take negative values but are easily converted into positive processes when necessary by a transformation such as exponentiation.
We are going to study AR(1) processes partly because they are useful and
partly because they help us understand important concepts.
Let’s start with some imports:
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## The AR(1) Model
The **AR(1) model** (autoregressive model of order 1) takes the form
<a id='equation-can-ar1'></a>
$$
X_{t+1} = a X_t + b + c W_{t+1} \tag{1}
$$
where $ a, b, c $ are scalar-valued parameters.
This law of motion generates a time series $ \{ X_t\} $ as soon as we
specify an initial condition $ X_0 $.
This is called the **state process** and the state space is $ \mathbb R $.
To make things even simpler, we will assume that
- the process $ \{ W_t \} $ is IID and standard normal,
- the initial condition $ X_0 $ is drawn from the normal distribution $ N(\mu_0, v_0) $ and
- the initial condition $ X_0 $ is independent of $ \{ W_t \} $.
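Under these assumptions the process is straightforward to simulate; here is a minimal sketch (parameter values chosen to match those used later in the lecture):

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5
mu_0, v_0 = -3.0, 0.6

rng = np.random.default_rng(0)
T = 100
X = np.empty(T + 1)
X[0] = mu_0 + np.sqrt(v_0) * rng.standard_normal()   # X_0 ~ N(mu_0, v_0)
for t in range(T):
    X[t + 1] = a * X[t] + b + c * rng.standard_normal()
```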
### Moving Average Representation
Iterating backwards from time $ t $, we obtain
$$
X_t = a X_{t-1} + b + c W_t
= a^2 X_{t-2} + a b + a c W_{t-1} + b + c W_t
= \cdots
$$
If we work all the way back to time zero, we get
<a id='equation-ar1-ma'></a>
$$
X_t = a^t X_0 + b \sum_{j=0}^{t-1} a^j +
c \sum_{j=0}^{t-1} a^j W_{t-j} \tag{2}
$$
Equation [(2)](#equation-ar1-ma) shows that $ X_t $ is a well defined random variable, the value of which depends on
- the parameters,
- the initial condition $ X_0 $ and
- the shocks $ W_1, \ldots W_t $ from time $ t=1 $ to the present.
Throughout, the symbol $ \psi_t $ will be used to refer to the
density of this random variable $ X_t $.
### Distribution Dynamics
One of the nice things about this model is that it’s so easy to trace out the sequence of distributions $ \{ \psi_t \} $ corresponding to the time
series $ \{ X_t\} $.
To see this, we first note that $ X_t $ is normally distributed for each $ t $.
This is immediate from [(2)](#equation-ar1-ma), since linear combinations of independent
normal random variables are normal.
Given that $ X_t $ is normally distributed, we will know the full distribution
$ \psi_t $ if we can pin down its first two moments.
Let $ \mu_t $ and $ v_t $ denote the mean and variance
of $ X_t $ respectively.
We can pin down these values from [(2)](#equation-ar1-ma) or we can use the following
recursive expressions:
<a id='equation-dyn-tm'></a>
$$
\mu_{t+1} = a \mu_t + b
\quad \text{and} \quad
v_{t+1} = a^2 v_t + c^2 \tag{3}
$$
These expressions are obtained from [(1)](#equation-can-ar1) by taking, respectively, the expectation and variance of both sides of the equality.
In calculating the second expression, we are using the fact that $ X_t $
and $ W_{t+1} $ are independent.
(This follows from our assumptions and [(2)](#equation-ar1-ma).)
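As a quick sanity check (not part of the original lecture), the recursion in [(3)](#equation-dyn-tm) can be compared with sample moments from many simulated one-step updates:

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5
mu_t, v_t = -3.0, 0.6

rng = np.random.default_rng(1)
n = 200_000
X_t = mu_t + np.sqrt(v_t) * rng.standard_normal(n)    # X_t ~ N(mu_t, v_t)
X_tp1 = a * X_t + b + c * rng.standard_normal(n)      # one AR(1) step

mu_next = a * mu_t + b            # mean recursion from (3)
v_next = a**2 * v_t + c**2        # variance recursion from (3)
# the sample mean and variance of X_tp1 should be close to mu_next and v_next
```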
Given the dynamics in [(2)](#equation-ar1-ma) and initial conditions $ \mu_0,
v_0 $, we obtain $ \mu_t, v_t $ and hence
$$
\psi_t = N(\mu_t, v_t)
$$
The following code uses these facts to track the sequence of marginal
distributions $ \{ \psi_t \} $.
The parameters are
```
a, b, c = 0.9, 0.1, 0.5
mu, v = -3.0, 0.6 # initial conditions mu_0, v_0
```
Here’s the sequence of distributions:
```
from scipy.stats import norm
sim_length = 10
grid = np.linspace(-5, 7, 120)
fig, ax = plt.subplots()
for t in range(sim_length):
mu = a * mu + b
v = a**2 * v + c**2
ax.plot(grid, norm.pdf(grid, loc=mu, scale=np.sqrt(v)),
label=f"$\psi_{t}$",
alpha=0.7)
ax.legend(bbox_to_anchor=[1.05,1],loc=2,borderaxespad=1)
plt.show()
```
## Stationarity and Asymptotic Stability
Notice that, in the figure above, the sequence $ \{ \psi_t \} $ seems to be converging to a limiting distribution.
This is even clearer if we project forward further into the future:
```
def plot_density_seq(ax, mu_0=-3.0, v_0=0.6, sim_length=60):
mu, v = mu_0, v_0
for t in range(sim_length):
mu = a * mu + b
v = a**2 * v + c**2
ax.plot(grid,
norm.pdf(grid, loc=mu, scale=np.sqrt(v)),
alpha=0.5)
fig, ax = plt.subplots()
plot_density_seq(ax)
plt.show()
```
Moreover, the limit does not depend on the initial condition.
For example, this alternative density sequence also converges to the same limit.
```
fig, ax = plt.subplots()
plot_density_seq(ax, mu_0=3.0)
plt.show()
```
In fact it’s easy to show that such convergence will occur, regardless of the initial condition, whenever $ |a| < 1 $.
To see this, we just have to look at the dynamics of the first two moments, as
given in [(3)](#equation-dyn-tm).
When $ |a| < 1 $, these sequences converge to the respective limits
<a id='equation-mu-sig-star'></a>
$$
\mu^* := \frac{b}{1-a}
\quad \text{and} \quad
v^* = \frac{c^2}{1 - a^2} \tag{4}
$$
(See our [lecture on one dimensional dynamics](scalar_dynam.ipynb) for background on deterministic convergence.)
Hence
<a id='equation-ar1-psi-star'></a>
$$
\psi_t \to \psi^* = N(\mu^*, v^*)
\quad \text{as }
t \to \infty \tag{5}
$$
We can confirm this is valid for the sequence above using the following code.
```
fig, ax = plt.subplots()
plot_density_seq(ax, mu_0=3.0)
mu_star = b / (1 - a)
std_star = np.sqrt(c**2 / (1 - a**2)) # square root of v_star
psi_star = norm.pdf(grid, loc=mu_star, scale=std_star)
ax.plot(grid, psi_star, 'k-', lw=2, label="$\psi^*$")
ax.legend()
plt.show()
```
As claimed, the sequence $ \{ \psi_t \} $ converges to $ \psi^* $.
### Stationary Distributions
A stationary distribution is a distribution that is a fixed
point of the update rule for distributions.
In other words, if $ \psi_t $ is stationary, then $ \psi_{t+j} =
\psi_t $ for all $ j $ in $ \mathbb N $.
A different way to put this, specialized to the current setting, is as follows: a
density $ \psi $ on $ \mathbb R $ is **stationary** for the AR(1) process if
$$
X_t \sim \psi
\quad \implies \quad
a X_t + b + c W_{t+1} \sim \psi
$$
The distribution $ \psi^* $ in [(5)](#equation-ar1-psi-star) has this property —
checking this is an exercise.
(Of course, we are assuming that $ |a| < 1 $ so that $ \psi^* $ is
well defined.)
In fact, it can be shown that no other distribution on $ \mathbb R $ has this property.
Thus, when $ |a| < 1 $, the AR(1) model has exactly one stationary density and that density is given by $ \psi^* $.
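A simulation-based version of this check (a sketch): draw a large sample from $ \psi^* $, apply the AR(1) update once, and confirm that the first two moments are unchanged.

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5
mu_star = b / (1 - a)
v_star = c**2 / (1 - a**2)

rng = np.random.default_rng(2)
n = 200_000
X = mu_star + np.sqrt(v_star) * rng.standard_normal(n)   # X ~ psi*
X_next = a * X + b + c * rng.standard_normal(n)          # one AR(1) update

# X_next should again have mean close to mu_star and variance close to v_star
```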
## Ergodicity
The concept of ergodicity is used in different ways by different authors.
One way to understand it in the present setting is that a version of the Law
of Large Numbers is valid for $ \{X_t\} $, even though it is not IID.
In particular, averages over time series converge to expectations under the
stationary distribution.
Indeed, it can be proved that, whenever $ |a| < 1 $, we have
<a id='equation-ar1-ergo'></a>
$$
\frac{1}{m} \sum_{t = 1}^m h(X_t) \to
\int h(x) \psi^*(x) dx
\quad \text{as } m \to \infty \tag{6}
$$
whenever the integral on the right hand side is finite and well defined.
Notes:
- In [(6)](#equation-ar1-ergo), convergence holds with probability one.
- The textbook by [[MT09]](zreferences.ipynb#meyntweedie2009) is a classic reference on ergodicity.
For example, if we consider the identity function $ h(x) = x $, we get
$$
\frac{1}{m} \sum_{t = 1}^m X_t \to
\int x \psi^*(x) dx
\quad \text{as } m \to \infty
$$
In other words, the time series sample mean converges to the mean of the
stationary distribution.
As will become clear over the next few lectures, ergodicity is a very
important concept for statistics and simulation.
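For instance, the claim for $ h(x) = x $ can be checked by simulation (a sketch, not part of the lecture):

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5
mu_star = b / (1 - a)    # mean of the stationary distribution

rng = np.random.default_rng(3)
m = 200_000
X = np.empty(m)
X[0] = 0.0               # an arbitrary initial condition
for t in range(m - 1):
    X[t + 1] = a * X[t] + b + c * rng.standard_normal()

print(X.mean())          # close to mu_star for large m
```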
## Exercises
### Exercise 1
Let $ k $ be a natural number.
The $ k $-th central moment of a random variable is defined as
$$
M_k := \mathbb E [ (X - \mathbb E X )^k ]
$$
When that random variable is $ N(\mu, \sigma^2) $, it is known that
$$
M_k =
\begin{cases}
0 & \text{ if } k \text{ is odd} \\
\sigma^k (k-1)!! & \text{ if } k \text{ is even}
\end{cases}
$$
Here $ n!! $ is the double factorial.
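For concreteness, the double factorial can be computed as follows (a small helper; it uses the usual convention that $ 0!! = (-1)!! = 1 $):

```python
def double_factorial(n):
    "n!! = n * (n-2) * (n-4) * ... down to 1 or 2"
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result
```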
According to [(6)](#equation-ar1-ergo), we should have, for any $ k \in \mathbb N $,
$$
\frac{1}{m} \sum_{t = 1}^m
(X_t - \mu^* )^k
\approx M_k
$$
when $ m $ is large.
Confirm this by simulation at a range of $ k $ using the default parameters from the lecture.
### Exercise 2
Write your own version of a one dimensional [kernel density
estimator](https://en.wikipedia.org/wiki/Kernel_density_estimation),
which estimates a density from a sample.
Write it as a class that takes the data $ X $ and bandwidth
$ h $ when initialized and provides a method $ f $ such that
$$
f(x) = \frac{1}{hn} \sum_{i=1}^n
K \left( \frac{x-X_i}{h} \right)
$$
For $ K $ use the Gaussian kernel ($ K $ is the standard normal
density).
Write the class so that the bandwidth defaults to Silverman’s rule (see
the “rule of thumb” discussion on [this
page](https://en.wikipedia.org/wiki/Kernel_density_estimation)). Test
the class you have written by going through the steps
1. simulate data $ X_1, \ldots, X_n $ from distribution $ \phi $
1. plot the kernel density estimate over a suitable range
1. plot the density of $ \phi $ on the same figure
for distributions $ \phi $ of the following types
- [beta
distribution](https://en.wikipedia.org/wiki/Beta_distribution)
with $ \alpha = \beta = 2 $
- [beta
distribution](https://en.wikipedia.org/wiki/Beta_distribution)
with $ \alpha = 2 $ and $ \beta = 5 $
- [beta
distribution](https://en.wikipedia.org/wiki/Beta_distribution)
with $ \alpha = \beta = 0.5 $
Use $ n=500 $.
Make a comment on your results. (Do you think this is a good estimator
of these distributions?)
### Exercise 3
In the lecture we discussed the following fact: For the $ AR(1) $ process
$$
X_{t+1} = a X_t + b + c W_{t+1}
$$
with $ \{ W_t \} $ iid and standard normal,
$$
\psi_t = N(\mu, s^2) \implies \psi_{t+1}
= N(a \mu + b, a^2 s^2 + c^2)
$$
Confirm this, at least approximately, by simulation. Let
- $ a = 0.9 $
- $ b = 0.0 $
- $ c = 0.1 $
- $ \mu = -3 $
- $ s = 0.2 $
First, plot $ \psi_t $ and $ \psi_{t+1} $ using the true
distributions described above.
Second, plot $ \psi_{t+1} $ on the same figure (in a different
color) as follows:
1. Generate $ n $ draws of $ X_t $ from the $ N(\mu, s^2) $
distribution
1. Update them all using the rule
$ X_{t+1} = a X_t + b + c W_{t+1} $
1. Use the resulting sample of $ X_{t+1} $ values to produce a
density estimate via kernel density estimation.
Try this for $ n=2000 $ and confirm that the simulation-based estimate of $ \psi_{t+1} $ does converge to the theoretical distribution.
<font size=4>**Create Plots**</font>
**Plot with Symbolic Plotting Functions**
MATLAB® provides many techniques for plotting numerical data. Graphical capabilities of MATLAB include plotting tools, standard plotting functions, graphic manipulation and data exploration tools, and tools for printing and exporting graphics to standard formats. Symbolic Math Toolbox™ expands these graphical capabilities and lets you plot symbolic functions using:
- <font color=blue>fplot</font> to create 2-D plots of symbolic expressions, equations, or functions in Cartesian coordinates.
- <font color=blue>fplot3</font> to create 3-D parametric plots.
- <font color=blue>ezpolar</font> to create plots in polar coordinates.
- <font color=blue>fsurf</font> to create surface plots.
- <font color=blue>fcontour</font> to create contour plots.
- <font color=blue>fmesh</font> to create mesh plots.
Plot the symbolic expression $sin(6x)$ by using **fplot**. By default, **fplot** uses the range $−5<x<5$.
```
from sympy import *
x = symbols('x')
plot(sin(6*x),(x,-5,5))
```
Plot a symbolic expression or function in polar coordinates $r$ (radius) and $\theta$ (polar angle) by using **ezpolar**. By default, **ezpolar** plots a symbolic expression or function over the interval $0<\theta<2\pi$.
Plot the symbolic expression $sin(6t)$ in polar coordinates.
```
#syms t
#ezpolar(sin(6*t))
import matplotlib.pyplot as plt
import numpy as np
t = symbols('t')
eqf = lambdify(t,sin(6*t))
angle = np.arange(0,2*np.pi,1/100)
plt.polar(angle,np.abs(eqf(angle)))
plt.title('$r=sin(6t)$')
```
**Plot Functions Numerically**
As an alternative to plotting expressions symbolically, you can substitute symbolic variables with numeric values by using **subs**. Then, you can use these numeric values with standard MATLAB® plotting functions.
In the following expressions **u** and **v**, substitute the symbolic variables **x** and **y** with the numeric values defined by **meshgrid**.
```
x,y = symbols('x y')
u = sin(x**2+y**2)
v = cos(x*y)
```
Now, you can plot **U** and **V** by using standard MATLAB plotting functions.
Create a plot of the vector field defined by the functions $U(X,Y)$ and $V(X,Y)$ by using the MATLAB **quiver** function.
```
eqfU = lambdify((x,y),u)
eqfV = lambdify((x,y),v)
X,Y = np.meshgrid(np.arange(-1,1,0.1),np.arange(-1,1,0.1))
plt.quiver(X,Y,eqfU(X,Y),eqfV(X,Y))
```
**Plot Multiple Symbolic Functions in One Graph**
Plot several functions on one graph by adding the functions sequentially. After plotting the first function, add successive functions by using the **hold** on command. The **hold on** command keeps the existing plots. Without the **hold on** command, each new plot replaces any existing plot. After the **hold on** command, each new plot appears on top of existing plots. Switch back to the default behavior of replacing plots by using the **hold off** command.
Plot $f=e^x sin(20x)$ using **fplot**. Show the bounds of **f** by superimposing plots of $e^x$ and $-e^x$ as dashed red lines. Set the title by using the **DisplayName** property of the object returned by **fplot**.
```
x,y = symbols('x y')
f = exp(x)*sin(20*x)
```
$f=sin(20x)e^x$
```
p1 = plot(f,exp(x),-exp(x),(x,0,3))
```
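The sympy `plot` call above draws all three curves in one figure, but does not style the bounds. A matplotlib sketch of the same figure with the envelope drawn as dashed red lines (matplotlib axes accumulate successive `plot` calls by default, which plays the role of MATLAB's **hold on**):

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(0, 3, 500)
fig, ax = plt.subplots()
ax.plot(xs, np.exp(xs) * np.sin(20 * xs), label=r"$e^x \sin(20x)$")
ax.plot(xs, np.exp(xs), "r--", label=r"$e^x$")     # upper bound as a dashed red line
ax.plot(xs, -np.exp(xs), "r--", label=r"$-e^x$")   # lower bound as a dashed red line
ax.set_title(r"$f = e^x \sin(20x)$")
ax.legend()
```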
**Plot Multiple Symbolic Functions in One Figure**
Display several functions side-by-side in one figure by dividing the figure window into several subplots using **subplot**. The command **subplot(m,n,p)** divides the figure into a **m** by **n** matrix of subplots and selects the subplot **p**. Display multiple plots in separate subplots by selecting the subplot and using plotting commands. Plotting into multiple subplots is useful for side-by-side comparisons of plots.
Compare plots of $sin\left(\left(x^2+y^2\right)/a\right)$ for $a=10,20,50,100$ by using subplot to create side-by-side subplots.
```
import mpl_toolkits.mplot3d
x,y,a = symbols('x y a')
eqf3 = lambdify((x,y,a),sin((x**2+y**2)/a))
X,Y = np.meshgrid(np.arange(-5,5,0.1),np.arange(-5,5,0.1))
fig = plt.figure(constrained_layout=True)
ax0 = fig.add_subplot(2,2,1,projection='3d')
ax0.plot_surface(X,Y,eqf3(X,Y,10),cmap=plt.cm.viridis)  # use the viridis colormap
ax0.set_title('$a=10$',loc='left')
ax1 = fig.add_subplot(2,2,2,projection='3d')
ax1.plot_surface(X,Y,eqf3(X,Y,20),cmap=plt.cm.viridis)  # use the viridis colormap
ax1.set_title('$a=20$',loc='left')
ax2 = fig.add_subplot(2,2,3,projection='3d')
ax2.plot_surface(X,Y,eqf3(X,Y,50),cmap=plt.cm.viridis)  # use the viridis colormap
ax2.set_title('$a=50$',loc='left')
ax3 = fig.add_subplot(2,2,4,projection='3d')
ax3.plot_surface(X,Y,eqf3(X,Y,100),cmap=plt.cm.viridis)  # use the viridis colormap
ax3.set_title('$a=100$',loc='left')
```
**Combine Symbolic Function Plots and Numeric Data Plots**
Plot numeric and symbolic data on the same graph by using MATLAB and Symbolic Math Toolbox functions together.
For numeric values of **x** between $[−5,5]$, return a noisy sine curve by finding $y=sin(x)$ and adding random values to **y**. View the noisy sine curve by using **scatter** to plot the points $(x1,y1),(x2,y2),⋯$.
```
x = np.arange(-5,5,1/10)
y = np.sin(x)+((-1.0)**np.random.randint(10,size=100)*np.random.rand(100))/8  # random-sign noise, mirroring MATLAB's (-1).^randi(...)
fig,ax = plt.subplots()
ax.scatter(x,y,c='w',edgecolors='#1f77b4')
```
Show the underlying structure in the points by superimposing a plot of the sine function. First, use **hold on** to retain the **scatter** plot. Then, use **fplot** to plot the sine function.
```
#hold on
#syms t
#fplot(sin(t))
#hold off
t = symbols('t')
eqft = lambdify(t,sin(t))
fig,ax = plt.subplots()
ax.scatter(x,y,c='w',edgecolors='#1f77b4')
ax.plot(x,eqft(x))
```
**Combine Numeric and Symbolic Plots in 3-D**
Combine symbolic and numeric plots in 3-D by using MATLAB and Symbolic Math Toolbox plotting functions. Symbolic Math Toolbox provides these 3-D plotting functions:
- <font color=blue>fplot3</font> creates 3-D parameterized line plots.
- <font color=blue>fsurf</font> creates 3-D surface plots.
- <font color=blue>fmesh</font> creates 3-D mesh plots.
Create a spiral plot by using **fplot3** to plot the parametric line
$$ x=(1-t)sin(100t)$$
$$ y=(1-t)cos(100t)$$
$$ z=\sqrt{1-x^2-y^2}$$
```
t = symbols('t')
x = (1-t)*sin(100*t)
y = (1-t)*cos(100*t)
z = sqrt(1-x**2-y**2)
eqfx = lambdify(t,x)
eqfy = lambdify(t,y)
eqfz = lambdify(t,z)
X = eqfx(np.arange(0,1,1/1000))
Y = eqfy(np.arange(0,1,1/1000))
Z = eqfz(np.arange(0,1,1/1000))
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(X,Y,Z,linewidth=0.6)
ax.set_title('Symbolic 3-D Parametric Line')
```
Superimpose a plot of a sphere with radius 1 and center at $(0, 0, 0)$. Find points on the sphere numerically by using **sphere**. Plot the sphere by using **mesh**. The resulting plot shows the symbolic parametric line wrapped around the top hemisphere.
```
#hold on
#[X,Y,Z] = sphere;
#mesh(X, Y, Z)
#colormap(gray)
#title('Symbolic Parametric Plot and a Sphere')
#hold off
theta,phi = np.meshgrid(np.linspace(0,2*np.pi,30),np.linspace(0,np.pi,30))
X_sphere = np.sin(phi)*np.cos(theta)
Y_sphere = np.sin(phi)*np.sin(theta)
Z_sphere = np.cos(phi)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_wireframe(X_sphere,Y_sphere,Z_sphere,linewidth=0.2,color='black')
ax.plot(X,Y,Z)
```
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
```
# type 4 mosaic data
```
# y = np.random.randint(0,10,5000)
# idx= []
# for i in range(10):
# print(i,sum(y==i))
# idx.append(y==i)
# x = np.zeros((5000,2))
# x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
# x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
# x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
# x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
# x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
# x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
# x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
# x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
# x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
# x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
# plt.figure(figsize=(6,6))
# for i in range(10):
# plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
# class SyntheticDataset(Dataset):
# """MosaicDataset dataset."""
# def __init__(self, x, y):
# """
# Args:
# csv_file (string): Path to the csv file with annotations.
# root_dir (string): Directory with all the images.
# transform (callable, optional): Optional transform to be applied
# on a sample.
# """
# self.x = x
# self.y = y
# #self.fore_idx = fore_idx
# def __len__(self):
# return len(self.y)
# def __getitem__(self, idx):
# return self.x[idx] , self.y[idx] #, self.fore_idx[idx]
# trainset = SyntheticDataset(x,y)
# trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True)
# classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
# foreground_classes = {'zero','one','two'}
# fg_used = '012'
# fg1, fg2, fg3 = 0,1,2
# all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
# background_classes = all_classes - foreground_classes
# background_classes
# dataiter = iter(trainloader)
# background_data=[]
# background_label=[]
# foreground_data=[]
# foreground_label=[]
# batch_size=100
# for i in range(50):
# images, labels = dataiter.next()
# for j in range(batch_size):
# if(classes[labels[j]] in background_classes):
# img = images[j].tolist()
# background_data.append(img)
# background_label.append(labels[j])
# else:
# img = images[j].tolist()
# foreground_data.append(img)
# foreground_label.append(labels[j])
# foreground_data = torch.tensor(foreground_data)
# foreground_label = torch.tensor(foreground_label)
# background_data = torch.tensor(background_data)
# background_label = torch.tensor(background_label)
# def create_mosaic_img(bg_idx,fg_idx,fg):
# """
# bg_idx : list of indexes of background_data[] to be used as background images in mosaic
# fg_idx : index of image to be used as foreground image from foreground data
# fg : at what position/index foreground image has to be stored out of 0-8
# """
# image_list=[]
# j=0
# for i in range(9):
# if i != fg:
# image_list.append(background_data[bg_idx[j]])
# j+=1
# else:
# image_list.append(foreground_data[fg_idx])
# label = foreground_label[fg_idx] - fg1 # minus fg1 because our fore ground classes are fg1,fg2,fg3 but we have to store it as 0,1,2
# #image_list = np.concatenate(image_list ,axis=0)
# image_list = torch.stack(image_list)
# return image_list,label
# # number of data points in bg class and fg class
# nbg = sum(idx[3]) + sum(idx[4]) + sum(idx[5]) + sum(idx[6]) + sum(idx[7]) + sum(idx[8]) + sum(idx[9])
# nfg = sum(idx[0]) + sum(idx[1]) + sum(idx[2])
# print(nbg, nfg, nbg+nfg)
# desired_num = 3000
# mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
# fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
# mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
# list_set_labels = []
# for i in range(desired_num):
# set_idx = set()
# np.random.seed(i)
# bg_idx = np.random.randint(0,nbg,8)
# set_idx = set(background_label[bg_idx].tolist())
# fg_idx = np.random.randint(0,nfg)
# set_idx.add(foreground_label[fg_idx].item())
# fg = np.random.randint(0,9)
# fore_idx.append(fg)
# image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
# mosaic_list_of_images.append(image_list)
# mosaic_label.append(label)
# list_set_labels.append(set_idx)
# data = [{"mosaic_list":mosaic_list_of_images, "mosaic_label": mosaic_label, "fore_idx":fore_idx}]
# np.save("type4_data_1.npy",data)
```
# load mosaic data
```
data = np.load("type4_data.npy",allow_pickle=True)
mosaic_list_of_images = data[0]["mosaic_list"]
mosaic_label = data[0]["mosaic_label"]
fore_idx = data[0]["fore_idx"]
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
```
# models
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50) #,self.output)
self.linear2 = nn.Linear(50,50)
self.linear3 = nn.Linear(50,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
#print(z[:,0].shape,z[:,self.d*0:self.d*0+self.d].shape)
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
x = F.softmax(x,dim=1) # alphas
        for i in range(self.K):
            x1 = x[:,i]
            y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(50,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
    return r_loss/(i+1),analysis  # average loss over all batches
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
```
# training
```
number_runs = 20
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
torch.manual_seed(n)
what = Classification_deep(2,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.01)#,momentum=0.9)
optimizer_what = optim.Adam(what.parameters(), lr=0.01)#,momentum=0.9)
criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2500
# calculate zeroth epoch loss and FTPT values
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training run ' +str(n))
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
# plt.figure(figsize=(6,6))
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
# performance
```
plt.plot(loss_curi)
np.mean(np.array(FTPT_analysis),axis=0)
FTPT_analysis.to_csv("synthetic_zeroth.csv",index=False)
```
```
FTPT_analysis
```
<a href="https://colab.research.google.com/github/seopbo/nlp_tutorials/blob/main/single_text_classification_(nsmc)_LoRa.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Single text classification - LoRa
We apply [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) to GPT.
- `skt/kogpt2-base-v2` is used as the pre-trained language model.
  - https://huggingface.co/skt/kogpt2-base-v2
- `nsmc` is used as the example dataset for the single text classification task.
  - https://huggingface.co/datasets/nsmc
## Setup
Run the code cell below to check which GPU has been allocated.
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
```
Run the code cell below to install and load the libraries required for this notebook.
```
!pip install torch
!pip install transformers
!pip install datasets
!pip install -U scikit-learn
import torch
import transformers
import datasets
```
## Data preprocessing
1. Load the subword tokenizer used by `skt/kogpt2-base-v2`.
2. Load `nsmc` using the `datasets` library.
3. Using the tokenizer from step 1, transform the `nsmc` data into training examples suitable for single text classification.
   - Build `<s> tok 1 ... tok N </s>` and transform it into a list of integers.

Load `nsmc` and create `train_ds`, `valid_ds`, and `test_ds`.
```
from datasets import load_dataset
cs = load_dataset("nsmc", split="train")
cs = cs.train_test_split(0.1)
test_cs = load_dataset("nsmc", split="test")
train_cs = cs["train"]
valid_cs = cs["test"]
```
Define a function for the transform and apply it. First, check the special tokens of the subword tokenizer used by `skt/kogpt2-base-v2`.
```
from transformers import GPT2TokenizerFast, GPT2Config
test_tokenizer = GPT2TokenizerFast.from_pretrained("skt/kogpt2-base-v2")
print(test_tokenizer.convert_ids_to_tokens(0))
print(test_tokenizer.convert_ids_to_tokens(1))
print(test_tokenizer.convert_ids_to_tokens(2))
print(test_tokenizer.convert_ids_to_tokens(3))
print(test_tokenizer.convert_ids_to_tokens(4))
print(test_tokenizer.convert_ids_to_tokens(5))
```
To process inputs in the same way as the classification format shown in Figure 1, we override the `build_inputs_with_special_tokens` method. Once it is overridden, the change is reflected whenever the `prepare_for_model` method is used.
```
from transformers import GPT2TokenizerFast, GPT2Config
class CustomGPT2TokenizerFast(GPT2TokenizerFast):
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A GPT sequence has the following format:
- single sequence: ``<s> X </s>``
- pair of sequences: ``<s> A </s> B </s>``
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added.
token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
:obj:`List[int]`: List of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens.
"""
        output = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
        if token_ids_1:
            output += token_ids_1 + [self.eos_token_id]
return output
tokenizer = CustomGPT2TokenizerFast.from_pretrained("skt/kogpt2-base-v2")
tokenizer.pad_token = "<pad>"
tokenizer.unk_token = "<unk>"
tokenizer.bos_token = "<s>"
tokenizer.eos_token = "</s>"
config = GPT2Config.from_pretrained("skt/kogpt2-base-v2")
print(tokenizer.__class__)
print(config.__class__)
```
Instead of using the `__call__` method, we implement the `transform` function step by step using the `tokenize`, `convert_tokens_to_ids`, and `prepare_for_model` methods.
```
from typing import Union, List, Dict
def transform(sentences: Union[str, List[str]], tokenizer) -> Dict[str, List[List[int]]]:
if not isinstance(sentences, list):
sentences = [sentences]
    dict_of_training_examples: Dict[str, List[List[int]]] = {}
    for sentence in sentences:
        list_of_tokens = tokenizer.tokenize(sentence)
        list_of_ids = tokenizer.convert_tokens_to_ids(list_of_tokens)
        training_example = tokenizer.prepare_for_model(list_of_ids, add_special_tokens=True, padding=False, truncation=False)
        for key in training_example.keys():
            dict_of_training_examples.setdefault(key, []).append(training_example[key])
    return dict_of_training_examples
samples = train_cs[:2]
transformed_samples = transform(samples["document"], tokenizer)
print(samples)
print(transformed_samples)
train_ds = train_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
valid_ds = valid_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
test_ds = test_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
```
## Prepare model
To perform single text classification, we need to load `skt/kogpt2-base-v2`. However, custom classes are required to define the additional weights for LoRA.
### 1. Implement `GPT2AttentionWithLoRa` by subclassing `GPT2Attention`
Inherit from `GPT2Attention` and override the `__init__` and `forward` methods.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Attention
from transformers.modeling_utils import Conv1D
class GPT2AttentionWithLoRa(GPT2Attention):
def __init__(self, config, is_cross_attention=False, layer_idx=None):
        super().__init__(config, is_cross_attention=is_cross_attention, layer_idx=layer_idx)
self.c_attn_lora = nn.Sequential(
Conv1D(4, self.embed_dim),
Conv1D(3 * self.embed_dim, 4),
)
self.c_proj_lora = nn.Sequential(
Conv1D(4, self.embed_dim),
Conv1D(self.embed_dim, 4),
)
def forward(
self,
hidden_states,
layer_past=None,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
use_cache=False,
output_attentions=False,
):
if encoder_hidden_states is not None:
if not hasattr(self, "q_attn"):
raise ValueError(
"If class is used as cross attention, the weights `q_attn` have to be defined. "
"Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`."
)
query = self.q_attn(hidden_states)
key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2)
attention_mask = encoder_attention_mask
else:
query_orig, key_orig, value_orig = self.c_attn(hidden_states).split(self.split_size, dim=2)
# Added codes
query_adpt, key_adpt, value_adpt = self.c_attn_lora(hidden_states).split(self.split_size, dim=2)
query = query_orig + query_adpt
key = key_orig + key_adpt
value = value_orig + value_adpt
query = self._split_heads(query, self.num_heads, self.head_dim)
key = self._split_heads(key, self.num_heads, self.head_dim)
value = self._split_heads(value, self.num_heads, self.head_dim)
if layer_past is not None:
past_key, past_value = layer_past
key = torch.cat((past_key, key), dim=-2)
value = torch.cat((past_value, value), dim=-2)
if use_cache is True:
present = (key, value)
else:
present = None
if self.reorder_and_upcast_attn:
attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask)
else:
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
attn_output_raw = self._merge_heads(attn_output, self.num_heads, self.head_dim)
attn_output_orig = self.c_proj(attn_output_raw)
# Added codes
attn_output_adpt = self.c_proj_lora(attn_output_raw)
attn_output = attn_output_orig + attn_output_adpt
attn_output = self.resid_dropout(attn_output)
outputs = (attn_output, present)
if output_attentions:
outputs += (attn_weights,)
return outputs # a, present, (attentions)
```
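The two `nn.Sequential` stacks above act as rank-4 adapters. For comparison, here is a minimal standalone LoRA layer following the paper's formulation (`LoRALinear` is illustrative, not part of the notebook's classes; note that the paper initializes the up-projection to zero so training starts from the pre-trained behavior, which the `Conv1D` stacks above do not do):

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """Minimal LoRA update: y = W x + (alpha/r) * B(A(x)), with A, B of rank r."""
    def __init__(self, in_features, out_features, r=4, alpha=1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)  # stands in for the frozen pre-trained weight
        self.A = nn.Linear(in_features, r, bias=False)    # down-projection to rank r
        self.B = nn.Linear(r, out_features, bias=False)   # up-projection back to the model dim
        nn.init.zeros_(self.B.weight)                     # zero init: the update starts at 0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))
```

Immediately after construction the layer reproduces the base layer exactly, because the zero-initialized `B` makes the low-rank update vanish.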
### 2. Implement `GPT2BlockWithLoRa` by subclassing `GPT2Block`
Inherit from `GPT2Block` and override `__init__`.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Block, GPT2MLP
from transformers.modeling_utils import Conv1D
class GPT2BlockWithLoRa(GPT2Block):
def __init__(self, config, layer_idx=None):
super().__init__(config, layer_idx)
self.attn = GPT2AttentionWithLoRa(config, layer_idx=layer_idx)
```
### 3. Implement `GPT2ModelWithLoRa` by subclassing `GPT2Model`
Inherit from `GPT2Model` and override `__init__`.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
class GPT2ModelWithLoRa(GPT2Model):
def __init__(self, config):
super().__init__(config)
self.h = nn.ModuleList([GPT2BlockWithLoRa(config, layer_idx=i) for i in range(config.num_hidden_layers)])
```
### 4. Implement `GPT2ForSequenceClassificationWithLoRa` by subclassing `GPT2ForSequenceClassification`
Inherit from `GPT2ForSequenceClassification` and override `__init__`.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2ForSequenceClassification
from transformers.modeling_utils import Conv1D
class GPT2ForSequenceClassificationWithLoRa(GPT2ForSequenceClassification):
def __init__(self, config):
super().__init__(config)
self.transformer = GPT2ModelWithLoRa(config)
```
### Configure only the LoRA-related weights to be trainable
```
model = GPT2ForSequenceClassificationWithLoRa.from_pretrained("skt/kogpt2-base-v2", num_labels=2)
for named_parameter in model.named_parameters():
if "lora" in named_parameter[0]:
continue
named_parameter[-1].requires_grad_(False)
```
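To verify that the freezing worked, count the trainable parameters. A minimal sketch (`count_params` and the `Demo` module are illustrative stand-ins; calling `count_params(model)` on the real model above works the same way):

```python
import torch.nn as nn

def count_params(model):
    """Return (trainable, total) parameter counts for a module."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

# small stand-in module: one "pre-trained" layer plus two "lora" adapters
class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(4, 4)     # frozen, like the pre-trained weights
        self.lora_a = nn.Linear(4, 2)   # trainable, like the LoRA adapters
        self.lora_b = nn.Linear(2, 4)

demo = Demo()
for name, p in demo.named_parameters():
    if "lora" not in name:
        p.requires_grad_(False)         # same freezing rule as in the cell above

trainable, total = count_params(demo)
```

Only a small fraction of the parameters should remain trainable; that fraction is the whole point of LoRA.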
## Training model
We train the model using the `Trainer` class.
- https://huggingface.co/transformers/custom_datasets.html?highlight=trainer#fine-tuning-with-trainer
```
import numpy as np
from transformers.data.data_collator import DataCollatorWithPadding
from sklearn.metrics import accuracy_score
def compute_metrics(p):
pred, labels = p
pred = np.argmax(pred, axis=1)
accuracy = accuracy_score(y_true=labels, y_pred=pred)
return {"accuracy": accuracy}
batchify = DataCollatorWithPadding(
tokenizer = tokenizer,
padding = "longest",
)
# check how a mini-batch is constructed
batchify(train_ds[:2])
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='./results',
evaluation_strategy="steps",
eval_steps=1000,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
learning_rate=1e-4,
weight_decay=0.01,
adam_beta1=.9,
adam_beta2=.95,
adam_epsilon=1e-8,
max_grad_norm=1.,
num_train_epochs=2,
lr_scheduler_type="linear",
warmup_steps=100,
logging_dir='./logs',
logging_strategy="steps",
logging_first_step=True,
logging_steps=100,
save_strategy="epoch",
seed=42,
dataloader_drop_last=False,
dataloader_num_workers=2
)
trainer = Trainer(
args=training_args,
data_collator=batchify,
model=model,
train_dataset=train_ds,
eval_dataset=valid_ds,
compute_metrics=compute_metrics
)
trainer.train()
trainer.evaluate(test_ds)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Train Your Own Model and Convert It to TFLite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c03_exercise_convert_model_to_tflite.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c03_exercise_convert_model_to_tflite.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
This notebook uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing we'll use here.
This uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
# Setup
```
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import pathlib
print(tf.__version__)
```
# Download Fashion MNIST Dataset
```
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
splits, info = tfds.load('fashion_mnist', with_info=True, as_supervised=True, split=splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
class_names = ['T-shirt_top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
IMG_SIZE = 28
```
# Preprocessing data
## Preprocess
```
# Write a function to normalize and resize the images
def format_example(image, label):
# Cast image to float32
image = # YOUR CODE HERE
# Resize the image if necessary
image = # YOUR CODE HERE
# Normalize the image in the range [0, 1]
image = # YOUR CODE HERE
return image, label
# Set the batch size to 32
BATCH_SIZE = 32
```
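A plausible completion of the `format_example` exercise above (an assumption on my part, not the official answer key) casts to `float32`, resizes to 28×28, and scales pixel values into [0, 1]:

```python
import tensorflow as tf

IMG_SIZE = 28

# Hypothetical completion of the exercise -- one reasonable choice.
def format_example(image, label):
    # Cast image to float32
    image = tf.cast(image, tf.float32)
    # Resize the image if necessary
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    # Normalize the image in the range [0, 1]
    image = image / 255.0
    return image, label
```

Because the labels pass through untouched, this function works identically for the train, validation, and test splits.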
## Create a Dataset from images and labels
```
# Prepare the examples by preprocessing them and then batching them (and optionally prefetching them)
# If you wish, you can shuffle the train set here
train_batches = # YOUR CODE HERE
validation_batches = # YOUR CODE HERE
test_batches = # YOUR CODE HERE
```
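One way the batching cell might look. The real notebook uses the `train_examples`, `validation_examples`, and `test_examples` splits created earlier; here a small in-memory stand-in dataset is assumed so the sketch runs on its own:

```python
import numpy as np
import tensorflow as tf

BATCH_SIZE = 32

def format_example(image, label):
    image = tf.image.resize(tf.cast(image, tf.float32), (28, 28)) / 255.0
    return image, label

# Stand-in for the tfds splits created earlier (assumption: 64 fake images)
images = np.random.randint(0, 256, (64, 28, 28, 1)).astype(np.uint8)
labels = np.random.randint(0, 10, (64,)).astype(np.int64)
examples = tf.data.Dataset.from_tensor_slices((images, labels))

# Shuffle only the training set; validation/test stay in order
train_batches = examples.shuffle(64).map(format_example).batch(BATCH_SIZE).prefetch(1)
validation_batches = examples.map(format_example).batch(BATCH_SIZE)
test_batches = examples.map(format_example).batch(1)
```

Batching the test set with a batch size of 1 makes the later TFLite interpreter loop, which feeds one image per `set_tensor` call, work unchanged.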
# Building the model
```
"""
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 16) 160
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 32) 4640
_________________________________________________________________
flatten (Flatten) (None, 3872) 0
_________________________________________________________________
dense (Dense) (None, 64) 247872
_________________________________________________________________
dense_1 (Dense) (None, 10) 650
=================================================================
Total params: 253,322
Trainable params: 253,322
Non-trainable params: 0
"""
# Build the model shown in the previous cell
model = tf.keras.Sequential([
# Set the input shape to (28, 28, 1), kernel size=3, filters=16 and use ReLU activation,
tf.keras.layers.Conv2D(# YOUR CODE HERE),
tf.keras.layers.MaxPooling2D(),
# Set the number of filters to 32, kernel size to 3 and use ReLU activation
tf.keras.layers.Conv2D(# YOUR CODE HERE),
# Flatten the output layer to 1 dimension
tf.keras.layers.Flatten(),
# Add a fully connected layer with 64 hidden units and ReLU activation
tf.keras.layers.Dense(# YOUR CODE HERE),
# Attach a final softmax classification head
tf.keras.layers.Dense(# YOUR CODE HERE)])
# Set the loss and accuracy metrics
model.compile(
optimizer='adam',
loss=# YOUR CODE HERE,
metrics=# YOUR CODE HERE)
```
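Filling in the placeholders to match the summary printed above gives something like the following. The layer arguments are inferred from the parameter counts (e.g. 160 = 3·3·1·16 + 16 for the first convolution), and `sparse_categorical_crossentropy` is assumed because the tfds labels are integers:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # input shape (28, 28, 1), kernel size 3, 16 filters, ReLU activation
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    # 32 filters, kernel size 3, ReLU activation
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    # Flatten the output to 1 dimension
    tf.keras.layers.Flatten(),
    # Fully connected layer with 64 hidden units
    tf.keras.layers.Dense(64, activation='relu'),
    # Softmax classification head over the 10 classes
    tf.keras.layers.Dense(10, activation='softmax')])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

With these arguments the model reproduces the 253,322 total parameters shown in the summary.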
## Train
```
model.fit(train_batches,
epochs=10,
validation_data=validation_batches)
```
# Exporting to TFLite
```
export_dir = 'saved_model/1'
# Use the tf.saved_model API to export the SavedModel
# Your Code Here
#@title Select mode of optimization
mode = "Speed" #@param ["Default", "Storage", "Speed"]
if mode == 'Storage':
optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE
elif mode == 'Speed':
optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY
else:
optimization = tf.lite.Optimize.DEFAULT
optimization
# Use the TFLiteConverter SavedModel API to initialize the converter
converter = # YOUR CODE HERE
# Set the optimizations
converter.optimizations = # YOUR CODE HERE
# Invoke the converter to finally generate the TFLite model
tflite_model = # YOUR CODE HERE
tflite_model_file = 'model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
```
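The export and converter placeholders presumably resolve to the standard SavedModel → `TFLiteConverter` flow. Here is a self-contained sketch with a tiny stand-in model (the real notebook would use the trained `model` and the `optimization` selected above):

```python
import tensorflow as tf

# Tiny stand-in model so the sketch runs on its own
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

export_dir = 'saved_model/1'
# Export the SavedModel with the tf.saved_model API
tf.saved_model.save(model, export_dir)

# Initialize the converter from the SavedModel and set the optimizations
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # or the `optimization` chosen above

# Invoke the converter to generate the TFLite flatbuffer
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```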
# Test if your model is working
```
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Gather results for the randomly sampled test images
# (note: this loop feeds one element at a time, so `test_batches` should be
# batched with a batch size of 1 here)
predictions = []
test_labels = []
test_images = []
for img, label in test_batches.take(50):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label[0])
test_images.append(np.array(img))
#@title Utility functions for plotting
# Utilities for plotting
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label.numpy():
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks(list(range(10)), class_names, rotation='vertical')
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array[0], color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array[0])
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('green')
#@title Visualize the outputs { run: "auto" }
index = 49 #@param {type:"slider", min:1, max:50, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_images)
plt.show()
plot_value_array(index, predictions, test_labels)
plt.show()
```
# Download TFLite model and assets
**NOTE: You might have to run the cell below twice**
```
try:
from google.colab import files
files.download(tflite_model_file)
files.download('labels.txt')
except:
pass
```
# Deploying TFLite model
Once you've downloaded the trained TFLite model, you can go ahead and deploy it in an Android/iOS application by placing the model assets in the appropriate location.
# Prepare the test images for download (Optional)
```
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]].lower(), index))
!ls test_images
!zip -qq fmnist_test_images.zip -r test_images/
try:
files.download('fmnist_test_images.zip')
except:
pass
```
| github_jupyter |
### 94. Binary Tree Inorder Traversal
#### Content
<p>Given the <code>root</code> of a binary tree, return <em>the inorder traversal of its nodes' values</em>.</p>
<p> </p>
<p><strong>Example 1:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/09/15/inorder_1.jpg" style="width: 202px; height: 324px;" />
<pre>
<strong>Input:</strong> root = [1,null,2,3]
<strong>Output:</strong> [1,3,2]
</pre>
<p><strong>Example 2:</strong></p>
<pre>
<strong>Input:</strong> root = []
<strong>Output:</strong> []
</pre>
<p><strong>Example 3:</strong></p>
<pre>
<strong>Input:</strong> root = [1]
<strong>Output:</strong> [1]
</pre>
<p><strong>Example 4:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/09/15/inorder_5.jpg" style="width: 202px; height: 202px;" />
<pre>
<strong>Input:</strong> root = [1,2]
<strong>Output:</strong> [2,1]
</pre>
<p><strong>Example 5:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/09/15/inorder_4.jpg" style="width: 202px; height: 202px;" />
<pre>
<strong>Input:</strong> root = [1,null,2]
<strong>Output:</strong> [1,2]
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li>The number of nodes in the tree is in the range <code>[0, 100]</code>.</li>
<li><code>-100 <= Node.val <= 100</code></li>
</ul>
<p> </p>
<strong>Follow up:</strong> Recursive solution is trivial, could you do it iteratively?
#### Difficulty: Easy, AC rate: 68.0%
#### Question Tags:
- Stack
- Tree
- Depth-First Search
- Binary Tree
#### Links:
🎁 [Question Detail](https://leetcode.com/problems/binary-tree-inorder-traversal/description/) | 🎉 [Question Solution](https://leetcode.com/problems/binary-tree-inorder-traversal/solution/) | 💬 [Question Discussion](https://leetcode.com/problems/binary-tree-inorder-traversal/discuss/?orderBy=most_votes)
#### Hints:
#### Sample Test Case
[1,null,2,3]
---
What's your idea?
Recursion
---
```
from typing import Optional, List
# Definition for a binary tree node.
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
class Solution:
def inorderTraversal(self, root: Optional[TreeNode]) -> List[int]:
if root is None:
return []
return self.inorderTraversal(root.left) + [root.val] + self.inorderTraversal(root.right)
s = Solution()
n3 = TreeNode(3)
n2 = TreeNode(2, n3, None)
n1 = TreeNode(1, None, n2)
s.inorderTraversal(n1)
s.inorderTraversal(None)
n2 = TreeNode(2)
n1 = TreeNode(1, n2, None)
s.inorderTraversal(n1)
n2 = TreeNode(2)
n1 = TreeNode(1, None, n2)
s.inorderTraversal(n1)
import sys, os; sys.path.append(os.path.abspath('..'))
from submitter import submit
submit(94)
```
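The follow-up above asks for an iterative solution. A standard stack-based sketch simulates the recursion explicitly:

```python
from typing import Optional, List

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder_iterative(root: Optional[TreeNode]) -> List[int]:
    result, stack = [], []
    node = root
    while node or stack:
        # Walk as far left as possible, remembering the path on the stack
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        result.append(node.val)   # visit the node after its left subtree
        node = node.right         # then traverse the right subtree
    return result

# Same tree as Example 1: root = [1,null,2,3]
print(inorder_iterative(TreeNode(1, None, TreeNode(2, TreeNode(3)))))  # [1, 3, 2]
```

Each node is pushed and popped exactly once, so the traversal runs in O(n) time with O(h) extra space for a tree of height h.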
| github_jupyter |
## Evolving Deep Echo State Networks
This notebook demonstrates using genetic search to find optimal hyperparameters for Deep Echo State Networks implemented using pytorch-esn.
The process will evolve the most fit ESN hyperparameters to solve a given problem, including the size, structure and layers in the ESN.
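The evolve-evaluate-select loop described above can be sketched with nothing but the standard library. This is a toy problem, not the DEAP setup used below: each individual is a list of hyperparameter values, fitness is minimized, and tournament selection plus mutation drive the search, with a hall of fame tracking the best individual seen:

```python
import random

random.seed(0)

def fitness(ind):
    # Toy stand-in for the ESN evaluation: squared distance to a target point
    target = [0.9, 0.5]
    return sum((g - t) ** 2 for g, t in zip(ind, target))

def tournament(pop, k=3):
    # Pick k individuals at random and keep the fittest (lowest) one
    return min(random.sample(pop, k), key=fitness)

pop = [[random.random(), random.random()] for _ in range(30)]
hall_of_fame = min(pop, key=fitness)

for gen in range(40):
    nxt = []
    for _ in range(len(pop)):
        child = list(tournament(pop))
        if random.random() < 0.3:  # mutate one gene by resampling it
            child[random.randrange(len(child))] = random.random()
        nxt.append(child)
    pop = nxt
    gen_best = min(pop, key=fitness)
    if fitness(gen_best) < fitness(hall_of_fame):
        hall_of_fame = gen_best
```

The DEAP code below follows the same shape, but swaps in the ESN training error as the fitness and a per-gene mutation operator that narrows its ranges as the generations progress.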
#### Import Libraries
```
# import the libraries we need for genetic search
import random
from deap import base
from deap import creator
from deap import tools
from deap import algorithms
import numpy as np
import datetime
import math
```
#### Define the method of evaluating the ESN function against the training and test data, to generate a fitness score
```
# here's how to evaluate the ESN
# (this assumes the ESN class is already in scope, e.g. `from pyESN import ESN`)
# Load the data:
data = np.load('mackey_glass_t17.npy')
#
def evaluate(individual):
'''
build and test a model based on the parameters in an individual and return
the MSE value
'''
# extract the values of the parameters from the individual chromosome
my_n_reservoir = individual[0]
my_projection = individual[1]
my_noise = individual[2]
my_rectifier = individual[3]
my_steepness = individual[4]
my_sparsity = individual[5]
my_sphere_radius = individual[6]
my_teacher_forcing = individual[7]
my_random_state = individual[8]
my_spectral_radius = individual[9]
data = np.load('mackey_glass_t17.npy')
# http://minds.jacobs-university.de/mantas/code
esn = ESN(n_inputs = 1,
n_outputs = 1,
n_reservoir = my_n_reservoir,
spectral_radius = my_spectral_radius,
noise = my_noise,
sparsity = my_sparsity,
projection = my_projection,
steepness = my_steepness,
sphere_radius = my_sphere_radius,
rectifier = my_rectifier,
random_state = my_random_state)
trainlen = 2000
future = 2000
pred_training = esn.fit(np.ones(trainlen),data[:trainlen])
prediction = esn.predict(np.ones(future))
mse = np.sqrt(np.mean((prediction.flatten() - data[trainlen:trainlen+future])**2))  # note: this is actually the RMSE
# can I return the latest spectral radius out to the stats printed? let's see
return mse,
# create a mutate function
def mutate(individual, running_ngen, total_ngen):
# Here, we flip a coin to decide whether this mutation is Exploring parameter space, or Tuning a gene in
# ever-tightening ranges based on the number of generations. Ideally this would use the Age of the
# individual, but this is a basic test of the idea (and I couldn't figure out how to do that in deap!)
explore_or_tune = random.randint(0,1) # decide if we are tuning, or exploring
genfactor = running_ngen*2
# for shorter generation counts, the tuning wasn't narrowing down enough
gene = random.randint(0,9) # select which parameter to mutate
# we have selected one of the parameters randomly to mutate - check which, and update
if gene == 0: # This adjusts the size of the reservoir ... attr_n_reservoir
if explore_or_tune == 0:
res_mut_range = genfactor
individual[0] = random.choice(n_reservoir) + random.randint(0, res_mut_range)
# grow potential size of reservoir each new generation, so good params early try bigger reservoirs
else:
individual[0] = random.randint(res_low, res_high)
# here the mutation range is large ... as we explore the space
elif gene == 1: # 1 attr_projection
individual[1] = random.choice(projection) # this is fixed now anyway, to just one
elif gene == 2: # 2 attr_noise
individual[2] = random.choice(noise) # explore minor choices
elif gene == 3: # 3 attr_rectifier
individual[3] = random.choice(rectifier)
elif gene ==4 : # 4 attr_steepness
individual[4] = random.choice(steepness)
elif gene == 5: # 5 attr_sparsity (genes 3 and 4 are already handled above)
if explore_or_tune == 0:
sparsity_nudge_low = individual[5] - (1/genfactor)
sparsity_nudge_high = individual[5] + (1/genfactor)
if sparsity_nudge_low < 0:
sparsity_nudge_low = 0
if sparsity_nudge_high > 1:
sparsity_nudge_high = 1
individual[5] = random.uniform(sparsity_nudge_low, sparsity_nudge_high)
#print(running_ngen,": tune sparsity between",sparsity_nudge_low,sparsity_nudge_high)
else:
#print(running_ngen,": explore sparsity")
individual[5] = random.uniform(sparsity_low, sparsity_high)
elif gene == 6: # 6 attr_sphere_radius (genes 1, 2 and 7 are handled elsewhere)
# here I calculate a mutation-from-current-setting with a nudge range shrinking with each generation
if explore_or_tune == 0: # 0 is tune, 1 is explore
sphere_nudge_low = individual[6] - (1/genfactor)
sphere_nudge_high = individual[6] + (1/genfactor)
if sphere_nudge_low < 2:
sphere_nudge_low = 2
if sphere_nudge_high > 100:
sphere_nudge_high = 100
individual[6] = random.uniform(sphere_nudge_low, sphere_nudge_high)
#print(running_ngen,": tune Sphere radius between",sphere_nudge_low, sphere_nudge_high)
else:
#print(running_ngen,": explore sphere")
individual[6] = random.uniform(sphere_low, sphere_high)
elif gene == 7: # 7 attr_teacher_forcing
individual[7] = random.choice(teacher_forcing)
elif gene == 8 and explore_or_tune == 1: # 8 attr_random_state (explore only)
individual[8] = random.choice(random_state)
elif gene == 8 or gene == 9: # 9 attr_spectral_radius (a "tuning" mutation of gene 8 also lands here)
if explore_or_tune == 0: # 0 is tune, 1 is explore
# we calculate a shrinking range around the value it has now
#individual[9] = random.choice(spectral_radius)
spectral_range = (spectral_radius_high - spectral_radius_low)*(1/genfactor)
if spectral_range < 0.3:
spectral_range = 0.3
spectral_nudge_low = individual[9] - spectral_range*0.5
spectral_nudge_high = individual[9] + spectral_range*0.5
if spectral_nudge_low < spectral_radius_low:
spectral_nudge_low = spectral_radius_low
if spectral_nudge_high > spectral_radius_high:
spectral_nudge_high = spectral_radius_high
individual[9] = random.uniform(spectral_nudge_low, spectral_nudge_high)
#print(running_ngen,": tune Sphere radius between",spectral_nudge_low, spectral_nudge_high)
else: # we select over the whole range
#print(running_ngen,": explore SR")
individual[9] = random.uniform(spectral_radius_low, spectral_radius_high)
return individual,
# note the final comma, leave it in the return
# Start by setting up the DEAP genetic search fitness function
creator.create("FitnessMin", base.Fitness, weights=(-1.0,)) # Minimize the fitness function value
creator.create("Individual", list, fitness=creator.FitnessMin)
toolbox = base.Toolbox()
# Possible parameter values
# Size of the Reservoir
n_reservoir = [500,600]
# projection: 0 = no projection, 1 = spherical projection (default), 2 = soft projection
projection = [1]
# noise: 0 = no noise (default), or set a noise value, ie 0.001, for (regularization)
noise = [0, 0.000000000001, 0.00000000001, 0.000000000015]
# spectral radius
# this is now done random uniform between 0.01, 2.01
# rectifier: 0 = no rectifier (ie linear) default, 1 = hard tanh rectifier
rectifier = [0, 1]
# steepness: default is 2, or set a specific steepness override to control soft projection
steepness = [2]
# sparsity
# this is now done random uniform between 0.001, 0.999
# sphere_radius
# this is now done random uniform between 1, 60
# teacher_forcing
teacher_forcing = [False, True]
# random state / seed
random_state = list(range(5, 201))
# this is a block to define continuous value ranges to search over, rather than searching lists
res_low, res_high = 2000, 2000
spectral_radius_low, spectral_radius_high = 0.001, 15.00
sphere_low, sphere_high = 1, 100
sparsity_low, sparsity_high = 0.01, 0.99
from objproxies import CallbackProxy
#define how each gene will be generated (e.g. criterion is a random choice from the criterion list).
toolbox.register("attr_n_reservoir", random.choice, n_reservoir)
toolbox.register("attr_projection", random.choice, projection)
toolbox.register("attr_noise", random.choice, noise)
toolbox.register("attr_rectifier", random.choice, rectifier)
toolbox.register("attr_steepness", random.choice, steepness)
#
#toolbox.register("attr_sparsity", random.choice, sparsity)
toolbox.register("attr_sparsity", random.uniform, sparsity_low, sparsity_high)
#
#toolbox.register("attr_sphere_radius", random.choice, sphere_radius)
toolbox.register("attr_sphere_radius", random.uniform, sphere_low, sphere_high)
toolbox.register("attr_teacher_forcing", random.choice, teacher_forcing)
toolbox.register("attr_random_state", random.choice, random_state)
#
#toolbox.register("attr_spectral_radius", random.choice, spectral_radius)
toolbox.register("attr_spectral_radius", random.uniform, spectral_radius_low, spectral_radius_high)
# This is the order in which genes will be combined to create a chromosome
N_CYCLES = 1
toolbox.register("individual", tools.initCycle, creator.Individual,
( toolbox.attr_n_reservoir
, toolbox.attr_projection
, toolbox.attr_noise
, toolbox.attr_rectifier
, toolbox.attr_steepness
, toolbox.attr_sparsity
, toolbox.attr_sphere_radius
, toolbox.attr_teacher_forcing
, toolbox.attr_random_state
, toolbox.attr_spectral_radius
), n=N_CYCLES)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
# test implementing a hack for elitism, found here: https://groups.google.com/forum/#!topic/deap-users/iannnLI2ncE
def selElitistAndTournament(individuals, k_elitist, k_tournament, tournsize):
return tools.selBest(individuals, k_elitist) + tools.selTournament(individuals, k_tournament, tournsize=3)
## Run the genetic search using these params
population_size = 50
number_of_generations = 20
crossover_probability = 0.7
mutation_probability = 0.3
tournement_size = math.ceil(population_size*0.06) +1
# Kai Staats mentioned the optimal tournament size is about 7 out of 100, so parameterising this to the population size
print(tournement_size)
toolbox.register("mate", tools.cxOnePoint)
toolbox.register("mutate",mutate, running_ngen=CallbackProxy(lambda: xgen), total_ngen = CallbackProxy(lambda: tgen) )
toolbox.register("select", tools.selTournament, tournsize=tournement_size)
#POP_SIZE = population_size
#toolbox.register("select", selElitistAndTournament, k_elitist=int(0.1*POP_SIZE), k_tournament=POP_SIZE - int(0.1*POP_SIZE), tournsize=3)
toolbox.register("evaluate", evaluate)
pop = toolbox.population(n=population_size)
pop = tools.selBest(pop, int(0.1*len(pop))) + tools.selTournament(pop, len(pop)-int(0.1*len(pop)), tournsize=tournement_size)
hof = tools.HallOfFame(1)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
#stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)
print(datetime.datetime.now())
# this runs The genetic search for the best ESN:
print(datetime.datetime.now())
# was algorithms.eaSimple, now trying out ESNsimple
pop, log = ESNSimple(pop, toolbox, cxpb=crossover_probability, stats = stats,
mutpb = mutation_probability, ngen=number_of_generations, halloffame=hof,
verbose=True)
print(datetime.datetime.now())
print(datetime.datetime.now())
best_parameters = hof[0] # save the optimal set of parameters
gen = log.select("gen")
max_ = log.select("max")
avg = log.select("avg")
min_ = log.select("min")
# this runs The genetic search for the best ESN:
print(datetime.datetime.now())
# was algorithms.eaSimple, now trying out ESNsimple
pop, log = ESNSimple(pop, toolbox, cxpb=crossover_probability, stats = stats,
mutpb = mutation_probability, ngen=number_of_generations, halloffame=hof,
verbose=True)
print(datetime.datetime.now())
print(best_parameters)
win_nr = best_parameters[0]
print("n_reservoir = ", best_parameters[0])
print("projection = ", best_parameters[1])
print("noise = ", best_parameters[2])
print("rectifier = ", best_parameters[3])
print("steepness = ", best_parameters[4])
print("sparsity = ", best_parameters[5])
print("sphere_radius = ", best_parameters[6])
print("teacher_forcing = ", best_parameters[7])
print("random_state = ", best_parameters[8])
print("spectral_radius = ", best_parameters[9])
#import numpy as np
#from pyESN import ESN
from matplotlib import pyplot as plt
%matplotlib inline
data = np.load('mackey_glass_t17.npy') # http://minds.jacobs-university.de/mantas/code
esn = ESN(n_inputs = 1, # not searched
n_outputs = 1, # not searched
n_reservoir = best_parameters[0],
projection = best_parameters[1],
noise = best_parameters[2],
rectifier = best_parameters[3],
steepness = best_parameters[4],
sparsity = best_parameters[5],
sphere_radius = best_parameters[6],
teacher_forcing = best_parameters[7],
random_state= best_parameters[8],
spectral_radius = best_parameters[9],
)
trainlen = 2000
future = 2000
pred_training = esn.fit(np.ones(trainlen),data[:trainlen])
prediction = esn.predict(np.ones(future))
print("test error: \n"+str(np.sqrt(np.mean((prediction.flatten() - data[trainlen:trainlen+future])**2))))
plt.figure(figsize=(19,6.5))
plt.plot(range(0,trainlen+future),data[0:trainlen+future],'k',label="target system")
plt.plot(range(trainlen,trainlen+future),prediction,'r', label="free running ESN")
lo,hi = plt.ylim()
plt.plot([trainlen,trainlen],[lo+np.spacing(1),hi-np.spacing(1)],'k:')
plt.legend(loc=(0.61,1.1),fontsize='x-small')
import torchesn
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from cbrain.imports import *
from cbrain.utils import *
from cbrain.data_generator import DataGenerator, threadsafe_generator
from cbrain.models import *
from cbrain.model_diagnostics import ModelDiagnostics
limit_mem()
PREPROC_DIR = '/scratch/srasp/preprocessed_data/'
norms = ('feature_means', 'max_rs', None, 'target_conv')
def get_lr_sched(lr_init, div, step):
def lr_update(epoch):
# From goo.gl/GXQaK6
init_lr = lr_init
drop = 1./div
epochs_drop = step
lr = init_lr * np.power(drop, np.floor((1+epoch)/epochs_drop))
print('lr:', lr)
return lr
return lr_update
def get_diag(m, sst='4k', convo=False, convo_tile=False):
return ModelDiagnostics(m,
fpath=f'{PREPROC_DIR}fbp_engy_ess_valid_{sst}_features.nc',
tpath=f'{PREPROC_DIR}fbp_engy_ess_valid_{sst}_targets.nc',
npath=f'{PREPROC_DIR}fbp_engy_ess_train_fullyear_norm.nc',
norms=norms, convo=convo, convo_tile=convo_tile)
def evaluate(m, convo=False, convo_tile=False):
dref = get_diag(m, 'fullyear', convo, convo_tile); dref.compute_stats()
print(dref.mean_stats(10)['hor_r2'])
d4k = get_diag(m, '4k', convo, convo_tile); d4k.compute_stats()
print(d4k.mean_stats(10)['hor_r2'])
d4k.plot_double_yz(10, 15, 'TPHYSTND', cmap='bwr', vmin=-7e-4, vmax=7e-4);
return dref, d4k
```
## Load trained model and test on +4K data
```
mref = keras.models.load_model(
'/export/home/srasp/repositories/CBRAIN-CAM/saved_models/D004_fbp_engy_ess_fullyear_max_rs_deep.h5')
mref.summary()
dref_ref = get_diag(mref, 'fullyear')
dref_ref.compute_stats()
dref_ref.mean_stats(10)
dref_4k = get_diag(mref, '4k'); dref_4k.compute_stats()
dref_4k.mean_stats(10)
dref_4k.plot_double_yz(10, 15, 'TPHYSTND', cmap='bwr', vmin=-7e-4, vmax=7e-4);
```
## Train new models
### Standard model
```
train_gen = DataGenerator(
PREPROC_DIR,
'fbp_engy_ess_train_sample1_shuffle_features.nc',
'fbp_engy_ess_train_sample1_shuffle_targets.nc',
1024,
'fbp_engy_ess_train_fullyear_norm.nc',
'feature_means', 'max_rs', None, 'target_conv',
shuffle=True,
)
mstd = fc_model(
94,
65,
[256,256,256,256,256,256,256,256,256],
1e-2,
'mse',
batch_norm=False,
activation='LeakyReLU',
dr=None,
l2=None,
)
mstd.fit_generator(
train_gen.return_generator(),
train_gen.n_batches/5,
epochs=3,
workers=8,
max_queue_size=50,
callbacks=[LearningRateScheduler(get_lr_sched(1e-2, 5, 1))],
)
dstd_ref = get_diag(mstd, 'fullyear'); dstd_ref.compute_stats()
dstd_ref.mean_stats(10)
dstd_4k = get_diag(mstd, '4k'); dstd_4k.compute_stats(); dstd_4k.mean_stats(10)
dstd_4k.plot_double_yz(10, 15, 'TPHYSTND', cmap='bwr', vmin=-7e-4, vmax=7e-4);
```
### Batch norm
```
mbn = fc_model(
94,
65,
[256,256,256,256,256,256,256,256,256],
1e-2,
'mse',
batch_norm=True,
activation='LeakyReLU',
dr=None,
l2=None,
)
mbn.fit_generator(
train_gen.return_generator(),
train_gen.n_batches/5,
epochs=3,
workers=8,
max_queue_size=50,
callbacks=[LearningRateScheduler(get_lr_sched(1e-2, 5, 1))],
)
evaluate(mbn)
```
### Convolution
```
@threadsafe_generator
def data_generator_convo(data_dir, feature_fn, target_fn, shuffle=True,
batch_size=512, feature_norms=None, target_norms=None, noise=None):
"""Works on pre-stacked targets with truly random batches
Hard coded right now for
features = [TBP, QBP, VBP, PS, SOLIN, SHFLX, LHFLX]
and lev = 30
"""
# Open files
feature_file = h5py.File(data_dir + feature_fn, 'r')
target_file = h5py.File(data_dir + target_fn, 'r')
# Determine sizes
n_samples = feature_file['features'].shape[0]
n_batches = int(np.floor(n_samples / batch_size))
# Create ID list
idxs = np.arange(0, n_samples, batch_size)
if shuffle:
np.random.shuffle(idxs)
# generate
while True:
for i in range(n_batches):
batch_idx = idxs[i]
x = feature_file['features'][batch_idx:batch_idx + batch_size, :]
if feature_norms is not None: x = (x - feature_norms[0]) / feature_norms[1]
x1 = x[:, :90].reshape((x.shape[0], 30, -1))
x2 = x[:, 90:]
y = target_file['targets'][batch_idx:batch_idx + batch_size, :]
if target_norms is not None: y = (y - target_norms[0]) * target_norms[1]
if noise is not None:
x += np.random.normal(0, noise, x.shape)
yield [x1, x2], y
conv_gen = data_generator_convo(train_gen.data_dir, train_gen.feature_fn, train_gen.target_fn,
train_gen.shuffle, train_gen.batch_size, train_gen.feature_norms,
train_gen.target_norms, train_gen.noise)
mconv = conv_model((30, 3), 4, 65, [32, 64, 128], [256, 256], 1e-2, 'mse', activation='LeakyReLU',
padding='valid', stride=2)
mconv.summary()
mconv.fit_generator(
conv_gen,
train_gen.n_batches/5,
epochs=3,
workers=8,
max_queue_size=50,
callbacks=[LearningRateScheduler(get_lr_sched(1e-2, 5, 1))],
)
evaluate(mconv, convo=True)
```
### Convolution with batch norm
```
mconvbn = conv_model((30, 3), 4, 65, [16, 32], [256, 256], 1e-2, 'mse', activation='LeakyReLU', batch_norm=True)
mconvbn.summary()
mconvbn.fit_generator(
conv_gen,
train_gen.n_batches/5,
epochs=3,
workers=8,
max_queue_size=50,
callbacks=[LearningRateScheduler(get_lr_sched(1e-2, 5, 1))],
)
evaluate(mconvbn, convo=True)
```
### Convolution with tiles
```
@threadsafe_generator
def data_generator_convo_tile(data_dir, feature_fn, target_fn, shuffle=True,
batch_size=512, feature_norms=None, target_norms=None, noise=None):
"""Works on pre-stacked targets with truly random batches
Hard coded right now for
features = [TBP, QBP, VBP, PS, SOLIN, SHFLX, LHFLX]
and lev = 30
"""
# Open files
feature_file = h5py.File(data_dir + feature_fn, 'r')
target_file = h5py.File(data_dir + target_fn, 'r')
# Determine sizes
n_samples = feature_file['features'].shape[0]
n_batches = int(np.floor(n_samples / batch_size))
# Create ID list
idxs = np.arange(0, n_samples, batch_size)
if shuffle:
np.random.shuffle(idxs)
# generate
while True:
for i in range(n_batches):
batch_idx = idxs[i]
x = feature_file['features'][batch_idx:batch_idx + batch_size, :]
if feature_norms is not None: x = (x - feature_norms[0]) / feature_norms[1]
x = np.concatenate(
[
x[:, :90].reshape((x.shape[0], 30, -1)),
np.rollaxis(np.tile(x[:, 90:], (30, 1, 1)), 0, 2)
],
axis=-1,
)
y = target_file['targets'][batch_idx:batch_idx + batch_size, :]
if target_norms is not None: y = (y - target_norms[0]) * target_norms[1]
if noise is not None:
x += np.random.normal(0, noise, x.shape)
yield x, y
conv_gen_tile = data_generator_convo_tile(train_gen.data_dir, train_gen.feature_fn, train_gen.target_fn,
train_gen.shuffle, train_gen.batch_size, train_gen.feature_norms,
train_gen.target_norms, train_gen.noise)
mconvtile = conv_model((30, 7), None, 65, [16, 32, 64], [512], 1e-2, 'mse', activation='LeakyReLU', tile=True,
padding='valid', stride=2, dr=0.2)
mconvtile.summary()
mconvtile.fit_generator(
conv_gen_tile,
train_gen.n_batches/5,
epochs=3,
workers=8,
max_queue_size=50,
callbacks=[LearningRateScheduler(get_lr_sched(1e-2, 5, 1))],
)
evaluate(mconvtile, convo_tile=True)
```
### Dropout
### L2 Regularization
### Input noise
| github_jupyter |
# Scrape play-by-play data from ESPN
The code is a bit messy, but the idea is pretty simple. Pro-Football-Reference.com's play-by-play tables are one of the best resources out there, but they don't say which team has the ball. That's easy to figure out from context, but not for an algorithm that doesn't know which players are on which team. ESPN's pages for individual game play-by-play make it easier to tell which team has the ball for a given drive, because they show a little team logo next to each drive indicating possession. So I wrote a scraper to get that information from ESPN.
The scraper works like this:
* Use the requests library to get an html version of the webpage.
* Make a soup object from the source html.
* Use a customized function to grab a particular table from the soup object.
* In that table, pull out individual drives, which each load an image. The name of the image tells us which team has the ball.
* Within each drive, grab the individual plays. I make a dictionary for each column, then put them together to make a single pandas DataFrame.
The code written here first checks a different page to compile a list of urls for individual games. I then use the scraper to loop through webpages for individual games.
# Process the raw data
Making sense of the raw data is crucial. I do a few things to make the data useful.
* Parse down, distance, and field position.
* Parse quarter and time remaining.
* Make columns for score difference and total score.
* Make columns for whether home team has possession and wins.
* Parse play detail to determine whether an individual play is a run, pass, scramble, punt, field goal, whether a pass is complete, and how many yards were gained on the play.
* Determine whether an offensive play is "successful".
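As an example of the first parsing step, a down-and-distance string such as "1st & 10 at DEN 25" (the exact format here is an assumption based on ESPN's typical drive headers) can be split with a small regex:

```python
import re

# Hypothetical down/distance format, e.g. "1st & 10 at DEN 25" or "3rd & Goal at NE 4"
DOWNDIST_RE = re.compile(
    r'(?P<down>\d)(?:st|nd|rd|th)\s*&\s*(?P<dist>\d+|Goal)'
    r'\s+at\s+(?P<side>[A-Z]+)\s+(?P<yard>\d+)')

def parse_downdist(text):
    """Return (down, distance, field side, yard line), or None if the text doesn't match."""
    m = DOWNDIST_RE.search(text)
    if m is None:
        return None  # e.g. "End of Half" rows carry no down/distance
    dist = m.group('dist')
    return (int(m.group('down')),
            None if dist == 'Goal' else int(dist),
            m.group('side'),
            int(m.group('yard')))
```

Rows that fail to match (end-of-half markers and the like) come back as `None`, which makes them easy to filter out of the DataFrame later.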
```
import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup
def make_soup(url):
res = requests.get(url)
soup = BeautifulSoup(res.text, 'lxml')
return soup
class pbp_drive():
def parse_drive(self,drive):
# Try to read drive header
header = drive.find("div",{"class":"accordion-header"})
self.is_half = False
if header is None: # Then we've got the end of a half or something
self.is_half = True
# print("found end of half, etc.")
text = drive.find("span",{"class":"post-play"}).contents
df = pd.DataFrame([[[],text]], columns=['downdist','detail'])
return df
# Grab information from the drive header
possessor_logo = drive.find("span",{"class":"home-logo"}).contents[0]
s = "nfl/500/"
e = ".png"
# Cut off pieces of url before and after home team
self.offense = (str(possessor_logo).split(s))[1].split(e)[0].upper()
# Get result of the drive
self.result = header.find("span",{"class":"headline"}).contents
# Get info about home/away score
home_info = header.find("span",{"class":"home"}).contents
self.home_team = home_info[0].contents[0]
self.home_score_after = home_info[1].contents[0]
away_info = header.find("span",{"class":"away"}).contents
self.away_team = away_info[0].contents[0]
self.away_score_after = away_info[1].contents[0]
# Get drive summary
self.drive_detail = header.find("span",{"class":"drive-details"}).contents
# print(self.drive_detail)
self.num_plays = self.drive_detail[0].split()[0]
self.num_yards = self.drive_detail[0].split()[2]
self.time_of_poss = self.drive_detail[0].split()[4]
# print(self.result)
# print([self.home_team,self.home_score_after,self.away_team,self.away_score_after])
# Make a dataframe for the drive
# Grab info about individual plays from this drive
playlist = []
plays = drive.find_all("li")
for p in plays:
try:
downdist = p.h3.contents
detail = p.span.contents[0].replace("\n","").replace("\t","")
playlist.append([downdist,detail])
# print([downdist,detail])
except:
pass
# Return a dataFrame of plays for this drive
df = pd.DataFrame(playlist, columns=['downdist','detail'])
df['play_num'] = df.index + 1
return df
# Putting all of the pieces together
def get_game_df(url):
# Make a soup object with html
soup = make_soup(url)
# Find article with play-by-play table
article = soup.find("article", {"class":"sub-module play-by-play"})
# Article is constructed like accordion, with items corresponding
# to individual drives
accordion = article.find("ul", {"class":"css-accordion"})
drives = accordion.find_all("li", {"class":"accordion-item"})
# Now parse each of the drives into a dataFrame
drivelist = []
for i, drive in enumerate(drives):
# Initialize drive object, then parse
d = pbp_drive()
d.df = d.parse_drive(drive)
d.drive_num = i
if i == 0:
d.home_score_before = 0
d.away_score_before = 0
else:
d.home_score_before = drivelist[-1].home_score_after
d.away_score_before = drivelist[-1].away_score_after
# If the drive isn't a special section marking the end of half/game
# Then add drive's dataFrame to the drive list
if not d.is_half:
d.df['home'] = d.home_team
d.df['away'] = d.away_team
d.df['possession'] = d.offense
d.df['home_score_before'] = d.home_score_before
d.df['away_score_before'] = d.away_score_before
d.df['home_score_after'] = d.home_score_after
d.df['away_score_after'] = d.away_score_after
d.df['drive_num'] = d.drive_num
#print(d.df)
drivelist.append(d)
# Make a dataFrame for individual drives
drive_dicts = [{'drive':dd.drive_num,
'offense':dd.offense,
'plays':dd.num_plays,
'yds_gained':dd.num_yards,
'time':dd.time_of_poss,
'result':dd.result[0],
'home':dd.home_team,
'away':dd.away_team,
'home_score_before':dd.home_score_before,
'away_score_before':dd.away_score_before,
'home_score_after':dd.home_score_after,
'away_score_after':dd.away_score_after }
for dd in drivelist ]
drives_df = pd.DataFrame(drive_dicts)
pbp_df = pd.concat([d.df for d in drivelist])
return pbp_df, drives_df
game, drives = get_game_df("http://www.espn.com/nfl/playbyplay?gameId=400951568")
print(game.head(10))
drives.head(10)
# Function to get gameIds for a particular year/week
results = {}
def get_gameId(year,week):
# Make a soup object for the appropriate page
url = "http://www.espn.com/nfl/schedule/_/week/{0}/year/{1}/seasontype/2".format(week,year)
soup = make_soup(url)
sched_page = soup.find("section",{"id":"main-container"})
# Make a list for gameIds
gameids = []
for link in sched_page.find_all('a'):
if "gameId" in link.get('href'):
# Extract last bit of url listed
s = "gameId="
this_game = link.get('href').split(s)[1]
gameids.append(this_game)
# And add text displayed to a dictionary
results[this_game] = link.contents[0]
return gameids, results
gameids, results = get_gameId(2017,1)
print(gameids)
print(results)
# Loop over desired weeks/years to get gameIds that can be used to look up play-by-play for each of the games
import time
gameids = []
gameresults = {}
gameyear = {}
gameweek = {}
for year in range(2009,2018):
for week in range(1,18):
print("Looking up gameIds for {0} week {1}".format(year,week))
ids, results = get_gameId(year,week)
gameids.append(ids)
# Add entry in dictionaries for each gameId
for i in ids:
gameyear[i] = year
gameweek[i] = week
gameresults[i] = results[i]
# Sleep so we don't get blocked
time.sleep(0.5)
# Now flatten gameids, which is a list of weekly lists
ids = [i for sublist in gameids for i in sublist]
print(gameresults)
# Make dataframe for game-specific information using our dictionaries for year and week
data = [ {'gameId':i,
'season':gameyear[i],
'week':gameweek[i],
'result':gameresults[i]}
for i in ids]
gamedata_df = pd.DataFrame(data)
# Set the gameId as the unique identifier for each row
gamedata_df.set_index('gameId', inplace=True)
gamedata_df.sample(10)
# Create list of individual game dataFrames
pbp_list = []
drivelevel_list = []
game_home = {}
game_away = {}
# Now loop over gameIds to scrape individual game play-by-play
for i in ids:
# Check year of game. Only search for pbp of games from 2004 or later.
if gameyear[i] >= 2004:
try:
print(i)
# Make whole url for play-by-play
url = "http://www.espn.com/nfl/playbyplay?gameId="+i
# Call function to scrape and parse info into a dataFrame
pbp_df, drives_df = get_game_df(url)
# Add column to dataframe for gameId
pbp_df['gameId'] = i
drives_df['gameId'] = i
# Extract home/away from game df
game_home[i] = pbp_df['home'].values[0]
game_away[i] = pbp_df['away'].values[0]
pbp_list.append(pbp_df)
drivelevel_list.append(drives_df)
except:
print("Failed to scrape gameId "+i)
pass
time.sleep(0.5)
# Put individual game dataFrames together into one big dataFrame
allplays_df = pd.concat(pbp_list)
alldrives_df = pd.concat(drivelevel_list)
# Add columns to gamedata_df for home and away team dictionaries
gamedata_df['home'] = pd.Series(game_home)
gamedata_df['away'] = pd.Series(game_away)
print(gamedata_df[gamedata_df['season']>=2004].head(5))
allplays_df.sample(10)
alldrives_df.sample(10)
```
## Begin processing the dataFrames
```
# Get information about winning team and final scores
winner = {}
home_score = {}
away_score = {}
ot = {}
for i in list(gamedata_df.index.values):
try:
final = gameresults[i]
# Winner should be first team listed
winner[i] = final.split()[0]
if "(OT)" in final:
ot[i] = 1
else:
ot[i] = 0
if game_home[i] == winner[i]:
# Home team wins, their score is listed first
home_score[i] = final.split()[1].rstrip(",")
away_score[i] = final.split()[3]
else:
# Away team wins, their score is listed first
home_score[i] = final.split()[3]
away_score[i] = final.split()[1].rstrip(",")
# Check for a tie
if home_score[i] == away_score[i]:
winner[i] = "TIE"
except:
winner[i] = "unknown"
ot[i] = "unknown"
home_score[i] = "unknown"
away_score[i] = "unknown"
gamedata_df['winner'] = pd.Series(winner)
gamedata_df['home_score'] = pd.Series(home_score)
gamedata_df['away_score'] = pd.Series(away_score)
gamedata_df['OT'] = pd.Series(ot)
# Games where grabbing pbp failed have some unknown values
gamedata_df[ gamedata_df['winner'] == "unknown" ]
# Double check a game that couldn't get home/away
url = "http://www.espn.com/nfl/playbyplay?gameId="+"400554366"
g_df,d_df = get_game_df(url)
gamedata_df.sample(15)
```
## And start working with the play-by-play data
```
# Start by saving all dataFrames to disk in case I mess anything up
gamedata_df.to_csv("../data/espn_gamedata2009-2017.csv")
allplays_df.to_csv("../data/espn_rawplays2009-2017.csv")
alldrives_df.to_csv("../data/espn_drives2009-2017.csv")
# Load dataframes from disk
gamedata_df = pd.read_csv("../data/espn_gamedata2009-2017.csv")
allplays_df = pd.read_csv("../data/espn_rawplays2009-2017.csv")
alldrives_df = pd.read_csv("../data/espn_drives2009-2017.csv")
# Take another look at what we've got so far
gamedata_df.set_index('gameId', inplace=True)
print(allplays_df.info())
allplays_df.sample(5)
# Start by trying to parse down, distance, and field position
# Make lists to populate with a value for each play
down = []
dist = []
home_fieldpos = []
# Make lists out of home and away teams for comparison with field position
hometeam = allplays_df.home.values
awayteam = allplays_df.away.values
# Function to return fieldposition from the offense's point of view
def get_fieldpos(teamside,ydline,j):
if teamside == hometeam[j]:
# Ball is on the home team's half. Location should be negative
return -1*(50-ydline)
elif teamside == awayteam[j]:
return 50-ydline
else:
return "x"
for j, c in enumerate(allplays_df['downdist'].values):
x = c.strip("[]'")
# Check for an empty list. This probably means an end of quarter/half line
if not x:
down.append(0)
dist.append(0)
home_fieldpos.append(0)
# print("Found empty list")
else:
x = [x]
pieces = x[0].split()
# print(pieces)
# Get down
if not pieces[0][0].isalpha(): # check if the first character is alphabetic
# Then first character is numeric. This is the down number.
down.append(int(pieces[0][0]))
else:
down.append(0)
# Get distance
for i, word in enumerate(pieces):
if word == "and":
dist.append(pieces[i+1]) # Keep as string to preserve goal-to-go situations
# Get fieldposition from the home team's perspective
for i, word in enumerate(pieces):
if word == "at":
if pieces[i+1] == '50':
home_fieldpos.append(0)
else:
teamside = pieces[i+1]
ydline = int(pieces[i+2])
fieldpos = get_fieldpos(teamside,ydline,j)
if fieldpos == "x": # Failed to match teamside with home/away teams
# Change teamside in a couple cases to account for teams moving
# ESPN seems to always use most recent short form name in fieldposition
if teamside == "LAR":
teamside = "STL"
elif teamside == "LAC":
teamside = "SD"
# Try again with new teamside
fieldpos = get_fieldpos(teamside,ydline,j)
if fieldpos == "x":
home_fieldpos.append(0)
print(pieces)
print("Failed to find side of field correctly")
else:
home_fieldpos.append(fieldpos)
# print([down[-1], dist[-1], home_fieldpos[-1]])
allplays_df['down'] = down
allplays_df['dist'] = dist
allplays_df['home_fieldpos'] = home_fieldpos
allplays_df.sample(5)
# Now look to extract the time remaining (in seconds)
detail = allplays_df.detail.values
# Make lists for quarter and time_remaining
qtr = []
time_rem = []
for d in detail[:]:
# print(d)
# print(type(d))
try:
if not d:
# detail is an empty list
qtr.append(0)
time_rem.append("0:00")
else:
pieces = d.split()
# print(pieces)
if pieces[0][0] == "E":
# Found End of Quarter/Overtime line
qtr.append(0)
time_rem.append("0:00")
elif pieces[0][0] == "(":
# Found beginning of standard "(1:23 - 4th)" template
qtr.append(pieces[2][0])
time_rem.append(pieces[0].lstrip("("))
else:
# Not sure what this is, so just be safe and go to 0:00 rem in 4th
print(d)
qtr.append(0)
time_rem.append("0:00")
except:
print("Default parse failed for:")
print(d)
qtr.append(0)
time_rem.append("0:00")
# print(qtr[-1])
# print(time_rem[-1])
allplays_df['qtr'] = qtr
allplays_df['time_rem'] = time_rem
allplays_df[['downdist','detail','down','dist','qtr','time_rem']].sample(25)
# Make a column for seconds remaining in the game
qtr = allplays_df.qtr.values
time_rem = allplays_df.time_rem.values
secs_rem = []
for i, tr in enumerate(time_rem):
if qtr[i] in ["1","2","3","4"]:
q = int(qtr[i])
elif qtr[i] == "O":
q = 4
else:
q = 0
mins = int(tr.split(":")[0])
secs = int(tr.split(":")[1])
secs_rem.append( 900*(4-q) + 60*mins + secs )
allplays_df['secs_rem'] = secs_rem
allplays_df[['qtr','time_rem','secs_rem']].sample(10)
allplays_df.info()
# Make column for score difference
allplays_df['home_lead'] = allplays_df['home_score_before'] - allplays_df['away_score_before']
# Make column for total score
allplays_df['total_score'] = allplays_df['home_score_before'] + allplays_df['away_score_before']
# Make column with derived metric for adjusted lead
# Make a column for adjusted score
import math
def adjusted_lead(play):
try:
return play.home_lead / math.sqrt( 3600-play.secs_rem + 1 )
except:
return 0
allplays_df['adj_lead'] = allplays_df.apply(
lambda row: adjusted_lead(row), axis=1 )
# Make a column for whether this play takes place in overtime
allplays_df['OT'] = [1 if " OT" in str(p) else 0 for p in allplays_df.detail.values]
allplays_df[allplays_df.OT == 1].sample(10)
# Make column for whether home team has possession
hometeam = allplays_df.home.values
awayteam = allplays_df.away.values
possession = allplays_df.possession.values
home_possession = [
1 if hometeam[i] == p else 0 if awayteam[i] == p else "X" for i, p in enumerate(possession)
]
allplays_df['home_possession'] = home_possession
# Make a column for whether the home team wins
gamedata_df.columns
hometeam = gamedata_df.home.values
awayteam = gamedata_df.away.values
winner = gamedata_df.winner.values
# Make new column for gamedata
home_wins = [1 if hometeam[i] == w else 0 if awayteam[i] == w else "X" for i, w in enumerate(winner)]
gamedata_df['home_win'] = home_wins
# Try pandas join
#allplays_df = allplays_df.join(gamedata_df['home_win'], on='gameId')
# Must be an easier way to make the home_win column but hopefully this works
joined_df = pd.merge(allplays_df, gamedata_df[['season','week','home_win']],
how='left',
left_on='gameId',
right_index=True)
joined_df.columns
final_cols = [
'downdist', 'detail', 'home', 'away', 'possession',
'home_score_before', 'away_score_before', 'gameId', 'down', 'dist', 'home_fieldpos',
'qtr', 'time_rem', 'secs_rem', 'home_lead', 'total_score', 'adj_lead',
'OT','home_possession','home_win','season','week'
]
print(joined_df[final_cols].describe())
joined_df[final_cols].sample(20)
# Objective: Assuming no fumbled snap or pre-snap penalty or other shenanigans,
# should be able to figure out which plays are "successful" and then look at success rates
# Practical things to deal with:
# What raw data to handle?
# Need to be able to take a raw-ish pbp table and extract success rate
# So make one function that will try and label plays as run/pass, yardage gained, and success
def found_pass(detail):
d = detail.lower()
pass_terms = [" pass", " sacked", " scramble",
"interception", "intercepted"]
for term in pass_terms:
if term in d:
return True
return False
def found_scramble(detail):
d = detail.lower()
if " scramble" in d:
return True
return False
def found_run(detail):
d = detail.lower()
run_terms = [" run ", " rush", " left tackle ", " up the middle ",
" left end ", " right end ", " left guard ", " right guard "]
if not " scramble" in d:
for term in run_terms:
if term in d:
return True
return False
def found_punt(detail):
d = detail.lower()
if " punts " in d:
return True
elif " punt return" in d:
return True
return False
def found_fieldgoal(detail):
d = detail.lower()
if " field goal" in d:
return True
return False
def yds_run( i, detail ):
words = detail.lower().split()
# look for yardage in format "for X yards"
for j, w in enumerate(words):
if w == "for" and len(words) > j+2:
if words[j+2].rstrip(".,") in ("yd","yds","yrd","yrds","yard","yards"):
return int(words[j+1])
# or "for no gain"
elif "no" in words[j+1] and "gain" in words[j+2]:
return 0
# or "X yard run/rush"
elif w in ("yd","yds","yrd","yrds","yard","yards") and len(words) >= j+2:
if words[j+1].rstrip(".,") in ("run","rush"):
return int(words[j-1])
return "x"
def yds_passed( i, detail ):
words = detail.lower().split()
# look for yardage in format "for X yards"
for j, w in enumerate(words):
if w == "for" and len(words) > j+2:
if words[j+2].rstrip(".,") in ("yd","yds","yrd","yrds","yard","yards"):
return int(words[j+1])
# or "for no gain"
elif "no" in words[j+1] and "gain" in words[j+2]:
return 0
# or "X yard pass"
elif w in ("yd","yds","yrd","yrds","yard","yards") and len(words) >= j+2:
if words[j+1].rstrip(".,") == "pass":
return int(words[j-1])
# Or maybe pass went incomplete
if "incomplete" in detail.lower():
return 0
# Or maybe pass was intercepted. In this case, just say yds_gained is zero
elif ("intercepted" in detail.lower()) or ("interception" in detail.lower()):
return 0
return "x"
def parse_details(df):
print(df.columns)
details = df.detail.values
down = df.down.values
# Make a bunch of lists for storing play-specific data.
# This method assumes that play details are entirely unique.
# If that assumption fails, we would need to build the lists based on the order of "details".
is_parseable = [False for d in details]
is_run = [False for d in details]
is_scramble = [False for d in details]
is_pass = [False for d in details]
is_punt = [False for d in details]
is_fieldgoal = [False for d in details]
yds_gained = ["x" for d in details]
runpass_play = [False for d in details]
# Loop through details going through logic tree to find appropriate values
for i, d in enumerate(details):
# Look exclusively for play details on downs 1-4
if down[i] in [1,2,3,4]:
# Parse a scramble
if found_scramble(d):
is_scramble[i] = True
yds_gained[i] = yds_run(i,d)
# Try to parse a run
if found_run(d):
is_run[i] = True
yds_gained[i] = yds_run(i,d)
# Try to parse a pass
if found_pass(d):
is_pass[i] = True
yds_gained[i] = yds_passed(i,d)
# Try and parse a punt
elif found_punt(d):
is_punt[i] = True
# Try and parse a field goal
elif found_fieldgoal(d):
is_fieldgoal[i] = True
for i, yds in enumerate(yds_gained):
if (is_run[i] or is_pass[i] or is_scramble[i]) and (yds != "x"):
is_parseable[i] = True
runpass_play[i] = True
elif is_punt[i]:
is_parseable[i] = True
elif is_fieldgoal[i]:
is_parseable[i] = True
# Now write the columns to the end of the df
df['is_parseable'] = is_parseable
df['is_run'] = is_run
df['is_pass'] = is_pass
df['is_scramble'] = is_scramble
df['is_punt'] = is_punt
df['is_fieldgoal'] = is_fieldgoal
df['yds_gained'] = yds_gained
df['runpass'] = runpass_play
return df
import copy
parsed_df = parse_details( copy.deepcopy(joined_df) )
parsed_df.sample(15)
# And finally save the dataFrame to a csv
parsed_df.to_csv("espn_parsedplays2009-2017.csv")
```
## End of finished code
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Better performance with tf.function and AutoGraph
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/function"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TF 2.0 brings together the ease of eager execution and the power of TF 1.0. At the center of this merger is `tf.function`, which allows you to transform a subset of Python syntax into portable, high-performance TensorFlow graphs.
A cool new feature of `tf.function` is AutoGraph, which lets you write graph code using natural Python syntax. For a list of the Python features that you can use with AutoGraph, see [AutoGraph Capabilities and Limitations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md). For more details about `tf.function`, see the RFC [TF 2.0: Functions, not Sessions](https://github.com/tensorflow/community/blob/master/rfcs/20180918-functions-not-sessions-20.md). For more details about AutoGraph, see `tf.autograph`.
This tutorial will walk you through the basic features of `tf.function` and AutoGraph.
## Setup
Import TensorFlow 2.0:
```
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
```
## The `tf.function` decorator
When you annotate a function with `tf.function`, you can still call it like any other function. But it will be compiled into a graph, which means you get the benefits of faster execution, running on GPU or TPU, or exporting to SavedModel.
```
@tf.function
def simple_nn_layer(x, y):
return tf.nn.relu(tf.matmul(x, y))
x = tf.random.uniform((3, 3))
y = tf.random.uniform((3, 3))
simple_nn_layer(x, y)
```
If we examine the result of the annotation, we can see that it's a special callable that handles all interactions with the TensorFlow runtime.
```
simple_nn_layer
```
If your code uses multiple functions, you don't need to annotate them all: any function called from an annotated function will also run in graph mode.
```
def linear_layer(x):
return 2 * x + 1
@tf.function
def deep_net(x):
return tf.nn.relu(linear_layer(x))
deep_net(tf.constant((1, 2, 3)))
```
Functions can be faster than eager code for graphs with many small ops, but for graphs with a few expensive ops (like convolutions) you may not see much speedup.
```
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(image):
return conv_layer(image)
image = tf.zeros([1, 200, 200, 100])
# warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")
lstm_cell = tf.keras.layers.LSTMCell(10)
@tf.function
def lstm_fn(input, state):
return lstm_cell(input, state)
input = tf.zeros([10, 10])
state = [tf.zeros([10, 10])] * 2
# warm up
lstm_cell(input, state); lstm_fn(input, state)
print("eager lstm:", timeit.timeit(lambda: lstm_cell(input, state), number=10))
print("function lstm:", timeit.timeit(lambda: lstm_fn(input, state), number=10))
```
## Use Python control flow
When using data-dependent control flow inside `tf.function`, you can use Python control flow statements and AutoGraph will convert them into appropriate TensorFlow ops. For example, `if` statements will be converted into `tf.cond()` if they depend on a `Tensor`.
In the example below, `x` is a `Tensor` but the `if` statement works as expected:
```
@tf.function
def square_if_positive(x):
if x > 0:
x = x * x
else:
x = 0
return x
print('square_if_positive(2) = {}'.format(square_if_positive(tf.constant(2))))
print('square_if_positive(-2) = {}'.format(square_if_positive(tf.constant(-2))))
```
Note: The previous example uses simple conditionals with scalar values. <a href="#batching">Batching</a> is typically used in real-world code.
AutoGraph supports common Python statements like `while`, `for`, `if`, `break`, `continue` and `return`, with support for nesting. That means you can use `Tensor` expressions in the condition of `while` and `if` statements, or iterate over a `Tensor` in a `for` loop.
```
@tf.function
def sum_even(items):
s = 0
for c in items:
if c % 2 > 0:
continue
s += c
return s
sum_even(tf.constant([10, 12, 15, 20]))
```
AutoGraph also provides a low-level API for advanced users. For example, we can use it to inspect the generated code.
```
print(tf.autograph.to_code(sum_even.python_function))
```
Here's an example of more complicated control flow:
```
@tf.function
def fizzbuzz(n):
for i in tf.range(n):
if i % 3 == 0:
tf.print('Fizz')
elif i % 5 == 0:
tf.print('Buzz')
else:
tf.print(i)
fizzbuzz(tf.constant(15))
```
## Keras and AutoGraph
AutoGraph is available by default in non-dynamic Keras models. For more information, see `tf.keras`.
```
class CustomModel(tf.keras.models.Model):
@tf.function
def call(self, input_data):
if tf.reduce_mean(input_data) > 0:
return input_data
else:
return input_data // 2
model = CustomModel()
model(tf.constant([-2, -4]))
```
## Side effects
Just like in eager mode, you can use operations with side effects, like `tf.assign` or `tf.print`, inside `tf.function`, and it will insert the necessary control dependencies to ensure they execute in order.
```
v = tf.Variable(5)
@tf.function
def find_next_odd():
v.assign(v + 1)
if v % 2 == 0:
v.assign(v + 1)
find_next_odd()
v
```
<a id="debugging"></a>
## Debugging
`tf.function` and AutoGraph work by generating code and tracing it into TensorFlow graphs. This mechanism does not yet support step-by-step debuggers like `pdb`. However, you can call `tf.config.experimental_run_functions_eagerly(True)` to temporarily enable eager execution inside the `tf.function` and use your favorite debugger:
```
@tf.function
def f(x):
if x > 0:
# Try setting a breakpoint here!
# Example:
# import pdb
# pdb.set_trace()
x = x + 1
return x
tf.config.experimental_run_functions_eagerly(True)
# You can now set breakpoints and run the code in a debugger.
f(tf.constant(1))
tf.config.experimental_run_functions_eagerly(False)
```
## Advanced example: An in-graph training loop
The previous section showed that AutoGraph can be used inside Keras layers and models. Keras models can also be used in AutoGraph code.
This example shows how to train a simple Keras model on MNIST, where the entire training process—loading batches, calculating gradients, updating parameters, calculating validation accuracy, and repeating until convergence—is performed in-graph.
### Download data
```
def prepare_mnist_features_and_labels(x, y):
x = tf.cast(x, tf.float32) / 255.0
y = tf.cast(y, tf.int64)
return x, y
def mnist_dataset():
(x, y), _ = tf.keras.datasets.mnist.load_data()
ds = tf.data.Dataset.from_tensor_slices((x, y))
ds = ds.map(prepare_mnist_features_and_labels)
ds = ds.take(20000).shuffle(20000).batch(100)
return ds
train_dataset = mnist_dataset()
```
### Define the model
```
model = tf.keras.Sequential((
tf.keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(10)))
model.build()
optimizer = tf.keras.optimizers.Adam()
```
### Define the training loop
```
compute_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
compute_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
def train_one_step(model, optimizer, x, y):
with tf.GradientTape() as tape:
logits = model(x)
loss = compute_loss(y, logits)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
compute_accuracy(y, logits)
return loss
@tf.function
def train(model, optimizer):
train_ds = mnist_dataset()
step = 0
loss = 0.0
accuracy = 0.0
for x, y in train_ds:
step += 1
loss = train_one_step(model, optimizer, x, y)
if step % 10 == 0:
tf.print('Step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
return step, loss, accuracy
step, loss, accuracy = train(model, optimizer)
print('Final step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
```
## Batching
In real applications batching is essential for performance. The best code to convert to AutoGraph is code where the control flow is decided at the _batch_ level. If making decisions at the individual _example_ level, try to use batch APIs to maintain performance.
For example, if you have the following code in Python:
```
def square_if_positive(x):
return [i ** 2 if i > 0 else i for i in x]
square_if_positive(range(-5, 5))
```
You may be tempted to write it in TensorFlow like this (and it would work!):
```
@tf.function
def square_if_positive_naive(x):
result = tf.TensorArray(tf.int32, size=x.shape[0])
for i in tf.range(x.shape[0]):
if x[i] > 0:
result = result.write(i, x[i] ** 2)
else:
result = result.write(i, x[i])
return result.stack()
square_if_positive_naive(tf.range(-5, 5))
```
But in this case, it turns out you can write the following:
```
def square_if_positive_vectorized(x):
return tf.where(x > 0, x ** 2, x)
square_if_positive_vectorized(tf.range(-5, 5))
```
## Re-tracing
Key points:
* Exercise caution when calling functions with non-tensor arguments, or with arguments that change shapes.
* Decorate module-level functions, and methods of module-level classes, and avoid decorating local functions or methods.
`tf.function` can give you significant speedup over eager execution, at the cost of a slower first-time execution. This is because when executed for the first time, the function is also *traced* into a TensorFlow graph. Constructing and optimizing a graph is usually much slower compared to actually executing it:
```
import timeit
@tf.function
def f(x, y):
return tf.matmul(x, y)
print(
"First invocation:",
timeit.timeit(lambda: f(tf.ones((10, 10)), tf.ones((10, 10))), number=1))
print(
"Second invocation:",
timeit.timeit(lambda: f(tf.ones((10, 10)), tf.ones((10, 10))), number=1))
```
You can easily tell when a function is traced by adding a `print` statement to the top of the function. Because any Python code is only executed at trace time, you will only see the output of `print` when the function is traced:
```
@tf.function
def f():
print('Tracing!')
tf.print('Executing')
print('First invocation:')
f()
print('Second invocation:')
f()
```
`tf.function` may also *re-trace* when called with different non-tensor arguments:
```
@tf.function
def f(n):
print(n, 'Tracing!')
tf.print(n, 'Executing')
f(1)
f(1)
f(2)
f(2)
```
A *re-trace* can also happen when tensor arguments change shape, unless you specified an `input_signature`:
```
@tf.function
def f(x):
print(x.shape, 'Tracing!')
tf.print(x, 'Executing')
f(tf.constant([1]))
f(tf.constant([2]))
f(tf.constant([1, 2]))
f(tf.constant([3, 4]))
```
In addition, `tf.function` always creates a new graph function with its own set of traces whenever it is called:
```
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
```
This can lead to surprising behavior when using the `@tf.function` decorator in a nested function:
```
def outer():
@tf.function
def f():
print('Tracing!')
tf.print('Executing')
f()
outer()
outer()
```
```
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
xs = np.linspace(-500, 500, 1000)
def f(x):
return -x**2
ys = f(xs)
plt.plot(xs, ys)
def coordinate_ascent_1D(xs, f, T=1000, step=20):
random_start = xs[0]
initial_params = [random_start]
ys = f(xs)
ax = plt.axes()
ax.plot(xs, ys, alpha=0.5)
ax.set_xlabel("x")
ax.set_ylabel("y")
for _ in range(T):
initial_param_step_up = initial_params[-1] + step
initial_param_step_down = initial_params[-1] - step
if f(initial_param_step_up) > f(initial_params[-1]):
initial_params.append(initial_param_step_up)
elif f(initial_param_step_down) > f(initial_params[-1]):
initial_params.append(initial_param_step_down)
"""
ax.arrow(
initial_params[-1],
f(initial_params[-1]),
initial_params[-2] - initial_params[-1],
f(initial_params[-2]) - f(initial_params[-1]),
color="red",
antialiased=True,
linewidth=3
)
"""
ax.scatter(
initial_params[-1],
f(initial_params[-1]),
color="red"
)
coordinate_ascent_1D(xs, f)
xs = np.linspace(-500, 500, 1000)
ys = np.linspace(-500, 500, 1000)
X, Y = np.meshgrid(xs, ys)
Z = -(X**2 + Y**2)
plt.contour(X, Y, Z, levels=20)
def coordinate_ascent_2D(xs, ys, T=1000, step=100):
def f(xs, ys):
return -(xs**2 + ys**2)
zs = f(xs, ys)
X, Y = np.meshgrid(xs, ys)
Z = -(X**2 + Y**2)
random_start_x = xs[0]
random_start_y = ys[0]
initial_params_x = [random_start_x]
initial_params_y = [random_start_y]
ax = plt.axes()
ax.contour(X, Y, Z, levels=20)
ax.set_xlabel("x")
ax.set_ylabel("y")
for _ in range(T):
def evaluate_fx(x_param):
return f(x_param, initial_params_y[-1])
can_continue = True
while can_continue:
initial_param_step_up = initial_params_x[-1] + step
itinial_param_step_down = initial_params_x[-1] - step
if evaluate_fx(initial_param_step_up) > evaluate_fx(initial_params_x[-1]):
initial_params_x.append(initial_param_step_up)
elif evaluate_fx(itinial_param_step_down) > evaluate_fx(initial_params_x[-1]):
initial_params_x.append(itinial_param_step_down)
elif evaluate_fx(itinial_param_step_down) < evaluate_fx(initial_params_x[-1]) and evaluate_fx(initial_param_step_up) < evaluate_fx(initial_params_x[-1]):
can_continue = False
ax.scatter([initial_params_x[-1]], [initial_params_y[-1]], color="red")
def evaluate_fy(y_param):
return f(initial_params_x[-1], y_param)
can_continue = True
while can_continue:
initial_param_step_up = initial_params_y[-1] + step
itinial_param_step_down = initial_params_y[-1] - step
if evaluate_fy(initial_param_step_up) > evaluate_fy(initial_params_y[-1]):
initial_params_y.append(initial_param_step_up)
elif evaluate_fy(itinial_param_step_down) > evaluate_fy(initial_params_y[-1]):
initial_params_y.append(itinial_param_step_down)
elif evaluate_fy(itinial_param_step_down) < evaluate_fy(initial_params_y[-1]) and evaluate_fy(initial_param_step_up) < evaluate_fy(initial_params_y[-1]):
can_continue = False
ax.scatter([initial_params_x[-1]], [initial_params_y[-1]], color="red")
"""
ax.arrow(
initial_params[-1],
f(initial_params[-1]),
initial_params[-2] - initial_params[-1],
f(initial_params[-2]) - f(initial_params[-1]),
color="red",
antialiased=True,
linewidth=3
)
"""
coordinate_ascent_2D(xs, ys, T=1)
```
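Since f(x) = -x² has its unique maximum at x = 0, the 1-D hill climb above can be sanity-checked without any plotting. A stripped-down standalone sketch (not part of the original notebook):

```python
def hill_climb_1d(f, x0, step=20.0, iters=1000):
    """Greedy 1-D hill climbing: move one step in whichever direction improves f."""
    x = x0
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        # otherwise stay put: neither neighbor improves f
    return x

best = hill_climb_1d(lambda x: -x**2, x0=-500.0)
print(best)  # converges to the maximizer, x = 0
```

Starting from -500 with step 20, the climb needs 25 improving moves to reach 0 and then stops moving, well within the 1000-iteration budget.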
<a href="https://colab.research.google.com/github/JoshuaShunk/NSDropout/blob/main/mnist_numbers_implementation_of_Dropout.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# MNIST Numbers Implementation of Old Dropout
```
import matplotlib.pyplot as plt
import numpy as np
import random
import keras
from keras.datasets import mnist
import tensorflow as tf
import pandas as pd
np.set_printoptions(threshold=np.inf)
np.random.seed(seed=22)  # Fixed seed so runs are comparable with the standard-dropout baseline
print(np.random.random(size=3)) #Check that seeds line up
#@title Load Layers (Credit to Harrison Kinsley & Daniel Kukiela for raw python implementation)
# Dense layer
class Layer_Dense:
    # Layer initialization
    def __init__(self, n_inputs, n_neurons,
                 weight_regularizer_l1=0, weight_regularizer_l2=0,
                 bias_regularizer_l1=0, bias_regularizer_l2=0):
        # Initialize weights and biases
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))
        # Set regularization strength
        self.weight_regularizer_l1 = weight_regularizer_l1
        self.weight_regularizer_l2 = weight_regularizer_l2
        self.bias_regularizer_l1 = bias_regularizer_l1
        self.bias_regularizer_l2 = bias_regularizer_l2
    # Forward pass
    def forward(self, inputs):
        # Remember input values
        self.inputs = inputs
        # Calculate output values from inputs, weights and biases
        self.output = np.dot(inputs, self.weights) + self.biases
    # Backward pass
    def backward(self, dvalues):
        # Gradients on parameters
        self.dweights = np.dot(self.inputs.T, dvalues)
        self.dbiases = np.sum(dvalues, axis=0, keepdims=True)
        # Gradients on regularization
        # L1 on weights
        if self.weight_regularizer_l1 > 0:
            dL1 = np.ones_like(self.weights)
            dL1[self.weights < 0] = -1
            self.dweights += self.weight_regularizer_l1 * dL1
        # L2 on weights
        if self.weight_regularizer_l2 > 0:
            self.dweights += 2 * self.weight_regularizer_l2 * self.weights
        # L1 on biases
        if self.bias_regularizer_l1 > 0:
            dL1 = np.ones_like(self.biases)
            dL1[self.biases < 0] = -1
            self.dbiases += self.bias_regularizer_l1 * dL1
        # L2 on biases
        if self.bias_regularizer_l2 > 0:
            self.dbiases += 2 * self.bias_regularizer_l2 * self.biases
        # Gradient on values
        self.dinputs = np.dot(dvalues, self.weights.T)

# ReLU activation
class Activation_ReLU:
    # Forward pass
    def forward(self, inputs):
        # Remember input values
        self.inputs = inputs
        # Calculate output values from inputs
        self.output = np.maximum(0, inputs)
    # Backward pass
    def backward(self, dvalues):
        # Since we need to modify the original variable,
        # make a copy of the values first
        self.dinputs = dvalues.copy()
        # Zero gradient where input values were negative
        self.dinputs[self.inputs <= 0] = 0
# Softmax activation
class Activation_Softmax:
# Forward pass
def forward(self, inputs):
# Remember input values
self.inputs = inputs
# Get unnormalized probabilities
exp_values = np.exp(inputs - np.max(inputs, axis=1,
keepdims=True))
# Normalize them for each sample
probabilities = exp_values / np.sum(exp_values, axis=1,
keepdims=True)
self.output = probabilities
# Backward pass
def backward(self, dvalues):
# Create uninitialized array
self.dinputs = np.empty_like(dvalues)
# Enumerate outputs and gradients
for index, (single_output, single_dvalues) in \
enumerate(zip(self.output, dvalues)):
# Flatten output array
single_output = single_output.reshape(-1, 1)
# Calculate Jacobian matrix of the output
jacobian_matrix = np.diagflat(single_output) - \
np.dot(single_output, single_output.T)
# Calculate sample-wise gradient
# and add it to the array of sample gradients
self.dinputs[index] = np.dot(jacobian_matrix,
single_dvalues)
def predictions(self, outputs):
return np.argmax(outputs, axis=1)
# Sigmoid activation
class Activation_Sigmoid:
# Forward pass
def forward(self, inputs):
# Save input and calculate/save output
# of the sigmoid function
self.inputs = inputs
self.output = 1 / (1 + np.exp(-inputs))
# Backward pass
def backward(self, dvalues):
# Derivative - calculates from output of the sigmoid function
self.dinputs = dvalues * (1 - self.output) * self.output
# SGD optimizer
class Optimizer_SGD:
# Initialize optimizer - set settings,
# learning rate of 1. is default for this optimizer
def __init__(self, learning_rate=1., decay=0., momentum=0.):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.momentum = momentum
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If we use momentum
if self.momentum:
# If layer does not contain momentum arrays, create them
# filled with zeros
if not hasattr(layer, 'weight_momentums'):
layer.weight_momentums = np.zeros_like(layer.weights)
# If there is no momentum array for weights
# The array doesn't exist for biases yet either.
layer.bias_momentums = np.zeros_like(layer.biases)
# Build weight updates with momentum - take previous
# updates multiplied by retain factor and update with
# current gradients
weight_updates = \
self.momentum * layer.weight_momentums - \
self.current_learning_rate * layer.dweights
layer.weight_momentums = weight_updates
# Build bias updates
bias_updates = \
self.momentum * layer.bias_momentums - \
self.current_learning_rate * layer.dbiases
layer.bias_momentums = bias_updates
# Vanilla SGD updates (as before momentum update)
else:
weight_updates = -self.current_learning_rate * \
layer.dweights
bias_updates = -self.current_learning_rate * \
layer.dbiases
# Update weights and biases using either
# vanilla or momentum updates
layer.weights += weight_updates
layer.biases += bias_updates
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# Adagrad optimizer
class Optimizer_Adagrad:
# Initialize optimizer - set settings
def __init__(self, learning_rate=1., decay=0., epsilon=1e-7):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.epsilon = epsilon
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If layer does not contain cache arrays,
# create them filled with zeros
if not hasattr(layer, 'weight_cache'):
layer.weight_cache = np.zeros_like(layer.weights)
layer.bias_cache = np.zeros_like(layer.biases)
# Update cache with squared current gradients
layer.weight_cache += layer.dweights ** 2
layer.bias_cache += layer.dbiases ** 2
# Vanilla SGD parameter update + normalization
# with square rooted cache
layer.weights += -self.current_learning_rate * \
layer.dweights / \
(np.sqrt(layer.weight_cache) + self.epsilon)
layer.biases += -self.current_learning_rate * \
layer.dbiases / \
(np.sqrt(layer.bias_cache) + self.epsilon)
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# RMSprop optimizer
class Optimizer_RMSprop:
# Initialize optimizer - set settings
def __init__(self, learning_rate=0.001, decay=0., epsilon=1e-7,
rho=0.9):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.epsilon = epsilon
self.rho = rho
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If layer does not contain cache arrays,
# create them filled with zeros
if not hasattr(layer, 'weight_cache'):
layer.weight_cache = np.zeros_like(layer.weights)
layer.bias_cache = np.zeros_like(layer.biases)
# Update cache with squared current gradients
layer.weight_cache = self.rho * layer.weight_cache + \
(1 - self.rho) * layer.dweights ** 2
layer.bias_cache = self.rho * layer.bias_cache + \
(1 - self.rho) * layer.dbiases ** 2
# Vanilla SGD parameter update + normalization
# with square rooted cache
layer.weights += -self.current_learning_rate * \
layer.dweights / \
(np.sqrt(layer.weight_cache) + self.epsilon)
layer.biases += -self.current_learning_rate * \
layer.dbiases / \
(np.sqrt(layer.bias_cache) + self.epsilon)
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# Adam optimizer
class Optimizer_Adam:
# Initialize optimizer - set settings
def __init__(self, learning_rate=0.02, decay=0., epsilon=1e-7,
beta_1=0.9, beta_2=0.999):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.epsilon = epsilon
self.beta_1 = beta_1
self.beta_2 = beta_2
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If layer does not contain cache arrays,
# create them filled with zeros
if not hasattr(layer, 'weight_cache'):
layer.weight_momentums = np.zeros_like(layer.weights)
layer.weight_cache = np.zeros_like(layer.weights)
layer.bias_momentums = np.zeros_like(layer.biases)
layer.bias_cache = np.zeros_like(layer.biases)
# Update momentum with current gradients
layer.weight_momentums = self.beta_1 * \
layer.weight_momentums + \
(1 - self.beta_1) * layer.dweights
layer.bias_momentums = self.beta_1 * \
layer.bias_momentums + \
(1 - self.beta_1) * layer.dbiases
# Get corrected momentum
# self.iteration is 0 at first pass
# and we need to start with 1 here
weight_momentums_corrected = layer.weight_momentums / \
(1 - self.beta_1 ** (self.iterations + 1))
bias_momentums_corrected = layer.bias_momentums / \
(1 - self.beta_1 ** (self.iterations + 1))
# Update cache with squared current gradients
layer.weight_cache = self.beta_2 * layer.weight_cache + \
(1 - self.beta_2) * layer.dweights ** 2
layer.bias_cache = self.beta_2 * layer.bias_cache + \
(1 - self.beta_2) * layer.dbiases ** 2
# Get corrected cache
weight_cache_corrected = layer.weight_cache / \
(1 - self.beta_2 ** (self.iterations + 1))
bias_cache_corrected = layer.bias_cache / \
(1 - self.beta_2 ** (self.iterations + 1))
# Vanilla SGD parameter update + normalization
# with square rooted cache
layer.weights += -self.current_learning_rate * \
weight_momentums_corrected / \
(np.sqrt(weight_cache_corrected) +
self.epsilon)
layer.biases += -self.current_learning_rate * \
bias_momentums_corrected / \
(np.sqrt(bias_cache_corrected) +
self.epsilon)
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# Common loss class
class Loss:
# Regularization loss calculation
def regularization_loss(self, layer):
# 0 by default
regularization_loss = 0
# L1 regularization - weights
# calculate only when factor greater than 0
if layer.weight_regularizer_l1 > 0:
regularization_loss += layer.weight_regularizer_l1 * \
np.sum(np.abs(layer.weights))
# L2 regularization - weights
if layer.weight_regularizer_l2 > 0:
regularization_loss += layer.weight_regularizer_l2 * \
np.sum(layer.weights *
layer.weights)
# L1 regularization - biases
# calculate only when factor greater than 0
if layer.bias_regularizer_l1 > 0:
regularization_loss += layer.bias_regularizer_l1 * \
np.sum(np.abs(layer.biases))
# L2 regularization - biases
if layer.bias_regularizer_l2 > 0:
regularization_loss += layer.bias_regularizer_l2 * \
np.sum(layer.biases *
layer.biases)
return regularization_loss
# Set/remember trainable layers
def remember_trainable_layers(self, trainable_layers):
self.trainable_layers = trainable_layers
# Calculates the data and regularization losses
# given model output and ground truth values
def calculate(self, output, y, *, include_regularization=False):
# Calculate sample losses
sample_losses = self.forward(output, y)
# Calculate mean loss
data_loss = np.mean(sample_losses)
# Return loss
return data_loss
# Calculates accumulated loss
def calculate_accumulated(self, *, include_regularization=False):
# Calculate mean loss
data_loss = self.accumulated_sum / self.accumulated_count
# If just data loss - return it
if not include_regularization:
return data_loss
# Return the data and regularization losses
return data_loss, self.regularization_loss()
# Reset variables for accumulated loss
def new_pass(self):
self.accumulated_sum = 0
self.accumulated_count = 0
# Cross-entropy loss
class Loss_CategoricalCrossentropy(Loss):
# Forward pass
def forward(self, y_pred, y_true):
# Number of samples in a batch
samples = len(y_pred)
# Clip data to prevent division by 0
# Clip both sides to not drag mean towards any value
y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
# Probabilities for target values -
# only if categorical labels
if len(y_true.shape) == 1:
correct_confidences = y_pred_clipped[
range(samples),
y_true
]
# Mask values - only for one-hot encoded labels
elif len(y_true.shape) == 2:
correct_confidences = np.sum(
y_pred_clipped * y_true,
axis=1
)
# Losses
negative_log_likelihoods = -np.log(correct_confidences)
return negative_log_likelihoods
# Backward pass
def backward(self, dvalues, y_true):
# Number of samples
samples = len(dvalues)
# Number of labels in every sample
# We'll use the first sample to count them
labels = len(dvalues[0])
# If labels are sparse, turn them into one-hot vector
if len(y_true.shape) == 1:
y_true = np.eye(labels)[y_true]
# Calculate gradient
self.dinputs = -y_true / dvalues
# Normalize gradient
self.dinputs = self.dinputs / samples
# Softmax classifier - combined Softmax activation
# and cross-entropy loss for faster backward step
class Activation_Softmax_Loss_CategoricalCrossentropy():
# Creates activation and loss function objects
def __init__(self):
self.activation = Activation_Softmax()
self.loss = Loss_CategoricalCrossentropy()
# Forward pass
def forward(self, inputs, y_true):
# Output layer's activation function
self.activation.forward(inputs)
# Set the output
self.output = self.activation.output
# Calculate and return loss value
return self.loss.calculate(self.output, y_true)
# Backward pass
def backward(self, dvalues, y_true):
# Number of samples
samples = len(dvalues)
# If labels are one-hot encoded,
# turn them into discrete values
if len(y_true.shape) == 2:
y_true = np.argmax(y_true, axis=1)
# Copy so we can safely modify
self.dinputs = dvalues.copy()
# Calculate gradient
self.dinputs[range(samples), y_true] -= 1
# Normalize gradient
self.dinputs = self.dinputs / samples
# Binary cross-entropy loss
class Loss_BinaryCrossentropy(Loss):
# Forward pass
def forward(self, y_pred, y_true):
# Clip data to prevent division by 0
# Clip both sides to not drag mean towards any value
y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
# Calculate sample-wise loss
sample_losses = -(y_true * np.log(y_pred_clipped) +
(1 - y_true) * np.log(1 - y_pred_clipped))
sample_losses = np.mean(sample_losses, axis=-1)
# Return losses
return sample_losses
# Backward pass
def backward(self, dvalues, y_true):
# Number of samples
samples = len(dvalues)
# Number of outputs in every sample
# We'll use the first sample to count them
outputs = len(dvalues[0])
# Clip data to prevent division by 0
# Clip both sides to not drag mean towards any value
clipped_dvalues = np.clip(dvalues, 1e-7, 1 - 1e-7)
# Calculate gradient
self.dinputs = -(y_true / clipped_dvalues -
(1 - y_true) / (1 - clipped_dvalues)) / outputs
# Normalize gradient
self.dinputs = self.dinputs / samples
# Common accuracy class
class Accuracy:
# Calculates an accuracy
# given predictions and ground truth values
def calculate(self, predictions, y):
# Get comparison results
comparisons = self.compare(predictions, y)
# Calculate an accuracy
accuracy = np.mean(comparisons)
# Add accumulated sum of matching values and sample count
# Return accuracy
return accuracy
# Calculates accumulated accuracy
def calculate_accumulated(self):
# Calculate an accuracy
accuracy = self.accumulated_sum / self.accumulated_count
# Return the data and regularization losses
return accuracy
# Reset variables for accumulated accuracy
def new_pass(self):
self.accumulated_sum = 0
self.accumulated_count = 0
# Accuracy calculation for classification model
class Accuracy_Categorical(Accuracy):
def __init__(self, *, binary=False):
# Binary mode?
self.binary = binary
# No initialization is needed
def init(self, y):
pass
# Compares predictions to the ground truth values
def compare(self, predictions, y):
if not self.binary and len(y.shape) == 2:
y = np.argmax(y, axis=1)
return predictions == y
# Accuracy calculation for regression model
class Accuracy_Regression(Accuracy):
def __init__(self):
# Create precision property
self.precision = None
# Calculates precision value
# based on passed-in ground truth values
def init(self, y, reinit=False):
if self.precision is None or reinit:
self.precision = np.std(y) / 250
# Compares predictions to the ground truth values
def compare(self, predictions, y):
return np.absolute(predictions - y) < self.precision
class model:
    def __init__(self):
        pass
    def predict(self, classes, samples):
        # NOTE: relies on spiral_data (e.g. from the nnfs package) and on the
        # two-layer dense1/dense2 globals from a binary-classification setup;
        # it is unused in the rest of this notebook.
        self.classes = classes
        self.samples = samples
        self.X, self.y = spiral_data(samples=self.samples, classes=self.classes)
        dense1.forward(self.X)
        activation1.forward(dense1.output)
        dense2.forward(activation1.output)
        activation2.forward(dense2.output)
        # Calculate the data loss
        self.loss = loss_function.calculate(activation2.output, self.y)
        self.predictions = (activation2.output > 0.5) * 1
        self.accuracy = np.mean(self.predictions == self.y)
        print(f'Accuracy: {self.accuracy}')
```
# Old Dropout Layer
```
class Layer_Dropout:
    # Init
    def __init__(self, rate):
        # Store the inverted rate: e.g. a dropout rate of 0.1
        # means a keep (success) rate of 0.9
        self.rate = 1 - rate
    # Forward pass
    def forward(self, inputs):
        # Save input values
        self.inputs = inputs
        # Generate and save scaled mask
        self.binary_mask = np.random.binomial(1, self.rate,
                                              size=inputs.shape) / self.rate
        # Apply mask to output values
        self.output = inputs * self.binary_mask
    # Backward pass
    def backward(self, dvalues):
        # Gradient on values
        self.dinputs = dvalues * self.binary_mask
```
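Inverted dropout divides the surviving activations by the keep rate so that the expected value of each unit's output is unchanged, which is why inference needs no extra rescaling. A standalone sketch (not part of the original notebook) demonstrating this:

```python
import numpy as np

rng = np.random.default_rng(0)
activations = np.ones((10000, 64))

keep_rate = 0.8  # i.e. dropout rate 0.2
mask = rng.binomial(1, keep_rate, size=activations.shape) / keep_rate
dropped = activations * mask

# Each unit is zeroed with probability 0.2 and scaled by 1/0.8 otherwise,
# so E[dropped] = 0.8 * (1/0.8) * 1 = 1: the mean is preserved.
print(dropped.mean())  # close to 1.0
```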
Initializing Caches
```
loss_cache = []
val_loss_cache = []
acc_cache = []
val_acc_cache = []
lr_cache = []
epoch_cache = []
test_acc_cache = []
test_loss_cache = []
max_val_accuracyint = 0
```
Initializing Summary List
```
summary = []
```
# Loading Data
Visualizing Data
```
#(X, y), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# load dataset
(X, y), (X_test, y_test) = mnist.load_data()
# Label index to label name relation
number_mnist_labels = {
0: '0',
1: '1',
2: '2',
3: '3',
4: '4',
5: '5',
6: '6',
7: '7',
8: '8',
9: '9'
}
# Shuffle the training dataset
keys = np.array(range(X.shape[0]))
np.random.shuffle(keys)
X = X[keys]
y = y[keys]
X = X[:8000,:,:]
X_test = X_test[:1600,:,:]
y = y[:8000]
y_test = y_test[:1600]
# Scale and reshape samples
X = (X.reshape(X.shape[0], -1).astype(np.float32) - 127.5) / 127.5
X_test = (X_test.reshape(X_test.shape[0], -1).astype(np.float32) - 127.5) / 127.5
print(X.shape)
print(y.shape)
print(X_test.shape)
print(y_test.shape)
```
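The `(x - 127.5) / 127.5` transform above maps raw pixel intensities from [0, 255] linearly onto [-1, 1]. A quick standalone check:

```python
import numpy as np

pixels = np.array([0, 127.5, 255], dtype=np.float32)
scaled = (pixels - 127.5) / 127.5
print(scaled)  # [-1.  0.  1.]
```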
Sorting Training Data
```
idx = np.argsort(y)
X_sorted = X[idx]
y_sorted = y[idx]
sorted_x = {}
sorted_y = {}
for classes in range(len(set(y))):
    sorted_x["X_{0}".format(classes)] = X[y == classes]
    sorted_y["y_{0}".format(classes)] = y[y == classes]
for sorted_lists in sorted_x:
    print(f'Number of Samples for {sorted_lists}: {sorted_x[sorted_lists].shape[0]}')
```
Sorting Testing Data
```
idx = np.argsort(y_test)
X_test_sorted = X_test[idx]
y_test_sorted = y_test[idx]
class_list = []
sorted_x_test = {}
sorted_y_test = {}
for classes in range(len(set(y))):
    sorted_x_test["X_test_{0}".format(classes)] = X_test[y_test == classes]
    sorted_y_test["y_test_{0}".format(classes)] = y_test[y_test == classes]
for sorted_lists in sorted_x_test:
    print(f'Number of Samples for {sorted_lists}: {sorted_x_test[sorted_lists].shape[0]}')
    class_list.append(sorted_x_test[sorted_lists].shape[0])
print(f'Found {X.shape[0]} images belonging to {len(set(y))} unique classes')
```
# Initializing Layers
```
# Create Dense layer with 784 input features (one per pixel) and 128 output values
dense1 = Layer_Dense(X.shape[1], 128, weight_regularizer_l2=5e-4,
                     bias_regularizer_l2=5e-4)
activation1 = Activation_ReLU()
dropout1 = Layer_Dropout(0.2)
dense2 = Layer_Dense(128, 128)
activation2 = Activation_ReLU()
dense3 = Layer_Dense(128,128)
activation3 = Activation_ReLU()
dense4 = Layer_Dense(128,len(set(y)))
activation4 = Activation_Softmax()
loss_function = Loss_CategoricalCrossentropy()
softmax_classifier_output = \
Activation_Softmax_Loss_CategoricalCrossentropy()
# Create optimizer
optimizer = Optimizer_Adam(decay=5e-7,learning_rate=0.005)
#optimizer = Optimizer_SGD(learning_rate=0.01)
accuracy = Accuracy_Categorical()
accuracy.init(y)
```
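For reference, the four dense layers above form a 784→128→128→128→10 stack, each holding a weight matrix plus a bias vector. A small standalone sketch to tally the trainable parameters (layer sizes taken from the cell above):

```python
# (n_inputs, n_neurons) for dense1..dense4, matching the cell above
layer_sizes = [(784, 128), (128, 128), (128, 128), (128, 10)]

# Each layer contributes n_in * n_out weights plus n_out biases
total = sum(n_in * n_out + n_out for n_in, n_out in layer_sizes)
print(total)  # 134794
```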
# Training Loop
```
epochs = 178
for epoch in range(epochs + 1):
    dense1.forward(X)
    activation1.forward(dense1.output)
    dropout1.forward(activation1.output)
    dense2.forward(dropout1.output)
    activation2.forward(dense2.output)
    dense3.forward(activation2.output)
    activation3.forward(dense3.output)
    dense4.forward(activation3.output)
    activation4.forward(dense4.output)
    # Calculate the data loss
    data_loss = loss_function.calculate(activation4.output, y)
    regularization_loss = \
        loss_function.regularization_loss(dense1) + \
        loss_function.regularization_loss(dense2) + \
        loss_function.regularization_loss(dense3) + \
        loss_function.regularization_loss(dense4)
    loss = data_loss + regularization_loss
    # Accuracy
    predictions = activation4.predictions(activation4.output)
    train_accuracy = accuracy.calculate(predictions, y)
    # Backward pass
    softmax_classifier_output.backward(activation4.output, y)
    activation4.backward(softmax_classifier_output.dinputs)
    dense4.backward(activation4.dinputs)
    activation3.backward(dense4.dinputs)
    dense3.backward(activation3.dinputs)
    activation2.backward(dense3.dinputs)
    dense2.backward(activation2.dinputs)
    dropout1.backward(dense2.dinputs)
    activation1.backward(dropout1.dinputs)
    dense1.backward(activation1.dinputs)
    # Update weights and biases
    optimizer.pre_update_params()
    optimizer.update_params(dense1)
    optimizer.update_params(dense2)
    optimizer.update_params(dense3)
    optimizer.update_params(dense4)
    optimizer.post_update_params()
    # Validation (note: dropout is skipped at inference time)
    dense1.forward(X_test)
    activation1.forward(dense1.output)
    dense2.forward(activation1.output)
    dense1_outputs = dense1.output
    meanarray = np.mean(dense1.output, axis=0)
    cached_val_inputs = activation1.output
    trainout = meanarray
    activation2.forward(dense2.output)
    dense3.forward(activation2.output)
    activation3.forward(dense3.output)
    dense4.forward(activation3.output)
    activation4.forward(dense4.output)
    # Calculate the data loss
    valloss = loss_function.calculate(activation4.output, y_test)
    predictions = activation4.predictions(activation4.output)
    valaccuracy = accuracy.calculate(predictions, y_test)
    # Update history caches
    loss_cache.append(loss)
    val_loss_cache.append(valloss)
    acc_cache.append(train_accuracy)
    val_acc_cache.append(valaccuracy)
    lr_cache.append(optimizer.current_learning_rate)
    epoch_cache.append(epoch)
    # Summary milestones
    if valaccuracy >= .8 and len(summary) == 0:
        summary.append(f'Model hit 80% validation accuracy in {epoch} epochs')
    if valaccuracy >= .85 and len(summary) == 1:
        summary.append(f'Model hit 85% validation accuracy in {epoch} epochs')
    if valaccuracy >= .9 and len(summary) == 2:
        summary.append(f'Model hit 90% validation accuracy in {epoch} epochs')
    if valaccuracy >= .95 and len(summary) == 3:
        summary.append(f'Model hit 95% validation accuracy in {epoch} epochs')
    if valaccuracy >= .975 and len(summary) == 4:
        summary.append(f'Model hit 97.5% validation accuracy in {epoch} epochs')
    if valaccuracy >= 1 and len(summary) == 5:
        summary.append(f'Model hit 100% validation accuracy in {epoch} epochs')
    # Track the best validation accuracy; report it on the final epoch
    if valaccuracy > max_val_accuracyint:
        max_val_accuracyint = valaccuracy
        max_val_accuracy = f'Max accuracy was {valaccuracy * 100}% at epoch {epoch}.'
    if epoch == epochs:
        summary.append(max_val_accuracy)
    if not epoch % 1:
        print(f'epoch: {epoch}, ' +
              f'acc: {train_accuracy:.3f}, ' +
              f'loss: {loss:.3f} (' +
              f'data_loss: {data_loss:.3f}, ' +
              f'reg_loss: {regularization_loss:.3f}), ' +
              f'lr: {optimizer.current_learning_rate:.9f} ' +
              f'validation, acc: {valaccuracy:.3f}, loss: {valloss:.3f} ')
```
# Summary
```
print(np.mean(acc_cache))
for milestone in summary:
    print(milestone)
```
# Testing
```
accuracy = Accuracy_Categorical()
accuracy.init(y_test)
dense1.forward(X_test)
activation1.forward(dense1.output)
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, y_test)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, y_test)
print(f'Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
training_diff = []
testing_diff = []
combined_diff = []
```
Individual Training Classes
```
accuracy = Accuracy_Categorical()
for classes, (X_sorted_lists, y_sorted_lists) in enumerate(zip(sorted_x, sorted_y)):
    accuracy = Accuracy_Categorical()
    y = sorted_y[y_sorted_lists]
    X = sorted_x[X_sorted_lists]
    accuracy.init(y)
    dense1.forward(X)
    activation1.forward(dense1.output)
    train_train_mean = activation1.output
    dense2.forward(activation1.output)
    activation2.forward(dense2.output)
    dense3.forward(activation2.output)
    activation3.forward(dense3.output)
    dense4.forward(activation3.output)
    activation4.forward(dense4.output)
    # Calculate the data loss
    loss = loss_function.calculate(activation4.output, y)
    predictions = activation4.predictions(activation4.output)
    testaccuracy = accuracy.calculate(predictions, y)
    print(f'{number_mnist_labels[classes]} Train Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')

accuracy = Accuracy_Categorical()
for classes, (X_sorted_lists, y_sorted_lists) in enumerate(zip(sorted_x_test, sorted_y_test)):
    accuracy.init(y_sorted_lists)
    dense1.forward(sorted_x_test[X_sorted_lists])
    activation1.forward(dense1.output)
    testmean = np.mean(activation1.output, axis=0)
    testing_diff.append(testmean)
    dense2.forward(activation1.output)
    activation2.forward(dense2.output)
    dense3.forward(activation2.output)
    activation3.forward(dense3.output)
    dense4.forward(activation3.output)
    activation4.forward(dense4.output)
    # Calculate the data loss
    loss = loss_function.calculate(activation4.output, sorted_y_test[y_sorted_lists])
    predictions = activation4.predictions(activation4.output)
    testaccuracy = accuracy.calculate(predictions, sorted_y_test[y_sorted_lists])
    print(f'{number_mnist_labels[classes]} Test Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
```
# Full MNIST Test
Training data
```
(input, label), (X_val, y_val) = mnist.load_data()
# Label index to label name relation
number_mnist_labels = {
0: '0',
1: '1',
2: '2',
3: '3',
4: '4',
5: '5',
6: '6',
7: '7',
8: '8',
9: '9'
}
# Shuffle the training dataset
keys = np.array(range(input.shape[0]))
np.random.shuffle(keys)
input = input[keys]
label = label[keys]
# Scale and reshape samples
input = (input.reshape(input.shape[0], -1).astype(np.float32) - 127.5) / 127.5
X_val = (X_val.reshape(X_val.shape[0], -1).astype(np.float32) -
127.5) / 127.5
accuracy = Accuracy_Categorical()
accuracy.init(label)
dense1.forward(input)
activation1.forward(dense1.output)
train_train_mean = activation1.output
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, label)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, label)
print(f'Full Training Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
```
Testing data
```
accuracy = Accuracy_Categorical()
accuracy.init(y_val)
dense1.forward(X_val)
activation1.forward(dense1.output)
train_train_mean = activation1.output
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, y_val)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, y_val)
print(f'Full Testing Accuracy: {testaccuracy:.5f}, loss: {loss:.3f}')
predicted_list = []
true_list = []
for sample in range(len(X_val)):
    predicted_list.append(np.where(activation4.output[sample] == np.amax(activation4.output[sample]))[0][0])
    true_list.append(y_val[sample])
from sklearn import metrics
import seaborn as sn
import pandas as pd
array = metrics.confusion_matrix(true_list, predicted_list, labels=[0,1,2,3,4,5,6,7,8,9])
df_cm = pd.DataFrame(array, range(len(set(true_list))), range(len(set(true_list))))
df_cm.round(9)
plt.figure(figsize=(10,7))
sn.set(font_scale=1.2) # for label size
sn.heatmap(df_cm, annot=True, annot_kws={"size": 12}, fmt='g') # font size
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
# Printing the precision and recall, among other metrics
print(metrics.classification_report(true_list, predicted_list, labels=[0,1,2,3,4,5,6,7,8,9]))
```
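`metrics.confusion_matrix` tallies (true, predicted) label pairs into a K×K grid, with rows as true labels and columns as predictions. The same count can be reproduced in plain NumPy (an illustrative sketch, not the sklearn implementation):

```python
import numpy as np

def confusion_matrix(true, pred, n_classes):
    """cm[i, j] = number of samples with true label i predicted as j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], n_classes=3)
print(cm)
# [[1 0 0]
#  [0 1 1]
#  [0 0 1]]
```

The diagonal holds the correct predictions, so per-class accuracy is the diagonal entry divided by its row sum.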
Change the index below to inspect the model's confidence on different test samples. Index values 0-1600 were referenced during training; anything past that was never seen during training. The lowest confidence occurs at index 5046 when training for 178 epochs with the NumPy seed set to 22.
```
index = 5046
print(f'{(activation4.output[index][np.where(activation4.output[index] == np.amax(activation4.output[index]))][0]*100):.3f}% Confident True is {number_mnist_labels[np.where(activation4.output[index] == np.amax(activation4.output[index]))[0][0]]}. True is actually {number_mnist_labels[y_val[index]]}')
X_val.resize(X_val.shape[0],28,28)
image = X_val[index]
fig = plt.figure()
plt.grid(False)
plt.title(f'{number_mnist_labels[y_val[index]]}')
plt.imshow(image, cmap='gray')
plt.show()
confidence_list = []
for index in range(10000):
confidence_list.append(activation4.output[index][np.where(activation4.output[index] == np.amax(activation4.output[index]))][0])
print(confidence_list.index(min(confidence_list)))
```
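As an aside, the `np.where(x == np.amax(x))[0][0]` pattern used above to recover the predicted class and its confidence is equivalent to `np.argmax`/`np.max`; a minimal vectorized sketch (the toy `probs` array is illustrative):

```python
import numpy as np

# each row is one sample's softmax output
probs = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2]])

predicted = np.argmax(probs, axis=1)   # predicted class per sample
confidence = np.max(probs, axis=1)     # confidence of that prediction
lowest = int(np.argmin(confidence))    # index of the least-confident sample
print(predicted, confidence, lowest)
```

This avoids the per-sample Python loop used to build `confidence_list`.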
Plotting Graphs
```
plt.rcParams['axes.grid'] = False
plt.plot(epoch_cache, val_loss_cache, label='Validation Loss')
plt.plot(epoch_cache, loss_cache, label='Training Loss')
plt.title('Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc = "upper right")
plt.show()
plt.plot(epoch_cache, val_acc_cache, label='Validation Accuracy')
plt.plot(epoch_cache, acc_cache, label='Training Accuracy')
plt.title('Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc = "upper right")
plt.show()
plt.plot(epoch_cache, lr_cache, label='LR')
plt.title('Learning Rate')
plt.xlabel('Epoch')
plt.ylabel('Learning Rate')
plt.show()
```
## Create examples of network output for figure panels
Created by: Yarden Cohen\
Date: June 2021\
This notebook allows loading specific saved TweetyNet models and examining their outputs.
Cells in this notebook will also hold code to create figure panels showing such network outputs.
```
# imports
from argparse import ArgumentParser
import configparser # used to load 'min_segment_dur.ini'
from collections import defaultdict
import json
from pathlib import Path
import joblib
import numpy as np
import pandas as pd
import pyprojroot
import torch
from tqdm import tqdm
from vak import config, io, models, transforms
from vak.datasets.vocal_dataset import VocalDataset
import vak.device
import vak.files
from vak.labeled_timebins import lbl_tb2segments, majority_vote_transform, lbl_tb_segment_inds_list, remove_short_segments
from vak.core.learncurve import train_dur_csv_paths as _train_dur_csv_paths
from vak.logging import log_or_print
def load_network_results(path_to_config=None,
spect_scaler_path = None,
csv_path=None,
labelmap_path=None,
checkpoint_path=None,
window_size = 370,
min_segment_dur = 0.01,
num_workers = 12,
device='cuda',
spect_key='s',
timebins_key='t',
freq_key = 'f',
test_all_files=False):
'''
Load a model from an EVAL config file or from explicitly specified parameters, and return its outputs
for a specified test set.
Setting 'test_all_files=True' will create a copy of the list in csv_path where all files are in the test set.
'''
if path_to_config:
# ---- get all the parameters from the config we need
cfg = config.parse.from_toml_path(path_to_config)
if cfg.eval:
model_config_map = config.models.map_from_path(path_to_config, cfg.eval.models)
csv_path = cfg.eval.csv_path
labelmap_path = cfg.eval.labelmap_path
checkpoint_path = cfg.eval.checkpoint_path
window_size = cfg.dataloader.window_size
num_workers = cfg.eval.num_workers
if spect_scaler_path:
spect_scaler_path = cfg.eval.spect_scaler_path
else:
print('config file must hold parameters in an [EVAL] section')
return None
else:
model_config_map = {'TweetyNet': {'loss': {}, 'metrics': {}, 'network': {}, 'optimizer': {'lr': 0.001}}}
with labelmap_path.open('r') as f:
labelmap = json.load(f)
if spect_scaler_path:
spect_standardizer = joblib.load(spect_scaler_path)
else:
spect_standardizer = None
# prepare evaluation data
csv_df = pd.read_csv(csv_path)
if test_all_files==True: # allow creating a new csv 'csv_path_test.csv' where all entries are 'test'
csv_df['split'] = 'test'
csv_df.to_csv(csv_path.parent.joinpath(csv_path.stem + '_test.csv'))
csv_path = csv_path.parent.joinpath(csv_path.stem + '_test.csv')
csv_df = csv_df[csv_df.split == 'test']
item_transform = transforms.get_defaults('eval',
spect_standardizer=spect_standardizer,
window_size=window_size,
return_padding_mask=True,
)
eval_dataset = VocalDataset.from_csv(csv_path=csv_path,
split='test',
labelmap=labelmap,
spect_key=spect_key,
timebins_key=timebins_key,
item_transform=item_transform,
)
eval_data = torch.utils.data.DataLoader(dataset=eval_dataset,
shuffle=False,
# batch size 1 because each spectrogram reshaped into a batch of windows
batch_size=1,
num_workers=num_workers)
input_shape = eval_dataset.shape
# if dataset returns spectrogram reshaped into windows,
# throw out the window dimension; just want to tell network (channels, height, width) shape
if len(input_shape) == 4:
input_shape = input_shape[1:]
models_map = models.from_model_config_map(
model_config_map,
num_classes=len(labelmap),
input_shape=input_shape
)
model_name = 'TweetyNet'
model = models_map['TweetyNet']
model.load(checkpoint_path)
#metrics = model.metrics # metric name -> callable map we use below in loop
if device is None:
device = vak.device.get_default_device()
pred_dict = model.predict(pred_data=eval_data,
device=device)
annotation_dfs = [pd.DataFrame(eval_dataset.annots[file_number].seq.as_dict()) for file_number in range(len(csv_df))]
return csv_df, annotation_dfs, pred_dict, labelmap
from matplotlib import gridspec
import matplotlib.pyplot as plt
def create_panels(spect_path,
model_output,
annotation_df,
labelmap,
timebin_dur,
min_segment_dur = 0.01,
spect_key = 's',
timebins_key = 't',
freq_key = 'f',
time_window = [0.01,1.01],
freq_window = [500.0,4000.0],
figsize = (5,10)):
spect = vak.files.spect.load(spect_path)[spect_key]
model_output = np.squeeze(model_output.cpu().numpy())
model_output = np.transpose(model_output,(0,2,1))
m_shape = np.shape(model_output)
model_output = model_output.reshape(m_shape[0]*m_shape[1],m_shape[2])
t_vec = vak.files.spect.load(spect_path)[timebins_key] #remember to remove [0]
f_vec = vak.files.spect.load(spect_path)[freq_key]
extent = [np.min(t_vec),np.max(t_vec),np.min(f_vec),np.max(f_vec)]
fig = plt.figure(figsize=figsize)
fig.suptitle('Example file ' + spect_path)
gs = gridspec.GridSpec(7,1)
ax_labels = plt.subplot(gs[0])
ax_spect = plt.subplot(gs[1:3])
ax_model = plt.subplot(gs[3:7])
axs = [ax_labels, ax_spect, ax_model]
model_output = model_output[:len(t_vec)]
model_output_argmax = np.argmax(model_output,axis=1)
# Create a raw prediction
model_raw_pred_labels, model_raw_pred_onsets, model_raw_pred_offsets = lbl_tb2segments(model_output_argmax,
labelmap=labelmap,
t=t_vec,
min_segment_dur=None,
majority_vote=False)
# create a prediction with min syl dur and majority vote
segment_inds_list = lbl_tb_segment_inds_list(model_output_argmax, unlabeled_label=labelmap['unlabeled'])
y_pred_np, segment_inds_list = remove_short_segments(model_output_argmax,
segment_inds_list,
timebin_dur=timebin_dur,
min_segment_dur=min_segment_dur,
unlabeled_label=labelmap['unlabeled'])
y_pred_np = majority_vote_transform(y_pred_np,
segment_inds_list)
#y_pred = to_long_tensor(y_pred_np).to(device)
model_pro_pred_labels, model_pro_pred_onsets, model_pro_pred_offsets = lbl_tb2segments(y_pred_np,
labelmap=labelmap,
t=t_vec,
min_segment_dur=None,
majority_vote=False)
axs[0].set_xlim(time_window)
axs[0].set_ylim([0,1])
axs[1].imshow(np.flipud(spect),aspect='auto',extent=extent,cmap='gray_r',interpolation='nearest')
axs[1].set_xlim(time_window)
axs[1].set_ylim(freq_window)
axs[2].imshow(np.flipud(model_output.T),aspect='auto',extent=[extent[0],extent[1],0.0,np.shape(model_output)[1]],cmap='twilight_shifted',interpolation='nearest')
axs[2].set_xlim(time_window)
for i in range(len(annotation_df)):
t_label = (annotation_df.onsets_s[i]+annotation_df.offsets_s[i])/2
if ((t_label >= time_window[0]) & (t_label <= time_window[1])):
axs[0].text(t_label,0.25,str(labelmap[annotation_df.labels[i]]),horizontalalignment='center')
axs[0].hlines(0.25,annotation_df.onsets_s[i],annotation_df.offsets_s[i],color='red')
axs[1].vlines(annotation_df.onsets_s[i],freq_window[0],freq_window[1],color='red')
axs[1].vlines(annotation_df.offsets_s[i],freq_window[0],freq_window[1],color='red')
# add model predictions here
for i in range(len(model_raw_pred_labels)):
t_label = (model_raw_pred_onsets[i]+model_raw_pred_offsets[i])/2
if ((t_label >= time_window[0]) & (t_label <= time_window[1])):
axs[0].text(t_label,0.45,str(labelmap[model_raw_pred_labels[i]]),horizontalalignment='center')
axs[0].hlines(0.45,model_raw_pred_onsets[i],model_raw_pred_offsets[i],color='green')
axs[2].vlines(model_raw_pred_onsets[i],labelmap[model_raw_pred_labels[i]],labelmap[model_raw_pred_labels[i]]+1,color='black')
axs[2].vlines(model_raw_pred_offsets[i],labelmap[model_raw_pred_labels[i]],labelmap[model_raw_pred_labels[i]]+1,color='black')
axs[2].hlines(labelmap[model_raw_pred_labels[i]],model_raw_pred_onsets[i],model_raw_pred_offsets[i],color='green')
axs[2].hlines(labelmap[model_raw_pred_labels[i]]+1,model_raw_pred_onsets[i],model_raw_pred_offsets[i],color='green')
# add post-processed model predictions here
for i in range(len(model_pro_pred_labels)):
t_label = (model_pro_pred_onsets[i]+model_pro_pred_offsets[i])/2
if ((t_label >= time_window[0]) & (t_label <= time_window[1])):
axs[0].text(t_label,0.65,str(labelmap[model_pro_pred_labels[i]]),horizontalalignment='center')
axs[0].hlines(0.65,model_pro_pred_onsets[i],model_pro_pred_offsets[i],color='blue')
#timebin_indices = np.where((t_vec >= time_window[0]) & (t_vec <= time_window[1]))[0]
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[0].set_title('orig=red, raw_predict=green, processed_predict=blue')
axs[1].set_xticks([])
axs[1].set_ylabel('freq. (Hz)')
axs[2].set_xlabel('Time (sec)')
axs[2].set_ylabel('Label')
return fig,axs
# test
config_file_name = 'D:\\Users\\yarde\\github\\llb16_eval_first_submission_longtrain.toml'
csv_df, annotation_dfs, pred_dict, labelmap = load_network_results(path_to_config = config_file_name)
#model_output = pred_dict[spect_path]
#spect_paths = Path(np.array(csv_df[csv_df.split=='test'].spect_path)[file_number])
file_number = 1
spect_key = 's'
timebin_key = 't'
timebin_dur = io.dataframe.validate_and_get_timebin_dur(csv_df)
spect_path = Path(np.array(csv_df[csv_df.split=='test'].spect_path)[file_number])
spect = vak.files.spect.load(spect_path)[spect_key]
t = vak.files.spect.load(spect_path)[timebin_key]
annotation_df = annotation_dfs[file_number]
model_output = pred_dict[str(spect_path)]
print(max(t))
# llb16_file 0032
time_window = [4.5,6.5]
#time_window = [4.75,5.25]
freq_window = [500.0,12000.0]
min_segment_dur = 0.008
fig,axs = create_panels(str(spect_path),
model_output,
annotation_df,
labelmap,
timebin_dur,
min_segment_dur = min_segment_dur,
time_window = time_window,
freq_window = freq_window,
figsize = (10,10))
plt.show()
```
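The post-processing inside `create_panels` (drop short segments, then take a majority vote within each remaining segment) can be sketched standalone with plain NumPy; `majority_vote` here is a simplified stand-in for vak's `majority_vote_transform`, not its actual implementation:

```python
import numpy as np

def majority_vote(frame_labels, segments):
    """Assign each segment its most frequent frame label (simplified sketch)."""
    out = frame_labels.copy()
    for inds in segments:
        values, counts = np.unique(out[inds], return_counts=True)
        out[inds] = values[np.argmax(counts)]  # overwrite with the modal label
    return out

labels = np.array([1, 1, 2, 1, 0, 3, 3, 3])
segments = [np.arange(0, 4), np.arange(5, 8)]
smoothed = majority_vote(labels, segments)
print(smoothed)  # [1 1 1 1 0 3 3 3]
```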
# Export metadata to django fixture
```
import os, sys
import pandas as pd
import json
from datetime import datetime as dt
sys.path.append('../src')
import utils
import settings
def create_django_datetimestamp(dt_object=None):
if dt_object==None:
created_time = dt.now()
else:
created_time = dt_object
# for django, timefield must be in format YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ]
# e.g. "2020-05-26T11:40:56+01:00"
created_time = created_time.strftime('%Y-%m-%dT%H:%M:%S+01:00')
return created_time
def df_to_json_fixture(df,
app_name,
model_name,
file_name_modifier='',
output_folder=None,
use_df_index_as_pk=False,
pk_start_num=1000,
create_datetimefield_name=None,
created_by_field_name=None,
created_by_value=1):
"""
convert a dataframe to a Django fixture file to populate a database
each column becomes a field in the record
df,
app_name: app name in django,
model_name: model name in django
output_folder: destination folder to output files to
use_df_index_as_pk: if True df.index will become the primary key for records
no checks are performed
pk_start_num: if use_df_index_as_pk is False, primary keys will start at this
number
create_datetimefield_name: set to the name of the datetimefield for
recording when a record is created.
"""
model = "{}.{}".format(app_name, model_name)
if create_datetimefield_name:
created_time = create_django_datetimestamp()
df[create_datetimefield_name] = created_time
if created_by_field_name:
df[created_by_field_name] = created_by_value
fixture_lst = []
for i, row in df.reset_index().iterrows():
if use_df_index_as_pk==True:
pk = row['index']
else:
pk = i+pk_start_num
fields_dict = row.drop(['index']).to_dict()
record = {'model':model,
'pk':pk,
'fields': fields_dict}
fixture_lst.append(record)
fname = model_name+'{}.json'.format(file_name_modifier)
if output_folder==None:
output_folder = '../data/processed/fixtures'
if not os.path.exists(output_folder):
os.makedirs(output_folder)
fpath = os.path.join(output_folder, fname)
if os.path.exists(fpath):
raise Exception('did not save, file already exists: {}'.format(fpath))
with open(fpath, 'w') as f:
json.dump(fixture_lst,
f,
skipkeys=False,
sort_keys=False)
return fixture_lst
# list_of metadata files
df_list = []
metadata_dir = os.path.join(settings.BASE_DIR, 'data','interim','metadata')
fnames = [f for f in os.listdir(metadata_dir) if f.endswith('.csv')]
for fname in fnames:
fpath = os.path.join(metadata_dir,fname)
df = pd.read_csv(fpath, index_col='record_id')
df_list.append(df)
df = pd.concat(df_list)
df = df.sort_values(by='record_id')
df = df.reset_index()
fixture_dict = df_to_json_fixture(df,
'ImageSearch',
'ImageMetadata',
file_name_modifier='',
output_folder=None,
use_df_index_as_pk=False,
pk_start_num=1000,
create_datetimefield_name='created_date',
created_by_field_name=None,
created_by_value=1)
```
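A quick way to see the record shape `df_to_json_fixture` emits is to build one by hand from a toy DataFrame (the `'ImageSearch.imagemetadata'` model label below is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'title': ['a', 'b'], 'year': [2019, 2020]})

fixture = []
for i, row in df.reset_index().iterrows():
    fixture.append({'model': 'ImageSearch.imagemetadata',  # hypothetical app.model label
                    'pk': i + 1000,                        # mirrors pk_start_num
                    'fields': row.drop(['index']).to_dict()})
print(fixture[0])
```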
# Custom Interactivity
```
import param
import numpy as np
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
```
In previous notebooks we discovered how the ``DynamicMap`` class allows us to declare objects in a lazy way to enable exploratory analysis of large parameter spaces. In the [Responding to Events](./11-Responding_to_Events.ipynb) guide we learned how to interactively push updates to existing plots by declaring Streams on a DynamicMap. In this user guide we will extend the idea to so-called *linked* Streams, which allow complex interactions to be declared by specifying which events should be exposed when a plot is interacted with. By passing information about live interactions to a simple Python-based callback, you will be able to build richer, even more interactive visualizations that enable seamless data exploration.
Some of the possibilities this opens up include:
* Dynamically aggregating datasets of billions of datapoints depending on the plot axis ranges using the [datashader](./14-Large_Data.ipynb) library.
* Responding to ``Tap`` and ``DoubleTap`` events to reveal more information in subplots.
* Computing statistics in response to selections applied with box- and lasso-select tools.
Currently only the bokeh backend for HoloViews supports the linked streams system but the principles used should extend to any backend that can define callbacks that fire when a user zooms or pans or interacts with a plot.
<center><div class="alert alert-info" role="alert">To use and visualize <b>DynamicMap</b> or <b>Stream</b> objects you need to be running a live Jupyter server.<br>This user guide assumes that it will be run in a live notebook environment.<br>
When viewed statically, DynamicMaps will only show the first available Element.<br></div></center>
## Available Linked Streams
There are a huge number of ways one might want to interact with a plot. The HoloViews streams module aims to expose many of the most common interactions you might want to employ, while also supporting extensibility via custom linked Streams.
Here is the full list of linked Streams, all of which are descendants of the ``LinkedStream`` baseclass:
```
from holoviews import streams
listing = ', '.join(sorted([str(s.name) for s in param.descendents(streams.LinkedStream)]))
print('The linked stream classes supported by HoloViews are:\n\n{listing}'.format(listing=listing))
```
```
The linked stream classes supported by HoloViews are:
Bounds, BoundsX, BoundsY, DoubleTap, Draw, LinkedStream, MouseEnter, MouseLeave, PlotSize, PointerX, PointerXY, PointerY, PositionX, PositionXY, PositionY, RangeX, RangeXY, RangeY, Selection1D, SingleTap, Tap
```
As you can see, most of these events are about specific interactions with a plot such as the current axis ranges (the ``RangeX``, ``RangeY`` and ``RangeXY`` streams), the mouse pointer position (the ``PointerX``, ``PointerY`` and ``PointerXY`` streams), and click or tap positions (``Tap``, ``DoubleTap``). Additionally there are streams to access plotting selections made using box- and lasso-select tools (``Selection1D``), the plot size (``PlotSize``) and the ``Bounds`` of a selection.
Each of these linked Stream types has a corresponding backend specific ``Callback``, which defines which plot attributes or events to link the stream to and triggers events on the ``Stream`` in response to changes on the plot. Defining custom ``Stream`` and ``Callback`` types will be covered in future guides.
## Linking streams to plots
At the end of the [Responding to Events](./11-Responding_to_Events.ipynb) guide we discovered that streams have ``subscribers``, which allow user-defined callbacks to be run on events, but also allow HoloViews to install subscribers that let plots respond to Stream updates. Linked streams add another concept on top of ``subscribers``, namely the Stream ``source``.
The source of a linked stream defines which plot element to receive events from. Any plot containing the ``source`` object will be attached to the corresponding linked stream and will send event values in response to the appropriate interactions.
Let's start with a simple example. We will declare one of the linked Streams from above, the ``PointerXY`` stream. This stream sends the current mouse position in plot axes coordinates, which may be continuous or categorical. The first thing to note is that we haven't specified a ``source`` which means it uses the default value of ``None``.
```
pointer = streams.PointerXY()
print(pointer.source)
```
```
None
```
Before continuing, we can check the stream parameters that are made available to user callbacks from a given stream instance by looking at its contents:
```
print('The %s stream has contents %r' % (pointer, pointer.contents))
```
```
The PointerXY(x=None,y=None) stream has contents {'y': None, 'x': None}
```
#### Automatic linking
A stream instance is automatically linked to the first ``DynamicMap`` we pass it to, which we can confirm by inspecting the stream's ``source`` attribute after supplying it to a ``DynamicMap``:
```
pointer_dmap = hv.DynamicMap(lambda x, y: hv.Points([(x, y)]), streams=[pointer])
print(pointer.source is pointer_dmap)
```
```
True
```
The ``DynamicMap`` we defined above simply returns a ``Points`` object composed of a single point that marks the current ``x`` and ``y`` position supplied by our ``PointerXY`` stream. The stream is linked whenever this ``DynamicMap`` object is displayed as it is the stream source:
```
pointer_dmap(style={"Points": dict(size=10)})
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/point_hover.gif" width=300></center>
If you hover over the plot canvas above you can see that the point tracks the current mouse position. We can also inspect the last cursor position by examining the stream contents:
```
pointer.contents
```
```
{'x': 0.40575409375411886, 'y': 0.6441381051588625}
```
In the [Responding to Events](11-Responding_to_Events.ipynb) user guide, we introduced an integration example that would work more intuitively with linked streams. Here it is again with the ``limit`` value controlled by the ``PointerX`` linked stream:
```
%%opts Area (color='#fff8dc' line_width=2) Curve (color='black') VLine (color='red')
xs = np.linspace(-3, 3, 400)
def function(xs, time):
"Some time varying function"
return np.exp(np.sin(xs+np.pi/time))
def integral(limit, time):
limit = -3 if limit is None else np.clip(limit,-3,3)
curve = hv.Curve((xs, function(xs, time)))[limit:]
area = hv.Area ((xs, function(xs, time)))[:limit]
summed = area.dimension_values('y').sum() * 0.015 # Numeric approximation
return (area * curve * hv.VLine(limit) * hv.Text(limit + 0.8, 2.0, '%.2f' % summed))
hv.DynamicMap(integral, streams=[streams.Stream.define('Time', time=1.0)(),
streams.PointerX().rename(x='limit')])
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/area_hover.gif" width=300></center>
We only needed to import and use the ``PointerX`` stream and rename the ``x`` parameter that tracks the cursor position to 'limit' so that it maps to the corresponding argument. Otherwise, the example only required bokeh specific style options to match the matplotlib example as closely as possible.
#### Explicit linking
In the example above, we took advantage of the fact that a ``DynamicMap`` automatically becomes the stream source if a source isn't explicitly specified. If we want to link the stream instance to a different object we can specify our source explicitly. Here we will create a 2D ``Image`` of sine gratings, and then declare that this image is the ``source`` of the ``PointerXY`` stream. This pointer stream is then used to generate a single point that tracks the cursor when hovering over the image:
```
xvals = np.linspace(0,4,202)
ys,xs = np.meshgrid(xvals, -xvals[::-1])
img = hv.Image(np.sin(((ys)**3)*xs))
pointer = streams.PointerXY(x=0,y=0, source=img)
pointer_dmap = hv.DynamicMap(lambda x, y: hv.Points([(x, y)]), streams=[pointer])
```
Now if we display a ``Layout`` consisting of the ``Image`` acting as the source together with the ``DynamicMap``, the point shown on the right tracks the cursor position when hovering over the image on the left:
```
img + pointer_dmap(style={"Points": dict(size=10)})
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/raster_hover.gif" width=600></center>
This will even work across different cells. If we use this particular stream instance in another ``DynamicMap`` and display it, this new visualization will also be supplied with the cursor position when hovering over the image.
To illustrate this, we will now use the pointer ``x`` and ``y`` position to generate cross-sections of the image at the cursor position on the ``Image``, making use of the ``Image.sample`` method. Note the use of ``np.clip`` to make sure the cross-section is well defined when the cursor goes out of bounds:
```
%%opts Curve {+framewise}
hv.DynamicMap(lambda x, y: img.sample(y=np.clip(y,-.49,.49)), streams=[pointer]) +\
hv.DynamicMap(lambda x, y: img.sample(x=np.clip(x,-.49,.49)), streams=[pointer])
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/cross_section_hover.gif" width=600></center>
Now when you hover over the ``Image`` above, you will see the cross-sections update while the point position to the right of the ``Image`` simultaneously updates.
#### Unlinking objects
Sometimes we just want to display an object designated as a source without linking it to the stream. If the object is not a ``DynamicMap``, like the ``Image`` we designated as a ``source`` above, we can make a copy of the object using the ``clone`` method. We can do the same with ``DynamicMap`` though we just need to supply ``link_inputs=False`` as an extra argument.
Here we will create a ``DynamicMap`` that draws a cross-hair at the cursor position:
```
pointer = streams.PointerXY(x=0, y=0)
cross_dmap = hv.DynamicMap(lambda x, y: (hv.VLine(x) * hv.HLine(y)), streams=[pointer])
```
Now we will add two copies of the ``cross_dmap`` into a Layout but the subplot on the right will not be linking the inputs. Try hovering over the two subplots and observe what happens:
```
cross_dmap + cross_dmap.clone(link_inputs=False)
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/unlink.gif" width=600></center>
Notice how hovering over the left plot updates the crosshair position on both subplots, while hovering over the right subplot has no effect.
## Transient linked streams
In the basic [Responding to Events](11-Responding_to_Events.ipynb) user guide we saw that stream parameters can be updated and those values are then passed to the callback. This model works well for many different types of streams that have well-defined values at all times.
This approach is not suitable for certain events which only have a well defined value at a particular point in time. For instance, when you hover your mouse over a plot, the hover position always has a well-defined value but the click position is only defined when a click occurs (if it occurs).
This latter case is an example of what are called 'transient' streams. These streams are supplied new values only when they occur and fall back to a default value at all other times. This default value is typically ``None`` to indicate that the event is not occurring and therefore has no data.
Transient streams are particularly useful when you are subscribed to multiple streams, some of which are only occasionally triggered. A good example are the ``Tap`` and ``DoubleTap`` streams; while you sometimes just want to know the last tapped position, we can only tell the two events apart if their values are ``None`` when not active.
We'll start by declaring a ``SingleTap`` and a ``DoubleTap`` stream as ``transient``. Since both streams supply 'x' and 'y' parameters, we will rename the ``DoubleTap`` parameters to 'x2' and 'y2'.
```
tap = streams.SingleTap(transient=True)
double_tap = streams.DoubleTap(rename={'x': 'x2', 'y': 'y2'}, transient=True)
```
Next we define a list of taps we can append to, and a function that accumulates the tap and double tap coordinates along with the number of taps, returning a ``Points`` Element of the tap positions.
```
taps = []
def record_taps(x, y, x2, y2):
if None not in [x,y]:
taps.append((x, y, 1))
elif None not in [x2, y2]:
taps.append((x2, y2, 2))
return hv.Points(taps, vdims='Taps')
```
Finally we can create a ``DynamicMap`` from our callback and attach the streams. We also apply some styling so the points are colored depending on the number of taps.
```
%%opts Points [color_index='Taps' tools=['hover']] (size=10 cmap='Set1')
hv.DynamicMap(record_taps, streams=[tap, double_tap])
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/tap_record.gif" width=300></center>
Now try single- and double-tapping within the plot area; each time you tap, a new point is appended to the list and displayed. Single taps show up in red and double taps in grey. We can also inspect the list of taps directly:
```
taps
```
```
[(0.4395821339578692, 0.6807756323806448, 1),
(0.3583948374688684, 0.6073731430597871, 2),
(0.7327584823903722, 0.48095774478497655, 1),
(0.20053064985136673, 0.17103612320802172, 1),
(0.8590498324843735, 0.7337885413345976, 1),
(0.3358428106663682, 0.358620262583547, 2)]
```
# Creating config file names for t1s, masks
```
import numpy as np
import glob as gb
import random
```
### Making the list of t1s
```
paths = gb.glob('/home/despoB/cathwang/native/*/*Brain*nii.gz')
t1 = []
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0001/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0002/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0003/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0004/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0005/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0006/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0007/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0008/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0009/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0010/*")
t1 += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0011/*")
t1.remove('/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0004/c0004s0016t01')
t1.remove('/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0002/c0002s0009t01')
t1.remove('/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0005/c0005s0017t01')
t1.remove('/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0003/c0003s0011t01')
len(t1)
t1_template = t1
t1_normal_brains = []
for brain in t1_template:
t1_normal_brains += [brain + '/forks/Output_brain_ANTS/r_norm_OutputBrainExtractionBrain.nii.gz']
print(len(t1_normal_brains))
print(t1_normal_brains[:5])
```
### ROIs
```
t1_brain_roi = []
for brain in t1_template:
t1_brain_roi += [brain + '/forks/Output_brain_ANTS/bin_r_norm_OutputBrainExtractionBrain.nii.gz']
print(len(t1_brain_roi))
print(t1_brain_roi[:5])
```
### Making the list of brain masks
```
t1_brain_masks = []
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0001/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0002/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0003/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0004/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_1/c0005/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0006/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0007/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0008/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0009/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0010/*/bin_r_*")
t1_brain_masks += gb.glob("/Users/catherinewang/Desktop/despolab/deepmedic/atlas/native/native_2/c0011/*/bin_r_*")
print(len(t1_brain_masks))
print(t1_brain_masks[:5])
```
### Random selecting t1s and brain masks
85% train. 10% test. 5% validation.
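The 85/10/5 split can be done in one pass by shuffling the indices and slicing; a sketch with a fixed seed for reproducibility:

```python
import random

def split_indices(n, train_frac=0.85, test_frac=0.10, seed=22):
    """Shuffle 0..n-1 and cut into train / test / validation index lists."""
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    return (indices[:n_train],
            indices[n_train:n_train + n_test],
            indices[n_train + n_test:])

train, test, val = split_indices(300)
print(len(train), len(test), len(val))  # 255 30 15
```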
```
list(range(10))
index_train = random.sample(list(range(300)), 255)
index_train
# index_test
# for i in np.arange(300)
# if i is not in index_train
# index_test += []
index_test = [i for i in np.arange(300) if i not in index_train]
index_test
```
## Final lists
```
300*0.05
for i in index_train:
print(t1_normal_brains[i])
import configparser  # was 'ConfigParser' on Python 2
cp = configparser.ConfigParser()
cp.read("trainRoiMasks.cfg")
text_file = open("testRoiMasks.cfg", "r")
lines = text_file.read().split('\n')
lines[0][58:71]
for i in lines:
print('/global/scratch/catherinewang/hemi/' + str(i[58:71]) + '_lhemi.nii.gz')
# print(i)
len(lines)
lines
```
# OpenDartReader - Users Guide
<img width="40%" src="https://i.imgur.com/FMsL0id.png" >
`OpenDartReader` is an open-source library that makes it easy to use the "Open DART" API service of the Financial Supervisory Service's electronic disclosure system (DART).
#### 2020-2021 [FinanceData.KR](http://financedata.kr) | [facebook.com/financedata](http://facebook.com/financedata)
## OpenDartReader
`Open DART` is an API service offered by the electronic disclosure system run by the Financial Supervisory Service. It is an expanded reorganization of the previous "Open API" and "Disclosure Information Portal" services, and began (trial) service on 2020-01-21.
The `Open DART` API is a very well-made service. However, using the API directly requires extra work. For example, when querying information about a company, the `Open DART` API uses the corporation's unique code as assigned by the electronic disclosure system, so to look up a listed company by its stock ticker you first have to obtain that unique code each time. The data you receive comes back as JSON or XML, which in most cases is more convenient to use once converted into a pandas DataFrame.
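For illustration, this is roughly the manual step `OpenDartReader` automates; the field names follow the public Open DART documentation, but the response body here is a hard-coded stand-in rather than a live API call:

```python
import pandas as pd

# stand-in for the JSON body an Open DART list query returns
response_json = {
    'status': '000',
    'message': '정상',
    'list': [
        {'corp_code': '00126380', 'corp_name': '삼성전자',
         'report_nm': '사업보고서', 'rcept_dt': '20190401'},
    ],
}

# convert the 'list' payload into a DataFrame by hand
df = pd.DataFrame(response_json['list'])
print(df[['corp_code', 'report_nm']])
```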
`OpenDartReader`는 `Open DART`를 이런 내용을 일반화하고 좀 더 쉽게 `Open DART`를 사용하기 위한 파이썬 라이브러리 입니다.
## Installation
Install it as follows:
```bash
pip install opendartreader
```
If it is already installed and you need to upgrade:
```bash
pip install --upgrade opendartreader
```
## Quick Start
```python
import OpenDartReader
# ==== 0. Create the object ====
# Create the object (specify your API key)
api_key = 'd81e18ac719d1c1e4ec7899ef21a737ab6cbb4c7'
dart = OpenDartReader(api_key)
# ==== 1. Disclosure search ====
# Samsung Electronics filings for the single day 2019-07-01 (many date formats are accepted)
dart.list('005930', end='2019-7-1')
# All Samsung Electronics filings since listing (5,142+ items)
dart.list('005930', start='1900')
# All Samsung Electronics filings 2010-01-01 ~ 2019-12-31 (2,676 items)
dart.list('005930', start='2010-01-01', end='2019-12-31')
# All Samsung Electronics periodic reports since 1999-01-01
dart.list('005930', start='1999-01-01', kind='A', final=False)
# All Samsung Electronics periodic reports (final versions), 1999-2019
dart.list('005930', start='1999-01-01', end='2019-12-31', kind='A')
# All filings for every company on 2020-07-01
dart.list(end='20200701')
# All filings for every company, 2020-01-01 ~ 2020-01-10 (4,209 items)
dart.list(start='2020-01-01', end='2020-01-10')
# Same period, including amended filings (4,876 items)
dart.list(start='2020-01-01', end='2020-01-10', final=False)
# Periodic reports for every company from 2020-07-01 to today
dart.list(start='2020-07-01', kind='A')
# Periodic reports for every company, 2019-01-01 ~ 2019-03-31 (961 items)
dart.list(start='20190101', end='20190331', kind='A')
# Company profile
dart.company('005930')
# Profiles of all companies whose name contains '삼성전자' (Samsung Electronics)
dart.company_by_name('삼성전자')
# Full text of the Samsung Electronics annual report (FY2018)
xml_text = dart.document('20190401004781')
# ==== 2. Annual reports ====
# Samsung Electronics (005930), dividends, 2018
dart.report('005930', '배당', 2018)
# Seoul Semiconductor (046890), largest shareholders, 2018
dart.report('046890', '최대주주', 2018)
# Seoul Semiconductor (046890), executives, 2018
dart.report('046890', '임원', 2018)
# Samsung Biologics (207940), minority shareholders, 2019
dart.report('207940', '소액주주', '2019')
# ==== 3. Financial statements of listed companies ====
# Samsung Electronics 2018 financial statements
dart.finstate('삼성전자', 2018)  # annual report
# Samsung Electronics 2018 Q1 financial statements
dart.finstate('삼성전자', 2018, reprt_code='11013')
# Several companies at once
dart.finstate('00126380,00164779,00164742', 2018)
dart.finstate('005930, 000660, 005380', 2018)
dart.finstate('삼성전자, SK하이닉스, 현대자동차', 2018)
# Full financial statements of a single company (Samsung Electronics, 2018)
dart.finstate_all('005930', 2018)
# Save the raw XBRL file (Samsung Electronics 2018 annual report)
dart.finstate_xml('20190401004781', save_as='삼성전자_2018_사업보고서_XBRL.zip')
# XBRL standard taxonomy (account items)
dart.xbrl_taxonomy('BS1')
# ==== 4. Ownership disclosures ====
# Bulk-holding reports (ticker, company name, or corp code all work)
dart.major_shareholders('삼성전자')
# Executive/major-shareholder ownership reports (ticker, company name, or corp code all work)
dart.major_shareholders_exec('005930')
# ==== 5. Extensions ====
# All filings on a given date (with timestamps)
dart.list_date_ex('2020-01-03')
# Sub-document titles and URLs
rcp_no = '20190401004781'  # Samsung Electronics 2018 annual report
dart.sub_docs(rcp_no)
# Sorted so the best title matches come first
dart.sub_docs('20190401004781', match='사업의 내용')
# Attached-document titles and URLs
dart.attach_doc_list(rcp_no)
# Sorted so the best title matches come first
dart.attach_doc_list(rcp_no, match='감사보고서')
# Attached-file titles and URLs
dart.attach_file_list(rcp_no)
```
## 0. Create the OpenDartReader object
The first thing to do is create an `OpenDartReader` object with the API key you were issued.
You can get a key right away by registering on the DART [API key application page](https://opendart.fss.or.kr/uss/umt/EgovMberInsertView.do). The key is a 40-character string of letters and digits.
```
import OpenDartReader
api_key = 'd81e78aa719d1c1e4ec7867ef22a737ab6cbb4c7'
dart = OpenDartReader(api_key)
```
## 1. Disclosure search
### list() - searching filings (single company)
Searches the filings of a given company. You can restrict the period and the type of report.
```python
dart.list(corp, start=None, end=None, kind='', kind_detail='', final=True)
```
#### Specifying dates
`start` and `end` accept many date formats: '2020-07-01', '2020-7-1', '20200701', '1 july 2020', and 'JULY 1 2020' all work, as do datetime objects.
* If both start and end are given, the period from start to end is searched.
* If only start is given, the period runs from start to today.
* If only end is given, only that single day is searched.
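The library normalizes all of these spellings internally; the sketch below is only an illustration of how such normalization can be done with the standard library (it is not OpenDartReader's actual code):

```python
from datetime import datetime

def normalize_date(value):
    """Return a YYYYMMDD string from several accepted spellings (illustration only)."""
    if isinstance(value, datetime):
        return value.strftime("%Y%m%d")
    text = str(value).strip()
    # try progressively looser formats; strptime matches month names case-insensitively
    for fmt in ("%Y-%m-%d", "%Y%m%d", "%d %B %Y", "%B %d %Y", "%Y"):
        try:
            return datetime.strptime(text, fmt).strftime("%Y%m%d")
        except ValueError:
            pass
    raise ValueError("unrecognized date: %r" % (value,))

print(normalize_date("2020-7-1"))
print(normalize_date("1 july 2020"))
print(normalize_date("JULY 1 2020"))
```

A bare year such as `'1900'` falls through to the `"%Y"` format and is treated as January 1 of that year, which matches how `start='1900'` is used above.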
```
# Samsung Electronics filings for the single day 2019-07-01
dart.list('005930', end='2019-7-1')
# All Samsung Electronics filings since listing (5,142+ items)
dart.list('005930', start='1900')
# All Samsung Electronics filings 2010-01-01 ~ 2019-12-31 (2,676 items)
dart.list('005930', start='2010-01-01', end='2019-12-31')
# All Samsung Electronics periodic reports since 1999-01-01 (85 items)
dart.list('005930', start='1999-01-01', kind='A', final=False)
# All Samsung Electronics periodic reports (final versions), 1999-2019 (82 items)
dart.list('005930', start='1999-01-01', end='2019-12-31', kind='A')
```
### list() - searching filings (all companies)
If you do not specify a company, filings from all companies are searched (in this case the span between `start` and `end` may be at most 3 months). This is mainly used to fetch recent filings across the whole market.
```
# All filings for every company on 2020-07-01
dart.list(end='20200701')
# All filings for every company, 2020-01-01 ~ 2020-01-10
dart.list(start='2020-01-01', end='2020-01-10')
# Same period, including amended filings
dart.list(start='2020-01-01', end='2020-01-10', final=False)
# Periodic reports for every company from 2021-01-01 to today
dart.list(start='2021-01-01', kind='A')
# Periodic reports for every company, 2019-01-01 ~ 2019-03-31
dart.list(start='20190101', end='20190331', kind='A')
```
### company(), company_by_name() - company profiles
```python
# Read a company's profile
dart.company(corp)
# Search by name and return the profiles of every company whose name contains it
dart.company_by_name(name)
```
```
```
dart.company('005930')  # Samsung Electronics: ticker 005930, corp code 00126380
```
company_by_name() searches companies and returns the profiles of every company whose name contains the search string.
It is also a handy way to look up an individual company's corp_code (unique number) and stock_code (ticker).
```
# Profiles of companies whose name contains '삼성전자' (Samsung Electronics)
data = dart.company_by_name('삼성전자')
data[:2]
```
The returned profile data (a list of dicts) can be loaded into a DataFrame and trimmed to just the columns you need.
```
import pandas as pd
pd.DataFrame(data)[['stock_name' , 'stock_code' , 'ceo_nm' , 'corp_cls' , 'jurir_no' , 'bizr_no' , 'adres' ]]
```
### document() - original filing document
```
# Samsung Electronics annual report (FY2018)
# http://dart.fss.or.kr/dsaf001/main.do?rcpNo=20190401004781
xml_text = dart.document('20190401004781')  # annual report (FY2018)
xml_text[:2000]
```
### find_corp_code() - getting the corp code
Looks up the unique corporation code from a ticker or a company name.
The disclosure system identifies each company by this code; for unlisted companies in particular you must use it.
```
dart.find_corp_code('005930')
dart.find_corp_code('삼성전자')
```
### corp_codes - corp codes (property)
A property holding the corp code, company name, ticker, and related information for roughly 81,000 companies.
```
dart.corp_codes
```
## 2. Annual reports
### report() - key information from annual reports
Queries the main sections of an annual report.
```python
dart.report(corp, key_word, bsns_year, reprt_code='11011')
```
`key_word` may be one of: '증자' (capital increase), '배당' (dividends), '자기주식' (treasury stock), '최대주주' (largest shareholder), '최대주주변동' (changes in largest shareholder), '소액주주' (minority shareholders), '임원' (executives), '직원' (employees), '임원개인보수' (individual executive pay), '임원전체보수' (total executive pay), '개인별보수' (individual pay), '타법인출자' (equity investments in other companies).
`bsns_year` specifies the business year (string or integer).
For more detail on this function, see the [OpenDartReader - Reference Manual].
```
# Samsung Electronics (005930), dividends, 2018
dart.report('005930', '배당', 2018)
# Seoul Semiconductor (046890), largest shareholders, 2018
dart.report('046890', '최대주주', 2018)
# Seoul Semiconductor (046890), executives, 2018
dart.report('046890', '임원', 2018)
# Seoul Semiconductor (046890), employees, 2018
dart.report('046890', '직원', 2018)
```
## 3. Financial statements of listed companies
```
# Samsung Electronics 2018 financial statements
dart.finstate('삼성전자', 2018)  # annual report
# Samsung Electronics 2018 Q1 financial statements
dart.finstate('삼성전자', 2018, reprt_code='11013')  # Q1 2018 report
```
## 4. Ownership disclosures
* major_shareholders() - bulk-holding reports
* major_shareholders_exec() - executive/major-shareholder ownership reports
```
dart.major_shareholders('삼성전자')       # ticker, company name, or corp code all work
dart.major_shareholders_exec('삼성전자')  # ticker, company name, or corp code all work
```
## 5. Extensions
Useful extensions beyond what the Open DART API itself provides.
```
# All filings on a given date (with timestamps)
dart.list_date_ex('2020-01-03')
```
Fetch the lists of a report's sub-documents (sub_docs), attached documents (attach_doc_list), and attached files (attach_file_list).
```
rcp_no = '20190401004781'  # Samsung Electronics 2018 annual report
# Sub-document titles and URLs
dart.sub_docs(rcp_no)[:10]
# Attached-document titles and URLs
dart.attach_doc_list(rcp_no)
# Attached-file titles and URLs
dart.attach_file_list(rcp_no)
# Samsung Electronics 2018 annual report
attaches = dart.attach_file_list('20190401004781')
attaches
xls_url = attaches.loc[attaches['type']=='excel', 'url'].values[0]  # attached financial-statement Excel file
xls_url
dart.retrieve(xls_url, '삼성전자_2018.xls')
import pandas as pd
xl = pd.ExcelFile('삼성전자_2018.xls')
xl.sheet_names
pd.read_excel('삼성전자_2018.xls', sheet_name='연결 손익계산서', skiprows=6)
```
#### 2021 [FinanceData.KR](http://financedata.kr) | [facebook.com/financedata](http://facebook.com/financedata)
# Introduction to Functions
- [Download the lecture notes](https://philchodrow.github.io/PIC16A/content/functions/functions_1.ipynb).
**Functions** are one of the most important constructs in computer programming. A function is a single command which, when executed, performs some operations and may return a value. You've already encountered functions in PIC10A, where they may have looked something like this:
```cpp
// Filename: boldly.cpp
#include <iostream>
int main() {
std::cout << "To boldly go";
return 0;
}
```
You'll notice the *type declaration* (`int`), the function name (`main`), the parameter declaration (`()`, i.e. no parameters in this case), and the *return value* (`0`). Python functions have a similar syntax. Instead of a type declaration, one uses the `def` keyword to denote function definition. One does not use `{}` braces, but one does use a `:` colon to initiate the body of the function and whitespace to indent the body.
Since Python is interpreted rather than compiled, functions are ready to use as soon as they are defined.
```
def boldly_print(): # colon ends declaration and begins definition
print("To boldly go")
# return values are optional
boldly_print()
# ---
```
## Parameters
Just as in C++, in Python we can pass *arguments* (or *parameters*) to functions in order to modify their behavior.
```
def boldly_print_2(k):
for i in range(k):
print("To boldly go")
boldly_print_2(3)
# ---
```
These arguments can be given *default* values, so that it is not necessary to specify each argument in each function call.
```
def boldly_print_3(k, verb="go"):
for i in range(k):
print("To boldly " + verb)
boldly_print_3(2)
# ---
```
It is often desirable to use *keyword arguments* so that your code clearly indicates which argument is being supplied which value:
```
boldly_print_3(3, "sing") # fine
# ---
boldly_print_3(k=3, verb="sing") # same as above, easier to read
# ---
```
All keyword arguments must be supplied after all positional arguments:
```
boldly_print_3(k = 3, "sing")  # SyntaxError: positional argument follows keyword argument
# ---
```
## Scope
The **global scope** is the set of all variables available for usage outside of any function.
```
x = 3 # available in global scope
x
```
Functions create a **local scope**. This means:
- Variables in the global scope are available within the function.
- Variables created within the function are **not** available within the global scope.
```
# variables within the global scope are available within the function
def print_x():
print(x)
print_x()
# ---
def print_y():
y = 2
print(y)
print_y()
# ---
y  # NameError: y was local to print_y and does not exist in the global scope
# ---
```
Immutable variables in the global scope cannot be modified by functions, even if you use the same variable name.
```
def new_x():
x = 7
print(x)
new_x()
# ---
print(x)
# ---
```
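If a function genuinely needs to rebind a global name, Python provides the `global` keyword (shown here for completeness, with `x` re-defined so the cell is self-contained; returning a value is usually the better design):

```python
x = 3

def new_x_global():
    global x  # rebind the module-level x instead of creating a local one
    x = 7

new_x_global()
print(x)  # 7
```

Without the `global` statement, the assignment inside the function would create a new local `x`, exactly as in `new_x()` above.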
On the other hand, *mutable* variables in global scope can be modified by functions. **This is usually a bad idea**, for reasons we'll discuss in another set of notes.
```
# this works, but it's a bad idea.
captains = ["Kirk", "Picard", "Janeway", "Sisko"]
def reverse_names():
for i in range(4):
captains[i] = captains[i][::-1]
reverse_names()
captains
```
## Return values
So far, we've seen examples of functions that print but do not *return* anything. Usually, you will want your function to have one or more return values. These allow the output of a function to be used in future computations.
```
def boldly_return(k = 1, verb = "go"):
return(["to boldly " + verb for i in range(k)])
x = boldly_return(k = 2, verb = "dance")
x
```
Your function can return multiple values:
```
def double_your_number(j):
return(j, 2*j)
x, y = double_your_number(10)
```
The `return` statement *immediately* terminates the function's local scope, usually returning to global scope. So, for example, a `return` statement can be used to terminate a `while` loop, similar to a `break` statement.
```
def largest_power_below(a, upper_bound):
i = 1
while True:
i *= a
if a*i >= upper_bound:
return(i)
largest_power_below(3, 10000)
```
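For comparison, the same function can be written with `break` and a single explicit `return` at the end:

```python
def largest_power_below_break(a, upper_bound):
    i = 1
    while True:
        i *= a
        if a * i >= upper_bound:
            break  # exit the loop; the function keeps executing
    return i       # single exit point after the loop

print(largest_power_below_break(3, 10000))  # 6561, same as the return-based version
```

The `return`-based version above is shorter; the `break`-based version makes the loop exit and the function exit two separate events, which some readers find easier to trace.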
```
import tensorflow as tf
# NOTE: this notebook uses the TensorFlow 1.x API (tf.Session, tf.layers,
# tf.contrib); it will not run unmodified on TensorFlow 2.x.
# Import MNIST data (NumPy format)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Parameters
learning_rate = 0.01
num_steps = 1000
batch_size = 128
display_step = 100
# Network Parameters
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
sess = tf.Session()
# Create a dataset tensor from the images and the labels
dataset = tf.data.Dataset.from_tensor_slices(
(mnist.train.images, mnist.train.labels))
# Automatically refill the data queue when empty
dataset = dataset.repeat()
# Create batches of data
dataset = dataset.batch(batch_size)
# Prefetch data for faster consumption
dataset = dataset.prefetch(batch_size)
# Create an iterator over the dataset
iterator = dataset.make_initializable_iterator()
# Initialize the iterator
sess.run(iterator.initializer)
# Neural Net Input (images, labels)
X, Y = iterator.get_next()
# -----------------------------------------------
# THIS IS A CLASSIC CNN (see examples, section 3)
# -----------------------------------------------
# Note that a few elements have changed (e.g. the usage of sess.run).
# Create model
def conv_net(x, n_classes, dropout, reuse, is_training):
# Define a scope for reusing the variables
with tf.variable_scope('ConvNet', reuse=reuse):
# MNIST data input is a 1-D vector of 784 features (28*28 pixels)
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, 28, 28, 1])
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Convolution Layer with 64 filters and a kernel size of 3
conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Flatten the data to a 1-D vector for the fully connected layer
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in contrib folder for now)
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
# Output layer, class prediction
out = tf.layers.dense(fc1, n_classes)
# Because 'softmax_cross_entropy_with_logits' already apply softmax,
# we only apply softmax to testing network
out = tf.nn.softmax(out) if not is_training else out
return out
# Because Dropout has different behavior at training and prediction time, we
# need to create 2 distinct computation graphs that share the same weights.
# Create a graph for training
logits_train = conv_net(X, n_classes, dropout, reuse=False, is_training=True)
# Create another graph for testing that reuses the same weights, but has
# different behavior for 'dropout' (not applied).
logits_test = conv_net(X, n_classes, dropout, reuse=True, is_training=False)
# Define loss and optimizer (with train logits, for dropout to take effect)
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits_train, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(logits_test, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Run the initializer
sess.run(init)
# Training cycle
for step in range(1, num_steps + 1):
# Run optimization
sess.run(train_op)
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
# (note that this consumes a new batch of data)
loss, acc = sess.run([loss_op, accuracy])
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
```
```
import pandas as pd
def load_data():
return pd.read_csv("../datasets/housing/housing.csv")
housingData = load_data()
housingData.head()
housingData.info()
housingData["ocean_proximity"].value_counts()
%matplotlib inline
import matplotlib.pyplot as plt
housingData.hist(bins = 50, figsize=(20,15))
plt.show()
import numpy as np
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housingData, 0.2)
housingData["income_cat"] = np.ceil(housingData["median_income"] / 1.5)
housingData["income_cat"].where(housingData["income_cat"] < 5, 5.0, inplace = True)
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits = 1, test_size = 0.2, random_state = 42)
for train_index, test_index in split.split(housingData, housingData["income_cat"]):
strat_train_set = housingData.loc[train_index]
strat_test_set = housingData.loc[test_index]
for set_ in (strat_test_set, strat_train_set):  # avoid shadowing the built-in `set`
set_.drop(["income_cat"], axis = 1, inplace = True)
housing = strat_train_set.copy()
housing.plot(kind = "scatter", x = "longitude", y = "latitude", alpha = 0.1)
housing.plot(kind = "scatter", x = "longitude", y = "latitude", alpha = 0.4,
s = housing["population"] / 100, label ="population",
c = "median_house_value", cmap = plt.get_cmap("jet"), colorbar = True
)
plt.legend()
# compute pairwise corelation of columns
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
housing["rooms_per_household"] = housing["total_rooms"] / housing["population"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"] = housing["population"] / housing["households"]
# calculate a median for all attributes and fill in the median in empty rows
# (Imputer was removed in scikit-learn 0.22; on newer versions use sklearn.impute.SimpleImputer)
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy="median")
housing_num = housing.drop("ocean_proximity", axis = 1)
imputer.fit(housing_num)
imputer.statistics_
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
# transform the text attributes into numerical attributes
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing["ocean_proximity"]
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
encoder.classes_
# transform the ocean proximity attribute into multiple 0/1 attributes (one-hot encoding)
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))
housing_cat_1hot
# or to do both of the previous processes in one go
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
encoder.fit_transform(housing_cat)
```
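The `Imputer` / `LabelEncoder` / `OneHotEncoder` pattern above reflects an older scikit-learn release. The same two preprocessing steps can be sketched with pandas alone; the toy DataFrame below is made up for illustration and is not the chapter's housing dataset:

```python
import pandas as pd

# toy data standing in for the housing columns (hypothetical values)
df = pd.DataFrame({
    "total_bedrooms": [2.0, None, 4.0, 6.0],
    "ocean_proximity": ["INLAND", "NEAR BAY", "INLAND", "ISLAND"],
})

# median imputation for the numeric column
median = df["total_bedrooms"].median()
df["total_bedrooms"] = df["total_bedrooms"].fillna(median)

# one-hot encoding for the categorical column, one 0/1 column per category
df = pd.get_dummies(df, columns=["ocean_proximity"])
print(df.columns.tolist())
```

`pd.get_dummies` combines the label-encoding and one-hot steps into a single call, at the cost of not remembering the category set for transforming new data the way a fitted encoder does.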
```
import matplotlib.pyplot as plt
import numpy as np
years=[1,1000,1500,1600,1700,1750,1800,1850,1900,1950,1955,1960,1965,1970,1980,1985,1990,
1995,2000,2005,2010,2015]
pops=[200,400,458,580,682,791,1000,1262,1650,2525,2758,3018,3322,3682,
4061,4440,4853,5310,5735,6127,6520,7349]
plt.plot(years,pops)
plt.show()
# Adding labels and custom line color
years=[1950,1955,1960,1965,1970,1980,1985,1990,1995,2000,2005,2010,2015]
pops=[2.5,2.7,3.0,3.3,3.6,4.0,4.4,4.8,5.3,5.7,6.1,6.5,7.3]
plt.plot(years,pops,color=(255/255,100/255,100/255))
plt.ylabel("Population in Billions")
plt.xlabel("Population growth by year")
plt.title("Population Growth")
plt.show()
#Legends, Titles, and Labels
x1 = [1,6,3]
y1 = [5,9,4]
x2 = [1,2,3]
y2 = [10,14,12]
plt.plot(x1, y1, label='First Line')
plt.plot(x2, y2, label='Second Line')
plt.xlabel('Plot Number')
plt.ylabel('Important var')
plt.title('Interesting Graph Check it out')
plt.legend()
plt.grid(True)
plt.show()
# sin graph
# cos graph
x = np.arange(0, 10, 0.01)
y = np.cos(x)
plt.plot(x, y)
plt.show()
y_sin = np.sin(x)
y_cos = np.cos(x)
plt.plot(x,y_sin,label="sin graph")
plt.plot(x,y_cos,label="cos graph")
plt.legend()
```
## Pie Charts
```
labels=['Python','C','C++','PHP','Java','Ruby']
sizes=[33,52,12,17,42,48]
plt.pie(sizes,labels=labels)
plt.axis('equal')
plt.show()
labels=['Python','C','C++','PHP','Java','Ruby']
sizes=[33,52,12,17,42,48]
plt.pie(sizes,labels=labels,autopct='%.2f%%')
plt.axis('equal')
plt.show()
labels=['Python','C','C++','PHP','Java','Ruby']
sizes=[33,52,12,17,42,48]
separated=(.1,.1,0,0,0,0)
plt.pie(sizes,labels=labels,autopct='%.2f%%',explode=separated)
plt.axis('equal')
plt.show()
```
## Scatter Plot
```
x = [1,2,3,4,5,6,7,8]
y = [5,2,4,2,1,4,5,2]
plt.scatter(x,y, label='skitscat Raggedy', color='k', s=25, marker="o")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
korea_scores=(554,536,538)
canada_scores=(518,523,525)
china_scores=(413,570,580)
france_scores=(495,505,499)
index=np.arange(3)
bar_width=.2
k1=plt.bar(index,korea_scores,bar_width,label="Korea")
c1=plt.bar(index+bar_width,canada_scores,bar_width,label="Canada")
ch1=plt.bar(index+bar_width*2,china_scores,bar_width,label="China")
f1=plt.bar(index+bar_width*3,france_scores,bar_width,label="France")
plt.xticks(index+.6/2,('Mathematics','Reading','Science'))
plt.ylabel('Mean score in PISA in 2012')
plt.xlabel('Subjects')
plt.title('Test scores by Country')
plt.legend()
plt.show()
```
## Stack Plot
```
days = [1,2,3,4,5]
sleeping = [7,8,6,11,7]
eating = [2,3,4,3,2]
working = [7,8,7,2,2]
playing = [8,5,7,8,13]
plt.stackplot(days, sleeping,eating,working,playing, colors=['m','c','r','k'])
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nStack Plots')
plt.show()
days = [1,2,3,4,5]
sleeping = [7,8,6,11,7]
eating = [2,3,4,3,2]
working = [7,8,7,2,2]
playing = [8,5,7,8,13]
plt.plot([],[],color='m', label='Sleeping')
plt.plot([],[],color='c', label='Eating')
plt.plot([],[],color='r', label='Working')
plt.plot([],[],color='k', label='Playing')
plt.stackplot(days, sleeping,eating,working,playing, colors=['m','c','r','k'])
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
```
## Assignment:
Beat the performance of my Lasso regression by **using different feature engineering steps ONLY!!**.
The performance of my current model, as shown in this notebook is:
- test rmse: 44798.497576784845
- test r2: 0.7079639526659389
To beat my model you will need a test r2 bigger than 0.71 and a rmse smaller than 44798.
### Conditions:
- You MUST NOT change the hyperparameters of the Lasso.
- You MUST use the same seeds in Lasso and train_test_split as I show in this notebook (random_state)
- You MUST use all the features of the dataset (except Id) - you MUST NOT select features
### If you beat my model:
Make a pull request with your notebook to this github repo:
https://github.com/solegalli/udemy-feml-challenge
Remember that you need to fork this repo first, upload your winning notebook to your own fork, and then open a PR (pull request) against my repo. I will review and accept the PR, after which it will appear in my repo and be available to all the students in the course. This way, other students can learn from your creativity when transforming the variables in your dataset.
## House Prices dataset
```
from math import sqrt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error, r2_score
# for feature engineering
from sklearn.preprocessing import StandardScaler
# NOTE: these module paths are from feature-engine < 1.0; newer releases renamed
# them (e.g. feature_engine.imputation, feature_engine.encoding)
from feature_engine import missing_data_imputers as mdi
from feature_engine import discretisers as dsc
from feature_engine import categorical_encoders as ce
```
### Load Datasets
```
# load dataset
data = pd.read_csv('../houseprice.csv')
# make lists of variable types
categorical = [var for var in data.columns if data[var].dtype == 'O']
year_vars = [var for var in data.columns if 'Yr' in var or 'Year' in var]
discrete = [
var for var in data.columns if data[var].dtype != 'O'
and len(data[var].unique()) < 20 and var not in year_vars
]
numerical = [
var for var in data.columns if data[var].dtype != 'O'
if var not in discrete and var not in ['Id', 'SalePrice']
and var not in year_vars
]
print('There are {} continuous variables'.format(len(numerical)))
print('There are {} discrete variables'.format(len(discrete)))
print('There are {} temporal variables'.format(len(year_vars)))
print('There are {} categorical variables'.format(len(categorical)))
```
### Separate train and test set
```
# IMPORTANT: keep the random_state to zero for reproducibility
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(data.drop(
['Id', 'SalePrice'], axis=1),
data['SalePrice'],
test_size=0.1,
random_state=0)
# calculate elapsed time
def elapsed_years(df, var):
# capture difference between year variable and
# year the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# drop YrSold
X_train.drop('YrSold', axis=1, inplace=True)
X_test.drop('YrSold', axis=1, inplace=True)
# Join number of bathrooms in total
X_train['FullBath'] = X_train['FullBath'] + X_train['BsmtFullBath']
X_train['HalfBath'] = X_train['HalfBath'] + X_train['BsmtHalfBath']
X_test['FullBath'] = X_test['FullBath'] + X_test['BsmtFullBath']
X_test['HalfBath'] = X_test['HalfBath'] + X_test['BsmtHalfBath']
X_train.drop(['BsmtFullBath', 'BsmtHalfBath'], axis=1, inplace=True)
X_test.drop(['BsmtFullBath', 'BsmtHalfBath'], axis=1, inplace=True)
discrete.remove('BsmtFullBath')
discrete.remove('BsmtHalfBath')
# capture the column names for use later in the notebook
final_columns = X_train.columns
```
## Feature Engineering Pipeline
```
# I will treat discrete variables as if they were categorical
# to treat discrete as categorical using Feature-engine
# we need to re-cast them as object
X_train[discrete] = X_train[discrete].astype('O')
X_test[discrete] = X_test[discrete].astype('O')
data[np.append(year_vars, numerical)].isnull().mean().sort_values(ascending=False)
# mean number of categories per categorical variable
vals = []
for i in categorical:
vals.append(len(data[i].unique()))
np.ceil(np.mean(vals))
house_pipe = Pipeline([
# missing data imputation - section 4
('missing_ind_1',
mdi.ArbitraryNumberImputer(
arbitrary_number=0, variables=['MasVnrArea'])),
('missing_ind_2',
mdi.AddNaNBinaryImputer(
variables=['LotFrontage', 'GarageYrBlt'])),
('imputer_num',
mdi.MeanMedianImputer(
imputation_method='median',
variables=['LotFrontage', 'GarageYrBlt'])),
('imputer_cat', mdi.CategoricalVariableImputer(variables=categorical)),
# categorical encoding - section 6
('rare_label_enc',
ce.RareLabelCategoricalEncoder(tol=0.03,
n_categories=7,
variables=categorical)),
('categorical_enc',
ce.OneHotCategoricalEncoder(top_categories=10,
variables=categorical)),
# feature Scaling - section 10
('scaler', StandardScaler()),
# regression
('lasso', Lasso(random_state=0))
])
# let's fit the pipeline
house_pipe.fit(X_train, y_train)
# let's get the predictions
X_train_preds = house_pipe.predict(X_train)
X_test_preds = house_pipe.predict(X_test)
# check model performance:
print('train mse: {}'.format(mean_squared_error(y_train, X_train_preds)))
print('train rmse: {}'.format(sqrt(mean_squared_error(y_train, X_train_preds))))
print('train r2: {}'.format(r2_score(y_train, X_train_preds)))
print()
print('test mse: {}'.format(mean_squared_error(y_test, X_test_preds)))
print('test rmse: {}'.format(sqrt(mean_squared_error(y_test, X_test_preds))))
print('test r2: {}'.format(r2_score(y_test, X_test_preds)))
# plot predictions vs real value
plt.scatter(y_test,X_test_preds)
plt.xlabel('True Price')
plt.ylabel('Predicted Price')
plt.show()
```
```
# Images: copyright of their respective authors.
# Plots: taken from http://matplotlib.org/gallery.html and modified.
```
# MAT281
## Applications of Mathematics in Engineering
## Why will we learn about visualization?
* Because a result is useless if it cannot be communicated properly.
* Because good visualization is far from a trivial task.
* Because an engineer needs to produce excellent plots (but nobody teaches how).
Surely that's an exaggeration...
## No, I'm not exaggerating...
<img src="images/Fox1.png" alt="" width="800" align="middle"/>
## No, I'm not exaggerating...
<img src="images/Fox2.png" alt="" width="800" align="middle"/>
## No, I'm not exaggerating...
<img src="images/Fox3.png" alt="" width="800" align="middle"/>
## Early visualizations
Napoleon's Russian campaign (Charles Minard, 1869).
<img src="images/Napoleon.png" alt="" width="800" align="middle"/>
## Early visualizations
The cholera map (John Snow, 1855).
<img src="images/Colera.png" alt="" width="800" align="middle"/>
## And why do we use plots in the first place?
Why do we use plots to present data?
* 70% of the human body's sensory receptors are dedicated to vision.
* The brain has been evolutionarily trained to interpret visual information on a massive scale.
“The eye and the visual cortex of the brain form a massively
parallel processor that provides the highest bandwidth channel
into human cognitive centers”
— Colin Ware, Information Visualization, 2004.
## A classic example: Anscombe's quartet
Consider the following 4 data sets.
What can you say about the data?
```
import pandas as pd
import os
filepath = os.path.join("data","anscombe.csv")
df = pd.read_csv(filepath)
df
```
## A classic example: Anscombe's quartet
Let's look at the summary statistics, in pure `numpy`:
```
import numpy as np
data = np.loadtxt("data/anscombe.csv", delimiter=",", skiprows=1)
for i in range(4):
x = data[:,2*i]
y = data[:,2*i+1]
slope, intercept = np.polyfit(x, y, 1)
print("Group %d:" % (i+1))
print("\tHas slope m=%.2f and intercept b=%.2f" % (slope, intercept))
```
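The shared slope and intercept are only part of the story: the four groups also share (nearly) identical means, variances, and correlation coefficients. A quick check on the first group, with its well-known values hard-coded here so the cell runs without `data/anscombe.csv`:

```python
import numpy as np

# Anscombe group 1 (same values as the first two columns of anscombe.csv)
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])

print("mean x = %.2f, mean y = %.2f" % (x.mean(), y.mean()))
print("correlation = %.3f" % np.corrcoef(x, y)[0, 1])
```

Repeating the check on the other three groups gives essentially the same numbers, which is precisely why plotting (below) is indispensable.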
Now using `pandas`.
```
import pandas as pd
import os
filepath = os.path.join("data","anscombe.csv")
df = pd.read_csv(filepath)
df[sorted(df.columns)].describe(include="all")
```
## A classic example: Anscombe's quartet
Let's plot the data, with `numpy`:
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
def my_plot():
data = np.loadtxt("data/anscombe.csv", delimiter=",", skiprows=1)
fig = plt.figure(figsize=(16,8))
for i in range(4):
x = data[:,2*i]
y = data[:,2*i+1]
plt.subplot(2, 2, i+1)
plt.plot(x,y,'o')
plt.xlim([2,20])
plt.ylim([2,20])
plt.title("Group %d" % (i+1))
m, b = np.polyfit(x, y, 1)
x_aux = np.linspace(2,16,20)
plt.plot(x_aux, m*x_aux + b, 'r', lw=2.0)
plt.suptitle("Anscombe's quartet")
plt.show()
my_plot()
```
Let's plot with `pandas`:
```
import pandas as pd
import os
# Reshape the data
filepath = os.path.join("data","anscombe.csv")
df = pd.read_csv(filepath)
long_format_data = []
for i in range(1,5):
old_cols = ["x{}".format(i), "y{}".format(i)]
new_cols = ["x", "y"]
df_aux = df[old_cols].rename(columns = dict(zip(old_cols, new_cols)))
df_aux["set"] = "{}".format(i)
long_format_data.append(df_aux)
df_new = pd.concat(long_format_data)
df_new
df_new.plot(x="x", y="y", kind="scatter", subplots=True, figsize=(16,8))
```
Phew, harder than expected.
In practice, it always pays to use the best tool at hand (and to know several tools).
```
pd.plotting.scatter_matrix(df, figsize=(10,10));
```
## The human visual system
#### Good news
* Plots can reveal information that statistics alone might not.
* Visual display is essential for comprehension.
#### Bad news
* Attention is selective and easily fooled.
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO1a.png" alt="" width="400" align="middle"/>
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO1b.png" alt="" width="400" align="middle"/>
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO2a.png" alt="" width="400" align="middle"/>
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO2b.png" alt="" width="400" align="middle"/>
## General advice
In his talk "Four Pillars of Visualization" ([es](https://www.youtube.com/watch?v=nC92wIzpQFE), [en](https://www.youtube.com/watch?v=3eZ15VplE3o)), Noah Illinsky offers good advice on how to build a correct visualization around four pillars:
* Purpose
* Information/Content
* Encoding/Structure
* Format
Watching the video is highly recommended, but in short:
* **Purpose** (or audience) concerns who the visualization is being prepared for and how it will be used. A chart meant to inform is very different from one meant to drive decisions.
* **Information/Content** means having the information you want to show, in the format required to process it.
* **Encoding/Structure** concerns choosing the right encoding and structure for the information.
* **Format** concerns the choice of fonts, colors, relative sizes, and so on.
All of this means a visualization is not merely a byproduct of some data. A visualization is designed and thought through first; only then do you look for appropriate data sources.
## Elements of a good visualization
1. ***Honesty***: visual representations must not mislead the observer.
2. ***Prioritization***: the most important data should use the best-perceived visual element.
3. ***Expressiveness***: data should use elements with appropriate attributes.
4. ***Consistency***: the visual encoding should allow the data to be reproduced.
The basic principle to respect is that, from the plot, one should be able to easily recover the original data.
## 1. Honesty
The human eye does not estimate all visual attributes with the same precision:
* **Length**: well estimated and unbiased, with a multiplicative factor of 0.9 to 1.1.
* **Area**: underestimated and biased, with a multiplicative factor of 0.6 to 0.9.
* **Volume**: badly underestimated and biased, with a multiplicative factor of 0.5 to 0.8.
#### 1. Honesty
It is inappropriate to plot data using areas or volumes in a way that invites misreading.
<img src="images/Honestidad1.png" alt="" width="800" align="middle"/>
#### 1. Honesty
It is inappropriate to plot data using areas or volumes when the encoded attribute is unclear.
<img src="images/Honestidad2.png" alt="" width="800" align="middle"/>
#### 1. Honesty
A pseudo-exception is the pie chart,
because the human eye distinguishes angles and circle segments well,
and because the corresponding percentages can be labeled.
```
from matplotlib import pyplot as plt
def my_plot():
# make a square figure and axes
plt.figure(figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.8, 0.8])
# The slices will be ordered and plotted counter-clockwise.
my_labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
my_fracs = [15, 30, 45, 10]
my_explode=(0, 0.10, 0.10, 0)
#plt.pie(my_fracs, labels=my_labels)
plt.pie(my_fracs, explode=my_explode, labels=my_labels, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Raining Hogs and Dogs', bbox={'facecolor':'0.8', 'pad':5})
plt.show()
my_plot()
```
## 2. Prioritization
The most important data should use the best-perceived visual element.
```
import numpy as np
from matplotlib import pyplot as plt
def my_plot():
N = 31
x = np.arange(N)
y1 = 80 + 20*x/N + 5*np.random.rand(N)
y2 = 75 + 25*x/N + 5*np.random.rand(N)
fig = plt.figure(figsize=(16,8))
plt.subplot(2, 2, 1)
plt.plot(x, y1, 'ok')
plt.plot(x, y2, 'sk')
plt.subplot(2, 2, 2)
plt.plot(x, y1,'ob')
plt.plot(x, y2,'or')
plt.subplot(2, 2, 3)
plt.plot(x, y1,'ob')
plt.plot(x, y2,'*r')
plt.subplot(2, 2, 4)
plt.plot(x, y1,'sr')
plt.plot(x, y2,'ob')
plt.show()
my_plot()
```
#### 2. Prioritization
## Best-perceived elements
Not all elements are perceived equally by the visual system.
In particular, color and shape are preattentive elements: a distinct color or a distinct shape is recognized without conscious effort.
Examples of preattentive elements:
<img src="images/preatentivo1.png" alt="" width="600" align="middle"/>
<img src="images/preatentivo2.png" alt="" width="600" align="middle"/>
#### 2. Prioritization
## Best-perceived elements
In what order do you think the human visual system can estimate the following visual attributes?
* Color
* Slope
* Length
* Angle
* Position
* Area
* Volume
#### 2. Prioritization
## Best-perceived elements
The human visual system estimates the following visual attributes with decreasing precision:
1. Position
2. Length
3. Slope
4. Angle
5. Area
6. Volume
7. Color
Whenever possible, use the attribute that is estimated most precisely.
#### 2. Prioritization
## Colormaps
Since color perception has very low precision, it is ***inappropriate*** to try to represent a numeric value with colors.
* What numeric difference is there between green and red?
* What preexisting associations do red, yellow, and green carry?
* How precisely can we distinguish values on a grayscale?
#### 2. Prioritization
## Colormaps
<img src="images/colormap.png" alt="" width="400" align="middle"/>
#### 2. Prioritization
## Colormaps
Some colormap examples:
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt

# matplotlib.mlab.bivariate_normal was removed in matplotlib 3.1; define an equivalent inline
def bivariate_normal(X, Y, sigmax, sigmay, mux, muy):
    return np.exp(-0.5*(((X - mux)/sigmax)**2 + ((Y - muy)/sigmay)**2)) / (2*np.pi*sigmax*sigmay)

def my_plot():
    delta = 0.025
    x = np.arange(-3.0, 3.0, delta)
    y = np.arange(-2.0, 2.0, delta)
    X, Y = np.meshgrid(x, y)
    Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
    Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure(figsize=(16,8))
# First plot
plt.subplot(2,2,1)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.rainbow, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Second plot
plt.subplot(2,2,2)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.autumn, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Third plot
plt.subplot(2,2,3)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.coolwarm, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Fourth plot
plt.subplot(2,2,4)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.gray, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Show
plt.show()
my_plot()
```
#### 2. Prioritization
## Colormaps
Tip: avoid colormaps whenever you can, for instance by using contour plots instead.
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt

# matplotlib.mlab.bivariate_normal was removed in matplotlib 3.1; define an equivalent inline
def bivariate_normal(X, Y, sigmax, sigmay, mux, muy):
    return np.exp(-0.5*(((X - mux)/sigmax)**2 + ((Y - muy)/sigmay)**2)) / (2*np.pi*sigmax*sigmay)

def my_plot():
    delta = 0.025
    x = np.arange(-3.0, 3.0, delta)
    y = np.arange(-2.0, 2.0, delta)
    X, Y = np.meshgrid(x, y)
    Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
    Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure(figsize=(16,8))
# First plot
plt.subplot(2,2,1)
CS = plt.contour(X, Y, Z, 9, cmap=cm.rainbow)
# Second plot
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
plt.subplot(2,2,2)
CS = plt.contour(X, Y, Z, 9, cmap=cm.rainbow)
plt.clabel(CS, fontsize=9, inline=1)
# Third plot
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
plt.subplot(2,2,3)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
# Fourth plot
matplotlib.rcParams['contour.negative_linestyle'] = 'dashed'
plt.subplot(2,2,4)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
plt.grid('on');
# Show
plt.show()
my_plot()
```
## 3. On expressiveness
Show the data, and only the data.
Data should use elements with appropriate attributes: not all data is born equal.
#### 3. On expressiveness
Classification of data:
* ***Quantitative data***: absolute quantification.
    * Sugar content of fruit: 50 [g/kg]
    * Operations =, $\neq$, <, >, +, −, * , /
* ***Positional data***: relative quantification.
    * Harvest date: August 1, 2014; August 2, 2014.
    * Operations =, $\neq$, <, >, +, −
* ***Ordinal data***: order without quantification.
    * Fruit quality: low, medium, high, export grade.
    * Operations =, $\neq$, <, >
* ***Nominal data***: names or categories.
    * Fruits: apple, pear, kiwi, ...
    * Operations $=$, $\neq$
#### 3. On expressiveness
Example: earthquakes. What data types do we have?
* Nearest city
* Year
* Magnitude on the Richter scale
* Magnitude on the Mercalli scale
* Latitude
* Longitude
#### 3. On expressiveness
Counterexample: computer companies.
| Company | Origin |
|----------|-------------|
| MSI | Taiwan |
| Asus | Taiwan |
| Acer | Taiwan |
| HP | USA |
| Dell | USA |
| Apple | USA |
| Sony | Japan |
| Toshiba | Japan |
| Lenovo | Hong Kong |
| Samsung | South Korea |
#### 3. On expressiveness
Counterexample: computer companies.
```
import matplotlib.pyplot as plt
import numpy as np
def my_plot():
brands = {"MSI":"Taiwan", "Asus":"Taiwan", "Acer":"Taiwan",
"HP":"EEUU", "Dell":"EEUU", "Apple":"EEUU",
"Sony":"Japon", "Toshiba":"Japon",
"Lenovo":"Hong Kong",
"Samsung":"Corea del Sur"}
C2N = {"Taiwan":1,"EEUU":2,"Japon":3,"Hong Kong":4,"Corea del Sur":7}
x = np.arange(len(brands.keys()))
y = np.array([C2N[val] for key,val in brands.items()])
width = 0.35 # the width of the bars
fig, ax = plt.subplots(figsize=(16,8))
rects1 = ax.bar(x, y, width, color='r')
# add some text for labels, title and axes ticks
ax.set_xticks(x + 0.5*width)
ax.set_xticklabels(brands.keys(), rotation="90")
ax.set_yticks(list(C2N.values()))
ax.set_yticklabels(C2N.keys())
plt.xlim([-1,len(x)+1])
plt.ylim([-1,y.max()+1])
plt.show()
my_plot()
```
#### 3. On expressiveness
Classification of data:
* ***Quantitative data***: absolute quantification.
    * Sugar content of fruit: 50 [g/kg]
    * Operations =, $\neq$, <, >, +, −, * , /
    * **Use position, length, slope, or angle**
* ***Positional data***: relative quantification.
    * Harvest date: August 1, 2014; August 2, 2014.
    * Operations =, $\neq$, <, >, +, −
    * **Use position, length, slope, or angle**
* ***Ordinal data***: order without quantification.
    * Fruit quality: low, medium, high, export grade.
    * Operations =, $\neq$, <, >
    * **Use markers differentiated by shape or size, or an appropriate colormap**
* ***Nominal data***: names or categories.
    * Fruits: apple, pear, kiwi, ...
    * Operations $=$, $\neq$
    * **Use shape or color**
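The guideline above can be condensed into a small lookup table. A minimal sketch (the helper and its names are ours, not part of the course material):

```python
# Map each kind of data to its recommended visual encoding.
ENCODING_BY_KIND = {
    "quantitative": "position, length, slope or angle",
    "positional":   "position, length, slope or angle",
    "ordinal":      "marker shape/size, or an appropriate colormap",
    "nominal":      "shape or color",
}

def suggest_encoding(kind: str) -> str:
    """Return the recommended encoding for a data kind (case-insensitive)."""
    return ENCODING_BY_KIND[kind.lower()]

print(suggest_encoding("nominal"))   # shape or color
```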
## 4. Consistency
The visual encoding should allow the data to be reproduced. For that we must:
* Plot data that are comparable.
* Use properly scaled axes.
* Use the same visual encoding across similar plots.
#### 4. Consistency
## Use properly scaled axes
```
import numpy as np
from matplotlib import pyplot as plt
def my_plot():
    # Data
x = range(1,13)
y = 80 + 20*np.random.rand(12)
x_ticks = ["E","F","M","A","M","J","J","A","S","O","N","D"]
fig = plt.figure(figsize=(16,8))
plt.subplot(1, 2, 1)
plt.plot(x, y,'o-')
plt.xticks(x, x_ticks)
plt.xlim([-1,13])
plt.subplot(1, 2, 2)
plt.plot(x, y,'o-')
plt.xticks(x, x_ticks)
plt.xlim([-1,13])
plt.ylim([0,100])
plt.show()
my_plot()
```
#### 4. Consistency
## Use the same visual encoding across similar plots
```
import numpy as np
from matplotlib import pyplot as plt
def my_plot():
x = np.linspace(0, 1, 50)
f1 = x**2+.2*np.random.rand(50)
g1 = x+.2*np.random.rand(50)
f2 = 0.5-0.2*x+.2*np.random.rand(50)
g2 =x**3+.2*np.random.rand(50)
fig = plt.figure(figsize=(16,8))
plt.subplot(2, 1, 1)
    plt.title("Before MAT281")
plt.plot(x, f1, 'b', label='Chile', lw=2.0)
plt.plot(x, g1, 'g:', label='OECD', lw=2.0)
plt.legend(loc="upper left")
plt.subplot(2, 1, 2)
    plt.title("After MAT281")
plt.plot(x, f2, 'g:', label='Chile', lw=2.0)
plt.plot(x, g2, 'b', label='OECD', lw=2.0)
plt.legend()
plt.show()
my_plot()
```
## Summary
Elements of a good visualization:
* ***Honesty***: visual representations must not mislead the observer.
* ***Prioritization***: the most important data should use the best-perceived visual element.
* ***Expressiveness***: data should use elements with appropriate attributes.
* ***Consistency***: the visual encoding should allow the data to be reproduced.
The basic principle to respect is that, from the plot, one should be able to easily recover the original data.
#### Plot by plot
## When to use a bar chart?
```
from matplotlib import pyplot as plt
import numpy as np
def my_plot():
people = ('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
y_pos = np.arange(len(people))
performance = 3 + 10 * np.random.rand(len(people))
error = np.random.rand(len(people))
fig = plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.barh(y_pos, performance, xerr=error, align='center', color="g", alpha=0.4)
plt.yticks(y_pos, people)
plt.xlabel('Performance')
plt.subplot(1,2,2)
plt.bar(y_pos, performance, yerr=error, align='center', color="g", alpha=0.6)
plt.xticks(y_pos, people)
plt.xlabel('People')
plt.ylabel('Performance')
plt.show()
my_plot()
```
### When to use a bar chart?
* x: must be nominal or ordinal data.
* y: must be ordinal, positional, or quantitative data.
Avoid: plotting nominal vs. nominal.
#### Plot by plot
## When to use a vector field?
Why is a vector field plot called a "quiver" in English?
```
import matplotlib.pyplot as plt
import numpy as np
from numpy import ma
def my_plot():
X, Y = np.meshgrid(np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2))
U = np.cos(X)
V = np.sin(Y)
fig = plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
Q = plt.quiver(U, V)
qk = plt.quiverkey(Q, 0.5, 0.92, 2, r'$2 \frac{m}{s}$', labelpos='W',
fontproperties={'weight': 'bold'})
l, r, b, t = plt.axis()
dx, dy = r - l, t - b
plt.axis([l - 0.05*dx, r + 0.05*dx, b - 0.05*dy, t + 0.05*dy])
plt.subplot(1,2,2)
Q = plt.quiver(X[::3, ::3], Y[::3, ::3], U[::3, ::3], V[::3, ::3],
pivot='mid', color='r', units='inches')
qk = plt.quiverkey(Q, 0.5, 0.03, 1, r'$1 \frac{m}{s}$',
fontproperties={'weight': 'bold'})
plt.plot(X[::3, ::3], Y[::3, ::3], 'k.')
plt.axis([-1, 7, -1, 7])
plt.title("pivot='mid'; every third arrow; units='inches'")
plt.show()
my_plot()
```
### When to use a vector field?
* x: must be positional or quantitative data.
* y: must be positional or quantitative data.
* z: the slope must be positional or quantitative data.
Avoid: vector field plots when the corresponding interpretation is not possible.
#### Plot by plot
## When to use a contour plot?
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt

# matplotlib.mlab.bivariate_normal was removed in matplotlib 3.1; define an equivalent inline
def bivariate_normal(X, Y, sigmax, sigmay, mux, muy):
    return np.exp(-0.5*(((X - mux)/sigmax)**2 + ((Y - muy)/sigmay)**2)) / (2*np.pi*sigmax*sigmay)

def my_plot():
    delta = 0.025
    x = np.arange(-3.0, 3.0, delta)
    y = np.arange(-2.0, 2.0, delta)
    X, Y = np.meshgrid(x, y)
    Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
    Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure(figsize=(16,8))
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
plt.subplot(1,2,1)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
matplotlib.rcParams['contour.negative_linestyle'] = 'dashed'
plt.subplot(1,2,2)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
plt.grid('on')
# Show
plt.show()
my_plot()
```
* x: positional or quantitative data.
* y: positional or quantitative data.
* z: positional or quantitative data.
***NOTE***: The points must be sufficiently dense and regular to recover the level curves.
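If the sample points are scattered rather than gridded, one option is to interpolate them onto a regular grid first, for instance with `scipy.interpolate.griddata` (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(400, 2))            # irregular sample locations
vals = np.exp(-(pts[:, 0]**2 + pts[:, 1]**2))      # field sampled at those points

xi = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(xi, xi)
Z = griddata(pts, vals, (X, Y), method="cubic")    # regular grid, ready for plt.contour
print(Z.shape)
```

Points outside the convex hull of the samples come back as `NaN`, so inspect `Z` before contouring.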
#### Plot by plot
## When to use a scatter plot?
```
import matplotlib.pyplot as plt
import numpy as np
def my_plot():
N = 100
r0 = 0.6
x = 0.9*np.random.rand(N)
y = 0.9*np.random.rand(N)
area = np.pi*(10 * np.random.rand(N))**2 # 0 to 10 point radiuses
c = np.sqrt(area)
r = np.sqrt(x*x + y*y)
cm1 = plt.cm.get_cmap('RdYlBu')
cm2 = plt.cm.get_cmap('Greys')
plt.figure(figsize=(16,8))
area1 = np.ma.masked_where(r < r0, area)
area2 = np.ma.masked_where(r >= r0, area)
sc1 = plt.scatter(x, y, s=area1, marker='^', c=c, cmap=cm1)
plt.colorbar(sc1)
sc2 = plt.scatter(x, y, s=area2, marker='o', c=c, cmap=cm2)
plt.colorbar(sc2)
# Show the boundary between the regions:
theta = np.arange(0, np.pi/2, 0.01)
plt.plot(r0*np.cos(theta), r0*np.sin(theta), "k:", lw=2.0)
plt.show()
my_plot()
```
### When to use a scatter plot?
* x: positional or quantitative data.
* y: positional or quantitative data.
* z: nominal or ordinal data (optional).
***NOTE***: With few points, z can also be positional or quantitative data.
#### Plot by plot
## When to use an error bar plot?
```
import numpy as np
import matplotlib.pyplot as plt
def my_plot():
x = np.arange(0.1, 4, 0.5)
y = np.exp(-x)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
x_error = 0.1 + 0.2*np.random.rand(len(x))
plt.errorbar(x, y, xerr=x_error)
plt.subplot(1,2,2)
y_error = 0.1 + 0.2*np.random.rand(len(x))
plt.errorbar(x, y, yerr=y_error)
plt.show()
my_plot()
```
### When to use an error bar plot?
* x: positional or quantitative data.
* y: positional or quantitative data.
* z: positional or quantitative data.
The z values must have the same units as y.
## Making good visualizations
* Learn to recognize good and bad examples. Window-shop.
* For simple 2D and 3D plots:
    * Classic library: matplotlib (see the examples at http://matplotlib.org/gallery.html)
    * Other libraries: seaborn, gnuplot, ...
* For 3D plots:
    * Classic library: gmsh
    * Other libraries: mayavi, paraview, ...
* For interactive plots:
    * altair, bokeh, d3js
    * PowerBI, Tableau, etc.
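As a small taste of one of the libraries above, a `seaborn` sketch that redraws part of the quartet from long-format data (values hard-coded here, and `seaborn` assumed to be installed, so the cell is self-contained):

```python
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt

# Two of the Anscombe groups in long format: columns x, y, set
df = pd.DataFrame({
    "x": [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5] * 2,
    "y": [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
       + [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    "set": ["1"] * 11 + ["2"] * 11,
})
# One regression panel per group, in a single call
sns.lmplot(data=df, x="x", y="y", col="set", height=3)
plt.show()
```

Compare this one-liner with the manual subplot loop above: the declarative libraries earn their keep on faceted plots.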
| github_jupyter |
# **MODEL C: YOLOv3 + SORT + Early Fused Skeleton + ST-DenseNet**
## A unified framework for pedestrian intention prediction.
1. **YOLOv3** -> Object detector: identifies and localizes objects of interest in a given frame or image.
2. **SORT** -> Object tracker: tracks each detected object across frames.
3. **Early Fused Skeleton** -> Skeleton mapping: a skeleton is then fitted to each tracked pedestrian.
4. **Spatio-Temporal DenseNet** -> Classifier: classifies the intention of every identified and tracked pedestrian using that pedestrian's last 16 frames.
*The code for YOLOv3 was adapted from the GitHub repo: https://github.com/zzh8829/yolov3-tf2*
*The code for SORT was adapted from the GitHub repo: https://github.com/abewley/sort*
*The code for skeleton fitting (TF-Pose-Estimation) was adapted from the GitHub repo: https://github.com/ildoonet/tf-pose-estimation*
*The code for ST-DenseNet was adapted from the GitHub repo: https://github.com/GalDude33/DenseNetFCN-3D*
## **INSTRUCTIONS TO RUN THE MODEL ON GOOGLE COLAB**
This project was completely developed on Google Colab.
###1. Connect runtime to GPU for better/faster results.
###2. Clone the repository to Colab.
```
# run this to clone the repository Volvo-DataX
!git clone https://github.com/mjpramirez/Volvo-DataX
```
###3. Run this to install dependencies
```
%cd Volvo-DataX/tf-pose-estimation
! pip3 install -r requirements.txt
%cd tf_pose/pafprocess
! sudo apt install swig
!swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
```
###4. Next, open this link to access the shared folder: https://drive.google.com/open?id=1HxKtxBva3US2AJfohlKfjYSdhHvjt2Yc and add a shortcut to it in your main Google Drive folder.
Finally, run the cell below to mount your Google Drive:
```
from google.colab import drive
drive.mount('/content/drive')
```
###5. Run the remaining cells below in order, following the comments. Some cells may emit warnings; these can be safely ignored.
```
# run this
%cd /content/Volvo-DataX
!pip install filterpy
try:
%tensorflow_version 2.x
except Exception:
pass
import glob
import sys #Run this
from absl import app, logging, flags
from absl.flags import FLAGS
import time
import cv2
import numpy as np
import tensorflow as tf
from yolov3_tf2.models import (
YoloV3, YoloV3Tiny
)
from yolov3_tf2.dataset import transform_images, load_tfrecord_dataset
from yolov3_tf2.utils import draw_outputs
from sortn import *
tf.compat.v1.disable_eager_execution()
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
flags.DEFINE_string('classes', 'data/coco.names', 'path to classes file')
flags.DEFINE_string('weights', '/content/drive/My Drive/datax_volvo_additional_files/yolov3_train_5.tf','path to weights file')
flags.DEFINE_boolean('tiny', False, 'yolov3 or yolov3-tiny')
flags.DEFINE_integer('size', 416, 'resize images to')
flags.DEFINE_string('tfrecord', None, 'tfrecord instead of image')
flags.DEFINE_integer('num_classes', 1, 'number of classes in the model')
flags.DEFINE_string('video', 'data/JAAD_test_video_0339.mp4','path to video file or number for webcam)')
flags.DEFINE_string('output','Result_model_C.mp4', 'path to output video')
flags.DEFINE_string('output_format', 'mp4v', 'codec used in VideoWriter when saving video to file')
app._run_init(['yolov3'], app.parse_flags_with_usage)
%cd /content/Volvo-DataX/tf-pose-estimation
from tf_pose.estimator import TfPoseEstimator
from tf_pose.networks import get_graph_path, model_wh
from tf_pose.estimator import Human
model = TfPoseEstimator(get_graph_path('egen_jaad_1_5'), target_size=(100, 100))
%cd /content/Volvo-DataX
with open('densenet_model.json', 'r') as json_file:
json_savedModel= json_file.read()
model_j = tf.keras.models.model_from_json(json_savedModel)
model_j.load_weights('densenet_2.hdf5')
def pred_func(X_test):
predictions = model_j.predict(X_test[0:1], verbose=0)
Y = np.argmax(predictions[0], axis=0)
return Y
# Run this
FLAGS.yolo_iou_threshold = 0.5
FLAGS.yolo_score_threshold = 0.5
color = (255, 0, 0)
thickness = 2
yolo = YoloV3(classes=FLAGS.num_classes)
yolo.load_weights(FLAGS.weights).expect_partial()
logging.info('weights loaded')
class_names = [c.strip() for c in open(FLAGS.classes).readlines()]
logging.info('classes loaded')
resize_out_ratio = 4.0
fps_time = 0
def run_model():
print('Processing started.......')
try:
vid = cv2.VideoCapture(int(FLAGS.video))
except:
vid = cv2.VideoCapture(FLAGS.video)
out = None
frame = 0
color = (255, 0, 0)
thickness = 2
if FLAGS.output:
# by default VideoCapture returns float instead of int
width = int(vid.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(vid.get(cv2.CAP_PROP_FPS))
codec = cv2.VideoWriter_fourcc(*FLAGS.output_format)
out = cv2.VideoWriter(FLAGS.output, codec, fps, (width, height))
#create instance of SORT
mot_tracker = Sort()
rolling_data = {}
while True:
_, img = vid.read()
if img is None:
break
frame +=1
img_in = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_orig = img
img_in = tf.expand_dims(img_in, 0)
img_in = transform_images(img_in, FLAGS.size)
boxes, scores, classes, nums = yolo.predict(img_in, steps=1) # yolo prediction
dets = boxes[:,:nums[0],:].reshape(nums[0], 4) # filter pedest
trackers = mot_tracker.update(dets[classes[0][:nums[0]] == 0])
for d in trackers:
wh = np.flip(img.shape[0:2])
x1y1 = tuple((np.array(d[0:2]) * wh).astype(np.int32))
x2y2 = tuple((np.array(d[2:4]) * wh).astype(np.int32))
x1 = x1y1[0]
y1 = x1y1[1]
bbwh = (x2y2[0]-x1y1[0], x2y2[1]-x1y1[1])
w = bbwh[0]
h = bbwh[1]
try:
cropped = img_orig[y1:y1 + h, x1:x1 + w]
humans = model.inference(cropped, resize_to_default=(w > 0 and h > 0), upsample_size=resize_out_ratio)
humans.sort(key=lambda human: human.score, reverse=True)
skelett = TfPoseEstimator.draw_humans(cropped, humans, imgcopy=True)
img_orig[y1:y1 + h, x1:x1 + w] = skelett
img_orig2 = img_orig
except:
img_orig2 = img_orig
pass
intent = 0
if int(d[4]) in list(rolling_data.keys()):
if len(rolling_data[int(d[4])]) == 16:
seq = np.stack(np.array(rolling_data[int(d[4])]),axis=2)
seq = np.expand_dims(seq, axis=0)
intent = pred_func(seq) # classification output
else:
seq = np.stack(np.array([rolling_data[int(d[4])][-1]] * 16),axis=2)
seq = np.expand_dims(seq, axis=0)
intent = pred_func(seq) # classification output
# risky pedestrian identification thru box color
if intent == 1:
color = (0, 0, 255)
else:
color = (0, 255, 0)
img = cv2.rectangle(img_orig2, x1y1, x2y2, color, thickness)
img = cv2.putText(img, str(int(d[4])), org = (x1y1[0],x1y1[1]-5) , fontFace = cv2.FONT_HERSHEY_SIMPLEX, fontScale=1, color=color, thickness=thickness)
img = cv2.putText(img, "Frame No: {}".format(frame), (0, 30),cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (255, 0, 0), 2)
# storing the data for last 16 frames
try:
if int(d[4]) in list(rolling_data.keys()): # ID exists in dict
if len(rolling_data[int(d[4])]) < 16: # bboxes values for 16 frames
cropped_seq = []
cropped_img = cv2.resize(img_orig[x1y1[1]:x2y2[1], x1y1[0]:x2y2[0]],(100,100))
rolling_data[int(d[4])].append(np.asarray(cropped_img)) # append the image
else:
del rolling_data[int(d[4])][0] # delete oldest frame bbox and append latest frame bbox
cropped_seq = []
cropped_img = cv2.resize(img_orig[x1y1[1]:x2y2[1], x1y1[0]:x2y2[0]],(100,100))
rolling_data[int(d[4])].append(np.asarray(cropped_img))
else:
cropped_seq = []
cropped_img = cv2.resize(img_orig[x1y1[1]:x2y2[1], x1y1[0]:x2y2[0]],(100,100))
rolling_data[int(d[4])] = [np.asarray(cropped_img)]
except:
pass
if FLAGS.output:
out.write(img)
if cv2.waitKey(1) == ord('q'):
break
cv2.destroyAllWindows()
print('\nProcessing completed.......!!!')
print('Check video file in Volvo-DataX folder!')
return
```
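A side note on the rolling 16-frame buffer kept in `rolling_data` above: the same bookkeeping can be written more compactly with `collections.deque(maxlen=16)`, which evicts the oldest frame automatically. A sketch under our own names (not part of the original pipeline):

```python
from collections import defaultdict, deque

import numpy as np

MAXLEN = 16
# One bounded buffer per track ID; appending to a full deque drops the oldest frame.
rolling = defaultdict(lambda: deque(maxlen=MAXLEN))

def push_crop(track_id, crop):
    """Store the latest (100, 100) crop for this pedestrian."""
    rolling[track_id].append(crop)

def sequence_for(track_id):
    """Return a (1, 100, 100, 16) stack, padding by repeating the last frame."""
    frames = list(rolling[track_id])
    while len(frames) < MAXLEN:            # same padding idea as the notebook
        frames.append(frames[-1])
    return np.expand_dims(np.stack(frames, axis=2), axis=0)

# usage sketch
push_crop(7, np.zeros((100, 100)))
print(sequence_for(7).shape)               # (1, 100, 100, 16)
```

This removes both `if len(...) == 16` branches from the main loop, since the deque enforces the window size itself.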
###6. Run this to obtain the Model C output as a video file named **'Result_model_C.mp4'** in the Volvo-DataX folder.
Running run_model() takes roughly 7 minutes on a GPU and 30 minutes on a CPU.
```
run_model()
```
| github_jupyter |
```
# Challenge exercise: following task 5, invert the roles of the guessing game:
# the user picks an integer and the computer guesses it. As in task 5, but with
# roles swapped, a human judges whether each guess is too high, too low, or correct.
import random,math
def win ():
print(
'''
    ====== Congratulations, you win ======
."". ."",
| | / /
| | / /
| | / /
| |/ ;-._
} ` _/ / ;
| /` ) / /
| / /_/\_/\
|/ / |
( ' \ '- |
\ `. /
| |
| |
    ====== Congratulations, you win ======
'''
)
def lose ():
print(
'''
======YOU LOSE=======
.-" "-.
/ \
| |
|, .-. .-. ,|
| )(__/ \__)( |
|/ /\ \|
(@_ (_ ^^ _)
_ ) \_______\__|IIIIII|__/__________________________
(_)@8@8{}<________|-\IIIIII/-|___________________________>
)_/ \ /
(@ `--------`
======YOU LOSE=======
'''
)
def game_over ():
print(
'''
======GAME OVER=======
_________
/ ======= \
/ __________\
| ___________ |
| | - | |
| | | |
| |_________| |________________
\=____________/ )
/ """"""""""" \ /
/ ::::::::::::: \ =D-'
(_________________)
======GAME OVER=======
'''
)
def show_team ():
    print ('Made with love by 乱世一人舞, hiahiahiahia~')
def show_instruction():
    print ('''Enter a number as the upper bound of the secret integer,
then answer each guess until the computer finds it.''')
def menu():
    print ('''==== Game menu ====
1. Instructions
2. Start game
3. Quit
4. Credits
==== Game menu ====''')
def guess_game ():
    n = int(input('Enter an integer greater than 0 as the upper bound of the secret number, then press Enter: '))
    number = int(input('Enter your secret number: '))
    max_times = math.ceil(math.log(n, 2))
    guess_times = 0
    low, high = 1, n                      # current search interval for the computer
    while guess_times < max_times:
        guess_times += 1
        guess = random.randint(low, high)
        print('The computer guesses:', guess)
        print('It may guess', max_times, 'times in total')
        print('It has guessed', guess_times, 'times so far')
        if guess == number:
            win()
            print('The secret number is:', guess)
            print('The computer had', max_times - guess_times, 'guesses left')
            break
        elif guess > number:
            print('Sorry, computer, your guess is too high')
            high = guess - 1              # shrink the interval from above
        else:
            print('Sorry, computer, your guess is too low')
            low = guess + 1               # shrink the interval from below
        print('The computer still has', max_times - guess_times, 'guesses left')
    else:
        print('The secret number was', number)
        lose()
# main function
def main ():
    while True:
        menu ()
        choice = int(input('Enter your choice: '))
        if choice == 1:
            show_instruction()
        elif choice == 2:
            guess_game()
        elif choice == 3:
            game_over()
            break
        else:
            show_team()
# main program
if __name__ == '__main__':
    main()
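# Side note (ours, not part of the exercise): why does max_times = ceil(log2(n))
# suffice? If the computer always guesses the midpoint of the remaining interval
# (bisection) instead of a random value, the interval is halved on every wrong
# guess, so ceil(log2(n)) guesses are always enough. A minimal, input-free sketch:
import math

def bisect_guesses(number, n):
    low, high, guesses = 1, n, 0
    while True:
        guesses += 1
        guess = (low + high) // 2          # deterministic midpoint guess
        if guess == number:
            return guesses
        elif guess > number:
            high = guess - 1
        else:
            low = guess + 1

worst = max(bisect_guesses(k, 100) for k in range(1, 101))
print(worst, '<=', math.ceil(math.log(100, 2)))   # 7 <= 7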
# Exercise 1: write a function that computes the square root of the mean of n
# random integers, each drawn between m and k.
import random,math
def avg_of_randoms(m,k,n):
    total_num = 0
    for _ in range(n):
        total_num += random.randint(m,k)
    avg = total_num / n
    gen = math.sqrt(avg)
    print ('Square root of the mean of the', n, 'random integers:', gen)
def main ():
    m = int (input ('Enter the lower bound of the range: '))
    k = int (input ('Enter the upper bound of the range: '))
    n = int (input ('Enter how many integers to draw: '))
    avg_of_randoms(m,k,n)
# main program
if __name__ == '__main__':
    main()
# Exercise 2: for n random integers between m and k, compute sum(log2(x_i))
# and sum(1/log2(x_i)). Note: use m >= 2, since log2(1) = 0 would divide by zero.
import random,math
def sum_of_logRandoms(m,k,n):
    total_num1 = 0
    total_num2 = 0
    for _ in range(n):
        temp1 = math.log(random.randint(m,k), 2)
        total_num1 += temp1
        total_num2 += 1 / temp1
    print ('Sum of log2 over the', n, 'random integers in [m,k]:', total_num1)
    print ('Sum of 1/log2 over the', n, 'random integers in [m,k]:', total_num2)
def main ():
    m = int (input ('Enter the lower bound of the range: '))
    k = int (input ('Enter the upper bound of the range: '))
    n = int (input ('Enter how many integers to draw: '))
    sum_of_logRandoms(m,k,n)
# main program
if __name__ == '__main__':
    main()
# Exercise 3: write a function computing s = a + aa + aaa + ... (n terms),
# where a is a random integer in [1, 9]. For example, with a = 2 and n = 5:
# 2 + 22 + 222 + 2222 + 22222. The number of terms is read from the keyboard.
import random,math
def randoms_plus (n):
    total_num = 0
    base_num = random.randint(1,9)
    number = 0
    for _ in range(n):
        number = number * 10 + base_num   # a, aa, aaa, ...
        total_num += number
    print ('With', n, 'terms built from the digit', base_num, ': s =', total_num)
def main():
    n = int (input('Enter the number of terms and press Enter: '))
    randoms_plus(n)
# main program
if __name__ == '__main__':
    main()
```
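The sum in the last exercise also has a closed form: each term is `a*(10**k - 1)//9`, so `s = a*(10**(n+1) - 10 - 9*n)//81`. A quick self-contained check of the formula against the direct sum:

```python
def s_direct(a, n):
    # build a, aa, aaa, ... and accumulate, as in the exercise
    total, term = 0, 0
    for _ in range(n):
        term = term * 10 + a
        total += term
    return total

def s_closed(a, n):
    # closed form: sum_{k=1..n} a*(10**k - 1)/9 = a*(10**(n+1) - 10 - 9n)/81
    return a * (10**(n + 1) - 10 - 9 * n) // 81

print(s_direct(2, 5), s_closed(2, 5))   # 24690 24690
```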
| github_jupyter |
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from imblearn.under_sampling import RandomUnderSampler
from sklearn.neighbors import KNeighborsClassifier
# load the data into data frames
subbmission = pd.read_csv('./sample_submission_ejm25Dc.csv')
data = pd.read_excel('./Train/train_Data.xlsx')
data_test = pd.read_excel('./test_Data.xlsx')
#merging data into one large file to preprocess.
big_data = pd.concat([data, data_test])  # DataFrame.append was removed in pandas 2.0
#look at data
big_data.head()
data_test.head()
big_data.isnull().sum()
subbmission.info()
data.info()
data['PaymentMode'].unique()
dict_pdc = {'PDC_E':'PDC','Cheque':'PDC'}
big_data['PaymentMode']=big_data['PaymentMode'].str.replace('PDC_E', 'PDC', regex=True)
big_data['PaymentMode']=big_data['PaymentMode'].str.replace('Cheque', 'PDC', regex=True)
big_data['PaymentMode'].unique()
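# Side note (ours): a value -> value mapping like dict_pdc above can drive a single
# Series.replace call instead of two regex str.replace calls, since the values to
# change match exactly. Demo on a tiny stand-alone Series:
import pandas as pd
_demo = pd.Series(['PDC_E', 'Cheque', 'Billed'])
_demo = _demo.replace({'PDC_E': 'PDC', 'Cheque': 'PDC'})
print(_demo.unique())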
big_data['Top-up Month'].nunique()
big_data['Top-up Month'].value_counts()
big_data['Top-up Month'].value_counts().plot(kind='bar')
data.nunique().sort_values(ascending=False)
# Drop: AssetID, BranchID (identifier-like columns with too many unique values)
big_data.nunique()<100
big_data['InstlmentMode'].nunique()
big_data.isnull().sum()
# ZIPCODE is categorical, so encode it as a rank; the rank is based on the
# total disbursed amount of each zip code.
zip_ranking =big_data.groupby('ZiPCODE')['DisbursalAmount'].sum().reset_index()
zip_ranking['Ranking'] = zip_ranking['DisbursalAmount'].rank(ascending=False)
ranking_for_zipcode = zip_ranking.set_index('ZiPCODE').to_dict()['Ranking']
big_data['ZiPCODE'] = big_data['ZiPCODE'].map(ranking_for_zipcode)
big_data['ZiPCODE'].isnull().sum()
big_data.columns
test
X = big_data[['ID','Frequency', 'InstlmentMode', 'LoanStatus', 'PaymentMode',
'BranchID', 'Tenure', 'AssetCost', 'AmountFinance',
'DisbursalAmount', 'EMI', 'DisbursalDate', 'MaturityDAte', 'AuthDate',
'AssetID', 'ManufacturerID', 'SupplierID', 'LTV', 'SEX', 'AGE',
'MonthlyIncome', 'ZiPCODE', 'Top-up Month']]
X.dtypes
X = pd.get_dummies(data=X,columns=['Frequency','InstlmentMode','LoanStatus','PaymentMode','SEX'])
X['DisbursalDate_day']=X['DisbursalDate'].dt.day
X['DisbursalDate_month']=X['DisbursalDate'].dt.month
X['DisbursalDate_year']=X['DisbursalDate'].dt.year
X['MaturityDAte_day']=X['MaturityDAte'].dt.day
X['MaturityDAte_month']=X['MaturityDAte'].dt.month
X['MaturityDAte_year']=X['MaturityDAte'].dt.year
X['AuthDate_day']=X['AuthDate'].dt.day
X['AuthDate_month']=X['AuthDate'].dt.month
X['AuthDate_year']=X['AuthDate'].dt.year
X.corr()
X.drop(['DisbursalDate','MaturityDAte','AuthDate'],1,inplace=True)
test = X[X['Top-up Month'].isnull()]
X['Top-up Month'].dropna(inplace=True)
X.dropna(inplace=True)
y=X['Top-up Month']
X.drop('Top-up Month',1,inplace=True)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X, y, train_size=0.5)  # fraction of rows, not an absolute count of 50
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
def results(y_test,X_test ):
ypred=knn_clf.predict(X_test)
result = confusion_matrix(y_test, ypred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, ypred)
print("Classification Report:")
print (result1)
result2 = accuracy_score(y_test,ypred)
print("Accuracy:",result2)
knn_clf=KNeighborsClassifier(n_neighbors=7)
knn_clf.fit(X_train,y_train)
ypred=knn_clf.predict(X_test) #These are the predicted output values
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
result = confusion_matrix(y_test, ypred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, ypred)
print("Classification Report:")
print (result1)
result2 = accuracy_score(y_test,ypred)
print("Accuracy:",result2)
sns.heatmap(X.corr())  # the raw mixed-type frame can't be drawn directly; plot feature correlations instead
rus = RandomUnderSampler(random_state=0)
X_resampled, y_resampled = rus.fit_resample(X, y)  # called fit_sample in older imblearn versions
X_test
X_train,X_test,y_train,y_test = train_test_split(X_resampled, y_resampled, train_size=0.9, random_state=0)
knn_clf=KNeighborsClassifier(n_neighbors=7)
knn_clf.fit(X_train,y_train)
ypred=knn_clf.predict(X_test) #These are the predicted output values
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
result = confusion_matrix(y_test, ypred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, ypred)
print("Classification Report:")
print (result1)
result2 = accuracy_score(y_test,ypred)
print("Accuracy:",result2)
test_results = knn_clf.predict(test.drop(['ID','Top-up Month'],1).dropna())
test.drop('Top-up Month',1,inplace=True)
test.dropna(inplace=True)
test['Top-up Month'] = test_results
test=test[['ID','Top-up Month']]
test.to_csv('test_result.csv')
results(y_test, X_test)
X.columns
knn_clf=KNeighborsClassifier()
knn_clf.fit(X_resampled,y_resampled)
ypred=knn_clf.predict(X)
```
| github_jupyter |
# Multiclass Support Vector Machine exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
In this exercise you will:
- implement a fully-vectorized **loss function** for the SVM
- implement the fully-vectorized expression for its **analytic gradient**
- **check your implementation** using numerical gradient
- use a validation set to **tune the learning rate and regularization** strength
- **optimize** the loss function with **SGD**
- **visualize** the final learned weights
```
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
```
## CIFAR-10 Data Loading and Preprocessing
```
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print 'Training data shape: ', X_train.shape
print 'Validation data shape: ', X_val.shape
print 'Test data shape: ', X_test.shape
print 'dev data shape: ', X_dev.shape
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print mean_image[:10] # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print X_train.shape, X_val.shape, X_test.shape, X_dev.shape
```
## SVM Classifier
Your code for this section will all be written inside **cs231n/classifiers/linear_svm.py**.
As you can see, we have prefilled the function `svm_loss_naive` which uses for loops to evaluate the multiclass SVM loss function.
The `grad` returned from the function above is right now all zero. Derive the gradient for the SVM cost function and implement it inline inside the function `svm_loss_naive`. You will find it helpful to interleave your new code inside the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
```
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.00001)
print 'loss: %f' % (loss, )
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad)
```
### Inline Question 1:
It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? *Hint: the SVM loss function is not strictly speaking differentiable*
**Your Answer:** This can occur when evaluating the gradient at a point where the function is non-differentiable. The numerical gradient will simply yield an average-slope-like quantity; the analytic gradient, however, might explode (become infinite, NaN, and so on) or simply give incorrect results.
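To make the one-dimensional case concrete, here is a small standalone demo (not part of the assignment code) using f(x) = max(0, x): away from the kink the centered difference matches the analytic derivative, but exactly at the kink it reports the average of the two one-sided slopes.

```python
def f(x):
    return max(0.0, x)  # hinge-like function with a kink at x = 0

def centered_difference(func, x, h=1e-5):
    # the same two-sided estimate a typical gradient checker uses
    return (func(x + h) - func(x - h)) / (2.0 * h)

print(centered_difference(f, 1.0))   # ~1.0, matches the analytic slope for x > 0
print(centered_difference(f, -1.0))  # ~0.0, matches the analytic slope for x < 0
print(centered_difference(f, 0.0))   # 0.5: the average of the two slopes, matching neither
```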
```
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# The losses should match but your vectorized implementation should be much faster.
print 'difference: %f' % (loss_naive - loss_vectorized)
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss and gradient: computed in %fs' % (toc - tic)
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss and gradient: computed in %fs' % (toc - tic)
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'difference: %f' % difference
```
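For reference, one fully-vectorized formulation of the multiclass SVM loss and gradient is sketched below as a self-contained function. This is only a sketch, using the regularization convention loss = data term + reg * sum(W²); the assignment's `svm_loss_vectorized` in `cs231n/classifiers/linear_svm.py` may differ in details.

```python
import numpy as np

def svm_loss_vectorized_sketch(W, X, y, reg):
    """Multiclass SVM loss and gradient with no explicit loops.

    W: (D, C) weights, X: (N, D) data, y: (N,) labels, reg: regularization strength.
    """
    num_train = X.shape[0]
    scores = X.dot(W)                                   # (N, C)
    correct = scores[np.arange(num_train), y][:, None]  # (N, 1) correct-class scores
    margins = np.maximum(0, scores - correct + 1.0)     # delta = 1
    margins[np.arange(num_train), y] = 0                # correct class contributes no loss
    loss = margins.sum() / num_train + reg * np.sum(W * W)

    # Gradient: each positive margin contributes +x_i to its class column
    # and -x_i to the correct-class column.
    binary = (margins > 0).astype(float)                # (N, C)
    binary[np.arange(num_train), y] = -binary.sum(axis=1)
    dW = X.T.dot(binary) / num_train + 2 * reg * W
    return loss, dW
```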
### Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss and the gradient, and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
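The train loop you write in `LinearClassifier.train` is, at its core, just repeated minibatch sampling plus a gradient step. A minimal standalone sketch of that pattern (names here are illustrative, not the assignment's exact API):

```python
import numpy as np

def sgd_train(loss_and_grad, W, X, y, learning_rate=1e-3, num_iters=100, batch_size=32):
    """Vanilla minibatch SGD: sample a batch, evaluate loss/grad, step downhill."""
    loss_history = []
    num_train = X.shape[0]
    for _ in range(num_iters):
        idx = np.random.choice(num_train, batch_size, replace=True)  # with replacement is faster
        loss, grad = loss_and_grad(W, X[idx], y[idx])
        loss_history.append(loss)
        W = W - learning_rate * grad  # gradient step
    return W, loss_history
```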
```
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=5e4,
num_iters=1500, verbose=True)
toc = time.time()
print 'That took %fs' % (toc - tic)
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )
y_val_pred = svm.predict(X_val)
print 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-9, 5e-6]
regularization_strengths = [1e2, 1e4]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
for _ in np.arange(50):
i = 10 ** np.random.uniform(low=np.log10(learning_rates[0]), high=np.log10(learning_rates[1]))
j = 10 ** np.random.uniform(low=np.log10(regularization_strengths[0]), high=np.log10(regularization_strengths[1]))
svm = LinearSVM()
loss_hist = svm.train(X_train, y_train, learning_rate=i, reg=j,
num_iters=500, verbose=False)
y_train_pred = svm.predict(X_train)
y_val_pred = svm.predict(X_val)
accuracy = (np.mean(y_train == y_train_pred), np.mean(y_val == y_val_pred))
results[(i, j)] = accuracy
if accuracy[1] > best_val:
best_val = accuracy[1]
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Find the best learning rate and regularization strength
best_lr = 0.
best_reg = 0.
for lr, reg in sorted(results):
if results[(lr, reg)][1] == best_val:
best_lr = lr
best_reg = reg
break
# Train the best_svm with more iterations
best_svm = LinearSVM()
best_svm.train(X_train, y_train,
learning_rate=best_lr,
reg=best_reg,
num_iters=2000, verbose=True)
y_train_pred = best_svm.predict(X_train)
y_val_pred = best_svm.predict(X_val)
accuracy = (np.mean(y_train == y_train_pred), np.mean(y_val == y_val_pred))
print 'Best validation accuracy now: %f' % accuracy[1]
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'linear SVM on raw pixels final test set accuracy: %f' % test_accuracy
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
```
### Inline question 2:
Describe what your visualized SVM weights look like, and offer a brief explanation for why they look the way that they do.
**Your answer:** Each weight image looks like a blurred template (roughly an average) of the training images for its class; a linear classifier learns exactly one template per class and scores an image by how well it matches that template.
| github_jupyter |
<a href="https://colab.research.google.com/github/jkraybill/gpt-2/blob/finetuning/GPT2-finetuning2-345M.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
To try out GPT-2, do this:
- go to the "Runtime" menu and click "Change runtime type" and make sure this is a Python 3 notebook, running with GPU hardware acceleration.
- use the "Files" section to the left to upload two text files, "train.txt" (the text you want to train on) and "val.txt" (a held-out sample used for validation).
- run the steps below in order.
```
import os
import json
import random
import re
!git clone https://github.com/jkraybill/gpt-2.git
%cd gpt-2
!pip3 install -r requirements.txt
!sh download_model.sh 345M
```
The below step encodes your corpus into "NPZ" tokenized format for GPT-2.
```
!PYTHONPATH=src ./encode.py --in-text ../train.txt --out-npz train.txt.npz --model_name 345M
!PYTHONPATH=src ./encode.py --in-text ../val.txt --out-npz val.txt.npz --model_name 345M
```
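For the curious, the encoding step amounts to running the text through GPT-2's byte-pair encoder and saving the token ids as a numpy archive. Below is a rough standalone sketch of that idea; the `toy_encode` function is only a placeholder (the real BPE encoder lives in the repo's `src` directory), so treat this as illustration, not the repo's actual code.

```python
import numpy as np

def toy_encode(text):
    # Placeholder tokenizer: assigns ids by first appearance. The real
    # encoder uses GPT-2's byte-pair-encoding vocabulary instead.
    vocab = {}
    return [vocab.setdefault(token, len(vocab)) for token in text.split()]

def text_to_npz(in_text, out_npz):
    with open(in_text) as f:
        tokens = np.array(toy_encode(f.read()), dtype=np.int32)
    np.savez_compressed(out_npz, tokens)  # training later reloads this with np.load
```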
Training is below. I usually get usable results with "stop_after" anywhere from 800 to 3000, but you can try going even higher. 800 steps takes only a few minutes.
"sample_every" controls how often you get sample output from the trained model.
"save_every" controls how often the model is saved.
"learning_rate" is the optimizer's learning rate. 0.00005 is the rate I've gotten the best results with, but I think most people are running with significantly higher rates, so you could try adjusting it.
```
!PYTHONPATH=src ./trainval.py --dataset train.txt.npz --valset val.txt.npz --sample_every=1000 --save_every=25 --learning_rate=0.00005 --stop_after=60000 --model_name=345M --batch_length=512
```
The step below simply copies your trained model to the model directory, so the output will use your training. If you don't do this, you will be running against the trained GPT-2 model without your finetuning training.
```
!cp -r /content/gpt-2/checkpoint/run1/* /content/gpt-2/models/345M/
```
Run the below step to generate unconditional samples (i.e. "dream mode").
"top_k" controls how many options to consider per word (the larger, the more "diverse" the output - anything from 1 to about 50 usually works, I think values around 10 are pretty good).
"temperature" controls the sampling of the words, from 0 to 1 where 1 is the most "random".
"length" controls the number of words in each sample output.
This command will run continuously until you turn it off.
```
!python3 src/generate_unconditional_samples.py --top_k 20 --temperature 0.8 --length=300 --model_name=345M
```
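Roughly, here is what those two knobs do to the model's next-token distribution (a hedged numpy sketch of the standard top-k / temperature sampling recipe, not the repo's exact implementation):

```python
import numpy as np

def sample_next_token(logits, top_k=20, temperature=0.8):
    logits = np.asarray(logits, dtype=float) / temperature  # temperature < 1 sharpens the distribution
    if top_k > 0:
        kth_best = np.sort(logits)[-top_k]
        logits = np.where(logits < kth_best, -np.inf, logits)  # discard everything outside the top k
    probs = np.exp(logits - logits.max())  # stabilized softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```

With `top_k=1` this reduces to greedy decoding; larger `top_k` and a temperature closer to 1 make the output more varied.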
Run the command below to run in interactive / "completion" mode. You will get a prompt; just type in whatever prompt text you want, and the model will attempt to complete it "nsamples" times.
"top_k", "length", and "temperature" work as specified above.
```
!python3 src/interactive_conditional_samples.py --top_k 1 --length=30 --temperature 0.1 --nsamples 3 --model_name=345M
```
| github_jupyter |
# Feature extraction with tsfresh transformer
In this tutorial, we show how you can use sktime with [tsfresh](https://tsfresh.readthedocs.io) to first extract features from time series, so that we can then use any scikit-learn estimator.
## Preliminaries
You have to install tsfresh if you haven't already. To install it, uncomment the cell below:
```
# !pip install --upgrade tsfresh
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sktime.datasets import load_basic_motions
from sktime.datasets import load_arrow_head
from sktime.transformers.series_as_features.summarize import \
TSFreshFeatureExtractor
from sktime.forecasting.base import ForecastingHorizon
from sklearn.ensemble import RandomForestRegressor
from sktime.forecasting.compose import ReducedTimeSeriesRegressionForecaster
from sklearn.pipeline import make_pipeline
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
```
## Univariate time series classification data
For more details on the data set, see the [univariate time series classification notebook](https://github.com/alan-turing-institute/sktime/blob/master/examples/02_classification_univariate.ipynb).
```
X, y = load_arrow_head(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_train.head()
# class labels present in the training set
np.unique(y_train)
```
## Using tsfresh to extract features
```
# tf = TsFreshTransformer()
t = TSFreshFeatureExtractor(default_fc_parameters="efficient", show_warnings=False)
Xt = t.fit_transform(X_train)
Xt.head()
```
## Using tsfresh with sktime
```
classifier = make_pipeline(
TSFreshFeatureExtractor(default_fc_parameters="efficient", show_warnings=False),
RandomForestClassifier()
)
classifier.fit(X_train, y_train)
classifier.score(X_test, y_test)
```
## Multivariate time series classification data
```
X, y = load_basic_motions(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# multivariate input data
X_train.head()
t = TSFreshFeatureExtractor(default_fc_parameters="efficient", show_warnings=False)
Xt = t.fit_transform(X_train)
Xt.head()
```
## Univariate time series regression data
```
y = load_airline()
y_train, y_test = temporal_train_test_split(y)
regressor = make_pipeline(TSFreshFeatureExtractor(show_warnings=False, disable_progressbar=True), RandomForestRegressor())
forecaster = ReducedTimeSeriesRegressionForecaster(regressor, window_length=12)
forecaster.fit(y_train)
fh = ForecastingHorizon(y_test.index, is_relative=False)
y_pred = forecaster.predict(fh)
```
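Under the hood, the reduction forecaster tabularizes the series into sliding windows so that an ordinary regressor can be trained on (window, next value) pairs. A minimal sketch of that tabularization (illustrative only; sktime's internal implementation differs in details):

```python
import numpy as np

def sliding_window_tabularize(y, window_length=12):
    """Turn a univariate series into (window, next-value) regression pairs."""
    y = np.asarray(y, dtype=float)
    X = np.array([y[i:i + window_length] for i in range(len(y) - window_length)])
    targets = y[window_length:]
    return X, targets
```

Each row of `X` holds the previous `window_length` observations, and the matching entry of `targets` is the value to predict.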
| github_jupyter |
```
# !wget https://cdn.commonvoice.mozilla.org/cv-corpus-5.1-2020-06-22/id.tar.gz
# !tar -zxf id.tar.gz
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech/semisupervised-26-02-2021-part2.tar
# !mkdir part1-v2
# !tar -xf semisupervised-26-02-2021-part2.tar -C part1-v2
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech/semisupervised-26-02-2021-part3.tar
# !mkdir part2-v2
# !tar -xf semisupervised-26-02-2021-part3.tar -C part2-v2
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech/semisupervised-26-02-2021-part4.tar
# !mkdir part3-v2
# !tar -xf semisupervised-26-02-2021-part4.tar -C part3-v2
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech/semisupervised-24-03-2021-part1.tar
# !tar -xf semisupervised-24-03-2021-part1.tar
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech/semisupervised-24-03-2021-part2.tar
# !tar -xf semisupervised-24-03-2021-part2.tar
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech/semisupervised-24-03-2021-part3.tar
# !tar -xf semisupervised-24-03-2021-part3.tar
# !wget https://f000.backblazeb2.com/file/malay-dataset/speech-bahasa.zip
# !unzip speech-bahasa.zip
# !wget https://f000.backblazeb2.com/file/malay-dataset/streaming.zip -O wikipedia-asr.zip
# !unzip wikipedia-asr.zip
# !wget https://f000.backblazeb2.com/file/malaya-speech-model/data/news-speech.zip
# !wget https://f000.backblazeb2.com/file/malaya-speech-model/collections/transcript-news.json
# !unzip news-speech.zip -d news
# !wget https://f000.backblazeb2.com/file/malaya-speech-model/data/trainset-audiobook.tar.gz
# !wget https://f000.backblazeb2.com/file/malaya-speech-model/data/text-audiobook.tar.gz
# !tar -xf trainset-audiobook.tar.gz
# !tar -zxf text-audiobook.tar.gz
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import pandas as pd
from glob import glob
from tqdm import tqdm
import json
base_directory = '/home/husein/speech-bahasa'
df = pd.read_csv(f'{base_directory}/cv-corpus-5.1-2020-06-22/id/validated.tsv', sep = '\t')
df = df[(df['sentence'].str.len() > 5) & (df['sentence'].str.count(' ') > 0)]
print(df.shape)
id_commonvoice = []
for i in range(len(df)):
p = f"{base_directory}/cv-corpus-5.1-2020-06-22/id/clips/{df['path'].iloc[i]}"
t = df['sentence'].iloc[i]
if len(t) < 5:
continue
id_commonvoice.append((p, t))
len(id_commonvoice)
malay = glob(f'{base_directory}/part*/output-wav/*.wav')
len(malay)
malay.extend(glob(f'{base_directory}/part*/semisupervised/output-wav/*.wav'))
len(malay)
khalil = glob(f'{base_directory}/tolong-sebut/*.wav')
mas = glob(f'{base_directory}/sebut-perkataan-woman/*.wav')
husein = glob(f'{base_directory}/sebut-perkataan-man/*.wav')
len(khalil), len(mas), len(husein)
khalils = []
for i in tqdm(khalil[:-int(len(khalil) * 0.05)]):
try:
t = i.split('/')[-1].replace('.wav','')
text = f'tolong sebut {t}'
khalils.append((i, text))
except Exception as e:
print(e)
mass = []
for i in tqdm(mas[:-int(len(mas) * 0.05)]):
try:
t = i.split('/')[-1].replace('.wav','')
text = f'sebut perkataan {t}'
mass.append((i, text))
except Exception as e:
print(e)
huseins = []
for i in tqdm(husein[:-int(len(husein) * 0.05)]):
try:
t = i.split('/')[-1].replace('.wav','')
text = f'sebut perkataan {t}'
huseins.append((i, text))
except Exception as e:
print(e)
malays = []
for i in tqdm(malay):
try:
p = i.replace('output-wav','output-text')
with open(f'{p}.txt') as fopen:
text = fopen.read()
if len(text) < 3:
continue
malays.append((i, text))
except Exception as e:
print(e)
wikipedia = []
wavs = glob(f'{base_directory}/streaming/*wav')
for i in tqdm(wavs[:-int(len(wavs) * 0.05)]):
text = os.path.split(i)[1].replace('.wav', '')
wikipedia.append((i, text))
len(wikipedia)
news = []
wavs = glob(f'{base_directory}/news/audio/*wav')
with open(f'{base_directory}/transcript-news.json') as fopen:
transcript_news = json.load(fopen)
for i in tqdm(wavs[:-int(len(wavs) * 0.05)]):
index = i.split('/')[-1].replace('.wav','')
text = transcript_news[int(index)]
news.append((i, text))
audiobook = []
wavs = glob(f'{base_directory}/combined/*wav')
for i in tqdm(wavs):
t = '/'.join(i.split('<>')[1:])
t = t.split('.wav')[0]
t = t.replace('output-wav', 'output-text')
with open(f'{base_directory}/text-audiobook/{t}.wav.txt') as fopen:
text = fopen.read()
audiobook.append((i, text))
df = pd.read_csv(f'{base_directory}/haqkiem/metadata.csv', header = None, sep = '|')
txts = df.values.tolist()
haqkiem = []
for f in tqdm(txts[:-int(len(txts) * 0.05)]):
text = f[1]
text = text.split('.,,')[0]
f = f[0]
r = f'{base_directory}/haqkiem/{f}.wav'
haqkiem.append((r, text))
# import IPython.display as ipd
# ipd.Audio(audiobook[0][0])
audios = id_commonvoice + malays + wikipedia + news + audiobook + haqkiem + khalils + mass + huseins
audios, texts = zip(*audios)
len(texts)
import unicodedata
import re
import itertools
vocabs = [" ", "a", "e", "n", "i", "t", "o", "u", "s", "k", "r", "l", "h", "d", "m", "g", "y", "b", "p", "w", "c", "f", "j", "v", "z", "0", "1", "x", "2", "q", "5", "3", "4", "6", "9", "8", "7"]
def preprocessing_text(string):
    string = unicodedata.normalize('NFC', string.lower())
    # replace any character outside the vocabulary with a space
    string = ''.join([c if c in vocabs else ' ' for c in string])
    string = re.sub(r'[ ]+', ' ', string).strip()
    # collapse runs of the same character to at most two repeats
    string = ''.join(''.join(s)[:2] for _, s in itertools.groupby(string))
    return string
processed_text = [preprocessing_text(t) for t in tqdm(texts)]
audios[-100:], processed_text[-100:]
with open('bahasa-asr-train.json', 'w') as fopen:
json.dump({'X': audios, 'Y':processed_text}, fopen)
# import malaya_speech
# tokenizer = malaya_speech.subword.generate_tokenizer(processed_text, max_subword_length = 3)
# malaya_speech.subword.save(tokenizer, 'transducer.subword')
# tokenizer = malaya_speech.subword.load('transducer.subword')
# malaya_speech.subword.encode(tokenizer, 'i hate', add_blank = True)
# malaya_speech.subword.decode(tokenizer, [0, 2, 133, 875])
# from pydub import AudioSegment
# import numpy as np
# sr = 16000
# def mp3_to_wav(file, sr = sr):
# audio = AudioSegment.from_file(file)
# audio = audio.set_frame_rate(sr).set_channels(1)
# sample = np.array(audio.get_array_of_samples())
# return malaya_speech.astype.int_to_float(sample), sr
# def generator(maxlen = 18, min_length_text = 2):
# for i in tqdm(range(len(audios))):
# try:
# if audios[i].endswith('.mp3'):
# wav_data, _ = mp3_to_wav(audios[i])
# else:
# wav_data, _ = malaya_speech.load(audios[i])
# if (len(wav_data) / sr) > maxlen:
# # print(f'skipped audio too long {audios[i]}')
# continue
# if len(processed_text[i]) < min_length_text:
# print(f'skipped text too short {audios[i]}')
# continue
# yield {
# 'waveforms': wav_data.tolist(),
# 'waveform_lens': [len(wav_data)],
# 'targets': malaya_speech.subword.encode(tokenizer, processed_text[i], add_blank = False),
# }
# except Exception as e:
# print(e)
# generator = generator()
# import os
# import tensorflow as tf
# os.system('rm bahasa-asr/data/*')
# DATA_DIR = os.path.expanduser('bahasa-asr/data')
# tf.gfile.MakeDirs(DATA_DIR)
# shards = [{'split': 'train', 'shards': 1000}]
# import malaya_speech.train as train
# train.prepare_dataset(generator, DATA_DIR, shards, prefix = 'bahasa-asr')
```
| github_jupyter |
## Structure solving as meta-optimization (demo)
This is going to be so cool!
In the work of Senior et al. (2019), Yang et al. (2020), and others, static optimization constraints are predicted and then provided to a fixed, general-purpose optimization algorithm (with some manual tuning of optimization parameters for the specific task).
Fascinatingly, there is a broad modern literature on the use of neural networks to learn to optimize. For example, Andrychowicz et al. (2016) demonstrate learning a domain-specific optimization algorithm that was subsequently shown to outperform the best-in-class optimizers available for that problem (optimizers that were themselves the legacy of more than a decade of painstaking effort).
This is amazing because there's the potential to learn better and better optimizers from data, which can in turn save time and money for future work; it's also quite interesting to think of how an optimizer might learn to become specialized to individual optimization problems (such as navigating the energy landscape of a protein structure).
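As a cartoon of that idea in code (a hedged sketch: Andrychowicz et al. actually use a small recurrent network, applied coordinatewise and trained by backpropagating through the unrolled optimization; the hand-written momentum rule below merely stands in for the learned update network):

```python
import numpy as np

def update_net(grad, state, lr=0.1, momentum=0.9):
    """Stand-in for a learned update rule: (gradient, state) -> (update, new state).

    In "learning to optimize", this function would be a neural network whose
    weights are themselves trained so that the optimizee's loss falls quickly."""
    state = momentum * state + grad
    return -lr * state, state

def run_optimizer(grad_fn, x, num_steps=200):
    """Apply the update rule repeatedly, as the meta-learned optimizer would."""
    state = np.zeros_like(x)
    for _ in range(num_steps):
        update, state = update_net(grad_fn(x), state)
        x = x + update
    return x

# Toy objective f(x) = ||x||^2 with gradient 2x; the optimizer drives x toward 0.
x_final = run_optimizer(lambda x: 2.0 * x, np.array([3.0, -2.0]))
```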
<img src="https://upload.wikimedia.org/wikipedia/commons/9/91/Folding_funnel_schematic.svg" alt="Folding funnel schematic.svg" height="480" width="463">
(Image [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0) / [Thomas Splettstoesser](commons.wikimedia.org/wiki/User:Splette); [original](https://commons.wikimedia.org/wiki/File:Folding_funnel_schematic.svg#/media/File:Folding_funnel_schematic.svg))
### Work in progress
The plan is to modify the [GraphNetEncoder](https://github.com/google/jax-md/blob/master/jax_md/nn.py#L650) and [EnergyGraphNet](https://github.com/google/jax-md/blob/master/jax_md/energy.py#L944) from jax-md to also accept as input evolutionary data and not to predict a single energy value but to predict several things including:
1. A future conformation,
2. A distance matrix,
3. Bond angles, and
4. Compound interaction strengths
The simplest way to include (1) in a loss seems to be to have one of the model outputs be a coordinate for each node; these coordinates are passed to a conventional jax-md energy function, which is then used to incentivize mapping input conformations to output conformations with lower energy.
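A minimal sketch of that loss, with plain numpy stand-ins throughout: `toy_energy` below is a simple pairwise spring energy standing in for a real jax-md energy function, and the "predicted" coordinates would come from the graph network.

```python
import numpy as np

def toy_energy(coords, rest_length=1.0, k=1.0):
    """Pairwise harmonic-spring energy; a stand-in for a jax-md energy function."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1) + 1e-12)  # epsilon avoids sqrt(0) issues
    i, j = np.triu_indices(len(coords), k=1)            # count each pair once
    return 0.5 * k * ((dists[i, j] - rest_length) ** 2).sum()

def conformation_loss(input_coords, predicted_coords, energy_fn=toy_energy):
    """Penalize predictions only when they *raise* the energy relative to the input,
    so the model is incentivized to map conformations to lower-energy ones."""
    return np.maximum(0.0, energy_fn(predicted_coords) - energy_fn(input_coords))
```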
It looks like (2) and (3) would be straightforward if the model returned edge representations in some form. For now, (4) could also be accomplished in this way.
The philosophy regarding (4) is that when folding a new protein you could obtain its interaction profile fairly easily, and a model previously trained to use interaction profiles as a guide (in the same way as it uses evolutionary data) might then be able to solve the structure more easily. Succeeding with that means architecting the model in a way consistent with that use case.
This might be done in a variety of ways. In the spirit of our learned optimizer, we might wish to learn an optimizer that not only minimizes energy but predicts conformations that are more and more consistent with interaction profiles for a set of compounds. To do this, it seems we may need to run a simulator of those structure/compound interactions (which would be computationally expensive but not impossible, especially for important structures). The tendency of the learned energy minimizer to minimize energy could then be fine-tuned based on the interactions of the produced structures with compounds.
Or we might treat the compound interactions simply as a guide to better learn how to extract information from evolutionary data, and ignore their predictions at structure inference time.
Alternatively, we might consider compound-polymer interaction strengths as a type of input, like evolutionary data, that needs to be correctly encoded but need not be predicted by the network; it is simply yet another kind of input information that can help the model learn to predict low-energy structures.
It's possible we might want to synergize with the energy-predicting approach of jax-md, given that the task of learning to predict structures of lower energy seems closely related to that of computing energies; training node functions to compute partial energies might thus be useful pre-training for learning to perform position updates that reduce energy.
### Setup
Ensure the most recent version of Flatland is installed.
```
!pip install git+git://github.com/cayley-group/flatland.git --quiet
```
### Loading examples
Here we use a [TensorFlow Datasets](https://github.com/tensorflow/datasets) definition of a dataset generated using the Flatland environment. This provides a simplified interface for obtaining a [tf.data](https://www.tensorflow.org/guide/data) Dataset, which has a variety of convenient methods for handling the input example stream (e.g. batching, shuffling, caching, and pre-fetching).
Let's load an example from the "flatland_mock" dataset to see what the structure and data type of examples will be.
```
from absl import logging
logging.set_verbosity(logging.INFO)
import tensorflow as tf
import tensorflow_datasets as tfds
import flatland.dataset
ds = tfds.load('flatland_mock', split="train")
assert isinstance(ds, tf.data.Dataset)
ds = ds.cache().repeat()
for example in tfds.as_numpy(ds):
break
example
```
## Train demo solver
Here we have a wrapper to train the demo solver, which currently only trains an energy-predicting model but will subsequently be transfer-learned to predict lower-energy structures.
```
from flatland.train import train_demo_solver
from absl import logging
logging.set_verbosity(logging.INFO)
params = train_demo_solver(num_training_steps=1,
training_log_every=1,
batch_size=16)
from flatland.train import demo_example_stream, graph_network_neighbor_list
from flatland.train import OrigamiNet
from jax_md import space
from functools import partial
box_size = 10.862
batch_size = 16
iter_examples = demo_example_stream(
batch_size=batch_size, split="train")
positions, energies, forces = next(iter_examples)
_, polymer_length, polymer_dimensions = positions.shape
displacement, shift = space.periodic(box_size)
neighbor_fn, init_fn, apply_fn = graph_network_neighbor_list(
network=OrigamiNet,
displacement_fn=displacement,
box_size=box_size,
polymer_length=polymer_length,
polymer_dimensions=polymer_dimensions,
r_cutoff=3.0,
dr_threshold=0.0)
neighbor = neighbor_fn(positions[0], extra_capacity=6)
structure_fn = partial(apply_fn, params)
structure = structure_fn(positions[0], neighbor)[1:]
structure
# A polymer of length 10 and dimension 2
structure.shape
%timeit structure_fn(next(iter_examples)[0][0], neighbor)
```
## Long auto-regressive search
Here we will provide some minimal experimentation with using the model to actually optimize a structure by simply applying the structure minimizer repeatedly. We'll characterize what happens to the energy, e.g. does it consistently go down over time, or does it diverge after a certain length of such a "rollout"?
```
# WIP
```
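A minimal sketch of the planned experiment, with a toy contraction standing in for the trained structure minimizer and a quadratic stand-in for the energy function (both hypothetical placeholders):

```python
import numpy as np

def rollout_energies(structure_fn, energy_fn, positions, n_steps=50):
    """Repeatedly apply the structure minimizer and record the energy trace."""
    energies = []
    for _ in range(n_steps):
        positions = structure_fn(positions)
        energies.append(energy_fn(positions))
    return np.array(energies)

# toy stand-ins: a contractive "minimizer" and a quadratic "energy"
toy_step = lambda pos: 0.9 * pos
toy_energy = lambda pos: float(np.sum(pos ** 2))

trace = rollout_energies(toy_step, toy_energy, np.ones((10, 2)))
```

For the real model, divergence would show up as `np.diff(trace)` turning positive after some step of the rollout.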
## Genetic + short auto-regressive
Presuming the previous approach won't be stable under long rollouts, we'll use it only over somewhat short rollouts (the horizon over which these are stable) in conjunction with an evolutionary optimization approach to progressively determine better and better optimization starting points.
```
# WIP
```
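A minimal sketch of the evolutionary outer loop (a simple elitist mutation scheme over candidate starting points; the quadratic `score_fn` here is a hypothetical stand-in for the energy reached after a short rollout):

```python
import numpy as np

def evolve_starting_points(score_fn, population, n_generations=20,
                           elite_frac=0.2, noise=0.1, seed=0):
    """Elitist evolutionary search: keep the best candidates, refill the
    population with noisy copies of them, and repeat."""
    rng = np.random.default_rng(seed)
    n_elite = max(1, int(len(population) * elite_frac))
    for _ in range(n_generations):
        scores = np.array([score_fn(p) for p in population])
        elite = population[np.argsort(scores)[:n_elite]]  # lower score = better
        children = elite[rng.integers(n_elite, size=len(population) - n_elite)]
        children = children + noise * rng.standard_normal(children.shape)
        population = np.concatenate([elite, children])
    scores = np.array([score_fn(p) for p in population])
    return population[np.argmin(scores)]

# toy objective: squared norm of a 2-D "conformation" (minimum at the origin)
rng = np.random.default_rng(0)
best = evolve_starting_points(lambda p: float(np.sum(p ** 2)),
                              rng.standard_normal((30, 2)))
```

Because the elite candidates are carried over unmutated, the best score is non-increasing across generations.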
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Your first neural network: an introduction to classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are **best-effort**, we cannot guarantee that this translation is accurate or reflects the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to translate or review community translations, please contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja).
This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand every detail; this is a fast-paced overview of a complete TensorFlow program, with the details explained as you go.
This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow.
```
# Import TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
```
## Load the Fashion MNIST dataset
This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset, which contains 70,000 grayscale images in 10 categories. Each is a low-resolution (28×28 pixel) image showing a single article of clothing, as in the figure below.
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, which often appears as the "Hello, World" of machine learning for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the Fashion MNIST we will use here.
This guide uses Fashion MNIST for variety, and because it is a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They are good starting points to test and debug code.
Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. With TensorFlow, you can import and load the Fashion MNIST data directly, as shown below.
```
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
The loaded dataset consists of NumPy arrays:
* The `train_images` and `train_labels` arrays are the **training set**, the data the model uses to learn.
* The trained model is tested against the **test set**: the `test_images` and `test_labels` arrays.
The images are 28×28 NumPy arrays, with pixel values that are integers between 0 and 255. The **labels** are an array of integers from 0 to 9, each corresponding to a **class** of clothing as shown in the table below.
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the **class names** above are not included with the dataset, store them here to use later when plotting the images.
```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Explore the data
Let's explore the format of the dataset before training the model. As shown below, the training set contains 60,000 images, each 28×28 pixels.
```
train_images.shape
```
Likewise, the training set contains 60,000 labels.
```
len(train_labels)
```
Each label is an integer between 0 and 9.
```
train_labels
```
The test set contains 10,000 images, each again represented as 28×28 pixels.
```
test_images.shape
```
And the test set contains 10,000 labels.
```
len(test_labels)
```
## Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image, you will see that the pixel values fall in the range 0 to 255.
```
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
```
Scale these values to a range of 0 to 1 before feeding them to the neural network. To do so, divide the pixel values by 255.
It is important that the **training set** and the **test set** are preprocessed in the same way.
```
train_images = train_images / 255.0
test_images = test_images / 255.0
```
Let's display the first 25 images from the **training set** with their class names. Before building and training the network, verify that the data is in the correct format.
```
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
```
## Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
### Set up the layers
The basic building block of a neural network is the **layer**. Layers extract "representations" from the data fed into them. Hopefully, these representations are more "meaningful" for the problem at hand.
Most deep learning models consist of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training.
```
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
```
The first layer in this network, `tf.keras.layers.Flatten`, transforms the images from a two-dimensional array (of 28×28 pixels) to a one-dimensional array of 28×28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node **softmax** layer that returns an array of 10 probabilities that sum to 1. Each node outputs the probability that the current image belongs to one of the 10 classes.
### Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's **compile** step:
* **Loss function**: measures how accurate the model is during training. You want to minimize this function to steer the model in the right direction.
* **Optimizer**: determines how the model is updated based on the data it sees and the value of its loss function.
* **Metrics**: used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
```
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## Train the model
Training the neural network requires the following steps:
1. Feed the training data to the model: in this example, the `train_images` and `train_labels` arrays.
2. The model learns to associate images and labels.
3. Ask the model to make predictions about a test set: in this example, the `test_images` array. Then verify that the predictions match the labels in the `test_labels` array.
To start training, call the `model.fit` method, so called because it "fits" the model to the training data.
```
model.fit(train_images, train_labels, epochs=5)
```
As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (i.e. 88%) on the training data.
## Evaluate accuracy
Next, compare how the model performs on the test dataset.
```
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
```
As you can see, the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of **overfitting**: the phenomenon where a machine learning model performs worse on new data than it did during training.
## Make predictions
With the model trained, you can use it to make predictions about images.
```
predictions = model.predict(test_images)
```
Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction.
```
predictions[0]
```
A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. Let's see which label has the highest confidence value.
```
np.argmax(predictions[0])
```
So the model is most confident that this image is an ankle boot, `class_names[9]`. Let's check the test label to see whether this is correct.
```
test_labels[0]
```
We can graph all 10 channels to look at the full set of class predictions.
```
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
Let's look at the 0th image, its prediction, and the prediction array.
```
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
```
Let's plot several images with their predictions. Correct prediction labels are shown in blue and incorrect ones in red. The number gives the percentage (out of 100) for the predicted label. Note that the model can be wrong even when it looks very confident.
```
# Plot the first X test images, their predicted labels, and the true labels.
# Correct predictions are shown in blue, incorrect ones in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
```
Finally, use the trained model to make a prediction about a single image.
```
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
```
`tf.keras` models are built to make predictions on a **batch**, or collection, of examples at once. Accordingly, even when using a single image, you need to add it to a list:
```
# Add the image to a batch where it is the only member
img = (np.expand_dims(img,0))
print(img.shape)
```
Now make a prediction:
```
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
The return value of `model.predict` is a list of lists, one list for each image in the batch. Grab the prediction for our (only) image in the batch:
```
np.argmax(predictions_single[0])
```
And so, as before, the model predicted a label of 9.
# Crossentropy method
This notebook will teach you to solve reinforcement learning problems with the crossentropy method. We'll follow up by scaling everything up and using a neural network policy.
```
# In Google Colab, uncomment this:
# !wget https://bit.ly/2FMJP5K -O setup.py && bash setup.py
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
    %env DISPLAY=:1
import gym
import numpy as np
import pandas as pd
env = gym.make("Taxi-v2")
env.reset()
env.render()
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
```
# Create stochastic policy
This time our policy should be a probability distribution.
```policy[s,a] = P(take action a | in state s)```
Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.
Please initialize the policy __uniformly__, that is, the probabilities of all actions should be equal.
```
policy = np.full((n_states, n_actions), 1. / n_actions)
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
```
# Play the game
Just like before, but we also record all states and actions we took.
```
def generate_session(policy, t_max=10**4):
"""
Play game until end or for t_max ticks.
:param policy: an array of shape [n_states,n_actions] with action probabilities
:returns: list of states, list of actions and sum of rewards
"""
states, actions = [], []
total_reward = 0.
s = env.reset()
for t in range(t_max):
a = np.random.choice([0, 1, 2, 3, 4, 5], p=policy[s])
new_s, r, done, info = env.step(a)
# Record state, action and add up reward to states,actions and total_reward accordingly.
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert isinstance(r, (float, np.floating))  # np.float was removed in newer NumPy versions
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [
100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [
100], label="90'th percentile", color='red')
plt.legend()
```
### Crossentropy method steps (2pts)
```
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you're confused, see examples below. Please don't assume that states are integers (they won't be later).
"""
reward_threshold = np.percentile(rewards_batch, percentile)
elite_states = []
elite_actions = []
for session_i, reward in enumerate(rewards_batch):
if reward >= reward_threshold:
elite_states.extend(states_batch[session_i])
elite_actions.extend(actions_batch[session_i])
return elite_states, elite_actions
states_batch = [
[1, 2, 3], # game1
[4, 2, 0, 2], # game2
[3, 1] # game3
]
actions_batch = [
[0, 2, 4], # game1
[3, 2, 0, 1], # game2
[3, 3] # game3
]
rewards_batch = [
3, # game1
4, # game2
5, # game3
]
test_result_0 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=0)
test_result_40 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=30)
test_result_90 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=90)
test_result_100 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=100)
assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \
and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\
"For percentile 0 you should return all states and actions in chronological order"
assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \
np.all(test_result_40[1] == [3, 2, 0, 1, 3, 3]),\
"For percentile 30 you should only select states/actions from two first"
assert np.all(test_result_90[0] == [3, 1]) and \
np.all(test_result_90[1] == [3, 3]),\
"For percentile 90 you should only select states/actions from one game"
assert np.all(test_result_100[0] == [3, 1]) and\
np.all(test_result_100[1] == [3, 3]),\
"Please make sure you use >=, not >. Also double-check how you compute percentile."
print("Ok!")
def update_policy(elite_states, elite_actions):
"""
Given old policy and a list of elite states/actions from select_elites,
return new updated policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurrences of s_i and a_i in elite states/actions]
Don't forget to normalize policy to get valid probabilities and handle 0/0 case.
In case you never visited a state, set probabilities for all actions to 1./n_actions
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
for state, action in zip(elite_states, elite_actions):
new_policy[state, action] += 1
for row in new_policy:
total = sum(row)
if total:
row /= total
else:
row.fill(1/n_actions)
return new_policy
elite_states, elite_actions = ([1, 2, 3, 4, 2, 0, 2, 3, 1], [
0, 2, 4, 3, 2, 0, 1, 3, 3])
new_policy = update_policy(elite_states, elite_actions)
assert np.isfinite(new_policy).all(
), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero."
assert np.all(
new_policy >= 0), "Your new policy can't have negative action probabilities"
assert np.allclose(new_policy.sum(
axis=-1), 1), "Your new policy should be a valid probability distribution over actions"
reference_answer = np.array([
[1., 0., 0., 0., 0.],
[0.5, 0., 0., 0.5, 0.],
[0., 0.33333333, 0.66666667, 0., 0.],
[0., 0., 0., 0.5, 0.5]])
assert np.allclose(new_policy[:4, :5], reference_answer)
print("Ok!")
```
# Training loop
Generate sessions, select N best and fit to those.
```
from IPython.display import clear_output
def show_progress(rewards_batch, log, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# reset policy just in case
policy = np.ones([n_states, n_actions])/n_actions
n_sessions = 250 # sample this many sessions
percentile = 20 # take this percent of session with highest rewards
learning_rate = 0.5  # weight of the new policy in the update (0 = keep old policy, 1 = replace it)
log = []
for i in range(100):
    %time sessions = [generate_session(policy) for _ in range(n_sessions)]  # generate a list of n_sessions new sessions
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites( states_batch, actions_batch, rewards_batch, percentile)
new_policy = update_policy(elite_states, elite_actions)
policy = learning_rate*new_policy + (1-learning_rate)*policy
# display results on chart
show_progress(rewards_batch, log)
```
# Digging deeper: approximate crossentropy with neural nets

In this section we will train a neural network policy for a continuous state space game.
```
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
plt.imshow(env.render("rgb_array"))
# create agent
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(hidden_layer_sizes=(20, 20),
activation='tanh',
warm_start=True, # keep progress between .fit(...) calls
max_iter=1 # make only 1 iteration on each .fit(...)
)
# initialize agent to the dimension of the state space and the number of actions
agent.fit([env.reset()]*n_actions, range(n_actions))
def generate_session(t_max=1000):
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# predict array of action probabilities
probs = agent.predict_proba([s])[0]
        a = np.random.choice([0, 1], p=probs)  # sample an action with these probabilities
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
    sessions = [generate_session() for _ in range(n_sessions)]  # generate a list of n_sessions new sessions
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = select_elites( states_batch, actions_batch, rewards_batch, percentile)
agent.fit(elite_states, elite_actions)
show_progress(rewards_batch, log, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
```
# Results
```
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices
```
# Homework part I
### Tabular crossentropy method
You may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then falls back into the -50/-100 range. This is in part because the environment has some innate randomness: the starting positions of the passenger and driver change from episode to episode.
### Tasks
- __1.1__ (1 pts) Find out how the algorithm's performance changes if you use different values of percentile and n_sessions.
- __1.2__ (2 pts) Tune the algorithm to end up with a positive average score.
It's okay to modify the existing code.
```<Describe what you did here. Preferably with plot/report to support it.>```
# Homework part II
### Deep crossentropy method
By this moment you should have gotten a high enough score on [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) to consider it solved (see the link). It's time to try something harder.
* if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help.
### Tasks
* __2.1__ (3 pts) Pick one of environments: MountainCar-v0 or LunarLander-v2.
* For MountainCar, get average reward of __at least -150__
* For LunarLander, get average reward of __at least +50__
See the tips section below, it's kinda important.
__Note:__ If your agent is below the target score, you'll still get most of the points depending on the result, so don't be afraid to submit it.
* __2.2__ (bonus: 4++ pt) Devise a way to speed up training at least 2x against the default version
* Obvious improvement: use [joblib](https://www.google.com/search?client=ubuntu&channel=fs&q=joblib&ie=utf-8&oe=utf-8)
* Try re-using samples from 3-5 last iterations when computing threshold and training
* Experiment with amount of training iterations and learning rate of the neural network (see params)
* __Please list what you did in anytask submission form__
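For the sample-reuse idea in 2.2, one possible sketch is a small buffer that keeps sessions from the last few iterations (the names here are illustrative, not part of the assignment code):

```python
from collections import deque

class SessionBuffer:
    """Keep sessions from the most recent iterations so they can be reused
    when computing the elite threshold and training the agent."""
    def __init__(self, n_iterations=4):
        self._batches = deque(maxlen=n_iterations)

    def add(self, sessions):
        self._batches.append(sessions)

    def all_sessions(self):
        # flatten the retained iterations into one list of sessions
        return [s for batch in self._batches for s in batch]

buf = SessionBuffer(n_iterations=3)
for i in range(5):
    buf.add([("states_%d" % i, "actions_%d" % i, float(i))])
# only the 3 most recent iterations survive
recent = buf.all_sessions()
```

In the training loop you would call `buf.add(sessions)` each iteration and feed `buf.all_sessions()` to `select_elites` instead of only the current batch.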
### Tips
* Gym page: [mountaincar](https://gym.openai.com/envs/MountainCar-v0), [lunarlander](https://gym.openai.com/envs/LunarLander-v2)
* Sessions for MountainCar may last for 10k+ ticks. Make sure ```t_max``` param is at least 10k.
* Also, it may be a good idea to cut off sessions via ">" rather than ">=". If 90% of your sessions get a reward of -10k and 10% are better, then with percentile 20 as the threshold, R >= threshold __fails to cut off the bad sessions__ while R > threshold works alright.
* _issue with gym_: Some versions of gym limit game time by 200 ticks. This will prevent cem training in most cases. Make sure your agent is able to play for the specified __t_max__, and if it isn't, try `env = gym.make("MountainCar-v0").env` or otherwise get rid of TimeLimit wrapper.
* If you use old _swig_ lib for LunarLander-v2, you may get an error. See this [issue](https://github.com/openai/gym/issues/100) for solution.
* If it doesn't train, it's a good idea to plot the reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)
* 20-neuron network is probably not enough, feel free to experiment.
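A quick numeric check of the ">" vs ">=" tip above (the numbers are made up for illustration):

```python
import numpy as np

# 90 bad sessions at -10k, 10 good sessions at +100
rewards = np.array([-10000.0] * 90 + [100.0] * 10)
threshold = np.percentile(rewards, 20)   # lands exactly on -10000 here

kept_ge = int(np.sum(rewards >= threshold))  # ">=" keeps every session
kept_gt = int(np.sum(rewards > threshold))   # ">" keeps only the good ones
```

With ">=" all 100 sessions count as "elite", so the policy learns from mostly terrible sessions; with ">" only the 10 good ones survive.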
### Bonus tasks
* __2.3 bonus__ Try to find a network architecture and training params that solve __both__ environments above (_Points depend on implementation. If you attempted this task, please mention it in anytask submission._)
* __2.4 bonus__ Solve continuous action space task with `MLPRegressor` or similar.
* Start with ["Pendulum-v0"](https://github.com/openai/gym/wiki/Pendulum-v0).
* Since your agent only predicts the "expected" action, you will have to add noise to ensure exploration.
* [MountainCarContinuous-v0](https://gym.openai.com/envs/MountainCarContinuous-v0), [LunarLanderContinuous-v2](https://gym.openai.com/envs/LunarLanderContinuous-v2)
* 4 points for solving. Slightly less for getting some results below solution threshold. Note that discrete and continuous environments may have slightly different rules aside from action spaces.
If you're still feeling unchallenged, consider the project (see other notebook in this folder).
# Linear regression estimate quality (bivariate with Gaussian noise)
Up to now, the regression models with [1](LinearRegressionUnivariate.ipynb) or [2](LinearRegressionBivariate.ipynb) features were based on an infinite-length dataset. As a consequence, all estimates were (almost) perfect.
In a given "real life" application, the dataset might be limited for many reasons, for example:
- data collection is slow (low frequency) and has started only recently
- the phenomenon to explain is not that stable (stationary), and its quasi-stationarity time interval corresponds to a small dataset
Data analysis and modeling are still possible, but we must then account for variability.
### Learning goals
- Learn about the Gaussian linear model
- Show the impact of reduced length dataset on estimate quality (bias, variance)
- __Theory__ and experiments using small length training datasets
- __Confidence intervals__ on the estimates
### References
- Lectures notes on ordinary least squares, Telecom Paris, François Portier and Anne Sabourin (Unpublished)
- High-Dimensional Statistics, chapter 2 : Fixed Design linear regression, [MIT 18.S997 2015](https://ocw.mit.edu/courses/mathematics/18-s997-high-dimensional-statistics-spring-2015/lecture-notes/MIT18_S997S15_Chapter2.pdf)
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.preprocessing import normalize as skNormalize
import pandas as pd
import math
import scipy.stats as stats
import seaborn as sns
```
## Helpers
```
def plotHistParams(x, label, ax, span, nBins = 100, pdfRefX=None, pdfRefY=None):
"""Plot the histogram of x, with empirical mean and variance in the title, optionally overlaying a reference PDF"""
ax.hist(x, bins=nBins, density=True, range=span, label='histogram')
mu = np.mean(x)
x_c = x - mu
ax.set_title('%s\n$\mu$=%.3f, $\sigma^2$=%.3f' % (label, mu, np.dot(x_c,x_c) / len(x)))
if pdfRefX is not None:
ax.plot(pdfRefX, pdfRefY, label='ref')
ax.legend()
def plotToRef(x, y, ref, ax, title, xLabel=None, yErr=None):
""" Plot y and a reference (or target) value"""
if yErr is None:
ax.plot(x, y)
else:
ax.errorbar(x, y, yerr=yErr, ecolor='red', capsize=3.0)
if xLabel:
ax.set_xlabel(xLabel)
ax.plot(x, np.ones((len(x)))*ref, alpha=0.5, color='orange')
ax.set_title(title)
ax.grid()
def plotHeatMap(X, classes, title=None, fmt='.2g', ax=None):
""" Fix heatmap plot from Seaborn with pyplot 3.1.0, 3.1.1
https://stackoverflow.com/questions/56942670/matplotlib-seaborn-first-and-last-row-cut-in-half-of-heatmap-plot
"""
ax = sns.heatmap(X, xticklabels=classes, yticklabels=classes, annot=True, fmt=fmt, cmap=plt.cm.Blues, ax=ax) #notation: "annot" not "annote"
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
if title:
ax.set_title(title)
```
## Model
We will use a linear model with Gaussian noise (a _fixed design_ setting): $f_1(x) = 0.5 x_0 - 0.7 x_1 + 0.35 + \epsilon$
With:
- $x_0 \in [0, 0.5], x_1 \in [-0.5, 0.5]$ the two features (covariates)
- $\epsilon \sim \mathcal{N}(0, \sigma^2)$ the unknown Gaussian-distributed noise
```
nFeatures = 2
bLin = 0.35
wLin = [0.5, -0.7]
sigmaLin = 0.5
def generateBatchLinear(n, sigma=sigmaLin):
#
xMin = np.array([0, -0.5])
xMax = np.array([0.5, 0.5])
#
x = np.random.uniform(xMin, xMax, (n, 2))
yClean = np.dot(x, wLin) + bLin
return x, yClean, yClean + np.random.normal(0, sigma, n)
x_1000, yClean_1000, y_1000 = generateBatchLinear(1000, sigmaLin)
fix, ax = plt.subplots(1,2, figsize=(15, 6))
for i,a in enumerate(ax):
a.scatter(x_1000[:, i], y_1000)
a.set_xlabel('$x_%d$' % i)
a.set_ylabel('$y$');
```
With _a priori_ knowledge restricted to the model family, i.e. Gaussian linear, there are five quantities to estimate:
- $\hat{w}_0, \hat{w}_1$ the weights to apply to the features
- $\hat{b}$ the intercept
- the Gaussian noise mean $\hat{\mu}$ and variance (power) $\hat{\sigma}^2$
## Closed form linear regression
As explained in more detail in the previous notebook ([HTML](LinearRegressionBivariate.html) / [Jupyter](LinearRegressionBivariate.ipynb)), the linear regression estimates are computed in closed form by **minimizing the Euclidean norm** $\lVert X_m \Theta - Y \rVert_2^2$ where:
- $X \in \mathbb{R}^{n \times 2}$
- $\Theta =
\begin{bmatrix}
b \\
w_0 \\
w_1 \end{bmatrix} $
- $X_m =
\begin{bmatrix}
\mathbb{1}_n & X \\
\end{bmatrix}$, a modified $X$ with a column of 1s to evaluate the intercept
Then : $ Y = X_m \Theta + \epsilon $
This leads to the closed-form computation of the linear regression, assuming the matrix is invertible (or at least pseudo-invertible):
$$\hat{\Theta} = (X_m^T X_m)^{-1} X_m^T Y$$
```
def xWithIntercept(x):
""" Add a column of ones in order to compute the intercept along with coefficients related to X"""
return np.concatenate((np.ones((x.shape[0], 1)), x), axis=1)
x_m_1000 = xWithIntercept(x_1000)
def linearRegression(x, y):
xTxInv = np.linalg.inv(np.matmul(x.T, x))
return np.matmul(xTxInv, np.matmul(x.T, y))
thetaEst0 = linearRegression(x_m_1000, yClean_1000)
print("Estimation on clean y : Intercept=%.4f, coefficients=%s" % (thetaEst0[0], thetaEst0[1:]))
thetaEst_1000 = linearRegression(x_m_1000, y_1000)
print("Estimation on noisy y : Intercept=%.4f, coefficients=%s" % (thetaEst_1000[0], thetaEst_1000[1:]))
```
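As an aside (a sketch not from the notebook): forming $X_m^T X_m$ explicitly can be numerically fragile when the design matrix is ill-conditioned; `np.linalg.lstsq` solves the same least-squares problem through a more stable factorization and returns the same coefficients on synthetic data of this kind:

```python
import numpy as np

rng = np.random.default_rng(1)
x_m = np.hstack([np.ones((100, 1)), rng.uniform(-0.5, 0.5, (100, 2))])  # intercept column + 2 features
y = x_m @ np.array([0.35, 0.5, -0.7]) + rng.normal(0, 0.5, 100)

theta_normal = np.linalg.inv(x_m.T @ x_m) @ (x_m.T @ y)  # normal equations, as in the cell above
theta_lstsq, *_ = np.linalg.lstsq(x_m, y, rcond=None)    # SVD-based least-squares solver
print(np.allclose(theta_normal, theta_lstsq))            # True
```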
## Noise estimation
The noise mean is simply computed as the mean of the residuals $Y - X_m \hat{\Theta}$. It is expected to be 0 in our case.
Given a noise mean equal to 0, the **unbiased noise variance** estimation is then :
$$ \hat{\sigma}^2 = \frac{1}{n-p-1} \Vert Y - X_m \hat{\Theta} \Vert_2^2$$
with:
- $n$ the number of samples in our set
- $p$ the rank of the matrix $X^TX$, equal to 2 in our example
```
def noiseEstimation(yEst, y, p):
""" Unbiased noise mean and variance estimation """
epsilonEst = y - yEst
mu = np.mean(epsilonEst)
epsilonEst_c = epsilonEst - mu
var = 1 / (len(y) - p - 1) * np.dot(epsilonEst_c, epsilonEst_c)
return mu, var
noiseMu_1000, noiseVar_1000 = noiseEstimation(np.matmul(x_m_1000, thetaEst_1000), y_1000, nFeatures)
print('Estimated noise mean=%.2e and std deviation=%.4f' % (noiseMu_1000, math.sqrt(noiseVar_1000)))
```
### Other important theoretical results on the Gaussian linear model
Given $\hat{\Theta_{n,k}}$ the component $k$ of the estimate of $\Theta$ on $n$ samples, and $\Theta_k$ the actual value of this component:
- Lemma 1: variations on the estimated components of $\Theta$ are Gaussian distributed
$$\left(\hat{\Theta_n} - \Theta \right) \sim \mathcal{N}\left( 0, (X^T X)^{-1} \sigma^2 \right)$$
- Lemma 2: the ratio of the estimated $\hat{\sigma}^2$ to the true (unknown) $\sigma^2$ is $\chi^2$ distributed
$$\frac{\hat{\sigma_n}^2(n-p-1)}{\sigma^2} \sim \chi^2(n-p-1)$$
- $\chi^2(n-p-1)$ is the Chi-squared distribution with $n-p-1$ degrees of freedom
- Lemma 3: the ratio of the variation of the coefficient over the estimated $\hat{\sigma}$ follows a Student law
$$\sqrt{\frac{n}{\hat{\sigma_n}^2 S_{n,k}}} (\hat{\Theta_{n,k}} - \Theta_k) \sim \mathcal{T}_{n-p-1}$$
- $\mathcal{T}_{n-p-1}$ is the Student law with (n-p-1) degrees of freedom
- $S_{n,k} = n (X^TX)^{-1}_{k,k}$ (select element $k,k$ of the inverted Gram matrix)
```
xx = [np.linspace(0, 16), np.linspace(-5, 5)]
fig,ax = plt.subplots(1, 2, figsize=(12, 5))
for i, a in enumerate(ax):
a.plot(xx[i], stats.norm.pdf(xx[i]), label='$\mathcal{N}(0,1)$')
for i in [1, 2, 4, 8]:
label0 = '$\chi^2(%d)$' % i
ax[0].plot(xx[0], stats.chi2(i).pdf(xx[0]), label=label0)
label1 = '$\mathcal{T}(%d)$' % i
ax[1].plot(xx[1], stats.t(i).pdf(xx[1]), label=label1)
for a in ax:
a.legend()
a.set_ylabel('Density (PDF)')
a.grid()
ax[0].set_title('$\chi^2$ distributions vs. Normal')
ax[1].set_title('$\mathcal{T}$ distributions vs. Normal');
```
As shown in the right-hand figure above, the Student distribution quickly converges to the Gaussian distribution as the degrees of freedom increase.
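This convergence can also be checked numerically on the 97.5% quantile (a quick sketch using `scipy.stats`, not part of the original cells):

```python
import scipy.stats as stats

# 97.5% quantile of the Student law vs. the standard normal
for df in (2, 10, 100):
    print(df, round(stats.t(df).ppf(0.975), 3))   # 4.303, 2.228, 1.984
print('normal', round(stats.norm.ppf(0.975), 3))  # 1.96
```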
Application of Lemma 1:
```
def weightCovariance(x, sigma):
""" Compute weight covariance as per Lemma 1"""
return np.linalg.inv(np.matmul(x.T, x)) * sigma**2
wCovTheo_1000 = weightCovariance(x_m_1000, sigmaLin)
plotHeatMap(wCovTheo_1000, ('b','$w_0$', '$w_1$'), title='Covariance matrix', fmt='.2e')
```
## Sweep over dataset length
First, let's sweep the dataset length from 4 to 1000 samples and estimate the linear regression and the noise parameters.
Note: the minimum value of n is the smallest n such that $n-p-1 > 0$, i.e. $n = 4$
```
fig, ax = plt.subplots(2, 2, figsize=(16, 10), subplot_kw={'xscale':'log'})
regressions = []
for n in [4, 5, 10, 20, 40, 100, 200, 500, 1000]:
x_m_n = x_m_1000[:n]
y_n = y_1000[:n]
thetaEst = linearRegression(x_m_n, y_n)
noiseMu, noiseVar = noiseEstimation(np.matmul(x_m_n, thetaEst), y_n, nFeatures)
regressions.append([n, thetaEst[0], thetaEst[1], thetaEst[2], noiseMu, noiseVar])
dfSweepN = pd.DataFrame(regressions, columns=('n', 'b', 'w0','w1', 'noise mean', 'noise var'))
plotToRef(dfSweepN['n'], dfSweepN['b'], bLin, ax[0,0], '$\hat{b}$')
plotToRef(dfSweepN['n'], dfSweepN['noise var'], sigmaLin**2, ax[1,0], '$\hat{\sigma}^2$', xLabel='n')
plotToRef(dfSweepN['n'], dfSweepN['w0'], wLin[0], ax[0,1], '$\hat{w_0}$')
plotToRef(dfSweepN['n'], dfSweepN['w1'], wLin[1], ax[1,1], '$\hat{w_1}$', xLabel='n');
```
## Distributions with $n = 5$, fixed design
In this set of experiments, noise samples are drawn at each experiment. The $x_i$ are known and fixed.
Let's run many experiments with n = 5 in order to plot the histograms of the estimators
```
regressions = []
n = 5
x_m_5 = x_m_1000[:n]
yClean_5 = yClean_1000[:n]
for l in range(50000):
# Draw noise
y_5 = yClean_5 + np.random.normal(0, sigmaLin, n)
thetaEst = linearRegression(x_m_5, y_5)
noiseMu, noiseVar = noiseEstimation(np.matmul(x_m_5, thetaEst), y_5, nFeatures)
regressions.append([thetaEst[0], thetaEst[1], thetaEst[2], noiseMu, noiseVar])
df_5 = pd.DataFrame(regressions, columns=('b', 'w0','w1', 'noise mean', 'noise var'))
fig, ax = plt.subplots(1, 4, figsize=(16, 5))
plotHistParams(df_5['b'] - bLin, '$\hat{b} - b$', ax[0], [-3, 3])
plotHistParams(df_5['w0'] - wLin[0], '$\hat{w_0} - w_0$', ax[1], [-5, 5])
plotHistParams(df_5['w1'] - wLin[1], '$\hat{w_1} - w_1$', ax[2], [-5, 5])
dFree = n - nFeatures - 1
xx = np.linspace(0,15)
plotHistParams(df_5['noise var'] * dFree / sigmaLin**2, '$(n-p-1)\hat{\sigma}^2 / \sigma^2$ vs. $\chi^2(2)$',
ax[3], [0,20], pdfRefX=xx, pdfRefY=stats.chi2(dFree).pdf(xx))
```
On the graphs above, the regression intercept $b$ and weights ($w_0, w_1$) look Gaussian distributed, and the noise variance scaled by $\frac{n-p-1}{\sigma^2}$, with n=5, p=2, matches the expected $\chi^2(2)$ (see Lemma 2)
#### Verification of Lemma 1
As per Lemma 1, we expect the following covariance matrix for the coefficients:
$$\left(\hat{\Theta_n} - \Theta \right) \sim \mathcal{N}\left( 0, (X^T X)^{-1} \sigma^2 \right)$$
```
def autoCovariance(x, axis=0):
x_c = x - np.mean(x, axis=0)
return 1 / x.shape[axis] * np.matmul(x_c.T, x_c)
# Covariance of the intercept and weights
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
ThetaCov_5 = autoCovariance(df_5[['b', 'w0', 'w1']].values)
plotHeatMap(ThetaCov_5, ('b','$w_0$', '$w_1$'), title='Weight covariance matrix', ax=ax[0], fmt='.3f');
# Covariance matrix based on the design matrix (X^T X)
wCovTheo_5 = weightCovariance(x_m_5, sigmaLin)
plotHeatMap(wCovTheo_5, ('b','$w_0$', '$w_1$'), title='Theoretical covariance matrix', ax=ax[1], fmt='.3f');
```
The covariance matrices match!
#### Verification of Lemma 3
Let's now verify the law involving only the $\hat{\Theta}$ and $\hat{\sigma}^2$ estimates, as per Lemma 3:
- $J(\hat{\Theta_{n,k}}) = \sqrt{\frac{n}{\hat{\sigma_n}^2 S_{n,k}}} (\hat{\Theta_{n,k}} - \Theta_k) \sim \mathcal{T}_{n-p-1}$
- $\mathcal{T}_{n-p-1}$ is the Student law with (n-p-1) degrees of freedom
- $S_{n,k} = n (X^TX)^{-1}_{k,k}$ (select element k,k of the inverted Gram matrix)
```
n = 5
Theta_5Est = df_5[['b', 'w0', 'w1']].values
Theta_5Exp = np.array([bLin, wLin[0], wLin[1]])
estim = np.zeros((3, len(df_5)))
xTXInv_5 = n * np.linalg.inv(np.matmul(x_m_5.T, x_m_5))
nOverNoiseVarSqrt = np.sqrt(n / df_5['noise var'])
for k, theta in enumerate(Theta_5Est.T):
Snk = xTXInv_5[k, k]
estim[k] = math.sqrt(1/Snk) \
* np.multiply(nOverNoiseVarSqrt, (theta - Theta_5Exp[k]))
fig, ax = plt.subplots(1, 3, figsize=(16, 5))
span, xx = [-10, 10], np.linspace(-10, 10)
pdfXx = stats.t(n - nFeatures - 1).pdf(xx)
labels = ['$J(\hat{b})$ vs. $\mathcal{T}(2)$', '$J(\hat{w_0})$ vs. $\mathcal{T}(2)$', '$J(\hat{w_1})$ vs. $\mathcal{T}(2)$']
for est, l, a in zip(estim, labels, ax):
plotHistParams(est, l, a, span, pdfRefX=xx, pdfRefY=pdfXx)
```
The theory is verified; we now have a closed-form expression to compute confidence intervals on the estimates
## Confidence intervals
For this Gaussian linear model, for each coefficient, the confidence interval at level $\alpha$ is:
$$IC(\theta_k) = \left[ \hat{\theta_{n,k}} + \sqrt{\frac{\hat{\sigma_n}^2 S_{n,k}}{n}} q_{\frac{\alpha}{2}}, \hat{\theta_{n,k}} + \sqrt{\frac{\hat{\sigma_n}^2 S_{n,k}}{n}} q_{1-\frac{\alpha}{2}} \right]$$
With $q_{\frac{\alpha}{2}}$ and $q_{1-\frac{\alpha}{2}}$ the quantiles of the Student distribution $\mathcal{T}(n-p-1)$
We may now write a regression procedure that estimates the noise variance $\hat{\sigma}^2$ and provides confidence intervals on the coefficients:
```
def linearRegressionLinearGaussian(x, y, alpha):
""" Compute linear regression coefficients and their confidence interval
Assume that X is full rank
"""
n = len(y)
rank = x.shape[1] # = p + 1
# Design matrix
xTxInv = np.linalg.inv(np.matmul(x.T, x))
# Regression coefficients
thetas = np.matmul(xTxInv, np.matmul(x.T, y))
# Predictions
yEst = np.matmul(x, thetas)
# Noise variance
epsilonEst = y - yEst
noiseMu = np.mean(epsilonEst) # for verification, but assumed 0
epsilonEst_c = epsilonEst - noiseMu
noiseVar = 1 / (n - rank) * np.dot(epsilonEst_c, epsilonEst_c)
# Student quantile
qAlpha = stats.t(n - rank).ppf(1 - alpha/2)
# Confidence interval half range
xTxInvDiag = np.diagonal(xTxInv)
confidence = np.sqrt(noiseVar * xTxInvDiag) * qAlpha
return thetas, confidence, noiseVar
```
#### Test with $n=5$
```
alpha = 0.1
thetas, confidence, noiseVar = linearRegressionLinearGaussian(x_m_5, y_5, alpha)
print('Coefficients =', thetas)
print('Confidence intervals @ %.0f%% :\n' % ((1-alpha)*100),
np.vstack((thetas - confidence, thetas + confidence)).T)
print('Noise variance estimate = %.3f' % noiseVar)
```
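As a self-contained sanity check (a sketch reusing the same model parameters as above, independent of the notebook's variables), we can verify empirically that the 95% interval covers the true intercept about 95% of the time:

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(0)

def regression_ci(x_m, y, alpha=0.05):
    """Closed-form OLS estimate and half-width of the Student confidence interval."""
    n, rank = x_m.shape
    xtx_inv = np.linalg.inv(x_m.T @ x_m)
    theta = xtx_inv @ (x_m.T @ y)
    resid = y - x_m @ theta
    noise_var = resid @ resid / (n - rank)       # unbiased noise variance estimate
    q = stats.t(n - rank).ppf(1 - alpha / 2)     # Student quantile
    return theta, q * np.sqrt(noise_var * np.diagonal(xtx_inv))

# Empirical coverage of the true intercept b = 0.35 over repeated draws (n = 30)
hits = 0
for _ in range(2000):
    x = rng.uniform([0, -0.5], [0.5, 0.5], (30, 2))
    x_m = np.hstack([np.ones((30, 1)), x])
    y = x @ np.array([0.5, -0.7]) + 0.35 + rng.normal(0, 0.5, 30)
    theta, conf = regression_ci(x_m, y)
    hits += abs(theta[0] - 0.35) <= conf[0]
print(hits / 2000)  # close to 0.95
```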
With so few samples in the dataset, the confidence intervals are very wide, as shown on the three series of graphs above.
Note: all confidence intervals depend on the noise variance estimate; any error on it has a large impact on the resulting intervals.
#### Full test of the confidence interval
Let's redo the sweep on the dataset length computing also confidence intervals:
```
alpha = 0.05
regressions = []
for n in [5, 10, 20, 40, 100, 200, 500, 1000]:
x_m_n = x_m_1000[:n]
y_n = y_1000[:n]
thetaEst, confidence, noiseVar = linearRegressionLinearGaussian(x_m_n, y_n, alpha)
regressions.append([n,
thetaEst[0], thetaEst[1], thetaEst[2],
confidence[0], confidence[1], confidence[2],
noiseVar])
dfSweepN2 = pd.DataFrame(regressions, columns=('n', 'b', 'w0','w1', 'bConf', 'w0Conf', 'w1Conf', 'noise var'))
fig, ax = plt.subplots(2, 2, figsize=(16, 8), subplot_kw={'xscale':'log'})
plotToRef(dfSweepN2['n'], dfSweepN2['b'], bLin, ax[0,0], '$\hat{b}$', yErr=dfSweepN2['bConf'])
plotToRef(dfSweepN2['n'], dfSweepN2['noise var'], sigmaLin**2, ax[1,0], '$\hat{\sigma}^2$', xLabel='n')
plotToRef(dfSweepN2['n'], dfSweepN2['w0'], wLin[0], ax[0,1], '$\hat{w_0}$', yErr=dfSweepN2['w0Conf'])
plotToRef(dfSweepN2['n'], dfSweepN2['w1'], wLin[1], ax[1,1], '$\hat{w_1}$', yErr=dfSweepN2['w1Conf'], xLabel='n');
```
On the graphs above, the red error bars correspond to the confidence intervals. For short dataset lengths the confidence interval is huge; from 10 samples onward it narrows quickly.
The true value (orange line) should be within reach of the error bar; if not, the realization falls outside the 95% confidence level.
## Distributions with $n = 5$ and stochastic $X$ values
A different set of experiments in which the noise is drawn from the Gaussian distribution and X from the uniform distributions.
```
n = 5
regressions = []
for l in range(50000):
# Draw a new batch
x, yClean, y = generateBatchLinear(n)
x_m = xWithIntercept(x)
thetaEst = linearRegression(x_m, y)
noiseMu, noiseVar = noiseEstimation(np.matmul(x_m, thetaEst), y, nFeatures)
regressions.append([thetaEst[0], thetaEst[1], thetaEst[2], noiseMu, noiseVar])
df_5s = pd.DataFrame(regressions, columns=('b', 'w0','w1', 'noise mean', 'noise var'))
fig, ax = plt.subplots(1, 4, figsize=(16, 5))
plotHistParams(df_5s['b'] - bLin, '$\hat{b} - b$', ax[0], [-3, 3])
plotHistParams(df_5s['w0'] - wLin[0], '$\hat{w_0} - w_0$', ax[1], [-5, 5])
plotHistParams(df_5s['w1'] - wLin[1], '$\hat{w_1} - w_1$', ax[2], [-5, 5])
dFree = n - nFeatures - 1
ax[3].hist(df_5s['noise var'] * dFree / sigmaLin**2, bins=100, density=True, label='histogram')
xx = np.linspace(0,15)
ax[3].plot(xx, stats.chi2(dFree).pdf(xx), label='$\chi^2(2)$')
ax[3].legend()
ax[3].set_title('$(n-p-1)\hat{\sigma}^2 / \sigma^2$ vs. $\chi^2(2)$');
```
On the graphs above, compared to the previous ones, we observe that the shapes are similar but the variances of $b, w_0, w_1$ are higher,
whereas the scaled noise variance still matches the Chi-squared distribution
```
# Covariance of the intercept and weights
ThetaCov_5s = autoCovariance(df_5s[['b', 'w0', 'w1']].values)
plotHeatMap(ThetaCov_5s, ('b','$w_0$', '$w_1$'), title='Covariance matrix', fmt='.3f');
```
Globally, the variance of the coefficients has increased; this reflects the additional variability of fitting Y as a linear function of X.
# Conclusion
Starting from "raw theory", we have first shown how to match it with experiments, and then an application of this theory.
In this notebook, the data-generating model is Gaussian, which is somewhat ideal. In the next notebooks, we will investigate more complex models and situations.
| github_jupyter |
```
import pandas as pd
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Input, Flatten,BatchNormalization,Activation
#from keras.model import sequential
train=pd.read_csv('train.csv')
train.head()
test=pd.read_csv('test.csv')
test.head()
imagen=ImageDataGenerator()
train1=imagen.flow_from_directory('train' ,shuffle=False,)
train1
data_dir='C:\\Users\\Shubham\\Desktop\\New folder\\hakererth\\dance form\\0664343c9a8f11ea\\dataset\\'
data=[]
labels=[]
for i in range(train.shape[0]):
data.append(data_dir + train['Image'].iloc[i]+'.png')
labels.append(train['target'].iloc[i])
df=pd.DataFrame(data)
df.columns=['images']
df['target']=labels
data
labels
#df=pd.DataFrame(data)
df
y_train
y_val
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, InputLayer, BatchNormalization, Dropout
# build a sequential model
model = Sequential()
model.add(InputLayer(input_shape=(128, 128, 3)))
# 1st conv block
model.add(Conv2D(32, (3, 3), activation='relu', strides=(1, 1), padding='same'))
model.add(MaxPool2D(pool_size=(2, 2), padding='same'))
# 2nd conv block
model.add(Conv2D(64, (3, 3), activation='relu', strides=(2, 2), padding='same'))
model.add(MaxPool2D(pool_size=(2, 2), padding='same'))
model.add(BatchNormalization())
# 3rd conv block
model.add(Conv2D(128, (3, 3), activation='relu', strides=(2, 2), padding='same'))
model.add(MaxPool2D(pool_size=(2, 2), padding='valid'))
model.add(BatchNormalization())
# ANN block
model.add(Flatten())
model.add(Dense(units=256, activation='relu'))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.25))
# output layer
model.add(Dense(units=8, activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
# fit on the data for 10 epochs
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
data_dir='C:\\Users\\Shubham\\Desktop\\New folder\\hakererth\\dance form\\0664343c9a8f11ea\\dataset\\train\\'
train_image = []
for i in range(train.shape[0]):
img=image.load_img(data_dir + train['Image'].iloc[i],target_size=(128,128,1))
img = image.img_to_array(img)
img = img/255
train_image.append(img)
#labels.append(train['target'].iloc[i])
X = np.array(train_image)
X.shape
train=pd.read_csv('train.csv')
from keras.preprocessing import image
from keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder
label_encoder =LabelEncoder()
y=train['target'].values
Y = label_encoder.fit_transform(y)
#y=train['target'].values
y1 = np.array(to_categorical(Y))
y1.shape
y
X.shape
train
X_train, X_test, y_train, y_test = train_test_split(X, y1, random_state=42, test_size=0.2)
#model.add(Conv2D(128, (3, 3), activation='relu', strides=(2, 2), padding='same'))
#model.add(MaxPool2D(pool_size=(2, 2), padding='valid'))
model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3),activation='relu',input_shape=(32,32,3), padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
#model.add(Conv2D(70, (3,3)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(8, activation='softmax'))
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import to_categorical
from keras.preprocessing import image
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
model.compile(loss='categorical_crossentropy',optimizer='Adam',metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
p=model.predict(X_test)
p.shape
X_train.shape
y_train.shape
p
m2=model.predict_classes(X_test)
m2
a = np.array(p)
b=(a == a.max(axis=1)[:,None]).astype(int)
#label_encoder.inverse_transform(b)
b.shape
y1.shape
import numpy
l=numpy.argmax(b, axis=1)
l
z=label_encoder.inverse_transform(m2)
z
p2=label_encoder.inverse_transform(numpy.argmax(y_test, axis=1))
p2
from sklearn.metrics import accuracy_score
accuracy_score(p2,z )
import cv2
import glob
import cv2
import numpy as np
pic_num=1
IMG_DIR=data_dir
def read_images(directory):
for img in glob.glob(directory+"/*.jpg"):
image = cv2.imread(img)
image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
resized_img = cv2.resize(image/255.0 , (32 , 32))
#cv2.imwrite("small/"+str(pic_num)+'.jpg',resized_img)
yield resized_img
resized_imgs = np.array(list(read_images(IMG_DIR)))
resized_imgs=X
resized_imgs.shape
resized_imgs
X
#test=pd.read_csv(test.csv)
model1 = Sequential()
# The first two layers with 32 filters of window size 3x3
model1.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(128,128,3)))
model1.add(Conv2D(32, (3, 3), activation='relu'))
model1.add(MaxPooling2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))
model1.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model1.add(Conv2D(64, (3, 3), activation='relu'))
model1.add(MaxPooling2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))
model1.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model1.add(Conv2D(64, (3, 3), activation='relu'))
model1.add(MaxPooling2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))
model1.add(Flatten())
model1.add(Dense(512, activation='relu'))
model1.add(Dropout(0.5))
model1.add(Dense(8, activation='softmax'))
model1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(model1.summary())
import glob
import cv2
import numpy as np
pic_num=1
IMG_DIR='C:\\Users\\Shubham\\Desktop\\New folder\\hakererth\\dance form\\0664343c9a8f11ea\\dataset\\test\\'
def read_images(directory):
for img in glob.glob(directory+"/*.jpg"):
image = cv2.imread(img)
image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
test_image = cv2.resize(image/255.0 , (32 , 32))
#cv2.imwrite("small/"+str(pic_num)+'.jpg',resized_img)
yield test_image
test_image = np.array(list(read_images(IMG_DIR)))
test_image.shape
model.fit(train_image, y1, epochs=20, validation_split=0.3, verbose=2)
testo=model.predict_classes(test_image)
testo
z=label_encoder.inverse_transform(testo)
im=test['Image']
subm=pd.DataFrame({'Image':im, 'target':z})
subm.to_csv('submission.csv', index=False)
subm['target'].value_counts()
data_dir='C:\\Users\\Shubham\\Desktop\\New folder\\hakererth\\dance form\\0664343c9a8f11ea\\dataset\\train\\'
train_image = []
for i in range(train.shape[0]):
img=image.load_img(data_dir + train['Image'].iloc[i],target_size=(32,32,1))
img = image.img_to_array(img)
img = img/255
train_image.append(img)
#labels.append(train['target'].iloc[i])
X = np.array(train_image)
data_dir='C:\\Users\\Shubham\\Desktop\\New folder\\hakererth\\dance form\\0664343c9a8f11ea\\dataset\\test\\'
test_image = []
for i in range(test.shape[0]):
img=image.load_img(data_dir + test['Image'].iloc[i],target_size=(128,128,1))
img = image.img_to_array(img)
img = img/255
train_image.append(img)
#labels.append(train['target'].iloc[i])
X1 = np.array(test_image)
X1
test=pd.read_csv('test.csv')
X1.shape
X.shape
import glob
import cv2
import numpy as np
pic_num=1
IMG_DIR='C:\\Users\\Shubham\\Desktop\\New folder\\hakererth\\dance form\\0664343c9a8f11ea\\dataset\\train\\'
def read_images(directory):
for img in glob.glob(directory+"/*.jpg"):
image = cv2.imread(img)
image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
train_image = cv2.resize(image/255.0 , (32 , 32))
#cv2.imwrite("small/"+str(pic_num)+'.jpg',resized_img)
yield train_image
train_image = np.array(list(read_images(IMG_DIR)))
train_image.shape
```
| github_jupyter |
```
from sklearn.model_selection import cross_val_score, cross_val_predict, GridSearchCV, train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, classification_report
import pandas as pd
import numpy as np
from time import time
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn import metrics
# WINDOW_SIZE = 257, CODEBOOK_SIZE = 10000
enable_norm = True
X_train = np.loadtxt("./feature_train.csv", delimiter = ",").reshape(-1,384)
y_train = np.loadtxt("./label_train.csv", delimiter = ",")
X_test = np.loadtxt("./feature_test.csv", delimiter=",").reshape(-1,384)
y_test = np.loadtxt("./label_test.csv", delimiter=",")
if enable_norm:
X_train = np.transpose(X_train)
X_test = np.transpose(X_test)
model_normalizer_horizontal = MinMaxScaler()
model_normalizer_horizontal.fit(X_train)
X_train = model_normalizer_horizontal.transform(X_train)
model_normalizer_horizontal = MinMaxScaler()
model_normalizer_horizontal.fit(X_test)
X_test = model_normalizer_horizontal.transform(X_test)
X_train = np.transpose(X_train)
X_test = np.transpose(X_test)
model_normalizer_vertical = MinMaxScaler()
model_normalizer_vertical.fit(X_train)
X_train = model_normalizer_vertical.transform(X_train)
X_test = model_normalizer_vertical.transform(X_test)
def plot_2d_space(X, y, label='Classes'):
colors = ['#1F77B4', '#FF7F0E']
markers = ['o', 's']
for l, c, m in zip(np.unique(y), colors, markers):
plt.scatter(
X[y==l, 0],
X[y==l, 1],
c=c, label=l, marker=m
)
plt.title(label)
plt.legend(loc='upper right')
plt.show()
import imblearn
from imblearn.combine import SMOTETomek
smt = SMOTETomek(sampling_strategy='auto')
X_smt, y_smt = smt.fit_resample(X_train, y_train)  # fit_sample was renamed fit_resample in recent imbalanced-learn
plot_2d_space(X_smt, y_smt, 'SMOTE + Tomek links')
X_train, y_train = X_smt, y_smt
X_test, y_test = smt.fit_resample(X_test, y_test)
label_names = ['Buggy', 'Correct']
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
"""
given a sklearn confusion matrix (cm), make a nice plot
Arguments
---------
cm: confusion matrix from sklearn.metrics.confusion_matrix
target_names: given classification classes such as [0, 1, 2]
the class names, for example: ['high', 'medium', 'low']
title: the text to display at the top of the matrix
cmap: the gradient of the values displayed from matplotlib.pyplot.cm
see http://matplotlib.org/examples/color/colormaps_reference.html
plt.get_cmap('jet') or plt.cm.Blues
normalize: If False, plot the raw numbers
If True, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # confusion matrix created by
# sklearn.metrics.confusion_matrix
normalize = True, # show proportions
target_names = y_labels_vals, # list of names of the classes
title = best_estimator_name) # title of graph
Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
# SVM
param_grid_ = {'C': [1], "kernel":["linear","poly", "rbf"], 'verbose': [True]}
print('-> Processing 8-Fold Cross Validation and Grid Search\n')
bow_search = GridSearchCV(SVC(), cv=8, param_grid=param_grid_, scoring='f1_micro', n_jobs=-1, verbose=10)
t0 = time()
bow_search.fit(X_train, y_train)
training_time = round(time()-t0, 3)
print('-> Done! Show Grid scores\n')
print(bow_search.cv_results_,'\n\n')
print("Best parameters set found on development set:\n")
print(bow_search.best_params_,'\n')
print("Grid scores on development set:\n")
means = bow_search.cv_results_['mean_test_score']
stds = bow_search.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, bow_search.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print('\n\n')
print("Detailed classification report:\n")
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.\n\n")
t0 = time()
y_true, y_pred = y_test, bow_search.predict(X_test)
test_time = round(time()-t0, 3)
cmat = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm = cmat,
normalize = False,
target_names = label_names,
cmap = plt.get_cmap('Greys'),
title = "Confusion Matrix SVC Dataset_Norm = %s" % str(enable_norm))
plot_confusion_matrix(cm = cmat,
target_names = label_names,
cmap = plt.get_cmap('Greys'),
title = "Normalized Confusion Matrix SVC Dataset_Norm = %s" % str(enable_norm))
print('\n\n')
print(classification_report(y_true, y_pred))
print()
print('Accuracy', metrics.accuracy_score(y_pred,y_test))
print("Training time : {}\n".format(training_time))
print("Test time : {}\n".format(test_time))
print()
# SVM
param_grid_ = {'C': [10], "kernel":["poly"], 'verbose': [True]}
print('-> Processing 8-Fold Cross Validation and Grid Search\n')
bow_search = GridSearchCV(SVC(), cv=8, param_grid=param_grid_, scoring='f1_micro', n_jobs=-1, verbose=10)
t0 = time()
bow_search.fit(X_train, y_train)
training_time = round(time()-t0, 3)
print('-> Done! Show Grid scores\n')
print(bow_search.cv_results_,'\n\n')
print("Best parameters set found on development set:\n")
print(bow_search.best_params_,'\n')
print("Grid scores on development set:\n")
means = bow_search.cv_results_['mean_test_score']
stds = bow_search.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, bow_search.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print('\n\n')
print("Detailed classification report:\n")
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.\n\n")
t0 = time()
y_true, y_pred = y_test, bow_search.predict(X_test)
test_time = round(time()-t0, 3)
cmat = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm = cmat,
normalize = False,
target_names = label_names,
cmap = plt.get_cmap('Greys'),
title = "Confusion Matrix SVC Dataset_Norm = %s" % str(enable_norm))
plot_confusion_matrix(cm = cmat,
target_names = label_names,
cmap = plt.get_cmap('Greys'),
title = "Normalized Confusion Matrix SVC Dataset_Norm = %s" % str(enable_norm))
print('\n\n')
print(classification_report(y_true, y_pred))
print()
print('Accuracy', metrics.accuracy_score(y_pred,y_test))
print("Training time : {}\n".format(training_time))
print("Test time : {}\n".format(test_time))
print()
# SVM
param_grid_ = {'C': [100], "kernel":["poly", "rbf"], 'verbose': [True]}
print('-> Processing 8-Fold Cross Validation and Grid Search\n')
bow_search = GridSearchCV(SVC(), cv=8, param_grid=param_grid_, scoring='f1_micro', n_jobs=-1, verbose=10)
t0 = time()
bow_search.fit(X_train, y_train)
training_time = round(time()-t0, 3)
print('-> Done! Show Grid scores\n')
print(bow_search.cv_results_,'\n\n')
print("Best parameters set found on development set:\n")
print(bow_search.best_params_,'\n')
print("Grid scores on development set:\n")
means = bow_search.cv_results_['mean_test_score']
stds = bow_search.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, bow_search.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print('\n\n')
print("Detailed classification report:\n")
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.\n\n")
t0 = time()
y_true, y_pred = y_test, bow_search.predict(X_test)
test_time = round(time()-t0, 3)
cmat = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm = cmat,
normalize = False,
target_names = label_names,
cmap = plt.get_cmap('Greys'),
title = "Confusion Matrix SVC Dataset_Norm = %s" % str(enable_norm))
plot_confusion_matrix(cm = cmat,
target_names = label_names,
cmap = plt.get_cmap('Greys'),
title = "Normalized Confusion Matrix SVC Dataset_Norm = %s" % str(enable_norm))
print('\n\n')
print(classification_report(y_true, y_pred))
print()
print('Accuracy', metrics.accuracy_score(y_test, y_pred))
print("Training time : {}\n".format(training_time))
print("Test time : {}\n".format(test_time))
print()
# MLPClassifier
clf = MLPClassifier(activation='tanh', alpha=0.03, batch_size='auto', beta_1=0.9,
beta_2=0.999, early_stopping=True, epsilon=1e-08,
hidden_layer_sizes=(384, 256, 128, 64, 32, 16, 8, 4, 2), learning_rate='adaptive',
learning_rate_init=0.001, max_iter=100000, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=48, shuffle=True,
solver='adam', tol=0.0001, validation_fraction=0.1, verbose=True,
warm_start=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
cmat = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm = cmat,
normalize = False,
target_names = label_names,
cmap = plt.get_cmap('Greens'),
title = "Confusion Matrix MLP Dataset_Norm = %s" % str(enable_norm))
plot_confusion_matrix(cm = cmat,
target_names = label_names,
cmap = plt.get_cmap('Greens'),
title = "Normalized Confusion Matrix MLP Dataset_Norm = %s" % str(enable_norm))
print(classification_report(y_true, y_pred))
print('Accuracy', metrics.accuracy_score(y_test, y_pred))
```
# Clustering
Wikipedia: Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). Clustering is one of the main tasks of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics.
Sources: http://scikit-learn.org/stable/modules/clustering.html
## K-means clustering
Source: C. M. Bishop *Pattern Recognition and Machine Learning*, Springer, 2006
Suppose we have a data set $X = \{x_1 , \cdots , x_N\}$ that consists of $N$ observations of a random $D$-dimensional Euclidean variable $x$. Our goal is to partition the data set into some number, $K$, of clusters, where we shall suppose for the moment that the value of $K$ is given. Intuitively, we might think of a cluster as comprising a group of data points whose inter-point distances are small compared to the distances to points outside of the cluster. We can formalize this notion by first introducing a set of $D$-dimensional vectors $\mu_k$, where $k = 1, \ldots, K$, in which $\mu_k$ is a **prototype** associated with the $k^{th}$ cluster. As we shall see shortly, we can think of the $\mu_k$ as representing the centres of the clusters. Our goal is then to find an assignment of data points to clusters, as well as a set of vectors $\{\mu_k\}$, such that the sum of the squares of the distances of each data point to its closest prototype vector $\mu_k$, is at a minimum.
It is convenient at this point to define some notation to describe the assignment of data points to clusters. For each data point $x_i$ , we introduce a corresponding set of binary indicator variables $r_{ik} \in \{0, 1\}$, where $k = 1, \ldots, K$, that describes which of the $K$ clusters the data point $x_i$ is assigned to, so that if data point $x_i$ is assigned to cluster $k$ then $r_{ik} = 1$, and $r_{ij} = 0$ for $j \neq k$. This is known as the 1-of-$K$ coding scheme. We can then define an objective function, denoted **inertia**, as
$$
J(r, \mu) = \sum_i^N \sum_k^K r_{ik} \|x_i - \mu_k\|_2^2
$$
which represents the sum of the squares of the Euclidean distances of each data point to its assigned vector $\mu_k$. Our goal is to find values for the $\{r_{ik}\}$ and the $\{\mu_k\}$ so as to minimize the function $J$. We can do this through an iterative procedure in which each iteration involves two successive steps corresponding to successive optimizations with respect to the $r_{ik}$ and the $\mu_k$ . First we choose some initial values for the $\mu_k$. Then in the first phase we minimize $J$ with respect to the $r_{ik}$, keeping the $\mu_k$ fixed. In the second phase we minimize $J$ with respect to the $\mu_k$, keeping $r_{ik}$ fixed. This two-stage optimization process is then repeated until convergence. We shall see that these two stages of updating $r_{ik}$ and $\mu_k$ correspond respectively to the expectation (E) and maximization (M) steps of the expectation-maximisation (EM) algorithm, and to emphasize this we shall use the terms E step and M step in the context of the $K$-means algorithm.
Consider first the determination of the $r_{ik}$. Because $J$ is a linear function of $r_{ik}$, this optimization can be performed easily to give a closed form solution. The terms involving different $i$ are independent and so we can optimize for each $i$ separately by choosing $r_{ik}$ to be 1 for whichever value of $k$ gives the minimum value of $||x_i - \mu_k||^2$. In other words, we simply assign the $i$th data point to the closest cluster centre. More formally, this can be expressed as
\begin{equation}
r_{ik}=\begin{cases}
1, & \text{if } k = \arg\min_j ||x_i - \mu_j||^2.\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
Now consider the optimization of the $\mu_k$ with the $r_{ik}$ held fixed. The objective function $J$ is a quadratic function of $\mu_k$, and it can be minimized by setting its derivative with respect to $\mu_k$ to zero giving
$$
2 \sum_i r_{ik}(x_i - \mu_k) = 0
$$
which we can easily solve for $\mu_k$ to give
$$
\mu_k = \frac{\sum_i r_{ik}x_i}{\sum_i r_{ik}}.
$$
The denominator in this expression is equal to the number of points assigned to cluster $k$, and so this result has a simple interpretation, namely set $\mu_k$ equal to the mean of all of the data points $x_i$ assigned to cluster $k$. For this reason, the procedure is known as the $K$-means algorithm.
The two phases of re-assigning data points to clusters and re-computing the cluster means are repeated in turn until there is no further change in the assignments (or until some maximum number of iterations is exceeded). Because each phase reduces the value of the objective function $J$, convergence of the algorithm is assured. However, it may converge to a local rather than global minimum of $J$.
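The two phases are easy to write down in NumPy. The sketch below performs a single E/M iteration (an illustration only; the function name and the toy data are made up, and it assumes every cluster keeps at least one point):

```python
import numpy as np

def kmeans_step(X, mu):
    """One K-means iteration: E step (assign each point to its nearest
    prototype), then M step (move each prototype to the mean of its points)."""
    # E step: r[i] = argmin_k ||x_i - mu_k||^2
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # (N, K)
    r = d2.argmin(axis=1)
    # Inertia J(r, mu) for the current assignment
    J = d2[np.arange(len(X)), r].sum()
    # M step: mu_k = mean of the points assigned to cluster k
    # (assumes every cluster has at least one point)
    mu_new = np.array([X[r == k].mean(axis=0) for k in range(len(mu))])
    return r, mu_new, J

# Two well-separated pairs of points; one iteration already recovers them
X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
mu0 = np.array([[0., 0.], [10., 10.]])
r, mu, J = kmeans_step(X, mu0)   # r = [0, 0, 1, 1], J = 2.0
```

Iterating `kmeans_step` until `r` stops changing gives the full algorithm.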
```
from sklearn import cluster, datasets
import matplotlib.pyplot as plt
import seaborn as sns # nice color
%matplotlib inline
iris = datasets.load_iris()
X = iris.data[:, :2] # use only 'sepal length and sepal width'
y_iris = iris.target
km2 = cluster.KMeans(n_clusters=2).fit(X)
km3 = cluster.KMeans(n_clusters=3).fit(X)
km4 = cluster.KMeans(n_clusters=4).fit(X)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(X[:, 0], X[:, 1], c=km2.labels_)
plt.title("K=2, J=%.2f" % km2.inertia_)
plt.subplot(132)
plt.scatter(X[:, 0], X[:, 1], c=km3.labels_)
plt.title("K=3, J=%.2f" % km3.inertia_)
plt.subplot(133)
plt.scatter(X[:, 0], X[:, 1], c=km4.labels_)#.astype(np.float))
plt.title("K=4, J=%.2f" % km4.inertia_)
```
### Exercises
#### 1. Analyse clusters
- Analyse the plot above visually. What would a good value of $K$ be?
- If you instead consider the inertia, the value of $J$, what would a good value of $K$ be?
- Explain why there is such difference.
- For $K=2$ why did $K$-means clustering not find the two "natural" clusters? See the assumptions of $K$-means:
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_assumptions.html#example-cluster-plot-kmeans-assumptions-py
#### 2. Re-implement the $K$-means clustering algorithm (homework)
Write a function `kmeans(X, K)` that returns an integer vector of the samples' labels.
## Gaussian mixture models
The Gaussian mixture model (GMM) is a simple linear superposition of Gaussian components over the data, aimed at providing a rich class of density models. We turn to a formulation of Gaussian mixtures in terms of discrete latent variables: the $K$ hidden classes to be discovered.
Differences compared to $K$-means:
- Whereas the $K$-means algorithm performs a hard assignment of data points to clusters, in which each data point is associated uniquely with one cluster, the GMM algorithm makes a soft assignment based on posterior probabilities.
- Whereas the classic $K$-means algorithm is based only on Euclidean distances, the classic GMM uses Mahalanobis distances, which can deal with non-spherical distributions. (A Mahalanobis distance could also be plugged into an improved version of $K$-means clustering.) The Mahalanobis distance is unitless and scale-invariant, and takes into account the correlations of the data set.
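The scale-invariance point can be made concrete with a few lines of NumPy (a toy illustration; the data and the two test points are made up): two points at the same Euclidean distance from the mean can have very different Mahalanobis distances once the correlations of the data are taken into account.

```python
import numpy as np

# Toy data with two strongly correlated features
rng = np.random.RandomState(0)
x1 = rng.randn(1000)
X = np.c_[x1, x1 + 0.1 * rng.randn(1000)]

mu = X.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(X.T))  # inverse covariance of the data

def mahalanobis(x, mu, Sigma_inv):
    d = x - mu
    return np.sqrt(d @ Sigma_inv @ d)

# Same Euclidean distance from the mean: one point along the
# correlation direction of the data, one across it.
along = mu + np.array([1., 1.])
across = mu + np.array([1., -1.])
# mahalanobis(across, ...) is much larger than mahalanobis(along, ...)
```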
The Gaussian mixture distribution can be written as a linear superposition of $K$ Gaussians in the form:
$$
p(x) = \sum_{k=1}^K \mathcal{N}(x \,|\, \mu_k, \Sigma_k)p(k),
$$
where:
- The $p(k)$ are the mixing coefficients, also known as the class probability of class $k$, and they sum to one: $\sum_{k=1}^K p(k) = 1$.
- $\mathcal{N}(x \,|\, \mu_k, \Sigma_k) = p(x \,|\, k)$ is the conditional distribution of $x$ given a particular class $k$. It is the multivariate Gaussian distribution defined over a $P$-dimensional vector $x$ of continuous variables.
The goal is to maximize the log-likelihood of the GMM:
$$
\ln \prod_{i=1}^N p(x_i)= \ln \prod_{i=1}^N \left\{ \sum_{k=1}^K \mathcal{N}(x_i \,|\, \mu_k, \Sigma_k)p(k) \right\} = \sum_{i=1}^N \ln\left\{ \sum_{k=1}^K \mathcal{N}(x_i \,|\, \mu_k, \Sigma_k) p(k) \right\}.
$$
To compute the classes parameters: $p(k), \mu_k, \Sigma_k$ we sum over all samples, by weighting each sample $i$ by its responsibility or contribution to class $k$: $p(k \,|\, x_i)$ such that for each point its contribution to all classes sum to one $\sum_k p(k \,|\, x_i) = 1$. This contribution is the conditional probability
of class $k$ given $x$: $p(k \,|\, x)$ (sometimes called the posterior). It can be computed using Bayes' rule:
\begin{align}
p(k \,|\, x) &= \frac{p(x \,|\, k)p(k)}{p(x)}\\
&= \frac{\mathcal{N}(x \,|\, \mu_k, \Sigma_k)p(k)}{\sum_{k=1}^K \mathcal{N}(x \,|\, \mu_k, \Sigma_k)p(k)}
\end{align}
Since the class parameters, $p(k)$, $\mu_k$ and $\Sigma_k$, depend on the responsibilities $p(k \,|\, x)$ and the responsibilities depend on class parameters, we need a two-step iterative algorithm: the expectation-maximization (EM) algorithm. We discuss this algorithm next.
### The expectation-maximization (EM) algorithm for Gaussian mixtures
Given a Gaussian mixture model, the goal is to maximize the likelihood function with respect to the parameters (comprised of the means and covariances of the components and the mixing coefficients).
Initialize the means $\mu_k$, covariances $\Sigma_k$ and mixing coefficients $p(k)$
1. **E step**. For each sample $i$, evaluate the responsibilities for each class $k$ using the current parameter values
$$
p(k \,|\, x_i) = \frac{\mathcal{N}(x_i \,|\, \mu_k, \Sigma_k)p(k)}{\sum_{k=1}^K \mathcal{N}(x_i \,|\, \mu_k, \Sigma_k)p(k)}
$$
2. **M step**. For each class, re-estimate the parameters using the current responsibilities
\begin{align}
\mu_k^{\text{new}} &= \frac{1}{N_k} \sum_{i=1}^N p(k \,|\, x_i) x_i\\
\Sigma_k^{\text{new}} &= \frac{1}{N_k} \sum_{i=1}^N p(k \,|\, x_i) (x_i - \mu_k^{\text{new}}) (x_i - \mu_k^{\text{new}})^T\\
p^{\text{new}}(k) &= \frac{N_k}{N}
\end{align}
3. Evaluate the log-likelihood
$$
\sum_{i=1}^N \ln \left\{ \sum_{k=1}^K \mathcal{N}(x|\mu_k, \Sigma_k) p(k) \right\},
$$
and check for convergence of either the parameters or the log-likelihood. If the convergence criterion is not satisfied return to step 1.
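As a minimal sketch (illustration only, with made-up toy data and no regularization or convergence loop), one EM iteration implementing the updates above can be written with NumPy and SciPy:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, pis, mus, Sigmas):
    """One EM iteration for a GMM: the E step computes the responsibilities
    p(k | x_i), the M step re-estimates p(k), mu_k and Sigma_k from them."""
    K = len(pis)
    # E step: responsibilities, one column per class -> shape (N, K)
    resp = np.array([pis[k] * multivariate_normal.pdf(X, mus[k], Sigmas[k])
                     for k in range(K)]).T
    resp /= resp.sum(axis=1, keepdims=True)
    # M step
    Nk = resp.sum(axis=0)                  # effective number of points per class
    mus_new = (resp.T @ X) / Nk[:, None]
    Sigmas_new = np.array([(resp[:, k, None] * (X - mus_new[k])).T
                           @ (X - mus_new[k]) / Nk[k] for k in range(K)])
    return Nk / len(X), mus_new, Sigmas_new, resp

# Two Gaussian blobs and a rough initialization
rng = np.random.RandomState(0)
X = np.r_[rng.randn(100, 2), rng.randn(100, 2) + 5]
pis, mus = np.array([0.5, 0.5]), np.array([[0., 0.], [5., 5.]])
Sigmas = np.array([np.eye(2), np.eye(2)])
pis, mus, Sigmas, resp = em_step(X, pis, mus, Sigmas)
```

Iterating `em_step` while monitoring the log-likelihood gives the full algorithm; `sklearn.mixture.GaussianMixture` is a robust implementation of exactly this loop.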
```
import numpy as np
from sklearn import datasets
import matplotlib.pyplot as plt
import seaborn as sns # nice color
import sklearn
from sklearn.mixture import GaussianMixture
import pystatsml.plot_utils
colors = sns.color_palette()
iris = datasets.load_iris()
X = iris.data[:, :2] # 'sepal length (cm)''sepal width (cm)'
y_iris = iris.target
gmm2 = GaussianMixture(n_components=2, covariance_type='full').fit(X)
gmm3 = GaussianMixture(n_components=3, covariance_type='full').fit(X)
gmm4 = GaussianMixture(n_components=4, covariance_type='full').fit(X)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm2.predict(X)])#, color=colors)
for i in range(gmm2.covariances_.shape[0]):
pystatsml.plot_utils.plot_cov_ellipse(cov=gmm2.covariances_[i, :], pos=gmm2.means_[i, :],
facecolor='none', linewidth=2, edgecolor=colors[i])
plt.scatter(gmm2.means_[i, 0], gmm2.means_[i, 1], edgecolor=colors[i],
marker="o", s=100, facecolor="w", linewidth=2)
plt.title("K=2")
plt.subplot(132)
plt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm3.predict(X)])
for i in range(gmm3.covariances_.shape[0]):
pystatsml.plot_utils.plot_cov_ellipse(cov=gmm3.covariances_[i, :], pos=gmm3.means_[i, :],
facecolor='none', linewidth=2, edgecolor=colors[i])
plt.scatter(gmm3.means_[i, 0], gmm3.means_[i, 1], edgecolor=colors[i],
marker="o", s=100, facecolor="w", linewidth=2)
plt.title("K=3")
plt.subplot(133)
plt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm4.predict(X)]) # .astype(np.float))
for i in range(gmm4.covariances_.shape[0]):
pystatsml.plot_utils.plot_cov_ellipse(cov=gmm4.covariances_[i, :], pos=gmm4.means_[i, :],
facecolor='none', linewidth=2, edgecolor=colors[i])
plt.scatter(gmm4.means_[i, 0], gmm4.means_[i, 1], edgecolor=colors[i],
marker="o", s=100, facecolor="w", linewidth=2)
_ = plt.title("K=4")
```
## Model selection
### Bayesian information criterion
In statistics, the Bayesian information criterion (BIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).
```
X = iris.data
y_iris = iris.target
bic = list()
#print(X)
ks = np.arange(1, 10)
for k in ks:
gmm = GaussianMixture(n_components=k, covariance_type='full')
gmm.fit(X)
bic.append(gmm.bic(X))
k_chosen = ks[np.argmin(bic)]
plt.plot(ks, bic)
plt.xlabel("k")
plt.ylabel("BIC")
print("Choose k=", k_chosen)
```
## Hierarchical clustering
Hierarchical clustering builds hierarchies of clusters, following one of two main approaches:
- **Agglomerative**: A *bottom-up* strategy, where each observation starts in its own cluster, and pairs of clusters are merged upwards in the hierarchy.
- **Divisive**: A *top-down* strategy, where all observations start out in the same cluster, and then the clusters are split recursively downwards in the hierarchy.
In order to decide which clusters to merge or to split, a measure of dissimilarity between clusters is introduced. More specifically, this comprises a *distance* measure and a *linkage* criterion. The distance measure is just what it sounds like, and the linkage criterion is essentially a function of the distances between points, for instance the minimum distance between points in two clusters, the maximum distance between points in two clusters, the average distance between points in two clusters, etc. One particular linkage criterion, the Ward criterion, is discussed next.
### Ward clustering
Ward clustering belongs to the family of agglomerative hierarchical clustering algorithms, i.e. it follows a bottom-up approach: each sample starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
In Ward clustering, the criterion for choosing the pair of clusters to merge at each step is the minimum variance criterion: it minimizes the increase in total within-cluster variance at each merge. To implement this method, at each step find the pair of clusters that leads to the minimum increase in total within-cluster variance after merging. This increase is a weighted squared distance between cluster centers.
The main advantage of agglomerative hierarchical clustering over $K$-means clustering is that you can benefit from known neighborhood information, for example, neighboring pixels in an image.
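The merge tree itself can be inspected with SciPy's hierarchical clustering routines (a short sketch on made-up toy data; `scipy.cluster.hierarchy.dendrogram(Z)` would plot the tree):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two tight pairs of points far apart: Ward merges within each pair first
X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])

# Each row of Z records one merge: cluster ids, merge height, new cluster size
Z = linkage(X, method='ward')

# Cut the tree to obtain a flat clustering with 2 clusters
labels = fcluster(Z, t=2, criterion='maxclust')
```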
```
from sklearn import cluster, datasets
import matplotlib.pyplot as plt
import seaborn as sns # nice color
iris = datasets.load_iris()
X = iris.data[:, :2] # 'sepal length (cm)''sepal width (cm)'
y_iris = iris.target
ward2 = cluster.AgglomerativeClustering(n_clusters=2, linkage='ward').fit(X)
ward3 = cluster.AgglomerativeClustering(n_clusters=3, linkage='ward').fit(X)
ward4 = cluster.AgglomerativeClustering(n_clusters=4, linkage='ward').fit(X)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(X[:, 0], X[:, 1], c=ward2.labels_)
plt.title("K=2")
plt.subplot(132)
plt.scatter(X[:, 0], X[:, 1], c=ward3.labels_)
plt.title("K=3")
plt.subplot(133)
plt.scatter(X[:, 0], X[:, 1], c=ward4.labels_) # .astype(np.float))
plt.title("K=4")
```
## Exercises
Perform clustering of the iris dataset based on all variables using Gaussian mixture models.
Use PCA to visualize clusters.
# Performance Tests of Apache Spark-Based DC2 Run 1.1 Object Catalog Access
Author: **Julien Peloton [@JulienPeloton](https://github.com/JulienPeloton)**
Last Run: **2018-11-22**
See also: [issue/249](https://github.com/LSSTDESC/DC2-production/issues/249)
The purpose of this notebook is twofold: introduce Apache Spark, and test the performance of data manipulations on the static coadd catalogs. More benchmarks can be found in the companion notebook (*_appendix.ipynb)
## Before starting...
**What is Apache Spark?**
I'm glad you asked! [Apache Spark](http://spark.apache.org/) is a cluster computing framework, that is, a set of tools to perform computation on a network of many machines. Spark started in 2009 as a research project and has since been widely adopted in industry. It is based on the so-called MapReduce cluster computing paradigm, popularized by the Hadoop framework, and provides implicit data parallelism and fault tolerance.
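To make the paradigm concrete, here is a toy single-machine word count in plain Python (illustration only; Spark distributes the same two phases over many machines and adds fault tolerance):

```python
from functools import reduce
from collections import Counter

lines = ["to be or not to be", "to see or not to see"]

# Map phase: process each line independently (trivially parallelizable)
mapped = [Counter(line.split()) for line in lines]

# Reduce phase: combine the partial results pairwise
counts = reduce(lambda a, b: a + b, mapped)   # counts['to'] == 4
```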
**Where to find information on running Spark at NERSC?**
Most of what you need for interactive and batch jobs is at [spark-distributed-analytic-framework](https://www.nersc.gov/users/data-analytics/data-analytics-2/spark-distributed-analytic-framework/).
For JupyterLab use, see below.
**Where this Notebook is intended to be run?**
These tests were conducted on NERSC through the https://jupyter-dev.nersc.gov interface.
**What is needed to run this Notebook at NERSC?**
1. You need an account at NERSC, and access to the DESC allocation.
2. This Notebook requires a pyspark kernel. The easiest way is to use the `desc-pyspark` kernel (see [LSSTDESC/desc-spark](https://github.com/LSSTDESC/desc-spark#working-at-nersc-jupyterlab) or [LSSTDESC/jupyter-kernels](https://github.com/LSSTDESC/jupyter-kernels) for more information)
If you encounter problems with this kernel, let [me](https://github.com/LSSTDESC/DC2_Repo/issues/new?body=@JulienPeloton) know.
**Where is the data used in this Notebook?**
Data used can be found at
```
/global/projecta/projectdirs/lsst/global/in2p3/Run1.1/summary/
```
Apache Spark can read a large number of data formats (Parquet, Avro, text), but officially neither FITS nor HDF5 is supported. We developed a solution for FITS files ([spark-fits](https://github.com/astrolabsoftware/spark-fits), Scala/Java/Python/R API), but as far as I know there is no Python-friendly connector for HDF5. Therefore we will focus only on Parquet and FITS in this Notebook.
**Note concerning resources**
```
The large-memory login node used by https://jupyter-dev.nersc.gov/
is a shared resource, so please be careful not to use too many CPUs
or too much memory.
That means avoid using `--master local[*]` in your kernel, but limit
the resources to a few core. Typically `--master local[4]` is enough for
prototyping a program.
Then to scale the analysis, the best is to switch to batch mode!
There, no limit!
```
This is already taken care of for you in the DESC kernel setup script (from desc-pyspark), but keep this in mind if you use a custom kernel.
## Loading the data
We will follow what is done on the Dask Notebook ([hdf5, pandas](https://github.com/LSSTDESC/DC2-production/blob/master/Notebooks/object_catalog_performance_dask.ipynb), [parquet](https://github.com/LSSTDESC/DC2-production/blob/master/Notebooks/object_catalog_performance_dask_parquet.ipynb)).
We will first focus on one `patch` (4850) with all `tracts`.
### Disclaimer
Apache Spark is primarily meant to be used in a context of _big data_.
One of its strengths is scalability: the same piece of code can be used
regardless of the underlying data volume, and the performance then depends
only on the resources used.
E.g. for tasks without communications, execution time will be linear in the data volume or the resources.
Keep in mind:
- For small volumes of data (< 10 GB), you will mostly measure Spark's fixed overhead (startup and scheduling time).
- Spark is written in Scala, which is certainly not as specialised as C++ could be. Therefore, for small volumes of data, there is a chance an algorithm in Scala (Spark) would be slower than its C++ counterpart. But the Spark one is meant to run on TB of data _as it was written_ for MB of data, which is probably not the case for the C++ one.
- Once the data is loaded, you can decide to keep it in memory (distributed among the executors). The next iterations will then go much faster (typically disk I/O throughput is o(100) MB/s while RAM is o(10) GB/s).
So in this example, the one-patch test is likely to understate Spark performance (the volume is just a few GB here), and these tests must also be run on hundreds of GB of data.
```
import os
base_dir = '/global/projecta/projectdirs/lsst/global/in2p3/Run1.1/summary'
# Load one patch, all tracts
datafile = os.path.join(base_dir, 'dpdd_object.parquet')
print("Data will be read from: \n", datafile)
```
## Loading data into a DataFrame
Let's initialise Spark and load the data into a DataFrame. We will first focus on the `parquet` data format.
```
from pyspark.sql import SparkSession
# Initialise our Spark session
spark = SparkSession.builder.getOrCreate()
# Read the data as DataFrame
df = spark.read.format("parquet").load(datafile)
# Check what we have in the file
df.printSchema()
# Get number of elements
print("All tracts DataFrame has length:", df.count())
%timeit c = df.count()
```
## Some statistics about the data
Let's play with the data. We will see how to compute statistics (mean, std, etc...) and ...
```
from pyspark.sql import DataFrame
from pyspark.sql.functions import col
def stat_one_col(df: DataFrame, colname: str) -> DataFrame:
""" Return some statistics about one DataFrame Column.
Statistics include: count, mean, stddev, min, max.
Parameters
----------
df : DataFrame
Spark DataFrame
colname : str
Name of the Column for which we want the statistics
Returns
----------
out : DataFrame
DataFrame containing statistics about the Column.
"""
return df.select(colname).describe()
def stat_diff_col(df: DataFrame, colname_1: str, colname_2: str) -> DataFrame:
""" Return some statistics about the difference of
two DataFrame Columns.
Statistics include: count, mean, stddev, min, max.
Parameters
----------
df : DataFrame
Spark DataFrame
colname_1 : str
Name of the first Column
colname_2 : str
Name of the second Column
Returns
----------
out : DataFrame
DataFrame containing statistics about the Columns difference.
"""
return df.select(col(colname_1) - col(colname_2)).describe()
# Get statistics about one column
stat_one_col(df, 'mag_g').show()
%timeit c = stat_one_col(df, 'mag_g').collect()
# Get statistics for the difference of two columns
stat_diff_col(df, 'mag_g', 'mag_r').show()
%timeit d = stat_diff_col(df, 'mag_g', 'mag_r').collect()
```
It takes roughly 1 second to produce statistics on the full dataset (6 million rows, 4.5 GB). This has to be compared to the 52 seconds for Dask to compute the `mean` using the same resources (4 CPUs). Note that we didn't explicitly ask to put the data in cache (which would be much faster). Note also that the number of elements for each column used to produce statistics varies (`count`). This is due to the fact that `NaN` values are discarded.
I also ran this test with more resources (i.e. more CPUs), and the execution time gets smaller. Not linearly, as we hit Spark's fixed overhead (not enough data), but down to a few hundred milliseconds with 32 CPUs.
## Reducing, playing, plotting
You can always go back to the pandas world by using the `toPandas()` method. But be careful: if you do that on the full DataFrame you will destroy your driver for sure! Spark DataFrames are abstractions of arbitrary amounts of data. Invoking `toPandas()` triggers an action, and the objects to transfer will be materialised. Imagine TB of data suddenly flowing to your driver...
So transfer only a subset of the full Spark DataFrame to pandas. In pyspark there are several ways of selecting a subset of the data: via an SQL expression or via DataFrame methods (which are somewhat related to each other...):
```
# Subset of columns of interest
cols = "mag_g, mag_r, mag_i, magerr_g, magerr_r, magerr_i, extendedness"
# SQL - register first the DataFrame
df.createOrReplaceTempView("full_tract")
# Keeps only columns with 0.0 < magerr_g < 0.3
sql_command = """
SELECT {}
FROM full_tract
WHERE
magerr_g > 0 AND magerr_g < 0.3
""".format(cols)
# Execute the expression - return a DataFrame
df_sub = spark.sql(sql_command)
print("Number of elements selected: ", df_sub.count())
df_sub.show()
# Note that we could have done otherwise:
# Example with select.where
df.select(cols.split(", ")).where("magerr_g > 0 AND magerr_g < 0.3")
# Example with select.filter
df.select(cols.split(", ")).filter("magerr_g > 0 AND magerr_g < 0.3")
```
3000 objects, 7 columns - that's more reasonable than the initial millions! We can go to pandas and play with your favourite plotting library, such as seaborn, matplotlib, and so on:
```
df_pandas = df_sub.toPandas()
df_pandas
```
## Data quality
**87% valid entries, 13% NaN**
```
from pyspark.sql.functions import col,sum
# Show the number of valid entries
df_entries = df.select(*(sum(col(c).isNotNull().cast("int")).alias(c) for c in df.columns))
entries = df_entries.collect()
import numpy as np
ini_len = df.count()
print("Input number of entries: {}".format(ini_len))
print("Total yield: {}%".format(np.sum(entries) / (ini_len * len(df.columns)) * 100))
for col, entry in zip(df.columns, entries[0]):
print("{}: {} ({:.1f}%)".format(col, entry, entry/ini_len*100))
```
# Introduction
This notebook was used in order to create the **"Naive Early-fusion" row in TABLE II**.
Note that a lot of code is copy-pasted across notebooks, so you may find some functionality implemented here that is not used; for instance, the network is implemented in a way that supports late fusion, which is not used in this notebook.
```
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# Font which got unicode math stuff.
import matplotlib as mpl
mpl.rcParams['font.family'] = 'DejaVu Sans'
# Much more readable plots
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# Much better than plt.subplots()
from mpl_toolkits.axes_grid1 import ImageGrid
# https://github.com/ipython/ipython/issues/7270#issuecomment-355276432
mpl.interactive(False)
import wheelchAI.utils as u
import lbtoolbox.util as lbu
from ipywidgets import interact, IntSlider, FloatSlider
import ipywidgets
```
# Data loading
```
from os.path import join as pjoin
from glob import glob
```
**CAREFUL**: `scan` goes right-to-left, i.e. first array value corresponds to "rightmost" laser point. Positive angle is left, negative angle right.
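For reference, the index-to-angle mapping can be sketched as follows (the beam count and field of view below are illustrative assumptions, not values read from the data):

```python
import numpy as np

def scan_angles(npts=450, fov_deg=225.0):
    """Angle (rad) of each scan index: index 0 is the rightmost beam
    (most negative angle), the last index the leftmost (most positive).
    npts and fov_deg are hypothetical; they depend on the actual sensor."""
    fov = np.radians(fov_deg)
    return np.linspace(-fov / 2, fov / 2, npts)

angles = scan_angles()   # angles[0] < 0 (right), angles[-1] > 0 (left)
```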
```
LABELDIR = DATADIR = "/fastwork/data/DROW-data/"
train_names = [f[:-4] for f in glob(pjoin(DATADIR, 'train', '*.csv'))]
val_names = [f[:-4] for f in glob(pjoin(DATADIR, 'val', '*.csv'))]
te_names = [f[:-4] for f in glob(pjoin(DATADIR, 'test', '*.csv'))]
tr = u.Dataset(train_names, DATADIR, LABELDIR)
va = u.Dataset(val_names, DATADIR, LABELDIR)
WIN_KW = dict(ntime=5, nsamp=48, odom=False, repeat_before=True, center_time='each')
%timeit u.get_batch(tr, bs=1024, **WIN_KW)
batcher = u.BackgroundFunction(u.get_batch, 5, data=tr, bs=1024, **WIN_KW)
```
# Model definition
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import lbtoolbox.pytorch as lbt
torch.backends.cudnn.benchmark = True # Run benchmark to select fastest implementation of ops.
GPU=1 # This is the GPU index, use `False` for CPU-only.
class DROWNet3EF(nn.Module):
def __init__(self, snip_len, dropout=0.5, *a, **kw):
""" thin_fact should be 8 for 5 time-win. """
super(DROWNet3EF, self).__init__(*a, **kw)
# >>> m = weight_norm(nn.Linear(20, 40), name='weight', dim=???)
self.dropout = dropout
self.conv1a = nn.Conv1d(snip_len, 64, kernel_size=3, padding=1)
self.bn1a = nn.BatchNorm1d( 64)
self.conv1b = nn.Conv1d( 64, 64, kernel_size=3, padding=1)
self.bn1b = nn.BatchNorm1d( 64)
self.conv1c = nn.Conv1d( 64, 128, kernel_size=3, padding=1)
self.bn1c = nn.BatchNorm1d(128)
self.conv2a = nn.Conv1d(128, 128, kernel_size=3, padding=1)
self.bn2a = nn.BatchNorm1d(128)
self.conv2b = nn.Conv1d(128, 128, kernel_size=3, padding=1)
self.bn2b = nn.BatchNorm1d(128)
self.conv2c = nn.Conv1d(128, 256, kernel_size=3, padding=1)
self.bn2c = nn.BatchNorm1d(256)
self.conv3a = nn.Conv1d(256, 256, kernel_size=3, padding=1)
self.bn3a = nn.BatchNorm1d(256)
self.conv3b = nn.Conv1d(256, 256, kernel_size=3, padding=1)
self.bn3b = nn.BatchNorm1d(256)
self.conv3c = nn.Conv1d(256, 512, kernel_size=3, padding=1)
self.bn3c = nn.BatchNorm1d(512)
self.conv4a = nn.Conv1d(512, 256, kernel_size=3, padding=1)
self.bn4a = nn.BatchNorm1d(256)
self.conv4b = nn.Conv1d(256, 128, kernel_size=3, padding=1)
self.bn4b = nn.BatchNorm1d(128)
self.conv4p = nn.Conv1d(128, 4, kernel_size=1) # probs
self.conv4v = nn.Conv1d(128, 2, kernel_size=1) # vote
self.reset_parameters()
def forward(self, x):
x = F.leaky_relu(self.bn1a(self.conv1a(x)), 0.1)
x = F.leaky_relu(self.bn1b(self.conv1b(x)), 0.1)
x = F.leaky_relu(self.bn1c(self.conv1c(x)), 0.1)
x = F.max_pool1d(x, 2) # 24
x = F.dropout(x, p=self.dropout, training=self.training)
x = F.leaky_relu(self.bn2a(self.conv2a(x)), 0.1)
x = F.leaky_relu(self.bn2b(self.conv2b(x)), 0.1)
x = F.leaky_relu(self.bn2c(self.conv2c(x)), 0.1)
x = F.max_pool1d(x, 2) # 12
x = F.dropout(x, p=self.dropout, training=self.training)
x = F.leaky_relu(self.bn3a(self.conv3a(x)), 0.1)
x = F.leaky_relu(self.bn3b(self.conv3b(x)), 0.1)
x = F.leaky_relu(self.bn3c(self.conv3c(x)), 0.1)
x = F.max_pool1d(x, 2) # 6
x = F.dropout(x, p=self.dropout, training=self.training)
x = F.leaky_relu(self.bn4a(self.conv4a(x)), 0.1)
x = F.leaky_relu(self.bn4b(self.conv4b(x)), 0.1)
x = F.avg_pool1d(x, 6)
logits = self.conv4p(x)
votes = self.conv4v(x)
return logits[:,:,0], votes[:,:,0] # Due to the arch, output has spatial size 1, so we [0] it.
def reset_parameters(self):
lbt.init(self.conv1a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv1b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv1c, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv2a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv2b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv2c, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv3a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv3b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv3c, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv4a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv4b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv4p, lambda t: nn.init.constant(t, 0), 0)
lbt.init(self.conv4v, lambda t: nn.init.constant(t, 0), 0)
nn.init.constant(self.bn1a.weight, 1)
nn.init.constant(self.bn1b.weight, 1)
nn.init.constant(self.bn1c.weight, 1)
nn.init.constant(self.bn2a.weight, 1)
nn.init.constant(self.bn2b.weight, 1)
nn.init.constant(self.bn2c.weight, 1)
nn.init.constant(self.bn3a.weight, 1)
nn.init.constant(self.bn3b.weight, 1)
nn.init.constant(self.bn3c.weight, 1)
nn.init.constant(self.bn4a.weight, 1)
nn.init.constant(self.bn4b.weight, 1)
net = lbt.maybe_cuda(DROWNet3EF(WIN_KW['ntime']), GPU)
lbt.count_parameters(net)
with torch.no_grad():
logits, votes = net(Variable(lbt.maybe_cuda(torch.from_numpy(batcher()[0]), GPU)))
logits.data.shape, votes.data.shape
_dummy_X, _, _ = u.get_batch(tr, 450, **WIN_KW)
def _fwd(net, GPU):
with torch.no_grad():
logits, votes = net(Variable(lbt.maybe_cuda(torch.from_numpy(_dummy_X), GPU), requires_grad=False))
return logits.data.cpu(), votes.data.cpu()
net.eval();
%timeit _fwd(net, GPU)
```
# Training
```
import lbtoolbox.plotting as lbplt
def plottrain_loss(ax_xent, ax_votes):
ax_xent.plot(np.array(xent_avg_losses).flatten())
ax_xent.plot(7500*(0.5 + np.arange(len(xent_avg_losses))), np.mean(xent_avg_losses, axis=-1))
ax_xent.set_yscale('log')
ax_xent.set_ylim(top=2e-1)
ax_votes.plot(np.array(offs_avg_losses).flatten())
    ax_votes.plot(7500*(0.5 + np.arange(len(offs_avg_losses))), np.mean(offs_avg_losses, axis=-1))
ax_votes.set_yscale('log')
ax_votes.set_ylim(top=2e-1)
def plottrain1():
fig, axs = plt.subplots(1, 2, figsize=(15,5))
plottrain_loss(*axs)
return fig
```
## Actual start
```
opt = optim.Adam(net.parameters(), amsgrad=True)
xent_losses = []
offs_losses = []
xent_avg_losses = []
offs_avg_losses = []
e, name = 50, "final-WNet3xEF-T5-odom=False-center=each"
net.reset_parameters()
with lbu.Uninterrupt() as un:
net.train()
for e in range(e, 50):
torch.save({'model': net.state_dict(), 'optim': opt.state_dict()},
'/fastwork/beyer/dumps/DROW/{}-{:.0f}ep.pth.tar'.format(name, e))
if un.interrupted:
break
for i in range(7500):
Xb, yb_conf, yb_offs = batcher()
# Apply target noise
tgt_noise = np.exp(np.random.randn(*yb_offs.shape).astype(np.float32)/20)
yb_offs = yb_offs*tgt_noise
            # Random left-right flip. Applied to the whole batch for convenience; statistically this should match flipping samples individually.
if np.random.rand() < 0.5:
Xb = np.array(Xb[:,:,::-1]) # PyTorch doesn't currently support negative strides.
yb_offs = np.c_[-yb_offs[:,0], yb_offs[:,1]] # Sure to get a copy, batched could give us a view!
v_X = Variable(lbt.maybe_cuda(torch.from_numpy(Xb), GPU))
v_y_conf = Variable(lbt.maybe_cuda(torch.from_numpy(yb_conf), GPU), requires_grad=False)
v_y_offs = Variable(lbt.maybe_cuda(torch.from_numpy(yb_offs), GPU), requires_grad=False)
opt.zero_grad()
logits, votes = net(v_X)
xent = F.cross_entropy(logits, v_y_conf, reduce=True)
xent_losses.append(xent.data.cpu().numpy())
loss = xent.mean()
# Need to special-case batches without any vote labels, because mean of empty is nan.
if np.sum(yb_conf) > 0:
offs = F.mse_loss(votes, v_y_offs, reduce=False) # This is really just (a - b)²
offs = torch.sqrt(torch.masked_select(torch.sum(offs, 1), v_y_conf.ne(0)))
offs_losses.append(offs.data.cpu().numpy())
loss += offs.mean()
else:
offs_losses.append(np.array([]))
loss.backward()
            # Decay the learning rate according to overall progress (epoch + fraction of current epoch).
for group in opt.param_groups:
group['lr'] = lbu.expdec(e+i/7500, 40, 1e-3, 50, 1e-6)
opt.step()
if i > 0 and i % 25 == 0:
print('\r[{:.2f} ({}/{})]: Loss: xent={:.4f} offs={:.4f} | Q-fill={:.1%} '.format(
e+i/7500, i, 7500,
np.mean(xent_losses[-100:]), np.nanmean(list(map(np.mean, offs_losses[-100:]))),
batcher.fill_status(normalize=True),
), end='', flush=True)
# To avoid OOM errors on long runs
xent_avg_losses.append(np.array([np.mean(x) for x in xent_losses]))
offs_avg_losses.append(np.array([np.mean(o) for o in offs_losses]))
xent_losses.clear()
offs_losses.clear()
lbplt.liveplot(plottrain1)
torch.save({'model': net.state_dict(), 'optim': opt.state_dict()},
'/fastwork/beyer/dumps/DROW/{}-{:.0f}ep.pth.tar'.format(name, e+1))
```
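The learning-rate line inside the loop above calls `lbu.expdec(e+i/7500, 40, 1e-3, 50, 1e-6)`. A plausible reading (hypothetical — the real `lbtoolbox` helper may differ) is a geometric decay from 1e-3 starting at epoch 40 down to 1e-6 at epoch 50, held constant outside that window:

```python
import math

def expdec(t, t0, v0, t1, v1):
    # Hypothetical sketch of lbu.expdec: returns v0 before t0, v1 after t1,
    # and decays geometrically from v0 to v1 in between.
    if t <= t0:
        return v0
    if t >= t1:
        return v1
    frac = (t - t0) / (t1 - t0)
    return v0 * (v1 / v0) ** frac

# Halfway through the decay window, the rate is the geometric mean of the endpoints:
assert abs(expdec(45, 40, 1e-3, 50, 1e-6) - math.sqrt(1e-3 * 1e-6)) < 1e-15
```

A geometric (rather than linear) ramp-down keeps the step size shrinking by a constant factor per epoch, which is a common choice for fine-tuning at the end of training.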
```
load = torch.load('/fastwork/beyer/dumps/DROW/{}-{:.0f}ep.pth.tar'.format(name, e))
net.load_state_dict(load['model'])
opt.load_state_dict(load['optim'])
```
# Evaluation
```
import pickle
def get_scan(va, iseq, iscan, ntime, nsamp, repeat_before, **cutout_kw):
scan = va.scans[iseq][iscan]
Xb = np.empty((len(scan), ntime, nsamp), np.float32)
assert repeat_before, "Don't know what to do if not repeat before?!"
# Prepend the exact same scan/odom for the first few where there's no history.
if iscan-ntime+1 < 0:
scans = np.array([va.scans[iseq][0]]*abs(iscan-ntime+1) + [va.scans[iseq][i] for i in range(iscan+1)])
odoms = np.array([va.odoms[iseq][0]]*abs(iscan-ntime+1) + [va.odoms[iseq][i] for i in range(iscan+1)])
else:
scans = va.scans[iseq][iscan-ntime+1:iscan+1]
odoms = va.odoms[iseq][iscan-ntime+1:iscan+1]
for ipt in range(len(scan)):
u.cutout(scans, odoms, ipt, out=Xb[ipt], nsamp=nsamp, **cutout_kw)
return Xb
def forward(net, xb):
net.eval()
with torch.no_grad():
logits, votes = net(Variable(lbt.maybe_cuda(torch.from_numpy(xb), GPU)))
return F.softmax(logits, dim=-1).data.cpu().numpy(), votes.data.cpu().numpy()
def forward_all(net, va, **get_scan_kw):
all_confs, all_votes = [], []
nseq = len(va.detsns)
for iseq in range(nseq):
ndet = len(va.detsns[iseq])
for idet in range(ndet):
print('\r[{}/{} | {}/{}] '.format(1+iseq, nseq, 1+idet, ndet), flush=True, end='')
confs, votes = forward(net, get_scan(va, iseq, va.idet2iscan[iseq][idet], **get_scan_kw))
all_confs.append(confs)
all_votes.append(votes)
return np.array(all_confs), np.array(all_votes)
```
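`get_scan` above pads short histories by repeating the very first scan of the sequence. The windowing rule in isolation, as a sketch with a hypothetical `history_window` helper:

```python
def history_window(seq, i, ntime):
    # Return the ntime entries ending at index i; if there is not enough
    # history yet, repeat the first entry to fill the window, mirroring
    # what get_scan does for scans and odometry.
    if i - ntime + 1 < 0:
        return [seq[0]] * (ntime - 1 - i) + list(seq[:i + 1])
    return list(seq[i - ntime + 1:i + 1])

assert history_window([10, 20, 30, 40], 0, 3) == [10, 10, 10]
assert history_window([10, 20, 30, 40], 1, 3) == [10, 10, 20]
assert history_window([10, 20, 30, 40], 3, 3) == [20, 30, 40]
```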
## On val
```
pred_yva_conf, pred_yva_offs = forward_all(net, va, **WIN_KW)
```
Compute and dump the predictions on the validation set so they can be used in our hyperparameter-tuning setup (which is not published because it is very specific to our lab).
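The dump is a plain pickle of a list of arrays. A minimal round-trip of that format, with dummy stand-ins for the real arrays (and using context managers so the file handle is closed, rather than a bare `open` call):

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for the [x, y, confidences, wcs, was, wps] list
payload = [[0.0, 1.0], [1.0, 2.0], [[0.1, 0.9]], ["wc"], ["wa"], ["wp"]]

path = os.path.join(tempfile.mkdtemp(), "demo.pkl")
with open(path, "wb") as f:   # context manager closes the handle
    pickle.dump(payload, f)
with open(path, "rb") as f:
    restored = pickle.load(f)
assert restored == payload
```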
```
_seqs, _scans, _wcs, _was, _wps = u.linearize(va.scansns, va.scans, va.detsns, va.wcdets, va.wadets, va.wpdets)
_scans = np.array(_scans)
x, y = u._prepare_prec_rec_softmax(_scans, pred_yva_offs)
pickle.dump([x, y, pred_yva_conf, _wcs, _was, _wps], open('/fastwork/beyer/dumps/DROW/' + name + ".pkl", "wb"))
'/fastwork/beyer/dumps/DROW/' + name + ".pkl"
results = u.comp_prec_rec_softmax(_scans, _wcs, _was, _wps, pred_yva_conf, pred_yva_offs,
blur_win=5, blur_sigma=1, weighted_avg=False)
fig, ax = u.plot_prec_rec(*results, title=name + " VoteAvg")
plt.close(fig)
fig
```
## On Test
```
te = u.Dataset(te_names, DATADIR, LABELDIR)
_seqs_te, _scans_te, _wcs_te, _was_te, _wps_te = u.linearize(te.scansns, te.scans, te.detsns, te.wcdets, te.wadets, te.wpdets)
_scans_te = np.array(_scans_te)
pred_yte_conf, pred_yte_offs = forward_all(net, te, **WIN_KW)
```
### TABLE II, row "Naive Early-fusion"
```
import json
from os.path import join as pjoin
with open(pjoin('/home/hermans/drow_votes', name + '.json')) as f:
_kw = json.loads(f.read())
results_te = u.comp_prec_rec_softmax(_scans_te, _wcs_te, _was_te, _wps_te, pred_yte_conf, pred_yte_offs, **_kw)
plt.close()
fig, ax = u.plot_prec_rec(*results_te, title=name + " Hype (TEST)")
plt.show(fig)
print(_kw)
for i, cls in enumerate(['wd', 'wc', 'wa', 'wp']):
u.dump_paper_pr_curves(
'/home/beyer/academic/drower9k/iros18_laser_people_detection/data/pr_curves/' + name + '_' + cls,
results_te[i][1], results_te[i][0])
```
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### Load packages
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
import tensorflow_addons as tfa
```
### parameters
```
dataset = "cifar10"
labels_per_class = 256 # 'full'
n_latent_dims = 1024
confidence_threshold = 0.0 # minimum confidence to include in UMAP graph for learned metric
learned_metric = True # whether to use a learned metric, or Euclidean distance between datapoints
augmented = True #
min_dist= 0.001 # min_dist parameter for UMAP
negative_sample_rate = 5 # how many negative samples per positive sample
batch_size = 128 # batch size
optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train
optimizer = tfa.optimizers.MovingAverage(optimizer)
label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy
max_umap_iterations = 50 # how many times, maximum, to recompute UMAP
max_epochs_per_graph = 50 # how many epochs maximum each graph trains for (without early stopping)
umap_patience = 5 # how long before recomputing UMAP graph
```
#### Load dataset
```
from tfumap.semisupervised_keras import load_dataset
(
X_train,
X_test,
X_labeled,
Y_labeled,
Y_masked,
X_valid,
Y_train,
Y_test,
Y_valid,
Y_valid_one_hot,
Y_labeled_one_hot,
num_classes,
dims
) = load_dataset(dataset, labels_per_class)
```
### load architecture
```
from tfumap.semisupervised_keras import load_architecture
encoder, classifier, embedder = load_architecture(dataset, n_latent_dims)
```
### load pretrained weights
```
from tfumap.semisupervised_keras import load_pretrained_weights
encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier)
```
#### compute pretrained accuracy
```
# test current acc
pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True)
pretrained_predictions = np.argmax(pretrained_predictions, axis=1)
pretrained_acc = np.mean(pretrained_predictions == Y_test)
print('pretrained acc: {}'.format(pretrained_acc))
```
### get a, b parameters for embeddings
```
from tfumap.semisupervised_keras import find_a_b
a_param, b_param = find_a_b(min_dist=min_dist)
```
### build network
```
from tfumap.semisupervised_keras import build_model
model = build_model(
batch_size=batch_size,
a_param=a_param,
b_param=b_param,
dims=dims,
encoder=encoder,
classifier=classifier,
negative_sample_rate=negative_sample_rate,
optimizer=optimizer,
label_smoothing=label_smoothing,
embedder = embedder,
)
```
### build labeled iterator
```
from tfumap.semisupervised_keras import build_labeled_iterator
labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims)
```
### training
```
from livelossplot import PlotLossesKerasTF
from tfumap.semisupervised_keras import get_edge_dataset
from tfumap.semisupervised_keras import zip_datasets
```
#### callbacks
```
# early stopping callback
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_classifier_acc', min_delta=0, patience=15, verbose=0, mode='auto',
baseline=None, restore_best_weights=False
)
# plot losses callback
groups = {'accuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']}
plotlosses = PlotLossesKerasTF(groups=groups)
history_list = []
current_validation_acc = 0
batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int)
epochs_since_last_improvement = 0
for current_umap_iterations in tqdm(np.arange(max_umap_iterations)):
# make dataset
edge_dataset = get_edge_dataset(
model,
classifier,
encoder,
X_train,
Y_masked,
batch_size,
confidence_threshold,
labeled_dataset,
dims,
learned_metric = learned_metric
)
# zip dataset
zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size)
# train dataset
history = model.fit(
zipped_ds,
epochs=max_epochs_per_graph,
validation_data=(
(X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)),
{"classifier": Y_valid_one_hot},
),
callbacks = [early_stopping, plotlosses],
max_queue_size = 100,
steps_per_epoch = batches_per_epoch,
#verbose=0
)
history_list.append(history)
# get validation acc
pred_valid = classifier.predict(encoder.predict(X_valid))
new_validation_acc = np.mean(np.argmax(pred_valid, axis = 1) == Y_valid)
# if validation accuracy has gone up, mark the improvement
if new_validation_acc > current_validation_acc:
epochs_since_last_improvement = 0
current_validation_acc = copy.deepcopy(new_validation_acc)
else:
epochs_since_last_improvement += 1
if epochs_since_last_improvement > umap_patience:
            print('No improvement in the last {} UMAP iterations'.format(umap_patience))
break
class_pred = classifier.predict(encoder.predict(X_test))
class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test)
print(class_acc)
```
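The stopping rule in the loop above — recompute the UMAP graph until `umap_patience` consecutive recomputations bring no validation improvement — can be isolated as a small helper (hypothetical, for illustration):

```python
def run_with_patience(accs, patience):
    # Re-implements the loop's bookkeeping: stop after `patience`
    # consecutive iterations without a new best validation accuracy.
    best, since = 0.0, 0
    for step, acc in enumerate(accs):
        if acc > best:
            best, since = acc, 0
        else:
            since += 1
            if since > patience:
                return step  # index at which the loop breaks
    return len(accs) - 1

# Improves, then stalls: breaks once patience (2) is exceeded.
assert run_with_patience([0.5, 0.6, 0.6, 0.6, 0.6], 2) == 4
# Keeps improving: runs to the end.
assert run_with_patience([0.5, 0.6, 0.7], 2) == 2
```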
# The Truck Fleet puzzle
This tutorial includes everything you need to set up decision optimization engines and build constraint programming models.
When you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.
>This notebook is part of the **[Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)**
>It requires a valid subscription to **Decision Optimization on the Cloud** or a **local installation of CPLEX Optimizers**.
Discover us [here](https://developer.ibm.com/docloud)
Table of contents:
- [Describe the business problem](#Describe-the-business-problem)
 * [How decision optimization (prescriptive analytics) can help](#How-decision-optimization-can-help)
* [Use decision optimization](#Use-decision-optimization)
* [Step 1: Download the library](#Step-1:-Download-the-library)
* [Step 2: Set up the engines](#Step-2:-Set-up-the-prescriptive-engine)
- [Step 3: Model the Data](#Step-3:-Model-the-data)
* [Step 4: Prepare the data](#Step-4:-Prepare-the-data)
- [Step 4: Set up the prescriptive model](#Step-4:-Set-up-the-prescriptive-model)
* [Prepare data for modeling](#Prepare-data-for-modeling)
* [Define the decision variables](#Define-the-decision-variables)
* [Express the business constraints](#Express-the-business-constraints)
* [Express the objective](#Express-the-objective)
* [Solve with Decision Optimization solve service](#Solve-with-Decision-Optimization-solve-service)
* [Step 5: Investigate the solution and run an example analysis](#Step-5:-Investigate-the-solution-and-then-run-an-example-analysis)
* [Summary](#Summary)
****
### Describe the business problem
* The problem is to deliver some orders to several clients with a single truck.
* Each order consists of a given quantity of a product of a certain type.
* A product type is an integer in {0, 1, 2}.
* Loading the truck with at least one product of a given type requires some specific installations.
* The truck can be configured in order to handle one, two or three different types of product.
* There are 7 different configurations for the truck, corresponding to the 7 possible combinations of product types:
- configuration 0: all products are of type 0,
- configuration 1: all products are of type 1,
- configuration 2: all products are of type 2,
- configuration 3: products are of type 0 or 1,
- configuration 4: products are of type 0 or 2,
- configuration 5: products are of type 1 or 2,
- configuration 6: products are of type 0 or 1 or 2.
* The cost for configuring the truck from a configuration A to a configuration B depends on A and B.
* The configuration of the truck determines its capacity and its loading cost.
* A delivery consists of loading the truck with one or several orders for the same customer.
* Both the cost (for configuring and loading the truck) and the number of deliveries needed to deliver all the orders must be minimized, the cost being the most important criterion.
Please refer to the documentation for the appropriate setup of the solving configuration.
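The compatibility between product types and the seven configurations can be written down directly from the list above; this pure-Python sketch reproduces the `ALLOWED_CONTAINER_CONFIGS` table that appears later in the notebook:

```python
# The seven configurations, as sets of product types they accept
CONFIG_TYPES = [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]

def allowed_configs(product_type):
    # All configurations able to carry products of the given type
    return [c for c, types in enumerate(CONFIG_TYPES) if product_type in types]

assert allowed_configs(0) == [0, 3, 4, 6]
assert allowed_configs(1) == [1, 3, 5, 6]
assert allowed_configs(2) == [2, 4, 5, 6]
```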
*****
## How decision optimization can help
* Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
* Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
* Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
+ For example:
+ Automate complex decisions and trade-offs to better manage limited resources.
+ Take advantage of a future opportunity or mitigate a future risk.
+ Proactively update recommendations based on changing events.
+ Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
## Use decision optimization
### Step 1: Download the library
Run the following code to install Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
```
import sys            # used below to detect a virtual environment
from sys import stdout
try:
    import docplex.cp
except ImportError:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
```
Note that the broader <i>docplex</i> package also contains the subpackage <i>docplex.mp</i>, which is dedicated to Mathematical Programming, another branch of optimization.
### Step 2: Set up the prescriptive engine
* Subscribe to the [Decision Optimization on Cloud solve service](https://developer.ibm.com/docloud).
* Get the service URL and your personal API key.
__Set your DOcplexcloud credentials:__
0. A first option is to set the DOcplexcloud url and key directly in the model source file *(see below)*
1. For a persistent setting, create a Python file __docloud_config.py__ somewhere that is visible from the __PYTHONPATH__
```
SVC_URL = "ENTER YOUR URL HERE"
SVC_KEY = "ENTER YOUR KEY HERE"
from docplex.cp.model import *
```
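For the persistent option, a sketch of the import-with-fallback pattern (the module name `docloud_config` is the one suggested above; the fallback strings are placeholders, not real credentials):

```python
try:
    # picked up from a docloud_config.py somewhere on the PYTHONPATH
    from docloud_config import SVC_URL, SVC_KEY
except ImportError:
    # fall back to credentials set directly in the notebook
    SVC_URL = "ENTER YOUR URL HERE"
    SVC_KEY = "ENTER YOUR KEY HERE"

assert isinstance(SVC_URL, str) and isinstance(SVC_KEY, str)
```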
### Step 3: Model the data
The next section defines the data of the problem.
```
# List of possible truck configurations. Each tuple is (load, cost) with:
# load: max truck load for this configuration,
# cost: cost for loading the truck in this configuration
TRUCK_CONFIGURATIONS = ((11, 2), (11, 2), (11, 2), (11, 3), (10, 3), (10, 3), (10, 4))
# List of customer orders.
# Each tuple is (customer index, volume, product type)
CUSTOMER_ORDERS = ((0, 3, 1), (0, 4, 2), (0, 3, 0), (0, 2, 1), (0, 5, 1), (0, 4, 1), (0, 11, 0),
(1, 4, 0), (1, 5, 0), (1, 2, 0), (1, 4, 2), (1, 7, 2), (1, 3, 2), (1, 5, 0), (1, 2, 2),
(2, 5, 1), (2, 6, 0), (2, 11, 2), (2, 1, 0), (2, 6, 0), (2, 3, 0))
# Transition costs between configurations.
# Tuple (A, B, TCost) means that the cost of modifying the truck from configuration A to configuration B is TCost
CONFIGURATION_TRANSITION_COST = tuple_set(((0, 0, 0), (0, 1, 0), (0, 2, 0), (0, 3, 10), (0, 4, 10),
(0, 5, 10), (0, 6, 15), (1, 0, 0), (1, 1, 0), (1, 2, 0),
(1, 3, 10), (1, 4, 10), (1, 5, 10), (1, 6, 15), (2, 0, 0),
(2, 1, 0), (2, 2, 0), (2, 3, 10), (2, 4, 10), (2, 5, 10),
(2, 6, 15), (3, 0, 3), (3, 1, 3), (3, 2, 3), (3, 3, 0),
(3, 4, 10), (3, 5, 10), (3, 6, 15), (4, 0, 3), (4, 1, 3),
(4, 2, 3), (4, 3, 10), (4, 4, 0), (4, 5, 10), (4, 6, 15),
(5, 0, 3), (5, 1, 3), (5, 2, 3), (5, 3, 10), (5, 4, 10),
(5, 5, 0), (5, 6, 15), (6, 0, 3), (6, 1, 3), (6, 2, 3),
(6, 3, 10), (6, 4, 10), (6, 5, 10), (6, 6, 0)
))
# Compatibility between the product types and the configuration of the truck
# allowedContainerConfigs[i] = the array of all the configurations that accept products of type i
ALLOWED_CONTAINER_CONFIGS = ((0, 3, 4, 6),
(1, 3, 5, 6),
(2, 4, 5, 6))
```
### Step 4: Set up the prescriptive model
#### Prepare data for modeling
The next section extracts from the problem data the parts that are used frequently in the modeling section.
```
nbTruckConfigs = len(TRUCK_CONFIGURATIONS)
maxTruckConfigLoad = [tc[0] for tc in TRUCK_CONFIGURATIONS]
truckCost = [tc[1] for tc in TRUCK_CONFIGURATIONS]
maxLoad = max(maxTruckConfigLoad)
nbOrders = len(CUSTOMER_ORDERS)
nbCustomers = 1 + max(co[0] for co in CUSTOMER_ORDERS)
volumes = [co[1] for co in CUSTOMER_ORDERS]
productType = [co[2] for co in CUSTOMER_ORDERS]
# Max number of truck deliveries (estimated upper bound, to be increased if no solution)
maxDeliveries = 15
```
#### Create CPO model
```
mdl = CpoModel(name="trucks")
```
#### Define the decision variables
```
# Configuration of the truck for each delivery
truckConfigs = integer_var_list(maxDeliveries, 0, nbTruckConfigs - 1, "truckConfigs")
# In which delivery is an order
where = integer_var_list(nbOrders, 0, maxDeliveries - 1, "where")
# Load of a truck
load = integer_var_list(maxDeliveries, 0, maxLoad, "load")
# Number of deliveries that are required
nbDeliveries = integer_var(0, maxDeliveries)
# Identification of which customer is assigned to a delivery
customerOfDelivery = integer_var_list(maxDeliveries, 0, nbCustomers, "customerOfTruck")
# Transition cost for each delivery
transitionCost = integer_var_list(maxDeliveries - 1, 0, 1000, "transitionCost")
```
#### Express the business constraints
```
# transitionCost[i] = transition cost between configurations i and i+1
for i in range(1, maxDeliveries):
auxVars = (truckConfigs[i - 1], truckConfigs[i], transitionCost[i - 1])
mdl.add(allowed_assignments(auxVars, CONFIGURATION_TRANSITION_COST))
# Constrain the volume of the orders in each truck
mdl.add(pack(load, where, volumes, nbDeliveries))
for i in range(0, maxDeliveries):
mdl.add(load[i] <= element(truckConfigs[i], maxTruckConfigLoad))
# Compatibility between the product type of an order and the configuration of its truck
for j in range(0, nbOrders):
configOfContainer = integer_var(ALLOWED_CONTAINER_CONFIGS[productType[j]])
mdl.add(configOfContainer == element(truckConfigs, where[j]))
# Only one customer per delivery
for j in range(0, nbOrders):
mdl.add(element(customerOfDelivery, where[j]) == CUSTOMER_ORDERS[j][0])
# Non-used deliveries are at the end
for j in range(1, maxDeliveries):
mdl.add((load[j - 1] > 0) | (load[j] == 0))
# Dominance: the non used deliveries keep the last used configuration
mdl.add(load[0] > 0)
for i in range(1, maxDeliveries):
mdl.add((load[i] > 0) | (truckConfigs[i] == truckConfigs[i - 1]))
# Dominance: regroup deliveries with same configuration
for i in range(maxDeliveries - 2, 0, -1):
ct = true()
for p in range(i + 1, maxDeliveries):
ct = (truckConfigs[p] != truckConfigs[i - 1]) & ct
mdl.add((truckConfigs[i] == truckConfigs[i - 1]) | ct)
```
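The `pack` constraint above ties the three arrays together: each `load[d]` must equal the total volume of the orders `o` with `where[o] == d`, and `nbDeliveries` counts the non-empty deliveries. Spelled out in plain Python:

```python
def pack_loads(where, volumes, n_deliveries):
    # load[d] = total volume of the orders assigned to delivery d
    load = [0] * n_deliveries
    for w, v in zip(where, volumes):
        load[w] += v
    return load

# Orders 0 and 1 go to delivery 0, order 2 to delivery 1; delivery 2 stays empty.
loads = pack_loads([0, 0, 1], [3, 4, 2], 3)
assert loads == [7, 2, 0]
assert sum(1 for ld in loads if ld > 0) == 2  # what nbDeliveries would count
```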
#### Express the objective
```
# Objective: first criterion for minimizing the cost for configuring and loading trucks
# second criterion for minimizing the number of deliveries
cost = sum(transitionCost) + sum(element(truckConfigs[i], truckCost) * (load[i] != 0) for i in range(maxDeliveries))
mdl.add(minimize_static_lex([cost, nbDeliveries]))
```
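`minimize_static_lex` compares candidate solutions lexicographically: the cost is decisive, and the number of deliveries only breaks ties. Python tuples compare the same way, which makes the ordering easy to check:

```python
# (cost, deliveries) candidates; tuple comparison is lexicographic,
# so a lower cost wins even with more deliveries,
# and among equal costs the smaller delivery count wins.
candidates = [(30, 5), (30, 4), (28, 6)]
assert min(candidates) == (28, 6)
assert min([(30, 5), (30, 4)]) == (30, 4)
```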
#### Solve with Decision Optimization solve service
```
# Search strategy: first assign order to truck
mdl.set_search_phases([search_phase(where)])
# Solve model
print("\nSolving model....")
msol = mdl.solve(url=SVC_URL, key=SVC_KEY, TimeLimit=20, LogPeriod=3000)
```
### Step 5: Investigate the solution and then run an example analysis
```
if msol.is_solution():
print("Solution: ")
ovals = msol.get_objective_values()
print(" Configuration cost: {}, number of deliveries: {}".format(ovals[0], ovals[1]))
for i in range(maxDeliveries):
ld = msol.get_value(load[i])
if ld > 0:
stdout.write(" Delivery {:2d}: config={}".format(i,msol.get_value(truckConfigs[i])))
stdout.write(", items=")
for j in range(nbOrders):
if (msol.get_value(where[j]) == i):
stdout.write(" <{}, {}, {}>".format(j, productType[j], volumes[j]))
stdout.write('\n')
else:
stdout.write("Solve status: {}\n".format(msol.get_solve_status()))
```
## Summary
You learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate a Constraint Programming model and solve it with IBM Decision Optimization on the cloud.
#### References
* [CPLEX Modeling for Python documentation](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)
* [Decision Optimization on Cloud](https://developer.ibm.com/docloud/)
* Need help with DOcplex or to report a bug? Please go [here](https://developer.ibm.com/answers/smartspace/docloud)
* Contact us at dofeedback@wwpdl.vnet.ibm.com
Copyright © 2017 IBM. IPLA licensed Sample Materials.
# LSTM
* We will implement sentiment analysis of tweets with the TensorFlow library, using its LSTM tooling.
* Unlike Naive Bayes and Logistic Regression, LSTM (Long Short-Term Memory) is a deep learning method.
* The data preprocessing steps are similar to those of the Naive Bayes and Logistic Regression methods, but the classification of tweets is different.
```
# mounting the drive
from google.colab import drive
drive.mount('/content/drive')
# Importing libraries and modules
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# does not work in Colab; for use in Jupyter
#from nltk.corpus import twitter_samples
# reading the data in a dataframe df
# for use in colab
df_pos = pd.read_json(r'/content/drive/MyDrive/Bitirme_P/positive_tweets.json', lines = True, encoding='utf-8')
df_neg = pd.read_json(r'/content/drive/MyDrive/Bitirme_P/negative_tweets.json', lines = True, encoding='utf-8')
# for use in Jupyter notebook
#df_pos = pd.read_json("positive_tweets.json", lines = True, encoding= 'UTF-8')
#df_neg = pd.read_json("negative_tweets.json", lines = True, encoding= 'UTF-8')
print(df_pos.shape)
print(df_neg.shape)
# Tagging positive tweets
positive_sentiment = []
for i in range(0,5000):
positive_sentiment.append(1)
print(len(positive_sentiment))
df_pos['sentiment'] = positive_sentiment
# creating an 'id' column for tweets
dataframe_id = []
for i in range(0,5000):
i+=1
dataframe_id.append(i)
print(len(dataframe_id))
df_pos['df_id'] = dataframe_id
# Tagging negative tweets
negative_sentiment = []
for i in range(0,5000):
negative_sentiment.append(0)
df_neg['sentiment'] = negative_sentiment
# creating an 'id' column for tweets
dataframe_id = []
for i in range(5000,10000):
i+=1
dataframe_id.append(i)
df_neg['df_id'] = dataframe_id
# Two json files merged on dataframe by adding lines
df_tam = pd.concat([df_pos, df_neg])
print(df_tam.shape)
df_tam.tail()
# printing the dataframe and assign
df = df_tam[['df_id','sentiment','text']]
df.head(10)
# verifying the sentiment values
# 1 is a positive sentiment and 0 is a negative sentiment
df['sentiment'].value_counts()
# pre-processing the data
# define a function to remove the @mentions and other useless text from the tweets
import re
def text_cleaning(tweet):
    tweet = re.sub(r'@[A-Za-z0-9_]+', '', tweet)        # removing @mentions
    tweet = re.sub(r'#', '', tweet)                     # removing the '#' sign
    tweet = re.sub(r'RT[\s]+', '', tweet)               # removing RT
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)  # removing hyperlinks
    tweet = re.sub(r'&[a-z]+;?', '', tweet)             # removing HTML entities such as &amp; and &gt;
return tweet
df['text'] = df['text'].apply(text_cleaning)
df.head()
# splitting the data into training and testing data
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(df['text'].values, df['sentiment'].values, test_size=0.2)
# checking the data split
print('Text: ', x_train[0])
print('Sentiment: ', y_train[0])
# converting the strings into integers using Tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
#from nltk.tokenize import TweetTokenizer
# instantiating the tokenizer
max_vocab = 20000000
tokenizer = Tokenizer(num_words=max_vocab)
tokenizer.fit_on_texts(x_train)
# checking the word index and find out the vocabulary of the dataset
wordidx = tokenizer.word_index
V = len(wordidx)
print('The size of dataset vocab is: ', V)
# converting train and test sentences into sequences
train_seq = tokenizer.texts_to_sequences(x_train)
test_seq = tokenizer.texts_to_sequences(x_test)
print('Training sequence: ', train_seq[0])
print('Testing sequence: ', test_seq[0])
# padding the sequences to equal length, because it is conventional to use same-size sequences
# padding the training sequence
pad_train = pad_sequences(train_seq)
T = pad_train.shape[1]
print('The length of training sequence is: ', T)
# padding the test sequence
pad_test = pad_sequences(test_seq, maxlen=T)
print('The length of testing sequence is: ', pad_test.shape[1])
# building the model
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, GlobalMaxPooling1D
from tensorflow.keras.models import Model
D = 20
M = 15
i = Input(shape=(T, ))
x = Embedding(V+1, D)(i)
x = LSTM(M, return_sequences=True)(x)
x = GlobalMaxPooling1D()(x)
x = Dense(32, activation='relu')(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(i,x)
# compiling the model
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
# training the model
r = model.fit(pad_train, y_train, validation_data=(pad_test, y_test), epochs=2, verbose=1, shuffle=True)
# Evaluating the model
# plotting the loss and validation loss of the model
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# plotting the accuracy and validation accuracy of the model
plt.plot(r.history['accuracy'], label='accuracy')
plt.plot(r.history['val_accuracy'], label='val_accuracy')
plt.legend()
# Predicting the sentiment of any text
def predict_sentiment(text):
# preprocessing the given text
text_seq = tokenizer.texts_to_sequences(text)
text_pad = pad_sequences(text_seq, maxlen=T)
# predicting the class
predicted_sentiment = model.predict(text_pad).round()
    if predicted_sentiment[0][0] == 1.0:
        print('It is a positive sentiment')
    else:
        print('It is a negative sentiment')
text = ['I love #data #datascience ']
predict_sentiment(text)
```
# Common Functions for `GiRaFFEfood` Initial Data for `GiRaFFE`
### NRPy+ Source Code for this module: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Common_Functions.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Common_Functions.py)
**Notebook Status:** <font color='red'><b> In Progress </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module through the main initial data modules that depend on it.
## Introduction:
We will need to "feed" our giraffe with initial data to evolve. There are several different choices of initial data we can use here; while each represents different physical systems, they all have some steps in common with each other. To avoid code duplication, we will first write several functions that we will use for all of them.
<a id='toc'></a>
# Table of Contents:
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Import core NRPy+ modules and set NRPy+ parameters
1. [Step 2](#vectorpotential): Set the vector potential from input functions
1. [Step 3](#velocity): Compute $v^i_{(n)}$ from $E^i$ and $B^i$
1. [Step 4](#setall): Generate specified initial data
<a id='initializenrpy'></a>
# Step 1: Import core NRPy+ modules and set NRPy+ parameters \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Here, we will import the NRPy+ core modules, set the reference metric to Cartesian, and set commonly used NRPy+ parameters. We will also set up a parameter to determine what initial data is set up, although it won't do much yet.
```
# Step 0: Import the NRPy+ core modules and set the reference metric to Cartesian
import NRPy_param_funcs as par    # NRPy+: Parameter interface
import grid as gri                # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp          # NRPy+: Symbolic indexed expressions (e.g., tensors, vectors)
import sympy as sp                # SymPy: The Python computer algebra package upon which NRPy+ depends
import reference_metric as rfm    # NRPy+: Reference metric support

# Use the Jacobian matrix to transform the vectors to Cartesian coordinates.
# Construct Jacobian & Inverse Jacobians:
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric()
Jac_dUCart_dDrfmUD,Jac_dUrfm_dDCartUD = rfm.compute_Jacobian_and_inverseJacobian_tofrom_Cartesian()

# Transform the coordinates of the Jacobian matrix from spherical to Cartesian:
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
tmpa,tmpb,tmpc = sp.symbols("tmpa,tmpb,tmpc")
for i in range(3):
    for j in range(3):
        Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
        Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
        Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
        Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])

# Step 1a: Set commonly used parameters.
thismodule = "GiRaFFEfood_NRPy"
```
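The Jacobians built above are what perform the spherical-to-Cartesian basis change later in this notebook; at its core, the operation is a matrix-vector product. A standalone SymPy sketch of that core operation (the symbols below are illustrative, not the NRPy+ objects):

```python
import sympy as sp

# Cartesian coordinates as functions of spherical coordinates
r, th, ph = sp.symbols("r th ph", positive=True)
xyz = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
                 r*sp.sin(th)*sp.sin(ph),
                 r*sp.cos(th)])

# Jacobian dx^i_Cart / dx^j_sph, applied to a purely radial contravariant vector
Jac = xyz.jacobian([r, th, ph])
A_sph = sp.Matrix([1, 0, 0])
A_cart = sp.simplify(Jac * A_sph)  # components (sin th cos ph, sin th sin ph, cos th)
```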
<a id='vectorpotential'></a>
# Step 2: Set the vector potential from input functions \[Back to [top](#toc)\]
$$\label{vectorpotential}$$
First, we will write a function to generate the vector potential from input functions for each component. This function will also apply the correct coordinate staggering when staggering is enabled. That is, in the staggered prescription, $A_x$ is sampled at $(i,j+1/2,k+1/2)$, $A_y$ at $(i+1/2,j,k+1/2)$, and $A_z$ at $(i+1/2,j+1/2,k)$.
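The staggering prescription amounts to a per-component half-cell offset; a small self-contained sketch of the sampling points (the grid spacings below are illustrative):

```python
# Half-cell staggering offsets for each vector-potential component,
# in units of half a grid spacing along (x, y, z).
STAGGER_OFFSETS = {
    "Ax": (0, 1, 1),  # sampled at (i,     j+1/2, k+1/2)
    "Ay": (1, 0, 1),  # sampled at (i+1/2, j,     k+1/2)
    "Az": (1, 1, 0),  # sampled at (i+1/2, j+1/2, k)
}

def staggered_point(i, j, k, component, dx=(0.1, 0.1, 0.1)):
    """Return the physical sampling point of a component at cell (i,j,k)."""
    s = STAGGER_OFFSETS[component]
    return tuple(idx*d + off*d/2 for idx, d, off in zip((i, j, k), dx, s))
```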
We will first do this for initial data that are given with Cartesian vector components.
```
# Generic function for all 1D tests: Compute Ax,Ay,Az
def Axyz_func_Cartesian(Ax_func,Ay_func,Az_func, stagger_enable, **params):
    x = rfm.xx_to_Cart[0]
    y = rfm.xx_to_Cart[1]
    z = rfm.xx_to_Cart[2]
    AD = ixp.zerorank1()
    # Half-grid-spacing offsets, used only in the staggered prescription
    dx_o2 = sp.Rational(1,2)*gri.dxx[0]
    dy_o2 = sp.Rational(1,2)*gri.dxx[1]
    dz_o2 = sp.Rational(1,2)*gri.dxx[2]
    # First Ax, sampled at (i,j+1/2,k+1/2) when staggered
    if stagger_enable:
        AD[0] = Ax_func(x, y+dy_o2, z+dz_o2, **params)
    else:
        AD[0] = Ax_func(x,y,z, **params)
    # Then Ay, sampled at (i+1/2,j,k+1/2) when staggered
    if stagger_enable:
        AD[1] = Ay_func(x+dx_o2, y, z+dz_o2, **params)
    else:
        AD[1] = Ay_func(x,y,z, **params)
    # Finally Az, sampled at (i+1/2,j+1/2,k) when staggered
    if stagger_enable:
        AD[2] = Az_func(x+dx_o2, y+dy_o2, z, **params)
    else:
        AD[2] = Az_func(x,y,z, **params)
    return AD
# Generic function for all spherical-coordinate tests: Compute Ar,Atheta,Aphi, then transform to Ax,Ay,Az
def Axyz_func_spherical(Ar_func,At_func,Ap_func, stagger_enable, **params):
    if "KerrSchild_radial_shift" in params:
        KerrSchild_radial_shift = params["KerrSchild_radial_shift"]
        r = rfm.xxSph[0] + KerrSchild_radial_shift # We are setting the data up in Shifted Kerr-Schild coordinates
    else:
        r = rfm.xxSph[0] # Some other coordinate system
    theta = rfm.xxSph[1]
    phi = rfm.xxSph[2]
    # The numerical grid itself is Cartesian, so the staggering offsets are
    # applied to the underlying grid coordinates xx0,xx1,xx2, of which
    # r, theta, and phi are functions through rfm.xxSph.
    def stagger(expr, s0, s1, s2):
        return expr.subs([(rfm.xx[0], rfm.xx[0] + sp.Rational(s0,2)*gri.dxx[0]),
                          (rfm.xx[1], rfm.xx[1] + sp.Rational(s1,2)*gri.dxx[1]),
                          (rfm.xx[2], rfm.xx[2] + sp.Rational(s2,2)*gri.dxx[2])],
                         simultaneous=True)
    AsphD = ixp.zerorank1()
    # First Ar, sampled at (i,j+1/2,k+1/2) when staggered
    if stagger_enable:
        AsphD[0] = Ar_func(stagger(r,0,1,1), stagger(theta,0,1,1), stagger(phi,0,1,1), **params)
    else:
        AsphD[0] = Ar_func(r,theta,phi, **params)
    # Then Atheta, sampled at (i+1/2,j,k+1/2) when staggered
    if stagger_enable:
        AsphD[1] = At_func(stagger(r,1,0,1), stagger(theta,1,0,1), stagger(phi,1,0,1), **params)
    else:
        AsphD[1] = At_func(r,theta,phi, **params)
    # Finally Aphi, sampled at (i+1/2,j+1/2,k) when staggered
    if stagger_enable:
        AsphD[2] = Ap_func(stagger(r,1,1,0), stagger(theta,1,1,0), stagger(phi,1,1,0), **params)
    else:
        AsphD[2] = Ap_func(r,theta,phi, **params)
    # Use the Jacobian matrix to transform the vector components to Cartesian coordinates.
    AD = change_basis_spherical_to_Cartesian(AsphD)
    return AD
```
<a id='velocity'></a>
# Step 3: Compute $v^i_{(n)}$ from $E^i$ and $B^i$ \[Back to [top](#toc)\]
$$\label{velocity}$$
This function computes the Valencia 3-velocity from input electric and magnetic fields. It can also take the three-metric $\gamma_{ij}$ as an optional input; if this is not set, the function defaults to flat spacetime.
```
# Generic function for all 1D tests: Valencia 3-velocity from ED and BU
def compute_ValenciavU_from_ED_and_BU(ED, BU, gammaDD=None):
    # We calculate v^i = ([ijk] E_j B_k) / B^2,
    # where [ijk] is the Levi-Civita symbol and B^2 = gamma_{ij} B^i B^j
    # reduces to a trivial dot product in flat space.
    LeviCivitaSymbolDDD = ixp.LeviCivitaSymbol_dim3_rank3()
    # If no metric is passed, default to flat space: gamma_{ij} = delta_{ij}.
    if gammaDD is None:
        gammaDD = ixp.zerorank2()
        for i in range(3):
            gammaDD[i][i] = sp.sympify(1)
    B2 = sp.sympify(0)
    for i in range(3):
        for j in range(3):
            B2 += gammaDD[i][j] * BU[i] * BU[j]
    # Lower the index on B^i: B_i = gamma_{ij} B^j
    BD = ixp.zerorank1()
    for i in range(3):
        for j in range(3):
            BD[i] += gammaDD[i][j]*BU[j]
    ValenciavU = ixp.zerorank1()
    for i in range(3):
        for j in range(3):
            for k in range(3):
                ValenciavU[i] += LeviCivitaSymbolDDD[i][j][k] * ED[j] * BD[k] / B2
    return ValenciavU
```
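As a quick flat-space sanity check of the same formula with concrete numbers (the field values are chosen arbitrarily for illustration), $v^i = \epsilon^{ijk} E_j B_k / B^2$ reduces to an ordinary cross product:

```python
import numpy as np

# Flat-space check of the drift velocity v = (E x B) / B^2
E = np.array([0.0, 0.1, 0.0])
B = np.array([0.0, 0.0, 1.0])
v = np.cross(E, B) / np.dot(B, B)
# An E-field of 0.1 along y crossed with a unit B-field along z drifts along +x
```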
<a id='setall'></a>
# Step 4: Generate specified initial data \[Back to [top](#toc)\]
$$\label{setall}$$
This is the main function that users can call to generate the initial data, by passing the name of the initial data as a string and specifying whether to enable staggering.
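The function below uses an `if`/`elif` ladder over `ID_type`; the same pattern can be expressed as a dispatch table that fails loudly on an unknown name. A minimal self-contained sketch of that pattern (the handler names and return values are hypothetical, not the GiRaFFEfood modules):

```python
# Dispatch-table version of the ID_type selection (hypothetical handlers)
def _alfven_wave(**params):
    return "AlfvenWave data"

def _exact_wald(**params):
    return "ExactWald data"

ID_GENERATORS = {
    "AlfvenWave": _alfven_wave,
    "ExactWald": _exact_wald,
}

def generate_initial_data(ID_type="AlfvenWave", **params):
    try:
        return ID_GENERATORS[ID_type](**params)
    except KeyError:
        raise ValueError("Unknown initial data type: " + ID_type)
```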
```
def GiRaFFEfood_NRPy_generate_initial_data(ID_type = "DegenAlfvenWave", stagger_enable = False, **params):
    global AD, ValenciavU
    if ID_type == "ExactWald":
        AD = gfcf.Axyz_func_spherical(gfew.Ar_EW,gfew.Ath_EW,gfew.Aph_EW,stagger_enable,**params)
        ValenciavU = gfew.ValenciavU_func_EW(**params)
    elif ID_type == "MagnetosphericWald":
        AD = gfcf.Axyz_func_spherical(gfmw.Ar_MW,gfmw.Ath_MW,gfmw.Aph_MW,stagger_enable,**params)
        ValenciavU = gfmw.ValenciavU_func_MW(**params)
    elif ID_type == "SplitMonopole":
        AD = gfcf.Axyz_func_spherical(gfsm.Ar_SM,gfsm.Ath_SM,gfsm.Aph_SM,stagger_enable,**params)
        ValenciavU = gfsm.ValenciavU_func_SM(**params)
    elif ID_type == "AlfvenWave":
        AD = gfcf.Axyz_func_Cartesian(gfaw.Ax_AW,gfaw.Ay_AW,gfaw.Az_AW, stagger_enable, **params)
        ValenciavU = gfaw.ValenciavU_func_AW(**params)
    elif ID_type == "FastWave":
        AD = gfcf.Axyz_func_Cartesian(gffw.Ax_FW,gffw.Ay_FW,gffw.Az_FW, stagger_enable, **params)
        ValenciavU = gffw.ValenciavU_func_FW(**params)
    elif ID_type == "DegenAlfvenWave":
        AD = gfcf.Axyz_func_Cartesian(gfdaw.Ax_DAW,gfdaw.Ay_DAW,gfdaw.Az_DAW, stagger_enable, **params)
        ValenciavU = gfdaw.ValenciavU_func_DAW(**params)
    elif ID_type == "ThreeWaves":
        AD = gfcf.Axyz_func_Cartesian(gftw.Ax_TW,gftw.Ay_TW,gftw.Az_TW, stagger_enable, **params)
        ValenciavU = gftw.ValenciavU_func_TW(**params)
    elif ID_type == "FFE_Breakdown":
        AD = gfcf.Axyz_func_Cartesian(gffb.Ax_FB,gffb.Ay_FB,gffb.Az_FB, stagger_enable, **params)
        ValenciavU = gffb.ValenciavU_func_FB(**params)
    elif ID_type == "AlignedRotator":
        AD = gfcf.Axyz_func_spherical(gfar.Ar_AR,gfar.Ath_AR,gfar.Aph_AR, stagger_enable, **params)
        ValenciavU = gfar.ValenciavU_func_AR(**params)
```
| github_jupyter |
## Example 3: Sensitivity analysis for a NetLogo model with SALib and Multiprocessing
This is a short demo similar to example 2, but using the multiprocessing [Pool](https://docs.python.org/3.6/library/multiprocessing.html#module-multiprocessing.pool).
All files used in the example are available from the pyNetLogo repository at https://github.com/quaquel/pyNetLogo.
This code requires Python 3.
For an in-depth discussion, please see example 2.
### Running the experiments in parallel using a Process Pool
There are multiple libraries available in the Python ecosystem for performing tasks in parallel. One of the default libraries that ships with Python is [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures), which is in fact a high-level interface around several other libraries; see its documentation for details. One of the libraries wrapped by concurrent.futures is multiprocessing. Below we use multiprocessing directly; anyone on Python 3.7 or later can either use the code below or use the `ProcessPoolExecutor` from concurrent.futures (recommended).
Here we are going to use `multiprocessing.Pool`, which is what `ProcessPoolExecutor` builds on. Parallelization is an advanced topic, and the exact way in which it is to be done depends at least in part on the operating system being used. It is recommended to carefully read the documentation provided by both concurrent.futures and multiprocessing. This example was run on a Mac; Linux is expected to be similar, but Windows is likely to be slightly different.
```
from multiprocessing import Pool
import os
import pandas as pd
import pyNetLogo
from SALib.sample import saltelli
def initializer(modelfile):
    '''initialize a subprocess

    Parameters
    ----------
    modelfile : str

    '''
    # we need to set the instantiated netlogo
    # link as a global so run_simulation can
    # use it
    global netlogo
    netlogo = pyNetLogo.NetLogoLink(gui=False)
    netlogo.load_model(modelfile)

def run_simulation(experiment):
    '''run a netlogo model

    Parameters
    ----------
    experiment : dict

    '''
    # Set the input parameters
    for key, value in experiment.items():
        if key == 'random-seed':
            # The NetLogo random seed requires a different syntax
            netlogo.command('random-seed {}'.format(value))
        else:
            # Otherwise, assume the input parameters are global variables
            netlogo.command('set {0} {1}'.format(key, value))
    netlogo.command('setup')
    # Run for 100 ticks and return the number of sheep and
    # wolf agents at each time step
    counts = netlogo.repeat_report(['count sheep', 'count wolves'], 100)
    results = pd.Series([counts['count sheep'].values.mean(),
                         counts['count wolves'].values.mean()],
                        index=['Avg. sheep', 'Avg. wolves'])
    return results

if __name__ == '__main__':
    modelfile = os.path.abspath('./models/Wolf Sheep Predation_v6.nlogo')

    problem = {
        'num_vars': 6,
        'names': ['random-seed',
                  'grass-regrowth-time',
                  'sheep-gain-from-food',
                  'wolf-gain-from-food',
                  'sheep-reproduce',
                  'wolf-reproduce'],
        'bounds': [[1, 100000],
                   [20., 40.],
                   [2., 8.],
                   [16., 32.],
                   [2., 8.],
                   [2., 8.]]
    }

    n = 1000
    param_values = saltelli.sample(problem, n, calc_second_order=True)

    # cast the param_values to a dataframe to
    # include the column labels
    experiments = pd.DataFrame(param_values, columns=problem['names'])

    with Pool(4, initializer=initializer, initargs=(modelfile,)) as executor:
        results = []
        for entry in executor.map(run_simulation, experiments.to_dict('records')):
            results.append(entry)

    results = pd.DataFrame(results)
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Load a pandas.DataFrame
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/pandas_dataframe"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial provides an example of how to load pandas dataframes into a `tf.data.Dataset`.
This tutorial uses a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.
## Read data using pandas
```
from __future__ import absolute_import, division, print_function, unicode_literals

try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass

import pandas as pd
import tensorflow as tf
```
Download the csv file containing the heart dataset.
```
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')
```
Read the csv file using pandas.
```
df = pd.read_csv(csv_file)
df.head()
df.dtypes
```
Convert the `thal` column, which is an `object` in the dataframe, to a discrete numerical value.
```
df['thal'] = pd.Categorical(df['thal'])
df['thal'] = df.thal.cat.codes
df.head()
```
## Load data using `tf.data.Dataset`
Use `tf.data.Dataset.from_tensor_slices` to read the values from a pandas dataframe.
One of the advantages of using `tf.data.Dataset` is it allows you to write simple, highly efficient data pipelines. Read the [loading data guide](https://www.tensorflow.org/guide/data) to find out more.
```
target = df.pop('target')
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))
for feat, targ in dataset.take(5):
    print ('Features: {}, Target: {}'.format(feat, targ))
```
Since a `pd.Series` implements the `__array__` protocol it can be used transparently nearly anywhere you would use a `np.array` or a `tf.Tensor`.
```
tf.constant(df['thal'])
```
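The same `__array__` interoperability can be seen with NumPy alone (illustrative values):

```python
import numpy as np
import pandas as pd

s = pd.Series([3, 1, 2])
arr = np.asarray(s)  # works because pd.Series implements __array__
print(arr.tolist())  # [3, 1, 2]
```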
Shuffle and batch the dataset.
```
train_dataset = dataset.shuffle(len(df)).batch(1)
```
## Create and train a model
```
def get_compiled_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(1)
    ])

    model.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model

model = get_compiled_model()
model.fit(train_dataset, epochs=15)
```
## Alternative to feature columns
Passing a dictionary as an input to a model is as easy as creating a matching dictionary of `tf.keras.layers.Input` layers, applying any pre-processing and stacking them up using the [functional api](../../guide/keras/functional.ipynb). You can use this as an alternative to [feature columns](../keras/feature_columns.ipynb).
```
inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x = tf.stack(list(inputs.values()), axis=-1)

x = tf.keras.layers.Dense(10, activation='relu')(x)
output = tf.keras.layers.Dense(1)(x)

model_func = tf.keras.Model(inputs=inputs, outputs=output)

model_func.compile(optimizer='adam',
                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                   metrics=['accuracy'])
```
The easiest way to preserve the column structure of a `pd.DataFrame` when used with `tf.data` is to convert the `pd.DataFrame` to a `dict`, and slice that dictionary.
```
dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)
for dict_slice in dict_slices.take(1):
    print (dict_slice)
model_func.fit(dict_slices, epochs=15)
```
| github_jupyter |
# Detecting Spam
*Curtis Miller*
Now, having seen how to load and prepare our e-mail collection, we can start training a classifier.
## Loading And Splitting E-Mails
Our first task is to load in the data. We will split the data into training and test data. The training data will be used to train a classifier while the test data will be used for evaluating how well our classifier performs.
```
import re
import pandas as pd
import email
from bs4 import BeautifulSoup
import nltk
from nltk.stem import SnowballStemmer
from nltk.tokenize import wordpunct_tokenize
import string
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
with open("SPAMTrain.label") as f:
    spamfiles = f.read()

filedata = pd.DataFrame([f.split(" ") for f in spamfiles.split("\n")[:-1]], columns=["ham", "file"])    # 1 for ham
filedata.ham = filedata.ham.astype('int8')
filedata
```
Here we perform the split.
```
train_emails, test_emails = train_test_split(filedata)
train_emails
```
Now let's load in our training data, storing it in a pandas `DataFrame`.
```
basedir = "RTRAINING/"

train_email_str = list()
for filename in train_emails.file:
    with open(basedir + filename, encoding="latin1") as f:
        filestr = f.read()
    bsobj = BeautifulSoup(filestr, "lxml")
    train_email_str.append(bsobj.get_text())

train_email_str[0]

train_emails = train_emails.assign(text=pd.Series(train_email_str, index=train_emails.index))
train_emails
```
## Choosing Features
There are lots of words in our e-mails even after stopwords are removed. Our feature space will be how frequently commonly seen words appear in an e-mail. We will combine all the spam e-mails and all the ham e-mails together, choose the 1,000 most frequently seen words for each of those classes, and count how often those words are seen in individual e-mails.
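The word-selection step can be sketched in isolation with `collections.Counter` on a toy corpus (the words and `M = 2` below are illustrative; the notebook uses the real e-mails and `M = 1000`):

```python
from collections import Counter

spam_corpus = "win money win prize money"
ham_corpus = "meeting notes project meeting notes"

M = 2  # top-M words per class
spam_top = [w for w, _ in Counter(spam_corpus.split()).most_common(M)]
ham_top = [w for w, _ in Counter(ham_corpus.split()).most_common(M)]
vocab = set(spam_top + ham_top)

def features(text):
    # Count how often each vocabulary word appears in a single e-mail
    return {w: text.split().count(w) for w in vocab}
```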
```
def email_clean(email_string):
    """A function for taking an email contained in a string and returning a clean string representing the email"""
    stemmer = SnowballStemmer("english")
    email_string = email_string.lower()
    email_string = re.sub(r"\s+", " ", email_string)
    email_words = wordpunct_tokenize(email_string)
    goodchars = "abcdefghijklmnopqrstuvwxyz"    # No punctuation or numbers; not interesting for my purpose
    email_words = [''.join([c for c in w if c in goodchars]) for w in email_words if w not in ["spam"]]
    email_words = [w for w in email_words if w not in nltk.corpus.stopwords.words("english") and w != '']
    return " ".join(email_words)
cleantext = pd.Series(train_emails.text.map(email_clean), index=train_emails.index)
train_emails = train_emails.assign(cleantext=cleantext)
train_emails
train_emails[train_emails.ham == 0].cleantext
```
Here we combine the e-mails to find common words in both spam and ham e-mails.
```
mass_spam = " ".join(train_emails.loc[train_emails.ham == 0].cleantext)
mass_spam
mass_ham = " ".join(train_emails.loc[train_emails.ham == 1].cleantext)
mass_ham
spam_freq = nltk.FreqDist([w for w in mass_spam.split(" ")])
M = 1000
spam_freq.most_common(M)
ham_freq = nltk.FreqDist([w for w in mass_ham.split(" ")])
M = 1000
ham_freq.most_common(M)
```
We now can find the words that will be in our feature space.
```
words = [t[0] for t in ham_freq.most_common(M)] + [t[0] for t in spam_freq.most_common(M)]
words = set(words)
words
len(words)
```
The final step in generating the features for the e-mails is to count how often the words of interest appear in e-mails in the training set.
```
feature_dict = dict()
for i, s in train_emails.iterrows():
    wordcounts = dict()
    for w in words:
        wordcounts[w] = s["cleantext"].count(w)
    feature_dict[i] = pd.Series(wordcounts)

pd.DataFrame(feature_dict).T
train_emails = train_emails.join(pd.DataFrame(feature_dict).T, lsuffix='0')
train_emails
```
## Training a Classifier
Now we can train a classifier. In this case we're training a Gaussian naive Bayes classifier.
```
spampred = GaussianNB()
spampred = spampred.fit(train_emails.loc[:, words], train_emails.ham)
ham_predicted = spampred.predict(train_emails.loc[:, words])
ham_predicted
print(classification_report(train_emails.ham, ham_predicted))
```
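A toy version of this training step on made-up word-count features (the data below are illustrative only) shows the API in isolation:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[0, 1], [1, 0], [0, 2], [2, 0]])  # toy word-count features
y = np.array([1, 0, 1, 0])                      # 1 = ham, 0 = spam
clf = GaussianNB().fit(X, y)
pred = clf.predict(np.array([[0, 3], [3, 0]]))
```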
The classifier does very well in the training data. How well does it do on unseen test data?
## Evaluating Performance
The final step is to evaluate our classifier on test data to see how well we can expect it to perform on future, unseen data. The steps below prepare the test data like we did the training data, loading and cleaning the e-mails and counting how often the words of interest appear in them.
```
test_email_str = list()
for filename in test_emails.file:
    with open(basedir + filename, encoding="latin1") as f:
        filestr = f.read()
    bsobj = BeautifulSoup(filestr, "lxml")
    test_email_str.append(bsobj.get_text())

cleantext_test = pd.Series([email_clean(s) for s in test_email_str], index=test_emails.index)
test_emails = test_emails.assign(cleantext=cleantext_test)

feature_dict_test = dict()
for i, s in test_emails.iterrows():
    wordcounts = dict()
    for w in words:
        wordcounts[w] = s["cleantext"].count(w)
    feature_dict_test[i] = pd.Series(wordcounts)

test_emails = test_emails.join(pd.DataFrame(feature_dict_test).T, lsuffix='0')
```
Now let's see how the classifier performed.
```
ham_predicted_test = spampred.predict(test_emails.loc[:, words])
print(classification_report(test_emails.ham, ham_predicted_test))
```
It did very well, just like on the training data! It seems we don't have much (if any) overfitting or underfitting. We could have a classifier ready to deploy.
(Of course, our classifier is only as good as the data it was trained on. Perhaps e-mails seen in different contexts or at a different period in time have different characteristics, including both the spam and ham e-mails. In that case the classifier trained here won't be any good since it was trained on the wrong data.)
| github_jupyter |
TSG077 - Kibana logs
====================
Steps
-----
### Parameters
```
import re
tail_lines = 500
pod = None # All
container = "kibana"
log_files = [ "/var/log/supervisor/log/kibana*.log" ]
expressions_to_analyze = [ ]
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown  # needed for the HINT messages below

try:
    from kubernetes import client, config
    from kubernetes.stream import stream

    if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
        config.load_incluster_config()
    else:
        try:
            config.load_kube_config()
        except:
            display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
            raise
    api = client.CoreV1Api()

    print('Kubernetes client instantiated')
except ImportError:
    display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
    raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
    namespace = os.environ["AZDATA_NAMESPACE"]
else:
    try:
        namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
    except IndexError:
        from IPython.display import Markdown
        display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
        raise

print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)

entries_for_analysis = []

for p in pods.items:
    if pod is None or p.metadata.name == pod:
        for c in p.spec.containers:
            if container is None or c.name == container:
                for log_file in log_files:
                    print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
                    try:
                        output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
                    except Exception:
                        print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
                    else:
                        for line in output.split('\n'):
                            for expression in expressions_to_analyze:
                                if expression.match(line):
                                    entries_for_analysis.append(line)
                            print(line)

print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
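The filtering idiom above (keep any line matched by one of a list of compiled expressions) can be exercised on its own (toy patterns and log lines; this notebook's `expressions_to_analyze` list is empty by default):

```python
import re

# Toy patterns and log lines; keep lines matched by any pattern
expressions_to_analyze = [re.compile(".*ERROR.*"), re.compile(".*FATAL.*")]
log_lines = ["INFO starting", "ERROR disk full", "FATAL crash", "INFO done"]

entries_for_analysis = [line for line in log_lines
                        if any(expr.match(line) for expr in expressions_to_analyze)]
print(entries_for_analysis)  # ['ERROR disk full', 'FATAL crash']
```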
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
import os
import json
import requests
import ipykernel
import datetime
from urllib.parse import urljoin
from notebook import notebookapp
def get_notebook_name():
    """Return the full path of the jupyter notebook. Some runtimes (e.g. ADS)
    have the kernel_id in the filename of the connection file. If so, the
    notebook name at runtime can be determined using `list_running_servers`.
    Other runtimes (e.g. azdata) do not have the kernel_id in the filename of
    the connection file, therefore we are unable to establish the filename.
    """
    connection_file = os.path.basename(ipykernel.get_connection_file())

    # If the runtime has the kernel_id in the connection filename, use it to
    # get the real notebook name at runtime, otherwise, use the notebook
    # filename from build time.
    try:
        kernel_id = connection_file.split('-', 1)[1].split('.')[0]
    except:
        pass
    else:
        for servers in list(notebookapp.list_running_servers()):
            try:
                response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01)
            except:
                pass
            else:
                for nn in json.loads(response.text):
                    if nn['kernel']['id'] == kernel_id:
                        return nn['path']

def load_json(filename):
    with open(filename, encoding="utf8") as json_file:
        return json.load(json_file)

def get_notebook_rules():
    """Load the notebook rules from the metadata of this notebook (in the .ipynb file)"""
    file_name = get_notebook_name()
    if file_name == None:
        return None
    else:
        j = load_json(file_name)
        if "azdata" not in j["metadata"] or \
           "expert" not in j["metadata"]["azdata"] or \
           "log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]:
            return []
        else:
            return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"]

rules = get_notebook_rules()

if rules == None:
    print("")
    print(f"Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.")
else:
    print(f"Applying the following {len(rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.")
    print(rules)
    hints = 0
    if len(rules) > 0:
        for entry in entries_for_analysis:
            for rule in rules:
                if entry.find(rule[0]) != -1:
                    print (entry)
                    display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
                    hints = hints + 1
    print("")
    print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.")

print('Notebook execution complete.')
```
| github_jupyter |
# Evaluation of SBMV for structured references
Dominika Tkaczyk
5.05.2019
This analysis contains the evaluation of the search-based matching algorithms for structured references.
## Methodology
The test dataset is composed of 2,000 randomly chosen structured references. Three algorithms are compared:
* the legacy approach (OpenURL)
* Search-Based Matching
* Search-Based Matching with Validation
## Results
```
import sys
sys.path.append('../..')
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import re
import utils.data_format_keys as dfk
from dataset.dataset_utils import get_target_test_doi, get_target_gt_doi
from evaluation.link_metrics import LinkMetricsResults
from scipy.stats import chi2_contingency
from utils.utils import read_json
from utils.cr_utils import generate_unstructured
DATA_DIR = 'data/'
```
Read the datasets:
```
dataset_ou = read_json(DATA_DIR + 'dataset_ou.json')[dfk.DATASET_DATASET]
dataset_sbm = read_json(DATA_DIR + 'dataset_sbm.json')[dfk.DATASET_DATASET]
dataset_sbmv = read_json(DATA_DIR + 'dataset_sbmv.json')[dfk.DATASET_DATASET]
print('Dataset size: {}'.format(len(dataset_sbm)))
```
These functions modify the dataset according to a threshold:
```
def modify_validation_threshold(dataset, threshold):
    for item in dataset:
        if item[dfk.DATASET_SCORE] is not None and item[dfk.DATASET_SCORE] < threshold:
            item[dfk.DATASET_TARGET_TEST][dfk.CR_ITEM_DOI] = None
    return dataset

def modify_relevance_threshold(dataset, threshold):
    for item in dataset:
        if item[dfk.DATASET_SCORE] is not None \
                and item[dfk.DATASET_SCORE]/len(generate_unstructured(item[dfk.DATASET_REFERENCE])) < threshold:
            item[dfk.DATASET_TARGET_TEST][dfk.CR_ITEM_DOI] = None
    return dataset
```
Let's apply the chosen thresholds: a relevance threshold of 0.47 for SBM and a validation threshold of 0.78 for SBMV:
```
dataset_sbm = modify_relevance_threshold(dataset_sbm, 0.47)
dataset_sbmv = modify_validation_threshold(dataset_sbmv, 0.78)
```
The results of OpenURL:
```
def print_summary(dataset, name):
    link_results = LinkMetricsResults(dataset)
    print('{} precision: {:.4f} (CI at 95% {:.4f}-{:.4f})'
          .format(name, link_results.get(dfk.EVAL_PREC),
                  link_results.get(dfk.EVAL_CI_PREC)[0], link_results.get(dfk.EVAL_CI_PREC)[1]))
    print('{} recall: {:.4f} (CI at 95% {:.4f}-{:.4f})'
          .format(name, link_results.get(dfk.EVAL_REC),
                  link_results.get(dfk.EVAL_CI_REC)[0], link_results.get(dfk.EVAL_CI_REC)[1]))
    print('{} F1: {:.4f}'.format(name, link_results.get(dfk.EVAL_F1)))

print_summary(dataset_ou, 'OpenURL')
```
The results of SBM:
```
print_summary(dataset_sbm, 'SBM')
```
The results of SBMV:
```
print_summary(dataset_sbmv, 'SBMV')
```
Let's use a statistical test to check whether the differences in precision and recall between the legacy approach and SBMV are statistically significant:
```
for metric in [dfk.EVAL_PREC, dfk.EVAL_REC]:
    fun = get_target_test_doi if metric == dfk.EVAL_PREC else get_target_gt_doi

    ou_results = LinkMetricsResults(dataset_ou)
    ou_precision = ou_results.get(metric)
    ou_test_count = len([d for d in dataset_ou if fun(d) is not None])
    ou_precision_success = int(ou_precision * ou_test_count)

    sbmv_results = LinkMetricsResults(dataset_sbmv)
    sbmv_precision = sbmv_results.get(metric)
    sbmv_test_count = len([d for d in dataset_sbmv if fun(d) is not None])
    sbmv_precision_success = int(sbmv_precision * sbmv_test_count)

    _, p, _, _ = chi2_contingency(np.array([[ou_precision_success,
                                             ou_test_count-ou_precision_success],
                                            [sbmv_precision_success,
                                             sbmv_test_count-sbmv_precision_success]]),
                                  correction=True)
    c = 'this is statistically significant' if p < 0.05 \
        else 'this is not statistically significant'
    print('{} p-value: {:.4f} ({})'.format(metric, p, c))
```
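For reference, the Pearson statistic that `chi2_contingency` is based on can be computed by hand for a toy 2×2 table (the counts below are illustrative, and the Yates continuity correction used above is omitted):

```python
# Toy 2x2 contingency table: [successes, failures] for two algorithms
table = [[90, 10],
         [75, 25]]

row_totals = [sum(r) for r in table]
col_totals = [table[0][j] + table[1][j] for j in range(2)]
total = sum(row_totals)

# Expected counts under independence, then the Pearson chi-square statistic
expected = [[row_totals[i]*col_totals[j]/total for j in range(2)] for i in range(2)]
chi2 = sum((table[i][j] - expected[i][j])**2 / expected[i][j]
           for i in range(2) for j in range(2))
```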
Let's compare the algorithms in one plot:
```
def get_means(dataset):
results = LinkMetricsResults(dataset)
return [results.get(m) for m in [dfk.EVAL_PREC, dfk.EVAL_REC, dfk.EVAL_F1]]
def get_ci(dataset):
results = LinkMetricsResults(dataset)
ms = [results.get(m) for m in [dfk.EVAL_PREC, dfk.EVAL_REC]]
return [[a-results.get(m)[0] for m, a in zip([dfk.EVAL_CI_PREC, dfk.EVAL_CI_REC], ms)] + [0],
[results.get(m)[1]-a for m, a in zip([dfk.EVAL_CI_PREC, dfk.EVAL_CI_REC], ms)] + [0]]
def autolabel(ax, rects):
plt.rcParams.update({'font.size': 14})
for rect in rects:
height = rect.get_height()
text = '{:.2f}'.format(height)
        text = re.sub(r'\.00$', '', text)  # drop trailing ".00" so whole numbers read cleanly
ax.text(rect.get_x() + rect.get_width()/2., 1.04*height, text, ha='center', va='bottom')
ind = np.arange(3)
width = 0.25
plt.rcParams.update({'font.size': 16, 'legend.fontsize': 14})
fig, ax = plt.subplots(figsize=(12, 9))
rects1 = ax.bar(ind - 0.5 * width, get_means(dataset_ou), yerr=get_ci(dataset_ou), width=width,
color='#d8d2c4')
rects2 = ax.bar(ind + 0.5 * width, get_means(dataset_sbm), yerr=get_ci(dataset_sbm),
width=width, color='#4f5858')
rects3 = ax.bar(ind + 1.5 * width, get_means(dataset_sbmv), yerr=get_ci(dataset_sbmv),
width=width, color='#3eb1c8')
ax.set_ylabel('fraction')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(('precision', 'recall', 'F1'))
plt.ylim(0, 1.25)
plt.yticks([0, 0.2, 0.4, 0.6, 0.8, 1.0])
ax.legend((rects1[0], rects2[0], rects3[0]), ('OpenURL', 'SBM', 'SBMV'))
autolabel(ax, rects1)
autolabel(ax, rects2)
autolabel(ax, rects3)
plt.show()
```
```
%%javascript
var kernel = IPython.notebook.kernel;
var body = document.body,
attribs = body.attributes;
var command = "__filename__ = " + "'" + decodeURIComponent(attribs['data-notebook-name'].value) + "'";
kernel.execute(command);
```
```
print(__filename__)
import os, sys, numpy as np, tensorflow as tf
from pathlib import Path
import time
try:
print(__file__)
__current_dir__ = str(Path(__file__).resolve().parents[0])
__filename__ = os.path.basename(__file__)
except NameError:
# jupyter notebook automatically sets the working
# directory to where the notebook is.
__current_dir__ = str(Path(os.getcwd()))
module_parent_dir = str(Path(__current_dir__).resolve().parents[0])
sys.path.append(module_parent_dir)
import LeNet_plus_centerloss
__package__ = 'LeNet_plus_centerloss'
from . import network
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
BATCH_SIZE = 250
SCRIPT_DIR = __current_dir__
FILENAME = __filename__
SUMMARIES_DIR = SCRIPT_DIR
SAVE_PATH = SCRIPT_DIR + "/network.ckpt"
### configure devices for this eval script.
USE_DEVICE = '/gpu:2'
session_config = tf.ConfigProto(log_device_placement=True)
session_config.gpu_options.allow_growth = True
# this is required if want to use GPU as device.
# see: https://github.com/tensorflow/tensorflow/issues/2292
session_config.allow_soft_placement = True
with tf.Graph().as_default() as g, tf.device(USE_DEVICE):
# inference()
input, deep_features = network.inference()
labels, logits, loss_op = network.loss(deep_features)
eval = network.evaluation(logits, labels)
    init = tf.global_variables_initializer()  # unused here: weights are restored from a checkpoint below
with tf.Session(config=session_config) as sess:
saver = tf.train.Saver()
saver.restore(sess, SAVE_PATH)
# TEST_BATCH_SIZE = np.shape(mnist.test.labels)[0]
logits_output, deep_features_output, loss_value, accuracy = \
sess.run(
[logits, deep_features, loss_op, eval], feed_dict={
input: mnist.test.images[5000:],
labels: mnist.test.labels[5000:]
})
print("MNIST Test accuracy is ", accuracy)
import matplotlib.pyplot as plt
%matplotlib inline
color_palette = ['#507ba6', '#f08e39', '#e0595c', '#79b8b3', '#5ca153',
'#edc854', '#af7ba2', '#fe9fa9', '#9c7561', '#bbb0ac']
from IPython.display import FileLink, FileLinks
bins = {}
for logit, deep_feature in zip(logits_output, deep_features_output):
label = np.argmax(logit)
try:
bins[str(label)].append(list(deep_feature))
except KeyError:
bins[str(label)] = [list(deep_feature)]
plt.figure(figsize=(5, 5))
for numeral in map(str, range(10)):
try:
features = np.array(bins[numeral])
    except KeyError:
        print(numeral + " does not exist")
        continue  # no points were predicted as this digit; skip plotting it
plt.scatter(
features[:, 0],
features[:, 1],
color=color_palette[int(numeral)],
label=numeral
)
plt.legend(loc=(1.1, 0.1), frameon=False)
title = 'MNIST LeNet++ with 2 Deep Features (PReLU)'
plt.title(title)
plt.xlabel('activation of hidden neuron 1')
plt.ylabel('activation of hidden neuron 2')
os.makedirs('./figures', exist_ok=True)  # ensure the output directory exists
fname = './figures/' + title + '.png'
plt.savefig(fname, dpi=300, bbox_inches='tight')
FileLink(fname)
```
# Building a model of oxidative ATP synthesis from energetic components
Simulations in the preceding section illustrate how matrix ATP and ADP concentrations are governed by the contributors to the proton motive force. They also show how the matrix ATP/ADP ratio must typically be less than $1$, in contrast to the cytosolic ATP/ADP ratio, which is on the order of $100$. To understand the dependence of ATP synthesis and transport on the proton motive force, the kinetics of the processes that generate it, and the interplay of these processes, we can assemble models of the $\text{F}_0\text{F}_1$ ATP synthase, adenine nucleotide translocase (ANT), mitochondrial phosphate transport, and complexes I, III, and IV of the electron transport chain (ETC) to generate a core model of mitochondrial oxidative ATP synthesis.
## Adenine nucleotide translocase
Following synthesis of ATP from ADP and Pi in the matrix, the final step in delivering ATP to the cytosol at physiological free energy levels is the electrically driven exchange of a matrix $\text{ATP}^{4-}$ for a cytosolic $\text{ADP}^{3-}$. This exchange process,
```{math}
(\text{ATP}^{4-})_x + (\text{ADP}^{3-})_c \rightleftharpoons (\text{ATP}^{4-})_c + (\text{ADP}^{3-})_x \, ,
```
is catalyzed by the ANT. Here, we assume rapid transport of species between the cytosol and the IMS, and therefore equate IMS and cytosol species concentrations.
To simulate the kinetics of this process, we use the Metelkin et al. model {cite}`Metelkin2006`, which accounts for pH and electrochemical dependencies. (Kinetic parameter value estimates for this model were updated by Wu et al. {cite}`Wu2008`.) The steady-state flux of ANT is expressed
```{math}
:label: J_ANT
J_{\text{ANT}} = E_{\text{ANT}} \dfrac{ \dfrac{ k_2^{\text{ANT}} q }{ K_o^D } [ \text{ATP}^{4-} ]_x [ \text{ADP}^{3-}]_c - \dfrac{ k_3^{\text{ANT}} }{ K_o^T } [ \text{ADP}^{3-} ]_x [ \text{ATP}^{4-} ]_c }{ \left(1 + \dfrac{ [ \text{ATP}^{4-} ]_c }{ K_o^T } + \dfrac{ [ \text{ADP}^{3-} ]_c }{ K_o^D } \right)( [ \text{ADP}^{3-} ]_x + [ \text{ATP}^{4-} ]_x q) },
```
where $E_{\text{ANT}} \ \text{(mol (L mito)}^{-1})$ is the total ANT content of the mitochondria and
```{math}
:label: phi
k_2^\text{ANT} &=& k_{2,o}^\text{ANT} e^{( -3A - 4B + C) \phi}, \nonumber \\
k_3^\text{ANT} &=& k_{3,o}^\text{ANT} e^{(-4A - 3B + C) \phi}, \nonumber \\
K_o^D &=& K_o^{D,0} e^{3 \delta_D \phi}, \nonumber \\
K_o^T &=& K_o^{T,0} e^{4 \delta_T \phi}, \nonumber \\
q &=& \dfrac{ k_3^\text{ANT} K_o^D }{ k_2^\text{ANT} K_o^T } e^\phi, \quad \text{and} \nonumber \\
\phi &=& F \Delta \Psi / R{\rm T}.
```
All parameter values and units can be found in {numref}`table-ANT`, reproduced from {cite}`Bazil2016`.
```{list-table} Adenine nucleotide translocase (ANT) parameters.
:header-rows: 1
:name: table-ANT
* - Parameter
- Units
- Description
- Value
* - $E_\text{ANT}$
- mol (L mito)$^{-1}$
- ANT activity
- $0.325$
* - $\delta_D$
-
- ADP displacement binding constant
- $0.0167 $
* - $\delta_T$
-
- ATP displacement binding constant
- $0.0699 $
* - $k_{2,o}^\text{ANT}$
- s$^{-1}$
- Forward translocation rate
- $0.159 $
* - $k_{3,o}^\text{ANT}$
- s$^{-1}$
- Reverse translocation rate
- $0.501 $
* - $K_o^{D,0}$
- $\mu$mol (L cyto water)$^{-1}$
- ADP binding constant
- $38.89 $
* - $K_o^{T,0}$
- $\mu$mol (L cyto water)$^{-1}$
- ATP binding constant
- $56.05$
* - $A$
-
- Translocation displacement constant
- $0.2829 $
* - $B$
-
- Translocation displacement constant
- $ -0.2086 $
* - $C$
-
- Translocation displacement constant
- $0.2372$
```
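As a sanity check on Equation {eq}`phi`, the $\Delta\Psi$-dependent rate and binding constants can be evaluated directly from the values in {numref}`table-ANT`. This is a minimal sketch; $\Delta \Psi = 175$ mV is assumed here to match the simulations below.

```python
import numpy as np

# Thermochemical constants
R = 8.314    # J (mol K)**(-1)
T = 310.15   # K (37 C)
F = 96485    # C mol**(-1)

DPsi = 0.175  # membrane potential (V), assumed to match the simulations below

# ANT parameters from the table above
A, B, C = 0.2829, -0.2086, 0.2372
del_D, del_T = 0.0167, 0.0699
k2o_ANT, k3o_ANT = 0.159, 0.501      # s**(-1)
K0o_D, K0o_T = 38.89e-6, 56.05e-6    # mol (L cyto water)**(-1)

phi = F * DPsi / (R * T)                            # dimensionless potential
k2_ANT = k2o_ANT * np.exp((-3*A - 4*B + C) * phi)   # forward translocation rate
k3_ANT = k3o_ANT * np.exp((-4*A - 3*B + C) * phi)   # reverse translocation rate
K0_D = K0o_D * np.exp(3 * del_D * phi)              # ADP binding constant
K0_T = K0o_T * np.exp(4 * del_T * phi)              # ATP binding constant
q = k3_ANT * K0_D * np.exp(phi) / (k2_ANT * K0_T)

print('phi = {:.2f}, q = {:.1f}'.format(phi, q))
```

The factor $q \gg 1$ at physiological $\Delta\Psi$ quantifies how strongly the membrane potential biases the exchanger toward exporting $\text{ATP}^{4-}$.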
To simulate ANT and F$_0$F$_1$ ATP synthase activity simultaneously, we extend the system of Equation {eq}`system-ATPase` by adding states for cytosolic species $[\Sigma \text{ATP} ]_c$ and $[\Sigma \text{ADP}]_c$, yielding
```{math}
:label: system-ATP_ANT
\left\{
\renewcommand{\arraystretch}{2}
\begin{array}{rlrl}
\dfrac{ {\rm d} [\Sigma \text{ATP}]_x }{{\rm d} t} &= (J_\text{F} - J_\text{ANT} ) / W_x, & \dfrac{ {\rm d} [\Sigma \text{ATP}]_c }{{\rm d} t} &= (V_{m2c} J_\text{ANT}) / W_c, \\
\dfrac{ {\rm d} [\Sigma \text{ADP}]_x }{{\rm d} t} &= (-J_\text{F} + J_\text{ANT}) / W_x, & \dfrac{ {\rm d} [\Sigma \text{ADP}]_c }{{\rm d} t} &= (-V_{m2c} J_\text{ANT}) / W_c, \\
\dfrac{ {\rm d} [\Sigma \text{Pi}]_x }{{\rm d} t} &= 0 & &\\
\end{array}
\renewcommand{\arraystretch}{1}
\right.
```
where $V_{m2c} \ \text{(L mito) (L cyto)}^{-1}$ is the fraction of the volume of mitochondria per volume cytosol and $W_c \ \text{(L cyto water) (L cyto)}^{-1}$ is the fraction of water volume in the cytoplasm to the total volume of the cytoplasm ({numref}`table-biophysicalconstants`).
Here, we clamp the matrix phosphate concentration at a constant value since the system of equations in Equation {eq}`system-ATP_ANT` does not account for phosphate transport between the matrix and the cytosol.
```
import numpy as np
import matplotlib.pyplot as plt
!pip install scipy
from scipy.integrate import solve_ivp
###### Constants defining metabolite pools ######
# Volume fractions and water space fractions
V_c = 0.6601 # cytosol volume fraction # L cyto (L cell)**(-1)
V_m = 0.2882 # mitochondrial volume fraction # L mito (L cell)**(-1)
V_m2c = V_m / V_c # mito to cyto volume ratio # L mito (L cuvette)**(-1)
W_c = 0.8425 # cytosol water space # L cyto water (L cyto)**(-1)
W_m = 0.7238 # mitochondrial water space # L mito water (L mito)**(-1)
W_x = 0.9*W_m # matrix water space # L matrix water (L mito)**(-1)
# Membrane potential
DPsi = 175/1000
###### Set fixed pH and cation concentrations ######
# pH
pH_x = 7.40
pH_c = 7.20
# K+ concentrations
K_x = 100e-3 # mol (L matrix water)**(-1)
K_c = 140e-3 # mol (L cyto water)**(-1)
# Mg2+ concentrations
Mg_x = 1.0e-3 # mol (L matrix water)**(-1)
Mg_c = 1.0e-3 # mol (L cyto water)**(-1)
###### Parameter vector ######
X_F = 1000 # Synthase activity
E_ANT = 0.325 # Nucleotide transporter activity
activity_array = np.array([X_F, E_ANT]) # Note: This array will be larger in the future parts
###### Initial Conditions ######
# Matrix species
sumATP_x_0 = 0.5e-3 # mol (L matrix water)**(-1)
sumADP_x_0 = 9.5e-3 # mol (L matrix water)**(-1)
sumPi_x_0 = 1e-3 # mol (L matrix water)**(-1)
# Cytoplasmic species
sumATP_c_0 = 0 #9.95e-3 # mol (L cyto water)**(-1)
sumADP_c_0 = 10e-3 #0.05e-3 # mol (L cyto water)**(-1)
X_0 = np.array([sumATP_x_0, sumADP_x_0, sumPi_x_0, sumATP_c_0, sumADP_c_0])
def dXdt(t, X, activity_array):
# Unpack variables
sumATP_x, sumADP_x, sumPi_x, sumATP_c, sumADP_c = X
X_F, E_ANT = activity_array
# Hydrogen ion concentration
H_x = 10**(-pH_x) # mol (L matrix water)**(-1)
H_c = 10**(-pH_c) # mol (L cuvette water)**(-1)
# Thermochemical constants
R = 8.314 # J (mol K)**(-1)
T = 37 + 273.15 # K
F = 96485 # C mol**(-1)
# Proton motive force parameters (dimensionless)
n_F = 8/3
# Dissociation constants
K_MgATP = 10**(-3.88)
K_HATP = 10**(-6.33)
K_KATP = 10**(-1.02)
K_MgADP = 10**(-3.00)
K_HADP = 10**(-6.26)
K_KADP = 10**(-0.89)
K_MgPi = 10**(-1.66)
K_HPi = 10**(-6.62)
K_KPi = 10**(-0.42)
## Binding polynomials
# Matrix species # mol (L mito water)**(-1)
PATP_x = 1 + H_x/K_HATP + Mg_x/K_MgATP + K_x/K_KATP
PADP_x = 1 + H_x/K_HADP + Mg_x/K_MgADP + K_x/K_KADP
PPi_x = 1 + H_x/K_HPi + Mg_x/K_MgPi + K_x/K_KPi
# Cytosol species # mol (L cuvette water)**(-1)
PATP_c = 1 + H_c/K_HATP + Mg_c/K_MgATP + K_c/K_KATP
PADP_c = 1 + H_c/K_HADP + Mg_c/K_MgADP + K_c/K_KADP
## Unbound species
# Matrix species
ATP_x = sumATP_x / PATP_x # [ATP4-]_x
ADP_x = sumADP_x / PADP_x # [ADP3-]_x
# Cytosol species
ATP_c = sumATP_c / PATP_c # [ATP4-]_c
ADP_c = sumADP_c / PADP_c # [ADP3-]_c
###### F0F1-ATPase ######
    # ADP3-_x + HPO42-_x + H+_x + n_F*H+_c <-> ATP4-_x + H2O + n_F*H+_x
# Gibbs energy (J mol**(-1))
DrGo_F = 4990
DrGapp_F = DrGo_F + R * T * np.log( H_x * PATP_x / (PADP_x * PPi_x))
# Apparent equilibrium constant
Kapp_F = np.exp( (DrGapp_F + n_F * F * DPsi ) / (R * T)) * (H_c / H_x)**n_F
# Flux (mol (s * L mito)**(-1))
J_F = X_F * (Kapp_F * sumADP_x * sumPi_x - sumATP_x)
###### ANT ######
# ATP4-_x + ADP3-_i <-> ATP4-_i + ADP3-_x
#Constants
del_D = 0.0167
del_T = 0.0699
k2o_ANT = 9.54/60 # s**(-1)
k3o_ANT = 30.05/60 # s**(-1)
K0o_D = 38.89e-6 # mol (L cuvette water)**(-1)
K0o_T = 56.05e-6 # mol (L cuvette water)**(-1)
A = +0.2829
B = -0.2086
C = +0.2372
phi = F * DPsi / (R * T)
# Reaction rates (s**(-1))
k2_ANT = k2o_ANT * np.exp((A*(-3) + B*(-4) + C)*phi)
k3_ANT = k3o_ANT * np.exp((A*(-4) + B*(-3) + C)*phi)
# Dissociation constants (M)
K0_D = K0o_D * np.exp(3*del_D*phi)
K0_T = K0o_T * np.exp(4*del_T*phi)
q = k3_ANT * K0_D * np.exp(phi) / (k2_ANT * K0_T)
term1 = k2_ANT * ATP_x * ADP_c * q / K0_D
term2 = k3_ANT * ADP_x * ATP_c / K0_T
num = term1 - term2
den = (1 + ATP_c/K0_T + ADP_c/K0_D) * (ADP_x + ATP_x * q)
# Flux (mol (s * L mito)**(-1))
J_ANT = E_ANT * num / den
###### Differential equations (equation 14) ######
# Matrix species
dATP_x = (J_F - J_ANT) / W_x
dADP_x = (-J_F + J_ANT) / W_x
dPi_x = 0
# Cytosol species
dATP_c = ( V_m2c * J_ANT) / W_c
dADP_c = (-V_m2c * J_ANT) / W_c
dX = [dATP_x, dADP_x, dPi_x, dATP_c, dADP_c]
return dX
# Solve ODE
results = solve_ivp(dXdt, [0, 2], X_0, method = 'Radau', args=(activity_array,))
t = results.t
sumATP_x, sumADP_x, sumPi_x, sumATP_c, sumADP_c = results.y
# Plot figures
fig, ax = plt.subplots(1,2, figsize = (10,5))
ax[0].plot(t, sumATP_x*1000, label = '[$\Sigma$ATP]$_x$')
ax[0].plot(t, sumADP_x*1000, label = '[$\Sigma$ADP]$_x$')
ax[0].plot(t, sumPi_x*1000, label = '[$\Sigma$Pi]$_x$')
ax[0].legend(loc="right")
ax[0].set_ylim((-.5,10.5))
ax[0].set_xlabel('Time (s)')
ax[0].set_xticks([0,1,2])
ax[0].set_ylabel('Concentration (mM)')
ax[1].plot(t, sumATP_c*1000, label = '[$\Sigma$ATP]$_c$')
ax[1].plot(t, sumADP_c*1000, label = '[$\Sigma$ADP]$_c$')
ax[1].set_ylim((-0.5,10.5))
ax[1].set_xticks([0,1,2])
ax[1].legend(loc="right")
ax[1].set_xlabel('Time (s)')
plt.show()
```
**Figure 4:** Steady state solution from Equation {eq}`system-ATP_ANT` for the (a) matrix and (b) cytosol species with $\Delta \Psi = 175$ mV, $\text{pH}_x = 7.4$, and $\text{pH}_c = 7.2$.
The above simulations of the system of Equation {eq}`system-ATP_ANT` show how the electrogenic nature of the ANT transport results in the markedly different ATP/ADP ratios in the cytosol compared to the matrix. As we saw in the previous chapter, the ATP hydrolysis potential in the matrix is approximately $\text{-}45 \ \text{kJ mol}^{-1}$. The roughly $100$:$1$ ratio of ATP to ADP in the cytosol is associated with a hydrolysis potential of approximately $\text{-}65 \ \text{kJ mol}^{-1}$. The difference of $20 \ \text{kJ mol}^{-1}$ between the matrix and the cytosolic space is driven primarily by the membrane potential, which is roughly equivalent to $20 \ \text{kJ mol}^{-1}$.
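A rough numerical check on this argument: the ANT exchange moves one net negative charge out of the matrix per ATP exported, so the electrical work available per mole is $F \Delta\Psi$. A minimal sketch, using the same 175 mV as the simulation above:

```python
F = 96485      # Faraday constant, C mol**(-1)
DPsi = 0.175   # membrane potential (V), as in the simulation above

# Electrical work per mole of net charge moved across the IMM
dG_charge = F * DPsi / 1000   # kJ mol**(-1)
print('{:.1f} kJ/mol'.format(dG_charge))
```

This is approximately $17 \ \text{kJ mol}^{-1}$, accounting for most of the roughly $20 \ \text{kJ mol}^{-1}$ gap between the matrix and cytosolic ATP hydrolysis potentials, with the remainder attributable to the pH difference between the compartments.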
## Inorganic phosphate transport
During active ATP synthesis, mitochondrial Pi is replenished via the activity of the phosphate-proton cotransporter (PiC), catalyzing the electroneutral cotransport of protonated inorganic phosphate, $\text{H}_2\text{PO}_4^{-}$, and $\text{H}^{+}$ across the membrane. Again, we assume rapid transport between the cytoplasm and intermembrane space, and hence, we have
```{math}
(\text{H}_2\text{PO}_4^{-})_c + (\text{H}^{+})_c \rightleftharpoons (\text{H}_2\text{PO}_4^{-})_x + (\text{H}^{+})_x.
```
Adopting the flux equation from Bazil et al. {cite}`Bazil2016`, we have
```{math}
:label: J_PiC
J_\text{PiC} = E_{\text{PiC}} \dfrac{ [\text{H}^{+} ]_{c} [\text{H}_2\text{PO}_4^{-}]_{c} - [\text{H}^{+}]_{x} [\text{H}_2\text{PO}_4^{-}]_{x} }{ [\text{H}_2\text{PO}_4^{-}]_c + k_{\text{PiC}} },
```
where $E_{\text{PiC}} \ \text{(L matrix water) s}^{-1} \text{ (L mito)}^{-1}$ is the PiC activity rate and $k_{\text{PiC}} = 1.61$ mM is an effective Michaelis-Menten constant. The $\text{H}_2\text{PO}_4^{-}$ concentrations in the matrix and cytosol are computed via the relationship
```{math}
[\text{H}_2\text{PO}_4^{-}] = [\Sigma{\rm Pi}] \left( [{\rm H}^+]/K_{\rm HPi} \right) / P_{\rm Pi}
```
from Equation {eq}`sumPi`.
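For example, at the cytosolic conditions used in the simulations (pH 7.2, 1 mM Mg$^{2+}$, 140 mM K$^+$), the protonated fraction of the phosphate pool works out as follows. This sketch reuses the dissociation constants that appear in the simulation code:

```python
# Dissociation constants for Pi (same values as in the simulation code)
K_HPi  = 10**(-6.62)
K_MgPi = 10**(-1.66)
K_KPi  = 10**(-0.42)

# Cytosolic conditions
H_c  = 10**(-7.2)   # mol (L cyto water)**(-1)
Mg_c = 1.0e-3       # mol (L cyto water)**(-1)
K_c  = 140e-3       # mol (L cyto water)**(-1)

# Binding polynomial and the H2PO4- fraction of total Pi
PPi_c = 1 + H_c/K_HPi + Mg_c/K_MgPi + K_c/K_KPi
frac_H2PO4 = (H_c / K_HPi) / PPi_c
print('H2PO4- fraction at pH 7.2: {:.2f}'.format(frac_H2PO4))
```

So only about 16% of the cytosolic phosphate pool is in the protonated form carried by PiC.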
To incorporate PiC into Equation {eq}`system-ATP_ANT`, we add a new state $[\Sigma \text{Pi}]_c$ such that at given membrane potential, matrix and cytosolic pH, and cation concentrations, we obtain
```{math}
:label: system-ATP_ANT_PiC
\left\{
\renewcommand{\arraystretch}{2}
\begin{array}{rlrl}
\dfrac{ {\rm d} [\Sigma \text{ATP}]_x }{{\rm d} t} &= (J_\text{F} - J_\text{ANT} ) / W_x, & \dfrac{ {\rm d} [\Sigma \text{ATP}]_c }{{\rm d} t} &= (V_{m2c} J_\text{ANT}) / W_c, \\
\dfrac{ {\rm d} [\Sigma \text{ADP}]_x }{{\rm d} t} &= (-J_\text{F} + J_\text{ANT}) / W_x, & \dfrac{ {\rm d} [\Sigma \text{ADP}]_c }{{\rm d} t} &= (-V_{m2c} J_\text{ANT}) / W_c, \\
\dfrac{ {\rm d} [\Sigma \text{Pi}]_x }{{\rm d} t} &= (-J_\text{F} + J_\text{PiC}) / W_x, & \dfrac{ {\rm d} [\Sigma \text{Pi}]_c }{{\rm d} t} &= (- V_{m2c} J_\text{PiC}) / W_c,
\end{array}
\renewcommand{\arraystretch}{1}
\right.
```
The following code simulates the synthesis of ATP from ADP and Pi and their translocation across the IMM under physiological conditions.
```
import numpy as np
import matplotlib.pyplot as plt
!pip install scipy
from scipy.integrate import solve_ivp
###### Constants defining metabolite pools ######
# Volume fractions and water space fractions
V_c = 0.6601 # cytosol volume fraction # L cyto (L cell)**(-1)
V_m = 0.2882 # mitochondrial volume fraction # L mito (L cell)**(-1)
V_m2c = V_m / V_c # mito to cyto volume ratio # L mito (L cuvette)**(-1)
W_c = 0.8425 # cytosol water space # L cyto water (L cyto)**(-1)
W_m = 0.7238 # mitochondrial water space # L mito water (L mito)**(-1)
W_x = 0.9*W_m # matrix water space # L matrix water (L mito)**(-1)
# Membrane potential
DPsi = 175/1000
###### Set fixed pH, cation concentrations, and O2 partial pressure ######
# pH
pH_x = 7.40
pH_c = 7.20
# K+ concentrations
K_x = 100e-3 # mol (L matrix water)**(-1)
K_c = 140e-3 # mol (L cyto water)**(-1)
# Mg2+ concentrations
Mg_x = 1.0e-3 # mol (L matrix water)**(-1)
Mg_c = 1.0e-3 # mol (L cyto water)**(-1)
###### Parameter vector ######
X_F = 100 # Synthase activity
E_ANT = 0.325 # Nucleotide transporter activity
E_PiC = 5.0e6 # Phosphate transporter activity
activity_array = np.array([X_F, E_ANT, E_PiC])
###### Initial Conditions ######
# Matrix species
sumATP_x_0 = 0.5e-3 # mol (L matrix water)**(-1)
sumADP_x_0 = 9.5e-3 # mol (L matrix water)**(-1)
sumPi_x_0 = 1e-3 # mol (L matrix water)**(-1)
# Cytosolic species
sumATP_c_0 = 0 # mol (L cyto water)**(-1)
sumADP_c_0 = 10e-3 # mol (L cyto water)**(-1)
sumPi_c_0 = 10e-3 # mol (L cyto water)**(-1)
X_0 = np.array([sumATP_x_0, sumADP_x_0, sumPi_x_0, sumATP_c_0, sumADP_c_0, sumPi_c_0])
def dXdt(t, X, activity_array):
# Unpack variables
sumATP_x, sumADP_x, sumPi_x, sumATP_c, sumADP_c, sumPi_c = X
X_F, E_ANT, E_PiC = activity_array
# Hydrogen ion concentration
H_x = 10**(-pH_x) # mol (L matrix water)**(-1)
H_c = 10**(-pH_c) # mol (L cuvette water)**(-1)
# Thermochemical constants
R = 8.314 # J (mol K)**(-1)
T = 37 + 273.15 # K
F = 96485 # C mol**(-1)
# Proton motive force parameters (dimensionless)
n_F = 8/3
# Dissociation constants
K_MgATP = 10**(-3.88)
K_HATP = 10**(-6.33)
K_KATP = 10**(-1.02)
K_MgADP = 10**(-3.00)
K_HADP = 10**(-6.26)
K_KADP = 10**(-0.89)
K_MgPi = 10**(-1.66)
K_HPi = 10**(-6.62)
K_KPi = 10**(-0.42)
## Binding polynomials
# Matrix species # mol (L mito water)**(-1)
PATP_x = 1 + H_x/K_HATP + Mg_x/K_MgATP + K_x/K_KATP
PADP_x = 1 + H_x/K_HADP + Mg_x/K_MgADP + K_x/K_KADP
PPi_x = 1 + H_x/K_HPi + Mg_x/K_MgPi + K_x/K_KPi
# Cytosol species # mol (L cuvette water)**(-1)
PATP_c = 1 + H_c/K_HATP + Mg_c/K_MgATP + K_c/K_KATP
PADP_c = 1 + H_c/K_HADP + Mg_c/K_MgADP + K_c/K_KADP
PPi_c = 1 + H_c/K_HPi + Mg_c/K_MgPi + K_c/K_KPi
## Unbound species
# Matrix species
ATP_x = sumATP_x / PATP_x # [ATP4-]_x
ADP_x = sumADP_x / PADP_x # [ADP3-]_x
Pi_x = sumPi_x / PPi_x # [HPO42-]_x
# Cytosol species
ATP_c = sumATP_c / PATP_c # [ATP4-]_c
ADP_c = sumADP_c / PADP_c # [ADP3-]_c
Pi_c = sumPi_c / PPi_c # [HPO42-]_c
    ###### H+/Pi cotransporter (PiC) ######
    # H2PO4-_c + H+_c = H2PO4-_x + H+_x
    # Constant
    k_PiC = 1.61e-3 # mol (L cuvette)**(-1)
    # H2PO4- species
    HPi_c = Pi_c * (H_c / K_HPi)
    HPi_x = Pi_x * (H_x / K_HPi)
# Flux (mol (s * L mito)**(-1))
J_PiC = E_PiC * (H_c * HPi_c - H_x * HPi_x) / (k_PiC + HPi_c)
###### F0F1-ATPase ######
    # ADP3-_x + HPO42-_x + H+_x + n_F*H+_c <-> ATP4-_x + H2O + n_F*H+_x
# Gibbs energy (J mol**(-1))
DrGo_F = 4990
DrGapp_F = DrGo_F + R * T * np.log( H_x * PATP_x / (PADP_x * PPi_x))
# Apparent equilibrium constant
Kapp_F = np.exp( (DrGapp_F + n_F * F * DPsi ) / (R * T)) * (H_c / H_x)**n_F
# Flux (mol (s * L mito)**(-1))
J_F = X_F * (Kapp_F * sumADP_x * sumPi_x - sumATP_x)
###### ANT ######
# ATP4-_x + ADP3-_i <-> ATP4-_i + ADP3-_x
# Constants
del_D = 0.0167
del_T = 0.0699
k2o_ANT = 9.54/60 # s**(-1)
k3o_ANT = 30.05/60 # s**(-1)
K0o_D = 38.89e-6 # mol (L cuvette water)**(-1)
K0o_T = 56.05e-6 # mol (L cuvette water)**(-1)
A = +0.2829
B = -0.2086
C = +0.2372
phi = F * DPsi / (R * T)
# Reaction rates (s**(-1))
k2_ANT = k2o_ANT * np.exp((A*(-3) + B*(-4) + C)*phi)
k3_ANT = k3o_ANT * np.exp((A*(-4) + B*(-3) + C)*phi)
# Dissociation constants (M)
K0_D = K0o_D * np.exp(3*del_D*phi)
K0_T = K0o_T * np.exp(4*del_T*phi)
q = k3_ANT * K0_D * np.exp(phi) / (k2_ANT * K0_T)
term1 = k2_ANT * ATP_x * ADP_c * q / K0_D
term2 = k3_ANT * ADP_x * ATP_c / K0_T
num = term1 - term2
den = (1 + ATP_c/K0_T + ADP_c/K0_D) * (ADP_x + ATP_x * q)
# Flux (mol (s * L mito)**(-1))
J_ANT = E_ANT * num / den
###### Differential equations (equation 15) ######
# Matrix species
dATP_x = (J_F - J_ANT) / W_x
dADP_x = (-J_F + J_ANT) / W_x
dPi_x = (-J_F + J_PiC) / W_x
# Buffer species
dATP_c = ( V_m2c * J_ANT) / W_c
dADP_c = (-V_m2c * J_ANT) / W_c
dPi_c = (-V_m2c * J_PiC) / W_c
dX = [dATP_x, dADP_x, dPi_x, dATP_c, dADP_c, dPi_c]
return dX
# Solve ODE
t = np.linspace(0,2,100)
results = solve_ivp(dXdt, [0, 2], X_0, method = 'Radau', t_eval = t, args=(activity_array,))
sumATP_x, sumADP_x, sumPi_x, sumATP_c, sumADP_c, sumPi_c = results.y
# Plot figures
fig, ax = plt.subplots(1,2, figsize = (10,5))
ax[0].plot(t, sumATP_x*1000, label = '[$\Sigma$ATP]$_x$')
ax[0].plot(t, sumADP_x*1000, label = '[$\Sigma$ADP]$_x$')
ax[0].plot(t, sumPi_x*1000, label = '[$\Sigma$Pi]$_x$')
ax[0].legend(loc="right")
ax[0].set_ylim((-.5,10.5))
ax[0].set_xlim((0,2))
ax[0].set_xticks([0,1,2])
ax[0].set_xlabel('Time (s)')
ax[0].set_ylabel('Concentration (mM)')
ax[1].plot(t, sumATP_c*1000, label = '[$\Sigma$ATP]$_c$')
ax[1].plot(t, sumADP_c*1000, label = '[$\Sigma$ADP]$_c$')
ax[1].plot(t, sumPi_c*1000, label = '[$\Sigma$Pi]$_c$')
ax[1].set_ylim((-0.5,10.5))
ax[1].set_xlim((0,2))
ax[1].set_xticks([0,1,2])
ax[1].legend(loc="right")
ax[1].set_xlabel('Time (s)')
plt.show()
```
**Figure 5:** Steady state solution from Equation {eq}`system-ATP_ANT_PiC` for the (a) matrix and (b) cytosol species with $\Delta \Psi = 175$ mV, $\text{pH}_x = 7.4$, and $\text{pH}_c = 7.2$.
For the above simulations, cytosolic inorganic phosphate is set to $10 \ \text{mM}$ initially, and all other initial conditions remain unchanged. Driven by $\Delta \text{pH}$, a gradient in phosphate concentration is established, with a steady-state ratio of matrix-to-cytosol concentration of approximately $2.2$. As seen in the previous section, with a constant membrane potential of $175 \ \text{mV}$, the ATP/ADP ratio is maintained at a much higher level in the cytosol than in the matrix.
The final matrix and cytosol ATP and ADP concentrations depend not only on the membrane potential, but also on the total amount of exchangeable phosphate in the system. Here these simulations start with $[\text{Pi}]_c = 10 \ \text{mM}$ and $[\text{Pi}]_x = 1 \ \text{mM}$. The initial $10 \ \text{mM}$ of ADP in the cytosol becomes almost entirely phosphorylated to ATP, leaving $0.32 \ \text{mM}$ of inorganic phosphate in the cytosol in the final steady state. To explore how these steady states depend on $\Delta\Psi$, the following code simulates the steady-state behavior of this system for a range of $\Delta\Psi$ from $100$ to $250 \ \text{mV}$. These simulations, based on a simple, thermodynamically constrained model, show that it is not possible to synthesize ATP at physiological free energy levels for values of $\Delta\Psi$ lower than approximately $160 \ \text{mV}$.
```
!pip install scipy
from scipy.integrate import solve_ivp
### Simulate over a range of Membrane potential from 100 mV to 250 mV ###
# Define array to iterate over
membrane_potential = np.linspace(100,250) # mV
# Define arrays to store steady state results
ATP_x_steady = np.zeros(len(membrane_potential))
ADP_x_steady = np.zeros(len(membrane_potential))
Pi_x_steady = np.zeros(len(membrane_potential))
ATP_c_steady = np.zeros(len(membrane_potential))
ADP_c_steady = np.zeros(len(membrane_potential))
Pi_c_steady = np.zeros(len(membrane_potential))
# Iterate through range of membrane potentials
for i in range(len(membrane_potential)):
DPsi = membrane_potential[i] / 1000 # convert to V
temp_results = solve_ivp(dXdt, [0, 200], X_0, method = 'Radau', args=(activity_array,)).y*1000 # Concentration in mM
ATP_x_steady[i] = temp_results[0,-1]
ADP_x_steady[i] = temp_results[1,-1]
Pi_x_steady[i] = temp_results[2,-1]
ATP_c_steady[i] = temp_results[3,-1]
ADP_c_steady[i] = temp_results[4,-1]
Pi_c_steady[i] = temp_results[5,-1]
# Plot figures
fig, ax = plt.subplots(1,2, figsize = (10,5))
ax[0].plot(membrane_potential, ATP_x_steady, label = '[$\Sigma$ATP]$_x$')
ax[0].plot(membrane_potential, ADP_x_steady, label = '[$\Sigma$ADP]$_x$')
ax[0].plot(membrane_potential, Pi_x_steady, label = '[$\Sigma$Pi]$_x$')
ax[0].legend(loc = "right")
ax[0].set_xlabel('Membrane potential (mV)')
ax[0].set_ylabel('Concentration (mM)')
ax[0].set_xlim([100, 250])
ax[0].set_ylim([-0.5,13])
ax[1].plot(membrane_potential, ATP_c_steady, label = '[$\Sigma$ATP]$_c$')
ax[1].plot(membrane_potential, ADP_c_steady, label = '[$\Sigma$ADP]$_c$')
ax[1].plot(membrane_potential, Pi_c_steady, label = '[$\Sigma$Pi]$_c$')
ax[1].legend(loc = "right")
ax[1].set_xlabel('Membrane potential (mV)')
ax[1].set_ylabel('Concentration (mM)')
ax[1].set_xlim([100, 250])
ax[1].set_ylim([-0.5,13])
plt.show()
```
**Figure 6:** Simulation of concentration versus $\Delta \Psi$ for Equation {eq}`system-ATP_ANT_PiC` for the (a) matrix and (b) cytosol species with $\Delta \Psi$ from $100$ to $250$ mV.
Simulation of this system reinforces the fact that ATP cannot be synthesized at physiological free energy levels for mitochondrial membrane potential of less than approximately $150 \ \text{mV}$.
## Respiratory complexes and NADH synthesis
The previous sections have assumed a constant membrane potential. To account for the processes that generate the membrane potential, we model proton pumping associated with the respiratory complexes I, III, and IV of the ETC ({numref}`mitofig`).
### ETC complex I
Coupled with the translocation of $n_\text{C1} = 4$ protons across the IMM against the electrochemical gradient, electrons are transferred from NADH to ubiquinone ($Q$) at complex I of the ETC via the reaction
```{math}
:label: reaction_C1
(\text{NADH}^{2-})_x + (\text{H}^{+})_x + (\text{Q})_x + n_\text{C1} (\text{H}^{+})_x \rightleftharpoons (\text{NAD}^{-})_x + (\text{QH}_2)_x + \text{H}_2\text{O} + n_\text{C1}(\text{H}^+)_c.
```
Since protons move against the gradient when the reaction proceeds in the left-to-right direction, the overall Gibbs energy for the reaction of Equation {eq}`reaction_C1` is
```{math}
\Delta G_\text{C1} &= \Delta_r G_\text{C1} - n_\text{C1} \Delta G_{\rm H} \nonumber \\
&= \Delta_r G_\text{C1}^\circ + R{\rm T} \ln \left( \dfrac{ [\text{NAD}^{-}]_x [\text{QH}_2]_x }{ [\text{NADH}^{2-}]_x [\text{Q}]_x} \cdot \dfrac{1}{[\text{H}^{+}]_x } \right) + n_\text{C1} F \Delta \Psi - R{\rm T} \ln \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c } \right)^{n_{\text{C1}}} \nonumber \\
&= \Delta_r G'^{\circ}_\text{C1} + R{\rm T} \ln \left( \dfrac{ [\text{NAD}^{-}]_x [\text{QH}_2]_x }{ [\text{NADH}^{2-}]_x [\text{Q}]_x} \right) + n_\text{C1} F \Delta \Psi - R{\rm T} \ln \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c } \right)^{n_{\text{C1}}},
```
where
```{math}
\Delta_r G'^\circ_\text{C1} = \Delta_r G^\circ_\text{C1} - R \text{T} \ln ( [\text{H}^+]_x )
```
is the apparent Gibbs energy for the reaction in Equation {eq}`reaction_C1`. The apparent equilibrium constant is
```{math}
:label: Kapp_C1
K'_{eq,\text{C1}} = \left(\dfrac{ [\text{NAD}^{-}]_x [\text{QH}_2]_x }{ [\text{NADH}^{2-}]_x [\text{Q}]_x} \right)_{eq} = \exp \left\{ \dfrac{ - ( \Delta_r G'^\circ_\text{C1} + n_\text{C1} F \Delta \Psi) }{ R \text{T}} \right\} \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c } \right)^{n_\text{C1}}.
```
To simulate the flux of complex I, $J_{\text{C1}} \ \text{(mol s}^{-1} \text{ (L mito)}^{-1})$, across the IMM by mass-action kinetics, we have
```{math}
:label: J_C1
J_{\text{C1}} = X_{\text{C1}} \left( K_{eq,\text{C1}}^\prime [\text{NADH}^{2-}]_x [\text{Q}]_x - [\text{NAD}^{-}]_x [\text{QH}_2]_x \right),
```
where $X_\text{C1} \ \text{(mol s}^{-1} \text{ (L mito)}^{-1})$ is the rate constant. {numref}`table-ETC` lists the constants for complex I.
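Equation {eq}`Kapp_C1` and the flux law of Equation {eq}`J_C1` can be collected into a short helper. Note that the standard Gibbs energy used below is an arbitrary placeholder for illustration only; the actual value belongs to {numref}`table-ETC`.

```python
import numpy as np

R, T, F = 8.314, 310.15, 96485   # J (mol K)**(-1), K, C mol**(-1)
n_C1 = 4                         # protons pumped per NADH oxidized

def Kapp_C1(DrGo_app, DPsi, pH_x=7.4, pH_c=7.2):
    """Apparent equilibrium constant for complex I (Equation Kapp_C1)."""
    H_x, H_c = 10**(-pH_x), 10**(-pH_c)
    return np.exp(-(DrGo_app + n_C1 * F * DPsi) / (R * T)) * (H_x / H_c)**n_C1

def J_C1(X_C1, Kapp, NADH_x, Q_x, NAD_x, QH2_x):
    """Mass-action flux of complex I (Equation J_C1)."""
    return X_C1 * (Kapp * NADH_x * Q_x - NAD_x * QH2_x)

# Placeholder standard Gibbs energy (illustrative only, NOT the table-ETC value)
DrGo_C1 = -100e3   # J mol**(-1)

# Raising the membrane potential opposes proton pumping and lowers K'
K_100 = Kapp_C1(DrGo_C1, 0.100)
K_200 = Kapp_C1(DrGo_C1, 0.200)
print(K_100 > K_200)   # True
```

Whatever the standard Gibbs energy, the structure of Equation {eq}`Kapp_C1` guarantees that a higher $\Delta\Psi$ shifts the equilibrium against NADH oxidation, since the pumped protons must be moved against a larger potential.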
### ETC complex III
The reaction catalyzed by complex III reduces two cytochrome c proteins for every $\text{QH}_2$ oxidized
```{math}
:label: reaction_C3
(\text{QH}_2)_x + 2 \ (\text{c}_{ox}^{3+})_i + n_\text{C3} (\text{H}^+)_x \rightleftharpoons (\text{Q})_x + 2 \ (\text{c}_{red}^{2+})_i + 2 \ (\text{H}^{+})_c + n_\text{C3} (\text{H}^+)_c,
```
where $\text{c}_{ox}^{3+}$ and $\text{c}_{red}^{2+}$ are the oxidized and reduced cytochrome c species and the subscript $i$ indicates that cytochrome c is confined to the IMS. This reaction is coupled with the transport of $n_{\text{C3}} = 2$ protons from the matrix to the cytosol against the electrochemical gradient. Thus, the Gibbs energy for the overall reaction given in Equation {eq}`reaction_C3` is
```{math}
\Delta G_{\text{C3}} &= \Delta_r G_\text{C3} - n_\text{C3} \Delta G_\text{H} \nonumber \\
&= \Delta_r G_{\text{C3}}^\circ + R{\rm T} \ln \left( \dfrac{ [\text{Q}]_x [\text{c}_{red}^{2+}]_i^2 }{ [\text{QH}_2]_x [\text{c}_{ox}^{3+}]_i^2} \cdot [\text{H}^{+}]_c^2 \right) + n_\text{C3} F \Delta \Psi -
R{\rm T} \ln \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c} \right)^{n_\text{C3}} \nonumber \\
&= \Delta_r G'^\circ_\text{C3} + R{\rm T} \ln \left( \dfrac{ [\text{Q}]_x [\text{c}_{red}^{2+}]_i^2 }{ [\text{QH}_2]_x [\text{c}_{ox}^{3+}]_i^2}\right) + n_\text{C3} F \Delta \Psi -
R{\rm T} \ln \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c} \right)^{n_\text{C3}},
```
where
```{math}
\Delta_r G'^\circ_\text{C3} = \Delta_r G^\circ_\text{C3} + 2 R \text{T} \ln ([\text{H}^+]_c)
```
is the apparent Gibbs energy for complex III. The apparent equilibrium constant is
```{math}
:label: Kapp_C3
K_{eq,\text{C3}}^\prime = \left( \dfrac{ [\text{Q}]_x [\text{c}_{red}^{2+}]_i^2 }{ [\text{QH}_2]_x [\text{c}_{ox}^{3+}]_i^2 } \right)_{eq} = \exp \left\{ \dfrac{ -(\Delta_r G'^\circ_\text{C3} + n_\text{C3} F
\Delta \Psi )}{ R \text{T}} \right\} \left( \dfrac{ [\text{H}^{+}]_x}{ [\text{H}^{+}]_c} \right)^{n_\text{C3}}.
```
To simulate the flux of complex III, $J_\text{C3} \ \text{(mol s}^{-1} \text{ (L mito)}^{-1})$, by mass-action kinetics, we have
```{math}
:label: J_C3
J_{\text{C3}} = X_{\text{C3}} \left( K_{eq,\text{C3}}^\prime [\text{QH}_2]_x [\text{c}_{ox}^{3+}]_i^2 - [\text{Q}]_x [\text{c}_{red}^{2+}]_i^2 \right),
```
where $X_{\text{C3}} \ \text{(mol s}^{-1} \text{ (L mito)}^{-1})$ is the rate constant.
### ETC complex IV
In the final step of the ETC catalyzed by complex IV, electrons are transferred from cytochrome c to oxygen, forming water
```{math}
:label: reaction_C4
2 \ (\text{c}_{red}^{2+})_i + \frac{1}{2} (\text{O}_2)_x + 2 \ (\text{H}^{+})_c + n_\text{C4} (\text{H}^{+})_x \rightleftharpoons 2 \ (\text{c}^{3+}_{ox})_i + \text{H}_2\text{O} + n_\text{C4} (\text{H}^{+})_c,
```
coupled with the translocation of $n_\text{C4} = 4$ protons across the IMM against the electrochemical gradient. The Gibbs energy of the reaction in Equation {eq}`reaction_C4` is
```{math}
\Delta G_\text{C4} &= \Delta_r G_\text{C4} - n_\text{C4} \Delta G_{\rm H} \nonumber \\
&= \Delta_r G_{\text{C4}}^\circ + R{\rm T} \ln \left( \dfrac{ [\text{c}^{3+}_{ox}]^2_i }{ [\text{c}^{2+}_{red}]^2_i [\text{O}_2]^{1/2}_x } \cdot \dfrac{1}{[\text{H}^{+}]^2_c}\right) + n_{\text{C4}} F \Delta \Psi - R{\rm T} \ln \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c} \right)^{n_{\text{C4}}} \nonumber \\
&= \Delta_r G'^\circ_{\text{C4}} + R{\rm T} \ln \left( \dfrac{ [\text{c}^{3+}_{ox}]^2_i }{ [\text{c}^{2+}_{red}]^2_i [\text{O}_2]^{1/2}_x } \right) + n_{\text{C4}} F \Delta \Psi - R{\rm T} \ln \left( \dfrac{ [\text{H}^{+}]_x }{ [\text{H}^{+}]_c} \right)^{n_{\text{C4}}},
```
where
```{math}
\Delta_r G'^\circ_\text{C4} = \Delta_r G^\circ_\text{C4} - 2 R \text{T} \ln([\text{H}^+]_c)
```
is the apparent Gibbs energy for complex IV. The apparent equilibrium constant is
```{math}
:label: Kapp_C4
K_{eq,\text{C4}}^\prime = \left( \dfrac{ [\text{c}^{3+}_{ox}]_i^2 }{ [\text{c}^{2+}_{red}]_i^2 [\text{O}_2]_x^{1/2} } \right)_{eq} = \exp \left\{ \dfrac{-(\Delta_r G'^\circ_\text{C4} + n_\text{C4} F \Delta \Psi )}{ R \text{T} } \right\} \left( \dfrac{ [\text{H}^+]_x }{[\text{H}^+]_c} \right)^{n_\text{C4}}.
```
To simulate the flux of complex IV, $J_{\text{C4}} \ \text{(mol s}^{-1} \text{ (L mito)}^{-1})$, we use mass-action kinetics and account for binding of oxygen to complex IV as
```{math}
:label: J_C4
J_{\text{C4}} = X_{\text{C4}} \left( \dfrac{1}{1 + \frac{k_{\text{O}_2}}{[\text{O}_2]_x}} \right) \left( \left(K_{eq,\text{C4}}^\prime\right)^{1/2} [\text{c}_{red}^{2+}]_i [\text{O}_2]_x^{1/4} - [\text{c}_{ox}^{3+}]_i \right),
```
where $X_{\text{C4}} \ \text{(mol s}^{-1} \text{ (L mito)}^{-1})$ is the rate constant and $k_{\text{O}_2}$ is the $\text{O}_2$ binding constant (Table {numref}`table-ETC`). For this study, we assume a partial pressure of $\text{O}_2$ of $25 \ \text{mmHg}$.
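The oxygen dependence can be made concrete with a short sketch. This assumes the solubility value used in the code later in this section ($1.74 \times 10^{-6} \ \text{mol (L water mmHg)}^{-1}$) and $k_{\text{O}_2} = 120 \ \mu\text{mol (L matrix water)}^{-1}$ from Table {numref}`table-ETC`.

```python
# O2 concentration from its partial pressure, and the saturating
# O2-binding factor that multiplies the mass-action term in Equation J_C4
a_3 = 1.74e-6        # assumed O2 solubility, mol (L water * mmHg)**(-1)
PO2 = 25.0           # mmHg
k_O2 = 120e-6        # mol (L matrix water)**(-1)

O2_x = a_3 * PO2                   # mol (L matrix water)**(-1)
f_O2 = 1.0 / (1.0 + k_O2 / O2_x)   # dimensionless, between 0 and 1
```

At $25 \ \text{mmHg}$ this factor is well below saturation, so the simulated complex IV flux retains a strong $\text{O}_2$ dependence in this regime.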
The apparent equilibrium constants for the $\text{F}_0\text{F}_1$ ATPase (Equation {eq}`Kapp_F`), complex I (Equation {eq}`Kapp_C1`), complex III (Equation {eq}`Kapp_C3`), and complex IV (Equation {eq}`Kapp_C4`) depend on $\Delta\Psi$. In the model developed in this section, since $\Delta\Psi$ is a variable, these apparent equilibrium constants are also variables. Thus, the flux expressions in Equations {eq}`J_F`, {eq}`J_C1`, {eq}`J_C3`, and {eq}`J_C4` depend on $\Delta \Psi$. These expressions may be compared to a generalized formulation of rate laws for reversible enzyme-catalyzed reactions {cite}`Noor2013`, where in this case the saturating dependence of flux on substrate concentrations is not accounted for. These expressions may also be compared to the more detailed representations of the underlying catalytic mechanisms used by Bazil et al. {cite}`Bazil2016`. The Bazil et al. model also accounts for side reactions generating reactive oxygen species that are not accounted for here.
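To illustrate the $\Delta\Psi$ dependence noted above, the following sketch evaluates Equation {eq}`Kapp_C4` at two membrane potentials, assuming $\text{pH}_x = 7.4$, $\text{pH}_c = 7.2$, and the Table {numref}`table-ETC` value $\Delta_r G^\circ_\text{C4} = -202.2 \ \text{kJ mol}^{-1}$.

```python
import numpy as np

R, T, F = 8.314, 310.15, 96485.0   # J (mol K)**(-1), K, C mol**(-1)
H_x, H_c = 10**(-7.40), 10**(-7.20)

def Kapp_C4(DPsi, DrGo_C4=-202160.0, n_C4=4):
    # Apparent equilibrium constant of complex IV (Equation Kapp_C4)
    DrGapp_C4 = DrGo_C4 - 2 * R * T * np.log(H_c)
    return np.exp(-(DrGapp_C4 + n_C4 * F * DPsi) / (R * T)) * (H_x / H_c)**n_C4

# Raising DPsi increases the work of pumping n_C4 protons, lowering Kapp
```

A larger membrane potential thus pushes each proton-pumping step closer to equilibrium, which is why the apparent equilibrium constants must be treated as variables when $\Delta\Psi$ is a state variable.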
### Dehydrogenase activity
In this model, we do not explicitly simulate the reactions of the TCA cycle or beta oxidation, but rather the combined action of NADH-producing reactions, that is,
```{math}
(\text{NAD}^{-})_x \rightleftharpoons (\text{NADH}^{2-})_x + (\text{H}^{+})_x.
```
From Beard {cite}`Beard2005`, we represent a Pi dependence of NADH production using the following phenomenological expression
```{math}
:label: J_DH
J_{\text{DH}} = X_{\text{DH}} \left( r [\text{NAD}^-] - [\text{NADH}^{2-}] \right) \left( \dfrac{ 1 + [\Sigma \text{Pi}]_x/k_{\text{Pi},1} }{ 1 + [\Sigma \text{Pi}]_x/k_{\text{Pi},2} } \right),
```
where $X_\text{DH} \text{ (mol s}^{-1} \text{ (L mito)}^{-1})$ is the dehydrogenase activity and $r$ (dimensionless), $k_{\text{Pi},1} \ \text{(mol (L matrix water)}^{-1})$, and $k_{\text{Pi},2} \ \text{(mol (L matrix water)}^{-1})$ are constants. Parameter values are listed in Table {numref}`table-ETC`. The dependence of NADH production on Pi reflects the Pi-dependence of the substrate-level phosphorylation step of the TCA cycle (the succinyl coenzyme-A synthetase reaction) and the fact that Pi drives substrate oxidation via the dicarboxylate carrier.
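The phosphate-activation factor in Equation {eq}`J_DH` can be examined on its own. This sketch assumes the $k_{\text{Pi},1}$ and $k_{\text{Pi},2}$ values from Table {numref}`table-ETC`.

```python
# Pi activation of NADH production (second factor in Equation J_DH)
k_Pi1 = 0.466e-3   # mol (L matrix water)**(-1)
k_Pi2 = 0.658e-3   # mol (L matrix water)**(-1)

def pi_factor(Pi_x):
    # Rises from 1 at [Pi]_x = 0 toward k_Pi2/k_Pi1 at saturating [Pi]_x
    return (1 + Pi_x / k_Pi1) / (1 + Pi_x / k_Pi2)
```

Because $k_{\text{Pi},1} < k_{\text{Pi},2}$, the factor increases monotonically with $[\Sigma\text{Pi}]_x$ and saturates near $k_{\text{Pi},2}/k_{\text{Pi},1} \approx 1.41$, so matrix phosphate modestly stimulates NADH production.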
### Proton leak
To simulate proton leak across the IMM, we adopt the Goldman-Hodgkin-Katz formulation from Wu et al. {cite}`Wu2008`,
```{math}
:label: J_H
J_{\text{H}} = X_\text{H} \left( [\text{H}^{+}]_c \ e^{\phi/2} - [\text{H}^{+}]_x \ e^{-\phi/2} \right),
```
where $X_\text{H} = 1000 \ \text{mol s}^{-1} \text{ (L mito)}^{-1}$ is the proton leak activity and $\phi$ is given in Equation {eq}`phi`. Even though the kinetic constants $X_\text{F}$ and $X_\text{H}$ attain equal values here, under the ATP-producing conditions the proton flux through the $\text{F}_0\text{F}_1$ ATPase ($J_\text{F}$, Equation {eq}`J_F`) is an order of magnitude greater than the proton leak flux ($J_\text{H}$, Equation {eq}`J_H`).
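A direct evaluation of Equation {eq}`J_H` (a sketch assuming $X_\text{H} = 1000 \ \text{mol s}^{-1} \text{ (L mito)}^{-1}$, $\text{pH}_x = 7.4$, $\text{pH}_c = 7.2$, and $T = 310.15 \ \text{K}$) shows the leak is inward (positive) at physiological potentials and reverses only at a small negative $\Delta\Psi$.

```python
import numpy as np

R, T, F = 8.314, 310.15, 96485.0   # J (mol K)**(-1), K, C mol**(-1)
X_H = 1.0e3                        # mol s**(-1) (L mito)**(-1)
H_x, H_c = 10**(-7.40), 10**(-7.20)

def J_H(DPsi):
    # Goldman-Hodgkin-Katz style proton leak (Equation J_H)
    phi = F * DPsi / (R * T)
    return X_H * (H_c * np.exp(phi / 2) - H_x * np.exp(-phi / 2))

# The leak vanishes where H_c * exp(phi) = H_x, i.e. at
# DPsi = (R*T/F) * ln(H_x/H_c), about -12 mV for these pH values
```

At $\Delta\Psi = 175 \ \text{mV}$ the leak is therefore far from its reversal potential and carries protons back into the matrix.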
```{list-table} Respiratory complex and inorganic phosphate transport parameters
:header-rows: 1
:name: table-ETC
* - Parameter
- Units
- Description
- Value
- Source
* - $n_{\text{C}1}$
-
- Protons translocated by complex I
- $4 $
- {cite}`Nicholls2013`
* - $n_{\text{C}3}$
-
- Protons translocated by complex III
- $2 $
- {cite}`Nicholls2013`
* - $n_{\text{C}4}$
-
- Protons translocated by complex IV
- $4 $
- {cite}`Nicholls2013`
* - $X_\text{C1}$
- mol s$^{-1}$ (L mito)$^{-1}$
- Complex I rate constant
- $1\text{e}4$
-
* - $X_\text{C3}$
- mol s$^{-1}$ (L mito)$^{-1}$
- Complex III rate constant
- $1\text{e}6$
-
* - $X_\text{C4}$
- mol s$^{-1}$ (L mito)$^{-1}$
- Complex IV rate constant
- $0.0125$
-
* - $X_\text{DH}$
- mol s$^{-1}$ (L mito)$^{-1}$
- NADH dehydrogenase rate constant
- $0.1732$
-
* - $X_\text{H}$
- mol s$^{-1}$ (L mito)$^{-1}$
- Proton leak activity
- $1\text{e}3$
-
* - $r$
-
- Dehydrogenase parameter
- $6.8385 $
-
* - $k_{\text{Pi},1}$
- mmol (L matrix water)$^{-1}$
- Dehydrogenase parameter
- $0.466 $
-
* - $k_{\text{Pi},2}$
- mmol (L matrix water)$^{-1}$
- Dehydrogenase parameter
- $0.658 $
-
* - $k_{\text{PiC}}$
- mmol (L cell)$^{-1}$
- PiC constant
- $1.61$
- {cite}`Bazil2016`
* - $k_{\text{O}_2}$
- $\mu$mol (L matrix water)$^{-1}$
- O$_2$ binding constant
- $120$
- {cite}`Wu2007`
* - $\Delta_r G^o_\text{C1}$
- kJ mol$^{-1}$
- Gibbs energy of reaction for complex I
- $ -109.7 $
- {cite}`Li2011`
* - $\Delta_r G^o_\text{C3}$
- kJ mol$^{-1}$
- Gibbs energy of reaction for complex III
- $46.7 $
- {cite}`Li2011`
* - $\Delta_r G^o_\text{C4}$
- kJ mol$^{-1}$
- Gibbs energy of reaction for complex IV
- $ -202.2 $
- {cite}`Li2011`
* - $[\text{NAD}]_{tot}$
- mmol (L matrix water)$^{-1}$
- Total NAD pool in the matrix
- $2.97$
- {cite}`Wu2007`
* - $[\text{Q}]_{tot}$
- mmol (L matrix water)$^{-1}$
- Total Q pool in the matrix
- $1.35$
- {cite}`Wu2007`
* - $[\text{c}]_{tot}$
- mmol (L IM water)$^{-1}$
- Total cytochrome c pool in the IMS
- $2.70$
- {cite}`Wu2007`
```
## Simulating ATP synthesis in vitro
The flux expressions developed above may be used to simulate mitochondrial ATP synthesis in vitro, governed by the system of equations
```{math}
:label: system-singlemito
\left\{
\renewcommand{\arraystretch}{2.5}
\begin{array}{rl}
\dfrac{ {\rm d} \Delta \Psi }{{\rm d} t} & = ( n_\text{C1} J_\text{C1} + n_\text{C3} J_\text{C3} + n_\text{C4} J_\text{C4} - n_\text{F} J_\text{F} - J_\text{ANT} - J_\text{H}) / C_m \\
\hline
\dfrac{ {\rm d} [\Sigma \text{ATP}]_x }{{\rm d} t} &= (J_\text{F} - J_\text{ANT} ) / W_x \\
\dfrac{ {\rm d} [\Sigma \text{ADP}]_x }{{\rm d} t} &= (-J_\text{F} + J_\text{ANT}) / W_x \\
\dfrac{ {\rm d} [\Sigma \text{Pi}]_x }{{\rm d} t} &= (-J_\text{F} + J_\text{PiC}) / W_x \quad \text{matrix species}\\
\dfrac{ {\rm d} [\text{NADH}^{2-}]_x }{{\rm d} t} &= (J_\text{DH} - J_\text{C1}) / W_x \\
\dfrac{ {\rm d} [\text{QH}_2]_x }{{\rm d} t} &= (J_\text{C1} - J_\text{C3}) / W_x \\
\hline
\dfrac{ {\rm d} [\text{c}_{red}^{2+}]_i}{{\rm d} t} &= 2(J_\text{C3} - J_\text{C4}) / W_i \quad \text{intermembrane space species}\\
\hline
\dfrac{ {\rm d} [\Sigma \text{ATP}]_c }{{\rm d} t} &= (V_{m2c} J_\text{ANT} - J_\text{AtC} )/ W_c \\
\dfrac{ {\rm d} [\Sigma \text{ADP}]_c }{{\rm d} t} &= (-V_{m2c} J_\text{ANT} + J_\text{AtC} ) / W_c \quad \text{cytosol species}\\
\dfrac{ {\rm d} [\Sigma \text{Pi}]_c }{{\rm d} t} &= (- V_{m2c} J_\text{PiC} + J_\text{AtC}) / W_c,
\end{array}
\renewcommand{\arraystretch}{1}
\right.
```
where the fluxes $J_\text{F}$ (Equation {eq}`J_F`), $J_\text{ANT}$ (Equation {eq}`J_ANT`), $J_\text{PiC}$ (Equation {eq}`J_PiC`), $J_\text{C1}$ (Equation {eq}`J_C1`), $J_\text{C3}$ (Equation {eq}`J_C3`), $J_\text{C4}$ (Equation {eq}`J_C4`), $J_\text{DH}$ (Equation {eq}`J_DH`), and $J_\text{H}$ (Equation {eq}`J_H`) are given above and the constants are listed in Tables {numref}`table-biophysicalconstants` and {numref}`table-ETC`. Here, we incorporate a constant ATP consumption flux, $J_\text{AtC} \ \text{(mol s}^{-1} \text{ (L cyto)}^{-1})$, that is
```{math}
J_\text{AtC} = X_\text{AtC}/V_c,
```
where $V_c$ is the fraction of cell volume occupied by cytosol (L cyto (L cell)$^{-1}$) and $X_\text{AtC}$ is the ATP consumption rate expressed in units of mmol s$^{-1}$ (L cell)$^{-1}$. Equation {eq}`system-singlemito` does not explicitly treat matrix or external $\text{pH}$, $\text{K}^+$, $\text{Mg}^{2+}$, or $\text{O}_2$ as variables. Reasonable clamped values for these quantities are ${\rm pH}_x = 7.4$, ${\rm pH}_c = 7.2$, $[\text{Mg}^{2+}]_x = 1 \ \text{mmol (L matrix water)}^{-1}$, $[\text{Mg}^{2+}]_c = 1 \ \text{mmol (L cyto water)}^{-1}$, $[\text{K}^{+}]_x = 100 \ \text{mmol (L matrix water)}^{-1}$, $[\text{K}^{+}]_c = 140 \ \text{mmol (L cyto water)}^{-1}$, and an $\text{O}_2$ partial pressure of $25 \ \text{mmHg}$. Respiratory chain reactants are determined from the total concentrations of metabolites within the mitochondrion, that is, the total pools for NAD, cytochrome c, and Q species are
```{math}
[\text{NAD}]_{tot} &= [\text{NAD}^-]_x + [\text{NADH}^{2-}]_x \\
[\text{c}]_{tot} &= [\text{c}^{2+}_{red}]_i + [\text{c}^{3+}_{ox}]_i, \quad \text{and} \\
[\text{Q}]_{tot} &= [\text{Q}]_x + [\text{QH}_2]_x.
```
The pools are $[\text{NAD}]_{tot} = 2.97 \ \text{mmol (L matrix water)}^{-1}$, $[\text{c}]_{tot} = 2.7 \ \text{mmol (L IMS water)}^{-1}$, and $[\text{Q}]_{tot} = 1.35 \ \text{mmol (L matrix water)}^{-1}$. The finite nature of these metabolite pools constrains the maximal concentrations of substrates available for complexes I, III, and IV. Thus, although the simple mass-action models for these complexes do not account for saturable enzyme kinetics, the fluxes are limited by the availability of substrates. Initial conditions are set under the assumption that the total adenine nucleotide (TAN) pool in both the matrix and cytosol is $10 \ \text{mM}$, but the ATP/ADP ratio is $<$$1$ in the matrix and $\sim$$100$ in the cytosol. The following code simulates in vitro mitochondrial function without ATP consumption in the external (cytosolic) space.
```
!pip install scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
###### Constants defining metabolite pools ######
# Volume fractions and water space fractions
V_c = 0.6601 # cytosol volume fraction # L cyto (L cell)**(-1)
V_m = 0.2882 # mitochondrial volume fraction # L mito (L cell)**(-1)
V_m2c = V_m / V_c # mito to cyto volume ratio # L mito (L cuvette)**(-1)
W_c = 0.8425 # cytosol water space # L cyto water (L cyto)**(-1)
W_m = 0.7238 # mitochondrial water space # L mito water (L mito)**(-1)
W_x = 0.9*W_m # matrix water space # L matrix water (L mito)**(-1)
W_i = 0.1*W_m # intermembrane water space # L IM water (L mito)**(-1)
# Total pool concentrations
NAD_tot = 2.97e-3 # NAD+ and NADH conc # mol (L matrix water)**(-1)
Q_tot = 1.35e-3 # Q and QH2 conc # mol (L matrix water)**(-1)
c_tot = 2.7e-3 # cytochrome c ox and red conc # mol (L IM water)**(-1)
# Membrane capacitance
Cm = 3.1e-3 # mol (V * L mito)**(-1)
###### Set fixed pH, cation concentrations, and O2 partial pressure ######
# pH
pH_x = 7.40
pH_c = 7.20
# K+ concentrations
K_x = 100e-3 # mol (L matrix water)**(-1)
K_c = 140e-3 # mol (L cyto water)**(-1)
# Mg2+ concentrations
Mg_x = 1.0e-3 # mol (L matrix water)**(-1)
Mg_c = 1.0e-3 # mol (L cyto water)**(-1)
# Oxygen partial pressure
PO2 = 25 # mmHg
###### Parameter vector ######
X_DH = 0.1732
X_C1 = 1.0e4
X_C3 = 1.0e6
X_C4 = 0.0125
X_F = 1.0e3
E_ANT = 0.325
E_PiC = 5.0e6
X_H = 1.0e3
X_AtC = 0
activity_array = np.array([X_DH, X_C1, X_C3, X_C4, X_F, E_ANT, E_PiC, X_H, X_AtC])
###### Initial Conditions ######
# Membrane Potential
DPsi_0 = 175/1000 # V
# Matrix species
sumATP_x_0 = 0.5e-3 # mol (L matrix water)**(-1)
sumADP_x_0 = 9.5e-3 # mol (L matrix water)**(-1)
sumPi_x_0 = 1.0e-3 # mol (L matrix water)**(-1)
NADH_x_0 = 2/3 * NAD_tot # mol (L matrix water)**(-1)
QH2_x_0 = 0.1 * Q_tot # mol (L matrix water)**(-1)
# IMS species
cred_i_0 = 0.1 * c_tot # mol (L IMS water)**(-1)
# Cytosolic species
sumATP_c_0 = 0 # mol (L cyto water)**(-1)
sumADP_c_0 = 10e-3 # mol (L cyto water)**(-1)
sumPi_c_0 = 10e-3 # mol (L cyto water)**(-1)
X_0 = np.array([DPsi_0, sumATP_x_0, sumADP_x_0, sumPi_x_0, NADH_x_0, QH2_x_0, cred_i_0, sumATP_c_0, sumADP_c_0, sumPi_c_0])
def dXdt(t, X, activity_array, solve_ode):
# Unpack variables
DPsi, sumATP_x,sumADP_x, sumPi_x, NADH_x, QH2_x, cred_i, sumATP_c, sumADP_c, sumPi_c = X
X_DH, X_C1, X_C3, X_C4, X_F, E_ANT, E_PiC, X_H, X_AtC = activity_array
# Hydrogen ion concentration
H_x = 10**(-pH_x) # mol (L matrix water)**(-1)
H_c = 10**(-pH_c) # mol (L cuvette water)**(-1)
# Oxygen concentration
a_3 = 1.74e-6 # oxygen solubility in cuvette # mol (L matrix water * mmHg)**(-1)
O2_x = a_3*PO2 # mol (L matrix water)**(-1)
# Thermochemical constants
R = 8.314 # J (mol K)**(-1)
T = 37 + 273.15 # K
F = 96485 # C mol**(-1)
# Proton motive force parameters (dimensionless)
n_F = 8/3
n_C1 = 4
n_C3 = 2
n_C4 = 4
# Dissociation constants
K_MgATP = 10**(-3.88)
K_HATP = 10**(-6.33)
K_KATP = 10**(-1.02)
K_MgADP = 10**(-3.00)
K_HADP = 10**(-6.26)
K_KADP = 10**(-0.89)
K_MgPi = 10**(-1.66)
K_HPi = 10**(-6.62)
K_KPi = 10**(-0.42)
# Other concentrations computed from the state variables:
NAD_x = NAD_tot - NADH_x # mol (L matrix water)**(-1)
Q_x = Q_tot - QH2_x # mol (L matrix water)**(-1)
cox_i = c_tot - cred_i # mol (L matrix water)**(-1)
## Binding polynomials
# Matrix species # mol (L mito water)**(-1)
PATP_x = 1 + H_x/K_HATP + Mg_x/K_MgATP + K_x/K_KATP
PADP_x = 1 + H_x/K_HADP + Mg_x/K_MgADP + K_x/K_KADP
PPi_x = 1 + H_x/K_HPi + Mg_x/K_MgPi + K_x/K_KPi
# Cytosol species # mol (L cuvette water)**(-1)
PATP_c = 1 + H_c/K_HATP + Mg_c/K_MgATP + K_c/K_KATP
PADP_c = 1 + H_c/K_HADP + Mg_c/K_MgADP + K_c/K_KADP
PPi_c = 1 + H_c/K_HPi + Mg_c/K_MgPi + K_c/K_KPi
## Unbound species
# Matrix species
ATP_x = sumATP_x / PATP_x # [ATP4-]_x
ADP_x = sumADP_x / PADP_x # [ADP3-]_x
Pi_x = sumPi_x / PPi_x # [HPO42-]_x
# Cytosolic species
ATP_c = sumATP_c / PATP_c # [ATP4-]_c
ADP_c = sumADP_c / PADP_c # [ADP3-]_c
Pi_c = sumPi_c / PPi_c # [HPO42-]_c
###### NADH Dehydrogenase ######
# Constants
r = 6.8385
k_Pi1 = 4.659e-4 # mol (L matrix water)**(-1)
k_Pi2 = 6.578e-4 # mol (L matrix water)**(-1)
# Flux
J_DH = X_DH * (r * NAD_x - NADH_x) * ((1 + sumPi_x / k_Pi1) / (1+sumPi_x / k_Pi2))
###### Complex I ######
# NADH_x + Q_x + 5H+_x <-> NAD+_x + QH2_x + 4H+_i + 4DPsi
# Gibbs energy (J mol**(-1))
DrGo_C1 = -109680
DrGapp_C1 = DrGo_C1 - R * T * np.log(H_x)
# Apparent equilibrium constant
Kapp_C1 = np.exp( -(DrGapp_C1 + n_C1 * F * DPsi) / (R * T)) * ((H_x / H_c)**n_C1)
# Flux (mol (s * L mito)**(-1))
J_C1 = X_C1 * (Kapp_C1 * NADH_x * Q_x - NAD_x * QH2_x)
###### Complex III ######
# QH2_x + 2cytoC(ox)3+_i + 2H+_x <-> Q_x + 2cytoC(red)2+_i + 4H+_i + 2DPsi
# Gibbs energy (J mol**(-1))
DrGo_C3 = 46690
DrGapp_C3 = DrGo_C3 + 2 * R * T * np.log(H_c)
# Apparent equilibrium constant
Kapp_C3 = np.exp(-(DrGapp_C3 + n_C3 * F * DPsi) / (R * T)) * (H_x / H_c)**n_C3
# Flux (mol (s * L mito)**(-1))
J_C3 = X_C3 * (Kapp_C3 * cox_i**2 * QH2_x - cred_i**2 * Q_x)
###### Complex IV ######
# 2cytoC(red)2+_i + 0.5O2_x + 2H+_c + 4H+_x <-> 2cytoC(ox)3+_i + H2O_x + 4H+_c + 4DPsi
# Constant
k_O2 = 1.2e-4 # mol (L matrix water)**(-1)
# Gibbs energy (J mol**(-1))
DrGo_C4 = -202160 # J mol**(-1)
DrGapp_C4 = DrGo_C4 - 2 * R * T * np.log(H_c)
# Apparent equilibrium constant
Kapp_C4 = np.exp(-(DrGapp_C4 + n_C4 * F * DPsi) / (R * T)) * (H_x / H_c)**n_C4
# Flux (mol (s * L mito)**(-1))
J_C4 = X_C4 *(Kapp_C4**0.5 * cred_i * O2_x**0.25 - cox_i) * (1 / (1 + k_O2 / O2_x))
###### F1F0-ATPase ######
# ADP3-_x + HPO42-_x + H+_x + n_F*H+_i <-> ATP4-_x + H2O + n_F*H+_x
# Gibbs energy (J mol**(-1))
DrGo_F = 4990
DrGapp_F = DrGo_F + R * T * np.log( H_x * PATP_x / (PADP_x * PPi_x))
# Apparent equilibrium constant
Kapp_F = np.exp( (DrGapp_F + n_F * F * DPsi ) / (R * T)) * (H_c / H_x)**n_F
# Flux (mol (s * L mito)**(-1))
J_F = X_F * (Kapp_F * sumADP_x * sumPi_x - sumATP_x)
###### ANT ######
# ATP4-_x + ADP3-_i <-> ATP4-_i + ADP3-_x
# Constants
del_D = 0.0167
del_T = 0.0699
k2o_ANT = 9.54/60 # s**(-1)
k3o_ANT = 30.05/60 # s**(-1)
K0o_D = 38.89e-6 # mol (L cuvette water)**(-1)
K0o_T = 56.05e-6 # mol (L cuvette water)**(-1)
A = +0.2829
B = -0.2086
C = +0.2372
phi = F * DPsi / (R * T)
# Reaction rates
k2_ANT = k2o_ANT * np.exp((A*(-3) + B*(-4) + C)*phi)
k3_ANT = k3o_ANT * np.exp((A*(-4) + B*(-3) + C)*phi)
# Dissociation constants
K0_D = K0o_D * np.exp(3*del_D*phi)
K0_T = K0o_T * np.exp(4*del_T*phi)
q = k3_ANT * K0_D * np.exp(phi) / (k2_ANT * K0_T)
term1 = k2_ANT * ATP_x * ADP_c * q / K0_D
term2 = k3_ANT * ADP_x * ATP_c / K0_T
num = term1 - term2
den = (1 + ATP_c/K0_T + ADP_c/K0_D) * (ADP_x + ATP_x * q)
# Flux (mol (s * L mito)**(-1))
J_ANT = E_ANT * num / den
###### H+/Pi cotransporter (PiC) ######
# H2PO4-_x + H+_x = H2PO4-_c + H+_c
# Constant
k_PiC = 1.61e-3 # mol (L cuvette)**(-1)
# H2P04- species
HPi_c = Pi_c * (H_c / K_HPi)
HPi_x = Pi_x * (H_x / K_HPi)
# Flux (mol (s * L mito)**(-1))
J_PiC = E_PiC * (H_c * HPi_c - H_x * HPi_x) / (k_PiC + HPi_c)
###### H+ leak ######
# Flux (mol (s * L mito)**(-1))
J_H = X_H * (H_c * np.exp(phi/2) - H_x * np.exp(-phi/2))
###### ATPase ######
# ATP4- + H2O = ADP3- + HPO42- + H+
#Flux (mol (s * L cyto)**(-1))
J_AtC = X_AtC / V_c
###### Differential equations (equation 23) ######
# Membrane potential
dDPsi = (n_C1 * J_C1 + n_C3 * J_C3 + n_C4 * J_C4 - n_F * J_F - J_ANT - J_H) / Cm
# Matrix species
dATP_x = (J_F - J_ANT) / W_x
dADP_x = (-J_F + J_ANT) / W_x
dPi_x = (-J_F + J_PiC) / W_x
dNADH_x = (J_DH - J_C1) / W_x
dQH2_x = (J_C1 - J_C3) / W_x
# IMS species
dcred_i = 2 * (J_C3 - J_C4) / W_i
# Buffer species
dATP_c = ( V_m2c * J_ANT - J_AtC ) / W_c
dADP_c = (-V_m2c * J_ANT + J_AtC ) / W_c
dPi_c = (-V_m2c * J_PiC + J_AtC) / W_c
dX = [dDPsi, dATP_x, dADP_x, dPi_x, dNADH_x, dQH2_x, dcred_i, dATP_c, dADP_c, dPi_c]
# Calculate state-dependent quantities after model is solved
if solve_ode == 1:
return dX
else:
J = np.array([PATP_x, PADP_x, PPi_x, PATP_c, PADP_c, PPi_c, J_DH, J_C1, J_C3, J_C4, J_F, J_ANT, J_PiC, DrGapp_F])
return dX, J
# Time vector
t = np.linspace(0,5,100)
# Solve ODE
results = solve_ivp(dXdt, [0, 5], X_0, method = 'Radau', t_eval=t, args=(activity_array,1))
DPsi, sumATP_x,sumADP_x, sumPi_x, NADH_x, QH2_x, cred_i, sumATP_c, sumADP_c, sumPi_c = results.y
# Plot figures
fig, ax = plt.subplots(1,2, figsize = (10,5))
ax[0].plot(t, sumATP_x*1000, label = r'[$\Sigma$ATP]$_x$')
ax[0].plot(t, sumADP_x*1000, label = r'[$\Sigma$ADP]$_x$')
ax[0].plot(t, sumPi_x*1000, label = r'[$\Sigma$Pi]$_x$')
ax[0].legend(loc="right")
ax[0].set_xlabel('Time (s)')
ax[0].set_ylabel('Concentration (mM)')
ax[0].set_ylim((-.5,10.5))
ax[1].plot(t, sumATP_c*1000, label = r'[$\Sigma$ATP]$_c$')
ax[1].plot(t, sumADP_c*1000, label = r'[$\Sigma$ADP]$_c$')
ax[1].plot(t, sumPi_c*1000, label = r'[$\Sigma$Pi]$_c$')
ax[1].legend(loc="right")
ax[1].set_xlabel('Time (s)')
ax[1].set_ylabel('Concentration (mM)')
ax[1].set_ylim((-.5,10.5))
plt.show()
```
**Figure 7:** Steady state solution from Equation {eq}`system-singlemito` for the (a) matrix and (b) cytosol species with $\text{pH}_x = 7.4$ and $\text{pH}_c = 7.2$.
The above simulations reach a final steady state where the phosphate metabolite concentrations are $[\text{ATP}]_x = 0.9 \ \text{mM}$, $[\text{ADP}]_x = 9.1 \ \text{mM} $, $[\text{Pi}]_x = 0.4 \ \text{mM}$, $[\text{ATP}]_c = 9.9 \ \text{mM}$, $[\text{ADP}]_c = 0.1 \ \text{mM}$, $[\text{Pi}]_c = 0.2 \ \text{mM}$, and the membrane potential is $186 \ \text{mV}$. This state represents a *resting* energetic state with no ATP hydrolysis in the cytosol. The Gibbs energy of ATP hydrolysis associated with this predicted state is $\Delta G_{\rm ATP} = \text{-}70 \ \text{kJ mol}^{-1}$, as calculated below.
```
sumATP_c_ss = sumATP_c[-1]
sumADP_c_ss = sumADP_c[-1]
sumPi_c_ss = sumPi_c[-1]
H_c = 10**(-pH_c) # mol (L cuvette water)**(-1)
# Thermochemical constants
R = 8.314 # J (mol K)**(-1)
T = 37 + 273.15 # K
# Dissociation constants
K_MgATP = 10**(-3.88)
K_HATP = 10**(-6.33)
K_KATP = 10**(-1.02)
K_MgADP = 10**(-3.00)
K_HADP = 10**(-6.26)
K_KADP = 10**(-0.89)
K_MgPi = 10**(-1.66)
K_HPi = 10**(-6.62)
K_KPi = 10**(-0.42)
## Binding polynomials
# Cytosol species # mol (L cuvette water)**(-1)
PATP_c = 1 + H_c/K_HATP + Mg_c/K_MgATP + K_c/K_KATP
PADP_c = 1 + H_c/K_HADP + Mg_c/K_MgADP + K_c/K_KADP
PPi_c = 1 + H_c/K_HPi + Mg_c/K_MgPi + K_c/K_KPi
DrGo_ATP = 4990
# Use equation 9 to calculate the apparent reference cytosolic Gibbs energy
DrGo_ATP_apparent = DrGo_ATP + R * T * np.log(H_c * PATP_c / (PADP_c * PPi_c))
# Use equation 9 to calculate cytosolic Gibbs energy
DrG_ATP = DrGo_ATP_apparent + R * T * np.log((sumADP_c_ss * sumPi_c_ss / sumATP_c_ss))
print('Cytosolic Gibbs energy of ATP hydrolysis (kJ mol^(-1))')
print(DrG_ATP / 1000)
```
# Tutorial 1 for R
## Solve Dantzig's Transport Problem using the *ix modeling platform* (ixmp)
<img style="float: right; height: 80px;" src="_static/R_logo.png">
### Aim and scope of the tutorial
This tutorial takes you through the steps to import the data for a very simple optimization model
and solve it using the ``ixmp``-GAMS interface.
We use Dantzig's transport problem, which is also used as the standard GAMS tutorial.
This problem finds a least cost shipping schedule that meets requirements at markets and supplies at factories.
If you are not familiar with GAMS, please take a minute to look at the [transport.gms](transport.gms) code.
For reference of the transport problem, see:
> Dantzig, G B, Chapter 3.3. In Linear Programming and Extensions.
> Princeton University Press, Princeton, New Jersey, 1963.
> This formulation is described in detail in:
> Rosenthal, R E, Chapter 2: A GAMS Tutorial.
> In GAMS: A User's Guide. The Scientific Press, Redwood City, California, 1988.
> see http://www.gams.com/mccarl/trnsport.gms
The steps in the tutorial are the following:
0. Launch an ixmp.Platform instance and initialize a new ixmp.Scenario.
0. Define the sets and parameters in the scenario, and commit the data to the platform
0. Check out the scenario and initialize variables and equations (necessary for ``ixmp`` to import the solution)
0. Solve the model (export to GAMS input gdx, execute, read solution from output gdx)
0. Display the solution (variables and equation)
### Launching the platform and initializing a new scenario
We launch a platform instance and initialize a new scenario. This will be used to store all data required to solve Dantzig's transport problem as well as the solution after solving it in GAMS.
```
# load the rixmp package source code
library("rixmp")
ixmp <- import('ixmp')
# launch the ix modeling platform using a local HSQL database instance
mp <- ixmp$Platform()
# details for creating a new scenario in the ix modeling platform
model <- "transport problem"
scenario <- "standard"
annot <- "Dantzig's transportation problem for illustration and testing"
# initialize a new ixmp.Scenario
# the parameter version='new' indicates that this is a new scenario instance
scen <- mp$Scenario(model, scenario, "new", annotation=annot)
```
### Defining the sets in the scenario
Below, we first show the data as they would be written in the GAMS tutorial ([transport.gms](transport.gms) in this folder).
Then, we show how this can be implemented in the R ``ixmp`` notation, and display the elements of set ``i`` as an R list.
```
# define the sets of locations of canning plants
scen$init_set("i")
i.set = c("seattle","san-diego")
scen$add_set("i", i.set )
### markets set
scen$init_set("j")
j.set = c("new-york","chicago","topeka")
scen$add_set("j", j.set )
# display the set 'i'
scen$set('i')
```
### Defining parameters in the scenario
Next, we define the production capacity and demand parameters, and display the demand parameter as a DataFrame.
Then, we add the two-dimensional distance parameter and the transport cost scalar.
```
# capacity of plant i in cases
scen$init_par("a", c("i"))
a.df = data.frame( i = i.set, value = c(350 , 600) , unit = 'cases')
scen$add_par("a", adapt_to_ret(a.df))
#scen$add_par("a", "san-diego", 600, "cases")
# demand at market j in cases
scen$init_par("b", c("j"))
b.df = data.frame( j = j.set, value = c(325 , 300, 275) , unit = 'cases')
scen$add_par("b", adapt_to_ret(b.df))
# display the parameter 'b'
scen$par('b')
```
```
# distance in thousands of miles
scen$init_par("d", c("i","j"))
d.df = data.frame(expand.grid(i = i.set,j = j.set), value = c(2.5,2.5,1.7,1.8,1.8,1.4), unit = 'km')
scen$add_par("d", adapt_to_ret(d.df))
```
`Scalar f freight in dollars per case per thousand miles /90/ ;`
```
# cost per case per 1000 miles
# initialize scalar with a value and a unit (and optionally a comment)
scen$init_scalar("f", 90.0, "USD/km")
```
### Committing the scenario to the ixmp database instance
```
# commit new scenario to the database
# no changes can then be made to the scenario data until a check-out is performed
comment = "importing Dantzig's transport problem for illustration of the R interface"
scen$commit(comment)
# set this new scenario as the default version for the model/scenario name
scen$set_as_default()
```
### Defining variables and equations in the scenario
The levels and marginals of these variables and equations will be imported to the scenario when reading the gdx solution file.
```
# perform a check_out to make further changes
scen$check_out()
# initialize the decision variables and equations
scen$init_var("z", NULL, NULL)
scen$init_var("x", idx_sets=c("i", "j"))
scen$init_equ("demand", idx_sets=c("j"))
# commit changes to the scenario (save changes in ixmp database instance)
change_comment = "initialize the model variables and equations"
scen$commit(change_comment)
```
### Solve the scenario
The ``solve()`` function exports the scenario to a GAMS gdx file, executes GAMS, and then imports the solution from an output GAMS gdx file to the database.
For the model equations and the GAMS workflow (reading the data from gdx, solving the model, writing the results to gdx), see ``transport_ixmp.gms``.
```
scen$solve(model="transport_ixmp")
```
### Display and analyze the results
```
# display the objective value of the solution
scen$var("z")
# display the quantities transported from canning plants to demand locations
scen$var("x")
# display the quantities and marginals (=shadow prices) of the demand balance constraints
scen$equ("demand")
```
### Close the database connection of the ix modeling platform
Closing the database connection is recommended when working with the local file-based database, i.e., ``dbtype='HSQLDB'``. This command closes the database files and removes temporary data. This is necessary so that other notebooks or ``ixmp`` instances can access the database file, or so that the database files can be copied to a different folder or drive.
```
# close the connection of the platform instance to the local database files
mp$close_db()
```
```
#################### 2020 xilinx summer school ############
import sys
import numpy as np
import os
import time
import math
from PIL import Image,ImageDraw
from matplotlib import pyplot
import matplotlib.pylab as plt
import cv2
from datetime import datetime
from pynq import Xlnk
from pynq import Overlay
from summer_processing import *
import struct
team = 'summernet'
agent = Agent(team)
interval_time = 0
xlnk = Xlnk()
xlnk.xlnk_reset()
img = xlnk.cma_array(shape=(3,162,322), dtype=np.uint8)
conv_weight_1x1_all = xlnk.cma_array(shape=(1181, 16, 16), dtype=np.uint16)
conv_weight_3x3_all = xlnk.cma_array(shape=(46, 16, 3, 3), dtype=np.uint16)
bias_all = xlnk.cma_array(shape=(123, 16), dtype=np.uint16)
DDR_pool_3_out = xlnk.cma_array(shape=(48, 82, 162), dtype=np.uint16)
DDR_pool_6_out = xlnk.cma_array(shape=(96, 42, 82), dtype=np.uint16)
DDR_buf = xlnk.cma_array(shape=(36, 16, 22, 42), dtype=np.uint16)
predict_box = xlnk.cma_array(shape=(5,), dtype=np.float32)
print("Allocating memory successfully ")
path_mkdir()
img_path = '/home/xilinx/jupyter_notebooks/summer_school/images/'
coord_path = '/home/xilinx/jupyter_notebooks/summer_school/result/coordinate/summer_school/'
tbatch = 0
total_num_img = len(agent.img_list)
print("total_num_img:", total_num_img)
result = list()
agent.reset_batch_count()
blank = Image.new('RGB', (322, 162), (127, 127, 127))
# load parameters from SD card to DDR
params = np.fromfile("summernet.bin", dtype=np.uint16)
idx = 0
np.copyto(conv_weight_1x1_all, params[idx:idx+conv_weight_1x1_all.size].reshape(conv_weight_1x1_all.shape))
idx += conv_weight_1x1_all.size
np.copyto(conv_weight_3x3_all, params[idx:idx+conv_weight_3x3_all.size].reshape(conv_weight_3x3_all.shape))
idx += conv_weight_3x3_all.size
np.copyto(bias_all, params[idx:idx+bias_all.size].reshape(bias_all.shape))
print("Parameters loading successfully")
################### download the overlay #####################
overlay = Overlay('/home/xilinx/jupyter_notebooks/summer_school/overlay/summernet/summernet.bit')
print("summernet.bit loaded successfully")
myIP = overlay.mobilenet_0
################## download weights and image resizing and processing
myIP.write(0x10, img.physical_address)
myIP.write(0x18, conv_weight_1x1_all.physical_address)
myIP.write(0x20, conv_weight_3x3_all.physical_address)
myIP.write(0x28, bias_all.physical_address)
myIP.write(0x30, DDR_pool_3_out.physical_address)
myIP.write(0x38, DDR_pool_6_out.physical_address)
myIP.write(0x40, DDR_buf.physical_address)
myIP.write(0x48, predict_box.physical_address)
def process_image(currPic):
print("img_path + currPic:", img_path + currPic)
image = Image.open(img_path + currPic).convert('RGB')
image = image.resize((320, 160))
blank.paste(image, (1, 1))
image = np.transpose(blank, (2, 0, 1))
np.copyto(img, np.array(image))
first_image = True
boxes = []
names = []
i = 0
lastimgname="x.jpg"
################### Start to detect ################
start = time.time()
for batch in get_image_batch():
for currPic in batch:
i=i+1
print("currPic:", currPic)
names.append(currPic)
if first_image:
image = Image.open(img_path + currPic).convert('RGB')
image = image.resize((320, 160))
blank.paste(image, (1, 1))
image = np.transpose(blank, (2, 0, 1))
np.copyto(img, np.array(image))
# image_1 = pyplot.imread(img_path + currPic)
# pyplot.imshow(image_1)
# pyplot.show()
first_image = False
continue
if not first_image:
myIP.write(0x00, 1)
time.sleep(0.07)
image = Image.open(img_path + currPic).convert('RGB')
image = image.resize((320, 160))
blank.paste(image, (1, 1))
image = np.transpose(blank, (2, 0, 1))
np.copyto(img, np.array(image))
# image_show = pyplot.imread(img_path + currPic)
# pyplot.imshow(image_show)
# pyplot.show()
isready = myIP.read(0x00)
while( isready == 1 ):
isready = myIP.read(0x00)
predict_box[0] = predict_box[0] / 40
predict_box[1] = predict_box[1] / 20
predict_box[2] = predict_box[2] / 40
predict_box[3] = predict_box[3] / 20
print("predict_box:", predict_box)
x1 = int(round((predict_box[0] - predict_box[2]/2.0) * 640))
y1 = int(round((predict_box[1] - predict_box[3]/2.0) * 360))
x2 = int(round((predict_box[0] + predict_box[2]/2.0) * 640))
y2 = int(round((predict_box[1] + predict_box[3]/2.0) * 360))
boxes.append([x1, x2, y1, y2])
print("coordinate[x1, x2, y1, y2]:", [x1, x2, y1, y2])
###########在原图上画出边界框########
print("batch[i]:", batch[i-1])
print("[x1, x2, y1, y2]", [x1, x2, y1, y2])
imgee=Image.open(img_path+batch[i-1]).convert('RGB')
# image = image.resize((320, 160))
image_show=np.array(imgee)
for x in range(x1,x2):
image_show[y1][x][0]=255
image_show[y1][x][1]=0
image_show[y1][x][2]=0
image_show[y1+1][x][0]=255
image_show[y1+1][x][1]=0
image_show[y1+1][x][2]=0
for x in range(x1,x2):
image_show[y2][x][0]=255
image_show[y2][x][1]=0
image_show[y2][x][2]=0
image_show[y2+1][x][0]=255
image_show[y2+1][x][1]=0
image_show[y2+1][x][2]=0
for y in range(y1,y2):
image_show[y][x1][0]=255
image_show[y][x1][1]=0
image_show[y][x1][2]=0
image_show[y][x1+1][0]=255
image_show[y][x1+1][1]=0
image_show[y][x1+1][2]=0
for y in range(y1,y2):
image_show[y][x2][0]=255
image_show[y][x2][1]=0
image_show[y][x2][2]=0
image_show[y][x2+1][0]=255
image_show[y][x2+1][1]=0
image_show[y][x2+1][2]=0
pyplot.imshow(image_show)
pyplot.show()
#image_show.close()
#lastimgname=batch[i-1]
###################
#collect result for last image
myIP.write(0x00, 1)
isready = myIP.read(0x00)
while( isready == 1 ):
isready = myIP.read(0x00)
predict_box[0] = predict_box[0] / 40;
predict_box[1] = predict_box[1] / 20;
predict_box[2] = predict_box[2] / 40;
predict_box[3] = predict_box[3] / 20;
print("predict_box", predict_box)
x1 = int(round((predict_box[0] - predict_box[2]/2.0) * 640))
y1 = int(round((predict_box[1] - predict_box[3]/2.0) * 360))
x2 = int(round((predict_box[0] + predict_box[2]/2.0) * 640))
y2 = int(round((predict_box[1] + predict_box[3]/2.0) * 360))
boxes.append([x1, x2, y1, y2])
##################### draw the bounding box on the original image ##########
# (same drawing code as above, applied to the last image; left commented out)
###################
end = time.time()
tbatch = end - start
print("All computation finished")
################ record the results and write to XML
f_out = open(coord_path + '/summernet.txt', 'w')
cnt = 0
for box in boxes:
    x1 = box[0]
    x2 = box[1]
    y1 = box[2]
    y2 = box[3]
    coord = str(x1) + ' ' + str(x2) + ' ' + str(y1) + ' ' + str(y2)
    name = names[cnt]
    cnt = cnt + 1
    f_out.write(name + '\n')
    f_out.write(coord + '\n')
f_out.close()
print("\nAll results stored in summernet.txt successfully")
# agent.save_results_xml(boxes)
# agent.write(tbatch, total_num_img, team)
# print("XML and time results written successfully.")
############## clean up #############
xlnk.xlnk_reset()
```
| github_jupyter |
# Confidence intervals for two proportions
```
import numpy as np
import pandas as pd
import scipy
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
```
## Loading the data
```
data = pd.read_csv('banner_click_stat.txt', header = None, sep = '\t')
data.columns = ['banner_a', 'banner_b']
data.head()
data.describe()
```
## Interval estimates of the proportions
$$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$
```
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
                                            data.shape[0],
                                            method='wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
                                            data.shape[0],
                                            method='wilson')

print('interval for banner a [%f, %f]' % conf_interval_banner_a)
print('interval for banner b [%f, %f]' % conf_interval_banner_b)
```
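As a sanity check, the Wilson formula above can also be evaluated directly. This is a minimal sketch with illustrative counts (not the banner data), using a hard-coded 97.5% normal quantile so it needs no scipy:

```python
import math

def wilson_interval(successes, n, z=1.959963984540054):
    """Wilson confidence interval for a proportion; z defaults to the 97.5% normal quantile."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 40 successes out of 1000 trials (made-up numbers)
lo, hi = wilson_interval(40, 1000)
print(lo, hi)
```

For real work, `proportion_confint(..., method='wilson')` above is the way to go; this only connects the formula to the code.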
### How can we compare them?
## Confidence interval for the difference of proportions (independent samples)
|        | $X_1$ | $X_2$ |
|--------|-------|-------|
| 1      | a     | b     |
| 0      | c     | d     |
| $\sum$ | $n_1$ | $n_2$ |
$$ \hat{p}_1 = \frac{a}{n_1}$$
$$ \hat{p}_2 = \frac{b}{n_2}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}$$
```
def proportions_confint_diff_ind(sample1, sample2, alpha=0.05):
    z = scipy.stats.norm.ppf(1 - alpha / 2.)
    p1 = float(sum(sample1)) / len(sample1)
    p2 = float(sum(sample2)) / len(sample2)
    left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1) / len(sample1) + p2 * (1 - p2) / len(sample2))
    right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1) / len(sample1) + p2 * (1 - p2) / len(sample2))
    return (left_boundary, right_boundary)
print("confidence interval: [%f, %f]" % proportions_confint_diff_ind(data.banner_a, data.banner_b))
```
## Confidence interval for the difference of proportions (paired samples)
| $X_1$ \ $X_2$ | 1     | 0     | $\sum$ |
|---------------|-------|-------|--------|
| 1             | e     | f     | e + f  |
| 0             | g     | h     | g + h  |
| $\sum$        | e + g | f + h | n      |
$$ \hat{p}_1 = \frac{e + f}{n}$$
$$ \hat{p}_2 = \frac{e + g}{n}$$
$$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \frac{f - g}{n} \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{f + g}{n^2} - \frac{(f - g)^2}{n^3}}$$
```
def proportions_confint_diff_rel(sample1, sample2, alpha=0.05):
    z = scipy.stats.norm.ppf(1 - alpha / 2.)
    sample = list(zip(sample1, sample2))
    n = len(sample)
    f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
    g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
    left_boundary = float(f - g) / n - z * np.sqrt(float(f + g) / n**2 - float((f - g)**2) / n**3)
    right_boundary = float(f - g) / n + z * np.sqrt(float(f + g) / n**2 - float((f - g)**2) / n**3)
    return (left_boundary, right_boundary)

print("confidence interval: [%f, %f]" % proportions_confint_diff_rel(data.banner_a, data.banner_b))
```
| github_jupyter |
```
import re
import urllib
import urllib3
import requests
from time import sleep
from bs4 import BeautifulSoup

urllib3.disable_warnings()
headers = {'User-Agent':'Mozilla/6.2'}
data_Stor1=[]
http_proxy = "http://76.76.76.154:53281"
https_proxy = "https://35.230.124.232:80"
ftp_proxy = "ftp://35.233.225.185:8080"
proxyDict = {
"http" : http_proxy,
"https" : https_proxy,
"ftp" : ftp_proxy
}
for oo in range(70, 100):
    urlpage = ('https://www.yelp.com/search?find_desc=Home+Design&find_loc=90036&start='
               + str(oo * 10) + '&l=g:-118.401374817,34.0253477381,-118.299064636,34.1106675388')
    print(urlpage)
    page = requests.get(urlpage, headers=headers, verify=False, proxies=proxyDict)
    data = page.content
    so1 = BeautifulSoup(data, "html.parser")
    search_result = so1.find_all('li', class_="regular-search-result")
    for jj in range(0, len(search_result)):
        link0 = search_result[jj].find_all('a', class_="biz-name js-analytics-click")
        link = 'https://www.yelp.com/' + link0[0].get('href')
        print(link)
        page2 = requests.get(link, headers=headers, verify=False)
        data21 = page2.content
        so2 = BeautifulSoup(data21, "html.parser")
        sleep(10)
        extractpage()
def extractpage():
    data_Stor0 = ['-'] * 14
    title0 = so2.find_all('h1')[0].text
    title1 = title0.replace(' ', '').replace('\n', '')
    print(title1)
    data_Stor0[0] = title1
    biz_rating = so1.find_all('div', class_="biz-rating biz-rating-very-large clearfix")
    if len(biz_rating) > 0:
        rating = biz_rating[0].find(class_='offscreen').get('alt')
        print(rating)
        data_Stor0[1] = rating
        reviews0 = biz_rating[0].find('span').text
        reviews1 = reviews0.replace(' ', '').replace('\n', '')
        print(reviews1)
        data_Stor0[2] = reviews1
    mapbox = so2.find_all('div', class_="mapbox-text")[0]
    street_address0 = mapbox.find_all('strong', class_="street-address")[0]
    street_address1 = street_address0.next_element.next_element.next_element
    street_address2 = street_address1.next_element.next_element
    try:
        street_address11 = street_address1.replace(' ', '').replace('\n', '')
        print(street_address11)
        data_Stor0[3] = street_address11
    except:
        street_address11 = street_address1
        print(street_address11)
        data_Stor0[3] = street_address11
    try:
        street_address22 = street_address2.replace(' ', '').replace('\n', '')
        print(street_address22)
        data_Stor0[4] = street_address22
    except:
        street_address22 = street_address2
        print(street_address22)
        data_Stor0[4] = street_address22
    biz_phone0 = mapbox.find_all('span', class_="biz-phone")
    if len(biz_phone0) > 0:
        biz_phone1 = biz_phone0[0].text
        biz_phone2 = biz_phone1.replace(' ', '').replace('\n', '')
        print(biz_phone2)
        data_Stor0[5] = biz_phone2
    biz_website = mapbox.find_all('span', class_="biz-website js-biz-website js-add-url-tagging")
    print(len(biz_website))
    if len(biz_website) > 0:
        website0 = biz_website[0]
        website = website0.find_all('a')[0].text
        print(website)
        data_Stor0[6] = website
    data_Stor0[7] = link
    biz_main_info = so2.find_all('div', class_="biz-main-info embossed-text-white")[0]
    category_str_list = biz_main_info.find_all('span', class_="category-str-list")[0]
    acategory = category_str_list.find_all('a')
    category_stor = []
    for ii in range(len(acategory)):
        print(acategory[ii].text)
        category_stor.append(acategory[ii].text)
    for hg in range(len(category_stor)):
        data_Stor0[8 + hg] = category_stor[hg]
    print(data_Stor0)
    data_Stor1.append(data_Stor0)
len(data_Stor1)
import warnings
from openpyxl import Workbook

wb = Workbook(write_only=True)
ws = wb.create_sheet()
p = 0
# append each scraped row to the sheet, unwrapping stray bs4 elements on failure
for irow in data_Stor1:
    print(irow)
    p = p + 1
    print(p)
    try:
        ws.append(irow)
    except:
        try:
            data_Stor1[p - 1][3] = data_Stor1[p - 1][3].text
        except:
            pass
        try:
            data_Stor1[p - 1][4] = data_Stor1[p - 1][4].text
        except:
            pass
        print(p)
# save the file
wb.save('home_design.xlsx')
import warnings
from openpyxl import Workbook

wb = Workbook(write_only=True)
ws = wb.create_sheet()
# write the (now cleaned) rows to a second workbook
for irow in data_Stor1:
    ws.append(irow)
# save the file
wb.save('home_design2.xlsx')
!conda install -y -c clinicalgraphics chromedriver
# `driver` is assumed to be a Selenium WebDriver created elsewhere (hence the chromedriver install above)
cookiesdict = driver.get_cookies()
cookiesdict
import json, io
with io.open('cookiesdict.txt', 'w', encoding='utf8') as json_file:
    data3 = json.dumps(cookiesdict, ensure_ascii=False, indent=4, sort_keys=True)
    json_file.write(data3)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/timrocar/DS-Unit-2-Linear-Models/blob/master/module1-regression-1/LS_DS_211_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 1*
---
# Regression 1
## Assignment
You'll use another **New York City** real estate dataset.
But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.
The data comes from renthop.com, an apartment listing website.
- [ ] Look at the data. Choose a feature, and plot its relationship with the target.
- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API).
- [ ] Define a function to make new predictions and explain the model coefficient.
- [ ] Organize and comment your code.
> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.
If your **Plotly** visualizations aren't working:
- You must have JavaScript enabled in your browser
- You probably want to use Chrome or Firefox
- You may need to turn off ad blockers
- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/#jupyterlab-support-python-35)
## Stretch Goals
- [ ] Do linear regression with two or more features.
- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
```
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
    DATA_PATH = '../data/'

# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'],
index_col='created')
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
df.info()
df.isnull().any()
df['price'].describe()
```
# Some EDA
```
df['price'].plot(kind='hist')
df.describe()['price']
df['bedrooms'].plot(kind='hist')
# splitting my data; I will use bedrooms to predict rental price
# Target
y = df['price']
# Feature Matrix
X = df[['bedrooms']]
import matplotlib.pyplot as plt
plt.scatter(X, y)
plt.xlabel('Bedrooms')
plt.ylabel('Rent Price')
plt.show()
```
# Train-Validation Split
```
df = df.sort_index()
cutoff = '2016-06-10'
mask = X.index < cutoff
X_train, y_train = X.loc[mask], y.loc[mask]
x_val, y_val = X.loc[~mask], y.loc[~mask]
```
## Establishing Baselines
```
## Establishing Baseline
y_train
y_train.plot(kind='hist')
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(baseline_guess)
print(MAE)
```
## Building a Model
```
from sklearn.linear_model import LinearRegression
# defining the predictor
lr = LinearRegression()
# training predictor
lr.fit(X_train, y_train);
lr.coef_[0]
lr.intercept_
```
Listed here is the model coefficient (860.15). This value is the slope of the line of best fit. The intercept is where the line of best fit intersects the y-axis.
## Defining the formula for the line of best fit (not necessary according to Nicholas)
```
## rent_price_prediction = bedrooms * 860.15 - 2258.03
```
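The assignment also asks for a function that makes new predictions. Here is a minimal sketch using the slope and intercept discussed above; `COEF` and `INTERCEPT` are placeholder values copied from the text and will differ if the split or the data change:

```python
# Placeholder values read off lr.coef_[0] and lr.intercept_ above
COEF = 860.15
INTERCEPT = -2258.03

def predict_rent(bedrooms):
    """Predicted monthly rent for a given number of bedrooms (simple linear model)."""
    return COEF * bedrooms + INTERCEPT

# Each additional bedroom adds COEF dollars to the prediction
print(predict_rent(2))
```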
## Stretch GOALS (Linear Regression with 2+ features)
```
df.describe()
# splitting my data; I will use bedrooms, bathrooms, and whether it was pre-war to predict price
# Target
y = df['price']
# Feature Matrix
X = df[['bedrooms', 'bathrooms', 'pre-war']]
## train - validation split
cutoff = '2016-06-10'
mask = X.index < cutoff
X_train, y_train = X.loc[mask], y.loc[mask]
x_val, y_val = X.loc[~mask], y.loc[~mask]
#establishing baselines
y_train
y_train.plot(kind='hist')
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(baseline_guess)
print(MAE)
# defining the predictor
regressor = LinearRegression()
# training predictor
regressor.fit(X_train, y_train);
regressor.coef_
regressor.intercept_
coeff_df = pd.DataFrame(regressor.coef_, X.columns, columns=['Coefficient'])
coeff_df
```
| github_jupyter |
# AI2S Deep Learning Day - Beginners notebook
<sub>Alessio Ansuini, AREA Research and Technology</sub>
<sub>Andrea Gasparin and Marco Zullich, Artificial Intelligence Student Society</sub>
## Pytorch
PyTorch is a Python library offering extensive support for the construction of deep Neural Networks (NNs).
One of the main characteristics of PyTorch is that it operates with **Tensors**, as they provide a significant speed-up of the computations.
For the scope of this introduction we can simply think of Tensors as arrays, with all the usual operations preserved, as we can see in the following example.
```
import torch
import numpy as np
tensor_A = torch.tensor([1,1,1])
array_A = np.array([1,1,1])
print(tensor_A)
print(array_A)
print( 2 * tensor_A )
print( 2 * array_A )
```
## The images representation
In our context, we will work with black-and-white images. They are represented as matrices containing numbers.
The numbers go from 0 (white) to the maximum value (black), covering the whole grey-scale spectrum in between.
```
central_vertical_line = torch.tensor([[ 0, 4, 0],
[ 0, 8, 0],
[ 0, 10, 0]])
import matplotlib.pyplot as plt #plots and image viewer module
plt.imshow(central_vertical_line, cmap="Greys")
```
## Handwritten digit recognition (MNIST dataset)
In this notebook, we'll train a simple fully-connected NN for the classification of the MNIST dataset.
The MNIST (*modified National Institute of Standards and Technology database*) is a collection of 28x28 pixels black and white images containing handwritten digits. Let's see an example:
```
import torchvision  # the module where the dataset is stored

# To improve training efficiency, the data are first normalised; the "transform" pipeline below does the job for us
transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307,), (0.3081,)),
])
trainset = torchvision.datasets.MNIST(root="./data", train=True, transform=transform, download=True)
testset = torchvision.datasets.MNIST(root="./data", train=False, transform=transform, download=True)
```
**trainset.data** contains the images, represented as 28x28 matrices of pixel intensities
**trainset.targets** contains the labels, i.e. the digits represented in the images
```
print("trainset.data[0] is the first image; its size is:", trainset.data[0].shape)
print("the digit represented is the number: ", trainset.targets[0])
# if we have a tensor composed of a single scalar, we can extract the scalar via tensor.item()
print("scalar representation: ", trainset.targets[0].item())
```
Let's see that the image actually shows the number 5
```
print(trainset.data[0][6])
plt.imshow(trainset.data[0], cmap='Greys')
```
### THE TRAINING
First we need to separate the images and the labels
```
train_imgs = trainset.data
train_labels = trainset.targets
test_imgs = testset.data
test_labels = testset.targets
```
### Flatten the image
To simplify the network flow, images are initially flattened, meaning that the corresponding matrix is transformed into a single, longer row array:
```
central_vertical_line_flattened = central_vertical_line.flatten()
print("initial matrix:\n",central_vertical_line)
print("\nmatrix flattened:\n",central_vertical_line_flattened)
print("\nmatrix shape:",central_vertical_line.shape, " flattened shape:", central_vertical_line_flattened.shape)
```
### Creating the NN
We create the NN as in the image below:
* the **input layer** has 784 neurons: this as the images have 28x28=784 numbers;
* there are three **hidden layers**: the first one has 16 neurons, the second one has 32, the third one has 16 again;
* the **output layer** has 10 neurons, one per class.
The NN can be easily created using the `torch.nn.Sequential` method, which allows for the construction of the NN by pipelining the building blocks in a list and passing it to the Sequential constructor.
We pass to Sequential the following elements:
* we start with a `Flatten()` module since we need to flatten the 2D 28x28 images into the 784 elements 1D array
* we alternate `Linear` layers (fully-connected layers) with `ReLU` modules (Rectified Linear Unit) activation functions
* we conclude with a `Linear` layer without an activation function: this will output, for each image, an array of 10 scalars, each one indicating the "confidence" the network has in assigning the input image to the corresponding class. We'll assign the image to the class having the highest confidence.
After this, the architecture of the NN is complete! We will then focus on telling Python how to train this NN.
```
from torch import nn
inputDimension = 784
outputDimension = 10 # the number of classes - 10 digits from 0 to 9
layersWidth = 16
network = nn.Sequential(
nn.Flatten(),
nn.Linear(inputDimension, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, layersWidth*2),
nn.ReLU(),
nn.Linear(layersWidth*2, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, outputDimension),
)
```
### NN training
We'll use vanilla mini-batch Stochastic Gradient Descent (SGD) with a learning rate of *learningRate* (you chose!!!) as the optimizer.
We'll create mini-batches of size *batchSize* (i.e., we'll have 60000/*batchSize*=600 mini-batches containing our data) for the training.
We'll train the NN for *epochs* epochs, each epoch indicating how many times the NN "sees" the whole dataset during training.
The loss function we'll use is the **categorical cross-entropy** (particularly useful for non-binary classification problems) and we'll also evaluate the network on its **accuracy** (i.e., images correctly classified divided by total images).
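To make the loss concrete: for one sample, categorical cross-entropy is the negative log of the softmax probability assigned to the true class. Here is a from-scratch sketch (PyTorch's `CrossEntropyLoss` applies the same idea, averaged over the mini-batch):

```python
import math

def cross_entropy(logits, target):
    """-log(softmax(logits)[target]) for a single sample."""
    m = max(logits)                               # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[target] / sum(exps))

# A confident, correct prediction gives a small loss; a confident, wrong one a large loss
print(cross_entropy([5.0, 0.0, 0.0], 0))
print(cross_entropy([5.0, 0.0, 0.0], 2))
```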
### *learningRate*, *batchSize*, and *epochs* are parameters you can play with. Let's see how you can improve the accuracy!
```
#hyper parameters
batchSize = 100
learningRate = 0.1
epochs = 3
```
In order to pass our data to the network, we'll make use of DataLoaders: they take care of subdividing the dataset into mini-batches, applying the requested transformations, and optionally re-shuffling them at the beginning of each new epoch.
```
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False)
```
We also provide a function to compute the accuracy of the nn given its outputs and the true values of the images they are trying to classify
```
def calculate_accuracy(nn_output, true_values):
    class_prediction = nn_output.topk(1).indices.flatten()
    match = (class_prediction == true_values)
    correctly_classified = match.sum().item()
    accuracy = correctly_classified / nn_output.size(0)
    return accuracy
```
Let's check that it works for a fictitious batch of 4 images and 3 classes.
A NN output in this case will be a matrix of shape 4x3, each row holding the probability that the model assigns the corresponding image to the corresponding class.
We create a fake ground truth s.t. the NN assigns correctly the first 3 images: the corresponding accuracy should then be 3/4=0.75
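A sketch of that check, using NumPy's `argmax` in place of the tensor `topk(1).indices` call so it runs without the network:

```python
import numpy as np

# Fake NN output for 4 images and 3 classes: one row of class scores per image
fake_output = np.array([[0.9, 0.05, 0.05],
                        [0.1, 0.80, 0.10],
                        [0.2, 0.20, 0.60],
                        [0.7, 0.20, 0.10]])
predicted = fake_output.argmax(axis=1)        # plays the role of topk(1).indices
ground_truth = np.array([0, 1, 2, 1])         # the network is right on the first 3 only
accuracy = (predicted == ground_truth).mean()
print(accuracy)  # 0.75
```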
### Here is the actual training
```
lossValues = []  # to store the loss trend during training (we want it to DECREASE as much as possible)
accuracy = []    # to store the accuracy trend during training (we want it to INCREASE as much as possible)

lossFunction = torch.nn.CrossEntropyLoss()  # the error function the NN is trying to minimise
network.train()  # this tells our NN that it is in training mode
optimizer = torch.optim.SGD(network.parameters(), lr=learningRate)  # the optimiser we want our NN to use

# MAIN LOOP: one iteration per epoch
for e in range(epochs):
    # INNER LOOP: one iteration per MINI-BATCH
    for i, (imgs, ground_truth) in enumerate(trainloader):
        optimizer.zero_grad()  # needed so gradients do NOT accumulate across iterations
        predictions = network(imgs)
        loss = lossFunction(predictions, ground_truth)
        loss.backward()
        optimizer.step()
        accuracy_batch = calculate_accuracy(predictions, ground_truth)
        lossValues.append(loss.item())
        accuracy.append(accuracy_batch)
        # Every 200 iterations, print the status of loss and accuracy
        if (i + 1) % 200 == 0:
            print(f"***Epoch {e+1} | Iteration {i+1} | Mini-batch loss {loss.item()} | Mini-batch accuracy {accuracy_batch}")

# Let us draw the charts of loss and accuracy for each training iteration
plt.plot(lossValues, label="loss")
plt.plot(accuracy, label="accuracy")
plt.legend()
```
# Check yourself
Here we provide a function to pick a few images from the test set and check if the network classifies them properly
```
def classify():
    for i in range(5):
        num = np.random.randint(0, test_imgs.shape[0])
        network.eval()
        plt.imshow(test_imgs[num])
        plt.show()
        print("Our network classifies this image as: ",
              network(test_imgs[num:num+1].float()).topk(1).indices.flatten().item())
        print("The true value is: ", test_labels[num:num+1].item())
        print("\n\n")

classify()
```
| github_jupyter |
# Importing Libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
import plotly
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
import plotly.express as px
import cufflinks as cf
cf.go_offline()
```
# Loading The Datasets
```
matches = pd.read_csv('matches.csv')
deliveries = pd.read_csv('deliveries.csv')
matches.head()
deliveries.head()
matches.shape
deliveries.shape
matches.columns
matches.season.value_counts().sort_values(ascending = False)
matches.team1.value_counts()
matches.isnull().sum().sort_values(ascending = False)
```
# Removed Inconsistent Teams and Added Short Names
```
matches.replace(to_replace = [ 'Delhi Daredevils'] , value = ['Delhi Capitals' ] , inplace = True)
deliveries.replace(to_replace = ['Delhi Daredevils' ] , value = [ 'Delhi Capitals'], inplace = True)
consistent_teams = ['Sunrisers Hyderabad', 'Mumbai Indians',
'Kolkata Knight Riders', 'Royal Challengers Bangalore',
'Delhi Capitals', 'Kings XI Punjab','Chennai Super Kings', 'Rajasthan Royals']
# Taking data of only consistent teams
matches_2 = matches[ (matches.team1.isin( consistent_teams )) & (matches.team2.isin( consistent_teams ))]
deliveries_2 = deliveries[ (deliveries.batting_team.isin( consistent_teams )) & (deliveries.bowling_team.isin( consistent_teams )) ]
dic = {'Sunrisers Hyderabad' : 'SRH' , 'Kolkata Knight Riders' : 'KKR',
'Royal Challengers Bangalore' : 'RCB' , 'Kings XI Punjab' : 'KXIP',
'Mumbai Indians' : 'MI' , 'Chennai Super Kings' : 'CSK' ,
'Rajasthan Royals' : 'RR' , 'Delhi Capitals' : 'DC'
}
# Replacing names of teams to their short names
matches_2.replace( dic , inplace = True )
deliveries_2.replace( dic , inplace = True )
matches_2.head()
deliveries_2.head()
matches_2.shape
matches_2.drop('umpire3' , axis = 1 , inplace = True)
```
# Exploratory Data Analysis on Matches Dataset
___
# Let's find out the winning %age of each team
___
```
win_prcntage = ( matches_2.winner.value_counts() / (matches_2.team1.value_counts() + matches_2.team2.value_counts()) )* 100
win_prcntage = win_prcntage.to_frame().reset_index().rename( columns = { 'index' : 'Team_Name' , 0 : 'Win %age'})
win_prcntage.sort_values( by = 'Win %age' , ascending = False , inplace = True)
win_prcntage.iplot(kind = 'bar' , x = 'Team_Name' , y = 'Win %age' , title = 'Win %age of each team[2008 - 2019]' , xTitle = 'Teams' , yTitle = 'Win %age')
```
# Teams Featured in Most Number of Season
```
# Team Which Featured in Each Season
matches_2.head()
lis = matches_2.team1.unique()
dic = {}
for values in lis:
    dic[values] = 0
for season_no in matches_2.groupby('season'):
    for team in dic:
        if team in season_no[1].team1.unique():
            dic[team] += 1
print(dic)
team_vs_seasons = pd.DataFrame(dic.items()).rename(columns={0:'Team Name', 1:'Season Count'})
team_vs_seasons.sort_values(by='Season Count', ascending= False, inplace=True)
team_vs_seasons.head()
team_vs_seasons.iplot(kind = 'bar' , x = 'Team Name' , y = 'Season Count' , title = 'Season Count of each team[2008 - 2019]' , xTitle = 'Team Name' , yTitle = 'Season Count')
```
# Player of the Match Vs Season
```
matches_2.head()
matches.player_of_match.value_counts()
m_of_m_count = matches['player_of_match'].value_counts().head(15).to_frame().reset_index().rename(columns = {'index': 'player_name', 'player_of_match': 'count'})
m_of_m_count.iplot(kind = 'bar' , x = 'player_name' , y = 'count' , title = 'Man of the match[2008 - 2019]' , xTitle = 'player_name' , yTitle = 'count')
```
# Team Vs Number of Matches Played in Each City
```
matches_2.head()
list(matches_2.city.unique())
def team_matches_city(city_name):
    for value in matches_2.groupby('city'):
        if value[0] == city_name:
            return value[1].team1.value_counts() + value[1].team2.value_counts()
city_name = 'Mumbai'
matches_in_a_particular_city = team_matches_city(city_name)
matches_in_a_particular_city = matches_in_a_particular_city.to_frame().reset_index().rename(columns = {'index' : 'Team Name' , 0 : 'Count'})
matches_in_a_particular_city.sort_values(by = "Count" , ascending = False , inplace = True)
matches_in_a_particular_city.iplot(kind = 'bar' , x = 'Team Name' , y = 'Count' , title = 'Teams vs no of matches played in ' + city_name + ' [2008 - 2019]' , xTitle = 'Teams' , yTitle = 'Count')
```
____
# Key players for different teams
___
```
dic = {'Sunrisers Hyderabad' : 'SRH' , 'Kolkata Knight Riders' : 'KKR',
'Royal Challengers Bangalore' : 'RCB' , 'Kings XI Punjab' : 'KXIP',
'Mumbai Indians' : 'MI' , 'Chennai Super Kings' : 'CSK' ,
'Rajasthan Royals' : 'RR' , 'Delhi Capitals' : 'DC'
}
matches.replace(dic , inplace = True)
def key_players(team_name):
    for value in matches.groupby('winner'):
        if value[0] == team_name:
            return value[1]['player_of_match'].value_counts().head()
df = key_players('RCB').to_frame().reset_index().rename(columns = {'index' : 'Player' , 'player_of_match' : 'Count'})
df.iplot(kind = 'bar' , x = 'Player' , y = 'Count' , title = 'Player vs no of MOM count' , xTitle = 'Player' , yTitle = 'Count')
```
# Man of the Match Player Vs teams
```
def player_MOM_for_teams(player_name):
    for value in matches.groupby('player_of_match'):
        if value[0] == player_name:
            return value[1]['winner'].value_counts()
player_name = 'V Kohli'
df = player_MOM_for_teams( player_name ).to_frame().reset_index().rename(columns = {'index' : 'Team' , 'winner' : 'Count'})
px.pie( df , values='Count', names='Team', title='Player vs MOM count for different teams' ,color_discrete_sequence=px.colors.sequential.RdBu)
```
# Average Win Margin While Defending and Chasing
```
def avg_win_by_runs_and_wickets_of_a_team_while_defending_and_chasing(team_name, given_df):
    for value in given_df.groupby('winner'):
        if value[0] == team_name:
            total_win_by_runs = sum(list(value[1]['win_by_runs']))
            total_win_by_wickets = sum(list(value[1]['win_by_wickets']))
            if 0 in list(value[1]['win_by_runs'].value_counts().index):
                x = value[1]['win_by_runs'].value_counts()[0]
            else:
                x = 0
            if 0 in list(value[1]['win_by_wickets'].value_counts().index):
                y = value[1]['win_by_wickets'].value_counts()[0]
            else:
                y = 0
            number_of_times_given_team_win_while_defending = len(value[1]) - x
            number_of_times_given_team_win_while_chasing = len(value[1]) - y
            average_runs_by_which_a_given_team_wins_while_defending = total_win_by_runs / number_of_times_given_team_win_while_defending
            average_wickets_by_which_a_given_team_wins_while_chasing = total_win_by_wickets / number_of_times_given_team_win_while_chasing
            print('number_of_times_given_team_win_while_defending :', number_of_times_given_team_win_while_defending)
            print('number_of_times_given_team_win_while_chasing :', number_of_times_given_team_win_while_chasing)
            print()
            print('average_runs_by_which_a_given_team_wins_while_defending : ', average_runs_by_which_a_given_team_wins_while_defending)
            print('average_wickets_by_which_a_given_team_wins_while_chasing : ', average_wickets_by_which_a_given_team_wins_while_chasing)

avg_win_by_runs_and_wickets_of_a_team_while_defending_and_chasing('RCB', matches)
```
# Winning %age of Team by Toss Decision
```
def win_visu_by_toss(team_name):
datas = matches[(matches['toss_winner']==team_name) & (matches['winner']==team_name)]
count = datas['toss_decision'].value_counts()
win_bat = count['bat']/(count['field']+count['bat'])*100
win_field = count['field']/(count['bat']+count['field'])*100
print("field_count = "+ str(count['field']))
print("bat_count = " + str(count['bat']))
    print("Win %age if fielding is chosen = " + str(win_field))
    print("Win %age if batting is chosen = " + str(win_bat))
print()
print()
data = [['Fielding', win_field], ['Batting', win_bat]]
data = pd.DataFrame (data,columns=['Decision','Win_%age'])
return(px.pie( data , values= 'Win_%age' , names='Decision', title='Win %age For '+ team_name + ' for toss decision',color_discrete_sequence=px.colors.sequential.Rainbow))
team_name = str(input("Enter Team Name : "))
plot = win_visu_by_toss(team_name)
plot
```
# Matches Played Vs Win Number
```
matches_played=pd.concat([matches['team1'],matches['team2']], axis=0)
matches_played=matches_played.value_counts().reset_index()
matches_played.columns=['Team','Total Matches']
matches_played['wins']=matches['winner'].value_counts().reset_index()['winner']
matches_played.set_index('Team',inplace=True)
matches_played
win_percentage = round(matches_played['wins']/matches_played['Total Matches'],3)*100
```
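The counting pattern above can be sketched on toy data (team names are hypothetical). Dividing index-aligned Series keeps each win count matched to the right team regardless of row order, which is safer than assigning a `value_counts` column positionally:

```python
import pandas as pd

# Hypothetical mini-fixture list
matches_toy = pd.DataFrame({'team1':  ['A', 'A', 'B'],
                            'team2':  ['B', 'C', 'C'],
                            'winner': ['A', 'C', 'B']})
played = pd.concat([matches_toy['team1'], matches_toy['team2']]).value_counts()
wins = matches_toy['winner'].value_counts()
win_pct = (wins / played * 100).round(1)   # aligned on team name, not row position
```

Every toy team has played 2 matches and won 1, so each win percentage comes out to 50.0.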
# Win %age of Each Team
```
Teams = [ 'MI', 'RCB', 'KKR', 'K11P', 'CSK', 'DD', 'RR', 'SH', 'DC', 'PW', 'GL', 'RPSG', 'DC', 'KTK', 'RPSGS']
data = ([58.3, 55.6, 51.7, 47.7, 50. , 46.6, 45.6, 53.7, 38.7, 28.3, 40. ,
62.5, 62.5, 42.9, 35.7])
pie_plot = go.Pie(labels = Teams, values = data)
iplot([pie_plot])
```
# Win %age comparison b/w Two Teams
```
A , B = input("Enter the team names separated by space : ").split(' ')
def compare_teams(A , B):
new_df = matches_2[ ( (matches_2['team1'] == A) & (matches_2['team2'] == B) ) | ((matches_2['team1'] == B) & (matches_2['team2'] == A)) ]
new_df = new_df.winner.value_counts().to_frame().reset_index().rename( columns = {'index' : 'Team' , 'winner' : 'win %age'})
fig = px.pie( new_df , values='win %age', names='Team', title='Comparison of win %age b/w ' + A +' and ' + B ,color_discrete_sequence=px.colors.sequential.RdBu)
return fig
compare_teams(A , B)
```
___
# Top 5 cricket stadiums
```
top_5_venue = matches.venue.value_counts().head(5)
top_5_venue_data = pd.DataFrame({
'venue': top_5_venue.index,
'count': top_5_venue.values
})
px.pie( top_5_venue_data , values='count', names='venue', title='Most popular venues [2008 - 2019]', color_discrete_sequence=px.colors.sequential.RdBu)
```
# Umpires to feature in max number of matches
```
# Creating list for each umpires
umpire1 = list(matches.umpire1)
umpire2 = list(matches.umpire2)
umpire3 = list(matches.umpire3)
# Concatenating all the lists
umpire1.extend(umpire2)
umpire1.extend(umpire3)
# Created the dataframe for umpires
new_data = pd.DataFrame(umpire1, columns=['umpires'])
umpire_data = new_data.umpires.value_counts().head(10)
umpire_dataset = pd.DataFrame({
'umpires': umpire_data.index,
'count': umpire_data.values
})
px.pie( umpire_dataset , values='count', names='umpires', title='Umpires to feature in max num of matches [2008 - 2019]', color_discrete_sequence=px.colors.sequential.RdBu)
```
# Creating df for season Winners and Runner-ups
```
lis = []
for value in matches.groupby('season'):
if value[1].tail(1).winner.values[0] == value[1].tail(1).team1.values[0]:
runner_up = value[1].tail(1).team2.values[0]
else:
runner_up = value[1].tail(1).team1.values[0]
lis.append([ value[0] , value[1].tail(1).winner.values[0] , runner_up ] )
print(lis)
winners = pd.DataFrame(lis , columns = ['Season' , 'Winner' , 'RunnerUp'])
```
# Season Winners Effective Visualisations
```
season_winners = winners['Winner'].value_counts().to_frame().reset_index().rename(columns = {'index' : 'Winner_Team' , 'Winner' : 'Count'})
px.pie( season_winners , values='Count', names='Winner_Team', title='Season Winners [2008 - 2019]', color_discrete_sequence=px.colors.sequential.RdBu)
```
# Season Runner-Ups Effective Visualisations
```
season_runner_ups = winners['RunnerUp'].value_counts().to_frame().reset_index().rename(columns = {'index' : 'Runner_up' , 'RunnerUp' : 'Count'})
px.pie( season_runner_ups , values='Count', names='Runner_up', title='Season Runner Ups [2008 - 2019]', color_discrete_sequence=px.colors.sequential.RdBu)
```
# Exploratory Data Analysis on Deliveries Dataset
```
for value in deliveries.groupby('batsman'):
if value[0] == 'DA Warner':
print(value[1]['batsman_runs'].sum())
deliveries['batsman_runs'].sum()
batsmen = matches[['id','season']].merge(deliveries, left_on = 'id', right_on = 'match_id', how = 'left')
batsmen.head()
def player_runs_across_season(player_name):
dic = dict()
for i in matches.season.unique():
dic[i] = 0
for ids in list(deliveries.match_id.unique()):
season = int(matches[(matches.id == ids)]['season'])
values = int(deliveries[(deliveries.match_id == ids) & (deliveries.batsman == player_name)].batsman_runs.sum())
dic[season] += values
    # return seasons in chronological order
    return dict(sorted(dic.items()))
player_1 = input('enter player 1 ')
player_2 = input('enter player 2 ')
dic1 = player_runs_across_season(player_1)
dic2 = player_runs_across_season(player_2)
fig = go.Figure()
fig.add_trace(go.Scatter(x=list(dic1.keys()), y=list(dic1.values()),
                    mode='lines+markers',
name= player_1 ))
fig.add_trace(go.Scatter(x=list(dic2.keys()), y=list(dic2.values()),
mode='lines+markers',
name= player_2))
fig.show()
```
# Number of fours and sixes across seasons
```
def boundaries_counter(given_df):
lis = []
for value in given_df.groupby('season'):
lis.append([ value[0] , value[1]['batsman_runs'].value_counts()[4] , value[1]['batsman_runs'].value_counts()[6] ])
boundaries = pd.DataFrame( lis , columns = ['Season' , "4's" , "6's"] )
return boundaries
boundaries = boundaries_counter(batsmen)
fig = go.Figure()
fig.add_trace(go.Scatter(x=boundaries['Season'], y=boundaries["4's"],
                    mode='lines+markers',
name= "4's" ))
fig.add_trace(go.Scatter(x=boundaries['Season'], y=boundaries["6's"],
mode='lines+markers',
name= "6's"))
fig.show()
```
# Top - 15 fielders
```
top_15_fielders = (batsmen.fielder.value_counts().head(15)).to_frame().reset_index().rename(columns = {'index' : 'Player' , 'fielder' : 'Count'})
top_15_fielders
top_15_fielders.iplot(kind = 'bar' , x = 'Player' , y = 'Count' , title = 'Fielder vs No. of dismissals[2008 - 2019]' , xTitle = 'Fielder' , yTitle = 'Count')
def avg_partnership(player_A1, player_A2, player_B1, player_B2):
data11 = deliveries[((deliveries['batsman'] == player_A1) | (deliveries['batsman'] == player_A2)) & ((deliveries['non_striker'] == player_A1) | (deliveries['non_striker'] == player_A2))]
print('Avg Partnership of Pair 1 = '+ str(data11.batsman_runs.sum()/len(data11['match_id'].unique())))
data12 = deliveries[((deliveries['batsman'] == player_B1) | (deliveries['batsman'] == player_B2)) & ((deliveries['non_striker'] == player_B1) | (deliveries['non_striker'] == player_B2))]
    print('Avg Partnership of Pair 2 = '+ str(data12.batsman_runs.sum()/len(data12['match_id'].unique())))
    ls= [['Pair 1', data11.batsman_runs.sum()/len(data11['match_id'].unique())],['Pair 2', data12.batsman_runs.sum()/len(data12['match_id'].unique())]]
dataf = pd.DataFrame(ls, columns = ['Pairs', 'Avg_Runs'])
return(px.pie( dataf , values='Avg_Runs', names='Pairs', title='Avg Runs For different Pairs ',color_discrete_sequence=px.colors.sequential.RdBu))
player_A1 = input('Enter First Batsman of First Pair : ')
player_A2 = input('Enter Second Batsman of First Pair : ')
player_B1 = input('Enter First Batsman of Second Pair : ')
player_B2 = input('Enter Second Batsman of Second Pair : ')
plot = avg_partnership(player_A1, player_A2, player_B1, player_B2)
plot
```
# Conclusion
- **CSK** has the maximum win percentage, whereas **DC** has the minimum win percentage.
- **SRH** has played the fewest seasons from 2008-2019.
- **CH Gayle** is the player with the maximum number of **Man of the Match** awards.
- **MI** has played the maximum number of matches in its home city, Mumbai.
- **AB de Villiers** is the key player for **RCB**.
- **MI**, when choosing to field, has won the most matches.
- **MI** played the maximum number of matches from 2008-2019 and also won the most matches of all teams.
- **Eden Gardens** is the most popular venue in the IPL.
- **S Ravi** is the most featured umpire in the IPL.
- **CSK** has been the runner-up the most times in the IPL.
- **MS Dhoni** has been the top fielder in the IPL, with the maximum number of dismissals.
<h1>VAST Challenge 2017</h1>
<h2><i>Mini Challenge 1</i></h2>
<br />
<h3>1. Introduction</h3>
<p>In this work we present our solution to the first challenge of the 2017 VAST Challenge, where contestants, using visual analytics tools, are expected to find patterns in the vehicle traffic data of the fictitious Boonsong Lekagul Nature Preserve and relate them to the decline of the Rose-crested Blue Pipit bird population in the park.</p>
<p>The mini-challenge encourages participants to use visual analytics to identify repeating patterns of vehicles transiting the park and to classify the most suspicious ones among them with regard to threatening the native species. To facilitate this task, the park provides us with traffic data containing an identification for each vehicle, the vehicle type, and timestamps collected by the many sensors spread across the park's installations. Besides that, a simplified map of the park with the sensor locations is also provided.</p>
<p>Our approach to the problem was to first make an exploratory data analysis of the dataset to raise starting points for the investigation and then dive into these hypotheses, testing them with visual analytics tools.</p>
<p>To deal with the data and the plotting, our team chose to use Python 3 and some really useful packages that would make our work easier. In this document, created using <a href="http://jupyter.org/">Jupyter Notebook</a>, we provide both the source code and the execution output in the hope that the methodology used is clear to our reader. All the visualizations created are also displayed inline.</p>
```
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
```
<p>In this article we present our step-by-step investigations, including the raised hypotheses, the visualization choices made to explore them, the code and the analysis, all along this beautiful notebook. We hope you're going to enjoy it.</p>
<p>All the project code is released under the MIT License and is free for use and redistribution. Attribution is appreciated but not required. More information can be found at the project <a href="http://github.com/dmrib/daytripper">repository</a>.</p>
<h3>2. Tools</h3>
<p>The project code was written in Python 3, due to the great productivity obtained in handling the data and because of the possibility of using the same language for data munging and for creating visualizations. Some non-native packages were also used:</p>
<ul>
<li>numpy</li>
</ul>
<p>NumPy is the fundamental package for scientific computing with Python. It contains among other things a powerful N-dimensional array object,
sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, useful linear algebra, Fourier transform, and random number capabilities.</p>
```
import numpy as np
```
<ul>
<li>pandas</li>
</ul>
<p><i>pandas</i> is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.</p>
```
import pandas as pd
```
<ul>
<li>matplotlib</li>
</ul>
<p>Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shell, the jupyter notebook, web application servers, and graphical user interface toolkits.</p>
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
```
<ul>
<li>seaborn</li>
</ul>
<p>Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.</p>
```
import seaborn as sns
sns.set_style('whitegrid')
```
<ul>
<li>graphviz</li>
</ul>
<p>Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics, software engineering, database and web design, machine learning, and in visual interfaces for other technical domains. </p>
```
import graphviz
```
<h3>3. Loading Data</h3>
<p>We start our work by loading the data using the <i>pandas</i> module.</p>
<p>The timestamps are converted to Python timestamps format to make the time operations easier afterwards.</p>
```
raw_dataset = pd.read_csv('../data/Lekagul Sensor Data.csv', parse_dates=["Timestamp"],
date_parser=lambda x: pd.datetime.strptime(x, "%Y-%m-%d %H:%M:%S"))
raw_dataset.head()
```
<h3>4. Traffic distribution analysis</h3>
<p>Before jumping into conclusions, our team decided that having an overview of the dataset and their most relevant traits would be useful to direct our questioning process to the more evident trends in data.</p>
<p>Initially we're going to visualize the occurence of each car-type in every sensor. At this moment, we are not discriminating the different sensor types. Here we shall have a general perspective of how the car flow distribution by car type looks like. This approach shall give us our first general insight about the dataset.</p>
```
counts = raw_dataset.groupby("car-type").count().sort_values(by='Timestamp', ascending=False)
fig = sns.barplot(data=counts, x='car-id', y=counts.index)
fig.set(xlabel='Traffic Volume', ylabel='Car Type')
sns.plt.title('Traffic Volume Distribution By Car Type')
plt.show()
```
<p>To complement this visualization, we will also visualize how many cars have crossed each sensor, this time not discriminating between different car types.</p>
```
counts = raw_dataset.groupby("gate-name").count().sort_values(by='Timestamp', ascending=False)
fig = sns.barplot(data=counts, x='car-id', y=counts.index)
fig.set(xlabel='Traffic Volume', ylabel='Gate Name')
sns.plt.title('Number of Events Ocurrences By Sensor')
fig.figure.set_size_inches(18,30)
plt.show()
```
<p>At this point we noticed that the sum of the vehicles counted by the ranger-stop 0 and ranger-stop 2 sensors exceeds the total number of events involving ranger vehicles (2P). Therefore, it is confirmed that other visitors are reaching these ranger areas. But this can be easily understood by looking at the park map: these two stops are not surrounded by gates (which delimit non-visitor areas), so visitors are allowed to reach them. If these areas are populated by the Rose-crested Blue Pipit, the park should consider isolating them properly.</p>
```
img=mpimg.imread('../data/Lekagul Roadways labeled v2.jpg')
plt.imshow(img )
```
<p>When confronted with this, what immediately came to our minds is that an easier way of visualizing the park road configuration would be helpful for our analysis. So we manually derived a graph representation of these roads as a csv file (available in this project's 'data' folder), where the first column represents an origin point, the second column a destination point, and the third is a boolean value indicating whether this path is restricted by gates or not.</p>
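A sketch of the assumed roads.csv layout (the rows below are hypothetical; only the three-column shape described above is taken from the text). Note that pandas parses literal True/False values into a boolean column:

```python
import io
import pandas as pd

# Hypothetical rows only -- the real roads.csv is derived by hand from the park map
csv_text = """origin,destination,restricted
entrance1,gate1,True
gate1,ranger-stop1,True
entrance3,camping2,False
"""
roads_demo = pd.read_csv(io.StringIO(csv_text))
```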
<p>We then load the data from the csv file:</p>
```
roads = pd.read_csv('../data/roads.csv')
```
<p>And visualize the resulting graph using the <i>graphviz</i> module:</p>
```
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
graph = graphviz.Graph()
graph.attr(size='13,10')
graph.node_attr.update(color='lightblue2', style='filled', shape='circle')
locations = set()
for _, row in roads.iterrows():
locations.add(row[0])
locations.add(row[1])
for location in locations:
graph.node(location)
for _, row in roads.iterrows():
if row['restricted'] == 'True':
graph.edge(row[0],row[1], color='red', penwidth='10')
else:
graph.edge(row[0],row[1])
graph
```
<p>Although a little cluttered, this visualization enables us to quickly spot whether a road is restricted (can only be accessed through gates) or not, and might prove useful in our investigation process.</p>
<p>In fact, when we observe the graph visualization, it becomes clear that there are some sensors that can only be reached by passing through a gate, and therefore should only be activated by ranger cars. We then direct our attention to the traffic activity in these areas. The sensors we'll investigate correspond, in the visualization, to those connected to the graph only by red edges (restricted roads). They are:</p>
<ul>
<li>Ranger-base</li>
<li>ranger-stop 1</li>
<li>ranger-stop 3</li>
<li>ranger-stop 5</li>
<li>ranger-stop 6</li>
<li>ranger-stop 7</li>
</ul>
```
forbidden = set(['Ranger-base', 'ranger-stop 1', 'ranger-stop 3', 'ranger-stop 5', 'ranger-stop 6', 'ranger-stop 7'])
without_rangers = raw_dataset.reindex(columns=['car-type','gate-name'])
without_rangers = without_rangers[without_rangers['car-type'] != '2P']
trespassers = without_rangers['gate-name'].isin(forbidden).value_counts()
trespassers
```
<p>And, to our sadness, we discover that there are no trespassing vehicles in the restricted areas.</p>
<h3>5. Time Distributions Analysis</h3>
<p>A time distribution of our data entries could be useful for revealing inconsistent records and for giving a general picture of the traffic volume over the data collection period.</p>
We start by extracting the year, month and day of each sensor entry:
```
time_data = raw_dataset
time_data['year'] = pd.DatetimeIndex(raw_dataset['Timestamp']).year
time_data['month'] = pd.DatetimeIndex(raw_dataset['Timestamp']).month
time_data['hour'] = pd.DatetimeIndex(raw_dataset['Timestamp']).hour
time_data.head()
```
<p>Then we can immediately see the years of data collection:</p>
```
fig, ax = plt.subplots()
ax.hist(time_data['year'], bins=2, ec='k', color='#0C77AD')
ax.set_title('Year distribution')
ax.set_xlabel('Year')
ax.set_ylabel('Events Count')
ax.set_xticks(range(2015, 2017, 1))
ax.set_xlim(range(2015, 2017, 1))
ax.get_children()[1].set_color('#54BEB4')
plt.show()
```
<p>And we learn that the majority of the data was collected in 2015.</p>
<p>This led us to a question: is there available info about an entire year so we can see the distribution of the traffic inside the reserve in the period of twelve months?</p>
```
first_year_data = time_data[time_data.year == 2015]
first_year_data = first_year_data.groupby('month').count().sort_index().reindex(columns=['Timestamp'])
first_year_data
second_year_data = time_data[time_data.year == 2016]
second_year_data = second_year_data.groupby('month').count().sort_index().reindex(columns=['Timestamp'])
second_year_data
```
<p>Unfortunately no, but with a little trick we can see relevant data about the traffic distribution along an entire year. Since the only intersection between the two years is May, and moreover both event counts for that month are really close, we can substitute the mean of the two Mays for the month value in a full-year series.</p>
```
may_mean = int(round((first_year_data.loc[5]['Timestamp'] + second_year_data.loc[5]['Timestamp']) / 2))
may_mean
whole_year = pd.concat([first_year_data.drop(5), second_year_data.drop(5)])
whole_year = whole_year.reindex(columns=['Timestamp'])
whole_year.loc[5] = may_mean
whole_year = whole_year.sort_index()
```
<p>And then we can finally plot the whole year sensor events distribution!</p>
```
fig = sns.barplot(data=whole_year, x=whole_year.index, y=whole_year['Timestamp'])
fig = sns.pointplot(x=whole_year.index, y=whole_year['Timestamp'])
fig.axhline(whole_year['Timestamp'].mean(), color='#947EE5', linestyle='dashed', linewidth=2)
fig.set(xlabel='Month', ylabel='Events detected')
sns.plt.title('Traffic Volume Distribution By Month')
plt.show()
```
<p>This denotes a tendency of more visitors coming to the park around the middle of the year, between July and September. This raises another relevant question: <b>could the increase in visitor volume in this period influence the reproductive habits of the birds, since this is the northern hemisphere summer?</b></p>
<p>Then we proceed to visualize the distribution of sensor events along the hours of day. During which hours the traffic in the park peaks?</p>
```
hours_data = time_data.groupby('hour').count().sort_index()
fig, ax = plt.subplots()
ax.hist(time_data['hour'], bins = 24, ec='k', color='#0C77AD')
ax.plot(hours_data.index, hours_data['Timestamp'], color='#58C994')
ax.axhline(hours_data['Timestamp'].mean(), color='orange', linestyle='dashed', linewidth=2)
ax.set_title('Traffic volume by hours')
ax.set_xlabel('Hour')
ax.set_ylabel('Events Count')
ax.set_xticks(range(0, 24))
plt.show()
```
<p>And we see that the distribution of events by hour of the day looks reasonable, peaking in the interval from 6am to 6pm.</p>
<p>But wait! Even if the rate of visitors passing through the sensors rises in an expected pattern, this doesn't mean we can ignore that a large number of events have been happening at unusual hours.</p>
```
strange_hours = hours_data.loc[0:5].append(hours_data.loc[23])
fig = sns.barplot(strange_hours.index, strange_hours['Timestamp'])
fig.set(xlabel='Hour', ylabel='Events count')
sns.plt.title('Events registered at strange hours')
plt.show()
```
<p>If these were not made by rangers, it could mean illegal or habitat damaging activities happening during the night. Let's investigate...</p>
```
strange_time = time_data.query('hour <= 5').append(time_data.query('hour > 22'))
without_rangers = strange_time.reindex(columns=['car-type','hour','car-id', 'Timestamp', 'gate-name'])[raw_dataset['car-type'] != '2P']
without_rangers.head()
strange_events = without_rangers.groupby('car-type').count()
fig = sns.barplot(data=strange_events, x=strange_events.index, y=strange_events['Timestamp'])
fig.axhline(strange_events['Timestamp'].mean(), color='#947EE5', linestyle='dashed', linewidth=2)
fig.set(xlabel='Car Type', ylabel='Events detected')
sns.plt.title('Strange Traffic Volume Distribution By Car Type')
plt.show()
```
<p>And we find a four-axle truck wandering through the park during the night. I bet it was up to no good.</p>
<p>Besides that, what can we learn about the time spent inside the park? Do visitors come for a quick visit or do they spend a long time inside the park grounds?</p>
```
without = raw_dataset[raw_dataset['car-type'] != '2P']
time_delta = without.groupby('car-id')['Timestamp'].max() - without.groupby('car-id')['Timestamp'].min()
fig, ax = plt.subplots()
ax.axes.get_xaxis().set_visible(False)
x = time_delta.values/ np.timedelta64(1,'D')
ax.plot(x)
```
<p>And visually we detect a HUGE outlier, what can we learn about this guy?</p>
```
outlier = raw_dataset[raw_dataset['car-id'] == '20155705025759-63'].sort_values(by='Timestamp')
outlier['Timestamp'].max() - outlier['Timestamp'].min()
img=mpimg.imread('../data/hippie.jpg')
plt.imshow(img)
```
# Foundations of Computational Economics #38
by Fedor Iskhakov, ANU
<img src="_static/img/dag3logo.png" style="width:256px;">
## Dynamic programming with continuous choice
<img src="_static/img/lecture.png" style="width:64px;">
<img src="_static/img/youtube.png" style="width:65px;">
[https://youtu.be/pAEm9cZd92Y](https://youtu.be/pAEm9cZd92Y)
Description: Optimization in Python. Consumption-savings model with continuous choice.
Goal: take continuous choice seriously and deal with it without discretization
- no discretization of choice variables
- need to employ numerical optimizer to find optimal continuous choice in Bellman equation
- optimization problem has to be solved for all points in the state space
Implement the continuous version of Bellman operator for the stochastic consumption-savings model
### Consumption-savings problem (Deaton model)
$$
V(M)=\max_{0 \le c \le M}\big\{u(c)+\beta \mathbb{E}_{y} V\big(\underset{=M'}{\underbrace{R(M-c)+\tilde{y}}}\big)\big\}
$$
- discrete time, infinite horizon
- one continuous choice of consumption $ 0 \le c \le M $
- state space: consumable resources in the beginning of the period $ M $, discretized
- income $ \tilde{y} $, follows log-normal distribution with $ \mu = 0 $ and $ \sigma $
$$
V(M)=\max_{0 \le c \le M}\big\{u(c)+\beta \mathbb{E}_{y} V\big(\underset{=M'}{\underbrace{R(M-c)+\tilde{y}}}\big)\big\}
$$
- preferences are given by time separable utility $ u(c) = \log(c) $
- discount factor $ \beta $
- gross return on savings $ R $, fixed
### Continuous (non-discretized) Bellman equation
Have to compute
$$
\max_{0 \le c \le M}\big\{u(c)+\beta \mathbb{E}_{y} V\big(R(M-c)+\tilde{y}\big)\big\} = \max_{0 \le c \le M} G(M,c)
$$
using numerical optimization algorithm
- constrained optimization (bounds on $ c $)
- have to interpolate value function $ V(\cdot) $ for every evaluation of the objective $ G(M,c) $
- have to solve this optimization problem for **all possible values** $ M $
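For a single state point, the inner maximization can be sketched with a bounded scalar optimizer over an interpolated value function. Everything below is an assumption for illustration: $ V_{i-1} $ is a placeholder ($ \log M' $ on a hypothetical grid), income is suppressed ($ \tilde{y}=0 $), and $ R=1 $, so the true optimum of $ \log c + \beta \log(M-c) $ at $ M=5 $ is $ c^*=M/(1+\beta) $:

```python
import numpy as np
from scipy import interpolate
from scipy.optimize import minimize_scalar

beta, R = 0.95, 1.0                                # assumed parameters
grid = np.linspace(1e-3, 10, 100)                  # hypothetical grid over M'
v_prev = np.log(grid)                              # placeholder for V_{i-1}
V_hat = interpolate.interp1d(grid, v_prev, fill_value='extrapolate')

def optimal_consumption(M):
    '''Maximize u(c) + beta*V(R*(M-c)) over 0 < c < M (deterministic sketch).'''
    objective = lambda c: -(np.log(c) + beta * V_hat(R * (M - c)))
    res = minimize_scalar(objective, bounds=(1e-8, M - 1e-8), method='bounded')
    return res.x

c_star = optimal_consumption(5.0)
```

With log utility and the log placeholder for $ V_{i-1} $, the analytic optimum is $ c^* = 5/1.95 \approx 2.564 $, which the bounded optimizer recovers up to interpolation error.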
#### Numerical optimization in Python
Optimization can be approached
1. **directly**, or through the lenses of analytic
1. **first order conditions**, assuming the objective function is differentiable
- FOC approach is equation solving, see videos 13, 22, 23
- here we focus on optimization itself

The two approaches are equivalent in terms of computational complexity, and even numerically
### Newton method as optimizer
$$
\max_{x \in \mathbb{R}} f(x) = -x^4 + 2.5x^2 + x + 2
$$
Solve the first order condition:
$$
\begin{eqnarray}
f'(x)=-4x^3 + 5x +1 &=& 0 \\
-4x(x^2-1) + x+1 &=& 0 \\
(x+1)(-4x^2+4x+1) &=& 0 \\
\big(x+1\big)\big(x-\frac{1}{2}-\frac{1}{\sqrt{2}}\big)\big(x-\frac{1}{2}+\frac{1}{\sqrt{2}}\big) &=& 0
\end{eqnarray}
$$
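As a numerical sanity check (not part of the original derivation), both routes — maximizing $ F $ directly and solving the FOC $ f(x)=0 $ — can be run with SciPy on the bracket $ [0.5, 1.5] $ around the right-hand root:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

F = lambda x: -x**4 + 2.5*x**2 + x + 2    # objective
f = lambda x: -4*x**3 + 5*x + 1           # its derivative (FOC)

# route 1: direct maximization (minimize -F) on a bracket around the local max
x_opt = minimize_scalar(lambda x: -F(x), bounds=(0.5, 1.5), method='bounded').x
# route 2: root of the first order condition on the same bracket
x_foc = brentq(f, 0.5, 1.5)
```

Both land on $ \frac{1}{2} + \frac{1}{\sqrt{2}} \approx 1.2071 $, matching the factorization above.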
### Taylor series expansion of the equation
Let $ x' $ be an approximate solution of the equation $ g(x)=f'(x)=0 $
$$
g(x') = g(x) + g'(x)(x'-x) + \dots = 0
$$
$$
x' = x - g(x)/g'(x)
$$
Newton step towards $ x' $ from an approximate solution $ x_i $ at iteration $ i $ is then
$$
x_{i+1} = x_i - g(x_i)/g'(x_i) = x_i - f'(x_i)/f''(x_i)
$$
### Or use repeated quadratic approximations
Given approximate solution $ x_i $ at iteration $ i $, approximate function $ f(x) $ using first three terms of Taylor series
$$
\hat{f}(x) = f(x_i) + f'(x_i) (x-x_i) + \tfrac{1}{2} f''(x_i) (x-x_i)^2
$$
The maximum/minimum of this quadratic approximation is given by
$$
{\hat{f}}'(x) = f'(x_i) + f''(x_i) (x-x_i) = 0
$$
Leading to the Newton step
$$
x = x_{i+1} = x_i - f'(x_i)/f''(x_i)
$$
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def newton(fun,grad,x0,tol=1e-6,maxiter=100,callback=None):
'''Newton method for solving equation f(x)=0
with given tolerance and number of iterations.
Callback function is invoked at each iteration if given.
'''
for i in range(maxiter):
x1 = x0 - fun(x0)/grad(x0)
err = abs(x1-x0)
if callback != None: callback(err=err,x0=x0,x1=x1,iter=i)
if err<tol: break
x0 = x1
else:
raise RuntimeError('Failed to converge in %d iterations'%maxiter)
return (x0+x1)/2
F = lambda x: -x**4+2.5*x**2+x+2 # main function
f = lambda x: -4*x**3+5*x+1 # FOC
g = lambda x: -12*x**2+5 # derivative of FOC
# make a nice series of plots
a,b = -1.5,1.5 # upper and lower limits
xd = np.linspace(a,b,1000) # x grid
ylim1 = [min(np.amin(f(xd))-1,0),max(np.amax(f(xd))+1,0)]
ylim2 = [min(np.amin(F(xd))-1,0),max(np.amax(F(xd))+1,0)]
print(ylim1,ylim2)
def plot_step(x0,x1,iter,**kwargs):
plot_step.iter = iter+1
if iter<10:
fig1, (ax1,ax2) = plt.subplots(1,2,figsize=(16,6))
ax1.set_title('FOC equation solver')
ax1.plot(xd,f(xd),c='red') # plot the function
ax1.plot([a,b],[0,0],c='black') # plot zero line
ax1.plot([x0,x0],ylim1,c='grey') # plot x0
l = lambda z: g(x0)*(z - x1)
ax1.plot(xd,l(xd),c='green') # plot the function
ax1.set_ylim(bottom=ylim1[0],top=ylim1[1])
ax2.set_title('Optimizer')
ax2.plot(xd,F(xd),c='red') # plot the function
ax2.plot([x0,x0],ylim2,c='grey') # plot x0
l = lambda z: F(x0)+f(x0)*(z-x0)+(g(x0)*(z-x0)**2)/2
ax2.plot(xd,l(xd),c='green') # plot the function
ax2.plot([x1,x1],ylim2,c='grey') # plot x1
ax2.set_ylim(bottom=ylim2[0],top=ylim2[1])
ax1.set_ylabel('Iteration %d'%(iter+1))
plt.show()
newton(f,g,x0=-1.3,callback=plot_step) # 0.9, 0.42
print('Converged in %d iterations'%plot_step.iter)
```
### Multidimensional case
$$
\max_{x_1,\dots,x_n} F(x_1,\dots,x_n)
$$
- the Newton optimization method would work with multivariate function $ F(x_1,\dots,x_n) $, *gradient* vector $ \nabla F(x_1,\dots,x_n) $
composed of partial derivatives, and a *Hessian* matrix $ \nabla^2 F(x_1,\dots,x_n) $ composed of second order partial derivatives of $ F(x_1,\dots,x_n) $
- the FOC solver Newton method would work with vector-valued multivariate function $ G(x_1,\dots,x_n)=\nabla F(x_1,\dots,x_n) $,
and a *Jacobian* matrix of first order partial derivatives of all of the outputs of the function $ G(x_1,\dots,x_n) $ with respect to all arguments
### Newton step in multidimensional case
$$
x_{i+1} = x_i - \frac{F'(x_i)}{F''(x_i)} = x_i - \big( \nabla^2 F(x_i) \big)^{-1} \nabla F(x_i)
$$
- requires *inverting* the Hessian/Jacobian matrix
- when analytic Hessian/Jacobian is not available, numerical differentiation can be used (yet slow and imprecise)
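A minimal sketch of the multidimensional Newton step, using `numpy.linalg.solve` rather than explicitly inverting the Hessian (the gradient and Hessian below belong to a hypothetical quadratic test function):

```python
import numpy as np

def newton_opt(grad, hess, x0, tol=1e-8, maxiter=50):
    '''Newton iterations x_{i+1} = x_i - H(x_i)^{-1} g(x_i).'''
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        step = np.linalg.solve(hess(x), grad(x))  # solve H s = g instead of inverting H
        x = x - step
        if np.linalg.norm(step) < tol:
            return x
    raise RuntimeError('Failed to converge in %d iterations' % maxiter)

# test problem: minimize f(x, y) = (x-1)^2 + 10*(y+2)^2
grad = lambda x: np.array([2*(x[0] - 1), 20*(x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
xstar = newton_opt(grad, hess, [0.0, 0.0])
```

On a quadratic the method converges in a single step, since the quadratic approximation is exact; solving the linear system is both cheaper and numerically safer than forming the inverse.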
### Quasi-Newton methods
**SciPy.optimize**
Main idea: replace Jacobian/Hessian with approximation. For example,
when costly to compute, and/or unavailable in analytic form.
- DFP (Davidon–Fletcher–Powell)
- BFGS (Broyden–Fletcher–Goldfarb–Shanno)
- SR1 (Symmetric rank-one)
- BHHH (Berndt–Hall–Hall–Hausman) $ \leftarrow $ for statistical application and estimation!
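A quick BFGS illustration via `scipy.optimize.minimize` on the Rosenbrock function, a standard quasi-Newton test problem; when no gradient is supplied, SciPy approximates it by finite differences:

```python
from scipy.optimize import minimize

# Rosenbrock function: minimum at (1, 1)
rosen = lambda x: (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
res = minimize(rosen, x0=[-1.2, 1.0], method='BFGS')  # gradient approximated numerically
```

`res.nit` shows how many iterations the curvature approximation needed to reach the minimizer at $ (1, 1) $.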
#### Broader view on the optimization methods
1. Line search methods
- Newton and Quasi-Newton
- Gradient descent
1. Trust region methods
- Approximation of function in question in a ball around the current point
1. Derivative free algorithms
- Nelder-Mead (simplex)
- Pattern search
1. Global solution algorithms
- Simulation based
- Genetic algorithms
1. **Poly-algorithms** Combinations of other algorithms
### Global convergence of Newton method
Newton step: $ x_{i+1} = x_i + s_i $ where $ s_i $ is the *direction* of the step
$$
s_i = - \frac{f'(x_i)}{f''(x_i)} = - \big( \nabla^2 f(x_i) \big)^{-1} \nabla f(x_i)
$$
Newton method becomes globally convergent with a subproblem of choosing step size $ \tau $, such that
$$
x_{i+1} = x_i + \tau s_i
$$
**Globally convergent to local optimum**: converges from any starting value, but is not guaranteed to find global optimum
### Gradient descent
$$
x_{i+1} = x_i - \tau \nabla f(x_i)
$$
- $ \nabla f(x_i) $ is direction of the fastest change in the function
value
- As a greedy algorithm, can be much slower than Newton.
- Finding optimal step size $ \tau $ is a separate one-dimensional optimization sub-problem
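A fixed-step gradient descent sketch (the step size $ \tau $ is picked by hand here; in practice it would come from the line search sub-problem noted above, and the test function is a made-up quadratic):

```python
import numpy as np

def gradient_descent(grad, x0, tau=0.1, tol=1e-8, maxiter=10000):
    '''Fixed-step gradient descent: x_{i+1} = x_i - tau * grad(x_i).'''
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient is (numerically) zero
            break
        x = x - tau * g
    return x

# test problem: f(x, y) = x^2 + 4*y^2, gradient (2x, 8y)
xstar = gradient_descent(lambda x: np.array([2*x[0], 8*x[1]]), [3.0, -2.0])
```

Note the anisotropy: the y-direction (curvature 8) contracts much faster than the x-direction (curvature 2), which is exactly why gradient descent can be much slower than Newton on badly scaled problems.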
#### Derivative-free methods
**Methods of last resort!**
- Grid search (`brute` in SciPy)
- Nelder-Mead (“simplex”)
- Pattern search (generalization of grid search)
- Model specific (POUNDerS for min sum of squares)
### Nelder-Mead
1. Initialize a simplex
1. Update simplex based on function values
- Increase size of the simplex
- Reduce size of the simplex
- Reflect (flip) the simplex
1. Iterate until convergence
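The steps above can be run through SciPy on a hypothetical non-smooth objective (the absolute-value term rules out derivative-based methods):

```python
from scipy.optimize import minimize

# non-differentiable at y = -1, so a derivative-free method is a natural fit
f = lambda x: (x[0] - 2)**2 + abs(x[1] + 1)
res = minimize(f, x0=[0.0, 0.0], method='Nelder-Mead')
```

The simplex shrinks onto the minimum at $ (2, -1) $ without ever evaluating a derivative.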
### Nelder-Mead
<img src="_static/img/nedlermead.png" style="">
### Trade-off with derivative free methods
Only local convergence. Anybody talking about global convergence with
derivative-free methods

- either assumes something about the problem (for example, concavity),
- or is prepared to wait forever
“An algorithm converges to the global minimum for any continuous
$ f $ if and only if the sequence of points visited by the algorithm
is dense in $ \Omega $.” Torn & Zilinskas book “Global Optimization”
### Global and simulation-based methods
Coincide with derivative-free methods $ \Rightarrow $ see above!
- Simulated annealing (`basinhopping, dual_annealing` in SciPy.optimize)
- Particle swarms
- Evolutionary algorithms
Better idea: Multi-start + poly-algorithms
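A minimal multi-start sketch: run a local optimizer from many starting points and keep the best local optimum found (the test function and grid of starts are illustrative; the global minimum is near $ x \approx -0.512 $):

```python
import numpy as np
from scipy.optimize import minimize

# multi-modal test function: many local minima, global minimum near x = -0.512
f = lambda x: x[0]**2 + 10*np.sin(3*x[0])

starts = np.linspace(-10, 10, 21)                # deterministic grid of starting points
candidates = [minimize(f, x0, method='BFGS') for x0 in starts]
best = min(candidates, key=lambda r: r.fun)      # keep the best local optimum
print(best.x, best.fun)
```

Each individual BFGS run only finds the local minimum of its basin; the multi-start wrapper is what recovers the global one.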
### Constrained optimization
Optimization in presence of constraints on the variables of the problem.
**SciPy.optimize**
- Constrained optimization by linear approximation (COBYLA)
- Sequential Least SQuares Programming (SLSQP)
- Trust region with constraints
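A short SLSQP sketch with one inequality constraint; the problem is chosen for its known analytic solution $ x = y = 1/2 $:

```python
from scipy.optimize import minimize

# minimize x^2 + y^2 subject to x + y >= 1; analytic solution is (0.5, 0.5)
obj = lambda z: z[0]**2 + z[1]**2
cons = ({'type': 'ineq', 'fun': lambda z: z[0] + z[1] - 1},)
res = minimize(obj, x0=[2.0, 0.0], method='SLSQP', constraints=cons)
print(res.x)
```

SciPy's convention for `'ineq'` constraints is `fun(z) >= 0`, so the constraint function returns `x + y - 1`.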
#### Solving for optimal consumption level in cake eating problem
<img src="_static/img/cake.png" style="width:128px;">
- Simple version of consumption-savings problem
- No returns on savings $ R=1 $
- No income $ y=0 $
- What is not eaten in period $ t $ is left for the future $ M_{t+1}=M_t-c_t $
### Bellman equation
$$
V(M_{t})=\max_{0 \le c_{t} \le M_t}\big\{u(c_{t})+\beta V(\underset{=M_{t}-c_{t}}{\underbrace{M_{t+1}}})\big\}
$$
Attack the optimization problem directly and run the optimizer to solve
$$
\max_{0 \le c \le M} \big\{u(c)+\beta V_{i-1}(M-c) \big \}
$$
### Thoughts on appropriate method
- For Newton we would need first and second derivatives of $ V_{i-1} $, which is
itself only approximated on a grid, so this is a no-go
- The problem is bounded, so constrained optimization method is needed
- **Bisections** should be considered
- Other derivative free methods?
- Quasi-Newton method with bounds?
### Bounded optimization in Python
*Bounded optimization* is a kind of *constrained optimization* with simple
bounds on the variables
(like Robust Newton algorithm in video 25)
Will use **scipy.optimize.minimize_scalar(method='bounded')**, which uses the
Brent method to find a local minimum.
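As a warm-up for the Bellman maximand below, here is a two-period version of the cake eating subproblem solved with the bounded method; the analytic solution $ c^* = M/(1+\beta) $ lets us check the numerical answer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# max over c of log(c) + beta*log(M - c); analytic solution c* = M/(1 + beta)
beta, M = 0.9, 10.0
res = minimize_scalar(lambda c: -(np.log(c) + beta*np.log(M - c)),
                      method='bounded', bounds=(1e-10, M - 1e-10))
print(res.x, M / (1 + beta))
```

Note the sign flip: `minimize_scalar` minimizes, so we pass the negative of the maximand, exactly as in the Bellman code below.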
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
from scipy.optimize import minimize_scalar
%matplotlib inline
class cake_continuous():
'''Implementation of the cake eating problem with continuous choices.'''
def __init__(self,beta=.9, Wbar=10, ngrid=50, maxiter_bellman=100,tol_bellman=1e-8):
self.beta = beta # Discount factor
self.Wbar = Wbar # Upper bound on cake size
self.ngrid = ngrid # Number of grid points for the size of cake
self.epsilon = np.finfo(float).eps # smallest positive float number
self.grid_state = np.linspace(self.epsilon,Wbar,ngrid) # grid for state space
self.maxiter_bellman = maxiter_bellman # maximum iterations in Bellman solver
self.tol_bellman = tol_bellman # tolerance in Bellman solver
def bellman(self,V0):
#Bellman operator, V0 is one-dim vector of values on grid
def maximand(c,M,interf):
'''Maximand of the Bellman equation'''
Vnext = interf(M-c) # next period value at the size of cake in the next period
V1 = np.log(c) + self.beta*Vnext
return -V1 # negative because of minimization
def findC(M,maximand,interf):
'''Solves for optimal consumption for given cake size M and value function VF'''
opt = {'maxiter':self.maxiter_bellman, 'xatol':self.tol_bellman}
res = minimize_scalar(maximand,args=(M,interf),method='Bounded',bounds=[self.epsilon,M],options=opt)
if res.success:
return res.x # if converged successfully
else:
return M/2 # return some visibly wrong value
# interpolation function for the current approximation of the value function
interfunc = interpolate.interp1d(self.grid_state,V0,kind='slinear',fill_value="extrapolate")
# allocate space for the policy function
c1=np.empty(self.ngrid,dtype='float')
c1[0] = self.grid_state[0]/2 # skip the zero/eps point
# loop over state space
for i in range(1,self.ngrid):
# find optimal consumption level for each point in the state space
c1[i] = findC(self.grid_state[i],maximand,interfunc)
# compute the value function corresponding to the computed policy
V1 = - maximand(c1,self.grid_state,interfunc) # don't forget the negation!
return V1, c1
def solve(self, maxiter=1000, tol=1e-4, callback=None):
'''Solves the model using successive approximations'''
V0=np.log(self.grid_state) # on first iteration assume consuming everything
for iter in range(maxiter):
V1,c1=self.bellman(V0)
if callback: callback(iter,self.grid_state,V1,c1) # callback for making plots
if np.all(abs(V1-V0) < tol):
break
V0=V1
else: # executed only when the loop ran through all maxiter iterations without break
print('No convergence: maximum number of iterations achieved!')
return V1,c1
def solve_plot(self, maxiter=1000, tol=1e-4):
'''Illustrate solution'''
fig1, (ax1,ax2) = plt.subplots(1,2,figsize=(14,8))
ax1.grid(b=True, which='both', color='0.65', linestyle='-')
ax2.grid(b=True, which='both', color='0.65', linestyle='-')
ax1.set_title('Value function convergence with VFI')
ax2.set_title('Policy function convergence with VFI')
ax1.set_xlabel('Cake size, W')
ax2.set_xlabel('Cake size, W')
ax1.set_ylabel('Value function')
ax2.set_ylabel('Policy function')
print('Iterations:',end=' ')
def callback(iter,grid,v,c):
print(iter,end=' ') # print iteration number
ax1.plot(grid[1:],v[1:],color='k',alpha=0.25)
ax2.plot(grid,c,color='k',alpha=0.25)
V,c = self.solve(maxiter=maxiter,tol=tol,callback=callback)
# add solutions
ax1.plot(self.grid_state[1:],V[1:],color='r',linewidth=2.5)
ax2.plot(self.grid_state,c,color='r',linewidth=2.5)
plt.show()
return V,c
m3 = cake_continuous (beta=0.92,Wbar=10,ngrid=10,tol_bellman=1e-8)
V3,c3 = m3.solve_plot()
m3 = cake_continuous (beta=0.92,Wbar=10,ngrid=100,tol_bellman=1e-4)
V3,c3 = m3.solve_plot()
```
### Conclusion
Dealing with continuous choice directly using numerical optimization:
- is **slow**; consider using a lower-level language or just-in-time compilation in Python
- is more precise, but not ideal: it requires additional technical parameters (tolerance and maxiter for the within-Bellman optimization)
(We will come back to the full-blown stochastic consumption-savings model in the next practical video.)
#### Further learning resources
- Overview of SciPy optimize
[https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)
- Docs [https://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize)
- Visualization of Nelder-Mead [https://www.youtube.com/watch?v=j2gcuRVbwR0](https://www.youtube.com/watch?v=j2gcuRVbwR0)
- Brent’s method explained [https://www.youtube.com/watch?v=-bLSRiokgFk](https://www.youtube.com/watch?v=-bLSRiokgFk)
- Many visualizations of Newton and other methods [https://www.youtube.com/user/oscarsveliz/videos](https://www.youtube.com/user/oscarsveliz/videos)
This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
```
import tensorflow as tf
from tensorflow.keras.datasets.fashion_mnist import load_data
#fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = load_data()
```
Loading the dataset returns four NumPy arrays:
* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.
* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:
```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
```
print(train_images.shape)
len(train_labels)
train_labels
test_images.shape
```
## Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
```
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
```
Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the *training set* and the *testing set* be preprocessed in the same way:
```
train_images = train_images / 255.0
test_images = test_images / 255.0
```
To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the *training set* and display the class name below each image.
```
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
```
## Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
### Set up the layers
The basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training.
```
from tensorflow import keras
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
```
The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer returns a logits array with length of 10. Each node contains a score that indicates the current image belongs to one of the 10 classes.
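A quick sanity check of the sizes involved (standard dense-layer arithmetic, not a Keras call): each `Dense` layer has `inputs * units` weights plus `units` biases.

```python
# parameter counts implied by the architecture above
flatten_out = 28 * 28                    # 784 values out of Flatten, no trainable parameters
dense1 = flatten_out * 128 + 128         # weights + biases of the 128-node layer
dense2 = 128 * 10 + 10                   # weights + biases of the 10-node logits layer
print(dense1, dense2, dense1 + dense2)   # 100480 1290 101770
```

These totals should match what `model.summary()` reports for the model above.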
### Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:
* *Loss function* —This measures how accurate the model is during training. You want to minimize this function to "steer" the model in the right direction.
* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.
* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
```
import tensorflow as tf
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Train the model
Training the neural network model requires the following steps:
1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.
2. The model learns to associate images and labels.
3. You ask the model to make predictions about a test set—in this example, the `test_images` array.
4. Verify that the predictions match the labels from the `test_labels` array.
### Feed the model
To start training, call the `model.fit` method—so called because it "fits" the model to the training data:
```
model.fit(train_images, train_labels, epochs=10)
```
### Evaluate accuracy
Next, compare how the model performs on the test dataset:
```
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
```
It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents *overfitting*. Overfitting happens when a machine learning model performs worse on new, previously unseen inputs than it does on the training data. An overfitted model "memorizes" the noise and details in the training dataset to a point where it negatively impacts the performance of the model on the new data. For more information, see the following:
* [Demonstrate overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#demonstrate_overfitting)
* [Strategies to prevent overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#strategies_to_prevent_overfitting)
## Add weight regularization
You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
* L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).
* L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse, since the penalty goes to zero for small weights. This is one reason why L2 is more common.
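To make the two penalties concrete, here is the cost each adds for a small hypothetical weight vector (plain NumPy arithmetic; the coefficient mirrors the `0.001` value commonly passed to `regularizers.l2`):

```python
import numpy as np

w = np.array([0.5, -0.3, 0.0, 1.2])        # hypothetical weight vector
coef = 0.001

l1_penalty = coef * np.sum(np.abs(w))      # proportional to the sum of |w|
l2_penalty = coef * np.sum(w**2)           # proportional to the sum of w^2
print(l1_penalty, l2_penalty)
```

Note how the zero weight contributes nothing to either penalty, and the large weight (1.2) dominates the L2 term: squaring punishes large weights much more than small ones.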
In tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
### Applying L2 regularization
```
from tensorflow.keras import regularizers
model_l2 = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
keras.layers.Dense(10)
])
model_l2.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_l2.fit(train_images, train_labels, epochs=10)
test_loss_l2, test_acc_l2 = model_l2.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc_l2)
```
As can be seen above, L2 regularization reduces overfitting to some extent, but at the cost of some performance.
### Add dropouts
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.
The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.
Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].
The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.
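The mechanics can be sketched in NumPy. Note that tf.keras actually implements *inverted* dropout: the surviving activations are scaled up by $ 1/(1-\text{rate}) $ at training time, so the layer is simply the identity at test time; the two formulations are equivalent in expectation.

```python
import numpy as np

rate = 0.3                                   # dropout rate: fraction of units zeroed out
x = np.array([0.2, 0.5, 1.3, 0.8, 1.1])      # example layer output from the text

rng = np.random.default_rng(0)
mask = rng.random(x.shape) >= rate           # keep each unit with probability 1 - rate
train_out = np.where(mask, x, 0.0) / (1 - rate)  # inverted dropout: rescale survivors
test_out = x                                 # no dropout applied at test time

print(train_out)
```

Each surviving entry is scaled by 1/0.7, so the expected value of `train_out` matches `test_out` despite the random zeroing.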
In tf.keras you can introduce dropout in a network via the Dropout layer, which is applied to the output of the layer right before it.
Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
```
from tensorflow.keras import layers
model_dropout = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.Dropout(0.3),
keras.layers.Dense(10)
])
model_dropout.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_dropout.fit(train_images, train_labels, epochs=10)
test_loss_dropout, test_acc_dropout = model_dropout.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc_dropout)
```
By applying dropout to the single layer of 128 nodes (zeroing the outputs of 30% of the nodes during training), we were able to reduce overfitting to a greater extent than with L2 regularization alone.
### Combined L2 + dropout
```
from tensorflow.keras import regularizers
model_l2_dropout = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.Dropout(0.5),
keras.layers.Dense(10)
])
model_l2_dropout.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_l2_dropout.fit(train_images, train_labels, epochs=10)
test_loss_l2_dropout, test_acc_l2_dropout = model_l2_dropout.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc_l2_dropout)
```
### Make predictions
With the model trained, you can use it to make predictions about some images.
The model outputs linear values, or [logits](https://developers.google.com/machine-learning/glossary#logits). Attach a softmax layer to convert the logits to probabilities, which are easier to interpret.
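For intuition, the conversion that a `Softmax` layer performs can be written in a few lines of NumPy:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))    # subtract the max for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))   # hypothetical logits for a 3-class problem
print(p, p.sum())                        # probabilities sum to 1
```

The largest logit maps to the largest probability, which is why `np.argmax` over either the logits or the probabilities gives the same predicted class.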
```
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
```
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
```
predictions[0]
test_labels[0]
```
Graph this to look at the full set of 10 class predictions.
```
import numpy as np
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
### Verify predictions
With the model trained, you can use it to make predictions about some images.
Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
```
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
```
Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
```
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
print(class_names)
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
```
## Use the trained model
Finally, use the trained model to make a prediction about a single image.
```
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
### I hope with this you can start your journey into the world of Deep Learning
# SageMaker Processing Script: HuggingFace
This notebook shows a very basic example of using SageMaker Processing to create train, test and validation datasets. SageMaker Processing is used to create these datasets, which then are written back to S3.
In a nutshell, we will create a `HuggingFaceProcessor` object, passing the HuggingFace Transformer version we want to use, as well as our managed infrastructure requirements.
For our use case, we will download a well-known dataset, publicly available online, called the [Amazon Customer Reviews dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html). This dataset is composed of 130+ million customer reviews. The data is available in TSV files in the `amazon-reviews-pds` S3 bucket in AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters). Samples of the data are available in English and French, and we will use both in our demo.
```
!mkdir -p .data .output
!aws s3 cp s3://amazon-reviews-pds/tsv/sample_us.tsv .data/
!aws s3 cp s3://amazon-reviews-pds/tsv/sample_fr.tsv .data/
from sagemaker import Session
session = Session()
bucket = session.default_bucket()
key_prefix = 'frameworkprocessors/huggingface-example'
source_path = session.upload_data('.data', bucket=bucket, key_prefix=f'{key_prefix}/data')
source_path
```
## Create the script you'd like to use with Processing with your logic
This script is executed by Amazon SageMaker.
In the `main`, it does the core of the operations: it reads and parses the arguments passed as parameters, unpacks the model file, loads the model, and then preprocesses, predicts on, and postprocesses the data. Remember to write the data locally in the final step so that SageMaker can copy it to S3.
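As a rough, illustrative skeleton only (this is *not* the actual `huggingface-processing.py`; the argument names and default split fraction are assumptions), such a script typically parses its parameters and reads/writes under the `/opt/ml/processing/` mount points:

```python
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser()
    # SageMaker Processing mounts inputs and outputs under /opt/ml/processing/
    p.add_argument('--input-dir', default='/opt/ml/processing/input/data')
    p.add_argument('--output-dir', default='/opt/ml/processing/output')
    p.add_argument('--train-frac', type=float, default=0.8)
    return p.parse_args(argv)

args = parse_args([])   # empty argv so the sketch also runs outside SageMaker
print(args.input_dir, args.train_frac)
```

The mount points match the `ProcessingInput`/`ProcessingOutput` destinations configured in the `run()` call further below.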
```
!pygmentize huggingface-processing.py
```
## Create the Sagemaker Processor
Once the data has been uploaded to S3, we can now create the `HuggingFaceProcessor` object. We specify the version of the framework that we want to use, the python version, the role with the correct permissions to read the dataset from S3, and the instances we're planning on using for our processing job.
```
from sagemaker.huggingface import HuggingFaceProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker import get_execution_role
hfp = HuggingFaceProcessor(
role=get_execution_role(),
instance_count=2,
instance_type='ml.g4dn.xlarge',
transformers_version='4.4.2',
pytorch_version='1.6.0',
base_job_name='frameworkprocessor-hf'
)
```
All that's left to do is to `run()` the Processing job: we will specify our python script that contains the logic of the transformation in the `code` argument and its dependencies in the `source_dir` folder, the `inputs` and the `outputs` of our job.
Note: in the folder indicated in the `source_dir` argument, it is possible to have a `requirements.txt` file with the dependencies of our script. This file will make SageMaker Processing automatically install the packages specified in it by running the `pip install -r requirements.txt` command before launching the job itself.
```
hfp.run(
code='huggingface-processing.py',
inputs=[
ProcessingInput(
input_name='data',
source=source_path,
destination='/opt/ml/processing/input/data/'
)
],
outputs=[
ProcessingOutput(output_name='train', source='/opt/ml/processing/output/train/'),
ProcessingOutput(output_name='test', source='/opt/ml/processing/output/test/'),
ProcessingOutput(output_name='val', source='/opt/ml/processing/output/val/'),
],
logs=False
)
```
We can now check the results of our processing job, and list the outputs from S3.
```
output_path = hfp.latest_job.outputs[0].destination
!aws s3 ls --recursive $output_path
```
# InSAR Time Series Analysis using MintPy and ARIA products
**Author:** Eric Fielding, David Bekaert, Heresh Fattahi and Zhang Yunjun
This notebook is a second modification by Eric Fielding from an earlier version of the notebook (https://nbviewer.jupyter.org/github/aria-tools/ARIA-tools-docs/blob/master/JupyterDocs/NISAR/L2_interseismic/mintpySF/smallbaselineApp_aria.ipynb) by David Bekaert, Heresh Fattahi and Zhang Yunjun that was originally focused on San Francisco.
This notebook is a modification from the [original](https://nbviewer.jupyter.org/github/insarlab/MintPy-tutorial/blob/main/smallbaselineApp_aria.ipynb) by Heresh Fattahi and Zhang Yunjun.
**Mapping landslide motion with InSAR time series**
The Caltech-JPL ARIA project, in partnership with the NASA Getting Ready for NISAR (GRFN) project, has been generating surface displacement products (interferograms) mimicking the NISAR L2 GUNW (Geocoded Unwrapped phase interferograms) product formatting. The interferograms are stored at the NASA ASF DAAC and are accessible with an open-source set of tools called ARIA-tools. The Miami InSAR Time-series software in PYthon (MintPy), an open-source package for InSAR time-series analysis, is compatible with the outputs from the ARIA-tools package, and in combination with the ARIA-tools pre-processor can be used to estimate ground displacement time-series.
The Jupyter notebook presented here is meant as a practical example of the use of Jupyter for exploring landslide displacements. In the example below, we demonstrate a time-series derived from ARIA standard InSAR products over the Los Angeles, California area, revealing landslide motion on the Palos Verdes peninsula.
See the other tutorials on ARIA-tools to learn more about how to use that package.
<div class="alert alert-warning">
<b>To save time, we have pre-run the ARIA-tools package and the data loading into MintPy</b>
!ariaDownload.py -b '33.5 34.5 -119.0 -117.9' --track 71
!ariaTSsetup.py -f 'products/*.nc' -b '33.65 33.9 -118.45 -118.15' --mask Download
!smallbaselineApp.py -t LASenDT71.txt --dostep load_data
</div>
<div class="alert alert-warning">
<b>The staged data was uploaded to the S3 data bucket of OpenSARLab and can be downloaded using:</b>
!aws s3 cp s3://asf-jupyter-data/Fielding/Stack.zip Stack.zip
When users are not leveraging OpenSARLab, they should start from ARIA-tools, as the download from S3 will not work.
</div>
# 0. Notebook setup
```
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/insar_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "insar_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select "insar_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "insar_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
```
The two cells below must be run each time the notebook is started to ensure correct set-up of the notebook.
```
%%capture
import os
import zipfile
# verify mintpy install is complete:
try:
import numpy as np
from mintpy import view, tsview, plot_network, plot_transection, plot_coherence_matrix
except ImportError:
raise ImportError("Looks like mintPy is not fully installed")
# define the work directory
work_dir = f"{os.path.abspath(os.getcwd())}/data_LA"
print("Work directory: ", work_dir)
if not os.path.isdir(work_dir):
os.makedirs(work_dir)
print(f'Create directory: {work_dir}')
print(f'Go to work directory: {work_dir}')
os.chdir(work_dir)
if not os.path.isfile('Stack.zip'):
!aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/Fielding/Stack.zip Stack.zip
# verify the download was successful
if os.path.isfile('Stack.zip'):
with zipfile.ZipFile('Stack.zip', 'r') as zip_ref:
zip_ref.extractall(os.getcwd())
print('S3 pre-staged data retrieval was successful')
else:
print("Download outside OpenSARLab is not supported.\nAs an alternative, please start from ARIA-tools with the command-line calls provided at the top of this notebook")
```
The following command will download all the ARIA standard products over the LA area, which is 575 products at the time of this writing. This will take more than two hours to complete if the data is not already downloaded.
```
# download data for descending track 71 over Los Angeles area
#!ariaDownload.py -b '33.5 34.5 -119.0 -117.9' --track 71
```
This ARIA time-series setup would cover the whole Los Angeles area and would take a while to process, so we are skipping this setup.
```
#!ariaTSsetup.py -f 'products/*.nc' -b '33.5 34.5 -119.0 -117.9' --mask Download
```
The following ARIA time-series setup `ariaTSsetup.py` extracts the data that covers only a small area around the Palos Verdes peninsula southwest of Los Angeles to speed the time-series processing, which we specify with the bounding box. We also download the water mask to avoid using data over the ocean.
```
#!ariaTSsetup.py -f 'products/*.nc' -b '33.65 33.9 -118.45 -118.15' --mask Download
```
# 1. smallbaselineApp.py overview
This application provides a workflow which includes several steps to invert a stack of unwrapped interferograms and apply different corrections to obtain ground displacement timeseries.
The workflow consists of two main blocks:
* correcting unwrapping errors and inverting for the raw phase time-series (blue ovals),
* correcting for noise from different sources to obtain the displacement time-series (green ovals).
Some steps are optional, which are switched off by default (marked by dashed boundaries). Configuration parameters for each step are initiated with default values in a customizable text file: [smallbaselineApp.cfg](https://github.com/insarlab/MintPy/blob/master/mintpy/defaults/smallbaselineApp.cfg). In this notebook, we will walk through some of these steps, for a complete example see the [MintPy repository](https://github.com/insarlab/MintPy).
<p align="left">
<img width="600" src="NotebookAddons/MintPyWorkflow.jpg">
</p>
<p style="text-align: center;">
(Figure from Yunjun et al., 2019)
</p>
## 1.1 Processing steps of smallbaselineApp.py
The MintPy **smallbaselineApp.py** application provides a workflow to invert a stack of unwrapped interferograms and apply different (often optional) corrections to obtain ground displacement timeseries. A detailed overview of the options can be retrieved by invoking the help option:
```
!smallbaselineApp.py --help
```
## 1.2 Configuring processing parameters
The processing parameters for the **smallbaselineApp.py** are controlled through a configuration file. If no file is provided, the default [smallbaselineApp.cfg](https://github.com/insarlab/MintPy/blob/master/mintpy/defaults/smallbaselineApp.cfg) configuration is used. Here we use `LASenDT71.txt`, which already contains selected, manually modified configuration parameters for this time-series analysis.
# 2. Small Baseline Time Series Analysis
## 2.1. Loading ARIA data into MintPy
The [ARIA-tools package](https://github.com/aria-tools/ARIA-tools) is used as a pre-processor for MintPy. It has a download tool that wraps around the ASF DAAC API, and includes tools for stitching/cropping and time-series preparation. The output of the time-series preparation is compatible with the [data directory](https://mintpy.readthedocs.io/en/latest/dir_structure/) structure from MintPy. To save time, we have already run these steps. The commands used were:
```
!ariaDownload.py -b '33.5 34.5 -119.0 -117.9' --track 71
!ariaTSsetup.py -f 'products/*.nc' -b '33.65 33.9 -118.45 -118.15' --mask Download
```
The `ariaTSsetup.py` step above (or the pre-processed Stack.zip) extracted the data for the subset we specified and found a total of 439 products that cover our study area. Now we load the data for the subset area into MintPy.
```
# define the MintPy time-series directory
mint_dir = work_dir+'/MintPy'
print("MintPy directory: ", mint_dir)
if not os.path.isdir(mint_dir):
os.makedirs(mint_dir)
print('Create directory: {}'.format(mint_dir))
# copy the configuration file
os.chdir(work_dir)
!cp LASenDT71.txt MintPy
print('Go to work directory: {}'.format(mint_dir))
os.chdir(mint_dir)
!smallbaselineApp.py LASenDT71.txt --dostep load_data
```
The output of the loading step is an "inputs" directory containing two HDF5 files:
- ifgramStack.h5: This file contains 6 dataset cubes (e.g., unwrapped phase, coherence, connected components, etc.) and multiple metadata attributes.
- geometryGeo.h5: This file contains geometrical datasets (e.g., incidence/azimuth angle, masks, etc.).
```
!ls inputs
```
<div class="alert alert-info">
<b>info.py :</b>
To get general information about a MintPy product, run info.py on the file.
</div>
```
!info.py inputs/ifgramStack.h5
!info.py inputs/geometryGeo.h5
```
## 2.2. Plotting the interferogram network
Running **plot_network.py** gives an overview of the network and the average coherence of the stack. The program creates multiple files as follows:
- ifgramStack_coherence_spatialAvg.txt: Contains interferogram dates, average coherence, and temporal and spatial baseline separation.
- Network.pdf: Displays the network of interferograms in time-baseline coordinates, color-coded by the average coherence of the interferograms.
- CoherenceMatrix.pdf: Shows the average coherence between all available pairs in the stack.
```
!smallbaselineApp.py LASenDT71.txt --dostep modify_network
plot_network.main(['inputs/ifgramStack.h5'])
```
## 2.3. Mask generation
Mask files can be used to mask pixels in the time-series processing. Below we generate a mask file based on the connected components, which are a metric for unwrapping quality.
```
!generate_mask.py inputs/ifgramStack.h5 --nonzero -o maskConnComp.h5 --update
view.main(['maskConnComp.h5'])
#!view.py
```
## reference_point
The interferometric phase is a relative observation by nature. The phases of each unwrapped interferogram are relative to an arbitrary pixel. Therefore we need to reference all interferograms to a common reference pixel.
The step "reference_point" selects a common reference pixel for the stack of interferograms. The default approach of MintPy is to choose the pixel with the highest spatial coherence in the stack. Other options include specifying the longitude and latitude of the desired reference pixel, or the line and column number of the reference pixel.
```
!smallbaselineApp.py LASenDT71.txt --dostep reference_point
```
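As a rough numpy sketch of what spatial referencing does (the stack shape, values, and reference-pixel indices below are made up for illustration), each interferogram's phase at the reference pixel is subtracted from the whole scene:

```python
import numpy as np

# Made-up stack: 3 interferograms of 4x5 pixels of unwrapped phase.
rng = np.random.default_rng(0)
ifgs = rng.normal(size=(3, 4, 5))
ref_y, ref_x = 1, 2   # hypothetical reference pixel (row, column)

# Subtract each interferogram's phase at the reference pixel from its scene.
ifgs_ref = ifgs - ifgs[:, ref_y, ref_x][:, None, None]
print(ifgs_ref[:, ref_y, ref_x])   # zeros: phase is now relative to this pixel
```

After this step, phase differences between any two pixels are unchanged, but all interferograms share a common zero.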
Running the "reference_point" step adds additional attributes "REF_X, REF_Y" and "REF_LON, REF_LAT" to the ifgramStack.h5 file. To see the attributes of the file, run info.py:
```
!info.py inputs/ifgramStack.h5 | egrep 'REF_'
```
In this case, I set the reference point latitude and longitude to be in a location close to the Portuguese Bend Landslide, and MintPy calculated the X and Y locations.
## 2.4. Inverting of the Small Baseline network
In the next step we invert the network of differential unwrapped interferograms to estimate the time-series of unwrapped phase with respect to a reference acquisition date. By default MintPy selects the first acquisition. The estimated time-series is converted to the distance change from radar to target and is provided in meters.
```
!smallbaselineApp.py LASenDT71.txt --dostep invert_network
```
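At its core, the network inversion is a least-squares problem: each interferogram observes a difference of two acquisition phases. A toy numpy sketch (the pairs and phase values are invented, and the real MintPy inversion does much more, e.g. weighting and masking):

```python
import numpy as np

# Toy network: 4 acquisitions (phi_0 is the reference, fixed to 0) and
# 5 interferograms, each pair (i, j) observing phi_j - phi_i.
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
true_phi = np.array([0.0, 0.5, 1.2, 2.0])   # invented unwrapped phases

# Design matrix mapping the unknowns phi_1..phi_3 to the observations.
A = np.zeros((len(pairs), len(true_phi) - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0:
        A[k, j - 1] = 1.0
    if i > 0:
        A[k, i - 1] = -1.0

obs = np.array([true_phi[j] - true_phi[i] for i, j in pairs])
phi_hat, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(phi_hat)   # recovers the phases [0.5, 1.2, 2.0]
```

With a redundant network (more interferograms than acquisitions), the least-squares solution averages down the noise of individual pairs.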
The timeseries file contains three datasets:
- the "time-series" which is the interferometric range change for each acquisition relative to the reference acquisition,
- the "date" dataset which contains the acquisition date for each acquisition,
- the "bperp" dataset which contains the timeseries of the perpendicular baseline.
## 2.5. Estimating the long-term velocity rate
The ground deformation caused by many geophysical or anthropogenic processes is, to first order, linear in time. Therefore it is common to estimate the rate of ground deformation, which is the slope of a linear fit to the time-series.
```
!smallbaselineApp.py LASenDT71.txt --dostep velocity
scp_args = 'velocity.h5 velocity -v -1 1'
view.main(scp_args.split())
```
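The slope-of-a-linear-fit idea can be sketched in a few lines of numpy (the times and displacements below are invented, noise-free values):

```python
import numpy as np

t = np.arange(0, 5, 0.5)        # years since the first acquisition
disp = -0.025 * t + 0.003       # invented noise-free LOS displacement (m)

vel, intercept = np.polyfit(t, disp, 1)   # slope = velocity in m/yr
print(f"velocity: {vel * 100:.1f} cm/yr")   # → velocity: -2.5 cm/yr
```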
<div class="alert alert-info">
<b>Note :</b>
Negative values indicate that the target is moving away from the radar (i.e., subsidence in the case of vertical deformation).
Positive values indicate that the target is moving towards the radar (i.e., uplift in the case of vertical deformation).
The line of sight (LOS) for this descending Sentinel-1 track is up and east from ground to radar.
</div>
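If one assumes the motion is purely vertical, a LOS displacement can be projected to vertical by dividing by the cosine of the incidence angle. A small illustrative sketch (the incidence angle below is a made-up value, not read from geometryGeo.h5):

```python
import numpy as np

def los_to_vertical(d_los, incidence_deg):
    """Project a LOS displacement to vertical, assuming purely vertical motion."""
    return d_los / np.cos(np.radians(incidence_deg))

# e.g. -1 cm of LOS change at an assumed ~34 deg incidence angle
print(los_to_vertical(-0.01, 34.0))   # about -0.012 m of subsidence
```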
Obvious features in the estimated velocity map:
1) There are several features with significant velocity in this area.
2) The negative LOS feature on the Palos Verdes peninsula (center left of map) is the Portuguese Bend Landslide, moving down and southwest toward the sea.
3) There are areas of positive and negative LOS change in the area of Long Beach (east part of map). These are due to the extraction of oil and injection of water in oilfields beneath the city and out in the harbor.
4) The black box at 33.76N, 118.36W is the reference pixel for this map, just north of the Portuguese Bend Landslide.
# 3. Error analysis (what is signal, what is noise!)
The uncertainty of ground displacement products derived from InSAR time-series depends on the quality of the inversion of the stack of interferograms and on the accuracy of separating the ground displacement from the other components of the InSAR data. Therefore the definition of signal vs. noise differs between the two main steps in MintPy:
1) During the inversion:
At this step all systematic components of the interferometric phase (e.g., ground displacement, propagation delay, geometrical residuals caused by DEM or platform's orbit inaccuracy) are considered signal, while the interferometric phase decorrelation, phase unwrapping error and phase inconsistency are considered noise.
2) After inversion: the ground displacement component of the time-series is signal, and everything else (including the propagation delay and geometrical residuals) is considered noise.
Therefore we first discuss the possible sources of error during the inversion and the existing ways in MintPy to evaluate the quality of inversion and to improve the uncertainty of the inversion. Afterwards we explain the different components of the time-series and the different processing steps in MintPy to separate them from ground displacement signal.
## 3.1 Quality of the inversion
The main sources of noise during the time-series inversion include decorrelation, phase unwrapping errors and the inconsistency of triplets of interferograms. Here we mainly focus on decorrelation and unwrapping errors. We first show the existing quantities in MintPy for evaluating decorrelation and unwrapping errors, and then discuss the existing ways in MintPy to reduce their effect on the time-series inversion.
### 3.1.1 Average spatial coherence
MintPy computes the temporal average of the spatial coherence of the entire stack as a potential ancillary measure for choosing reliable pixels after the time-series inversion.
```
view.main(['avgSpatialCoh.h5'])
```
### 3.1.2 Temporal coherence
In addition to timeseries.h5, which contains the time-series dataset, invert_network produces other quantities containing metrics to evaluate the quality of the inversion, including temporalCoherence.h5. Temporal coherence represents the consistency of the time-series with the network of interferograms.
Temporal coherence varies from 0 to 1. Pixels with values closer to 1 are considered reliable and pixels with values closer to zero are considered unreliable. For a dense network of interferograms, a threshold of 0.7 may be used (Yunjun et al, 2019).
```
view.main(['temporalCoherence.h5'])
```
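Applying such a threshold is a one-liner in numpy; the sketch below (with invented 2x2 example fields) masks a velocity field wherever temporal coherence falls below 0.7:

```python
import numpy as np

# Invented example fields.
temporal_coh = np.array([[0.95, 0.40],
                         [0.72, 0.10]])
velocity = np.array([[1.0, 2.0],
                     [3.0, 4.0]])

mask = temporal_coh >= 0.7                          # keep reliable pixels only
velocity_masked = np.where(mask, velocity, np.nan)  # NaN out unreliable pixels
print(velocity_masked)
```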
With both the spatial coherence and temporal coherence, we can see that the InSAR in the ports of Long Beach and Los Angeles have unstable phase, and the InSAR measurements there will be low quality.
## 3.2. Velocity error analysis
The estimated velocity also comes with an expression of uncertainty, which is simply based on the goodness of fit of a linear model to the time-series. This quantity is saved in "velocity.h5" under the velocityStd dataset.
**MintPy supports additional corrections in its processing not included in this demo:**
- Unwrapping error correction
- Tropospheric delay correction
- deramping
- Topographic residual correction
- Residual RMS for noise evaluation
- Changing the reference date
```
scp_args = 'velocity.h5 velocityStd -v 0 0.2'
view.main(scp_args.split())
```
Note that the plot above is the velocity error, not the velocity. The errors generally increase with distance from the reference point and also increase for points with elevations different from the reference point because of topographically correlated water vapor variations that are especially strong in this area.
# 4. Plotting a Landslide Motion Transect
```
scp_args = 'velocity.h5 --start-lalo 33.75 -118.38 --end-lalo 33.72 -118.3 '
plot_transection.main(scp_args.split())
!smallbaselineApp.py smallbaselineApp.cfg --dostep google_earth
```
On this transect, the Portuguese Bend Landslide has an average velocity over the last five years that reaches more than 2.5 cm/year in the descending-track radar LOS direction (note that the "zero" is about -5, so we have to subtract that first). By analyzing the velocity on the ascending track and combining the two LOS directions, we could find that the displacements are largely horizontal, westward (and southward, which we can't see with InSAR).
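Combining ascending and descending LOS observations to recover east and up motion amounts to solving a small linear system per pixel. An illustrative sketch (the LOS unit-vector components below are rough assumed values, not the actual Sentinel-1 geometry):

```python
import numpy as np

# Assumed LOS unit-vector components [east, up]: a descending track looks
# "up and east", an ascending track "up and west" (signs are illustrative).
G = np.array([[ 0.6, 0.7],    # descending
              [-0.6, 0.7]])   # ascending

true_motion = np.array([-0.03, -0.005])   # invented east, up rates (m/yr)
d_los = G @ true_motion                   # what each track would observe

east, up = np.linalg.solve(G, d_los)      # per-pixel 2x2 inversion
print(east, up)
```

North-south motion stays invisible because both near-polar orbits are almost insensitive to it.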
# Reference material
- Original notebook with detailed description by Yunjun and Fattahi at: https://nbviewer.jupyter.org/github/insarlab/MintPy-tutorial/blob/master/smallbaselineApp_aria.ipynb
- Mintpy reference: *Yunjun, Z., H. Fattahi, F. Amelung (2019), Small baseline InSAR time series analysis: unwrapping error correction and noise reduction, preprint doi:[10.31223/osf.io/9sz6m](https://eartharxiv.org/9sz6m/).*
- University of Miami online time-series viewer: https://insarmaps.miami.edu/
- Mintpy Github repository: https://github.com/insarlab/MintPy
- ARIA-tools Github Repository: https://github.com/aria-tools/ARIA-tools
<font face="Calibri" size="2"> <i>LosAngeles_time_series.ipynb - Version 1.1 - October 2020
<br>
<b>Version Changes</b>
<ul>
<li>Add --region=us-east-1 --no-sign-request to aws s3 cp call</li>
<li>Capture matplotlib deprecation warnings in import cell</li>
</ul></i>
<br>
Note: matplotlib deprecation and syntax warnings now appear when calling MintPy scripts. We cannot handle or suppress these in the notebook and they should be ignored.
</font>
# Recommendations on GCP with TensorFlow and WALS with Cloud Composer
***
This lab is adapted from the original [solution](https://github.com/GoogleCloudPlatform/tensorflow-recommendation-wals) created by [lukmanr](https://github.com/GoogleCloudPlatform/tensorflow-recommendation-wals/commits?author=lukmanr)
This project deploys a solution for a recommendation service on GCP, using the WALS algorithm in TensorFlow. Components include:
- Recommendation model code, and scripts to train and tune the model on ML Engine
- A REST endpoint using Google Cloud Endpoints for serving recommendations
- An Airflow server managed by Cloud Composer for running scheduled model training
## Confirm Prerequisites
### Create a Cloud Composer Instance
- Create a Cloud Composer [instance](https://console.cloud.google.com/composer/environments/create?project=)
1. Specify 'composer' for name
2. Choose a location
3. Keep the remaining settings at their defaults
4. Select Create
This takes 15 - 20 minutes. Continue with the rest of the lab as you will be using Cloud Composer near the end.
```
%%bash
pip install --upgrade pip sh # the 'sh' package is needed to execute shell scripts later
```
### Setup environment variables
<span style="color: blue">__Replace the below settings with your own.__</span> Note: you can leave AIRFLOW_BUCKET blank and come back to it after your Composer instance is created which automatically will create an Airflow bucket for you. <br><br>
### 1. Make a GCS bucket with the name recserve_[YOUR-PROJECT-ID]:
```
import os
PROJECT = 'PROJECT' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1' # REPLACE WITH YOUR REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = 'recserve_' + PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
# create GCS bucket with recserve_PROJECT_NAME if not exists
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo "Not creating recserve_bucket since it already exists."
else
echo "Creating recserve_bucket"
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
### Setup Google App Engine permissions
1. In [IAM](https://console.cloud.google.com/iam-admin/iam?project=), __change permissions for "Compute Engine default service account" from Editor to Owner__. This is required so you can create and deploy App Engine versions from within Cloud Datalab. Note: the alternative is to run all app engine commands directly in Cloud Shell instead of from within Cloud Datalab.<br/><br/>
2. Create an App Engine instance if you have not already by uncommenting and running the below code
```
# %%bash
# run app engine creation commands
# gcloud app create --region ${REGION} # see: https://cloud.google.com/compute/docs/regions-zones/
# gcloud app update --no-split-health-checks
```
# Part One: Setup and Train the WALS Model
## Upload sample data to BigQuery
This tutorial comes with a sample Google Analytics data set, containing page tracking events from the Austrian news site Kurier.at. The schema file `ga_sessions_sample_schema.json` is located in the `data` folder of the tutorial code, and the data file `ga_sessions_sample.json.gz` is located in a public Cloud Storage bucket associated with this tutorial. To upload this data set to BigQuery:
### Copy sample data files into our bucket
```
%%bash
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/ga_sessions_sample.json.gz gs://${BUCKET}/data/ga_sessions_sample.json.gz
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv data/recommendation_events.csv
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/endtoend/data/recommendation_events.csv gs://${BUCKET}/data/recommendation_events.csv
```
### 2. Create empty BigQuery dataset and load sample JSON data
Note: Ingesting the 400K rows of sample data usually takes 5-7 minutes.
```
%%bash
# create BigQuery dataset if it doesn't already exist
exists=$(bq ls -d | grep -w GA360_test)
if [ -n "$exists" ]; then
echo "Not creating GA360_test since it already exists."
else
echo "Creating GA360_test dataset."
bq --project_id=${PROJECT} mk GA360_test
fi
# create the schema and load our sample Google Analytics session data
bq load --source_format=NEWLINE_DELIMITED_JSON \
GA360_test.ga_sessions_sample \
gs://${BUCKET}/data/ga_sessions_sample.json.gz \
data/ga_sessions_sample_schema.json # can't load schema files from GCS
```
## Install WALS model training package and model data
### 1. Create a distributable package. Copy the package up to the code folder in the bucket you created previously.
```
%%bash
cd wals_ml_engine
echo "creating distributable package"
python setup.py sdist
echo "copying ML package to bucket"
gsutil cp dist/wals_ml_engine-0.1.tar.gz gs://${BUCKET}/code/
```
### 2. Run the WALS model on the sample data set:
```
%%bash
# view the ML train local script before running
cat wals_ml_engine/mltrain.sh
%%bash
cd wals_ml_engine
# train locally with unoptimized hyperparams
./mltrain.sh local ../data/recommendation_events.csv --data-type web_views --use-optimized
# Options if we wanted to train on CMLE. We will do this with Cloud Composer later
# train on ML Engine with optimized hyperparams
# ./mltrain.sh train ../data/recommendation_events.csv --data-type web_views --use-optimized
# tune hyperparams on ML Engine:
# ./mltrain.sh tune ../data/recommendation_events.csv --data-type web_views
```
This will take a couple minutes, and create a job directory under wals_ml_engine/jobs like "wals_ml_local_20180102_012345/model", containing the model files saved as numpy arrays.
### View the locally trained model directory
```
ls wals_ml_engine/jobs
```
### 3. Copy the model files from this directory to the model folder in the project bucket:
In the case of multiple models, take the most recent (tail -1)
```
%%bash
export JOB_MODEL=$(find wals_ml_engine/jobs -name "model" | tail -1)
gsutil cp ${JOB_MODEL}/* gs://${BUCKET}/model/
echo "Recommendation model file numpy arrays in bucket:"
gsutil ls gs://${BUCKET}/model/
```
# Install the recserve endpoint
### 1. Prepare the deploy template for the Cloud Endpoint API:
```
%%bash
cd scripts
cat prepare_deploy_api.sh
%%bash
printf "\nCopy and run the deploy script generated below:\n"
cd scripts
./prepare_deploy_api.sh # Prepare config file for the API.
```
This will output something like:
```To deploy: gcloud endpoints services deploy /var/folders/1m/r3slmhp92074pzdhhfjvnw0m00dhhl/T/tmp.n6QVl5hO.yaml```
### 2. Run the endpoints deploy command output above:
<span style="color: blue">Be sure to __replace the below [FILE_NAME]__ with the results from above before running.</span>
```
%%bash
gcloud endpoints services deploy [REPLACE_WITH_TEMP_FILE_NAME.yaml]
```
### 3. Prepare the deploy template for the App Engine App:
```
%%bash
# view the app deployment script
cat scripts/prepare_deploy_app.sh
%%bash
# prepare to deploy
cd scripts
./prepare_deploy_app.sh
```
You can ignore the script output "ERROR: (gcloud.app.create) The project [...] already contains an App Engine application. You can deploy your application using gcloud app deploy." This is expected.
The script will output something like:
```To deploy: gcloud -q app deploy app/app_template.yaml_deploy.yaml```
### 4. Run the command above:
```
%%bash
gcloud -q app deploy app/app_template.yaml_deploy.yaml
```
This will take 7 - 10 minutes to deploy the app. While you wait, consider starting on Part Two below and completing the Cloud Composer DAG file.
## Query the API for Article Recommendations
Lastly, you are able to test the recommendation model API by submitting a query request. Note the example userId passed and numRecs desired as the URL parameters for the model input.
```
%%bash
cd scripts
./query_api.sh # Query the API.
#./generate_traffic.sh # Send traffic to the API.
```
If the call is successful, you will see the article IDs recommended for that specific user by the WALS ML model <br/>
(Example: curl "https://qwiklabs-gcp-12345.appspot.com/recommendation?userId=5448543647176335931&numRecs=5"
{"articles":["299824032","1701682","299935287","299959410","298157062"]} )
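The same query can be sketched in Python with the standard library (the host below is a placeholder — substitute your own App Engine URL; the JSON body is the sample response shown above):

```python
import json
from urllib.parse import urlencode

# Placeholder host -- substitute your own App Engine URL.
base = "https://YOUR_PROJECT.appspot.com/recommendation"
url = f"{base}?{urlencode({'userId': '5448543647176335931', 'numRecs': 5})}"
print(url)

# Parsing the JSON body returned by the endpoint (sample response from above):
body = '{"articles":["299824032","1701682","299935287","299959410","298157062"]}'
articles = json.loads(body)["articles"]
print(len(articles))   # → 5
```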
__Part One is done!__ You have successfully created the back-end architecture for serving your ML recommendation system. But we're not done yet, we still need to automatically retrain and redeploy our model once new data comes in. For that we will use [Cloud Composer](https://cloud.google.com/composer/) and [Apache Airflow](https://airflow.apache.org/).<br/><br/>
***
# Part Two: Setup a scheduled workflow with Cloud Composer
In this section you will complete a partially written training.py DAG file and copy it to the DAGS folder in your Composer instance.
## Copy your Airflow bucket name
1. Navigate to your Cloud Composer [instance](https://console.cloud.google.com/composer/environments?project=)<br/><br/>
2. Select __DAGs Folder__<br/><br/>
3. You will be taken to the Google Cloud Storage bucket that Cloud Composer has created automatically for your Airflow instance<br/><br/>
4. __Copy the bucket name__ into the variable below (example: us-central1-composer-08f6edeb-bucket)
```
AIRFLOW_BUCKET = 'us-central1-composer-21587538-bucket' # REPLACE WITH AIRFLOW BUCKET NAME
os.environ['AIRFLOW_BUCKET'] = AIRFLOW_BUCKET
```
## Complete the training.py DAG file
Apache Airflow orchestrates tasks out to other services through a [DAG (Directed Acyclic Graph)](https://airflow.apache.org/concepts.html) file which specifies what services to call, what to do, and when to run these tasks. DAG files are written in python and are loaded automatically into Airflow once present in the Airflow/dags/ folder in your Cloud Composer bucket.
Your task is to complete the partially written DAG file below which will enable the automatic retraining and redeployment of our WALS recommendation model.
__Complete the #TODOs__ in the Airflow DAG file below and execute the code block to save the file
```
%%writefile airflow/dags/training.py
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""DAG definition for recserv model training."""
import airflow
from airflow import DAG
# Reference for all available airflow operators:
# https://github.com/apache/incubator-airflow/tree/master/airflow/contrib/operators
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
from airflow.hooks.base_hook import BaseHook
# from airflow.contrib.operators.mlengine_operator import MLEngineTrainingOperator
# above mlengine_operator currently doesn't support custom MasterType so we import our own plugins:
# custom plugins
from airflow.operators.app_engine_admin_plugin import AppEngineVersionOperator
from airflow.operators.ml_engine_plugin import MLEngineTrainingOperator
import datetime
def _get_project_id():
"""Get project ID from default GCP connection."""
extras = BaseHook.get_connection('google_cloud_default').extra_dejson
key = 'extra__google_cloud_platform__project'
if key in extras:
project_id = extras[key]
else:
raise ValueError('Must configure project_id in google_cloud_default '
'connection from Airflow Console')
return project_id
PROJECT_ID = _get_project_id()
# Data set constants, used in BigQuery tasks. You can change these
# to conform to your data.
# TODO: Specify your BigQuery dataset name and table name
DATASET = 'GA360_test'
TABLE_NAME = 'ga_sessions_sample'
ARTICLE_CUSTOM_DIMENSION = '10'
# TODO: Confirm bucket name and region
# GCS bucket names and region, can also be changed.
BUCKET = 'gs://recserve_' + PROJECT_ID
REGION = 'us-east1'
# The code package name comes from the model code in the wals_ml_engine
# directory of the solution code base.
PACKAGE_URI = BUCKET + '/code/wals_ml_engine-0.1.tar.gz'
JOB_DIR = BUCKET + '/jobs'
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': airflow.utils.dates.days_ago(2),
'email': ['airflow@example.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 5,
'retry_delay': datetime.timedelta(minutes=5)
}
# Default schedule interval using cronjob syntax - can be customized here
# or in the Airflow console.
# TODO: Specify a schedule interval in CRON syntax to run once a day at 2100 hours (9pm)
# Reference: https://airflow.apache.org/scheduler.html
schedule_interval = '00 21 * * *'
# TODO: Title your DAG to be recommendations_training_v1
dag = DAG('recommendations_training_v1',
default_args=default_args,
schedule_interval=schedule_interval)
dag.doc_md = __doc__
#
#
# Task Definition
#
#
# BigQuery training data query
bql='''
#legacySQL
SELECT
fullVisitorId as clientId,
ArticleID as contentId,
(nextTime - hits.time) as timeOnPage,
FROM(
SELECT
fullVisitorId,
hits.time,
MAX(IF(hits.customDimensions.index={0},
hits.customDimensions.value,NULL)) WITHIN hits AS ArticleID,
LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId, visitNumber
ORDER BY hits.time ASC) as nextTime
FROM [{1}.{2}.{3}]
WHERE hits.type = "PAGE"
) HAVING timeOnPage is not null and contentId is not null;
'''
bql = bql.format(ARTICLE_CUSTOM_DIMENSION, PROJECT_ID, DATASET, TABLE_NAME)
# TODO: Complete the BigQueryOperator task to truncate the table if it already exists before writing
# Reference: https://airflow.apache.org/integration.html#bigqueryoperator
t1 = BigQueryOperator(
task_id='bq_rec_training_data',
bql=bql,
destination_dataset_table='%s.recommendation_events' % DATASET,
write_disposition='WRITE_TRUNCATE', # specify to truncate on writes
dag=dag)
# BigQuery training data export to GCS
# TODO: Fill in the missing operator name for task #2 which
# takes a BigQuery dataset and table as input and exports it to GCS as a CSV
training_file = BUCKET + '/data/recommendation_events.csv'
t2 = BigQueryToCloudStorageOperator(
task_id='bq_export_op',
source_project_dataset_table='%s.recommendation_events' % DATASET,
destination_cloud_storage_uris=[training_file],
export_format='CSV',
dag=dag
)
# ML Engine training job
job_id = 'recserve_{0}'.format(datetime.datetime.now().strftime('%Y%m%d%H%M'))
job_dir = BUCKET + '/jobs/' + job_id
output_dir = BUCKET
training_args = ['--job-dir', job_dir,
'--train-files', training_file,
'--output-dir', output_dir,
'--data-type', 'web_views',
'--use-optimized']
# TODO: Fill in the missing operator name for task #3 which will
# start a new training job to Cloud ML Engine
# Reference: https://airflow.apache.org/integration.html#cloud-ml-engine
# https://cloud.google.com/ml-engine/docs/tensorflow/machine-types
t3 = MLEngineTrainingOperator(
task_id='ml_engine_training_op',
project_id=PROJECT_ID,
job_id=job_id,
package_uris=[PACKAGE_URI],
training_python_module='trainer.task',
training_args=training_args,
region=REGION,
scale_tier='CUSTOM',
master_type='complex_model_m_gpu',
dag=dag
)
# App Engine deploy new version
t4 = AppEngineVersionOperator(
task_id='app_engine_deploy_version',
project_id=PROJECT_ID,
service_id='default',
region=REGION,
service_spec=None,
dag=dag
)
# TODO: Be sure to set_upstream dependencies for all tasks
t2.set_upstream(t1)
t3.set_upstream(t2)
t4.set_upstream(t3)
```
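For readers newer to Airflow: the chained `set_upstream` calls at the end of the DAG are equivalent to Airflow's bitshift syntax `t1 >> t2 >> t3 >> t4`. The minimal stub below (not real Airflow, just a stand-in class) illustrates that equivalence:

```python
class Task:
    """Minimal stand-in mimicking Airflow's dependency API (not real Airflow)."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.upstream = []

    def set_upstream(self, other):
        self.upstream.append(other)

    def __rshift__(self, other):      # enables t1 >> t2, like BaseOperator
        other.set_upstream(self)
        return other

t1, t2, t3, t4 = (Task(f"t{i}") for i in range(1, 5))
t1 >> t2 >> t3 >> t4                  # same chain as the set_upstream calls
print([t.task_id for t in t4.upstream])   # → ['t3']
```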
### Copy local Airflow DAG file and plugins into the DAGs folder
```
%%bash
gsutil cp airflow/dags/training.py gs://${AIRFLOW_BUCKET}/dags # overwrite if it exists
gsutil cp -r airflow/plugins gs://${AIRFLOW_BUCKET} # copy custom plugins
```
2. Navigate to your Cloud Composer [instance](https://console.cloud.google.com/composer/environments?project=)<br/><br/>
3. Trigger a __manual run__ of your DAG for testing<br/><br/>
4. Ensure your DAG runs successfully (all nodes outlined in dark green and 'success' tag shows)

## Troubleshooting your DAG
DAG not executing successfully? Follow these below steps to troubleshoot.
Click on the name of a DAG to view a run (ex: recommendations_training_v1)
1. Select a node in the DAG (red or yellow borders mean failed nodes)
2. Select View Log
3. Scroll to the bottom of the log to diagnose
4. Optional: Clear and immediately restart the DAG after diagnosing the issue
Tips:
- If bq_rec_training_data immediately fails without logs, your DAG file is missing key parts and is not compiling
- ml_engine_training_op will take 9 - 12 minutes to run. Monitor the training job in [ML Engine](https://console.cloud.google.com/mlengine/jobs?project=)
- Lastly, check the [solution endtoend.ipynb](../endtoend/endtoend.ipynb) to compare your lab answers

# Congratulations!
You have made it to the end of the end-to-end recommendation system lab. You have successfully setup an automated workflow to retrain and redeploy your recommendation model.
***
# Challenges
Looking to solidify your Cloud Composer skills even more? Complete the __optional challenges__ below
<br/><br/>
### Challenge 1
Use either the [BigQueryCheckOperator](https://airflow.apache.org/integration.html#bigquerycheckoperator) or the [BigQueryValueCheckOperator](https://airflow.apache.org/integration.html#bigqueryvaluecheckoperator) to create a new task in your DAG that ensures the SQL query for training data is returning valid results before it is passed to Cloud ML Engine for training.
<br/><br/>
Hint: Check for COUNT() = 0 or other health check
<br/><br/><br/>
### Challenge 2
Create a Cloud Function to [automatically trigger](https://cloud.google.com/composer/docs/how-to/using/triggering-with-gcf) your DAG when a new recommendation_events.csv file is loaded into your Google Cloud Storage Bucket.
<br/><br/>
Hint: Check the [composer_gcf_trigger.ipynb lab](../composer_gcf_trigger/composertriggered.ipynb) for inspiration
<br/><br/><br/>
### Challenge 3
Modify the BigQuery query in the DAG to only train on a portion of the data available in the dataset using a WHERE clause filtering on date. Next, parameterize the WHERE clause based on when the Airflow DAG is run.
<br/><br/>
Hint: Make use of prebuilt [Airflow macros](https://airflow.incubator.apache.org/_modules/airflow/macros.html) like the below:
_constants or can be dynamic based on Airflow macros_ <br/>
max_query_date = '2018-02-01' # {{ macros.ds_add(ds, -7) }} <br/>
min_query_date = '2018-01-01' # {{ macros.ds_add(ds, -1) }}
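As a minimal sketch of the idea, the date bounds could be expressed as Jinja macros that Airflow renders at run time. The table and column names below are hypothetical placeholders, not the lab's actual schema:

```python
# Hypothetical sketch: `project.dataset.recommendation_events` and `event_date`
# are placeholder names. Airflow renders the {{ ... }} macros when the DAG runs.
max_query_date = "{{ ds }}"                      # the DAG run date
min_query_date = "{{ macros.ds_add(ds, -7) }}"   # seven days before the run date

query = """
SELECT * FROM `project.dataset.recommendation_events`
WHERE event_date BETWEEN '{min_date}' AND '{max_date}'
""".format(min_date=min_query_date, max_date=max_query_date)

print(query)
```

Because the macros are left unrendered in the SQL string, each scheduled run trains on a window relative to its own execution date.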
## Additional Resources
- Follow the latest [Airflow operators](https://github.com/apache/incubator-airflow/tree/master/airflow/contrib/operators) on github
# Assignment 4 - Average Reward Softmax Actor-Critic
Welcome to your Course 3 Programming Assignment 4. In this assignment, you will implement **Average Reward Softmax Actor-Critic** in the Pendulum Swing-Up problem that you have seen earlier in the lecture. Through this assignment you will get hands-on experience in implementing actor-critic methods on a continuing task.
**In this assignment, you will:**
1. Implement softmax actor-critic agent on a continuing task using the average reward formulation.
2. Understand how to parameterize the policy as a function to learn, in a discrete action environment.
3. Understand how to (approximately) sample the gradient of this objective to update the actor.
4. Understand how to update the critic using differential TD error.
## Pendulum Swing-Up Environment
In this assignment, we will be using a Pendulum environment, adapted from [Santamaría et al. (1998)](http://www.incompleteideas.net/papers/SSR-98.pdf). This is also the same environment that we used in the lecture. The diagram below illustrates the environment.
<img src="data/pendulum_env.png" alt="Drawing" style="width: 400px;"/>
The environment consists of a single pendulum that can swing 360 degrees. The pendulum is actuated by applying a torque at its pivot point. The goal is to get the pendulum to balance upright from its resting position (hanging down at the bottom with no velocity) and maintain it for as long as possible. The pendulum can move freely, subject only to gravity and the action applied by the agent.
The state is 2-dimensional, which consists of the current angle $\beta \in [-\pi, \pi]$ (angle from the vertical upright position) and current angular velocity $\dot{\beta} \in (-2\pi, 2\pi)$. The angular velocity is constrained in order to avoid damaging the pendulum system. If the angular velocity reaches this limit during simulation, the pendulum is reset to the resting position.
The action is the angular acceleration, with discrete values $a \in \{-1, 0, 1\}$ applied to the pendulum.
For more details on environment dynamics you can refer to the original paper.
The goal is to swing up the pendulum and maintain its upright angle. Hence, the reward is the negative absolute angle from the vertical position: $R_{t} = -|\beta_{t}|$
Furthermore, since the goal is to reach and maintain a vertical position, there are no terminations nor episodes. Thus this problem can be formulated as a continuing task.
Similar to the Mountain Car task, the action in this pendulum environment is not strong enough to move the pendulum directly to the desired position. The agent must learn to first move the pendulum away from its desired position and gain enough momentum to successfully swing-up the pendulum. And even after reaching the upright position the agent must learn to continually balance the pendulum in this unstable position.
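For concreteness, the reward described above can be sketched in a couple of lines (assuming the angle is already expressed in radians from the upright position):

```python
def pendulum_reward(beta):
    # Negative absolute angle from upright; the reward is 0 only when the
    # pendulum is exactly vertical, and grows more negative as it falls away.
    return -abs(beta)
```

The reward is maximized (at 0) only in the upright position, which is why holding the pendulum there indefinitely is the near-optimal behavior.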
## Packages
You will use the following packages in this assignment.
- [numpy](https://www.numpy.org) : Fundamental package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) : Library for plotting graphs in Python.
- [RL-Glue](http://www.jmlr.org/papers/v10/tanner09a.html) : Library for reinforcement learning experiments.
- [jdc](https://alexhagen.github.io/jdc/) : Jupyter magic that allows defining classes over multiple jupyter notebook cells.
- [tqdm](https://tqdm.github.io/) : A package to display progress bar when running experiments
- plot_script : custom script to plot results
- [tiles3](http://incompleteideas.net/tiles/tiles3.html) : A package that implements tile-coding.
- pendulum_env : Pendulum Swing-up Environment
**Please do not import other libraries** — this will break the autograder.
```
# Do not modify this cell!
# Import necessary libraries
# DO NOT IMPORT OTHER LIBRARIES - This will break the autograder.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
from tqdm import tqdm
from rl_glue import RLGlue
from pendulum_env import PendulumEnvironment
from agent import BaseAgent
import plot_script
import tiles3 as tc
```
## Section 1: Create Tile Coding Helper Function
In this section, we are going to build a tile coding class for our agent that will make it easier to make calls to our tile coder.
Tile-coding is introduced in Section 9.5.4 of the textbook as a way to create features that can both provide good generalization and discrimination. We have already used it in our last programming assignment as well.
Similar to the last programming assignment, we are going to make a function specific for tile coding for our Pendulum Swing-up environment. We will also use the [Tiles3 library](http://incompleteideas.net/tiles/tiles3.html).
To get the tile coder working we need to:
1) create an index hash table using `tc.IHT()`,
2) scale the inputs for the tile coder based on the number of tiles and the range of values each input could take, and
3) call `tc.tileswrap` to get the active tiles back.
However, we need to make one small change to this tile coder.
Note that in this environment the state space contains an angle, which lies in $[-\pi, \pi]$. If we tile-code this state space in the usual way, the agent may think the value of states corresponding to an angle of $-\pi$ is very different from an angle of $\pi$, when in fact they are the same! To remedy this and allow generalization between angle $= -\pi$ and angle $= \pi$, we need to use the **wrap tile coder**.
The usage of the wrap tile coder is almost identical to the original tile coder, except that we also need to provide the `wrapwidth` argument for the dimension we want to wrap over (hence only for angle, and `None` for angular velocity). More details on the wrap tile coder are provided in the [Tiles3 library](http://incompleteideas.net/tiles/tiles3.html).
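To see why wrapping matters, here is a minimal one-dimensional sketch (independent of Tiles3, with a hypothetical `wrapped_tile` helper) of how a modulo wrap maps $-\pi$ and $\pi$ to the same tile:

```python
import math

def wrapped_tile(angle, num_tiles=8):
    # Scale the angle from [-pi, pi] onto [0, num_tiles), then wrap with
    # modulo so the two ends of the range land in the same tile.
    scaled = (angle + math.pi) / (2 * math.pi) * num_tiles
    return int(math.floor(scaled)) % num_tiles
```

With the modulo, angle $= \pi$ wraps around to the same tile as angle $= -\pi$; without it, the two ends of the range would receive unrelated features and the agent could not generalize between them.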
```
# [Graded]
class PendulumTileCoder:
def __init__(self, iht_size=4096, num_tilings=32, num_tiles=8):
"""
Initializes the Pendulum Tile Coder
Initializers:
iht_size -- int, the size of the index hash table, typically a power of 2
num_tilings -- int, the number of tilings
num_tiles -- int, the number of tiles. Here both the width and height of the tiles are the same
Class Variables:
self.iht -- tc.IHT, the index hash table that the tile coder will use
self.num_tilings -- int, the number of tilings the tile coder will use
self.num_tiles -- int, the number of tiles the tile coder will use
"""
self.num_tilings = num_tilings
self.num_tiles = num_tiles
self.iht = tc.IHT(iht_size)
def get_tiles(self, angle, ang_vel):
"""
Takes in an angle and angular velocity from the pendulum environment
and returns a numpy array of active tiles.
Arguments:
angle -- float, the angle of the pendulum between -np.pi and np.pi
ang_vel -- float, the angular velocity of the agent between -2*np.pi and 2*np.pi
returns:
tiles -- np.array, active tiles
"""
### Set the max and min of angle and ang_vel to scale the input (4 lines)
# ANGLE_MIN = ?
# ANGLE_MAX = ?
# ANG_VEL_MIN = ?
# ANG_VEL_MAX = ?
### START CODE HERE ###
ANGLE_MIN = - np.pi
ANGLE_MAX = np.pi
ANG_VEL_MIN = -2*np.pi
ANG_VEL_MAX = 2*np.pi
### END CODE HERE ###
### Use the ranges above and self.num_tiles to set angle_scale and ang_vel_scale (2 lines)
# angle_scale = number of tiles / angle range
# ang_vel_scale = number of tiles / ang_vel range
### START CODE HERE ###
angle_scale = self.num_tiles/(ANGLE_MAX - ANGLE_MIN)
ang_vel_scale = self.num_tiles/(ANG_VEL_MAX - ANG_VEL_MIN)
### END CODE HERE ###
# Get tiles by calling tc.tileswrap method
# wrapwidths specify which dimension to wrap over and its wrapwidth
tiles = tc.tileswrap(self.iht, self.num_tilings, [angle * angle_scale, ang_vel * ang_vel_scale], wrapwidths=[self.num_tiles, False])
return np.array(tiles)
```
Run the following code to verify `PendulumTileCoder`.
```
# Do not modify this cell!
## Test Code for PendulumTileCoder ##
# Your tile coder should also work for other num. tilings and num. tiles
test_obs = [[-np.pi, 0], [-np.pi, 0.5], [np.pi, 0], [np.pi, -0.5], [0, 1]]
pdtc = PendulumTileCoder(iht_size=4096, num_tilings=8, num_tiles=4)
result=[]
for obs in test_obs:
angle, ang_vel = obs
tiles = pdtc.get_tiles(angle=angle, ang_vel=ang_vel)
result.append(tiles)
for tiles in result:
print(tiles)
```
**Expected output**:
[0 1 2 3 4 5 6 7]
[0 1 2 3 4 8 6 7]
[0 1 2 3 4 5 6 7]
[ 9 1 2 10 4 5 6 7]
[11 12 13 14 15 16 17 18]
## Section 2: Create Average Reward Softmax Actor-Critic Agent
Now that we have implemented `PendulumTileCoder`, let's create the agent that interacts with the environment. We will implement the same average reward Actor-Critic algorithm presented in the videos.
This agent has two components: an Actor and a Critic. The Actor learns a parameterized policy while the Critic learns a state-value function. The environment has discrete actions; your Actor implementation will use a softmax policy with exponentiated action-preferences. The Actor learns with the sample-based estimate for the gradient of the average reward objective. The Critic learns using the average reward version of the semi-gradient TD(0) algorithm.
In this section, you will be implementing `agent_start` and `agent_step`, along with the helper functions they rely on.
## Section 2-1: Implement Helper Functions
Let's first define a couple of useful helper functions.
## Section 2-1a: Compute Softmax Probability
In this part you will implement `compute_softmax_prob`.
This function computes softmax probability for all actions, given actor weights `actor_w` and active tiles `tiles`. This function will be later used in `agent_policy` to sample appropriate action.
First, recall how the softmax policy is represented from state-action preferences: $\large \pi(a|s, \mathbf{\theta}) \doteq \frac{e^{h(s,a,\mathbf{\theta})}}{\sum_{b}e^{h(s,b,\mathbf{\theta})}}$.
**state-action preference** is defined as $h(s,a, \mathbf{\theta}) \doteq \mathbf{\theta}^T \mathbf{x}_h(s,a)$.
Given active tiles `tiles` for state `s`, state-action preference $\mathbf{\theta}^T \mathbf{x}_h(s,a)$ can be computed by `actor_w[a][tiles].sum()`.
We will also use the **exp-normalize trick** in order to avoid possible numerical overflow.
Consider the following:
$\large \pi(a|s, \mathbf{\theta}) \doteq \frac{e^{h(s,a,\mathbf{\theta})}}{\sum_{b}e^{h(s,b,\mathbf{\theta})}} = \frac{e^{h(s,a,\mathbf{\theta}) - c} e^c}{\sum_{b}e^{h(s,b,\mathbf{\theta}) - c} e^c} = \frac{e^{h(s,a,\mathbf{\theta}) - c}}{\sum_{b}e^{h(s,b,\mathbf{\theta}) - c}}$
$\pi(\cdot|s, \mathbf{\theta})$ is shift-invariant, and the policy remains the same when we subtract a constant $c \in \mathbb{R}$ from state-action preferences.
Normally we use $c = \max_b h(s,b, \mathbf{\theta})$, to prevent any overflow due to exponentiating large numbers.
```
# [Graded]
def compute_softmax_prob(actor_w, tiles):
"""
Computes softmax probability for all actions
Args:
actor_w - np.array, an array of actor weights
tiles - np.array, an array of active tiles
Returns:
softmax_prob - np.array, an array of size equal to num. actions, and sums to 1.
"""
# First compute the list of state-action preferences (1~2 lines)
# state_action_preferences = ? (list of size 3)
state_action_preferences = []
### START CODE HERE ###
for act in range(len(actor_w)):
state_action_preferences.append(actor_w[act][tiles].sum())
### END CODE HERE ###
# Set the constant c by finding the maximum of state-action preferences (use np.max) (1 line)
# c = ? (float)
### START CODE HERE ###
c = np.max(state_action_preferences)
### END CODE HERE ###
# Compute the numerator by subtracting c from state-action preferences and exponentiating it (use np.exp) (1 line)
# numerator = ? (list of size 3)
### START CODE HERE ###
numerator = np.exp(state_action_preferences - c)
### END CODE HERE ###
# Next compute the denominator by summing the values in the numerator (use np.sum) (1 line)
# denominator = ? (float)
### START CODE HERE ###
denominator = np.sum(numerator)
### END CODE HERE ###
# Create a probability array by dividing each element in numerator array by denominator (1 line)
# We will store this probability array in self.softmax_prob as it will be useful later when updating the Actor
# softmax_prob = ? (list of size 3)
### START CODE HERE ###
softmax_prob = numerator/denominator
### END CODE HERE ###
return softmax_prob
```
Run the following code to verify `compute_softmax_prob`.
We will test the method by building a softmax policy from state-action preferences [-1,1,2].
The sampling probability should then roughly match $[\frac{e^{-1}}{e^{-1}+e^1+e^2}, \frac{e^{1}}{e^{-1}+e^1+e^2}, \frac{e^2}{e^{-1}+e^1+e^2}] \approx$ [0.0351, 0.2595, 0.7054]
```
# Do not modify this cell!
## Test Code for compute_softmax_prob() ##
# set tile-coder
iht_size = 4096
num_tilings = 8
num_tiles = 8
test_tc = PendulumTileCoder(iht_size=iht_size, num_tilings=num_tilings, num_tiles=num_tiles)
num_actions = 3
actions = list(range(num_actions))
actor_w = np.zeros((len(actions), iht_size))
# setting actor weights such that state-action preferences are always [-1, 1, 2]
actor_w[0] = -1./num_tilings
actor_w[1] = 1./num_tilings
actor_w[2] = 2./num_tilings
# obtain active_tiles from state
state = [-np.pi, 0.]
angle, ang_vel = state
active_tiles = test_tc.get_tiles(angle, ang_vel)
# compute softmax probability
softmax_prob = compute_softmax_prob(actor_w, active_tiles)
print('softmax probability: {}'.format(softmax_prob))
```
**Expected Output:**
softmax probability: [0.03511903 0.25949646 0.70538451]
## Section 2-2: Implement Agent Methods
Now that we have implemented the helper functions, let's create the agent. `agent_init()` initializes all the variables that the agent will need. In this part, you will implement `agent_start()` and `agent_step()`. We do not need to implement `agent_end()` because there is no termination in our continuing task.
`compute_softmax_prob()` is used in `agent_policy()`, which in turn is used in `agent_start()` and `agent_step()`. We have implemented `agent_policy()` for you.
When performing updates to the Actor and Critic, recall their respective updates in the Actor-Critic algorithm video.
We approximate $q_\pi$ in the Actor update using the one-step bootstrapped return ($R_{t+1} - \bar{R} + \hat{v}(S_{t+1}, \mathbf{w})$) minus the current state-value ($\hat{v}(S_{t}, \mathbf{w})$), which is exactly the TD error $\delta$.
$\delta_t = R_{t+1} - \bar{R} + \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_{t}, \mathbf{w}) \hspace{6em} (1)$
**Average Reward update rule**: $\bar{R} \leftarrow \bar{R} + \alpha^{\bar{R}}\delta \hspace{4.3em} (2)$
**Critic weight update rule**: $\mathbf{w} \leftarrow \mathbf{w} + \alpha^{\mathbf{w}}\delta\nabla \hat{v}(s,\mathbf{w}) \hspace{2.5em} (3)$
**Actor weight update rule**: $\mathbf{\theta} \leftarrow \mathbf{\theta} + \alpha^{\mathbf{\theta}}\delta\nabla \ln \pi(A|S,\mathbf{\theta}) \hspace{1.4em} (4)$
However, since we are using linear function approximation and parameterizing a softmax policy, the above update rule can be further simplified using:
$\nabla \hat{v}(s,\mathbf{w}) = \mathbf{x}(s) \hspace{14.2em} (5)$
$\nabla \ln \pi(A|S,\mathbf{\theta}) = \mathbf{x}_h(s,a) - \sum_b \pi(b|s, \mathbf{\theta})\mathbf{x}_h(s,b) \hspace{3.3em} (6)$
```
# [Graded]
class ActorCriticSoftmaxAgent(BaseAgent):
def __init__(self):
self.rand_generator = None
self.actor_step_size = None
self.critic_step_size = None
self.avg_reward_step_size = None
self.tc = None
self.avg_reward = None
self.critic_w = None
self.actor_w = None
self.actions = None
self.softmax_prob = None
self.prev_tiles = None
self.last_action = None
def agent_init(self, agent_info={}):
"""Setup for the agent called when the experiment first starts.
Set parameters needed to setup the semi-gradient TD(0) state aggregation agent.
Assume agent_info dict contains:
{
"iht_size": int
"num_tilings": int,
"num_tiles": int,
"actor_step_size": float,
"critic_step_size": float,
"avg_reward_step_size": float,
"num_actions": int,
"seed": int
}
"""
# set random seed for each run
self.rand_generator = np.random.RandomState(agent_info.get("seed"))
iht_size = agent_info.get("iht_size")
num_tilings = agent_info.get("num_tilings")
num_tiles = agent_info.get("num_tiles")
# initialize self.tc to the tile coder we created
self.tc = PendulumTileCoder(iht_size=iht_size, num_tilings=num_tilings, num_tiles=num_tiles)
# set step-size accordingly (we normally divide actor and critic step-size by num. tilings (p.217-218 of textbook))
self.actor_step_size = agent_info.get("actor_step_size")/num_tilings
self.critic_step_size = agent_info.get("critic_step_size")/num_tilings
self.avg_reward_step_size = agent_info.get("avg_reward_step_size")
self.actions = list(range(agent_info.get("num_actions")))
# Set initial values of average reward, actor weights, and critic weights
# We initialize actor weights to three times the iht_size.
# Recall this is because we need to have one set of weights for each of the three actions.
self.avg_reward = 0.0
self.actor_w = np.zeros((len(self.actions), iht_size))
self.critic_w = np.zeros(iht_size)
self.softmax_prob = None
self.prev_tiles = None
self.last_action = None
def agent_policy(self, active_tiles):
""" policy of the agent
Args:
active_tiles (Numpy array): active tiles returned by tile coder
Returns:
The action selected according to the policy
"""
# compute softmax probability
softmax_prob = compute_softmax_prob(self.actor_w, active_tiles)
# Sample action from the softmax probability array
# self.rand_generator.choice() selects an element from the array with the specified probability
chosen_action = self.rand_generator.choice(self.actions, p=softmax_prob)
# save softmax_prob as it will be useful later when updating the Actor
self.softmax_prob = softmax_prob
return chosen_action
def agent_start(self, state):
"""The first method called when the experiment starts, called after
the environment starts.
Args:
state (Numpy array): the state from the environment's env_start function.
Returns:
The first action the agent takes.
"""
angle, ang_vel = state
### Use self.tc to get active_tiles using angle and ang_vel (2 lines)
# set current_action by calling self.agent_policy with active_tiles
# active_tiles = ?
# current_action = ?
### START CODE HERE ###
active_tiles = self.tc.get_tiles(angle = angle, ang_vel = ang_vel)
current_action = self.agent_policy(active_tiles)
### END CODE HERE ###
self.last_action = current_action
self.prev_tiles = np.copy(active_tiles)
return self.last_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state from the environment's step based on
where the agent ended up after the
last step.
Returns:
The action the agent is taking.
"""
angle, ang_vel = state
### Use self.tc to get active_tiles using angle and ang_vel (1 line)
# active_tiles = ?
### START CODE HERE ###
active_tiles = self.tc.get_tiles(angle = angle, ang_vel = ang_vel)
### END CODE HERE ###
### Compute delta using Equation (1) (1 line)
# delta = ?
### START CODE HERE ###
delta = reward - self.avg_reward + np.sum(self.critic_w[active_tiles]) - np.sum(self.critic_w[self.prev_tiles])
### END CODE HERE ###
### update average reward using Equation (2) (1 line)
# self.avg_reward += ?
### START CODE HERE ###
self.avg_reward += self.avg_reward_step_size*delta
### END CODE HERE ###
# update critic weights using Equation (3) and (5) (1 line)
# self.critic_w[self.prev_tiles] += ?
### START CODE HERE ###
# grads = np.ones(self.critic_w[self.prev_tiles].shape)
self.critic_w[self.prev_tiles] += self.critic_step_size*delta
### END CODE HERE ###
# update actor weights using Equation (4) and (6)
# We use self.softmax_prob saved from the previous timestep
# We leave it as an exercise to verify that the code below corresponds to the equation.
for a in self.actions:
if a == self.last_action:
self.actor_w[a][self.prev_tiles] += self.actor_step_size * delta * (1 - self.softmax_prob[a])
else:
self.actor_w[a][self.prev_tiles] += self.actor_step_size * delta * (0 - self.softmax_prob[a])
### set current_action by calling self.agent_policy with active_tiles (1 line)
# current_action = ?
### START CODE HERE ###
current_action = self.agent_policy(active_tiles)
### END CODE HERE ###
self.prev_tiles = active_tiles
self.last_action = current_action
return self.last_action
def agent_message(self, message):
if message == 'get avg reward':
return self.avg_reward
```
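The actor update above leaves verifying the `(1 - softmax_prob[a])` / `(0 - softmax_prob[a])` factors against Equation (6) as an exercise. One way to convince yourself is a finite-difference sketch (the weight shapes and active tiles below are arbitrary toy values): for linear tile features, the derivative of $\ln \pi(a|s,\mathbf{\theta})$ with respect to an active-tile weight of action $b$ should equal $\mathbb{1}\{b = a\} - \pi(b|s,\mathbf{\theta})$.

```python
import numpy as np

rng = np.random.RandomState(0)
theta = rng.randn(3, 16) * 0.1   # toy actor weights: 3 actions, 16 features
tiles = np.array([0, 1, 2])      # arbitrary active tiles
a = 1                            # action whose log-probability we differentiate

def log_pi(theta, tiles, a):
    prefs = theta[:, tiles].sum(axis=1)
    c = prefs.max()              # exp-normalize trick
    return prefs[a] - c - np.log(np.exp(prefs - c).sum())

prefs = theta[:, tiles].sum(axis=1)
pi = np.exp(prefs - prefs.max())
pi /= pi.sum()

eps = 1e-6
errs = []
for b in range(3):
    bumped = theta.copy()
    bumped[b, tiles[0]] += eps   # perturb one active-tile weight of action b
    fd = (log_pi(bumped, tiles, a) - log_pi(theta, tiles, a)) / eps
    analytic = (1.0 if b == a else 0.0) - pi[b]
    errs.append(abs(fd - analytic))
```

The finite-difference derivative matches the indicator-minus-probability form, which is precisely what `agent_step` multiplies by $\alpha^{\mathbf{\theta}}\delta$.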
Run the following code to verify `agent_start()`.
Although there is randomness due to `self.rand_generator.choice()` in `agent_policy()`, we control the seed so your output should match the expected output.
```
# Do not modify this cell!
## Test Code for agent_start()##
agent_info = {"iht_size": 4096,
"num_tilings": 8,
"num_tiles": 8,
"actor_step_size": 1e-1,
"critic_step_size": 1e-0,
"avg_reward_step_size": 1e-2,
"num_actions": 3,
"seed": 99}
test_agent = ActorCriticSoftmaxAgent()
test_agent.agent_init(agent_info)
state = [-np.pi, 0.]
test_agent.agent_start(state)
print("agent active_tiles: {}".format(test_agent.prev_tiles))
print("agent selected action: {}".format(test_agent.last_action))
```
**Expected output**:
agent active_tiles: [0 1 2 3 4 5 6 7]
agent selected action: 2
Run the following code to verify `agent_step()`
```
# Do not modify this cell!
## Test Code for agent_step() ##
# Make sure agent_start() and agent_policy() are working correctly first.
# agent_step() should work correctly for other arbitrary state transitions in addition to this test case.
env_info = {"seed": 99}
agent_info = {"iht_size": 4096,
"num_tilings": 8,
"num_tiles": 8,
"actor_step_size": 1e-1,
"critic_step_size": 1e-0,
"avg_reward_step_size": 1e-2,
"num_actions": 3,
"seed": 99}
test_env = PendulumEnvironment
test_agent = ActorCriticSoftmaxAgent
rl_glue = RLGlue(test_env, test_agent)
rl_glue.rl_init(agent_info, env_info)
# start env/agent
rl_glue.rl_start()
rl_glue.rl_step()
print("agent next_action: {}".format(rl_glue.agent.last_action))
print("agent avg reward: {}\n".format(rl_glue.agent.avg_reward))
print("agent first 10 values of actor weights[0]: \n{}\n".format(rl_glue.agent.actor_w[0][:10]))
print("agent first 10 values of actor weights[1]: \n{}\n".format(rl_glue.agent.actor_w[1][:10]))
print("agent first 10 values of actor weights[2]: \n{}\n".format(rl_glue.agent.actor_w[2][:10]))
print("agent first 10 values of critic weights: \n{}".format(rl_glue.agent.critic_w[:10]))
```
**Expected output**:
agent next_action: 1
agent avg reward: -0.03139092653589793
agent first 10 values of actor weights[0]:
[0.01307955 0.01307955 0.01307955 0.01307955 0.01307955 0.01307955
0.01307955 0.01307955 0. 0. ]
agent first 10 values of actor weights[1]:
[0.01307955 0.01307955 0.01307955 0.01307955 0.01307955 0.01307955
0.01307955 0.01307955 0. 0. ]
agent first 10 values of actor weights[2]:
[-0.02615911 -0.02615911 -0.02615911 -0.02615911 -0.02615911 -0.02615911
-0.02615911 -0.02615911 0. 0. ]
agent first 10 values of critic weights:
[-0.39238658 -0.39238658 -0.39238658 -0.39238658 -0.39238658 -0.39238658
-0.39238658 -0.39238658 0. 0. ]
## Section 3: Run Experiment
Now that we've implemented all the components of the environment and agent, let's run an experiment!
We want to see whether our agent is successful at learning the optimal policy of balancing the pendulum upright. We will plot total return over time, as well as the exponential average of the reward over time. We also do multiple runs in order to be confident about our results.
The experiment/plot code is provided in the cell below.
```
# Do not modify this cell!
# Define function to run experiment
def run_experiment(environment, agent, environment_parameters, agent_parameters, experiment_parameters):
rl_glue = RLGlue(environment, agent)
# sweep agent parameters
for num_tilings in agent_parameters['num_tilings']:
for num_tiles in agent_parameters["num_tiles"]:
for actor_ss in agent_parameters["actor_step_size"]:
for critic_ss in agent_parameters["critic_step_size"]:
for avg_reward_ss in agent_parameters["avg_reward_step_size"]:
env_info = {}
agent_info = {"num_tilings": num_tilings,
"num_tiles": num_tiles,
"actor_step_size": actor_ss,
"critic_step_size": critic_ss,
"avg_reward_step_size": avg_reward_ss,
"num_actions": agent_parameters["num_actions"],
"iht_size": agent_parameters["iht_size"]}
# results to save
return_per_step = np.zeros((experiment_parameters["num_runs"], experiment_parameters["max_steps"]))
exp_avg_reward_per_step = np.zeros((experiment_parameters["num_runs"], experiment_parameters["max_steps"]))
# using tqdm we visualize progress bars
for run in tqdm(range(1, experiment_parameters["num_runs"]+1)):
env_info["seed"] = run
agent_info["seed"] = run
rl_glue.rl_init(agent_info, env_info)
rl_glue.rl_start()
num_steps = 0
total_return = 0.
return_arr = []
# exponential average reward without initial bias
exp_avg_reward = 0.0
exp_avg_reward_ss = 0.01
exp_avg_reward_normalizer = 0
while num_steps < experiment_parameters['max_steps']:
num_steps += 1
rl_step_result = rl_glue.rl_step()
reward = rl_step_result[0]
total_return += reward
return_arr.append(reward)
avg_reward = rl_glue.rl_agent_message("get avg reward")
exp_avg_reward_normalizer = exp_avg_reward_normalizer + exp_avg_reward_ss * (1 - exp_avg_reward_normalizer)
ss = exp_avg_reward_ss / exp_avg_reward_normalizer
exp_avg_reward += ss * (reward - exp_avg_reward)
return_per_step[run-1][num_steps-1] = total_return
exp_avg_reward_per_step[run-1][num_steps-1] = exp_avg_reward
if not os.path.exists('results'):
os.makedirs('results')
save_name = "ActorCriticSoftmax_tilings_{}_tiledim_{}_actor_ss_{}_critic_ss_{}_avg_reward_ss_{}".format(num_tilings, num_tiles, actor_ss, critic_ss, avg_reward_ss)
total_return_filename = "results/{}_total_return.npy".format(save_name)
exp_avg_reward_filename = "results/{}_exp_avg_reward.npy".format(save_name)
np.save(total_return_filename, return_per_step)
np.save(exp_avg_reward_filename, exp_avg_reward_per_step)
```
## Section 3-1: Run Experiment with 32 tilings, size 8x8
We will first test our implementation using 32 tilings, each of size 8x8. We saw from the earlier assignment using tile-coding that many tilings promote fine discrimination, and broad tiles allow more generalization.
We conducted a wide sweep of meta-parameters in order to find the best meta-parameters for our Pendulum Swing-up task.
We swept over the following range of meta-parameters and the best meta-parameter is boldfaced below:
actor step-size: $\{\frac{2^{-6}}{32}, \frac{2^{-5}}{32}, \frac{2^{-4}}{32}, \frac{2^{-3}}{32}, \mathbf{\frac{2^{-2}}{32}}, \frac{2^{-1}}{32}, \frac{2^{0}}{32}, \frac{2^{1}}{32}\}$
critic step-size: $\{\frac{2^{-4}}{32}, \frac{2^{-3}}{32}, \frac{2^{-2}}{32}, \frac{2^{-1}}{32}, \frac{2^{0}}{32}, \mathbf{\frac{2^{1}}{32}}, \frac{3}{32}, \frac{2^{2}}{32}\}$
avg reward step-size: $\{2^{-11}, 2^{-10} , 2^{-9} , 2^{-8}, 2^{-7}, \mathbf{2^{-6}}, 2^{-5}, 2^{-4}, 2^{-3}, 2^{-2}\}$
We will do 50 runs using the above best meta-parameter setting to verify your agent.
Note that running the experiment cell below will take **_approximately 5 min_**.
```
# Do not modify this cell!
#### Run Experiment
# Experiment parameters
experiment_parameters = {
"max_steps" : 20000,
"num_runs" : 50
}
# Environment parameters
environment_parameters = {}
# Agent parameters
# Each element is an array because we will be later sweeping over multiple values
# actor and critic step-sizes are divided by num. tilings inside the agent
agent_parameters = {
"num_tilings": [32],
"num_tiles": [8],
"actor_step_size": [2**(-2)],
"critic_step_size": [2**1],
"avg_reward_step_size": [2**(-6)],
"num_actions": 3,
"iht_size": 4096
}
current_env = PendulumEnvironment
current_agent = ActorCriticSoftmaxAgent
run_experiment(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)
plot_script.plot_result(agent_parameters, 'results')
```
Run the following code to verify your experimental result.
```
# Do not modify this cell!
## Test Code for experimental result ##
filename = 'ActorCriticSoftmax_tilings_32_tiledim_8_actor_ss_0.25_critic_ss_2_avg_reward_ss_0.015625_exp_avg_reward'
agent_exp_avg_reward = np.load('results/{}.npy'.format(filename))
result_med = np.median(agent_exp_avg_reward, axis=0)
answer_range = np.load('correct_npy/exp_avg_reward_answer_range.npy')
upper_bound = answer_range.item()['upper-bound']
lower_bound = answer_range.item()['lower-bound']
# check if result is within answer range
all_correct = np.all(result_med <= upper_bound) and np.all(result_med >= lower_bound)
if all_correct:
print("Your experiment results are correct!")
else:
print("Your experiment results do not match ours. Please check that you have implemented all methods correctly.")
```
## Section 3-2: Performance Metric and Meta-Parameter Sweeps
### Performance Metric
To evaluate performance, we plotted both the return and exponentially weighted average reward over time.
In the first plot, the return is negative because the reward is negative in every state except when the pendulum is in the upright position. As the policy improves over time, the agent accumulates less negative reward, and thus the return decreases more slowly. Towards the end the slope is almost flat, indicating the policy has stabilized to a good policy. Using this plot alone, however, it can be difficult to tell whether the agent has learned an optimal policy. The near-optimal policy in this Pendulum Swing-up Environment is to maintain the pendulum in the upright position indefinitely, receiving near-0 reward at each time step. We would have to examine the slope of the curve, but it can be hard to compare the slopes of different curves.
The second plot using exponential average reward gives a better visualization. We can see that towards the end the value is near 0, indicating it is getting near 0 reward at each time step. Here, the exponentially weighted average reward shouldn't be confused with the agent’s internal estimate of the average reward. To be more specific, we used an exponentially weighted average of the actual reward without initial bias (Refer to Exercise 2.7 from the textbook (p.35) to read more about removing the initial bias). If we used sample averages instead, later rewards would have decreasing impact on the average and would not be able to represent the agent's performance with respect to its current policy effectively.
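The bias-corrected exponential average described above can be sketched in a few lines, mirroring the normalizer logic already used in `run_experiment`:

```python
def unbiased_exp_avg(rewards, step_size=0.01):
    # Exponentially weighted average without initial bias (textbook
    # Exercise 2.7): the effective step-size starts at exactly 1 and
    # decays toward `step_size`, so the arbitrary initial value 0
    # never contaminates the estimate.
    avg, normalizer = 0.0, 0.0
    trace = []
    for r in rewards:
        normalizer += step_size * (1 - normalizer)
        avg += (step_size / normalizer) * (r - avg)
        trace.append(avg)
    return trace
```

On the first step the effective step-size is 1, so the estimate jumps straight to the first reward; afterwards it tracks recent rewards with roughly constant weight, unlike a sample average whose later rewards have vanishing influence.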
It is easier to see whether the agent has learned a good policy in the second plot than the first plot. If the learned policy is optimal, the exponential average reward would be close to 0.
Furthermore, how did we pick the best meta-parameter from the sweeps? A common method would be to pick the meta-parameter that results in the largest Area Under the Curve (AUC). However, this is not always what we want. We want to find a set of meta-parameters that learns a good final policy. When using AUC as the criteria, we may pick meta-parameters that allows the agent to learn fast but converge to a worse policy. In our case, we selected the meta-parameter setting that obtained the most exponential average reward over the last 5000 time steps.
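The selection criterion described above can be written as a one-line score; the array shape follows the experiment code (runs × steps), and the function name is our own:

```python
import numpy as np

def final_performance(exp_avg_reward_per_step, last_n=5000):
    # Score a meta-parameter setting by its exponential average reward
    # over the final `last_n` time steps, averaged over all runs.
    return exp_avg_reward_per_step[:, -last_n:].mean()
```

Ranking sweep settings by this score favors the final policy's quality over early learning speed, unlike ranking by total area under the curve.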
### Parameter Sensitivity
In addition to finding the best meta-parameters, it is equally important to plot **parameter sensitivity curves** to understand how our algorithm behaves.
In our simulated Pendulum problem we can extensively test the agent with different meta-parameter configurations, but doing so in real life would be quite expensive. Parameter sensitivity curves give us insight into how our algorithm might behave in general: they help us identify a good range for each meta-parameter, and show how sensitive performance is to each one.
Here are the sensitivity curves for the three step-sizes we swept over:
<img src="data/sensitivity_combined.png" alt="Drawing" style="width: 1000px;"/>
On the y-axis is the performance measure: the exponentially weighted average reward over the last 5000 time steps, averaged over 50 different runs. On the x-axis is the meta-parameter being tested; the remaining meta-parameters are set to the values that obtain the best performance.

The curves are quite rounded, indicating the agent performs well over this wide range of values and is not overly sensitive to these meta-parameters. Comparing the y-axis values, performance is noticeably less sensitive to the average-reward step-size than to the actor and critic step-sizes.
But how do we know that we have covered a sufficiently wide range of meta-parameters? In these sensitivity curves, the best value should sit in the middle of the swept range, not at its edge; a best value on the edge suggests there may be better values outside the range we swept.
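A simple sanity check for this (a hypothetical helper, not part of the assignment): flag any sweep whose best-performing value sits on the edge of the swept range:

```python
def best_inside_sweep(param_values, performances):
    """Return True if the best-performing parameter value is strictly inside
    the swept range rather than at either edge."""
    best = max(range(len(performances)), key=performances.__getitem__)
    return 0 < best < len(param_values) - 1

# performance still improving at the largest value: the sweep should be extended
print(best_inside_sweep([2**-5, 2**-4, 2**-3], [-2.0, -1.5, -0.9]))   # False
print(best_inside_sweep([2**-5, 2**-4, 2**-3], [-2.0, -0.9, -1.5]))   # True
```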
## Wrapping up
### **Congratulations!** You have successfully implemented Course 3 Programming Assignment 4.
You have implemented your own **Average Reward Actor-Critic with Softmax Policy** agent in the Pendulum Swing-up Environment. You implemented the environment based on information about the state/action space and transition dynamics. Furthermore, you have learned how to implement an agent in a continuing task using the average reward formulation. We parameterized the policy using softmax of action-preferences over discrete action spaces, and used Actor-Critic to learn the policy.
To summarize, you have learned how to:
1. Implement softmax actor-critic agent on a continuing task using the average reward formulation.
2. Understand how to parameterize the policy as a function to learn, in a discrete action environment.
3. Understand how to (approximately) sample the gradient of this objective to update the actor.
4. Understand how to update the critic using differential TD error.
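As a compact reminder of points 3-4, the differential TD error and the corresponding critic and average-reward updates can be sketched with linear function approximation (an illustrative sketch; the feature vectors and step-sizes below are made up, not the assignment's exact code):

```python
import numpy as np

def critic_update(reward, x_s, x_next, w, avg_reward,
                  avg_reward_step=0.01, critic_step=0.1):
    """One step of the average-reward critic update.

    The differential TD error is delta = R - avg_reward + v(S') - v(S),
    with v(s) = w . x(s) a linear value function.
    """
    delta = reward - avg_reward + w @ x_next - w @ x_s
    avg_reward = avg_reward + avg_reward_step * delta   # update average-reward estimate
    w = w + critic_step * delta * x_s                   # semi-gradient update of the critic
    return w, avg_reward, delta

w, r_bar, delta = critic_update(reward=1.0,
                                x_s=np.array([1.0, 0.0]),
                                x_next=np.array([0.0, 1.0]),
                                w=np.zeros(2), avg_reward=0.0)
```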
# Downloads markdown generator for academicpages
Takes a TSV of downloads with metadata and converts them for use with [academicpages.github.io](https://academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core Python code is also in `downloads.py`. Run either from the `markdown_generator` folder after replacing `downloads.tsv` with one containing your data.
TODO: Make this work with BibTeX and other databases, rather than Stuart's non-standard TSV format and citation style.
```
import pandas as pd
import os
```
## Data format
The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.
- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to "Talk"
- `date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper.
- The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/downloads/YYYY-MM-DD-[url_slug]`
- The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames
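Because the `(date, url_slug)` pair becomes the output filename, duplicated pairs silently overwrite each other. A quick pandas sanity check (a sketch; the column names follow the format described above):

```python
import pandas as pd

def find_filename_collisions(df):
    """Return rows whose (date, url_slug) pair is duplicated -- these would map
    to the same generated .md filename."""
    dupes = df.duplicated(subset=["date", "url_slug"], keep=False)
    return df[dupes]

example = pd.DataFrame({
    "date": ["2013-03-01", "2013-03-01", "2014-02-01"],
    "url_slug": ["tutorial-1", "tutorial-1", "talk-2"],
})
print(find_filename_collisions(example))  # the two tutorial-1 rows
```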
This is how the raw file looks (it doesn't look pretty; use a spreadsheet or other program to edit and create it).
```
!cat downloads.tsv
```
## Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
```
downloads = pd.read_csv("downloads.tsv", sep="\t", header=0)
downloads
```
## Escape special characters
YAML is very picky about what counts as a valid string, so we replace single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them less readable in raw format, but they are parsed and rendered nicely.
```
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&#39;"
}

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c, c) for c in text)
    else:
        return "False"
```
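As a quick check of the escaping behaviour (restating the table and function so this snippet runs standalone):

```python
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&#39;"
}

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c, c) for c in text)
    else:
        return "False"

print(html_escape('Tom & "Jerry"'))  # Tom &amp; &quot;Jerry&quot;
```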
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then concatenates a big string (`md`) that contains the markdown for each entry. It writes the YAML metadata first, then the description for the individual page.
```
loc_dict = {}

for row, item in downloads.iterrows():

    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: downloads" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /downloads/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    if len(str(item.date)) > 3:
        md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += "---\n"

    if len(str(item.talk_url)) > 3:
        md += "\n[More information here](" + item.talk_url + ")\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"

    md_filename = os.path.basename(md_filename)
    # print(md)

    with open("../_downloads/" + md_filename, 'w') as f:
        f.write(md)
```
These files are in the downloads directory, one directory below where we're working from.
```
!ls ../_downloads
!cat ../_downloads/2013-03-01-tutorial-1.md
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# 01. Train in the Notebook & Deploy Model to ACI
* Load workspace
* Train a simple regression model directly in the Notebook python kernel
* Record run history
* Find the best model in run history and download it.
* Deploy the model as an Azure Container Instance (ACI)
## Prerequisites
1. Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't.
2. Install the following prerequisite libraries into your conda environment and restart the notebook.
```shell
(myenv) $ conda install -y matplotlib tqdm scikit-learn
```
3. Check that ACI is registered for your Azure Subscription.
```
!az provider show -n Microsoft.ContainerInstance -o table
```
If ACI is not registered, run the following command to register it. Note that you have to be a subscription owner, or this command will fail.
```
!az provider register -n Microsoft.ContainerInstance
```
## Validate Azure ML SDK installation and get version number for debugging purposes
```
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
```
## Set experiment name
Choose a name for experiment.
```
experiment_name = 'train-in-notebook'
```
## Start a training run in local Notebook
```
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
```
### Train a simple Ridge model
Train a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
```
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
```
### Add experiment tracking
Now, let's add Azure ML experiment logging, and upload persisted model into run record as well.
```
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
```
We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
```
run
```
### Simple parameter sweep
Sweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
```
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# alpha values from 0.0 up to (but not including) 1.0, in 0.05 steps
alphas = np.arange(0.0, 1.0, 0.05)

# try a bunch of alpha values in a Ridge regression model
for alpha in tqdm(alphas):
    # create a bunch of runs, each training a model with a different alpha value
    with experiment.start_logging() as run:
        # use the Ridge algorithm to build a regression model
        reg = Ridge(alpha=alpha)
        reg.fit(X=data["train"]["X"], y=data["train"]["y"])
        preds = reg.predict(X=data["test"]["X"])
        mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)

        # log alpha, mean squared error and feature names in run history
        run.log(name="alpha", value=alpha)
        run.log(name="mse", value=mse)
        run.log_list(name="columns", value=columns)

        with open(model_name, "wb") as file:
            joblib.dump(value=reg, filename=file)

        # upload the serialized model into the run history record
        run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)

        # delete the local copy since it is already uploaded to run history
        os.remove(path=model_name)

# now let's take a look at the experiment in the Azure portal
experiment
```
## Select best model from the experiment
Load all experiment run metrics recursively from the experiment into a dictionary object.
```
runs = {}
run_metrics = {}

for r in tqdm(experiment.get_runs()):
    metrics = r.get_metrics()
    if 'mse' in metrics.keys():
        runs[r.id] = r
        run_metrics[r.id] = metrics
```
Now find the run with the lowest Mean Squared Error value
```
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
```
You can add tags to your runs to make them easier to catalog
```
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
```
### Plot MSE over alpha
Let's observe the best model visually by plotting the MSE values over alpha values:
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
```
## Register the best model
Find the model file saved in the run record of best run.
```
for f in best_run.get_file_names():
    print(f)
```
Now we can register this model in the model registry of the workspace
```
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
```
Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time.
```
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
    print(m.name, m.version)
```
You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
```
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
    os.remove('model.pkl')
# download the model
model.download(target_dir="./")
```
## Scoring script
Now we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pycat ./score.py` in a cell to view the file.
The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes a prediction when the web service is called. Pay special attention to how the model is loaded in the `init()` function: when the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where it was placed.
```
with open('./score.py', 'r') as scoring_script:
    print(scoring_script.read())
```
## Create environment dependency file
We need an environment dependency file, `myenv.yml`, to specify which libraries the scoring script needs when the Docker image for the web service is built. We can create this file manually, or use the `CondaDependencies` API to create it automatically.
```
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=["scikit-learn"])
print(myenv.serialize_to_string())
with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())
```
## Deploy web service into an Azure Container Instance
The deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into Azure Container Instances as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instances](https://azure.microsoft.com/en-us/services/container-instances/).

Note that ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead; follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML.

**Note:** The web service creation can take 6-7 minutes.
```
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
```
Note that the `WebService.deploy_from_model()` function below takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`.
If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function which would register the model and then deploy.
```
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
%%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
```
## Test web service
```
print('web service is hosted in ACI:', service.scoring_uri)
```
Use the `run` API to call the web service with one row of data to get a prediction.
```
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
```
Feed the entire test set and calculate the errors (residual values).
```
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = service.run(input_data = test_samples)
residual = result - y_test
```
You can also send a raw HTTP request to test the web service.
```
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for an AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
```
## Residual graph
Plot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
```
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
```
## Delete ACI to clean up
Deleting ACI is super fast!
```
%%time
service.delete()
```
# Forecasting with sktime
In forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.
For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study.
In particular, you'll learn how to
* use statistical models to make forecasts,
* build composite machine learning models, including common techniques like reduction to regression, ensembling and pipelining.
## Preliminaries
```
import matplotlib.pyplot as plt
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
```
## Data
For this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airline
passengers per month from 1949-1960.
As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data, so we may compare forecasters against both types of model.
```
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
```
Next we will define a forecasting task.
* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use 36-step ahead forecasting horizon to evaluate forecasting performance.
* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.
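For reference, sMAPE can be written down in a few lines (a sketch mirroring the definition behind sktime's `smape_loss` at the time of writing, expressed as a fraction rather than a percentage):

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error as a fraction:
    mean of 2 * |y_pred - y_true| / (|y_true| + |y_pred|)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

print(smape([100.0, 100.0], [100.0, 50.0]))  # one perfect point, one 50% off
```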
We can split the data as follows:
```
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
```
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
```
fh = np.arange(len(y_test)) + 1
fh
```
## Forecasting
Like in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.
sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface: they are trained on a single series of data and make forecasts for the provided forecasting horizon.
### Naïve baselines
Let's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches.
1. We always predict the last value observed (in the training series),
2. We predict the last value observed in the same season.
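The two strategies are simple enough to sketch in plain numpy (illustrative only; `NaiveForecaster` below is the actual implementation we will use):

```python
import numpy as np

def naive_last(y_train, fh):
    """Strategy 1: every forecast is the last observed training value."""
    return np.repeat(y_train[-1], len(fh))

def naive_seasonal_last(y_train, fh, sp=12):
    """Strategy 2: each forecast repeats the value observed in the same season
    of the last full seasonal period (sp steps)."""
    last_season = y_train[-sp:]
    return np.array([last_season[(h - 1) % sp] for h in fh])

y = np.arange(1.0, 25.0)           # 24 monthly observations
fh = np.arange(1, 4)               # forecast 3 steps ahead
print(naive_last(y, fh))           # [24. 24. 24.]
print(naive_seasonal_last(y, fh))  # [13. 14. 15.]
```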
```
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="seasonal_last", sp=12)
forecaster.fit(y_train)
y_last_seasonal = forecaster.predict(fh)
smape_loss(y_last_seasonal, y_test)
plot_ys(y_train, y_test, y_last, y_last_seasonal,
labels=["y_train", "y_test", "last", "seasonal_last"]);
```
### Statistical forecasters
sktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.
Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
```
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
```
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
## Composite model building
sktime provides a modular API for composite model building for forecasting.
### Ensembling
Like scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
```
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
### Applying machine learning: reduction to regression
Forecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.

Reduction to regression works as follows: we first transform the data into the required tabular format by cutting the training series into windows of a fixed length and stacking them on top of each other. The target variable consists of the observation that follows each window.
sktime provides a meta-estimator for this approach, which is compatible with scikit-learn, so that we can use any scikit-learn regressor to solve our forecasting problem.
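The windowing transformation described above can be sketched directly in numpy (illustrative; sktime performs this internally):

```python
import numpy as np

def make_reduction_table(y, window_length):
    """Each row of X is a window of past values; the target for each row is the
    observation immediately after that window."""
    X = np.stack([y[i:i + window_length] for i in range(len(y) - window_length)])
    targets = y[window_length:]
    return X, targets

X, targets = make_reduction_table(np.arange(6), window_length=3)
print(X)        # rows: [0 1 2], [1 2 3], [2 3 4]
print(targets)  # [3 4 5]
```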
```
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=10, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
```
from sktime.forecasting.model_selection import SlidingWindowSplitter

cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
    print(input_window, output_window)
```
## Tuning
In the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
```
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
```
You could of course also try to tune the regressor inside `ReducedRegressionForecaster` using scikit-learn's `GridSearchCV`.
### Detrending
Note that so far the reduction approach above does not take seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.
sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
```
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender

# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)

# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train))  # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
```
### Pipelining
Let's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
```
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.
Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts.
## Dynamic forecasts
For model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step-ahead forecasts over the test set.
Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
```
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
```
For a single update, you can use the `update` method.
## Prediction intervals
So far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.
Here, we use the Theta forecasting algorithm:
```
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green",
                label=f"{(1 - alpha) * 100:.0f}% prediction intervals")
plt.legend();
```
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
conn = engine.connect()
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect = True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# inspect measurement table
inspector = inspect(engine)
columns = inspector.get_columns('Measurement')
for c in columns:
    print(c['name'], c["type"])

# inspect station table
columns = inspector.get_columns('Station')
for c in columns:
    print(c['name'], c["type"])
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Precipitation Analysis
```
import datetime as dt
# Find the most recent date in the data set.
session.query(Measurement.date).order_by(Measurement.date.desc()).first()
# Calculate the date one year from the last date in data set.
previous_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
print(previous_year)
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Perform a query to retrieve the data and precipitation scores
precipt_data = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date > "2016-08-23").\
order_by(Measurement.date.desc())
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
precipitation = pd.DataFrame(precipt_data, columns=['date','prcp']).set_index('date')
precipitation = precipitation.dropna()
precipitation.head()
# Use Pandas Plotting with Matplotlib to plot the data
precipitation.plot(rot=45)
plt.tight_layout()
plt.savefig("Precipitation_Analysis.png")
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation.describe()
```
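The one-year offset used in the cell above can be checked with the standard library alone (2017-08-23 is the most recent date returned by the query):

```python
import datetime as dt

most_recent = dt.date(2017, 8, 23)
previous_year = most_recent - dt.timedelta(days=365)
print(previous_year)  # 2016-08-23
```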
# Exploratory Station Analysis
```
# Design a query to calculate the total number of stations in the dataset
session.query(Station).distinct(Station.id).count()
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
station_data = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
active_station= pd.DataFrame(station_data, columns=['station','tobs']).set_index('station')
active_station.head()
from scipy import stats
from numpy import mean
from sqlalchemy.sql import func
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == 'USC00519281').all()
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temp_obv = session.query(Measurement.date, Measurement.tobs).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date > "2016-08-23").\
order_by(Measurement.date.desc())
temp_df= pd.DataFrame(temp_obv , columns=['date','tobs']).set_index('date')
temp_df = temp_df.dropna()
temp_df.head()
# Use Pandas Plotting with Matplotlib to plot the data
temp_df.plot(rot=45)
plt.tight_layout()
plt.savefig("Station_Analysis.png")
```
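The most-active-station query above is equivalent to a group-by count on the raw measurement rows; with a toy frame (station IDs borrowed from the notebook, row counts made up):

```python
import pandas as pd

rows = pd.DataFrame({"station": ["USC00519281", "USC00519397",
                                 "USC00519281", "USC00519281", "USC00519397"]})
# Count rows per station and sort descending, like the SQL query above
counts = rows.groupby("station").size().sort_values(ascending=False)
print(counts.to_dict())  # {'USC00519281': 3, 'USC00519397': 2}
```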
# Close session
```
# Close Session
session.close()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/sayakpaul/Handwriting-Recognizer-in-Keras/blob/main/Recognizer_KerasOCR.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## References:
* https://keras-ocr.readthedocs.io/en/latest/examples/fine_tuning_recognizer.html
## Initial setup
```
!pip install -U git+https://github.com/faustomorales/keras-ocr.git#egg=keras-ocr
!pip install -U opencv-python # We need the most recent version of OpenCV.
import matplotlib.pyplot as plt
import numpy as np
import keras_ocr
import imgaug
import os
import tensorflow as tf
print(tf.__version__)
tf.random.set_seed(42)
np.random.seed(42)
!nvidia-smi
```
## Dataset gathering
```
!wget -q https://github.com/sayakpaul/Handwriting-Recognizer-in-Keras/releases/download/v1.0.0/IAM_Words.zip
!unzip -qq IAM_Words.zip
!mkdir data
!mkdir data/words
!tar -C /content/data/words -xf IAM_Words/words.tgz
!mv IAM_Words/words.txt /content/data
!head -20 data/words.txt
```
## Create training and validation splits
```
words_list = []
words = open('/content/data/words.txt', 'r').readlines()
for line in words:
if line[0]=='#':
continue
if line.split(" ")[1]!="err": # We won't need to deal with errored entries
words_list.append(line)
len(words_list)
np.random.shuffle(words_list)
splitIdx = int(0.9 * len(words_list))
trainSamples = words_list[:splitIdx]
validationSamples = words_list[splitIdx:]
len(trainSamples), len(validationSamples)
def parse_path(file_line):
lineSplit = file_line.strip()
lineSplit = lineSplit.split(" ")
# part1/part1-part2/part1-part2-part3.png
imageName = lineSplit[0]
partI = imageName.split("-")[0]
partII = imageName.split("-")[1]
img_path = os.path.join("/content/data/words/", partI,
(partI + '-' + partII),
(imageName + ".png")
)
label = file_line.split(' ')[8:][0].strip()
if (os.path.getsize(img_path)!=0) & (label!=None):
return (img_path, None, label.lower())
train_labels = [parse_path(file_line) for file_line in trainSamples
if parse_path(file_line)!=None]
val_labels = [parse_path(file_line) for file_line in validationSamples
if parse_path(file_line)!=None]
len(train_labels), len(val_labels)
train_labels[:5]
```
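The `part1/part1-part2/image.png` directory layout that `parse_path` reconstructs can be illustrated on its own (the sample image name below is hypothetical):

```python
import os

def iam_image_path(image_name, root="/content/data/words"):
    # IAM words archive layout: part1/part1-part2/part1-part2-...-partN.png
    part1, part2 = image_name.split("-")[:2]
    return os.path.join(root, part1, part1 + "-" + part2, image_name + ".png")

print(iam_image_path("a01-000u-00-00"))
# /content/data/words/a01/a01-000u/a01-000u-00-00.png
```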
## Create data generators
```
recognizer = keras_ocr.recognition.Recognizer()
recognizer.compile()
batch_size = 8
augmenter = imgaug.augmenters.Sequential([
imgaug.augmenters.GammaContrast(gamma=(0.25, 3.0)),
])
(training_image_gen, training_steps), (validation_image_gen, validation_steps) = [
(
keras_ocr.datasets.get_recognizer_image_generator(
labels=labels,
height=recognizer.model.input_shape[1],
width=recognizer.model.input_shape[2],
alphabet=recognizer.alphabet,
augmenter=augmenter
),
len(labels) // batch_size
) for labels, augmenter in [(train_labels, augmenter), (val_labels, None)]
]
training_gen, validation_gen = [
recognizer.get_batch_generator(
image_generator=image_generator,
batch_size=batch_size
)
for image_generator in [training_image_gen, validation_image_gen]
]
image, text = next(training_image_gen)
plt.imshow(image)
plt.title(text)
plt.show()
```
[Here's](https://keras-ocr.readthedocs.io/en/latest/examples/end_to_end_training.html#generating-synthetic-data) where you can learn how the framework decides whether a character is considered illegal.
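As a rough sketch of that idea, a label is only trainable if every one of its characters appears in the recognizer's alphabet (the alphabet below is an assumption for illustration, not keras-ocr's exact default):

```python
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

def is_legal(label, alphabet=alphabet):
    # A label is usable only if every character is in the alphabet.
    return all(ch in alphabet for ch in label)

print(is_legal("hello"))  # True
print(is_legal("héllo"))  # False (accented character not in alphabet)
```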
## Model training and sample inference
```
callbacks = [
tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10, restore_best_weights=True),
]
history = recognizer.training_model.fit(
    training_gen,
    steps_per_epoch=training_steps,
    validation_steps=validation_steps,
    validation_data=validation_gen,
    callbacks=callbacks,
    epochs=1000
)
plt.figure()
plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.title("Training and Validation Loss on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss")
plt.legend(loc="lower left")
plt.show()
```
The training seems to be a bit unstable. This can likely be mitigated by using a lower learning rate.
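One common way to lower the effective learning rate over time is a step-wise exponential decay schedule; a plain-Python sketch (the starting rate and decay factor here are assumptions, not keras-ocr's defaults):

```python
def exponential_decay(initial_lr=1e-4, decay_rate=0.9, decay_every=10):
    # Returns an epoch -> learning-rate function that multiplies the
    # rate by decay_rate every decay_every epochs.
    def schedule(epoch):
        return initial_lr * decay_rate ** (epoch // decay_every)
    return schedule

sched = exponential_decay()
print(sched(0))
print(sched(10))
```

A function like this could be passed to a Keras `LearningRateScheduler` callback.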
```
image_filepath, _, actual = val_labels[1]
predicted = recognizer.recognize(image_filepath)
print(f'Predicted: {predicted}, Actual: {actual}')
_ = plt.imshow(keras_ocr.tools.read(image_filepath))
```
| github_jupyter |
```
import os
os.chdir('/home/yuke/PythonProject/DrugEmbedding/')
import warnings
warnings.simplefilter(action='ignore')
from tqdm import tnrange
import json
import numpy as np
import pandas as pd
import random
from decode import *
random.seed(1)
def recon_acc_score(configs, model, smiles_sample_lst):
match_lst = []
for i in tnrange(len(smiles_sample_lst)):
smiles_x = smiles_sample_lst[i]
mean, logv = smiles2mean(configs, smiles_x, model)
_, _, smiles_lst = latent2smiles(configs, model, z=mean.repeat(200, 1),
nsamples=1, sampling_mode='random')
if smiles_x in smiles_lst:
match_lst.append(1)
else:
match_lst.append(0)
return np.array(match_lst).mean()
```
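Stripped of the model calls, the reconstruction score computed by `recon_acc_score` is just the fraction of inputs whose SMILES string reappears among the decoded candidates (the SMILES strings below are toy examples):

```python
import numpy as np

def match_fraction(inputs, decoded_lists):
    # 1 if the original SMILES is among the decoded samples, else 0
    matches = [1 if x in decoded else 0 for x, decoded in zip(inputs, decoded_lists)]
    return np.array(matches).mean()

inputs = ["CCO", "c1ccccc1", "CC(=O)O"]
decoded = [["CCO", "CCC"], ["c1ccncc1"], ["CC(=O)O", "CC(=O)O"]]
print(match_fraction(inputs, decoded))  # 2/3
```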
# Prepare Model Reference Table
```
latent_size_lst_64 = [64, 64, 64, 64, 64, 64]
manifold_lst_64 = ['Euclidean', 'Euclidean', 'Euclidean', 'Lorentz', 'Lorentz', 'Lorentz']
exp_dir_lst_64 = ['./experiments/KDD/kdd_009', './experiments/KDD_SEED/kdd_e64_s1', './experiments/KDD_SEED/kdd_e64_s2',
'./experiments/KDD/kdd_010', './experiments/KDD_SEED/kdd_l64_s1', './experiments/KDD_SEED/kdd_l64_s2']
checkpoint_lst_64 = ['checkpoint_epoch110.model', 'checkpoint_epoch120.model', 'checkpoint_epoch120.model',
'checkpoint_epoch110.model', 'checkpoint_epoch120.model', 'checkpoint_epoch120.model']
latent_size_lst_32 = [32, 32, 32, 32, 32, 32]
manifold_lst_32 = ['Euclidean', 'Euclidean', 'Euclidean', 'Lorentz', 'Lorentz', 'Lorentz']
exp_dir_lst_32 = ['./experiments/KDD/kdd_015', './experiments/KDD_SEED/kdd_e32_s1', './experiments/KDD_SEED/kdd_e32_s2',
'./experiments/KDD/kdd_016', './experiments/KDD_SEED/kdd_l32_s1', './experiments/KDD_SEED/kdd_l32_s2']
checkpoint_lst_32 = ['checkpoint_epoch110.model', 'checkpoint_epoch130.model', 'checkpoint_epoch110.model',
'checkpoint_epoch110.model', 'checkpoint_epoch130.model', 'checkpoint_epoch110.model']
latent_size_lst_8 = [8, 8, 8, 8, 8, 8]
manifold_lst_8 = ['Euclidean', 'Euclidean', 'Euclidean', 'Lorentz', 'Lorentz', 'Lorentz']
exp_dir_lst_8 = ['./experiments/KDD/kdd_017', './experiments/KDD_SEED/kdd_e8_s1', './experiments/KDD_SEED/kdd_e8_s2',
'./experiments/KDD/kdd_018', './experiments/KDD_SEED/kdd_l8_s1', './experiments/KDD_SEED/kdd_l8_s2']
checkpoint_lst_8 = ['checkpoint_epoch110.model', 'checkpoint_epoch120.model', 'checkpoint_epoch110.model',
'checkpoint_epoch120.model', 'checkpoint_epoch120.model', 'checkpoint_epoch110.model']
latent_size_lst_4 = [4, 4, 4, 4, 4, 4]
manifold_lst_4 = ['Euclidean', 'Euclidean', 'Euclidean', 'Lorentz', 'Lorentz', 'Lorentz']
exp_dir_lst_4 = ['./experiments/KDD/kdd_019', './experiments/KDD_SEED/kdd_e4_s1', './experiments/KDD_SEED/kdd_e4_s2',
'./experiments/KDD/kdd_020', './experiments/KDD_SEED/kdd_l4_s1', './experiments/KDD_SEED/kdd_l4_s2']
checkpoint_lst_4 = ['checkpoint_epoch100.model', 'checkpoint_epoch110.model', 'checkpoint_epoch110.model',
'checkpoint_epoch100.model', 'checkpoint_epoch100.model', 'checkpoint_epoch090.model']
latent_size_lst_2 = [2, 2, 2, 2, 2, 2]
manifold_lst_2 = ['Euclidean', 'Euclidean', 'Euclidean', 'Lorentz', 'Lorentz', 'Lorentz']
exp_dir_lst_2 = ['./experiments/KDD/kdd_021', './experiments/KDD_SEED/kdd_e2_s1', './experiments/KDD_SEED/kdd_e2_s2',
'./experiments/KDD/kdd_022', './experiments/KDD_SEED/kdd_l2_s1', './experiments/KDD_SEED/kdd_l2_s2']
checkpoint_lst_2 = ['checkpoint_epoch110.model', 'checkpoint_epoch100.model', 'checkpoint_epoch090.model',
'checkpoint_epoch110.model', 'checkpoint_epoch100.model', 'checkpoint_epoch090.model']
latent_size_lst = latent_size_lst_64 + latent_size_lst_32 + latent_size_lst_8 + latent_size_lst_4 + latent_size_lst_2
manifold_lst = manifold_lst_64 + manifold_lst_32 + manifold_lst_8 + manifold_lst_4 + manifold_lst_2
exp_dir_lst = exp_dir_lst_64 + exp_dir_lst_32 + exp_dir_lst_8 + exp_dir_lst_4 + exp_dir_lst_2
checkpoint_lst = checkpoint_lst_64 + checkpoint_lst_32 + checkpoint_lst_8 + checkpoint_lst_4 + checkpoint_lst_2
df = pd.DataFrame.from_dict({'latent_size': latent_size_lst, 'manifold': manifold_lst,
'exp_dir': exp_dir_lst, 'checkpoint': checkpoint_lst})
df.to_csv('./experiments/RECON/model_dir.csv', index=False)
```
# Load SMILES Test Set
```
mdl_dir_df = pd.read_csv('./experiments/RECON/model_dir.csv')
mdl_dir_df['recon_acc'] = None
# load SMILES test set
exp_dir = mdl_dir_df['exp_dir'].iloc[0]
smiles_test_file = os.path.join(exp_dir, 'smiles_test.smi')
smiles_test_lst = []
with open(smiles_test_file) as file:
lines = file.read().splitlines()
idx = 0
for l in lines:
# convert to tokens
if len(l.split(" ")) == 1: # the SMILES comes from ZINC 250k
smi = l # splitlines() already removed the trailing newline
id = 'zinc_' + str(idx) # use zinc + idx as instance ID
idx += 1
else: # the SMILES comes from FDA drug
smi = l.split(" ")[0]
id = l.split(" ")[1].lower() # use FDA drug name as instance ID
idx += 1
smiles_test_lst.append(smi)
```
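The line-parsing rule used above (ZINC entries are a single token, FDA entries are a SMILES followed by a drug name) can be checked on a toy input:

```python
lines = ["CCO", "CC(=O)O aspirin"]  # toy stand-ins for smiles_test.smi
smiles, ids = [], []
for i, l in enumerate(lines):
    parts = l.split(" ")
    if len(parts) == 1:            # ZINC 250k entry: single token
        smiles.append(parts[0])
        ids.append("zinc_" + str(i))
    else:                          # FDA drug entry: SMILES + drug name
        smiles.append(parts[0])
        ids.append(parts[1].lower())
print(smiles, ids)  # ['CCO', 'CC(=O)O'] ['zinc_0', 'aspirin']
```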
# Evaluate Molecule Reconstruction Accuracy
## latent size of 64
```
idx = mdl_dir_df['latent_size'] == 64
sub_df = mdl_dir_df[idx]
for idx, row in sub_df.iterrows():
print(row)
exp_dir = row['exp_dir']
checkpoint = row['checkpoint']
config_path = os.path.join(exp_dir, 'configs.json')
checkpoint_path = os.path.join(exp_dir, checkpoint)
with open(config_path, 'r') as fp:
configs = json.load(fp)
fp.close()
configs['checkpoint'] = checkpoint
model = load_model(configs)
smiles_sample_lst = random.sample(smiles_test_lst, 1000)
recon_score = recon_acc_score(configs, model, smiles_sample_lst)
    mdl_dir_df.loc[idx, 'recon_acc'] = recon_score
print('Recon. accuracy: ' + str(recon_score))
print('------------------------------------')
```
## latent size of 32
```
idx = mdl_dir_df['latent_size'] == 32
sub_df = mdl_dir_df[idx]
for idx, row in sub_df.iterrows():
print(row)
exp_dir = row['exp_dir']
checkpoint = row['checkpoint']
config_path = os.path.join(exp_dir, 'configs.json')
checkpoint_path = os.path.join(exp_dir, checkpoint)
with open(config_path, 'r') as fp:
configs = json.load(fp)
fp.close()
configs['checkpoint'] = checkpoint
model = load_model(configs)
smiles_sample_lst = random.sample(smiles_test_lst, 1000)
recon_score = recon_acc_score(configs, model, smiles_sample_lst)
    mdl_dir_df.loc[idx, 'recon_acc'] = recon_score
print('Recon. accuracy: ' + str(recon_score))
print('------------------------------------')
```
## Structure Only Models
### Euclidean Space
```
seed_lst = [0, 1, 2]
exp_dir = './experiments/EXP_TASK/exp_task_009'
checkpoint = 'checkpoint_epoch100.model'
config_path = os.path.join(exp_dir, 'configs.json')
checkpoint_path = os.path.join(exp_dir, checkpoint)
with open(config_path, 'r') as fp:
configs = json.load(fp)
fp.close()
configs['checkpoint'] = checkpoint
model = load_model(configs)
for s in seed_lst:
random.seed(s)
smiles_sample_lst = random.sample(smiles_test_lst, 1000)
recon_score = recon_acc_score(configs, model, smiles_sample_lst)
mdl_dir_df['recon_acc'].iloc[idx] = recon_score
print('Recon. accuracy: ' + str(recon_score))
print('------------------------------------')
```
### Lorentz Space
```
seed_lst = [0, 1, 2]
exp_dir = './experiments/EXP_TASK/exp_task_010'
checkpoint = 'checkpoint_epoch075.model'
config_path = os.path.join(exp_dir, 'configs.json')
checkpoint_path = os.path.join(exp_dir, checkpoint)
with open(config_path, 'r') as fp:
configs = json.load(fp)
fp.close()
configs['checkpoint'] = checkpoint
model = load_model(configs)
for s in seed_lst:
random.seed(s)
smiles_sample_lst = random.sample(smiles_test_lst, 1000)
recon_score = recon_acc_score(configs, model, smiles_sample_lst)
mdl_dir_df['recon_acc'].iloc[idx] = recon_score
print('Recon. accuracy: ' + str(recon_score))
print('------------------------------------')
np.array([0.885, 0.854, 0.873]).mean()
np.array([0.885, 0.854, 0.873]).std()
```
| github_jupyter |
# Growth media VMH high fat low carb diet
Similar to the western-style diet, we will again start by loading the diet and depleting components absorbed by the host. In this case we have no manual annotation for which components should be diluted, so we will use a generic human metabolic model to find those.
The growth medium supplied here was created the following way:
Let's start by reading the diet, which was downloaded from https://www.vmh.life/#diet/High%20fat,%20low%20carb. Flux is in mmol/human/day and has to be adjusted to 1 hour. Also, the VMH site has a bug where it clips fluxes after 4 digits, so we will set values like 0.0000 to 0.0001.
```
import pandas as pd
medium = pd.read_csv("../data/vmh_high_fat_low_carb.tsv", index_col=False, sep="\t")
medium.columns = ["reaction", "flux"]
medium.reaction = medium.reaction.str.replace("(\[e\]$)|(\(e\)$)", "", regex=True)
medium.loc[medium.flux < 1e-4, "flux"] = 1e-4
medium.flux = medium.flux / 24
medium
```
Now we will try to identify components that can be taken up by human cells.
## Identifying human absorption
To achieve this we will load the Recon3 human model. AGORA and Recon IDs are very similar so we should be able to match them. We just have to adjust the Recon3 ones a bit. We start by identifying all available exchanges in Recon3 and adjusting the IDs.
```
from cobra.io import read_sbml_model
import pandas as pd
recon3 = read_sbml_model("../data/Recon3D.xml.gz")
exchanges = pd.Series([r.id for r in recon3.exchanges])
exchanges = exchanges.str.replace("__", "_").str.replace("_e$", "", regex=True)
exchanges.head()
```
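The ID adjustment is just two string substitutions (collapse double underscores, drop the `_e` compartment suffix); the example IDs below are typical Recon3 exchange reactions:

```python
import pandas as pd

exchanges = pd.Series(["EX_glc__D_e", "EX_ala__L_e", "EX_h2o_e"])
# Collapse "__" to "_" and strip the trailing "_e" compartment tag
adjusted = exchanges.str.replace("__", "_").str.replace("_e$", "", regex=True)
print(adjusted.tolist())  # ['EX_glc_D', 'EX_ala_L', 'EX_h2o']
```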
Now we will check which ones we can find in our set and add in the dilution factors (again going with 1:10).
```
medium["dilution"] = 1.0
medium.loc[medium.reaction.isin(exchanges), "dilution"] = 0.1
medium.dilution.value_counts()
```
Okay, so 79/91 components can be absorbed by humans. We end by filling in the additional info.
```
medium["metabolite"] = medium.reaction.str.replace("^EX_", "", regex=True) + "_m"
medium["global_id"] = medium.reaction + "(e)"
medium["reaction"] = medium.reaction + "_m"
medium.loc[medium.flux < 1e-4, "flux"] = 1e-4
medium
```
## Checking the growth medium against the DB
But can the bacteria in our model database actually grow on this medium? Let's check, and start by downloading the AGORA model database.
```
# !wget https://zenodo.org/record/3755182/files/agora103_genus.qza?download=1 -O data/agora103_genus.qza
```
Now we will check for growth by running the growth medium against each individual model.
```
from micom.workflows.db_media import check_db_medium
check = check_db_medium("../data/agora103_genus.qza", medium, threads=20)
```
`check` now includes the entire manifest plus two new columns: the growth rate and whether the models can grow.
```
check.can_grow.value_counts()
```
Okay, nothing can grow. We are probably missing some important cofactor such as manganese or copper.
Let's complete the medium so that all taxa in Refseq can grow at a rate of at least 1e-4.
## Supplementing a growth medium from a skeleton
Sometimes you may start from a few components and want to complete this skeleton medium to reach a certain minimum growth rate across all models in the database. This can be done with `complete_db_medium`. We can minimize either the added total flux, mass, or presence of any atom. Since we want to build a low carb diet here, we will minimize the amount of added carbon.
```
from micom.workflows.db_media import complete_db_medium
manifest, imports = complete_db_medium("../data/agora103_genus.qza", medium, growth=0.001, threads=20, max_added_import=10, weights="C")
manifest.can_grow.value_counts()
```
`manifest` is the amended manifest as before, and `imports` contains the used import fluxes for each model. A new column in the manifest also tells us how many imports were added.
```
manifest.added.describe()
```
So we added 7 metabolites on average (ranging from 1 to 22 per model).
From this we build up our new medium.
```
fluxes = imports.max()
fluxes = fluxes[(fluxes > 1e-6) | fluxes.index.isin(medium.reaction)]
completed = pd.DataFrame({
"reaction": fluxes.index,
"metabolite": fluxes.index.str.replace("^EX_", "", regex=True),
"global_id": fluxes.index.str.replace("_m$", "(e)", regex=True),
"flux": fluxes
})
completed.shape
```
Let's also export the medium as a Qiime 2 artifact, which can be read with `q2-micom` or the regular micom package.
```
from qiime2 import Artifact
arti = Artifact.import_data("MicomMedium[Global]", completed)
arti.save("../media/vmh_high_fat_low_carb_agora.qza")
```
## Validation
As a last step we validate the created medium.
```
check = check_db_medium("../data/agora103_genus.qza", completed, threads=20)
check.can_grow.value_counts()
check.growth_rate.describe()
```
| github_jupyter |
# Getting Physical Compute Inventory from Intersight using the Cisco Intersight Python SDK
In this lab you learn how to retrieve the physical compute inventory from Cisco Intersight using the Intersight Python SDK.
## Objectives
The objective of this lab is to show how to:
* Authenticate with Intersight using the Python SDK
* Call the Intersight API to pull a list of physical compute resources
## Prerequisites
Before getting started with this lab, you will need the following:
* An Intersight account (you may also need a Cisco account for credentials)
* API key
* API secret available
* Familiarity with Python
### Background
Intersight has many capabilities, and one of them is managing on-premises compute resources such as Cisco UCS servers. The process of making UCS servers available to Intersight is known as a claim. Thus, an administrator must first claim UCS servers before they are available as a resource in Intersight. After UCS servers are claimed, they show up in inventory as physical compute servers.
In this lab we call the Intersight REST APIs using Python and retrieve the inventory of physical compute resources claimed in Intersight.
## Step 1: Generate API key and Secret Key
Log into your Intersight account and navigate to **Settings**, which is available by clicking the gear icon located toward the upper-righthand side of the Intersight user interface.
<img align="left" src="images/intersight-settings.png">
* Click **API Keys** located in the lefthand column.
* Click **Generate API Key** located toward the upper-righthand side of the page.
* Select **API key for OpenAPI schema version 2** in the **Generate API Key** dialog box and provide a description for your API key.
* Copy the API Key ID and store it somewhere for use in the upcoming steps.
* Save the **Secret Key** to a text file and make note of its location. Move the file to the same directory where this code is running or make note of its location for use in the upcoming steps.
<img align="left" src="images/api-secret-key-save-as.png">
> This is the only time that the secret key can be viewed or downloaded. You cannot recover it later. However, you can create new API keys at any time.
## Step 2: Install the Intersight Python SDK and import modules
The Python Intersight SDK is available in the [Python Packaging Index](https://pypi.org/) and installable using the command `pip install intersight`.
> Be sure you are running Python >= 3.6, as earlier versions of Python are not supported. Also, uninstall any conflicting versions of the SDK installed on your machine. You can check installed versions with the `pip list` command prior to running `pip install intersight`.
```
pip install intersight
```
The command `pip list` shows all packages installed by `pip`. Run the command and scroll down to verify the `intersight` package is installed.
```
pip list
```
Now that the `intersight` package is installed using `pip`, you are ready to import the modules needed to put this example together. Start by importing `intersight` and its compute API module.
```
import intersight
import intersight.api.compute_api
```
> Interested in learning more about the `intersight` module? If so, use the command `help(intersight)` to see a description of the module as shown below.
```
help(intersight)
```
## Step 3: Configuring and Creating the API Client
In step 1, you retrieved your API key and secret key and in this step you apply those values while configuring a newly created API client. The command `help(intersight.Configuration)` displays information about the `Configuration` Python class and its usage.
```
help(intersight.Configuration)
```
Help is also available for the `ApiClient` class with the command `help(intersight.ApiClient)`
```
help(intersight.ApiClient)
```
#### Configure the API Client
Now that `help` has provided you with (albeit a lot of) background information, you are ready to configure and create the API client as shown in the example below.
> Keep in mind that the values below are for demonstration purposes only and will not work in your environment. Use the values of the secret key and API key you derived in Step 1: `API Key ID` maps to `key_id` and `Secret Key` maps to `private_key_path` in the example below.
```
configuration = intersight.Configuration(
signing_info=intersight.HttpSigningConfiguration(
key_id='insert-your-api-key-value-here',
private_key_path='/Users/delgadm/Documents/intersight/intersight-jupyter-notebooks/key/some-secret-key.txt',
signing_scheme=intersight.signing.SCHEME_HS2019,
signed_headers=[intersight.signing.HEADER_HOST,
intersight.signing.HEADER_DATE,
intersight.signing.HEADER_DIGEST,
intersight.signing.HEADER_REQUEST_TARGET
]
)
)
```
#### Create the API Client
Next, create the client and store the resulting handle in a variable named `api_client` by calling the `ApiClient` class and passing the values stored in `configuration` to it as shown below.
```
api_client = intersight.ApiClient(configuration)
```
## Step 4: Query the API for Physical Compute Inventory
Now that you are authenticated, let's use the SDK to query for the number of physical compute items in inventory. First, pass the `api_client` variable defined in the previous step to `intersight.api.compute_api.ComputeApi` and store the result in a new variable named `api_instance`, as shown below.
```
# Get compute class instance
api_instance = intersight.api.compute_api.ComputeApi(api_client)
```
#### Make the Query and Add a Filter that Retrieves Only UCS X-Series
In the next line of code, you query the API with a filter so that only UCS X-Series physical compute nodes are requested, and store the returned value in a variable named `compute_inventory`. If you remove the `filter='contains(Model,\'UCSX\')'` argument, everything in inventory is returned.
```
compute_inventory = api_instance.get_compute_physical_summary_list(filter='contains(Model,\'UCSX\')')
```
## Step 5: See How Many Physical Compute Items Were Returned
The object returned by the API call contains a list of attributes and methods you can see using the `dir()` function.
```
dir(compute_inventory)
```
One of the attributes is a list named `results`. If you pass it to the `len()` built-in function, it returns the number of items in the list, which tells you the number of UCS X-Series servers in inventory.
```
len(compute_inventory.results)
```
#### Show Everything Returned by the Query
Printing `compute_inventory.results` shows a rich set of information returned about physical compute items in inventory.
```
compute_inventory.results
```
## Step 6: Bringing it all together
Great! Now that we see the number of physical compute devices, we can pull more information from the returned JSON and organize it by Device, Chassis ID, Management Mode, Model, Memory, and CPU. The information in the `Management Mode` column shows whether the physical compute device is managed in Intersight Managed Mode.
> Intersight Managed Mode (IMM) is a new architecture that manages the UCS Fabric Interconnected systems through a Redfish-based standard model. If you are familiar with the UCS blades, it means the Fabric Interconnect is fully managed by Intersight. Instead of having the familiar UCSM (UCS Manager) interface available directly from the Fabric Interconnect, the interface and all of the Fabric Interconnect operations are managed by Intersight.
We do some CLI formatting to organize our data and see the type of compute hardware managed by Intersight along with its resources (memory and CPU). Then we iterate over the JSON data and pull out the data we're interested in. In this instance, the Model column shows the UCS X-Series hardware.
> Experiment! See if you can add more information to the list below by choosing other items from the returned JSON data above. For example, you could add a column with `num_cpu_cores` and or `ipv4_address` to the code below. There's no right or wrong answer as to which columns you would like displayed.
```
print ("{:<8} {:12} {:<21} {:<15} {:<10} {:<10}".format(
'Device',
'Chassis ID',
'Management Mode',
'Model',
'Memory',
'CPU'))
for num, items in enumerate(compute_inventory.results, start=1):
print (
"{:<8} {:<12} {:<21} {:<15} {:<10} {:<10}".format(
num,
items.chassis_id,
items.management_mode,
items.model,
items.available_memory,
items.cpu_capacity))
```
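As the note above suggests, extra columns are easy to add; here is a sketch with a hypothetical `num_cpu_cores` column (the rows below are made-up stand-ins for `compute_inventory.results`):

```python
# Hypothetical inventory rows standing in for `compute_inventory.results`;
# the attribute values below are invented for illustration.
rows = [
    {"chassis_id": "1", "management_mode": "Intersight", "model": "UCSX-210C-M6",
     "available_memory": 1024, "cpu_capacity": 143.0, "num_cpu_cores": 56},
    {"chassis_id": "1", "management_mode": "Intersight", "model": "UCSX-210C-M6",
     "available_memory": 512, "cpu_capacity": 96.0, "num_cpu_cores": 48},
]
fmt = "{:<8} {:<12} {:<21} {:<15} {:<10} {:<10} {:<10}"
print(fmt.format("Device", "Chassis ID", "Management Mode", "Model", "Memory", "CPU", "Cores"))
lines = []
for num, item in enumerate(rows, start=1):
    line = fmt.format(num, item["chassis_id"], item["management_mode"], item["model"],
                      item["available_memory"], item["cpu_capacity"], item["num_cpu_cores"])
    lines.append(line)
    print(line)
```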
| github_jupyter |
```
#!python -m spacy download de_core_news_md --user
#!python -m spacy download en_core_web_lg --user
#nltk.download('vader_lexicon')
#!pip install --user xgboost
import re
import spacy
# spacy must be imported before the models can be loaded
en_nlp = spacy.load("en_core_web_lg")
de_nlp = spacy.load("de_core_news_md")
#!python -m spacy download de_core_news_md
#!python -m spacy download en_core_web_lg
import nltk
# nltk.download("stopwords")
# nltk.download("punkt")
# nltk.download('vader_lexicon')
# nltk.download("averaged_perceptron_tagger")
##from nltk import pos_tag, pos_tag_sents, word_tokenize, sent_tokenize
##from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from joblib import dump, load
import warnings
warnings.filterwarnings("ignore")
df_deutsch = pd.read_csv("deutsch_stances.csv", index_col = 0)
df_deutsch.reset_index(inplace=True, drop = True)
df_english = pd.read_csv("english_stances.csv", index_col = 0)
df_english.reset_index(inplace=True, drop = True)
def stemmer(text):
tokens = nltk.word_tokenize(text)
stems = []
for item in tokens:
if item.isdigit():
continue
elif item.isalnum():
stems.append(PorterStemmer().stem(item))
return stems
def clean_text(text):
website_pattern = re.compile(r'\((.*?)\)')
slash_pattern = re.compile(r'[\[\]]')
text = re.sub(website_pattern, "", text)
text = re.sub(slash_pattern, "", text)
return text
def generate_base(df, column, language, model = "glove"):
lang = "en" if language == "english" else "de"
if model == "glove":
nlp = en_nlp if language == "english" else de_nlp
embeddings = np.array([nlp(x).vector for x in list(df[column].values)])
shape = embeddings.shape[1]
columns = ["{}_dimension_{}".format(column, i) for i in range(shape)]
ff = pd.DataFrame(data=embeddings, columns=columns)
elif model == "tfidf-stemmed":
model = load("tfidf_"+ lang +"_stemmed.sav")
data = model.transform(df[column])
columns = [column +":" + col for col in model.get_feature_names()]
        ff = pd.DataFrame.sparse.from_spmatrix(data, columns=columns)  # pd.SparseDataFrame was removed in pandas 1.0
elif model == "tfidf-unstemmed":
model = load("tfidf_"+ lang +"_unstemmed.sav")
data = model.transform(df[column])
columns = [column +":" + col for col in model.get_feature_names()]
        ff = pd.DataFrame.sparse.from_spmatrix(data, columns=columns)  # pd.SparseDataFrame was removed in pandas 1.0
return ff
def generate_additional(df, column, language, modes=["pos", "ner", "sentiment"]):
    nlp = en_nlp if language == "english" else de_nlp
    docs = [nlp(x) for x in list(df[column].values)]
    n = len(df)
    dfs = []
    # use POS tags
    if "pos" in modes:
        # look up a tag with spacy.explain, e.g. spacy.explain("ADP")
        pos_tags = {"PRON": [0]*n,
                    "ADV": [0]*n,
                    "ADJ": [0]*n,
                    "ADP": [0]*n,
                    "DET": [0]*n,
                    "AUX": [0]*n,
                    "VERB": [0]*n,
                    "NOUN": [0]*n,
                    #"PUNCT": [0]*n,
                    "NUM": [0]*n}
        for i, doc in enumerate(docs):
            for token in doc:
                if token.pos_ in pos_tags.keys():
                    pos_tags[token.pos_][i] += 1
        tf = pd.DataFrame.from_dict(pos_tags)
        tf.columns = [column + ":" + col for col in tf.columns]
        dfs.append(tf)
    # use sentiment tags: negative, neutral, positive and compound
    if "sentiment" in modes:
        sentiment = [sid.polarity_scores(x) for x in list(df[column].values)]
        tf = pd.DataFrame(data=sentiment)
        tf.columns = [column + ":" + col for col in tf.columns]
        dfs.append(tf)
    # use named entity recognition
    if "ner" in modes:
        ner_types = {"PERSON": [0]*n,
                     "NORP": [0]*n,
                     "FAC": [0]*n,
                     "ORG": [0]*n,
                     "GPE": [0]*n,
                     "LOC": [0]*n,
                     "PRODUCT": [0]*n,
                     "EVENT": [0]*n,
                     "WORK_OF_ART": [0]*n,
                     "LAW": [0]*n,
                     "LANGUAGE": [0]*n,
                     "QUANTITY": [0]*n,  # fixed typo: spaCy's label is QUANTITY
                     "ORDINAL": [0]*n,
                     "CARDINAL": [0]*n}
        for i, doc in enumerate(docs):
            for entity in doc.ents:
                if entity.label_ in ner_types.keys():
                    ner_types[entity.label_][i] += 1
        tf = pd.DataFrame.from_dict(ner_types)
        tf.columns = [column + ":" + col for col in tf.columns]
        dfs.append(tf)
    if "structure" in modes:
        pass
    return pd.concat(dfs, axis=1)
def prep_dataset(df, model, language, modes=[]):
    dfs = []
    df_stance = pd.concat([df['stance']], axis=1)
    df_stance['stance'] = df_stance.stance.apply(lambda x: 1 if x == "RA" else 0)
    dfs.append(df_stance)
    dfs.append(generate_base(df, "child_text", model=model, language=language))
    dfs.append(generate_base(df, "parent_text", model=model, language=language))
    if modes != []:
        dfs.append(generate_additional(df, "child_text", language=language, modes=modes))
        dfs.append(generate_additional(df, "parent_text", language=language, modes=modes))
    return pd.concat(dfs, axis=1)
%%time
df = prep_dataset(df_english, model = "glove", language="english", modes = ["ner", "sentiment", "pos"])
df.to_csv("english_features.csv")
df
%%time
df_de = prep_dataset(df_deutsch, model = "glove", language="german", modes = ["ner", "sentiment", "pos"])
df_de.to_csv("german_features.csv")
df_de
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics
import xgboost as xgb
# random state value
rsv = 42
# cpus used for training
n_jobs = -1
models = {#"SVC": SVC(probability=True, random_state=rsv),
          "LogReg": LogisticRegression(random_state=rsv, n_jobs=n_jobs),
          "RanFor": RandomForestClassifier(random_state=rsv, n_jobs=n_jobs),
          #"GausNB": GaussianNB(),
          #"LDA": LinearDiscriminantAnalysis(),
          "KNN": KNeighborsClassifier(n_jobs=n_jobs),
          "XGBOOST": xgb.XGBClassifier(n_jobs=n_jobs, random_state=rsv)}
# split in training data matrix X and target y
def generate_cv_sets(df: pd.DataFrame):
    X = df.loc[:, df.columns != 'stance']
    y = df[['stance']].values.ravel()
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    return X_train, X_test, y_train, y_test
%%time
X_train, X_test, y_train, y_test = generate_cv_sets(df.iloc[:, :-54])
results = {}
for name, model in models.items():
    model.fit(X_train.fillna(0), y_train)
    score = model.score(X_test.fillna(0), y_test)
    results[name] = {"score": score}
    print(name, score)
%%time
X_train, X_test, y_train, y_test = generate_cv_sets(df_de.iloc[:, :-54])
results_de = {}
for name, model in models.items():  # refit the same model zoo on the German features
    model.fit(X_train.fillna(0), y_train)
    score = model.score(X_test.fillna(0), y_test)
    results_de[name] = {"score": score}
    print(name, score)
```
| github_jupyter |
# Convolutional Neural Networks
## Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
---
### Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
### The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Write your Algorithm
* [Step 6](#step6): Test Your Algorithm
---
<a id='step0'></a>
## Step 0: Import Datasets
Make sure that you've downloaded the required human and dog datasets:
* Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in this project's home directory, at the location `/dogImages`.
* Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the folder and place it in the home directory, at location `/lfw`.
*Note: If you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folder.*
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays `human_files` and `dog_files`.
```
import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
```
<a id='step1'></a>
## Step 1: Detect Humans
In this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x, y, w, h) in faces:
    # add bounding box to color image
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
```
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter.
In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box.
### Write a Human Face Detector
We can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.
```
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
```
### (IMPLEMENTATION) Assess the Human Face Detector
__Question 1:__ Use the code cell below to test the performance of the `face_detector` function.
- What percentage of the first 100 images in `human_files` have a detected human face?
- What percentage of the first 100 images in `dog_files` have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.
__Answer:__
(You can print out your results and/or write your percentages in this cell)
```
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
```
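One way to organize the measurement for Question 1 is a small helper that computes the hit rate over a list of file paths (a sketch: `detection_rate` is an illustrative name, and `face_detector`, `human_files_short`, and `dog_files_short` are the objects defined above):

```
def detection_rate(detector, file_paths):
    # percentage of paths for which the detector returns True
    hits = sum(1 for path in file_paths if detector(path))
    return 100.0 * hits / len(file_paths)

# e.g. detection_rate(face_detector, human_files_short) should be near 100,
# while detection_rate(face_detector, dog_files_short) should be near 0
```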
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
```
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
```
---
<a id='step2'></a>
## Step 2: Detect Dogs
In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images.
### Obtain Pre-trained VGG-16 Model
The code cell below downloads the VGG-16 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
```
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
    VGG16 = VGG16.cuda()
```
Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
### (IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html).
```
from PIL import Image
import torchvision.transforms as transforms
def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to
    predicted ImageNet class for image at specified path

    Args:
        img_path: path to an image

    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    ## TODO: Complete the function.
    ## Load and pre-process an image from the given img_path
    ## Return the *index* of the predicted class for that image
    return None  # predicted class index
```
### (IMPLEMENTATION) Write a Dog Detector
While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
```
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    ## TODO: Complete the function.
    return None  # true/false
```
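As a sketch of one possible completion (the notebook leaves this as a TODO, and `dog_detector_sketch` / `predict_fn` are illustrative names), the detector only needs to check whether the predicted ImageNet index falls in the dog-breed range:

```
def index_is_dog(class_index):
    # ImageNet class indices 151-268 (inclusive) are the dog-breed categories
    return 151 <= class_index <= 268

def dog_detector_sketch(img_path, predict_fn):
    # predict_fn stands in for a classifier such as the VGG16_predict function above
    return index_is_dog(predict_fn(img_path))
```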
### (IMPLEMENTATION) Assess the Dog Detector
__Question 2:__ Use the code cell below to test the performance of your `dog_detector` function.
- What percentage of the images in `human_files_short` have a detected dog?
- What percentage of the images in `dog_files_short` have a detected dog?
__Answer:__
```
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
```
We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
```
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
```
---
<a id='step3'></a>
## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | - | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively). You may find [this documentation on custom datasets](http://pytorch.org/docs/stable/torchvision/datasets.html) to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)!
```
import os
from torchvision import datasets
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
```
**Question 3:** Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
**Answer**:
### (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.
```
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
    ### TODO: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        ## Define layers of a CNN

    def forward(self, x):
        ## Define forward behavior
        return x

#-#-# You do NOT have to modify the code below this line. #-#-#

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()
```
__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
__Answer:__
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch`, and the optimizer as `optimizer_scratch` below.
```
import torch.optim as optim
### TODO: select loss function
criterion_scratch = None
### TODO: select optimizer
optimizer_scratch = None
```
### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.
```
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf

    for epoch in range(1, n_epochs + 1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0

        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## find the loss and update the model parameters accordingly
            ## record the average training loss, using something like
            ## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))

        ######################
        # validate the model #
        ######################
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## update the average validation loss

        # print training/validation statistics
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch,
            train_loss,
            valid_loss
            ))

        ## TODO: save the model if validation loss has decreased

    # return trained model
    return model

# train the model
model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch,
                      criterion_scratch, use_cuda, 'model_scratch.pt')

# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
```
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
```
def test(loaders, model, criterion, use_cuda):
    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)

    print('Test Loss: {:.6f}\n'.format(test_loss))
    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))

# call test function
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
```
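The update `test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))` used above is the standard incremental-mean recurrence: after processing batch `i`, the accumulator equals the mean of the first `i + 1` losses. A minimal pure-Python illustration:

```
def running_mean(values):
    # incrementally maintained mean: after i + 1 values, avg equals their mean
    avg = 0.0
    for i, v in enumerate(values):
        avg = avg + (1 / (i + 1)) * (v - avg)
    return avg
```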
---
<a id='step4'></a>
## Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively).
If you like, **you are welcome to use the same data loaders from the previous step**, when you created a CNN from scratch.
```
## TODO: Specify data loaders
```
### (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable `model_transfer`.
```
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
if use_cuda:
    model_transfer = model_transfer.cuda()
```
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
__Answer:__
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/master/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html). Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below.
```
criterion_transfer = None
optimizer_transfer = None
```
### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`.
```
# train the model (uncomment and complete the line below; choose n_epochs yourself)
# model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
#model_transfer.load_state_dict(torch.load('model_transfer.pt'))
```
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
```
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
```
### (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc) that is predicted by your model.
```
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes]
def predict_breed_transfer(img_path):
    # load the image and return the predicted breed
    return None
```
---
<a id='step5'></a>
## Step 5: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a __dog__ is detected in the image, return the predicted breed.
- if a __human__ is detected in the image, return the resembling dog breed.
- if __neither__ is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### (IMPLEMENTATION) Write your Algorithm
```
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
    ## handle cases for a human face, dog, and neither
    pass
```
---
<a id='step6'></a>
## Step 6: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that _you_ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
### (IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
__Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
__Answer:__ (Three possible points for improvement)
```
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
    run_app(file)
```
| github_jupyter |
# Retail Demo Store Experimentation Workshop - Interleaving Recommendation Exercise
In this exercise we will define, launch, and evaluate the results of an experiment using recommendation interleaving using the experimentation framework implemented in the Retail Demo Store project. If you have not already stepped through the **[3.1-Overview](./3.1-Overview.ipynb)** workshop notebook, please do so now as it provides the foundation built upon in this exercise. It is also recommended, but not required, to complete the **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)** workshop notebook.
Recommended Time: 30 minutes
## Prerequisites
Since this module uses the Retail Demo Store's Recommendation microservice to run experiments across variations that depend on the personalization features of the Retail Demo Store, it is assumed that you have either completed the [Personalization](../1-Personalization/1.1-Personalize.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead.
## Exercise 2: Interleaving Recommendations Experiment
For the first exercise, **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)**, we demonstrated how to create and run an A/B experiment using two different variations for making product recommendations. We calculated the sample sizes of users needed to reach a statistically significant result comparing the two variations. Then we ran the experiment using a simulation until the sample sizes were reached for both variations. In real life, depending on the baseline conversion rate, the minimum detectable effect, and your site's user traffic, completing an experiment can take several days to a few weeks. This is expensive both in opportunity cost and in the pace at which experiments and changes can be rolled out to your site.
In this exercise we will look at an alternative approach to evaluating product recommendation variations that requires a smaller sample size and shorter experiment durations. This technique is often used as a preliminary step before formal A/B testing to reduce a larger number of variations to just the top performers. Traditional A/B testing is then done against the best performing variations, significantly reducing the overall time necessary for experimentation.
We will use the same two variations as the last exercise. The first variation will represent our current implementation using the **Default Product Resolver** and the second variation will use the **Personalize Recommendation Resolver**. The scenario we are simulating is adding product recommendations powered by Amazon Personalize to the home page and measuring the impact/uplift in click-throughs for products as a result of deploying a personalization strategy. We will use the same hypothesis from our A/B test where the conversion rate of our existing approach is 15% and we expect a 25% lift in this rate by adding personalized recommendations.
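For intuition on why A/B sample sizes drive experiment duration, the per-group sample size for the hypothesis above (15% baseline, 25% relative lift, so 18.75%) can be approximated with the usual two-proportion normal approximation. This is a sketch at alpha = 0.05 two-sided and 80% power; the workshop's own notebooks may compute this differently, and `sample_size_per_group` is an illustrative name.

```python
from math import ceil

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    # normal-approximation sample size for detecting p1 vs p2, per group
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

baseline = 0.15
lifted = baseline * 1.25  # hypothesised 25% relative lift -> 0.1875
n_per_group = sample_size_per_group(baseline, lifted)  # about 1,562 users per variation
```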
### What is Interleaving Recommendation Testing?
The approach of interleaving recommendations is to take the recommendations from two or more variations and interleave, or blend, them into a single set of recommendations for *every user in the experiment*. Because each user in the sample is exposed to recommendations from all variations, we gain some key benefits. First, the sample size can be smaller since we don't need separate groups of users for each variation. This also results in a shorter experiment duration. Additionally, this approach is less susceptible to variances in user type and behavior that could throw off the results of an experiment. For example, it's not uncommon to have power users who shop/watch/listen/read much more than a typical user. With multiple sample groups, the behavior of these users can throw off results for their group, particularly with smaller sample sizes.
Care must be taken in how recommendations are interleaved, though, to account for position bias in the recommendations and to track variation attribution. There are two common methods to interleaving recommendations. First is a balanced approach where recommendations are taken from each variation in an alternating style where the starting variation is selected randomly. The other approach follows the team-draft analogy where team captains select their "best player" (recommendation) from the variations in random selection order. Both methods can result in different interleaving outputs.
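The two interleaving methods described above can be sketched in plain Python. This is a simplified illustration, not the Retail Demo Store's actual implementation: deduplication and position handling are reduced to their essentials, and attribution tracking is omitted.

```python
import random

def balanced_interleave(ranking_a, ranking_b, length, rng=random):
    """Balanced method: alternate picks between the two rankings, skipping
    items already taken; a coin flip decides which ranking starts."""
    order = [list(ranking_a), list(ranking_b)]
    if rng.random() < 0.5:
        order.reverse()
    result, seen, pos, turn = [], set(), [0, 0], 0
    while len(result) < length and (pos[0] < len(order[0]) or pos[1] < len(order[1])):
        src = turn % 2
        # advance past items the other ranking already contributed
        while pos[src] < len(order[src]) and order[src][pos[src]] in seen:
            pos[src] += 1
        if pos[src] < len(order[src]):
            item = order[src][pos[src]]
            seen.add(item)
            result.append(item)
            pos[src] += 1
        turn += 1
    return result

def team_draft_interleave(ranking_a, ranking_b, length, rng=random):
    """Team-draft method: each round, 'captains' in random order each pick
    their highest-ranked item that has not been taken yet."""
    teams = [list(ranking_a), list(ranking_b)]
    result, seen = [], set()
    while len(result) < length:
        captains = [0, 1]
        rng.shuffle(captains)
        picked = False
        for t in captains:
            candidate = next((x for x in teams[t] if x not in seen), None)
            if candidate is not None:
                seen.add(candidate)
                result.append(candidate)
                picked = True
                if len(result) >= length:
                    break
        if not picked:
            break  # both rankings exhausted
    return result
```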
Interleaving recommendations as an approach to experimentation got its start with information retrieval systems and search engines (Yahoo! and Bing), where different approaches to ranking results could be measured concurrently. More recently, [Netflix has adopted the interleaving technique](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55) to rapidly evaluate different approaches to making movie recommendations to its users. The image below depicts the recommendations from two different recommenders/variations (Ranker A and Ranker B) and examples of how they are interleaved.

### InterleavingExperiment Class
Before stepping through creating and executing our interleaving test, let's look at the relevant source code for the **InterleavingExperiment** class that implements this experiment type in the Retail Demo Store project.
As noted in the **[3.1-Overview](./3.1-Overview.ipynb)** notebook, all experiment types are subclasses of the abstract **Experiment** class. See **[3.1-Overview](./3.1-Overview.ipynb)** for more details on the experimentation framework.
The `InterleavingExperiment.get_items()` method is where item recommendations are retrieved for the experiment. This method will retrieve recommendations from the resolvers for all variations and then use the configured interleaving method (balanced or team-draft) to interleave the recommendations to produce the final result. Exposure tracking is also implemented to facilitate measuring the outcome of an experiment. The implementations for the balanced and team-draft interleaving methods are not included below but are available in the source code for the Recommendations service.
```python
# from src/recommendations/src/recommendations-service/experimentation/experiment_interleaving.py
class InterleavingExperiment(Experiment):
    """ Implements interleaving technique described in research paper by
    Chapelle et al http://olivier.chapelle.cc/pub/interleaving.pdf
    """
    METHOD_BALANCED = 'balanced'
    METHOD_TEAM_DRAFT = 'team-draft'

    def __init__(self, table, **data):
        super(InterleavingExperiment, self).__init__(table, **data)
        self.method = data.get('method', InterleavingExperiment.METHOD_BALANCED)

    def get_items(self, user_id, current_item_id = None, item_list = None, num_results = 10, tracker = None):
        ...
        # Initialize array structure to hold item recommendations for each variation
        variations_data = [[] for x in range(len(self.variations))]

        # Get recommended items for each variation
        for i in range(len(self.variations)):
            resolve_params = {
                'user_id': user_id,
                'product_id': current_item_id,
                'product_list': item_list,
                'num_results': num_results * 3  # account for overlaps
            }
            variation = self.variations[i]
            items = variation.resolver.get_items(**resolve_params)
            variations_data[i] = items

        # Interleave items to produce result
        interleaved = []
        if self.method == InterleavingExperiment.METHOD_TEAM_DRAFT:
            interleaved = self._interleave_team_draft(user_id, variations_data, num_results)
        else:
            interleaved = self._interleave_balanced(user_id, variations_data, num_results)

        # Increment exposure for each variation (can be optimized)
        for i in range(len(self.variations)):
            self._increment_exposure_count(i)
        ...
        return interleaved
```
### Setup - Import Dependencies
Throughout this workshop we will need access to some common libraries and clients for connecting to AWS services. Let's set those up now.
```
import boto3
import json
import uuid
import numpy as np
import requests
import pandas as pd
import random
import scipy.stats as scs
import time
import decimal
import matplotlib.pyplot as plt
from boto3.dynamodb.conditions import Key
from random import randint
# import custom scripts for plotting results
from src.plot import *
from src.stats import *
%matplotlib inline
plt.style.use('ggplot')
# We will be using a DynamoDB table to store configuration info for our experiments.
dynamodb = boto3.resource('dynamodb')
# Service discovery will allow us to dynamically discover Retail Demo Store resources
servicediscovery = boto3.client('servicediscovery')
# Retail Demo Store config parameters are stored in SSM
ssm = boto3.client('ssm')
# Utility class to convert types for printing as JSON.
class CompatEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, decimal.Decimal):
            if obj % 1 > 0:
                return float(obj)
            else:
                return int(obj)
        else:
            return super(CompatEncoder, self).default(obj)
```
### Experiment Strategy Datastore
Let's create an experiment using the interleaving technique.
A DynamoDB table was created by the Retail Demo Store CloudFormation template that we will use to store the configuration information for our experiments. The table name can be found in a system parameter.
```
response = ssm.get_parameter(Name='retaildemostore-experiment-strategy-table-name')
table_name = response['Parameter']['Value'] # Do Not Change
print('Experiments DDB table: ' + table_name)
table = dynamodb.Table(table_name)
```
Next we need to look up the Amazon Personalize campaign ARN for product recommendations. This is the campaign that was created in the Personalization workshop.
```
response = ssm.get_parameter(Name = 'retaildemostore-product-recommendation-campaign-arn')
campaign_arn = response['Parameter']['Value'] # Do Not Change
print('Personalize product recommendations ARN: ' + campaign_arn)
```
### Create Interleaving Experiment
The Retail Demo Store supports running multiple experiments concurrently. For this workshop we will create a single interleaving test/experiment that will expose users of a single group to recommendations from the default behavior and recommendations from Amazon Personalize. The Recommendations microservice already has logic that supports interleaving experiments when an active experiment is detected.
Experiment configurations are stored in a DynamoDB table where each item in the table represents an experiment and has the following fields.
- **id** - Uniquely identifies this experiment (UUID).
- **feature** - Identifies the Retail Demo Store feature where the experiment should be applied. The name for the home page product recommendations feature is `home_product_recs`.
- **name** - The name of the experiment. Keep the name short but descriptive. It will be used in the UI for demo purposes and when logging events for experiment result tracking.
- **status** - The status of the experiment (`ACTIVE`, `EXPIRED`, or `PENDING`).
- **type** - The type of test (`ab` for an A/B test, `interleaving` for interleaved recommendations, or `mab` for a multi-armed bandit test).
- **method** - The interleaving method (`balanced` or `team-draft`).
- **variations** - List of configurations representing variations for the experiment. For example, for interleaving tests of the `home_product_recs` feature, the `variations` can be two Amazon Personalize campaign ARNs (variation type `personalize-recommendations`) or a single Personalize campaign ARN and the default product behavior.
```
feature = 'home_product_recs'
experiment_name = 'home_personalize_interleaving'
# First, make sure there are no other active experiments so we can isolate
# this experiment for the exercise.
response = table.scan(
    ProjectionExpression='#k',
    ExpressionAttributeNames={'#k' : 'id'},
    FilterExpression=Key('status').eq('ACTIVE')
)

for item in response['Items']:
    response = table.update_item(
        Key=item,
        UpdateExpression='SET #s = :inactive',
        ExpressionAttributeNames={
            '#s' : 'status'
        },
        ExpressionAttributeValues={
            ':inactive' : 'INACTIVE'
        }
    )

# Query the experiment strategy table to see if our experiment already exists
response = table.query(
    IndexName='feature-name-index',
    KeyConditionExpression=Key('feature').eq(feature) & Key('name').eq(experiment_name),
    FilterExpression=Key('status').eq('ACTIVE')
)

if response.get('Items') and len(response.get('Items')) > 0:
    print('Experiment already exists')
    home_page_experiment = response['Items'][0]
else:
    print('Creating experiment')

    # Default product resolver
    variation_0 = {
        'type': 'product'
    }

    # Amazon Personalize resolver
    variation_1 = {
        'type': 'personalize-recommendations',
        'campaign_arn': campaign_arn
    }

    home_page_experiment = {
        'id': uuid.uuid4().hex,
        'feature': feature,
        'name': experiment_name,
        'status': 'ACTIVE',
        'type': 'interleaving',
        'method': 'team-draft',
        'analytics': {},
        'variations': [ variation_0, variation_1 ]
    }

    response = table.put_item(
        Item=home_page_experiment
    )

    print(json.dumps(response, indent=4))

print(json.dumps(home_page_experiment, indent=4, cls=CompatEncoder))
```
## Load Users
For our experiment simulation, we will load all Retail Demo Store users and run the experiment until the sample size has been met.
First, let's discover the IP address for the Retail Demo Store's Users service.
```
response = servicediscovery.discover_instances(
    NamespaceName='retaildemostore.local',
    ServiceName='users',
    MaxResults=1,
    HealthStatus='HEALTHY'
)
users_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']
print('Users Service Instance IP: {}'.format(users_service_instance))
```
Next, let's load all users into a local data frame.
```
# Load all 5K users so we have enough to satisfy our sample size requirements.
response = requests.get('http://{}/users/all?count=5000'.format(users_service_instance))
users = response.json()
users_df = pd.DataFrame(users)
pd.set_option('display.max_rows', 5)
users_df
```
## Discover Recommendations Service
Next, let's discover the IP address for the Retail Demo Store's Recommendation service.
```
response = servicediscovery.discover_instances(
    NamespaceName='retaildemostore.local',
    ServiceName='recommendations',
    MaxResults=1,
    HealthStatus='HEALTHY'
)
recommendations_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']
print('Recommendation Service Instance IP: {}'.format(recommendations_service_instance))
```
## Simulate Experiment
Next we will simulate our interleaving recommendation experiment by making calls to the Recommendation service across the users we just loaded.
### Simulation Function
The following `simulate_experiment` function is supplied with the number of trials we want to run and the probability of conversion for each variation for our simulation. It runs the simulation long enough to satisfy the number of trials and calls the Recommendations service for each trial in the experiment.
```
def simulate_experiment(n_trials, probs):
    """Simulates experiment based on pre-determined probabilities

    Parameters:
        n_trials (int): number of trials to run for experiment
        probs (array float): array of floats containing probability/conversion
            rate for each variation

    Returns:
        df (df) - data frame of simulation data/results
    """
    # will hold exposure/outcome data
    data = []

    print('Simulating experiment for {} users... this may take a few minutes'.format(n_trials))

    for idx in range(n_trials):
        if idx > 0 and idx % 500 == 0:
            print('Simulated experiment for {} users so far'.format(idx))

        row = {}

        # Get random user
        user = users[randint(0, len(users)-1)]

        # Call Recommendations web service to get recommendations for the user
        response = requests.get('http://{}/recommendations?userID={}&feature={}'.format(recommendations_service_instance, user['id'], feature))
        recommendations = response.json()
        recommendation = recommendations[randint(0, len(recommendations)-1)]

        variation = recommendation['experiment']['variationIndex']
        row['variation'] = variation

        # Conversion based on probability of variation
        row['converted'] = np.random.binomial(1, p=probs[variation])

        if row['converted'] == 1:
            # Update experiment with outcome/conversion
            correlation_id = recommendation['experiment']['correlationId']
            requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance), data={'correlationId':correlation_id})

        data.append(row)

    # convert data into pandas dataframe
    df = pd.DataFrame(data)
    print('Done')
    return df
```
### Run Simulation
Next we run the simulation by defining our simulation parameters for the number of trials and probabilities and then call `simulate_experiment`. This will take a few minutes to run.
```
%%time
# Number of trials to run
N = 2000
# bcr: baseline conversion rate
p_A = 0.15

# Expected conversion rate for the Personalize variation: a 25% lift over the baseline (0.15 * 1.25 = 0.1875)
p_B = 0.1875
ab_data = simulate_experiment(N, [p_A, p_B])
ab_data
```
### Inspect Experiment Summary Statistics
Since the **Experiment** class updates statistics on the experiment in the experiment strategy table when a user is exposed to an experiment ("exposure") and when a user converts ("outcome"), we should see updated counts on our experiment. Let's reload our experiment and inspect the exposure and conversion counts for our simulation.
```
response = table.get_item(Key={'id': home_page_experiment['id']})
print(json.dumps(response['Item'], indent=4, cls=CompatEncoder))
```
Note the `conversions` and `exposures` counts for each variation above. These counts were incremented by the experiment class each time a trial was run (exposure) and a user converted in the `simulate_experiment` function above.
### Analyze Simulation Results
To wrap up, let's analyze some of the results from our simulated interleaving experiment by inspecting the actual conversion rate and verifying our target confidence interval and power.
First, let's take a closer look at the results of our simulation. We'll start by calculating some summary statistics.
```
ab_summary = ab_data.pivot_table(values='converted', index='variation', aggfunc=np.sum)
# add additional columns to the pivot table
ab_summary['total'] = ab_data.pivot_table(values='converted', index='variation', aggfunc=lambda x: len(x))
ab_summary['rate'] = ab_data.pivot_table(values='converted', index='variation')
ab_summary
```
Next let's isolate data for each variation.
```
A_group = ab_data[ab_data['variation'] == 0]
B_group = ab_data[ab_data['variation'] == 1]
A_converted, B_converted = A_group['converted'].sum(), B_group['converted'].sum()
A_converted, B_converted
```
Determine the actual sample size for each variation.
```
A_total, B_total = len(A_group), len(B_group)
A_total, B_total
```
Calculate the actual conversion rates and uplift from our simulation.
```
p_A, p_B = A_converted / A_total, B_converted / B_total
p_A, p_B
p_B - p_A
```
### Determining Statistical Significance
For simplicity we will use the same approach as our A/B test to determine statistical significance.
Let's plot the data from both groups as binomial distributions.
```
fig, ax = plt.subplots(figsize=(12,6))
xA = np.linspace(A_converted-49, A_converted+50, 100)
yA = scs.binom(A_total, p_A).pmf(xA)
ax.scatter(xA, yA, s=10)
xB = np.linspace(B_converted-49, B_converted+50, 100)
yB = scs.binom(B_total, p_B).pmf(xB)
ax.scatter(xB, yB, s=10)
plt.xlabel('converted')
plt.ylabel('probability')
```
Based on the probabilities from our hypothesis, we should see that the test group in blue (B) converted more users than the control group in red (A). However, the plot above is not a plot of the null and alternate hypotheses; those are framed in terms of the difference between the conversion probabilities of the two groups.
> Given the randomness of our user selection, group hashing, and probabilities, your simulation results should be different for each simulation run and therefore may or may not be statistically significant.
In order to calculate the difference between the two groups, we need to standardize the data. Because the number of samples can be different between the two groups, we should compare the probability of successes, p.
According to the central limit theorem, by calculating many sample means we can approximate the true mean of the population from which the data for the control group was taken. The distribution of the sample means will be normally distributed around the true mean with a standard deviation equal to the standard error of the mean.
```
SE_A = np.sqrt(p_A * (1-p_A)) / np.sqrt(A_total)
SE_B = np.sqrt(p_B * (1-p_B)) / np.sqrt(B_total)
SE_A, SE_B
fig, ax = plt.subplots(figsize=(12,6))
xA = np.linspace(0, .3, A_total)
yA = scs.norm(p_A, SE_A).pdf(xA)
ax.plot(xA, yA)
ax.axvline(x=p_A, c='red', alpha=0.5, linestyle='--')
xB = np.linspace(0, .3, B_total)
yB = scs.norm(p_B, SE_B).pdf(xB)
ax.plot(xB, yB)
ax.axvline(x=p_B, c='blue', alpha=0.5, linestyle='--')
plt.xlabel('Converted Proportion')
plt.ylabel('PDF')
```
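The plots give a visual sense of the separation between the two distributions; to attach a number to it, a pooled two-proportion z-test can be run on the simulated counts. The sketch below uses only the standard library and illustrative counts, and is not part of the notebook's experimentation framework; the 0.05 significance threshold is a common convention, not a value from this workshop:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_proportion_z_test(conv_a, total_a, conv_b, total_b):
    """Pooled two-proportion z-test; returns (z, one-sided p-value)."""
    p_a, p_b = conv_a / total_a, conv_b / total_b
    p_pool = (conv_a + conv_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    return z, 1 - norm_cdf(z)

# Illustrative counts in the same ballpark as the simulation above
z, p_value = two_proportion_z_test(150, 1000, 190, 1000)
```

With `A_converted`, `A_total`, `B_converted`, and `B_total` from the cells above, a p-value below the chosen significance level (commonly 0.05) would indicate a statistically significant lift.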
## Next Steps
You have completed the exercise for implementing an interleaving recommendation test using the experimentation framework in the Retail Demo Store. Close this notebook and open the notebook for the next exercise, **[3.4-Multi-Armed-Bandit-Experiment](./3.4-Multi-Armed-Bandit-Experiment.ipynb)**.
### References and Further Reading
- [Large Scale Validation and Analysis of Interleaved Search Evaluation](http://olivier.chapelle.cc/pub/interleaving.pdf), Chapelle et al
- [Innovating Faster on Personalization Algorithms at Netflix Using Interleaving](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55), Netflix Technology Blog
```
import sys
import os
sys.path.append('/Users/zhengz11/myscripts/git_clone/pn_kc/')
import json
import mushroom_2to3.connect_path as cp
import mushroom_2to3.analysis_routine as ar
# credential, to delete when push to remote
sys.path.append('/Users/zhengz11/myscripts/mushroom_v9/credential/')
from fafb_tokens import token
fafb_c = cp.fafb_connection(token)
def save_json(js, file_name):
    with open(file_name, 'w+') as file:
        json.dump(js, file)

def load_json(path):
    with open(path) as outfile:
        r = json.load(outfile)
    return r
path = "/Users/zhengz11/myscripts/git_clone/pn_kc/test/"
##----------------------------------------
import mushroom_2to3.connect as cc
pn_skids = cc.get_skids_from_annos(
    fafb_c, [['right_calyx_PN'], ['has_bouton']], ["multiglomerular PN"])

# get KC skeleton ids from CATMAID
# rd is random draw manually traced KCs
rd = cc.get_skids_from_annos(
    fafb_c,
    [['Random Draw 1 KC', 'Random Draw 2 KC'], ['Complete']],
    ['KCaBp', 'KCyd'])

t1p = cc.get_skids_from_annos(
    fafb_c,
    [['T1+ Complete']])

bundle = cc.get_skids_from_annos(
    fafb_c,
    [['Bundle 1 Seed', 'Different Tracing Protocol in Bundle 1'], ['Complete']],
    ['KCaBp', 'KCyd'])

save_path = path + "skids/"
if not os.path.exists(save_path):
    os.makedirs(save_path)
save_json(pn_skids, save_path + "PN")
save_json(rd, save_path + "RandomDraw")
save_json(t1p, save_path + "t1p")
save_json(bundle, save_path + "bundle")
# pn_skids = load_json(save_path + "PN")
# rd = load_json(save_path + "RandomDraw")
# bundle = load_json(save_path + "bundle")
# load_json(save_path + "t1p.txt")
# load_json(save_path + "bundle.txt")
all_skids = pn_skids + rd + t1p + bundle
for i in all_skids:
    cp.save_compact_sk(fafb_c, i, path)
cp.save_annotations_for_skeleton(fafb_c, all_skids, path)
cp.save_neurons_names(fafb_c, all_skids, path)
cp.save_root_node(fafb_c, all_skids, path)
cp.save_annotated_annotations(fafb_c, 'glom_class', 'id', path)
cp.save_annotated_annotations(fafb_c, 'kc_class', 'id', path)
cp.save_pre_post_info(fafb_c, pn_skids, rd + t1p, path, 'testing_pn_kc')
import sys
import os
sys.path.append('/Users/zhengz11/myscripts/git_clone/pn_kc/')
import json
import mushroom_2to3.connect_path as cp
import mushroom_2to3.analysis_routine as ar
# credential, to delete when push to remote
sys.path.append('/Users/zhengz11/myscripts/mushroom_v9/credential/')
from fafb_tokens import token
fafb_c = cp.fafb_connection(token)
fafb_c
# 200329
cp.save_pre_post_info(fafb_c, pn_skids, rd, path, 'pn_rd_kc')
path
# copy from /Users/zhengz11/myscripts/bocklab_git/bocklab/zhihao/mushroom_py/v10/191029-bouton-KC-representations_per_PN.py
# save_path = '/Users/zhengz11/myscripts/data_results/191028-bouton-KC_representations/'
# 200110 change name to 200110-pn_conn_tbl.py
# 200326 change name to load_pn_tbl.py
# This will produce the table for Fig 2A
ana = ana_all_rd
# y number of boutons
# y number of downstream KCs
# per PN, also per glomerulus
path = "/Users/zhengz11/myscripts/git_clone/pn_kc/data/skids/"
pn_skids = load_json(path + "pn")
# pn_skids = cc.get_skids_from_annos(fafb_c, [['right_calyx_PN'], ['has_bouton']], ["multiglomerular PN"])
boutons_per_pn = [len(ana.col_neurons[i].segments.ids) for i in pn_skids]
gloms = df_lookup('glom_anno_id', ana.pn_mapping.skids_to_types(pn_skids),
'short_glom_name', glom_btn_table)
meta_tbl = pn_meta_table.copy()
t1 = meta_tbl.Significance.copy()
t1[pd.isna(t1)] = 'Unknown'
meta_tbl.Significance = t1
tbl = pd.DataFrame({'pn_skids': pn_skids,
                    'num_boutons': boutons_per_pn,
                    'short_glom_name': gloms,
                    'significance': pd.concat([meta_tbl.query('glom==@i').Significance for i in gloms])})
# add connections, namely the number of KCs per PN
conn_data = ana.conn_data['pn_kc_contracted']
pn_gloms = conn_data.col_ids
t11 = np.copy(conn_data.conn['5s'])
t11[np.where(t11)]=1
pn_connections = t11.sum(0)
tbl = tbl.merge(pd.DataFrame({'pn_skids': conn_data.col_ids,'outdegree': pn_connections}), how='outer',on='pn_skids')
# add claws per PNs
t16 = ana.conn_data['pn_claw_contracted'].conn['1s'].copy()
t16[np.where(t16<3)]=0
t16[np.where(t16)]=1
ds_claws = t16.sum(0)
tbl = tbl.merge(pd.DataFrame({'pn_skids': conn_data.col_ids,'num_claws': ds_claws}), how='outer',on='pn_skids')
tbl = tbl.assign(norm_num_boutons=lambda x: x.num_boutons/sum(tbl.num_boutons), norm_num_kcs=lambda x: x.outdegree/sum(tbl.outdegree), norm_num_claws=lambda x: x.num_claws/sum(tbl.num_claws))
tbl.significance = pd.Categorical(tbl.significance, ordered=True, categories=['Food','Aversive','Pheromonal','Egg-laying','Unknown'])
tbl = tbl.sort_values(by=['significance','short_glom_name'])
t1_dict = dict(zip(['Food', 'Aversive', 'Pheromonal', 'Egg-laying', 'Unknown'],['green','red','purple','blue','black']))
tbl['color'] = tbl['significance'].apply(lambda x: t1_dict.get(x))
comm_gloms = df_lookup('glom_id', comm_ids,'short_glom_name', glom_btn_table)
tbl['community'] = [True if i in comm_gloms else False for i in tbl['short_glom_name']]
## to plot
plot_tbl = tbl.copy()
t1_dict = dict(zip(['Food', 'Aversive', 'Pheromonal', 'Egg-laying', 'Unknown'],['green','red','purple','blue','black']))
plot_tbl['color'] = plot_tbl['significance'].apply(lambda x: t1_dict.get(x))
t6 = pd.Series(plot_tbl['num_claws'])
import stat
t6.sort_values(ascending=False, inplace=True)
sns.regplot(list(range(113)), t6.values.tolist(), logistic=True)
import statsmodels
t6.values.shape
bc_conn = ana.conn_data['bouton_claw'].conn['5s'].copy()
bc_conn.shape
freq = [np.count_nonzero(bc_conn[:,i]) for i in range(bc_conn.shape[1])]
freq_vc = pd.Series(freq).value_counts()
sns.regplot(freq_vc.index.tolist(), freq_vc.values.tolist(),logistic=True)
import sys
import os
sys.path.append('/Users/zhengz11/myscripts/git_clone/pn_kc/')
import json
import mushroom_2to3.connect_path as cp
import mushroom_2to3.analysis_routine as ar
# credential, to delete when push to remote
sys.path.append('/Users/zhengz11/myscripts/mushroom_v9/credential/')
# from fafb_tokens import token
# fafb_c = cp.fafb_connection(token)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import io
qpcr_results = pd.read_excel("./qpcr-data/2020324 LOD Study 1.xlsx", sheet_name="Results", skiprows=42, na_values=['Undetermined'])
```
# Standard Curve
```
sc = qpcr_results[qpcr_results['Sample Name'].str.contains('PCD')]
sc[['Sample Name', 'CT', 'Quantity', 'Amp Status']]
sc[sc['Amp Status'] == 'Amp'].plot.scatter(x='Quantity', y='CT', logx=True)
ax = sns.regplot(data=sc[sc['Amp Status'] == 'Amp'], x='Quantity', y='CT', logx=True)
ax.set_xscale('log')
```
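A standard curve like this is usually summarized by its slope: with ten-fold dilutions, a slope near -3.32 cycles per decade corresponds to roughly 100% amplification efficiency, via efficiency = 10^(-1/slope) - 1. A sketch of that calculation on hypothetical CT/quantity pairs follows; the numbers below are illustrative, not values from this run:

```python
import math

def amplification_efficiency(quantities, cts):
    """Least-squares fit of CT against log10(quantity); returns
    (efficiency, slope) where efficiency = 10**(-1/slope) - 1."""
    xs = [math.log10(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return 10 ** (-1 / slope) - 1, slope

# Hypothetical ten-fold dilution series
eff, slope = amplification_efficiency(
    [2e5, 2e4, 2e3, 2e2], [16.0, 19.3, 22.6, 25.9])
```

The same calculation could be applied to the `Quantity` and `CT` columns of the `sc` frame above to characterize this assay.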
# Negative Controls
```
negs = qpcr_results.loc[qpcr_results['Sample Name'].str.contains('Negative')]
negs
negs['Amp Status'].str.match('Amp').any()
```
# CoV
```
ncov = qpcr_results[(qpcr_results['Reporter'] == 'FAM') & qpcr_results['Sample Name'].str.contains('nCoV')]
ncov.loc[:, 'Sample Number'] = ncov['Sample Name'].str.replace(r'[\D]*([0-9]+)[\D]*', r'\1').astype(np.int)
ncov.boxplot(by='Sample Number', column='CT', rot=45, figsize=(12,12))
sample_data = '''
Sample Number NP Sample "Spike" Control "Control Conc" "Volume"
1 1000 PCD 1 2.00E+05 5
2 1000 PCD 1 2.00E+05 5
3 1000 PCD 1 2.00E+05 5
4 200 PCD 1 2.00E+05 1
5 200 PCD 1 2.00E+05 1
6 200 PCD 1 2.00E+05 1
7 100 PCD 2 2.00E+04 5
8 100 PCD 2 2.00E+04 5
9 100 PCD 2 2.00E+04 5
10 50 PCD 2 2.00E+04 2.5
11 50 PCD 2 2.00E+04 2.5
12 50 PCD 2 2.00E+04 2.5
13 20 PCD 2 2.00E+04 1
14 20 PCD 2 2.00E+04 1
15 20 PCD 2 2.00E+04 1
16 10 PCD 3 2.00E+03 5
17 10 PCD 3 2.00E+03 5
18 10 PCD 3 2.00E+03 5
19 2 PCD 3 2.00E+03 1
20 2 PCD 3 2.00E+03 1
21 2 PCD 3 2.00E+03 1
22 0 NEG 0 0
23 0 NEG 0 0
24 0 NEG 0 0'''
sample_data = pd.read_table(io.StringIO(sample_data))
ncov = ncov.merge(sample_data)
ncov
ncov['CT'].unique()
ncov[ncov['Spike'] > 0].plot.scatter(x='Spike', y='CT', logx=True)
ax = sns.regplot(data=ncov[ncov['Spike']>0], x='Spike', y='CT', logx=True)
ax.set_xscale('log')
```
## Yield
```
ncov['Quantity'] = ncov['Quantity'] * 1e6
ncov['Quantity']
ncov.plot.scatter(x='Spike', y='Quantity')
ncov.boxplot(by='Spike', column='CT')
ncov['RNA Input'] = ncov['Spike'] * 400 / 30 * 5
ncov.loc[:,'Yield'] = ncov['Quantity'] / ncov['RNA Input']
ncov[ncov['Yield'] < 1][['Sample Name','Yield']].plot.bar()
ncov[ncov['Yield'] < 1]['Yield'].describe()
(ncov['Quantity'] * 6) / (ncov['Spike'] * 400)
```
## Diag
```
pos = qpcr_results[qpcr_results['Sample Name']=='PCD 1']
(pos['CT'] > 16).all()
(pos['CT'] < 23).all()
pos
neg = qpcr_results[(qpcr_results['Sample Name']=='Negative') & (qpcr_results['Target Name'] == 'FAM')]
neg
ierc = qpcr_results[(qpcr_results['Reporter'] == 'VIC') & qpcr_results['Sample Name'].str.contains('nCoV')]
ierc['CT'].describe()
ierc.sort_values(by='Sample Name')[['Sample Name','CT','Amp Status']]
ncov.sort_values(by='Sample Number')[['Sample Name','CT','Spike','Amp Status']]
neg['CT'].mean()
diags = list()
for i in range(1, 48):
    name = "{} nCoV".format(i)
    sample_trips_data = qpcr_results[qpcr_results['Sample Name'] == '{} nCoV'.format(i)]
    for _, sample_data in sample_trips_data.groupby('Well'):
        sample_data = sample_data.set_index('Target Name')
        ierc = sample_data.loc['VIC']
        ncov = sample_data.loc['FAM']
        result = {
            'Sample Name': name,
            'Result': 'Unknown',
            'Type': 'Unknown',
            'CT': ncov['CT'],
            'Quantity': ncov['Quantity'],
            'IECRNA CT': ierc['CT']}
        if ncov['Amp Status'] == 'Amp':
            result['Result'] = 'Positive'
            if ncov['CT'] <= 30 or ierc['Amp Status'] == 'Amp':
                result['Type'] = 'Quantitative'
            elif ncov['CT'] > 30 and ierc['Amp Status'] == 'No Amp':
                result['Type'] = 'Qualitative'
        else:
            if ierc['Amp Status'] == 'Amp':
                result['Result'] = 'Negative'
                result['Type'] = 'Qualitative'
            else:
                result['Type'] = 'Sample Failure'
        diags.append(result)
diags = pd.DataFrame(diags)
diags
```
# Curves
```
qpcr_amp_data = pd.read_excel("./20200328 LOD.xlsx", sheet_name="Amplification Data", skiprows=42, na_values=['Undetermined'])
qpcr_amp_data = qpcr_amp_data.merge(qpcr_results[['Well', 'Sample Name']])
qpcr_amp_data[(qpcr_amp_data['Sample Name'] == '15 nCoV') & (qpcr_amp_data['Target Name'] == 'FAM')].plot(x='Cycle', y='Rn')
```
# Deep Reinforcement Learning for the CartPole Environment
```
# Install packages
import gym
import copy
import torch
from torch.autograd import Variable
import random
import matplotlib.pyplot as plt
from PIL import Image
from IPython.display import clear_output
import math
import torchvision.transforms as T
import numpy as np
import time
```
## Environment
The CartPole environment consists of a pole which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. The state space is represented by four values: cart position, cart velocity, pole angle, and the velocity of the tip of the pole. The action space consists of two actions: moving left or moving right. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.
Source: [OpenAI Gym](https://gym.openai.com/envs/CartPole-v1/).
The cell below plots a bunch of example frames from the environment.
```
env = gym.envs.make("CartPole-v1")
# Demonstration
env = gym.envs.make("CartPole-v1")
def get_screen():
    ''' Extract one step of the simulation.'''
    screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255.
    return torch.from_numpy(screen)
# Specify the number of simulation steps
num_steps = 2
# Show several steps
for i in range(num_steps):
    clear_output(wait=True)
    env.reset()
    plt.figure()
    plt.imshow(get_screen().cpu().permute(1, 2, 0).numpy(),
               interpolation='none')
    plt.title('CartPole-v1 Environment')
    plt.xticks([])
    plt.yticks([])
    plt.show()
```
## Plotting Function
This function will make it possible to analyze how the agent learns over time. The resulting plot consists of two subplots. The first one plots the total reward the agent accumulates over time, while the other plot shows a histogram of the agent's total rewards for the last 50 episodes.
```
def plot_res(values, title=''):
    ''' Plot the reward curve and histogram of results over time.'''
    # Update the window after each episode
    clear_output(wait=True)

    # Define the figure
    f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12,5))
    f.suptitle(title)
    ax[0].plot(values, label='score per run')
    ax[0].axhline(195, c='red', ls='--', label='goal')
    ax[0].set_xlabel('Episodes')
    ax[0].set_ylabel('Reward')
    x = range(len(values))
    ax[0].legend()

    # Calculate the trend
    try:
        z = np.polyfit(x, values, 1)
        p = np.poly1d(z)
        ax[0].plot(x, p(x), "--", label='trend')
    except:
        print('')

    # Plot the histogram of results
    ax[1].hist(values[-50:])
    ax[1].axvline(195, c='red', label='goal')
    ax[1].set_xlabel('Scores per Last 50 Episodes')
    ax[1].set_ylabel('Frequency')
    ax[1].legend()
    plt.show()
```
## Random Search
Before implementing any deep learning approaches, I wrote a simple strategy where the action is sampled randomly from the action space. This approach will serve as a baseline for other strategies and will make it easier to understand how to work with the agent using the Open AI Gym environment.
```
def random_search(env, episodes,
                  title='Random Strategy'):
    """ Random search strategy implementation."""
    final = []
    for episode in range(episodes):
        state = env.reset()
        done = False
        total = 0
        while not done:
            # Sample random actions
            action = env.action_space.sample()
            # Take action and extract results
            next_state, reward, done, _ = env.step(action)
            # Update reward
            total += reward
            if done:
                break
        # Add to the final reward
        final.append(total)
    plot_res(final, title)
    return final

# Get random search results
episodes = 30
random_s = random_search(env, episodes)
```
The plot above presents the random strategy. As expected, it's impossible to solve the environment using this approach: the agent is not learning from its experience. Despite sometimes getting lucky (a reward of almost 75), its average performance is as low as 10 steps.
## Deep Q Learning
The main idea behind Q-learning is that we have a function $Q: State \times Action \rightarrow \mathbb{R}$, which can tell the agent what actions will result in what rewards. If we know the value of Q, it is possible to construct a policy that maximizes rewards:
\begin{align}\pi(s) = \arg\!\max_a \ Q(s, a)\end{align}
However, in the real world, we don't have access to full information, that's why we need to come up with ways of approximating Q. One traditional method is creating a lookup table where the values of Q are updated after each of the agent's actions. However, this approach is slow and does not scale to large action and state spaces. Since neural networks are universal function approximators, I will train a network that can approximate $Q$.
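For contrast with the neural approach, the lookup-table method described above can be written in a few lines. This is a toy illustration with assumed state/action encodings, not part of this notebook's agent:

```python
from collections import defaultdict

def tabular_q_update(Q, state, action, reward, next_state, actions,
                     alpha=0.1, gamma=0.9):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])

Q = defaultdict(float)  # unseen (state, action) pairs default to 0
tabular_q_update(Q, state=0, action=1, reward=1.0, next_state=1, actions=[0, 1])
```

This works for small discrete spaces, but CartPole's state is continuous, which is exactly why the network below replaces the table.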
The DQL class implementation consists of a simple neural network implemented in PyTorch that has two main methods--predict and update. The network takes the agent's state as an input and returns the Q values for each of the actions. The maximum Q value is selected by the agent to perform the next action.
```
class DQN():
    ''' Deep Q Neural Network class. '''
    def __init__(self, state_dim, action_dim, hidden_dim=64, lr=0.05):
        self.criterion = torch.nn.MSELoss()
        self.model = torch.nn.Sequential(
            torch.nn.Linear(state_dim, hidden_dim),
            torch.nn.LeakyReLU(),
            torch.nn.Linear(hidden_dim, hidden_dim*2),
            torch.nn.LeakyReLU(),
            torch.nn.Linear(hidden_dim*2, action_dim)
        )
        self.optimizer = torch.optim.Adam(self.model.parameters(), lr)

    def update(self, state, y):
        """Update the weights of the network given a training sample."""
        y_pred = self.model(torch.Tensor(state))
        loss = self.criterion(y_pred, torch.Tensor(y))
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def predict(self, state):
        """Compute Q values for all actions using the DQN."""
        with torch.no_grad():
            return self.model(torch.Tensor(state))
```
The q_learning function is the main loop for all the algorithms that follow.
It takes several parameters:
- env -- the OpenAI Gym environment we want to solve (CartPole).
- episodes -- the number of games we want to play (from beginning to end).
- gamma -- the discount factor multiplied by future rewards to dampen their effect on the agent; it makes future rewards worth less than immediate rewards.
- epsilon -- the proportion of random actions relative to actions informed by the "knowledge" the agent accumulates during training. Before playing the game, the agent has no experience, so it is common to start epsilon high and then gradually decrease it.
- eps_decay -- the rate at which epsilon decreases as the agent learns; the default of 0.99 comes from the original DQN paper.

I will explain the other parameters when we get to the corresponding agents.
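As a quick sanity check on the exploration schedule, applying the same per-episode update used in the training loop shows that a starting epsilon of 0.3 with eps_decay of 0.99 reaches the 0.01 floor after roughly 340 episodes:

```python
epsilon, eps_decay, floor = 0.3, 0.99, 0.01
schedule = []
for episode in range(400):
    schedule.append(epsilon)
    # Same update as at the end of each training episode
    epsilon = max(epsilon * eps_decay, floor)
```

So for runs of 150 episodes like the ones below, the agent is still exploring noticeably even at the end.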
The most straightforward agent updates its Q-values based on its most recent observation. It has no memory: it learns by first exploring the environment and then gradually decreasing its epsilon value to make informed decisions:
```
def q_learning(env, model, episodes, gamma=0.9,
               epsilon=0.3, eps_decay=0.99,
               replay=False, replay_size=20,
               title='DQL', double=False,
               n_update=10, soft=False, verbose=True):
    """Deep Q Learning algorithm using the DQN."""
    final = []
    memory = []
    episode_i = 0
    sum_total_replay_time = 0
    _max = [0.0, 0.0, 0.0, 0.0]
    for episode in range(episodes):
        episode_i += 1
        if double and not soft:
            # Update target network every n_update episodes
            if episode % n_update == 0:
                model.target_update()
        if double and soft:
            model.target_update()
        # Reset state
        state = env.reset()
        done = False
        total = 0
        while not done:
            # Epsilon-greedy policy to explore the state space
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                q_values = model.predict(state)
                action = torch.argmax(q_values).item()
            # Take action and add reward to total
            next_state, reward, done, _ = env.step(action)
            # Update total and memory
            total += reward
            memory.append((state, action, next_state, reward, done))
            q_values = model.predict(state).tolist()
            if done:
                if not replay:
                    q_values[action] = reward
                    # Update network weights
                    model.update(state, q_values)
                break
            if replay:
                t0 = time.time()
                # Update network weights using replay memory
                model.replay(memory, replay_size, gamma)
                t1 = time.time()
                sum_total_replay_time += (t1 - t0)
            else:
                # Update network weights using the last step only
                q_values_next = model.predict(next_state)
                q_values[action] = reward + gamma * torch.max(q_values_next).item()
                model.update(state, q_values)
            # Track the largest absolute value seen in each state dimension
            for _i in range(len(_max)):
                if np.abs(state[_i]) > _max[_i]:
                    _max[_i] = np.abs(state[_i])
            state = next_state
        # Update epsilon
        epsilon = max(epsilon * eps_decay, 0.01)
        final.append(total)
        plot_res(final, title)
        if verbose:
            print("episode: {}, total reward: {}".format(episode_i, total))
    if replay:
        print("Average replay time:", sum_total_replay_time / episode_i)
    print(_max)
    return final
```
### Parameters
```
# Number of states
n_state = env.observation_space.shape[0]
# Number of actions
n_action = env.action_space.n
# Number of episodes
episodes = 150
# Number of hidden nodes in the DQN
n_hidden = 50
# Learning rate
lr = 0.001
# Get DQN results
simple_dqn = DQN(n_state, n_action, n_hidden, lr)
simple = q_learning(env, simple_dqn, episodes, gamma=.9, epsilon=0.3)
```
The graph above shows that the agent's performance has significantly improved. It reached 175 steps, which, as we've seen, is out of reach for a random agent. The trend line is also positive, and performance increases over time. At the same time, the agent didn't manage to get above the goal line within 150 episodes, and its average performance is still around 15 steps, so there is definitely room for improvement.
## Replay
The approximation of Q using one sample at a time is not very effective. The graph above is a nice illustration of that. The network managed to achieve a much better performance compared to a random agent. However, it couldn't get to the threshold line of 195 steps. I implemented experience replay to improve network stability and make sure previous experiences are not discarded but used in training.
Experience replay stores the agent's experiences in memory. Batches of experiences are randomly sampled from memory and are used to train the neural network. Such learning consists of two phases--gaining experience and updating the model. The size of the replay controls the number of experiences that are used for the network update. Memory is an array that stores the agent's state, reward, and action, as well as whether the action finished the game and the next state.
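One detail worth noting: in q_learning above, memory is a plain list that grows for the entire run. A common refinement (not used in this notebook) is a bounded buffer that automatically discards the oldest experiences; a minimal sketch with `collections.deque`, using an arbitrary capacity and dummy transitions:

```python
import random
from collections import deque

# Bounded replay buffer: capacity 10_000 is an arbitrary choice for the sketch.
memory = deque(maxlen=10_000)

# Phase 1: gain experience (dummy transitions stand in for env.step results)
for t in range(12_000):
    transition = (t, 0, t + 1, 1.0, False)  # (state, action, next_state, reward, done)
    memory.append(transition)               # oldest entries are dropped silently

# Phase 2: sample a batch for the model update
batch = random.sample(memory, 20)
```

Since `random.sample` accepts any sequence, a deque like this could be passed to the replay methods below in place of the list without other changes.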
```
# Expand the DQN class with a replay function.
class DQN_replay(DQN):
    # old replay function
    # def replay(self, memory, size, gamma=0.9):
    #     """ Add experience replay to the DQN network class. """
    #     # Make sure the memory is big enough
    #     if len(memory) >= size:
    #         states = []
    #         targets = []
    #         # Sample a batch of experiences from the agent's memory
    #         batch = random.sample(memory, size)
    #         # Extract information from the data
    #         for state, action, next_state, reward, done in batch:
    #             states.append(state)
    #             # Predict q_values
    #             q_values = self.predict(state).tolist()
    #             if done:
    #                 q_values[action] = reward
    #             else:
    #                 q_values_next = self.predict(next_state)
    #                 q_values[action] = reward + gamma * torch.max(q_values_next).item()
    #             targets.append(q_values)
    #         self.update(states, targets)

    # new, vectorized replay function
    def replay(self, memory, size, gamma=0.9):
        """Vectorized replay function, faster than the per-sample loop above."""
        if len(memory) >= size:
            batch = random.sample(memory, size)
            batch_t = list(map(list, zip(*batch)))  # Transpose batch list
            states = batch_t[0]
            actions = batch_t[1]
            next_states = batch_t[2]
            rewards = batch_t[3]
            is_dones = batch_t[4]

            states = torch.Tensor(states)
            actions_tensor = torch.LongTensor(actions)
            next_states = torch.Tensor(next_states)
            rewards = torch.Tensor(rewards)
            is_dones_tensor = torch.Tensor(is_dones)
            is_dones_indices = torch.where(is_dones_tensor == True)[0]

            all_q_values = self.model(states)       # predicted q_values of all states
            all_q_values_next = self.model(next_states)
            # Update q values: bootstrap from the next state...
            all_q_values[range(len(all_q_values)), actions] = rewards + gamma * torch.max(all_q_values_next, axis=1).values
            # ...except for terminal transitions, which get the raw reward
            all_q_values[is_dones_indices.tolist(), actions_tensor[is_dones_indices].tolist()] = rewards[is_dones_indices.tolist()]

            self.update(states.tolist(), all_q_values.tolist())
```
### Replay using the old replay function
```
# Get replay results
dqn_replay = DQN_replay(n_state, n_action, n_hidden, lr)
replay = q_learning(env, dqn_replay,
episodes, gamma=.9,
epsilon=0.2, replay=True,
title='DQL with Replay')
```
### Replay using the new replay function
```
# Get replay results
dqn_replay = DQN_replay(n_state, n_action, n_hidden, lr)
replay = q_learning(env, dqn_replay,
episodes, gamma=.9,
epsilon=0.2, replay=True,
title='DQL with Replay')
```
As expected, the neural network with the replay seems to be much more robust and smart compared to its counterpart that only remembers the last action. After approximately 60 episodes, the agent managed to achieve the winning threshold and remain at this level. I also managed to achieve the highest reward possible--500.
## Double Q Learning
Traditional Deep Q Learning tends to overestimate the reward, which leads to unstable training and a lower-quality policy. Let's consider the equation for the Q value:

The last part of the equation takes the estimate of the maximum value. This procedure results in systematic overestimation, which introduces a maximization bias. Since Q-learning involves learning estimates from estimates, such overestimation is especially worrying.
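This maximization bias is easy to demonstrate numerically: even when every action has the same true value, the max over noisy, individually unbiased estimates is biased upward. A toy sketch (the noise scale and action count are arbitrary and unrelated to CartPole):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_trials = 4, 10_000

# Unbiased but noisy value estimates for actions whose true value is all 0
estimates = rng.normal(0.0, 1.0, size=(n_trials, n_actions))

# The max over actions is systematically above the true max of 0
mean_max = estimates.max(axis=1).mean()
```

For four standard-normal estimates the expected maximum is about 1.03, even though each estimate is individually unbiased; this is the overestimation the target network mitigates.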
To avoid such a situation, I will define a new target network. The Q values will be taken from this new network, which is meant to reflect the state of the main DQN. However, it doesn't have identical weights, because it's only updated after a certain number of episodes. This idea was first introduced in van Hasselt et al. (2015).
The addition of the target network might slow down the training since the target network is not continuously updated. However, it should have a more robust performance over time.
The n_update parameter specifies the interval after which the target network is updated.
```
class DQN_double(DQN):
    def __init__(self, state_dim, action_dim, hidden_dim, lr):
        super().__init__(state_dim, action_dim, hidden_dim, lr)
        self.target = copy.deepcopy(self.model)

    def target_predict(self, s):
        ''' Use the target network to make predictions. '''
        with torch.no_grad():
            return self.target(torch.Tensor(s))

    def target_update(self):
        ''' Update the target network with the model weights. '''
        self.target.load_state_dict(self.model.state_dict())

    def replay(self, memory, size, gamma=1.0):
        ''' Add experience replay to the DQN network class. '''
        if len(memory) >= size:
            # Sample experiences from the agent's memory
            data = random.sample(memory, size)
            states = []
            targets = []
            # Extract datapoints from the data
            for state, action, next_state, reward, done in data:
                states.append(state)
                q_values = self.predict(state).tolist()
                if done:
                    q_values[action] = reward
                else:
                    # The only difference from the simple replay is this line:
                    # next q values are predicted with the target network.
                    q_values_next = self.target_predict(next_state)
                    q_values[action] = reward + gamma * torch.max(q_values_next).item()
                targets.append(q_values)
            self.update(states, targets)

# Get replay results
dqn_double = DQN_double(n_state, n_action, n_hidden, lr)
double = q_learning(env, dqn_double, episodes, gamma=.9,
                    epsilon=0.2, replay=True, double=True,
                    title='Double DQL with Replay', n_update=10)
```
Double DQL with replay has outperformed the previous version and has consistently performed above 300 steps. The performance also seems to be a bit more stable, thanks to the separation of action selection and evaluation. Finally, let's explore the last modification to the DQL agent.
## Soft Target Update
The method used to update the target network implemented above was introduced in the original DQN paper. In this section, we will explore another well-established method of updating the target network weights. Instead of updating the weights after a certain number of steps, we will incrementally update the target network after every episode using the following formula:
\begin{align}\theta_{target} \leftarrow \tau \, \theta_{model} + (1 - \tau) \, \theta_{target}\end{align}

where $0 < \tau < 1$.
This method of updating the target network is known as “soft target network updates” and was introduced in Lillicrap et al. (2016). The implementation is shown below:
```
class DQN_double_soft(DQN_double):
    def target_update(self, TAU=0.1):
        ''' Update the target network gradually. '''
        # Extract parameters
        model_params = self.model.named_parameters()
        target_params = self.target.named_parameters()
        updated_params = dict(target_params)
        for model_name, model_param in model_params:
            if model_name in updated_params:
                # Blend the target parameter towards the model parameter
                updated_params[model_name].data.copy_(
                    TAU * model_param.data + (1 - TAU) * updated_params[model_name].data)
        self.target.load_state_dict(updated_params)

dqn_double_soft = DQN_double_soft(n_state, n_action, n_hidden, lr)
double = q_learning(env, dqn_double_soft, episodes, gamma=.9,
                    epsilon=0.2, replay=True, double=True,
                    title='Double DQL with Replay', n_update=10, soft=True)
```
The network with soft target updates performed quite well. However, it doesn't seem to be better than hard weight updates after a certain number of steps.
## Conclusion
The implementation of experience replay and the target network significantly improved the performance of a Deep Q Learning agent in the OpenAI Gym CartPole environment. Other modifications, such as Dueling Network Architectures (Wang et al., 2015), could be added to this implementation to improve the agent's performance further. The algorithm also generalizes to other environments, so it's possible to test how well it performs on other tasks.
## References:
(1) Reinforcement Q-Learning from Scratch in Python with OpenAI Gym. (2019). Learndatasci.com. Retrieved 9 December 2019, from https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/
(2) Paszke, A., (2019). Reinforcement Learning (DQN) tutorial. Retrieved from: https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
(3) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., ... & Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
(4) Van Hasselt, H., Guez, A., & Silver, D. (2016, March). Deep reinforcement learning with double q-learning. In Thirtieth AAAI conference on artificial intelligence.
(5) Wang, Z., Schaul, T., Hessel, M., Van Hasselt, H., Lanctot, M., & De Freitas, N. (2015). Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581.
(6) Double DQN Implementation to Solve OpenAI Gym’s CartPole v-0. (2019). Medium. Retrieved 20 December 2019, from https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d
```
#Importing the basic libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import plotly.offline as py
from plotly import tools
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings('ignore')
#Reading the dataset
cars = pd.read_csv('car_data.csv')
cars.shape
#Since our dataset doesn't contain column names, we assign them manually
cars.columns = ['Buying', 'Maint', 'Doors','Persons','LugBoot','Safety','Evaluation']
#Taking an overview of data
cars.sample(10)
#Let's check if there are any missing values in our dataset
cars.isnull().sum()
#We see that there are no missing values in our dataset
#Let's take a more analytical look at our dataset
cars.describe()
#We realize that our data has categorical values
cars.columns
#Lets find out the number of cars in each evaluation category
cars['Evaluation'].value_counts().sort_index()
fig = {
"data": [
{
"values": [1210,384,69,65],
"labels": [
"Unacceptable",
"Acceptable",
"Good",
"Very Good"
],
"domain": {"column": 0},
"name": "Car Evaluation",
"hoverinfo":"label+percent+name",
"hole": .6,
"type": "pie"
}],
"layout": {
"title":"Distribution of Evaluated Cars",
"grid": {"rows": 1, "columns": 1},
"annotations": [
{
"font": {
"size": 36
},
"showarrow": False,
"text": "",
"x": 0.5,
"y": 0.5
}
]
}
}
py.iplot(fig, filename='cars_donut')
#cars.Evaluation.replace(('unacc', 'acc', 'good', 'vgood'), (0, 1, 2, 3), inplace = True)
#cars.Buying.replace(('vhigh', 'high', 'med', 'low'), (3, 2, 1, 0), inplace = True)
#cars.Maint.replace(('vhigh', 'high', 'med', 'low'), (3, 2, 1, 0), inplace = True)
#cars.Doors.replace(('5more'),(5),inplace=True)
#cars.Persons.replace(('more'),(5),inplace=True)
#cars.LugBoot.replace(('small','med','big'),(0,1,2),inplace=True)
#cars.Safety.replace(('low','med','high'),(0,1,2),inplace=True)
cars.Doors.replace(('5more'),('5'),inplace=True)
cars.Persons.replace(('more'),('5'),inplace=True)
features = cars.iloc[:,:-1]
features[:5]
a = []
for i in features:
    a.append(features[i].value_counts())
buy = pd.crosstab(cars['Buying'], cars['Evaluation'])
mc = pd.crosstab(cars['Maint'], cars['Evaluation'])
drs = pd.crosstab(cars['Doors'], cars['Evaluation'])
prsn = pd.crosstab(cars['Persons'], cars['Evaluation'])
lb = pd.crosstab(cars['LugBoot'], cars['Evaluation'])
sfty = pd.crosstab(cars['Safety'], cars['Evaluation'])
buy
# One stacked bar trace per evaluation category
data = [
    go.Bar(x=a[0].index, y=buy[col], name=name)
    for col, name in [('unacc', 'Unacceptable'), ('acc', 'Acceptable'),
                      ('good', 'Good'), ('vgood', 'Very Good')]
]
layout = go.Layout(
barmode='stack',
title='Selling Price vs Evaluation'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='distri')
# One stacked bar trace per evaluation category
data = [
    go.Bar(x=a[0].index, y=mc[col], name=name)
    for col, name in [('unacc', 'Unacceptable'), ('acc', 'Acceptable'),
                      ('good', 'Good'), ('vgood', 'Very Good')]
]
layout = go.Layout(
barmode='stack',
title='Maintainance cost vs Evaluation'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='cars_donut')
# One stacked bar trace per evaluation category
data = [
    go.Bar(x=a[2].index, y=drs[col], name=name)
    for col, name in [('unacc', 'Unacceptable'), ('acc', 'Acceptable'),
                      ('good', 'Good'), ('vgood', 'Very Good')]
]
layout = go.Layout(
barmode='stack',
title='Doors vs Evaluation'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='cars_donut')
# One stacked bar trace per evaluation category
data = [
    go.Bar(x=a[3].index, y=prsn[col], name=name)
    for col, name in [('unacc', 'Unacceptable'), ('acc', 'Acceptable'),
                      ('good', 'Good'), ('vgood', 'Very Good')]
]
layout = go.Layout(
barmode='stack',
title='Number of Passengers vs Evaluation'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='cars_donut')
# One stacked bar trace per evaluation category
data = [
    go.Bar(x=a[4].index, y=lb[col], name=name)
    for col, name in [('unacc', 'Unacceptable'), ('acc', 'Acceptable'),
                      ('good', 'Good'), ('vgood', 'Very Good')]
]
layout = go.Layout(
barmode='stack',
title='Luggage Boot vs Evaluation'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='cars_donut')
# One stacked bar trace per evaluation category
data = [
    go.Bar(x=a[5].index, y=sfty[col], name=name)
    for col, name in [('unacc', 'Unacceptable'), ('acc', 'Acceptable'),
                      ('good', 'Good'), ('vgood', 'Very Good')]
]
layout = go.Layout(
barmode='stack',
title='Safety vs Evaluation'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='cars_donut')
#We need to encode the categorical data
#We have two options: a label encoder or a one-hot encoder
#Label encoding suits ordinal features, where the categories have a natural order
#One-hot encoding suits nominal features, where no such order exists
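#A quick illustration of the two encodings on a toy column (illustrative only;
#this 'Safety' frame is made up for the demo and is not used later)
demo = pd.DataFrame({'Safety': ['low', 'med', 'high', 'med']})
#Label encoding: one column with an implied order (alphabetical here, note!)
demo['Safety_label'] = demo['Safety'].astype('category').cat.codes
#One-hot encoding: one indicator column per category, no order implied
demo_onehot = pd.get_dummies(demo[['Safety']], prefix_sep='_')
demo_onehot.head()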
#Dividing the dataframe into x features and y target variable
x = cars.iloc[:, :-1]
y = cars.iloc[:, 6]
x.columns = ['Buying', 'Maint', 'Doors','Persons','LugBoot','Safety']
y.columns=['Evaluation']
x.head()
#Using pandas dummies function to encode the data into categorical data
x = pd.get_dummies(x, prefix_sep='_', drop_first=True)
x.sample(5)
y.describe()
x=x.values
y=y.values
#And the rest of them to be categorically encoded: ['Buying', 'Maint', 'Doors', 'Persons','Safety','Evaluation']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = 0)
"""from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)"""
x_train[:5]
y_train[:5]
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix
#Using logistic regression
clf = LogisticRegression(random_state = 0)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
f1_LR=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
#Using KNN classifier
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
f1_KNN=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
#Using Linear SVC
from sklearn.svm import SVC
clf = SVC(kernel = 'linear', random_state = 0)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
f1_SVC_Linear=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
#Using rbf SVC
from sklearn.svm import SVC
clf = SVC(kernel = 'rbf', random_state = 0)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
f1_SVC_rbf=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
#Using NB classifier
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(x_train, y_train)
#GaussianNB?
y_pred = clf.predict(x_test)
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
```
Note that this is a WRONG application of the Naive Bayes classifier. The independence assumption of NB states that the features should not be correlated with each other; by creating dummy variables we produce families of mutually dependent indicator columns, which is why we get such a terrible accuracy. So, after trying out a couple more algorithms, I've done this one properly :)
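The dependence is easy to see numerically: indicator columns built from a single categorical variable always sum to one across each row, so they are perfectly linearly dependent. A small sketch (toy data, not from the cars dataset):

```python
import numpy as np

# Three indicator columns built from one 3-level categorical variable
levels = np.array([0, 1, 2, 1, 0, 2, 2, 1])
dummies = np.eye(3, dtype=int)[levels]    # shape (8, 3), one row per observation

# Every row sums to exactly 1, so the columns are linearly dependent --
# exactly the correlation the NB independence assumption forbids.
row_sums = dummies.sum(axis=1)
```

Knowing any two of the columns determines the third, so treating them as independent features misleads the classifier.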
```
#Trying decision tree classifier
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
f1_DT=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
#Trying Random forest classifier
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators = 25, criterion = 'entropy', random_state = 0)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
f1_RF=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf.score(x_train, y_train))
print("Testing Accuracy: ", clf.score(x_test, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
#Now trying the NB classifier again, this time without dummy variables
x_new = cars.iloc[:,:-1]
from sklearn.preprocessing import LabelEncoder
lae = LabelEncoder()
x_new=x_new.apply(lambda col: lae.fit_transform(col))
x_new.head()
x_new=x_new.values
#Splitting with the same test_size and random_state as before, so the rows
#line up with the existing y_train/y_test split
x_train_new, x_test_new = train_test_split(x_new, test_size = 0.25, random_state = 0)
clf_new = GaussianNB(priors=None)
clf_new.fit(x_train_new, y_train)
y_train[:10]
y_pred = clf_new.predict(x_test_new)
f1_NB=f1_score(y_test,y_pred, average='macro')
print("Training Accuracy: ",clf_new.score(x_train_new, y_train))
print("Testing Accuracy: ", clf_new.score(x_test_new, y_test))
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test,y_pred))
models=['Linear SVC', 'Kernel SVC','Logistic Regression','Decision Tree Classifier','Random Forest Classifier','Naive Bayes Classifier' ]
fig = go.Figure(data=[
go.Bar(name='f1_score', x=models, y=[f1_SVC_Linear,f1_SVC_rbf,f1_LR,f1_DT,f1_RF,f1_NB])])
fig.show()
```
## Introduction to Spark Notebooks
Let's look at how to do data discovery/sandboxing with Spark Pools.
A few pointers to get started:
* only run 1 cell at a time
* you will need to change the connection strings to the storage
* `ESC + a` to add a cell `above` the current cell
* `ESC + b` to add a cell `below` the current cell
* `Ctrl+Enter` or `Shift+Enter` to execute a cell
Navigate to `sale-small/Year=2018/Quarter=Q1/Month=1/Day=20180101/sale-small-20180101-snappy.parquet`, right click and choose **New notebook** then **Load to DataFrame**.
You can copy the code from the cells below or simply use this notebook directly, but you will have to change the connection string to point at the storage from your other notebook.
```
%%pyspark
df = spark.read.load('abfss://wwi-02@asadatalakedavew891.dfs.core.windows.net/sale-small/Year=2019/Quarter=Q4/Month=12/Day=20191201/sale-small-20191201-snappy.parquet', format='parquet')
display(df.limit(10))
```
Notice when executing a Spark cell for the first time it takes a few minutes to spin up the cluster and get it ready.
I believe the default is to spin down each cluster after 15 mins of inactivity.
```
df.printSchema()
```
This is a .ipynb PYTHON notebook, but we can write SQL too, using a `magic`
```
df.registerTempTable("df")
%%sql
select * from df
```
If you actually type the above commands you should see autocompletion.
Note that it does take some time to do even the simplest things in Spark. It has to build the DAG, spawn executors, etc. It's a BIG DATA tool, not a SMALL data tool. Likewise, as mentioned above, it is meant for "batch" processing; for "interactive" querying (what I call sandboxing) there may be faster tools.
```
%%pyspark
df = spark.read.load('abfss://wwi-02@asadatalakedavew891.dfs.core.windows.net/sale-small/Year=2018/Quarter=Q4/*/*/*', format='parquet')
df.limit(10)
# often a simple PRINT, like above, doesn't work great.
# the trick in Spark, when that happens is to just rerun the command
# wrapped in a display. That will usually fix it.
display(df.limit(10))
# now let's do some aggregations on our df.
# Let's look at sum/avg profit by TransactionDate
# we need a few "imports" to get things right
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql.functions import *
profitByDate = (df.groupBy("TransactionDate")
.agg(
round(sum("ProfitAmount"),2).alias("(sum)Profit"),
round(avg("ProfitAmount"),2).alias("(avg)Profit")
).orderBy("TransactionDate")
)
profitByDate.show(100)
# again, try the display trick, and then note that I can chart it too
# in this case, you need to remove the show method
# often Spark isn't intuitive for new users.
display(profitByDate)
```
<a href="https://colab.research.google.com/github/yohanesnuwara/reservoir-geomechanics/blob/master/homework%208/homework8_resgeomech_finally.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Homework 8. Identifying Critically Stressed Fractures**
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
!git clone https://github.com/yohanesnuwara/reservoir-geomechanics
```
Access dip data from Barnett fracture data in Homework 5
```
fracture = pd.read_csv('/content/reservoir-geomechanics/homework 5/Barnett_fractures.csv')
frac_depth = fracture.depth
frac_dip = fracture.dip
# visualize the dip data
plt.figure(figsize=(4,10))
plt.plot(frac_dip, frac_depth, '.', color='blue')
plt.title("Barnett Dip Log", pad=20, size=15)
plt.xlabel("Dip (degrees)"); plt.ylabel("Depth (ft)")
plt.grid(True)
plt.gca().invert_yaxis()
fracture.head(10)
```
## Information
At depth 5725 ft
The maximum horizontal stress ($SH$) is assumed to equal the lower bound of $SH$ from Homework 7. Because the minimum horizontal stress ($Sh$) measured from the LOT is 3721.25 psi and the $SH$ lower bound is also 3721.25 psi, we have $Sh = SH < Sv$, i.e. a **radial extension** regime.
* Sv gradient from HW 1
* Pore pressure gradient from HW 7
* Friction coefficient from HW 7
* Shmin gradient from HW 7
* SHmax gradient from HW 7 (as lower bound of SHmax)
```
depth = 5725 # ft
# from homework 7
Sh = 3721.25 # psi
SH = Sh
Sv = 6374.8
Pp = 2748
mu = 0.75
# gradient of stresses
Sh_grad = Sh / depth
SH_grad = Sh_grad
Sv_grad = Sv / depth
Pp_grad = Pp / depth
# visualize the stress data
plt.figure(figsize=(4,10))
p1 = plt.plot((frac_depth * Sh_grad), frac_depth, '.', color='blue')
p2 = plt.plot((frac_depth * SH_grad), frac_depth, '.', color='red')
p3 = plt.plot((frac_depth * Sv_grad), frac_depth, '.', color='black')
p4 = plt.plot((frac_depth * Pp_grad), frac_depth, '.', color='green')
plt.title("Barnett Stress/Pressure Profile", pad=20, size=15)
plt.legend((p1[0], p2[0], p3[0], p4[0]), (['Shmin', 'SHmax', 'Sv', 'Pore pressure']))
plt.xlabel("Stress/Pressure (psi)"); plt.ylabel("Depth (ft)")
plt.grid(True)
plt.gca().invert_yaxis()
```
## Question 1a. From the first homework assignment, what is the calculated gradient of the overburden stress of the site at 5725 feet depth in psi/ft?
```
print('Gradient of overburden stress:', Sv_grad, 'psi/ft')
```
## Question 1b. From the seventh homework assignment, what is the given gradient of the minimum horizontal stress of the site at 5725 feet depth in psi/ft?
```
print('Gradient of minimum horizontal stress:', Sh_grad, 'psi/ft')
```
## Question 1c. From the seventh homework assignment, what is the estimated lower bound of the gradient of the maximum horizontal stress of the site at 5725 feet depth in psi/ft?
```
print('Gradient of maximum horizontal stress:', SH_grad, 'psi/ft')
```
## Question 1d. From the seventh homework assignment, what is the given gradient of the pore pressure of the site at 5725 feet depth in psi/ft?
```
print('Gradient of pore pressure:', Pp_grad, 'psi/ft')
```
## Question 1e. From the seventh homework assignment, what is the given coefficient of sliding friction of the site at 5725 feet depth?
```
print('Coefficient of sliding friction:', mu)
```
## Question 2a. How many fractures are critically stressed with the condition given above?
* $\mu=0.75$
* $\Delta Pp=0.48 psi/ft$
Mohr-Coulomb Diagram
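The effective normal and shear stresses plotted below follow the standard Mohr relations, with the fracture dip playing the role of the angle $\theta$ between the plane and the principal stresses (matching the code in the next cell):

\begin{align}
\sigma_n &= \frac{S_1 + S_3}{2} + \frac{S_1 - S_3}{2}\cos 2\theta, \qquad \sigma_n^{eff} = \sigma_n - P_p \\
\tau &= \frac{S_1 - S_3}{2}\sin 2\theta
\end{align}

A fracture is counted as critically stressed when $\tau > \mu \, \sigma_n^{eff}$, i.e. when it plots above the frictional failure envelope.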
```
# principal stresses
S1 = frac_depth * Sv_grad
S3 = frac_depth * Sh_grad
Pp = frac_depth * Pp_grad
# effective normal stress and shear stress
normal = (0.5 * (S1 + S3)) + (0.5 * (S1 - S3) * np.cos(np.deg2rad(2 * frac_dip)))
normal_eff = normal - Pp
shear = 0.5 * (S1 - S3) * np.sin(np.deg2rad(2 * frac_dip))
# failure envelope
normal_env = np.linspace(0, max(normal_eff), 10)
mu = 0.75
shear_env = mu * normal_env
plt.figure(figsize=(20,10))
plt.plot(normal_env, shear_env)
plt.scatter(normal_eff, shear, c=frac_dip, s=20, cmap='gist_rainbow')
plt.xlim(xmin=0)
plt.ylim(ymin=0)
plt.colorbar()
plt.gca().set_aspect('equal')
# counting critically stressed fractures
envelope = mu * normal_eff
difference = envelope - shear
critical = [i for i in difference if i < 0]
critical_count_1 = len(critical)
print("Number of critical fractures: ", critical_count_1, "\n")
```
## Question 2b. How many fractures are critically stressed if the pore pressure gradient is 0.52 psi/ft (Montgomery et al., 2005)?
* $\mu=0.75$
* $\Delta Pp=0.52 psi/ft$
```
# principal stresses
S1 = frac_depth * Sv_grad
S3 = frac_depth * Sh_grad
Pp = frac_depth * 0.52
# effective normal stress and shear stress
normal = (0.5 * (S1 + S3)) + (0.5 * (S1 - S3) * np.cos(np.deg2rad(2 * frac_dip)))
normal_eff = normal - Pp
shear = 0.5 * (S1 - S3) * np.sin(np.deg2rad(2 * frac_dip))
# failure envelope
normal_env = np.linspace(0, max(normal_eff), 10)
mu = 0.75
shear_env = mu * normal_env
plt.figure(figsize=(20,10))
plt.plot(normal_env, shear_env)
plt.scatter(normal_eff, shear, c=frac_dip, s=20, cmap='gist_rainbow')
plt.xlim(left=0)
plt.ylim(bottom=0)
plt.colorbar()
plt.gca().set_aspect('equal')
# counting critically stressed fractures
envelope = mu * normal_eff
difference = envelope - shear
critical = [i for i in difference if i < 0]
critical_count_2 = len(critical)
print("Number of critical fractures: ", critical_count_2, "\n")
```
## Question 2c. How many fractures are critically stressed if the frictional sliding coefficient is 0.45 (Kohli and Zoback, 2013)?
* $\mu=0.45$
* $\Delta P_p = 0.48\ \mathrm{psi/ft}$
```
# principal stresses
S1 = frac_depth * Sv_grad
S3 = frac_depth * Sh_grad
Pp = frac_depth * Pp_grad
# effective normal stress and shear stress
normal = (0.5 * (S1 + S3)) + (0.5 * (S1 - S3) * np.cos(np.deg2rad(2 * frac_dip)))
normal_eff = normal - Pp
shear = 0.5 * (S1 - S3) * np.sin(np.deg2rad(2 * frac_dip))
# failure envelope
normal_env = np.linspace(0, max(normal_eff), 10)
mu = 0.45
shear_env = mu * normal_env
plt.figure(figsize=(20,10))
plt.plot(normal_env, shear_env)
plt.scatter(normal_eff, shear, c=frac_dip, s=20, cmap='gist_rainbow')
plt.xlim(left=0)
plt.ylim(bottom=0)
plt.colorbar()
plt.gca().set_aspect('equal')
# counting critically stressed fractures
envelope = mu * normal_eff
difference = envelope - shear
critical = [i for i in difference if i < 0]
critical_count_3 = len(critical)
print("Number of critical fractures: ", critical_count_3, "\n")
```
```
# To enable plotting graphs in Jupyter notebook
%matplotlib inline
import pandas as pd
from sklearn.linear_model import LogisticRegression
# importing plotting libraries
import matplotlib.pyplot as plt
#importing seaborn for statistical plots
import seaborn as sns
# Let us split the X and y dataframes into a training set and a test set. For this we will use
# sklearn's train_test_split function, which splits the data randomly
from sklearn.model_selection import train_test_split
import numpy as np
# calculate accuracy measures and confusion matrix
from sklearn import metrics
# The data lies in the following URL.
#url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
# Since it is a data file with no header, we will supply the column names which have been obtained from the above URL
# Create a python list of column names called "names"
#colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
#Load the file from local directory using pd.read_csv which is a special form of read_table
#while reading the data, supply the "colnames" list
#pima_df = pd.read_csv("pima-indians-diabetes-2 (1).csv", names= colnames)
pima_df = pd.read_csv("pima-indians-diabetes-2 (1).csv")
pima_df.head(50)
# Let us check whether any of the columns has any value other than numeric i.e. data is not corrupted such as a "?" instead of
# a number.
# we use np.isreal a numpy function which checks each column for each row and returns a bool array,
# where True if input element is real.
# applymap is pandas dataframe function that applies the np.isreal function columnwise
# Following line selects those rows which have some non-numeric value in any of the columns hence the ~ symbol
pima_df[~pima_df.applymap(np.isreal).all(1)]
# replace the missing values in pima_df with median value :Note, we do not need to specify the column names
# every column's missing value is replaced with that column's median respectively
#pima_df = pima_df.fillna(pima_df.median())
#pima_df
# Let's analyze the distribution of the various attributes
pima_df.describe().transpose()
# Let us look at the target column which is 'class' to understand how the data is distributed amongst the various values
pima_df.groupby(["class"]).count()
# Most are not diabetic. The ratio is almost 1:2 in favor of class 0. The model's ability to predict class 0 will
# be better than predicting class 1.
# Let us do a correlation analysis among the different dimensions and also each dimension with the dependent dimension
# This is done using scatter matrix function which creates a dashboard reflecting useful information about the dimensions
# The result can be stored as a .png file and opened in say, paint to get a larger view
#pima_df_attr = pima_df.iloc[:,0:9]
#axes = pd.plotting.scatter_matrix(pima_df_attr)
#plt.tight_layout()
#plt.savefig('d:\greatlakes\pima_pairpanel.png')
# Pairplot using sns
sns.pairplot(pima_df)
#data for all the attributes are skewed, especially for the variable "test"
#The mean for test is 80(rounded) while the median is 30.5 which clearly indicates an extreme long tail on the right
# Attributes which look normally distributed (plas, pres, skin, and mass).
# Some of the attributes look like they may have an exponential distribution (preg, test, pedi, age).
# Age should probably have a normal distribution, the constraints on the data collection may have skewed the distribution.
# There is no obvious relationship between age and onset of diabetes.
# There is no obvious relationship between pedi function and onset of diabetes.
array = pima_df.values
X = pima_df.iloc[:,0:8]
y = pima_df.iloc[:,8]
#X = array[:,0:8] # select all rows and first 8 columns which are the attributes
#Y = array[:,8] # select all rows and the 8th column which is the classification "Yes", "No" for diabeties
test_size = 0.30 # taking 70:30 training and test set
seed = 1 # random number seed for repeatability of the code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
# Fit the model on 30%
model = LogisticRegression()
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
coef_df = pd.DataFrame(model.coef_)
coef_df['intercept'] = model.intercept_
print(coef_df)
model_score = model.score(X_test, y_test)
print(model_score)
print(metrics.confusion_matrix(y_test, y_predict))
# Improve the model -----------------------------Iteration 2 -----------------------------------------------
# To scale the dimensions we need the scale function, which is part of scikit-learn's preprocessing library
from sklearn import preprocessing
# scale all the columns of the mpg_df. This will produce a numpy array
#pima_df_scaled = preprocessing.scale(pima_df[0:7])
X_train_scaled = preprocessing.scale(X_train)
X_test_scaled = preprocessing.scale(X_test)
# Fit the model on 30%
model = LogisticRegression()
model.fit(X_train_scaled, y_train)
y_predict = model.predict(X_test_scaled)
model_score = model.score(X_test_scaled, y_test)
print(model_score)
# IMPORTANT: first argument is true values, second argument is predicted values
# this produces a 2x2 numpy array (matrix)
print(metrics.confusion_matrix(y_test, y_predict))
```
Analyzing the confusion matrix:
- True Positives (TP): 46 cases where we correctly predicted diabetes
- True Negatives (TN): 134 cases where we correctly predicted no diabetes
- False Positives (FP): 13 cases where we incorrectly predicted diabetes (a Type I error: falsely predicting positive)
- False Negatives (FN): 38 cases where we incorrectly predicted no diabetes (a Type II error: falsely predicting negative)
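These four counts determine all of the usual classification metrics; a quick sketch in plain Python, using the numbers from the matrix above:

```python
TP, TN, FP, FN = 46, 134, 13, 38

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)   # of predicted positives, how many are right
recall    = TP / (TP + FN)   # of actual positives, how many we catch (sensitivity)
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy  = {accuracy:.3f}")
print(f"precision = {precision:.3f}")
print(f"recall    = {recall:.3f}")
print(f"f1        = {f1:.3f}")
```

The low recall reflects the class-0-heavy data noted earlier: the model misses many of the actual diabetics.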
```
from glob import glob
import datetime
import numpy as np
from astropy.table import Table
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.stats import spearmanr
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
from matplotlib.ticker import MultipleLocator
```
## Gather the data
### HR
+ redshift cut
+ HR cut
```
HR = pd.read_csv('../data/campbell_local.tsv', sep='\t', usecols=['SNID', 'redshift', 'hr', 'err_mu'], index_col='SNID')
HR.rename(columns={'err_mu': 'hr uncert'}, inplace=True)
HR = HR[HR['redshift']<0.2]
HR = HR[HR['hr']<0.7]
HR.describe()
```
### SALT2 parameters (x_1 & c)
```
t = Table.read('../data/SDSS_Photometric_SNe_Ia.fits')
salt = t['CID','Z','X1','X1_ERR','COLOR','COLOR_ERR'].to_pandas()
salt.columns = salt.columns.str.lower()
salt.rename(columns={'cid': 'SNID', 'z': 'redshift'}, inplace=True)
salt.set_index('SNID', inplace=True)
salt.describe()
```
### Stellar Mass
```
galaxy = pd.read_csv('../resources/kcorrect_stellarmass.csv', usecols=['GAL', 'redshift', 'stellarmass'], index_col='GAL')
galaxy.rename(columns={'redshift': 'gal redshift', 'stellarmass': 'stellar mass'}, inplace=True)
galaxy.describe()
```
### Age
```
age = pd.read_csv('../resources/ages_campbell.tsv', sep='\t', skiprows=[1],
                  usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': int})
age.rename(columns={'# sn id': 'SNID'}, inplace=True)
age.set_index('SNID', inplace=True)
age.describe()
age_global = pd.read_csv('../resources/ages_campbellG.tsv', sep='\t', skiprows=[1],
                         usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': int})
age_global.rename(columns={'# sn id': 'SNID'}, inplace=True)
age_global.set_index('SNID', inplace=True)
age_global.describe()
```
### Combine
```
data = pd.concat([HR, salt, galaxy, age], axis=1)
data.dropna(inplace=True)
data.describe()
```
Convert stellar mass to be log(stellar mass)
```
data['stellar mass'] = np.log10(data['stellar mass'])
data.describe()
data.head()
# now with global
data_global = pd.concat([HR, salt, galaxy, age_global], axis=1)
data_global.dropna(inplace=True)
data_global.describe()
data_global['stellar mass'] = np.log10(data_global['stellar mass'])
data_global.describe()
```
## PCA
Standardize the variables. We will do everything in their "linear" space (looking at you, distance modulus/HR) except for stellar mass.
```
# Lets remove uncertainties & redshift?
features = ['x1', 'color', 'stellar mass', 'age']
y = data.loc[:, features].values
scaler = StandardScaler()
scaler.fit(y) # get the needed transformation off of y
y = scaler.transform(y) # transform y
y.shape
print(y.mean(axis=0))
print(y.std(axis=0))
```
StandardScaler saves the std as `self.scale_` (why, I don't know). `self.var_` is just the square of the std and is not used anywhere else.
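A quick check, on a made-up array, that `scale_` is just the per-column population standard deviation (ddof=0) and `var_` its square:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [3.0, 20.0], [5.0, 60.0]])  # toy data
scaler = StandardScaler().fit(X)

# scale_ matches np.std with ddof=0 (population std); var_ is its square
assert np.allclose(scaler.scale_, X.std(axis=0))
assert np.allclose(scaler.var_, scaler.scale_ ** 2)
print(scaler.mean_, scaler.scale_)
```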
```
scaler.mean_, scaler.scale_ #how is it scaled
y[:5] # post scaled values
data.loc[:, features].values[:5] #get the prescaled values
# Perform PCA
pca2 = PCA(n_components=4)
principalComponents2 = pca2.fit_transform(y)
principalComponents2.shape
principalComponents2[:5]
pca2.components_
pca2.singular_values_
## this is not what I wanted I guess. Use `explained_variance_ratio_`
pca2.explained_variance_ratio_
# need for data table in paper
principalComponents2[:,0]
```
### PCA without HR -- Global
```
# Lets remove uncertainties & redshift?
features = ['x1', 'color', 'stellar mass', 'age']
y = data_global.loc[:, features].values
scaler = StandardScaler()
scaler.fit(y) # get the needed transformation off of y
y = scaler.transform(y) # transform y
y.shape
print(y.mean(axis=0))
print(y.std(axis=0))
scaler.mean_, scaler.scale_
pca_global = PCA(n_components=4)
principalComponents_global = pca_global.fit_transform(y)
principalComponents_global.shape
print(pca_global.components_)
print(pca_global.singular_values_) # not what I want
print(pca_global.singular_values_/pca_global.singular_values_.sum()) #not what I wanted.
print(pca_global.explained_variance_ratio_)
```
### Plots
```
spearmanr(principalComponents2[:,0], data['hr'])
```
- 5.7x10^-7 is a 5-sigma significance
- 8.03x10^-11 is a 6.5-sigma significance
- 2.55x10^-12 is a 7-sigma significance
- 1.52x10^-23 is a 10-sigma significance
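These conversions are the two-sided Gaussian equivalents of the p-values; a quick sketch using scipy (already a dependency here via `spearmanr`):

```python
from scipy.stats import norm

def p_to_sigma(p):
    """Two-sided Gaussian significance (in sigma) for a given p-value."""
    return norm.isf(p / 2)

for p in [5.7e-7, 8.03e-11, 2.55e-12, 1.52e-23]:
    print(f"p = {p:.3g}  ->  {p_to_sigma(p):.2f} sigma")
```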
```
(m, b), cov = np.polyfit(principalComponents2[:,0], data['hr'], 1, full=False, cov=True)
print(m, b)
print(cov)
print(np.sqrt(cov[0,0]), np.sqrt(cov[1,1]))
(m, b), cov = np.polyfit(principalComponents_global[:,0], data['hr'], 1, full=False, cov=True)
print(m, b)
print(cov)
print(np.sqrt(cov[0,0]), np.sqrt(cov[1,1]))
sns.set(context='talk', style='ticks', font='serif', color_codes=True)
for i in [0,1,2,3]:
fig = plt.figure()
#fix axes major spacing & size
ax = plt.gca()
ax.get_yaxis().set_major_locator(MultipleLocator(0.2))
ax.set_ylim(-0.67, 0.67)
ax.get_xaxis().set_major_locator(MultipleLocator(1))
ax.set_xlim(-3.5, 3.5)
#set axes ticks and gridlines
ax.tick_params(axis='both', top='on', right='on', direction='in')
ax.grid(which='major', axis='both', color='0.90', linestyle='-')
ax.set_axisbelow(True)
#show origin
# ax.axhline(y=0, color='0.8', linewidth=2)
# ax.axvline(x=0, color='0.8', linewidth=1)
## add best fit on PC_1 -- under data points
if i==0:
# x = np.linspace(min(principalComponents2[:,i]), max(principalComponents2[:,i]), 100)
# print(m*x+b)
# plt.plot(x, m*x+b)
sns.regplot(principalComponents2[:,i], data['hr'], marker='', color='grey', ax=ax)
#plot data on top -- not needed down here if we don't try to show the origin
plt.scatter(principalComponents2[:,i], data['hr'], marker='.', c=data['x1'],
cmap="RdBu", vmin=-3.0, vmax=3.0, edgecolor='k', zorder=10)
# add axes labels, after sns.regplot
plt.xlabel(f'principal component {i+1}', fontsize=17)
plt.ylabel('Hubble residual [mag]', fontsize=17)
#Add colorbar
##["{:>4.1f}".format(y) for y in yticks] as possible color bar formating.
cax = fig.add_axes([0.95, 0.237, 0.02, 0.649]) # fig.set_tight_layout({'pad': 1.5}), 0.95, 0.217, 0.02, 0.691
# cax = fig.add_axes([0.965, 0.2, 0.02, 0.691]) # plt.tight_layout()
cax.tick_params(axis='y', direction='in')
cax.set_axisbelow(False) # bring tick marks above coloring
plt.colorbar(label=r"$x_1$", cax=cax)
#add Spearman's correlation
##add a back color so the grid lines do not get in the way?
sp_r, sp_p = spearmanr(principalComponents2[:,i], data['hr'])
if i==0:
ax.text(-2.9, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2e}",
{'fontsize':12})
elif i==1:
ax.text(-2.9, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2f}",
{'fontsize':12})
else:
# ax.text(-3, 0.48, f"Spearman's correlation: {sp_r:.2f}\np-value: {sp_p:.2f}",
# {'fontsize':12})
ax.text(-2.9, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2f}",
{'fontsize':12})
fig.set_tight_layout({'pad': 1.5})
plt.savefig(f'HRvPC{i+1}.pdf', bbox_inches='tight') # bbox to make space for the colorbar
plt.show()
```
### Plots - Global
```
(m_global, b_global), cov_global = np.polyfit(principalComponents_global[:,0], data['hr'], 1, full=False, cov=True)
print(m_global, b_global)
print(np.sqrt(cov_global[0,0]), np.sqrt(cov_global[1,1]))
sns.set(context='talk', style='ticks', font='serif', color_codes=True)
for i in [0,1,2,3]:
fig = plt.figure()
#fix axes major spacing & size
ax = plt.gca()
ax.get_yaxis().set_major_locator(MultipleLocator(0.2))
ax.set_ylim(-0.67, 0.67)
ax.get_xaxis().set_major_locator(MultipleLocator(1))
ax.set_xlim(-4, 4)
#set axes ticks and gridlines
ax.tick_params(axis='both', top='on', right='on', direction='in')
ax.grid(which='major', axis='both', color='0.90', linestyle='-')
ax.set_axisbelow(True)
#show origin
# ax.axhline(y=0, color='0.8', linewidth=2)
# ax.axvline(x=0, color='0.8', linewidth=1)
## add best fit on PC_1 -- under data points
if i==0:
# x = np.linspace(min(principalComponents_global[:,i]), max(principalComponents_global[:,i]), 100)
# print(m*x+b)
# plt.plot(x, m*x+b)
sns.regplot(principalComponents_global[:,i], data['hr'], marker='', color='grey', ax=ax)
#plot data on top -- not needed down here if we don't try to show the origin
plt.scatter(principalComponents_global[:,i], data['hr'], marker='.', c=data['x1'],
cmap="RdBu", vmin=-3.0, vmax=3.0, edgecolor='k', zorder=10)
# add axes labels, after sns.regplot
plt.xlabel(f'principal component {i+1}', fontsize=17)
plt.ylabel('Hubble residual [mag]', fontsize=17)
#Add colorbar
##["{:>4.1f}".format(y) for y in yticks] as possible color bar formating.
cax = fig.add_axes([0.95, 0.237, 0.02, 0.649]) # fig.set_tight_layout({'pad': 1.5})
# cax = fig.add_axes([0.965, 0.2, 0.02, 0.691]) # plt.tight_layout()
cax.tick_params(axis='y', direction='in')
cax.set_axisbelow(False) # bring tick marks above coloring
plt.colorbar(label=r"$x_1$", cax=cax)
#add Spearman's correlation
##add a back color so the grid lines do not get in the way?
sp_r, sp_p = spearmanr(principalComponents_global[:,i], data['hr'])
if i==0:
ax.text(-3.5, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2e}",
{'fontsize':12})
elif i==1:
ax.text(-3, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2f}",
{'fontsize':12})
else:
ax.text(-3, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2f}",
{'fontsize':12})
fig.set_tight_layout({'pad': 1.5})
plt.savefig(f'HRvPC{i+1}_global.pdf', bbox_inches='tight') # bbox to make space for the colorbar
plt.show()
```
## Reduction of scatter
```
rms = lambda x: np.sqrt(x.dot(x)/x.size)
# RMS around HR = 0
print(rms(data['hr']))
print(data['hr'].std())
# RMS around trendline
print("local RMS: ", rms(data['hr'] - (m*principalComponents2[:,0] + b)))
print("global RMS: ", rms(data['hr'] - (m_global*principalComponents_global[:,0] + b_global)))
print("local STD: ", (data['hr'] - (m*principalComponents2[:,0] + b)).std())
print("global STD: ", (data['hr'] - (m_global*principalComponents_global[:,0] + b_global)).std())
```
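The RMS around zero and the std differ because RMS is taken about zero while std is taken about the mean (and pandas' `.std()` additionally uses ddof=1): for a numpy array, rms² = std² + mean² with the population std. A quick check on made-up residuals:

```python
import numpy as np

rms = lambda x: np.sqrt(x.dot(x) / x.size)

x = np.array([0.1, -0.2, 0.3, 0.05, -0.15])  # toy residuals
# rms(x)^2 = mean(x^2) = var(x) + mean(x)^2  (population variance, ddof=0)
assert np.isclose(rms(x) ** 2, x.std() ** 2 + x.mean() ** 2)
print(rms(x), x.std(), x.mean())
```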
# Correlation between $x_1$, Mass, and Age
```
import corner
import matplotlib
sns.set(context='talk', style='ticks', font='serif', color_codes=True)
# features = ['x1', 'color', 'stellar mass', 'age']
features = ['x1', 'stellar mass', 'age']
data_compare = data.loc[:, features].values
plt.figure()
fig = corner.corner(data_compare, show_titles=True, use_math_text=True,
# quantiles=[0.16, .50, 0.84],
smooth=1, bins=10,
plot_datapoints=False,
# labels=[r'$x_1$', r'log(M/M$_{\odot}$)', 'age [Gyr]']#, range=[0.99]*8
labels=[r'$x_1$', r'mass', 'age'],
hist_kwargs={'lw': '2'},
contour_kwargs={'levels': np.logspace(-0.5,1,6),
'norm': matplotlib.colors.LogNorm(), # Scale the colors to be on a log scale
'colors': sns.color_palette("Blues_d")
# 'colors': sns.dark_palette("Blue")
},
color=sns.color_palette("Blues_d")[0] # Try to get this to pcolormesh, and one per dataset
)
#fix axes
ax_list = fig.axes
for i, ax in enumerate(ax_list):
if i in [0, 4, 8]:
# fix 1D-histogram plots
ax.tick_params(axis='x', direction='in') # set bottom ticks in
ax.get_yaxis().set_ticks([]) # turn off top ticks
sns.despine(left=True, ax=ax) # despine
else:
# fix 2D-histogram plots (and blacks)
ax.tick_params(axis='both', top='on', right='on', direction='in')
# plt.savefig('x1-mass-Lage-compare2.pdf')
plt.show()
# Global
#features = ['x1', 'color', 'stellar mass', 'age']
features = ['x1', 'stellar mass', 'age']
data_compare = data_global.loc[:, features].values
plt.figure()
fig = corner.corner(data_compare, show_titles=True, use_math_text=True,
quantiles=[0.16, 0.84], smooth=1, bins=10,
plot_datapoints=False,
# labels=[r'$x_1$', r'log(M/M$_{\odot}$)', 'age [Gyr]']#, range=[0.99]*8
labels=[r'$x_1$', r'mass', 'age'],
hist_kwargs={'lw': '2'}
)
# plt.savefig('x1-mass-Gage-compare.pdf')
plt.show()
```
# Spam Text Classification
In the second week of the inzva Applied AI program, we are going to create a spam text classifier using RNNs. Our data has two columns: the first is the label and the second is the text message itself. We are going to create our models using the following techniques
- Embeddings
- SimpleRNN
- GRU
- LSTM
- Ensemble Model
### SimpleRNN
A simple RNN layer, nothing special. It is called 'Simple' because it is neither a GRU nor an LSTM layer. You can read the documentation at https://keras.io/api/layers/recurrent_layers/simple_rnn/
### LSTM
https://keras.io/api/layers/recurrent_layers/lstm/
We will use tokenization and padding to preprocess our data. We are going to create 3 different models and compare them.
## Libraries
```
from keras.layers import SimpleRNN, Embedding, Dense, LSTM
from keras.models import Sequential
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns; sns.set()
```
## Dataset
```
data = pd.read_csv(".../datasets_2050_3494_SPAM text message 20170820 - Data.csv")
```
Let's look at the first 20 rows of our data and read the messages. What do you think, do they really look like spam messages?
```
data.iloc[0:20,:]
```
Let's calculate spam and non-spam message counts.
```
texts = []
labels = []
for i, label in enumerate(data['Category']):
texts.append(data['Message'][i])
if label == 'ham':
labels.append(0)
else:
labels.append(1)
texts = np.asarray(texts)
labels = np.asarray(labels)
print("number of texts :" , len(texts))
print("number of labels: ", len(labels))
labels
sum(labels==0)
sum(labels==1)
```
### The data is imbalanced. Making it even more imbalanced by removing some of the spam messages and observing the model performance would be a good exercise for exploring the imbalanced-dataset problem in a sequential-model context.
```
texts
```
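If you try that exercise, one standard mitigation is to weight the loss by inverse class frequency; a sketch using sklearn's helper on toy labels (the labels here are hypothetical, and the resulting dict can be passed to keras via `model.fit(..., class_weight=class_weight)`):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 10 + [1] * 2)  # toy imbalanced labels
classes = np.unique(y)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y)
class_weight = dict(zip(classes, weights))  # {0: 0.6, 1: 3.0} for this toy case
print(class_weight)
```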
## Data Preprocessing
Each sentence has different lengths. We need to have sentences of the same length. Besides, we need to represent them as integers.
As a concrete example, we have the following sentences
- 'Go until jurong point crazy'
- 'any other suggestions'
First we will convert them to integers; this operation is known as Tokenization.
- [5, 10, 26, 67, 98]
- [7, 74, 107]
Now we have two integer vectors with different lengths. We need to make them the same length.
### Post Padding
- [5, 10, 26, 67, 98]
- [7, 74, 107, 0, 0]
### Pre Padding
- [5, 10, 26, 67, 98]
- [0, 0, 7, 74, 107]
But you don't have to use padding in every task. For details please refer to this link: https://github.com/keras-team/keras/issues/2375
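Keras's `pad_sequences` (imported below) does exactly this; the behavior can be sketched in pure Python:

```python
def pad(sequences, padding='pre', value=0):
    """Pad variable-length integer lists to the length of the longest one."""
    maxlen = max(len(s) for s in sequences)
    padded = []
    for s in sequences:
        pad_part = [value] * (maxlen - len(s))
        padded.append(s + pad_part if padding == 'post' else pad_part + s)
    return padded

seqs = [[5, 10, 26, 67, 98], [7, 74, 107]]
print(pad(seqs, padding='post'))  # [[5, 10, 26, 67, 98], [7, 74, 107, 0, 0]]
print(pad(seqs, padding='pre'))   # [[5, 10, 26, 67, 98], [0, 0, 7, 74, 107]]
```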
```
from keras.layers import SimpleRNN, Embedding, Dense, LSTM
from keras.models import Sequential
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
# number of words in our vocabulary
max_features = 10000
# how many words from each document (max)?
maxlen = 500
```
## Train - Test Split
We will take a simple approach and create only train and test sets. Of course, having train, validation, and test sets is the best practice.
```
training_samples = int(len(labels)*0.8)
training_samples
validation_samples = len(labels) - training_samples
assert len(labels) == (training_samples + validation_samples), "Not equal!"
print("The number of training {0}, validation {1} ".format(training_samples, validation_samples))
```
## Tokenization
```
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print("Found {0} unique words: ".format(len(word_index)))
#data = pad_sequences(sequences, maxlen=maxlen, padding='post')
data = pad_sequences(sequences, maxlen=maxlen)
print(data.shape)
data
np.random.seed(42)
# shuffle data
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
texts_train = data[:training_samples]
y_train = labels[:training_samples]
texts_test = data[training_samples:]
y_test = labels[training_samples:]
```
## Model Creation
We will create three different models and compare their performance. One model will use a SimpleRNN layer, another a GRU layer, and the last an LSTM layer. The architecture of each model is the same. We could create deeper models, but we already get good results.
```
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
metrics=['acc'])
history_rnn = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
acc = history_rnn.history['acc']
val_acc = history_rnn.history['val_acc']
loss = history_rnn.history['loss']
val_loss = history_rnn.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, '-', color='orange', label='training acc')
plt.plot(epochs, val_acc, '-', color='blue', label='validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.plot(epochs, loss, '-', color='orange', label='training loss')
plt.plot(epochs, val_loss, '-', color='blue', label='validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_rnn = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
sum(y_test==1)
```
## GRU
```
from keras.layers import GRU
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(GRU(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
metrics=['acc'])
history_rnn = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_gru = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
```
## LSTM
```
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history_lstm = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
acc = history_lstm.history['acc']
val_acc = history_lstm.history['val_acc']
loss = history_lstm.history['loss']
val_loss = history_lstm.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, '-', color='orange', label='training acc')
plt.plot(epochs, val_acc, '-', color='blue', label='validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.plot(epochs, loss, '-', color='orange', label='training loss')
plt.plot(epochs, val_loss, '-', color='blue', label='validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_lstm = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
```
## Ensemble Model
```
ensemble_proba = 0.25 * proba_rnn + 0.35 * proba_gru + 0.4 * proba_lstm
ensemble_proba[:5]
ensemble_class = np.array([1 if i >= 0.3 else 0 for i in ensemble_proba])
print(confusion_matrix(ensemble_class, y_test))
```
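The weighted soft vote above can be reproduced on toy probabilities. The arrays here are made up; the weights and the 0.3 threshold match the cell above:

```python
import numpy as np

# Hypothetical per-message spam probabilities from three models
p_rnn  = np.array([0.90, 0.10, 0.40])
p_gru  = np.array([0.80, 0.20, 0.35])
p_lstm = np.array([0.70, 0.30, 0.20])

ensemble_proba = 0.25 * p_rnn + 0.35 * p_gru + 0.40 * p_lstm
ensemble_class = (ensemble_proba >= 0.3).astype(int)
print(ensemble_proba)
print(ensemble_class)
```

Lowering the threshold below 0.5 trades false negatives for false positives, which matters here because spam (class 1) is the minority class.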
# *Insight*-HXMT Phase-Resolved Spectroscopy Walkthrough
## [Overview](#overview) · [Data Preprocessing](#data-preprocessing) · [Timing Analysis](#timing-analysis) · [Spectral Analysis](#spectral-analysis)
Tuo Youli (tuoyl@ihep.ac.cn)
##### Final result: using one Insight-HXMT observation of the Crab, produce phase-resolved spectra of the Crab pulsar
## Overview
### Prerequisites
This Jupyter notebook uses a Python 3 environment. To run all of the commands below you need to:
* Install and initialize the HXMTDAS environment (e.g., be able to run the ```hepical``` command in a terminal)
* Use Python 3.* with the astropy, numpy, and matplotlib modules installed
(If you use conda, run ``` conda env create -f environment.yml ``` in the directory containing environment.yml to create an environment named hxmt_analysis, then run ```conda activate hxmt_analysis```)
<div class="alert alert-block alert-info">
<b>NOTES:</b> You can execute the commands in this walkthrough one by one in Jupyter with Shift+Enter. You can equally well run all of them in a terminal.
</div>
This notebook is hosted and updated at https://github.com/tuoyl/hxmt_analysis_demo, where you can download the Python scripts used below. You can also install a module named hxmt-analysis-demo with pip: ```python -m pip install --index-url https://test.pypi.org/simple/ --no-deps hxmt-analysis-demo```; it contains the same scripts.
* Prepare the data.
For how to download and use the data, see the Insight-HXMT website hxmt.org. The example data used here can be downloaded with wget:
```
!wget ftp://202.122.39.120/Release/1L/P010129900101-20170827-01-01.tar.gz -O data.tar.gz
```
We name the downloaded file data.tar.gz; the total size is 1.9 GB, so please be patient while it downloads. Once the download completes, extract the data:
```
!gunzip data.tar.gz
!tar xvf data.tar
!mv P010129900101-20170827-01-01/ data/
```
For convenience, we rename the data directory ```P010129900101-20170827-01-01/``` to ```data/```
### Goals
* **Data preprocessing**: use the Insight-HXMT data analysis software (HXMTDAS v2.01) to produce the data products used in the analysis
* **Timing analysis**: use the Crab ephemeris to produce the Crab pulse profile
* **Spectral analysis**: split the profile into several phase intervals and obtain the spectrum and background spectrum of each interval
## Data Preprocessing
Data preprocessing is done separately for the three payloads, the High Energy (HE), Medium Energy (ME), and Low Energy (LE) telescopes:
* [HE data processing](#he-data-processing)
* [ME data processing](#me-data-processing)
* [LE data processing](#le-data-processing)
### HE Data Processing
### hepical
```
!hepical evtfile=./data/HE/HXMT_P010129900101_HE-Evt_FFFFFF_V1_L1P.FITS outfile=./data/HE/he_pi.fits clobber=yes
```
<div class="alert alert-block alert-warning">
<b>NOTES:</b> If you see the error hepical : hepical: Error: Unable to get 'gain (codename:CHAN2PI_0)' file named ''!, specify the gain file manually using its path in CALDB, as in the following command.
</div>
```
!hepical evtfile=./data/HE/HXMT_P010129900101_HE-Evt_FFFFFF_V1_L1P.FITS gainfile=/Users/tuoyouli/Documents/hxmtsoft_newtest/CALDB/data/hxmt/he/bcf/hxmt_he_gain_20171030_v1.fits outfile=./data/HE/he_pi.fits clobber=yes
```
Depending on your computer, this command usually takes 2-3 minutes. It produces a new event file named he_pi.fits.
```
!ls -trl ./data/HE/
```
### hegtigen
Generate the good time intervals (GTIs) for the HE payload
```
!hegtigen hvfile=./data/HE/HXMT_P010129900101_HE-HV_FFFFFF_V1_L1P.FITS tempfile=./data/HE/HXMT_P010129900101_HE-TH_FFFFFF_V1_L1P.FITS ehkfile=./data/AUX/HXMT_P010129900101_HE-EHK_FFFFFF_V1_L1P.FITS outfile=./data/HE/he_gti.fits defaultexpr="NONE" expr="ELV>10&&COR>8&&T_SAA>=300&&TN_SAA>=300&&ANG_DIST<=0.04" clobber=yes
```
The GTI selection criteria we recommend for spectral analysis are ```ELV>10&&COR>8&&T_SAA>=300&&TN_SAA>=300&&ANG_DIST<=0.04```.
Again, you can inspect the output; this step produced a FITS file named he_gti.fits.
```
!ls -trl ./data/HE/
```
### hescreen
Select the good events that satisfy the GTIs
```
!hescreen evtfile=./data/HE/he_pi.fits gtifile=./data/HE/he_gti.fits outfile=./data/HE/he_screen.fits userdetid=0-17 minPI=0 maxPI=255 clobber=yes
```
Running hescreen takes about one minute; the output file is he_screen.fits. Here we selected detector IDs 0-17 (all of them) and channels 0-255 (also all of them). To extract photons in a different energy band, change the minPI and maxPI values.
### hespecgen
Generate the spectral files (spectra)
```
!hespecgen evtfile=./data/HE/he_screen.fits outfile=./data/HE/he_spec deadfile=./data/HE/HXMT_P010129900101_HE-DTime_FFFFFF_V1_L1P.FITS userdetid="0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16;17" starttime=0 stoptime=0 minPI=0 maxPI=255 clobber=yes
```
In this step we generate spectra for the 18 HE detectors, selecting detectors with the userdetid parameter. Separating the detector IDs with semicolons (;) makes this produce 18 individual spectra rather than one summed spectrum over the 18 detectors. You can inspect the results: 18 ```.pha``` files with the prefix ```he_spec```. The spectral results are shown in the [Spectral Analysis](#spectral-analysis) section, so we do not expand on them here.
```
!ls -tr ./data/HE/
```
<div class="alert alert-block alert-info">
<b>NOTES:</b> The reason we produce 18 spectra rather than one summed spectrum is that each detector has its own response matrix; below we generate a response for each detector's spectrum (the blind detector excepted). The script hhe_spec2pi can then be used to merge the spectra and response matrices.
</div>
```
!ls ./data/HE/he_spec*.pha | sort -V > ./data/HE/he_spec.txt
!cat ./data/HE/he_spec.txt
```
### herspgen
Generate the response matrix of every detector except the blind detector
```
phalist = open("./data/HE/he_spec.txt")
for phafile,i in zip(phalist,range(18)):
    if i == 16: continue # detector 16 is the blind detector; no response matrix is produced
herspgen_text = "herspgen phafile=%s outfile=./data/HE/he_rsp_g%s.fits attfile=%s ra=-1 dec=-91 clobber=yes"%(phafile[0:-1], str(i), "./data/ACS/HXMT_P010129900101_Att_FFFFFF_V1_L1P.FITS")
!{herspgen_text}
phalist.close()
```
We have produced response matrices for 17 detectors. Inspect the results:
```
!ls -tr ./data/HE//he_rsp*
```
### ME Data Processing
### mepical
Pulse Invariance CALibration for the ME detector.
Of the two input files, ```evtfile=``` takes the raw event file (filename keyword ```ME-Evt```) and ```tempfile``` takes the temperature file (filename keyword ```ME-TH```)
```
!mepical evtfile=./data/ME/HXMT_P010129900101_ME-Evt_FFFFFF_V1_L1P.FITS tempfile=./data/ME/HXMT_P010129900101_ME-TH_FFFFFF_V1_L1P.FITS outfile=./data/ME/me_pi.fits clobber=yes
```
We can inspect the newly produced file
```
!ls -l ./data/ME/me_pi.fits
```
### megrade
Grade the ME events and select the "single-split" events; this step also produces the ME dead-time file. ```evtfile``` is the input file name, i.e. the file produced by ```mepical``` in the previous step. ```outfile``` and ```deadfile``` are the output file names of the event-selected file and the dead-time file, respectively.
```
!megrade evtfile=./data/ME/me_pi.fits outfile=./data/ME/me_grade.fits deadfile=./data/ME/me_dtime.fits clobber=yes
```
### megtigen
Generate the GTI file for the ME detector. The GTI selection criteria are the same as for HE: ```ELV>10&&COR>8&&T_SAA>=300&&TN_SAA>=300&&ANG_DIST<=0.04```
```
!megtigen tempfile=./data/ME/HXMT_P010129900101_ME-TH_FFFFFF_V1_L1P.FITS ehkfile=./data/AUX/HXMT_P010129900101_HE-EHK_FFFFFF_V1_L1P.FITS outfile=./data/ME/me_gti.fits defaultexpr="NONE" expr="ELV>10&&COR>8&&T_SAA>=300&&TN_SAA>=300&&ANG_DIST<=0.04" clobber=yes
```
We can see that a file named me_gti.fits was produced
```
!ls ./data/ME//me_gti.fits
```
### megti
With the megti.py script we can filter the GTIs further, removing from the GTI file produced in the previous step the time intervals strongly affected by the particle background. Use the ```-h``` flag to see how to run the script.
```
!python ./hxmt_scripts/megti.py -h
```
Two input styles are provided: in Method 1 you pass, in order on the command line, the output file of ```megrade```, the output file of ```megtigen```, and the name of the new GTI file; Method 2 is interactive, prompting you for the files above after ```megti``` starts. We demonstrate the first style.
<div class="alert alert-block alert-info">
<b>NOTES:</b> We ship this script in the hxmt_scripts folder, so here it is run as python megti.py. Note that the same script is included in the software package available from the Insight-HXMT software download page; following the installation instructions you may have set a shell "alias", so that running the megti command in a shell is equivalent to running the megti.py script. The two are interchangeable; because Jupyter does not recognize shell aliases, we invoke the Python script directly, but there is no difference in substance.
</div>
```
!python ./hxmt_scripts/megti.py ./data/ME/me_grade.fits ./data/ME/me_gti.fits ./data/ME/me_gti_new.fits
```
We can inspect the newly produced GTI file, named me_gti_new.fits
```
!ls -1t ./data/ME/me_gti*
```
### mescreen
Next, we use the GTIs to screen the events falling inside the good time intervals, and apply stricter selections on detector ID, energy channel, observation time, and so on. Based on the calibration, we only need to select all "small-FoV" detectors plus the "blind detectors".
We provide a small Python utility ```hprint_detid.py``` that prints the IDs of all detectors; before screening we can print the detector information and pick out the small-FoV ME detectors we need.
```
!python ./hxmt_scripts/hprint_detid.py
```
As shown, the ME small-FoV detectors have IDs 0-5, 7, 12-23, 25, 30-41, 43, 48-53. We can now screen the events. ```mescreen``` takes a ```userdetid``` parameter for the detector IDs; we set it to all small-FoV detector IDs plus the blind-detector IDs, ```userdetid="0-5,7,12-23,25,30-41,43,48-53;10,28,46"```. ```mescreen``` is used as follows
```
!mescreen evtfile=./data/ME/me_grade.fits gtifile=./data/ME/me_gti_new.fits outfile=./data/ME/me_screen.fits userdetid="0-5,7,12-23,25,30-41,43,48-53;10,28,46" clobber=yes
```
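To illustrate, the ```userdetid``` syntax can be unpacked with a few lines of Python: semicolons separate groups that each yield their own spectrum, while commas and dashes enumerate detector IDs within a group. This is an illustrative sketch only, not the HXMTDAS parser; ```parse_userdetid``` is a name we made up here.
```
def parse_userdetid(expr):
    """Expand a userdetid string such as "0-5,7;10,28" into lists of detector IDs."""
    groups = []
    for group in expr.split(";"):        # each ";"-separated group becomes one spectrum
        ids = []
        for token in group.split(","):
            if "-" in token:             # a dash denotes an inclusive ID range
                lo, hi = map(int, token.split("-"))
                ids.extend(range(lo, hi + 1))
            else:
                ids.append(int(token))
        groups.append(ids)
    return groups

groups = parse_userdetid("0-5,7,12-23,25,30-41,43,48-53;10,28,46")
```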
The output is a FITS file named me_screen.fits.
```
!ls ./data/ME/me_screen.fits
```
### mespecgen
Generate the spectrum file. We build a spectrum from the events in me_screen.fits. That file contains photons from both the "small-FoV" detectors and the "blind detectors"; when generating the spectrum we keep only the small-FoV photons, selecting detectors with ```userdetid="0-5,7,12-23,25,30-41,43,48-53"```. Note that the blind-detector IDs 10, 28, 46 are absent here
```
!mespecgen evtfile=./data/ME/me_screen.fits outfile=./data/ME/me_spec deadfile=./data/ME/me_dtime.fits userdetid="0-5,7,12-23,25,30-41,43,48-53" clobber=yes
!ls ./data/ME/me_spec*
```
### merspgen
Generate the ME response matrix
```
!merspgen phafile=./data/ME/me_spec_g0_0-53.pha outfile=./data/ME/me_rsp.fits attfile=./data/ACS/HXMT_P010129900101_Att_FFFFFF_V1_L1P.FITS ra=-1 dec=-91 clobber=yes
!ls ./data/ME/me_rsp.fits
```
### Low Energy telescope (LE) data processing
### lepical
Pulse Invariance CALibration for the LE detectors
```
!lepical evtfile=./data/LE/HXMT_P010129900101_LE-Evt_FFFFFF_V1_L1P.FITS tempfile=./data/LE/HXMT_P010129900101_LE-TH_FFFFFF_V1_L1P.FITS outfile=./data/LE/le_pi.fits
```
This produces an event file named le_pi.fits. We can inspect the new file
```
!ls ./data/LE/le_pi.fits
```
### lerecon
Event reconstruction for the LE detectors
```
!lerecon evtfile=./data/LE/le_pi.fits outfile=./data/LE/le_recon.fits instatusfile=./data/LE/HXMT_P010129900101_LE-InsStat_FFFFFF_V1_L1P.FITS
```
### legtigen
Generate the good-time-interval (GTI) file for the LE detectors. The recommended criteria are ```expr="ELV>10&&COR>8&&DYE_ELV>40&&T_SAA>=300&&TN_SAA>=300&&ANG_DIST<=0.04"```; note that, compared with the HE and ME criteria, there is one extra condition requiring the angle between the bright Earth and the satellite pointing (DYE_ELV) to exceed 40 degrees.
```
!legtigen evtfile="" instatusfile=./data/LE/HXMT_P010129900101_LE-InsStat_FFFFFF_V1_L1P.FITS tempfile=./data/LE/HXMT_P010129900101_LE-TH_FFFFFF_V1_L1P.FITS ehkfile=./data/AUX/HXMT_P010129900101_HE-EHK_FFFFFF_V1_L1P.FITS outfile=./data/LE/le_gti.fits defaultexpr="NONE" expr="ELV>10&&COR>8&&DYE_ELV>40&&T_SAA>=300&&TN_SAA>=300&&ANG_DIST<=0.04" clobber=yes
```
### legti
Similar to ```megti```, we further refine the good time intervals produced by ```legtigen``` in the previous step.
Again, the ```-h``` flag shows the input parameters
```
!python ./hxmt_scripts/legti.py -h
```
As shown, we pass, in order, the lerecon output file, the legtigen output file from the previous step, and the name of the new GTI file.
```
!python ./hxmt_scripts/legti.py ./data/LE/le_recon.fits ./data/LE/le_gti.fits ./data/LE/le_gti_new.fits
```
We now have a good-time-interval file named le_gti_new.fits
```
!ls -1 ./data/LE/le_gti*
```
### lescreen
Screen the events inside the good time intervals
As before, we use the good time intervals to select the events within them. As with ME, we keep only the "small-FoV" detectors and the "blind detectors"; see the earlier output of ```hprint_detid.py```. For LE, the "small-FoV" and "blind" detector IDs are ```0,2-4,6-10,12,14,20,22-26,28,30,32,34-36,38-42,44,46,52,54-58,60-62,64,66-68,70-74,76,78,84,86,88-90,92-94``` and ```13,45,77,21,53,85``` respectively
```
!lescreen evtfile=./data/LE/le_recon.fits gtifile=./data/LE/le_gti_new.fits outfile=./data/LE/le_screen.fits userdetid="0,2-4,6-10,12,14,20,22-26,28,30,32,34-36,38-42,44,46,52,54-58,60-62,64,66-68,70-74,76,78,84,86,88-90,92-94,13,45,77,21,53,85" clobber=yes
```
This produces a FITS file named le_screen.fits
```
!ls ./data/LE/le_screen.fits
```
### lespecgen
Generate the LE spectrum
When generating the spectrum we select only the small-FoV detectors: ```userdetid="0,2-4,6-10,12,14,20,22-26,28,30,32,34-36,38-42,44,46,52,54-58,60-62,64,66-68,70-74,76,78,84,86,88-90,92-94"```
```
!lespecgen evtfile=./data/LE/le_screen.fits outfile=./data/LE/le_spec userdetid="0,2-4,6-10,12,14,20,22-26,28,30,32,34-36,38-42,44,46,52,54-58,60-62,64,66-68,70-74,76,78,84,86,88-90,92-94" eventtype=1 clobber=yes
```
### lerspgen
Generate the LE response matrix
```
!lerspgen phafile=./data/LE/le_spec_g0_0-94.pha outfile=./data/LE/le_rsp.fits attfile=./data/ACS/HXMT_P010129900101_Att_FFFFFF_V1_L1P.FITS tempfile=./data/LE/HXMT_P010129900101_LE-TH_FFFFFF_V1_L1P.FITS ra=-1 dec=-91 clobber=yes
```
***
This completes the data preprocessing for HE, ME, and LE: we have produced screened event files, spectrum files, and response files, on which we can now base our data analysis.
## Timing Analysis
In this section we use the Crab ephemeris to fold the pulse profile of the Crab pulsar. The steps are:
* barycentric correction (hxbary)
* compute each photon's pulse phase from the Crab ephemeris
* histogram the phases to produce the pulse profile
<div class="alert alert-block alert-info">
<b>NOTES:</b> Some of the processing in this section, including computing the pulse phases and producing the profile, is done with Python programs we wrote ourselves. We briefly describe the formulas and provide these programs under the hxmt_scripts path, but you are free to write your own programs to suit your needs.
</div>
### hxbary: barycentric correction
Convert the photon arrival times recorded by the detectors to the solar system barycenter, using the HXMTDAS tool ```hxbary```. The input is an event file; the output adds a ```TDB``` column to that file
```
!hxbary evtfile=./data/HE/he_screen.fits orbitfile=./data/ACS/HXMT_P010129900101_Orbit_FFFFFF_V1_L1P.FITS ra=83.63322083 dec=22.01446111111 eph=2 clobber=yes
```
We applied the barycentric correction to the HE event file he_screen.fits. The parameter ```orbitfile``` is the orbit file, ```ra``` and ```dec``` are the right ascension and declination of the source, and ```eph=2``` selects the DE405 solar-system ephemeris. We apply the same correction to the ME and LE event files
```
!hxbary evtfile=./data/ME/me_screen.fits orbitfile=./data/ACS/HXMT_P010129900101_Orbit_FFFFFF_V1_L1P.FITS ra=83.63322083 dec=22.01446111111 eph=2 clobber=yes
!hxbary evtfile=./data/LE/le_screen.fits orbitfile=./data/ACS/HXMT_P010129900101_Orbit_FFFFFF_V1_L1P.FITS ra=83.63322083 dec=22.01446111111 eph=2 clobber=yes
```
### Compute the photon pulse phases
The folder contains an ephemeris file ```Crab_ephemeris.par``` recording the Crab pulsar's timing parameters over the time span covered by these data: the spin frequency ($f_0$) and its first ($f_1$) and second ($f_2$) derivatives. We can then compute each photon's phase $\phi$:
<br><br>$\phi = f_0(t-t_0) + \frac{1}{2}f_1(t-t_0)^2 + \frac{1}{6}f_2(t-t_0)^3$,
<br><br>where $t_0$ is the reference epoch in the ephemeris file, PEPOCH=57979.425942180467246 (MJD)
<br> Below we use the script ```hphase_cal.py``` to compute the phases. First check the usage with ```-h```
```
!python ./hxmt_scripts/hphase_cal.py -h
!python ./hxmt_scripts/hphase_cal.py evtfile=./data/HE/he_screen.fits parfile=Crab_ephemeris.par
```
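For illustration, the phase computation that ```hphase_cal.py``` performs can be sketched with numpy. This is a simplified sketch: the ephemeris values below are stand-ins we wrote for illustration, whereas the real script reads $f_0$, $f_1$, $f_2$ and PEPOCH from ```Crab_ephemeris.par```.
```
import numpy as np

# stand-in ephemeris values for illustration; the real ones come from Crab_ephemeris.par
PEPOCH = 57979.425942180467246                 # reference epoch t0 (MJD)
f0, f1, f2 = 29.64, -3.7e-10, 1.2e-20          # spin frequency and its derivatives

def pulse_phase(t_mjd):
    """Fractional pulse phase for barycentered photon times t_mjd (MJD)."""
    dt = (np.asarray(t_mjd, dtype=float) - PEPOCH) * 86400.0  # seconds since t0
    phi = f0*dt + 0.5*f1*dt**2 + (1.0/6.0)*f2*dt**3
    return phi % 1.0                                          # keep the fractional part

phases = pulse_phase(np.array([57979.43, 57979.44]))
```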
We can inspect he_screen.fits with the ftools utility ```fstruct```; a Phase column has been appended as the last column
```
!fstruct ./data/HE/he_screen.fits
```
Likewise, we compute the photon phases for the ME and LE event files
```
!python ./hxmt_scripts/hphase_cal.py evtfile=./data/ME/me_screen.fits parfile=Crab_ephemeris.par
!python ./hxmt_scripts/hphase_cal.py evtfile=./data/LE/le_screen.fits parfile=Crab_ephemeris.par
```
Having computed each photon's phase, we can now read the data and plot the pulse profile
### Produce the pulse profile
We read the Phase column with Python and plot the pulse profile
```
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np
# read the HE, ME, and LE data
phase = np.array([])
for filename in ["./data/HE/he_screen.fits","./data/ME/me_screen.fits","./data/LE/le_screen.fits"]:
    hdulist = fits.open(filename)
    phase = np.append(phase, hdulist[1].data.field("Phase")) # read the Phase column
phi = np.arange(0,1.99,0.01)[0:-1]
counts,*_ = np.histogram(phase,bins=np.arange(0,1,0.01))
counts = np.append(counts,counts) # repeat the profile over two periods
plt.plot(phi,counts)
plt.xlabel(r"$\phi$")
plt.ylabel("counts")
plt.axvline(x=0.6,ls='dotted')
plt.axvline(x=0.8,ls='dotted')
plt.show()
```
We obtain the Crab pulsar's double-peaked profile. In the phase-resolved spectroscopy we will use the photons in the "off-pulse" phases (0.6-0.8, between the dotted lines above) as the background. Now we begin the spectral analysis
***
## Spectral Analysis
In this section we produce the phase-resolved spectra. Based on the profile from the previous step, we
* select the "off-pulse" photons (phases 0.6-0.8) to build a background spectrum
* produce a spectrum for each phase interval
* fit the spectra
### Build the spectrum of the "off-pulse" photons
We select the photons with phases in 0.6-0.8 and build their spectrum, to serve as the background
```
!fselect ./data/LE/le_screen.fits ./data/LE/le_screen_phase_0.6-0.8.fits expr="Phase>=0.6&&Phase<0.8" clobber=yes
!ls ./data/LE/le_screen_phase_0.6-0.8.fits
from hxmt_scripts.create_LE_specfile import create_LE_specfile
hdulist = fits.open("./data/LE/le_screen_phase_0.6-0.8.fits")
PI = hdulist[1].data.field("PI")
# The exposure of this spectrum is 1/5 of the total exposure,
# since these photons cover 1/5 of the full phase,
# so we divide the exposure by 5
exposure = hdulist[1].header["exposure"]/5
hdulist.close()
# build the LE spectral counts
counts,*_ = np.histogram(PI, bins=np.arange(0,1536,1))
error = np.sqrt(counts)
outfile = "./data/LE/le_spec_phase_bkg.pha"
create_LE_specfile(exposure,counts,error,outfile)
# use the fparkey tool to update the EXPOSURE keyword in the spectrum file
fparkey_cmd = "fparkey %s %s EXPOSURE"%(str(exposure),outfile)
!{fparkey_cmd}
```
We can inspect the output file le_spec_phase_bkg.pha
```
hdulist = fits.open("./data/LE/le_spec_phase_bkg.pha");
channel = hdulist[1].data.field("channel");
counts = hdulist[1].data.field("counts");
errors = hdulist[1].data.field("STAT_ERR");
plt.errorbar(channel, counts, yerr=errors);
plt.show()
```
### Produce the spectra of each phase interval
Similarly, we divide the full phase range 0-1 into 5 intervals; for each interval we produce its event file, build the corresponding spectrum, and update the exposure keyword of the spectrum file
```
!fselect ./data/LE/le_screen.fits ./data/LE/le_screen_phase_0.0-0.2.fits expr="Phase>=0.0&&Phase<0.2" clobber=yes
hdulist = fits.open("./data/LE/le_screen_phase_0.0-0.2.fits")
PI = hdulist[1].data.field("PI")
exposure = hdulist[1].header["exposure"]/5
hdulist.close()
# build the LE spectral counts
counts,*_ = np.histogram(PI, bins=np.arange(0,1536,1))
error = np.sqrt(counts)
outfile = "./data/LE/le_spec_phase_0.0-0.2.pha"
create_LE_specfile(exposure,counts,error,outfile)
# use fparkey to update the EXPOSURE keyword in the spectrum file
fparkey_cmd = "fparkey %s %s EXPOSURE"%(str(exposure),outfile)
!{fparkey_cmd}
#-------------------------------
!fselect ./data/LE/le_screen.fits ./data/LE/le_screen_phase_0.2-0.4.fits expr="Phase>=0.2&&Phase<0.4" clobber=yes
hdulist = fits.open("./data/LE/le_screen_phase_0.2-0.4.fits")
PI = hdulist[1].data.field("PI")
# The exposure of this spectrum is 1/5 of the total exposure,
# since these photons cover 1/5 of the full phase
exposure = hdulist[1].header["exposure"]/5
hdulist.close()
# build the LE spectral counts
counts,*_ = np.histogram(PI, bins=np.arange(0,1536,1))
error = np.sqrt(counts)
outfile = "./data/LE/le_spec_phase_0.2-0.4.pha"
create_LE_specfile(exposure,counts,error,outfile)
# use fparkey to update the EXPOSURE keyword in the spectrum file
fparkey_cmd = "fparkey %s %s EXPOSURE"%(str(exposure),outfile)
!{fparkey_cmd}
#-------------------------------
!fselect ./data/LE/le_screen.fits ./data/LE/le_screen_phase_0.4-0.6.fits expr="Phase>=0.4&&Phase<0.6" clobber=yes
hdulist = fits.open("./data/LE/le_screen_phase_0.4-0.6.fits")
PI = hdulist[1].data.field("PI")
# The exposure of this spectrum is 1/5 of the total exposure,
# since these photons cover 1/5 of the full phase
exposure = hdulist[1].header["exposure"]/5
hdulist.close()
# build the LE spectral counts
counts,*_ = np.histogram(PI, bins=np.arange(0,1536,1))
error = np.sqrt(counts)
outfile = "./data/LE/le_spec_phase_0.4-0.6.pha"
create_LE_specfile(exposure,counts,error,outfile)
# use fparkey to update the EXPOSURE keyword in the spectrum file
fparkey_cmd = "fparkey %s %s EXPOSURE"%(str(exposure),outfile)
!{fparkey_cmd}
#-------------------------------
!fselect ./data/LE/le_screen.fits ./data/LE/le_screen_phase_0.8-1.0.fits expr="Phase>=0.8&&Phase<1.0" clobber=yes
hdulist = fits.open("./data/LE/le_screen_phase_0.8-1.0.fits")
PI = hdulist[1].data.field("PI")
# The exposure of this spectrum is 1/5 of the total exposure,
# since these photons cover 1/5 of the full phase
exposure = hdulist[1].header["exposure"]/5
hdulist.close()
# build the LE spectral counts
counts,*_ = np.histogram(PI, bins=np.arange(0,1536,1))
error = np.sqrt(counts)
outfile = "./data/LE/le_spec_phase_0.8-1.0.pha"
create_LE_specfile(exposure,counts,error,outfile)
# use fparkey to update the EXPOSURE keyword in the spectrum file
fparkey_cmd = "fparkey %s %s EXPOSURE"%(str(exposure),outfile)
!{fparkey_cmd}
!ls -tr1 ./data/LE/le_spec_*pha
```
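Rather than repeating the cell above for every interval, the per-phase spectra can be generated in one loop. The sketch below is illustrative: it shells out to ```fselect``` and ```fparkey``` with os.system instead of Jupyter's ```!``` magic, and it reuses the ```create_LE_specfile``` helper shipped with this demo; the execution loop is left commented out since it needs the data files on disk.
```
import os
import numpy as np

PHASE_EDGES = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]

def phase_intervals(edges=PHASE_EDGES):
    """Pair consecutive edges into (lo, hi, tag) phase intervals."""
    return [(lo, hi, "%.1f-%.1f" % (lo, hi)) for lo, hi in zip(edges[:-1], edges[1:])]

def make_phase_spectrum(lo, hi, tag):
    """Select photons in [lo, hi), build the spectrum, and fix the exposure keyword."""
    from astropy.io import fits
    from hxmt_scripts.create_LE_specfile import create_LE_specfile
    evtfile = "./data/LE/le_screen_phase_%s.fits" % tag
    os.system('fselect ./data/LE/le_screen.fits %s expr="Phase>=%s&&Phase<%s" clobber=yes'
              % (evtfile, lo, hi))
    with fits.open(evtfile) as hdulist:
        PI = hdulist[1].data.field("PI")
        # each interval covers (hi - lo) of the full phase, so scale the exposure
        exposure = hdulist[1].header["exposure"] * (hi - lo)
    counts, *_ = np.histogram(PI, bins=np.arange(0, 1536, 1))
    create_LE_specfile(exposure, counts, np.sqrt(counts),
                       "./data/LE/le_spec_phase_%s.pha" % tag)
    os.system("fparkey %s ./data/LE/le_spec_phase_%s.pha EXPOSURE" % (exposure, tag))

# for lo, hi, tag in phase_intervals():
#     make_phase_spectrum(lo, hi, tag)
```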
We have produced spectra for 4 phase intervals, plus the 0.6-0.8 spectrum as the background; we can now fit the spectra in Xspec
### Spectral fitting
We fit the spectrum in Xspec (this software package does not include Xspec; please initialize HEASOFT yourself and type xspec on the command line to enter the
Xspec environment).
`$ xspec `
` XSPEC version: 12.10.0c`
` Build Date/Time: Mon Jul 2 19:29:01 2018`
<b>XSPEC12></b> `data ./data/LE/le_spec_phase_0.0-0.2.pha`
<b>XSPEC12></b> `back ./data/LE/le_spec_phase_bkg.pha `
<b>XSPEC12></b> `response ./data/LE/le_rsp.fits`
<b>XSPEC12></b> `cpd /xw`
<b>XSPEC12></b> `ignore **-1.0 10.0-**`
<b>XSPEC12></b> `mo TBabs*pow`
<b>XSPEC12></b> `/*`
<b>XSPEC12></b> `fit`
<b>XSPEC12></b> `setpl rebin 3 15`
<b>XSPEC12></b> `pl ld del`

We can see the LE fit results
```
Current model list:
========================================================================
Model TBabs<1>*powerlaw<2> Source No.: 1   Active/On
Model Model Component  Parameter  Unit     Value
 par  comp
   1    1   TBabs      nH         10^22    0.600477     +/-  0.106388
   2    2   powerlaw   PhoIndex            1.81481      +/-  8.19737E-02
   3    2   powerlaw   norm                1.33191      +/-  0.168975
________________________________________________________________________

 Using energies from responses.
Fit statistic : Chi-Squared = 1054.35 using 1062 PHA bins.
Test statistic : Chi-Squared = 1054.35 using 1062 PHA bins.
 Reduced chi-squared = 0.995606 for 1059 degrees of freedom
 Null hypothesis probability = 5.345763e-01
Weighting method: standard
```
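As a quick sanity check on the output above, the reduced chi-squared is simply the fit statistic divided by the number of degrees of freedom (PHA bins minus free parameters):
```
chi2, nbins, npar = 1054.35, 1062, 3
dof = nbins - npar        # 1059 degrees of freedom
red_chi2 = chi2 / dof     # about 0.996, matching the Xspec output
```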
***
## Closing Remarks
Because of limited space we only presented the phase-resolved spectroscopy workflow for LE; HE and ME are processed analogously. In practice you can write a script that carries out these steps and loops over all the data. For the details of the processing, see our papers
`Ge, M. Y., et al. "X-RAY PHASE-RESOLVED SPECTROSCOPY OF PSRs B0531+ 21, B1509–58, AND B0540–69 WITH RXTE." The Astrophysical Journal Supplement Series 199.2 (2012): 32.`
`Tuo, You-Li, et al. "Insight-HXMT observations of the Crab pulsar." RAA 19.6 (2019): 087.`
If you have any questions, suggestions, or bug reports about the workflow or code, you can leave feedback at the GitHub repository hosting this notebook, https://github.com/tuoyl/hxmt_analysis_demo. Feel free to raise an issue or to collaborate. You can also contact the Insight-HXMT ground segment team with specific questions, including but not limited to data, software, detectors, and science.
good luck and have fun!
```
#hide
#default_exp dev.nbdev
```
# NB-Dev Modification
<br>
### Imports
```
#exports
from fastcore.foundation import Config, Path
from nbdev import export
import collections
import os
import re
#exports
_re_version = re.compile(r'^__version__\s*=.*$', re.MULTILINE)
def update_version():
"Add or update `__version__` in the main `__init__.py` of the library"
fname = Config().path("lib_path")/'__init__.py'
if not fname.exists(): fname.touch()
version = f'__version__ = "{Config().version}"'
with open(fname, 'r') as f: code = f.read()
if _re_version.search(code) is None: code = version + "\n" + code
else: code = _re_version.sub(version, code)
with open(fname, 'w') as f: f.write(code)
export.update_version = update_version
update_version()
#exports
def add_init(path, contents=''):
"Add `__init__.py` in all subdirs of `path` containing python files if it's not there already"
for p,d,f in os.walk(path):
for f_ in f:
if f_.endswith('.py'):
if not (Path(p)/'__init__.py').exists(): (Path(p)/'__init__.py').write_text('\n'+contents)
break
def update_version(init_dir=None, extra_init_contents=''):
"Add or update `__version__` in the main `__init__.py` of the library"
version = Config().version
version_str = f'__version__ = "{version}"'
if init_dir is None: path = Config().path("lib_path")
else: path = Path(init_dir)
fname = path/'__init__.py'
if not fname.exists(): add_init(path, contents=extra_init_contents)
code = f'{version_str}\n{extra_init_contents}'
with open(fname, 'w') as f: f.write(code)
export.add_init = add_init
export.update_version = update_version
#exports
def prepare_nbdev_module(extra_init_contents=''):
export.reset_nbdev_module()
export.update_version(extra_init_contents=extra_init_contents)
export.update_baseurl()
prepare_nbdev_module()
#exports
def notebook2script(fname=None, silent=False, to_dict=False, bare=False, extra_init_contents=''):
"Convert notebooks matching `fname` to modules"
# initial checks
if os.environ.get('IN_TEST',0): return # don't export if running tests
if fname is None: prepare_nbdev_module(extra_init_contents=extra_init_contents)
files = export.nbglob(fname=fname)
d = collections.defaultdict(list) if to_dict else None
modules = export.create_mod_files(files, to_dict, bare=bare)
for f in sorted(files): d = export._notebook2script(f, modules, silent=silent, to_dict=d, bare=bare)
if to_dict: return d
else: add_init(Config().path("lib_path"))
return
notebook2script()
#exports
def add_mod_extra_indices(mod, extra_modules_to_source):
for extra_module, module_source in extra_modules_to_source.items():
extra_module_fp = export.Config().path("lib_path")/extra_module
with open(extra_module_fp, 'r') as text_file:
extra_module_code = text_file.read()
names = export.export_names(extra_module_code)
mod.index.update({name: module_source for name in names})
return mod
def add_mod_extra_modules(mod, extra_modules):
extra_modules = [e for e in extra_modules if e not in mod.modules]
mod.modules = sorted(mod.modules + extra_modules)
return mod
def add_extra_code_desc_to_mod(
extra_modules_to_source = {
'api.py': '06-client-gen.ipynb',
'dev/raw.py': '03-raw-methods.ipynb'
}
):
mod = export.get_nbdev_module()
mod = add_mod_extra_indices(mod, extra_modules_to_source)
mod = add_mod_extra_modules(mod, extra_modules_to_source.keys())
export.save_nbdev_module(mod)
return
# add_extra_code_desc_to_mod()
#hide
notebook2script('10-nbdev.ipynb')
```
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import skfuzzy as fuzz
from sklearn.datasets import make_moons
from deepART import dataset
np.random.seed(0)
X, y = make_moons(n_samples=200, noise=0.05)
#scale data
X[:,0] = X[:,0]-np.min(X[:,0])
X[:,1] = X[:,1]-np.min(X[:,1])
sample_data = dataset.Dataset(X)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.set_xlabel("X",fontsize=12)
ax.set_ylabel("Y",fontsize=12)
ax.grid(True,linestyle='-',color='0.75')
# scatter with colormap mapping
ax.scatter(sample_data.data_normalized[...,0],sample_data.data_normalized[...,1],s=100,c=y,marker='*')
ax.axis((0, 1, 0, 1))
plt.show()
#fig.savefig('data_raw.png', bbox_inches='tight')
cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(
data=sample_data.data_normalized.T, c=2, m=2, error=0.005, maxiter=1000, init=None, seed=100)
cluster_membership = np.argmax(u, axis=0)
cluster_membership_scores = np.max(u, axis=0)
def unpack_results(pred,target,target_scores):
#unpack result tuples
pred_k = []
scores = []
data_contour = np.empty((0,2),dtype=np.float32)
for n, results in enumerate(pred):
if results == target:
data_contour = np.vstack((data_contour, sample_data.data_normalized[n]))
scores.append(target_scores[n])
return data_contour, scores
def plot_countour(fig,data_contour, scores, sub_index, nplots=(3,2)):
ax = fig.add_subplot(nplots[0],nplots[1],sub_index)
ax.set_title("Clustering Results ",fontsize=14)
ax.set_xlabel("X",fontsize=12)
ax.set_ylabel("Y",fontsize=12)
ax.grid(True,linestyle='-',color='0.75')
# scatter with colormap mapping to predicted class
ax.tricontour(data_contour[...,0], data_contour[...,1], scores, 14, linewidths=0, colors='k')
cntr2 = ax.tricontourf(data_contour[...,0], data_contour[...,1], scores, 14, cmap="RdBu_r",)
fig.colorbar(cntr2, ax=ax)
ax.plot(data_contour[...,0], data_contour[...,1], 'ko', ms=0.5)
ax.axis((0, 1, 0, 1))
ax.set_title('Cluster {}'.format(int(sub_index-1)))
plt.subplots_adjust(hspace=0.5)
plt.show()
#fig.savefig('data_clustered.png', bbox_inches='tight')
#plot out clusters memebership
fig = plt.figure(figsize=(8,8))
nplots = (int(np.ceil(2/2)), 2)
for i in range(2):
data_contour, scores = unpack_results(cluster_membership,target=i, target_scores=cluster_membership_scores)
plot_countour(fig, data_contour,scores, sub_index=i+1,nplots=nplots)
# plt.savefig("fuzzy_contour_3.png")
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.set_title(" ",fontsize=14)
ax.set_xlabel("X",fontsize=12)
ax.set_ylabel("Y",fontsize=12)
# ax.set_ylabel("Z",fontsize=12)
ax.grid(True,linestyle='-',color='0.75')
# scatter with colormap mapping to predicted class
ax.scatter(sample_data.data_normalized[...,0],sample_data.data_normalized[...,1],s=100,c=cluster_membership, marker = '*', cmap = cm.jet_r );
plt.show()
# plt.savefig("fuzzy_correct_3.png")
cluster_membership
y
from sklearn.metrics import silhouette_score, davies_bouldin_score, precision_score, recall_score, f1_score, accuracy_score, normalized_mutual_info_score
def obtain_metrics(x, y_true, y_pred):
results = dict({})
results["silhouette_score"] = silhouette_score(x, y_pred)
results["davies_bouldin_score"] = davies_bouldin_score(x, y_pred)
results["normalized_mutual_info_score"] = normalized_mutual_info_score(y_true, y_pred)
results["precision_score"] = precision_score(y_true, y_pred)
results["recall_score"] = recall_score(y_true, y_pred)
results["f1_score"] = f1_score(y_true, y_pred)
results["accuracy_score"] = accuracy_score(y_true, y_pred)
return results
obtain_metrics(sample_data.data_normalized, y, cluster_membership)
```
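The ```fpc``` value returned by ```fuzz.cluster.cmeans``` above is the fuzzy partition coefficient, $F = \frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{c} u_{ij}^2$, often used to compare different numbers of clusters: it equals 1 for a crisp partition and $1/c$ when every sample is spread evenly over all $c$ clusters. Below is a minimal numpy sketch of the definition (our own implementation, not the scikit-fuzzy internals):
```
import numpy as np

def fuzzy_partition_coefficient(u):
    """FPC of a membership matrix u with shape (n_clusters, n_samples).

    Each column of u must sum to 1. Returns 1.0 for a crisp partition
    and 1/n_clusters when memberships are maximally fuzzy.
    """
    u = np.asarray(u, dtype=float)
    return float(np.mean(np.sum(u**2, axis=0)))

crisp = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])   # every sample fully in one cluster -> FPC = 1
even = np.full((2, 3), 0.5)           # memberships split evenly -> FPC = 1/2
```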