#!/usr/bin/env python
# coding: utf-8
# %% [markdown]
#
# # Lab session (TD): Introduction to `sklearn`
#
# ## Machine Learning in Python course
#
# <NAME>
#
# Creative Commons CC BY-NC-SA
#
#
# In this session, we will cover the use of `sklearn` models.
#
# To have all the objects/functions you will need available, start
# your code with the following header:
# %%
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs, load_boston, load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
# %% [markdown]
# # Synthetic data generation & linear regression
#
# In what follows, you will draw data at random. To make your
# experiments repeatable, you **must**, before anything else, seed
# your random number generator:
# %%
np.random.seed(0)
# Note that, unless specified otherwise, `sklearn` uses the current state of
# the `numpy` random generator, so by fixing this seed you make the behavior
# of both `numpy` and `sklearn` repeatable for the rest
# of your program.
# 1. Using the
# [`numpy.random`](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.random.html) module,
# generate a matrix `X` of 100
# observations in dimension 1 drawn from a standard normal distribution.
# Also generate a vector `y` such that:
#
# $$\forall i, y_i = \sin(X_i) + \varepsilon_i, \text{ where } \varepsilon_i \sim N(0, 0.1)$$
# %%
X = np.random.randn(100, 1)
y = np.sin(X) + 0.1 * np.random.randn(100, 1)
# 2. Plot these data in a `matplotlib` figure.
# %%
plt.plot(X, y, 'ro');
# 3. You will now estimate the parameters of a linear regression model
# (class [`LinearRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html))
# on these data.
# To do so, the first two steps are to:
#
# a. Create an instance of the `LinearRegression` class and
# b. Call its `fit()` method.
#
# The third step is to obtain the model's predictions ($\hat{y_i}$)
# with its `predict()` method.
# 4. What are the attributes of `LinearRegression` instances?
# What are their values in your case?
# 5. Plot, in a `matplotlib` figure, the data in blue and the corresponding
# predicted values in red.
# %%
model = LinearRegression()
model.fit(X, y)
y_hat = model.predict(X)
plt.plot(X, y, 'b.', X, y_hat, 'r.');  # data in blue, predictions in red, as asked
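For question 4: the parameters estimated by `fit()` are exposed as attributes whose names end with an underscore, per the `sklearn` convention. A minimal sketch on noise-free synthetic data (illustration only, not the exercise data), where the expected values are known:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Noise-free data with known slope 2.0 and intercept 1.0
rng = np.random.RandomState(0)
X = rng.randn(100, 1)
y = 2.0 * X[:, 0] + 1.0

model = LinearRegression()
model.fit(X, y)
# Attributes estimated by fit() end with "_"
print(model.coef_)       # estimated slope(s), here close to [2.0]
print(model.intercept_)  # estimated intercept, here close to 1.0
```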
# # Lasso regression
#
# 6. Generate a matrix `X` of 100
# observations in dimension 10 drawn from a Gaussian distribution with zero
# mean and identity variance-covariance matrix.
# Also generate a vector `y` such that:
#
# $$\forall i, y_i = \sin(X_{i,0}) + \varepsilon_i, \text{ where } \varepsilon_i \sim N(0, 0.1)$$
#
# Have a look at the dimensions of `y`. You should get a column vector
# (i.e. a matrix of shape `(100, 1)`). If that is not the case, you need to
# reshape the output of `numpy.sin` with the
# [`reshape`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.reshape.html) method.
# %%
X = np.random.randn(100, 10)
y = np.sin(X[:, 0]).reshape((100, 1)) + 0.1 * np.random.randn(100, 1)
# 7. Using the
# [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
# function from the `model_selection` module, split your dataset into a
# training set and a test set of equal sizes.
# %%
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
# 8. Using only the training data, estimate the parameters of a
# [`Lasso`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html)
# model (with `alpha=0.2`). Print the estimated parameters. What do you think of them?
# %%
model = Lasso(alpha=0.2)
model.fit(X_train, y_train)
y_hat = model.predict(X_test)
plt.figure()
plt.plot(X_test[:, 0], y_test, 'r.')
plt.plot(X_test[:, 0], y_hat, 'b.')
print(model.coef_)
# 9. Use one of the two models seen above on the
# [_Boston Housing_](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html) dataset.
# (Note: `load_boston` was removed in scikit-learn 1.2; with recent versions, substitute another regression dataset.)
# 10. For a few instances, look at the gap between the price predicted by the model and the actual price.
# Print the resulting mean squared error ([_mean squared error_](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html)).
# %%
boston = load_boston()
X = boston.data
y = boston.target
X_tr = StandardScaler().fit_transform(X)
model = Lasso(alpha=0.2)
model.fit(X_tr, y)
print(model.sparse_coef_)
print(model.score(X_tr, y))
# %%
plt.figure()
plt.plot(y, model.predict(X_tr), 'r.');
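Question 10 asks for `sklearn.metrics.mean_squared_error` explicitly. Since `load_boston` was removed in scikit-learn 1.2, the sketch below uses synthetic regression data as a stand-in for the housing data; the workflow (standardize, fit a Lasso, compare predictions to targets) is the same:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the housing data
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -2.0]) + 0.1 * rng.randn(200)

X_tr = StandardScaler().fit_transform(X)
model = Lasso(alpha=0.2).fit(X_tr, y)
y_hat = model.predict(X_tr)

# Per-instance gaps between predicted and actual values (question 10)
print((y - y_hat)[:5])
# Mean squared error over the whole set
mse = mean_squared_error(y, y_hat)
print(mse)
```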
# # Classification: preparing the data
#
# We will now work on an ultra-classic machine learning dataset:
# the "Iris" dataset. It is bundled with `sklearn` so that it can
# be used easily.
#
# 11. Load this dataset with the [`load_iris`](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) function from the
# `sklearn.datasets` module. Store the predictors in a matrix
# `X` and the classes to predict in a vector `y`. What are the dimensions
# of `X`?
# %%
iris = load_iris()
X = iris.data
y = iris.target
# 12. Split this dataset into a training set and a test set of equal
# sizes, and make sure each of your variables is standardized
# (zero mean, unit variance).
# %%
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
scaler = StandardScaler()
scaler.fit(X_train)
X_train_tr = scaler.transform(X_train)
X_test_tr = scaler.transform(X_test)
# # The `SVC` model (_Support Vector Classifier_)
#
# 13. Fit a linear SVM model (class [`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) in `sklearn`) to your
# problem.
# 14. Evaluate its performance on your test set with the
# [`accuracy_score`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) function from the `sklearn.metrics` module.
# %%
model = SVC(kernel="linear", C=1.)
model.fit(X_train_tr, y_train)
print(model.score(X_train_tr, y_train))
print(model.score(X_test_tr, y_test))
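Question 14 asks for `accuracy_score` explicitly, whereas `model.score()` is the equivalent shortcut. A self-contained sketch using `accuracy_score` with the same split and scaling as above:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
scaler = StandardScaler().fit(X_train)

model = SVC(kernel="linear", C=1.).fit(scaler.transform(X_train), y_train)
y_pred = model.predict(scaler.transform(X_test))
acc = accuracy_score(y_test, y_pred)
print(acc)  # fraction of correctly classified test instances
```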
# 15. Do the same with a Gaussian-kernel SVM. Vary the value of this
# kernel's hyperparameter and observe how the classification accuracy
# evolves.
# %%
for gamma in [0.1, 1.0, 10.]:
model = SVC(kernel="rbf", C=1., gamma=gamma)
model.fit(X_train_tr, y_train)
print(model.score(X_test_tr, y_test))
# # Dimensionality reduction
#
# In what follows, we want to visualize our data to better understand
# the output of our machine learning algorithms.
#
# 16. To do so, start by projecting the data into a two-dimensional space
# with a PCA (class [`PCA`](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) in `sklearn`). Note that in this case, to map our data onto the first PCA axes, we will use a `transform()` method rather than a `predict()` method.
#
# The following function is provided to visualize your data:
# %%
def plot_decision(X, y=None, model=None):
if model is not None:
xx, yy = np.meshgrid(np.arange(X[:, 0].min() - .5, X[:, 0].max() + .5, .01),
np.arange(X[:, 1].min() - .5, X[:, 1].max() + .5, .01))
zz_class = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, zz_class, alpha=.2)
# Plot data
if y is None:
y = "k"
plt.scatter(X[:, 0], X[:, 1], c=y, s=40)
# Set figure coordinate limits
plt.xlim(X[:, 0].min() - .5, X[:, 0].max() + .5)
plt.ylim(X[:, 1].min() - .5, X[:, 1].max() + .5)
# 17. With a call to this function (without passing a `model` argument
# for now), visualize your data in the first PCA plane: does it seem
# reasonable to attempt supervised classification in
# this space?
# %%
plot_decision(X, y)
# # Unsupervised classification
#
# 18. Perform a clustering (this time, it is up to you to
# choose/find the right `sklearn` class :) on your data represented
# in the first PCA plane. Visualize the result of this clustering
# by passing your clustering model as the third argument of the
# `plot_decision()` function defined above.
# %%
pca = PCA(n_components=2)
pca.fit(X)
X_trans = pca.transform(X)
plot_decision(X_trans, y)
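To judge whether classifying in the first PCA plane is reasonable (question 17), the share of variance retained by the two axes is informative. A short sketch on the Iris data:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)        # variance share of each axis
print(pca.explained_variance_ratio_.sum())  # total variance kept in the 2D plane
```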
# %%
km = KMeans(n_clusters=3)
km.fit(X_trans)
plt.figure()
plot_decision(X_trans, y=y, model=km)
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], marker="^", c="k", s=50)
import torch
import esm
import re
from tqdm import tqdm
import numpy as np
import pandas as pd
# region Split a string into fixed-length chunks
def cut_text(text, lenth):
    """Split a string into fixed-length chunks.
    Args:
        text ([string]): [input string]
        lenth ([int]): [sub-sequence length]
    Returns:
        [string list]: [list of resulting sub-strings]
    """
    textArr = re.findall('.{' + str(lenth) + '}', text)
    remainder = text[len(textArr) * lenth:]
    if remainder:  # skip the empty tail when len(text) is a multiple of lenth
        textArr.append(remainder)
    return textArr
#endregion
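A self-contained sketch of the chunking logic, with a guard against an empty final chunk (a bare remainder slice appends an empty string whenever the text length divides evenly); `length` here plays the role of `lenth`:

```python
import re

def chunk_string(text, length):
    """Split `text` into `length`-character chunks plus the remainder."""
    chunks = re.findall('.{' + str(length) + '}', text)
    tail = text[len(chunks) * length:]
    if tail:  # only append a non-empty remainder
        chunks.append(tail)
    return chunks

print(chunk_string("MKTAYIA", 3))  # ['MKT', 'AYI', 'A']
print(chunk_string("MKTAYI", 3))   # ['MKT', 'AYI'] -- no empty tail
```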
# region Embed a single sequence
def get_rep_single_seq(seqid, sequence, model, batch_converter, seqthres=1022):
    """Embed a single sequence.
    Args:
        seqid ([string]): [sequence name]
        sequence ([string]): [sequence]
        model ([model]): [embedding model]
        batch_converter ([object]): [batch converter obtained from the model alphabet]
        seqthres (int, optional): [max sequence length]. Defaults to 1022.
    Returns:
        [dict]: [sequence label and per-layer mean representations]
    """
if len(sequence) < seqthres:
data =[(seqid, sequence)]
else:
seqArray = cut_text(sequence, seqthres)
data=[]
for item in seqArray:
data.append((seqid, item))
batch_labels, batch_strs, batch_tokens = batch_converter(data)
if torch.cuda.is_available():
batch_tokens = batch_tokens.to(device="cuda", non_blocking=True)
REP_LAYERS = [0, 32, 33]
MINI_SIZE = len(batch_labels)
with torch.no_grad():
results = model(batch_tokens, repr_layers=REP_LAYERS, return_contacts=False)
representations = {layer: t.to(device="cpu") for layer, t in results["representations"].items()}
result ={}
result["label"] = batch_labels[0]
for i in range(MINI_SIZE):
if i ==0:
result["mean_representations"] = {layer: t[i, 1 : len(batch_strs[0]) + 1].mean(0).clone() for layer, t in representations.items()}
else:
for index, layer in enumerate(REP_LAYERS):
result["mean_representations"][layer] += {layer: t[i, 1 : len(batch_strs[0]) + 1].mean(0).clone() for layer, t in representations.items()}[layer]
for index, layer in enumerate(REP_LAYERS):
result["mean_representations"][layer] = result["mean_representations"][layer] /MINI_SIZE
return result
#endregion
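The per-chunk averaging above (accumulate each chunk's per-token mean embedding, then divide by the number of chunks) can be sketched without `torch` or `esm`, using hypothetical random embeddings:

```python
import numpy as np

# Hypothetical per-chunk token embeddings: 3 chunks x 5 tokens x dim 4
rng = np.random.RandomState(0)
chunks = [rng.randn(5, 4) for _ in range(3)]

# Accumulate each chunk's per-token mean, then divide by the number of
# chunks, mirroring the loop over MINI_SIZE in get_rep_single_seq
acc = np.zeros(4)
for c in chunks:
    acc += c.mean(axis=0)
acc /= len(chunks)

expected = np.mean([c.mean(axis=0) for c in chunks], axis=0)
print(np.allclose(acc, expected))  # the two formulations agree
```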
# region Embed multiple sequences
def get_rep_multi_sequence(sequences, model='esm_msa1b_t12_100M_UR50S', repr_layers=[0, 32, 33], seqthres=1022):
    """Embed multiple sequences.
    Note: the `model` and `repr_layers` arguments are currently unused;
    the ESM-1b model and layers [0, 32, 33] are hard-coded below.
    Args:
        sequences ([DataFrame]): [sequence info]
        seqthres (int, optional): [max sequence length]. Defaults to 1022.
    Returns:
        [DataFrame]: [final_rep0, final_rep32, final_rep33]
    """
final_label_list = []
final_rep0 =[]
final_rep32 =[]
final_rep33 =[]
model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
if torch.cuda.is_available():
model = model.cuda()
print("Transferred model to GPU")
for i in tqdm(range(len(sequences))):
apd = get_rep_single_seq(
seqid = sequences.iloc[i].id,
sequence=sequences.iloc[i].seq,
model=model,
batch_converter=batch_converter,
seqthres=seqthres)
final_label_list.append(np.array(apd['label']))
final_rep0.append(np.array(apd['mean_representations'][0]))
final_rep32.append(np.array(apd['mean_representations'][32]))
final_rep33.append(np.array(apd['mean_representations'][33]))
final_rep0 = pd.DataFrame(final_rep0)
final_rep32 = pd.DataFrame(final_rep32)
final_rep33 = pd.DataFrame(final_rep33)
final_rep0.insert(loc=0, column='id', value=np.array(final_label_list).flatten())
final_rep32.insert(loc=0, column='id', value=np.array(final_label_list).flatten())
final_rep33.insert(loc=0, column='id', value=np.array(final_label_list).flatten())
col_name = ['id']+ ['f'+str(i) for i in range (1,final_rep0.shape[1])]
final_rep0.columns = col_name
final_rep32.columns = col_name
final_rep33.columns = col_name
return final_rep0, final_rep32, final_rep33
#endregion
if __name__ =='__main__':
SEQTHRES = 1022
RUNMODEL = { 'ESM-1b' :'esm1b_t33_650M_UR50S',
'ESM-MSA-1b' :'esm_msa1b_t12_100M_UR50S'
}
    # NB: `cfg` (providing DATADIR) is assumed to be a project config module imported elsewhere
    train = pd.read_feather(cfg.DATADIR + 'train.feather').iloc[:, :6]
    test = pd.read_feather(cfg.DATADIR + 'test.feather').iloc[:, :6]
rep0, rep32, rep33 = get_rep_multi_sequence(sequences=train, model=RUNMODEL.get('ESM-MSA-1b'),seqthres=SEQTHRES)
rep0.to_feather(cfg.DATADIR + 'train_rep0.feather')
rep32.to_feather(cfg.DATADIR + 'train_rep32.feather')
rep33.to_feather(cfg.DATADIR + 'train_rep33.feather')
    print('Embedding Success!')
from mathutils import MathUtils
import os
import numpy as np
import re
from glogpy.dynamics_job import dynamics_job as dj
class ParseLogAll():
def parse(txt, step_lim=None):
        sections = re.split(r"\*{4} Time.{1,}\d{1,}\.\d{1,}.{1,}\*{4}", txt)
res = []
nsteps = 0
for s in sections[1:]:
if step_lim!=None:
if nsteps == step_lim: break
try:
data = dj(s.strip())
data = data.parse()
res.append(data)
            except: raise Exception(f'An error occurred parsing step {nsteps}: {txt}')
nsteps += 1
return res, nsteps
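The section-splitting pattern used in `parse` can be checked on a toy logall excerpt (the `**** Time ... ****` header layout below is assumed from the regex itself, not from real output):

```python
import re

log = """header line
**** Time    0.00 fs ****
step zero data
**** Time    0.50 fs ****
step one data
"""
# Same pattern as ParseLogAll.parse, written as a raw string
sections = re.split(r"\*{4} Time.{1,}\d{1,}\.\d{1,}.{1,}\*{4}", log)
# sections[0] is everything before the first header; each later
# entry is one timestep block
print(len(sections) - 1)  # number of timestep blocks found
```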
def I_ImportLogalls(basedir, ngwps, gwp_dir_fmt='gwp{}_V1', fname='gwp{}_V1_dd_data_nm.logall',
step_lim=None, print_steps=True,
quantities=['xyz', 'ci', 'csf', 'mq', 'sd', 'an', 'am']):
# Quantities options are
# xyz = geometries
# ci = CI Coeff
# csf = CSF Coeff
# mq = Mulliken Q #TODO Split into sum/unsum?
# sd = Spin density #TODO Split into sum/unsum?
# dp = Dipole moment
# an = Atom numbers
# am = Atom masses
# fo = Forces
# maxf = max + rms force
# case = CASSCF Energy
# casde = CASSCF DE
steps = None
datax = []
for i in range(1, ngwps+1):
if print_steps: print(f'Parsing GWP {i}/{ngwps}')
# Plus one is to compensate for weirdness of python range()
# We want to loop over 1,2,3 ... ngwp-1, ngwp
loagallf = os.path.join(basedir, gwp_dir_fmt.format(i), fname.format(i))
assert(os.path.exists(loagallf))
with open(loagallf, 'r') as f:
data = f.read()
parsed_data, nsteps = ParseLogAll.parse(data, step_lim = step_lim)
# Make sure all GWP loagalls contain same num steps
        if steps is None:
steps = nsteps
else:
assert(steps == nsteps)
datax.append(parsed_data)
assert(len(datax) == ngwps) # Quick sanity check (GWP in = GWP out)
results = {}
results['steps'] = steps
# First pull constants
if 'an' in quantities:
results['atomnos'] = datax[0][0]['atomnos']
if 'am' in quantities:
results['atommasses'] = datax[0][0]['atommasses']
# Work way through scalar data to gen [GWP x Step]
if 'ci' in quantities:
results['adiabats'] = np.zeros([ngwps, steps, len(datax[0][0]['adiabats'])], dtype=complex)
if 'csf' in quantities:
results['diabats'] = np.zeros([ngwps, steps, len(datax[0][0]['diabats'])], dtype=complex)
if 'maxf' in quantities:
results['maxf'] = np.zeros([ngwps, steps])
results['rmsf'] = np.zeros([ngwps, steps])
# Vector quantities [GWP x Step x Dim]
if 'xyz' in quantities:
results['geomx'] = np.zeros([ngwps, steps, len(datax[0][0]['geom_init']), 3])
if 'fo' in quantities:
results['forces'] = np.zeros((ngwps, steps, len(datax[0][0]['forces']), 3))
if 'dp' in quantities:
results['dipolemom'] = np.zeros((ngwps, steps, 3))
if 'casde' in quantities:
results['casde'] = np.zeros((ngwps, steps))
if 'case' in quantities:
results['case'] = np.zeros((ngwps, steps))
if 'mq' in quantities:
results['mullikensum'] = np.zeros([ngwps,steps, len(datax[0][0]['mulliken_sum'])])
results['mullikenmap'] = [int(i) for i in list(datax[0][0]['mulliken_sum'].keys())]
if 'sd' in quantities:
results['spindensum'] = np.zeros([ngwps,steps, len(datax[0][0]['spinden_sum'])])
results['spindenmap'] = [int(i) for i in list(datax[0][0]['spinden_sum'].keys())]
for i, gwp in enumerate(datax):
for j, ts in enumerate(gwp):
if 'ci' in quantities:
results['adiabats'][i,j] = np.array(MathUtils.dict_to_list(ts['adiabats']))
if 'csf' in quantities:
results['diabats'][i,j] = np.array(MathUtils.dict_to_list(ts['diabats']))
if 'xyz' in quantities:
gtemp = MathUtils.dict_to_list(ts['geom_init'])
gtemp = [x[1] for x in gtemp]
results['geomx'][i,j] = np.array(gtemp)
if 'fo' in quantities:
ftemp = MathUtils.dict_to_list(ts['forces'])
results['forces'][i,j] = np.array(ftemp)
if 'dp' in quantities:
results['dipolemom'][i,j] = np.array(ts['dipole'][0])
if 'casde' in quantities:
results['casde'][i,j] = ts['casde']
if 'case' in quantities:
results['case'][i,j] = ts['case']
if 'mq' in quantities:
for atomidx, mullsum in ts['mulliken_sum'].items():
results['mullikensum'][i,j,results['mullikenmap'].index(atomidx)] = mullsum
if 'sd' in quantities:
for atomidx, sdsum in ts['spinden_sum'].items():
results['spindensum'][i,j,results['spindenmap'].index(atomidx)] = sdsum
if 'maxf' in quantities:
results['maxf'][i,j] = ts['maxforce']
results['rmsf'][i,j] = ts['rmsforce']
    return results
""" General functions that can be used by multiple modules
"""
import numpy as np
import scipy.optimize as spo
import logging
logger = logging.getLogger(__name__)
def solve_root(func, args=(), method="bisect", x0=None, bounds=None, options={}):
    """
    Set up and dispatch a root-finding or minimization calculation.

    Parameters
    ----------
    func : function
        Function whose root (or minimum) is sought. Can be solved with any of the following scipy methods: "brentq", "least_squares", "TNC", "L-BFGS-B", "SLSQP", 'hybr', 'lm', 'linearmixing', 'diagbroyden', 'excitingmixing', 'krylov', 'df-sane', 'anderson', 'hybr_broyden1', 'hybr_broyden2', 'broyden1', 'broyden2', 'bisect'.
    args : tuple, Optional, default=()
        Additional arguments passed to ``func``
    method : str, Optional, default="bisect"
        Method used to solve the problem
    x0 : float, Optional, default=None
        Initial guess for the parameter to be optimized
    bounds : tuple, Optional, default=None
        Parameter boundaries, one ``(min, max)`` pair per parameter
    options : dict, Optional, default={}
        Options passed through to the scipy method

    Returns
    -------
    sol : object
        Result returned by the chosen scipy solver
    """
if method not in [
"brentq",
"least_squares",
"TNC",
"L-BFGS-B",
"SLSQP",
"hybr",
"lm",
"linearmixing",
"diagbroyden",
"excitingmixing",
"krylov",
"df-sane",
"anderson",
"hybr_broyden1",
"hybr_broyden2",
"broyden1",
"broyden2",
"bisect",
]:
raise ValueError("Optimization method, {}, not supported.".format(method))
if x0 is None:
logger.debug("Initial guess in optimization not provided")
if np.any(bounds is None):
logger.debug("Optimization bounds not provided")
if x0 is None and method in [
"broyden1",
"broyden2",
"anderson",
"hybr",
"lm",
"linearmixing",
"diagbroyden",
"excitingmixing",
"krylov",
"df-sane",
]:
if np.any(bounds is None):
raise ValueError(
"Optimization method, {}, requires x0. Because bounds were not provided, so problem cannot be solved.".format(
method
)
)
else:
logger.error(
"Optimization method, {}, requires x0, using bisect instead".format(
method
)
)
method = "bisect"
if np.size(x0) > 1 and method in ["brentq", "bisect"]:
logger.error(
"Optimization method, {}, is for scalar functions, using {}".format(
method, "least_squares"
)
)
method = "least_squares"
if (
np.size(x0) == 1
and np.any(bounds is not None)
and np.shape(x0) != np.shape(bounds)[0]
):
bounds = tuple([bounds])
if np.any(bounds is None) and method in ["brentq", "bisect"]:
if x0 is None:
raise ValueError(
"Optimization method, {}, requires bounds. Because x0 was not provided, so problem cannot be solved.".format(
method
)
)
else:
logger.error(
"Optimization method, {}, requires bounds, using hybr".format(method)
)
method = "hybr"
if np.any(bounds is not None):
for bnd in bounds:
if len(bnd) != 2:
raise ValueError("bounds are not of length two")
#################### Root Finding without Boundaries ###################
if method in ["broyden1", "broyden2"]:
outer_dict = {
"fatol": 1e-5,
"maxiter": 25,
"jac_options": {"reduction_method": "simple"},
}
for key, value in options.items():
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
sol = spo.root(func, x0, args=args, method=method, options=outer_dict)
elif method == "anderson":
outer_dict = {"fatol": 1e-5, "maxiter": 25}
for key, value in options.items():
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
sol = spo.root(func, x0, args=args, method=method, options=outer_dict)
elif method in [
"hybr",
"lm",
"linearmixing",
"diagbroyden",
"excitingmixing",
"krylov",
"df-sane",
]:
outer_dict = {}
for key, value in options.items():
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
sol = spo.root(func, x0, args=args, method=method, options=outer_dict)
#################### Minimization Methods with Boundaries ###################
elif method in ["TNC", "L-BFGS-B"]:
outer_dict = {
"gtol": 1e-2 * np.sqrt(np.finfo("float").eps),
"ftol": np.sqrt(np.finfo("float").eps),
}
for key, value in options.items():
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
if len(bounds) == 2:
sol = spo.minimize(
func,
x0,
args=args,
method=method,
bounds=tuple(bounds),
options=outer_dict,
)
else:
sol = spo.minimize(func, x0, args=args, method=method, options=outer_dict)
elif method == "SLSQP":
outer_dict = {}
for key, value in options.items():
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
if len(bounds) == 2:
sol = spo.minimize(
func,
x0,
args=args,
method=method,
bounds=tuple(bounds),
options=outer_dict,
)
else:
sol = spo.minimize(func, x0, args=args, method=method, options=outer_dict)
#################### Root Finding with Boundaries ###################
elif method == "brentq":
outer_dict = {"rtol": 1e-7}
for key, value in options.items():
if key in ["xtol", "rtol", "maxiter", "full_output", "disp"]:
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
sol = spo.brentq(func, bounds[0][0], bounds[0][1], args=args, **outer_dict)
elif method == "least_squares":
outer_dict = {}
for key, value in options.items():
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
bnd_tmp = [[], []]
for bnd in bounds:
bnd_tmp[0].append(bnd[0])
bnd_tmp[1].append(bnd[1])
sol = spo.least_squares(
func, x0, bounds=tuple(bnd_tmp), args=args, **outer_dict
)
elif method == "bisect":
outer_dict = {"maxiter": 100}
for key, value in options.items():
if key in ["xtol", "rtol", "maxiter", "full_output", "disp"]:
outer_dict[key] = value
logger.debug(
"Using the method, {}, with the following options:\n{}".format(
method, outer_dict
)
)
sol = spo.bisect(func, bounds[0][0], bounds[0][1], args=args, **outer_dict)
# Given final P estimate
if method not in ["brentq", "bisect"]:
solution = sol.x
logger.info(
"Optimization terminated successfully: {} {}".format(
sol.success, sol.message
)
)
else:
logger.info("Optimization terminated successfully: {}".format(sol))
solution = sol
return solution
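The bracketed scalar branches above (``brentq``/``bisect``) both require a ``(min, max)`` interval containing a sign change. A minimal standalone sketch of that call pattern, independent of the wrapper above:

```python
import numpy as np
import scipy.optimize as spo

# brentq requires f(a) and f(b) to have opposite signs over the bracket;
# here we find the positive root of x**2 - 2 on [0, 2]
f = lambda x: x**2 - 2.0
root = spo.brentq(f, 0.0, 2.0, rtol=1e-7)
print(np.isclose(root, np.sqrt(2.0)))  # True
```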
def central_difference(x, func, step_size=1e-5, relative=False, args=()):
"""
Take the derivative of a dependent variable calculated with a given function using the central difference method.
Parameters
----------
x : numpy.ndarray
        Independent variable to take the derivative with respect to, using the central difference method. Must be the first input of the function.
    func : function
        Function used to calculate the dependent variable. This function should have a single output.
    step_size : float, Optional, default=1E-5
        Either the step size used in the central difference method, or, if ``relative=True``, a scaling factor so that the step size for each value of x is x * step_size.
    args : tuple, Optional, default=()
        Input arguments passed to ``func`` after ``x``
    relative : bool, Optional, default=False
        If False, the step_size is used directly to calculate the derivative. If True, step_size becomes a scaling factor, where the step size for each value of x becomes step_size * x.
Returns
-------
dydx : numpy.ndarray
Array of derivative of y with respect to x, given an array of independent variables.
"""
if not isiterable(x):
        x = np.array([x])
elif not isinstance(x, np.ndarray):
x = np.array(x)
if relative:
step = x * step_size
if not isiterable(step):
step = np.array([step])
step = np.array(
[2 * np.finfo(float).eps if xx < np.finfo(float).eps else xx for xx in step]
)
else:
step = step_size
y = func(np.append(x + step, x - step), *args)
    lx = len(y) // 2
dydx = (y[:lx] - y[lx:]) / (2.0 * step)
return dydx
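A self-contained sketch of the vectorized central-difference trick used above: ``func`` is evaluated once on the concatenated ``x + h`` and ``x - h`` points, then the two halves are differenced. The helper name ``central_diff`` is illustrative, and the ``relative`` branch is omitted:

```python
import numpy as np

def central_diff(x, func, step_size=1e-5, args=()):
    # single vectorized call at the shifted points, then (f(x+h) - f(x-h)) / (2h)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    y = func(np.append(x + step_size, x - step_size), *args)
    lx = len(y) // 2
    return (y[:lx] - y[lx:]) / (2.0 * step_size)

x = np.linspace(0.0, 1.0, 5)
print(np.allclose(central_diff(x, np.sin), np.cos(x), atol=1e-8))  # d/dx sin(x) = cos(x)
```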
def isiterable(array):
"""
Check if variable is an iterable type with a length (e.g. np.array or list).
Note that this could be tested with ``isinstance(array, Iterable)``, however ``array=np.array(1.0)`` would pass that test and then fail in ``len(array)``.
Parameters
----------
array
        Variable of some type that should be iterable
Returns
-------
isiterable : bool
Will be True if indexing is possible and False if not.
"""
array_tmp = np.array(array, dtype=object)
tmp = np.shape(array_tmp)
if tmp:
isiterable = True
else:
isiterable = False
return isiterable
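A quick illustration of the edge case the docstring mentions: a 0-d numpy array has shape ``()``, which is falsy, so it is correctly reported as non-indexable even though it passes broader iterability checks.

```python
import numpy as np

def isiterable(array):
    # np.shape of a scalar or 0-d array is the empty tuple (), which is falsy
    return bool(np.shape(np.array(array, dtype=object)))

print(isiterable([1.0, 2.0]))     # True
print(isiterable(np.array(1.0)))  # False -- a 0-d array has no length
print(isiterable(5))              # False
```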
def check_length_dict(dictionary, keys, lx=None):
"""
    This function compares the entries, keys, in the provided dictionary to ensure they're the same length.
    All entries in the list ``keys`` will be made into numpy arrays (if present). If a float or array of length one is provided, it will be expanded to the length of the other arrays.
Parameters
----------
dictionary : dict
Dictionary containing all or some of the keywords, ``keys``, of what should be arrays of identical size.
keys : list
Possible keywords representing array entries
lx : int, Optional, default=None
The size that arrays should conform to
Returns
-------
new_dictionary : dict
Dictionary of arrays of identical size.
"""
    if lx is None:
lx_array = []
for key in keys:
if key in dictionary:
tmp = dictionary[key]
if np.shape(tmp):
lx_array.append(len(tmp))
else:
lx_array.append(1)
if not len(lx_array):
raise ValueError(
"None of the provided keys are found in the given dictionary"
)
lx = max(lx_array)
new_dictionary = {}
for key in keys:
if key in dictionary:
tmp = dictionary[key]
if isiterable(tmp):
l_tmp = len(tmp)
if l_tmp == 1:
new_dictionary[key] = np.array([tmp[0] for x in range(lx)], float)
elif l_tmp == lx:
new_dictionary[key] = np.array(tmp, float)
else:
raise ValueError(
"Entry, {}, should be length {}, not {}".format(key, lx, l_tmp)
)
else:
new_dictionary[key] = np.array([tmp for x in range(lx)], float)
return new_dictionary
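A condensed sketch of the broadcasting behavior above, assuming float-convertible entries (the helper name ``broadcast_dict`` is illustrative): scalars and length-1 entries are expanded to the longest array's length, and mismatched lengths raise.

```python
import numpy as np

def broadcast_dict(dictionary, keys, lx=None):
    # find the target length from the longest entry, then expand
    # scalars / length-1 entries to that length
    if lx is None:
        lx = max(len(dictionary[k]) if np.shape(dictionary[k]) else 1
                 for k in keys if k in dictionary)
    out = {}
    for k in keys:
        if k in dictionary:
            v = np.atleast_1d(np.array(dictionary[k], dtype=float))
            if len(v) == 1 and lx > 1:
                v = np.full(lx, v[0])
            elif len(v) != lx:
                raise ValueError(
                    "Entry, {}, should be length {}, not {}".format(k, lx, len(v)))
            out[k] = v
    return out

d = broadcast_dict({"T": 300.0, "P": [1e5, 2e5, 3e5]}, ["T", "P"])
print(d["T"])  # [300. 300. 300.] -- scalar expanded to match the longest array
```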
def set_defaults(dictionary, keys, values, lx=None):
"""
This function checks a dictionary for the given keys, and if it's not there, the appropriate value is added to the dictionary.
Parameters
----------
dictionary : dict
Dictionary of data
keys : list
        Keys that should be present (of the same length as ``values``)
values : list
Default values for the keys that aren't in dictionary
lx : int, Optional, default=None
If not None, and values[i] is a float, the key will be set to an array of length, ``lx``, populated by ``values[i]``
Returns
-------
new_dictionary : dict
Dictionary of arrays of identical size.
"""
new_dictionary = dictionary.copy()
key_iterable = isiterable(keys)
if not isiterable(values):
if key_iterable:
values = np.ones(len(keys)) * values
else:
values = np.array([values])
keys = np.array([keys])
else:
if key_iterable and len(keys) != len(values):
raise ValueError("Length of given keys and values must be equivalent.")
elif not key_iterable:
if len(values) != 1:
raise ValueError(
"Multiple default values for given key, {}, is ambiguous".format(
keys
)
)
else:
keys = [keys]
for i, key in enumerate(keys):
if key not in dictionary:
tmp = values[i]
            if not isiterable(tmp) and lx is not None:
new_dictionary[key] = np.ones(lx) * tmp
else:
new_dictionary[key] = tmp
logger.info("Entry, {}, set to default: {}".format(key, tmp))
return new_dictionary
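A simplified sketch of the defaulting behavior above, ignoring the scalar-to-array broadcasting and single-key handling (the helper name ``fill_defaults`` is illustrative): user-provided entries win, anything missing gets its default.

```python
def fill_defaults(dictionary, keys, values):
    # keep user-provided entries, fill in anything missing with its default
    new_dictionary = dict(dictionary)
    for key, value in zip(keys, values):
        new_dictionary.setdefault(key, value)
    return new_dictionary

opts = fill_defaults({"xtol": 1e-8}, ["xtol", "maxiter"], [1e-6, 100])
print(opts)  # {'xtol': 1e-08, 'maxiter': 100}
```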
import re
import numpy as np
import warnings
import copy
from .utils import is_pos_int, is_non_neg_int, \
is_proportion, is_positive, is_non_negative, \
inherits
class layout:
def __init__(self,
ncol=None,
nrow=None,
byrow=None,
rel_widths=None,
rel_heights=None,
design=None
):
"""
        layout class to store information about the arrangement of patches
        found in `cow.patch`.
Arguments
---------
ncol : integer
            Integer for the number of columns to arrange the patches in.
The default is None (which avoids conflicts if a value for
`design` is provided). If ``ncol`` is None but ``nrow`` is not,
then ``ncol`` will default to the minimum number of columns to
make sure that all patches can be visualized.
nrow : integer
            Integer for the number of rows to arrange the patches in.
The default is None (which avoids conflicts if a value for
``design`` is provided). If ``nrow`` is None but ``ncol`` is not,
then ``nrow`` will default to the minimum number of rows to make
sure that all patches can be visualized.
byrow : boolean
If ``ncol`` and/or ``nrow`` is included, then this boolean
indicates if the patches should be ordered by row (default if
``byrow`` is None or when parameter is ``True``) or by column (if
``byrow`` was ``False``).
design : np.array (float based) or str
Specification of the location of each patch in the arrangement.
Can either be a float numpy array with integers between 0 and
the number of patches to arrange, or a text string that captures
similar ideas to the array approach but uses capital alphabetical
characters (A-Z) to indicate each figure. More information is in
Notes.
        rel_widths : list, np vector or tuple
            Numerical vector of relative column widths. This is not required;
            the default would be ``np.ones(ncol)`` or
            ``np.ones(design.shape[1])``. Note that this is a relative sizing
            and the values are only required to be non-negative, non-zero
            values; for example, ``[1,2]`` would make the first column
            twice as wide as the second column.
        rel_heights : list or tuple
            Numerical vector of relative row heights. This is not required;
            the default would be ``np.ones(nrow)`` or
            ``np.ones(design.shape[0])``. Note that this is a relative sizing
            and the values are only required to be non-negative, non-zero
            values; for example, ``[1,2]`` would make the first row twice
            as tall as the second row.
Notes
-----
*Design*
The ``design`` parameter expects specific input.
1. If the ``design`` is input as a numpy array, we expect it to have
integers only (0 to # patches-1). It is allowed to have ``np.nan``
values if certain "squares" of the layout are not covered by others
(the covering is defined by the value ordering). Note that we won't
check for overlap and ``np.nan`` is not enforced if another patches'
relative (min-x,min-y) and (max-x, max-y) define a box over that
``np.nan``'s area.
An example of a design of the numpy array form could look like
>>> my_np_design = np.array([[1,1,2],
... [3,3,2],
... [3,3,np.nan]])
2. if the ``design`` parameter takes in a string, we expect it to have
a structure such that each line (pre ``\\\\n``) contains the same number
of characters, and these characters must come from the first
(number of patches) capital alphabetical characters or the ``\#`` or
``.`` sign to indicate an empty square. Similar arguments w.r.t.
overlap and the lack of real enforcement for empty squares applies
(as in 1.).
An example of a design of the string form could look like
>>> my_str_design = \"\"\"
... AAB
... CCB
... CC\#
... \"\"\"
or
>>> my_str_design = \"\"\"
... AAB
... CCB
... CC.
... \"\"\"
See the `Layout guide`_ for more detailed examples of functionality.
.. _Layout guide: https://benjaminleroy.github.io/cowpatch/guides/Layout.html
*Similarities to our `R` cousins:*
This layout function is similar to `patchwork\:\:plot_layout <https://patchwork.data-imaginist.com/reference/plot_layout.html>`_
(with a special node to ``design`` parameter) and helps perform similar
ideas to `gridExtra\:\:arrangeGrob <https://cran.r-project.org/web/packages/gridExtra/vignettes/arrangeGrob.html>`_'s
``layout_matrix`` parameter, and `cowplot\:\:plot_grid <https://wilkelab.org/cowplot/reference/plot_grid.html>`_'s
``rel_widths`` and ``rel_heights`` parameters.
Examples
--------
>>> # Necessary libraries for example
>>> import numpy as np
>>> import cowpatch as cow
>>> import plotnine as p9
>>> import plotnine.data as p9_data
>>> g0 = p9.ggplot(p9_data.mpg) +\\
... p9.geom_bar(p9.aes(x="hwy")) +\\
... p9.labs(title = 'Plot 0')
>>> g1 = p9.ggplot(p9_data.mpg) +\\
... p9.geom_point(p9.aes(x="hwy", y = "displ")) +\\
... p9.labs(title = 'Plot 1')
>>> g2 = p9.ggplot(p9_data.mpg) +\\
... p9.geom_point(p9.aes(x="hwy", y = "displ", color="class")) +\\
... p9.labs(title = 'Plot 2')
>>> g3 = p9.ggplot(p9_data.mpg[p9_data.mpg["class"].isin(["compact",
... "suv",
... "pickup"])]) +\\
... p9.geom_histogram(p9.aes(x="hwy"),bins=10) +\\
... p9.facet_wrap("class")
>>> # design matrix
>>> vis_obj = cow.patch(g1,g2,g3)
>>> vis_obj += cow.layout(design = np.array([[0,1],
... [2,2]]))
>>> vis_obj.show()
>>> # design string
>>> vis_obj2 = cow.patch(g1,g2,g3)
>>> vis_obj2 += cow.layout(design = \"\"\"
... AB
... CC
... \"\"\")
>>> vis_obj2.show()
>>> # nrow, ncol, byrow
>>> vis_obj3 = cow.patch(g0,g1,g2,g3)
>>> vis_obj3 += cow.layout(nrow=2, byrow=False)
>>> vis_obj3.show()
>>> # rel_widths/heights
>>> vis_obj = cow.patch(g1,g2,g3)
>>> vis_obj += cow.layout(design = np.array([[0,1],
... [2,2]]),
... rel_widths = np.array([1,2]))
>>> vis_obj.show()
See also
--------
        area : object class that helps ``layout`` define where plots will go
            in the arrangement
        patch : fundamental object class which is combined with ``layout`` to
            define the overall arrangement of plots
"""
if design is not None:
if ncol is not None or nrow is not None:
warnings.warn("ncol and nrow are overridden"+\
" by the design parameter")
if isinstance(design, np.ndarray):
if len(design.shape) == 1:
warnings.warn("design matrix is 1d,"+\
" will be seen as a 1-row design")
nrow, ncol = 1, design.shape[0]
design = design.reshape((1,-1))
else:
nrow, ncol = design.shape
if isinstance(design, str):
# convert design to desirable structure matrix structure
design = self._design_string_to_mat(design)
nrow, ncol = design.shape
if ncol is None:
if rel_widths is not None:
if isinstance(rel_widths, np.ndarray):
ncol = rel_widths.shape[0]
if isinstance(rel_widths, list) or \
isinstance(rel_widths, tuple):
ncol = len(rel_widths)
rel_widths = np.array(rel_widths)
if nrow is None:
if rel_heights is not None:
if isinstance(rel_heights, np.ndarray):
nrow = rel_heights.shape[0]
if isinstance(rel_heights, list) or \
isinstance(rel_heights, tuple):
nrow = len(rel_heights)
                rel_heights = np.array(rel_heights)
if rel_widths is None and rel_heights is None:
assert not (ncol is None and nrow is None), \
"need some parameters to not be none in design initialization"
if rel_widths is None and ncol is not None:
rel_widths = np.ones(ncol)
if rel_heights is None and nrow is not None:
rel_heights = np.ones(nrow)
if rel_heights is not None:
rel_heights = np.array(rel_heights)
if rel_widths is not None:
rel_widths = np.array(rel_widths)
# if design is None:
# if byrow is None or byrow:
# order_str = "C"
# else:
# order_str = "F"
# design = np.arange(ncol*nrow,dtype = int).reshape((nrow, ncol),
# order = order_str)
if design is not None:
byrow = None
# ncol/nrow and rel_widths/rel_heights correct alignment
if ncol is not None and rel_widths is not None:
if ncol != rel_widths.shape[0]:
raise ValueError("ncol (potentially from the design) and "+\
"rel_widths disagree on size of layout")
if nrow is not None and rel_heights is not None:
if nrow != rel_heights.shape[0]:
raise ValueError("nrow (potentially from the design) and "+\
"rel_heights disagree on size of layout")
self.ncol = ncol
self.nrow = nrow
self.__design = design
self.byrow = byrow
self.rel_widths = rel_widths
self.rel_heights = rel_heights
self.num_grobs = self._assess_mat(design)
def _design_string_to_mat(self, design):
"""
Internal function to convert design string into a matrix
Arguments
---------
design : str
design in a string format
Returns
-------
design : np.array integer
design in np.array format
"""
design_clean = re.sub(" *\t*", "", design) # removing spaces and tabs
design_clean = re.sub("^\n*", "", design_clean) # remove leading nl
design_clean = re.sub("\n*$", "", design_clean) # remove following nl
row_info = re.split("\n", design_clean)
ncol_lengths = np.unique([len(x) for x in row_info])
if ncol_lengths.shape != (1,):
raise ValueError("expect all rows in design to have the same "+\
"number of entries, use # for an empty space "+\
"if using a string format.")
        ncol = int(ncol_lengths[0])
nrow = len(re.findall("\n", design)) + 1
design = np.array([[ ord(val)-65
if not np.any([val == x for x in ["#","."]])
else np.nan
for val in r]
for r in row_info])
return design
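A standalone sketch of the string-to-matrix conversion above (``A`` → 0, ``B`` → 1, …, with ``#`` or ``.`` becoming ``np.nan`` for empty cells); the function name here is illustrative:

```python
import re
import numpy as np

def design_string_to_mat(design):
    # strip spaces/tabs and leading/trailing newlines, then map capital
    # letters to integer indices; '#' and '.' mark empty cells (np.nan)
    design = re.sub(r"[ \t]", "", design)
    design = re.sub(r"^\n+|\n+$", "", design)
    return np.array([[np.nan if ch in "#." else float(ord(ch) - ord("A"))
                      for ch in row]
                     for row in design.split("\n")])

mat = design_string_to_mat("""
AAB
CCB
CC#
""")
# mat is [[0, 0, 1], [2, 2, 1], [2, 2, nan]]
```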
def _get_design(self, num_grobs=None):
"""
create a design matrix if not explicit design has been provided
"""
if self.__design is not None:
return self.__design
if num_grobs is None:
if self.num_grobs is None:
raise ValueError("unclear number of grobs in layout...")
else:
num_grobs = self.num_grobs
if self.byrow is None or self.byrow:
order_str = "C"
else:
order_str = "F"
# if only ncol or nrow is defined...
ncol = self.ncol
nrow = self.nrow
if ncol is None:
ncol = int(np.ceil(num_grobs / nrow))
if nrow is None:
nrow = int(np.ceil(num_grobs / ncol))
inner_design = np.arange(ncol*nrow,
dtype = float).reshape((nrow, ncol),
order = order_str)
inner_design[inner_design >= num_grobs] = np.nan
_ = self._assess_mat(inner_design) # should pass since we just built it...
return inner_design
# property
design = property(_get_design)
"""
defines underlying ``design`` attribute (potentially defined relative to a
``cow.patch`` object if certain structure are not extremely specific.
"""
def _assess_mat(self, design):
"""
        Assesses if the design matrix includes at least one box for each patch
        index from 0 to (number of patches - 1). This doesn't assume prior
        knowledge of the number of patches.
Arguments
---------
design : np.array (integer)
design in numpy array format
Returns
-------
int
number of patches expected in the overall matrix.
Raises
------
ValueError
if design matrix doesn't include at least at least 1 box for all
indices between 0 to (number of patches - 1)
"""
if design is None:
return None # to identify later that we don't have a design matrix
unique_vals = np.unique(design)
unique_vals = np.sort(
unique_vals[np.logical_not(np.isnan(unique_vals))])
num_unique = unique_vals.shape[0]
if not np.allclose(unique_vals, np.arange(num_unique)):
raise ValueError("design input requires values starting "+\
"with 0/A and through integer/alphabetical "+\
"value expected for the number of patches "+\
"provided")
return num_unique
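The validity check above boils down to: after dropping ``np.nan``, the sorted unique values must be exactly ``0..n-1``. For example:

```python
import numpy as np

# valid: indices 0, 1, 2 all appear at least once; nan cells are allowed
design = np.array([[0.0, 0.0, 1.0],
                   [2.0, 2.0, np.nan]])
vals = np.unique(design)
vals = np.sort(vals[~np.isnan(vals)])
print(np.allclose(vals, np.arange(vals.shape[0])))  # True -> 3 patches
```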
def _rel_structure(self, num_grobs=None):
"""
provide rel_structure (rel_widths, rel_heights) if missing
Arguments
---------
num_grobs : int
if not None, then this value will be used to understand the number
of grobs to be laid out
Returns
-------
rel_widths : np.array vector
a vector of relative widths of the columns of the layout design
rel_heights : np.array vector
a vector of relative heights of the rows of the layout design
"""
if num_grobs is None:
if not (self.ncol is not None and \
self.nrow is not None) and \
not (self.rel_widths is not None and \
self.rel_heights is not None):
raise ValueError("unclear number of grobs in layout -> "+\
"unable to identify relative width and height")
rel_widths = self.rel_widths
rel_heights = self.rel_heights
ncol = self.ncol
nrow = self.nrow
if rel_widths is not None and ncol is None:
ncol = rel_widths.shape[0]
if rel_heights is not None and nrow is None:
nrow = rel_heights.shape[0]
if ncol is None:
ncol = int(np.ceil(num_grobs/nrow))
if rel_widths is None:
rel_widths = np.ones(ncol)
if nrow is None:
nrow = int(np.ceil(num_grobs/ncol))
if rel_heights is None:
rel_heights = np.ones(nrow)
return rel_widths, rel_heights
def _element_locations(self, width_pt, height_pt, num_grobs=None):
"""
create a list of ``area`` objects associated with the location of
each of the layout's grobs w.r.t. a given points width and height
Arguments
---------
        width_pt : float
            global width (in points) of the full arrangement of patches
        height_pt : float
            global height (in points) of the full arrangement of patches
num_grobs : integer
if not ``None``, then this value will be used to understand the
number of grobs to be laid out
Returns
-------
list
list of ``area`` objects describing the location for each of the
layout's grobs (in the order of the index in the self.design)
"""
if self.num_grobs is None and num_grobs is None:
raise ValueError("unclear number of grobs in layout...")
if self.num_grobs is not None:
if num_grobs is not None and num_grobs != self.num_grobs:
warnings.warn("_element_locations overrides num_grobs "+\
"with self.num_grobs")
num_grobs = self.num_grobs
rel_widths, rel_heights = self._rel_structure(num_grobs=num_grobs)
areas = []
for p_idx in np.arange(num_grobs):
dmat_logic = self._get_design(num_grobs=num_grobs) == p_idx
r_logic = dmat_logic.sum(axis=1) > 0
c_logic = dmat_logic.sum(axis=0) > 0
inner_x_where = np.argwhere(c_logic)
inner_x_left = np.min(inner_x_where)
inner_x_right = np.max(inner_x_where)
inner_width = inner_x_right - inner_x_left + 1
inner_x_where = np.argwhere(r_logic)
inner_y_top = np.min(inner_x_where)
inner_y_bottom = np.max(inner_x_where)
inner_height = inner_y_bottom - inner_y_top + 1
inner_design_area = area(x_left = inner_x_left,
y_top = inner_y_top,
width = inner_width,
height = inner_height,
_type = "design")
areas.append(inner_design_area.pt(rel_widths=rel_widths,
rel_heights=rel_heights,
width_pt=width_pt,
height_pt=height_pt))
return areas
def _yokogaki_ordering(self, num_grobs=None):
"""
        calculates the yokogaki (left to right, top to bottom) ordering
        of the patches
Arguments
---------
num_grobs : integer
if not ``None``, then this value will be used to understand the
number of grobs to be laid out
Returns
-------
numpy array (vector) of integer index of plots in yokogaki ordering
Notes
-----
Yokogaki is a Japanese word that concisely describes the left to right,
        top to bottom writing format. We'd like to thank `stack overflow`_
        for pointing this out.
.. _stack overflow:
https://english.stackexchange.com/questions/81520/is-there-a-word-for-left-to-right-and-top-to-bottom
"""
if self.num_grobs is None and num_grobs is None:
raise ValueError("unclear number of grobs in layout...")
if self.num_grobs is not None:
if num_grobs is not None and num_grobs != self.num_grobs:
warnings.warn("_element_locations overrides num_grobs "+\
"with self.num_grobs")
num_grobs = self.num_grobs
areas = self._element_locations(1,1) # basically getting relative positions (doesn't matter) - nor does it matter about rel_height and width, but ah well
all_x_left = np.array([a.x_left for a in areas])
all_y_top = np.array([a.y_top for a in areas])
index_list = np.arange(num_grobs)
yokogaki_ordering = []
# remember y_tops are w.r.t top axis
for y_val in np.sort(np.unique(all_y_top)):
given_row_logic = all_y_top == y_val
inner_index = index_list[given_row_logic]
inner_x_left = all_x_left[given_row_logic]
row_ids = inner_index[np.argsort(inner_x_left)]
yokogaki_ordering += list(row_ids)
return np.array(yokogaki_ordering)
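The row-then-column sort performed above can also be expressed with ``np.lexsort`` (where the last key is the primary sort key); the positions below are hypothetical:

```python
import numpy as np

# hypothetical relative (x_left, y_top) positions for 4 patches
x_left = np.array([0.5, 0.0, 0.0, 0.5])
y_top = np.array([0.0, 0.0, 0.5, 0.5])
# sort by y_top first (top row first), breaking ties by x_left
order = np.lexsort((x_left, y_top))
print(order)  # [1 0 2 3]
```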
def __hash__(self):
"""
Creates a 'unique' hash for the object to help with identification
Returns
-------
hash integer
"""
if self.num_grobs is None:
design_list = [None]
else:
design_list = list(self.design.ravel())
rw_list = [None]
if self.rel_widths is not None:
rw_list = list(self.rel_widths)
rh_list = [None]
if self.rel_heights is not None:
rh_list = list(self.rel_heights)
info_list = design_list + \
rw_list + rh_list +\
[self.ncol, self.nrow, self.num_grobs]
return abs(hash(tuple(info_list)))
def __str__(self):
return "<layout (%d)>" % self.__hash__()
def __repr__(self):
nrow_str = str(self.nrow)
if self.nrow is None:
nrow_str = "unk"
ncol_str = str(self.ncol)
if self.ncol is None:
ncol_str = "unk"
if self.num_grobs is None:
design_str = "*unk*"
else:
design_str = self.design.__str__()
rw_str = "unk"
if self.rel_widths is not None:
rw_str = self.rel_widths.__str__()
rh_str = "unk"
if self.rel_heights is not None:
rh_str = self.rel_heights.__str__()
out = "design (%s, %s):\n\n"% (nrow_str, ncol_str) +\
design_str +\
"\n\nwidths:\n" +\
rw_str +\
"\nheights:\n" +\
rh_str
return self.__str__() + "\n" + out
def __eq__(self, value):
"""
checks if object is equal to another object (value)
Arguments
---------
value : object
            another object (that may or may not be of the layout class)
Returns
-------
boolean
if current object and other object (value) are equal
"""
# if value is not a layout...
if not inherits(value, layout):
return False
# if __design hasn't been specified on 1 but is on another
if (self.__design is None and value.__design is not None) or\
(self.__design is not None and value.__design is None):
return False
# accounting for lack of __design specification
design_logic = True
if self.__design is not None:
design_logic = np.allclose(self.design,value.design,equal_nan=True)
return design_logic and \
self.ncol == value.ncol and \
self.nrow == value.nrow and \
np.unique(self.rel_heights/value.rel_heights).shape[0] == 1 and \
np.unique(self.rel_widths/value.rel_widths).shape[0] == 1
class area:
def __init__(self,
x_left, y_top,
width, height,
_type):
"""
object that stores information about what area a ``patch`` will fill
Arguments
---------
x_left : float
scalar of where the left-most point of the patch is located (impacted
by the ``_type`` parameter)
y_top : float
scalar of where the top-most point of the patch is located (impacted
by the ``_type`` parameter)
width : float
scalar of the width of the patch (impacted by the ``_type``
parameter)
height : float
scalar of the height of the patch (impacted by the ``_type``
parameter)
_type : str {"design", "relative", "pt"}
describes how the parameters are stored. See Notes for more
information between the options.
Notes
-----
        These objects provide structural information about where in the overall
        arrangement individual plots / sub-arrangements lie.
The ``_type`` parameter informs how to understand the other parameters:
1. "design" means that the values are w.r.t. to a design matrix
relative to the `layout` class, and values are relative to the rows
and columns units.
2. "relative" means the values are defined relative to the full size of
the canvas and taking values between 0-1 (inclusive).
3. "pt" means that values are defined relative to point values
See also
--------
layout : object that incorporates multiple area definitions to define
layouts.
"""
# some structure check:
self._check_info_wrt_type(x_left, y_top, width, height, _type)
self.x_left = x_left
self.y_top = y_top
self.width = width
self.height = height
self._type = _type
def _check_info_wrt_type(self, x_left, y_top, width, height, _type):
"""
some logic checks of inputs relative to ``_type`` parameter
Arguments
---------
x_left : float
scalar of where the left-most point of the patch is located
(impacted by the ``_type`` parameter)
y_top : float
scalar of where the top-most point of the patch is located
(impacted by the ``_type`` parameter)
width : float
scalar of the width of the patch (impacted by the ``_type``
parameter)
height : float
scalar of the height of the patch (impacted by the ``_type``
parameter)
_type : str {"design", "relative", "pt"}
describes how the parameters are stored. Options include
["design", "relative", "pt"]. See class docstring for more info
Raises
------
ValueError
if any of the first four parameters don't make sense with respect
to the ``_type`` parameter
"""
if _type not in ["design", "relative", "pt"]:
raise ValueError("_type parameter not an acceptable option, see"+\
" documentation")
if _type == "design" and \
not np.all([is_non_neg_int(val) for val in [x_left,y_top]] +\
[is_pos_int(val) for val in [width,height]]) :
raise ValueError("with _type=\"design\", x_left and y_top must be"+\
                 " non-negative integers and width and height"+\
                 " positive integers")
elif _type == "relative" and \
not np.all([is_proportion(val) for val in [x_left,y_top,
width,height]] +\
[is_positive(val) for val in [width,height]]):
raise ValueError("with _type=\"relative\", all parameters should"+\
" be between 0 and 1 (inclusive) and width and"+\
" height cannot be 0")
elif _type == "pt" and \
not np.all([is_non_negative(val) for val in [x_left,y_top]] +\
[is_positive(val) for val in [width,height]]):
raise ValueError("with _type=\"pt\", all x_left and y_top should"+\
" be non-negative and width and height should"+\
" be strictly positive")
def _design_to_relative(self, rel_widths, rel_heights):
"""
translates an area object with ``_type`` = "design" to area object
with ``_type`` = "relative".
Arguments
---------
rel_widths : np.array (vector)
list of relative widths of each column of the layout matrix
rel_heights : np.array (vector)
list of relative heights of each row of the layout matrix
Returns
-------
area object
area object of ``_type`` = "relative"
"""
rel_widths = rel_widths/np.sum(rel_widths)
rel_heights = rel_heights/np.sum(rel_heights)
x_left = np.sum(rel_widths[:(self.x_left)])
y_top = np.sum(rel_heights[:(self.y_top)])
width = np.sum(rel_widths[self.x_left:(self.x_left + self.width)])
height = np.sum(rel_heights[self.y_top:(self.y_top + self.height)])
rel_area = area(x_left=x_left,
y_top=y_top,
width=width,
height=height,
_type="relative")
return rel_area
def _relative_to_pt(self, width_pt, height_pt):
"""
translates an area object with ``_type`` = "relative" to area object
with ``_type`` = "pt".
Arguments
---------
width_pt : float
width in points
height_pt : float
height in points
Returns
-------
area object
area object of ``_type`` = "pt"
"""
return area(x_left = self.x_left * width_pt,
y_top = self.y_top * height_pt,
width = self.width * width_pt,
height = self.height * height_pt,
_type = "pt")
def pt(self,
width_pt=None,
height_pt=None,
rel_widths=None,
rel_heights=None
):
"""
Translates area object to ``_type`` = "pt"
Arguments
---------
width_pt : float
width in points (required if ``_type`` is not "pt")
height_pt : float
height in points (required if ``_type`` is not "pt")
rel_widths : np.array (vector)
list of relative widths of each column of the layout matrix
(required if ``_type`` is "design")
rel_heights : np.array (vector)
list of relative heights of each row of the layout matrix
(required if ``_type`` is "design")
Returns
-------
area object
area object of ``_type`` = "pt"
"""
if self._type == "design":
rel_area = self._design_to_relative(rel_widths = rel_widths,
rel_heights = rel_heights)
return rel_area.pt(width_pt = width_pt, height_pt = height_pt)
elif self._type == "relative":
return self._relative_to_pt(width_pt = width_pt,
height_pt = height_pt)
elif self._type == "pt":
return copy.deepcopy(self)
else:
raise ValueError("_type attributes altered to a non-acceptable"+\
" value")
def _hash(self):
"""
replacement function for ``__hash__`` due to equality conflicts
Notes
-----
required since we defined ``__eq__`` and this conflicts with the
standard ``__hash__``
"""
return hash((self.x_left, self.y_top,
self.width, self.height,
self._type))
def __str__(self):
return "<area (%d)>" % self._hash()
def __repr__(self):
out = "_type: " + self._type +\
"\n\nx_left: " +\
self.x_left.__str__() +\
"\ny_top: " +\
self.y_top.__str__() +\
"\nwidth: " +\
self.width.__str__() +\
"\nheight: " +\
self.height.__str__()
return self.__str__() + "\n" + out
def __eq__(self, value):
return type(self) == type(value) and \
np.allclose(np.array([self.x_left, self.y_top,
self.width, self.height]),
np.array([value.x_left, value.y_top,
value.width, value.height])) and \
self._type == value._type
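The design-to-relative conversion in `_design_to_relative` is just normalized cumulative sums over the row/column weights. A minimal numpy-only sketch of that arithmetic (the standalone `design_to_relative` function is hypothetical, written to mirror the method above):

```python
import numpy as np

def design_to_relative(x_left, y_top, width, height, rel_widths, rel_heights):
    # Normalize the per-column / per-row weights so they sum to 1.
    rel_widths = np.asarray(rel_widths, dtype=float) / np.sum(rel_widths)
    rel_heights = np.asarray(rel_heights, dtype=float) / np.sum(rel_heights)
    # The relative offset is the cumulative weight of the rows/columns
    # before the patch; the relative size is the weight the patch spans.
    return (np.sum(rel_widths[:x_left]),
            np.sum(rel_heights[:y_top]),
            np.sum(rel_widths[x_left:x_left + width]),
            np.sum(rel_heights[y_top:y_top + height]))

# A patch in column 1, row 0 of a 2x2 design with equal weights occupies
# the top-right quarter of the canvas:
assert design_to_relative(1, 0, 1, 1, [1, 1], [1, 1]) == (0.5, 0.0, 0.5, 0.5)
```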
import os
import fnmatch
import numpy as np
import csv
import sys
import pandas as pd
import re
from sklearn import preprocessing
from scipy import signal
from scipy import stats
def readinputclustering(filename, preprocessingmode):
    df = pd.read_csv(filename, header=None)
    X = df.iloc[:, 1:].astype(float)  # .ix was removed from pandas
    X.fillna(0, inplace=True)
    labels = df.iloc[:, 0]
    if preprocessingmode == 'log':  # was comparing the preprocessing *module* to 'log'
        # log-transform the dataframe because values differ by orders of magnitude
        X = np.log(X)
        X[~np.isfinite(X)] = 0
    else:
        min_max_scaler = preprocessing.MinMaxScaler()
        x_scaled = min_max_scaler.fit_transform(X)
        X = pd.DataFrame(x_scaled)
return X, labels
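The non-log branch above rescales every column to [0, 1]. The same result can be computed with plain numpy, which makes the formula `MinMaxScaler` applies explicit (the `minmax_scale` helper below is a sketch, not part of this module):

```python
import numpy as np

def minmax_scale(X):
    # Column-wise (X - min) / (max - min), the formula sklearn's
    # MinMaxScaler applies with the default feature_range=(0, 1).
    X = np.asarray(X, dtype=float)
    lo = X.min(axis=0)
    span = X.max(axis=0) - lo
    span[span == 0] = 1.0  # avoid division by zero for constant columns
    return (X - lo) / span

X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
assert np.allclose(minmax_scale(X), [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]])
```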
# reads input for SAX time series discretization
def readMStimedomaintransient(filename):
    MS = pd.read_csv(filename, sep=',')
return MS
def crawl_folders(path, extensions):
directories = []
for dirpath, dirnames, files in os.walk(path):
for directory in dirnames:
            if directory not in ('rawdata', 'spectrograms', 'spectrogrampics', 'results'):
p = os.path.join(dirpath, directory)
directories.append(p)
return directories
# find files path, reads csv files only unless specified differently in extensions
def find_files(path, extensions):
# Allow both with ".csv" and without "csv" to be used for extensions
extensions = [e.replace(".", "") for e in extensions]
for dirpath, dirnames, files in os.walk(path):
for extension in extensions:
for f in fnmatch.filter(files, "*.%s" % extension):
p = os.path.join(dirpath, f)
yield (p, extension)
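`find_files` relies on shell-style pattern matching from the stdlib `fnmatch` module; a small demonstration of the two tricks it uses:

```python
import fnmatch

# fnmatch.filter keeps only the names matching a shell-style pattern,
# which is how find_files() selects files by extension:
files = ["a.csv", "b.txt", "c.csv", "notes.md"]
assert fnmatch.filter(files, "*.csv") == ["a.csv", "c.csv"]

# Stripping the dot lets callers pass ".csv" or "csv" interchangeably:
assert ".csv".replace(".", "") == "csv"
```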
# maybe specify a limit parameter such that optionally
# only part of the spectrogram is examined for now leave whole
# spectrogram
# to make comparisons between m/z series normalization within and between samples is necessary
def read(filename):
spectrogram = pd.read_csv(filename, sep =',')
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(spectrogram)
arr2D = x_scaled
return arr2D
# read in original freq, mass, intensity data raw data from Basem
def readdataframe(filename):
sampleID = os.path.basename(filename)
originaldata = pd.read_table(filename, sep=',', header=0)
colnames = pd.Series(['Index', 'mass', 'freq', 'intensity'])
originaldata.columns = colnames
return originaldata, sampleID
def readdataframecsvrelativefreq(filename, lowerbound, upperbound):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'freq', 'mass', sampleID])
originaldata.columns = colnames
    mask = originaldata['mass'].between(lowerbound, upperbound, inclusive='both')
    newdata = originaldata.loc[mask].copy()
    del newdata['mass']
    del newdata['Index']
return newdata, sampleID
def readdataframecsvrelativefreqminmax(filename, sampling):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'freq', 'mass', sampleID])
originaldata.columns = colnames
return originaldata, sampleID
# only mass intensity space
def readdataframecsvmasssampling(filename, sampling):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'mass', 'freq', sampleID])
originaldata.columns = colnames
del originaldata['freq']
del originaldata['Index']
del originaldata[sampleID]
originaldata.columns = [sampleID]
dataarray = np.array(originaldata.T)[0]
originaldataresampled = pd.Series(signal.resample(dataarray, sampling))
return originaldataresampled, sampleID
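`signal.resample` above resamples the intensity series to a fixed length in the Fourier domain. A numpy-only stand-in with the same output length, using linear interpolation instead (the `resample_linear` helper is a sketch, not part of this module):

```python
import numpy as np

def resample_linear(y, n):
    # Resample a 1-D series to a fixed length n by linear interpolation
    # over a common [0, 1] axis.
    x_old = np.linspace(0.0, 1.0, num=len(y))
    x_new = np.linspace(0.0, 1.0, num=n)
    return np.interp(x_new, x_old, y)

y = np.array([0.0, 1.0, 2.0, 3.0])
assert len(resample_linear(y, 7)) == 7
# Linear data is reproduced exactly:
assert np.allclose(resample_linear(y, 7), [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
```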
# only mass intensity space
def readdataframecsvmass(filename):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'mass', 'freq', sampleID])
originaldata.columns = colnames
del originaldata['freq']
del originaldata['Index']
del originaldata[sampleID]
originaldata.columns = [sampleID]
return originaldata, sampleID
# only mass intensity space
def readdataframecsvmassminmax(filename):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'mass', 'freq', sampleID])
originaldata.columns = colnames
del originaldata['freq']
del originaldata['Index']
del originaldata[sampleID]
originaldata.columns = [sampleID]
return originaldata, sampleID
def readdataframecsvfreqsampling(filename, sampling):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'mass', 'freq', sampleID])
originaldata.columns = colnames
del originaldata['mass']
del originaldata['Index']
del originaldata[sampleID]
originaldata.columns = [sampleID]
dataarray = np.array(originaldata.T)[0]
originaldataresampled = pd.Series(signal.resample(dataarray, sampling))
return originaldataresampled, sampleID
def readdataframecsvfreq(filename):
sampleID = os.path.basename(filename).rstrip('.csv')
originaldata = pd.read_table(filename, sep=',', header=0)
# sample ID as placeholder variable for Intensity
colnames = pd.Series(['Index', 'mass', 'freq', sampleID])
originaldata.columns = colnames
del originaldata['mass']
del originaldata['Index']
del originaldata[sampleID]
originaldata.columns = [sampleID]
return originaldata, sampleID
import nibabel as ni
from nilearn import image
import numpy as np
import sys
import os
from subprocess import call
from scipy import ndimage
import shutil
def getinputs():
inputs = sys.argv
inputs.pop(0) # remove first arg (name of file)
t1 = '' # T1w image
t2 = '' # T2w image
m = '' # lesion (typically drawn on T2w image)
f = '0.4' # 0.5 is default, I find it too much for pathological brains
d = False
t1healthy = ''
outputDir = ''
for i, v in enumerate(inputs):
if v == '-t1les':
t1 = inputs[i+1]
if v == '-t2les':
t2 = inputs[i+1]
if v == '-m': # mask
m = inputs[i+1]
if v == '-f': # bet fractional intensity value
f = inputs[i+1]
if v == '-d':
d = True
if v == '-t1healthy':
t1healthy = inputs[i+1]
if v == '-outpth':
outputDir = inputs[i+1]
return t1, t2, m, t1healthy, f, d, outputDir
def fileparts(fnm):
    pth_ = os.path.dirname(fnm)
    nm_, e_ = os.path.splitext(os.path.basename(fnm))
    e2_ = ''
    if ".nii" in nm_:  # handle double extensions such as .nii.gz
        nm_, e2_ = os.path.splitext(nm_)
    ext_ = e2_ + e_
    return pth_, nm_, ext_
def getfsldir():
fsldir = os.getenv("FSLDIR")
return fsldir
def getbet():
fsldir = getfsldir()
betpth = os.path.join(fsldir, "bin", "bet")
return betpth
def getconvertxfm():
fsldir = getfsldir()
xfmpth = os.path.join(fsldir, "bin", "convert_xfm")
return xfmpth
def getrfov():
fsldir = getfsldir()
rfovpth = os.path.join(fsldir, "bin", "robustfov")
return rfovpth
def getfslmaths():
fsldir = getfsldir()
fslmathspth = os.path.join(fsldir, "bin", "fslmaths")
return fslmathspth
def getflirt():
fsldir = getfsldir()
flirtpth = os.path.join(fsldir, 'bin', 'flirt')
return flirtpth
def opennii(fname):
    obj = ni.load(fname)
    imgdata = obj.get_fdata()  # get_data() is deprecated in nibabel
    return obj, imgdata
def mirror(imgmat):
mirimg = np.flipud(imgmat)
return mirimg
def makenii(obj, imgdata):
outobj = ni.Nifti1Image(imgdata, obj.affine, obj.header)
return outobj
def savenii(obj, name):
obj.to_filename(name)
return name
def doflirt(inputFile, referenceFile, dof):
# dof = "6" # rigid body only
ipth, inm, ie = fileparts(inputFile)
rpth, rnm, re = fileparts(referenceFile)
outmat = os.path.join(ipth, "r" + inm + ".mat")
outimg = os.path.join(ipth, "r" + inm + ie)
cmd = [
flirt,
"-dof",
dof,
"-in",
inputFile,
"-ref",
referenceFile,
"-out",
outimg,
"-omat",
outmat
]
print(cmd)
if os.path.exists(outimg):
return outimg, outmat
call(cmd)
return outimg, outmat
def applyflirt(inputFile, referenceFile, referenceMat):
ipth, inm, ie = fileparts(inputFile)
rpth, rnm, re = fileparts(referenceFile)
outimg = os.path.join(ipth, "r" + inm + ie)
cmd = [
flirt,
"-in",
inputFile,
"-ref",
referenceFile,
"-applyxfm",
"-init",
referenceMat,
"-out",
outimg
]
print(cmd)
if os.path.exists(outimg):
return outimg
call(cmd)
return outimg
def threshholdMask(fnm, t):
I, img = opennii(fnm)
i = img.flatten()
i[i >= t] = 1
i[i < t] = 0
o = np.reshape(i, img.shape)
newnii = makenii(I, o)
pth, nm, e = fileparts(fnm)
onm = savenii(newnii, os.path.join(pth, "t" + nm + e))
return onm
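`threshholdMask` exists because resampling a binary lesion mask with flirt leaves fractional voxel values at the edges; thresholding at 0.5 restores a clean 0/1 mask. The core operation on a toy array:

```python
import numpy as np

# After interpolation, mask voxels hold fractional values; binarize at 0.5.
m = np.array([0.0, 0.2, 0.5, 0.7, 1.0])
binary = np.where(m >= 0.5, 1.0, 0.0)
assert binary.tolist() == [0.0, 0.0, 1.0, 1.0, 1.0]
```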
def healimg(img, msk):
mirimg = mirror(img)
i = img.flatten()
mi = mirimg.flatten()
m = msk.flatten()
# (img(:) .* (1.0-imgLesion(:)))+ (imgFlip(:) .* imgLesion(:))
himg = (i * (1.0-m)) + (mi * m)
o = np.reshape(himg, img.shape)
return o
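The healing blend in `healimg` is a per-voxel linear mix: outside the lesion (mask 0) the original voxel is kept, inside it (mask 1) the mirrored voxel replaces it, and fractional mask values (from smoothing) blend the two. On a toy 1-D example:

```python
import numpy as np

img = np.array([10.0, 20.0, 30.0])       # "damaged" voxels
mirrored = np.array([30.0, 20.0, 10.0])  # hemisphere-flipped voxels
mask = np.array([0.0, 0.5, 1.0])         # lesion mask, feathered edges

# img*(1-m) + mirrored*m, the same formula used in healimg():
healed = img * (1.0 - mask) + mirrored * mask
assert healed.tolist() == [10.0, 20.0, 10.0]
```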
def deleteTempFiles(files):
for f in files:
print("deleting file: {}".format(f))
os.remove(f)
def getfslreorient():
fsldir = getfsldir()
fslrepth = os.path.join(fsldir, "bin", "fslreorient2std")
return fslrepth
def fslreorient(t1, t2, le, t1h):
fslreorient2std = getfslreorient()
fileList = [t1, t2, le, t1h]
outList = []
for file in fileList:
# make out file
pth, nm, e = fileparts(file)
outFile = os.path.join(pth, "o" + nm + e) # "o" for reoriented file
cmd = [
fslreorient2std,
file,
outFile,
]
outList.append(outFile)
print(cmd)
call(cmd)
return outList[0], outList[1], outList[2], outList[3]
def doBet(fnam, f):
    inObj, inImg = opennii(fnam)
    com = ndimage.center_of_mass(inImg)  # center of mass (currently unused by the bet call)
p, n, e = fileparts(fnam)
    outFile = os.path.join(p, "b" + n + e)  # "b" for brain-extracted file
cmd = [
bet,
fnam,
outFile,
"-f",
f,
"-R",
]
print(cmd)
if os.path.exists(outFile):
return outFile
call(cmd)
return outFile
def insertimg(recipient, donor, donormask):
robj, rimg = opennii(recipient)
dobj, dimg = opennii(donor)
mobj, mimg = opennii(donormask)
rpth, rnm, re = fileparts(recipient)
dpth, dnm, de = fileparts(donor)
dmpth, dmnm, dme = fileparts(donormask)
# (img(:) .* (1.0-imgLesion(:)))+ (imgFlip(:) .* imgLesion(:))
i = rimg.flatten()
mi = dimg.flatten()
m = mimg.flatten()
oimg = (i * (1.0-m)) + (mi * m)
o = np.reshape(oimg, rimg.shape)
#rimg[mimg > 0] = dimg[mimg > 0]
outObj = makenii(robj, o)
outlesObj = makenii(mobj, mimg)
    outname = os.path.join(rpth, rnm + "_" + dnm + re)
    outlesname = os.path.join(rpth, rnm + "_" + dmnm + re)
savenii(outObj, outname)
savenii(outlesObj, outlesname)
return outname, outlesname
def meanScale(inputImg, referenceImg):
inObj, inImg = opennii(inputImg)
refObj, refImg = opennii(referenceImg)
# imgL1 * (mean(imgH1(:))/mean(imgL1(:)))
scaledImg = inImg * (np.mean(refImg)/np.mean(inImg))
outObj = makenii(inObj, scaledImg)
pth, nm, e = fileparts(inputImg)
outName = savenii(outObj, os.path.join(pth, "d" + nm + e))
return outName
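`meanScale` brings the input image onto the reference image's global intensity level: multiplying by `mean(ref)/mean(in)` makes the two means coincide. The arithmetic on toy arrays:

```python
import numpy as np

in_img = np.array([1.0, 2.0, 3.0])     # mean 2.0
ref_img = np.array([10.0, 20.0, 30.0])  # mean 20.0

# Same formula as meanScale(): inImg * (mean(refImg) / mean(inImg))
scaled = in_img * (np.mean(ref_img) / np.mean(in_img))
assert np.isclose(np.mean(scaled), np.mean(ref_img))
assert scaled.tolist() == [10.0, 20.0, 30.0]
```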
def makeLesImg(inputImg, maskImg):
inObj, inImg = opennii(inputImg)
mObj, mImg = opennii(maskImg)
outImg = np.zeros_like(inImg)
outImg[mImg > 0] = inImg[mImg > 0]
outObj = makenii(inObj, outImg)
pth, nm, e = fileparts(inputImg)
outName = savenii(outObj, os.path.join(pth, "lesionData_" + nm + e))
return outName
def getOrigin(fname):
hdr, img = opennii(fname)
print(hdr.affine)
from nibabel.affines import apply_affine
import numpy.linalg as npl
res = apply_affine(npl.inv(hdr.affine), [0, 0, 0])
#M = hdr.affine[:3, :3]
#res = M.dot([0, 0, 0]) + hdr.affine[:3, 3]
return res
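`getOrigin` applies the inverse of the NIfTI affine to the world origin, i.e. it finds the voxel index that maps to world coordinate (0, 0, 0). The same computation in homogeneous coordinates with plain numpy (the affine below is an MNI-like example chosen for illustration):

```python
import numpy as np

def world_origin_voxel(affine):
    # Invert the 4x4 voxel-to-world affine and apply it to the world origin.
    inv = np.linalg.inv(affine)
    return (inv @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

# 2 mm voxels with the world origin at voxel (45, 54, 45):
A = np.array([[-2.0, 0.0, 0.0,   90.0],
              [0.0, 2.0, 0.0, -108.0],
              [0.0, 0.0, 2.0,  -90.0],
              [0.0, 0.0, 0.0,    1.0]])
assert np.allclose(world_origin_voxel(A), [45.0, 54.0, 45.0])
```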
def addToDeleteList(dlist, fname):
dlist.append(fname)
return dlist
def cropZ(fname):
# crop in z dimension to improve bet results
pth, nm, e = fileparts(fname)
outname = os.path.join(pth, "f" + nm + e)
outmname = os.path.join(pth, "f" + nm + ".mat")
cmd = [
rfov,
"-i",
fname,
"-r",
outname,
"-m",
outmname
]
print(cmd)
call(cmd)
return outname, outmname
def concatxfm(amat, bmat):
ptha, nma, ea = fileparts(amat)
pthb, nmb, eb = fileparts(bmat)
outmat = os.path.join(ptha, nma + "_+_" + nmb + ea)
cmd = [
xfm,
"-omat",
outmat,
"-concat",
amat,
bmat
]
print(cmd)
call(cmd)
return outmat
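Concatenating two FLIRT `.mat` files composes two affine transforms, which is a 4x4 matrix product: if A maps space1 to space2 and B maps space2 to space3, then B @ A maps space1 to space3 (the argument order `convert_xfm -concat` expects is the FSL convention; the matrix algebra is the same either way):

```python
import numpy as np

A = np.eye(4); A[:3, 3] = [5.0, 0.0, 0.0]  # translate +5 mm in x
B = np.eye(4); B[0, 0] = 2.0                # scale x by 2

C = B @ A  # apply A first, then B
p = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point at x = 1
assert np.allclose(C @ p, [12.0, 0.0, 0.0, 1.0])  # (1 + 5) * 2 = 12
```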
def moveFile(infile, newfolder):
pth, nm, e = fileparts(infile)
shutil.move(infile, os.path.join(newfolder, nm + e))
def changeDataType(infile, newtype):
fslmaths = getfslmaths()
cmd = [
fslmaths,
infile,
"-add",
"0",
infile,
"-odt",
newtype
]
print(cmd)
call(cmd)
return
dlist = []
# get bet and flirt commands
bet = getbet() # get path to bet command
flirt = getflirt() # get path to flirt command
rfov = getrfov()
xfm = getconvertxfm()
# get inputs to makeLesion.py
T1Name, T2Name, lesName, t1healthyName, f, d, outputDir = getinputs() # parse inputs, return file names
# reorient input images to std fsl space
T1Name, T2Name, lesName, t1healthyName = fslreorient(T1Name, T2Name, lesName, t1healthyName)
dlist = addToDeleteList(dlist, T1Name)
dlist = addToDeleteList(dlist, T2Name)
dlist = addToDeleteList(dlist, lesName)
dlist = addToDeleteList(dlist, t1healthyName)
# crop neck from lesion images
T1Name, cropZmat = cropZ(T1Name)
dlist = addToDeleteList(dlist, T1Name)
dlist = addToDeleteList(dlist, cropZmat)
# register lesionT2 image to lesionT1 image space
rT2, rmat = doflirt(T2Name, T1Name, "6") # register T2 to T1
dlist = addToDeleteList(dlist, rT2)
dlist = addToDeleteList(dlist, rmat)
# apply the previous transform to the lesion mask (usually drawn on T2)
rLesion = applyflirt(lesName, rT2, rmat)
dlist = addToDeleteList(dlist, rLesion)
rLesion = threshholdMask(rLesion, 0.5)
dlist = addToDeleteList(dlist, rLesion)
# crop neck from healthy T1
t1healthyName, zmt = cropZ(t1healthyName)
dlist = addToDeleteList(dlist, zmt)
dlist = addToDeleteList(dlist, t1healthyName)
# open lesion mask image and T1 image
LES, lesimg = opennii(rLesion)
T1, T1img = opennii(T1Name) # open image, return obj and img data
# smooth lesion mask before healing to blend better
LES = image.smooth_img(LES, fwhm=3) # feather edges
lesimg = LES.get_data()
# make healed T1 image (flip undamaged hemisphere voxels into lesion area)
h = healimg(T1img, lesimg)
healed = makenii(T1, h)
pth, nm, e = fileparts(T1Name)
hT1 = savenii(healed, os.path.join(pth, "h" + nm + e))
dlist = addToDeleteList(dlist, hT1)
# bet healed lesion T1
bhnii = doBet(hT1, f)
dlist = addToDeleteList(dlist, bhnii)
# bet HealthyT1
bhealthyT1 = doBet(t1healthyName, f)
dlist = addToDeleteList(dlist, bhealthyT1)
# register brain of healed image to brain of Healthy T1
rbhnii, rbhniimat = doflirt(bhnii, bhealthyT1, "12")
dlist = addToDeleteList(dlist, rbhnii)
dlist = addToDeleteList(dlist, rbhniimat)
# apply flirt to LesLes
rrLesion = applyflirt(rLesion, bhealthyT1, rbhniimat)
dlist = addToDeleteList(dlist, rrLesion)
rT1Name = applyflirt(T1Name, bhealthyT1, rbhniimat)
dlist = addToDeleteList(dlist, rT1Name)
# feather edges of les
LES, lesimg = opennii(rrLesion)
LES = image.smooth_img(LES, fwhm=8) # feather edges
pth, nm, e = fileparts(rrLesion)
srrLesion = savenii(LES, os.path.join(pth, "s" + nm + e))
dlist = addToDeleteList(dlist, srrLesion)
# do mean scaling
st1healthyName = meanScale(t1healthyName, rT1Name)
dlist = addToDeleteList(dlist, st1healthyName)
# Put lesion in Healthy T1 that has not been brain extracted
# bWithLes = insertimg(rbhnii, srT1Name, rrLesion)
bWithLes, srrLesion = insertimg(st1healthyName, rT1Name, srrLesion)
tsrrLesion = threshholdMask(srrLesion, 0.5)
# change data type to reduce file size
changeDataType(bWithLes, "short") # short is 16 bit int
changeDataType(tsrrLesion, "short") # short is 16 bit int
moveFile(bWithLes, outputDir)
moveFile(tsrrLesion, outputDir)
if d:
deleteTempFiles(dlist)
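`healimg` itself is defined outside this excerpt; consistent with the `np.flipud` usage recorded above, the healing idea is to replace lesion voxels with their mirror-image counterparts from the undamaged hemisphere. A hypothetical 2-D sketch (the function name and toy arrays are mine, not from the pipeline):

```python
import numpy as np

def heal_slice(img2d, les2d):
    """Hypothetical sketch of hemisphere-flip healing: lesion voxels are
    replaced by their mirror across the first axis."""
    healed = img2d.copy()
    mirrored = np.flipud(img2d)        # mirror the slice across the midline
    mask = les2d > 0
    healed[mask] = mirrored[mask]     # paste mirrored intensities into the lesion
    return healed

img = np.arange(16.0).reshape(4, 4)
les = np.zeros((4, 4))
les[0, 1] = 1                         # a single "lesion" voxel
out = heal_slice(img, les)
print(out[0, 1])                      # -> 13.0 (mirrored from the last row)
```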
| [
"os.path.exists",
"numpy.mean",
"numpy.reshape",
"os.getenv",
"nibabel.load",
"numpy.flipud",
"os.path.join",
"os.path.splitext",
"nilearn.image.smooth_img",
"os.path.dirname",
"numpy.linalg.inv",
"subprocess.call",
"scipy.ndimage.measurements.center_of_mass",
"nibabel.Nifti1Image",
"os.... | [((9568, 9597), 'nilearn.image.smooth_img', 'image.smooth_img', (['LES'], {'fwhm': '(3)'}), '(LES, fwhm=3)\n', (9584, 9597), False, 'from nilearn import image\n'), ((10536, 10565), 'nilearn.image.smooth_img', 'image.smooth_img', (['LES'], {'fwhm': '(8)'}), '(LES, fwhm=8)\n', (10552, 10565), False, 'from nilearn import image\n'), ((1085, 1105), 'os.path.dirname', 'os.path.dirname', (['fnm'], {}), '(fnm)\n', (1100, 1105), False, 'import os\n'), ((1300, 1319), 'os.getenv', 'os.getenv', (['"""FSLDIR"""'], {}), "('FSLDIR')\n", (1309, 1319), False, 'import os\n'), ((1392, 1426), 'os.path.join', 'os.path.join', (['fsldir', '"""bin"""', '"""bet"""'], {}), "(fsldir, 'bin', 'bet')\n", (1404, 1426), False, 'import os\n'), ((1506, 1548), 'os.path.join', 'os.path.join', (['fsldir', '"""bin"""', '"""convert_xfm"""'], {}), "(fsldir, 'bin', 'convert_xfm')\n", (1518, 1548), False, 'import os\n'), ((1623, 1663), 'os.path.join', 'os.path.join', (['fsldir', '"""bin"""', '"""robustfov"""'], {}), "(fsldir, 'bin', 'robustfov')\n", (1635, 1663), False, 'import os\n'), ((1747, 1786), 'os.path.join', 'os.path.join', (['fsldir', '"""bin"""', '"""fslmaths"""'], {}), "(fsldir, 'bin', 'fslmaths')\n", (1759, 1786), False, 'import os\n'), ((1868, 1904), 'os.path.join', 'os.path.join', (['fsldir', '"""bin"""', '"""flirt"""'], {}), "(fsldir, 'bin', 'flirt')\n", (1880, 1904), False, 'import os\n'), ((1957, 1971), 'nibabel.load', 'ni.load', (['fname'], {}), '(fname)\n', (1964, 1971), True, 'import nibabel as ni\n'), ((2060, 2077), 'numpy.flipud', 'np.flipud', (['imgmat'], {}), '(imgmat)\n', (2069, 2077), True, 'import numpy as np\n'), ((2138, 2185), 'nibabel.Nifti1Image', 'ni.Nifti1Image', (['imgdata', 'obj.affine', 'obj.header'], {}), '(imgdata, obj.affine, obj.header)\n', (2152, 2185), True, 'import nibabel as ni\n'), ((2466, 2504), 'os.path.join', 'os.path.join', (['ipth', "('r' + inm + '.mat')"], {}), "(ipth, 'r' + inm + '.mat')\n", (2478, 2504), False, 'import os\n'), ((2518, 2552), 
'os.path.join', 'os.path.join', (['ipth', "('r' + inm + ie)"], {}), "(ipth, 'r' + inm + ie)\n", (2530, 2552), False, 'import os\n'), ((2774, 2796), 'os.path.exists', 'os.path.exists', (['outimg'], {}), '(outimg)\n', (2788, 2796), False, 'import os\n'), ((2832, 2841), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (2836, 2841), False, 'from subprocess import call\n'), ((3039, 3073), 'os.path.join', 'os.path.join', (['ipth', "('r' + inm + ie)"], {}), "(ipth, 'r' + inm + ie)\n", (3051, 3073), False, 'import os\n'), ((3293, 3315), 'os.path.exists', 'os.path.exists', (['outimg'], {}), '(outimg)\n', (3307, 3315), False, 'import os\n'), ((3343, 3352), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (3347, 3352), False, 'from subprocess import call\n'), ((3492, 3516), 'numpy.reshape', 'np.reshape', (['i', 'img.shape'], {}), '(i, img.shape)\n', (3502, 3516), True, 'import numpy as np\n'), ((3881, 3908), 'numpy.reshape', 'np.reshape', (['himg', 'img.shape'], {}), '(himg, img.shape)\n', (3891, 3908), True, 'import numpy as np\n'), ((4102, 4148), 'os.path.join', 'os.path.join', (['fsldir', '"""bin"""', '"""fslreorient2std"""'], {}), "(fsldir, 'bin', 'fslreorient2std')\n", (4114, 4148), False, 'import os\n'), ((4744, 4786), 'scipy.ndimage.measurements.center_of_mass', 'ndimage.measurements.center_of_mass', (['inImg'], {}), '(inImg)\n', (4779, 4786), False, 'from scipy import ndimage\n'), ((4831, 4859), 'os.path.join', 'os.path.join', (['p', "('b' + n + e)"], {}), "(p, 'b' + n + e)\n", (4843, 4859), False, 'import os\n'), ((5010, 5033), 'os.path.exists', 'os.path.exists', (['outFile'], {}), '(outFile)\n', (5024, 5033), False, 'import os\n'), ((5062, 5071), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (5066, 5071), False, 'from subprocess import call\n'), ((5544, 5572), 'numpy.reshape', 'np.reshape', (['oimg', 'rimg.shape'], {}), '(oimg, rimg.shape)\n', (5554, 5572), True, 'import numpy as np\n'), ((5690, 5729), 'os.path.join', 'os.path.join', (['pth', "(rnm + 
'_' + dnm + re)"], {}), "(pth, rnm + '_' + dnm + re)\n", (5702, 5729), False, 'import os\n'), ((5747, 5787), 'os.path.join', 'os.path.join', (['pth', "(rnm + '_' + dmnm + re)"], {}), "(pth, rnm + '_' + dmnm + re)\n", (5759, 5787), False, 'import os\n'), ((6386, 6406), 'numpy.zeros_like', 'np.zeros_like', (['inImg'], {}), '(inImg)\n', (6399, 6406), True, 'import numpy as np\n'), ((7104, 7135), 'os.path.join', 'os.path.join', (['pth', "('f' + nm + e)"], {}), "(pth, 'f' + nm + e)\n", (7116, 7135), False, 'import os\n'), ((7151, 7187), 'os.path.join', 'os.path.join', (['pth', "('f' + nm + '.mat')"], {}), "(pth, 'f' + nm + '.mat')\n", (7163, 7187), False, 'import os\n'), ((7330, 7339), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (7334, 7339), False, 'from subprocess import call\n'), ((7483, 7525), 'os.path.join', 'os.path.join', (['ptha', "(nma + '_+_' + nmb + ea)"], {}), "(ptha, nma + '_+_' + nmb + ea)\n", (7495, 7525), False, 'import os\n'), ((7655, 7664), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (7659, 7664), False, 'from subprocess import call\n'), ((8026, 8035), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (8030, 8035), False, 'from subprocess import call\n'), ((9819, 9850), 'os.path.join', 'os.path.join', (['pth', "('h' + nm + e)"], {}), "(pth, 'h' + nm + e)\n", (9831, 9850), False, 'import os\n'), ((10641, 10672), 'os.path.join', 'os.path.join', (['pth', "('s' + nm + e)"], {}), "(pth, 's' + nm + e)\n", (10653, 10672), False, 'import os\n'), ((1137, 1158), 'os.path.basename', 'os.path.basename', (['fnm'], {}), '(fnm)\n', (1153, 1158), False, 'import os\n'), ((1201, 1222), 'os.path.splitext', 'os.path.splitext', (['nm_'], {}), '(nm_)\n', (1217, 1222), False, 'import os\n'), ((3602, 3633), 'os.path.join', 'os.path.join', (['pth', "('t' + nm + e)"], {}), "(pth, 't' + nm + e)\n", (3614, 3633), False, 'import os\n'), ((4025, 4037), 'os.remove', 'os.remove', (['f'], {}), '(f)\n', (4034, 4037), False, 'import os\n'), ((4399, 4430), 
'os.path.join', 'os.path.join', (['pth', "('o' + nm + e)"], {}), "(pth, 'o' + nm + e)\n", (4411, 4430), False, 'import os\n'), ((4611, 4620), 'subprocess.call', 'call', (['cmd'], {}), '(cmd)\n', (4615, 4620), False, 'from subprocess import call\n'), ((6213, 6244), 'os.path.join', 'os.path.join', (['pth', "('d' + nm + e)"], {}), "(pth, 'd' + nm + e)\n", (6225, 6244), False, 'import os\n'), ((6549, 6590), 'os.path.join', 'os.path.join', (['pth', "('lesionData_' + nm + e)"], {}), "(pth, 'lesionData_' + nm + e)\n", (6561, 6590), False, 'import os\n'), ((6786, 6805), 'numpy.linalg.inv', 'npl.inv', (['hdr.affine'], {}), '(hdr.affine)\n', (6793, 6805), True, 'import numpy.linalg as npl\n'), ((7777, 7808), 'os.path.join', 'os.path.join', (['newfolder', '(nm + e)'], {}), '(newfolder, nm + e)\n', (7789, 7808), False, 'import os\n'), ((6075, 6090), 'numpy.mean', 'np.mean', (['refImg'], {}), '(refImg)\n', (6082, 6090), True, 'import numpy as np\n'), ((6091, 6105), 'numpy.mean', 'np.mean', (['inImg'], {}), '(inImg)\n', (6098, 6105), True, 'import numpy as np\n')] |
import multiprocessing
import gc
import logging
import numpy as np
import pandas as pd
from datetime import datetime
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import ParameterSampler
from aml_unsupervised_data_preprocess import ModelTrainingData
from sklearn.preprocessing import MinMaxScaler
from tf_logger_customised import CustomLogger
class IforestAlgorithm(ModelTrainingData):
def __init__(self):
super().__init__()
self.n_hyperparam = self.model_parameters.get('iForest').get(
'n_hyperparam')
self.training_data, self.validation_data = self.get_dataset()
def setup_hyper_param_grid(self):
"""
        This function randomly selects `n` hyperparameter combinations.
Return:
List of dict
"""
# specify parameters and distributions to sample from
param_dist = {
"n_estimators": np.linspace(100, 2000, num=10).astype('int64'),
"max_samples": np.linspace(10, 500, num=10).astype('int64'),
"max_features": np.linspace(0.1, 1.0, num=10)
}
param_list = list(
ParameterSampler(param_dist,
n_iter=self.n_hyperparam,
random_state=self.rng))
param_com_list = [
dict((k, round(v, 6)) for (k, v) in d.items()) for d in param_list
]
return param_com_list
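`ParameterSampler` draws `n_iter` random combinations from the distributions above without materializing the full grid. A standalone sketch of the same grid (the `n_iter=5` is arbitrary, standing in for `self.n_hyperparam`):

```python
import numpy as np
from sklearn.model_selection import ParameterSampler

rng = np.random.RandomState(0)
param_dist = {
    "n_estimators": np.linspace(100, 2000, num=10).astype("int64"),
    "max_samples": np.linspace(10, 500, num=10).astype("int64"),
    "max_features": np.linspace(0.1, 1.0, num=10),
}
# Draw 5 random hyperparameter combinations from the grid
combos = list(ParameterSampler(param_dist, n_iter=5, random_state=rng))
print(len(combos))             # -> 5
print(sorted(combos[0]))       # -> ['max_features', 'max_samples', 'n_estimators']
```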
def model_train_n_eval(self, training_data, validation_data, combination,
model_iForest, n_val_samples, rng):
"""
        This function trains the model on the given input and
        evaluates its performance via the precision-recall AUC.
Args:
training_data: Training data pandas DF
validation_data: Validation data pandas DF
combination: Dict of hyperparam
model_iForest: iForest model reference
n_val_samples: number of samples of validation dataset for evaluation
rng: numpy random number reference
Return:
Dict with hyperparam and evaluation results
"""
X_train = training_data.sample(frac=self.fraction,
replace=True,
random_state=rng)
# generate validation dataset preserving the contamination value
sample_validation = self.stratified_val_samples(validation_data)
# set model parameters
model_iForest = model_iForest.set_params(
random_state=rng,
n_estimators=combination.get("n_estimators"),
max_samples=combination.get("max_samples"), # n_samples,
max_features=combination.get("max_features"))
# train the model
model_iForest.fit(X_train)
# find true labels
y_true = [sample.label for sample in sample_validation]
# prepare data for prediction
X_val = [sample.drop(columns=['label']) for sample in sample_validation]
# The anomaly score of an input sample is computed as the mean anomaly score of the trees in the forest.
# The measure of normality of an observation given a tree is the depth of the leaf containing this observation,
# which is equivalent to the number of splittings required to isolate this point
# Negative scores represent outliers, positive scores represent inliers. >> decision function
# score_samples = The anomaly score of the input samples. The lower, the more abnormal.
pred_score = [
model_iForest.score_samples(val_dataset) for val_dataset in X_val
]
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_pred_score = [
scaler.fit_transform(predictions.reshape(-1, 1))
for predictions in pred_score
]
inverted_anomaly_score = [
1 - predictions for predictions in scaled_pred_score
]
# performance measure using AUC for fpr and tpr
average_auc, sd_auc = self.compute_pr_auc(y_true,
inverted_anomaly_score)
combination.update({"avg_auc": average_auc, "std_dev_auc": sd_auc})
gc.collect()
return combination
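The score handling above relies on `score_samples` returning *lower* values for *more* abnormal points, so the method min-max scales the scores to [0, 1] and inverts them. The same transform in isolation (the raw scores here are made up):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# score_samples convention: the lower the score, the more abnormal the sample
raw_scores = np.array([-0.8, -0.4, -0.3, -0.75])

scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(raw_scores.reshape(-1, 1))
anomaly = 1 - scaled   # invert: now the higher the value, the more abnormal
print(anomaly.ravel()) # highest value flags the lowest raw score (-0.8)
```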
def mp_evaluation_hyperpram(self, train, validate):
"""
        This function executes the multi-process evaluation of hyperparameters.
        For the given list of hyperparameter combinations, it runs
        batches sized to the specified number of processes.
Args:
            train: Training dataset
validate: validation dataset
Return:
A pandas DF
"""
max_number_processes = self.n_processes
pool_2 = multiprocessing.Pool(max_number_processes)
ctx = multiprocessing.get_context()
logging.info('Custom logs iForest: Get hyper-parameters')
param_comb_list = self.setup_hyper_param_grid()
# create validation dataset as per the contamination value
val_sample = self.val_contamination_sample(validate)
# isolation forest implementation
model_iForest = IsolationForest()
output_df = []
logging.info('Custom logs iForest: Execute multi-process HP tuning ')
batch_result = [
pool_2.apply_async(self.model_train_n_eval,
args=(train, val_sample, combination,
model_iForest, self.n_val_samples,
self.rng))
for combination in param_comb_list
]
try:
output = [p.get() for p in batch_result]
except multiprocessing.TimeoutError:
logging.error(
'Custom logs iForest: Process not responding for evaluation')
else:
for results in output:
output_df.append(pd.DataFrame(results, index=[0]))
        pool_2.close()
        pool_2.join()
        test_df = pd.concat(output_df)
return test_df
def execute_model_iForest(self):
ts = datetime.now()
salt = ts.strftime("%Y_%m_%d_%H_%M_%S")
filename = 'iForest_model_taining_{}.log'.format(salt)
log_ref = CustomLogger(filename)
log_ref.setLogconfig()
logging.info('Custom logs iForest: Number of hyper parameters = %d',
self.n_hyperparam)
logging.info(
'Custom logs iForest: Fraction of training dataset for hp tuning = %.5f',
self.fraction)
logging.info('Custom logs iForest: Number of validation samples = %d',
self.n_val_samples)
logging.info(
'Custom logs iForest: contamination for validation set = %.5f',
self.contamination)
logging.info('Custom logs iForest: Initiate model tuning process')
model_results = self.mp_evaluation_hyperpram(self.training_data,
self.validation_data)
return model_results
| [
"sklearn.ensemble.IsolationForest",
"multiprocessing.get_context",
"sklearn.model_selection.ParameterSampler",
"tf_logger_customised.CustomLogger",
"datetime.datetime.now",
"numpy.linspace",
"pandas.concat",
"multiprocessing.Pool",
"gc.collect",
"pandas.DataFrame",
"logging.info",
"logging.err... | [((3643, 3677), 'sklearn.preprocessing.MinMaxScaler', 'MinMaxScaler', ([], {'feature_range': '(0, 1)'}), '(feature_range=(0, 1))\n', (3655, 3677), False, 'from sklearn.preprocessing import MinMaxScaler\n'), ((4206, 4218), 'gc.collect', 'gc.collect', ([], {}), '()\n', (4216, 4218), False, 'import gc\n'), ((4727, 4769), 'multiprocessing.Pool', 'multiprocessing.Pool', (['max_number_processes'], {}), '(max_number_processes)\n', (4747, 4769), False, 'import multiprocessing\n'), ((4784, 4813), 'multiprocessing.get_context', 'multiprocessing.get_context', ([], {}), '()\n', (4811, 4813), False, 'import multiprocessing\n'), ((4823, 4880), 'logging.info', 'logging.info', (['"""Custom logs iForest: Get hyper-parameters"""'], {}), "('Custom logs iForest: Get hyper-parameters')\n", (4835, 4880), False, 'import logging\n'), ((5133, 5150), 'sklearn.ensemble.IsolationForest', 'IsolationForest', ([], {}), '()\n', (5148, 5150), False, 'from sklearn.ensemble import IsolationForest\n'), ((5183, 5252), 'logging.info', 'logging.info', (['"""Custom logs iForest: Execute multi-process HP tuning """'], {}), "('Custom logs iForest: Execute multi-process HP tuning ')\n", (5195, 5252), False, 'import logging\n'), ((6033, 6047), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (6045, 6047), False, 'from datetime import datetime\n'), ((6178, 6200), 'tf_logger_customised.CustomLogger', 'CustomLogger', (['filename'], {}), '(filename)\n', (6190, 6200), False, 'from tf_logger_customised import CustomLogger\n'), ((6241, 6333), 'logging.info', 'logging.info', (['"""Custom logs iForest: Number of hyper parameters = %d"""', 'self.n_hyperparam'], {}), "('Custom logs iForest: Number of hyper parameters = %d', self.\n n_hyperparam)\n", (6253, 6333), False, 'import logging\n'), ((6358, 6468), 'logging.info', 'logging.info', (['"""Custom logs iForest: Fraction of training dataset for hp tuning = %.5f"""', 'self.fraction'], {}), "(\n 'Custom logs iForest: Fraction of training 
dataset for hp tuning = %.5f',\n self.fraction)\n", (6370, 6468), False, 'import logging\n'), ((6493, 6588), 'logging.info', 'logging.info', (['"""Custom logs iForest: Number of validation samples = %d"""', 'self.n_val_samples'], {}), "('Custom logs iForest: Number of validation samples = %d', self\n .n_val_samples)\n", (6505, 6588), False, 'import logging\n'), ((6613, 6713), 'logging.info', 'logging.info', (['"""Custom logs iForest: contamination for validation set = %.5f"""', 'self.contamination'], {}), "('Custom logs iForest: contamination for validation set = %.5f',\n self.contamination)\n", (6625, 6713), False, 'import logging\n'), ((6743, 6809), 'logging.info', 'logging.info', (['"""Custom logs iForest: Initiate model tuning process"""'], {}), "('Custom logs iForest: Initiate model tuning process')\n", (6755, 6809), False, 'import logging\n'), ((1060, 1089), 'numpy.linspace', 'np.linspace', (['(0.1)', '(1.0)'], {'num': '(10)'}), '(0.1, 1.0, num=10)\n', (1071, 1089), True, 'import numpy as np\n'), ((1140, 1217), 'sklearn.model_selection.ParameterSampler', 'ParameterSampler', (['param_dist'], {'n_iter': 'self.n_hyperparam', 'random_state': 'self.rng'}), '(param_dist, n_iter=self.n_hyperparam, random_state=self.rng)\n', (1156, 1217), False, 'from sklearn.model_selection import ParameterSampler\n'), ((5936, 5956), 'pandas.concat', 'pd.concat', (['output_df'], {}), '(output_df)\n', (5945, 5956), True, 'import pandas as pd\n'), ((5704, 5779), 'logging.error', 'logging.error', (['"""Custom logs iForest: Process not responding for evaluation"""'], {}), "('Custom logs iForest: Process not responding for evaluation')\n", (5717, 5779), False, 'import logging\n'), ((911, 941), 'numpy.linspace', 'np.linspace', (['(100)', '(2000)'], {'num': '(10)'}), '(100, 2000, num=10)\n', (922, 941), True, 'import numpy as np\n'), ((986, 1014), 'numpy.linspace', 'np.linspace', (['(10)', '(500)'], {'num': '(10)'}), '(10, 500, num=10)\n', (997, 1014), True, 'import numpy as np\n'), 
((5879, 5911), 'pandas.DataFrame', 'pd.DataFrame', (['results'], {'index': '[0]'}), '(results, index=[0])\n', (5891, 5911), True, 'import pandas as pd\n')] |
#! /usr/bin/env python
# -*- coding: utf-8 -*-
""" Module to I/O the data """
import os
import re
import warnings
from glob import glob
from astropy.io.fits import getheader, getval
from astropy.time import Time
REDUXPATH = os.getenv('SEDMREDUXPATH',default="~/redux/")
_PACKAGE_ROOT = os.path.abspath(os.path.dirname(__file__))+"/"
SEDM_REDUCER = os.getenv('SEDM_USER',default="auto")
############################
# #
# PROD/DB STRUCTURE #
# #
############################
PROD_CUBEROOT = "e3d"
PROD_SPECROOT = "spec"
PROD_SENSITIVITYROOT = "fluxcal"
PRODSTRUCT_RE = {"ccd":{"lamp":"^(dome|Hg|Cd|Xe)",
"crr":'^(crr)',
"orig":"^(ifu)"},
"bkgd":{"lamp":"^(bkgd).*(dome|Hg|Cd|Xe)",
"crr":"^(bkgd_crr)"},
"cube": {"basic":"^(%s).((?!cal))"%PROD_CUBEROOT, # starts w/ e3d & not containing cal
"calibrated":"^(%s_cal)"%PROD_CUBEROOT,
"defaultcalibrated":"^(%s_defcal)"%PROD_CUBEROOT},
"spec": {"basic":"^(%s).((?!cal))"%PROD_SPECROOT, # starts w/ e3d & not containing cal
"bkgd":"^(%s).((?!cal))"%PROD_SPECROOT, # starts w/ e3d & not containing cal
"auto":"^(%sauto).((?!cal))"%PROD_SPECROOT, # starts w/ e3d & not containing cal
"forcepsf":"^(%s_forcepsf).((?!cal))"%PROD_SPECROOT, # starts w/ e3d & not containing cal
"calibrated":"^(%s_cal)"%PROD_SPECROOT,
"defaultcalibrated":"^(%s_defcal)"%PROD_SPECROOT,
"invsensitivity":"^(%s)"%PROD_SENSITIVITYROOT},
"psf": {"param":"^(%s)(?=.*json)"%"psf"} #.json files containing the psf fitvalues and fitted adr.
}
__all__ = ["get_night_files",
"load_nightly_mapper",
"load_nightly_tracematch","load_nightly_hexagonalgrid",
"load_nightly_wavesolution","load_nightly_flat"]
############################
# #
# Data Access #
# #
############################
def get_night_files(date, kind, target=None, extention=".fits"):
""" GENERIC IO FUNCTION
Parameters
----------
date: [string]
date for which you want file. Format: YYYYMMDD
kind: [string]
Which kind of file do you want.
Format for kind:
        - You can directly provide a regular expression by starting with re:
- e.g. all the lamp or dome associated data: kind="re:(dome|Hg|Xe|Cd)"
        - Otherwise, you provide a predefined format using 'type.subtype' ;
- e.g. the basic cubes : kind='cube.basic'
- e.g. final flux calibrated cubes: kind='cube.fluxcalibrated' ;
- e.g. quick flux calibrated cubes: kind='cube.defaultcalibrated' ;
- e.g. the lamp or dome ccd files : kind='ccd.lamp'
- e.g. cosmic ray corrected ccds : kind='ccd.crr' ;
- e.g. lamp and dome ccd files : kind='ccd.lamp';
          (see all the predefined formats in pysedm.io.PRODSTRUCT_RE)
target: [string/None] -optional-
Additional selection. The file should also contain the string defined in target.
target supports regex. e.g. requesting dome or Hg files: target="(dome|Hg)"
Returns
-------
    list of strings (full paths)
"""
if kind.startswith('re:'):
regex = kind.replace("re:","")
else:
if "." not in kind: kind = "%s.*"%kind
kinds = kind.split(".")
if len(kinds)==2:
type_, subtype_ = kinds
# allowing some user flexibility
if subtype_ in ["cal"]:
subtype_ = "calibrated"
elif subtype_ in ["defcal"]:
subtype_ = "defaultcalibrated"
elif subtype_ in ["fluxcal"]:
subtype_ = "invsensitivity"
# parsing the input
if subtype_ !="*":
regex = PRODSTRUCT_RE[type_][subtype_]
else:
regex = "|".join([subsreg
for subsreg in PRODSTRUCT_RE[type_].values()])
else:
raise TypeError("Enable to parse the given kind: %s"%kind)
path = get_datapath(date)
if target in ["*"]:
target = None
if extention in ["*",".*"]:
extention = None
# - Parsing the files
return [path+f for f in os.listdir(get_datapath(date))
if re.search(r'%s'%regex, f) and
(target is None or re.search(r'%s'%target, f)) and
(extention is None or f.endswith(extention))]
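The selection boils down to matching each filename in the night's directory against a regex from `PRODSTRUCT_RE`. A self-contained sketch with two of the cube regexes (file names are illustrative, and `PROD_CUBEROOT = "e3d"` is substituted in by hand):

```python
import re

# Two of the cube regexes from PRODSTRUCT_RE, with PROD_CUBEROOT = "e3d"
cube_re = {"basic": "^(e3d).((?!cal))",   # starts with e3d, "cal" must not follow
           "calibrated": "^(e3d_cal)"}

files = ["e3d_crr_b_ifu20200101_10_10_10_target.fits",      # illustrative names
         "e3d_cal_crr_b_ifu20200101_10_10_10_target.fits",
         "spec_target.fits"]

basic = [f for f in files if re.search(cube_re["basic"], f)]
cal = [f for f in files if re.search(cube_re["calibrated"], f)]
print(len(basic), len(cal))  # -> 1 1
```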
def get_cleaned_sedmcube(filename):
    """ get sky and flux calibrated cube """
    from .sedm import get_sedmcube                 # local import, avoids circular imports
    from . import fluxcalibration
    cube = get_sedmcube(filename)
    # nearest-in-time flux calibration from the cube's own night
    fluxcalfile = fetch_nearest_fluxcal(file=cube.filename)
    fluxcal = fluxcalibration.load_fluxcal_spectrum(fluxcalfile)
cube.remove_sky()
#cube.scale_by(fluxcal.data) # old version
cube.scale_by( fluxcal.get_inversed_sensitivity(cube.header.get("AIRMASS", 1.1)), onraw=False )
return cube
#########################
# #
# Reading the DB #
# #
#########################
def get_datapath(YYYYMMDD):
""" Return the full path of the current date """
return REDUXPATH+"/%s/"%YYYYMMDD
def fetch_header(date, target, kind="ccd.crr", getkey=None):
""" Look for a night file (using get_night_files()) and returns its header
or a value from it stored with the given getkey."""
datafile = get_night_files(date, kind, target=target)
if len(datafile)==0:
return None
# Get the entire header
if getkey is None:
if len(datafile)==1:
return getheader(datafile[0])
return {d.split("/"):getheader(d) for d in datafile}
# Or just a key value from it?
else:
from astropy.io.fits import getval
if len(datafile)==1:
return getval(datafile[0],getkey)
return {d.split("/")[-1]:getval(d,getkey) for d in datafile}
def fetch_nearest_fluxcal(date=None, file=None, mjd=None, kind="spec.fluxcal"):
""" Look for the fluxcal_*.fits file the closest (in time) from the given file and returns it. """
if date is None:
if mjd is not None:
date = Time(mjd, format="mjd").datetime.isoformat().split("T")[0].replace("-","")
elif file is not None:
date = filename_to_date(file)
else:
raise ValueError("file is None, then date and/or mjd must be given. None here")
filefluxcal = get_night_files(date, kind)
if len(filefluxcal)==0:
warnings.warn("No %s file for the night %s"%(kind, date))
return None
if len(filefluxcal)==1:
warnings.warn("Only 1 file of kind %s for the night %s"%(kind, date))
return filefluxcal[0]
import numpy as np
if mjd is not None:
target_mjd_obs = mjd
else:
try:
target_mjd_obs = getval(file,"MJD_OBS")
except KeyError:
warnings.warn("No MJD_OBS keyword found, returning most recent file")
return filefluxcal[-1]
fluxcal_mjd_obs = [getval(f,"MJD_OBS") for f in filefluxcal]
return filefluxcal[ np.argmin( np.abs( target_mjd_obs - np.asarray(fluxcal_mjd_obs) ) ) ]
def filename_to_id(filename):
""" """
return filename.split("/")[-1].split( header_to_date( getheader(filename) ))[-1][1:9]
def header_to_date( header, sep=""):
""" returns the datetiume YYYYMMDD associated with the 'JD' from the header """
datetime = Time(header["JD"], format="jd").datetime
return sep.join(["%4s"%datetime.year, "%02d"%datetime.month, "%02d"%datetime.day])
def filename_to_time(filename):
""" """
date, hour, minut, sec = filename.split("_ifu")[-1].split("_")[:4]
return Time("-".join([date[i:j] for i,j in [[0,4],[4,6],[6,8]]]) +" "+ ":".join([hour, minut, sec]))
def filename_to_date(filename, iso=False):
""" """
if iso:
return filename_to_time(filename).datetime.isoformat().split("T")[0]
return filename.split("_ifu")[1].split("_")[0]
def fetch_guider(date, filename, astrom=True, extinction=".fits"):
""" fetch the guider data for the given filename. """
print("DEPRECATED fetch_guider(date, filename) -> filename_to_guider(filename)")
return filename_to_guider(filename, astrom=astrom, extinction=extinction)
def filename_to_guider(filename, astrom=True, extinction=".fits", nomd5=True):
""" """
fileinfo = parse_filename(filename)
dirname = os.path.dirname(filename)
key = "astrom" if astrom else "guider"
return [os.path.join(dirname,l) for l in os.listdir( get_datapath(fileinfo["date"]))
if fileinfo["sedmid"] in l and key in l and (l.endswith(('.fits', '.gz')) if nomd5 else l) ]
def parse_filename(filename):
""" """
filename = filename.split(".")[0]
if filename.startswith("crr"):
crr, b, ifudate, hh, mm, ss, *targetname = filename.split("_")
else:
_, crr, b, ifudate, hh, mm, ss, *targetname = filename.split("_")
if len(targetname)==0:
targetname = None
else:
targetname = ("-".join(targetname).replace(" ","")).split(".")[0]
date = ifudate.replace("ifu","")
mjd = Time(f"{date[:4]}-{date[4:6]}-{date[6:]}"+" "+f"{hh}:{mm}:{ss}", format="iso").mjd
return {"date":date,
"sedmid":f"{ifudate}_{hh}_{mm}_{ss}",
"mjd":mjd,
"name":targetname}
def filename_to_background_name(filename):
""" predefined structure for background naming """
last = filename.split("/")[-1]
return "".join([filename.split(last)[0],"bkgd_"+last])
def get_night_schedule(YYYYMMDD):
""" Return the list of observations (the what.list) """
schedule_file = glob(get_datapath(YYYYMMDD)+"what*")
if len(schedule_file)==0:
warnings.warn("No 'what list' for the given night ")
return None
return open(schedule_file[0]).read().splitlines()
def is_file_stdstars(filename):
""" Tests if the 'OBJECT' entry of the file header is associated with a Standard star exposure. (True / False)
    None is returned if the header does not contain an 'OBJECT' entry
(see `is_stdstars`)
Returns
-------
bool or None
"""
from astropy.io.fits import getheader
return is_stdstars( getheader(filename) )
def is_stdstars(header):
""" Tests if the 'OBJECT' of the given header is associated with a Standard star exposure. (True / False)
None is returned if the header do not contain an 'OBJECT' entry
Returns
-------
bool or None
"""
obj = header.get("OBJECT",None)
if obj is None:
return None
stdnames = ["STD","Feige", "Hitlner", "LTT"]
return any([s_ in obj for s_ in stdnames])
#########################
# #
# NIGHT SOLUTION #
# #
#########################
# - Mapper
def load_nightly_mapper(YYYYMMDD, within_ccd_contours=True):
""" High level object to do i,j<->x,y,lbda """
from .mapping import Mapper
tracematch = load_nightly_tracematch(YYYYMMDD)
wsol = load_nightly_wavesolution(YYYYMMDD)
hgrid = load_nightly_hexagonalgrid(YYYYMMDD)
if within_ccd_contours:
from .sedm import INDEX_CCD_CONTOURS
indexes = tracematch.get_traces_within_polygon(INDEX_CCD_CONTOURS)
else:
indexes = list(wsol.wavesolutions.keys())
mapper = Mapper(tracematch= tracematch, wavesolution = wsol, hexagrid=hgrid)
mapper.derive_spaxel_mapping( indexes )
return mapper
# - TraceMatch
def load_nightly_tracematch(YYYYMMDD, withmask=False):
""" Load the spectral matcher.
This object must have been created.
"""
from .spectralmatching import load_tracematcher
if not withmask:
return load_tracematcher(get_datapath(YYYYMMDD)+"%s_TraceMatch.pkl"%(YYYYMMDD))
else:
try:
return load_tracematcher(get_datapath(YYYYMMDD)+"%s_TraceMatch_WithMasks.pkl"%(YYYYMMDD))
except:
warnings.warn("No TraceMatch_WithMasks found. returns the usual TraceMatch")
return load_nightly_tracematch(YYYYMMDD, withmask=False)
# - HexaGrid
def load_nightly_hexagonalgrid(YYYYMMDD, download_it=True,
nprocess_dl=1, **kwargs):
""" Load the Grid id <-> QR<->XY position
This object must have been created.
nprocess_dl: [int]
        Number of parallel downloads. 1 means no parallel processing.
"""
from .utils.hexagrid import load_hexprojection
hexagrid_path = os.path.join(get_datapath(YYYYMMDD),"%s_HexaGrid.pkl"%(YYYYMMDD) )
if os.path.isfile(hexagrid_path):
return load_hexprojection( hexagrid_path )
    if not download_it:
        raise IOError(f"Cannot find a hexagrid for date {YYYYMMDD}")
    warnings.warn(f"cannot find the hexagrid for date {YYYYMMDD}; using ztfquery.sedm to download it.")
from ztfquery import sedm
squery = sedm.SEDMQuery()
squery.download_night_calibrations(YYYYMMDD, which="HexaGrid.pkl",
nprocess=nprocess_dl, **kwargs)
# has it been downloaded ?
if os.path.isfile(hexagrid_path):
return load_hexprojection( hexagrid_path )
raise IOError(f"Cannot find an hexagrid for date {YYYYMMDD}, even after calling squery.download_night_calibrations()")
# - WaveSolution
def load_nightly_wavesolution(YYYYMMDD, subprocesses=False):
""" Load the spectral matcher.
This object must have been created.
"""
from .wavesolution import load_wavesolution
if not subprocesses:
return load_wavesolution(get_datapath(YYYYMMDD)+"%s_WaveSolution.pkl"%(YYYYMMDD))
return [load_wavesolution(subwave) for subwave in glob(get_datapath(YYYYMMDD)+"%s_WaveSolution_range*.pkl"%(YYYYMMDD))]
# - 3D Flat
def load_nightly_flat(YYYYMMDD):
""" Load the spectral matcher.
This object must have been created.
"""
from pyifu.spectroscopy import load_slice
return load_slice(get_datapath(YYYYMMDD)+"%s_Flat.fits"%(YYYYMMDD))
#########################
# #
# PSF Product #
# #
#########################
def get_psf_parameters(date, target=None, filepath=False):
""" """
path = get_datapath(date)
if target == "*":
target = None
json_files = [path+f for f in os.listdir(get_datapath(date))
if re.search(r'%s'%PRODSTRUCT_RE["psf"]["param"], f)
and (target is None or re.search(r'%s'%target, f))
                  and "defcal" not in f]
if filepath:
return json_files
import json
return {f.split("/")[-1]:json.load( open(f) ) for f in json_files}
def load_psf_param(date, keys_to_add=None):
""" """
from pyifu.adr import ten
psfdata = get_psf_parameters(date)
for s, d in psfdata.items():
d["adr"]["delta_parangle"] = (d["adr"]["parangle"] - d["adr"]["parangle_ref"])%360
d["adr"]["delta_parangle.err"] = d["adr"]["parangle.err"]
sour = s.split(date)[-1][:9]
if keys_to_add is not None:
d["header"] = {}
for k in keys_to_add:
if "DEC" in k or "RA" in k:
val = ten(fetch_header(date, sour, getkey=k))
else:
val = fetch_header(date, sour, getkey=k)
d["header"][k.lower()] = val
return psfdata
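`load_psf_param` wraps the parallactic-angle difference with `% 360` so that `delta_parangle` always lands in [0, 360). A quick standalone illustration of that wrap (the helper name is made up for this sketch):

```python
# Wrap an angle difference into [0, 360), as done for delta_parangle above.
def delta_angle(parangle, parangle_ref):
    return (parangle - parangle_ref) % 360

print(delta_angle(10.0, 350.0))  # -> 20.0 (plain subtraction would give -340.0)
print(delta_angle(350.0, 10.0))  # -> 340.0
```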
#########################
# #
# References #
# #
#########################
def load_telluric_line(filter=None):
""" return a TelluricSpectrum (child of pyifu Spectrum) containing the telluric emission line
from Kitt Peak National Observatory.
Source:
Buton et al. 2013 (SNIFS, Buton, C., <NAME>., <NAME>., et al. 2013, A&A, 549, A8)
using data from
<NAME>., <NAME>., & <NAME>. 2003, BAAS, 35, 1260.
_Please cite both._
Returns
-------
TelluricSpectrum (child of pyifu Spectrum)
"""
from .utils.atmosphere import load_telluric_spectrum
return load_telluric_spectrum(_PACKAGE_ROOT+"data/KPNO_lines.fits", filter=filter)
def get_bad_standard_exposures():
""" Get the list of recorded standard star exposures that cannot be used for calibration. """
return open(os.path.join(_PACKAGE_ROOT,"data/bad_standard.txt")).read().splitlines()
def get_noncalspec_standards():
""" Get the list of STD observed by SEDm that are not calspec standards. """
return open(os.path.join(_PACKAGE_ROOT,"data/noncalspec_standard.txt")).read().splitlines()
#########################
#
# OUTPUT PROD #
#
#########################
def _saveout_forcepsf_(filecube, cube, cuberes=None, cubemodel=None,
cubefitted=None, spec=None, bkgd=None, extraction_type="Force 3DPSF extraction: Spectral Model",
mode="auto", spec_info="", fluxcal=True, nofig=False):
# Cube Model
if cubemodel is not None:
cubemodel.set_header(cube.header)
cubemodel.header["SOURCE"] = (filecube.split("/")[-1], "This object has been derived from this file")
cubemodel.header["PYSEDMT"] = ("Force 3DPSF extraction: Model Cube", "This is the model cube of the PSF extract")
cubemodel.header["EXTRTYPE"] = (mode, "Kind of PSF extraction")
cubemodel.writeto(filecube.replace(PROD_CUBEROOT,"forcepsfmodel_%s_"%mode+PROD_CUBEROOT))
if cubefitted is not None:
cubefitted.set_header(cube.header)
cubefitted.header["SOURCE"] = (filecube.split("/")[-1], "This object has been derived from this file")
cubefitted.header["PYSEDMT"] = ("Force 3DPSF extraction: Fitted Cube", "This is the model cube of the PSF extract")
cubefitted.header["EXTRTYPE"] = (mode, "Kind of PSF extraction")
cubefitted.writeto(filecube.replace(PROD_CUBEROOT,"forcepsf_fitted_%s_"%mode+PROD_CUBEROOT))
if cuberes is not None:
# Cube Residual
cuberes.set_header(cube.header)
cuberes.header["SOURCE"] = (filecube.split("/")[-1], "This object has been derived from this file")
cuberes.header["PYSEDMT"] = ("Force 3DPSF extraction: Residual Cube", "This is the residual cube of the PSF extract")
cuberes.header["EXTRTYPE"] = (mode, "Kind of PSF extraction")
cuberes.writeto(filecube.replace(PROD_CUBEROOT,"psfres_%s_"%mode+PROD_CUBEROOT))
# ----------------- #
# Save the Spectrum #
# ----------------- #
    kind_key = ""
# - build the spectrum
if spec is not None:
for k,v in cube.header.items():
if k not in spec.header:
spec.header.set(k,v)
spec.header["SOURCE"] = (filecube.split("/")[-1], "This object has been derived from this file")
spec.header["PYSEDMT"] = (extraction_type, "This is the fitted flux spectrum")
spec.header["EXTRTYPE"] = (mode, "Kind of extraction")
fileout = filecube.replace(PROD_CUBEROOT,PROD_SPECROOT+"%s_%s_"%(kind_key, mode+spec_info))
spec.writeto(fileout)
spec.writeto(fileout.replace(".fits",".txt"), ascii=True)
spec._side_properties["filename"] = fileout
if not nofig:
from pyifu import get_spectrum
spec_to_plot = get_spectrum(spec.lbda, spec.data,
variance=spec.variance if spec.has_variance() else None,
header=spec.header)
spec_to_plot.show(savefile=spec.filename.replace(".fits", ".pdf"),
show_zero=fluxcal, show=False)
spec_to_plot.show(savefile=spec.filename.replace(".fits", ".png"),
show_zero=fluxcal, show=False)
# - background
if bkgd is not None:
bkgd.set_header(cube.header)
bkgd.header["SOURCE"] = (filecube.split("/")[-1], "This object has been derived from this file")
bkgd.header["PYSEDMT"] = (extraction_type, "This is the fitted flux spectrum")
bkgd.header["EXTRTYPE"] = (mode, "Kind of extraction")
fileout = filecube.replace(PROD_CUBEROOT,PROD_SPECROOT+"%s_%s_bkgd"%(kind_key,mode+spec_info))
bkgd.writeto(fileout)
bkgd.writeto(fileout.replace(".fits",".txt"), ascii=True)
| [
"astropy.io.fits.getheader",
"os.getenv",
"ztfquery.sedm.SEDMQuery",
"astropy.io.fits.getval",
"os.path.join",
"numpy.asarray",
"os.path.isfile",
"os.path.dirname",
"astropy.time.Time",
"warnings.warn",
"re.search"
] | [((228, 274), 'os.getenv', 'os.getenv', (['"""SEDMREDUXPATH"""'], {'default': '"""~/redux/"""'}), "('SEDMREDUXPATH', default='~/redux/')\n", (237, 274), False, 'import os\n'), ((356, 394), 'os.getenv', 'os.getenv', (['"""SEDM_USER"""'], {'default': '"""auto"""'}), "('SEDM_USER', default='auto')\n", (365, 394), False, 'import os\n'), ((8777, 8802), 'os.path.dirname', 'os.path.dirname', (['filename'], {}), '(filename)\n', (8792, 8802), False, 'import os\n'), ((12928, 12957), 'os.path.isfile', 'os.path.isfile', (['hexagrid_path'], {}), '(hexagrid_path)\n', (12942, 12957), False, 'import os\n'), ((13110, 13217), 'warnings.warn', 'warnings.warn', (['"""cannot find the hexagrid for date {YYYYMMDD}, using ztquery.sedm to download it."""'], {}), "(\n 'cannot find the hexagrid for date {YYYYMMDD}, using ztquery.sedm to download it.'\n )\n", (13123, 13217), False, 'import warnings\n'), ((13251, 13267), 'ztfquery.sedm.SEDMQuery', 'sedm.SEDMQuery', ([], {}), '()\n', (13265, 13267), False, 'from ztfquery import sedm\n'), ((13452, 13481), 'os.path.isfile', 'os.path.isfile', (['hexagrid_path'], {}), '(hexagrid_path)\n', (13466, 13481), False, 'import os\n'), ((306, 331), 'os.path.dirname', 'os.path.dirname', (['__file__'], {}), '(__file__)\n', (321, 331), False, 'import os\n'), ((6848, 6907), 'warnings.warn', 'warnings.warn', (["('No %s file for the night %s' % (kind, date))"], {}), "('No %s file for the night %s' % (kind, date))\n", (6861, 6907), False, 'import warnings\n'), ((6962, 7033), 'warnings.warn', 'warnings.warn', (["('Only 1 file of kind %s for the night %s' % (kind, date))"], {}), "('Only 1 file of kind %s for the night %s' % (kind, date))\n", (6975, 7033), False, 'import warnings\n'), ((7389, 7409), 'astropy.io.fits.getval', 'getval', (['f', '"""MJD_OBS"""'], {}), "(f, 'MJD_OBS')\n", (7395, 7409), False, 'from astropy.io.fits import getval\n'), ((7796, 7827), 'astropy.time.Time', 'Time', (["header['JD']"], {'format': '"""jd"""'}), "(header['JD'], format='jd')\n", 
(7800, 7827), False, 'from astropy.time import Time\n'), ((8858, 8882), 'os.path.join', 'os.path.join', (['dirname', 'l'], {}), '(dirname, l)\n', (8870, 8882), False, 'import os\n'), ((9516, 9603), 'astropy.time.Time', 'Time', (["(f'{date[:4]}-{date[4:6]}-{date[6:]}' + ' ' + f'{hh}:{mm}:{ss}')"], {'format': '"""iso"""'}), "(f'{date[:4]}-{date[4:6]}-{date[6:]}' + ' ' + f'{hh}:{mm}:{ss}', format\n ='iso')\n", (9520, 9603), False, 'from astropy.time import Time\n'), ((10110, 10162), 'warnings.warn', 'warnings.warn', (['"""No \'what list\' for the given night """'], {}), '("No \'what list\' for the given night ")\n', (10123, 10162), False, 'import warnings\n'), ((10601, 10620), 'astropy.io.fits.getheader', 'getheader', (['filename'], {}), '(filename)\n', (10610, 10620), False, 'from astropy.io.fits import getheader\n'), ((5928, 5950), 'astropy.io.fits.getheader', 'getheader', (['datafile[0]'], {}), '(datafile[0])\n', (5937, 5950), False, 'from astropy.io.fits import getheader\n'), ((5980, 5992), 'astropy.io.fits.getheader', 'getheader', (['d'], {}), '(d)\n', (5989, 5992), False, 'from astropy.io.fits import getheader\n'), ((6148, 6175), 'astropy.io.fits.getval', 'getval', (['datafile[0]', 'getkey'], {}), '(datafile[0], getkey)\n', (6154, 6175), False, 'from astropy.io.fits import getval\n'), ((6209, 6226), 'astropy.io.fits.getval', 'getval', (['d', 'getkey'], {}), '(d, getkey)\n', (6215, 6226), False, 'from astropy.io.fits import getval\n'), ((7192, 7215), 'astropy.io.fits.getval', 'getval', (['file', '"""MJD_OBS"""'], {}), "(file, 'MJD_OBS')\n", (7198, 7215), False, 'from astropy.io.fits import getval\n'), ((4673, 4699), 're.search', 're.search', (["('%s' % regex)", 'f'], {}), "('%s' % regex, f)\n", (4682, 4699), False, 'import re\n'), ((7252, 7321), 'warnings.warn', 'warnings.warn', (['"""No MJD_OBS keyword found, returning most recent file"""'], {}), "('No MJD_OBS keyword found, returning most recent file')\n", (7265, 7321), False, 'import warnings\n'), ((12326, 
12402), 'warnings.warn', 'warnings.warn', (['"""No TraceMatch_WithMasks found. returns the usual TraceMatch"""'], {}), "('No TraceMatch_WithMasks found. returns the usual TraceMatch')\n", (12339, 12402), False, 'import warnings\n'), ((14736, 14786), 're.search', 're.search', (["('%s' % PRODSTRUCT_RE['psf']['param'])", 'f'], {}), "('%s' % PRODSTRUCT_RE['psf']['param'], f)\n", (14745, 14786), False, 'import re\n'), ((4739, 4766), 're.search', 're.search', (["('%s' % target)", 'f'], {}), "('%s' % target, f)\n", (4748, 4766), False, 'import re\n'), ((7492, 7519), 'numpy.asarray', 'np.asarray', (['fluxcal_mjd_obs'], {}), '(fluxcal_mjd_obs)\n', (7502, 7519), True, 'import numpy as np\n'), ((7627, 7646), 'astropy.io.fits.getheader', 'getheader', (['filename'], {}), '(filename)\n', (7636, 7646), False, 'from astropy.io.fits import getheader\n'), ((14832, 14859), 're.search', 're.search', (["('%s' % target)", 'f'], {}), "('%s' % target, f)\n", (14841, 14859), False, 'import re\n'), ((16686, 16738), 'os.path.join', 'os.path.join', (['_PACKAGE_ROOT', '"""data/bad_standard.txt"""'], {}), "(_PACKAGE_ROOT, 'data/bad_standard.txt')\n", (16698, 16738), False, 'import os\n'), ((16889, 16948), 'os.path.join', 'os.path.join', (['_PACKAGE_ROOT', '"""data/noncalspec_standard.txt"""'], {}), "(_PACKAGE_ROOT, 'data/noncalspec_standard.txt')\n", (16901, 16948), False, 'import os\n'), ((6498, 6521), 'astropy.time.Time', 'Time', (['mjd'], {'format': '"""mjd"""'}), "(mjd, format='mjd')\n", (6502, 6521), False, 'from astropy.time import Time\n')] |
import pandas as pd
import numpy as np
from htof.utils.data_utils import merge_consortia, safe_concatenate
def test_merge_consortia():
data = pd.DataFrame([[133, 'F', -0.9053, -0.4248, 0.6270, 1.1264, 0.5285, -2.50, 2.21, 0.393],
[133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, 1.59, 0.393],
[271, 'N', -0.1051, -0.1252, 0.1263, 0.1265, 0.1291, -0.18, 0.59, 0.193]],
columns=['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7', 'IA8', 'IA9', 'IA10'])
merged_orbit = merge_consortia(data)
assert len(merged_orbit) == 2
assert np.isclose(merged_orbit['IA9'].iloc[0], 1.498373) # merged error
assert np.isclose(merged_orbit['IA8'].iloc[0], -1.505620) # merged residual
assert np.isclose(merged_orbit['IA8'].iloc[1], -0.18) # single orbit un-touched residual
def test_merge_consortia_equal_on_flipped_rows():
data1 = pd.DataFrame([[133, 'F', -0.9053, -0.4248, 0.6270, 1.1264, 0.5285, -2.50, 2.21, 0.393],
[133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, 1.59, 0.393]],
columns=['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7', 'IA8', 'IA9', 'IA10'])
data2 = pd.DataFrame([[133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, 1.59, 0.393],
[133, 'F', -0.9053, -0.4248, 0.6270, 1.1264, 0.5285, -2.50, 2.21, 0.393]],
columns=['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7', 'IA8', 'IA9', 'IA10'])
pd.testing.assert_frame_equal(merge_consortia(data2), merge_consortia(data1))
def test_safe_concatenate():
a, b = np.arange(3), np.arange(3, 6)
assert np.allclose(a, safe_concatenate(a, None))
assert np.allclose(b, safe_concatenate(None, b))
assert None is safe_concatenate(None, None)
assert np.allclose(np.arange(6), safe_concatenate(a, b))
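The behaviour exercised by `test_safe_concatenate` above (None acts as an identity element for concatenation) can be mimicked with a small None-tolerant wrapper. This is a sketch of the tested contract, not the htof implementation itself:

```python
import numpy as np

def safe_concatenate_sketch(a, b):
    # Mirror the behaviour tested above: None behaves as an identity element.
    if a is None and b is None:
        return None
    if a is None:
        return b
    if b is None:
        return a
    return np.concatenate([a, b])

print(safe_concatenate_sketch(np.arange(3), np.arange(3, 6)))  # -> [0 1 2 3 4 5]
```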
| [
"htof.utils.data_utils.merge_consortia",
"htof.utils.data_utils.safe_concatenate",
"numpy.isclose",
"pandas.DataFrame",
"numpy.arange"
] | [((148, 480), 'pandas.DataFrame', 'pd.DataFrame', (["[[133, 'F', -0.9053, -0.4248, 0.627, 1.1264, 0.5285, -2.5, 2.21, 0.393], [\n 133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, 1.59, 0.393],\n [271, 'N', -0.1051, -0.1252, 0.1263, 0.1265, 0.1291, -0.18, 0.59, 0.193]]"], {'columns': "['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7', 'IA8', 'IA9', 'IA10']"}), "([[133, 'F', -0.9053, -0.4248, 0.627, 1.1264, 0.5285, -2.5, \n 2.21, 0.393], [133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -\n 1.18, 1.59, 0.393], [271, 'N', -0.1051, -0.1252, 0.1263, 0.1265, 0.1291,\n -0.18, 0.59, 0.193]], columns=['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6',\n 'IA7', 'IA8', 'IA9', 'IA10'])\n", (160, 480), True, 'import pandas as pd\n'), ((565, 586), 'htof.utils.data_utils.merge_consortia', 'merge_consortia', (['data'], {}), '(data)\n', (580, 586), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n'), ((632, 681), 'numpy.isclose', 'np.isclose', (["merged_orbit['IA9'].iloc[0]", '(1.498373)'], {}), "(merged_orbit['IA9'].iloc[0], 1.498373)\n", (642, 681), True, 'import numpy as np\n'), ((709, 758), 'numpy.isclose', 'np.isclose', (["merged_orbit['IA8'].iloc[0]", '(-1.50562)'], {}), "(merged_orbit['IA8'].iloc[0], -1.50562)\n", (719, 758), True, 'import numpy as np\n'), ((790, 836), 'numpy.isclose', 'np.isclose', (["merged_orbit['IA8'].iloc[1]", '(-0.18)'], {}), "(merged_orbit['IA8'].iloc[1], -0.18)\n", (800, 836), True, 'import numpy as np\n'), ((937, 1191), 'pandas.DataFrame', 'pd.DataFrame', (["[[133, 'F', -0.9053, -0.4248, 0.627, 1.1264, 0.5285, -2.5, 2.21, 0.393], [\n 133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, 1.59, 0.393]]"], {'columns': "['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7', 'IA8', 'IA9', 'IA10']"}), "([[133, 'F', -0.9053, -0.4248, 0.627, 1.1264, 0.5285, -2.5, \n 2.21, 0.393], [133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -\n 1.18, 1.59, 0.393]], columns=['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6',\n 'IA7', 'IA8', 
'IA9', 'IA10'])\n", (949, 1191), True, 'import pandas as pd\n'), ((1248, 1501), 'pandas.DataFrame', 'pd.DataFrame', (["[[133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, 1.59, 0.393],\n [133, 'F', -0.9053, -0.4248, 0.627, 1.1264, 0.5285, -2.5, 2.21, 0.393]]"], {'columns': "['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7', 'IA8', 'IA9', 'IA10']"}), "([[133, 'N', -0.9051, -0.4252, 0.6263, 1.1265, 0.5291, -1.18, \n 1.59, 0.393], [133, 'F', -0.9053, -0.4248, 0.627, 1.1264, 0.5285, -2.5,\n 2.21, 0.393]], columns=['A1', 'IA2', 'IA3', 'IA4', 'IA5', 'IA6', 'IA7',\n 'IA8', 'IA9', 'IA10'])\n", (1260, 1501), True, 'import pandas as pd\n'), ((1581, 1603), 'htof.utils.data_utils.merge_consortia', 'merge_consortia', (['data2'], {}), '(data2)\n', (1596, 1603), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n'), ((1605, 1627), 'htof.utils.data_utils.merge_consortia', 'merge_consortia', (['data1'], {}), '(data1)\n', (1620, 1627), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n'), ((1671, 1683), 'numpy.arange', 'np.arange', (['(3)'], {}), '(3)\n', (1680, 1683), True, 'import numpy as np\n'), ((1685, 1700), 'numpy.arange', 'np.arange', (['(3)', '(6)'], {}), '(3, 6)\n', (1694, 1700), True, 'import numpy as np\n'), ((1727, 1752), 'htof.utils.data_utils.safe_concatenate', 'safe_concatenate', (['a', 'None'], {}), '(a, None)\n', (1743, 1752), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n'), ((1780, 1805), 'htof.utils.data_utils.safe_concatenate', 'safe_concatenate', (['None', 'b'], {}), '(None, b)\n', (1796, 1805), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n'), ((1826, 1854), 'htof.utils.data_utils.safe_concatenate', 'safe_concatenate', (['None', 'None'], {}), '(None, None)\n', (1842, 1854), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n'), ((1878, 1890), 'numpy.arange', 'np.arange', (['(6)'], {}), '(6)\n', (1887, 1890), True, 
'import numpy as np\n'), ((1892, 1914), 'htof.utils.data_utils.safe_concatenate', 'safe_concatenate', (['a', 'b'], {}), '(a, b)\n', (1908, 1914), False, 'from htof.utils.data_utils import merge_consortia, safe_concatenate\n')] |
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import argparse
import os
import time
from loguru import logger
import importlib
import sys
import cv2
import numpy as np
import torch
from tabulate import tabulate
from yolox.data.data_augment import ValTransform
from yolox.data.datasets import COCO_CLASSES
from yolox.utils import boxes, fuse_model, get_model_info, postprocess, vis
from map.script.map import map_score
IMAGE_EXT = [".jpg", ".jpeg", ".webp", ".bmp", ".png"]
DSLAB20 = [31.78,98.08,88.51,98.26,97.46,99.61,97.26,98.26,95.94,94.08,99.93,97.50]
DATASETS = ["Hempbox",
"Chueried_Hive01",
"ClemensRed",
"Echolinde",
"Erlen_diago",
"Erlen_front",
"Erlen_smart",
"Erlen_Hive11",
"Froh14",
"Froh23",
"UnitedQueens",
"Doettingen_Hive1"]
def get_exp_by_file(exp_file,data_dir):
try:
sys.path.append(os.path.dirname(exp_file))
current_exp = importlib.import_module(os.path.basename(exp_file).split(".")[0])
exp = current_exp.Exp(data_dir)
except Exception:
raise ImportError("{} doesn't contains class named 'Exp'".format(exp_file))
return exp
def get_exp_by_name(exp_name,data_dir):
import yolox
yolox_path = os.path.dirname(os.path.dirname(yolox.__file__))
filedict = {
"yolox-s": "yolox_s.py",
"yolox-m": "yolox_m.py",
"yolox-l": "yolox_l.py",
"yolox-x": "yolox_x.py",
"yolox-tiny": "yolox_tiny.py",
"yolox-nano": "nano.py",
"yolov3": "yolov3.py",
}
filename = filedict[exp_name]
exp_path = os.path.join(yolox_path, "exps", "default", filename)
    return get_exp_by_file(exp_path, data_dir)
def get_exp(exp_file, exp_name,data_dir):
"""
get Exp object by file or name. If exp_file and exp_name
are both provided, get Exp by exp_file.
Args:
exp_file (str): file path of experiment.
        exp_name (str): name of experiment, e.g. "yolox-s".
"""
assert (
exp_file is not None or exp_name is not None
), "plz provide exp file or exp name."
if exp_file is not None:
return get_exp_by_file(exp_file,data_dir)
else:
return get_exp_by_name(exp_name,data_dir)
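`get_exp_by_file` above relies on Python's dynamic-import machinery: it appends the experiment file's directory to `sys.path`, imports the module by its basename, and instantiates its `Exp` class. A self-contained sketch of that pattern, using a temporary file with a hypothetical minimal `Exp` class:

```python
import importlib
import os
import sys
import tempfile

# Minimal experiment file containing an Exp class (hypothetical content,
# mirroring what get_exp_by_file expects to find).
exp_source = (
    "class Exp:\n"
    "    def __init__(self, data_dir):\n"
    "        self.data_dir = data_dir\n"
)

with tempfile.TemporaryDirectory() as tmpdir:
    exp_file = os.path.join(tmpdir, "my_exp.py")
    with open(exp_file, "w") as f:
        f.write(exp_source)

    # Same steps as get_exp_by_file: put the file's directory on sys.path,
    # import the module by its basename, then instantiate its Exp class.
    sys.path.append(os.path.dirname(exp_file))
    module = importlib.import_module(os.path.basename(exp_file).split(".")[0])
    exp = module.Exp("datasets/Hempbox/")
    print(exp.data_dir)  # -> datasets/Hempbox/
```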
def make_parser():
parser = argparse.ArgumentParser("YOLOX Demo!")
parser.add_argument(
        "demo", default="image", help="demo type, e.g. image, video and webcam"
)
parser.add_argument("-expn", "--experiment-name", type=str, default=None)
parser.add_argument("-n", "--name", type=str, default=None, help="model name")
parser.add_argument(
"--path", default="./assets/dog.jpg", help="path to images or video"
)
parser.add_argument("--camid", type=int, default=0, help="webcam demo camera id")
parser.add_argument(
"--save_result",
action="store_true",
help="whether to save the inference result of image/video",
)
parser.add_argument('-na', '--no-animation', help="no animation is shown.", action="store_true")
parser.add_argument('-np', '--no-plot', help="no plot is shown.", action="store_true")
parser.add_argument('-q', '--quiet', help="minimalistic console output.", action="store_true")
# argparse receiving list of classes to be ignored (e.g., python main.py --ignore person book)
parser.add_argument('-i', '--ignore', nargs='+', type=str, help="ignore a list of classes.")
# argparse receiving list of classes with specific IoU (e.g., python main.py --set-class-iou person 0.7)
parser.add_argument('--set-class-iou', nargs='+', type=str, help="set IoU for a specific class.")
# exp file
parser.add_argument(
"-f",
"--exp_file",
default=None,
type=str,
help="pls input your experiment description file",
)
parser.add_argument("-c", "--ckpt", default=None, type=str, help="ckpt for eval")
parser.add_argument(
"--device",
default="cpu",
type=str,
help="device to run our model, can either be cpu or gpu",
)
parser.add_argument("--conf", default=0.3, type=float, help="test conf")
parser.add_argument("--nms", default=0.3, type=float, help="test nms threshold")
parser.add_argument("--tsize", default=None, type=int, help="test img size")
parser.add_argument(
"--fp16",
dest="fp16",
default=True,
action="store_true",
help="Adopting mix precision evaluating.",
)
parser.add_argument(
"--legacy",
dest="legacy",
default=False,
action="store_true",
help="To be compatible with older versions",
)
parser.add_argument(
"--fuse",
dest="fuse",
default=False,
action="store_true",
help="Fuse conv and bn for testing.",
)
parser.add_argument(
"--trt",
dest="trt",
default=False,
action="store_true",
help="Using TensorRT model for testing.",
)
return parser
def get_image_list(path):
image_names = []
for maindir, subdir, file_name_list in os.walk(path):
for filename in file_name_list:
apath = os.path.join(maindir, filename)
ext = os.path.splitext(apath)[1]
if ext in IMAGE_EXT:
image_names.append(apath)
return image_names
class Predictor(object):
def __init__(
self,
model,
exp,
cls_names=COCO_CLASSES,
trt_file=None,
decoder=None,
device="cpu",
fp16=False,
legacy=False,
):
self.model = model
self.exp = exp
self.cls_names = cls_names
self.decoder = decoder
self.num_classes = exp.num_classes
self.confthre = exp.test_conf
self.nmsthre = exp.nmsthre
self.test_size = exp.test_size
self.device = device
self.fp16 = fp16
self.preproc = ValTransform(legacy=legacy)
if trt_file is not None:
from torch2trt import TRTModule
model_trt = TRTModule()
model_trt.load_state_dict(torch.load(trt_file))
x = torch.ones(1, 3, exp.test_size[0], exp.test_size[1]).cuda()
self.model(x)
self.model = model_trt
def inference(self, img):
img_info = {"id": 0}
if isinstance(img, str):
img_info["file_name"] = os.path.basename(img)
img = cv2.imread(img)
else:
img_info["file_name"] = None
height, width = img.shape[:2]
img_info["height"] = height
img_info["width"] = width
img_info["raw_img"] = img
ratio = min(self.test_size[0] / img.shape[0], self.test_size[1] / img.shape[1])
#print(self.test_size)
img_info["ratio"] = ratio
img, _ = self.preproc(img, None, self.test_size)
img = torch.from_numpy(img).unsqueeze(0)
img = img.float()
if self.device == "gpu":
img = img.cuda()
if self.fp16:
img = img.half() # to FP16
with torch.no_grad():
t0 = time.time()
outputs = self.model(img)
if self.decoder is not None:
outputs = self.decoder(outputs, dtype=outputs.type())
outputs = postprocess(
outputs, self.num_classes, self.confthre,
self.nmsthre, class_agnostic=True
)
logger.info("Infer time: {:.4f}s".format(time.time() - t0))
return outputs, img_info
def visual(self, output, img_info, dataset,cls_conf=0.35,):
ratio = img_info["ratio"]
img = img_info["raw_img"]
if output is None:
file_path = str("map/input/" + str(dataset) + "/detection-results/" + str(img_info["file_name"])[:-4] + ".txt")
f= open(file_path,"w+")
f.close()
return img
output = output.cpu()
bboxes = output[:, 0:4]
# preprocessing: resize
#print(bboxes)
bboxes /= ratio
#print(bboxes)
pred_path = str("map/input/" + str(dataset) + "/detection-results/")
path_dont_exist = not(os.path.isdir(pred_path))
if path_dont_exist:
os.makedirs(pred_path, exist_ok=True)
file_path = str("map/input/" + str(dataset) + "/detection-results/" + str(img_info["file_name"])[:-4] + ".txt")
f= open(file_path,"w+")
#print(annotations/img_info["ratio"])
bboxes1 = []
for i,obj in enumerate(bboxes):
x1 = np.max((0, obj[0]))
y1 = np.max((0, obj[1]))
x2 = obj[2]
y2 = obj[3]
#x2 = img_info["width"] - np.max((0, obj[0]))
#y2 = img_info["height"] - np.max((0, obj[1]))
#x1 = img_info["width"] - obj[2]
#y1 = img_info["height"] - obj[3]
if x2 >= x1 and y2 >= y1:
bboxes1.append([x1, y1, x2, y2])
f.write(str(int(output[i,6].item())) + " " + str((output[i, 4] *output[i,5]).item()) + " " + str(x1.item()) + " " + str(y1.item()) + " " + str(x2.item()) + " " + str(y2.item()) + "\n")
f.close()
cls = output[:, 6]
scores = output[:, 4] * output[:, 5]
vis_res = vis(img, bboxes1, scores, cls, cls_conf, self.cls_names)
return vis_res
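`visual` above writes one text file per image in the `class_id confidence x1 y1 x2 y2` format consumed by the mAP evaluation script. A minimal sketch of writing and reading that per-image format (file name and detection values are hypothetical):

```python
import os
import tempfile

# Hypothetical detections: (class_id, confidence, x1, y1, x2, y2)
detections = [(0, 0.91, 10.0, 20.0, 50.0, 80.0),
              (1, 0.47, 5.0, 5.0, 25.0, 30.0)]

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "frame_0001.txt")
    # One line per detection: "class_id confidence x1 y1 x2 y2"
    with open(path, "w") as f:
        for cls_id, conf, x1, y1, x2, y2 in detections:
            f.write(f"{cls_id} {conf} {x1} {y1} {x2} {y2}\n")

    with open(path) as f:
        parsed = [line.split() for line in f]
    print(len(parsed), parsed[0][0], parsed[0][1])  # -> 2 0 0.91
```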
def image_demo(predictor,exp, vis_folder, path, current_time, save_result,dataset):
if os.path.isdir(path):
files = get_image_list(path)
else:
files = [path]
files.sort()
val_loader = exp.get_eval_loader(
batch_size=4,
is_distributed=False,
)
#train_loader = exp.get_data_loader(
# batch_size=4,
# is_distributed=False,
# no_aug=False,
# cache_img=False,
# )
ground_truth_path = "map/input/" + str(dataset)
ground_truth_path = os.path.join(ground_truth_path,"ground-truth/")
path_dont_exist = not(os.path.isdir(ground_truth_path))
if path_dont_exist:
os.makedirs(ground_truth_path, exist_ok=True)
for i,image_name in enumerate(files):
outputs, img_info = predictor.inference(image_name)
if path_dont_exist:
file_path = str(ground_truth_path + str(img_info["file_name"])[:-4] + ".txt")
f1= open(file_path,"w+")
annotations = exp.valdataset.annotations[i][0]
#annotations = exp.dataset._dataset.annotations[i][0]
annotations /= img_info["ratio"]
for i,obj in enumerate(annotations):
f1.write(str(int(obj[4].item())) + " " + str(obj[0].item()) + " " + str(obj[1].item()) + " " + str(obj[2].item()) + " " + str(obj[3].item()) + "\n")
#print(outputs)
result_image = predictor.visual(outputs[0], img_info,dataset, predictor.confthre)
if save_result:
save_folder = os.path.join(
vis_folder,dataset, time.strftime("%Y_%m_%d_%H_%M_%S", current_time)
)
os.makedirs(save_folder, exist_ok=True)
save_file_name = os.path.join(save_folder, os.path.basename(image_name))
logger.info("Saving detection result in {}".format(save_file_name))
cv2.imwrite(save_file_name, result_image)
def imageflow_demo(predictor, vis_folder, current_time, args):
cap = cv2.VideoCapture(args.path if args.demo == "video" else args.camid)
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) # float
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) # float
fps = cap.get(cv2.CAP_PROP_FPS)
save_folder = os.path.join(
vis_folder, time.strftime("%Y_%m_%d_%H_%M_%S", current_time)
)
os.makedirs(save_folder, exist_ok=True)
if args.demo == "video":
save_path = os.path.join(save_folder, args.path.split("/")[-1])
else:
save_path = os.path.join(save_folder, "camera.mp4")
logger.info(f"video save_path is {save_path}")
vid_writer = cv2.VideoWriter(
save_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (int(width), int(height))
)
while True:
ret_val, frame = cap.read()
if ret_val:
outputs, img_info = predictor.inference(frame)
result_frame = predictor.visual(outputs[0], img_info, predictor.confthre)
if args.save_result:
vid_writer.write(result_frame)
ch = cv2.waitKey(1)
if ch == 27 or ch == ord("q") or ch == ord("Q"):
break
else:
break
def main(args):
table = []
exp = get_exp(args.exp_file, args.name, str("datasets/" + str(DATASETS[0]) + "/"))
if not args.experiment_name:
args.experiment_name = exp.exp_name
file_name = os.path.join(exp.output_dir, args.experiment_name)
os.makedirs(file_name, exist_ok=True)
vis_folder = None
if args.save_result:
vis_folder = os.path.join(file_name, "vis_res")
os.makedirs(vis_folder, exist_ok=True)
if args.trt:
args.device = "gpu"
logger.info("Args: {}".format(args))
if args.conf is not None:
exp.test_conf = args.conf
if args.nms is not None:
exp.nmsthre = args.nms
if args.tsize is not None:
exp.test_size = (args.tsize, args.tsize)
model = exp.get_model()
logger.info("Model Summary: {}".format(get_model_info(model, exp.test_size)))
if args.device == "gpu":
model.cuda()
if args.fp16:
model.half() # to FP16
model.eval()
if not args.trt:
if args.ckpt is None:
ckpt_file = os.path.join(file_name, "best_ckpt.pth")
else:
ckpt_file = args.ckpt
logger.info("loading checkpoint")
ckpt = torch.load(ckpt_file, map_location="cpu")
# load the model state dict
model.load_state_dict(ckpt["model"])
logger.info("loaded checkpoint done.")
if args.fuse:
logger.info("\tFusing model...")
model = fuse_model(model)
if args.trt:
assert not args.fuse, "TensorRT model is not support model fusing!"
trt_file = os.path.join(file_name, "model_trt.pth")
assert os.path.exists(
trt_file
), "TensorRT model is not found!\n Run python3 tools/trt.py first!"
model.head.decode_in_inference = False
decoder = model.head.decode_outputs
logger.info("Using TensorRT to inference")
else:
trt_file = None
decoder = None
predictor = Predictor(
model, exp, COCO_CLASSES, trt_file, decoder,
args.device, args.fp16, args.legacy,
)
current_time = time.localtime()
work_dir = os.getcwd()
for i,dataset in enumerate(DATASETS):
os.chdir(work_dir)
exp = get_exp(args.exp_file, args.name, str("datasets/" + str(dataset) + "/"))
exp.test_size = (args.tsize, args.tsize)
#map_score(dataset,args,os.getcwd())
if args.demo == "image":
path = str("datasets/" + str(dataset) + "/validate")
image_demo(predictor,exp, vis_folder, path, current_time, args.save_result,dataset)
elif args.demo == "video" or args.demo == "webcam":
imageflow_demo(predictor, vis_folder, current_time, args)
score = map_score(dataset,args,work_dir)
score.append(DSLAB20[i])
if(float(score[1][:-1]) > score[2]):
score.append("BETTER")
else:
score.append("WORSE")
table.append(score)
os.chdir(work_dir)
print(tabulate(table[:-1], headers=["Dataset","mAP Score","DSLab20","Comparison"]))
file_path = str("map/output/mAP_results.txt")
f= open(file_path,"w+")
f.write(tabulate(table[:-1], headers=["Dataset","mAP Score","DSLab20","Comparison"]))
f.close()
if __name__ == "__main__":
args = make_parser().parse_args()
#exp = get_exp(args.exp_file, args.name)
main(args)
| [
"yolox.utils.postprocess",
"torch.from_numpy",
"torch2trt.TRTModule",
"os.walk",
"os.path.exists",
"yolox.utils.vis",
"argparse.ArgumentParser",
"numpy.max",
"map.script.map.map_score",
"os.path.isdir",
"cv2.VideoWriter_fourcc",
"time.localtime",
"cv2.waitKey",
"tabulate.tabulate",
"yolo... | [((1750, 1803), 'os.path.join', 'os.path.join', (['yolox_path', '"""exps"""', '"""default"""', 'filename'], {}), "(yolox_path, 'exps', 'default', filename)\n", (1762, 1803), False, 'import os\n'), ((2401, 2439), 'argparse.ArgumentParser', 'argparse.ArgumentParser', (['"""YOLOX Demo!"""'], {}), "('YOLOX Demo!')\n", (2424, 2439), False, 'import argparse\n'), ((5225, 5238), 'os.walk', 'os.walk', (['path'], {}), '(path)\n', (5232, 5238), False, 'import os\n'), ((9629, 9648), 'os.path.isdir', 'os.path.isdir', (['path'], {}), '(path)\n', (9642, 9648), False, 'import os\n'), ((10092, 10140), 'os.path.join', 'os.path.join', (['ground_truth_path', '"""ground-truth/"""'], {}), "(ground_truth_path, 'ground-truth/')\n", (10104, 10140), False, 'import os\n'), ((11556, 11623), 'cv2.VideoCapture', 'cv2.VideoCapture', (["(args.path if args.demo == 'video' else args.camid)"], {}), "(args.path if args.demo == 'video' else args.camid)\n", (11572, 11623), False, 'import cv2\n'), ((11883, 11922), 'os.makedirs', 'os.makedirs', (['save_folder'], {'exist_ok': '(True)'}), '(save_folder, exist_ok=True)\n', (11894, 11922), False, 'import os\n'), ((12098, 12144), 'loguru.logger.info', 'logger.info', (['f"""video save_path is {save_path}"""'], {}), "(f'video save_path is {save_path}')\n", (12109, 12144), False, 'from loguru import logger\n'), ((12926, 12976), 'os.path.join', 'os.path.join', (['exp.output_dir', 'args.experiment_name'], {}), '(exp.output_dir, args.experiment_name)\n', (12938, 12976), False, 'import os\n'), ((12981, 13018), 'os.makedirs', 'os.makedirs', (['file_name'], {'exist_ok': '(True)'}), '(file_name, exist_ok=True)\n', (12992, 13018), False, 'import os\n'), ((14818, 14834), 'time.localtime', 'time.localtime', ([], {}), '()\n', (14832, 14834), False, 'import time\n'), ((14850, 14861), 'os.getcwd', 'os.getcwd', ([], {}), '()\n', (14859, 14861), False, 'import os\n'), ((15687, 15705), 'os.chdir', 'os.chdir', (['work_dir'], {}), '(work_dir)\n', (15695, 15705), False, 
'import os\n'), ((1410, 1441), 'os.path.dirname', 'os.path.dirname', (['yolox.__file__'], {}), '(yolox.__file__)\n', (1425, 1441), False, 'import os\n'), ((6058, 6085), 'yolox.data.data_augment.ValTransform', 'ValTransform', ([], {'legacy': 'legacy'}), '(legacy=legacy)\n', (6070, 6085), False, 'from yolox.data.data_augment import ValTransform\n'), ((9456, 9512), 'yolox.utils.vis', 'vis', (['img', 'bboxes1', 'scores', 'cls', 'cls_conf', 'self.cls_names'], {}), '(img, bboxes1, scores, cls, cls_conf, self.cls_names)\n', (9459, 9512), False, 'from yolox.utils import boxes, fuse_model, get_model_info, postprocess, vis\n'), ((10166, 10198), 'os.path.isdir', 'os.path.isdir', (['ground_truth_path'], {}), '(ground_truth_path)\n', (10179, 10198), False, 'import os\n'), ((10232, 10277), 'os.makedirs', 'os.makedirs', (['ground_truth_path'], {'exist_ok': '(True)'}), '(ground_truth_path, exist_ok=True)\n', (10243, 10277), False, 'import os\n'), ((11824, 11872), 'time.strftime', 'time.strftime', (['"""%Y_%m_%d_%H_%M_%S"""', 'current_time'], {}), "('%Y_%m_%d_%H_%M_%S', current_time)\n", (11837, 11872), False, 'import time\n'), ((12054, 12093), 'os.path.join', 'os.path.join', (['save_folder', '"""camera.mp4"""'], {}), "(save_folder, 'camera.mp4')\n", (12066, 12093), False, 'import os\n'), ((12198, 12229), 'cv2.VideoWriter_fourcc', 'cv2.VideoWriter_fourcc', (["*'mp4v'"], {}), "(*'mp4v')\n", (12220, 12229), False, 'import cv2\n'), ((13088, 13122), 'os.path.join', 'os.path.join', (['file_name', '"""vis_res"""'], {}), "(file_name, 'vis_res')\n", (13100, 13122), False, 'import os\n'), ((13131, 13169), 'os.makedirs', 'os.makedirs', (['vis_folder'], {'exist_ok': '(True)'}), '(vis_folder, exist_ok=True)\n', (13142, 13169), False, 'import os\n'), ((13873, 13906), 'loguru.logger.info', 'logger.info', (['"""loading checkpoint"""'], {}), "('loading checkpoint')\n", (13884, 13906), False, 'from loguru import logger\n'), ((13922, 13963), 'torch.load', 'torch.load', (['ckpt_file'], 
{'map_location': '"""cpu"""'}), "(ckpt_file, map_location='cpu')\n", (13932, 13963), False, 'import torch\n'), ((14053, 14091), 'loguru.logger.info', 'logger.info', (['"""loaded checkpoint done."""'], {}), "('loaded checkpoint done.')\n", (14064, 14091), False, 'from loguru import logger\n'), ((14119, 14151), 'loguru.logger.info', 'logger.info', (['"""\tFusing model..."""'], {}), "('\\tFusing model...')\n", (14130, 14151), False, 'from loguru import logger\n'), ((14168, 14185), 'yolox.utils.fuse_model', 'fuse_model', (['model'], {}), '(model)\n', (14178, 14185), False, 'from yolox.utils import boxes, fuse_model, get_model_info, postprocess, vis\n'), ((14299, 14339), 'os.path.join', 'os.path.join', (['file_name', '"""model_trt.pth"""'], {}), "(file_name, 'model_trt.pth')\n", (14311, 14339), False, 'import os\n'), ((14355, 14379), 'os.path.exists', 'os.path.exists', (['trt_file'], {}), '(trt_file)\n', (14369, 14379), False, 'import os\n'), ((14567, 14609), 'loguru.logger.info', 'logger.info', (['"""Using TensorRT to inference"""'], {}), "('Using TensorRT to inference')\n", (14578, 14609), False, 'from loguru import logger\n'), ((14912, 14930), 'os.chdir', 'os.chdir', (['work_dir'], {}), '(work_dir)\n', (14920, 14930), False, 'import os\n'), ((15461, 15495), 'map.script.map.map_score', 'map_score', (['dataset', 'args', 'work_dir'], {}), '(dataset, args, work_dir)\n', (15470, 15495), False, 'from map.script.map import map_score\n'), ((15716, 15795), 'tabulate.tabulate', 'tabulate', (['table[:-1]'], {'headers': "['Dataset', 'mAP Score', 'DSLab20', 'Comparison']"}), "(table[:-1], headers=['Dataset', 'mAP Score', 'DSLab20', 'Comparison'])\n", (15724, 15795), False, 'from tabulate import tabulate\n'), ((15884, 15963), 'tabulate.tabulate', 'tabulate', (['table[:-1]'], {'headers': "['Dataset', 'mAP Score', 'DSLab20', 'Comparison']"}), "(table[:-1], headers=['Dataset', 'mAP Score', 'DSLab20', 'Comparison'])\n", (15892, 15963), False, 'from tabulate import tabulate\n'), 
((1041, 1066), 'os.path.dirname', 'os.path.dirname', (['exp_file'], {}), '(exp_file)\n', (1056, 1066), False, 'import os\n'), ((5300, 5331), 'os.path.join', 'os.path.join', (['maindir', 'filename'], {}), '(maindir, filename)\n', (5312, 5331), False, 'import os\n'), ((6188, 6199), 'torch2trt.TRTModule', 'TRTModule', ([], {}), '()\n', (6197, 6199), False, 'from torch2trt import TRTModule\n'), ((6527, 6548), 'os.path.basename', 'os.path.basename', (['img'], {}), '(img)\n', (6543, 6548), False, 'import os\n'), ((6567, 6582), 'cv2.imread', 'cv2.imread', (['img'], {}), '(img)\n', (6577, 6582), False, 'import cv2\n'), ((7214, 7229), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (7227, 7229), False, 'import torch\n'), ((7248, 7259), 'time.time', 'time.time', ([], {}), '()\n', (7257, 7259), False, 'import time\n'), ((7431, 7523), 'yolox.utils.postprocess', 'postprocess', (['outputs', 'self.num_classes', 'self.confthre', 'self.nmsthre'], {'class_agnostic': '(True)'}), '(outputs, self.num_classes, self.confthre, self.nmsthre,\n class_agnostic=True)\n', (7442, 7523), False, 'from yolox.utils import boxes, fuse_model, get_model_info, postprocess, vis\n'), ((8310, 8334), 'os.path.isdir', 'os.path.isdir', (['pred_path'], {}), '(pred_path)\n', (8323, 8334), False, 'import os\n'), ((8376, 8413), 'os.makedirs', 'os.makedirs', (['pred_path'], {'exist_ok': '(True)'}), '(pred_path, exist_ok=True)\n', (8387, 8413), False, 'import os\n'), ((8700, 8719), 'numpy.max', 'np.max', (['(0, obj[0])'], {}), '((0, obj[0]))\n', (8706, 8719), True, 'import numpy as np\n'), ((8737, 8756), 'numpy.max', 'np.max', (['(0, obj[1])'], {}), '((0, obj[1]))\n', (8743, 8756), True, 'import numpy as np\n'), ((11214, 11253), 'os.makedirs', 'os.makedirs', (['save_folder'], {'exist_ok': '(True)'}), '(save_folder, exist_ok=True)\n', (11225, 11253), False, 'import os\n'), ((11431, 11472), 'cv2.imwrite', 'cv2.imwrite', (['save_file_name', 'result_image'], {}), '(save_file_name, result_image)\n', (11442, 11472), 
False, 'import cv2\n'), ((12582, 12596), 'cv2.waitKey', 'cv2.waitKey', (['(1)'], {}), '(1)\n', (12593, 12596), False, 'import cv2\n'), ((13535, 13571), 'yolox.utils.get_model_info', 'get_model_info', (['model', 'exp.test_size'], {}), '(model, exp.test_size)\n', (13549, 13571), False, 'from yolox.utils import boxes, fuse_model, get_model_info, postprocess, vis\n'), ((13776, 13816), 'os.path.join', 'os.path.join', (['file_name', '"""best_ckpt.pth"""'], {}), "(file_name, 'best_ckpt.pth')\n", (13788, 13816), False, 'import os\n'), ((5350, 5373), 'os.path.splitext', 'os.path.splitext', (['apath'], {}), '(apath)\n', (5366, 5373), False, 'import os\n'), ((6238, 6258), 'torch.load', 'torch.load', (['trt_file'], {}), '(trt_file)\n', (6248, 6258), False, 'import torch\n'), ((7007, 7028), 'torch.from_numpy', 'torch.from_numpy', (['img'], {}), '(img)\n', (7023, 7028), False, 'import torch\n'), ((11139, 11187), 'time.strftime', 'time.strftime', (['"""%Y_%m_%d_%H_%M_%S"""', 'current_time'], {}), "('%Y_%m_%d_%H_%M_%S', current_time)\n", (11152, 11187), False, 'import time\n'), ((11309, 11337), 'os.path.basename', 'os.path.basename', (['image_name'], {}), '(image_name)\n', (11325, 11337), False, 'import os\n'), ((6277, 6329), 'torch.ones', 'torch.ones', (['(1)', '(3)', 'exp.test_size[0]', 'exp.test_size[1]'], {}), '(1, 3, exp.test_size[0], exp.test_size[1])\n', (6287, 6329), False, 'import torch\n'), ((1114, 1140), 'os.path.basename', 'os.path.basename', (['exp_file'], {}), '(exp_file)\n', (1130, 1140), False, 'import os\n'), ((7619, 7630), 'time.time', 'time.time', ([], {}), '()\n', (7628, 7630), False, 'import time\n')] |
import numpy as np
### linefit
# From linefit.f from <NAME>
# Converted to python by <NAME>
# Fit a line to points with uncertainties in both axes
def linefit(x,sigx,y,sigy):
m = 0.0
b = 0.0
csum = 0.0
mold = -99999.
sigsqb = 9999.0
sigsqm = 9999.
n = x.size
while abs(m-mold) > 0.1*np.sqrt(sigsqm):
mold = m
sigsquen = sigy**2 + (mold*sigx)**2
sumx = np.sum(x/sigsquen)
sumy = np.sum(y/sigsquen)
sumxx = np.sum((x**2)/sigsquen)
sumxy = np.sum((x*y)/sigsquen)
sums = np.sum(1.0/sigsquen)
csum = np.sum((y-b-m*x)**2/sigsquen)
det = sums*sumxx - sumx**2
b = (sumxx*sumy - sumx*sumxy)/det
m = (sums*sumxy - sumx*sumy)/det
sigsqb = sumxx/det
sigsqm = sums/det
sigb = np.sqrt(sigsqb)
sigm = np.sqrt(sigsqm)
chi = csum/float(n-2)
return m,sigm,b,sigb,chi
| [
"numpy.sum",
"numpy.sqrt"
] | [((805, 820), 'numpy.sqrt', 'np.sqrt', (['sigsqb'], {}), '(sigsqb)\n', (812, 820), True, 'import numpy as np\n'), ((832, 847), 'numpy.sqrt', 'np.sqrt', (['sigsqm'], {}), '(sigsqm)\n', (839, 847), True, 'import numpy as np\n'), ((410, 430), 'numpy.sum', 'np.sum', (['(x / sigsquen)'], {}), '(x / sigsquen)\n', (416, 430), True, 'import numpy as np\n'), ((444, 464), 'numpy.sum', 'np.sum', (['(y / sigsquen)'], {}), '(y / sigsquen)\n', (450, 464), True, 'import numpy as np\n'), ((479, 504), 'numpy.sum', 'np.sum', (['(x ** 2 / sigsquen)'], {}), '(x ** 2 / sigsquen)\n', (485, 504), True, 'import numpy as np\n'), ((519, 543), 'numpy.sum', 'np.sum', (['(x * y / sigsquen)'], {}), '(x * y / sigsquen)\n', (525, 543), True, 'import numpy as np\n'), ((557, 579), 'numpy.sum', 'np.sum', (['(1.0 / sigsquen)'], {}), '(1.0 / sigsquen)\n', (563, 579), True, 'import numpy as np\n'), ((593, 632), 'numpy.sum', 'np.sum', (['((y - b - m * x) ** 2 / sigsquen)'], {}), '((y - b - m * x) ** 2 / sigsquen)\n', (599, 632), True, 'import numpy as np\n'), ((317, 332), 'numpy.sqrt', 'np.sqrt', (['sigsqm'], {}), '(sigsqm)\n', (324, 332), True, 'import numpy as np\n')] |
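The `linefit` snippet above reduces to ordinary least squares when every point has the same uncertainty. A minimal self-contained check of the same normal-equation formulas on noise-free made-up data (the data values are illustrative, not from the source):

```python
import numpy as np

# Unit-weight version of the normal equations used inside linefit
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # exact line, so the fit should recover m = 2, b = 1

sums = float(len(x))
sumx, sumy = x.sum(), y.sum()
sumxx, sumxy = (x ** 2).sum(), (x * y).sum()
det = sums * sumxx - sumx ** 2
b = (sumxx * sumy - sumx * sumxy) / det  # intercept
m = (sums * sumxy - sumx * sumy) / det   # slope
print(m, b)  # -> 2.0 1.0
```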
#%% Initialize
import numpy as np
#%% Read input
with open("day_07_input.txt") as f:
positions = np.array([int(x) for x in f.readline().split(",")])
#%% Part 1
print(int(np.abs(positions - np.median(positions).round()).sum()))
#%% Part 2
mean_diff = np.abs(positions - np.floor(np.mean(positions)))
print(int((mean_diff * (mean_diff + 1) / 2).sum()))
| [
"numpy.mean",
"numpy.median"
] | [((285, 303), 'numpy.mean', 'np.mean', (['positions'], {}), '(positions)\n', (292, 303), True, 'import numpy as np\n'), ((195, 215), 'numpy.median', 'np.median', (['positions'], {}), '(positions)\n', (204, 215), True, 'import numpy as np\n')] |
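In the day-7 snippet above, the median exactly minimizes the part-1 (absolute-distance) cost, while `floor(mean)` is only a good candidate for the part-2 (triangular) cost. A self-contained check on the widely published example input (a hypothetical stand-in here, since the real puzzle input is per-user), brute-forcing part 2 over all targets:

```python
import numpy as np

positions = np.array([16, 1, 2, 0, 4, 2, 7, 1, 2, 14])

# Part 1: total |x - median| is the minimum L1 cost
part1 = int(np.abs(positions - np.median(positions)).sum())

# Part 2: brute-force the triangular-number cost over all candidate targets
costs = []
for t in range(int(positions.max()) + 1):
    d = np.abs(positions - t)
    costs.append(int((d * (d + 1) // 2).sum()))
part2 = min(costs)
print(part1, part2)  # -> 37 168
```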
# -*- coding: utf-8 -*-
# MIT License
#
# Copyright (c) 2018 ZhicongYan
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ==============================================================================
import os
import sys
sys.path.append('.')
sys.path.append("../")
import tensorflow as tf
import tensorflow.contrib.layers as tcl
import numpy as np
from encoder.encoder import get_encoder
from decoder.decoder import get_decoder
from classifier.classifier import get_classifier
from discriminator.discriminator import get_discriminator
from netutils.learning_rate import get_learning_rate
from netutils.learning_rate import get_global_step
from netutils.optimizer import get_optimizer
from netutils.sample import get_sample
from netutils.loss import get_loss
from .base_model import BaseModel
class CVAE2(BaseModel):
def __init__(self, config):
super(CVAE2, self).__init__(config)
self.input_shape = config['input shape']
self.z_dim = config['z_dim']
self.nb_classes = config['nb_classes']
self.config = config
self.build_model()
if self.has_summary:
self.build_summary()
def build_model(self):
self.x_real = tf.placeholder(tf.float32, shape=[None, np.product(self.input_shape)], name='x_input')
self.y_real = tf.placeholder(tf.float32, shape=[None, self.nb_classes], name='y_input')
		self.encoder_input_shape = int(np.product(self.input_shape))
		self.config['encoder params']['name'] = 'EncoderX'
self.config['encoder params']["output dims"] = self.z_dim
		self.x_encoder = get_encoder(self.config['x encoder'], self.config['encoder params'], self.is_training)
self.config['decoder params']['name'] = 'Decoder'
self.config['decoder params']["output dims"] = self.encoder_input_shape
		self.y_encoder = get_encoder(self.config['y encoder'], self.config['y encoder params'], self.is_training)
self.decoder = get_decoder(self.config['decoder'], self.config['decoder params'], self.is_training)
# build encoder
		self.z_mean, self.z_log_var = self.x_encoder(tf.concat([self.x_real, self.y_real], axis=-1))
self.z_mean_y = self.y_encoder(self.y_real)
# sample z from z_mean and z_log_var
self.z_sample = self.draw_sample(self.z_mean, self.z_log_var)
# build decoder
self.x_decode = self.decoder(self.z_sample)
# build test decoder
self.z_test = tf.placeholder(tf.float32, shape=[None, self.z_dim], name='z_test')
self.x_test = self.decoder(self.z_test, reuse=True)
# loss function
self.kl_loss = (get_loss('kl', self.config['kl loss'],
{'z_mean' : (self.z_mean - self.z_mean_y), 'z_log_var' : self.z_log_var})
* self.config.get('kl loss prod', 1.0))
self.xent_loss = (get_loss('reconstruction', self.config['reconstruction loss'],
{'x' : self.x_real, 'y' : self.x_decode })
* self.config.get('reconstruction loss prod', 1.0))
self.loss = self.kl_loss + self.xent_loss
# optimizer configure
self.global_step, self.global_step_update = get_global_step()
if 'lr' in self.config:
self.learning_rate = get_learning_rate(self.config['lr_scheme'], float(self.config['lr']), self.global_step, self.config['lr_params'])
self.optimizer = get_optimizer(self.config['optimizer'], {'learning_rate' : self.learning_rate}, self.loss,
self.decoder.vars + self.x_encoder.vars + self.y_encoder.vars)
else:
self.optimizer = get_optimizer(self.config['optimizer'], {}, self.loss, self.decoder.vars + self.x_encoder.vars + self.y_encoder.vars)
self.train_update = tf.group([self.optimizer, self.global_step_update])
# model saver
		self.saver = tf.train.Saver(self.x_encoder.vars + self.y_encoder.vars + self.decoder.vars + [self.global_step])
def build_summary(self):
# summary scalars are logged per step
sum_list = []
sum_list.append(tf.summary.scalar('encoder/kl_loss', self.kl_loss))
sum_list.append(tf.summary.scalar('lr', self.learning_rate))
sum_list.append(tf.summary.scalar('decoder/reconstruction_loss', self.xent_loss))
sum_list.append(tf.summary.scalar('loss', self.loss))
self.sum_scalar = tf.summary.merge(sum_list)
# summary hists are logged by calling self.summary()
sum_list = []
sum_list += [tf.summary.histogram('encoder/'+var.name, var) for var in self.x_encoder.vars + self.y_encoder.vars]
sum_list += [tf.summary.histogram('decoder/'+var.name, var) for var in self.decoder.vars]
self.histogram_summary = tf.summary.merge(sum_list)
'''
train operations
'''
def train_on_batch_supervised(self, sess, x_batch, y_batch):
if self.config.get('flatten', False):
x_batch = x_batch.reshape([x_batch.shape[0], -1])
feed_dict = {
self.x_real : x_batch,
self.y_real : y_batch,
self.is_training : True
}
return self.train(sess, feed_dict,
train_op=self.train_update,
summary=self.sum_scalar)
def train_on_batch_unsupervised(self, sess, x_batch):
return NotImplementedError
'''
test operation
'''
def predict(self, sess, z_batch):
feed_dict = {
self.z_test : z_batch,
self.is_training : False
}
x_batch = sess.run([self.x_test], feed_dict = feed_dict)
return x_batch
def hidden_variable(self, sess, x_batch):
if self.config.get('flatten', False):
x_batch = x_batch.reshape([x_batch.shape[0], -1])
feed_dict = {
self.x_real : x_batch,
self.is_training : False
}
z_mean, z_log_var = sess.run([self.z_mean, self.z_log_var], feed_dict=feed_dict)
return z_mean, z_log_var
'''
summary operation
'''
def summary(self, sess):
if self.has_summary:
sum = sess.run(self.histogram_summary)
return sum
else:
return None
| [
"numpy.product",
"tensorflow.concatenate",
"netutils.optimizer.get_optimizer",
"tensorflow.placeholder",
"tensorflow.train.Saver",
"tensorflow.summary.merge",
"encoder.encoder.get_encoder",
"tensorflow.group",
"netutils.learning_rate.get_global_step",
"tensorflow.summary.histogram",
"netutils.lo... | [((1232, 1252), 'sys.path.append', 'sys.path.append', (['"""."""'], {}), "('.')\n", (1247, 1252), False, 'import sys\n'), ((1253, 1275), 'sys.path.append', 'sys.path.append', (['"""../"""'], {}), "('../')\n", (1268, 1275), False, 'import sys\n'), ((2258, 2331), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32'], {'shape': '[None, self.nb_classes]', 'name': '"""y_input"""'}), "(tf.float32, shape=[None, self.nb_classes], name='y_input')\n", (2272, 2331), True, 'import tensorflow as tf\n'), ((2533, 2624), 'encoder.encoder.get_encoder', 'get_encoder', (["self.config['x encoder']", "self.config['encoder params']", 'self.is_training'], {}), "(self.config['x encoder'], self.config['encoder params'], self.\n is_training)\n", (2544, 2624), False, 'from encoder.encoder import get_encoder\n'), ((2875, 2964), 'decoder.decoder.get_decoder', 'get_decoder', (["self.config['decoder']", "self.config['decoder params']", 'self.is_training'], {}), "(self.config['decoder'], self.config['decoder params'], self.\n is_training)\n", (2886, 2964), False, 'from decoder.decoder import get_decoder\n'), ((3325, 3392), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32'], {'shape': '[None, self.z_dim]', 'name': '"""z_test"""'}), "(tf.float32, shape=[None, self.z_dim], name='z_test')\n", (3339, 3392), True, 'import tensorflow as tf\n'), ((3960, 3977), 'netutils.learning_rate.get_global_step', 'get_global_step', ([], {}), '()\n', (3975, 3977), False, 'from netutils.learning_rate import get_global_step\n'), ((4493, 4544), 'tensorflow.group', 'tf.group', (['[self.optimizer, self.global_step_update]'], {}), '([self.optimizer, self.global_step_update])\n', (4501, 4544), True, 'import tensorflow as tf\n'), ((4577, 4678), 'tensorflow.train.Saver', 'tf.train.Saver', (['(self.x_encoder.vars + self.y_encoder.vars)', '(self.decoder.vars + [self.global_step])'], {}), '(self.x_encoder.vars + self.y_encoder.vars, self.decoder.vars +\n [self.global_step])\n', (4591, 4678), 
True, 'import tensorflow as tf\n'), ((5052, 5078), 'tensorflow.summary.merge', 'tf.summary.merge', (['sum_list'], {}), '(sum_list)\n', (5068, 5078), True, 'import tensorflow as tf\n'), ((5386, 5412), 'tensorflow.summary.merge', 'tf.summary.merge', (['sum_list'], {}), '(sum_list)\n', (5402, 5412), True, 'import tensorflow as tf\n'), ((3026, 3068), 'tensorflow.concatenate', 'tf.concatenate', (['[self.x_real, self.y_real]'], {}), '([self.x_real, self.y_real])\n', (3040, 3068), True, 'import tensorflow as tf\n'), ((3484, 3597), 'netutils.loss.get_loss', 'get_loss', (['"""kl"""', "self.config['kl loss']", "{'z_mean': self.z_mean - self.z_mean_y, 'z_log_var': self.z_log_var}"], {}), "('kl', self.config['kl loss'], {'z_mean': self.z_mean - self.\n z_mean_y, 'z_log_var': self.z_log_var})\n", (3492, 3597), False, 'from netutils.loss import get_loss\n'), ((3672, 3779), 'netutils.loss.get_loss', 'get_loss', (['"""reconstruction"""', "self.config['reconstruction loss']", "{'x': self.x_real, 'y': self.x_decode}"], {}), "('reconstruction', self.config['reconstruction loss'], {'x': self.\n x_real, 'y': self.x_decode})\n", (3680, 3779), False, 'from netutils.loss import get_loss\n'), ((4162, 4323), 'netutils.optimizer.get_optimizer', 'get_optimizer', (["self.config['optimizer']", "{'learning_rate': self.learning_rate}", 'self.loss', '(self.decoder.vars + self.x_encoder.vars + self.y_encoder.vars)'], {}), "(self.config['optimizer'], {'learning_rate': self.\n learning_rate}, self.loss, self.decoder.vars + self.x_encoder.vars +\n self.y_encoder.vars)\n", (4175, 4323), False, 'from netutils.optimizer import get_optimizer\n'), ((4352, 4473), 'netutils.optimizer.get_optimizer', 'get_optimizer', (["self.config['optimizer']", '{}', 'self.loss', '(self.decoder.vars + self.x_encoder.vars + self.y_encoder.vars)'], {}), "(self.config['optimizer'], {}, self.loss, self.decoder.vars +\n self.x_encoder.vars + self.y_encoder.vars)\n", (4365, 4473), False, 'from netutils.optimizer import 
get_optimizer\n'), ((4777, 4827), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""encoder/kl_loss"""', 'self.kl_loss'], {}), "('encoder/kl_loss', self.kl_loss)\n", (4794, 4827), True, 'import tensorflow as tf\n'), ((4847, 4890), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""lr"""', 'self.learning_rate'], {}), "('lr', self.learning_rate)\n", (4864, 4890), True, 'import tensorflow as tf\n'), ((4910, 4974), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""decoder/reconstruction_loss"""', 'self.xent_loss'], {}), "('decoder/reconstruction_loss', self.xent_loss)\n", (4927, 4974), True, 'import tensorflow as tf\n'), ((4994, 5030), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""loss"""', 'self.loss'], {}), "('loss', self.loss)\n", (5011, 5030), True, 'import tensorflow as tf\n'), ((5166, 5214), 'tensorflow.summary.histogram', 'tf.summary.histogram', (["('encoder/' + var.name)", 'var'], {}), "('encoder/' + var.name, var)\n", (5186, 5214), True, 'import tensorflow as tf\n'), ((5282, 5330), 'tensorflow.summary.histogram', 'tf.summary.histogram', (["('decoder/' + var.name)", 'var'], {}), "('decoder/' + var.name, var)\n", (5302, 5330), True, 'import tensorflow as tf\n'), ((2195, 2223), 'numpy.product', 'np.product', (['self.input_shape'], {}), '(self.input_shape)\n', (2205, 2223), True, 'import numpy as np\n')] |
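`draw_sample` in the CVAE2 row above is inherited from `BaseModel` and not shown; assuming it implements the standard VAE reparameterization trick (z = mu + sigma * eps with sigma = exp(0.5 * log_var)), a plain-NumPy sketch:

```python
import numpy as np

def draw_sample(z_mean, z_log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, sigma = exp(0.5 * log_var)
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

rng = np.random.default_rng(0)
z_mean = np.array([[1.0, -2.0]])
# A log-variance of -100 gives sigma ~ exp(-50) ~ 0, so the sample
# collapses onto the mean regardless of the noise draw.
z = draw_sample(z_mean, np.full_like(z_mean, -100.0), rng)
```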
import numpy as np
from scipy.interpolate import interp1d
from transomaly import helpers
class PrepareArrays(object):
def __init__(self, passbands=('g', 'r'), contextual_info=('redshift',), use_gp_interp=False):
self.passbands = passbands
self.contextual_info = contextual_info
self.nobs = 50
self.npb = len(passbands)
self.nfeatures = self.npb + len(self.contextual_info)
self.timestep = 3.0
self.mintime = -70
self.maxtime = 80
self.use_gp_interp = use_gp_interp
def get_min_max_time(self, lc):
# Get min and max times for tinterp
mintimes = []
maxtimes = []
for j, pb in enumerate(self.passbands):
pbmask = lc['passband'] == pb
time = lc[pbmask]['time'][0:self.nobs].data
mintimes.append(time.min())
maxtimes.append(time.max())
mintime = min(mintimes)
maxtime = max(maxtimes) + self.timestep
return mintime, maxtime
def get_t_interp(self, lc, extrapolate=False):
mintime, maxtime = self.get_min_max_time(lc)
if extrapolate:
tinterp = np.arange(self.mintime, self.maxtime, step=self.timestep)
len_t = len(tinterp)
return tinterp, len_t
tinterp = np.arange(mintime, maxtime, step=self.timestep)
len_t = len(tinterp)
if len_t > self.nobs:
tinterp = tinterp[(tinterp >= self.mintime)]
len_t = len(tinterp)
if len_t > self.nobs:
tinterp = tinterp[:-(len_t - self.nobs)]
len_t = len(tinterp)
return tinterp, len_t
def update_X(self, X, Xerr, idx, gp_lc, lc, tinterp, len_t, objid, contextual_info, meta_data, nsamples=10):
# # Drop infinite values
# lc.replace([np.inf, -np.inf], np.nan)
for j, pb in enumerate(self.passbands):
pbmask = lc['passband'] == pb
# Get data
time = lc[pbmask]['time'][0:self.nobs].data
flux = lc[pbmask]['flux'][0:self.nobs].data
fluxerr = lc[pbmask]['fluxErr'][0:self.nobs].data
photflag = lc[pbmask]['photflag'][0:self.nobs].data
# Mask out times outside of mintime and maxtime
timemask = (time > self.mintime) & (time < self.maxtime)
time = time[timemask]
flux = flux[timemask]
fluxerr = fluxerr[timemask]
photflag = photflag[timemask]
if time.size < 2:
print(f"Not enough {pb}-band observations in range for object {objid}")
continue
if self.use_gp_interp:
# Draw samples from GP
try:
gp_lc[pb].compute(time, fluxerr)
except Exception as e:
print(f"ERROR FOR OBJECT: {objid}", e)
continue
pred_mean, pred_var = gp_lc[pb].predict(flux, tinterp, return_var=True)
pred_std = np.sqrt(pred_var)
if nsamples > 1:
samples = gp_lc[pb].sample_conditional(flux, t=tinterp, size=nsamples)
elif nsamples == 1:
samples = [pred_mean]
# store samples in X
for ns in range(nsamples):
X[idx + ns][j][0:len_t] = samples[ns]
Xerr[idx + ns][j][0:len_t] = pred_std
else:
# USE LINEAR SPLINE INTERPOLATION INSTEAD
# f = interp1d(time, flux, kind='linear', bounds_error=False, fill_value=0.) ##
spl = helpers.ErrorPropagationSpline(time, flux, fluxerr, k=1, N=100, ext='zeros') # #
# fluxinterp = f(tinterp) ##
fluxinterp, fluxerrinterp = spl(tinterp) # #
fluxinterp = np.nan_to_num(fluxinterp)
# fluxinterp = fluxinterp.clip(min=0)
##
# fluxerrinterp = np.zeros(len_t)
# for interp_idx, fluxinterp_val in enumerate(fluxinterp):
# if fluxinterp_val == 0.:
# fluxerrinterp[interp_idx] = 0
# else:
# nearest_idx = helpers.find_nearest(time, tinterp[interp_idx]) ##
# fluxerrinterp[interp_idx] = fluxerr[nearest_idx] ##
# # fluxerrinterp[interp_idx] = fluxerrinterp[interp_idx]
X[idx][j][0:len_t] = fluxinterp
Xerr[idx][j][0:len_t] = fluxerrinterp
if self.use_gp_interp:
# Add contextual information
for ns in range(nsamples):
for jj, c_info in enumerate(contextual_info, 1):
if meta_data[c_info] == None:
meta_data[c_info] = 0
X[idx + ns][j + jj][0:len_t] = meta_data[c_info] * np.ones(len_t)
else:
# Add contextual information
for jj, c_info in enumerate(contextual_info, 1):
if meta_data[c_info] == None:
meta_data[c_info] = 0
X[idx][j + jj][0:len_t] = meta_data[c_info] * np.ones(len_t)
return X, Xerr
| [
"numpy.sqrt",
"numpy.ones",
"numpy.arange",
"transomaly.helpers.ErrorPropagationSpline",
"numpy.nan_to_num"
] | [((1304, 1351), 'numpy.arange', 'np.arange', (['mintime', 'maxtime'], {'step': 'self.timestep'}), '(mintime, maxtime, step=self.timestep)\n', (1313, 1351), True, 'import numpy as np\n'), ((1160, 1217), 'numpy.arange', 'np.arange', (['self.mintime', 'self.maxtime'], {'step': 'self.timestep'}), '(self.mintime, self.maxtime, step=self.timestep)\n', (1169, 1217), True, 'import numpy as np\n'), ((3025, 3042), 'numpy.sqrt', 'np.sqrt', (['pred_var'], {}), '(pred_var)\n', (3032, 3042), True, 'import numpy as np\n'), ((3637, 3713), 'transomaly.helpers.ErrorPropagationSpline', 'helpers.ErrorPropagationSpline', (['time', 'flux', 'fluxerr'], {'k': '(1)', 'N': '(100)', 'ext': '"""zeros"""'}), "(time, flux, fluxerr, k=1, N=100, ext='zeros')\n", (3667, 3713), False, 'from transomaly import helpers\n'), ((3856, 3881), 'numpy.nan_to_num', 'np.nan_to_num', (['fluxinterp'], {}), '(fluxinterp)\n', (3869, 3881), True, 'import numpy as np\n'), ((5194, 5208), 'numpy.ones', 'np.ones', (['len_t'], {}), '(len_t)\n', (5201, 5208), True, 'import numpy as np\n'), ((4912, 4926), 'numpy.ones', 'np.ones', (['len_t'], {}), '(len_t)\n', (4919, 4926), True, 'import numpy as np\n')] |
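In `PrepareArrays` above, the extrapolated time grid `np.arange(self.mintime, self.maxtime, step=self.timestep)` with the defaults (-70, 80, 3.0) produces exactly `self.nobs = 50` points, which is presumably why those constants were chosen. A quick self-contained check of that grid:

```python
import numpy as np

mintime, maxtime, timestep = -70, 80, 3.0
tinterp = np.arange(mintime, maxtime, step=timestep)
print(len(tinterp), tinterp[0], tinterp[-1])  # -> 50 -70.0 77.0
```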
import math
import numpy as np
from .base_estimator import Estimator
class NaiveEstimator(Estimator):
def estimate(self, proxy_scores, true_labels, max_idx, delta, T, return_upper):
mean = np.mean(true_labels)
ci = self.calc_bernoulli_ci(mean, len(true_labels), delta, T)
if return_upper:
return mean + ci
else:
return mean - ci
| [
"numpy.mean"
] | [((204, 224), 'numpy.mean', 'np.mean', (['true_labels'], {}), '(true_labels)\n', (211, 224), True, 'import numpy as np\n')] |
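`calc_bernoulli_ci` used by `NaiveEstimator` above is defined on the `Estimator` base class and not shown; as a hypothetical stand-in with the same role (not the library's actual bound), a Hoeffding-style confidence radius for a bounded mean:

```python
import math

def hoeffding_ci(n, delta):
    # Half-width of a (1 - delta) Hoeffding confidence interval
    # for the mean of n i.i.d. samples bounded in [0, 1].
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

ci = hoeffding_ci(100, 0.05)
# Quadrupling the sample size halves the radius (1/sqrt(n) scaling).
assert abs(hoeffding_ci(400, 0.05) - ci / 2.0) < 1e-12
```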
from __future__ import annotations
from typing import Tuple, Dict, List, Union
import struct
import numpy
import amulet_nbt
from amulet.api.block import Block
from amulet.api.chunk import Chunk
from amulet.utils.world_utils import get_smallest_dtype
from amulet.world_interface.chunk.interfaces import Interface
from amulet.world_interface.chunk import translators
from amulet.world_interface.chunk.interfaces.leveldb.leveldb_chunk_versions import (
chunk_to_game_version, game_to_chunk_version
)
def brute_sort_objects(data) -> Tuple[numpy.ndarray, numpy.ndarray]:
indexes = {}
unique = []
inverse = []
index = 0
for d in data:
if d not in indexes:
indexes[d] = index
index += 1
unique.append(d)
inverse.append(indexes[d])
unique_ = numpy.empty(len(unique), dtype=object)
for index, obj in enumerate(unique):
unique_[index] = obj
return unique_, numpy.array(inverse)
def brute_sort_objects_no_hash(data) -> Tuple[numpy.ndarray, numpy.ndarray]:
unique = []
inverse = numpy.zeros(dtype=numpy.uint, shape=len(data))
for i, d in enumerate(data):
try:
index = unique.index(d)
except ValueError:
index = len(unique)
unique.append(d)
inverse[i] = index
unique_ = numpy.empty(len(unique), dtype=object)
for index, obj in enumerate(unique):
unique_[index] = obj
return unique_, numpy.array(inverse)
class BaseLevelDBInterface(Interface):
def __init__(self):
feature_options = {
"chunk_version": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
"finalised_state": ["int0-2"],
"data_2d": ["height512|biome256", "unused_height512|biome256"],
"entities": ["32list"],
"block_entities": ["31list"],
"terrain": ["2farray", "2f1palette", "2fnpalette"],
}
self.features = {key: None for key in feature_options.keys()}
def get_translator(
self,
max_world_version: Tuple[str, Tuple[int, int, int]],
data: Dict[bytes, bytes] = None,
) -> Tuple[translators.Translator, Tuple[int, int, int]]:
"""
Get the Translator class for the requested version.
:param max_world_version: The game version the world was last opened in.
:type max_world_version: Java: int (DataVersion) Bedrock: Tuple[int, int, int, ...] (game version number)
:param data: Optional data to get translator based on chunk version rather than world version
:param data: Any
:return: Tuple[Translator, version number for PyMCTranslate to use]
:rtype: Tuple[translators.Translator, Tuple[int, int, int]]
"""
if data:
chunk_version = data[b"v"][0]
game_version = chunk_to_game_version(max_world_version[1], chunk_version)
else:
game_version = max_world_version[1]
return translators.loader.get(("leveldb", game_version)), game_version
def decode(
self, cx: int, cz: int, data: Dict[bytes, bytes]
) -> Tuple[Chunk, numpy.ndarray]:
# chunk_key_base = struct.pack("<ii", cx, cz)
chunk = Chunk(cx, cz)
if self.features["terrain"] in ["2farray", "2f1palette", "2fnpalette"]:
subchunks = [data.get(b"\x2F" + bytes([i]), None) for i in range(16)]
chunk.blocks, palette = self._load_subchunks(subchunks)
else:
raise Exception
if self.features["finalised_state"] == "int0-2":
if b"\x36" in data:
val = struct.unpack("<i", data[b"\x36"])[0]
else:
val = 2
chunk.status = val
if self.features["data_2d"] in [
"height512|biome256", "unused_height512|biome256"
]:
d2d = data.get(b"\x2D", b"\x00" * 768)
height, biome = d2d[:512], d2d[512:]
if self.features["data_2d"] == "height512|biome256":
pass # TODO: put this data somewhere
chunk.biomes = numpy.frombuffer(biome, dtype="uint8")
        # TODO: implement key support
# \x2D heightmap and biomes
# \x31 block entity
# \x32 entity
# \x33 ticks
# \x34 block extra data
# \x35 biome state
# \x39 7 ints and an end (03)? Honestly don't know what this is
# \x3A fire tick?
# \x2E 2d legacy
# \x30 legacy terrain
chunk.entities = None
chunk.block_entities = None
return chunk, palette
def encode(
self,
chunk: Chunk,
palette: numpy.ndarray,
max_world_version: Tuple[int, int, int],
) -> Dict[bytes, bytes]:
chunk_data = {}
# chunk version
if self.features["chunk_version"] is not None:
chunk_data[b"v"] = bytes([self.features["chunk_version"]])
# terrain data
if self.features["terrain"] == "2farray":
terrain = self._save_subchunks_0(chunk.blocks, palette)
elif self.features["terrain"] == "2f1palette":
terrain = self._save_subchunks_1(chunk.blocks, palette)
elif self.features["terrain"] == "2fnpalette":
terrain = self._save_subchunks_8(chunk.blocks, palette)
else:
            raise Exception(f"Unsupported terrain format: {self.features['terrain']}")
for y, sub_chunk in enumerate(terrain):
chunk_data[b"\x2F" + bytes([y])] = sub_chunk
# chunk status
if self.features["finalised_state"] == "int0-2":
chunk_data[b"\x36"] = struct.pack("<i", chunk.status.as_type("b"))
# biome and height data
if self.features["data_2d"] in [
"height512|biome256", "unused_height512|biome256"
]:
if self.features["data_2d"] == "height512|biome256":
d2d = b"\x00" * 512 # TODO: get this data from somewhere
else:
d2d = b"\x00" * 512
d2d += chunk.biomes.convert_to_format(256).astype("uint8").tobytes()
chunk_data[b"\x2D"] = d2d
return chunk_data
def _load_subchunks(
        self, subchunks: List[Union[None, bytes]]
) -> Tuple[numpy.ndarray, numpy.ndarray]:
"""
Load a list of bytes objects which contain chunk data
        This function should be able to load all sub-chunk formats (technically all before it).
        All sub-chunks will almost certainly have the same sub-chunk version, but
        it should be able to handle the case where that is not true.
        As such, this function will return a Chunk and a rather complicated palette.
        The newer formats allow multiple blocks to occupy the same space, and the
        newer versions also include a version per block, so this information must
        also be returned for the translator to handle.
        The palette will be a numpy array containing tuple objects.
        The tuple represents the "block" but can contain more than one Block object.
        Inside the tuple are one or more tuples.
        These include the block version number and the block itself.
        The block version number will be either None, if no block version is given,
        or a tuple containing 4 ints.
        The block will be either a Block class for the newer formats or a tuple of two ints for the older formats.
"""
blocks = numpy.zeros((16, 256, 16), dtype=numpy.uint32)
palette: List[
Tuple[
Tuple[
Union[None, Tuple[int, int, int, int]],
Union[Tuple[int, int], Block],
]
]
] = [
(
(
(1, 7, 0, 0),
Block(
namespace="minecraft",
base_name="air",
properties={"block_data": 0},
),
),
)
]
for y, data in enumerate(subchunks):
if data is None:
continue
if data[0] in [0, 2, 3, 4, 5, 6, 7]:
raise NotImplementedError
elif data[0] in [1, 8]:
if data[0] == 1:
storage_count = 1
data = data[1:]
else:
storage_count, data = data[1], data[2:]
                sub_chunk_blocks = numpy.zeros(
                    (16, 16, 16, storage_count), dtype=numpy.int64  # numpy.int is a deprecated alias
                )
sub_chunk_palette: List[
List[Tuple[Union[None, Tuple[int, int, int, int]], Block]]
] = []
for storage_index in range(storage_count):
sub_chunk_blocks[
:, :, :, storage_index
], palette_data, data = self._load_palette_blocks(
data
)
palette_data_out: List[
Tuple[Union[None, Tuple[int, int, int, int]], Block]
] = []
for block in palette_data:
namespace, base_name = block["name"].value.split(":", 1)
if "version" in block:
version = tuple(
numpy.array([block["version"].value], dtype=">u4").view(
numpy.uint8
)
)
else:
version = None
if "states" in block: # 1.13 format
properties = block["states"].value
else:
properties = {"block_data": str(block["val"].value)}
palette_data_out.append(
(
version,
Block(
namespace=namespace,
base_name=base_name,
properties=properties,
),
)
)
sub_chunk_palette.append(palette_data_out)
y *= 16
if storage_count == 1:
blocks[:, y:y + 16, :] = sub_chunk_blocks[:, :, :, 0] + len(palette)
palette += [(val,) for val in sub_chunk_palette[0]]
elif storage_count > 1:
# we have two or more storages so need to find the unique block combinations and merge them together
sub_chunk_palette_, sub_chunk_blocks = numpy.unique(
sub_chunk_blocks.reshape(-1, storage_count),
return_inverse=True,
axis=0,
)
blocks[:, y:y + 16, :] = sub_chunk_blocks.reshape(16, 16, 16) + len(
palette
)
palette += [
tuple(
sub_chunk_palette[storage_index][index]
for storage_index, index in enumerate(palette_indexes)
if not (
storage_index > 0
and sub_chunk_palette[storage_index][index][
1
].namespaced_name
== "minecraft:air"
)
)
for palette_indexes in sub_chunk_palette_
]
else:
raise Exception("Is a chunk with no storages allowed?")
# palette should now look like this
# List[
# Tuple[
# Tuple[version, Block]
# ]
# ]
numpy_palette, inverse = brute_sort_objects(palette)
blocks = inverse[blocks]
return blocks.astype(f"uint{get_smallest_dtype(blocks)}"), numpy_palette
def _save_subchunks_0(
self, blocks: numpy.ndarray, palette: numpy.ndarray
) -> List[Union[None, bytes]]:
raise NotImplementedError
def _save_subchunks_1(
self, blocks: numpy.ndarray, palette: numpy.ndarray
) -> List[Union[None, bytes]]:
for index, block in enumerate(palette):
block: Tuple[Tuple[None, Block], ...]
block_data = block[0][1].properties.get("block_data", "0")
if isinstance(block_data, str) and block_data.isnumeric():
block_data = int(block_data)
if block_data >= 16:
block_data = 0
else:
block_data = 0
palette[index] = amulet_nbt.NBTFile(
amulet_nbt.TAG_Compound(
{
"name": amulet_nbt.TAG_String(block[0][1].namespaced_name),
"val": amulet_nbt.TAG_Short(block_data),
}
)
)
chunk = []
for y in range(0, 256, 16):
palette_index, sub_chunk = numpy.unique(
blocks[:, y:y + 16, :], return_inverse=True
)
sub_chunk_palette = list(palette[palette_index])
            chunk.append(
                # reshape the flat inverse indexes back into a 16x16x16 sub-chunk
                b"\x01" + self._save_palette_subchunk(
                    sub_chunk.reshape(16, 16, 16), sub_chunk_palette
                )
            )
return chunk
def _save_subchunks_8(
self, blocks: numpy.ndarray, palette: numpy.ndarray
) -> List[Union[None, bytes]]:
palette_depth = numpy.array([len(block) for block in palette])
if palette[0][0][0] is None:
air = amulet_nbt.NBTFile(
amulet_nbt.TAG_Compound(
{
"name": amulet_nbt.TAG_String("minecraft:air"),
"val": amulet_nbt.TAG_Short(0),
}
)
)
else:
air = amulet_nbt.NBTFile(
amulet_nbt.TAG_Compound(
{
"name": amulet_nbt.TAG_String("minecraft:air"),
"states": amulet_nbt.TAG_Compound({}),
"version": amulet_nbt.TAG_Int(17629184),
}
)
)
for index, block in enumerate(palette):
block: Tuple[Tuple[Union[Tuple[int, int, int, int], None], Block], ...]
full_block = []
for sub_block_version, sub_block in block:
properties = sub_block.properties
if sub_block_version is None:
block_data = properties.get("block_data", "0")
if isinstance(block_data, str) and block_data.isnumeric():
block_data = int(block_data)
if block_data >= 16:
block_data = 0
else:
block_data = 0
sub_block_ = amulet_nbt.NBTFile(
amulet_nbt.TAG_Compound(
{
"name": amulet_nbt.TAG_String(
sub_block.namespaced_name
),
"val": amulet_nbt.TAG_Short(block_data),
}
)
)
else:
sub_block_ = amulet_nbt.NBTFile(
amulet_nbt.TAG_Compound(
{
"name": amulet_nbt.TAG_String(
sub_block.namespaced_name
),
"states": amulet_nbt.TAG_Compound(
{
key: val
for key, val in properties.items()
if isinstance(val, amulet_nbt._TAG_Value)
}
),
"version": amulet_nbt.TAG_Int(
sum(
sub_block_version[i] << (24 - i * 8)
for i in range(4)
)
),
}
)
)
full_block.append(sub_block_)
palette[index] = tuple(full_block)
chunk = []
for y in range(0, 256, 16):
palette_index, sub_chunk = numpy.unique(
blocks[:, y:y + 16, :], return_inverse=True
)
sub_chunk_palette = palette[palette_index]
sub_chunk_depth = palette_depth[palette_index].max()
if (
sub_chunk_depth == 1
and len(sub_chunk_palette) == 1
and sub_chunk_palette[0][0]["name"].value == "minecraft:air"
):
chunk.append(None)
else:
# pad palette with air in the extra layers
sub_chunk_palette_full = numpy.empty(
(sub_chunk_palette.size, sub_chunk_depth), dtype=object
)
sub_chunk_palette_full.fill(air)
for index, block_tuple in enumerate(sub_chunk_palette):
for sub_index, block in enumerate(block_tuple):
sub_chunk_palette_full[index, sub_index] = block
# should now be a 2D array with an amulet_nbt.NBTFile in each element
sub_chunk_bytes = [b"\x08", bytes([sub_chunk_depth])]
for sub_chunk_layer_index in range(sub_chunk_depth):
# TODO: sort out a way to do this quicker without brute forcing it.
sub_chunk_layer_palette, sub_chunk_remap = brute_sort_objects_no_hash(
sub_chunk_palette_full[:, sub_chunk_layer_index]
)
sub_chunk_layer = sub_chunk_remap[sub_chunk]
# sub_chunk_layer, sub_chunk_layer_palette = sub_chunk, sub_chunk_palette_full[:, sub_chunk_layer_index]
sub_chunk_bytes.append(
self._save_palette_subchunk(
sub_chunk_layer.reshape(16, 16, 16),
list(sub_chunk_layer_palette.ravel()),
)
)
chunk.append(b"".join(sub_chunk_bytes))
return chunk
    # These aren't actual blocks, just ids pointing to the palette.
def _load_palette_blocks(
self, data
) -> Tuple[numpy.ndarray, List[amulet_nbt.NBTFile], bytes]:
        # Ignore the LSB of data (it's a flag) and get the compacting level
bits_per_block, data = data[0] >> 1, data[1:]
blocks_per_word = 32 // bits_per_block # Word = 4 bytes, basis of compacting.
word_count = -(
-4096 // blocks_per_word
) # Ceiling divide is inverted floor divide
blocks = numpy.packbits(
numpy.pad(
numpy.unpackbits(
numpy.frombuffer(
bytes(reversed(data[:4 * word_count])), dtype="uint8"
)
).reshape(
-1, 32
)[
:, -blocks_per_word * bits_per_block:
].reshape(
-1, bits_per_block
)[
-4096:, :
],
[(0, 0), (16 - bits_per_block, 0)],
"constant",
)
).view(
dtype=">i2"
)[
::-1
]
blocks = blocks.reshape((16, 16, 16)).swapaxes(1, 2)
data = data[4 * word_count:]
palette_len, data = struct.unpack("<I", data[:4])[0], data[4:]
palette, offset = amulet_nbt.load(
buffer=data,
compressed=False,
count=palette_len,
offset=True,
little_endian=True,
)
return blocks, palette, data[offset:]
def _save_palette_subchunk(
self, blocks: numpy.ndarray, palette: List[amulet_nbt.NBTFile]
) -> bytes:
"""Save a single layer of blocks in the palette format"""
chunk: List[bytes] = []
bits_per_block = max(int(blocks.max()).bit_length(), 1)
if bits_per_block == 7:
bits_per_block = 8
elif 9 <= bits_per_block <= 15:
bits_per_block = 16
chunk.append(bytes([bits_per_block << 1]))
blocks_per_word = 32 // bits_per_block # Word = 4 bytes, basis of compacting.
word_count = -(
-4096 // blocks_per_word
) # Ceiling divide is inverted floor divide
blocks = blocks.swapaxes(1, 2).ravel()
blocks = bytes(
reversed(
numpy.packbits(
numpy.pad(
numpy.pad(
numpy.unpackbits(
numpy.ascontiguousarray(blocks[::-1], dtype=">i").view(
dtype="uint8"
)
).reshape(
4096, -1
)[
:, -bits_per_block:
],
[(word_count * blocks_per_word - 4096, 0), (0, 0)],
"constant",
).reshape(
-1, blocks_per_word * bits_per_block
),
[(0, 0), (32 - blocks_per_word * bits_per_block, 0)],
"constant",
)
).view(
dtype=">i4"
).tobytes()
)
)
chunk.append(blocks)
chunk.append(struct.pack("<I", len(palette)))
chunk += [
block.save_to(compressed=False, little_endian=True) for block in palette
]
return b"".join(chunk)
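# A simplified, self-contained sketch of the word-packing arithmetic used by
# _load_palette_blocks / _save_palette_subchunk above: 4096 palette indexes are
# packed into little-endian 32-bit words, 32 // bits_per_block indexes per word,
# with the leftover high bits as padding. This is an illustration of the layout
# math only, written with plain ints; the exact bit/byte ordering of the real
# format is handled by the numpy code above and may differ from this sketch.

```python
def pack_indexes(indexes, bits_per_block):
    blocks_per_word = 32 // bits_per_block  # Word = 4 bytes
    word_count = -(-len(indexes) // blocks_per_word)  # ceiling divide is inverted floor divide
    words = []
    for w in range(word_count):
        word = 0
        # place each index at its bit offset within the 32-bit word
        for i, idx in enumerate(indexes[w * blocks_per_word:(w + 1) * blocks_per_word]):
            word |= idx << (i * bits_per_block)
        words.append(word)
    return b"".join(word.to_bytes(4, "little") for word in words)


def unpack_indexes(data, bits_per_block, count=4096):
    blocks_per_word = 32 // bits_per_block
    mask = (1 << bits_per_block) - 1
    out = []
    for w in range(0, len(data), 4):
        word = int.from_bytes(data[w:w + 4], "little")
        for i in range(blocks_per_word):
            out.append((word >> (i * bits_per_block)) & mask)
    return out[:count]


indexes = [i % 13 for i in range(4096)]
packed = pack_indexes(indexes, bits_per_block=4)
assert unpack_indexes(packed, bits_per_block=4) == indexes
```

# With bits_per_block=4, eight indexes fit per word, so 4096 indexes occupy
# 512 words (2048 bytes) — the same word_count the ceiling divide computes.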
import numpy as np
import os
import pandas as pd
import xarray as xr
from brainio_base.assemblies import array_is_element, walk_coords
from brainscore.benchmarks import BenchmarkBase, ceil_score
from brainscore.benchmarks.screen import place_on_screen
from brainscore.model_interface import BrainModel
from brainscore.benchmarks._neural_common import explained_variance, timebins_from_assembly
from brainio_base.stimuli import StimulusSet
from brainscore.metrics.regression_extra import take_gram, unflatten
class NeuralBenchmarkCovariate(BenchmarkBase):
def __init__(self, identifier, assembly, covariate_image_dir, similarity_metric, visual_degrees, number_of_trials, **kwargs):
super(NeuralBenchmarkCovariate, self).__init__(identifier=identifier, **kwargs)
self._assembly = assembly
self._similarity_metric = similarity_metric
region = np.unique(self._assembly['region'])
assert len(region) == 1
self.region = region[0]
timebins = timebins_from_assembly(self._assembly)
self.timebins = timebins
self._visual_degrees = visual_degrees
self._number_of_trials = number_of_trials
self.covariate_image_dir = covariate_image_dir
def __call__(self, candidate: BrainModel):
candidate.start_recording(self.region, time_bins=self.timebins)
stimulus_set = place_on_screen(self._assembly.stimulus_set, target_visual_degrees=candidate.visual_degrees(),
source_visual_degrees=self._visual_degrees)
# Find 'twin' image set whose model activations will serve as covariate
brainio_dir = os.getenv('BRAINIO_HOME', os.path.join(os.path.expanduser('~'), '.brainio'))
covariate_stimulus_set = pd.DataFrame(stimulus_set)
covariate_stimulus_set = StimulusSet(covariate_stimulus_set)
#covariate_stimulus_set = stimulus_set.copy(deep=True)
covariate_stimulus_set.identifier = stimulus_set.identifier + '_' + self.covariate_image_dir
covariate_stimulus_set.image_paths = {
k: os.path.join(brainio_dir,
self.covariate_image_dir,
os.path.basename(v)) for k, v in stimulus_set.image_paths.items()}
source_assembly = candidate.look_at(stimulus_set, number_of_trials=self._number_of_trials)
source_assembly.attrs['stimulus_set_identifier'] = stimulus_set.identifier
covariate_assembly = candidate.look_at(covariate_stimulus_set, number_of_trials=self._number_of_trials)
covariate_assembly.attrs['stimulus_set_identifier'] = covariate_stimulus_set.identifier
if 'time_bin' in source_assembly.dims:
source_assembly = source_assembly.squeeze('time_bin') # static case for these benchmarks
covariate_assembly = covariate_assembly.squeeze('time_bin') # static case for these benchmarks
raw_score = self._similarity_metric(source_assembly, covariate_assembly, self._assembly)
return explained_variance(raw_score, self.ceiling)
class CacheFeaturesCovariate(BenchmarkBase):
def __init__(self, identifier, assembly, covariate_image_dir, similarity_metric, visual_degrees, number_of_trials, **kwargs):
super(CacheFeaturesCovariate, self).__init__(identifier=identifier, **kwargs)
self._assembly = assembly
self._similarity_metric = similarity_metric
region = np.unique(self._assembly['region'])
assert len(region) == 1
self.region = region[0]
timebins = timebins_from_assembly(self._assembly)
self.timebins = timebins
self._visual_degrees = visual_degrees
self._number_of_trials = number_of_trials
self.covariate_image_dir = covariate_image_dir
def __call__(self, candidate: BrainModel):
candidate.start_recording(self.region, time_bins=self.timebins)
stimulus_set = place_on_screen(self._assembly.stimulus_set, target_visual_degrees=candidate.visual_degrees(),
source_visual_degrees=self._visual_degrees)
# Find 'twin' image set whose model activations will serve as covariate
brainio_dir = os.getenv('BRAINIO_HOME', os.path.join(os.path.expanduser('~'), '.brainio'))
covariate_stimulus_set = pd.DataFrame(stimulus_set)
covariate_stimulus_set = StimulusSet(covariate_stimulus_set)
#covariate_stimulus_set = stimulus_set.copy(deep=True)
covariate_stimulus_set.identifier = stimulus_set.identifier + '_' + self.covariate_image_dir
covariate_stimulus_set.image_paths = {
k: os.path.join(brainio_dir,
self.covariate_image_dir,
os.path.basename(v)) for k, v in stimulus_set.image_paths.items()}
source_assembly = candidate.look_at(stimulus_set, number_of_trials=self._number_of_trials)
source_assembly.attrs['stimulus_set_identifier'] = stimulus_set.identifier
covariate_assembly = candidate.look_at(covariate_stimulus_set, number_of_trials=self._number_of_trials)
covariate_assembly.attrs['stimulus_set_identifier'] = covariate_stimulus_set.identifier
if 'time_bin' in source_assembly.dims:
source_assembly = source_assembly.squeeze('time_bin') # static case for these benchmarks
covariate_assembly = covariate_assembly.squeeze('time_bin') # static case for these benchmarks
#raw_score = self._similarity_metric(source_assembly, covariate_assembly, self._assembly)
return 0
class NeuralBenchmarkCovariateGram(BenchmarkBase):
def __init__(self, identifier, assembly, covariate_image_dir, similarity_metric, visual_degrees, number_of_trials, gram, **kwargs):
super(NeuralBenchmarkCovariateGram, self).__init__(identifier=identifier, **kwargs)
self._assembly = assembly
self._similarity_metric = similarity_metric
region = np.unique(self._assembly['region'])
assert len(region) == 1
self.region = region[0]
timebins = timebins_from_assembly(self._assembly)
self.timebins = timebins
self._visual_degrees = visual_degrees
self._number_of_trials = number_of_trials
self.covariate_image_dir = covariate_image_dir
self.gram = gram
def __call__(self, candidate: BrainModel):
candidate.start_recording(self.region, time_bins=self.timebins)
stimulus_set = place_on_screen(self._assembly.stimulus_set, target_visual_degrees=candidate.visual_degrees(),
source_visual_degrees=self._visual_degrees)
# Find 'twin' image set whose model activations will serve as covariate
brainio_dir = os.getenv('BRAINIO_HOME', os.path.join(os.path.expanduser('~'), '.brainio'))
covariate_stimulus_set = pd.DataFrame(stimulus_set)
covariate_stimulus_set = StimulusSet(covariate_stimulus_set)
#covariate_stimulus_set = stimulus_set.copy(deep=True)
covariate_stimulus_set.identifier = stimulus_set.identifier + '_' + self.covariate_image_dir
covariate_stimulus_set.image_paths = {
k: os.path.join(brainio_dir,
self.covariate_image_dir,
os.path.basename(v)) for k, v in stimulus_set.image_paths.items()}
source_assembly = candidate.look_at(stimulus_set, number_of_trials=self._number_of_trials)
source_assembly.attrs['stimulus_set_identifier'] = stimulus_set.identifier
covariate_assembly = candidate.look_at(covariate_stimulus_set, number_of_trials=self._number_of_trials)
covariate_assembly.attrs['stimulus_set_identifier'] = covariate_stimulus_set.identifier
if 'time_bin' in source_assembly.dims:
source_assembly = source_assembly.squeeze('time_bin') # static case for these benchmarks
covariate_assembly = covariate_assembly.squeeze('time_bin') # static case for these benchmarks
if self.gram:
model = covariate_assembly.model.values[0]
layer = covariate_assembly.layer.values[0]
fname = os.path.join(brainio_dir, self.covariate_image_dir, '_'.join([model, layer.replace('/', '_'), covariate_assembly.stimulus_set_identifier, 'gram.nc']))
covariate_assembly = gram_on_all(covariate_assembly, fname = fname)
source_assembly, covariate_assembly = source_assembly.sortby('image_id'), covariate_assembly.sortby('image_id')
covariate_assembly = covariate_assembly.rename({'image_id': 'presentation'})
covariate_assembly = covariate_assembly.assign_coords({'presentation': source_assembly.presentation.coords.to_index()})
covariate_assembly = covariate_assembly.assign_coords({'neuroid': pd.MultiIndex.from_tuples(list(zip([model]*covariate_assembly.shape[0],
[layer]*covariate_assembly.shape[0])), names=['model', 'layer'])})
covariate_assembly.attrs['stimulus_set_identifier'] = covariate_stimulus_set.identifier
raw_score = self._similarity_metric(source_assembly, covariate_assembly, self._assembly)
return explained_variance(raw_score, self.ceiling)
class NeuralBenchmarkImageDir(BenchmarkBase):
def __init__(self, identifier, assembly, image_dir, similarity_metric, visual_degrees, number_of_trials, **kwargs):
super(NeuralBenchmarkImageDir, self).__init__(identifier=identifier, **kwargs)
self._assembly = assembly
self._similarity_metric = similarity_metric
region = np.unique(self._assembly['region'])
assert len(region) == 1
self.region = region[0]
timebins = timebins_from_assembly(self._assembly)
self.timebins = timebins
self._visual_degrees = visual_degrees
self._number_of_trials = number_of_trials
self.image_dir = image_dir
def __call__(self, candidate: BrainModel):
candidate.start_recording(self.region, time_bins=self.timebins)
stimulus_set = place_on_screen(self._assembly.stimulus_set,
target_visual_degrees=candidate.visual_degrees(),
source_visual_degrees=self._visual_degrees)
# Find 'twin' image set whose model activations we need
brainio_dir = os.getenv('BRAINIO_HOME', os.path.join(os.path.expanduser('~'), '.brainio'))
stimulus_set_from_dir = pd.DataFrame(stimulus_set)
stimulus_set_from_dir = StimulusSet(stimulus_set_from_dir)
# stimulus_set_from_dir = stimulus_set.copy(deep=True)
stimulus_set_from_dir.identifier = stimulus_set.identifier + '_' + self.image_dir
stimulus_set_from_dir.image_paths = {
k: os.path.join(brainio_dir,
self.image_dir,
os.path.basename(v)) for k, v in stimulus_set.image_paths.items()}
source_assembly = candidate.look_at(stimulus_set_from_dir, number_of_trials=self._number_of_trials)
source_assembly.attrs['stimulus_set_identifier'] = stimulus_set_from_dir.identifier
if 'time_bin' in source_assembly.dims:
source_assembly = source_assembly.squeeze('time_bin') # static case for these benchmarks
raw_score = self._similarity_metric(source_assembly, self._assembly)
return explained_variance(raw_score, self.ceiling)
class ToleranceCeiling(BenchmarkBase):
def __init__(self, identifier, assembly, similarity_metric, visual_degrees, number_of_trials, **kwargs):
super(ToleranceCeiling, self).__init__(identifier=identifier, **kwargs)
self._assembly = assembly
self._similarity_metric = similarity_metric
region = np.unique(self._assembly['region'])
assert len(region) == 1
self.region = region[0]
timebins = timebins_from_assembly(self._assembly)
self.timebins = timebins
self._visual_degrees = visual_degrees
self._number_of_trials = number_of_trials
def __call__(self, candidate: BrainModel):
candidate.start_recording(self.region, time_bins=self.timebins)
stimulus_set = place_on_screen(self._assembly.stimulus_set, target_visual_degrees=candidate.visual_degrees(),
source_visual_degrees=self._visual_degrees)
raw_score = self._similarity_metric(self._assembly)
return explained_variance(raw_score)
class NeuralBenchmarkCeiling(BenchmarkBase):
def __init__(self, identifier, assembly, number_of_trials, **kwargs):
super(NeuralBenchmarkCeiling, self).__init__(identifier=identifier, **kwargs)
self._assembly = assembly
region = np.unique(self._assembly['region'])
assert len(region) == 1
self.region = region[0]
timebins = timebins_from_assembly(self._assembly)
self.timebins = timebins
self._number_of_trials = number_of_trials
def __call__(self):
return self.ceiling
def gram_on_all(assembly, fname):
if os.path.isfile(fname):
assembly = xr.open_dataarray(fname)
else:
assembly = assembly.T
image_ids = assembly.image_id.values
assembly = unflatten(assembly, channel_coord=None)
assembly = assembly.reshape(list(assembly.shape[0:2]) + [-1])
assembly = take_gram(assembly)
assembly = assembly.T
assembly = xr.DataArray(assembly, dims=['neuroid','image_id'], coords={'image_id':image_ids})
assembly.to_netcdf(fname)
return assembly
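# A hypothetical sketch of the per-image Gram-matrix step that take_gram (from
# brainscore.metrics.regression_extra) performs inside gram_on_all above. The
# real implementation lives in that module; this assumes the common
# "channels x channels" inner-product definition used in style transfer, which
# may differ in normalization from the project's version.

```python
import numpy as np


def gram_matrix(activations):
    """activations: (channels, locations) array for one image."""
    channels, locations = activations.shape
    # inner products between channel activation maps, averaged over locations
    return activations @ activations.T / locations  # shape: (channels, channels)


features = np.random.RandomState(0).randn(8, 49)  # e.g. 8 channels over a 7x7 map
g = gram_matrix(features)
assert g.shape == (8, 8)
assert np.allclose(g, g.T)  # Gram matrices are symmetric
```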
from typing import List, Union, Optional, Tuple, Dict
from typing_extensions import TypedDict
from collections import defaultdict
from pathlib import Path
import torch
from torch.utils.data import Dataset
from torch.utils.data.dataloader import default_collate, DataLoader
from torch.nn.utils.rnn import pad_sequence
from vocab import Vocab
import numpy as np
from scipy.spatial.transform import Rotation
import habitat_sim
from habitat_sim import bindings as hsim
from habitat_sim import registry
from habitat_sim.agent.agent import AgentConfiguration, AgentState
PathOrStr = Union[str, Path]
Sample = TypedDict(
"Sample",
{
"pose": torch.Tensor,
"image": Optional[torch.Tensor],
"depth": Optional[torch.Tensor],
"room_types": torch.Tensor,
"object_labels": torch.Tensor,
},
)
def _generate_label_map(scene, verbose=False) -> Tuple[Dict[int, str], Dict[int, str]]:
if verbose:
print(
f"House has {len(scene.levels)} levels, {len(scene.regions)} regions and {len(scene.objects)} objects"
)
print(f"House center:{scene.aabb.center} dims:{scene.aabb.sizes}")
instance_id_to_object_name = {}
instance_id_to_room_name = {}
for region in scene.regions:
for obj in region.objects:
if not obj or not obj.category:
continue
obj_id = int(obj.id.split("_")[-1])
instance_id_to_object_name[obj_id] = obj.category.name()
# you might see a wall while not being able to recognize the room
if obj.category.name() not in ("wall", "floor", "ceiling", "door"):
instance_id_to_room_name[obj_id] = region.category.name()
return instance_id_to_object_name, instance_id_to_room_name
class MatterportDataset(Dataset):
"""
    Provide the image, room types and object labels visible to the agent
    TODO: add camera parameter fov
    TODO: is the camera position setting correct? (h=1.5m)
TODO: should we use a cache?
TODO: should we count a room/object only if it appears at least on X% of the pixels?
"""
def __init__(
self,
scene_filepaths: Union[PathOrStr, List[PathOrStr]],
        poses: List[Tuple[float, float, float, float, float]],
img_size: Tuple[int, int] = (512, 512),
rgb: bool = True,
depth: bool = False,
discard: Tuple[str, ...] = ("misc", "", "objects"),
threshold: float = 0.01,
data_dir: Path = Path("data/"),
):
"""
@param scene_filepaths: paths to the .glb files
        @param poses: N x 5 (x, y, z, heading, elevation)
        @param data_dir: location of Matterport3D data (symlink or real path)
"""
self.poses = poses
self.img_size = img_size
self.rgb = rgb
self.depth = depth
self.discard = discard
self.threshold = threshold
self.data_dir = Path(data_dir)
self.data_dir.mkdir(exist_ok=True, parents=True)
if isinstance(scene_filepaths, (Path, str)):
scene_filepaths = [scene_filepaths]
self.scene_filepaths = scene_filepaths
assert all(Path(f).is_file() for f in self.scene_filepaths), "Can't find scenes"
self.sim = None
self._is_init = False
def _init(self):
self.cfg = self._config_sim()
self.sim = habitat_sim.Simulator(self.cfg)
self._init_voc()
def _init_voc(self):
voc_file = self.data_dir / "voc.pth"
# try first to load voc
if voc_file.is_file():
voc = torch.load(voc_file)
if "discard" in voc and self.discard == voc["discard"]:
self.object_name_to_instance_ids = voc["object_name_to_instance_ids"]
self.room_name_to_instance_ids = voc["room_name_to_instance_ids"]
self.object_names = voc["object_names"]
self.room_names = voc["room_names"]
return
# compute the voc
instance_id_to_object_name, instance_id_to_room_name = _generate_label_map(
self.sim.semantic_scene
)
self.object_name_to_instance_ids = defaultdict(list)
for iid, name in instance_id_to_object_name.items():
if name not in self.discard:
self.object_name_to_instance_ids[name].append(iid)
self.room_name_to_instance_ids = defaultdict(list)
for iid, name in instance_id_to_room_name.items():
if name not in self.discard:
self.room_name_to_instance_ids[name].append(iid)
self.room_names = Vocab(list(self.room_name_to_instance_ids))
self.object_names = Vocab(list(self.object_name_to_instance_ids))
torch.save({
"discard": self.discard,
"room_name_to_instance_ids": self.room_name_to_instance_ids,
"object_name_to_instance_ids": self.object_name_to_instance_ids,
"object_names": self.object_names,
"room_names": self.room_names,
}, voc_file)
def __len__(self) -> int:
return len(self.poses)
def __getitem__(self, index: int) -> Sample:
if self.sim is None:
self._init()
new_state = AgentState()
new_state.position = self.poses[index][:3]
head = self.poses[index][3]
elev = self.poses[index][4]
rot = Rotation.from_euler('xy', [elev, head], degrees=False)
new_state.rotation = rot.as_quat()
self.sim.agents[0].set_state(new_state)
obs = self.sim.get_sensor_observations()
image = torch.Tensor(obs["color_sensor"]) if self.rgb else None
depth = torch.Tensor(obs["depth_sensor"]) if self.depth else None
content_ids = obs["semantic_sensor"].flatten()
image_size = content_ids.size
object_names = []
for name, iids in self.object_name_to_instance_ids.items():
ratio = np.isin(obs["semantic_sensor"], iids).sum()/image_size
if ratio > self.threshold:
object_names.append(name)
room_names = []
for name, iids in self.room_name_to_instance_ids.items():
ratio = np.isin(obs["semantic_sensor"], iids).sum()/image_size
if ratio > self.threshold:
room_names.append(name)
object_labels = [self.object_names.word2index(n) for n in object_names]
room_types = [self.room_names.word2index(n) for n in room_names]
return {
"image": image,
"depth": depth,
"pose": torch.Tensor(self.poses[index]),
"object_labels": torch.Tensor(object_labels),
"room_types": torch.Tensor(room_types),
}
def _config_sim(self):
settings = {
"width": self.img_size[1], # Spatial resolution of the observations
"height": self.img_size[0],
"scene": self.scene_filepaths[0], # Scene path
"default_agent": 0,
"sensor_height": 1.5, # Height of sensors in meters
"color_sensor": self.rgb, # RGBA sensor
"semantic_sensor": True, # Semantic sensor
"depth_sensor": self.depth, # Depth sensor
"silent": True,
}
sim_cfg = hsim.SimulatorConfiguration()
sim_cfg.enable_physics = False
sim_cfg.gpu_device_id = 0
sim_cfg.scene_id = settings["scene"]
# define default sensor parameters (see src/esp/Sensor/Sensor.h)
sensor_specs = []
if settings["color_sensor"]:
color_sensor_spec = habitat_sim.CameraSensorSpec()
color_sensor_spec.uuid = "color_sensor"
color_sensor_spec.sensor_type = habitat_sim.SensorType.COLOR
color_sensor_spec.resolution = [settings["height"], settings["width"]]
color_sensor_spec.position = [0.0, settings["sensor_height"], 0.0]
color_sensor_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(color_sensor_spec)
if settings["depth_sensor"]:
depth_sensor_spec = habitat_sim.CameraSensorSpec()
depth_sensor_spec.uuid = "depth_sensor"
depth_sensor_spec.sensor_type = habitat_sim.SensorType.DEPTH
depth_sensor_spec.resolution = [settings["height"], settings["width"]]
depth_sensor_spec.position = [0.0, settings["sensor_height"], 0.0]
depth_sensor_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(depth_sensor_spec)
if settings["semantic_sensor"]:
semantic_sensor_spec = habitat_sim.CameraSensorSpec()
semantic_sensor_spec.uuid = "semantic_sensor"
semantic_sensor_spec.sensor_type = habitat_sim.SensorType.SEMANTIC
semantic_sensor_spec.resolution = [settings["height"], settings["width"]]
semantic_sensor_spec.position = [0.0, settings["sensor_height"], 0.0]
semantic_sensor_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(semantic_sensor_spec)
# create agent specifications
agent_cfg = AgentConfiguration()
agent_cfg.sensor_specifications = sensor_specs
return habitat_sim.Configuration(sim_cfg, [agent_cfg])
def close(self) -> None:
r"""Deletes the instance of the simulator."""
if self.sim is not None:
self.sim.close()
del self.sim
self.sim = None
def collate_fn(samples: List[Sample]) -> Dict[str, torch.Tensor]:
""" resolve optional keys on a sample """
el0 = samples[0]
batch = {}
for key in el0:
if el0[key] is None:
continue
seq = [el[key] for el in samples]
if key in ("object_labels", "room_types"):
batch[key] = pad_sequence(seq, batch_first=True, padding_value=-1)
else:
batch[key] = torch.stack(seq, 0)
return batch
def display_sample(sample: Dict[str, torch.Tensor], voc_file: Path):
import matplotlib.pyplot as plt
# load voc
voc = torch.load(voc_file)
object_voc = voc["object_names"]
room_voc = voc["room_names"]
batch_size = sample["pose"].shape[0]
for i in range(batch_size):
arr = {}
if "depth" in sample:
arr["depth"] = sample["depth"][i].long()
if "image" in sample:
arr["image"] = sample["image"][i].long()
plt.figure(figsize=(12, 8))
for j, (key, data) in enumerate(arr.items()):
ax = plt.subplot(1, len(arr), j + 1)
ax.axis("off")
ax.set_title(key)
plt.imshow(data)
plt.show(f"{i}.jpg")
print(f"Object labels in sample {i}")
for obj in sample["object_labels"][i].long().tolist():
if obj != -1:
print(object_voc.index2word(obj))
print(f"\nRoom labels in sample {i}")
for room in sample["room_types"][i].long().tolist():
if room != -1:
print(room_voc.index2word(room))
if __name__ == "__main__":
""" Testing the interface """
dataset = MatterportDataset(
scene_filepaths="data/mp3d/17DRP5sb8fy/17DRP5sb8fy.glb",
        poses=torch.randn((10, 5)).tolist(),
)
dataloader = DataLoader(
dataset, num_workers=4, shuffle=True, collate_fn=collate_fn, batch_size=2
)
sample = next(iter(dataloader))
display_sample(sample, "data/voc.pth")
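The `collate_fn` above pads the ragged `object_labels` / `room_types` sequences with `-1` via `pad_sequence(batch_first=True)`. The same padding idea in plain numpy (an illustrative sketch, not the torch implementation):

```python
import numpy as np

def pad_ragged(seqs, pad_value=-1.0):
    # Stack variable-length sequences into a (batch, max_len) array,
    # right-padding shorter ones with pad_value, mirroring pad_sequence
    # with batch_first=True and padding_value=-1.
    max_len = max(len(s) for s in seqs)
    out = np.full((len(seqs), max_len), pad_value)
    for i, s in enumerate(seqs):
        out[i, :len(s)] = s
    return out

batch = pad_ragged([[3, 7], [1], [4, 5, 9]])
print(batch.shape)  # (3, 3); rows shorter than 3 are filled with -1.0
```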
| [
"habitat_sim.Simulator",
"typing_extensions.TypedDict",
"torch.nn.utils.rnn.pad_sequence",
"numpy.isin",
"matplotlib.pyplot.imshow",
"habitat_sim.Configuration",
"scipy.spatial.transform.Rotation.from_euler",
"pathlib.Path",
"torch.utils.data.dataloader.DataLoader",
"habitat_sim.bindings.Simulator... | [((617, 793), 'typing_extensions.TypedDict', 'TypedDict', (['"""Sample"""', "{'pose': torch.Tensor, 'image': Optional[torch.Tensor], 'depth': Optional[\n torch.Tensor], 'room_types': torch.Tensor, 'object_labels': torch.Tensor}"], {}), "('Sample', {'pose': torch.Tensor, 'image': Optional[torch.Tensor],\n 'depth': Optional[torch.Tensor], 'room_types': torch.Tensor,\n 'object_labels': torch.Tensor})\n", (626, 793), False, 'from typing_extensions import TypedDict\n'), ((10134, 10154), 'torch.load', 'torch.load', (['voc_file'], {}), '(voc_file)\n', (10144, 10154), False, 'import torch\n'), ((11340, 11429), 'torch.utils.data.dataloader.DataLoader', 'DataLoader', (['dataset'], {'num_workers': '(4)', 'shuffle': '(True)', 'collate_fn': 'collate_fn', 'batch_size': '(2)'}), '(dataset, num_workers=4, shuffle=True, collate_fn=collate_fn,\n batch_size=2)\n', (11350, 11429), False, 'from torch.utils.data.dataloader import default_collate, DataLoader\n'), ((2500, 2513), 'pathlib.Path', 'Path', (['"""data/"""'], {}), "('data/')\n", (2504, 2513), False, 'from pathlib import Path\n'), ((2930, 2944), 'pathlib.Path', 'Path', (['data_dir'], {}), '(data_dir)\n', (2934, 2944), False, 'from pathlib import Path\n'), ((3374, 3405), 'habitat_sim.Simulator', 'habitat_sim.Simulator', (['self.cfg'], {}), '(self.cfg)\n', (3395, 3405), False, 'import habitat_sim\n'), ((4174, 4191), 'collections.defaultdict', 'defaultdict', (['list'], {}), '(list)\n', (4185, 4191), False, 'from collections import defaultdict\n'), ((4415, 4432), 'collections.defaultdict', 'defaultdict', (['list'], {}), '(list)\n', (4426, 4432), False, 'from collections import defaultdict\n'), ((4752, 5005), 'torch.save', 'torch.save', (["{'discard': self.discard, 'room_name_to_instance_ids': self.\n room_name_to_instance_ids, 'object_name_to_instance_ids': self.\n object_name_to_instance_ids, 'object_names': self.object_names,\n 'room_names': self.room_names}", 'voc_file'], {}), "({'discard': 
self.discard, 'room_name_to_instance_ids': self.\n room_name_to_instance_ids, 'object_name_to_instance_ids': self.\n object_name_to_instance_ids, 'object_names': self.object_names,\n 'room_names': self.room_names}, voc_file)\n", (4762, 5005), False, 'import torch\n'), ((5251, 5263), 'habitat_sim.agent.agent.AgentState', 'AgentState', ([], {}), '()\n', (5261, 5263), False, 'from habitat_sim.agent.agent import AgentConfiguration, AgentState\n'), ((5402, 5456), 'scipy.spatial.transform.Rotation.from_euler', 'Rotation.from_euler', (['"""xy"""', '[elev, head]'], {'degrees': '(False)'}), "('xy', [elev, head], degrees=False)\n", (5421, 5456), False, 'from scipy.spatial.transform import Rotation\n'), ((7299, 7328), 'habitat_sim.bindings.SimulatorConfiguration', 'hsim.SimulatorConfiguration', ([], {}), '()\n', (7326, 7328), True, 'from habitat_sim import bindings as hsim\n'), ((9195, 9215), 'habitat_sim.agent.agent.AgentConfiguration', 'AgentConfiguration', ([], {}), '()\n', (9213, 9215), False, 'from habitat_sim.agent.agent import AgentConfiguration, AgentState\n'), ((9286, 9333), 'habitat_sim.Configuration', 'habitat_sim.Configuration', (['sim_cfg', '[agent_cfg]'], {}), '(sim_cfg, [agent_cfg])\n', (9311, 9333), False, 'import habitat_sim\n'), ((10492, 10519), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(12, 8)'}), '(figsize=(12, 8))\n', (10502, 10519), True, 'import matplotlib.pyplot as plt\n'), ((10717, 10737), 'matplotlib.pyplot.show', 'plt.show', (['f"""{i}.jpg"""'], {}), "(f'{i}.jpg')\n", (10725, 10737), True, 'import matplotlib.pyplot as plt\n'), ((3585, 3605), 'torch.load', 'torch.load', (['voc_file'], {}), '(voc_file)\n', (3595, 3605), False, 'import torch\n'), ((5614, 5647), 'torch.Tensor', 'torch.Tensor', (["obs['color_sensor']"], {}), "(obs['color_sensor'])\n", (5626, 5647), False, 'import torch\n'), ((5686, 5719), 'torch.Tensor', 'torch.Tensor', (["obs['depth_sensor']"], {}), "(obs['depth_sensor'])\n", (5698, 5719), False, 'import torch\n'), 
((6597, 6628), 'torch.Tensor', 'torch.Tensor', (['self.poses[index]'], {}), '(self.poses[index])\n', (6609, 6628), False, 'import torch\n'), ((6659, 6686), 'torch.Tensor', 'torch.Tensor', (['object_labels'], {}), '(object_labels)\n', (6671, 6686), False, 'import torch\n'), ((6714, 6738), 'torch.Tensor', 'torch.Tensor', (['room_types'], {}), '(room_types)\n', (6726, 6738), False, 'import torch\n'), ((7616, 7646), 'habitat_sim.CameraSensorSpec', 'habitat_sim.CameraSensorSpec', ([], {}), '()\n', (7644, 7646), False, 'import habitat_sim\n'), ((8136, 8166), 'habitat_sim.CameraSensorSpec', 'habitat_sim.CameraSensorSpec', ([], {}), '()\n', (8164, 8166), False, 'import habitat_sim\n'), ((8662, 8692), 'habitat_sim.CameraSensorSpec', 'habitat_sim.CameraSensorSpec', ([], {}), '()\n', (8690, 8692), False, 'import habitat_sim\n'), ((9871, 9924), 'torch.nn.utils.rnn.pad_sequence', 'pad_sequence', (['seq'], {'batch_first': '(True)', 'padding_value': '(-1)'}), '(seq, batch_first=True, padding_value=-1)\n', (9883, 9924), False, 'from torch.nn.utils.rnn import pad_sequence\n'), ((9964, 9983), 'torch.stack', 'torch.stack', (['seq', '(0)'], {}), '(seq, 0)\n', (9975, 9983), False, 'import torch\n'), ((10692, 10708), 'matplotlib.pyplot.imshow', 'plt.imshow', (['data'], {}), '(data)\n', (10702, 10708), True, 'import matplotlib.pyplot as plt\n'), ((11285, 11305), 'torch.randn', 'torch.randn', (['(10, 4)'], {}), '((10, 4))\n', (11296, 11305), False, 'import torch\n'), ((3170, 3177), 'pathlib.Path', 'Path', (['f'], {}), '(f)\n', (3174, 3177), False, 'from pathlib import Path\n'), ((5952, 5989), 'numpy.isin', 'np.isin', (["obs['semantic_sensor']", 'iids'], {}), "(obs['semantic_sensor'], iids)\n", (5959, 5989), True, 'import numpy as np\n'), ((6215, 6252), 'numpy.isin', 'np.isin', (["obs['semantic_sensor']", 'iids'], {}), "(obs['semantic_sensor'], iids)\n", (6222, 6252), True, 'import numpy as np\n')] |
import numpy as np
import cv2 as cv
def get_data_in_qr_image(img):
qr_detector = cv.QRCodeDetector()
data, bbox, _ = qr_detector.detectAndDecode(img)
if bbox is not None and len(bbox) > 0:
return data
return None
def image_from_bytestring(bytestring):
    return cv.imdecode(np.frombuffer(bytestring, np.uint8), cv.IMREAD_COLOR)
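`image_from_bytestring` first views the raw byte string as a flat `uint8` array before handing it to `cv.imdecode`. `np.frombuffer` is the non-deprecated spelling of that step (`np.fromstring` was deprecated for binary input in NumPy 1.14), and it can be exercised without OpenCV; the payload below is a hypothetical stand-in:

```python
import numpy as np

# Hypothetical raw payload, e.g. the body of an HTTP upload or a file read.
payload = bytes([0, 1, 2, 128, 255])

# View the immutable bytes as a flat uint8 array, ready for cv.imdecode.
buf = np.frombuffer(payload, dtype=np.uint8)
print(buf.tolist())  # [0, 1, 2, 128, 255]
```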
| [
"numpy.fromstring",
"cv2.QRCodeDetector"
] | [((87, 106), 'cv2.QRCodeDetector', 'cv.QRCodeDetector', ([], {}), '()\n', (104, 106), True, 'import cv2 as cv\n'), ((303, 338), 'numpy.fromstring', 'np.fromstring', (['bytestring', 'np.uint8'], {}), '(bytestring, np.uint8)\n', (316, 338), True, 'import numpy as np\n')] |
# Licensed under a 3-clause BSD style license - see LICENSE.rst
import os
from itertools import izip
import tables
import numpy as np
from astropy.table import Table, vstack
from Chandra.Time import DateTime
import healpy as hp
from agasc import sphere_dist, get_star
import matplotlib.pyplot as plt
NSIDE = 16
MIN_ACA_DIST = 25. / 3600  # 25 arcsec min separation (value is in degrees)
MAX_ACA_DIST = 2.1 # degrees corner to corner
if 'STAR_CACHE' not in globals():
STAR_CACHE = {}
def make_distances_h5(max_mag=10.5, outfile='distances.h5', microagasc_file='microagasc.fits'):
if os.path.exists(outfile):
raise IOError('{} already exists'.format(outfile))
agasc = get_microagasc(microagasc_file, max_mag=max_mag)
t = get_all_dists(agasc)
make_h5_file(t, outfile)
def make_microagasc(max_mag=10.5, outfile='microagasc.fits'):
import tables
with tables.openFile('/proj/sot/ska/data/agasc/miniagasc.h5') as h5:
print('Reading miniagasc ...')
mag_aca = h5.root.data.col('MAG_ACA')
ok = mag_aca < max_mag
mag_aca = mag_aca[ok]
ra = h5.root.data.col('RA')[ok]
ra_pm = h5.root.data.col('PM_RA')[ok]
dec = h5.root.data.col('DEC')[ok]
dec_pm = h5.root.data.col('PM_DEC')[ok]
agasc_id = h5.root.data.col('AGASC_ID')[ok]
print(' Done.')
t = Table([agasc_id, ra, dec, ra_pm, dec_pm, mag_aca],
names=['agasc_id', 'ra', 'dec', 'ra_pm', 'dec_pm', 'mag_aca'])
t.write(outfile, overwrite=True)
def get_microagasc(filename='microagasc.fits', date=None, max_mag=10.5):
if not os.path.exists(filename):
make_microagasc()
t = Table.read(filename, format='fits')
t = t[t['mag_aca'] < max_mag]
# Compute the multiplicative factor to convert from the AGASC proper motion
# field to degrees. The AGASC PM is specified in milliarcsecs / year, so this
# is dyear * (degrees / milliarcsec)
agasc_equinox = DateTime('2000:001:00:00:00.000')
dyear = (DateTime(date) - agasc_equinox) / 365.25
pm_to_degrees = dyear / (3600. * 1000.)
ra = t['ra']
dec = t['dec']
ra_pm = t['ra_pm']
dec_pm = t['dec_pm']
ok = ra_pm != -9999 # Select stars with an available PM correction
ra[ok] = ra[ok] + ra_pm[ok] * pm_to_degrees
ok = dec_pm != -9999 # Select stars with an available PM correction
dec[ok] = dec[ok] + dec_pm[ok] * pm_to_degrees
phi = np.radians(ra)
theta = np.radians(90 - dec)
ipix = hp.ang2pix(NSIDE, theta, phi)
t['ipix'] = ipix.astype(np.uint16 if np.max(ipix) < 65536 else np.uint32)
t.sort('ipix')
return t
def get_stars_for_region(agasc, ipix):
i0, i1 = np.searchsorted(agasc['ipix'], [ipix, ipix + 1])
return agasc[i0:i1]
def _get_dists(s0, s1):
idx0, idx1 = np.mgrid[slice(0, len(s0)), slice(0, len(s1))]
idx0 = idx0.ravel()
idx1 = idx1.ravel()
ok = s0['agasc_id'][idx0] < s1['agasc_id'][idx1]
idx0 = idx0[ok]
idx1 = idx1[ok]
dists = sphere_dist(s0['ra'][idx0], s0['dec'][idx0],
s1['ra'][idx1], s1['dec'][idx1])
ok = (dists < MAX_ACA_DIST) & (dists > MIN_ACA_DIST)
dists = dists[ok] * 3600 # convert to arcsec here
idx0 = idx0[ok]
idx1 = idx1[ok]
dists = dists.astype(np.float32)
id0 = s0['agasc_id'][idx0].astype(np.int32)
id1 = s1['agasc_id'][idx1].astype(np.int32)
# Store mag values in millimag as int16. Clip for good measure.
mag0 = np.clip(s0['mag_aca'][idx0] * 1000, -32000, 32000).astype(np.int16)
mag1 = np.clip(s1['mag_aca'][idx1] * 1000, -32000, 32000).astype(np.int16)
out = Table([dists, id0, id1, mag0, mag1],
names=['dists', 'agasc_id0', 'agasc_id1', 'mag0', 'mag1'])
return out
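The int16 millimag packing in `_get_dists` trades precision for space: values are truncated to 1 mmag and clipped at ±32 mag. A quick round-trip with hypothetical magnitudes makes the effect concrete:

```python
import numpy as np

mags = np.array([10.123456, -1.5, 33.0])  # hypothetical ACA magnitudes
packed = np.clip(mags * 1000, -32000, 32000).astype(np.int16)
unpacked = packed / 1000.0
print(packed.tolist())    # [10123, -1500, 32000] -- truncated to 1 mmag, clipped at 32 mag
print(unpacked.tolist())  # [10.123, -1.5, 32.0]
```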
def get_dists_for_region(agasc, ipix):
s0 = get_stars_for_region(agasc, ipix)
ipix_neighbors = hp.get_all_neighbours(NSIDE, ipix)
s1s = [get_stars_for_region(agasc, ipix_neighbor) for ipix_neighbor in ipix_neighbors]
t_list = []
for s1 in [s0] + s1s:
t_list.append(_get_dists(s0, s1))
out = vstack(t_list)
return out
def make_h5_file(t, filename='distances.h5'):
"""Make a new h5 table to hold column from ``dat``."""
filters = tables.Filters(complevel=5, complib='zlib')
dat = np.array(t)
with tables.openFile(filename, mode='a', filters=filters) as h5:
h5.createTable(h5.root, "data", dat, "Data table", expectedrows=len(dat))
h5.root.data.flush()
with tables.openFile(filename, mode='a', filters=filters) as h5:
print('Creating index')
h5.root.data.cols.dists.createIndex()
h5.flush()
def get_all_dists(agasc=None):
if agasc is None:
agasc = get_microagasc()
t_list = []
for ipix in xrange(hp.nside2npix(NSIDE)):
print(ipix)
t_list.append(get_dists_for_region(agasc, ipix))
out = vstack(t_list)
out.sort('dists')
return out
def plot_stars_and_neighbors(agasc, ipix):
s0 = get_stars_for_region(agasc, ipix)
ipix_neighbors = hp.get_all_neighbours(NSIDE, ipix)
s1s = [get_stars_for_region(agasc, ipix_neighbor) for ipix_neighbor in ipix_neighbors]
plt.plot(s0['ra'], s0['dec'], '.c')
for s1 in s1s:
plt.plot(s1['ra'], s1['dec'], '.m')
def plot_dists(agasc, ipix):
def _get_star(id0):
if id0 not in STAR_CACHE:
STAR_CACHE[id0] = get_star(id0)
return STAR_CACHE[id0]
    dat = get_dists_for_region(agasc, ipix)
    dists, id0s, id1s = dat['dists'], dat['agasc_id0'], dat['agasc_id1']
plotted = set()
for dist, id0, id1 in izip(dists, id0s, id1s):
star0 = _get_star(id0)
star1 = _get_star(id1)
ra0 = star0['RA']
ra1 = star1['RA']
dec0 = star0['DEC']
dec1 = star1['DEC']
phi1 = np.radians(ra1)
theta1 = np.radians(90 - dec1)
ipix1 = hp.ang2pix(NSIDE, theta1, phi1)
plt.plot([ra0, ra1],
[dec0, dec1], '-k', alpha=0.2)
if id0 not in plotted:
plt.plot([ra0], [dec0], 'ob')
if id1 not in plotted:
plt.plot([ra1], [dec1], 'ob' if ipix1 == ipix else 'or')
dalt = sphere_dist(ra0, dec0,
ra1, dec1)
print('{} {} {} {}'.format(id0, id1, dist, dalt * 3600))
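The pairwise search above delegates angular separations to `agasc.sphere_dist`. For readers without the `agasc` package, a self-contained haversine version in degrees (an assumed formula, not necessarily the library's exact implementation) behaves equivalently at these scales:

```python
import numpy as np

def sphere_dist_deg(ra1, dec1, ra2, dec2):
    # Great-circle distance in degrees between sky positions given in
    # degrees (haversine form; a stand-in for agasc.sphere_dist).
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    a = (np.sin((dec2 - dec1) / 2.0) ** 2
         + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2.0) ** 2)
    return np.degrees(2.0 * np.arcsin(np.sqrt(a)))

print(sphere_dist_deg(10.0, 0.0, 11.0, 0.0))  # ~1.0 (one degree along the equator)
```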
| [
"numpy.radians",
"numpy.clip",
"tables.openFile",
"astropy.table.Table",
"agasc.sphere_dist",
"healpy.ang2pix",
"numpy.array",
"itertools.izip",
"astropy.table.vstack",
"tables.Filters",
"os.path.exists",
"numpy.searchsorted",
"matplotlib.pyplot.plot",
"numpy.max",
"healpy.nside2npix",
... | [((574, 597), 'os.path.exists', 'os.path.exists', (['outfile'], {}), '(outfile)\n', (588, 597), False, 'import os\n'), ((1341, 1458), 'astropy.table.Table', 'Table', (['[agasc_id, ra, dec, ra_pm, dec_pm, mag_aca]'], {'names': "['agasc_id', 'ra', 'dec', 'ra_pm', 'dec_pm', 'mag_aca']"}), "([agasc_id, ra, dec, ra_pm, dec_pm, mag_aca], names=['agasc_id', 'ra',\n 'dec', 'ra_pm', 'dec_pm', 'mag_aca'])\n", (1346, 1458), False, 'from astropy.table import Table, vstack\n'), ((1652, 1687), 'astropy.table.Table.read', 'Table.read', (['filename'], {'format': '"""fits"""'}), "(filename, format='fits')\n", (1662, 1687), False, 'from astropy.table import Table, vstack\n'), ((1947, 1980), 'Chandra.Time.DateTime', 'DateTime', (['"""2000:001:00:00:00.000"""'], {}), "('2000:001:00:00:00.000')\n", (1955, 1980), False, 'from Chandra.Time import DateTime\n'), ((2420, 2434), 'numpy.radians', 'np.radians', (['ra'], {}), '(ra)\n', (2430, 2434), True, 'import numpy as np\n'), ((2447, 2467), 'numpy.radians', 'np.radians', (['(90 - dec)'], {}), '(90 - dec)\n', (2457, 2467), True, 'import numpy as np\n'), ((2479, 2508), 'healpy.ang2pix', 'hp.ang2pix', (['NSIDE', 'theta', 'phi'], {}), '(NSIDE, theta, phi)\n', (2489, 2508), True, 'import healpy as hp\n'), ((2673, 2721), 'numpy.searchsorted', 'np.searchsorted', (["agasc['ipix']", '[ipix, ipix + 1]'], {}), "(agasc['ipix'], [ipix, ipix + 1])\n", (2688, 2721), True, 'import numpy as np\n'), ((2991, 3068), 'agasc.sphere_dist', 'sphere_dist', (["s0['ra'][idx0]", "s0['dec'][idx0]", "s1['ra'][idx1]", "s1['dec'][idx1]"], {}), "(s0['ra'][idx0], s0['dec'][idx0], s1['ra'][idx1], s1['dec'][idx1])\n", (3002, 3068), False, 'from agasc import sphere_dist, get_star\n'), ((3616, 3715), 'astropy.table.Table', 'Table', (['[dists, id0, id1, mag0, mag1]'], {'names': "['dists', 'agasc_id0', 'agasc_id1', 'mag0', 'mag1']"}), "([dists, id0, id1, mag0, mag1], names=['dists', 'agasc_id0',\n 'agasc_id1', 'mag0', 'mag1'])\n", (3621, 3715), False, 'from astropy.table 
import Table, vstack\n'), ((3848, 3882), 'healpy.get_all_neighbours', 'hp.get_all_neighbours', (['NSIDE', 'ipix'], {}), '(NSIDE, ipix)\n', (3869, 3882), True, 'import healpy as hp\n'), ((4069, 4083), 'astropy.table.vstack', 'vstack', (['t_list'], {}), '(t_list)\n', (4075, 4083), False, 'from astropy.table import Table, vstack\n'), ((4220, 4263), 'tables.Filters', 'tables.Filters', ([], {'complevel': '(5)', 'complib': '"""zlib"""'}), "(complevel=5, complib='zlib')\n", (4234, 4263), False, 'import tables\n'), ((4274, 4285), 'numpy.array', 'np.array', (['t'], {}), '(t)\n', (4282, 4285), True, 'import numpy as np\n'), ((4871, 4885), 'astropy.table.vstack', 'vstack', (['t_list'], {}), '(t_list)\n', (4877, 4885), False, 'from astropy.table import Table, vstack\n'), ((5032, 5066), 'healpy.get_all_neighbours', 'hp.get_all_neighbours', (['NSIDE', 'ipix'], {}), '(NSIDE, ipix)\n', (5053, 5066), True, 'import healpy as hp\n'), ((5162, 5197), 'matplotlib.pyplot.plot', 'plt.plot', (["s0['ra']", "s0['dec']", '""".c"""'], {}), "(s0['ra'], s0['dec'], '.c')\n", (5170, 5197), True, 'import matplotlib.pyplot as plt\n'), ((5530, 5553), 'itertools.izip', 'izip', (['dists', 'id0s', 'id1s'], {}), '(dists, id0s, id1s)\n', (5534, 5553), False, 'from itertools import izip\n'), ((868, 924), 'tables.openFile', 'tables.openFile', (['"""/proj/sot/ska/data/agasc/miniagasc.h5"""'], {}), "('/proj/sot/ska/data/agasc/miniagasc.h5')\n", (883, 924), False, 'import tables\n'), ((1592, 1616), 'os.path.exists', 'os.path.exists', (['filename'], {}), '(filename)\n', (1606, 1616), False, 'import os\n'), ((4295, 4347), 'tables.openFile', 'tables.openFile', (['filename'], {'mode': '"""a"""', 'filters': 'filters'}), "(filename, mode='a', filters=filters)\n", (4310, 4347), False, 'import tables\n'), ((4476, 4528), 'tables.openFile', 'tables.openFile', (['filename'], {'mode': '"""a"""', 'filters': 'filters'}), "(filename, mode='a', filters=filters)\n", (4491, 4528), False, 'import tables\n'), ((4760, 4780), 
'healpy.nside2npix', 'hp.nside2npix', (['NSIDE'], {}), '(NSIDE)\n', (4773, 4780), True, 'import healpy as hp\n'), ((5225, 5260), 'matplotlib.pyplot.plot', 'plt.plot', (["s1['ra']", "s1['dec']", '""".m"""'], {}), "(s1['ra'], s1['dec'], '.m')\n", (5233, 5260), True, 'import matplotlib.pyplot as plt\n'), ((5741, 5756), 'numpy.radians', 'np.radians', (['ra1'], {}), '(ra1)\n', (5751, 5756), True, 'import numpy as np\n'), ((5774, 5795), 'numpy.radians', 'np.radians', (['(90 - dec1)'], {}), '(90 - dec1)\n', (5784, 5795), True, 'import numpy as np\n'), ((5812, 5843), 'healpy.ang2pix', 'hp.ang2pix', (['NSIDE', 'theta1', 'phi1'], {}), '(NSIDE, theta1, phi1)\n', (5822, 5843), True, 'import healpy as hp\n'), ((5853, 5904), 'matplotlib.pyplot.plot', 'plt.plot', (['[ra0, ra1]', '[dec0, dec1]', '"""-k"""'], {'alpha': '(0.2)'}), "([ra0, ra1], [dec0, dec1], '-k', alpha=0.2)\n", (5861, 5904), True, 'import matplotlib.pyplot as plt\n'), ((6110, 6143), 'agasc.sphere_dist', 'sphere_dist', (['ra0', 'dec0', 'ra1', 'dec1'], {}), '(ra0, dec0, ra1, dec1)\n', (6121, 6143), False, 'from agasc import sphere_dist, get_star\n'), ((1994, 2008), 'Chandra.Time.DateTime', 'DateTime', (['date'], {}), '(date)\n', (2002, 2008), False, 'from Chandra.Time import DateTime\n'), ((3458, 3508), 'numpy.clip', 'np.clip', (["(s0['mag_aca'][idx0] * 1000)", '(-32000)', '(32000)'], {}), "(s0['mag_aca'][idx0] * 1000, -32000, 32000)\n", (3465, 3508), True, 'import numpy as np\n'), ((3537, 3587), 'numpy.clip', 'np.clip', (["(s1['mag_aca'][idx1] * 1000)", '(-32000)', '(32000)'], {}), "(s1['mag_aca'][idx1] * 1000, -32000, 32000)\n", (3544, 3587), True, 'import numpy as np\n'), ((5380, 5393), 'agasc.get_star', 'get_star', (['id0'], {}), '(id0)\n', (5388, 5393), False, 'from agasc import sphere_dist, get_star\n'), ((5965, 5994), 'matplotlib.pyplot.plot', 'plt.plot', (['[ra0]', '[dec0]', '"""ob"""'], {}), "([ra0], [dec0], 'ob')\n", (5973, 5994), True, 'import matplotlib.pyplot as plt\n'), ((6038, 6094), 
'matplotlib.pyplot.plot', 'plt.plot', (['[ra1]', '[dec1]', "('ob' if ipix1 == ipix else 'or')"], {}), "([ra1], [dec1], 'ob' if ipix1 == ipix else 'or')\n", (6046, 6094), True, 'import matplotlib.pyplot as plt\n'), ((2550, 2562), 'numpy.max', 'np.max', (['ipix'], {}), '(ipix)\n', (2556, 2562), True, 'import numpy as np\n')] |
import sys
sys.path.append('')
from libpydart import GraspAnalyser
import multiprocessing as mp
from multiprocessing.queues import SimpleQueue
from itertools import product
import numpy as np
def worker_fn(object_name, session_name, task_q, result_q):
ga = GraspAnalyser(object_name, session_name)
while not task_q.empty():
params = task_q.get()
ga.set_params(*params)
r = ga.analyze_grasps(5, 1, 1, True)
result = [p for p in params]
result.append(r)
result_q.put(tuple(result))
ga.close()
def create_task_q():
aw = [50, 100]
rw = [50, 100]
tw = [25]
ad = [2]
rd = [2]
iw = [10]
lmd = [5, 10]
reg = [-2, -4]
q = SimpleQueue()
for p in product(aw, rw, tw, ad, rd, iw, lmd, reg):
q.put(p)
return q
class GridSearcher(object):
def __init__(self):
sessions = [['camera-0', 'full19_use']]
# ['binoculars', 'full19_handoff']]
# n_total_workers = mp.cpu_count()
n_total_workers = 2
        n_workers = [n_total_workers // len(sessions)] * len(sessions)
        n_workers[-1] = n_total_workers - sum(n_workers[:-1])
        # One independent queue per session (list multiplication would
        # replicate a single shared queue object)
        task_qs = [create_task_q() for _ in sessions]
        self.result_qs = [SimpleQueue() for _ in sessions]
        self.processes = []
        for session, n_w, task_q, result_q in \
                zip(sessions, n_workers, task_qs, self.result_qs):
            for _ in range(n_w):  # range, not the Python 2 xrange
                p = mp.Process(target=worker_fn, args=(session[0], session[1],
                                                    task_q, result_q))
self.processes.append(p)
def run(self):
for p in self.processes:
p.start()
for p in self.processes:
p.join()
results = {}
for q in self.result_qs:
while not q.empty():
r = q.get()
if r[:-1] not in results:
results[r[:-1]] = [r[-1]]
else:
results[r[:-1]].append(r[-1])
return results
if __name__ == '__main__':
gs = GridSearcher()
results = gs.run()
print(results)
avg_errors = {k: np.mean(v) for k,v in results.items()}
best_params = min(avg_errors, key=avg_errors.get)
print('Best params = ', best_params)
| [
"numpy.mean",
"multiprocessing.Process",
"itertools.product",
"libpydart.GraspAnalyser",
"multiprocessing.queues.SimpleQueue",
"sys.path.append"
] | [((11, 30), 'sys.path.append', 'sys.path.append', (['""""""'], {}), "('')\n", (26, 30), False, 'import sys\n'), ((263, 303), 'libpydart.GraspAnalyser', 'GraspAnalyser', (['object_name', 'session_name'], {}), '(object_name, session_name)\n', (276, 303), False, 'from libpydart import GraspAnalyser\n'), ((719, 732), 'multiprocessing.queues.SimpleQueue', 'SimpleQueue', ([], {}), '()\n', (730, 732), False, 'from multiprocessing.queues import SimpleQueue\n'), ((746, 787), 'itertools.product', 'product', (['aw', 'rw', 'tw', 'ad', 'rd', 'iw', 'lmd', 'reg'], {}), '(aw, rw, tw, ad, rd, iw, lmd, reg)\n', (753, 787), False, 'from itertools import product\n'), ((2202, 2212), 'numpy.mean', 'np.mean', (['v'], {}), '(v)\n', (2209, 2212), True, 'import numpy as np\n'), ((1258, 1271), 'multiprocessing.queues.SimpleQueue', 'SimpleQueue', ([], {}), '()\n', (1269, 1271), False, 'from multiprocessing.queues import SimpleQueue\n'), ((1483, 1560), 'multiprocessing.Process', 'mp.Process', ([], {'target': 'worker_fn', 'args': '(session[0], session[1], task_q, result_q)'}), '(target=worker_fn, args=(session[0], session[1], task_q, result_q))\n', (1493, 1560), True, 'import multiprocessing as mp\n')] |
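The script above fans a hyperparameter grid out over worker processes through shared task/result queues. Below is a minimal, self-contained sketch of the same pattern, with a placeholder scoring function standing in for `GraspAnalyser.analyze_grasps`; the sentinel-per-worker shutdown and the `grid_search` helper are additions for the sketch, not part of the original script.

```python
import multiprocessing as mp
from itertools import product

def worker(task_q, result_q):
    # Pull parameter tuples until the None sentinel arrives.
    while True:
        params = task_q.get()
        if params is None:
            break
        # Placeholder score; the original calls GraspAnalyser.analyze_grasps here.
        score = sum(params)
        result_q.put((params, score))

def grid_search(grid, n_workers=2):
    task_q, result_q = mp.Queue(), mp.Queue()
    n_tasks = 0
    for params in product(*grid):
        task_q.put(params)
        n_tasks += 1
    for _ in range(n_workers):  # one sentinel per worker
        task_q.put(None)
    procs = [mp.Process(target=worker, args=(task_q, result_q))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    results = {}
    for _ in range(n_tasks):  # drain results before joining
        params, score = result_q.get()
        results[params] = score
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    results = grid_search([[50, 100], [5, 10]])
    print(min(results, key=results.get))  # -> (50, 5)
```

Draining `result_q` before `join()` avoids the classic multiprocessing deadlock where a child blocks flushing its queue buffer while the parent waits in `join()`.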
# -*- coding: utf-8 -*-
"""
Created on Thu Aug 26 13:07:51 2021
@author: AYUSH
"""
from scipy.integrate import solve_ivp
from numpy import sin,cos
import matplotlib.pyplot as pt
import numpy as np
from mpl_toolkits import mplot3d
tspan=[0,8]
teval=np.linspace(tspan[0], tspan[1], 1000)
g=9.81
def e_l_eqns(t,u):
    # Euler-Lagrange equations for a pendulum whose string shortens at rate v
    v = -0.25
    r = 6 + v*t
    theta, omega, phi, psi = u
    du = [omega,
          sin(theta)*cos(theta)*psi**2 - (2*v*omega)/r + (g*sin(theta))/r,
          psi,
          -(2*v*psi)/r - 2*cos(theta)*omega*psi/sin(theta)]
    return du
res=solve_ivp(e_l_eqns, tspan, [2.96,0.1,0.1,0.1], t_eval=teval)
theta,omega,phi,psi=res.y
t=res.t
r1=6 - (0.25*t)
x=r1*sin(theta)*cos(phi)
y=r1*sin(theta)*sin(phi)
z=r1*cos(theta)
pt.plot(t,theta,color='red')
pt.show()
pt.plot(t,omega,color='blue')
pt.show()
pt.plot(t,phi)
pt.show()
pt.plot(t,psi)
pt.show()
pt.style.use('seaborn-poster')
g = pt.figure(figsize = (12,12))
ax = pt.axes(projection='3d')
ax.grid()
ax.plot3D(x, y, z)
ax.set_title('3D Parametric Plot')
ax.set_xlabel('x', labelpad=20)
ax.set_ylabel('y', labelpad=20)
ax.set_zlabel('z', labelpad=20)
pt.show() | [
"matplotlib.pyplot.plot",
"scipy.integrate.solve_ivp",
"matplotlib.pyplot.style.use",
"numpy.linspace",
"matplotlib.pyplot.figure",
"matplotlib.pyplot.axes",
"numpy.cos",
"numpy.sin",
"matplotlib.pyplot.show"
] | [((250, 287), 'numpy.linspace', 'np.linspace', (['tspan[0]', 'tspan[1]', '(1000)'], {}), '(tspan[0], tspan[1], 1000)\n', (261, 287), True, 'import numpy as np\n'), ((524, 587), 'scipy.integrate.solve_ivp', 'solve_ivp', (['e_l_eqns', 'tspan', '[2.96, 0.1, 0.1, 0.1]'], {'t_eval': 'teval'}), '(e_l_eqns, tspan, [2.96, 0.1, 0.1, 0.1], t_eval=teval)\n', (533, 587), False, 'from scipy.integrate import solve_ivp\n'), ((702, 732), 'matplotlib.pyplot.plot', 'pt.plot', (['t', 'theta'], {'color': '"""red"""'}), "(t, theta, color='red')\n", (709, 732), True, 'import matplotlib.pyplot as pt\n'), ((731, 740), 'matplotlib.pyplot.show', 'pt.show', ([], {}), '()\n', (738, 740), True, 'import matplotlib.pyplot as pt\n'), ((741, 772), 'matplotlib.pyplot.plot', 'pt.plot', (['t', 'omega'], {'color': '"""blue"""'}), "(t, omega, color='blue')\n", (748, 772), True, 'import matplotlib.pyplot as pt\n'), ((771, 780), 'matplotlib.pyplot.show', 'pt.show', ([], {}), '()\n', (778, 780), True, 'import matplotlib.pyplot as pt\n'), ((781, 796), 'matplotlib.pyplot.plot', 'pt.plot', (['t', 'phi'], {}), '(t, phi)\n', (788, 796), True, 'import matplotlib.pyplot as pt\n'), ((796, 805), 'matplotlib.pyplot.show', 'pt.show', ([], {}), '()\n', (803, 805), True, 'import matplotlib.pyplot as pt\n'), ((806, 821), 'matplotlib.pyplot.plot', 'pt.plot', (['t', 'psi'], {}), '(t, psi)\n', (813, 821), True, 'import matplotlib.pyplot as pt\n'), ((821, 830), 'matplotlib.pyplot.show', 'pt.show', ([], {}), '()\n', (828, 830), True, 'import matplotlib.pyplot as pt\n'), ((831, 861), 'matplotlib.pyplot.style.use', 'pt.style.use', (['"""seaborn-poster"""'], {}), "('seaborn-poster')\n", (843, 861), True, 'import matplotlib.pyplot as pt\n'), ((866, 893), 'matplotlib.pyplot.figure', 'pt.figure', ([], {'figsize': '(12, 12)'}), '(figsize=(12, 12))\n', (875, 893), True, 'import matplotlib.pyplot as pt\n'), ((900, 924), 'matplotlib.pyplot.axes', 'pt.axes', ([], {'projection': '"""3d"""'}), "(projection='3d')\n", (907, 924), 
True, 'import matplotlib.pyplot as pt\n'), ((1085, 1094), 'matplotlib.pyplot.show', 'pt.show', ([], {}), '()\n', (1092, 1094), True, 'import matplotlib.pyplot as pt\n'), ((652, 660), 'numpy.cos', 'cos', (['phi'], {}), '(phi)\n', (655, 660), False, 'from numpy import sin, cos\n'), ((677, 685), 'numpy.sin', 'sin', (['phi'], {}), '(phi)\n', (680, 685), False, 'from numpy import sin, cos\n'), ((691, 701), 'numpy.cos', 'cos', (['theta'], {}), '(theta)\n', (694, 701), False, 'from numpy import sin, cos\n'), ((641, 651), 'numpy.sin', 'sin', (['theta'], {}), '(theta)\n', (644, 651), False, 'from numpy import sin, cos\n'), ((666, 676), 'numpy.sin', 'sin', (['theta'], {}), '(theta)\n', (669, 676), False, 'from numpy import sin, cos\n'), ((492, 502), 'numpy.sin', 'sin', (['theta'], {}), '(theta)\n', (495, 502), False, 'from numpy import sin, cos\n'), ((434, 444), 'numpy.sin', 'sin', (['theta'], {}), '(theta)\n', (437, 444), False, 'from numpy import sin, cos\n'), ((382, 392), 'numpy.sin', 'sin', (['theta'], {}), '(theta)\n', (385, 392), False, 'from numpy import sin, cos\n'), ((393, 403), 'numpy.cos', 'cos', (['theta'], {}), '(theta)\n', (396, 403), False, 'from numpy import sin, cos\n'), ((471, 481), 'numpy.cos', 'cos', (['theta'], {}), '(theta)\n', (474, 481), False, 'from numpy import sin, cos\n')] |
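The row above integrates the Euler-Lagrange equations of a shortening spherical pendulum with `solve_ivp`. A reduced sketch of the same `solve_ivp` pattern for a plain planar pendulum (fixed length `L = 1` m, an assumed value, i.e. the script above with `v = 0` and no azimuthal motion) is easy to sanity-check: starting from rest at a small angle, the motion stays bounded by the initial amplitude.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0  # gravity (m/s^2) and string length (m): assumed values

def pendulum(t, u):
    # u = [theta, omega]; plain pendulum with a fixed-length string
    theta, omega = u
    return [omega, -(g / L) * np.sin(theta)]

t_span = (0.0, 10.0)
t_eval = np.linspace(*t_span, 1000)
sol = solve_ivp(pendulum, t_span, [0.1, 0.0], t_eval=t_eval, rtol=1e-8, atol=1e-10)
theta = sol.y[0]
# Energy conservation keeps max |theta| at (essentially) the initial 0.1 rad.
```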
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import unittest
from typing import cast, Dict, Optional, Union
from unittest import TestCase
from unittest.mock import patch
import numpy as np
import pandas as pd
from kats.compat import statsmodels
from kats.compat.pandas import assert_frame_equal, assert_series_equal
from kats.consts import TimeSeriesData
from kats.data.utils import load_data, load_air_passengers
from kats.models.theta import ThetaModel, ThetaParams
from kats.tests.models.test_models_dummy_data import (
NONSEASONAL_INPUT,
AIR_FCST_15_THETA_SM_11,
AIR_FCST_15_THETA_INCL_HIST_SM_11,
PEYTON_FCST_30_THETA_SM_11,
PEYTON_FCST_30_THETA_INCL_HIST_SM_11,
AIR_FCST_15_THETA_SM_12,
AIR_FCST_15_THETA_INCL_HIST_SM_12,
PEYTON_FCST_30_THETA_SM_12,
PEYTON_FCST_30_THETA_INCL_HIST_SM_12,
)
from parameterized.parameterized import parameterized
TEST_DATA: Dict[str, Dict[str, Union[ThetaParams, TimeSeriesData, pd.DataFrame]]] = {
"short": {
"ts": TimeSeriesData(
pd.DataFrame(
{
"time": [
pd.Timestamp("1961-01-01 00:00:00"),
pd.Timestamp("1961-02-01 00:00:00"),
],
"y": [1.0, 2.0],
}
)
),
"params": ThetaParams(m=2),
},
"constant": {
"ts": TimeSeriesData(
pd.DataFrame(
{"time": pd.date_range("1960-12-01", "1963-01-01", freq="m"), "y": 10.0}
)
),
"params": ThetaParams(m=2),
},
"nonseasonal": {
"ts": TimeSeriesData(NONSEASONAL_INPUT),
"params": ThetaParams(m=4),
},
"daily": {
"ts": TimeSeriesData(
load_data("peyton_manning.csv").set_axis(["time", "y"], axis=1)
),
"params": ThetaParams(),
"params_negative": ThetaParams(m=-5),
},
"monthly": {
"ts": load_air_passengers(),
"params": ThetaParams(m=12),
},
"multivariate": {
"ts": TimeSeriesData(load_data("multivariate_anomaly_simulated_data.csv"))
},
}
class ThetaModelTest(TestCase):
def test_params(self) -> None:
# Test default value
params = ThetaParams()
params.validate_params()
self.assertEqual(params.m, 1)
params = ThetaParams(m=12)
self.assertEqual(params.m, 12)
# pyre-fixme[56]: Pyre was not able to infer the type of the decorator `parameter...
@parameterized.expand(
[
[
"monthly",
TEST_DATA["monthly"]["ts"],
TEST_DATA["monthly"]["params"],
15,
0.05,
False,
None,
(
AIR_FCST_15_THETA_SM_11
if statsmodels.version < "0.12"
else AIR_FCST_15_THETA_SM_12
),
],
[
"monthly, include history",
TEST_DATA["monthly"]["ts"],
TEST_DATA["monthly"]["params"],
15,
0.05,
True,
None,
(
AIR_FCST_15_THETA_INCL_HIST_SM_11
if statsmodels.version < "0.12"
else AIR_FCST_15_THETA_INCL_HIST_SM_12
),
],
[
"daily",
TEST_DATA["daily"]["ts"],
TEST_DATA["daily"]["params"],
30,
0.05,
False,
None,
(
PEYTON_FCST_30_THETA_SM_11
if statsmodels.version < "0.12"
else PEYTON_FCST_30_THETA_SM_12
),
],
[
"daily, include history",
TEST_DATA["daily"]["ts"],
TEST_DATA["daily"]["params_negative"],
30,
0.05,
True,
None,
(
PEYTON_FCST_30_THETA_INCL_HIST_SM_11
if statsmodels.version < "0.12"
else PEYTON_FCST_30_THETA_INCL_HIST_SM_12
),
],
]
)
def test_forecast(
self,
testcase_name: str,
ts: TimeSeriesData,
params: ThetaParams,
steps: int,
alpha: float,
include_history: bool,
freq: Optional[str],
truth: pd.DataFrame,
) -> None:
np.random.seed(0)
m = ThetaModel(data=ts, params=params)
m.fit()
forecast_df = m.predict(
steps=steps, alpha=alpha, include_history=include_history, freq=freq
)
assert_frame_equal(truth, forecast_df)
# pyre-fixme[56]: Pyre was not able to infer the type of the decorator `parameter...
@parameterized.expand(
[
[
"m less than 1",
TEST_DATA["daily"]["ts"],
TEST_DATA["daily"]["params_negative"],
False,
],
[
"data too short",
TEST_DATA["short"]["ts"],
TEST_DATA["short"]["params"],
False,
],
[
"constant data",
TEST_DATA["constant"]["ts"],
TEST_DATA["constant"]["params"],
False,
],
[
"seasonal",
TEST_DATA["monthly"]["ts"],
TEST_DATA["monthly"]["params"],
True,
],
]
)
def test_check_seasonality(
self,
testcase_name: str,
ts: TimeSeriesData,
params: ThetaParams,
is_seasonal: bool,
) -> None:
m = ThetaModel(ts, params)
m.check_seasonality()
self.assertEqual(m.seasonal, is_seasonal)
# pyre-fixme[56]: Pyre was not able to infer the type of the decorator `parameter...
@parameterized.expand(
[
[
"nonseasonal",
False,
TEST_DATA["nonseasonal"]["ts"],
TEST_DATA["nonseasonal"]["params"],
False,
True,
],
[
"seasonal",
True,
TEST_DATA["monthly"]["ts"],
TEST_DATA["monthly"]["params"],
True,
False,
],
]
)
def test_deseasonalize(
self,
testcase_name: str,
seasonal: bool,
ts: TimeSeriesData,
params: ThetaParams,
seasonality_removed: bool,
decomp_is_none: bool,
) -> None:
m = ThetaModel(ts, params)
m.seasonal = seasonal
deseas_data = m.deseasonalize()
if seasonality_removed:
self.assertFalse(ts.value.equals(deseas_data.value))
else:
assert_series_equal(
cast(pd.Series, ts.value), cast(pd.Series, deseas_data.value)
)
self.assertEqual(decomp_is_none, m.decomp is None)
def test_multivar(self) -> None:
# Theta model does not support multivariate time data
self.assertRaises(
ValueError,
ThetaModel,
TEST_DATA["multivariate"]["ts"],
ThetaParams(),
)
def test_exec_plot(self) -> None:
m = ThetaModel(
cast(TimeSeriesData, TEST_DATA["daily"]["ts"]),
cast(ThetaParams, TEST_DATA["daily"]["params"]),
)
m.fit()
m.predict(steps=15, alpha=0.05)
m.plot()
def test_name(self) -> None:
m = ThetaModel(
cast(TimeSeriesData, TEST_DATA["daily"]["ts"]),
cast(ThetaParams, TEST_DATA["daily"]["params"]),
)
self.assertEqual(m.__str__(), "Theta")
def test_search_space(self) -> None:
self.assertEqual(
ThetaModel.get_parameter_search_space(),
[
{
"name": "m",
"type": "choice",
"values": list(range(1, 31)),
"value_type": "int",
"is_ordered": True,
},
],
)
def test_others(self) -> None:
m = ThetaModel(
cast(TimeSeriesData, TEST_DATA["daily"]["ts"]),
cast(ThetaParams, TEST_DATA["daily"]["params"]),
)
# fit must be called before predict
self.assertRaises(ValueError, m.predict, 30)
# seasonal data must be deseasonalized before fit
with patch.object(
m, "deseasonalize", (lambda self: self.data).__get__(m)
), patch.object(m, "check_seasonality"):
m.seasonal = True
m.decomp = None
self.assertRaises(ValueError, m.fit)
with patch(
"kats.utils.decomposition.TimeSeriesDecomposition.decomposer",
return_value={
"seasonal": cast(TimeSeriesData, TEST_DATA["daily"]["ts"]) * 0
},
):
# Don't deseasonalize if any seasonal index = 0
deseas_data = m.deseasonalize()
expected = cast(
pd.Series, cast(TimeSeriesData, TEST_DATA["daily"]["ts"]).value
)
assert_series_equal(expected, cast(pd.Series, deseas_data.value))
if __name__ == "__main__":
unittest.main()
| [
"kats.data.utils.load_data",
"kats.models.theta.ThetaParams",
"kats.models.theta.ThetaModel",
"pandas.Timestamp",
"kats.consts.TimeSeriesData",
"typing.cast",
"numpy.random.seed",
"kats.compat.pandas.assert_frame_equal",
"unittest.main",
"kats.data.utils.load_air_passengers",
"kats.models.theta.... | [((2641, 3526), 'parameterized.parameterized.parameterized.expand', 'parameterized.expand', (["[['monthly', TEST_DATA['monthly']['ts'], TEST_DATA['monthly']['params'], 15,\n 0.05, False, None, AIR_FCST_15_THETA_SM_11 if statsmodels.version <\n '0.12' else AIR_FCST_15_THETA_SM_12], ['monthly, include history',\n TEST_DATA['monthly']['ts'], TEST_DATA['monthly']['params'], 15, 0.05, \n True, None, AIR_FCST_15_THETA_INCL_HIST_SM_11 if statsmodels.version <\n '0.12' else AIR_FCST_15_THETA_INCL_HIST_SM_12], ['daily', TEST_DATA[\n 'daily']['ts'], TEST_DATA['daily']['params'], 30, 0.05, False, None, \n PEYTON_FCST_30_THETA_SM_11 if statsmodels.version < '0.12' else\n PEYTON_FCST_30_THETA_SM_12], ['daily, include history', TEST_DATA[\n 'daily']['ts'], TEST_DATA['daily']['params_negative'], 30, 0.05, True,\n None, PEYTON_FCST_30_THETA_INCL_HIST_SM_11 if statsmodels.version <\n '0.12' else PEYTON_FCST_30_THETA_INCL_HIST_SM_12]]"], {}), "([['monthly', TEST_DATA['monthly']['ts'], TEST_DATA[\n 'monthly']['params'], 15, 0.05, False, None, AIR_FCST_15_THETA_SM_11 if\n statsmodels.version < '0.12' else AIR_FCST_15_THETA_SM_12], [\n 'monthly, include history', TEST_DATA['monthly']['ts'], TEST_DATA[\n 'monthly']['params'], 15, 0.05, True, None, \n AIR_FCST_15_THETA_INCL_HIST_SM_11 if statsmodels.version < '0.12' else\n AIR_FCST_15_THETA_INCL_HIST_SM_12], ['daily', TEST_DATA['daily']['ts'],\n TEST_DATA['daily']['params'], 30, 0.05, False, None, \n PEYTON_FCST_30_THETA_SM_11 if statsmodels.version < '0.12' else\n PEYTON_FCST_30_THETA_SM_12], ['daily, include history', TEST_DATA[\n 'daily']['ts'], TEST_DATA['daily']['params_negative'], 30, 0.05, True,\n None, PEYTON_FCST_30_THETA_INCL_HIST_SM_11 if statsmodels.version <\n '0.12' else PEYTON_FCST_30_THETA_INCL_HIST_SM_12]])\n", (2661, 3526), False, 'from parameterized.parameterized import parameterized\n'), ((5061, 5448), 'parameterized.parameterized.parameterized.expand', 'parameterized.expand', (["[['m less 
than 1', TEST_DATA['daily']['ts'], TEST_DATA['daily'][\n 'params_negative'], False], ['data too short', TEST_DATA['short']['ts'],\n TEST_DATA['short']['params'], False], ['constant data', TEST_DATA[\n 'constant']['ts'], TEST_DATA['constant']['params'], False], ['seasonal',\n TEST_DATA['monthly']['ts'], TEST_DATA['monthly']['params'], True]]"], {}), "([['m less than 1', TEST_DATA['daily']['ts'], TEST_DATA\n ['daily']['params_negative'], False], ['data too short', TEST_DATA[\n 'short']['ts'], TEST_DATA['short']['params'], False], ['constant data',\n TEST_DATA['constant']['ts'], TEST_DATA['constant']['params'], False], [\n 'seasonal', TEST_DATA['monthly']['ts'], TEST_DATA['monthly']['params'],\n True]])\n", (5081, 5448), False, 'from parameterized.parameterized import parameterized\n'), ((6198, 6426), 'parameterized.parameterized.parameterized.expand', 'parameterized.expand', (["[['nonseasonal', False, TEST_DATA['nonseasonal']['ts'], TEST_DATA[\n 'nonseasonal']['params'], False, True], ['seasonal', True, TEST_DATA[\n 'monthly']['ts'], TEST_DATA['monthly']['params'], True, False]]"], {}), "([['nonseasonal', False, TEST_DATA['nonseasonal']['ts'],\n TEST_DATA['nonseasonal']['params'], False, True], ['seasonal', True,\n TEST_DATA['monthly']['ts'], TEST_DATA['monthly']['params'], True, False]])\n", (6218, 6426), False, 'from parameterized.parameterized import parameterized\n'), ((9646, 9661), 'unittest.main', 'unittest.main', ([], {}), '()\n', (9659, 9661), False, 'import unittest\n'), ((1472, 1488), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {'m': '(2)'}), '(m=2)\n', (1483, 1488), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((1703, 1719), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {'m': '(2)'}), '(m=2)\n', (1714, 1719), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((1763, 1796), 'kats.consts.TimeSeriesData', 'TimeSeriesData', (['NONSEASONAL_INPUT'], {}), '(NONSEASONAL_INPUT)\n', (1777, 1796), False, 'from 
kats.consts import TimeSeriesData\n'), ((1816, 1832), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {'m': '(4)'}), '(m=4)\n', (1827, 1832), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((1991, 2004), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {}), '()\n', (2002, 2004), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((2033, 2050), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {'m': '(-5)'}), '(m=-5)\n', (2044, 2050), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((2090, 2111), 'kats.data.utils.load_air_passengers', 'load_air_passengers', ([], {}), '()\n', (2109, 2111), False, 'from kats.data.utils import load_data, load_air_passengers\n'), ((2131, 2148), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {'m': '(12)'}), '(m=12)\n', (2142, 2148), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((2386, 2399), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {}), '()\n', (2397, 2399), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((2489, 2506), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {'m': '(12)'}), '(m=12)\n', (2500, 2506), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((4714, 4731), 'numpy.random.seed', 'np.random.seed', (['(0)'], {}), '(0)\n', (4728, 4731), True, 'import numpy as np\n'), ((4744, 4778), 'kats.models.theta.ThetaModel', 'ThetaModel', ([], {'data': 'ts', 'params': 'params'}), '(data=ts, params=params)\n', (4754, 4778), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((4927, 4965), 'kats.compat.pandas.assert_frame_equal', 'assert_frame_equal', (['truth', 'forecast_df'], {}), '(truth, forecast_df)\n', (4945, 4965), False, 'from kats.compat.pandas import assert_frame_equal, assert_series_equal\n'), ((6000, 6022), 'kats.models.theta.ThetaModel', 'ThetaModel', (['ts', 'params'], {}), '(ts, params)\n', (6010, 6022), False, 'from kats.models.theta import ThetaModel, 
ThetaParams\n'), ((6933, 6955), 'kats.models.theta.ThetaModel', 'ThetaModel', (['ts', 'params'], {}), '(ts, params)\n', (6943, 6955), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((2208, 2260), 'kats.data.utils.load_data', 'load_data', (['"""multivariate_anomaly_simulated_data.csv"""'], {}), "('multivariate_anomaly_simulated_data.csv')\n", (2217, 2260), False, 'from kats.data.utils import load_data, load_air_passengers\n'), ((7555, 7568), 'kats.models.theta.ThetaParams', 'ThetaParams', ([], {}), '()\n', (7566, 7568), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((7655, 7701), 'typing.cast', 'cast', (['TimeSeriesData', "TEST_DATA['daily']['ts']"], {}), "(TimeSeriesData, TEST_DATA['daily']['ts'])\n", (7659, 7701), False, 'from typing import cast, Dict, Optional, Union\n'), ((7715, 7762), 'typing.cast', 'cast', (['ThetaParams', "TEST_DATA['daily']['params']"], {}), "(ThetaParams, TEST_DATA['daily']['params'])\n", (7719, 7762), False, 'from typing import cast, Dict, Optional, Union\n'), ((7917, 7963), 'typing.cast', 'cast', (['TimeSeriesData', "TEST_DATA['daily']['ts']"], {}), "(TimeSeriesData, TEST_DATA['daily']['ts'])\n", (7921, 7963), False, 'from typing import cast, Dict, Optional, Union\n'), ((7977, 8024), 'typing.cast', 'cast', (['ThetaParams', "TEST_DATA['daily']['params']"], {}), "(ThetaParams, TEST_DATA['daily']['params'])\n", (7981, 8024), False, 'from typing import cast, Dict, Optional, Union\n'), ((8163, 8202), 'kats.models.theta.ThetaModel.get_parameter_search_space', 'ThetaModel.get_parameter_search_space', ([], {}), '()\n', (8200, 8202), False, 'from kats.models.theta import ThetaModel, ThetaParams\n'), ((8554, 8600), 'typing.cast', 'cast', (['TimeSeriesData', "TEST_DATA['daily']['ts']"], {}), "(TimeSeriesData, TEST_DATA['daily']['ts'])\n", (8558, 8600), False, 'from typing import cast, Dict, Optional, Union\n'), ((8614, 8661), 'typing.cast', 'cast', (['ThetaParams', "TEST_DATA['daily']['params']"], {}), 
"(ThetaParams, TEST_DATA['daily']['params'])\n", (8618, 8661), False, 'from typing import cast, Dict, Optional, Union\n'), ((8935, 8971), 'unittest.mock.patch.object', 'patch.object', (['m', '"""check_seasonality"""'], {}), "(m, 'check_seasonality')\n", (8947, 8971), False, 'from unittest.mock import patch\n'), ((7187, 7212), 'typing.cast', 'cast', (['pd.Series', 'ts.value'], {}), '(pd.Series, ts.value)\n', (7191, 7212), False, 'from typing import cast, Dict, Optional, Union\n'), ((7214, 7248), 'typing.cast', 'cast', (['pd.Series', 'deseas_data.value'], {}), '(pd.Series, deseas_data.value)\n', (7218, 7248), False, 'from typing import cast, Dict, Optional, Union\n'), ((9577, 9611), 'typing.cast', 'cast', (['pd.Series', 'deseas_data.value'], {}), '(pd.Series, deseas_data.value)\n', (9581, 9611), False, 'from typing import cast, Dict, Optional, Union\n'), ((1596, 1647), 'pandas.date_range', 'pd.date_range', (['"""1960-12-01"""', '"""1963-01-01"""'], {'freq': '"""m"""'}), "('1960-12-01', '1963-01-01', freq='m')\n", (1609, 1647), True, 'import pandas as pd\n'), ((1898, 1929), 'kats.data.utils.load_data', 'load_data', (['"""peyton_manning.csv"""'], {}), "('peyton_manning.csv')\n", (1907, 1929), False, 'from kats.data.utils import load_data, load_air_passengers\n'), ((9468, 9514), 'typing.cast', 'cast', (['TimeSeriesData', "TEST_DATA['daily']['ts']"], {}), "(TimeSeriesData, TEST_DATA['daily']['ts'])\n", (9472, 9514), False, 'from typing import cast, Dict, Optional, Union\n'), ((1253, 1288), 'pandas.Timestamp', 'pd.Timestamp', (['"""1961-01-01 00:00:00"""'], {}), "('1961-01-01 00:00:00')\n", (1265, 1288), True, 'import pandas as pd\n'), ((1314, 1349), 'pandas.Timestamp', 'pd.Timestamp', (['"""1961-02-01 00:00:00"""'], {}), "('1961-02-01 00:00:00')\n", (1326, 1349), True, 'import pandas as pd\n'), ((9231, 9277), 'typing.cast', 'cast', (['TimeSeriesData', "TEST_DATA['daily']['ts']"], {}), "(TimeSeriesData, TEST_DATA['daily']['ts'])\n", (9235, 9277), False, 'from typing 
import cast, Dict, Optional, Union\n')] |
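The Kats test above drives one test body over a table of named cases via `parameterized.expand`. The standard library's `unittest.TestCase.subTest` supports the same table-driven style without the extra dependency; `validate_m` below is a hypothetical stand-in for `ThetaParams.validate_params`.

```python
import unittest

def validate_m(m):
    # Hypothetical stand-in: the seasonal period m must be >= 1.
    if m < 1:
        raise ValueError("m must be a positive integer")
    return m

class TableDrivenTest(unittest.TestCase):
    # Same table-driven idea as parameterized.expand, stdlib only.
    def test_m_values(self):
        cases = [("default", 1, True), ("monthly", 12, True), ("negative", -5, False)]
        for name, m, ok in cases:
            with self.subTest(name=name):
                if ok:
                    self.assertEqual(validate_m(m), m)
                else:
                    with self.assertRaises(ValueError):
                        validate_m(m)

# Run the suite programmatically instead of via unittest.main()
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TableDrivenTest))
```

The trade-off: `subTest` keeps everything in the standard library, while `parameterized.expand` generates one named test method per case, so individual cases can be selected from the command line.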
import numpy as np
# ParaMol imports
from ParaMol.System.system import *
from ParaMol.MM_engines.openmm import *
# ParaMol Tasks imports
from ParaMol.Tasks.parametrization import *
from ParaMol.Tasks.ab_initio_properties import *
from ParaMol.Utils.settings import *
# --------------------------------------------------------- #
# Preparation #
# --------------------------------------------------------- #
# Create the OpenMM engine for carbon monoxide
openmm_engine = OpenMMEngine(True, "AMBER", "co.prmtop", "AMBER", "co.inpcrd")
# Create the ParaMol System
co = ParaMolSystem("carbon_monoxide", openmm_engine, 2)
# Create ParaMol's force field representation and ask to optimize bonds's parameters
co.force_field.create_force_field(opt_bonds=True)
# Create ParaMol settings instance
paramol_settings = Settings()
# --------------------------------------------------------- #
# Perform the conformational sampling manually #
# --------------------------------------------------------- #
# Generate conformations; ParaMol uses nanometers for the length
n_atoms = 2
n_conformations = 100
conformations = np.zeros((n_conformations, n_atoms, 3))
# Change the z distance of atom 2
conformations[:, 1, 2] = np.linspace(0.1, 0.12, n_conformations)
# Set this data in the ParaMol system instance
co.ref_coordinates = conformations
co.n_structures = len(co.ref_coordinates)
# --------------------------------------------------------- #
# Calculate QM energies and forces #
# --------------------------------------------------------- #
# Create the ASE calculator
from ase.calculators.dftb import *
calc = Dftb(Hamiltonian_='DFTB',
Hamiltonian_MaxAngularMomentum_='',
Hamiltonian_MaxAngularMomentum_O='p',
Hamiltonian_MaxAngularMomentum_C='p',
Hamiltonian_SCC='Yes',
Hamiltonian_SCCTolerance=1e-8,
Hamiltonian_MaxSCCIterations=10000)
# Set the calculator in the settings; alternatively the QM engine could be created manually
paramol_settings.qm_engine["ase"]["calculator"] = calc
# Calculate Ab initio properties
ab_initio = AbInitioProperties()
ab_initio.run_task(paramol_settings, [co])
# Save coordinates, energies and forces into .nc file
co.write_data("co_scan.nc")
# --------------------------------------------------------- #
# Parametrize the CO bond #
# --------------------------------------------------------- #
parametrization = Parametrization()
optimal_parameters = parametrization.run_task(paramol_settings, [co])
| [
"numpy.zeros",
"numpy.linspace"
] | [((1171, 1210), 'numpy.zeros', 'np.zeros', (['(n_conformations, n_atoms, 3)'], {}), '((n_conformations, n_atoms, 3))\n', (1179, 1210), True, 'import numpy as np\n'), ((1271, 1310), 'numpy.linspace', 'np.linspace', (['(0.1)', '(0.12)', 'n_conformations'], {}), '(0.1, 0.12, n_conformations)\n', (1282, 1310), True, 'import numpy as np\n')] |
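The scan above varies the C-O distance linearly from 0.10 to 0.12 nm; once reference energies exist, a harmonic bond's `k` and `r0` can be recovered by a quadratic least-squares fit. A sketch with synthetic energies (the `k_true`/`r0_true` values are assumed for illustration, not ParaMol- or DFTB-computed):

```python
import numpy as np

# Bond distances matching the scan above: 0.10 -> 0.12 nm in 100 steps.
r = np.linspace(0.1, 0.12, 100)
# Hypothetical reference energies from a harmonic bond (assumed parameters).
k_true, r0_true = 250000.0, 0.113  # kJ mol^-1 nm^-2, nm
E = 0.5 * k_true * (r - r0_true) ** 2
# E = 0.5*k*r^2 - k*r0*r + 0.5*k*r0^2, so k = 2*c2 and r0 = -c1 / (2*c2).
c2, c1, c0 = np.polyfit(r, E, 2)
k_fit = 2.0 * c2
r0_fit = -c1 / (2.0 * c2)
```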
import requests
from itertools import product
from tqdm.notebook import tqdm
import numpy as np
from seaborn import color_palette
from shapely.geometry import box
from shapely import affinity
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from skimage import color, exposure
from renderapi.render import format_preamble
from renderapi.stack import (get_stack_bounds,
get_bounds_from_z,
get_z_values_for_stack)
from renderapi.tilespec import get_tile_spec, get_tile_specs_from_box
from renderapi.image import get_bb_image
from renderapi.errors import RenderError
from .render_pandas import create_stacks_DataFrame
__all__ = ['render_bbox_image',
'render_partition_image',
'render_tileset_image',
'render_stack_images',
'render_layer_images',
'render_neighborhood_image',
'plot_tile_map',
'plot_stacks',
'plot_neighborhoods',
'plot_stacks_interactive',
'colorize',
'T_HOECHST',
'T_AF594',
'T_RED',
'T_GREEN',
'T_BLUE',
'T_YELLOW']
def render_bbox_image(stack, z, bbox, width=1024, render=None,
**renderapi_kwargs):
"""Renders an image given the specified bounding box
Parameters
----------
stack : str
Input stack from which to render bbox image
z : float
Z layer at which to render bbox image
bbox : list, tuple, array-like
Coordinates of bounding box (minx, miny, maxx, maxy)
width : float
Width of rendered tileset image in pixels
render : `renderapi.render.RenderClient`
`render-ws` instance
Returns
-------
image : ndarray
Rendered bounding box image
Notes
-----
Differs from `renderapi.image.get_bb_image` parameters:
(x0, y0, width, height, scale)
"""
# Unpack bbox
x = bbox[0]
y = bbox[1]
w = bbox[2] - bbox[0]
h = bbox[3] - bbox[1]
s = width / (bbox[2] - bbox[0])
    # Render the bounding box image
image = get_bb_image(stack=stack, z=z, x=x, y=y,
width=w, height=h, scale=s,
render=render,
**renderapi_kwargs)
# Sometimes it overloads the system
if isinstance(image, RenderError):
# Recreate requested url
request_url = format_preamble(
host=render.DEFAULT_HOST,
port=render.DEFAULT_PORT,
owner=render.DEFAULT_OWNER,
project=render.DEFAULT_PROJECT,
stack=stack) + \
f"/z/{z:.0f}/box/{x:.0f},{y:.0f},{w:.0f},{h:.0f},{s}/png-image"
# Tell 'em the bad news
print(f"Failed to load {request_url}. Trying again with partitioned bboxes.")
# Try to render image from smaller bboxes
image = render_partition_image(stack, z, bbox, width, render,
**renderapi_kwargs)
return image
def render_partition_image(stack, z, bbox, width=1024, render=None,
**renderapi_kwargs):
"""Renders a bbox image from partitions"""
# Unpack bbox
x = bbox[0]
y = bbox[1]
w = bbox[2] - bbox[0]
h = bbox[3] - bbox[1]
s = width / (bbox[2] - bbox[0])
# Get tiles in bbox
tiles = get_tile_specs_from_box(stack, z, x, y, w, h, s, render=render)
# Average tile width/height
width_ts = np.mean([tile.width for tile in tiles])
height_ts = np.mean([tile.height for tile in tiles])
    # Set (approximate) dimensions of partitions; the actual partition
    # widths and heights are computed a few lines later by np.diff
    w_p = min(400, width_ts)
    h_p = min(400, height_ts)
    # Get coordinates for partitions (sub-bboxes)
    Nx_p = int(np.ceil(w/w_p))  # num partitions in x
    Ny_p = int(np.ceil(h/h_p))  # num partitions in y
    # Nx_p + 1 edge points yield Nx_p partitions after np.diff
    xs_p = np.linspace(x, x+w, Nx_p + 1, dtype=int)  # x coords of partition edges
    ys_p = np.linspace(y, y+h, Ny_p + 1, dtype=int)  # y coords of partition edges
    ws_p = np.diff(xs_p)  # partition widths
    hs_p = np.diff(ys_p)  # partition heights
# Create partitions using meshgrid
# [x0, y0, w0, h0]
# [x1, y0, w1, h0]
# [x2, y0, w2, h0] ...
partitions = np.array([g.ravel() for g in np.meshgrid(xs_p[:-1], ys_p[:-1])] +\
[g.ravel() for g in np.meshgrid(ws_p, hs_p)]).T
# Create empty bbox image (to stitch together partitions)
height = int(np.round(h/w * width))
image = np.zeros((height, width))
# Need x, y offsets such that image starts at (0, 0)
x0 = int(xs_p[0] * s)
y0 = int(ys_p[0] * s)
# Create a bbox image for each partition
for p in tqdm(partitions, leave=False):
# Get bbox image
image_p = get_bb_image(stack=stack, z=z, x=p[0], y=p[1],
width=p[2], height=p[3], scale=s,
render=render,
**renderapi_kwargs)
# Somehow it still overloads the system \_0_/
if isinstance(image_p, RenderError):
request_url = format_preamble(
host=render.DEFAULT_HOST,
port=render.DEFAULT_PORT,
owner=render.DEFAULT_OWNER,
project=render.DEFAULT_PROJECT,
stack=stack) + \
f"/z/{z:.0f}/box/{x:.0f},{y:.0f},{w:.0f},{h:.0f},{s}/png-image"
print(f"Failed to load {request_url}. Still fails -- wtf man.")
return image_p # RenderError
# Get coords for global bbox image
x1 = int(p[0] * s) - x0
# x2 = x1 + int(p[2] * s)
x2 = x1 + image_p.shape[1]
y1 = int(p[1] * s) - y0
# y2 = y1 + int(p[3] * s)
y2 = y1 + image_p.shape[0]
# Add partition to global bbox image
try:
if len(image_p.shape) > 2: # take only the first channel
image[y1:y2, x1:x2] = image_p[:,:,0]
else:
image[y1:y2, x1:x2] = image_p
except ValueError as e:
print(e)
# There are likely gaps due to rounding issues
# Fill in the gaps with mean value
image = np.where(image==0, image.mean(), image).astype(image_p.dtype)
return image
def render_tileset_image(stack, z, width=1024, render=None,
**renderapi_kwargs):
"""Renders an image of a tileset
Parameters
----------
stack : str
Stack with which to render the tileset image
z : float
Z value of stack at which to render tileset image
width : float
Width of rendered tileset image in pixels
render : `renderapi.render.RenderClient`
`render-ws` instance
Returns
-------
image : ndarray
Rendered image of the specified tileset
"""
# Get bbox for z layer from stack bounds
bounds = get_bounds_from_z(stack=stack,
z=z,
render=render)
bbox = [bounds[k] for k in ['minX', 'minY', 'maxX', 'maxY']]
# Render bbox image
image = render_bbox_image(stack=stack,
z=z,
bbox=bbox,
width=width,
render=render,
**renderapi_kwargs)
return image
def render_stack_images(stack, width=1024, render=None,
**renderapi_kwargs):
"""Renders tileset images for a given stack
Parameters
----------
stack : str
Stack with which to render images for all z values
width : float
Width of rendered tileset image in pixels
render : `renderapi.render.RenderClient`
`render-ws` instance
Returns
-------
images : dict
Dictionary of tileset images comprising the stack with z value as key
"""
# Get z values of stack
z_values = get_z_values_for_stack(stack=stack,
render=render)
# Get bbox of stack from stack bounds
bounds = get_stack_bounds(stack=stack,
render=render)
bbox = [bounds[k] for k in ['minX', 'minY', 'maxX', 'maxY']]
# Loop through z values and collect images
images = {}
for z in tqdm(z_values, leave=False):
image = render_bbox_image(stack=stack,
z=z,
bbox=bbox,
width=width,
render=render,
**renderapi_kwargs)
images[z] = image
return images
def render_layer_images(stacks, z, width=1024, render=None,
**renderapi_kwargs):
"""Renders tileset images for a given layer
Parameters
----------
stacks : list
List of stacks to with which to render layer images
z : float
Z value of stacks at which to render layer images
width : float
Width of rendered layer images in pixels
render : `renderapi.render.RenderClient`
`render-ws` instance
Returns
-------
images : dict
Dictionary of tileset images comprising the layer with stack name as key
"""
# Loop through stacks and collect images
images = {}
for stack in tqdm(stacks, leave=False):
image = render_tileset_image(stack=stack,
z=z,
width=width,
render=render,
**renderapi_kwargs)
images[stack] = image
return images
def render_neighborhood_image(stack, tileId, neighborhood=1, width=1024,
render=None, return_bbox=False,
**renderapi_kwargs):
"""Renders an image of the local neighborhood surrounding a tile
Parameters
----------
stack : str
Stack from which to render neighborhood image
tileId : str
        Identifier of the tile at the center of the neighborhood
neighborhood : float
Number of tiles surrounding center tile from which to render the image
    width : float
        Width of rendered neighborhood image in pixels
    render : `renderapi.render.RenderClient`
        `render-ws` instance
    return_bbox : bool
        If True, also return the bounding box of the neighborhood
    Returns
    -------
    image : ndarray
        Rendered image of the neighborhood (with its bbox if `return_bbox`)
    """
# Make alias for neighborhood
N = neighborhood
# Get bounding box of specified tile
tile_spec = get_tile_spec(stack=stack,
tile=tileId,
render=render)
# Get width of bbox
bbox = tile_spec.bbox
w = bbox[2] - bbox[0]
# Assume surrounding tiles are squares with ~same width as the center tile
bbox_neighborhood = (bbox[0] - N*w, bbox[1] - N*w,
bbox[2] + N*w, bbox[3] + N*w)
# Render image of neighborhood
image = render_bbox_image(stack=stack,
z=tile_spec.z,
bbox=bbox_neighborhood,
width=width,
render=render,
**renderapi_kwargs)
if return_bbox:
return image, bbox_neighborhood
else:
return image
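The neighborhood bbox above is simply the tile's bbox grown by N tile-widths on each side; a standalone sketch with a made-up bbox:

```python
# Hypothetical tile bbox (minX, minY, maxX, maxY) grown by N tile-widths
# on every side, as done above for the neighborhood.
def expand_bbox(bbox, neighborhood=1):
    N = neighborhood
    w = bbox[2] - bbox[0]
    return (bbox[0] - N * w, bbox[1] - N * w,
            bbox[2] + N * w, bbox[3] + N * w)

print(expand_bbox((100, 100, 200, 250)))  # (0, 0, 300, 350)
```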
def plot_tile_map(stacks, render=None):
"""Plots tiles (as matplotlib patches) in `render-ws`
Parameters
----------
stacks : list
List of stacks to plot
render : `renderapi.render.RenderClient`
`render-ws` instance
"""
# Create stacks DataFrame
df_stacks = create_stacks_DataFrame(stacks=stacks,
render=render)
# Specify stacks and sections for plotting
stacks_2_plot = df_stacks['stack'].unique().tolist()
sections_2_plot = df_stacks['sectionId'].unique().tolist()
# Set up figure
ncols = len(sections_2_plot)
fig, axes = plt.subplots(ncols=ncols, sharex=True, sharey=True,
squeeze=False, figsize=(8*ncols, 8))
axmap = {k: v for k, v in zip(sections_2_plot, axes.flat)}
cmap = {k: v for k, v in zip(stacks_2_plot,
color_palette(n_colors=len(stacks_2_plot)))}
# Collect all tiles in each layer to determine bounds
boxes = []
# Iterate through layers
for sectionId, layer in tqdm(df_stacks.groupby('sectionId')):
# Set axis
ax = axmap[sectionId]
# Collect legend handles
handles = []
# Loop through tilesets within each layer
for stack, tileset in layer.groupby('stack'):
# Loop through each tile
for i, tile in tileset.reset_index().iterrows():
# Create `shapely.box` resembling raw image tile
b = box(0, 0, tile['width'], tile['height'])
# Apply transforms to `shapely.box`
for tform in tile['tforms']:
A = (tform.M[:2, :2].ravel().tolist() +
tform.M[:2, 2].ravel().tolist())
b = affinity.affine_transform(b, A)
boxes.append(b)
# Get coordinates of `shapely.box` to plot matplotlib polygon patch
xy = np.array(b.exterior.xy).T
p = Polygon(xy, color=cmap[stack], alpha=0.2, label=stack)
# Add patch to axis
if i != 0: ax.add_patch(p) # Only add first patch
else: handles.append(ax.add_patch(p)) # to legend handles
# Label first tile in tileset
x, y = np.array(b.centroid.xy).ravel()
s = f"{stack}\n{sectionId}\n"\
f"{tile['imageCol']:02.0f}x{tile['imageRow']:02.0f}"
if i == 0: ax.text(x, y, s, ha='center', va='center')
# Axis aesthetics
ax.set_title(sectionId)
ax.legend(handles=handles, loc='lower right')
ax.set_xlabel('X [px]')
ax.set_ylabel('Y [px]')
ax.grid(ls=':', alpha=0.7)
ax.set_aspect('equal')
# Set axis limits based on bounding boxes
bounds = np.swapaxes([b.exterior.xy for b in boxes], 1, 2).reshape(-1, 2)
for ax in axmap.values():
ax.set_xlim(bounds[:, 0].min(), bounds[:, 0].max())
ax.set_ylim(bounds[:, 1].min(), bounds[:, 1].max())
ax.invert_yaxis()
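The per-tile transform flattening above takes a 3x3 affine matrix `M` and produces the six-parameter list `[a, b, d, e, xoff, yoff]` that `shapely.affinity.affine_transform` expects; a numpy-only sketch with an illustrative translation matrix:

```python
import numpy as np

# Illustrative 3x3 affine matrix: identity rotation/scale plus a (10, 5)
# translation, flattened as done above for shapely's affine_transform.
M = np.array([[1.0, 0.0, 10.0],
           [0.0, 1.0, 5.0],
           [0.0, 0.0, 1.0]])
A = M[:2, :2].ravel().tolist() + M[:2, 2].ravel().tolist()
print(A)  # [1.0, 0.0, 0.0, 1.0, 10.0, 5.0]
```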
def plot_stacks(stacks, z_values=None, width=1024, render=None,
**renderapi_kwargs):
"""Renders and plots tileset images for the given stacks"""
# Create DataFrame from stacks
df_stacks = create_stacks_DataFrame(stacks=stacks,
render=render)
# Plot all z values if none are provided
if z_values is None:
z_values = df_stacks['z'].unique().tolist()
# Set up figure
nrows = len(stacks)
ncols = len(z_values)
fig, axes = plt.subplots(nrows, ncols, squeeze=False,
figsize=(8*ncols, 8*nrows))
axmap = {k: v for k, v in zip(product(stacks, z_values), axes.flat)}
# Iterate through tilesets
df_2_plot = df_stacks.loc[df_stacks['z'].isin(z_values)]
for (stack, z), tileset in tqdm(df_2_plot.groupby(['stack', 'z'])):
# Render tileset image
image = render_tileset_image(stack=stack,
z=z,
width=width,
render=render,
**renderapi_kwargs)
# Get extent of tileset image in render-space
bounds = get_stack_bounds(stack=stack,
render=render)
extent = [bounds[k] for k in ['minX', 'maxX', 'minY', 'maxY']]
# Plot image
ax = axmap[(stack, z)]
ax.imshow(image, origin='lower', extent=extent)
# Axis aesthetics
ax.invert_yaxis()
sectionId = tileset['sectionId'].iloc[0]
ax.set_title(f"{stack}\nz = {z:.0f} | {sectionId}")
ax.set_xlabel('X [px]')
ax.set_ylabel('Y [px]')
def plot_neighborhoods(stacks, z_values=None, neighborhood=1, width=1024,
render=None, **renderapi_kwargs):
"""Renders and plots a neighborhood image around the given tile"""
# Create DataFrame from stacks
df_stacks = create_stacks_DataFrame(stacks=stacks,
render=render)
# Plot all z values if none are provided
if z_values is None:
z_values = df_stacks['z'].unique().tolist()
# Set up figure
nrows = len(stacks)
ncols = len(z_values)
fig, axes = plt.subplots(nrows, ncols, squeeze=False,
figsize=(8*ncols, 8*nrows))
axmap = {k: v for k, v in zip(product(stacks, z_values), axes.flat)}
# Iterate through tilesets
df_2_plot = df_stacks.loc[df_stacks['z'].isin(z_values)]
for (stack, z), tileset in tqdm(df_2_plot.groupby(['stack', 'z'])):
# Select a tile from the tileset randomly
tileId = tileset.sample(1).iloc[0]['tileId']
# Render neighborhood image
image, bbox = render_neighborhood_image(stack=stack,
tileId=tileId,
neighborhood=neighborhood,
width=width,
return_bbox=True,
render=render,
**renderapi_kwargs)
# Get extent of neighborhood image in render-space (L, R, B, T)
extent = (bbox[0], bbox[2], bbox[1], bbox[3])
# Plot image
ax = axmap[(stack, z)]
ax.imshow(image, origin='lower', extent=extent)
# Axis aesthetics
ax.invert_yaxis()
sectionId = tileset['sectionId'].iloc[0]
ax.set_title(f"{stack}\nz = {z:.0f} | {sectionId}\n{tileId}")
ax.set_xlabel('X [px]')
ax.set_ylabel('Y [px]')
def plot_stacks_interactive(z, stack_images, render=None):
"""Plot stacks interactively (imshow) with a slider to scroll through z value
Parameters
----------
z : scalar
Z value to plot
stack_images : dict
Collection of images in {stack1: {'z_n': image_n},
{'z_n+1': image_n+1},
stack2: {'z_n': image_n},
{'z_n+1': image_n+1}} form
"""
# Get stack names as keys
stacks = list(stack_images.keys())
# Setup figure
    ncols = len(stacks)
fig, axes = plt.subplots(ncols=ncols, sharex=True, sharey=True,
squeeze=False, figsize=(7*ncols, 7))
# Map each stack to an axis
axmap = {k: v for k, v in zip(stacks, axes.flat)}
# Get extent from global bounds
bounds = np.array([list(get_stack_bounds(
stack=stack, render=render).values()) for stack in stacks])
extent = [bounds[:, 0].min(axis=0), bounds[:, 3].max(axis=0), # minx, maxx
bounds[:, 1].min(axis=0), bounds[:, 4].max(axis=0)] # miny, maxy
# Loop through stacks to plot images
for stack, images in stack_images.items():
cmap = 'magma' if 'EM' not in stack else 'Greys'
axmap[stack].imshow(images[z], origin='lower', extent=extent, cmap=cmap)
axmap[stack].set_title(stack)
axmap[stack].set_aspect('equal')
axmap[stack].invert_yaxis()
def clear_image_cache():
url = 'https://sonic.tnw.tudelft.nl/render-ws/v1/imageProcessorCache/allEntries'
response = requests.delete(url)
return response
def colorize(image, T):
"""Colorize image
Parameters
----------
image : (M, N) array
Returns
-------
rescaled : rgba float array
Color transformed image
"""
# Convert to rgba
rgba = color.gray2rgba(image, alpha=True)
# Apply transform
transformed = np.dot(rgba, T)
rescaled = exposure.rescale_intensity(transformed)
return rescaled
# Color transformations
# ---------------------
# Labels
T_HOECHST = [[0.2, 0.0, 0.0, 0.2], # blueish
[0.0, 0.2, 0.0, 0.2],
[0.0, 0.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 0.0]]
T_AF594 = [[1.0, 0.0, 0.0, 1.0], # orangeish
[0.0, 0.6, 0.0, 0.6],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]
# Primary colors
T_RED = [[1.0, 0.0, 0.0, 1.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]
T_GREEN = [[0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 1.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]
T_BLUE = [[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 0.0]]
T_YELLOW = [[1.0, 0.0, 0.0, 1.0],
[0.0, 1.0, 0.0, 1.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]
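Each 4x4 matrix above mixes the (R, G, B, A) input channels into the output channels via the `np.dot` in `colorize`; a minimal numpy-only demonstration on a single gray pixel:

```python
import numpy as np

# One gray RGBA pixel pushed through T_RED: output keeps 0.5 in the red
# channel (and alpha), zeroing green and blue.
T_RED = [[1.0, 0.0, 0.0, 1.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
rgba = np.array([[[0.5, 0.5, 0.5, 1.0]]])  # shape (1, 1, 4) image
out = np.dot(rgba, T_RED)
print(out.tolist())  # [[[0.5, 0.0, 0.0, 0.5]]]
```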
import os
import datetime
import numpy as np
import pandas as pd
import xarray as xr
import boto3
from botocore.handlers import disable_signing
def get_forecast_from_silam_zarr(date_str, modality, day, version="v5_7_1"):
"""
Obtain forecast of specified parameter from SILAM for the whole world in zarr format
:param date_str: date of forecast generation: 8 digits (YYYYMMDD), e.g. 21210101 (string)
:param modality: CO, NO2, NO, O3, PM10, PM25, SO2, airdens (string)
:param day: one of 0, 1, 2, 3, 4 (number)
:param version: "v5_7_1" by default, if needed, check version on
http://fmi-opendata-silam-surface-zarr.s3-website-eu-west-1.amazonaws.com/?prefix=global/
:return: dataset of forecasts (xarray dataset)
"""
    bucket_name = "fmi-opendata-silam-surface-zarr"
key = f"global/{date_str}/silam_glob_{version}_{date_str}_{modality}_d{day}.zarr"
tmp_dir = "/tmp"
tmp_file = tmp_dir + "/" + key
if not os.path.exists(os.path.dirname(tmp_file)):
os.makedirs(os.path.dirname(tmp_file))
def download(bucket_name, key, dst_root="/tmp"):
"""Download zarr directory from S3"""
resource = boto3.resource("s3")
resource.meta.client.meta.events.register("choose-signer.s3.*", disable_signing)
bucket = resource.Bucket(bucket_name)
for object in bucket.objects.filter(Prefix=key):
dst = dst_root + "/" + object.key
if not os.path.exists(os.path.dirname(dst)):
os.makedirs(os.path.dirname(dst))
resource.meta.client.download_file(bucket_name, object.key, dst)
# download data
download(bucket_name, key)
# read dataset from the downloaded file
ds = xr.open_zarr(tmp_file, consolidated=False)
return ds
def get_series_from_location(ds, modality, approx_lat, approx_lon):
"""
Obtain time series from the whole world dataset from a specified location
:param ds: whole world dataset (xarray dataset)
:param modality: CO, NO2, NO, O3, PM10, PM25, SO2, airdens (string)
:param approx_lat: location of interest - latitude in degrees (float)
:param approx_lon: location of interest - longitude in degrees (float)
:return: localised time series (pandas time series)
"""
def find_closest_to(arr, val):
return arr.flat[np.abs(arr - val).argmin()]
# find the closest model cell coordinates and obtain data from that location
lat = find_closest_to(ds[modality].lat.values, approx_lat)
lon = find_closest_to(ds[modality].lon.values, approx_lon)
times = [val.values for val in list(ds[modality].time)]
data = ds[modality].sel(lat=lat, lon=lon).values
return pd.Series(index=times, data=data)
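The nearest-cell lookup in `find_closest_to` simply picks the grid value minimizing the absolute difference; a standalone sketch on a hypothetical 0.5-degree latitude grid:

```python
import numpy as np

# Nearest-grid-cell lookup, as used above to snap a requested location
# onto the model grid (the 0.5-degree grid here is illustrative).
def find_closest_to(arr, val):
    return arr.flat[np.abs(arr - val).argmin()]

lats = np.arange(-90.0, 90.5, 0.5)  # hypothetical model latitudes
print(find_closest_to(lats, 52.013))  # 52.0
```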
def get_all_days_series(start_date, modality, lat, lon):
"""
Obtain 5-day forecast of [modality] from [start_date] from location [lat; lon]
:param start_date: date of forecast generation (datetime)
:param modality: CO, NO2, NO, O3, PM10, PM25, SO2, airdens (string)
:param lat: location of interest - latitude in degrees (float)
:param lon: location of interest - longitude in degrees (float)
:return: 5 concatenated time series of forecasts (pandas series)
"""
    def get_date_str(start_date):
        # zero-padded 8-digit YYYYMMDD string
        return start_date.strftime("%Y%m%d")
# transform date into 8 digits (YYYYMMDD) string
date_str = get_date_str(start_date)
# obtain forecasts for each of 5 days and concatenate them
series_list = []
for d in range(5):
ds = get_forecast_from_silam_zarr(date_str, modality, day=d)
ts = get_series_from_location(ds, modality, lat, lon)
series_list.append(ts)
return pd.concat(series_list, axis=0)
def get_silam_ts(modality, lat, lon, max_days=30):
"""
Obtain time series of [modality] generated by SILAM during the last [max_days] days from location [lat; lon]
:param modality: CO, NO2, NO, O3, PM10, PM25, SO2, airdens (string)
:param lat: location of interest - latitude in degrees (float)
:param lon: location of interest - longitude in degrees (float)
    :param max_days: number of days to look back (0 - today only, 30 - back to 30 days ago) (number)
        NB: 30 days is the maximum stored on the SILAM cloud
:return: time series of forecasts (pandas series)
"""
all_series = []
for offset_days in range(0, max_days + 1):
start_date = datetime.datetime.now() - datetime.timedelta(offset_days)
series = get_all_days_series(start_date, modality, lat, lon)
all_series.append(series)
# build a dataframe from 5-day forecasts
df = pd.DataFrame(columns=[0, 1, 2, 3, 4])
for ts in all_series:
for idx, val in ts.items():
if idx in list(df.index):
for col in df.columns:
if np.isnan(df.loc[idx, col]):
df.loc[idx, col] = val
break
else:
df.loc[idx, 0] = val
df = df.sort_index()
# take series of the latest available estimates
    for idx, row in df.iterrows():
        for col in df.columns:
            if col != 0:  # columns are ints; comparing against the string "0" never matched
                if not np.isnan(row[col]):
                    df.loc[idx, 0] = row[col]
                else:
                    break
    silam_ts = df[0]
    fmt = "%Y-%m-%d %H:%M:%S"  # avoid shadowing the builtin `format`
    silam_ts.index = pd.to_datetime(list(silam_ts.index), format=fmt)
return silam_ts
'''
Created on 2 juin 2015
@author: <NAME>
'''
import numpy as np
import matplotlib.pyplot as plt
def paretoSorting(x0, x1):
fronts=list()
idx=np.lexsort((x1, x0))
fronts.append(list())
fronts[-1].append(idx[0])
for i0 in idx[1:]:
if x1[i0]>=x1[fronts[-1][-1]]:
fronts.append(list())
fronts[-1].append(i0)
else:
for i1 in range(0,len(fronts)):
if x1[i0]<x1[fronts[i1][-1]]:
fronts[i1].append(i0)
break
return (fronts, idx)
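The `np.lexsort` call above sorts by the last key first, i.e. points are swept in order of increasing `x0`, with `x1` breaking ties; a quick check:

```python
import numpy as np

# lexsort orders by the LAST key (x0) first, with x1 as the tie-breaker.
x0 = np.array([0.3, 0.1, 0.3, 0.2])
x1 = np.array([0.9, 0.5, 0.1, 0.7])
idx = np.lexsort((x1, x0))
print(idx.tolist())  # [1, 3, 2, 0]
```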
def doubleParetoSorting(x0, x1):
    idx = np.lexsort((x1, x0))
    idxEdge = np.lexsort((-np.square(x0-0.5), x1))
    # left/right track the scalar edge coordinates of each front; storing
    # them as single-element lists broke the float comparisons below
    fronts = [[idxEdge[0]]]
    left = [x0[idxEdge[0]]]
    right = [x0[idxEdge[0]]]
    for i0 in idxEdge[1:]:
        if x0[i0] >= left[-1] and x0[i0] <= right[-1]:
            # add a new front
            fronts.append([i0])
            left.append(x0[i0])
            right.append(x0[i0])
        else:
            # extend an existing front at its left or right edge
            for i1 in range(len(fronts)):
                if x0[i0] < left[i1] or x0[i0] > right[i1]:
                    if x0[i0] < left[i1]:
                        left[i1] = x0[i0]
                        fronts[i1].insert(0, i0)
                    else:
                        right[i1] = x0[i0]
                        fronts[i1].append(i0)
                    break
return (fronts, idx)
def plotFronts(fronts, x0, x1, **kwargs):
fig=plt.figure()
ax=plt.gca()
if 'size' in kwargs:
ax.scatter(x0, x1, c='k', s=kwargs['size'])
else:
ax.plot(x0, x1,'ok')
for l0 in fronts:
tmp0=x0[l0]
tmp1=x1[l0]
ax.plot(tmp0, tmp1,'-')
if 'annotate' in kwargs and kwargs['annotate']:
for label, x, y in zip(range(0,len(x0)), x0, x1):
plt.annotate(
label,
xy = (x, y), xytext = (-10, 10),
textcoords = 'offset points', ha = 'right', va = 'bottom',
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3, rad=-0.2'))
return fig
def convexSortingApprox(x0, x1):
''' does not work well '''
fronts0=paretoSorting(x0, x1)[0]
fronts1=paretoSorting(-x0, x1)[0]
minErrIdx=np.argmin(x1)
minErrNE=x0[minErrIdx]
fronts=[]
len0=len(fronts0)
len1=len(fronts1)
for i0 in range(max(len0, len1)):
tmpList=[]
if len0>i0:
tmp=x0[fronts0[i0]]<=minErrNE
tmpList.extend(np.array(fronts0[i0])[tmp])
if len1>i0:
tmp=x0[fronts1[i0]]>minErrNE
tmpList.extend(np.array(fronts1[i0])[tmp])
fronts.append(tmpList)
return fronts
def convexSorting(x0, x1):
#===========================================================================
# fronts, idx=paretoSorting(x0, x1)
#===========================================================================
fronts, idx=doubleParetoSorting(x0, x1)
lastChanged=0
for i0 in range(len(fronts)):
if len(fronts[i0])>0:
for i1 in range(lastChanged-1,i0-1,-1):
tmp=list()
for l0 in reversed(fronts[i1+1]):
if len(fronts[i1])==0 or x0[fronts[i1][-1]]<x0[l0] and x1[fronts[i1][-1]]>x1[l0]:
tmp.insert(0,fronts[i1+1].pop())
if len(tmp)>0:
fronts[i1].extend(tmp)
for i1 in range(i0+1, len(fronts)):
if len(fronts[i1])>0 and x0[fronts[i0][-1]]<x0[fronts[i1][-1]]:
fronts[i0].append(fronts[i1].pop())
lastChanged=i1
#=======================================================================
# if i0 in range(len(fronts)-23,len(fronts)-20):
# plotFronts(fronts, x0, x1)
# plt.show(block=False)
#=======================================================================
for i0 in range(len(fronts)-1,-1,-1):
if len(fronts[i0])==0:
fronts.pop(i0)
    return (fronts, idx)
import numpy as np
from numpy.ma import squeeze
from IAlgorithm import IAlgorithm
from Blob import Blob
from numpy import split
from numpy import genfromtxt
__author__ = 'simon'
class Colorname(IAlgorithm):
def __init__(self):
self.mapping = genfromtxt('data/colormapping.txt', delimiter=',')[:,3:]
def _compute(self, blob_generator):
        # Generator pipeline: consume blobs lazily and re-yield them
for blob in blob_generator:
# Map all RGB values to colorname histograms
blob.data *= 255
if len(blob.data.shape)==2:
blob.data = (blob.data.astype(np.uint8)/8).astype(np.uint32)
r=blob.data
g=blob.data
b=blob.data
else:
r,g,b=split((blob.data.astype(np.uint8)/8).astype(np.uint32),3,axis=2)
index = r+32*g+32*32*b
blob.data = self.mapping[squeeze(index)]
            yield blob
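The index computation above quantizes each 8-bit channel to 32 levels (integer-divide by 8) and flattens (r, g, b) into a single lookup index into the 32x32x32 colorname mapping table; a minimal sketch:

```python
import numpy as np

# Quantize an 8-bit RGB triple to 32 levels per channel and flatten it
# into a single index in [0, 32**3), as done above for the mapping lookup.
r, g, b = (np.uint32(v // 8) for v in (200, 100, 50))
index = int(r + 32 * g + 32 * 32 * b)
print(index)  # 25 + 32*12 + 1024*6 = 6553
```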
import warnings
def fxn():
warnings.warn("deprecated", DeprecationWarning)
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
import pickle
import cv2
import os
import json
import sys
import lmdb
from collections import defaultdict
import random
from utils import *
from datetime import datetime
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3'
# GPU_ID
gpu_options = tf.GPUOptions(allow_growth=True)
config = tf.ConfigProto(gpu_options=gpu_options,log_device_placement=True,allow_soft_placement=True)
#############
# Visual Feature Extraction
# Columbia University
#############
# Specify data path
shared = ''
models = ''
corpus_path = '/root/LDC/'
working_path = shared + '/root/shared/'
model_path = models + '/root/models/'
# Version Setting
# Set evaluation version as the prefix folder
version_folder = 'dryrun03/' #'dryrun/'
# Input: LDC2019E42 unpacked data, CU visual grounding and instance matching moodels, UIUC text mention results, CU object detection results
# Input Paths
# Source corpus data paths
print('Check Point: Raw Data corpus_path change',corpus_path)
parent_child_tab = corpus_path + 'docs/parent_children.tab'
kfrm_msb = corpus_path + 'docs/masterShotBoundary.msb'
kfrm_path = corpus_path + 'data/video_shot_boundaries/representative_frames'
jpg_path = corpus_path + 'data/jpg/jpg/'
#UIUC text mention result paths
video_asr_path = working_path + 'uiuc_asr_files/' + version_folder +'en_asr_ltf/'
video_map_path = working_path + 'uiuc_asr_files/' + version_folder +'en_asr_map/'
print('Check Point: text mentions path change',video_asr_path)
# CU object detection result paths
det_results_path_img = working_path + 'cu_objdet_results/' + version_folder + 'det_results_merged_34a.pkl' # jpg images
det_results_path_kfrm = working_path + 'cu_objdet_results/' + version_folder + 'det_results_merged_34b.pkl' # key frames
print('Check Point: Alireza path change:','\n',det_results_path_img,'\n', det_results_path_kfrm,'\n')
# Model Paths
# CU visual grounding and instance matching moodel paths
grounding_model_path = model_path + 'model_ELMo_PNASNET_VOA_norm'
matching_model_path = model_path + 'model_universal_no_recons_ins_only'
# Output: CU visual grounding and instance matching features
# Output Paths
# CU visual grounding feature paths
out_path_jpg_sem = working_path + 'cu_grounding_matching_features/' + version_folder + 'semantic_features_jpg.lmdb'
out_path_kfrm_sem = working_path + 'cu_grounding_matching_features/' + version_folder + 'semantic_features_keyframe.lmdb'
if not os.path.exists(working_path + 'cu_grounding_matching_features/' + version_folder):
os.makedirs(working_path + 'cu_grounding_matching_features/' + version_folder)
# CU instance matching feature paths
out_path_jpg = working_path + 'cu_grounding_matching_features/' + version_folder + 'instance_features_jpg.lmdb'
out_path_kfrm = working_path + 'cu_grounding_matching_features/' + version_folder + 'instance_features_keyframe.lmdb'
#loading grounding pretrained model
print('Loading grounding pretrained model...')
sess, graph = load_model(grounding_model_path,config)
input_img = graph.get_tensor_by_name("input_img:0")
mode = graph.get_tensor_by_name("mode:0")
v = graph.get_tensor_by_name("image_local_features:0")
v_bar = graph.get_tensor_by_name("image_global_features:0")
print('Loading done.')
#preparing dicts
parent_dict, child_dict = create_dict(parent_child_tab)
id2dir_dict_kfrm = create_dict_kfrm(kfrm_path, kfrm_msb, video_asr_path, video_map_path)
#jpg
path_dict = create_path_dict(jpg_path)
#mp4
path_dict.update(create_path_dict_kfrm(id2dir_dict_kfrm))
# print('HC000TJCP' in id2dir_dict_kfrm.keys())
# print(id2dir_dict_kfrm.keys())
#loading object detection results
with open(det_results_path_img, 'rb') as f:
dict_obj_img = pickle.load(f)
with open(det_results_path_kfrm, 'rb') as f:
dict_obj_kfrm = pickle.load(f)
print(datetime.now())
# print(child_dict)
# Semantic Features
# about 8 hours in total for Instance Features
#opening lmdb environment
lmdb_env_jpg = lmdb.open(out_path_jpg_sem, map_size=int(1e11), lock=False)
lmdb_env_kfrm = lmdb.open(out_path_kfrm_sem, map_size=int(1e11), lock=False)
#about 1.5 hour
print(datetime.now())
missed_children_jpg = []
for i, key in enumerate(dict_obj_img):
imgs,_ = fetch_img(key+'.jpg.ldcc', parent_dict, child_dict, path_dict, level = 'Child')
if len(imgs)==0:
missed_children_jpg.append(key)
continue
img_batch, bb_ids, bboxes_norm = batch_of_bbox(imgs[0], dict_obj_img, key,\
score_thr=0, filter_out=False)
if len(bb_ids)>0:
feed_dict = {input_img: img_batch, mode: 'test'}
v_pred = sess.run([v], feed_dict)[0]
for j,bb_id in enumerate(bb_ids):
mask = mask_fm_bbox(feature_map_size=(19,19),bbox_norm=bboxes_norm[j,:],order='xyxy')
if np.sum(mask)==0:
continue
img_vec = np.average(v_pred[j,:], weights = np.reshape(mask,[361]), axis=0)
save_key = key+'/'+str(bb_id)
with lmdb_env_jpg.begin(write=True) as lmdb_txn:
lmdb_txn.put(save_key.encode(), img_vec)
# [break] only for dockerization testing
#break
sys.stderr.write("Stored for image {} / {} \r".format(i, len(dict_obj_img)))
print(datetime.now())
#about 4-6 hours
print(datetime.now())
missed_children_kfrm = []
for i, key in enumerate(dict_obj_kfrm):
# key+'.mp4.ldcc'
# print('path from obj detecton for kfrm:',key+'.mp4.ldcc')
imgs,_ = fetch_img(key+'.mp4.ldcc', parent_dict, child_dict, path_dict, level = 'Child')
if len(imgs)==0:
missed_children_kfrm.append(key)
continue
img_batch, bb_ids, bboxes_norm = batch_of_bbox(imgs[0], dict_obj_kfrm, key,\
score_thr=0, filter_out=False)
if len(bb_ids)>0:
feed_dict = {input_img: img_batch, mode: 'test'}
v_pred = sess.run([v], feed_dict)[0]
for j,bb_id in enumerate(bb_ids):
mask = mask_fm_bbox(feature_map_size=(19,19),bbox_norm=bboxes_norm[j,:],order='xyxy')
if np.sum(mask)==0:
continue
img_vec = np.average(v_pred[j,:], weights = np.reshape(mask,[361]), axis=0)
save_key = key+'/'+str(bb_id)
with lmdb_env_kfrm.begin(write=True) as lmdb_txn:
lmdb_txn.put(save_key.encode(), img_vec)
# [break] only for dockerization testing
#break
sys.stderr.write("Stored for keyframe {} / {} \r".format(i, len(dict_obj_kfrm)))
print(datetime.now())
len(missed_children_jpg)
len(missed_children_kfrm)
# Instance Features
# about 3 hours in total for Instance Features
#opening lmdb environment
lmdb_env_jpg = lmdb.open(out_path_jpg, map_size=int(1e11), lock=False)
lmdb_env_kfrm = lmdb.open(out_path_kfrm, map_size=int(1e11), lock=False)
#loading instance matching pretrained model
sess, graph = load_model(matching_model_path, config)
input_img = graph.get_tensor_by_name("input_img:0")
mode = graph.get_tensor_by_name("mode:0")
img_vec = graph.get_tensor_by_name("img_vec:0")
#about 0.5 hour
print(datetime.now())
missed_children_jpg = []
for i, key in enumerate(dict_obj_img):
# Todo test
#if 'HC0005KMS' not in key: #or 'HC0001H01' in key:
# continue
print(i,key)
imgs,_ = fetch_img(key+'.jpg.ldcc', parent_dict, child_dict, path_dict, level = 'Child')
if len(imgs)==0:
missed_children_jpg.append(key)
continue
img_batch, bb_ids, bboxes_norm = batch_of_bbox(imgs[0], dict_obj_img, key,\
score_thr=0, filter_out=False,img_size=(224,224))
if len(bb_ids)>0:
        # Test for cropping bug
feed_dict = {input_img: img_batch, mode: 'test'}
img_vec_pred = sess.run([img_vec], feed_dict)[0]
# print('img_batch',img_batch)
# print('img_batch len:',len(img_batch),np.shape(img_batch))
# print('img_batch vec:',img_batch)
# print(np.shape(img_vec_pred))
# print('img_vec_pred',type(img_vec_pred),img_vec_pred)
for j,bb_id in enumerate(bb_ids):
save_key = key+'/'+str(bb_id)
with lmdb_env_jpg.begin(write=True) as lmdb_txn:
lmdb_txn.put(save_key.encode(), img_vec_pred[j,:])
# print(sum(img_vec_pred[j,:]))
# [break] only for dockerization testing
#break
sys.stderr.write("Stored for image {} / {} \r".format(i, len(dict_obj_img)))
print(datetime.now())
#about 3 hours
missed_children_kfrm = []
for i, key in enumerate(dict_obj_kfrm):
imgs,_ = fetch_img(key+'.mp4.ldcc', parent_dict, child_dict, path_dict, level = 'Child')
if len(imgs)==0:
missed_children_kfrm.append(key)
continue
img_batch, bb_ids, bboxes_norm = batch_of_bbox(imgs[0], dict_obj_kfrm, key,\
score_thr=0, filter_out=False,img_size=(224,224))
if len(bb_ids)>0:
feed_dict = {input_img: img_batch, mode: 'test'}
img_vec_pred = sess.run([img_vec], feed_dict)[0]
for j,bb_id in enumerate(bb_ids):
save_key = key+'/'+str(bb_id)
with lmdb_env_kfrm.begin(write=True) as lmdb_txn:
lmdb_txn.put(save_key.encode(), img_vec_pred[j,:])
# [break] only for dockerization testing
#break
sys.stderr.write("Stored for keyframe {} / {} \r".format(i, len(dict_obj_kfrm)))
print(datetime.now())
print('Visual Feature Extraction Finished.')
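The per-box feature vectors above are produced by weighting the 19x19 grounding feature map with a bounding-box mask and averaging, via the `np.average(v_pred[j,:], weights=np.reshape(mask,[361]), axis=0)` calls. A minimal sketch of that pooling step; `mask_fm_bbox` is a project helper, so a hand-built binary mask stands in for it here:

```python
import numpy as np

# 361 spatial cells (19 x 19), each with a 4-dim feature, in place of v_pred[j, :].
feature_map = np.arange(19 * 19 * 4, dtype=np.float64).reshape(361, 4)

# Hypothetical bounding-box mask over the 19 x 19 grid.
mask = np.zeros((19, 19))
mask[5:10, 5:10] = 1.0

# Average only over the masked cells; an all-zero mask would divide by
# zero, which is why the script checks np.sum(mask) == 0 first.
vec = np.average(feature_map, weights=np.reshape(mask, [361]), axis=0)
print(vec.shape)  # (4,)
```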
| [
"os.path.exists",
"numpy.reshape",
"os.makedirs",
"pickle.load",
"datetime.datetime.now",
"numpy.sum",
"warnings.warn",
"tensorflow.ConfigProto",
"tensorflow.GPUOptions"
] | [((445, 477), 'tensorflow.GPUOptions', 'tf.GPUOptions', ([], {'allow_growth': '(True)'}), '(allow_growth=True)\n', (458, 477), True, 'import tensorflow as tf\n'), ((487, 584), 'tensorflow.ConfigProto', 'tf.ConfigProto', ([], {'gpu_options': 'gpu_options', 'log_device_placement': '(True)', 'allow_soft_placement': '(True)'}), '(gpu_options=gpu_options, log_device_placement=True,\n allow_soft_placement=True)\n', (501, 584), True, 'import tensorflow as tf\n'), ((31, 78), 'warnings.warn', 'warnings.warn', (['"""deprecated"""', 'DeprecationWarning'], {}), "('deprecated', DeprecationWarning)\n", (44, 78), False, 'import warnings\n'), ((2603, 2688), 'os.path.exists', 'os.path.exists', (["(working_path + 'cu_grounding_matching_features/' + version_folder)"], {}), "(working_path + 'cu_grounding_matching_features/' +\n version_folder)\n", (2617, 2688), False, 'import os\n'), ((2690, 2768), 'os.makedirs', 'os.makedirs', (["(working_path + 'cu_grounding_matching_features/' + version_folder)"], {}), "(working_path + 'cu_grounding_matching_features/' + version_folder)\n", (2701, 2768), False, 'import os\n'), ((3860, 3874), 'pickle.load', 'pickle.load', (['f'], {}), '(f)\n', (3871, 3874), False, 'import pickle\n'), ((3945, 3959), 'pickle.load', 'pickle.load', (['f'], {}), '(f)\n', (3956, 3959), False, 'import pickle\n'), ((3966, 3980), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (3978, 3980), False, 'from datetime import datetime\n'), ((4272, 4286), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (4284, 4286), False, 'from datetime import datetime\n'), ((5394, 5408), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (5406, 5408), False, 'from datetime import datetime\n'), ((5435, 5449), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (5447, 5449), False, 'from datetime import datetime\n'), ((6644, 6658), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (6656, 6658), False, 'from datetime import datetime\n'), ((7214, 
7228), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (7226, 7228), False, 'from datetime import datetime\n'), ((8573, 8587), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (8585, 8587), False, 'from datetime import datetime\n'), ((9510, 9524), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (9522, 9524), False, 'from datetime import datetime\n'), ((4953, 4965), 'numpy.sum', 'np.sum', (['mask'], {}), '(mask)\n', (4959, 4965), True, 'import numpy as np\n'), ((6204, 6216), 'numpy.sum', 'np.sum', (['mask'], {}), '(mask)\n', (6210, 6216), True, 'import numpy as np\n'), ((5051, 5074), 'numpy.reshape', 'np.reshape', (['mask', '[361]'], {}), '(mask, [361])\n', (5061, 5074), True, 'import numpy as np\n'), ((6302, 6325), 'numpy.reshape', 'np.reshape', (['mask', '[361]'], {}), '(mask, [361])\n', (6312, 6325), True, 'import numpy as np\n')] |
import xnet
import glob
import math
import matplotlib
import concurrent.futures
import numpy as np
import matplotlib.pyplot as plt
from igraph import *
from collections import defaultdict
from util import get_valid_pp
from util import filter_pp_name
def calculate_dist(filenames):
for filename in filenames:
# print(filename)
net = xnet.xnet2igraph(filename)
weights = net.es['weight']
weights = [math.sqrt(2*(1-w)) for w in weights]
if len(weights) > 0:
net.es['distance'] = weights
xnet.igraph2xnet(net,filename[:-5]+"_dist.xnet")
else:
print('error',filename)
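The `math.sqrt(2*(1-w))` transform in `calculate_dist` converts edge weights `w` (similarities assumed to lie in [0, 1]) into distances: identical endpoints (w = 1) map to distance 0 and fully dissimilar ones (w = 0) to sqrt(2). A quick check of the endpoints:

```python
import math

def weight_to_distance(w):
    # Same transform as calculate_dist: high similarity -> small distance.
    return math.sqrt(2 * (1 - w))

print(weight_to_distance(1.0))  # 0.0
print(weight_to_distance(0.5))  # 1.0
print(weight_to_distance(0.0))  # ~1.4142
```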
def to_sort(dates,nets):
dates = np.asarray(dates)
nets = np.asarray(nets)
sorted_idxs = np.argsort(dates)
dates = dates[sorted_idxs]
nets = nets[sorted_idxs]
return dates,nets
# Utilities
def get_freqs(summaries,dates):
ys = defaultdict(lambda:defaultdict(lambda:[]))
freq_dict = defaultdict(lambda:[])
for d in dates:
year_summary = summaries[d]
for pp1,summary_pp1 in year_summary.items():
if summary_pp1:
for pp2,(mean,std,f) in summary_pp1.items():
ys[pp1][pp2].append((d,mean,std,f))
freq_dict[pp2].append(f)
freq = [(np.nanmean(fs),pp) for pp,fs in freq_dict.items()]
freq = sorted(freq,reverse=True)
i = 0
f_max = freq[i][0]
while np.isnan(freq[i][0]):
i+= 1
f_max = freq[i][0]
return ys,freq,f_max
def plot_metric(to_plot,interval_colors,color,output_fname,metric_name,is_custom_labels,is_bg):
plt.figure(figsize=(12,3))
xs2 = []
print(output_fname)
for pp1,(means,total_std,fraq,xs) in to_plot.items():
if len(xs) > len(xs2):
xs2 = xs
fraq = max(fraq,0.45)
# elw = max(0.3,2*fraq)
# lw = max(0.3,2*fraq)
# ms = max(0.3,2*fraq)
plt.errorbar(xs,means,total_std,
linestyle='-',label=pp1.upper(),fmt='o',elinewidth=1.5*fraq,
linewidth=2*fraq,markersize=2*fraq,
alpha=max(0.6,fraq),color=color[pp1])
delta = 12
if is_custom_labels:
delta = 1
labels = [str(int(x)) if i%delta == 0 else '' for i,x in enumerate(xs2)]
xpos = np.arange(min(xs2), max(xs2)+1/delta, 1/delta)
plt.xticks(xpos,labels=labels,rotation=35)
if is_bg:
for begin,delta,color in interval_colors:
if begin+delta >= xs2[0] and begin <= xs2[-1]:
plt.axvspan(max(begin,xs2[0]), min(begin+delta,xs2[-1]), facecolor=color, alpha=0.3)
plt.axvline(max(begin,xs2[0]),color='#2e2e2e',linestyle='--',alpha=0.5)
plt.legend(loc='upper right',bbox_to_anchor=(1.05, 1.0))
plt.xlabel('year')
plt.ylabel(metric_name)
plt.savefig(output_fname+'.pdf',format='pdf',bbox_inches="tight")
plt.clf()
# Shortest paths
def calculate_shortest_paths(net,pps):
summary = defaultdict(lambda:defaultdict(lambda:0))
all_paths = []
for pp1 in pps:
sources = net.vs.select(political_party_eq=pp1)
for pp2 in pps:
# print('current pps:',pp1,pp2)
targets = net.vs.select(political_party_eq=pp2)
targets = [v.index for v in targets]
paths = []
# for s in sources:
# for t in targets:
# print(net.shortest_paths_dijkstra(source=s,target=t,weights='distance')[0],end=',')
for s in sources:
path_lens = net.get_shortest_paths(s,to=targets,weights='distance',output="epath")
for p in path_lens:
x = sum(net.es[idx]['distance'] for idx in p)
# print(x,end=',')
if x > 0:
paths.append(x)
all_paths.append(x)
if len(paths) == 0:
summary[pp1][pp2] = (np.nan,np.nan,np.nan)
summary[pp2][pp1] = (np.nan,np.nan,np.nan)
else:
mean = np.mean(paths)
std_dev = np.std(paths)
summary[pp1][pp2] = (mean,std_dev,len(targets))
summary[pp2][pp1] = (mean,std_dev,len(sources))
if pp1 == pp2:
break
all_paths_mean = np.mean(all_paths)
all_paths_std = np.std(all_paths)
return summary,(all_paths_mean,all_paths_std)
def shortest_path_by_pp(freq,pp2_means,f_max):
to_plot = dict()
for f,pp2 in freq:
means_std = pp2_means[pp2]
means_std = np.asarray(means_std)
means = means_std[:,1]
std = means_std[:,2]
xs = means_std[:,0]
fraq = f/f_max
if not np.isnan(means).all():
to_plot[pp2] = (means,std,fraq,xs)
return to_plot
def plot_shortest_paths(dates,nets,valid_pps,interval_colors,color,header,is_custom_labels,is_bg):
summaries = dict()
all_paths_summary = []
for date,net in zip(dates,nets):
summaries[date],all_paths = calculate_shortest_paths(net,valid_pps)
all_paths_summary.append(all_paths)
all_paths_summary = np.asarray(all_paths_summary)
ys,_,_ = get_freqs(summaries,dates)
for pp1,pp2_means in ys.items():
if not pp1 in valid_pps:
continue
freq = []
for pp2,means_std in pp2_means.items():
means_std = np.array(means_std)
freq.append((np.nanmean(means_std[:,3]),pp2))
freq = sorted(freq,reverse=True)
f_max = freq[0][0]
to_plot = shortest_path_by_pp(freq,pp2_means,f_max)
to_plot['all'] = (all_paths_summary[:,0], all_paths_summary[:,1],0.3,dates)
plot_metric(to_plot,interval_colors,color,header+pp1,'average shortest path len',is_custom_labels,is_bg)
def plot_shortest_paths_all_years(dates,nets,valid_pps,interval_colors,color,is_bg):
header = 'shortest_path_'
plot_shortest_paths(dates,nets,valid_pps,interval_colors,color,header,True,is_bg)
def plot_shortest_paths_mandate(dates,nets,year,valid_pps,interval_colors,color,is_bg):
idxs = [idx for idx,date in enumerate(dates) if date < year+4 and date >= year]
current_dates = [dates[idx] for idx in idxs]
current_nets = [nets[idx] for idx in idxs]
header = 'shortest_path_' + str(year) + '_' + str(year+3) + '_'
plot_shortest_paths(current_dates,current_nets,valid_pps,interval_colors,color,header,False,is_bg)
# Isolation/Fragmentation
def fragmentation_to_plot(summaries,dates):
to_plot = dict()
ys,freq,f_max = get_freqs(summaries,dates)
fragmentation = dict()
for f,pp1 in freq:
pp2_means = ys[pp1]
means = np.zeros(len(pp2_means[pp1]))
xs = []
for pp2,means_std in pp2_means.items():
if pp1 == pp2:
means_std = np.array(means_std)
means = means_std[:,1]
std = means_std[:,2]
xs = means_std[:,0]
break
fraq = f/f_max
fraq = max(fraq,0.45)
if np.isnan(fraq) or np.isnan(means).all():
continue
to_plot[pp1] = (means,std,fraq,xs)
return to_plot
def isolation_to_plot(summaries,dates):
to_plot = dict()
ys,freq,f_max = get_freqs(summaries,dates)
# if np.isnan(f_max):
# return None,None
for f,pp1 in freq:
pp2_means = ys[pp1]
# if pp1 == 'psl':
# print(pp2_means)
means = np.zeros(len(pp2_means[pp1]))
total_std = np.zeros(len(pp2_means[pp1]))
total = np.zeros(len(pp2_means[pp1]))
xs = []
for pp2,means_std in pp2_means.items():
if not pp1 == pp2:
means_std = np.array(means_std)
means_std[np.isnan(means_std)]=0
if not np.isnan(means_std).any():
xs = means_std[:,0]
t = means_std[:,3]
std = means_std[:,2]
total += t
means += means_std[:,1]*t
total_std += std*t
means /= total
total_std /= total
fraq = f/f_max
fraq = max(fraq,0.45)
if np.isnan(fraq) or np.isnan(means).all():
continue
to_plot[pp1] = (means,total_std,fraq,xs)
return to_plot
def plot_metric_all_years(dates,nets,metric_to_plot,valid_pps,pps_color,metric_name,is_bg):
summaries = dict()
for d,n in zip(dates,nets):
summaries[d],all_paths = calculate_shortest_paths(n,valid_pps)
output_fname = metric_name + '_' + str(min(dates))+'_'+str(max(dates))
to_plot = metric_to_plot(summaries,dates)
metric = {pp1:(means,total_std) for pp1,(means,total_std,_,_) in to_plot.items()}
plot_metric(to_plot,interval_colors,pps_color,output_fname,metric_name,True,is_bg)
return metric,dates
def plot_metric_mandate(dates,nets,metric_to_plot,year,valid_pps,pps_color,metric_name,is_bg,delta=4):
summaries = dict()
idxs = [idx for idx,date in enumerate(dates) if date < year+delta and date >= year]
current_dates = [dates[idx] for idx in idxs]
current_nets = [nets[idx] for idx in idxs]
for d,n in zip(current_dates,current_nets):
summaries[d],all_paths = calculate_shortest_paths(n,valid_pps)
output_fname = metric_name + '_' + str(int(min(current_dates)))+'_'+str(int(max(current_dates)))
to_plot = metric_to_plot(summaries,current_dates)
metric = {pp1:(means,total_std) for pp1,(means,total_std,_,_) in to_plot.items()}
plot_metric(to_plot,interval_colors,pps_color,output_fname,metric_name,False,is_bg)
return metric,current_dates
if __name__ == '__main__':
##############################################################
# READ INPUT
##############################################################
source_by_year = 'data/1991-2019/by_year/dep_*_obstr_0.8_leidenalg'
source_by_mandate = 'data/1991-2019/mandate/dep_*_0.8'
# Called only once
source = 'data/1991-2019/by_year/dep_*_obstr_0.8_leidenalg'
filenames = glob.glob(source+'.xnet')
calculate_dist(filenames)
filenames_by_year = sorted(glob.glob(source_by_year+'_dist.xnet'))
filenames_by_mandate = sorted(glob.glob(source_by_mandate+'_dist.xnet'))
dates_by_year, dates_by_mandate = [],[]
nets_by_year, nets_by_mandate = [],[]
for filename in filenames_by_year:
net = xnet.xnet2igraph(filename)
net.vs['political_party'] = [filter_pp_name(p) for p in net.vs['political_party']]
nets_by_year.append(net.components().giant())
date = int(filename.split('dep_')[1].split('_')[0])
dates_by_year.append(date)
for filename in filenames_by_mandate:
net = xnet.xnet2igraph(filename)
net.vs['political_party'] = [filter_pp_name(p) for p in net.vs['political_party']]
nets_by_mandate.append(net.components().giant())
        # per year
date = int(filename.split('dep_')[1].split('_')[0])
date += float(filename.split('dep_')[1].split('_')[1])/12
dates_by_mandate.append(date)
dates_by_year,nets_by_year = to_sort(dates_by_year,nets_by_year)
dates_by_mandate,nets_by_mandate = to_sort(dates_by_mandate,nets_by_mandate)
##############################################################
# VALID POLITICAL PARTIES
##############################################################
# valid_pps = list(get_valid_pp(nets_by_year,1990,1,cut_percent=0.06))
# valid_pps = ['psdb', 'pp', 'pmdb', 'pt', 'dem', 'pl', 'ptb', 'psb', 'pr']
# valid_pps = sorted(valid_pps)
# valid_pps = ['psdb', 'pp', 'pmdb', 'pt', 'dem', 'pdt', 'psb', 'psl', 'ptb', 'prb', 'pl']
valid_pps = ['psdb', 'pp', 'pmdb', 'pt', 'dem']#,'psl']
colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] + ['magenta','navy','violet','teal']
pps_color = dict()
for pp,c in zip(valid_pps,colors):
pps_color[pp] = c
pps_color['all'] = 'cyan'
interval_colors = [(1992.95,2.05,pps_color['pmdb']),(1995,8,pps_color['psdb']),
(2003,13.4,pps_color['pt']),(2016.4,0.26,'#757373'),(2016.66,2.34,pps_color['pmdb'])]
# ,
# (2019,1,pps_color['psl'])] # psl
govs = [('FHC',(1995.01,2003)),('Lula',(2003.01,2011)),('Dilma',(2011.01,2016.4)),('Temer',(2016.4,2019)),('Bolsonaro',(2019.1,2020))]
gov_map = {'FHC':'psdb','Lula':'pt','Dilma':'pt','Temer':'pmdb','Bolsonaro':'psl'}
##############################################################
# PLOT SHORTEST PATHS
##############################################################
    # all years
plot_shortest_paths_all_years(dates_by_year,nets_by_year,valid_pps,interval_colors,pps_color,True)
    # per term
for year in range(2016,2020,4):
plot_shortest_paths_mandate(dates_by_mandate,nets_by_mandate,year,valid_pps,interval_colors,pps_color,False)
##############################################################
# ISOLATION/FRAGMENTATION
##############################################################
    # Code for data in yearly intervals:
plot_metric_all_years(dates_by_year,nets_by_year,isolation_to_plot,valid_pps,pps_color,'isolation',True)
plot_metric_all_years(dates_by_year,nets_by_year,fragmentation_to_plot,valid_pps,pps_color,'fragmentation',True)
    # Code for data in monthly intervals:
plot_metric_mandate(dates_by_mandate,nets_by_mandate,fragmentation_to_plot,2015,valid_pps,pps_color,'fragmentation',True,5)
plot_metric_mandate(dates_by_mandate,nets_by_mandate,isolation_to_plot,2015,valid_pps,pps_color,'isolation',True,5)
##############################################################
# ZOOM 2015 - 2020
##############################################################
total_frag = defaultdict(lambda:[])
total_xs = []
for year in range(2015,2020,4):
frag,xs = plot_metric_mandate(dates_by_mandate,nets_by_mandate,fragmentation_to_plot,year,valid_pps,pps_color,'fragmentation',True)
for k,v in frag.items():
total_frag[k].append(v)
total_xs.append(xs)
total_isol = defaultdict(lambda:[])
total_xs = []
for year in range(2015,2020,4):
isol,xs = plot_metric_mandate(dates_by_mandate,nets_by_mandate,isolation_to_plot,year,valid_pps,pps_color,'isolation',True)
for k,v in isol.items():
total_isol[k].append(v)
total_xs.append(xs) | [
"xnet.xnet2igraph",
"xnet.igraph2xnet",
"matplotlib.pyplot.ylabel",
"math.sqrt",
"numpy.argsort",
"numpy.nanmean",
"numpy.array",
"numpy.mean",
"matplotlib.pyplot.xlabel",
"numpy.asarray",
"glob.glob",
"matplotlib.pyplot.savefig",
"matplotlib.pyplot.xticks",
"util.filter_pp_name",
"numpy... | [((627, 644), 'numpy.asarray', 'np.asarray', (['dates'], {}), '(dates)\n', (637, 644), True, 'import numpy as np\n'), ((656, 672), 'numpy.asarray', 'np.asarray', (['nets'], {}), '(nets)\n', (666, 672), True, 'import numpy as np\n'), ((692, 709), 'numpy.argsort', 'np.argsort', (['dates'], {}), '(dates)\n', (702, 709), True, 'import numpy as np\n'), ((907, 931), 'collections.defaultdict', 'defaultdict', (['(lambda : [])'], {}), '(lambda : [])\n', (918, 931), False, 'from collections import defaultdict\n'), ((1379, 1399), 'numpy.isnan', 'np.isnan', (['freq[i][0]'], {}), '(freq[i][0])\n', (1387, 1399), True, 'import numpy as np\n'), ((1578, 1605), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(12, 3)'}), '(figsize=(12, 3))\n', (1588, 1605), True, 'import matplotlib.pyplot as plt\n'), ((2308, 2352), 'matplotlib.pyplot.xticks', 'plt.xticks', (['xpos'], {'labels': 'labels', 'rotation': '(35)'}), '(xpos, labels=labels, rotation=35)\n', (2318, 2352), True, 'import matplotlib.pyplot as plt\n'), ((2676, 2733), 'matplotlib.pyplot.legend', 'plt.legend', ([], {'loc': '"""upper right"""', 'bbox_to_anchor': '(1.05, 1.0)'}), "(loc='upper right', bbox_to_anchor=(1.05, 1.0))\n", (2686, 2733), True, 'import matplotlib.pyplot as plt\n'), ((2737, 2755), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""year"""'], {}), "('year')\n", (2747, 2755), True, 'import matplotlib.pyplot as plt\n'), ((2760, 2783), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['metric_name'], {}), '(metric_name)\n', (2770, 2783), True, 'import matplotlib.pyplot as plt\n'), ((2804, 2873), 'matplotlib.pyplot.savefig', 'plt.savefig', (["(output_fname + '.pdf')"], {'format': '"""pdf"""', 'bbox_inches': '"""tight"""'}), "(output_fname + '.pdf', format='pdf', bbox_inches='tight')\n", (2815, 2873), True, 'import matplotlib.pyplot as plt\n'), ((2874, 2883), 'matplotlib.pyplot.clf', 'plt.clf', ([], {}), '()\n', (2881, 2883), True, 'import matplotlib.pyplot as plt\n'), ((4305, 4323), 'numpy.mean', 
'np.mean', (['all_paths'], {}), '(all_paths)\n', (4312, 4323), True, 'import numpy as np\n'), ((4344, 4361), 'numpy.std', 'np.std', (['all_paths'], {}), '(all_paths)\n', (4350, 4361), True, 'import numpy as np\n'), ((5129, 5158), 'numpy.asarray', 'np.asarray', (['all_paths_summary'], {}), '(all_paths_summary)\n', (5139, 5158), True, 'import numpy as np\n'), ((10132, 10159), 'glob.glob', 'glob.glob', (["(source + '.xnet')"], {}), "(source + '.xnet')\n", (10141, 10159), False, 'import glob\n'), ((13879, 13903), 'collections.defaultdict', 'defaultdict', (['(lambda : [])'], {}), '(lambda : [])\n', (13890, 13903), False, 'from collections import defaultdict\n'), ((14211, 14235), 'collections.defaultdict', 'defaultdict', (['(lambda : [])'], {}), '(lambda : [])\n', (14222, 14235), False, 'from collections import defaultdict\n'), ((341, 367), 'xnet.xnet2igraph', 'xnet.xnet2igraph', (['filename'], {}), '(filename)\n', (357, 367), False, 'import xnet\n'), ((4559, 4580), 'numpy.asarray', 'np.asarray', (['means_std'], {}), '(means_std)\n', (4569, 4580), True, 'import numpy as np\n'), ((10220, 10260), 'glob.glob', 'glob.glob', (["(source_by_year + '_dist.xnet')"], {}), "(source_by_year + '_dist.xnet')\n", (10229, 10260), False, 'import glob\n'), ((10294, 10337), 'glob.glob', 'glob.glob', (["(source_by_mandate + '_dist.xnet')"], {}), "(source_by_mandate + '_dist.xnet')\n", (10303, 10337), False, 'import glob\n'), ((10478, 10504), 'xnet.xnet2igraph', 'xnet.xnet2igraph', (['filename'], {}), '(filename)\n', (10494, 10504), False, 'import xnet\n'), ((10811, 10837), 'xnet.xnet2igraph', 'xnet.xnet2igraph', (['filename'], {}), '(filename)\n', (10827, 10837), False, 'import xnet\n'), ((410, 432), 'math.sqrt', 'math.sqrt', (['(2 * (1 - w))'], {}), '(2 * (1 - w))\n', (419, 432), False, 'import math\n'), ((505, 556), 'xnet.igraph2xnet', 'xnet.igraph2xnet', (['net', "(filename[:-5] + '_dist.xnet')"], {}), "(net, filename[:-5] + '_dist.xnet')\n", (521, 556), False, 'import xnet\n'), ((866, 
890), 'collections.defaultdict', 'defaultdict', (['(lambda : [])'], {}), '(lambda : [])\n', (877, 890), False, 'from collections import defaultdict\n'), ((1243, 1257), 'numpy.nanmean', 'np.nanmean', (['fs'], {}), '(fs)\n', (1253, 1257), True, 'import numpy as np\n'), ((2977, 3000), 'collections.defaultdict', 'defaultdict', (['(lambda : 0)'], {}), '(lambda : 0)\n', (2988, 3000), False, 'from collections import defaultdict\n'), ((5390, 5409), 'numpy.array', 'np.array', (['means_std'], {}), '(means_std)\n', (5398, 5409), True, 'import numpy as np\n'), ((7082, 7096), 'numpy.isnan', 'np.isnan', (['fraq'], {}), '(fraq)\n', (7090, 7096), True, 'import numpy as np\n'), ((8235, 8249), 'numpy.isnan', 'np.isnan', (['fraq'], {}), '(fraq)\n', (8243, 8249), True, 'import numpy as np\n'), ((10542, 10559), 'util.filter_pp_name', 'filter_pp_name', (['p'], {}), '(p)\n', (10556, 10559), False, 'from util import filter_pp_name\n'), ((10875, 10892), 'util.filter_pp_name', 'filter_pp_name', (['p'], {}), '(p)\n', (10889, 10892), False, 'from util import filter_pp_name\n'), ((4050, 4064), 'numpy.mean', 'np.mean', (['paths'], {}), '(paths)\n', (4057, 4064), True, 'import numpy as np\n'), ((4091, 4104), 'numpy.std', 'np.std', (['paths'], {}), '(paths)\n', (4097, 4104), True, 'import numpy as np\n'), ((6864, 6883), 'numpy.array', 'np.array', (['means_std'], {}), '(means_std)\n', (6872, 6883), True, 'import numpy as np\n'), ((7765, 7784), 'numpy.array', 'np.array', (['means_std'], {}), '(means_std)\n', (7773, 7784), True, 'import numpy as np\n'), ((4708, 4723), 'numpy.isnan', 'np.isnan', (['means'], {}), '(means)\n', (4716, 4723), True, 'import numpy as np\n'), ((5435, 5462), 'numpy.nanmean', 'np.nanmean', (['means_std[:, 3]'], {}), '(means_std[:, 3])\n', (5445, 5462), True, 'import numpy as np\n'), ((7100, 7115), 'numpy.isnan', 'np.isnan', (['means'], {}), '(means)\n', (7108, 7115), True, 'import numpy as np\n'), ((7811, 7830), 'numpy.isnan', 'np.isnan', (['means_std'], {}), '(means_std)\n', 
(7819, 7830), True, 'import numpy as np\n'), ((8253, 8268), 'numpy.isnan', 'np.isnan', (['means'], {}), '(means)\n', (8261, 8268), True, 'import numpy as np\n'), ((7857, 7876), 'numpy.isnan', 'np.isnan', (['means_std'], {}), '(means_std)\n', (7865, 7876), True, 'import numpy as np\n')] |
# -*- coding: utf-8 -*-
"""MNISTlargeUntrainedNetCNN.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1_Z1XQGDnL5OV8mILfg8yTIAZfiGbio3G
Adapted from [Keras GitHub Example](http://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py) and [<NAME>'s NB](https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6)
TensorFlow 1.x --> TensorFlow 2.x
SciKit Learn --> TensorFlow
# 1. Introduction
This tutorial is an introduction to Convolutional Neural Networks using the TensorFlow 2.x Keras API. The dataset that we will work with is the MNIST dataset, a dataset of handwritten digits 0-9, and we will use a sequential CNN to predict which digit was drawn.
This model reaches 99.3% accuracy.
To prepare our notebook, run the next cell to import the necessary packages and change the accelerator from ```None``` to ```GPU```.
"""
import tensorflow as tf
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
print(tf.__version__)
"""# 2. Data Preprocessing
Before building any ML model, it is important to preprocess the data. In fact, data preprocessing will generally take up the most time in any ML pipeline. The following module goes over the steps to preprocess the MNIST dataset for our purposes.
## 2.1 Load Data
Our first step is to load the data and divide it into a training and testing dataset. The MNIST dataset can be downloaded directly from TensorFlow and has already been divided. Run the next cell to import the data.
``` x_train ``` is the dataset of 28x28 images of handwritten digits that the model will be trained on.
```y_train``` is the dataset of labels that correspond to ```x_train```.
``` x_test ``` is the dataset of 28x28 images of handwritten digits that the model will be tested on.
```y_test``` is the dataset of labels that correspond to ```x_test```.
"""
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
"""Run the following code to see the counts of each digit present in our training dataset."""
sns.countplot(y_train)
"""There are similar counts for each digit. This is good as the model will have enough images for each class to train the features for each class. There is no need to downsample or upweigh.
## 2.2 Check for NaN Values
"""
np.isnan(x_train).any()
np.isnan(x_test).any()
"""There are no NaN values in our dataset. There is no need to preprocess the data to deal with Nan's.
## 2.3 Normalization and Reshaping
Since the values in our ```x_train``` dataset are 28x28 images, our input shape must be specified so that our model will know what is being input.
The first convolution layer expects a single 60000x28x28x1 tensor instead of 60000 28x28x1 tensors.
Models generally run better on normalized values. The best way to normalize the data depends on each individual dataset. For the MNIST dataset, we want each value to be between 0.0 and 1.0. As all values originally fall within the 0.0-255.0 range, divide by 255.0.
Run the following cell to define the ```input_shape``` and to normalize and reshape the data.
"""
input_shape = (28, 28, 1)
x_train=x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1)
x_train=x_train / 255.0
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], x_test.shape[2], 1)
x_test=x_test/255.0
"""## 2.4 Label Encoding
The labels for the training and testing datasets are currently categorical, not continuous. To include categorical data in our model, the labels should be converted to one-hot encodings.
For example, ```2``` becomes ```[0,0,1,0,0,0,0,0,0,0]``` and ```7``` becomes ```[0,0,0,0,0,0,0,1,0,0]```.
Run the following cell to transform the labels into one-hot encodings.
"""
y_train = tf.one_hot(y_train.astype(np.int32), depth=10)
y_test = tf.one_hot(y_test.astype(np.int32), depth=10)
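The mapping described above can also be reproduced with a small NumPy sketch (the `one_hot` helper below is illustrative only, not part of this notebook):

```python
import numpy as np

def one_hot(labels, depth=10):
    # Each row is all zeros except a single 1 at the label's index
    out = np.zeros((len(labels), depth), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

# 2 -> [0,0,1,0,0,0,0,0,0,0] and 7 -> [0,0,0,0,0,0,0,1,0,0], as stated above
print(one_hot([2, 7]))
```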
"""## 2.5 Visualize Data
Run the following cell to visualize an image in our dataset.
"""
plt.imshow(x_train[100][:,:,0])
print(y_train[100])
"""The image is an image of a handwritten ```5```. The one-hot encoding holds the value of ```5```.
# 3. CNN
In this module, we will build our CNN model.
## 3.1 Define the Model
Run the following cell to define ```batch_size```, ```num_classes```, and ```epochs```. Try changing the values and test how different values affect the accuracy of the CNN model.
"""
batch_size = 64
num_classes = 10
epochs = 5
"""Run the following cell to build the model. The model contains various layers stacked on top of each other. The output of one layer feeds into the input of the next layer.
Conv2D layers are convolutions. Each filter (32 in the first two convolution layers and 64 in the next two convolution layers) transforms a part of the image (5x5 for the first two Conv2D layers and 3x3 for the next two Conv2D layers). The transformation is applied on the whole image.
MaxPool2D is a downsampling filter. It reduces a 2x2 matrix of the image to a single pixel with the maximum value of the 2x2 matrix. The filter aims to conserve the main features of the image while reducing the size.
Dropout is a regularization layer. In our model, 25% of the nodes in the layer are randomly ignored, allowing the network to learn different features. This prevents overfitting.
```relu``` is the rectifier, and it is used to introduce nonlinearity into the model. It works by returning the input value if the input value >= 0. If the input is negative, it returns 0.
Flatten converts the tensors into a 1D vector.
The Dense layers form an artificial neural network (ANN). The last layer returns the probability that an image is in each class (one for each digit).
As this model aims to categorize the images, we will use a ```categorical_crossentropy``` loss function.
"""
numberOfHiddenLayers = 2 #default = 5, if 0 then useSVM=True
generateLargeNetworkUntrained = True
addSkipLayers = False
if(numberOfHiddenLayers > 1):
addSkipLayers = False #True #optional
if(generateLargeNetworkUntrained):
generateNetworkUntrained = True
largeNetworkRatio = 10 #100
generateLargeNetworkExpansion = False
if(generateLargeNetworkExpansion):
generateLargeNetworkRatioExponential = False
else:
generateNetworkUntrained = False
generateLargeNetworkRatio = False
def getLayerRatio(layerIndex):
layerRatio = 1
if(generateLargeNetworkUntrained):
if(generateLargeNetworkExpansion):
if(generateLargeNetworkRatioExponential):
layerRatio = largeNetworkRatio**layerIndex
else:
layerRatio = largeNetworkRatio * layerIndex
else:
layerRatio = largeNetworkRatio
else:
layerRatio = 1
return int(layerRatio)
x = tf.keras.layers.Input(shape=input_shape)
hLast = x
if(numberOfHiddenLayers >= 1):
layerRatio = getLayerRatio(1)
h1 = tf.keras.layers.Conv2D(32*layerRatio, (5,5), padding='same', activation='relu')(x)
hLast = h1
if(numberOfHiddenLayers >= 2):
layerRatio = getLayerRatio(2)
h2 = tf.keras.layers.Conv2D(32*layerRatio, (5,5), padding='same', activation='relu')(h1)
h2 = tf.keras.layers.MaxPool2D()(h2)
h2 = tf.keras.layers.Dropout(0.25)(h2)
hLast = h2
if(numberOfHiddenLayers >= 3):
layerRatio = getLayerRatio(3)
h3 = tf.keras.layers.Conv2D(32*layerRatio, (3,3), padding='same', activation='relu')(h2)
hLast = h3
if(numberOfHiddenLayers >= 4):
layerRatio = getLayerRatio(4)
h4 = tf.keras.layers.Conv2D(32*layerRatio, (3,3), padding='same', activation='relu')(h3)
h4 = tf.keras.layers.MaxPool2D(strides=(2,2))(h4)
h4 = tf.keras.layers.Flatten()(h4)
hLast = h4
if(numberOfHiddenLayers >= 5):
layerRatio = getLayerRatio(5)
h5 = tf.keras.layers.Dense(128*layerRatio, activation='relu')(h4)
h5 = tf.keras.layers.Dropout(0.5)(h5)
hLast = h5
if(addSkipLayers):
mList = []
if(numberOfHiddenLayers >= 1):
m1 = tf.keras.layers.Flatten()(h1)
mList.append(m1)
if(numberOfHiddenLayers >= 2):
m2 = tf.keras.layers.Flatten()(h2)
mList.append(m2)
if(numberOfHiddenLayers >= 3):
m3 = tf.keras.layers.Flatten()(h3)
mList.append(m3)
if(numberOfHiddenLayers >= 4):
m4 = tf.keras.layers.Flatten()(h4)
mList.append(m4)
if(numberOfHiddenLayers >= 5):
m5 = h5
mList.append(m5)
hLast = tf.keras.layers.concatenate(mList)
hLast = tf.keras.layers.Flatten()(hLast) #flatten hLast if necessary (ie numberOfHiddenLayers <4)
if(generateNetworkUntrained):
hLast = tf.keras.layers.Lambda(lambda x: tf.keras.backend.stop_gradient(x))(hLast)
y = tf.keras.layers.Dense(num_classes, activation='softmax')(hLast)
model = tf.keras.Model(x, y)
print(model.summary())
model.compile(optimizer=tf.keras.optimizers.RMSprop(epsilon=1e-08), loss='categorical_crossentropy', metrics=['acc'])
"""## 3.2 Fit the Training Data
The next step is to fit our training data. If we achieve a certain level of accuracy, it may not be necessary to continue training the model, especially if time and resources are limited.
The following cell defines a CallBack so that if 99.5% accuracy is achieved, the model stops training. The model is not likely to stop prematurely if only 5 epochs are specified. Try it out with more epochs.
"""
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.995):
print("\nReached 99.5% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
"""Testing the model on a validation dataset prevents overfitting of the data. We specified a 10% validation and 90% training split."""
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_split=0.1,
callbacks=[callbacks])
"""# 4. Evaluate the Model
## 4.1 Loss and Accuracy Curves
Run the following cell to evaluate the loss and accuracy of our model.
"""
fig, ax = plt.subplots(2,1)
ax[0].plot(history.history['loss'], color='b', label="Training Loss")
ax[0].plot(history.history['val_loss'], color='r', label="Validation Loss",axes =ax[0])
legend = ax[0].legend(loc='best', shadow=True)
ax[1].plot(history.history['acc'], color='b', label="Training Accuracy")
ax[1].plot(history.history['val_acc'], color='r',label="Validation Accuracy")
legend = ax[1].legend(loc='best', shadow=True)
"""The accuracy increases over time and the loss decreases over time. However, the accuracy of our validation set seems to slightly decrease towards the end even thought our training accuracy increased. Running the model for more epochs might cause our model to be susceptible to overfitting.
## 4.2 Predict Results
"""
test_loss, test_acc = model.evaluate(x_test, y_test)
"""Our model runs pretty well, with an accuracy of 99.3% on our testing data.
## 4.3 Confusion Matrix
Run the following cell to compute our confusion matrix using TensorFlow.
"""
# Predict the values from the testing dataset
Y_pred = model.predict(x_test)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred,axis = 1)
# Convert testing observations to one hot vectors
Y_true = np.argmax(y_test,axis = 1)
# compute the confusion matrix
confusion_mtx = tf.math.confusion_matrix(Y_true, Y_pred_classes)
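For intuition, entry (i, j) of such a matrix counts the test images whose true label is i and predicted label is j. A minimal NumPy sketch with hypothetical labels:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    # cm[i, j]: number of samples with true label i predicted as label j
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels: one 4 is correctly classified, one 4 is mistaken for a 9
cm = confusion_matrix([4, 4, 9], [4, 9, 9])
print(cm[4, 4])  # 1
print(cm[4, 9])  # 1
```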
"""Run the following cell to plot the confusion matrix."""
plt.figure(figsize=(10, 8))
sns.heatmap(confusion_mtx, annot=True, fmt='g')
"""There seems to be a slightly higher confusion between (0,6) and (4,9). This is reasonable as 0's and 6's look similar with their loops and 4's and 9's can be mistaken when the 4's are more rounded and 9's are more angular.""" | [
"matplotlib.pyplot.imshow",
"tensorflow.keras.layers.Input",
"tensorflow.keras.Model",
"tensorflow.keras.layers.Conv2D",
"tensorflow.keras.optimizers.RMSprop",
"tensorflow.keras.layers.Dropout",
"numpy.argmax",
"seaborn.heatmap",
"tensorflow.keras.layers.concatenate",
"tensorflow.keras.backend.sto... | [((2153, 2175), 'seaborn.countplot', 'sns.countplot', (['y_train'], {}), '(y_train)\n', (2166, 2175), True, 'import seaborn as sns\n'), ((4049, 4082), 'matplotlib.pyplot.imshow', 'plt.imshow', (['x_train[100][:, :, 0]'], {}), '(x_train[100][:, :, 0])\n', (4059, 4082), True, 'import matplotlib.pyplot as plt\n'), ((6752, 6792), 'tensorflow.keras.layers.Input', 'tf.keras.layers.Input', ([], {'shape': 'input_shape'}), '(shape=input_shape)\n', (6773, 6792), True, 'import tensorflow as tf\n'), ((8628, 8648), 'tensorflow.keras.Model', 'tf.keras.Model', (['x', 'y'], {}), '(x, y)\n', (8642, 8648), True, 'import tensorflow as tf\n'), ((9963, 9981), 'matplotlib.pyplot.subplots', 'plt.subplots', (['(2)', '(1)'], {}), '(2, 1)\n', (9975, 9981), True, 'import matplotlib.pyplot as plt\n'), ((11088, 11113), 'numpy.argmax', 'np.argmax', (['Y_pred'], {'axis': '(1)'}), '(Y_pred, axis=1)\n', (11097, 11113), True, 'import numpy as np\n'), ((11175, 11200), 'numpy.argmax', 'np.argmax', (['y_test'], {'axis': '(1)'}), '(y_test, axis=1)\n', (11184, 11200), True, 'import numpy as np\n'), ((11249, 11297), 'tensorflow.math.confusion_matrix', 'tf.math.confusion_matrix', (['Y_true', 'Y_pred_classes'], {}), '(Y_true, Y_pred_classes)\n', (11273, 11297), True, 'import tensorflow as tf\n'), ((11359, 11386), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(10, 8)'}), '(figsize=(10, 8))\n', (11369, 11386), True, 'import matplotlib.pyplot as plt\n'), ((11387, 11434), 'seaborn.heatmap', 'sns.heatmap', (['confusion_mtx'], {'annot': '(True)', 'fmt': '"""g"""'}), "(confusion_mtx, annot=True, fmt='g')\n", (11398, 11434), True, 'import seaborn as sns\n'), ((8303, 8337), 'tensorflow.keras.layers.concatenate', 'tf.keras.layers.concatenate', (['mList'], {}), '(mList)\n', (8330, 8337), True, 'import tensorflow as tf\n'), ((8346, 8371), 'tensorflow.keras.layers.Flatten', 'tf.keras.layers.Flatten', ([], {}), '()\n', (8369, 8371), True, 'import tensorflow as tf\n'), 
((8556, 8612), 'tensorflow.keras.layers.Dense', 'tf.keras.layers.Dense', (['num_classes'], {'activation': '"""softmax"""'}), "(num_classes, activation='softmax')\n", (8577, 8612), True, 'import tensorflow as tf\n'), ((2401, 2418), 'numpy.isnan', 'np.isnan', (['x_train'], {}), '(x_train)\n', (2409, 2418), True, 'import numpy as np\n'), ((2426, 2442), 'numpy.isnan', 'np.isnan', (['x_test'], {}), '(x_test)\n', (2434, 2442), True, 'import numpy as np\n'), ((6873, 6960), 'tensorflow.keras.layers.Conv2D', 'tf.keras.layers.Conv2D', (['(32 * layerRatio)', '(5, 5)'], {'padding': '"""same"""', 'activation': '"""relu"""'}), "(32 * layerRatio, (5, 5), padding='same', activation=\n 'relu')\n", (6895, 6960), True, 'import tensorflow as tf\n'), ((7039, 7126), 'tensorflow.keras.layers.Conv2D', 'tf.keras.layers.Conv2D', (['(32 * layerRatio)', '(5, 5)'], {'padding': '"""same"""', 'activation': '"""relu"""'}), "(32 * layerRatio, (5, 5), padding='same', activation=\n 'relu')\n", (7061, 7126), True, 'import tensorflow as tf\n'), ((7130, 7157), 'tensorflow.keras.layers.MaxPool2D', 'tf.keras.layers.MaxPool2D', ([], {}), '()\n', (7155, 7157), True, 'import tensorflow as tf\n'), ((7169, 7198), 'tensorflow.keras.layers.Dropout', 'tf.keras.layers.Dropout', (['(0.25)'], {}), '(0.25)\n', (7192, 7198), True, 'import tensorflow as tf\n'), ((7286, 7373), 'tensorflow.keras.layers.Conv2D', 'tf.keras.layers.Conv2D', (['(32 * layerRatio)', '(3, 3)'], {'padding': '"""same"""', 'activation': '"""relu"""'}), "(32 * layerRatio, (3, 3), padding='same', activation=\n 'relu')\n", (7308, 7373), True, 'import tensorflow as tf\n'), ((7453, 7540), 'tensorflow.keras.layers.Conv2D', 'tf.keras.layers.Conv2D', (['(32 * layerRatio)', '(3, 3)'], {'padding': '"""same"""', 'activation': '"""relu"""'}), "(32 * layerRatio, (3, 3), padding='same', activation=\n 'relu')\n", (7475, 7540), True, 'import tensorflow as tf\n'), ((7544, 7585), 'tensorflow.keras.layers.MaxPool2D', 'tf.keras.layers.MaxPool2D', ([], {'strides': 
'(2, 2)'}), '(strides=(2, 2))\n', (7569, 7585), True, 'import tensorflow as tf\n'), ((7596, 7621), 'tensorflow.keras.layers.Flatten', 'tf.keras.layers.Flatten', ([], {}), '()\n', (7619, 7621), True, 'import tensorflow as tf\n'), ((7709, 7767), 'tensorflow.keras.layers.Dense', 'tf.keras.layers.Dense', (['(128 * layerRatio)'], {'activation': '"""relu"""'}), "(128 * layerRatio, activation='relu')\n", (7730, 7767), True, 'import tensorflow as tf\n'), ((7777, 7805), 'tensorflow.keras.layers.Dropout', 'tf.keras.layers.Dropout', (['(0.5)'], {}), '(0.5)\n', (7800, 7805), True, 'import tensorflow as tf\n'), ((8698, 8740), 'tensorflow.keras.optimizers.RMSprop', 'tf.keras.optimizers.RMSprop', ([], {'epsilon': '(1e-08)'}), '(epsilon=1e-08)\n', (8725, 8740), True, 'import tensorflow as tf\n'), ((7897, 7922), 'tensorflow.keras.layers.Flatten', 'tf.keras.layers.Flatten', ([], {}), '()\n', (7920, 7922), True, 'import tensorflow as tf\n'), ((7990, 8015), 'tensorflow.keras.layers.Flatten', 'tf.keras.layers.Flatten', ([], {}), '()\n', (8013, 8015), True, 'import tensorflow as tf\n'), ((8083, 8108), 'tensorflow.keras.layers.Flatten', 'tf.keras.layers.Flatten', ([], {}), '()\n', (8106, 8108), True, 'import tensorflow as tf\n'), ((8176, 8201), 'tensorflow.keras.layers.Flatten', 'tf.keras.layers.Flatten', ([], {}), '()\n', (8199, 8201), True, 'import tensorflow as tf\n'), ((8510, 8543), 'tensorflow.keras.backend.stop_gradient', 'tf.keras.backend.stop_gradient', (['x'], {}), '(x)\n', (8540, 8543), True, 'import tensorflow as tf\n')] |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@title: Bidirectional Adversarial Networks for microscopic Image Synthesis (BANIS)
@topic: BANIS model
@author: junzhuang, daliwang
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, \
Conv2DTranspose, Conv2D, Dropout, Flatten, Activation
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.losses import MeanSquaredError as loss_mse
from utils import convert2binary, dice_coefficient
class BANIS():
def __init__(self, img_shape):
# Initialize input shape
self.img_shape = img_shape # (64, 64, 1) (img_rows, img_cols, channels)
self.latent_dim = 100
# Optimizer
self.lr = 1e-5
self.beta1 = 0.5
self.D_optA = Adam(2*self.lr, self.beta1)
self.G_optA = Adam(2*self.lr, self.beta1)
self.E_optA = SGD(5*self.lr)
self.D_optB = Adam(2*self.lr, self.beta1)
self.G_optB = Adam(self.lr, self.beta1)
self.E_optB = SGD(5*self.lr)
# Build the encoder
self.encoderA = self.encoder_model()
self.encoderB = self.encoder_model()
# Build the generator
self.generatorA = self.generator_model()
self.generatorB = self.generator_model()
# Build and compile the discriminator
self.discriminatorA = self.discriminator_model()
self.discriminatorA.trainable = True
self.discriminatorA.compile(loss=['binary_crossentropy'],
optimizer=self.D_optA,
metrics=['accuracy'])
self.discriminatorB = self.discriminator_model()
self.discriminatorB.trainable = True
self.discriminatorB.compile(loss=['binary_crossentropy'],
optimizer=self.D_optB,
metrics=['accuracy'])
# Build the pioneer model
self.discriminatorA.trainable = False
self.pioneerA = Sequential([self.generatorA, self.discriminatorA])
self.pioneerA.compile(loss=['binary_crossentropy'],
optimizer=self.G_optA,
metrics=['accuracy'])
self.discriminatorB.trainable = False
self.pioneerB = Sequential([self.generatorB, self.discriminatorB])
self.pioneerB.compile(loss=['binary_crossentropy'],
optimizer=self.G_optB,
metrics=['accuracy'])
# Build the successor model
self.successorA = Sequential([self.encoderA, self.generatorA])
self.successorA.compile(loss=loss_mse(),
optimizer=self.E_optA)
self.successorB = Sequential([self.encoderB, self.generatorB])
self.successorB.compile(loss=loss_mse(),
optimizer=self.E_optB)
# Build the coordinator model
self.coordinatorA = Sequential([self.successorA, self.successorB])
self.coordinatorA.compile(loss=loss_mse(),
optimizer=self.E_optA)
self.coordinatorB = Sequential([self.successorB, self.successorA])
self.coordinatorB.compile(loss=loss_mse(),
optimizer=self.E_optB)
# The dir for logs and checkpoint
self.log_dir = "./TB_logs_baait"
self.checkpoint_dir = './Train_Checkpoints_baait'
def encoder_model(self, depth=128):
model = Sequential()
model.add(Conv2D(depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer='he_normal',
input_shape=self.img_shape))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2D(2*depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer='he_normal'))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2D(4*depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer='he_normal'))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Flatten())
model.add(Dense(self.latent_dim))
model.summary()
return model
def generator_model(self, dim=8, depth=256):
model = Sequential()
model.add(Dense(dim*dim*depth, use_bias=False, input_shape=(self.latent_dim,)))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Reshape((dim, dim, depth)))
assert model.output_shape == (None, dim, dim, depth)
model.add(Conv2DTranspose(depth, (5, 5), strides=(1, 1), padding='same', use_bias=False, kernel_initializer='he_normal'))
assert model.output_shape == (None, dim, dim, depth)
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(int(depth//2), (5, 5), strides=(2, 2), padding='same', use_bias=False, kernel_initializer='he_normal'))
assert model.output_shape == (None, 2*dim, 2*dim, int(depth//2))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(int(depth//4), (5, 5), strides=(2, 2), padding='same', use_bias=False, kernel_initializer='he_normal'))
assert model.output_shape == (None, 4*dim, 4*dim, int(depth//4)) # for 64x64
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(self.img_shape[2], (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, self.img_shape[0], self.img_shape[1], self.img_shape[2])
model.summary()
return model
def discriminator_model(self, depth=64, drop_rate=0.3):
model = Sequential()
model.add(Conv2D(depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer='he_normal',
input_shape=self.img_shape))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(drop_rate))
model.add(Conv2D(2*depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer='he_normal'))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(drop_rate))
#model.add(Conv2D(4*depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer='he_normal'))
#model.add(LeakyReLU(alpha=0.2))
#model.add(Dropout(drop_rate)) # ---
# Out: 1-dim probability
model.add(Flatten())
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
return model
def train(self, A, B, EPOCHS=100, BATCH_SIZE=128, WARMUP_STEP=20, NUM_IMG=5):
# Define the groundtruth
Y_real = np.ones((BATCH_SIZE, 1))
Y_rec = np.zeros((BATCH_SIZE, 1)) # Reconstructed Label
Y_both = np.concatenate((Y_real, Y_rec), axis=0)
# Log for TensorBoard
summary_writer = tf.summary.create_file_writer(self.log_dir)
# Initialize the checkpoint
interval = int(EPOCHS // 5) if EPOCHS >= 10 else 5
checkpoint_path = os.path.join(self.checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(E_optimizerA=self.E_optA,
G_optimizerA=self.G_optA,
D_optimizerA=self.D_optA,
E_optimizerB=self.E_optB,
G_optimizerB=self.G_optB,
D_optimizerB=self.D_optB,
encoderA=self.encoderA,
generatorA=self.generatorA,
discriminatorA=self.discriminatorA,
pioneerA=self.pioneerA,
successorA=self.successorA,
coordinatorA=self.coordinatorA,
encoderB=self.encoderB,
generatorB=self.generatorB,
discriminatorB=self.discriminatorB,
pioneerB=self.pioneerB,
successorB=self.successorB,
coordinatorB=self.coordinatorB)
# Restore the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(self.checkpoint_dir))
A_gen_list, B_gen_list, AB_rec_list = [], [], []
match_cnt, total_cnt = 0, 0 # Initialize the counting for matching DSC
NUM_BATCH = len(A) // BATCH_SIZE # np.ceil()
for epoch in range(EPOCHS):
for nb in range(NUM_BATCH-1):
# ---Pretrain Stage ---
# Select real instances batch by batch
step = int(epoch * NUM_BATCH + nb)
idx = np.arange(nb*BATCH_SIZE, nb*BATCH_SIZE+BATCH_SIZE)
A_real = A[idx, :, :, :]
# Generate a batch of latent variables based on uniform distribution
z_A = np.random.uniform(-1.0, 1.0, size=[BATCH_SIZE, self.latent_dim])
A_gen = self.generatorA.predict(z_A)
# Train the discriminator (real for 1 and rec for 0) ------
A_both = np.concatenate((A_real, A_gen))
Dloss_A = self.discriminatorA.train_on_batch(A_both, Y_both)
# Train the pioneer model to fool the discriminator ------
Gloss_A = self.pioneerA.train_on_batch(z_A, Y_real)
# Repeat the same procedure as above for B
B_real = B[idx, :, :, :]
z_B = np.random.uniform(-1.0, 1.0, size=[BATCH_SIZE, self.latent_dim])
B_gen = self.generatorB.predict(z_B)
B_both = np.concatenate((B_real, B_gen))
Dloss_B = self.discriminatorB.train_on_batch(B_both, Y_both)
Gloss_B = self.pioneerB.train_on_batch(z_B, Y_real)
# Train the successor & coordinator when epoch > WARMUP_STEP ------
mse = -1
if epoch > WARMUP_STEP:
mseA_gen = self.successorA.train_on_batch(B_real, A_gen)
mseB_gen = self.successorB.train_on_batch(A_real, B_gen)
mseA_real = self.successorA.train_on_batch(B_real, A_real)
mseB_real = self.successorB.train_on_batch(A_real, B_real)
mse_B2B = self.coordinatorA.train_on_batch(B_real, B_real)
mse_A2A = self.coordinatorB.train_on_batch(A_real, A_real)
mse_A = 0.5 * np.add(mseA_real, mseA_gen)
mse_B = 0.5 * np.add(mseB_real, mseB_gen)
identity_loss = 0.5 * np.add(mse_A, mse_B)
pair_matched_loss = 0.5 * np.add(mse_A2A, mse_B2B)
mse = np.mean([identity_loss, pair_matched_loss], axis=0)
# For Experiments and Visualization ---------------------
# Save scalars into TensorBoard
with summary_writer.as_default():
tf.summary.scalar('D_loss_A', Dloss_A[0], step=step)
tf.summary.scalar('G_loss_A', Gloss_A[0], step=step)
tf.summary.scalar('D_acc_A', Dloss_A[1], step=step)
tf.summary.scalar('D_loss_B', Dloss_B[0], step=step)
tf.summary.scalar('G_loss_B', Gloss_B[0], step=step)
tf.summary.scalar('D_acc_B', Dloss_B[1], step=step)
if mse != -1:
tf.summary.scalar('MSE_A', mse_A, step=step)
tf.summary.scalar('MSE_B', mse_B, step=step)
tf.summary.scalar('Identity_Loss', identity_loss, step=step)
tf.summary.scalar('Pair_Matched_Loss', pair_matched_loss, step=step)
tf.summary.scalar('MSE', mse, step=step)
# Save the checkpoint at given interval
if (step + 1) % int(interval*BATCH_SIZE) == 0:
checkpoint.save(file_prefix=str(checkpoint_path))
# Schedule the learning rate
if (epoch + 1) % 100000 == 0:
self.lr = self.lr_scheduler(self.lr, Type="Periodic", epoch=epoch, period=100000)
# Keras callback: https://keras.io/zh/callbacks/
# Store the generated/reconstructed samples
if (step + 1) % 100 == 0:
z_gen = np.random.normal(size=(int(NUM_IMG), self.latent_dim))
A_gen_list.append(self.prediction(self.generatorA, z_gen))
B_gen_list.append(self.prediction(self.generatorB, z_gen))
if mse != -1 and (step + 1) % 100 == 0:
# Prediction
A_rec = self.prediction(self.successorA, B_real)
B_rec = self.prediction(self.successorB, A_real)
for i in range(len(idx)):
total_cnt += 1
# Reshape image to 2D size
A_rec_i = A_rec[i].reshape(self.img_shape[0], self.img_shape[1])
B_rec_i = B_rec[i].reshape(self.img_shape[0], self.img_shape[1])
# Get the binary masks of images
A_rec_i_bi = convert2binary(A_rec_i, A_rec_i.max()*0.6)
B_rec_i_bi = convert2binary(B_rec_i, B_rec_i.max()*0.3)
# Compute the DSC
DSC = dice_coefficient(A_rec_i_bi, B_rec_i_bi)
# Compute matching_index & select rec samples
if DSC < 0.2:
match_cnt += 1
AB_rec_list.append([A_rec[i], B_rec[i]])
# Plot the progress
print("A: No.{0}: D_loss: {1}; D_acc: {2}; G_loss: {3}."\
.format(step, Dloss_A[0], Dloss_A[1], Gloss_A[0]))
print("B: No.{0}: D_loss: {1}; D_acc: {2}; G_loss: {3}."\
.format(step, Dloss_B[0], Dloss_B[1], Gloss_B[0]))
if mse != -1:
print("Total MSE: {0}.".format(mse))
print("----------")
# Save file
np.save("./A_gen_baait.npy", A_gen_list)
np.save("./B_gen_baait.npy", B_gen_list)
np.save("./AB_rec_baait.npy", AB_rec_list)
checkpoint.save(file_prefix = str(checkpoint_path))
# Evaluation
print("Evaluation:")
if total_cnt != 0:
matching_index = match_cnt/total_cnt
print("Matching Index: ", matching_index)
def prediction(self, model, inputs):
# Prediction with given model and inputs
x_pred = model.predict(inputs)
x_pred = 127.5 * x_pred + 127.5 # rescale tanh output in [-1, 1] back to the pixel range [0, 255]
return x_pred
def lr_scheduler(self, lr, Type, epoch, period=100):
# Schedule the learning rate
if Type == "Periodic":
if epoch < int(period):
lr = lr
elif epoch % int(period) == 0:
lr = lr*0.5
return lr
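The `convert2binary` and `dice_coefficient` helpers are imported from `utils` and not shown here; the sketch below is a plausible reading of what they compute for binary masks (an assumption, not the repository's actual code):

```python
import numpy as np

def convert2binary(img, threshold):
    # Assumed behaviour: binary mask with 1 where the pixel exceeds the threshold
    return (img > threshold).astype(np.uint8)

def dice_coefficient(mask_a, mask_b):
    # Dice similarity: 2 * |A & B| / (|A| + |B|) for binary masks
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 0.0

a = convert2binary(np.array([[0.9, 0.8], [0.1, 0.2]]), 0.5)
b = convert2binary(np.array([[0.9, 0.1], [0.1, 0.2]]), 0.5)
print(dice_coefficient(a, b))  # 2*1 / (2 + 1), roughly 0.667
```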
| [
"tensorflow.train.Checkpoint",
"tensorflow.keras.losses.MeanSquaredError",
"tensorflow.keras.layers.BatchNormalization",
"tensorflow.keras.layers.Dense",
"numpy.save",
"numpy.arange",
"numpy.mean",
"tensorflow.keras.layers.Reshape",
"tensorflow.keras.layers.Conv2D",
"tensorflow.keras.optimizers.SG... | [((888, 917), 'tensorflow.keras.optimizers.Adam', 'Adam', (['(2 * self.lr)', 'self.beta1'], {}), '(2 * self.lr, self.beta1)\n', (892, 917), False, 'from tensorflow.keras.optimizers import Adam, SGD\n'), ((938, 967), 'tensorflow.keras.optimizers.Adam', 'Adam', (['(2 * self.lr)', 'self.beta1'], {}), '(2 * self.lr, self.beta1)\n', (942, 967), False, 'from tensorflow.keras.optimizers import Adam, SGD\n'), ((988, 1004), 'tensorflow.keras.optimizers.SGD', 'SGD', (['(5 * self.lr)'], {}), '(5 * self.lr)\n', (991, 1004), False, 'from tensorflow.keras.optimizers import Adam, SGD\n'), ((1025, 1054), 'tensorflow.keras.optimizers.Adam', 'Adam', (['(2 * self.lr)', 'self.beta1'], {}), '(2 * self.lr, self.beta1)\n', (1029, 1054), False, 'from tensorflow.keras.optimizers import Adam, SGD\n'), ((1075, 1100), 'tensorflow.keras.optimizers.Adam', 'Adam', (['self.lr', 'self.beta1'], {}), '(self.lr, self.beta1)\n', (1079, 1100), False, 'from tensorflow.keras.optimizers import Adam, SGD\n'), ((1123, 1139), 'tensorflow.keras.optimizers.SGD', 'SGD', (['(5 * self.lr)'], {}), '(5 * self.lr)\n', (1126, 1139), False, 'from tensorflow.keras.optimizers import Adam, SGD\n'), ((2104, 2154), 'tensorflow.keras.models.Sequential', 'Sequential', (['[self.generatorA, self.discriminatorA]'], {}), '([self.generatorA, self.discriminatorA])\n', (2114, 2154), False, 'from tensorflow.keras.models import Sequential\n'), ((2390, 2440), 'tensorflow.keras.models.Sequential', 'Sequential', (['[self.generatorB, self.discriminatorB]'], {}), '([self.generatorB, self.discriminatorB])\n', (2400, 2440), False, 'from tensorflow.keras.models import Sequential\n'), ((2669, 2713), 'tensorflow.keras.models.Sequential', 'Sequential', (['[self.encoderA, self.generatorA]'], {}), '([self.encoderA, self.generatorA])\n', (2679, 2713), False, 'from tensorflow.keras.models import Sequential\n'), ((2844, 2888), 'tensorflow.keras.models.Sequential', 'Sequential', (['[self.encoderB, 
self.generatorB]'], {}), '([self.encoderB, self.generatorB])\n', (2854, 2888), False, 'from tensorflow.keras.models import Sequential\n'), ((3060, 3106), 'tensorflow.keras.models.Sequential', 'Sequential', (['[self.successorA, self.successorB]'], {}), '([self.successorA, self.successorB])\n', (3070, 3106), False, 'from tensorflow.keras.models import Sequential\n'), ((3243, 3289), 'tensorflow.keras.models.Sequential', 'Sequential', (['[self.successorB, self.successorA]'], {}), '([self.successorB, self.successorA])\n', (3253, 3289), False, 'from tensorflow.keras.models import Sequential\n'), ((3598, 3610), 'tensorflow.keras.models.Sequential', 'Sequential', ([], {}), '()\n', (3608, 3610), False, 'from tensorflow.keras.models import Sequential\n'), ((4441, 4453), 'tensorflow.keras.models.Sequential', 'Sequential', ([], {}), '()\n', (4451, 4453), False, 'from tensorflow.keras.models import Sequential\n'), ((5994, 6006), 'tensorflow.keras.models.Sequential', 'Sequential', ([], {}), '()\n', (6004, 6006), False, 'from tensorflow.keras.models import Sequential\n'), ((6932, 6956), 'numpy.ones', 'np.ones', (['(BATCH_SIZE, 1)'], {}), '((BATCH_SIZE, 1))\n', (6939, 6956), True, 'import numpy as np\n'), ((6973, 6998), 'numpy.zeros', 'np.zeros', (['(BATCH_SIZE, 1)'], {}), '((BATCH_SIZE, 1))\n', (6981, 6998), True, 'import numpy as np\n'), ((7038, 7077), 'numpy.concatenate', 'np.concatenate', (['(Y_real, Y_rec)'], {'axis': '(0)'}), '((Y_real, Y_rec), axis=0)\n', (7052, 7077), True, 'import numpy as np\n'), ((7134, 7177), 'tensorflow.summary.create_file_writer', 'tf.summary.create_file_writer', (['self.log_dir'], {}), '(self.log_dir)\n', (7163, 7177), True, 'import tensorflow as tf\n'), ((7300, 7341), 'os.path.join', 'os.path.join', (['self.checkpoint_dir', '"""ckpt"""'], {}), "(self.checkpoint_dir, 'ckpt')\n", (7312, 7341), False, 'import os\n'), ((7363, 7914), 'tensorflow.train.Checkpoint', 'tf.train.Checkpoint', ([], {'E_optimizerA': 'self.E_optA', 'G_optimizerA': 'self.G_optA', 
'D_optimizerA': 'self.D_optA', 'E_optimizerB': 'self.E_optB', 'G_optimizerB': 'self.G_optB', 'D_optimizerB': 'self.D_optB', 'encoderA': 'self.encoderA', 'generatorA': 'self.generatorA', 'discriminatorA': 'self.discriminatorA', 'pioneerA': 'self.pioneerA', 'successorA': 'self.successorA', 'coordinatorA': 'self.coordinatorA', 'encoderB': 'self.encoderB', 'generatorB': 'self.generatorB', 'discriminatorB': 'self.discriminatorB', 'pioneerB': 'self.pioneerB', 'successorB': 'self.successorB', 'coordinatorB': 'self.coordinatorB'}), '(E_optimizerA=self.E_optA, G_optimizerA=self.G_optA,\n D_optimizerA=self.D_optA, E_optimizerB=self.E_optB, G_optimizerB=self.\n G_optB, D_optimizerB=self.D_optB, encoderA=self.encoderA, generatorA=\n self.generatorA, discriminatorA=self.discriminatorA, pioneerA=self.\n pioneerA, successorA=self.successorA, coordinatorA=self.coordinatorA,\n encoderB=self.encoderB, generatorB=self.generatorB, discriminatorB=self\n .discriminatorB, pioneerB=self.pioneerB, successorB=self.successorB,\n coordinatorB=self.coordinatorB)\n', (7382, 7914), True, 'import tensorflow as tf\n'), ((14619, 14659), 'numpy.save', 'np.save', (['"""./A_gen_baait.npy"""', 'A_gen_list'], {}), "('./A_gen_baait.npy', A_gen_list)\n", (14626, 14659), True, 'import numpy as np\n'), ((14668, 14708), 'numpy.save', 'np.save', (['"""./B_gen_baait.npy"""', 'B_gen_list'], {}), "('./B_gen_baait.npy', B_gen_list)\n", (14675, 14708), True, 'import numpy as np\n'), ((14717, 14759), 'numpy.save', 'np.save', (['"""./AB_rec_baait.npy"""', 'AB_rec_list'], {}), "('./AB_rec_baait.npy', AB_rec_list)\n", (14724, 14759), True, 'import numpy as np\n'), ((3629, 3747), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['depth', '(5, 5)'], {'strides': '(2, 2)', 'padding': '"""same"""', 'kernel_initializer': '"""he_normal"""', 'input_shape': 'self.img_shape'}), "(depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer=\n 'he_normal', input_shape=self.img_shape)\n", (3635, 3747), False, 'from 
tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((3787, 3819), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (3805, 3819), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((3839, 3859), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (3848, 3859), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((3879, 3972), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(2 * depth)', '(5, 5)'], {'strides': '(2, 2)', 'padding': '"""same"""', 'kernel_initializer': '"""he_normal"""'}), "(2 * depth, (5, 5), strides=(2, 2), padding='same',\n kernel_initializer='he_normal')\n", (3885, 3972), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((3986, 4018), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (4004, 4018), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4038, 4058), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (4047, 4058), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4078, 4171), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(4 * depth)', '(5, 5)'], {'strides': '(2, 2)', 'padding': '"""same"""', 'kernel_initializer': '"""he_normal"""'}), "(4 * depth, (5, 5), strides=(2, 2), padding='same',\n kernel_initializer='he_normal')\n", (4084, 4171), False, 'from 
tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4185, 4217), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (4203, 4217), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4237, 4257), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (4246, 4257), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4277, 4286), 'tensorflow.keras.layers.Flatten', 'Flatten', ([], {}), '()\n', (4284, 4286), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4306, 4328), 'tensorflow.keras.layers.Dense', 'Dense', (['self.latent_dim'], {}), '(self.latent_dim)\n', (4311, 4328), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4472, 4544), 'tensorflow.keras.layers.Dense', 'Dense', (['(dim * dim * depth)'], {'use_bias': '(False)', 'input_shape': '(self.latent_dim,)'}), '(dim * dim * depth, use_bias=False, input_shape=(self.latent_dim,))\n', (4477, 4544), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4560, 4592), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (4578, 4592), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4612, 4632), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), 
'(alpha=0.2)\n', (4621, 4632), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4652, 4678), 'tensorflow.keras.layers.Reshape', 'Reshape', (['(dim, dim, depth)'], {}), '((dim, dim, depth))\n', (4659, 4678), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4759, 4874), 'tensorflow.keras.layers.Conv2DTranspose', 'Conv2DTranspose', (['depth', '(5, 5)'], {'strides': '(1, 1)', 'padding': '"""same"""', 'use_bias': '(False)', 'kernel_initializer': '"""he_normal"""'}), "(depth, (5, 5), strides=(1, 1), padding='same', use_bias=\n False, kernel_initializer='he_normal')\n", (4774, 4874), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((4950, 4982), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (4968, 4982), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((5002, 5022), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (5011, 5022), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((5253, 5285), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (5271, 5285), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((5305, 5325), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (5314, 5325), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, 
LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((5568, 5600), 'tensorflow.keras.layers.BatchNormalization', 'BatchNormalization', ([], {'momentum': '(0.8)'}), '(momentum=0.8)\n', (5586, 5600), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((5620, 5640), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (5629, 5640), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((5660, 5773), 'tensorflow.keras.layers.Conv2DTranspose', 'Conv2DTranspose', (['self.img_shape[2]', '(5, 5)'], {'strides': '(2, 2)', 'padding': '"""same"""', 'use_bias': '(False)', 'activation': '"""tanh"""'}), "(self.img_shape[2], (5, 5), strides=(2, 2), padding='same',\n use_bias=False, activation='tanh')\n", (5675, 5773), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6025, 6143), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['depth', '(5, 5)'], {'strides': '(2, 2)', 'padding': '"""same"""', 'kernel_initializer': '"""he_normal"""', 'input_shape': 'self.img_shape'}), "(depth, (5, 5), strides=(2, 2), padding='same', kernel_initializer=\n 'he_normal', input_shape=self.img_shape)\n", (6031, 6143), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6183, 6203), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (6192, 6203), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6223, 6241), 'tensorflow.keras.layers.Dropout', 'Dropout', (['drop_rate'], {}), '(drop_rate)\n', (6230, 6241), False, 
'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6261, 6354), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(2 * depth)', '(5, 5)'], {'strides': '(2, 2)', 'padding': '"""same"""', 'kernel_initializer': '"""he_normal"""'}), "(2 * depth, (5, 5), strides=(2, 2), padding='same',\n kernel_initializer='he_normal')\n", (6267, 6354), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6368, 6388), 'tensorflow.keras.layers.LeakyReLU', 'LeakyReLU', ([], {'alpha': '(0.2)'}), '(alpha=0.2)\n', (6377, 6388), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6408, 6426), 'tensorflow.keras.layers.Dropout', 'Dropout', (['drop_rate'], {}), '(drop_rate)\n', (6415, 6426), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6673, 6682), 'tensorflow.keras.layers.Flatten', 'Flatten', ([], {}), '()\n', (6680, 6682), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6702, 6710), 'tensorflow.keras.layers.Dense', 'Dense', (['(1)'], {}), '(1)\n', (6707, 6710), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((6730, 6751), 'tensorflow.keras.layers.Activation', 'Activation', (['"""sigmoid"""'], {}), "('sigmoid')\n", (6740, 6751), False, 'from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose, Conv2D, Dropout, Flatten, Activation\n'), ((8666, 8713), 'tensorflow.train.latest_checkpoint', 'tf.train.latest_checkpoint', (['self.checkpoint_dir'], {}), 
'(self.checkpoint_dir)\n', (8692, 8713), True, 'import tensorflow as tf\n'), ((2751, 2761), 'tensorflow.keras.losses.MeanSquaredError', 'loss_mse', ([], {}), '()\n', (2759, 2761), True, 'from tensorflow.keras.losses import MeanSquaredError as loss_mse\n'), ((2926, 2936), 'tensorflow.keras.losses.MeanSquaredError', 'loss_mse', ([], {}), '()\n', (2934, 2936), True, 'from tensorflow.keras.losses import MeanSquaredError as loss_mse\n'), ((3146, 3156), 'tensorflow.keras.losses.MeanSquaredError', 'loss_mse', ([], {}), '()\n', (3154, 3156), True, 'from tensorflow.keras.losses import MeanSquaredError as loss_mse\n'), ((3329, 3339), 'tensorflow.keras.losses.MeanSquaredError', 'loss_mse', ([], {}), '()\n', (3337, 3339), True, 'from tensorflow.keras.losses import MeanSquaredError as loss_mse\n'), ((9152, 9208), 'numpy.arange', 'np.arange', (['(nb * BATCH_SIZE)', '(nb * BATCH_SIZE + BATCH_SIZE)'], {}), '(nb * BATCH_SIZE, nb * BATCH_SIZE + BATCH_SIZE)\n', (9161, 9208), True, 'import numpy as np\n'), ((9352, 9416), 'numpy.random.uniform', 'np.random.uniform', (['(-1.0)', '(1.0)'], {'size': '[BATCH_SIZE, self.latent_dim]'}), '(-1.0, 1.0, size=[BATCH_SIZE, self.latent_dim])\n', (9369, 9416), True, 'import numpy as np\n'), ((9571, 9602), 'numpy.concatenate', 'np.concatenate', (['(A_real, A_gen)'], {}), '((A_real, A_gen))\n', (9585, 9602), True, 'import numpy as np\n'), ((9946, 10010), 'numpy.random.uniform', 'np.random.uniform', (['(-1.0)', '(1.0)'], {'size': '[BATCH_SIZE, self.latent_dim]'}), '(-1.0, 1.0, size=[BATCH_SIZE, self.latent_dim])\n', (9963, 10010), True, 'import numpy as np\n'), ((10091, 10122), 'numpy.concatenate', 'np.concatenate', (['(B_real, B_gen)'], {}), '((B_real, B_gen))\n', (10105, 10122), True, 'import numpy as np\n'), ((11172, 11223), 'numpy.mean', 'np.mean', (['[identity_loss, pair_matched_loss]'], {'axis': '(0)'}), '([identity_loss, pair_matched_loss], axis=0)\n', (11179, 11223), True, 'import numpy as np\n'), ((11417, 11469), 'tensorflow.summary.scalar', 
'tf.summary.scalar', (['"""D_loss_A"""', 'Dloss_A[0]'], {'step': 'step'}), "('D_loss_A', Dloss_A[0], step=step)\n", (11434, 11469), True, 'import tensorflow as tf\n'), ((11490, 11542), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""G_loss_A"""', 'Gloss_A[0]'], {'step': 'step'}), "('G_loss_A', Gloss_A[0], step=step)\n", (11507, 11542), True, 'import tensorflow as tf\n'), ((11563, 11614), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""D_acc_A"""', 'Dloss_A[1]'], {'step': 'step'}), "('D_acc_A', Dloss_A[1], step=step)\n", (11580, 11614), True, 'import tensorflow as tf\n'), ((11635, 11687), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""D_loss_B"""', 'Dloss_B[0]'], {'step': 'step'}), "('D_loss_B', Dloss_B[0], step=step)\n", (11652, 11687), True, 'import tensorflow as tf\n'), ((11708, 11760), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""G_loss_B"""', 'Gloss_B[0]'], {'step': 'step'}), "('G_loss_B', Gloss_B[0], step=step)\n", (11725, 11760), True, 'import tensorflow as tf\n'), ((11781, 11832), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""D_acc_B"""', 'Dloss_B[1]'], {'step': 'step'}), "('D_acc_B', Dloss_B[1], step=step)\n", (11798, 11832), True, 'import tensorflow as tf\n'), ((10922, 10949), 'numpy.add', 'np.add', (['mseA_real', 'mseA_gen'], {}), '(mseA_real, mseA_gen)\n', (10928, 10949), True, 'import numpy as np\n'), ((10984, 11011), 'numpy.add', 'np.add', (['mseB_real', 'mseB_gen'], {}), '(mseB_real, mseB_gen)\n', (10990, 11011), True, 'import numpy as np\n'), ((11054, 11074), 'numpy.add', 'np.add', (['mse_A', 'mse_B'], {}), '(mse_A, mse_B)\n', (11060, 11074), True, 'import numpy as np\n'), ((11121, 11145), 'numpy.add', 'np.add', (['mse_A2A', 'mse_B2B'], {}), '(mse_A2A, mse_B2B)\n', (11127, 11145), True, 'import numpy as np\n'), ((11891, 11935), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""MSE_A"""', 'mse_A'], {'step': 'step'}), "('MSE_A', mse_A, step=step)\n", (11908, 11935), True, 'import tensorflow as tf\n'), 
((11960, 12004), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""MSE_B"""', 'mse_B'], {'step': 'step'}), "('MSE_B', mse_B, step=step)\n", (11977, 12004), True, 'import tensorflow as tf\n'), ((12029, 12089), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""Identity_Loss"""', 'identity_loss'], {'step': 'step'}), "('Identity_Loss', identity_loss, step=step)\n", (12046, 12089), True, 'import tensorflow as tf\n'), ((12114, 12182), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""Pair_Matched_Loss"""', 'pair_matched_loss'], {'step': 'step'}), "('Pair_Matched_Loss', pair_matched_loss, step=step)\n", (12131, 12182), True, 'import tensorflow as tf\n'), ((12207, 12247), 'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""MSE"""', 'mse'], {'step': 'step'}), "('MSE', mse, step=step)\n", (12224, 12247), True, 'import tensorflow as tf\n'), ((13875, 13915), 'utils.dice_coefficient', 'dice_coefficient', (['A_rec_i_bi', 'B_rec_i_bi'], {}), '(A_rec_i_bi, B_rec_i_bi)\n', (13891, 13915), False, 'from utils import convert2binary, dice_coefficient\n')] |
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from skimage.transform import resize
from IPython.display import clear_output
def color_mask(img, size=512):
    # dict_labels = {'background': 0, 'ground glass': 1, 'consolidation': 2, 'pleural effusion': 3}
    # Map each label index to an RGBA color
dict_color = {0:[0,0,0,255], 1:[55,126,184,255], 2:[77,175,74,255], 3:[152,78,163,255]}
img_color = np.zeros((size,size,4))
img = img.reshape(size,size)
for i in range(size):
for j in range(size):
img_color[i,j] = dict_color[img[i,j]]
return np.array(img_color)/255
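# Vectorized alternative to the per-pixel loop in color_mask, indexing a
# palette array by label id (illustrative sketch only; `color_mask_fast` is
# not part of the original module).
def color_mask_fast(img, size=512):
    palette = np.array([[0, 0, 0, 255],
                        [55, 126, 184, 255],
                        [77, 175, 74, 255],
                        [152, 78, 163, 255]])
    # Fancy indexing replaces the double Python loop above
    return palette[img.reshape(size, size).astype(int)] / 255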
def create_mask(pred_mask, i=0):
pred_mask = tf.argmax(pred_mask, axis=-1)
pred_mask = pred_mask[..., tf.newaxis]
return pred_mask[i]
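# Sanity check of the argmax logic above, using np.argmax in place of
# tf.argmax (illustrative only; assumes identical semantics on the last axis).
_demo_pred = np.zeros((2, 4, 4, 3))
_demo_pred[..., 1] = 1.0  # class 1 scores highest at every pixel
_demo_mask = np.argmax(_demo_pred, axis=-1)[..., np.newaxis]
assert _demo_mask.shape == (2, 4, 4, 1) and (_demo_mask == 1).all()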
def resize_mask(x, size):
    return resize(x, (size, size), order=0, mode='constant', preserve_range=True,
                  anti_aliasing=False, clip=True)
def show_training_predictions(model, x_val, y_val, size, epoch, logs):
fig, (ax0, ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=4, figsize=(14,7))
ax0.imshow(x_val[1])
ax0.axis('off')
ax0.set_title('Original image', size=16)
ax1.imshow(color_mask(y_val[1].reshape(size,size), size=size))
ax1.axis('off')
ax1.set_title('Mask', size=16)
pred_mask = model.predict(x_val[0:10])
ax2.imshow(color_mask(create_mask(pred_mask, 1).numpy().reshape(size,size), size=size))
ax2.axis('off')
ax2.set_title('Prediction', size=16)
mask_labels = np.ones((512, 32, 4))
    mask_labels[140:172] = np.array([0, 0, 0, 255])/255
mask_labels[204:236] = np.array([55, 126, 184, 255])/255
mask_labels[268:300] = np.array([77, 175, 74, 255])/255
mask_labels[332:364] = np.array([152, 78, 163, 255])/255
box = ax3.get_position()
box.x0 = box.x0 - 0.07
box.x1 = box.x1 - 0.07
ax3.set_position(box)
ax3.imshow(mask_labels)
ax3.tick_params(axis='x', which='both', bottom=False, top=False,labelbottom=False)
ax3.tick_params(axis='y', which='both', left=False, labelleft=False, right = True,
labelright=True)
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
ax3.spines['bottom'].set_visible(False)
ax3.spines['left'].set_visible(False)
ax3.set_yticks([156, 220, 284, 348])
ax3.set_yticklabels(['Background','Ground glass','Consolidation','Pleural effusion'],
size=14)
# acc, val_acc = logs['accuracy'], logs['val_accuracy']
plt.suptitle('Epoch {}'.format(epoch+1), size=18, y=0.8)
fig.savefig('gif/{}_{}.jpg'.format(size, epoch), bbox_inches='tight')
plt.show()
def show_predictions(img, mask, mask_prediction):
fig, (ax0, ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=4, figsize=(14,7))
ax0.imshow(img, cmap='gray')
ax0.axis('off')
ax0.set_title('Original image', size=16)
ax1.imshow(color_mask(mask))
ax1.axis('off')
ax1.set_title('Mask', size=16)
ax2.imshow(color_mask(mask_prediction))
ax2.axis('off')
ax2.set_title('Prediction', size=16)
mask_labels = np.ones((512, 32, 4))
    mask_labels[140:172] = np.array([0, 0, 0, 255])/255
mask_labels[204:236] = np.array([55, 126, 184, 255])/255
mask_labels[268:300] = np.array([77, 175, 74, 255])/255
mask_labels[332:364] = np.array([152, 78, 163, 255])/255
box = ax3.get_position()
box.x0 = box.x0 - 0.07
box.x1 = box.x1 - 0.07
ax3.set_position(box)
ax3.imshow(mask_labels)
ax3.tick_params(axis='x', which='both', bottom=False, top=False,labelbottom=False)
ax3.tick_params(axis='y', which='both', left=False, labelleft=False, right = True,
labelright=True)
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
ax3.spines['bottom'].set_visible(False)
ax3.spines['left'].set_visible(False)
ax3.set_yticks([156, 220, 284, 348])
ax3.set_yticklabels(['Background','Ground glass','Consolidation','Pleural effusion'],
size=14)
plt.show()
def plot_losses(train_loss, val_loss):
epochs = range(1, len(train_loss) + 1)
plt.figure(figsize=(10,7))
plt.plot(epochs, train_loss, c='red', marker='o', label='Training loss')
plt.plot(epochs, val_loss, c='b', marker='o', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.legend()
plt.grid()
plt.show()
def bagging(masks, weights=None, sample_size=10, canonical_size=512, output_channels=4):
    if weights is None:
        weights = [1] * len(masks)  # one weight per input model, not per sample
res = []
for i in range(sample_size):
new_image = []
for l in range(output_channels):
new_channel = np.zeros((canonical_size,canonical_size))
for m in range(len(masks)):
new_channel += weights[m]*resize_mask(masks[m][i,:,:,l], canonical_size)
new_image.append(new_channel)
new_mask = np.transpose(np.array(new_image), (1, 2, 0))
res.append(new_mask)
return np.array(res)
def merge_img_mask(img, mask, size=512):
mask = color_mask(mask, size)
    img = np.repeat(img[:, :, :], 2, 2)[:, :, 0:4]  # grayscale-as-RGB -> 4 channels (RGBA layout)
    img[:, :, 3] = 1  # fully opaque alpha
return 0.5*img + 0.5*mask
def show_mask_predictions(img, mask_prediction):
fig, (ax0, ax1, ax2) = plt.subplots(nrows=1, ncols=3, figsize=(14,7))
ax0.imshow(img, cmap='gray')
ax0.axis('off')
ax0.set_title('Original image', size=16)
ax1.imshow(merge_img_mask(img, mask_prediction))
ax1.axis('off')
ax1.set_title('Prediction', size=16)
mask_labels = np.ones((512, 32, 4))
    mask_labels[140:172] = np.array([0, 0, 0, 255]) / 255
mask_labels[204:236] = np.array([55, 126, 184, 255]) / 255
mask_labels[268:300] = np.array([77, 175, 74, 255]) / 255
mask_labels[332:364] = np.array([152, 78, 163, 255]) / 255
box = ax2.get_position()
box.x0 = box.x0 - 0.07
box.x1 = box.x1 - 0.07
ax2.set_position(box)
ax2.imshow(mask_labels)
ax2.tick_params(axis='x', which='both', bottom=False, top=False,labelbottom=False)
ax2.tick_params(axis='y', which='both', left=False, labelleft=False, right = True,
labelright=True)
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax2.set_yticks([156, 220, 284, 348])
ax2.set_yticklabels(['Background','Ground glass','Consolidation','Pleural effusion'],
size=14)
plt.show() | [
"matplotlib.pyplot.grid",
"numpy.repeat",
"numpy.ones",
"matplotlib.pyplot.ylabel",
"matplotlib.pyplot.xlabel",
"matplotlib.pyplot.plot",
"numpy.array",
"tensorflow.argmax",
"numpy.zeros",
"matplotlib.pyplot.figure",
"matplotlib.pyplot.title",
"skimage.transform.resize",
"matplotlib.pyplot.s... | [((412, 437), 'numpy.zeros', 'np.zeros', (['(size, size, 4)'], {}), '((size, size, 4))\n', (420, 437), True, 'import numpy as np\n'), ((674, 703), 'tensorflow.argmax', 'tf.argmax', (['pred_mask'], {'axis': '(-1)'}), '(pred_mask, axis=-1)\n', (683, 703), True, 'import tensorflow as tf\n'), ((809, 915), 'skimage.transform.resize', 'resize', (['x', '(size, size)'], {'order': '(0)', 'mode': '"""constant"""', 'preserve_range': '(True)', 'anti_aliasing': '(False)', 'clip': '(True)'}), "(x, (size, size), order=0, mode='constant', preserve_range=True,\n anti_aliasing=False, clip=True)\n", (815, 915), False, 'from skimage.transform import resize\n'), ((1038, 1085), 'matplotlib.pyplot.subplots', 'plt.subplots', ([], {'nrows': '(1)', 'ncols': '(4)', 'figsize': '(14, 7)'}), '(nrows=1, ncols=4, figsize=(14, 7))\n', (1050, 1085), True, 'import matplotlib.pyplot as plt\n'), ((1527, 1548), 'numpy.ones', 'np.ones', (['(512, 32, 4)'], {}), '((512, 32, 4))\n', (1534, 1548), True, 'import numpy as np\n'), ((1576, 1598), 'numpy.array', 'np.array', (['[0, 0, 0, 1]'], {}), '([0, 0, 0, 1])\n', (1584, 1598), True, 'import numpy as np\n'), ((2683, 2693), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (2691, 2693), True, 'import matplotlib.pyplot as plt\n'), ((2777, 2824), 'matplotlib.pyplot.subplots', 'plt.subplots', ([], {'nrows': '(1)', 'ncols': '(4)', 'figsize': '(14, 7)'}), '(nrows=1, ncols=4, figsize=(14, 7))\n', (2789, 2824), True, 'import matplotlib.pyplot as plt\n'), ((3149, 3170), 'numpy.ones', 'np.ones', (['(512, 32, 4)'], {}), '((512, 32, 4))\n', (3156, 3170), True, 'import numpy as np\n'), ((3198, 3220), 'numpy.array', 'np.array', (['[0, 0, 0, 1]'], {}), '([0, 0, 0, 1])\n', (3206, 3220), True, 'import numpy as np\n'), ((4102, 4112), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (4110, 4112), True, 'import matplotlib.pyplot as plt\n'), ((4203, 4230), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(10, 7)'}), '(figsize=(10, 
7))\n', (4213, 4230), True, 'import matplotlib.pyplot as plt\n'), ((4234, 4306), 'matplotlib.pyplot.plot', 'plt.plot', (['epochs', 'train_loss'], {'c': '"""red"""', 'marker': '"""o"""', 'label': '"""Training loss"""'}), "(epochs, train_loss, c='red', marker='o', label='Training loss')\n", (4242, 4306), True, 'import matplotlib.pyplot as plt\n'), ((4311, 4381), 'matplotlib.pyplot.plot', 'plt.plot', (['epochs', 'val_loss'], {'c': '"""b"""', 'marker': '"""o"""', 'label': '"""Validation loss"""'}), "(epochs, val_loss, c='b', marker='o', label='Validation loss')\n", (4319, 4381), True, 'import matplotlib.pyplot as plt\n'), ((4387, 4428), 'matplotlib.pyplot.title', 'plt.title', (['"""Training and Validation Loss"""'], {}), "('Training and Validation Loss')\n", (4396, 4428), True, 'import matplotlib.pyplot as plt\n'), ((4433, 4452), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""Epoch"""'], {}), "('Epoch')\n", (4443, 4452), True, 'import matplotlib.pyplot as plt\n'), ((4457, 4481), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['"""Loss Value"""'], {}), "('Loss Value')\n", (4467, 4481), True, 'import matplotlib.pyplot as plt\n'), ((4486, 4498), 'matplotlib.pyplot.legend', 'plt.legend', ([], {}), '()\n', (4496, 4498), True, 'import matplotlib.pyplot as plt\n'), ((4503, 4513), 'matplotlib.pyplot.grid', 'plt.grid', ([], {}), '()\n', (4511, 4513), True, 'import matplotlib.pyplot as plt\n'), ((4519, 4529), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (4527, 4529), True, 'import matplotlib.pyplot as plt\n'), ((5153, 5166), 'numpy.array', 'np.array', (['res'], {}), '(res)\n', (5161, 5166), True, 'import numpy as np\n'), ((5427, 5474), 'matplotlib.pyplot.subplots', 'plt.subplots', ([], {'nrows': '(1)', 'ncols': '(3)', 'figsize': '(14, 7)'}), '(nrows=1, ncols=3, figsize=(14, 7))\n', (5439, 5474), True, 'import matplotlib.pyplot as plt\n'), ((5715, 5736), 'numpy.ones', 'np.ones', (['(512, 32, 4)'], {}), '((512, 32, 4))\n', (5722, 5736), True, 'import numpy as np\n'), 
((5764, 5786), 'numpy.array', 'np.array', (['[0, 0, 0, 1]'], {}), '([0, 0, 0, 1])\n', (5772, 5786), True, 'import numpy as np\n'), ((6674, 6684), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (6682, 6684), True, 'import matplotlib.pyplot as plt\n'), ((600, 619), 'numpy.array', 'np.array', (['img_color'], {}), '(img_color)\n', (608, 619), True, 'import numpy as np\n'), ((1626, 1655), 'numpy.array', 'np.array', (['[55, 126, 184, 255]'], {}), '([55, 126, 184, 255])\n', (1634, 1655), True, 'import numpy as np\n'), ((1687, 1715), 'numpy.array', 'np.array', (['[77, 175, 74, 255]'], {}), '([77, 175, 74, 255])\n', (1695, 1715), True, 'import numpy as np\n'), ((1747, 1776), 'numpy.array', 'np.array', (['[152, 78, 163, 255]'], {}), '([152, 78, 163, 255])\n', (1755, 1776), True, 'import numpy as np\n'), ((3248, 3277), 'numpy.array', 'np.array', (['[55, 126, 184, 255]'], {}), '([55, 126, 184, 255])\n', (3256, 3277), True, 'import numpy as np\n'), ((3309, 3337), 'numpy.array', 'np.array', (['[77, 175, 74, 255]'], {}), '([77, 175, 74, 255])\n', (3317, 3337), True, 'import numpy as np\n'), ((3369, 3398), 'numpy.array', 'np.array', (['[152, 78, 163, 255]'], {}), '([152, 78, 163, 255])\n', (3377, 3398), True, 'import numpy as np\n'), ((5258, 5287), 'numpy.repeat', 'np.repeat', (['img[:, :, :]', '(2)', '(2)'], {}), '(img[:, :, :], 2, 2)\n', (5267, 5287), True, 'import numpy as np\n'), ((5814, 5843), 'numpy.array', 'np.array', (['[55, 126, 184, 255]'], {}), '([55, 126, 184, 255])\n', (5822, 5843), True, 'import numpy as np\n'), ((5877, 5905), 'numpy.array', 'np.array', (['[77, 175, 74, 255]'], {}), '([77, 175, 74, 255])\n', (5885, 5905), True, 'import numpy as np\n'), ((5939, 5968), 'numpy.array', 'np.array', (['[152, 78, 163, 255]'], {}), '([152, 78, 163, 255])\n', (5947, 5968), True, 'import numpy as np\n'), ((4823, 4865), 'numpy.zeros', 'np.zeros', (['(canonical_size, canonical_size)'], {}), '((canonical_size, canonical_size))\n', (4831, 4865), True, 'import numpy as 
np\n'), ((5072, 5091), 'numpy.array', 'np.array', (['new_image'], {}), '(new_image)\n', (5080, 5091), True, 'import numpy as np\n')] |
# -*- coding: utf-8 -*-
import time
import sys
import os
from datetime import datetime
from datetime import timedelta
import dateutil.parser
import argparse
import json
import numpy as np
def calc_move_average_price_of_btc_acc(price_and_amount_ordered, target_acc_btc):
    """Walk the order book until target_acc_btc of quote volume is accumulated
    and return the volume-weighted average price of the filled levels."""
    acc_btc = 0
total_amount = 0
for v in price_and_amount_ordered:
price = float(v[0])
amount = float(v[1])
btc_amount = price * amount
if acc_btc + btc_amount >= target_acc_btc:
remain_btc_amount = target_acc_btc - acc_btc
acc_btc = target_acc_btc
total_amount += remain_btc_amount / price
break
acc_btc += btc_amount
total_amount += amount
return acc_btc / total_amount
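# Worked example of the walk above (illustrative only): a 120-quote-volume
# target against levels (100, 1.0) and (110, 1.0). Level one contributes 100
# of quote volume for 1.0 base units; the remaining 20 buys 20/110 units, so
# the volume-weighted price is 120 / (1 + 20/110) = 1320/13.
_demo_units = 1.0 + 20.0 / 110.0
assert abs(120.0 / _demo_units - 1320.0 / 13.0) < 1e-9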
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Analyze spread')
parser.add_argument('--input_file', required=True)
parser.add_argument('--accumulate_btc', default=1.0, type=float)
args = parser.parse_args()
bid_ask_avgs = []
for line in open(args.input_file):
line = line.strip()
if line == "": continue
# {"lastUpdateId":35755367,"bids":[["0.15087900","6.15500000",[]],["0.15087800","1.32200000",[]],["0.15052100","0.05000000",[]],["0.15050200","0.21600000",[]],["0.15033700","0.46800000",[]],["0.15028600","0.58200000",[]],["0.15028000","1.11300000",[]],["0.15021800","0.50000000",[]],["0.15021300","0.27600000",[]],["0.15020000","0.20000000",[]]],"asks":[["0.15088000","0.00100000",[]],["0.15088200","9.95700000",[]],["0.15120600","0.04900000",[]],["0.15127400","0.05000000",[]],["0.15136300","0.12000000",[]],["0.15137100","0.12000000",[]],["0.15137200","0.01500000",[]],["0.15138200","0.31000000",[]],["0.15150000","0.00700000",[]],["0.15158900","2.71700000",[]]]}
depth = json.loads(line)
        bids = list(sorted(depth["bids"], key=lambda x: -float(x[0])))  # buy orders, highest price first
        asks = list(sorted(depth["asks"], key=lambda x: float(x[0])))  # sell orders, lowest price first
bid_average_value = calc_move_average_price_of_btc_acc(bids, args.accumulate_btc)
ask_average_value = calc_move_average_price_of_btc_acc(asks, args.accumulate_btc)
bid_ask_avgs.append((bid_average_value, ask_average_value))
# spread rate
spread_rates = list(map(lambda x:(x[1]-x[0])/x[0], bid_ask_avgs))
count = len(spread_rates)
spread_average = np.average(spread_rates)*100
spread_stdev = np.std(spread_rates)*100
print("data count = %d" % (count,))
print("spread average = %f%%" % (spread_average))
print("spread stdev = %f%%" % (spread_stdev))
print("95%% range: %f%% - %f%% - %f%%" % ((spread_average-2*spread_stdev), spread_average, (spread_average+2*spread_stdev),))
| [
"json.loads",
"numpy.std",
"argparse.ArgumentParser",
"numpy.average"
] | [((796, 849), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""Analyze spread"""'}), "(description='Analyze spread')\n", (819, 849), False, 'import argparse\n'), ((1821, 1837), 'json.loads', 'json.loads', (['line'], {}), '(line)\n', (1831, 1837), False, 'import json\n'), ((2382, 2406), 'numpy.average', 'np.average', (['spread_rates'], {}), '(spread_rates)\n', (2392, 2406), True, 'import numpy as np\n'), ((2430, 2450), 'numpy.std', 'np.std', (['spread_rates'], {}), '(spread_rates)\n', (2436, 2450), True, 'import numpy as np\n')] |
import os
import time
import h5py
from collections import OrderedDict
import numpy as np
from env.jaco.two_jaco import TwoJacoEnv
from env.transform_utils import quat_dist
class TwoJacoPlaceEnv(TwoJacoEnv):
def __init__(self, **kwargs):
self.name = 'two-jaco-place'
super().__init__('two_jaco_pick.xml', **kwargs)
# config
self._env_config.update({
"train_left": True,
"train_right": True,
"success_reward": 500,
"target_xy_reward": 500,
"target_z_reward": 500,
"move_finish_reward": 50,
"grasp_reward": 100,
"inair_reward": 0,
"init_randomness": 0.005,
"dest_pos": [0.3, -0.02, 0.86],
"dest_center": True, # set destination to center
"ctrl_reward": 1e-4,
"max_episode_steps": 100,
"init_qpos_dir": None
})
self._env_config.update({ k:v for k,v in kwargs.items() if k in self._env_config })
# target position
if not self._env_config['dest_center']:
if self._env_config['train_left'] and not self._env_config['train_right']:
self._env_config['dest_pos'] = [0.05, 0.2, 0.86] # 0.15
elif not self._env_config['train_left'] and self._env_config['train_right']:
self._env_config['dest_pos'] = [0.05, -0.2, 0.86] # 0.15
# state
self._hold_duration = [0, 0]
self._box_z = [0, 0]
self._min_height = self._env_config['dest_pos'][-1] + 0.08
self._get_reference()
def _get_reference(self):
self.cube_body_id = [self.sim.model.body_name2id("cube{}".format(i)) for i in [1, 2]]
self.cube_geom_id = [self.sim.model.geom_name2id("cube{}".format(i)) for i in [1, 2]]
def _compute_reward(self):
        ''' The reward combines a z-placement term and an xy-position term:
            z reward  = -|current z position - destination z position|
            xy reward = -||current xy position - destination xy position||
        '''
done = False
off_table = [False, False]
info = {}
# compute gripper centers
cube_z = [self._get_pos("cube1")[-1], self._get_pos("cube2")[-1]]
gripper_centers = self._get_gripper_centers()
# get init box pos
if self._t == 0:
self._init_box_pos = [self._get_pos('cube{}'.format(i + 1)) for i in range(2)]
# object placing reward before placing is finished
in_hand = [True, True]
target_xy_rewards = [0, 0]
target_z_rewards = [0, 0]
target_dist_xy = [0, 0]
target_dist_z = [0, 0]
grasp_count = [0, 0]
grasp_rewards = [0, 0]
inair_rewards = [1, 1]
move_finish_rewards = [0, 0]
for i in range(2):
dist_cube_hand = np.linalg.norm(self._get_pos('cube{}'.format(i + 1)) - gripper_centers[i])
in_hand[i] = dist_cube_hand < 0.08
off_table[i] = cube_z[i] < 0.8
# place reward
cube_pos = self._get_pos('cube{}'.format(i + 1))
target_dist_xy[i] = float(np.linalg.norm(cube_pos[:2] - self._env_config['dest_pos'][:2]))
target_dist_z[i] = float(np.abs(cube_pos[-1] - self._env_config['dest_pos'][-1]))
if self._stage[i] == 'move':
height_decrease = self._init_box_pos[i][-1] - cube_pos[-1]
target_z_rewards[i] = -self._env_config["target_z_reward"] * max(0, height_decrease)
if target_dist_xy[i] < 0.06 and cube_pos[-1] > self._min_height:
move_finish_rewards[i] += self._env_config['move_finish_reward']
self._stage[i] = 'place'
if self._stage[i] == 'place':
target_z_rewards[i] = -self._env_config["target_z_reward"] * target_dist_z[i]
target_xy_rewards[i] -= self._env_config["target_xy_reward"] * target_dist_xy[i]
# grasp reward
contact_flag = [False] * 3
geom_name_prefix = 'jaco_{}_link_finger'.format('l' if i == 0 else 'r')
for j in range(self.sim.data.ncon):
c = self.sim.data.contact[j]
for k in range(3):
geom_name = '{}_{}'.format(geom_name_prefix, k + 1)
geom_id = self.sim.model.geom_name2id(geom_name)
if c.geom1 == geom_id and c.geom2 == self.cube_geom_id[i]:
contact_flag[k] = True
if c.geom2 == geom_id and c.geom1 == self.cube_geom_id[i]:
contact_flag[k] = True
grasp_count[i] = np.array(contact_flag).astype(int).sum()
if self._t == 0:
self._init_grasp_count[i] = grasp_count[i]
grasp_rewards[i] = -self._env_config["grasp_reward"] * max(0, self._init_grasp_count[i] - grasp_count[i])
# in air reward
table_geom_id = self.sim.model.geom_name2id("table_collision")
for j in range(self.sim.data.ncon):
c = self.sim.data.contact[j]
if c.geom1 == table_geom_id and c.geom2 == self.cube_geom_id[i]:
inair_rewards[i] = 0
if c.geom2 == table_geom_id and c.geom1 == self.cube_geom_id[i]:
inair_rewards[i] = 0
if target_dist_xy[i] < 0.06:
inair_rewards[i] = 1
inair_rewards[i] *= self._env_config["inair_reward"]
# success criteria
self._success = True
if self._env_config['train_left']:
self._success &= in_hand[0] and target_dist_xy[0] < 0.04 and target_dist_z[0] < 0.03 and self._stage[0] == 'place'
if self._env_config['train_right']:
self._success &= in_hand[1] and target_dist_xy[1] < 0.04 and target_dist_z[1] < 0.03 and self._stage[1] == 'place'
# success reward
success_reward = 0
if self._success:
print('All places success!')
success_reward = self._env_config["success_reward"]
done = self._success
if self._env_config['train_left']:
done |= (not in_hand[0])
if self._env_config['train_right']:
done |= (not in_hand[1])
reward = success_reward
in_hand_rewards = [0, 0]
if self._env_config["train_left"]:
done |= off_table[0]
reward += target_xy_rewards[0] + target_z_rewards[0] + grasp_rewards[0] + inair_rewards[0] + move_finish_rewards[0]
if self._env_config["train_right"]:
done |= off_table[1]
reward += target_xy_rewards[1] + target_z_rewards[1] + grasp_rewards[1] + inair_rewards[1] + move_finish_rewards[1]
info = {"reward_xy_1": target_xy_rewards[0],
"reward_xy_2": target_xy_rewards[1],
"reward_z_1": target_z_rewards[0],
"reward_z_2": target_z_rewards[1],
"reward_grasp_1": grasp_rewards[0],
"reward_grasp_2": grasp_rewards[1],
"reward_inair_1": inair_rewards[0],
"reward_inair_2": inair_rewards[1],
"reward_move_finish_1": move_finish_rewards[0],
"reward_move_finish_2": move_finish_rewards[1],
"grasp_count_1": grasp_count[0],
"grasp_count_2": grasp_count[1],
"in_hand": in_hand,
"target_pos": self._env_config['dest_pos'],
"cube1_pos": self._get_pos("cube1"),
"cube2_pos": self._get_pos("cube2"),
"gripper1_pos": gripper_centers[0],
"gripper2_pos": gripper_centers[1],
"target_dist_xy": np.round(target_dist_xy, 3),
"target_dist_z": np.round(target_dist_z, 3),
"curr_qpos_l": np.round(self.data.qpos[1:10], 1).tolist(),
"curr_qpos_r": np.round(self.data.qpos[10:19], 1).tolist(),
"stage": self._stage,
"success": self._success }
return reward, done, info
def _step(self, a):
prev_reward, _, _ = self._compute_reward()
# build action from policy output
filled_action = np.zeros((self.action_space.size,))
inclusion_dict = { 'right_arm': self._env_config['train_right'],
'left_arm': self._env_config['train_left'] }
dest_idx, src_idx = 0, 0
for k in self.action_space.shape.keys():
new_dest_idx = dest_idx + self.action_space.shape[k]
if inclusion_dict[k]:
new_src_idx = src_idx + self.action_space.shape[k]
filled_action[dest_idx:new_dest_idx] = a[src_idx:new_src_idx]
src_idx = new_src_idx
dest_idx = new_dest_idx
a = filled_action
# scale actions from [-1, 1] range to actual control range
mins = self.action_space.minimum
maxs = self.action_space.maximum
scaled_action = np.zeros_like(a)
for i in range(self.action_space.size):
scaled_action[i] = mins[i] + (maxs[i] - mins[i]) * (a[i] / 2 + 0.5)
self.do_simulation(scaled_action)
self._t += 1
ob = self._get_obs()
reward, done, info = self._compute_reward()
ctrl_reward = self._ctrl_reward(scaled_action)
info['reward_ctrl'] = ctrl_reward
self._reward = reward - prev_reward + ctrl_reward
return ob, self._reward, done, info
def reset_box(self):
self.cube1_target_reached = False
self.cube2_target_reached = False
super().reset_box()
qpos = self.data.qpos.ravel().copy()
qvel = self.data.qvel.ravel().copy()
# set agent's and box's initial position from saved poses
if self._env_config['init_qpos_dir']:
filepath = os.path.join(self._env_config['init_qpos_dir'], 'success_qpos.p')
with h5py.File(filepath, 'r', libver='latest', swmr=True) as f:
select_success = False
while not select_success:
ix = np.random.randint(len(f))
qpos = f[str(ix)].value
cube_l_ok = qpos[20] > self._min_height or (not self._env_config['train_left'])
cube_r_ok = qpos[27] > self._min_height or (not self._env_config['train_right'])
if cube_l_ok and cube_r_ok:
select_success = True
self.set_state(qpos, qvel)
self._hold_duration = [0, 0]
self._t = 0
self._placed = False
self._init_grasp_count = [0, 0]
self._stage = ['move'] * 2
| [
"numpy.abs",
"os.path.join",
"h5py.File",
"numpy.array",
"numpy.zeros",
"numpy.linalg.norm",
"numpy.zeros_like",
"numpy.round"
] | [((8242, 8277), 'numpy.zeros', 'np.zeros', (['(self.action_space.size,)'], {}), '((self.action_space.size,))\n', (8250, 8277), True, 'import numpy as np\n'), ((9023, 9039), 'numpy.zeros_like', 'np.zeros_like', (['a'], {}), '(a)\n', (9036, 9039), True, 'import numpy as np\n'), ((7742, 7769), 'numpy.round', 'np.round', (['target_dist_xy', '(3)'], {}), '(target_dist_xy, 3)\n', (7750, 7769), True, 'import numpy as np\n'), ((7804, 7830), 'numpy.round', 'np.round', (['target_dist_z', '(3)'], {}), '(target_dist_z, 3)\n', (7812, 7830), True, 'import numpy as np\n'), ((9882, 9947), 'os.path.join', 'os.path.join', (["self._env_config['init_qpos_dir']", '"""success_qpos.p"""'], {}), "(self._env_config['init_qpos_dir'], 'success_qpos.p')\n", (9894, 9947), False, 'import os\n'), ((3162, 3225), 'numpy.linalg.norm', 'np.linalg.norm', (["(cube_pos[:2] - self._env_config['dest_pos'][:2])"], {}), "(cube_pos[:2] - self._env_config['dest_pos'][:2])\n", (3176, 3225), True, 'import numpy as np\n'), ((3264, 3319), 'numpy.abs', 'np.abs', (["(cube_pos[-1] - self._env_config['dest_pos'][-1])"], {}), "(cube_pos[-1] - self._env_config['dest_pos'][-1])\n", (3270, 3319), True, 'import numpy as np\n'), ((9965, 10017), 'h5py.File', 'h5py.File', (['filepath', '"""r"""'], {'libver': '"""latest"""', 'swmr': '(True)'}), "(filepath, 'r', libver='latest', swmr=True)\n", (9974, 10017), False, 'import h5py\n'), ((7863, 7896), 'numpy.round', 'np.round', (['self.data.qpos[1:10]', '(1)'], {}), '(self.data.qpos[1:10], 1)\n', (7871, 7896), True, 'import numpy as np\n'), ((7938, 7972), 'numpy.round', 'np.round', (['self.data.qpos[10:19]', '(1)'], {}), '(self.data.qpos[10:19], 1)\n', (7946, 7972), True, 'import numpy as np\n'), ((4679, 4701), 'numpy.array', 'np.array', (['contact_flag'], {}), '(contact_flag)\n', (4687, 4701), True, 'import numpy as np\n')] |
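The `_step` method in the row above maps policy actions from `[-1, 1]` into each actuator's control range before calling `do_simulation`. A minimal standalone sketch of that rescaling; the bounds below are made-up placeholders, not the real Jaco limits:

```python
import numpy as np

def scale_action(a, mins, maxs):
    # Affine map: -1 -> mins, 0 -> midpoint, +1 -> maxs (mirrors the loop in _step).
    a = np.asarray(a, dtype=float)
    return mins + (maxs - mins) * (a / 2 + 0.5)

# Hypothetical per-actuator control bounds, for illustration only.
mins = np.array([-0.5, 0.0])
maxs = np.array([0.5, 2.0])
scaled = scale_action([-1.0, 1.0], mins, maxs)  # endpoints map to mins and maxs
```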
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
class GraphAttentionLayer(nn.Module):
"""
Simple GAT layer, similar to https://arxiv.org/abs/1710.10903
"""
def __init__(self, in_features, out_features, dropout, alpha, concat=True):
super(GraphAttentionLayer, self).__init__()
self.dropout = dropout
self.in_features = in_features
self.out_features = out_features
self.alpha = alpha
self.concat = concat
self.W = nn.Parameter(nn.init.xavier_uniform_(torch.Tensor(in_features, out_features).type(
torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor), gain=np.sqrt(2.0)),
requires_grad=True)
self.a = nn.Parameter(nn.init.xavier_uniform_(torch.Tensor(2 * out_features, 1).type(
torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor), gain=np.sqrt(2.0)),
requires_grad=True)
self.leakyrelu = nn.LeakyReLU(self.alpha)
def forward(self, input, adj):
h = torch.mm(input, self.W)
N = h.size()[0]
# print(f'self.a\'s shape is {self.a.size()}')
a_input = torch.cat([h.repeat(1, N).view(N * N, -1), h.repeat(N, 1)], dim=1).view(N, -1, 2 * self.out_features)
e = self.leakyrelu(torch.matmul(a_input, self.a).squeeze(2))
zero_vec = -9e15 * torch.ones_like(e)
attention = torch.where(adj > 0, e, zero_vec)
attention = F.softmax(attention, dim=1)
attention = F.dropout(attention, self.dropout, training=self.training)
h_prime = torch.matmul(attention, h)
if self.concat:
return F.elu(h_prime)
else:
return h_prime
def __repr__(self):
return self.__class__.__name__ + ' (' + str(self.in_features) + ' -> ' + str(self.out_features) + ')'
class GraphConvolution(nn.Module):
"""
Simple GCN layer, similar to https://arxiv.org/abs/1609.02907
"""
def __init__(self, in_features, out_features, bias=True):
super(GraphConvolution, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = nn.Parameter(torch.FloatTensor(in_features, out_features))
if bias:
self.bias = nn.Parameter(torch.FloatTensor(out_features))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
if self.bias is not None:
self.bias.data.uniform_(-stdv, stdv)
def forward(self, input, adj):
support = torch.mm(input, self.weight)
output = torch.spmm(adj, support)
if self.bias is not None:
return output + self.bias
else:
return output
def __repr__(self):
return self.__class__.__name__ + ' (' \
+ str(self.in_features) + ' -> ' \
+ str(self.out_features) + ')'
| [
"torch.ones_like",
"numpy.sqrt",
"torch.nn.LeakyReLU",
"torch.nn.functional.elu",
"torch.Tensor",
"torch.FloatTensor",
"torch.nn.functional.dropout",
"torch.mm",
"torch.cuda.is_available",
"torch.matmul",
"torch.spmm",
"torch.nn.functional.softmax",
"torch.where"
] | [((1060, 1084), 'torch.nn.LeakyReLU', 'nn.LeakyReLU', (['self.alpha'], {}), '(self.alpha)\n', (1072, 1084), True, 'import torch.nn as nn\n'), ((1133, 1156), 'torch.mm', 'torch.mm', (['input', 'self.W'], {}), '(input, self.W)\n', (1141, 1156), False, 'import torch\n'), ((1493, 1526), 'torch.where', 'torch.where', (['(adj > 0)', 'e', 'zero_vec'], {}), '(adj > 0, e, zero_vec)\n', (1504, 1526), False, 'import torch\n'), ((1547, 1574), 'torch.nn.functional.softmax', 'F.softmax', (['attention'], {'dim': '(1)'}), '(attention, dim=1)\n', (1556, 1574), True, 'import torch.nn.functional as F\n'), ((1595, 1653), 'torch.nn.functional.dropout', 'F.dropout', (['attention', 'self.dropout'], {'training': 'self.training'}), '(attention, self.dropout, training=self.training)\n', (1604, 1653), True, 'import torch.nn.functional as F\n'), ((1672, 1698), 'torch.matmul', 'torch.matmul', (['attention', 'h'], {}), '(attention, h)\n', (1684, 1698), False, 'import torch\n'), ((2777, 2805), 'torch.mm', 'torch.mm', (['input', 'self.weight'], {}), '(input, self.weight)\n', (2785, 2805), False, 'import torch\n'), ((2823, 2847), 'torch.spmm', 'torch.spmm', (['adj', 'support'], {}), '(adj, support)\n', (2833, 2847), False, 'import torch\n'), ((1454, 1472), 'torch.ones_like', 'torch.ones_like', (['e'], {}), '(e)\n', (1469, 1472), False, 'import torch\n'), ((1743, 1757), 'torch.nn.functional.elu', 'F.elu', (['h_prime'], {}), '(h_prime)\n', (1748, 1757), True, 'import torch.nn.functional as F\n'), ((2280, 2324), 'torch.FloatTensor', 'torch.FloatTensor', (['in_features', 'out_features'], {}), '(in_features, out_features)\n', (2297, 2324), False, 'import torch\n'), ((2380, 2411), 'torch.FloatTensor', 'torch.FloatTensor', (['out_features'], {}), '(out_features)\n', (2397, 2411), False, 'import torch\n'), ((716, 728), 'numpy.sqrt', 'np.sqrt', (['(2.0)'], {}), '(2.0)\n', (723, 728), True, 'import numpy as np\n'), ((969, 981), 'numpy.sqrt', 'np.sqrt', (['(2.0)'], {}), '(2.0)\n', (976, 981), True, 
'import numpy as np\n'), ((1384, 1413), 'torch.matmul', 'torch.matmul', (['a_input', 'self.a'], {}), '(a_input, self.a)\n', (1396, 1413), False, 'import torch\n'), ((576, 615), 'torch.Tensor', 'torch.Tensor', (['in_features', 'out_features'], {}), '(in_features, out_features)\n', (588, 615), False, 'import torch\n'), ((660, 685), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (683, 685), False, 'import torch\n'), ((835, 868), 'torch.Tensor', 'torch.Tensor', (['(2 * out_features)', '(1)'], {}), '(2 * out_features, 1)\n', (847, 868), False, 'import torch\n'), ((913, 938), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (936, 938), False, 'import torch\n')] |
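The `forward` pass above masks attention logits with a large negative constant wherever the adjacency matrix has no edge, so the softmax assigns those node pairs near-zero weight. A NumPy sketch of just that masking step (the layer above does the equivalent with `torch.where` and `F.softmax`):

```python
import numpy as np

def masked_softmax(e, adj, neg=-9e15):
    # No-edge entries get a huge negative logit; softmax then sends them to ~0.
    masked = np.where(adj > 0, e, neg)
    masked = masked - masked.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(masked)
    return w / w.sum(axis=1, keepdims=True)

adj = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])  # toy 3-node graph
e = np.zeros((3, 3))                              # uniform logits
att = masked_softmax(e, adj)  # rows sum to 1; masked entries are ~0
```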
"""Setup the file structure for the software. Specifies several folders:
software_dir: path of installation
"""
import inspect
import os
import warnings
from socket import getfqdn
import pandas as pd
from immutabledict import immutabledict
import typing as ty
import dddm
import numpy as np
export, __all__ = dddm.exporter()
__all__ += ['log']
context = {}
log = dddm.utils.get_logger('dddm')
_naive_tmp = '/tmp/'
_host = getfqdn()
base_detectors = [
dddm.detectors.examples.XenonSimple,
dddm.detectors.examples.ArgonSimple,
dddm.detectors.examples.GermaniumSimple,
dddm.detectors.xenon_nt.XenonNtNr,
dddm.detectors.xenon_nt.XenonNtMigdal,
dddm.detectors.super_cdms.SuperCdmsHvGeNr,
dddm.detectors.super_cdms.SuperCdmsHvSiNr,
dddm.detectors.super_cdms.SuperCdmsIzipGeNr,
dddm.detectors.super_cdms.SuperCdmsIzipSiNr,
dddm.detectors.super_cdms.SuperCdmsHvGeMigdal,
dddm.detectors.super_cdms.SuperCdmsHvSiMigdal,
dddm.detectors.super_cdms.SuperCdmsIzipGeMigdal,
dddm.detectors.super_cdms.SuperCdmsIzipSiMigdal,
]
class Context:
"""Centralized object for managing:
- configurations
- files
- detector objects
"""
_directories = None
_detector_registry = None
_samplers = immutabledict({
'nestle': dddm.samplers.nestle.NestleSampler,
'multinest': dddm.samplers.pymultinest.MultiNestSampler,
'emcee': dddm.samplers.emcee.MCMCStatModel,
'multinest_combined': dddm.samplers.multi_detectors.CombinedMultinest,
'nestle_combined': dddm.samplers.multi_detectors.CombinedNestle,
})
_halo_classes = immutabledict({
'shm': dddm.SHM,
'shielded_shm': dddm.ShieldedSHM,
})
def register(self, detector: dddm.Experiment):
"""Register a detector to the context"""
if self._detector_registry is None:
self._detector_registry = {}
existing_detector = self._detector_registry.get(detector.detector_name)
if existing_detector is not None:
log.warning(f'replacing {existing_detector} with {detector}')
self._check_detector_is_valid(detector)
self._detector_registry[detector.detector_name] = detector
def set_paths(self, paths: dict, tolerant=False):
if self._directories is None:
self._directories = {}
for reference, path in paths.items():
if not os.path.exists(path):
try:
os.mkdir(path)
except Exception as e:
if tolerant:
warnings.warn(f'Could not find {path} for {reference}', UserWarning)
else:
raise FileNotFoundError(
f'Could not find {path} for {reference}'
) from e
result = {**self._directories.copy(), **paths}
self._directories = result
def show_folders(self):
result = {'name': list(self._directories.keys())}
result['path'] = [self._directories[name] for name in result['name']]
result['exists'] = [os.path.exists(p) for p in result['path']]
result['n_files'] = [(len(os.listdir(p)) if os.path.exists(p) else 0) for p in
result['path']]
return pd.DataFrame(result)
def get_detector(self, detector: str, **kwargs):
if detector not in self._detector_registry:
raise NotImplementedError(f'{detector} not in {self.detectors}')
return self._detector_registry[detector](**kwargs)
def get_sampler_for_detector(self,
wimp_mass,
cross_section,
sampler_name: str,
detector_name: ty.Union[str, list, tuple],
prior: ty.Union[str, dict],
halo_name='shm',
detector_kwargs: dict = None,
halo_kwargs: dict = None,
sampler_kwargs: dict = None,
fit_parameters=dddm.statistics.get_param_list(),
):
self._check_sampler_args(wimp_mass, cross_section, sampler_name, detector_name, prior,
halo_name, detector_kwargs, halo_kwargs, sampler_kwargs,
fit_parameters)
sampler_class = self._samplers[sampler_name]
# If any class needs any of the paths, provide those here.
sampler_kwargs = self._add_folders_to_kwargs(sampler_class, sampler_kwargs)
halo_kwargs = self._add_folders_to_kwargs(
self._halo_classes.get(halo_name), halo_kwargs)
halo_model = self._halo_classes[halo_name](**halo_kwargs)
# TODO instead, create a super detector instead of smaller ones
if isinstance(detector_name, (list, tuple)):
if not sampler_class.allow_multiple_detectors:
raise NotImplementedError(f'{sampler_class} does not allow multiple detectors')
detector_instance = [
self.get_detector(
det,
**self._add_folders_to_kwargs(self._detector_registry.get(det),
detector_kwargs)
)
for det in detector_name]
if halo_name == 'shielded_shm':
if len(locations := {d.location for d in detector_instance}) > 1:
raise ValueError(
f'Running with multiple locations for shielded_shm is not allowed. Got {locations}')
halo_kwargs.setdefault('log_mass', np.log10(wimp_mass))
halo_kwargs.setdefault('log_cross_section', np.log10(cross_section))
halo_kwargs.setdefault('location', list(locations)[0])
spectrum_instance = [dddm.DetectorSpectrum(
experiment=d, dark_matter_model=halo_model)
for d in detector_instance]
else:
detector_kwargs = self._add_folders_to_kwargs(
self._detector_registry.get(detector_name), detector_kwargs)
detector_instance = self.get_detector(detector_name, **detector_kwargs)
spectrum_instance = dddm.DetectorSpectrum(experiment=detector_instance,
dark_matter_model=halo_model)
if isinstance(prior, str):
prior = dddm.get_priors(prior)
return sampler_class(wimp_mass=wimp_mass,
cross_section=cross_section,
spectrum_class=spectrum_instance,
prior=prior,
fit_parameters=fit_parameters,
**sampler_kwargs
)
def _check_sampler_args(self,
wimp_mass,
cross_section,
sampler_name: str,
detector_name: ty.Union[str, list, tuple],
prior: ty.Union[str, dict],
halo_name='shm',
detector_kwargs: dict = None,
halo_kwargs: dict = None,
sampler_kwargs: dict = None,
fit_parameters=dddm.statistics.get_param_list(),
):
for det in dddm.utils.to_str_tuple(detector_name):
assert det in self._detector_registry, f'{det} is unknown'
assert wimp_mass < 200 and wimp_mass > 0.001, f'{wimp_mass} invalid'
assert np.log10(cross_section) < -20 and np.log10(
cross_section) > -60, f'{cross_section} invalid'
assert sampler_name in self._samplers, f'choose from {self._samplers}, got {sampler_name}'
assert isinstance(prior, (str, dict, immutabledict)), f'invalid {prior}'
assert halo_name in self._halo_classes, f'invalid {halo_name}'
def _add_folders_to_kwargs(self, function, current_kwargs: ty.Union[None, dict]) -> dict:
if function is None:
return
if current_kwargs is None:
current_kwargs = {}
takes = inspect.getfullargspec(function).args
for directory, path in self._directories.items():
if directory in takes:
current_kwargs.update({directory: path})
return current_kwargs
@property
def detectors(self):
return sorted(list(self._detector_registry.keys()))
@staticmethod
def _check_detector_is_valid(detector: dddm.Experiment):
detector()._check_class()
@export
def base_context():
context = Context()
installation_folder = dddm.__path__[0]
default_context = {
'software_dir': installation_folder,
'results_dir': os.path.join(installation_folder, 'DD_DM_targets_data'),
'spectra_files': os.path.join(installation_folder, 'DD_DM_targets_spectra'),
'verne_folder': _get_verne_folder(),
'verne_files': _get_verne_folder(),
'tmp_folder': get_temp(),
}
context.set_paths(default_context)
for detector in base_detectors:
context.register(detector)
return context
def _get_verne_folder():
if not dddm.utils.is_installed('verne'):
return './verne'
import verne
return os.path.join(os.path.split(verne.__path__[0])[0], 'results')
def get_temp():
if 'TMPDIR' in os.environ and os.access(os.environ['TMPDIR'], os.W_OK):
tmp_folder = os.environ['TMPDIR']
elif 'TMP' in os.environ and os.access(os.environ['TMP'], os.W_OK):
tmp_folder = os.environ['TMP']
elif os.path.exists(_naive_tmp) and os.access(_naive_tmp, os.W_OK):
tmp_folder = _naive_tmp
else:
raise FileNotFoundError('No temp folder available')
return tmp_folder
def open_save_dir(save_as, base_dir=None, force_index=False, _hash=None):
"""
:param save_as: requested name of folder to open in the result folder
:param base_dir: folder where the save_as dir is to be saved in.
This is the results folder by default
:param force_index: option to force to write to a number (must be an
override!)
:param _hash: add a has to save_as dir to avoid duplicate naming
conventions while running multiple jobs
:return: the name of the folder as was saveable (usually input +
some number)
"""
if base_dir is None:
raise ValueError(save_as, base_dir, force_index, _hash)
if force_index:
results_path = os.path.join(base_dir, save_as + str(force_index))
elif _hash is None:
if force_index is not False:
raise ValueError(
f'do not set _hash to {_hash} and force_index to '
f'{force_index} simultaneously'
)
results_path = dddm.utils._folders_plus_one(base_dir, save_as)
else:
results_path = os.path.join(base_dir, save_as + '_HASH' + str(_hash))
dddm.utils.check_folder_for_file(os.path.join(results_path, "some_file_goes_here"))
log.info('open_save_dir::\tusing ' + results_path)
return results_path
| [
"numpy.log10",
"dddm.utils.to_str_tuple",
"dddm.utils.is_installed",
"inspect.getfullargspec",
"dddm.exporter",
"os.path.exists",
"os.listdir",
"dddm.DetectorSpectrum",
"dddm.statistics.get_param_list",
"os.path.split",
"os.mkdir",
"pandas.DataFrame",
"warnings.warn",
"dddm.utils._folders_... | [((312, 327), 'dddm.exporter', 'dddm.exporter', ([], {}), '()\n', (325, 327), False, 'import dddm\n'), ((367, 396), 'dddm.utils.get_logger', 'dddm.utils.get_logger', (['"""dddm"""'], {}), "('dddm')\n", (388, 396), False, 'import dddm\n'), ((426, 435), 'socket.getfqdn', 'getfqdn', ([], {}), '()\n', (433, 435), False, 'from socket import getfqdn\n'), ((1262, 1579), 'immutabledict.immutabledict', 'immutabledict', (["{'nestle': dddm.samplers.nestle.NestleSampler, 'multinest': dddm.samplers.\n pymultinest.MultiNestSampler, 'emcee': dddm.samplers.emcee.\n MCMCStatModel, 'multinest_combined': dddm.samplers.multi_detectors.\n CombinedMultinest, 'nestle_combined': dddm.samplers.multi_detectors.\n CombinedNestle}"], {}), "({'nestle': dddm.samplers.nestle.NestleSampler, 'multinest':\n dddm.samplers.pymultinest.MultiNestSampler, 'emcee': dddm.samplers.\n emcee.MCMCStatModel, 'multinest_combined': dddm.samplers.\n multi_detectors.CombinedMultinest, 'nestle_combined': dddm.samplers.\n multi_detectors.CombinedNestle})\n", (1275, 1579), False, 'from immutabledict import immutabledict\n'), ((1628, 1694), 'immutabledict.immutabledict', 'immutabledict', (["{'shm': dddm.SHM, 'shielded_shm': dddm.ShieldedSHM}"], {}), "({'shm': dddm.SHM, 'shielded_shm': dddm.ShieldedSHM})\n", (1641, 1694), False, 'from immutabledict import immutabledict\n'), ((3302, 3322), 'pandas.DataFrame', 'pd.DataFrame', (['result'], {}), '(result)\n', (3314, 3322), True, 'import pandas as pd\n'), ((4168, 4200), 'dddm.statistics.get_param_list', 'dddm.statistics.get_param_list', ([], {}), '()\n', (4198, 4200), False, 'import dddm\n'), ((7481, 7513), 'dddm.statistics.get_param_list', 'dddm.statistics.get_param_list', ([], {}), '()\n', (7511, 7513), False, 'import dddm\n'), ((7565, 7603), 'dddm.utils.to_str_tuple', 'dddm.utils.to_str_tuple', (['detector_name'], {}), '(detector_name)\n', (7588, 7603), False, 'import dddm\n'), ((8972, 9027), 'os.path.join', 'os.path.join', 
(['installation_folder', '"""DD_DM_targets_data"""'], {}), "(installation_folder, 'DD_DM_targets_data')\n", (8984, 9027), False, 'import os\n'), ((9054, 9112), 'os.path.join', 'os.path.join', (['installation_folder', '"""DD_DM_targets_spectra"""'], {}), "(installation_folder, 'DD_DM_targets_spectra')\n", (9066, 9112), False, 'import os\n'), ((9410, 9442), 'dddm.utils.is_installed', 'dddm.utils.is_installed', (['"""verne"""'], {}), "('verne')\n", (9433, 9442), False, 'import dddm\n'), ((9610, 9650), 'os.access', 'os.access', (["os.environ['TMPDIR']", 'os.W_OK'], {}), "(os.environ['TMPDIR'], os.W_OK)\n", (9619, 9650), False, 'import os\n'), ((11182, 11231), 'os.path.join', 'os.path.join', (['results_path', '"""some_file_goes_here"""'], {}), "(results_path, 'some_file_goes_here')\n", (11194, 11231), False, 'import os\n'), ((3112, 3129), 'os.path.exists', 'os.path.exists', (['p'], {}), '(p)\n', (3126, 3129), False, 'import os\n'), ((6369, 6455), 'dddm.DetectorSpectrum', 'dddm.DetectorSpectrum', ([], {'experiment': 'detector_instance', 'dark_matter_model': 'halo_model'}), '(experiment=detector_instance, dark_matter_model=\n halo_model)\n', (6390, 6455), False, 'import dddm\n'), ((6560, 6582), 'dddm.get_priors', 'dddm.get_priors', (['prior'], {}), '(prior)\n', (6575, 6582), False, 'import dddm\n'), ((8350, 8382), 'inspect.getfullargspec', 'inspect.getfullargspec', (['function'], {}), '(function)\n', (8372, 8382), False, 'import inspect\n'), ((9510, 9542), 'os.path.split', 'os.path.split', (['verne.__path__[0]'], {}), '(verne.__path__[0])\n', (9523, 9542), False, 'import os\n'), ((9727, 9764), 'os.access', 'os.access', (["os.environ['TMP']", 'os.W_OK'], {}), "(os.environ['TMP'], os.W_OK)\n", (9736, 9764), False, 'import os\n'), ((11008, 11055), 'dddm.utils._folders_plus_one', 'dddm.utils._folders_plus_one', (['base_dir', 'save_as'], {}), '(base_dir, save_as)\n', (11036, 11055), False, 'import dddm\n'), ((2408, 2428), 'os.path.exists', 'os.path.exists', (['path'], {}), 
'(path)\n', (2422, 2428), False, 'import os\n'), ((3207, 3224), 'os.path.exists', 'os.path.exists', (['p'], {}), '(p)\n', (3221, 3224), False, 'import os\n'), ((5976, 6041), 'dddm.DetectorSpectrum', 'dddm.DetectorSpectrum', ([], {'experiment': 'd', 'dark_matter_model': 'halo_model'}), '(experiment=d, dark_matter_model=halo_model)\n', (5997, 6041), False, 'import dddm\n'), ((7768, 7791), 'numpy.log10', 'np.log10', (['cross_section'], {}), '(cross_section)\n', (7776, 7791), True, 'import numpy as np\n'), ((7802, 7825), 'numpy.log10', 'np.log10', (['cross_section'], {}), '(cross_section)\n', (7810, 7825), True, 'import numpy as np\n'), ((9814, 9840), 'os.path.exists', 'os.path.exists', (['_naive_tmp'], {}), '(_naive_tmp)\n', (9828, 9840), False, 'import os\n'), ((9845, 9875), 'os.access', 'os.access', (['_naive_tmp', 'os.W_OK'], {}), '(_naive_tmp, os.W_OK)\n', (9854, 9875), False, 'import os\n'), ((2471, 2485), 'os.mkdir', 'os.mkdir', (['path'], {}), '(path)\n', (2479, 2485), False, 'import os\n'), ((3189, 3202), 'os.listdir', 'os.listdir', (['p'], {}), '(p)\n', (3199, 3202), False, 'import os\n'), ((5765, 5784), 'numpy.log10', 'np.log10', (['wimp_mass'], {}), '(wimp_mass)\n', (5773, 5784), True, 'import numpy as np\n'), ((5846, 5869), 'numpy.log10', 'np.log10', (['cross_section'], {}), '(cross_section)\n', (5854, 5869), True, 'import numpy as np\n'), ((2582, 2650), 'warnings.warn', 'warnings.warn', (['f"""Could not find {path} for {reference}"""', 'UserWarning'], {}), "(f'Could not find {path} for {reference}', UserWarning)\n", (2595, 2650), False, 'import warnings\n')] |
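`get_temp` in the row above walks a fixed priority chain — `TMPDIR`, then `TMP`, then `/tmp/` — taking the first writable candidate. A dependency-free sketch of the same fallback logic, with the environment and writability check injected so it runs anywhere (the names here are illustrative, not part of `dddm`):

```python
def resolve_tmp(environ, is_writable, naive_tmp='/tmp/'):
    # Mirror get_temp's priority order: TMPDIR, TMP, then the naive fallback.
    if 'TMPDIR' in environ and is_writable(environ['TMPDIR']):
        return environ['TMPDIR']
    if 'TMP' in environ and is_writable(environ['TMP']):
        return environ['TMP']
    if is_writable(naive_tmp):
        return naive_tmp
    raise FileNotFoundError('No temp folder available')

# Fake environment in which only /scratch is writable.
tmp = resolve_tmp({'TMP': '/scratch'}, lambda p: p == '/scratch')
```

(The real function additionally requires `/tmp/` to exist before checking writability; here the single predicate stands in for both checks.)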
"""
In this file we use deploy-ml to apply an sk-learn logistic regression to
cleaned Titanic data, we then evaluate it and deploy it
"""
import pickle
import numpy as np
# we import logistic regression
from deployml.sklearn import LogisticRegressionBase
import pandas as pd
# we load the data
train = pd.read_csv('titanic_example.csv')
# Here we define the model
log = LogisticRegressionBase()
# we then define the data
log.data = train
train.drop('Unnamed: 0', inplace=True, axis=1)
# we then define what we're trying to predict in the data, in this case we are trying to predict survival
log.outcome_pointer = 'Survived'
# we then plot the learning curve, this also trains the model, and we scale the data using a min max
# scaler, the larger the batch size the quicker the training. This function also supports early stopping
log.plot_learning_curve(scale=True, scaling_tool='min max', batch_size=15)
# we then show the learning curve (with this small example data don't expect a good learning curve)
# log.show_learning_curve()
# we then evaluate the outcome. This is just the sk-learn metrics function wrapped by deploy-ml
# how it's encouraged to be used as the metrics will be attached to the object and included in
# deployment
# log.evaluate_outcome()
# We can also plot the ROC curve (again small data count so it will not be a smooth curve)
# log.show_roc_curve()
# if we're happy with it we can deploy the algorithm, the scaler and variable input order will also be saved
log.deploy_model(description="trial to test the function", author="<NAME>",
organisation="Test", contact="<EMAIL>", file_name="trial.sav")
# we can then load the model
loaded_algorithm = pickle.load(open("trial.sav", 'rb'))
# we want to know what we need to put in and it's input order
# print(loaded_algorithm['input order'])
# We can scale new data
input_data = loaded_algorithm['scaler'].transform([[2, 34, 0, 0, 50, 1, 0, 1]])
# and we can make a new prediction with the scaled data
new_prediction = loaded_algorithm['model'].predict_proba(input_data)
print(new_prediction)
# print(log.model.intercept_)
# print(log.model.coef_)
# ++++++++++++ this is where we are extracting the matricies of the model but it's not working at the moment
print(np.dot(loaded_algorithm['model'].coef_, np.transpose(input_data)))
print("here is the main matricies")
print(loaded_algorithm['model'].coef_, np.transpose(input_data))
print(np.shape(loaded_algorithm['model'].coef_), np.shape(np.transpose(input_data)), log.model.intercept_)
# print(input_data)
| [
"numpy.shape",
"deployml.sklearn.LogisticRegressionBase",
"numpy.transpose",
"pandas.read_csv"
] | [((305, 339), 'pandas.read_csv', 'pd.read_csv', (['"""titanic_example.csv"""'], {}), "('titanic_example.csv')\n", (316, 339), True, 'import pandas as pd\n'), ((374, 398), 'deployml.sklearn.LogisticRegressionBase', 'LogisticRegressionBase', ([], {}), '()\n', (396, 398), False, 'from deployml.sklearn import LogisticRegressionBase\n'), ((2409, 2433), 'numpy.transpose', 'np.transpose', (['input_data'], {}), '(input_data)\n', (2421, 2433), True, 'import numpy as np\n'), ((2441, 2482), 'numpy.shape', 'np.shape', (["loaded_algorithm['model'].coef_"], {}), "(loaded_algorithm['model'].coef_)\n", (2449, 2482), True, 'import numpy as np\n'), ((2307, 2331), 'numpy.transpose', 'np.transpose', (['input_data'], {}), '(input_data)\n', (2319, 2331), True, 'import numpy as np\n'), ((2493, 2517), 'numpy.transpose', 'np.transpose', (['input_data'], {}), '(input_data)\n', (2505, 2517), True, 'import numpy as np\n')] |
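The `deploy_model` / `pickle.load` round trip above serialises a plain dict holding the estimator, the fitted scaler, and the expected feature order. A minimal sketch of that bundle pattern with stand-in objects, so no sklearn install is needed (the keys follow the ones accessed above; the feature names are hypothetical):

```python
import io
import pickle

bundle = {
    'model': {'coef': [0.1, -0.2]},            # stand-in for the trained estimator
    'scaler': {'min': [0.0], 'max': [80.0]},   # stand-in for the fitted min-max scaler
    'input order': ['Pclass', 'Age'],          # hypothetical feature ordering
    'description': 'trial to test the function',
}

buf = io.BytesIO()                 # in-memory stand-in for the trial.sav file
pickle.dump(bundle, buf)
buf.seek(0)
loaded = pickle.load(buf)       # the same dict comes back intact
```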
import numpy as np
import unittest
from hgail.policies.scheduling import ConstantIntervalScheduler
class TestConstantIntervalScheduler(unittest.TestCase):
def test_constant_interval_scheduler(self):
# k = 2
scheduler = ConstantIntervalScheduler(k=2)
scheduler.reset(dones=[True, True, True])
indicators = scheduler.should_update(observations=None)
np.testing.assert_array_equal(indicators, [True] * 3)
scheduler.reset(dones=[True, False, True])
indicators = scheduler.should_update(observations=None)
np.testing.assert_array_equal(indicators, [True, False, True])
scheduler.reset(dones=[False, False, False])
indicators = scheduler.should_update(observations=None)
np.testing.assert_array_equal(indicators, [False, True, False])
# k = inf
scheduler = ConstantIntervalScheduler()
scheduler.reset(dones=[True, True, True])
indicators = scheduler.should_update(observations=None)
np.testing.assert_array_equal(indicators, [True, True, True])
for _ in range(10):
indicators = scheduler.should_update(observations=None)
np.testing.assert_array_equal(indicators, [False, False, False])
if __name__ == '__main__':
    unittest.main()
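For reference, here is a minimal sketch of the behaviour these tests pin down: each agent keeps a step counter, `reset` zeroes the counters of done agents, and `should_update` fires whenever a counter is a multiple of `k`. This is an illustration consistent with the assertions above, not hgail's actual implementation:

```python
import numpy as np

# Minimal scheduler sketch consistent with the tests above (illustrative,
# not the real hgail class): per-agent counters reset on done and fire
# every k-th call to should_update.
class IntervalSchedulerSketch:
    def __init__(self, k=np.inf):
        self.k = k
        self.counters = None

    def reset(self, dones):
        dones = np.asarray(dones)
        if self.counters is None:
            self.counters = np.zeros(len(dones))
        self.counters[dones] = 0   # only done agents restart their interval

    def should_update(self, observations):
        indicators = self.counters % self.k == 0
        self.counters += 1
        return indicators

s = IntervalSchedulerSketch(k=2)
s.reset([True, True, True])
print(list(s.should_update(None)))   # [True, True, True]
s.reset([True, False, True])
print(list(s.should_update(None)))   # [True, False, True]
```

With the default `k=np.inf`, only the first call after a reset returns `True`, matching the second half of the test.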
# Authored by <NAME>
import sys
import numpy as np
import psi4
emitter_yaml = False
emitter_ruamel = False
try:
try:
import ruamel.yaml as ruamel
except ImportError:
import ruamel_yaml as ruamel
emitter_ruamel = True
except ImportError:
import yaml
emitter_yaml = True
preamble="""
"$schema": https://raw.githubusercontent.com/Microsoft/Quantum/master/Chemistry/Schema/broombridge-0.1.schema.json
"""
def extract_fields(mol, scf=None, scf_corr_en=None, fci=None, ccsd=None):
# get basis set of first atom
#basis = mol.basis_on_atom(0)
# set all atoms to same basis set
#mol.set_basis_all_atoms(basis)
# get the geometry in bohr units
geom = np.array(mol.geometry())
symm = mol.symmetry_from_input()
# energy and wavefunction from scf
if scf is None or scf_corr_en is None:
scf_en, scf_wfn = psi4.energy('scf', molecule=mol, return_wfn=True)
# energy offset
scf_corr_en = psi4.core.scalar_variable('CURRENT CORRELATION ENERGY')
else:
scf_en, scf_wfn = scf
# number of alpha & beta electrons
alpha = scf_wfn.nalpha()
beta = scf_wfn.nbeta()
# number of orbitals
orbitals = scf_wfn.nmo()
# energy and wavefunction of fci (exponential time)
if fci is not None:
fci_en, fci_wfn = fci
elif scf is not None:
# approximate fci with scf
fci_en, fci_wfn = scf
else:
fci_en, fci_wfn = psi4.energy('fci', molecule=mol, return_wfn=True)
if ccsd is not None:
ccsd_en, ccsd_wfn = ccsd
else:
ccsd_en, ccsd_wfn = psi4.energy('ccsd', molecule=mol, return_wfn=True)
# one electron integrals
mints = psi4.core.MintsHelper(ccsd_wfn.basisset())
one_elec = ccsd_wfn.H()
# turn it into MO basis
one_elec.transform(ccsd_wfn.Ca())
# two electron integral in MO basis
two_elec = mints.mo_eri(ccsd_wfn.Ca(), ccsd_wfn.Ca(), ccsd_wfn.Ca(), ccsd_wfn.Ca())
# turn them into numpy arrays
one_elec = np.array(one_elec)
two_elec = np.array(two_elec)
data = {}
data['format'] = {'version' : '0.1'}
data['bibliography'] = [{'url' : 'http://www.psicode.org/psi4manual/1.2/index.html'}]
data['generator'] = {'source' : 'psi4',
'version' : '1.2'}
skip_input_geometry = False
geometry = {
'coordinate_system': 'cartesian',
'units' : 'bohr', # for now all geometries are converted to bohr by default
'atoms' : [],
'symmetry' : symm
}
N, _ = geom.shape
for i in range(N):
geometry['atoms'].append({
'name' : mol.symbol(i),
'coords' : [geom.item((i, 0)), geom.item((i, 1)), geom.item((i, 2))]
})
# coulomb repulsion = nuclear repulsion energy
coulomb_repulsion = {
'units' : 'hartree',
'value' : mol.nuclear_repulsion_energy()
}
scf_energy = {
'units' : 'hartree',
'value' : scf_en
}
scf_energy_offset = {
'units' : 'hartree',
'value' : scf_corr_en
}
energy_offset = {
'units' : 'hartree',
'value' : scf_corr_en
}
one_electron_integrals = {
'units' : 'hartree',
'format' : 'sparse',
'values' : []
}
N, _ = one_elec.shape
for i in range(N):
for j in range(i+1):
if (i + j) % 2 == 0:
one_electron_integrals['values'].append([
i+1,
j+1,
one_elec.item((i, j))
])
two_electron_integrals = {
'units' : 'hartree',
'format' : 'sparse',
'index_convention' : 'mulliken',
'values' : []
}
N, _, _, _ = two_elec.shape
# mulliken index convention:
# if element with indices (i, j, k, l) is present, then indices
# (i, j, l, k), (j, i, k, l), (j, i, l, k), (k, l, i, j),
# (k, l, j, i), (l, k, j, i) are not present
for i in range(N):
for j in range(i+1):
for k in range(i+1):
for l in range(k+1):
if (i + j + k + l) % 2 == 0 and (i != k or l <= j):
two_electron_integrals['values'].append([
i+1,
j+1,
k+1,
l+1,
two_elec.item((i, j, k, l))
])
n_electrons_alpha = alpha
basis_set = {
'name' : 'unknown',
'type' : 'gaussian'
}
n_electrons_beta = beta
n_orbitals = orbitals
initial_state = None
ccsd_energy = ccsd_en
reader_mode = ""
excited_state_count = 1
excitation_energy = 0.0
fci_energy = {
'units' : 'hartree',
'value' : fci_en,
'upper' : fci_en+0.1,
'lower' : fci_en-0.1
}
hamiltonian = {'one_electron_integrals' : one_electron_integrals,
'two_electron_integrals' : two_electron_integrals}
integral_sets = [{"metadata": { 'molecule_name' : 'unknown'},
"geometry":geometry,
"basis_set":basis_set,
"coulomb_repulsion" : coulomb_repulsion,
"scf_energy" : scf_energy,
"scf_energy_offset" : scf_energy_offset,
"energy_offset" : energy_offset,
"fci_energy" : fci_energy,
"hamiltonian" : hamiltonian,
"n_orbitals" : n_orbitals,
"n_electrons" : n_electrons_alpha + n_electrons_beta
}]
if initial_state is not None:
integral_sets[-1]["initial_state_suggestions"] = initial_state
data['integral_sets'] = integral_sets
return data
def emitter_ruamel_func(mol, name=None, scf=None, scf_corr_en=None, fci=None, ccsd=None):
yaml = ruamel.YAML(typ="safe")
yaml.default_flow_style = False
with open(name, "w") as f:
f.write(preamble)
f.write('\n')
data = extract_fields(mol, scf, scf_corr_en, fci, ccsd)
yaml.dump(data, f)
def emitter_yaml_func(mol, name=None, scf=None, scf_corr_en=None, fci=None, ccsd=None):
with open(name, "w") as f:
f.write(preamble)
f.write('\n')
data = extract_fields(mol, scf, scf_corr_en, fci, ccsd)
yaml.dump(data, f, default_flow_style=False)
def to_broombridge(mol, name=None, scf=None, scf_corr_en=None, fci=None, ccsd=None):
assert emitter_yaml or emitter_ruamel, "Extraction failed: could not import YAML or RUAMEL packages."
if name is None:
name = 'out.yaml'
if emitter_yaml:
emitter_yaml_func(mol, name, scf, scf_corr_en, fci, ccsd)
elif emitter_ruamel:
emitter_ruamel_func(mol, name, scf, scf_corr_en, fci, ccsd)
else:
assert False, "Unreachable code"
print("output written to:", name)
"""
Linear algebra functions (matrix multiplication, vector length etc.)
"""
import numpy as np
def gramian(matrix):
"""
Parameters
----------
matrix : numpy.ndarray
A matrix
Returns
-------
numpy.ndarray
The Gramian of `matrix`.
"""
return np.transpose(matrix).dot(matrix)
def dot_product(vector1, vector2):
"""
Calculates a dot product of two vectors.
Parameters
----------
vector1, vector2 : numpy.ndarray
Two n by 1 matrices.
Returns
-------
float
The dot product of two matrices
"""
return np.dot(np.transpose(vector1), vector2)[0, 0]
def matrix_size(matrix):
"""
Parameters
----------
matrix : numpy.ndarray
A matrix.
Returns
-------
tuple of two integers
Size of a matrix. First integer is the number of rows.
"""
rows = len(matrix)
if rows > 0:
columns = len(matrix[0])
else:
columns = 0
return (rows, columns)
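A quick sanity check of what these helpers compute, using the same numpy expressions as the function bodies above:

```python
import numpy as np

# The helpers above are thin wrappers over numpy; check them by hand:
A = np.array([[1, 2], [3, 4]])
gram = np.transpose(A).dot(A)               # gramian(A) -> A^T A
v = np.array([[1], [2], [3]])
w = np.array([[4], [5], [6]])
dot = np.dot(np.transpose(v), w)[0, 0]      # dot_product(v, w)
print(gram.tolist())  # [[10, 14], [14, 20]]
print(dot)            # 1*4 + 2*5 + 3*6 = 32
```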
import numpy as np
import matplotlib.pyplot as plt
from Biological_Questions.Cell_Migration_Analysis.Calculate_Migration_Distance_Function import ExtractMigrationDistances
cell_ID_list, distance_list = ExtractMigrationDistances()
migration_data = []
for cell_ID, distances in zip(cell_ID_list, distance_list):
averages = [np.mean(distances), np.std(distances), np.median(distances), np.sum(distances), len(distances)]
migration_data.append([round(item, 4) for item in averages])
if np.mean(distances) > 5.00:
        print ("Cell_ID #{} migrated {} pixels on average (±{} st.dev). It lived for {} frames... Min = {}; max = {}"
.format(cell_ID, migration_data[-1][0], migration_data[-1][1],
len(distances), min(distances), max(distances)))
# Process the large list:
#mean_list, stdv_list, medn_list, sums_list, frms_list = [], [], [], [], []
data_list = [[] for _ in range(5)]
for mini_list in migration_data:
    for index in range(5):
        data_list[index].append(mini_list[index])
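The index loop above turns the per-cell rows into per-statistic columns; the same transpose can be written with `zip` (the rows shown are hypothetical `[mean, stdev, median, sum, n_frames]` records like those in `migration_data`):

```python
# Column-wise transpose of row records with zip; rows are hypothetical
# [mean, stdev, median, sum, n_frames] entries like migration_data's:
rows = [[1.0, 0.5, 0.9, 10.0, 10],
        [2.0, 0.7, 1.8, 40.0, 20]]
columns = [list(col) for col in zip(*rows)]
print(columns[0])  # the means column: [1.0, 2.0]
print(columns[4])  # the frame counts: [10, 20]
```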
# Plot the means on the histogram:
print (migration_data)
print (len(migration_data))
print (data_list)
print (len(data_list))
for mini in data_list:
print (mini)
a, b, c = plt.hist(data_list[0], bins=6, range=(0.5, 3.5))
plt.title("Histogram of the MEAN distance migrated by Gen #1 MDCK WT cells")
plt.xlabel("Mean distance migrated by the cell_ID [pixels]")
plt.ylabel("Frequency of occurrence")
plt.ylim(-10)
#plt.savefig(dr + "Outlier_{}_Boxplot.png".format(label), bbox_inches='tight')
plt.show()
plt.close()
print (a)
print (b)
print (c)
"""
a, b, c = plt.hist(data_list[3], bins=10, range=(0, 5))
plt.title("Histogram of the TOTAL distance migrated by Gen #1 MDCK WT cells")
plt.xlabel("Total distance migrated by the cell_ID [pixels]")
plt.ylabel("Frequency of occurrence")
plt.ylim(-10)
#plt.savefig(dr + "Outlier_{}_Boxplot.png".format(label), bbox_inches='tight')
plt.show()
plt.close()
print (a)
print (b)
print (c)
"""
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import numpy as np
n_iter=0
def group_weight(module):
group_decay = []
group_no_decay = []
for m in module.modules():
if isinstance(m, nn.Linear):
group_decay.append(m.weight)
if m.bias is not None:
group_no_decay.append(m.bias)
elif isinstance(m, nn.modules.conv._ConvNd):
group_decay.append(m.weight)
if m.bias is not None:
group_no_decay.append(m.bias)
elif isinstance(m, nn.modules.batchnorm._BatchNorm):
if m.weight is not None:
group_no_decay.append(m.weight)
if m.bias is not None:
group_no_decay.append(m.bias)
assert len(list(module.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
def update_ema_variables(model, ema_model, alpha, global_step):
# Use the true average until the exponential average is more correct
alpha = min(1 - 1 / (global_step + 1), alpha)
for ema_param, param in zip(ema_model.parameters(), model.parameters()):
        ema_param.data.mul_(alpha).add_(param.data, alpha=1 - alpha)
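The `min(1 - 1/(step + 1), alpha)` ramp means the EMA weights start as an exact copy of the student model and only later settle at the configured decay. A pure-python check of that schedule:

```python
# Pure-python check of the ramped EMA decay used above: at step 0 the
# effective alpha is 0 (the EMA copies the student exactly), and it
# approaches the configured decay as training progresses.
def effective_alpha(alpha, global_step):
    return min(1 - 1 / (global_step + 1), alpha)

print([effective_alpha(0.999, s) for s in (0, 1, 9, 999)])
# [0.0, 0.5, 0.9, 0.999]
```

This is why the EMA ("teacher") model is usable from the very first iteration instead of starting from random weights.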
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.initialized = False
self.val = None
self.avg = None
self.sum = None
self.count = None
def initialize(self, val, weight):
self.val = val
self.avg = val
self.sum = val * weight
self.count = weight
self.initialized = True
def update(self, val, weight=1):
if not self.initialized:
self.initialize(val, weight)
else:
self.add(val, weight)
def add(self, val, weight):
self.val = val
self.sum += val * weight
self.count += weight
self.avg = self.sum / self.count
def value(self):
return self.val
def average(self):
return self.avg
def IOU_Score(y_pred,y_val):
def IoUOld(a,b):
intersection = ((a==1) & (a==b)).sum()
union = ((a==1) | (b==1)).sum()
if union > 0:
return intersection / union
elif union == 0 and intersection == 0:
return 1
else:
return 0
y_pred=y_pred[:,1,:,:]#.view(batch_size,1,101,101)
t=0.5
IOU_list=[]
for j in range(y_pred.shape[0]):
y_pred_ = np.array(y_pred[j,:,:] > t, dtype=bool)
y_val_=np.array(y_val[j,:,:], dtype=bool)
IOU = IoUOld(y_pred_, y_val_)
IOU_list.append(IOU)
    # now we take different thresholds; each threshold determines
    # whether an IoU counts as a "true positive" or not
prec_list=[]
for IOU_t in np.arange(0.5, 1.0, 0.05):
        # get true positives, i.e. all examples where the IoU is larger than the threshold
        TP = np.sum(np.asarray(IOU_list) > IOU_t)
        # calculate the precision at this threshold by dividing by the total
        # number of examples (the reference writes the denominator as
        # TP + FP + FN, which reduces to the total count here since every
        # example is classified exactly once)
        Prec = TP / len(IOU_list)
prec_list.append(Prec)
return np.mean(prec_list)
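A quick numeric check of the IoU kernel inside `IOU_Score` on two tiny masks:

```python
import numpy as np

# IoU of two 2x2 boolean masks, computed exactly as IoUOld does above:
a = np.array([[True, True], [False, False]])
b = np.array([[True, False], [False, False]])
intersection = ((a == 1) & (a == b)).sum()  # 1 overlapping pixel
union = ((a == 1) | (b == 1)).sum()         # 2 pixels in either mask
print(intersection / union)  # 0.5
```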
#Main Training Function
from losses import lovasz_softmax,FocalLoss
from training_functions import IOU_Score
focal=FocalLoss(size_average=True)
def train(train_loader,segmentation_module,segmentation_ema,optimizer
,writer
,lovasz_scaling=0.1
,focal_scaling=0.9
,unsupervised_scaling=0.1
,ema_scaling=0.2
,non_ema_scaling=1
,second_batch_size=2
,train=True
,test=False
,writer_name_list=None
):
global n_iter
#Training Loop
cudnn.benchmark = True
lovasz_scaling=torch.tensor(lovasz_scaling).float().cuda()
focal_scaling=torch.tensor(focal_scaling).float().cuda()
unsupervised_scaling=torch.tensor(unsupervised_scaling).float().cuda()
ema_scaling=torch.tensor(ema_scaling).float().cuda()
non_ema_scaling=torch.tensor(non_ema_scaling).float().cuda()
#average meter for all the losses we keep track of.
ave_total_loss = AverageMeter() # Total Loss
ave_non_ema_loss = AverageMeter()
ave_ema_loss = AverageMeter()
ave_total_loss = AverageMeter()
ave_lovasz_loss = AverageMeter()
ave_focal_loss = AverageMeter()
ave_lovasz_loss_ema = AverageMeter()
ave_focal_loss_ema = AverageMeter()
ave_unsupervised_loss = AverageMeter()
ave_iou_score = AverageMeter()
if train==True:
segmentation_module.train()
segmentation_ema.train()
else:
segmentation_module.eval()
segmentation_ema.eval()
for batch_data in train_loader:
batch_data["img_data"]=batch_data["img_data"].cuda()
batch_data["seg_label"]=batch_data["seg_label"].cuda().long().squeeze()
#Normal Pred and Pred from the self ensembeled model
pred = segmentation_module(batch_data)
pred_ema = segmentation_ema(batch_data)
        #We don't want to gradient-descend into the EMA model
pred_ema=Variable(pred_ema.detach().data, requires_grad=False)
        ### UNSUPERVISED LOSS ####
unsupervised_loss = torch.mean((pred - pred_ema)**2).cuda()
### SUPERVISED LOSS ####
        #We just get rid of the unlabeled examples for the supervised loss!
pred=pred[:-second_batch_size,:,:]
pred_ema=pred_ema[:-second_batch_size,:,:]
batch_data["seg_label"]=batch_data["seg_label"][:-second_batch_size,:,:]
lovasz_loss=lovasz_softmax(pred, batch_data['seg_label'],ignore=-1,only_present=True).cuda()
focal_loss=focal(pred, batch_data['seg_label'],)
lovasz_loss_ema=lovasz_softmax(pred_ema, batch_data['seg_label'],ignore=-1,only_present=True).cuda()
focal_loss_ema=focal(pred_ema, batch_data['seg_label'],)
#### Loss Combinations #####
non_ema_loss=(lovasz_loss*lovasz_scaling+focal_loss*focal_scaling).cuda()
ema_loss=(lovasz_loss_ema*lovasz_scaling+focal_loss_ema*focal_scaling).cuda()
total_loss=(non_ema_loss*non_ema_scaling+ema_loss*ema_scaling+unsupervised_scaling*unsupervised_loss).cuda()
#Need to give it as softmaxes
pred = nn.functional.softmax(pred, dim=1)
iou_score=IOU_Score(pred,batch_data["seg_label"])
### BW ####
if train==True:
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
n_iter=n_iter+1
update_ema_variables(segmentation_module, segmentation_ema, 0.999, n_iter)
### WRITING STUFF #########
ave_non_ema_loss.update(non_ema_loss.data.item())
ave_ema_loss.update(ema_loss.data.item())
ave_total_loss.update(total_loss.data.item())
ave_lovasz_loss.update(lovasz_loss.data.item())
ave_focal_loss.update(focal_loss.data.item())
ave_lovasz_loss_ema.update(lovasz_loss_ema.data.item())
ave_focal_loss_ema.update(focal_loss_ema.data.item())
ave_unsupervised_loss.update(unsupervised_loss.data.item())
ave_iou_score.update(iou_score.item())
if test==True:
print(n_iter)
break
writer.add_scalar(writer_name_list[0], ave_non_ema_loss.average(), n_iter)
writer.add_scalar(writer_name_list[1], ave_ema_loss.average(), n_iter)
writer.add_scalar(writer_name_list[2], ave_total_loss.average(), n_iter)
writer.add_scalar(writer_name_list[3], ave_lovasz_loss.average(), n_iter)
writer.add_scalar(writer_name_list[4], ave_focal_loss.average(), n_iter)
writer.add_scalar(writer_name_list[5], ave_lovasz_loss_ema.average(), n_iter)
writer.add_scalar(writer_name_list[6], ave_focal_loss_ema.average(), n_iter)
writer.add_scalar(writer_name_list[7], ave_unsupervised_loss.average(), n_iter)
writer.add_scalar(writer_name_list[8], ave_iou_score.average(), n_iter)
if train==False:
return np.mean(ave_iou_score.average())
import pandas as pd
import numpy as np
from itertools import product
import datarail.experimental_design.edge_fingerprint as edge_fingerprint
import warnings
def get_boundary_cell_count(plate_dims, exclude_outer=1):
"""Get number of wells in outer or inner edges
Parameters
----------
plate_dims : array
dimensions of plate
Returns
-------
boundary_cell_count : int
number of wells in the edges
"""
boundary_cell_count = 2 * (plate_dims[0] + plate_dims[1] - 2)
if exclude_outer == 2:
boundary_cell_count += 2 * (plate_dims[0]-2 + plate_dims[1]-2 - 2)
return boundary_cell_count
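On a standard 384-well plate (16 x 24) the formula gives 76 wells for the outer ring, and excluding a second ring adds the wells of the next 14 x 22 perimeter for a total of 144:

```python
# Edge-well counts on a 16 x 24 plate, reproducing the formula above:
rows, cols = 16, 24
outer_ring = 2 * (rows + cols - 2)                      # 76 wells
two_rings = outer_ring + 2 * (rows - 2 + cols - 2 - 2)  # 76 + 68 = 144
print(outer_ring, two_rings)  # 76 144
```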
def set_dosing(max_dose, num_doses, num_replicates=1, step='half-log',
exclude_doses=None):
"""Returns list of doses (micromolar) at half log intervals
Parameters
----------
max_dose : int
highest dose in the dose range
num_doses : int
number of doses in the dose range. Maximum is set to 9.
If num doses < 9, then doses will be removed starting from
lowest moving to higher doses.
num_replicates : int
number of times the dose range is replicated on the plate
step : str
interval between doses. Default is 'half-log'. Options include
'half-log', 'onethird-log', 'quarter-log', '1:n' where n is the
dilution fold.
Returns
-------
dose_range : list of floats
"""
if step == 'half-log':
dose_range = max_dose * 1e-4 * np.logspace(0, 4, 9)
elif step == 'onethird-log':
dose_range = max_dose * 1e-4 * np.logspace(0, 4, 7)
elif step == 'quarter-log':
dose_range = max_dose * 1e-4 * np.logspace(0, 4, 5)
else:
base_dilution = int(step.split(':')[1])
dose_range = [max_dose/(base_dilution ** x) for x in range(num_doses)]
dose_range = [round(s, 4) for s in dose_range]
dose_range = sorted(list(set(dose_range)))[::-1]
dose_range = dose_range[:num_doses]
if num_replicates > 1:
dr = list(set(dose_range))
for i in range(1, num_replicates):
dose_range.extend(dr)
if exclude_doses is not None:
exb = [False if s+1 in exclude_doses else True for s in range(num_doses)]
dose_range = np.array(dose_range[::-1])
dose_range = list(dose_range[exb])
dose_range = dose_range[::-1]
return dose_range
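The `'half-log'` branch spans four decades below `max_dose` via `np.logspace(0, 4, 9)`; for example, with `max_dose = 10` the rounded, deduplicated, descending grid is:

```python
import numpy as np

# Half-log dose grid for max_dose = 10 uM, as built by set_dosing above:
max_dose = 10.0
raw = max_dose * 1e-4 * np.logspace(0, 4, 9)
doses = sorted({round(d, 4) for d in raw}, reverse=True)
print(doses)
# [10.0, 3.1623, 1.0, 0.3162, 0.1, 0.0316, 0.01, 0.0032, 0.001]
```

With `num_doses < 9`, slicing `doses[:num_doses]` keeps the highest doses and drops the lowest, which is the trimming behaviour described in the docstring.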
def set_combo_dosing(max_doses, num_doses, eq=False, num_replicates=1):
"""Returns combination of doses for 2 or more agents
Parameters
----------
max_doses : list of int
highest doses in the dose range for each agent
num_doses : list of int
number of doses in the dose range for each agent. Maximum is set to 9.
If num doses < 9, then doses will be removed starting from
lowest moving to higher doses.
num_replicates : list of int
list of number of times the dose range is replicated on the plate
for each agent
Returns
-------
dose_range : list of tuples
each tuple corresponds to one combination of doses for 2 or more agents
"""
dose_lists = []
for md, nd in zip(max_doses, num_doses):
dose_range = md * 1e-4 * np.logspace(0, 4, 9)
dose_range = sorted(list(set(dose_range)))[::-1]
dose_range = [round(s, 4) for s in dose_range]
dose_range = dose_range[:nd]
dose_lists.append(dose_range)
combo_doses = list(product(*dose_lists))
if num_replicates > 1:
cd = list(set(combo_doses))
for i in range(1, num_replicates):
combo_doses.extend(cd)
if eq:
combo_doses = [c for c in combo_doses if len(set(c)) == 1]
return combo_doses
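Combination dosing is the cartesian product of the per-agent dose grids; with two hypothetical agents of two doses each:

```python
from itertools import product

# Cartesian product of two hypothetical per-agent dose lists, as used by
# set_combo_dosing above:
doses_a = [1.0, 0.1]
doses_b = [2.0, 0.2]
combos = list(product(doses_a, doses_b))
print(combos)  # [(1.0, 2.0), (1.0, 0.2), (0.1, 2.0), (0.1, 0.2)]
```

The `eq=True` option then keeps only the tuples where all agents are at the same concentration, i.e. the equimolar diagonal of this grid.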
def exclude_treatment(df, drug, doses, type='replace'):
df2 = df.copy()
if type == 'replace':
df2.loc[((df2.agent == drug) & (df2.concentration.isin(doses))), 'agent'] = 'DMSO'
df2.loc[((df2.agent == drug) & (df2.concentration.isin(doses))), 'role'] = 'negative_control'
elif type=='remove':
df2 = df2[~((df2.agent == drug) & (df2.concentration.isin(doses)))].copy()
return df2
def construct_well_level_df(input_file, plate_dims=[16, 24],
exclude_outer=1, cell_lines=[]):
"""Generates long table of doses and drugs mapped to wells.
    The input file should be a broad description of the planned experimental design
and should contain the following columns:
* agent : column listing the names of drugs
including positve and negative controls
* max_dose__um : column listing the highest dose for each agent
* num_doses : column listing the number of doses for each agent
* role : column listing the intended role for each agent
'treatment', 'positive_control', 'negative_control', or 'fingerprint'
* num_replicates : column listing number of times a drug's
dosing scheme is replicated on the same plate
* exclude_doses : column (OPTIONAL) listing doses to be excluded
for a given drug
* step : column listing option for interval between doses.
Parameters
----------
input_file : str
csv or tsv file name of the input file
plate_dims : list of int
dimensions of the physical plate
    exclude_outer : int
       number of outer wells to exclude; default set to 1
Returns
-------
df_well: pandas dataframe
dataframe mapping wells to drug and dose response
"""
df_spec = pd.read_csv(input_file)
df_spec = df_spec.fillna('')
if 'exclude_doses' not in df_spec.columns.tolist():
df_spec['exclude_doses'] = ['']*len(df_spec)
df_spec['exclude_doses'] = df_spec['exclude_doses'].fillna('')
drugs, doses, role, identifier = [], [], [], []
df_tr = df_spec[df_spec.role == 'treatment'].copy()
for drug in df_tr.agent.tolist():
max_dose = df_tr[df_tr.agent == drug]['max_dose__um'].values[0]
max_dose = str(max_dose).split(',')
max_dose = [float(mx) for mx in max_dose]
num_doses = df_tr[df_tr.agent == drug]['num_doses'].values[0]
num_doses = str(num_doses).split(',')
num_doses = [int(nd) for nd in num_doses]
num_replicates = df_tr[df_tr.agent == drug][
'num_replicates'].values[0]
exclude_doses = df_tr[df_tr.agent == drug][
'exclude_doses'].values[0]
if exclude_doses != '':
exclude_doses = [int(s) for s in exclude_doses.split(',')]
else:
exclude_doses = None
if 'dose_interval' in df_tr.columns.tolist():
step = df_tr[df_tr.agent == drug]['dose_interval'].values[0]
else:
step = 'half-log'
if len(max_dose) == 1:
max_dose = max_dose[0]
num_doses = num_doses[0]
dose_range = set_dosing(max_dose, num_doses, num_replicates, step=step,
exclude_doses=exclude_doses)
# if exclude_doses == '':
# dose_range = dose_range[:num_doses]
# else:
# exclude_doses = [float(s) for s in exclude_doses.split(',')]
# dose_range =[d for d in dose_range if d not in exclude_doses]
else:
eqm = df_tr[df_tr.agent == drug]['equivalent'].values[0]
dose_range = set_combo_dosing(max_dose, num_doses,
eq=bool(eqm),
num_replicates=num_replicates)
doses += dose_range
drugs += [drug] * len(dose_range)
role += ['treatment'] * len(dose_range)
identifier += ["%s_%d" % (drug, num) for num in range(len(dose_range))]
if 'positive_control' in df_spec.role.unique():
dfp = df_spec[df_spec.role == 'positive_control'].copy()
for drug in dfp.agent.tolist():
max_dose = dfp[dfp.agent == drug]['max_dose__um'].values[0]
num_replicates = int(dfp[dfp.agent == drug][
'num_replicates'].values[0])
doses += [max_dose] * num_replicates
drugs += [drug] * num_replicates
role += ['positive_control'] * num_replicates
identifier += ["%s_%d" % (drug, num)
for num in range(num_replicates)]
else:
warnings.warn(
'Experimental design does not have positive_controls')
# num_outer_wells = get_boundary_cell_count(plate_dims, exclude_outer)
num_wells_per_cell_line = wells_per_cell_line(cell_lines, exclude_outer)
# num_available_wells = (plate_dims[0] * plate_dims[1]) - num_outer_wells
num_treatment_wells = len(doses)
# if num_available_wells < num_treatment_wells:
# warnings.warn('Number of treatment wells required (%d)'
# 'exceed available wells (%d)' % (
# num_treatment_wells, num_available_wells))
if num_wells_per_cell_line < num_treatment_wells:
        warnings.warn('Number of treatment wells per cell line required (%d) '
                      'exceeds available wells per cell line (%d)' % (
                          num_treatment_wells, num_wells_per_cell_line))
df_well = pd.DataFrame(list(zip(drugs, doses, role, identifier)),
columns=['agent', 'concentration',
'role', 'identifier'])
return df_well
def add_negative_control(df, control_name='DMSO',
plate_dims=[16, 24], exclude_outer=1,
cell_lines=[]):
"""Assigns negative control agent to untreated wells
Parameters
----------
df : pandas dataframe
well level metadata with specification of agents and concentration
control_name : str
name of control agent; default is DMSO
plate_dims : list of int
dimension of physical plate
exclude_outer : int
number of outer well layers to be excluded
Returns
-------
df_well : pandas dataframe
well level metadata with specification of both agents and
negative control
"""
num_treatment_wells = len(df)
# num_outer_wells = get_boundary_cell_count(plate_dims, exclude_outer)
# num_available_wells = (plate_dims[0] * plate_dims[1]) - num_outer_wells
num_wells_per_cell_line = wells_per_cell_line(cell_lines, exclude_outer)
num_nc_wells = num_wells_per_cell_line - num_treatment_wells
if num_nc_wells < 8:
print("")
        warnings.warn(
            'Insufficient number of wells allotted for negative controls')
        print("At least 8 wells have to be assigned for negative controls,"
              " recommended number is 12, user has currently allotted %d wells"
              " for negative_controls" % num_nc_wells)
role = df.role.tolist()
doses = df.concentration.tolist()
drugs = df.agent.tolist()
identifiers = df.identifier.tolist()
role += ['negative_control'] * num_nc_wells
doses += [np.nan] * num_nc_wells
drugs += [control_name] * num_nc_wells
identifiers += [control_name] * num_nc_wells
df_well = pd.DataFrame(list(zip(drugs, doses, role, identifiers)),
columns=['agent', 'concentration', 'role',
'identifier'])
return df_well
def assign_fingerprint_wells(fingerprint, treatment, dose):
"""Returns a set of wells along the edge that serve as barcode for
the plate based on the fingerprint word
Parameters
----------
fingerprint : str
treatment : str
the drug used to treat fingerprint wells
dose : float
the dose of drug treatment
Returns
-------
df : pandas dataframe
table mapping wells encoding the fingerprint to treatment
"""
fingerprint_wells = edge_fingerprint.encode_fingerprint(fingerprint)
treatment_list = [treatment] * len(fingerprint_wells)
dose_list = [dose] * len(fingerprint_wells)
role = ['fingerprint'] * len(fingerprint_wells)
identifier = ["%s_%d" % (treatment, d)
for d in range(len(fingerprint_wells))]
df = pd.DataFrame(list(zip(fingerprint_wells, treatment_list,
dose_list, role, identifier)),
columns=['well', 'agent', 'concentration',
'role', 'identifier'])
return df
def define_treatment_wells(exclude_outer=1, plate_dims=[16, 24]):
"""Defines set of inner wells to be used for treatments
Parameters
----------
exclude_outer : int
        defines outer well columns and rows to exclude
plate_dims : list of int
Returns
-------
tr_wells, list(set(exclude_wells)): tuple of lists
lists of treatment wells and outer wells
"""
cols = ["%02d" % s for s in range(1, plate_dims[1]+1)]
rows = [chr(65+n) for n in range(plate_dims[0])]
if exclude_outer:
rows = rows[exclude_outer:-exclude_outer]
cols = cols[exclude_outer:-exclude_outer]
tr_wells = []
for row in rows:
for col in cols:
tr_wells.append("%s%s" % (row, col))
return tr_wells
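`define_treatment_wells` strips `exclude_outer` layers of rows and columns from each edge of the plate; the resulting inner-well count can be sanity-checked with a small standalone sketch (the helper name below is ours, not part of the source):

```python
def inner_well_count(plate_dims=(16, 24), exclude_outer=1):
    # Wells remaining after stripping `exclude_outer` rows/columns
    # from every edge of the plate.
    rows = plate_dims[0] - 2 * exclude_outer
    cols = plate_dims[1] - 2 * exclude_outer
    return rows * cols

print(inner_well_count())                 # 308 inner wells on a 384-well plate
print(inner_well_count(exclude_outer=2))  # 240
```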
def randomize_wells(df_plate,
fingerprint_drug=None, fingerprint_dose=1,
exclude_outer=1, plate_dims=[16, 24]):
"""Returns dataframe with randomized wells for all plate replicates
Parameters
-----------
df : pandas dataframe
plate level input metadata file
fingerprint_drug : str
drug used for treating fingerprint wells
fingerprint_dose : int
dose of drug used for fingerprint wells treatment
exclude_outer : int
number of outer well layers to exclude
plate_dims : list of int
Returns
-------
dfr : pandas dataframe
drug and dose mapped to randomized wells
"""
cols = ["%02d" % s for s in range(1, plate_dims[1]+1)]
rows = [chr(65+n) for n in range(plate_dims[0])]
wells = []
for row in rows:
for col in cols:
wells.append("%s%s" % (row, col))
df_list = []
for plate_num in range(len(df_plate)):
barcode = df_plate.loc[plate_num, 'barcode']
cell_lines = df_plate.loc[plate_num, 'cell_line']
timepoint = df_plate.loc[plate_num, 'timepoint']
randomization_num = df_plate.loc[plate_num, 'randomization_scheme']
well_input_file = df_plate.loc[plate_num, 'well_level_input']
timepoint = str(timepoint)
cell_lines = cell_lines.split(', ')
randomization_num = int(randomization_num)
if timepoint == 'time0_ctrl':
dfw = pd.DataFrame()
elif str(timepoint) == '0':
dfw = pd.DataFrame()
else:
dfw = construct_well_level_df(well_input_file,
cell_lines=cell_lines,
exclude_outer=exclude_outer)
dfw = add_negative_control(dfw, cell_lines=cell_lines,
exclude_outer=exclude_outer)
df = randomize_per_line(dfw, randomization_num,
exclude_outer, cell_lines)
if 'fingerprint' in df_plate:
fd = df_plate.loc[plate_num, 'fingerprint']
df_fp = assign_fingerprint_wells(fd,
fingerprint_drug,
fingerprint_dose)
df_fp.index = df_fp['well']
dfc = pd.concat([df_fp, df])
else:
dfc = df.copy()
dfc['barcode'] = [barcode] * len(dfc)
dfc['timepoint'] = [timepoint] * len(dfc)
remainder_wells = [w for w in wells if w not in dfc.well.tolist()]
dfo = pd.DataFrame(list(zip(remainder_wells,
[barcode] * len(remainder_wells),
[timepoint] * len(remainder_wells))),
columns=['well', 'barcode', 'timepoint'])
dfc2 = pd.concat([dfo, dfc])
dfc2 = dfc2.sort_values(['well'])
df_list.append(dfc2)
dfr = pd.concat(df_list)
dfr[['agent', 'role']] = dfr[['agent', 'role']].where(pd.notnull(dfr), '')
dfr['concentration'] = dfr['concentration'].fillna(0)
dfr.index = range(len(dfr))
max_agents = np.max([len(a.split(',')) for a in dfr.agent.tolist()])
if max_agents > 1:
agent_columns = ['agent%d' % ma for ma in range(1, max_agents+1)]
dfr['agent'] = dfr['agent'].str.replace(' ', '')
dfr[agent_columns] = dfr.agent.str.split(',', expand=True)
dfr[agent_columns] = dfr[agent_columns].where(pd.notnull(dfr), '')
del dfr['agent']
concentration_columns = ['concentration%d' % ma
for ma in range(1, max_agents+1)]
dfr[concentration_columns] = dfr['concentration'].apply(pd.Series)
del dfr['concentration']
return dfr
def wells_per_cell_line(cell_lines, exclude_outer):
"""Computes number of wells available per cell line
Parameters
----------
cell_lines : list
list of cell lines on a plate
exclude_outer : int
number of outer well layers to exclude
Returns
-------
avail_wells_per_line : int
number of wells available per cell line
"""
if len(cell_lines) <= 1:
avail_wells = len(define_treatment_wells(exclude_outer=exclude_outer))
avail_wells_per_line = avail_wells
if len(cell_lines) > 1:
avail_wells = len(define_treatment_wells(exclude_outer=2))
avail_wells_per_line = avail_wells / len(cell_lines)
return int(avail_wells_per_line)
def chunks(l, n):
"""splits list l into chunks of size n
Parameters
----------
l : list
list of well names
n : int
number of wells available per cell line
Returns
-------
wells_per_line : list of lists
        length of the list equals the number of cell lines
"""
n = max(1, n)
wells_per_line = list(l[i:i+n] for i in range(0, len(l), n))
return wells_per_line
def randomize_per_line(df, rand_num, exclude_outer,
cell_lines=[''], plate_dims=[16, 24]):
"""Takes initial drug layout schema and applies the layout pattern
to equal portions of the plate based on the number of cell lines.
Wells are randomized if rand_num > 1
Parameters
----------
df : pandas dataframe
initial drug layout schema
rand_num : int
        seed to be used for randomization
cell_lines : list of str
cell lines on the plate
plate_dims : list of int
physical dimensions of the plate
Returns
-------
dfr : pandas dataframe
drug schema layout with layout repeated per cell line and assigned to
equal portions of the plate. Wells are assigned and
randomized if rand num > 1.
"""
if len(cell_lines) > 1:
exclude_outer = 2
avail_wells_per_line = wells_per_cell_line(cell_lines, exclude_outer)
tr_wells = define_treatment_wells(exclude_outer, plate_dims)
tr_wells_per_cell_line = chunks(tr_wells, avail_wells_per_line)
dfrs = []
for i, cell_line in enumerate(cell_lines):
dfc = df.copy()
dfc['well'] = tr_wells_per_cell_line[i]
ordered_wells = dfc.well.tolist()
if rand_num > 0:
np.random.seed(rand_num)
randomized_wells = np.random.choice(ordered_wells,
size=len(ordered_wells),
replace=False)
dfc['well'] = randomized_wells
dfc.index = dfc['well']
dfc['cell_line'] = [cell_line] * len(dfc)
dfrs.append(dfc)
dfr = pd.concat(dfrs)
return dfr
| [
"pandas.read_csv",
"pandas.DataFrame",
"itertools.product",
"numpy.array",
"datarail.experimental_design.edge_fingerprint.encode_fingerprint",
"numpy.random.seed",
"warnings.warn",
"numpy.logspace",
"pandas.notnull",
"pandas.concat"
] | [((5520, 5543), 'pandas.read_csv', 'pd.read_csv', (['input_file'], {}), '(input_file)\n', (5531, 5543), True, 'import pandas as pd\n'), ((11837, 11885), 'datarail.experimental_design.edge_fingerprint.encode_fingerprint', 'edge_fingerprint.encode_fingerprint', (['fingerprint'], {}), '(fingerprint)\n', (11872, 11885), True, 'import datarail.experimental_design.edge_fingerprint as edge_fingerprint\n'), ((16159, 16177), 'pandas.concat', 'pd.concat', (['df_list'], {}), '(df_list)\n', (16168, 16177), True, 'import pandas as pd\n'), ((19810, 19825), 'pandas.concat', 'pd.concat', (['dfrs'], {}), '(dfrs)\n', (19819, 19825), True, 'import pandas as pd\n'), ((2280, 2306), 'numpy.array', 'np.array', (['dose_range[::-1]'], {}), '(dose_range[::-1])\n', (2288, 2306), True, 'import numpy as np\n'), ((3481, 3501), 'itertools.product', 'product', (['*dose_lists'], {}), '(*dose_lists)\n', (3488, 3501), False, 'from itertools import product\n'), ((8343, 8411), 'warnings.warn', 'warnings.warn', (['"""Experimental design does not have positive_controls"""'], {}), "('Experimental design does not have positive_controls')\n", (8356, 8411), False, 'import warnings\n'), ((9000, 9170), 'warnings.warn', 'warnings.warn', (["('Number of treatment wells per cell line required (%d)exceed available wells per cell line (%d)'\n % (num_treatment_wells, num_wells_per_cell_line))"], {}), "(\n 'Number of treatment wells per cell line required (%d)exceed available wells per cell line (%d)'\n % (num_treatment_wells, num_wells_per_cell_line))\n", (9013, 9170), False, 'import warnings\n'), ((10512, 10586), 'warnings.warn', 'warnings.warn', (['"""Insufficent number of wells alloted for negative controls"""'], {}), "('Insufficent number of wells alloted for negative controls')\n", (10525, 10586), False, 'import warnings\n'), ((16056, 16077), 'pandas.concat', 'pd.concat', (['[dfo, dfc]'], {}), '([dfo, dfc])\n', (16065, 16077), True, 'import pandas as pd\n'), ((16236, 16251), 'pandas.notnull', 'pd.notnull', 
(['dfr'], {}), '(dfr)\n', (16246, 16251), True, 'import pandas as pd\n'), ((1509, 1529), 'numpy.logspace', 'np.logspace', (['(0)', '(4)', '(9)'], {}), '(0, 4, 9)\n', (1520, 1529), True, 'import numpy as np\n'), ((3250, 3270), 'numpy.logspace', 'np.logspace', (['(0)', '(4)', '(9)'], {}), '(0, 4, 9)\n', (3261, 3270), True, 'import numpy as np\n'), ((14657, 14671), 'pandas.DataFrame', 'pd.DataFrame', ([], {}), '()\n', (14669, 14671), True, 'import pandas as pd\n'), ((15539, 15561), 'pandas.concat', 'pd.concat', (['[df_fp, df]'], {}), '([df_fp, df])\n', (15548, 15561), True, 'import pandas as pd\n'), ((16695, 16710), 'pandas.notnull', 'pd.notnull', (['dfr'], {}), '(dfr)\n', (16705, 16710), True, 'import pandas as pd\n'), ((19426, 19450), 'numpy.random.seed', 'np.random.seed', (['rand_num'], {}), '(rand_num)\n', (19440, 19450), True, 'import numpy as np\n'), ((1602, 1622), 'numpy.logspace', 'np.logspace', (['(0)', '(4)', '(7)'], {}), '(0, 4, 7)\n', (1613, 1622), True, 'import numpy as np\n'), ((14726, 14740), 'pandas.DataFrame', 'pd.DataFrame', ([], {}), '()\n', (14738, 14740), True, 'import pandas as pd\n'), ((1694, 1714), 'numpy.logspace', 'np.logspace', (['(0)', '(4)', '(5)'], {}), '(0, 4, 5)\n', (1705, 1714), True, 'import numpy as np\n')] |
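The plate-design code above builds dose ranges with `set_dosing(..., step='half-log')`, which is defined elsewhere in that module. Assuming the `'half-log'` step means a 10^0.5 (~3.16-fold) dilution between consecutive doses, a minimal sketch of such a dose range looks like:

```python
import numpy as np

def half_log_doses(max_dose, num_doses, num_replicates=1):
    # Each dose is 10**-0.5 (~3.16x) below the previous one, starting
    # from max_dose; replicates repeat every dose.
    doses = max_dose * 10.0 ** (-0.5 * np.arange(num_doses))
    return sorted(np.repeat(doses, num_replicates))

print(half_log_doses(10.0, 4))  # [0.316..., 1.0, 3.162..., 10.0]
```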
import os
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"
import numpy as np
import pyscf
import pyqmc.multislater
import pyscf.hci
def avg(vec):
    nblock = vec.shape[0]
    mean = np.mean(vec, axis=0)
    err = np.std(vec, axis=0) / np.sqrt(nblock)
    return mean, err
def run_test():
mol = pyscf.gto.M(
atom="O 0. 0. 0.; H 0. 0. 2.0",
basis="ccecpccpvtz",
ecp="ccecp",
unit="bohr",
charge=-1,
)
mf = pyscf.scf.RHF(mol)
mf.kernel()
e_hf = mf.energy_tot()
cisolver = pyscf.hci.SCI(mol)
cisolver.select_cutoff = 0.1
nmo = mf.mo_coeff.shape[1]
nelec = mol.nelec
h1 = mf.mo_coeff.T.dot(mf.get_hcore()).dot(mf.mo_coeff)
h2 = pyscf.ao2mo.full(mol, mf.mo_coeff)
e, civec = cisolver.kernel(h1, h2, nmo, nelec, verbose=4)
cisolver.ci = civec[0]
ci_energy = mf.energy_nuc() + e
tol = 0.1
configs = pyqmc.initial_guess(mol, 1000)
wf = pyqmc.multislater.MultiSlater(mol, mf, cisolver, tol=tol)
data, configs = pyqmc.vmc(
wf,
configs,
nblocks=40,
verbose=True,
accumulators={"energy": pyqmc.EnergyAccumulator(mol)},
)
en, err = avg(data["energytotal"][1:])
nsigma = 4
assert en - nsigma * err < e_hf
assert en + nsigma * err > ci_energy
if __name__ == "__main__":
run_test()
| [
"numpy.mean",
"numpy.sqrt",
"pyscf.hci.SCI",
"pyscf.gto.M",
"pyscf.ao2mo.full",
"numpy.std",
"pyscf.scf.RHF"
] | [((249, 269), 'numpy.mean', 'np.mean', (['vec'], {'axis': '(0)'}), '(vec, axis=0)\n', (256, 269), True, 'import numpy as np\n'), ((280, 299), 'numpy.std', 'np.std', (['vec'], {'axis': '(0)'}), '(vec, axis=0)\n', (286, 299), True, 'import numpy as np\n'), ((366, 472), 'pyscf.gto.M', 'pyscf.gto.M', ([], {'atom': '"""O 0. 0. 0.; H 0. 0. 2.0"""', 'basis': '"""ccecpccpvtz"""', 'ecp': '"""ccecp"""', 'unit': '"""bohr"""', 'charge': '(-1)'}), "(atom='O 0. 0. 0.; H 0. 0. 2.0', basis='ccecpccpvtz', ecp=\n 'ccecp', unit='bohr', charge=-1)\n", (377, 472), False, 'import pyscf\n'), ((524, 542), 'pyscf.scf.RHF', 'pyscf.scf.RHF', (['mol'], {}), '(mol)\n', (537, 542), False, 'import pyscf\n'), ((601, 619), 'pyscf.hci.SCI', 'pyscf.hci.SCI', (['mol'], {}), '(mol)\n', (614, 619), False, 'import pyscf\n'), ((775, 809), 'pyscf.ao2mo.full', 'pyscf.ao2mo.full', (['mol', 'mf.mo_coeff'], {}), '(mol, mf.mo_coeff)\n', (791, 809), False, 'import pyscf\n'), ((322, 337), 'numpy.sqrt', 'np.sqrt', (['nblock'], {}), '(nblock)\n', (329, 337), True, 'import numpy as np\n')] |
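The `avg` helper in the test above averages over VMC blocks and reports the standard error of the block means. The same idea, generalized to re-blocking a correlated time series, can be sketched as (the function name and test data are ours):

```python
import numpy as np

def block_average(samples, nblocks):
    # Average within blocks first; the scatter of the block means gives a
    # standard error that is robust to short-range autocorrelation.
    blocks = np.array_split(np.asarray(samples), nblocks)
    means = np.array([b.mean() for b in blocks])
    return means.mean(), means.std(ddof=1) / np.sqrt(nblocks)

rng = np.random.default_rng(0)
mean, err = block_average(rng.normal(1.0, 0.1, size=4000), nblocks=40)
```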
"""
Evaluate NPL posterior samples' predictive performance and runtime
"""
import numpy as np
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from tqdm import tqdm
import scipy as sp
import importlib
import npl.sk_gaussian_mixture as skgm
from npl.evaluate import gmm_ll as gll
from sklearn.mixture.gaussian_mixture import _compute_precision_cholesky
from joblib import Parallel, delayed
def IS_lppd(y,pi,mu,sigma,K,logweights_is): #calculate posterior predictive log-likelihood of the test set
model = skgm.GaussianMixture(K, covariance_type = 'diag')
B = np.shape(mu)[0]
N_test = np.shape(y)[0]
ll_test = np.zeros(N_test)
model.fit(y,np.ones(N_test))
for i in tqdm(range(B)):
model.means_ = mu[i,:,:]
model.covariances_ = sigma[i,:]**2
model.precisions_ = 1/(sigma[i,:]**2)
model.weights_ = pi[i,:]
model.precisions_cholesky_ = _compute_precision_cholesky(model.covariances_, model.covariance_type)
if i ==0:
ll_test = model.score_lppd(y) + logweights_is[i]
else:
ll_test = np.logaddexp(ll_test, model.score_lppd(y) + logweights_is[i])
lppd_test = np.sum(ll_test)
return lppd_test
def eval_IS(y_test,N_test,K,i):
par_is = pd.read_pickle('./parameters/par_IS_B{}_r{}'.format(10000000,i))
pi_is = np.array(par_is['pi'])
mu_is = np.array(par_is['mu'])
sigma_is = np.array(par_is['sigma'])
logweights_is = par_is['logweights_is']
#Normalize weights
logweights_is -= sp.special.logsumexp(logweights_is)
lppd = IS_lppd(y_test,pi_is,mu_is,sigma_is,K,logweights_is)
ESS = 1/np.exp(sp.special.logsumexp(2*logweights_is))
time = par_is['time']
return np.sum(lppd),ESS,time
def load_data(method,seed):
if method == 'RRNPL_IS' or method =='IS':
gmm_data_test = np.load('./sim_data_plot/gmm_data_test_sep.npy',allow_pickle=True).item()
N_test = gmm_data_test['N']
K = gmm_data_test['K']
y_test = gmm_data_test['y'].reshape(N_test)
else:
#load test data
gmm_data_test = np.load('./sim_data/gmm_data_test_insep_seed{}.npy'.format(seed),allow_pickle = True).item()
#Extract parameters from data
N_test = gmm_data_test['N']
K = gmm_data_test['K']
y_test = gmm_data_test['y'].reshape(N_test)
return y_test,N_test,K
def load_posterior(method,type,seed,K):
logweights_is = 0
if method == 'RRNPL_IS':
par = pd.read_pickle('./parameters/par_bb_{}_random_repeat_parallel_rep{}_B{}_plot{}'.format(type,10,2000,seed-100))
pi =np.array(par['pi'])
mu =np.array(par['mu'])
sigma = np.array(par[['sigma']][0])
time = par['time']
elif method == 'DPNPL':
par = pd.read_pickle('./parameters/par_bb_{}_random_repeat_parallel_alpha10_rep{}_B{}_seed{}'.format(type,10,2000,seed))
pi =np.array(par['pi'])
mu =np.array(par['mu'])
sigma = np.array(par[['sigma']][0])
time = par['time']
if method == 'MDPNPL':
par = pd.read_pickle('./parameters/par_bb_{}_random_repeat_parallel_alpha1000_rep{}_B{}_seed{}_MDP'.format(type,10,2000,seed))
pi =np.array(par['pi'])
mu =np.array(par['mu'])
sigma = np.array(par[['sigma']][0])
time = par['time']
elif method == 'NUTS':
D = 1
par = pd.read_pickle('./parameters/par_nuts_{}_seed{}'.format(type,seed))
pi =par.iloc[:,3:K+3].values
mu =par.iloc[:,3+K: 3+(K*(D+1))].values.reshape(2000,D,K).transpose(0,2,1)
sigma = par.iloc[:,3+K*(D+1) :3+ K*(2*D+1)].values.reshape(2000,D,K).transpose(0,2,1)
time = np.load('./parameters/time_nuts_{}_seed{}.npy'.format(type,seed),allow_pickle = True)
elif method == 'ADVI':
D = 1
par = pd.read_pickle('./parameters/par_advi_{}_seed{}'.format(type,seed))
pi =par.iloc[:,0:K].values
mu =par.iloc[:,K: (K*(D+1))].values.reshape(2000,D,K).transpose(0,2,1)
sigma = par.iloc[:,K*(D+1) : K*(2*D+1)].values.reshape(2000,D,K).transpose(0,2,1)
time = np.load('./parameters/time_advi_{}_seed{}.npy'.format(type,seed),allow_pickle = True)
return pi,mu,sigma,time, logweights_is
def eval(method):
if method == 'RRNPL_IS' or method =='IS':
rep = 10
type = 'sep'
else:
rep = 30
type = 'insep'
ll_test = np.zeros(rep)
time = np.zeros(rep)
if method != 'IS':
for i in range(rep):
seed = 100+i
#Extract parameters from data
y_test,N_test,K = load_data(method,seed)
pi,mu,sigma,time[i],logweights = load_posterior(method,type,seed,K)
ll_test[i] = gll.lppd(y_test.reshape(-1,1),pi,mu, sigma,K)
else:
y_test,N_test,K = load_data(method,0)
temp = Parallel(n_jobs=-1, backend= 'loky')(delayed(eval_IS)(y_test.reshape(-1,1),N_test,K,i) for i in tqdm(range(rep)))
ESS = np.zeros(rep)
for i in range(rep):
ll_test[i] = temp[i][0]
ESS[i]= temp[i][1]
time[i] = temp[i][2]
print(np.mean(ESS))
print(np.std(ESS))
print('For {}, dataset {}'.format(method,type))
print(np.mean(ll_test/N_test))
print(np.std(ll_test/N_test))
print(np.mean(time))
print(np.std(time))
if __name__=='__main__':
eval('RRNPL_IS')
eval('IS')
eval('DPNPL')
eval('MDPNPL')
eval('NUTS')
eval('ADVI')
| [
"numpy.mean",
"npl.sk_gaussian_mixture.GaussianMixture",
"numpy.ones",
"joblib.Parallel",
"numpy.array",
"numpy.sum",
"numpy.zeros",
"numpy.std",
"joblib.delayed",
"numpy.shape",
"numpy.load",
"scipy.special.logsumexp",
"sklearn.mixture.gaussian_mixture._compute_precision_cholesky"
] | [((522, 569), 'npl.sk_gaussian_mixture.GaussianMixture', 'skgm.GaussianMixture', (['K'], {'covariance_type': '"""diag"""'}), "(K, covariance_type='diag')\n", (542, 569), True, 'import npl.sk_gaussian_mixture as skgm\n'), ((638, 654), 'numpy.zeros', 'np.zeros', (['N_test'], {}), '(N_test)\n', (646, 654), True, 'import numpy as np\n'), ((1176, 1191), 'numpy.sum', 'np.sum', (['ll_test'], {}), '(ll_test)\n', (1182, 1191), True, 'import numpy as np\n'), ((1336, 1358), 'numpy.array', 'np.array', (["par_is['pi']"], {}), "(par_is['pi'])\n", (1344, 1358), True, 'import numpy as np\n'), ((1371, 1393), 'numpy.array', 'np.array', (["par_is['mu']"], {}), "(par_is['mu'])\n", (1379, 1393), True, 'import numpy as np\n'), ((1409, 1434), 'numpy.array', 'np.array', (["par_is['sigma']"], {}), "(par_is['sigma'])\n", (1417, 1434), True, 'import numpy as np\n'), ((1523, 1558), 'scipy.special.logsumexp', 'sp.special.logsumexp', (['logweights_is'], {}), '(logweights_is)\n', (1543, 1558), True, 'import scipy as sp\n'), ((4410, 4423), 'numpy.zeros', 'np.zeros', (['rep'], {}), '(rep)\n', (4418, 4423), True, 'import numpy as np\n'), ((4435, 4448), 'numpy.zeros', 'np.zeros', (['rep'], {}), '(rep)\n', (4443, 4448), True, 'import numpy as np\n'), ((580, 592), 'numpy.shape', 'np.shape', (['mu'], {}), '(mu)\n', (588, 592), True, 'import numpy as np\n'), ((609, 620), 'numpy.shape', 'np.shape', (['y'], {}), '(y)\n', (617, 620), True, 'import numpy as np\n'), ((671, 686), 'numpy.ones', 'np.ones', (['N_test'], {}), '(N_test)\n', (678, 686), True, 'import numpy as np\n'), ((910, 980), 'sklearn.mixture.gaussian_mixture._compute_precision_cholesky', '_compute_precision_cholesky', (['model.covariances_', 'model.covariance_type'], {}), '(model.covariances_, model.covariance_type)\n', (937, 980), False, 'from sklearn.mixture.gaussian_mixture import _compute_precision_cholesky\n'), ((1720, 1732), 'numpy.sum', 'np.sum', (['lppd'], {}), '(lppd)\n', (1726, 1732), True, 'import numpy as np\n'), ((2606, 2625), 
'numpy.array', 'np.array', (["par['pi']"], {}), "(par['pi'])\n", (2614, 2625), True, 'import numpy as np\n'), ((2638, 2657), 'numpy.array', 'np.array', (["par['mu']"], {}), "(par['mu'])\n", (2646, 2657), True, 'import numpy as np\n'), ((2674, 2701), 'numpy.array', 'np.array', (["par[['sigma']][0]"], {}), "(par[['sigma']][0])\n", (2682, 2701), True, 'import numpy as np\n'), ((3197, 3216), 'numpy.array', 'np.array', (["par['pi']"], {}), "(par['pi'])\n", (3205, 3216), True, 'import numpy as np\n'), ((3229, 3248), 'numpy.array', 'np.array', (["par['mu']"], {}), "(par['mu'])\n", (3237, 3248), True, 'import numpy as np\n'), ((3265, 3292), 'numpy.array', 'np.array', (["par[['sigma']][0]"], {}), "(par[['sigma']][0])\n", (3273, 3292), True, 'import numpy as np\n'), ((4974, 4987), 'numpy.zeros', 'np.zeros', (['rep'], {}), '(rep)\n', (4982, 4987), True, 'import numpy as np\n'), ((5235, 5260), 'numpy.mean', 'np.mean', (['(ll_test / N_test)'], {}), '(ll_test / N_test)\n', (5242, 5260), True, 'import numpy as np\n'), ((5270, 5294), 'numpy.std', 'np.std', (['(ll_test / N_test)'], {}), '(ll_test / N_test)\n', (5276, 5294), True, 'import numpy as np\n'), ((5305, 5318), 'numpy.mean', 'np.mean', (['time'], {}), '(time)\n', (5312, 5318), True, 'import numpy as np\n'), ((5330, 5342), 'numpy.std', 'np.std', (['time'], {}), '(time)\n', (5336, 5342), True, 'import numpy as np\n'), ((1643, 1682), 'scipy.special.logsumexp', 'sp.special.logsumexp', (['(2 * logweights_is)'], {}), '(2 * logweights_is)\n', (1663, 1682), True, 'import scipy as sp\n'), ((2899, 2918), 'numpy.array', 'np.array', (["par['pi']"], {}), "(par['pi'])\n", (2907, 2918), True, 'import numpy as np\n'), ((2931, 2950), 'numpy.array', 'np.array', (["par['mu']"], {}), "(par['mu'])\n", (2939, 2950), True, 'import numpy as np\n'), ((2967, 2994), 'numpy.array', 'np.array', (["par[['sigma']][0]"], {}), "(par[['sigma']][0])\n", (2975, 2994), True, 'import numpy as np\n'), ((4845, 4880), 'joblib.Parallel', 'Parallel', ([], {'n_jobs': 
'(-1)', 'backend': '"""loky"""'}), "(n_jobs=-1, backend='loky')\n", (4853, 4880), False, 'from joblib import Parallel, delayed\n'), ((5131, 5143), 'numpy.mean', 'np.mean', (['ESS'], {}), '(ESS)\n', (5138, 5143), True, 'import numpy as np\n'), ((5159, 5170), 'numpy.std', 'np.std', (['ESS'], {}), '(ESS)\n', (5165, 5170), True, 'import numpy as np\n'), ((1841, 1908), 'numpy.load', 'np.load', (['"""./sim_data_plot/gmm_data_test_sep.npy"""'], {'allow_pickle': '(True)'}), "('./sim_data_plot/gmm_data_test_sep.npy', allow_pickle=True)\n", (1848, 1908), True, 'import numpy as np\n'), ((4882, 4898), 'joblib.delayed', 'delayed', (['eval_IS'], {}), '(eval_IS)\n', (4889, 4898), False, 'from joblib import Parallel, delayed\n')] |
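In the evaluation code above, `eval_IS` normalizes log importance weights with a log-sum-exp and derives the effective sample size as 1/Σw². A numpy-only sketch of that computation (written without the scipy dependency; helper names are ours) is:

```python
import numpy as np

def log_sum_exp(a):
    # Numerically stable log(sum(exp(a))).
    m = np.max(a)
    return m + np.log(np.sum(np.exp(np.asarray(a) - m)))

def effective_sample_size(logweights):
    # Normalize the weights in log space, then ESS = 1 / sum(w_i ** 2).
    logw = np.asarray(logweights) - log_sum_exp(logweights)
    return float(np.exp(-log_sum_exp(2 * logw)))

print(effective_sample_size(np.zeros(5)))        # uniform weights -> 5
print(effective_sample_size([0.0, -50.0, -50.0]))  # one dominant weight -> ~1
```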
# -*- coding: utf-8 -*-
"""Percent - How much should a child get?
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1MXNFC_ClLnHFupipXG618P39Tunavn7C
graphs removed
"""
## To Do ##
# use hovertemplate to add labels for Adult UBI, Child UBI, and Ratio
# Chart that shows the current state of child poverty?
# Make a chart about child poverty, kids get 0, half, all
# Install microdf
# !pip install git+https://github.com/PSLmodels/microdf.git
# # update plotly
# !pip install plotly --upgrade
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import microdf as mdf
person = pd.read_csv('https://github.com/MaxGhenis/datarepo/raw/master/pppub20.csv.gz',
usecols=['MARSUPWT', 'SPM_ID', 'SPM_POVTHRESHOLD',
'SPM_RESOURCES','A_AGE', 'TAX_INC'])
# Lower column headers and adapt weights
person.columns = person.columns.str.lower()
person['weight'] = person.marsupwt / 100
# Determine age demographic
person['child'] = person.a_age < 18
person['adult'] = person.a_age > 17
# Calculate the number of children and adults in each household
households = person.groupby(['spm_id'])[['child','adult', 'tax_inc']].sum()
households.columns = ['hh_children', 'hh_adults', 'hh_tax_income']
person = person.merge(households,left_on=['spm_id'], right_index=True)
person['hh_total_people'] = person.hh_adults + person.hh_children
# calculate population statistics
adult_pop = (person.adult * person.weight).sum()
child_pop = (person.child * person.weight).sum()
pop = child_pop + adult_pop
child_pop / pop
# calculate the total taxable income of the population
total_taxable_income = (person.tax_inc * person.weight).sum()
"""# `ubi()` Function
The second argument, `percent`, is the share of total spending allocated to children:
$$percent = \frac{spending\ on\ children}{total\ spending}*100$$
They are calculated in the body of function `ubi()` as follows:
```
adult_ubi = ((1 - percent) * funding) / adult_pop
child_ubi = (percent * funding) / child_pop
```
We are allocating some `percent` of the total budget to children, then calculating the dollar amount per beneficiary.
**Answer**: Yes, that is correct, and that is the point.
"""
def ubi(funding_billions=0, percent=0):
""" Calculate the poverty rate among the total US population by:
-passing a total level of funding for a UBI proposal (billions USD),
    -passing a percent of the benefit received by a child and the benefit
    received by an adult
    AND
    taking into account that funding will be raised by a flat tax levied on
    each household's taxable income """
percent = percent / 100
funding = funding_billions * 1e9
target_persons = person.copy(deep=True)
    # percent here is the share of total funding allocated to children, not a per-person benefit ratio
adult_ubi = ((1 - percent) * funding) / adult_pop
child_ubi = (percent * funding) / child_pop
tax_rate = funding / total_taxable_income
target_persons['hh_new_tax'] = target_persons.hh_tax_income * tax_rate
target_persons['hh_ubi'] = (target_persons.hh_adults * adult_ubi +
target_persons.hh_children * child_ubi)
target_persons['new_spm_resources'] = (target_persons.spm_resources +
target_persons.hh_ubi -
target_persons.hh_new_tax)
target_persons['new_spm_resources_pp'] = (target_persons.new_spm_resources /
(target_persons.hh_total_people))
# Calculate poverty rate
target_persons['poor'] = (target_persons.new_spm_resources <
target_persons.spm_povthreshold)
total_poor = (target_persons.poor * target_persons.weight).sum()
poverty_rate = (total_poor / pop * 100)
# Calculate poverty gap
target_persons['poverty_gap'] = target_persons.spm_povthreshold - target_persons.new_spm_resources
poverty_gap = (((target_persons.poor * target_persons.poverty_gap
* target_persons.weight).sum()))
# Calculate Gini
gini = mdf.gini(target_persons, 'new_spm_resources_pp', w='weight')
# Percent winners
target_persons['better_off'] = (target_persons.new_spm_resources > target_persons.spm_resources)
total_better_off = (target_persons.better_off * target_persons.weight).sum()
percent_better_off = total_better_off / pop
return pd.Series([poverty_rate, gini, poverty_gap, percent_better_off, adult_ubi, child_ubi])
# create a dataframe with all possible combinations of funding levels and child-share percentages
summary = mdf.cartesian_product({'funding_billions': np.arange(0,3_001,50),
'percent': np.arange(0, 101, 1)})
def ubi_row(row):
return ubi(row.funding_billions, row.percent)
summary[['poverty_rate', 'gini', 'poverty_gap', 'percent_better_off', 'adult_ubi', 'child_ubi']] = summary.apply(ubi_row, axis=1)
summary.sample(5)
"""## Save `summary` to CSV"""
summary['monthly_child_ubi'] =summary['child_ubi'].apply(lambda x: int(round(x/12,0)))
summary['monthly_adult_ubi'] =summary['adult_ubi'].apply(lambda x: int(round(x/12,0)))
summary.to_csv("children_share_ubi_spending_summary.csv.gz",compression='gzip')
"""## `optimal_[whatever concept]` `dataframe`s for `ubi()` """
optimal_poverty = summary.sort_values('poverty_gap').drop_duplicates('funding_billions', keep='first')
optimal_poverty = optimal_poverty.drop(27)
optimal_inequality = summary.sort_values('gini').drop_duplicates('funding_billions', keep='first')
optimal_inequality = optimal_inequality.drop(27)
optimal_winners = summary.sort_values('percent_better_off').drop_duplicates('funding_billions', keep='last')
optimal_winners = optimal_winners.drop(27)
"""# `big_percent()` function"""
def big_percent(funding_billions=0, percent=1000):
""" Calculate the poverty rate among the total US population by:
-passing a total level of funding for a UBI proposal (billions USD),
    -passing a percent of the benefit received by a child and the benefit
    received by an adult
    AND
    taking into account that funding will be raised by a flat tax levied on
    each household's taxable income """
percent = percent / 100
funding = funding_billions * 1e9
target_persons = person.copy(deep=True)
adult_ubi = (funding / (adult_pop + (child_pop * percent)))
child_ubi = adult_ubi * percent
tax_rate = funding / total_taxable_income
target_persons['hh_new_tax'] = target_persons.hh_tax_income * tax_rate
target_persons['hh_ubi'] = (target_persons.hh_adults * adult_ubi +
target_persons.hh_children * child_ubi)
target_persons['new_spm_resources'] = (target_persons.spm_resources +
target_persons.hh_ubi -
target_persons.hh_new_tax)
target_persons['new_spm_resources_pp'] = (target_persons.new_spm_resources /
(target_persons.hh_total_people))
# Calculate poverty rate
target_persons['poor'] = (target_persons.new_spm_resources <
target_persons.spm_povthreshold)
total_child_poor = (target_persons.poor * target_persons.weight *
target_persons.child).sum()
child_poverty_rate = (total_child_poor / child_pop * 100).round(1)
adult_ubi = int(adult_ubi)
child_ubi = int(child_ubi)
return pd.Series([child_poverty_rate, adult_ubi, child_ubi])
summary2 = mdf.cartesian_product({'funding_billions': np.arange(0,3_001,50),
'percent': np.arange(0, 101, 50)})
def big_percent_row(row):
return big_percent(row.funding_billions, row.percent)
summary2[['child_poverty_rate', 'adult_ubi', 'child_ubi']] = summary2.apply(big_percent_row, axis=1)
summary2
# calculate monthly payments
summary2['monthly_child_ubi'] = summary2['child_ubi'].apply(lambda x: int(round(x / 12)))
summary2['monthly_adult_ubi'] = summary2['adult_ubi'].apply(lambda x: int(round(x / 12)))
"""## Write `summary2` to CSV"""
summary2.to_csv("ratio_child_ben_to_adult_ben_summary.csv.gz",compression="gzip")
summary2.sample(10)
| [
"microdf.gini",
"pandas.Series",
"pandas.read_csv",
"numpy.arange"
] | [((731, 906), 'pandas.read_csv', 'pd.read_csv', (['"""https://github.com/MaxGhenis/datarepo/raw/master/pppub20.csv.gz"""'], {'usecols': "['MARSUPWT', 'SPM_ID', 'SPM_POVTHRESHOLD', 'SPM_RESOURCES', 'A_AGE', 'TAX_INC']"}), "('https://github.com/MaxGhenis/datarepo/raw/master/pppub20.csv.gz',\n usecols=['MARSUPWT', 'SPM_ID', 'SPM_POVTHRESHOLD', 'SPM_RESOURCES',\n 'A_AGE', 'TAX_INC'])\n", (742, 906), True, 'import pandas as pd\n'), ((4246, 4306), 'microdf.gini', 'mdf.gini', (['target_persons', '"""new_spm_resources_pp"""'], {'w': '"""weight"""'}), "(target_persons, 'new_spm_resources_pp', w='weight')\n", (4254, 4306), True, 'import microdf as mdf\n'), ((4569, 4659), 'pandas.Series', 'pd.Series', (['[poverty_rate, gini, poverty_gap, percent_better_off, adult_ubi, child_ubi]'], {}), '([poverty_rate, gini, poverty_gap, percent_better_off, adult_ubi,\n child_ubi])\n', (4578, 4659), True, 'import pandas as pd\n'), ((7661, 7714), 'pandas.Series', 'pd.Series', (['[child_poverty_rate, adult_ubi, child_ubi]'], {}), '([child_poverty_rate, adult_ubi, child_ubi])\n', (7670, 7714), True, 'import pandas as pd\n'), ((4787, 4809), 'numpy.arange', 'np.arange', (['(0)', '(3001)', '(50)'], {}), '(0, 3001, 50)\n', (4796, 4809), True, 'import numpy as np\n'), ((4855, 4875), 'numpy.arange', 'np.arange', (['(0)', '(101)', '(1)'], {}), '(0, 101, 1)\n', (4864, 4875), True, 'import numpy as np\n'), ((7772, 7794), 'numpy.arange', 'np.arange', (['(0)', '(3001)', '(50)'], {}), '(0, 3001, 50)\n', (7781, 7794), True, 'import numpy as np\n'), ((7840, 7861), 'numpy.arange', 'np.arange', (['(0)', '(101)', '(50)'], {}), '(0, 101, 50)\n', (7849, 7861), True, 'import numpy as np\n')] |
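The `big_percent` function above funds a UBI with a flat tax and pays children a fixed percent of the adult benefit. A minimal standalone sketch of that benefit-splitting arithmetic, using toy population and funding figures (all numbers here are assumptions, not from the CPS data used above):

```python
import numpy as np

def ubi_amounts(funding, adult_pop, child_pop, percent):
    """Split a funding pot into per-adult and per-child UBI amounts,
    where each child receives `percent` (0-1) of the adult benefit."""
    adult_ubi = funding / (adult_pop + child_pop * percent)
    return adult_ubi, adult_ubi * percent

# Toy figures: $1M pot, 800 adults, 400 children, child gets 50% of adult benefit.
adult_ubi, child_ubi = ubi_amounts(funding=1_000_000, adult_pop=800,
                                   child_pop=400, percent=0.5)
# Sanity check: total paid out equals the funding pot.
assert np.isclose(800 * adult_ubi + 400 * child_ubi, 1_000_000)
```

With these figures the adult benefit is $1,000 and the child benefit $500, which is why the denominator weights children by `percent`.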
import os
import time
import argparse
import random
import numpy as np
from tqdm import trange
from statistics import mean
parser = argparse.ArgumentParser(description='NAS Without Training')
parser.add_argument('--data_loc', default='../datasets/cifar', type=str, help='dataset folder')
parser.add_argument('--api_loc', default='../datasets/NAS-Bench-201-v1_1-096897.pth',
type=str, help='path to API')
parser.add_argument('--save_loc', default='results', type=str, help='folder to save results')
parser.add_argument('--batch_size', default=256, type=int)
parser.add_argument('--GPU', default='0', type=str)
parser.add_argument('--seed', default=1, type=int)
parser.add_argument('--trainval', action='store_true')
parser.add_argument('--dataset', default='cifar10', type=str)
parser.add_argument('--n_samples', default=100, type=int)
parser.add_argument('--n_runs', default=500, type=int)
args = parser.parse_args()
os.environ['CUDA_VISIBLE_DEVICES'] = args.GPU
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import torchvision.datasets as datasets
import torch.optim as optim
from models import get_cell_based_tiny_net
# Reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
import torchvision.transforms as transforms
from datasets import get_datasets
from config_utils import load_config
from nas_201_api import NASBench201API as API
def get_batch_jacobian(net, x, target, to, device, args=None):
"""Calculates a jacobian matrix for a batch of input data x. For each input x and the corresponding output of the
net f(x) = y, the Jacobian contains dy/dx.
Args:
net (torch.nn): The network that should be utilized for calculating the output y for an input x.
x: The input data
target: TODO
to: TODO unused
device: TODO unused
Returns:
tuple of torch.Tensor and TODO Target: The Jacobian and TODO Target
"""
net.zero_grad()
x.requires_grad_(True)
_, y = net(x)
y.backward(torch.ones_like(y))
jacobian = x.grad.detach()
return jacobian, target.detach()
def eval_score(jacobian, labels=None):
"""Calculates the correlation score from the given Jacobian matrix.
Args:
jacobian (torch.Tensor): The Jacobian with which the correlation score should be calculated.
labels: TODO
Returns:
float: The correlation score of the given Jacobian.
"""
corrs = np.corrcoef(jacobian)
v, _ = np.linalg.eig(corrs)
k = 1e-5
return -np.sum(np.log(v + k) + 1./(v + k))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
print(f'api_loc argument: {args.api_loc}')
THE_START = time.time()
api = API(args.api_loc)
os.makedirs(args.save_loc, exist_ok=True)
train_data, valid_data, xshape, class_num = get_datasets(args.dataset, args.data_loc, cutout=0)
if args.dataset == 'cifar10':
acc_type = 'ori-test'
val_acc_type = 'x-valid'
else:
acc_type = 'x-test'
val_acc_type = 'x-valid'
if args.trainval:
cifar_split = load_config('config_utils/cifar-split.txt', None, None)
train_split, valid_split = cifar_split.train, cifar_split.valid
train_loader = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size,
num_workers=0, pin_memory=True, sampler= torch.utils.data.sampler.SubsetRandomSampler(train_split))
else:
train_loader = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size, shuffle=True,
num_workers=0, pin_memory=True)
times = []
chosen = []
acc = []
val_acc = []
topscores = []
dset = args.dataset if not args.trainval else 'cifar10-valid'
order_fn = np.nanargmax
runs = trange(args.n_runs, desc='acc: ')
for N in runs:
start = time.time()
indices = np.random.randint(0,15625,args.n_samples)
scores = []
for arch in indices:
data_iterator = iter(train_loader)
x, target = next(data_iterator)
x, target = x.to(device), target.to(device)
config = api.get_net_config(arch, args.dataset)
config['num_classes'] = 1
network = get_cell_based_tiny_net(config) # create the network from configuration
network = network.to(device)
jacobs, labels= get_batch_jacobian(network, x, target, 1, device, args)
jacobs = jacobs.reshape(jacobs.size(0), -1).cpu().numpy()
try:
s = eval_score(jacobs, labels)
except Exception as e:
print(e)
s = np.nan
scores.append(s)
best_arch = indices[order_fn(scores)]
info = api.query_by_index(best_arch)
topscores.append(scores[order_fn(scores)])
chosen.append(best_arch)
acc.append(info.get_metrics(dset, acc_type)['accuracy'])
if not args.dataset == 'cifar10' or args.trainval:
val_acc.append(info.get_metrics(dset, val_acc_type)['accuracy'])
times.append(time.time()-start)
runs.set_description(f"acc: {mean(acc if not args.trainval else val_acc):.2f}%")
print(f"Final mean test accuracy: {np.mean(acc)}")
if len(val_acc) > 1:
print(f"Final mean validation accuracy: {np.mean(val_acc)}")
state = {'accs': acc,
'val_accs': val_acc,
'chosen': chosen,
'times': times,
'topscores': topscores,
}
dset = args.dataset if not args.trainval else 'cifar10-valid'
fname = f"{args.save_loc}/{dset}_{args.n_runs}_{args.n_samples}_{args.seed}.t7"
torch.save(state, fname)
| [
"numpy.log",
"torch.cuda.is_available",
"numpy.mean",
"argparse.ArgumentParser",
"numpy.random.seed",
"config_utils.load_config",
"torch.utils.data.sampler.SubsetRandomSampler",
"torch.ones_like",
"datasets.get_datasets",
"numpy.linalg.eig",
"numpy.corrcoef",
"nas_201_api.NASBench201API",
"t... | [((133, 192), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""NAS Without Training"""'}), "(description='NAS Without Training')\n", (156, 192), False, 'import argparse\n'), ((1273, 1295), 'random.seed', 'random.seed', (['args.seed'], {}), '(args.seed)\n', (1284, 1295), False, 'import random\n'), ((1296, 1321), 'numpy.random.seed', 'np.random.seed', (['args.seed'], {}), '(args.seed)\n', (1310, 1321), True, 'import numpy as np\n'), ((1322, 1350), 'torch.manual_seed', 'torch.manual_seed', (['args.seed'], {}), '(args.seed)\n', (1339, 1350), False, 'import torch\n'), ((2826, 2837), 'time.time', 'time.time', ([], {}), '()\n', (2835, 2837), False, 'import time\n'), ((2844, 2861), 'nas_201_api.NASBench201API', 'API', (['args.api_loc'], {}), '(args.api_loc)\n', (2847, 2861), True, 'from nas_201_api import NASBench201API as API\n'), ((2863, 2904), 'os.makedirs', 'os.makedirs', (['args.save_loc'], {'exist_ok': '(True)'}), '(args.save_loc, exist_ok=True)\n', (2874, 2904), False, 'import os\n'), ((2950, 3001), 'datasets.get_datasets', 'get_datasets', (['args.dataset', 'args.data_loc'], {'cutout': '(0)'}), '(args.dataset, args.data_loc, cutout=0)\n', (2962, 3001), False, 'from datasets import get_datasets\n'), ((3902, 3935), 'tqdm.trange', 'trange', (['args.n_runs'], {'desc': '"""acc: """'}), "(args.n_runs, desc='acc: ')\n", (3908, 3935), False, 'from tqdm import trange\n'), ((5641, 5665), 'torch.save', 'torch.save', (['state', 'fname'], {}), '(state, fname)\n', (5651, 5665), False, 'import torch\n'), ((2568, 2589), 'numpy.corrcoef', 'np.corrcoef', (['jacobian'], {}), '(jacobian)\n', (2579, 2589), True, 'import numpy as np\n'), ((2602, 2622), 'numpy.linalg.eig', 'np.linalg.eig', (['corrs'], {}), '(corrs)\n', (2615, 2622), True, 'import numpy as np\n'), ((3185, 3240), 'config_utils.load_config', 'load_config', (['"""config_utils/cifar-split.txt"""', 'None', 'None'], {}), "('config_utils/cifar-split.txt', None, None)\n", (3196, 3240), False, 
'from config_utils import load_config\n'), ((3569, 3687), 'torch.utils.data.DataLoader', 'torch.utils.data.DataLoader', (['train_data'], {'batch_size': 'args.batch_size', 'shuffle': '(True)', 'num_workers': '(0)', 'pin_memory': '(True)'}), '(train_data, batch_size=args.batch_size, shuffle\n =True, num_workers=0, pin_memory=True)\n', (3596, 3687), False, 'import torch\n'), ((3963, 3974), 'time.time', 'time.time', ([], {}), '()\n', (3972, 3974), False, 'import time\n'), ((3989, 4032), 'numpy.random.randint', 'np.random.randint', (['(0)', '(15625)', 'args.n_samples'], {}), '(0, 15625, args.n_samples)\n', (4006, 4032), True, 'import numpy as np\n'), ((2139, 2157), 'torch.ones_like', 'torch.ones_like', (['y'], {}), '(y)\n', (2154, 2157), False, 'import torch\n'), ((2719, 2744), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (2742, 2744), False, 'import torch\n'), ((4319, 4350), 'models.get_cell_based_tiny_net', 'get_cell_based_tiny_net', (['config'], {}), '(config)\n', (4342, 4350), False, 'from models import get_cell_based_tiny_net\n'), ((3484, 3541), 'torch.utils.data.sampler.SubsetRandomSampler', 'torch.utils.data.sampler.SubsetRandomSampler', (['train_split'], {}), '(train_split)\n', (3528, 3541), False, 'import torch\n'), ((5107, 5118), 'time.time', 'time.time', ([], {}), '()\n', (5116, 5118), False, 'import time\n'), ((5247, 5259), 'numpy.mean', 'np.mean', (['acc'], {}), '(acc)\n', (5254, 5259), True, 'import numpy as np\n'), ((2655, 2668), 'numpy.log', 'np.log', (['(v + k)'], {}), '(v + k)\n', (2661, 2668), True, 'import numpy as np\n'), ((5159, 5202), 'statistics.mean', 'mean', (['(acc if not args.trainval else val_acc)'], {}), '(acc if not args.trainval else val_acc)\n', (5163, 5202), False, 'from statistics import mean\n'), ((5329, 5345), 'numpy.mean', 'np.mean', (['val_acc'], {}), '(val_acc)\n', (5336, 5345), True, 'import numpy as np\n')] |
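The `eval_score` function in the row above is the heart of the training-free scoring: it measures how correlated the per-input Jacobians are, with highly correlated (low-rank) gradients penalized through tiny eigenvalues. A minimal self-contained sketch of the same eigenvalue-based score on random data (the batch size and feature width below are illustrative assumptions):

```python
import numpy as np

def eval_score(jacobian, k=1e-5):
    # Correlation matrix across the batch rows of the flattened Jacobian.
    corrs = np.corrcoef(jacobian)
    v, _ = np.linalg.eig(corrs)
    # Less-correlated per-input gradients give a higher (less negative) score;
    # eigenvalues near zero blow up the 1/(v + k) penalty term.
    return -np.sum(np.log(v + k) + 1.0 / (v + k))

rng = np.random.default_rng(0)
jac = rng.standard_normal((8, 32))  # batch of 8 flattened per-input gradients
score = eval_score(jac)
```

Duplicating one row across the whole batch makes every correlation exactly 1, so all but one eigenvalue collapse to zero and the score drops by orders of magnitude, which is the behavior the architecture search exploits.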
import numpy as np
from pytest import approx
from datasetinsights.evaluation_metrics import AverageLog10Error
def get_y_true_y_pred():
# Generate two ground-truth images and two pred images with different
# depths
y_true = np.ones((2, 2, 2, 3), dtype=np.double) * 10
y_pred = np.ones((2, 2, 2, 3), dtype=np.double) * 10
y_pred[0, 0, 0, 0] = 0
y_pred[0, 1, 0, 1] = 19
y_pred[1, 1, 1, 2] = 15
return y_true, y_pred
def test_average_log10_error():
y_true, y_pred = get_y_true_y_pred()
log10_metric = AverageLog10Error()
# compute real metric
num_pixels = y_true.size
error = 0
for i in range(y_true.shape[0]):
for j in range(y_true.shape[1]):
for k in range(y_true.shape[2]):
for m in range(y_true.shape[3]):
error += abs(
np.log10(y_true[i][j][k][m] + 1e-15)
- np.log10(y_pred[i][j][k][m] + 1e-15)
)
true_res = (error / num_pixels).item()
# Update metric
output1 = (y_true[0], y_true[0])
log10_metric.update(output1)
output2 = (y_true[1], y_true[1])
log10_metric.update(output2)
res = log10_metric.compute()
assert approx(0) == res
log10_metric.reset()
output1 = (y_pred[0], y_true[0])
log10_metric.update(output1)
output2 = (y_pred[1], y_true[1])
log10_metric.update(output2)
res = log10_metric.compute()
assert approx(true_res) == res
| [
"pytest.approx",
"datasetinsights.evaluation_metrics.AverageLog10Error",
"numpy.log10",
"numpy.ones"
] | [((544, 563), 'datasetinsights.evaluation_metrics.AverageLog10Error', 'AverageLog10Error', ([], {}), '()\n', (561, 563), False, 'from datasetinsights.evaluation_metrics import AverageLog10Error\n'), ((238, 276), 'numpy.ones', 'np.ones', (['(2, 2, 2, 3)'], {'dtype': 'np.double'}), '((2, 2, 2, 3), dtype=np.double)\n', (245, 276), True, 'import numpy as np\n'), ((296, 334), 'numpy.ones', 'np.ones', (['(2, 2, 2, 3)'], {'dtype': 'np.double'}), '((2, 2, 2, 3), dtype=np.double)\n', (303, 334), True, 'import numpy as np\n'), ((1233, 1242), 'pytest.approx', 'approx', (['(0)'], {}), '(0)\n', (1239, 1242), False, 'from pytest import approx\n'), ((1460, 1476), 'pytest.approx', 'approx', (['true_res'], {}), '(true_res)\n', (1466, 1476), False, 'from pytest import approx\n'), ((863, 899), 'numpy.log10', 'np.log10', (['(y_true[i][j][k][m] + 1e-15)'], {}), '(y_true[i][j][k][m] + 1e-15)\n', (871, 899), True, 'import numpy as np\n'), ((926, 962), 'numpy.log10', 'np.log10', (['(y_pred[i][j][k][m] + 1e-15)'], {}), '(y_pred[i][j][k][m] + 1e-15)\n', (934, 962), True, 'import numpy as np\n')] |
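The test above exercises `AverageLog10Error` by looping over every pixel; the metric itself is just the mean absolute difference of log10 depths with an epsilon guarding `log10(0)`. A vectorized sketch of that formula on a toy 2x2 depth map (shapes and values here are assumptions for illustration):

```python
import numpy as np

def average_log10_error(y_pred, y_true, eps=1e-15):
    """Mean absolute difference of log10 depths; eps guards log10(0)."""
    return np.abs(np.log10(y_pred + eps) - np.log10(y_true + eps)).mean()

y_true = np.full((2, 2), 10.0)
y_pred = np.full((2, 2), 10.0)
y_pred[0, 0] = 100.0  # one pixel off by a factor of 10
# |log10(100) - log10(10)| = 1 on one of four pixels -> mean 0.25
err = average_log10_error(y_pred, y_true)
```

This mirrors the nested-loop reference computation in the test, which accumulates `abs(log10(true + 1e-15) - log10(pred + 1e-15))` per pixel and divides by the pixel count.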
import argparse
import numpy as np
from paz.backend.image import load_image, show_image, resize_image
from paz.backend.camera import Camera
from paz.pipelines import DetectMiniXceptionFER
from paz.pipelines import DetectFaceKeypointNet2D32
from paz.pipelines import HeadPoseKeypointNet2D32
from paz.pipelines import SSD300FAT, SSD300VOC, SSD512COCO, SSD512YCBVideo
parser = argparse.ArgumentParser(description='Real-time face classifier')
parser.add_argument('-o', '--offset', type=float, default=0.1,
help='Scaled offset to be added to bounding boxes')
parser.add_argument('-s', '--score_thresh', type=float, default=0.6,
help='Box/class score threshold')
parser.add_argument('-n', '--nms_thresh', type=float, default=0.45,
help='non-maximum suppression threshold')
parser.add_argument('-p', '--image_path', type=str,
help='full image path used for the pipelines')
parser.add_argument('-c', '--camera_id', type=str,
help='Camera/device ID')
parser.add_argument('-d', '--dataset', type=str, default='COCO',
choices=['VOC', 'COCO', 'YCBVideo', 'FAT'],
help='Dataset name')
args = parser.parse_args()
name_to_model = {'VOC': SSD300VOC, 'FAT': SSD300FAT, 'COCO': SSD512COCO,
'YCBVideo': SSD512YCBVideo}
image = load_image(args.image_path)
H = 1000
W = int((H / image.shape[0]) * image.shape[1])
# image = resize_image(image, (W, H))
focal_length = image.shape[1]
image_center = (image.shape[1] / 2.0, image.shape[0] / 2.0)
camera = Camera(args.camera_id)
camera.distortion = np.zeros((4, 1))
camera.intrinsics = np.array([[focal_length, 0, image_center[0]],
[0, focal_length, image_center[1]],
[0, 0, 1]])
pipeline_A = DetectMiniXceptionFER([args.offset, args.offset])
pipeline_B = DetectFaceKeypointNet2D32()
pipeline_C = HeadPoseKeypointNet2D32(camera)
pipeline_D = name_to_model[args.dataset](args.score_thresh, args.nms_thresh)
pipelines = [pipeline_A, pipeline_B, pipeline_C, pipeline_D]
for pipeline in pipelines:
predictions = pipeline(image.copy())
show_image(predictions['image'])
| [
"argparse.ArgumentParser",
"paz.backend.image.show_image",
"numpy.array",
"numpy.zeros",
"paz.pipelines.HeadPoseKeypointNet2D32",
"paz.pipelines.DetectFaceKeypointNet2D32",
"paz.backend.camera.Camera",
"paz.pipelines.DetectMiniXceptionFER",
"paz.backend.image.load_image"
] | [((378, 442), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""Real-time face classifier"""'}), "(description='Real-time face classifier')\n", (401, 442), False, 'import argparse\n'), ((1374, 1401), 'paz.backend.image.load_image', 'load_image', (['args.image_path'], {}), '(args.image_path)\n', (1384, 1401), False, 'from paz.backend.image import load_image, show_image, resize_image\n'), ((1596, 1618), 'paz.backend.camera.Camera', 'Camera', (['args.camera_id'], {}), '(args.camera_id)\n', (1602, 1618), False, 'from paz.backend.camera import Camera\n'), ((1639, 1655), 'numpy.zeros', 'np.zeros', (['(4, 1)'], {}), '((4, 1))\n', (1647, 1655), True, 'import numpy as np\n'), ((1676, 1773), 'numpy.array', 'np.array', (['[[focal_length, 0, image_center[0]], [0, focal_length, image_center[1]], [0,\n 0, 1]]'], {}), '([[focal_length, 0, image_center[0]], [0, focal_length,\n image_center[1]], [0, 0, 1]])\n', (1684, 1773), True, 'import numpy as np\n'), ((1844, 1893), 'paz.pipelines.DetectMiniXceptionFER', 'DetectMiniXceptionFER', (['[args.offset, args.offset]'], {}), '([args.offset, args.offset])\n', (1865, 1893), False, 'from paz.pipelines import DetectMiniXceptionFER\n'), ((1907, 1934), 'paz.pipelines.DetectFaceKeypointNet2D32', 'DetectFaceKeypointNet2D32', ([], {}), '()\n', (1932, 1934), False, 'from paz.pipelines import DetectFaceKeypointNet2D32\n'), ((1948, 1979), 'paz.pipelines.HeadPoseKeypointNet2D32', 'HeadPoseKeypointNet2D32', (['camera'], {}), '(camera)\n', (1971, 1979), False, 'from paz.pipelines import HeadPoseKeypointNet2D32\n'), ((2190, 2222), 'paz.backend.image.show_image', 'show_image', (["predictions['image']"], {}), "(predictions['image'])\n", (2200, 2222), False, 'from paz.backend.image import load_image, show_image, resize_image\n')] |
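The script above builds camera intrinsics heuristically: focal length approximated by the image width and the principal point at the image center, with zero distortion. A small sketch of that pinhole intrinsics construction (the helper name and the 480x640 image shape are assumptions):

```python
import numpy as np

def simple_intrinsics(image_shape):
    """Approximate pinhole intrinsics for an uncalibrated camera:
    focal length ~ image width, principal point at the image center."""
    H, W = image_shape[:2]
    cx, cy = W / 2.0, H / 2.0
    focal = float(W)
    return np.array([[focal, 0, cx],
                     [0, focal, cy],
                     [0, 0, 1]])

K = simple_intrinsics((480, 640))
```

This heuristic is common when no calibration data is available; head-pose estimation pipelines like the one above only need a plausible projection matrix, not an exact one.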
#!/usr/bin/python
# -*- coding=utf-8 -*-
# Project: eaglemine
# File: eaglemine.py
# Goal: The main routine of eaglemine
# Version: 1.0
# Created by @wenchieh on <12/17/2017>
#
__author__ = 'wenchieh'
# sys
import time
# third-party lib
import numpy as np
from scipy.sparse.linalg import svds
# project
from .eaglemine_model import EagleMineModel
from .tools.histogram_heuristic_generator import HistogramHeuristicGenerator
from .._model import DMmodel
from spartan.tensor.graph import Graph
class EagleMine(DMmodel):
''' Micro-cluster detection: vision-guided anomaly detection.
Given a histogram derived from the correlated features of graph nodes,
EagleMine can be used to identify the micro-clusters in the graph,
these nodes in micro-clusters basically corresponds to some anomaly patterns.
'''
def __init__(self, voctype:str="dtmnorm", mode:int=2, mix_comps:int=2):
'''Initialization func for EagleMine
Parameters:
--------
:param voctype: str
vocabulary type: {"dtmnorm", "dmgauss"}
Default is "dtmnorm".
:param mode: int
The dimensions of features (the histogram).
Default is 2.
:param mix_comps: int
Number of mixture components for describing the major island.
Default is 2.
'''
self.voctype = voctype
self.mode = mode
self.ncomps = mix_comps
self.eaglemodel = None
self.__histgenerator__ = None
@classmethod
def __create__(cls, *args, **kwargs):
return cls(*args, **kwargs)
@staticmethod
def get_graph_spectral(sparse_matrix):
'''Extract the spectral features of the given sparse matrix (graph)
Parameter:
--------
:param sparse_matrix:
sparse matrix for adjacency matrix a graph.
'''
hub, _, auth = svds(1.0 * sparse_matrix.tocsr(), k=1, which='LM')
hub, auth = np.squeeze(np.array(hub)), np.squeeze(np.array(auth))
if abs(np.max(hub)) < abs(np.min(hub)):
hub *= -1
hub[hub < 0] = 0
if abs(np.max(auth)) < abs(np.min(auth)):
auth *= -1
auth[auth < 0] = 0
return hub, auth
def graph2feature(self, graph:Graph, feature_type:str='outdegree2hubness'):
'''Extract example correlated-features of the given graph and generate the corresponding histogram.
Parameters:
--------
:param graph: Graph
Given graph data
:param feature_type: str
Feature type for the graph node: {'outdegree2hubness', 'indegree2authority', 'degree2pagerank'}.
Default is 'outdegree2hubness'.
'''
if feature_type not in {'outdegree2hubness', 'indegree2authority', 'degree2pagerank'}:
raise NameError("Invalid feature type 'ty_type', which should be in {'outdegree2hubness', 'indegree2authority', 'degree2pagerank'}")
print("extract graph features ..... ")
start_tm = time.time()
outd, ind = np.asarray(graph.sm.sum(axis=1)).squeeze(), np.asarray(graph.sm.sum(axis=0)).squeeze()
hub, auth = self.get_graph_spectral(graph.sm)
if feature_type == 'outdegree2hubness' or feature_type == 'degree2pagerank':
deg, spec = outd, hub
else:
deg, spec = ind, auth
degreeidx = 0
print("done! @ {}s".format(time.time() - start_tm))
return degreeidx, np.column_stack((deg, spec))
def feature2histogram(self, feature:np.ndarray, degreeidx:int=0,
N_bins:int=80, base:int=10, mode:int=2, verbose:bool=True):
'''Construct histogram with given features
Parameters:
--------
:param feature: np.ndarray
The correlated features
:param degreeidx: int
The index of 'degree' ('out-degree', 'in-degree') in features, degreeidx=-1 if not containing 'degree' feature
Default is 0.
:param N_bins: int
The expected number of bins for generating histogram.
Default is 80.
:param base: int
The logarithmic base for bucketing the graph features.
Default is 10.
:param mode: int
The dimensions of features for constructing the histogram.
Default is 2.
:param verbose: bool
Whether output some running logs.
Default is True.
'''
n_samples, n_features = feature.shape
index = np.array([True] * n_samples)
for mod in range(mode):
index &= feature[:, mod] > 0
if verbose:
print("total shape: {}, valid samples:{}".format(feature.shape, np.sum(index)))
degree, feat = None, None
feature = feature[index, :]
if degreeidx >= 0:
degree = feature[:, degreeidx]
feat = np.delete(feature, degreeidx, axis=1)
del feature
else:
feat = feature
print("construct histogram ..... ")
start_tm = time.time()
self.__histgenerator__ = HistogramHeuristicGenerator()
if degree is not None:
self.__histgenerator__.set_deg_data(degree, feat)
self.__histgenerator__.histogram_gen(method="degree", N=N_bins, base=base)
else:
self.__histgenerator__.set_data(feat)
self.__histgenerator__.histogram_gen(method="N", N=N_bins, logarithmic=True, base=base)
print("done! @ {}s".format(time.time() - start_tm))
if verbose:
self.__histgenerator__.dump()
n_nodes = len(self.__histgenerator__.points_coord)
node2hcel = map(tuple, self.__histgenerator__.points_coord)
nodeidx2hcel = dict(zip(range(n_nodes), node2hcel))
return self.__histgenerator__.histogram, nodeidx2hcel, self.__histgenerator__.hpos2avgfeat
def set_histdata(self, histogram:dict, node2hcel:dict, hcel2avgfeat:dict, weighted_ftidx:int=0):
'''Set the histogram data
Parameters:
--------
:param histogram: dict
the format '(x,y,z,...): val', denoting that the cell (x,y,z,...) affiliates with value 'val'.
:param node2hcel: dict
graph node id (index) to histogram cell
:param hcel2avgfeat: dict
the average feature values for each histogram cell.
:param weighted_ftidx: int
The feature index as weight for suspiciousness metric.
Default is 0.
'''
self.histogram, self.hcelsusp_wt = list(), list()
for hcel in histogram.keys():
self.histogram.append(list(hcel) + [histogram.get(hcel)])
self.hcelsusp_wt.append(hcel2avgfeat[hcel][weighted_ftidx])
self.histogram = np.asarray(self.histogram)
self.hcelsusp_wt = np.asarray(self.hcelsusp_wt)
self.node2hcel = np.column_stack((list(node2hcel.keys()), list(node2hcel.values())))
self.eaglemodel = None
self.__histgenerator__ = None
self.hcel2label = None
def run(self, outs:str, waterlevel_step:float=0.2, prune_alpha:float=0.80,
min_pts:int=20, strictness:int=3, verbose:bool=True):
''' micro-cluster identification and refinement with water-level tree.
Parameters:
--------
:param outs: str
Output path for some temporary results.
:param waterlevel_step: float
Step size for raising the water level.
Default is 0.2.
:param prune_alpha: float
How proportion of pruning for level-tree.
Default is 0.80.
:param min_pts: int
The minimum number of points in a histogram cell.
Default is 20.
:param strictness: int
How strict the Anderson-Darling test for normality should be. 0: not at all strict; 4: very strict.
Default is 3.
:param verbose: bool
Whether output some running logs.
Default is True.
'''
print("*****************")
print("[0]. initialization")
self.eaglemodel = EagleMineModel(self.mode, self.ncomps)
self.eaglemodel.set_vocabulary(self.voctype)
self.eaglemodel.set_histogram(self.histogram)
print("*****************")
print("[1]. WaterLevelTree")
start_tm = time.time()
self.eaglemodel.leveltree_build(outs, waterlevel_step, prune_alpha, verbose=verbose)
end_tm1 = time.time()
print("done @ {}".format(end_tm1 - start_tm))
print("*****************")
print("[2]. TreeExplore")
self.eaglemodel.search(min_pts, strictness, verbose=verbose)
self.eaglemodel.post_stitch(strictness, verbose=verbose)
end_tm2 = time.time()
print("done @ {}".format(end_tm2 - end_tm1))
print("*****************")
print("[3]. node groups cluster and suspicious measure")
self.hcel2label, mdl = self.eaglemodel.cluster_remarks(strictness, verbose=verbose)
cluster_suspicious = self.eaglemodel.cluster_weighted_suspicious(self.hcelsusp_wt, strictness, verbose=verbose)
# print("description length (mdl): {}".format(mdl))
# print("suspicious result: {}".foramt(cluster_suspicious))
print('done @ {}'.format(time.time() - start_tm))
def __str__(self):
return str(vars(self))
def dump(self):
self.eaglemodel.dump()
if self.__histgenerator__ is not None:
self.__histgenerator__.dump()
print("done!")
def save(self, outfn_eaglemine:str, outfn_leveltree:str=None,
outfn_node2label:str=None, outfn_hcel2label:str=None,
comments:str="#", delimiter:str=";"):
'''save result of EagleMine
Parameters:
--------
:param outfn_eaglemine: str
Output path for eaglemine data
:param outfn_leveltree: str
Output path for the water-level-tree data.
:param outfn_node2label: str
Output path for node2label data
:param outfn_hcel2label: str
Output path for hcel2label data
:param comments: str
The comments (start character) of inputs.
Default is "#".
:param delimiter: str
The separator of items in each line of inputs.
Default is ";".
'''
print("saving result")
start_tm = time.time()
self.eaglemodel.save(outfn_eaglemine)
if outfn_leveltree:
self.eaglemodel.leveltree.save_leveltree(outfn_leveltree, verbose=False)
nlabs = len(np.unique(list(self.hcel2label.values())))
if outfn_node2label is not None:
nnds = len(self.node2hcel)
with open(outfn_node2label, 'w') as ofp_node2lab:
ofp_node2lab.writelines(comments + " #pt: {}, #label: {}\n".format(nnds, nlabs))
for k in range(nnds):
nodeidx, hcel = self.node2hcel[k, 0], tuple(self.node2hcel[k, 1:])
nodelab = self.hcel2label.get(hcel, -1)
ofp_node2lab.writelines("{}{}{}\n".format(nodeidx, delimiter, nodelab))
ofp_node2lab.close()
if outfn_hcel2label is not None:
nhcels = len(self.hcelsusp_wt)
with open(outfn_hcel2label, 'w') as ofp_hcel2lab:
ofp_hcel2lab.writelines(comments + ' #hcel: {}, #label: {}'.format(nhcels, nlabs))
for hcel, lab in self.hcel2label.items():
hcel_str = delimiter.join(map(str, hcel))
ofp_hcel2lab.writelines("{}{}{}\n".format(hcel_str, delimiter, lab))
ofp_hcel2lab.close()
end_tm = time.time()
print("done! @ {}s".format(end_tm - start_tm))
def save_histogram(self, outfn_histogram:str=None, outfn_node2hcel:str=None, outfn_hcel2avgfeat:str=None,
comments:str="#", delimiter:str=","):
'''Save the histogram data for the graph.
Parameters:
--------
:param outfn_histogram: str
Output path of histogram.
The record in histogram should be in the format 'x,y,z,...,val', denoting that the cell (x, y, z, ...) affiliates with value 'val'
Default is None.
:param outfn_node2hcel: str
Output path of the file mapping the node to histogram cell.
Default is None.
:param outfn_hcel2avgfeat: str
Output path of the file mapping the histogram cell to the average features and #points
Default is None.
:param comments: str
The comments (start character) of inputs.
Default is "#".
:param delimiter: str
The separator of items in each line of inputs.
Default is ",".
'''
if self.__histgenerator__ is not None:
if outfn_histogram is not None:
self.__histgenerator__.save_histogram(outfn_histogram, delimiter, comments)
if outfn_node2hcel is not None:
self.__histgenerator__.save_pts_index(outfn_node2hcel, delimiter, comments)
if outfn_hcel2avgfeat is not None:
self.__histgenerator__.save_hpos2avgfeat(outfn_hcel2avgfeat, delimiter, comments)
else:
RuntimeError("No histogram generated for given graph!")
## TODO: realize the 'anomaly_detection' for task-based running. The root API may need to be refactored.
# def anomaly_detection(self, outs:str, waterlevel_step:float=0.2,
# prune_alpha:float=0.80, min_pts:int=20, strictness:int=3):
# '''anomaly detection with EagleMine
# Parameters:
# --------
# outs: str
# Output path for some temporary results.
# waterlevel_step: float
# Step size for raising the water level.
# Default is 0.2.
# prune_alpha: float
# How proportion of pruning for level-tree.
# Default is 0.80.
# min_pts: int
# The minimum number of points in a histogram cell.
# Default is 20.
# strictness: int
# How strict should the anderson-darling test for normality. 0: not at all strict; 4: very strict
# Default is 3.
# '''
# return self.run(outs, waterlevel_step, prune_alpha, min_pts, strictness, True)
| [
"numpy.delete",
"numpy.asarray",
"numpy.column_stack",
"numpy.max",
"numpy.array",
"numpy.sum",
"numpy.min",
"time.time"
] | [((3076, 3087), 'time.time', 'time.time', ([], {}), '()\n', (3085, 3087), False, 'import time\n'), ((4600, 4628), 'numpy.array', 'np.array', (['([True] * n_samples)'], {}), '([True] * n_samples)\n', (4608, 4628), True, 'import numpy as np\n'), ((5153, 5164), 'time.time', 'time.time', ([], {}), '()\n', (5162, 5164), False, 'import time\n'), ((6924, 6950), 'numpy.asarray', 'np.asarray', (['self.histogram'], {}), '(self.histogram)\n', (6934, 6950), True, 'import numpy as np\n'), ((6978, 7006), 'numpy.asarray', 'np.asarray', (['self.hcelsusp_wt'], {}), '(self.hcelsusp_wt)\n', (6988, 7006), True, 'import numpy as np\n'), ((8524, 8535), 'time.time', 'time.time', ([], {}), '()\n', (8533, 8535), False, 'import time\n'), ((8647, 8658), 'time.time', 'time.time', ([], {}), '()\n', (8656, 8658), False, 'import time\n'), ((8943, 8954), 'time.time', 'time.time', ([], {}), '()\n', (8952, 8954), False, 'import time\n'), ((10638, 10649), 'time.time', 'time.time', ([], {}), '()\n', (10647, 10649), False, 'import time\n'), ((11960, 11971), 'time.time', 'time.time', ([], {}), '()\n', (11969, 11971), False, 'import time\n'), ((3533, 3561), 'numpy.column_stack', 'np.column_stack', (['(deg, spec)'], {}), '((deg, spec))\n', (3548, 3561), True, 'import numpy as np\n'), ((4978, 5015), 'numpy.delete', 'np.delete', (['feature', 'degreeidx'], {'axis': '(1)'}), '(feature, degreeidx, axis=1)\n', (4987, 5015), True, 'import numpy as np\n'), ((1995, 2008), 'numpy.array', 'np.array', (['hub'], {}), '(hub)\n', (2003, 2008), True, 'import numpy as np\n'), ((2022, 2036), 'numpy.array', 'np.array', (['auth'], {}), '(auth)\n', (2030, 2036), True, 'import numpy as np\n'), ((2053, 2064), 'numpy.max', 'np.max', (['hub'], {}), '(hub)\n', (2059, 2064), True, 'import numpy as np\n'), ((2072, 2083), 'numpy.min', 'np.min', (['hub'], {}), '(hub)\n', (2078, 2083), True, 'import numpy as np\n'), ((2148, 2160), 'numpy.max', 'np.max', (['auth'], {}), '(auth)\n', (2154, 2160), True, 'import numpy as np\n'), 
((2168, 2180), 'numpy.min', 'np.min', (['auth'], {}), '(auth)\n', (2174, 2180), True, 'import numpy as np\n'), ((3473, 3484), 'time.time', 'time.time', ([], {}), '()\n', (3482, 3484), False, 'import time\n'), ((4798, 4811), 'numpy.sum', 'np.sum', (['index'], {}), '(index)\n', (4804, 4811), True, 'import numpy as np\n'), ((5607, 5618), 'time.time', 'time.time', ([], {}), '()\n', (5616, 5618), False, 'import time\n'), ((9508, 9519), 'time.time', 'time.time', ([], {}), '()\n', (9517, 9519), False, 'import time\n')] |
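`EagleMine.get_graph_spectral` above takes the leading singular vectors of the adjacency matrix as hubness/authority features, flipping the sign when SVD returns the negated vector and clamping negatives to zero. A self-contained sketch of that routine on a toy 3-node directed graph (the graph itself is an assumption for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def graph_spectral(sm):
    """Leading left/right singular vectors of the adjacency matrix,
    sign-fixed to be non-negative (hubness / authority scores)."""
    hub, _, auth = svds(1.0 * sm.tocsr(), k=1, which='LM')
    hub, auth = np.squeeze(np.asarray(hub)), np.squeeze(np.asarray(auth))
    for v in (hub, auth):
        # SVD is sign-ambiguous: flip if the vector came back negated.
        if abs(v.max()) < abs(v.min()):
            v *= -1
        v[v < 0] = 0
    return hub, auth

# Tiny directed graph: 0 -> 1, 0 -> 2, 1 -> 2
A = csr_matrix(np.array([[0, 1, 1],
                     [0, 0, 1],
                     [0, 0, 0]], dtype=float))
hub, auth = graph_spectral(A)
```

Node 0 (two outgoing edges) gets the largest hubness, node 2 (two incoming edges) the largest authority, which is exactly the pairing `graph2feature` uses with out-degree and in-degree.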
import numpy as np
import random
from collections import namedtuple, deque
from dueling_model import Dueling_QNetwork
from model import QNetwork
import torch
import torch.nn.functional as F
import torch.optim as optim
BUFFER_SIZE = int(1e5) # replay buffer size
BATCH_SIZE = 64 # minibatch size
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR = 5e-4 # learning rate
UPDATE_EVERY = 4 # how often to update the network
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Double_DQN_Agent():
"""Interacts with and learns from the environment."""
def __init__(self, state_size, action_size, seed, network_type=None):
"""Initialize an Agent object.
Params
======
state_size (int): dimension of each state
action_size (int): dimension of each action
            seed (int): random seed
            network_type (str): 'Dueling' selects the dueling architecture; None (default) uses the vanilla Q-network
        """
self.state_size = state_size
self.action_size = action_size
self.seed = random.seed(seed)
# Specify Network
# Vanilla Q-Network
        if network_type is None:
print("Using double-DQN network")
self.qnetwork_local = QNetwork(state_size, action_size, seed).to(device)
self.qnetwork_target = QNetwork(state_size, action_size, seed).to(device)
# Dueling Q-Network
elif network_type == 'Dueling':
print("Using double-DQN Dueling network")
self.qnetwork_local = Dueling_QNetwork(state_size, action_size, seed).to(device)
self.qnetwork_target = Dueling_QNetwork(state_size, action_size, seed).to(device)
self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=LR)
# Replay memory
self.memory = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, seed)
# Initialize time step (for updating every UPDATE_EVERY steps)
self.t_step = 0
def step(self, state, action, reward, next_state, done):
# Save experience in replay memory
self.memory.add(state, action, reward, next_state, done)
# Learn every UPDATE_EVERY time steps.
self.t_step = (self.t_step + 1) % UPDATE_EVERY
if self.t_step == 0:
# If enough samples are available in memory, get random subset and learn
if len(self.memory) > BATCH_SIZE:
experiences = self.memory.sample()
self.learn(experiences, GAMMA)
def act(self, state, eps=0.):
"""Returns actions for given state as per current policy.
Params
======
state (array_like): current state
eps (float): epsilon, for epsilon-greedy action selection
"""
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
self.qnetwork_local.eval()
with torch.no_grad():
action_values = self.qnetwork_local(state) # same as self.qnetwork_local.forward(state)
self.qnetwork_local.train()
# Epsilon-greedy action selection
if random.random() > eps:
return np.argmax(action_values.cpu().data.numpy())
else:
return random.choice(np.arange(self.action_size))
def learn(self, experiences, gamma):
"""Update value parameters using given batch of experience tuples.
Params
======
experiences (Tuple[torch.Tensor]): tuple of (s, a, r, s', done) tuples
gamma (float): discount factor
"""
states, actions, rewards, next_states, dones = experiences
        # Double DQN keeps two Q-networks: qnetwork_local picks the greedy next action,
        # but that action's value is read from qnetwork_target, whose weights are held
        # fixed during this step and are only soft-updated after backpropagation completes.
        # Compared to vanilla DQN, this decoupling reduces overestimation of the Q-values.
# Get the local net next State value
qns_local = self.qnetwork_local.forward(next_states)
# Get the argmax action of state of local net
_, qnsa_local_argmax_a = torch.max(qns_local, dim=1)
# Get the target net next state value
qns_target = self.qnetwork_target.forward(next_states)
# Use the argmax a of local net to get the next state Q in target net
qnsa_target = qns_target[torch.arange(BATCH_SIZE, dtype=torch.long), qnsa_local_argmax_a.reshape(BATCH_SIZE)]
# Get the TD target
qnsa_target = qnsa_target * (1 - dones.reshape(BATCH_SIZE))
qnsa_target = qnsa_target.reshape((BATCH_SIZE, 1))
TD_target = rewards + gamma * qnsa_target
# Get the train value
qs_local = self.qnetwork_local.forward(states)
qsa_local = qs_local[torch.arange(BATCH_SIZE, dtype=torch.long), actions.reshape(BATCH_SIZE)]
qsa_local = qsa_local.reshape((BATCH_SIZE, 1))
# Backpropagation
loss = F.mse_loss(qsa_local, TD_target) # mean square loss
self.optimizer.zero_grad() # clears the gradients
loss.backward()
self.optimizer.step()
# ------------------- update target network ------------------- #
self.soft_update(self.qnetwork_local, self.qnetwork_target, TAU)
def soft_update(self, local_model, target_model, tau):
"""Soft update model parameters.
θ_target = τ*θ_local + (1 - τ)*θ_target
Params
======
local_model (PyTorch model): weights will be copied from
target_model (PyTorch model): weights will be copied to
tau (float): interpolation parameter
"""
for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):
target_param.data.copy_(tau * local_param.data + (1.0 - tau) * target_param.data)
class ReplayBuffer:
"""Fixed-size buffer to store experience tuples."""
def __init__(self, action_size, buffer_size, batch_size, seed):
"""Initialize a ReplayBuffer object.
Params
======
action_size (int): dimension of each action
buffer_size (int): maximum size of buffer
batch_size (int): size of each training batch
seed (int): random seed
"""
self.action_size = action_size
self.memory = deque(maxlen=buffer_size)
self.batch_size = batch_size
self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"])
self.seed = random.seed(seed)
def add(self, state, action, reward, next_state, done):
"""Add a new experience to memory."""
e = self.experience(state, action, reward, next_state, done)
self.memory.append(e)
def sample(self):
"""Randomly sample a batch of experiences from memory."""
experiences = random.sample(self.memory, k=self.batch_size)
states = torch.from_numpy(np.vstack([e.state for e in experiences if e is not None])).float().to(device)
actions = torch.from_numpy(np.vstack([e.action for e in experiences if e is not None])).long().to(device)
rewards = torch.from_numpy(np.vstack([e.reward for e in experiences if e is not None])).float().to(device)
next_states = torch.from_numpy(np.vstack([e.next_state for e in experiences if e is not None])).float().to(
device)
dones = torch.from_numpy(np.vstack([e.done for e in experiences if e is not None]).astype(np.uint8)).float().to(
device)
return (states, actions, rewards, next_states, dones)
def __len__(self):
"""Return the current size of internal memory."""
return len(self.memory) | [
"random.sample",
"torch.nn.functional.mse_loss",
"collections.deque",
"collections.namedtuple",
"model.QNetwork",
"torch.max",
"random.seed",
"torch.from_numpy",
"torch.cuda.is_available",
"torch.arange",
"numpy.vstack",
"dueling_model.Dueling_QNetwork",
"torch.no_grad",
"random.random",
... | [((497, 522), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (520, 522), False, 'import torch\n'), ((1020, 1037), 'random.seed', 'random.seed', (['seed'], {}), '(seed)\n', (1031, 1037), False, 'import random\n'), ((4135, 4162), 'torch.max', 'torch.max', (['qns_local'], {'dim': '(1)'}), '(qns_local, dim=1)\n', (4144, 4162), False, 'import torch\n'), ((4959, 4991), 'torch.nn.functional.mse_loss', 'F.mse_loss', (['qsa_local', 'TD_target'], {}), '(qsa_local, TD_target)\n', (4969, 4991), True, 'import torch.nn.functional as F\n'), ((6345, 6370), 'collections.deque', 'deque', ([], {'maxlen': 'buffer_size'}), '(maxlen=buffer_size)\n', (6350, 6370), False, 'from collections import namedtuple, deque\n'), ((6434, 6527), 'collections.namedtuple', 'namedtuple', (['"""Experience"""'], {'field_names': "['state', 'action', 'reward', 'next_state', 'done']"}), "('Experience', field_names=['state', 'action', 'reward',\n 'next_state', 'done'])\n", (6444, 6527), False, 'from collections import namedtuple, deque\n'), ((6544, 6561), 'random.seed', 'random.seed', (['seed'], {}), '(seed)\n', (6555, 6561), False, 'import random\n'), ((6879, 6924), 'random.sample', 'random.sample', (['self.memory'], {'k': 'self.batch_size'}), '(self.memory, k=self.batch_size)\n', (6892, 6924), False, 'import random\n'), ((2842, 2857), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (2855, 2857), False, 'import torch\n'), ((3050, 3065), 'random.random', 'random.random', ([], {}), '()\n', (3063, 3065), False, 'import random\n'), ((3183, 3210), 'numpy.arange', 'np.arange', (['self.action_size'], {}), '(self.action_size)\n', (3192, 3210), True, 'import numpy as np\n'), ((4383, 4425), 'torch.arange', 'torch.arange', (['BATCH_SIZE'], {'dtype': 'torch.long'}), '(BATCH_SIZE, dtype=torch.long)\n', (4395, 4425), False, 'import torch\n'), ((4789, 4831), 'torch.arange', 'torch.arange', (['BATCH_SIZE'], {'dtype': 'torch.long'}), '(BATCH_SIZE, dtype=torch.long)\n', (4801, 4831), False, 
'import torch\n'), ((1207, 1246), 'model.QNetwork', 'QNetwork', (['state_size', 'action_size', 'seed'], {}), '(state_size, action_size, seed)\n', (1215, 1246), False, 'from model import QNetwork\n'), ((1293, 1332), 'model.QNetwork', 'QNetwork', (['state_size', 'action_size', 'seed'], {}), '(state_size, action_size, seed)\n', (1301, 1332), False, 'from model import QNetwork\n'), ((1501, 1548), 'dueling_model.Dueling_QNetwork', 'Dueling_QNetwork', (['state_size', 'action_size', 'seed'], {}), '(state_size, action_size, seed)\n', (1517, 1548), False, 'from dueling_model import Dueling_QNetwork\n'), ((1595, 1642), 'dueling_model.Dueling_QNetwork', 'Dueling_QNetwork', (['state_size', 'action_size', 'seed'], {}), '(state_size, action_size, seed)\n', (1611, 1642), False, 'from dueling_model import Dueling_QNetwork\n'), ((6960, 7018), 'numpy.vstack', 'np.vstack', (['[e.state for e in experiences if e is not None]'], {}), '([e.state for e in experiences if e is not None])\n', (6969, 7018), True, 'import numpy as np\n'), ((7074, 7133), 'numpy.vstack', 'np.vstack', (['[e.action for e in experiences if e is not None]'], {}), '([e.action for e in experiences if e is not None])\n', (7083, 7133), True, 'import numpy as np\n'), ((7188, 7247), 'numpy.vstack', 'np.vstack', (['[e.reward for e in experiences if e is not None]'], {}), '([e.reward for e in experiences if e is not None])\n', (7197, 7247), True, 'import numpy as np\n'), ((7307, 7370), 'numpy.vstack', 'np.vstack', (['[e.next_state for e in experiences if e is not None]'], {}), '([e.next_state for e in experiences if e is not None])\n', (7316, 7370), True, 'import numpy as np\n'), ((2738, 2761), 'torch.from_numpy', 'torch.from_numpy', (['state'], {}), '(state)\n', (2754, 2761), False, 'import torch\n'), ((7437, 7494), 'numpy.vstack', 'np.vstack', (['[e.done for e in experiences if e is not None]'], {}), '([e.done for e in experiences if e is not None])\n', (7446, 7494), True, 'import numpy as np\n')] |
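The agent code in the row above selects the next-state action with the local network but evaluates it with the target network before forming the TD target. A minimal NumPy sketch of just that target computation (hand-made Q-tables stand in for the two networks; all numbers are illustrative):

```python
import numpy as np

# Illustrative Q-values for a batch of 3 transitions over 2 actions.
q_next_local = np.array([[1.0, 2.0], [0.5, 0.2], [3.0, 3.5]])   # selection net
q_next_target = np.array([[0.9, 1.8], [0.4, 0.3], [2.5, 3.1]])  # evaluation net
rewards = np.array([1.0, 0.0, -1.0])
dones = np.array([0.0, 0.0, 1.0])  # third transition ends the episode
gamma = 0.99

best_actions = q_next_local.argmax(axis=1)            # argmax from the local net
q_eval = q_next_target[np.arange(3), best_actions]     # value from the target net
td_target = rewards + gamma * q_eval * (1.0 - dones)  # no bootstrap past terminal
assert np.allclose(td_target, [2.782, 0.396, -1.0])
```

The same indexing pattern appears in `learn()` above, with `torch.arange(BATCH_SIZE, dtype=torch.long)` playing the role of `np.arange`.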
import numpy as np
from typing import Callable
import lcs.agents.xcs as xcs
class Configuration(xcs.Configuration):
def __init__(self,
                 number_of_actions: int,  # theta_mna; it is sensible to set this equal to the number of actions
lmc: int = 100,
lem: float = 1,
classifier_wildcard: str = '#',
max_population: int = 200, # n
learning_rate: float = 0.1, # beta
alpha: float = 0.1,
epsilon_0: float = 10,
v: int = 5,
gamma: float = 0.71,
ga_threshold: int = 25,
chi: float = 0.5,
mutation_chance: float = 0.01, # mu
deletion_threshold: int = 20, # theta_del
delta: float = 0.1,
subsumption_threshold: int = 20, # theta_sub
covering_wildcard_chance: float = 0.33, # population wildcard
initial_prediction: float = float(np.finfo(np.float32).tiny), # p_i
initial_error: float = float(np.finfo(np.float32).tiny), # epsilon_i
initial_fitness: float = float(np.finfo(np.float32).tiny), # f_i
epsilon: float = 0.5, # p_exp, exploration probability
do_ga_subsumption: bool = False,
do_action_set_subsumption: bool = False,
metrics_trial_frequency: int = 5,
user_metrics_collector_fcn: Callable = None
) -> None:
self.lmc = lmc
self.lem = lem
self.classifier_wildcard = classifier_wildcard
self.max_population = max_population
self.learning_rate = learning_rate
self.alpha = alpha
self.epsilon_0 = epsilon_0
self.v = v
self.gamma = gamma
self.ga_threshold = ga_threshold
self.chi = chi
self.mutation_chance = mutation_chance
self.deletion_threshold = deletion_threshold
self.delta = delta
self.subsumption_threshold = subsumption_threshold
self.covering_wildcard_chance = covering_wildcard_chance
self.initial_prediction = initial_prediction
self.initial_error = initial_error
self.initial_fitness = initial_fitness
self.epsilon = epsilon # p_exp, probability of exploration
self.number_of_actions = number_of_actions
self.do_GA_subsumption = do_ga_subsumption
self.do_action_set_subsumption = do_action_set_subsumption
self.metrics_trial_frequency = metrics_trial_frequency
self.user_metrics_collector_fcn = user_metrics_collector_fcn
| [
"numpy.finfo"
] | [((1038, 1058), 'numpy.finfo', 'np.finfo', (['np.float32'], {}), '(np.float32)\n', (1046, 1058), True, 'import numpy as np\n'), ((1119, 1139), 'numpy.finfo', 'np.finfo', (['np.float32'], {}), '(np.float32)\n', (1127, 1139), True, 'import numpy as np\n'), ((1208, 1228), 'numpy.finfo', 'np.finfo', (['np.float32'], {}), '(np.float32)\n', (1216, 1228), True, 'import numpy as np\n')] |
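The configuration row above seeds `initial_prediction`, `initial_error`, and `initial_fitness` with `np.finfo(np.float32).tiny`. A quick standalone check of that default (it is the smallest positive normalized float32, so new classifiers start essentially at zero without ever being exactly zero):

```python
import numpy as np

# Smallest positive normalized float32, as used for the initial XCS estimates.
tiny = float(np.finfo(np.float32).tiny)
print(tiny)  # about 1.18e-38
assert 0.0 < tiny < 1e-37
```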
import cv2
import numpy
#url = "rtsp://71014217:123@10.10.10.209:8554/profile0"
#cam = cv2.VideoCapture(url, cv2.CAP_FFMPEG)
cam = cv2.VideoCapture(0)
kernel = numpy.ones((5 ,5), numpy.uint8)
if cam is None or not cam.isOpened():
print('Warning: unable to open video source: ', cam)
else:
while (True):
ret, frame = cam.read()
cinza = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, imagembin = cv2.threshold(cinza, 90, 255, cv2.THRESH_BINARY)
imagemdesfoq = cv2.GaussianBlur(imagembin, (5,5), 0)
contornos, hier = cv2.findContours(imagemdesfoq, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(frame, contornos, -1, (0, 255, 0), 2)
#circles = cv2.HoughCircles(imagemdesfoq, cv2.HOUGH_GRADIENT, 1.2, 100)
for c in contornos:
contornofechado = cv2.arcLength(c, True)
aproximarforma = cv2.approxPolyDP(c, 0.03 * contornofechado, True)
cv2.imshow('contornos', frame)
#rangomax = numpy.array([255, 50, 50]) # B, G, R
#rangomin = numpy.array([51, 0, 0])
# mask = cv2.inRange(frame, rangomin, rangomax)
# reduce the noise
#opening = cv2.morphologyEx(mask, cv2.MORPH_BLACKHAT, kernel)
#x, y, w, h = cv2.boundingRect(opening)
#cv2.rectangle(frame, (x, y), (x+w, y + h), (0, 255, 0), 3)
#cv2.circle(frame, (int(x+w/2), int(y+h/2)), 5, (0, 0, 255), -1)
#cv2.imshow('camera', frame)
#cv2.imshow('cinza', cinza)
#cv2.imshow('Binarizada', imagembin)
#cv2.imshow('Sem Ruidos + Desfoque', imagemdesfoq)
k = cv2.waitKey(1) & 0xFF
if k == 27:
break | [
"cv2.drawContours",
"numpy.ones",
"cv2.threshold",
"cv2.arcLength",
"cv2.imshow",
"cv2.approxPolyDP",
"cv2.VideoCapture",
"cv2.cvtColor",
"cv2.findContours",
"cv2.GaussianBlur",
"cv2.waitKey"
] | [((132, 151), 'cv2.VideoCapture', 'cv2.VideoCapture', (['(0)'], {}), '(0)\n', (148, 151), False, 'import cv2\n'), ((161, 192), 'numpy.ones', 'numpy.ones', (['(5, 5)', 'numpy.uint8'], {}), '((5, 5), numpy.uint8)\n', (171, 192), False, 'import numpy\n'), ((361, 400), 'cv2.cvtColor', 'cv2.cvtColor', (['frame', 'cv2.COLOR_BGR2GRAY'], {}), '(frame, cv2.COLOR_BGR2GRAY)\n', (373, 400), False, 'import cv2\n'), ((424, 472), 'cv2.threshold', 'cv2.threshold', (['cinza', '(90)', '(255)', 'cv2.THRESH_BINARY'], {}), '(cinza, 90, 255, cv2.THRESH_BINARY)\n', (437, 472), False, 'import cv2\n'), ((496, 534), 'cv2.GaussianBlur', 'cv2.GaussianBlur', (['imagembin', '(5, 5)', '(0)'], {}), '(imagembin, (5, 5), 0)\n', (512, 534), False, 'import cv2\n'), ((560, 630), 'cv2.findContours', 'cv2.findContours', (['imagemdesfoq', 'cv2.RETR_TREE', 'cv2.CHAIN_APPROX_SIMPLE'], {}), '(imagemdesfoq, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\n', (576, 630), False, 'import cv2\n'), ((639, 693), 'cv2.drawContours', 'cv2.drawContours', (['frame', 'contornos', '(-1)', '(0, 255, 0)', '(2)'], {}), '(frame, contornos, -1, (0, 255, 0), 2)\n', (655, 693), False, 'import cv2\n'), ((832, 854), 'cv2.arcLength', 'cv2.arcLength', (['c', '(True)'], {}), '(c, True)\n', (845, 854), False, 'import cv2\n'), ((884, 933), 'cv2.approxPolyDP', 'cv2.approxPolyDP', (['c', '(0.03 * contornofechado)', '(True)'], {}), '(c, 0.03 * contornofechado, True)\n', (900, 933), False, 'import cv2\n'), ((946, 976), 'cv2.imshow', 'cv2.imshow', (['"""contornos"""', 'frame'], {}), "('contornos', frame)\n", (956, 976), False, 'import cv2\n'), ((1649, 1663), 'cv2.waitKey', 'cv2.waitKey', (['(1)'], {}), '(1)\n', (1660, 1663), False, 'import cv2\n')] |
from Node3D.base.node import GeometryNode
from Node3D.opengl import Mesh
from Node3D.vendor.NodeGraphQt.constants import *
import numpy as np
from Node3D.base.mesh.base_primitives import generate_tube, generate_sphere, \
generate_cylinder, generate_cone, generate_torus
import open3d as o3d
class Tube(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Tube'
def __init__(self):
super(Tube, self).__init__()
params = [{'name': 'Bottom center', 'type': 'vector3', 'value': [0, 0, 0]},
{'name': 'Top center', 'type': 'vector3', 'value': [0, 1, 0]},
{'name': 'Outer radius', 'type': 'vector2', 'value': [0.5, 0.5]},
{'name': 'Inner radius', 'type': 'vector2', 'value': [0.3, 0.3]},
{'name': 'Segments', 'type': 'int', 'value': 10, 'limits': (3, 30)},
{'name': 'Quad', 'type': 'bool', 'value': True}]
self.set_parameters(params)
self.cook()
def run(self):
outer = self.get_property("Outer radius")
inner = self.get_property("Inner radius")
s = self.get_property("Segments")
if s < 3:
self.geo = None
return
vertices, faces = generate_tube(self.get_property("Bottom center"), self.get_property("Top center"),
outer[0], outer[1], inner[0], inner[1], s,
self.get_property("Quad"))
self.geo = Mesh()
self.geo.addVertices(vertices)
self.geo.addFaces(faces)
class Box(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Box'
def __init__(self):
super(Box, self).__init__()
self.create_property("Size", value=[1, 1, 1], widget_type=NODE_PROP_VECTOR3)
self.cook()
def run(self):
size = self.get_property("Size")
x = size[0] * 0.5
y = size[1] * 0.5
z = size[2] * 0.5
self.geo = Mesh()
v1 = self.geo.addVertex([x, -y, -z])
v2 = self.geo.addVertex([x, -y, z])
v3 = self.geo.addVertex([x, y, z])
v4 = self.geo.addVertex([x, y, -z])
v5 = self.geo.addVertex([-x, -y, -z])
v6 = self.geo.addVertex([-x, -y, z])
v7 = self.geo.addVertex([-x, y, z])
v8 = self.geo.addVertex([-x, y, -z])
self.geo.addFace([v1, v2, v3, v4])
self.geo.addFace([v2, v6, v7, v3])
self.geo.addFace([v6, v5, v8, v7])
self.geo.addFace([v5, v1, v4, v8])
self.geo.addFace([v4, v3, v7, v8])
self.geo.addFace([v5, v6, v2, v1])
self.geo.mesh.update_normals()
class Grid(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Grid'
def __init__(self):
super(Grid, self).__init__()
params = [{'name': 'Size', 'type': 'vector2', 'value': [10, 10]},
{'name': 'Resolution', 'type': 'vector2i', 'value': [10, 10]}]
self.set_parameters(params)
self.cook()
def run(self):
size = self.get_property("Size")
resolution = self.get_property("Resolution")
x = size[0] * 0.5
z = size[1] * 0.5
fx = resolution[0]
fz = resolution[1]
if fx < 2 or fz < 2:
self.geo = None
return
x_range = np.linspace(-x, x, fx)
z_range = np.linspace(-z, z, fz)
vertices = np.dstack(np.meshgrid(x_range, z_range, np.array([0.0]))).reshape(-1, 3)
a = np.add.outer(np.array(range(fx - 1)), fx * np.array(range(fz - 1)))
faces = np.dstack([a, a + 1, a + fx + 1, a + fx]).reshape(-1, 4)
nms = np.zeros((vertices.shape[0], 3), dtype=float)
nms[..., 1] = 1
self.geo = Mesh()
self.geo.addVertices(vertices[:, [0, 2, 1]])
self.geo.addFaces(faces)
self.geo.setVertexAttribData('normal', nms, attribType='vector3', defaultValue=[0, 0, 0])
class Arrow(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Arrow'
def __init__(self):
super(Arrow, self).__init__()
params = [{'name': 'Radius', 'type': 'vector2', 'value': [1, 1.5]},
{'name': 'Height', 'type': 'vector2', 'value': [2, 4]},
{'name': 'Cylinder split', 'type': 'int', 'value': 1, 'limits': (1, 10)},
{'name': 'Cone split', 'type': 'int', 'value': 1, 'limits': (1, 10)},
{'name': 'Resolution', 'type': 'int', 'value': 20, 'limits': (3, 30)}]
self.set_parameters(params)
self.cook()
def run(self):
radius = self.get_property("Radius")
height = self.get_property("Height")
tri = o3d.geometry.TriangleMesh.create_arrow(radius[0], radius[1], height[0], height[1],
self.get_property("Resolution"),
self.get_property("Cylinder split"),
self.get_property("Cone split"))
self.geo = Mesh()
self.geo.addVertices(np.array(tri.vertices)[:, [0, 2, 1]])
self.geo.addFaces(np.array(tri.triangles))
class Cone(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Cone'
def __init__(self):
super(Cone, self).__init__()
params = [{'name': 'Radius', 'type': 'float', 'value': 1.0},
{'name': 'Height', 'type': 'float', 'value': 2.0},
{'name': 'Split', 'type': 'int', 'value': 1, 'limits': (1, 10)},
{'name': 'Resolution', 'type': 'int', 'value': 20, 'limits': (3, 30)},
{'name': 'Cap', 'type': 'bool', 'value': True}]
self.set_parameters(params)
self.cook()
def run(self):
s = self.get_property("Resolution")
if s < 3:
self.geo = None
return
tri, quad, vt = generate_cone(self.get_property("Radius"),
self.get_property("Height"),
s,
self.get_property("Split"))
self.geo = Mesh()
self.geo.addVertices(vt)
self.geo.addFaces(quad)
self.geo.addFaces(tri)
if not self.get_property("Cap"):
self.geo.removeVertex(0, True)
class CoordinateFrame(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Coordinate Frame'
def __init__(self):
super(CoordinateFrame, self).__init__()
params = [{'name': 'Size', 'type': 'float', 'value': 1.0},
{'name': 'Origin', 'type': 'vector3', 'value': [0, 0, 0]}]
self.set_parameters(params)
self.cook()
def run(self):
size = self.get_property("Size")
if size == 0:
size = 0.0001
tri = o3d.geometry.TriangleMesh.create_coordinate_frame(size, self.get_property("Origin"))
self.geo = Mesh()
self.geo.addVertices(np.array(tri.vertices)[:, [0, 2, 1]])
self.geo.addFaces(np.array(tri.triangles))
class Cylinder(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Cylinder'
def __init__(self):
super(Cylinder, self).__init__()
params = [{'name': 'Radius', 'type': 'float', 'value': 1.0},
{'name': 'Height', 'type': 'float', 'value': 2.0},
{'name': 'Split', 'type': 'int', 'value': 4, 'limits': (1, 10)},
{'name': 'Resolution', 'type': 'int', 'value': 20, 'limits': (3, 30)},
{'name': 'Cap', 'type': 'bool', 'value': True}]
self.set_parameters(params)
self.cook()
def run(self):
s = self.get_property("Resolution")
if s < 3:
self.geo = None
return
tri, quad, vt = generate_cylinder(self.get_property("Radius"),
self.get_property("Height"),
s,
self.get_property("Split"))
self.geo = Mesh()
self.geo.addVertices(vt)
self.geo.addFaces(quad)
if self.get_property("Cap"):
self.geo.addFaces(tri)
else:
self.geo.removeVertices([0, 1])
class Icosahedron(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Icosahedron'
def __init__(self):
super(Icosahedron, self).__init__()
params = [{'name': 'Radius', 'type': 'float', 'value': 1.0}]
self.set_parameters(params)
self.cook()
def run(self):
rad = self.get_property("Radius")
if rad == 0:
rad = 0.0001
tri = o3d.geometry.TriangleMesh.create_icosahedron(rad)
self.geo = Mesh()
self.geo.addVertices(np.array(tri.vertices)[:, [0, 2, 1]])
self.geo.addFaces(np.array(tri.triangles))
class Moebius(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Moebius'
def __init__(self):
super(Moebius, self).__init__()
params = [{'name': 'Length Split', 'type': 'int', 'value': 70, 'limits': (1, 100)},
{'name': 'Width Split', 'type': 'int', 'value': 15, 'limits': (1, 100)},
{'name': 'Twists', 'type': 'int', 'value': 1, 'limits': (0, 10)},
                  {'name': 'Radius', 'type': 'float', 'value': 1, 'limits': (0, 10)},
                  {'name': 'Flatness', 'type': 'float', 'value': 1, 'limits': (0, 30)},
                  {'name': 'Width', 'type': 'float', 'value': 1, 'limits': (0, 10)},
                  {'name': 'Scale', 'type': 'float', 'value': 1, 'limits': (0, 30)}]
        self.set_parameters(params)
        self.cook()
    def run(self):
        tri = o3d.geometry.TriangleMesh.create_moebius(self.get_property('Length Split'),
                                                       self.get_property('Width Split'),
                                                       self.get_property("Twists"),
                                                       self.get_property("Radius"),
self.get_property("Flatness"),
self.get_property("Width"),
self.get_property("Scale"))
self.geo = Mesh()
self.geo.addVertices(np.array(tri.vertices)[:, [0, 2, 1]])
self.geo.addFaces(np.array(tri.triangles))
class Octahedron(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Octahedron'
def __init__(self):
super(Octahedron, self).__init__()
params = [{'name': 'Radius', 'type': 'float', 'value': 1.0}]
self.set_parameters(params)
self.cook()
def run(self):
rad = self.get_property("Radius")
if rad == 0:
rad = 0.0001
tri = o3d.geometry.TriangleMesh.create_octahedron(rad)
self.geo = Mesh()
self.geo.addVertices(np.array(tri.vertices)[:, [0, 2, 1]])
self.geo.addFaces(np.array(tri.triangles))
class Sphere(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Sphere'
def __init__(self):
super(Sphere, self).__init__()
params = [{'name': 'Radius', 'type': 'float', 'value': 1.0},
{'name': 'Resolution', 'type': 'int', 'value': 20, 'limits': (2, 50)}]
self.set_parameters(params)
self.cook()
def run(self):
rad = self.get_property("Radius")
if rad == 0:
rad = 0.0001
s = self.get_property("Resolution")
if s < 2:
self.geo = None
return
tri, quad, vt = generate_sphere(rad, s)
self.geo = Mesh()
self.geo.addVertices(vt)
self.geo.addFaces(tri)
self.geo.addFaces(quad)
class Tetrahedron(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Tetrahedron'
def __init__(self):
super(Tetrahedron, self).__init__()
params = [{'name': 'Radius', 'type': 'float', 'value': 1.0}]
self.set_parameters(params)
self.cook()
def run(self):
rad = self.get_property("Radius")
if rad == 0:
rad = 0.0001
tri = o3d.geometry.TriangleMesh.create_tetrahedron(rad)
self.geo = Mesh()
self.geo.addVertices(np.array(tri.vertices)[:, [0, 2, 1]])
self.geo.addFaces(np.array(tri.triangles))
class Torus(GeometryNode):
__identifier__ = 'Primitives'
NODE_NAME = 'Torus'
def __init__(self):
super(Torus, self).__init__()
params = [{'name': 'Radius', 'type': 'vector2', 'value': [1, 0.5]},
{'name': 'Radial resolution', 'type': 'int', 'value': 20, 'limits': (3, 50)},
{'name': 'Tubular resolution', 'type': 'int', 'value': 20, 'limits': (3, 50)}]
self.set_parameters(params)
self.cook()
def run(self):
rad = self.get_property("Radius")
if rad[0] == 0:
rad[0] = 0.0001
if rad[1] == 0:
rad[1] = 0.0001
r1 = self.get_property("Radial resolution")
r2 = self.get_property("Tubular resolution")
if r1 < 3 or r2 < 3:
self.geo = None
return
faces, vertices = generate_torus(rad[0], rad[1], r1, r2)
self.geo = Mesh()
self.geo.addVertices(vertices)
self.geo.addFaces(faces)
if __name__ == '__main__':
vertices, faces = generate_tube([0, 0, 0], [0, 1, 0],
2, 2, 1, 1, 5,
True)
print(vertices)
| [
"numpy.dstack",
"Node3D.base.mesh.base_primitives.generate_tube",
"Node3D.base.mesh.base_primitives.generate_sphere",
"Node3D.base.mesh.base_primitives.generate_torus",
"open3d.geometry.TriangleMesh.create_octahedron",
"numpy.array",
"numpy.linspace",
"numpy.zeros",
"open3d.geometry.TriangleMesh.cre... | [((13467, 13523), 'Node3D.base.mesh.base_primitives.generate_tube', 'generate_tube', (['[0, 0, 0]', '[0, 1, 0]', '(2)', '(2)', '(1)', '(1)', '(5)', '(True)'], {}), '([0, 0, 0], [0, 1, 0], 2, 2, 1, 1, 5, True)\n', (13480, 13523), False, 'from Node3D.base.mesh.base_primitives import generate_tube, generate_sphere, generate_cylinder, generate_cone, generate_torus\n'), ((1491, 1497), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (1495, 1497), False, 'from Node3D.opengl import Mesh\n'), ((1977, 1983), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (1981, 1983), False, 'from Node3D.opengl import Mesh\n'), ((3312, 3334), 'numpy.linspace', 'np.linspace', (['(-x)', 'x', 'fx'], {}), '(-x, x, fx)\n', (3323, 3334), True, 'import numpy as np\n'), ((3353, 3375), 'numpy.linspace', 'np.linspace', (['(-z)', 'z', 'fz'], {}), '(-z, z, fz)\n', (3364, 3375), True, 'import numpy as np\n'), ((3636, 3681), 'numpy.zeros', 'np.zeros', (['(vertices.shape[0], 3)'], {'dtype': 'float'}), '((vertices.shape[0], 3), dtype=float)\n', (3644, 3681), True, 'import numpy as np\n'), ((3725, 3731), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (3729, 3731), False, 'from Node3D.opengl import Mesh\n'), ((5032, 5038), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (5036, 5038), False, 'from Node3D.opengl import Mesh\n'), ((6128, 6134), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (6132, 6134), False, 'from Node3D.opengl import Mesh\n'), ((6925, 6931), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (6929, 6931), False, 'from Node3D.opengl import Mesh\n'), ((8049, 8055), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (8053, 8055), False, 'from Node3D.opengl import Mesh\n'), ((8668, 8717), 'open3d.geometry.TriangleMesh.create_icosahedron', 'o3d.geometry.TriangleMesh.create_icosahedron', (['rad'], {}), '(rad)\n', (8712, 8717), True, 'import open3d as o3d\n'), ((8738, 8744), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (8742, 8744), False, 'from Node3D.opengl 
import Mesh\n'), ((10327, 10333), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (10331, 10333), False, 'from Node3D.opengl import Mesh\n'), ((10866, 10914), 'open3d.geometry.TriangleMesh.create_octahedron', 'o3d.geometry.TriangleMesh.create_octahedron', (['rad'], {}), '(rad)\n', (10909, 10914), True, 'import open3d as o3d\n'), ((10935, 10941), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (10939, 10941), False, 'from Node3D.opengl import Mesh\n'), ((11671, 11694), 'Node3D.base.mesh.base_primitives.generate_sphere', 'generate_sphere', (['rad', 's'], {}), '(rad, s)\n', (11686, 11694), False, 'from Node3D.base.mesh.base_primitives import generate_tube, generate_sphere, generate_cylinder, generate_cone, generate_torus\n'), ((11714, 11720), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (11718, 11720), False, 'from Node3D.opengl import Mesh\n'), ((12234, 12283), 'open3d.geometry.TriangleMesh.create_tetrahedron', 'o3d.geometry.TriangleMesh.create_tetrahedron', (['rad'], {}), '(rad)\n', (12278, 12283), True, 'import open3d as o3d\n'), ((12304, 12310), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (12308, 12310), False, 'from Node3D.opengl import Mesh\n'), ((13279, 13317), 'Node3D.base.mesh.base_primitives.generate_torus', 'generate_torus', (['rad[0]', 'rad[1]', 'r1', 'r2'], {}), '(rad[0], rad[1], r1, r2)\n', (13293, 13317), False, 'from Node3D.base.mesh.base_primitives import generate_tube, generate_sphere, generate_cylinder, generate_cone, generate_torus\n'), ((13337, 13343), 'Node3D.opengl.Mesh', 'Mesh', ([], {}), '()\n', (13341, 13343), False, 'from Node3D.opengl import Mesh\n'), ((5132, 5155), 'numpy.array', 'np.array', (['tri.triangles'], {}), '(tri.triangles)\n', (5140, 5155), True, 'import numpy as np\n'), ((7025, 7048), 'numpy.array', 'np.array', (['tri.triangles'], {}), '(tri.triangles)\n', (7033, 7048), True, 'import numpy as np\n'), ((8838, 8861), 'numpy.array', 'np.array', (['tri.triangles'], {}), '(tri.triangles)\n', (8846, 8861), True, 'import numpy 
as np\n'), ((10427, 10450), 'numpy.array', 'np.array', (['tri.triangles'], {}), '(tri.triangles)\n', (10435, 10450), True, 'import numpy as np\n'), ((11035, 11058), 'numpy.array', 'np.array', (['tri.triangles'], {}), '(tri.triangles)\n', (11043, 11058), True, 'import numpy as np\n'), ((12404, 12427), 'numpy.array', 'np.array', (['tri.triangles'], {}), '(tri.triangles)\n', (12412, 12427), True, 'import numpy as np\n'), ((3564, 3605), 'numpy.dstack', 'np.dstack', (['[a, a + 1, a + fx + 1, a + fx]'], {}), '([a, a + 1, a + fx + 1, a + fx])\n', (3573, 3605), True, 'import numpy as np\n'), ((5068, 5090), 'numpy.array', 'np.array', (['tri.vertices'], {}), '(tri.vertices)\n', (5076, 5090), True, 'import numpy as np\n'), ((6961, 6983), 'numpy.array', 'np.array', (['tri.vertices'], {}), '(tri.vertices)\n', (6969, 6983), True, 'import numpy as np\n'), ((8774, 8796), 'numpy.array', 'np.array', (['tri.vertices'], {}), '(tri.vertices)\n', (8782, 8796), True, 'import numpy as np\n'), ((10363, 10385), 'numpy.array', 'np.array', (['tri.vertices'], {}), '(tri.vertices)\n', (10371, 10385), True, 'import numpy as np\n'), ((10971, 10993), 'numpy.array', 'np.array', (['tri.vertices'], {}), '(tri.vertices)\n', (10979, 10993), True, 'import numpy as np\n'), ((12340, 12362), 'numpy.array', 'np.array', (['tri.vertices'], {}), '(tri.vertices)\n', (12348, 12362), True, 'import numpy as np\n'), ((3435, 3450), 'numpy.array', 'np.array', (['[0.0]'], {}), '([0.0])\n', (3443, 3450), True, 'import numpy as np\n')] |
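The `Grid` node in the row above builds quad connectivity with `np.add.outer` and `np.dstack`. A standalone sketch of that index construction for the smallest interesting grid (`np.arange` substitutes for the row's `np.array(range(...))`):

```python
import numpy as np

# 3 x 2 vertex grid (fx columns, fz rows) -> (fx - 1) * (fz - 1) = 2 quads.
fx, fz = 3, 2
a = np.add.outer(np.arange(fx - 1), fx * np.arange(fz - 1))
faces = np.dstack([a, a + 1, a + fx + 1, a + fx]).reshape(-1, 4)
print(faces.tolist())  # [[0, 1, 4, 3], [1, 2, 5, 4]]
```

Each row of `faces` lists the four corner indices of one grid cell, matching the row-major vertex ordering produced by the node's `np.meshgrid` call.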
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 26 22:32:07 2019
@author: jaehooncha
@email: <EMAIL>
ResNet coarse-to-fine training runs on CIFAR-100
"""
import tensorflow as tf
from cifar100 import Cifar100
from collections import OrderedDict
from networks import cycle_lr
import argparse
import os
import numpy as np
from models import ResNet18
np.random.seed(0)
tf.set_random_seed(0)
def parse_args():
parser = argparse.ArgumentParser()
# optim config
parser.add_argument('--model_name', type=str, default = 'ResNet18_200_200_200')
parser.add_argument('--datasets', type = str, default = 'CIFAR100')
    # nargs='+' parses multi-value flags as real lists of numbers; the original
    # type=list would have split a command-line string into single characters.
    parser.add_argument('--epochs_set', nargs='+', type=int, default=[200, 200, 200])
    parser.add_argument('--batch_size', type=int, default=128)
    parser.add_argument('--base_lr', nargs='+', type=float, default=[0.08, 0.008])
    parser.add_argument('--max_lr', nargs='+', type=float, default=[0.5, 0.05])
parser.add_argument('--cycle_epoch', type = int, default = 20)
parser.add_argument('--cycle_ratio', type = float, default = 0.7)
parser.add_argument('--num_fines', type = int, default = 100)
parser.add_argument('--num_coarse', type = int, default = 20)
args = parser.parse_args()
config = OrderedDict([
('model_name', args.model_name),
('datasets', args.datasets),
('epochs_set', args.epochs_set),
('batch_size', args.batch_size),
('base_lr', args.base_lr),
('max_lr', args.max_lr),
('cycle_epoch', args.cycle_epoch),
('cycle_ratio', args.cycle_ratio),
('num_fines', args.num_fines),
('num_coarse', args.num_coarse)])
return config
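A minimal standalone sketch of the config construction above, using only two of the flags and argparse defaults (no command-line input); flag names mirror the script's, but the snippet is illustrative only:

```python
import argparse
from collections import OrderedDict

# Hypothetical reduced parse_args(): nargs='+' makes multi-value flags
# parse as real lists of numbers; defaults pass through unchanged.
parser = argparse.ArgumentParser()
parser.add_argument('--epochs_set', nargs='+', type=int, default=[200, 200, 200])
parser.add_argument('--base_lr', nargs='+', type=float, default=[0.08, 0.008])
args = parser.parse_args([])  # empty argv -> pure defaults

config = OrderedDict([('epochs_set', args.epochs_set), ('base_lr', args.base_lr)])
print(config['epochs_set'])  # -> [200, 200, 200]
```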
config = parse_args()
### call data ###
cifar100 = Cifar100()
n_samples = cifar100.num_examples
n_test_samples = cifar100.num_test_examples
### make models ###
### shared models ###
model = ResNet18(config['num_fines'],'shared', False)
shared_pred, shared_loss = model.Forward()
### coarse models ###
model_coarse = ResNet18(config['num_coarse'], 'coarse', True)
coarse_pred, coarse_loss = model_coarse.Forward()
### fine models ###
model_fines = {}
fines_pred = {}
fines_loss = {}
for i in range(config['num_coarse']):
model_fines[i] = ResNet18(config['num_fines'], 'fine{:02}'.format(i), True)
fines_pred[i], fines_loss[i] = model_fines[i].Forward()
### make folder ###
mother_folder = config['model_name']
try:
os.mkdir(mother_folder)
except OSError:
pass
def run(mod, pred, loss, epochs, base_lr, max_lr, name):
    train_loss_set = []
    train_acc_set = []
    test_loss_set = []
    test_acc_set = []
    iter_per_epoch = int(n_samples/config['batch_size'])
    Lr = cycle_lr(base_lr, max_lr, iter_per_epoch,
                  config['cycle_epoch'], config['cycle_ratio'], epochs)
cy_lr = tf.placeholder(tf.float32, shape=(), name = "cy_lr_"+name)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=cy_lr).minimize(loss)
prediction = tf.argmax(pred, 1)
correct_prediction = tf.equal(prediction, tf.argmax(mod.y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name = "accuracy_"+name)
iteration = 0
iter_per_test_epoch = n_test_samples/config['batch_size']
for epoch in range(epochs):
epoch_loss = 0.
epoch_acc = 0.
for iter_in_epoch in range(iter_per_epoch):
epoch_x, epoch_fine, epoch_coarse = cifar100.next_train_batch(config['batch_size'])
_, c, acc = sess.run([optimizer, loss, accuracy],#, summ],
feed_dict = {mod.x: epoch_x, mod.y: epoch_fine,
mod.training:True, cy_lr: Lr[iteration],
mod.keep_prob: 0.7})
epoch_loss += c
epoch_acc += acc
iteration+=1
if iter_in_epoch%100 == 0:
print('Epoch ', epoch, '{:.2f}%'.format(100*(iter_in_epoch+1)/int(iter_per_epoch)),
'completed out of ', epochs, 'loss: ', epoch_loss/(iter_in_epoch+1),
'acc: ', '{:.2f}%'.format(epoch_acc*100/(iter_in_epoch+1)))
print('######################')
print('TRAIN')
print('Epoch ', epoch, '{:.2f}%'.format(100*(iter_in_epoch+1)/int(iter_per_epoch)),
'completed out of ', epochs, 'loss: ', epoch_loss/int(iter_per_epoch),
'acc: ', '{:.2f}%'.format(epoch_acc*100/int(iter_per_epoch)))
train_loss_set.append(epoch_loss/int(iter_per_epoch))
train_acc_set.append(epoch_acc*100/int(iter_per_epoch))
test_loss = 0.
test_acc = 0.
for iter_in_epoch in range(int(iter_per_test_epoch)):
epoch_x, epoch_fine, epoch_coarse = cifar100.next_test_batch(config['batch_size'])
c, acc = sess.run([loss, accuracy],
feed_dict = {mod.x: epoch_x, mod.y: epoch_fine,
mod.training:False, mod.keep_prob:1.})
test_loss += c
test_acc += acc
print('TEST')
print('Epoch ', epoch, 'loss: ', test_loss/int(iter_per_test_epoch),
'acc: ', '{:.2f}%'.format(test_acc*100/int(iter_per_test_epoch)))
print('###################### \n')
test_loss_set.append(test_loss/int(iter_per_test_epoch))
test_acc_set.append(test_acc*100/int(iter_per_test_epoch))
return train_loss_set, train_acc_set, test_loss_set, test_acc_set
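The accuracy graph built in run() (tf.argmax over predictions, tf.equal against the one-hot labels, tf.reduce_mean) can be sanity-checked with a plain NumPy equivalent; the values below are made up:

```python
import numpy as np

# Standalone NumPy mirror of the TF accuracy computation in run().
pred = np.array([[0.1, 0.7, 0.2],
                [0.6, 0.3, 0.1],
                [0.2, 0.2, 0.6]])   # toy class probabilities
y = np.array([[0, 1, 0],
             [0, 0, 1],
             [0, 0, 1]])            # one-hot ground truth
correct = np.equal(np.argmax(pred, 1), np.argmax(y, 1))
accuracy = np.mean(correct.astype(np.float32))
print(accuracy)  # 2 of 3 correct -> ~0.6667
```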
def run_coarse(mod, pred, loss, epochs, base_lr, max_lr, name, var):
    train_loss_set = []
    train_acc_set = []
    test_loss_set = []
    test_acc_set = []
    iter_per_epoch = int(n_samples/config['batch_size'])
    Lr = cycle_lr(base_lr, max_lr, iter_per_epoch,
                  config['cycle_epoch'], config['cycle_ratio'], epochs)
cy_lr = tf.placeholder(tf.float32, shape=(), name = "cy_lr_"+name)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=cy_lr).minimize(loss, var_list = var)
prediction = tf.argmax(pred, 1)
correct_prediction = tf.equal(prediction, tf.argmax(mod.y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name = "accuracy_"+name)
iteration = 0
iter_per_test_epoch = n_test_samples/config['batch_size']
for epoch in range(epochs):
epoch_loss = 0.
epoch_acc = 0.
for iter_in_epoch in range(iter_per_epoch):
epoch_x, epoch_fine, epoch_coarse = cifar100.next_train_batch(config['batch_size'])
_, c, acc = sess.run([optimizer, loss, accuracy],#, summ],
feed_dict = {mod.x: epoch_x, mod.y: epoch_coarse,
mod.training:True, cy_lr: Lr[iteration],
mod.keep_prob: 0.7})
epoch_loss += c
epoch_acc += acc
iteration+=1
if iter_in_epoch%100 == 0:
print('Epoch ', epoch, '{:.2f}%'.format(100*(iter_in_epoch+1)/int(iter_per_epoch)),
'completed out of ', epochs, 'loss: ', epoch_loss/(iter_in_epoch+1),
'acc: ', '{:.2f}%'.format(epoch_acc*100/(iter_in_epoch+1)))
print('######################')
print('TRAIN')
print('Epoch ', epoch, '{:.2f}%'.format(100*(iter_in_epoch+1)/int(iter_per_epoch)),
'completed out of ', epochs, 'loss: ', epoch_loss/int(iter_per_epoch),
'acc: ', '{:.2f}%'.format(epoch_acc*100/int(iter_per_epoch)))
train_loss_set.append(epoch_loss/int(iter_per_epoch))
train_acc_set.append(epoch_acc*100/int(iter_per_epoch))
test_loss = 0.
test_acc = 0.
for iter_in_epoch in range(int(iter_per_test_epoch)):
epoch_x, epoch_fine, epoch_coarse = cifar100.next_test_batch(config['batch_size'])
c, acc = sess.run([loss, accuracy],
feed_dict = {mod.x: epoch_x, mod.y: epoch_coarse,
mod.training:False, mod.keep_prob:1.})
test_loss += c
test_acc += acc
print('TEST')
print('Epoch ', epoch, 'loss: ', test_loss/int(iter_per_test_epoch),
'acc: ', '{:.2f}%'.format(test_acc*100/int(iter_per_test_epoch)))
print('###################### \n')
test_loss_set.append(test_loss/int(iter_per_test_epoch))
test_acc_set.append(test_acc*100/int(iter_per_test_epoch))
return train_loss_set, train_acc_set, test_loss_set, test_acc_set
def run_fine(ithx, ithy, test_ith, test_ithy, mod, pred, loss, epochs, base_lr, max_lr, name, var):
    train_loss_set = []
    train_acc_set = []
    test_loss_set = []
    test_acc_set = []
    iter_per_epoch = int(n_samples/(config['batch_size']*20))
    Lr = cycle_lr(base_lr, max_lr, iter_per_epoch,
                  config['cycle_epoch'], config['cycle_ratio'], epochs)
cy_lr = tf.placeholder(tf.float32, shape=(), name = "cy_lr_"+name)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=cy_lr).minimize(loss, var_list = var)
prediction = tf.argmax(pred, 1)
correct_prediction = tf.equal(prediction, tf.argmax(mod.y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name = "accuracy_"+name)
iteration = 0
# iter_per_test_epoch = n_test_samples/config['batch_size']
for epoch in range(epochs):
epoch_loss = 0.
epoch_acc = 0.
for iter_in_epoch in range(iter_per_epoch):
iter_idx = np.random.choice(range(2500), config['batch_size'] , replace= False)
epoch_x, epoch_fine = ithx[iter_idx], ithy[iter_idx]
_, c, acc = sess.run([optimizer, loss, accuracy],#, summ],
feed_dict = {mod.x: epoch_x, mod.y: epoch_fine,
mod.training:True, cy_lr: Lr[iteration],
mod.keep_prob: 0.7})
epoch_loss += c
epoch_acc += acc
iteration+=1
print('######################')
print('TRAIN')
print('Epoch ', epoch, '{:.2f}%'.format(100*(iter_in_epoch+1)/int(iter_per_epoch)),
'completed out of ', epochs, 'loss: ', epoch_loss/int(iter_per_epoch),
'acc: ', '{:.2f}%'.format(epoch_acc*100/int(iter_per_epoch)))
train_loss_set.append(epoch_loss/int(iter_per_epoch))
train_acc_set.append(epoch_acc*100/int(iter_per_epoch))
test_loss = 0.
test_acc = 0.
c, acc = sess.run([loss, accuracy],
feed_dict = {mod.x: test_ith, mod.y: test_ithy,
mod.training:False, mod.keep_prob:1.})
test_loss += c
test_acc += acc
print('TEST')
print('Epoch ', epoch, 'loss: ', test_loss,
'acc: ', '{:.2f}%'.format(test_acc*100))
print('###################### \n')
test_loss_set.append(test_loss)
test_acc_set.append(test_acc*100)
return train_loss_set, train_acc_set, test_loss_set, test_acc_set
folder_name = os.path.join(mother_folder, config['model_name']+'_'+config['datasets'])
result_dic = {}
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
### run shared ###
tr_l, tr_a, te_l, te_a = run(model, shared_pred, shared_loss, config['epochs_set'][0],
config['base_lr'][0], config['max_lr'][0], 's')
result_dic['s'] =[tr_l, tr_a, te_l, te_a]
### run coarse ###
coarse_vars =[]
for layer in tf.trainable_variables():
if layer.name[:6] in ['coarse']:
coarse_vars.append(layer)
tr_l, tr_a, te_l, te_a = run_coarse(model_coarse, coarse_pred, coarse_loss,
config['epochs_set'][1], config['base_lr'][1],
config['max_lr'][1], 'c', var = coarse_vars)
result_dic['c'] =[tr_l, tr_a, te_l, te_a]
### run fine ###
for ith in range(config['num_coarse']):
n = 'f{:02}'.format(ith)
fine_vars = []
for layer in tf.trainable_variables():
if layer.name[:6] in ['fine{:02}'.format(ith)]:
fine_vars.append(layer)
ith_idx = np.where(np.argmax(cifar100.train_labels_coarse,1)==ith)[0]
test_ith_idx = np.where(np.argmax(cifar100.test_labels_coarse,1)==ith)[0]
tr_l, tr_a, te_l, te_a = run_fine(cifar100.train_images[ith_idx], cifar100.train_labels[ith_idx],
cifar100.test_images[test_ith_idx], cifar100.test_labels[test_ith_idx],
model_fines[ith], fines_pred[ith], fines_loss[ith],
config['epochs_set'][2], config['base_lr'][1],
config['max_lr'][1], 'f{:02}'.format(ith), var = fine_vars)
result_dic[n] =[tr_l, tr_a, te_l, te_a]
### final accuracy ###
iter_per_epoch = int(n_samples/config['batch_size'])
train_correction = 0
for iter_in_epoch in range(iter_per_epoch):
epoch_x, epoch_fine, epoch_coarse = cifar100.next_train_batch(config['batch_size'])
coarse_info = sess.run(coarse_pred,
feed_dict = {model_coarse.x: epoch_x, model_coarse.training:False,
model_coarse.keep_prob: 1.})
pred_y = np.zeros(shape = (config['batch_size'], config['num_fines']))
for ith in range(config['num_coarse']):
fine_info = sess.run(fines_pred[ith],
feed_dict = {model_fines[ith].x: epoch_x,
model_fines[ith].training:False, model_fines[ith].keep_prob: 1.})
pred_y += np.multiply(coarse_info[:,ith].reshape(-1,1), fine_info)
batch_prediction = np.argmax(pred_y,1)
train_correction += np.sum(np.equal(batch_prediction, np.argmax(epoch_fine,1)))
final_train_acc = train_correction/n_samples
iter_test_per_epoch = int(n_test_samples/config['batch_size'])
test_correction = 0
for iter_in_epoch in range(iter_test_per_epoch):
epoch_x, epoch_fine, epoch_coarse = cifar100.next_test_batch(config['batch_size'])
coarse_info = sess.run(coarse_pred,
feed_dict = {model_coarse.x: epoch_x, model_coarse.training:False,
model_coarse.keep_prob: 1.})
pred_y = np.zeros(shape = (config['batch_size'], config['num_fines']))
for ith in range(config['num_coarse']):
fine_info = sess.run(fines_pred[ith],
feed_dict = {model_fines[ith].x: epoch_x,
model_fines[ith].training:False, model_fines[ith].keep_prob: 1.})
pred_y += np.multiply(coarse_info[:,ith].reshape(-1,1), fine_info)
batch_prediction = np.argmax(pred_y,1)
test_correction += np.sum(np.equal(batch_prediction, np.argmax(epoch_fine,1)))
final_test_acc = test_correction/n_test_samples
result_dic['final'] =[final_train_acc, final_test_acc]
print(result_dic['final'])
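The final coarse-to-fine combination inside the session (each coarse probability weights its expert's fine distribution, then the weighted distributions are summed and argmax'd) can be sketched standalone with toy arrays; all numbers below are illustrative:

```python
import numpy as np

# Standalone sketch of the hierarchical prediction mixing used above.
batch, num_coarse, num_fine = 2, 2, 4
coarse = np.array([[0.9, 0.1],
                 [0.2, 0.8]])                 # (batch, num_coarse)
fine = {0: np.array([[0.7, 0.1, 0.1, 0.1],
                     [0.25, 0.25, 0.25, 0.25]]),
        1: np.array([[0.1, 0.1, 0.1, 0.7],
                     [0.1, 0.1, 0.7, 0.1]])}  # per-coarse-class experts
pred_y = np.zeros((batch, num_fine))
for ith in range(num_coarse):
    # Weight expert ith's fine distribution by the coarse probability.
    pred_y += np.multiply(coarse[:, ith].reshape(-1, 1), fine[ith])
print(np.argmax(pred_y, 1))  # -> [0 2]
```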
| [
"tensorflow.cast",
"collections.OrderedDict",
"argparse.ArgumentParser",
"tensorflow.placeholder",
"networks.cycle_lr",
"os.path.join",
"tensorflow.Session",
"numpy.argmax",
"tensorflow.global_variables_initializer",
"tensorflow.train.GradientDescentOptimizer",
"tensorflow.argmax",
"cifar100.C... | [((312, 329), 'numpy.random.seed', 'np.random.seed', (['(0)'], {}), '(0)\n', (326, 329), True, 'import numpy as np\n'), ((330, 351), 'tensorflow.set_random_seed', 'tf.set_random_seed', (['(0)'], {}), '(0)\n', (348, 351), True, 'import tensorflow as tf\n'), ((1671, 1681), 'cifar100.Cifar100', 'Cifar100', ([], {}), '()\n', (1679, 1681), False, 'from cifar100 import Cifar100\n'), ((1812, 1858), 'models.ResNet18', 'ResNet18', (["config['num_fines']", '"""shared"""', '(False)'], {}), "(config['num_fines'], 'shared', False)\n", (1820, 1858), False, 'from models import ResNet18\n'), ((1938, 1984), 'models.ResNet18', 'ResNet18', (["config['num_coarse']", '"""coarse"""', '(True)'], {}), "(config['num_coarse'], 'coarse', True)\n", (1946, 1984), False, 'from models import ResNet18\n'), ((11199, 11275), 'os.path.join', 'os.path.join', (['mother_folder', "(config['model_name'] + '_' + config['datasets'])"], {}), "(mother_folder, config['model_name'] + '_' + config['datasets'])\n", (11211, 11275), False, 'import os\n'), ((384, 409), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (407, 409), False, 'import argparse\n'), ((1188, 1534), 'collections.OrderedDict', 'OrderedDict', (["[('model_name', args.model_name), ('datasets', args.datasets), (\n 'epochs_set', args.epochs_set), ('batch_size', args.batch_size), (\n 'base_lr', args.base_lr), ('max_lr', args.max_lr), ('cycle_epoch', args\n .cycle_epoch), ('cycle_ratio', args.cycle_ratio), ('num_fines', args.\n num_fines), ('num_coarse', args.num_coarse)]"], {}), "([('model_name', args.model_name), ('datasets', args.datasets),\n ('epochs_set', args.epochs_set), ('batch_size', args.batch_size), (\n 'base_lr', args.base_lr), ('max_lr', args.max_lr), ('cycle_epoch', args\n .cycle_epoch), ('cycle_ratio', args.cycle_ratio), ('num_fines', args.\n num_fines), ('num_coarse', args.num_coarse)])\n", (1199, 1534), False, 'from collections import OrderedDict\n'), ((2354, 2377), 'os.mkdir', 'os.mkdir', 
(['mother_folder'], {}), '(mother_folder)\n', (2362, 2377), False, 'import os\n'), ((2625, 2726), 'networks.cycle_lr', 'cycle_lr', (['base_lre', 'max_lr', 'iter_per_epoch', "config['cycle_epoch']", "config['cycle_ratio']", 'epochs'], {}), "(base_lre, max_lr, iter_per_epoch, config['cycle_epoch'], config[\n 'cycle_ratio'], epochs)\n", (2633, 2726), False, 'from networks import cycle_lr\n'), ((2750, 2808), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32'], {'shape': '()', 'name': "('cy_lr_' + name)"}), "(tf.float32, shape=(), name='cy_lr_' + name)\n", (2764, 2808), True, 'import tensorflow as tf\n'), ((2914, 2932), 'tensorflow.argmax', 'tf.argmax', (['pred', '(1)'], {}), '(pred, 1)\n', (2923, 2932), True, 'import tensorflow as tf\n'), ((5707, 5808), 'networks.cycle_lr', 'cycle_lr', (['base_lre', 'max_lr', 'iter_per_epoch', "config['cycle_epoch']", "config['cycle_ratio']", 'epochs'], {}), "(base_lre, max_lr, iter_per_epoch, config['cycle_epoch'], config[\n 'cycle_ratio'], epochs)\n", (5715, 5808), False, 'from networks import cycle_lr\n'), ((5832, 5890), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32'], {'shape': '()', 'name': "('cy_lr_' + name)"}), "(tf.float32, shape=(), name='cy_lr_' + name)\n", (5846, 5890), True, 'import tensorflow as tf\n'), ((6012, 6030), 'tensorflow.argmax', 'tf.argmax', (['pred', '(1)'], {}), '(pred, 1)\n', (6021, 6030), True, 'import tensorflow as tf\n'), ((8845, 8946), 'networks.cycle_lr', 'cycle_lr', (['base_lre', 'max_lr', 'iter_per_epoch', "config['cycle_epoch']", "config['cycle_ratio']", 'epochs'], {}), "(base_lre, max_lr, iter_per_epoch, config['cycle_epoch'], config[\n 'cycle_ratio'], epochs)\n", (8853, 8946), False, 'from networks import cycle_lr\n'), ((8970, 9028), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32'], {'shape': '()', 'name': "('cy_lr_' + name)"}), "(tf.float32, shape=(), name='cy_lr_' + name)\n", (8984, 9028), True, 'import tensorflow as tf\n'), ((9150, 9168), 'tensorflow.argmax', 
'tf.argmax', (['pred', '(1)'], {}), '(pred, 1)\n', (9159, 9168), True, 'import tensorflow as tf\n'), ((11294, 11306), 'tensorflow.Session', 'tf.Session', ([], {}), '()\n', (11304, 11306), True, 'import tensorflow as tf\n'), ((11682, 11706), 'tensorflow.trainable_variables', 'tf.trainable_variables', ([], {}), '()\n', (11704, 11706), True, 'import tensorflow as tf\n'), ((2979, 2998), 'tensorflow.argmax', 'tf.argmax', (['mod.y', '(1)'], {}), '(mod.y, 1)\n', (2988, 2998), True, 'import tensorflow as tf\n'), ((3030, 3069), 'tensorflow.cast', 'tf.cast', (['correct_prediction', 'tf.float32'], {}), '(correct_prediction, tf.float32)\n', (3037, 3069), True, 'import tensorflow as tf\n'), ((6077, 6096), 'tensorflow.argmax', 'tf.argmax', (['mod.y', '(1)'], {}), '(mod.y, 1)\n', (6086, 6096), True, 'import tensorflow as tf\n'), ((6128, 6167), 'tensorflow.cast', 'tf.cast', (['correct_prediction', 'tf.float32'], {}), '(correct_prediction, tf.float32)\n', (6135, 6167), True, 'import tensorflow as tf\n'), ((9215, 9234), 'tensorflow.argmax', 'tf.argmax', (['mod.y', '(1)'], {}), '(mod.y, 1)\n', (9224, 9234), True, 'import tensorflow as tf\n'), ((9266, 9305), 'tensorflow.cast', 'tf.cast', (['correct_prediction', 'tf.float32'], {}), '(correct_prediction, tf.float32)\n', (9273, 9305), True, 'import tensorflow as tf\n'), ((11333, 11366), 'tensorflow.global_variables_initializer', 'tf.global_variables_initializer', ([], {}), '()\n', (11364, 11366), True, 'import tensorflow as tf\n'), ((12253, 12277), 'tensorflow.trainable_variables', 'tf.trainable_variables', ([], {}), '()\n', (12275, 12277), True, 'import tensorflow as tf\n'), ((13586, 13645), 'numpy.zeros', 'np.zeros', ([], {'shape': "(config['batch_size'], config['num_fines'])"}), "(shape=(config['batch_size'], config['num_fines']))\n", (13594, 13645), True, 'import numpy as np\n'), ((14040, 14060), 'numpy.argmax', 'np.argmax', (['pred_y', '(1)'], {}), '(pred_y, 1)\n', (14049, 14060), True, 'import numpy as np\n'), ((14667, 14726), 
'numpy.zeros', 'np.zeros', ([], {'shape': "(config['batch_size'], config['num_fines'])"}), "(shape=(config['batch_size'], config['num_fines']))\n", (14675, 14726), True, 'import numpy as np\n'), ((15121, 15141), 'numpy.argmax', 'np.argmax', (['pred_y', '(1)'], {}), '(pred_y, 1)\n', (15130, 15141), True, 'import numpy as np\n'), ((2826, 2880), 'tensorflow.train.GradientDescentOptimizer', 'tf.train.GradientDescentOptimizer', ([], {'learning_rate': 'cy_lr'}), '(learning_rate=cy_lr)\n', (2859, 2880), True, 'import tensorflow as tf\n'), ((5908, 5962), 'tensorflow.train.GradientDescentOptimizer', 'tf.train.GradientDescentOptimizer', ([], {'learning_rate': 'cy_lr'}), '(learning_rate=cy_lr)\n', (5941, 5962), True, 'import tensorflow as tf\n'), ((9046, 9100), 'tensorflow.train.GradientDescentOptimizer', 'tf.train.GradientDescentOptimizer', ([], {'learning_rate': 'cy_lr'}), '(learning_rate=cy_lr)\n', (9079, 9100), True, 'import tensorflow as tf\n'), ((14126, 14150), 'numpy.argmax', 'np.argmax', (['epoch_fine', '(1)'], {}), '(epoch_fine, 1)\n', (14135, 14150), True, 'import numpy as np\n'), ((15206, 15230), 'numpy.argmax', 'np.argmax', (['epoch_fine', '(1)'], {}), '(epoch_fine, 1)\n', (15215, 15230), True, 'import numpy as np\n'), ((12414, 12456), 'numpy.argmax', 'np.argmax', (['cifar100.train_labels_coarse', '(1)'], {}), '(cifar100.train_labels_coarse, 1)\n', (12423, 12456), True, 'import numpy as np\n'), ((12497, 12538), 'numpy.argmax', 'np.argmax', (['cifar100.test_labels_coarse', '(1)'], {}), '(cifar100.test_labels_coarse, 1)\n', (12506, 12538), True, 'import numpy as np\n')] |
# This is needed because pygame's init() calls for an audio driver,
# which seemed to default to ALSA, which was causing an underrun error.
import os
os.environ['SDL_AUDIODRIVER'] = 'dsp'
import yaml
import numpy as np
import hexy as hx
import pygame as pg
from tkinter import filedialog, Tk
COL_IDX = np.random.randint(0, 4, (7 ** 3))
COLORS = np.array([
    [251, 149, 80],   # orange
    [207, 0, 0],      # red
    [0, 255, 255],    # cyan
    [141, 207, 104],  # green
    [85, 163, 193],   # sky blue
])
DIRECTIONS = ["SE", "SW", "W", "NW", "NE", "E"]
class Selection:
class Type:
RECT = 0
HEX = 1
TRIANGLE = 2
RHOMBUS = 3
CUSTOM = 4
@staticmethod
def to_string(selection_type):
if selection_type == Selection.Type.RECT:
return "rectangle"
elif selection_type == Selection.Type.HEX:
return "hexagon"
elif selection_type == Selection.Type.TRIANGLE:
return "triangle"
elif selection_type == Selection.Type.RHOMBUS:
return "rhombus"
elif selection_type == Selection.Type.CUSTOM:
return "custom"
else:
return "INVALID VALUE"
@staticmethod
def get_selection(selection_type, max_range, hex_radius):
hex_map = hx.HexMap()
hexes = []
axial_coordinates = []
if selection_type == Selection.Type.RECT:
for r in range(-max_range, max_range + 1):
r_offset = r >> 1
for q in range(-max_range - r_offset, max_range - r_offset):
c = [q, r]
axial_coordinates.append(c)
hexes.append(ExampleHex(c, [141, 207, 104, 255], hex_radius))
elif selection_type == Selection.Type.HEX:
spiral_coordinates = hx.get_spiral(np.array((0, 0, 0)), 1, max_range) # Gets spiral coordinates from center
axial_coordinates = hx.cube_to_axial(spiral_coordinates)
for i, axial in enumerate(axial_coordinates):
hex_color = list(COLORS[3]) # set color of hex
hex_color.append(255) #set alpha to 255
hexes.append(ExampleHex(axial, hex_color, hex_radius))
elif selection_type == Selection.Type.TRIANGLE:
top = int(max_range / 2)
for q in range(0, max_range + 1):
for r in range(0, max_range - q + 1):
c = [q, r]
axial_coordinates.append(c)
hexes.append(ExampleHex(c, [141, 207, 104, 255], hex_radius))
elif selection_type == Selection.Type.RHOMBUS:
q1 = int(-max_range / 2)
q2 = int(max_range / 2)
r1 = -max_range
r2 = max_range
# Parallelogram
for q in range(q1, q2 + 1):
for r in range(r1, r2 + 1):
axial_coordinates.append([q, r])
hexes.append(ExampleHex([q, r], [141, 207, 104, 255], hex_radius))
elif selection_type == Selection.Type.CUSTOM:
axial_coordinates.append([0, 0])
hexes.append(ExampleHex([0, 0], [141, 207, 104, 255], hex_radius))
hex_map[np.array(axial_coordinates)] = hexes # create hex map
return hex_map
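The map sizes produced by get_selection can be checked without hexy or pygame: a standalone count for the rectangle loop above (note `r >> 1` floors negative rows), plus the usual ring count for a hexagonal map of radius R:

```python
# Standalone sanity check of the map sizes (no hexy/pygame required).
R = 2
# Rectangle: rows r in [-R, R], each contributing 2*R offset columns.
rect = [(q, r)
        for r in range(-R, R + 1)
        for q in range(-R - (r >> 1), R - (r >> 1))]
assert len(rect) == 2 * R * (2 * R + 1)   # 20 hexes for R = 2

# Hexagon of radius R around the center: 1 + 6 + 12 + ... = 3*R*(R+1) + 1.
hex_count = 1 + sum(6 * i for i in range(1, R + 1))
print(len(rect), hex_count)  # -> 20 19
```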
class ClampedInteger:
"""
A simple class for "clamping" an integer value between a range. Its value will not increase beyond `upper_limit`
and will not decrease below `lower_limit`.
"""
def __init__(self, initial_value, lower_limit, upper_limit):
self.value = initial_value
self.lower_limit = lower_limit
self.upper_limit = upper_limit
def increment(self):
self.value += 1
if self.value > self.upper_limit:
self.value = self.upper_limit
def decrement(self):
self.value -= 1
if self.value < self.lower_limit:
self.value = self.lower_limit
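A standalone sketch of the clamping behaviour, mirroring the class above with min/max (equivalent to the if-based version):

```python
# Minimal copy of ClampedInteger for demonstration purposes only.
class ClampedInteger:
    def __init__(self, initial_value, lower_limit, upper_limit):
        self.value = initial_value
        self.lower_limit = lower_limit
        self.upper_limit = upper_limit

    def increment(self):
        self.value = min(self.value + 1, self.upper_limit)

    def decrement(self):
        self.value = max(self.value - 1, self.lower_limit)

c = ClampedInteger(13, 1, 13)
c.increment()            # already at the upper limit: stays 13
print(c.value)           # -> 13
for _ in range(20):
    c.decrement()        # cannot fall below the lower limit
print(c.value)           # -> 1
```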
class CyclicInteger:
"""
A simple helper class for "cycling" an integer through a range of values. Its value will be set to `lower_limit`
if it increases above `upper_limit`. Its value will be set to `upper_limit` if its value decreases below
`lower_limit`.
"""
def __init__(self, initial_value, lower_limit, upper_limit):
self.value = initial_value
self.lower_limit = lower_limit
self.upper_limit = upper_limit
def increment(self):
self.value += 1
if self.value > self.upper_limit:
self.value = self.lower_limit
def decrement(self):
self.value -= 1
if self.value < self.lower_limit:
self.value = self.upper_limit
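CyclicInteger wraps instead of clamping (it is used to cycle the six piece directions 0..5); a standalone copy demonstrating the wrap-around:

```python
# Minimal copy of CyclicInteger for demonstration purposes only.
class CyclicInteger:
    def __init__(self, initial_value, lower_limit, upper_limit):
        self.value = initial_value
        self.lower_limit = lower_limit
        self.upper_limit = upper_limit

    def increment(self):
        self.value += 1
        if self.value > self.upper_limit:
            self.value = self.lower_limit

    def decrement(self):
        self.value -= 1
        if self.value < self.lower_limit:
            self.value = self.upper_limit

d = CyclicInteger(5, 0, 5)
d.increment()
print(d.value)  # -> 0 (wrapped past the upper limit)
d.decrement()
print(d.value)  # -> 5 (wrapped back)
```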
def make_hex_surface(color, radius, border_color=(100, 100, 100), border=True, hollow=False):
"""
Draws a hexagon with gray borders on a pygame surface.
:param color: The fill color of the hexagon.
:param radius: The radius (from center to any corner) of the hexagon.
:param border_color: Color of the border.
:param border: Draws border if True.
:param hollow: Does not fill hex with color if True.
:return: A pygame surface with a hexagon drawn on it
"""
angles_in_radians = np.deg2rad([60 * i + 30 for i in range(6)])
x = radius * np.cos(angles_in_radians)
y = radius * np.sin(angles_in_radians)
points = np.round(np.vstack([x, y]).T)
sorted_x = sorted(points[:, 0])
sorted_y = sorted(points[:, 1])
minx = sorted_x[0]
maxx = sorted_x[-1]
miny = sorted_y[0]
maxy = sorted_y[-1]
sorted_idxs = np.lexsort((points[:, 0], points[:, 1]))
surf_size = np.array((maxx - minx, maxy - miny)) * 2 + 1
center = surf_size / 2
surface = pg.Surface(surf_size.astype(int))
surface.set_colorkey((0, 0, 0))
# Set alpha if color has 4th coordinate.
if len(color) >= 4:
surface.set_alpha(color[-1])
# fill if not hollow.
if not hollow:
pg.draw.polygon(surface, color, (points + center).astype(int), 0)
    # Shift the three bottom-most corner points down one pixel before
    # drawing the border (keeps the lower edge of the outline visible).
    points[sorted_idxs[-1:-4:-1]] += [0, 1]
# if border is true or hollow is true draw border.
if border or hollow:
pg.draw.lines(surface, border_color, True, (points + center).astype(int), 1)
return surface
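The corner geometry in make_hex_surface can be verified standalone: six corners at 30°, 90°, ..., 330°, all at distance `radius` from the center, giving a pointy-top hexagon of width √3·radius and height 2·radius:

```python
import numpy as np

# Standalone check of the hexagon corner math used in make_hex_surface().
radius = 20
angles = np.deg2rad([60 * i + 30 for i in range(6)])
x = radius * np.cos(angles)
y = radius * np.sin(angles)
corners = np.vstack([x, y]).T                 # (6, 2) corner offsets
dists = np.hypot(corners[:, 0], corners[:, 1])
assert np.allclose(dists, radius)              # every corner lies on the circle
# Pointy-top orientation: width = sqrt(3)*radius, height = 2*radius.
print(round(x.max() - x.min(), 3), round(y.max() - y.min(), 3))  # -> 34.641 40.0
```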
class ExampleHex(hx.HexTile):
def __init__(self, axial_coordinates, color, radius, hollow = False):
self.axial_coordinates = np.array([axial_coordinates])
self.cube_coordinates = hx.axial_to_cube(self.axial_coordinates)
self.position = hx.axial_to_pixel(self.axial_coordinates, radius)
self.color = color
self.radius = radius
self.image = make_hex_surface(color, radius, hollow = hollow)
def get_draw_position(self):
"""
Get the location to draw this hex so that the center of the hex is at `self.position`.
:return: The location to draw this hex so that the center of the hex is at `self.position`.
"""
draw_position = self.position[0] - [self.image.get_width() / 2, self.image.get_height() / 2]
return draw_position
def get_position(self):
"""
Retrieves the location of the center of the hex.
:return: The location of the center of the hex.
"""
return self.position[0]
class ExampleHexMap:
def __init__(self, num_pieces, hex_radius=20, caption="Config File Creator"):
root = Tk()
size = (root.winfo_screenheight() - 50, root.winfo_screenheight() - 50)
root.destroy()
self.caption = caption # Controls window caption
self.size = np.array(size) # Controls window size
self.width, self.height = self.size # Width and height of window
self.center = self.size / 2 # Should be center of window
self.num_pieces = num_pieces
self.hex_radius = int(hex_radius * self.size[0] / 1000) # Radius of individual hexagons
self.max_coord = ClampedInteger(13, 1, 13) # Controls the radius of the hex map in hexagon shape.
self.selection = ClampedInteger(1, 0, 4) # Controls map style selection in step 1.
self.piece_selection = CyclicInteger(1, 1, num_pieces) # Controls piece type in step 2.
self.player_selection = CyclicInteger(1, 1, 2) # Controls player in step 2.
self.direction_selection = CyclicInteger(0, 0, 5) # Controls direction of pieces in step 2.
self.old_selection = self.selection.value # Saves old selection, used so a new map isn't generated every turn.
self.old_max = self.max_coord.value # Holds the old maximum size.
self.old_axial_held = np.array([]) # Hold the old axial coordinates the mouse is at in custom map mode.
self.clicked_hex = np.array([0, 0]) # Center hex
self.direction_hex = np.array([])
self.selected_hex_image = make_hex_surface(
(128, 128, 128, 255), # Highlight color for a hexagon.
self.hex_radius, # radius of individual hexagon
(255, 255, 255), # Border color white
hollow=True)
self.hex_map = Selection.get_selection(self.selection.value, self.max_coord.value, self.hex_radius)
self.b_map = hx.HexMap()
b_hexes = []
b_axial_coordinates = []
for r in range(-self.max_coord.value, self.max_coord.value + 1):
r_offset = r >> 1
for q in range(-self.max_coord.value - r_offset, self.max_coord.value - r_offset):
c = [q, r]
b_axial_coordinates.append(c)
b_hexes.append(ExampleHex(c, [141, 207, 250, 255], self.hex_radius, hollow = False))
self.b_map[np.array(b_axial_coordinates)] = b_hexes
self.step = 1
self.player_list = {1: [], 2: []}
# pygame specific variables
self.main_surf = None # Pygame surface
self.font = None # Pygame font
self.clock = None # Pygame fps
self.init_pg() # Starts pygame
def init_pg(self):
pg.init()
self.main_surf = pg.display.set_mode(self.size) # Turns main_surf into a display
pg.display.set_caption(self.caption)
pg.font.init()
self.font = pg.font.SysFont("monospace", 20, True)
self.clock = pg.time.Clock()
def handle_events(self):
running = True # Program is running
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
if event.type == pg.MOUSEBUTTONUP:
if event.button == 1:# Left mouse
self.clicked_hex = hx.pixel_to_axial(
np.array([pg.mouse.get_pos() - self.center]),
self.hex_radius)[0].astype(np.int32)
if self.step == 2:
if np.array_equal(self.hex_map[self.clicked_hex], []):
self.clicked_hex = self.old_axial_held
continue
if not np.array_equal(self.clicked_hex, self.old_axial_held):
self.direction_hex = np.array([])
index = 0
repeat = False
for piece in self.player_list[1]:
if np.array_equal(piece[1], self.clicked_hex):
del self.player_list[1][index]
repeat = True
else:
index += 1
index = 0
for piece in self.player_list[2]:
if np.array_equal(piece[1], self.clicked_hex):
del self.player_list[2][index]
repeat = True
else:
index += 1
if repeat:
continue
if self.clicked_hex.size > 0:
if self.direction_hex.size <= 0:
self.direction_hex = self.clicked_hex
else:
self.direction_hex = np.array([])
if not repeat:
self.player_list[self.player_selection.value].append([self.piece_selection.value,
self.clicked_hex,
self.direction_selection.value])
self.direction_selection.value = 0
self.old_axial_held = self.clicked_hex
if event.type == pg.MOUSEBUTTONDOWN:
if self.step == 1:
if event.button == 4: #Scroll up
self.selection.increment()
if event.button == 5: #scroll down
self.selection.decrement()
elif self.step == 2:
if event.button == 3:
self.player_selection.increment()
if event.button == 4: #Scroll up
self.direction_selection.increment()
if event.button == 5: #scroll down
self.direction_selection.decrement()
if event.type == pg.KEYUP:
if self.step == 1:
if event.key == pg.K_RIGHT:
self.max_coord.increment()
elif event.key == pg.K_LEFT:
self.max_coord.decrement()
elif event.key == pg.K_a:
self.selection.decrement()
elif event.key == pg.K_d:
self.selection.increment()
elif self.step == 2:
if event.key == pg.K_RIGHT:
self.piece_selection.increment()
elif event.key == pg.K_LEFT:
self.piece_selection.decrement()
elif event.key == pg.K_a:
self.direction_selection.decrement()
elif event.key == pg.K_d:
self.direction_selection.increment()
if event.type == pg.KEYDOWN:
if event.key == pg.K_RETURN:
self.step += 1
if event.key == pg.K_ESCAPE or self.step > 2:
running = False
# Check if we are in the map-making step, and it is a custom map.
if self.step == 1 and self.selection.value == Selection.Type.CUSTOM:
# Check that the left mouse is being held down and the mouse is moving.
if event.type == pg.MOUSEMOTION and event.buttons == (1, 0, 0):
mouse_pos = np.array([np.array(pg.mouse.get_pos()) - self.center])
axial_clicked = hx.pixel_to_axial(mouse_pos, self.hex_radius).astype(np.int32)[0]
# Check if within the map boundaries and the mouse is not on the same tile as before.
if not np.array_equal(self.b_map[axial_clicked], []) and not np.array_equal(axial_clicked, self.old_axial_held):
# Check if a tile already exists. If not, create one. If one does, delete it.
if np.array_equal(self.hex_map[axial_clicked], []):
self.hex_map[[axial_clicked]] = [ExampleHex(axial_clicked, [141, 207, 104, 255], self.hex_radius)]
else:
del self.hex_map[axial_clicked]
# Save the axial coordinates so the tile is not added and delete every other frame.
self.old_axial_held = axial_clicked
return running
def main_loop(self):
running = self.handle_events()
return running
def draw(self):
if self.step == 1:
b_hexagons = list(self.b_map.values())
b_hex_positions = np.array([hexagon.get_draw_position() for hexagon in b_hexagons])
b_sorted_indexes = np.argsort(b_hex_positions[:, 1])
for index in b_sorted_indexes:
self.main_surf.blit(b_hexagons[index].image, (b_hex_positions[index] + self.center).astype(int)) #Draws the hexagons on
#hexagons[index].image uses an image created in example_hex from make_hex_surface
if self.selection.value != self.old_selection or self.max_coord.value != self.old_max:
self.hex_map = Selection.get_selection(self.selection.value, self.max_coord.value, self.hex_radius)
self.old_selection = self.selection.value
self.old_max = self.max_coord.value
# show all hexes
hexagons = list(self.hex_map.values())
hex_positions = np.array([hexagon.get_draw_position() for hexagon in hexagons])
sorted_indexes = np.argsort(hex_positions[:, 1])
for index in sorted_indexes:
self.main_surf.blit(hexagons[index].image, (hex_positions[index] + self.center).astype(int)) #Draws the hexagons on
#hexagons[index].image uses an image created in example_hex from make_hex_surface
selection_type_text = self.font.render(
"Board Shape: " + Selection.Type.to_string(self.selection.value),
True,
(50, 50, 50))
self.main_surf.blit(selection_type_text, (5, 30))
if self.step == 2:
hexagons = list(self.hex_map.values())
hex_positions = np.array([hexagon.get_draw_position() for hexagon in hexagons])
sorted_indexes = np.argsort(hex_positions[:, 1])
for index in sorted_indexes:
self.main_surf.blit(hexagons[index].image, (hex_positions[index] + self.center).astype(np.int32))
if self.direction_hex.size > 0:
pixels = self.hex_map[self.clicked_hex][0].get_draw_position() + self.center
self.main_surf.blit(self.selected_hex_image, pixels.astype(np.int32))
# This index is used to find the two angles for the directional triangle.
index = self.direction_selection.value
# Find the radian angles of the direction, and scale to the hex radius
angles_in_radians = np.deg2rad([60 * i + 30 for i in range(index, index + 2)])
x = self.hex_radius * np.cos(angles_in_radians)
y = self.hex_radius * np.sin(angles_in_radians)
# Merge all points to a single array of a triangle
points = np.round(np.vstack([x, y]).T)
points = np.round(np.vstack([points, [0, 0]]))
# Find pixel coordinates for the triangle, then find the middle point of the far edge, and draw the line.
coords = (points + hx.axial_to_pixel(self.clicked_hex, self.hex_radius))
start_point = coords[2] + self.center
end_point = (coords[0] + coords[1]) / 2 + self.center
pg.draw.line(self.main_surf, [230, 230, 0], start_point.astype(np.int32), end_point.astype(np.int32), 3)
player_text = self.font.render(
"Current Player: " + str(self.player_selection.value),
True,
(50, 50, 50))
piece_selection_text = self.font.render(
"Piece Type: " + str(self.piece_selection.value),
True,
(50, 50, 50))
for piece1 in self.player_list[1]:
text = self.font.render(str(piece1[0]), False, COLORS[1])
pos = hx.axial_to_pixel(piece1[1], self.hex_radius)
text_pos = pos + self.center
text_pos -= (text.get_width() / 2, text.get_height() / 2)
self.main_surf.blit(text, text_pos.astype(np.int32))
# This index is used to find the two angles for the directional triangle.
index = piece1[2]
# Find the radian angles of the direction, and scale to the hex radius
angles_in_radians = np.deg2rad([60 * i + 30 for i in range(index, index + 2)])
x = self.hex_radius * np.cos(angles_in_radians)
y = self.hex_radius * np.sin(angles_in_radians)
# Merge all points to a single array of a triangle
points = np.round(np.vstack([x, y]).T)
points = np.round(np.vstack([points, [0, 0]]))
# Find pixel coordinates for the triangle, then find the middle point of the far edge, and draw the line.
coords = (points + hx.axial_to_pixel(piece1[1], self.hex_radius))
start_point = coords[2] + self.center
end_point = (coords[0] + coords[1]) / 2 + self.center
pg.draw.line(self.main_surf, [230, 230, 0], start_point.astype(np.int32), end_point.astype(np.int32), 3)
for piece2 in self.player_list[2]:
text = self.font.render(str(piece2[0]), False, COLORS[2])
pos = hx.axial_to_pixel(piece2[1], self.hex_radius)
text_pos = pos + self.center
text_pos -= (text.get_width() / 2, text.get_height() / 2)
self.main_surf.blit(text, text_pos.astype(np.int32))
# This index is used to find the two angles for the directional triangle.
index = piece2[2]
# Find the radian angles of the direction, and scale to the hex radius
angles_in_radians = np.deg2rad([60 * i + 30 for i in range(index, index + 2)])
x = self.hex_radius * np.cos(angles_in_radians)
y = self.hex_radius * np.sin(angles_in_radians)
# Merge all points to a single array of a triangle
points = np.round(np.vstack([x, y]).T)
points = np.round(np.vstack([points, [0, 0]]))
# Find pixel coordinates for the triangle, then find the middle point of the far edge, and draw the line.
coords = (points + hx.axial_to_pixel(piece2[1], self.hex_radius))
start_point = coords[2] + self.center
end_point = (coords[0] + coords[1]) / 2 + self.center
pg.draw.line(self.main_surf, [230, 230, 0], start_point.astype(np.int32), end_point.astype(np.int32), 3)
self.main_surf.blit(player_text, (5, 20))
self.main_surf.blit(piece_selection_text, (5, 40))
# Update screen at 30 frames per second
pg.display.update()
self.main_surf.fill(COLORS[-1])
self.clock.tick(30)
def quit_app(self):
board = dict()
onepieces = dict()
twopieces = dict()
hexagons = list(self.hex_map.values())
board['board'] = [hex.axial_coordinates[0].astype(np.int32).tolist() for hex in hexagons]
# TODO: Allow player to pick the direction of each piece. For now, have dummy direction.
onepieces['player1'] = {i + 1: [self.player_list[1][i][0],
self.player_list[1][i][1].tolist(),
DIRECTIONS[self.player_list[1][i][2]]]
for i in range(0, len(self.player_list[1]))}
twopieces['player2'] = {i + 1: [self.player_list[2][i][0],
self.player_list[2][i][1].tolist(),
DIRECTIONS[self.player_list[2][i][2]]]
for i in range(0, len(self.player_list[2]))}
pg.quit()
return board, onepieces, twopieces
if __name__ == '__main__':
print("\n\nInstructions for use:\n")
print("Step 1: Piece templates. Input the number of piece templates to be created, ")
print("and give the maximum health, attack power, and moving distance for each template.\n")
print("Step 2: Create the board. A window should pop up that allows you to create a board from")
print("a select number of types. Use the left and right arrow keys to change the size of the map.")
print("A custom board mode is in development.\n")
print("Step 3: Place pieces on the board. The window will stay open, but will now not allow you")
print("to change the board. Use the scroll wheel to cycle through piece templates, and use the")
print("right mouse to alternate between players 1 and 2.\n")
settings = dict()
num_pieces = int(input('Number of piece types: '))
while num_pieces <= 0:
print("Invalid number of pieces.")
num_pieces = int(input('Number of piece types: '))
pieces_list = dict()
for i in range(1, num_pieces + 1):
print("Settings for piece type {}:".format(i))
health = int(input("health: "))
distance = int(input("movement distance: "))
attack_d = int(input("attack distance: "))
power = int(input("attack power: "))
pieces_list[i] = {'health': health, 'movement_d': distance, 'attack_d': attack_d, 'power': power}
settings['pieces'] = pieces_list
example_hex_map = ExampleHexMap(num_pieces)
while example_hex_map.main_loop():
example_hex_map.draw()
board, player1, player2 = example_hex_map.quit_app()
root = Tk()
root.withdraw()
f = filedialog.asksaveasfile(mode='w', initialdir='settings/', defaultextension=".yaml")
root.destroy()
if f is not None:
f.write("---\n")
yaml.dump(settings, f, sort_keys=False)
yaml.dump(player1, f, default_flow_style=None)
yaml.dump(player2, f, default_flow_style=None)
yaml.dump(board, f, default_flow_style=None)
f.write("...")
f.close()
input("Press enter to close...")
raise SystemExit
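The editor above leans on `hx.axial_to_pixel` and `hx.pixel_to_axial` from the `hexy` library to move between axial hex coordinates and screen pixels. As a hedged sketch only (hexy's exact orientation and scaling convention may differ), a pointy-top axial-to-pixel conversion typically looks like this:

```python
import math

def axial_to_pixel(q, r, radius):
    # Pointy-top layout: a step in q moves sqrt(3)*radius in x,
    # while a step in r moves half that in x and 1.5*radius in y.
    x = radius * math.sqrt(3) * (q + r / 2)
    y = radius * 1.5 * r
    return x, y

center = axial_to_pixel(0, 0, 20)    # the origin hex stays at the pixel origin
neighbor = axial_to_pixel(1, 0, 20)  # one axial step to the "east"
```

The inverse (`pixel_to_axial`) solves the same linear map the other way and rounds to the nearest hex, which is what the editor uses to turn mouse positions into tiles.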
# Written by: <NAME>, @dataoutsider
# Viz: "Race Day", enjoy!
import numpy as np
import pandas as pd
import os
import math
from datetime import datetime
df = pd.read_csv(os.path.join(os.path.dirname(__file__), 'allyears.csv'), engine='python')
data = df['seconds'].to_numpy() / 3600  # seconds -> hours; avoid the private _values attribute
tmax = int(max(data)) + 1
hr_res = 2
hmax = tmax * hr_res + 1
xbins = [x * (1/hr_res) for x in range(0, hmax)]
test = np.histogram(data, bins=xbins)
print(test)
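The script reads `allyears.csv`, which is not included here. A hedged, self-contained sketch of the same half-hour binning on synthetic finish times (the resolution logic mirrors the script above; the data below is invented):

```python
import numpy as np

# Synthetic stand-in for the race finish times, already in hours.
rng = np.random.default_rng(0)
data = rng.uniform(2.0, 6.0, size=1000)

hr_res = 2                     # two bins per hour -> half-hour bins
tmax = int(max(data)) + 1      # ceiling of the slowest finish
xbins = [x * (1 / hr_res) for x in range(0, tmax * hr_res + 1)]

counts, edges = np.histogram(data, bins=xbins)
# Every finish time lands in exactly one bin.
assert counts.sum() == len(data)
```

Passing an explicit edge list to `np.histogram` (rather than a bin count) is what pins the bins to clean half-hour boundaries.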
"""
Project: RadarBook
File: infinitesimal_dipole.py
Created by: <NAME>
On: 1/22/2018
Created with: PyCharm
Copyright (C) 2019 Artech House (<EMAIL>)
This file is part of Introduction to Radar Using Python and MATLAB
and can not be copied and/or distributed without the express permission of Artech House.
"""
from scipy.constants import c, pi, mu_0, epsilon_0
from numpy import sin, exp, sqrt
def directivity():
"""
The directivity of an infinitesimal dipole antenna.
:return: The directivity of an infinitesimal dipole antenna.
"""
return 1.5
def beamwidth():
"""
The half power beamwidth of an infinitesimal dipole antenna.
:return: The half power beamwidth of an infinitesimal dipole antenna (deg).
"""
return 90.0
def maximum_effective_aperture(frequency):
"""
Calculate the maximum effective aperture of an infinitesimal dipole antenna.
:param frequency: The operating frequency (Hz).
:return: The maximum effective aperture (m^2).
"""
# Calculate the wavelength
wavelength = c / frequency
return 3.0 * wavelength ** 2 / (8.0 * pi)
def radiation_resistance(frequency, length):
"""
Calculate the radiation resistance for a infinitesimal dipole.
:param frequency: The operating frequency (Hz).
:param length: The length of the dipole (m).
:return: The radiation resistance (Ohms).
"""
# Calculate and return the radiation resistance
return 80.0 * (pi * length * frequency / c) ** 2
def radiated_power(frequency, length, current):
"""
Calculate the power radiated by a infinitesimal dipole.
:param frequency: The operating frequency (Hz).
:param length: The length of the dipole (m).
:param current: The current on the dipole (A).
:return: The radiated power (W).
"""
return 0.5 * radiation_resistance(frequency, length) * abs(current) ** 2
def far_field(frequency, length, current, r, theta):
"""
Calculate the electric and magnetic far fields for a infinitesimal dipole.
:param frequency: The operating frequency (Hz).
:param length: The length of the dipole (m).
:param current: The current on the dipole (A).
:param r: The range to the field point (m).
:param theta: The angle to the field point (rad).
:return: The electric and magnetic far fields (V/m) & (A/m).
"""
# Calculate the wave impedance
eta = sqrt(mu_0 / epsilon_0)
# Calculate the wavenumber
k = 2.0 * pi * frequency / c
# Define the radial-component of the electric far field (V/m)
e_r = 0.0
# Define the theta-component of the electric far field (V/m)
e_theta = 1j * eta * k * current * length / (4.0 * pi * r) * sin(theta) * exp(-1j * k * r)
# Define the phi-component of the electric far field (V/m)
e_phi = 0.0
# Define the r-component of the magnetic far field (A/m)
h_r = 0.0
# Define the theta-component of the magnetic far field (A/m)
h_theta = 0.0
# Define the phi-component of the magnetic far field (A/m)
h_phi = 1j * k * current * length / (4.0 * pi * r) * sin(theta) * exp(-1j * k * r)
# Return all six components of the far field
return e_r, e_theta, e_phi, h_r, h_theta, h_phi
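A quick numeric sanity check of the closed-form expressions above. The frequency, length, and current are illustrative values, not from the text: picking a frequency whose wavelength is exactly 1 m, a 1 cm dipole is safely "infinitesimal" and its radiation resistance comes out tiny, as expected.

```python
import math

C = 299792458.0  # speed of light (m/s)

def radiation_resistance(frequency, length):
    # R_r = 80 * (pi * l * f / c)^2 = 80 * (pi * l / lambda)^2
    return 80.0 * (math.pi * length * frequency / C) ** 2

def radiated_power(frequency, length, current):
    return 0.5 * radiation_resistance(frequency, length) * abs(current) ** 2

f = C / 1.0  # frequency with a wavelength of exactly 1 m
l = 0.01     # 1 cm dipole, l / lambda = 0.01
r_rad = radiation_resistance(f, l)  # ~0.079 ohm
p_rad = radiated_power(f, l, 1.0)   # ~0.039 W for 1 A of current
```

Such a small radiation resistance is why electrically short dipoles are poor radiators: most drive power is lost in the (much larger) ohmic resistance instead.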
"""
File name: visualize_training.py
Author: <NAME>
Date created: 07/04/2018
This script plots the prediction of training patches.
"""
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
from Unet import config
from Unet.utils.prepare_train_val_sets import create_training_datasets
from Unet.utils import helper
from Unet.utils.metrics import dice_coef_loss, dice_coef
import pickle
import os
################################################
# SET PARAMETERS
################################################
patch_size_list = [96]  # size of one patch n x n
num_epochs = 10  # number of epochs
batch_size_list = [8, 16, 32, 64]  # list with batch sizes
learning_rate_list = [1e-4, 1e-5]  # list with learning rates of the optimizer Adam
dropout_list = [0.0, 0.1, 0.2]  # percentage of weights to be dropped
num_patients_train = 41
num_patients_val = 11
save = False # True for saving the plots into imgs folder as png files, False for not saving but showing the plots
################################################
if not os.path.exists(config.IMGS_DIR):
os.makedirs(config.IMGS_DIR)
for patch_size in patch_size_list:
for batch_size in batch_size_list:
for lr in learning_rate_list:
for dropout in dropout_list:
print('________________________________________________________________________________')
print('patch size', patch_size)
print('batch size', batch_size)
print('learning rate', lr)
print('dropout', dropout)
# create the name of current run
run_name = config.get_run_name(patch_size, num_epochs, batch_size, lr, dropout, num_patients_train,
num_patients_val)
print(run_name)
# -----------------------------------------------------------
# LOADING MODEL DATA
# -----------------------------------------------------------
train_X, train_y, val_X, val_y, mean, std = create_training_datasets(patch_size, config.NUM_PATCHES,
config.PATIENTS)
# -----------------------------------------------------------
# LOADING MODEL
# -----------------------------------------------------------
model_filepath = config.get_model_filepath(run_name)
model = load_model(model_filepath,
custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})
# model.summary()
train_metadata_filepath = config.get_train_metadata_filepath(run_name)
with open(train_metadata_filepath, 'rb') as handle:
train_metadata = pickle.load(handle)
print('Train params:')
print(train_metadata['params'])
print('Train performance:')
print(train_metadata['performance'])
# -----------------------------------------------------------
# PREDICTION
# -----------------------------------------------------------
start = 495
nr_predict = 10
print('Prediction of training patches:')
train_y_pred = model.predict(train_X[start:start + nr_predict], batch_size=1, verbose=1)
print('Prediction of validation patches:')
val_y_pred = model.predict(val_X[start:start + nr_predict], batch_size=1, verbose=1)
# -----------------------------------------------------------
# PLOTTING CURVES
# -----------------------------------------------------------
title = 'patchsize: ' + str(train_metadata['params']['patch_size']) + ', epochs: ' + str(
train_metadata['params']['epochs']) + ', batchsize: ' + str(
train_metadata['params']['batchsize']) + '\n' + ' lr: ' + str(
train_metadata['params']['learning_rate']) + ', dropout: ' + str(
train_metadata['params']['dropout']) + '\n' + ' number of training samples: ' + str(
train_metadata['params']['samples']) + ', number of validation samples: ' + str(
train_metadata['params']['val_samples'])
save_name = config.IMGS_DIR + 'curves_' + run_name + '.png'
helper.plot_loss_acc_history(train_metadata, save_name=save_name, suptitle=title, val=True, save=save)
# -----------------------------------------------------------
# PLOTTING PREDICTIONS
# -----------------------------------------------------------
# reshape for plotting
train_X = train_X.reshape(train_X.shape[0], train_X.shape[1], train_X.shape[2])
train_y = train_y.reshape(train_y.shape[0], train_y.shape[1], train_y.shape[2])
val_X = val_X.reshape(val_X.shape[0], val_X.shape[1], val_X.shape[2])
val_y = val_y.reshape(val_y.shape[0], val_y.shape[1], val_y.shape[2])
train_y_pred = train_y_pred.reshape(train_y_pred.shape[0], train_y_pred.shape[1], train_y_pred.shape[2])
val_y_pred = val_y_pred.reshape(val_y_pred.shape[0], val_y_pred.shape[1], val_y_pred.shape[2])
print('train_y_pred: max', np.max(train_y_pred), ' min', np.min(train_y_pred))
print('val_y_pred: max', np.max(val_y_pred), ' min', np.min(val_y_pred))
print('Plotting...')
plt.figure(figsize=(12, 9))
plt.suptitle(title)
nr_patches_to_plot = nr_predict
num_columns = 6
for i in range(nr_patches_to_plot):
plt.subplot(nr_patches_to_plot, num_columns, (i * num_columns) + 1)
plt.title('train:' + '\n' + 'MRA img') if i == 0 else 0
plt.axis('off')
plt.imshow(train_X[start + i])
plt.subplot(nr_patches_to_plot, num_columns, (i * num_columns) + 2)
plt.title('train:' + '\n' + 'ground truth') if i == 0 else 0
plt.axis('off')
plt.imshow(train_y[start + i])
plt.subplot(nr_patches_to_plot, num_columns, (i * num_columns) + 3)
plt.title('train:' + '\n' + 'probability map') if i == 0 else 0
plt.axis('off')
plt.imshow(train_y_pred[i], cmap='hot')
cax = plt.axes([0.42, 0.05, 0.02, 0.75])
plt.colorbar(cax=cax)
plt.subplot(nr_patches_to_plot, num_columns, (i * num_columns) + 4)
plt.title('val:' + '\n' + 'MRA img') if i == 0 else 0
plt.axis('off')
plt.imshow(val_X[start + i])
plt.subplot(nr_patches_to_plot, num_columns, (i * num_columns) + 5)
plt.title('val:' + '\n' + 'ground truth') if i == 0 else 0
plt.axis('off')
plt.imshow(val_y[start + i])
plt.subplot(nr_patches_to_plot, num_columns, (i * num_columns) + 6)
plt.title('val:' + '\n' + 'probability map') if i == 0 else 0
plt.axis('off')
plt.imshow(val_y_pred[i], cmap='hot')
cax2 = plt.axes([0.9, 0.05, 0.02, 0.75])
plt.colorbar(cax=cax2)
plt.subplots_adjust(bottom=0.05, right=0.9, top=0.8, left=0)
if save:
plt.savefig(config.IMGS_DIR + 'preds_' + run_name + '.png')
else:
plt.show()
print('DONE')
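The script imports `dice_coef` and `dice_coef_loss` from `Unet.utils.metrics`, which is not shown here. As a hedged NumPy sketch of the usual smoothed Dice coefficient (the project's actual smoothing constant and Keras backend implementation may differ):

```python
import numpy as np

def dice_coef_np(y_true, y_pred, smooth=1.0):
    # Dice = 2|A n B| / (|A| + |B|); smooth keeps empty masks from dividing by zero.
    y_true_f = y_true.ravel()
    y_pred_f = y_pred.ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

mask = np.ones((4, 4))
perfect = dice_coef_np(mask, mask)               # exactly 1.0 with smooth=1
disjoint = dice_coef_np(mask, np.zeros((4, 4)))  # 1/17, close to 0
```

The loss used for training would then be `1 - dice`, which rewards overlap between the predicted probability map and the ground-truth vessel mask.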
7270), 'matplotlib.pyplot.axis', 'plt.axis', (['"""off"""'], {}), "('off')\n", (7263, 7270), True, 'import matplotlib.pyplot as plt\n'), ((7291, 7319), 'matplotlib.pyplot.imshow', 'plt.imshow', (['val_y[start + i]'], {}), '(val_y[start + i])\n', (7301, 7319), True, 'import matplotlib.pyplot as plt\n'), ((7341, 7406), 'matplotlib.pyplot.subplot', 'plt.subplot', (['nr_patches_to_plot', 'num_columns', '(i * num_columns + 6)'], {}), '(nr_patches_to_plot, num_columns, i * num_columns + 6)\n', (7352, 7406), True, 'import matplotlib.pyplot as plt\n'), ((7511, 7526), 'matplotlib.pyplot.axis', 'plt.axis', (['"""off"""'], {}), "('off')\n", (7519, 7526), True, 'import matplotlib.pyplot as plt\n'), ((7547, 7584), 'matplotlib.pyplot.imshow', 'plt.imshow', (['val_y_pred[i]'], {'cmap': '"""hot"""'}), "(val_y_pred[i], cmap='hot')\n", (7557, 7584), True, 'import matplotlib.pyplot as plt\n'), ((7613, 7646), 'matplotlib.pyplot.axes', 'plt.axes', (['[0.9, 0.05, 0.02, 0.75]'], {}), '([0.9, 0.05, 0.02, 0.75])\n', (7621, 7646), True, 'import matplotlib.pyplot as plt\n'), ((7667, 7689), 'matplotlib.pyplot.colorbar', 'plt.colorbar', ([], {'cax': 'cax2'}), '(cax=cax2)\n', (7679, 7689), True, 'import matplotlib.pyplot as plt\n'), ((7813, 7872), 'matplotlib.pyplot.savefig', 'plt.savefig', (["(config.IMGS_DIR + 'preds_' + run_name + '.png')"], {}), "(config.IMGS_DIR + 'preds_' + run_name + '.png')\n", (7824, 7872), True, 'import matplotlib.pyplot as plt\n'), ((7915, 7925), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (7923, 7925), True, 'import matplotlib.pyplot as plt\n'), ((6046, 6084), 'matplotlib.pyplot.title', 'plt.title', (["('train:' + '\\n' + 'MRA img')"], {}), "('train:' + '\\n' + 'MRA img')\n", (6055, 6084), True, 'import matplotlib.pyplot as plt\n'), ((6298, 6341), 'matplotlib.pyplot.title', 'plt.title', (["('train:' + '\\n' + 'ground truth')"], {}), "('train:' + '\\n' + 'ground truth')\n", (6307, 6341), True, 'import matplotlib.pyplot as plt\n'), ((6555, 6601), 
'matplotlib.pyplot.title', 'plt.title', (["('train:' + '\\n' + 'probability map')"], {}), "('train:' + '\\n' + 'probability map')\n", (6564, 6601), True, 'import matplotlib.pyplot as plt\n'), ((6928, 6964), 'matplotlib.pyplot.title', 'plt.title', (["('val:' + '\\n' + 'MRA img')"], {}), "('val:' + '\\n' + 'MRA img')\n", (6937, 6964), True, 'import matplotlib.pyplot as plt\n'), ((7176, 7217), 'matplotlib.pyplot.title', 'plt.title', (["('val:' + '\\n' + 'ground truth')"], {}), "('val:' + '\\n' + 'ground truth')\n", (7185, 7217), True, 'import matplotlib.pyplot as plt\n'), ((7429, 7473), 'matplotlib.pyplot.title', 'plt.title', (["('val:' + '\\n' + 'probability map')"], {}), "('val:' + '\\n' + 'probability map')\n", (7438, 7473), True, 'import matplotlib.pyplot as plt\n')] |
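The truncated record above loads a Keras U-Net with custom `dice_coef_loss` / `dice_coef` objects before plotting its probability maps. The metric bodies themselves are not shown, so the snippet below is only a standard NumPy formulation of the Sørensen–Dice overlap for reference, not the project's implementation:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # Sørensen–Dice overlap between a binary mask and a probability map.
    # `smooth` guards against division by zero on empty masks.
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

mask = np.array([1.0, 1.0, 0.0, 0.0])
pred = np.array([1.0, 0.0, 0.0, 0.0])
overlap = dice_coef(mask, pred)  # (2*1 + 1) / (2 + 1 + 1) = 0.75
```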
import tensorflow as tf
import pandas as pd
import numpy as np
import os
from sklearn.linear_model import LogisticRegression
import sys
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score
import warnings
import pickle
warnings.simplefilter('ignore', category=Warning, lineno=0, append=False)
DATA_NAME = sys.argv[1] # 'hotel'
dimension = 300
np.random.seed(1)
############
def load_data(mode='train'): # train, test, valid
data_path = '../data/{}/{}.csv'.format(DATA_NAME, mode)
#
data = pd.read_csv(data_path)
input_y = np.array(data.iloc[:, 0])
input_X = np.array(data.iloc[:, 2:])
return input_X, input_y
def reloadGraph(modelPath):
tf.reset_default_graph()
sess = tf.Session()
metaFile = modelPath.split('/')[-1]+'.ckpt.meta'
saver = tf.train.import_meta_graph(os.path.join(modelPath, metaFile))
saver.restore(sess, tf.train.latest_checkpoint(modelPath))
graph = tf.get_default_graph()
return graph, sess
def get_embed_feature(graph, loaded_sess, inputX):
X = tf.placeholder(tf.float32, shape=[None, dimension], name='input')
sess = loaded_sess
#with loaded_sess as sess:
w1 = graph.get_tensor_by_name('fc_l1/weights:0')
b1 = graph.get_tensor_by_name('fc_l1/biases:0')
w2 = graph.get_tensor_by_name('fc_l2/weights:0')
b2 = graph.get_tensor_by_name('fc_l2/biases:0')
embd = tf.nn.sigmoid(tf.matmul(tf.nn.sigmoid(tf.matmul(X, w1)+b1), w2)+b2)
feed = {X: inputX}
output = sess.run(embd, feed_dict=feed)
return output
if __name__ == '__main__':
deep_model_file = 'model_checkpoint_folder'
train_X, train_y = load_data(mode='train')
valid_X, valid_y = load_data(mode='valid')
graph, session = reloadGraph('../model_neucrowd/rll/' + deep_model_file)
embd = get_embed_feature(graph, session, train_X)
embd_valid = get_embed_feature(graph, session, valid_X)
session.close()
model = LogisticRegression(penalty='l2', C=1, max_iter=400, solver='liblinear', class_weight='balanced')
model.fit(embd, train_y)
y_hat = model.predict(embd_valid)
y_proba = model.predict_proba(embd_valid)
| [
"tensorflow.reset_default_graph",
"pandas.read_csv",
"tensorflow.Session",
"tensorflow.placeholder",
"os.path.join",
"sklearn.linear_model.LogisticRegression",
"numpy.array",
"numpy.random.seed",
"tensorflow.matmul",
"warnings.simplefilter",
"tensorflow.train.latest_checkpoint",
"tensorflow.ge... | [((256, 329), 'warnings.simplefilter', 'warnings.simplefilter', (['"""ignore"""'], {'category': 'Warning', 'lineno': '(0)', 'append': '(False)'}), "('ignore', category=Warning, lineno=0, append=False)\n", (277, 329), False, 'import warnings\n'), ((385, 402), 'numpy.random.seed', 'np.random.seed', (['(1)'], {}), '(1)\n', (399, 402), True, 'import numpy as np\n'), ((548, 570), 'pandas.read_csv', 'pd.read_csv', (['data_path'], {}), '(data_path)\n', (559, 570), True, 'import pandas as pd\n'), ((586, 611), 'numpy.array', 'np.array', (['data.iloc[:, 0]'], {}), '(data.iloc[:, 0])\n', (594, 611), True, 'import numpy as np\n'), ((626, 652), 'numpy.array', 'np.array', (['data.iloc[:, 2:]'], {}), '(data.iloc[:, 2:])\n', (634, 652), True, 'import numpy as np\n'), ((716, 740), 'tensorflow.reset_default_graph', 'tf.reset_default_graph', ([], {}), '()\n', (738, 740), True, 'import tensorflow as tf\n'), ((752, 764), 'tensorflow.Session', 'tf.Session', ([], {}), '()\n', (762, 764), True, 'import tensorflow as tf\n'), ((972, 994), 'tensorflow.get_default_graph', 'tf.get_default_graph', ([], {}), '()\n', (992, 994), True, 'import tensorflow as tf\n'), ((1079, 1144), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32'], {'shape': '[None, dimension]', 'name': '"""input"""'}), "(tf.float32, shape=[None, dimension], name='input')\n", (1093, 1144), True, 'import tensorflow as tf\n'), ((1973, 2073), 'sklearn.linear_model.LogisticRegression', 'LogisticRegression', ([], {'penalty': '"""l2"""', 'C': '(1)', 'max_iter': '(400)', 'solver': '"""liblinear"""', 'class_weight': '"""balanced"""'}), "(penalty='l2', C=1, max_iter=400, solver='liblinear',\n class_weight='balanced')\n", (1991, 2073), False, 'from sklearn.linear_model import LogisticRegression\n'), ((862, 895), 'os.path.join', 'os.path.join', (['modelPath', 'metaFile'], {}), '(modelPath, metaFile)\n', (874, 895), False, 'import os\n'), ((921, 958), 'tensorflow.train.latest_checkpoint', 
'tf.train.latest_checkpoint', (['modelPath'], {}), '(modelPath)\n', (947, 958), True, 'import tensorflow as tf\n'), ((1458, 1474), 'tensorflow.matmul', 'tf.matmul', (['X', 'w1'], {}), '(X, w1)\n', (1467, 1474), True, 'import tensorflow as tf\n')] |
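The row above restores a two-layer TF1 checkpoint and trains a `LogisticRegression` on its sigmoid activations. Since the checkpoint folder is not available here, this is a hedged NumPy restatement of the same embed-then-classify pattern on synthetic data; the shapes and random weights are illustrative stand-ins for the restored `fc_l1` / `fc_l2` variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(X, w1, b1, w2, b2):
    # Same forward pass as get_embed_feature:
    # sigmoid(sigmoid(X @ w1 + b1) @ w2 + b2)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(sigmoid(X @ w1 + b1) @ w2 + b2)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Random weights stand in for the restored checkpoint variables.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

features = embed(X, w1, b1, w2, b2)
clf = LogisticRegression(penalty='l2', C=1, max_iter=400,
                         solver='liblinear', class_weight='balanced')
clf.fit(features, y)
y_proba = clf.predict_proba(features)
```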
#!/usr/bin/env python3
import rospy
import std_msgs.msg as std_msg
import nav_msgs.msg as nav_msg
import math
import numpy as np
def euler_from_quaternion(x, y, z, w):
"""
Convert a quaternion into euler angles (roll, pitch, yaw)
roll is rotation around x in radians (counterclockwise)
pitch is rotation around y in radians (counterclockwise)
yaw is rotation around z in radians (counterclockwise)
"""
t0 = +2.0 * (w * x + y * z)
t1 = +1.0 - 2.0 * (x * x + y * y)
roll_x = math.atan2(t0, t1)
t2 = +2.0 * (w * y - z * x)
t2 = +1.0 if t2 > +1.0 else t2
t2 = -1.0 if t2 < -1.0 else t2
pitch_y = math.asin(t2)
t3 = +2.0 * (w * z + x * y)
t4 = +1.0 - 2.0 * (y * y + z * z)
yaw_z = math.atan2(t3, t4)
return roll_x, pitch_y, yaw_z # in radians
class GalleryTracker:
def __init__(self):
self.galleries = np.zeros(0)
self.confidences = np.zeros(0)
self.block_odom = True
self.block_gallery = False
self.max_confidence = 20
def new_odom(self, turn):
if self.block_odom:
return
self.galleries = (self.galleries - turn + 2 * math.pi) % (2 * math.pi)
def new_galleries(self, angles, values):
if self.block_gallery:
return
self.block_gallery = True
self.block_odom = True
base_confidences = values / max(values)
angles = np.delete(angles, np.where(base_confidences < 0.2))
base_confidences = np.delete(base_confidences, np.where(base_confidences < 0.2))
distance_matrix = np.zeros((len(self.galleries), len(angles)))
for i, gallery in enumerate(self.galleries):
distances = (gallery - angles + 2 * math.pi) % (2 * math.pi)
distances[distances > math.pi] = (
2 * math.pi - distances[distances > math.pi]
)
distance_matrix[i, :] = distances
        unassigned_angles = list(angles)
        galleries_to_delete = []
        for i in range(len(self.galleries)):
            distances = distance_matrix[i, :]
            j = np.argmin(distances)
            min_distance = distances[j]
            gallery_re_observed = False
            if i == np.argmin(distance_matrix[:, j]):
                if min_distance < 10 / 180 * math.pi:
                    self.galleries[i] = angles[j]
                    self.confidences[i] = min(
                        base_confidences[j] + self.confidences[i], self.max_confidence
                    )
                    unassigned_angles.remove(angles[j])
                    gallery_re_observed = True
            if not gallery_re_observed:
                self.confidences[i] -= 2
                if self.confidences[i] < 0:
                    galleries_to_delete.append(i)
        galleries_to_delete.sort(reverse=True)
        for i in galleries_to_delete:
            self.galleries = np.delete(self.galleries, i)
            self.confidences = np.delete(self.confidences, i)
        for a in unassigned_angles:
self.galleries = np.append(self.galleries, a)
self.confidences = np.append(
self.confidences, base_confidences[list(angles).index(a)]
)
# Delete galleries too close to each other
gallery_was_deleted = True
while gallery_was_deleted:
gallery_was_deleted = False
for gallery in self.galleries:
distances = (self.galleries - gallery + 2 * math.pi) % (2 * math.pi)
distances[distances > math.pi] = (
2 * math.pi - distances[distances > math.pi]
)
close_to_gallery = np.array(
np.where(distances < 20 / 180 * 2 * math.pi)
).flatten()
if len(close_to_gallery) > 1:
dominant_gallery = np.argmax(self.confidences[close_to_gallery])
close_to_gallery = np.delete(close_to_gallery, dominant_gallery)
close_to_gallery = list(close_to_gallery)
close_to_gallery.sort(reverse=True)
for gal_to_del in close_to_gallery:
self.galleries = np.delete(self.galleries, gal_to_del)
self.confidences = np.delete(self.confidences, gal_to_del)
gallery_was_deleted = True
self.block_odom = False
self.block_gallery = False
return np.append(self.galleries, self.confidences)
class TrackingNode:
def __init__(self):
rospy.init_node("gallery_tracking_node")
self.tracker = GalleryTracker()
self.prev_z = 0
self.gallery_subscriber = rospy.Subscriber(
"/currently_detected_galleries",
std_msg.Float32MultiArray,
callback=self.currently_detected_callback,
)
self.odometry_subscriber = rospy.Subscriber(
"/odometry/filtered",
nav_msg.Odometry,
callback=self.odometry_callback,
)
self.tracked_galleries_publisher = rospy.Publisher(
"/tracked_galleries", std_msg.Float32MultiArray, queue_size=10
)
def currently_detected_callback(self, msg):
        assert len(msg.data) % 2 == 0
reshaped = np.reshape(msg.data, (2, -1))
galleries = self.tracker.new_galleries(reshaped[0, :], reshaped[1, :])
if galleries is None:
return
data = galleries
        dim = (std_msg.MultiArrayDimension("0", len(data), 2),)
layout = std_msg.MultiArrayLayout(dim, 0)
output_message = std_msg.Float32MultiArray(layout, data)
self.tracked_galleries_publisher.publish(output_message)
def odometry_callback(self, msg):
q = msg.pose.pose.orientation
x, y, z = euler_from_quaternion(q.x, q.y, q.z, q.w)
turn = z - self.prev_z
self.tracker.new_odom(turn)
self.prev_z = z
def main():
node = TrackingNode()
rospy.spin()
if __name__ == "__main__":
main()
| [
"rospy.Publisher",
"numpy.reshape",
"std_msgs.msg.Float32MultiArray",
"rospy.init_node",
"numpy.where",
"math.asin",
"numpy.delete",
"numpy.argmax",
"numpy.append",
"std_msgs.msg.MultiArrayLayout",
"numpy.zeros",
"math.atan2",
"rospy.spin",
"numpy.argmin",
"rospy.Subscriber"
] | [((503, 521), 'math.atan2', 'math.atan2', (['t0', 't1'], {}), '(t0, t1)\n', (513, 521), False, 'import math\n'), ((639, 652), 'math.asin', 'math.asin', (['t2'], {}), '(t2)\n', (648, 652), False, 'import math\n'), ((736, 754), 'math.atan2', 'math.atan2', (['t3', 't4'], {}), '(t3, t4)\n', (746, 754), False, 'import math\n'), ((6003, 6015), 'rospy.spin', 'rospy.spin', ([], {}), '()\n', (6013, 6015), False, 'import rospy\n'), ((877, 888), 'numpy.zeros', 'np.zeros', (['(0)'], {}), '(0)\n', (885, 888), True, 'import numpy as np\n'), ((916, 927), 'numpy.zeros', 'np.zeros', (['(0)'], {}), '(0)\n', (924, 927), True, 'import numpy as np\n'), ((4466, 4509), 'numpy.append', 'np.append', (['self.galleries', 'self.confidences'], {}), '(self.galleries, self.confidences)\n', (4475, 4509), True, 'import numpy as np\n'), ((4564, 4604), 'rospy.init_node', 'rospy.init_node', (['"""gallery_tracking_node"""'], {}), "('gallery_tracking_node')\n", (4579, 4604), False, 'import rospy\n'), ((4703, 4826), 'rospy.Subscriber', 'rospy.Subscriber', (['"""/currently_detected_galleries"""', 'std_msg.Float32MultiArray'], {'callback': 'self.currently_detected_callback'}), "('/currently_detected_galleries', std_msg.Float32MultiArray,\n callback=self.currently_detected_callback)\n", (4719, 4826), False, 'import rospy\n'), ((4905, 4999), 'rospy.Subscriber', 'rospy.Subscriber', (['"""/odometry/filtered"""', 'nav_msg.Odometry'], {'callback': 'self.odometry_callback'}), "('/odometry/filtered', nav_msg.Odometry, callback=self.\n odometry_callback)\n", (4921, 4999), False, 'import rospy\n'), ((5085, 5164), 'rospy.Publisher', 'rospy.Publisher', (['"""/tracked_galleries"""', 'std_msg.Float32MultiArray'], {'queue_size': '(10)'}), "('/tracked_galleries', std_msg.Float32MultiArray, queue_size=10)\n", (5100, 5164), False, 'import rospy\n'), ((5298, 5327), 'numpy.reshape', 'np.reshape', (['msg.data', '(2, -1)'], {}), '(msg.data, (2, -1))\n', (5308, 5327), True, 'import numpy as np\n'), ((5567, 5599), 
'std_msgs.msg.MultiArrayLayout', 'std_msg.MultiArrayLayout', (['dim', '(0)'], {}), '(dim, 0)\n', (5591, 5599), True, 'import std_msgs.msg as std_msg\n'), ((5625, 5664), 'std_msgs.msg.Float32MultiArray', 'std_msg.Float32MultiArray', (['layout', 'data'], {}), '(layout, data)\n', (5650, 5664), True, 'import std_msgs.msg as std_msg\n'), ((1428, 1460), 'numpy.where', 'np.where', (['(base_confidences < 0.2)'], {}), '(base_confidences < 0.2)\n', (1436, 1460), True, 'import numpy as np\n'), ((1517, 1549), 'numpy.where', 'np.where', (['(base_confidences < 0.2)'], {}), '(base_confidences < 0.2)\n', (1525, 1549), True, 'import numpy as np\n'), ((2098, 2118), 'numpy.argmin', 'np.argmin', (['distances'], {}), '(distances)\n', (2107, 2118), True, 'import numpy as np\n'), ((2907, 2935), 'numpy.delete', 'np.delete', (['self.galleries', 'i'], {}), '(self.galleries, i)\n', (2916, 2935), True, 'import numpy as np\n'), ((2967, 2997), 'numpy.delete', 'np.delete', (['self.confidences', 'i'], {}), '(self.confidences, i)\n', (2976, 2997), True, 'import numpy as np\n'), ((3063, 3091), 'numpy.append', 'np.append', (['self.galleries', 'a'], {}), '(self.galleries, a)\n', (3072, 3091), True, 'import numpy as np\n'), ((2220, 2252), 'numpy.argmin', 'np.argmin', (['distance_matrix[:, j]'], {}), '(distance_matrix[:, j])\n', (2229, 2252), True, 'import numpy as np\n'), ((3869, 3914), 'numpy.argmax', 'np.argmax', (['self.confidences[close_to_gallery]'], {}), '(self.confidences[close_to_gallery])\n', (3878, 3914), True, 'import numpy as np\n'), ((3954, 3999), 'numpy.delete', 'np.delete', (['close_to_gallery', 'dominant_gallery'], {}), '(close_to_gallery, dominant_gallery)\n', (3963, 3999), True, 'import numpy as np\n'), ((4215, 4252), 'numpy.delete', 'np.delete', (['self.galleries', 'gal_to_del'], {}), '(self.galleries, gal_to_del)\n', (4224, 4252), True, 'import numpy as np\n'), ((4296, 4335), 'numpy.delete', 'np.delete', (['self.confidences', 'gal_to_del'], {}), '(self.confidences, gal_to_del)\n', 
(4305, 4335), True, 'import numpy as np\n'), ((3711, 3755), 'numpy.where', 'np.where', (['(distances < 20 / 180 * 2 * math.pi)'], {}), '(distances < 20 / 180 * 2 * math.pi)\n', (3719, 3755), True, 'import numpy as np\n')] |
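The `GalleryTracker` above repeatedly computes distances between tracked and observed gallery bearings modulo 2π and folds them onto [0, π], so that wrap-around at 0/2π does not split a gallery in two. A minimal standalone restatement of that angle arithmetic:

```python
import math

def angular_distance(a, b):
    # Shortest separation between two angles (radians), mirroring the
    # wrap-around handling in GalleryTracker.new_galleries.
    d = (a - b + 2 * math.pi) % (2 * math.pi)
    return 2 * math.pi - d if d > math.pi else d

# 350 deg and 10 deg are 20 deg apart, not 340 deg
d = angular_distance(math.radians(350), math.radians(10))
```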
import numpy as np
def decomposicaoLU(A):
U = np.copy(A)
n = np.shape(U)[0]
L = np.eye(n)
for j in np.arange(n-1):
for i in np.arange(j+1,n):
L[i,j] = U[i,j]/U[j,j]
for k in np.arange(j+1,n):
U[i,k] = U[i,k] - L[i,j]*U[j,k]
U[i,j] = 0
matrizLU = []
matrizLU.append(L)
matrizLU.append(U)
return matrizLU
print("Matriz original:")
matriz = np.matrix("1 0 -2; -0.5 1 -0.25; 1 -0.5 1 ")
print(matriz,"\n")
lu_factors = decomposicaoLU(matriz)
print("Matriz triangular inferior:\n", np.matrix(lu_factors[0]), "\n")
print("Matriz triangular superior:\n", np.matrix(lu_factors[1]), "\n")
"numpy.copy",
"numpy.shape",
"numpy.eye",
"numpy.matrix",
"numpy.arange"
] | [((514, 558), 'numpy.matrix', 'np.matrix', (['"""1 0 -2; -0.5 1 -0.25; 1 -0.5 1 """'], {}), "('1 0 -2; -0.5 1 -0.25; 1 -0.5 1 ')\n", (523, 558), True, 'import numpy as np\n'), ((65, 75), 'numpy.copy', 'np.copy', (['A'], {}), '(A)\n', (72, 75), True, 'import numpy as np\n'), ((113, 122), 'numpy.eye', 'np.eye', (['n'], {}), '(n)\n', (119, 122), True, 'import numpy as np\n'), ((139, 155), 'numpy.arange', 'np.arange', (['(n - 1)'], {}), '(n - 1)\n', (148, 155), True, 'import numpy as np\n'), ((650, 668), 'numpy.matrix', 'np.matrix', (['list[0]'], {}), '(list[0])\n', (659, 668), True, 'import numpy as np\n'), ((715, 733), 'numpy.matrix', 'np.matrix', (['list[1]'], {}), '(list[1])\n', (724, 733), True, 'import numpy as np\n'), ((87, 98), 'numpy.shape', 'np.shape', (['U'], {}), '(U)\n', (95, 98), True, 'import numpy as np\n'), ((175, 194), 'numpy.arange', 'np.arange', (['(j + 1)', 'n'], {}), '(j + 1, n)\n', (184, 194), True, 'import numpy as np\n'), ((255, 274), 'numpy.arange', 'np.arange', (['(j + 1)', 'n'], {}), '(j + 1, n)\n', (264, 274), True, 'import numpy as np\n')] |
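A quick way to sanity-check an LU factorisation is to multiply the factors back together. The sketch below redoes the same Doolittle elimination as `decomposicaoLU` over plain float arrays (avoiding the deprecated `np.matrix`):

```python
import numpy as np

def lu_doolittle(A):
    # Doolittle LU factorisation without pivoting, the same scheme as
    # decomposicaoLU above, written over plain float arrays.
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, j:] -= L[i, j] * U[j, j:]  # zeroes U[i, j] up to rounding
    return L, U

A = np.array([[1.0, 0.0, -2.0], [-0.5, 1.0, -0.25], [1.0, -0.5, 1.0]])
L, U = lu_doolittle(A)
```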
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from playground.neural_net.layers.dense import Dense
from playground.neural_net.nn_model.forward_propagation import propagate_forward
from playground.neural_net.nn_model.backward_propagation import propagate_backward
import playground.neural_net.optimizers.adam as adam
import playground.neural_net.optimizers.rms_prop as rms_prop
from playground.neural_net.utils.mini_batch import *
import playground.neural_net.optimizers.gd_with_momentum as gd_with_momentum
import playground.neural_net.optimizers.gradient_descent as gradient_descent
from playground.neural_net.loss.cost import compute_cost
class Model:
def __init__(self):
self.layers = []
self.activations = {}
self.parameters = {}
self.optimizer = None
self.learning_rate = 0.0
self.X = None
self.y = None
self.cost = []
self.v = None
self.s = None
self.mini_batch = None
self.train_acc = 0.0
self.test_acc = 0.0
self.mse = 0.0
self.problem_type = None
def init_parameters(self):
for l in range(1, len(self.layers)):
if(self.layers[l].activation == 'relu'):
self.parameters['W' + str(l)] = np.random.randn(self.layers[l].units, self.layers[l-1].units) * np.sqrt(2 / self.layers[l - 1].units)
else:
self.parameters['W' + str(l)] = np.random.randn(self.layers[l].units, self.layers[l-1].units) * np.sqrt(1 / self.layers[l - 1].units)
self.parameters['b' + str(l)] = np.zeros((self.layers[l].units, 1))
self.activations['Activation' + str(l)] = self.layers[l].activation
def add(self, units, activation):
layer = Dense(units, activation)
self.layers.append(layer)
def compile(self, opt, lr):
self.optimizer = opt
self.init_parameters()
if(opt == 'Adam'):
self.v, self.s = adam.initialize_parameters(self.parameters)
elif(opt == 'GD_momentum'):
self.v = gd_with_momentum.initialize_parameters(self.parameters)
elif(opt == 'RMSProp'):
self.s = rms_prop.initialize_parameters(self.parameters)
self.learning_rate = lr
def fit(self, X, Y, epochs, regularization_type, regularization_rate, mini_batch_size):
self.X = np.array(X).T
self.Y = np.array(Y).T
self.mini_batch = create_mini_batches(self.X, self.Y, mini_batch_size)
costs = []
t = 0
for i in range(0, epochs):
for batch in self.mini_batch:
AL, caches = propagate_forward(batch[0], self.parameters, self.activations, self.learning_rate)
                cost = compute_cost(AL, batch[1], self.parameters, regularization_type, regularization_rate)
costs.append(cost)
grads = propagate_backward(AL, batch[1], caches, regularization_type, regularization_rate, self.activations, self.learning_rate)
if(self.optimizer == 'Adam'):
t = t + 1
self.parameters, self.v, self.s = adam.update_parameters(self.parameters, grads, self.v, self.s, t, self.learning_rate)
elif(self.optimizer == 'GD' or self.optimizer == 'Sgd'):
self.parameters = gradient_descent.update_parameters(self.parameters, grads, self.learning_rate)
elif(self.optimizer == 'GD_momentum'):
self.parameters, self.v = gd_with_momentum.update_parameters(self.parameters, grads, self.v, 0.9, self.learning_rate)
elif(self.optimizer=='RMSProp'):
self.parameters, self.s = rms_prop.update_parameters(self.parameters, grads, self.s, self.learning_rate)
# if i % 100 == 0:
# print ("Cost after iteration %i: %f" %(i, cost))
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('#iterations')
plt.title("Learning rate=" + str(self.learning_rate))
plt.savefig('playground/static/img/test.png')
def evaluate(self, X, y, type, acc):
self.problem_type = type
X = np.array(X).T
y = np.array(y).T
m = X.shape[1]
n = len(self.parameters) // 2
p = np.zeros((1, m))
probas, caches = propagate_forward(X, self.parameters, self.activations, self.learning_rate)
if(type == 'classification'):
for i in range(0, probas.shape[1]):
if probas[0,i] > 0.5:
p[0,i] = 1
else:
p[0,i] = 0
if(acc):
self.train_acc = (np.sum((p == y)/m)) * 100
else:
self.test_acc = (np.sum((p == y)/m)) * 100
            self.mse = 0.0
return p
else:
            self.train_acc = 0.0
            self.test_acc = 0.0
self.mse = mean_squared_error(y, probas) * 100
return self.mse
def predict(self, X, type):
X = np.array(X).T
m = X.shape[1]
n = len(self.parameters) // 2
p = np.zeros((1, m))
proba, caches = propagate_forward(X, self.parameters, self.activations, self.learning_rate)
if(type == 'classification'):
for i in range(0, proba.shape[1]):
if proba[0,i] > 0.5:
p[0,i] = 1
else:
p[0,i] = 0
return p
else:
return proba | [
"numpy.sqrt",
"matplotlib.pyplot.ylabel",
"playground.neural_net.layers.dense.Dense",
"playground.neural_net.optimizers.rms_prop.initialize_parameters",
"playground.neural_net.optimizers.gd_with_momentum.initialize_parameters",
"playground.neural_net.optimizers.adam.initialize_parameters",
"numpy.array"... | [((37, 58), 'matplotlib.use', 'matplotlib.use', (['"""Agg"""'], {}), "('Agg')\n", (51, 58), False, 'import matplotlib\n'), ((1838, 1862), 'playground.neural_net.layers.dense.Dense', 'Dense', (['units', 'activation'], {}), '(units, activation)\n', (1843, 1862), False, 'from playground.neural_net.layers.dense import Dense\n'), ((4042, 4060), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['"""cost"""'], {}), "('cost')\n", (4052, 4060), True, 'import matplotlib.pyplot as plt\n'), ((4069, 4094), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""#iterations"""'], {}), "('#iterations')\n", (4079, 4094), True, 'import matplotlib.pyplot as plt\n'), ((4165, 4210), 'matplotlib.pyplot.savefig', 'plt.savefig', (['"""playground/static/img/test.png"""'], {}), "('playground/static/img/test.png')\n", (4176, 4210), True, 'import matplotlib.pyplot as plt\n'), ((4428, 4444), 'numpy.zeros', 'np.zeros', (['(1, m)'], {}), '((1, m))\n', (4436, 4444), True, 'import numpy as np\n'), ((4479, 4554), 'playground.neural_net.nn_model.forward_propagation.propagate_forward', 'propagate_forward', (['X', 'self.parameters', 'self.activations', 'self.learning_rate'], {}), '(X, self.parameters, self.activations, self.learning_rate)\n', (4496, 4554), False, 'from playground.neural_net.nn_model.forward_propagation import propagate_forward\n'), ((5291, 5307), 'numpy.zeros', 'np.zeros', (['(1, m)'], {}), '((1, m))\n', (5299, 5307), True, 'import numpy as np\n'), ((5333, 5408), 'playground.neural_net.nn_model.forward_propagation.propagate_forward', 'propagate_forward', (['X', 'self.parameters', 'self.activations', 'self.learning_rate'], {}), '(X, self.parameters, self.activations, self.learning_rate)\n', (5350, 5408), False, 'from playground.neural_net.nn_model.forward_propagation import propagate_forward\n'), ((1667, 1702), 'numpy.zeros', 'np.zeros', (['(self.layers[l].units, 1)'], {}), '((self.layers[l].units, 1))\n', (1675, 1702), True, 'import numpy as np\n'), ((2047, 2090), 
'playground.neural_net.optimizers.adam.initialize_parameters', 'adam.initialize_parameters', (['self.parameters'], {}), '(self.parameters)\n', (2073, 2090), True, 'import playground.neural_net.optimizers.adam as adam\n'), ((2448, 2459), 'numpy.array', 'np.array', (['X'], {}), '(X)\n', (2456, 2459), True, 'import numpy as np\n'), ((2479, 2490), 'numpy.array', 'np.array', (['Y'], {}), '(Y)\n', (2487, 2490), True, 'import numpy as np\n'), ((4015, 4032), 'numpy.squeeze', 'np.squeeze', (['costs'], {}), '(costs)\n', (4025, 4032), True, 'import numpy as np\n'), ((4315, 4326), 'numpy.array', 'np.array', (['X'], {}), '(X)\n', (4323, 4326), True, 'import numpy as np\n'), ((4341, 4352), 'numpy.array', 'np.array', (['y'], {}), '(y)\n', (4349, 4352), True, 'import numpy as np\n'), ((5204, 5215), 'numpy.array', 'np.array', (['X'], {}), '(X)\n', (5212, 5215), True, 'import numpy as np\n'), ((2148, 2203), 'playground.neural_net.optimizers.gd_with_momentum.initialize_parameters', 'gd_with_momentum.initialize_parameters', (['self.parameters'], {}), '(self.parameters)\n', (2186, 2203), True, 'import playground.neural_net.optimizers.gd_with_momentum as gd_with_momentum\n'), ((2719, 2806), 'playground.neural_net.nn_model.forward_propagation.propagate_forward', 'propagate_forward', (['batch[0]', 'self.parameters', 'self.activations', 'self.learning_rate'], {}), '(batch[0], self.parameters, self.activations, self.\n learning_rate)\n', (2736, 2806), False, 'from playground.neural_net.nn_model.forward_propagation import propagate_forward\n'), ((2826, 2915), 'playground.neural_net.loss.cost.compute_cost', 'compute_cost', (['AL', 'batch[1]', 'self.parameters', 'regularization_type', 'regularization_rate'], {}), '(AL, batch[1], self.parameters, regularization_type,\n regularization_rate)\n', (2838, 2915), False, 'from playground.neural_net.loss.cost import compute_cost\n'), ((2987, 3111), 'playground.neural_net.nn_model.backward_propagation.propagate_backward', 'propagate_backward', (['AL', 
'batch[1]', 'caches', 'regularization_type', 'regularization_rate', 'self.activations', 'self.learning_rate'], {}), '(AL, batch[1], caches, regularization_type,\n regularization_rate, self.activations, self.learning_rate)\n', (3005, 3111), False, 'from playground.neural_net.nn_model.backward_propagation import propagate_backward\n'), ((5087, 5116), 'sklearn.metrics.mean_squared_error', 'mean_squared_error', (['y', 'probas'], {}), '(y, probas)\n', (5105, 5116), False, 'from sklearn.metrics import mean_squared_error\n'), ((1353, 1416), 'numpy.random.randn', 'np.random.randn', (['self.layers[l].units', 'self.layers[l - 1].units'], {}), '(self.layers[l].units, self.layers[l - 1].units)\n', (1368, 1416), True, 'import numpy as np\n'), ((1417, 1454), 'numpy.sqrt', 'np.sqrt', (['(2 / self.layers[l - 1].units)'], {}), '(2 / self.layers[l - 1].units)\n', (1424, 1454), True, 'import numpy as np\n'), ((1521, 1584), 'numpy.random.randn', 'np.random.randn', (['self.layers[l].units', 'self.layers[l - 1].units'], {}), '(self.layers[l].units, self.layers[l - 1].units)\n', (1536, 1584), True, 'import numpy as np\n'), ((1585, 1622), 'numpy.sqrt', 'np.sqrt', (['(1 / self.layers[l - 1].units)'], {}), '(1 / self.layers[l - 1].units)\n', (1592, 1622), True, 'import numpy as np\n'), ((2257, 2304), 'playground.neural_net.optimizers.rms_prop.initialize_parameters', 'rms_prop.initialize_parameters', (['self.parameters'], {}), '(self.parameters)\n', (2287, 2304), True, 'import playground.neural_net.optimizers.rms_prop as rms_prop\n'), ((3255, 3345), 'playground.neural_net.optimizers.adam.update_parameters', 'adam.update_parameters', (['self.parameters', 'grads', 'self.v', 'self.s', 't', 'self.learning_rate'], {}), '(self.parameters, grads, self.v, self.s, t, self.\n learning_rate)\n', (3277, 3345), True, 'import playground.neural_net.optimizers.adam as adam\n'), ((4832, 4852), 'numpy.sum', 'np.sum', (['((p == y) / m)'], {}), '((p == y) / m)\n', (4838, 4852), True, 'import numpy as np\n'), 
((4909, 4929), 'numpy.sum', 'np.sum', (['((p == y) / m)'], {}), '((p == y) / m)\n', (4915, 4929), True, 'import numpy as np\n'), ((3452, 3530), 'playground.neural_net.optimizers.gradient_descent.update_parameters', 'gradient_descent.update_parameters', (['self.parameters', 'grads', 'self.learning_rate'], {}), '(self.parameters, grads, self.learning_rate)\n', (3486, 3530), True, 'import playground.neural_net.optimizers.gradient_descent as gradient_descent\n'), ((3632, 3727), 'playground.neural_net.optimizers.gd_with_momentum.update_parameters', 'gd_with_momentum.update_parameters', (['self.parameters', 'grads', 'self.v', '(0.9)', 'self.learning_rate'], {}), '(self.parameters, grads, self.v, 0.9,\n self.learning_rate)\n', (3666, 3727), True, 'import playground.neural_net.optimizers.gd_with_momentum as gd_with_momentum\n'), ((3819, 3897), 'playground.neural_net.optimizers.rms_prop.update_parameters', 'rms_prop.update_parameters', (['self.parameters', 'grads', 'self.s', 'self.learning_rate'], {}), '(self.parameters, grads, self.s, self.learning_rate)\n', (3845, 3897), True, 'import playground.neural_net.optimizers.rms_prop as rms_prop\n')] |
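`Model.init_parameters` above scales initial weights by √(2/fan_in) for ReLU layers (He initialisation) and √(1/fan_in) otherwise (Xavier-style). Isolated, that rule looks like this; the layer sizes are illustrative, and an explicit `Generator` replaces the global NumPy random state:

```python
import numpy as np

def init_layer(fan_in, fan_out, activation, seed=0):
    # He init for ReLU, 1/fan_in scaling otherwise — the same rule used in
    # Model.init_parameters above.
    rng = np.random.default_rng(seed)
    scale = np.sqrt(2.0 / fan_in) if activation == 'relu' else np.sqrt(1.0 / fan_in)
    W = rng.standard_normal((fan_out, fan_in)) * scale
    b = np.zeros((fan_out, 1))
    return W, b

W_relu, b = init_layer(300, 64, 'relu')
W_sig, _ = init_layer(300, 64, 'sigmoid')
```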
from sklearn.datasets import load_digits
from MulticoreTSNE import MulticoreTSNE as TSNE
import matplotlib
matplotlib.use('agg')
from matplotlib import pyplot as plt
import scipy.io  # a bare "import scipy" does not guarantee scipy.io is loaded; loadmat needs it
import torch
import numpy as np
#digits = load_digits()
query_path = '.'
result_n = scipy.io.loadmat(query_path+'/query_result_normal.mat')
query_n = torch.FloatTensor(result_n['query_f'])
label_n = result_n['query_label'][0]
result_q = scipy.io.loadmat(query_path+'/query_result.mat')
query_q = torch.FloatTensor(result_q['query_f'])
label_q = result_q['query_label'][0]
data = torch.cat( (query_n, query_q), 0)
flag = -1
label_t1 = torch.zeros(label_n.shape)
# replace each raw identity label with a consecutive group index (0, 1, 2, ...)
for index, xx in enumerate(label_n):
    if index == 0:
        flag = xx
        continue
    if xx != flag:
        flag = xx
        label_t1[index] = label_t1[index - 1] + 1
    else:
        label_t1[index] = label_t1[index - 1]
flag = -1
label_t2 = torch.zeros(label_q.shape)
# same consecutive relabeling for the query set
for index, xx in enumerate(label_q):
    if index == 0:
        flag = xx
        continue
    if xx != flag:
        flag = xx
        label_t2[index] = label_t2[index - 1] + 1
    else:
        label_t2[index] = label_t2[index - 1]
label = np.concatenate( (label_t1, label_t2), 0)
print(label)
#label = torch.cat( (torch.zeros(label_n.shape), torch.ones(label_q.shape)), 0)
print(data.shape, label.shape)
embeddings = TSNE(n_jobs=16).fit_transform(data)
fig = plt.figure(dpi=1200)
top = 10
vis_x = [] #embeddings[0:first20, 0]
vis_y = [] #embeddings[0:first20, 1]
label_t = []
for i in range(500):
if label_t1[i] == top:
break
if i==0 or label_t1[i] != label_t1[i-1]:
vis_x.append(embeddings[i, 0])
vis_y.append(embeddings[i, 1])
label_t.append(label_t1[i])
print(label_t)
plt.scatter(vis_x, vis_y, c=label_t, cmap=plt.cm.get_cmap("jet", top), marker='.')
start = len(label_t1)
vis_x = [] #embeddings[0:first20, 0]
vis_y = [] #embeddings[0:first20, 1]
label_t = []
for i in range(500):
if label_t2[i] == top:
break
if i==0 or label_t2[i] != label_t2[i-1]:
vis_x.append(embeddings[start+i, 0])
vis_y.append(embeddings[start+i, 1])
label_t.append(label_t2[i])
print(label_t)
plt.scatter(vis_x, vis_y, c=label_t, cmap=plt.cm.get_cmap("jet", top), marker='*')
plt.colorbar(ticks=range(top))
plt.clim(-0.5, top-0.5)
plt.show()  # no-op under the 'agg' backend selected above; the figure is saved to disk below
fig.savefig( 'tsne.jpg')
| [
"matplotlib.pyplot.clim",
"matplotlib.use",
"scipy.io.loadmat",
"torch.FloatTensor",
"matplotlib.pyplot.figure",
"numpy.concatenate",
"matplotlib.pyplot.cm.get_cmap",
"MulticoreTSNE.MulticoreTSNE",
"torch.zeros",
"torch.cat",
"matplotlib.pyplot.show"
] | [((107, 128), 'matplotlib.use', 'matplotlib.use', (['"""agg"""'], {}), "('agg')\n", (121, 128), False, 'import matplotlib\n'), ((264, 321), 'scipy.io.loadmat', 'scipy.io.loadmat', (["(query_path + '/query_result_normal.mat')"], {}), "(query_path + '/query_result_normal.mat')\n", (280, 321), False, 'import scipy\n'), ((330, 368), 'torch.FloatTensor', 'torch.FloatTensor', (["result_n['query_f']"], {}), "(result_n['query_f'])\n", (347, 368), False, 'import torch\n'), ((418, 468), 'scipy.io.loadmat', 'scipy.io.loadmat', (["(query_path + '/query_result.mat')"], {}), "(query_path + '/query_result.mat')\n", (434, 468), False, 'import scipy\n'), ((477, 515), 'torch.FloatTensor', 'torch.FloatTensor', (["result_q['query_f']"], {}), "(result_q['query_f'])\n", (494, 515), False, 'import torch\n'), ((561, 593), 'torch.cat', 'torch.cat', (['(query_n, query_q)', '(0)'], {}), '((query_n, query_q), 0)\n', (570, 593), False, 'import torch\n'), ((617, 643), 'torch.zeros', 'torch.zeros', (['label_n.shape'], {}), '(label_n.shape)\n', (628, 643), False, 'import torch\n'), ((895, 921), 'torch.zeros', 'torch.zeros', (['label_q.shape'], {}), '(label_q.shape)\n', (906, 921), False, 'import torch\n'), ((1159, 1198), 'numpy.concatenate', 'np.concatenate', (['(label_t1, label_t2)', '(0)'], {}), '((label_t1, label_t2), 0)\n', (1173, 1198), True, 'import numpy as np\n'), ((1382, 1402), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'dpi': '(1200)'}), '(dpi=1200)\n', (1392, 1402), True, 'from matplotlib import pyplot as plt\n'), ((2294, 2319), 'matplotlib.pyplot.clim', 'plt.clim', (['(-0.5)', '(top - 0.5)'], {}), '(-0.5, top - 0.5)\n', (2302, 2319), True, 'from matplotlib import pyplot as plt\n'), ((2318, 2328), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (2326, 2328), True, 'from matplotlib import pyplot as plt\n'), ((1339, 1354), 'MulticoreTSNE.MulticoreTSNE', 'TSNE', ([], {'n_jobs': '(16)'}), '(n_jobs=16)\n', (1343, 1354), True, 'from MulticoreTSNE import MulticoreTSNE as 
TSNE\n'), ((1780, 1807), 'matplotlib.pyplot.cm.get_cmap', 'plt.cm.get_cmap', (['"""jet"""', 'top'], {}), "('jet', top)\n", (1795, 1807), True, 'from matplotlib import pyplot as plt\n'), ((2221, 2248), 'matplotlib.pyplot.cm.get_cmap', 'plt.cm.get_cmap', (['"""jet"""', 'top'], {}), "('jet', top)\n", (2236, 2248), True, 'from matplotlib import pyplot as plt\n')] |
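The two near-identical loops in the t-SNE script above map a run-length sequence of raw identity labels to consecutive group indices. The same relabeling can be done in vectorized NumPy; the function name `compress_labels` is illustrative, not part of the original script:

```python
import numpy as np

def compress_labels(labels):
    """Map a sequence of raw IDs to consecutive group indices.

    Equivalent to the flag/label_t loops above: the output index
    increments every time the label value changes.
    """
    labels = np.asarray(labels)
    changes = labels[1:] != labels[:-1]  # True at the start of each new group
    return np.concatenate(([0], np.cumsum(changes)))

print(compress_labels([5, 5, 7, 7, 2, 2, 2]).tolist())  # [0, 0, 1, 1, 2, 2, 2]
```

Note that, like the original loops, this assumes equal labels arrive in contiguous runs; a repeated value after an interruption gets a new index.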
'''
Calculate PSNR and SSIM metrics
'''
import argparse
import glob
import os.path as path
import skimage.io
import skimage.measure
import numpy as np
def process(args):
fns = glob.glob(path.join(args.imgdir, '*gt.jpg'))
n = len(fns)
psnr_list = []
ssim_list = []
for i in range(n):
fn = fns[i] # ground truth
gt_img = skimage.io.imread(fn)
fn = fn[:-6] + 'out.jpg' # predicted
pred_img = skimage.io.imread(fn)
        psnr = skimage.measure.compare_psnr(gt_img, pred_img)  # renamed skimage.metrics.peak_signal_noise_ratio in skimage >= 0.18
        ssim = skimage.measure.compare_ssim(gt_img, pred_img, multichannel=True)  # renamed skimage.metrics.structural_similarity
psnr_list.append(psnr)
ssim_list.append(ssim)
log = '[%4d / %4d] %s PSNR: %8.3f SSIM: %8.3f' \
% (i, n, path.basename(fn), psnr, ssim)
print(log)
print('mean PSNR: %.3f' % np.mean(psnr_list))
print('mean SSIM: %.3f' % np.mean(ssim_list))
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Calculating metrics...')
parser.add_argument('--imgdir', type=str, required=True)
args = parser.parse_args()
process(args) | [
"numpy.mean",
"os.path.join",
"os.path.basename",
"argparse.ArgumentParser"
] | [((962, 1023), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""Calculating metrics..."""'}), "(description='Calculating metrics...')\n", (985, 1023), False, 'import argparse\n'), ((197, 230), 'os.path.join', 'path.join', (['args.imgdir', '"""*gt.jpg"""'], {}), "(args.imgdir, '*gt.jpg')\n", (206, 230), True, 'import os.path as path\n'), ((850, 868), 'numpy.mean', 'np.mean', (['psnr_list'], {}), '(psnr_list)\n', (857, 868), True, 'import numpy as np\n'), ((900, 918), 'numpy.mean', 'np.mean', (['ssim_list'], {}), '(ssim_list)\n', (907, 918), True, 'import numpy as np\n'), ((769, 786), 'os.path.basename', 'path.basename', (['fn'], {}), '(fn)\n', (782, 786), True, 'import os.path as path\n')] |
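The script above delegates the metric to skimage, but PSNR itself is a one-liner over the mean squared error. A minimal reference sketch (not the skimage implementation used by the original script):

```python
import numpy as np

def psnr(gt, pred, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.full((4, 4), 255, dtype=np.uint8)
pred = gt - 1  # every pixel off by one -> MSE = 1
print(round(psnr(gt, pred), 2))  # 48.13
```

With MSE = 1 and 8-bit images, PSNR reduces to `20 * log10(255) ≈ 48.13` dB, a handy sanity check for any PSNR implementation.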
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/7/2 7:25 PM
# @Author : Slade
# @File : model.py
# -*- coding:utf-8 -*-
import tensorflow as tf
from tensorflow.contrib import rnn
import tensorflow.contrib.layers as layers
"""wd_5_bigru_cnn
两部分使用不同的 embedding, 因为RNN与CNN结构完全不同,共用embedding会降低性能。
title 部分使用 bigru+attention;content 部分使用 textcnn; 两部分输出直接 concat。
"""
class Settings(object):
def __init__(self):
self.model_name = 'wd_5_bigru_cnn'
self.char_len = 75
self.word_len = 36
self.hidden_size = 256
self.n_layer = 1
self.filter_sizes = [2, 3, 4, 5, 7]
self.n_filter = 256
self.fc_hidden_size = 1024
self.n_class = 2
self.ckpt_path = '/Users/slade/Documents/YMM/Code/tf/model/ckpt/'
class BiGRU_CNN(object):
"""
title: inputs->bigru+attention->output_title
content: inputs->textcnn->output_content
concat[output_title, output_content] -> fc+bn+relu -> sigmoid_entropy.
"""
def __init__(self, W_embedding, settings):
self.model_name = settings.model_name
self.char_len = settings.char_len
self.word_len = settings.word_len
self.hidden_size = settings.hidden_size
self.n_layer = settings.n_layer
self.filter_sizes = settings.filter_sizes
self.n_filter = settings.n_filter
self.n_filter_total = self.n_filter * len(self.filter_sizes)
self.n_class = settings.n_class
self.fc_hidden_size = settings.fc_hidden_size
self._global_step = tf.Variable(0, trainable=False, name='Global_Step')
self.update_emas = list()
# placeholders
self._tst = tf.placeholder(tf.bool)
self._keep_prob = tf.placeholder(tf.float32, [])
self._batch_size = tf.placeholder(tf.int32, [])
with tf.name_scope('Inputs'):
self._X1_inputs = tf.placeholder(tf.int64, [None, self.char_len], name='X1_inputs')
self._X2_inputs = tf.placeholder(tf.int64, [None, self.word_len], name='X2_inputs')
self._y_inputs = tf.placeholder(tf.float32, [None, self.n_class], name='y_input')
with tf.variable_scope('embedding'):
self.char_embedding = tf.get_variable(name='char_embedding', shape=W_embedding.shape,
initializer=tf.constant_initializer(W_embedding), trainable=True)
self.word_embedding = tf.get_variable(name='word_embedding', shape=W_embedding.shape,
initializer=tf.constant_initializer(W_embedding), trainable=True)
self.embedding_size = W_embedding.shape[1]
with tf.variable_scope('bigru_char'):
output_char = self.bigru_inference(self._X1_inputs)
with tf.variable_scope('cnn_word'):
output_word = self.cnn_inference(self._X2_inputs, self.word_len)
with tf.variable_scope('fc-bn-layer'):
output = tf.concat([output_char, output_word], axis=1)
W_fc = self.weight_variable([self.hidden_size * 2 + self.n_filter_total, self.fc_hidden_size],
name='Weight_fc')
tf.summary.histogram('W_fc', W_fc)
h_fc = tf.matmul(output, W_fc, name='h_fc')
beta_fc = tf.Variable(tf.constant(0.1, tf.float32, shape=[self.fc_hidden_size], name="beta_fc"))
tf.summary.histogram('beta_fc', beta_fc)
fc_bn, update_ema_fc = self.batchnorm(h_fc, beta_fc, convolutional=False)
self.update_emas.append(update_ema_fc)
self.fc_bn_relu = tf.nn.relu(fc_bn, name="relu")
fc_bn_drop = tf.nn.dropout(self.fc_bn_relu, self.keep_prob)
with tf.variable_scope('out_layer'):
W_out = self.weight_variable([self.fc_hidden_size, self.n_class], name='Weight_out')
tf.summary.histogram('Weight_out', W_out)
b_out = self.bias_variable([self.n_class], name='bias_out')
tf.summary.histogram('bias_out', b_out)
            self._y_pred = tf.nn.xw_plus_b(fc_bn_drop, W_out, b_out, name='y_pred')  # per-class scores
with tf.name_scope('loss'):
self._loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=self._y_pred, labels=self._y_inputs))
tf.summary.scalar('loss', self._loss)
self.saver = tf.train.Saver(max_to_keep=1)
@property
def tst(self):
return self._tst
@property
def keep_prob(self):
return self._keep_prob
@property
def batch_size(self):
return self._batch_size
@property
def global_step(self):
return self._global_step
@property
def X1_inputs(self):
return self._X1_inputs
@property
def X2_inputs(self):
return self._X2_inputs
@property
def y_inputs(self):
return self._y_inputs
@property
def y_pred(self):
return self._y_pred
@property
def loss(self):
return self._loss
def weight_variable(self, shape, name):
"""Create a weight variable with appropriate initialization."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name=name)
def bias_variable(self, shape, name):
"""Create a bias variable with appropriate initialization."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial, name=name)
def batchnorm(self, Ylogits, offset, convolutional=False):
"""batchnormalization.
Args:
Ylogits: 1D向量或者是3D的卷积结果。
num_updates: 迭代的global_step
offset:表示beta,全局均值;在 RELU 激活中一般初始化为 0.1。
scale:表示lambda,全局方差;在 sigmoid 激活中需要,这 RELU 激活中作用不大。
m: 表示batch均值;v:表示batch方差。
bnepsilon:一个很小的浮点数,防止除以 0.
Returns:
Ybn: 和 Ylogits 的维度一样,就是经过 Batch Normalization 处理的结果。
update_moving_everages:更新mean和variance,主要是给最后的 test 使用。
"""
        exp_moving_avg = tf.train.ExponentialMovingAverage(0.999,
                                                          self._global_step)  # passing the step prevents averaging over non-existent iterations
bnepsilon = 1e-5
if convolutional:
mean, variance = tf.nn.moments(Ylogits, [0, 1, 2])
else:
mean, variance = tf.nn.moments(Ylogits, [0])
update_moving_everages = exp_moving_avg.apply([mean, variance])
        # at test time (self.tst) use the moving averages, otherwise the batch statistics
m = tf.cond(self.tst, lambda: exp_moving_avg.average(mean), lambda: mean)
v = tf.cond(self.tst, lambda: exp_moving_avg.average(variance), lambda: variance)
Ybn = tf.nn.batch_normalization(Ylogits, m, v, offset, None, bnepsilon)
return Ybn, update_moving_everages
def gru_cell(self):
with tf.name_scope('gru_cell'):
# activation:tanh
cell = rnn.GRUCell(self.hidden_size, reuse=tf.get_variable_scope().reuse)
return rnn.DropoutWrapper(cell, output_keep_prob=self.keep_prob)
def bi_gru(self, inputs):
"""build the bi-GRU network. 返回个所有层的隐含状态。"""
cells_fw = [self.gru_cell() for _ in range(self.n_layer)]
cells_bw = [self.gru_cell() for _ in range(self.n_layer)]
initial_states_fw = [cell_fw.zero_state(self.batch_size, tf.float32) for cell_fw in cells_fw]
initial_states_bw = [cell_bw.zero_state(self.batch_size, tf.float32) for cell_bw in cells_bw]
outputs, _, _ = rnn.stack_bidirectional_dynamic_rnn(cells_fw, cells_bw, inputs,
initial_states_fw=initial_states_fw,
initial_states_bw=initial_states_bw, dtype=tf.float32)
return outputs
def task_specific_attention(self, inputs, output_size,
initializer=layers.xavier_initializer(),
activation_fn=tf.tanh, scope=None):
"""
Performs task-specific attention reduction, using learned
attention context vector (constant within task of interest).
Args:
inputs: Tensor of shape [batch_size, units, input_size]
`input_size` must be static (known)
`units` axis will be attended over (reduced from outputs)
`batch_size` will be preserved
output_size: Size of outputs's inner (feature) dimension
Returns:
outputs: Tensor of shape [batch_size, output_dim].
"""
assert len(inputs.get_shape()) == 3 and inputs.get_shape()[-1].value is not None
with tf.variable_scope(scope or 'attention') as scope:
            # u_w, the learned attention context vector
attention_context_vector = tf.get_variable(name='attention_context_vector', shape=[output_size],
initializer=initializer, dtype=tf.float32)
            # fully connected layer turning h_i into u_i, shape [batch_size, units, input_size] -> [batch_size, units, output_size]
input_projection = layers.fully_connected(inputs, output_size, activation_fn=activation_fn, scope=scope)
            # output shape: [batch_size, units]
vector_attn = tf.reduce_sum(tf.multiply(input_projection, attention_context_vector), axis=2, keep_dims=True)
attention_weights = tf.nn.softmax(vector_attn, dim=1)
            tf.summary.histogram('attention_weights', attention_weights)
weighted_projection = tf.multiply(inputs, attention_weights)
outputs = tf.reduce_sum(weighted_projection, axis=1)
            return outputs  # shape: [batch_size, hidden_size * 2]
def bigru_inference(self, X_inputs):
inputs = tf.nn.embedding_lookup(self.char_embedding, X_inputs)
output_bigru = self.bi_gru(inputs)
output_att = self.task_specific_attention(output_bigru, self.hidden_size * 2)
return output_att
def cnn_inference(self, X_inputs, n_step):
"""TextCNN 模型。
Args:
X_inputs: tensor.shape=(batch_size, n_step)
Returns:
title_outputs: tensor.shape=(batch_size, self.n_filter_total)
"""
inputs = tf.nn.embedding_lookup(self.word_embedding, X_inputs)
inputs = tf.expand_dims(inputs, -1)
pooled_outputs = list()
for i, filter_size in enumerate(self.filter_sizes):
with tf.variable_scope("conv-maxpool-%s" % filter_size):
# Convolution Layer
filter_shape = [filter_size, self.embedding_size, 1, self.n_filter]
W_filter = self.weight_variable(shape=filter_shape, name='W_filter')
beta = self.bias_variable(shape=[self.n_filter], name='beta_filter')
tf.summary.histogram('beta', beta)
conv = tf.nn.conv2d(inputs, W_filter, strides=[1, 1, 1, 1], padding="VALID", name="conv")
conv_bn, update_ema = self.batchnorm(conv, beta, convolutional=True)
# Apply nonlinearity, batch norm scaling is not useful with relus
h = tf.nn.relu(conv_bn, name="relu")
# Maxpooling over the outputs
pooled = tf.nn.max_pool(h, ksize=[1, n_step - filter_size + 1, 1, 1],
strides=[1, 1, 1, 1], padding='VALID', name="pool")
pooled_outputs.append(pooled)
self.update_emas.append(update_ema)
h_pool = tf.concat(pooled_outputs, 3)
h_pool_flat = tf.reshape(h_pool, [-1, self.n_filter_total])
return h_pool_flat # shape = [batch_size, self.n_filter_total]
# test the model
def test():
import numpy as np
print('Begin testing...')
settings = Settings()
W_embedding = np.random.randn(50, 10)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
batch_size = 128
with tf.Session(config=config) as sess:
model = BiGRU_CNN(W_embedding, settings)
optimizer = tf.train.AdamOptimizer(0.001)
train_op = optimizer.minimize(model.loss)
update_op = tf.group(*model.update_emas)
sess.run(tf.global_variables_initializer())
fetch = [model.loss, model.y_pred, train_op, update_op]
loss_list = list()
for i in range(100):
X1_batch = np.zeros((batch_size, 75), dtype=float)
X2_batch = np.zeros((batch_size, 36), dtype=float)
y_batch = np.zeros((batch_size, 2), dtype=int)
_batch_size = len(y_batch)
feed_dict = {model.X1_inputs: X1_batch, model.X2_inputs: X2_batch, model.y_inputs: y_batch,
model.batch_size: _batch_size, model.tst: False, model.keep_prob: 0.5}
loss, y_pred, _, _ = sess.run(fetch, feed_dict=feed_dict)
loss_list.append(loss)
print(i, loss)
if __name__ == '__main__':
test()
| [
"tensorflow.get_variable",
"tensorflow.reduce_sum",
"tensorflow.nn.moments",
"tensorflow.multiply",
"tensorflow.contrib.rnn.stack_bidirectional_dynamic_rnn",
"tensorflow.get_variable_scope",
"tensorflow.group",
"tensorflow.nn.dropout",
"tensorflow.nn.softmax",
"tensorflow.nn.embedding_lookup",
"... | [((11835, 11858), 'numpy.random.randn', 'np.random.randn', (['(50)', '(10)'], {}), '(50, 10)\n', (11850, 11858), True, 'import numpy as np\n'), ((11872, 11888), 'tensorflow.ConfigProto', 'tf.ConfigProto', ([], {}), '()\n', (11886, 11888), True, 'import tensorflow as tf\n'), ((1555, 1606), 'tensorflow.Variable', 'tf.Variable', (['(0)'], {'trainable': '(False)', 'name': '"""Global_Step"""'}), "(0, trainable=False, name='Global_Step')\n", (1566, 1606), True, 'import tensorflow as tf\n'), ((1684, 1707), 'tensorflow.placeholder', 'tf.placeholder', (['tf.bool'], {}), '(tf.bool)\n', (1698, 1707), True, 'import tensorflow as tf\n'), ((1734, 1764), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32', '[]'], {}), '(tf.float32, [])\n', (1748, 1764), True, 'import tensorflow as tf\n'), ((1792, 1820), 'tensorflow.placeholder', 'tf.placeholder', (['tf.int32', '[]'], {}), '(tf.int32, [])\n', (1806, 1820), True, 'import tensorflow as tf\n'), ((4398, 4427), 'tensorflow.train.Saver', 'tf.train.Saver', ([], {'max_to_keep': '(1)'}), '(max_to_keep=1)\n', (4412, 4427), True, 'import tensorflow as tf\n'), ((5178, 5216), 'tensorflow.truncated_normal', 'tf.truncated_normal', (['shape'], {'stddev': '(0.1)'}), '(shape, stddev=0.1)\n', (5197, 5216), True, 'import tensorflow as tf\n'), ((5232, 5263), 'tensorflow.Variable', 'tf.Variable', (['initial'], {'name': 'name'}), '(initial, name=name)\n', (5243, 5263), True, 'import tensorflow as tf\n'), ((5395, 5424), 'tensorflow.constant', 'tf.constant', (['(0.1)'], {'shape': 'shape'}), '(0.1, shape=shape)\n', (5406, 5424), True, 'import tensorflow as tf\n'), ((5440, 5471), 'tensorflow.Variable', 'tf.Variable', (['initial'], {'name': 'name'}), '(initial, name=name)\n', (5451, 5471), True, 'import tensorflow as tf\n'), ((6039, 6098), 'tensorflow.train.ExponentialMovingAverage', 'tf.train.ExponentialMovingAverage', (['(0.999)', 'self._global_step'], {}), '(0.999, self._global_step)\n', (6072, 6098), True, 'import tensorflow as tf\n'), 
((6762, 6827), 'tensorflow.nn.batch_normalization', 'tf.nn.batch_normalization', (['Ylogits', 'm', 'v', 'offset', 'None', 'bnepsilon'], {}), '(Ylogits, m, v, offset, None, bnepsilon)\n', (6787, 6827), True, 'import tensorflow as tf\n'), ((7067, 7124), 'tensorflow.contrib.rnn.DropoutWrapper', 'rnn.DropoutWrapper', (['cell'], {'output_keep_prob': 'self.keep_prob'}), '(cell, output_keep_prob=self.keep_prob)\n', (7085, 7124), False, 'from tensorflow.contrib import rnn\n'), ((7569, 7733), 'tensorflow.contrib.rnn.stack_bidirectional_dynamic_rnn', 'rnn.stack_bidirectional_dynamic_rnn', (['cells_fw', 'cells_bw', 'inputs'], {'initial_states_fw': 'initial_states_fw', 'initial_states_bw': 'initial_states_bw', 'dtype': 'tf.float32'}), '(cells_fw, cells_bw, inputs,\n initial_states_fw=initial_states_fw, initial_states_bw=\n initial_states_bw, dtype=tf.float32)\n', (7604, 7733), False, 'from tensorflow.contrib import rnn\n'), ((7972, 7999), 'tensorflow.contrib.layers.xavier_initializer', 'layers.xavier_initializer', ([], {}), '()\n', (7997, 7999), True, 'import tensorflow.contrib.layers as layers\n'), ((9803, 9856), 'tensorflow.nn.embedding_lookup', 'tf.nn.embedding_lookup', (['self.char_embedding', 'X_inputs'], {}), '(self.char_embedding, X_inputs)\n', (9825, 9856), True, 'import tensorflow as tf\n'), ((10273, 10326), 'tensorflow.nn.embedding_lookup', 'tf.nn.embedding_lookup', (['self.word_embedding', 'X_inputs'], {}), '(self.word_embedding, X_inputs)\n', (10295, 10326), True, 'import tensorflow as tf\n'), ((10344, 10370), 'tensorflow.expand_dims', 'tf.expand_dims', (['inputs', '(-1)'], {}), '(inputs, -1)\n', (10358, 10370), True, 'import tensorflow as tf\n'), ((11538, 11566), 'tensorflow.concat', 'tf.concat', (['pooled_outputs', '(3)'], {}), '(pooled_outputs, 3)\n', (11547, 11566), True, 'import tensorflow as tf\n'), ((11589, 11634), 'tensorflow.reshape', 'tf.reshape', (['h_pool', '[-1, self.n_filter_total]'], {}), '(h_pool, [-1, self.n_filter_total])\n', (11599, 11634), True, 
'import tensorflow as tf\n'), ((11962, 11987), 'tensorflow.Session', 'tf.Session', ([], {'config': 'config'}), '(config=config)\n', (11972, 11987), True, 'import tensorflow as tf\n'), ((12066, 12095), 'tensorflow.train.AdamOptimizer', 'tf.train.AdamOptimizer', (['(0.001)'], {}), '(0.001)\n', (12088, 12095), True, 'import tensorflow as tf\n'), ((12166, 12194), 'tensorflow.group', 'tf.group', (['*model.update_emas'], {}), '(*model.update_emas)\n', (12174, 12194), True, 'import tensorflow as tf\n'), ((1835, 1858), 'tensorflow.name_scope', 'tf.name_scope', (['"""Inputs"""'], {}), "('Inputs')\n", (1848, 1858), True, 'import tensorflow as tf\n'), ((1890, 1955), 'tensorflow.placeholder', 'tf.placeholder', (['tf.int64', '[None, self.char_len]'], {'name': '"""X1_inputs"""'}), "(tf.int64, [None, self.char_len], name='X1_inputs')\n", (1904, 1955), True, 'import tensorflow as tf\n'), ((1986, 2051), 'tensorflow.placeholder', 'tf.placeholder', (['tf.int64', '[None, self.word_len]'], {'name': '"""X2_inputs"""'}), "(tf.int64, [None, self.word_len], name='X2_inputs')\n", (2000, 2051), True, 'import tensorflow as tf\n'), ((2081, 2145), 'tensorflow.placeholder', 'tf.placeholder', (['tf.float32', '[None, self.n_class]'], {'name': '"""y_input"""'}), "(tf.float32, [None, self.n_class], name='y_input')\n", (2095, 2145), True, 'import tensorflow as tf\n'), ((2160, 2190), 'tensorflow.variable_scope', 'tf.variable_scope', (['"""embedding"""'], {}), "('embedding')\n", (2177, 2190), True, 'import tensorflow as tf\n'), ((2685, 2716), 'tensorflow.variable_scope', 'tf.variable_scope', (['"""bigru_char"""'], {}), "('bigru_char')\n", (2702, 2716), True, 'import tensorflow as tf\n'), ((2796, 2825), 'tensorflow.variable_scope', 'tf.variable_scope', (['"""cnn_word"""'], {}), "('cnn_word')\n", (2813, 2825), True, 'import tensorflow as tf\n'), ((2918, 2950), 'tensorflow.variable_scope', 'tf.variable_scope', (['"""fc-bn-layer"""'], {}), "('fc-bn-layer')\n", (2935, 2950), True, 'import tensorflow as 
tf\n'), ((2973, 3018), 'tensorflow.concat', 'tf.concat', (['[output_char, output_word]'], {'axis': '(1)'}), '([output_char, output_word], axis=1)\n', (2982, 3018), True, 'import tensorflow as tf\n'), ((3197, 3231), 'tensorflow.summary.histogram', 'tf.summary.histogram', (['"""W_fc"""', 'W_fc'], {}), "('W_fc', W_fc)\n", (3217, 3231), True, 'import tensorflow as tf\n'), ((3252, 3288), 'tensorflow.matmul', 'tf.matmul', (['output', 'W_fc'], {'name': '"""h_fc"""'}), "(output, W_fc, name='h_fc')\n", (3261, 3288), True, 'import tensorflow as tf\n'), ((3410, 3450), 'tensorflow.summary.histogram', 'tf.summary.histogram', (['"""beta_fc"""', 'beta_fc'], {}), "('beta_fc', beta_fc)\n", (3430, 3450), True, 'import tensorflow as tf\n'), ((3619, 3649), 'tensorflow.nn.relu', 'tf.nn.relu', (['fc_bn'], {'name': '"""relu"""'}), "(fc_bn, name='relu')\n", (3629, 3649), True, 'import tensorflow as tf\n'), ((3675, 3721), 'tensorflow.nn.dropout', 'tf.nn.dropout', (['self.fc_bn_relu', 'self.keep_prob'], {}), '(self.fc_bn_relu, self.keep_prob)\n', (3688, 3721), True, 'import tensorflow as tf\n'), ((3736, 3766), 'tensorflow.variable_scope', 'tf.variable_scope', (['"""out_layer"""'], {}), "('out_layer')\n", (3753, 3766), True, 'import tensorflow as tf\n'), ((3877, 3918), 'tensorflow.summary.histogram', 'tf.summary.histogram', (['"""Weight_out"""', 'W_out'], {}), "('Weight_out', W_out)\n", (3897, 3918), True, 'import tensorflow as tf\n'), ((4004, 4043), 'tensorflow.summary.histogram', 'tf.summary.histogram', (['"""bias_out"""', 'b_out'], {}), "('bias_out', b_out)\n", (4024, 4043), True, 'import tensorflow as tf\n'), ((4072, 4128), 'tensorflow.nn.xw_plus_b', 'tf.nn.xw_plus_b', (['fc_bn_drop', 'W_out', 'b_out'], {'name': '"""y_pred"""'}), "(fc_bn_drop, W_out, b_out, name='y_pred')\n", (4087, 4128), True, 'import tensorflow as tf\n'), ((4161, 4182), 'tensorflow.name_scope', 'tf.name_scope', (['"""loss"""'], {}), "('loss')\n", (4174, 4182), True, 'import tensorflow as tf\n'), ((4338, 4375), 
'tensorflow.summary.scalar', 'tf.summary.scalar', (['"""loss"""', 'self._loss'], {}), "('loss', self._loss)\n", (4355, 4375), True, 'import tensorflow as tf\n'), ((6317, 6350), 'tensorflow.nn.moments', 'tf.nn.moments', (['Ylogits', '[0, 1, 2]'], {}), '(Ylogits, [0, 1, 2])\n', (6330, 6350), True, 'import tensorflow as tf\n'), ((6394, 6421), 'tensorflow.nn.moments', 'tf.nn.moments', (['Ylogits', '[0]'], {}), '(Ylogits, [0])\n', (6407, 6421), True, 'import tensorflow as tf\n'), ((6909, 6934), 'tensorflow.name_scope', 'tf.name_scope', (['"""gru_cell"""'], {}), "('gru_cell')\n", (6922, 6934), True, 'import tensorflow as tf\n'), ((8733, 8772), 'tensorflow.variable_scope', 'tf.variable_scope', (["(scope or 'attention')"], {}), "(scope or 'attention')\n", (8750, 8772), True, 'import tensorflow as tf\n'), ((8854, 8970), 'tensorflow.get_variable', 'tf.get_variable', ([], {'name': '"""attention_context_vector"""', 'shape': '[output_size]', 'initializer': 'initializer', 'dtype': 'tf.float32'}), "(name='attention_context_vector', shape=[output_size],\n initializer=initializer, dtype=tf.float32)\n", (8869, 8970), True, 'import tensorflow as tf\n'), ((9162, 9251), 'tensorflow.contrib.layers.fully_connected', 'layers.fully_connected', (['inputs', 'output_size'], {'activation_fn': 'activation_fn', 'scope': 'scope'}), '(inputs, output_size, activation_fn=activation_fn,\n scope=scope)\n', (9184, 9251), True, 'import tensorflow.contrib.layers as layers\n'), ((9438, 9471), 'tensorflow.nn.softmax', 'tf.nn.softmax', (['vector_attn'], {'dim': '(1)'}), '(vector_attn, dim=1)\n', (9451, 9471), True, 'import tensorflow as tf\n'), ((9484, 9544), 'tensorflow.summary.histogram', 'tf.summary.histogram', (['"""attention_weigths"""', 'attention_weights'], {}), "('attention_weigths', attention_weights)\n", (9504, 9544), True, 'import tensorflow as tf\n'), ((9579, 9617), 'tensorflow.multiply', 'tf.multiply', (['inputs', 'attention_weights'], {}), '(inputs, attention_weights)\n', (9590, 9617), True, 
'import tensorflow as tf\n'), ((9640, 9682), 'tensorflow.reduce_sum', 'tf.reduce_sum', (['weighted_projection'], {'axis': '(1)'}), '(weighted_projection, axis=1)\n', (9653, 9682), True, 'import tensorflow as tf\n'), ((12212, 12245), 'tensorflow.global_variables_initializer', 'tf.global_variables_initializer', ([], {}), '()\n', (12243, 12245), True, 'import tensorflow as tf\n'), ((12390, 12429), 'numpy.zeros', 'np.zeros', (['(batch_size, 75)'], {'dtype': 'float'}), '((batch_size, 75), dtype=float)\n', (12398, 12429), True, 'import numpy as np\n'), ((12453, 12492), 'numpy.zeros', 'np.zeros', (['(batch_size, 36)'], {'dtype': 'float'}), '((batch_size, 36), dtype=float)\n', (12461, 12492), True, 'import numpy as np\n'), ((12515, 12551), 'numpy.zeros', 'np.zeros', (['(batch_size, 2)'], {'dtype': 'int'}), '((batch_size, 2), dtype=int)\n', (12523, 12551), True, 'import numpy as np\n'), ((3323, 3396), 'tensorflow.constant', 'tf.constant', (['(0.1)', 'tf.float32'], {'shape': '[self.fc_hidden_size]', 'name': '"""beta_fc"""'}), "(0.1, tf.float32, shape=[self.fc_hidden_size], name='beta_fc')\n", (3334, 3396), True, 'import tensorflow as tf\n'), ((4241, 4329), 'tensorflow.nn.sigmoid_cross_entropy_with_logits', 'tf.nn.sigmoid_cross_entropy_with_logits', ([], {'logits': 'self._y_pred', 'labels': 'self._y_inputs'}), '(logits=self._y_pred, labels=self.\n _y_inputs)\n', (4280, 4329), True, 'import tensorflow as tf\n'), ((9325, 9380), 'tensorflow.multiply', 'tf.multiply', (['input_projection', 'attention_context_vector'], {}), '(input_projection, attention_context_vector)\n', (9336, 9380), True, 'import tensorflow as tf\n'), ((10480, 10530), 'tensorflow.variable_scope', 'tf.variable_scope', (["('conv-maxpool-%s' % filter_size)"], {}), "('conv-maxpool-%s' % filter_size)\n", (10497, 10530), True, 'import tensorflow as tf\n'), ((10838, 10872), 'tensorflow.summary.histogram', 'tf.summary.histogram', (['"""beta"""', 'beta'], {}), "('beta', beta)\n", (10858, 10872), True, 'import tensorflow 
as tf\n'), ((10896, 10983), 'tensorflow.nn.conv2d', 'tf.nn.conv2d', (['inputs', 'W_filter'], {'strides': '[1, 1, 1, 1]', 'padding': '"""VALID"""', 'name': '"""conv"""'}), "(inputs, W_filter, strides=[1, 1, 1, 1], padding='VALID', name=\n 'conv')\n", (10908, 10983), True, 'import tensorflow as tf\n'), ((11166, 11198), 'tensorflow.nn.relu', 'tf.nn.relu', (['conv_bn'], {'name': '"""relu"""'}), "(conv_bn, name='relu')\n", (11176, 11198), True, 'import tensorflow as tf\n'), ((11270, 11386), 'tensorflow.nn.max_pool', 'tf.nn.max_pool', (['h'], {'ksize': '[1, n_step - filter_size + 1, 1, 1]', 'strides': '[1, 1, 1, 1]', 'padding': '"""VALID"""', 'name': '"""pool"""'}), "(h, ksize=[1, n_step - filter_size + 1, 1, 1], strides=[1, 1,\n 1, 1], padding='VALID', name='pool')\n", (11284, 11386), True, 'import tensorflow as tf\n'), ((2352, 2388), 'tensorflow.constant_initializer', 'tf.constant_initializer', (['W_embedding'], {}), '(W_embedding)\n', (2375, 2388), True, 'import tensorflow as tf\n'), ((2566, 2602), 'tensorflow.constant_initializer', 'tf.constant_initializer', (['W_embedding'], {}), '(W_embedding)\n', (2589, 2602), True, 'import tensorflow as tf\n'), ((7021, 7044), 'tensorflow.get_variable_scope', 'tf.get_variable_scope', ([], {}), '()\n', (7042, 7044), True, 'import tensorflow as tf\n')] |
from coopihc.base.StateElement import StateElement
from coopihc.base.utils import (
StateNotContainedError,
StateNotContainedWarning,
)
from coopihc.base.elements import integer_set, box_space
import numpy
import pytest
import json
import copy
from tabulate import tabulate
def test_array_init_integer():
x = StateElement(2, integer_set(3))
assert hasattr(x, "space")
assert x.shape == ()
assert x == 2
def test_array_init_numeric():
x = StateElement(
numpy.zeros((2, 2)), box_space(numpy.ones((2, 2))), out_of_bounds_mode="error"
)
assert hasattr(x, "space")
assert x.shape == (2, 2)
assert (x == numpy.zeros((2, 2))).all()
def test_array_init():
test_array_init_integer()
test_array_init_numeric()
def test_array_init_error_integer():
x = StateElement(2, integer_set(3), out_of_bounds_mode="error")
with pytest.raises(StateNotContainedError):
x = StateElement(4, integer_set(3), out_of_bounds_mode="error")
with pytest.raises(StateNotContainedError):
x = StateElement(-3, integer_set(3), out_of_bounds_mode="error")
def test_array_init_error_numeric():
x = StateElement(
numpy.zeros((2, 2)), box_space(numpy.ones((2, 2))), out_of_bounds_mode="error"
)
with pytest.raises(StateNotContainedError):
x = StateElement(
2 * numpy.ones((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="error",
)
with pytest.raises(StateNotContainedError):
x = StateElement(
-2 * numpy.ones((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="error",
)
with pytest.raises(StateNotContainedError):
x = StateElement(
numpy.array([[0, 0], [-2, 0]]),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="error",
)
def test_array_init_error():
test_array_init_error_integer()
test_array_init_error_numeric()
def test_array_init_warning_integer():
x = StateElement(2, integer_set(3), out_of_bounds_mode="warning")
with pytest.warns(StateNotContainedWarning):
x = StateElement(4, integer_set(3), out_of_bounds_mode="warning")
with pytest.warns(StateNotContainedWarning):
x = StateElement(-3, integer_set(3), out_of_bounds_mode="warning")
def test_array_init_warning_numeric():
x = StateElement(
numpy.zeros((2, 2)), box_space(numpy.ones((2, 2))), out_of_bounds_mode="warning"
)
with pytest.warns(StateNotContainedWarning):
x = StateElement(
2 * numpy.ones((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="warning",
)
with pytest.warns(StateNotContainedWarning):
x = StateElement(
-2 * numpy.ones((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="warning",
)
with pytest.warns(StateNotContainedWarning):
x = StateElement(
numpy.array([[0, 0], [-2, 0]]),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="warning",
)
def test_array_init_warning():
test_array_init_warning_integer()
test_array_init_warning_numeric()
def test_array_init_clip_integer():
x = StateElement(2, integer_set(3), out_of_bounds_mode="clip")
assert x == numpy.array([2])
x = StateElement(4, integer_set(3), out_of_bounds_mode="clip")
assert x == numpy.array([2])
x = StateElement(-3, integer_set(3), out_of_bounds_mode="clip")
assert x == numpy.array([0])
def test_array_init_clip_numeric():
x = StateElement(
numpy.zeros((2, 2)), box_space(numpy.ones((2, 2))), out_of_bounds_mode="clip"
)
assert (x == numpy.zeros((2, 2))).all()
x = StateElement(
2 * numpy.ones((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="clip",
)
assert (x == numpy.ones((2, 2))).all()
x = StateElement(
-2 * numpy.ones((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="clip",
)
assert (x == -1.0 * numpy.ones((2, 2))).all()
def test_array_init_clip():
test_array_init_clip_integer()
test_array_init_clip_numeric()
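The clip assertions above are consistent with element-wise clamping to the space bounds. A minimal plain-Python sketch of the scalar analogue (illustrative only, not the coopihc implementation):

```python
# Illustrative sketch only, not coopihc code: the "clip" out_of_bounds_mode
# behaves like clamping each element into the closed interval [low, high].
def clip_scalar(value, low, high):
    # clamp value into [low, high]
    return max(low, min(high, value))

# mirrors the integer clip tests above: 4 and -3 clip to the set bounds 2 and 0
assert clip_scalar(4, 0, 2) == 2
assert clip_scalar(-3, 0, 2) == 0
assert clip_scalar(2, 0, 2) == 2
```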
def test_array_init_dtype_integer():
x = StateElement(2, integer_set(3), out_of_bounds_mode="warning")
assert x.dtype == numpy.int64
x = StateElement(2, integer_set(3, dtype=numpy.int16), out_of_bounds_mode="warning")
assert x.dtype == numpy.int16
def test_array_init_dtype_numeric():
x = StateElement(
numpy.zeros((2, 2)),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="warning",
)
assert x.dtype == numpy.float64
x = StateElement(
numpy.zeros((2, 2)),
box_space(numpy.ones((2, 2), dtype=numpy.float32)),
out_of_bounds_mode="warning",
)
assert x.dtype == numpy.float32
x = StateElement(
numpy.zeros((2, 2)),
box_space(numpy.ones((2, 2), dtype=numpy.int8)),
out_of_bounds_mode="warning",
)
assert x.dtype == numpy.int8
def test_array_init_dtype():
test_array_init_dtype_integer()
test_array_init_dtype_numeric()
# def test__array_ufunc__discrete():
# # Simple arithmetic
# global discr_space
# x = StateElement(2, discr_space, out_of_bounds_mode="error")
# assert x + numpy.array(1) == 3
# assert x + 1 == 3
# assert x - 1 == 1
# assert 3 - x == 1
# assert x - numpy.array(1) == 1
# assert numpy.array(3) - x == 1
# assert 1 + x == 3
# x += 1
# y = x - 1
# assert y.out_of_bounds_mode == "error"
# with pytest.raises(StateNotContainedError):
# 1 - x
# with pytest.raises(StateNotContainedError):
# x + 2
# with pytest.raises(StateNotContainedError):
# x += 5
# def test__array_ufunc__continuous():
# # some matrix operations
# global cont_space
# x = StateElement(numpy.zeros((2, 2)), cont_space, out_of_bounds_mode="error")
# assert (x + numpy.ones((2, 2)) == numpy.ones((2, 2))).all()
# assert (x + 1 == numpy.ones((2, 2))).all()
# assert (1 + x == numpy.ones((2, 2))).all()
# assert (x - 1 == -numpy.ones((2, 2))).all()
# assert (1 - x == numpy.ones((2, 2))).all()
# assert ((1 + x) * 0.5 == 0.5 * numpy.ones((2, 2))).all()
# assert (0.5 * (1 + x) @ numpy.ones((2, 2)) == numpy.ones((2, 2))).all()
# def test__array_ufunc__multidiscrete():
# global multidiscr_space
# x = StateElement([1, 1, 8], multidiscr_space, out_of_bounds_mode="error")
# assert (x + numpy.array([[1], [1], [-3]]) == numpy.array([[2], [2], [5]])).all()
# with pytest.raises(StateNotContainedError):
# x + numpy.array([[1], [1], [1]])
# def test__array_ufunc__comparisons():
# global discr_space
# x = StateElement(2, discr_space, out_of_bounds_mode="error")
# assert (x > 1) == True
# global cont_space
# x = StateElement(numpy.zeros((2, 2)), cont_space, out_of_bounds_mode="error")
# assert (x < 0).all() == False
# global multidiscr_space
# x = StateElement(
# numpy.array([[1], [1], [1]]), multidiscr_space, out_of_bounds_mode="error"
# )
# assert (x >= numpy.array([[1], [0], [1]])).all() == True
# assert (x >= numpy.array([[1], [5], [1]])).all() == False
# comp = x >= numpy.array([[1], [5], [1]])
# assert (comp == numpy.array([[True], [False], [True]])).all()
# def test__array_ufunc__trigonometry():
# global cont_space
# x = StateElement(numpy.zeros((2, 2)), cont_space, out_of_bounds_mode="error")
# assert (numpy.cos(x) == numpy.ones((2, 2))).all()
# def test__array_ufunc__floating():
# global cont_space
# x = StateElement(
# numpy.array([[0.2, 0.3], [1, 0.95]]), cont_space, out_of_bounds_mode="error"
# )
# assert numpy.isfinite(x).all() == True
# def test__array_ufunc__out_of_bounds_mode():
# x = StateElement(
# numpy.array([[0.2, 0.3], [1, 0.95]]), cont_space, out_of_bounds_mode="error"
# )
# y = StateElement(
# numpy.array([[-0.2, -0.3], [-1, -0.95]]),
# cont_space,
# out_of_bounds_mode="warning",
# )
# z = StateElement(
# numpy.array([[0.0, 0.0], [0.0, 0.0]]),
# cont_space,
# out_of_bounds_mode="silent",
# )
# u = x + y
# assert u.out_of_bounds_mode == "error"
# u = y + x
# assert u.out_of_bounds_mode == "error"
# u = z + x
# assert u.out_of_bounds_mode == "error"
# u = y + z
# assert u.out_of_bounds_mode == "warning"
# u = z + 0
# assert u.out_of_bounds_mode == "silent"
# def test__array_ufunc__():
# test__array_ufunc__discrete()
# test__array_ufunc__continuous()
# test__array_ufunc__multidiscrete()
# test__array_ufunc__comparisons()
# test__array_ufunc__trigonometry()
# test__array_ufunc__floating()
# test__array_ufunc__out_of_bounds_mode()
# def test_amax_nothandled():
# StateElement.HANDLED_FUNCTIONS = {}
# cont_space = autospace(
# [[-1, -1], [-1, -1]], [[1, 1], [1, 1]], dtype=numpy.float64
# ) # Here the
# x = StateElement(
# numpy.array([[0, 0.1], [-0.5, 0.8]], dtype=numpy.float64),
# cont_space,
# out_of_bounds_mode="warning",
# )
# # Without handled function
# with pytest.warns(NumpyFunctionNotHandledWarning):
# y = numpy.max(x)
# assert isinstance(y, numpy.ndarray)
# assert not isinstance(y, StateElement)
# assert y == 0.8
# assert not hasattr(y, "space")
# assert not hasattr(y, "out_of_bounds_mode")
# def test_amax_implements_decorator():
# cont_space = autospace([[-1, -1], [-1, -2]], [[1, 1], [1, 3]], dtype=numpy.float64)
# x = StateElement(
# numpy.array([[0, 0.1], [-0.5, 0.8]], dtype=numpy.float64),
# cont_space,
# out_of_bounds_mode="warning",
# )
# @StateElement.implements(numpy.amax)
# def amax(arr, **keywordargs):
# space, out_of_bounds_mode, kwargs = (
# arr.space,
# arr.out_of_bounds_mode,
# arr.kwargs,
# )
# obj = arr.view(numpy.ndarray)
# argmax = numpy.argmax(obj, **keywordargs)
# index = numpy.unravel_index(argmax, arr.space.shape)
# obj = numpy.amax(obj, **keywordargs)
# obj = numpy.asarray(obj).view(StateElement)
# if arr.space.space_type == "continuous":
# obj.space = autospace(
# numpy.atleast_2d(arr.space.low[index[0], index[1]]),
# numpy.atleast_2d(arr.space.high[index[0], index[1]]),
# )
# else:
# raise NotImplementedError
# obj.out_of_bounds_mode = arr.out_of_bounds_mode
# obj.kwargs = arr.kwargs
# return obj
# y = numpy.amax(x)
# assert isinstance(y, StateElement)
# assert StateElement.HANDLED_FUNCTIONS.get(numpy.amax) is not None
# assert x.HANDLED_FUNCTIONS.get(numpy.amax) is not None
# assert y.shape == ()
# assert y == 0.8
# assert y.space.space_type == "continuous"
# assert y.space.shape == (1, 1)
# assert y.space.low == numpy.array([[-2]])
# assert y.space.high == numpy.array([[3]])
# def test_array_function_simple():
# test_amax_nothandled()
# test_amax_implements_decorator()
# def test__array_function__():
# test_array_function_simple()
def test_equals_integer():
int_space = integer_set(3)
other_int_space = integer_set(4)
x = StateElement(numpy.array(1), int_space)
y = StateElement(numpy.array(1), other_int_space)
assert x.equals(y)
assert not x.equals(y, mode="hard")
z = StateElement(numpy.array(2), int_space)
assert not x.equals(z)
def test_equals_numeric():
numeric_space = box_space(numpy.ones((2, 2)))
other_numeric_space = box_space(
low=numpy.array([[-1, -1], [-1, -2]]), high=numpy.array([[1, 2], [1, 1]])
)
x = StateElement(numpy.zeros((2, 2)), numeric_space)
y = StateElement(numpy.zeros((2, 2)), other_numeric_space)
assert (x.equals(y)).all()
assert not (x.equals(y, mode="hard")).all()
z = StateElement(numpy.eye(2), numeric_space)
assert not (x.equals(z)).all()
def test_equals():
test_equals_integer()
test_equals_numeric()
def test__iter__integer():
x = StateElement([2], integer_set(3))
with pytest.raises(TypeError):
next(iter(x))
def test__iter__numeric():
x = StateElement(
numpy.array([[0.2, 0.3], [0.4, 0.5]]), box_space(numpy.ones((2, 2)))
)
for i, _x in enumerate(x):
if i == 0:
assert (
_x == StateElement(numpy.array([0.2, 0.3]), box_space(numpy.ones((2,))))
).all()
if i == 1:
assert (
_x == StateElement(numpy.array([0.4, 0.5]), box_space(numpy.ones((2,))))
).all()
for j, _xx in enumerate(_x):
print(i, j)
if i == 0 and j == 0:
assert _xx == StateElement(
numpy.array(0.2), box_space(numpy.float64(1))
)
elif i == 0 and j == 1:
assert _xx == StateElement(
numpy.array(0.3), box_space(numpy.float64(1))
)
elif i == 1 and j == 0:
assert _xx == StateElement(
numpy.array(0.4), box_space(numpy.float64(1))
)
elif i == 1 and j == 1:
assert _xx == StateElement(
numpy.array(0.5), box_space(numpy.float64(1))
)
def test__iter__():
test__iter__integer()
test__iter__numeric()
def test__repr__integer():
x = StateElement(2, integer_set(3))
assert x.__repr__() == "StateElement(array(2), CatSet([0 1 2]), 'warning')"
def test__repr__numeric():
x = StateElement(numpy.zeros((2, 2)), box_space(numpy.ones((2, 2))))
x.__repr__()
def test__repr__():
test__repr__integer()
test__repr__numeric()
def test_serialize_integer():
x = StateElement(numpy.array([2]), integer_set(3))
assert x.serialize() == {
"values": 2,
"space": {
"space": "CatSet",
"seed": None,
"array": [0, 1, 2],
"dtype": "dtype[int64]",
},
}
def test_serialize_numeric():
x = StateElement(numpy.zeros((2, 2)), box_space(numpy.ones((2, 2))))
assert x.serialize() == {
"values": [[0.0, 0.0], [0.0, 0.0]],
"space": {
"space": "Numeric",
"seed": None,
"low,high": [[[-1.0, -1.0], [-1.0, -1.0]], [[1.0, 1.0], [1.0, 1.0]]],
"shape": (2, 2),
"dtype": "dtype[float64]",
},
}
def test_serialize():
test_serialize_integer()
test_serialize_numeric()
def test__getitem__integer():
x = StateElement(1, integer_set(3))
assert x[..., {"space": True}] == x
assert x[..., {"space": True}] is x
assert x[...] == x
def test__getitem__numeric():
x = StateElement(
numpy.array([[0.0, 0.1], [0.2, 0.3]]), box_space(numpy.ones((2, 2)))
)
assert x[0, 0] == 0.0
assert x[0, 0, {"space": True}] == StateElement(0.0, box_space(numpy.float64(1)))
assert x[0, 1, {"space": True}] == StateElement(0.1, box_space(numpy.float64(1)))
assert x[1, 0, {"space": True}] == StateElement(0.2, box_space(numpy.float64(1)))
assert x[1, 1, {"space": True}] == StateElement(0.3, box_space(numpy.float64(1)))
assert (x[:, 1] == numpy.array([0.1, 0.3])).all()
assert (
x[:, 1, {"space": True}]
== StateElement(numpy.array([0.1, 0.3]), box_space(numpy.ones((2,))))
).all()
x = StateElement(numpy.array(0), box_space(low=-1, high=1))
from coopihc import State
s = State()
s["x"] = x
fd = {"x": ...}
    # smoke test: filtering a State with an Ellipsis filterdict should not raise
    a = s.filter(mode="stateelement", filterdict=fd)
def test__getitem__():
test__getitem__integer()
test__getitem__numeric()
def test__setitem__integer():
x = StateElement(1, integer_set(3))
x[...] = 2
assert x == StateElement(2, integer_set(3))
with pytest.warns(StateNotContainedWarning):
x[...] = 4
def test__setitem__numeric():
x = StateElement(
numpy.array([[0.0, 0.1], [0.2, 0.3]]), box_space(numpy.ones((2, 2)))
)
x[0, 0] = 0.5
x[0, 1] = 0.6
x[1, 0] = 0.7
x[1, 1] = 0.8
assert (
x
== StateElement(
numpy.array([[0.5, 0.6], [0.7, 0.8]]), box_space(numpy.ones((2, 2)))
)
).all()
with pytest.warns(StateNotContainedWarning):
x[0, 0] = 1.3
x = StateElement(
numpy.array([[0.0, 0.1], [0.2, 0.3]]),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="clip",
)
x[:, 0] = numpy.array([0.9, 0.9])
x[0, :] = numpy.array([1.2, 0.2])
x[1, 1] = 0.5
assert (
x
== StateElement(
numpy.array([[1, 0.2], [0.9, 0.5]]),
box_space(numpy.ones((2, 2))),
out_of_bounds_mode="clip",
)
).all()
def test__setitem__():
test__setitem__integer()
test__setitem__numeric()
def test_reset_integer():
x = StateElement(numpy.array([2]), integer_set(3), out_of_bounds_mode="error")
xset = {}
for i in range(1000):
x.reset()
_x = x.squeeze().tolist()
xset.update({str(_x): _x})
assert sorted(xset.values()) == [0, 1, 2]
# forced reset:
x.reset(value=0)
assert x == StateElement(0, integer_set(3), out_of_bounds_mode="error")
with pytest.raises(StateNotContainedError):
x.reset(value=5)
x.out_of_bounds_mode = "clip"
x.reset(value=5)
assert x == StateElement(
numpy.array([2]), integer_set(3), out_of_bounds_mode="clip"
)
def test_reset_numeric():
x = StateElement(numpy.ones((2, 2)), box_space(numpy.ones((2, 2))))
for i in range(1000):
x.reset()
x.reset(0.59 * numpy.ones((2, 2)))
assert (
x == StateElement(0.59 * numpy.ones((2, 2)), box_space(numpy.ones((2, 2))))
).all()
def test_reset():
test_reset_integer()
test_reset_numeric()
def test_tabulate_integer():
x = StateElement(1, integer_set(3))
x._tabulate()
tabulate(x._tabulate()[0])
def test_tabulate_numeric():
x = StateElement(numpy.zeros((3, 3)), box_space(numpy.ones((3, 3))))
x._tabulate()
tabulate(x._tabulate()[0])
def test_tabulate():
test_tabulate_integer()
test_tabulate_numeric()
def test_cast_discrete_to_cont():
discr_box_space = box_space(low=numpy.int8(1), high=numpy.int8(3))
cont_box_space = box_space(low=numpy.float64(-1.5), high=numpy.float64(1.5))
x = StateElement(1, discr_box_space)
ret_stateElem = x.cast(cont_box_space, mode="edges")
assert ret_stateElem == StateElement(-1.5, cont_box_space)
ret_stateElem = x.cast(cont_box_space, mode="center")
assert ret_stateElem == StateElement(-1, cont_box_space)
x = StateElement(2, discr_box_space)
ret_stateElem = x.cast(cont_box_space, mode="edges")
assert ret_stateElem == StateElement(0, cont_box_space)
ret_stateElem = x.cast(cont_box_space, mode="center")
assert ret_stateElem == StateElement(0, cont_box_space)
x = StateElement(3, discr_box_space)
ret_stateElem = x.cast(cont_box_space, mode="edges")
assert ret_stateElem == StateElement(1.5, cont_box_space)
ret_stateElem = x.cast(cont_box_space, mode="center")
assert ret_stateElem == StateElement(1, cont_box_space)
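The expected values above are consistent with two simple maps. A hedged plain-Python sketch of the assumed semantics (not the coopihc implementation):

```python
# Illustrative sketch only (assumed semantics, not coopihc code):
# "edges"  maps the discrete endpoints onto the continuous endpoints (affine map),
# "center" maps each discrete value onto the center of its equal-width bin.
def discrete_to_cont(n, n_low, n_high, low, high, mode):
    if mode == "edges":
        return low + (n - n_low) * (high - low) / (n_high - n_low)
    k = n_high - n_low + 1          # number of bins ("center" mode)
    width = (high - low) / k
    return low + (n - n_low + 0.5) * width

# mirrors the assertions above for {1, 2, 3} -> [-1.5, 1.5]
assert discrete_to_cont(1, 1, 3, -1.5, 1.5, "edges") == -1.5
assert discrete_to_cont(2, 1, 3, -1.5, 1.5, "edges") == 0.0
assert discrete_to_cont(1, 1, 3, -1.5, 1.5, "center") == -1.0
```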
def test_cast_cont_to_discrete():
cont_box_space = box_space(low=numpy.float64(-1.5), high=numpy.float64(1.5))
discr_box_space = box_space(low=numpy.int8(1), high=numpy.int8(3))
x = StateElement(0, cont_box_space)
ret_stateElem = x.cast(discr_box_space, mode="center")
assert ret_stateElem == StateElement(2, discr_box_space)
ret_stateElem = x.cast(discr_box_space, mode="edges")
assert ret_stateElem == StateElement(2, discr_box_space)
center = []
edges = []
for i in numpy.linspace(-1.5, 1.5, 100):
x = StateElement(i, cont_box_space)
ret_stateElem = x.cast(discr_box_space, mode="center")
if i < -0.75:
assert ret_stateElem == StateElement(1, discr_box_space)
        if -0.75 < i < 0.75:
assert ret_stateElem == StateElement(2, discr_box_space)
if i > 0.75:
assert ret_stateElem == StateElement(3, discr_box_space)
center.append(ret_stateElem.tolist())
ret_stateElem = x.cast(discr_box_space, mode="edges")
if i < -0.5:
assert ret_stateElem == StateElement(1, discr_box_space)
        if -0.5 < i < 0.5:
assert ret_stateElem == StateElement(2, discr_box_space)
if i > 0.5:
assert ret_stateElem == StateElement(3, discr_box_space)
edges.append(ret_stateElem.tolist())
# import matplotlib.pyplot as plt
# fig = plt.figure()
# ax = fig.add_subplot(111)
# ax.plot(
# numpy.linspace(-1.5, 1.5, 100), numpy.array(center) - 0.05, "+", label="center"
# )
# ax.plot(
# numpy.linspace(-1.5, 1.5, 100), numpy.array(edges) + 0.05, "o", label="edges"
# )
# ax.legend()
# plt.show()
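The thresholds asserted above (about ±0.5 for "edges", ±0.75 for "center") match two simple inverse maps. A hedged plain-Python sketch of the assumed semantics (not the coopihc implementation):

```python
import math

# Illustrative sketch only (assumed semantics, not coopihc code):
# "edges"  -> equal-width binning of the continuous range,
# "center" -> rounding of the affine map onto the discrete range.
def cont_to_discrete(v, low, high, n_low, n_high, mode):
    if mode == "edges":
        k = n_high - n_low + 1
        width = (high - low) / k
        return min(n_high, n_low + math.floor((v - low) / width))
    return round(n_low + (v - low) * (n_high - n_low) / (high - low))

# mirrors the assertions above for [-1.5, 1.5] -> {1, 2, 3}
assert cont_to_discrete(0.0, -1.5, 1.5, 1, 3, "center") == 2
assert cont_to_discrete(0.8, -1.5, 1.5, 1, 3, "center") == 3
assert cont_to_discrete(0.6, -1.5, 1.5, 1, 3, "edges") == 3
assert cont_to_discrete(-0.6, -1.5, 1.5, 1, 3, "edges") == 1
```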
def test_cast_cont_to_cont():
cont_space = box_space(numpy.full((2, 2), 1), dtype=numpy.float32)
other_cont_space = box_space(
low=numpy.full((2, 2), 0), high=numpy.full((2, 2), 4), dtype=numpy.float32
)
for i in numpy.linspace(-1, 1, 100):
x = StateElement(numpy.full((2, 2), i), cont_space)
ret_stateElement = x.cast(other_cont_space)
assert (ret_stateElement == (x + 1) * 2).all()
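The `(x + 1) * 2` expectation above is the affine rescale from [-1, 1] onto [0, 4]. A hedged plain-Python sketch of the general continuous-to-continuous cast (assumed semantics, not the coopihc implementation):

```python
# Illustrative sketch only (assumed semantics, not coopihc code):
# affine rescale of v from [old_low, old_high] onto [new_low, new_high]
def affine_cast(v, old_low, old_high, new_low, new_high):
    return new_low + (v - old_low) * (new_high - new_low) / (old_high - old_low)

# for [-1, 1] -> [0, 4] this reduces to exactly (v + 1) * 2, as asserted above
assert affine_cast(0.0, -1, 1, 0, 4) == 2.0
assert affine_cast(1.0, -1, 1, 0, 4) == 4.0
assert affine_cast(-1.0, -1, 1, 0, 4) == 0.0
```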
def test_cast_discr_to_discr():
discr_box_space = box_space(low=numpy.int8(1), high=numpy.int8(4))
other_discr_box_space = box_space(low=numpy.int8(11), high=numpy.int8(14))
for i in [1, 2, 3, 4]:
x = StateElement(i, discr_box_space)
ret_stateElement = x.cast(other_discr_box_space)
assert ret_stateElement == x + 10
def test_cast():
test_cast_discrete_to_cont()
test_cast_cont_to_discrete()
test_cast_cont_to_cont()
test_cast_discr_to_discr()
if __name__ == "__main__":
test_array_init()
test_array_init_error()
test_array_init_warning()
test_array_init_clip()
test_array_init_dtype()
# test__array_ufunc__() # kept here just in case
# test__array_function__() # kept here just in case
test_equals()
test__iter__()
test__repr__()
test_serialize()
test__setitem__()
test__getitem__()
test_reset()
test_tabulate()
test_cast()
| [
"numpy.int8",
"coopihc.State",
"numpy.eye",
"numpy.ones",
"coopihc.base.StateElement.StateElement",
"numpy.float64",
"coopihc.base.elements.integer_set",
"numpy.array",
"numpy.linspace",
"numpy.zeros",
"pytest.raises",
"coopihc.base.elements.box_space",
"numpy.full",
"pytest.warns"
] | [((11464, 11478), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (11475, 11478), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((11501, 11515), 'coopihc.base.elements.integer_set', 'integer_set', (['(4)'], {}), '(4)\n', (11512, 11515), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((15826, 15833), 'coopihc.State', 'State', ([], {}), '()\n', (15831, 15833), False, 'from coopihc import State\n'), ((16805, 16828), 'numpy.array', 'numpy.array', (['[0.9, 0.9]'], {}), '([0.9, 0.9])\n', (16816, 16828), False, 'import numpy\n'), ((16843, 16866), 'numpy.array', 'numpy.array', (['[1.2, 0.2]'], {}), '([1.2, 0.2])\n', (16854, 16866), False, 'import numpy\n'), ((18714, 18746), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(1)', 'discr_box_space'], {}), '(1, discr_box_space)\n', (18726, 18746), False, 'from coopihc.base.StateElement import StateElement\n'), ((18995, 19027), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(2)', 'discr_box_space'], {}), '(2, discr_box_space)\n', (19007, 19027), False, 'from coopihc.base.StateElement import StateElement\n'), ((19272, 19304), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(3)', 'discr_box_space'], {}), '(3, discr_box_space)\n', (19284, 19304), False, 'from coopihc.base.StateElement import StateElement\n'), ((19738, 19769), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(0)', 'cont_box_space'], {}), '(0, cont_box_space)\n', (19750, 19769), False, 'from coopihc.base.StateElement import StateElement\n'), ((20054, 20084), 'numpy.linspace', 'numpy.linspace', (['(-1.5)', '(1.5)', '(100)'], {}), '(-1.5, 1.5, 100)\n', (20068, 20084), False, 'import numpy\n'), ((21510, 21536), 'numpy.linspace', 'numpy.linspace', (['(-1)', '(1)', '(100)'], {}), '(-1, 1, 100)\n', (21524, 21536), False, 'import numpy\n'), ((340, 354), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (351, 354), 
False, 'from coopihc.base.elements import integer_set, box_space\n'), ((493, 512), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (504, 512), False, 'import numpy\n'), ((830, 844), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (841, 844), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((883, 920), 'pytest.raises', 'pytest.raises', (['StateNotContainedError'], {}), '(StateNotContainedError)\n', (896, 920), False, 'import pytest\n'), ((1003, 1040), 'pytest.raises', 'pytest.raises', (['StateNotContainedError'], {}), '(StateNotContainedError)\n', (1016, 1040), False, 'import pytest\n'), ((1184, 1203), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (1195, 1203), False, 'import numpy\n'), ((1278, 1315), 'pytest.raises', 'pytest.raises', (['StateNotContainedError'], {}), '(StateNotContainedError)\n', (1291, 1315), False, 'import pytest\n'), ((1481, 1518), 'pytest.raises', 'pytest.raises', (['StateNotContainedError'], {}), '(StateNotContainedError)\n', (1494, 1518), False, 'import pytest\n'), ((1685, 1722), 'pytest.raises', 'pytest.raises', (['StateNotContainedError'], {}), '(StateNotContainedError)\n', (1698, 1722), False, 'import pytest\n'), ((2055, 2069), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (2066, 2069), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((2110, 2148), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (2122, 2148), False, 'import pytest\n'), ((2233, 2271), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (2245, 2271), False, 'import pytest\n'), ((2419, 2438), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (2430, 2438), False, 'import numpy\n'), ((2515, 2553), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (2527, 2553), False, 'import pytest\n'), 
((2721, 2759), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (2733, 2759), False, 'import pytest\n'), ((2928, 2966), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (2940, 2966), False, 'import pytest\n'), ((3304, 3318), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (3315, 3318), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((3363, 3379), 'numpy.array', 'numpy.array', (['[2]'], {}), '([2])\n', (3374, 3379), False, 'import numpy\n'), ((3404, 3418), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (3415, 3418), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((3463, 3479), 'numpy.array', 'numpy.array', (['[2]'], {}), '([2])\n', (3474, 3479), False, 'import numpy\n'), ((3505, 3519), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (3516, 3519), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((3564, 3580), 'numpy.array', 'numpy.array', (['[0]'], {}), '([0])\n', (3575, 3580), False, 'import numpy\n'), ((3649, 3668), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (3660, 3668), False, 'import numpy\n'), ((4302, 4316), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (4313, 4316), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((4406, 4439), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {'dtype': 'numpy.int16'}), '(3, dtype=numpy.int16)\n', (4417, 4439), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((4574, 4593), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (4585, 4593), False, 'import numpy\n'), ((4744, 4763), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (4755, 4763), False, 'import numpy\n'), ((4935, 4954), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (4946, 
4954), False, 'import numpy\n'), ((11537, 11551), 'numpy.array', 'numpy.array', (['(1)'], {}), '(1)\n', (11548, 11551), False, 'import numpy\n'), ((11585, 11599), 'numpy.array', 'numpy.array', (['(1)'], {}), '(1)\n', (11596, 11599), False, 'import numpy\n'), ((11702, 11716), 'numpy.array', 'numpy.array', (['(2)'], {}), '(2)\n', (11713, 11716), False, 'import numpy\n'), ((11815, 11833), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (11825, 11833), False, 'import numpy\n'), ((11982, 12001), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (11993, 12001), False, 'import numpy\n'), ((12039, 12058), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (12050, 12058), False, 'import numpy\n'), ((12182, 12194), 'numpy.eye', 'numpy.eye', (['(2)'], {}), '(2)\n', (12191, 12194), False, 'import numpy\n'), ((12374, 12388), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (12385, 12388), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((12399, 12423), 'pytest.raises', 'pytest.raises', (['TypeError'], {}), '(TypeError)\n', (12412, 12423), False, 'import pytest\n'), ((12506, 12543), 'numpy.array', 'numpy.array', (['[[0.2, 0.3], [0.4, 0.5]]'], {}), '([[0.2, 0.3], [0.4, 0.5]])\n', (12517, 12543), False, 'import numpy\n'), ((13752, 13766), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (13763, 13766), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((13898, 13917), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (13909, 13917), False, 'import numpy\n'), ((14094, 14110), 'numpy.array', 'numpy.array', (['[2]'], {}), '([2])\n', (14105, 14110), False, 'import numpy\n'), ((14112, 14126), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (14123, 14126), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((14394, 14413), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (14405, 14413), 
False, 'import numpy\n'), ((14902, 14916), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (14913, 14916), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((15083, 15120), 'numpy.array', 'numpy.array', (['[[0.0, 0.1], [0.2, 0.3]]'], {}), '([[0.0, 0.1], [0.2, 0.3]])\n', (15094, 15120), False, 'import numpy\n'), ((15744, 15758), 'numpy.array', 'numpy.array', (['(0)'], {}), '(0)\n', (15755, 15758), False, 'import numpy\n'), ((15760, 15785), 'coopihc.base.elements.box_space', 'box_space', ([], {'low': '(-1)', 'high': '(1)'}), '(low=-1, high=1)\n', (15769, 15785), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((16061, 16075), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (16072, 16075), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((16149, 16187), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (16161, 16187), False, 'import pytest\n'), ((16270, 16307), 'numpy.array', 'numpy.array', (['[[0.0, 0.1], [0.2, 0.3]]'], {}), '([[0.0, 0.1], [0.2, 0.3]])\n', (16281, 16307), False, 'import numpy\n'), ((16579, 16617), 'pytest.warns', 'pytest.warns', (['StateNotContainedWarning'], {}), '(StateNotContainedWarning)\n', (16591, 16617), False, 'import pytest\n'), ((16672, 16709), 'numpy.array', 'numpy.array', (['[[0.0, 0.1], [0.2, 0.3]]'], {}), '([[0.0, 0.1], [0.2, 0.3]])\n', (16683, 16709), False, 'import numpy\n'), ((17219, 17235), 'numpy.array', 'numpy.array', (['[2]'], {}), '([2])\n', (17230, 17235), False, 'import numpy\n'), ((17237, 17251), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (17248, 17251), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((17580, 17617), 'pytest.raises', 'pytest.raises', (['StateNotContainedError'], {}), '(StateNotContainedError)\n', (17593, 17617), False, 'import pytest\n'), ((17852, 17870), 'numpy.ones', 'numpy.ones', 
(['(2, 2)'], {}), '((2, 2))\n', (17862, 17870), False, 'import numpy\n'), ((18220, 18234), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (18231, 18234), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((18337, 18356), 'numpy.zeros', 'numpy.zeros', (['(3, 3)'], {}), '((3, 3))\n', (18348, 18356), False, 'import numpy\n'), ((18832, 18866), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(-1.5)', 'cont_box_space'], {}), '(-1.5, cont_box_space)\n', (18844, 18866), False, 'from coopihc.base.StateElement import StateElement\n'), ((18953, 18985), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(-1)', 'cont_box_space'], {}), '(-1, cont_box_space)\n', (18965, 18985), False, 'from coopihc.base.StateElement import StateElement\n'), ((19113, 19144), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(0)', 'cont_box_space'], {}), '(0, cont_box_space)\n', (19125, 19144), False, 'from coopihc.base.StateElement import StateElement\n'), ((19231, 19262), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(0)', 'cont_box_space'], {}), '(0, cont_box_space)\n', (19243, 19262), False, 'from coopihc.base.StateElement import StateElement\n'), ((19390, 19423), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(1.5)', 'cont_box_space'], {}), '(1.5, cont_box_space)\n', (19402, 19423), False, 'from coopihc.base.StateElement import StateElement\n'), ((19510, 19541), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(1)', 'cont_box_space'], {}), '(1, cont_box_space)\n', (19522, 19541), False, 'from coopihc.base.StateElement import StateElement\n'), ((19857, 19889), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(2)', 'discr_box_space'], {}), '(2, discr_box_space)\n', (19869, 19889), False, 'from coopihc.base.StateElement import StateElement\n'), ((19976, 20008), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(2)', 'discr_box_space'], {}), '(2, 
discr_box_space)\n', (19988, 20008), False, 'from coopihc.base.StateElement import StateElement\n'), ((20098, 20129), 'coopihc.base.StateElement.StateElement', 'StateElement', (['i', 'cont_box_space'], {}), '(i, cont_box_space)\n', (20110, 20129), False, 'from coopihc.base.StateElement import StateElement\n'), ((21329, 21350), 'numpy.full', 'numpy.full', (['(2, 2)', '(1)'], {}), '((2, 2), 1)\n', (21339, 21350), False, 'import numpy\n'), ((21929, 21961), 'coopihc.base.StateElement.StateElement', 'StateElement', (['i', 'discr_box_space'], {}), '(i, discr_box_space)\n', (21941, 21961), False, 'from coopihc.base.StateElement import StateElement\n'), ((524, 542), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (534, 542), False, 'import numpy\n'), ((950, 964), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (961, 964), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((1071, 1085), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (1082, 1085), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((1215, 1233), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (1225, 1233), False, 'import numpy\n'), ((1762, 1792), 'numpy.array', 'numpy.array', (['[[0, 0], [-2, 0]]'], {}), '([[0, 0], [-2, 0]])\n', (1773, 1792), False, 'import numpy\n'), ((2178, 2192), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (2189, 2192), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((2302, 2316), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (2313, 2316), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((2450, 2468), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (2460, 2468), False, 'import numpy\n'), ((3006, 3036), 'numpy.array', 'numpy.array', (['[[0, 0], [-2, 0]]'], {}), '([[0, 0], [-2, 0]])\n', (3017, 3036), False, 'import numpy\n'), ((3680, 3698), 'numpy.ones', 
'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (3690, 3698), False, 'import numpy\n'), ((3811, 3829), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (3821, 3829), False, 'import numpy\n'), ((3849, 3867), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (3859, 3867), False, 'import numpy\n'), ((3989, 4007), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (3999, 4007), False, 'import numpy\n'), ((4027, 4045), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (4037, 4045), False, 'import numpy\n'), ((4613, 4631), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (4623, 4631), False, 'import numpy\n'), ((4783, 4822), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {'dtype': 'numpy.float32'}), '((2, 2), dtype=numpy.float32)\n', (4793, 4822), False, 'import numpy\n'), ((4974, 5010), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {'dtype': 'numpy.int8'}), '((2, 2), dtype=numpy.int8)\n', (4984, 5010), False, 'import numpy\n'), ((11884, 11917), 'numpy.array', 'numpy.array', (['[[-1, -1], [-1, -2]]'], {}), '([[-1, -1], [-1, -2]])\n', (11895, 11917), False, 'import numpy\n'), ((11924, 11953), 'numpy.array', 'numpy.array', (['[[1, 2], [1, 1]]'], {}), '([[1, 2], [1, 1]])\n', (11935, 11953), False, 'import numpy\n'), ((12555, 12573), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (12565, 12573), False, 'import numpy\n'), ((13929, 13947), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (13939, 13947), False, 'import numpy\n'), ((14425, 14443), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (14435, 14443), False, 'import numpy\n'), ((15132, 15150), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (15142, 15150), False, 'import numpy\n'), ((16124, 16138), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (16135, 16138), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((16319, 16337), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', 
(16329, 16337), False, 'import numpy\n'), ((16729, 16747), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (16739, 16747), False, 'import numpy\n'), ((17527, 17541), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (17538, 17541), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((17737, 17753), 'numpy.array', 'numpy.array', (['[2]'], {}), '([2])\n', (17748, 17753), False, 'import numpy\n'), ((17755, 17769), 'coopihc.base.elements.integer_set', 'integer_set', (['(3)'], {}), '(3)\n', (17766, 17769), False, 'from coopihc.base.elements import integer_set, box_space\n'), ((17882, 17900), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (17892, 17900), False, 'import numpy\n'), ((17966, 17984), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (17976, 17984), False, 'import numpy\n'), ((18368, 18386), 'numpy.ones', 'numpy.ones', (['(3, 3)'], {}), '((3, 3))\n', (18378, 18386), False, 'import numpy\n'), ((18589, 18602), 'numpy.int8', 'numpy.int8', (['(1)'], {}), '(1)\n', (18599, 18602), False, 'import numpy\n'), ((18609, 18622), 'numpy.int8', 'numpy.int8', (['(3)'], {}), '(3)\n', (18619, 18622), False, 'import numpy\n'), ((18659, 18678), 'numpy.float64', 'numpy.float64', (['(-1.5)'], {}), '(-1.5)\n', (18672, 18678), False, 'import numpy\n'), ((18685, 18703), 'numpy.float64', 'numpy.float64', (['(1.5)'], {}), '(1.5)\n', (18698, 18703), False, 'import numpy\n'), ((19613, 19632), 'numpy.float64', 'numpy.float64', (['(-1.5)'], {}), '(-1.5)\n', (19626, 19632), False, 'import numpy\n'), ((19639, 19657), 'numpy.float64', 'numpy.float64', (['(1.5)'], {}), '(1.5)\n', (19652, 19657), False, 'import numpy\n'), ((19695, 19708), 'numpy.int8', 'numpy.int8', (['(1)'], {}), '(1)\n', (19705, 19708), False, 'import numpy\n'), ((19715, 19728), 'numpy.int8', 'numpy.int8', (['(3)'], {}), '(3)\n', (19725, 19728), False, 'import numpy\n'), ((21419, 21440), 'numpy.full', 'numpy.full', (['(2, 2)', '(0)'], {}), '((2, 
2), 0)\n', (21429, 21440), False, 'import numpy\n'), ((21447, 21468), 'numpy.full', 'numpy.full', (['(2, 2)', '(4)'], {}), '((2, 2), 4)\n', (21457, 21468), False, 'import numpy\n'), ((21563, 21584), 'numpy.full', 'numpy.full', (['(2, 2)', 'i'], {}), '((2, 2), i)\n', (21573, 21584), False, 'import numpy\n'), ((21775, 21788), 'numpy.int8', 'numpy.int8', (['(1)'], {}), '(1)\n', (21785, 21788), False, 'import numpy\n'), ((21795, 21808), 'numpy.int8', 'numpy.int8', (['(4)'], {}), '(4)\n', (21805, 21808), False, 'import numpy\n'), ((21852, 21866), 'numpy.int8', 'numpy.int8', (['(11)'], {}), '(11)\n', (21862, 21866), False, 'import numpy\n'), ((21873, 21887), 'numpy.int8', 'numpy.int8', (['(14)'], {}), '(14)\n', (21883, 21887), False, 'import numpy\n'), ((655, 674), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (666, 674), False, 'import numpy\n'), ((1359, 1377), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (1369, 1377), False, 'import numpy\n'), ((1401, 1419), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (1411, 1419), False, 'import numpy\n'), ((1563, 1581), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (1573, 1581), False, 'import numpy\n'), ((1605, 1623), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (1615, 1623), False, 'import numpy\n'), ((1816, 1834), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (1826, 1834), False, 'import numpy\n'), ((2597, 2615), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (2607, 2615), False, 'import numpy\n'), ((2639, 2657), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (2649, 2657), False, 'import numpy\n'), ((2804, 2822), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (2814, 2822), False, 'import numpy\n'), ((2846, 2864), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (2856, 2864), False, 'import numpy\n'), ((3060, 3078), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (3070, 3078), 
False, 'import numpy\n'), ((3750, 3769), 'numpy.zeros', 'numpy.zeros', (['(2, 2)'], {}), '((2, 2))\n', (3761, 3769), False, 'import numpy\n'), ((3928, 3946), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (3938, 3946), False, 'import numpy\n'), ((15251, 15267), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (15264, 15267), False, 'import numpy\n'), ((15338, 15354), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (15351, 15354), False, 'import numpy\n'), ((15425, 15441), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (15438, 15441), False, 'import numpy\n'), ((15512, 15528), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (15525, 15528), False, 'import numpy\n'), ((15555, 15578), 'numpy.array', 'numpy.array', (['[0.1, 0.3]'], {}), '([0.1, 0.3])\n', (15566, 15578), False, 'import numpy\n'), ((20251, 20283), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(1)', 'discr_box_space'], {}), '(1, discr_box_space)\n', (20263, 20283), False, 'from coopihc.base.StateElement import StateElement\n'), ((20355, 20387), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(2)', 'discr_box_space'], {}), '(2, discr_box_space)\n', (20367, 20387), False, 'from coopihc.base.StateElement import StateElement\n'), ((20445, 20477), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(3)', 'discr_box_space'], {}), '(3, discr_box_space)\n', (20457, 20477), False, 'from coopihc.base.StateElement import StateElement\n'), ((20644, 20676), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(1)', 'discr_box_space'], {}), '(1, discr_box_space)\n', (20656, 20676), False, 'from coopihc.base.StateElement import StateElement\n'), ((20746, 20778), 'coopihc.base.StateElement.StateElement', 'StateElement', (['(2)', 'discr_box_space'], {}), '(2, discr_box_space)\n', (20758, 20778), False, 'from coopihc.base.StateElement import StateElement\n'), ((20835, 20867), 'coopihc.base.StateElement.StateElement', 
'StateElement', (['(3)', 'discr_box_space'], {}), '(3, discr_box_space)\n', (20847, 20867), False, 'from coopihc.base.StateElement import StateElement\n'), ((4113, 4131), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (4123, 4131), False, 'import numpy\n'), ((15656, 15679), 'numpy.array', 'numpy.array', (['[0.1, 0.3]'], {}), '([0.1, 0.3])\n', (15667, 15679), False, 'import numpy\n'), ((16478, 16515), 'numpy.array', 'numpy.array', (['[[0.5, 0.6], [0.7, 0.8]]'], {}), '([[0.5, 0.6], [0.7, 0.8]])\n', (16489, 16515), False, 'import numpy\n'), ((16946, 16981), 'numpy.array', 'numpy.array', (['[[1, 0.2], [0.9, 0.5]]'], {}), '([[1, 0.2], [0.9, 0.5]])\n', (16957, 16981), False, 'import numpy\n'), ((13069, 13085), 'numpy.array', 'numpy.array', (['(0.2)'], {}), '(0.2)\n', (13080, 13085), False, 'import numpy\n'), ((15691, 15707), 'numpy.ones', 'numpy.ones', (['(2,)'], {}), '((2,))\n', (15701, 15707), False, 'import numpy\n'), ((16527, 16545), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (16537, 16545), False, 'import numpy\n'), ((17005, 17023), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (17015, 17023), False, 'import numpy\n'), ((18032, 18050), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (18042, 18050), False, 'import numpy\n'), ((18062, 18080), 'numpy.ones', 'numpy.ones', (['(2, 2)'], {}), '((2, 2))\n', (18072, 18080), False, 'import numpy\n'), ((12687, 12710), 'numpy.array', 'numpy.array', (['[0.2, 0.3]'], {}), '([0.2, 0.3])\n', (12698, 12710), False, 'import numpy\n'), ((12836, 12859), 'numpy.array', 'numpy.array', (['[0.4, 0.5]'], {}), '([0.4, 0.5])\n', (12847, 12859), False, 'import numpy\n'), ((13097, 13113), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (13110, 13113), False, 'import numpy\n'), ((13233, 13249), 'numpy.array', 'numpy.array', (['(0.3)'], {}), '(0.3)\n', (13244, 13249), False, 'import numpy\n'), ((12722, 12738), 'numpy.ones', 'numpy.ones', (['(2,)'], {}), '((2,))\n', (12732, 12738), 
False, 'import numpy\n'), ((12871, 12887), 'numpy.ones', 'numpy.ones', (['(2,)'], {}), '((2,))\n', (12881, 12887), False, 'import numpy\n'), ((13261, 13277), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (13274, 13277), False, 'import numpy\n'), ((13397, 13413), 'numpy.array', 'numpy.array', (['(0.4)'], {}), '(0.4)\n', (13408, 13413), False, 'import numpy\n'), ((13425, 13441), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (13438, 13441), False, 'import numpy\n'), ((13561, 13577), 'numpy.array', 'numpy.array', (['(0.5)'], {}), '(0.5)\n', (13572, 13577), False, 'import numpy\n'), ((13589, 13605), 'numpy.float64', 'numpy.float64', (['(1)'], {}), '(1)\n', (13602, 13605), False, 'import numpy\n')] |
import wave
import pyaudio
import pylab
import numpy as np
import matplotlib.pyplot as plt
def get_framerate(wavfile):
    '''
    Given a wav file path, return the frame rate.
    '''
    wf = wave.open(wavfile, "rb")  # open the wav file
    p = pyaudio.PyAudio()  # create a PyAudio object
    params = wf.getparams()  # fetch the stream parameters
    nchannels, sampwidth, framerate, nframes = params[:4]
    return framerate/2
def get_nframes(wavfile):
    '''
    Given a wav file path, return the number of frames.
    '''
    wf = wave.open(wavfile, "rb")  # open the wav file
    p = pyaudio.PyAudio()  # create a PyAudio object
    params = wf.getparams()  # fetch the stream parameters
    nchannels, sampwidth, framerate, nframes = params[:4]
    return nframes/2
def get_wavedata(wavfile):
    '''
    Given a wav file path, return a 2 x N array holding the left and
    right channel samples.
    '''
    ##### 1. Read the wave file
    wf = wave.open(wavfile, "rb")  # open the wav file
    p = pyaudio.PyAudio()  # create a PyAudio object
    params = wf.getparams()  # fetch the stream parameters
    nchannels, sampwidth, framerate, nframes = params[:4]
    stream = p.open(format=p.get_format_from_width(sampwidth),
                    channels=nchannels,
                    rate=framerate,
                    output=True)  # create an output stream
    # read all the frames into str_data as raw bytes
    str_data = wf.readframes(nframes)
    wf.close()  # close the wave file
    ##### 2. Convert the waveform bytes into an array
    # 1-D array of interleaved samples (left and right channels alternate)
    wave_data = np.frombuffer(str_data, dtype=np.short)
    # reshape into an N x 2 array of frames
    wave_data.shape = -1, 2
    # transpose into the target 2 x N array
    wave_data = wave_data.T
    return wave_data
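The reshape-then-transpose step above can be checked on a tiny hand-made interleaved buffer (a minimal sketch; the sample values are made up for illustration):

```python
import numpy as np

# Interleaved stereo samples: L0, R0, L1, R1, L2, R2
interleaved = np.array([10, 20, 11, 21, 12, 22], dtype=np.short)

# Same steps as get_wavedata: reshape into (N, 2) frames, then
# transpose so row 0 is the left channel and row 1 the right.
frames = interleaved.reshape(-1, 2)
channels = frames.T

left, right = channels[0], channels[1]
print(left.tolist())   # left-channel samples
print(right.tolist())  # right-channel samples
```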
def plot_timedomain(wavfile):
    '''
    Plot the time-domain waveform.
    '''
    wave_data = get_wavedata(wavfile)  # get the processed wave data
    framerate = get_framerate(wavfile)  # get the frame rate
    nframes = get_nframes(wavfile)  # get the number of frames
    ##### 3. Build the time axis
    time = np.arange(0, nframes) * (1.0 / framerate)
    ##### 4. Plot
    pylab.figure(figsize=(40, 10))
    pylab.subplot(211)
    pylab.plot(time, wave_data[0])  # first subplot: left channel
    pylab.subplot(212)
    pylab.plot(time, wave_data[1], c="g")  # second subplot: right channel
    pylab.xlabel("time (seconds)")
    pylab.show()
    return None
def plot_freqdomain(start, fft_size, f, wavfile):
    '''
    Plot the frequency-domain spectrum.
    '''
    waveData = get_wavedata(wavfile)  # get the wave data
    framerate = get_framerate(wavfile)  # get the frame rate
    #### 1. Apply the FFT to the selected segment and take the magnitudes
    # rfft keeps one half of the symmetric spectrum, yielding a complex
    # array of roughly fft_size/2 values
    fft_y1 = np.fft.rfft(waveData[0][start:start+fft_size-1])/fft_size  # left channel
    fft_y2 = np.fft.rfft(waveData[1][start:start+fft_size-1])/fft_size  # right channel
    #### 2. Compute the frequency axis
    # from 0 Hz up to half the sampling rate (the Nyquist frequency)
    freqs = np.linspace(0, framerate//2, fft_size//2)
    #### 3. Plot
    plt.figure(figsize=(20, 10))
    pylab.subplot(211)
    #plt.xlim(f-20, f+20)
    plt.plot(freqs, np.abs(fft_y1))
    pylab.xlabel("frequence(Hz)")
    pylab.subplot(212)
    plt.xlim(f-20, f+20)
    plt.plot(freqs, np.abs(fft_y2), c='g')
    pylab.xlabel("frequence(Hz)")
    plt.show()
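The core of plot_freqdomain — locating a tone in the rfft magnitude spectrum — can be sketched without any wav file (a minimal sketch using a synthetic 600 Hz sine; the 8000 samples/s rate matches the frame_per_sec value used below):

```python
import numpy as np

rate = 8000          # samples per second
fft_size = 2000      # analysis window length
t = np.arange(fft_size) / rate
signal = np.sin(2 * np.pi * 600 * t)   # pure 600 Hz tone

# Same recipe as plot_freqdomain: rfft magnitudes plus a matching
# frequency axis from 0 Hz up to the Nyquist frequency.
spectrum = np.abs(np.fft.rfft(signal)) / fft_size
freqs = np.fft.rfftfreq(fft_size, d=1.0 / rate)

peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)
```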
file="./data/new600.wav"
#plot_timedomain(wavfile=file)
frame_per_sec=8000
ed=75.1
bg=ed-.25
f=600
plot_freqdomain(int(bg*frame_per_sec),int((ed-bg)*frame_per_sec),f,file) | [
"numpy.abs",
"wave.open",
"matplotlib.pyplot.show",
"pylab.subplot",
"pylab.plot",
"pylab.xlabel",
"pylab.figure",
"numpy.fft.rfft",
"numpy.linspace",
"matplotlib.pyplot.figure",
"numpy.frombuffer",
"matplotlib.pyplot.xlim",
"pyaudio.PyAudio",
"numpy.arange",
"pylab.show"
] | [((209, 233), 'wave.open', 'wave.open', (['wavfile', '"""rb"""'], {}), "(wavfile, 'rb')\n", (218, 233), False, 'import wave\n'), ((248, 265), 'pyaudio.PyAudio', 'pyaudio.PyAudio', ([], {}), '()\n', (263, 265), False, 'import pyaudio\n'), ((471, 495), 'wave.open', 'wave.open', (['wavfile', '"""rb"""'], {}), "(wavfile, 'rb')\n", (480, 495), False, 'import wave\n'), ((510, 527), 'pyaudio.PyAudio', 'pyaudio.PyAudio', ([], {}), '()\n', (525, 527), False, 'import pyaudio\n'), ((770, 794), 'wave.open', 'wave.open', (['wavfile', '"""rb"""'], {}), "(wavfile, 'rb')\n", (779, 794), False, 'import wave\n'), ((809, 826), 'pyaudio.PyAudio', 'pyaudio.PyAudio', ([], {}), '()\n', (824, 826), False, 'import pyaudio\n'), ((1285, 1324), 'numpy.frombuffer', 'np.frombuffer', (['str_data'], {'dtype': 'np.short'}), '(str_data, dtype=np.short)\n', (1298, 1324), True, 'import numpy as np\n'), ((1737, 1767), 'pylab.figure', 'pylab.figure', ([], {'figsize': '(40, 10)'}), '(figsize=(40, 10))\n', (1749, 1767), False, 'import pylab\n'), ((1772, 1790), 'pylab.subplot', 'pylab.subplot', (['(211)'], {}), '(211)\n', (1785, 1790), False, 'import pylab\n'), ((1796, 1826), 'pylab.plot', 'pylab.plot', (['time', 'wave_data[0]'], {}), '(time, wave_data[0])\n', (1806, 1826), False, 'import pylab\n'), ((1842, 1860), 'pylab.subplot', 'pylab.subplot', (['(212)'], {}), '(212)\n', (1855, 1860), False, 'import pylab\n'), ((1866, 1903), 'pylab.plot', 'pylab.plot', (['time', 'wave_data[1]'], {'c': '"""g"""'}), "(time, wave_data[1], c='g')\n", (1876, 1903), False, 'import pylab\n'), ((1919, 1949), 'pylab.xlabel', 'pylab.xlabel', (['"""time (seconds)"""'], {}), "('time (seconds)')\n", (1931, 1949), False, 'import pylab\n'), ((1955, 1967), 'pylab.show', 'pylab.show', ([], {}), '()\n', (1965, 1967), False, 'import pylab\n'), ((2461, 2506), 'numpy.linspace', 'np.linspace', (['(0)', '(framerate // 2)', '(fft_size // 2)'], {}), '(0, framerate // 2, fft_size // 2)\n', (2472, 2506), True, 'import numpy as np\n'), 
((2525, 2553), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(20, 10)'}), '(figsize=(20, 10))\n', (2535, 2553), True, 'import matplotlib.pyplot as plt\n'), ((2558, 2576), 'pylab.subplot', 'pylab.subplot', (['(211)'], {}), '(211)\n', (2571, 2576), False, 'import pylab\n'), ((2646, 2675), 'pylab.xlabel', 'pylab.xlabel', (['"""frequence(Hz)"""'], {}), "('frequence(Hz)')\n", (2658, 2675), False, 'import pylab\n'), ((2681, 2699), 'pylab.subplot', 'pylab.subplot', (['(212)'], {}), '(212)\n', (2694, 2699), False, 'import pylab\n'), ((2705, 2729), 'matplotlib.pyplot.xlim', 'plt.xlim', (['(f - 20)', '(f + 20)'], {}), '(f - 20, f + 20)\n', (2713, 2729), True, 'import matplotlib.pyplot as plt\n'), ((2774, 2803), 'pylab.xlabel', 'pylab.xlabel', (['"""frequence(Hz)"""'], {}), "('frequence(Hz)')\n", (2786, 2803), False, 'import pylab\n'), ((2809, 2819), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (2817, 2819), True, 'import matplotlib.pyplot as plt\n'), ((1678, 1699), 'numpy.arange', 'np.arange', (['(0)', 'nframes'], {}), '(0, nframes)\n', (1687, 1699), True, 'import numpy as np\n'), ((2254, 2306), 'numpy.fft.rfft', 'np.fft.rfft', (['waveData[0][start:start + fft_size - 1]'], {}), '(waveData[0][start:start + fft_size - 1])\n', (2265, 2306), True, 'import numpy as np\n'), ((2331, 2383), 'numpy.fft.rfft', 'np.fft.rfft', (['waveData[1][start:start + fft_size - 1]'], {}), '(waveData[1][start:start + fft_size - 1])\n', (2342, 2383), True, 'import numpy as np\n'), ((2625, 2639), 'numpy.abs', 'np.abs', (['fft_y1'], {}), '(fft_y1)\n', (2631, 2639), True, 'import numpy as np\n'), ((2747, 2761), 'numpy.abs', 'np.abs', (['fft_y2'], {}), '(fft_y2)\n', (2753, 2761), True, 'import numpy as np\n')] |
# -*- coding: utf-8 -*-
'''
Split long sequences into short sub-sequences and
feed the sub-sequences to an LSTM.
'''
from __future__ import division, print_function, absolute_import
import tflearn
import tflearn.data_utils as du
import numpy as np
import ReadData
import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib
from sklearn.model_selection import StratifiedKFold
import MyEval
from tflearn.layers.recurrent import bidirectional_rnn, BasicLSTMCell
from tflearn.layers.core import dropout
import dill
import pickle
def pruning(nd_vec):
    '''
    Zero out entries whose magnitude is below a threshold and
    return the array.
    '''
    thresh = 1e-1
    nd_vec[np.abs(nd_vec) < thresh] = 0
    return nd_vec
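The threshold logic above is easy to verify on a deterministic input (a minimal sketch; the function is re-stated locally so the snippet is self-contained):

```python
import numpy as np

def prune(nd_vec, thresh=1e-1):
    # zero every entry whose magnitude is below the threshold,
    # mirroring pruning() above
    nd_vec[np.abs(nd_vec) < thresh] = 0
    return nd_vec

vec = np.array([0.05, -0.02, 0.5, -0.3, 0.09])
pruned = prune(vec)
print(pruned.tolist())
```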
def read_data():
long_pid, long_data, long_label = ReadData.ReadData( '../../data1/long.csv' )
all_pid = np.array(long_pid)
all_feature = np.array(long_data)
all_label = np.array(long_label)
print('read data done')
data_out, label_out, pid_out = slide_and_cut(all_feature, all_label, all_pid)
pid_map = {}
for i in range(len(all_pid)):
pid_map[all_pid[i]] = i
return data_out, label_out, pid_out, pid_map
def slide_and_cut(tmp_data, tmp_label, tmp_pid):
out_pid = []
out_data = []
out_label = []
window_size = 6000
cnter = {'N': 0, 'O': 0, 'A': 0, '~': 0}
for i in range(len(tmp_data)):
        #print(tmp_label[i])
        if tmp_label[i] in cnter:
            cnter[tmp_label[i]] += len(tmp_data[i])
stride_N = 500
stride_O = int(stride_N // (cnter['N'] / cnter['O']))
stride_A = int(stride_N // (cnter['N'] / cnter['A']))
stride_P = int(0.85 * stride_N // (cnter['N'] / cnter['~']))
stride = {'N': stride_N, 'O': stride_O, 'A': stride_A, '~': stride_P}
for i in range(len(tmp_data)):
tmp_stride = stride[tmp_label[i]]
tmp_ts = tmp_data[i]
for j in range(0, len(tmp_ts)-window_size, tmp_stride):
out_pid.append(tmp_pid[i])
out_data.append(tmp_ts[j:j+window_size])
out_label.append(tmp_label[i])
out_label = ReadData.Label2OneHot(out_label)
out_data = np.expand_dims(np.array(out_data, dtype=np.float32), axis=2)
out_label = np.array(out_label, dtype=np.float32)
out_pid = np.array(out_pid, dtype=np.string_)
return out_data, out_label, out_pid
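The windowing loop inside slide_and_cut can be exercised on a toy sequence (a minimal sketch; the window and stride are shrunk from 6000/500 to keep the output readable):

```python
def cut_windows(ts, window_size, stride):
    # Same loop bounds as slide_and_cut: slide a fixed-size window
    # over the sequence, stepping forward by `stride` each time.
    return [ts[j:j + window_size]
            for j in range(0, len(ts) - window_size, stride)]

ts = list(range(10))
wins = cut_windows(ts, window_size=4, stride=2)
print(len(wins))   # number of windows produced
print(wins[0])     # first window
print(wins[-1])    # last window
```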
def get_model():
n_dim = 6000
n_split = 300
tf.reset_default_graph()
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
### split
#X = X.reshape([-1, n_split, 1])
#testX = testX.reshape([-1, n_split, 1])
# Building Residual Network
net = tflearn.input_data(shape=[None, n_dim, 1])
print("input", net.get_shape())
############ reshape for sub_seq
net = tf.reshape(net, [-1, n_split, 1])
print("reshaped input", net.get_shape())
net = tflearn.conv_1d(net, 64, 16, 2)
#net = tflearn.conv_1d(net, 64, 16, 2, regularizer='L2', weight_decay=0.0001)
print("cov1", net.get_shape())
net = tflearn.batch_normalization(net)
print("bn1", net.get_shape())
net = tflearn.activation(net, 'relu')
print("relu1", net.get_shape())
# Residual blocks
'''net = tflearn.residual_bottleneck(net, 2, 16, 64, downsample_strides = 2, downsample=True, is_first_block = True)
print("resn2", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 128, downsample_strides = 2, downsample=True)
print("resn4", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 256, downsample_strides = 2, downsample=True)
print("resn6", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 512, downsample_strides = 2, downsample=True)
print("resn8", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 1024, downsample_strides = 2, downsample=True)
print("resn10", net.get_shape())'''
net = tflearn.residual_bottleneck(net, 2, 16, 64, downsample_strides = 2, downsample=True, is_first_block = True)
print("resn2", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 64, downsample_strides = 2, downsample=True)
print("resn4", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 128, downsample_strides = 2, downsample=True)
print("resn6", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 128, downsample_strides = 2, downsample=True)
print("resn8", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 256, downsample_strides = 2, downsample=True)
print("resn10", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 256, downsample_strides = 2, downsample=True)
print("resn12", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 512, downsample_strides = 2, downsample=True)
print("resn14", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 512, downsample_strides = 2, downsample=True)
print("resn16", net.get_shape())
'''net = tflearn.residual_bottleneck(net, 2, 16, 1024, downsample_strides = 2, downsample=True)
print("resn18", net.get_shape())
net = tflearn.residual_bottleneck(net, 2, 16, 1024, downsample_strides = 2, downsample=True)
print("resn20", net.get_shape())'''
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
#net = tflearn.global_avg_pool(net)
# LSTM
print("before LSTM, before reshape", net.get_shape())
############ reshape for sub_seq
net = tf.reshape(net, [-1, n_dim//n_split, 512])
print("before LSTM", net.get_shape())
net = bidirectional_rnn(net, BasicLSTMCell(256), BasicLSTMCell(256))
print("after LSTM", net.get_shape())
#net = tflearn.layers.recurrent.lstm(net, n_units=512)
#print("after LSTM", net.get_shape())
net = dropout(net, 0.5)
# Regression
feature_layer = tflearn.fully_connected(net, 32, activation='sigmoid')
net = tflearn.dropout(feature_layer, 0.5)
net = tflearn.fully_connected(net, 4, activation='softmax')
print("dense", net.get_shape())
    net = tflearn.regression(net, optimizer='adam',  # or 'momentum'
                             loss='categorical_crossentropy')
#,learning_rate=0.1)
## save model
### load
model = tflearn.DNN(net)
run_id = 'resnet_6000_500_10_5_v1'
model.load('../model/resNet/'+run_id)
all_names = tflearn.variables.get_all_variables()
print(all_names[0])
ttt = model.get_weights(all_names[0])
print(type(ttt))
print(ttt)
# tflearn.variables.get_value(all_names[0], xxx)
return all_names
def read_data_from_pkl():
with open('../../data1/expanded_three_part_window_6000_stride_500_5.pkl', 'rb') as fin:
train_data = pickle.load(fin)
train_label = pickle.load(fin)
val_data = pickle.load(fin)
val_label = pickle.load(fin)
test_data = pickle.load(fin)
test_label = pickle.load(fin)
test_pid= pickle.load(fin)
return train_data, train_label, val_data, val_label, test_data, test_label, test_pid
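read_data_from_pkl relies on pickle's ability to store several objects back to back in one file, read out by repeated pickle.load calls in the same order they were dumped. A minimal sketch of that round-trip (io.BytesIO stands in for the real .pkl file):

```python
import io
import pickle

buf = io.BytesIO()
# dump several objects sequentially, as the preprocessing script would
for obj in ([1, 2, 3], {'label': 'N'}, 'pid-42'):
    pickle.dump(obj, buf)

buf.seek(0)
# load them back in the same order, one pickle.load per dump
data = pickle.load(buf)
label = pickle.load(buf)
pid = pickle.load(buf)
print(data, label, pid)
```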
if __name__ == '__main__':
'''all_data, all_label, all_pid, pid_map = read_data()
out_feature = get_resnet_feature(all_data, all_label, all_pid, pid_map)
print('out_feature shape: ', out_feature.shape)
with open('../data/feat_resnet.pkl', 'wb') as fout:
dill.dump(out_feature, fout)
'''
'''
#-----------------------------------------------test--------------------------------------------
'''
# all_names = get_model()
vec = np.random.normal(size=[2,3,4])
print(vec)
vec = pruning(vec)
print(vec)
| [
"tflearn.variables.get_all_variables",
"tflearn.DNN",
"numpy.array",
"ReadData.ReadData",
"tflearn.regression",
"tflearn.layers.core.dropout",
"tflearn.layers.recurrent.BasicLSTMCell",
"tflearn.residual_bottleneck",
"tensorflow.ConfigProto",
"tflearn.fully_connected",
"tflearn.input_data",
"nu... | [((811, 852), 'ReadData.ReadData', 'ReadData.ReadData', (['"""../../data1/long.csv"""'], {}), "('../../data1/long.csv')\n", (828, 852), False, 'import ReadData\n'), ((869, 887), 'numpy.array', 'np.array', (['long_pid'], {}), '(long_pid)\n', (877, 887), True, 'import numpy as np\n'), ((906, 925), 'numpy.array', 'np.array', (['long_data'], {}), '(long_data)\n', (914, 925), True, 'import numpy as np\n'), ((942, 962), 'numpy.array', 'np.array', (['long_label'], {}), '(long_label)\n', (950, 962), True, 'import numpy as np\n'), ((2181, 2213), 'ReadData.Label2OneHot', 'ReadData.Label2OneHot', (['out_label'], {}), '(out_label)\n', (2202, 2213), False, 'import ReadData\n'), ((2306, 2343), 'numpy.array', 'np.array', (['out_label'], {'dtype': 'np.float32'}), '(out_label, dtype=np.float32)\n', (2314, 2343), True, 'import numpy as np\n'), ((2358, 2393), 'numpy.array', 'np.array', (['out_pid'], {'dtype': 'np.string_'}), '(out_pid, dtype=np.string_)\n', (2366, 2393), True, 'import numpy as np\n'), ((2498, 2522), 'tensorflow.reset_default_graph', 'tf.reset_default_graph', ([], {}), '()\n', (2520, 2522), True, 'import tensorflow as tf\n'), ((2735, 2777), 'tflearn.input_data', 'tflearn.input_data', ([], {'shape': '[None, n_dim, 1]'}), '(shape=[None, n_dim, 1])\n', (2753, 2777), False, 'import tflearn\n'), ((2862, 2895), 'tensorflow.reshape', 'tf.reshape', (['net', '[-1, n_split, 1]'], {}), '(net, [-1, n_split, 1])\n', (2872, 2895), True, 'import tensorflow as tf\n'), ((2951, 2982), 'tflearn.conv_1d', 'tflearn.conv_1d', (['net', '(64)', '(16)', '(2)'], {}), '(net, 64, 16, 2)\n', (2966, 2982), False, 'import tflearn\n'), ((3110, 3142), 'tflearn.batch_normalization', 'tflearn.batch_normalization', (['net'], {}), '(net)\n', (3137, 3142), False, 'import tflearn\n'), ((3187, 3218), 'tflearn.activation', 'tflearn.activation', (['net', '"""relu"""'], {}), "(net, 'relu')\n", (3205, 3218), False, 'import tflearn\n'), ((3979, 4086), 'tflearn.residual_bottleneck', 
'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(64)'], {'downsample_strides': '(2)', 'downsample': '(True)', 'is_first_block': '(True)'}), '(net, 2, 16, 64, downsample_strides=2,\n downsample=True, is_first_block=True)\n', (4006, 4086), False, 'import tflearn\n'), ((4133, 4219), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(64)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 64, downsample_strides=2,\n downsample=True)\n', (4160, 4219), False, 'import tflearn\n'), ((4264, 4351), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(128)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 128, downsample_strides=2,\n downsample=True)\n', (4291, 4351), False, 'import tflearn\n'), ((4396, 4483), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(128)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 128, downsample_strides=2,\n downsample=True)\n', (4423, 4483), False, 'import tflearn\n'), ((4528, 4615), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(256)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 256, downsample_strides=2,\n downsample=True)\n', (4555, 4615), False, 'import tflearn\n'), ((4661, 4748), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(256)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 256, downsample_strides=2,\n downsample=True)\n', (4688, 4748), False, 'import tflearn\n'), ((4794, 4881), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', '(2)', '(16)', '(512)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 512, downsample_strides=2,\n downsample=True)\n', (4821, 4881), False, 'import tflearn\n'), ((4927, 5014), 'tflearn.residual_bottleneck', 'tflearn.residual_bottleneck', (['net', 
'(2)', '(16)', '(512)'], {'downsample_strides': '(2)', 'downsample': '(True)'}), '(net, 2, 16, 512, downsample_strides=2,\n downsample=True)\n', (4954, 5014), False, 'import tflearn\n'), ((5335, 5367), 'tflearn.batch_normalization', 'tflearn.batch_normalization', (['net'], {}), '(net)\n', (5362, 5367), False, 'import tflearn\n'), ((5378, 5409), 'tflearn.activation', 'tflearn.activation', (['net', '"""relu"""'], {}), "(net, 'relu')\n", (5396, 5409), False, 'import tflearn\n'), ((5567, 5611), 'tensorflow.reshape', 'tf.reshape', (['net', '[-1, n_dim // n_split, 512]'], {}), '(net, [-1, n_dim // n_split, 512])\n', (5577, 5611), True, 'import tensorflow as tf\n'), ((5877, 5894), 'tflearn.layers.core.dropout', 'dropout', (['net', '(0.5)'], {}), '(net, 0.5)\n', (5884, 5894), False, 'from tflearn.layers.core import dropout\n'), ((5933, 5987), 'tflearn.fully_connected', 'tflearn.fully_connected', (['net', '(32)'], {'activation': '"""sigmoid"""'}), "(net, 32, activation='sigmoid')\n", (5956, 5987), False, 'import tflearn\n'), ((5998, 6033), 'tflearn.dropout', 'tflearn.dropout', (['feature_layer', '(0.5)'], {}), '(feature_layer, 0.5)\n', (6013, 6033), False, 'import tflearn\n'), ((6044, 6097), 'tflearn.fully_connected', 'tflearn.fully_connected', (['net', '(4)'], {'activation': '"""softmax"""'}), "(net, 4, activation='softmax')\n", (6067, 6097), False, 'import tflearn\n'), ((6144, 6218), 'tflearn.regression', 'tflearn.regression', (['net'], {'optimizer': '"""adam"""', 'loss': '"""categorical_crossentropy"""'}), "(net, optimizer='adam', loss='categorical_crossentropy')\n", (6162, 6218), False, 'import tflearn\n'), ((6352, 6368), 'tflearn.DNN', 'tflearn.DNN', (['net'], {}), '(net)\n', (6363, 6368), False, 'import tflearn\n'), ((6476, 6513), 'tflearn.variables.get_all_variables', 'tflearn.variables.get_all_variables', ([], {}), '()\n', (6511, 6513), False, 'import tflearn\n'), ((7661, 7693), 'numpy.random.normal', 'np.random.normal', ([], {'size': '[2, 3, 4]'}), '(size=[2, 3, 
4])\n', (7677, 7693), True, 'import numpy as np\n'), ((2244, 2280), 'numpy.array', 'np.array', (['out_data'], {'dtype': 'np.float32'}), '(out_data, dtype=np.float32)\n', (2252, 2280), True, 'import numpy as np\n'), ((5685, 5703), 'tflearn.layers.recurrent.BasicLSTMCell', 'BasicLSTMCell', (['(256)'], {}), '(256)\n', (5698, 5703), False, 'from tflearn.layers.recurrent import bidirectional_rnn, BasicLSTMCell\n'), ((5705, 5723), 'tflearn.layers.recurrent.BasicLSTMCell', 'BasicLSTMCell', (['(256)'], {}), '(256)\n', (5718, 5723), False, 'from tflearn.layers.recurrent import bidirectional_rnn, BasicLSTMCell\n'), ((6854, 6870), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (6865, 6870), False, 'import pickle\n'), ((6893, 6909), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (6904, 6909), False, 'import pickle\n'), ((6929, 6945), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (6940, 6945), False, 'import pickle\n'), ((6966, 6982), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (6977, 6982), False, 'import pickle\n'), ((7003, 7019), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (7014, 7019), False, 'import pickle\n'), ((7041, 7057), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (7052, 7057), False, 'import pickle\n'), ((7076, 7092), 'pickle.load', 'pickle.load', (['fin'], {}), '(fin)\n', (7087, 7092), False, 'import pickle\n'), ((708, 722), 'numpy.abs', 'np.abs', (['nd_vec'], {}), '(nd_vec)\n', (714, 722), True, 'import numpy as np\n'), ((2552, 2593), 'tensorflow.ConfigProto', 'tf.ConfigProto', ([], {'log_device_placement': '(True)'}), '(log_device_placement=True)\n', (2566, 2593), True, 'import tensorflow as tf\n')] |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
'''
Classes:
Metric - Abstract Base Class from which all metrics must inherit.
'''
from abc import ABCMeta, abstractmethod
import ocw.utils as utils
import numpy
from scipy import stats
class Metric(object):
'''Base Metric Class'''
__metaclass__ = ABCMeta
class UnaryMetric(Metric):
'''Abstract Base Class from which all unary metrics inherit.'''
__metaclass__ = ABCMeta
@abstractmethod
def run(self, target_dataset):
'''Run the metric for a given target dataset.
:param target_dataset: The dataset on which the current metric will
be run.
:type target_dataset: :class:`dataset.Dataset`
:returns: The result of evaluating the metric on the target_dataset.
'''
class BinaryMetric(Metric):
'''Abstract Base Class from which all binary metrics inherit.'''
__metaclass__ = ABCMeta
@abstractmethod
def run(self, ref_dataset, target_dataset):
'''Run the metric for the given reference and target datasets.
:param ref_dataset: The Dataset to use as the reference dataset when
running the evaluation.
:type ref_dataset: :class:`dataset.Dataset`
:param target_dataset: The Dataset to use as the target dataset when
running the evaluation.
:type target_dataset: :class:`dataset.Dataset`
:returns: The result of evaluation the metric on the reference and
target dataset.
'''
class Bias(BinaryMetric):
'''Calculate the bias between a reference and target dataset.'''
def run(self, ref_dataset, target_dataset):
'''Calculate the bias between a reference and target dataset.
.. note::
Overrides BinaryMetric.run()
:param ref_dataset: The reference dataset to use in this metric run.
:type ref_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run.
:type target_dataset: :class:`dataset.Dataset`
:returns: The difference between the reference and target datasets.
:rtype: :class:`numpy.ndarray`
'''
return ref_dataset.values - target_dataset.values
class TemporalStdDev(UnaryMetric):
'''Calculate the standard deviation over the time.'''
def run(self, target_dataset):
        '''Calculate the temporal std. dev. for a dataset.
.. note::
Overrides UnaryMetric.run()
:param target_dataset: The target_dataset on which to calculate the
temporal standard deviation.
:type target_dataset: :class:`dataset.Dataset`
:returns: The temporal standard deviation of the target dataset
        :rtype: :class:`numpy.ndarray`
'''
return target_dataset.values.std(axis=0, ddof=1)
class StdDevRatio(BinaryMetric):
'''Calculate the standard deviation ratio between two datasets.'''
def run(self, ref_dataset, target_dataset):
'''Calculate the standard deviation ratio.
.. note::
Overrides BinaryMetric.run()
:param ref_dataset: The reference dataset to use in this metric run.
:type ref_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run.
:type target_dataset: :class:`dataset.Dataset`
        :returns: The standard deviation ratio of the reference and target datasets.
'''
return target_dataset.values.std() / ref_dataset.values.std()
class PatternCorrelation(BinaryMetric):
'''Calculate the correlation coefficient between two datasets'''
def run(self, ref_dataset, target_dataset):
        '''Calculate the correlation coefficient between two datasets.
.. note::
Overrides BinaryMetric.run()
:param ref_dataset: The reference dataset to use in this metric run.
:type ref_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run.
:type target_dataset: :class:`dataset.Dataset`
:returns: The correlation coefficient between a reference and target dataset.
'''
# stats.pearsonr returns correlation_coefficient, 2-tailed p-value
# We only care about the correlation coefficient
# Docs at http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html
return stats.pearsonr(ref_dataset.values.flatten(), target_dataset.values.flatten())[0]
class TemporalCorrelation(BinaryMetric):
'''Calculate the temporal correlation coefficients and associated
confidence levels between two datasets, using Pearson's correlation.'''
def run(self, reference_dataset, target_dataset):
'''Calculate the temporal correlation coefficients and associated
confidence levels between two datasets, using Pearson's correlation.
.. note::
Overrides BinaryMetric.run()
:param reference_dataset: The reference dataset to use in this metric
run
:type reference_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run
:type target_dataset: :class:`dataset.Dataset`
:returns: A 2D array of temporal correlation coefficients and a 2D
array of confidence levels associated with the temporal correlation
coefficients
'''
num_times, num_lats, num_lons = reference_dataset.values.shape
coefficients = numpy.zeros([num_lats, num_lons])
levels = numpy.zeros([num_lats, num_lons])
for i in numpy.arange(num_lats):
for j in numpy.arange(num_lons):
coefficients[i, j], levels[i, j] = (
stats.pearsonr(
reference_dataset.values[:, i, j],
target_dataset.values[:, i, j]
)
)
levels[i, j] = 1 - levels[i, j]
return coefficients, levels
class TemporalMeanBias(BinaryMetric):
'''Calculate the bias averaged over time.'''
def run(self, ref_dataset, target_dataset, absolute=False):
'''Calculate the bias averaged over time.
.. note::
Overrides BinaryMetric.run()
:param ref_dataset: The reference dataset to use in this metric run.
:type ref_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run.
:type target_dataset: :class:`dataset.Dataset`
:returns: The mean bias between a reference and target dataset over time.
'''
diff = ref_dataset.values - target_dataset.values
if absolute:
diff = abs(diff)
mean_bias = diff.mean(axis=0)
return mean_bias
class SpatialMeanOfTemporalMeanBias(BinaryMetric):
'''Calculate the bias averaged over time and domain.'''
def run(self, reference_dataset, target_dataset):
'''Calculate the bias averaged over time and domain.
.. note::
Overrides BinaryMetric.run()
:param reference_dataset: The reference dataset to use in this metric
run
:type reference_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run
:type target_dataset: :class:`dataset.Dataset`
:returns: The bias averaged over time and domain
'''
bias = reference_dataset.values - target_dataset.values
return bias.mean()
class RMSError(BinaryMetric):
'''Calculate the Root Mean Square Difference (RMS Error), with the mean
calculated over time and space.'''
def run(self, reference_dataset, target_dataset):
'''Calculate the Root Mean Square Difference (RMS Error), with the mean
calculated over time and space.
.. note::
Overrides BinaryMetric.run()
:param reference_dataset: The reference dataset to use in this metric
run
:type reference_dataset: :class:`dataset.Dataset`
:param target_dataset: The target dataset to evaluate against the
reference dataset in this metric run
:type target_dataset: :class:`dataset.Dataset`
:returns: The RMS error, with the mean calculated over time and space
'''
sqdiff = (reference_dataset.values - target_dataset.values) ** 2
return numpy.sqrt(sqdiff.mean())
| [
"numpy.zeros",
"numpy.arange",
"scipy.stats.pearsonr"
] | [((6457, 6490), 'numpy.zeros', 'numpy.zeros', (['[num_lats, num_lons]'], {}), '([num_lats, num_lons])\n', (6468, 6490), False, 'import numpy\n'), ((6508, 6541), 'numpy.zeros', 'numpy.zeros', (['[num_lats, num_lons]'], {}), '([num_lats, num_lons])\n', (6519, 6541), False, 'import numpy\n'), ((6559, 6581), 'numpy.arange', 'numpy.arange', (['num_lats'], {}), '(num_lats)\n', (6571, 6581), False, 'import numpy\n'), ((6604, 6626), 'numpy.arange', 'numpy.arange', (['num_lons'], {}), '(num_lons)\n', (6616, 6626), False, 'import numpy\n'), ((6701, 6786), 'scipy.stats.pearsonr', 'stats.pearsonr', (['reference_dataset.values[:, i, j]', 'target_dataset.values[:, i, j]'], {}), '(reference_dataset.values[:, i, j], target_dataset.values[:,\n i, j])\n', (6715, 6786), False, 'from scipy import stats\n')] |
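The `TemporalCorrelation.run` loop above is a per-grid-cell application of `stats.pearsonr`, which returns a `(correlation coefficient, two-sided p-value)` pair. A self-contained sketch of the same loop on hypothetical toy grids (plain arrays stand in for the `dataset.Dataset` values used by the metric):

```python
import numpy as np
from scipy import stats

# Hypothetical (time, lat, lon) value grids standing in for two datasets.
rng = np.random.RandomState(0)
ref = rng.rand(10, 2, 3)
target = ref + 0.01 * rng.rand(10, 2, 3)  # strongly correlated with ref

num_times, num_lats, num_lons = ref.shape
coefficients = np.zeros([num_lats, num_lons])
levels = np.zeros([num_lats, num_lons])
for i in range(num_lats):
    for j in range(num_lons):
        # pearsonr returns (correlation coefficient, two-sided p-value);
        # the metric turns the p-value into a confidence level.
        coefficients[i, j], levels[i, j] = stats.pearsonr(ref[:, i, j], target[:, i, j])
        levels[i, j] = 1 - levels[i, j]
```

With near-identical time series, every per-cell coefficient comes out close to 1.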
# -*- coding: utf-8 -*-
import torch
import logging
from torch.utils.data import DataLoader
import numpy as np
from ..utils.misc import fopen, pbar
logger = logging.getLogger('nmtpytorch')
def sort_predictions(data_loader, results):
"""Recovers the dataset order when bucketing samplers are used."""
if getattr(data_loader.batch_sampler, 'store_indices', False):
results = [results[i] for i, j in sorted(
enumerate(data_loader.batch_sampler.orig_idxs), key=lambda k: k[1])]
return results
def make_dataloader(dataset, pin_memory=False, num_workers=0):
if num_workers != 0:
logger.info('Forcing num_workers to 0 since it fails with torch 0.4')
num_workers = 0
return DataLoader(
dataset, batch_sampler=dataset.sampler,
collate_fn=dataset.collate_fn,
pin_memory=pin_memory, num_workers=num_workers)
def sort_batch(seqbatch):
"""Sorts torch tensor of integer indices by decreasing order."""
# 0 is padding_idx
omask = (seqbatch != 0).long()
olens = omask.sum(0)
slens, sidxs = torch.sort(olens, descending=True)
oidxs = torch.sort(sidxs)[1]
return (oidxs, sidxs, slens.data.tolist(), omask.float())
def pad_video_sequence(seqs):
"""
Pads video sequences with zero vectors for minibatch processing.
(contributor: @elliottd)
TODO: Can we write the for loop in a more compact format?
"""
lengths = [len(s) for s in seqs]
# Get the desired size of the padding vector from the input seqs data
feat_size = seqs[0].shape[1]
max_len = max(lengths)
tmp = []
for s, len_ in zip(seqs, lengths):
if max_len - len_ == 0:
tmp.append(s)
else:
inner_tmp = s
for i in range(max_len - len_):
inner_tmp = np.vstack((inner_tmp, (np.array([0.] * feat_size))))
tmp.append(inner_tmp)
padded = np.array(tmp, dtype='float32')
return torch.FloatTensor(torch.from_numpy(padded))
def convert_to_onehot(idxs, n_classes):
"""Returns a binary batch_size x n_classes one-hot tensor."""
out = torch.zeros(len(idxs), n_classes, device=idxs[0].device)
for row, indices in zip(out, idxs):
row.scatter_(0, indices, 1)
return out
def read_sentences(fname, vocab, bos=False, eos=True):
lines = []
lens = []
with fopen(fname) as f:
for idx, line in enumerate(pbar(f, unit='sents')):
line = line.strip()
# Empty lines will cause a lot of headaches,
# get rid of them during preprocessing!
assert line, "Empty line (%d) found in %s" % (idx + 1, fname)
# Map and append
seq = vocab.sent_to_idxs(line, explicit_bos=bos, explicit_eos=eos)
lines.append(seq)
lens.append(len(seq))
return lines, lens
| [
"logging.getLogger",
"torch.sort",
"torch.from_numpy",
"numpy.array",
"torch.utils.data.DataLoader"
] | [((159, 190), 'logging.getLogger', 'logging.getLogger', (['"""nmtpytorch"""'], {}), "('nmtpytorch')\n", (176, 190), False, 'import logging\n'), ((729, 863), 'torch.utils.data.DataLoader', 'DataLoader', (['dataset'], {'batch_sampler': 'dataset.sampler', 'collate_fn': 'dataset.collate_fn', 'pin_memory': 'pin_memory', 'num_workers': 'num_workers'}), '(dataset, batch_sampler=dataset.sampler, collate_fn=dataset.\n collate_fn, pin_memory=pin_memory, num_workers=num_workers)\n', (739, 863), False, 'from torch.utils.data import DataLoader\n'), ((1083, 1117), 'torch.sort', 'torch.sort', (['olens'], {'descending': '(True)'}), '(olens, descending=True)\n', (1093, 1117), False, 'import torch\n'), ((1915, 1945), 'numpy.array', 'np.array', (['tmp'], {'dtype': '"""float32"""'}), "(tmp, dtype='float32')\n", (1923, 1945), True, 'import numpy as np\n'), ((1130, 1147), 'torch.sort', 'torch.sort', (['sidxs'], {}), '(sidxs)\n', (1140, 1147), False, 'import torch\n'), ((1975, 1999), 'torch.from_numpy', 'torch.from_numpy', (['padded'], {}), '(padded)\n', (1991, 1999), False, 'import torch\n'), ((1838, 1865), 'numpy.array', 'np.array', (['([0.0] * feat_size)'], {}), '([0.0] * feat_size)\n', (1846, 1865), True, 'import numpy as np\n')] |
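`sort_predictions` above inverts the permutation a bucketing sampler applied: `orig_idxs[k]` records where the k-th processed sample lived in the original dataset. The inversion in isolation, with hypothetical indices and results:

```python
# orig_idxs[k] = original-dataset position of the k-th processed sample.
orig_idxs = [2, 0, 3, 1]            # hypothetical order produced by a bucketing sampler
results = ['r2', 'r0', 'r3', 'r1']  # model outputs, in processed order

# Same inversion as sort_predictions: sort processed positions k by orig_idxs[k].
restored = [results[k] for k, _ in sorted(enumerate(orig_idxs), key=lambda p: p[1])]
print(restored)  # ['r0', 'r1', 'r2', 'r3']
```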
from neuron.choice import sigmoid, sigmoid_prime, J, J_derivative
import numpy as np
def compute_grad_analytically(neuron, X, y, J_prime=J_derivative):
"""
    Analytical gradient of the target function.
    neuron - object of class Neuron
    X - vertical input matrix of shape (n, m)
    y - correct answers for X, shape (m, 1)
    J_prime - function computing the derivatives of the target function with respect to the answers
    Returns a vector of shape (m, 1)
"""
z = neuron.summatory(X)
y_hat = neuron.activation(z)
# derivative chain
dy_dyhat = J_prime(y, y_hat) # 1st gear in chain
dyhat_dz = neuron.activation_function_derivative(z) # 2nd gear in chain
dz_dw = X # 3rd gear in chain
grad = ((dy_dyhat * dyhat_dz).T).dot(dz_dw)
grad = grad.T
return grad
def compute_grad_numerically(neuron, X, y, J=J, eps=10e-6):
"""
    Numerical gradient of the target function.
    neuron - object of class Neuron with a vertical weight vector w,
    X - vertical input matrix of shape (n, m) on which the sum of squared deviations is computed,
    y - correct answers for the test sample X,
    J - target function whose gradient we want to obtain,
    eps - size of $\delta w$ (a small change in the weights).
"""
w_0 = neuron.w
num_grad = np.zeros(w_0.shape)
for i in range(len(w_0)):
old_wi = neuron.w[i].copy()
        # Perturb the weight
        neuron.w[i] = old_wi + eps
        J_up = J(neuron, X, y)
        neuron.w[i] = old_wi - eps
        J_down = J(neuron, X, y)
        # Compute the target function at both points and the central-difference gradient estimate
        num_grad[i] = (J_up - J_down) / (2 * eps)
        # Restore the weight. Better this way than -= eps, so rounding errors do not accumulate
        neuron.w[i] = old_wi
    # check that our manipulations did not corrupt the neuron's weights
    assert np.allclose(neuron.w, w_0), "WE CORRUPTED THE NEURON'S WEIGHTS"
return num_grad
| [
"numpy.zeros",
"numpy.allclose",
"neuron.choice.J"
] | [((1257, 1276), 'numpy.zeros', 'np.zeros', (['w_0.shape'], {}), '(w_0.shape)\n', (1265, 1276), True, 'import numpy as np\n'), ((1848, 1874), 'numpy.allclose', 'np.allclose', (['neuron.w', 'w_0'], {}), '(neuron.w, w_0)\n', (1859, 1874), True, 'import numpy as np\n'), ((1415, 1430), 'neuron.choice.J', 'J', (['neuron', 'X', 'y'], {}), '(neuron, X, y)\n', (1416, 1430), False, 'from neuron.choice import sigmoid, sigmoid_prime, J, J_derivative\n'), ((1483, 1498), 'neuron.choice.J', 'J', (['neuron', 'X', 'y'], {}), '(neuron, X, y)\n', (1484, 1498), False, 'from neuron.choice import sigmoid, sigmoid_prime, J, J_derivative\n')] |
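`compute_grad_numerically` is a standard central-difference gradient check against `compute_grad_analytically`: for a correct analytical gradient, the two agree to roughly `eps**2`. A self-contained numpy sketch of the same check on a plain least-squares loss (no `Neuron` class; all names here are illustrative):

```python
import numpy as np

def loss(w, X, y):
    # Sum of squared deviations, playing the role of J above.
    return 0.5 * np.sum((X.dot(w) - y) ** 2)

def analytic_grad(w, X, y):
    # d/dw of 0.5 * ||Xw - y||^2  ->  X^T (Xw - y)
    return X.T.dot(X.dot(w) - y)

def numeric_grad(w, X, y, eps=1e-6):
    grad = np.zeros_like(w)
    for i in range(len(w)):
        old = w[i].copy()
        w[i] = old + eps
        up = loss(w, X, y)
        w[i] = old - eps
        down = loss(w, X, y)
        grad[i] = (up - down) / (2 * eps)
        w[i] = old  # restore the weight, as in compute_grad_numerically
    return grad

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = rng.randn(20, 1)
w = rng.randn(3, 1)
diff = np.abs(analytic_grad(w, X, y) - numeric_grad(w, X, y)).max()
```

For a quadratic loss the central difference is exact up to floating-point roundoff, so `diff` is tiny.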
import argparse
from myPackage import tools as tl
from myPackage import preprocess
from myPackage import minutiaeExtraction as minExtract
from enhancementFP import image_enhance as img_e
from os.path import basename, splitext, exists
import time
from numpy import mean, std
if __name__ == '__main__':
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--path", required=True,
help="-p Source path where the images are stored.")
ap.add_argument("-r", "--results", required= False,
help="-r Destiny path where the results will be stored.")
args = vars(ap.parse_args())
# Configuration
image_ext = '.tif'
plot = False
path = None
# ratio = 0.2
# Create folders for results
# -r ../Data/Results/fingerprints
if args.get("results") is not None:
if not exists(args["results"]):
tl.makeDir(args["results"])
path = args["results"]
# Extract names
all_images = tl.natSort(tl.getSamples(args["path"], image_ext))
# Split train and test data
# train_data, test_data = tl.split_train_test(all_images, ratio)
print("\nAll_images size: {}\n".format(len(all_images)))
all_times= []
for image in all_images:
start = time.time()
name = splitext(basename(image))[0]
print("\nProcessing image '{}'".format(name))
cleaned_img = preprocess.blurrImage(image, name, plot)
enhanced_img = img_e.image_enhance(cleaned_img, name, plot)
cleaned_img = preprocess.cleanImage(enhanced_img, name, plot)
# skeleton = preprocess.zhangSuen(cleaned_img, name, plot)
skeleton = preprocess.thinImage(cleaned_img, name, plot)
minExtract.process(skeleton, name, plot, path)
all_times.append((time.time()-start))
    mean_time = mean(all_times)
    std_time = std(all_times)
    print("\n\nAlgorithm takes {:2.3f} (+/-{:2.3f}) seconds per image".format(mean_time, std_time)) | [
"myPackage.preprocess.thinImage",
"numpy.mean",
"os.path.exists",
"argparse.ArgumentParser",
"myPackage.preprocess.blurrImage",
"myPackage.tools.getSamples",
"enhancementFP.image_enhance.image_enhance",
"myPackage.preprocess.cleanImage",
"myPackage.minutiaeExtraction.process",
"numpy.std",
"myPa... | [((312, 337), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (335, 337), False, 'import argparse\n'), ((1813, 1828), 'numpy.mean', 'mean', (['all_times'], {}), '(all_times)\n', (1817, 1828), False, 'from numpy import mean, std\n'), ((1839, 1853), 'numpy.std', 'std', (['all_times'], {}), '(all_times)\n', (1842, 1853), False, 'from numpy import mean, std\n'), ((993, 1031), 'myPackage.tools.getSamples', 'tl.getSamples', (["args['path']", 'image_ext'], {}), "(args['path'], image_ext)\n", (1006, 1031), True, 'from myPackage import tools as tl\n'), ((1258, 1269), 'time.time', 'time.time', ([], {}), '()\n', (1267, 1269), False, 'import time\n'), ((1390, 1430), 'myPackage.preprocess.blurrImage', 'preprocess.blurrImage', (['image', 'name', 'plot'], {}), '(image, name, plot)\n', (1411, 1430), False, 'from myPackage import preprocess\n'), ((1454, 1498), 'enhancementFP.image_enhance.image_enhance', 'img_e.image_enhance', (['cleaned_img', 'name', 'plot'], {}), '(cleaned_img, name, plot)\n', (1473, 1498), True, 'from enhancementFP import image_enhance as img_e\n'), ((1521, 1568), 'myPackage.preprocess.cleanImage', 'preprocess.cleanImage', (['enhanced_img', 'name', 'plot'], {}), '(enhanced_img, name, plot)\n', (1542, 1568), False, 'from myPackage import preprocess\n'), ((1655, 1700), 'myPackage.preprocess.thinImage', 'preprocess.thinImage', (['cleaned_img', 'name', 'plot'], {}), '(cleaned_img, name, plot)\n', (1675, 1700), False, 'from myPackage import preprocess\n'), ((1709, 1755), 'myPackage.minutiaeExtraction.process', 'minExtract.process', (['skeleton', 'name', 'plot', 'path'], {}), '(skeleton, name, plot, path)\n', (1727, 1755), True, 'from myPackage import minutiaeExtraction as minExtract\n'), ((849, 872), 'os.path.exists', 'exists', (["args['results']"], {}), "(args['results'])\n", (855, 872), False, 'from os.path import basename, splitext, exists\n'), ((886, 913), 'myPackage.tools.makeDir', 'tl.makeDir', (["args['results']"], {}), 
"(args['results'])\n", (896, 913), True, 'from myPackage import tools as tl\n'), ((1294, 1309), 'os.path.basename', 'basename', (['image'], {}), '(image)\n', (1302, 1309), False, 'from os.path import basename, splitext, exists\n'), ((1782, 1793), 'time.time', 'time.time', ([], {}), '()\n', (1791, 1793), False, 'import time\n')] |
#Ref: <NAME>
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Sequential
import numpy as np
import matplotlib.pyplot as plt
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))
#adding some noise
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
#Displaying images with noise
plt.figure(figsize=(20, 2))
for i in range(1,10):
ax = plt.subplot(1, 10, i)
plt.imshow(x_test_noisy[i].reshape(28, 28), cmap="binary")
plt.show()
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(1, (3, 3), activation='relu', padding='same'))
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()
model.fit(x_train_noisy, x_train, epochs=10, batch_size=256, shuffle=True,
validation_data=(x_test_noisy, x_test))
model.evaluate(x_test_noisy, x_test)
model.save('denoising_autoencoder.model')
no_noise_img = model.predict(x_test_noisy)
plt.figure(figsize=(40, 4))
for i in range(10):
# display original
ax = plt.subplot(3, 20, i + 1)
plt.imshow(x_test_noisy[i].reshape(28, 28), cmap="binary")
# display reconstructed (after noise removed) image
ax = plt.subplot(3, 20, 40 +i+ 1)
plt.imshow(no_noise_img[i].reshape(28, 28), cmap="binary")
plt.show()
| [
"numpy.clip",
"numpy.random.normal",
"tensorflow.keras.layers.Conv2D",
"tensorflow.keras.layers.UpSampling2D",
"tensorflow.keras.datasets.mnist.load_data",
"tensorflow.keras.layers.MaxPooling2D",
"matplotlib.pyplot.subplot",
"matplotlib.pyplot.figure",
"tensorflow.keras.models.Sequential",
"matplo... | [((258, 275), 'tensorflow.keras.datasets.mnist.load_data', 'mnist.load_data', ([], {}), '()\n', (273, 275), False, 'from tensorflow.keras.datasets import mnist\n'), ((723, 755), 'numpy.clip', 'np.clip', (['x_train_noisy', '(0.0)', '(1.0)'], {}), '(x_train_noisy, 0.0, 1.0)\n', (730, 755), True, 'import numpy as np\n'), ((769, 800), 'numpy.clip', 'np.clip', (['x_test_noisy', '(0.0)', '(1.0)'], {}), '(x_test_noisy, 0.0, 1.0)\n', (776, 800), True, 'import numpy as np\n'), ((830, 857), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(20, 2)'}), '(figsize=(20, 2))\n', (840, 857), True, 'import matplotlib.pyplot as plt\n'), ((974, 984), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (982, 984), True, 'import matplotlib.pyplot as plt\n'), ((995, 1007), 'tensorflow.keras.models.Sequential', 'Sequential', ([], {}), '()\n', (1005, 1007), False, 'from tensorflow.keras.models import Sequential\n'), ((2044, 2071), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(40, 4)'}), '(figsize=(40, 4))\n', (2054, 2071), True, 'import matplotlib.pyplot as plt\n'), ((2376, 2386), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (2384, 2386), True, 'import matplotlib.pyplot as plt\n'), ((889, 910), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(1)', '(10)', 'i'], {}), '(1, 10, i)\n', (900, 910), True, 'import matplotlib.pyplot as plt\n'), ((1018, 1096), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(32)', '(3, 3)'], {'activation': '"""relu"""', 'padding': '"""same"""', 'input_shape': '(28, 28, 1)'}), "(32, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1))\n", (1024, 1096), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1108, 1144), 'tensorflow.keras.layers.MaxPooling2D', 'MaxPooling2D', (['(2, 2)'], {'padding': '"""same"""'}), "((2, 2), padding='same')\n", (1120, 1144), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1156, 1208), 
'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(8)', '(3, 3)'], {'activation': '"""relu"""', 'padding': '"""same"""'}), "(8, (3, 3), activation='relu', padding='same')\n", (1162, 1208), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1220, 1256), 'tensorflow.keras.layers.MaxPooling2D', 'MaxPooling2D', (['(2, 2)'], {'padding': '"""same"""'}), "((2, 2), padding='same')\n", (1232, 1256), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1268, 1320), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(8)', '(3, 3)'], {'activation': '"""relu"""', 'padding': '"""same"""'}), "(8, (3, 3), activation='relu', padding='same')\n", (1274, 1320), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1335, 1371), 'tensorflow.keras.layers.MaxPooling2D', 'MaxPooling2D', (['(2, 2)'], {'padding': '"""same"""'}), "((2, 2), padding='same')\n", (1347, 1371), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1385, 1437), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(8)', '(3, 3)'], {'activation': '"""relu"""', 'padding': '"""same"""'}), "(8, (3, 3), activation='relu', padding='same')\n", (1391, 1437), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1449, 1469), 'tensorflow.keras.layers.UpSampling2D', 'UpSampling2D', (['(2, 2)'], {}), '((2, 2))\n', (1461, 1469), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1481, 1533), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(8)', '(3, 3)'], {'activation': '"""relu"""', 'padding': '"""same"""'}), "(8, (3, 3), activation='relu', padding='same')\n", (1487, 1533), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1545, 1565), 'tensorflow.keras.layers.UpSampling2D', 'UpSampling2D', (['(2, 2)'], {}), '((2, 2))\n', (1557, 1565), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, 
UpSampling2D\n'), ((1577, 1614), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(32)', '(3, 3)'], {'activation': '"""relu"""'}), "(32, (3, 3), activation='relu')\n", (1583, 1614), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1626, 1646), 'tensorflow.keras.layers.UpSampling2D', 'UpSampling2D', (['(2, 2)'], {}), '((2, 2))\n', (1638, 1646), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((1658, 1710), 'tensorflow.keras.layers.Conv2D', 'Conv2D', (['(1)', '(3, 3)'], {'activation': '"""relu"""', 'padding': '"""same"""'}), "(1, (3, 3), activation='relu', padding='same')\n", (1664, 1710), False, 'from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D\n'), ((2124, 2149), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(20)', '(i + 1)'], {}), '(3, 20, i + 1)\n', (2135, 2149), True, 'import matplotlib.pyplot as plt\n'), ((2283, 2313), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(20)', '(40 + i + 1)'], {}), '(3, 20, 40 + i + 1)\n', (2294, 2313), True, 'import matplotlib.pyplot as plt\n'), ((552, 608), 'numpy.random.normal', 'np.random.normal', ([], {'loc': '(0.0)', 'scale': '(1.0)', 'size': 'x_train.shape'}), '(loc=0.0, scale=1.0, size=x_train.shape)\n', (568, 608), True, 'import numpy as np\n'), ((649, 704), 'numpy.random.normal', 'np.random.normal', ([], {'loc': '(0.0)', 'scale': '(1.0)', 'size': 'x_test.shape'}), '(loc=0.0, scale=1.0, size=x_test.shape)\n', (665, 704), True, 'import numpy as np\n')] |
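The noising step above is the heart of the denoising setup: scale pixels to `[0, 1]`, add scaled Gaussian noise, then clip back into range so the corrupted inputs remain valid images. The same three lines in isolation, on a hypothetical toy batch:

```python
import numpy as np

rng = np.random.RandomState(0)
images = rng.rand(4, 28, 28, 1).astype('float32')  # pixels already scaled to [0, 1]

noise_factor = 0.5
noisy = images + noise_factor * rng.normal(loc=0.0, scale=1.0, size=images.shape)
noisy = np.clip(noisy, 0.0, 1.0)  # keep inputs in the valid [0, 1] pixel range
```

The autoencoder is then trained to map `noisy` back to `images`, which is exactly what `model.fit(x_train_noisy, x_train, ...)` does above.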
import argparse
import glob
import importlib
import os
import sys
import numpy as np
import torch as th
import yaml
import gym
from stable_baselines3.common.utils import set_random_seed, obs_as_tensor
from stable_baselines3.common.vec_env import VecVideoRecorder
import utils.import_envs # noqa: F401 pylint: disable=unused-import
from utils import ALGOS, create_test_env, get_latest_run_id, get_saved_hyperparams
from utils.exp_manager import ExperimentManager
from utils.utils import StoreDict
from blind_walking.net.adapter import Adapter
class Logger:
def __init__(self, name: str = "log"):
self.data = []
self.name = name
def update(self, data: np.ndarray):
self.data.append(data)
def save(self, savedir: str = None):
all_data = np.concatenate(self.data)
np.save(os.path.join(savedir, self.name), all_data)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--env", help="environment ID", type=str, default="CartPole-v1")
parser.add_argument("-f", "--folder", help="Log folder", type=str, default="rl-trained-agents")
parser.add_argument(
"--algo",
help="RL Algorithm",
default="ppo",
type=str,
required=False,
choices=list(ALGOS.keys()),
)
parser.add_argument("-n", "--n-timesteps", help="number of timesteps", default=1000, type=int)
parser.add_argument(
"--num-threads",
help="Number of threads for PyTorch (-1 to use default)",
default=-1,
type=int,
)
parser.add_argument("--n-envs", help="number of environments", default=1, type=int)
parser.add_argument(
"--exp-id",
help="Experiment ID (default: 0: latest, -1: no exp folder)",
default=0,
type=int,
)
parser.add_argument("--verbose", help="Verbose mode (0: no output, 1: INFO)", default=1, type=int)
parser.add_argument("--record", action="store_true", default=False, help="Record video")
parser.add_argument("-o", "--output-folder", help="Video output folder", type=str)
parser.add_argument(
"--no-render",
action="store_true",
default=False,
help="Do not render the environment (useful for tests)",
)
parser.add_argument("--save-encoder-output", action="store_true", default=False, help="Log the encoder output to a file")
parser.add_argument("--save-observation", action="store_true", default=False, help="Save the observations into a file")
parser.add_argument(
"--adapter",
action="store_true",
default=False,
help="Use adapter module instead of trained model",
)
parser.add_argument(
"--load-best",
action="store_true",
default=False,
help="Load best model instead of last model if available",
)
parser.add_argument(
"--load-checkpoint",
type=int,
help="Load checkpoint instead of last model if available, "
"you must pass the number of timesteps corresponding to it",
)
parser.add_argument(
"--load-last-checkpoint",
action="store_true",
default=False,
help="Load last checkpoint instead of last model if available",
)
parser.add_argument(
"--deterministic",
action="store_true",
default=False,
help="Use deterministic actions",
)
parser.add_argument(
"--stochastic",
action="store_true",
default=False,
help="Use stochastic actions",
)
parser.add_argument(
"--norm-reward",
action="store_true",
default=False,
help="Normalize reward if applicable (trained with VecNormalize)",
)
parser.add_argument("--seed", help="Random generator seed", type=int, default=0)
parser.add_argument("--reward-log", help="Where to log reward", default="", type=str)
parser.add_argument(
"--gym-packages",
type=str,
nargs="+",
default=[],
help="Additional external Gym environment package modules to import (e.g. gym_minigrid)",
)
parser.add_argument(
"--env-kwargs",
type=str,
nargs="+",
action=StoreDict,
help="Optional keyword argument to pass to the env constructor",
)
return parser.parse_args()
def main():  # noqa: C901
    args = parse_args()

    # Going through custom gym packages to let them register in the global registry
    for env_module in args.gym_packages:
        importlib.import_module(env_module)

    env_id = args.env
    algo = args.algo
    folder = args.folder

    # ######################### Get experiment folder ######################### #
    if args.exp_id == 0:
        args.exp_id = get_latest_run_id(os.path.join(folder, algo), env_id)
        print(f"Loading latest experiment, id={args.exp_id}")

    # Sanity checks
    if args.exp_id > 0:
        log_path = os.path.join(folder, algo, f"{env_id}_{args.exp_id}")
    else:
        log_path = os.path.join(folder, algo)

    assert os.path.isdir(log_path), f"The {log_path} folder was not found"

    found = False
    for ext in ["zip"]:
        model_path = os.path.join(log_path, f"{env_id}.{ext}")
        found = os.path.isfile(model_path)
        name_prefix = f"final-model-{algo}-{env_id}"
        if found:
            break

    if args.load_best:
        model_path = os.path.join(log_path, "best_model.zip")
        found = os.path.isfile(model_path)
        name_prefix = f"best-model-{algo}-{env_id}"

    if args.load_checkpoint is not None:
        model_path = os.path.join(log_path, f"rl_model_{args.load_checkpoint}_steps.zip")
        found = os.path.isfile(model_path)
        name_prefix = f"checkpoint-{args.load_checkpoint}-{algo}-{env_id}"

    if args.load_last_checkpoint:
        checkpoints = glob.glob(os.path.join(log_path, "rl_model_*_steps.zip"))
        if len(checkpoints) == 0:
            raise ValueError(f"No checkpoint found for {algo} on {env_id}, path: {log_path}")

        def step_count(checkpoint_path: str) -> int:
            # paths follow the pattern "rl_model_*_steps.zip"; we count from the back to ignore any other _ in the path
            return int(checkpoint_path.split("_")[-2])

        checkpoints = sorted(checkpoints, key=step_count)
        model_path = checkpoints[-1]
        found = True
        name_prefix = f"checkpoint-{step_count(model_path)}-{algo}-{env_id}"

    if not found:
        raise ValueError(f"No model found for {algo} on {env_id}, path: {model_path}")

    print(f"Loading {model_path}")
    # ######################### Create environment ######################### #
    # Off-policy algorithms only support one env for now
    off_policy_algos = ["qrdqn", "dqn", "ddpg", "sac", "her", "td3", "tqc"]
    if algo in off_policy_algos:
        args.n_envs = 1

    set_random_seed(args.seed)

    if args.num_threads > 0:
        if args.verbose > 1:
            print(f"Setting torch.num_threads to {args.num_threads}")
        th.set_num_threads(args.num_threads)

    is_atari = ExperimentManager.is_atari(env_id)

    stats_path = os.path.join(log_path, env_id)
    hyperparams, stats_path = get_saved_hyperparams(stats_path, norm_reward=args.norm_reward, test_mode=True)

    # load env_kwargs if existing
    env_kwargs = {}
    args_path = os.path.join(log_path, env_id, "args.yml")
    if os.path.isfile(args_path):
        with open(args_path, "r") as f:
            loaded_args = yaml.load(f, Loader=yaml.UnsafeLoader)  # pytype: disable=module-attr
            if loaded_args["env_kwargs"] is not None:
                env_kwargs = loaded_args["env_kwargs"]
    # overwrite with command line arguments
    if args.env_kwargs is not None:
        env_kwargs.update(args.env_kwargs)

    log_dir = args.reward_log if args.reward_log != "" else None

    env = create_test_env(
        env_id,
        n_envs=args.n_envs,
        stats_path=stats_path,
        seed=args.seed,
        log_dir=log_dir,
        should_render=not args.no_render,
        hyperparams=hyperparams,
        env_kwargs=env_kwargs,
    )
    obs = env.reset()

    # If record video
    if args.record:
        video_folder = args.output_folder
        if video_folder is None:
            if args.adapter:
                video_folder = os.path.join(log_path, "videos_adapter")
            else:
                video_folder = os.path.join(log_path, "videos")
        env = VecVideoRecorder(
            env,
            video_folder,
            record_video_trigger=lambda x: x == 0,
            video_length=args.n_timesteps,
            name_prefix=name_prefix,
        )
        env.reset()
    # ######################### Load model ######################### #
    kwargs = dict(seed=args.seed)
    if algo in off_policy_algos:
        # Dummy buffer size as we don't need memory to enjoy the trained agent
        kwargs.update(dict(buffer_size=1))

    # Check if we are running python 3.8+
    # we need to patch saved model under python 3.6/3.7 to load them
    newer_python_version = sys.version_info.major == 3 and sys.version_info.minor >= 8

    custom_objects = {}
    if newer_python_version:
        custom_objects = {
            "learning_rate": 0.0,
            "lr_schedule": lambda _: 0.0,
            "clip_range": lambda _: 0.0,
        }

    model = ALGOS[algo].load(model_path, env=env, custom_objects=custom_objects, **kwargs)

    if args.save_observation:
        observation_logger = Logger(name="observations")

    # Get actor-critic policy which contains the feature extractor and ppo
    is_a1_gym_env = args.save_encoder_output or args.adapter
    if is_a1_gym_env:
        policy = model.policy
        policy.eval()
        feature_encoder = policy.features_extractor
        base_policy = policy.mlp_extractor
        base_policy_action = policy.action_net
        base_policy_value = policy.value_net
    if args.save_encoder_output:
        true_extrinsics_logger = Logger(name="true_extrinsics")
        heightmap_logger = Logger(name="heightmap_observations")
    if args.adapter:
        # Load adapter module
        adapter_path = os.path.join(log_path, f"{env_id}_adapter", "adapter.pth")
        adapter = Adapter(policy.observation_space, output_size=feature_encoder.mlp_output_size)
        adapter.load_state_dict(th.load(adapter_path))
        adapter.eval()
        if args.save_encoder_output:
            predicted_extrinsics_logger = Logger(name="predicted_extrinsics")
    # ######################### Enjoy Loop ######################### #
    # Deterministic by default except for atari games
    stochastic = args.stochastic or is_atari and not args.deterministic
    deterministic = not stochastic
    state = None
    episode_reward = 0.0
    episode_rewards, episode_lengths = [], []
    ep_len = 0
    # For HER, monitor success rate
    successes = []
    try:
        for _ in range(args.n_timesteps):
            if is_a1_gym_env:
                obs_np = obs
                obs = obs_as_tensor(obs_np, model.device)
                true_extrinsics = feature_encoder(obs)
                if args.save_encoder_output:
                    true_visual_extrinsics = true_extrinsics[:, -feature_encoder.visual_output_size :]
                    true_extrinsics_logger.update(true_visual_extrinsics.detach().cpu().numpy())
                    heightmap_logger.update(obs_np["visual"])
                if args.adapter:
                    predicted_extrinsics = adapter(obs)
                    if args.save_encoder_output:
                        predicted_visual_extrinsics = predicted_extrinsics[:, -feature_encoder.visual_output_size :]
                        predicted_extrinsics_logger.update(predicted_visual_extrinsics.detach().cpu().numpy())
                extrinsics = predicted_extrinsics if args.adapter else true_extrinsics
                output = base_policy(extrinsics)
                action = base_policy_action(output[0])
                value = base_policy_value(output[1])
                # Clip and perform action
                clipped_action = action.detach().cpu().numpy()
                if isinstance(model.action_space, gym.spaces.Box):
                    clipped_action = np.clip(clipped_action, model.action_space.low, model.action_space.high)
                if args.save_observation:
                    observation_logger.update(obs)
                obs, reward, done, infos = env.step(clipped_action)
            else:
                action, state = model.predict(obs, state=state, deterministic=deterministic)
                if args.save_observation:
                    observation_logger.update(obs)
                obs, reward, done, infos = env.step(action)

            if not args.no_render:
                env.render("human")

            episode_reward += reward[0]
            ep_len += 1

            if args.n_envs == 1:
                # For atari the return reward is not the atari score
                # so we have to get it from the infos dict
                if is_atari and infos is not None and args.verbose >= 1:
                    episode_infos = infos[0].get("episode")
                    if episode_infos is not None:
                        print(f"Atari Episode Score: {episode_infos['r']:.2f}")
                        print("Atari Episode Length", episode_infos["l"])

                if done and not is_atari and args.verbose > 0:
                    # NOTE: for env using VecNormalize, the mean reward
                    # is a normalized reward when `--norm_reward` flag is passed
                    print(f"Episode Reward: {episode_reward:.2f}")
                    print("Episode Length", ep_len)
                    episode_rewards.append(episode_reward)
                    episode_lengths.append(ep_len)
                    episode_reward = 0.0
                    ep_len = 0
                    state = None

                # Reset also when the goal is achieved when using HER
                if done and infos[0].get("is_success") is not None:
                    if args.verbose > 1:
                        print("Success?", infos[0].get("is_success", False))

                    if infos[0].get("is_success") is not None:
                        successes.append(infos[0].get("is_success", False))
                        episode_reward, ep_len = 0.0, 0
    except KeyboardInterrupt:
        pass
    # ######################### Print stats ######################### #
    if args.save_encoder_output or args.save_observation:
        output_dir = os.path.join(log_path, "stats")
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)
    if args.save_encoder_output:
        true_extrinsics_logger.save(output_dir)
        heightmap_logger.save(output_dir)
        if args.adapter:
            predicted_extrinsics_logger.save(output_dir)
    if args.save_observation:
        observation_logger.save(output_dir)

    if args.verbose > 0 and len(successes) > 0:
        print(f"Success rate: {100 * np.mean(successes):.2f}%")

    if args.verbose > 0 and len(episode_rewards) > 0:
        print(f"{len(episode_rewards)} Episodes")
        print(f"Mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")

    if args.verbose > 0 and len(episode_lengths) > 0:
        print(f"Mean episode length: {np.mean(episode_lengths):.2f} +/- {np.std(episode_lengths):.2f}")

    env.close()


if __name__ == "__main__":
    main()
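# --- Hypothetical standalone sketch (not part of the original script) ---
# Illustrates the checkpoint-selection logic used by --load-last-checkpoint above:
# "rl_model_<steps>_steps.zip" files are sorted by their step count, parsed from
# the second-to-last "_"-separated token, so the newest checkpoint ends up last.
# The paths below are invented for the demo.
def _demo_step_count(checkpoint_path: str) -> int:
    # Count from the back so extra "_" elsewhere in the path (e.g. "Env-v0_1") is ignored
    return int(checkpoint_path.split("_")[-2])

_demo_checkpoints = [
    "logs/ppo/Env-v0_1/rl_model_5000_steps.zip",
    "logs/ppo/Env-v0_1/rl_model_100000_steps.zip",
    "logs/ppo/Env-v0_1/rl_model_20000_steps.zip",
]
print(sorted(_demo_checkpoints, key=_demo_step_count)[-1])  # logs/ppo/Env-v0_1/rl_model_100000_steps.zip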
import numpy as np
time = np.arange(365)
hydrograph = np.ones((len(time),)) + (np.random.uniform(size=(len(time),)) / 10)
hydrograph[100:150] = 2 + np.sin(np.arange(15, 65) / 15)
flood = hydrograph >= 2.5
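# --- Hypothetical standalone sketch (not part of the original snippet) ---
# Reducing the boolean flood mask built above: only the deterministic sine bump
# on days 100-149 can reach the 2.5 threshold, since the noisy baseline stays
# below ~1.1, so the count of flood days is fixed regardless of the random draw.
import numpy as np

time = np.arange(365)
hydrograph = np.ones((len(time),)) + (np.random.uniform(size=(len(time),)) / 10)
hydrograph[100:150] = 2 + np.sin(np.arange(15, 65) / 15)
flood = hydrograph >= 2.5
print(int(flood.sum()))  # 25 flood days: sin(arg) >= 0.5 for arg = 1.0 ... 2.6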
# -*- coding: utf-8 -*-
"""
Created on Tue Nov 17 13:02:21 2020
@author: James
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import os
from plotUtils import *
from physUtils import *
#TODO: obtain these values from the data files
#TODO: check conformal time used correctly
from itertools import chain
#%% Define variables
global lam, Mpl, g
lam = 10**-2
Mpl = 1024
Nstar= 50
xi = 3.8*10**6 * lam * Nstar**2
g = np.sqrt(3 * lam) #TODO: ?
L = 64
#wd = os.getcwd()
wd_path = "D:/Physics/MPhys Project/gw-local-repo/HLatticeV2.0/"
os.chdir(wd_path)
print("Current working dir: ",os.getcwd())
def trim_name(file):
    ind = file.index('_screen')
    return file[5:ind]
#%% File management
file_name = "data/run_with_GW_17_11_screen.log"
GW_file = "data/run_with_GW_17_11_GW.log"
eliot_file = "data/lf4-std-run1_screen.log"
higgs_file = "data/higgs-vev-run1_screen.log"
tanhfile = "data/higgs-tanh4-run1_screen.log"
lf4_tkachev1 = "data/lf4-tkachev-coupling1_screen.log"
lf4_tkachev2 = "data/lf4-tkachev-coupling2_screen.log"
lf4_tkachev3 = "data/lf4-tkachev-coupling3_screen.log"
file128 = "data/lf4-std-h2-128-run1_screen.log"
filefile = file128
pw_field_number =2 #Choose which field spectrum to plot (start: 1)
form = 'log'
rows=[1,10,20,30,40,50,60,70,80,90]
rsmall = [1,5,10,15,20,25,30,35,40,45]
tk1 = [1,20,50,75,100] + list(range(2,19,3))
tk2 = range(0,22,3)
rmid = [1,10,90] + list(range(30,60,3))
my_rows = rows
tk_rows = tk2
my_rows = sorted(my_rows)
my_img_name = trim_name(filefile) + '_img'
save_img = False
#%% Function definitions
def plot_pw_t(df1,save_img=True,img_name='df1',trim=10):
    dfs = df1.iloc[:,:trim]
    a = df1['a']
    fig,ax = plt.subplots()
    for c in dfs.columns:
        ax.plot(a,dfs[c])
    plt.yscale('log')
    if save_img==True:
        fig.savefig("pw_img_"+img_name)
    fig.show()

def plot_pw_k(df1,save_img=True,img_name='df1',trim=10):
    if trim>=df1.shape[1]: trim = -1
    dfs = df1.T.iloc[:-1,:trim]
    fig,ax = plt.subplots()
    for c in dfs.columns:
        ax.plot(dfs[c])
    plt.yscale('log')
    if save_img==True:
        fig.savefig("pw_img_"+img_name)
    fig.show()
def plot_fig1(data,error=True,save_img=True,img_name="img",truncate=0):
    plt.yscale('log')
    plt.title('Reproduction of Fig.1: ratios of energies')
    plt.xlabel('a')
    plt.ylabel('$\log_{10}(|E|/E_{tot})$')
    plt.plot(data['a'],data['pratio'],linestyle='dashed',label='Potential energy')
    plt.plot(data['a'],data['kratio'],linestyle='dashed',label='Kinetic energy')
    plt.plot(data['a'],data['gratio'],'b',label='Gradient of field energy')
    plt.legend()
    if save_img==True:
        plt.savefig(img_name)
    if error==True:
        plt.plot(data['a'][truncate:],abs(1/(data['omega'][truncate:]+1)-1),linestyle='dashed',label='Fractional energy noises')
        plt.legend()
        if save_img==True: plt.savefig(img_name + "_with_error")
    plt.show()
def plot_fig2_tchakev(data):
    plt.yscale('log')
    plt.xscale('log')
    plt.title("Reproduction of Tkachev's Fig.2")
    #fftchi = np.fft.fft(data['mean2'])
    #plt.plot(fftchi)
    m_chi2 = 1**2
    m = 1
    a2 = data['a']**2
    print(a2.size)
    k = np.logspace(-1,2,a2.size)
    eps2 = k**2/a2 + m_chi2
    phi0 = data['a']**(-3/2)
    g = 1 #1.095 * 10**(-6)
    beta2 = np.exp(np.pi * eps2 / g / phi0 /m)
    NMAX = 30
    print(beta2[:NMAX])
    plt.plot(k[:NMAX],beta2[:NMAX])
    plt.show()
def k_list(pw_data1,L=64):
    ki = 2 * np.pi / L
    #Note: We only want to create ks for the spectrum values and not for the a column
    #Save as np.array for extra functionality
    return np.array([ki * i for i in range(1,pw_data1.shape[1])])

def n_k(pw_data1,pw_data2,L=64):
    pw_a = pw_data1.drop(['a'],axis=1)
    pw_b = pw_data2.drop(['a'],axis=1)
    a_list = pw_data1['a'] # By construction, this is the same as pw_data2['a']
    ks = k_list(pw_data1,L=L)
    #Retrieve actual field eigenmode values
    fk_df = pw_a.multiply(a_list**2,axis=0) / ks**5
    fkdot_df = pw_b / ks**3
    #fk_df = pw_a
    #fkdot_df = pw_b
    #Some pretty cool element-wise multiplication between vector ks and dataframes fk_df and fkdot_df!
    ns = 1/(2*ks) * (ks**2 * fk_df + fkdot_df)
    #Store ks in the column headers
    ns.columns = ks
    #Add a values back for plotting purposes
    ns['a'] = a_list
    return ns
def n_k_red(pw_data1,pw_data2,L=64):
    pw_a = pw_data1.drop(['a'],axis=1)
    pw_b = pw_data2.drop(['a'],axis=1)
    a_list = pw_data1['a'] # By construction, this is the same as pw_data2['a']
    ks = k_list(pw_data1,L=L)
    #Retrieve actual field eigenmode values
    fk_df = pw_a.multiply(a_list**2,axis=0)
    fkdot_df = pw_b
    #Some pretty cool element-wise multiplication between vector ks and dataframes fk_df and fkdot_df!
    ns = 1/2 * (fk_df + fkdot_df)
    #Store ks in the column headers
    ns.columns = ks
    #Add a values back for plotting purposes
    ns['a'] = a_list
    return ns
def plot_n_t(ns,cols=[],save_img=False,img_name='Fig3_rubio',data=[]):
    if cols==[]:
        cols.append(int(ns.shape[1]/2))
    colors = np.flip(cm.magma(np.linspace(0,1,len(cols))),axis=0)
    for c in range(len(cols)):
        print('C:',c)
        #Retrieve k values from the column headers, discarding the 'a' in the last column
        xs = np.array(ns['a'])
        ks = np.array(ns.columns[:-1])
        ys = ns.iloc[:,cols[c]] * ks[cols[c]]**4
        plt.plot(xs,ys,label=ks[cols[c]],color=colors[c])
    if type(data)==pd.DataFrame:
        plt.plot(data['a'],10**3*np.exp(data['mean1']),label='Field oscillation',linestyle='dashed')
    plt.yscale('log')
    plt.xlabel('$a(t)$')
    plt.ylabel('$n_k(t)$')
    plt.title("Occupation number $n_k(t)$ for different eigenmodes $k$ (Fig.3, Rubio)\n"+trim_name(filefile)+'_field_%i'%pw_field_number)
    plt.legend(title='Spectrum at $k=$',loc='lower right')
    if save_img:
        plt.savefig(img_name + '_fig3_Rubio_f%i'%pw_field_number)
    plt.show()
def plot_n_k(ns,rows=[]):
    if rows==[]:
        rows.append(int(ns.shape[0]/2))
    for r in rows:
        #Retrieve k values from the column headers, discarding the 'a' in the last column
        xs = np.array(ns.columns[:-1])
        ys = ns.iloc[r,:-1] * xs**4
        plt.plot(xs,ys,label=ns['a'][r])
    #plt.yscale('log')
    plt.legend(title='Spectrum at $a=$',loc='lower right')
    plt.show()

def plot_n_k_red(ns,rows=[]):
    if rows==[]:
        rows.append(int(ns.shape[0]/2))
    for r in rows:
        #Retrieve k values from the column headers, discarding the 'a' in the last column
        xs = np.array(ns.columns[:-1])
        ys = ns.iloc[r,:-1]
        plt.plot(xs,ys,label=ns['a'][r])
    plt.yscale('log')
    plt.legend(title='Spectrum at $a=$',loc='lower right')
    plt.show()
def plot_tkachev2(ns,rows=[],save_img=False,img_name="Tkachev2"):
    if rows==[]:
        rows.append(int(ns.shape[0]/2))
        colors = np.flip(cm.magma(np.linspace(0,1,len(rows))),axis=0)
    else:
        colors = np.flip(cm.magma(np.linspace(0,1,len(rows))),axis=0)
    for j in range(len(rows)):
        #Retrieve k values from the column headers, discarding the 'a' in the last column
        xs = np.array(ns.columns[:-1]) / (2 * np.pi)
        ys = ns.iloc[rows[j],:-1] * 2
        plt.plot(xs,ys,label="$a=$%f"%ns['a'][rows[j]], color=colors[j])
    plt.yscale('log')
    plt.xscale('log')
    xl1 = np.linspace(min(xs),max(xs),1000)
    plt.plot(xl1, xl1**(-3/2)*10**2,linestyle='dashed',label='~$k^{-3/2}$')
    plt.plot(xl1, xl1**(-1)*10**4,linestyle='dashed',label='~$k^{-1}$')
    plt.legend(loc='lower right')
    plt.title("Occupation number $n_k$ (Fig.2, Tkachev)")
    plt.xlabel('k')
    plt.ylabel('$n_k$')
    if save_img==True:
        plt.savefig(img_name + '_fig2_tkachev_f%i'%pw_field_number)
    plt.show()
def plot_fig6(ns,rows=[],vlines=False,save_img=False,img_name="Fig 6"):
    if rows==[]:
        rows.append(int(ns.shape[0]/2))
        colors = np.flip(cm.magma(np.linspace(0,1,len(rows))),axis=0)
    else:
        colors = np.flip(cm.magma(np.linspace(0,1,len(rows))),axis=0)
    #colors = np.flip(cm.magma(np.array(rows)/max(rows)),axis=0)
    for j in range(len(rows)):
        #Retrieve k values from the column headers, discarding the 'a' in the last column
        xs = np.array(ns.columns[:-1]) / (2 * np.pi)
        ys = ns.iloc[rows[j],:-1] * (2 * xs**4)
        plt.plot(xs,ys,label=ns['a'][rows[j]], color=colors[j])
    plt.yscale('log')
    if vlines:
        vert_lines = np.array([1.25, 1.8, 3.35])/ (2*np.pi)
        for vl in vert_lines:
            plt.axvline(x = vl)
    plt.legend(title='Spectrum at $a=$',loc='lower right')
    plt.title("Occupation number $n_k$ at different scale factors $a$ (Fig.6, Z.Huang)\n"+trim_name(filefile)+'_field_%i'%pw_field_number)
    plt.xlabel(r"$k\Delta / 2 \pi$")
    plt.ylabel(r"$ k^4\; n_k /\; 2 \pi^2\; \rho$")
    if save_img==True:
        plt.savefig(img_name + '_fig6_field_%i'%pw_field_number)
    plt.show()
#%% Main
data = import_screen(filefile)
pw_data1, pw_data2 = import_pw('data/'+trim_name(filefile) + '_pw_%i.%s'%(pw_field_number,form))
n_df = n_k(pw_data1,pw_data2,L=L)
nred_df = n_k_red(pw_data1,pw_data2,L=L)
print(data['a'])
#plot_n_t(n_df,cols=[1,4,5,20,30],save_img=save_img,img_name=my_img_name,data=data)
my_rows = np.searchsorted(n_df['a'],my_rows)
plot_fig6(n_df, rows=my_rows, save_img=save_img,img_name=my_img_name)
tk_rows = sorted(tk_rows)
#plot_tkachev2(n_df,rows=tk_rows,save_img=save_img,img_name=my_img_name)
#plot_gw(pw_data1,trim=2,save_img=False)
#plt.plot(pw_data1.iloc[4,:-1]**2)
#plot_pw_k(pw_data1,save_img=True,trim=10)
#plt.plot(np.abs(data['a']*data['mean1']))
#plt.yscale('log')
#plt.plot(data['a'][truncate:],1/(data['omega'][truncate:]+1)-1)
#plt.plot(np.fft.fft(np.cos(np.linspace(0,100,1000))))
#plt.plot(sp.ellipj(np.linspace(0,10,100),1/2**0.5)[1])
#plt.show()
#plot_fig1(data,error=True,img_name=my_img_name)
#plot_fig2_tchakev(data)
#df1,df2 = import_GW(GW_file)
#plot_gw(df1,img_name='df1')
#plot_gw(df2,img_name='df2')
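# --- Hypothetical standalone sketch (not part of the original script) ---
# Demonstrates the broadcasting pattern used in n_k() above: a (time x k)
# power-spectrum DataFrame is scaled per-row by a(t)**2 via .multiply(..., axis=0)
# and per-column by powers of k via plain division against the 1-D ks array.
# The numbers below are toy values, not real spectrum data.
import numpy as np
import pandas as pd

ks = np.array([1.0, 2.0, 4.0])                    # one column per eigenmode k
pw = pd.DataFrame([[8.0, 16.0, 64.0],
                   [2.0, 4.0, 8.0]])           # one row per output time
a_list = pd.Series([1.0, 2.0])                    # scale factor a(t) at each time

fk_df = pw.multiply(a_list**2, axis=0) / ks**5  # rows scaled by a**2, columns by k**-5
print(fk_df.iloc[1, 2])  # 8.0 * 2.0**2 / 4.0**5 = 0.03125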
# coding: utf-8
import numpy as np
import random
import tensorflow as tf
import logging
import imageio
import read_data
from keras.utils import to_categorical
# from data_generator import DataGenerator
from place_pick_mil import MIL
# from evaluation.eval_reach import evaluate_vision_reach
# from evaluation.eval_push import evaluate_push
from tensorflow.python.platform import flags
import os
from functools import reduce
from operator import mul
def get_num_params():
    nums = 0
    for variable in tf.trainable_variables():
        shape = variable.get_shape()
        nums += reduce(mul, [dim.value for dim in shape], 1)
    return nums
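# Illustrative sanity check (not used in training): get_num_params above
# multiplies out each variable's shape with reduce(mul, ..., 1); the same
# arithmetic on a plain [3, 4, 5] kernel shape yields 60 parameters.
def _demo_param_count(shape=(3, 4, 5)):
    # same reduce(mul, ...) pattern as get_num_params, on plain ints
    return reduce(mul, shape, 1)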
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
FLAGS = flags.FLAGS
LOGGER = logging.getLogger(__name__)
## Dataset/method options
flags.DEFINE_string('experiment', 'pick_place', 'sim_vision_reach or sim_push')
flags.DEFINE_string('data_path', './pick_dataset_origin/human_robot_dataset/',
'path to the directory where demo files that containing robot states and actions are stored')
flags.DEFINE_string('demo_gif_dir', 'data', 'path to the videos of demonstrations')
flags.DEFINE_string('gif_prefix', 'object', 'prefix of the video directory for each task, e.g. object_0 for task 0')
flags.DEFINE_integer('im_width', 264,
'width of the images in the demo videos, 125 for sim_push, and 80 for sim_vision_reach')
flags.DEFINE_integer('im_height', 196,
'height of the images in the demo videos, 125 for sim_push, and 64 for sim_vision_reach')
flags.DEFINE_integer('num_channels', 3, 'number of channels of the images in the demo videos')
flags.DEFINE_integer('T', 3, 'time horizon of the demo videos, 50 for reach, 100 for push')
flags.DEFINE_bool('hsv', False, 'convert the image to HSV format')
flags.DEFINE_bool('use_noisy_demos', False, 'use noisy demonstrations or not (for domain shift)')
flags.DEFINE_string('noisy_demo_gif_dir', None, 'path to the videos of noisy demonstrations')
flags.DEFINE_string('noisy_demo_file', None,
'path to the directory where noisy demo files that containing robot states and actions are stored')
flags.DEFINE_bool('no_action', True, 'do not include actions in the demonstrations for inner update')
flags.DEFINE_bool('no_state', False, 'do not include states in the demonstrations during training')
flags.DEFINE_bool('no_final_eept', False, 'do not include final ee pos in the demonstrations for inner update')
flags.DEFINE_bool('zero_state', True,
'zero-out states (meta-learn state) in the demonstrations for inner update (used in the paper with video-only demos)')
flags.DEFINE_bool('two_arms', False, 'use two-arm structure when state is zeroed-out')
flags.DEFINE_integer('training_set_size', -1, 'size of the training set, 1500 for sim_reach, 693 for sim_push, and \
                    -1 for all data except those in validation set')
flags.DEFINE_integer('val_set_size', 150, 'size of the validation set, 150 for sim_reach and 76 for sim_push')
## Training options
flags.DEFINE_integer('metatrain_iterations', 30000, 'number of metatraining iterations.') # 30k for pushing, 50k for reaching and placing
flags.DEFINE_integer('meta_batch_size', 16, 'number of tasks sampled per meta-update') # 15 for reaching, 15 for pushing, 12 for placing
flags.DEFINE_integer('meta_test_batch_size', 1, 'number of tasks sampled per meta-update') # 15 for reaching, 15 for pushing, 12 for placing
flags.DEFINE_float('meta_lr', 1e-4, 'the base learning rate of the generator')
flags.DEFINE_integer('update_batch_size', 1,
'number of examples used for inner gradient update (K for K-shot learning).')
flags.DEFINE_float('train_update_lr', 1e-4,
'step size alpha for inner gradient update.') # 0.001 for reaching, 0.01 for pushing and placing
flags.DEFINE_integer('num_updates', 1, 'number of inner gradient updates during training.') # 5 for placing
flags.DEFINE_bool('clip', True, 'use gradient clipping for fast gradient')
flags.DEFINE_float('clip_max', 100.0, 'maximum clipping value for fast gradient')
flags.DEFINE_float('clip_min', -100.0, 'minimum clipping value for fast gradient')
# flags.DEFINE_float('clip_max', 20.0, 'maximum clipping value for fast gradient')
# flags.DEFINE_float('clip_min', -20.0, 'minimum clipping value for fast gradient')
flags.DEFINE_bool('fc_bt', True, 'use bias transformation for the first fc layer')
flags.DEFINE_bool('all_fc_bt', False, 'use bias transformation for all fc layers')
flags.DEFINE_bool('conv_bt', False, 'use bias transformation for the first conv layer, N/A for using pretraining')
flags.DEFINE_integer('bt_dim', 10, 'the dimension of bias transformation for FC layers')
flags.DEFINE_string('pretrain_weight_path', 'N/A', 'path to pretrained weights')
flags.DEFINE_bool('train_pretrain_conv1', False, 'whether to finetune the pretrained weights')
flags.DEFINE_bool('two_head', True, 'use two-head architecture')
flags.DEFINE_bool('learn_final_eept', False, 'learn an auxiliary loss for predicting final end-effector pose')
flags.DEFINE_bool('learn_final_eept_whole_traj', False, 'learn an auxiliary loss for predicting final end-effector pose \
by passing the whole trajectory of eepts (used for video-only models)')
flags.DEFINE_bool('stopgrad_final_eept', True,
'stop the gradient when concatenate the predicted final eept with the feature points')
flags.DEFINE_integer('final_eept_min', 6, 'first index of the final eept in the action array')
flags.DEFINE_integer('final_eept_max', 8, 'last index of the final eept in the action array')
flags.DEFINE_float('final_eept_loss_eps', 0.1, 'the coefficient of the auxiliary loss')
flags.DEFINE_float('act_loss_eps', 1.0, 'the coefficient of the action loss')
flags.DEFINE_float('loss_multiplier', 100.0,
'the constant multiplied with the loss value, 100 for reach and 50 for push')
flags.DEFINE_bool('use_l1_l2_loss', False, 'use a loss with combination of l1 and l2')
flags.DEFINE_float('l2_eps', 0.01, 'coefficient of l2 loss')
flags.DEFINE_bool('shuffle_val', False, 'whether to choose the validation set via shuffling or not')
## Model options
flags.DEFINE_integer('random_seed', 0, 'random seed for training')
flags.DEFINE_bool('fp', True, 'use spatial soft-argmax or not')
flags.DEFINE_string('norm', 'layer_norm', 'batch_norm, layer_norm, or None')
flags.DEFINE_bool('dropout', False, 'use dropout for fc layers or not')
flags.DEFINE_float('keep_prob', 0.5, 'keep probability for dropout')
flags.DEFINE_integer('num_filters', 64,
'number of filters for conv nets -- 64 for placing, 16 for pushing, 40 for reaching.')
flags.DEFINE_integer('filter_size', 3, 'filter size for conv nets -- 3 for placing, 5 for pushing, 3 for reaching.')
flags.DEFINE_integer('num_conv_layers', 5, 'number of conv layers -- 5 for placing, 4 for pushing, 3 for reaching.')
flags.DEFINE_integer('num_strides', 3,
'number of conv layers with strided filters -- 3 for placing, 4 for pushing, 3 for reaching.')
flags.DEFINE_bool('conv', True, 'whether or not to use a convolutional network, only applicable in some cases')
flags.DEFINE_integer('num_fc_layers', 3, 'number of fully-connected layers')
flags.DEFINE_integer('layer_size', 200, 'hidden dimension of fully-connected layers')
flags.DEFINE_bool('temporal_conv_2_head', True,
'whether or not to use temporal convolutions for the two-head architecture in video-only setting.')
flags.DEFINE_bool('temporal_conv_2_head_ee', False, 'whether or not to use temporal convolutions for the two-head architecture in video-only setting \
for predicting the ee pose.')
flags.DEFINE_integer('temporal_filter_size', 10, 'filter size for temporal convolution')
flags.DEFINE_integer('temporal_num_filters', 32, 'number of filters for temporal convolution')
flags.DEFINE_integer('temporal_num_filters_ee', 32, 'number of filters for temporal convolution for ee pose prediction')
flags.DEFINE_integer('temporal_num_layers', 3, 'number of layers for temporal convolution for ee pose prediction')
flags.DEFINE_integer('temporal_num_layers_ee', 3, 'number of layers for temporal convolution for ee pose prediction')
flags.DEFINE_string('init', 'xavier', 'initializer for conv weights. Choose among random, xavier, and he')
flags.DEFINE_bool('max_pool', False, 'Whether or not to use max pooling rather than strided convolutions')
flags.DEFINE_bool('stop_grad', False, 'if True, do not use second derivatives in meta-optimization (for axis_angle)')
## Logging, saving, and testing options
flags.DEFINE_bool('log', True, 'if false, do not log summaries, for debugging code.')
flags.DEFINE_string('save_dir', './atmaml_pick_logs', 'directory for summaries and checkpoints.')
# flags.DEFINE_string('save_dir', './amaml_human_pick_logs', 'directory for summaries and checkpoints.')
# flags.DEFINE_bool('resume', True, 'resume training if there is a model available')
flags.DEFINE_bool('resume', False, 'resume training if there is a model available')
flags.DEFINE_bool('train', True, 'True to train, False to test.')
flags.DEFINE_integer('restore_iter', -1, 'iteration to load model (-1 for latest model)')
# flags.DEFINE_integer('restore_iter', 20000, 'iteration to load model (-1 for latest model)')
flags.DEFINE_integer('train_update_batch_size', -1, 'number of examples used for gradient update during training \
(use if you want to test with a different number).')
flags.DEFINE_integer('test_update_batch_size', 1, 'number of demos used during test time')
flags.DEFINE_float('gpu_memory_fraction', 0.8, 'fraction of memory used in gpu')
flags.DEFINE_bool('record_gifs', True, 'record gifs during evaluation')
flags.DEFINE_integer('color_num', 4, '')
flags.DEFINE_integer('object_num', 4, '')
flags.DEFINE_integer('train_task_num', 6, '')
flags.DEFINE_integer('task_num', 8, '')
flags.DEFINE_integer('demo_num', 5, '')
flags.DEFINE_integer('index_range', 20, '')
flags.DEFINE_integer('index_train_range', 20, '')
# flags.DEFINE_string('demo_type', 'robot', 'robot or human')
flags.DEFINE_string('demo_type', 'human', 'robot or human')
flags.DEFINE_string('extra_type', 'robot', 'robot or human') #opposite to demo_type
flags.DEFINE_string('compare_type', 'robot', 'robot or human')
flags.DEFINE_string('target_type', 'robot', '')
# flags.DEFINE_float('weight_xy', 0.999, '')
# flags.DEFINE_float('weight_z', 0.001, '')
# flags.DEFINE_float('weight_rxyz', 0.001, '')
flags.DEFINE_integer('clip_action_size', 2, 'size of embedding')
flags.DEFINE_integer('action_size', 2, 'size of embedding')
flags.DEFINE_integer('state_size', 6, 'size of embedding')
flags.DEFINE_integer('output_data', 6, '')
# flags.DEFINE_float('margin', 1.0, 'margin of loss')
# flags.DEFINE_float('pre_margin', 1.0, 'margin of loss')
flags.DEFINE_float('pre_margin_clip', 1.0, '')
flags.DEFINE_float('pre_margin', 2.0, 'pre_margin of loss')
flags.DEFINE_float('margin', 2.0, 'margin of loss')
flags.DEFINE_float('pre_margin_coefficient', 10.0, 'coefficient of the pre-contrastive loss')
flags.DEFINE_float('margin_coefficient', 10.0, 'coefficient of the post-contrastive loss')
flags.DEFINE_float('pre_tar_coefficient', 0.0, '')
flags.DEFINE_float('post_tar_coefficient', 0.0, '')
flags.DEFINE_bool('norm_pre_contrastive', True, 'norm for pre_task contrastive ')
flags.DEFINE_bool('norm_post_contrastive', True, 'norm for post_task contrastive ')
flags.DEFINE_bool('all_frame_contrastive', True, '')
flags.DEFINE_bool('cos_contrastive_loss', True, 'True for cos loss, False for kl loss ')
flags.DEFINE_bool('pre_contrastive', False, 'task contrastive for pre-update ')
flags.DEFINE_bool('post_contrastive', False, 'task contrastive for post-update ')
# flags.DEFINE_bool('amaml', False, 'true for amaml')
flags.DEFINE_bool('amaml', True, 'true for amaml')
flags.DEFINE_bool('amaml_extra_domain', True, 'true for extra_domain')
flags.DEFINE_bool('cross_domain', True, 'use human and robot ')
flags.DEFINE_bool('cross_adapt_domain', True, 'adapt human/robot demos')
flags.DEFINE_bool('zero_robot_domain', True, 'adapt human/robot demos')
flags.DEFINE_bool('random_adapt_domain', True, 'randomly adapt human/robot demos')
flags.DEFINE_bool('random_single_domain', True, 'randomly adapt human/robot demos')
flags.DEFINE_integer('random_single_domain_margin', 50, 'random single domain margin')
flags.DEFINE_integer('embed_size', 0, 'size of embedding')
flags.DEFINE_bool('tar_mil', False, 'with target loss')
flags.DEFINE_bool('record_loss', False, 'true for amaml')
flags.DEFINE_bool('contrastive_fc', False, '')
# flags.DEFINE_bool('amaml', False, 'true for amaml')
def generate_data(if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
color_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.color_num
print('color_list', color_list)
object_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.object_num
print('object_list', object_list)
if if_train:
task_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.train_task_num
else:
task_list = np.random.randint(FLAGS.train_task_num, FLAGS.task_num, size=batch_size)
print('task_list', task_list)
demo_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.demo_num
print('demo_list', demo_list)
target_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.demo_num
print('target_list', target_list)
obsas = []
obsbs = []
stateas = []
statebs = []
actionas = []
actionbs = []
color_num = ['color_blue', 'color_green', 'color_orange', 'color_yellow']
# color_num = ['color_blue', 'color_green', 'color_orange']
object_num = ['object_type_animal', 'object_type_car', 'object_type_dinosaur', 'object_type_tool']
for element in range(0, batch_size):
demo_path = '%s/%s/%s/%s/task_%d/demo_%d' % (
FLAGS.data_path, color_num[color_list[element]], object_num[object_list[element]], FLAGS.demo_type,
task_list[element], demo_list[element])
target_path = '%s/%s/%s/%s/task_%d/demo_%d' % (
FLAGS.data_path, color_num[color_list[element]], object_num[object_list[element]], FLAGS.target_type,
task_list[element], target_list[element])
print('demo_path', demo_path)
print('target_path', target_path)
index = np.random.randint(0, FLAGS.index_train_range)
# if if_train:
# index = np.random.randint(0, FLAGS.index_train_range)
# else:
# index = np.random.randint(FLAGS.index_train_range, FLAGS.index_range)
if FLAGS.demo_type == 'robot':
obsa, statea, actiona = read_data.Read_Robot_Data2(demo_path, FLAGS.T, index)
elif FLAGS.demo_type == 'human':
obsa, statea, actiona = read_data.Read_Human_Data2(demo_path, FLAGS.T, index)
obsb, stateb, actionb = read_data.Read_Robot_Data2(target_path, FLAGS.T, index)
obsas.append(obsa)
obsbs.append(obsb)
stateas.append(statea)
statebs.append(stateb)
actionas.append(actiona)
actionbs.append(actionb)
obsas = np.reshape(obsas, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels])
obsbs = np.reshape(obsbs, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels])
actionas = np.reshape(actionas, [batch_size, FLAGS.T, FLAGS.output_data])
actionbs = np.reshape(actionbs, [batch_size, FLAGS.T, FLAGS.output_data])
stateas = np.zeros([batch_size, FLAGS.T, FLAGS.output_data])
statebs = np.zeros([batch_size, FLAGS.T, FLAGS.output_data])
return obsas, obsbs, actionas, actionbs, stateas, statebs
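# Illustrative check (hypothetical sizes): the sampling trick used above,
# (np.random.randint(0, 100, size=n) + 1) % k, always lands in [0, k),
# so the result can safely index a list of k colors/objects/demos.
def _demo_index_sampling(k=4, n=1000):
    draws = (np.random.randint(0, 100, size=n) + 1) % k
    return int(draws.min()), int(draws.max())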
def generate_atmaml_data(if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
color_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.color_num
# print('color_list', color_list)
compare_color_list = (color_list + np.random.randint(1, FLAGS.color_num-1)) % FLAGS.color_num
# print('compare_color_list', compare_color_list)
object_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.object_num
# print('object_list', object_list)
compare_object_list = (object_list + np.random.randint(1, FLAGS.object_num-1)) % FLAGS.object_num
# print('compare_object_list', compare_object_list)
if if_train:
task_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.train_task_num
compare_task_list = (task_list + np.random.randint(1, FLAGS.train_task_num)) % FLAGS.train_task_num
# compare_task_list = (task_list + np.random.randint(1, FLAGS.task_num - 1)) % FLAGS.task_num
else:
task_list = np.random.randint(FLAGS.train_task_num, FLAGS.task_num, size=batch_size)
compare_task_list = np.random.randint(FLAGS.train_task_num, FLAGS.task_num, size=batch_size)
# print('task_list', task_list)
# print('compare_task_list', compare_task_list)
demo_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.demo_num
# print('demo_list', demo_list)
target_list = (np.random.randint(0, 100, size=batch_size) + 1) % FLAGS.demo_num
# print('target_list', target_list)
    compare_demo_list = (demo_list + np.random.randint(1, FLAGS.demo_num - 1)) % FLAGS.demo_num  # derived from demo_list, matching the color/object/task pattern above
# print('compare_demo_list', compare_demo_list)
obsas = []
obsbs = []
obscs = []
extra_obses = []
stateas = []
statebs = []
statecs = []
extra_states = []
actionas = []
actionbs = []
actioncs = []
extra_actions = []
labelabs = []
labelcs = []
color_num = ['color_blue', 'color_green', 'color_orange', 'color_yellow']
# color_num = ['color_blue', 'color_green', 'color_orange']
object_num = ['object_type_animal', 'object_type_car', 'object_type_dinosaur', 'object_type_tool']
for element in range(0, batch_size):
demo_path = '%s/%s/%s/%s/task_%d/demo_%d' % (
FLAGS.data_path, color_num[color_list[element]], object_num[object_list[element]], FLAGS.demo_type,
task_list[element], demo_list[element])
target_path = '%s/%s/%s/%s/task_%d/demo_%d' % (
FLAGS.data_path, color_num[color_list[element]], object_num[object_list[element]], FLAGS.target_type,
task_list[element], target_list[element])
compare_path = '%s/%s/%s/%s/task_%d/demo_%d' % (
FLAGS.data_path, color_num[compare_color_list[element]], object_num[compare_object_list[element]],
FLAGS.compare_type, compare_task_list[element], compare_demo_list[element])
if FLAGS.cross_domain:
extra_path = '%s/%s/%s/%s/task_%d/demo_%d' % (FLAGS.data_path, color_num[color_list[element]], object_num[object_list[element]],
FLAGS.extra_type, task_list[element], demo_list[element])
if element==0:
if FLAGS.cross_domain:
print('extra_path', extra_path)
print('demo_path', demo_path)
print('target_path', target_path)
print('compare_path', compare_path)
index = np.random.randint(0, FLAGS.index_train_range)
# if if_train:
# index = np.random.randint(0, FLAGS.index_train_range)
# else:
# index = np.random.randint(FLAGS.index_train_range, FLAGS.index_range)
if FLAGS.demo_type == 'robot':
obsa, statea, actiona = read_data.Read_Robot_Data2(demo_path, FLAGS.T, index)
        else:
obsa, statea, actiona = read_data.Read_Human_Data2(demo_path, FLAGS.T, index)
if FLAGS.compare_type == 'robot':
obsc, statec, actionc = read_data.Read_Robot_Data2(compare_path, FLAGS.T, index)
else:
obsc, statec, actionc = read_data.Read_Human_Data2(compare_path, FLAGS.T, index)
if FLAGS.cross_domain:
if FLAGS.extra_type == 'robot':
extra_obs, extra_state, extra_action = read_data.Read_Robot_Data2(extra_path, FLAGS.T, index)
else:
extra_obs, extra_state, extra_action = read_data.Read_Human_Data2(extra_path, FLAGS.T, index)
obsb, stateb, actionb = read_data.Read_Robot_Data2(target_path, FLAGS.T, index)
if FLAGS.tar_mil:
labelab = to_categorical(color_list[element] * FLAGS.index_train_range + index, FLAGS.color_num*FLAGS.index_train_range)
labelc = to_categorical(compare_color_list[element] * FLAGS.index_train_range + index, FLAGS.color_num*FLAGS.index_train_range)
labelabs.append(labelab)
labelcs.append(labelc)
# print('element', element, 'labela and labelb', labelab, 'labelc', labelc)
# print('----------------------------------------------------------------')
obsas.append(obsa)
obsbs.append(obsb)
obscs.append(obsc)
stateas.append(statea)
statebs.append(stateb)
statecs.append(statec)
actionas.append(actiona)
actionbs.append(actionb)
actioncs.append(actionc)
if FLAGS.cross_domain:
extra_obses.append(extra_obs)
extra_states.append(extra_state)
extra_actions.append(extra_action)
obsas = np.reshape(obsas, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels])
obsbs = np.reshape(obsbs, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels])
obscs = np.reshape(obscs, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels])
actionas = np.reshape(actionas, [batch_size, FLAGS.T, FLAGS.output_data])
actionbs = np.reshape(actionbs, [batch_size, FLAGS.T, FLAGS.output_data])
actioncs = np.reshape(actioncs, [batch_size, FLAGS.T, FLAGS.output_data])
stateas = np.zeros([batch_size, FLAGS.T, FLAGS.output_data])
statebs = np.zeros([batch_size, FLAGS.T, FLAGS.output_data])
statecs = np.zeros([batch_size, FLAGS.T, FLAGS.output_data])
if FLAGS.tar_mil:
labelabs = np.reshape(labelabs, [batch_size, 1, -1])
labelcs = np.reshape(labelcs, [batch_size, 1, -1])
# print('labelabs', labelabs.shape, 'labelcs', labelcs)
else:
labelabs = np.zeros([batch_size, 1, FLAGS.color_num*FLAGS.index_train_range])
        labelcs = np.zeros([batch_size, 1, FLAGS.color_num * FLAGS.index_train_range])
if FLAGS.cross_domain:
extra_obses = np.reshape(extra_obses, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels])
extra_actions = np.reshape(extra_actions, [batch_size, FLAGS.T, FLAGS.output_data])
extra_states = np.zeros([batch_size, FLAGS.T, FLAGS.output_data])
return obsas, obsbs, obscs, extra_obses, \
actionas, actionbs, actioncs, extra_actions, \
stateas, statebs, statecs, extra_states, \
labelabs, labelcs
else:
return obsas, obsbs, obscs, actionas, actionbs, actioncs, stateas, statebs, statecs, labelabs, labelcs
def generate_single_place_data(obsas, obsbs, actionas, actionbs, stateas, statebs, if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
# print('obsas, obsbs, actionas, actionbs, stateas, statebs', obsas.shape, obsbs.shape, actionas.shape, actionbs.shape, stateas.shape, statebs.shape)
obsbs = np.reshape(obsbs[:, 0, :], [batch_size , 1, -1])
actionbs = np.reshape(actionbs[:, 1, :], [batch_size , 1, -1])
statebs = np.reshape(statebs[:, -1, :], [batch_size , 1, -1])
# print('obsas, obsbs, actionas, actionbs, stateas, statebs of generate_single_place_data', obsas.shape, obsbs.shape, actionas.shape, actionbs.shape,
# stateas.shape, statebs.shape)
return obsas, obsbs, actionas, actionbs, stateas, statebs
def generate_single_atmaml_place_data(obsbs, actionbs, statebs, if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
# print('obsas, obsbs, actionas, actionbs, stateas, statebs', obsbs.shape, actionbs.shape, statebs.shape)
obsbs = np.reshape(obsbs[:, 0, :], [batch_size , 1, -1])
actionbs = np.reshape(actionbs[:, 1, :], [batch_size , 1, -1])
statebs = np.reshape(statebs[:, -1, :], [batch_size , 1, -1])
return obsbs, actionbs, statebs
def handle_action(actionas, actionbs, stateas, statebs, if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
actionas = actionas[:, :, :FLAGS.clip_action_size]
# print('actionas is', actionas.shape)
actionas = np.reshape(actionas[:, 1, :], [batch_size, 1, -1])
actionas = np.tile(actionas, (1, 3, 1))
actionbs = actionbs[:, :, :FLAGS.clip_action_size]
actionbs = np.reshape(actionbs[:, 1, :], [batch_size, 1, -1])
actionbs = np.tile(actionbs, (1, 3, 1))
stateas = stateas[:, :, :FLAGS.state_size]
statebs = statebs[:, :, :FLAGS.state_size]
return actionas, actionbs, stateas, statebs
def handle_atmaml_action(actionas, actionbs, stateas, statebs, if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
actionas = actionas[:, :, :FLAGS.clip_action_size]
pick = np.reshape(actionas[:, 0, :], [batch_size, 1, -1])
place = np.reshape(actionas[:, 1, :], [batch_size, 1, -1])
pick_place = np.concatenate([pick, place], axis=2)
# place_place = np.concatenate([place, place], axis=2)
actionas = np.tile(pick_place, (1, 3, 1))
# actionas = np.tile(pick_place, (1, 2, 1))
# actionas = np.concatenate([actionas, place_place], axis=1)
actionbs = actionbs[:, :, :FLAGS.clip_action_size]
pick = np.reshape(actionbs[:, 0, :], [batch_size, 1, -1])
place = np.reshape(actionbs[:, 1, :], [batch_size, 1, -1])
pick_place = np.concatenate([pick, place], axis=2)
# place_place = np.concatenate([place, place], axis=2)
actionbs = np.tile(pick_place, (1, 3, 1))
# actionbs = np.tile(pick_place, (1, 2, 1))
# actionbs = np.concatenate([actionbs, place_place], axis=1)
stateas = stateas[:, :, :FLAGS.state_size]
statebs = statebs[:, :, :FLAGS.state_size]
return actionas, actionbs, stateas, statebs
def handle_atmaml_action2(actionas, actionbs, stateas, statebs, if_train=True):
if if_train:
batch_size = FLAGS.meta_batch_size
else:
batch_size = FLAGS.meta_test_batch_size
actionas = actionas[:, :, :FLAGS.clip_action_size]
place = np.reshape(actionas[:, 1, :], [batch_size, 1, -1])
actionas = np.tile(place, (1, 3, 1))
actionbs = actionbs[:, :, :FLAGS.clip_action_size]
place = np.reshape(actionbs[:, 1, :], [batch_size, 1, -1])
actionbs = np.tile(place, (1, 3, 1))
# print('statebs',statebs)
stateas = stateas[:, :, :FLAGS.state_size]
statebs = statebs[:, :, :FLAGS.state_size]
return actionas, actionbs, stateas, statebs
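# Illustrative shape check (synthetic sizes, not the real demo data):
# handle_atmaml_action2 keeps only the place-step action (timestep 1)
# and repeats it across all T=3 timesteps, so a [batch, T, act] array
# keeps its shape but every timestep holds the timestep-1 action.
def _demo_tile_place_action(batch_size=2, T=3, act_dim=2):
    actions = np.arange(batch_size * T * act_dim, dtype=float).reshape(batch_size, T, act_dim)
    place = np.reshape(actions[:, 1, :], [batch_size, 1, -1])
    return np.tile(place, (1, T, 1))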
def train(graph, model, saver, sess, save_dir, restore_itr=0):
"""
Train the model.
"""
PRINT_INTERVAL = 100
TEST_PRINT_INTERVAL = 100
# TEST_PRINT_INTERVAL = 1
SUMMARY_INTERVAL = 100
# SAVE_INTERVAL = 10
SAVE_INTERVAL = 1000
TOTAL_ITERS = FLAGS.metatrain_iterations
prelosses, postlosses = [], []
save_dir = save_dir + '/model'
train_writer = tf.summary.FileWriter(save_dir, graph)
# actual training.
if restore_itr == 0:
training_range = range(TOTAL_ITERS)
else:
training_range = range(restore_itr + 1, TOTAL_ITERS)
with graph.as_default():
nums= get_num_params()
print('total parameters', nums)
for itr in training_range:
if FLAGS.cross_domain:
obsas, obsbs, obscs, obses, actionas, actionbs, actioncs, actiones, \
stateas, statebs, statecs, statees, labelabs, labelcs = generate_atmaml_data(if_train=True)
actionas, actionbs, stateas, statebs = handle_atmaml_action2(actionas, actionbs, stateas, statebs,
if_train=True)
actioncs, actiones, statecs, statees = handle_atmaml_action2(actioncs, actiones, statecs, statees,
if_train=True)
# print('actiones', actiones.shape, actiones[0])
if FLAGS.cross_adapt_domain:
# probas = np.ones([FLAGS.meta_batch_size, 1])
# probas = np.random.randint(0, 100, [FLAGS.meta_batch_size, 1]) / 100.0
if FLAGS.random_adapt_domain:
probas = np.random.randint(0, 100, [FLAGS.meta_batch_size, 1]) / 100.0
probes = np.random.randint(0, 100, [FLAGS.meta_batch_size, 1]) / 100.0
if FLAGS.random_single_domain:
single_probas = np.random.randint(0, 2, [FLAGS.meta_batch_size, 1]) / 1.0
single_probes = 1.0 - single_probas
for mbs in range(0, FLAGS.meta_batch_size):
m = np.random.randint(0, 100)
if m < FLAGS.random_single_domain_margin:
probas[mbs] = single_probas[mbs]
probes[mbs] = single_probes[mbs]
else:
probas = probes = np.ones([FLAGS.meta_batch_size, 1])
# print('probas', probas, 'probes', probes)
print('probas', np.squeeze(probas))
print('probes', np.squeeze(probes))
feed_dict = {
model.obsa: obsas,
model.obsb: obsbs,
model.obsc: obscs,
model.obse: obses,
model.statea: stateas,
model.stateb: statebs,
model.actiona: actionas,
model.actionb: actionbs,
model.actione: actiones,
model.labelab: labelabs,
model.prop_a: probas,
model.prop_e: probes
}
else:
feed_dict = {
model.obsa: obsas,
model.obsb: obsbs,
model.obsc: obscs,
model.obse: obses,
model.statea: stateas,
model.stateb: statebs,
model.actiona: actionas,
model.actionb: actionbs,
model.actione: actiones,
model.labelab: labelabs
}
else:
obsas, obsbs, obscs, actionas, actionbs, actioncs, stateas, statebs, statecs, labelabs, labelcs = generate_atmaml_data(if_train=True)
actionas, actionbs, stateas, statebs = handle_atmaml_action2(actionas, actionbs, stateas, statebs,
if_train=True)
feed_dict = {
model.obsa: obsas,
model.obsb: obsbs,
model.obsc: obscs,
model.statea: stateas,
model.stateb: statebs,
model.actiona: actionas,
model.actionb: actionbs,
model.labelab: labelabs
}
# print('obsas, obsbs, obscs, actionas, actionbs, actioncs, stateas, statebs, statecs, labelabs, labelcs',
# obsas.shape, obsbs.shape, obscs.shape, actionas.shape, actionbs.shape, actioncs.shape,
# stateas.shape, statebs.shape, statecs.shape, labelabs.shape, labelcs.shape)
# input_tensors = [model.train_op, model.total_mix_loss, model.total_losses2[model.num_updates - 1],
# model.test_act_op, model.total_semantic_loss, model.total_loss1, model.train_summ_op]
input_tensors = [model.train_op, model.total_mix_loss, model.total_losses2[model.num_updates - 1],
model.test_act_op, model.total_semantic_loss, model.total_loss1, model.outputas,
model.semantic_outputs, model.post_pre_semantic_loss, model.train_summ_op]
with graph.as_default():
# nums= get_num_params()
# print('total parameters', nums)
results = sess.run(input_tensors, feed_dict=feed_dict)
if FLAGS.record_loss:
train_writer.add_summary(results[-1], itr)
print('Iteration %d: average preloss is %.2f, post_pre_semantic_loss is %.2f, total_mix_loss is %.2f, average postloss is %.2f, '
'average semantic loss is %.2f' % (itr, np.mean(results[5]), np.mean(results[8]), np.mean(results[1]), np.mean(results[2]), np.mean(results[4])))
# print('demo state', stateas[-1])
# print('target state', statebs[-1])
# print ('results',results[3].shape,results[3][0][0], 'demos',actionbs[0][0])
# print('demo actionas', actionas[-1])
print('predict outputa', results[6][-1].shape)
print('predict pre_semantic_outputa', results[7][0][-1].shape, results[7][0][-1])
print('predict pre_semantic_outputb', results[7][1][-1].shape, results[7][1][-1])
print('predict pre_semantic_outputc', results[7][2][-1].shape, results[7][2][-1])
print('demo actionbs', actionbs[-1])
print('predicted actions', results[3][-1])
if itr != 0 and (itr % TEST_PRINT_INTERVAL == 0):
if FLAGS.cross_domain:
obsas, obsbs, obscs, obses, actionas, actionbs, actioncs, actiones, \
stateas, statebs, statecs, extra_states, labelabs, labelcs = generate_atmaml_data(if_train=False)
else:
obsas, obsbs, obscs, actionas, actionbs, actioncs, stateas, statebs, statecs, labelabs, labelcs = generate_atmaml_data(if_train=False)
actionas, actionbs, stateas, statebs = handle_atmaml_action2(actionas, actionbs, stateas, statebs, if_train=False)
obsbs, actionbs, statebs = generate_single_atmaml_place_data(obsbs, actionbs, statebs, if_train=False)
print('actionas, actionbs, stateas, statebs', actionas.shape, actionbs.shape, stateas.shape, statebs.shape)
if FLAGS.cross_domain:
if FLAGS.cross_adapt_domain:
if not FLAGS.zero_robot_domain:
actioncs, actiones, statecs, statees = handle_atmaml_action2(actioncs, actiones, statecs,
statees, if_train=False)
probas = np.ones([1, 1])
# np.random.randint(0, 1) always returns 0, so probes is all zeros at test time
probes = np.random.randint(0, 1, [1, 1]) / 100.0
feed_dict = {
model.obsa: obsas,
model.obsb: obsbs,
model.obsc: obscs,
model.obse: obses,
model.statea: stateas,
model.stateb: statebs,
model.actiona: actionas,
model.actionb: actionbs,
model.actione: actiones,
model.labelab: labelabs,
model.prop_a: probas,
model.prop_e: probes
}
else:
feed_dict = {
model.obsa: obsas,
model.obsb: obsbs,
model.obsc: obscs,
model.obse: obses,
model.statea: stateas,
model.stateb: statebs,
model.actiona: actionas,
model.actionb: actionbs,
model.actione: actiones,
model.labelab: labelabs
}
else:
feed_dict = {
model.obsa: obsas,
model.obsb: obsbs,
model.statea: stateas,
model.stateb: statebs,
model.actiona: actionas,
model.actionb: actionbs,
model.labelab: labelcs
}
input_tensors = [model.total_loss1, model.total_losses2[model.num_updates - 1],
model.test_act_op]
with graph.as_default():
results = sess.run(input_tensors, feed_dict=feed_dict)
# train_writer.add_summary(results[-1], itr)
print('test loss', np.mean(results[1]))
# print('test demoa', actionas)
print('test demob', actionbs)
print('test predict', results[2])
if itr != 0 and (itr % SAVE_INTERVAL == 0 or itr == training_range[-1]):
print('Saving model to: %s' % (save_dir + '_%d' % itr))
with graph.as_default():
saver.save(sess, save_dir + '_%d' % itr)
def main():
tf.set_random_seed(FLAGS.random_seed)
np.random.seed(FLAGS.random_seed)
random.seed(FLAGS.random_seed)
graph = tf.Graph()
# gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=FLAGS.gpu_memory_fraction)
# tf_config = tf.ConfigProto(gpu_options=gpu_options)
tf_config = tf.ConfigProto()
# allocate GPU memory on demand instead of reserving the full device up front
tf_config.gpu_options.allow_growth = True
sess = tf.Session(graph=graph, config=tf_config)
# sess = tf.Session(graph=graph)
network_config = {
'num_filters': [FLAGS.num_filters] * FLAGS.num_conv_layers,
'strides': [[1, 2, 2, 1]] * FLAGS.num_strides + [[1, 1, 1, 1]] * (FLAGS.num_conv_layers - FLAGS.num_strides),
'filter_size': FLAGS.filter_size,
'image_width': FLAGS.im_width,
'image_height': FLAGS.im_height,
'image_channels': FLAGS.num_channels,
'n_layers': FLAGS.num_fc_layers,
'layer_size': FLAGS.layer_size,
'initialization': FLAGS.init,
}
# data_generator = DataGenerator()
state_idx = range(FLAGS.state_size)
img_idx = range(len(state_idx), len(state_idx) + FLAGS.im_height * FLAGS.im_width * FLAGS.num_channels)
# need to compute x_idx and img_idx from data_generator
model = MIL(FLAGS.action_size, state_idx=state_idx, img_idx=img_idx, network_config=network_config)
# TODO: figure out how to save summaries and checkpoints
exp_string = FLAGS.experiment + '.' + FLAGS.init + '_init.' + str(FLAGS.num_conv_layers) + '_conv' + '.' + str(
FLAGS.num_strides) + '_strides' + '.' + str(FLAGS.num_filters) + '_filters' + \
'.' + str(FLAGS.num_fc_layers) + '_fc' + '.' + str(FLAGS.layer_size) + '_dim' + '.bt_dim_' + str(
FLAGS.bt_dim) + '.mbs_' + str(FLAGS.meta_batch_size) + \
'.ubs_' + str(FLAGS.update_batch_size) + '.numstep_' + str(FLAGS.num_updates) + '.updatelr_' + str(
FLAGS.train_update_lr)
if FLAGS.clip:
exp_string += '.clip_' + str(int(FLAGS.clip_max))
if FLAGS.conv_bt:
exp_string += '.conv_bt'
if FLAGS.all_fc_bt:
exp_string += '.all_fc_bt'
if FLAGS.fp:
exp_string += '.fp'
if FLAGS.learn_final_eept:
exp_string += '.learn_ee_pos'
if FLAGS.no_action:
exp_string += '.no_action'
if FLAGS.zero_state:
exp_string += '.zero_state'
if FLAGS.two_head:
exp_string += '.two_heads'
if FLAGS.two_arms:
exp_string += '.two_arms'
if FLAGS.temporal_conv_2_head:
exp_string += '.1d_conv_act_' + str(FLAGS.temporal_num_layers) + '_' + str(FLAGS.temporal_num_filters)
if FLAGS.temporal_conv_2_head_ee:
exp_string += '_ee_' + str(FLAGS.temporal_num_layers_ee) + '_' + str(FLAGS.temporal_num_filters_ee)
exp_string += '_' + str(FLAGS.temporal_filter_size) + 'x1_filters'
if FLAGS.training_set_size != -1:
exp_string += '.' + str(FLAGS.training_set_size) + '_trials'
save_dir = FLAGS.save_dir + '/' + exp_string
# put here for now
if FLAGS.train:
# data_generator.generate_batches(noisy=FLAGS.use_noisy_demos)
# with graph.as_default():
# train_image_tensors = data_generator.make_batch_tensor(network_config, restore_iter=FLAGS.restore_iter)
# inputa = train_image_tensors[:, :FLAGS.update_batch_size*FLAGS.T, :]
# inputb = train_image_tensors[:, FLAGS.update_batch_size*FLAGS.T:, :]
# train_input_tensors = {'inputa': inputa, 'inputb': inputb}
model.init_network(graph, input_tensors=None, restore_iter=FLAGS.restore_iter)
# model.init_network(graph, input_tensors=val_input_tensors, restore_iter=FLAGS.restore_iter, prefix='Validation_')
else:
model.init_network(graph, prefix='Testing')
with graph.as_default():
# Set up saver.
saver = tf.train.Saver(max_to_keep=10)
# Initialize variables.
init_op = tf.global_variables_initializer()
sess.run(init_op, feed_dict=None)
# Start queue runners (used for loading videos on the fly)
tf.train.start_queue_runners(sess=sess)
if FLAGS.resume:
model_file = tf.train.latest_checkpoint(save_dir)
if FLAGS.restore_iter > 0:
model_file = model_file[:model_file.index('model')] + 'model_' + str(FLAGS.restore_iter)
if model_file:
ind1 = model_file.index('model')
# checkpoint files are named 'model_<itr>'; recover the iteration count after the 6-character 'model_' prefix
resume_itr = int(model_file[ind1 + 6:])
print("Restoring model weights from " + model_file)
with graph.as_default():
saver.restore(sess, model_file)
if FLAGS.train:
train(graph, model, saver, sess, save_dir, restore_itr=FLAGS.restore_iter)
if __name__ == "__main__":
main()
flags\n'), ((10120, 10169), 'tensorflow.python.platform.flags.DEFINE_integer', 'flags.DEFINE_integer', (['"""index_train_range"""', '(20)', '""""""'], {}), "('index_train_range', 20, '')\n", (10140, 10169), False, 'from tensorflow.python.platform import flags\n'), ((10232, 10291), 'tensorflow.python.platform.flags.DEFINE_string', 'flags.DEFINE_string', (['"""demo_type"""', '"""human"""', '"""robot or human"""'], {}), "('demo_type', 'human', 'robot or human')\n", (10251, 10291), False, 'from tensorflow.python.platform import flags\n'), ((10292, 10352), 'tensorflow.python.platform.flags.DEFINE_string', 'flags.DEFINE_string', (['"""extra_type"""', '"""robot"""', '"""robot or human"""'], {}), "('extra_type', 'robot', 'robot or human')\n", (10311, 10352), False, 'from tensorflow.python.platform import flags\n'), ((10376, 10438), 'tensorflow.python.platform.flags.DEFINE_string', 'flags.DEFINE_string', (['"""compare_type"""', '"""robot"""', '"""robot or human"""'], {}), "('compare_type', 'robot', 'robot or human')\n", (10395, 10438), False, 'from tensorflow.python.platform import flags\n'), ((10439, 10486), 'tensorflow.python.platform.flags.DEFINE_string', 'flags.DEFINE_string', (['"""target_type"""', '"""robot"""', '""""""'], {}), "('target_type', 'robot', '')\n", (10458, 10486), False, 'from tensorflow.python.platform import flags\n'), ((10623, 10687), 'tensorflow.python.platform.flags.DEFINE_integer', 'flags.DEFINE_integer', (['"""clip_action_size"""', '(2)', '"""size of embedding"""'], {}), "('clip_action_size', 2, 'size of embedding')\n", (10643, 10687), False, 'from tensorflow.python.platform import flags\n'), ((10688, 10747), 'tensorflow.python.platform.flags.DEFINE_integer', 'flags.DEFINE_integer', (['"""action_size"""', '(2)', '"""size of embedding"""'], {}), "('action_size', 2, 'size of embedding')\n", (10708, 10747), False, 'from tensorflow.python.platform import flags\n'), ((10748, 10806), 'tensorflow.python.platform.flags.DEFINE_integer', 
'flags.DEFINE_integer', (['"""state_size"""', '(6)', '"""size of embedding"""'], {}), "('state_size', 6, 'size of embedding')\n", (10768, 10806), False, 'from tensorflow.python.platform import flags\n'), ((10807, 10849), 'tensorflow.python.platform.flags.DEFINE_integer', 'flags.DEFINE_integer', (['"""output_data"""', '(6)', '""""""'], {}), "('output_data', 6, '')\n", (10827, 10849), False, 'from tensorflow.python.platform import flags\n'), ((10962, 11008), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""pre_margin_clip"""', '(1.0)', '""""""'], {}), "('pre_margin_clip', 1.0, '')\n", (10980, 11008), False, 'from tensorflow.python.platform import flags\n'), ((11009, 11068), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""pre_margin"""', '(2.0)', '"""pre_margin of loss"""'], {}), "('pre_margin', 2.0, 'pre_margin of loss')\n", (11027, 11068), False, 'from tensorflow.python.platform import flags\n'), ((11069, 11120), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""margin"""', '(2.0)', '"""margin of loss"""'], {}), "('margin', 2.0, 'margin of loss')\n", (11087, 11120), False, 'from tensorflow.python.platform import flags\n'), ((11121, 11212), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""pre_margin_coefficient"""', '(10.0)', '"""margin of pre-contrastive conloss"""'], {}), "('pre_margin_coefficient', 10.0,\n 'margin of pre-contrastive conloss')\n", (11139, 11212), False, 'from tensorflow.python.platform import flags\n'), ((11209, 11294), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""margin_coefficient"""', '(10.0)', '"""margin of post-contrastive loss"""'], {}), "('margin_coefficient', 10.0,\n 'margin of post-contrastive loss')\n", (11227, 11294), False, 'from tensorflow.python.platform import flags\n'), ((11291, 11341), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""pre_tar_coefficient"""', 
'(0.0)', '""""""'], {}), "('pre_tar_coefficient', 0.0, '')\n", (11309, 11341), False, 'from tensorflow.python.platform import flags\n'), ((11342, 11393), 'tensorflow.python.platform.flags.DEFINE_float', 'flags.DEFINE_float', (['"""post_tar_coefficient"""', '(0.0)', '""""""'], {}), "('post_tar_coefficient', 0.0, '')\n", (11360, 11393), False, 'from tensorflow.python.platform import flags\n'), ((11394, 11479), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""norm_pre_contrastive"""', '(True)', '"""norm for pre_task contrastive """'], {}), "('norm_pre_contrastive', True,\n 'norm for pre_task contrastive ')\n", (11411, 11479), False, 'from tensorflow.python.platform import flags\n'), ((11476, 11563), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""norm_post_contrastive"""', '(True)', '"""norm for post_task contrastive """'], {}), "('norm_post_contrastive', True,\n 'norm for post_task contrastive ')\n", (11493, 11563), False, 'from tensorflow.python.platform import flags\n'), ((11560, 11612), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""all_frame_contrastive"""', '(True)', '""""""'], {}), "('all_frame_contrastive', True, '')\n", (11577, 11612), False, 'from tensorflow.python.platform import flags\n'), ((11613, 11705), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""cos_contrastive_loss"""', '(True)', '"""True for cos loss, False for kl loss """'], {}), "('cos_contrastive_loss', True,\n 'True for cos loss, False for kl loss ')\n", (11630, 11705), False, 'from tensorflow.python.platform import flags\n'), ((11702, 11781), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""pre_contrastive"""', '(False)', '"""task contrastive for pre-update """'], {}), "('pre_contrastive', False, 'task contrastive for pre-update ')\n", (11719, 11781), False, 'from tensorflow.python.platform import flags\n'), ((11782, 11867), 
'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""post_contrastive"""', '(False)', '"""task contrastive for post-update """'], {}), "('post_contrastive', False,\n 'task contrastive for post-update ')\n", (11799, 11867), False, 'from tensorflow.python.platform import flags\n'), ((11918, 11968), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""amaml"""', '(True)', '"""true for amaml"""'], {}), "('amaml', True, 'true for amaml')\n", (11935, 11968), False, 'from tensorflow.python.platform import flags\n'), ((11969, 12039), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""amaml_extra_domain"""', '(True)', '"""true for extra_domain"""'], {}), "('amaml_extra_domain', True, 'true for extra_domain')\n", (11986, 12039), False, 'from tensorflow.python.platform import flags\n'), ((12040, 12103), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""cross_domain"""', '(True)', '"""use human and robot """'], {}), "('cross_domain', True, 'use human and robot ')\n", (12057, 12103), False, 'from tensorflow.python.platform import flags\n'), ((12104, 12176), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""cross_adapt_domain"""', '(True)', '"""adpat human/robot demos"""'], {}), "('cross_adapt_domain', True, 'adpat human/robot demos')\n", (12121, 12176), False, 'from tensorflow.python.platform import flags\n'), ((12177, 12248), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""zero_robot_domain"""', '(True)', '"""adpat human/robot demos"""'], {}), "('zero_robot_domain', True, 'adpat human/robot demos')\n", (12194, 12248), False, 'from tensorflow.python.platform import flags\n'), ((12249, 12335), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""random_adapt_domain"""', '(True)', '"""randomly adpat human/robot demos"""'], {}), "('random_adapt_domain', True,\n 'randomly adpat human/robot demos')\n", (12266, 12335), 
False, 'from tensorflow.python.platform import flags\n'), ((12332, 12419), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""random_single_domain"""', '(True)', '"""randomly adpat human/robot demos"""'], {}), "('random_single_domain', True,\n 'randomly adpat human/robot demos')\n", (12349, 12419), False, 'from tensorflow.python.platform import flags\n'), ((12416, 12506), 'tensorflow.python.platform.flags.DEFINE_integer', 'flags.DEFINE_integer', (['"""random_single_domain_margin"""', '(50)', '"""random single domain margin"""'], {}), "('random_single_domain_margin', 50,\n 'random single domain margin')\n", (12436, 12506), False, 'from tensorflow.python.platform import flags\n'), ((12503, 12561), 'tensorflow.python.platform.flags.DEFINE_integer', 'flags.DEFINE_integer', (['"""embed_size"""', '(0)', '"""size of embedding"""'], {}), "('embed_size', 0, 'size of embedding')\n", (12523, 12561), False, 'from tensorflow.python.platform import flags\n'), ((12562, 12617), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""tar_mil"""', '(False)', '"""with target loss"""'], {}), "('tar_mil', False, 'with target loss')\n", (12579, 12617), False, 'from tensorflow.python.platform import flags\n'), ((12618, 12675), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""record_loss"""', '(False)', '"""true for amaml"""'], {}), "('record_loss', False, 'true for amaml')\n", (12635, 12675), False, 'from tensorflow.python.platform import flags\n'), ((12676, 12722), 'tensorflow.python.platform.flags.DEFINE_bool', 'flags.DEFINE_bool', (['"""contrastive_fc"""', '(False)', '""""""'], {}), "('contrastive_fc', False, '')\n", (12693, 12722), False, 'from tensorflow.python.platform import flags\n'), ((504, 528), 'tensorflow.trainable_variables', 'tf.trainable_variables', ([], {}), '()\n', (526, 528), True, 'import tensorflow as tf\n'), ((15375, 15474), 'numpy.reshape', 'np.reshape', (['obsas', '[batch_size, FLAGS.T, 
FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels]'], {}), '(obsas, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height *\n FLAGS.num_channels])\n', (15385, 15474), True, 'import numpy as np\n'), ((15484, 15583), 'numpy.reshape', 'np.reshape', (['obsbs', '[batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels]'], {}), '(obsbs, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height *\n FLAGS.num_channels])\n', (15494, 15583), True, 'import numpy as np\n'), ((15599, 15661), 'numpy.reshape', 'np.reshape', (['actionas', '[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '(actionas, [batch_size, FLAGS.T, FLAGS.output_data])\n', (15609, 15661), True, 'import numpy as np\n'), ((15677, 15739), 'numpy.reshape', 'np.reshape', (['actionbs', '[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '(actionbs, [batch_size, FLAGS.T, FLAGS.output_data])\n', (15687, 15739), True, 'import numpy as np\n'), ((15755, 15805), 'numpy.zeros', 'np.zeros', (['[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '([batch_size, FLAGS.T, FLAGS.output_data])\n', (15763, 15805), True, 'import numpy as np\n'), ((15820, 15870), 'numpy.zeros', 'np.zeros', (['[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '([batch_size, FLAGS.T, FLAGS.output_data])\n', (15828, 15870), True, 'import numpy as np\n'), ((21584, 21683), 'numpy.reshape', 'np.reshape', (['obsas', '[batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels]'], {}), '(obsas, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height *\n FLAGS.num_channels])\n', (21594, 21683), True, 'import numpy as np\n'), ((21693, 21792), 'numpy.reshape', 'np.reshape', (['obsbs', '[batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels]'], {}), '(obsbs, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height *\n FLAGS.num_channels])\n', (21703, 21792), True, 'import numpy as np\n'), ((21804, 21903), 'numpy.reshape', 'np.reshape', (['obscs', '[batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * 
FLAGS.num_channels]'], {}), '(obscs, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height *\n FLAGS.num_channels])\n', (21814, 21903), True, 'import numpy as np\n'), ((21917, 21979), 'numpy.reshape', 'np.reshape', (['actionas', '[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '(actionas, [batch_size, FLAGS.T, FLAGS.output_data])\n', (21927, 21979), True, 'import numpy as np\n'), ((21995, 22057), 'numpy.reshape', 'np.reshape', (['actionbs', '[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '(actionbs, [batch_size, FLAGS.T, FLAGS.output_data])\n', (22005, 22057), True, 'import numpy as np\n'), ((22073, 22135), 'numpy.reshape', 'np.reshape', (['actioncs', '[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '(actioncs, [batch_size, FLAGS.T, FLAGS.output_data])\n', (22083, 22135), True, 'import numpy as np\n'), ((22152, 22202), 'numpy.zeros', 'np.zeros', (['[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '([batch_size, FLAGS.T, FLAGS.output_data])\n', (22160, 22202), True, 'import numpy as np\n'), ((22217, 22267), 'numpy.zeros', 'np.zeros', (['[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '([batch_size, FLAGS.T, FLAGS.output_data])\n', (22225, 22267), True, 'import numpy as np\n'), ((22282, 22332), 'numpy.zeros', 'np.zeros', (['[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '([batch_size, FLAGS.T, FLAGS.output_data])\n', (22290, 22332), True, 'import numpy as np\n'), ((23753, 23800), 'numpy.reshape', 'np.reshape', (['obsbs[:, 0, :]', '[batch_size, 1, -1]'], {}), '(obsbs[:, 0, :], [batch_size, 1, -1])\n', (23763, 23800), True, 'import numpy as np\n'), ((23817, 23867), 'numpy.reshape', 'np.reshape', (['actionbs[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionbs[:, 1, :], [batch_size, 1, -1])\n', (23827, 23867), True, 'import numpy as np\n'), ((23883, 23933), 'numpy.reshape', 'np.reshape', (['statebs[:, -1, :]', '[batch_size, 1, -1]'], {}), '(statebs[:, -1, :], [batch_size, 1, -1])\n', (23893, 23933), True, 'import numpy as np\n'), ((24513, 24560), 'numpy.reshape', 
'np.reshape', (['obsbs[:, 0, :]', '[batch_size, 1, -1]'], {}), '(obsbs[:, 0, :], [batch_size, 1, -1])\n', (24523, 24560), True, 'import numpy as np\n'), ((24577, 24627), 'numpy.reshape', 'np.reshape', (['actionbs[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionbs[:, 1, :], [batch_size, 1, -1])\n', (24587, 24627), True, 'import numpy as np\n'), ((24643, 24693), 'numpy.reshape', 'np.reshape', (['statebs[:, -1, :]', '[batch_size, 1, -1]'], {}), '(statebs[:, -1, :], [batch_size, 1, -1])\n', (24653, 24693), True, 'import numpy as np\n'), ((25039, 25089), 'numpy.reshape', 'np.reshape', (['actionas[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionas[:, 1, :], [batch_size, 1, -1])\n', (25049, 25089), True, 'import numpy as np\n'), ((25105, 25133), 'numpy.tile', 'np.tile', (['actionas', '(1, 3, 1)'], {}), '(actionas, (1, 3, 1))\n', (25112, 25133), True, 'import numpy as np\n'), ((25205, 25255), 'numpy.reshape', 'np.reshape', (['actionbs[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionbs[:, 1, :], [batch_size, 1, -1])\n', (25215, 25255), True, 'import numpy as np\n'), ((25271, 25299), 'numpy.tile', 'np.tile', (['actionbs', '(1, 3, 1)'], {}), '(actionbs, (1, 3, 1))\n', (25278, 25299), True, 'import numpy as np\n'), ((25711, 25761), 'numpy.reshape', 'np.reshape', (['actionas[:, 0, :]', '[batch_size, 1, -1]'], {}), '(actionas[:, 0, :], [batch_size, 1, -1])\n', (25721, 25761), True, 'import numpy as np\n'), ((25774, 25824), 'numpy.reshape', 'np.reshape', (['actionas[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionas[:, 1, :], [batch_size, 1, -1])\n', (25784, 25824), True, 'import numpy as np\n'), ((25842, 25879), 'numpy.concatenate', 'np.concatenate', (['[pick, place]'], {'axis': '(2)'}), '([pick, place], axis=2)\n', (25856, 25879), True, 'import numpy as np\n'), ((25954, 25984), 'numpy.tile', 'np.tile', (['pick_place', '(1, 3, 1)'], {}), '(pick_place, (1, 3, 1))\n', (25961, 25984), True, 'import numpy as np\n'), ((26166, 26216), 'numpy.reshape', 'np.reshape', (['actionbs[:, 0, 
:]', '[batch_size, 1, -1]'], {}), '(actionbs[:, 0, :], [batch_size, 1, -1])\n', (26176, 26216), True, 'import numpy as np\n'), ((26229, 26279), 'numpy.reshape', 'np.reshape', (['actionbs[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionbs[:, 1, :], [batch_size, 1, -1])\n', (26239, 26279), True, 'import numpy as np\n'), ((26297, 26334), 'numpy.concatenate', 'np.concatenate', (['[pick, place]'], {'axis': '(2)'}), '([pick, place], axis=2)\n', (26311, 26334), True, 'import numpy as np\n'), ((26409, 26439), 'numpy.tile', 'np.tile', (['pick_place', '(1, 3, 1)'], {}), '(pick_place, (1, 3, 1))\n', (26416, 26439), True, 'import numpy as np\n'), ((26967, 27017), 'numpy.reshape', 'np.reshape', (['actionas[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionas[:, 1, :], [batch_size, 1, -1])\n', (26977, 27017), True, 'import numpy as np\n'), ((27033, 27058), 'numpy.tile', 'np.tile', (['place', '(1, 3, 1)'], {}), '(place, (1, 3, 1))\n', (27040, 27058), True, 'import numpy as np\n'), ((27128, 27178), 'numpy.reshape', 'np.reshape', (['actionbs[:, 1, :]', '[batch_size, 1, -1]'], {}), '(actionbs[:, 1, :], [batch_size, 1, -1])\n', (27138, 27178), True, 'import numpy as np\n'), ((27194, 27219), 'numpy.tile', 'np.tile', (['place', '(1, 3, 1)'], {}), '(place, (1, 3, 1))\n', (27201, 27219), True, 'import numpy as np\n'), ((27794, 27832), 'tensorflow.summary.FileWriter', 'tf.summary.FileWriter', (['save_dir', 'graph'], {}), '(save_dir, graph)\n', (27815, 27832), True, 'import tensorflow as tf\n'), ((37425, 37462), 'tensorflow.set_random_seed', 'tf.set_random_seed', (['FLAGS.random_seed'], {}), '(FLAGS.random_seed)\n', (37443, 37462), True, 'import tensorflow as tf\n'), ((37467, 37500), 'numpy.random.seed', 'np.random.seed', (['FLAGS.random_seed'], {}), '(FLAGS.random_seed)\n', (37481, 37500), True, 'import numpy as np\n'), ((37505, 37535), 'random.seed', 'random.seed', (['FLAGS.random_seed'], {}), '(FLAGS.random_seed)\n', (37516, 37535), False, 'import random\n'), ((37549, 37559), 
'tensorflow.Graph', 'tf.Graph', ([], {}), '()\n', (37557, 37559), True, 'import tensorflow as tf\n'), ((37729, 37745), 'tensorflow.ConfigProto', 'tf.ConfigProto', ([], {}), '()\n', (37743, 37745), True, 'import tensorflow as tf\n'), ((37803, 37844), 'tensorflow.Session', 'tf.Session', ([], {'graph': 'graph', 'config': 'tf_config'}), '(graph=graph, config=tf_config)\n', (37813, 37844), True, 'import tensorflow as tf\n'), ((38644, 38740), 'place_pick_mil.MIL', 'MIL', (['FLAGS.action_size'], {'state_idx': 'state_idx', 'img_idx': 'img_idx', 'network_config': 'network_config'}), '(FLAGS.action_size, state_idx=state_idx, img_idx=img_idx, network_config\n =network_config)\n', (38647, 38740), False, 'from place_pick_mil import MIL\n'), ((580, 624), 'functools.reduce', 'reduce', (['mul', '[dim.value for dim in shape]', '(1)'], {}), '(mul, [dim.value for dim in shape], 1)\n', (586, 624), False, 'from functools import reduce\n'), ((13315, 13387), 'numpy.random.randint', 'np.random.randint', (['FLAGS.train_task_num', 'FLAGS.task_num'], {'size': 'batch_size'}), '(FLAGS.train_task_num, FLAGS.task_num, size=batch_size)\n', (13332, 13387), True, 'import numpy as np\n'), ((14591, 14636), 'numpy.random.randint', 'np.random.randint', (['(0)', 'FLAGS.index_train_range'], {}), '(0, FLAGS.index_train_range)\n', (14608, 14636), True, 'import numpy as np\n'), ((15122, 15177), 'read_data.Read_Robot_Data2', 'read_data.Read_Robot_Data2', (['target_path', 'FLAGS.T', 'index'], {}), '(target_path, FLAGS.T, index)\n', (15148, 15177), False, 'import read_data\n'), ((17006, 17078), 'numpy.random.randint', 'np.random.randint', (['FLAGS.train_task_num', 'FLAGS.task_num'], {'size': 'batch_size'}), '(FLAGS.train_task_num, FLAGS.task_num, size=batch_size)\n', (17023, 17078), True, 'import numpy as np\n'), ((17107, 17179), 'numpy.random.randint', 'np.random.randint', (['FLAGS.train_task_num', 'FLAGS.task_num'], {'size': 'batch_size'}), '(FLAGS.train_task_num, FLAGS.task_num, size=batch_size)\n', (17124, 
17179), True, 'import numpy as np\n'), ((19467, 19512), 'numpy.random.randint', 'np.random.randint', (['(0)', 'FLAGS.index_train_range'], {}), '(0, FLAGS.index_train_range)\n', (19484, 19512), True, 'import numpy as np\n'), ((20530, 20585), 'read_data.Read_Robot_Data2', 'read_data.Read_Robot_Data2', (['target_path', 'FLAGS.T', 'index'], {}), '(target_path, FLAGS.T, index)\n', (20556, 20585), False, 'import read_data\n'), ((22376, 22417), 'numpy.reshape', 'np.reshape', (['labelabs', '[batch_size, 1, -1]'], {}), '(labelabs, [batch_size, 1, -1])\n', (22386, 22417), True, 'import numpy as np\n'), ((22436, 22476), 'numpy.reshape', 'np.reshape', (['labelcs', '[batch_size, 1, -1]'], {}), '(labelcs, [batch_size, 1, -1])\n', (22446, 22476), True, 'import numpy as np\n'), ((22570, 22638), 'numpy.zeros', 'np.zeros', (['[batch_size, 1, FLAGS.color_num * FLAGS.index_train_range]'], {}), '([batch_size, 1, FLAGS.color_num * FLAGS.index_train_range])\n', (22578, 22638), True, 'import numpy as np\n'), ((22655, 22723), 'numpy.zeros', 'np.zeros', (['[batch_size, 1, FLAGS.color_num * FLAGS.index_train_range]'], {}), '([batch_size, 1, FLAGS.color_num * FLAGS.index_train_range])\n', (22663, 22723), True, 'import numpy as np\n'), ((22773, 22879), 'numpy.reshape', 'np.reshape', (['extra_obses', '[batch_size, FLAGS.T, FLAGS.im_width * FLAGS.im_height * FLAGS.num_channels]'], {}), '(extra_obses, [batch_size, FLAGS.T, FLAGS.im_width * FLAGS.\n im_height * FLAGS.num_channels])\n', (22783, 22879), True, 'import numpy as np\n'), ((22899, 22966), 'numpy.reshape', 'np.reshape', (['extra_actions', '[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '(extra_actions, [batch_size, FLAGS.T, FLAGS.output_data])\n', (22909, 22966), True, 'import numpy as np\n'), ((22990, 23040), 'numpy.zeros', 'np.zeros', (['[batch_size, FLAGS.T, FLAGS.output_data]'], {}), '([batch_size, FLAGS.T, FLAGS.output_data])\n', (22998, 23040), True, 'import numpy as np\n'), ((41252, 41282), 'tensorflow.train.Saver', 
'tf.train.Saver', ([], {'max_to_keep': '(10)'}), '(max_to_keep=10)\n', (41266, 41282), True, 'import tensorflow as tf\n'), ((41333, 41366), 'tensorflow.global_variables_initializer', 'tf.global_variables_initializer', ([], {}), '()\n', (41364, 41366), True, 'import tensorflow as tf\n'), ((41484, 41523), 'tensorflow.train.start_queue_runners', 'tf.train.start_queue_runners', ([], {'sess': 'sess'}), '(sess=sess)\n', (41512, 41523), True, 'import tensorflow as tf\n'), ((41566, 41602), 'tensorflow.train.latest_checkpoint', 'tf.train.latest_checkpoint', (['save_dir'], {}), '(save_dir)\n', (41592, 41602), True, 'import tensorflow as tf\n'), ((12949, 12991), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (12966, 12991), True, 'import numpy as np\n'), ((13071, 13113), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (13088, 13113), True, 'import numpy as np\n'), ((13440, 13482), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (13457, 13482), True, 'import numpy as np\n'), ((13559, 13601), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (13576, 13601), True, 'import numpy as np\n'), ((14905, 14958), 'read_data.Read_Robot_Data2', 'read_data.Read_Robot_Data2', (['demo_path', 'FLAGS.T', 'index'], {}), '(demo_path, FLAGS.T, index)\n', (14931, 14958), False, 'import read_data\n'), ((16114, 16156), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (16131, 16156), True, 'import numpy as np\n'), ((16257, 16298), 'numpy.random.randint', 'np.random.randint', (['(1)', '(FLAGS.color_num - 1)'], {}), '(1, FLAGS.color_num - 1)\n', (16274, 16298), True, 'import numpy as np\n'), ((16391, 16433), 'numpy.random.randint', 
'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (16408, 16433), True, 'import numpy as np\n'), ((16539, 16581), 'numpy.random.randint', 'np.random.randint', (['(1)', '(FLAGS.object_num - 1)'], {}), '(1, FLAGS.object_num - 1)\n', (16556, 16581), True, 'import numpy as np\n'), ((17289, 17331), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (17306, 17331), True, 'import numpy as np\n'), ((17409, 17451), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (17426, 17451), True, 'import numpy as np\n'), ((17552, 17592), 'numpy.random.randint', 'np.random.randint', (['(1)', '(FLAGS.demo_num - 1)'], {}), '(1, FLAGS.demo_num - 1)\n', (17569, 17592), True, 'import numpy as np\n'), ((19781, 19834), 'read_data.Read_Robot_Data2', 'read_data.Read_Robot_Data2', (['demo_path', 'FLAGS.T', 'index'], {}), '(demo_path, FLAGS.T, index)\n', (19807, 19834), False, 'import read_data\n'), ((19886, 19939), 'read_data.Read_Human_Data2', 'read_data.Read_Human_Data2', (['demo_path', 'FLAGS.T', 'index'], {}), '(demo_path, FLAGS.T, index)\n', (19912, 19939), False, 'import read_data\n'), ((20019, 20075), 'read_data.Read_Robot_Data2', 'read_data.Read_Robot_Data2', (['compare_path', 'FLAGS.T', 'index'], {}), '(compare_path, FLAGS.T, index)\n', (20045, 20075), False, 'import read_data\n'), ((20126, 20182), 'read_data.Read_Human_Data2', 'read_data.Read_Human_Data2', (['compare_path', 'FLAGS.T', 'index'], {}), '(compare_path, FLAGS.T, index)\n', (20152, 20182), False, 'import read_data\n'), ((20635, 20752), 'keras.utils.to_categorical', 'to_categorical', (['(color_list[element] * FLAGS.index_train_range + index)', '(FLAGS.color_num * FLAGS.index_train_range)'], {}), '(color_list[element] * FLAGS.index_train_range + index, FLAGS\n .color_num * FLAGS.index_train_range)\n', (20649, 20752), False, 'from keras.utils 
import to_categorical\n'), ((20767, 20891), 'keras.utils.to_categorical', 'to_categorical', (['(compare_color_list[element] * FLAGS.index_train_range + index)', '(FLAGS.color_num * FLAGS.index_train_range)'], {}), '(compare_color_list[element] * FLAGS.index_train_range +\n index, FLAGS.color_num * FLAGS.index_train_range)\n', (20781, 20891), False, 'from keras.utils import to_categorical\n'), ((13214, 13256), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (13231, 13256), True, 'import numpy as np\n'), ((15036, 15089), 'read_data.Read_Human_Data2', 'read_data.Read_Human_Data2', (['demo_path', 'FLAGS.T', 'index'], {}), '(demo_path, FLAGS.T, index)\n', (15062, 15089), False, 'import read_data\n'), ((16695, 16737), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {'size': 'batch_size'}), '(0, 100, size=batch_size)\n', (16712, 16737), True, 'import numpy as np\n'), ((16807, 16849), 'numpy.random.randint', 'np.random.randint', (['(1)', 'FLAGS.train_task_num'], {}), '(1, FLAGS.train_task_num)\n', (16824, 16849), True, 'import numpy as np\n'), ((20314, 20368), 'read_data.Read_Robot_Data2', 'read_data.Read_Robot_Data2', (['extra_path', 'FLAGS.T', 'index'], {}), '(extra_path, FLAGS.T, index)\n', (20340, 20368), False, 'import read_data\n'), ((20442, 20496), 'read_data.Read_Human_Data2', 'read_data.Read_Human_Data2', (['extra_path', 'FLAGS.T', 'index'], {}), '(extra_path, FLAGS.T, index)\n', (20468, 20496), False, 'import read_data\n'), ((29826, 29861), 'numpy.ones', 'np.ones', (['[FLAGS.meta_batch_size, 1]'], {}), '([FLAGS.meta_batch_size, 1])\n', (29833, 29861), True, 'import numpy as np\n'), ((29954, 29972), 'numpy.squeeze', 'np.squeeze', (['probas'], {}), '(probas)\n', (29964, 29972), True, 'import numpy as np\n'), ((30006, 30024), 'numpy.squeeze', 'np.squeeze', (['probes'], {}), '(probes)\n', (30016, 30024), True, 'import numpy as np\n'), ((33088, 33107), 'numpy.mean', 'np.mean', 
(['results[5]'], {}), '(results[5])\n', (33095, 33107), True, 'import numpy as np\n'), ((33109, 33128), 'numpy.mean', 'np.mean', (['results[8]'], {}), '(results[8])\n', (33116, 33128), True, 'import numpy as np\n'), ((33130, 33149), 'numpy.mean', 'np.mean', (['results[1]'], {}), '(results[1])\n', (33137, 33149), True, 'import numpy as np\n'), ((33151, 33170), 'numpy.mean', 'np.mean', (['results[2]'], {}), '(results[2])\n', (33158, 33170), True, 'import numpy as np\n'), ((33172, 33191), 'numpy.mean', 'np.mean', (['results[4]'], {}), '(results[4])\n', (33179, 33191), True, 'import numpy as np\n'), ((35050, 35065), 'numpy.ones', 'np.ones', (['[1, 1]'], {}), '([1, 1])\n', (35057, 35065), True, 'import numpy as np\n'), ((36995, 37014), 'numpy.mean', 'np.mean', (['results[1]'], {}), '(results[1])\n', (37002, 37014), True, 'import numpy as np\n'), ((29076, 29129), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)', '[FLAGS.meta_batch_size, 1]'], {}), '(0, 100, [FLAGS.meta_batch_size, 1])\n', (29093, 29129), True, 'import numpy as np\n'), ((29167, 29220), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)', '[FLAGS.meta_batch_size, 1]'], {}), '(0, 100, [FLAGS.meta_batch_size, 1])\n', (29184, 29220), True, 'import numpy as np\n'), ((35095, 35126), 'numpy.random.randint', 'np.random.randint', (['(0)', '(1)', '[1, 1]'], {}), '(0, 1, [1, 1])\n', (35112, 35126), True, 'import numpy as np\n'), ((29321, 29372), 'numpy.random.randint', 'np.random.randint', (['(0)', '(2)', '[FLAGS.meta_batch_size, 1]'], {}), '(0, 2, [FLAGS.meta_batch_size, 1])\n', (29338, 29372), True, 'import numpy as np\n'), ((29539, 29564), 'numpy.random.randint', 'np.random.randint', (['(0)', '(100)'], {}), '(0, 100)\n', (29556, 29564), True, 'import numpy as np\n')] |
import csv
import glob
import io
import os
import sys
import cv2
import numpy as np
from PIL import Image as PILImage
from pdf2image import convert_from_path
from skimage.metrics import structural_similarity
class Image:
STD_WIDTH = 2500
STD_HEIGHT = 1600
def __init__(self, path, debug=False):
self.path = path
self.img = None
self.debug = debug
self.debug_img = None
self.overlay_img = None
def convert_pdf(self):
images = convert_from_path(
pdf_path=self.path,
dpi=300,
fmt="jpeg"
)
return np.asarray(images[0])
def load(self):
self.img = self.convert_pdf()
if self.debug:
self.debug_img = self.img.copy()
self.validate_image()
x, y, _w, _h, rotation = self.get_contour_info()
self.rotate(x, y, rotation)
        # After rotation, x and y (the center of the bounding box) have moved.
        # Need to re-find the center.
_x, _y, w, h, _rotation = self.get_contour_info()
self.resize(w, h)
x, y, _w, _h, _rotation = self.get_contour_info()
self.center(x, y)
def validate_image(self):
# Rotate image so longer axis is horizontal
w = self.img.shape[1]
h = self.img.shape[0]
if h > w:
raise ValueError(f'Image {self.path} requires rotation.')
def write(self):
cv2.imwrite(self.path + '.jpg', self.img)
        if self.debug:
            cv2.imwrite(self.path + '.debug.jpg', self.debug_img)
            cv2.imwrite(self.path + '.overlay.jpg', self.overlay_img)
def get_contour_info(self):
"""
Return dims of outer most box. Assume that this is box is the
blue print bounding box.
"""
# Get contours from B&W image
img_bw = cv2.cvtColor(
src=self.img,
code=cv2.COLOR_RGB2GRAY,
)
_, img_bw = cv2.threshold(
src=img_bw,
thresh=175,
maxval=255,
type=cv2.THRESH_BINARY,
)
contours, _ = cv2.findContours(
image=img_bw,
mode=cv2.RETR_TREE,
method=cv2.CHAIN_APPROX_SIMPLE,
)
# Find area for each contour
img_area = self.img.shape[0] * self.img.shape[1]
contour_details = {}
for cnt in contours:
# Find convex hull of contour
hull = cv2.convexHull(points=cnt, hull=False)
# Find smallest box that fits the convex hull
(x, y), (w, h), rotation = cv2.minAreaRect(hull)
if 0.999 < (w * h) / img_area < 1.001:
# This is the image bounding box
continue
area = w * h
contour_details[(
area,
x,
y,
w,
h,
rotation,
)] = hull
# Get contour with largest area
contour_details_keys = sorted(contour_details.keys())
largest_contour_details_key = contour_details_keys[-1]
area, x, y, w, h, rotation = largest_contour_details_key
cnt = contour_details[largest_contour_details_key]
        # cv2 may report the box taller than wide; handled via the angle below
if h > w:
#w, h = h, w
# If cv2 found a rectangle that was taller than
# it was wide, then the rotation value is probably
# very large (~ 90 or ~ -90 degrees). This needs to be
# calibrated down to something between -2 and 2 degrees.
# Ensure rotation is always small (-2 < rotation < 2)
if -90 < rotation < -2:
rotation = 90 + rotation
elif 90 > rotation > 2:
rotation = 90 - rotation
else:
raise ValueError(f'Rotation error for {self.path}: {rotation}')
x, y, w, h = int(x), int(y), int(w), int(h)
if self.debug:
approx = cv2.approxPolyDP(
curve=cnt,
epsilon=0.009 * cv2.arcLength(cnt, True),
closed=True,
)
cv2.drawContours(
image=self.debug_img,
contours=[approx],
contourIdx=0,
color=(0, 0, 255),
thickness=10,
)
cv2.circle(
img=self.debug_img,
center=(x, y),
radius=15,
color=(0, 0, 255),
thickness=-1
)
return x, y, w, h, rotation
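The angle-calibration branch above can be isolated into a tiny stdlib sketch (hypothetical angles, no OpenCV needed). It mirrors the method's logic: a tall box comes back from `cv2.minAreaRect` with an angle near ±90 degrees, which gets folded into a small skew, while an angle that is already small falls through to the error.

```python
def calibrate_rotation(rotation):
    """Map an angle near +/-90 degrees (reported for a tall box) to a small skew."""
    if -90 < rotation < -2:
        return 90 + rotation
    elif 90 > rotation > 2:
        return 90 - rotation
    raise ValueError(f'Rotation error: {rotation}')

print(calibrate_rotation(-88.5))  # 1.5
print(calibrate_rotation(89.0))   # 1.0
```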
def rotate(self, x, y, rotation):
self.img = self._rotate(self.img, x, y, rotation)
if self.debug:
self.debug_img = self._rotate(self.debug_img, x, y, rotation)
def _rotate(self, img, x, y, rotation):
img_pil = PILImage.fromarray(img)
mode = img_pil.mode
img_pil = img_pil.convert('RGBA')
img_pil = img_pil.rotate(rotation, center=(x, y), expand=1)
# Fill in black after rotation
background = PILImage.new('RGBA', img_pil.size, (255,) * 4)
img_pil = PILImage.composite(img_pil, background, img_pil)
img_pil = img_pil.convert(mode)
return np.asarray(img_pil)
def resize(self, w, h):
self.img = self._resize(self.img, w, h)
if self.debug:
self.debug_img = self._resize(self.debug_img, w, h)
def _resize(self, img, w, h):
scale_factor_w = (self.STD_WIDTH / w)
scale_factor_h = (self.STD_HEIGHT / h)
new_w = int(scale_factor_w * img.shape[1])
new_h = int(scale_factor_h * img.shape[0])
return cv2.resize(
src=img,
dsize=(new_w, new_h),
)
def center(self, x, y):
self.img = self._center(self.img, x, y)
if self.debug:
self.debug_img = self._center(self.debug_img, x, y)
def _center(self, img, x, y):
# x and y are center of box.
# Need to make center of box also the center of the image
top = bottom = left = right = 0
image_midline = img.shape[1] / 2
if x < image_midline:
distance_to_side = img.shape[1] - x
left = distance_to_side - x
elif x > image_midline:
distance_to_side = img.shape[1] - x
right = x - distance_to_side
image_midline = img.shape[0] / 2
if y < image_midline:
distance_to_side = img.shape[0] - y
top = distance_to_side - y
elif y > image_midline:
distance_to_side = img.shape[0] - y
bottom = y - distance_to_side
return cv2.copyMakeBorder(
src=img,
top=top,
bottom=bottom,
left=left,
right=right,
borderType=cv2.BORDER_CONSTANT,
value=(255, 255, 255),
)
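A quick arithmetic check of the centering logic in `_center`, using plain integers (the axis length and coordinate are hypothetical): padding one side by `|length - 2 * coord|` makes the box center the midpoint of the padded axis.

```python
def center_padding(length, coord):
    # Padding (before, after) that moves `coord` to the midpoint of the axis,
    # mirroring the top/bottom/left/right computation in _center.
    if coord < length / 2:
        return length - 2 * coord, 0
    if coord > length / 2:
        return 0, 2 * coord - length
    return 0, 0

before, after = center_padding(1000, 300)
print(before, after)  # 400 0
assert (300 + before) * 2 == 1000 + before + after  # coord is now the midpoint
```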
def diff(self, other):
score, gray_self, gray_other = self._diff(self.img, other.img)
if self.debug:
self._diff(self.debug_img, other.debug_img)
return score
def _diff(self, img, other_img):
img, other_img = self.scale_cooperatively(img, other_img)
gray_self = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_other = cv2.cvtColor(other_img, cv2.COLOR_BGR2GRAY)
if self.debug:
self.overlay_img = self.overlay(gray_self, gray_other)
score = structural_similarity(gray_self, gray_other)
return score, gray_self, gray_other
def overlay(self, img, other_img):
alpha = 0.5
dst = img.copy()
cv2.addWeighted(
src1=img,
alpha=alpha,
src2=other_img,
beta=1-alpha,
gamma=0,
dst=dst,
)
return dst
def scale_cooperatively(self, img, other_img):
"""
        Add borders to each image such that both images are of the same size.
"""
max_w = max(img.shape[1], other_img.shape[1])
max_h = max(img.shape[0], other_img.shape[0])
# Scale img
        # Split the difference between both sides; give the remainder of an odd
        # difference to one side so the padded shapes match exactly.
        left = (max_w - img.shape[1]) // 2
        right = max_w - img.shape[1] - left
        top = (max_h - img.shape[0]) // 2
        bottom = max_h - img.shape[0] - top
img = cv2.copyMakeBorder(
src=img,
top=top,
bottom=bottom,
left=left,
right=right,
borderType=cv2.BORDER_CONSTANT,
value=(255, 255, 255),
)
# Scale other image
        left = (max_w - other_img.shape[1]) // 2
        right = max_w - other_img.shape[1] - left
        top = (max_h - other_img.shape[0]) // 2
        bottom = max_h - other_img.shape[0] - top
other_img = cv2.copyMakeBorder(
src=other_img,
top=top,
bottom=bottom,
left=left,
right=right,
borderType=cv2.BORDER_CONSTANT,
value=(255, 255, 255),
)
return img, other_img
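A numpy-only sketch of the same idea — padding two arrays with a constant border until they share the larger shape. `pad_to_common` and `fill` are made-up names; `np.pad` splits an odd difference by giving the extra pixel to one side, so the resulting shapes always match exactly.

```python
import numpy as np

def pad_to_common(a, b, fill=255):
    """Pad both 2-D arrays with a constant border so they end up the same shape."""
    h = max(a.shape[0], b.shape[0])
    w = max(a.shape[1], b.shape[1])
    def pad(img):
        dh, dw = h - img.shape[0], w - img.shape[1]
        # dh - dh // 2 puts the extra pixel of an odd difference on one side
        return np.pad(img, ((dh // 2, dh - dh // 2), (dw // 2, dw - dw // 2)),
                      constant_values=fill)
    return pad(a), pad(b)

pa, pb = pad_to_common(np.zeros((3, 5), dtype=np.uint8),
                       np.zeros((4, 2), dtype=np.uint8))
assert pa.shape == pb.shape == (4, 5)
```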
def load_images(dir_path, debug=False):
pdf_paths = glob.glob(os.path.join(dir_path, '*.pdf'))
images = []
print(
f'Found {len(pdf_paths)} PDFs. '
f'Approximate load time: {4 * len(pdf_paths)} seconds.',
file=sys.stderr,
flush=True,
)
print('Loading images ', end='', file=sys.stderr, flush=True)
for pdf_path in pdf_paths:
img = Image(pdf_path, debug=debug)
img.load()
images.append(img)
print('.', end='', file=sys.stderr, flush=True)
print(' Finished.', file=sys.stderr, flush=True)
return images
def compare(images):
print(
f'Comparing {len(images)} images. '
f'Approximate comparison time: {4 * len(images)} seconds.',
file=sys.stderr,
flush=True,
)
print('Comparing images ', end='', file=sys.stderr, flush=True)
comps = {}
for image_copy1 in images:
for image_copy2 in images:
if image_copy1 is image_copy2:
continue
key = frozenset((image_copy1.path, image_copy2.path))
if key in comps:
continue
comps[key] = image_copy1.diff(image_copy2)
if image_copy1.debug:
image_copy1.write()
print('.', end='', file=sys.stderr, flush=True)
print(' Finished.', file=sys.stderr, flush=True)
return comps
def sort_comparisons(comps):
return sorted([value] + sorted(key) for key, value in comps.items())
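The dedup trick in `compare` — keying comparisons on a `frozenset` of the two paths so `(A, B)` and `(B, A)` collapse into one entry — in miniature, with placeholder paths and a placeholder score:

```python
paths = ['a.pdf', 'b.pdf', 'c.pdf']
comps = {}
for p in paths:
    for q in paths:
        if p == q:
            continue
        key = frozenset((p, q))   # unordered pair: (p, q) and (q, p) collide
        if key in comps:
            continue
        comps[key] = 0.5           # placeholder similarity score
# n files produce n * (n - 1) / 2 unique unordered pairs
assert len(comps) == len(paths) * (len(paths) - 1) // 2
rows = sorted([score] + sorted(key) for key, score in comps.items())
print(rows[0])  # [0.5, 'a.pdf', 'b.pdf']
```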
def print_results(comps):
with io.StringIO() as fp:
writer = csv.writer(fp)
writer.writerow(['Score', 'File A', 'File B'])
writer.writerows(comps)
print(fp.getvalue())
def main(dir_path, debug=False):
images = load_images(dir_path, debug)
comparisons = compare(images)
comparisons = sort_comparisons(comparisons)
print_results(comparisons)
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser('As-built Comparer')
parser.add_argument(
'dir',
help='Directory with PDFs',
)
parser.add_argument(
'--debug',
default=False,
action='store_true',
help='Output debug images to show progress of program. Program will run much slower.',
)
args = parser.parse_args()
    main(args.dir, debug=args.debug)
| [
"PIL.Image.new",
"skimage.metrics.structural_similarity",
"argparse.ArgumentParser",
"cv2.threshold",
"cv2.arcLength",
"numpy.asarray",
"cv2.minAreaRect",
"cv2.addWeighted",
"io.StringIO",
"cv2.drawContours",
"csv.writer",
"cv2.circle",
"cv2.cvtColor",
"cv2.resize",
"cv2.imwrite",
"PIL... | [((10819, 10863), 'argparse.ArgumentParser', 'argparse.ArgumentParser', (['"""As-built Comparer"""'], {}), "('As-built Comparer')\n", (10842, 10863), False, 'import argparse\n'), ((495, 553), 'pdf2image.convert_from_path', 'convert_from_path', ([], {'pdf_path': 'self.path', 'dpi': '(300)', 'fmt': '"""jpeg"""'}), "(pdf_path=self.path, dpi=300, fmt='jpeg')\n", (512, 553), False, 'from pdf2image import convert_from_path\n'), ((616, 637), 'numpy.asarray', 'np.asarray', (['images[0]'], {}), '(images[0])\n', (626, 637), True, 'import numpy as np\n'), ((1423, 1464), 'cv2.imwrite', 'cv2.imwrite', (["(self.path + '.jpg')", 'self.img'], {}), "(self.path + '.jpg', self.img)\n", (1434, 1464), False, 'import cv2\n'), ((1473, 1526), 'cv2.imwrite', 'cv2.imwrite', (["(self.path + '.debug.jpg')", 'self.debug_img'], {}), "(self.path + '.debug.jpg', self.debug_img)\n", (1484, 1526), False, 'import cv2\n'), ((1835, 1886), 'cv2.cvtColor', 'cv2.cvtColor', ([], {'src': 'self.img', 'code': 'cv2.COLOR_RGB2GRAY'}), '(src=self.img, code=cv2.COLOR_RGB2GRAY)\n', (1847, 1886), False, 'import cv2\n'), ((1942, 2015), 'cv2.threshold', 'cv2.threshold', ([], {'src': 'img_bw', 'thresh': '(175)', 'maxval': '(255)', 'type': 'cv2.THRESH_BINARY'}), '(src=img_bw, thresh=175, maxval=255, type=cv2.THRESH_BINARY)\n', (1955, 2015), False, 'import cv2\n'), ((2097, 2184), 'cv2.findContours', 'cv2.findContours', ([], {'image': 'img_bw', 'mode': 'cv2.RETR_TREE', 'method': 'cv2.CHAIN_APPROX_SIMPLE'}), '(image=img_bw, mode=cv2.RETR_TREE, method=cv2.\n CHAIN_APPROX_SIMPLE)\n', (2113, 2184), False, 'import cv2\n'), ((4841, 4864), 'PIL.Image.fromarray', 'PILImage.fromarray', (['img'], {}), '(img)\n', (4859, 4864), True, 'from PIL import Image as PILImage\n'), ((5064, 5110), 'PIL.Image.new', 'PILImage.new', (['"""RGBA"""', 'img_pil.size', '((255,) * 4)'], {}), "('RGBA', img_pil.size, (255,) * 4)\n", (5076, 5110), True, 'from PIL import Image as PILImage\n'), ((5129, 5177), 'PIL.Image.composite', 
'PILImage.composite', (['img_pil', 'background', 'img_pil'], {}), '(img_pil, background, img_pil)\n', (5147, 5177), True, 'from PIL import Image as PILImage\n'), ((5233, 5252), 'numpy.asarray', 'np.asarray', (['img_pil'], {}), '(img_pil)\n', (5243, 5252), True, 'import numpy as np\n'), ((5664, 5705), 'cv2.resize', 'cv2.resize', ([], {'src': 'img', 'dsize': '(new_w, new_h)'}), '(src=img, dsize=(new_w, new_h))\n', (5674, 5705), False, 'import cv2\n'), ((6663, 6797), 'cv2.copyMakeBorder', 'cv2.copyMakeBorder', ([], {'src': 'img', 'top': 'top', 'bottom': 'bottom', 'left': 'left', 'right': 'right', 'borderType': 'cv2.BORDER_CONSTANT', 'value': '(255, 255, 255)'}), '(src=img, top=top, bottom=bottom, left=left, right=right,\n borderType=cv2.BORDER_CONSTANT, value=(255, 255, 255))\n', (6681, 6797), False, 'import cv2\n'), ((7212, 7249), 'cv2.cvtColor', 'cv2.cvtColor', (['img', 'cv2.COLOR_BGR2GRAY'], {}), '(img, cv2.COLOR_BGR2GRAY)\n', (7224, 7249), False, 'import cv2\n'), ((7271, 7314), 'cv2.cvtColor', 'cv2.cvtColor', (['other_img', 'cv2.COLOR_BGR2GRAY'], {}), '(other_img, cv2.COLOR_BGR2GRAY)\n', (7283, 7314), False, 'import cv2\n'), ((7421, 7465), 'skimage.metrics.structural_similarity', 'structural_similarity', (['gray_self', 'gray_other'], {}), '(gray_self, gray_other)\n', (7442, 7465), False, 'from skimage.metrics import structural_similarity\n'), ((7603, 7695), 'cv2.addWeighted', 'cv2.addWeighted', ([], {'src1': 'img', 'alpha': 'alpha', 'src2': 'other_img', 'beta': '(1 - alpha)', 'gamma': '(0)', 'dst': 'dst'}), '(src1=img, alpha=alpha, src2=other_img, beta=1 - alpha,\n gamma=0, dst=dst)\n', (7618, 7695), False, 'import cv2\n'), ((8200, 8334), 'cv2.copyMakeBorder', 'cv2.copyMakeBorder', ([], {'src': 'img', 'top': 'top', 'bottom': 'bottom', 'left': 'left', 'right': 'right', 'borderType': 'cv2.BORDER_CONSTANT', 'value': '(255, 255, 255)'}), '(src=img, top=top, bottom=bottom, left=left, right=right,\n borderType=cv2.BORDER_CONSTANT, value=(255, 255, 255))\n', (8218, 
8334), False, 'import cv2\n'), ((8597, 8738), 'cv2.copyMakeBorder', 'cv2.copyMakeBorder', ([], {'src': 'other_img', 'top': 'top', 'bottom': 'bottom', 'left': 'left', 'right': 'right', 'borderType': 'cv2.BORDER_CONSTANT', 'value': '(255, 255, 255)'}), '(src=other_img, top=top, bottom=bottom, left=left, right=\n right, borderType=cv2.BORDER_CONSTANT, value=(255, 255, 255))\n', (8615, 8738), False, 'import cv2\n'), ((8928, 8959), 'os.path.join', 'os.path.join', (['dir_path', '"""*.pdf"""'], {}), "(dir_path, '*.pdf')\n", (8940, 8959), False, 'import os\n'), ((10396, 10409), 'io.StringIO', 'io.StringIO', ([], {}), '()\n', (10407, 10409), False, 'import io\n'), ((10434, 10448), 'csv.writer', 'csv.writer', (['fp'], {}), '(fp)\n', (10444, 10448), False, 'import csv\n'), ((1562, 1619), 'cv2.imwrite', 'cv2.imwrite', (["(self.path + '.overlay.jpg')", 'self.overlay_img'], {}), "(self.path + '.overlay.jpg', self.overlay_img)\n", (1573, 1619), False, 'import cv2\n'), ((2441, 2479), 'cv2.convexHull', 'cv2.convexHull', ([], {'points': 'cnt', 'hull': '(False)'}), '(points=cnt, hull=False)\n', (2455, 2479), False, 'import cv2\n'), ((2577, 2598), 'cv2.minAreaRect', 'cv2.minAreaRect', (['hull'], {}), '(hull)\n', (2592, 2598), False, 'import cv2\n'), ((4151, 4259), 'cv2.drawContours', 'cv2.drawContours', ([], {'image': 'self.debug_img', 'contours': '[approx]', 'contourIdx': '(0)', 'color': '(0, 0, 255)', 'thickness': '(10)'}), '(image=self.debug_img, contours=[approx], contourIdx=0,\n color=(0, 0, 255), thickness=10)\n', (4167, 4259), False, 'import cv2\n'), ((4363, 4456), 'cv2.circle', 'cv2.circle', ([], {'img': 'self.debug_img', 'center': '(x, y)', 'radius': '(15)', 'color': '(0, 0, 255)', 'thickness': '(-1)'}), '(img=self.debug_img, center=(x, y), radius=15, color=(0, 0, 255),\n thickness=-1)\n', (4373, 4456), False, 'import cv2\n'), ((4070, 4094), 'cv2.arcLength', 'cv2.arcLength', (['cnt', '(True)'], {}), '(cnt, True)\n', (4083, 4094), False, 'import cv2\n')] |
'''
Generate transaction sequence and other necessary input information
'''
import yaml
import argparse
import numpy as np
from numpy.random import default_rng
import random
import math
import pickle
import pdb
import scipy.stats as stats
from os.path import dirname
from os.path import abspath
from pathlib import Path
def genr_item_univ(config: dict, size_res_path='data/item_size.pkl', cls_res_path='data/cls_item.pkl'):
"""Generate universe of items (queries).
Generate and save result as <Item Size Table>. Item number and size range
specified by params in config. Compute and save <Class Item Table> based on
<Item Size Table> result.
Args:
config: dict, params parsed from input_params.yaml file
size_res_path: str, file path to save <Item Size Table> dict
cls_res_path: str, file path to save <Class Item Table> dict
Returns:
a dict mapping item id to item size,
another dict mapping class id to class vector (binary/boolean)
showing which items are in each class.
Raises:
ValueError: Undefined item distribution.
"""
item_size_dict = {}
# cls_num = config['item_cls_num']
item_num = config['item_num']
item_min_size = config['item_min_size'] # s0, minimum item size
item_max_size = config['item_max_size'] # s1, maximum item size
if config['item_distr'] == 'cls_norm':
# assert item_num % cls_num == 0 # each class contains same number of items
print('Generating item universe in cls_norm distribution.')
        mu = (item_min_size + item_max_size) / 2  # use the midpoint of the size range as the mean
        sigma = (mu - item_min_size) / 4  # 4 std devs on each side: ~99.99% of draws fall within [item_min_size, item_max_size]
# random numbers satisfying normal distribution
rng = stats.truncnorm(
(item_min_size - mu) / sigma, (item_max_size - mu) / sigma, loc=mu, scale=sigma)
item_size_arr = np.around(rng.rvs(item_num), 0)
np.random.shuffle(item_size_arr)
for j in range(item_num):
item_size_dict[j] = item_size_arr[j]
elif config['item_distr'] == 'cls_random':
rng = default_rng(216) # set random seed
item_size_arr = rng.integers(low=item_min_size, high=item_max_size + 1, size=item_num)
np.random.shuffle(item_size_arr)
for j in range(item_num):
item_size_dict[j] = item_size_arr[j]
elif config['item_distr'] == 'uni_size':
assert item_min_size == item_max_size
for j in range(item_num):
item_size_dict[j] = item_min_size
else:
raise ValueError('Undefined item distribution.')
print('Item Size Dict: \n {}'.format(item_size_dict))
# generate cls_item
cls_item_fp = open(size_res_path, 'wb')
pickle.dump(item_size_dict, cls_item_fp)
cls_item_fp.close()
# compute cls_item_dict based on item_size_dict
cls_item_dict = {}
if config['item_distr'] == 'uni_size':
cls_num = 1
else:
        # +1 so an item of exactly item_max_size still gets a class when the
        # size ratio is a power of two (floor(log2(max/min)) == log2(max/min))
        cls_num = math.floor(math.log2(item_max_size / item_min_size)) + 1
for cls_id in range(cls_num):
cls_item_dict[cls_id] = np.zeros(item_num, dtype=bool)
# check each item size and update class binary vector
for item_id in range(item_num):
item_cls = math.floor(math.log2(item_size_dict[item_id] / item_min_size))
cls_item_dict[item_cls][item_id] = 1
# dump <Class Item Table> using pickle
cls_item_fp = open(cls_res_path, 'wb')
pickle.dump(cls_item_dict, cls_item_fp)
cls_item_fp.close()
return item_size_dict, cls_item_dict
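The size-to-class mapping computed above is `floor(log2(size / item_min_size))`: each doubling of an item's size moves it up exactly one class. A stdlib sketch with a hypothetical minimum size of 4:

```python
import math

item_min_size = 4   # hypothetical minimum item size

def item_class(size):
    # Doubling an item's size moves it up exactly one class
    return math.floor(math.log2(size / item_min_size))

assert item_class(4) == 0
assert item_class(7) == 0    # anything in [4, 8) is class 0
assert item_class(8) == 1    # [8, 16) is class 1
assert item_class(31) == 2   # [16, 32) is class 2
```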
def genr_txn_seq(config: dict, txn_item_path='data/txn_item.pkl', id_seq_path='data/id_seq.npy', flag_seq_path='data/flag_seq.npy'):
"""Generate transaction sequence for point-write workload.
"""
# generate read-write flag based on write frequency
seq_len = config['seq_len'] # transaction sequence length
write_freq = config['write_freq'] # expected write transaction frequency
rng = default_rng(522)
flag_arr = rng.random(seq_len)
write_flag_seq = flag_arr < write_freq
np.save(flag_seq_path, write_flag_seq) # save to numpy file
# create read / write transactions based on recent read / write queries
item_num = config['item_num']
recent_read_thresh, recent_write_thresh = config['recent_read_thresh'], config['recent_write_thresh']
read_txn_size, write_txn_size = config['read_txn_size'], config['write_txn_size']
txn_vec_set = set() # store frozenset representing unique transactions
unique_txn_cnt = 0 # unique transaction count, serve as transaction id during generation
txn_item_dict = {} # map transaction id to transaction item vector
txn_id_seq = np.zeros(seq_len, dtype=int) # transaction id sequence
for i in range(seq_len):
# generate write transaction
if write_flag_seq[i]:
            # check if there are enough 'recent read transactions' to select write queries
past_reads = np.where(write_flag_seq[0:i+1] == 0)[0]
if past_reads.shape[0] >= recent_read_thresh:
recent_reads = past_reads[-recent_read_thresh:past_reads.shape[0]]# find recent read queries
recent_read_id = txn_id_seq[recent_reads]
recent_read_queries = np.zeros(item_num, dtype=bool)
for txn_id in recent_read_id:
recent_read_queries = np.logical_or(recent_read_queries, txn_item_dict[txn_id])
non_recent_read_queries = np.logical_not(recent_read_queries)
# choose 1 query from non-recent read queries, another from recent read queries
if write_txn_size == 2:
recent_num, non_recent_num = 1, 1
else:
# TODO: fix this 50/50 setup
recent_num, non_recent_num = math.ceil(write_txn_size * 0.5), math.floor(write_txn_size * 0.5)
recent_samples = rng.choice(np.where(recent_read_queries == 1)[0], recent_num)
non_recent_samples = rng.choice(np.where(non_recent_read_queries == 1)[0], non_recent_num)
samples = np.concatenate((recent_samples, non_recent_samples))
tmp_txn_vec = np.zeros(item_num, dtype=bool)
for item_id in samples:
tmp_txn_vec[item_id] = 1
tmp_item_set = frozenset(samples)
if tmp_item_set not in txn_vec_set:
txn_vec_set.add(tmp_item_set)
tmp_txn_id = unique_txn_cnt
txn_id_seq[i] = tmp_txn_id
txn_item_dict[tmp_txn_id] = tmp_txn_vec
unique_txn_cnt += 1
else:
for txn_id in txn_item_dict:
if np.equal(tmp_txn_vec, txn_item_dict[txn_id]).all():
txn_id_seq[i] = txn_id
break
# not enough recent read transactions, choose write queries randomly
else:
samples = rng.choice(item_num, write_txn_size) # choose queries by random
tmp_txn_vec = np.zeros(item_num, dtype=bool)
for item_id in samples:
tmp_txn_vec[item_id] = 1
tmp_item_set = frozenset(samples)
if tmp_item_set not in txn_vec_set:
txn_vec_set.add(tmp_item_set)
tmp_txn_id = unique_txn_cnt
txn_id_seq[i] = tmp_txn_id
txn_item_dict[tmp_txn_id] = tmp_txn_vec
unique_txn_cnt += 1
else:
for txn_id in txn_item_dict:
if np.equal(tmp_txn_vec, txn_item_dict[txn_id]).all():
txn_id_seq[i] = txn_id
break
# generate read transaction
else:
past_writes = np.where(write_flag_seq[0:i+1] == 1)[0]
if past_writes.shape[0] >= recent_write_thresh:
recent_writes = past_writes[-recent_write_thresh:past_writes.shape[0]]# find recent write queries
recent_write_id = txn_id_seq[recent_writes]
recent_write_queries = np.zeros(item_num, dtype=bool)
for txn_id in recent_write_id:
recent_write_queries = np.logical_or(recent_write_queries, txn_item_dict[txn_id])
non_recent_write_queries = np.logical_not(recent_write_queries)
# choose 2 queries from non_recent_write, others from recent_write
recent_num, non_recent_num = read_txn_size - 2, 2
recent_samples = rng.choice(np.where(recent_write_queries == 1)[0], recent_num)
non_recent_samples = rng.choice(np.where(non_recent_write_queries == 1)[0], non_recent_num)
samples = np.concatenate((recent_samples, non_recent_samples))
tmp_txn_vec = np.zeros(item_num, dtype=bool)
for item_id in samples:
tmp_txn_vec[item_id] = 1
tmp_item_set = frozenset(samples)
if tmp_item_set not in txn_vec_set:
txn_vec_set.add(tmp_item_set)
tmp_txn_id = unique_txn_cnt
txn_id_seq[i] = tmp_txn_id
txn_item_dict[tmp_txn_id] = tmp_txn_vec
unique_txn_cnt += 1
else:
for txn_id in txn_item_dict:
if np.equal(tmp_txn_vec, txn_item_dict[txn_id]).all():
txn_id_seq[i] = txn_id
break
# not enough recent write transactions, choose read queries randomly
else:
samples = rng.choice(item_num, read_txn_size) # choose queries by random
tmp_txn_vec = np.zeros(item_num, dtype=bool)
for item_id in samples:
tmp_txn_vec[item_id] = 1
tmp_item_set = frozenset(samples)
if tmp_item_set not in txn_vec_set:
txn_vec_set.add(tmp_item_set)
tmp_txn_id = unique_txn_cnt
txn_id_seq[i] = tmp_txn_id
txn_item_dict[tmp_txn_id] = tmp_txn_vec
unique_txn_cnt += 1
else:
for txn_id in txn_item_dict:
if np.equal(tmp_txn_vec, txn_item_dict[txn_id]).all():
txn_id_seq[i] = txn_id
break
# save results to file
txn_item_fp = open(txn_item_path, 'wb')
pickle.dump(txn_item_dict, txn_item_fp)
txn_item_fp.close()
np.save(id_seq_path, txn_id_seq)
np.save(flag_seq_path, write_flag_seq)
return txn_item_dict, txn_id_seq, write_flag_seq
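The read/write split at the top of `genr_txn_seq` is a Bernoulli draw: one uniform number per transaction, compared against `write_freq`. A minimal numpy sketch (the 10,000-step length and 0.2 frequency are arbitrary illustration values):

```python
import numpy as np
from numpy.random import default_rng

rng = default_rng(522)
seq_len, write_freq = 10_000, 0.2
# One uniform draw per transaction; draws below write_freq become writes
write_flag_seq = rng.random(seq_len) < write_freq
print(round(write_flag_seq.mean(), 3))
```

Over a long sequence the observed write fraction concentrates tightly around `write_freq` (standard error of about 0.004 here), which is why the generator can treat the flag array as matching the configured frequency.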
if __name__ == '__main__':
path = abspath(dirname(dirname(__file__)))+'/config/PointWrt.yaml'
config_file = open(path, 'r')
config_dict = yaml.load(config_file, Loader=yaml.FullLoader)
workload_dir = abspath(dirname(dirname(__file__))) + '/data/PointWrt/' + 'QueryNum{}_Unisize_RThresh{}_WThresh{}_RSize{}_WSize{}_Wrt{}_Len{}'.format(config_dict['item_num'], config_dict['recent_read_thresh'], config_dict['recent_write_thresh'], config_dict['read_txn_size'], config_dict['write_txn_size'], config_dict['write_freq'], config_dict['seq_len'])
Path(workload_dir).mkdir(parents=True, exist_ok=True)
item_size_dict, cls_item_dict = genr_item_univ(config_dict, size_res_path=workload_dir+'/item_size.pkl', cls_res_path=workload_dir+'/cls_item.pkl')
txn_item_dict, txn_id_seq, write_flag_seq = genr_txn_seq(config_dict, txn_item_path=workload_dir+'/txn_item.pkl', id_seq_path=workload_dir+'/id_seq.npy', flag_seq_path=workload_dir+'/flag_seq.npy')
| [
"pickle.dump",
"numpy.random.default_rng",
"math.ceil",
"pathlib.Path",
"numpy.where",
"math.floor",
"numpy.logical_not",
"math.log2",
"yaml.load",
"numpy.logical_or",
"numpy.equal",
"os.path.dirname",
"numpy.zeros",
"numpy.concatenate",
"scipy.stats.truncnorm",
"numpy.save",
"numpy.... | [((2858, 2898), 'pickle.dump', 'pickle.dump', (['item_size_dict', 'cls_item_fp'], {}), '(item_size_dict, cls_item_fp)\n', (2869, 2898), False, 'import pickle\n'), ((3549, 3588), 'pickle.dump', 'pickle.dump', (['cls_item_dict', 'cls_item_fp'], {}), '(cls_item_dict, cls_item_fp)\n', (3560, 3588), False, 'import pickle\n'), ((4068, 4084), 'numpy.random.default_rng', 'default_rng', (['(522)'], {}), '(522)\n', (4079, 4084), False, 'from numpy.random import default_rng\n'), ((4167, 4205), 'numpy.save', 'np.save', (['flag_seq_path', 'write_flag_seq'], {}), '(flag_seq_path, write_flag_seq)\n', (4174, 4205), True, 'import numpy as np\n'), ((4788, 4816), 'numpy.zeros', 'np.zeros', (['seq_len'], {'dtype': 'int'}), '(seq_len, dtype=int)\n', (4796, 4816), True, 'import numpy as np\n'), ((10720, 10759), 'pickle.dump', 'pickle.dump', (['txn_item_dict', 'txn_item_fp'], {}), '(txn_item_dict, txn_item_fp)\n', (10731, 10759), False, 'import pickle\n'), ((10788, 10820), 'numpy.save', 'np.save', (['id_seq_path', 'txn_id_seq'], {}), '(id_seq_path, txn_id_seq)\n', (10795, 10820), True, 'import numpy as np\n'), ((10825, 10863), 'numpy.save', 'np.save', (['flag_seq_path', 'write_flag_seq'], {}), '(flag_seq_path, write_flag_seq)\n', (10832, 10863), True, 'import numpy as np\n'), ((11069, 11115), 'yaml.load', 'yaml.load', (['config_file'], {'Loader': 'yaml.FullLoader'}), '(config_file, Loader=yaml.FullLoader)\n', (11078, 11115), False, 'import yaml\n'), ((1883, 1983), 'scipy.stats.truncnorm', 'stats.truncnorm', (['((item_min_size - mu) / sigma)', '((item_max_size - mu) / sigma)'], {'loc': 'mu', 'scale': 'sigma'}), '((item_min_size - mu) / sigma, (item_max_size - mu) / sigma,\n loc=mu, scale=sigma)\n', (1898, 1983), True, 'import scipy.stats as stats\n'), ((2054, 2086), 'numpy.random.shuffle', 'np.random.shuffle', (['item_size_arr'], {}), '(item_size_arr)\n', (2071, 2086), True, 'import numpy as np\n'), ((3207, 3237), 'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), 
'(item_num, dtype=bool)\n', (3215, 3237), True, 'import numpy as np\n'), ((2231, 2247), 'numpy.random.default_rng', 'default_rng', (['(216)'], {}), '(216)\n', (2242, 2247), False, 'from numpy.random import default_rng\n'), ((2374, 2406), 'numpy.random.shuffle', 'np.random.shuffle', (['item_size_arr'], {}), '(item_size_arr)\n', (2391, 2406), True, 'import numpy as np\n'), ((3099, 3139), 'math.log2', 'math.log2', (['(item_max_size / item_min_size)'], {}), '(item_max_size / item_min_size)\n', (3108, 3139), False, 'import math\n'), ((3362, 3412), 'math.log2', 'math.log2', (['(item_size_dict[item_id] / item_min_size)'], {}), '(item_size_dict[item_id] / item_min_size)\n', (3371, 3412), False, 'import math\n'), ((11483, 11501), 'pathlib.Path', 'Path', (['workload_dir'], {}), '(workload_dir)\n', (11487, 11501), False, 'from pathlib import Path\n'), ((5058, 5096), 'numpy.where', 'np.where', (['(write_flag_seq[0:i + 1] == 0)'], {}), '(write_flag_seq[0:i + 1] == 0)\n', (5066, 5096), True, 'import numpy as np\n'), ((5361, 5391), 'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), '(item_num, dtype=bool)\n', (5369, 5391), True, 'import numpy as np\n'), ((5580, 5615), 'numpy.logical_not', 'np.logical_not', (['recent_read_queries'], {}), '(recent_read_queries)\n', (5594, 5615), True, 'import numpy as np\n'), ((6220, 6272), 'numpy.concatenate', 'np.concatenate', (['(recent_samples, non_recent_samples)'], {}), '((recent_samples, non_recent_samples))\n', (6234, 6272), True, 'import numpy as np\n'), ((6303, 6333), 'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), '(item_num, dtype=bool)\n', (6311, 6333), True, 'import numpy as np\n'), ((7221, 7251), 'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), '(item_num, dtype=bool)\n', (7229, 7251), True, 'import numpy as np\n'), ((7995, 8033), 'numpy.where', 'np.where', (['(write_flag_seq[0:i + 1] == 1)'], {}), '(write_flag_seq[0:i + 1] == 1)\n', (8003, 8033), True, 'import numpy as np\n'), ((8308, 8338), 
'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), '(item_num, dtype=bool)\n', (8316, 8338), True, 'import numpy as np\n'), ((8531, 8567), 'numpy.logical_not', 'np.logical_not', (['recent_write_queries'], {}), '(recent_write_queries)\n', (8545, 8567), True, 'import numpy as np\n'), ((8947, 8999), 'numpy.concatenate', 'np.concatenate', (['(recent_samples, non_recent_samples)'], {}), '((recent_samples, non_recent_samples))\n', (8961, 8999), True, 'import numpy as np\n'), ((9030, 9060), 'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), '(item_num, dtype=bool)\n', (9038, 9060), True, 'import numpy as np\n'), ((9947, 9977), 'numpy.zeros', 'np.zeros', (['item_num'], {'dtype': 'bool'}), '(item_num, dtype=bool)\n', (9955, 9977), True, 'import numpy as np\n'), ((10973, 10990), 'os.path.dirname', 'dirname', (['__file__'], {}), '(__file__)\n', (10980, 10990), False, 'from os.path import dirname\n'), ((5480, 5537), 'numpy.logical_or', 'np.logical_or', (['recent_read_queries', 'txn_item_dict[txn_id]'], {}), '(recent_read_queries, txn_item_dict[txn_id])\n', (5493, 5537), True, 'import numpy as np\n'), ((8429, 8487), 'numpy.logical_or', 'np.logical_or', (['recent_write_queries', 'txn_item_dict[txn_id]'], {}), '(recent_write_queries, txn_item_dict[txn_id])\n', (8442, 8487), True, 'import numpy as np\n'), ((11152, 11169), 'os.path.dirname', 'dirname', (['__file__'], {}), '(__file__)\n', (11159, 11169), False, 'from os.path import dirname\n'), ((5926, 5957), 'math.ceil', 'math.ceil', (['(write_txn_size * 0.5)'], {}), '(write_txn_size * 0.5)\n', (5935, 5957), False, 'import math\n'), ((5959, 5991), 'math.floor', 'math.floor', (['(write_txn_size * 0.5)'], {}), '(write_txn_size * 0.5)\n', (5969, 5991), False, 'import math\n'), ((6036, 6070), 'numpy.where', 'np.where', (['(recent_read_queries == 1)'], {}), '(recent_read_queries == 1)\n', (6044, 6070), True, 'import numpy as np\n'), ((6135, 6173), 'numpy.where', 'np.where', (['(non_recent_read_queries == 1)'], {}), 
'(non_recent_read_queries == 1)\n', (6143, 6173), True, 'import numpy as np\n'), ((8761, 8796), 'numpy.where', 'np.where', (['(recent_write_queries == 1)'], {}), '(recent_write_queries == 1)\n', (8769, 8796), True, 'import numpy as np\n'), ((8861, 8900), 'numpy.where', 'np.where', (['(non_recent_write_queries == 1)'], {}), '(non_recent_write_queries == 1)\n', (8869, 8900), True, 'import numpy as np\n'), ((6864, 6908), 'numpy.equal', 'np.equal', (['tmp_txn_vec', 'txn_item_dict[txn_id]'], {}), '(tmp_txn_vec, txn_item_dict[txn_id])\n', (6872, 6908), True, 'import numpy as np\n'), ((7782, 7826), 'numpy.equal', 'np.equal', (['tmp_txn_vec', 'txn_item_dict[txn_id]'], {}), '(tmp_txn_vec, txn_item_dict[txn_id])\n', (7790, 7826), True, 'import numpy as np\n'), ((9591, 9635), 'numpy.equal', 'np.equal', (['tmp_txn_vec', 'txn_item_dict[txn_id]'], {}), '(tmp_txn_vec, txn_item_dict[txn_id])\n', (9599, 9635), True, 'import numpy as np\n'), ((10508, 10552), 'numpy.equal', 'np.equal', (['tmp_txn_vec', 'txn_item_dict[txn_id]'], {}), '(tmp_txn_vec, txn_item_dict[txn_id])\n', (10516, 10552), True, 'import numpy as np\n')] |
import matplotlib.pyplot as plt
from skimage import io, filters, util, morphology, measure
import numpy
from scipy import signal
def gkern(kernlen, nsig):
# Return 2D Gaussian Kernel
    # scipy.signal.gaussian moved to scipy.signal.windows.gaussian (removed
    # from the top-level signal namespace in SciPy 1.13)
    gkern1d = signal.windows.gaussian(kernlen, std=nsig).reshape(kernlen, 1)
kernel = numpy.outer(gkern1d, gkern1d)
return kernel/kernel.sum()
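`gkern` builds the 2-D kernel as the outer product of a 1-D Gaussian window and then normalizes it so the weights sum to 1. The same construction in plain numpy, without the scipy window (the symmetric sampling grid here is an assumption, not scipy's exact window):

```python
import numpy as np

def gkern_np(kernlen, nsig):
    # 1-D Gaussian sampled symmetrically about the window center
    x = np.arange(kernlen) - (kernlen - 1) / 2.0
    g = np.exp(-x ** 2 / (2.0 * nsig ** 2))
    kernel = np.outer(g, g)          # separable: 2-D kernel is the outer product
    return kernel / kernel.sum()    # normalize so filtering preserves brightness

k = gkern_np(5, 1.5)
assert abs(k.sum() - 1.0) < 1e-12   # normalized
assert np.allclose(k, k.T)             # symmetric
assert k[2, 2] == k.max()           # peak at the center
```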
def imageConvolution(image,kernel, median=False):
ConvImage = numpy.zeros(numpy.shape(image))
KernelSizeI, KernelSizeJ = numpy.shape(kernel)
pad = (KernelSizeI - 1) // 2
image = numpy.pad(image, pad, mode='constant', constant_values=0)
for w in range(ConvImage.shape[0]):
for h in range(ConvImage.shape[1]):
if median:
ConvImage[w,h] = numpy.median(image[(w):(w+KernelSizeI), (h):(h+KernelSizeI)] * kernel)
else:
ConvImage[w,h] = numpy.mean(image[(w):(w+KernelSizeI),(h):(h+KernelSizeI)] * kernel)
return ConvImage.astype('uint8')
#################################################
################### Quiz 1 ######################
#################################################
PIXEL_MAX = 255
# Load image file
#fpath = '/store1/bypark/test_Comis/2_filter/'
fpath = 'd:/github/ComputerVision/hw2/'
image = io.imread(fpath + 'lena_gray.gif')
# make noise image
imageSaltAndPepper = util.noise.random_noise(image, mode='s&p')*PIXEL_MAX
imageSaltAndPepper = imageSaltAndPepper.astype('uint8')
# Kernel Definition
kernelSize = 3
sigma = 3
GaussianKernel = gkern(kernelSize, sigma)
MedianFilterWindow = morphology.square(kernelSize)
# Original Image
ax = plt.subplot(3, 2, 1)
ax.imshow(image, cmap='gray')
plt.title('Original Image')
# Salt & Pepper Noise Reduction
ax = plt.subplot(3, 2, 2)
ax.imshow(imageSaltAndPepper, cmap='gray')
plt.title('Gaussian Noise')
ax = plt.subplot(3, 2, 4)
filteredImage = imageConvolution(imageSaltAndPepper, GaussianKernel) # Gaussian filtering
ax.imshow(filteredImage, cmap='gray')
plt.title('Gaussian Filtering')
ax = plt.subplot(3, 2, 6)
filteredImage = imageConvolution(imageSaltAndPepper, MedianFilterWindow, median=True) # Median filtering
ax.imshow(filteredImage, cmap='gray')
plt.title('Median Filtering')
plt.show()
#################################################
################### Quiz 2 ######################
#################################################
for size in [5, 7]:
kernelSize = size
GaussianKernel = gkern(kernelSize, sigma)
MedianFilterWindow = morphology.square(kernelSize)
# Original Image
ax = plt.subplot(3, 2, 1)
ax.imshow(image, cmap='gray')
plt.title('Original Image')
# Salt & Pepper Noise Reduction
ax = plt.subplot(3, 2, 2)
ax.imshow(imageSaltAndPepper, cmap='gray')
plt.title('Gaussian Noise')
ax = plt.subplot(3, 2, 4)
filteredImage = imageConvolution(imageSaltAndPepper, GaussianKernel) # Gaussian filtering
ax.imshow(filteredImage, cmap='gray')
plt.title('Gaussian Filtering')
ax = plt.subplot(3, 2, 6)
filteredImage = imageConvolution(imageSaltAndPepper, MedianFilterWindow, median=True) # Median filtering
ax.imshow(filteredImage, cmap='gray')
plt.title('Median Filtering')
plt.show()
| [
"numpy.mean",
"numpy.median",
"skimage.morphology.square",
"skimage.io.imread",
"numpy.outer",
"numpy.pad",
"scipy.signal.gaussian",
"skimage.util.noise.random_noise",
"matplotlib.pyplot.title",
"numpy.shape",
"matplotlib.pyplot.subplot",
"matplotlib.pyplot.show"
] | [((1235, 1269), 'skimage.io.imread', 'io.imread', (["(fpath + 'lena_gray.gif')"], {}), "(fpath + 'lena_gray.gif')\n", (1244, 1269), False, 'from skimage import io, filters, util, morphology, measure\n'), ((1530, 1559), 'skimage.morphology.square', 'morphology.square', (['kernelSize'], {}), '(kernelSize)\n', (1547, 1559), False, 'from skimage import io, filters, util, morphology, measure\n'), ((1583, 1603), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(1)'], {}), '(3, 2, 1)\n', (1594, 1603), True, 'import matplotlib.pyplot as plt\n'), ((1634, 1661), 'matplotlib.pyplot.title', 'plt.title', (['"""Original Image"""'], {}), "('Original Image')\n", (1643, 1661), True, 'import matplotlib.pyplot as plt\n'), ((1700, 1720), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(2)'], {}), '(3, 2, 2)\n', (1711, 1720), True, 'import matplotlib.pyplot as plt\n'), ((1764, 1791), 'matplotlib.pyplot.title', 'plt.title', (['"""Gaussian Noise"""'], {}), "('Gaussian Noise')\n", (1773, 1791), True, 'import matplotlib.pyplot as plt\n'), ((1798, 1818), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(4)'], {}), '(3, 2, 4)\n', (1809, 1818), True, 'import matplotlib.pyplot as plt\n'), ((1947, 1978), 'matplotlib.pyplot.title', 'plt.title', (['"""Gaussian Filtering"""'], {}), "('Gaussian Filtering')\n", (1956, 1978), True, 'import matplotlib.pyplot as plt\n'), ((1985, 2005), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(6)'], {}), '(3, 2, 6)\n', (1996, 2005), True, 'import matplotlib.pyplot as plt\n'), ((2149, 2178), 'matplotlib.pyplot.title', 'plt.title', (['"""Median Filtering"""'], {}), "('Median Filtering')\n", (2158, 2178), True, 'import matplotlib.pyplot as plt\n'), ((2180, 2190), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (2188, 2190), True, 'import matplotlib.pyplot as plt\n'), ((270, 299), 'numpy.outer', 'numpy.outer', (['gkern1d', 'gkern1d'], {}), '(gkern1d, gkern1d)\n', (281, 299), False, 'import numpy\n'), 
((464, 483), 'numpy.shape', 'numpy.shape', (['kernel'], {}), '(kernel)\n', (475, 483), False, 'import numpy\n'), ((530, 587), 'numpy.pad', 'numpy.pad', (['image', 'pad'], {'mode': '"""constant"""', 'constant_values': '(0)'}), "(image, pad, mode='constant', constant_values=0)\n", (539, 587), False, 'import numpy\n'), ((1312, 1354), 'skimage.util.noise.random_noise', 'util.noise.random_noise', (['image'], {'mode': '"""s&p"""'}), "(image, mode='s&p')\n", (1335, 1354), False, 'from skimage import io, filters, util, morphology, measure\n'), ((2455, 2484), 'skimage.morphology.square', 'morphology.square', (['kernelSize'], {}), '(kernelSize)\n', (2472, 2484), False, 'from skimage import io, filters, util, morphology, measure\n'), ((2520, 2540), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(1)'], {}), '(3, 2, 1)\n', (2531, 2540), True, 'import matplotlib.pyplot as plt\n'), ((2579, 2606), 'matplotlib.pyplot.title', 'plt.title', (['"""Original Image"""'], {}), "('Original Image')\n", (2588, 2606), True, 'import matplotlib.pyplot as plt\n'), ((2657, 2677), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(2)'], {}), '(3, 2, 2)\n', (2668, 2677), True, 'import matplotlib.pyplot as plt\n'), ((2729, 2756), 'matplotlib.pyplot.title', 'plt.title', (['"""Gaussian Noise"""'], {}), "('Gaussian Noise')\n", (2738, 2756), True, 'import matplotlib.pyplot as plt\n'), ((2771, 2791), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(4)'], {}), '(3, 2, 4)\n', (2782, 2791), True, 'import matplotlib.pyplot as plt\n'), ((2932, 2963), 'matplotlib.pyplot.title', 'plt.title', (['"""Gaussian Filtering"""'], {}), "('Gaussian Filtering')\n", (2941, 2963), True, 'import matplotlib.pyplot as plt\n'), ((2978, 2998), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(3)', '(2)', '(6)'], {}), '(3, 2, 6)\n', (2989, 2998), True, 'import matplotlib.pyplot as plt\n'), ((3154, 3183), 'matplotlib.pyplot.title', 'plt.title', (['"""Median Filtering"""'], {}), "('Median 
Filtering')\n", (3163, 3183), True, 'import matplotlib.pyplot as plt\n'), ((3188, 3198), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (3196, 3198), True, 'import matplotlib.pyplot as plt\n'), ((413, 431), 'numpy.shape', 'numpy.shape', (['image'], {}), '(image)\n', (424, 431), False, 'import numpy\n'), ((202, 236), 'scipy.signal.gaussian', 'signal.gaussian', (['kernlen'], {'std': 'nsig'}), '(kernlen, std=nsig)\n', (217, 236), False, 'from scipy import signal\n'), ((728, 794), 'numpy.median', 'numpy.median', (['(image[w:w + KernelSizeI, h:h + KernelSizeI] * kernel)'], {}), '(image[w:w + KernelSizeI, h:h + KernelSizeI] * kernel)\n', (740, 794), False, 'import numpy\n'), ((850, 914), 'numpy.mean', 'numpy.mean', (['(image[w:w + KernelSizeI, h:h + KernelSizeI] * kernel)'], {}), '(image[w:w + KernelSizeI, h:h + KernelSizeI] * kernel)\n', (860, 914), False, 'import numpy\n')] |
from mermaid_demos.rdmm_synth_data_generation.create_poly import Poly
import numpy as np
class Rectangle(Poly):
def __init__(self,setting,scale=1.):
name, img_sz, center_pos, height, width, rotation = setting['name'],setting['img_sz'], setting['center_pos'], setting['height'], setting['width'], setting['rotation']
self.center_pos = center_pos
height,width = self.rescale(height*2,width*2,scale)
self.height = height
self.width = width
vertices = self.get_point()
setting_for_poly = dict(name=name, img_sz=img_sz, vertices=vertices,rotation=rotation)
super(Rectangle,self).__init__(setting_for_poly)
self.name = setting['name']
self.type='rect'
self.shape_info = {'center_pos':center_pos, 'height':height,'width':width}
def rescale(self,height,width,scale):
return height*scale, width*scale
def get_point(self):
r= self.height
c = self.width
point = self.center_pos
points = [[point[0]-r/2,point[0] + r/2,point[0] + r/2,point[0]-r/2],
[point[1]-c/2, point[1]-c/2,point[1] + c/2,point[1]+c/2]]
points = np.array(points)
return points | [
"numpy.array"
] | [((1174, 1190), 'numpy.array', 'np.array', (['points'], {}), '(points)\n', (1182, 1190), True, 'import numpy as np\n')] |
# Copyright (c) 2021. <NAME>. All rights Reserved.
import numpy
import numpy as np
import pandas as pd
from bm.datamanipulation.AdjustDataFrame import remove_null_values
class DocumentProcessor:
custom_dtypes = []
model_types = []
def __init__(self):
self.custom_dtypes = ['int64', 'float64', 'datetime', 'string']
self.model_types = ['Prediction', 'Time Series Forecasting']
def document_analyzer(self, csv_file_location):
# Read the file
df = pd.read_csv(csv_file_location)
if (not df.empty) or (len(df.columns) < 2):
total_rows = len(df.index)
# list of columns data types
columns_list = df.columns
data_types = df.dtypes
extracted_data_types = []
datetime_columns = []
numric_columns = []
for col in df.columns:
if df[col].dtype == 'object':
try:
df[col] = pd.to_datetime(df[col])
datetime_columns.append(col)
extracted_data_types.append('datetime')
except ValueError:
extracted_data_types.append('string')
pass
elif (df[col].dtype == 'float64' or df[col].dtype == 'int64'):
numric_columns.append(col)
extracted_data_types.append(df[col].dtype.name)
else:
extracted_data_types.append('string')
# Check if there is any empty columns
df = df.replace(' ', np.nan)
nan_cols = []
for col in df.columns:
x = pd.isna(df[col])
x = x.to_numpy()
if not False in x:
nan_cols.append(col)
nan_cols = numpy.array(nan_cols)
# Clean the data frame
df = df.drop(columns=nan_cols, axis=1)
final_columns_list = df.columns
total_rows = len(df.index)
            df = remove_null_values(df)  # drop rows containing null values
final_total_rows = len(df.index) # Get number of rows after removing the null rows
            no_dropped_columns = total_rows
            # According to data types, show suggested models
            #print('We have reviewed the provided data and below are our review results:')
            #print('1- You have data of (' + ', '.join(columns_list) + ')')
            #print('2- The columns (' + ', '.join(nan_cols) + ') are empty and will be removed before proceeding to create the model')
            #print('3- The final columns list to be used for creating the model is: [' + ', '.join(final_columns_list) + ']')
            #print('4- You have a number of records that contain empty values in some columns; the model ignores any record that has empty values')
            #print('5- Number of rows after removing rows with some empty values is: ' + str(final_total_rows) + '\n')
            #print('Based on the above analysis results, BrontoMind can help you to do the following:')
            #print('1- Create a prediction model to predict the value of one or more items from (' + ', '.join(columns_list) + '), using the remaining columns as input for this model.')
            #print('2- Build a time series forecasting model to track the changes in the values of one of (' + ','.join(numric_columns) + ') according to the change in the date/time of one of: (' + ','.join(datetime_columns) + ')')
return ', '.join(columns_list), ', '.join(nan_cols), ', '.join(final_columns_list), str(final_total_rows), ','.join(numric_columns), ','.join(datetime_columns)
else:
return -1
def dataframe_summary(self,df):
        return 'We have reviewed the provided data and below are our review results:' \
               ' 1- You have data of ' + 'columns_list'\
               ' 2- The column/s ' + 'nan_cols' + ' are empty and will be removed before proceeding to create the model' + \
               ' 3- The final columns list is: ' + 'final_columns_list' + \
               ' 4- You have a number of records that contain empty values in some columns' \
               ' 5- Number of rows after removing rows with some empty values is: ' + 'final_total_rows'
#d = DocumentProcessor()
#d.document_analyzer('covid_19_india.csv')
| [
"bm.datamanipulation.AdjustDataFrame.remove_null_values",
"pandas.read_csv",
"numpy.array",
"pandas.isna",
"pandas.to_datetime"
] | [((499, 529), 'pandas.read_csv', 'pd.read_csv', (['csv_file_location'], {}), '(csv_file_location)\n', (510, 529), True, 'import pandas as pd\n'), ((1847, 1868), 'numpy.array', 'numpy.array', (['nan_cols'], {}), '(nan_cols)\n', (1858, 1868), False, 'import numpy\n'), ((2056, 2078), 'bm.datamanipulation.AdjustDataFrame.remove_null_values', 'remove_null_values', (['df'], {}), '(df)\n', (2074, 2078), False, 'from bm.datamanipulation.AdjustDataFrame import remove_null_values\n'), ((1698, 1714), 'pandas.isna', 'pd.isna', (['df[col]'], {}), '(df[col])\n', (1705, 1714), True, 'import pandas as pd\n'), ((980, 1003), 'pandas.to_datetime', 'pd.to_datetime', (['df[col]'], {}), '(df[col])\n', (994, 1003), True, 'import pandas as pd\n')] |
import torch
import torch.nn as nn
import transformers
from transformers import XLMRobertaModel
import numpy as np
from flag import get_parser
parser = get_parser()
args = parser.parse_args()
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
class RobertaBengali(nn.Module):
def __init__(self):
super(RobertaBengali, self).__init__()
self.roberta = XLMRobertaModel.from_pretrained(args.pretrained_model_name)
self.roberta_drop = nn.Dropout(args.dropout) #0.3
self.out = nn.Linear(args.roberta_hidden, args.classes)
def forward(self, ids, attention_mask):
_,o2 = self.roberta(
ids,
attention_mask=attention_mask
)
bo = self.roberta_drop(o2)
output = self.out(bo)
return output
class CustomRobertaBengali(nn.Module):
def __init__(self):
super(CustomRobertaBengali, self).__init__()
self.roberta = XLMRobertaModel.from_pretrained(args.pretrained_model_name)
self.roberta_drop = nn.Dropout(args.dropout) #0.3
self.out = nn.Linear(args.roberta_hidden * 2, args.classes)
def forward(self, ids, attention_mask):
o1,o2 = self.roberta(
ids,
attention_mask=attention_mask
)
apool = torch.mean(o1, 1)
mpool, _ = torch.max(o1, 1)
cat = torch.cat((apool, mpool), 1)
bo = self.roberta_drop(cat)
logits = self.out(bo)
return logits
class RobertaBengaliTwo(nn.Module):
def __init__(self):
super(RobertaBengaliTwo, self).__init__()
self.roberta = XLMRobertaModel.from_pretrained(args.pretrained_model_name,output_hidden_states=True)
self.roberta_drop = nn.Dropout(args.dropout)
self.l0 = nn.Linear(args.roberta_hidden * 2, args.classes)
torch.nn.init.normal_(self.l0.weight, std=0.02)
def forward(self, ids, attention_mask):
_, _, out = self.roberta(
ids,
attention_mask=attention_mask
)
out = torch.cat((out[-1], out[-2]), dim=-1)
#out = self.roberta_drop(out)
out = out[:,0,:]
logits = self.l0(out)
return logits
class RobertaBengaliNext(nn.Module):
def __init__(self):
super(RobertaBengaliNext, self).__init__()
self.roberta = XLMRobertaModel.from_pretrained(args.pretrained_model_name,output_hidden_states=True)
self.roberta_drop = nn.Dropout(args.dropout)
self.l0 = nn.Linear(args.roberta_hidden * 4, args.classes)
torch.nn.init.normal_(self.l0.weight, std=0.02)
def _get_cls_vec(self, vec):
return vec[:,0,:].view(-1, args.roberta_hidden)
def forward(self,ids,attention_mask):
_, _, hidden_states = self.roberta(
ids,
attention_mask=attention_mask
)
vec1 = self._get_cls_vec(hidden_states[-1])
vec2 = self._get_cls_vec(hidden_states[-2])
vec3 = self._get_cls_vec(hidden_states[-3])
vec4 = self._get_cls_vec(hidden_states[-4])
out = torch.cat([vec1, vec2, vec3, vec4], dim=1)
#out = self.roberta_drop(out)
logits = self.l0(out)
return logits
| [
"torch.manual_seed",
"torch.nn.Dropout",
"torch.mean",
"torch.max",
"torch.nn.init.normal_",
"numpy.random.seed",
"torch.nn.Linear",
"torch.cuda.manual_seed",
"transformers.XLMRobertaModel.from_pretrained",
"flag.get_parser",
"torch.cat"
] | [((152, 164), 'flag.get_parser', 'get_parser', ([], {}), '()\n', (162, 164), False, 'from flag import get_parser\n'), ((193, 218), 'numpy.random.seed', 'np.random.seed', (['args.seed'], {}), '(args.seed)\n', (207, 218), True, 'import numpy as np\n'), ((219, 247), 'torch.manual_seed', 'torch.manual_seed', (['args.seed'], {}), '(args.seed)\n', (236, 247), False, 'import torch\n'), ((248, 281), 'torch.cuda.manual_seed', 'torch.cuda.manual_seed', (['args.seed'], {}), '(args.seed)\n', (270, 281), False, 'import torch\n'), ((411, 470), 'transformers.XLMRobertaModel.from_pretrained', 'XLMRobertaModel.from_pretrained', (['args.pretrained_model_name'], {}), '(args.pretrained_model_name)\n', (442, 470), False, 'from transformers import XLMRobertaModel\n'), ((499, 523), 'torch.nn.Dropout', 'nn.Dropout', (['args.dropout'], {}), '(args.dropout)\n', (509, 523), True, 'import torch.nn as nn\n'), ((548, 592), 'torch.nn.Linear', 'nn.Linear', (['args.roberta_hidden', 'args.classes'], {}), '(args.roberta_hidden, args.classes)\n', (557, 592), True, 'import torch.nn as nn\n'), ((964, 1023), 'transformers.XLMRobertaModel.from_pretrained', 'XLMRobertaModel.from_pretrained', (['args.pretrained_model_name'], {}), '(args.pretrained_model_name)\n', (995, 1023), False, 'from transformers import XLMRobertaModel\n'), ((1052, 1076), 'torch.nn.Dropout', 'nn.Dropout', (['args.dropout'], {}), '(args.dropout)\n', (1062, 1076), True, 'import torch.nn as nn\n'), ((1101, 1149), 'torch.nn.Linear', 'nn.Linear', (['(args.roberta_hidden * 2)', 'args.classes'], {}), '(args.roberta_hidden * 2, args.classes)\n', (1110, 1149), True, 'import torch.nn as nn\n'), ((1311, 1328), 'torch.mean', 'torch.mean', (['o1', '(1)'], {}), '(o1, 1)\n', (1321, 1328), False, 'import torch\n'), ((1348, 1364), 'torch.max', 'torch.max', (['o1', '(1)'], {}), '(o1, 1)\n', (1357, 1364), False, 'import torch\n'), ((1379, 1407), 'torch.cat', 'torch.cat', (['(apool, mpool)', '(1)'], {}), '((apool, mpool), 1)\n', (1388, 1407), False, 
'import torch\n'), ((1637, 1727), 'transformers.XLMRobertaModel.from_pretrained', 'XLMRobertaModel.from_pretrained', (['args.pretrained_model_name'], {'output_hidden_states': '(True)'}), '(args.pretrained_model_name,\n output_hidden_states=True)\n', (1668, 1727), False, 'from transformers import XLMRobertaModel\n'), ((1751, 1775), 'torch.nn.Dropout', 'nn.Dropout', (['args.dropout'], {}), '(args.dropout)\n', (1761, 1775), True, 'import torch.nn as nn\n'), ((1796, 1844), 'torch.nn.Linear', 'nn.Linear', (['(args.roberta_hidden * 2)', 'args.classes'], {}), '(args.roberta_hidden * 2, args.classes)\n', (1805, 1844), True, 'import torch.nn as nn\n'), ((1853, 1900), 'torch.nn.init.normal_', 'torch.nn.init.normal_', (['self.l0.weight'], {'std': '(0.02)'}), '(self.l0.weight, std=0.02)\n', (1874, 1900), False, 'import torch\n'), ((2064, 2101), 'torch.cat', 'torch.cat', (['(out[-1], out[-2])'], {'dim': '(-1)'}), '((out[-1], out[-2]), dim=-1)\n', (2073, 2101), False, 'import torch\n'), ((2354, 2444), 'transformers.XLMRobertaModel.from_pretrained', 'XLMRobertaModel.from_pretrained', (['args.pretrained_model_name'], {'output_hidden_states': '(True)'}), '(args.pretrained_model_name,\n output_hidden_states=True)\n', (2385, 2444), False, 'from transformers import XLMRobertaModel\n'), ((2468, 2492), 'torch.nn.Dropout', 'nn.Dropout', (['args.dropout'], {}), '(args.dropout)\n', (2478, 2492), True, 'import torch.nn as nn\n'), ((2513, 2561), 'torch.nn.Linear', 'nn.Linear', (['(args.roberta_hidden * 4)', 'args.classes'], {}), '(args.roberta_hidden * 4, args.classes)\n', (2522, 2561), True, 'import torch.nn as nn\n'), ((2570, 2617), 'torch.nn.init.normal_', 'torch.nn.init.normal_', (['self.l0.weight'], {'std': '(0.02)'}), '(self.l0.weight, std=0.02)\n', (2591, 2617), False, 'import torch\n'), ((3087, 3129), 'torch.cat', 'torch.cat', (['[vec1, vec2, vec3, vec4]'], {'dim': '(1)'}), '([vec1, vec2, vec3, vec4], dim=1)\n', (3096, 3129), False, 'import torch\n')] |
# (c) British Crown Copyright 2020, the Met Office.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the Met Office nor the names of its contributors may
# be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
# THE POSSIBILITY OF SUCH DAMAGE.
import netCDF4,argparse,sys
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import os
def collapse_dimensions_for_plotting(longitude, latitude, vname, vx, vd, dims):
"""
Pre-processing of COSP variable for plotting.
Arguments:
longitude: longitude from COSP output file [loc].
latitude: latitude from COSP output file [loc].
vname: variable name.
vx: variable from COSP output file [..., loc]
vd: dictionary with metadata about variable.
dims: dictionary with additional dimensions.
Return:
x: x axis values.
y: y axis values.
z: data array for plotting.
d: dictionary with plot configuration.
"""
yflip = False
xticks, yticks, xticks_labels, yticks_labels, xlabel, ylabel, vmax = (None,)*7
if vd['plot_type'] == 'map':
d = None
x = longitude[0]
y = latitude[:,0]
z = vx
if vname == 'parasolGrid_refl': z = vx[2]
# Roll longitude if there are values > 180
m = (x > 180.0)
        if m.any():  # only roll when some longitudes actually exceed 180
Nroll = longitude.shape[1] // 2
x[m] = x[m] - 360.0
x = np.roll(x, Nroll)
z = np.roll(z,Nroll,axis=1)
# Calculate latitude and longitude edge points.
# Assume they are increasing monotonically.
# Extend length to N+2 and calculate midpoints.
x = midpoints_to_edges(x)
y = midpoints_to_edges(y)
xticks = np.arange(-180,181,60)
yticks = np.arange(-90,91,30)
xlabel = 'Longitude (deg)'
ylabel = 'Latitude (deg)'
if vd['plot_type'] == '2Dhist':
weights = np.cos(latitude * np.pi / 180.0)
weights = weights / weights.sum()
z = np.sum(vx * weights, axis=2)
x = np.arange(z.shape[1]+1)
y = np.arange(z.shape[0]+1)
if vd['plot_type'] == 'zonal_cross_section':
z = np.average(vx, axis=2)
x = midpoints_to_edges(latitude[:,0])
y = np.arange(z.shape[0] + 1)
if vd['plot_type'] in ('2Dhist','zonal_cross_section'):
if vd['xaxis_type'] == 'tau7':
xticks_labels = ('0', '0.3', '1.3', '3.6', '9.4', '23', '60', '')
xticks = x
xlabel = 'Cloud optical depth'
if vd['xaxis_type'] == 'cloudsat_DBZE_BINS':
x = np.arange(-50,26,5)
xticks = x
xticks_labels = None
xlabel = 'Radar reflectivity (dBZ)'
if vd['xaxis_type'] == 'SR_BINS':
xticks_labels = ('0', '0.01', '1.2', '3', '5', '7', '10', '15', '20',
'25', '30', '40', '50', '60', '80', '')
xticks = x
xlabel = 'Lidar scattering ratio'
if vd['xaxis_type'] == 'latitude':
xticks_labels = None
xticks = np.arange(-90,91,30)
xlabel = 'Latitude (deg)'
if vd['yaxis_type'] == 'pres7':
yticks_labels = ('1000', '800', '680', '560', '440', '310', '180','')
yticks = y
ylabel = 'Cloud Top Pressure (hPa)'
if vd['yaxis_type'] == 'hgt16':
yticks_labels = ('', '0', '500', '1000', '1500', '2000', '2500',
'3000', '4000', '5000', '7000', '9000', '11000',
'13000', '15000', '17000', '')
yticks = y
ylabel = 'Cloud Top Height (m)'
if vd['yaxis_type'] == 'REICE_MODIS':
yticks_labels = ('0', '10', '20', '30', '40', '60', '90')
yticks = y
ylabel = 'Ice particle size (micron)'
if vd['yaxis_type'] == 'RELIQ_MODIS':
yticks_labels = ('0', '8', '10', '13', '15', '20', '30')
yticks = y
ylabel = 'Liquid particle size (micron)'
if vd['yaxis_type'] == 'levStat':
y = 480*np.arange(41)
yticks = y[0::4]
yticks_labels = None
ylabel = 'Altitude (m)'
yflip = True
if vd['yaxis_type'] == 'lev':
yticks = y[0::4]
yticks_labels = None
ylabel = 'Model level'
yflip = True
# Extra processing for specific variables
vmax = None
if vname == 'cfadLidarsr355': vmax = 0.03
if vname == 'cfadLidarsr532': vmax = 0.03
if vname == 'cfadLidarsr532gr': vmax = 0.03
if vname == 'cfadDbze94': vmax = 0.05
if vname == 'iwpmodis': vmax = 2.0
if vname == 'lwpmodis': vmax = 1.0
if vname == 'tauisccp': vmax = 100.0
if vname == 'tautmodis': vmax = 100.0
if vname == 'tauwmodis': vmax = 100.0
d = {'xticks':xticks,
'yticks':yticks,
'xticks_labels':xticks_labels,
'yticks_labels':yticks_labels,
'xlabel':xlabel,
'ylabel':ylabel,
'vmax':vmax}
# Flip y axis?
if yflip: z = np.flip(z, axis=0)
return x, y, z, d
def midpoints_to_edges(x):
"""
Calculate edge points. Midpoints must increase monotonically.
Arguments:
x: vector with mid points. Dimension N.
Return:
y: numpy vector with edges. Dimension N+1.
"""
y = np.append(np.append(2 * x[0] - x[1], x), 2 * x[-1] - x[-2])
return 0.5 * (y[1:] + y[:-1])
def plot_pcolormesh(x, y, v, d, fig_name, title=None, coastlines=False):
"""
Plot pcolormesh and write the output to a png file.
Arguments:
x: x axis values.
y: y axis values.
v: data array. Dimensions [Nx,Ny,Np]
d: dictionary with plot configuration.
fig_name: output file name.
Keywords:
title: plot title.
coastlines: plot coast lines.
"""
fig = plt.figure(figsize=(10,5))
cmap = plt.get_cmap('YlOrRd', 20)
cmap.set_bad('grey', 1)
if coastlines:
ax = plt.axes(projection=ccrs.PlateCarree(central_longitude=0))
ax.coastlines()
h = plt.pcolormesh(x, y, v, cmap=cmap, vmax=d['vmax'])
if d['xticks_labels']:
plt.xticks(d['xticks'],d['xticks_labels'])
else:
plt.xticks(d['xticks'])
if d['yticks_labels']:
plt.yticks(d['yticks'],d['yticks_labels'])
else:
plt.yticks(d['yticks'])
plt.xlabel(d['xlabel'])
plt.ylabel(d['ylabel'])
plt.colorbar(h,orientation='vertical')
if title is not None: plt.title(title)
plt.savefig(fig_name, dpi=200)
plt.close()
def read_dimensions(fname):
"""
Read useful dimensions from COSP output file.
Arguments:
fname: path to NetCDF file.
Return:
d: dictionary with the following dimensions:
'cloudsat_DBZE_BINS', 'hgt16', 'REICE_MODIS',
'RELIQ_MODIS', 'levStat', 'SR_BINS', 'lev'
"""
dim_names = ['cloudsat_DBZE_BINS', 'hgt16', 'REICE_MODIS', 'RELIQ_MODIS',
'levStat', 'SR_BINS', 'lev']
d = {}
f_id = netCDF4.Dataset(fname, 'r')
for dim in dim_names:
d[dim] = f_id.variables[dim][:]
f_id.close()
return d
def read_var_to_masked_array(fname, vname, fill_value, Nlat_lon = None):
"""
Reads a variable from a NetCDF file, and produces a masked array.
Arguments:
fname: path to NetCDF file.
vname: variable name.
fill_value: missing data value.
Keywords:
Nlat_lon: tuple (Nrows, Ncols). If defined, variable is
reshaped to a lat-lon grid.
Return:
x: variable data array.
lon: longitude array.
lat: latitude array
units: units attribute.
long_name: long name attribute.
"""
f_id = netCDF4.Dataset(fname, 'r')
x = np.ma.masked_equal(f_id.variables[vname][:], fill_value)
lon = np.ma.masked_equal(f_id.variables['longitude'][:], fill_value)
lat = np.ma.masked_equal(f_id.variables['latitude'][:], fill_value)
units = f_id.variables[vname].getncattr('units')
long_name = f_id.variables[vname].getncattr('long_name')
f_id.close()
if Nlat_lon is not None:
x = np.reshape(x, x.shape[:-1]+Nlat_lon)
lon = np.reshape(lon, lon.shape[:-1] + Nlat_lon)
lat = np.reshape(lat, lat.shape[:-1] + Nlat_lon)
return x, lon, lat, units, long_name
def produce_cosp_summary_plots(fname, variables, output_dir, Nlat_lon = None):
"""
Wrapper function that iterates over a list of COSP variables and produces
a PNG figure for each of them.
Arguments:
fname: COSP output filename.
variables: list of variable names.
output_dir: output directory.
Keywords:
Nlat_lon: tuple with (lat,lon) dimensions model's grid.
"""
fill_value = -1.0e30
dimensions = read_dimensions(fname)
for vname, vd in variables.items():
new_shape = None
if vd['reshape']: new_shape = Nlat_lon
vx, longitude, latitude, units, long_name = read_var_to_masked_array(fname, vname, fill_value, Nlat_lon = new_shape)
x, y, z, pkw = collapse_dimensions_for_plotting(longitude, latitude, vname, vx, vd, dimensions)
title = long_name + ' (' + units + ')'
fig_name = os.path.join(output_dir, ".".join([os.path.basename(fname), vname, 'png']))
coastlines = False
if vd['plot_type'] == 'map': coastlines = True
plot_pcolormesh(x, y, z, pkw, fig_name, title=title, coastlines=coastlines)
def variable2D_metadata(var_list, fname):
"""
Return dictionary with metadata for each variable.
Arguments:
var_list: list of variable names.
fname: COSP output filename.
Return:
d: dictionary of dictionaries with relevant metadata.
"""
map_dims = (('loc',),('PARASOL_NREFL','loc'))
hist2D_dims = (('pres7', 'tau7', 'loc'),
('levStat', 'cloudsat_DBZE_BINS', 'loc'),
('hgt16', 'tau7', 'loc'),
('REICE_MODIS', 'tau7', 'loc'),
('RELIQ_MODIS', 'tau7', 'loc'),
('levStat', 'SR_BINS', 'loc'))
zcs_dims = (('levStat','loc'), ('lev','loc'))
f_id = netCDF4.Dataset(fname, 'r')
vmeta = {}
for vname in var_list:
x = f_id.variables[vname]
# Standard map
if x.dimensions in map_dims:
vmeta[vname] = {'plot_type':'map', 'reshape':True}
# 2D histograms
if x.dimensions in hist2D_dims:
vmeta[vname] = {'plot_type':'2Dhist', 'reshape':False,
'xaxis_type': x.dimensions[1],
'yaxis_type': x.dimensions[0]}
# Zonal cross section
if x.dimensions in zcs_dims:
vmeta[vname] = {'plot_type':'zonal_cross_section', 'reshape':True,
'xaxis_type': 'latitude',
'yaxis_type': x.dimensions[0]}
f_id.close()
return vmeta
#######################
# Main
#######################
if __name__ == '__main__':
# Command line arguments.
parser = argparse.ArgumentParser()
parser.add_argument("--tst_file", default="./data/outputs/UKMO/cosp2_output.um_global.nc",
help="Test output file.")
parser.add_argument("--out_dir", default="./data/outputs/UKMO",
help="Output directory.")
parser.add_argument("--Nlat", type=int, default=36, help="Number of latitude points.")
parser.add_argument("--Nlon", type=int, default=48, help="Number of longitude points.")
args = parser.parse_args()
# Dictonaries with list of variables.
v2D_maps_names = ['cllcalipsoice', 'clmcalipsoice', 'clhcalipsoice', 'cltcalipsoice', 'cllcalipsoliq', 'clmcalipsoliq',
'clhcalipsoliq', 'cltcalipsoliq', 'cllcalipsoun', 'clmcalipsoun', 'clhcalipsoun', 'cltcalipsoun',
'cllcalipso', 'clmcalipso', 'clhcalipso', 'cltcalipso', 'clopaquecalipso', 'clthincalipso',
'clzopaquecalipso', 'clopaquetemp', 'clthintemp', 'clzopaquetemp', 'clopaquemeanz', 'clthinmeanz',
'clthinemis', 'clopaquemeanzse', 'clthinmeanzse', 'clzopaquecalipsose', 'cllgrLidar532',
'clmgrLidar532', 'clhgrLidar532', 'cltgrLidar532', 'cllatlid', 'clmatlid', 'clhatlid',
'cltatlid', 'cloudsatpia', 'cltisccp', 'meantbisccp', 'meantbclrisccp', 'pctisccp', 'tauisccp',
'albisccp', 'misr_meanztop', 'misr_cldarea', 'cltmodis', 'clwmodis', 'climodis', 'clhmodis',
'clmmodis', 'cllmodis', 'tautmodis', 'tauwmodis', 'tauimodis', 'tautlogmodis', 'tauwlogmodis',
'tauilogmodis', 'reffclwmodis', 'reffclimodis', 'pctmodis', 'lwpmodis', 'iwpmodis',
'cltlidarradar', 'cloudsat_tcc', 'cloudsat_tcc2','parasolGrid_refl']
v2D_hists_names = ['clisccp', 'clmodis', 'cfadDbze94', 'clMISR',
'modis_Optical_Thickness_vs_ReffICE',
'modis_Optical_Thickness_vs_ReffLIQ',
'cfadLidarsr532', 'cfadLidarsr532gr', 'cfadLidarsr355']
v2D_zcs_names = ['clcalipsoice','clcalipsoliq','clcalipsoun','clcalipsotmp','clcalipsotmpice','clcalipsotmpliq',
'clcalipsotmpun','clcalipso','clcalipsoopaque','clcalipsothin','clcalipsozopaque',
'clcalipsoopacity','clgrLidar532','clatlid','clcalipso2',
'lidarBetaMol532gr','lidarBetaMol532','lidarBetaMol355']
v2D_all_names = v2D_maps_names + v2D_hists_names + v2D_zcs_names
# Plots for these variables are not yet developed
# atb532_perp(lev, cosp_scol, loc);
# atb532(lev, cosp_scol, loc);
# calipso_tau(lev, cosp_scol, loc);
# atb532gr(lev, cosp_scol, loc);
# atb355(lev, cosp_scol, loc);
# dbze94(lev, cosp_scol, loc);
# parasolPix_refl(PARASOL_NREFL, cosp_scol, loc);
# boxtauisccp(cosp_scol, loc);
# boxptopisccp(cosp_scol, loc);
# Build dictionary with metadata and produce plots.
vars2D = variable2D_metadata(v2D_all_names, args.tst_file)
produce_cosp_summary_plots(args.tst_file, vars2D, args.out_dir,
Nlat_lon=(args.Nlat, args.Nlon))
print("===== Summary plots produced =====")
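Several variables above are tagged with `'reshape':True`, meaning their flat location axis must be unfolded into a `(Nlat, Nlon)` grid before plotting. A minimal sketch of that reshape, using the CLI defaults `--Nlat 36 --Nlon 48` and a made-up field (not data from the actual output file):

```python
import numpy as np

# Made-up flat field: 3 levels x (36*48) locations, matching the
# CLI defaults --Nlat 36 --Nlon 48 used above.
Nlat_lon = (36, 48)
x = np.arange(3 * 36 * 48, dtype=float).reshape(3, 36 * 48)

# Unfold the trailing location axis into a latitude/longitude grid,
# as the plotting code does with np.reshape(x, x.shape[:-1] + Nlat_lon).
grid = np.reshape(x, x.shape[:-1] + Nlat_lon)
print(grid.shape)  # (3, 36, 48)
```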
| [
"numpy.ma.masked_equal",
"matplotlib.pyplot.ylabel",
"matplotlib.pyplot.pcolormesh",
"numpy.arange",
"numpy.flip",
"numpy.reshape",
"argparse.ArgumentParser",
"matplotlib.pyplot.xlabel",
"netCDF4.Dataset",
"matplotlib.pyplot.close",
"matplotlib.pyplot.yticks",
"matplotlib.pyplot.savefig",
"m... | [((7214, 7241), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(10, 5)'}), '(figsize=(10, 5))\n', (7224, 7241), True, 'import matplotlib.pyplot as plt\n'), ((7252, 7278), 'matplotlib.pyplot.get_cmap', 'plt.get_cmap', (['"""YlOrRd"""', '(20)'], {}), "('YlOrRd', 20)\n", (7264, 7278), True, 'import matplotlib.pyplot as plt\n'), ((7430, 7480), 'matplotlib.pyplot.pcolormesh', 'plt.pcolormesh', (['x', 'y', 'v'], {'cmap': 'cmap', 'vmax': "d['vmax']"}), "(x, y, v, cmap=cmap, vmax=d['vmax'])\n", (7444, 7480), True, 'import matplotlib.pyplot as plt\n'), ((7725, 7748), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (["d['xlabel']"], {}), "(d['xlabel'])\n", (7735, 7748), True, 'import matplotlib.pyplot as plt\n'), ((7753, 7776), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (["d['ylabel']"], {}), "(d['ylabel'])\n", (7763, 7776), True, 'import matplotlib.pyplot as plt\n'), ((7781, 7820), 'matplotlib.pyplot.colorbar', 'plt.colorbar', (['h'], {'orientation': '"""vertical"""'}), "(h, orientation='vertical')\n", (7793, 7820), True, 'import matplotlib.pyplot as plt\n'), ((7867, 7897), 'matplotlib.pyplot.savefig', 'plt.savefig', (['fig_name'], {'dpi': '(200)'}), '(fig_name, dpi=200)\n', (7878, 7897), True, 'import matplotlib.pyplot as plt\n'), ((7902, 7913), 'matplotlib.pyplot.close', 'plt.close', ([], {}), '()\n', (7911, 7913), True, 'import matplotlib.pyplot as plt\n'), ((8383, 8410), 'netCDF4.Dataset', 'netCDF4.Dataset', (['fname', '"""r"""'], {}), "(fname, 'r')\n", (8398, 8410), False, 'import netCDF4, argparse, sys\n'), ((9098, 9125), 'netCDF4.Dataset', 'netCDF4.Dataset', (['fname', '"""r"""'], {}), "(fname, 'r')\n", (9113, 9125), False, 'import netCDF4, argparse, sys\n'), ((9134, 9190), 'numpy.ma.masked_equal', 'np.ma.masked_equal', (['f_id.variables[vname][:]', 'fill_value'], {}), '(f_id.variables[vname][:], fill_value)\n', (9152, 9190), True, 'import numpy as np\n'), ((9201, 9263), 'numpy.ma.masked_equal', 'np.ma.masked_equal', (["f_id.variables['longitude'][:]", 
'fill_value'], {}), "(f_id.variables['longitude'][:], fill_value)\n", (9219, 9263), True, 'import numpy as np\n'), ((9274, 9335), 'numpy.ma.masked_equal', 'np.ma.masked_equal', (["f_id.variables['latitude'][:]", 'fill_value'], {}), "(f_id.variables['latitude'][:], fill_value)\n", (9292, 9335), True, 'import numpy as np\n'), ((11528, 11555), 'netCDF4.Dataset', 'netCDF4.Dataset', (['fname', '"""r"""'], {}), "(fname, 'r')\n", (11543, 11555), False, 'import netCDF4, argparse, sys\n'), ((12426, 12451), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (12449, 12451), False, 'import netCDF4, argparse, sys\n'), ((3061, 3085), 'numpy.arange', 'np.arange', (['(-180)', '(181)', '(60)'], {}), '(-180, 181, 60)\n', (3070, 3085), True, 'import numpy as np\n'), ((3101, 3123), 'numpy.arange', 'np.arange', (['(-90)', '(91)', '(30)'], {}), '(-90, 91, 30)\n', (3110, 3123), True, 'import numpy as np\n'), ((3245, 3277), 'numpy.cos', 'np.cos', (['(latitude * np.pi / 180.0)'], {}), '(latitude * np.pi / 180.0)\n', (3251, 3277), True, 'import numpy as np\n'), ((3332, 3360), 'numpy.sum', 'np.sum', (['(vx * weights)'], {'axis': '(2)'}), '(vx * weights, axis=2)\n', (3338, 3360), True, 'import numpy as np\n'), ((3373, 3398), 'numpy.arange', 'np.arange', (['(z.shape[1] + 1)'], {}), '(z.shape[1] + 1)\n', (3382, 3398), True, 'import numpy as np\n'), ((3409, 3434), 'numpy.arange', 'np.arange', (['(z.shape[0] + 1)'], {}), '(z.shape[0] + 1)\n', (3418, 3434), True, 'import numpy as np\n'), ((3494, 3516), 'numpy.average', 'np.average', (['vx'], {'axis': '(2)'}), '(vx, axis=2)\n', (3504, 3516), True, 'import numpy as np\n'), ((3575, 3600), 'numpy.arange', 'np.arange', (['(z.shape[0] + 1)'], {}), '(z.shape[0] + 1)\n', (3584, 3600), True, 'import numpy as np\n'), ((6403, 6421), 'numpy.flip', 'np.flip', (['z'], {'axis': '(0)'}), '(z, axis=0)\n', (6410, 6421), True, 'import numpy as np\n'), ((6699, 6728), 'numpy.append', 'np.append', (['(2 * x[0] - x[1])', 'x'], {}), '(2 * x[0] - 
x[1], x)\n', (6708, 6728), True, 'import numpy as np\n'), ((7516, 7559), 'matplotlib.pyplot.xticks', 'plt.xticks', (["d['xticks']", "d['xticks_labels']"], {}), "(d['xticks'], d['xticks_labels'])\n", (7526, 7559), True, 'import matplotlib.pyplot as plt\n'), ((7577, 7600), 'matplotlib.pyplot.xticks', 'plt.xticks', (["d['xticks']"], {}), "(d['xticks'])\n", (7587, 7600), True, 'import matplotlib.pyplot as plt\n'), ((7636, 7679), 'matplotlib.pyplot.yticks', 'plt.yticks', (["d['yticks']", "d['yticks_labels']"], {}), "(d['yticks'], d['yticks_labels'])\n", (7646, 7679), True, 'import matplotlib.pyplot as plt\n'), ((7697, 7720), 'matplotlib.pyplot.yticks', 'plt.yticks', (["d['yticks']"], {}), "(d['yticks'])\n", (7707, 7720), True, 'import matplotlib.pyplot as plt\n'), ((7846, 7862), 'matplotlib.pyplot.title', 'plt.title', (['title'], {}), '(title)\n', (7855, 7862), True, 'import matplotlib.pyplot as plt\n'), ((9509, 9547), 'numpy.reshape', 'np.reshape', (['x', '(x.shape[:-1] + Nlat_lon)'], {}), '(x, x.shape[:-1] + Nlat_lon)\n', (9519, 9547), True, 'import numpy as np\n'), ((9560, 9602), 'numpy.reshape', 'np.reshape', (['lon', '(lon.shape[:-1] + Nlat_lon)'], {}), '(lon, lon.shape[:-1] + Nlat_lon)\n', (9570, 9602), True, 'import numpy as np\n'), ((9617, 9659), 'numpy.reshape', 'np.reshape', (['lat', '(lat.shape[:-1] + Nlat_lon)'], {}), '(lat, lat.shape[:-1] + Nlat_lon)\n', (9627, 9659), True, 'import numpy as np\n'), ((2754, 2771), 'numpy.roll', 'np.roll', (['x', 'Nroll'], {}), '(x, Nroll)\n', (2761, 2771), True, 'import numpy as np\n'), ((2788, 2813), 'numpy.roll', 'np.roll', (['z', 'Nroll'], {'axis': '(1)'}), '(z, Nroll, axis=1)\n', (2795, 2813), True, 'import numpy as np\n'), ((3913, 3934), 'numpy.arange', 'np.arange', (['(-50)', '(26)', '(5)'], {}), '(-50, 26, 5)\n', (3922, 3934), True, 'import numpy as np\n'), ((4396, 4418), 'numpy.arange', 'np.arange', (['(-90)', '(91)', '(30)'], {}), '(-90, 91, 30)\n', (4405, 4418), True, 'import numpy as np\n'), ((5416, 5429), 
'numpy.arange', 'np.arange', (['(41)'], {}), '(41)\n', (5425, 5429), True, 'import numpy as np\n'), ((7359, 7396), 'cartopy.crs.PlateCarree', 'ccrs.PlateCarree', ([], {'central_longitude': '(0)'}), '(central_longitude=0)\n', (7375, 7396), True, 'import cartopy.crs as ccrs\n'), ((10629, 10652), 'os.path.basename', 'os.path.basename', (['fname'], {}), '(fname)\n', (10645, 10652), False, 'import os\n')] |
from typing import List
from copy import deepcopy
import multiprocessing
import time
import pytest
import numpy as np
import torch
from regym.networks.servers import neural_net_server
from regym.networks.servers.neural_net_server import NeuralNetServerHandler
def test_can_process_single_observation():
client_connection1, server_connection1 = multiprocessing.Pipe()
net = generate_dummy_neural_net(weight=1.)
server = multiprocessing.Process(
target=neural_net_server,
args=(deepcopy(net), [server_connection1]))
observation_1 = np.array([0])
client_connection1.send((observation_1, None))
server.start()
expected_response_1 = {'output': torch.Tensor([0])}
assert expected_response_1 == client_connection1.recv()
server.terminate()
def test_can_process_batch_observation_and_respond_individually():
client_connection1, server_connection1 = multiprocessing.Pipe()
client_connection2, server_connection2 = multiprocessing.Pipe()
net = generate_dummy_neural_net(weight=1.)
server = multiprocessing.Process(
target=neural_net_server,
args=(deepcopy(net), [server_connection1, server_connection2]))
observation_1 = np.array([0])
observation_2 = np.array([1])
client_connection1.send((observation_1, None))
client_connection2.send((observation_2, None))
server.start()
expected_response_1 = {'output': torch.Tensor([0])}
expected_response_2 = {'output': torch.Tensor([1])}
assert expected_response_1 == client_connection1.recv()
assert expected_response_2 == client_connection2.recv()
server.terminate()
def test_can_update_the_neural_net_in_the_server():
net1 = generate_dummy_neural_net(weight=0.)
net2 = generate_dummy_neural_net(weight=1.)
observation = np.array([1])
expected_response_1 = {'output': torch.Tensor([0])}
expected_response_2 = {'output': torch.Tensor([1])}
server_handler = NeuralNetServerHandler(num_connections=1,
net=net1)
server_handler.client_connections[0].send((observation, None))
actual_response = server_handler.client_connections[0].recv()
assert expected_response_1 == actual_response
server_handler.update_neural_net(net2)
server_handler.client_connections[0].send((observation, None))
actual_response = server_handler.client_connections[0].recv()
assert expected_response_2 == actual_response
server_handler.close_server()
@pytest.mark.skipif(not torch.cuda.is_available(),
reason="Requires a gpu and cuda to be available")
def test_server_is_faster_on_gpu():
torch.multiprocessing.set_start_method('spawn', force=True)
import cProfile
import pstats
pr = cProfile.Profile()
pr.enable()
gpu_time = _test_server_speed(device='cuda:0')
pr.disable()
sortby = 'cumulative'
ps = pstats.Stats(pr).sort_stats(sortby)
print(ps.print_stats())
pr.enable()
cpu_time = _test_server_speed(device='cpu')
pr.disable()
sortby = 'cumulative'
ps = pstats.Stats(pr).sort_stats(sortby)
print(ps.print_stats())
print('CPU time:', cpu_time, 'GPU time:', gpu_time, 'Speedup:', cpu_time / gpu_time)
assert gpu_time < cpu_time
#if filename != '': ps.dump_stats(filename)
#gpu_time = _test_server_speed(device='cpu')
def _test_server_speed(device, init_dim=32, num_connections=20,
num_requests=500):
net = TimingDummyNet(dims=[init_dim,32,32,32,32,32,32])
server_handler = NeuralNetServerHandler(num_connections=num_connections,
net=net, device=device)
total_time = 0
for _ in range(num_requests):
for connection_i in range(num_connections):
observation = torch.rand(size=(1, init_dim))
server_handler.client_connections[connection_i].send((observation, None))
responses = [server_handler.client_connections[connection_i].recv()
for connection_i in range(num_connections)]
total_time += sum([x['time'] for x in responses])
return total_time.item()
def generate_dummy_neural_net(weight):
class DummyNet(torch.nn.Module):
def __init__(self, weight):
super().__init__()
self.linear = torch.nn.Linear(in_features=1, out_features=1, bias=False)
self.linear.weight.data = torch.Tensor([[weight]])
def forward(self, x, legal_actions=None):
return {'output': self.linear(x)}
return DummyNet(weight)
class TimingDummyNet(torch.nn.Module):
def __init__(self, dims: List[int]):
super().__init__()
self.layers = torch.nn.Sequential(
*[torch.nn.Linear(in_features=h_in, out_features=h_out, bias=True)
for h_in, h_out in zip(dims, dims[1:])])
def forward(self, x, legal_actions=None):
start = time.time()
self.layers(x)
total_time = time.time() - start
return {'time': torch.Tensor([total_time] * x.shape[0])}
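The tests above all exercise the same request/response protocol: a client sends an `(observation, legal_actions)` tuple through a `multiprocessing.Pipe` and receives a dict back. A stripped-down, single-process sketch of that protocol (an illustration only, not regym's actual `neural_net_server` loop):

```python
import multiprocessing

def handle_request(conn):
    # Serve one request: receive (observation, legal_actions) and reply
    # with a dict, mirroring the protocol the tests above exercise.
    observation, legal_actions = conn.recv()
    conn.send({'output': observation})

client_conn, server_conn = multiprocessing.Pipe()
client_conn.send(([1, 2, 3], None))  # client-side request
handle_request(server_conn)          # server side handles one request
response = client_conn.recv()
print(response)  # {'output': [1, 2, 3]}
```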
| [
"regym.networks.servers.neural_net_server.NeuralNetServerHandler",
"torch.multiprocessing.set_start_method",
"torch.Tensor",
"numpy.array",
"pstats.Stats",
"torch.cuda.is_available",
"torch.nn.Linear",
"copy.deepcopy",
"cProfile.Profile",
"multiprocessing.Pipe",
"time.time",
"torch.rand"
] | [((352, 374), 'multiprocessing.Pipe', 'multiprocessing.Pipe', ([], {}), '()\n', (372, 374), False, 'import multiprocessing\n'), ((576, 589), 'numpy.array', 'np.array', (['[0]'], {}), '([0])\n', (584, 589), True, 'import numpy as np\n'), ((915, 937), 'multiprocessing.Pipe', 'multiprocessing.Pipe', ([], {}), '()\n', (935, 937), False, 'import multiprocessing\n'), ((983, 1005), 'multiprocessing.Pipe', 'multiprocessing.Pipe', ([], {}), '()\n', (1003, 1005), False, 'import multiprocessing\n'), ((1228, 1241), 'numpy.array', 'np.array', (['[0]'], {}), '([0])\n', (1236, 1241), True, 'import numpy as np\n'), ((1262, 1275), 'numpy.array', 'np.array', (['[1]'], {}), '([1])\n', (1270, 1275), True, 'import numpy as np\n'), ((1826, 1839), 'numpy.array', 'np.array', (['[1]'], {}), '([1])\n', (1834, 1839), True, 'import numpy as np\n'), ((1975, 2026), 'regym.networks.servers.neural_net_server.NeuralNetServerHandler', 'NeuralNetServerHandler', ([], {'num_connections': '(1)', 'net': 'net1'}), '(num_connections=1, net=net1)\n', (1997, 2026), False, 'from regym.networks.servers.neural_net_server import NeuralNetServerHandler\n'), ((2681, 2740), 'torch.multiprocessing.set_start_method', 'torch.multiprocessing.set_start_method', (['"""spawn"""'], {'force': '(True)'}), "('spawn', force=True)\n", (2719, 2740), False, 'import torch\n'), ((2788, 2806), 'cProfile.Profile', 'cProfile.Profile', ([], {}), '()\n', (2804, 2806), False, 'import cProfile\n'), ((3579, 3658), 'regym.networks.servers.neural_net_server.NeuralNetServerHandler', 'NeuralNetServerHandler', ([], {'num_connections': 'num_connections', 'net': 'net', 'device': 'device'}), '(num_connections=num_connections, net=net, device=device)\n', (3601, 3658), False, 'from regym.networks.servers.neural_net_server import NeuralNetServerHandler\n'), ((699, 716), 'torch.Tensor', 'torch.Tensor', (['[0]'], {}), '([0])\n', (711, 716), False, 'import torch\n'), ((1438, 1455), 'torch.Tensor', 'torch.Tensor', (['[0]'], {}), '([0])\n', (1450, 
1455), False, 'import torch\n'), ((1494, 1511), 'torch.Tensor', 'torch.Tensor', (['[1]'], {}), '([1])\n', (1506, 1511), False, 'import torch\n'), ((1878, 1895), 'torch.Tensor', 'torch.Tensor', (['[0]'], {}), '([0])\n', (1890, 1895), False, 'import torch\n'), ((1934, 1951), 'torch.Tensor', 'torch.Tensor', (['[1]'], {}), '([1])\n', (1946, 1951), False, 'import torch\n'), ((2544, 2569), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (2567, 2569), False, 'import torch\n'), ((4944, 4955), 'time.time', 'time.time', ([], {}), '()\n', (4953, 4955), False, 'import time\n'), ((2927, 2943), 'pstats.Stats', 'pstats.Stats', (['pr'], {}), '(pr)\n', (2939, 2943), False, 'import pstats\n'), ((3108, 3124), 'pstats.Stats', 'pstats.Stats', (['pr'], {}), '(pr)\n', (3120, 3124), False, 'import pstats\n'), ((3834, 3864), 'torch.rand', 'torch.rand', ([], {'size': '(1, init_dim)'}), '(size=(1, init_dim))\n', (3844, 3864), False, 'import torch\n'), ((4350, 4408), 'torch.nn.Linear', 'torch.nn.Linear', ([], {'in_features': '(1)', 'out_features': '(1)', 'bias': '(False)'}), '(in_features=1, out_features=1, bias=False)\n', (4365, 4408), False, 'import torch\n'), ((4447, 4471), 'torch.Tensor', 'torch.Tensor', (['[[weight]]'], {}), '([[weight]])\n', (4459, 4471), False, 'import torch\n'), ((5000, 5011), 'time.time', 'time.time', ([], {}), '()\n', (5009, 5011), False, 'import time\n'), ((5044, 5083), 'torch.Tensor', 'torch.Tensor', (['([total_time] * x.shape[0])'], {}), '([total_time] * x.shape[0])\n', (5056, 5083), False, 'import torch\n'), ((517, 530), 'copy.deepcopy', 'deepcopy', (['net'], {}), '(net)\n', (525, 530), False, 'from copy import deepcopy\n'), ((1149, 1162), 'copy.deepcopy', 'deepcopy', (['net'], {}), '(net)\n', (1157, 1162), False, 'from copy import deepcopy\n'), ((4763, 4827), 'torch.nn.Linear', 'torch.nn.Linear', ([], {'in_features': 'h_in', 'out_features': 'h_out', 'bias': '(True)'}), '(in_features=h_in, out_features=h_out, bias=True)\n', (4778, 4827), 
False, 'import torch\n')] |
from typing import List, Tuple
import numpy as np
from mathy_envs.envs.poly_simplify import PolySimplify
from mathy_envs.state import MathyEnvState, MathyObservation
def test_state_to_observation():
"""to_observation has defaults to allow calling with no arguments"""
env_state = MathyEnvState(problem="4x+2")
assert env_state.to_observation() is not None
def test_state_to_observation_normalization():
"""normalize argument converts all values to range 0.0-1.0"""
env_state = MathyEnvState(problem="4+2")
obs: MathyObservation = env_state.to_observation(normalize=False)
assert np.max(obs.values) == 4.0
norm: MathyObservation = env_state.to_observation(normalize=True)
assert np.max(norm.values) == 1.0
def test_state_to_observation_normalized_problem_type():
"""normalize argument converts all values and type hash to range 0.0-1.0"""
env_state = MathyEnvState(problem="4+2")
obs: MathyObservation = env_state.to_observation()
print(obs.type)
assert np.max(obs.time) <= 1.0
assert np.min(obs.time) >= 0.0
assert np.max(obs.values) <= 1.0
assert np.min(obs.values) >= 0.0
assert np.max(obs.type) <= 1.0
assert np.min(obs.type) >= 0.0
def test_state_encodes_hierarchy():
"""Verify that the observation generated encodes hierarchy properly
so the model can determine the precise nodes to act on"""
diff_pairs: List[Tuple[str, str]] = [
("4x + (3u + 7x + 3u) + 4u", "4x + 3u + 7x + 3u + 4u"),
("7c * 5", "7 * (c * 5)"),
("5v + 20b + (10v + 7b)", "5v + 20b + 10v + 7b"),
("5s + 60 + 12s + s^2", "5s + 60 + (12s + s^2)"),
]
env = PolySimplify()
for one, two in diff_pairs:
state_one = MathyEnvState(problem=one)
obs_one = state_one.to_observation(env.get_valid_moves(state_one))
state_two = MathyEnvState(problem=two)
obs_two = state_two.to_observation(env.get_valid_moves(state_two))
assert obs_one.nodes != obs_two.nodes
def test_state_sanity():
state = MathyEnvState(problem="4+4")
assert state is not None
def test_state_encode_player():
env_state = MathyEnvState(problem="4x+2")
env_state = env_state.get_out_state(
problem="2+4x", moves_remaining=10, action=(0, 0)
)
agent = env_state.agent
assert agent.problem == "2+4x"
assert agent.moves_remaining == 10
assert agent.action == (0, 0)
def test_state_serialize_string():
env_state = MathyEnvState(problem="4x+2")
for i in range(10):
env_state = env_state.get_out_state(
problem="2+4x", moves_remaining=10 - i, action=(i, i)
)
state_str = env_state.to_string()
compare = MathyEnvState.from_string(state_str)
assert env_state.agent.problem == compare.agent.problem
assert env_state.agent.moves_remaining == compare.agent.moves_remaining
for one, two in zip(env_state.agent.history, compare.agent.history):
assert one.raw == two.raw
assert one.action == two.action
def test_state_serialize_numpy():
env_state = MathyEnvState(problem="4x+2")
for i in range(10):
env_state = env_state.get_out_state(
problem="2+4x", moves_remaining=10 - i, action=(i, i)
)
state_np = env_state.to_np()
compare = MathyEnvState.from_np(state_np)
assert env_state.agent.problem == compare.agent.problem
assert env_state.agent.moves_remaining == compare.agent.moves_remaining
for one, two in zip(env_state.agent.history, compare.agent.history):
assert one.raw == two.raw
assert one.action == two.action
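`test_state_to_observation_normalization` asserts that normalized observation values land in the range 0.0-1.0. That invariant can be illustrated with a generic min-max scaling sketch (not mathy's own implementation):

```python
import numpy as np

def min_max_normalize(values):
    # Scale values to [0, 1]; constant inputs map to zeros to avoid
    # division by zero.
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    return np.zeros_like(v) if span == 0 else (v - v.min()) / span

out = min_max_normalize([4, 2, 6])
print(out)  # [0.5 0.  1. ]
```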
| [
"mathy_envs.envs.poly_simplify.PolySimplify",
"mathy_envs.state.MathyEnvState.from_np",
"numpy.max",
"mathy_envs.state.MathyEnvState",
"mathy_envs.state.MathyEnvState.from_string",
"numpy.min"
] | [((292, 321), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4x+2"""'}), "(problem='4x+2')\n", (305, 321), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((503, 531), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4+2"""'}), "(problem='4+2')\n", (516, 531), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((903, 931), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4+2"""'}), "(problem='4+2')\n", (916, 931), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((1669, 1683), 'mathy_envs.envs.poly_simplify.PolySimplify', 'PolySimplify', ([], {}), '()\n', (1681, 1683), False, 'from mathy_envs.envs.poly_simplify import PolySimplify\n'), ((2048, 2076), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4+4"""'}), "(problem='4+4')\n", (2061, 2076), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((2156, 2185), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4x+2"""'}), "(problem='4x+2')\n", (2169, 2185), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((2480, 2509), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4x+2"""'}), "(problem='4x+2')\n", (2493, 2509), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((2708, 2744), 'mathy_envs.state.MathyEnvState.from_string', 'MathyEnvState.from_string', (['state_str'], {}), '(state_str)\n', (2733, 2744), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((3080, 3109), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': '"""4x+2"""'}), "(problem='4x+2')\n", (3093, 3109), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((3303, 3334), 'mathy_envs.state.MathyEnvState.from_np', 'MathyEnvState.from_np', (['state_np'], {}), '(state_np)\n', (3324, 3334), False, 'from 
mathy_envs.state import MathyEnvState, MathyObservation\n'), ((613, 631), 'numpy.max', 'np.max', (['obs.values'], {}), '(obs.values)\n', (619, 631), True, 'import numpy as np\n'), ((721, 740), 'numpy.max', 'np.max', (['norm.values'], {}), '(norm.values)\n', (727, 740), True, 'import numpy as np\n'), ((1018, 1034), 'numpy.max', 'np.max', (['obs.time'], {}), '(obs.time)\n', (1024, 1034), True, 'import numpy as np\n'), ((1053, 1069), 'numpy.min', 'np.min', (['obs.time'], {}), '(obs.time)\n', (1059, 1069), True, 'import numpy as np\n'), ((1089, 1107), 'numpy.max', 'np.max', (['obs.values'], {}), '(obs.values)\n', (1095, 1107), True, 'import numpy as np\n'), ((1126, 1144), 'numpy.min', 'np.min', (['obs.values'], {}), '(obs.values)\n', (1132, 1144), True, 'import numpy as np\n'), ((1164, 1180), 'numpy.max', 'np.max', (['obs.type'], {}), '(obs.type)\n', (1170, 1180), True, 'import numpy as np\n'), ((1199, 1215), 'numpy.min', 'np.min', (['obs.type'], {}), '(obs.type)\n', (1205, 1215), True, 'import numpy as np\n'), ((1737, 1763), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': 'one'}), '(problem=one)\n', (1750, 1763), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n'), ((1860, 1886), 'mathy_envs.state.MathyEnvState', 'MathyEnvState', ([], {'problem': 'two'}), '(problem=two)\n', (1873, 1886), False, 'from mathy_envs.state import MathyEnvState, MathyObservation\n')] |
"""
Algorithms to infer topological order from interventional data
"""
import pickle
import numpy as np
import os
import sys
import networkx as nx
from collections import defaultdict
from copy import deepcopy
import evaluation.ground_truth
from order import es
from order.evaluate import Evaluator
from order import weighted_trueskill
# Sun algorithm mess
sys.path.append('order/sun/')
import remove_cycle_edges_by_hierarchy as sunalg
from file_io import write_dict_to_file
from true_skill import graphbased_trueskill
from compute_social_agony import compute_social_agony
def update_weighted_trueskill(file_data='../data/kemmeren/orig.p', dir_output='../Output/02ordertest/updateweightedtrueskill/', int_ids=None):
"""
Define order by sorting the iterated update-weighted TrueSkill scores
Parameters:
file_data: path to the pickled data file (default '../data/kemmeren/orig.p')
dir_output: directory where order.csv is written
int_ids: integer ids of the interventional variables to order
"""
# Define files
file_order = os.path.join(dir_output, 'order.csv')
os.makedirs(dir_output, exist_ok=True)
# Run algorithm
order = weighted_trueskill.run(file_data, int_ids)
# Output order
with open(file_order, 'w') as f:
f.write(",".join(str(i) for i in order))
return order
def sun(file_data='../data/kemmeren/orig.p', dir_output='../Output/00order_size/sun/', int_ids=None, model_type='ensembling'):
"""
Apply Sun's algorithm
NOTE: if there are multiple weakly connected components in the inferred binary ground-truth,
this algorithm puts them in arbitrary respective order
Returns:
order of integer ids
"""
os.makedirs(dir_output, exist_ok=True)
GT_THRESHOLD = 0.8 # percent
print(f"Set ground-truth threshold at {GT_THRESHOLD}")
file_edgelist = os.path.join(dir_output, 'graph.edges')
if model_type=='ensembling':
file_reduced_edgelist = os.path.join(dir_output, 'graph_removed_by_H-Voting.edges')
elif model_type == 'trueskill-greedy':
file_reduced_edgelist = os.path.join(dir_output, 'graph_removed_by_TS_G.edges')
else:
raise NotImplementedError
file_order = os.path.join(dir_output, 'order.csv')
# select only mutant-mutant data
data = pickle.load(open(file_data, 'rb'))
intpos = data[2]
data_int = data[1][intpos]
del data
# select subset of variables
data_int = data_int[int_ids][:, int_ids]
# Create graph based on absolute threshold binary ground-truth
# NOTE: Graph with edges from effect to cause, such that high score relates to early in order
_, A = evaluation.ground_truth.abs_percentile(data_int, range(len(data_int)), percentile=GT_THRESHOLD, dim='full')
G_gt = nx.from_numpy_matrix(A, create_using=nx.DiGraph)
nx.relabel_nodes(G_gt, {i: int_ids[i] for i in range(len(int_ids))}, copy=False)
# Write edge list to file
nx.write_edgelist(G_gt, file_edgelist, data=False)
# Run Sun
if model_type == 'ensembling':
sunalg.breaking_cycles_by_hierarchy_performance(
graph_file=file_edgelist,
gt_file=None,
players_score_name=model_type)
elif model_type == 'trueskill-greedy':
players_score_dict = sunalg.computing_hierarchy(
graph_file=file_edgelist,
players_score_func_name='trueskill',
nodetype = int)
g = nx.read_edgelist(file_edgelist,create_using = nx.DiGraph(),nodetype = int)
e1 = sunalg.scc_based_to_remove_cycle_edges_iterately(g, players_score_dict)
sunalg.write_pairs_to_file(e1, file_reduced_edgelist)
# Remove edges from adjacency matrix
reduced_edgelist = nx.read_edgelist(file_reduced_edgelist, nodetype=int, create_using=nx.DiGraph).edges
for edge in reduced_edgelist:
A[int_ids.index(edge[0]),int_ids.index(edge[1])]=False
# Infer topological order
G_reduced = nx.from_numpy_matrix(A, create_using=nx.DiGraph)
nx.relabel_nodes(G_reduced, {i: int_ids[i] for i in range(len(int_ids))}, copy=False)
order = list(reversed(list(nx.topological_sort(G_reduced))))
# Output order
with open(file_order, 'w') as f:
f.write(",".join(str(i) for i in order))
return order
def sort_score(file_data='../data/kemmeren/orig.p', dir_output='../Output/00order_size/sortscore/', int_ids=None, score_type='trueskill', reverse=False, return_scores=False):
"""
Define order by sorting some hierarchy score
Parameters:
score_type: trueskill, socialagony, pagerank
reverse (Forward = E->C): False is the default; edges go Cause->Effect (backward interpretation) and a high rank (likely cause) corresponds to:
TS: high score (causes are the winner)
SA: high score (causes are higher in the hierarchy)
PR: high score (causes are often refered to by effects [if there is a strong fan-out effect, we might reverse this])
"""
os.makedirs(dir_output, exist_ok=True)
GT_THRESHOLD=0.8 # percent
print(f"Set ground-truth threshold at {GT_THRESHOLD}")
file_edgelist = os.path.join(dir_output, 'graph.edges')
file_order = os.path.join(dir_output, 'order.csv')
# select only mutant-mutant data
data = pickle.load(open(file_data, 'rb'))
intpos = data[2]
data_int = data[1][intpos]
del data
# select subset of variables
data_int = data_int[int_ids][:, int_ids]
# Create graph based on absolute threshold binary ground-truth
# NOTE: Graph with edges from effect to cause, such that high score relates to early in order
_, A = evaluation.ground_truth.abs_percentile(data_int, range(len(data_int)), percentile=GT_THRESHOLD, dim='full')
# Transpose adjacency matrix if edges are reversed
if reverse:
A = A.T
g = nx.from_numpy_matrix(A, create_using=nx.DiGraph)
# TODO: IS THIS REALLY CORRECT???
# (edit 27 May; was copy=False and without 'g = ')
g = nx.relabel_nodes(g, {i: int_ids[i] for i in range(len(int_ids))}, copy=True)
# Write edge list to file
nx.write_edgelist(g, file_edgelist, data=False)
# Compute scores
if score_type == "pagerank":
print("computing pagerank...")
scores = defaultdict(
lambda:0,
nx.pagerank(g, alpha = 0.85))
elif score_type == 'socialagony':
agony_file = os.path.join(dir_output,"graph_socialagony.txt")
print("start computing socialagony...")
scores = defaultdict(
lambda:0,
compute_social_agony(
graph_file=file_edgelist,
agony_path = "order/sun/agony/agony "))
print("write socialagony to file: %s" % agony_file)
elif score_type == 'trueskill':
trueskill_file = os.path.join(dir_output,"graph_trueskill.txt")
print("start computing trueskill...")
scores = defaultdict(
lambda:0,
graphbased_trueskill(g))
print("write trueskill to file: %s" % trueskill_file)
write_dict_to_file(scores, trueskill_file)
else:
raise NotImplementedError
# Determine order
order = np.argsort([scores[i] for i in int_ids])[::-1]
order = list(np.array(int_ids)[order])
# Reverse order if edges are reversed
if reverse:
order = order[::-1]
# Output order
with open(file_order, 'w') as f:
f.write(",".join(str(i) for i in order))
if return_scores:
return order, scores
else:
return order
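The core of `sort_score` is the argsort-and-map step: rank the intervention ids by descending hierarchy score. A minimal sketch with hypothetical scores (the ids and values below are made up for illustration):

```python
import numpy as np

# Hypothetical per-node scores keyed by intervention id; the order is the
# ids sorted by descending score, as in sort_score() above.
int_ids = [10, 20, 30]
scores = {10: 0.2, 20: 0.9, 30: 0.5}

ranks = np.argsort([scores[i] for i in int_ids])[::-1]  # high score first
order = list(np.array(int_ids)[ranks])
print(order)  # [20, 30, 10]
```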
def edmond(file_data='../data/kemmeren/orig.p', dir_output='../Output/00order_size/edmond/', int_ids=None, reverse=False):
"""
Define order by Edmond's algorithm for the optimal branching problem
NOTE: prohibitively expensive for fully-connected graphs of over 300 nodes
reversed: if False, edges go Effect->Cause
"""
os.makedirs(dir_output, exist_ok=True)
file_edgelist = os.path.join(dir_output, 'graph.edges')
file_order = os.path.join(dir_output, 'order.csv')
# select only mutant-mutant data
data = pickle.load(open(file_data, 'rb'))
intpos = data[2]
data_int = data[1][intpos]
del data
# select subset of variables
data_int = data_int[int_ids][:, int_ids]
# Transpose adjacency matrix if edges are reversed
if reverse:
data_int = data_int.T
# Create graph and solve arborescence problem
G = nx.from_numpy_matrix(abs(data_int), create_using=nx.DiGraph())
nx.relabel_nodes(G, {i: int_ids[i] for i in range(len(int_ids))}, copy=False)
ed = nx.algorithms.tree.branchings.Edmonds(G)
G_arb = ed.find_optimum()
# Infer topological order
order = list(reversed(list(nx.topological_sort(G_arb))))
# Reverse order if edges are reversed
if reverse:
order = order[::-1]
# Output order
with open(file_order, 'w') as f:
f.write(",".join(str(i) for i in order))
return order
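The two steps `edmond()` performs — extract a maximum-weight branching, then topologically sort it — can be shown on a toy graph. A sketch with networkx's `Edmonds` class (a made-up three-node cycle, not the Kemmeren data):

```python
import networkx as nx

# Toy weighted digraph with a cycle; Edmonds' algorithm extracts a
# maximum-weight branching, which is acyclic, so a topological order
# exists. The lightest cycle edge (2 -> 0) gets dropped.
G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 3.0), (1, 2, 2.0), (2, 0, 0.5)])
branching = nx.algorithms.tree.branchings.Edmonds(G).find_optimum()
order = list(reversed(list(nx.topological_sort(branching))))
print(order)  # [2, 1, 0]
```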
def edmond_sparse(file_data='../data/kemmeren/orig.p', dir_output='../Output/00order_size/edmond/', int_ids=None, edges_per_node=10, reverse=False):
"""
Define order by Edmond's algorithm for the optimal branching problem
edges_per_node: select on average this number of edges per node
reversed: if False, edges go Effect->Cause
"""
os.makedirs(dir_output, exist_ok=True)
file_edgelist = os.path.join(dir_output, 'graph.edges')
file_order = os.path.join(dir_output, 'order.csv')
# select only mutant-mutant data
data = pickle.load(open(file_data, 'rb'))
intpos = data[2]
data_int = data[1][intpos]
del data
# select subset of variables
data_int = data_int[int_ids][:, int_ids]
# make sparse
N = len(data_int)
edges_per_node = min(edges_per_node, N)
perc = 1-edges_per_node/N
T = np.percentile(abs(data_int).flatten(), perc*100)
data_sparse = abs(data_int)
data_sparse[abs(data_int)<T] = 0
# Transpose adjacency matrix if edges are reversed
if reverse:
data_sparse = data_sparse.T
# Create graph and solve arborescence problem
G = nx.from_numpy_matrix(data_sparse, create_using=nx.DiGraph())
nx.relabel_nodes(G, {i: int_ids[i] for i in range(len(int_ids))}, copy=False)
ed = nx.algorithms.tree.branchings.Edmonds(G)
G_arb = ed.find_optimum()
# Infer topological order
order = list(reversed(list(nx.topological_sort(G_arb))))
# Reverse order if edges are reversed
if reverse:
order = order[::-1]
# Output order
with open(file_order, 'w') as f:
f.write(",".join(str(i) for i in order))
return order
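The percentile-based sparsification in `edmond_sparse` keeps, on average, `edges_per_node` edges per node by zeroing everything below a percentile threshold. A standalone sketch of just that thresholding step, on a hypothetical 4×4 weight matrix:

```python
import numpy as np

# Hypothetical 4-node weight matrix; keep on average 2 edges per node.
data_int = np.array([[0.0, 0.9, 0.1, 0.2],
                     [0.05, 0.0, 0.8, 0.3],
                     [0.4, 0.02, 0.0, 0.7],
                     [0.6, 0.15, 0.25, 0.0]])
N = len(data_int)
edges_per_node = min(2, N)
perc = 1 - edges_per_node / N            # fraction of entries to zero out
T = np.percentile(abs(data_int).flatten(), perc * 100)
data_sparse = abs(data_int)
data_sparse[abs(data_int) < T] = 0
n_edges = int((data_sparse > 0).sum())   # roughly edges_per_node * N survive
print(n_edges)  # 8
```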
def evolution_strategy(file_data='../data/kemmeren/orig.p', dir_output='../Output/00order_size/evolutionstrategy/', int_ids=None, fitness_type='binary'):
"""
Parameters:
fitness_type: ['binary', 'continuous']
"""
if fitness_type == 'binary':
GT_THRESHOLD = 1 # absolute
print(f"Set ground-truth threshold at {GT_THRESHOLD} absolute")
os.makedirs(dir_output, exist_ok=True)
file_order = os.path.join(dir_output, 'order.csv')
file_output = os.path.join(dir_output, 'output.p')
# select only mutant-mutant data
data = pickle.load(open(file_data, 'rb'))
intpos = data[2]
data_int = data[1][intpos]
del data
# select subset of variables
data_int = data_int[int_ids][:, int_ids]
# Set data type for ES (NOTE: interv. pos is mapped to range)
D = (None, data_int, list(range(len(int_ids))))
if fitness_type == 'binary':
evaluator = es.EvaluatorBinary(D, threshold=GT_THRESHOLD)
elif fitness_type == 'continuous':
evaluator = es.EvaluatorContinuous(D)
solver = es.Solver(evaluator, nvars=len(int_ids))
results = solver.run(verbose=True)
# Store results
pickle.dump(results, open(file_output, 'wb'))
# Map order to intervention ids
order = results[-1][3]
order = list(np.array(int_ids)[order])
# Output order
with open(file_order, 'w') as f:
f.write(",".join(str(i) for i in order))
return order
def random(int_ids):
return list(np.random.permutation(int_ids))
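The last step of `evolution_strategy` maps an order over positions `0..len(int_ids)-1` back to intervention ids via NumPy fancy indexing. A tiny sketch with hypothetical ids (using `.tolist()` to get plain Python ints, mirroring the `list(np.array(int_ids)[order])` line above):

```python
import numpy as np

# Hypothetical: positions 0..3 correspond to these intervention ids.
int_ids = [101, 205, 307, 412]
order = [2, 0, 3, 1]                         # order over positions
mapped = np.array(int_ids)[order].tolist()   # same order, expressed in ids
print(mapped)  # [307, 101, 412, 205]
```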
| [
"networkx.algorithms.tree.branchings.Edmonds",
"remove_cycle_edges_by_hierarchy.write_pairs_to_file",
"numpy.argsort",
"networkx.write_edgelist",
"numpy.array",
"order.es.EvaluatorContinuous",
"file_io.write_dict_to_file",
"sys.path.append",
"networkx.DiGraph",
"numpy.random.permutation",
"remov... | [((359, 388), 'sys.path.append', 'sys.path.append', (['"""order/sun/"""'], {}), "('order/sun/')\n", (374, 388), False, 'import sys\n'), ((864, 901), 'os.path.join', 'os.path.join', (['dir_output', '"""order.csv"""'], {}), "(dir_output, 'order.csv')\n", (876, 901), False, 'import os\n'), ((906, 944), 'os.makedirs', 'os.makedirs', (['dir_output'], {'exist_ok': '(True)'}), '(dir_output, exist_ok=True)\n', (917, 944), False, 'import os\n'), ((978, 1020), 'order.weighted_trueskill.run', 'weighted_trueskill.run', (['file_data', 'int_ids'], {}), '(file_data, int_ids)\n', (1000, 1020), False, 'from order import weighted_trueskill\n'), ((1525, 1563), 'os.makedirs', 'os.makedirs', (['dir_output'], {'exist_ok': '(True)'}), '(dir_output, exist_ok=True)\n', (1536, 1563), False, 'import os\n'), ((1677, 1716), 'os.path.join', 'os.path.join', (['dir_output', '"""graph.edges"""'], {}), "(dir_output, 'graph.edges')\n", (1689, 1716), False, 'import os\n'), ((2034, 2071), 'os.path.join', 'os.path.join', (['dir_output', '"""order.csv"""'], {}), "(dir_output, 'order.csv')\n", (2046, 2071), False, 'import os\n'), ((2595, 2643), 'networkx.from_numpy_matrix', 'nx.from_numpy_matrix', (['A'], {'create_using': 'nx.DiGraph'}), '(A, create_using=nx.DiGraph)\n', (2615, 2643), True, 'import networkx as nx\n'), ((2763, 2813), 'networkx.write_edgelist', 'nx.write_edgelist', (['G_gt', 'file_edgelist'], {'data': '(False)'}), '(G_gt, file_edgelist, data=False)\n', (2780, 2813), True, 'import networkx as nx\n'), ((3776, 3824), 'networkx.from_numpy_matrix', 'nx.from_numpy_matrix', (['A'], {'create_using': 'nx.DiGraph'}), '(A, create_using=nx.DiGraph)\n', (3796, 3824), True, 'import networkx as nx\n'), ((4810, 4848), 'os.makedirs', 'os.makedirs', (['dir_output'], {'exist_ok': '(True)'}), '(dir_output, exist_ok=True)\n', (4821, 4848), False, 'import os\n'), ((4960, 4999), 'os.path.join', 'os.path.join', (['dir_output', '"""graph.edges"""'], {}), "(dir_output, 'graph.edges')\n", (4972, 4999), 
False, 'import os\n'), ((5017, 5054), 'os.path.join', 'os.path.join', (['dir_output', '"""order.csv"""'], {}), "(dir_output, 'order.csv')\n", (5029, 5054), False, 'import os\n'), ((5662, 5710), 'networkx.from_numpy_matrix', 'nx.from_numpy_matrix', (['A'], {'create_using': 'nx.DiGraph'}), '(A, create_using=nx.DiGraph)\n', (5682, 5710), True, 'import networkx as nx\n'), ((5918, 5965), 'networkx.write_edgelist', 'nx.write_edgelist', (['g', 'file_edgelist'], {'data': '(False)'}), '(g, file_edgelist, data=False)\n', (5935, 5965), True, 'import networkx as nx\n'), ((7699, 7737), 'os.makedirs', 'os.makedirs', (['dir_output'], {'exist_ok': '(True)'}), '(dir_output, exist_ok=True)\n', (7710, 7737), False, 'import os\n'), ((7759, 7798), 'os.path.join', 'os.path.join', (['dir_output', '"""graph.edges"""'], {}), "(dir_output, 'graph.edges')\n", (7771, 7798), False, 'import os\n'), ((7816, 7853), 'os.path.join', 'os.path.join', (['dir_output', '"""order.csv"""'], {}), "(dir_output, 'order.csv')\n", (7828, 7853), False, 'import os\n'), ((8395, 8435), 'networkx.algorithms.tree.branchings.Edmonds', 'nx.algorithms.tree.branchings.Edmonds', (['G'], {}), '(G)\n', (8432, 8435), True, 'import networkx as nx\n'), ((9130, 9168), 'os.makedirs', 'os.makedirs', (['dir_output'], {'exist_ok': '(True)'}), '(dir_output, exist_ok=True)\n', (9141, 9168), False, 'import os\n'), ((9190, 9229), 'os.path.join', 'os.path.join', (['dir_output', '"""graph.edges"""'], {}), "(dir_output, 'graph.edges')\n", (9202, 9229), False, 'import os\n'), ((9247, 9284), 'os.path.join', 'os.path.join', (['dir_output', '"""order.csv"""'], {}), "(dir_output, 'order.csv')\n", (9259, 9284), False, 'import os\n'), ((10070, 10110), 'networkx.algorithms.tree.branchings.Edmonds', 'nx.algorithms.tree.branchings.Edmonds', (['G'], {}), '(G)\n', (10107, 10110), True, 'import networkx as nx\n'), ((10827, 10865), 'os.makedirs', 'os.makedirs', (['dir_output'], {'exist_ok': '(True)'}), '(dir_output, exist_ok=True)\n', (10838, 10865), 
False, 'import os\n'), ((10883, 10920), 'os.path.join', 'os.path.join', (['dir_output', '"""order.csv"""'], {}), "(dir_output, 'order.csv')\n", (10895, 10920), False, 'import os\n'), ((10939, 10975), 'os.path.join', 'os.path.join', (['dir_output', '"""output.p"""'], {}), "(dir_output, 'output.p')\n", (10951, 10975), False, 'import os\n'), ((1782, 1841), 'os.path.join', 'os.path.join', (['dir_output', '"""graph_removed_by_H-Voting.edges"""'], {}), "(dir_output, 'graph_removed_by_H-Voting.edges')\n", (1794, 1841), False, 'import os\n'), ((2872, 2994), 'remove_cycle_edges_by_hierarchy.breaking_cycles_by_hierarchy_performance', 'sunalg.breaking_cycles_by_hierarchy_performance', ([], {'graph_file': 'file_edgelist', 'gt_file': 'None', 'players_score_name': 'model_type'}), '(graph_file=file_edgelist,\n gt_file=None, players_score_name=model_type)\n', (2919, 2994), True, 'import remove_cycle_edges_by_hierarchy as sunalg\n'), ((3547, 3625), 'networkx.read_edgelist', 'nx.read_edgelist', (['file_reduced_edgelist'], {'nodetype': 'int', 'create_using': 'nx.DiGraph'}), '(file_reduced_edgelist, nodetype=int, create_using=nx.DiGraph)\n', (3563, 3625), True, 'import networkx as nx\n'), ((6990, 7030), 'numpy.argsort', 'np.argsort', (['[scores[i] for i in int_ids]'], {}), '([scores[i] for i in int_ids])\n', (7000, 7030), True, 'import numpy as np\n'), ((11375, 11420), 'order.es.EvaluatorBinary', 'es.EvaluatorBinary', (['D'], {'threshold': 'GT_THRESHOLD'}), '(D, threshold=GT_THRESHOLD)\n', (11393, 11420), False, 'from order import es\n'), ((11943, 11973), 'numpy.random.permutation', 'np.random.permutation', (['int_ids'], {}), '(int_ids)\n', (11964, 11973), True, 'import numpy as np\n'), ((1917, 1972), 'os.path.join', 'os.path.join', (['dir_output', '"""graph_removed_by_TS_G.edges"""'], {}), "(dir_output, 'graph_removed_by_TS_G.edges')\n", (1929, 1972), False, 'import os\n'), ((3101, 3208), 'remove_cycle_edges_by_hierarchy.computing_hierarchy', 'sunalg.computing_hierarchy', ([], 
{'graph_file': 'file_edgelist', 'players_score_func_name': '"""trueskill"""', 'nodetype': 'int'}), "(graph_file=file_edgelist,\n players_score_func_name='trueskill', nodetype=int)\n", (3127, 3208), True, 'import remove_cycle_edges_by_hierarchy as sunalg\n'), ((3344, 3415), 'remove_cycle_edges_by_hierarchy.scc_based_to_remove_cycle_edges_iterately', 'sunalg.scc_based_to_remove_cycle_edges_iterately', (['g', 'players_score_dict'], {}), '(g, players_score_dict)\n', (3392, 3415), True, 'import remove_cycle_edges_by_hierarchy as sunalg\n'), ((3424, 3477), 'remove_cycle_edges_by_hierarchy.write_pairs_to_file', 'sunalg.write_pairs_to_file', (['e1', 'file_reduced_edgelist'], {}), '(e1, file_reduced_edgelist)\n', (3450, 3477), True, 'import remove_cycle_edges_by_hierarchy as sunalg\n'), ((6124, 6150), 'networkx.pagerank', 'nx.pagerank', (['g'], {'alpha': '(0.85)'}), '(g, alpha=0.85)\n', (6135, 6150), True, 'import networkx as nx\n'), ((6213, 6262), 'os.path.join', 'os.path.join', (['dir_output', '"""graph_socialagony.txt"""'], {}), "(dir_output, 'graph_socialagony.txt')\n", (6225, 6262), False, 'import os\n'), ((7054, 7071), 'numpy.array', 'np.array', (['int_ids'], {}), '(int_ids)\n', (7062, 7071), True, 'import numpy as np\n'), ((8290, 8302), 'networkx.DiGraph', 'nx.DiGraph', ([], {}), '()\n', (8300, 8302), True, 'import networkx as nx\n'), ((9965, 9977), 'networkx.DiGraph', 'nx.DiGraph', ([], {}), '()\n', (9975, 9977), True, 'import networkx as nx\n'), ((11480, 11505), 'order.es.EvaluatorContinuous', 'es.EvaluatorContinuous', (['D'], {}), '(D)\n', (11502, 11505), False, 'from order import es\n'), ((11755, 11772), 'numpy.array', 'np.array', (['int_ids'], {}), '(int_ids)\n', (11763, 11772), True, 'import numpy as np\n'), ((3946, 3976), 'networkx.topological_sort', 'nx.topological_sort', (['G_reduced'], {}), '(G_reduced)\n', (3965, 3976), True, 'import networkx as nx\n'), ((6374, 6462), 'compute_social_agony.compute_social_agony', 'compute_social_agony', ([], {'graph_file': 
'file_edgelist', 'agony_path': '"""order/sun/agony/agony """'}), "(graph_file=file_edgelist, agony_path=\n 'order/sun/agony/agony ')\n", (6394, 6462), False, 'from compute_social_agony import compute_social_agony\n'), ((6616, 6663), 'os.path.join', 'os.path.join', (['dir_output', '"""graph_trueskill.txt"""'], {}), "(dir_output, 'graph_trueskill.txt')\n", (6628, 6663), False, 'import os\n'), ((6868, 6910), 'file_io.write_dict_to_file', 'write_dict_to_file', (['scores', 'trueskill_file'], {}), '(scores, trueskill_file)\n', (6886, 6910), False, 'from file_io import write_dict_to_file\n'), ((8528, 8554), 'networkx.topological_sort', 'nx.topological_sort', (['G_arb'], {}), '(G_arb)\n', (8547, 8554), True, 'import networkx as nx\n'), ((10203, 10229), 'networkx.topological_sort', 'nx.topological_sort', (['G_arb'], {}), '(G_arb)\n', (10222, 10229), True, 'import networkx as nx\n'), ((3302, 3314), 'networkx.DiGraph', 'nx.DiGraph', ([], {}), '()\n', (3312, 3314), True, 'import networkx as nx\n'), ((6773, 6796), 'true_skill.graphbased_trueskill', 'graphbased_trueskill', (['g'], {}), '(g)\n', (6793, 6796), False, 'from true_skill import graphbased_trueskill\n')] |
import numpy as np
def generate_complex_multiplication_tensor(dtype=float):
complex_multiplication_tensor = np.zeros((2,2,2), dtype=dtype)
complex_multiplication_tensor[0,0,0] = 1
complex_multiplication_tensor[0,1,1] = -1
complex_multiplication_tensor[1,0,1] = 1
complex_multiplication_tensor[1,1,0] = 1
return complex_multiplication_tensor
if __name__ == '__main__':
import sympy as sp
import tensor
complex_multiplication_tensor = generate_complex_multiplication_tensor(dtype=object)
a,b,c,d = sp.symbols('a,b,c,d')
product = ((a + sp.I*b) * (c + sp.I*d)).expand()
fancy_product = tensor.contract('ijk,j,k', complex_multiplication_tensor, np.array([a,b]), np.array([c,d]), dtype=object)
fancy_product_as_complex = fancy_product[0] + sp.I*fancy_product[1]
# print product
# print fancy_product
# print fancy_product_as_complex
# print 'difference:', (product-fancy_product_as_complex).simplify()
assert (product-fancy_product_as_complex).simplify() == 0
print('passed test')
z = np.array([a,b])
print(tensor.contract('ijk,k', complex_multiplication_tensor, z, dtype=object))
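The contraction performed by `tensor.contract('ijk,j,k', ...)` above can be cross-checked with plain `np.einsum`, with no sympy or the local `tensor` module needed:

```python
import numpy as np

# Structure tensor T with (x*y)_i = T[i,j,k] x_j y_k for complex numbers
# represented as (real, imag) pairs.
T = np.zeros((2, 2, 2))
T[0, 0, 0] = 1; T[0, 1, 1] = -1
T[1, 0, 1] = 1; T[1, 1, 0] = 1

x = np.array([1.0, 2.0])   # 1 + 2i
y = np.array([3.0, 4.0])   # 3 + 4i
z = np.einsum('ijk,j,k->i', T, x, y)
# (1 + 2i)(3 + 4i) = 3 + 4i + 6i + 8i^2 = -5 + 10i
print(z)  # [-5. 10.]
```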
| [
"sympy.symbols",
"numpy.array",
"numpy.zeros",
"tensor.contract"
] | [((114, 146), 'numpy.zeros', 'np.zeros', (['(2, 2, 2)'], {'dtype': 'dtype'}), '((2, 2, 2), dtype=dtype)\n', (122, 146), True, 'import numpy as np\n'), ((544, 565), 'sympy.symbols', 'sp.symbols', (['"""a,b,c,d"""'], {}), "('a,b,c,d')\n", (554, 565), True, 'import sympy as sp\n'), ((1071, 1087), 'numpy.array', 'np.array', (['[a, b]'], {}), '([a, b])\n', (1079, 1087), True, 'import numpy as np\n'), ((698, 714), 'numpy.array', 'np.array', (['[a, b]'], {}), '([a, b])\n', (706, 714), True, 'import numpy as np\n'), ((715, 731), 'numpy.array', 'np.array', (['[c, d]'], {}), '([c, d])\n', (723, 731), True, 'import numpy as np\n'), ((1097, 1169), 'tensor.contract', 'tensor.contract', (['"""ijk,k"""', 'complex_multiplication_tensor', 'z'], {'dtype': 'object'}), "('ijk,k', complex_multiplication_tensor, z, dtype=object)\n", (1112, 1169), False, 'import tensor\n')] |
from vsi.tools import Try
import numpy as np
import os
class Register(object):
def __init__(self):
self.readers = []
def register(self, reader):
if reader not in self.readers:
self.readers.append(reader)
registered_readers=Register()
registered_writers=Register()
# @@@ Generic classes @@@
class Reader(object):
def __init__(self, filename, autoload=False, *args, **kwargs):
self.filename = filename
if autoload:
self.load(*args, **kwargs)
Reader.extensions = None #Default all
class Writer(object):
def __init__(self, array, dtype=None):
if dtype:
self.dtype = dtype
self.array = np.asarray(array, self.dtype)
else:
self.array = array
self.dtype = array.dtype
# @@@ tifffile classes @@@
with Try(ImportError):
import tifffile
class TifffileReader(Reader):
    #Assume series 0; this is for multiple files being treated as a series
#Not currently considered
def load(self, **kwargs):
self.object = tifffile.TiffFile(self.filename, **kwargs)
def raster(self, segment=0,**kwargs):
return self.object.asarray(key=segment, **kwargs)
def shape(self, segment=0):
return tuple(self.object.pages[segment].shape)
def dtype(self, segment=0):
return self.object.series[0]['dtype']
def bpp(self, segment=0):
return self.dtype(segment).itemsize*8
def band_names(self):
      raise Exception('Unimplemented. Use PilReader')
def endian(self):
return self.object.byteorder
def bands(self, segment=0):
if len(self.object.pages[segment].shape)>2:
return self.object.pages[segment].shape[2]
else:
return 1
@property
def segments(self):
return len(self.object.pages)
TifffileReader.extensions=['tif', 'tiff']
registered_readers.register(TifffileReader)
#Monkey patching to add JPEG compress TIFF support via PIL
with Try(ImportError):
from PIL import Image
def decode_jpeg(encoded, tables=b'', photometric=None,
ycbcr_subsampling=None, ycbcr_positioning=None):
''' ycbcr resampling is missing in both tifffile and PIL '''
from StringIO import StringIO
from PIL import JpegImagePlugin
return JpegImagePlugin.JpegImageFile(StringIO(tables + encoded)).tobytes()
tifffile.TIFF_DECOMPESSORS['jpeg'] = decode_jpeg
tifffile.decodejpg = decode_jpeg
class TifffileWriter(Writer):
def save(self, filename, **kwargs):
tifargs = {}
for key in ('byteorder', 'bigtiff', 'software', 'writeshape'):
if key in kwargs:
tifargs[key] = kwargs[key]
del kwargs[key]
if 'writeshape' not in kwargs:
kwargs['writeshape'] = True
if 'bigtiff' not in tifargs and self.array.size*self.array.dtype.itemsize > 2000*2**20:
tifargs['bigtiff'] = True
self.object = tifffile.TiffWriter(filename, **tifargs)
self.object.save(self.array, **kwargs)
TifffileWriter.extensions=['tif', 'tiff']
registered_writers.register(TifffileWriter)
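The automatic BigTIFF switch in `TifffileWriter.save` triggers when the raw array exceeds 2000 MiB; the size check is just element count times item size:

```python
import numpy as np

# The writer above switches to BigTIFF when the raw array exceeds 2000 MiB.
arr = np.zeros((1024, 1024), dtype=np.float64)
nbytes = arr.size * arr.dtype.itemsize     # 1024*1024*8 bytes = 8 MiB
needs_bigtiff = nbytes > 2000 * 2**20
print(nbytes, needs_bigtiff)  # 8388608 False
```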
# @@@ PIL classes @@@
with Try(ImportError):
from PIL import Image
class PilReader(Reader):
def load(self, mode='r'):
self.object = Image.open(self.filename, mode)
def _get_mode_info(self, segment=0):
''' TODO: Cache this if it's being called a lot and slowing down'''
self.object.seek(segment)
mode = Image._MODE_CONV[self.object.mode]
mode = {'endian':mode[0][0],
'type':mode[0][1],
'bpp':int(mode[0][2])*8,
'bands':mode[1]}
if mode['type'] == 'b':
mode['bpp'] = 1
if mode['bands'] is None:
mode['bands'] = 1
if mode['type'] == 'b':
mode['type'] = np.bool
elif mode['type'] == 'u':
mode['type'] = getattr(np, 'uint%d' % mode['bpp'])
elif mode['type'] == 'i':
mode['type'] = getattr(np, 'int%d' % mode['bpp'])
elif mode['type'] == 'f':
mode['type'] = getattr(np, 'float%d' % mode['bpp'])
else:
raise Exception('Unknown mode type')
return mode
def endian(self, segment=0):
return self._get_mode_info(segment)['endian']
def raster(self, segment=0):
self.object.seek(segment)
return np.array(self.object)
def bpp(self, segment=0):
self.object.seek(segment)
return self._get_mode_info()['bpp']
def dtype(self, segment=0):
self.object.seek(segment)
return self._get_mode_info()['type']
def bands(self, segment=0):
self.object.seek(segment)
return self._get_mode_info()['bands']
def band_names(self, segment=0):
self.object.seek(segment)
try:
band_names = Image._MODEINFO[self.object.mode][2]
if len(band_names) == 1:
return ('P',)
return band_names
except KeyError:
return ('P',) #Panchromatic
def shape(self, segment=0):
#shape is height, width, bands
self.object.seek(segment)
shape = self.object.size
return (shape[1], shape[0])+shape[2:]
registered_readers.register(PilReader)
class PilWriter(Writer):
def __init__(self, array, dtype=None, *args, **kwargs):
super(PilWriter, self).__init__(array, dtype)
self.object = Image.fromarray(self.array, *args, **kwargs)
def save(self, filename, *args, **kwargs):
self.object.save(filename, *args, **kwargs)
registered_writers.register(PilWriter)
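Note the axis swap in `PilReader.shape`: PIL reports `size` as `(width, height)`, while the NumPy convention used throughout this module is `(height, width[, bands])`. A quick check (assumes Pillow is installed):

```python
import numpy as np
from PIL import Image

img = Image.new('RGB', (64, 32))   # PIL size is (width=64, height=32)
arr = np.array(img)
print(img.size, arr.shape)       # (64, 32) (32, 64, 3)
```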
# @@@ GDAL classes @@@
with Try(ImportError):
from osgeo import gdal
class GdalReader(Reader):
def __init__(self, *args, **kwargs):
#default
self._segment = 0
super(GdalReader, self).__init__(*args, **kwargs)
def _change_segment(self, segment=0, mode=gdal.GA_ReadOnly):
if segment != self._segment:
self._dataset = gdal.Open(self.object.GetSubDatasets()[segment][0], mode)
self._segment = segment
def load(self, mode=gdal.GA_ReadOnly, *args, **kwargs):
self.object = gdal.Open(self.filename, mode, *args, **kwargs)
if self.object is None:
raise Exception('Gdal can not determine driver')
self._dataset = self.object
def raster(self, segment=0, *args, **kwargs):
#return self.object.GetRasterBand(band).ReadAsArray()
self._change_segment(segment)
raster = self._dataset.ReadAsArray()
if len(raster.shape)==3:
return raster.transpose((1,2,0))
else:
return raster
def raster_roi(self, segment=0, *args, **kwargs):
'''This isn't written yet'''
self._change_segment(segment)
band = self.object.GetRasterBand(band)
scanline = band.ReadRaster( 0, 0, band.XSize, 1, \
band.XSize, 1, GDT_Float32 )
import struct
tuple_of_floats = struct.unpack('f' * b2.XSize, scanline)
#(use a numpy array instead of unpack)
def bands(self, segment=0):
self._change_segment(segment)
return self._dataset.RasterCount
def shape(self, segment=0):
self._change_segment(segment)
if self.object.RasterCount > 1:
return (self.object.RasterYSize, self.object.RasterXSize, self.object.RasterCount)
else:
return (self.object.RasterYSize, self.object.RasterXSize)
#def saveas(self, filename, strict=False): THIS IS CRAP
# ''' Copy the current object and save it to disk as a different file '''
#
# destination = self.object.GetDriver().CreateCopy(filename, self.object, strict)
#There is a LOT unimplemented here. I do NOT know GDAL enough to fill in the gaps
registered_readers.register(GdalReader)
from osgeo.gdal_array import codes as gdal_codes
class GdalWriter(Writer):
gdal_array_types = {np.dtype(v):k for k,v in gdal_codes.iteritems()}
def save(self, filename, driver=None, *args, **kwargs):
if driver is None:
ext = os.path.splitext(filename)[1][1:]
if ext.lower() in ['tif', 'tiff']:
driver = gdal.GetDriverByName('GTiff')
else:
        raise Exception('Unknown extension. Can not determine driver')
bands = self.array.shape[2] if len(self.array.shape)>2 else 1
self.object = driver.Create(filename, self.array.shape[1],
self.array.shape[0], bands, GdalWriter.gdal_array_types[np.dtype(self.dtype)])
if bands==1:
self.object.GetRasterBand(1).WriteArray(self.array)
else:
for band in range(bands):
self.object.GetRasterBand(band+1).WriteArray(self.array[:,:,band])
#del self.object
#Need to be deleted to actually save
# dst_ds.SetGeoTransform( [ 444720, 30, 0, 3751320, 0, -30 ] )
# srs = osr.SpatialReference()
# srs.SetUTM( 11, 1 )
# srs.SetWellKnownGeogCS( 'NAD27' )
# dst_ds.SetProjection( srs.ExportToWkt() )
# @@@ Common feel functions @@@
def imread(filename, *args, **kwargs):
extension = os.path.splitext(filename)[1][1:]
for reader in registered_readers.readers:
if not reader.extensions or extension in reader.extensions:
try:
return reader(filename, autoload=True)
except:
pass
return None
def imwrite(img, filename, *args, **kwargs):
""" write the numpy array as an image """
_, ext = os.path.splitext(filename)
  is_multiplane = len(img.shape) > 2
  has_tifffile = 'tifffile' in globals()  # True only if the optional import above succeeded
  if has_tifffile and (ext == '.tiff' or ext == '.tif') and is_multiplane:
# if image is tiff, use tifffile module
tifffile.imsave(filename, img)
else:
pilImg = Image.fromarray(img)
    if pilImg.mode == 'L':
      pilImg = pilImg.convert('I') # convert to 32 bit signed mode
pilImg.save(filename)
return
def imwrite_geotiff(img, filename, transform, wkt_projection=None):
  if wkt_projection is None:
import osr
projection = osr.SpatialReference()
projection.SetWellKnownGeogCS('WGS84')
wkt_projection = projection.ExportToWkt()
gdal_writer = GdalWriter(img)
gdal_writer.save(filename)
gdal_writer.object.SetGeoTransform(transform)
gdal_writer.object.SetProjection(wkt_projection)
def imwrite_byte(img, vmin, vmax, filename):
""" write the 2-d numpy array as an image, scale to byte range first """
img_byte = np.uint8(np.zeros_like(img))
img_norm = (img - vmin)/(vmax-vmin)
img_norm = img_norm.clip(0.0, 1.0)
img_byte[:] = img_norm * 255
imwrite(img_byte, filename)
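The normalize–clip–scale sequence inside `imwrite_byte`, shown on its own with a hypothetical `vmin`/`vmax`:

```python
import numpy as np

# Map floats into [0, 255] bytes, clipping out-of-range values first.
img = np.array([[-1.0, 0.0], [0.5, 2.0]])
vmin, vmax = 0.0, 1.0
img_norm = ((img - vmin) / (vmax - vmin)).clip(0.0, 1.0)
img_byte = (img_norm * 255).astype(np.uint8)   # 127.5 truncates to 127
print(img_byte.tolist())  # [[0, 0], [127, 255]]
```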
| [
"osgeo.gdal.Open",
"tifffile.TiffFile",
"PIL.Image.fromarray",
"PIL.Image.open",
"osgeo.gdal.GetDriverByName",
"vsi.tools.Try",
"StringIO.StringIO",
"tifffile.TiffWriter",
"os.path.splitext",
"numpy.asarray",
"numpy.array",
"struct.unpack",
"osgeo.gdal_array.codes.iteritems",
"tifffile.ims... | [((768, 784), 'vsi.tools.Try', 'Try', (['ImportError'], {}), '(ImportError)\n', (771, 784), False, 'from vsi.tools import Try\n'), ((3083, 3099), 'vsi.tools.Try', 'Try', (['ImportError'], {}), '(ImportError)\n', (3086, 3099), False, 'from vsi.tools import Try\n'), ((5473, 5489), 'vsi.tools.Try', 'Try', (['ImportError'], {}), '(ImportError)\n', (5476, 5489), False, 'from vsi.tools import Try\n'), ((9236, 9262), 'os.path.splitext', 'os.path.splitext', (['filename'], {}), '(filename)\n', (9252, 9262), False, 'import os\n'), ((1910, 1926), 'vsi.tools.Try', 'Try', (['ImportError'], {}), '(ImportError)\n', (1913, 1926), False, 'from vsi.tools import Try\n'), ((9435, 9465), 'tifffile.imsave', 'tifffile.imsave', (['filename', 'img'], {}), '(filename, img)\n', (9450, 9465), False, 'import tifffile\n'), ((9493, 9513), 'PIL.Image.fromarray', 'Image.fromarray', (['img'], {}), '(img)\n', (9508, 9513), False, 'from PIL import Image\n'), ((9781, 9803), 'osr.SpatialReference', 'osr.SpatialReference', ([], {}), '()\n', (9801, 9803), False, 'import osr\n'), ((10197, 10215), 'numpy.zeros_like', 'np.zeros_like', (['img'], {}), '(img)\n', (10210, 10215), True, 'import numpy as np\n'), ((638, 667), 'numpy.asarray', 'np.asarray', (['array', 'self.dtype'], {}), '(array, self.dtype)\n', (648, 667), True, 'import numpy as np\n'), ((992, 1034), 'tifffile.TiffFile', 'tifffile.TiffFile', (['self.filename'], {}), '(self.filename, **kwargs)\n', (1009, 1034), False, 'import tifffile\n'), ((2876, 2916), 'tifffile.TiffWriter', 'tifffile.TiffWriter', (['filename'], {}), '(filename, **tifargs)\n', (2895, 2916), False, 'import tifffile\n'), ((3203, 3234), 'PIL.Image.open', 'Image.open', (['self.filename', 'mode'], {}), '(self.filename, mode)\n', (3213, 3234), False, 'from PIL import Image\n'), ((4255, 4276), 'numpy.array', 'np.array', (['self.object'], {}), '(self.object)\n', (4263, 4276), True, 'import numpy as np\n'), ((5260, 5304), 'PIL.Image.fromarray', 'Image.fromarray', 
(['self.array', '*args'], {}), '(self.array, *args, **kwargs)\n', (5275, 5304), False, 'from PIL import Image\n'), ((5977, 6024), 'osgeo.gdal.Open', 'gdal.Open', (['self.filename', 'mode', '*args'], {}), '(self.filename, mode, *args, **kwargs)\n', (5986, 6024), False, 'from osgeo import gdal\n'), ((6780, 6819), 'struct.unpack', 'struct.unpack', (["('f' * b2.XSize)", 'scanline'], {}), "('f' * b2.XSize, scanline)\n", (6793, 6819), False, 'import struct\n'), ((7717, 7728), 'numpy.dtype', 'np.dtype', (['v'], {}), '(v)\n', (7725, 7728), True, 'import numpy as np\n'), ((8889, 8915), 'os.path.splitext', 'os.path.splitext', (['filename'], {}), '(filename)\n', (8905, 8915), False, 'import os\n'), ((7742, 7764), 'osgeo.gdal_array.codes.iteritems', 'gdal_codes.iteritems', ([], {}), '()\n', (7762, 7764), True, 'from osgeo.gdal_array import codes as gdal_codes\n'), ((7962, 7991), 'osgeo.gdal.GetDriverByName', 'gdal.GetDriverByName', (['"""GTiff"""'], {}), "('GTiff')\n", (7982, 7991), False, 'from osgeo import gdal\n'), ((8279, 8299), 'numpy.dtype', 'np.dtype', (['self.dtype'], {}), '(self.dtype)\n', (8287, 8299), True, 'import numpy as np\n'), ((2261, 2287), 'StringIO.StringIO', 'StringIO', (['(tables + encoded)'], {}), '(tables + encoded)\n', (2269, 2287), False, 'from StringIO import StringIO\n'), ((7866, 7892), 'os.path.splitext', 'os.path.splitext', (['filename'], {}), '(filename)\n', (7882, 7892), False, 'import os\n')] |
"""
Object detection using OpenCV DNN module using YOLO V3
(YOLOv3: An Incremental Improvement: https://pjreddie.com/media/files/papers/YOLOv3.pdf)
(yolov3.weights is not included as it exceeds GitHub's file size limit of 100.00 MB)
yolov3.weights: https://pjreddie.com/media/files/yolov3.weights
yolov3.cfg: https://github.com/pjreddie/darknet/blob/master/cfg/yolov3.cfg
"""
# Import required packages:
import cv2
import numpy as np
from matplotlib import pyplot as plt
def show_img_with_matplotlib(color_img, title, pos):
"""Shows an image using matplotlib capabilities"""
img_RGB = color_img[:, :, ::-1]
ax = plt.subplot(1, 1, pos)
plt.imshow(img_RGB)
plt.title(title)
plt.axis('off')
# load the COCO class labels:
class_names = open("coco.names").read().strip().split("\n")
# Load the serialized Darknet model from disk:
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
# Load input image:
image = cv2.imread("object_detection_test_image.png")
(H, W) = image.shape[:2]
# Get the output layer names:
layer_names = net.getLayerNames()
layer_names = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
# Create the blob with a size of (416, 416), swap red and blue channels
# and also a scale factor of 1/255 = 0,003921568627451:
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
print(blob.shape)
# Feed the input blob to the network, perform inference and get the output:
net.setInput(blob)
layerOutputs = net.forward(layer_names)
# Get inference time:
t, _ = net.getPerfProfile()
print('Inference time: %.2f ms' % (t * 1000.0 / cv2.getTickFrequency()))
# Initialization:
boxes = []
confidences = []
class_ids = []
# loop over each of the layer outputs
for output in layerOutputs:
# loop over each of the detections
for detection in output:
# Get class ID and confidence of the current detection:
scores = detection[5:]
class_id = np.argmax(scores)
confidence = scores[class_id]
# Filter out weak predictions:
if confidence > 0.25:
# Scale the bounding box coordinates (center, width, height) using the dimensions of the original image:
box = detection[0:4] * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
# Calculate the top-left corner of the bounding box:
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
# Update the information we have for each detection:
boxes.append([x, y, int(width), int(height)])
confidences.append(float(confidence))
class_ids.append(class_id)
# We can apply non-maxima suppression (eliminate weak and overlapping bounding boxes):
indices = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.3)
# Show the results (if any object is detected after non-maxima suppression):
if len(indices) > 0:
for i in indices.flatten():
# Extract the (previously recalculated) bounding box coordinates:
(x, y) = (boxes[i][0], boxes[i][1])
(w, h) = (boxes[i][2], boxes[i][3])
# Draw label and confidence:
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
label = "{}: {:.4f}".format(class_names[class_ids[i]], confidences[i])
labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 1, 2)
y = max(y, labelSize[1])
cv2.rectangle(image, (x, y - labelSize[1]), (x + labelSize[0], y + 0), (0, 255, 0), cv2.FILLED)
cv2.putText(image, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)
# Create the dimensions of the figure and set title:
fig = plt.figure(figsize=(14, 8))
plt.suptitle("Object detection using OpenCV DNN module and YOLO V3", fontsize=14, fontweight='bold')
fig.patch.set_facecolor('silver')
# Show the output image
show_img_with_matplotlib(image, "YOLO V3 for object detection", 1)
# Show the Figure:
plt.show()
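The box decoding inside the detection loop — scaling the normalized `(center_x, center_y, width, height)` output by `[W, H, W, H]` and shifting to a top-left corner — in isolation, on a hypothetical 416×416 image:

```python
import numpy as np

W, H = 416, 416
detection = np.array([0.5, 0.5, 0.25, 0.5])   # normalized center-format box
box = detection * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
print(x, y, int(width), int(height))  # 156 104 104 208
```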
| [
"cv2.dnn.blobFromImage",
"matplotlib.pyplot.imshow",
"cv2.rectangle",
"cv2.getTextSize",
"numpy.argmax",
"matplotlib.pyplot.axis",
"matplotlib.pyplot.suptitle",
"matplotlib.pyplot.subplot",
"matplotlib.pyplot.figure",
"cv2.putText",
"numpy.array",
"cv2.getTickFrequency",
"matplotlib.pyplot.t... | [((860, 918), 'cv2.dnn.readNetFromDarknet', 'cv2.dnn.readNetFromDarknet', (['"""yolov3.cfg"""', '"""yolov3.weights"""'], {}), "('yolov3.cfg', 'yolov3.weights')\n", (886, 918), False, 'import cv2\n'), ((948, 993), 'cv2.imread', 'cv2.imread', (['"""object_detection_test_image.png"""'], {}), "('object_detection_test_image.png')\n", (958, 993), False, 'import cv2\n'), ((1297, 1373), 'cv2.dnn.blobFromImage', 'cv2.dnn.blobFromImage', (['image', '(1 / 255.0)', '(416, 416)'], {'swapRB': '(True)', 'crop': '(False)'}), '(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)\n', (1318, 1373), False, 'import cv2\n'), ((2794, 2840), 'cv2.dnn.NMSBoxes', 'cv2.dnn.NMSBoxes', (['boxes', 'confidences', '(0.5)', '(0.3)'], {}), '(boxes, confidences, 0.5, 0.3)\n', (2810, 2840), False, 'import cv2\n'), ((3687, 3714), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(14, 8)'}), '(figsize=(14, 8))\n', (3697, 3714), True, 'from matplotlib import pyplot as plt\n'), ((3715, 3819), 'matplotlib.pyplot.suptitle', 'plt.suptitle', (['"""Object detection using OpenCV DNN module and YOLO V3"""'], {'fontsize': '(14)', 'fontweight': '"""bold"""'}), "('Object detection using OpenCV DNN module and YOLO V3',\n fontsize=14, fontweight='bold')\n", (3727, 3819), True, 'from matplotlib import pyplot as plt\n'), ((3962, 3972), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (3970, 3972), True, 'from matplotlib import pyplot as plt\n'), ((628, 650), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(1)', '(1)', 'pos'], {}), '(1, 1, pos)\n', (639, 650), True, 'from matplotlib import pyplot as plt\n'), ((655, 674), 'matplotlib.pyplot.imshow', 'plt.imshow', (['img_RGB'], {}), '(img_RGB)\n', (665, 674), True, 'from matplotlib import pyplot as plt\n'), ((679, 695), 'matplotlib.pyplot.title', 'plt.title', (['title'], {}), '(title)\n', (688, 695), True, 'from matplotlib import pyplot as plt\n'), ((700, 715), 'matplotlib.pyplot.axis', 'plt.axis', (['"""off"""'], {}), 
"('off')\n", (708, 715), True, 'from matplotlib import pyplot as plt\n'), ((1963, 1980), 'numpy.argmax', 'np.argmax', (['scores'], {}), '(scores)\n', (1972, 1980), True, 'import numpy as np\n'), ((3180, 3240), 'cv2.rectangle', 'cv2.rectangle', (['image', '(x, y)', '(x + w, y + h)', '(0, 255, 0)', '(2)'], {}), '(image, (x, y), (x + w, y + h), (0, 255, 0), 2)\n', (3193, 3240), False, 'import cv2\n'), ((3350, 3404), 'cv2.getTextSize', 'cv2.getTextSize', (['label', 'cv2.FONT_HERSHEY_SIMPLEX', '(1)', '(2)'], {}), '(label, cv2.FONT_HERSHEY_SIMPLEX, 1, 2)\n', (3365, 3404), False, 'import cv2\n'), ((3446, 3546), 'cv2.rectangle', 'cv2.rectangle', (['image', '(x, y - labelSize[1])', '(x + labelSize[0], y + 0)', '(0, 255, 0)', 'cv2.FILLED'], {}), '(image, (x, y - labelSize[1]), (x + labelSize[0], y + 0), (0, \n 255, 0), cv2.FILLED)\n', (3459, 3546), False, 'import cv2\n'), ((3550, 3626), 'cv2.putText', 'cv2.putText', (['image', 'label', '(x, y)', 'cv2.FONT_HERSHEY_SIMPLEX', '(1)', '(0, 0, 0)', '(2)'], {}), '(image, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)\n', (3561, 3626), False, 'import cv2\n'), ((1627, 1649), 'cv2.getTickFrequency', 'cv2.getTickFrequency', ([], {}), '()\n', (1647, 1649), False, 'import cv2\n'), ((2241, 2263), 'numpy.array', 'np.array', (['[W, H, W, H]'], {}), '([W, H, W, H])\n', (2249, 2263), True, 'import numpy as np\n')] |
"""
visualize results for test image
"""
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
import os
from torch.autograd import Variable
import expression_attentiveness
import mtcnn_attentiveness
import transforms as transforms
from skimage import io
from skimage.transform import resize
from models import *
score = 0
def facial_expression(boxes_c, landmarks):
global score
cut_size = 44
transform_test = transforms.Compose([
transforms.TenCrop(cut_size),
transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(crop) for crop in crops])),
])
def rgb2gray(rgb):
return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
    print("landmarks.shape[0] = ", landmarks.shape[0])
# score = 0
for i in range(len(os.listdir('picture_data'))):
raw_img = io.imread('picture_data/cut' + str(i) + '.jpg')
gray = rgb2gray(raw_img)
gray = resize(gray, (48, 48), mode='symmetric').astype(np.uint8)
img = gray[:, :, np.newaxis]
img = np.concatenate((img, img, img), axis=2)
img = Image.fromarray(img)
inputs = transform_test(img)
class_names = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']
net = VGG('VGG19')
checkpoint = torch.load(os.path.join('FER2013_VGG19', 'Test_model.t7'), map_location='cpu')
net.load_state_dict(checkpoint['net'])
net.cuda()
net.eval()
ncrops, c, h, w = np.shape(inputs)
inputs = inputs.view(-1, c, h, w)
inputs = inputs.cuda()
with torch.no_grad():
inputs = Variable(inputs)
outputs = net(inputs)
outputs_avg = outputs.view(ncrops, -1).mean(0) # avg over crops
        # softmax over the class dimension of the crop-averaged logits
score = F.softmax(outputs_avg, dim=0)
_, predicted = torch.max(outputs_avg.data, 0)
plt.rcParams['figure.figsize'] = (13.5, 5.5)
axes = plt.subplot(1, 3, 1)
# plt.imshow(raw_img)
plt.xlabel('Input Image', fontsize=16)
axes.set_xticks([])
axes.set_yticks([])
plt.tight_layout()
plt.subplots_adjust(left=0.05, bottom=0.2, right=0.95, top=0.9, hspace=0.02, wspace=0.3)
plt.subplot(1, 3, 2)
ind = 0.1 + 0.6 * np.arange(len(class_names)) # the x locations for the groups
width = 0.4 # the width of the bars: can also be len(x) sequence
color_list = ['red', 'orangered', 'darkorange', 'limegreen', 'darkgreen', 'royalblue', 'navy']
for i in range(len(class_names)):
plt.bar(ind[i], score.data.cpu().numpy()[i], width, color=color_list[i])
print(class_names[i], " = ", score.data.cpu().numpy()[i])
score += expression_attentiveness.facial_expression_recognition(score,
attensiveness_weight=mtcnn_attentiveness.receive_coordinate(
boxes_c, landmarks))
plt.title("Classification results ", fontsize=20)
plt.xlabel(" Expression Category ", fontsize=16)
plt.ylabel(" Classification Score ", fontsize=16)
plt.xticks(ind, class_names, rotation=45, fontsize=14)
axes = plt.subplot(1, 3, 3)
emojis_img = io.imread('images/emojis/%s.png' % str(class_names[int(predicted.cpu().numpy())]))
plt.imshow(emojis_img)
plt.xlabel('Emoji Expression', fontsize=16)
axes.set_xticks([])
axes.set_yticks([])
plt.tight_layout()
# show emojis
# plt.show()
plt.savefig(os.path.join('images/results/1.png'))
plt.close()
print("The Expression is %s" % str(class_names[int(predicted.cpu().numpy())]))
print("total = ", torch.mean(score).item())
return torch.mean(score).item()
| [
"matplotlib.pyplot.ylabel",
"torch.max",
"mtcnn_attentiveness.receive_coordinate",
"torch.nn.functional.softmax",
"matplotlib.pyplot.imshow",
"os.listdir",
"transforms.ToTensor",
"torch.mean",
"matplotlib.pyplot.xlabel",
"matplotlib.pyplot.close",
"numpy.dot",
"numpy.concatenate",
"torch.aut... | [((723, 766), 'numpy.dot', 'np.dot', (['rgb[..., :3]', '[0.299, 0.587, 0.114]'], {}), '(rgb[..., :3], [0.299, 0.587, 0.114])\n', (729, 766), True, 'import numpy as np\n'), ((1112, 1151), 'numpy.concatenate', 'np.concatenate', (['(img, img, img)'], {'axis': '(2)'}), '((img, img, img), axis=2)\n', (1126, 1151), True, 'import numpy as np\n'), ((1166, 1186), 'PIL.Image.fromarray', 'Image.fromarray', (['img'], {}), '(img)\n', (1181, 1186), False, 'from PIL import Image\n'), ((1555, 1571), 'numpy.shape', 'np.shape', (['inputs'], {}), '(inputs)\n', (1563, 1571), True, 'import numpy as np\n'), ((1854, 1883), 'torch.nn.functional.softmax', 'F.softmax', (['outputs_avg'], {'dim': '(0)'}), '(outputs_avg, dim=0)\n', (1863, 1883), True, 'import torch.nn.functional as F\n'), ((1907, 1937), 'torch.max', 'torch.max', (['outputs_avg.data', '(0)'], {}), '(outputs_avg.data, 0)\n', (1916, 1937), False, 'import torch\n'), ((2007, 2027), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(1)', '(3)', '(1)'], {}), '(1, 3, 1)\n', (2018, 2027), True, 'import matplotlib.pyplot as plt\n'), ((2066, 2104), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""Input Image"""'], {'fontsize': '(16)'}), "('Input Image', fontsize=16)\n", (2076, 2104), True, 'import matplotlib.pyplot as plt\n'), ((2169, 2187), 'matplotlib.pyplot.tight_layout', 'plt.tight_layout', ([], {}), '()\n', (2185, 2187), True, 'import matplotlib.pyplot as plt\n'), ((2197, 2289), 'matplotlib.pyplot.subplots_adjust', 'plt.subplots_adjust', ([], {'left': '(0.05)', 'bottom': '(0.2)', 'right': '(0.95)', 'top': '(0.9)', 'hspace': '(0.02)', 'wspace': '(0.3)'}), '(left=0.05, bottom=0.2, right=0.95, top=0.9, hspace=0.02,\n wspace=0.3)\n', (2216, 2289), True, 'import matplotlib.pyplot as plt\n'), ((2295, 2315), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(1)', '(3)', '(2)'], {}), '(1, 3, 2)\n', (2306, 2315), True, 'import matplotlib.pyplot as plt\n'), ((3096, 3145), 'matplotlib.pyplot.title', 'plt.title', (['"""Classification 
results """'], {'fontsize': '(20)'}), "('Classification results ', fontsize=20)\n", (3105, 3145), True, 'import matplotlib.pyplot as plt\n'), ((3154, 3202), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['""" Expression Category """'], {'fontsize': '(16)'}), "(' Expression Category ', fontsize=16)\n", (3164, 3202), True, 'import matplotlib.pyplot as plt\n'), ((3211, 3260), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['""" Classification Score """'], {'fontsize': '(16)'}), "(' Classification Score ', fontsize=16)\n", (3221, 3260), True, 'import matplotlib.pyplot as plt\n'), ((3269, 3323), 'matplotlib.pyplot.xticks', 'plt.xticks', (['ind', 'class_names'], {'rotation': '(45)', 'fontsize': '(14)'}), '(ind, class_names, rotation=45, fontsize=14)\n', (3279, 3323), True, 'import matplotlib.pyplot as plt\n'), ((3340, 3360), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(1)', '(3)', '(3)'], {}), '(1, 3, 3)\n', (3351, 3360), True, 'import matplotlib.pyplot as plt\n'), ((3473, 3495), 'matplotlib.pyplot.imshow', 'plt.imshow', (['emojis_img'], {}), '(emojis_img)\n', (3483, 3495), True, 'import matplotlib.pyplot as plt\n'), ((3504, 3547), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""Emoji Expression"""'], {'fontsize': '(16)'}), "('Emoji Expression', fontsize=16)\n", (3514, 3547), True, 'import matplotlib.pyplot as plt\n'), ((3612, 3630), 'matplotlib.pyplot.tight_layout', 'plt.tight_layout', ([], {}), '()\n', (3628, 3630), True, 'import matplotlib.pyplot as plt\n'), ((3741, 3752), 'matplotlib.pyplot.close', 'plt.close', ([], {}), '()\n', (3750, 3752), True, 'import matplotlib.pyplot as plt\n'), ((544, 572), 'transforms.TenCrop', 'transforms.TenCrop', (['cut_size'], {}), '(cut_size)\n', (562, 572), True, 'import transforms as transforms\n'), ((857, 883), 'os.listdir', 'os.listdir', (['"""picture_data"""'], {}), "('picture_data')\n", (867, 883), False, 'import os\n'), ((1375, 1421), 'os.path.join', 'os.path.join', (['"""FER2013_VGG19"""', '"""Test_model.t7"""'], {}), 
"('FER2013_VGG19', 'Test_model.t7')\n", (1387, 1421), False, 'import os\n'), ((1660, 1675), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (1673, 1675), False, 'import torch\n'), ((1698, 1714), 'torch.autograd.Variable', 'Variable', (['inputs'], {}), '(inputs)\n', (1706, 1714), False, 'from torch.autograd import Variable\n'), ((3695, 3731), 'os.path.join', 'os.path.join', (['"""images/results/1.png"""'], {}), "('images/results/1.png')\n", (3707, 3731), False, 'import os\n'), ((3901, 3918), 'torch.mean', 'torch.mean', (['score'], {}), '(score)\n', (3911, 3918), False, 'import torch\n'), ((1001, 1041), 'skimage.transform.resize', 'resize', (['gray', '(48, 48)'], {'mode': '"""symmetric"""'}), "(gray, (48, 48), mode='symmetric')\n", (1007, 1041), False, 'from skimage.transform import resize\n'), ((2951, 3009), 'mtcnn_attentiveness.receive_coordinate', 'mtcnn_attentiveness.receive_coordinate', (['boxes_c', 'landmarks'], {}), '(boxes_c, landmarks)\n', (2989, 3009), False, 'import mtcnn_attentiveness\n'), ((3864, 3881), 'torch.mean', 'torch.mean', (['score'], {}), '(score)\n', (3874, 3881), False, 'import torch\n'), ((627, 648), 'transforms.ToTensor', 'transforms.ToTensor', ([], {}), '()\n', (646, 648), True, 'import transforms as transforms\n')] |
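The visualization script above scores each face by running the VGG net on ten crops, averaging the logits (`outputs.view(ncrops, -1).mean(0)`) and applying a softmax. A framework-free NumPy sketch of that scoring step (the class list is taken from the script; the logits here are random stand-ins for `net(inputs)`):

```python
import numpy as np

def tencrop_scores(crop_logits):
    """crop_logits: (ncrops, nclasses) array of raw logits.
    Returns class probabilities after averaging over the crops,
    mirroring outputs.view(ncrops, -1).mean(0) followed by softmax."""
    avg = crop_logits.mean(axis=0)      # average over the ten crops
    e = np.exp(avg - avg.max())          # numerically stable softmax
    return e / e.sum()

class_names = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']
logits = np.random.randn(10, len(class_names))   # stand-in for net(inputs)
probs = tencrop_scores(logits)
predicted = class_names[int(np.argmax(probs))]
```

Averaging before the softmax (as the script does) weights each crop's raw evidence equally, which is the usual ten-crop evaluation pattern.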
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# @Filename: matrix.py
# @Date: 2019-06-10-13-56
# @Author: <NAME>
# @Contact: <EMAIL>
import numpy as np
import numpy.random as npr
from scipy import linalg
from numpy.core.umath_tests import inner1d
from mimo.abstractions import Distribution
class MatrixNormal(Distribution):
def __init__(self, M=None, U=None, V=None):
self.M = M
self._U = U
self._V = V
@property
def params(self):
return self.M, self.U, self.V
@params.setter
def params(self, values):
self.M, self.U, self.V = values
@property
def dcol(self):
return self.M.shape[1]
@property
def drow(self):
return self.M.shape[0]
    @property
    def dim(self):
        # vectorized dimension drow * dcol, used by log_likelihood
        # and the sufficient statistics below
        return self.drow * self.dcol
@property
def U(self):
return self._U
@U.setter
def U(self, value):
self._U = value
self._U_chol = None
@property
def V(self):
return self._V
@V.setter
def V(self, value):
self._V = value
self._V_chol = None
@property
def sigma(self):
return np.kron(self.V, self.U)
@property
def U_chol(self):
if not hasattr(self, '_U_chol') or self._U_chol is None:
self._U_chol = np.linalg.cholesky(self.U)
return self._U_chol
@property
def V_chol(self):
if not hasattr(self, '_V_chol') or self._V_chol is None:
self._V_chol = np.linalg.cholesky(self.V)
return self._V_chol
@property
def sigma_chol(self):
if not hasattr(self, '_sigma_chol') or self._sigma_chol is None:
self._sigma_chol = np.linalg.cholesky(self.sigma)
return self._sigma_chol
def rvs(self, size=None):
if size is None:
aux = npr.normal(size=self.drow * self.dcol).dot(self.sigma_chol.T)
return self.M + np.reshape(aux, (self.drow, self.dcol), order='F')
else:
            aux = npr.normal(size=(size, self.drow * self.dcol)).dot(self.sigma_chol.T)
return self.M + np.reshape(aux, (size, self.drow, self.dcol), order='F')
def mean(self):
return self.M
def mode(self):
return self.M
def log_likelihood(self, x):
# apply vector operator with Fortran convention
xr = np.reshape(x, (-1, self.drow * self.dcol), order='F')
mr = np.reshape(self.M, (self.drow * self.dcol), order='F')
# Gaussian likelihood on vector dist.
bads = np.isnan(np.atleast_2d(xr)).any(axis=1)
xc = np.nan_to_num(xr).reshape((-1, self.dim)) - mr
xs = linalg.solve_triangular(self.sigma_chol, xc.T, lower=True)
out = - 0.5 * self.drow * self.dcol * np.log(2. * np.pi) -\
np.sum(np.log(np.diag(self.sigma_chol))) - 0.5 * inner1d(xs.T, xs.T)
out[bads] = 0
return out
def get_statistics(self, data):
if isinstance(data, np.ndarray):
idx = ~np.isnan(data).any(1)
data = data[idx]
xxT = np.einsum('nk,nh->kh', data, data)
x = data.sum(0)
n = data.shape[0]
            return np.array([x, xxT, n], dtype=object)
else:
return sum(list(map(self.get_statistics, data)), self._empty_statistics())
def get_weighted_statistics(self, data, weights):
if isinstance(data, np.ndarray):
idx = ~np.isnan(data).any(1)
data = data[idx]
weights = weights[idx]
xxT = np.einsum('nk,n,nh->kh', data, weights, data)
x = weights.dot(data)
n = weights.sum()
            return np.array([x, xxT, n], dtype=object)
else:
return sum(list(map(self.get_weighted_statistics, data, weights)), self._empty_statistics())
def _empty_statistics(self):
        return np.array([np.zeros((self.dim, )),
                         np.zeros((self.dim, self.dim)), 0], dtype=object)
def log_partition(self):
return 0.5 * self.drow * self.dcol * np.log(2. * np.pi) +\
self.drow * np.sum(np.log(np.diag(self.V_chol))) +\
self.dcol * np.sum(np.log(np.diag(self.U_chol)))
def entropy(self):
return NotImplementedError
| [
"numpy.random.normal",
"numpy.atleast_2d",
"numpy.reshape",
"numpy.log",
"numpy.core.umath_tests.inner1d",
"numpy.kron",
"numpy.array",
"numpy.zeros",
"numpy.diag",
"scipy.linalg.solve_triangular",
"numpy.einsum",
"numpy.isnan",
"numpy.linalg.cholesky",
"numpy.nan_to_num"
] | [((1079, 1102), 'numpy.kron', 'np.kron', (['self.V', 'self.U'], {}), '(self.V, self.U)\n', (1086, 1102), True, 'import numpy as np\n'), ((2306, 2359), 'numpy.reshape', 'np.reshape', (['x', '(-1, self.drow * self.dcol)'], {'order': '"""F"""'}), "(x, (-1, self.drow * self.dcol), order='F')\n", (2316, 2359), True, 'import numpy as np\n'), ((2373, 2425), 'numpy.reshape', 'np.reshape', (['self.M', '(self.drow * self.dcol)'], {'order': '"""F"""'}), "(self.M, self.drow * self.dcol, order='F')\n", (2383, 2425), True, 'import numpy as np\n'), ((2603, 2661), 'scipy.linalg.solve_triangular', 'linalg.solve_triangular', (['self.sigma_chol', 'xc.T'], {'lower': '(True)'}), '(self.sigma_chol, xc.T, lower=True)\n', (2626, 2661), False, 'from scipy import linalg\n'), ((1232, 1258), 'numpy.linalg.cholesky', 'np.linalg.cholesky', (['self.U'], {}), '(self.U)\n', (1250, 1258), True, 'import numpy as np\n'), ((1416, 1442), 'numpy.linalg.cholesky', 'np.linalg.cholesky', (['self.V'], {}), '(self.V)\n', (1434, 1442), True, 'import numpy as np\n'), ((1616, 1646), 'numpy.linalg.cholesky', 'np.linalg.cholesky', (['self.sigma'], {}), '(self.sigma)\n', (1634, 1646), True, 'import numpy as np\n'), ((3021, 3055), 'numpy.einsum', 'np.einsum', (['"""nk,nh->kh"""', 'data', 'data'], {}), "('nk,nh->kh', data, data)\n", (3030, 3055), True, 'import numpy as np\n'), ((3133, 3154), 'numpy.array', 'np.array', (['[x, xxT, n]'], {}), '([x, xxT, n])\n', (3141, 3154), True, 'import numpy as np\n'), ((3476, 3521), 'numpy.einsum', 'np.einsum', (['"""nk,n,nh->kh"""', 'data', 'weights', 'data'], {}), "('nk,n,nh->kh', data, weights, data)\n", (3485, 3521), True, 'import numpy as np\n'), ((3605, 3626), 'numpy.array', 'np.array', (['[x, xxT, n]'], {}), '([x, xxT, n])\n', (3613, 3626), True, 'import numpy as np\n'), ((1843, 1893), 'numpy.reshape', 'np.reshape', (['aux', '(self.drow, self.dcol)'], {'order': '"""F"""'}), "(aux, (self.drow, self.dcol), order='F')\n", (1853, 1893), True, 'import numpy as np\n'), 
((2060, 2116), 'numpy.reshape', 'np.reshape', (['aux', '(size, self.drow, self.dcol)'], {'order': '"""F"""'}), "(aux, (size, self.drow, self.dcol), order='F')\n", (2070, 2116), True, 'import numpy as np\n'), ((2793, 2812), 'numpy.core.umath_tests.inner1d', 'inner1d', (['xs.T', 'xs.T'], {}), '(xs.T, xs.T)\n', (2800, 2812), False, 'from numpy.core.umath_tests import inner1d\n'), ((3805, 3826), 'numpy.zeros', 'np.zeros', (['(self.dim,)'], {}), '((self.dim,))\n', (3813, 3826), True, 'import numpy as np\n'), ((3854, 3884), 'numpy.zeros', 'np.zeros', (['(self.dim, self.dim)'], {}), '((self.dim, self.dim))\n', (3862, 3884), True, 'import numpy as np\n'), ((1753, 1791), 'numpy.random.normal', 'npr.normal', ([], {'size': '(self.drow * self.dcol)'}), '(size=self.drow * self.dcol)\n', (1763, 1791), True, 'import numpy.random as npr\n'), ((1982, 2008), 'numpy.random.normal', 'npr.normal', ([], {'size': 'self.size'}), '(size=self.size)\n', (1992, 2008), True, 'import numpy.random as npr\n'), ((2499, 2516), 'numpy.atleast_2d', 'np.atleast_2d', (['xr'], {}), '(xr)\n', (2512, 2516), True, 'import numpy as np\n'), ((2543, 2560), 'numpy.nan_to_num', 'np.nan_to_num', (['xr'], {}), '(xr)\n', (2556, 2560), True, 'import numpy as np\n'), ((2708, 2727), 'numpy.log', 'np.log', (['(2.0 * np.pi)'], {}), '(2.0 * np.pi)\n', (2714, 2727), True, 'import numpy as np\n'), ((3965, 3984), 'numpy.log', 'np.log', (['(2.0 * np.pi)'], {}), '(2.0 * np.pi)\n', (3971, 3984), True, 'import numpy as np\n'), ((2758, 2782), 'numpy.diag', 'np.diag', (['self.sigma_chol'], {}), '(self.sigma_chol)\n', (2765, 2782), True, 'import numpy as np\n'), ((2951, 2965), 'numpy.isnan', 'np.isnan', (['data'], {}), '(data)\n', (2959, 2965), True, 'import numpy as np\n'), ((3371, 3385), 'numpy.isnan', 'np.isnan', (['data'], {}), '(data)\n', (3379, 3385), True, 'import numpy as np\n'), ((4095, 4115), 'numpy.diag', 'np.diag', (['self.U_chol'], {}), '(self.U_chol)\n', (4102, 4115), True, 'import numpy as np\n'), ((4028, 4048), 
'numpy.diag', 'np.diag', (['self.V_chol'], {}), '(self.V_chol)\n', (4035, 4048), True, 'import numpy as np\n')] |
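A note on `MatrixNormal.sigma_chol` above: it factorizes the full `drow·dcol × drow·dcol` matrix `np.kron(self.V, self.U)`, but since the Cholesky factor of a Kronecker product is the Kronecker product of the Cholesky factors, the cached `V_chol` and `U_chol` would give the same result much more cheaply. A small NumPy check with made-up positive-definite `U` and `V`:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # random symmetric positive-definite matrix
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

U = random_spd(3)   # row covariance (drow = 3)
V = random_spd(2)   # column covariance (dcol = 2)

# Cholesky of the full Kronecker covariance, as sigma_chol computes it
full_chol = np.linalg.cholesky(np.kron(V, U))
# Kronecker product of the per-factor Cholesky factors
kron_chol = np.kron(np.linalg.cholesky(V), np.linalg.cholesky(U))
```

So `sigma_chol` could return `np.kron(self.V_chol, self.U_chol)` directly instead of forming and factorizing the Kronecker product.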
import numpy as np
from enum import IntEnum
from numba import int64, float64, boolean
from numba import njit
from numba.experimental import jitclass
import math
from tardis.montecarlo.montecarlo_numba import njit_dict, numba_config, njit_dict_no_parallel
from tardis.montecarlo import (
montecarlo_configuration as montecarlo_configuration,
)
from tardis.montecarlo.montecarlo_numba.numba_config import (
CLOSE_LINE_THRESHOLD,
C_SPEED_OF_LIGHT,
MISS_DISTANCE,
SIGMA_THOMSON,
)
class MonteCarloException(ValueError):
pass
class PacketStatus(IntEnum):
IN_PROCESS = 0
EMITTED = 1
REABSORBED = 2
class InteractionType(IntEnum):
BOUNDARY = 1
LINE = 2
ESCATTERING = 3
rpacket_spec = [
("r", float64),
("mu", float64),
("nu", float64),
("energy", float64),
("next_line_id", int64),
("current_shell_id", int64),
("status", int64),
("seed", int64),
("index", int64),
("is_close_line", boolean),
("last_interaction_type", int64),
("last_interaction_in_nu", float64),
("last_line_interaction_in_id", int64),
("last_line_interaction_out_id", int64),
]
@jitclass(rpacket_spec)
class RPacket(object):
def __init__(self, r, mu, nu, energy, seed, index=0, is_close_line=False):
self.r = r
self.mu = mu
self.nu = nu
self.energy = energy
self.current_shell_id = 0
self.status = PacketStatus.IN_PROCESS
self.seed = seed
self.index = index
self.is_close_line = is_close_line
self.last_interaction_type = -1
self.last_interaction_in_nu = 0.0
self.last_line_interaction_in_id = -1
self.last_line_interaction_out_id = -1
def initialize_line_id(self, numba_plasma, numba_model):
inverse_line_list_nu = numba_plasma.line_list_nu[::-1]
doppler_factor = get_doppler_factor(
self.r, self.mu, numba_model.time_explosion
)
comov_nu = self.nu * doppler_factor
next_line_id = len(numba_plasma.line_list_nu) - np.searchsorted(
inverse_line_list_nu, comov_nu
)
if next_line_id == len(numba_plasma.line_list_nu):
next_line_id -= 1
self.next_line_id = next_line_id
@njit(**njit_dict_no_parallel)
def calculate_distance_boundary(r, mu, r_inner, r_outer):
"""
Calculate distance to shell boundary in cm.
Parameters
----------
r : float
radial coordinate of the RPacket
mu : float
cosine of the direction of movement
r_inner : float
inner radius of current shell
r_outer : float
outer radius of current shell
"""
delta_shell = 0
if mu > 0.0:
# direction outward
distance = math.sqrt(r_outer * r_outer + ((mu * mu - 1.0) * r * r)) - (
r * mu
)
delta_shell = 1
else:
# going inward
check = r_inner * r_inner + (r * r * (mu * mu - 1.0))
if check >= 0.0:
# hit inner boundary
distance = -r * mu - math.sqrt(check)
delta_shell = -1
else:
# miss inner boundary
distance = math.sqrt(
r_outer * r_outer + ((mu * mu - 1.0) * r * r)
) - (r * mu)
delta_shell = 1
return distance, delta_shell
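# Editor's illustrative check (not part of the original module): for a purely
# radial outgoing packet (mu = 1) the square root above collapses,
#     sqrt(r_outer**2 + (1 - 1) * r * r) - r * 1 = r_outer - r,
# i.e. the packet travels exactly the remaining radial gap, and delta_shell = 1.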
# @log_decorator
#'float64(RPacket, float64, int64, float64, float64)'
@njit(**njit_dict_no_parallel)
def calculate_distance_line(
r_packet, comov_nu, is_last_line, nu_line, time_explosion
):
"""
Calculate distance until RPacket is in resonance with the next line
Parameters
----------
r_packet : tardis.montecarlo.montecarlo_numba.r_packet.RPacket
comov_nu : float
comoving frequency at the CURRENT position of the RPacket
is_last_line : bool
return MISS_DISTANCE if at the end of the line list
nu_line : float
line to check the distance to
time_explosion : float
time since explosion in seconds
Returns
-------
"""
nu = r_packet.nu
if is_last_line:
return MISS_DISTANCE
nu_diff = comov_nu - nu_line
# for numerical reasons, if line is too close, we set the distance to 0.
if r_packet.is_close_line:
nu_diff = 0.0
r_packet.is_close_line = False
if nu_diff >= 0:
distance = (nu_diff / nu) * C_SPEED_OF_LIGHT * time_explosion
else:
print("WARNING: nu difference is less than 0.0")
raise MonteCarloException(
"nu difference is less than 0.0; for more"
" information, see print statement beforehand"
)
if numba_config.ENABLE_FULL_RELATIVITY:
return calculate_distance_line_full_relativity(
nu_line, nu, time_explosion, r_packet
)
return distance
@njit(**njit_dict_no_parallel)
def calculate_distance_line_full_relativity(
nu_line, nu, time_explosion, r_packet
):
# distance = - mu * r + (ct - nu_r * nu_r * sqrt(ct * ct - (1 + r * r * (1 - mu * mu) * (1 + pow(nu_r, -2))))) / (1 + nu_r * nu_r);
nu_r = nu_line / nu
ct = C_SPEED_OF_LIGHT * time_explosion
distance = -r_packet.mu * r_packet.r + (
ct
- nu_r
* nu_r
* math.sqrt(
ct * ct
- (
1
+ r_packet.r
* r_packet.r
* (1 - r_packet.mu * r_packet.mu)
* (1 + 1.0 / (nu_r * nu_r))
)
)
) / (1 + nu_r * nu_r)
return distance
@njit(**njit_dict_no_parallel)
def calculate_distance_electron(electron_density, tau_event):
"""
Calculate distance to Thomson Scattering
Parameters
----------
electron_density : float
tau_event : float
"""
# add full_relativity here
return tau_event / (electron_density * numba_config.SIGMA_THOMSON)
@njit(**njit_dict_no_parallel)
def calculate_tau_electron(electron_density, distance):
"""
Calculate tau for Thomson scattering
Parameters
----------
electron_density : float
distance : float
"""
return electron_density * numba_config.SIGMA_THOMSON * distance
@njit(**njit_dict_no_parallel)
def get_doppler_factor(r, mu, time_explosion):
inv_c = 1 / C_SPEED_OF_LIGHT
inv_t = 1 / time_explosion
beta = r * inv_t * inv_c
if not numba_config.ENABLE_FULL_RELATIVITY:
return get_doppler_factor_partial_relativity(mu, beta)
else:
return get_doppler_factor_full_relativity(mu, beta)
@njit(**njit_dict_no_parallel)
def get_doppler_factor_partial_relativity(mu, beta):
return 1.0 - mu * beta
@njit(**njit_dict_no_parallel)
def get_doppler_factor_full_relativity(mu, beta):
return (1.0 - mu * beta) / math.sqrt(1 - beta * beta)
@njit(**njit_dict_no_parallel)
def get_inverse_doppler_factor(r, mu, time_explosion):
"""
Calculate doppler factor for frame transformation
Parameters
----------
r : float
mu : float
time_explosion : float
"""
inv_c = 1 / C_SPEED_OF_LIGHT
inv_t = 1 / time_explosion
beta = r * inv_t * inv_c
if not numba_config.ENABLE_FULL_RELATIVITY:
return get_inverse_doppler_factor_partial_relativity(mu, beta)
else:
return get_inverse_doppler_factor_full_relativity(mu, beta)
@njit(**njit_dict_no_parallel)
def get_inverse_doppler_factor_partial_relativity(mu, beta):
return 1.0 / (1.0 - mu * beta)
@njit(**njit_dict_no_parallel)
def get_inverse_doppler_factor_full_relativity(mu, beta):
return (1.0 + mu * beta) / math.sqrt(1 - beta * beta)
@njit(**njit_dict_no_parallel)
def get_random_mu():
return 2.0 * np.random.random() - 1.0
@njit(**njit_dict_no_parallel)
def update_line_estimators(
estimators, r_packet, cur_line_id, distance_trace, time_explosion
):
"""
Function to update the line estimators
Parameters
----------
estimators : tardis.montecarlo.montecarlo_numba.numba_interface.Estimators
r_packet : tardis.montecarlo.montecarlo_numba.r_packet.RPacket
cur_line_id : int
distance_trace : float
time_explosion : float
"""
""" Actual calculation - simplified below
r_interaction = math.sqrt(r_packet.r**2 + distance_trace**2 +
2 * r_packet.r * distance_trace * r_packet.mu)
mu_interaction = (r_packet.mu * r_packet.r + distance_trace) / r_interaction
doppler_factor = 1.0 - mu_interaction * r_interaction /
( time_explosion * C)
"""
if not numba_config.ENABLE_FULL_RELATIVITY:
energy = calc_packet_energy(r_packet, distance_trace, time_explosion)
else:
energy = calc_packet_energy_full_relativity(r_packet)
estimators.j_blue_estimator[cur_line_id, r_packet.current_shell_id] += (
energy / r_packet.nu
)
estimators.Edotlu_estimator[
cur_line_id, r_packet.current_shell_id
] += energy
@njit(**njit_dict_no_parallel)
def calc_packet_energy_full_relativity(r_packet):
# accurate to 1 / gamma - according to C. Vogl
return r_packet.energy
@njit(**njit_dict_no_parallel)
def calc_packet_energy(r_packet, distance_trace, time_explosion):
doppler_factor = 1.0 - (
(distance_trace + r_packet.mu * r_packet.r)
/ (time_explosion * C_SPEED_OF_LIGHT)
)
energy = r_packet.energy * doppler_factor
return energy
@njit(**njit_dict_no_parallel)
def trace_packet(r_packet, numba_model, numba_plasma, estimators):
"""
Traces the RPacket through the ejecta and stops when an interaction happens (heart of the calculation)
Parameters
----------
r_packet : tardis.montecarlo.montecarlo_numba.r_packet.RPacket
numba_model : tardis.montecarlo.montecarlo_numba.numba_interface.NumbaModel
numba_plasma : tardis.montecarlo.montecarlo_numba.numba_interface.NumbaPlasma
estimators : tardis.montecarlo.montecarlo_numba.numba_interface.Estimators
Returns
-------
"""
r_inner = numba_model.r_inner[r_packet.current_shell_id]
r_outer = numba_model.r_outer[r_packet.current_shell_id]
distance_boundary, delta_shell = calculate_distance_boundary(
r_packet.r, r_packet.mu, r_inner, r_outer
)
# defining start for line interaction
start_line_id = r_packet.next_line_id
# defining taus
tau_event = -np.log(np.random.random())
tau_trace_line_combined = 0.0
# e scattering initialization
cur_electron_density = numba_plasma.electron_density[
r_packet.current_shell_id
]
distance_electron = calculate_distance_electron(
cur_electron_density, tau_event
)
# Calculating doppler factor
doppler_factor = get_doppler_factor(
r_packet.r, r_packet.mu, numba_model.time_explosion
)
comov_nu = r_packet.nu * doppler_factor
    cur_line_id = start_line_id # initializing variable for Numba
# - do not remove
last_line_id = len(numba_plasma.line_list_nu) - 1
for cur_line_id in range(start_line_id, len(numba_plasma.line_list_nu)):
# Going through the lines
nu_line = numba_plasma.line_list_nu[cur_line_id]
nu_line_last_interaction = numba_plasma.line_list_nu[cur_line_id - 1]
# Getting the tau for the next line
tau_trace_line = numba_plasma.tau_sobolev[
cur_line_id, r_packet.current_shell_id
]
# Adding it to the tau_trace_line_combined
tau_trace_line_combined += tau_trace_line
# Calculating the distance until the current photons co-moving nu
# redshifts to the line frequency
is_last_line = cur_line_id == last_line_id
distance_trace = calculate_distance_line(
r_packet,
comov_nu,
is_last_line,
nu_line,
numba_model.time_explosion,
)
# calculating the tau electron of how far the trace has progressed
tau_trace_electron = calculate_tau_electron(
cur_electron_density, distance_trace
)
# calculating the trace
tau_trace_combined = tau_trace_line_combined + tau_trace_electron
if (
(distance_boundary <= distance_trace)
and (distance_boundary <= distance_electron)
) and distance_trace != 0.0:
interaction_type = InteractionType.BOUNDARY # BOUNDARY
r_packet.next_line_id = cur_line_id
distance = distance_boundary
break
if (
(distance_electron < distance_trace)
and (distance_electron < distance_boundary)
) and distance_trace != 0.0:
interaction_type = InteractionType.ESCATTERING
# print('scattering')
distance = distance_electron
r_packet.next_line_id = cur_line_id
break
# Updating the J_b_lu and E_dot_lu
# This means we are still looking for line interaction and have not
# been kicked out of the path by boundary or electron interaction
update_line_estimators(
estimators,
r_packet,
cur_line_id,
distance_trace,
numba_model.time_explosion,
)
if (
tau_trace_combined > tau_event
and not montecarlo_configuration.disable_line_scattering
):
interaction_type = InteractionType.LINE # Line
r_packet.last_interaction_in_nu = r_packet.nu
r_packet.last_line_interaction_in_id = cur_line_id
r_packet.next_line_id = cur_line_id
distance = distance_trace
break
if not is_last_line:
test_for_close_line(
r_packet, cur_line_id + 1, nu_line, numba_plasma
)
# Recalculating distance_electron using tau_event -
# tau_trace_line_combined
distance_electron = calculate_distance_electron(
cur_electron_density, tau_event - tau_trace_line_combined
)
else: # Executed when no break occurs in the for loop
# We are beyond the line list now and the only next thing is to see
# if we are interacting with the boundary or electron scattering
if cur_line_id == (len(numba_plasma.line_list_nu) - 1):
# Treatment for last line
cur_line_id += 1
if distance_electron < distance_boundary:
distance = distance_electron
interaction_type = InteractionType.ESCATTERING
# print('scattering')
else:
distance = distance_boundary
interaction_type = InteractionType.BOUNDARY
# r_packet.next_line_id = cur_line_id
return distance, interaction_type, delta_shell
@njit(**njit_dict_no_parallel)
def move_r_packet(r_packet, distance, time_explosion, numba_estimator):
"""
Move packet a distance and recalculate the new angle mu
Parameters
----------
r_packet : tardis.montecarlo.montecarlo_numba.r_packet.RPacket
r_packet objects
time_explosion : float
time since explosion in s
numba_estimator : tardis.montecarlo.montecarlo_numba.numba_interface.NumbaEstimator
Estimators object
distance : float
distance in cm
"""
doppler_factor = get_doppler_factor(r_packet.r, r_packet.mu, time_explosion)
r = r_packet.r
if distance > 0.0:
new_r = np.sqrt(
r * r + distance * distance + 2.0 * r * distance * r_packet.mu
)
r_packet.mu = (r_packet.mu * r + distance) / new_r
r_packet.r = new_r
comov_nu = r_packet.nu * doppler_factor
comov_energy = r_packet.energy * doppler_factor
if not numba_config.ENABLE_FULL_RELATIVITY:
set_estimators(
r_packet, distance, numba_estimator, comov_nu, comov_energy
)
else:
distance = distance * doppler_factor
set_estimators_full_relativity(
r_packet,
distance,
numba_estimator,
comov_nu,
comov_energy,
doppler_factor,
)
@njit(**njit_dict_no_parallel)
def set_estimators(r_packet, distance, numba_estimator, comov_nu, comov_energy):
"""
Updating the estimators
"""
numba_estimator.j_estimator[r_packet.current_shell_id] += (
comov_energy * distance
)
numba_estimator.nu_bar_estimator[r_packet.current_shell_id] += (
comov_energy * distance * comov_nu
)
@njit(**njit_dict_no_parallel)
def set_estimators_full_relativity(
r_packet, distance, numba_estimator, comov_nu, comov_energy, doppler_factor
):
numba_estimator.j_estimator[r_packet.current_shell_id] += (
comov_energy * distance * doppler_factor
)
numba_estimator.nu_bar_estimator[r_packet.current_shell_id] += (
comov_energy * distance * comov_nu * doppler_factor
)
@njit(**njit_dict_no_parallel)
def move_packet_across_shell_boundary(packet, delta_shell, no_of_shells):
"""
Move packet across shell boundary - realizing if we are still in the simulation or have
moved out through the inner boundary or outer boundary and updating packet
status.
Parameters
----------
distance : float
distance to move to shell boundary
delta_shell : int
is +1 if moving outward or -1 if moving inward
no_of_shells : int
number of shells in TARDIS simulation
"""
next_shell_id = packet.current_shell_id + delta_shell
if next_shell_id >= no_of_shells:
packet.status = PacketStatus.EMITTED
elif next_shell_id < 0:
packet.status = PacketStatus.REABSORBED
else:
packet.current_shell_id = next_shell_id
@njit(**njit_dict_no_parallel)
def angle_aberration_CMF_to_LF(r_packet, time_explosion, mu):
"""
Converts angle aberration from comoving frame to
laboratory frame.
"""
ct = C_SPEED_OF_LIGHT * time_explosion
beta = r_packet.r / (ct)
return (r_packet.mu + beta) / (1.0 + beta * mu)
@njit(**njit_dict_no_parallel)
def angle_aberration_LF_to_CMF(r_packet, time_explosion, mu):
"""
c code:
double beta = rpacket_get_r (packet) * storage->inverse_time_explosion * INVERSE_C;
return (mu - beta) / (1.0 - beta * mu);
"""
ct = C_SPEED_OF_LIGHT * time_explosion
beta = r_packet.r / (ct)
return (mu - beta) / (1.0 - beta * mu)
@njit(**njit_dict_no_parallel)
def test_for_close_line(r_packet, line_id, nu_line, numba_plasma):
r_packet.is_close_line = abs(
numba_plasma.line_list_nu[line_id] - nu_line
) < (nu_line * CLOSE_LINE_THRESHOLD)
| [
"numpy.sqrt",
"numpy.searchsorted",
"numpy.random.random",
"math.sqrt",
"numba.njit",
"numba.experimental.jitclass"
] | [((1157, 1179), 'numba.experimental.jitclass', 'jitclass', (['rpacket_spec'], {}), '(rpacket_spec)\n', (1165, 1179), False, 'from numba.experimental import jitclass\n'), ((2261, 2290), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (2265, 2290), False, 'from numba import njit\n'), ((3407, 3436), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (3411, 3436), False, 'from numba import njit\n'), ((4825, 4854), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (4829, 4854), False, 'from numba import njit\n'), ((5534, 5563), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (5538, 5563), False, 'from numba import njit\n'), ((5874, 5903), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (5878, 5903), False, 'from numba import njit\n'), ((6169, 6198), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (6173, 6198), False, 'from numba import njit\n'), ((6523, 6552), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (6527, 6552), False, 'from numba import njit\n'), ((6636, 6665), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (6640, 6665), False, 'from numba import njit\n'), ((6777, 6806), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (6781, 6806), False, 'from numba import njit\n'), ((7312, 7341), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (7316, 7341), False, 'from numba import njit\n'), ((7441, 7470), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (7445, 7470), False, 'from numba import njit\n'), ((7590, 7619), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (7594, 7619), False, 'from numba import njit\n'), ((7686, 7715), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (7690, 7715), False, 'from numba import njit\n'), ((8903, 8932), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (8907, 8932), False, 'from numba import njit\n'), ((9064, 9093), 'numba.njit', 
'njit', ([], {}), '(**njit_dict_no_parallel)\n', (9068, 9093), False, 'from numba import njit\n'), ((9360, 9389), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (9364, 9389), False, 'from numba import njit\n'), ((14692, 14721), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (14696, 14721), False, 'from numba import njit\n'), ((16107, 16136), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (16111, 16136), False, 'from numba import njit\n'), ((16485, 16514), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (16489, 16514), False, 'from numba import njit\n'), ((16891, 16920), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (16895, 16920), False, 'from numba import njit\n'), ((17716, 17745), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (17720, 17745), False, 'from numba import njit\n'), ((18026, 18055), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (18030, 18055), False, 'from numba import njit\n'), ((18397, 18426), 'numba.njit', 'njit', ([], {}), '(**njit_dict_no_parallel)\n', (18401, 18426), False, 'from numba import njit\n'), ((6747, 6773), 'math.sqrt', 'math.sqrt', (['(1 - beta * beta)'], {}), '(1 - beta * beta)\n', (6756, 6773), False, 'import math\n'), ((7560, 7586), 'math.sqrt', 'math.sqrt', (['(1 - beta * beta)'], {}), '(1 - beta * beta)\n', (7569, 7586), False, 'import math\n'), ((15353, 15424), 'numpy.sqrt', 'np.sqrt', (['(r * r + distance * distance + 2.0 * r * distance * r_packet.mu)'], {}), '(r * r + distance * distance + 2.0 * r * distance * r_packet.mu)\n', (15360, 15424), True, 'import numpy as np\n'), ((2058, 2105), 'numpy.searchsorted', 'np.searchsorted', (['inverse_line_list_nu', 'comov_nu'], {}), '(inverse_line_list_nu, comov_nu)\n', (2073, 2105), True, 'import numpy as np\n'), ((2755, 2809), 'math.sqrt', 'math.sqrt', (['(r_outer * r_outer + (mu * mu - 1.0) * r * r)'], {}), '(r_outer * r_outer + (mu * mu - 1.0) * r * r)\n', (2764, 2809), 
False, 'import math\n'), ((7658, 7676), 'numpy.random.random', 'np.random.random', ([], {}), '()\n', (7674, 7676), True, 'import numpy as np\n'), ((10320, 10338), 'numpy.random.random', 'np.random.random', ([], {}), '()\n', (10336, 10338), True, 'import numpy as np\n'), ((3056, 3072), 'math.sqrt', 'math.sqrt', (['check'], {}), '(check)\n', (3065, 3072), False, 'import math\n'), ((3173, 3227), 'math.sqrt', 'math.sqrt', (['(r_outer * r_outer + (mu * mu - 1.0) * r * r)'], {}), '(r_outer * r_outer + (mu * mu - 1.0) * r * r)\n', (3182, 3227), False, 'import math\n'), ((5244, 5360), 'math.sqrt', 'math.sqrt', (['(ct * ct - (1 + r_packet.r * r_packet.r * (1 - r_packet.mu * r_packet.mu) *\n (1 + 1.0 / (nu_r * nu_r))))'], {}), '(ct * ct - (1 + r_packet.r * r_packet.r * (1 - r_packet.mu *\n r_packet.mu) * (1 + 1.0 / (nu_r * nu_r))))\n', (5253, 5360), False, 'import math\n')] |
import torch
import torch.nn as nn
import torch.nn.functional as F
from feature import Extractor
from torch.utils.data import DataLoader
import torch.optim as optim
from MVTec import NormalDataset, TestDataset
import time
import datetime
import warnings
import os
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from skimage.io import imsave
from skimage import measure
from skimage.transform import resize
from skimage.util import img_as_ubyte
from feat_cae import FeatCAE
import joblib
from sklearn.decomposition import PCA
from utils import * # auc_roc, visualization
class AnoSegDFR():
"""
Anomaly segmentation model: DFR.
"""
def __init__(self, cfg):
super(AnoSegDFR, self).__init__()
self.cfg = cfg
self.path = cfg.save_path # model and results saving path
self.n_layers = len(cfg.cnn_layers)
self.n_dim = cfg.latent_dim
self.log_step = 10
self.data_name = cfg.data_name
self.model_name = ""
self.img_size = cfg.img_size
self.threshold = cfg.thred
self.device = torch.device(cfg.device)
# feature extractor
self.extractor = Extractor(backbone=cfg.backbone,
cnn_layers=cfg.cnn_layers,
upsample=cfg.upsample,
is_agg=cfg.is_agg,
kernel_size=cfg.kernel_size,
stride=cfg.stride,
dilation=cfg.dilation,
featmap_size=cfg.featmap_size,
device=cfg.device).to(self.device)
# datasest
self.train_data_path = cfg.train_data_path
self.test_data_path = cfg.test_data_path
self.train_data = self.build_dataset(is_train=True)
self.test_data = self.build_dataset(is_train=False)
# dataloader
self.train_data_loader = DataLoader(self.train_data, batch_size=cfg.batch_size, shuffle=True, num_workers=2)
self.test_data_loader = DataLoader(self.test_data, batch_size=1, shuffle=False, num_workers=2)
self.eval_data_loader = DataLoader(self.train_data, batch_size=10, shuffle=False, num_workers=2)
# autoencoder classifier
self.search_model()
print("n_dim:", self.n_dim)
self.autoencoder, self.model_name = self.build_classifier()
if cfg.model_name != "":
self.model_name = cfg.model_name
print("model name:", self.model_name)
# optimizer
self.lr = cfg.lr
self.optimizer = optim.Adam(self.autoencoder.parameters(), lr=self.lr, weight_decay=0)
# saving paths
self.subpath = self.data_name + "/" + self.model_name
self.model_path = os.path.join(self.path, "models/" + self.subpath + "/model")
if not os.path.exists(self.model_path):
os.makedirs(self.model_path)
self.eval_path = os.path.join(self.path, "models/" + self.subpath + "/eval")
if not os.path.exists(self.eval_path):
os.makedirs(self.eval_path)
def search_model(self):
subpath = self.path + "/models/" + self.data_name
        pattern = re.compile(r'vgg19_(\w+)_l(\d+)_d(\d+)')  # raw string avoids invalid-escape warnings
if os.path.exists(subpath):
trained_models = os.listdir(subpath)
if trained_models:
for model in trained_models:
                    match = pattern.search(model)
                    if match is None:
                        # skip files that do not follow the expected naming scheme
                        continue
data_name = match.group(1)
n_layers = int(match.group(2))
n_dim = int(match.group(3))
if data_name == self.data_name and n_layers == self.n_layers:
if self.n_dim:
if n_dim == self.n_dim:
self.model_name = model
break
else:
self.model_name = model
self.n_dim = n_dim
break
def build_classifier(self):
# self.load_dim(self.model_path)
if self.n_dim is None:
print("Estimating one class classifier AE parameter...")
feats = torch.Tensor()
for i, normal_img in enumerate(self.eval_data_loader):
i += 1
if i > 1:
break
normal_img = normal_img.to(self.device)
feat = self.extractor.feat_vec(normal_img)
feats = torch.cat([feats, feat.cpu()], dim=0)
# to numpy
feats = feats.detach().numpy()
# estimate parameters for mlp
            pca = PCA(n_components=0.90)  # keep enough components to explain 90% of the variance
pca.fit(feats)
n_dim, in_feat = pca.components_.shape
print("AE Parameter (in_feat, n_dim): ({}, {})".format(in_feat, n_dim))
self.n_dim = n_dim
else:
for i, normal_img in enumerate(self.eval_data_loader):
i += 1
if i > 1:
break
normal_img = normal_img.to(self.device)
feat = self.extractor.feat_vec(normal_img)
in_feat = feat.shape[1]
autoencoder = FeatCAE(in_channels=in_feat, latent_dim=self.n_dim, is_bn=self.cfg.is_bn).to(self.device)
model_name = self.model_name
if not model_name:
model_name = "AnoSegDFR({})_{}_{}_l{}_d{}_s{}_k{}_{}".format('BN' if self.cfg.is_bn else 'noBN',
self.cfg.backbone, self.data_name,
self.n_layers, self.n_dim, self.cfg.stride[0],
self.cfg.kernel_size[0], self.cfg.upsample)
return autoencoder, model_name
def build_dataset(self, is_train):
normal_data_path = self.train_data_path
abnormal_data_path = self.test_data_path
if is_train:
dataset = NormalDataset(normal_data_path, normalize=True)
else:
dataset = TestDataset(path=abnormal_data_path)
return dataset
def train(self):
if self.load_model():
print("Model Loaded.")
return
start_time = time.time()
print("Beginning training...")
# train
iters_per_epoch = len(self.train_data_loader) # total iterations every epoch
epochs = self.cfg.epochs # total epochs
for epoch in range(1, epochs+1):
self.extractor.train()
self.autoencoder.train()
losses = []
for i, normal_img in enumerate(self.train_data_loader):
normal_img = normal_img.to(self.device)
# forward and backward
total_loss = self.optimize_step(normal_img)
# statistics and logging
loss = {}
loss['total_loss'] = total_loss.data.item()
# tracking loss
losses.append(loss['total_loss'])
if epoch % 5 == 0:
# self.save_model()
print('Epoch {}/{}'.format(epoch, epochs))
print('-' * 10)
elapsed = time.time() - start_time
total_time = ((epochs * iters_per_epoch) - (epoch * iters_per_epoch + i)) * elapsed / (
epoch * iters_per_epoch + i + 1)
epoch_time = (iters_per_epoch - i) * elapsed / (epoch * iters_per_epoch + i + 1)
epoch_time = str(datetime.timedelta(seconds=epoch_time))
total_time = str(datetime.timedelta(seconds=total_time))
elapsed = str(datetime.timedelta(seconds=elapsed))
log = "Elapsed {}/{} -- {} , Epoch [{}/{}], Iter [{}/{}]".format(
elapsed, epoch_time, total_time, epoch, epochs, i + 1, iters_per_epoch)
for tag, value in loss.items():
log += ", {}: {:.4f}".format(tag, value)
print(log)
if epoch % 10 == 0:
# save model
self.save_model()
self.validation(epoch)
# print("Cost total time {}s".format(time.time() - start_time))
# print("Done.")
self.tracking_loss(epoch, np.mean(np.array(losses)))
# save model
self.save_model()
print("Cost total time {}s".format(time.time() - start_time))
print("Done.")
def tracking_loss(self, epoch, loss):
out_file = os.path.join(self.eval_path, '{}_epoch_loss.csv'.format(self.model_name))
if not os.path.exists(out_file):
with open(out_file, mode='w') as f:
f.write("Epoch" + ",loss" + "\n")
with open(out_file, mode='a+') as f:
f.write(str(epoch) + "," + str(loss) + "\n")
def optimize_step(self, input_data):
self.extractor.train()
self.autoencoder.train()
self.optimizer.zero_grad()
# forward
input_data = self.extractor(input_data)
# print(input_data.size())
dec = self.autoencoder(input_data)
# loss
total_loss = self.autoencoder.loss_function(dec, input_data.detach().data)
# self.reset_grad()
total_loss.backward()
self.optimizer.step()
return total_loss
def score(self, input):
"""
Args:
input: image with size of (img_size_h, img_size_w, channels)
Returns:
score map with shape (img_size_h, img_size_w)
"""
self.extractor.eval()
self.autoencoder.eval()
input = self.extractor(input)
dec = self.autoencoder(input)
# sample energy
scores = self.autoencoder.compute_energy(dec, input)
scores = scores.reshape((1, 1, self.extractor.out_size[0], self.extractor.out_size[1])) # test batch size is 1.
scores = nn.functional.interpolate(scores, size=self.img_size, mode="bilinear", align_corners=True).squeeze()
# print("score shape:", scores.shape)
return scores
def segment(self, input, threshold=0.5):
"""
Args:
input: image with size of (img_size_h, img_size_w, channels)
Returns:
score map and binary score map with shape (img_size_h, img_size_w)
"""
# predict
scores = self.score(input).data.cpu().numpy()
# binary score
#print("threshold:", threshold)
binary_scores = np.zeros_like(scores) # torch.zeros_like(scores)
binary_scores[scores <= threshold] = 0
binary_scores[scores > threshold] = 1
return scores, binary_scores
def segment_evaluation(self):
i = 0
metrics = []
time_start = time.time()
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
i += 1
# segment
img = img.to(self.device)
scores, binary_scores = self.segment(img, threshold=self.threshold)
# show something
# plt.figure()
# ax1 = plt.subplot(1, 2, 1)
# ax1.imshow(resize(mask[0], (256, 256)))
# ax1.set_title("gt")
# ax2 = plt.subplot(1, 2, 2)
# ax2.imshow(scores)
# ax2.set_title("pred")
mask = mask.squeeze().numpy()
name = name[0]
# save results
self.save_seg_results(normalize(scores), binary_scores, mask, name)
# metrics of one batch
if name.split("/")[-2] != "good":
specificity, sensitivity, accuracy, coverage, auc = spec_sensi_acc_iou_auc(mask, binary_scores, scores)
metrics.append([specificity, sensitivity, accuracy, coverage, auc])
print("Batch {},".format(i), "Cost total time {}s".format(time.time()-time_start))
# metrics over all data
metrics = np.array(metrics)
metrics_mean = metrics.mean(axis=0)
metrics_std = metrics.std(axis=0)
print("metrics: specificity, sensitivity, accuracy, iou, auc")
print("mean:", metrics_mean)
print("std:", metrics_std)
print("threshold:", self.threshold)
def save_paths(self):
# generating saving paths
score_map_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/score_map")
if not os.path.exists(score_map_path):
os.makedirs(score_map_path)
binary_score_map_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/binary_score_map")
if not os.path.exists(binary_score_map_path):
os.makedirs(binary_score_map_path)
gt_pred_map_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/gt_pred_score_map")
if not os.path.exists(gt_pred_map_path):
os.makedirs(gt_pred_map_path)
mask_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/mask")
if not os.path.exists(mask_path):
os.makedirs(mask_path)
gt_pred_seg_image_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/gt_pred_seg_image")
if not os.path.exists(gt_pred_seg_image_path):
os.makedirs(gt_pred_seg_image_path)
return score_map_path, binary_score_map_path, gt_pred_map_path, mask_path, gt_pred_seg_image_path
def save_seg_results(self, scores, binary_scores, mask, name):
score_map_path, binary_score_map_path, gt_pred_score_map, mask_path, gt_pred_seg_image_path = self.save_paths()
img_name = name.split("/")
img_name = "-".join(img_name[-2:])
#print(img_name)
# score map
imsave(os.path.join(score_map_path, "{}".format(img_name)), img_as_ubyte(scores), check_contrast=False)
# binary score map
imsave(os.path.join(binary_score_map_path, "{}".format(img_name)), img_as_ubyte(binary_scores), check_contrast=False)
# mask
imsave(os.path.join(mask_path, "{}".format(img_name)), img_as_ubyte(mask), check_contrast=False)
# # pred vs gt map
# imsave(os.path.join(gt_pred_score_map, "{}".format(img_name)), normalize(binary_scores + mask))
visulization_score(img_file=name, mask_path=mask_path,
score_map_path=score_map_path, saving_path=gt_pred_score_map)
# pred vs gt image
visulization(img_file=name, mask_path=mask_path,
score_map_path=binary_score_map_path, saving_path=gt_pred_seg_image_path)
def save_model(self, epoch=0):
# save model weights
torch.save({'autoencoder': self.autoencoder.state_dict()},
os.path.join(self.model_path, 'autoencoder.pth'))
np.save(os.path.join(self.model_path, 'n_dim.npy'), self.n_dim)
def load_model(self, path=None):
print("Loading model...")
if path is None:
model_path = os.path.join(self.model_path, 'autoencoder.pth')
print("model path:", model_path)
if not os.path.exists(model_path):
print("No existing models found.")
return False
if torch.cuda.is_available():
data = torch.load(model_path)
else:
            data = torch.load(model_path, map_location=lambda storage, loc: storage)  # load all tensors onto the CPU
self.autoencoder.load_state_dict(data['autoencoder'])
return True
# def save_dim(self):
# np.save(os.path.join(self.model_path, 'n_dim.npy'))
def load_dim(self, model_path):
dim_path = os.path.join(model_path, 'n_dim.npy')
if not os.path.exists(dim_path):
print("Dim not exists.")
self.n_dim = None
else:
self.n_dim = np.load(os.path.join(model_path, 'n_dim.npy'))
########################################################
# Evaluation (testing)
########################################################
def segmentation_results(self):
#def normalize(x):
# return x/x.max()
time_start = time.time()
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
i += 1
# segment
img = img.to(self.device)
scores, binary_scores = self.segment(img, threshold=self.threshold)
mask = mask.squeeze().numpy()
name = name[0]
# save results
if name[0].split("/")[-2] != "good":
self.save_seg_results(normalize(scores), binary_scores, mask, name)
# self.save_seg_results((scores-score_min)/score_range, binary_scores, mask, name)
#print("Batch {},".format(i), "Cost total time {}s".format(time.time()-time_start))
######################################################
# Evaluation of segmentation
######################################################
def save_segment_paths(self, fpr):
# generating saving paths
binary_score_map_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/fpr_{}/binary_score_map".format(fpr))
if not os.path.exists(binary_score_map_path):
os.makedirs(binary_score_map_path)
mask_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/fpr_{}/mask".format(fpr))
if not os.path.exists(mask_path):
os.makedirs(mask_path)
gt_pred_seg_image_path = os.path.join(self.cfg.save_path+"/Results", self.subpath + "/fpr_{}/gt_pred_seg_image".format(fpr))
if not os.path.exists(gt_pred_seg_image_path):
os.makedirs(gt_pred_seg_image_path)
return binary_score_map_path, mask_path, gt_pred_seg_image_path
def save_segment_results(self, binary_scores, mask, name, fpr):
binary_score_map_path, mask_path, gt_pred_seg_image_path = self.save_segment_paths(fpr)
img_name = name.split("/")
img_name = "-".join(img_name[-2:])
print(img_name)
# binary score map
imsave(os.path.join(binary_score_map_path, "{}".format(img_name)), binary_scores)
# mask
imsave(os.path.join(mask_path, "{}".format(img_name)), mask)
# pred vs gt image
visulization(img_file=name, mask_path=mask_path,
score_map_path=binary_score_map_path, saving_path=gt_pred_seg_image_path)
def estimate_thred_with_fpr(self, expect_fpr=0.05):
"""
Use training set to estimate the threshold.
"""
threshold = 0
scores_list = []
for i, normal_img in enumerate(self.train_data_loader):
normal_img = normal_img[0:1].to(self.device)
scores_list.append(self.score(normal_img).data.cpu().numpy())
scores = np.concatenate(scores_list, axis=0)
# find the optimal threshold
max_step = 100
min_th = scores.min()
max_th = scores.max()
delta = (max_th - min_th) / max_step
for step in range(max_step):
threshold = max_th - step * delta
# segmentation
binary_score_maps = np.zeros_like(scores)
binary_score_maps[scores <= threshold] = 0
binary_score_maps[scores > threshold] = 1
            # false-positive rate of the binary map at this threshold
fpr = binary_score_maps.sum() / binary_score_maps.size
print(
"threshold {}: find fpr {} / user defined fpr {}".format(threshold, fpr, expect_fpr))
if fpr >= expect_fpr: # find the optimal threshold
print("find optimal threshold:", threshold)
print("Done.\n")
break
return threshold
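The scan in `estimate_thred_with_fpr` walks thresholds from the maximum score downward until the fraction of pixels above the threshold reaches the target rate. On synthetic data the result can be cross-checked against the (1 - expect_fpr) quantile; a sketch, not the project's code:

```python
import numpy as np

# Stand-in for normal-image score maps; values are made up.
rng = np.random.default_rng(0)
scores = rng.random((10, 64, 64))
expect_fpr = 0.05
# For a continuous score distribution, the threshold at the target FPR is
# the (1 - expect_fpr) quantile of the scores.
threshold = np.quantile(scores, 1.0 - expect_fpr)
fpr = (scores > threshold).mean()   # fraction of pixels flagged anomalous
assert abs(fpr - expect_fpr) < 0.01
```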
def segment_evaluation_with_fpr(self, expect_fpr=0.05):
# estimate threshold
thred = self.estimate_thred_with_fpr(expect_fpr=expect_fpr)
# segment
i = 0
metrics = []
time_start = time.time()
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
i += 1
# segment
img = img.to(self.device)
scores, binary_scores = self.segment(img, threshold=thred)
mask = mask.squeeze().numpy()
name = name[0]
# save results
self.save_segment_results(binary_scores, mask, name, expect_fpr)
print("Batch {},".format(i), "Cost total time {}s".format(time.time()-time_start))
print("threshold:", thred)
def segment_evaluation_with_otsu_li(self, seg_method='otsu'):
"""
ref: skimage.filters.threshold_otsu
skimage.filters.threshold_li
e.g.
thresh = filters.threshold_otsu(image)
dst =(image <= thresh)*1.0
"""
from skimage.filters import threshold_li
from skimage.filters import threshold_otsu
# segment
thred = 0
time_start = time.time()
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
i += 1
# segment
img = img.to(self.device)
# estimate threshold and seg
if seg_method == 'otsu':
thred = threshold_otsu(img.detach().cpu().numpy())
else:
thred = threshold_li(img.detach().cpu().numpy())
scores, binary_scores = self.segment(img, threshold=thred)
mask = mask.squeeze().numpy()
name = name[0]
# save results
self.save_segment_results(binary_scores, mask, name, seg_method)
print("Batch {},".format(i), "Cost total time {}s".format(time.time()-time_start))
print("threshold:", thred)
def segmentation_evaluation(self):
if self.load_model():
print("Model Loaded.")
else:
print("None pretrained models.")
return
self.segmentation_results()
#self.segment_evaluation_with_fpr(expect_fpr=self.cfg.except_fpr)
def validation(self, epoch):
i = 0
time_start = time.time()
masks = []
scores = []
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
i += 1
# data
img = img.to(self.device)
mask = mask.squeeze().numpy()
# score
score = self.score(img).data.cpu().numpy()
masks.append(mask)
scores.append(score)
#print("Batch {},".format(i), "Cost total time {}s".format(time.time() - time_start))
# as array
masks = np.array(masks)
masks[masks <= 0.5] = 0
masks[masks > 0.5] = 1
        masks = masks.astype(bool)  # np.bool is deprecated in favour of the builtin bool
scores = np.array(scores)
# auc score
auc_score, roc = auc_roc(masks, scores)
# metrics over all data
print("auc:", auc_score)
out_file = os.path.join(self.eval_path, '{}_epoch_auc.csv'.format(self.model_name))
if not os.path.exists(out_file):
with open(out_file, mode='w') as f:
f.write("Epoch" + ",AUC" + "\n")
with open(out_file, mode='a+') as f:
f.write(str(epoch) + "," + str(auc_score) + "\n")
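`auc_roc` comes from the project's `utils` module; a minimal sketch of the same pixel-level AUC step using `sklearn.metrics.roc_auc_score` directly (tiny synthetic mask/score arrays; the helper is assumed to wrap something similar):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

masks = np.array([[0, 0, 1, 1]], dtype=bool)     # ground-truth anomaly pixels
scores = np.array([[0.1, 0.4, 0.35, 0.8]])       # predicted anomaly scores
# flatten image-shaped arrays to pixel vectors, as the validation loop does
auc_score = roc_auc_score(masks.ravel(), scores.ravel())
assert abs(auc_score - 0.75) < 1e-9
```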
def metrics_evaluation(self, expect_fpr=0.3, max_step=1000):
from sklearn.metrics import auc, roc_curve, RocCurveDisplay
from sklearn.metrics import roc_auc_score, average_precision_score
if self.load_model():
print("Model Loaded.")
else:
print("No pretrained models.")
return
print("Calculating ROC & PR AUC metrics on testing data...")
time_start = time.time()
masks = []
scores = []
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
# data
img = img.to(self.device)
mask = mask.squeeze().numpy()
# anomaly score
# anomaly_map = self.score(img).data.cpu().numpy()
anomaly_map = self.score(img).data.cpu().numpy()
masks.append(mask)
scores.append(anomaly_map)
#print("Batch {},".format(i), "Cost total time {}s".format(time.time() - time_start))
# as array
masks = np.array(masks)
scores = np.array(scores)
# binary masks
masks[masks <= 0.5] = 0
masks[masks > 0.5] = 1
        masks = masks.astype(bool)  # np.bool is deprecated in favour of the builtin bool
# auc score (image level) for detection
labels = masks.any(axis=1).any(axis=1)
# preds = scores.mean(1).mean(1)
preds = scores.max(1).max(1) # for detection
det_auc_score = roc_auc_score(labels, preds)
det_pr_score = average_precision_score(labels, preds)
# auc score (per pixel level) for segmentation
seg_auc_score = roc_auc_score(masks.ravel(), scores.ravel())
seg_pr_score = average_precision_score(masks.ravel(), scores.ravel())
# metrics over all data
print(f"Det ROC AUC: {det_auc_score:.4f}, Seg ROC AUC: {seg_auc_score:.4f}")
print(f"Det PR AUC: {det_pr_score:.4f}, Seg PR AUC: {seg_pr_score:.4f}")
det_fpr, det_tpr, _ = roc_curve(labels, preds)
seg_fpr, seg_tpr, _ = roc_curve(masks.ravel(), scores.ravel())
path = os.path.join(self.eval_path, "predictions.npz")
np.savez(path, det_fpr=det_fpr, det_tpr=det_tpr, seg_fpr=seg_fpr, seg_tpr=seg_tpr)
print("Calculating IOU & PRO metrics on testing data...")
# per region overlap and per image iou
max_th = scores.max()
min_th = scores.min()
delta = (max_th - min_th) / max_step
ious_mean = []
ious_std = []
pros_mean = []
pros_std = []
threds = []
fprs = []
        binary_score_maps = np.zeros_like(scores, dtype=bool)  # np.bool is deprecated
for step in range(max_step):
thred = max_th - step * delta
# segmentation
binary_score_maps[scores <= thred] = 0
binary_score_maps[scores > thred] = 1
pro = [] # per region overlap
iou = [] # per image iou
# pro: find each connected gt region, compute the overlapped pixels between the gt region and predicted region
# iou: for each image, compute the ratio, i.e. intersection/union between the gt and predicted binary map
for i in range(len(binary_score_maps)): # for i th image
# pro (per region level)
label_map = measure.label(masks[i], connectivity=2)
props = measure.regionprops(label_map)
for prop in props:
x_min, y_min, x_max, y_max = prop.bbox # find the bounding box of an anomaly region
cropped_pred_label = binary_score_maps[i][x_min:x_max, y_min:y_max]
# cropped_mask = masks[i][x_min:x_max, y_min:y_max] # bug!
cropped_mask = prop.filled_image # corrected!
intersection = np.logical_and(cropped_pred_label, cropped_mask).astype(np.float32).sum()
pro.append(intersection / prop.area)
# iou (per image level)
intersection = np.logical_and(binary_score_maps[i], masks[i]).astype(np.float32).sum()
union = np.logical_or(binary_score_maps[i], masks[i]).astype(np.float32).sum()
                if masks[i].any():  # when the gt has no anomaly pixels, skip it
iou.append(intersection / union)
# against steps and average metrics on the testing data
ious_mean.append(np.array(iou).mean())
# print("per image mean iou:", np.array(iou).mean())
ious_std.append(np.array(iou).std())
pros_mean.append(np.array(pro).mean())
pros_std.append(np.array(pro).std())
# fpr for pro-auc
masks_neg = ~masks
fpr = np.logical_and(masks_neg, binary_score_maps).sum() / masks_neg.sum()
fprs.append(fpr)
threds.append(thred)
# as array
threds = np.array(threds)
pros_mean = np.array(pros_mean)
pros_std = np.array(pros_std)
fprs = np.array(fprs)
ious_mean = np.array(ious_mean)
ious_std = np.array(ious_std)
# save results
data = np.vstack([threds, fprs, pros_mean, pros_std, ious_mean, ious_std])
df_metrics = pd.DataFrame(data=data.T, columns=['thred', 'fpr',
'pros_mean', 'pros_std',
'ious_mean', 'ious_std'])
# save results
df_metrics.to_csv(os.path.join(self.eval_path, 'thred_fpr_pro_iou.csv'), sep=',', index=False)
# best per image iou
best_miou = ious_mean.max()
print(f"Best IOU: {best_miou:.4f}")
# default 30% fpr vs pro, pro_auc
idx = fprs <= expect_fpr # find the indexs of fprs that is less than expect_fpr (default 0.3)
fprs_selected = fprs[idx]
fprs_selected = rescale(fprs_selected) # rescale fpr [0,0.3] -> [0, 1]
pros_mean_selected = pros_mean[idx]
pro_auc_score = auc(fprs_selected, pros_mean_selected)
print("pro auc ({}% FPR):".format(int(expect_fpr*100)), pro_auc_score)
# save results
data = np.vstack([threds[idx], fprs[idx], pros_mean[idx], pros_std[idx]])
df_metrics = pd.DataFrame(data=data.T, columns=['thred', 'fpr',
'pros_mean', 'pros_std'])
df_metrics.to_csv(os.path.join(self.eval_path, 'thred_fpr_pro_{}.csv'.format(expect_fpr)), sep=',', index=False)
# save auc, pro as 30 fpr
with open(os.path.join(self.eval_path, 'pr_auc_pro_iou_{}.csv'.format(expect_fpr)), mode='w') as f:
f.write("det_pr, det_auc, seg_pr, seg_auc, seg_pro, seg_iou\n")
f.write(f"{det_pr_score:.5f},{det_auc_score:.5f},{seg_pr_score:.5f},{seg_auc_score:.5f},{pro_auc_score:.5f},{best_miou:.5f}")
def metrics_detection(self, expect_fpr=0.3, max_step=5000):
from sklearn.metrics import auc
from sklearn.metrics import roc_auc_score, average_precision_score
if self.load_model():
print("Model Loaded.")
else:
print("None pretrained models.")
return
print("Calculating AUC, IOU, PRO metrics on testing data...")
time_start = time.time()
masks = []
scores = []
for i, (img, mask, name) in enumerate(self.test_data_loader): # batch size is 1.
# data
img = img.to(self.device)
mask = mask.squeeze().numpy()
# anomaly score
# anomaly_map = self.score(img).data.cpu().numpy()
anomaly_map = self.score(img).data.cpu().numpy()
masks.append(mask)
scores.append(anomaly_map)
#print("Batch {},".format(i), "Cost total time {}s".format(time.time() - time_start))
# as array
masks = np.array(masks)
scores = np.array(scores)
# binary masks
masks[masks <= 0.5] = 0
masks[masks > 0.5] = 1
masks = masks.astype(np.bool)
# auc score (image level) for detection
labels = masks.any(axis=1).any(axis=1)
# preds = scores.mean(1).mean(1)
preds = scores.max(1).max(1) # for detection
det_auc_score = roc_auc_score(labels, preds)
det_pr_score = average_precision_score(labels, preds)
# auc score (per pixel level) for segmentation
seg_auc_score = roc_auc_score(masks.ravel(), scores.ravel())
seg_pr_score = average_precision_score(masks.ravel(), scores.ravel())
# metrics over all data
print(f"Det AUC: {det_auc_score:.4f}, Seg AUC: {seg_auc_score:.4f}")
print(f"Det PR: {det_pr_score:.4f}, Seg PR: {seg_pr_score:.4f}")
# save detection metrics
with open(os.path.join(self.eval_path, 'det_pr_auc.csv'), mode='w') as f:
f.write("det_pr, det_auc\n")
f.write(f"{det_pr_score:.5f},{det_auc_score:.5f}")
| [
"re.compile",
"sklearn.metrics.auc",
"sklearn.metrics.roc_auc_score",
"numpy.array",
"sklearn.metrics.roc_curve",
"torch.cuda.is_available",
"torch.nn.functional.interpolate",
"feat_cae.FeatCAE",
"datetime.timedelta",
"os.path.exists",
"numpy.savez",
"os.listdir",
"sklearn.decomposition.PCA"... | [((1116, 1140), 'torch.device', 'torch.device', (['cfg.device'], {}), '(cfg.device)\n', (1128, 1140), False, 'import torch\n'), ((1865, 1952), 'torch.utils.data.DataLoader', 'DataLoader', (['self.train_data'], {'batch_size': 'cfg.batch_size', 'shuffle': '(True)', 'num_workers': '(2)'}), '(self.train_data, batch_size=cfg.batch_size, shuffle=True,\n num_workers=2)\n', (1875, 1952), False, 'from torch.utils.data import DataLoader\n'), ((1981, 2051), 'torch.utils.data.DataLoader', 'DataLoader', (['self.test_data'], {'batch_size': '(1)', 'shuffle': '(False)', 'num_workers': '(2)'}), '(self.test_data, batch_size=1, shuffle=False, num_workers=2)\n', (1991, 2051), False, 'from torch.utils.data import DataLoader\n'), ((2084, 2156), 'torch.utils.data.DataLoader', 'DataLoader', (['self.train_data'], {'batch_size': '(10)', 'shuffle': '(False)', 'num_workers': '(2)'}), '(self.train_data, batch_size=10, shuffle=False, num_workers=2)\n', (2094, 2156), False, 'from torch.utils.data import DataLoader\n'), ((2700, 2760), 'os.path.join', 'os.path.join', (['self.path', "('models/' + self.subpath + '/model')"], {}), "(self.path, 'models/' + self.subpath + '/model')\n", (2712, 2760), False, 'import os\n'), ((2875, 2934), 'os.path.join', 'os.path.join', (['self.path', "('models/' + self.subpath + '/eval')"], {}), "(self.path, 'models/' + self.subpath + '/eval')\n", (2887, 2934), False, 'import os\n'), ((3127, 3169), 're.compile', 're.compile', (['"""vgg19_(\\\\w+)_l(\\\\d+)_d(\\\\d+)"""'], {}), "('vgg19_(\\\\w+)_l(\\\\d+)_d(\\\\d+)')\n", (3137, 3169), False, 'import re\n'), ((3178, 3201), 'os.path.exists', 'os.path.exists', (['subpath'], {}), '(subpath)\n', (3192, 3201), False, 'import os\n'), ((6291, 6302), 'time.time', 'time.time', ([], {}), '()\n', (6300, 6302), False, 'import time\n'), ((10593, 10614), 'numpy.zeros_like', 'np.zeros_like', (['scores'], {}), '(scores)\n', (10606, 10614), True, 'import numpy as np\n'), ((10867, 10878), 'time.time', 
'time.time', ([], {}), '()\n', (10876, 10878), False, 'import time\n'), ((12063, 12080), 'numpy.array', 'np.array', (['metrics'], {}), '(metrics)\n', (12071, 12080), True, 'import numpy as np\n'), ((12440, 12514), 'os.path.join', 'os.path.join', (["(self.cfg.save_path + '/Results')", "(self.subpath + '/score_map')"], {}), "(self.cfg.save_path + '/Results', self.subpath + '/score_map')\n", (12452, 12514), False, 'import os\n'), ((12633, 12718), 'os.path.join', 'os.path.join', (["(self.cfg.save_path + '/Results')", "(self.subpath + '/binary_score_map')"], {}), "(self.cfg.save_path + '/Results', self.subpath +\n '/binary_score_map')\n", (12645, 12718), False, 'import os\n'), ((12842, 12928), 'os.path.join', 'os.path.join', (["(self.cfg.save_path + '/Results')", "(self.subpath + '/gt_pred_score_map')"], {}), "(self.cfg.save_path + '/Results', self.subpath +\n '/gt_pred_score_map')\n", (12854, 12928), False, 'import os\n'), ((13035, 13104), 'os.path.join', 'os.path.join', (["(self.cfg.save_path + '/Results')", "(self.subpath + '/mask')"], {}), "(self.cfg.save_path + '/Results', self.subpath + '/mask')\n", (13047, 13104), False, 'import os\n'), ((13214, 13300), 'os.path.join', 'os.path.join', (["(self.cfg.save_path + '/Results')", "(self.subpath + '/gt_pred_seg_image')"], {}), "(self.cfg.save_path + '/Results', self.subpath +\n '/gt_pred_seg_image')\n", (13226, 13300), False, 'import os\n'), ((15790, 15827), 'os.path.join', 'os.path.join', (['model_path', '"""n_dim.npy"""'], {}), "(model_path, 'n_dim.npy')\n", (15802, 15827), False, 'import os\n'), ((16288, 16299), 'time.time', 'time.time', ([], {}), '()\n', (16297, 16299), False, 'import time\n'), ((18967, 19002), 'numpy.concatenate', 'np.concatenate', (['scores_list'], {'axis': '(0)'}), '(scores_list, axis=0)\n', (18981, 19002), True, 'import numpy as np\n'), ((20143, 20154), 'time.time', 'time.time', ([], {}), '()\n', (20152, 20154), False, 'import time\n'), ((21127, 21138), 'time.time', 'time.time', ([], {}), '()\n', 
(21136, 21138), False, 'import time\n'), ((22277, 22288), 'time.time', 'time.time', ([], {}), '()\n', (22286, 22288), False, 'import time\n'), ((22811, 22826), 'numpy.array', 'np.array', (['masks'], {}), '(masks)\n', (22819, 22826), True, 'import numpy as np\n'), ((22945, 22961), 'numpy.array', 'np.array', (['scores'], {}), '(scores)\n', (22953, 22961), True, 'import numpy as np\n'), ((23875, 23886), 'time.time', 'time.time', ([], {}), '()\n', (23884, 23886), False, 'import time\n'), ((24473, 24488), 'numpy.array', 'np.array', (['masks'], {}), '(masks)\n', (24481, 24488), True, 'import numpy as np\n'), ((24506, 24522), 'numpy.array', 'np.array', (['scores'], {}), '(scores)\n', (24514, 24522), True, 'import numpy as np\n'), ((24881, 24909), 'sklearn.metrics.roc_auc_score', 'roc_auc_score', (['labels', 'preds'], {}), '(labels, preds)\n', (24894, 24909), False, 'from sklearn.metrics import roc_auc_score, average_precision_score\n'), ((24933, 24971), 'sklearn.metrics.average_precision_score', 'average_precision_score', (['labels', 'preds'], {}), '(labels, preds)\n', (24956, 24971), False, 'from sklearn.metrics import roc_auc_score, average_precision_score\n'), ((25420, 25444), 'sklearn.metrics.roc_curve', 'roc_curve', (['labels', 'preds'], {}), '(labels, preds)\n', (25429, 25444), False, 'from sklearn.metrics import auc, roc_curve, RocCurveDisplay\n'), ((25531, 25578), 'os.path.join', 'os.path.join', (['self.eval_path', '"""predictions.npz"""'], {}), "(self.eval_path, 'predictions.npz')\n", (25543, 25578), False, 'import os\n'), ((25587, 25674), 'numpy.savez', 'np.savez', (['path'], {'det_fpr': 'det_fpr', 'det_tpr': 'det_tpr', 'seg_fpr': 'seg_fpr', 'seg_tpr': 'seg_tpr'}), '(path, det_fpr=det_fpr, det_tpr=det_tpr, seg_fpr=seg_fpr, seg_tpr=\n seg_tpr)\n', (25595, 25674), True, 'import numpy as np\n'), ((26071, 26107), 'numpy.zeros_like', 'np.zeros_like', (['scores'], {'dtype': 'np.bool'}), '(scores, dtype=np.bool)\n', (26084, 26107), True, 'import numpy as np\n'), 
((28397, 28413), 'numpy.array', 'np.array', (['threds'], {}), '(threds)\n', (28405, 28413), True, 'import numpy as np\n'), ((28434, 28453), 'numpy.array', 'np.array', (['pros_mean'], {}), '(pros_mean)\n', (28442, 28453), True, 'import numpy as np\n'), ((28473, 28491), 'numpy.array', 'np.array', (['pros_std'], {}), '(pros_std)\n', (28481, 28491), True, 'import numpy as np\n'), ((28507, 28521), 'numpy.array', 'np.array', (['fprs'], {}), '(fprs)\n', (28515, 28521), True, 'import numpy as np\n'), ((28551, 28570), 'numpy.array', 'np.array', (['ious_mean'], {}), '(ious_mean)\n', (28559, 28570), True, 'import numpy as np\n'), ((28590, 28608), 'numpy.array', 'np.array', (['ious_std'], {}), '(ious_std)\n', (28598, 28608), True, 'import numpy as np\n'), ((28656, 28723), 'numpy.vstack', 'np.vstack', (['[threds, fprs, pros_mean, pros_std, ious_mean, ious_std]'], {}), '([threds, fprs, pros_mean, pros_std, ious_mean, ious_std])\n', (28665, 28723), True, 'import numpy as np\n'), ((28745, 28850), 'pandas.DataFrame', 'pd.DataFrame', ([], {'data': 'data.T', 'columns': "['thred', 'fpr', 'pros_mean', 'pros_std', 'ious_mean', 'ious_std']"}), "(data=data.T, columns=['thred', 'fpr', 'pros_mean', 'pros_std',\n 'ious_mean', 'ious_std'])\n", (28757, 28850), True, 'import pandas as pd\n'), ((29548, 29586), 'sklearn.metrics.auc', 'auc', (['fprs_selected', 'pros_mean_selected'], {}), '(fprs_selected, pros_mean_selected)\n', (29551, 29586), False, 'from sklearn.metrics import auc\n'), ((29705, 29771), 'numpy.vstack', 'np.vstack', (['[threds[idx], fprs[idx], pros_mean[idx], pros_std[idx]]'], {}), '([threds[idx], fprs[idx], pros_mean[idx], pros_std[idx]])\n', (29714, 29771), True, 'import numpy as np\n'), ((29793, 29869), 'pandas.DataFrame', 'pd.DataFrame', ([], {'data': 'data.T', 'columns': "['thred', 'fpr', 'pros_mean', 'pros_std']"}), "(data=data.T, columns=['thred', 'fpr', 'pros_mean', 'pros_std'])\n", (29805, 29869), True, 'import pandas as pd\n'), ((30845, 30856), 'time.time', 'time.time', 
([], {}), '()\n', (30854, 30856), False, 'import time\n'), ((31443, 31458), 'numpy.array', 'np.array', (['masks'], {}), '(masks)\n', (31451, 31458), True, 'import numpy as np\n'), ((31476, 31492), 'numpy.array', 'np.array', (['scores'], {}), '(scores)\n', (31484, 31492), True, 'import numpy as np\n'), ((31851, 31879), 'sklearn.metrics.roc_auc_score', 'roc_auc_score', (['labels', 'preds'], {}), '(labels, preds)\n', (31864, 31879), False, 'from sklearn.metrics import roc_auc_score, average_precision_score\n'), ((31903, 31941), 'sklearn.metrics.average_precision_score', 'average_precision_score', (['labels', 'preds'], {}), '(labels, preds)\n', (31926, 31941), False, 'from sklearn.metrics import roc_auc_score, average_precision_score\n'), ((2776, 2807), 'os.path.exists', 'os.path.exists', (['self.model_path'], {}), '(self.model_path)\n', (2790, 2807), False, 'import os\n'), ((2821, 2849), 'os.makedirs', 'os.makedirs', (['self.model_path'], {}), '(self.model_path)\n', (2832, 2849), False, 'import os\n'), ((2950, 2980), 'os.path.exists', 'os.path.exists', (['self.eval_path'], {}), '(self.eval_path)\n', (2964, 2980), False, 'import os\n'), ((2994, 3021), 'os.makedirs', 'os.makedirs', (['self.eval_path'], {}), '(self.eval_path)\n', (3005, 3021), False, 'import os\n'), ((3232, 3251), 'os.listdir', 'os.listdir', (['subpath'], {}), '(subpath)\n', (3242, 3251), False, 'import os\n'), ((4177, 4191), 'torch.Tensor', 'torch.Tensor', ([], {}), '()\n', (4189, 4191), False, 'import torch\n'), ((4637, 4658), 'sklearn.decomposition.PCA', 'PCA', ([], {'n_components': '(0.9)'}), '(n_components=0.9)\n', (4640, 4658), False, 'from sklearn.decomposition import PCA\n'), ((6019, 6066), 'MVTec.NormalDataset', 'NormalDataset', (['normal_data_path'], {'normalize': '(True)'}), '(normal_data_path, normalize=True)\n', (6032, 6066), False, 'from MVTec import NormalDataset, TestDataset\n'), ((6103, 6139), 'MVTec.TestDataset', 'TestDataset', ([], {'path': 'abnormal_data_path'}), 
'(path=abnormal_data_path)\n', (6114, 6139), False, 'from MVTec import NormalDataset, TestDataset\n'), ((8701, 8725), 'os.path.exists', 'os.path.exists', (['out_file'], {}), '(out_file)\n', (8715, 8725), False, 'import os\n'), ((12528, 12558), 'os.path.exists', 'os.path.exists', (['score_map_path'], {}), '(score_map_path)\n', (12542, 12558), False, 'import os\n'), ((12572, 12599), 'os.makedirs', 'os.makedirs', (['score_map_path'], {}), '(score_map_path)\n', (12583, 12599), False, 'import os\n'), ((12728, 12765), 'os.path.exists', 'os.path.exists', (['binary_score_map_path'], {}), '(binary_score_map_path)\n', (12742, 12765), False, 'import os\n'), ((12779, 12813), 'os.makedirs', 'os.makedirs', (['binary_score_map_path'], {}), '(binary_score_map_path)\n', (12790, 12813), False, 'import os\n'), ((12938, 12970), 'os.path.exists', 'os.path.exists', (['gt_pred_map_path'], {}), '(gt_pred_map_path)\n', (12952, 12970), False, 'import os\n'), ((12984, 13013), 'os.makedirs', 'os.makedirs', (['gt_pred_map_path'], {}), '(gt_pred_map_path)\n', (12995, 13013), False, 'import os\n'), ((13118, 13143), 'os.path.exists', 'os.path.exists', (['mask_path'], {}), '(mask_path)\n', (13132, 13143), False, 'import os\n'), ((13157, 13179), 'os.makedirs', 'os.makedirs', (['mask_path'], {}), '(mask_path)\n', (13168, 13179), False, 'import os\n'), ((13310, 13348), 'os.path.exists', 'os.path.exists', (['gt_pred_seg_image_path'], {}), '(gt_pred_seg_image_path)\n', (13324, 13348), False, 'import os\n'), ((13362, 13397), 'os.makedirs', 'os.makedirs', (['gt_pred_seg_image_path'], {}), '(gt_pred_seg_image_path)\n', (13373, 13397), False, 'import os\n'), ((13884, 13904), 'skimage.util.img_as_ubyte', 'img_as_ubyte', (['scores'], {}), '(scores)\n', (13896, 13904), False, 'from skimage.util import img_as_ubyte\n'), ((14031, 14058), 'skimage.util.img_as_ubyte', 'img_as_ubyte', (['binary_scores'], {}), '(binary_scores)\n', (14043, 14058), False, 'from skimage.util import img_as_ubyte\n'), ((14161, 14179), 
'skimage.util.img_as_ubyte', 'img_as_ubyte', (['mask'], {}), '(mask)\n', (14173, 14179), False, 'from skimage.util import img_as_ubyte\n'), ((14813, 14861), 'os.path.join', 'os.path.join', (['self.model_path', '"""autoencoder.pth"""'], {}), "(self.model_path, 'autoencoder.pth')\n", (14825, 14861), False, 'import os\n'), ((14879, 14921), 'os.path.join', 'os.path.join', (['self.model_path', '"""n_dim.npy"""'], {}), "(self.model_path, 'n_dim.npy')\n", (14891, 14921), False, 'import os\n'), ((15057, 15105), 'os.path.join', 'os.path.join', (['self.model_path', '"""autoencoder.pth"""'], {}), "(self.model_path, 'autoencoder.pth')\n", (15069, 15105), False, 'import os\n'), ((15294, 15319), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (15317, 15319), False, 'import torch\n'), ((15843, 15867), 'os.path.exists', 'os.path.exists', (['dim_path'], {}), '(dim_path)\n', (15857, 15867), False, 'import os\n'), ((17345, 17382), 'os.path.exists', 'os.path.exists', (['binary_score_map_path'], {}), '(binary_score_map_path)\n', (17359, 17382), False, 'import os\n'), ((17396, 17430), 'os.makedirs', 'os.makedirs', (['binary_score_map_path'], {}), '(binary_score_map_path)\n', (17407, 17430), False, 'import os\n'), ((17554, 17579), 'os.path.exists', 'os.path.exists', (['mask_path'], {}), '(mask_path)\n', (17568, 17579), False, 'import os\n'), ((17593, 17615), 'os.makedirs', 'os.makedirs', (['mask_path'], {}), '(mask_path)\n', (17604, 17615), False, 'import os\n'), ((17765, 17803), 'os.path.exists', 'os.path.exists', (['gt_pred_seg_image_path'], {}), '(gt_pred_seg_image_path)\n', (17779, 17803), False, 'import os\n'), ((17817, 17852), 'os.makedirs', 'os.makedirs', (['gt_pred_seg_image_path'], {}), '(gt_pred_seg_image_path)\n', (17828, 17852), False, 'import os\n'), ((19311, 19332), 'numpy.zeros_like', 'np.zeros_like', (['scores'], {}), '(scores)\n', (19324, 19332), True, 'import numpy as np\n'), ((23203, 23227), 'os.path.exists', 'os.path.exists', (['out_file'], 
{}), '(out_file)\n', (23217, 23227), False, 'import os\n'), ((29008, 29061), 'os.path.join', 'os.path.join', (['self.eval_path', '"""thred_fpr_pro_iou.csv"""'], {}), "(self.eval_path, 'thred_fpr_pro_iou.csv')\n", (29020, 29061), False, 'import os\n'), ((1195, 1432), 'feature.Extractor', 'Extractor', ([], {'backbone': 'cfg.backbone', 'cnn_layers': 'cfg.cnn_layers', 'upsample': 'cfg.upsample', 'is_agg': 'cfg.is_agg', 'kernel_size': 'cfg.kernel_size', 'stride': 'cfg.stride', 'dilation': 'cfg.dilation', 'featmap_size': 'cfg.featmap_size', 'device': 'cfg.device'}), '(backbone=cfg.backbone, cnn_layers=cfg.cnn_layers, upsample=cfg.\n upsample, is_agg=cfg.is_agg, kernel_size=cfg.kernel_size, stride=cfg.\n stride, dilation=cfg.dilation, featmap_size=cfg.featmap_size, device=\n cfg.device)\n', (1204, 1432), False, 'from feature import Extractor\n'), ((5217, 5290), 'feat_cae.FeatCAE', 'FeatCAE', ([], {'in_channels': 'in_feat', 'latent_dim': 'self.n_dim', 'is_bn': 'self.cfg.is_bn'}), '(in_channels=in_feat, latent_dim=self.n_dim, is_bn=self.cfg.is_bn)\n', (5224, 5290), False, 'from feat_cae import FeatCAE\n'), ((10011, 10105), 'torch.nn.functional.interpolate', 'nn.functional.interpolate', (['scores'], {'size': 'self.img_size', 'mode': '"""bilinear"""', 'align_corners': '(True)'}), "(scores, size=self.img_size, mode='bilinear',\n align_corners=True)\n", (10036, 10105), True, 'import torch.nn as nn\n'), ((15170, 15196), 'os.path.exists', 'os.path.exists', (['model_path'], {}), '(model_path)\n', (15184, 15196), False, 'import os\n'), ((15344, 15366), 'torch.load', 'torch.load', (['model_path'], {}), '(model_path)\n', (15354, 15366), False, 'import torch\n'), ((15408, 15473), 'torch.load', 'torch.load', (['model_path'], {'map_location': '(lambda storage, loc: storage)'}), '(model_path, map_location=lambda storage, loc: storage)\n', (15418, 15473), False, 'import torch\n'), ((15983, 16020), 'os.path.join', 'os.path.join', (['model_path', '"""n_dim.npy"""'], {}), "(model_path, 
'n_dim.npy')\n", (15995, 16020), False, 'import os\n'), ((26784, 26823), 'skimage.measure.label', 'measure.label', (['masks[i]'], {'connectivity': '(2)'}), '(masks[i], connectivity=2)\n', (26797, 26823), False, 'from skimage import measure\n'), ((26848, 26878), 'skimage.measure.regionprops', 'measure.regionprops', (['label_map'], {}), '(label_map)\n', (26867, 26878), False, 'from skimage import measure\n'), ((32395, 32441), 'os.path.join', 'os.path.join', (['self.eval_path', '"""det_pr_auc.csv"""'], {}), "(self.eval_path, 'det_pr_auc.csv')\n", (32407, 32441), False, 'import os\n'), ((7294, 7305), 'time.time', 'time.time', ([], {}), '()\n', (7303, 7305), False, 'import time\n'), ((7611, 7649), 'datetime.timedelta', 'datetime.timedelta', ([], {'seconds': 'epoch_time'}), '(seconds=epoch_time)\n', (7629, 7649), False, 'import datetime\n'), ((7684, 7722), 'datetime.timedelta', 'datetime.timedelta', ([], {'seconds': 'total_time'}), '(seconds=total_time)\n', (7702, 7722), False, 'import datetime\n'), ((7754, 7789), 'datetime.timedelta', 'datetime.timedelta', ([], {'seconds': 'elapsed'}), '(seconds=elapsed)\n', (7772, 7789), False, 'import datetime\n'), ((8390, 8406), 'numpy.array', 'np.array', (['losses'], {}), '(losses)\n', (8398, 8406), True, 'import numpy as np\n'), ((8500, 8511), 'time.time', 'time.time', ([], {}), '()\n', (8509, 8511), False, 'import time\n'), ((11988, 11999), 'time.time', 'time.time', ([], {}), '()\n', (11997, 11999), False, 'import time\n'), ((20642, 20653), 'time.time', 'time.time', ([], {}), '()\n', (20651, 20653), False, 'import time\n'), ((21855, 21866), 'time.time', 'time.time', ([], {}), '()\n', (21864, 21866), False, 'import time\n'), ((27902, 27915), 'numpy.array', 'np.array', (['iou'], {}), '(iou)\n', (27910, 27915), True, 'import numpy as np\n'), ((28017, 28030), 'numpy.array', 'np.array', (['iou'], {}), '(iou)\n', (28025, 28030), True, 'import numpy as np\n'), ((28067, 28080), 'numpy.array', 'np.array', (['pro'], {}), '(pro)\n', (28075, 
28080), True, 'import numpy as np\n'), ((28117, 28130), 'numpy.array', 'np.array', (['pro'], {}), '(pro)\n', (28125, 28130), True, 'import numpy as np\n'), ((28217, 28261), 'numpy.logical_and', 'np.logical_and', (['masks_neg', 'binary_score_maps'], {}), '(masks_neg, binary_score_maps)\n', (28231, 28261), True, 'import numpy as np\n'), ((27497, 27543), 'numpy.logical_and', 'np.logical_and', (['binary_score_maps[i]', 'masks[i]'], {}), '(binary_score_maps[i], masks[i])\n', (27511, 27543), True, 'import numpy as np\n'), ((27593, 27638), 'numpy.logical_or', 'np.logical_or', (['binary_score_maps[i]', 'masks[i]'], {}), '(binary_score_maps[i], masks[i])\n', (27606, 27638), True, 'import numpy as np\n'), ((27295, 27343), 'numpy.logical_and', 'np.logical_and', (['cropped_pred_label', 'cropped_mask'], {}), '(cropped_pred_label, cropped_mask)\n', (27309, 27343), True, 'import numpy as np\n')] |
"""
Wrapper to get the ChemProp predictions on the test set for each model in
the folder `model_folder_cp`.
"""
import os
import json
import argparse
import numpy as np
from nff.utils import (bash_command, parse_args, read_csv,
fprint, CHEMPROP_METRICS, apply_metric)
def is_model_path(cp_model_path):
"""
Check to see if a directory is actually a model path.
Args:
cp_model_path (str): path to folder
Returns:
check_paths (list[str]): paths to the different model checkpoints
is_model (bool): whether it's really a model path
"""
# get the paths of all the models saved with different initial random seeds
check_names = [i for i in os.listdir(cp_model_path)
if i.startswith("fold_") and i.split("_")[-1].isdigit()]
    # sort the folds numerically by their index
check_names = sorted(check_names, key=lambda x: int(x.split("_")[-1]))
check_paths = [os.path.join(cp_model_path, name, "model_0/model.pt")
for name in check_names]
is_model = len(check_paths) != 0
return check_paths, is_model
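The fold ordering above relies on a numeric suffix rather than lexicographic order. A minimal, self-contained sketch of that logic, using hypothetical fold directory names (no filesystem access):

```python
# Sketch of the checkpoint-ordering logic: keep only "fold_<k>" entries
# and sort them by the integer k, so "fold_10" comes after "fold_2".
def sort_folds(names):
    folds = [n for n in names
             if n.startswith("fold_") and n.split("_")[-1].isdigit()]
    return sorted(folds, key=lambda x: int(x.split("_")[-1]))

print(sort_folds(["fold_10", "fold_2", "args.json", "fold_1"]))
# -> ['fold_1', 'fold_2', 'fold_10']
```

A plain `sorted(folds)` would put `fold_10` before `fold_2`, which is why the integer key is needed.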
def predict(cp_folder,
test_path,
cp_model_path,
device,
check_paths):
"""
Get and save the prediction results from a ChemProp model.
Args:
cp_folder (str): path to the chemprop folder on your computer
test_path (str): path to the file with the test SMILES and their properties
cp_model_path (str): path to the folder with the model of interest
device (Union[str, int]): device to evaluate the model on
check_paths (list[str]): paths to the different model checkpoints
Returns:
        real (dict): dictionary of the form {prop: values}, where `values`
            are the real values of the property `prop`.
        preds (list[dict]): same as `real`, but with values predicted by
            each model (one dictionary per model).
"""
script = os.path.join(cp_folder, "predict.py")
    preds_path = os.path.join(cp_model_path, "test_pred.csv")
# load the arguments from that model to get the features path
args_path = f"{cp_model_path}/fold_0/args.json"
if not os.path.isfile(args_path):
args_path = args_path.replace("fold_0/", "")
with open(args_path, "r") as f:
args = json.load(f)
features_path = args["separate_test_features_path"]
# predictions from different models
preds = []
for i, check_path in enumerate(check_paths):
# make the chemprop command
this_path = preds_path.replace(".csv", f"_{i}.csv")
cmd = (f"source activate chemprop && python {script} "
f" --test_path {test_path} --preds_path {this_path} "
f" --checkpoint_paths {check_path} ")
if device == "cpu":
cmd += f" --no_cuda"
else:
cmd += f" --gpu {device} "
if features_path is not None:
feat_str = " ".join(features_path)
cmd += f" --features_path {feat_str}"
p = bash_command(cmd)
p.wait()
pred = read_csv(this_path)
preds.append(pred)
real = read_csv(test_path)
return real, preds
def get_metrics(actual_dic, pred_dics, metrics, cp_model_path):
"""
Get all requested metric scores for a set of predictions and save
to a JSON file.
Args:
actual_dic (dict): dictionary of the form {prop: real}, where `real` are the
real values of the property `prop`.
pred_dics (list[dict]): list of dictionaries, each the same as `real` but
with values predicted by each different model.
metrics (list[str]): metrics to apply
cp_model_path (str): path to the folder with the model of interest
Returns:
None
"""
overall_dic = {}
for i, pred_dic in enumerate(pred_dics):
metric_dic = {}
for prop in pred_dic.keys():
if prop == "smiles":
continue
actual = actual_dic[prop]
pred = pred_dic[prop]
metric_dic[prop] = {}
for metric in metrics:
score = apply_metric(metric, pred, actual)
metric_dic[prop][metric] = score
overall_dic[str(i)] = metric_dic
props = [prop for prop in pred_dic.keys() if prop != 'smiles']
overall_dic['average'] = {prop: {} for prop in props}
sub_dics = [val for key, val in overall_dic.items() if key != 'average']
for prop in props:
for key in sub_dics[0][prop].keys():
vals = [sub_dic[prop][key] for sub_dic in sub_dics]
mean = np.mean(vals).item()
std = np.std(vals).item()
overall_dic['average'][prop][key] = {"mean": mean, "std": std}
    save_path = os.path.join(cp_model_path, "test_metrics.json")
with open(save_path, "w") as f:
json.dump(overall_dic, f, indent=4, sort_keys=True)
fprint(f"Saved metric scores to {save_path}")
def main(model_folder_cp,
cp_folder,
test_path,
device,
metrics,
**kwargs):
"""
Get predictions for all models and evaluate with a set of metrics.
Args:
model_folder_cp (str): directory in which all the model folders
can be found
cp_folder (str): path to the chemprop folder on your computer
test_path (str): path to the file with the test SMILES and their properties
device (Union[str, int]): device to evaluate the model on
metrics (list[str]): metrics to apply
Returns:
None
"""
folders = os.listdir(model_folder_cp)
# go through each folder
for folder in folders:
cp_model_path = os.path.join(model_folder_cp,
folder)
# continue if it's a file not a folder
if not os.path.isdir(cp_model_path):
continue
check_paths, is_model = is_model_path(cp_model_path)
if not is_model:
continue
# make predictions
real, preds = predict(cp_folder=cp_folder,
test_path=test_path,
cp_model_path=cp_model_path,
device=device,
check_paths=check_paths)
# get and save metric scores
get_metrics(real, preds, metrics, cp_model_path)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--model_folder_cp", type=str,
help=("Folder in which you will train your "
"ChemProp model. Models with different "
"parameters will get their own folders, "
"each located in `model_folder_cp`."))
parser.add_argument("--cp_folder", type=str,
help=("Path to ChemProp folder."))
parser.add_argument("--test_path", type=str,
help=("Path to the CSV with test set SMILES "
"and their actual property values"))
parser.add_argument("--device", type=str,
help=("Device to use for model evaluation: "
"either the index of the GPU, "
"or 'cpu'. "))
parser.add_argument("--metrics", type=str, nargs="+",
default=None, choices=CHEMPROP_METRICS,
help=("Optional metrics with which you want to "
"evaluate predictions."))
parser.add_argument('--config_file', type=str,
help=("Path to JSON file with arguments. If given, "
"any arguments in the file override the command "
"line arguments."))
args = parse_args(parser)
main(**args.__dict__)
| [
"numpy.mean",
"os.listdir",
"nff.utils.bash_command",
"argparse.ArgumentParser",
"nff.utils.read_csv",
"numpy.std",
"os.path.join",
"os.path.isfile",
"os.path.isdir",
"nff.utils.apply_metric",
"json.load",
"nff.utils.fprint",
"nff.utils.parse_args",
"json.dump"
] | [((1901, 1938), 'os.path.join', 'os.path.join', (['cp_folder', '"""predict.py"""'], {}), "(cp_folder, 'predict.py')\n", (1913, 1938), False, 'import os\n'), ((1956, 2001), 'os.path.join', 'os.path.join', (['cp_model_path', 'f"""test_pred.csv"""'], {}), "(cp_model_path, f'test_pred.csv')\n", (1968, 2001), False, 'import os\n'), ((3095, 3114), 'nff.utils.read_csv', 'read_csv', (['test_path'], {}), '(test_path)\n', (3103, 3114), False, 'from nff.utils import bash_command, parse_args, read_csv, fprint, CHEMPROP_METRICS, apply_metric\n'), ((4713, 4762), 'os.path.join', 'os.path.join', (['cp_model_path', 'f"""test_metrics.json"""'], {}), "(cp_model_path, f'test_metrics.json')\n", (4725, 4762), False, 'import os\n'), ((4864, 4909), 'nff.utils.fprint', 'fprint', (['f"""Saved metric scores to {save_path}"""'], {}), "(f'Saved metric scores to {save_path}')\n", (4870, 4909), False, 'from nff.utils import bash_command, parse_args, read_csv, fprint, CHEMPROP_METRICS, apply_metric\n'), ((5519, 5546), 'os.listdir', 'os.listdir', (['model_folder_cp'], {}), '(model_folder_cp)\n', (5529, 5546), False, 'import os\n'), ((6352, 6377), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (6375, 6377), False, 'import argparse\n'), ((7752, 7770), 'nff.utils.parse_args', 'parse_args', (['parser'], {}), '(parser)\n', (7762, 7770), False, 'from nff.utils import bash_command, parse_args, read_csv, fprint, CHEMPROP_METRICS, apply_metric\n'), ((923, 976), 'os.path.join', 'os.path.join', (['cp_model_path', 'name', '"""model_0/model.pt"""'], {}), "(cp_model_path, name, 'model_0/model.pt')\n", (935, 976), False, 'import os\n'), ((2132, 2157), 'os.path.isfile', 'os.path.isfile', (['args_path'], {}), '(args_path)\n', (2146, 2157), False, 'import os\n'), ((2263, 2275), 'json.load', 'json.load', (['f'], {}), '(f)\n', (2272, 2275), False, 'import json\n'), ((2985, 3002), 'nff.utils.bash_command', 'bash_command', (['cmd'], {}), '(cmd)\n', (2997, 3002), False, 'from nff.utils 
import bash_command, parse_args, read_csv, fprint, CHEMPROP_METRICS, apply_metric\n'), ((3036, 3055), 'nff.utils.read_csv', 'read_csv', (['this_path'], {}), '(this_path)\n', (3044, 3055), False, 'from nff.utils import bash_command, parse_args, read_csv, fprint, CHEMPROP_METRICS, apply_metric\n'), ((4807, 4858), 'json.dump', 'json.dump', (['overall_dic', 'f'], {'indent': '(4)', 'sort_keys': '(True)'}), '(overall_dic, f, indent=4, sort_keys=True)\n', (4816, 4858), False, 'import json\n'), ((5628, 5665), 'os.path.join', 'os.path.join', (['model_folder_cp', 'folder'], {}), '(model_folder_cp, folder)\n', (5640, 5665), False, 'import os\n'), ((707, 732), 'os.listdir', 'os.listdir', (['cp_model_path'], {}), '(cp_model_path)\n', (717, 732), False, 'import os\n'), ((5766, 5794), 'os.path.isdir', 'os.path.isdir', (['cp_model_path'], {}), '(cp_model_path)\n', (5779, 5794), False, 'import os\n'), ((4077, 4111), 'nff.utils.apply_metric', 'apply_metric', (['metric', 'pred', 'actual'], {}), '(metric, pred, actual)\n', (4089, 4111), False, 'from nff.utils import bash_command, parse_args, read_csv, fprint, CHEMPROP_METRICS, apply_metric\n'), ((4562, 4575), 'numpy.mean', 'np.mean', (['vals'], {}), '(vals)\n', (4569, 4575), True, 'import numpy as np\n'), ((4601, 4613), 'numpy.std', 'np.std', (['vals'], {}), '(vals)\n', (4607, 4613), True, 'import numpy as np\n')] |
# coding: utf-8
# ## Imports and helper functions
from IPython.core.debugger import set_trace
import os
import sys
import inspect
import numpy as np
import h5py
import scipy.sparse.linalg as la
import scipy.sparse as sp
import scipy
import time
import pyflann
import re
import math
import itertools as it
from sklearn import metrics
def load_matlab_file(path_file, name_field, struct=False):
"""
load '.mat' files
inputs:
path_file, string containing the file path
        name_field, string containing the field name (default='shape'),
            or a (group, dataset) tuple of strings
warning:
'.mat' files should be saved in the '-v7.3' format
"""
out = None
db = h5py.File(path_file, "r")
if type(name_field) is tuple:
if name_field[1] not in db[name_field[0]]:
return None
ds = db[name_field[0]][name_field[1]]
else:
ds = db[name_field]
try:
if "ir" in ds.keys():
data = np.asarray(ds["data"])
ir = np.asarray(ds["ir"])
jc = np.asarray(ds["jc"])
out = sp.csc_matrix((data, ir, jc)).astype(np.float32)
if struct:
out = dict()
for c_k in ds.keys():
                # Workaround: shape_comp_25 may be absent from the file,
                # so skip it instead of raising
if c_k.startswith("shape_comp"):
try:
out[c_k] = np.asarray(ds[c_k])
                    except Exception:
continue
else:
out[c_k] = np.asarray(ds[c_k])
except AttributeError:
# Transpose in case is a dense matrix because of the row- vs column- major ordering between python and matlab
out = np.asarray(ds).astype(np.float32).T
db.close()
return out
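The sparse branch above rebuilds a MATLAB sparse matrix from its raw `(data, ir, jc)` arrays, where `ir` holds row indices and `jc` column pointers. A tiny self-contained illustration with made-up arrays (not from a real `.mat` file):

```python
# Reconstruct a sparse matrix from MATLAB-style (data, ir, jc) arrays,
# as done in load_matlab_file. jc is the column pointer: column j holds
# the entries data[jc[j]:jc[j+1]] at rows ir[jc[j]:jc[j+1]].
import numpy as np
import scipy.sparse as sp

data = np.array([1.0, 2.0, 3.0])  # nonzero values
ir = np.array([0, 2, 1])         # row index of each value
jc = np.array([0, 2, 3])         # column pointers -> 2 columns
m = sp.csc_matrix((data, ir, jc)).astype(np.float32)
print(m.toarray())
```

The shape is inferred as `(max(ir) + 1, len(jc) - 1)`; transposing is unnecessary in the sparse case because the index arrays already encode the MATLAB layout.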
# From a full shape in a full protein, extract a patch around a vertex.
# If patch_indices = True, then store the indices of all neighbors.
def extract_patch_and_coord(
vix, shape, coord, max_distance, max_vertices, patch_indices=False
):
# Member vertices are nonzero elements
    i, j = coord[int(vix), : coord.shape[1] // 2].nonzero()
# D = np.squeeze(np.asarray(coord[np.int(vix),j].todense()))
    D = np.squeeze(np.asarray(coord[int(vix), : coord.shape[1] // 2].todense()))
j = np.where((D < max_distance) & (D > 0))[0]
max_dist_tmp = max_distance
old_j = len(j)
while len(j) > max_vertices:
max_dist_tmp = max_dist_tmp * 0.95
j = np.where((D < max_dist_tmp) & (D > 0))[0]
# print('j = {} {}'.format(len(j), old_j))
D = D[j]
patch = {}
patch["X"] = shape["X"][0][j]
patch["Y"] = shape["Y"][0][j]
patch["Z"] = shape["Z"][0][j]
patch["charge"] = shape["charge"][0][j]
patch["hbond"] = shape["hbond"][0][j]
patch["normal"] = shape["normal"][:, j]
patch["shape_index"] = shape["shape_index"][0][j]
if "hphob" in shape:
patch["hphob"] = shape["hphob"][0][j]
patch["center"] = np.argmin(D)
j_theta = j + coord.shape[1] // 2
    theta = np.squeeze(np.asarray(coord[int(vix), j_theta].todense()))
coord = np.concatenate([D, theta], axis=0)
if patch_indices:
return patch, coord, j
else:
return patch, coord
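The loop above caps patch size by shrinking the geodesic cutoff 5% at a time until at most `max_vertices` neighbors remain. A minimal sketch of that radius-shrinking selection on synthetic distances:

```python
# Shrink the cutoff radius by 5% per iteration until at most
# max_vertices neighbors (0 < D < radius) remain, as in
# extract_patch_and_coord. Distances here are synthetic.
import numpy as np

def shrink_radius(D, max_distance, max_vertices):
    j = np.where((D < max_distance) & (D > 0))[0]
    while len(j) > max_vertices:
        max_distance *= 0.95
        j = np.where((D < max_distance) & (D > 0))[0]
    return j

D = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(shrink_radius(D, max_distance=5.0, max_vertices=2))
```

The `D > 0` condition excludes the center vertex itself, which is recovered separately via `np.argmin(D)` on the retained distances.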
# FOR DEBUGGING only.... too slow, use precomputed values instead.
from scipy.spatial import KDTree
def compute_shape_complementarity(shape1, coord1, shape2, coord2):
w = 0.5
radius = 8.0
D1 = coord1[: coord1.shape[0] // 2]
v1 = np.stack([shape1["X"], shape1["Y"], shape1["Z"]], 1)
v1 = v1[np.where(D1 < radius)]
n1 = shape1["normal"].T[np.where(D1 < radius)]
D2 = coord2[: coord2.shape[0] // 2]
v2 = np.stack([shape2["X"], shape2["Y"], shape2["Z"]], 1)
v2 = v2[np.where(D2 < radius)]
n2 = shape2["normal"].T[np.where(D2 < radius)]
# First v2 -> v1
kdt = KDTree(v1)
d, i = kdt.query(v2)
comp2 = [np.dot(n2[x], -n1[i[x]]) for x in range(len(n2))]
comp2 = np.multiply(comp2, np.exp(-w * np.square(d)))
comp2 = np.percentile(comp2, 50)
# Now v1 -> v2
kdt = KDTree(v2)
d, i = kdt.query(v1)
comp1 = [np.dot(n1[x], -n2[i[x]]) for x in range(len(n1))]
comp1 = np.multiply(comp1, np.exp(-w * np.square(d)))
comp1 = np.percentile(comp1, 50)
return np.mean([comp1, comp2])
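The score above measures how well the two surfaces' normals oppose each other, down-weighted by the gap between nearest points. A toy illustration with two flat patches facing each other, where every normal dot product is exactly 1:

```python
# Toy version of one direction of the complementarity score above:
# for each vertex of patch 2, find its nearest neighbor on patch 1,
# take dot(n2, -n1), and damp by exp(-w * d^2). Geometry is synthetic.
import numpy as np
from scipy.spatial import KDTree

v1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
n1 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])  # normals pointing up
v2 = v1 + np.array([0.0, 0.0, 1.0])                # patch 1 unit above
n2 = -n1                                           # normals pointing down
kdt = KDTree(v1)
d, i = kdt.query(v2)
comp = [np.dot(n2[x], -n1[i[x]]) for x in range(len(n2))]
w = 0.5
comp = np.multiply(comp, np.exp(-w * np.square(d)))
print(np.percentile(comp, 50))
```

With perfectly anti-parallel normals at unit distance, the median score is `exp(-0.5)`; the full function averages this median over both query directions.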
def memory():
import os
import psutil
pid = os.getpid()
py = psutil.Process(pid)
    memoryUse = py.memory_info()[0] / 2.0 ** 30  # resident set size in GiB
print("memory use:", memoryUse)
# ## Read input dataset and patch coords (and filter input if desired)
# if do_shape_comp_pairs is false, then use the random coords.
def read_data_from_matfile(
coord_file, shape_file, seed_pair, params, do_shape_comp_pairs, reshuffle=True
):
training_idx = []
val_idx = []
test_idx = []
non_sc_idx = []
# Ignore any shapes with more than X vertices.
max_shape_size = params["max_shape_size"]
list_desc = []
list_coords = []
list_shape_idx = []
list_names = []
positive_pairs_idx = set()
idx_positives = []
discarded_sc = 0.0
discarded_large = 0.0
ppi_accept_probability = params["ppi_accept_probability"]
all_patch_coord = {}
print(coord_file)
all_patch_coord["p1"] = load_matlab_file(coord_file, ("all_patch_coord", "p1"))
all_patch_coord["p2"] = load_matlab_file(coord_file, ("all_patch_coord", "p2"))
if all_patch_coord["p1"] is None:
return
if all_patch_coord["p2"] is None:
return
p_s = {}
p_s["p1"] = load_matlab_file(shape_file, "p1", True)
p_s["p2"] = load_matlab_file(shape_file, "p2", True)
    if params["sc_filt"] > 0:
        shape_comp_25_p1 = load_matlab_file(shape_file, ("p1", "shape_comp_25"))
        shape_comp_25_p2 = load_matlab_file(shape_file, ("p2", "shape_comp_25"))
    sc_pairs = load_matlab_file(shape_file, "sc_pairs", True)
    data = np.asarray(sc_pairs["data"])
    ir = np.asarray(sc_pairs["ir"])
    jc = np.asarray(sc_pairs["jc"])
    sc_pairs = sp.csc_matrix((data, ir, jc)).astype(np.float32).nonzero()
    # Go through a subset of all shape complementary pairs.
    num_accepted = 0.0
    order = np.arange(len(sc_pairs[0]))
    # Randomly reshuffle the dataset.
    if reshuffle:
        np.random.shuffle(order)
    pairs_subset = len(order) * ppi_accept_probability
    # Always accept at least one pair.
    pairs_subset = np.ceil(pairs_subset)
for pair_ix in order[0 : int(pairs_subset)]:
pix1 = sc_pairs[0][pair_ix]
pix2 = sc_pairs[1][pair_ix]
# Filter on SC if desired.
if params["sc_filt"] > 0:
sc_filter_val_1 = np.asarray(shape_comp_25_p1[pix1, :].todense())[0]
sc_filter_val_1 = np.percentile(sc_filter_val_1, 50)
sc_filter_val_2 = np.asarray(shape_comp_25_p2[pix2, :].todense())[0]
sc_filter_val_2 = np.percentile(sc_filter_val_2, 50)
if np.mean([sc_filter_val_1, sc_filter_val_2]) < params["sc_filt"]:
discarded_sc += 1
continue
# Extract the vertices within the threshold
s1, coord1 = extract_patch_and_coord(
pix1,
p_s["p1"],
all_patch_coord["p1"],
params["max_distance"],
params["max_shape_size"],
)
s2, coord2 = extract_patch_and_coord(
pix2,
p_s["p2"],
all_patch_coord["p2"],
params["max_distance"],
params["max_shape_size"],
)
if s1["X"].shape[0] > max_shape_size or s2["X"].shape[0] > max_shape_size:
discarded_large += 1
continue
num_accepted += 1
pair_name = "{}_{}_{}".format(seed_pair, pix1, pix2)
ids_p1_p2 = (len(list_shape_idx), len(list_shape_idx) + 1)
positive_pairs_idx.add(ids_p1_p2)
idx_positives.append(ids_p1_p2)
list_names.append(pair_name)
list_names.append(pair_name)
list_desc.append(s1)
list_desc.append(s2)
list_coords.append(coord1)
list_coords.append(coord2)
list_shape_idx.append(pair_name + "_p1")
list_shape_idx.append(pair_name + "_p2")
print("Num accepted this pair {}".format(num_accepted))
print("Number of pairs of shapes: {}".format(len(idx_positives)))
print("Discarded pairs for size (total): {}".format(discarded_large))
print("Discarded pairs for sc (total): {}".format(discarded_sc))
return list_desc, list_coords, list_shape_idx, list_names
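The pair subsampling above (shuffle, keep a `ppi_accept_probability` fraction, round up so at least one pair survives) can be isolated into a small standalone sketch (`sample_pairs` is an illustrative name):

```python
import numpy as np


def sample_pairs(n_pairs, accept_probability, seed=0):
    """Return a random subset of pair indices; always keeps at least one pair."""
    rng = np.random.default_rng(seed)
    order = np.arange(n_pairs)
    rng.shuffle(order)
    # np.ceil guarantees a non-empty subset even for tiny probabilities.
    subset_size = int(np.ceil(n_pairs * accept_probability))
    return order[:subset_size]


print(len(sample_pairs(1000, 0.01)))    # 10
print(len(sample_pairs(1000, 0.0001)))  # 1: rounded up, never empty
```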
# if do_shape_comp_pairs is false, then use the random coords.
# if label iface, return a list of points with X of the partner protein.
def read_data_from_matfile_full_protein(
coord_file,
shape_file,
seed_pair,
params,
protein_id,
label_iface=False,
label_sc=False,
):
# Ignore any shapes with more than X vertices.
print("Reading full protein")
max_shape_size = params["max_shape_size"]
list_desc = []
list_coords = []
list_shape_idx = []
list_names = []
discarded_large = 0.0
if protein_id == "p2":
other_pid = "p1"
else:
other_pid = "p2"
all_patch_coord = {}
print(coord_file)
all_patch_coord[protein_id] = load_matlab_file(
coord_file, ("all_patch_coord", protein_id)
)
if all_patch_coord[protein_id] is None:
return
p_s = {}
p_s[protein_id] = load_matlab_file(shape_file, protein_id, True)
if label_sc:
try:
shape_comp_25 = load_matlab_file(shape_file, (protein_id, "shape_comp_25"))
shape_comp_50 = load_matlab_file(shape_file, (protein_id, "shape_comp_50"))
sc_filter_val = np.array([shape_comp_25.todense(), shape_comp_50.todense()])
# Pad with zeros the last few lines, since they are lost when reading from a sparse matrix.
pad_val = len(p_s[protein_id]["X"][0]) - sc_filter_val.shape[1]
if pad_val > 0:
padding = np.zeros((2, pad_val, 10))
sc_filter_val = np.concatenate([sc_filter_val, padding], axis=1)
# Not found: set -1 to all fields
        except Exception:
sc_filter_val = -np.ones((2, len(p_s[protein_id]["X"][0]), 10))
[rows, cols] = all_patch_coord[protein_id].nonzero()
assert len(np.unique(rows)) == len(p_s[protein_id]["X"][0])
X = p_s[protein_id]["X"][0]
Y = p_s[protein_id]["Y"][0]
Z = p_s[protein_id]["Z"][0]
if label_iface:
try:
# Read the interface information from the ply file itself.
import pymesh
if protein_id == "p1":
plyfile = "_".join(seed_pair.split("_")[0:2])
elif protein_id == "p2":
plyfile = "{}_{}".format(
seed_pair.split("_")[0], seed_pair.split("_")[2]
)
plyfile = params["ply_chain_dir"] + plyfile + ".ply"
iface_mesh = pymesh.load_mesh(plyfile)
iface_labels = iface_mesh.get_attribute("vertex_iface")
        except Exception:
print("Unable to label interface as other protein not present. ")
iface_labels = np.zeros((len(p_s[protein_id]["X"][0]), 1))
list_indices = []
for pix in range(len(p_s[protein_id]["X"][0])):
shape, coord, neigh_indices = extract_patch_and_coord(
pix,
p_s[protein_id],
all_patch_coord[protein_id],
params["max_distance"],
params["max_shape_size"],
patch_indices=True,
)
if len(shape["X"]) > max_shape_size:
discarded_large += 1
continue
list_desc.append(shape)
list_coords.append(coord)
shape_name = "{}_{}_rand_{}".format(seed_pair, pix, protein_id)
list_names.append(shape_name)
list_shape_idx.append(shape_name)
list_indices.append(neigh_indices)
if pix % 500 == 0:
print("pix: {}\n".format(pix))
print("Number of pairs of shapes: {}".format(len(list_desc)))
print("Discarded pairs for size (total): {}".format(discarded_large))
if label_iface:
return (
list_desc,
list_coords,
list_shape_idx,
list_names,
X,
Y,
Z,
iface_labels,
list_indices,
)
elif label_sc:
return (
list_desc,
list_coords,
list_shape_idx,
list_names,
X,
Y,
Z,
sc_filter_val,
list_indices,
)
else:
return list_desc, list_coords, list_shape_idx, list_names, X, Y, Z
# Get the closest X vertices.
def compute_closest_vertices(coords, vix, X=200):
n = coords.shape[0]
neigh = coords[vix, :n].nonzero()[1]
dists = np.asarray(coords[vix, neigh].todense())[0]
neigh_dists = zip(dists, neigh)
neigh_dists = sorted(neigh_dists)
# Only take into account the closest X vertices.
neigh_dists = neigh_dists[:X]
    max_dist = neigh_dists[-1][0]  # distance of the farthest kept neighbor
return neigh_dists, max_dist
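Stripped of the sparse-matrix plumbing, `compute_closest_vertices` boils down to sorting (distance, index) pairs and truncating; the pattern in isolation (`closest_k` is an illustrative name):

```python
def closest_k(dists, neigh, k):
    """Pair distances with neighbor ids, sort by distance, keep the k closest."""
    pairs = sorted(zip(dists, neigh))
    pairs = pairs[:k]
    max_dist = pairs[-1][0]  # distance of the farthest kept neighbor
    return pairs, max_dist


pairs, max_dist = closest_k([3.0, 1.0, 2.0, 5.0], [10, 11, 12, 13], k=3)
print(pairs)     # [(1.0, 11), (2.0, 12), (3.0, 10)]
print(max_dist)  # 3.0
```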
def read_data_from_matfile_binding_pair(coord_file, shape_file, seed_pair, params, pid):
# Ignore any shapes with more than X vertices.
max_shape_size = params["max_shape_size"]
list_desc = []
list_coords = []
list_shape_idx = []
list_names = []
other_pid = "p1"
if pid == "p1":
other_pid = "p2"
# Read coordinates.
all_patch_coord = {}
all_patch_coord[pid] = load_matlab_file(coord_file, ("all_patch_coord", pid))
all_patch_coord[other_pid] = load_matlab_file(
coord_file, ("all_patch_coord", other_pid)
)
if all_patch_coord[pid] is None:
return
if all_patch_coord[other_pid] is None:
return
# Read shapes.
p_s = {}
p_s[pid] = load_matlab_file(shape_file, pid, True)
p_s[other_pid] = load_matlab_file(shape_file, other_pid, True)
# Compute all points in pid that are within 1.0A of a point in the other protein.
n_pid = range(len(p_s[pid]))
v1 = np.vstack([p_s[pid]["X"][0], p_s[pid]["Y"][0], p_s[pid]["Z"][0]]).T
v2 = np.vstack(
[p_s[other_pid]["X"][0], p_s[other_pid]["Y"][0], p_s[other_pid]["Z"][0]]
).T
flann = pyflann.FLANN()
closest_vertex_in_v2, dist = flann.nn(v2, v1)
dist = np.sqrt(dist)
iface1 = np.where(dist <= params["pos_interface_cutoff"])[0]
# Find the corresponding point in iface2.
iface2 = closest_vertex_in_v2[iface1]
# Now to iface1 add all negatives.
iface1_neg = np.where(dist > params["neg_interface_cutoff"])[0]
K = params["neg_surf_accept_probability"] * len(iface1_neg)
k = np.random.choice(len(iface1_neg), int(K))
iface1_neg = iface1_neg[k]
labels1 = np.concatenate([np.ones_like(iface1), np.zeros_like(iface1_neg)], axis=0)
iface1 = np.concatenate([iface1, iface1_neg], axis=0)
# Compute flann from iface2 to iface1
flann = pyflann.FLANN()
closest_vertex_in_v1, dist = flann.nn(v1, v2)
dist = np.sqrt(dist)
iface2_neg = np.where(dist > params["neg_interface_cutoff"])[0]
# Randomly sample iface2_neg
K = params["neg_surf_accept_probability"] * len(iface2_neg)
k = np.random.choice(len(iface2_neg), int(K))
iface2_neg = iface2_neg[k]
labels2 = np.concatenate([np.ones_like(iface2), np.zeros_like(iface2_neg)], axis=0)
iface2 = np.concatenate([iface2, iface2_neg], axis=0)
list_desc_binder = []
list_coord_binder = []
list_names_binder = []
list_vert_binder = []
for vix in iface1:
s, coord = extract_patch_and_coord(
vix,
p_s[pid],
all_patch_coord[pid],
params["max_distance"],
params["max_shape_size"],
)
name = "{}_{}_{}".format(seed_pair, pid, vix)
list_desc_binder.append(s)
list_coord_binder.append(coord)
list_names_binder.append(name)
list_vert_binder.append(v1[vix])
list_desc_pos = []
list_coord_pos = []
list_names_pos = []
list_vert_pos = []
for vix in iface2:
s, coord = extract_patch_and_coord(
vix,
p_s[other_pid],
all_patch_coord[other_pid],
params["max_distance"],
params["max_shape_size"],
)
name = "{}_{}_{}".format(seed_pair, other_pid, vix)
list_desc_pos.append(s)
list_coord_pos.append(coord)
list_names_pos.append(name)
list_vert_pos.append(v2[vix])
return (
list_desc_binder,
list_coord_binder,
list_names_binder,
list_vert_binder,
labels1,
list_desc_pos,
list_coord_pos,
list_names_pos,
list_vert_pos,
labels2,
)
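`pyflann.FLANN().nn` returns, for each query vertex, its nearest neighbor and squared distance; the positive/negative interface labeling above can be sketched with a brute-force NumPy nearest-neighbor search (an illustration of the labeling scheme, not the pyflann call; `label_interface` is a hypothetical name):

```python
import numpy as np


def label_interface(v1, v2, pos_cutoff, neg_cutoff):
    """Label v1 vertices: 1 within pos_cutoff of v2, 0 beyond neg_cutoff."""
    # Brute-force nearest neighbor: distance from each v1 vertex to its closest v2 vertex.
    d = np.linalg.norm(v1[:, None, :] - v2[None, :, :], axis=2)
    dist = d.min(axis=1)
    pos = np.where(dist <= pos_cutoff)[0]
    neg = np.where(dist > neg_cutoff)[0]
    labels = np.concatenate([np.ones_like(pos), np.zeros_like(neg)], axis=0)
    return np.concatenate([pos, neg], axis=0), labels


v1 = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
v2 = np.array([[0.5, 0.0, 0.0]])
idx, labels = label_interface(v1, v2, pos_cutoff=1.0, neg_cutoff=3.0)
print(idx, labels)  # [0 1] [1 0]
```

In the function above the negative set is additionally subsampled with `np.random.choice` before labeling; the sketch skips that step for clarity.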
# Select points that are farther than X from the other protein.
def read_data_from_matfile_negative_patches(
coord_file, shape_file, ppi_pair_id, params, pid, other_pid
):
all_patch_coord = {}
all_patch_coord[pid] = load_matlab_file(coord_file, ("all_patch_coord", pid))
if all_patch_coord[pid] is None:
return
# Read shapes.
p_s = {}
p_s[pid] = load_matlab_file(shape_file, pid, True)
p_s[other_pid] = load_matlab_file(shape_file, other_pid, True)
n = len(p_s[pid]["X"][0])
vert = np.vstack([p_s[pid]["X"][0], p_s[pid]["Y"][0], p_s[pid]["Z"][0]]).T
other_vert = np.vstack(
[p_s[other_pid]["X"][0], p_s[other_pid]["Y"][0], p_s[other_pid]["Z"][0]]
).T
    flann = pyflann.FLANN()
    closest_vertex_in_other, dist = flann.nn(other_vert, vert)
    dist = np.sqrt(dist)
    # Vertices farther than 1.5A from the other protein are negative candidates.
    far_idx = np.where(dist > 1.5)[0]
    # Assumption: seed the negative patch neighborhood at a random far vertex.
    rvix = np.random.choice(far_idx)
    neigh_dists, _ = compute_closest_vertices(all_patch_coord[pid], rvix, X=25)
neigh = [x[1] for x in neigh_dists]
list_desc_neg = []
list_coord_neg = []
list_names_neg = []
list_vert_neg = []
for vix in neigh:
s, coord = extract_patch_and_coord(
vix,
p_s[pid],
all_patch_coord[pid],
params["max_distance"],
params["max_shape_size"],
)
name = "{}_{}_{}".format(ppi_pair_id, pid, vix)
list_desc_neg.append(s)
list_coord_neg.append(coord)
list_names_neg.append(name)
        list_vert_neg.append(vert[vix])
return list_desc_neg, list_coord_neg, list_names_neg, list_vert_neg
| [
"numpy.sqrt",
"scipy.spatial.KDTree",
"psutil.Process",
"numpy.mean",
"numpy.where",
"numpy.asarray",
"numpy.stack",
"numpy.dot",
"numpy.vstack",
"numpy.concatenate",
"os.getpid",
"numpy.argmin",
"numpy.ceil",
"h5py.File",
"numpy.square",
"numpy.int",
"numpy.ones_like",
"numpy.uniq... | [((673, 698), 'h5py.File', 'h5py.File', (['path_file', '"""r"""'], {}), "(path_file, 'r')\n", (682, 698), False, 'import h5py\n'), ((2973, 2985), 'numpy.argmin', 'np.argmin', (['D'], {}), '(D)\n', (2982, 2985), True, 'import numpy as np\n'), ((3111, 3145), 'numpy.concatenate', 'np.concatenate', (['[D, theta]'], {'axis': '(0)'}), '([D, theta], axis=0)\n', (3125, 3145), True, 'import numpy as np\n'), ((3488, 3540), 'numpy.stack', 'np.stack', (["[shape1['X'], shape1['Y'], shape1['Z']]", '(1)'], {}), "([shape1['X'], shape1['Y'], shape1['Z']], 1)\n", (3496, 3540), True, 'import numpy as np\n'), ((3677, 3729), 'numpy.stack', 'np.stack', (["[shape2['X'], shape2['Y'], shape2['Z']]", '(1)'], {}), "([shape2['X'], shape2['Y'], shape2['Z']], 1)\n", (3685, 3729), True, 'import numpy as np\n'), ((3848, 3858), 'scipy.spatial.KDTree', 'KDTree', (['v1'], {}), '(v1)\n', (3854, 3858), False, 'from scipy.spatial import KDTree\n'), ((4017, 4041), 'numpy.percentile', 'np.percentile', (['comp2', '(50)'], {}), '(comp2, 50)\n', (4030, 4041), True, 'import numpy as np\n'), ((4072, 4082), 'scipy.spatial.KDTree', 'KDTree', (['v2'], {}), '(v2)\n', (4078, 4082), False, 'from scipy.spatial import KDTree\n'), ((4241, 4265), 'numpy.percentile', 'np.percentile', (['comp1', '(50)'], {}), '(comp1, 50)\n', (4254, 4265), True, 'import numpy as np\n'), ((4278, 4301), 'numpy.mean', 'np.mean', (['[comp1, comp2]'], {}), '([comp1, comp2])\n', (4285, 4301), True, 'import numpy as np\n'), ((4361, 4372), 'os.getpid', 'os.getpid', ([], {}), '()\n', (4370, 4372), False, 'import os\n'), ((4382, 4401), 'psutil.Process', 'psutil.Process', (['pid'], {}), '(pid)\n', (4396, 4401), False, 'import psutil\n'), ((14533, 14548), 'pyflann.FLANN', 'pyflann.FLANN', ([], {}), '()\n', (14546, 14548), False, 'import pyflann\n'), ((14610, 14623), 'numpy.sqrt', 'np.sqrt', (['dist'], {}), '(dist)\n', (14617, 14623), True, 'import numpy as np\n'), ((15133, 15177), 'numpy.concatenate', 'np.concatenate', (['[iface1, 
iface1_neg]'], {'axis': '(0)'}), '([iface1, iface1_neg], axis=0)\n', (15147, 15177), True, 'import numpy as np\n'), ((15233, 15248), 'pyflann.FLANN', 'pyflann.FLANN', ([], {}), '()\n', (15246, 15248), False, 'import pyflann\n'), ((15310, 15323), 'numpy.sqrt', 'np.sqrt', (['dist'], {}), '(dist)\n', (15317, 15323), True, 'import numpy as np\n'), ((15672, 15716), 'numpy.concatenate', 'np.concatenate', (['[iface2, iface2_neg]'], {'axis': '(0)'}), '([iface2, iface2_neg], axis=0)\n', (15686, 15716), True, 'import numpy as np\n'), ((17777, 17792), 'pyflann.FLANN', 'pyflann.FLANN', ([], {}), '()\n', (17790, 17792), False, 'import pyflann\n'), ((17854, 17867), 'numpy.sqrt', 'np.sqrt', (['dist'], {}), '(dist)\n', (17861, 17867), True, 'import numpy as np\n'), ((2292, 2330), 'numpy.where', 'np.where', (['((D < max_distance) & (D > 0))'], {}), '((D < max_distance) & (D > 0))\n', (2300, 2330), True, 'import numpy as np\n'), ((3553, 3574), 'numpy.where', 'np.where', (['(D1 < radius)'], {}), '(D1 < radius)\n', (3561, 3574), True, 'import numpy as np\n'), ((3604, 3625), 'numpy.where', 'np.where', (['(D1 < radius)'], {}), '(D1 < radius)\n', (3612, 3625), True, 'import numpy as np\n'), ((3742, 3763), 'numpy.where', 'np.where', (['(D2 < radius)'], {}), '(D2 < radius)\n', (3750, 3763), True, 'import numpy as np\n'), ((3793, 3814), 'numpy.where', 'np.where', (['(D2 < radius)'], {}), '(D2 < radius)\n', (3801, 3814), True, 'import numpy as np\n'), ((3897, 3921), 'numpy.dot', 'np.dot', (['n2[x]', '(-n1[i[x]])'], {}), '(n2[x], -n1[i[x]])\n', (3903, 3921), True, 'import numpy as np\n'), ((4121, 4145), 'numpy.dot', 'np.dot', (['n1[x]', '(-n2[i[x]])'], {}), '(n1[x], -n2[i[x]])\n', (4127, 4145), True, 'import numpy as np\n'), ((5941, 5969), 'numpy.asarray', 'np.asarray', (["sc_pairs['data']"], {}), "(sc_pairs['data'])\n", (5951, 5969), True, 'import numpy as np\n'), ((5983, 6009), 'numpy.asarray', 'np.asarray', (["sc_pairs['ir']"], {}), "(sc_pairs['ir'])\n", (5993, 6009), True, 'import numpy 
as np\n'), ((6023, 6049), 'numpy.asarray', 'np.asarray', (["sc_pairs['jc']"], {}), "(sc_pairs['jc'])\n", (6033, 6049), True, 'import numpy as np\n'), ((6489, 6510), 'numpy.ceil', 'np.ceil', (['pairs_subset'], {}), '(pairs_subset)\n', (6496, 6510), True, 'import numpy as np\n'), ((14343, 14408), 'numpy.vstack', 'np.vstack', (["[p_s[pid]['X'][0], p_s[pid]['Y'][0], p_s[pid]['Z'][0]]"], {}), "([p_s[pid]['X'][0], p_s[pid]['Y'][0], p_s[pid]['Z'][0]])\n", (14352, 14408), True, 'import numpy as np\n'), ((14420, 14508), 'numpy.vstack', 'np.vstack', (["[p_s[other_pid]['X'][0], p_s[other_pid]['Y'][0], p_s[other_pid]['Z'][0]]"], {}), "([p_s[other_pid]['X'][0], p_s[other_pid]['Y'][0], p_s[other_pid][\n 'Z'][0]])\n", (14429, 14508), True, 'import numpy as np\n'), ((14637, 14685), 'numpy.where', 'np.where', (["(dist <= params['pos_interface_cutoff'])"], {}), "(dist <= params['pos_interface_cutoff'])\n", (14645, 14685), True, 'import numpy as np\n'), ((14835, 14882), 'numpy.where', 'np.where', (["(dist > params['neg_interface_cutoff'])"], {}), "(dist > params['neg_interface_cutoff'])\n", (14843, 14882), True, 'import numpy as np\n'), ((15341, 15388), 'numpy.where', 'np.where', (["(dist > params['neg_interface_cutoff'])"], {}), "(dist > params['neg_interface_cutoff'])\n", (15349, 15388), True, 'import numpy as np\n'), ((17579, 17644), 'numpy.vstack', 'np.vstack', (["[p_s[pid]['X'][0], p_s[pid]['Y'][0], p_s[pid]['Z'][0]]"], {}), "([p_s[pid]['X'][0], p_s[pid]['Y'][0], p_s[pid]['Z'][0]])\n", (17588, 17644), True, 'import numpy as np\n'), ((17664, 17752), 'numpy.vstack', 'np.vstack', (["[p_s[other_pid]['X'][0], p_s[other_pid]['Y'][0], p_s[other_pid]['Z'][0]]"], {}), "([p_s[other_pid]['X'][0], p_s[other_pid]['Y'][0], p_s[other_pid][\n 'Z'][0]])\n", (17673, 17752), True, 'import numpy as np\n'), ((17881, 17901), 'numpy.where', 'np.where', (['(dist > 1.5)'], {}), '(dist > 1.5)\n', (17889, 17901), True, 'import numpy as np\n'), ((950, 972), 'numpy.asarray', 'np.asarray', (["ds['data']"], 
{}), "(ds['data'])\n", (960, 972), True, 'import numpy as np\n'), ((990, 1010), 'numpy.asarray', 'np.asarray', (["ds['ir']"], {}), "(ds['ir'])\n", (1000, 1010), True, 'import numpy as np\n'), ((1028, 1048), 'numpy.asarray', 'np.asarray', (["ds['jc']"], {}), "(ds['jc'])\n", (1038, 1048), True, 'import numpy as np\n'), ((2473, 2511), 'numpy.where', 'np.where', (['((D < max_dist_tmp) & (D > 0))'], {}), '((D < max_dist_tmp) & (D > 0))\n', (2481, 2511), True, 'import numpy as np\n'), ((6339, 6363), 'numpy.random.shuffle', 'np.random.shuffle', (['order'], {}), '(order)\n', (6356, 6363), True, 'import numpy as np\n'), ((10561, 10576), 'numpy.unique', 'np.unique', (['rows'], {}), '(rows)\n', (10570, 10576), True, 'import numpy as np\n'), ((11192, 11217), 'pymesh.load_mesh', 'pymesh.load_mesh', (['plyfile'], {}), '(plyfile)\n', (11208, 11217), False, 'import pymesh\n'), ((15062, 15082), 'numpy.ones_like', 'np.ones_like', (['iface1'], {}), '(iface1)\n', (15074, 15082), True, 'import numpy as np\n'), ((15084, 15109), 'numpy.zeros_like', 'np.zeros_like', (['iface1_neg'], {}), '(iface1_neg)\n', (15097, 15109), True, 'import numpy as np\n'), ((15601, 15621), 'numpy.ones_like', 'np.ones_like', (['iface2'], {}), '(iface2)\n', (15613, 15621), True, 'import numpy as np\n'), ((15623, 15648), 'numpy.zeros_like', 'np.zeros_like', (['iface2_neg'], {}), '(iface2_neg)\n', (15636, 15648), True, 'import numpy as np\n'), ((3990, 4002), 'numpy.square', 'np.square', (['d'], {}), '(d)\n', (3999, 4002), True, 'import numpy as np\n'), ((4214, 4226), 'numpy.square', 'np.square', (['d'], {}), '(d)\n', (4223, 4226), True, 'import numpy as np\n'), ((6841, 6875), 'numpy.percentile', 'np.percentile', (['sc_filter_val_1', '(50)'], {}), '(sc_filter_val_1, 50)\n', (6854, 6875), True, 'import numpy as np\n'), ((6995, 7029), 'numpy.percentile', 'np.percentile', (['sc_filter_val_2', '(50)'], {}), '(sc_filter_val_2, 50)\n', (7008, 7029), True, 'import numpy as np\n'), ((10246, 10272), 'numpy.zeros', 
'np.zeros', (['(2, pad_val, 10)'], {}), '((2, pad_val, 10))\n', (10254, 10272), True, 'import numpy as np\n'), ((10305, 10353), 'numpy.concatenate', 'np.concatenate', (['[sc_filter_val, padding]'], {'axis': '(1)'}), '([sc_filter_val, padding], axis=1)\n', (10319, 10353), True, 'import numpy as np\n'), ((1067, 1096), 'scipy.sparse.csc_matrix', 'sp.csc_matrix', (['(data, ir, jc)'], {}), '((data, ir, jc))\n', (1080, 1096), True, 'import scipy.sparse as sp\n'), ((1536, 1555), 'numpy.asarray', 'np.asarray', (['ds[c_k]'], {}), '(ds[c_k])\n', (1546, 1555), True, 'import numpy as np\n'), ((2088, 2099), 'numpy.int', 'np.int', (['vix'], {}), '(vix)\n', (2094, 2099), True, 'import numpy as np\n'), ((7050, 7093), 'numpy.mean', 'np.mean', (['[sc_filter_val_1, sc_filter_val_2]'], {}), '([sc_filter_val_1, sc_filter_val_2])\n', (7057, 7093), True, 'import numpy as np\n'), ((1402, 1421), 'numpy.asarray', 'np.asarray', (['ds[c_k]'], {}), '(ds[c_k])\n', (1412, 1421), True, 'import numpy as np\n'), ((1715, 1729), 'numpy.asarray', 'np.asarray', (['ds'], {}), '(ds)\n', (1725, 1729), True, 'import numpy as np\n'), ((6069, 6098), 'scipy.sparse.csc_matrix', 'sp.csc_matrix', (['(data, ir, jc)'], {}), '((data, ir, jc))\n', (6082, 6098), True, 'import scipy.sparse as sp\n'), ((2236, 2247), 'numpy.int', 'np.int', (['vix'], {}), '(vix)\n', (2242, 2247), True, 'import numpy as np\n'), ((3065, 3076), 'numpy.int', 'np.int', (['vix'], {}), '(vix)\n', (3071, 3076), True, 'import numpy as np\n')] |
import cv2
import numpy
def merge_rectangle_contours(rectangle_contours):
    # Drop any rectangle fully contained in an already-kept rectangle.
    if not rectangle_contours:
        return []
    merged_contours = [rectangle_contours[0]]
    for rec in rectangle_contours[1:]:
for i in range(len(merged_contours)):
x_min = rec[0][0]
y_min = rec[0][1]
x_max = rec[2][0]
y_max = rec[2][1]
merged_x_min = merged_contours[i][0][0]
merged_y_min = merged_contours[i][0][1]
merged_x_max = merged_contours[i][2][0]
merged_y_max = merged_contours[i][2][1]
if x_min >= merged_x_min and y_min >= merged_y_min and x_max <= merged_x_max and y_max <= merged_y_max:
break
else:
if i == len(merged_contours)-1:
merged_contours.append(rec)
return merged_contours
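The merge keeps only rectangles not fully contained in an already-accepted one; the containment test can be checked in isolation (the condition restated as a standalone helper, `contained`, so the snippet runs on its own):

```python
def contained(inner, outer):
    """True if rectangle `inner` lies entirely inside rectangle `outer`.

    Rectangles are [[x_min, y_min], [x_max, y_min], [x_max, y_max], [x_min, y_max]].
    """
    return (inner[0][0] >= outer[0][0] and inner[0][1] >= outer[0][1]
            and inner[2][0] <= outer[2][0] and inner[2][1] <= outer[2][1])


outer = [[0, 0], [10, 0], [10, 10], [0, 10]]
inner = [[2, 2], [8, 2], [8, 8], [2, 8]]
print(contained(inner, outer))  # True
print(contained(outer, inner))  # False
```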
def get_image_text(img, engine='cnocr'):
    # Placeholder: the OCR engine is not hooked up yet; the engine name is returned as-is.
    text = 'cnocr'
    return text
def contour_area_filter(binary, contours, thresh=1500):
    # Keep contours whose area exceeds thresh but stays below 20% of the image area.
    rectangle_contours = []
    h, w = binary.shape
    for contour in contours:
        if thresh < cv2.contourArea(contour) < 0.2 * h * w:
            rectangle_contours.append(contour)
    return rectangle_contours
def get_roi_image(img, rectangle_contour):
roi_image = img[rectangle_contour[0][1]:rectangle_contour[2][1],
rectangle_contour[0][0]:rectangle_contour[1][0]]
return roi_image
def get_pop_v(image):
"""
    Estimate whether a popup window is present from overall image brightness.
    :param image: image file name inside the capture/ directory
    :return: mean of the HSV value (V) channel
"""
img = cv2.imread('capture/'+image)
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(img_hsv)
return numpy.mean(v)
def get_rectangle_contours(binary):
    # OpenCV 3.x returns (image, contours, hierarchy); OpenCV 4.x returns (contours, hierarchy).
    _, contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
rectangle_contours = []
for counter in contours:
x, y, w, h = cv2.boundingRect(counter)
cnt = numpy.array([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
rectangle_contours.append(cnt)
rectangle_contours = sorted(rectangle_contours, key=cv2.contourArea, reverse=True)[:100]
rectangle_contours = contour_area_filter(binary, rectangle_contours)
rectangle_contours = merge_rectangle_contours(rectangle_contours)
return rectangle_contours
def get_center_pos(contour):
x = int((contour[0][0]+contour[1][0])/2)
y = int((contour[1][1]+contour[2][1])/2)
return [x, y]
def get_label_pos(contour):
center = get_center_pos(contour)
x = int((int((center[0]+contour[2][0])/2)+contour[2][0])/2)
y = int((int((center[1]+contour[2][1])/2)+contour[2][1])/2)
return [x, y]
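Worked through on a 100x100 square (both helpers restated so the snippet runs standalone): the center lands at the midpoint, and the label point lands three quarters of the way from the center toward the bottom-right corner, integer-truncated.

```python
def get_center_pos(contour):
    x = int((contour[0][0] + contour[1][0]) / 2)
    y = int((contour[1][1] + contour[2][1]) / 2)
    return [x, y]


def get_label_pos(contour):
    # Two successive midpoints toward contour[2]: center + 3/4 * (corner - center).
    center = get_center_pos(contour)
    x = int((int((center[0] + contour[2][0]) / 2) + contour[2][0]) / 2)
    y = int((int((center[1] + contour[2][1]) / 2) + contour[2][1]) / 2)
    return [x, y]


square = [[0, 0], [100, 0], [100, 100], [0, 100]]
print(get_center_pos(square))  # [50, 50]
print(get_label_pos(square))   # [87, 87]
```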
def draw_contours(img, contours, color="info"):
if color == "info":
cv2.drawContours(img, contours, -1, (255, 145, 30), 3)
| [
"numpy.mean",
"cv2.drawContours",
"cv2.contourArea",
"numpy.array",
"cv2.split",
"cv2.cvtColor",
"cv2.findContours",
"cv2.imread",
"cv2.boundingRect"
] | [((1518, 1548), 'cv2.imread', 'cv2.imread', (["('capture/' + image)"], {}), "('capture/' + image)\n", (1528, 1548), False, 'import cv2\n'), ((1561, 1597), 'cv2.cvtColor', 'cv2.cvtColor', (['img', 'cv2.COLOR_BGR2HSV'], {}), '(img, cv2.COLOR_BGR2HSV)\n', (1573, 1597), False, 'import cv2\n'), ((1612, 1630), 'cv2.split', 'cv2.split', (['img_hsv'], {}), '(img_hsv)\n', (1621, 1630), False, 'import cv2\n'), ((1642, 1655), 'numpy.mean', 'numpy.mean', (['v'], {}), '(v)\n', (1652, 1655), False, 'import numpy\n'), ((1715, 1779), 'cv2.findContours', 'cv2.findContours', (['binary', 'cv2.RETR_TREE', 'cv2.CHAIN_APPROX_SIMPLE'], {}), '(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\n', (1731, 1779), False, 'import cv2\n'), ((1858, 1883), 'cv2.boundingRect', 'cv2.boundingRect', (['counter'], {}), '(counter)\n', (1874, 1883), False, 'import cv2\n'), ((1898, 1959), 'numpy.array', 'numpy.array', (['[[x, y], [x + w, y], [x + w, y + h], [x, y + h]]'], {}), '([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])\n', (1909, 1959), False, 'import numpy\n'), ((2699, 2753), 'cv2.drawContours', 'cv2.drawContours', (['img', 'contours', '(-1)', '(255, 145, 30)', '(3)'], {}), '(img, contours, -1, (255, 145, 30), 3)\n', (2715, 2753), False, 'import cv2\n'), ((1050, 1074), 'cv2.contourArea', 'cv2.contourArea', (['contour'], {}), '(contour)\n', (1065, 1074), False, 'import cv2\n')] |