\chapter*{Overview}
% \pagenumbering{roman} \setcounter{page}{}
\fluidity\ is an open source, general purpose, multi-phase CFD code capable of numerically solving the Navier-Stokes and accompanying field equations on arbitrary unstructured finite element meshes in one, two and three dimensions. It uses a moving finite element/control volume method which allows arbitrary movement of the mesh in time-dependent problems. It offers a wide range of finite element/control volume element choices, including mixed formulations. \fluidity\ is coupled to a mesh optimisation library, allowing for dynamic mesh adaptivity, and is parallelised using MPI.
Chapter \ref{chap:gettingstarted} of this manual gives details on how prospective users can obtain and set up \fluidity\ for use on a personal computer or laptop. The fluid and accompanying field equations solved by \fluidity, and details of the numerical discretisations available, are discussed in chapters \ref{chap:model_equations} and \ref{chap:numerical_discretisation} respectively. When discretising fluid domains in order to perform numerical simulations, it is inevitable that at certain scales the dynamics will not be resolved. These sub-grid scale dynamics can, however, play an important role in the large-scale dynamics the user wishes to resolve. It is therefore necessary to parameterise these sub-grid scale dynamics, and details of the parameterisations available within \fluidity\ are given in chapter \ref{chap:parameterisations}. \fluidity\ also contains embedded models for the modelling of non-fluid processes. Currently, a simple biology model (capable of simulating plankton ecosystems) and a sediments model are available; these are detailed in chapter \ref{chap:embedded}.
As mentioned above, one of the key features of \fluidity\ is its ability to adaptively re-mesh based on various fields, so that resolution can be concentrated in regions where the user wishes to accurately resolve the dynamics. Details regarding the adaptive re-meshing and the manner in which \fluidity\ deals with meshes are given in chapters \ref{chap:meshes} and \ref{chap:Adaptivity}.
\fluidity\ has its own specifically designed options tree to make configuring simulations as painless as possible. This options tree can be viewed and edited using the Diamond GUI. Details on how to configure the options tree are given in chapter \ref{chap:configuration}. Output from simulations is in the VTK format, and details regarding viewing and manipulating output files are given in chapter \ref{chap:visualisation_and_diagnostics}. Finally, in order to introduce users to a range of common configurations and to the functionality available within \fluidity, chapter \ref{chap:examples} presents examples covering a range of fluid dynamics problems. For information regarding the style and scope of this manual, please refer to Appendix \ref{App:about}.
# Standard libraries
import os
import pathlib
import pickle
from datetime import datetime
# Scientific stack
import numpy as np
import numpy.random as rnd
import pandas as pd
import sklearn.metrics as skmetrics
# Chunked data
import dask
import zarr
# Enable multiprocessing support for Zarr
from numcodecs import blosc
blosc.use_threads = False
# Temporary fix for HDF5 multiprocessing error 11
# Source: https://github.com/keras-team/keras/issues/11101#issuecomment-459350086
os.environ['HDF5_USE_FILE_LOCKING'] = 'FALSE'
# Pretty progress bar
import tqdm
import keras_tqdm
# Tensorflow config
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
# config = tf.ConfigProto(intra_op_parallelism_threads=6, inter_op_parallelism_threads=6)
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = "0"
set_session(tf.Session(config=config))
# Keras
import keras
import keras.backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, ZeroPadding2D, GlobalAveragePooling2D
from keras import regularizers
# Custom modules
import dataset_generator as dgen
# Source: https://stackoverflow.com/a/48393723/2801287
class TrainValTensorBoard(keras.callbacks.TensorBoard):
def __init__(self, log_dir='./logs', **kwargs):
# Make the original `TensorBoard` log to a subdirectory 'training'
training_log_dir = str(pathlib.Path(log_dir) / 'training')
super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)
# Log the validation metrics to a separate subdirectory
self.val_log_dir = str(pathlib.Path(log_dir) / 'validation')
def set_model(self, model):
# Setup writer for validation metrics
self.val_writer = tf.summary.FileWriter(self.val_log_dir)
super(TrainValTensorBoard, self).set_model(model)
def on_epoch_end(self, epoch, logs=None):
# Pop the validation logs and handle them separately with
# `self.val_writer`. Also rename the keys so that they can
# be plotted on the same figure with the training metrics
logs = logs or {}
val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}
for name, value in val_logs.items():
summary = tf.Summary()
summary_value = summary.value.add()
summary_value.simple_value = value.item()
summary_value.tag = name
self.val_writer.add_summary(summary, epoch)
self.val_writer.flush()
# Pass the remaining logs to `TensorBoard.on_epoch_end`
logs = {k: v for k, v in logs.items() if not k.startswith('val_')}
super(TrainValTensorBoard, self).on_epoch_end(epoch, logs)
def on_train_end(self, logs=None):
super(TrainValTensorBoard, self).on_train_end(logs)
self.val_writer.close()
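# A minimal, self-contained sketch of how TrainValTensorBoard plugs into a
# Keras training loop; the toy data and layer sizes below are illustrative
# assumptions, not part of the original module.
def _demo_trainval_tensorboard():
    x = np.random.rand(64, 8)
    y = keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)
    model = Sequential([Dense(4, activation='relu', input_shape=(8,)),
                        Dense(2, activation='softmax')])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    # Training curves land in ./logs/training and validation curves in
    # ./logs/validation, so TensorBoard overlays them on the same figure.
    model.fit(x, y, validation_split=0.25, epochs=2, verbose=0,
              callbacks=[TrainValTensorBoard(log_dir='./logs')])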
class MetadataModelCheckPoint(keras.callbacks.ModelCheckpoint):
"""docstring for MetadataModelCheckPoint"""
def __init__(self, timenow, custom_dict=None, **kwargs):
super(MetadataModelCheckPoint, self).__init__(**kwargs)
self.custom_dict = custom_dict or {}
# Set metadata filename
self.meta_path = pathlib.Path(self.filepath).with_suffix('.partial.pkl')
if self.meta_path.exists():
# Get saved metadata
with open(str(self.meta_path), 'rb') as handle:
meta = pickle.load(handle)
# Set logs_keys
self.logs_keys = list(meta['logs'].keys())
else:
            # Metadata dictionary
meta = {'logs': None,
'custom_dict': self.custom_dict,
'timestamp': timenow,
'epochs': 0,
'elapsed_time': 0}
# Save metadata
with open(str(self.meta_path), 'wb') as output:
pickle.dump(meta, output)
def on_train_begin(self, logs=None):
self.last_epoch_time = datetime.now()
# def on_epoch_begin(self, epoch, logs=None):
# print(dir(self))
# print(self.params)
# print(type(self.validation_data))
def on_epoch_end(self, epoch, logs=None):
# Get saved metadata
with open(str(self.meta_path), 'rb') as handle:
meta = pickle.load(handle)
# Update metadata
if meta['logs'] is None:
meta['logs'] = {}
self.logs_keys = list(logs.keys())
for key in self.logs_keys:
meta['logs'][key] = [logs[key]]
else:
for key in self.logs_keys:
meta['logs'][key].append(logs[key])
meta['epochs'] += 1
meta['elapsed_time'] += (datetime.now() - self.last_epoch_time).total_seconds()
# Check for a possible bug
assert len(meta['logs'][key]) == meta['epochs']
# Save metadata
with open(str(self.meta_path), 'wb') as output:
pickle.dump(meta, output)
# Set the epoch timestamp
self.last_epoch_time = datetime.now()
# Pass the logs to `ModelCheckpoint.on_epoch_end`
super(MetadataModelCheckPoint, self).on_epoch_end(epoch, logs)
def get_partial_models(folder):
folder = pathlib.Path(folder)
meta_list = []
for item in folder.glob('*.h5'):
# Check if final metadata file exists (meaning the model finished training)
if not item.with_suffix('.pkl').exists():
# Get saved metadata
with open(str(item.with_suffix('.partial.pkl')), 'rb') as handle:
meta = pickle.load(handle)
meta_list.append([item, meta])
return meta_list
def finish_training(model_path, partial_meta):
# Load Model
model = keras.models.load_model(str(model_path))
# Set `train_model_generator()` input parameters
params = partial_meta['custom_dict']
params['initial_epoch'] = partial_meta['epochs']
params['timenow'] = partial_meta['timestamp']
params['model'] = model
train_model_generator(**params)
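# A hedged sketch of the resume workflow built from the two helpers above;
# the 'saved_models' folder matches the default used later in this module,
# everything else is illustrative.
def _demo_resume_training():
    for model_path, partial_meta in get_partial_models('saved_models'):
        print('Resuming', model_path, 'from epoch', partial_meta['epochs'])
        finish_training(model_path, partial_meta)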
def baseline_dcase2018(data_shape, normalization=True, dropout=True, dropout_dense=True):
# Input Layer
model_inputs = Input(shape=data_shape)
# Conv Layer #1
x = Conv2D(filters=32, kernel_size=(7,7), padding='same', activation='linear')(model_inputs)
if normalization:
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(5,5))(x)
if dropout:
x = Dropout(0.3)(x)
# Conv Layer #2
x = Conv2D(filters=32, kernel_size=(7,7), padding='same', activation='linear')(x)
if normalization:
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(4,100))(x)
if dropout:
x = Dropout(0.3)(x)
# MLP Layer
x = Flatten()(x)
x = Dense(100, activation='relu')(x)
if dropout:
x = Dropout(0.3)(x)
# Output Layer
predictions = Dense(10, activation='softmax', activity_regularizer=None)(x)
# Build model from the input and output objects
model = Model(inputs=model_inputs, outputs=predictions)
return model
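# Sketch: building and compiling the baseline for a 40-band, 500-frame log-mel
# input. The input shape and optimizer are assumptions for illustration, not
# values fixed by this module.
def _demo_baseline():
    model = baseline_dcase2018(data_shape=(40, 500, 1))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()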
def best_model(data_shape, normalization=True, dropout=True, dropout_dense=True,
dropout_rate=0.3, last_pool=16, dense_size=100, activation='softmax'):
# Input Layer
model_inputs = Input(shape=data_shape)
# Conv Layer #1
x = Conv2D(filters=32, kernel_size=(1,7), padding='same', activation='linear')(model_inputs)
if normalization:
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(1,5))(x)
if dropout:
x = Dropout(dropout_rate)(x)
# Conv Layer #2
x = Conv2D(filters=64, kernel_size=(1,7), padding='same', activation='linear')(x)
if normalization:
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(1,5))(x)
if dropout:
x = Dropout(dropout_rate)(x)
# Conv Layer #3
x = Conv2D(filters=32, kernel_size=(1,7), padding='same', activation='linear')(x)
if normalization:
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(1,last_pool))(x)
if dropout:
x = Dropout(dropout_rate)(x)
# MLP Layer
x = Flatten()(x)
x = Dense(dense_size, activation='relu')(x)
if dropout and dropout_dense:
x = Dropout(dropout_rate)(x)
# Output Layer
predictions = Dense(10, activation=activation, activity_regularizer=None, name='predictions')(x)
# Build model from the input and output objects
model = Model(inputs=model_inputs, outputs=predictions)
return model
def best_model_old(data_shape, normalization=True, dropout=True, dropout_dense=True,
dropout_rate=0.3, last_pool=16, dense_size=100):
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(1,7),
padding='same', input_shape=data_shape, activation='linear'))
if normalization:
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,5)))
if dropout:
model.add(Dropout(dropout_rate))
model.add(Conv2D(filters=64, kernel_size=(1,7), activation='linear', padding='same'))
if normalization:
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,5)))
if dropout:
model.add(Dropout(dropout_rate))
model.add(Conv2D(filters=32, kernel_size=(1,7), activation='linear', padding='same'))
if normalization:
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,last_pool)))
if dropout:
model.add(Dropout(dropout_rate))
model.add(Flatten())
model.add(Dense(dense_size, activation='relu'))
if dropout and dropout_dense:
model.add(Dropout(dropout_rate))
model.add(Dense(10, activation='softmax', name='predictions'))
return model
def train_model_generator(training_generator, validation_generator, model, model_name, use_multiprocessing=True,
workers=6, epochs=100, epoch_patience=20, monitor='val_acc', logdir='logs', model_dir='saved_models',
timenow=None, initial_epoch=0):
"""
Train a Keras model.
Parameters
----------
training_generator : dataset_generator.DataGenerator
Training set generator.
validation_generator : dataset_generator.DataGenerator
Validation (test) set generator.
model : Keras Sequential Model
Pre-compiled Keras model object.
model_name : string
Model filename.
use_multiprocessing : bool
Check to use multi processing for the generator.
Default value True
workers : int
Number of threads for the generator.
Default value 6
epochs : int
Maximum number of epoch.
Default value 100
epoch_patience : int
Early stopping: "number of epochs with no improvement after which training will be stopped".
Default value 20
    monitor : string
        Metric monitored by early stopping and checkpointing.
        Default value 'val_acc'
    logdir : string
        Tensorboard log directory.
        Default value 'logs'
    model_dir : string
        Trained model HDF5 directory.
        Default value 'saved_models'
    timenow : datetime.datetime
        Timestamp appended to the model name; pass the stored value when
        resuming an interrupted run. Defaults to the current time.
    initial_epoch : int
        Epoch at which to (re)start training.
        Default value 0
"""
# Save input parameters (used on MetadataModelCheckPoint)
params = dict(training_generator=training_generator,
validation_generator=validation_generator,
model_name=model_name,
use_multiprocessing=use_multiprocessing,
workers=workers,
epochs=epochs,
epoch_patience=epoch_patience,
monitor=monitor,
logdir=logdir,
model_dir=model_dir)
# Get timestamp and append to model name
if timenow is None:
timenow = datetime.now()
timestamp = timenow.isoformat().replace(':','_')
model_name += timestamp
model._name = model_name
else:
timestamp = timenow.isoformat().replace(':','_')
if timestamp not in model_name:
model_name += timestamp
model._name = model_name
# Get model path
pathlib.Path(model_dir).mkdir(parents=True, exist_ok=True)
model_path = pathlib.Path(model_dir) / pathlib.Path(model_name + '.h5')
# Setup callbacks
earlyStopping = keras.callbacks.EarlyStopping(monitor=monitor, patience=epoch_patience, mode='auto', restore_best_weights=True)
tfBoard = TrainValTensorBoard(log_dir=str(pathlib.Path(logdir) / model_name), write_graph=False)
# checkpoint = keras.callbacks.ModelCheckpoint(filepath=str(model_path), monitor=monitor, save_best_only=True)
checkpoint = MetadataModelCheckPoint(filepath=str(model_path), timenow=timenow,
monitor=monitor, save_best_only=True, custom_dict=params)
# Get a nice and pretty progress bar
    try:
        get_ipython  # only defined inside an IPython/Jupyter shell
        prettyProgressBar = keras_tqdm.TQDMNotebookCallback(leave_inner=False, leave_outer=True)
    except NameError:
        prettyProgressBar = keras_tqdm.TQDMCallback(leave_inner=False, leave_outer=True)
        print('Running outside Jupyter.')
# Setup the Callback list
callbackList = [prettyProgressBar, tfBoard, earlyStopping, checkpoint]
# Train model and output training history
print(f'Starting the model training [{model_name}].')
training_history = model.fit_generator(generator=training_generator,
epochs=epochs,
use_multiprocessing=use_multiprocessing,
workers=workers,
validation_data=validation_generator,
shuffle=True,
callbacks=callbackList,
initial_epoch=initial_epoch, verbose=0)
# Get history dictionary
history = training_history.history
# Load best model checkpoint
# model = keras.models.load_model(str(model_path.absolute()))
print(f'Saved the trained model (with the best {monitor}) at [{model_path}].')
# Evaluate trained model
report, confusion, scores = classification_report(model, validation_generator, verbose=0)
print('Validation loss:', scores[0])
print('Validation accuracy:', scores[1])
    # Open partial metadata
    partial_meta_path = model_path.with_suffix('.partial.pkl')
    with open(str(partial_meta_path), 'rb') as handle:
        partial_meta = pickle.load(handle)
    # Elapsed time (accumulated across resumed runs by MetadataModelCheckPoint)
    # elapsed_time = (datetime.now() - timenow).total_seconds()
    elapsed_time = partial_meta['elapsed_time']
print(f'Trained and evaluated in {elapsed_time} seconds.')
# Save normalization data
norm_data = training_generator.norm_data
    # Metadata dictionary
meta = {'training_history': history,
'timestamp': timenow.isoformat(),
'batch_size': training_generator.batch_size,
'epochs': len(history['loss']),
'elapsed_time': elapsed_time,
'epoch_patience': epoch_patience,
'report': report,
'confusion': confusion,
'scores': scores,
'norm_data': norm_data,
'feature_metadata': training_generator.metadata}
# Save metadata
with open(str(pathlib.Path(model_dir) / pathlib.Path(model_name + '.pkl')), 'wb') as output:
pickle.dump(meta, output)
return model, history, scores
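# Hedged sketch of a full training call. The DataGenerator constructor
# arguments are elided because dataset_generator's API is not shown in this
# file; the input shape is an assumption for illustration only.
def _demo_train():
    train_gen = dgen.DataGenerator(...)  # constructor args elided (unknown API)
    val_gen = dgen.DataGenerator(...)
    model = best_model(data_shape=(40, 500, 1))  # assumed log-mel input shape
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return train_model_generator(train_gen, val_gen, model, model_name='demo_')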
def classification_report(model, test_generator, verbose=0):
y_true = test_generator.labels[test_generator.dataset_indexes]
scene_labels = test_generator.metadata['scene_labels']
y_pred = np.argmax(model.predict_generator(test_generator), axis=-1).astype('uint8')
report = skmetrics.classification_report(y_true, y_pred, target_names=scene_labels)
confusion = skmetrics.confusion_matrix(y_true, y_pred)
scores = model.evaluate_generator(test_generator)
if verbose:
print(report)
return report, confusion, scores
def plot_confusion_matrix(confusion_matrix, class_names, figsize = (10,7), fontsize=14, cmap='YlGnBu'):
"""Plots a confusion matrix, as returned by sklearn.metrics.confusion_matrix, as a heatmap.
Modified from `shaypal5's gist`.
Arguments
---------
confusion_matrix: numpy.ndarray
The numpy.ndarray object returned from a call to sklearn.metrics.confusion_matrix.
Similarly constructed ndarrays can also be used.
class_names: list
An ordered list of class names, in the order they index the given confusion matrix.
    figsize: tuple
        A 2-long tuple, the first value determining the horizontal size of the output figure,
        the second determining the vertical size. Defaults to (10,7).
fontsize: int
Font size for axes labels. Defaults to 14.
cmap: str
Colormap for the heatmap (see `Colormaps in Matplotlib`). Defaults to YlGnBu.
Returns
-------
matplotlib.figure.Figure
The resulting confusion matrix figure
References
----------
.. _shaypal5's gist:
https://gist.github.com/shaypal5/94c53d765083101efc0240d776a23823
.. _Colormaps in Matplotlib:
https://matplotlib.org/tutorials/colors/colormaps.html
"""
import matplotlib.pyplot as plt
import seaborn as sns
df_cm = pd.DataFrame(
confusion_matrix, index=class_names, columns=class_names,
)
fig = plt.figure(figsize=figsize)
try:
heatmap = sns.heatmap(df_cm, annot=True, fmt='d', cmap=cmap)
except ValueError:
raise ValueError("Confusion matrix values must be integers.")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize)
plt.ylabel('True label', fontsize=fontsize, fontweight='bold')
plt.xlabel('Predicted label', fontsize=fontsize, fontweight='bold')
return fig
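# Hedged end-to-end sketch: evaluate a trained model on a validation generator
# and render its confusion matrix; `val_gen` is assumed to be a
# dataset_generator.DataGenerator like those used above.
def _demo_confusion(model, val_gen):
    report, confusion, scores = classification_report(model, val_gen, verbose=1)
    fig = plot_confusion_matrix(confusion, val_gen.metadata['scene_labels'])
    fig.savefig('confusion.png', bbox_inches='tight')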
def evaluate_model_generator(evaluation_generator, model_path):
    model_path = pathlib.Path(model_path)
    model = keras.models.load_model(str(model_path.absolute()))
    report, confusion, scores = classification_report(model, evaluation_generator, verbose=1)
    return report, confusion, scores
#include <boost/mpl/aux_/find_if_pred.hpp>
import pandas as pd
import numpy as np
from pathos.multiprocessing import _ProcessPool as Pool
import os
import sys
import copy
from sklearn.linear_model import BayesianRidge as BR
from sklearn.neighbors import KNeighborsRegressor as KNN
from sklearn.ensemble import AdaBoostRegressor as ABR
from sklearn.ensemble import RandomForestRegressor as RFR
from sklearn.mixture import GaussianMixture as GMM
from sklearn.cluster import Birch as BIRCH
from sklearn.cluster import KMeans as KM
from sklearn.svm import SVR
from sklearn.linear_model import SGDRegressor as SGD
from sklearn.neural_network import MLPRegressor as NN
proj_path = os.path.join(
"/", "home", "rjb255", "University", "ChemEng", "ResearchProject"
)
sys.path.insert(1, proj_path)
from purePython.modules.shared.custom import split, Models
from purePython.v4main import score
def main(dataset):
    # `dataset` is an (input_csv_path, output_csv_path) pair
    data = pd.read_csv(dataset[0])
    backup = copy.deepcopy(data)
# models = {
# "BayesianRidge": BR(),
# "KNN": KNN(),
# "RandomForrest": RFR(random_state=1),
# "SGD": SGD(loss="huber", random_state=1),
# "SVM": SVR(),
# "ABR": ABR(random_state=1),
# }
models = {
"BayesianRidge": BR(),
"KNN": KNN(),
# "RandomForrest": RFR(random_state=1),
# "SGD": SGD(loss="huber", random_state=1),
# "SVM": SVR(),
# "ABR": ABR(random_state=1),
"NN": NN(warm_start=True, random_state=1),
}
m = Models(list(models.values()))
data: pd.DataFrame = data.sample(frac=1, random_state=1)
    X_known, Y_known, X_unknown, Y_unknown, X_test, Y_test = split(data, 5, frac=1)
    # Score against the full pool (known + unknown) rather than the held-out split
    X_test, Y_test = pd.concat([X_known, X_unknown]), pd.concat([Y_known, Y_unknown])
m.fit(X_known, Y_known)
_m = copy.deepcopy(m)
s = [score(Y_test, model=_m, X_test=X_test)]
m.fit(X_test, Y_test)
_m = copy.deepcopy(m)
s.append(score(Y_test, model=_m, X_test=X_test))
backup["llim"] = s[0]
backup["ulim"] = s[1]
backup.to_csv(dataset[1])
print("Survived")
data_location = os.path.join(proj_path, "data", "big", "qsar_data")
data_names = os.listdir(data_location)
datasets = np.array([os.path.join(data_location, data) for data in data_names])
data_location2 = os.path.join(proj_path, "data", "big", "qsar_with_lims_2")
data2 = np.array([os.path.join(data_location2, data) for data in data_names])
if __name__ == "__main__":
    # Guard the pool so multiprocessing start-up is safe on spawn platforms
    with Pool() as p:
        dataframes = p.map(main, [(a, b) for a, b in zip(datasets, data2)])
lemma ProcUniv: "(UNIV :: proc set) = {p0, p1}"
  by (metis UNIV_eq_I insert_iff proc.exhaust)
using ArgParseLite
function main()
my_args = Arguments()
push!(my_args, Argument("arg1"))
push!(my_args, Argument("--opt1"))
push!(my_args, Argument("--opt2", "-o"))
push!(my_args, Argument("--flag1", action=:store_true))
println("Parsed args:")
for (arg,val) in ArgParseLite.parse_args(my_args)
println(" $arg => $val")
end
end
main()
from __future__ import absolute_import
from numbers import Number
from collections import OrderedDict
from collections.abc import Iterable
import dama as dm
import numpy as np
__license__ = '''Copyright 2019 Philipp Eller
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.'''
class Axis(object):
'''
Class to hold a single Axis of a Grid
which can have points and/or edges
'''
def __init__(
self, var=None, edges=None, points=None, nbins=None, log=None, label=None, **kwargs
):
if len(kwargs) == 1:
assert var is None and edges is None and points is None
var, val = list(kwargs.items())[0]
if isinstance(val, (list, np.ndarray)):
points = np.asanyarray(val)
elif isinstance(val, dm.Edges):
edges = val
elif isinstance(val, Number):
nbins = val
else:
raise ValueError()
self.var = var
self.label = label
self._edges = edges
self._points = points
self._nbins = nbins
self._log = log
@property
def has_points(self):
'''True if points are set'''
return self._points is not None
@property
def log(self):
if self._log is not None:
return self._log
if self.has_edges:
return self._edges.log
if self.has_points and np.all(self._points > 0):
d = np.diff(np.log(self._points))
return np.allclose(d[0], d)
return False
@log.setter
def log(self, log):
if self._edges is not None:
self._edges.log = log
self._log = log
@property
def has_edges(self):
'''True if edges are set'''
return self._edges is not None and self._edges._edges is not None
def __len__(self):
if self._points is not None:
return len(self._points)
elif self._edges is not None:
return len(self._edges)
elif self._nbins is not None:
return self._nbins
        # len() must return an int; fail loudly for an uninitialized Axis
        raise TypeError('Axis has no defined length (no points, edges or nbins set)')
def __str__(self):
strs = []
strs.append('(points) %s' % (self._points))
strs.append('(edges) %s' % (self._edges))
strs.append('(nbins) %s' % (self.nbins))
return '\n'.join(strs)
    def __repr__(self):
        strs = []
        strs.append('Axis("%s",' % self.var)
        strs.append('points = %s,' % (self._points.__repr__()))
        strs.append('edges = %s,' % (self._edges.__repr__()))
        strs.append('nbins = %s)' % (self.nbins))
        return '\n'.join(strs)
def __getitem__(self, idx):
idx = self.convert_slice(idx)
if idx is Ellipsis:
return self
new_obj = dm.Axis()
new_obj.var = self.var
if self.has_edges:
new_obj._edges = self._edges[idx]
if self._points is not None:
new_obj._points = self._points[idx]
new_obj._nbins = self._nbins
return new_obj
def convert_slice(self, idx):
'''Convert slice
idx : int, float, slice, Ellipsis
'''
if isinstance(idx, (int, np.integer, type(Ellipsis))):
return idx
if isinstance(idx, float):
return self.convert_index(idx)
if isinstance(idx, (list, np.ndarray)):
new_indices = []
for i in idx:
new_indices.append(self.convert_index(i))
return new_indices
if isinstance(idx, slice):
start = self.convert_index(idx.start)
stop = self.convert_index(idx.stop)
return slice(start, stop, idx.step)
raise IndexError(idx, type(idx))
def convert_index(self, idx):
if idx is None:
return None
if isinstance(idx, (int, np.integer)):
return idx
idx = self.compute_indices(idx)
if idx >= 0:
return idx
raise IndexError('Index out of range')
def compute_indices(self, sample):
        '''compute bin indices for a sample, return -1 if outside the bins
Parameters
----------
sample : array, float
Returns
-------
indices : array, int
'''
if not self.edges.consecutive:
raise NotImplementedError()
bins = self.edges.squeezed_edges
if np.isscalar(sample):
if sample == bins[-1]:
return len(self)
elif sample < bins[0] or sample > bins[-1]:
return -1
else:
return np.digitize(sample, bins) - 1
idx = np.digitize(sample, bins) - 1
# make inclusive right edge
idx[sample == bins[-1]] -= 1
# set overflow bin to idx -1
idx[idx == len(self)] = -1
return idx
@property
def initialized(self):
'''
True if either edges or points are not None
'''
return self.has_edges or self.has_points
def __eq__(self, other):
if not type(self) == type(other):
return False
equal = self.var == other.var
equal = equal and self._edges == other._edges
equal = equal and np.all(np.equal(self._points, other._points))
return equal and self._nbins == other._nbins
@property
def regular(self):
        '''True if spacing of edges and/or points is regular'''
regular = True
if self._points is not None:
if self.log:
d = np.diff(np.log(self._points))
else:
d = np.diff(self._points)
regular = regular and np.allclose(d[0], d)
if self.has_edges:
regular = regular and self._edges.regular
return regular
@property
def edges(self):
if self.has_edges:
return self._edges
if self.has_points:
return dm.Edges(points=self._points, log=self.log)
return None
@edges.setter
def edges(self, edges):
edges = dm.Edges(edges)
if self.initialized:
if not len(edges) == len(self):
raise IndexError('incompatible length of edges')
self._edges = edges
@property
def points(self):
if self.has_points:
return self._points
elif self.has_edges:
return self._edges.points
return None
@points.setter
def points(self, points):
if self.initialized:
if not len(points) == len(self):
raise IndexError('incompatible length of points')
self._points = points
@property
def squeezed_edges(self):
return self.edges.squeezed_edges
    @property
    def nbins(self):
        # derive the bin count from points/edges when either is available
        if self.has_points or self.has_edges:
            return len(self.points)
        return self._nbins
    @nbins.setter
    def nbins(self, nbins):
        if not self.initialized:
            self._nbins = nbins
        else:
            raise ValueError('Cannot set nbins since bins are already defined')
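# Minimal usage sketch (the sample values are illustrative, not from the
# original source); only point-based features are exercised so as not to make
# assumptions about the dm.Edges API.
if __name__ == '__main__':
    ax = Axis(var='energy', points=np.array([1.0, 2.0, 3.0, 4.0]))
    print(len(ax))      # 4: length comes from the points
    print(ax.regular)   # True: evenly spaced points
    print(ax.log)       # False: spacing is linear, not logarithmic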
# flake8: noqa
import time
from copy import deepcopy
import numpy as np
from numpy.testing import assert_almost_equal
from sklearn.metrics import log_loss, mean_squared_error
# for testing sigmoid
from scipy.special import expit
import torch
import torch.nn as nn
import torch.nn.functional as F
import tensorflow.keras.datasets.mnist as mnist
from numpy_ml.neural_nets.utils import (
calc_pad_dims_2D,
conv2D_naive,
conv2D,
pad2D,
pad1D,
)
from numpy_ml.utils.testing import (
random_one_hot_matrix,
random_stochastic_matrix,
random_tensor,
)
from .nn_torch_models import (
TFNCELoss,
WGAN_GP_tf,
torch_xe_grad,
torch_mse_grad,
TorchVAELoss,
TorchFCLayer,
TorchRNNCell,
TorchLSTMCell,
TorchAddLayer,
TorchWGANGPLoss,
TorchConv1DLayer,
TorchConv2DLayer,
TorchPool2DLayer,
TorchWavenetModule,
TorchMultiplyLayer,
TorchDeconv2DLayer,
TorchLayerNormLayer,
TorchBatchNormLayer,
TorchEmbeddingLayer,
TorchLinearActivation,
TorchSDPAttentionLayer,
TorchBidirectionalLSTM,
torch_gradient_generator,
TorchSkipConnectionConv,
TorchSkipConnectionIdentity,
TorchMultiHeadedAttentionModule,
)
#######################################################################
# Debug Formatter #
#######################################################################
def err_fmt(params, golds, ix, warn_str=""):
mine, label = params[ix]
err_msg = "-" * 25 + " DEBUG " + "-" * 25 + "\n"
prev_mine, prev_label = params[max(ix - 1, 0)]
err_msg += "Mine (prev) [{}]:\n{}\n\nTheirs (prev) [{}]:\n{}".format(
prev_label, prev_mine, prev_label, golds[prev_label]
)
err_msg += "\n\nMine [{}]:\n{}\n\nTheirs [{}]:\n{}".format(
label, mine, label, golds[label]
)
err_msg += warn_str
err_msg += "\n" + "-" * 23 + " END DEBUG " + "-" * 23
return err_msg
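# Tiny runnable illustration of err_fmt's debug output; the parameter values
# below are made up for demonstration.
def _demo_err_fmt():
    golds = {"W": np.eye(2), "b": np.zeros(2)}
    params = [(np.eye(2), "W"), (np.ones(2), "b")]
    # prints both the failing entry ("b") and the previous one ("W") for context
    print(err_fmt(params, golds, ix=1))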
#######################################################################
# Loss Functions #
#######################################################################
def test_squared_error(N=15):
from numpy_ml.neural_nets.losses import SquaredError
np.random.seed(12345)
N = np.inf if N is None else N
mine = SquaredError()
gold = (
lambda y, y_pred: mean_squared_error(y, y_pred)
* y_pred.shape[0]
* y_pred.shape[1]
* 0.5
)
# ensure we get 0 when the two arrays are equal
n_dims = np.random.randint(2, 100)
n_examples = np.random.randint(1, 1000)
y = y_pred = random_tensor((n_examples, n_dims))
assert_almost_equal(mine.loss(y, y_pred), gold(y, y_pred))
print("PASSED")
i = 1
while i < N:
n_dims = np.random.randint(2, 100)
n_examples = np.random.randint(1, 1000)
y = random_tensor((n_examples, n_dims))
y_pred = random_tensor((n_examples, n_dims))
assert_almost_equal(mine.loss(y, y_pred), gold(y, y_pred), decimal=5)
print("PASSED")
i += 1
def test_cross_entropy(N=15):
from numpy_ml.neural_nets.losses import CrossEntropy
np.random.seed(12345)
N = np.inf if N is None else N
mine = CrossEntropy()
gold = log_loss
# ensure we get 0 when the two arrays are equal
n_classes = np.random.randint(2, 100)
n_examples = np.random.randint(1, 1000)
y = y_pred = random_one_hot_matrix(n_examples, n_classes)
assert_almost_equal(mine.loss(y, y_pred), gold(y, y_pred))
print("PASSED")
# test on random inputs
i = 1
while i < N:
n_classes = np.random.randint(2, 100)
n_examples = np.random.randint(1, 1000)
y = random_one_hot_matrix(n_examples, n_classes)
y_pred = random_stochastic_matrix(n_examples, n_classes)
assert_almost_equal(mine.loss(y, y_pred), gold(y, y_pred, normalize=False))
print("PASSED")
i += 1
def test_VAE_loss(N=15):
from numpy_ml.neural_nets.losses import VAELoss
np.random.seed(12345)
N = np.inf if N is None else N
eps = np.finfo(float).eps
i = 1
while i < N:
n_ex = np.random.randint(1, 10)
t_dim = np.random.randint(2, 10)
t_mean = random_tensor([n_ex, t_dim], standardize=True)
t_log_var = np.log(np.abs(random_tensor([n_ex, t_dim], standardize=True) + eps))
im_cols, im_rows = np.random.randint(2, 40), np.random.randint(2, 40)
X = np.random.rand(n_ex, im_rows * im_cols)
X_recon = np.random.rand(n_ex, im_rows * im_cols)
mine = VAELoss()
mine_loss = mine(X, X_recon, t_mean, t_log_var)
dX_recon, dLogVar, dMean = mine.grad(X, X_recon, t_mean, t_log_var)
golds = TorchVAELoss().extract_grads(X, X_recon, t_mean, t_log_var)
params = [
(mine_loss, "loss"),
(dX_recon, "dX_recon"),
(dLogVar, "dt_log_var"),
(dMean, "dt_mean"),
]
print("\nTrial {}".format(i))
for ix, (mine, label) in enumerate(params):
np.testing.assert_allclose(
mine,
golds[label],
err_msg=err_fmt(params, golds, ix),
rtol=0.1,
atol=1e-2,
)
print("\tPASSED {}".format(label))
i += 1
def test_WGAN_GP_loss(N=5):
from numpy_ml.neural_nets.losses import WGAN_GPLoss
np.random.seed(12345)
N = np.inf if N is None else N
i = 1
while i < N:
lambda_ = np.random.randint(0, 10)
n_ex = np.random.randint(1, 10)
n_feats = np.random.randint(2, 10)
Y_real = random_tensor([n_ex], standardize=True)
Y_fake = random_tensor([n_ex], standardize=True)
gradInterp = random_tensor([n_ex, n_feats], standardize=True)
mine = WGAN_GPLoss(lambda_=lambda_)
C_loss = mine(Y_fake, "C", Y_real, gradInterp)
G_loss = mine(Y_fake, "G")
C_dY_fake, dY_real, dGradInterp = mine.grad(Y_fake, "C", Y_real, gradInterp)
G_dY_fake = mine.grad(Y_fake, "G")
golds = TorchWGANGPLoss(lambda_).extract_grads(Y_real, Y_fake, gradInterp)
if np.isnan(golds["C_dGradInterp"]).any():
continue
params = [
(Y_real, "Y_real"),
(Y_fake, "Y_fake"),
(gradInterp, "gradInterp"),
(C_loss, "C_loss"),
(G_loss, "G_loss"),
(-dY_real, "C_dY_real"),
(-C_dY_fake, "C_dY_fake"),
(dGradInterp, "C_dGradInterp"),
(G_dY_fake, "G_dY_fake"),
]
print("\nTrial {}".format(i))
for ix, (mine, label) in enumerate(params):
np.testing.assert_allclose(
mine,
golds[label],
err_msg=err_fmt(params, golds, ix),
rtol=0.1,
atol=1e-2,
)
print("\tPASSED {}".format(label))
i += 1
def test_NCELoss(N=1):
from numpy_ml.neural_nets.losses import NCELoss
from numpy_ml.utils.data_structures import DiscreteSampler
np.random.seed(12345)
N = np.inf if N is None else N
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
n_c = np.random.randint(1, 10)
n_out = np.random.randint(1, 300)
vocab_size = np.random.randint(200, 1000)
num_negative_samples = np.random.randint(1, 10)
embeddings = random_tensor((n_ex, n_c, n_out), standardize=True)
target = np.random.randint(0, vocab_size, (n_ex, 1))
probs = np.random.rand(vocab_size)
probs /= probs.sum()
D = DiscreteSampler(probs, log=False, with_replacement=False)
NCE = NCELoss(vocab_size, D, num_negative_samples)
my_loss, _ = NCE(embeddings, target.flatten())
my_dLdX = NCE.grad(update_params=False)
my_dLdW = NCE.gradients["W"]
my_dLdb = NCE.gradients["b"]
NCE.gradients["W"] = np.zeros_like(NCE.parameters["W"])
NCE.gradients["b"] = np.zeros_like(NCE.parameters["b"])
MY_final_loss, TF_final_loss = 0, 0
MY_dLdX, TF_dLdX = np.zeros_like(embeddings), np.zeros_like(embeddings)
TF_dLdW, TF_dLdb = (
np.zeros_like(NCE.parameters["W"]),
np.zeros_like(NCE.parameters["b"]),
)
# XXX: instead of calculating the tf NCE on the entire batch, we
# calculate it per-example and then sum. this is really lame and should
# be changed to operate on batches.
nv = NCE.derived_variables["noise_samples"][0]
for ix, emb in enumerate(embeddings):
sv = (nv[0], np.array([nv[1][0, ix]]), nv[2])
NCE.X = []
for k, v in NCE.derived_variables.items():
NCE.derived_variables[k] = []
for k, v in NCE.gradients.items():
NCE.gradients[k] = np.zeros_like(v)
my, _ = NCE(emb[None, :, :], target[ix], neg_samples=sv[0])
NCE.derived_variables["noise_samples"] = [sv]
dldx = NCE.grad(update_params=False)
NCE.derived_variables["noise_samples"] = sv
MY_final_loss += my
MY_dLdX[ix, ...] += np.squeeze(dldx, axis=0)
TF_dict = TFNCELoss(emb, np.array([target[ix]]), NCE)
TF_loss = TF_dict["final_loss"]
TF_final_loss += TF_loss
TF_dLdX[ix, ...] += TF_dict["dLdX"]
TF_dLdW[TF_dict["dLdW"].indices, :] += TF_dict["dLdW"].values
TF_dLdb[:, TF_dict["dLdb"].indices] += TF_dict["dLdb"].values
tf_dw = np.zeros_like(NCE.gradients["W"])
tf_dw[TF_dict["dLdW"].indices, :] += TF_dict["dLdW"].values
tf_db = np.zeros_like(NCE.gradients["b"])
tf_db[:, TF_dict["dLdb"].indices] += TF_dict["dLdb"].values
print("\nTrial {}".format(i))
np.testing.assert_almost_equal(my_loss, TF_final_loss, decimal=3)
print("PASSED: final loss")
maps = [
("dLdW", my_dLdW, TF_dLdW),
("dLdb", my_dLdb, TF_dLdb),
("dLdX", my_dLdX, TF_dLdX),
]
for (ll, k1, k2) in maps:
np.testing.assert_almost_equal(k1, k2, decimal=2, err_msg=ll)
print("PASSED: {}".format(ll))
i += 1
#######################################################################
# Loss Function Gradients #
#######################################################################
def test_squared_error_grad(N=15):
from numpy_ml.neural_nets.losses import SquaredError
from numpy_ml.neural_nets.activations import Tanh
np.random.seed(12345)
N = np.inf if N is None else N
mine = SquaredError()
gold = torch_mse_grad
act = Tanh()
i = 1
while i < N:
n_dims = np.random.randint(2, 100)
n_examples = np.random.randint(1, 1000)
y = random_tensor((n_examples, n_dims))
# raw inputs
z = random_tensor((n_examples, n_dims))
y_pred = act.fn(z)
assert_almost_equal(
mine.grad(y, y_pred, z, act), 0.5 * gold(y, z, torch.tanh), decimal=4
)
print("PASSED")
i += 1
def test_cross_entropy_grad(N=15):
from numpy_ml.neural_nets.losses import CrossEntropy
from numpy_ml.neural_nets.layers import Softmax
np.random.seed(12345)
N = np.inf if N is None else N
mine = CrossEntropy()
gold = torch_xe_grad
sm = Softmax()
i = 1
while i < N:
n_classes = np.random.randint(2, 100)
n_examples = np.random.randint(1, 1000)
y = random_one_hot_matrix(n_examples, n_classes)
# the cross_entropy_gradient returns the gradient wrt. z (NOT softmax(z))
z = random_tensor((n_examples, n_classes))
y_pred = sm.forward(z)
assert_almost_equal(mine.grad(y, y_pred), gold(y, z), decimal=5)
print("PASSED")
i += 1
#######################################################################
# Activations #
#######################################################################
def test_sigmoid_activation(N=15):
from numpy_ml.neural_nets.activations import Sigmoid
np.random.seed(12345)
N = np.inf if N is None else N
mine = Sigmoid()
gold = expit
i = 0
while i < N:
n_dims = np.random.randint(1, 100)
z = random_tensor((1, n_dims))
assert_almost_equal(mine.fn(z), gold(z))
print("PASSED")
i += 1
def test_elu_activation(N=15):
from numpy_ml.neural_nets.activations import ELU
np.random.seed(12345)
N = np.inf if N is None else N
i = 0
while i < N:
n_dims = np.random.randint(1, 10)
z = random_tensor((1, n_dims))
alpha = np.random.uniform(0, 10)
mine = ELU(alpha)
gold = lambda z, a: F.elu(torch.from_numpy(z), alpha).numpy()
assert_almost_equal(mine.fn(z), gold(z, alpha))
print("PASSED")
i += 1
def test_softmax_activation(N=15):
from numpy_ml.neural_nets.layers import Softmax
np.random.seed(12345)
N = np.inf if N is None else N
mine = Softmax()
gold = lambda z: F.softmax(torch.FloatTensor(z), dim=1).numpy()
i = 0
while i < N:
n_dims = np.random.randint(1, 100)
z = random_stochastic_matrix(1, n_dims)
assert_almost_equal(mine.forward(z), gold(z))
print("PASSED")
i += 1
def test_relu_activation(N=15):
from numpy_ml.neural_nets.activations import ReLU
np.random.seed(12345)
N = np.inf if N is None else N
mine = ReLU()
gold = lambda z: F.relu(torch.FloatTensor(z)).numpy()
i = 0
while i < N:
n_dims = np.random.randint(1, 100)
z = random_stochastic_matrix(1, n_dims)
assert_almost_equal(mine.fn(z), gold(z))
print("PASSED")
i += 1
def test_softplus_activation(N=15):
from numpy_ml.neural_nets.activations import SoftPlus
np.random.seed(12345)
N = np.inf if N is None else N
mine = SoftPlus()
gold = lambda z: F.softplus(torch.FloatTensor(z)).numpy()
i = 0
while i < N:
n_dims = np.random.randint(1, 100)
z = random_stochastic_matrix(1, n_dims)
assert_almost_equal(mine.fn(z), gold(z))
print("PASSED")
i += 1
#######################################################################
# Activation Gradients #
#######################################################################
def test_sigmoid_grad(N=15):
from numpy_ml.neural_nets.activations import Sigmoid
np.random.seed(12345)
N = np.inf if N is None else N
mine = Sigmoid()
gold = torch_gradient_generator(torch.sigmoid)
i = 0
while i < N:
n_ex = np.random.randint(1, 100)
n_dims = np.random.randint(1, 100)
z = random_tensor((n_ex, n_dims))
assert_almost_equal(mine.grad(z), gold(z))
print("PASSED")
i += 1
def test_elu_grad(N=15):
from numpy_ml.neural_nets.activations import ELU
np.random.seed(12345)
N = np.inf if N is None else N
i = 0
while i < N:
n_ex = np.random.randint(1, 10)
n_dims = np.random.randint(1, 10)
alpha = np.random.uniform(0, 10)
z = random_tensor((n_ex, n_dims))
mine = ELU(alpha)
gold = torch_gradient_generator(F.elu, alpha=alpha)
assert_almost_equal(mine.grad(z), gold(z), decimal=5)
print("PASSED")
i += 1
def test_tanh_grad(N=15):
from numpy_ml.neural_nets.activations import Tanh
np.random.seed(12345)
N = np.inf if N is None else N
mine = Tanh()
gold = torch_gradient_generator(torch.tanh)
i = 0
while i < N:
n_ex = np.random.randint(1, 100)
n_dims = np.random.randint(1, 100)
z = random_tensor((n_ex, n_dims))
assert_almost_equal(mine.grad(z), gold(z))
print("PASSED")
i += 1
def test_relu_grad(N=15):
from numpy_ml.neural_nets.activations import ReLU
np.random.seed(12345)
N = np.inf if N is None else N
mine = ReLU()
gold = torch_gradient_generator(F.relu)
i = 0
while i < N:
n_ex = np.random.randint(1, 100)
n_dims = np.random.randint(1, 100)
z = random_tensor((n_ex, n_dims))
assert_almost_equal(mine.grad(z), gold(z))
print("PASSED")
i += 1
def test_softmax_grad(N=15):
from numpy_ml.neural_nets.layers import Softmax
from functools import partial
np.random.seed(12345)
N = np.inf if N is None else N
p_soft = partial(F.softmax, dim=1)
gold = torch_gradient_generator(p_soft)
i = 0
while i < N:
mine = Softmax()
n_ex = np.random.randint(1, 3)
n_dims = np.random.randint(1, 50)
z = random_tensor((n_ex, n_dims), standardize=True)
out = mine.forward(z)
assert_almost_equal(
gold(z),
mine.backward(np.ones_like(out)),
err_msg="Theirs:\n{}\n\nMine:\n{}\n".format(
gold(z), mine.backward(np.ones_like(out))
),
decimal=3,
)
print("PASSED")
i += 1
def test_softplus_grad(N=15):
from numpy_ml.neural_nets.activations import SoftPlus
np.random.seed(12345)
N = np.inf if N is None else N
mine = SoftPlus()
gold = torch_gradient_generator(F.softplus)
i = 0
while i < N:
n_ex = np.random.randint(1, 100)
n_dims = np.random.randint(1, 100)
z = random_tensor((n_ex, n_dims), standardize=True)
assert_almost_equal(mine.grad(z), gold(z))
print("PASSED")
i += 1
#######################################################################
# Layers #
#######################################################################
def test_FullyConnected(N=15):
from numpy_ml.neural_nets.layers import FullyConnected
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
np.random.seed(12345)
N = np.inf if N is None else N
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 100)
n_in = np.random.randint(1, 100)
n_out = np.random.randint(1, 100)
X = random_tensor((n_ex, n_in), standardize=True)
# randomly select an activation function
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
# initialize FC layer
L1 = FullyConnected(n_out=n_out, act_fn=act_fn)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchFCLayer(n_in, n_out, torch_fn, L1.parameters)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(y_pred, "y"),
(L1.parameters["W"].T, "W"),
(L1.parameters["b"], "b"),
(dLdy, "dLdy"),
(L1.gradients["W"].T, "dLdW"),
(L1.gradients["b"], "dLdB"),
(dLdX, "dLdX"),
]
print("\nTrial {}\nact_fn={}".format(i, act_fn_name))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=3
)
print("\tPASSED {}".format(label))
i += 1
def test_Embedding(N=15):
from numpy_ml.neural_nets.layers import Embedding
np.random.seed(12345)
N = np.inf if N is None else N
i = 1
while i < N + 1:
vocab_size = np.random.randint(1, 2000)
n_ex = np.random.randint(1, 100)
n_in = np.random.randint(1, 100)
emb_dim = np.random.randint(1, 100)
X = np.random.randint(0, vocab_size, (n_ex, n_in))
# initialize Embedding layer
L1 = Embedding(n_out=emb_dim, vocab_size=vocab_size)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
# dLdX = L1.backward(dLdy)
L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchEmbeddingLayer(vocab_size, emb_dim, L1.parameters)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(y_pred, "y"),
(L1.parameters["W"], "W"),
(dLdy, "dLdy"),
(L1.gradients["W"], "dLdW"),
# (dLdX, "dLdX"),
]
print("\nTrial {}".format(i))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=3
)
print("\tPASSED {}".format(label))
i += 1
def test_BatchNorm1D(N=15):
from numpy_ml.neural_nets.layers import BatchNorm1D
np.random.seed(12345)
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(2, 1000)
n_in = np.random.randint(1, 1000)
X = random_tensor((n_ex, n_in), standardize=True)
# initialize BatchNorm1D layer
L1 = BatchNorm1D()
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchBatchNormLayer(
n_in, L1.parameters, "1D", epsilon=L1.epsilon, momentum=L1.momentum
)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(y_pred, "y"),
(L1.parameters["scaler"].T, "scaler"),
(L1.parameters["intercept"], "intercept"),
(L1.parameters["running_mean"], "running_mean"),
# (L1.parameters["running_var"], "running_var"),
(L1.gradients["scaler"], "dLdScaler"),
(L1.gradients["intercept"], "dLdIntercept"),
(dLdX, "dLdX"),
]
print("Trial {}".format(i))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=1
)
print("\tPASSED {}".format(label))
i += 1
def test_LayerNorm1D(N=15):
from numpy_ml.neural_nets.layers import LayerNorm1D
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(2, 1000)
n_in = np.random.randint(1, 1000)
X = random_tensor((n_ex, n_in), standardize=True)
# initialize BatchNorm1D layer
L1 = LayerNorm1D()
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchLayerNormLayer(n_in, L1.parameters, "1D", epsilon=L1.epsilon)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(y_pred, "y"),
(L1.parameters["scaler"].T, "scaler"),
(L1.parameters["intercept"], "intercept"),
(L1.gradients["scaler"], "dLdScaler"),
(L1.gradients["intercept"], "dLdIntercept"),
(dLdX, "dLdX"),
]
print("Trial {}".format(i))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=3
)
print("\tPASSED {}".format(label))
i += 1
def test_LayerNorm2D(N=15):
from numpy_ml.neural_nets.layers import LayerNorm2D
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(2, 10)
in_rows = np.random.randint(1, 10)
in_cols = np.random.randint(1, 10)
n_in = np.random.randint(1, 3)
# initialize LayerNorm2D layer
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
L1 = LayerNorm2D()
# forward prop
y_pred = L1.forward(X)
# standard sum loss
dLdy = np.ones_like(X)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchLayerNormLayer(
[n_in, in_rows, in_cols], L1.parameters, mode="2D", epsilon=L1.epsilon
)
golds = gold_mod.extract_grads(X, Y_true=None)
params = [
(L1.X[0], "X"),
(L1.hyperparameters["epsilon"], "epsilon"),
(L1.parameters["scaler"], "scaler"),
(L1.parameters["intercept"], "intercept"),
(y_pred, "y"),
(L1.gradients["scaler"], "dLdScaler"),
(L1.gradients["intercept"], "dLdIntercept"),
(dLdX, "dLdX"),
]
print("Trial {}".format(i))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=3
)
print("\tPASSED {}".format(label))
i += 1
def test_MultiplyLayer(N=15):
from numpy_ml.neural_nets.layers import Multiply
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
Xs = []
n_ex = np.random.randint(1, 100)
n_in = np.random.randint(1, 100)
n_entries = np.random.randint(2, 5)
for _ in range(n_entries):
Xs.append(random_tensor((n_ex, n_in), standardize=True))
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
# initialize Add layer
L1 = Multiply(act_fn)
# forward prop
y_pred = L1.forward(Xs)
# backprop
dLdy = np.ones_like(y_pred)
dLdXs = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchMultiplyLayer(torch_fn)
golds = gold_mod.extract_grads(Xs)
params = [(Xs, "Xs"), (y_pred, "Y")]
params.extend(
[(dldxi, "dLdX{}".format(i + 1)) for i, dldxi in enumerate(dLdXs)]
)
print("\nTrial {}".format(i))
print("n_ex={}, n_in={}".format(n_ex, n_in))
print("n_entries={}, act_fn={}".format(n_entries, str(act_fn)))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=1
)
print("\tPASSED {}".format(label))
i += 1
def test_AddLayer(N=15):
from numpy_ml.neural_nets.layers import Add
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
Xs = []
n_ex = np.random.randint(1, 100)
n_in = np.random.randint(1, 100)
n_entries = np.random.randint(2, 5)
for _ in range(n_entries):
Xs.append(random_tensor((n_ex, n_in), standardize=True))
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
# initialize Add layer
L1 = Add(act_fn)
# forward prop
y_pred = L1.forward(Xs)
# backprop
dLdy = np.ones_like(y_pred)
dLdXs = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchAddLayer(torch_fn)
golds = gold_mod.extract_grads(Xs)
params = [(Xs, "Xs"), (y_pred, "Y")]
params.extend(
[(dldxi, "dLdX{}".format(i + 1)) for i, dldxi in enumerate(dLdXs)]
)
print("\nTrial {}".format(i))
print("n_ex={}, n_in={}".format(n_ex, n_in))
print("n_entries={}, act_fn={}".format(n_entries, str(act_fn)))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=1
)
print("\tPASSED {}".format(label))
i += 1
def test_BatchNorm2D(N=15):
from numpy_ml.neural_nets.layers import BatchNorm2D
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(2, 10)
in_rows = np.random.randint(1, 10)
in_cols = np.random.randint(1, 10)
n_in = np.random.randint(1, 3)
# initialize BatchNorm2D layer
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
L1 = BatchNorm2D()
# forward prop
y_pred = L1.forward(X)
# standard sum loss
dLdy = np.ones_like(X)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchBatchNormLayer(
n_in, L1.parameters, mode="2D", epsilon=L1.epsilon, momentum=L1.momentum
)
golds = gold_mod.extract_grads(X, Y_true=None)
params = [
(L1.X[0], "X"),
(L1.hyperparameters["momentum"], "momentum"),
(L1.hyperparameters["epsilon"], "epsilon"),
(L1.parameters["scaler"].T, "scaler"),
(L1.parameters["intercept"], "intercept"),
(L1.parameters["running_mean"], "running_mean"),
# (L1.parameters["running_var"], "running_var"),
(y_pred, "y"),
(L1.gradients["scaler"], "dLdScaler"),
(L1.gradients["intercept"], "dLdIntercept"),
(dLdX, "dLdX"),
]
print("Trial {}".format(i))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=3
)
print("\tPASSED {}".format(label))
i += 1
def test_RNNCell(N=15):
from numpy_ml.neural_nets.layers import RNNCell
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
n_in = np.random.randint(1, 10)
n_out = np.random.randint(1, 10)
n_t = np.random.randint(1, 10)
X = random_tensor((n_ex, n_in, n_t), standardize=True)
        # initialize RNNCell layer
L1 = RNNCell(n_out=n_out)
# forward prop
y_preds = []
for t in range(n_t):
y_pred = L1.forward(X[:, :, t])
y_preds += [y_pred]
# backprop
dLdX = []
        dLdAt = np.ones_like(y_preds[-1])
for t in reversed(range(n_t)):
dLdXt = L1.backward(dLdAt)
dLdX.insert(0, dLdXt)
dLdX = np.dstack(dLdX)
# get gold standard gradients
gold_mod = TorchRNNCell(n_in, n_out, L1.parameters)
golds = gold_mod.extract_grads(X)
params = [
(X, "X"),
(np.array(y_preds), "y"),
(L1.parameters["ba"].T, "ba"),
(L1.parameters["bx"].T, "bx"),
(L1.parameters["Wax"].T, "Wax"),
(L1.parameters["Waa"].T, "Waa"),
(L1.gradients["ba"].T, "dLdBa"),
(L1.gradients["bx"].T, "dLdBx"),
(L1.gradients["Wax"].T, "dLdWax"),
(L1.gradients["Waa"].T, "dLdWaa"),
(dLdX, "dLdX"),
]
print("Trial {}".format(i))
for ix, (mine, label) in enumerate(params):
np.testing.assert_allclose(
mine,
golds[label],
err_msg=err_fmt(params, golds, ix),
atol=1e-3,
rtol=1e-3,
)
print("\tPASSED {}".format(label))
i += 1
def test_Conv2D(N=15):
from numpy_ml.neural_nets.layers import Conv2D
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
in_rows = np.random.randint(1, 10)
in_cols = np.random.randint(1, 10)
n_in, n_out = np.random.randint(1, 3), np.random.randint(1, 3)
f_shape = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
p, s = np.random.randint(0, 5), np.random.randint(1, 3)
d = np.random.randint(0, 5)
fr, fc = f_shape[0] * (d + 1) - d, f_shape[1] * (d + 1) - d
out_rows = int(1 + (in_rows + 2 * p - fr) / s)
out_cols = int(1 + (in_cols + 2 * p - fc) / s)
if out_rows <= 0 or out_cols <= 0:
continue
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
# randomly select an activation function
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
# initialize Conv2D layer
L1 = Conv2D(
out_ch=n_out,
kernel_shape=f_shape,
act_fn=act_fn,
pad=p,
stride=s,
dilation=d,
)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchConv2DLayer(
n_in, n_out, torch_fn, L1.parameters, L1.hyperparameters
)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(y_pred, "y"),
(L1.parameters["W"], "W"),
(L1.parameters["b"], "b"),
(L1.gradients["W"], "dLdW"),
(L1.gradients["b"], "dLdB"),
(dLdX, "dLdX"),
]
print("\nTrial {}".format(i))
print("pad={}, stride={}, f_shape={}, n_ex={}".format(p, s, f_shape, n_ex))
print("in_rows={}, in_cols={}, n_in={}".format(in_rows, in_cols, n_in))
print("out_rows={}, out_cols={}, n_out={}".format(out_rows, out_cols, n_out))
print("dilation={}".format(d))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
def test_DPAttention(N=15):
from numpy_ml.neural_nets.layers import DotProductAttention
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
d_k = np.random.randint(1, 100)
d_v = np.random.randint(1, 100)
Q = random_tensor((n_ex, d_k), standardize=True)
K = random_tensor((n_ex, d_k), standardize=True)
V = random_tensor((n_ex, d_v), standardize=True)
# initialize DotProductAttention layer
mine = DotProductAttention(scale=True, dropout_p=0)
# forward prop
y_pred = mine.forward(Q, K, V)
# backprop
dLdy = np.ones_like(y_pred)
dLdQ, dLdK, dLdV = mine.backward(dLdy)
# get gold standard gradients
gold_mod = TorchSDPAttentionLayer()
golds = gold_mod.extract_grads(Q, K, V)
params = [
(mine.X[0][0], "Q"),
(mine.X[0][1], "K"),
(mine.X[0][2], "V"),
(y_pred, "Y"),
(dLdV, "dLdV"),
(dLdK, "dLdK"),
(dLdQ, "dLdQ"),
]
print("\nTrial {}".format(i))
print("n_ex={} d_k={} d_v={}".format(n_ex, d_k, d_v))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
def test_Conv1D(N=15):
from numpy_ml.neural_nets.layers import Conv1D
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
l_in = np.random.randint(1, 10)
n_in, n_out = np.random.randint(1, 3), np.random.randint(1, 3)
f_width = min(l_in, np.random.randint(1, 5))
p, s = np.random.randint(0, 5), np.random.randint(1, 3)
d = np.random.randint(0, 5)
fc = f_width * (d + 1) - d
l_out = int(1 + (l_in + 2 * p - fc) / s)
if l_out <= 0:
continue
X = random_tensor((n_ex, l_in, n_in), standardize=True)
# randomly select an activation function
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
        # initialize Conv1D layer
L1 = Conv1D(
out_ch=n_out,
kernel_width=f_width,
act_fn=act_fn,
pad=p,
stride=s,
dilation=d,
)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchConv1DLayer(
n_in, n_out, torch_fn, L1.parameters, L1.hyperparameters
)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(y_pred, "y"),
(L1.parameters["W"], "W"),
(L1.parameters["b"], "b"),
(L1.gradients["W"], "dLdW"),
(L1.gradients["b"], "dLdB"),
(dLdX, "dLdX"),
]
print("\nTrial {}".format(i))
print("pad={}, stride={}, f_width={}, n_ex={}".format(p, s, f_width, n_ex))
print("l_in={}, n_in={}".format(l_in, n_in))
print("l_out={}, n_out={}".format(l_out, n_out))
print("dilation={}".format(d))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
def test_Deconv2D(N=15):
from numpy_ml.neural_nets.layers import Deconv2D
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
in_rows = np.random.randint(1, 10)
in_cols = np.random.randint(1, 10)
n_in, n_out = np.random.randint(1, 3), np.random.randint(1, 3)
f_shape = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
p, s = np.random.randint(0, 5), np.random.randint(1, 3)
out_rows = s * (in_rows - 1) - 2 * p + f_shape[0]
out_cols = s * (in_cols - 1) - 2 * p + f_shape[1]
if out_rows <= 0 or out_cols <= 0:
continue
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
# randomly select an activation function
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
# initialize Deconv2D layer
L1 = Deconv2D(
out_ch=n_out, kernel_shape=f_shape, act_fn=act_fn, pad=p, stride=s
)
# forward prop
try:
y_pred = L1.forward(X)
except ValueError:
print("Improper dimensions; retrying")
continue
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchDeconv2DLayer(
n_in, n_out, torch_fn, L1.parameters, L1.hyperparameters
)
golds = gold_mod.extract_grads(X)
params = [
(L1.X[0], "X"),
(L1.parameters["W"], "W"),
(L1.parameters["b"], "b"),
(y_pred, "y"),
(L1.gradients["W"], "dLdW"),
(L1.gradients["b"], "dLdB"),
(dLdX, "dLdX"),
]
print("\nTrial {}".format(i))
print("pad={}, stride={}, f_shape={}, n_ex={}".format(p, s, f_shape, n_ex))
print("in_rows={}, in_cols={}, n_in={}".format(in_rows, in_cols, n_in))
print("out_rows={}, out_cols={}, n_out={}".format(out_rows, out_cols, n_out))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
def test_Pool2D(N=15):
from numpy_ml.neural_nets.layers import Pool2D
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
in_rows = np.random.randint(1, 10)
in_cols = np.random.randint(1, 10)
n_in = np.random.randint(1, 3)
f_shape = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
p, s = np.random.randint(0, max(1, min(f_shape) // 2)), np.random.randint(1, 3)
# mode = ["max", "average"][np.random.randint(0, 2)]
mode = "average"
out_rows = int(1 + (in_rows + 2 * p - f_shape[0]) / s)
out_cols = int(1 + (in_cols + 2 * p - f_shape[1]) / s)
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
print("\nmode: {}".format(mode))
print("pad={}, stride={}, f_shape={}, n_ex={}".format(p, s, f_shape, n_ex))
print("in_rows={}, in_cols={}, n_in={}".format(in_rows, in_cols, n_in))
print("out_rows={}, out_cols={}, n_out={}".format(out_rows, out_cols, n_in))
# initialize Pool2D layer
L1 = Pool2D(kernel_shape=f_shape, pad=p, stride=s, mode=mode)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchPool2DLayer(n_in, L1.hyperparameters)
golds = gold_mod.extract_grads(X)
params = [(L1.X[0], "X"), (y_pred, "y"), (dLdX, "dLdX")]
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
def test_LSTMCell(N=15):
from numpy_ml.neural_nets.layers import LSTMCell
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
n_in = np.random.randint(1, 10)
n_out = np.random.randint(1, 10)
n_t = np.random.randint(1, 10)
X = random_tensor((n_ex, n_in, n_t), standardize=True)
        # initialize LSTMCell layer
L1 = LSTMCell(n_out=n_out)
# forward prop
Cs = []
y_preds = []
for t in range(n_t):
y_pred, Ct = L1.forward(X[:, :, t])
y_preds.append(y_pred)
Cs.append(Ct)
# backprop
dLdX = []
        dLdAt = np.ones_like(y_preds[-1])
for t in reversed(range(n_t)):
dLdXt = L1.backward(dLdAt)
dLdX.insert(0, dLdXt)
dLdX = np.dstack(dLdX)
y_preds = np.dstack(y_preds)
Cs = np.array(Cs)
# get gold standard gradients
gold_mod = TorchLSTMCell(n_in, n_out, L1.parameters)
golds = gold_mod.extract_grads(X)
params = [
(X, "X"),
(np.array(Cs), "C"),
(y_preds, "y"),
(L1.parameters["bo"].T, "bo"),
(L1.parameters["bu"].T, "bu"),
(L1.parameters["bf"].T, "bf"),
(L1.parameters["bc"].T, "bc"),
(L1.parameters["Wo"], "Wo"),
(L1.parameters["Wu"], "Wu"),
(L1.parameters["Wf"], "Wf"),
(L1.parameters["Wc"], "Wc"),
(L1.gradients["bo"].T, "dLdBo"),
(L1.gradients["bu"].T, "dLdBu"),
(L1.gradients["bf"].T, "dLdBf"),
(L1.gradients["bc"].T, "dLdBc"),
(L1.gradients["Wo"], "dLdWo"),
(L1.gradients["Wu"], "dLdWu"),
(L1.gradients["Wf"], "dLdWf"),
(L1.gradients["Wc"], "dLdWc"),
(dLdX, "dLdX"),
]
print("Case {}".format(i))
for ix, (mine, label) in enumerate(params):
np.testing.assert_allclose(
mine,
golds[label],
err_msg=err_fmt(params, golds, ix),
atol=1e-4,
rtol=1e-4,
)
print("\tPASSED {}".format(label))
i += 1
def grad_check_RNN(model, loss_func, param_name, n_t, X, epsilon=1e-7):
"""
Manual gradient calc for vanilla RNN parameters
"""
if param_name in ["Ba", "Bx"]:
param_name = param_name.lower()
elif param_name in ["X", "y"]:
return None
param_orig = model.parameters[param_name]
model.flush_gradients()
grads = np.zeros_like(param_orig)
for flat_ix, val in enumerate(param_orig.flat):
param = deepcopy(param_orig)
md_ix = np.unravel_index(flat_ix, param.shape)
# plus
y_preds_plus = []
param[md_ix] = val + epsilon
model.parameters[param_name] = param
for t in range(n_t):
y_pred_plus = model.forward(X[:, :, t])
y_preds_plus += [y_pred_plus]
loss_plus = loss_func(y_preds_plus)
model.flush_gradients()
# minus
y_preds_minus = []
param[md_ix] = val - epsilon
model.parameters[param_name] = param
for t in range(n_t):
y_pred_minus = model.forward(X[:, :, t])
y_preds_minus += [y_pred_minus]
loss_minus = loss_func(y_preds_minus)
model.flush_gradients()
grad = (loss_plus - loss_minus) / (2 * epsilon)
grads[md_ix] = grad
return grads.T
#######################################################################
# Modules #
#######################################################################
def test_MultiHeadedAttentionModule(N=15):
from numpy_ml.neural_nets.modules import MultiHeadedAttentionModule
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
latent_dim = np.random.randint(1, 20)
n_heads = np.random.randint(2, 10)
d_k = d_v = n_heads * latent_dim
Q = random_tensor((n_ex, d_k), standardize=True)
K = random_tensor((n_ex, d_k), standardize=True)
V = random_tensor((n_ex, d_v), standardize=True)
mine = MultiHeadedAttentionModule(n_heads=n_heads, dropout_p=0)
# forward prop
y_pred = mine.forward(Q, K, V)
# backprop
dLdy = np.ones_like(y_pred)
dLdQ, dLdK, dLdV = mine.backward(dLdy)
# get gold standard gradients
params = mine.parameters
hparams = mine.hyperparameters
gold_mod = TorchMultiHeadedAttentionModule(params, hparams)
golds = gold_mod.extract_grads(Q, K, V)
dv = mine.derived_variables
params = mine.parameters["components"]
grads = mine.gradients["components"]
params = [
(Q, "Q"),
(K, "K"),
(V, "V"),
(mine.n_heads, "n_heads"),
(mine.latent_dim, "latent_dim"),
(params["O"]["W"], "O_W"),
(params["K"]["W"], "K_W"),
(params["V"]["W"], "V_W"),
(params["Q"]["W"], "Q_W"),
(params["O"]["b"], "O_b"),
(params["K"]["b"], "K_b"),
(params["V"]["b"], "V_b"),
(params["Q"]["b"], "Q_b"),
(dv["Q_proj"], "Q_proj"),
(dv["K_proj"], "K_proj"),
(dv["V_proj"], "V_proj"),
(dv["attention_weights"][0], "weights"),
(dv["attention_out"], "attn_out"),
(y_pred, "Y"),
(dLdy, "dLdy"),
(dv["dQ_proj"], "dQ_proj"),
(dv["dK_proj"], "dK_proj"),
(dv["dV_proj"], "dV_proj"),
(grads["O"]["W"], "dO_W"),
(grads["V"]["W"], "dV_W"),
(grads["K"]["W"], "dK_W"),
(grads["Q"]["W"], "dQ_W"),
(grads["O"]["b"], "dO_b"),
(grads["V"]["b"], "dV_b"),
(grads["K"]["b"], "dK_b"),
(grads["Q"]["b"], "dQ_b"),
(dLdQ, "dQ"),
(dLdK, "dK"),
(dLdV, "dV"),
]
print("\nTrial {}".format(i))
print(
"n_ex={} d_k=d_v={} latent_dim={} n_heads={}".format(
n_ex, d_k, latent_dim, n_heads
)
)
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
def test_SkipConnectionIdentityModule(N=15):
from numpy_ml.neural_nets.modules import SkipConnectionIdentityModule
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
n_ex = np.random.randint(2, 10)
in_rows = np.random.randint(2, 25)
in_cols = np.random.randint(2, 25)
n_in = np.random.randint(2, 5)
n_out = n_in
f_shape1 = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
f_shape2 = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
s1 = np.random.randint(1, 5)
s2 = np.random.randint(1, 5)
# randomly select an activation function
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
p1 = calc_pad_dims_2D(X.shape, X.shape[1:3], f_shape1, s1)
if p1[0] != p1[1] or p1[2] != p1[3]:
continue
p2 = calc_pad_dims_2D(X.shape, X.shape[1:3], f_shape2, s2)
if p2[0] != p2[1] or p2[2] != p2[3]:
continue
p1 = (p1[0], p1[2])
p2 = (p2[0], p2[2])
# initialize SkipConnectionIdentity module
L1 = SkipConnectionIdentityModule(
out_ch=n_out,
kernel_shape1=f_shape1,
kernel_shape2=f_shape2,
stride1=s1,
stride2=s2,
act_fn=act_fn,
epsilon=1e-5,
momentum=0.9,
)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchSkipConnectionIdentity(
torch_fn,
p1,
p2,
L1.parameters,
L1.hyperparameters,
momentum=L1.momentum,
epsilon=L1.epsilon,
)
golds = gold_mod.extract_grads(X)
params = L1.parameters["components"]
grads = L1.gradients["components"]
params = [
(X, "X"),
(params["conv1"]["W"], "conv1_W"),
(params["conv1"]["b"], "conv1_b"),
(params["batchnorm1"]["scaler"].T, "bn1_scaler"),
(params["batchnorm1"]["intercept"], "bn1_intercept"),
(params["batchnorm1"]["running_mean"], "bn1_running_mean"),
# (params["batchnorm1"]["running_var"], "bn1_running_var"),
(params["conv2"]["W"], "conv2_W"),
(params["conv2"]["b"], "conv2_b"),
(params["batchnorm2"]["scaler"].T, "bn2_scaler"),
(params["batchnorm2"]["intercept"], "bn2_intercept"),
(params["batchnorm2"]["running_mean"], "bn2_running_mean"),
# (params["batchnorm2"]["running_var"], "bn2_running_var"),
(L1._dv["conv1_out"], "act1_out"),
(L1._dv["batchnorm1_out"], "bn1_out"),
(L1._dv["conv2_out"], "conv2_out"),
(L1._dv["batchnorm2_out"], "bn2_out"),
(y_pred, "Y"),
(dLdy, "dLdY"),
(L1.derived_variables["dLdBn2"], "dLdBn2_out"),
(L1.derived_variables["dLdConv2"], "dLdConv2_out"),
(L1.derived_variables["dLdBn1"], "dLdBn1_out"),
(L1.derived_variables["dLdConv1"], "dLdActFn1_out"),
(dLdX, "dLdX"),
(grads["batchnorm2"]["scaler"].T, "dLdBn2_scaler"),
(grads["batchnorm2"]["intercept"], "dLdBn2_intercept"),
(grads["conv2"]["W"], "dLdConv2_W"),
(grads["conv2"]["b"], "dLdConv2_b"),
(grads["batchnorm1"]["scaler"].T, "dLdBn1_scaler"),
(grads["batchnorm1"]["intercept"], "dLdBn1_intercept"),
(grads["conv1"]["W"], "dLdConv1_W"),
(grads["conv1"]["b"], "dLdConv1_b"),
]
print("\nTrial {}".format(i))
print("act_fn={}, n_ex={}".format(act_fn, n_ex))
print("in_rows={}, in_cols={}, n_in={}".format(in_rows, in_cols, n_in))
print("pad1={}, stride1={}, f_shape1={}".format(p1, s1, f_shape1))
print("pad2={}, stride2={}, f_shape2={}".format(p2, s2, f_shape2))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=2
)
print("\tPASSED {}".format(label))
i += 1
def test_SkipConnectionConvModule(N=15):
from numpy_ml.neural_nets.modules import SkipConnectionConvModule
from numpy_ml.neural_nets.activations import Tanh, ReLU, Sigmoid, Affine
N = np.inf if N is None else N
np.random.seed(12345)
acts = [
(Tanh(), nn.Tanh(), "Tanh"),
(Sigmoid(), nn.Sigmoid(), "Sigmoid"),
(ReLU(), nn.ReLU(), "ReLU"),
(Affine(), TorchLinearActivation(), "Affine"),
]
i = 1
while i < N + 1:
n_ex = np.random.randint(2, 10)
in_rows = np.random.randint(2, 10)
in_cols = np.random.randint(2, 10)
n_in = np.random.randint(2, 5)
n_out1 = np.random.randint(2, 5)
n_out2 = np.random.randint(2, 5)
f_shape1 = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
f_shape2 = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
f_shape_skip = (
min(in_rows, np.random.randint(1, 5)),
min(in_cols, np.random.randint(1, 5)),
)
s1 = np.random.randint(1, 5)
s2 = np.random.randint(1, 5)
s_skip = np.random.randint(1, 5)
# randomly select an activation function
act_fn, torch_fn, act_fn_name = acts[np.random.randint(0, len(acts))]
X = random_tensor((n_ex, in_rows, in_cols, n_in), standardize=True)
p1 = (np.random.randint(1, 5), np.random.randint(1, 5))
p2 = (np.random.randint(1, 5), np.random.randint(1, 5))
# initialize SkipConnectionConv module
L1 = SkipConnectionConvModule(
out_ch1=n_out1,
out_ch2=n_out2,
kernel_shape1=f_shape1,
kernel_shape2=f_shape2,
kernel_shape_skip=f_shape_skip,
stride1=s1,
stride2=s2,
stride_skip=s_skip,
pad1=p1,
pad2=p2,
act_fn=act_fn,
epsilon=1e-5,
momentum=0.9,
)
# forward prop
try:
y_pred = L1.forward(X)
except (ValueError, AssertionError):
print("Invalid padding; Retrying")
continue
ps = L1.hyperparameters["pad_skip"]
if ps[0] != ps[1] or ps[2] != ps[3]:
continue
pad_skip = (ps[0], ps[2])
# backprop
dLdy = np.ones_like(y_pred)
dLdX = L1.backward(dLdy)
# get gold standard gradients
gold_mod = TorchSkipConnectionConv(
torch_fn,
p1,
p2,
pad_skip,
L1.parameters,
L1.hyperparameters,
momentum=L1.momentum,
epsilon=L1.epsilon,
)
golds = gold_mod.extract_grads(X)
params = L1.parameters["components"]
grads = L1.gradients["components"]
params = [
(X, "X"),
(params["conv1"]["W"], "conv1_W"),
(params["conv1"]["b"], "conv1_b"),
(params["batchnorm1"]["scaler"].T, "bn1_scaler"),
(params["batchnorm1"]["intercept"], "bn1_intercept"),
(params["batchnorm1"]["running_mean"], "bn1_running_mean"),
# (params["batchnorm1"]["running_var"], "bn1_running_var"),
(params["conv2"]["W"], "conv2_W"),
(params["conv2"]["b"], "conv2_b"),
(params["batchnorm2"]["scaler"].T, "bn2_scaler"),
(params["batchnorm2"]["intercept"], "bn2_intercept"),
(params["batchnorm2"]["running_mean"], "bn2_running_mean"),
# (params["batchnorm2"]["running_var"], "bn2_running_var"),
(params["conv_skip"]["W"], "conv_skip_W"),
(params["conv_skip"]["b"], "conv_skip_b"),
(params["batchnorm_skip"]["scaler"].T, "bn_skip_scaler"),
(params["batchnorm_skip"]["intercept"], "bn_skip_intercept"),
(params["batchnorm_skip"]["running_mean"], "bn_skip_running_mean"),
# (params["batchnorm_skip"]["running_var"], "bn_skip_running_var"),
(L1._dv["conv1_out"], "act1_out"),
(L1._dv["batchnorm1_out"], "bn1_out"),
(L1._dv["conv2_out"], "conv2_out"),
(L1._dv["batchnorm2_out"], "bn2_out"),
(L1._dv["conv_skip_out"], "conv_skip_out"),
(L1._dv["batchnorm_skip_out"], "bn_skip_out"),
(y_pred, "Y"),
(dLdy, "dLdY"),
(L1.derived_variables["dLdBn2"], "dLdBn2_out"),
(L1.derived_variables["dLdConv2"], "dLdConv2_out"),
(L1.derived_variables["dLdBnSkip"], "dLdBnSkip_out"),
(L1.derived_variables["dLdConvSkip"], "dLdConvSkip_out"),
(L1.derived_variables["dLdBn1"], "dLdBn1_out"),
(L1.derived_variables["dLdConv1"], "dLdActFn1_out"),
(dLdX, "dLdX"),
(grads["batchnorm_skip"]["scaler"].T, "dLdBnSkip_scaler"),
(grads["batchnorm_skip"]["intercept"], "dLdBnSkip_intercept"),
(grads["conv_skip"]["W"], "dLdConvSkip_W"),
(grads["conv_skip"]["b"], "dLdConvSkip_b"),
(grads["batchnorm2"]["scaler"].T, "dLdBn2_scaler"),
(grads["batchnorm2"]["intercept"], "dLdBn2_intercept"),
(grads["conv2"]["W"], "dLdConv2_W"),
(grads["conv2"]["b"], "dLdConv2_b"),
(grads["batchnorm1"]["scaler"].T, "dLdBn1_scaler"),
(grads["batchnorm1"]["intercept"], "dLdBn1_intercept"),
(grads["conv1"]["W"], "dLdConv1_W"),
(grads["conv1"]["b"], "dLdConv1_b"),
]
print("\nTrial {}".format(i))
print("act_fn={}, n_ex={}".format(act_fn, n_ex))
print("in_rows={}, in_cols={}, n_in={}".format(in_rows, in_cols, n_in))
print("pad1={}, stride1={}, f_shape1={}".format(p1, s1, f_shape1))
print("pad2={}, stride2={}, f_shape2={}".format(p2, s2, f_shape2))
print("stride_skip={}, f_shape_skip={}".format(s_skip, f_shape_skip))
        warn_str = (
            "\n[NOTE] The tests in this module can sometimes fail during "
            "backprop due to the ReLU issue: while the difference in the forward "
            "pass between z=-1e-9 and z=1e-9 is minuscule, the difference in the "
            "backward pass is significant because of ReLU's kink at 0."
        )
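        # e.g., relu(-1e-9) = 0 and relu(1e-9) = 1e-9 differ by only 1e-9 in the
        # forward pass, but their local gradients are 0 and 1, respectively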
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine,
golds[label],
err_msg=err_fmt(params, golds, ix, warn_str),
decimal=2,
)
print("\tPASSED {}".format(label))
i += 1
def test_BidirectionalLSTM(N=15):
from numpy_ml.neural_nets.modules import BidirectionalLSTM
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
n_in = np.random.randint(1, 10)
n_out = np.random.randint(1, 10)
n_t = np.random.randint(1, 10)
X = random_tensor((n_ex, n_in, n_t), standardize=True)
        # initialize BidirectionalLSTM module
L1 = BidirectionalLSTM(n_out=n_out)
# forward prop
y_pred = L1.forward(X)
# backprop
dLdA = np.ones_like(y_pred)
dLdX = L1.backward(dLdA)
# get gold standard gradients
gold_mod = TorchBidirectionalLSTM(n_in, n_out, L1.parameters)
golds = gold_mod.extract_grads(X)
pms, grads = L1.parameters["components"], L1.gradients["components"]
params = [
(X, "X"),
(y_pred, "y"),
(pms["cell_fwd"]["bo"].T, "bo_f"),
(pms["cell_fwd"]["bu"].T, "bu_f"),
(pms["cell_fwd"]["bf"].T, "bf_f"),
(pms["cell_fwd"]["bc"].T, "bc_f"),
(pms["cell_fwd"]["Wo"], "Wo_f"),
(pms["cell_fwd"]["Wu"], "Wu_f"),
(pms["cell_fwd"]["Wf"], "Wf_f"),
(pms["cell_fwd"]["Wc"], "Wc_f"),
(pms["cell_bwd"]["bo"].T, "bo_b"),
(pms["cell_bwd"]["bu"].T, "bu_b"),
(pms["cell_bwd"]["bf"].T, "bf_b"),
(pms["cell_bwd"]["bc"].T, "bc_b"),
(pms["cell_bwd"]["Wo"], "Wo_b"),
(pms["cell_bwd"]["Wu"], "Wu_b"),
(pms["cell_bwd"]["Wf"], "Wf_b"),
(pms["cell_bwd"]["Wc"], "Wc_b"),
(grads["cell_fwd"]["bo"].T, "dLdBo_f"),
(grads["cell_fwd"]["bu"].T, "dLdBu_f"),
(grads["cell_fwd"]["bf"].T, "dLdBf_f"),
(grads["cell_fwd"]["bc"].T, "dLdBc_f"),
(grads["cell_fwd"]["Wo"], "dLdWo_f"),
(grads["cell_fwd"]["Wu"], "dLdWu_f"),
(grads["cell_fwd"]["Wf"], "dLdWf_f"),
(grads["cell_fwd"]["Wc"], "dLdWc_f"),
(grads["cell_bwd"]["bo"].T, "dLdBo_b"),
(grads["cell_bwd"]["bu"].T, "dLdBu_b"),
(grads["cell_bwd"]["bf"].T, "dLdBf_b"),
(grads["cell_bwd"]["bc"].T, "dLdBc_b"),
(grads["cell_bwd"]["Wo"], "dLdWo_b"),
(grads["cell_bwd"]["Wu"], "dLdWu_b"),
(grads["cell_bwd"]["Wf"], "dLdWf_b"),
(grads["cell_bwd"]["Wc"], "dLdWc_b"),
(dLdX, "dLdX"),
]
print("Case {}".format(i))
for ix, (mine, label) in enumerate(params):
np.testing.assert_allclose(
mine,
golds[label],
err_msg=err_fmt(params, golds, ix),
atol=1e-4,
rtol=1e-4,
)
print("\tPASSED {}".format(label))
i += 1
def test_WaveNetModule(N=10):
from numpy_ml.neural_nets.modules import WavenetResidualModule
N = np.inf if N is None else N
np.random.seed(12345)
i = 1
while i < N + 1:
n_ex = np.random.randint(1, 10)
l_in = np.random.randint(1, 10)
ch_residual, ch_dilation = np.random.randint(1, 5), np.random.randint(1, 5)
f_width = min(l_in, np.random.randint(1, 5))
d = np.random.randint(0, 5)
X_main = np.zeros_like(
random_tensor((n_ex, l_in, ch_residual), standardize=True)
)
X_main[0][0][0] = 1.0
X_skip = np.zeros_like(
random_tensor((n_ex, l_in, ch_residual), standardize=True)
)
        # initialize WavenetResidualModule
L1 = WavenetResidualModule(
ch_residual=ch_residual,
ch_dilation=ch_dilation,
kernel_width=f_width,
dilation=d,
)
# forward prop
Y_main, Y_skip = L1.forward(X_main, X_skip)
# backprop
dLdY_skip = np.ones_like(Y_skip)
dLdY_main = np.ones_like(Y_main)
dLdX_main, dLdX_skip = L1.backward(dLdY_skip, dLdY_main)
_, conv_1x1_pad = pad1D(
L1._dv["multiply_gate_out"], "same", kernel_width=1, stride=1, dilation=0
)
if conv_1x1_pad[0] != conv_1x1_pad[1]:
print("Skipping")
continue
conv_1x1_pad = conv_1x1_pad[0]
# get gold standard gradients
gold_mod = TorchWavenetModule(L1.parameters, L1.hyperparameters, conv_1x1_pad)
golds = gold_mod.extract_grads(X_main, X_skip)
dv = L1.derived_variables
pc = L1.parameters["components"]
gr = L1.gradients["components"]
params = [
(L1.X_main, "X_main"),
(L1.X_skip, "X_skip"),
(pc["conv_dilation"]["W"], "conv_dilation_W"),
(pc["conv_dilation"]["b"], "conv_dilation_b"),
(pc["conv_1x1"]["W"], "conv_1x1_W"),
(pc["conv_1x1"]["b"], "conv_1x1_b"),
(dv["conv_dilation_out"], "conv_dilation_out"),
(dv["tanh_out"], "tanh_out"),
(dv["sigm_out"], "sigm_out"),
(dv["multiply_gate_out"], "multiply_gate_out"),
(dv["conv_1x1_out"], "conv_1x1_out"),
(Y_main, "Y_main"),
(Y_skip, "Y_skip"),
(dLdY_skip, "dLdY_skip"),
(dLdY_main, "dLdY_main"),
(dv["dLdConv_1x1"], "dLdConv_1x1_out"),
(gr["conv_1x1"]["W"], "dLdConv_1x1_W"),
(gr["conv_1x1"]["b"], "dLdConv_1x1_b"),
(dv["dLdMultiply"], "dLdMultiply_out"),
(dv["dLdTanh"], "dLdTanh_out"),
(dv["dLdSigmoid"], "dLdSigm_out"),
(dv["dLdConv_dilation"], "dLdConv_dilation_out"),
(gr["conv_dilation"]["W"], "dLdConv_dilation_W"),
(gr["conv_dilation"]["b"], "dLdConv_dilation_b"),
(dLdX_main, "dLdX_main"),
(dLdX_skip, "dLdX_skip"),
]
print("\nTrial {}".format(i))
print("f_width={}, n_ex={}".format(f_width, n_ex))
print("l_in={}, ch_residual={}".format(l_in, ch_residual))
print("ch_dilation={} dilation={}".format(ch_dilation, d))
for ix, (mine, label) in enumerate(params):
assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=4
)
print("\tPASSED {}".format(label))
i += 1
#######################################################################
# Utils #
#######################################################################
def test_pad1D(N=15):
from numpy_ml.neural_nets.layers import Conv1D
from .nn_torch_models import TorchCausalConv1d, torchify
np.random.seed(12345)
N = np.inf if N is None else N
i = 1
while i < N + 1:
p = np.random.choice(["same", "causal"])
n_ex = np.random.randint(1, 10)
l_in = np.random.randint(1, 10)
n_in, n_out = np.random.randint(1, 3), np.random.randint(1, 3)
f_width = min(l_in, np.random.randint(1, 5))
s = np.random.randint(1, 3)
d = np.random.randint(0, 5)
X = random_tensor((n_ex, l_in, n_in), standardize=True)
X_pad, _ = pad1D(X, p, kernel_width=f_width, stride=s, dilation=d)
        # initialize Conv1D layer
L1 = Conv1D(out_ch=n_out, kernel_width=f_width, pad=0, stride=s, dilation=d)
# forward prop
try:
y_pred = L1.forward(X_pad)
except ValueError:
continue
        # "same"/"causal" padding should preserve (n_ex, l_in); the number of
        # output channels is ignored in the shape check below
print("Trial {}".format(i))
print("p={} d={} s={} l_in={} f_width={}".format(p, d, s, l_in, f_width))
print("n_ex={} n_in={} n_out={}".format(n_ex, n_in, n_out))
        assert y_pred.shape[:2] == X.shape[:2], "y_pred.shape={} X.shape={}".format(
            y_pred.shape, X.shape
        )
if p == "causal":
gold = TorchCausalConv1d(
in_channels=n_in,
out_channels=n_out,
kernel_size=f_width,
stride=s,
dilation=d + 1,
bias=True,
)
if s != 1:
print(
"TorchCausalConv1D does not do `same` padding for stride > 1. Skipping"
)
continue
XT = torchify(np.moveaxis(X, [0, 1, 2], [0, -1, -2]))
else:
gold = nn.Conv1d(
in_channels=n_in,
out_channels=n_out,
kernel_size=f_width,
padding=0,
stride=s,
dilation=d + 1,
bias=True,
)
XT = torchify(np.moveaxis(X_pad, [0, 1, 2], [0, -1, -2]))
# import weights and biases
# (f[0], n_in, n_out) -> (n_out, n_in, f[0])
b = L1.parameters["b"]
W = np.moveaxis(L1.parameters["W"], [0, 1, 2], [-1, -2, -3])
assert gold.weight.shape == W.shape
assert gold.bias.shape == b.flatten().shape
gold.weight = nn.Parameter(torch.FloatTensor(W))
gold.bias = nn.Parameter(torch.FloatTensor(b.flatten()))
outT = gold(XT)
if outT.ndimension() == 2:
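            # unexpected 2D gold output; drop into the debugger to inspect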
import ipdb
ipdb.set_trace()
gold_out = np.moveaxis(outT.detach().numpy(), [0, 1, 2], [0, -1, -2])
assert gold_out.shape[:2] == X.shape[:2]
np.testing.assert_almost_equal(
y_pred,
gold_out,
err_msg=err_fmt(
[(y_pred.shape, "out.shape"), (y_pred, "out")],
{"out.shape": gold_out.shape, "out": gold_out},
1,
),
decimal=4,
)
print("PASSED\n")
i += 1
def test_conv(N=15):
np.random.seed(12345)
N = np.inf if N is None else N
i = 0
while i < N:
n_ex = np.random.randint(2, 15)
in_rows = np.random.randint(2, 15)
in_cols = np.random.randint(2, 15)
in_ch = np.random.randint(2, 15)
out_ch = np.random.randint(2, 15)
f_shape = (
min(in_rows, np.random.randint(2, 10)),
min(in_cols, np.random.randint(2, 10)),
)
s = np.random.randint(1, 3)
p = np.random.randint(0, 5)
X = np.random.rand(n_ex, in_rows, in_cols, in_ch)
X_pad, p = pad2D(X, p)
W = np.random.randn(f_shape[0], f_shape[1], in_ch, out_ch)
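        # conv2D should produce the same output as the naive reference implementation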
gold = conv2D_naive(X, W, s, p)
mine = conv2D(X, W, s, p)
np.testing.assert_almost_equal(mine, gold)
print("PASSED")
i += 1
#######################################################################
# Models #
#######################################################################
def fit_VAE():
# for testing
from numpy_ml.neural_nets.models.vae import BernoulliVAE
np.random.seed(12345)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# scale pixel intensities to [0, 1]
X_train = np.expand_dims(X_train.astype("float32") / 255.0, 3)
X_test = np.expand_dims(X_test.astype("float32") / 255.0, 3)
X_train = X_train[: 128 * 1] # 1 batch
BV = BernoulliVAE()
BV.fit(X_train, n_epochs=1, verbose=False)
def test_WGAN_GP(N=1):
from numpy_ml.neural_nets.models.wgan_gp import WGAN_GP
np.random.seed(12345)
ss = np.random.randint(0, 1000)
np.random.seed(ss)
N = np.inf if N is None else N
i = 1
while i < N + 1:
c_updates_per_epoch, n_steps = 1, 1
n_ex = np.random.randint(1, 500)
n_in = np.random.randint(1, 100)
lambda_ = np.random.randint(0, 20)
g_hidden = np.random.randint(2, 500)
X = random_tensor((n_ex, n_in), standardize=True)
# initialize WGAN_GP model
L1 = WGAN_GP(g_hidden=g_hidden, debug=True)
# forward prop
batchsize = n_ex
L1.fit(
X,
lambda_=lambda_,
c_updates_per_epoch=c_updates_per_epoch,
n_steps=n_steps,
batchsize=batchsize,
)
# backprop
dv = L1.derived_variables
params = L1.parameters["components"]
grads = L1.gradients["components"]
params["noise"] = dv["noise"]
params["alpha"] = dv["alpha"]
params["n_in"] = n_in
params["g_hidden"] = g_hidden
params["c_updates_per_epoch"] = c_updates_per_epoch
params["n_steps"] = n_steps
# get gold standard gradients
golds = WGAN_GP_tf(X, lambda_=lambda_, batch_size=batchsize, params=params)
params = [
(dv["X_real"], "X_real"),
(params["generator"]["FC1"]["W"], "G_weights_FC1"),
(params["generator"]["FC2"]["W"], "G_weights_FC2"),
(params["generator"]["FC3"]["W"], "G_weights_FC3"),
(params["generator"]["FC4"]["W"], "G_weights_FC4"),
(dv["G_fwd_X_fake"]["FC1"], "G_fwd_X_fake_FC1"),
(dv["G_fwd_X_fake"]["FC2"], "G_fwd_X_fake_FC2"),
(dv["G_fwd_X_fake"]["FC3"], "G_fwd_X_fake_FC3"),
(dv["G_fwd_X_fake"]["FC4"], "G_fwd_X_fake_FC4"),
(dv["X_fake"], "X_fake"),
(dv["X_interp"], "X_interp"),
(params["critic"]["FC1"]["W"], "C_weights_Y_real_FC1"),
(params["critic"]["FC2"]["W"], "C_weights_Y_real_FC2"),
(params["critic"]["FC3"]["W"], "C_weights_Y_real_FC3"),
(params["critic"]["FC4"]["W"], "C_weights_Y_real_FC4"),
(dv["C_fwd_Y_real"]["FC1"], "C_fwd_Y_real_FC1"),
(dv["C_fwd_Y_real"]["FC2"], "C_fwd_Y_real_FC2"),
(dv["C_fwd_Y_real"]["FC3"], "C_fwd_Y_real_FC3"),
(dv["C_fwd_Y_real"]["FC4"], "C_fwd_Y_real_FC4"),
(dv["Y_real"].flatten(), "Y_real"),
(params["critic"]["FC1"]["W"], "C_weights_Y_fake_FC1"),
(params["critic"]["FC2"]["W"], "C_weights_Y_fake_FC2"),
(params["critic"]["FC3"]["W"], "C_weights_Y_fake_FC3"),
(params["critic"]["FC4"]["W"], "C_weights_Y_fake_FC4"),
(dv["C_fwd_Y_fake"]["FC1"], "C_fwd_Y_fake_FC1"),
(dv["C_fwd_Y_fake"]["FC2"], "C_fwd_Y_fake_FC2"),
(dv["C_fwd_Y_fake"]["FC3"], "C_fwd_Y_fake_FC3"),
(dv["C_fwd_Y_fake"]["FC4"], "C_fwd_Y_fake_FC4"),
(dv["Y_fake"].flatten(), "Y_fake"),
(params["critic"]["FC1"]["W"], "C_weights_Y_interp_FC1"),
(params["critic"]["FC2"]["W"], "C_weights_Y_interp_FC2"),
(params["critic"]["FC3"]["W"], "C_weights_Y_interp_FC3"),
(params["critic"]["FC4"]["W"], "C_weights_Y_interp_FC4"),
(dv["C_fwd_Y_interp"]["FC1"], "C_fwd_Y_interp_FC1"),
(dv["C_fwd_Y_interp"]["FC2"], "C_fwd_Y_interp_FC2"),
(dv["C_fwd_Y_interp"]["FC3"], "C_fwd_Y_interp_FC3"),
(dv["C_fwd_Y_interp"]["FC4"], "C_fwd_Y_interp_FC4"),
(dv["Y_interp"].flatten(), "Y_interp"),
(dv["C_dY_interp_wrt"]["FC4"], "dY_interp_wrt_FC4"),
(dv["C_dY_interp_wrt"]["FC3"], "dY_interp_wrt_FC3"),
(dv["C_dY_interp_wrt"]["FC2"], "dY_interp_wrt_FC2"),
(dv["C_dY_interp_wrt"]["FC1"], "dY_interp_wrt_FC1"),
(dv["gradInterp"], "gradInterp"),
(dv["C_loss"], "C_loss"),
(dv["G_loss"], "G_loss"),
(grads["critic"]["FC1"]["W"], "dC_loss_dW_FC1"),
(grads["critic"]["FC1"]["b"].flatten(), "dC_loss_db_FC1"),
(grads["critic"]["FC2"]["W"], "dC_loss_dW_FC2"),
(grads["critic"]["FC2"]["b"].flatten(), "dC_loss_db_FC2"),
(grads["critic"]["FC3"]["W"], "dC_loss_dW_FC3"),
(grads["critic"]["FC3"]["b"].flatten(), "dC_loss_db_FC3"),
(grads["critic"]["FC4"]["W"], "dC_loss_dW_FC4"),
(grads["critic"]["FC4"]["b"].flatten(), "dC_loss_db_FC4"),
(dv["dG_Y_fake"].flatten(), "dG_Y_fake"),
(dv["dY_real"].flatten(), "dC_Y_real"),
(dv["dC_Y_fake"].flatten(), "dC_Y_fake"),
(dv["dGrad_interp"], "dC_gradInterp"),
]
print("\nTrial {}".format(i))
print("Seed: {} g_hidden={}".format(ss, g_hidden))
print("lambda={} n_ex={} n_in={}".format(lambda_, n_ex, n_in))
print(
"c_updates_per_epoch={}, n_steps={} batchsize={}".format(
c_updates_per_epoch, n_steps, batchsize
)
)
for ix, (mine, label) in enumerate(params):
np.testing.assert_almost_equal(
mine, golds[label], err_msg=err_fmt(params, golds, ix), decimal=3
)
print("\tPASSED {}".format(label))
i += 1
|
{"hexsha": "1f0068026a68dcb8931d503bc67996bd55e0fa0c", "size": 78222, "ext": "py", "lang": "Python", "max_stars_repo_path": "book-code/numpy-ml/numpy_ml/tests/test_nn.py", "max_stars_repo_name": "yangninghua/code_library", "max_stars_repo_head_hexsha": "b769abecb4e0cbdbbb5762949c91847a0f0b3c5a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book-code/numpy-ml/numpy_ml/tests/test_nn.py", "max_issues_repo_name": "yangninghua/code_library", "max_issues_repo_head_hexsha": "b769abecb4e0cbdbbb5762949c91847a0f0b3c5a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book-code/numpy-ml/numpy_ml/tests/test_nn.py", "max_forks_repo_name": "yangninghua/code_library", "max_forks_repo_head_hexsha": "b769abecb4e0cbdbbb5762949c91847a0f0b3c5a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.9534313725, "max_line_length": 91, "alphanum_fraction": 0.527831045, "include": true, "reason": "import numpy,from numpy,from scipy", "num_tokens": 22208}
|
from numpy import *
#import Image
from PIL import Image
def minmax(x, range=None):
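    """Return (min, max) of x, optionally ignoring values outside ``range``."""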
if range:
lo, hi = range
        good = (lo <= x) & (x <= hi)  # elementwise range mask
x = compress(good, x)
return min(x), max(x)
def scale255minmax(data):
lo, hi = minmax(ravel(data))
scaled = (data - lo) / float(hi - lo) * 255
return scaled.astype(int)
def savergb1(rgb, outfile): # SLOW!
#R, G, B = rgb
#data = 256 * array([R, G, B])
ny, nx = rgb[0].shape
data = array(rgb).astype(int)
data = transpose(data, (1,2,0))
data.shape = (ny*nx,3)
datal = data.tolist()
datat = tuple([ tuple(sublist) for sublist in datal ])
im = Image.new('RGB', (nx, ny))
im.putdata(datat)
im = im.transpose(Image.FLIP_TOP_BOTTOM)
#im.show()
im.save(outfile)
# r, g, b = data (3, ny, nx)
def rgb2im(rgb):
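    """Pack (r, g, b) channel arrays of shape (ny, nx) into a PIL RGB image."""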
rgb = array(rgb).astype(uint8)
rgb = transpose(rgb, (1,2,0))
im = Image.fromarray(rgb, 'RGB')
im = im.transpose(Image.FLIP_TOP_BOTTOM)
return im
# r, g, b = data (3, ny, nx)
def savergb(rgb, outfile=None):
im = rgb2im(rgb)
if outfile:
im.save(outfile)
return im
def savegray1(data, outfile, scale=False):
ny, nx = data.shape
im = Image.new('L', (nx,ny))
if scale:
data = scale255minmax(data)
im.putdata(data.ravel())
im = im.transpose(Image.FLIP_TOP_BOTTOM)
im.save(outfile)
return im
def savegray(data, outfile=None, scale=False):
ny, nx = data.shape
im = Image.new('L', (nx,ny))
if scale:
data = scale255minmax(data)
data = array(data).astype(uint8)
im = Image.fromarray(data, 'L')
im = im.transpose(Image.FLIP_TOP_BOTTOM)
if outfile:
im.save(outfile)
return im
def loadrgb(infile):
im = Image.open(infile)
im = im.transpose(Image.FLIP_TOP_BOTTOM)
# rgb = array(im.getdata())
rgb = asarray(im) # numpy
    print(rgb.shape)
#nx, ny = im.size
#rgb.shape = (ny,nx,3)
rgb = transpose(rgb, (2,0,1))
rgb = rgb[:3] # in case there's an alpha channel on the end
    rgb.flags.writeable = True  # asarray returns a read-only view by default
return rgb
def loadgray(infile):
"""Load grayscale image"""
im = Image.open(infile)
im = im.transpose(Image.FLIP_TOP_BOTTOM)
data = asarray(im) # numpy
    data.flags.writeable = True  # asarray returns a read-only view by default
return data
|
{"hexsha": "0d71b06d79a9421bba504d3de03d4578bf448c2b", "size": 2390, "ext": "py", "lang": "Python", "max_stars_repo_path": "nircam_jdox/scripts/coeim.py", "max_stars_repo_name": "aliciacanipe/nircam_jdox", "max_stars_repo_head_hexsha": "fa1c3381283bb08b870162d0dd3bc9d5e94561ea", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-10T06:48:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T06:48:27.000Z", "max_issues_repo_path": "nircam_jdox/scripts/coeim.py", "max_issues_repo_name": "aliciacanipe/nircam_jdox", "max_issues_repo_head_hexsha": "fa1c3381283bb08b870162d0dd3bc9d5e94561ea", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2019-04-05T16:30:32.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-02T16:30:26.000Z", "max_forks_repo_path": "nircam_jdox/scripts/coeim.py", "max_forks_repo_name": "aliciacanipe/nircam_jdox", "max_forks_repo_head_hexsha": "fa1c3381283bb08b870162d0dd3bc9d5e94561ea", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-03-20T15:14:26.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-17T20:16:40.000Z", "avg_line_length": 25.4255319149, "max_line_length": 64, "alphanum_fraction": 0.5958158996, "include": true, "reason": "from numpy", "num_tokens": 720}
|
/-
Copyright (c) 2022 Eric Wieser. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Eric Wieser
-/
import data.set.pointwise.basic
import data.list.of_fn
/-!
# Pointwise operations with lists of sets
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
This file proves some lemmas about pointwise algebraic operations with lists of sets.
-/
namespace set
variables {F α β γ : Type*}
variables [monoid α] {s t : set α} {a : α} {m n : ℕ}
open_locale pointwise
@[to_additive] lemma mem_prod_list_of_fn {a : α} {s : fin n → set α} :
a ∈ (list.of_fn s).prod ↔ ∃ f : (Π i : fin n, s i), (list.of_fn (λ i, (f i : α))).prod = a :=
begin
induction n with n ih generalizing a,
{ simp_rw [list.of_fn_zero, list.prod_nil, fin.exists_fin_zero_pi, eq_comm, set.mem_one] },
{ simp_rw [list.of_fn_succ, list.prod_cons, fin.exists_fin_succ_pi, fin.cons_zero, fin.cons_succ,
mem_mul, @ih, exists_and_distrib_left, exists_exists_eq_and, set_coe.exists, subtype.coe_mk,
exists_prop] }
end
@[to_additive] lemma mem_list_prod {l : list (set α)} {a : α} :
a ∈ l.prod ↔ ∃ l' : list (Σ s : set α, ↥s),
list.prod (l'.map (λ x, (sigma.snd x : α))) = a ∧ l'.map sigma.fst = l :=
begin
induction l using list.of_fn_rec with n f,
simp_rw [list.exists_iff_exists_tuple, list.map_of_fn, list.of_fn_inj', and.left_comm,
exists_and_distrib_left, exists_eq_left, heq_iff_eq, function.comp, mem_prod_list_of_fn],
split,
{ rintros ⟨fi, rfl⟩, exact ⟨λ i, ⟨_, fi i⟩, rfl, rfl⟩, },
{ rintros ⟨fi, rfl, rfl⟩, exact ⟨λ i, _, rfl⟩, },
end
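-- membership in `s ^ n`, phrased as the existence of an `n`-tuple of elements of `s`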
@[to_additive] lemma mem_pow {a : α} {n : ℕ} :
a ∈ s ^ n ↔ ∃ f : fin n → s, (list.of_fn (λ i, (f i : α))).prod = a :=
by rw [←mem_prod_list_of_fn, list.of_fn_const, list.prod_replicate]
end set
|
{"author": "leanprover-community", "repo": "mathlib", "sha": "5e526d18cea33550268dcbbddcb822d5cde40654", "save_path": "github-repos/lean/leanprover-community-mathlib", "path": "github-repos/lean/leanprover-community-mathlib/mathlib-5e526d18cea33550268dcbbddcb822d5cde40654/src/data/set/pointwise/list_of_fn.lean"}
|
[STATEMENT]
lemma Runit_in_runit [intro]:
assumes "arr f" and "t \<in> f"
shows "\<^bold>\<r>\<^bold>[t\<^bold>] \<in> \<r>[f]"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<^bold>\<r>\<^bold>[t\<^bold>] \<in> \<r>[f]
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<^bold>\<r>\<^bold>[t\<^bold>] \<in> \<r>[f]
[PROOF STEP]
have "Arr t \<and> Arr (rep f) \<and> Dom t = DOM f \<and> Cod t = COD f \<and> \<^bold>\<lfloor>t\<^bold>\<rfloor> = \<^bold>\<lfloor>rep f\<^bold>\<rfloor>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Arr t \<and> Arr (rep f) \<and> Dom t = DOM f \<and> Cod t = COD f \<and> \<^bold>\<lfloor>t\<^bold>\<rfloor> = \<^bold>\<lfloor>rep f\<^bold>\<rfloor>
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
arr f
t \<in> f
goal (1 subgoal):
1. Arr t \<and> Arr (rep f) \<and> Dom t = DOM f \<and> Cod t = COD f \<and> \<^bold>\<lfloor>t\<^bold>\<rfloor> = \<^bold>\<lfloor>rep f\<^bold>\<rfloor>
[PROOF STEP]
by (metis mkarr_memb(1) mkarr_memb(2) rep_mkarr rep_in_arr(1) equiv_iff_eq_norm
norm_rep)
[PROOF STATE]
proof (state)
this:
Arr t \<and> Arr (rep f) \<and> Dom t = DOM f \<and> Cod t = COD f \<and> \<^bold>\<lfloor>t\<^bold>\<rfloor> = \<^bold>\<lfloor>rep f\<^bold>\<rfloor>
goal (1 subgoal):
1. \<^bold>\<r>\<^bold>[t\<^bold>] \<in> \<r>[f]
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
Arr t \<and> Arr (rep f) \<and> Dom t = DOM f \<and> Cod t = COD f \<and> \<^bold>\<lfloor>t\<^bold>\<rfloor> = \<^bold>\<lfloor>rep f\<^bold>\<rfloor>
goal (1 subgoal):
1. \<^bold>\<r>\<^bold>[t\<^bold>] \<in> \<r>[f]
[PROOF STEP]
by (simp add: mkarr_def runit\<^sub>F\<^sub>M\<^sub>C_def)
[PROOF STATE]
proof (state)
this:
\<^bold>\<r>\<^bold>[t\<^bold>] \<in> \<r>[f]
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 806, "file": "MonoidalCategory_FreeMonoidalCategory", "length": 7}
|
#https://pythonbasics.org/webserver/
import os
import sys
import glob
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from tensorflow import keras
import numpy as np
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
import time
hostName = "localhost"
serverPort = 8080
def identify_image(fn):
image = keras.preprocessing.image.load_img(fn, color_mode="grayscale")
input_arr = keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr])
input_arr = np.abs(input_arr - 255.0)
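    # invert pixel intensities (dark strokes become bright), presumably to match
    # the preprocessing used when the model was trained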
p1 = model.predict(input_arr)
maxpred = p1.max()
predindex = int(p1.argmax(axis=-1))
predlabel = labels[predindex]
return f"'{predlabel}' with probabilty: {maxpred:.3f}"
class MyServer(BaseHTTPRequestHandler):
def do_POST(self):
content_length = int(self.headers['Content-Length'])
post_data = self.rfile.read(content_length)
print(content_length)
print(post_data)
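        # the body is expected to be a data URL ("data:image/...;base64,<payload>");
        # keep only the base64 payload after the comma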
        a = post_data.decode('utf-8').split(',')[1]
png = base64.b64decode(a)
        with open('x.jpg', 'wb') as f:
            f.write(png)
label = identify_image("x.jpg")
self.send_response(200)
self.send_header("Content-type", "text/plain")
self.send_header("Access-Control-Allow-Origin", "*")
self.send_header("Content-length", len(label))
self.end_headers()
self.wfile.write(bytes(label, "utf-8"))
if __name__ == "__main__":
webServer = HTTPServer((hostName, serverPort), MyServer)
print("Server started http://%s:%s" % (hostName, serverPort))
labels = sorted([os.path.basename(i) for i in glob.glob("../data/extracted_images/*")])
model = keras.models.load_model('./testmodel.data')
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close()
print("Server stopped.")
|
{"hexsha": "b75bc03d95ff91dd16e52abb5e88aa50857899cf", "size": 1865, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/webserver.py", "max_stars_repo_name": "FMurphy17/SBUDRP21", "max_stars_repo_head_hexsha": "9943c04de2b83314742a6063be7f4ba41620a5c1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/webserver.py", "max_issues_repo_name": "FMurphy17/SBUDRP21", "max_issues_repo_head_hexsha": "9943c04de2b83314742a6063be7f4ba41620a5c1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/webserver.py", "max_forks_repo_name": "FMurphy17/SBUDRP21", "max_forks_repo_head_hexsha": "9943c04de2b83314742a6063be7f4ba41620a5c1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0833333333, "max_line_length": 91, "alphanum_fraction": 0.6680965147, "include": true, "reason": "import numpy", "num_tokens": 452}
|
#
# Types supporting parameterized Timestep and Clock objects
#
abstract type AbstractTimestep <: MimiStruct end
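# A timestep on a uniform time grid: the type parameters FIRST, STEP, and LAST
# give the first time, the (fixed) interval between times, and the last time;
# the field `t` is the 1-based index of the current step.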
struct FixedTimestep{FIRST, STEP, LAST} <: AbstractTimestep
t::Int
end
struct VariableTimestep{TIMES} <: AbstractTimestep
t::Int
current::Int
function VariableTimestep{TIMES}(t::Int = 1) where {TIMES}
# The special case below handles when functions like next_step step beyond
# the end of the TIMES array. The assumption is that the length of this
# last timestep, starting at TIMES[end], is 1.
current::Int = t > length(TIMES) ? TIMES[end] + 1 : TIMES[t]
return new(t, current)
end
end
struct TimestepValue{T}
value::T
offset::Int
function TimestepValue(v::T; offset::Int = 0) where T
return new{T}(v, offset)
end
end
struct TimestepIndex
index::Int
end
mutable struct Clock{T <: AbstractTimestep} <: MimiStruct
ts::T
function Clock{T}(FIRST::Int, STEP::Int, LAST::Int) where T
return new(FixedTimestep{FIRST, STEP, LAST}(1))
end
function Clock{T}(TIMES::NTuple{N, Int} where N) where T
return new(VariableTimestep{TIMES}())
end
end
mutable struct TimestepArray{T_TS <: AbstractTimestep, T, N, ti} <: MimiStruct
data::Array{T, N}
function TimestepArray{T_TS, T, N, ti}(d::Array{T, N}) where {T_TS, T, N, ti}
return new(d)
end
function TimestepArray{T_TS, T, N, ti}(lengths::Int...) where {T_TS, T, N, ti}
return new(Array{T, N}(undef, lengths...))
end
end
# Since these are the most common cases, we define methods (in time.jl)
# specific to these type aliases, avoiding some of the inefficiencies
# associated with an arbitrary number of dimensions.
const TimestepMatrix{T_TS, T, ti} = TimestepArray{T_TS, T, 2, ti}
const TimestepVector{T_TS, T} = TimestepArray{T_TS, T, 1, 1}
|
{"hexsha": "b03378bf4186d6d18fafd11067070bb37abf6362", "size": 1837, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/core/types/time.jl", "max_stars_repo_name": "UnofficialJuliaMirrorSnapshots/Mimi.jl-e4e893b0-ee5e-52ea-8111-44b3bdec128c", "max_stars_repo_head_hexsha": "c9336f1076996dca728c30befd561280dfc18318", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/core/types/time.jl", "max_issues_repo_name": "UnofficialJuliaMirrorSnapshots/Mimi.jl-e4e893b0-ee5e-52ea-8111-44b3bdec128c", "max_issues_repo_head_hexsha": "c9336f1076996dca728c30befd561280dfc18318", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/core/types/time.jl", "max_forks_repo_name": "UnofficialJuliaMirrorSnapshots/Mimi.jl-e4e893b0-ee5e-52ea-8111-44b3bdec128c", "max_forks_repo_head_hexsha": "c9336f1076996dca728c30befd561280dfc18318", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.4179104478, "max_line_length": 82, "alphanum_fraction": 0.6837234622, "num_tokens": 534}
|
"""
Some utils used in all demos
Maxim Berman 2018 ESAT-PSI KU Leuven (MIT License)
"""
from __future__ import print_function, division
import numpy as np
from PIL import Image, ImageDraw
import contextlib
def paletteVOC(N=256, normalized=False, PIL=False):
"""
Pascal VOC color map
"""
def bitget(byteval, idx):
return ((byteval & (1 << idx)) != 0)
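    # build the VOC colormap: the bits of each label index are distributed across
    # the R, G, and B channels, three bits per round, from high to low positions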
dtype = 'float32' if normalized else 'uint8'
cmap = np.zeros((N, 3), dtype=dtype)
for i in range(N):
r = g = b = 0
c = i
for j in range(8):
r = r | (bitget(c, 0) << 7-j)
g = g | (bitget(c, 1) << 7-j)
b = b | (bitget(c, 2) << 7-j)
c = c >> 3
cmap[i] = np.array([r, g, b])
cmap = cmap/255 if normalized else cmap
if PIL:
cmap = [k for l in cmap for k in l]
return cmap
def pil(array):
im = Image.fromarray(array)
im.putpalette(paletteVOC(PIL=True))
return im
def pil_grid(images, max_horiz=np.iinfo(int).max, margin=0, background='white'):
"""
Grid of images in PIL
"""
n_images = len(images)
n_horiz = min(n_images, max_horiz)
h_sizes, v_sizes = [0] * n_horiz, [0] * (n_images // n_horiz)
for i, im in enumerate(images):
h, v = i % n_horiz, i // n_horiz
h_sizes[h] = max(h_sizes[h], im.size[0]) + margin
v_sizes[v] = max(v_sizes[v], im.size[1]) + margin
h_sizes, v_sizes = np.cumsum([0] + h_sizes), np.cumsum([0] + v_sizes)
im_grid = Image.new('RGB', (h_sizes[-1], v_sizes[-1]), color=background)
for i, im in enumerate(images):
im_grid.paste(im, (h_sizes[i % n_horiz], v_sizes[i // n_horiz]))
return im_grid
def dummy_triangles(w, categories=[0, 255, 1]):
"""
Generate random images with desired categories and random triangles
"""
im = Image.new('P', (w, w), color=categories[0])
im.putpalette(paletteVOC(PIL=True))
draw = ImageDraw.Draw(im)
for c in categories[1:]:
draw.polygon([tuple(p) for p in np.random.randint(w, size=(3, 2))], fill=c, outline=None)
return im
# https://stackoverflow.com/questions/2891790/how-to-pretty-printing-a-numpy-array-without-scientific-notation-and-with-given
@contextlib.contextmanager
def printoptions(*args, **kwargs):
original = np.get_printoptions()
np.set_printoptions(*args, **kwargs)
try:
yield
finally:
np.set_printoptions(**original)
|
{"hexsha": "9a9a09d2301518dfd92cb094cf4fb474f88b1984", "size": 2437, "ext": "py", "lang": "Python", "max_stars_repo_path": "LovaszSoftmax/demo_helpers/demo_utils.py", "max_stars_repo_name": "ljhclover/pytorch_Unet_CZI", "max_stars_repo_head_hexsha": "92a3c295077562161d747157fdcba998132d4a94", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1307, "max_stars_repo_stars_event_min_datetime": "2018-02-26T09:18:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T05:51:13.000Z", "max_issues_repo_path": "utils/LovaszSoftmax/demo_helpers/demo_utils.py", "max_issues_repo_name": "FelixFu520/segmentation", "max_issues_repo_head_hexsha": "66503244826110e19446cf274f49d6e018e57f2c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2018-04-20T14:52:48.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-11T11:52:56.000Z", "max_forks_repo_path": "utils/LovaszSoftmax/demo_helpers/demo_utils.py", "max_forks_repo_name": "FelixFu520/segmentation", "max_forks_repo_head_hexsha": "66503244826110e19446cf274f49d6e018e57f2c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 289, "max_forks_repo_forks_event_min_datetime": "2018-04-15T11:50:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T06:46:09.000Z", "avg_line_length": 29.7195121951, "max_line_length": 125, "alphanum_fraction": 0.6064833812, "include": true, "reason": "import numpy", "num_tokens": 730}
|
subroutine shearmod(eg,enu,temp,props,nprops)
implicit real*8(a-h,o-z)
dimension props(nprops)
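c     Varshni-type temperature dependence of the shear modulus:
c       G(T) = mu0 - d0/(exp(t0/T) - 1)  for T > t0,  else  G = mu0
c     props(1) = mu0, props(2) = d0, props(3) = t0,
c     props(4) = Poisson's ratio (clamped to at most 0.499)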
emu0=props(1)
ed0=props(2)
et0=props(3)
enu=min(dabs(props(4)),0.499d0)
if(temp.gt.et0) then
eg = emu0 - ed0/(dexp(et0/temp)-1.d0)
else
eg = emu0
endif
return
end
|
{"hexsha": "3098222925966d3f64fc65489f27656136eefbd8", "size": 346, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "shearmodule/shearvarshni.f", "max_stars_repo_name": "jacojvr/UMATs", "max_stars_repo_head_hexsha": "878141ea5a028bccb808f1fde83c9502e374a0a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-03-28T01:12:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-21T07:52:27.000Z", "max_issues_repo_path": "shearmodule/shearvarshni.f", "max_issues_repo_name": "jacojvr/UMATs", "max_issues_repo_head_hexsha": "878141ea5a028bccb808f1fde83c9502e374a0a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "shearmodule/shearvarshni.f", "max_forks_repo_name": "jacojvr/UMATs", "max_forks_repo_head_hexsha": "878141ea5a028bccb808f1fde83c9502e374a0a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-12-23T08:54:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T12:38:16.000Z", "avg_line_length": 24.7142857143, "max_line_length": 51, "alphanum_fraction": 0.5404624277, "num_tokens": 120}
|
using Base: @deprecate
@deprecate by{T,N}(ta::TimeArray{T,N}, t::Int; period::Function=day) when(ta, period, t)
@deprecate by{T,N}(ta::TimeArray{T,N}, t::String; period::Function=day) when(ta, period, t)
@deprecate to(ta::TimeArray, y::Int, m::Int, d::Int) to(ta, Date(y, m, d))
@deprecate from(ta::TimeArray, y::Int, m::Int, d::Int) from(ta, Date(y, m, d))
@deprecate findall(ta::TimeArray) find(ta)
@deprecate collapse{T,N,D}(ta::TimeArray{T,N,D}, timestamp::Function; period::Function=week) collapse(ta, period, timestamp)
# since Julia 0.6:
# deprecate non-dot functions in favor of broadcast (dot) calls, which fuse since 0.6
for f ∈ (:^, :/, :abs, :sign, :sqrt, :cbrt,
:log, :log2, :log10, :log1p,
:exp, :exp2, :exp10, :expm1,
:cos, :sin, :tan, :cosd, :sind, :tand,
:acos, :asin, :atan, :acosd, :asind, :atand,
:isnan, :isinf)
@eval import Base: $f
@eval @deprecate $f(ta::TimeArray, args...) $f.(ta, args...)
end
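# e.g. `log(ta)` now warns and lowers to the broadcasting form `log.(ta)`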
for f ∈ (:+, :-, :*, :%,
:|, :&, :<, :>, :(==), :(!=), :>=, :<=)
@eval import Base: $f
@eval @deprecate $f(ta::TimeArray, args...) $f.(ta, args...)
@eval @deprecate $f(n::Number, ta::TimeArray) $f.(n, ta)
end
# non-dot operators
import Base: $, !, ~
@deprecate ($)(ta1::TimeArray, ta2::TimeArray) xor.(ta1, ta2)
@deprecate ($)(n::Integer, ta::TimeArray) xor.(n, ta)
@deprecate ($)(ta::TimeArray, n::Integer) xor.(ta, n)
@deprecate ~(ta::TimeArray) .~(ta)
@deprecate !(ta::TimeArray) .!(ta)
|
{"hexsha": "b92720c5a5f393e89abe970672ecbabea0cacba6", "size": 1469, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/deprecated.jl", "max_stars_repo_name": "BenjaminBorn/TimeSeries.jl", "max_stars_repo_head_hexsha": "c38509ea8427afd74cc140e0f7f6c7ca0cb39c06", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/deprecated.jl", "max_issues_repo_name": "BenjaminBorn/TimeSeries.jl", "max_issues_repo_head_hexsha": "c38509ea8427afd74cc140e0f7f6c7ca0cb39c06", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/deprecated.jl", "max_forks_repo_name": "BenjaminBorn/TimeSeries.jl", "max_forks_repo_head_hexsha": "c38509ea8427afd74cc140e0f7f6c7ca0cb39c06", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-16T22:57:15.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-16T22:57:15.000Z", "avg_line_length": 34.9761904762, "max_line_length": 124, "alphanum_fraction": 0.586113002, "num_tokens": 571}
|
# -*- coding: utf-8 -*-
"""SAC agent from demonstration for episodic tasks in OpenAI Gym.
- Author: Curt Park
- Contact: curt.park@medipixel.io
- Paper: https://arxiv.org/pdf/1801.01290.pdf
https://arxiv.org/pdf/1812.05905.pdf
https://arxiv.org/pdf/1511.05952.pdf
https://arxiv.org/pdf/1707.08817.pdf
"""
import argparse
import os
import pickle
from typing import Tuple
import gym
import numpy as np
import torch
import torch.optim as optim
import wandb
import algorithms.common.helper_functions as common_utils
from algorithms.common.abstract.agent import AbstractAgent
from algorithms.common.buffer.priortized_replay_buffer import PrioritizedReplayBufferfD
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Agent(AbstractAgent):
"""SAC agent interacting with environment.
    Attributes:
        memory (PrioritizedReplayBufferfD): replay memory with prioritized demo sampling
        actor (nn.Module): actor model to select actions
        actor_optimizer (Optimizer): optimizer for training actor
        vf (nn.Module): critic model to predict state values
        vf_target (nn.Module): target network of the state-value critic
        vf_optimizer (Optimizer): optimizer for training vf
        qf_1 (nn.Module): critic model to predict state-action values
        qf_2 (nn.Module): critic model to predict state-action values
        qf_1_optimizer (Optimizer): optimizer for training qf_1
        qf_2_optimizer (Optimizer): optimizer for training qf_2
        curr_state (np.ndarray): temporary storage of the current state
        target_entropy (float): desired entropy used for the inequality constraint
        log_alpha (torch.Tensor): log of the entropy weight alpha
        alpha_optimizer (Optimizer): optimizer for alpha
        beta (float): importance-sampling exponent for the prioritized replay buffer
        hyper_params (dict): hyper-parameters
        total_step (int): total step number
        episode_step (int): step number of the current episode
    """
def __init__(
self,
env: gym.Env,
args: argparse.Namespace,
hyper_params: dict,
models: tuple,
optims: tuple,
target_entropy: float,
):
"""Initialization.
Args:
env (gym.Env): openAI Gym environment
args (argparse.Namespace): arguments including hyperparameters and training settings
hyper_params (dict): hyper-parameters
models (tuple): models including actor and critic
optims (tuple): optimizers for actor and critic
target_entropy (float): target entropy for the inequality constraint
"""
AbstractAgent.__init__(self, env, args)
self.actor, self.vf, self.vf_target, self.qf_1, self.qf_2 = models
self.actor_optimizer, self.vf_optimizer = optims[0:2]
self.qf_1_optimizer, self.qf_2_optimizer = optims[2:4]
self.hyper_params = hyper_params
self.curr_state = np.zeros((1,))
self.total_step = 0
self.episode_step = 0
# automatic entropy tuning
if hyper_params["AUTO_ENTROPY_TUNING"]:
self.target_entropy = target_entropy
self.log_alpha = torch.zeros(1, requires_grad=True, device=device)
self.alpha_optimizer = optim.Adam(
[self.log_alpha], lr=hyper_params["LR_ENTROPY"]
)
# load the optimizer and model parameters
if args.load_from is not None and os.path.exists(args.load_from):
self.load_params(args.load_from)
if not self.args.test:
# load demo replay memory
with open(self.args.demo_path, "rb") as f:
demo = pickle.load(f)
# replay memory
self.beta = hyper_params["PER_BETA"]
self.memory = PrioritizedReplayBufferfD(
hyper_params["BUFFER_SIZE"],
hyper_params["BATCH_SIZE"],
demo=list(demo),
alpha=hyper_params["PER_ALPHA"],
)
def select_action(self, state: np.ndarray) -> np.ndarray:
"""Select an action from the input space."""
self.curr_state = state
# if initial random action should be conducted
if (
self.total_step < self.hyper_params["INITIAL_RANDOM_ACTION"]
and not self.args.test
):
return self.env.action_space.sample()
state = torch.FloatTensor(state).to(device)
if self.args.test and not self.is_discrete:
_, _, _, selected_action, _ = self.actor(state)
else:
selected_action, _, _, _, _ = self.actor(state)
return selected_action.detach().cpu().numpy()
def step(self, action: np.ndarray) -> Tuple[np.ndarray, np.float64, bool]:
"""Take an action and return the response of the env."""
self.total_step += 1
self.episode_step += 1
next_state, reward, done, _ = self.env.step(action)
if not self.args.test:
# if the last state is not a terminal state, store done as false
done_bool = (
False if self.episode_step == self.args.max_episode_steps else done
)
self.memory.add(self.curr_state, action, reward, next_state, done_bool)
return next_state, reward, done
def update_model(
self, experiences: Tuple[torch.Tensor, ...]
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
"""Train the model after each episode."""
states, actions, rewards, next_states, dones, weights, indexes, eps_d = (
experiences
)
new_actions, log_prob, pre_tanh_value, mu, std = self.actor(states)
# train alpha
if self.hyper_params["AUTO_ENTROPY_TUNING"]:
alpha_loss = torch.mean(
(-self.log_alpha * (log_prob + self.target_entropy).detach()) * weights
)
self.alpha_optimizer.zero_grad()
alpha_loss.backward()
self.alpha_optimizer.step()
alpha = self.log_alpha.exp()
else:
alpha_loss = torch.zeros(1)
alpha = self.hyper_params["W_ENTROPY"]
# Q function loss
masks = 1 - dones
q_1_pred = self.qf_1(states, actions)
q_2_pred = self.qf_2(states, actions)
v_target = self.vf_target(next_states)
q_target = rewards + self.hyper_params["GAMMA"] * v_target * masks
qf_1_loss = torch.mean((q_1_pred - q_target.detach()).pow(2) * weights)
qf_2_loss = torch.mean((q_2_pred - q_target.detach()).pow(2) * weights)
# V function loss
v_pred = self.vf(states)
q_pred = torch.min(
self.qf_1(states, new_actions), self.qf_2(states, new_actions)
)
v_target = (q_pred - alpha * log_prob).detach()
vf_loss = torch.mean((v_pred - v_target).pow(2) * weights)
# train Q functions
self.qf_1_optimizer.zero_grad()
qf_1_loss.backward()
self.qf_1_optimizer.step()
self.qf_2_optimizer.zero_grad()
qf_2_loss.backward()
self.qf_2_optimizer.step()
# train V function
self.vf_optimizer.zero_grad()
vf_loss.backward()
self.vf_optimizer.step()
if self.total_step % self.hyper_params["DELAYED_UPDATE"] == 0:
# actor loss
advantage = q_pred - v_pred.detach()
actor_loss_element_wise = alpha * log_prob - advantage
actor_loss = torch.mean(actor_loss_element_wise * weights)
# regularization
if not self.is_discrete: # iff the action is continuous
mean_reg = self.hyper_params["W_MEAN_REG"] * mu.pow(2).mean()
std_reg = self.hyper_params["W_STD_REG"] * std.pow(2).mean()
pre_activation_reg = self.hyper_params["W_PRE_ACTIVATION_REG"] * (
pre_tanh_value.pow(2).sum(dim=-1).mean()
)
actor_reg = mean_reg + std_reg + pre_activation_reg
# actor loss + regularization
actor_loss += actor_reg
# train actor
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# update target networks
common_utils.soft_update(self.vf, self.vf_target, self.hyper_params["TAU"])
# update priorities
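            # priority recipe (in the spirit of DQfD-style prioritization):
            # squared value-TD error + LAMDA3 * squared per-sample actor loss
            # + a small PER_EPS floor; eps_d adds an extra bonus so demo
            # transitions keep being sampled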
new_priorities = (v_pred - v_target).pow(2)
new_priorities += self.hyper_params["LAMDA3"] * actor_loss_element_wise.pow(
2
)
new_priorities += self.hyper_params["PER_EPS"]
new_priorities = new_priorities.data.cpu().numpy().squeeze()
new_priorities += eps_d
self.memory.update_priorities(indexes, new_priorities)
else:
actor_loss = torch.zeros(1)
return (
actor_loss.data,
qf_1_loss.data,
qf_2_loss.data,
vf_loss.data,
alpha_loss.data,
)
def load_params(self, path: str):
"""Load model and optimizer parameters."""
if not os.path.exists(path):
print("[ERROR] the input path does not exist. ->", path)
return
params = torch.load(path)
self.actor.load_state_dict(params["actor"])
self.qf_1.load_state_dict(params["qf_1"])
self.qf_2.load_state_dict(params["qf_2"])
self.vf.load_state_dict(params["vf"])
self.vf_target.load_state_dict(params["vf_target"])
self.actor_optimizer.load_state_dict(params["actor_optim"])
self.qf_1_optimizer.load_state_dict(params["qf_1_optim"])
self.qf_2_optimizer.load_state_dict(params["qf_2_optim"])
self.vf_optimizer.load_state_dict(params["vf_optim"])
if self.hyper_params["AUTO_ENTROPY_TUNING"]:
self.alpha_optimizer.load_state_dict(params["alpha_optim"])
print("[INFO] loaded the model and optimizer from", path)
def save_params(self, n_episode: int):
"""Save model and optimizer parameters."""
params = {
"actor": self.actor.state_dict(),
"qf_1": self.qf_1.state_dict(),
"qf_2": self.qf_2.state_dict(),
"vf": self.vf.state_dict(),
"vf_target": self.vf_target.state_dict(),
"actor_optim": self.actor_optimizer.state_dict(),
"qf_1_optim": self.qf_1_optimizer.state_dict(),
"qf_2_optim": self.qf_2_optimizer.state_dict(),
"vf_optim": self.vf_optimizer.state_dict(),
}
if self.hyper_params["AUTO_ENTROPY_TUNING"]:
params["alpha_optim"] = self.alpha_optimizer.state_dict()
AbstractAgent.save_params(self, params, n_episode)
def write_log(
self, i: int, loss: np.ndarray, score: int = 0, delayed_update: int = 1
):
"""Write log about loss and score"""
total_loss = loss.sum()
print(
"[INFO] episode %d, episode_step %d, total step %d, total score: %d\n"
"total loss: %.3f actor_loss: %.3f qf_1_loss: %.3f qf_2_loss: %.3f "
"vf_loss: %.3f alpha_loss: %.3f\n"
% (
i,
self.episode_step,
self.total_step,
score,
total_loss,
loss[0] * delayed_update, # actor loss
loss[1], # qf_1 loss
loss[2], # qf_2 loss
loss[3], # vf loss
loss[4], # alpha loss
)
)
if self.args.log:
wandb.log(
{
"score": score,
"total loss": total_loss,
"actor loss": loss[0] * delayed_update,
"qf_1 loss": loss[1],
"qf_2 loss": loss[2],
"vf loss": loss[3],
"alpha loss": loss[4],
}
)
def pretrain(self):
"""Pretraining steps."""
pretrain_loss = list()
print("[INFO] Pre-Train %d steps." % self.hyper_params["PRETRAIN_STEP"])
for i_step in range(1, self.hyper_params["PRETRAIN_STEP"] + 1):
experiences = self.memory.sample()
loss = self.update_model(experiences)
pretrain_loss.append(loss) # for logging
# logging
if i_step == 1 or i_step % 100 == 0:
avg_loss = np.vstack(pretrain_loss).mean(axis=0)
pretrain_loss.clear()
self.write_log(
0, avg_loss, 0, delayed_update=self.hyper_params["DELAYED_UPDATE"]
)
def train(self):
"""Train the agent."""
# logger
if self.args.log:
wandb.init()
wandb.config.update(self.hyper_params)
wandb.watch([self.actor, self.vf, self.qf_1, self.qf_2], log="parameters")
# pre-training by demo
self.pretrain()
# train
print("[INFO] Train Start.")
for i_episode in range(1, self.args.episode_num + 1):
state = self.env.reset()
done = False
score = 0
self.episode_step = 0
loss_episode = list()
while not done:
if self.args.render and i_episode >= self.args.render_after:
self.env.render()
action = self.select_action(state)
next_state, reward, done = self.step(action)
state = next_state
score += reward
# training
if len(self.memory) >= self.hyper_params["BATCH_SIZE"]:
for _ in range(self.hyper_params["MULTIPLE_LEARN"]):
experiences = self.memory.sample(self.beta)
loss = self.update_model(experiences)
loss_episode.append(loss) # for logging
# increase beta
fraction = min(float(i_episode) / self.args.episode_num, 1.0)
self.beta = self.beta + fraction * (1.0 - self.beta)
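                # annealing beta -> 1 makes the PER importance-sampling
                # weights asymptotically unbiased by the end of training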
# logging
if loss_episode:
avg_loss = np.vstack(loss_episode).mean(axis=0)
self.write_log(
i_episode, avg_loss, score, self.hyper_params["DELAYED_UPDATE"]
)
if i_episode % self.args.save_period == 0:
self.save_params(i_episode)
# termination
self.env.close()
|
{"hexsha": "0a674e7d4537fac8ae6500cc17d1116f3899c093", "size": 14967, "ext": "py", "lang": "Python", "max_stars_repo_path": "algorithms/fd/sac_agent.py", "max_stars_repo_name": "ur1ove/rl_algorithms", "max_stars_repo_head_hexsha": "3e7a554d5ea83b4c19bad7a51d4867cacc986aa9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-28T13:20:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-28T13:20:23.000Z", "max_issues_repo_path": "algorithms/fd/sac_agent.py", "max_issues_repo_name": "ur1ove/rl_algorithms", "max_issues_repo_head_hexsha": "3e7a554d5ea83b4c19bad7a51d4867cacc986aa9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algorithms/fd/sac_agent.py", "max_forks_repo_name": "ur1ove/rl_algorithms", "max_forks_repo_head_hexsha": "3e7a554d5ea83b4c19bad7a51d4867cacc986aa9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.773955774, "max_line_length": 96, "alphanum_fraction": 0.576869112, "include": true, "reason": "import numpy", "num_tokens": 3366}
|
import io
import os
from cassandra.cluster import Cluster, BatchStatement, ConsistencyLevel
from cassandra.auth import PlainTextAuthProvider
import boto3
import pandas as pd
import numpy as np
import datetime
now = datetime.datetime.now()
print("{} Starting Finnhub preloader".format(now.strftime("%Y-%m-%d %H:%M:%S")))
CASSANDRA_HOST = os.environ.get("CASSANDRA_HOST") if os.environ.get("CASSANDRA_HOST") else "localhost"
CASSANDRA_USER = os.environ.get("CASSANDRA_USER")
CASSANDRA_PWD = os.environ.get("CASSANDRA_PWD")
CASSANDRA_KEYSPACE = os.environ.get("CASSANDRA_KEYSPACE") if os.environ.get("CASSANDRA_KEYSPACE") else 'kafkapipeline'
BATCH_SIZE = int(os.environ.get("BATCH_SIZE", 100))
NUM_WORKERS = int(os.environ.get("NUM_WORKERS", 10))
TABLE_NAME = (
os.environ.get("TABLE_NAME") if os.environ.get("TABLE_NAME") else "gamestop"
)
BUCKET_NAME = (
os.environ.get("BUCKET_NAME")
if os.environ.get("BUCKET_NAME")
else "bb-s3-bucket-cmpt733"
)
session = boto3.Session(
aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
aws_secret_access_key=os.environ['AWS_SECRET_KEY'],
region_name=os.environ['REGION_NAME'])
s3 = session.resource('s3')
bucket = s3.Bucket(BUCKET_NAME)
print("Reading data from bucket")
postsobj = bucket.Object(key='gamestop.csv')
response = postsobj.get()
historicaldata = pd.read_csv(io.BytesIO(response['Body'].read()), encoding='utf8')
historicaldata['hour'] = pd.to_datetime(historicaldata['hour'])
historicaldata = historicaldata.rename(
columns={
"closeprice": "close_price",
"openprice": "open_price",
"highprice": "high_price",
"lowprice": "low_price",
}
)
historicaldata = historicaldata.drop("prediction", axis=1)
print("Reading data from bucket --- done")
print("Splitting data")
databatches = np.array_split(historicaldata, 100)
auth_provider = PlainTextAuthProvider(username=CASSANDRA_USER, password=CASSANDRA_PWD)
cluster = Cluster([CASSANDRA_HOST], auth_provider=auth_provider)
# cluster = Cluster([CASSANDRA_HOST])
cassandrasession = cluster.connect(CASSANDRA_KEYSPACE)
cassandrasession.default_timeout = 60
cassandrasession.request_timeout = 30
insertlogs = cassandrasession.prepare(f"INSERT INTO {TABLE_NAME} (hour, \
open_price, high_price, \
low_price, volume, close_price) \
VALUES (?, ?, ?, ?, ?, ?)")
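# preparing the statement once lets Cassandra parse the CQL a single time;
# each batched insert below only binds new values to the placeholders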
counter = 0
totalcount = 0
batches = []
now = datetime.datetime.now()
print("{} Sending {} data to cassandra in {} batches with {} rows"\
.format(now.strftime("%Y-%m-%d %H:%M:%S"), len(historicaldata), \
len(databatches), len(databatches[0])))
# with ThreadPoolExecutor(max_workers=NUM_WORKERS) as executor:
def processit(df):
    counter = 0
    batch = BatchStatement(consistency_level=ConsistencyLevel.QUORUM)
    for index, values in df.iterrows():
        batch.add(insertlogs,
                  (values['hour'], values['open_price'], values['high_price'],
                   values['low_price'], values['volume'], values['close_price']))
        counter += 1
        if counter >= BATCH_SIZE:
            # print('Inserting ' + str(counter) + ' records from batch')
            counter = 0
            cassandrasession.execute(batch, trace=True)
            batch = BatchStatement(consistency_level=ConsistencyLevel.QUORUM)
    if counter > 0:
        cassandrasession.execute(batch, trace=True)
    return len(df)
for df_ in databatches:
    totalcount += processit(df_)
# exec_ = lambda : processit(df_)
# batches.append(executor.submit(exec_))
now = datetime.datetime.now()
print("{} Done sending {} finnhub records to cassandra".format(now.strftime("%Y-%m-%d %H:%M:%S"), totalcount))
def query_table(source_table, colstring="*"):
# source_table: target table name to query (string)
auth_provider = PlainTextAuthProvider(username=CASSANDRA_USER, password=CASSANDRA_PWD)
cluster = Cluster([CASSANDRA_HOST], auth_provider=auth_provider)
session = cluster.connect(CASSANDRA_KEYSPACE)
# session.row_factory = dict_factory
cqlquery = f"SELECT {colstring} FROM {source_table};"
rows = session.execute(cqlquery)
return pd.DataFrame(rows)
foundrows = query_table(TABLE_NAME)
print(f"Inserted {len(foundrows)} rows in total")
print("Bye Bye!")
|
{"hexsha": "55db527c8de6e62596158784a001e92469a8d02a", "size": 4568, "ext": "py", "lang": "Python", "max_stars_repo_path": "foobar/preloader/preload_finnhub.py", "max_stars_repo_name": "brijow/foobar-gamestop", "max_stars_repo_head_hexsha": "54302ab330ff5a2099c7300f943f271ea9d30b52", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "foobar/preloader/preload_finnhub.py", "max_issues_repo_name": "brijow/foobar-gamestop", "max_issues_repo_head_hexsha": "54302ab330ff5a2099c7300f943f271ea9d30b52", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "foobar/preloader/preload_finnhub.py", "max_forks_repo_name": "brijow/foobar-gamestop", "max_forks_repo_head_hexsha": "54302ab330ff5a2099c7300f943f271ea9d30b52", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.0666666667, "max_line_length": 118, "alphanum_fraction": 0.6670315236, "include": true, "reason": "import numpy", "num_tokens": 1101}
|
#! /usr/bin/Rscript
args <- commandArgs(trailingOnly = TRUE)
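# args[1]: per-site miRanda feature table (tab-separated, with header)
# args[2]: output prefix -- the aggregated table is written to <args[2]>.csv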
headersTXT = c(
"Miranda_score",
"miR_ID",
"mRNA_ID",
"Start_position",
"End_position",
"Seed_match_6mer2",
"miR_match_P01",
"Seed_match_7mer2",
"Seed_match_7mer1",
"Seed_MFE",
"X3p_MFE",
"Target_UC_comp",
"miR_match_P09",
"miR_match_P02",
"Seed_GU",
"miR_match_P07",
"miR_match_P19",
"miR_match_P15"
)
read.table(args[1],sep="\t", header=TRUE) -> features
features = subset(features, select = headersTXT)
#library(seqinr)
#read.fasta(file=args[2], as.string=TRUE) -> fa
#fa=fa[which(!duplicated(names(fa)))]
#utr_length = t(sapply(names(fa), function(i) {
# c(i, nchar(fa[[i]][1]))
#}))
#remove records with UTRs not in fasta file
#features = subset(features, mRNA_ID %in% names(fa))
attach(features)
features_numeric = subset(features, select = -c(miR_ID,mRNA_ID))
aggregate(features_numeric, by=list(miR_ID,mRNA_ID), sum) -> features_sum
aggregate(1:dim(features_numeric)[1], by=list(miR_ID,mRNA_ID), length) -> number_sites
colnames(number_sites)[3] <- "number_sites"
aggregate(features_numeric, by=list(miR_ID,mRNA_ID), mean) -> features_mean
aggregate(features_numeric, by=list(miR_ID,mRNA_ID), max) -> features_max
aggregate(features_numeric, by=list(miR_ID,mRNA_ID), min) -> features_min
detach(features)
features_sum2 = features_sum
features_mean2 = features_mean
features_max2 = features_max
features_min2 = features_min
colnames(features_sum2) <- paste0(colnames(features_sum2), ".sum")
colnames(features_mean2) <- paste0(colnames(features_mean2), ".mean")
colnames(features_max2) <- paste0(colnames(features_max2), ".max")
colnames(features_min2) <- paste0(colnames(features_min2), ".min")
output = merge(number_sites, features_sum2, by=c(1,2))
output = merge(output, features_mean2, by=c(1,2))
output = merge(output, features_max2, by=c(1,2))
output = merge(output, features_min2, by=c(1,2))
selected_features =
c("Group.1",
"Group.2",
"Miranda_score.max",
"Seed_match_6mer2.mean",
"miR_match_P01.min",
"Seed_match_7mer2.max",
"Seed_match_7mer1.mean",
"Seed_MFE.min",
"X3p_MFE.mean",
"Target_UC_comp.mean",
"miR_match_P09.mean",
"miR_match_P02.min",
"Seed_GU.mean",
"miR_match_P07.mean",
"Start_position.min",
"miR_match_P19.min",
"miR_match_P15.min"
)
output2 <- output[,selected_features]
write.table(output2, file=paste0(args[2], ".csv"), sep=",", row.names=FALSE)
|
{"hexsha": "528da104a21a42bd224de5690288d08ced957875", "size": 2352, "ext": "r", "lang": "R", "max_stars_repo_path": "Core/calc_utr_features_selected.r", "max_stars_repo_name": "lanagarmire/MirMark", "max_stars_repo_head_hexsha": "19339ee7dbff9bdfd627b642f3df81358cf8c5f6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-03-29T08:44:03.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-29T14:21:05.000Z", "max_issues_repo_path": "Core/calc_utr_features_selected.r", "max_issues_repo_name": "lanagarmire/MirMark", "max_issues_repo_head_hexsha": "19339ee7dbff9bdfd627b642f3df81358cf8c5f6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-01-05T07:26:05.000Z", "max_issues_repo_issues_event_max_datetime": "2017-01-05T07:26:05.000Z", "max_forks_repo_path": "Core/calc_utr_features_selected.r", "max_forks_repo_name": "lanagarmire/MirMark", "max_forks_repo_head_hexsha": "19339ee7dbff9bdfd627b642f3df81358cf8c5f6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0, "max_line_length": 86, "alphanum_fraction": 0.7461734694, "num_tokens": 713}
|
from typing import List, Tuple, Dict
from dataclasses import dataclass
import numpy as np
import cv2
from img_proc.padding import calc_pad_size
BASE_IMG_SIZE = 300
@dataclass
class BBox:
x1: int
y1: int
x2: int
y2: int
def calc_line_size(img: np.ndarray) -> int:
    h, w = img.shape[:2]
    img_size = h if h > w else w
    # never return 0: frames smaller than BASE_IMG_SIZE still need a 1px line
    return max(1, img_size // BASE_IMG_SIZE)
def calc_text_scale(img: np.ndarray) -> float:
h, w = img.shape[:2]
img_size = h if h > w else w
return img_size / BASE_IMG_SIZE / 4
def correct_bbox(
img: np.ndarray,
boxes: np.ndarray,
input_height: int,
input_width: int,
) -> List[BBox]:
h, w = img.shape[:2]
# calc padding size
tblr = calc_pad_size(*img.shape[:2])
paded_h = h + tblr.top + tblr.bottom
paded_w = w + tblr.left + tblr.right
# correct bbox for padding and resize
h_ratio = paded_h / input_height
w_ratio = paded_w / input_width
return [
BBox(
x1=int(box[0] * w_ratio - tblr.left),
y1=int(box[1] * h_ratio - tblr.top),
x2=int(box[2] * w_ratio - tblr.left),
y2=int(box[3] * h_ratio - tblr.top),
)
for box in boxes
]
def draw_bboxs(
img: np.ndarray,
classes_dict: Dict[int, str],
bboxes: List[BBox],
out_classes: List[int],
):
text_scale = calc_text_scale(img)
thickness = calc_line_size(img)
for box, cls_idx in zip(bboxes, out_classes):
draw_bbox(
img,
box,
classes_dict[cls_idx],
text_scale=text_scale,
thickness=thickness,
)
def draw_bbox(
img: np.ndarray,
bbox: BBox,
label: str,
color: Tuple[int, int, int] = (0, 255, 0),
text_color: Tuple[int, int, int] = (0, 0, 0),
text_scale: float = 1.0,
thickness: int = 5,
):
cv2.rectangle(
img,
pt1=(bbox.x1, bbox.y1),
pt2=(bbox.x2, bbox.y2),
color=color,
thickness=thickness,
)
(w, h), _ = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, text_scale, 1)
cv2.rectangle(img, (bbox.x1, bbox.y1 - h), (bbox.x1 + w, bbox.y1), color, -1)
cv2.putText(
img,
label,
(bbox.x1, bbox.y1 - 3),
cv2.FONT_HERSHEY_SIMPLEX,
text_scale,
text_color,
1,
)
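# Minimal usage sketch (illustrative only): draw one made-up box with a made-up
# class map on a blank frame; real callers pass detector outputs instead.
if __name__ == "__main__":
    demo_img = np.zeros((480, 640, 3), dtype=np.uint8)
    draw_bboxs(demo_img, {0: "person"}, [BBox(x1=50, y1=60, x2=200, y2=300)], [0])
    cv2.imwrite("bbox_demo.png", demo_img)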
|
{"hexsha": "844a44be0cb80d93d39b3a827f6ca06c79d85618", "size": 2338, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/img_proc/bbox.py", "max_stars_repo_name": "rewolfiluac/convert-torch2trt-demo", "max_stars_repo_head_hexsha": "9b9a3646bdf8b82af2149e73b4cd57939c6729cd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/img_proc/bbox.py", "max_issues_repo_name": "rewolfiluac/convert-torch2trt-demo", "max_issues_repo_head_hexsha": "9b9a3646bdf8b82af2149e73b4cd57939c6729cd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/img_proc/bbox.py", "max_forks_repo_name": "rewolfiluac/convert-torch2trt-demo", "max_forks_repo_head_hexsha": "9b9a3646bdf8b82af2149e73b4cd57939c6729cd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.2666666667, "max_line_length": 81, "alphanum_fraction": 0.5765611634, "include": true, "reason": "import numpy", "num_tokens": 697}
|
# MIT License
#
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2019
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
Implementation of the High-Confidence-Low-Uncertainty (HCLU) adversarial example formulation by Grosse et al. (2018)
| Paper link: https://arxiv.org/abs/1812.02606
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import copy
import logging
from typing import Optional
import numpy as np
from scipy.optimize import minimize
from tqdm import trange
from art.attacks.attack import EvasionAttack
from art.estimators.classification.GPy import GPyGaussianProcessClassifier
from art.utils import compute_success
logger = logging.getLogger(__name__)
class HighConfidenceLowUncertainty(EvasionAttack):
"""
Implementation of the High-Confidence-Low-Uncertainty (HCLU) adversarial example formulation by Grosse et al. (2018)
| Paper link: https://arxiv.org/abs/1812.02606
"""
attack_params = ["conf", "unc_increase", "min_val", "max_val"]
_estimator_requirements = (GPyGaussianProcessClassifier,)
def __init__(
self,
classifier: GPyGaussianProcessClassifier,
conf: float = 0.95,
unc_increase: float = 100.0,
min_val: float = 0.0,
max_val: float = 1.0,
) -> None:
"""
:param classifier: A trained model of type GPYGaussianProcessClassifier.
        :param conf: Confidence the adversarial examples must reach in the target class (in (0.5, 1.0]).
        :param unc_increase: Factor by which the predictive uncertainty may grow relative to the original input; 1.0 keeps it unchanged.
        :param min_val: Minimal value any feature can take.
        :param max_val: Maximal value any feature can take.
"""
super().__init__(estimator=classifier)
self.conf = conf
self.unc_increase = unc_increase
self.min_val = min_val
self.max_val = max_val
self._check_params()
def generate(self, x: np.ndarray, y: Optional[np.ndarray] = None, **kwargs) -> np.ndarray:
"""
Generate adversarial examples and return them as an array.
:param x: An array with the original inputs to be attacked.
:param y: Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape
(nb_samples,).
:return: An array holding the adversarial examples.
"""
x_adv = copy.copy(x)
        def minfun(x, args):  # minimize the L1 distance to the original input
            return np.sum(np.sqrt((x - args["orig"]) ** 2))
def constraint_conf(x, args): # constraint for confidence
pred = args["classifier"].predict(x.reshape(1, -1))[0, 0]
if args["class_zero"]:
pred = 1.0 - pred
return (pred - args["conf"]).reshape(-1)
def constraint_unc(x, args): # constraint for uncertainty
cur_unc = (args["classifier"].predict_uncertainty(x.reshape(1, -1))).reshape(-1)
return (args["max_uncertainty"] - cur_unc)[0]
bounds = []
# adding bounds, to not go away from original data
for i in range(np.shape(x)[1]):
bounds.append((self.min_val, self.max_val))
        for i in trange(x.shape[0], desc="HCLU"):  # go through the data and craft examples
# get properties for attack
max_uncertainty = self.unc_increase * self.estimator.predict_uncertainty(x_adv[i].reshape(1, -1))
class_zero = not self.estimator.predict(x_adv[i].reshape(1, -1))[0, 0] < 0.5
init_args = {
"classifier": self.estimator,
"class_zero": class_zero,
"max_uncertainty": max_uncertainty,
"conf": self.conf,
}
constr_conf = {"type": "ineq", "fun": constraint_conf, "args": (init_args,)}
constr_unc = {"type": "ineq", "fun": constraint_unc, "args": (init_args,)}
args = {"args": init_args, "orig": x[i].reshape(-1)}
# finally, run optimization
x_adv[i] = minimize(minfun, x_adv[i], args=args, bounds=bounds, constraints=[constr_conf, constr_unc],)["x"]
logger.info(
"Success rate of HCLU attack: %.2f%%", 100 * compute_success(self.estimator, x, y, x_adv),
)
return x_adv
def _check_params(self) -> None:
if not isinstance(self.estimator, GPyGaussianProcessClassifier):
raise TypeError("Model must be a GPy Gaussian Process classifier.")
if self.conf <= 0.5 or self.conf > 1.0:
raise ValueError("Confidence value has to be a value between 0.5 and 1.0.")
if self.unc_increase <= 0.0:
raise ValueError("Value to increase uncertainty must be positive.")
if self.min_val > self.max_val:
raise ValueError("Maximum has to be larger than minimum.")
|
{"hexsha": "6b2b65e2610d6b577d0e15df8bb9827c2cd55dc8", "size": 5860, "ext": "py", "lang": "Python", "max_stars_repo_path": "art/attacks/evasion/hclu.py", "max_stars_repo_name": "meghana-sesetti/adversarial-robustness-toolbox", "max_stars_repo_head_hexsha": "6a5ce9e4142734ad9004e5c093ef8fa754ea6b39", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "art/attacks/evasion/hclu.py", "max_issues_repo_name": "meghana-sesetti/adversarial-robustness-toolbox", "max_issues_repo_head_hexsha": "6a5ce9e4142734ad9004e5c093ef8fa754ea6b39", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2021-01-18T08:30:34.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-11T07:05:13.000Z", "max_forks_repo_path": "art/attacks/evasion/hclu.py", "max_forks_repo_name": "meghana-sesetti/adversarial-robustness-toolbox", "max_forks_repo_head_hexsha": "6a5ce9e4142734ad9004e5c093ef8fa754ea6b39", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-28T12:58:01.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-28T12:58:01.000Z", "avg_line_length": 45.4263565891, "max_line_length": 120, "alphanum_fraction": 0.6634812287, "include": true, "reason": "import numpy,from scipy", "num_tokens": 1385}
|
#!/usr/bin/python
########################################################################################################################
#
# Copyright (c) 2014, Regents of the University of California
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
# following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following
# disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the
# following disclaimer in the documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
########################################################################################################################
"""ADC library
"""
import laygo
import numpy as np
import yaml
import os
import laygo.GridLayoutGeneratorHelper as laygenhelper #utility functions
#import logging;logging.basicConfig(level=logging.DEBUG)
def generate_sar_wsamp(laygen, objectname_pfix, workinglib, samp_lib, space_1x_lib, sar_name, samp_name, space_1x_name,
placement_grid, routing_grid_m5m6,
routing_grid_m5m6_thick, routing_grid_m5m6_thick_basic,
num_bits=9, origin=np.array([0, 0])):
"""generate sar with sampling frontend """
pg = placement_grid
rg_m5m6 = routing_grid_m5m6
rg_m5m6_thick = routing_grid_m5m6_thick
rg_m5m6_thick_basic = routing_grid_m5m6_thick_basic #for clock routing
# placement
# sar
isar=laygen.place(name="I" + objectname_pfix + 'SAR0', templatename=sar_name,
gridname=pg, xy=origin, template_libname=workinglib)
# samp
isamp = laygen.relplace(name="I" + objectname_pfix + 'SAMP0', templatename=samp_name,
gridname=pg, refinstname=isar.name, direction='top', template_libname=samp_lib)
#prboundary
sar_size = laygen.templates.get_template(sar_name, libname=workinglib).size
samp_size = laygen.templates.get_template(samp_name, libname=samp_lib).size
space_size = laygen.templates.get_template(space_1x_name, libname=space_1x_lib).size
size_x=sar_size[0]
size_y=int((sar_size[1]+samp_size[1])/space_size[1]+1)*space_size[1]
laygen.add_rect(None, np.array([origin, origin+np.array([size_x, size_y])]), laygen.layers['prbnd'])
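    # note: size_y is rounded up to a whole multiple of the space_1x cell height,
    # presumably so the PR boundary stays aligned with the placement grid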
# template handles
sar_template = laygen.templates.get_template(sar_name, workinglib)
samp_template = laygen.templates.get_template(samp_name, samp_lib)
#reference coordinates
pdict_m5m6=laygen.get_inst_pin_xy(None, None, rg_m5m6)
pdict_m5m6_thick=laygen.get_inst_pin_xy(None, None, rg_m5m6_thick)
pdict_m5m6_thick_basic=laygen.get_inst_pin_xy(None, None, rg_m5m6_thick_basic)
sar_pins=sar_template.pins
samp_pins=samp_template.pins
#sar_xy=isar.xy[0]
#samp_xy=isamp.xy[0]
sar_xy=isar.xy
samp_xy=isamp.xy
#signal route (clk/inp/inm)
#make virtual grids and route on the grids (assuming drc clearance of each block)
rg_m5m6_thick_basic_temp_sig='route_M5_M6_thick_basic_temp_sig'
laygenhelper.generate_grids_from_inst(laygen, gridname_input=rg_m5m6_thick_basic, gridname_output=rg_m5m6_thick_basic_temp_sig,
instname=isamp.name,
inst_pin_prefix=['ckout'], xy_grid_type='xgrid')
pdict_m5m6_thick_basic_temp_sig = laygen.get_inst_pin_xy(None, None, rg_m5m6_thick_basic_temp_sig)
rg_m4m5_basic_thick_temp_sig='route_M4_M5_basic_thick_temp_sig'
laygenhelper.generate_grids_from_inst(laygen, gridname_input=rg_m4m5_basic_thick, gridname_output=rg_m4m5_basic_thick_temp_sig,
instname=isamp.name,
inst_pin_prefix=['outp', 'outn'], xy_grid_type='xgrid')
pdict_m4m5_basic_thick_temp_sig = laygen.get_inst_pin_xy(None, None, rg_m4m5_basic_thick_temp_sig)
#clock
rclk0 = laygen.route(None, laygen.layers['metal'][5],
xy0=pdict_m5m6_thick_basic_temp_sig[isamp.name]['ckout'][0],
xy1=pdict_m5m6_thick_basic_temp_sig[isar.name]['CLK0'][1]-np.array([0,1]), gridname0=rg_m5m6_thick_basic_temp_sig)
laygen.via(None,pdict_m5m6_thick_basic_temp_sig[isar.name]['CLK0'][1], rg_m5m6_thick_basic_temp_sig)
laygen.via(None,pdict_m5m6_thick_basic_temp_sig[isar.name]['CLK1'][1], rg_m5m6_thick_basic_temp_sig)
#laygen.via(None,pdict_m5m6_thick_basic_temp_sig[isar.name]['CLK2'][1], rg_m5m6_thick_basic_temp_sig)
#rclk0 = laygen.route(None, laygen.layers['metal'][5],
# xy0=pdict_m5m6_thick_basic[isamp.name]['ckout'][0],
# xy1=pdict_m5m6_thick_basic[isar.name]['CLK'][1]-np.array([0,1]), gridname0=rg_m5m6_thick_basic)
#laygen.via(None,pdict_m5m6_thick_basic[isar.name]['CLK'][1], rg_m5m6_thick_basic)
#frontend sig
inp_y_list=[]
inm_y_list=[]
for pn, p in pdict_m4m5_basic_thick_temp_sig[isar.name].items():
if pn.startswith('INP'):
inp_y_list.append(p[0][1])
pv=np.array([pdict_m4m5_basic_thick_temp_sig[isamp.name]['outp'][0][0], p[0][1]])
laygen.via(None,pv, rg_m4m5_basic_thick_temp_sig)
#laygen.via(None,p[0], rg_m5m6_thick_basic_temp_sig)
if pn.startswith('INM'):
inm_y_list.append(p[0][1])
pv=np.array([pdict_m4m5_basic_thick_temp_sig[isamp.name]['outn'][0][0], p[0][1]])
laygen.via(None,pv, rg_m4m5_basic_thick_temp_sig)
#laygen.via(None,p[0], rg_m5m6_thick_basic_temp_sig)
inp_y=min(inp_y_list)
inm_y=min(inm_y_list)
rinp0 = laygen.route(None, laygen.layers['metal'][5],
xy0=pdict_m4m5_basic_thick_temp_sig[isamp.name]['outp'][0],
xy1=np.array([pdict_m4m5_basic_thick_temp_sig[isamp.name]['outp'][0][0],inp_y-1]),
gridname0=rg_m4m5_basic_thick_temp_sig)
rinm0 = laygen.route(None, laygen.layers['metal'][5],
xy0=pdict_m4m5_basic_thick_temp_sig[isamp.name]['outn'][0],
xy1=np.array([pdict_m4m5_basic_thick_temp_sig[isamp.name]['outn'][0][0],inm_y-1]),
gridname0=rg_m4m5_basic_thick_temp_sig)
#rinp0 = laygen.route(None, laygen.layers['metal'][5],
# xy0=pdict_m5m6_thick_basic_temp_sig[isamp.name]['outp'][0],
# xy1=np.array([pdict_m5m6_thick_basic_temp_sig[isar.name]['INP0'][0][0],inp_y-1]),
# gridname0=rg_m5m6_thick_basic_temp_sig)
#rinm0 = laygen.route(None, laygen.layers['metal'][5],
# xy0=pdict_m5m6_thick_basic_temp_sig[isamp.name]['outn'][0],
# xy1=np.array([pdict_m5m6_thick_basic_temp_sig[isar.name]['INM0'][0][0],inm_y-1]),
# gridname0=rg_m5m6_thick_basic_temp_sig)
#input pins (just duplicate from lower hierarchy cells)
laygen.add_pin('CLK', 'CLK', samp_xy+samp_pins['ckin']['xy'], samp_pins['ckin']['layer'])
laygen.add_pin('INP', 'INP', samp_xy+samp_pins['inp']['xy'], samp_pins['ckin']['layer'])
laygen.add_pin('INM', 'INM', samp_xy+samp_pins['inn']['xy'], samp_pins['ckin']['layer'])
laygen.add_pin('OSP', 'OSP', sar_xy+sar_pins['OSP']['xy'], sar_pins['OSP']['layer'])
laygen.add_pin('OSM', 'OSM', sar_xy+sar_pins['OSM']['xy'], sar_pins['OSM']['layer'])
for pn, p in sar_pins.items():
if pn.startswith('VREF<0>'):
pxy=sar_xy+sar_pins[pn]['xy']
laygen.add_pin(pn, 'VREF<0>', pxy, sar_pins[pn]['layer'])
if pn.startswith('VREF<1>'):
pxy=sar_xy+sar_pins[pn]['xy']
laygen.add_pin(pn, 'VREF<1>', pxy, sar_pins[pn]['layer'])
if pn.startswith('VREF<2>'):
pxy=sar_xy+sar_pins[pn]['xy']
laygen.add_pin(pn, 'VREF<2>', pxy, sar_pins[pn]['layer'])
#laygen.add_pin('VREF_M5R<2>', 'VREF<2>', sar_xy+sar_pins['VREF_M5R<2>']['xy'], sar_pins['VREF_M5R<2>']['layer'])
#laygen.add_pin('VREF_M5R<1>', 'VREF<1>', sar_xy+sar_pins['VREF_M5R<1>']['xy'], sar_pins['VREF_M5R<1>']['layer'])
#laygen.add_pin('VREF_M5R<0>', 'VREF<0>', sar_xy+sar_pins['VREF_M5R<0>']['xy'], sar_pins['VREF_M5R<0>']['layer'])
#laygen.add_pin('VREF_M5L<2>', 'VREF<2>', sar_xy+sar_pins['VREF_M5L<2>']['xy'], sar_pins['VREF_M5L<2>']['layer'])
#laygen.add_pin('VREF_M5L<1>', 'VREF<1>', sar_xy+sar_pins['VREF_M5L<1>']['xy'], sar_pins['VREF_M5L<1>']['layer'])
#laygen.add_pin('VREF_M5L<0>', 'VREF<0>', sar_xy+sar_pins['VREF_M5L<0>']['xy'], sar_pins['VREF_M5L<0>']['layer'])
laygen.add_pin('CKDSEL0<1>', 'CKDSEL0<1>', sar_xy+sar_pins['CKDSEL0<1>']['xy'], sar_pins['CKDSEL0<1>']['layer'])
laygen.add_pin('CKDSEL0<0>', 'CKDSEL0<0>', sar_xy+sar_pins['CKDSEL0<0>']['xy'], sar_pins['CKDSEL0<0>']['layer'])
laygen.add_pin('CKDSEL1<1>', 'CKDSEL1<1>', sar_xy+sar_pins['CKDSEL1<1>']['xy'], sar_pins['CKDSEL1<1>']['layer'])
laygen.add_pin('CKDSEL1<0>', 'CKDSEL1<0>', sar_xy+sar_pins['CKDSEL1<0>']['xy'], sar_pins['CKDSEL1<0>']['layer'])
#laygen.add_pin('EXTCLK', 'EXTCLK', sar_xy+sar_pins['EXTCLK']['xy'], sar_pins['EXTCLK']['layer'])
laygen.add_pin('EXTSEL_CLK', 'EXTSEL_CLK', sar_xy+sar_pins['EXTSEL_CLK']['xy'], sar_pins['EXTSEL_CLK']['layer'])
#output pins (just duplicate from lower hierarchy cells)
for i in range(num_bits):
pn='ADCOUT'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
laygen.add_pin('CLKO0', 'CLKO', sar_xy+sar_pins['CLKOUT0']['xy'], sar_pins['CLKOUT0']['layer'])
laygen.add_pin('CLKO1', 'CLKO', sar_xy+sar_pins['CLKOUT1']['xy'], sar_pins['CLKOUT1']['layer'])
#laygen.add_pin('CLKO2', 'CLKO', sar_xy+sar_pins['CLKOUT2']['xy'], sar_pins['CLKOUT2']['layer'])
#probe pins
laygen.add_pin('CLK0', 'ICLK', sar_xy+sar_pins['CLK0']['xy'], sar_pins['CLK0']['layer'])
laygen.add_pin('CLK1', 'ICLK', sar_xy+sar_pins['CLK1']['xy'], sar_pins['CLK1']['layer'])
#laygen.add_pin('CLK2', 'ICLK', sar_xy+sar_pins['CLK2']['xy'], sar_pins['CLK2']['layer'])
laygen.add_pin('CLKPRB_SAMP', 'CLKPRB_SAMP', samp_xy+samp_pins['ckpg']['xy'], samp_pins['ckpg']['layer'])
#laygen.add_pin('CLKPRB_SAR', 'CLKPRB_SAR', sar_xy+sar_pins['CLKPRB']['xy'], sar_pins['CLKPRB']['layer'])
laygen.add_pin('SAMPP', 'SAMPP', sar_xy+sar_pins['SAINP']['xy'], sar_pins['SAINP']['layer'])
laygen.add_pin('SAMPM', 'SAMPM', sar_xy+sar_pins['SAINM']['xy'], sar_pins['SAINM']['layer'])
laygen.add_pin('SAOP', 'SAOP', sar_xy+sar_pins['SAOP']['xy'], sar_pins['SAOP']['layer'])
laygen.add_pin('SAOM', 'SAOM', sar_xy+sar_pins['SAOM']['xy'], sar_pins['SAOM']['layer'])
laygen.add_pin('SARCLK', 'SARCLK', sar_xy+sar_pins['SARCLK']['xy'], sar_pins['SARCLK']['layer'])
laygen.add_pin('SARCLKB', 'SARCLKB', sar_xy+sar_pins['SARCLKB']['xy'], sar_pins['SARCLKB']['layer'])
#laygen.add_pin('COMPOUT', 'COMPOUT', sar_xy+sar_pins['COMPOUT']['xy'], sar_pins['COMPOUT']['layer'])
laygen.add_pin('DONE', 'DONE', sar_xy+sar_pins['DONE']['xy'], sar_pins['DONE']['layer'])
laygen.add_pin('UP', 'UP', sar_xy+sar_pins['UP']['xy'], sar_pins['UP']['layer'])
laygen.add_pin('PHI0', 'PHI0', sar_xy+sar_pins['PHI0']['xy'], sar_pins['PHI0']['layer'])
for i in range(num_bits):
pn='ZP'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
pn='ZMID'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
pn='ZM'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
pn='SB'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
for i in range(num_bits-1):
pn='VOL'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
pn='VOR'+'<'+str(i)+'>'
laygen.add_pin(pn, pn, sar_xy+sar_pins[pn]['xy'], sar_pins[pn]['layer'])
#VDD/VSS pin
vddcnt=0
vsscnt=0
for p in pdict_m5m6[isar.name]:
if p.startswith('VDD'):
xy0=pdict_m5m6_thick[isar.name][p]
laygen.pin(name='VDDSAR' + str(vddcnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick, netname='VDDSAR')
vddcnt+=1
if p.startswith('VSS'):
xy0=pdict_m5m6_thick[isar.name][p]
laygen.pin(name='VSSSAR' + str(vsscnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick, netname='VSS:')
#laygen.pin(name='VSSSAR' + str(vsscnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick, netname='VSS')
vsscnt+=1
#extract VDD/VSS grid from samp and make power pins
rg_m5m6_thick_temp_samp='route_M5_M6_thick_temp_samp'
laygenhelper.generate_grids_from_inst(laygen, gridname_input=rg_m5m6_thick, gridname_output=rg_m5m6_thick_temp_samp,
instname=isamp.name,
inst_pin_prefix=['VDD', 'VSS', 'samp_body'], xy_grid_type='ygrid')
pdict_m5m6_thick_temp_samp = laygen.get_inst_pin_xy(None, None, rg_m5m6_thick_temp_samp)
vddcnt=0
vsscnt=0
bodycnt=0
for p in pdict_m5m6_thick_temp_samp[isamp.name]:
if p.startswith('VDD'):
xy0=pdict_m5m6_thick_temp_samp[isamp.name][p]
laygen.pin(name='VDDSAMP' + str(vddcnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick_temp_samp, netname='VDDSAMP')
vddcnt+=1
if p.startswith('VSS'):
xy0=pdict_m5m6_thick_temp_samp[isamp.name][p]
laygen.pin(name='VSSSAMP' + str(vsscnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick_temp_samp, netname='VSS:')
#laygen.pin(name='VSSSAMP' + str(vsscnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick_temp_samp, netname='VSS')
vsscnt+=1
if p.startswith('samp_body'):
xy0=pdict_m5m6_thick_temp_samp[isamp.name][p]
laygen.pin(name='samp_body' + str(bodycnt), layer=laygen.layers['pin'][6], xy=xy0, gridname=rg_m5m6_thick_temp_samp, netname='samp_body')
bodycnt+=1
# VBB
pdict_m3m4 = laygen.get_inst_pin_xy(None, None, rg_m3m4)
rvbb_m3=[]
for p in pdict_m3m4[isar.name]:
if p.startswith('VBB'):
laygen.pin(name='bottom_body'+str(p), layer=laygen.layers['pin'][3], xy=pdict_m3m4[isar.name][p], gridname=rg_m3m4, netname='bottom_body')
if __name__ == '__main__':
laygen = laygo.GridLayoutGenerator(config_file="laygo_config.yaml")
import imp
try:
imp.find_module('bag')
laygen.use_phantom = False
except ImportError:
laygen.use_phantom = True
tech=laygen.tech
utemplib = tech+'_microtemplates_dense'
logictemplib = tech+'_logic_templates'
samp_lib = 'adc_sampler_ec'
samp_name = 'sampler_nmos'
laygen.load_template(filename=tech+'_microtemplates_dense_templates.yaml', libname=utemplib)
laygen.load_grid(filename=tech+'_microtemplates_dense_grids.yaml', libname=utemplib)
laygen.load_template(filename=logictemplib+'.yaml', libname=logictemplib)
laygen.templates.sel_library(utemplib)
laygen.grids.sel_library(utemplib)
#library load or generation
workinglib = 'adc_sar_generated'
laygen.add_library(workinglib)
laygen.sel_library(workinglib)
if os.path.exists(workinglib+'.yaml'): #generated layout file exists
laygen.load_template(filename=workinglib+'.yaml', libname=workinglib)
laygen.templates.sel_library(utemplib)
#grid
pg = 'placement_basic' #placement grid
rg_m1m2 = 'route_M1_M2_cmos'
rg_m1m2_thick = 'route_M1_M2_thick'
rg_m2m3 = 'route_M2_M3_cmos'
rg_m3m4 = 'route_M3_M4_basic'
rg_m4m5 = 'route_M4_M5_basic'
rg_m4m5_basic_thick = 'route_M4_M5_basic_thick'
rg_m5m6 = 'route_M5_M6_basic'
rg_m5m6_thick = 'route_M5_M6_thick'
rg_m5m6_basic_thick = 'route_M5_M6_basic_thick'
rg_m5m6_thick_basic = 'route_M5_M6_thick_basic'
rg_m1m2_pin = 'route_M1_M2_basic'
rg_m2m3_pin = 'route_M2_M3_basic'
mycell_list = []
num_bits=9
#load from preset
load_from_file=True
yamlfile_spec="adc_sar_spec.yaml"
yamlfile_size="adc_sar_size.yaml"
if load_from_file==True:
with open(yamlfile_spec, 'r') as stream:
specdict = yaml.load(stream)
with open(yamlfile_size, 'r') as stream:
sizedict = yaml.load(stream)
num_bits=specdict['n_bit']
if specdict['samp_use_laygo'] is True:
samp_lib = 'adc_sar_generated'
samp_name = 'sarsamp_bb'
else:
laygen.load_template(filename=samp_lib+'.yaml', libname=samp_lib)
#yamlfile_system_input="adc_sar_dsn_system_input.yaml"
#if load_from_file==True:
# with open(yamlfile_system_input, 'r') as stream:
# sysdict_i = yaml.load(stream)
# num_bits=sysdict_i['n_bit']
#sar generation
cellname='sar_wsamp_bb_doubleSA_pe' #_'+str(num_bits)+'b'
sar_name = 'sar_doubleSA_bb_pe' #_'+str(num_bits)+'b'
space_1x_name = 'space_1x'
print(cellname+" generating")
mycell_list.append(cellname)
laygen.add_cell(cellname)
laygen.sel_cell(cellname)
generate_sar_wsamp(laygen, objectname_pfix='SA0', workinglib=workinglib, samp_lib=samp_lib, space_1x_lib=logictemplib, sar_name=sar_name, samp_name=samp_name, space_1x_name=space_1x_name,
placement_grid=pg, routing_grid_m5m6=rg_m5m6, routing_grid_m5m6_thick=rg_m5m6_thick, routing_grid_m5m6_thick_basic=rg_m5m6_thick_basic,
num_bits=num_bits, origin=np.array([0, 0]))
laygen.add_template_from_cell()
laygen.save_template(filename=workinglib+'.yaml', libname=workinglib)
#bag export, if bag does not exist, gds export
import imp
try:
imp.find_module('bag')
import bag
prj = bag.BagProject()
for mycell in mycell_list:
laygen.sel_cell(mycell)
laygen.export_BAG(prj, array_delimiter=['[', ']'])
except ImportError:
laygen.export_GDS('output.gds', cellname=mycell_list, layermapfile=tech+".layermap") # change layermapfile
|
{"hexsha": "d4fd1e6c0865110706b10ac166f281618532975d", "size": 19216, "ext": "py", "lang": "Python", "max_stars_repo_path": "laygo/generators/splash/adc_sar_sar_wsamp_layout_generator_bb_doubleSA_pe.py", "max_stars_repo_name": "tinapiao/Software-IC-Automation", "max_stars_repo_head_hexsha": "74b23cd94aa6e4658b110e93b5deb635e014f3a6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2017-07-07T08:06:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-25T06:41:24.000Z", "max_issues_repo_path": "laygo/generators/splash/adc_sar_sar_wsamp_layout_generator_bb_doubleSA_pe.py", "max_issues_repo_name": "tinapiao/Software-IC-Automation", "max_issues_repo_head_hexsha": "74b23cd94aa6e4658b110e93b5deb635e014f3a6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2016-12-28T03:08:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-30T16:00:28.000Z", "max_forks_repo_path": "laygo/generators/splash/adc_sar_sar_wsamp_layout_generator_bb_doubleSA_pe.py", "max_forks_repo_name": "tinapiao/Software-IC-Automation", "max_forks_repo_head_hexsha": "74b23cd94aa6e4658b110e93b5deb635e014f3a6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2018-07-14T01:31:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T10:18:30.000Z", "avg_line_length": 56.8520710059, "max_line_length": 191, "alphanum_fraction": 0.6576810991, "include": true, "reason": "import numpy", "num_tokens": 5952}
|
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
def f(x):
vqe.set_weights(np.array([x]))
ret = vqe(tfq.convert_to_tensor([cirq.Circuit()]))
return ret.numpy()[0][0]
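# f() is the black-box objective handed to scipy's Nelder-Mead below: it loads
# the single variational parameter into the PQC layer and evaluates the
# Hamiltonian expectation on an empty input circuit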
def anzatz(circuit, qubits, parameters):
for i in range(5):
pos_up = int(i*2)
pos_down = pos_up + 1
circuit.append([cirq.X(qubits[pos_down])])
circuit.append([cirq.ry(np.pi/2).on(qubits[pos_up])])
circuit.append([cirq.rx(-np.pi/2).on(qubits[pos_down])])
circuit.append([cirq.CNOT(qubits[pos_up], qubits[pos_down])])
circuit.append([cirq.rz(parameters[0]).on(qubits[pos_down])])
circuit.append([cirq.CNOT(qubits[pos_up], qubits[pos_down])])
circuit.append([cirq.ry(-np.pi/2).on(qubits[pos_up])])
circuit.append([cirq.rx(np.pi/2).on(qubits[pos_down])])
circuit.append([cirq.SWAP(qubits[0], qubits[1])])
circuit.append([cirq.CNOT(qubits[5], qubits[4])])
circuit.append([cirq.Z(qubits[6]), cirq.Z(qubits[7])])
circuit.append([cirq.S(qubits[6]), cirq.S(qubits[7])])
circuit.append([cirq.H(qubits[6]), cirq.H(qubits[7])])
circuit.append([cirq.CNOT(qubits[7], qubits[6])])
circuit.append([cirq.H(qubits[8]), cirq.H(qubits[9])])
circuit.append([cirq.CNOT(qubits[9], qubits[8])])
return circuit
def hamiltonian(qubits, a, b, c, d, e, f):
h = [a]
h.append(b * cirq.Z(qubits[1]))
h.append(c * cirq.Z(qubits[2]))
h.append(d * (cirq.Z(qubits[4]) + cirq.Z(qubits[5])))
h.append(e * (cirq.Z(qubits[6]) + cirq.Z(qubits[7])))
h.append(f * (cirq.Z(qubits[8]) + cirq.Z(qubits[9])))
return h
all_coeff = [
[2.8489, 0.5678, -1.4508, 0.6799, 0.0791, 0.0791],
[2.1868, 0.5449, -1.2870, 0.6719, 0.0798, 0.0798],
[1.1182, 0.4754, -0.9145, 0.6438, 0.0825, 0.0825],
[0.7381, 0.4325, -0.7355, 0.6233, 0.0846, 0.0846],
[0.4808, 0.3937, -0.5950, 0.6025, 0.0870, 0.0870],
[0.2976, 0.3593, -0.4826, 0.5818, 0.0896, 0.0896],
[0.2252, 0.3435, -0.4347, 0.5716, 0.0910, 0.0910],
[0.0609, 0.3018, -0.3168, 0.5421, 0.0954, 0.0954],
[-0.1253, 0.2374, -0.1603, 0.4892, 0.1050, 0.1050],
[-0.1927, 0.2048, -0.0929, 0.4588, 0.1116, 0.1116],
[-0.2632, 0.1565, -0.0088, 0.4094, 0.1241, 0.1241],
[-0.2934, 0.1251, 0.0359, 0.3730, 0.1347, 0.1347],
[-0.3018, 0.1142, 0.0495, 0.3586, 0.1392, 0.1392],
[-0.3104, 0.1026, 0.0632, 0.3406, 0.1450, 0.1450],
[-0.3135, 0.0984, 0.0679, 0.3329, 0.1475, 0.1475]
]
dist = [
0.2,
0.25,
0.4,
0.5,
0.6,
0.7,
0.75,
0.9,
1.2,
1.4,
1.8,
2.2,
2.4,
2.7,
2.85
]
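# each row of all_coeff pairs with the bond length at the same index in dist
# (15 geometries in total); the coefficients weight the Pauli terms assembled
# by hamiltonian() above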
qubits = [cirq.GridQubit(0, i) for i in range(10)]
params = [sympy.symbols('vqe')]
vqe_circuit = anzatz(cirq.Circuit(), qubits, params)
hs = []
for i in range(len(all_coeff)):
coeff = all_coeff[i]
readout_operators = sum(hamiltonian(qubits, coeff[0], coeff[1], coeff[2], coeff[3], coeff[4], coeff[5]))
ins = tf.keras.layers.Input(shape=(), dtype=tf.dtypes.string)
outs = tfq.layers.PQC(vqe_circuit, readout_operators)(ins)
vqe = tf.keras.models.Model(inputs=ins, outputs=outs)
opt = minimize(f, np.random.uniform(0, 2*np.pi, 1), method='Nelder-Mead')
hs.append(opt['fun'])
plt.plot(dist, hs, label='NM')
plt.xlabel("Bond Length")
plt.ylabel("Energy")
plt.ylim(-1.2, 0.2)
plt.xlim(0.22, 2.85)
plt.legend()
plt.show()
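# A possible cross-check (sketch, not part of the original script; assumes
# cirq.PauliSum exposes .matrix()): diagonalize each Hamiltonian and compare
# the exact ground-state energy against each Nelder-Mead optimum.
#
# exact = []
# for coeff in all_coeff:
#     h = sum(hamiltonian(qubits, *coeff))
#     exact.append(np.min(np.linalg.eigvalsh(h.matrix(qubits))))
# plt.plot(dist, exact, label='exact')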
|
{"hexsha": "8f0e92f15751e663fd850f7283b0e3f919a99d85", "size": 3556, "ext": "py", "lang": "Python", "max_stars_repo_path": "TFQ/VQE/vqe_multi.py", "max_stars_repo_name": "Project-Fare/quantum_computation", "max_stars_repo_head_hexsha": "fc182007d0cf7cca170efdbcb442576fde5927ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2020-04-15T18:45:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T10:28:42.000Z", "max_issues_repo_path": "TFQ/VQE/vqe_multi.py", "max_issues_repo_name": "Project-Fare/quantum_computation", "max_issues_repo_head_hexsha": "fc182007d0cf7cca170efdbcb442576fde5927ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-23T01:59:34.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-24T05:22:08.000Z", "max_forks_repo_path": "TFQ/VQE/vqe_multi.py", "max_forks_repo_name": "Project-Fare/quantum_computation", "max_forks_repo_head_hexsha": "fc182007d0cf7cca170efdbcb442576fde5927ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2021-01-30T15:20:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T10:28:51.000Z", "avg_line_length": 35.2079207921, "max_line_length": 109, "alphanum_fraction": 0.5846456693, "include": true, "reason": "import numpy,from scipy,import sympy", "num_tokens": 1459}
|
// Copyright Matt Overby 2021.
// Distributed under the MIT License.
#ifndef GINI_MESHTXT_HPP
#define GINI_MESHTXT_HPP 1
#include <iostream>
#include <fstream>
#include <sstream>
#include <Eigen/Geometry>
#include <vector>
#include <iomanip>
namespace mcl
{
// Simple, slow, plain text
//
// X is n x DIM vertices (DIM = 2 or 3)
// P is m x PDIM primitives (PDIM = 3 for tris, 4 for tets)
// Returns true on success
// TODO: some error checking on read
static inline bool write_mesh_txt(
const std::string& filename,
const Eigen::MatrixXd &X,
const Eigen::MatrixXi &P);
static inline bool read_mesh_txt(
	const std::string& filename,
	Eigen::MatrixXd &X,
	Eigen::MatrixXi &P); // X and P are outputs, so they must be non-const to match the definition
//
// Implementation
//
bool write_mesh_txt(
const std::string& filename,
const Eigen::MatrixXd &X,
const Eigen::MatrixXi &P)
{
char delim = ' ';
if (X.cols() < 2 || X.cols() > 3)
{
printf("Cannot write mesh txt, bad dim X\n");
return false;
}
if (P.cols() < 3 || P.cols() > 4)
{
printf("Cannot write mesh txt, bad dim P\n");
return false;
}
std::ofstream fi(filename);
fi << std::fixed << std::setprecision(16);
int nx = X.rows();
int xc = X.cols();
for (int i=0; i<nx; ++i)
{
fi << 'v';
for (int j=0; j<xc; ++j)
{
fi << delim << X(i,j);
}
fi << std::endl;
}
int np = P.rows();
int pc = P.cols();
char prim_tag = pc == 3 ? 'f' : 't'; // face, tet
for (int i=0; i<np; ++i)
{
fi << prim_tag;
for (int j=0; j<pc; ++j)
{
fi << delim << P(i,j);
}
fi << std::endl;
}
fi.close();
return true;
}
bool read_mesh_txt(
const std::string& filename,
Eigen::MatrixXd &X,
Eigen::MatrixXi &P)
{
int x_dim = 0;
int p_dim = 0;
using namespace Eigen;
std::vector<Vector3d> x;
std::vector<Vector4i> p;
std::ifstream fi(filename);
if (fi.is_open())
{
std::string line;
while (std::getline(fi,line) )
{
std::stringstream ss(line);
char tag = 'X';
ss >> tag;
switch (tag)
{
default: {
printf("Bad line: %s\n", line.c_str());
} break;
case 'v': {
x.emplace_back(Vector3d::Zero());
for (int i=0; i<3 && ss.good(); ++i)
{
x_dim = std::max(x_dim, i+1);
ss >> x.back()[i];
}
} break;
case 'f': {
p.emplace_back(Vector4i::Zero());
for (int i=0; i<3 && ss.good(); ++i)
{
p_dim = std::max(p_dim, i+1);
ss >> p.back()[i];
}
} break;
case 't': {
p.emplace_back(Vector4i::Zero());
for (int i=0; i<4 && ss.good(); ++i)
{
p_dim = std::max(p_dim, i+1);
ss >> p.back()[i];
}
} break;
}
}
fi.close();
}
else
{
printf("Could not open %s\n", filename.c_str());
return false;
}
// Make X
int nx = x.size();
X.resize(nx, x_dim);
for (int i=0; i<nx; ++i)
{
for (int j=0; j<x_dim; ++j)
{
X(i,j) = x[i][j];
}
}
// Make P
int np = p.size();
P.resize(np, p_dim);
for (int i=0; i<np; ++i)
{
for (int j=0; j<p_dim; ++j)
{
P(i,j) = p[i][j];
}
}
return true;
}
} // namespace mcl
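// Example usage (a sketch; "mesh.txt" is a hypothetical path):
//
//   Eigen::MatrixXd X(3, 3); // three 3D vertices
//   X << 0, 0, 0,  1, 0, 0,  0, 1, 0;
//   Eigen::MatrixXi P(1, 3); // one triangle
//   P << 0, 1, 2;
//   mcl::write_mesh_txt("mesh.txt", X, P);
//   Eigen::MatrixXd X2; Eigen::MatrixXi P2;
//   mcl::read_mesh_txt("mesh.txt", X2, P2); // X2 is 3x3, P2 is 1x3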
#endif
|
{"hexsha": "7ffb416ab14506c18ceba9e6f67d7eafc0191536", "size": 3837, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "include/MCL/MeshTXT.hpp", "max_stars_repo_name": "mattoverby/mclgeom", "max_stars_repo_head_hexsha": "d3ecd2a878900f33ba1412b8d82e643895201e51", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "include/MCL/MeshTXT.hpp", "max_issues_repo_name": "mattoverby/mclgeom", "max_issues_repo_head_hexsha": "d3ecd2a878900f33ba1412b8d82e643895201e51", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1.0, "max_issues_repo_issues_event_min_datetime": "2021-12-26T22:44:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-09T02:54:23.000Z", "max_forks_repo_path": "include/MCL/MeshTXT.hpp", "max_forks_repo_name": "mattoverby/mclgeom", "max_forks_repo_head_hexsha": "d3ecd2a878900f33ba1412b8d82e643895201e51", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.9257142857, "max_line_length": 59, "alphanum_fraction": 0.4362783425, "num_tokens": 1039}
|
import numpy as np
from sklearn.datasets import make_regression
from scipy.stats import norm  # itemfreq was unused and has been removed from modern SciPy
import pandas as pd
import sys
import argparse
from sqlalchemy import create_engine
import random
parser = argparse.ArgumentParser()
parser.add_argument(
'RowCount', type=int, help='The number of rows to generate'
)
args = parser.parse_args()
id_list = []
try:
print("Connecting to mariadb")
mariadb_engine = create_engine('mysql://queriouser:password1@mariadb:3306/queriomariadb')
print("Connection to mariadb created")
except Exception as e:
    print(e)
sys.exit(1)
def create_github_stars(rc):
github_stars_list = []
github_name_list = []
for i in range(0, rc):
        github_stars = random.randint(1, 5) * random.randint(1, 9) + 10 * random.randint(1, 17) + random.randint(1, 15) / 6
github_stars = np.floor(github_stars)
github_stars_list.append(abs(github_stars))
github_name_list.append("repo #{}".format(str(i)))
return pd.DataFrame(
{
'github_id': id_list, 'stars': github_stars_list, 'link': github_name_list,
}
)
def create_person_github():
return pd.DataFrame(
{
'person_id': id_list, 'github_id': id_list,
}
)
row_count = args.RowCount
for i in range(1, (row_count + 1)):
id_list.append( i )
age, height = make_regression(row_count, 1, 1, noise=3.3, random_state=42)
age = age.reshape((row_count,))
age = np.log(age * age + 1) * 17 + 20
age = np.floor(age)
height = height * height * 6 + 500
income = norm.rvs(size=row_count, loc=180, scale=10, random_state=42)
xs = -random.randint(0, 20) * income / 10 + age**2 / 2
is_client = (norm.rvs(size=row_count, loc=-100, scale=100) + xs) > 0
github_df = create_github_stars(row_count)
person_df = pd.DataFrame(
{
'person_id': id_list, 'age': age, 'income': income,
'height': height, 'is_client': is_client
}
)
person_github_df = create_person_github()
try:
print("Adding data to mariadb...")
with mariadb_engine.connect() as mdb_conn, mdb_conn.begin():
person_df.to_sql('person', mdb_conn, if_exists='replace')
github_df.to_sql('github', mdb_conn, if_exists='replace')
person_github_df.to_sql('person_github', mdb_conn, if_exists='replace')
print("Added data to mariadb")
except Exception as e:
print(e)
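# Usage sketch: `python init_mariadb.py 1000` generates 1000 rows for the
# person, github and person_github tables (the connection string above is
# environment-specific).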
|
{"hexsha": "19124bbe59f502ba2a3b1e711a2ada38a219d15e", "size": 2406, "ext": "py", "lang": "Python", "max_stars_repo_path": "docker-environment/init_mariadb.py", "max_stars_repo_name": "Quer-io/Quer.io-reference", "max_stars_repo_head_hexsha": "f4fd3505587143d5407b9b49ec81a9e7b94a0583", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docker-environment/init_mariadb.py", "max_issues_repo_name": "Quer-io/Quer.io-reference", "max_issues_repo_head_hexsha": "f4fd3505587143d5407b9b49ec81a9e7b94a0583", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docker-environment/init_mariadb.py", "max_forks_repo_name": "Quer-io/Quer.io-reference", "max_forks_repo_head_hexsha": "f4fd3505587143d5407b9b49ec81a9e7b94a0583", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4395604396, "max_line_length": 120, "alphanum_fraction": 0.676641729, "include": true, "reason": "import numpy,from scipy", "num_tokens": 656}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
import numpy as np
import tensorflow as tf
from zhusuan.distributions.base import *
from zhusuan.distributions.utils import \
maybe_explicit_broadcast, \
assert_same_float_dtype, \
assert_same_float_and_int_dtype
__all__ = [
'Normal',
'Bernoulli',
'Categorical',
'Discrete',
'Uniform',
'Gamma',
'Beta',
'Poisson',
'Binomial',
'InverseGamma',
'Laplace',
]
class Normal(Distribution):
"""
The class of univariate Normal distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param mean: A `float` Tensor. The mean of the Normal distribution.
Should be broadcastable to match `logstd`.
:param logstd: A `float` Tensor. The log standard deviation of the Normal
distribution. Should be broadcastable to match `mean`.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param is_reparameterized: A Bool. If True, gradients on samples from this
distribution are allowed to propagate into inputs, using the
reparametrization trick from (Kingma, 2013).
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
mean=0.,
logstd=0.,
group_event_ndims=0,
is_reparameterized=True,
check_numerics=False):
self._mean = tf.convert_to_tensor(mean)
self._logstd = tf.convert_to_tensor(logstd)
dtype = assert_same_float_dtype(
[(self._mean, 'Normal.mean'),
(self._logstd, 'Normal.logstd')])
try:
tf.broadcast_static_shape(self._mean.get_shape(),
self._logstd.get_shape())
except ValueError:
raise ValueError(
"mean and logstd should be broadcastable to match each "
"other. ({} vs. {})".format(
self._mean.get_shape(), self._logstd.get_shape()))
self._check_numerics = check_numerics
super(Normal, self).__init__(
dtype=dtype,
param_dtype=dtype,
is_continuous=True,
is_reparameterized=is_reparameterized,
group_event_ndims=group_event_ndims)
@property
def mean(self):
"""The mean of the Normal distribution."""
return self._mean
@property
def logstd(self):
"""The log standard deviation of the Normal distribution."""
return self._logstd
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.broadcast_dynamic_shape(tf.shape(self.mean),
tf.shape(self.logstd))
def _get_batch_shape(self):
return tf.broadcast_static_shape(self.mean.get_shape(),
self.logstd.get_shape())
def _sample(self, n_samples):
mean, logstd = self.mean, self.logstd
if not self.is_reparameterized:
mean = tf.stop_gradient(mean)
logstd = tf.stop_gradient(logstd)
shape = tf.concat([[n_samples], self.batch_shape], 0)
samples = tf.random_normal(shape, dtype=self.dtype) * \
tf.exp(logstd) + mean
static_n_samples = n_samples if isinstance(n_samples, int) else None
samples.set_shape(
tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape()))
return samples
def _log_prob(self, given):
c = -0.5 * np.log(2 * np.pi)
precision = tf.exp(-2 * self.logstd)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(precision, "precision")]):
precision = tf.identity(precision)
return c - self.logstd - 0.5 * precision * tf.square(given - self.mean)
def _prob(self, given):
return tf.exp(self._log_prob(given))
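# A minimal usage sketch for the class above (assumes the TF1-style graph
# APIs used throughout this module):
#
#   norm = Normal(mean=tf.zeros([3]), logstd=tf.zeros([3]))
#   samples = norm.sample(n_samples=5)   # float tensor, shape [5, 3]
#   log_p = norm.log_prob(samples)       # shape [5, 3]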
class Bernoulli(Distribution):
"""
The class of univariate Bernoulli distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param logits: A `float` Tensor. The log-odds of probabilities of being 1.
.. math:: \\mathrm{logits} = \\log \\frac{p}{1 - p}
:param dtype: The value type of samples from the distribution.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
"""
def __init__(self, logits, dtype=None, group_event_ndims=0):
self._logits = tf.convert_to_tensor(logits)
param_dtype = assert_same_float_dtype(
[(self._logits, 'Bernoulli.logits')])
if dtype is None:
dtype = tf.int32
assert_same_float_and_int_dtype([], dtype)
super(Bernoulli, self).__init__(
dtype=dtype,
param_dtype=param_dtype,
is_continuous=False,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def logits(self):
"""The log-odds of probabilities of being 1."""
return self._logits
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.shape(self.logits)
def _get_batch_shape(self):
return self.logits.get_shape()
def _sample(self, n_samples):
p = tf.sigmoid(self.logits)
shape = tf.concat([[n_samples], self.batch_shape], 0)
alpha = tf.random_uniform(
shape, minval=0, maxval=1, dtype=self.param_dtype)
samples = tf.cast(tf.less(alpha, p), dtype=self.dtype)
static_n_samples = n_samples if isinstance(n_samples, int) else None
samples.set_shape(
tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape()))
return samples
def _log_prob(self, given):
given = tf.cast(given, self.param_dtype)
given, logits = maybe_explicit_broadcast(
given, self.logits, 'given', 'logits')
return -tf.nn.sigmoid_cross_entropy_with_logits(labels=given,
logits=logits)
def _prob(self, given):
return tf.exp(self._log_prob(given))
class Categorical(Distribution):
"""
The class of univariate Categorical distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param logits: A N-D (N >= 1) `float` Tensor of shape (...,
n_categories). Each slice `[i, j,..., k, :]` represents the
un-normalized log probabilities for all categories.
.. math:: \\mathrm{logits} \\propto \\log p
:param dtype: The value type of samples from the distribution.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
A single sample is a (N-1)-D Tensor with `tf.int32` values in range
[0, n_categories).
"""
def __init__(self, logits, dtype=None, group_event_ndims=0):
self._logits = tf.convert_to_tensor(logits)
param_dtype = assert_same_float_dtype(
[(self._logits, 'Categorical.logits')])
if dtype is None:
dtype = tf.int32
assert_same_float_and_int_dtype([], dtype)
static_logits_shape = self._logits.get_shape()
shape_err_msg = "logits should have rank >= 1."
if static_logits_shape and (static_logits_shape.ndims < 1):
raise ValueError(shape_err_msg)
elif static_logits_shape and (
static_logits_shape[-1].value is not None):
self._n_categories = static_logits_shape[-1].value
else:
_assert_shape_op = tf.assert_rank_at_least(
self._logits, 1, message=shape_err_msg)
with tf.control_dependencies([_assert_shape_op]):
self._logits = tf.identity(self._logits)
self._n_categories = tf.shape(self._logits)[-1]
super(Categorical, self).__init__(
dtype=dtype,
param_dtype=param_dtype,
is_continuous=False,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def logits(self):
"""The un-normalized log probabilities."""
return self._logits
@property
def n_categories(self):
"""The number of categories in the distribution."""
return self._n_categories
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.shape(self.logits)[:-1]
def _get_batch_shape(self):
if self.logits.get_shape():
return self.logits.get_shape()[:-1]
return tf.TensorShape(None)
def _sample(self, n_samples):
if self.logits.get_shape().ndims == 2:
logits_flat = self.logits
else:
logits_flat = tf.reshape(self.logits, [-1, self.n_categories])
        samples_flat = tf.transpose(tf.multinomial(logits_flat, n_samples))
        # tf.multinomial returns int64; cast up front so the declared dtype
        # contract holds on both return paths.
        samples_flat = tf.cast(samples_flat, self.dtype)
        if self.logits.get_shape().ndims == 2:
            return samples_flat
        shape = tf.concat([[n_samples], self.batch_shape], 0)
        samples = tf.reshape(samples_flat, shape)
static_n_samples = n_samples if isinstance(n_samples, int) else None
samples.set_shape(
tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape()))
return samples
def _log_prob(self, given):
logits = self.logits
def _broadcast(given, logits):
# static shape has been checked in base class.
ones_ = tf.ones(tf.shape(logits)[:-1], self.dtype)
if logits.get_shape():
ones_.set_shape(logits.get_shape()[:-1])
given *= ones_
logits *= tf.ones_like(tf.expand_dims(given, -1), self.param_dtype)
return given, logits
def _is_same_dynamic_shape(given, logits):
return tf.cond(
tf.equal(tf.rank(given), tf.rank(logits) - 1),
lambda: tf.reduce_all(tf.equal(
tf.concat([tf.shape(given), tf.shape(logits)[:-1]], 0),
tf.concat([tf.shape(logits)[:-1], tf.shape(given)], 0))),
lambda: tf.convert_to_tensor(False, tf.bool))
if not (given.get_shape() and logits.get_shape()):
given, logits = _broadcast(given, logits)
else:
if given.get_shape().ndims != logits.get_shape().ndims - 1:
given, logits = _broadcast(given, logits)
elif given.get_shape().is_fully_defined() and \
logits.get_shape()[:-1].is_fully_defined():
if given.get_shape() != logits.get_shape()[:-1]:
given, logits = _broadcast(given, logits)
else:
# Below code seems to induce a BUG when this function is
# called in HMC. Probably due to tensorflow's not supporting
# control flow edge from an op inside the body to outside.
# We should further fix this.
#
# given, logits = tf.cond(
# is_same_dynamic_shape(given, logits),
# lambda: (given, logits),
# lambda: _broadcast(given, logits, 'given', 'logits'))
given, logits = _broadcast(given, logits)
# `labels` type of `sparse_softmax_cross_entropy_with_logits` must be
# int32 or int64
if self.dtype == tf.float32:
given = tf.cast(given, dtype=tf.int32)
elif self.dtype == tf.float64:
given = tf.cast(given, dtype=tf.int64)
elif self.dtype not in [tf.int32, tf.int64]:
given = tf.cast(given, tf.int32)
log_p = -tf.nn.sparse_softmax_cross_entropy_with_logits(labels=given,
logits=logits)
if given.get_shape() and logits.get_shape():
log_p.set_shape(tf.broadcast_static_shape(given.get_shape(),
logits.get_shape()[:-1]))
return log_p
def _prob(self, given):
return tf.exp(self._log_prob(given))
Discrete = Categorical
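# Sketch for the class above: a batch of 4 categorical distributions over
# 3 classes (TF1-style graph APIs assumed):
#
#   cat = Categorical(logits=tf.zeros([4, 3]))
#   s = cat.sample(n_samples=2)   # int32 tensor, shape [2, 4]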
class Uniform(Distribution):
"""
The class of univariate Uniform distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param minval: A `float` Tensor. The lower bound on the range of the
uniform distribution. Should be broadcastable to match `maxval`.
:param maxval: A `float` Tensor. The upper bound on the range of the
uniform distribution. Should be element-wise bigger than `minval`.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param is_reparameterized: A Bool. If True, gradients on samples from this
distribution are allowed to propagate into inputs, using the
reparametrization trick from (Kingma, 2013).
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
minval=0.,
maxval=1.,
group_event_ndims=0,
is_reparameterized=True,
check_numerics=False):
self._minval = tf.convert_to_tensor(minval)
self._maxval = tf.convert_to_tensor(maxval)
dtype = assert_same_float_dtype(
[(self._minval, 'Uniform.minval'),
(self._maxval, 'Uniform.maxval')])
try:
tf.broadcast_static_shape(self._minval.get_shape(),
self._maxval.get_shape())
except ValueError:
raise ValueError(
"minval and maxval should be broadcastable to match each "
"other. ({} vs. {})".format(
self._minval.get_shape(), self._maxval.get_shape()))
self._check_numerics = check_numerics
super(Uniform, self).__init__(
dtype=dtype,
param_dtype=dtype,
is_continuous=True,
is_reparameterized=is_reparameterized,
group_event_ndims=group_event_ndims)
@property
def minval(self):
"""The lower bound on the range of the uniform distribution."""
return self._minval
@property
def maxval(self):
"""The upper bound on the range of the uniform distribution."""
return self._maxval
def _value_shape(self):
return tf.constant([], tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.broadcast_dynamic_shape(tf.shape(self.minval),
tf.shape(self.maxval))
def _get_batch_shape(self):
return tf.broadcast_static_shape(self.minval.get_shape(),
self.maxval.get_shape())
def _sample(self, n_samples):
minval, maxval = self.minval, self.maxval
if not self.is_reparameterized:
minval = tf.stop_gradient(minval)
maxval = tf.stop_gradient(maxval)
shape = tf.concat([[n_samples], self.batch_shape], 0)
samples = tf.random_uniform(shape, 0, 1, dtype=self.dtype) * \
(maxval - minval) + minval
static_n_samples = n_samples if isinstance(n_samples, int) else None
samples.set_shape(
tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape()))
return samples
def _log_prob(self, given):
log_p = tf.log(self._prob(given))
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(log_p, message="log_p")]):
log_p = tf.identity(log_p)
return log_p
def _prob(self, given):
mask = tf.cast(tf.logical_and(tf.less_equal(self.minval, given),
tf.less(given, self.maxval)),
self.dtype)
p = 1. / (self.maxval - self.minval)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(p, message="p")]):
p = tf.identity(p)
return p * mask
class Gamma(Distribution):
"""
The class of univariate Gamma distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param alpha: A `float` Tensor. The shape parameter of the Gamma
distribution. Should be positive and broadcastable to match `beta`.
:param beta: A `float` Tensor. The inverse scale parameter of the Gamma
distribution. Should be positive and broadcastable to match `alpha`.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
alpha,
beta,
group_event_ndims=0,
check_numerics=False):
self._alpha = tf.convert_to_tensor(alpha)
self._beta = tf.convert_to_tensor(beta)
dtype = assert_same_float_dtype(
[(self._alpha, 'Gamma.alpha'),
(self._beta, 'Gamma.beta')])
try:
tf.broadcast_static_shape(self._alpha.get_shape(),
self._beta.get_shape())
except ValueError:
raise ValueError(
"alpha and beta should be broadcastable to match each "
"other. ({} vs. {})".format(
self._alpha.get_shape(), self._beta.get_shape()))
self._check_numerics = check_numerics
super(Gamma, self).__init__(
dtype=dtype,
param_dtype=dtype,
is_continuous=True,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def alpha(self):
"""The shape parameter of the Gamma distribution."""
return self._alpha
@property
def beta(self):
"""The inverse scale parameter of the Gamma distribution."""
return self._beta
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.broadcast_dynamic_shape(tf.shape(self.alpha),
tf.shape(self.beta))
def _get_batch_shape(self):
return tf.broadcast_static_shape(self.alpha.get_shape(),
self.beta.get_shape())
def _sample(self, n_samples):
return tf.random_gamma([n_samples], self.alpha,
beta=self.beta, dtype=self.dtype)
def _log_prob(self, given):
alpha, beta = self.alpha, self.beta
log_given = tf.log(given)
log_alpha, log_beta = tf.log(alpha), tf.log(beta)
lgamma_alpha = tf.lgamma(alpha)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(log_given, "log(given)"),
tf.check_numerics(log_alpha, "log(alpha)"),
tf.check_numerics(log_beta, "log(beta)"),
tf.check_numerics(lgamma_alpha, "lgamma(alpha)")]):
log_given = tf.identity(log_given)
return alpha * log_beta - lgamma_alpha + (alpha - 1) * log_given - \
beta * given
def _prob(self, given):
return tf.exp(self._log_prob(given))
class Beta(Distribution):
"""
The class of univariate Beta distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param alpha: A `float` Tensor. One of the two shape parameters of the
Beta distribution. Should be positive and broadcastable to match
`beta`.
:param beta: A `float` Tensor. One of the two shape parameters of the
Beta distribution. Should be positive and broadcastable to match
`alpha`.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param check_numerics: Bool. Whether to check numeric issues.
"""
    def __init__(self,
                 alpha,
                 beta,
                 group_event_ndims=0,
                 check_numerics=False):
self._alpha = tf.convert_to_tensor(alpha)
self._beta = tf.convert_to_tensor(beta)
dtype = assert_same_float_dtype(
[(self._alpha, 'Beta.alpha'),
(self._beta, 'Beta.beta')])
try:
tf.broadcast_static_shape(self._alpha.get_shape(),
self._beta.get_shape())
except ValueError:
raise ValueError(
"alpha and beta should be broadcastable to match each "
"other. ({} vs. {})".format(
self._alpha.get_shape(), self._beta.get_shape()))
self._check_numerics = check_numerics
super(Beta, self).__init__(
dtype=dtype,
param_dtype=dtype,
is_continuous=True,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def alpha(self):
"""One of the two shape parameters of the Beta distribution."""
return self._alpha
@property
def beta(self):
"""One of the two shape parameters of the Beta distribution."""
return self._beta
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.broadcast_dynamic_shape(tf.shape(self.alpha),
tf.shape(self.beta))
def _get_batch_shape(self):
return tf.broadcast_static_shape(self.alpha.get_shape(),
self.beta.get_shape())
def _sample(self, n_samples):
alpha, beta = maybe_explicit_broadcast(
self.alpha, self.beta, 'alpha', 'beta')
x = tf.random_gamma([n_samples], alpha, beta=1, dtype=self.dtype)
y = tf.random_gamma([n_samples], beta, beta=1, dtype=self.dtype)
return x / (x + y)
def _log_prob(self, given):
# TODO: not right when given=0 or 1
alpha, beta = self.alpha, self.beta
log_given = tf.log(given)
log_1_minus_given = tf.log(1 - given)
lgamma_alpha, lgamma_beta = tf.lgamma(alpha), tf.lgamma(beta)
lgamma_alpha_plus_beta = tf.lgamma(alpha + beta)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(log_given, "log(given)"),
tf.check_numerics(log_1_minus_given, "log(1 - given)"),
tf.check_numerics(lgamma_alpha, "lgamma(alpha)"),
tf.check_numerics(lgamma_beta, "lgamma(beta)"),
tf.check_numerics(lgamma_alpha_plus_beta,
"lgamma(alpha + beta)")]):
log_given = tf.identity(log_given)
return (alpha - 1) * log_given + (beta - 1) * log_1_minus_given - (
lgamma_alpha + lgamma_beta - lgamma_alpha_plus_beta)
def _prob(self, given):
return tf.exp(self._log_prob(given))
class Poisson(Distribution):
"""
The class of univariate Poisson distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param rate: A `float` Tensor. The rate parameter of Poisson
distribution. Must be positive.
:param dtype: The value type of samples from the distribution.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
rate,
dtype=None,
group_event_ndims=0,
check_numerics=False):
self._rate = tf.convert_to_tensor(rate)
param_dtype = assert_same_float_dtype(
[(self._rate, 'Poisson.rate')])
if dtype is None:
dtype = tf.int32
assert_same_float_and_int_dtype([], dtype)
self._check_numerics = check_numerics
super(Poisson, self).__init__(
dtype=dtype,
param_dtype=param_dtype,
is_continuous=False,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def rate(self):
"""The rate parameter of Poisson."""
return self._rate
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.shape(self.rate)
def _get_batch_shape(self):
return self.rate.get_shape()
def _sample(self, n_samples):
        # This algorithm for generating Poisson-distributed random numbers
        # is given by Knuth [1].
        # [1]: https://en.wikipedia.org/wiki/
        # Poisson_distribution#Generating_Poisson-distributed_random_variables
shape = tf.concat([[n_samples], self.batch_shape], 0)
static_n_samples = n_samples if isinstance(n_samples, int) else None
static_shape = tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape())
enlam = tf.exp(-self.rate)
x = tf.zeros(shape, dtype=self.dtype)
prod = tf.ones(shape, dtype=self.param_dtype)
def loop_cond(prod, x):
return tf.reduce_any(tf.greater_equal(prod, enlam))
def loop_body(prod, x):
prod *= tf.random_uniform(tf.shape(prod), minval=0, maxval=1)
x += tf.cast(tf.greater_equal(prod, enlam), dtype=self.dtype)
return prod, x
_, samples = tf.while_loop(
loop_cond, loop_body, loop_vars=[prod, x],
shape_invariants=[static_shape, static_shape])
samples.set_shape(static_shape)
return samples
def _log_prob(self, given):
rate = self.rate
given = tf.cast(given, self.param_dtype)
log_rate = tf.log(rate)
lgamma_given_plus_1 = tf.lgamma(given + 1)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(log_rate, "log(rate)"),
tf.check_numerics(lgamma_given_plus_1,
"lgamma(given + 1)")]):
log_rate = tf.identity(log_rate)
return given * log_rate - rate - lgamma_given_plus_1
def _prob(self, given):
return tf.exp(self._log_prob(given))
class Binomial(Distribution):
"""
The class of univariate Binomial distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param logits: A `float` Tensor. The log-odds of probabilities.
.. math:: \\mathrm{logits} = \\log \\frac{p}{1 - p}
:param n_experiments: A 0-D `int32` Tensor. The number of experiments
for each sample.
:param dtype: The value type of samples from the distribution.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
logits,
n_experiments,
dtype=None,
group_event_ndims=0,
check_numerics=False):
self._logits = tf.convert_to_tensor(logits)
param_dtype = assert_same_float_dtype(
[(self._logits, 'Binomial.logits')])
if dtype is None:
dtype = tf.int32
assert_same_float_and_int_dtype([], dtype)
sign_err_msg = "n_experiments must be positive"
if isinstance(n_experiments, int):
if n_experiments <= 0:
raise ValueError(sign_err_msg)
self._n_experiments = n_experiments
else:
try:
n_experiments = tf.convert_to_tensor(n_experiments, tf.int32)
except ValueError:
raise TypeError('n_experiments must be int32')
_assert_rank_op = tf.assert_rank(
n_experiments, 0,
message="n_experiments should be a scalar (0-D Tensor).")
_assert_positive_op = tf.assert_greater(
n_experiments, 0, message=sign_err_msg)
with tf.control_dependencies([_assert_rank_op,
_assert_positive_op]):
self._n_experiments = tf.identity(n_experiments)
self._check_numerics = check_numerics
super(Binomial, self).__init__(
dtype=dtype,
param_dtype=param_dtype,
is_continuous=False,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def n_experiments(self):
"""The number of experiments."""
return self._n_experiments
@property
def logits(self):
"""The log-odds of probabilities."""
return self._logits
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.shape(self.logits)
def _get_batch_shape(self):
return self.logits.get_shape()
def _sample(self, n_samples):
n = self.n_experiments
if self.logits.get_shape().ndims == 1:
logits_flat = self.logits
else:
logits_flat = tf.reshape(self.logits, [-1])
log_1_minus_p = -tf.nn.softplus(logits_flat)
log_p = logits_flat + log_1_minus_p
stacked_logits_flat = tf.stack([log_1_minus_p, log_p], axis=-1)
samples_flat = tf.transpose(
tf.multinomial(stacked_logits_flat, n_samples * n))
shape = tf.concat([[n, n_samples], self.batch_shape], 0)
samples = tf.reduce_sum(tf.reshape(samples_flat, shape), axis=0)
static_n_samples = n_samples if isinstance(n_samples, int) else None
static_shape = tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape())
samples.set_shape(static_shape)
return tf.cast(samples, self.dtype)
def _log_prob(self, given):
logits = self.logits
n = tf.cast(self.n_experiments, self.param_dtype)
given = tf.cast(given, self.param_dtype)
log_1_minus_p = -tf.nn.softplus(logits)
lgamma_n_plus_1 = tf.lgamma(n + 1)
lgamma_given_plus_1 = tf.lgamma(given + 1)
lgamma_n_minus_given_plus_1 = tf.lgamma(n - given + 1)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(lgamma_given_plus_1,
"lgamma(given + 1)"),
tf.check_numerics(lgamma_n_minus_given_plus_1,
"lgamma(n - given + 1)")]):
lgamma_given_plus_1 = tf.identity(lgamma_given_plus_1)
return lgamma_n_plus_1 - lgamma_n_minus_given_plus_1 - \
lgamma_given_plus_1 + given * logits + n * log_1_minus_p
def _prob(self, given):
return tf.exp(self._log_prob(given))
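# Sketch for the class above: 10 coin flips per sample with
# p = sigmoid(0) = 0.5 (TF1-style graph APIs assumed):
#
#   binom = Binomial(logits=tf.zeros([2]), n_experiments=10)
#   s = binom.sample(n_samples=3)   # int32 tensor, shape [3, 2]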
class InverseGamma(Distribution):
"""
The class of univariate InverseGamma distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param alpha: A `float` Tensor. The shape parameter of the InverseGamma
distribution. Should be positive and broadcastable to match `beta`.
:param beta: A `float` Tensor. The scale parameter of the InverseGamma
distribution. Should be positive and broadcastable to match `alpha`.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
alpha,
beta,
group_event_ndims=0,
check_numerics=False):
self._alpha = tf.convert_to_tensor(alpha)
self._beta = tf.convert_to_tensor(beta)
dtype = assert_same_float_dtype(
[(self._alpha, 'InverseGamma.alpha'),
(self._beta, 'InverseGamma.beta')])
try:
tf.broadcast_static_shape(self._alpha.get_shape(),
self._beta.get_shape())
except ValueError:
raise ValueError(
"alpha and beta should be broadcastable to match each "
"other. ({} vs. {})".format(
self._alpha.get_shape(), self._beta.get_shape()))
self._check_numerics = check_numerics
super(InverseGamma, self).__init__(
dtype=dtype,
param_dtype=dtype,
is_continuous=True,
is_reparameterized=False,
group_event_ndims=group_event_ndims)
@property
def alpha(self):
"""The shape parameter of the InverseGamma distribution."""
return self._alpha
@property
def beta(self):
"""The scale parameter of the InverseGamma distribution."""
return self._beta
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.broadcast_dynamic_shape(tf.shape(self.alpha),
tf.shape(self.beta))
def _get_batch_shape(self):
return tf.broadcast_static_shape(self.alpha.get_shape(),
self.beta.get_shape())
def _sample(self, n_samples):
gamma = tf.random_gamma([n_samples], self.alpha,
beta=self.beta, dtype=self.dtype)
return 1 / gamma
def _log_prob(self, given):
alpha, beta = self.alpha, self.beta
log_given = tf.log(given)
log_alpha, log_beta = tf.log(alpha), tf.log(beta)
lgamma_alpha = tf.lgamma(alpha)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(log_given, "log(given)"),
tf.check_numerics(log_alpha, "log(alpha)"),
tf.check_numerics(log_beta, "log(beta)"),
tf.check_numerics(lgamma_alpha, "lgamma(alpha)")]):
log_given = tf.identity(log_given)
return alpha * log_beta - lgamma_alpha - (alpha + 1) * log_given - \
beta / given
def _prob(self, given):
return tf.exp(self._log_prob(given))
class Laplace(Distribution):
"""
The class of univariate Laplace distribution.
See :class:`~zhusuan.distributions.base.Distribution` for details.
:param loc: A `float` Tensor. The location parameter of the Laplace
distribution. Should be broadcastable to match `scale`.
:param scale: A `float` Tensor. The scale parameter of the Laplace
distribution. Should be positive and broadcastable to match `loc`.
:param group_event_ndims: A 0-D `int32` Tensor representing the number of
dimensions in `batch_shape` (counted from the end) that are grouped
into a single event, so that their probabilities are calculated
together. Default is 0, which means a single value is an event.
See :class:`~zhusuan.distributions.base.Distribution` for more detailed
explanation.
:param is_reparameterized: A Bool. If True, gradients on samples from this
distribution are allowed to propagate into inputs, using the
reparametrization trick from (Kingma, 2013).
:param check_numerics: Bool. Whether to check numeric issues.
"""
def __init__(self,
loc,
scale,
group_event_ndims=0,
is_reparameterized=True,
check_numerics=False):
self._loc = tf.convert_to_tensor(loc)
self._scale = tf.convert_to_tensor(scale)
dtype = assert_same_float_dtype(
[(self._loc, 'Laplace.loc'),
(self._scale, 'Laplace.scale')])
try:
tf.broadcast_static_shape(self._loc.get_shape(),
self._scale.get_shape())
except ValueError:
raise ValueError(
"loc and scale should be broadcastable to match each "
"other. ({} vs. {})".format(
self._loc.get_shape(), self._scale.get_shape()))
self._check_numerics = check_numerics
super(Laplace, self).__init__(
dtype=dtype,
param_dtype=dtype,
is_continuous=True,
is_reparameterized=is_reparameterized,
group_event_ndims=group_event_ndims)
@property
def loc(self):
"""The location parameter of the Laplace distribution."""
return self._loc
@property
def scale(self):
"""The scale parameter of the Laplace distribution."""
return self._scale
def _value_shape(self):
return tf.constant([], dtype=tf.int32)
def _get_value_shape(self):
return tf.TensorShape([])
def _batch_shape(self):
return tf.broadcast_dynamic_shape(tf.shape(self.loc),
tf.shape(self.scale))
def _get_batch_shape(self):
return tf.broadcast_static_shape(self.loc.get_shape(),
self.scale.get_shape())
def _sample(self, n_samples):
        # Uniform samples must lie in the open interval (-1, 1) rather than
        # [-1, 1): log1p(-|u|) diverges at u = -1, so np.nextafter nudges the
        # lower bound just above -1.
loc, scale = self.loc, self.scale
if not self.is_reparameterized:
loc = tf.stop_gradient(loc)
scale = tf.stop_gradient(scale)
shape = tf.concat([[n_samples], self.batch_shape], 0)
uniform_samples = tf.random_uniform(
shape=shape,
minval=np.nextafter(self.dtype.as_numpy_dtype(-1.),
self.dtype.as_numpy_dtype(0.)),
maxval=1.,
dtype=self.dtype)
samples = loc - scale * tf.sign(uniform_samples) * \
tf.log1p(-tf.abs(uniform_samples))
static_n_samples = n_samples if isinstance(n_samples, int) else None
samples.set_shape(
tf.TensorShape([static_n_samples]).concatenate(
self.get_batch_shape()))
return samples
def _log_prob(self, given):
log_scale = tf.log(self.scale)
if self._check_numerics:
with tf.control_dependencies(
[tf.check_numerics(log_scale, "log(scale)")]):
log_scale = tf.identity(log_scale)
return -np.log(2.) - log_scale - tf.abs(given - self.loc) / self.scale
def _prob(self, given):
return tf.exp(self._log_prob(given))
|
{"hexsha": "fabe5bf180c41970c61d4cb297d5c8519304d76c", "size": 42038, "ext": "py", "lang": "Python", "max_stars_repo_path": "zhusuan/distributions/univariate.py", "max_stars_repo_name": "ycguo028/zhusuan", "max_stars_repo_head_hexsha": "244536d93c55e486a3587e53229f0a7e1b19bef0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-05-23T20:18:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-03T15:00:53.000Z", "max_issues_repo_path": "zhusuan/distributions/univariate.py", "max_issues_repo_name": "ycguo028/zhusuan", "max_issues_repo_head_hexsha": "244536d93c55e486a3587e53229f0a7e1b19bef0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "zhusuan/distributions/univariate.py", "max_forks_repo_name": "ycguo028/zhusuan", "max_forks_repo_head_hexsha": "244536d93c55e486a3587e53229f0a7e1b19bef0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-11-27T02:43:22.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-23T18:27:32.000Z", "avg_line_length": 38.3908675799, "max_line_length": 83, "alphanum_fraction": 0.6056187259, "include": true, "reason": "import numpy", "num_tokens": 9198}
|
from __future__ import annotations
import logging
import os
from collections import defaultdict
from functools import reduce
from typing import Any, Dict, List, Optional, Set, Tuple, Type
import matplotlib.pyplot as plt
import numpy as np
import numpy.typing as npt
import PIL.Image
from cachetools import LRUCache, cached
from nuplan.database.common.blob_store.creator import BlobStoreCreator
from nuplan.database.common.db import DB, Table
from nuplan.database.maps_db.gpkg_mapsdb import GPKGMapsDB
from nuplan.database.nuplan_db import models as nuplandb_models
from nuplan.database.nuplan_db.models import Camera, Category, EgoPose, Image, Lidar, LidarBox, LidarPc, Log, \
ScenarioTag, Scene, Track, TrafficLightStatus
from nuplan.database.nuplan_db.templates import tables as nuplandb_table_templates
from nuplan.database.utils.geometry import view_points
logger = logging.getLogger(__name__)
MICROSECONDS_IN_A_SECOND = 1000000
class NuPlanDB(DB):
"""
Database class for the nuPlan database. It provides some simple lookups and get methods.
"""
def __init__(self,
version: str,
data_root: str,
map_version: str = 'nuplan-maps-v0.1',
map_root: Optional[str] = None,
verbose: bool = False):
"""
Loads database and creates reverse indexes and shortcuts.
:param version: Version to load (e.g. "nuplan_v0.1_mini").
:param data_root: Path to the NuPlanDB tables and blobs.
:param map_version: Version to load (e.g. "nuplan-maps-v0.1").
:param map_root: Root folder of the maps.
:param verbose: Whether to print status messages during load.
"""
self._map_version = map_version
# Set default map folder
if map_root is None:
self._map_root = os.path.join(data_root, 'maps')
else:
self._map_root = map_root
self._verbose = verbose
# Initialize parent class
table_names = list(nuplandb_table_templates.keys())
maps_db = GPKGMapsDB(self._map_version, self._map_root)
blob_store = BlobStoreCreator.create_nuplandb(data_root=data_root)
super().__init__(table_names, nuplandb_models, data_root, version, verbose,
blob_store, maps_db)
# Initialize NuPlanDBExplorer class
self._explorer = NuPlanDBExplorer(self)
def __reduce__(self) -> Tuple[Type[NuPlanDB], Tuple[Any, ...]]:
"""
Hints on how to reconstruct the object when pickling.
:return: Object type and constructor arguments to be used.
"""
return self.__class__, (self._version, self._data_root, self._map_version, self._map_root, self._verbose)
def __getstate__(self) -> Dict[str, Any]:
"""
Called by pickle.dump/dumps to save class state.
Don't save mapsdb or blobstore because they're not pickleable, re-create object when restoring.
:return: The object state.
"""
state = dict()
state['_version'] = self._version
state['_data_root'] = self._data_root
state['_map_version'] = self._map_version
state['_map_root'] = self._map_root
state['_verbose'] = self._verbose
return state
def __setstate__(self, state: Dict[str, Any]) -> None:
"""
Called by pickle.load/loads to restore class state.
:param state: The object state.
"""
db = NuPlanDB(
version=state['_version'],
data_root=state['_data_root'],
map_version=state['_map_version'],
map_root=state['_map_root'],
verbose=state['_verbose'])
self.__dict__.update(db.__dict__)
# Explicitly assign tables to help the IDE determine valid class members.
@property
def category(self) -> Table[Category]:
"""
Get Category table.
:return: The category table.
"""
return self.tables['category']
@property
def log(self) -> Table[Log]:
"""
Get Log table.
:return: The log table.
"""
return self.tables['log']
@property
def camera(self) -> Table[Camera]:
"""
Get Camera table.
:return: The camera table.
"""
return self.tables['camera']
@property
def lidar(self) -> Table[Lidar]:
"""
Get Lidar table.
:return: The lidar table.
"""
return self.tables['lidar']
@property
def ego_pose(self) -> Table[EgoPose]:
"""
Get Ego Pose table.
:return: The ego pose table.
"""
return self.tables['ego_pose']
@property
def image(self) -> Table[Image]:
"""
Get Image table.
:return: The image table.
"""
return self.tables['image']
@property
def lidar_pc(self) -> Table[LidarPc]:
"""
Get Lidar Pc table.
:return: The lidar pc table.
"""
return self.tables['lidar_pc']
@property
def lidar_box(self) -> Table[LidarBox]:
"""
Get Lidar Box table.
:return: The lidar box table.
"""
if 'lidar_box' not in self.tables:
self.tables['lidar_box'] = Table[LidarBox](LidarBox, self)
return self.tables['lidar_box']
@property
def track(self) -> Table[Track]:
"""
Get Track table.
:return: The track table.
"""
if 'track' not in self.tables:
self.tables['track'] = Table[Track](Track, self)
return self.tables['track']
@property
def scene(self) -> Table[Scene]:
"""
Get Scene table.
:return: The scene table.
"""
if 'scene' not in self.tables:
self.tables['scene'] = Table[Scene](Scene, self)
return self.tables['scene']
@property
def scenario_tag(self) -> Table[ScenarioTag]:
"""
Get Scenario Tag table.
:return: The scenario tag table.
"""
if 'scenario_tag' not in self.tables:
self.tables['scenario_tag'] = Table[ScenarioTag](ScenarioTag, self)
return self.tables['scenario_tag']
@property
def traffic_light_status(self) -> Table[TrafficLightStatus]:
"""
Get Traffic Light Status table.
:return: The traffic light status table.
"""
if 'traffic_light_status' not in self.tables:
self.tables['traffic_light_status'] = Table[TrafficLightStatus](TrafficLightStatus, self)
return self.tables['traffic_light_status']
@property # type: ignore
@cached(cache=LRUCache(maxsize=1))
def cam_channels(self) -> Set[str]:
"""
Get list of camera channels.
:return: The list of camera channels.
"""
return {cam.channel for cam in self.camera}
@property # type: ignore
@cached(cache=LRUCache(maxsize=1))
def lidar_channels(self) -> Set[str]:
"""
Get list of lidar channels.
:return: The list of lidar channels.
"""
return {lidar.channel for lidar in self.lidar}
def list_categories(self) -> None:
""" Print list of categories. """
self._explorer.list_categories()
def render_pointcloud_in_image(self, lidar_pc: LidarPc, **kwargs: Any) -> None:
"""
Render point cloud in image.
:param lidar_pc: Lidar PC record.
:kwargs: Optional configurations.
"""
self._explorer.render_pointcloud_in_image(lidar_pc, **kwargs)
class NuPlanDBExplorer:
"""
Helper class to list and visualize NuPlanDB data. These are meant to serve as tutorials and templates for
working with the data.
"""
def __init__(self, nuplandb: NuPlanDB):
"""
:param nuplandb: NuPlanDB instance.
"""
self.nuplandb = nuplandb
def unique_scenario_tags(self) -> List[str]:
"""
Get list of all the unique ScenarioTag types in the DB.
:return: The list of all the unique scenario tag types.
"""
return [tag[0] for tag in self.nuplandb.session.query(ScenarioTag.type).distinct().all()]
def list_categories(self) -> None:
""" Print categories, counts and stats. """
logger.info('\nCompiling category summary ... ')
# Retrieve category name and object sizes from DB.
length_name = self.nuplandb.session.query(LidarBox.length, Category.name). \
join(Track, LidarBox.track_token == Track.token).join(Category, Track.category_token == Category.token)
width_name = self.nuplandb.session.query(LidarBox.width, Category.name). \
join(Track, LidarBox.track_token == Track.token).join(Category, Track.category_token == Category.token)
height_name = self.nuplandb.session.query(LidarBox.height, Category.name). \
join(Track, LidarBox.track_token == Track.token).join(Category, Track.category_token == Category.token)
# Group by category name
length_categories = defaultdict(list)
for size, name in length_name:
length_categories[name].append(size)
width_categories = defaultdict(list)
for size, name in width_name:
width_categories[name].append(size)
height_categories = defaultdict(list)
for size, name in height_name:
height_categories[name].append(size)
logger.info(f"{'name':>50} {'count':>10} {'width':>10} {'len':>10} {'height':>10} \n {'-'*101:>10}")
for name, stats in sorted(length_categories.items()):
length_stats = np.array(stats)
width_stats = np.array(width_categories[name])
height_stats = np.array(height_categories[name])
logger.info(f"{name[:50]:>50} {length_stats.shape[0]:>10.2f} "
f"{np.mean(length_stats):>5.2f} {np.std(length_stats):>5.2f} "
f"{np.mean(width_stats):>5.2f} {np.std(width_stats):>5.2f} {np.mean(height_stats):>5.2f} "
f"{np.std(height_stats):>5.2f}")
def map_pointcloud_to_image(self, lidar_pc: LidarPc, img: Image, color_channel: int = 2,
max_radius: float = np.inf) -> \
Tuple[npt.NDArray[np.float64], npt.NDArray[np.float64], PIL.Image.Image]:
"""
Given a lidar and camera sample_data, load point-cloud and map it to the image plane.
:param lidar_pc: Lidar sample_data record.
:param img: Camera sample_data record.
:param color_channel: Set to 2 for coloring dots by depth, 3 for intensity.
:param max_radius: Max xy radius of lidar points to include in visualization.
Set to np.inf to include all points.
        :return: (pointcloud <np.float: 2, n>, coloring <np.float: n>, image <Image>).
"""
assert isinstance(lidar_pc, LidarPc), 'first input must be a lidar_pc modality'
assert isinstance(img, Image), 'second input must be a camera modality'
# Load files.
pc = lidar_pc.load()
im = img.load_as(img_type='pil')
# Filter lidar points to be inside desired range.
radius = np.sqrt(pc.points[0] ** 2 + pc.points[1] ** 2)
keep = radius <= max_radius
pc.points = pc.points[:, keep]
# Transform pc to img.
transform = reduce(np.dot, [img.camera.trans_matrix_inv, img.ego_pose.trans_matrix_inv,
lidar_pc.ego_pose.trans_matrix, lidar_pc.lidar.trans_matrix])
pc.transform(transform)
# Grab the coloring (depth or intensity).
coloring = pc.points[color_channel, :]
depths = pc.points[2, :]
# Take the actual picture (matrix multiplication with camera - matrix + renormalization).
points = view_points(pc.points[:3, :], img.camera.intrinsic_np, normalize=True)
# Finally filter away points outside the image.
mask = np.ones(depths.shape[0], dtype=bool)
mask = np.logical_and(mask, depths > 0)
mask = np.logical_and(mask, points[0, :] > 1)
mask = np.logical_and(mask, points[0, :] < im.size[0] - 1)
mask = np.logical_and(mask, points[1, :] > 1)
mask = np.logical_and(mask, points[1, :] < im.size[1] - 1)
points = points[:, mask]
coloring = coloring[mask]
return points, coloring, im
def render_pointcloud_in_image(self, lidar_pc: LidarPc, dot_size: int = 5, color_channel: int = 2,
max_radius: float = np.inf, image_channel: str = 'CAM_F0') -> None:
"""
Scatter-plots pointcloud on top of image.
        :param lidar_pc: Lidar PC record.
        :param dot_size: Scatter plot dot size.
        :param color_channel: Set to 2 for coloring dots by depth, 3 for intensity.
:param max_radius: Max xy radius of lidar points to include in visualization.
Set to np.inf to include all points.
:param image_channel: Which image to render.
"""
image = lidar_pc.closest_image([image_channel])[0]
points, coloring, im = self.map_pointcloud_to_image(lidar_pc, image, color_channel=color_channel,
max_radius=max_radius)
plt.figure(figsize=(9, 16))
plt.imshow(im)
plt.scatter(points[0, :], points[1, :], c=coloring, s=dot_size)
plt.axis('off')
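# Minimal usage sketch (paths/versions are hypothetical, and positional
# indexing into the lidar_pc table is assumed):
#
#   db = NuPlanDB(version='nuplan_v0.1_mini', data_root='/data/sets/nuplan')
#   db.list_categories()
#   db.render_pointcloud_in_image(db.lidar_pc[0], image_channel='CAM_F0')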
|
{"hexsha": "f2800d5676a10a0ca23ac406ccf1c19695f726fb", "size": 13501, "ext": "py", "lang": "Python", "max_stars_repo_path": "nuplan/database/nuplan_db/nuplandb.py", "max_stars_repo_name": "MCZhi/nuplan-devkit", "max_stars_repo_head_hexsha": "3c4f5b8dcd517b27cfd258915ca5fe5c54e3cb0c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nuplan/database/nuplan_db/nuplandb.py", "max_issues_repo_name": "MCZhi/nuplan-devkit", "max_issues_repo_head_hexsha": "3c4f5b8dcd517b27cfd258915ca5fe5c54e3cb0c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nuplan/database/nuplan_db/nuplandb.py", "max_forks_repo_name": "MCZhi/nuplan-devkit", "max_forks_repo_head_hexsha": "3c4f5b8dcd517b27cfd258915ca5fe5c54e3cb0c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.0906593407, "max_line_length": 115, "alphanum_fraction": 0.6133619732, "include": true, "reason": "import numpy", "num_tokens": 3175}
|
[STATEMENT]
lemma pivot_unsat_core_id: "\<lbrakk>\<triangle> (\<T> s); x\<^sub>i \<in> lvars (\<T> s); x\<^sub>j \<in> rvars_of_lvar (\<T> s) x\<^sub>i\<rbrakk> \<Longrightarrow> \<U>\<^sub>c (pivot x\<^sub>i x\<^sub>j s) = \<U>\<^sub>c s"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>\<triangle> (\<T> s); x\<^sub>i \<in> lvars (\<T> s); x\<^sub>j \<in> rvars_eq (eq_for_lvar (\<T> s) x\<^sub>i)\<rbrakk> \<Longrightarrow> \<U>\<^sub>c (pivot x\<^sub>i x\<^sub>j s) = \<U>\<^sub>c s
[PROOF STEP]
using pivot_id
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>\<triangle> (\<T> ?s); ?x\<^sub>i \<in> lvars (\<T> ?s); ?x\<^sub>j \<in> rvars_eq (eq_for_lvar (\<T> ?s) ?x\<^sub>i)\<rbrakk> \<Longrightarrow> let s' = pivot ?x\<^sub>i ?x\<^sub>j ?s in \<V> s' = \<V> ?s \<and> \<B>\<^sub>i s' = \<B>\<^sub>i ?s \<and> \<U> s' = \<U> ?s \<and> \<U>\<^sub>c s' = \<U>\<^sub>c ?s
goal (1 subgoal):
1. \<lbrakk>\<triangle> (\<T> s); x\<^sub>i \<in> lvars (\<T> s); x\<^sub>j \<in> rvars_eq (eq_for_lvar (\<T> s) x\<^sub>i)\<rbrakk> \<Longrightarrow> \<U>\<^sub>c (pivot x\<^sub>i x\<^sub>j s) = \<U>\<^sub>c s
[PROOF STEP]
by (simp add: Let_def)
|
{"llama_tokens": 534, "file": "Simplex_Simplex", "length": 2}
|
"""
Train shadow net script
"""
import argparse
import functools
import itertools
import os
import os.path as ops
import sys
import time
import numpy as np
import tensorflow as tf
import pprint
import shadownet
import six
from six.moves import xrange # pylint: disable=redefined-builtin
sys.path.append('/data/')
from crnn_model import crnn_model
from local_utils import data_utils, log_utils, tensorboard_vis_summary
from global_configuration import config
from uaitrain.arch.tensorflow import uflag
from typing import List
from tensorflow.core.framework import node_def_pb2
from tensorflow.python.framework import device as pydev
from tensorflow.python.training import device_setter
tf.app.flags.DEFINE_string('dataset_dir', '/data/data/tfrecords', 'data path')
tf.app.flags.DEFINE_string('weights_path', None, 'weight path')
FLAGS = tf.app.flags.FLAGS
logger = log_utils.init_logger()
def local_device_setter(num_devices=1,
ps_device_type='cpu',
worker_device='/cpu:0',
ps_ops=None,
ps_strategy=None):
    if ps_ops is None:
ps_ops = ['Variable', 'VariableV2', 'VarHandleOp']
if ps_strategy is None:
ps_strategy = device_setter._RoundRobinStrategy(num_devices)
if not six.callable(ps_strategy):
raise TypeError("ps_strategy must be callable")
def _local_device_chooser(op):
current_device = pydev.DeviceSpec.from_string(op.device or "")
node_def = op if isinstance(op, node_def_pb2.NodeDef) else op.node_def
if node_def.op in ps_ops:
ps_device_spec = pydev.DeviceSpec.from_string(
'/{}:{}'.format(ps_device_type, ps_strategy(op)))
ps_device_spec.merge_from(current_device)
return ps_device_spec.to_string()
else:
worker_device_spec = pydev.DeviceSpec.from_string(worker_device or "")
worker_device_spec.merge_from(current_device)
return worker_device_spec.to_string()
return _local_device_chooser
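# Illustrative note (not part of the original file): the chooser returned by
# local_device_setter is meant to be passed to tf.device(), pinning Variable
# ops to the parameter-server device and everything else to the worker, e.g.
#   with tf.device(local_device_setter(worker_device='/gpu:0')):
#       ...  # build one tower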
def get_words_from_chars(characters_list: List[str], sequence_lengths: List[int], name='chars_conversion'):
with tf.name_scope(name=name):
        def join_characters_fn(coords):
return tf.reduce_join(characters_list[coords[0]:coords[1]])
def coords_several_sequences():
end_coords = tf.cumsum(sequence_lengths)
start_coords = tf.concat([[0], end_coords[:-1]], axis=0)
coords = tf.stack([start_coords, end_coords], axis=1)
coords = tf.cast(coords, dtype=tf.int32)
            return tf.map_fn(join_characters_fn, coords, dtype=tf.string)
def coords_single_sequence():
return tf.reduce_join(characters_list, keep_dims=True)
words = tf.cond(tf.shape(sequence_lengths)[0] > 1,
true_fn=lambda: coords_several_sequences(),
false_fn=lambda: coords_single_sequence())
return words
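# Hypothetical usage sketch (illustrative only): rejoins a flat character
# tensor into per-sequence word strings.
#
#   chars = tf.constant(['h', 'i', 'y', 'o', 'u'])
#   words = get_words_from_chars(chars, sequence_lengths=[2, 3])
#   # after session.run(words): [b'hi', b'you']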
def get_shadownet_fn(num_gpus, variable_strategy, num_workers):
"""Returns a function that will build shadownet model."""
def _shadownet_fun(features, labels, mode, params):
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
tower_features = features
tower_labels = labels
tower_losses = []
tower_gradvars = []
tower_preds = []
tower_tensor_dict = []
tower_seq_len = []
num_devices = num_gpus
device_type = 'gpu'
tower_batch_size = int(params.batch_size / num_devices)
for i in range(num_devices):
worker_device = '/{}:{}'.format(device_type, i)
device_setter = local_device_setter(worker_device=worker_device)
with tf.variable_scope('shadownet', reuse=bool(i != 0)):
with tf.name_scope('tower_%d' % i) as name_scope:
with tf.device(device_setter):
loss, gradvars, preds, tensor_dict, seq_len = _tower_fn(
is_training, tower_features[i], tower_labels[i], tower_batch_size, params.l_size)
tower_losses.append(loss)
tower_gradvars.append(gradvars)
tower_preds.append(preds)
tower_tensor_dict.append(tensor_dict)
tower_seq_len.append(seq_len)
if i == 0:
# Only trigger batch_norm moving mean and variance update from
# the 1st tower. Ideally, we should grab the updates from all
# towers but these stats accumulate extremely fast so we can
# ignore the other stats from the other towers without
# significant detriment.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS,
name_scope)
# Now compute global loss and gradients.
gradvars = []
with tf.name_scope('gradient_averaging'):
all_grads = {}
for grad, var in itertools.chain(*tower_gradvars):
if grad is not None:
all_grads.setdefault(var, []).append(grad)
for var, grads in six.iteritems(all_grads):
# Average gradients on the same device as the variables
with tf.device(var.device):
if len(grads) == 1:
avg_grad = grads[0]
else:
avg_grad = tf.multiply(tf.add_n(grads), 1. / len(grads))
gradvars.append((avg_grad, var))
# Device that runs the ops to apply global gradient updates.
consolidation_device = '/gpu:0' if variable_strategy == 'GPU' else '/cpu:0'
with tf.device(consolidation_device):
global_step = tf.train.get_global_step()
starter_learning_rate = params.learning_rate
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
params.decay_steps, params.decay_rate,
staircase=True)
loss = tf.reduce_mean(tower_losses, name='loss')
decoded, log_prob = tf.nn.ctc_beam_search_decoder(tower_preds[0],
tower_seq_len[0]*np.ones(tower_batch_size),
merge_repeated=False)
sequence_dist = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0], tf.int32), tower_labels[0]))
sequence_lengths_pred = tf.bincount(tf.cast(decoded[0].indices[:, 0], tf.int32),
minlength=tf.shape(tower_labels[0])[1])
label_lengths_pred = tf.bincount(tf.cast(labels[0].indices[:, 0], tf.int32),
minlength=tf.shape(tower_labels[0])[1])
tensors_to_log = {'global_step': global_step, 'learning_rate': learning_rate, 'loss': loss}
dist_to_log = {'global_step': global_step,
'learning_rate': learning_rate,
'loss': loss,
'train_seq_dist': sequence_dist,
'sequence_lengths_pred': sequence_lengths_pred,
'label_lengths_pred': label_lengths_pred}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=10)
dist_hook = tf.train.LoggingTensorHook(
tensors=dist_to_log, every_n_iter=1000)
train_hooks = [logging_hook, dist_hook]
seq_dist_sum = tf.summary.scalar(name='Seq_Dist', tensor=sequence_dist)
lr_sum = tf.summary.scalar(name='Learning_rate', tensor=learning_rate)
summaries = [seq_dist_sum, lr_sum]
summary_hook = tf.train.SummarySaverHook(
save_steps=1000,
output_dir='/data/output/',
summary_op=summaries)
optimizer = tf.train.AdadeltaOptimizer(learning_rate=learning_rate)
if params.sync:
optimizer = tf.train.SyncReplicasOptimizer(
optimizer, replicas_to_aggregate=num_workers)
sync_replicas_hook = optimizer.make_session_run_hook(params.is_chief)
train_hooks.append(sync_replicas_hook)
# Create single grouped train op
train_op = [
optimizer.apply_gradients(
gradvars, global_step=tf.train.get_global_step())
]
train_op.extend(update_ops)
train_op = tf.group(*train_op)
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
training_hooks=train_hooks)
return _shadownet_fun
def _tower_fn(is_training, feature, label, batch_size, l_size):
    seq_len = l_size
shadownet = crnn_model.ShadowNet(phase='Train', hidden_nums=256, layers_nums=2, seq_length=seq_len,
num_classes=config.cfg.TRAIN.CLASSES_NUMS, rnn_cell_type='lstm')
imgs = tf.image.resize_images(feature, (32, l_size*4), method=0)
input_imgs = tf.cast(x=imgs, dtype=tf.float32)
with tf.variable_scope('shadow', reuse=False):
net_out, tensor_dict = shadownet.build_shadownet(inputdata=input_imgs)
cost = tf.reduce_mean(tf.nn.ctc_loss(labels=label, inputs=net_out,
sequence_length=seq_len*np.ones(batch_size)))
    # L2 regularization loss on the LSTM weights
lstm_tv = tf.trainable_variables(scope='LSTMLayers')
r_lambda = 0.001
regularization_cost = r_lambda * tf.reduce_sum([tf.nn.l2_loss(v) for v in lstm_tv])
cost = cost + regularization_cost
model_params = tf.trainable_variables()
tower_grad = tf.gradients(cost, model_params)
return cost, zip(tower_grad, model_params), net_out, tensor_dict, seq_len
def input_fn(data_dir,
subset,
num_shards,
batch_size,
use_distortion_for_training=True):
"""Create input graph for model.
Args:
data_dir: Directory where TFRecords representing the dataset are located.
subset: one of 'train', 'validate' and 'eval'.
num_shards: num of towers participating in data-parallel training.
batch_size: total batch size for training to be divided by the number of
shards.
use_distortion_for_training: True to use distortions.
Returns:
      feature_shards and label_shards, each a list with one entry per tower.
"""
with tf.device('/cpu:0'):
use_distortion = subset == 'train' and use_distortion_for_training
dataset = shadownet.ShadownetDataSet(data_dir, subset, use_distortion)
inputdata, input_labels = dataset.make_batch(batch_size)
if num_shards <= 1:
# No GPU available or only 1 GPU.
num_shards = 1
feature_shards = tf.split(inputdata, num_shards)
label_shards = tf.sparse_split(sp_input=input_labels, num_split=num_shards, axis=0)
return feature_shards, label_shards
def get_experiment_fn(data_dir,
num_gpus,
use_distortion_for_training=True):
def _experiment_fn(run_config, hparams):
"""Returns an Experiment."""
# Create estimator.
train_input_fn = functools.partial(
input_fn,
data_dir,
subset='train',
num_shards=num_gpus,
batch_size=hparams.batch_size,
use_distortion_for_training=use_distortion_for_training)
eval_input_fn = functools.partial(
input_fn,
data_dir,
subset='validation',
batch_size=hparams.batch_size,
num_shards=num_gpus)
train_steps = hparams.steps
eval_steps = 2048 // hparams.batch_size
variable_strategy = 'CPU'
classifier = tf.estimator.Estimator(
model_fn=get_shadownet_fn(num_gpus,
variable_strategy,
run_config.num_worker_replicas or 1),
config=run_config,
params=hparams)
# Create experiment.
return tf.contrib.learn.Experiment(
classifier,
train_input_fn=train_input_fn,
eval_input_fn=eval_input_fn,
train_steps=train_steps,
eval_steps=eval_steps,
min_eval_frequency=100)
return _experiment_fn
def main(num_gpus, log_device_placement, num_intra_threads, data_dir, output_dir, tfrecord_dir, **hparams):
# The env variable is on deprecation path, default is set to off.
os.environ['TF_SYNC_ON_FINISH'] = '0'
os.environ['TF_ENABLE_WINOGRAD_NONFUSED'] = '1'
data_dir = os.path.join(data_dir, tfrecord_dir)
# Session configuration.
sess_config = tf.ConfigProto(
allow_soft_placement=True,
log_device_placement=log_device_placement,
intra_op_parallelism_threads=num_intra_threads,
gpu_options=tf.GPUOptions(force_gpu_compatible=True))
config = tf.contrib.learn.RunConfig(session_config=sess_config, model_dir=output_dir)
tf.contrib.learn.learn_runner.run(
get_experiment_fn(data_dir, num_gpus),
run_config=config,
hparams=tf.contrib.training.HParams(
is_chief=config.is_chief,
**hparams))
if __name__ == '__main__':
# init args
# args = init_args()
#if not ops.exists(args.dataset_dir):
# raise ValueError('{:s} doesn\'t exist'.format(args.dataset_dir))
#train_shadownet(args.dataset_dir, args.weights_path)
# if args.weights_path is not None and 'two_stage' in args.weights_path:
# train_shadownet(args.dataset_dir, args.weights_path, restore_from_cnn_subnet_work=False)
# elif args.weights_path is not None and 'cnnsub' in args.weights_path:
# train_shadownet(args.dataset_dir, args.weights_path, restore_from_cnn_subnet_work=True)
# else:
# train_shadownet(args.dataset_dir)
parser = argparse.ArgumentParser()
parser.add_argument(
'--num_gpus',
type=int,
default=1,
help='UAI-SDK related. The number of gpus used.')
parser.add_argument(
'--log-device-placement',
action='store_true',
default=False,
help='Whether to log device placement.')
parser.add_argument(
'--num-intra-threads',
type=int,
default=0,
help="""\
Number of threads to use for intra-op parallelism. When training on CPU
set to 0 to have the system pick the appropriate number or alternatively
set it to the number of physical CPU cores.\
""")
parser.add_argument(
'--num-inter-threads',
type=int,
default=0,
help="""\
Number of threads to use for inter-op parallelism. If set to 0, the
system will pick an appropriate number.\
""")
parser.add_argument(
'--sync',
action='store_true',
default=False,
help="""\
If present when running in a distributed environment will run on sync mode.\
""")
parser.add_argument(
'--work_dir',
type=str,
default='/data/',
help='UAI SDK related.')
parser.add_argument(
'--data_dir',
type=str,
required=True,
        help='UAI-SDK related. The directory where the input data is stored.')
parser.add_argument(
'--output_dir',
type=str,
required=True,
help='UAI-SDK related. The directory where the model will be stored.')
parser.add_argument(
'--log_dir',
type=str,
default='/data/data/',
help='UAI SDK related.')
parser.add_argument(
'--l_size',
type=int,
default=10,
help="""l_batch_label, how many labels CNN net work will output into LSTM""")
parser.add_argument(
'--learning_rate',
type=float,
default=0.1)
parser.add_argument(
'--decay_rate',
type=float,
default=0.1)
parser.add_argument(
'--decay_steps',
type=int,
default=40000)
parser.add_argument(
'--steps',
type=int,
default=200000)
parser.add_argument(
'--batch_size',
type=int,
default=512)
parser.add_argument(
'--tfrecord_dir',
type=str,
default='tfrecords')
args = parser.parse_args()
main(**vars(args))
print('Done')
|
{"hexsha": "fbf23a32edea1c76b286e1eb5b7cddd3cfc77494", "size": 17504, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/tensorflow/train/crnn_chinese/code_multi/tools/train_shadownet_multi.py", "max_stars_repo_name": "soar-zhengjian/uai-sdk", "max_stars_repo_head_hexsha": "e195bd3fb2b97aca7dac6722d332c25b7070481f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 38, "max_stars_repo_stars_event_min_datetime": "2017-04-26T04:00:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-10T02:51:05.000Z", "max_issues_repo_path": "examples/tensorflow/train/crnn_chinese/code_multi/tools/train_shadownet_multi.py", "max_issues_repo_name": "soar-zhengjian/uai-sdk", "max_issues_repo_head_hexsha": "e195bd3fb2b97aca7dac6722d332c25b7070481f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2017-11-20T20:47:09.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-09T23:48:46.000Z", "max_forks_repo_path": "examples/tensorflow/train/crnn_chinese/code_multi/tools/train_shadownet_multi.py", "max_forks_repo_name": "soar-zhengjian/uai-sdk", "max_forks_repo_head_hexsha": "e195bd3fb2b97aca7dac6722d332c25b7070481f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2017-07-08T05:23:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-18T03:12:27.000Z", "avg_line_length": 40.4249422633, "max_line_length": 159, "alphanum_fraction": 0.5843235832, "include": true, "reason": "import numpy", "num_tokens": 3512}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import pandas as pd
import numpy as np
import scipy, scipy.stats
from datetime import date, datetime, timedelta
from dateutil.relativedelta import relativedelta
from sklearn import linear_model
import h5py
import plotly.graph_objs as go
import plotly.figure_factory as ff
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
import dash_table as dt
import dash_table.FormatTemplate as FormatTemplate
import ta
import data_loader
from futures_tools import get_futures_chain, get_seasonal_contracts
from time_tools import convert_date_input
from app import app
app.config.suppress_callback_exceptions = True
app.scripts.config.serve_locally = True
# -------------------------------------------------------- data preparation --------------------------------------------------
futures_meta_df, futures_contracts_meta_df, inter_comdty_spread_meta_df, inter_comdty_spread_contracts_meta_df = data_loader.load_futures_meta_data()
futures_hist_prices_dict, _ = data_loader.load_futures_hist_prices()
generic_futures_hist_prices_dict = data_loader.load_comdty_generic_hist_prices()
inter_comdty_spread_hist_data_dict = data_loader.load_inter_comdty_spread_hist_prices()
generic_inter_comdty_hist_prices_dict = data_loader.load_inter_comdty_generic_hist_prices()
spread_scores_dict = data_loader.load_spread_score()
fly_scores_dict = data_loader.load_fly_score()
cols = ['Name', 'Contract', 'Price', 'Chg', '1MChg', 'High', 'Low', 'Avg', 'SD', 'EWMA', 'PCT', 'z-score', 'RSI', 'MACD', 'MACDSignal', 'MACDHist', 'MACDHistMin', 'MACDHistMax', 'BBLower', 'BBMid', 'BBUpper']
cols_spread = ['Name', 'Leg1', 'Leg2', 'Leg1 Actual', 'Leg2 Actual', 'Spread', 'Spread Prcnt', 'Spread Z-Score', 'RD Prcnt', 'RD Z-Score']
cols_fly = ['Name', 'Leg1', 'Leg2', 'Leg3', 'Leg1 Actual', 'Leg2 Actual', 'Leg3 Actual', 'Fly', 'Fly Prcnt', 'Fly Z-Score', 'RD Prcnt','RD Z-Score']
df_single = pd.DataFrame(columns=cols)
for root_sym in generic_futures_hist_prices_dict.keys():
try:
hist_series = generic_futures_hist_prices_dict[root_sym][root_sym + '1'].copy()
hist_series.dropna(inplace=True)
hist_series = hist_series[(hist_series.index[-1]+timedelta(days=-365*5)):] # last 5 years
meta_data = futures_contracts_meta_df[futures_contracts_meta_df['Root']==root_sym]
meta_data.sort_values('Last_Trade_Date', inplace=True)
row_dict = {}
row_dict['Name'] = futures_meta_df.loc[root_sym, "NAME"][:-6]
row_dict['Contract'] = get_futures_chain(meta_data, hist_series.index[-1]).index[0]
row_dict['Price'] = round(hist_series.iloc[-1], 4)
row_dict['Chg'] = round((hist_series.iloc[-1] / hist_series.iloc[-2] - 1.0) * 100.0, 4)
try:
row_dict['1MChg'] = round((hist_series.iloc[-1] / hist_series.iloc[-22] - 1.0) * 100.0, 4)
except:
row_dict['1MChg'] = None
row_dict['High'] = round(np.max(hist_series.dropna().to_numpy()), 4)
row_dict['Low'] = round(np.min(hist_series.dropna().to_numpy()), 4)
row_dict['Avg'] = round(np.average(hist_series.dropna().to_numpy()), 4)
row_dict['SD'] = round(np.std(hist_series.dropna().to_numpy()), 4)
hist_return = hist_series / hist_series.shift(1) - 1
        # RiskMetrics-style EWMA volatility: alpha = 0.06 (lambda = 0.94) weights
        # the most recent squared return heaviest; annualized by sqrt(252).
        row_dict['EWMA'] = round(np.sqrt((hist_return ** 2).ewm(alpha=0.06).mean().iloc[-1] * 252), 4)
row_dict['PCT'] = round(scipy.stats.percentileofscore(hist_series.dropna().to_numpy(), hist_series.iloc[-1]), 4)
row_dict['z-score'] = round((row_dict['Price'] - row_dict['Avg']) / row_dict['SD'], 4)
row_dict['RSI'] = round(ta.momentum.rsi(hist_series, window=14*2-1)[-1], 4)
row_dict['MACD'] = round(ta.trend.macd(hist_series)[-1], 4)
row_dict['MACDSignal'] = round(ta.trend.macd_signal(hist_series)[-1], 4)
        macd_hist_series = ta.trend.macd_diff(hist_series)
        row_dict['MACDHist'] = round(macd_hist_series[-1], 4)
        row_dict['MACDHistMin'] = round(np.nanmin(macd_hist_series), 4)
        row_dict['MACDHistMax'] = round(np.nanmax(macd_hist_series), 4)
row_dict['BBLower'] = round(ta.volatility.bollinger_lband(hist_series)[-1], 4)
row_dict['BBMid'] = round(ta.volatility.bollinger_mavg(hist_series)[-1], 4)
row_dict['BBUpper'] = round(ta.volatility.bollinger_hband(hist_series)[-1], 4)
df_temp = pd.DataFrame(row_dict, index=[root_sym])
df_single = df_single.append(df_temp)
except:
pass
df_single = df_single[cols]
#df = df.sort_values(by=['z-score'])
df_inter = pd.DataFrame(columns=cols)
for root_sym in generic_inter_comdty_hist_prices_dict.keys():
try:
hist_series = generic_inter_comdty_hist_prices_dict[root_sym][root_sym + '1'].copy()
hist_series.dropna(inplace=True)
hist_series = hist_series[(hist_series.index[-1]+timedelta(days=-365*5)):] # last 5 years
meta_data = inter_comdty_spread_contracts_meta_df[inter_comdty_spread_contracts_meta_df['Root']==root_sym]
meta_data.sort_values('Last_Trade_Date', inplace=True)
row_dict = {}
row_dict['Name'] = root_sym
row_dict['Contract'] = get_futures_chain(meta_data, hist_series.index[-1]).index[0]
row_dict['Price'] = round(hist_series.iloc[-1], 4)
row_dict['Chg'] = round((hist_series.iloc[-1] / hist_series.iloc[-2] - 1.0) * 100.0, 4)
try:
row_dict['1MChg'] = round((hist_series.iloc[-1] / hist_series.iloc[-22] - 1.0) * 100.0, 4)
except:
row_dict['1MChg'] = None
row_dict['High'] = round(np.max(hist_series.dropna().to_numpy()), 4)
row_dict['Low'] = round(np.min(hist_series.dropna().to_numpy()), 4)
row_dict['Avg'] = round(np.average(hist_series.dropna().to_numpy()), 4)
row_dict['SD'] = round(np.std(hist_series.dropna().to_numpy()), 4)
hist_return = hist_series / hist_series.shift(1) - 1
        # RiskMetrics-style EWMA volatility: alpha = 0.06 (lambda = 0.94) weights
        # the most recent squared return heaviest; annualized by sqrt(252).
        row_dict['EWMA'] = round(np.sqrt((hist_return ** 2).ewm(alpha=0.06).mean().iloc[-1] * 252), 4)
row_dict['PCT'] = round(scipy.stats.percentileofscore(hist_series.dropna().to_numpy(), hist_series.iloc[-1]),
4)
row_dict['z-score'] = round((row_dict['Price'] - row_dict['Avg']) / row_dict['SD'], 4)
row_dict['RSI'] = round(ta.momentum.rsi(hist_series, window=14*2-1)[-1], 4)
row_dict['MACD'] = round(ta.trend.macd(hist_series)[-1], 4)
row_dict['MACDSignal'] = round(ta.trend.macd_signal(hist_series)[-1], 4)
        macd_hist_series = ta.trend.macd_diff(hist_series)
        row_dict['MACDHist'] = round(macd_hist_series[-1], 4)
        row_dict['MACDHistMin'] = round(np.nanmin(macd_hist_series), 4)
        row_dict['MACDHistMax'] = round(np.nanmax(macd_hist_series), 4)
row_dict['BBLower'] = round(ta.volatility.bollinger_lband(hist_series)[-1], 4)
row_dict['BBMid'] = round(ta.volatility.bollinger_mavg(hist_series)[-1], 4)
row_dict['BBUpper'] = round(ta.volatility.bollinger_hband(hist_series)[-1], 4)
df_temp = pd.DataFrame(row_dict, index=[root_sym])
df_inter = df_inter.append(df_temp)
except:
pass
df_inter = df_inter[cols]
#df = df.sort_values(by=['z-score'])
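# Note (added for clarity): df_single summarizes outright generic futures and
# df_inter summarizes inter-commodity spreads; the tab-1 overview table picks
# between them via the Outright/InterComdty radio buttons.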
# -------------------------------------------------------- help functions --------------------------------------------------
# -------------------------------------------------------- define layout --------------------------------------------------
# app.css.append_css({"external_url": 'https://codepen.io/chriddyp/pen/bWLwgP.css'})
layout = html.Div([
html.Div([
html.H2("Commodity Futures")
], className='banner'),
html.Div(id='cached-data-market-commodity-futures-tab1', style={'display': 'none'}),
html.Div(id='cached-data-market-commodity-futures-tab2', style={'display': 'none'}),
html.Div(id='cached-data-market-commodity-futures-tab3', style={'display': 'none'}),
html.Div(id='cached-data-market-commodity-futures-tab4', style={'display': 'none'}),
html.Div([
dcc.Tabs(
children=[
dcc.Tab(label='Single Futures', value=1),
dcc.Tab(label='Curve Spread', value=2),
dcc.Tab(label='Curve Fly', value=3),
dcc.Tab(label='Seasonality', value=4),
],
value=1,
id='market-commodity-futures-tabs',
),
html.Div(id='market-commodity-futures-tab-output')
]),
html.Div(id='hidden-div-market-commodity-futures', style={'display': 'none'})
], style={
'width': '90%',
'fontFamily': 'Sans-Serif',
'margin-left': 'auto',
'margin-right': 'auto'}
)
# -------------------------------------------------------- Tab Layout -------------------------- #
@app.callback(
Output('market-commodity-futures-tab-output', 'children'),
[Input('market-commodity-futures-tabs', 'value')])
def update_tabs_market_commodity_futures(tab_choice):
if tab_choice == 1:
return \
html.Div([
html.Div([], className='twelve columns wind-polar'), # seems that this is needed for alignment)
html.Div([
dcc.RadioItems(
id='outright-spread-market-commodity-futures-tab1',
options=[
{'label': 'Outright', 'value': 'Outright'},
{'label': 'InterComdty', 'value': 'InterComdty'}
],
value='Outright'
)
], className='four columns wind-polar'),
html.Div([
dt.DataTable(
style_table={'overflowX': 'scroll'},
columns=[{"name": i, "id": i, "deletable": False} for i in cols],
editable=False,
row_deletable=False,
filter_action="native",
sort_action="native",
sort_mode='multi',
row_selectable='single', # multi
selected_rows=[0],
page_action='native',
page_current=0,
page_size=15,
id='overview-table-market-commodity-futures-tab1'
)
], className='twelve columns row wind-speed-row'),
# html.Div([
# html.Div([
# html.Div([html.H3('Historical Time Series')], className='Title'),
# html.Div([dcc.Graph(id='macro-data-explorer-historical-time-series')]),
# ], className='six columns wind-polar'),
# html.Div([
# html.Div([html.H3('Term Structure Curve')], className='Title'),
# html.Div([dcc.Graph(id='macro-data-explorer-term-structure')]),
# ], className='six columns wind-polar')
# ], className='row wind-speed-row')
html.Div([html.H3('Historical Generic Prices')], className='Title twelve columns wind-polar'),
html.Div([
html.Div([
dcc.Input(
id='generic-series-start-number-market-commodity-futures-tab1',
placeholder='Enter number of series',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='generic-series-end-number-market-commodity-futures-tab1',
placeholder='Enter number of series',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(id='lookback-selection-market-commodity-futures-tab1', placeholder='Lookback (yyyy-mm-dd) or 5Y',
type='text', value='')
], className='two columns wind-polar'),
html.Div([
html.Button('Go', id='historical-generic-series-button-market-commodity-futures-tab1')
], className='two columns wind-polar'),
], className='twelve columns wind-polar'),
html.Div([
dcc.Graph(id='historical-time-series-market-commodity-futures-tab1')
], className='twelve columns row wind-speed-row'),
html.Div([html.H3('Term Structures')], className='Title twelve columns wind-polar'),
html.Div([
html.Div([
dcc.Input(
id='term-structure-date-one-market-commodity-futures-tab1',
placeholder='Enter date (-5y, yyyy-mm-dd)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='term-structure-date-two-market-commodity-futures-tab1',
placeholder='Enter date (-5y, yyyy-mm-dd)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='term-structure-date-three-market-commodity-futures-tab1',
placeholder='Enter date (-5y, yyyy-mm-dd)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='term-structure-date-four-market-commodity-futures-tab1',
placeholder='Enter date (-5y, yyyy-mm-dd)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='term-structure-date-five-market-commodity-futures-tab1',
placeholder='Enter date (-5y, yyyy-mm-dd)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
html.Button('Go', id='term-structure-button-market-commodity-futures-tab1')
], className='two columns wind-polar'),
], className='twelve columns wind-polar'),
html.Div([
dcc.Graph(style={'height': '550px'}, id='historical-term-structures-market-commodity-futures-tab1')
], className='twelve columns row wind-speed-row'),
])
elif tab_choice == 2:
return html.Div([
html.Div([
dcc.Dropdown(
id='product-dropdown-curve-spread-market-commodity-futures-tab2',
options=[
{'label': sym_root, 'value': sym_root} for sym_root in spread_scores_dict.keys()
],
value='CL'
)
], className='four columns wind-polar',
style={'width': '10%', 'display': 'inline-block', 'padding-bottom': '1%', 'horizontal-align': 'top'}),
html.Div([
dcc.Input(id='lookback-selection-market-commodity-futures-tab2', placeholder='Lookback (yyyy-mm-dd) or 5Y',
type='text', value='')
], className='two columns wind-polar'),
html.Div([
dt.DataTable(
style_table={'overflowX': 'scroll'},
columns=[{"name": i, "id": i, "deletable": False} for i in cols_spread],
editable=False,
row_deletable=False,
filter_action="native",
sort_action="native",
sort_mode='multi',
row_selectable='single', # multi
selected_rows=[0],
page_action='native',
page_current=0,
page_size=15,
id='spread-score-table-market-commodity-futures-tab2'
)
], className='twelve columns row wind-speed-row'),
html.Div([
html.Div([
dcc.Graph(style={'height': '450px'}, id='historical-spread-time-series-market-commodity-futures-tab2')
], className='six columns wind-polar'),
html.Div([
dcc.Graph(style={'height': '450px'}, id='historical-spread-scatterplot-market-commodity-futures-tab2')
], className='six columns wind-polar'),
        ], className='twelve columns row wind-speed-row')
])
elif tab_choice == 3:
return html.Div([
html.Div([
dcc.Dropdown(
id='product-dropdown-curve-fly-market-commodity-futures-tab3',
options=[
{'label': sym_root, 'value': sym_root} for sym_root in fly_scores_dict.keys()
],
value='CL'
)
], className='four columns wind-polar',
style={'width': '10%', 'display': 'inline-block', 'padding-bottom': '1%', 'horizontal-align': 'top'}),
html.Div([
dcc.Input(id='lookback-selection-market-commodity-futures-tab3',
placeholder='Lookback (yyyy-mm-dd) or 5Y',
type='text', value='')
], className='two columns wind-polar'),
html.Div([
dt.DataTable(
style_table={'overflowX': 'scroll'},
columns=[{"name": i, "id": i, "deletable": False} for i in cols_fly],
editable=False,
row_deletable=False,
filter_action="native",
sort_action="native",
sort_mode='multi',
row_selectable='single', # multi
selected_rows=[0],
page_action='native',
page_current=0,
page_size=15,
id='fly-score-table-market-commodity-futures-tab3'
)
], className='twelve columns row wind-speed-row'),
html.Div([
html.Div([
dcc.Graph(style={'height': '450px'}, id='historical-fly-time-series-market-commodity-futures-tab3')
], className='six columns wind-polar'),
html.Div([
dcc.Graph(style={'height': '450px'}, id='historical-fly-scatterplot-market-commodity-futures-tab3')
], className='six columns wind-polar'),
], className='twelve columns row wind-speed-row')
])
elif tab_choice == 4:
return html.Div([
html.Div([html.H3('Seasonality')], className='Title twelve columns wind-polar'),
html.Div([
html.Div([
dcc.Input(
id='seasonality-contract-one-market-commodity-futures-tab4',
                        placeholder='Enter contract (e.g. NGZ2020)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='seasonality-contract-two-market-commodity-futures-tab4',
                        placeholder='Enter contract (e.g. NGZ2020)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='seasonality-contract-three-market-commodity-futures-tab4',
                        placeholder='Enter contract (e.g. NGZ2020)',
type='text',
value=''
)
], className='two columns wind-polar'),
], className='twelve columns wind-polar'),
html.Div([
html.Div([
dcc.Input(
id='seasonality-weight-one-market-commodity-futures-tab4',
placeholder='Enter weight (e.g. 1)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='seasonality-weight-two-market-commodity-futures-tab4',
placeholder='Enter weight (e.g. -2)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
dcc.Input(
id='seasonality-weight-three-market-commodity-futures-tab4',
placeholder='Enter weight (e.g. 1)',
type='text',
value=''
)
], className='two columns wind-polar'),
], className='twelve columns wind-polar'),
html.Div([
html.Div([
dcc.Input(
id='seasonality-lookback-window-market-commodity-futures-tab4',
placeholder='Enter lookback days (e.g. 250)',
type='text',
value=''
)
], className='two columns wind-polar'),
html.Div([
html.Button('Go', id='seasonality-button-market-commodity-futures-tab4')
], className='two columns wind-polar'),
], className='twelve columns wind-polar'),
html.Div([
dcc.Graph(style={'height': '700px'}, id='seasonal-term-structures-market-commodity-futures-tab4')
], className='twelve columns row wind-speed-row'),
])
# -------------------------------------------------------- define event handler --------------------------------------------------
@app.callback(
Output('overview-table-market-commodity-futures-tab1', 'data'),
[Input('outright-spread-market-commodity-futures-tab1', 'value')]
)
def update_overview_table_market_commodity_futures_tab1(inter_comdty):
if inter_comdty == 'Outright':
return df_single.to_dict('records')
else:
return df_inter.to_dict('records')
@app.callback(
Output('historical-time-series-market-commodity-futures-tab1', 'figure'),
[Input('overview-table-market-commodity-futures-tab1', 'data'),
Input('overview-table-market-commodity-futures-tab1', 'selected_rows'),
Input('historical-generic-series-button-market-commodity-futures-tab1', 'n_clicks')],
[State('generic-series-start-number-market-commodity-futures-tab1', 'value'),
State('generic-series-end-number-market-commodity-futures-tab1', 'value'),
State('lookback-selection-market-commodity-futures-tab1', 'value')]
)
def update_historical_time_series_market_commodity_futures_tab1(rows_data, rows_selected, n_clicks, generic_start_str, generic_end_str, lookback_window):
# print('historical series called')
sym_root = rows_data[rows_selected[0]]['Contract'][:-5] # remove e.g. Z2018
# sym = df_config.loc[df_config['Name'] == name]['Quandl Code'].values[0]
lookback_date = convert_date_input(lookback_window, datetime(2008, 1, 1))
if ':' in sym_root:
df = generic_inter_comdty_hist_prices_dict[sym_root]
else:
df = generic_futures_hist_prices_dict[sym_root]
df = df[lookback_date.date():]
generic_start = 1
if (generic_start_str is not None) and (not not generic_start_str):
generic_start = int(generic_start_str)
generic_end = df.shape[1]
if (generic_end_str is not None) and (not not generic_end_str):
generic_end = int(generic_end_str)
traces = [go.Scatter(x=df[col].index,
y=df[col],
mode='lines',
name=col)
for col in df.columns[generic_start-1:generic_end]]
layout_fig = go.Layout(
xaxis=dict(title='Date',
rangeslider=dict(
visible=False
),
type='date'),
yaxis=dict(title='Price'),
legend=dict(orientation="h"),
height=800, margin=dict(l=0, r=0, t=0, b=0),
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
return go.Figure(data=traces, layout=layout_fig)
@app.callback(
Output('historical-term-structures-market-commodity-futures-tab1', 'figure'),
[Input('overview-table-market-commodity-futures-tab1', 'data'),
Input('overview-table-market-commodity-futures-tab1', 'selected_rows'),
Input('term-structure-button-market-commodity-futures-tab1', 'n_clicks')],
[State('term-structure-date-one-market-commodity-futures-tab1', 'value'),
State('term-structure-date-two-market-commodity-futures-tab1', 'value'),
State('term-structure-date-three-market-commodity-futures-tab1', 'value'),
State('term-structure-date-four-market-commodity-futures-tab1', 'value'),
State('term-structure-date-five-market-commodity-futures-tab1', 'value')]
)
def update_historical_term_structures_market_commodity_futures_tab1(rows_data, rows_selected, n_clicks, ione, itwo, ithree, ifour, ifive):
# print('historical term structure called')
sym_root = rows_data[rows_selected[0]]['Contract'][:-5]
if ':' in sym_root:
hist_data = inter_comdty_spread_hist_data_dict[sym_root]
meta_data = inter_comdty_spread_contracts_meta_df[inter_comdty_spread_contracts_meta_df['Root'] == sym_root]
meta_data.sort_values('Last_Trade_Date', inplace=True)
else:
hist_data = futures_hist_prices_dict[sym_root]
meta_data = futures_contracts_meta_df[futures_contracts_meta_df['Root'] == sym_root]
meta_data.sort_values('Last_Trade_Date', inplace=True)
asofdate = hist_data.index[-1]
s0 = hist_data.loc[asofdate]
s = s0.to_frame()
start_idx = hist_data.shape[0] - 1
if (ione is not None) and (not not ione):
t1 = convert_date_input(ione, datetime.today())
t1 = t1.date()
dateidx1 = hist_data.index.searchsorted(t1) # first one greater than or equal to
s1 = hist_data.iloc[dateidx1]
s = pd.concat([s, s1], axis=1)
start_idx = min(dateidx1, start_idx)
if (itwo is not None) and (not not itwo):
t2 = convert_date_input(itwo, datetime.today())
t2 = t2.date()
dateidx2 = hist_data.index.searchsorted(t2) # first one greater than or equal to
s2 = hist_data.iloc[dateidx2]
s = pd.concat([s, s2], axis=1)
start_idx = min(dateidx2, start_idx)
if (ithree is not None) and (not not ithree):
t3 = convert_date_input(ithree, datetime.today())
t3 = t3.date()
dateidx3 = hist_data.index.searchsorted(t3) # first one greater than or equal to
s3 = hist_data.iloc[dateidx3]
s = pd.concat([s, s3], axis=1)
start_idx = min(dateidx3, start_idx)
if (ifour is not None) and (not not ifour):
t4 = convert_date_input(ifour, datetime.today())
t4 = t4.date()
dateidx4 = hist_data.index.searchsorted(t4) # first one greater than or equal to
s4 = hist_data.iloc[dateidx4]
s = pd.concat([s, s4], axis=1)
start_idx = min(dateidx4, start_idx)
if (ifive is not None) and (not not ifive):
t5 = convert_date_input(ifive, datetime.today())
t5 = t5.date()
dateidx5 = hist_data.index.searchsorted(t5) # first one greater than or equal to
s5 = hist_data.iloc[dateidx5]
s = pd.concat([s, s5], axis=1)
start_idx = min(dateidx5, start_idx)
st = s.join(meta_data['Last_Trade_Date'], how='left')
st = st.sort_values('Last_Trade_Date')
# find the first common date
# dateidx_st = st['Last_Trade_Date'].searchsorted(hist_data.index[start_idx])[0]
dateidx_st = st['Last_Trade_Date'].searchsorted(hist_data.index[start_idx])
st = st.iloc[dateidx_st:]
# st.fillna(0.0, inplace=True)
traces = [go.Scatter(x=st['Last_Trade_Date'], y=st[c], name=c.strftime('%Y-%m-%d'), mode='lines+markers', hovertext=st.index) for c in st.columns[:-1]]
layout_fig = go.Layout(title=sym_root, xaxis={'title': sym_root}, yaxis={'title': 'Price'},
legend=dict(orientation="h"),
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
#plotly.offline.plot({'data': traces, 'layout': layout})
return go.Figure(data=traces, layout=layout_fig)
@app.callback(
Output('spread-score-table-market-commodity-futures-tab2', 'data'),
[Input('product-dropdown-curve-spread-market-commodity-futures-tab2', 'value')]
)
def update_spread_score_table_market_commodity_futures_tab2(sym_root):
df = spread_scores_dict[sym_root]
df = df[cols_spread]
return df.to_dict('records')
@app.callback(
Output('historical-spread-time-series-market-commodity-futures-tab2', 'figure'),
[Input('spread-score-table-market-commodity-futures-tab2', 'data'),
Input('spread-score-table-market-commodity-futures-tab2', 'selected_rows'),
Input('lookback-selection-market-commodity-futures-tab2', 'value')]
)
def update_historical_spread_time_series_market_commodity_futures_tab2(rows_data, rows_selected, lookback_window):
sym_root = rows_data[rows_selected[0]]['Name']
leg1 = rows_data[rows_selected[0]]['Leg1 Actual']
leg2 = rows_data[rows_selected[0]]['Leg2 Actual']
lookback_date = convert_date_input(lookback_window, datetime(2008, 1, 1))
if ':' in sym_root:
df1 = inter_comdty_spread_hist_data_dict[sym_root][leg1]
df2 = inter_comdty_spread_hist_data_dict[sym_root][leg2]
else:
df1 = futures_hist_prices_dict[sym_root][leg1]
df2 = futures_hist_prices_dict[sym_root][leg2]
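    # Spread price series: leg-1 price minus leg-2 price.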
df = df1 - df2
df = df[lookback_date.date():]
trace = go.Scatter(x=df.index, y=df, name=f'{leg1}-{leg2}', mode='lines')
layout_fig = go.Layout(title=sym_root, xaxis={'title': sym_root, 'type': 'date', 'tickformat': '%Y-%m-%d'},
yaxis={'title': 'Price'}, legend=dict(orientation="h"),
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
return go.Figure(data=[trace], layout=layout_fig)
@app.callback(
Output('historical-spread-scatterplot-market-commodity-futures-tab2', 'figure'),
[Input('spread-score-table-market-commodity-futures-tab2', 'data'),
Input('spread-score-table-market-commodity-futures-tab2', 'selected_rows'),
Input('lookback-selection-market-commodity-futures-tab2', 'value')]
)
def update_historical_spread_scatterplot_market_commodity_futures_tab2(rows_data, rows_selected, lookback_window):
sym_root = rows_data[rows_selected[0]]['Name']
leg1 = rows_data[rows_selected[0]]['Leg1 Actual']
leg2 = rows_data[rows_selected[0]]['Leg2 Actual']
lookback_date = convert_date_input(lookback_window, datetime(2008, 1, 1))
if ':' in sym_root:
df1 = inter_comdty_spread_hist_data_dict[sym_root][leg1]
df2 = inter_comdty_spread_hist_data_dict[sym_root][leg2]
df0 = generic_inter_comdty_hist_prices_dict[sym_root][sym_root + '1']
else:
df1 = futures_hist_prices_dict[sym_root][leg1]
df2 = futures_hist_prices_dict[sym_root][leg2]
df0 = generic_futures_hist_prices_dict[sym_root][sym_root + '1']
df = df1 - df2
df = pd.concat([df0, df], axis=1)
df = df[lookback_date.date():]
df.dropna(inplace=True)
trace1 = go.Scatter(x=df.iloc[:, 0], y=df.iloc[:, 1], name=f'{leg1}-{leg2}', mode='markers')
trace2 = go.Scatter(x=[df.iloc[-1, 0]], y=[df.iloc[-1, 1]], name='today', mode='markers',
marker=dict(color=['red'], size=[20]))
layout_fig = go.Layout(xaxis=dict(title='Generic 1st price'),
yaxis=dict(title='Spread price'),
showlegend=False,
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
return go.Figure(data=[trace1, trace2], layout=layout_fig)
@app.callback(
Output('fly-score-table-market-commodity-futures-tab3', 'data'),
[Input('product-dropdown-curve-fly-market-commodity-futures-tab3', 'value')]
)
def update_fly_score_table_market_commodity_futures_tab3(sym_root):
df = fly_scores_dict[sym_root]
df = df[cols_fly]
return df.to_dict('records')
@app.callback(
Output('historical-fly-time-series-market-commodity-futures-tab3', 'figure'),
[Input('fly-score-table-market-commodity-futures-tab3', 'data'),
Input('fly-score-table-market-commodity-futures-tab3', 'selected_rows'),
Input('lookback-selection-market-commodity-futures-tab3', 'value')]
)
def update_historical_fly_time_series_market_commodity_futures_tab3(rows_data, rows_selected, lookback_window):
sym_root = rows_data[rows_selected[0]]['Name']
leg1 = rows_data[rows_selected[0]]['Leg1 Actual']
leg2 = rows_data[rows_selected[0]]['Leg2 Actual']
leg3 = rows_data[rows_selected[0]]['Leg3 Actual']
lookback_date = convert_date_input(lookback_window, datetime(2008, 1, 1))
if ':' in sym_root:
df1 = inter_comdty_spread_hist_data_dict[sym_root][leg1]
df2 = inter_comdty_spread_hist_data_dict[sym_root][leg2]
df3 = inter_comdty_spread_hist_data_dict[sym_root][leg3]
else:
df1 = futures_hist_prices_dict[sym_root][leg1]
df2 = futures_hist_prices_dict[sym_root][leg2]
df3 = futures_hist_prices_dict[sym_root][leg3]
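    # Butterfly price series: long the wings, short twice the body
    # (leg1 - 2*leg2 + leg3).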
df = df1 - df2*2 + df3
df = df[lookback_date.date():]
    trace = go.Scatter(x=df.index, y=df, name=f'{leg1}-2*{leg2}+{leg3}', mode='lines')
layout_fig = go.Layout(title=sym_root, xaxis={'title': sym_root, 'type': 'date', 'tickformat': '%Y-%m-%d'},
yaxis={'title': 'Price'}, legend=dict(orientation="h"),
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
return go.Figure(data=[trace], layout=layout_fig)
@app.callback(
Output('historical-fly-scatterplot-market-commodity-futures-tab3', 'figure'),
[Input('fly-score-table-market-commodity-futures-tab3', 'data'),
Input('fly-score-table-market-commodity-futures-tab3', 'selected_rows'),
Input('lookback-selection-market-commodity-futures-tab3', 'value')]
)
def update_historical_fly_scatterplot_market_commodity_futures_tab3(rows_data, rows_selected, lookback_window):
sym_root = rows_data[rows_selected[0]]['Name']
leg1 = rows_data[rows_selected[0]]['Leg1 Actual']
leg2 = rows_data[rows_selected[0]]['Leg2 Actual']
leg3 = rows_data[rows_selected[0]]['Leg3 Actual']
lookback_date = convert_date_input(lookback_window, datetime(2008, 1, 1))
if ':' in sym_root:
df1 = inter_comdty_spread_hist_data_dict[sym_root][leg1]
df2 = inter_comdty_spread_hist_data_dict[sym_root][leg2]
df3 = inter_comdty_spread_hist_data_dict[sym_root][leg3]
df0 = generic_inter_comdty_hist_prices_dict[sym_root][sym_root + '1']
else:
df1 = futures_hist_prices_dict[sym_root][leg1]
df2 = futures_hist_prices_dict[sym_root][leg2]
df3 = futures_hist_prices_dict[sym_root][leg3]
df0 = generic_futures_hist_prices_dict[sym_root][sym_root + '1']
df = df1 - df2*2 + df3
df = pd.concat([df0, df], axis=1)
df = df[lookback_date.date():]
    trace1 = go.Scatter(x=df.iloc[:, 0], y=df.iloc[:, 1], name=f'{leg1}-2*{leg2}+{leg3}', mode='markers')
trace2 = go.Scatter(x=[df.iloc[-1, 0]], y=[df.iloc[-1, 1]], name='today', mode='markers',
marker=dict(color=['red'], size=[20]))
layout_fig = go.Layout(xaxis=dict(title='Generic 1st price'),
                           yaxis=dict(title='Fly price'),
showlegend=False,
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
return go.Figure(data=[trace1, trace2], layout=layout_fig)
@app.callback(
Output('seasonal-term-structures-market-commodity-futures-tab4', 'figure'),
[Input('seasonality-button-market-commodity-futures-tab4', 'n_clicks')],
[State('seasonality-contract-one-market-commodity-futures-tab4', 'value'),
State('seasonality-contract-two-market-commodity-futures-tab4', 'value'),
State('seasonality-contract-three-market-commodity-futures-tab4', 'value'),
State('seasonality-weight-one-market-commodity-futures-tab4', 'value'),
State('seasonality-weight-two-market-commodity-futures-tab4', 'value'),
State('seasonality-weight-three-market-commodity-futures-tab4', 'value'),
State('seasonality-lookback-window-market-commodity-futures-tab4', 'value')]
)
def update_seasonality_curves_market_commodity_futures_tab4(n_clicks, contract1, contract2, contract3, weight1, weight2, weight3, lookback):
if (contract1 is None) or (not contract1):
return go.Figure()
if (contract2 is not None) and (not not contract2):
if (contract3 is not None) and (not not contract3):
contracts = [contract1.upper(), contract2.upper(), contract3.upper()]
weights = [int(weight1), int(weight2), int(weight3)]
else:
contracts = [contract1.upper(), contract2.upper()]
weights = [int(weight1), int(weight2)]
else:
contracts = [contract1.upper()]
weights = [int(weight1)]
sym_root = contracts[0][:-5]
if ':' in sym_root:
hist_data = inter_comdty_spread_hist_data_dict[sym_root]
meta_data = inter_comdty_spread_contracts_meta_df[inter_comdty_spread_contracts_meta_df['Root'] == sym_root]
meta_data.sort_values('Last_Trade_Date', inplace=True)
else:
hist_data = futures_hist_prices_dict[sym_root]
meta_data = futures_contracts_meta_df[futures_contracts_meta_df['Root'] == sym_root]
meta_data.sort_values('Last_Trade_Date', inplace=True)
nlookback = 5000
if (lookback is not None) and (not not lookback):
nlookback = int(lookback)
asofdate = hist_data.index[-1]
s = get_seasonal_contracts(asofdate, contracts, weights, hist_data, meta_data)
s = s.iloc[-nlookback:]
traces = [go.Scatter(x=s.index, y=s[c], name=c, mode='lines') for c in s.columns]
layout_fig = go.Layout(title=sym_root, xaxis={'title': sym_root, 'type': 'date', 'tickformat': '%b %d'},
yaxis={'title': 'Price'}, legend=dict(orientation="h"),
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
return go.Figure(data=traces, layout=layout_fig)
|
{"hexsha": "18bd633cf89b3b71f409a330a0e5f663309dc09f", "size": 38570, "ext": "py", "lang": "Python", "max_stars_repo_path": "dash/futures/commodity_futures_app.py", "max_stars_repo_name": "jingmouren/QuantResearch", "max_stars_repo_head_hexsha": "7a17e567b0e95481894ed37524c041b30155b6cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 623, "max_stars_repo_stars_event_min_datetime": "2020-07-11T04:28:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:30:16.000Z", "max_issues_repo_path": "dash/futures/commodity_futures_app.py", "max_issues_repo_name": "jingmouren/QuantResearch", "max_issues_repo_head_hexsha": "7a17e567b0e95481894ed37524c041b30155b6cb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-09-04T14:18:38.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:25:14.000Z", "max_forks_repo_path": "dash/futures/commodity_futures_app.py", "max_forks_repo_name": "jingmouren/QuantResearch", "max_forks_repo_head_hexsha": "7a17e567b0e95481894ed37524c041b30155b6cb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 155, "max_forks_repo_forks_event_min_datetime": "2020-07-11T21:57:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T12:55:13.000Z", "avg_line_length": 44.6929316338, "max_line_length": 208, "alphanum_fraction": 0.5936738398, "include": true, "reason": "import numpy,import scipy", "num_tokens": 9346}
|
\documentclass[preprint,pre,floats,aps,amsmath,amssymb]{revtex4}
\usepackage{graphicx}
\usepackage{bm}
\begin{document}
\title{Band Structure of Silver Chloride (AgCl) using the LDA (Local Density Approximation) with ABINIT}
\author{Jaswinder Singh (Roll No-2016PHY1059)}
\date{\today}
\begin{abstract}
The band structure of the compound silver chloride (AgCl) is studied using the ABINIT package within the framework of density functional theory (DFT). The approximation used is the local density approximation (LDA). DFT is an exact theory for the ground state of an arbitrary many-electron system. The Kohn-Sham (KS) equation is an exact effective single-particle equation for an auxiliary system; its iterative solution provides the ground-state energy and density of the real system. For obtaining the band gap, a TD-DFT calculation is performed using ab-initio pseudopotentials, which makes DFT a true ab-initio calculation.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:intro}
\subsection{About ABINIT}
This paper contains a general outline of the information that should
be included in a scientific paper. It provides a good template within
which you can easily write a paper. When you start out writing
papers, you will likely include most of these sections and utilize
this fairly standard format. As you gain experience, you may choose a
different ordering or different sections as you find appropriate.
Remember this is just a template to help you get started. You will
have your own style of writing. Your audience and the content of your
paper should be the most important guiding influence when writing any
paper. The writing process will go much more smoothly if you take
some time to answer a few questions before you begin writing. For
example, before you begin writing, ask yourself, ``Who is my
audience?'', ``What do I want them to get out of this paper?'', and
``What are the most important ideas to convey with this paper?''
There are lots of other questions you could ask, but these three will
help you generate a document that is pitched at the right level and
contains information that is useful to your audience.
You should keep in mind that a good scientific paper always introduces
the reader to the subject material and provides the appropriate
background information that the author thinks the reader will need. A
good scientific paper will always make the experimental,
computational, or theoretical methods clear enough so that a competent
reader should be able to reproduce the work. A clear description of
how any data was collected should be included as well as a description
of the raw data, generally in graphical format. Any analysis performed
on the data should be outlined clearly. Analysis and conclusions drawn
from the analysis should generally be described separately from raw
data. A paper should end with a set of conclusions based on the
analysis.
It is the responsibility of the author to carefully lead the reader
from the experimental design through the conclusions tying each piece
together. For example, it should be clear to the reader explicitly
how your analysis leads from your raw data to your conclusions. If
you do not make this clear, no matter whether or not you are right,
you have not done your job as an author and will find that you have a
hard time convincing anyone that what you have done is valid.
Finally, every paper should end with a references section. A
scientific paper without any references, indicates that the author
believes that every thought conveyed in the paper is original. Any
information that you obtain from another source should be cited. The
only exception is for material that is considered common knowledge.
As a student, your common knowledge will often be somewhat more
limited than the average author in a scientific journal. As such, you
will often reference information from class notes or textbooks that
other authors may not. When in doubt, make a reference. This
eliminates any possibility that you will be accused of plagiarism, a
very serious transgression indeed.
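In \LaTeX\ the mechanics are straightforward: cite with
$\backslash$cite\{label\} in the text and list the sources in a
\texttt{thebibliography} environment at the end. A minimal skeleton (the
entry below is a placeholder format, not a real reference):
\begin{verbatim}
\begin{thebibliography}{9}
\bibitem{label} Author, \emph{Title}, Journal \textbf{Vol}, page (year).
\end{thebibliography}
\end{verbatim}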
An introduction generally contains a brief introduction to the
material that will be presented. Relevant information includes a clear
enunciation of the questions that will be addressed in the paper,
background information relevant for understanding the paper, basic
theory needed to understand the contents of the paper, etc.
It is important to take into account your audience when writing the
introduction. The purpose of an introduction is most often to give
your audience enough information so that they will be able to
understand the rest of your paper and put it into a larger context.
Depending on your audience, this context may vary. For example, if
you are preparing a paper with other physics students in mind as the
audience, you will write the introduction so they see how their
previous physics knowledge will be useful in understanding this paper.
If on the other hand, you are writing this paper for a narrow
selection of researchers, you will not need to include as much
information. Rather, you will present them with enough information so
that they can see how this paper fits in with relevant research.
Because you may not be familiar with \LaTeX, you will undoubtedly have
many questions about how to do certain things. This document will
serve as a template for producing professional looking papers in
\LaTeX. Before you begin to modify this document, make sure you have
a copy of it saved somewhere so that you can refer back to it if
needed. In addition, there are lots of places to get help with
\LaTeX\ (including asking professors in physics and math), but a
useful place to begin is to visit http://www.giss.nasa.gov/latex/.
All the computers in the physics labs are equipped with a program
called TeXshop that runs the \LaTeX\ engine.
If you have any questions about the appropriate style for a scientific
paper, you should refer to the American Institute of Physics (AIP)
Style Manual at http://www.aip.org/pubservs/style/4thed/toc.html.
\section{Theory}
\label{sec:theory}
Often, if the theory needed to understand a paper is somewhat
extensive, a separate section containing a description of the theory
will be presented. This section should contain enough theoretical
detail to make it possible for a member of your target audience to be
able to reproduce any results you come up with. Obviously, the amount
of detail that you include will depend on space constraints and the
expected level of expertise of your audience.
In the context of a paper written by an undergraduate for a class, you
should include all non-obvious steps and be sure to reference material
that is not ``common knowledge.'' If you just learned the material in
a class, you should include references to where the basic derivation
comes from. If you start with a non-trivial expression that you had
to look up somewhere, either in a book, a paper, or your notes, you
should definitely include a reference.
All equations should be incorporated into the text using a program
designed to properly format equations. \LaTeX\ is designed to handle
equations, equation numbering, and cross referencing to sections,
equations, and figures with ease. In fact, you do not need to worry
about numbering any sections or equations, that will be done for you
automatically. You may want to refer back to an equation, figure, or
section. To do so, you simply label the appropriate item and then
refer back to it when needed. For example, to refer back to the
introduction, I can type something like ``this is discussed in
Sections $\backslash$ref\{sec:intro\} and
$\backslash$ref\{sec:theory\}'' to get ``this is discussed in
Sections~\ref{sec:intro} and \ref{sec:theory}.'' Notice that I didn't
have to worry about the section numbers. This is a life saver when
you are writing a paper with lots of equations and figures. Equation
numbering is automatic only in ``displayed math'' mode, which is
illustrated here,
\begin{equation}
\textbf{E}=\textbf{E}_0\cos (\textbf{k}\cdot\textbf{r}-\omega t+\phi),
\label{eq:E}
\end{equation}
and here,
\begin{equation}
\textbf{B}=\textbf{B}_0\cos (\textbf{k}\cdot\textbf{r}-\omega t+\phi).
\label{eq:B}
\end{equation}
Of course, I can easily refer back to Eqs.~\ref{eq:E} and \ref{eq:B}
without having to remember the numbers.
\section{Experimental Methods}
\label{sec:experiment}
This section is often called experimental design or methods. It
contains information about how you went about your experiment. The
purpose of this section is to convince your reader that your
experimental methods were sound and thorough. That said, if you have
made experimental errors that you did not correct, or if you made
errors along the way it is your responsibility to report them here.
If you do not clearly report your experimental methods, you run the
risk of having someone else try your experiment and get other results.
This then brings into question the validity of your conclusions and
your reputation as a scientist. In addition, if you made errors along
the way that you corrected before collecting your final data, it may
be worth presenting them here so that others can benefit from your
mistakes.
Often you will include a diagram of the experimental setup. This is
shown in Fig.~\ref{fig:geometry} (note that I didn't have to worry
about the figure number). Of course, \LaTeX\ is a typesetting program
and is not a graphics program, so you will have to make your graphics
in a different program, say, Adobe Illustrator or Xfig. Fortunately,
including the figures into a \LaTeX document is a pretty simple
matter.
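For instance, with the \texttt{graphicx} package loaded in the preamble, a
figure can be pulled in as follows (a minimal sketch; the file name and
caption here are purely illustrative):
\begin{verbatim}
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{setup_diagram}
\caption{Schematic of the experimental setup.}
\label{fig:geometry}
\end{figure}
\end{verbatim}
The figure can then be recalled anywhere in the text with
Fig.~$\backslash$ref\{fig:geometry\}.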
Any diagram you include should contain a fairly detailed figure
caption. A good rule of thumb is that if someone reads the abstract
and looks at all the figures and captions, they should have a
reasonable idea what your paper is about. While this isn't always
possible, it is a good thing to shoot for. That said, this document
doesn't even come close to meeting that requirement, but it also isn't
so much a scientific paper as a how-to manual on writing one.
As mentioned before, you should include enough information in your
experimental design to make it possible for someone else to reproduce
your experiment. You should generally outline what you did with
enough detail so that it is clear how you set up your experiment and
how you collected your data.
It is particularly important to include anything out of the ordinary.
Often we make experimental errors in our setup. It isn't fun, but it
happens. If one clearly articulates her setup, it is possible for
others to identify these often subtle experimental errors.
\section{Results}
\label{sec:results}
Your paper should contain a section describing your raw results.
Often this will be done by including graphs and/or tables of data.
This data should generally not be heavily processed. Rather, one
should include results in an understandable format that are a good
representation of the data obtained by your experiment or computation.
You will have a chance to show processed results in the analysis
section, but in this section you need to present the reader with your
raw data so she can clearly judge the quality of your analysis and
conclusions.
Often you have far too much data to include it all. In this case, you
will include a sample of raw data with tables or graphs containing
straightforward compilations of this data.
It is generally best to make all figures only a single column width,
as shown in Fig.~\ref{fig:force}. You generally have three choices of
where to place the figures in \LaTeX: here (meaning right here if
possible), top (meaning top of the page if possible), and bottom
(meaning bottom of the page if possible). You may still have to do
some fiddling at the end to get them exactly where you want them.
There are also times when it is appropriate to include a table of
data. Unfortunately, tables are not the simplest thing in the world
to do in \LaTeX, but they're not all that difficult either.
Basically, if you have to make a table, it is best to look for some
help in a book or online and then fiddle until you get it looking the
way you want. Table~\ref{tab:temps} shows an example of a table that
compares two sets of temperature data. As you might expect, simpler
tables are easier to make.
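If you just need something bare-bones, a minimal two-column
\textit{tabular} (a sketch, deliberately simpler than the tables shown
here) looks like this in the source:
\begin{verbatim}
\begin{tabular}{l c}
\hline
Trial & Reading (cc) \\ \hline
1 & 1.8 \\
2 & 2.4 \\
\hline
\end{tabular}
\end{verbatim}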
\begin{table}[ht]
\caption{Conventional and syringe thermometer readings. The highest
and lowest readings were used for calibration.}
\begin{center}
\begin{tabular}{@{\hspace{18pt}} c @{\hspace{18pt}} ||
@{\hspace{12pt}} c @{\hspace{12pt}} | @{\hspace{12pt}} c
@{\hspace{12pt}} }
\hline\hline
Conventional & \multicolumn{2}{c}{Syringe {\hspace{9pt}} } \\ \hline
20$^\circ$C & 1.8cc & 20$^\circ$C \\
27$^\circ$C & 2.4cc & 28$^\circ$C \\
42$^\circ$C & 3.9cc & 46$^\circ$C \\
55$^\circ$C & 5.0cc & 59$^\circ$C \\
67$^\circ$C & 6.0cc & 72$^\circ$C \\
84$^\circ$C & 7.0cc & 84$^\circ$C \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:temps}
\end{table}
In general, you should never include a table in a paper when a
figure/graph will do a better job. It is quite rare to see tables in
scientific papers. You should never include a long list of data or an
excerpt from a spreadsheet unless the particular values in the list
are very important. Long lists are hard to read and generally confuse
or bore your reader.
Most often tables are used to show a few numbers derived from a larger
dataset. This is a good use of tables but should generally occur in
the analysis sections because the numbers are derived from the data.
Here is another table, which we can reference in the same way as
described in Section~\ref{sec:theory}. Table~\ref{tab:pressure} shows
a slightly simpler table.
\begin{table}[ht]
\caption{Force, area, and pressure data for the experiment shown in
Fig.~\ref{fig:geometry} and described by Eq.~\ref{eq:B}. Agreement is
typically within five percent.}
\begin{center}
\begin{tabular}{l @{\hspace{30pt}} c @{\hspace{18pt}} c}
\hline\hline
& Piston 1 & Piston 2 \\ \hline
Avg. Force (N) & 4.40 & 2.25 \\
Area (cm$^2$) & 6.16 & 2.25 \\
$F/A$ (N/cm$^2$) & 0.714 & 0.717 \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:pressure}
\end{table}
\section{Analysis}
\label{sec:analysis}
After you have clearly described your results, you will describe how
you will analyze these results, that is, how you will process the data
you collected to obtain information that will help you answer the
questions you brought up in the introduction.
It is critically important that the analysis section of a paper is
clear. Your job in the analysis section is to convince the reader
that the methods you used to get from your results to your conclusions
are sound. If your analysis section is incomplete or unclear, your
reader may not trust the conclusions you draw.
This is another section where you will often have equations, graphs
and tables. Remember that whenever you use an equation, graph or
table, it should be referred to in the text. Any equation, graph,
figure, or table should fit into your explanations. If you include a
graph but make not mention of it in the text, the graph either has not
reason to be included, or you have omitted important information from
the text.
\section{Conclusion}
\label{sec:conclusion}
Your conclusions section should be brief, but long enough to refocus
the reader. The conclusions section describes your assertions based
on your data. In essence, it contains the answers you've come up with
for the questions you asked in the introduction.
You should also make it a point to place your conclusions within a
context. That is, you should discuss the possible implications of
your conclusions or how they might be relevant to other researchers.
This is often hard to do as a student, but not impossible. Some
questions you can keep in mind when writing this section are: Why
are these conclusions important? Who might these results affect?
What could these results be useful for?
It is important to keep in mind that you should not overstate your
conclusions. A common error authors make is to overgeneralize one's
conclusions. For example, if I find that a particular type of crystal
behaves non-linearly within certain parameters, it is an
overgeneralization to conclude that all crystals of that structure
will behave the same way. If the author suspects this to be the case,
she can state her prediction, but should not assert that it is a fact
just because she has a hunch based on her experiment with this one
crystal. This leads us to a final section that you may or may not
want to include.
\section{Suggestions for Further Research}
\label{sec:further_research}
This section contains a listing of the directions that the author
thinks it will be possible to extend this research. It can be a list
of possible future experiments or questions one might ask that are
based on the results of the research presented. This section gives
the author the opportunity to be somewhat more creative. That said,
it should be clear in the paper that the statements made in this
section are suggestions, conjecture, and/or gut reactions. It is good
to include this kind of information, because it helps one to refine
her intuition and practice asking interesting scientific questions.
Often, this kind of information goes in the conclusion or a section
called ``discussion.''
\subsubsection*{Including References}
You must also include a references section in any scientific paper.
To omit the references section is to almost certainly commit
plagiarism. As mentioned before, you should include references
whenever you have used information from another source. This might be
a professor's notes or a textbook. As you advance in your studies,
your references will come more and more from journal articles since
these articles generally present more recent results.
In \LaTeX, references are handled very easily in a section called
``thebibliography.'' Thus, you won't actually make a section called
references, you will have something called \textit{thebibliography}.
All you do is add a ``bibitem'' to \textit{thebibliography} and give
it a label. Then, whenever you want to refer to it, you use a
$\backslash$cite\{\} command. The order in which you put the items in
``thebibliography'' is the order of the numbering of those items.
Therefore, make sure you put these items in the order that they appear
in your paper. Here is an example of a book citation~\cite{FHD}, an
article citation~\cite{Jackson}, and a comment that might make an
important subtle point but one that would detract from the main
text~\cite{Comment}.
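As a minimal sketch (modeled on the entries at the end of this
document), a book entry looks like this:
\begin{verbatim}
\begin{thebibliography}{99}
\bibitem{FHD}R. E. Rosensweig, {\it Ferrohydrodynamics}
(Cambridge University Press, Cambridge, 1985).
\end{thebibliography}
\end{verbatim}
and you would then cite it in the text with \verb!\cite{FHD}!.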
That is basically all the sections that are normally included in a
scientific paper, but there are still some issues that might help you
regarding \LaTeX. I have put these into an appendix as an example of
how you might use an appendix to put material that is essential to
include but inhibits the flow of the paper.
\begin{acknowledgments}
You should always have a short acknowledgements section. This is
where you thank people who helped you with the project. These can be
people that assisted with construction, people you talked with that
gave you good ideas, people you had an email correspondence with,
basically anyone that contributed in some way to the success of the
project. You would also list funding agencies in the acknowledgements
section.
\end{acknowledgments}
\appendix*
\section{More \LaTeX\ Information}
\label{sec:latex}
This appendix is here to give you a bit more of an introduction to
\LaTeX. At this point, it is very short and only includes the most
basic items, but I will expand it in the future. If there is
something you learned about \LaTeX that was very valuable, please let
me know and I will put it in here.
\subsection{Getting Started}
The first thing you have to do is open TeXShop on one of the Macs in
the lab and then open a .tex file (this one, for example). Then you
need to typeset the document. Depending on the options in TeXShop,
the file will compile and produce a PDF file that you can then read or
print out.
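If you are working from a terminal rather than TeXShop (an alternative
setup, not something the lab requires), you can typically compile a
document with something like:
\begin{verbatim}
pdflatex paper.tex
pdflatex paper.tex
\end{verbatim}
Running the command twice gives \LaTeX\ a chance to resolve all of the
cross-references discussed above.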
\subsection{Fonts}
In a typical scientific paper, you might want to use \textit{italics}
or \textbf{bold} fonts occasionally. In \LaTeX, these are
accomplished by using the \verb!\textit{}! and \verb!\textbf{}!
commands. The text you actually want italicized (or in bold) would be
placed inside the curly braces.
\subsection{Math Mode}
In \LaTeX, you enter math mode by typing \verb!$! and then you leave
math mode by typing another \verb!$!. Thus, to type an equation, you
place it between two dollar signs. For example, typing \verb!$F=ma$!
results in $F=ma$. Greek letters are made by typing a backslash and
the name of the greek letter. For example,
\verb!$\alpha-\beta+\gamma$! results in $\alpha-\beta+\gamma$.
Superscripts and subscripts are handled by using \verb!^{}! and
\verb!_{}! in math mode respectively. For example, typing
\verb!$A_{1}=e^{-x^{2}}$! results in $A_{1}=e^{-x^{2}}$.
So far, all of these examples have been for \textit{inline} equations
that occur right in the paragraph you are typing. More often, you
will want to put equations on lines all by themselves with an equation
number. This is called a displayed equation and is accomplished by
using the \textit{equation} environment (an environment in \LaTeX\ is
something that you begin and end, such as the \textit{abstract}
environment that was used to create the abstract of this document).
Thus, to create a displayed equation that has an equation number, you
type \verb!\begin{equation}!, then your equation (and a label), then
\verb!\end{equation}!. Here is an example:
\begin{equation}
F_m = -\frac{d{\cal E}_m^{(1)} }{db}.
\label{eq:deriv}
\end{equation}
You will have noticed that to make the derivative in
Eq.~\ref{eq:deriv}, I had to make a fraction. The fraction command is
\verb!\frac{}{}! (in math mode); the numerator goes in the first set
of curly braces and the denominator goes in the second set of curly
braces. I also used the command \verb!\label{eq:deriv}! in the
equation environment so that I can refer to it simply by typing
\verb!Eq. \ref{eq:deriv}! to get Eq.~\ref{eq:deriv}.
You may also find the need to write vector equations. There are
different methods of writing vectors in a scientific paper. Most
textbooks and scientific papers opt to put vectors in bold:
$\textbf{F}=m\textbf{a}$. This was accomplished by typing
\verb!$\textbf{F}=m\textbf{a}$!. Note that the command
\verb!\textbf{}! essentially takes you out of math mode and places a
regular boldface letter in the equation. This is traditionally how
vectors are written in textbooks with the corresponding magnitudes for
$\textbf{F}$ and $\textbf{a}$ written as $F$ and $a$. This works fine
except when there is no non-math-mode character to make bold. An
example of a math-mode character that does not have a non-math-mode
equivalent is $\nabla$, obtained by typing \verb!\nabla!. If you want
this as a vector operator and you want it to be bold, you must use the
command \verb!\bm! (boldmath) to get $\bm{\nabla}$. This allows you
to write that in general,
\begin{equation}
\bm{\nabla}\times\textbf{A}\ne\bm{\nabla}\cdot\textbf{A}.
\end{equation}
Note the use of \verb!\times! for $\times$, \verb!\cdot! for $\cdot$
and \verb!\ne! for $\ne$. You might also be interested to know that
unit vectors can be written using the \verb!\hat{}! command. For
example, \verb!$\hat{\textbf{r}}$! results in $\hat{\textbf{r}}$ and
\verb!$\hat{\textbf{e}}_{\theta}$! results in
$\hat{\textbf{e}}_{\theta}$.
One more quick topic that is sure to be useful is how to break
equations. It is quite common to have equations that are too long to
fit on a single line. In these instances, you must break the equation
into multiple lines. This is done using the \textit{eqnarray}
environment: \verb!\begin{eqnarray}!$\cdots$\verb!\end{eqnarray}!. In
an \textit{eqnarray}, each line needs to be separated by
\verb!\\! and items surrounded by ampersands (\verb!&!) will be
aligned on separate lines. Also, each line will be numbered
separately unless you specify \verb!\nonumber! (which is typically what
you'll want to do). The following shows an example of a multiline
equation:
\begin{eqnarray}
F_{j} &=& \int d {\cal A}\, \Biggl\{ \frac{3\eta \dot{b}}{b^3}
(R^2 - r^2) + [\Psi_j(R) - \Psi_j(r)] \nonumber \\
&\ &\qquad\qquad + \frac{1}{2}\mu_0 \Bigl[ M_{jr}^2(R) -
M_{jz}^2(r) \Bigr] \Biggr\}.
\label{eq:forcej}
\end{eqnarray}
Notice that I have added some space in Eq.~\ref{eq:forcej} to make it
look a little nicer. For those that really want to make things look
great, fine tuning math equations with a little spacing here and there
can really make a difference. In math mode, you can add space with
the following commands: \verb!\,! small space, \verb!\:! medium space,
\verb!\;! large space, and \verb&\!& negative small space. These can
be quite useful in a number of situations. For example, compare
\verb!\sqrt{2}x! which gives $\sqrt{2}x$ with \verb!\sqrt{2}\,x! which
gives $\sqrt{2}\,x$, or \verb!\int\int dx dy! which gives $\int\int dx
dy$ with \verb&\int\!\!\int \!dx\,dy& which gives $\int\!\!\int
\!dx\,dy$. The differences are subtle, but for those with a discerning
eye, it is wonderful to have such control over your equations.
Incidentally, the commands \verb!\quad! and \verb!\qquad! add even
larger and larger amounts of space.
Well, that's all for now. I think that about covers the basics.
\LaTeX\ is an extraordinarily powerful program that is capable of
probably anything you can imagine. However, it is not always obvious
exactly how to accomplish what you want to do. Fortunately, most of
the ``basics'' are fairly easy and you should have no problem figuring
them out. For more advanced techniques, you may want to consult a
book or one of the online manuals (there are lots of them). Have fun
and let me know if you need any help!
\begin{thebibliography}{99}
\bibitem{FHD}R. E. Rosensweig, {\it Ferrohydrodynamics} (Cambridge
University Press, Cambridge, 1985), and references therein.
\bibitem{Jackson}D. P. Jackson, R. E. Goldstein and A. O. Cebers,
Phys. Rev. E {\bf 50}, 298 (1994).
\bibitem{Comment} Here is an example of a comment that you might need
to include. This is usually a comment about something very subtle
that might be important to include but generally gets in the way of
the regular text.
\end{thebibliography}
\end{document} % End of document.
|
{"hexsha": "47d17785d0845c42528e2c87375bc96e6709560a", "size": 26569, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project Report(Tex File)/Abinit final.tex", "max_stars_repo_name": "singh-sudo098/Band-Structure-of-AgCl-using-Abinit", "max_stars_repo_head_hexsha": "7b8bbfe1a2eedea6e8de8e6d748e173b0be1bdae", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project Report(Tex File)/Abinit final.tex", "max_issues_repo_name": "singh-sudo098/Band-Structure-of-AgCl-using-Abinit", "max_issues_repo_head_hexsha": "7b8bbfe1a2eedea6e8de8e6d748e173b0be1bdae", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project Report(Tex File)/Abinit final.tex", "max_forks_repo_name": "singh-sudo098/Band-Structure-of-AgCl-using-Abinit", "max_forks_repo_head_hexsha": "7b8bbfe1a2eedea6e8de8e6d748e173b0be1bdae", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.1109057301, "max_line_length": 590, "alphanum_fraction": 0.7721781023, "num_tokens": 6658}
|
Require Import VerdiRaft.Raft.
Require Import VerdiRaft.RaftRefinementInterface.
Require Import VerdiRaft.CommonTheorems.
Require Import VerdiRaft.SpecLemmas.
Require Import VerdiRaft.RefinementSpecLemmas.
Local Arguments update {_} {_} _ _ _ _ _ : simpl never.
Require Import VerdiRaft.InLogInAllEntriesInterface.
Section InLogInAllEntries.
Context {orig_base_params : BaseParams}.
Context {one_node_params : OneNodeParams orig_base_params}.
Context {raft_params : RaftParams orig_base_params}.
Context {rri : raft_refinement_interface}.
Lemma in_log_in_all_entries_append_entries :
refined_raft_net_invariant_append_entries in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
find_eapply_lem_hyp update_elections_data_appendEntries_log_allEntries; eauto.
intuition; repeat find_rewrite; eauto;
match goal with
| H : allEntries _ = _ |- _ => rewrite H in *
end; eauto.
- copy_eapply_prop_hyp In In.
break_exists_exists.
apply in_app_iff.
eauto.
- eexists.
apply in_app_iff.
left.
apply in_map_iff.
eexists; intuition; eauto.
- do_in_app. intuition.
+ eexists.
apply in_app_iff.
left.
apply in_map_iff.
eexists; intuition; eauto.
+ find_apply_lem_hyp removeAfterIndex_in.
copy_eapply_prop_hyp In In.
break_exists_exists.
apply in_app_iff.
eauto.
Qed.
Lemma in_log_in_all_entries_append_entries_reply :
refined_raft_net_invariant_append_entries_reply in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
find_erewrite_lem handleAppendEntriesReply_log. eauto.
Qed.
Lemma in_log_in_all_entries_request_vote :
refined_raft_net_invariant_request_vote in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
rewrite update_elections_data_requestVote_allEntries.
find_erewrite_lem handleRequestVote_log. eauto.
Qed.
Lemma in_log_in_all_entries_request_vote_reply :
refined_raft_net_invariant_request_vote_reply in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
rewrite update_elections_data_requestVoteReply_allEntries.
find_erewrite_lem handleRequestVoteReply_log. eauto.
Qed.
Lemma in_log_in_all_entries_timeout :
refined_raft_net_invariant_timeout in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
rewrite update_elections_data_timeout_allEntries.
find_erewrite_lem handleTimeout_log_same. eauto.
Qed.
Lemma in_log_in_all_entries_client_request :
refined_raft_net_invariant_client_request in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
find_eapply_lem_hyp update_elections_data_client_request_log_allEntries; eauto.
intuition; try break_exists; intuition; repeat find_rewrite; eauto.
simpl in *. intuition; subst; eauto.
copy_eapply_prop_hyp In In.
break_exists_exists; eauto.
Qed.
Lemma in_log_in_all_entries_do_leader :
refined_raft_net_invariant_do_leader in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
match goal with
| H : nwState ?net ?h = (?gd, ?d) |- _ =>
replace gd with (fst (nwState net h)) in * by (rewrite H; reflexivity);
replace d with (snd (nwState net h)) in * by (rewrite H; reflexivity);
clear H
end.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
find_erewrite_lem doLeader_log; eauto.
Qed.
Lemma in_log_in_all_entries_do_generic_server :
refined_raft_net_invariant_do_generic_server in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
match goal with
| H : nwState ?net ?h = (?gd, ?d) |- _ =>
replace gd with (fst (nwState net h)) in * by (rewrite H; reflexivity);
replace d with (snd (nwState net h)) in * by (rewrite H; reflexivity);
clear H
end.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
find_erewrite_lem doGenericServer_log; eauto.
Qed.
Lemma in_log_in_all_entries_reboot :
refined_raft_net_invariant_reboot in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
match goal with
| H : nwState ?net ?h = (?gd, ?d) |- _ =>
replace gd with (fst (nwState net h)) in * by (rewrite H; reflexivity);
replace d with (snd (nwState net h)) in * by (rewrite H; reflexivity);
clear H
end.
subst. repeat find_higher_order_rewrite.
destruct_update; simpl in *; eauto.
Qed.
Lemma in_log_in_all_entries_state_same_packet_subset :
refined_raft_net_invariant_state_same_packet_subset in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
repeat find_reverse_higher_order_rewrite. eauto.
Qed.
Lemma in_log_in_all_entries_init :
refined_raft_net_invariant_init in_log_in_all_entries.
Proof using.
red. unfold in_log_in_all_entries. intros. simpl in *.
intuition.
Qed.
Instance iliaei : in_log_in_all_entries_interface.
Proof.
split.
intros.
apply refined_raft_net_invariant; auto.
- apply in_log_in_all_entries_init.
- apply in_log_in_all_entries_client_request.
- apply in_log_in_all_entries_timeout.
- apply in_log_in_all_entries_append_entries.
- apply in_log_in_all_entries_append_entries_reply.
- apply in_log_in_all_entries_request_vote.
- apply in_log_in_all_entries_request_vote_reply.
- apply in_log_in_all_entries_do_leader.
- apply in_log_in_all_entries_do_generic_server.
- apply in_log_in_all_entries_state_same_packet_subset.
- apply in_log_in_all_entries_reboot.
Qed.
End InLogInAllEntries.
|
{"author": "uwplse", "repo": "verdi-raft", "sha": "7c8e4d53d27f7264ec4d3de72944dc0368e065f0", "save_path": "github-repos/coq/uwplse-verdi-raft", "path": "github-repos/coq/uwplse-verdi-raft/verdi-raft-7c8e4d53d27f7264ec4d3de72944dc0368e065f0/raft-proofs/InLogInAllEntriesProof.v"}
|
%% Gabor Filter demo
%
% A GUI to interact with the 5 different Gabor filter parameters, while
% visualizing the resulting filter.
%
function varargout = gabor_filter_gui(ksize)
% create the UI
if nargin < 1, ksize = [121 121]; end
h = buildGUI(ksize);
if nargout > 0, varargout{1} = h; end
end
function onChange(~,~,h)
%ONCHANGE Event handler for UI controls
% retrieve current values from UI controls
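    % Sliders store scaled integer positions; convert back to physical values
    % (sigma is stored x10, gamma x100, and theta/psi are in degrees).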
sigma = get(h.slid(5), 'Value') / 10;
theta = get(h.slid(4), 'Value') * pi/180;
lambda = get(h.slid(3), 'Value');
gamma = get(h.slid(2), 'Value') / 100;
psi = get(h.slid(1), 'Value') * pi/180;
% create Gabor filter
kernel = cv.getGaborKernel('KSize',[h.ksize(2) h.ksize(1)], ...
'Sigma',sigma, 'Theta',theta, 'Lambda',lambda, ...
'Gamma',gamma, 'Psi',psi);
% normalize filter to [0,1] range and resize it
kernel = cv.normalize(kernel, 'NormType','MinMax');
kernel = cv.resize(kernel, [h.sz(2) h.sz(1)]);
% show result
set(h.img, 'CData',kernel)
set(h.txt(5), 'String',sprintf('Sigma = %.2f',sigma))
set(h.txt(4), 'String',sprintf('Theta = %.2f',theta))
set(h.txt(3), 'String',sprintf('Lambda = %.2f',lambda))
set(h.txt(2), 'String',sprintf('Gamma = %.2f',gamma))
set(h.txt(1), 'String',sprintf('Psi = %.2f',psi))
drawnow
end
function onType(~,e,h)
%ONTYPE Event handler for key press on figure
% handle keys
switch e.Key
case {'q', 'escape'}
close(h.fig)
return
case 'h'
onHelp([],[]);
case 'r'
onReset([],[],h);
end
end
function onReset(~,~,h)
set(h.slid(5), 'Value',400); % sigma
set(h.slid(4), 'Value',0); % theta
set(h.slid(3), 'Value',11); % lambda
set(h.slid(2), 'Value',100); % gamma
set(h.slid(1), 'Value',90); % psi
onChange([],[],h);
end
function onHelp(~,~)
%ONHELP Display usage help dialog
helpdlg({
'This GUI allows to interact with the 5 different Gabor filter'
'parameters, while visualizing the resulting filter.'
''
'Hot keys:'
'ESC, q - quit the program'
'r - reset parameters to original values'
'h - this help dialog'
});
end
function h = buildGUI(ksize)
%BUILDGUI Creates the UI
% parameters
sigma = 400; sigma_max = 1000;
theta = 0; theta_max = 180;
lambda = 11; lambda_max = 100;
gamma = 100; gamma_max = 200;
psi = 90; psi_max = 180;
sz = [512 512];
% build the user interface (no resizing to keep it simple)
h = struct();
h.ksize = ksize; % size of the filter
h.sz = sz; % size of the image to show
h.fig = figure('Name','Gabor Filter Demo', ...
'NumberTitle','off', 'Menubar','none', 'Resize','off', ...
'Position',[200 200 sz(2) sz(1)+129]);
if ~mexopencv.isOctave()
%HACK: not implemented in Octave
movegui(h.fig, 'center');
end
h.ax = axes('Parent',h.fig, 'Units','pixels', 'Position',[1 130 sz(2) sz(1)]);
if ~mexopencv.isOctave()
h.img = imshow(zeros(sz), 'Parent',h.ax);
else
%HACK: https://savannah.gnu.org/bugs/index.php?45473
axes(h.ax);
h.img = imshow(zeros(sz));
end
text(5, 5, sprintf('KSize = %dx%d', ksize(2), ksize(1)), ...
'Color','y', 'VerticalAlignment','top');
props = {'Parent',h.fig, 'Style','text', 'String','', ...
'FontSize',11, 'HorizontalAlignment','left'};
h.txt(5) = uicontrol(props{:}, 'Position',[5 5 120 20]);
h.txt(4) = uicontrol(props{:}, 'Position',[5 30 120 20]);
h.txt(3) = uicontrol(props{:}, 'Position',[5 55 120 20]);
h.txt(2) = uicontrol(props{:}, 'Position',[5 80 120 20]);
h.txt(1) = uicontrol(props{:}, 'Position',[5 105 120 20]);
props = {'Parent',h.fig, 'Style','slider', 'Min',0};
h.slid(5) = uicontrol(props{:}, 'Position',[125 5 sz(2)-125-5 20], ...
'Value',sigma, 'Max',sigma_max, 'SliderStep',[10 100]./(sigma_max-0));
h.slid(4) = uicontrol(props{:}, 'Position',[125 30 sz(2)-125-5 20], ...
'Value',theta, 'Max',theta_max, 'SliderStep',[2 20]./(theta_max-0));
h.slid(3) = uicontrol(props{:}, 'Position',[125 55 sz(2)-125-5 20], ...
'Value',lambda, 'Max',lambda_max, 'SliderStep',[1 10]./(lambda_max-0));
h.slid(2) = uicontrol(props{:}, 'Position',[125 80 sz(2)-125-5 20], ...
'Value',gamma, 'Max',gamma_max, 'SliderStep',[2 20]./(gamma_max-0));
h.slid(1) = uicontrol(props{:}, 'Position',[125 105 sz(2)-125-5 20], ...
'Value',psi, 'Max',psi_max, 'SliderStep',[2 20]./(psi_max-0));
% hook event handlers, and trigger default start
opts = {'Interruptible','off', 'BusyAction','cancel'};
set(h.slid, 'Callback',{@onChange,h}, opts{:});
set(h.fig, 'WindowKeyPressFcn',{@onType,h}, opts{:});
onChange([],[],h);
end
|
{"author": "kyamagu", "repo": "mexopencv", "sha": "d29007b2a484d0fd92e6e941dc5fd4750014fa6a", "save_path": "github-repos/MATLAB/kyamagu-mexopencv", "path": "github-repos/MATLAB/kyamagu-mexopencv/mexopencv-d29007b2a484d0fd92e6e941dc5fd4750014fa6a/samples/gabor_filter_gui.m"}
|
"""
Title: English-to-Spanish translation with a sequence-to-sequence Transformer
Author: [fchollet](https://twitter.com/fchollet)
Date created: 2021/05/26
Last modified: 2021/05/26
Description: Implementing a sequence-to-sequence Transformer and training it on a machine translation task.
"""
"""
## Introduction
In this example, we'll build a sequence-to-sequence Transformer model, which
we'll train on an English-to-Spanish machine translation task.
You'll learn how to:
- Vectorize text using the Keras `TextVectorization` layer.
- Implement a `TransformerEncoder` layer, a `TransformerDecoder` layer,
and a `PositionalEmbedding` layer.
- Prepare data for training a sequence-to-sequence model.
- Use the trained model to generate translations of never-seen-before
input sentences (sequence-to-sequence inference).
The code featured here is adapted from the book
[Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition)
(chapter 11: Deep learning for text).
The present example is fairly barebones, so for detailed explanations of
how each building block works, as well as the theory behind Transformers,
I recommend reading the book.
"""
"""
## Setup
"""
import pathlib
import random
import string
import re
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import TextVectorization
"""
## Downloading the data
We'll be working with an English-to-Spanish translation dataset
provided by [Anki](https://www.manythings.org/anki/). Let's download it:
"""
text_file = keras.utils.get_file(
fname="spa-eng.zip",
origin="http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip",
extract=True,
)
text_file = pathlib.Path(text_file).parent / "spa-eng" / "spa.txt"
"""
## Parsing the data
Each line contains an English sentence and its corresponding Spanish sentence.
The English sentence is the *source sequence* and Spanish one is the *target sequence*.
We prepend the token `"[start]"` and we append the token `"[end]"` to the Spanish sentence.
"""
with open(text_file) as f:
lines = f.read().split("\n")[:-1]
text_pairs = []
for line in lines:
eng, spa = line.split("\t")
spa = "[start] " + spa + " [end]"
text_pairs.append((eng, spa))
"""
Here's what our sentence pairs look like:
"""
for _ in range(5):
print(random.choice(text_pairs))
"""
Now, let's split the sentence pairs into a training set, a validation set,
and a test set.
"""
random.shuffle(text_pairs)
num_val_samples = int(0.15 * len(text_pairs))
num_train_samples = len(text_pairs) - 2 * num_val_samples
train_pairs = text_pairs[:num_train_samples]
val_pairs = text_pairs[num_train_samples : num_train_samples + num_val_samples]
test_pairs = text_pairs[num_train_samples + num_val_samples :]
print(f"{len(text_pairs)} total pairs")
print(f"{len(train_pairs)} training pairs")
print(f"{len(val_pairs)} validation pairs")
print(f"{len(test_pairs)} test pairs")
"""
## Vectorizing the text data
We'll use two instances of the `TextVectorization` layer to vectorize the text
data (one for English and one for Spanish),
that is to say, to turn the original strings into integer sequences
where each integer represents the index of a word in a vocabulary.
The English layer will use the default string standardization (strip punctuation characters)
and splitting scheme (split on whitespace), while
the Spanish layer will use a custom standardization, where we add the character
`"¿"` to the set of punctuation characters to be stripped.
Note: in a production-grade machine translation model, I would not recommend
stripping the punctuation characters in either language. Instead, I would recommend turning
each punctuation character into its own token,
which you could achieve by providing a custom `split` function to the `TextVectorization` layer.
"""
strip_chars = string.punctuation + "¿"
strip_chars = strip_chars.replace("[", "")
strip_chars = strip_chars.replace("]", "")
vocab_size = 15000
sequence_length = 20
batch_size = 64
def custom_standardization(input_string):
lowercase = tf.strings.lower(input_string)
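    # strip all punctuation (except the square brackets used by the
    # [start]/[end] tokens) plus the Spanish inverted question mark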
return tf.strings.regex_replace(lowercase, "[%s]" % re.escape(strip_chars), "")
eng_vectorization = TextVectorization(
max_tokens=vocab_size, output_mode="int", output_sequence_length=sequence_length,
)
spa_vectorization = TextVectorization(
max_tokens=vocab_size,
output_mode="int",
output_sequence_length=sequence_length + 1,
standardize=custom_standardization,
)
train_eng_texts = [pair[0] for pair in train_pairs]
train_spa_texts = [pair[1] for pair in train_pairs]
eng_vectorization.adapt(train_eng_texts)
spa_vectorization.adapt(train_spa_texts)
"""
Next, we'll format our datasets.
At each training step, the model will seek to predict target words N+1 (and beyond)
using the source sentence and the target words 0 to N.
As such, the training dataset will yield a tuple `(inputs, targets)`, where:
- `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.
`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence "so far",
that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.
- `target` is the target sentence offset by one step:
it provides the next words in the target sentence -- what the model will try to predict.
"""
def format_dataset(eng, spa):
eng = eng_vectorization(eng)
spa = spa_vectorization(spa)
return ({"encoder_inputs": eng, "decoder_inputs": spa[:, :-1],}, spa[:, 1:])
def make_dataset(pairs):
eng_texts, spa_texts = zip(*pairs)
eng_texts = list(eng_texts)
spa_texts = list(spa_texts)
dataset = tf.data.Dataset.from_tensor_slices((eng_texts, spa_texts))
dataset = dataset.batch(batch_size)
dataset = dataset.map(format_dataset)
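    # shuffle within a 2048-element buffer, prefetch upcoming batches, and
    # cache the vectorized batches in memory for subsequent epochs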
return dataset.shuffle(2048).prefetch(16).cache()
train_ds = make_dataset(train_pairs)
val_ds = make_dataset(val_pairs)
"""
Let's take a quick look at the sequence shapes
(we have batches of 64 pairs, and all sequences are 20 steps long):
"""
for inputs, targets in train_ds.take(1):
print(f'inputs["encoder_inputs"].shape: {inputs["encoder_inputs"].shape}')
print(f'inputs["decoder_inputs"].shape: {inputs["decoder_inputs"].shape}')
print(f"targets.shape: {targets.shape}")
"""
## Building the model
Our sequence-to-sequence Transformer consists of a `TransformerEncoder`
and a `TransformerDecoder` chained together. To make the model aware of word order,
we also use a `PositionalEmbedding` layer.
The source sequence will be passed to the `TransformerEncoder`,
which will produce a new representation of it.
This new representation will then be passed
to the `TransformerDecoder`, together with the target sequence so far (target words 0 to N).
The `TransformerDecoder` will then seek to predict the next words in the target sequence (N+1 and beyond).
A key detail that makes this possible is causal masking
(see method `get_causal_attention_mask()` on the `TransformerDecoder`).
The `TransformerDecoder` sees the entire sequences at once, and thus we must make
sure that it only uses information from target tokens 0 to N when predicting token N+1
(otherwise, it could use information from the future, which would
result in a model that cannot be used at inference time).
"""
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super(TransformerEncoder, self).__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.dense_proj = keras.Sequential(
[layers.Dense(dense_dim, activation="relu"), layers.Dense(embed_dim),]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
self.supports_masking = True
def call(self, inputs, mask=None):
if mask is not None:
padding_mask = tf.cast(mask[:, tf.newaxis, tf.newaxis, :], dtype="int32")
attention_output = self.attention(
query=inputs, value=inputs, key=inputs, attention_mask=padding_mask
)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs):
super(PositionalEmbedding, self).__init__(**kwargs)
self.token_embeddings = layers.Embedding(
input_dim=vocab_size, output_dim=embed_dim
)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=embed_dim
)
self.sequence_length = sequence_length
self.vocab_size = vocab_size
self.embed_dim = embed_dim
def call(self, inputs):
length = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def compute_mask(self, inputs, mask=None):
return tf.math.not_equal(inputs, 0)
class TransformerDecoder(layers.Layer):
def __init__(self, embed_dim, latent_dim, num_heads, **kwargs):
super(TransformerDecoder, self).__init__(**kwargs)
self.embed_dim = embed_dim
self.latent_dim = latent_dim
self.num_heads = num_heads
self.attention_1 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.attention_2 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.dense_proj = keras.Sequential(
[layers.Dense(latent_dim, activation="relu"), layers.Dense(embed_dim),]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
self.layernorm_3 = layers.LayerNormalization()
self.supports_masking = True
def call(self, inputs, encoder_outputs, mask=None):
causal_mask = self.get_causal_attention_mask(inputs)
if mask is not None:
padding_mask = tf.cast(mask[:, tf.newaxis, :], dtype="int32")
padding_mask = tf.minimum(padding_mask, causal_mask)
attention_output_1 = self.attention_1(
query=inputs, value=inputs, key=inputs, attention_mask=causal_mask
)
out_1 = self.layernorm_1(inputs + attention_output_1)
attention_output_2 = self.attention_2(
query=out_1,
value=encoder_outputs,
key=encoder_outputs,
attention_mask=padding_mask,
)
out_2 = self.layernorm_2(out_1 + attention_output_2)
proj_output = self.dense_proj(out_2)
return self.layernorm_3(out_2 + proj_output)
def get_causal_attention_mask(self, inputs):
input_shape = tf.shape(inputs)
batch_size, sequence_length = input_shape[0], input_shape[1]
i = tf.range(sequence_length)[:, tf.newaxis]
j = tf.range(sequence_length)
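        # lower-triangular mask: position i may attend only to positions j <= i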
mask = tf.cast(i >= j, dtype="int32")
mask = tf.reshape(mask, (1, input_shape[1], input_shape[1]))
mult = tf.concat(
[tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)],
axis=0,
)
return tf.tile(mask, mult)
"""
Next, we assemble the end-to-end model.
"""
embed_dim = 256
latent_dim = 2048
num_heads = 8
encoder_inputs = keras.Input(shape=(None,), dtype="int64", name="encoder_inputs")
x = PositionalEmbedding(sequence_length, vocab_size, embed_dim)(encoder_inputs)
encoder_outputs = TransformerEncoder(embed_dim, latent_dim, num_heads)(x)
encoder = keras.Model(encoder_inputs, encoder_outputs)
decoder_inputs = keras.Input(shape=(None,), dtype="int64", name="decoder_inputs")
encoded_seq_inputs = keras.Input(shape=(None, embed_dim), name="decoder_state_inputs")
x = PositionalEmbedding(sequence_length, vocab_size, embed_dim)(decoder_inputs)
x = TransformerDecoder(embed_dim, latent_dim, num_heads)(x, encoded_seq_inputs)
x = layers.Dropout(0.5)(x)
decoder_outputs = layers.Dense(vocab_size, activation="softmax")(x)
decoder = keras.Model([decoder_inputs, encoded_seq_inputs], decoder_outputs)
decoder_outputs = decoder([decoder_inputs, encoder_outputs])
transformer = keras.Model(
[encoder_inputs, decoder_inputs], decoder_outputs, name="transformer"
)
"""
## Training our model
We'll use accuracy as a quick way to monitor training progress on the validation data.
Note that machine translation typically uses BLEU scores as well as other metrics, rather than accuracy.
Here we only train for 1 epoch, but to get the model to actually converge
you should train for at least 30 epochs.
"""
epochs = 1 # This should be at least 30 for convergence
transformer.summary()
transformer.compile(
"rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
transformer.fit(train_ds, epochs=epochs, validation_data=val_ds)
"""
## Decoding test sentences
Finally, let's demonstrate how to translate brand new English sentences.
We simply feed into the model the vectorized English sentence
as well as the target token `"[start]"`, then we repeatedly generate the next token, until
we hit the token `"[end]"`.
"""
spa_vocab = spa_vectorization.get_vocabulary()
spa_index_lookup = dict(zip(range(len(spa_vocab)), spa_vocab))
max_decoded_sentence_length = 20
def decode_sequence(input_sentence):
tokenized_input_sentence = eng_vectorization([input_sentence])
decoded_sentence = "[start]"
for i in range(max_decoded_sentence_length):
tokenized_target_sentence = spa_vectorization([decoded_sentence])[:, :-1]
predictions = transformer([tokenized_input_sentence, tokenized_target_sentence])
sampled_token_index = np.argmax(predictions[0, i, :])
sampled_token = spa_index_lookup[sampled_token_index]
decoded_sentence += " " + sampled_token
if sampled_token == "[end]":
break
return decoded_sentence
test_eng_texts = [pair[0] for pair in test_pairs]
for _ in range(30):
input_sentence = random.choice(test_eng_texts)
    translated = decode_sequence(input_sentence)
    print(input_sentence, "-->", translated)
"""
After 30 epochs, we get results such as:
> She handed him the money.
> [start] ella le pasó el dinero [end]
> Tom has never heard Mary sing.
> [start] tom nunca ha oído cantar a mary [end]
> Perhaps she will come tomorrow.
> [start] tal vez ella vendrá mañana [end]
> I love to write.
> [start] me encanta escribir [end]
> His French is improving little by little.
> [start] su francés va a [UNK] sólo un poco [end]
> My hotel told me to call you.
> [start] mi hotel me dijo que te [UNK] [end]
"""
|
{"hexsha": "1b5428af0370ec819ab5f637f45b5ced3bf9cc77", "size": 15083, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/nlp/neural_machine_translation_with_transformer.py", "max_stars_repo_name": "floscha/keras-io", "max_stars_repo_head_hexsha": "fb064c551eda7aea631ceaa548c4411b9a1193cb", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1542, "max_stars_repo_stars_event_min_datetime": "2020-05-06T20:23:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T15:25:03.000Z", "max_issues_repo_path": "examples/nlp/neural_machine_translation_with_transformer.py", "max_issues_repo_name": "floscha/keras-io", "max_issues_repo_head_hexsha": "fb064c551eda7aea631ceaa548c4411b9a1193cb", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 625, "max_issues_repo_issues_event_min_datetime": "2020-05-07T10:21:15.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T17:19:35.000Z", "max_forks_repo_path": "examples/nlp/neural_machine_translation_with_transformer.py", "max_forks_repo_name": "floscha/keras-io", "max_forks_repo_head_hexsha": "fb064c551eda7aea631ceaa548c4411b9a1193cb", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1616, "max_forks_repo_forks_event_min_datetime": "2020-05-07T06:28:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:35:35.000Z", "avg_line_length": 35.9976133652, "max_line_length": 115, "alphanum_fraction": 0.7271762912, "include": true, "reason": "import numpy", "num_tokens": 3467}
|
#-*- coding: utf-8 -*-
import sys
sys.path.append("..")
import codecs
import numpy as np
from utils.nlp_util import NlpUtil
from search_dialog import config
from seq2seq_dialog.infer import get_infer_model, predict_sent_emb
class SentEmbSearch(object):
sent_emb_index = np.load(config.sent_emb_index_path + ".npy")
id2info = {}
with codecs.open(config.context_response_path, "r", "utf-8") as rfd:
cnt = 0
for line in rfd:
context, response = line.strip().split("\t")
id2info[cnt] = [context.replace(" ", ""), response.replace(" ", "")]
cnt += 1
print ("Load corpus done, corpus_size=%d" % cnt)
@classmethod
def search(cls, model, sent, size=10):
sent_emb = predict_sent_emb(model, sent)
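        # NOTE: treating this dot product as a cosine similarity assumes both
        # the indexed embeddings and the query embedding are unit-normalized
        # (an assumption of this comment, not checked by the code)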
sims = np.dot(cls.sent_emb_index, sent_emb.T)
print (sims)
sim_items = [(idx, sim_score) for idx, sim_score in enumerate(sims)]
sim_items.sort(key=lambda x: x[1], reverse=True)
sim_items = sim_items[:size]
print (sim_items)
contexts = [cls.id2info[idx][0] for idx, _ in sim_items]
responses = [cls.id2info[idx][1] for idx, _ in sim_items]
return sim_items, contexts, responses
if __name__ == "__main__":
model = get_infer_model("single_turn")
q = "分期购买机子回去 用了一段时间不适合有任何问题可以申请退货退款吗"
    _, cs, rs = SentEmbSearch.search(model, q)
for c, r in zip(cs, rs):
print (c)
print (r)
print ()
|
{"hexsha": "e9032788059ea3d8cfae10c184e72c50447851fc", "size": 1525, "ext": "py", "lang": "Python", "max_stars_repo_path": "search_dialog/sent_emb_search.py", "max_stars_repo_name": "HouchangX-AI/Dialog-Solution", "max_stars_repo_head_hexsha": "1f68f847d9c9c4a46ef0b5fc6a78014402a4dd7a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-03-12T06:28:01.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-27T20:15:53.000Z", "max_issues_repo_path": "search_dialog/sent_emb_search.py", "max_issues_repo_name": "HouchangX-AI/Dialog-Solution", "max_issues_repo_head_hexsha": "1f68f847d9c9c4a46ef0b5fc6a78014402a4dd7a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "search_dialog/sent_emb_search.py", "max_forks_repo_name": "HouchangX-AI/Dialog-Solution", "max_forks_repo_head_hexsha": "1f68f847d9c9c4a46ef0b5fc6a78014402a4dd7a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-03-19T02:47:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-14T02:26:40.000Z", "avg_line_length": 29.9019607843, "max_line_length": 80, "alphanum_fraction": 0.6255737705, "include": true, "reason": "import numpy", "num_tokens": 452}
|
from sklearn.datasets import load_iris
iris_dataset=load_iris()
import pandas as pd
import numpy
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(iris_dataset['data'],iris_dataset['target'],random_state=0)
print("dimensions of X_train: {}".format(X_train.shape))
#75% of dataset for training (X values: the four features 'sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)')
print("dimensions of y_train: {}".format(y_train.shape))
#75% of dataset for training (y values: targets 0/1/2 for 'setosa', 'versicolor', 'virginica')
print("dimensions of X_test: {}".format(X_test.shape)) #25% of dataset for testing (x values)
print("dimensions of y_test: {}".format(y_test.shape)) #25% of dataset for testing (y values)
|
{"hexsha": "cd472adebec3ef8f985a7d30eb9a423a266b2736", "size": 801, "ext": "py", "lang": "Python", "max_stars_repo_path": "train_test_split.py", "max_stars_repo_name": "Pl4gue/Iris-ML", "max_stars_repo_head_hexsha": "41aa30cc5138132ca5feb3a17bbb91a1c54fefbb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-06-24T08:09:02.000Z", "max_stars_repo_stars_event_max_datetime": "2017-06-24T08:09:02.000Z", "max_issues_repo_path": "train_test_split.py", "max_issues_repo_name": "Pl4gue/Iris-ML", "max_issues_repo_head_hexsha": "41aa30cc5138132ca5feb3a17bbb91a1c54fefbb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "train_test_split.py", "max_forks_repo_name": "Pl4gue/Iris-ML", "max_forks_repo_head_hexsha": "41aa30cc5138132ca5feb3a17bbb91a1c54fefbb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.1176470588, "max_line_length": 140, "alphanum_fraction": 0.7503121099, "include": true, "reason": "import numpy", "num_tokens": 213}
|
from typing import List, Dict, Set
import numpy as np
from bidict import bidict
from sklearn.preprocessing import OneHotEncoder
from collections import defaultdict
from .embed import BaseEmbed
from .logging import getLogger
from .recommendation_base import RecommendationBase, NodeType, Node, Edge, FeatureName
from .utils import unit_length
class ContentRecommendation(RecommendationBase):
def __init__(self, embedding_mapper: Dict[NodeType, Dict[str, BaseEmbed]],
node_types: Set[str],
n_dims: int = 32):
super().__init__(node_types=node_types,
n_dims=n_dims)
self.embedding_mapper: Dict[NodeType, Dict[str, BaseEmbed]] = embedding_mapper
self.log = getLogger(type(self).__name__)
def __build_content_embeddings__(self, nodes: List[Node], edges: List[Edge],
node_data: Dict[Node, Dict[FeatureName, object]], n_dims):
self.log.debug("ContentRecommendation::__build_embeddings__:: Started...")
all_embeddings = None
node_to_idx_internal = bidict()
for nt in self.node_types:
nt_embedding = None
nt_nodes = list(filter(lambda n: n.node_type == nt, nodes))
assert len(set(nt_nodes) - set(node_data.keys())) == 0 or len(set(nt_nodes) - set(node_data.keys())) == len(
set(nt_nodes))
assert len(set(nt_nodes)) == len(nt_nodes)
if len(set(nt_nodes) - set(node_data.keys())) == len(set(nt_nodes)):
nt_embedding = np.zeros((len(nt_nodes), 1))
else:
nt_nodes_features: List[Dict[FeatureName, object]] = [node_data[ntn] for ntn in nt_nodes]
feature_names = list(nt_nodes_features[0].keys())
for f in feature_names:
feature = [ntnf[f] for ntnf in nt_nodes_features]
embedding = self.embedding_mapper[nt][f].fit_transform(feature)
if nt_embedding is None:
nt_embedding = embedding
else:
                        nt_embedding = np.concatenate((nt_embedding, embedding), axis=1)
nt_embedding = unit_length(nt_embedding, axis=1)
#
cur_len = len(node_to_idx_internal)
node_to_idx_internal.update(bidict(zip(nt_nodes, range(cur_len, cur_len + len(nt_nodes)))))
if all_embeddings is None:
all_embeddings = nt_embedding
else:
c1 = np.concatenate((all_embeddings, np.zeros((all_embeddings.shape[0], nt_embedding.shape[1]))),
axis=1)
c2 = np.concatenate((np.zeros((nt_embedding.shape[0], all_embeddings.shape[1])), nt_embedding), axis=1)
all_embeddings = np.concatenate((c1, c2), axis=0)
all_embeddings = all_embeddings[[node_to_idx_internal[n] for n in nodes]]
nts = np.array([n.node_type for n in nodes]).reshape((-1, 1))
ohe_node_types = OneHotEncoder(sparse=False).fit_transform(nts)
all_embeddings = np.concatenate((all_embeddings, ohe_node_types), axis=1)
self.log.debug(
"ContentRecommendation::__build_embeddings__:: AutoEncoder with dims = %s" % str(all_embeddings.shape))
n_dims = n_dims if n_dims is not None and not np.isinf(n_dims) else 2 ** int(np.log2(all_embeddings.shape[1]))
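        # project the concatenated per-type embeddings down to n_dims with an
        # incremental (batched) PCA, so large node sets do not exhaust memory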
from sklearn.decomposition import IncrementalPCA
all_embeddings = IncrementalPCA(n_components=n_dims, batch_size=2**16).fit_transform(all_embeddings)
all_embeddings = unit_length(all_embeddings, axis=1)
extra_dims = 2 ** int(np.ceil(np.log2(ohe_node_types.shape[1]))) - ohe_node_types.shape[1]
if extra_dims != 0:
ohe_node_types = np.concatenate((ohe_node_types, np.zeros((ohe_node_types.shape[0], extra_dims))), axis=1)
all_embeddings = np.concatenate((all_embeddings, ohe_node_types), axis=1)
self.log.info("ContentRecommendation::__build_embeddings__:: Built Content Embedding with dims = %s" % str(
all_embeddings.shape))
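        # Smooth each embedding toward its graph neighborhood; the self-loop
        # edges added below keep each node's own vector in the neighborhood mean.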
edges = list(edges) + [Edge(n, n, 1.0) for n in nodes]
adjacency_list = defaultdict(list)
for src, dst, w in edges:
adjacency_list[src].append(dst)
adjacency_list[dst].append(src)
nodes_to_idx = self.nodes_to_idx
adjacent_vectors = np.vstack([all_embeddings[[nodes_to_idx[adj] for adj in adjacency_list[n]]].mean(0) for n in nodes])
assert adjacent_vectors.shape == all_embeddings.shape
all_embeddings = (all_embeddings + adjacent_vectors)/2.0
return all_embeddings
def fit(self,
nodes: List[Node],
edges: List[Edge],
node_data: Dict[Node, Dict[FeatureName, object]],
**kwargs):
super().fit(nodes, edges, node_data)
embeddings = self.__build_content_embeddings__(nodes, edges, node_data, self.n_dims)
embeddings = unit_length(embeddings, axis=1)
self.__build_knn__(embeddings)
# AutoEncoder them so that error is minimised and distance is maintained
# https://stats.stackexchange.com/questions/351212/do-autoencoders-preserve-distances
# Distance Preserving vs Non Preserving
self.fit_done = True
return embeddings
|
{"hexsha": "5cded1a744d2c7a645aa56ef0a342b148dd1ac27", "size": 5354, "ext": "py", "lang": "Python", "max_stars_repo_path": "hwer/content_recommender.py", "max_stars_repo_name": "faizanahemad/Hybrid-Weighted-Embedding-Recommender", "max_stars_repo_head_hexsha": "904a27c4b0126935735aee689408b2b6acf4af9a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2019-11-29T00:06:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-01T10:43:58.000Z", "max_issues_repo_path": "hwer/content_recommender.py", "max_issues_repo_name": "faizanahemad/Hybrid-Weighted-Embedding-Recommender", "max_issues_repo_head_hexsha": "904a27c4b0126935735aee689408b2b6acf4af9a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2020-03-31T09:54:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:05:21.000Z", "max_forks_repo_path": "hwer/content_recommender.py", "max_forks_repo_name": "faizanahemad/Hybrid-Weighted-Embedding-Recommender", "max_forks_repo_head_hexsha": "904a27c4b0126935735aee689408b2b6acf4af9a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-10T04:11:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-29T02:57:01.000Z", "avg_line_length": 50.9904761905, "max_line_length": 127, "alphanum_fraction": 0.6395218528, "include": true, "reason": "import numpy", "num_tokens": 1185}
|
/**
\file
\author Datta Ramadasan
//==============================================================================
// Copyright 2015 INSTITUT PASCAL UMR 6602 CNRS/Univ. Clermont II
//
// Distributed under the Boost Software License, Version 1.0.
// See accompanying file LICENSE.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt
//==============================================================================
*/
#ifndef __TTT_FUSION_AT_HPP__
#define __TTT_FUSION_AT_HPP__
#include <boost/fusion/include/at_key.hpp>
#include <libv/lma/global.hpp>
namespace ttt
{
template<class Map,class Key1, class Key2> struct BinaryAt
{
typedef typename br::value_at_key<Map,Key2>::type Map2;
typedef typename br::value_at_key<Map2,Key1>::type type;
typedef typename boost::add_reference<type>::type type_ref;
typedef typename boost::add_const<type>::type const_type;
typedef typename boost::add_reference<const_type>::type const_ref_type;
};
template<class A, class B, class Container>
typename BinaryAt<Container,B,A>::const_ref_type at(const Container& container)
{
BOOST_MPL_ASSERT((boost::is_same<decltype(bf::at_key<B>(bf::at_key<A>(container))),typename BinaryAt<Container,B,A>::const_ref_type>));
return bf::at_key<B>(bf::at_key<A>(container));
}
}
#endif
|
{"hexsha": "80b167c560fe418cabc98c49a0340bf974433cb1", "size": 1361, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "src/libv/lma/ttt/fusion/at.hpp", "max_stars_repo_name": "bezout/LMA", "max_stars_repo_head_hexsha": "9555e41eed5f44690c5f6e3ea2d22d520ff1a9d2", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 29.0, "max_stars_repo_stars_event_min_datetime": "2015-12-08T12:07:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-08T21:23:01.000Z", "max_issues_repo_path": "src/libv/lma/ttt/fusion/at.hpp", "max_issues_repo_name": "ayumizll/LMA", "max_issues_repo_head_hexsha": "e945452e12a8b05bd17400b46a20a5322aeda01d", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": 3.0, "max_issues_repo_issues_event_min_datetime": "2016-07-11T16:23:48.000Z", "max_issues_repo_issues_event_max_datetime": "2017-04-05T13:33:00.000Z", "max_forks_repo_path": "src/libv/lma/ttt/fusion/at.hpp", "max_forks_repo_name": "bezout/LMA", "max_forks_repo_head_hexsha": "9555e41eed5f44690c5f6e3ea2d22d520ff1a9d2", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 8.0, "max_forks_repo_forks_event_min_datetime": "2015-12-21T01:52:27.000Z", "max_forks_repo_forks_event_max_datetime": "2017-12-26T02:26:55.000Z", "avg_line_length": 31.6511627907, "max_line_length": 139, "alphanum_fraction": 0.623806025, "num_tokens": 319}
|
(* (c) Copyright 2006-2016 Microsoft Corporation and Inria. *)
(* Distributed under the terms of CeCILL-B. *)
Require Import mathcomp.ssreflect.ssreflect.
From mathcomp
Require Import ssrbool ssrfun eqtype ssrnat seq div.
From mathcomp
Require Import fintype finset prime fingroup morphism perm automorphism action.
From mathcomp
Require Import quotient cyclic gfunctor pgroup gproduct center commutator.
From mathcomp
Require Import gseries nilpotent sylow abelian maximal hall.
From odd_order
Require Import BGsection1 BGsection4.
(******************************************************************************)
(* This file covers Section 5 of B & G, except for some technical results *)
(* that are not actually used in the proof of the Odd Order Theorem, namely *)
(* part (c) of Theorem 5.5, parts (b), (d) and (e) of Theorem 5.6, and all of *)
(* Theorem 5.7. We also make the following change: in B & G, narrow p-groups *)
(* of rank at least 3 are defined by the structure of the centralisers of *)
(* their prime subgroups, then characterized by their rank 2 elementary *)
(* abelian subgroups in Theorem 5.3. We exchange the two, because the latter *)
(* condition is easier to check, and is the only one used later in the proof. *)
(* *)
(* p.-narrow G == G has a maximal elementary abelian p-subgroup of *)
(* p-rank at most 2. *)
(* := ('r_p(G) > 2) ==> ('E_p^2(G) :&: 'E*_p(G) != set0) *)
(* *)
(* narrow_structure p G <-> G has a subgroup S of order p whose centraliser *)
(* is the direct product of S and a cyclic group C, *)
(* i.e., S \x C = 'C_G(S). This is the condition used *)
(* in the definition of "narrow" in B & G, p. 2. *)
(* Theorem 5.3 states that for odd p this definition *)
(* is equivalent to ours, and this property is not *)
(* used outside of Section 5. *)
(******************************************************************************)
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Import GroupScope.
Reserved Notation "p .-narrow" (at level 2, format "p .-narrow").
Section Definitions.
Variables (gT : finGroupType) (p : nat) (A : {set gT}).
Definition narrow := ('r_p(A) > 2) ==> ('E_p^2(A) :&: 'E*_p(A) != set0).
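(* Thus A is narrow iff 'r_p(A) <= 2, or some rank-2 elementary abelian  *)
(* p-subgroup of A is maximal among the elementary abelian p-subgroups.  *)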
Inductive narrow_structure : Prop :=
NarrowStructure (S C : {group gT}) of
S \subset A & C \subset A & #|S| = p & cyclic C & S \x C = 'C_A(S).
End Definitions.
Notation "p .-narrow" := (narrow p) : group_scope.
Section IsoDef.
Variables (gT rT : finGroupType) (p : nat).
Implicit Types G H : {group gT}.
Implicit Type R : {group rT}.
Lemma injm_narrow G H (f : {morphism G >-> rT}) :
'injm f -> H \subset G -> p.-narrow (f @* H) = p.-narrow H.
Proof.
move=> injf sHG; rewrite /narrow injm_p_rank //; congr (_ ==> _).
apply/set0Pn/set0Pn=> [] [E /setIP[Ep2E maxE]].
exists (invm injf @* E)%G; rewrite -[H](group_inj (morphim_invm injf _)) //.
have sEfG: E \subset f @* G.
by rewrite (subset_trans _ (morphimS _ sHG)) //; case/pnElemP: Ep2E.
by rewrite inE injm_pnElem ?injm_pmaxElem ?injm_invm ?morphimS // Ep2E.
have sEG: E \subset G by rewrite (subset_trans _ sHG) //; case/pnElemP: Ep2E.
by exists (f @* E)%G; rewrite inE injm_pnElem ?injm_pmaxElem // Ep2E.
Qed.
Lemma isog_narrow G R : G \isog R -> p.-narrow G = p.-narrow R.
Proof. by case/isogP=> f injf <-; rewrite injm_narrow. Qed.
(* No isomorphism theorems for narrow_structure, which is not used outside of *)
(* this file. *)
End IsoDef.
Section Five.
Implicit Type gT : finGroupType.
Implicit Type p : nat.
Section OneGroup.
Variables (gT : finGroupType) (p : nat) (R : {group gT}).
Implicit Types B E S : {group gT}.
Lemma narrowJ x : p.-narrow (R :^ x) = p.-narrow R.
Proof. by apply: isog_narrow (isog_symr (conj_isog R x)). Qed.
Hypotheses (pR : p.-group R) (oddR : odd #|R|).
Section Rank3.
Hypothesis rR : 2 < 'r_p(R).
(* This lemma uses only the rR hypothesis. *)
Lemma narrow_pmaxElem : p.-narrow R -> exists E, E \in 'E_p^2(R) :&: 'E*_p(R).
Proof. by move=> nnP; apply: set0Pn; apply: implyP rR. Qed.
Let ntR : R :!=: 1. Proof. by case: eqP rR => // ->; rewrite p_rank1. Qed.
Let p_pr : prime p. Proof. by case: (pgroup_pdiv pR ntR). Qed.
Let p_gt1 : p > 1. Proof. exact: prime_gt1. Qed.
(* This is B & G, Lemma 5.1(a). *)
Lemma rank3_SCN3 : exists B, B \in 'SCN_3(R).
Proof.
by apply/set0Pn; rewrite -(rank2_SCN3_empty pR oddR) leqNgt (rank_pgroup pR) rR.
Qed.
(* This is B & G, Lemma 5.1(b). *)
Lemma normal_p2Elem_SCN3 E :
E \in 'E_p^2(R) -> E <| R -> exists2 B, B \in 'SCN_3(R) & E \subset B.
Proof.
move=> Ep2E /andP[sER nER]; have [_ abelE dimE] := pnElemP Ep2E.
have [B Ep3B nBR]: exists2 B, B \in 'E_p^3(R) & R \subset 'N(B).
have [C] := rank3_SCN3; case/setIdP=> SCN_C rC.
have [nsCR cCC] := andP (maxgroupp (SCN_max SCN_C)).
have [sCR _] := andP nsCR; have pC: p.-group C := pgroupS sCR pR.
have{pC cCC} abelC1: p.-abelem 'Ohm_1(C) := Ohm1_abelem pC cCC.
have dimC1: 3 <= logn p #|'Ohm_1(C)| by rewrite -rank_abelem // rank_Ohm1.
have nsC1R: 'Ohm_1(C) <| R := gFnormal_trans _ nsCR.
have [B [sBC1 nsBR oB]] := normal_pgroup pR nsC1R dimC1.
have [sBR nBR] := andP nsBR; exists B => //; apply/pnElemP.
by rewrite oB pfactorK // (abelemS sBC1).
have [sBR abelB dimB] := pnElemP Ep3B; have [pB cBB _] := and3P abelB.
have [oB oE] := (card_pnElem Ep3B, card_pnElem Ep2E).
pose Bs := (E <*> 'C_B(E))%G; have sCB: 'C_B(E) \subset B := subsetIl B _.
have sBsR: Bs \subset R by rewrite join_subG sER subIset ?sBR.
suffices Bs_gt2: 2 < logn p #|Bs|.
have nBsR: Bs <| R by rewrite /normal sBsR // normsY ?normsI ?norms_cent.
have abelBs: p.-abelem Bs.
by rewrite (cprod_abelem p (cprodEY _)) ?subsetIr // abelE (abelemS sCB).
have [C maxC sBsC] : {H | [max H | H <| R & abelian H ] & Bs \subset H}.
by apply: maxgroup_exists; rewrite nBsR (abelem_abelian abelBs).
exists C; last by rewrite (subset_trans _ sBsC) ?joing_subl.
by rewrite inE (max_SCN pR) ?(leq_trans Bs_gt2) // -rank_abelem ?rankS.
apply: contraFT (ltnn 2); rewrite -leqNgt => Bs_le2.
have{Bs_le2} sCE: 'C_B(E) \subset E.
rewrite (sameP joing_idPl eqP) eq_sym eqEcard joing_subl /=.
by rewrite (card_pgroup (pgroupS sBsR pR)) oE leq_exp2l.
have dimCBE: 2 <= logn p #|'C_B(E)|.
rewrite -ltnS -dimB -addn1 -leq_subLR -logn_div ?divgS ?cardSg //.
by rewrite logn_quotient_cent_abelem ?dimE ?(subset_trans sBR nER).
have defE: 'C_B(E) = E.
apply/eqP; rewrite eqEcard sCE oE /=.
by rewrite (card_pgroup (pgroupS sCB pB)) leq_exp2l.
by rewrite -dimB -dimE -defE lognSg // subsetIidl sub_abelian_cent // -defE.
Qed.
Let Z := 'Ohm_1('Z(R)).
Let W := 'Ohm_1('Z_2(R)).
Let T := 'C_R(W).
Let ntZ : Z != 1.
Proof. by rewrite Ohm1_eq1 (center_nil_eq1 (pgroup_nil pR)). Qed.
Let sZR : Z \subset R. Proof. by rewrite !gFsub_trans. Qed.
Let abelZ : p.-abelem (Z).
Proof. by rewrite (Ohm1_abelem (pgroupS _ pR)) ?center_sub ?center_abelian. Qed.
Let pZ : p.-group Z. Proof. exact: abelem_pgroup abelZ. Qed.
Let defCRZ : 'C_R(Z) = R.
Proof. by apply/setIidPl; rewrite centsC gFsub_trans ?subsetIr. Qed.
Let sWR : W \subset R. Proof. exact/gFsub_trans/gFsub. Qed.
Let nWR : R \subset 'N(W). Proof. exact/gFnorm_trans/gFnorm. Qed.
(* This is B & G, Lemma 5.2. *)
Lemma Ohm1_ucn_p2maxElem E :
E \in 'E_p^2(R) :&: 'E*_p(R) ->
[/\ (*a*) ~~ (E \subset T),
(*b*) #|Z| = p /\ [group of W] \in 'E_p^2(R)
& (*c*) T \char R /\ #|R : T| = p ].
Proof.
case/setIP=> Ep2E maxE; have defCRE1 := Ohm1_cent_max maxE pR.
have [[sER abelE dimE] oE] := (pnElemP Ep2E, card_pnElem Ep2E).
have [[sZR_R nZR_R] [pE _ eE]] := (andP (center_normal R), and3P abelE).
have{nZR_R} nZR: R \subset 'N(Z) := gFnorm_trans _ nZR_R.
have{sZR_R} [pZR pW] := (pgroupS sZR_R pR, pgroupS sWR pR).
have sZE: Z \subset E by rewrite -defCRE1 OhmS ?setIS // centS.
have rCRE : 'r_p('C_R(E)) = 2 by rewrite -p_rank_Ohm1 defCRE1 p_rank_abelem.
have oZ: #|Z| = p.
apply/prime_nt_dvdP; rewrite -?trivg_card1 // (card_pgroup pZ) /= -/Z.
rewrite (@dvdn_exp2l _ _ 1) // -ltnS -dimE properG_ltn_log //= -/Z.
by case/eqVproper: sZE rR => // defZ; rewrite -defCRZ defZ rCRE ltnn.
have ncycR: ~~ cyclic R by rewrite (odd_pgroup_rank1_cyclic pR) // -(subnKC rR).
have [ncycW eW] := Ohm1_odd_ucn2 pR oddR ncycR; rewrite -/W in ncycW eW.
have sWRZ: [~: W, R] \subset Z.
rewrite [Z](OhmE 1 pZR) sub_gen //= -ucn1 subsetI.
rewrite (subset_trans _ (ucn_comm 1 _)) ?commSg ?Ohm_sub //.
by move: nWR eW; rewrite -commg_subl -sub_LdivT; apply: subset_trans.
have sZW: Z \subset W by rewrite OhmS /= -?ucn1 ?ucn_subS //.
have ltZW: Z \proper W.
by rewrite properEneq; case: eqP ncycW => // <-; rewrite prime_cyclic ?oZ.
have sWRE := subset_trans sWRZ sZE.
have nEW: W \subset 'N(E) by rewrite -commg_subr (subset_trans _ sWRE) ?commgSS.
have defZ: 'C_W(E) = Z.
have sCE: 'C_W(E) \subset E.
rewrite -{2}defCRE1 (OhmE 1 (pgroupS (subsetIl R _) pR)) sub_gen //.
by rewrite subsetI setSI // subIset // sub_LdivT eW.
have [defC | ltCE] := eqVproper sCE.
have sEW: E \subset W by rewrite -defC subsetIl.
have nsER: E <| R.
by rewrite /normal sER -commg_subl (subset_trans (commSg R sEW)).
have [B scn3B sEB] := normal_p2Elem_SCN3 Ep2E nsER.
have [scnB dimB] := setIdP scn3B; have [_ scBR] := SCN_P scnB.
rewrite ltnNge -rank_Ohm1 -dimE -rank_abelem ?rankS // in dimB.
by rewrite -scBR -defCRE1 OhmS // setIS ?centS.
apply/eqP; rewrite eq_sym eqEcard oZ (card_pgroup (pgroupS sCE pE)) /= -/W.
rewrite subsetI sZW (centsS sER); last by rewrite centsC -subsetIidl defCRZ.
by rewrite (leq_exp2l _ 1) // -ltnS -dimE properG_ltn_log.
have dimW: logn p #|W| = 2.
apply/eqP; rewrite -(Lagrange sZW) lognM ?cardG_gt0 // oZ (pfactorK 1) //=.
rewrite -/Z eqSS eqn_leq -{1}defZ logn_quotient_cent_abelem ?dimE // -/W.
by rewrite -divgS // logn_div ?cardSg // subn_gt0 properG_ltn_log.
have abelW: p.-abelem W.
by rewrite (abelem_Ohm1 (pgroupS _ pR)) ?(p2group_abelian pW) ?dimW ?ucn_sub.
have charT: T \char R by rewrite subcent_char ?char_refl ?gFchar_trans.
rewrite 2!inE sWR abelW dimW; do 2?split => //.
by apply: contra (proper_subn ltZW); rewrite -defZ !subsetI subxx sER centsC.
apply/prime_nt_dvdP=> //.
rewrite indexg_eq1 subsetIidl centsC; apply: contraFN (ltnn 1) => cRW.
by rewrite -dimW -(setIidPl (centsS sER cRW)) defZ oZ (pfactorK 1).
rewrite -(part_pnat_id (pnat_dvd (dvdn_indexg _ _) pR)) p_part.
by rewrite (@dvdn_exp2l p _ 1) ?logn_quotient_cent_abelem ?dimW.
Qed.
(* This is B & G, Theorem 5.3(d); we omit parts (a)-(c) as they are mostly *)
(* redundant with Lemma 5.2, given our definition of "narrow". *)
Theorem narrow_cent_dprod S :
p.-narrow R -> #|S| = p -> S \subset R -> 'r_p('C_R(S)) <= 2 ->
[/\ cyclic 'C_T(S), S :&: R^`(1) = 1, S :&: T = 1 & S \x 'C_T(S) = 'C_R(S)].
Proof.
move=> nnR oS sSR rS; have pS : p.-group S := pgroupS sSR pR.
have [E maxEp2E] := narrow_pmaxElem nnR; have [Ep2E maxE] := setIP maxEp2E.
have [not_sET [oZ Ep2W] [charT maxT]] := Ohm1_ucn_p2maxElem maxEp2E.
have cZS : S \subset 'C(Z) by rewrite (subset_trans sSR) // -defCRZ subsetIr.
have nZS : S \subset 'N(Z) by rewrite cents_norm.
have cSS : abelian S by rewrite cyclic_abelian ?prime_cyclic // oS.
pose SZ := (S <*> [group of Z])%G; have sSSZ: S \subset SZ := joing_subl _ _.
have sSZ_R: SZ \subset R by rewrite join_subG sSR sZR.
have abelSZ : p.-abelem SZ.
by rewrite /= joingC (cprod_abelem p (cprodEY cZS)) abelZ prime_abelem.
have tiSZ: S :&: Z = 1.
rewrite prime_TIg ?oS //= -/Z; apply: contraL rR => sZS.
by rewrite -leqNgt (leq_trans _ rS) ?p_rankS // -{1}defCRZ setIS ?centS.
have{tiSZ} oSZ: #|SZ| = (p ^ 2)%N by rewrite /= norm_joinEl ?TI_cardMg ?oS ?oZ.
have Ep2SZ: SZ \in 'E_p^2(R) by rewrite pnElemE // !inE sSZ_R abelSZ oSZ eqxx.
have{oSZ Ep2SZ abelSZ sSZ_R} maxSZ: SZ \in 'E_p^2(R) :&: 'E*_p(R).
rewrite inE Ep2SZ; apply/pmaxElemP; rewrite inE sSZ_R abelSZ.
split=> // H /setIdP[sHR abelH] sSZH.
have [[_ _ dimSZ] [cHH pH _]] := (pnElemP Ep2SZ, and3P abelH).
have sSH: S \subset H := subset_trans sSSZ sSZH.
have{sSH} sH_CRS: H \subset 'C_R(S) by rewrite subsetI sHR (centsS sSH).
have{sH_CRS}: 'r_p(H) <= 2 by rewrite (leq_trans _ rS) ?p_rankS.
apply: contraTeq; rewrite eq_sym eqEproper sSZH negbK => lSZH.
by rewrite -ltnNge p_rank_abelem // -dimSZ properG_ltn_log.
have sZT: Z \subset T.
by rewrite subsetI sZR (centsS sWR) // centsC -defCRZ subsetIr.
have{SZ sSSZ maxSZ} not_sST: ~~ (S \subset T).
have: ~~ (SZ \subset T) by case/Ohm1_ucn_p2maxElem: maxSZ.
by rewrite join_subG sZT andbT.
have tiST: S :&: T :=: 1 by rewrite prime_TIg ?oS.
have defST: S * T = R.
apply/eqP; rewrite eqEcard TI_cardMg ?mul_subG ?subsetIl //=.
by rewrite mulnC oS -maxT Lagrange ?subsetIl.
have cRRb: abelian (R / T) by rewrite -defST quotientMidr quotient_abelian.
have sR'T: R^`(1) \subset T by rewrite der1_min ?char_norm.
have TI_SR': S :&: R^`(1) :=: 1.
by rewrite prime_TIg ?oS // (contra _ not_sST) // => /subset_trans->.
have defCRS : S \x 'C_T(S) = 'C_R(S).
rewrite (dprodE _ _) ?subsetIr //= -/T; last by rewrite setIA tiST setI1g.
rewrite -{1}(center_idP cSS) subcent_TImulg ?defST //.
by rewrite subsetI normG (subset_trans sSR) ?char_norm.
have sCTSR: 'C_T(S) \subset R by rewrite subIset ?subsetIl.
split; rewrite ?(odd_pgroup_rank1_cyclic (pgroupS _ pR) (oddSg _ oddR)) //= -/T.
rewrite -ltnS (leq_trans _ rS) //= -(p_rank_dprod p defCRS) -add1n leq_add2r.
by rewrite -rank_pgroup // rank_gt0 -cardG_gt1 oS.
Qed.
(* This is B & G, Corollary 5.4. Given our definition of narrow, this is used *)
(* directly in the proof of the main part of Theorem 5.3. *)
Corollary narrow_centP :
reflect (exists S, [/\ gval S \subset R, #|S| = p & 'r_p('C_R(S)) <= 2])
(p.-narrow R).
Proof.
rewrite /narrow rR; apply: (iffP (set0Pn _)) => [[E maxEp2E]|[S [sSR oS rCRS]]].
have [Ep2E maxE] := setIP maxEp2E.
have{maxEp2E} [_ [oZ _] _] := Ohm1_ucn_p2maxElem maxEp2E.
have [sER abelE dimE] := pnElemP Ep2E; have oE := card_pnElem Ep2E.
have sZE: Z \subset E by rewrite -(Ohm1_cent_max maxE pR) OhmS ?setIS ?centS.
have [S defE] := abelem_split_dprod abelE sZE; exists S.
have{defE} [[_ defZS _ _] oZS] := (dprodP defE, dprod_card defE).
split; first by rewrite (subset_trans _ sER) // -defZS mulG_subr.
by apply/eqP; rewrite -(eqn_pmul2l (ltnW p_gt1)) -{1}oZ oZS oE.
rewrite -dimE -p_rank_abelem // -(Ohm1_cent_max maxE pR) p_rank_Ohm1.
by rewrite -defZS /= centM setIA defCRZ.
have abelS := prime_abelem p_pr oS.
have cSZ: Z \subset 'C(S) by rewrite (centsS sSR) // centsC -defCRZ subsetIr.
have sSZR: S <*> Z \subset R by rewrite join_subG sSR.
have defSZ: S \x Z = S <*> Z.
rewrite dprodEY ?prime_TIg ?oS //= -/Z; apply: contraL rR => sSZ.
by rewrite -leqNgt (leq_trans _ rCRS) ?p_rankS // -{1}defCRZ setIS ?centS.
have abelSZ: p.-abelem (S <*> Z) by rewrite (dprod_abelem p defSZ) abelS.
have [pSZ cSZSZ _] := and3P abelSZ.
have dimSZ: logn p #|S <*> Z| = 2.
apply/eqP; rewrite -p_rank_abelem // eqn_leq (leq_trans (p_rankS _ _) rCRS).
rewrite -(p_rank_dprod p defSZ) p_rank_abelem // oS (pfactorK 1) // ltnS.
by rewrite -rank_pgroup // rank_gt0.
by rewrite subsetI sSZR sub_abelian_cent ?joing_subl.
exists [group of S <*> Z]; rewrite 3!inE sSZR abelSZ dimSZ /=.
apply/pmaxElemP; rewrite inE sSZR; split=> // E; case/pElemP=> sER abelE sSZE.
apply: contraTeq rCRS; rewrite eq_sym -ltnNge -dimSZ => neqSZE.
have [[pE cEE _] sSE] := (and3P abelE, subset_trans (joing_subl S Z) sSZE).
rewrite (leq_trans (properG_ltn_log pE _)) ?properEneq ?neqSZE //.
by rewrite -p_rank_abelem ?p_rankS // subsetI sER sub_abelian_cent.
Qed.
(* This is the main statement of B & G, Theorem 5.3, stating the equivalence *)
(* of the structural and rank characterizations of the "narrow" property. Due *)
(* to our definition of "narrow", the equivalence is the converse of that in *)
(* B & G (we define narrow in terms of maximal elementary abelian subgroups). *)
Lemma narrow_structureP : reflect (narrow_structure p R) (p.-narrow R).
Proof.
apply: (iffP idP) => [nnR | [S C sSR sCR oS cycC defSC]].
have [S [sSR oS rCRS]] := narrow_centP nnR.
have [cycC _ _ defCRS] := narrow_cent_dprod nnR oS sSR rCRS.
by exists S [group of 'C_T(S)]; rewrite //= -setIA subsetIl.
apply/narrow_centP; exists S; split=> //.
have cycS: cyclic S by rewrite prime_cyclic ?oS.
rewrite -(p_rank_dprod p defSC) -!(rank_pgroup (pgroupS _ pR)) // -addn1.
rewrite leq_add -?abelian_rank1_cyclic ?cyclic_abelian //.
Qed.
End Rank3.
(* This is B & G, Theorem 5.5 (a) and (b). Part (c), which is not used in the *)
(* proof of the Odd Order Theorem, is omitted. *)
Theorem Aut_narrow (A : {group {perm gT}}) :
p.-narrow R -> solvable A -> A \subset Aut R -> odd #|A| ->
[/\ (*a*) p^'.-group (A / 'O_p(A)), abelian (A / 'O_p(A))
& (*b*) 2 < 'r(R) -> forall x, x \in A -> p^'.-elt x -> #[x] %| p.-1].
Proof.
move=> nnR solA AutA oddA; have nilR := pgroup_nil pR.
have [rR | rR] := leqP 'r(R) 2.
have pA' := der1_Aut_rank2_pgroup pR oddR rR AutA solA oddA.
have sA'Ap: A^`(1) \subset 'O_p(A) by rewrite pcore_max ?der_normal.
have cAbAb: abelian (A / 'O_p(A)) by rewrite sub_der1_abelian.
split; rewrite // -(nilpotent_pcoreC p (abelian_nil cAbAb)).
by rewrite trivg_pcore_quotient dprod1g pcore_pgroup.
have ntR: R :!=: 1 by rewrite -rank_gt0 2?ltnW.
rewrite (rank_pgroup pR) in rR.
have [H [charH sHRZ] _ eH pCH] := critical_odd pR oddR ntR.
have{ntR} [[p_pr _ _] sHR] := (pgroup_pdiv pR ntR, char_sub charH).
have ntH: H :!=: 1 by rewrite trivg_exponent eH -prime_coprime ?coprimen1.
have{nnR} [S C sSR sCR oS cycC defSC] := narrow_structureP rR nnR.
have [_ mulSC cSC tiSC] := dprodP defSC.
have abelS: p.-abelem S := prime_abelem p_pr oS; have [pS cSS _] := and3P abelS.
have cycS: cyclic S by rewrite prime_cyclic ?oS.
have tiHS: H :&: S = 1.
have rCRS: 'r_p('C_R(S)) <= 2.
rewrite -(p_rank_dprod p defSC) -addn1 -!rank_pgroup ?(pgroupS _ pR) //.
by rewrite leq_add -?abelian_rank1_cyclic ?cyclic_abelian.
rewrite setIC prime_TIg ?oS //; apply: contraL (rCRS) => sSH; rewrite -ltnNge.
have cZHS: S \subset 'C('Z(H)) by rewrite centsC (centsS sSH) ?subsetIr.
pose U := S <*> 'Z(H).
have sUH: U \subset H by rewrite join_subG sSH subsetIl.
have cUU: abelian U by rewrite abelianY cSS center_abelian centsC.
have abelU: p.-abelem U by rewrite abelemE // cUU -eH exponentS.
have sUR: U \subset R := subset_trans sUH sHR.
have rU: 'r_p(U) <= 'r_p('C_R(S)).
by rewrite p_rankS //= subsetI sUR (centsS (joing_subl S 'Z(H))).
have nsUR: U <| R.
rewrite /normal sUR -commg_subl (subset_trans (commSg _ sUH)) //= -/U.
by rewrite (subset_trans sHRZ) // joing_subr.
have{rU}:= leq_trans rU rCRS; rewrite leq_eqVlt => /predU1P[] rU.
have Ep2U: [group of U] \in 'E_p^2(R).
by rewrite !inE /= sUR abelU -(p_rank_abelem abelU) rU.
have [F scn3F sUF] := normal_p2Elem_SCN3 rR Ep2U nsUR.
have [scnF rF] := setIdP scn3F; have [_ scF] := SCN_P scnF.
rewrite (leq_trans rF) // -scF -rank_pgroup ?(pgroupS (subsetIl _ _)) //.
by rewrite rankS ?setIS ?centS // (subset_trans _ sUF) ?joing_subl.
have defU: S :=: U.
apply/eqP; rewrite eqEcard oS joing_subl (card_pgroup (pgroupS sUR pR)).
by rewrite -p_rank_abelem // (leq_exp2l _ 1) // prime_gt1.
have ntS: S :!=: 1 by rewrite -cardG_gt1 oS prime_gt1.
have sSZ: S \subset 'Z(R) by rewrite prime_meetG ?oS ?meet_center_nil // defU.
by rewrite (setIidPl _) // centsC (subset_trans sSZ) ?subsetIr.
have{tiHS eH} oCHS: #|'C_H(S)| = p.
have ntCHS: 'C_H(S) != 1.
have: H :&: 'Z(R) != 1 by rewrite meet_center_nil ?char_normal.
by apply: subG1_contra; rewrite setIS // (centsS sSR) ?subsetIr.
have cycCHS: cyclic 'C_H(S).
have tiS_CHS: S :&: 'C_H(S) = 1 by rewrite setICA setIA tiHS setI1g.
rewrite (isog_cyclic (quotient_isog _ tiS_CHS)) ?subIset ?cent_sub ?orbT //.
rewrite (cyclicS _ (quotient_cyclic S cycC)) //= -(quotientMidl S C).
by rewrite mulSC quotientS // setSI // char_sub.
have abelCHS: p.-abelem 'C_H(S).
by rewrite abelemE ?cyclic_abelian // -eH exponentS ?subsetIl.
rewrite -(Ohm1_id abelCHS).
by rewrite (Ohm1_cyclic_pgroup_prime _ (abelem_pgroup abelCHS)).
pose B := A^`(1) <*> [set a ^+ p.-1 | a in A].
have sBA: B \subset A.
rewrite join_subG (der_sub 1 A) /=.
by apply/subsetP=> _ /imsetP[a Aa ->]; rewrite groupX.
have AutB: B \subset Aut R := subset_trans sBA AutA.
suffices pB (X : {group {perm gT}}): X \subset B -> p^'.-group X -> X :=: 1.
have cAbAb: abelian (A / 'O_p(A)).
rewrite sub_der1_abelian // pcore_max ?der_normal //.
apply/pgroupP=> q q_pr; apply: contraLR => p'q; rewrite -p'natE //.
have [X sylX] := Sylow_exists q A^`(1); have [sXA' qX _] := and3P sylX.
rewrite -partn_eq1 ?cardG_gt0 // -(card_Hall sylX).
by rewrite (pB X) ?cards1 ?(pi_pgroup qX) ?(subset_trans sXA') ?joing_subl.
rewrite cAbAb -(nilpotent_pcoreC p (abelian_nil cAbAb)) trivg_pcore_quotient.
rewrite dprod1g pcore_pgroup; split=> //_ a Aa p'a.
rewrite order_dvdn -cycle_eq1 [<[_]>]pB ?(pgroupS (cycleX _ _) p'a) //.
by rewrite genS // sub1set inE orbC (imset_f (expgn^~ _)).
move=> sXB p'X; have AutX := subset_trans sXB AutB.
pose toX := ([Aut R] \ AutX)%gact; pose CX := 'C_(H | toX)(X).
suffices sHCX: H \subset CX.
rewrite -(setIid X) coprime_TIg ?(pnat_coprime (pgroupS _ pCH)) //.
by rewrite subsetIidl gacent_ract setIid gacentC in sHCX.
elim: _.+1 {1 2 4 6}H (charH) (subxx H) (ltnSn #|H|) => // n IHn L charL sLH.
rewrite ltnS => leLn; have sLR := char_sub charL; pose K := [~: L, R].
wlog ntL: / L :!=: 1 by case: eqP => [-> | _ -> //]; rewrite sub1G.
have charK: K \char R by rewrite charR ?char_refl.
have ltKL: K \proper L.
have nLR: R \subset 'N_R(L) by rewrite subsetIidl char_norm.
exact: nil_comm_properl nilR sLR ntL nLR.
have [sKL sKR] := (proper_sub ltKL, char_sub charK).
have [sKH pK] := (subset_trans sKL sLH, pgroupS sKR pR : p.-group K).
have nsKH: K <| H := normalS sKH sHR (char_normal charK).
have sKCX: K \subset CX by rewrite IHn ?(leq_trans (proper_card ltKL)) ?leLn.
have pL := pgroupS sLR pR; have nKL: L \subset 'N(K) := commg_norml _ _.
have{pS cSS} oLb: #|L / K| = p.
have [v defS] := cyclicP cycS; rewrite defS cycle_subG in sSR.
have ntLb: L / K != 1 by rewrite -subG1 quotient_sub1 ?proper_subn.
have [_ p_dv_Lb _] := pgroup_pdiv (quotient_pgroup _ pL) ntLb.
apply/eqP; rewrite eqn_leq {p_dv_Lb}(dvdn_leq _ p_dv_Lb) // andbT.
rewrite -divg_normal ?(normalS sKL sLH nsKH) // leq_divLR ?cardSg //= -/K.
rewrite -(card_lcoset K v) -(LagrangeI L 'C(S)) -indexgI /= -oCHS /K commGC.
rewrite {2}defS cent_cycle index_cent1 leq_mul ?subset_leq_card ?setSI //.
by apply/subsetP=> vx; case/imsetP=> x Lx ->; rewrite mem_lcoset mem_commg.
have cycLb: cyclic (L / K) by rewrite prime_cyclic ?oLb.
rewrite -(quotientSGK _ sKCX) // quotientGI // subsetI quotientS //= -/K.
have actsXK: [acts X, on K | toX] by rewrite acts_ract subxx acts_char.
rewrite ext_coprime_quotient_cent ?(pnat_coprime pK p'X) ?(pgroup_sol pK) //.
have actsAL : {acts A, on group L | [Aut R]} by apply: gacts_char.
have sAD: A \subset qact_dom <[actsAL]> [~: L, R].
by rewrite qact_domE // acts_actby subxx (setIidPr sKL) acts_char.
suffices cLbX: X \subset 'C(L / K | <[actsAL]> / _).
rewrite gacentE ?qact_domE // subsetI quotientS //=.
apply/subsetP=> Ku LbKu; rewrite inE; apply/subsetP=> x Xx; rewrite inE.
have [Dx cLx] := setIdP (subsetP cLbX x Xx); have [Ax _] := setIdP Dx.
rewrite inE in cLx; have:= subsetP cLx Ku LbKu; rewrite inE /=.
have [u Nu Lu ->] := morphimP LbKu.
by rewrite !{1}qactE // ?actbyE // qact_domE ?(subsetP actsXK).
rewrite (subset_trans sXB) // astab_range -ker_actperm gen_subG.
rewrite -sub_morphim_pre; last by rewrite -gen_subG ?(subset_trans sBA).
rewrite morphimU subUset morphim_der // (sameP trivgP derG1P).
rewrite (abelianS _ (Aut_cyclic_abelian cycLb)); last first.
exact: subset_trans (morphim_sub _ _) (im_actperm_Aut _).
apply/subsetP=> _ /morphimP[_ _ /imsetP[x Ax ->] ->].
have Dx := subsetP sAD x Ax; rewrite inE morphX //= -order_dvdn.
apply: dvdn_trans (order_dvdG (actperm_Aut _ Dx)) _.
by rewrite card_Aut_cyclic // oLb (@totient_pfactor p 1) ?muln1.
Qed.
End OneGroup.
(* This is B & G, Theorem 5.6, parts (a) and (c). We do not prove parts (b), *)
(* (d) and (e), as they are not used in the proof of the Odd Order Theorem. *)
Theorem narrow_der1_complement_max_pdiv gT p (G S : {group gT}) :
odd #|G| -> solvable G -> p.-Sylow(G) S -> p.-narrow S ->
(2 < 'r(S)) ==> p.-length_1 G ->
[/\ (*a*) p^'.-Hall(G^`(1)) 'O_p^'(G^`(1))
& (*c*) forall q, q \in \pi(G / 'O_p^'(G)) -> q <= p].
Proof.
move=> oddG solG sylS nnS; case: (leqP 'r(S) 2) => /= rS pl1G.
have rG: 'r_p(G) <= 2 by rewrite -(rank_Sylow sylS).
split=> [|q]; first by have [-> _ _] := rank2_der1_complement solG oddG rG.
exact: rank2_max_pdiv solG oddG rG.
rewrite /pHall pcore_sub pcore_pgroup pnatNK /=.
rewrite -(pcore_setI_normal p^' (der_normal 1 G)) // setIC indexgI /=.
wlog Gp'1: gT G S oddG nnS solG sylS rS pl1G / 'O_p^'(G) = 1.
set K := 'O_p^'(G); have [_ nKG] := andP (pcore_normal _ G : K <| G).
move/(_ _ (G / K) (S / K))%G; rewrite quotient_sol ?quotient_odd //.
have [[sSG pS _] p'K] := (and3P sylS, pcore_pgroup _ G : p^'.-group K).
have [nKS nKG'] := (subset_trans sSG nKG, subset_trans (der_sub 1 G) nKG).
have tiKS: K :&: S = 1 := coprime_TIg (p'nat_coprime p'K pS).
have isoS := isog_symr (quotient_isog nKS tiKS).
rewrite (isog_narrow p isoS) {isoS}(isog_rank isoS) quotient_pHall //.
rewrite plength1_quo // trivg_pcore_quotient indexg1 /= -quotient_der //.
by rewrite card_quotient //= -/K -(card_isog (quotient1_isog _)); apply.
rewrite Gp'1 indexg1 -(card_isog (quotient1_isog _)) -pgroupE.
have [sSG pS _] := and3P sylS; have oddS: odd #|S| := oddSg sSG oddG.
have ntS: S :!=: 1 by rewrite -rank_gt0 (leq_trans _ rS).
have [p_pr _ _] := pgroup_pdiv pS ntS; have p_gt1 := prime_gt1 p_pr.
have{pl1G} defS: 'O_p(G) = S.
by rewrite (eq_Hall_pcore _ sylS) -?plength1_pcore_Sylow.
have nSG: G \subset 'N(S) by rewrite -defS gFnorm.
pose fA := restrm nSG (conj_aut S); pose A := fA @* G.
have AutA: A \subset Aut S by rewrite [A]im_restrm Aut_conj_aut.
have [solA oddA]: solvable A /\ odd #|A| by rewrite morphim_sol ?morphim_odd.
have [/= _ cAbAb p'A_dv_p1] := Aut_narrow pS oddS nnS solA AutA oddA.
have{defS} pKfA: p.-group ('ker fA).
rewrite (pgroupS _ pS) //= ker_restrm ker_conj_aut.
by rewrite -defS -Fitting_eq_pcore ?cent_sub_Fitting.
split=> [|q].
rewrite -(pmorphim_pgroup pKfA) ?der_sub // morphim_der //.
by rewrite (pgroupS (der1_min _ cAbAb)) ?pcore_pgroup ?gFnorm.
rewrite mem_primes => /and3P[q_pr _ /Cauchy[] // x Gx ox].
rewrite leq_eqVlt -implyNb; apply/implyP=> p'q; rewrite -(ltn_predK p_gt1) ltnS.
have ofAx: #[fA x] = q.
apply/prime_nt_dvdP=> //; last by rewrite -ox morph_order.
rewrite order_eq1; apply: contraNneq p'q => fAx1.
by apply: (pgroupP pKfA); rewrite // -ox order_dvdG //; apply/kerP.
have p'fAx: p^'.-elt (fA x) by rewrite /p_elt ofAx pnatE.
by rewrite -ofAx dvdn_leq ?p'A_dv_p1 ?mem_morphim // -(subnKC p_gt1).
Qed.
End Five.
|
{"author": "math-comp", "repo": "odd-order", "sha": "663e1827836cf0dedebb99f0ab6b232bab9bffd0", "save_path": "github-repos/coq/math-comp-odd-order", "path": "github-repos/coq/math-comp-odd-order/odd-order-663e1827836cf0dedebb99f0ab6b232bab9bffd0/theories/BGsection5.v"}
|
# -*- coding: utf-8 -*-
# file: example.py
# date: 2021-08-01
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
import os
import numpy as np
import cv2
import random
#from google.colab.patches import cv2_imshow
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog
from detectron2.modeling import build_model
from utils import test
def get_model(
    dataset: str = "COCO", task: str = "Detection",
    model: str = "faster_rcnn",
backbone: str = "R_101_FPN_3x",
if_pretrained: bool = True,
cache_path: str="./",
weight_file_subpath: str="weights"
) -> dict:
cfg = get_cfg()
cfg.MODEL.DEVICE = "cpu"
    cache_path = os.path.expanduser(cache_path)  # "~" is not expanded by os.path
    weights_dir: str = "%s/%s" % (cache_path, weight_file_subpath)
    os.makedirs(weights_dir, exist_ok=True)
model_name: str = "%s-%s/%s_%s" % (dataset, task, model, backbone)
adj_model_name: str = model_name.replace("/", "-")
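    # model_zoo resolves both the YAML config and the pretrained checkpoint
    # from the "<dataset>-<task>/<model>_<backbone>" naming scheme; the
    # flattened name is only used for the local weights filename.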
target_cfg_file = model_zoo.get_config_file(model_name + ".yaml")
cfg.merge_from_file(target_cfg_file)
#model = build_model(cfg)
target_weights_url = model_zoo.get_checkpoint_url(model_name + ".yaml")
model_path = "%s/%s.pkl" % (weights_dir, adj_model_name)
model_path = test.get_url_file(
target_weights_url, model_path, False)
#cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(target_weights_url)
cfg.MODEL.WEIGHTS = model_path
model = DefaultPredictor(cfg)
model_pkg = {"model": model, "cfg": cfg}
return model_pkg
def main() -> int:
img_path: str = test.get_url_file(
"http://seopic.699pic.com/photo/50114/7683.jpg_wh1200.jpg",
"dev.jpg", False)
    img = cv2.imread(img_path)
model_pkg = get_model(cache_path="~/Cache/detectron2")
model = model_pkg["model"]
output = model(img)
#print(output)
print(output["instances"].pred_classes)
print(output["instances"].pred_boxes)
v = Visualizer(img[:, :, ::-1], MetadataCatalog.get(model_pkg["cfg"].DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(output["instances"].to("cpu"))
out_img = out.get_image()[:, :, ::-1]
print(type(out_img))
cv2.namedWindow("image")
cv2.imshow('image', out_img)
cv2.waitKey(0)
return 0
if __name__ == "__main__":
main()
|
{"hexsha": "412cdd073aa2a35381fe0f6167d7f4516b3da092", "size": 2531, "ext": "py", "lang": "Python", "max_stars_repo_path": "wiki4codes/ML/CV/object_detection/py_example_venv/detectron2_example.py", "max_stars_repo_name": "innerNULL/wiki4codes", "max_stars_repo_head_hexsha": "b707557de24befba0cd9dcacf66d74e5c122bb18", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "wiki4codes/ML/CV/object_detection/py_example_venv/detectron2_example.py", "max_issues_repo_name": "innerNULL/wiki4codes", "max_issues_repo_head_hexsha": "b707557de24befba0cd9dcacf66d74e5c122bb18", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "wiki4codes/ML/CV/object_detection/py_example_venv/detectron2_example.py", "max_forks_repo_name": "innerNULL/wiki4codes", "max_forks_repo_head_hexsha": "b707557de24befba0cd9dcacf66d74e5c122bb18", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4382022472, "max_line_length": 103, "alphanum_fraction": 0.6783879889, "include": true, "reason": "import numpy", "num_tokens": 686}
|
from typing import Sequence, Optional
import haiku as hk
import jax.nn as jnn
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp
from dreamer.utils import initializer
tfd = tfp.distributions
tfb = tfp.bijectors
class Encoder(hk.Module):
def __init__(self, depth: int, kernels: Sequence[int],
initialization: str):
super(Encoder, self).__init__()
self._depth = depth
self._kernels = kernels
self._initialization = initialization
def __call__(self, observation: jnp.ndarray) -> jnp.ndarray:
def cnn(x):
kwargs = {
'stride': 2, 'padding': 'VALID',
'w_init': initializer(self._initialization)
}
for i, kernel in enumerate(self._kernels):
depth = 2 ** i * self._depth
x = jnn.relu(hk.Conv2D(depth, kernel, **kwargs)(x))
return x
cnn = hk.BatchApply(cnn)
return hk.Flatten(2)(cnn(observation))
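# Usage sketch (illustrative, not from the original repo): Haiku modules run
# inside hk.transform, and the hk.BatchApply above assumes a leading
# (batch, time) axis pair on observations. 'glorot' is a guess at a name
# accepted by dreamer.utils.initializer.
#
#   import jax
#   def _encode(obs):
#       return Encoder(32, (4, 4, 4, 4), 'glorot')(obs)
#   encode = hk.without_apply_rng(hk.transform(_encode))
#   obs = jnp.zeros((8, 10, 64, 64, 3))             # (B, T, H, W, C)
#   params = encode.init(jax.random.PRNGKey(0), obs)
#   features = encode.apply(params, obs)            # (B, T, feature_dim)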
class Decoder(hk.Module):
def __init__(self, depth: int,
kernels: Sequence[int],
output_shape: Sequence[int],
initialization: str):
super(Decoder, self).__init__()
self._depth = depth
self._kernels = kernels
self._output_shape = output_shape
self._initialization = initialization
def __call__(self, features: jnp.ndarray) -> jnp.ndarray:
x = hk.BatchApply(hk.Linear(32 * self._depth,
w_init=initializer(self._initialization))
)(features)
x = hk.Reshape((1, 1, 32 * self._depth), 2)(x)
def transpose_cnn(x):
kwargs = {
'stride': 2, 'padding': 'VALID',
'w_init': initializer(self._initialization)
}
for i, kernel in enumerate(self._kernels):
if i != len(self._kernels) - 1:
depth = 2 ** (len(self._kernels) - i - 2) * self._depth
x = jnn.relu(hk.Conv2DTranspose(depth, kernel, **kwargs)(x))
else:
x = hk.Conv2DTranspose(
self._output_shape[-1], kernel, **kwargs)(x)
return x
out = hk.BatchApply(transpose_cnn)(x)
return tfd.Independent(tfd.Normal(out, 1.0), len(self._output_shape))
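    # The decoder output parameterizes a unit-scale diagonal Normal over the
    # image; Independent reinterprets the output-shape axes as event dims so
    # log_prob sums over pixels and channels.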
class DenseDecoder(hk.Module):
def __init__(self, output_sizes: Sequence[int], dist: str,
initialization: str, name: Optional[str] = None):
super(DenseDecoder, self).__init__(name)
self._output_size = output_sizes
self._dist = dist
self._initialization = initialization
def __call__(self, features: jnp.ndarray):
mlp = hk.nets.MLP(self._output_size, initializer(self._initialization),
activation=jnn.elu)
mlp = hk.BatchApply(mlp)
x = mlp(features)
x = jnp.squeeze(x, axis=-1)
dist = dict(
normal=lambda mu: tfd.Normal(mu, 1.0),
bernoulli=lambda p: tfd.Bernoulli(p)
)[self._dist]
return tfd.Independent(dist(x), 0)
# Following https://github.com/tensorflow/probability/issues/840.
class StableTanhBijector(tfb.Tanh):
def __init__(self, validate_args=False, name='tanh_stable_bijector'):
super(StableTanhBijector, self).__init__(validate_args=validate_args,
name=name)
def _inverse(self, y):
dtype = y.dtype
y = y.astype(jnp.float32)
y = jnp.clip(y, -0.99999997, 0.99999997)
y = jnp.arctanh(y)
return y.astype(dtype)
class SampleDist(object):
def __init__(self, dist, samples=100):
self._dist = dist
self._samples = samples
@property
def name(self):
return 'SampleDist'
def __getattr__(self, name):
return getattr(self._dist, name)
def mean(self, seed):
samples = self._dist.sample(self._samples, seed=seed)
return jnp.mean(samples, 0)
def mode(self, seed):
sample = self._dist.sample(self._samples, seed=seed)
logprob = self._dist.log_prob(sample)
return sample[jnp.argmax(logprob, 0).squeeze()]
def entropy(self, seed):
sample = self._dist.sample(self._samples, seed=seed)
logprob = self.log_prob(sample)
return -jnp.mean(logprob, 0)
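# Usage sketch (illustrative, not from the original repo): a tanh-squashed
# Normal wrapped in SampleDist yields sample-based mode/entropy estimates,
# with StableTanhBijector keeping arctanh finite near the boundaries.
#
#   import jax
#   base = tfd.Normal(jnp.zeros(3), jnp.ones(3))
#   squashed = tfd.TransformedDistribution(base, StableTanhBijector())
#   dist = SampleDist(tfd.Independent(squashed, 1))
#   action = dist.mode(seed=jax.random.PRNGKey(0))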
|
{"hexsha": "92a76fcddde9f5e95244af61a4684c020afce36f", "size": 4093, "ext": "py", "lang": "Python", "max_stars_repo_path": "dreamer/blocks.py", "max_stars_repo_name": "yardenas/jax-dreamer", "max_stars_repo_head_hexsha": "b3f3945b389cc9153f8e06ad416252977bda488a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-01-19T11:04:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T18:36:16.000Z", "max_issues_repo_path": "dreamer/blocks.py", "max_issues_repo_name": "yardenas/jax-dreamer", "max_issues_repo_head_hexsha": "b3f3945b389cc9153f8e06ad416252977bda488a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dreamer/blocks.py", "max_forks_repo_name": "yardenas/jax-dreamer", "max_forks_repo_head_hexsha": "b3f3945b389cc9153f8e06ad416252977bda488a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7744360902, "max_line_length": 75, "alphanum_fraction": 0.6428047887, "include": true, "reason": "import jax", "num_tokens": 1102}
|
(* Title: Kleene algebra with tests
Author: Alasdair Armstrong, Victor B. F. Gomes, Georg Struth
Maintainer: Georg Struth <g.struth at sheffield.ac.uk>
*)
header {* Transformation Theorem for while Loops *}
theory FolkTheorem
imports Conway KAT DRAT
begin
text {*
We prove Kozen's transformation theorem for while loops \cite{Kozen97} in a weak setting that unifies
previous proofs in Kleene algebra with tests, demonic refinement algebras and a variant of probabilistic
Kleene algebra.
*}
context pre_conway
begin
abbreviation preservation :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "preserves" 60) where
"x preserves p \<equiv> test p \<and> p\<cdot>x\<cdot>p = p\<cdot>x \<and> !p\<cdot>x\<cdot>!p = !p\<cdot>x"
lemma preserves_test_closed: "\<lbrakk>test p; x preserves q\<rbrakk> \<Longrightarrow> p\<cdot>x preserves q"
apply (auto, metis mult.assoc test_mult_comm_var)
by (metis mult.assoc test_comp_closed_var test_mult_comm_var)
lemma conditional_helper1:
assumes "test r1"
"x1 preserves q" "y1 preserves q"
"x2 preserves q" "y2 preserves q"
shows "p\<cdot>q\<cdot>x1\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>(q\<cdot>!r1 + !q\<cdot>!r2) = p\<cdot>q\<cdot>x1\<cdot>(r1\<cdot>y1)\<^sup>\<dagger>\<cdot>!r1"
proof -
let ?B = "q\<cdot>!r1 + !q\<cdot>!r2"
have pres: "q\<cdot>(r1\<cdot>y1) = q \<cdot> (r1\<cdot>y1) \<cdot>q"
by (metis assms preserves_test_closed)
hence "q\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger> = (q\<cdot>r1\<cdot>y1)\<^sup>\<dagger>\<cdot>q"
by (metis assms(2-) test_preserve1 dagger_slide mult.assoc)
hence "p\<cdot>q\<cdot>x1\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>?B = p\<cdot>q\<cdot>x1\<cdot>(q\<cdot>r1\<cdot>y1)\<^sup>\<dagger>\<cdot>q\<cdot>?B"
by (metis assms(2) mult.assoc)
also have "... = p\<cdot>q\<cdot>x1\<cdot>(q\<cdot>r1\<cdot>y1)\<^sup>\<dagger>\<cdot>q\<cdot>!r1"
by (metis assms(5) mult.assoc weak_distrib_left_var test_comp_mult annil add_zeror test_mult_idem_var)
also have "... = p\<cdot>q\<cdot>x1\<cdot>(r1\<cdot>y1)\<^sup>\<dagger>\<cdot>!r1"
by (metis pres assms(2) mult.assoc test_preserve)
finally show ?thesis .
qed
lemma conditional_helper2:
assumes "test r2"
"x1 preserves q" "y1 preserves q"
"x2 preserves q" "y2 preserves q"
shows "p\<cdot>!q\<cdot>x2\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>(q\<cdot>!r1 + !q\<cdot>!r2) = p\<cdot>!q\<cdot>x2\<cdot>(r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!r2"
proof -
let ?B = "q\<cdot>!r1 + !q\<cdot>!r2"
have pres: "!q\<cdot>(r2\<cdot>y2) = !q \<cdot> (r2\<cdot>y2) \<cdot>!q"
by (metis assms preserves_test_closed)
hence "!q\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger> = (!q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!q"
by (metis assms(2-) test_preserve1[of "!q" "r2\<cdot>y2" "r1\<cdot>y1"] add.commute mult.assoc test_comp_closed_var test_double_comp_var)
hence "p\<cdot>!q\<cdot>x2\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>?B = p\<cdot>!q\<cdot>x2\<cdot>(!q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!q\<cdot>?B"
by (metis assms(4) mult.assoc)
also have "... = p\<cdot>!q\<cdot>x2\<cdot>(!q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!q\<cdot>!r2"
by (metis assms(5) mult.assoc test_comp_closed_var weak_distrib_left_var test_comp_mult2 test_mult_idem_var add_zerol annil)
also have "... = p\<cdot>!q\<cdot>x2\<cdot>(r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!r2"
by (metis assms(4) mult.assoc pres test_comp_closed_var test_preserve)
finally show ?thesis .
qed
theorem conditional:
assumes "test p" "test r1" "test r2"
"x1 preserves q" "y1 preserves q"
"x2 preserves q" "y2 preserves q"
shows "(p\<cdot>q + !p\<cdot>!q)\<cdot>(p\<cdot>x1\<cdot>(r1\<cdot>y1)\<^sup>\<dagger>\<cdot>!r1 + !p\<cdot>x2\<cdot>(r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!r2) =
(p\<cdot>q + !p\<cdot>!q)\<cdot>(q\<cdot>x1 + !q\<cdot>x2)\<cdot>((q\<cdot>r1 + !q\<cdot>r2)\<cdot>(q\<cdot>y1 + !q\<cdot>y2))\<^sup>\<dagger>\<cdot>!(q\<cdot>r1 + !q\<cdot>r2)"
proof -
have "p\<cdot>q\<cdot>(x1\<cdot>(r1\<cdot>y1)\<^sup>\<dagger>\<cdot>!r1) = p\<cdot>q\<cdot>x1\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>(q\<cdot>!r1 + !q\<cdot>!r2)" and "!p\<cdot>!q\<cdot>(x2\<cdot>(r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!r2) = !p\<cdot>!q\<cdot>x2\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>(q\<cdot>!r1 + !q\<cdot>!r2)"
apply (metis assms(2,4-) conditional_helper1[of r1 q x1 y1 x2 y2 p] mult.assoc)
by (metis assms(3-) conditional_helper2[of r2 q x1 y1 x2 y2 "!p"] mult.assoc)
moreover have "(p\<cdot>q + !p\<cdot>!q)\<cdot>(p\<cdot>x1\<cdot>(r1\<cdot>y1)\<^sup>\<dagger>\<cdot>!r1 + !p\<cdot>x2\<cdot>(r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!r2) = p\<cdot>q\<cdot>(x1\<cdot>(r1\<cdot>y1)\<^sup>\<dagger>\<cdot>!r1) + !p\<cdot>!q\<cdot>(x2\<cdot>(r2\<cdot>y2)\<^sup>\<dagger>\<cdot>!r2)"
by (metis assms(1,4-) cond_distr mult.assoc test_def)
moreover have "... = (p\<cdot>q\<cdot>x1 + !p\<cdot>!q\<cdot>x2)\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>(q\<cdot>!r1 + !q\<cdot>!r2)"
by (metis calculation(1) calculation(2) distrib_right')
moreover have "... = (q\<cdot>p\<cdot>x1 + !q\<cdot>!p\<cdot>x2)\<cdot>(q\<cdot>r1\<cdot>y1 + !q\<cdot>r2\<cdot>y2)\<^sup>\<dagger>\<cdot>(q\<cdot>!r1 + !q\<cdot>!r2)"
by (metis assms(1) assms(5) test_comp_closed_var test_mult_comm_var)
moreover have "... = (q\<cdot>p + !q\<cdot>!p)\<cdot>(q\<cdot>x1 + !q\<cdot>x2)\<cdot>((q\<cdot>r1 + !q\<cdot>r2)\<cdot>(q\<cdot>y1 + !q\<cdot>y2))\<^sup>\<dagger>\<cdot>!(q\<cdot>r1 + !q\<cdot>r2)"
by (metis assms(1-3,5) cond_distr de_morgan_var2 test_comp_closed_var)
ultimately show ?thesis
by (metis assms(1,5) test_comp_closed_var test_mult_comm_var)
qed
theorem nested_loops:
assumes "test p" "test q"
shows "(p\<cdot>x\<cdot>(q\<cdot>y)\<^sup>\<dagger>\<cdot>!q)\<^sup>\<dagger>\<cdot>!p = p\<cdot>x\<cdot>((p + q)\<cdot>(q\<cdot>y + !q\<cdot>x))\<^sup>\<dagger>\<cdot>!(p + q) + !p"
proof -
have "p\<cdot>x\<cdot>((p + q)\<cdot>(q\<cdot>y + !q\<cdot>x))\<^sup>\<dagger>\<cdot>!(p + q) + !p = p\<cdot>x\<cdot>(q\<cdot>y)\<^sup>\<dagger>\<cdot>(!q\<cdot>p\<cdot>x\<cdot>(q\<cdot>y)\<^sup>\<dagger>)\<^sup>\<dagger>\<cdot>!p\<cdot>!q + !p"
by (metis assms test_distrib mult.assoc de_morgan2 dagger_denest2)
thus ?thesis
by (metis assms mult.assoc test_comp_closed_var test_mult_comm_var add.commute dagger_slide dagger_unfoldl_distr)
qed
lemma postcomputation:
assumes "y preserves p"
shows "(p\<cdot>x)\<^sup>\<dagger>\<cdot>!p\<cdot>y = !p\<cdot>y + p\<cdot>(p\<cdot>x\<cdot>(!p\<cdot>y + p))\<^sup>\<dagger>\<cdot>!p"
proof -
have "p\<cdot>(p\<cdot>x\<cdot>(!p\<cdot>y + p))\<^sup>\<dagger>\<cdot>!p = p\<cdot>(1 + p\<cdot>x\<cdot>((!p\<cdot>y + p)\<cdot>p\<cdot>x)\<^sup>\<dagger>\<cdot>(!p\<cdot>y + p))\<cdot>!p"
by (metis dagger_prod_unfold mult.assoc)
also have "... = (p + p\<cdot>p\<cdot>x\<cdot>((!p\<cdot>y + p)\<cdot>p\<cdot>x)\<^sup>\<dagger>\<cdot>(!p\<cdot>y + p))\<cdot>!p"
by (metis assms mult.assoc weak_distrib_left_var distrib_right' mult_1_left)
also have "... = p\<cdot>!p + p\<cdot>x\<cdot>(!p\<cdot>y\<cdot>p\<cdot>x + p\<cdot>p\<cdot>x)\<^sup>\<dagger>\<cdot>(!p\<cdot>y + p)\<cdot>!p"
by (metis assms mult.assoc distrib_right' test_mult_idem_var)
also have "... = p\<cdot>!p + p\<cdot>x\<cdot>(!p\<cdot>y\<cdot>p\<cdot>x + p\<cdot>p\<cdot>x)\<^sup>\<dagger>\<cdot>(!p\<cdot>y\<cdot>!p + p\<cdot>!p)"
by (metis distrib_right' mult.assoc)
also have "... = p\<cdot>x\<cdot>(!p\<cdot>y\<cdot>!p\<cdot>p\<cdot>x + p\<cdot>x)\<^sup>\<dagger>\<cdot>(!p\<cdot>y)"
by (metis assms test_comp_mult test_mult_idem_var add_zerol add_zeror)
also have "... = p\<cdot>x\<cdot>(!p\<cdot>y\<cdot>0 + p\<cdot>x)\<^sup>\<dagger>\<cdot>!p\<cdot>y"
by (metis assms mult.assoc test_double_comp_var test_mult_comp annil)
moreover have "... = p \<cdot>x \<cdot>(p \<cdot>x)\<^sup>\<dagger>\<cdot>(!p \<cdot> y \<cdot> 0 \<cdot>(p \<cdot>x)\<^sup>\<dagger>)\<^sup>\<dagger>\<cdot>!p \<cdot> y"
by (metis mult.assoc add.commute dagger_denest2)
moreover have "... = p \<cdot>x \<cdot>(p \<cdot>x)\<^sup>\<dagger>\<cdot>!p \<cdot> y \<cdot> (0\<cdot>!p \<cdot> y)\<^sup>\<dagger>"
by (metis annil dagger_slide mult.assoc)
ultimately have "p\<cdot>(p\<cdot>x\<cdot>(!p\<cdot>y + p))\<^sup>\<dagger>\<cdot>!p = p \<cdot>x \<cdot>(p \<cdot>x)\<^sup>\<dagger>\<cdot>!p \<cdot> y"
by (metis zero_dagger annil mult_1_right)
thus "(p\<cdot>x)\<^sup>\<dagger>\<cdot>!p\<cdot>y = !p\<cdot>y + p\<cdot>(p\<cdot>x\<cdot>(!p\<cdot>y + p))\<^sup>\<dagger>\<cdot>!p"
by (metis dagger_unfoldl_distr mult.assoc)
qed
lemma composition_helper:
assumes "test g" "test h" "g\<cdot>y = y\<cdot>g"
shows "g\<cdot>(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h\<cdot>g = g\<cdot>(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h"
apply (subgoal_tac "g\<cdot>(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h \<le> (h\<cdot>y)\<^sup>\<dagger>\<cdot>!h\<cdot>g")
apply (metis assms(1) test_eq3 mult.assoc)
by (metis assms mult.assoc test_mult_comm_var order_refl dagger_simr mult_isor test_comp_closed_var)
theorem composition:
assumes "test g" "test h" "g\<cdot>y = y\<cdot>g" "!g\<cdot>y = y\<cdot>!g"
shows "(g\<cdot>x)\<^sup>\<dagger>\<cdot>!g\<cdot>(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h = !g\<cdot>(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h + g\<cdot>(g\<cdot>x\<cdot>(!g\<cdot>(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h + g))\<^sup>\<dagger>\<cdot>!g"
apply (subgoal_tac "(h\<cdot>y)\<^sup>\<dagger>\<cdot>!h preserves g")
by (metis postcomputation mult.assoc, metis assms composition_helper test_comp_closed_var mult.assoc)
end
text {*
Kleene algebras with tests form pre-Conway algebras, therefore the transformation theorem is valid for KAT as well.
*}
sublocale kat \<subseteq> pre_conway star
apply (default, simp_all only: star_prod_unfold star_sim2)
by (metis star_denest_var star_slide)
text {*
Demonic refinement algebras form pre-Conway algebras, therefore the transformation theorem is valid for DRA as well.
*}
sublocale dra_tests \<subseteq> pre_conway strong_iteration
apply (default, metis iteration_denest iteration_slide)
by (metis iteration_prod_unfold, metis iteration_sim)
text {*
We do not currently consider an expansion of probabilistic Kleene algebra.
*}
end
|
{"author": "Josh-Tilles", "repo": "AFP", "sha": "f4bf1d502bde2a3469d482b62c531f1c3af3e881", "save_path": "github-repos/isabelle/Josh-Tilles-AFP", "path": "github-repos/isabelle/Josh-Tilles-AFP/AFP-f4bf1d502bde2a3469d482b62c531f1c3af3e881/thys/KAT_and_DRA/SingleSorted/FolkTheorem.thy"}
|
theory BDD_select
imports Main BDD_basic
begin
definition select :: "nat \<Rightarrow> BDD \<Rightarrow> BDD \<Rightarrow> BDD" where
"select a t e = (if t = e then t else Select a t e)"
lemma select_noop [simp]: "norm n t \<Longrightarrow> norm n e \<Longrightarrow> t \<noteq> e \<Longrightarrow> select v t e = Select v t e"
by (auto simp: select_def)
theorem norm_select [simp]: "n > a \<Longrightarrow> norm a t \<Longrightarrow> norm a e \<Longrightarrow> norm n (select a t e)"
by (simp add: select_def)
theorem ordered_select [simp]: "n > a \<Longrightarrow> ordered a t \<Longrightarrow> ordered a e \<Longrightarrow> ordered n (select a t e)"
by (simp add: select_def)
theorem select_correct [simp]: "contains (select a t e) f = contains (Select a t e) f"
apply (rule iffI)
apply (auto simp add: select_def)
apply (metis (full_types) contains_sel_e contains_sel_t)
using contains.cases by auto
end
|
{"author": "jmaessen", "repo": "bdd-subtyping", "sha": "49852f5841dadfdb86ba87be923b42fb547810ef", "save_path": "github-repos/isabelle/jmaessen-bdd-subtyping", "path": "github-repos/isabelle/jmaessen-bdd-subtyping/bdd-subtyping-49852f5841dadfdb86ba87be923b42fb547810ef/BDD_select.thy"}
|
# -*- coding: utf-8 -*-
from scipy.stats import rv_continuous
from scipy.stats import rv_discrete
from scipy.stats import _continuous_distns as crv_helper
from scipy.stats import _discrete_distns as drv_helper
import scipy.special as special
import numpy as np
import matplotlib.pyplot as plt
def plothistogram(values, edges=None, markcen=False, **kwargs):
"""Usage:
plothistogram(samples,density=1)
plothistogram(*np.histogram(samples,density=1))
Args:
values(array): sample or histogram (when edges is set)
edges(Optional(array)): histrogram bin edges
"""
if edges is None:
plt.hist(values, align="mid", alpha=0.7, **kwargs)
else:
wbin = np.diff(edges)
xleft = edges[:-1]
container = plt.bar(xleft, height=values, width=wbin, align="edge", **kwargs)
xcen = xleft + wbin * 0.5
if markcen:
color = container[0].get_facecolor()
plt.plot(xcen, values, "o", color=color)
return xcen
class limitednorm_gen(rv_continuous):
"""Normal distribution within finite limits [-k,k]
pdf(x) = ptf_norm(x) + cdf_norm(-k)
"""
def _pdf(self, y):
# y = (x-loc)/scale
invalid = (y < -self.k) | (y > self.k)
if isinstance(y, np.ndarray):
ret = np.exp(-(y ** 2) / 2.0) / np.sqrt(2.0 * np.pi) + self.offset
ret[invalid] = 0
return ret
elif invalid:
return 0 * y
else:
return np.exp(-(y ** 2) / 2.0) / np.sqrt(2.0 * np.pi) + self.offset
def _cdf(self, y):
# y = (x-loc)/scale
if isinstance(y, np.ndarray):
ret = special.ndtr(y) + self.offset
ret[y < -self.k] = 0
ret[y > self.k] = 1
return ret
else:
if y <= -self.k:
return 0
elif y >= self.k:
return 1
else:
return special.ndtr(y) + self.offset
def _ppf(self, p):
return special.ndtri(p - self.offset)
class truncnorm_gen(rv_continuous):
"""Normal distribution within finite limits [a,b]"""
def _argcheck(self, a, b):
self.a = a
self.b = b
self._cdfb = crv_helper._norm_cdf(b)
self._cdfa = crv_helper._norm_cdf(a)
self._cdfminb = crv_helper._norm_cdf(-b)
self._cdfmina = crv_helper._norm_cdf(-a)
self._delta = np.where(
self.a > 0, -(self._cdfminb - self._cdfmina), self._cdfb - self._cdfa
)
self._logdelta = np.log(self._delta)
return a != b
def _pdf(self, x, a, b):
return crv_helper._norm_pdf(x) / self._delta
def _logpdf(self, x, a, b):
return crv_helper._norm_logpdf(x) - self._logdelta
def _cdf(self, x, a, b):
return (crv_helper._norm_cdf(x) - self._cdfa) / self._delta
def _ppf(self, q, a, b):
return np.where(
self.a > 0,
-crv_helper._norm_ppf(q * self._cdfminb + self._cdfmina * (1.0 - q)),
crv_helper._norm_ppf(q * self._cdfb + self._cdfa * (1.0 - q)),
)
def _stats(self, a, b):
nA, nB = self._cdfa, self._cdfb
d = nB - nA
pA, pB = crv_helper._norm_pdf(a), crv_helper._norm_pdf(b)
mu = (pA - pB) / d # correction sign
mu2 = 1 + (a * pA - b * pB) / d - mu * mu
return mu, mu2, None, None
def limitednorm(k, **kwargs):
k = np.abs(k)
return truncnorm_gen(name="limitednorm", **kwargs)(a=-k, b=k)
# return scipy.stats.truncnorm(a=-k,b=k)
class holenorm_gen(rv_continuous):
"""Flipped normal distribution within finite limits [a,b]"""
def _argcheck(self, a, b):
self.a = a
self.b = b
self._cdfb = crv_helper._norm_cdf(b)
self._cdfa = crv_helper._norm_cdf(a)
self._cdfminb = crv_helper._norm_cdf(-b)
self._cdfmina = crv_helper._norm_cdf(-a)
self._delta = np.where(
self.a > 0, -(self._cdfminb - self._cdfmina), self._cdfb - self._cdfa
)
self._logdelta = np.log(self._delta)
return a != b
def _pdf(self, x, a, b):
return crv_helper._norm_pdf(x) / self._delta
def _logpdf(self, x, a, b):
return crv_helper._norm_logpdf(x) - self._logdelta
def _cdf(self, x, a, b):
return (crv_helper._norm_cdf(x) - self._cdfa) / self._delta
def _ppf(self, q, a, b):
return np.where(
self.a > 0,
-crv_helper._norm_ppf(q * self._cdfminb + self._cdfmina * (1.0 - q)),
crv_helper._norm_ppf(q * self._cdfb + self._cdfa * (1.0 - q)),
)
def _stats(self, a, b):
nA, nB = self._cdfa, self._cdfb
d = nB - nA
pA, pB = crv_helper._norm_pdf(a), crv_helper._norm_pdf(b)
mu = (pA - pB) / d # correction sign
mu2 = 1 + (a * pA - b * pB) / d - mu * mu
return mu, mu2, None, None
def holenorm(k, **kwargs):
k = np.abs(k)
return holenorm_gen(name="holenorm", **kwargs)(a=-k, b=k)
# Random number generator (slow if ppf is calculated directly)
# distribution.rvs(size=1000)
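

# Minimal self-check (sketch): sample from the truncated normal on [-2, 2]
# and overlay the analytic pdf on a density histogram. Only helpers defined
# in this module are used; the bin count and sample size are arbitrary.
if __name__ == "__main__":
    dist = limitednorm(2.0)
    samples = dist.rvs(size=10000)
    xcen = plothistogram(*np.histogram(samples, bins=50, density=1), markcen=True)
    plt.plot(xcen, dist.pdf(xcen), "k-")
    plt.show()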
|
{"hexsha": "28bbcdbacb43377b76b59c9ffd544aee56733509", "size": 5119, "ext": "py", "lang": "Python", "max_stars_repo_path": "spectrocrunch/math/distributions.py", "max_stars_repo_name": "woutdenolf/spectrocrunch", "max_stars_repo_head_hexsha": "fde4b6e0f462f464ce7af6a942b355d3d8f39f77", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-04-16T15:51:36.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-16T11:21:05.000Z", "max_issues_repo_path": "spectrocrunch/math/distributions.py", "max_issues_repo_name": "woutdenolf/spectrocrunch", "max_issues_repo_head_hexsha": "fde4b6e0f462f464ce7af6a942b355d3d8f39f77", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "spectrocrunch/math/distributions.py", "max_forks_repo_name": "woutdenolf/spectrocrunch", "max_forks_repo_head_hexsha": "fde4b6e0f462f464ce7af6a942b355d3d8f39f77", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.4702380952, "max_line_length": 85, "alphanum_fraction": 0.5651494433, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 1561}
|
#!/usr/bin/env python
# matteo: use subprocess.getoutput if available
# use os.path.join instead of +
from __future__ import print_function
import sys, os, platform, subprocess
from setuptools import setup
from setuptools import Extension
import distutils.sysconfig
from Cython.Build import cythonize
import numpy
print("""WARNING: The libstempo API has changed substantially (for the better) from
versions 1.X to 2.X. If you need the older 1.X API, you can get an older libstempo
from https://pypi.python.org/simple/libstempo, or checkout the libstempo1
branch on GitHub - https://github.com/vallis/libstempo/tree/libstempo1""")
tempo2, force_tempo2 = None, False
argv_replace = []
for arg in sys.argv:
if arg.startswith('--with-tempo2='):
tempo2 = arg.split('=', 1)[1]
elif arg.startswith('--force-tempo2-install'):
force_tempo2 = True
else:
argv_replace.append(arg)
sys.argv = argv_replace
if tempo2 is None:
# hmm, you're making things hard, huh? let's try autodetecting in a few likely places
    try:
        stdout = subprocess.check_output('which tempo2',shell=True).decode()
        t2exec = [stdout[:-12]]  # strip the trailing "/bin/tempo2\n" (12 chars)
    except Exception:  # `which` may fail, or decode may hit strange bytes
        t2exec = []
virtenv = [os.environ['VIRTUAL_ENV']] if 'VIRTUAL_ENV' in os.environ else []
ldpath = map(lambda s: s[:-4],os.environ['LD_LIBRARY_PATH'].split(':')) if 'LD_LIBRARY_PATH' in os.environ else []
paths = t2exec + virtenv + list(ldpath) + [os.environ['HOME'],'/usr/local','/usr']
found = [path for path in paths if os.path.isfile(path + '/include/tempo2.h')]
found = list(set(found)) # remove duplicates
if found and not force_tempo2:
tempo2 = found[0]
print("Found tempo2 install in {0}, will use {1}.".format(found,"it" if len(found) == 1 else tempo2))
if 'TEMPO2' in os.environ:
runtime = os.environ['TEMPO2']
else:
runtime = os.path.join(tempo2,'share','tempo2')
print("But where is the tempo2 runtime? I'm guessing {}; if I am not right, you should define the environment variable TEMPO2.".format(runtime))
else:
# try installing tempo2!
tempo2 = os.path.dirname(os.path.dirname(os.path.dirname(distutils.sysconfig.get_python_lib())))
runtime = os.path.join(tempo2,'share','tempo2')
print("I have not been able to (or I was instructed not to) autodetect the location of the tempo2 headers and libraries.")
print("I will attempt to download and install tempo2 in {}; runtime files will be in {}.".format(tempo2,runtime))
print("Please note that if the environment variable TEMPO2 is defined, it will override {}.".format(runtime))
try:
subprocess.check_call(["./install_tempo2.sh",tempo2])
except subprocess.CalledProcessError:
print("I'm sorry, the tempo2 installation failed. I tried my best!")
sys.exit(2)
runtime = os.path.join(tempo2,'share','tempo2')
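# Bake the tempo2 runtime path into the package by instantiating the
# __init__.py.in template.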
initsrc = open('libstempo/__init__.py.in','r').read().replace("TEMPO2DIR",runtime)
open('libstempo/__init__.py','w').write(initsrc)
# need rpath links to shared libraries on Linux
if platform.system() == 'Linux':
linkArgs = ['-Wl,-R{}/lib'.format(tempo2)]
else:
linkArgs = []
setup(name = 'libstempo',
version = '2.3.5', # remember to change it in __init__.py.in
description = 'A Python wrapper for tempo2',
author = 'Michele Vallisneri',
author_email = 'vallis@vallis.org',
url = 'https://github.com/vallis/libstempo',
packages = ['libstempo'],
package_dir = {'libstempo': 'libstempo'},
package_data = {'libstempo': ['data/*', 'ecc_vs_nharm.txt']},
py_modules = ['libstempo.like','libstempo.multinest','libstempo.emcee',
'libstempo.plot','libstempo.toasim',
'libstempo.spharmORFbasis', 'libstempo.eccUtils'],
ext_modules = cythonize(Extension('libstempo.libstempo',['libstempo/libstempo.pyx'],
language = "c++",
include_dirs = [tempo2 + '/include',numpy.get_include()],
libraries = ['tempo2','tempo2pred','gomp'],
library_dirs = [tempo2 + '/lib'],
extra_compile_args = ["-Wno-unused-function"],
extra_link_args = linkArgs))
)
|
{"hexsha": "32c2a395d839e21f5be00308749331153d9bac77", "size": 4597, "ext": "py", "lang": "Python", "max_stars_repo_path": "setup.py", "max_stars_repo_name": "bshapiroalbert/libstempo", "max_stars_repo_head_hexsha": "e5e6231e9d9897aa161080baedd0ea210780460e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "setup.py", "max_issues_repo_name": "bshapiroalbert/libstempo", "max_issues_repo_head_hexsha": "e5e6231e9d9897aa161080baedd0ea210780460e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "setup.py", "max_forks_repo_name": "bshapiroalbert/libstempo", "max_forks_repo_head_hexsha": "e5e6231e9d9897aa161080baedd0ea210780460e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4144144144, "max_line_length": 156, "alphanum_fraction": 0.6190994127, "include": true, "reason": "import numpy", "num_tokens": 1138}
|
# coding=utf-8
# Copyright 2022 The ML Fairness Gym Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Module for evaluating an RNN agent.
Defines the evaluate_agent function, which runs a simulation for a provided
agent and environment and computes the agent's average reward and safety costs.
"""
from absl import logging
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
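
# Risk-score extractors: each maps an observation to a scalar risk score,
# where higher values mean higher risk.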
def violence_risk(observation):
return observation['response'][0]['violence_score']
def health_risk(observation):
return 1-observation['response'][0]['health_score']
def evaluate_agent(agent, env, alpha, num_users=100, deterministic=False,
scatter_plot_trajectories=False, figure_file_obj=None,
risk_score_extractor=violence_risk):
"""Runs an agent-env simulation to evaluate average reward and safety costs.
Args:
agent: rnn_cvar_agent.SafeRNNAgent object.
env: Recsim environment that returns responses with reward and health score.
alpha: The alpha used as the level for VaR/CVaR.
num_users: Number of users to sample for the evaluation.
deterministic: Whether the agent chooses the argmax action instead of
sampling.
    scatter_plot_trajectories: Whether to scatter/KDE-plot the evaluated
      trajectories (ratings vs. health).
figure_file_obj: File object to store the plot.
risk_score_extractor: A function which takes an observation and returns a
risk score.
Returns:
Dictionary with average reward, health score, cvar, var for num_users
sampled.
"""
results = {}
if hasattr(env._environment, 'set_active_pool'): # pylint: disable=protected-access
pools = ['train', 'eval', 'test']
else:
pools = ['all']
for pool in pools:
tf.keras.backend.set_learning_phase(0)
if hasattr(env._environment, 'set_active_pool'): # pylint: disable=protected-access
env._environment.set_active_pool(pool) # pylint: disable=protected-access
else:
assert pool == 'all'
rewards = []
health = []
ratings = []
max_episode_length = agent.max_episode_length
    agent.epsilon = 0.0  # Turn off any exploration.
    # The learning phase was set to 0 (evaluation) above, so dropout is unused.
    # Generate num_users trajectories.
for _ in range(num_users):
# TODO(): Clean the logged variables by making a data class.
curr_user_reward = 0.0
curr_user_health = 0.0
curr_user_rating = 0.0
reward = 0
observation = env.reset()
for _ in range(max_episode_length):
slate = agent.step(reward, observation, eval_mode=True,
deterministic=deterministic)
observation, reward, _, _ = env.step(slate)
curr_user_reward += reward
curr_user_health += 1-risk_score_extractor(observation)
if 'rating' in observation['response'][0]:
curr_user_rating += observation['response'][0]['rating']
agent.end_episode(reward, observation, eval_mode=True)
rewards.append(curr_user_reward/float(max_episode_length))
health.append(curr_user_health/float(max_episode_length))
ratings.append(curr_user_rating/float(max_episode_length))
agent.empty_buffer()
health_risks = 1-np.array(health)
var = np.percentile(health_risks, 100*alpha)
cvar = compute_cvar(health_risks, var)
logging.info('Average Reward = %f, Average Health = %f, '
'Average Ratings = %f,VaR = %f, CVaR = %f',
np.mean(rewards), np.mean(health), np.mean(ratings), var, cvar)
# Set the learning phase back to 1.
tf.keras.backend.set_learning_phase(1)
if scatter_plot_trajectories:
plot_trajectories(ratings, health, figure_file_obj)
results[pool] = {
'rewards': np.mean(rewards),
'health': np.mean(health),
'ratings': np.mean(ratings),
'var': var,
'cvar': cvar
}
if len(results) == 1: # No train/eval/test split, just return one value.
return results['all']
# Promote the eval results to the top-level dictionary.
results.update(results['eval'])
return results
def plot_trajectories(rewards, health, figure_file_obj):
"""Create a KDE or scatter plot of health rewards vs health."""
plt.figure()
try:
g = sns.jointplot(x=rewards, y=health, kind='kde')
g.plot_joint(plt.scatter, c='grey', s=30, linewidth=1, marker='+')
except np.linalg.LinAlgError:
# If the data does not support KDE plotting, just use scatter.
g = sns.jointplot(x=rewards, y=health, kind='scatter')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('$Reward$', '$Health$')
if figure_file_obj:
plt.savefig(figure_file_obj, format='png')
else:
plt.show()
def compute_cvar(health_risks, var):
"""Returns CVaR for the provided health_risks array."""
return np.mean([risk for risk in health_risks if risk >= var])
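
# Illustrative relationship between VaR and CVaR as used above (hypothetical
# values, chosen so the numbers can be checked by hand):
#
#   risks = np.array([0.1, 0.2, 0.3, 0.9, 0.95])
#   var = np.percentile(risks, 90)    # 90th percentile -> 0.93
#   cvar = compute_cvar(risks, var)   # mean of risks >= 0.93 -> 0.95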
|
{"hexsha": "608886bf2e96f6332b6ce9f66903c97f3f05055b", "size": 5372, "ext": "py", "lang": "Python", "max_stars_repo_path": "agents/recommenders/evaluation.py", "max_stars_repo_name": "jackblandin/ml-fairness-gym", "max_stars_repo_head_hexsha": "dce1feaacf2588e0a2d6187e896796241a25ed81", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "agents/recommenders/evaluation.py", "max_issues_repo_name": "jackblandin/ml-fairness-gym", "max_issues_repo_head_hexsha": "dce1feaacf2588e0a2d6187e896796241a25ed81", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "agents/recommenders/evaluation.py", "max_forks_repo_name": "jackblandin/ml-fairness-gym", "max_forks_repo_head_hexsha": "dce1feaacf2588e0a2d6187e896796241a25ed81", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.5442176871, "max_line_length": 88, "alphanum_fraction": 0.6984363366, "include": true, "reason": "import numpy", "num_tokens": 1283}
|
# -*- coding: utf-8 -*-
import numpy as np
from filterpy.common import Q_discrete_white_noise
class Process:
    """Constant-velocity process model for a Kalman filter.

    Defines the F, Q, B and u matrices.
    """

    def __init__(self, dt, state):
        """
        :param dt: Time step between state transitions.
        :param state: State description; only its ``dim`` attribute is used
            (forwarded to Q_discrete_white_noise).
        """
        # State-transition matrix of a constant-velocity model.
        self.F = np.array([[1, dt],
                           [0, 1]])
        # Process-noise covariance from a discrete white-noise model.
        self.Q = Q_discrete_white_noise(dim=state.dim, dt=dt, var=2.35)
        # No control input or control matrix.
        self.B = 0.0
        self.u = 0.0
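
if __name__ == "__main__":
    # Illustrative sketch (not part of the API): Process only needs an object
    # exposing a `dim` attribute, so a bare namespace stands in for the real
    # state class here.
    from types import SimpleNamespace

    proc = Process(dt=0.1, state=SimpleNamespace(dim=2))
    print(proc.F)  # constant-velocity state-transition matrix
    print(proc.Q)  # discrete white-noise process covariance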
|
{"hexsha": "fa99ebc196a53ea1c307c0c4ba5c97a08ce8a43c", "size": 484, "ext": "py", "lang": "Python", "max_stars_repo_path": "zolware/process.py", "max_stars_repo_name": "zolware/zolware_API", "max_stars_repo_head_hexsha": "653e0f71cff440c5ff409b69bdb20b619af0b8bc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "zolware/process.py", "max_issues_repo_name": "zolware/zolware_API", "max_issues_repo_head_hexsha": "653e0f71cff440c5ff409b69bdb20b619af0b8bc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "zolware/process.py", "max_forks_repo_name": "zolware/zolware_API", "max_forks_repo_head_hexsha": "653e0f71cff440c5ff409b69bdb20b619af0b8bc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.4736842105, "max_line_length": 71, "alphanum_fraction": 0.5289256198, "include": true, "reason": "import numpy", "num_tokens": 132}
|
import os
import numpy
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
module = 'cshepard'
setup(cmdclass = {'build_ext': build_ext},
name=module,
version='1.0',
ext_modules=[Extension(module,
[module + ".pyx"])],
include_dirs=[numpy.get_include(),
os.path.join(numpy.get_include(), 'numpy')]
)
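
# Typical invocation for building the Cython extension in place (standard
# distutils command, not specific to this module):
#   python csetup.py build_ext --inplace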
|
{"hexsha": "778434e611ca69006381b40b64f273c5372e53b6", "size": 442, "ext": "py", "lang": "Python", "max_stars_repo_path": "csetup.py", "max_stars_repo_name": "sdickreuter/ToneGen", "max_stars_repo_head_hexsha": "69c554c7207563a69479202349061e1f8ef4f328", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-18T20:27:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-18T20:27:50.000Z", "max_issues_repo_path": "csetup.py", "max_issues_repo_name": "sdickreuter/ToneGen", "max_issues_repo_head_hexsha": "69c554c7207563a69479202349061e1f8ef4f328", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "csetup.py", "max_forks_repo_name": "sdickreuter/ToneGen", "max_forks_repo_head_hexsha": "69c554c7207563a69479202349061e1f8ef4f328", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.0, "max_line_length": 63, "alphanum_fraction": 0.6244343891, "include": true, "reason": "import numpy", "num_tokens": 96}
|
import xml.etree.cElementTree as Elem
import re
import nltk
import string
import numpy as np
import sys
import sklearn
import pickle
from sklearn.model_selection import cross_val_predict, ShuffleSplit, KFold
from nltk.tokenize import RegexpTokenizer
#from sklearn.grid_search import RandomizedSearchCV
import seaborn as sns
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import sklearn_crfsuite
from sklearn_crfsuite import scorers
from sklearn_crfsuite import metrics
import xml
from xml.etree.ElementTree import Element, ElementTree
from sklearn.metrics import classification_report
import CRF_features_cascadedCRF
import CRF_measures_cascadedCRF
reload(sys)
sys.setdefaultencoding('utf8')
def dataPreprocessing(allReports):
    """Clean and tokenise the raw report lines.

    Mirrors the preprocessing inside token_LabelCreation (digits -> #NUM,
    't,o,v' -> 'tov', punctuation-aware splitting) but returns only the
    token lists, without BIO labels. Currently unused by the pipeline below.
    """
    allTokens=[]
    for report in allReports:
        reportTokens=[]
        for line in report:
            txt=line[0].strip()
            txt=re.sub(r'\d',"#NUM",txt)
            tovPat=re.compile(r't,o,v',re.IGNORECASE)
            txt=tovPat.sub('tov',txt)
            tokens=re.split(r'([,\(\).?:-]*)\s*',txt)
            tokens=filter(lambda a: a!='', tokens)
            reportTokens.append(tokens)
        allTokens.append(reportTokens)
    return allTokens
def token_LabelCreation(allReports):
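    """Convert (text, label) report lines into token-level BIO sequences.

    Digits are mapped to #NUM, 't,o,v' is normalised to 'tov', and text is
    split on punctuation. The first token of a labelled span gets B-<label>,
    subsequent tokens I-<label>; untagged text keeps the label 'O'. Returns a
    list of reports, each a list of (token, tag) pairs.
    """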
docs=[]
data=[]
for report in allReports:
#print report
EachReport=[]
for line in report:
#print line
label=line[1]
#print label
txt=line[0].strip()
#date=re.search(r'\d\d-\d\d-\d\d\d\d',txt)
#if date!=None:
# print line
# date_new=re.sub(r'-','/',date.group())
# txt=re.sub(r'\d\d-\d\d-\d\d\d\d',date_new,txt)
txt=re.sub(r'\d',"#NUM",txt)
tovPat=re.compile(r't,o,v',re.IGNORECASE)
txt=tovPat.sub('tov',txt)
#txt=re.sub(r'#NUM#NUM-#NUM#NUM-#NUM#NUM#NUM#NUM',"#NUM#NUM/#NUM#NUM/#NUM#NUM#NUM#NUM",txt)
tokens=re.split(r'([,\(\).?:-]*)\s*',txt)
tokens=filter(lambda a: a!='', tokens)
#tokens=txt.split()
#print tokens
for i in range(len(tokens)):
if label=='O':
EachReport.append((tokens[i],label))
else:
if i==0:
tag='B-'+label
EachReport.append((tokens[i],tag))
else:
tag='I-'+label
EachReport.append((tokens[i],tag))
docs.append(EachReport)
return docs
def CRF_featureCreation(docs):
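    """Add POS tags and build CRF feature/label sequences for labelled docs."""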
tokenList,data=CRF_features_cascadedCRF.posTagAdding(docs)
X = [CRF_features_cascadedCRF.sent2features(doc) for doc in data]
y = [CRF_features_cascadedCRF.sent2labels(doc) for doc in data]
#print len(X)
#print len(y)
return X,y,tokenList
def CRF_featureCreationTest(docs):
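    """Add POS tags and build CRF features for one unlabelled token sequence."""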
tokenList,data=CRF_features_cascadedCRF.posTagAddingTest(docs)
#print data
X = CRF_features_cascadedCRF.sent2features(data)
#print len(X)
#print len(y)
return X,tokenList
def CRF_trainer(xTrain,yTrain,xTest):
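    """Fit an L-BFGS CRF with L1/L2 regularisation and predict labels for xTest."""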
crf = sklearn_crfsuite.CRF(
algorithm='lbfgs',
c1=0.1,
c2=0.1,
max_iterations=100,
all_possible_transitions=True
)
#cv=ShuffleSplit(n_split=2,test_size=0.5,random_state=0)
#crf.predict(X)
crf.fit(xTrain,yTrain)
#print xTest
predicted=crf.predict(xTest)
#print predicted
return crf,predicted
def CRF_predict(crf,xTest):
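    """Predict labels for a single feature sequence with a fitted CRF."""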
#cv=ShuffleSplit(n_split=2,test_size=0.5,random_state=0)
#crf.predict(X)
xTest=[xTest]
predicted2=crf.predict(xTest)
#print predicted2
return predicted2
def textExtraction(reportNodes):
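    """Extract (text, tag) segments from each XML report node.

    Untagged text is labelled 'O'; text under a child node is labelled with
    that node's tag. Also returns the list of child tags encountered.
    """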
allReports=[]
nodeAr=[]
for report in reportNodes:
#print report.tag
oneReport=[]
        if report.text and re.search(r'\S',report.text):
reprtText=re.sub(r'\t|\n','',report.text)
#print reprtText
oneReport.append((reprtText.strip(),'O'))
for node in report:
#print node.tag
nodeText=[]
#print node
if node.tag not in nodeAr:
nodeAr.append(node.tag)
for text in node.itertext():
text=re.sub(r'\t|\n','',text)
nodeText.append(text.strip())
#print nodeText
#nodeText.append(text.strip().strip('\t').strip('\n').strip())
oneReport.append((" ".join(nodeText),node.tag))
            if node.tail and re.search(r'\S',node.tail):
nodeTail=re.sub(r'\t|\n','',node.tail)
#print nodeTail
oneReport.append((nodeTail.strip(),'O'))
allReports.append(oneReport)
return allReports,nodeAr
def train_test_onTrue(currentNode):
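    """Recursively train one CRF per node of the annotation hierarchy.

    The CRF for currentNode is trained on the gold text segments of that node
    in the training split and applied to the gold segments of the test split;
    the model is stored in crfDic, predictions in dicListPre and gold labels
    in dicListTrue. Recurses into the finding/abnormality child nodes.
    """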
reportNodes1=tree.findall("./"+currentNode)
reportNodes2=tree1.findall("./"+currentNode)
#print currentNode
allReports1,nodeAr1=textExtraction(reportNodes1)
allReports2,nodeAr2=textExtraction(reportNodes2)
#print allReports1
#np.random.shuffle(allReports1)
docs1=token_LabelCreation(allReports1)
X1,y1,tokenList1=CRF_featureCreation(docs1)
docs2=token_LabelCreation(allReports2)
X2,y2,tokenList2=CRF_featureCreation(docs2)
#print tokenList1
#print ReportList1
crf1,predicted2=CRF_trainer(X1,y1,X2)
#print predicted2[0]
if currentNode=='report':
global tokenListAll
tokenListAll=tokenList2
#print predicted2[0]
crfDic[currentNode]=crf1
if type(predicted2) is list:
dicListPre[currentNode]=predicted2
#if currentNode=='report/negative_finding':
# print zip(tokenList1,y1,predicted1)
else:
dicListPre[currentNode]=predicted2.tolist()
#if currentNode=='report/negative_finding':
# print zip(tokenList1,y1,predicted1.tolist())
dicListTrue[currentNode]=y2
#if currentNode=='report/negative_finding':
#print allReports1
#print docs1
#print predicted1
    # Individual performance of each classifier, comparing true and predicted
    # labels (for prediction, the classifiers were trained on true values).
if currentNode=="report/negative_finding/asymmetry" and y2!=[]:
#CRF_measures_cascadedCRF.partialPhraseLevel_measures(tokenList2,predicted2,y2)
CRF_measures_cascadedCRF.tokenLevel_measures(predicted2,y2,tokenList2,label_dic_2_pre)
for node in nodeAr1:
child=str(currentNode)+"/"+str(node)
if node=='positive_finding' or node=='negative_finding' \
or node=='mass' or node=='calcification' \
or node=='asymmetry' or node=='architectural_distortion' \
or child=='report/positive_finding/associated_features' \
or child=='report/negative_finding/associated_features':
train_test_onTrue(child)
def mergingResults(i,m,n,labelStartTrue,phrase,dicKeyTrueCount1):
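    """Stitch per-level gold and predicted BIO tags into hierarchical paths.

    Appends, for tokens m..n-1 of the current report, this level's gold and
    predicted labels (short and BIO forms) to phrase, then recurses into the
    child label sequences along the gold segment boundaries.
    """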
#print labelStartTrue
#print dicListTrue[labelStartTrue]
currentNodeInstanceTrue=dicListTrue[labelStartTrue][i]
#print "label start true:",labelStartTrue,i
#print currentNodeInstanceTrue
currentNodeInstancePre=dicListPre[labelStartTrue][i]
k=0
for j in range(m,n):
labelTrue=currentNodeInstanceTrue[k].split('-')
labelStartTagPres=labelTrue[0]
dicKeyTrue=labelStartTrue+'/'+labelTrue[len(labelTrue)-1]
#print "dicKeyTrue:",dicKeyTrue
pre1List=currentNodeInstancePre[k].split('-')
#print phrase[j]
phrase[j][0]=phrase[j][0]+"/"+labelTrue[len(labelTrue)-1]
phrase[j][1]=phrase[j][1]+"/"+currentNodeInstanceTrue[k]
#print j
phrase[j][2]=phrase[j][2]+"/"+pre1List[len(pre1List)-1]
phrase[j][3]=phrase[j][3]+"/"+currentNodeInstancePre[k]
#print phrase[j]
if k!=len(currentNodeInstanceTrue)-1:
labelStartTagNext=currentNodeInstanceTrue[k+1].split('-')[0]
else:
labelStartTagNext=None
if labelStartTagPres=='B':
beg=j
if labelStartTagNext=='B' or labelStartTagNext=='O' or k==len(currentNodeInstanceTrue)-1:
end=j
                if dicKeyTrue in dicKeyTrueCount1:
                    dicKeyTrueCount1[dicKeyTrue]=dicKeyTrueCount1.get(dicKeyTrue)+1
                else:
                    dicKeyTrueCount1[dicKeyTrue]=1
                if dicKeyTrue in dicListTrue:
#print dicKeyTrue,beg,end
mergingResults(dicKeyTrueCount1[dicKeyTrue]-1,beg,end+1,dicKeyTrue,phrase,dicKeyTrueCount1)
#else:
# TokenTruePre.append([tokenListAll[i][j],dicKeyTrue,dicKeyPre])
elif labelStartTagPres=='I':
if labelStartTagNext=='B' or labelStartTagNext=='O' or k==len(currentNodeInstanceTrue)-1:
end=j
                if dicKeyTrue in dicKeyTrueCount1:
                    dicKeyTrueCount1[dicKeyTrue]=dicKeyTrueCount1.get(dicKeyTrue)+1
                else:
                    dicKeyTrueCount1[dicKeyTrue]=1
                if dicKeyTrue in dicListTrue:
#print dicKeyTrue, beg,end
mergingResults(dicKeyTrueCount1[dicKeyTrue]-1,beg,end+1,dicKeyTrue,phrase,dicKeyTrueCount1)
#else:
# TokenTruePre.append([tokenListAll[i][j],dicKeyTrue,dicKeyPre])
k=k+1
def test_onPredicted(beg,end,phrase,preLabels,currentNode):
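    """Cascaded inference for one node of the hierarchy.

    Applies the CRF for currentNode to tokens beg..end-1, appends its
    predictions to preLabels, and recurses into child classifiers along the
    predicted segment boundaries.
    """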
#print currentNode
phrase2=phrase[beg:end]
#print phrase1
X1,tokenList1=CRF_featureCreationTest(phrase2)
#print X1
predicted1=CRF_predict(crfDic[currentNode],X1)
predicted1=predicted1[0]
#print predicted1
if type(predicted1) is not list:
predicted1=predicted1.tolist()
#if currentNode=='report/negative_finding':
# print zip(tokenList1,y1,predicted1.tolist())
preTokenList=zip(predicted1,tokenList1)
#print preTokenList
j=0
for i in range(beg,end):
pre1=preTokenList[j][0]
data1=preTokenList[j][1]
pre1List=pre1.split('-')
labelStartPres=pre1List[0]
labelEndPres=pre1List[len(pre1List)-1]
#print preLabels
preLabels[i][0]=preLabels[i][0]+"/"+labelEndPres
preLabels[i][1]=preLabels[i][1]+"/"+pre1
if j!=len(preTokenList)-1:
labelStartNext=preTokenList[j+1][0].split('-')[0]
else:
labelStartNext=None
if labelStartPres=='B':
beg=i
if labelStartNext=='B' or labelStartNext=='O' or j==len(preTokenList)-1:
end=i
child=str(currentNode)+"/"+labelEndPres
                if child in crfDic:
                    test_onPredicted(beg,end+1,phrase,preLabels,child)
elif labelStartPres=='I':
if labelStartNext=='B' or labelStartNext=='O' or j==len(preTokenList)-1:
end=i
child=str(currentNode)+"/"+labelEndPres
                if child in crfDic:
test_onPredicted(beg,end+1,phrase,preLabels,child)
j=j+1
#if currentNode=='report/negative_finding':
#print allReports1
#print docs1
#print predicted1
#CRF_measures_cascadedCRF.tokenLevel_measures(predicted1,y1,tokenList1)
def indivClassi_cascPredPerf(classifierName,trueList1,cascPreList):
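    """Token-level evaluation of a single classifier inside the cascade.

    Considers only tokens whose gold path matches classifierName, compares the
    gold child labels against the cascaded predictions, and writes per-token
    results to positive_finding_classifier.txt.
    """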
yTrueAll=[]
yTrue=[]
yPreAll=[]
yPre=[]
tokenAll=[]
tokenHere=[]
for i in range(len(tokenListAll)):
for j in range(len(tokenListAll[i])):
clNameLen=len(classifierName.split('/'))
trueLabelList=trueList1[i][j].split('/')
labelWoBITrue=[]
for item in trueLabelList:
itemList=item.split('-')
labelWoBITrue.append(str(itemList[len(itemList)-1]))
trueLabel="/".join(labelWoBITrue[:clNameLen])
#print trueLabel
if len(labelWoBITrue)>=clNameLen+1:
trueLabelChild="/".join(labelWoBITrue[:clNameLen+1])
#print trueLabelChild
if trueLabel==classifierName:
#print trueLabel, "\t", classifierName
#print trueLabelList
trueLabelList1=trueLabelList[clNameLen-1].split('-')
#print trueLabelList1
labelStartPres=trueLabelList1[0]
labelEndPres=trueLabelList1[len(trueLabelList1)-1]
#print preLabels
if j!=len(tokenListAll[i])-1:
if len(trueList1[i][j+1].split('/'))>=clNameLen:
labelStartNext=trueList1[i][j+1].split('/')[clNameLen-1].split('-')[0]
else:
labelStartNext='O'
else:
labelStartNext=None
if labelStartPres=='B':
tokenHere.append(tokenListAll[i][j])
yTrue.append(trueLabelChild)
preLabelChild="/".join(cascPreList[i][j].split('/')[:clNameLen+1])
#print trueLabelChild
#print preLabelChild
yPre.append(preLabelChild)
if labelStartNext=='B' or labelStartNext=='O' or j==len(tokenListAll[i])-1:
tokenAll.append(tokenHere)
tokenHere=[]
#print yTrue
yTrueAll.append(yTrue)
yTrue=[]
#print yPre
yPreAll.append(yPre)
yPre=[]
elif labelStartPres=='I':
tokenHere.append(tokenListAll[i][j])
yTrue.append(trueLabelChild)
preLabelChild="/".join(cascPreList[i][j].split('/')[:clNameLen+1])
yPre.append(preLabelChild)
if labelStartNext=='B' or labelStartNext=='O' or j==len(tokenListAll[i])-1:
tokenAll.append(tokenHere)
tokenHere=[]
#print yTrue
yTrueAll.append(yTrue)
yTrue=[]
#print yPre
yPreAll.append(yPre)
yPre=[]
#print yPreAll
out2=open('positive_finding_classifier.txt','w')
CRF_measures_cascadedCRF.tokenLevel_measures(yPreAll,yTrueAll,tokenAll)
for pre1,true1,token1 in zip(yPreAll,yTrueAll,tokenAll):
for i in range(len(pre1)):
if token1[i] not in string.punctuation:
out2.write(str(token1[i])+"\t"+str(true1[i])+"\t"+str(pre1[i])+"\n")
#tree = Elem.parse('./../labeling/train_shuffled_70_30.xml')
#root=tree.getroot()
#tree1 = Elem.parse('./../labeling/test_shuffled_70_30.xml')
#out=open('CRF_level1_file.txt','w')
tree_all = Elem.parse('./../labeling/new_data.xml')
list_tree=tree_all.findall('report')
out1=open('Token_True_Predicted_Labels',"a")
k=len(list_tree)/4
label_dic_all={}
label_dic_2={}
label_dic_2_true={}
label_dic_2_pre={}
label_dic_3_true={}
label_dic_3_pre={}
conf_mat_agg=np.zeros((34,34))
pickle_filename='CRFmodelA_trainedmodel.pkl'
pickle_path=open(pickle_filename,'wb')
best_f1micro=0
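# 4-fold cross-validation over the reports (k, computed above, is the fold
# size): each iteration trains the cascade on three quarters of the reports
# and evaluates on the held-out quarter.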
for i in range(0,4):
if i==0:
list_tree_test=list_tree[:k]
list_tree_train=list_tree[k:]
    elif i==3:  # last fold: put the remainder reports in the test set
list_tree_test=list_tree[3*k:]
list_tree_train=list_tree[:3*k]
else:
list_tree_train=list_tree[:i*k]+list_tree[(i+1)*k:]
list_tree_test=list_tree[i*k:(i+1)*k]
root=Element('radiology_reports')
root1=Element('radiology_reports')
for list_tree_elem in list_tree_train:
root.append(list_tree_elem)
print "length:",len(root)
for list_tree_elem in list_tree_test:
root1.append(list_tree_elem)
print "length:",len(root1)
tree=ElementTree(root)
tree1=ElementTree(root1)
dicListTrue={}
dicListPre={}
crfDic={}
train_test_onTrue("report")
#print crfDic
#print dicListPre
#for key in dicListPre.keys():
# print key,"\n",dicListPre[key]
dicKeyTrueCount={}
TruePreLabels=[]
cascadedOnPre=[]
for i in range(len(tokenListAll)):
TruePreLabels.append([])
cascadedOnPre.append([])
for j in range(len(tokenListAll[i])):
TruePreLabels[i].append(['','','',''])
cascadedOnPre[i].append(['',''])
#print len(tokenListAll[0])
#print TruePreLabels[0]
for i in range(len(tokenListAll)):
phrase=list(TruePreLabels[i])
mergingResults(i,0,len(tokenListAll[i]),"report",phrase,dicKeyTrueCount)
#print tokenListAll[i]
for j in range(len(TruePreLabels[i])):
TruePreLabels[i][j]=phrase[j]
#print TruePreLabels[i],"\n\n"
for i in range(len(tokenListAll)):
preLabels=list(cascadedOnPre[i])
phrase1=list(tokenListAll[i])
#print phrase1
test_onPredicted(0,len(tokenListAll[i]),phrase1,preLabels,"report")
for j in range(len(cascadedOnPre[i])):
cascadedOnPre[i][j]=preLabels[j]
#print cascadedOnPre[i]
trueList=[]
trueList1=[]
preList=[]
preList1=[]
lastLevelPreList=[]
cascPreList=[]
cascPreList1=[]
#level2
truePredictedList1=[]
predictPredictList1=[]
tokenFor2levelList1=[]
trueLabel_2labelsList1=[]
#level3
truetruePredictedList1=[]
predictpredictPredictList1=[]
tokenFor3levelList1=[]
trueLabel_3labelsList1=[]
for i in range(len(TruePreLabels)):
trueVal=[]
trueVal1=[]
preVal=[]
preVal1=[]
lastLevelPre=[]
cascPreVal=[]
cascPreVal1=[]
#level2
truePredicted1=[]
predictPredict1=[]
tokenFor2level=[]
trueLabel_2labels1=[]
#level3
truetruePredicted1=[]
predictpredictPredict1=[]
tokenFor3level=[]
trueLabel_3labels1=[]
#if i==20 or i==29 or i==33 or i==34:
# print i, "\n", tokenListAll[i]
for j in range(len(TruePreLabels[i])):
TruePreLabels[i][j][0]=TruePreLabels[i][j][0].strip('/')
TruePreLabels[i][j][1]=TruePreLabels[i][j][1].strip('/')
TruePreLabels[i][j][2]=TruePreLabels[i][j][2].strip('/')
TruePreLabels[i][j][3]=TruePreLabels[i][j][3].strip('/')
cascadedOnPre[i][j][0]=cascadedOnPre[i][j][0].strip('/')
cascadedOnPre[i][j][1]=cascadedOnPre[i][j][1].strip('/')
trueLabel=TruePreLabels[i][j][0].split('/')
truePredict=TruePreLabels[i][j][2].split('/')
cascPre=cascadedOnPre[i][j][0].split('/')
firstSecond_trueLabel=trueLabel[:len(trueLabel)-1]
if firstSecond_trueLabel!=[]:
lastLevelPreLabel='/'.join(TruePreLabels[i][j][0].split('/')[:len(TruePreLabels[i][j][0].split('/'))-1])+"/"+str(TruePreLabels[i][j][2].split('/')[len(TruePreLabels[i][j][2].split('/'))-1])
else:
lastLevelPreLabel=str(TruePreLabels[i][j][2].split('/')[len(TruePreLabels[i][j][2].split('/'))-1])
if len(trueLabel)>=2:
truePredicted=trueLabel[0]+"/"+truePredict[1]
truePredicted1.append(truePredicted)
if len(trueLabel)>=2 or len(cascPre)>=2:
predictPredict="/".join(cascPre[:2])
trueLabel_2labels="/".join(trueLabel[:2])
#if trueLabel_2labels=="negative_finding/O":
# print predictPredict
# print truePredicted
tokenFor2level.append(tokenListAll[i][j])
predictPredict1.append(predictPredict)
trueLabel_2labels1.append(trueLabel_2labels)
if len(trueLabel)==3:
#print trueLabel
truetruePredicted=trueLabel[0]+"/"+trueLabel[1]+"/"+truePredict[2]
predictpredictPredict="/".join(cascPre[0:3])
trueLabel_3labels="/".join(trueLabel[:3])
#print truetruePredicted, truepredictPredict, trueLabel_3labels
#if trueLabel_2labels=="negative_finding/O":
# print predictPredict
# print truePredicted
tokenFor3level.append(tokenListAll[i][j])
truetruePredicted1.append(truetruePredicted)
predictpredictPredict1.append(predictpredictPredict)
trueLabel_3labels1.append(trueLabel_3labels)
#print lastLevelPreLabel,TruePreLabels[i][j][0]
'''if TruePreLabels[i][j][0]!='O':
true1=TruePreLabels[i][j][0].split('/')
true2=TruePreLabels[i][j][1].split('/')
#if true1[len(true1)-1]=='O':
TruePreLabels[i][j][0]="/".join(true1[:len(true1)])
TruePreLabels[i][j][1]="/".join(true2[:len(true2)])
if TruePreLabels[i][j][2]!='O':
pre1=TruePreLabels[i][j][2].split('/')
pre2=TruePreLabels[i][j][3].split('/')
#if pre1[len(pre1)-1]=='O':
TruePreLabels[i][j][2]="/".join(pre1[:len(pre1)])
TruePreLabels[i][j][3]="/".join(pre2[:len(pre2)])
if cascadedOnPre[i][j][0]!='O':
cascPre1=cascadedOnPre[i][j][0].split('/')
cascPre2=cascadedOnPre[i][j][1].split('/')
#if cascPre1[len(cascPre1)-1]=='O':
cascadedOnPre[i][j][0]="/".join(cascPre1[:len(cascPre1)])
cascadedOnPre[i][j][1]="/".join(cascPre2[:len(cascPre2)])'''
trueVal.append(TruePreLabels[i][j][0])
trueVal1.append(TruePreLabels[i][j][1])
preVal.append(TruePreLabels[i][j][2])
preVal1.append(TruePreLabels[i][j][3])
lastLevelPre.append(lastLevelPreLabel)
cascPreVal.append(cascadedOnPre[i][j][0])
cascPreVal1.append(cascadedOnPre[i][j][1])
#print tokenListAll[i][j]
if tokenListAll[i][j] not in string.punctuation:
out1.write(tokenListAll[i][j].decode('utf-8')+"\t"+str(TruePreLabels[i][j][1])+"\t"+str(TruePreLabels[i][j][0])+"\t"+str(cascadedOnPre[i][j][0])+"\n")
trueList.append(trueVal)
trueList1.append(trueVal1)
preList.append(preVal)
preList1.append(preVal1)
lastLevelPreList.append(lastLevelPre)
cascPreList.append(cascPreVal)
cascPreList1.append(cascPreVal1)
truePredictedList1.append(truePredicted1)
predictPredictList1.append(predictPredict1)
trueLabel_2labelsList1.append(trueLabel_2labels1)
tokenFor2levelList1.append(tokenFor2level)
truetruePredictedList1.append(truetruePredicted1)
predictpredictPredictList1.append(predictpredictPredict1)
trueLabel_3labelsList1.append(trueLabel_3labels1)
tokenFor3levelList1.append(tokenFor3level)
#global labels predicted on true values
#CRF_measures_cascadedCRF.tokenLevel_measures(preList,trueList,tokenListAll)
#global labels predicted on predicted values--->cascaded prediction used
f1micro,dic_metric=CRF_measures_cascadedCRF.tokenLevel_measures(cascPreList,trueList,tokenListAll,label_dic_all,conf_mat_agg)
#global labels predicted on true values (comparison between true/true/predicted & cascaded predicted/predicted/predicted)
#CRF_measures_cascadedCRF.tokenLevel_measures(lastLevelPreList,trueList,tokenListAll)
#Level_2 on true (true/predict)
#CRF_measures_cascadedCRF.tokenLevel_measures(truePredictedList1,trueLabel_2labelsList1,tokenFor2levelList1,label_dic_2_true)
#level_2 on predict (predict/predict)
#CRF_measures_cascadedCRF.tokenLevel_measures(predictPredictList1,trueLabel_2labelsList1,tokenFor2levelList1,label_dic_2_pre)
#level_3 on true (true/true/predict)
#CRF_measures_cascadedCRF.tokenLevel_measures(truetruePredictedList1,trueLabel_3labelsList1,tokenFor3levelList1,label_dic_3_true)
#level_3 on predict (predict/predict/predict)
#CRF_measures_cascadedCRF.tokenLevel_measures(predictpredictPredictList1,trueLabel_3labelsList1,tokenFor3levelList1,label_dic_3_pre)
#print cascPreList1
#indivClassi_cascPredPerf("negative_finding",trueList1,cascPreList)
#X_1,tokenList_1=CRF_featureCreationTest(['Kleine','verkalking','ongewijzigd','rechts','caudal','Mediaal', 'caudaal', 'rechts','kleine','lage', 'radiopake','massa', 'met', 'een', 'doorsnede', 'van' '9','?','mm',','])
#X_1,tokenList_1=CRF_featureCreationTest(['Kleine','verkalking','ongewijzigd','rechts','caudal'])
#predicted_1=crfDic['report/positive_finding'].predict([X_1])
#print crfDic['report/positive_finding'].predict_marginals([X_1])
#print predicted_1
    if f1micro>best_f1micro:
        best_f1micro=f1micro
        bestcrf_model=crfDic

# persist only the best model (highest micro-F1 across the four folds)
pickle.dump(bestcrf_model,pickle_path)
pickle_path.close()
#np.set_printoptions(threshold=np.nan, suppress=True,linewidth=100)
#print conf_mat_agg
#label_dic1={}
#for key in label_dic_all.iterkeys():
# label_dic1[key]=conf_mat_agg[label_dic_all[key][2][0]]
#print label_dic1
#print crfDic
'''label_dic_abb={'O':'O','breast_composition':'BC','positive_finding/mass/location':'PF/MS/L','positive_finding/mass/size':'PF/MS/SI','positive_finding/mass/margin':'PF/MS/MA','positive_finding/mass/density':'PF/MS/DE','positive_finding/mass/associated_features':'PF/MS/AF','positive_finding/mass/shape':'PF/MS/SH','positive_finding/mass/O':'PF/MS/O','positive_finding/calcification/location':'PF/C/L',\
'positive_finding/calcification/size':'PF/C/SI','positive_finding/calcification/morphology':'PF/C/MO','positive_finding/calcification/distribution':'PF/C/DI','positive_finding/calcification/associated_features':'PF/C/AF','positive_finding/calcification/O':'PF/C/O','positive_finding/architectural_distortion/location':'PF/AD/L','positive_finding/architectural_distortion/associated_features':'PF/AD/AF',\
'positive_finding/architectural_distortion':'PF/AD/O','positive_finding/associated_features/location':'PF/AF/L','positive_finding/associated_features/O':'PF/AF/O','positive_finding/asymmetry/location':'PF/AS/L','positive_finding/asymmetry/size':'PF/AS/SI','positive_finding/asymmetry/associated_features':'PF/AS/AF','positive_finding/asymmetry/O':'PF/AS/O','negative_finding/mass/location':'NF/MS/L',\
'negative_finding/mass/margin':'NF/MS/MA','negative_finding/mass/O':'NF/MS/O','negative_finding/calcification/location':'NF/C/L','negative_finding/calcification/morphology':'NF/C/MO','negative_finding/calcification/distribution':'NF/C/DI','negative_finding/calcification/O':'NF/C/O','negative_finding/architectural_distortion/location':'NF/AD/L','negative_finding/architectural_distortion/O':'NF/AD/O',\
'negative_finding/associated_features/location':'NF/AF/L','negative_finding/associated_features/O':'NF/AF/O','negative_finding/asymmetry/location':'NF/AS/L','negative_finding/asymmetry/O':'NF/AS/O','negative_finding/location':'NF/L','negative_finding/O':'NF/O'}
label_dic1={}
for key in label_dic_all.iterkeys():
label_dic1[label_dic_abb[key]]=label_dic_all[key][2][0]
axis_labels=sorted(label_dic1,key=label_dic1.__getitem__)
conf_mat_agg=conf_mat_agg.astype(int)
#print conf_mat_agg
conf_mat_agg_norm=(np.zeros((34,34))).astype('float')
for i in range(len(conf_mat_agg)):
s=np.sum(conf_mat_agg[i,:])
for j in range(len(conf_mat_agg[i])):
conf_mat_agg_norm[i,j]=float(conf_mat_agg[i,j])/s
sns.set()
f=plt.figure(figsize=(8,5))
sns.heatmap(
yticklabels=axis_labels,
xticklabels=axis_labels,
data=conf_mat_agg_norm,
cmap='YlGnBu',
#annot=True,
#fmt="d",
linewidths=0.75)
#plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
f.savefig("ConfusionMatrixHeatmap_modelA.pdf",bbox_inches='tight')'''
#plt.show()'''
'''label_dic2_fscore={}
label_dic2_support={}
for key in label_dic1.iterkeys():
label_dic2_fscore[key]=float(sum(label_dic1[key][0]))/len(label_dic1[key][0])
label_dic2_support[key]=label_dic1[key][1][0]
print label_dic2_fscore
print label_dic2_support'''
'''label_dic_2_fscore={}
label_dic_2_support={}
for key in label_dic_2_pre.iterkeys():
label_dic_2_fscore[key]=float(sum(label_dic_2_pre[key][0]))/len(label_dic_2_pre[key][0])
label_dic_2_support[key]=label_dic_2_pre[key][1][0]
print label_dic_2_fscore
print label_dic_2_support'''
|
{"hexsha": "95e6e354bae89133176b1c806e6ec727054e2072", "size": 29097, "ext": "py", "lang": "Python", "max_stars_repo_path": "AutomaticStructuring/CRF Model A/CRF_advancedmodel1_onpredicted.py", "max_stars_repo_name": "ShreyasiPathak/AutomaticStructuringBreastCancerReports", "max_stars_repo_head_hexsha": "a7e109b515e99fdb1928bc558d7ffe5aa9b1a604", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-02-18T09:20:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-07T10:51:54.000Z", "max_issues_repo_path": "AutomaticStructuring/CRF Model A/CRF_advancedmodel1_onpredicted.py", "max_issues_repo_name": "ShreyasiPathak/AutomaticStructuringBreastCancerReports", "max_issues_repo_head_hexsha": "a7e109b515e99fdb1928bc558d7ffe5aa9b1a604", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AutomaticStructuring/CRF Model A/CRF_advancedmodel1_onpredicted.py", "max_forks_repo_name": "ShreyasiPathak/AutomaticStructuringBreastCancerReports", "max_forks_repo_head_hexsha": "a7e109b515e99fdb1928bc558d7ffe5aa9b1a604", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-09-10T10:55:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-18T14:22:58.000Z", "avg_line_length": 43.3636363636, "max_line_length": 419, "alphanum_fraction": 0.6198920851, "include": true, "reason": "import numpy", "num_tokens": 7638}
|
subroutine amrex_probinit (init,name,namlen,problo,probhi) bind(c)
use amrex_fort_module, only : rt => amrex_real
use probdata_module
implicit none
integer init, namlen
integer name(namlen)
real(rt) :: problo(3), probhi(3)
integer untin,i
namelist /fortin/ probtype, p_ambient, dens_ambient, exp_energy, &
r_init, nsub, &
denerr,dengrad,max_denerr_lev,max_dengrad_lev, &
presserr,pressgrad,max_presserr_lev,max_pressgrad_lev
!
! Build "probin" filename -- the name of file containing fortin namelist.
!
integer maxlen
parameter (maxlen=256)
character probin*(maxlen)
if (namlen .gt. maxlen) then
write(6,*) 'probin file name too long'
stop
end if
do i = 1, namlen
probin(i:i) = char(name(i))
end do
! set namelist defaults
p_ambient = 1.d-5 ! ambient pressure (in erg/cc)
dens_ambient = 1.d0 ! ambient density (in g/cc)
exp_energy = 1.d0 ! absolute energy of the explosion (in erg)
r_init = 0.05d0 ! initial radius of the explosion (in cm)
nsub = 4
denerr = 1.d20
dengrad = 1.d20
max_denerr_lev = -1
max_dengrad_lev = -1
presserr = 1.d20
pressgrad = 1.d20
max_presserr_lev = -1
max_pressgrad_lev = -1
! Read namelists
untin = 9
open(untin,file=probin(1:namlen),form='formatted',status='old')
read(untin,fortin)
close(unit=untin)
! set local variable defaults
center(1) = (problo(1)+probhi(1))/2.d0
center(2) = (problo(2)+probhi(2))/2.d0
center(3) = (problo(3)+probhi(3))/2.d0
end
! ::: -----------------------------------------------------------
! ::: This routine is called at problem setup time and is used
! ::: to initialize data on each grid.
! :::
! ::: NOTE: all arrays have one cell of ghost zones surrounding
! ::: the grid interior. Values in these cells need not
! ::: be set here.
! :::
! ::: INPUTS/OUTPUTS:
! :::
! ::: level => amr level of grid
! ::: time => time at which to init data
! ::: lo,hi => index limits of grid interior (cell centered)
! ::: nstate => number of state components. You should know
! ::: this already!
! ::: state <= Scalar array
! ::: delta => cell size
! ::: xlo,xhi => physical locations of lower left and upper
! ::: right hand corner of grid. (does not include
! ::: ghost region).
! ::: -----------------------------------------------------------
subroutine fort_initdata(level,time,lo,hi, &
ns, state ,s_l1,s_l2,s_l3,s_h1,s_h2,s_h3, &
nd, diag_eos,d_l1,d_l2,d_l3,d_h1,d_h2,d_h3, &
delta,xlo,xhi) &
bind(C, name="fort_initdata")
use amrex_fort_module, only : rt => amrex_real
use probdata_module
use bl_constants_module, only: M_PI, FOUR3RD
use atomic_rates_module, only: XHYDROGEN
use meth_params_module , only: NVAR, URHO, UMX, UMY, UMZ, UEDEN, UEINT, UFS, &
gamma_minus_1
implicit none
integer level, ns, nd
integer lo(3), hi(3)
integer s_l1,s_l2,s_l3,s_h1,s_h2,s_h3
integer d_l1,d_l2,d_l3,d_h1,d_h2,d_h3
real(rt) :: xlo(3), xhi(3), time, delta(3)
real(rt) :: state(s_l1:s_h1,s_l2:s_h2,s_l3:s_h3,ns)
real(rt) :: diag_eos(d_l1:d_h1,d_l2:d_h2,d_l3:d_h3,nd)
real(rt) :: xmin,ymin,zmin
real(rt) :: xx, yy, zz
real(rt) :: dist
real(rt) :: eint, p_zone
real(rt) :: vctr, p_exp
integer i,j,k, ii, jj, kk
integer :: npert, nambient
if (probtype .eq. 32) then
! set explosion pressure -- we will convert the point-explosion energy into
! a corresponding pressure distributed throughout the perturbed volume
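      ! (ideal gas: E_int = p*V/(gamma-1), so p_exp = (gamma-1)*exp_energy/vctr)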
vctr = M_PI*r_init**2
p_exp = gamma_minus_1*exp_energy/vctr
do k = lo(3), hi(3)
zmin = xlo(3) + delta(3)*dble(k-lo(3))
do j = lo(2), hi(2)
ymin = xlo(2) + delta(2)*dble(j-lo(2))
do i = lo(1), hi(1)
xmin = xlo(1) + delta(1)*dble(i-lo(1))
npert = 0
nambient = 0
do jj = 0, nsub-1
yy = ymin + (delta(2)/dble(nsub))*(jj + 0.5d0)
do ii = 0, nsub-1
xx = xmin + (delta(1)/dble(nsub))*(ii + 0.5d0)
dist = (center(1)-xx)**2 + (center(2)-yy)**2
if(dist <= r_init**2) then
npert = npert + 1
else
nambient = nambient + 1
endif
enddo
enddo
p_zone = (dble(npert)*p_exp + dble(nambient)*p_ambient) / &
(dble(npert) + dble(nambient))
eint = p_zone/gamma_minus_1
state(i,j,k,URHO) = dens_ambient
state(i,j,k,UMX) = 0.d0
state(i,j,k,UMY) = 0.d0
state(i,j,k,UMZ) = 0.d0
state(i,j,k,UEDEN) = eint + &
0.5d0*(state(i,j,k,UMX)**2/state(i,j,k,URHO) + &
state(i,j,k,UMY)**2/state(i,j,k,URHO) + &
state(i,j,k,UMZ)**2/state(i,j,k,URHO))
state(i,j,k,UEINT) = eint
enddo
enddo
enddo
else if (probtype .eq. 33) then
! set explosion pressure -- we will convert the point-explosion energy into
! a corresponding pressure distributed throughout the perturbed volume
vctr = FOUR3RD*M_PI*r_init**3
p_exp = gamma_minus_1*exp_energy/vctr
do k = lo(3), hi(3)
zmin = xlo(3) + delta(3)*dble(k-lo(3))
do j = lo(2), hi(2)
ymin = xlo(2) + delta(2)*dble(j-lo(2))
do i = lo(1), hi(1)
xmin = xlo(1) + delta(1)*dble(i-lo(1))
npert = 0
nambient = 0
do kk = 0, nsub-1
zz = zmin + (delta(3)/dble(nsub))*(kk + 0.5d0)
do jj = 0, nsub-1
yy = ymin + (delta(2)/dble(nsub))*(jj + 0.5d0)
do ii = 0, nsub-1
xx = xmin + (delta(1)/dble(nsub))*(ii + 0.5d0)
dist = (center(1)-xx)**2 + (center(2)-yy)**2 + (center(3)-zz)**2
if(dist <= r_init**2) then
npert = npert + 1
else
nambient = nambient + 1
endif
enddo
enddo
enddo
p_zone = (dble(npert)*p_exp + dble(nambient)*p_ambient)/ &
dble(nsub*nsub*nsub)
eint = p_zone/gamma_minus_1
state(i,j,k,URHO) = dens_ambient
state(i,j,k,UMX) = 0.d0
state(i,j,k,UMY) = 0.d0
state(i,j,k,UMZ) = 0.d0
state(i,j,k,UEDEN) = eint + &
0.5d0*(state(i,j,k,UMX)**2/state(i,j,k,URHO) + &
state(i,j,k,UMY)**2/state(i,j,k,URHO) + &
state(i,j,k,UMZ)**2/state(i,j,k,URHO))
state(i,j,k,UEINT) = eint
if (UFS .gt. -1) then
state(i,j,k,UFS ) = XHYDROGEN * state(i,j,k,URHO)
state(i,j,k,UFS+1) = (1.d0 - XHYDROGEN) * state(i,j,k,URHO)
end if
enddo
enddo
enddo
else
      call bl_error('Do not know this probtype in initdata')
end if
end subroutine fort_initdata
|
{"hexsha": "431a13cfadd55f341f7dd2b7ce857dc7b7600ad9", "size": 7888, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "Exec/HydroTests/Sedov/Prob_3d.f90", "max_stars_repo_name": "burlen/Nyx", "max_stars_repo_head_hexsha": "d31397361115bc9268da4e6addd3d3e77cc4798e", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Exec/HydroTests/Sedov/Prob_3d.f90", "max_issues_repo_name": "burlen/Nyx", "max_issues_repo_head_hexsha": "d31397361115bc9268da4e6addd3d3e77cc4798e", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Exec/HydroTests/Sedov/Prob_3d.f90", "max_forks_repo_name": "burlen/Nyx", "max_forks_repo_head_hexsha": "d31397361115bc9268da4e6addd3d3e77cc4798e", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8064516129, "max_line_length": 88, "alphanum_fraction": 0.4792089249, "num_tokens": 2349}
|
!
! CalculiX - A 3-dimensional finite element program
! Copyright (C) 1998-2020 Guido Dhondt
!
! This program is free software; you can redistribute it and/or
! modify it under the terms of the GNU General Public License as
! published by the Free Software Foundation(version 2);
!
!
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License
! along with this program; if not, write to the Free Software
! Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
!
subroutine mafillp(nef,lakonf,ipnei,neifa,neiel,vfa,area,
& advfa,xlet,cosa,volume,au,ad,jq,irow,ap,ielfa,ifabou,xle,
& b,xxn,neq,nzs,hfa,gradpel,bp,xxi,neij,
& xlen,cosb,nefa,nefb,iau6,xxicn,flux)
!
! filling the lhs and rhs to calculate p
!
implicit none
!
character*8 lakonf(*)
!
integer i,nef,indexf,ipnei(*),j,neifa(*),nefa,nefb,
& neiel(*),iel,ifa,irow(*),ielfa(4,*),compressible,
& ifabou(*),neq,jq(*),iel2,indexb,indexf2,
& j2,neij(*),nzs,k,iau6(6,*)
!
real*8 coef,vfa(0:7,*),volume(*),area(*),advfa(*),xlet(*),
& cosa(*),ad(*),au(*),xle(*),xxn(3,*),ap(*),b(*),cosb(*),
& hfa(3,*),gradpel(3,*),bp(*),xxi(3,*),xlen(*),bp_ifa,
& xxicn(3,*),flux(*)
!
do i=nefa,nefb
indexf=ipnei(i)
do j=1,ipnei(i+1)-indexf
!
! diffusion
!
indexf=indexf+1
ifa=neifa(indexf)
iel=neiel(indexf)
if(iel.ne.0) then
coef=vfa(5,ifa)*area(ifa)*advfa(ifa)/
& (xlet(indexf)*cosb(indexf))
ad(i)=ad(i)+coef
if(i.gt.iel) au(iau6(j,i))=au(iau6(j,i))-coef
else
if(ielfa(2,ifa).lt.0) then
indexb=-ielfa(2,ifa)
if(((ifabou(indexb+1).eq.0).or.
& (ifabou(indexb+2).eq.0).or.
& ( ifabou(indexb+3).eq.0)).and.
& (ifabou(indexb+4).ne.0)) then
!
! pressure given (only if not all velocity
! components are given)
!
coef=vfa(5,ifa)*area(ifa)*advfa(ifa)/
& (xle(indexf)*cosb(indexf))
ad(i)=ad(i)+coef
endif
endif
endif
!
! right hand side: sum of the flux
!
b(i)=b(i)-flux(indexf)
enddo
enddo
!
return
end
|
{"hexsha": "cc291b317d29d43b9c0a70d6538352fb8e79c51d", "size": 2730, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "ccx_prool/CalculiX/ccx_2.17/src/mafillp.f", "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ccx_prool/CalculiX/ccx_2.17/src/mafillp.f", "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ccx_prool/CalculiX/ccx_2.17/src/mafillp.f", "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.125, "max_line_length": 71, "alphanum_fraction": 0.5285714286, "num_tokens": 883}
|
"""
Usage: python schedule.py < input
See README.md for input format
"""
from itertools import islice
import networkx as nx
import numpy as np
import itertools
import math
import sys
import re
from absl import app
from .. import log
def _main(_argv):
log.init()
reading_tasks = False # else reading edges
task_number = 0
names = []
hours = []
failure_rates = []
edges = []
for line in sys.stdin.readlines():
if not line.strip():
continue
ls = line.strip()
if ls == 'tasks':
reading_tasks = True
continue
elif ls == 'edges':
reading_tasks = False
continue
if reading_tasks:
            m = re.fullmatch(r'(\d+)\. ([^,]*),(\s?\d*)\s*(hour|day)s?\s*,\s*(\d+)', line.strip())
if not m:
log.info('line {} does not match input regex', task_number)
sys.exit(1)
task_number_given = int(m.group(1).strip())
assert task_number_given == task_number
names.append(m.group(2).strip())
is_hours = m.group(4) == 'hour'
hours.append(float(m.group(3)) * (1 if is_hours else 24))
failure_rates.append((100 - float(m.group(5))) / 100)
task_number += 1
continue
dag_splits = line.split('->')
# todo input validation on format, numbers
for froms, tos in zip(dag_splits, dag_splits[1:]):
froms = froms.split(',')
tos = tos.split(',')
for x, y in itertools.product(froms, tos):
edges.append((int(x), int(y)))
log.info('read {} tasks {} edges', len(names), len(edges))
G = nx.DiGraph()
G.add_nodes_from(range(len(names)))
G.add_edges_from(edges)
if not nx.algorithms.dag.is_directed_acyclic_graph(G):
log.info('graph is not acyclic')
sys.exit(1)
G = nx.algorithms.dag.transitive_reduction(G)
best_perm = None
max_expected_saved = -math.inf
log.info('incremental best time saved, results printed below')
# is this problem submodular? maybe that gives an approx solution
# for larger problem instances
#
# else maybe it's possible to use dynamic programming here
#
# inner loop optimization:
# for similar perms, want to save state and just compute deltas
#
# input optimization: for nodes who have equal ordering in the
# DAG partial order (stricter condition than not-orderable!)
# can simply sort them by "density": failure rate * time,
# don't need to investigate those permutations further
# by collapsing into a single node.
relabels = [np.arange(len(names), dtype=int)]
inverses = relabels[:]
for _ in range(10):
orig = np.arange(len(names), dtype=int)
np.random.shuffle(orig)
inv = np.argsort(orig)
relabels.append(orig)
inverses.append(inv)
attempts_per_seed = 100 * 100
for mapping, inverse in zip(relabels, inverses):
mapping = {i: x for i, x in enumerate(mapping)}
H = nx.relabel_nodes(G, mapping)
it = nx.algorithms.dag.all_topological_sorts(H)
it = islice(it, attempts_per_seed)
for perm in it:
perm = inverse[perm]
hours_remaining = np.asarray(hours)[np.asarray(perm)]
hours_remaining = np.roll(np.cumsum(hours_remaining[::-1]), -1)
hours_remaining[-1] = 0
pfailure = np.asarray(failure_rates)[np.asarray(perm)]
psuccess = np.roll(1 - pfailure, 1)
psuccess[0] = 1
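            # Expected hours saved by learning of failure early: the task at
            # position j fails with probability pfailure[j], is reached only
            # if every earlier task succeeds (p_get_to_index), and on failure
            # the hours_remaining[j] scheduled after it are skipped.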
saved_at_index = pfailure * hours_remaining
p_get_to_index = np.cumprod(psuccess)
expected_saved = (p_get_to_index * saved_at_index).sum()
if expected_saved > max_expected_saved:
max_expected_saved = expected_saved
best_perm = perm
print('{:8.0f} hrs {}'.format(expected_saved, perm))
log.info('found the best perm {}', best_perm)
log.info('expected hours saved {}', max_expected_saved)
print()
print('tasks, in execution order:')
for i in best_perm:
print(' ', names[i])
if __name__ == "__main__":
app.run(_main)
|
{"hexsha": "82a1cfc67af8789bdff48e742c70d98aeeb876ce", "size": 4261, "ext": "py", "lang": "Python", "max_stars_repo_path": "fdd/main/schedule.py", "max_stars_repo_name": "vlad17/fdd", "max_stars_repo_head_hexsha": "fadf2a1595a31bf5eea4d750a986b3ffee0bfc06", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "fdd/main/schedule.py", "max_issues_repo_name": "vlad17/fdd", "max_issues_repo_head_hexsha": "fadf2a1595a31bf5eea4d750a986b3ffee0bfc06", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "fdd/main/schedule.py", "max_forks_repo_name": "vlad17/fdd", "max_forks_repo_head_hexsha": "fadf2a1595a31bf5eea4d750a986b3ffee0bfc06", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0070422535, "max_line_length": 98, "alphanum_fraction": 0.5935226473, "include": true, "reason": "import numpy,import networkx", "num_tokens": 1020}
|
"""
Check captioner says something about background for waterbirds.
python -m explainx.waterbird_check
"""
from typing import List
import fire
import numpy as np
import torch
import tqdm
from swissknife import utils
from .common import make_image2text_model, make_vqa_model
from .misc import load_image_tensor
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dump_dir = "/nlp/scr/lxuechen/explainx/waterbirds"
waterbird_data_path = "/home/lxuechen_stanford_edu/data/waterbird_complete95_forest2water2"
class Background(metaclass=utils.ContainerMeta):
water = "water"
land = "land"
@staticmethod
def label2background(label):
if isinstance(label, str):
label = int(label)
if label == 0:
return Background.land
else:
return Background.water
@torch.no_grad()
def get_captions(
model: torch.nn.Module, image_path: str, image_size: int, sample=False,
) -> List[str]:
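    """Generate caption candidates for a single image via beam search."""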
image = load_image_tensor(
image_size=image_size, device=device, image_path=image_path
)
return model.eval().generate(
image, sample=sample, num_beams=5, max_length=50, min_length=3,
)
def caption(
sample=False,
num_instances=500, # How many instances to label.
image_size=384,
beam_search_mode="regular",
):
"""Caption single images."""
model = make_image2text_model(image_size=image_size, beam_search_mode=beam_search_mode).to(device).eval()
metadata_path = utils.join(waterbird_data_path, "metadata.csv")
metadata = utils.read_csv(metadata_path, delimiter=",")
print('metadata row keys:')
print(metadata["rows"][0].keys())
results = []
    # background: place==1 is water, place==0 is land
for i, row in tqdm.tqdm(enumerate(metadata["rows"])):
if i >= num_instances:
break
img_filename = row["img_filename"]
image_path = utils.join(waterbird_data_path, img_filename)
label = row["place"]
background = Background.label2background(label)
captions = get_captions(
model=model, image_path=image_path, sample=sample, image_size=image_size,
)
results.append(
dict(
img_filename=img_filename,
background=background,
caption=captions[0]
)
)
print(f'background label: {label}')
print(f'background: {background}')
print(f'captions: {captions}')
print('---')
utils.jdump(results, utils.join(dump_dir, 'caption_check.json'))
@torch.no_grad()
def get_answer(
model: torch.nn.Module, image_path: str, question: str, image_size: int
) -> List[str]:
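    """Answer a free-form question about a single image (VQA inference)."""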
image = load_image_tensor(image_size=image_size, device=device, image_path=image_path)
return model.eval()(image, question, train=False, inference='generate')
def vqa(
image_size=480,
num_instances=500, # How many instances to label.
):
"""Check vqa performance; nothing contrastive."""
model = make_vqa_model(image_size=image_size).to(device).eval()
metadata_path = utils.join(waterbird_data_path, "metadata.csv")
metadata = utils.read_csv(metadata_path, delimiter=",")
print('metadata row keys:')
print(metadata["rows"][0].keys())
question = "is the bird on water or land?"
corrects = []
results = []
    # background: place==1 is water, place==0 is land
for i, row in tqdm.tqdm(enumerate(metadata["rows"])):
if i >= num_instances:
break
img_filename = row["img_filename"]
image_path = utils.join(waterbird_data_path, img_filename)
label = row["place"]
background = Background.label2background(label)
answers = get_answer(
model=model, image_path=image_path, question=question, image_size=image_size
)
top_answer = answers[0]
results.append(
dict(
img_filename=img_filename,
background=background,
answer=top_answer
)
)
correct = int(background.strip() == top_answer.strip())
corrects.append(correct)
print(f'background label: {label}')
print(f'background: {background}')
print(f'answers: {answers}')
print(f'correct? {correct}')
print('---')
utils.jdump(results, utils.join(dump_dir, 'vqa_check.json'))
utils.jdump(
dict(accuracy=np.mean(corrects)),
utils.join(dump_dir, 'vqa_report.json')
)
@torch.no_grad()
def consensus(
num_water_images=10,
num_land_images=20,
image_size=384,
dump_file: str = 'caps-weights.json',
contrastive_mode: str = "subtraction", # one of 'subtraction' 'marginalization'
average_consensus=True,
num_beams=20,
max_length=50,
min_length=3,
num_em_rounds=5,
num_clusters=3,
water_first=True,
beam_search_mode="contrastive",
verbose=True,
):
"""Caption group of images potentially with many negatives.
Give some images of waterbird on water vs land,
see if it's possible for the model to generate the difference.
"""
print(dump_dir, dump_file)
print(num_water_images, num_land_images)
model = make_image2text_model(image_size=image_size, beam_search_mode=beam_search_mode).to(device).eval()
metadata_path = utils.join(waterbird_data_path, "metadata.csv")
metadata = utils.read_csv(metadata_path, delimiter=",")
rows = metadata["rows"]
water_images = []
land_images = []
for i, row in enumerate(rows):
if len(water_images) >= num_water_images and len(land_images) >= num_land_images:
break
y = int(row["y"])
img_filename = row["img_filename"]
background = Background.label2background(row["place"])
image_path = utils.join(waterbird_data_path, img_filename)
image = load_image_tensor(image_size=image_size, device=device, image_path=image_path)
if y == 0: # Only take images with label == 1!
continue
if background == Background.water:
if len(water_images) >= num_water_images:
continue
else:
water_images.append(image)
if background == Background.land:
if len(land_images) >= num_land_images:
continue
else:
land_images.append(image)
if water_first:
group1, group2 = water_images, land_images
else:
group2, group1 = water_images, land_images
beam_search_kwargs = dict(
sample=False,
num_beams=num_beams,
max_length=max_length,
min_length=min_length,
num_em_rounds=num_em_rounds,
num_clusters=num_clusters,
contrastive_mode=contrastive_mode,
average_consensus=average_consensus,
verbose=verbose,
)
contrastive_weights = np.concatenate(
[np.linspace(0.0, 0.9, num=10), np.linspace(0.92, 1, num=5)]
).tolist() # Serializable.
pairs = []
for contrastive_weight in tqdm.tqdm(contrastive_weights):
cap = model.generate(
images=group1, images2=group2,
contrastive_weight=contrastive_weight,
**beam_search_kwargs
)
pairs.append((contrastive_weight, cap))
print(f"contrastive_weight: {contrastive_weight}, cap: {cap}")
dump = dict(pairs=pairs)
utils.jdump(dump, utils.join(dump_dir, dump_file), default=str)
model = make_image2text_model(image_size=image_size, beam_search_mode='contrastive').to(device).eval()
captions = model.generate(
images=group1,
**beam_search_kwargs,
)
print('caption with only positives')
print(f"{captions}")
def main(task="consensus", **kwargs):
if task == "caption":
caption(**kwargs)
elif task == "vqa":
vqa(**kwargs)
elif task == "consensus":
consensus(**kwargs)
if __name__ == "__main__":
fire.Fire(main)
|
{"hexsha": "0799c2afde1fafa7ab79aae47f3cdb1fa13d0e51", "size": 7976, "ext": "py", "lang": "Python", "max_stars_repo_path": "experiments/explainx/waterbird_check.py", "max_stars_repo_name": "lxuechen/swissknife", "max_stars_repo_head_hexsha": "43dbd36f1e998ebe29c0b85fafd0de765dfb5de8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-25T00:00:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T00:00:30.000Z", "max_issues_repo_path": "experiments/explainx/waterbird_check.py", "max_issues_repo_name": "lxuechen/swissknife", "max_issues_repo_head_hexsha": "43dbd36f1e998ebe29c0b85fafd0de765dfb5de8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "experiments/explainx/waterbird_check.py", "max_forks_repo_name": "lxuechen/swissknife", "max_forks_repo_head_hexsha": "43dbd36f1e998ebe29c0b85fafd0de765dfb5de8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4015748031, "max_line_length": 109, "alphanum_fraction": 0.6458124373, "include": true, "reason": "import numpy", "num_tokens": 1855}
|
subroutine dump(dumpfile)
! Write out a raw binary file containing all variables needed to continue computation
!-------------------------------------------------------------------------------------
! GLOBALS
use global
use zone
IMPLICIT NONE
! LOCALS
CHARACTER(LEN=3) :: dumpfile
CHARACTER(LEN=1) :: sf1,sf2,char
CHARACTER(LEN=50):: filename
INTEGER :: isf1, isf2
!----------------------------------------------------------------------
filename = 'output/' // trim(prefix) // dumpfile
open(unit=4,file=filename,status='unknown',form='unformatted')
write(4) zro,zpr,zux,zuy,zuz,zfl,zxa,zxc,zdx,zya,zyc,zdy,zza,zzc,zdz, &
time,dt,timem,timep,svel,gam,gamm,uinflo,vinflo,winflo, &
dinflo,pinflo,einflo,uotflo,votflo,wotflo,dotflo,potflo,eotflo, &
ncycle,ncycp,ncycm,ncycd,ngeomx,ngeomy,ngeomz, &
nleftx,nlefty,nleftz,nrightx,nrighty,nrightz,nfile
close(4)
write(8,*) 'Dumped ', trim(prefix) // dumpfile,' at cycle ', ncycle
! Increment file name by one letter in the suffix
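!   e.g. 'daa' -> 'dab' -> ... -> 'daz' -> 'dba'; the last letter wraps from
!   'z' (ichar 122) back to 'a' (ichar 97, hence the -25) while the middle
!   letter advances by one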
if (dumpfile(3:3) == 'z' .or. dumpfile(3:3) == 'Z') then
sf1 = dumpfile(2:2)
isf1 = ichar(sf1)
sf2 = dumpfile(3:3)
isf2 = ichar(sf2)
isf1 = isf1 + 1
isf2 = isf2 - 25
dumpfile(2:2) = char(isf1)
dumpfile(3:3) = char(isf2)
else
sf2 = dumpfile(3:3)
isf2 = ichar(sf2)
isf2 = isf2 + 1
dumpfile(3:3) = char(isf2)
endif
return
end
subroutine undump(dumpfile)
! Read in a raw binary file containing all variables needed to continue computation
!-------------------------------------------------------------------------------------
! GLOBALS
use global
use zone
IMPLICIT NONE
! LOCALS
CHARACTER(LEN=8) :: todayis
CHARACTER(LEN=3) :: dumpfile
CHARACTER(LEN=1) :: sf1,sf2,char
CHARACTER(LEN=50):: filename
INTEGER :: isf1, isf2
!----------------------------------------------------------------------
filename = 'output/' // trim(prefix) // dumpfile
call date_and_time(todayis)
write(8,*) 'Restarting from ',trim(prefix)//dumpfile, ' on ',todayis(5:6),' / ',todayis(7:8),' / ',todayis(1:4)
open(unit=4,file=filename,status='old',form='unformatted')
read(4) zro,zpr,zux,zuy,zuz,zfl,zxa,zxc,zdx,zya,zyc,zdy,zza,zzc,zdz, &
time,dt,timem,timep,svel,gam,gamm,uinflo,vinflo,winflo, &
dinflo,pinflo,einflo,uotflo,votflo,wotflo,dotflo,potflo,eotflo, &
ncycle,ncycp,ncycm,ncycd,ngeomx,ngeomy,ngeomz, &
nleftx,nlefty,nleftz,nrightx,nrighty,nrightz,nfile
close(4)
if(dumpfile(3:3) == 'z' .or. dumpfile(3:3) == 'Z') then ! Increment dump filename
sf1 = dumpfile(2:2)
isf1 = ichar(sf1) + 1
sf2 = dumpfile(3:3)
isf2 = ichar(sf2) - 25
dumpfile(2:2) = char(isf1)
dumpfile(3:3) = char(isf2)
else
sf2 = dumpfile(3:3)
isf2 = ichar(sf2) + 1
dumpfile(3:3) = char(isf2)
endif
return
end
|
{"hexsha": "1b4ce5bf14572f23bdfc5aad639013172efa8977", "size": 2878, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "VH1/src/Patch/f2py/dump.f90", "max_stars_repo_name": "samgeen/Weltgeist", "max_stars_repo_head_hexsha": "c7d52e879bb3473cecbb06651b5e76dac3020da6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "VH1/src/Patch/f2py/dump.f90", "max_issues_repo_name": "samgeen/Weltgeist", "max_issues_repo_head_hexsha": "c7d52e879bb3473cecbb06651b5e76dac3020da6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "VH1/src/Patch/f2py/dump.f90", "max_forks_repo_name": "samgeen/Weltgeist", "max_forks_repo_head_hexsha": "c7d52e879bb3473cecbb06651b5e76dac3020da6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.495049505, "max_line_length": 111, "alphanum_fraction": 0.5792216817, "num_tokens": 975}
|
[STATEMENT]
lemma finite_Fvars_fm[simp]:
fixes A :: fm
shows "finite (Fvars A)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. finite (Fvars A)
[PROOF STEP]
by (induct A rule: fm.induct) auto
|
{"llama_tokens": 85, "file": "Goedel_HFSet_Semanticless_Instance", "length": 1}
|
from numpy.random import RandomState
import cv2
# import imageio
def test_bboxes():
PRNG = RandomState()
PRNG2 = RandomState()
if args.seed > 0:
PRNG.seed(args.seed)
PRNG2.seed(args.seed)
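    # Two independent streams: PRNG drives the random augmentations,
    # PRNG2 picks which dataset indices get displayed.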
transform = Compose([
[ColorJitter(prob=0.5)], # or write [ColorJitter(), None]
BoxesToCoords(),
HorizontalFlip(),
Expand((1, 4), prob=0.5),
ObjectRandomCrop(),
Resize(300),
CoordsToBoxes(),
#[SubtractMean(mean=VOC.MEAN)],
],
PRNG,
mode=None,
fillval=VOC.MEAN,
outside_points='clamp')
viz = Viz()
voc_dataset = VOCDetection(
root=args.root,
image_set=[('2007', 'trainval')],
keep_difficult=True,
transform=transform)
results = []
count = 0
i = PRNG2.choice(len(voc_dataset))
for _ in range(100):
img, boxes, labels = voc_dataset[i]
if len(labels) == 0:
continue
img = viz.draw_bbox(img, boxes, labels, True)
results.append(img)
cv2.imshow('0', img[:, :, ::-1])
c = cv2.waitKey(500)
if c == 27 or c == ord('q'): # ESC / 'q'
break
elif c == ord('c') or count >= 5:
count = 0
i = PRNG2.choice(len(voc_dataset))
count += 1
# imageio.mimsave('bboxes.gif', results, duration=0.5)
if __name__ == '__main__':
from transforms import *
from pascal_voc import VOC, VOCDetection, Viz
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--root', type=str, help='voc dataset root path', default='path/to/your/VOCdevkit')
parser.add_argument('--seed', type=int, help='random seed', default=0)
args = parser.parse_args()
test_bboxes()
|
{"hexsha": "a76c509cb3e4c9bf356455dbae4fc80d1f30709c", "size": 1909, "ext": "py", "lang": "Python", "max_stars_repo_path": "test_bounding_boxes.py", "max_stars_repo_name": "uoip/transforms", "max_stars_repo_head_hexsha": "80e00bc9f1a789c71d9da4efdde789b7526a6554", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 30, "max_stars_repo_stars_event_min_datetime": "2018-02-19T18:32:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-23T20:24:02.000Z", "max_issues_repo_path": "test_bounding_boxes.py", "max_issues_repo_name": "uoip/transforms", "max_issues_repo_head_hexsha": "80e00bc9f1a789c71d9da4efdde789b7526a6554", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test_bounding_boxes.py", "max_forks_repo_name": "uoip/transforms", "max_forks_repo_head_hexsha": "80e00bc9f1a789c71d9da4efdde789b7526a6554", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-01-26T10:19:56.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-18T03:01:02.000Z", "avg_line_length": 29.3692307692, "max_line_length": 107, "alphanum_fraction": 0.5254059717, "include": true, "reason": "from numpy", "num_tokens": 484}
|
import data.real.irrational
import topology.basic
import algebra.order.floor
--OUTPUT 1
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
begin
--Let $\alpha$ be an irrational number. Then for distinct $i, j \in \mathbb{Z}$, we must have $\{i \alpha\} \neq\{j \alpha\}$. If this were not true, then
have h1 : ∀ i j : ℤ, i ≠ j → int.fract (α * ↑i) ≠ int.fract (α * ↑j), from by {
assume i j : ℤ,
assume h2 : i ≠ j,
assume h3 : int.fract (α * ↑i) = int.fract (α * ↑j),
    -- The generated proof term degenerated here into an unbounded repetition of
    -- `int.floor_lt_iff.mpr (lt_of_le_of_lt (int.coe_nat_le_coe_nat_of_le (int.floor_nonneg (α * ↑i))) ...)`
    -- and was truncated mid-term; the unfinished steps are collapsed to `sorry`.
    have h4 : α * ↑i - ↑(int.floor (α * ↑i)) = int.fract (α * ↑i), from sorry,
    sorry, },
  sorry
end --Needs more than 2000 tokens!
/- FEW SHOT PROMPTS TO CODEX(START)
/--`theorem`
Power Set is Closed under Intersection
Let $S$ be a set.
Let $\powerset S$ be the power set of $S$.
Then:
:$\forall A, B \in \powerset S: A \cap B \in \powerset S$
`proof`
Let $A, B \in \powerset S$.
Then by the definition of power set, $A \subseteq S$ and $B \subseteq S$.
From Intersection is Subset we have that $A \cap B \subseteq A$.
It follows from Subset Relation is Transitive that $A \cap B \subseteq S$.
Thus $A \cap B \in \powerset S$ and closure is proved.
{{qed}}
-/
theorem power_set_intersection_closed {α : Type*} (S : set α) : ∀ A B ∈ 𝒫 S, (A ∩ B) ∈ 𝒫 S :=
begin
-- $A$ and $B$ are sets. $A$ and $B$ belong to power set of $S$
assume (A : set α) (hA : A ∈ 𝒫 S) (B : set α) (hB : B ∈ 𝒫 S),
-- Then $A ⊆ S$ and $B ⊆ S$, by power set definition
have h1 : (A ⊆ S) ∧ (B ⊆ S), from by {split,apply set.subset_of_mem_powerset,exact hA,apply set.subset_of_mem_powerset,exact hB},
-- Then $(A ∩ B) ⊆ A$, by intersection of set is a subset
have h2 : (A ∩ B) ⊆ A, from by apply set.inter_subset_left,
-- Then $(A ∩ B) ⊆ S$, by subset relation is transitive
have h3 : (A ∩ B) ⊆ S, from by {apply set.subset.trans h2 h1.left},
-- Hence $(A ∩ B) ∈ 𝒫 S$, by power set definition
show (A ∩ B) ∈ 𝒫 S, from by {apply set.mem_powerset h3},
end
/--`theorem`
Square of Sum
:$\forall x, y \in \R: \paren {x + y}^2 = x^2 + 2 x y + y^2$
`proof`
Follows from the distribution of multiplication over addition:
{{begin-eqn}}
{{eqn | l = \left({x + y}\right)^2
| r = \left({x + y}\right) \cdot \left({x + y}\right)
}}
{{eqn | r = x \cdot \left({x + y}\right) + y \cdot \left({x + y}\right)
| c = Real Multiplication Distributes over Addition
}}
{{eqn | r = x \cdot x + x \cdot y + y \cdot x + y \cdot y
| c = Real Multiplication Distributes over Addition
}}
{{eqn | r = x^2 + 2xy + y^2
| c =
}}
{{end-eqn}}
{{qed}}
-/
theorem square_of_sum (x y : ℝ) : (x + y)^2 = (x^2 + 2*x*y + y^2) :=
begin
-- expand the power
calc (x + y)^2 = (x+y)*(x+y) : by rw sq
-- distributive property of multiplication over addition gives:
... = x*(x+y) + y*(x+y) : by rw add_mul
-- applying the above property further gives:
... = x*x + x*y + y*x + y*y : by {rw [mul_comm x (x+y),mul_comm y (x+y)], rw [add_mul,add_mul], ring}
-- rearranging the terms using commutativity and adding gives:
... = x^2 + 2*x*y + y^2 : by {repeat {rw ← sq}, rw mul_comm y x, ring}
end
/--`theorem`
Identity of Group is Unique
Let $\struct {G, \circ}$ be a group. Then there is a unique identity element $e \in G$.
`proof`
From Group has Latin Square Property, there exists a unique $x \in G$ such that:
:$a x = b$
and there exists a unique $y \in G$ such that:
:$y a = b$
Setting $b = a$, this becomes:
There exists a unique $x \in G$ such that:
:$a x = a$
and there exists a unique $y \in G$ such that:
:$y a = a$
These $x$ and $y$ are both $e$, by definition of identity element.
{{qed}}
-/
theorem group_identity_unique {G : Type*} [group G] : ∃! e : G, ∀ a : G, e * a = a ∧ a * e = a :=
begin
-- Group has Latin Square Property
have h1 : ∀ a b : G, ∃! x : G, a * x = b, from by {
assume a b : G, use a⁻¹ * b, obviously, },
have h2 : ∀ a b : G, ∃! y : G, y * a = b, from by {
assume a b : G, use b * a⁻¹, obviously, },
-- Setting $b = a$, this becomes:
have h3 : ∀ a : G, ∃! x : G, a * x = a, from
assume a : G, h1 a a,
have h4 : ∀ a : G, ∃! y : G, y * a = a, from
assume a : G, h2 a a,
-- These $x$ and $y$ are both $(1 : G)$, by definition of identity element
have h5 : ∀ a : G, classical.some (h3 a).exists = (1 : G), from assume a :G,
exists_unique.unique (h3 a) (classical.some_spec (exists_unique.exists (h3 a)))
(mul_one a),
have h6 : ∀ a : G, classical.some (h4 a).exists = (1 : G), from assume a : G,
exists_unique.unique (h4 a) (classical.some_spec (exists_unique.exists (h4 a))) (one_mul a),
show ∃! e : G, ∀ a : G, e * a = a ∧ a * e = a, from by {
use (1 : G),
have h7 : ∀ e : G, (∀ a : G, e * a = a ∧ a * e = a) → e = 1, from by {
assume (e : G) (hident : ∀ a : G, e * a = a ∧ a * e = a),
have h8 : ∀ a : G, e = classical.some (h3 a).exists, from assume (a : G),
exists_unique.unique (h3 a) (hident a).right
(classical.some_spec (exists_unique.exists (h3 a))),
have h9 : ∀ a : G, e = classical.some (h4 a).exists, from assume (a : G),
exists_unique.unique (h4 a) (hident a).left
(classical.some_spec (exists_unique.exists (h4 a))),
show e = (1 : G), from eq.trans (h9 e) (h6 _),
},
exact ⟨by obviously, h7⟩,
}
end
/--`theorem`
Squeeze Theorem for Real Numbers
Let $\sequence {x_n}$, $\sequence {y_n}$ and $\sequence {z_n}$ be sequences in $\R$.
Let $\sequence {y_n}$ and $\sequence {z_n}$ both be convergent to the following limit:
:$\ds \lim_{n \mathop \to \infty} y_n = l, \lim_{n \mathop \to \infty} z_n = l$
Suppose that:
:$\forall n \in \N: y_n \le x_n \le z_n$
Then:
:$x_n \to l$ as $n \to \infty$
that is:
:$\ds \lim_{n \mathop \to \infty} x_n = l$
`proof`
From Negative of Absolute Value:
:$\size {x - l} < \epsilon \iff l - \epsilon < x < l + \epsilon$
Let $\epsilon > 0$.
We need to prove that:
:$\exists N: \forall n > N: \size {x_n - l} < \epsilon$
As $\ds \lim_{n \mathop \to \infty} y_n = l$ we know that:
:$\exists N_1: \forall n > N_1: \size {y_n - l} < \epsilon$
As $\ds \lim_{n \mathop \to \infty} z_n = l$ we know that:
:$\exists N_2: \forall n > N_2: \size {z_n - l} < \epsilon$
Let $N = \max \set {N_1, N_2}$.
Then if $n > N$, it follows that $n > N_1$ and $n > N_2$.
So:
:$\forall n > N: l - \epsilon < y_n < l + \epsilon$
:$\forall n > N: l - \epsilon < z_n < l + \epsilon$
But:
:$\forall n \in \N: y_n \le x_n \le z_n$
So:
:$\forall n > N: l - \epsilon < y_n \le x_n \le z_n < l + \epsilon$
and so:
:$\forall n > N: l - \epsilon < x_n < l + \epsilon$
So:
:$\forall n > N: \size {x_n - l} < \epsilon$
Hence the result.
{{qed}}
-/
theorem squeeze_theorem_real_numbers (x y z : ℕ → ℝ) (l : ℝ) :
let seq_limit : (ℕ → ℝ) → ℝ → Prop := λ (u : ℕ → ℝ) (l : ℝ), ∀ ε > 0, ∃ N, ∀ n > N, |u n - l| < ε in
seq_limit y l → seq_limit z l → (∀ n : ℕ, (y n) ≤ (x n) ∧ (x n) ≤ (z n)) → seq_limit x l :=
begin
assume seq_limit (h2 : seq_limit y l) (h3 : seq_limit z l) (h4 : ∀ (n : ℕ), y n ≤ x n ∧ x n ≤ z n) (ε),
--From Negative of Absolute Value: $\size {x - l} < \epsilon \iff l - \epsilon < x < l + \epsilon$
have h5 : ∀ x, |x - l| < ε ↔ (((l - ε) < x) ∧ (x < (l + ε))),
from by
{
intro x0,
have h6 : |x0 - l| < ε ↔ ((x0 - l) < ε) ∧ ((l - x0) < ε),
from abs_sub_lt_iff, rw h6,
split,
rintro ⟨ S_1, S_2 ⟩,
split; linarith,
rintro ⟨ S_3, S_4 ⟩,
split; linarith,
},
--Let $\epsilon > 0$.
assume (h7 : ε > 0),
--As $\ds \lim_{n \mathop \to \infty} y_n = l$ we know that $\exists N_1: \forall n > N_1: \size {y_n - l} < \epsilon$
cases h2 ε h7 with N1 h8,
--As $\ds \lim_{n \mathop \to \infty} z_n = l$ we know that $\exists N_2: \forall n > N_2: \size {z_n - l} < \epsilon$
cases h3 ε h7 with N2 h9,
--Let $N = \max \set {N_1, N_2}$.
let N := max N1 N2,
use N,
--Then if $n > N$, it follows that $n > N_1$ and $n > N_2$.
have h10 : ∀ n > N, n > N1 ∧ n > N2 := by {
assume n h,
split,
exact lt_of_le_of_lt (le_max_left N1 N2) h,
exact lt_of_le_of_lt (le_max_right N1 N2) h,
},
--$\forall n > N: l - \epsilon < y_n < l + \epsilon$
--$\forall n > N: l - \epsilon < z_n < l + \epsilon$
--$\forall n \in \N: y_n \le x_n \le z_n$
--So $\forall n > N: l - \epsilon < y_n \le x_n \le z_n < l + \epsilon$
have h11 : ∀ n > N, (((l - ε) < (y n)) ∧ ((y n) ≤ (x n))) ∧ (((x n) ≤ (z n)) ∧ ((z n) < l+ε)),
from by {
intros n h12,
split,
{
have h13 := (h8 n (h10 n h12).left), rw h5 (y n) at h13,
split,
exact h13.left,
exact (h4 n).left,
},
{
have h14 := (h9 n (h10 n h12).right),rw h5 (z n) at h14,
split,
exact (h4 n).right,
exact h14.right,
},
},
--$\forall n > N: l - \epsilon < x_n < l + \epsilon$
have h15 : ∀ n > N, ((l - ε) < (x n)) ∧ ((x n) < (l+ε)),
from by {
intros n1 h16, cases (h11 n1 h16);
split; linarith,
},
--So $\forall n > N: \size {x_n - l} < \epsilon$
--Hence the result
show ∀ (n : ℕ), n > N → |x n - l| < ε,
from by {
intros n h17,
cases h5 (x n) with h18 h19,
apply h19, exact h15 n h17,
},
end
/--`theorem`
Density of irrational orbit
The fractional parts of the integer multiples of an irrational number form a dense subset of the unit interval
`proof`
Let $\alpha$ be an irrational number. Then for distinct $i, j \in \mathbb{Z}$, we must have $\{i \alpha\} \neq\{j \alpha\}$. If this were not true, then
$$
i \alpha-\lfloor i \alpha\rfloor=\{i \alpha\}=\{j \alpha\}=j \alpha-\lfloor j \alpha\rfloor,
$$
which yields the false statement $\alpha=\frac{\lfloor i \alpha\rfloor-\lfloor j \alpha\rfloor}{i-j} \in \mathbb{Q}$. Hence,
$$
S:=\{\{i \alpha\} \mid i \in \mathbb{Z}\}
$$
is an infinite subset of $\left[0,1\right]$.
By the Bolzano-Weierstrass theorem, $S$ has a limit point in $[0, 1]$. One can thus find pairs of elements of $S$ that are arbitrarily close. Since (the absolute value of) the difference of any two elements of $S$ is also an element of $S$, it follows that $0$ is a limit point of $S$.
To show that $S$ is dense in $[0, 1]$, consider $y \in[0,1]$, and $\epsilon>0$. Then by selecting $x \in S$ such that $\{x\}<\epsilon$ (which exists as $0$ is a limit point), and $N$ such that $N \cdot\{x\} \leq y<(N+1) \cdot\{x\}$, we get: $|y-\{N x\}|<\epsilon$.
QED
-/
theorem irrational_orbit_dense {α : ℝ} (hα_irrat : irrational α) : closure ((λ m : ℤ, int.fract (α * ↑m)) '' (@set.univ ℤ)) = set.Icc 0 1 :=
FEW SHOT PROMPTS TO CODEX(END)-/
|
{"author": "ayush1801", "repo": "Autoformalisation_benchmarks", "sha": "51e1e942a0314a46684f2521b95b6b091c536051", "save_path": "github-repos/lean/ayush1801-Autoformalisation_benchmarks", "path": "github-repos/lean/ayush1801-Autoformalisation_benchmarks/Autoformalisation_benchmarks-51e1e942a0314a46684f2521b95b6b091c536051/proof/lean_proof_with_comments-Natural-Language-Proof-Translation/Correct_statement-lean_proof_with_comments-4_few_shot_temperature_0_max_tokens_2000_n_1/clean_files/Density of irrational orbit.lean"}
|
import sys
import theano.tensor as T
from mylog.mylog import mylog
from utility.utility import *
from data_processor.data_manager import *
from data_processor.data_loader import data_loader
from build_model.build_model import build_model, build_sampler
from build_model.parameters import *
from generation.generation import *
from vocabulary.vocabulary import Vocabulary
from evaluation.evaluation import evalFile
from options_loader import *
from optimizer import *
def summarize(encoder, encoderInputs, decoder, otherInputs, OriginalText, Vocab, options, log):
result, time_data = gen_sample(encoder, encoderInputs, decoder, otherInputs, Vocab, options, log)
result = sorted(result, key = lambda x:x[0])
#print [it[0] for it in result[0][1][1:]]
sentence = translateSequence_new(result[0][1][1:], OriginalText, Vocab, options)
#print sentence
return sentence, time_data
def test_once(dataset, encoder, decoder, OriginalText, Vocab, options, log):
data = dataset[0]
print len(dataset), len(dataset[0]), len(dataset[1]), len(dataset[2])
number = len(data)
summary = ''
reference = ''
document = ''
time_data = ''
time_sum = (0.0,0.0,0.0,0.0)
log.log('Start Beam Searching')
for i in range(0, number):
#log.log('Dealing with the %d-th Instance'%(i))
batchedData = get_Kth_Instance(i, dataset)
#print batchedData
inps = batch2Inputs_new(batchedData, options)
encoderInputs = [inps[0], inps[1], inps[4]]
otherInputs = {}
otherInputs['batch_vocab'] = inps[2]
otherInputs['pointer'] = inps[3]
otherInputs['parent'] = inps[5]
summary_i, time_data_i = summarize(encoder, [inp for inp in encoderInputs if inp is not None], decoder, otherInputs, OriginalText[i], Vocab, options, log)
reference_i = ListOfIndex2Sentence(cutDown(batchedData[1][0][1:]),Vocab,options)
document_i = ListOfIndex2Sentence(batchedData[0][0], Vocab, options)
document += document_i + '\n'
summary += summary_i + '\n'
reference += reference_i + '\n'
time_data += str(time_data_i) + '\n'
time_sum = [sum(x) for x in zip(time_sum, time_data_i)]
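    # Average the per-stage timings over all instances; the 1e-8 guards
    # against division by zero on an empty dataset.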
time_sum = [(x+0.0) / (number+1e-8) for x in time_sum]
return document, reference, summary, time_data, time_sum
def evaluate(hyp_fileName, ref_fileName, metrics, log, Show = True):
Eval = evalFile(hyp_fileName, ref_fileName, metrics)
result = Eval.eval()
if Show:
for kk,vv in result.items():
print kk
print vv
return result
def prepare(optionName, modelName, dataset, testSet, Vocab, I2Es, log):
options = optionsLoader(log, False, optionName)
params = init_params(options, Vocab, I2Es, log)
if options['reload'] == True:
log.log('Start reloading Parameters.')
params = load_params(modelName, params)
log.log('Finish reloading Parameters.')
options["training"] = False
options["test"] = True
if 'decoder_epsilon' in params:
log.log('Decoder Epsilon:'+str(params['decoder_epsilon']))
params_shared = init_params_shared(params)
inps_all, dist, cost, updates, encoder = build_model(params_shared, options, log)
    inps_available = [item for item in inps_all if item is not None]
inps_dec, outps_dec, decoder = build_sampler(params_shared, options)
testData = dataset.get_first_K_instances(4096, testSet)
return testData, encoder, decoder, options
def generate(prefix, testData, encoder, decoder, OriginalText, Vocab, options, log, beam_size = 5, bigramTrick = False, gamma = 7):
log.log('Using Beam_Search')
if len(testData[0]) != 500:
options['generation_method'] = 'bfs_beam'
else:
options['generation_method'] = 'bfs_beam_75'
options['beam_size'] = beam_size
options['gamma'] = gamma
options['apply_bigram_trick'] = bigramTrick
options["training"] = False
options["test"] = True
    document, reference, summary, time_data, time_avg = test_once(testData, encoder, decoder, OriginalText, Vocab, options, log)
writeFile(prefix + '.document', document)
writeFile(prefix + '.reference', reference)
writeFile(prefix + '.summary', summary)
writeFile(prefix + '.counts', time_data)
def loadFromText(fName):
f = codecs.open(fName,'r',encoding = 'utf-8')
result = []
for l in f:
line = l.strip().split()
result.append(line)
return result
if __name__ == '__main__':
log = mylog()
dataoptions = optionsLoader(log, True)
# Load the Vocabulary and Features and Dataset First
Vocab_Giga = loadFromPKL('giga_new.Vocab')
Vocab = {
'w2i':Vocab_Giga.w2i,
'i2w':Vocab_Giga.i2w,
'i2e':Vocab_Giga.i2e
}
Features_Giga = loadFromPKL('features.Embedding')
I2Es = []
for feat in dataoptions["featList"]:
I2Es.append(Features_Giga[feat].i2e)
dataset = data_loader(Vocab, dataoptions, log)
Index = 0
optionName = './model/struct_edge/options_check2_best.json'
modelName = './model/struct_edge/model_check2_best.npz'
for part in dataoptions['subsets']:
        OriginalText = loadFromText(dataoptions['primary_dir']+dataoptions[part]+'.Ndocument')
Index += 1
log.log('Testing %d th model'%(Index))
testData, encoder, decoder, options = prepare(optionName, modelName, dataset, part , Vocab, I2Es, log)
        generate(part+'.result', testData, encoder, decoder, OriginalText, Vocab, options, log, beam_size = 5, bigramTrick=True, gamma = 13.284)
log.log('Finish Testing')
|
{"hexsha": "e904c83ced780cd20dbdee94f7de22f048f838b9", "size": 5770, "ext": "py", "lang": "Python", "max_stars_repo_path": "generate.py", "max_stars_repo_name": "KaiQiangSong/Structure-Infused-Copy-Mechanism", "max_stars_repo_head_hexsha": "da159ea47516894829d34d3db05bd87b0398bb02", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 33, "max_stars_repo_stars_event_min_datetime": "2018-05-31T00:58:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-10T06:51:12.000Z", "max_issues_repo_path": "generate.py", "max_issues_repo_name": "KaiQiangSong/Structure-Infused-Copy-Mechanism", "max_issues_repo_head_hexsha": "da159ea47516894829d34d3db05bd87b0398bb02", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-10-31T15:55:16.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-29T12:50:14.000Z", "max_forks_repo_path": "generate.py", "max_forks_repo_name": "KaiQiangSong/Structure-Infused-Copy-Mechanism", "max_forks_repo_head_hexsha": "da159ea47516894829d34d3db05bd87b0398bb02", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-05-30T22:03:08.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-22T21:04:10.000Z", "avg_line_length": 35.6172839506, "max_line_length": 162, "alphanum_fraction": 0.6590987868, "include": true, "reason": "import theano", "num_tokens": 1501}
|
"""
Filename: test_ricatti.py
Authors: Chase Coleman
Date: 07/22/2014
Tests for riccati.py file
"""
import sys
import os
import unittest
import numpy as np
from numpy.testing import assert_allclose
from quantecon.riccati import dare
class TestDoubling(unittest.TestCase):
def setUp(self):
self.A, self.B, self.R, self.Q = 1., 1., 1., 1.
def tearDown(self):
del self.A
del self.B
del self.R
del self.Q
def testGoldenNumberfloat(self):
val = dare(self.A, self.B, self.R, self.Q)
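        # With A = B = R = Q = 1 the scalar Riccati equation reduces to
        # x = x - x**2/(1 + x) + 1, i.e. x**2 - x - 1 = 0, whose positive
        # root is the golden ratio (1 + sqrt(5))/2.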
gold_ratio = (1 + np.sqrt(5)) / 2.
self.assertTrue( abs(val - gold_ratio) < 1e-12)
def testGoldenNumber2d(self):
A, B, R, Q = np.eye(2), np.eye(2), np.eye(2), np.eye(2)
gold_diag = np.eye(2) * (1 + np.sqrt(5)) / 2.
val = dare(A, B, R, Q)
self.assertTrue(np.allclose(val, gold_diag))
def testSingularR(self):
# Need to fix this in the algorithm before we test it
pass
if __name__ == '__main__':
suite = unittest.TestLoader().loadTestsFromTestCase(TestDoubling)
unittest.TextTestRunner(verbosity=2, stream=sys.stderr).run(suite)
|
{"hexsha": "c638ec3ff3368e40592b364b81f43fb73aff62d1", "size": 1156, "ext": "py", "lang": "Python", "max_stars_repo_path": "quantecon/tests/test_ricatti.py", "max_stars_repo_name": "sglyon/quant-econ", "max_stars_repo_head_hexsha": "67d44ed719c9e6202c53f3b18d16ddf7e666e58b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-16T14:30:42.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-16T14:30:42.000Z", "max_issues_repo_path": "quantecon/tests/test_ricatti.py", "max_issues_repo_name": "sglyon/quant-econ", "max_issues_repo_head_hexsha": "67d44ed719c9e6202c53f3b18d16ddf7e666e58b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "quantecon/tests/test_ricatti.py", "max_forks_repo_name": "sglyon/quant-econ", "max_forks_repo_head_hexsha": "67d44ed719c9e6202c53f3b18d16ddf7e666e58b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-07-01T01:56:59.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T13:39:24.000Z", "avg_line_length": 24.0833333333, "max_line_length": 70, "alphanum_fraction": 0.6306228374, "include": true, "reason": "import numpy,from numpy", "num_tokens": 331}
|
import torch
from torch.autograd import Variable
import utils
import dataset
from PIL import Image
import cv2 as cv
import os
import numpy as np
import models.crnn as crnn
debug = False
model_path = './data/crnn.pth'
gt_path = './data/res/'
img_path = '/data/home/zjw/pythonFile/masktextspotter.caffe2/lib/datasets/data/icdar2019/test_images/'
alphabet = '0123456789abcdefghijklmnopqrstuvwxyz'
model = crnn.CRNN(32, 1, 37, 256)
if torch.cuda.is_available():
model = model.cuda()
print('loading pretrained model from %s' % model_path)
model.load_state_dict(torch.load(model_path))
converter = utils.strLabelConverter(alphabet)
transformer = dataset.resizeNormalize((100, 32))
model.eval()
# imgPathDir = os.listdir(img_path)
# for imgPath in imgPathDir:
imgPath = 'X00016469670.jpg'
imgName = imgPath.strip().split('.')[0]
img = cv.imread(img_path+imgPath)
if debug:
print ("img shape: "+str(img.shape))
print (len(img.shape))
with open(gt_path+'res_'+imgName+'.txt') as f:
lines = f.readlines()
for line in lines:
pos = line.split(',')
        pos = np.array(pos, dtype=int)  # np.int is deprecated; use the builtin
pos = pos.reshape((-1,2))
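        # Each gt line lists a detection polygon as x1,y1,x2,y2,...;
        # crop the word region with its axis-aligned bounding box.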
minX = min(pos[:,0])
minY = min(pos[:,1])
maxX = max(pos[:,0])
maxY = max(pos[:,1])
if debug:
print ("pos is :" + str(pos))
        wordImg = img[minY:maxY+1, minX:maxX+1, :]  # image axes are (row=y, col=x)
print ("word img shape: "+str(wordImg.shape))
if len(img.shape) == 3:
wordImg = cv.cvtColor(wordImg, cv.COLOR_BGR2RGB)
image = Image.fromarray(wordImg).convert('L')
image = transformer(image)
if torch.cuda.is_available():
image = image.cuda()
image = image.view(1, *image.size())
image = Variable(image)
# model.eval()
preds = model(image)
_, preds = preds.max(2)
preds = preds.transpose(1, 0).contiguous().view(-1)
preds_size = Variable(torch.IntTensor([preds.size(0)]))
sim_pred = converter.decode(preds.data, preds_size.data, raw=False)
raw_pred = converter.decode(preds.data, preds_size.data, raw=True)
print('%-20s => %-20s' % (raw_pred, sim_pred))
|
{"hexsha": "6b08caa1e563301a560a4dc314296c548bbabba9", "size": 2179, "ext": "py", "lang": "Python", "max_stars_repo_path": "testEvalOneImage.py", "max_stars_repo_name": "zhengjiawen/crnn.pytorch", "max_stars_repo_head_hexsha": "0721deb8c2914a5a090b231a644c2331d2fc9bd9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "testEvalOneImage.py", "max_issues_repo_name": "zhengjiawen/crnn.pytorch", "max_issues_repo_head_hexsha": "0721deb8c2914a5a090b231a644c2331d2fc9bd9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "testEvalOneImage.py", "max_forks_repo_name": "zhengjiawen/crnn.pytorch", "max_forks_repo_head_hexsha": "0721deb8c2914a5a090b231a644c2331d2fc9bd9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6710526316, "max_line_length": 102, "alphanum_fraction": 0.6360715925, "include": true, "reason": "import numpy", "num_tokens": 566}
|
import netCDF4 as netcdf
import numpy as np
f = netcdf.Dataset('data/md-solvent-langevin.nc', 'r')
dis = f.variables['distance']
chunksize = 50000
data = []
maxstep = dis.shape[0]
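# Chunk boundaries 0, 50000, 100000, ... past maxstep; read the lazily-loaded
# netCDF variable one chunk at a time to bound memory use.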
i = list(range(0, maxstep + chunksize, chunksize))
for k in range(len(i) - 1):
    print(i[k], i[k + 1])
    data.append(dis[i[k]:i[k + 1]])
d = np.hstack(data)
np.save('data/md-solvent-langevin-distance.npy', d)
|
{"hexsha": "c691d4db571253199923dbbb48a9f149d05a31a3", "size": 388, "ext": "py", "lang": "Python", "max_stars_repo_path": "lib/examples/wca-dimer_openmm/bruteforce/extract_distance.py", "max_stars_repo_name": "ajoshpratt/westpa", "max_stars_repo_head_hexsha": "545a42a5ae4cfa77de0e125a38a5b1ec2b9ab218", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-21T09:11:54.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-21T09:11:54.000Z", "max_issues_repo_path": "lib/examples/wca-dimer_openmm/bruteforce/extract_distance.py", "max_issues_repo_name": "astatide/westpa", "max_issues_repo_head_hexsha": "545a42a5ae4cfa77de0e125a38a5b1ec2b9ab218", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-14T20:49:38.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-14T20:49:38.000Z", "max_forks_repo_path": "lib/examples/wca-dimer_openmm/bruteforce/extract_distance.py", "max_forks_repo_name": "ajoshpratt/westpa", "max_forks_repo_head_hexsha": "545a42a5ae4cfa77de0e125a38a5b1ec2b9ab218", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-04-14T20:42:11.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-14T20:42:11.000Z", "avg_line_length": 18.4761904762, "max_line_length": 54, "alphanum_fraction": 0.6649484536, "include": true, "reason": "import numpy", "num_tokens": 125}
|
import datetime
from datetime import date
import pytz
import string
import random
import pandas as pd
import numpy as np
import h5py
import math
import os
from skimage import io
from skimage.draw import polygon
#import matplotlib.pyplot as plt
#from nwbwidgets import nwb2widget
from pynwb import NWBFile, TimeSeries, NWBHDF5IO
from pynwb.file import Subject
from pynwb.device import Device
from pynwb.behavior import SpatialSeries, Position, BehavioralEpochs
from pynwb.ophys import TwoPhotonSeries, OpticalChannel, ImageSegmentation, Fluorescence
def find_nearest(array,value):
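    # Locate the insertion point by binary search, then return whichever of
    # the two neighbouring entries of the sorted array is closer to value.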
idx = np.searchsorted(array, value, side="left")
if idx > 0 and (idx == len(array) or math.fabs(value - array[idx-1]) < math.fabs(value - array[idx])):
return idx-1
else:
return idx
def take_first(elem):
return elem[0]
def convert_states(params):
file_dir = params['file_dir']
# Tracking, scored behavioral events, ROI contours, fluorescence traces
d_dfs = pd.read_excel(file_dir + '175_F7-49_201030_OF_AllData.xls', sheet_name=None)
# Raw calcium imaging movie
f = h5py.File(file_dir + '175_F7-49_201030_OF_PP-1_PF-1_MC-1.h5', 'r')
#img_stack = io.imread('175_F7-49_201030_OF_PP.tiff')
# For dummy thermal trace:
df_states = pd.read_csv(file_dir + 'States_ceiling_reduced.csv', index_col=0)
l_ROI_IDs = [elem[:-2] for elem in d_dfs['CAI - ROIS'].columns[::2]]
l_ROI_masks = []
for ROI_ID in l_ROI_IDs:
x = d_dfs['CAI - ROIS']['{}_X'.format(ROI_ID)].values
last_idx = np.where(~np.isnan(x))[0][-1] + 1
x = x[:last_idx]
y = d_dfs['CAI - ROIS']['{}_Y'.format(ROI_ID)].values[:last_idx]
xx, yy = polygon(x,y)
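        # Rasterize the closed ROI outline into a binary mask of the imaging
        # frame; skimage's polygon returns the indices of all interior pixels.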
ROI_mask = np.zeros((348, 385))
ROI_mask[xx, yy] = 1
l_ROI_masks.append((ROI_ID, ROI_mask))
x = d_dfs['Tracking']['CenterG_X'].values
y = d_dfs['Tracking']['CenterG_Y'].values
times = d_dfs['Tracking']['Times'].values
position_data = np.array((x,y)).T
position_times = d_dfs['Tracking']['Times'].values
l_behaviors = [elem[:elem.index('_')] for elem in d_dfs['Behaviour'].columns[::2]]
l_behavioral_time_intervals = []
for behavior in l_behaviors:
behavior_id = l_behaviors.index(behavior) +1
df_temp = d_dfs['Behaviour'][['{}_1'.format(behavior), '{}_2'.format(behavior)]].copy()
for i in range(df_temp.shape[0]):
start_time = df_temp.loc[i, '{}_1'.format(behavior)]
stop_time = df_temp.loc[i, '{}_2'.format(behavior)]
if start_time > 0:
l_behavioral_time_intervals.append((start_time, stop_time, behavior))
else:
continue
l_behavioral_time_intervals.sort(key=take_first)
tz = pytz.timezone('Europe/Berlin')
N = 12
    surgery_injection = 'Virus injection on {} by {} (stereotaxic coordinates: AP: {}, ML: {}, DV: {} [units: mm])'.format(params['injection']['date'],
params['injection']['experimenter'],
params['injection']['AP'],
params['injection']['ML'],
params['injection']['DV'])
    surgery_implantation = 'GRIN-lens implantation on {} by {} (stereotaxic coordinates: AP: {}, ML: {}, DV: {} [units: mm])'.format(params['implantation']['date'],
params['implantation']['experimenter'],
params['implantation']['AP'],
params['implantation']['ML'],
params['implantation']['DV'])
    surgery_string = surgery_injection + '; ' + surgery_implantation
if params['injection']['experimenter'] != params['implantation']['experimenter']:
l_experimenter = [params['injection']['experimenter'], params['implantation']['experimenter'], 'Dr. Jérémy Signoret-Genest', 'Prof. Dr. Philip Tovote']
else:
l_experimenter = [params['injection']['experimenter'], 'Dr. Jérémy Signoret-Genest', 'Prof. Dr. Philip Tovote']
nwbfile = NWBFile(
session_description = params['session_description'],
session_id = params['session_id'],
surgery = surgery_string,
virus = '{}, source: in-house production'.format(params['injection']['viral_construct']),
identifier=''.join(random.choices(string.ascii_uppercase + string.digits, k=N)),
session_start_time=datetime.datetime(2020,10,30,9,30, tzinfo=tz),
experimenter = l_experimenter,
lab = 'Defense Circuits Lab',
institution = 'University Hospital Wuerzburg, Institute of Clinical Neurobiology'
)
recording_day = date(2020, 10, 30)
dob = params['injection']['date_of_birth']
day_of_birth = date(int(dob[:4]), int(dob[5:7]), int(dob[8:]))
age = recording_day - day_of_birth
nwbfile.subject = Subject(
subject_id = params['injection']['mouse_id'],
age = 'P{}D'.format(str(age.days)),
date_of_birth = datetime.datetime(int(dob[:4]), int(dob[5:7]), int(dob[8:]), tzinfo=tz),
#strain = 'B6J.129S6(FVB)-Slc17a6tm2(cre)Lowl/MwarJ',
description = 'Mouse #F{} of line {}'.format(params['injection']['mouse_id'][5:], params['injection']['mouse_id'][:3]),
genotype = params['injection']['genotype'],
species = 'Mus musculus',
sex = params['injection']['sex']
)
from pynwb.epoch import TimeIntervals
time_interval_table = TimeIntervals('behavioral_intervals', description='scored behavioral intervals', id=None)
time_interval_table.add_column('behavior', description='type of behavior')
for elem in l_behavioral_time_intervals:
time_interval_table.add_interval(elem[0], elem[1], behavior=elem[2])
timestamps = d_dfs['HeartRate']['Times'].values
data = d_dfs['HeartRate']['HeartRate'].values
heartrate_obj = TimeSeries('Heart rate recording', data=data, timestamps=timestamps, unit='beats per minute')
timestamps = df_states.loc[(df_states['Session'] == 'OF') & (df_states['Animal_ID'] == '175_F4-37'), 'Times'].values
temperature = df_states.loc[(df_states['Session'] == 'OF') & (df_states['Animal_ID'] == '175_F4-37'), 'Temperature'].values
temperature_obj = TimeSeries('Thermal recording', data=temperature, timestamps=timestamps, unit='degrees celsius')
device = Device(name='Miniscope', description='NVista3.0', manufacturer='Inscopix, US')
nwbfile.add_device(device)
optical_channel = OpticalChannel('my_optchan', 'description', 500.)
imaging_plane = nwbfile.create_imaging_plane('my_imgpln', optical_channel,
description='{} (AP={}, ML={}, DV={})'.format(params['implantation']['target_region'],
params['implantation']['AP'],
params['implantation']['ML'],
params['implantation']['DV']),
device=device, excitation_lambda=475., imaging_rate=10.,
indicator=params['injection']['viral_construct'][params['injection']['viral_construct'].index('GCaMP'):],
location=params['implantation']['target_region'],
unit='millimeter')
image_series = TwoPhotonSeries(name='CaI', data=f['mov'][:200],
dimension=[385, 348],
imaging_plane=imaging_plane,
starting_frame=[0], format='tiff', starting_time=0.0, rate=1.0, unit='millimeter')
nwbfile.add_acquisition(image_series)
mod = nwbfile.create_processing_module('ophys', 'contains optical physiology processed data')
img_seg = ImageSegmentation()
mod.add(img_seg)
ps = img_seg.create_plane_segmentation('ROI segmentations',
imaging_plane, 'my_planeseg', image_series)
ID = 0
for ROI_ID in range(len(l_ROI_masks)):
if ROI_ID in [3, 4, 10, 12, 14, 16, 22, 25]:
continue
else:
ps.add_roi(image_mask=l_ROI_masks[ROI_ID][1], id=ID)
ID = ID+ 1
l_ROI_IDs_included = []
l_ROI_IDs_excluded = []
for ROI_ID in range(len(l_ROI_masks)):
if ROI_ID in [3, 4, 10, 12, 14, 16, 22, 25]:
l_ROI_IDs_excluded.append(l_ROI_masks[ROI_ID][0])
else:
l_ROI_IDs_included.append(l_ROI_masks[ROI_ID][0])
fl = Fluorescence()
mod.add(fl)
rt_region = ps.create_roi_table_region(description='all ROIs')
data_included = d_dfs['CAI - Traces'][l_ROI_IDs_included].values
data_excluded = d_dfs['CAI - Traces'][l_ROI_IDs_excluded].values
timestamps = d_dfs['CAI - Traces']['Times'].values
rrs = fl.create_roi_response_series('included', data=data_included, rois=rt_region, unit='lumens', timestamps=timestamps)
# Create a SpatialSeries that contains the data - extension of TimeSeries
spatial_series_obj = SpatialSeries(
name = 'SpatialSeries',
description = '(x,y) position in {}'.format(params['session_description']),
data = position_data,
timestamps = position_times, # if no timestamps are provided, session_start_time from file setup will be used - ?
reference_frame = '(0,0) is bottom left corner'
)
# Create a container "Position" that can contain multiple
# SpatialSeries - e.g. if multiple trials are used? not sure though
position_obj = Position(spatial_series=spatial_series_obj) # name is set to 'Position' by default
# Create a "Processing_module" to store the behavioral data,
# since it is not considered as raw due to preprocessing
# by other alglorithms / softwares
behavior_module = nwbfile.create_processing_module(
name='behavior',
description='processed behavioral data'
)
# Add the Processing_module to the NWB:N file
behavior_module.add(position_obj)
hr_mod = nwbfile.create_processing_module('cardiac', 'processed heart rate recording data')
hr_mod.add(heartrate_obj)
temp_mod = nwbfile.create_processing_module('thermal', 'processed temperature recording data')
temp_mod.add(temperature_obj)
return nwbfile
|
{"hexsha": "117a54a80b4d705117e523d0e241211a81899924", "size": 11402, "ext": "py", "lang": "Python", "max_stars_repo_path": "eln2nwb/convert2nwb.py", "max_stars_repo_name": "DSegebarth/DCL_to_NWB", "max_stars_repo_head_hexsha": "71025ece4ccc227eb58ad9b2e5db05b7a53bc621", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "eln2nwb/convert2nwb.py", "max_issues_repo_name": "DSegebarth/DCL_to_NWB", "max_issues_repo_head_hexsha": "71025ece4ccc227eb58ad9b2e5db05b7a53bc621", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "eln2nwb/convert2nwb.py", "max_forks_repo_name": "DSegebarth/DCL_to_NWB", "max_forks_repo_head_hexsha": "71025ece4ccc227eb58ad9b2e5db05b7a53bc621", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.3495934959, "max_line_length": 164, "alphanum_fraction": 0.5731450623, "include": true, "reason": "import numpy", "num_tokens": 2647}
|
import numpy as np
import itertools
from pwtools import crys, common, atomic_data, num
from pwtools.crys import Structure, Trajectory
from pwtools.test import tools
rand = np.random.rand
syms = itertools.cycle(atomic_data.symbols[1:])
def test_scell():
cell = np.identity(3)
coords_frac = np.array([[0.5, 0.5, 0.5],
[1,1,1]])
symbols = ['Al', 'N']
sc = crys.scell(Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols), (2,2,2))
sc_coords_frac = \
np.array([[ 0.25, 0.25, 0.25],
[ 0.25, 0.25, 0.75],
[ 0.25, 0.75, 0.25],
[ 0.25, 0.75, 0.75],
[ 0.75, 0.25, 0.25],
[ 0.75, 0.25, 0.75],
[ 0.75, 0.75, 0.25],
[ 0.75, 0.75, 0.75],
[ 0.5 , 0.5 , 0.5 ],
[ 0.5 , 0.5 , 1. ],
[ 0.5 , 1. , 0.5 ],
[ 0.5 , 1. , 1. ],
[ 1. , 0.5 , 0.5 ],
[ 1. , 0.5 , 1. ],
[ 1. , 1. , 0.5 ],
[ 1. , 1. , 1. ]])
sc_symbols = ['Al']*8 + ['N']*8
sc_cell = \
np.array([[ 2., 0., 0.],
[ 0., 2., 0.],
[ 0., 0., 2.]])
assert sc.symbols == sc_symbols
np.testing.assert_array_almost_equal(sc.coords_frac, sc_coords_frac)
np.testing.assert_array_almost_equal(sc.cell, sc_cell)
# non-orthorhombic cell
cell = \
np.array([[ 1., 0.5, 0.5],
[ 0.25, 1., 0.2],
[ 0.2, 0.5, 1.]])
sc = crys.scell(Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols), (2,2,2))
sc_cell = \
np.array([[ 2. , 1. , 1. ],
[ 0.5, 2. , 0.4],
[ 0.4, 1. , 2. ]])
np.testing.assert_array_almost_equal(sc.cell, sc_cell)
# crystal coords are cell-independent
np.testing.assert_array_almost_equal(sc.coords_frac, sc_coords_frac)
# slab
#
# Test if old and new implementation behave for a tricky case: natoms == 2
# mask.shape[0], i.e. if reshape() behaves correctly.
# Reference generated with old implementation. Default is new.
cell = np.identity(3)
coords_frac = np.array([[0.5, 0.5, 0.5],
[1,1,1]])
symbols = ['Al', 'N']
sc = crys.scell(Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols), (1,1,2))
sc_coords_frac = \
np.array([[ 0.5 , 0.5 , 0.25],
[ 0.5 , 0.5 , 0.75],
[ 1. , 1. , 0.5 ],
[ 1. , 1. , 1. ]])
sc_cell = \
np.array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 2.]])
sc_symbols = ['Al', 'Al', 'N', 'N']
assert sc.symbols == sc_symbols
np.testing.assert_array_almost_equal(sc.cell, sc_cell)
np.testing.assert_array_almost_equal(sc.coords_frac, sc_coords_frac)
sc = crys.scell(Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols), (1,2,1))
sc_coords_frac = \
np.array([[ 0.5 , 0.25, 0.5 ],
[ 0.5 , 0.75, 0.5 ],
[ 1. , 0.5 , 1. ],
[ 1. , 1. , 1. ]])
sc_cell = \
np.array([[ 1., 0., 0.],
[ 0., 2., 0.],
[ 0., 0., 1.]])
assert sc.symbols == sc_symbols
np.testing.assert_array_almost_equal(sc.cell, sc_cell)
np.testing.assert_array_almost_equal(sc.coords_frac, sc_coords_frac)
sc = crys.scell(Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols), (2,1,1))
sc_coords_frac = \
np.array([[ 0.25, 0.5 , 0.5 ],
[ 0.75, 0.5 , 0.5 ],
[ 0.5 , 1. , 1. ],
[ 1. , 1. , 1. ]])
sc_cell = \
np.array([[ 2., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
assert sc.symbols == sc_symbols
np.testing.assert_array_almost_equal(sc.cell, sc_cell)
np.testing.assert_array_almost_equal(sc.coords_frac, sc_coords_frac)
# symbols = None
sc = crys.scell(Structure(coords_frac=coords_frac,
cell=cell,
symbols=None), (2,2,2))
assert sc.symbols is None
# Trajectory
natoms = 4
nstep = 100
symbols = [next(syms) for ii in range(natoms)]
# cell 2d
coords_frac = rand(nstep,natoms,3)
cell = rand(3,3)
dims = (2,3,4)
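    # Each atom gets prod(dims) = 24 periodic images in a (2,3,4) supercell.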
nmask = np.prod(dims)
sc = crys.scell(Trajectory(coords_frac=coords_frac,
cell=cell,
symbols=symbols),
dims=dims)
assert sc.coords_frac.shape == (nstep, nmask*natoms, 3)
assert sc.symbols == common.flatten([[sym]*nmask for sym in symbols])
assert sc.cell.shape == (nstep,3,3)
np.testing.assert_array_almost_equal(sc.cell,
num.extend_array(cell * np.asarray(dims)[:,None],
sc.nstep,
axis=0))
# cell 3d
cell = rand(nstep,3,3)
sc = crys.scell(Trajectory(coords_frac=coords_frac,
cell=cell,
symbols=symbols),
dims=dims)
assert sc.coords_frac.shape == (nstep, nmask*natoms, 3)
coords_frac2 = np.array([crys.scell(Structure(coords_frac=coords_frac[ii,...,],
cell=cell[ii,...],
symbols=symbols), dims=dims).coords_frac \
for ii in range(nstep)])
np.testing.assert_array_almost_equal(sc.coords_frac, coords_frac2)
assert sc.symbols == common.flatten([[sym]*nmask for sym in symbols])
assert sc.cell.shape == (nstep,3,3)
np.testing.assert_array_almost_equal(sc.cell,
cell * np.asarray(dims)[None,:,None])
# methods
natoms = 20
coords_frac = rand(natoms,3)
cell = rand(3,3)
dims = (2,3,4)
symbols = [next(syms) for ii in range(natoms)]
struct = Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols)
sc1 = crys.scell(struct, dims=dims, method=1)
sc2 = crys.scell(struct, dims=dims, method=2)
d1 = dict([(key, getattr(sc1, key)) for key in sc1.attr_lst])
d2 = dict([(key, getattr(sc2, key)) for key in sc2.attr_lst])
tools.assert_dict_with_all_types_almost_equal(d1, d2)
def test_direc_and_neg_dims():
for dims in [(2,3,4), (-2,-3,-4), (-2,3,4), (2,-3,4), (2,3,-4)]:
m1 = crys.scell_mask(*dims, direc=1)[::-1,:]
m2 = crys.scell_mask(*dims, direc=-1)
assert (m1==m2).all()
for direc in [1,-1]:
ref = crys.scell_mask( 2, 3, 4, direc=direc)
assert ((ref + crys.scell_mask(-2,-3,-4, direc=direc)) == 0.0).all()
now = crys.scell_mask(-2, 3, 4, direc=direc)
assert ((ref[:,0] + now[:,0]) == 0.0).all()
assert (ref[:,1:] == now[:,1:]).all()
now = crys.scell_mask(2, -3, 4, direc=direc)
assert ((ref[:,1] + now[:,1]) == 0.0).all()
assert (ref[:,(0,2)] == now[:,(0,2)]).all()
now = crys.scell_mask(2, 3, -4, direc=direc)
assert ((ref[:,2] + now[:,2]) == 0.0).all()
assert (ref[:,:2] == now[:,:2]).all()
natoms = 20
coords_frac = rand(natoms,3)
cell = rand(3,3)
dims = (2,3,4)
symbols = [next(syms) for ii in range(natoms)]
struct = Structure(coords_frac=coords_frac,
cell=cell,
symbols=symbols)
# coords are offset by a constant shift if all dims and direc are < 0
s1 = crys.scell(struct, (2,3,4), direc=1)
s2 = crys.scell(struct, (-2,-3,-4), direc=-1)
d = s1.coords - s2.coords
assert np.abs(d - d[0,:][None,:]).sum() < 1e-12
|
{"hexsha": "221ba4387584df623b8dd64dcd6f0770e4e331d9", "size": 8327, "ext": "py", "lang": "Python", "max_stars_repo_path": "pwtools/test/test_scell.py", "max_stars_repo_name": "elcorto/pwtools", "max_stars_repo_head_hexsha": "cee068d1c7984d85e94ace243f86de350d3a1dba", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 41, "max_stars_repo_stars_event_min_datetime": "2016-06-25T13:17:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-24T16:08:47.000Z", "max_issues_repo_path": "pwtools/test/test_scell.py", "max_issues_repo_name": "elcorto/pwtools", "max_issues_repo_head_hexsha": "cee068d1c7984d85e94ace243f86de350d3a1dba", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-02-27T09:14:17.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-30T21:12:53.000Z", "max_forks_repo_path": "pwtools/test/test_scell.py", "max_forks_repo_name": "elcorto/pwtools", "max_forks_repo_head_hexsha": "cee068d1c7984d85e94ace243f86de350d3a1dba", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2017-06-01T12:57:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-20T01:52:03.000Z", "avg_line_length": 38.1972477064, "max_line_length": 92, "alphanum_fraction": 0.4669148553, "include": true, "reason": "import numpy", "num_tokens": 2587}
|
[STATEMENT]
lemma LIMSEQ_linear: "X \<longlonglongrightarrow> x \<Longrightarrow> l > 0 \<Longrightarrow> (\<lambda> n. X (n * l)) \<longlonglongrightarrow> x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>X \<longlonglongrightarrow> x; 0 < l\<rbrakk> \<Longrightarrow> (\<lambda>n. X (n * l)) \<longlonglongrightarrow> x
[PROOF STEP]
unfolding tendsto_def eventually_sequentially
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>\<forall>S. open S \<longrightarrow> x \<in> S \<longrightarrow> (\<exists>N. \<forall>n\<ge>N. X n \<in> S); 0 < l\<rbrakk> \<Longrightarrow> \<forall>S. open S \<longrightarrow> x \<in> S \<longrightarrow> (\<exists>N. \<forall>n\<ge>N. X (n * l) \<in> S)
[PROOF STEP]
by (metis div_le_dividend div_mult_self1_is_m le_trans mult.commute)
|
{"llama_tokens": 294, "file": null, "length": 2}
|
Image(imageDukeMcAdow.jpg, right, thumbnail) Duke McAdow moved to Northern California from Southern California Los Angeles in 2001 because he had grown tired of life in the big city.
He works at UC Davis solely to keep his two cats supplied with the expensive food, toys, scratchers and plush beds they demand. Although he moved to Davis from Woodland in 2007, his cats insist he continue to take them to their http://www.woodlandwiki.org/Animal_Care_Clinic_of_Woodland preferred veterinary office back in Woodland.
He enjoys bicycling (he's the guy on the slow-moving, lime-green, 1989 Cannondale). He took up golf in 2010, and frequents the Davis Golf Course and http://www.woodlandwiki.org/Mountain_Valley_Golf_Course Mountain Valley Golf Course in Woodland, where they are extremely tolerant of his "I meant to do that" playing style.
He was born in http://haw.wikipedia.org/wiki/Hawai%E2%80%98i Hawaii, and raised in Hawaii and wiki:WikiPedia:California California. Frequently asked: Duke is his given name...he was named after wiki:WikiPedia:Duke_Kahanamoku another Hawaiian named Duke.
20070307 21:35:51 This commenter would like to welcome DukeMcAdow to the thirdperson club. Users/KarlMogel
20071213 20:13:02 http://daviswiki.org/Barn_Owl?actiondiff&date1197605264 Great story. Life is a string of moments; if only all of them could be like that. Users/JabberWokky
Thanks for the comment. :) Users/DukeMcAdow
20090101 18:25:17 thanks for your wonderful comment about the Naturalist. i wished more people would notice it or even some of the college crowd coming in more often. its been in davis for 30 years! and its a delightful place and i love working there ^_^. Users/MinhTran
20090623 06:25:20 There's a fairly tricky but nice way of copying an entry with all the images. If you need to do it again, just ask. I need to write it up. Users/JabberWokky
20100721 16:53:15 Thanks for fixing all those links! Users/TomGarberson
20100820 23:57:06 Thanks for removing the personal attacks on that page! Users/PhilipNeustrom
20101019 09:41:55 Again, thanks for getting rid of the personal attacks. Users/TomGarberson
20101021 09:57:27 The Lost pages (Pets, Bicycles, etc.) are often in need of constant cleanup because so many novice (and often desperate) wiki editors do their first editing attempts there. Thanks for giving the pet pages a groom-over. It's a great service to the community. Users/JabberWokky
20120624 17:52:18 How'd the tank look in person today? Users/EdWins
20130304 16:57:59 The Davis Commons photo is a great addition to the wiki: capturing history in a photo. Thanks! Users/JabberWokky
20130906 15:52:03 I envision a cat sitting at a window and thinking, I have advanced to being a predator who can catch the night sky itself, but the only reward I am given is solitary confinement, taunted by images of what I can not have.
In all reality, you get my respect for doing the right thing. Users/JabberWokky
20130906 15:54:32 Also, I may have already mentioned this, but I knew a Duke at Duke University who was from Hawaii. Every time I visit your user profile and see the Wikipedia link about your name, I wonder if he was named for him as well. Users/JabberWokky
|
{"hexsha": "bc0d0a74e2b07d5d1546e85742c8e5c6c5219119", "size": 3606, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "lab/davisWiki/DukeMcAdow.f", "max_stars_repo_name": "voflo/Search", "max_stars_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab/davisWiki/DukeMcAdow.f", "max_issues_repo_name": "voflo/Search", "max_issues_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab/davisWiki/DukeMcAdow.f", "max_forks_repo_name": "voflo/Search", "max_forks_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 90.15, "max_line_length": 333, "alphanum_fraction": 0.794786467, "num_tokens": 955}
|
# getLog(M): Returns the natural log of the matrix.
import numpy as np
from .isPSDMd import isPSD
__all__ = ['getLog']
def getLog(M, eps=1e-15):
r"""Takes as input a matrix M and returns the natural log of M.
Parameters
----------
M : numpy.ndarray
2-d array representing a hermitian matrix
eps : float
Optional, default 1e-15; sets the tolerance for the smallest eigenvalue
Returns
----------
lgMt : numpy.ndarray
log of the input array
Notes
----------
If any eigenvalue is smaller than eps, rescales all eigenvalues toward
1.0 by a factor of eps, which keeps np.log from seeing values at or
below zero.
"""
try:
    (psd, val, vec) = isPSD(M, eps, flag=True)
except Exception:
    raise ValueError('Input matrix is not square and hermitian')
if not psd:
    raise ValueError('Eigenvalues of input matrix not sufficiently positive')
n = len(val)
#If any of the eigenvalues is smaller than eps, then rescale the spectrum
#to make all eigenvalues at least eps, this prevents log from complaining
if np.any(val<eps):
val = (1-eps)*val + eps*1.
lgMt = np.dot(np.log(val)*vec,vec.conj().T)
return lgMt
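# A minimal usage sketch (added; assumes isPSD returns (flag, eigenvalues,
# eigenvectors) when called with flag=True, as the call above suggests):
# exponentiating the eigenvalues of getLog(M) should recover M.
#
#   M = np.array([[2.0, 1.0], [1.0, 2.0]])   # hermitian, eigenvalues 1 and 3
#   L = getLog(M)
#   w, V = np.linalg.eigh(L)
#   M_back = (V * np.exp(w)) @ V.conj().T    # matrix exponential via eigh
#   assert np.allclose(M_back, M)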
|
{"hexsha": "cd9a270db06869d77aba723936d043df67928ccc", "size": 1258, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/qinfpy/basic/getLogMd.py", "max_stars_repo_name": "vsiddhu/qinfpy", "max_stars_repo_head_hexsha": "f8f29070c31cc5577e66cad093b0686108d237d4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/qinfpy/basic/getLogMd.py", "max_issues_repo_name": "vsiddhu/qinfpy", "max_issues_repo_head_hexsha": "f8f29070c31cc5577e66cad093b0686108d237d4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/qinfpy/basic/getLogMd.py", "max_forks_repo_name": "vsiddhu/qinfpy", "max_forks_repo_head_hexsha": "f8f29070c31cc5577e66cad093b0686108d237d4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-04-28T01:33:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-28T01:35:53.000Z", "avg_line_length": 26.2083333333, "max_line_length": 81, "alphanum_fraction": 0.6041335453, "include": true, "reason": "import numpy", "num_tokens": 326}
|
from hoomd import *
from hoomd import md
import numpy as np
sigma = 4.629 #A
la = 6.00
lr = 19.61
kBby10 = 0.83144622
eps = 414.90 * kBby10
ms = 3
mass = 142.2817 / ms
# 1668 molecules of 3 beads each
N = 1668
Lx = 57.63
Ly = Lx
Lz = 345.78
Nequilibrium = 5000000
Nproduction = 12000000
Ntotal = Nequilibrium + Nproduction
context.initialize("")
system = init.read_gsd(filename='init.gsd', restart='restart.gsd')
nl = md.nlist.cell()
#Set 1-4 exclusions
nl.reset_exclusions(exclusions = ['1-2', '1-3', '1-4'])
rcut = 6*sigma # A
#Setting Mie interaction parameters
mie = md.pair.mie(r_cut= rcut, nlist=nl)
mie.pair_coeff.set('DCG', 'DCG', epsilon= eps, sigma= sigma,
n = lr, m = la)
#Angles and Bond obtained from raaSAFT
kcal2joule=4184
AlkaneBondConstant = 2*7.540*kcal2joule*0.1
AlkaneAngleZero = 157.6*np.pi/180.0
AlkaneAngleConstant = 2*2.650*kcal2joule*0.1
bond = md.bond.harmonic()
bond.bond_coeff.set('bndDCG', k=AlkaneBondConstant , r0=sigma)
angle = md.angle.harmonic()
angle.angle_coeff.set('angDCG', k = AlkaneAngleConstant, t0 = AlkaneAngleZero)
all = group.all()
standard = md.integrate.mode_standard(dt=0.003)
#write restart every 100k steps
restart = dump.gsd(filename="restart.gsd", group = all, truncate=True,
period = 100000, phase=0)
T = 500.0 #K
kT = T * kBby10
isothermal = md.integrate.nvt(group = all, kT = kT, tau = 1.0)
run_upto(Nequilibrium, quiet = False)
isothermal.disable()
nve = md.integrate.nve(group = all)
Na = 6.022e23
nbins = 250 #number of bins to compute density profile
period = 1000 #period for log and density profile
dz = Lz/(nbins -1)
vbin = Lx * Ly * dz #A3
def density_profile(timestep):
snap = system.take_snapshot()
if comm.get_rank() == 0:
f = open('zdensity','ab')
posz = snap.particles.position[:, 2]
h, _ = np.histogram(posz, bins = nbins - 1, range=(-Lz/2, Lz/2))
den = h / ms #divide by chain length -> molecule/bin
den /= vbin #divide by bin volume -> molecule/A3
den *= 10**27 / Na #convert to mol/l
np.savetxt(f, den, newline=' ', delimiter=',')
f.write(b'\n')
f.close()
#To read the file use np.loadtxt('zdensity')
callback = analyze.callback(callback = density_profile, period = period, phase = -1 )
zbins = np.linspace(-Lz/2, Lz/2, nbins)
zgroups = {}
zcompute = {}
stensorlist = []
for i in range(nbins-1):
zname = 'z' + str(i)
zgroups[zname] = group.cuboid(zname, zmin = zbins[i], zmax = zbins[i+1])
zcompute[zname] = compute.thermo(zgroups[zname])
stensorlist.append('pressure_xx_'+ zname)
stensorlist.append('pressure_yy_'+ zname)
stensorlist.append('pressure_zz_'+ zname)
def group_update(timestep):
for group in zgroups.values():
group.force_update()
#These callbacks are meant to update the groups at steps period-1 and period
group_update1 = analyze.callback(callback = group_update, period = period, phase = -1)
run(1, quiet = True)
group_update2 = analyze.callback(callback = group_update, period = period, phase = -1)
pressurelog = analyze.log(filename = 'pressure.dat', quantities = stensorlist , period = period,
overwrite=False, phase = -1)
#data to be logged
loglist = ['potential_energy', 'kinetic_energy',
           'temperature', 'pressure_zz', 'pressure_xx', 'pressure_yy']
log = analyze.log(filename = 'log.dat', quantities = loglist, period = period,
overwrite=False, phase = -1)
run_upto(Ntotal)
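# A post-processing sketch (added; not part of the original run script):
# average the density profiles written by the density_profile callback.
#
#   profiles = np.loadtxt('zdensity')        # one profile per logged step
#   mean_profile = profiles.mean(axis=0)     # time-averaged density, mol/l
#   z = np.linspace(-Lz/2, Lz/2, len(mean_profile))
#   np.savetxt('zdensity_mean.dat', np.column_stack([z, mean_profile]))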
|
{"hexsha": "be3dd46504c7d046236068ee47161fc62145a262", "size": 3504, "ext": "py", "lang": "Python", "max_stars_repo_path": "input_files/HOOMD/CGC10/config_IK.py", "max_stars_repo_name": "livecomsjournal/BPIPMDS", "max_stars_repo_head_hexsha": "7491ed8a66acaf4a879cacc6ae29e4220d268a1c", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-19T06:43:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T16:09:40.000Z", "max_issues_repo_path": "input_files/HOOMD/CGC10/config_IK.py", "max_issues_repo_name": "livecomsjournal/BPIPMDS", "max_issues_repo_head_hexsha": "7491ed8a66acaf4a879cacc6ae29e4220d268a1c", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-01-05T16:01:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-18T17:58:25.000Z", "max_forks_repo_path": "input_files/HOOMD/CGC10/config_IK.py", "max_forks_repo_name": "livecomsjournal/BPIPMDS", "max_forks_repo_head_hexsha": "7491ed8a66acaf4a879cacc6ae29e4220d268a1c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-12-09T21:47:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-13T07:58:25.000Z", "avg_line_length": 28.7213114754, "max_line_length": 96, "alphanum_fraction": 0.671803653, "include": true, "reason": "import numpy", "num_tokens": 1113}
|
# Copyright 2020 Xilinx Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Module for testing the libpyxir XGraph data structure
"""
import unittest
import numpy as np
import libpyxir as lpx
class TestXGraph(unittest.TestCase):
def test_constructor(self):
xg = lpx.XGraph('g')
assert xg.get_name() == 'g'
def test_name(self):
xg = lpx.XGraph('g')
assert xg.get_name() == 'g'
xg.set_name('g2')
assert xg.get_name() == 'g2'
def test_add(self):
xg = lpx.XGraph('g')
X1 = lpx.XLayer(
name='x1',
xtype=lpx.StrVector(['X1']),
bottoms=lpx.StrVector([])
)
xg.add(X1)
assert 'x1' in xg
assert len(xg.get_input_names()) == 1
assert xg.get_input_names()[0] == 'x1'
assert len(xg.get_output_names()) == 1
assert xg.get_output_names()[0] == 'x1'
X2 = lpx.XLayer(
name='x2',
xtype=lpx.StrVector(['X2']),
bottoms=lpx.StrVector(['x1'])
)
xg.add(X2)
assert len(xg) == 2
X1_xg = xg.get('x1')
assert len(X1_xg.tops) == 1
assert X1_xg.tops[0] == 'x2'
assert len(xg.get_input_names()) == 1
assert xg.get_input_names()[0] == 'x1'
assert len(xg.get_output_names()) == 1
assert xg.get_output_names()[0] == 'x2'
assert xg.get_layer_names() == lpx.StrVector(['x1', 'x2'])
X3 = lpx.XLayer(
name='x3',
xtype=lpx.StrVector(['X3']),
bottoms=lpx.StrVector(['x2'])
)
xg.add(X3)
assert xg.get_input_names() == lpx.StrVector(['x1'])
assert xg.get_output_names() == lpx.StrVector(['x3'])
assert xg.get_layer_names() == lpx.StrVector(['x1', 'x2', 'x3'])
assert xg.get('x2').tops == lpx.StrVector(['x3'])
assert xg.get('x2').bottoms == lpx.StrVector(['x1'])
X4 = lpx.XLayer(
name='x4',
xtype=lpx.StrVector(['X4']),
bottoms=lpx.StrVector(['x1']),
tops=lpx.StrVector(['x3'])
)
xg.add(X4)
assert xg.get_input_names() == lpx.StrVector(['x1'])
assert xg.get_output_names() == lpx.StrVector(['x3'])
assert xg.get_layer_names() == lpx.StrVector(['x1', 'x2', 'x4', 'x3'])
assert xg.get('x2').tops == lpx.StrVector(['x3'])
assert xg.get('x2').bottoms == lpx.StrVector(['x1'])
assert xg.get('x4').tops == lpx.StrVector(['x3'])
assert xg.get('x4').bottoms == lpx.StrVector(['x1'])
X5 = lpx.XLayer(
name='x5',
xtype=lpx.StrVector(['X5']),
bottoms=lpx.StrVector([]),
tops=lpx.StrVector(['x4'])
)
X6 = lpx.XLayer(
name='x6',
xtype=lpx.StrVector(['X6']),
bottoms=lpx.StrVector(['x4']),
tops=lpx.StrVector([])
)
xg.add(X5)
xg.add(X6)
assert xg.get_input_names() == lpx.StrVector(['x1', 'x5'])
assert xg.get_output_names() == lpx.StrVector(['x3', 'x6'])
assert xg.get_layer_names() == \
lpx.StrVector(['x1', 'x2', 'x5', 'x4', 'x3', 'x6'])
assert xg.get('x2').tops == lpx.StrVector(['x3'])
assert xg.get('x2').bottoms == lpx.StrVector(['x1'])
assert xg.get('x4').tops == lpx.StrVector(['x3', 'x6'])
assert xg.get('x4').bottoms == lpx.StrVector(['x1', 'x5'])
def test_get(self):
xg = lpx.XGraph('g')
X1 = lpx.XLayer(
name='x1',
xtype=lpx.StrVector(['X1']),
bottoms=lpx.StrVector([])
)
xg.add(X1)
X1_xg = xg.get('x1')
assert X1_xg.name == 'x1'
assert X1_xg.xtype[0] == 'X1'
assert X1_xg.bottoms == lpx.StrVector([])
# If we adjust X1, this doesn't get represented in X1_xg
X1.xtype[0] = 'X11'
assert X1_xg.xtype[0] == 'X1'
X1_xg.xtype[0] = 'X11'
assert X1_xg.xtype[0] == 'X11'
X1_xg.xtype = lpx.StrVector(['X111'])
assert X1_xg.xtype[0] == 'X111'
def test_remove(self):
xg = lpx.XGraph('g')
X1 = lpx.XLayer(
name='x1',
xtype=lpx.StrVector(['X1']),
bottoms=lpx.StrVector([])
)
xg.add(X1)
assert 'x1' in xg
assert len(xg) == 1
xg.remove('x1')
assert len(xg) == 0
X2 = lpx.XLayer(
name='x2',
xtype=lpx.StrVector(['X2']),
bottoms=lpx.StrVector(['x1'])
)
X3 = lpx.XLayer(
name='x3',
xtype=lpx.StrVector(['X3']),
bottoms=lpx.StrVector(['x2'])
)
X4 = lpx.XLayer(
name='x4',
xtype=lpx.StrVector(['X4']),
bottoms=lpx.StrVector(['x1']),
tops=lpx.StrVector(['x3'])
)
X5 = lpx.XLayer(
name='x5',
xtype=lpx.StrVector(['X5']),
bottoms=lpx.StrVector([]),
tops=lpx.StrVector(['x4'])
)
X6 = lpx.XLayer(
name='x6',
xtype=lpx.StrVector(['X6']),
bottoms=lpx.StrVector(['x4']),
tops=lpx.StrVector([])
)
xg.add(X1)
xg.add(X2)
xg.add(X3)
xg.add(X4)
xg.add(X5)
xg.add(X6)
assert len(xg) == 6
xg.remove('x2')
assert len(xg) == 5
assert xg.get_input_names() == lpx.StrVector(['x1', 'x5'])
assert xg.get_output_names() == lpx.StrVector(['x3', 'x6'])
assert xg.get_layer_names() == \
lpx.StrVector(['x1', 'x5', 'x4', 'x3', 'x6'])
xg.remove('x1')
assert len(xg) == 4
assert xg.get_input_names() == lpx.StrVector(['x5'])
assert xg.get_output_names() == lpx.StrVector(['x3', 'x6'])
assert xg.get_layer_names() == \
lpx.StrVector(['x5', 'x4', 'x3', 'x6'])
xg.remove('x6')
assert len(xg) == 3
assert xg.get_input_names() == lpx.StrVector(['x5'])
assert xg.get_output_names() == lpx.StrVector(['x3'])
assert xg.get_layer_names() == lpx.StrVector(['x5', 'x4', 'x3'])
xg.remove('x4')
assert len(xg) == 2
assert xg.get_input_names() == lpx.StrVector(['x5', 'x3'])
assert xg.get_output_names() == lpx.StrVector(['x3', 'x5'])
assert xg.get_layer_names() == lpx.StrVector(['x3', 'x5'])
xg.remove('x3')
assert len(xg) == 1
assert xg.get_input_names() == lpx.StrVector(['x5'])
assert xg.get_output_names() == lpx.StrVector(['x5'])
assert xg.get_layer_names() == lpx.StrVector(['x5'])
xg.remove('x5')
assert len(xg) == 0
assert xg.get_input_names() == lpx.StrVector([])
assert xg.get_output_names() == lpx.StrVector([])
assert xg.get_layer_names() == lpx.StrVector([])
def test_update(self):
xg = lpx.XGraph('g')
X1 = lpx.XLayer(
name='x1',
xtype=lpx.StrVector(['X1']),
bottoms=lpx.StrVector([])
)
xg.add(X1)
assert xg.get('x1').xtype == lpx.StrVector(['X1'])
X1.xtype[0] = 'X11'
assert xg.get('x1').xtype == lpx.StrVector(['X1'])
xg.update(X1.name)
assert xg.get('x1').xtype == lpx.StrVector(['X1'])
X2 = lpx.XLayer(
name='x2',
xtype=lpx.StrVector(['X2']),
bottoms=lpx.StrVector(['x1'])
)
X3 = lpx.XLayer(
name='x3',
xtype=lpx.StrVector(['X3']),
bottoms=lpx.StrVector(['x2'])
)
xg.add(X2)
xg.add(X3)
X2 = xg.get('x2')
X3 = xg.get('x3')
assert xg.get_layer_names() == \
lpx.StrVector(['x1', 'x2', 'x3'])
X3.bottoms = lpx.StrVector(['x2'])
xg.update(X3.name)
assert xg.get_layer_names() == \
lpx.StrVector(['x1', 'x2', 'x3'])
assert xg.get('x2').tops == lpx.StrVector(['x3'])
assert xg.get('x2').bottoms == lpx.StrVector(['x1'])
assert xg.get('x3').bottoms == lpx.StrVector(['x2'])
assert xg.get('x1').tops == lpx.StrVector(['x2'])
xg.remove(X2.name)
X2.bottoms = lpx.StrVector(['x3'])
X2.tops = lpx.StrVector(['x1'])
xg.add(X2)
assert xg.get_layer_names() == \
lpx.StrVector(['x3', 'x2', 'x1'])
assert xg.get_input_names() == lpx.StrVector(['x3'])
assert xg.get_output_names() == lpx.StrVector(['x1'])
assert xg.get('x2').tops == lpx.StrVector(['x1'])
assert xg.get('x2').bottoms == lpx.StrVector(['x3'])
assert xg.get('x3').bottoms == lpx.StrVector([])
assert xg.get('x3').tops == lpx.StrVector(['x2'])
assert xg.get('x1').tops == lpx.StrVector([])
assert xg.get('x1').bottoms == lpx.StrVector(['x2'])
|
{"hexsha": "025dde600737576655b9d90658e57217aaccf120", "size": 9422, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/unit/_libpyxir/test_xgraph.py", "max_stars_repo_name": "pankajdarak-xlnx/pyxir", "max_stars_repo_head_hexsha": "a93b785a04b6602418c4f07a0f29c809202d35bd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tests/unit/_libpyxir/test_xgraph.py", "max_issues_repo_name": "pankajdarak-xlnx/pyxir", "max_issues_repo_head_hexsha": "a93b785a04b6602418c4f07a0f29c809202d35bd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/unit/_libpyxir/test_xgraph.py", "max_forks_repo_name": "pankajdarak-xlnx/pyxir", "max_forks_repo_head_hexsha": "a93b785a04b6602418c4f07a0f29c809202d35bd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8310810811, "max_line_length": 78, "alphanum_fraction": 0.5116748037, "include": true, "reason": "import numpy", "num_tokens": 2829}
|
import numpy as np
class Optimizer:
def __init__(self, optimizer, learning_rate,
param1, param2):
self.optimizer = optimizer
self.learning_rate = learning_rate
self.param1 = param1
self.param2 = param2
self.moment1 = 0
self.moment2 = 0
self.moment3 = 0
self.moment4 = 0
def update_layer(self, weight, bias, grad_w, gradient, iter):
bias = np.subtract(bias,
                   np.multiply(float(self.learning_rate),
                               np.multiply(gradient, bias)))
if self.optimizer == 'SGD':
weight = np.subtract(weight,
float(self.learning_rate) * grad_w)
elif self.optimizer == 'RMSProp':
self.moment1 = np.add(self.param1 * self.moment1,
np.multiply(1 - self.param1,
np.multiply(grad_w, grad_w)))
weight = np.subtract(weight,
np.multiply(
np.divide(grad_w,
np.sqrt(self.moment1) + 1e-7),
float(self.learning_rate)))
elif self.optimizer == 'SGD_momentum':
self.moment1 = np.add(self.param1 * self.moment1, grad_w)
weight = np.subtract(weight,
float(self.learning_rate) * self.moment1)
elif self.optimizer == 'Nesterov':
self.moment2 = self.moment1
self.moment1 = np.subtract(self.param1 * self.moment1,
float(self.learning_rate) * grad_w)
weight = np.add(np.subtract(weight,
self.param1 * self.moment2),
(1 + self.param1) * self.moment1)
elif self.optimizer == 'AdaGrad':
self.moment1 = np.add(self.moment1, np.multiply(grad_w, grad_w))
weight = np.subtract(weight,
np.multiply(
np.divide(grad_w,
np.sqrt(self.moment1) + 1e-7),
float(self.learning_rate)))
elif self.optimizer == 'Adam':
self.moment1 = np.add(self.param1 * self.moment1,
np.multiply(1 - self.param1, grad_w))
self.moment2 = np.add(self.param2 * self.moment2,
np.multiply(1 - self.param2,
np.multiply(grad_w, grad_w)))
self.moment3 = np.divide(self.moment1,
1 - (self.param1 ** iter))
self.moment4 = np.divide(self.moment2,
1 - (self.param2 ** iter))
weight = np.subtract(weight,
np.multiply(
np.divide(self.moment3,
np.sqrt(self.moment4) + 1e-7),
float(self.learning_rate)))
elif self.optimizer == 'AdaDelta':
    # initialize the running average of squared updates only once;
    # resetting it on every call would wipe out the accumulator
    if np.isscalar(self.moment3):
        self.moment3 = np.zeros_like(weight)
    self.moment1 = np.add(self.param1 * self.moment1,
                          np.multiply(1 - self.param1,
                                      np.multiply(grad_w, grad_w)))
    # epsilon term is 1e-7, matching the other optimizers above
    self.moment2 = np.multiply(np.divide(np.sqrt(self.moment3) + 1e-7,
                                         np.sqrt(self.moment1) + 1e-7),
                               grad_w)
self.moment3 = np.add(self.param2 * self.moment3,
np.multiply(1 - self.param2,
np.power(self.moment2, 2)))
weight = np.subtract(weight, self.moment2)
else:
    raise ValueError("Unknown optimizer: " + str(self.optimizer))
return weight, bias
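# A minimal usage sketch (added; the toy arrays below are made up):
# one SGD-with-momentum update on a 3x2 layer.
#
#   opt = Optimizer('SGD_momentum', learning_rate=0.01,
#                   param1=0.9, param2=0.999)
#   w = np.ones((3, 2)); b = np.zeros(2)
#   grad_w = 0.1 * np.ones((3, 2))
#   w, b = opt.update_layer(w, b, grad_w, 0.1, 1)   # gradient=0.1, iter=1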
|
{"hexsha": "e980a830ea248ba4c6cd1252c0ddc03662b5311c", "size": 4050, "ext": "py", "lang": "Python", "max_stars_repo_path": "SimpleNN/Optimizer.py", "max_stars_repo_name": "joel2411/Simple-Neural-Network", "max_stars_repo_head_hexsha": "b8de22f57073944541a5c2df4c6918c9a665abb3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SimpleNN/Optimizer.py", "max_issues_repo_name": "joel2411/Simple-Neural-Network", "max_issues_repo_head_hexsha": "b8de22f57073944541a5c2df4c6918c9a665abb3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SimpleNN/Optimizer.py", "max_forks_repo_name": "joel2411/Simple-Neural-Network", "max_forks_repo_head_hexsha": "b8de22f57073944541a5c2df4c6918c9a665abb3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.2658227848, "max_line_length": 78, "alphanum_fraction": 0.4392592593, "include": true, "reason": "import numpy", "num_tokens": 774}
|
#!/usr/bin/env python
# We use Python 2 instead of Python 3 because ROS uses Python 2.
# ref. https://numpy.org/doc/1.18/numpy-user.pdf
# Single Beam Sonar
# Sonar Point-Scatter Model
# Contributors: Andreina Rascon, Derek Olson, Woeng-Sug Choi
from random import random
from math import sqrt, sin, cos, pi, log, acos
import numpy as np
import matplotlib.pyplot as plt
import rospy
# various diagnostics for beam 0
def _show_plots(nBeams, ray_nElevationRays, ray_nAzimuthRays,
nFreq, nBuckets, time1f, P_beam_tf2, P_bucket_tf2):
# Plots
plt.figure(figsize=(14,10), dpi=80)
plt.suptitle("%d beam(s), %d elevation rays, %d azimuth rays "
"%d frequencies, %d buckets"%(nBeams,
ray_nElevationRays, ray_nAzimuthRays, nFreq, nBuckets))
# inverse fast fourier transform
plt.subplot(2,2,1)
plt.title("Power based on echo time")
plt.grid(True)
plt.plot(time1f, P_beam_tf2[0,:], linewidth=0.5)
plt.xlabel('Time, [s]')
plt.ylabel('Pressure, [Pa]')
# Sound Pressure Level of Echo Level
SPLf1 = 20 * np.log(np.abs(P_beam_tf2[0,:])) # sound pressure level, [dB]
plt.subplot(2,2,2)
plt.title("Sound pressure level based on echo time")
plt.grid(True)
plt.plot(time1f, SPLf1, linewidth=0.5)
plt.xlabel('Time, [s]')
plt.ylabel('Sound Pressure Level, [Pa]')
# image for each nFreq
plt.subplot(2,2,3)
plt.title("Heatplot sound pressure level (SPL) based on frequency number")
plt.xlabel('Beam number')
plt.ylabel('Inverse FFT frequency number')
plt.imshow(P_beam_tf2.T, aspect="auto")
# image for each nBuckets
plt.subplot(2,2,4)
plt.title("Bucketed heatplot SPL based on bucketed frequency number")
plt.xlabel('Beam number')
plt.ylabel('Inverse FFT frequency bucket number')
plt.imshow(P_bucket_tf2.T, aspect="auto")
plt.show()
# diagnostics showing pressure levels for each beam
def _show_plots_powers(nBeams, ray_nElevationRays, ray_nAzimuthRays,
nFreq, nBuckets, time1f, P_beam_tf2, P_bucket_tf2):
# Plots
plt.figure(figsize=(14,10), dpi=80)
plt.suptitle("Sound pressure level in Db based on echo time\n"
"%d beam(s), %d elevation rays, %d azimuth rays "
"%d frequencies"%(nBeams,
ray_nElevationRays, ray_nAzimuthRays, nFreq))
# Sound Pressure Level of Echo Level
for i in range(16):
SPLf1 = 20 * np.log(np.abs(P_beam_tf2[i,:]))
# SPLf1 = 20 * np.log(np.abs(P_bucket_tf2[i,:])) # bucketed
plt.subplot(16,1,i+1)
plt.grid(True)
plt.plot(time1f, SPLf1, linewidth=0.5)
plt.xlabel('Time, [s]')
plt.show()
# unnormalized sinc function
def _unnormalized_sinc(t):
try:
return sin(t)/t
except ZeroDivisionError:
return 1.0
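# Note (added aside): numpy's np.sinc is the *normalized* sinc,
# sin(pi*x)/(pi*x), so for array inputs the unnormalized form above is
# equivalent to np.sinc(t / np.pi).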
## physics constants
soundSpeed = 1500.0 # [m/s]
mu = 10e-4 # Surface Reflectivity
# Input Parameters
# The BlueView P900-45 is used as an example sonar for the purposes of
# demonstrating the point-scatter model
# Sonar properties
sonarFreq = 900E3 # Sonar frequency
bandwidth = 29.5e4 # [Hz]
freqResolution = 100e2
fmin = sonarFreq - bandwidth/2*4 # Calculated frequency spectrum
fmax = sonarFreq + bandwidth/2*4 # Calculated frequency spectrum
def _textf3(text, f):
return "%s: %f, %f, %f"%(text,f[0],f[1],f[2])
# incidence angle is target's normal angle accounting for the ray's azimuth
# and elevation
def _ray_incidence(azimuth, elevation, normalf4):
# ray normal from camera azimuth and elevation
camera_x = cos(-azimuth)*cos(elevation)
camera_y = sin(-azimuth)*cos(elevation)
camera_z = sin(elevation)
ray_normal = np.array([camera_x, camera_y, camera_z])
# target normal with axes compensated to camera axes
target_normal = np.array([normalf4[2], -normalf4[0], -normalf4[1]])
# dot product
dot_product = ray_normal.dot(target_normal)
return pi - acos(dot_product)
def process_rays(ray_distancesf2, ray_normalsf2_4, show_plots=False):
# Sonar sensor properties
nBeams = 16
beam_elevationAngle = 0.0175 # Beam looking down in elevation direction
beam_azimuthAngle = 0.0 # Beam at center line in azimuth direction
beam_elevationAngleWidth = 0.1 # radians
beam_azimuthAngleWidth = 0.1 # radians
ray_nElevationRays = 4
ray_nAzimuthRays = 3
nBuckets = 300
if ray_distancesf2.shape != (ray_nElevationRays, ray_nAzimuthRays * nBeams):
print("bad distances shape ", ray_distancesf2.shape)
return np.zeros((nBeams, nBuckets))
# calculated Sonar sensor properties
ray_elevationAnglesf1 = beam_elevationAngle + np.linspace(
-beam_elevationAngleWidth / 2, beam_elevationAngleWidth / 2,
ray_nElevationRays)
ray_azimuthAnglesf1 = beam_azimuthAngle + np.linspace(
-beam_azimuthAngleWidth / 2, beam_azimuthAngleWidth / 2,
ray_nAzimuthRays)
ray_elevationAngleWidth = beam_elevationAngleWidth/(ray_nElevationRays - 1)
ray_azimuthAngleWidth = beam_azimuthAngleWidth/(ray_nAzimuthRays - 1)
# calculated sampling periods
max_T = np.amax(ray_distancesf2)*2/soundSpeed
_delta_f = 1/max_T
# _delta_t = 1/(fmax - fmin)
nFreq = int(round((fmax - fmin) / _delta_f))
# reduce nFreq because calculated nFreq is too large for looping
print("nFreq", nFreq)
_freq1f = np.linspace(fmin,fmax,nFreq)
# calculated physics
_absorption = 0.0354 # [dB/m]
_attenuation = _absorption*log(10)/20
_kw1f = 2*pi*_freq1f/soundSpeed # wave vector
K1f = _kw1f + 1j*_attenuation # attenuation constant K1f
# Transmit spectrum, frequency domain
S_f1f = 1e11 * np.exp(-(_freq1f - sonarFreq)**2 * pi**2 / bandwidth**2)
# Point Scattering model
# Echo level using the point scatter model for P(f) and P(t) for beams
P_ray_f2c = np.zeros((nBeams, nFreq), dtype=np.complex_)
azimuthBeamPattern2f = np.zeros((ray_nElevationRays,ray_nAzimuthRays))
elevationBeamPattern2f = np.zeros((ray_nElevationRays,ray_nAzimuthRays))
for k in range(ray_nElevationRays):
for i in range(ray_nAzimuthRays):
azimuthBeamPattern2f[k,i] = (abs(_unnormalized_sinc(pi * 0.884
/ ray_azimuthAngleWidth * sin(ray_azimuthAnglesf1[i]))))**2
elevationBeamPattern2f[k,i] = (abs(_unnormalized_sinc(pi * 0.884
/ ray_elevationAngleWidth * sin(ray_elevationAnglesf1[k]))))**2
# diagnostics image of ray incidences
incidences_f2 = np.zeros((ray_nElevationRays, ray_nAzimuthRays * nBeams),
dtype=np.float32) # diagnostics message
for k in range(ray_nElevationRays):
for i in range(ray_nAzimuthRays * nBeams):
xi_z = random() # uniform random number standing in for Gaussian noise
xi_y = random() # another uniform random number standing in for Gaussian noise
# xi_z = 0.5 # turn off randomness
# xi_y = 0.5 # turn off randomness
# ray r in beam i
r = i % ray_nAzimuthRays
# angle between ray vector and object normal vector
incidence = _ray_incidence(ray_azimuthAnglesf1[r],
ray_elevationAnglesf1[k],
ray_normalsf2_4[k, i])
incidences_f2[k,i] = incidence
distance = ray_distancesf2[k,i]
amplitude = (((xi_z + 1j * xi_y)
/ sqrt(2))
* (sqrt(mu * cos(incidence)**2 * distance**2
* ray_azimuthAngleWidth
* ray_elevationAngleWidth))
* azimuthBeamPattern2f[k,r]
* elevationBeamPattern2f[k,r])
# Summation of Echo returned from a signal (frequency domain)
b = int(i/ray_nAzimuthRays) # beam
for m in range(nFreq):
P_ray_f2c[b,m] = P_ray_f2c[b,m] + S_f1f[m] * amplitude \
* np.exp(-1j * K1f[m] * distance * 2) / (distance**2)
# power level based on echo time for each beam
P_beam_tf2 = np.zeros((nBeams, nFreq), dtype=np.float32)
for b in range(nBeams):
P_beam_tf2[b,:] = np.fft.ifft(P_ray_f2c[b,:])
# power into buckets
P_bucket_tf2 = np.zeros((nBeams, nBuckets), dtype=np.float32)
for b in range(nBeams):
for f in range(nFreq):
bucket = int(f*nBuckets/nFreq)
P_bucket_tf2[b, bucket] += P_beam_tf2[b,f]
# show_plots = True
if show_plots:
time1f = np.linspace(0,max_T,nFreq) # for diagnostics plot
# _show_plots(nBeams, ray_nElevationRays, ray_nAzimuthRays,
# nFreq, nBuckets, time1f, P_beam_tf2, P_bucket_tf2)
_show_plots_powers(nBeams, ray_nElevationRays, ray_nAzimuthRays,
nFreq, nBuckets, time1f, P_beam_tf2, P_bucket_tf2)
# return P_beam_tf2.T, incidences_f2 # unbucketed beam
return P_bucket_tf2.T, incidences_f2 # bucketed beam
# test
if __name__ == '__main__':
# These dimensions must match hardcoded dimensions
# 16 beams 3 wide 4 tall
ray_distancesf2 = np.zeros((4,48), dtype=np.float32)
ray_distancesf2[:,] = np.linspace(0.5, 6.0, 48)
ray_normalsf2_4 = np.zeros((4,48,4), dtype=np.float32)
ray_normalsf2_4[:,:,0]=1.0
print("ray_distancesf2", ray_distancesf2)
print("ray_normalsf2_4", ray_normalsf2_4)
# run test dataset and show plots
_image, _incidences = process_rays(ray_distancesf2, ray_normalsf2_4, True)
|
{"hexsha": "ae5966aae8526b26e185fa69b4bc49c93af6e98f", "size": 9644, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/sonar_equations.py", "max_stars_repo_name": "daewok/nps_uw_sensors_gazebo", "max_stars_repo_head_hexsha": "f49df01ca54ac7911888a56b2291af6813000b93", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scripts/sonar_equations.py", "max_issues_repo_name": "daewok/nps_uw_sensors_gazebo", "max_issues_repo_head_hexsha": "f49df01ca54ac7911888a56b2291af6813000b93", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/sonar_equations.py", "max_forks_repo_name": "daewok/nps_uw_sensors_gazebo", "max_forks_repo_head_hexsha": "f49df01ca54ac7911888a56b2291af6813000b93", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2698412698, "max_line_length": 80, "alphanum_fraction": 0.647656574, "include": true, "reason": "import numpy", "num_tokens": 2771}
|
import numpy as np
from math import floor
import random
from keras.models import Model
from keras.layers import Dense, Input
from keras.models import load_model
import sys
from sklearn.cluster import KMeans
import csv
autoencoder_model_path = 'hw6_autoencoder.h5'
encoder_model_path = 'hw6_encoder.h5'
random.seed(24)
EPOCHS = 100
BATCH_SIZE = 5000
VALID_SPLIT = 0.1
def _shuffle(X):
randomize = np.arange(len(X))
np.random.shuffle(randomize)
return (X[randomize])
def split_valid_set(X_all, percentage):
all_data_size = len(X_all)
valid_data_size = int(floor(all_data_size * percentage))
X_all= _shuffle(X_all)
X_valid= X_all[0:valid_data_size]
X_train= X_all[valid_data_size:]
return X_train, X_valid
def model(x_train,encoding_dim=64):
# this is our input placeholder
input_img = Input(shape=(784,))
# encoder layers
encoded = Dense(512, activation='relu')(input_img)
encoded = Dense(256, activation='relu')(encoded)
encoded = Dense(128, activation='relu')(encoded)
#encoded = Dense(64, activation='relu')(encoded)
encoder_output = Dense(encoding_dim)(encoded)
# decoder layers
decoded = Dense(encoding_dim, activation='relu')(encoder_output)
#decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(256, activation='relu')(decoded)
decoded = Dense(512, activation='relu')(decoded)
decoded = Dense(784, activation='tanh')(decoded)
# construct the autoencoder model
autoencoder = Model(inputs=input_img, outputs=decoded)
# construct the encoder model for plotting
encoder = Model(inputs=input_img, outputs=encoder_output)
# compile autoencoder
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
encoder.summary()
# training
autoencoder.fit(x_train, x_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
shuffle=True,validation_split=VALID_SPLIT)
return autoencoder,encoder
def load_test(testfile='test_case.csv'):
test = []
with open(testfile) as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
test.append([row['image1_index'],row['image2_index']])
return test
"""
def plot(x_test,pre,n=10):
import matplotlib.pyplot as plt
# n = how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(pre[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
#def main():
def main(*args):
#imagedata = np.load('image.npy')
#args[0][1]
imagedata = np.load(args[0][1])
#data pre-processing
imagedata = imagedata.astype('float32') / 255. - 0.5
x_train,x_test = split_valid_set(imagedata,0.1)
#test = load_test('test_case.csv')
#args[0][2]
test = load_test(args[0][2])
"""
#use PCA -> Dimension Reduction
pca = PCA(n_components=64)
pca.fit(imagedata)
pca_extraction = pca.transform(imagedata)
print(pca_extraction.shape)
kmeans = KMeans(n_clusters=2, random_state=0).fit(pca_extraction)
y_kmeans = kmeans.predict(pca_extraction)
"""
"""
encoding_dim = 64
autoencoder,encoder=model(x_train,encoding_dim)
autoencoder.save('autoencoder.hdf5')
encoder.save('encoder.hdf5')
"""
autoencoder = load_model(autoencoder_model_path)
encoder = load_model(encoder_model_path)
encoded_imgs = encoder.predict(imagedata)
pre = autoencoder.predict(x_test)
#autoencoder.summary()
#print(encoded_imgs.shape)
#plot(x_test,pre,10)
kmeans = KMeans(n_clusters=2, random_state=0).fit(encoded_imgs)
y_kmeans = kmeans.predict(encoded_imgs)
answer = []
for index in range(len(test)):
answer.append([str(index)])
image1_result = y_kmeans[int(test[index][0])]
image2_result = y_kmeans[int(test[index][1])]
if image1_result == image2_result:
answer[index].append(1)
else:
answer[index].append(0)
#filename = "result.csv"
filename = args[0][3]
text = open(filename, "w+")
s = csv.writer(text,delimiter=',',lineterminator='\n')
s.writerow(["ID","Ans"])
for i in range(len(answer)):
s.writerow(answer[i])
text.close()
if __name__ == '__main__':
main(sys.argv)
#main()
|
{"hexsha": "e7d96cc0b38eec2f53fd954584f98098623940f3", "size": 4256, "ext": "py", "lang": "Python", "max_stars_repo_path": "2017 Fall/EE5184 - Machine Learning/homework/homework_06/hw6.py", "max_stars_repo_name": "Hsins/NTUCourse", "max_stars_repo_head_hexsha": "5a623a52761ceb649621b4c3f140697c8cdb5d88", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2019-05-05T03:59:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T03:18:36.000Z", "max_issues_repo_path": "2017 Fall/EE5184 - Machine Learning/homework/homework_06/hw6.py", "max_issues_repo_name": "Hsins/NTUCourse", "max_issues_repo_head_hexsha": "5a623a52761ceb649621b4c3f140697c8cdb5d88", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2017 Fall/EE5184 - Machine Learning/homework/homework_06/hw6.py", "max_forks_repo_name": "Hsins/NTUCourse", "max_forks_repo_head_hexsha": "5a623a52761ceb649621b4c3f140697c8cdb5d88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2019-04-12T15:02:49.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-28T13:08:42.000Z", "avg_line_length": 25.1834319527, "max_line_length": 66, "alphanum_fraction": 0.725093985, "include": true, "reason": "import numpy", "num_tokens": 1168}
|
# -*- coding: utf-8 -*-
"""
Created on Tue May 28 12:41:11 2019
@author: Nikita
"""
import pandas as pd
import numpy as np
# create a data frame - dictionary is used here where keys get converted to column names and values to row values.
data = pd.DataFrame({'Country': ['Russia', 'Colombia', 'Chile', 'Equador', 'Nigeria'],
'Rank': [121, 40, 100, 130, 11]})
print(data)
## to describe the data
data.describe()
print(data.describe())
print(data.info())
# now , work on sorting
data = pd.DataFrame({'group': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'], 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data.sort_values(by=['ounces'], ascending=True, inplace=False)
##inplace = True will make changes to the data
## multiple column sort
data.sort_values(by=['group', 'ounces'], ascending=[True, False], inplace=False)
## to remove the duplicates
data.drop_duplicates()
data = pd.DataFrame(
{'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami', 'corned beef', 'Bacon', 'pastrami', 'honey ham', 'nova lox'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
print(data)
data = pd.DataFrame(
{'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami', 'corned beef', 'Bacon', 'pastrami', 'honey ham', 'nova lox'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
## want to add animal based on what they eat
meat_to_animal = {
'bacon': 'pig',
'pulled pork': 'pig',
'pastrami': 'cow',
'corned beef': 'cow',
'honey ham': 'pig',
'nova lox': 'salmon'
}
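# The dictionary route (presumably what meat_to_animal was defined for, since
# it is otherwise unused below): map lowercased food names onto animals.
data['animal'] = data['food'].str.lower().map(meat_to_animal)
print(data)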
def meat_2_animal(series):
if series['food'] == 'bacon':
return 'pig'
elif series['food'] == 'pulled pork':
return 'pig'
elif series['food'] == 'pastrami':
return 'cow'
elif series['food'] == 'corned beef':
return 'cow'
elif series['food'] == 'honey ham':
return 'pig'
else:
return 'salmon'
lower = lambda x: x.lower()
data['food'] = data['food'].apply(lower)
data['animal2'] = data.apply(meat_2_animal, axis='columns')
print(data)
## SERIES
data = pd.Series([1., -999., 2., -999., -1000., 3.])
data
##Series function from pandas are used to create array
data = pd.Series([1., -999., 2., -999., -1000., 3.])
data.replace([-999, -1000], np.nan, inplace=True)
data
## categorize data into bins
ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]
data = pd.cut(ages, bins)
print(data)
# how many observations fall under each bin
counter = pd.value_counts(data)
print(counter)
## Group and Pivots in PANDAS
df = pd.DataFrame({'key1': ['a', 'a', 'b', 'b', 'a'],
'key2': ['one', 'two', 'one', 'two', 'one'],
'data1': np.random.randn(5),
'data2': np.random.randn(5)})
grouped = df['data1'].groupby(df['key1'])
grouped.mean()
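# The heading above mentions pivots, so here is a small pivot_table sketch
# (added; not in the original script) on the same frame:
pivot = df.pivot_table(values='data1', index='key1', columns='key2', aggfunc='mean')
print(pivot)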
import random
# read the data from the downloaded CSV file.
data = pd.read_csv('https://s3-eu-west-1.amazonaws.com/shanebucket/downloads/uk-500.csv')
# set a numeric id for use as an index for examples.
data['id'] = [random.randint(0, 1000) for x in range(data.shape[0])]
data = data.head(5)
print(data)
data.iloc[0] # first row of data frame (Aleshia Tomkiewicz) - Note a Series data type output.
print(data.iloc[0])
data.iloc[1] # second row of data frame (Evan Zigomalas)
data.iloc[-1] # last row of data frame (Mi Richan)
# Columns:
print(data.iloc[:, 0]) # first column of data frame (first_name)
data.iloc[:, 1] # second column of data frame (last_name)
|
{"hexsha": "ef3a3e9c6e7b4a1f4e725edd03cb2d936e8c647c", "size": 3563, "ext": "py", "lang": "Python", "max_stars_repo_path": "DataSciencePractice/DataScience/pandas.py", "max_stars_repo_name": "47shubh/blog", "max_stars_repo_head_hexsha": "79f349411c7cfbbc52010f5627401b33c74cc40b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DataSciencePractice/DataScience/pandas.py", "max_issues_repo_name": "47shubh/blog", "max_issues_repo_head_hexsha": "79f349411c7cfbbc52010f5627401b33c74cc40b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DataSciencePractice/DataScience/pandas.py", "max_forks_repo_name": "47shubh/blog", "max_forks_repo_head_hexsha": "79f349411c7cfbbc52010f5627401b33c74cc40b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5309734513, "max_line_length": 121, "alphanum_fraction": 0.5919169239, "include": true, "reason": "import numpy", "num_tokens": 1121}
|
'''
Created on 14/6/2020
@author: Neil Symington
This script is for converting aseg-gdf EM data to a netcdf file. The netcdf file will also include some additional
AEM system metadata.
'''
from geophys_utils.netcdf_converter import aseg_gdf2netcdf_converter
import netCDF4
import os, math
import numpy as np
# SO we can see the logging. This enables us to debug
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
# Define paths
root = "/home/nsymington/Documents/GA/AEM/LCI"
nc_out_path = os.path.join(root, "Galilee_WB_MGA55.nc")
dat_in_path = os.path.join(root, 'aseg_gdf', 'Galilee_WB_MGA55.dat')
dfn_in_path = os.path.join(root, 'aseg_gdf', 'Galilee_WB_MGA55.dfn')
# GDA94 MGA zone 55
crs_string = "epsg:28355"
# Initialise instance of ASEG2GDF netcdf converter
d2n = aseg_gdf2netcdf_converter.ASEGGDF2NetCDFConverter(nc_out_path,
dat_in_path,
dfn_in_path,
crs_string,
fix_precision=True,
remove_null_columns = False)
d2n.convert2netcdf()
# Here we do some processing to ensure our lci file is somewhat standard
# Create a python object with the lci dataset
d = netCDF4.Dataset(nc_out_path, "a")
# For consistency let's convert mS/m to S/m
d['conductivity'][:] = 0.001*d['conductivity'][:]
d['conductivity'].units = 'S/m'
top_layer = d['elevation'][0] - d['layer_top_elevation'][0]
top_layers = np.array([round(x,2) for x in top_layer.data])
layer_top_depth = np.zeros(shape = d['conductivity_(masked_to_DOI)'][:].shape, dtype = np.float32)
layer_top_depth[:] = np.tile(top_layers, d['conductivity'].shape[0]).reshape(d['conductivity'].shape)
ltop = d.createVariable("layer_top_depth","f8",("point","layer"))
ltop[:] = layer_top_depth
ltop.long_name = "Depth to the top of the layer"
ltop.units = "m"
ltop.aseg_gdf_format = "30E9.3"
d.close()
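# A quick read-back check (added sketch; not part of the original conversion):
# confirm the new variable and the unit conversion landed in the file.
#
#   check = netCDF4.Dataset(nc_out_path, "r")
#   print(check['layer_top_depth'].shape, check['layer_top_depth'].units)
#   print(float(check['conductivity'][:].max()))   # should now be in S/m
#   check.close()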
|
{"hexsha": "44d9a206c6ade5b3e2fab7896cb78201ffca8516", "size": 2061, "ext": "py", "lang": "Python", "max_stars_repo_path": "utils/conversion/lci_data_conversion.py", "max_stars_repo_name": "Neil-Symington/aem_interp_dash", "max_stars_repo_head_hexsha": "f7c6f385838b455a2c1e9d3a1db5f675a327b8dd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-09-08T05:38:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-25T11:37:12.000Z", "max_issues_repo_path": "utils/conversion/lci_data_conversion.py", "max_issues_repo_name": "Neil-Symington/aem_interp_dash", "max_issues_repo_head_hexsha": "f7c6f385838b455a2c1e9d3a1db5f675a327b8dd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-07-29T04:30:07.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-28T04:32:15.000Z", "max_forks_repo_path": "utils/conversion/lci_data_conversion.py", "max_forks_repo_name": "Neil-Symington/aem_interp_dash", "max_forks_repo_head_hexsha": "f7c6f385838b455a2c1e9d3a1db5f675a327b8dd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-12-02T02:10:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-25T11:39:03.000Z", "avg_line_length": 30.3088235294, "max_line_length": 114, "alphanum_fraction": 0.6608442504, "include": true, "reason": "import numpy", "num_tokens": 539}
|
from decimal import Decimal
def euler(f, a, b, n, y_0):
    h = Decimal(b - a) / Decimal(n)
    vals = []
    vals.append(y_0)
    print("Index\t | t | Approximation (u)")
    print("0\t | 0 |\t" + str(y_0))
    # n Euler steps take the solution from t = a all the way to t = b
    for i in range(0, n):
        tj = a + i * h
        x = vals[i] + h * f(tj, Decimal(vals[i]))
        vals.append(x)
        print(str(i + 1) + "\t | " + str(tj) + " |" + "\t" + str(x))
    return vals
def f(t,x):
return -x + t + 1
f0 = 1
euler(f,0,1,10,f0)
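# Check against the exact solution (added sketch): for x' = -x + t + 1 with
# x(0) = 1, the exact solution is x(t) = t + exp(-t), so x(1) = 1 + e**-1.
#
#   from math import exp
#   print("exact x(1) =", 1 + exp(-1))   # ~1.3679, compare with the last u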
|
{"hexsha": "9618b6567a2b2ba2b7280ca8ca19eb3ca8076de7", "size": 573, "ext": "py", "lang": "Python", "max_stars_repo_path": "practica3/1.py", "max_stars_repo_name": "danipozo/practicas-mnii", "max_stars_repo_head_hexsha": "f4afe725316c694a4cd06e2ce3c0019f4f68652f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-03-13T18:10:38.000Z", "max_stars_repo_stars_event_max_datetime": "2018-03-13T18:10:38.000Z", "max_issues_repo_path": "practica3/1.py", "max_issues_repo_name": "danipozo/practicas-mnii", "max_issues_repo_head_hexsha": "f4afe725316c694a4cd06e2ce3c0019f4f68652f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "practica3/1.py", "max_forks_repo_name": "danipozo/practicas-mnii", "max_forks_repo_head_hexsha": "f4afe725316c694a4cd06e2ce3c0019f4f68652f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-03-13T22:35:02.000Z", "max_forks_repo_forks_event_max_datetime": "2018-03-13T22:35:02.000Z", "avg_line_length": 19.7586206897, "max_line_length": 56, "alphanum_fraction": 0.520069808, "include": true, "reason": "import numpy,from numpy,import scipy", "num_tokens": 207}
|
'''ResNet in PyTorch.
For Pre-activation ResNet, see 'preact_resnet.py'.
Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from diac_h2h.networks.pytorch.thirdparty.kuangliu_models.resnet import BasicBlock
from diac_h2h.networks.pytorch.thirdparty.kuangliu_models.resnet import Bottleneck
def get_config(space):
config = {}
if space == "s1":
config["block_ops"] = [0, 1]
config["fs"] = [16, 1024]
config["num_blocks"] = [1, 4]
elif space == "s2":
config["block_ops"] = [0, 1]
config["fs"] = [16, 512]
config["num_blocks"] = [1, 3]
elif space == "s3":
config["block_ops"] = [0, 1]
config["fs"] = [16, 128]
config["num_blocks"] = [1, 3]
elif space == "s4":
config["block_ops"] = [0, 1]
config["fs"] = [4, 32]
config["num_blocks"] = [1, 3]
elif space == "s5":
config["block_ops"] = [0, 1]
config["fs"] = [2, 8]
config["num_blocks"] = [1, 3]
else:
raise ValueError("Space {} not defined!".format(space))
return config
def sample(config, id):
rstate = np.random.RandomState(id)
blocks = rstate.choice(config["block_ops"], 4)
fs = rstate.randint(*config["fs"], 5)
num_blocks = rstate.randint(*config["num_blocks"], 4)
return dict(blocks=blocks, fs=fs, num_blocks=num_blocks)
def get_instance(net_args, num_classes):
return ResNet_modified(num_c=num_classes, **net_args)
class ResNet_modified(nn.Module):
def __init__(self, blocks, fs, num_blocks, num_c=10):
super(ResNet_modified, self).__init__()
assert len(fs) == 5
assert len(num_blocks) == 4
assert len(blocks) == 4
blocks = list(map(self._int2block, blocks))
self.in_planes = fs[0]
self.conv1 = nn.Conv2d(3, fs[0], kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(fs[0])
self.layer1 = self._make_layer(blocks[0], fs[1], num_blocks[0], stride=1)
self.layer2 = self._make_layer(blocks[1], fs[2], num_blocks[1], stride=2)
self.layer3 = self._make_layer(blocks[2], fs[3], num_blocks[2], stride=2)
self.layer4 = self._make_layer(blocks[3], fs[4], num_blocks[3], stride=2)
self.linear = nn.Linear(fs[4]*blocks[3].expansion, num_c)
def _int2block(self, block):
if block == 0:
return BasicBlock
elif block == 1:
return Bottleneck
else:
raise ValueError("block code {} not supported".format(block))
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
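# A minimal end-to-end sketch (added; the config id and input batch are made
# up for illustration): draw an architecture from space "s4" and run a dummy
# CIFAR-sized batch through it.
#
#   cfg = get_config("s4")
#   net_args = sample(cfg, id=0)
#   net = get_instance(net_args, num_classes=10)
#   x = torch.randn(2, 3, 32, 32)
#   print(net(x).shape)            # expected: torch.Size([2, 10])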
|
{"hexsha": "b4c9df5757f0733fb0968c043bb0ee60afb9f693", "size": 3349, "ext": "py", "lang": "Python", "max_stars_repo_path": "networks/iotnets/random_net_resnet.py", "max_stars_repo_name": "dengliming/iotnets", "max_stars_repo_head_hexsha": "db744e56769c799dbf765a27fc5aa91e3edeaaa3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "networks/iotnets/random_net_resnet.py", "max_issues_repo_name": "dengliming/iotnets", "max_issues_repo_head_hexsha": "db744e56769c799dbf765a27fc5aa91e3edeaaa3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "networks/iotnets/random_net_resnet.py", "max_forks_repo_name": "dengliming/iotnets", "max_forks_repo_head_hexsha": "db744e56769c799dbf765a27fc5aa91e3edeaaa3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.5145631068, "max_line_length": 88, "alphanum_fraction": 0.6043595103, "include": true, "reason": "import numpy", "num_tokens": 959}
|
#ifndef _ROBOT_PLUGIN_HH_
#define _ROBOT_PLUGIN_HH_
#include <ros/ros.h>
#include <ros/callback_queue.h>
#include <ros/subscribe_options.h>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <gazebo_braitenberg_robot/Sensor.h>
#include <thread>
#include <math.h>
#include <Eigen/Dense>
using namespace std;
using namespace Eigen;
typedef Matrix<float, 2, 4> Matrix2_4f; // custom 2x4 matrix type; Eigen has no predefined alias of this size
namespace gazebo{
/// \brief A plugin to control a MyRobot sensor.
class RobotPlugin : public ModelPlugin{
private:
/// \brief Pointer to the model.
physics::ModelPtr model;
/// \brief A node use for ROS transport
unique_ptr<ros::NodeHandle> rosNode;
/// \brief A ROS subscriber
ros::Subscriber rosSub;
/// \brief A ROS callbackqueue that helps process messages
ros::CallbackQueue rosQueue;
/// \brief A thread that keeps running the rosQueue
thread rosQueueThread;
/// \brief Matrix used for each iteration
Matrix2f cst;
Matrix2_4f coeff;
int MAX_SPEED; // speed in radian/s of wheels
int BEHAVIOR; // behavior of robot (following/avoiding light)
public:
/// \brief tied to behavior
const static int FOLLOW = 0;
const static int AVOID = 1;
/// \brief Constructor
RobotPlugin() {}
/// \brief The load function is called by Gazebo when the plugin is
/// inserted into simulation
/// \param[in] _model A pointer to the model that this plugin is
/// attached to.
/// \param[in] _sdf A pointer to the plugin's SDF element.
virtual void Load(physics::ModelPtr _model, sdf::ElementPtr _sdf){
// Safety check
if(_model->GetJointCount() == 0){
cerr << "Invalid joint count, MyRobot plugin not loaded\n";
return;
}
// Store the model pointer for convenience.
this->model = _model;
// Check that the sdf elements exist, then read the values
if (_sdf->HasElement("velocity"))
MAX_SPEED = _sdf->Get<int>("velocity");
if (_sdf->HasElement("behavior"))
BEHAVIOR = _sdf->Get<int>("behavior");
// Set up matrix
cst << 1, 1,
1, -1;
switch(BEHAVIOR){
case FOLLOW :
coeff << 4, 6, 6, 4,
-4, -4, 4, 4;
break;
case AVOID :
coeff << 4, 6, 6, 4,
4, 4, -4, -4;
break;
default:
// FOLLOW
coeff << 4, 6, 6, 4,
-4, -4, 4, 4;
break;
}
// Initialize ros, if it has not already been initialized.
if(!ros::isInitialized()){
int argc = 0;
char **argv = NULL;
ros::init(argc, argv, "gazebo",
ros::init_options::NoSigintHandler);
}
// Create our ROS node. This acts in a similar manner to
// the Gazebo node
this->rosNode.reset(new ros::NodeHandle("gazebo_client"));
// Create a named topic, and subscribe to it.
ros::SubscribeOptions so =
ros::SubscribeOptions::create<gazebo_braitenberg_robot::Sensor>(
"/lightSensor",
100,
boost::bind(&RobotPlugin::onRosMsg, this, _1),
ros::VoidPtr(), &this->rosQueue);
this->rosSub = this->rosNode->subscribe(so);
// Spin up the queue helper thread.
this->rosQueueThread =
thread(bind(&RobotPlugin::QueueThread, this));
}
/// \brief Handle an incoming message from ROS
/// \param[in] data Sensors data that is used to set the velocity
/// of the MyRobot.
void onRosMsg(const gazebo_braitenberg_robot::SensorConstPtr &msg){
VectorXf sensors(msg->data.size());
for(int i = 0; i < msg->data.size(); i++)
sensors(i) = msg->data[i] / 60;
Vector2f vel = coeff * sensors;
Vector2f wheel_speed = cst * vel;
float k = max(wheel_speed(0), wheel_speed(1)); // scale wheel speed on MAX_SPEED
if(k == 0)
k = 1;
setVelocity(wheel_speed(0) * MAX_SPEED / k, wheel_speed(1) * MAX_SPEED / k);
}
/// \brief Set the velocity of the MyRobot
/// \param[in] l New left target velocity
/// \param[in] r New right target velocity
void setVelocity(const double &l, const double &r){
this->model->GetJoint("my_robot::left_wheel_hinge")->SetVelocity(0, l);
this->model->GetJoint("my_robot::right_wheel_hinge")->SetVelocity(0, r);
}
private:
/// \brief ROS helper function that processes messages
void QueueThread(){
static const double timeout = 0.01;
while (this->rosNode->ok())
{
this->rosQueue.callAvailable(ros::WallDuration(timeout));
}
}
};
// Tell Gazebo about this plugin, so that Gazebo can call Load on this plugin.
GZ_REGISTER_MODEL_PLUGIN(RobotPlugin)
}
#endif
|
{"hexsha": "849bcb52041130c7285668db2b2479275903822b", "size": 4634, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/robot_plugin.cpp", "max_stars_repo_name": "merlin24u/Gazebo_Braitenberg_Robot", "max_stars_repo_head_hexsha": "5c58d64411c6aee5d071b67f498977205d1490f8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/robot_plugin.cpp", "max_issues_repo_name": "merlin24u/Gazebo_Braitenberg_Robot", "max_issues_repo_head_hexsha": "5c58d64411c6aee5d071b67f498977205d1490f8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/robot_plugin.cpp", "max_forks_repo_name": "merlin24u/Gazebo_Braitenberg_Robot", "max_forks_repo_head_hexsha": "5c58d64411c6aee5d071b67f498977205d1490f8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6049382716, "max_line_length": 108, "alphanum_fraction": 0.6426413466, "num_tokens": 1280}
|
export datachunk
export split_into_n, split_tod_mpi, get_chunk_properties
import Healpix
import CorrNoise
using Random
using FITSIO
try
import MPI
catch
end
"""
This structure holds a number of parameters relative to a certain chunk of data.
Field | Type | Meaning
:----------------- |:-------------- |:------------------------------------------------------------
`pol_number` | Int | ID number of the polarimeter
`first_idx` | Int | Index of the first element of the chunk
`last_idx` | Int | Index of the last element of the chunk
`num_of_elements` | Int | Total number of elements of the chunk
# Example
given the following array of data: [1, 10, 20, 3, 4, 5, 7]
measured by the polarimeter number 2
the chunk: [20, 3, 4]
will correspond to the following structure:
datachunk(2, 3, 5, 3)
"""
struct datachunk
pol_number::Int
first_idx::Int
last_idx::Int
num_of_elements::Int
end
"""
function split_into_n(length, num_of_segments)
Given the `length` of an array, it conveniently splits it into
`num_of_segments` sections of as similar length as possible.
It returns an array containing the number of elements of each section.
# Example
julia> split_into_n(20, 3)
3-element Array{Int64,1}:
6
7
7
"""
function split_into_n(length, num_of_segments)
@assert num_of_segments > 0
@assert length >= num_of_segments
start_pos = zeros(Int, num_of_segments+1)
for i in 1:num_of_segments+1
start_pos[i] = floor(((i-1)*length/num_of_segments))
end
return start_pos[2:end]-start_pos[1:end-1]
end
"""
function split_tod_mpi(total_time, baseline_length_s, baselines_per_process, num_of_MPI_proc)
This function can be used to split the TOD production of many polarimeters among MPI processes.
It requires in input:
-the total time (in seconds) of the simulated observation
-the length (in seconds) of each 1/f noise baseline
-the array containing the number of 1/f baselines to simulate for each process.
It can be obtained by using the function `split_into_n` in the following way:
split_into_n(num_of_polarimeters*baselines_per_pol, num_of_MPI_proc)
where baselines_per_pol = Int64(total_time/baseline_length_s) is the number of baselines of each polarimeter
-the number of MPI processes used
It returns an array of arrays of `datachunk` instances, of length == num_of_MPI_proc
where each element tells the chunk of data that each process should simulate (see Example)
# Example
```julia-repl
julia> num_of_polarimeters = 4
julia> num_of_MPI_proc = 3
julia> total_time = 50
julia> baseline_length_s = 10
julia> baselines_per_pol = Int64(total_time/baseline_length_s)
5
julia> baselines_per_process = split_into_n(20, 3)
3-element Array{Int64,1}:
6
7
7
julia> chunks = split_tod_mpi(total_time, baseline_length_s, baselines_per_process, num_of_MPI_proc)
3-element Array{Any,1}:
Any[datachunk(1, 1, 5, 5), datachunk(2, 1, 1, 1)]
Any[datachunk(2, 2, 5, 4), datachunk(3, 1, 3, 3)]
Any[datachunk(3, 4, 5, 2), datachunk(4, 1, 5, 5)]
```
which means:
- process number 0 should simulate:
    polarimeter number 1 from baseline 1 to baseline 5, total number of baselines = 5
    polarimeter number 2 from baseline 1 to baseline 1, total number of baselines = 1
- process number 1 should simulate:
    polarimeter number 2 from baseline 2 to baseline 5, total number of baselines = 4
    polarimeter number 3 from baseline 1 to baseline 3, total number of baselines = 3
- process number 2 should simulate:
    polarimeter number 3 from baseline 4 to baseline 5, total number of baselines = 2
    polarimeter number 4 from baseline 1 to baseline 5, total number of baselines = 5
"""
function split_tod_mpi(total_time, baseline_length_s, baselines_per_process, num_of_MPI_proc)
duration = Int64(total_time/baseline_length_s)
#initialization
detector_num = 1
sample_idx = 0
samples_in_det = duration
result = []
for rank_num in 0:(num_of_MPI_proc-1) #loop on MPI processes
samples_for_this_process = baselines_per_process[rank_num+1]
samples_left = samples_for_this_process
data_this_rank = []
while samples_left > 0 #loop on detectors
#if the current detector has more samples than needed to fill the current MPI process
if samples_in_det > samples_left
first_idx = sample_idx+1
last_idx = sample_idx+samples_left
data = datachunk(detector_num, first_idx, last_idx, samples_left)
data_this_rank = append!(data_this_rank, [data])
sample_idx = sample_idx + samples_left
samples_in_det = samples_in_det - samples_left
samples_left = 0
            #if the current detector does not have enough samples to provide the current MPI process
            #with the required number; in this case we need to increase "detector_num" before the next iteration
else
first_idx = sample_idx+1
last_idx = sample_idx+samples_in_det
data = datachunk(detector_num, first_idx, last_idx, samples_in_det)
data_this_rank = append!(data_this_rank, [data])
samples_left = samples_left - samples_in_det
samples_in_det = 0
end
if samples_in_det == 0
detector_num +=1
sample_idx = 0
samples_in_det = duration
end
end
result = append!(result, [data_this_rank])
end
return result
end
"""
function get_chunk_properties(chunks, baseline_length_s, fsamp_hz, rank)
Given:
- the data chunks (which can be obtained by using the function `split_tod_mpi`)
- the length (in seconds) of each 1/f noise baseline
- the sampling frequency (in Hz)
- the number of current MPI rank
this function extracts useful information to perform the TOD simulation in the current rank.
It returns a tuple containing 5 arrays:
- the ID number of the polarimeters that the current rank will simulate
- the start time of the acquisition portion for each polarimeter
- the stop time of the acquisition portion for each polarimeter
- the number of 1/f baselines for each polarimeter
- the total number of samples for each polarimeter
# Example
```julia-repl
julia> baseline_length_s = 10
julia> fsamp_hz = 10
julia> rank = 1
julia> chunks = [[datachunk(1, 1, 5, 5), datachunk(2, 1, 1, 1)], [datachunk(2, 2, 5, 4), datachunk(3, 1, 3, 3)], [datachunk(3, 4, 5, 2), datachunk(4, 1, 5, 5)]]
julia> get_chunk_properties(chunks, baseline_length_s, fsamp_hz, rank)
([2, 3], [10.0, 0.0], [50.0, 30.0], [4, 3], [400, 300])
```
which means that rank 1 will simulate:
- polarimeter number 2 from 10 s (from the start of the acquisition) to 50 s,
  with a total of 4 1/f baselines and 400 samples;
- polarimeter number 3 from 0 s (from the start of the acquisition) to 30 s,
  with a total of 3 1/f baselines and 300 samples.
"""
function get_chunk_properties(chunks, baseline_length_s, fsamp_hz, rank)
this_rank_chunk = chunks[rank+1]
first_time, last_time = [Array{Float64}(undef, length(this_rank_chunk)) for i in (1:2)]
detector_number, num_of_baselines, baseline_len, num_of_samples = [Array{Int64}(undef, length(this_rank_chunk)) for i in (1:4)]
for i in 1:length(this_rank_chunk)
detector_number[i] = this_rank_chunk[i].pol_number
first_time[i] = (this_rank_chunk[i].first_idx-1)*baseline_length_s
last_time[i] = this_rank_chunk[i].last_idx*baseline_length_s -0.99*(1/fsamp_hz)
num_of_baselines[i] = this_rank_chunk[i].num_of_elements
num_of_samples[i] = num_of_baselines[i]*baseline_length_s*fsamp_hz
end
return (detector_number, first_time, last_time, num_of_baselines, num_of_samples)
end
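# A hedged end-to-end sketch (added; not part of the original file), tying the
# three functions together: 4 polarimeters, 3 MPI processes, 50 s of data with
# 10 s baselines sampled at 10 Hz, then inspecting what rank 1 must simulate.
#
#     baselines_per_pol = Int64(50 / 10)
#     per_proc = split_into_n(4 * baselines_per_pol, 3)
#     chunks = split_tod_mpi(50, 10, per_proc, 3)
#     get_chunk_properties(chunks, 10, 10, 1)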
|
{"hexsha": "6f19bb17289d8744c097720b5271d6da7f1046d8", "size": 8756, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/tod_splitter.jl", "max_stars_repo_name": "fincardona/Stripeline.jl", "max_stars_repo_head_hexsha": "e4dd169f9952e26b16292dccd44ce64cf69db67e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tod_splitter.jl", "max_issues_repo_name": "fincardona/Stripeline.jl", "max_issues_repo_head_hexsha": "e4dd169f9952e26b16292dccd44ce64cf69db67e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tod_splitter.jl", "max_forks_repo_name": "fincardona/Stripeline.jl", "max_forks_repo_head_hexsha": "e4dd169f9952e26b16292dccd44ce64cf69db67e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6359832636, "max_line_length": 168, "alphanum_fraction": 0.6298538145, "num_tokens": 2220}
|
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
!
! File iomex.f90
!
! snPRNT ioTRIM snREAD
!
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine snPRNT ( mode, string, iw, leniw )
implicit none
character*(*) string
integer mode, leniw, iw(leniw)
!=====================================================================
! snPRNT prints a trimmed form of "string" on various files.
! If mode = 0, nothing is output.
! If mode = 1, string is output to iPrint.
! If mode = 2, string is output to iSumm.
! If mode = 3 or 4, string is output to iPrint and iSumm.
! If mode = 4, string is output to the screen.
! This mode is intended for error messages.
! If mode = 5, string is output to iStdo (standard output)
! This mode is to be used when the elements of
! the integer work array iw cannot be trusted.
!
! mode 11-15 are the same as mode 1-5 with blank line before output.
!
! If mode > 15 then nothing is printed unless lvlSys > 0.
! mode 21-25 are the same as mode 1-5
! mode 31-35 are the same as mode 11-15
!
! 25 Sep 2002: First version of snPRNT.
! 31 Jul 2003: mode 11-14 added. form introduced.
! 27 Dec 2003: mode 5 added to allow printing before iw is set.
! 12 Mar 2004: s1trim called to trim the string.
! 22 Jun 2004: System printing option added.
! 14 Oct 2004: Matlab version of snPRNT.
! 30 Apr 2006: Files opened and closed in C.
!=====================================================================
integer iPrint, iSumm, length, lvlSys, m, newline, &
screenOK, summaryOK, printOK
character Buff*140
lvlSys = iw( 91) ! > 0 => print system info
newline = 0
m = 0
if (mode .le. 0) then
! Relax
else if (mode .lt. 10) then
m = mode
else if (mode .lt. 20) then ! Blank line first
m = mode - 10
newline = 1
else if (lvlSys .gt. 0) then ! Print system Info
if (mode .lt. 30) then
m = mode - 20
else
m = mode - 30
newline = 1
end if
end if
if (m .gt. 0) then
call iomexfilestatus( screenOK, summaryOK, printOK )
! length = len_trim(string) ! An F90 intrinsic
call ioTRIM( string, length ) ! The F77 equivalent
Buff = string
if (m .eq. 5) then
call iomexwritescreen( newline, Buff, length)
else
iPrint = iw( 12) ! Print file
iSumm = iw( 13) ! Summary file
if (m .eq. 1 .or. m .ge. 3) then
if (printOK .gt. 0) then
call iomexwritefile( newline, iPrint, Buff, length )
end if
end if
if (m .eq. 2 .or. m .ge. 3) then
if (screenOK .gt. 0) then
call iomexwritescreen( newline, Buff, length)
end if
if (summaryOK .gt. 0) then
call iomexwritefile( newline, iSumm , Buff, length )
end if
end if
if (m .eq. 4) then
call iomexwritescreen( newline, Buff, length)
end if
end if
end if
end subroutine snPRNT
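!     Illustrative usage (comment added, not part of the original file),
!     assuming iw/leniw are the usual SNOPT integer workspace:
!        call snPRNT(  2, ' message to the summary file', iw, leniw )
!        call snPRNT( 12, ' same, preceded by a blank line', iw, leniw )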
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine ioTrim( buffer, lenbuf )
implicit none
character*(*) buffer
integer lenbuf
!===================================================================
! ioTrim returns the length of buffer with trailing blanks omitted.
!
! 02 Dec 2000: First version written for snLog and snLog2.
!===================================================================
integer k
lenbuf = len( buffer )
do k = lenbuf, 2, -1
if (buffer(k:k) .ne. ' ') go to 100
lenbuf = lenbuf - 1
end do
100 return
end subroutine ioTrim
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine snREAD ( unitno, string, nchar, endfile )
implicit none
character*(*) string
integer endfile, nchar, unitno
!===================================================================
! snREAD reads a string of length nchar from file unitno.
!
! 30 Apr 2006: First version of snREAD.
! 30 Apr 2006: Matlab version.
!===================================================================
call iomexRead ( unitno, string, nchar, endfile )
end subroutine snREAD
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine iomexFileStatus ( scrnOK, summOK, prntOK )
use mxsnWork
implicit none
integer scrnOK, summOK, prntOK
if ( screenOn .and. callType == 1 ) then
scrnOK = 1
else
scrnOK = 0
end if
if ( summOpen .and. callType == 1 ) then
summOK = 1
else
summOK = 0
end if
if ( printOpen .and. callType == 1 ) then
prntOK = 1
else
prntOK = 0
end if
end subroutine iomexFileStatus
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine iomexwriteScreen ( newline, buffer, length )
implicit none
character*(*) buffer
integer newline, length
if ( length > 140 ) &
call mexErrMsgTxt ( 'Print buffer too long for snPRNT' )
if ( newline > 0 ) call mexPrintf ( achar(10) )
call mexPrintf ( buffer(1:length)//achar(10) )
end subroutine iomexwriteScreen
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine iomexwriteFile ( newline, unitno, buffer, length )
use mxsnWork
implicit none
character*(*) buffer
integer newline, unitno, length
if ( length > 140 ) &
call mexErrMsgTxt ( 'Print buffer too long for snPRNT' )
if ( unitno == iPrint ) then
if ( newline > 0 ) then
write(iPrint,'(/,a)') buffer(1:length)
else
write(iPrint,'(a)') buffer(1:length)
end if
else if ( unitno == iSumm ) then
if ( newline > 0 ) then
write(iSumm,'(/,a)') buffer(1:length)
else
write(iSumm,'(a)') buffer(1:length)
end if
end if
end subroutine iomexwriteFile
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
subroutine iomexRead ( unitno, string, nchar, endfile )
implicit none
character*(*) string
integer endfile, nchar, unitno
character frmt*6
frmt = ' '
if (nchar .ge. 1 .and. nchar .le. 999) then
if (nchar .lt. 10) then
write(frmt, '(a2,i1,a1)') '(a', nchar, ')'
else if (nchar .lt. 100) then
write(frmt, '(a2,i2,a1)') '(a', nchar, ')'
else
write(frmt, '(a2,i3,a1)') '(a', nchar, ')'
end if
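         ! e.g. nchar = 72 yields frmt = '(a72)' (comment added)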
endfile = 0
read (unitno, frmt, end = 100) string
return
end if
100 endfile = 1
end subroutine iomexRead
|
{"hexsha": "429007b297156f07416936360045c1728ced291d", "size": 6703, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "mex/iomex.f90", "max_stars_repo_name": "sh-cau/snopt-matlab", "max_stars_repo_head_hexsha": "b2222596b0d02347f9c3708ac7e6a8f727bc35bc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 49, "max_stars_repo_stars_event_min_datetime": "2016-03-15T21:01:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T01:31:01.000Z", "max_issues_repo_path": "mex/iomex.f90", "max_issues_repo_name": "sh-cau/snopt-matlab", "max_issues_repo_head_hexsha": "b2222596b0d02347f9c3708ac7e6a8f727bc35bc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2016-04-01T21:36:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-28T09:21:14.000Z", "max_forks_repo_path": "mex/iomex.f90", "max_forks_repo_name": "sh-cau/snopt-matlab", "max_forks_repo_head_hexsha": "b2222596b0d02347f9c3708ac7e6a8f727bc35bc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 22, "max_forks_repo_forks_event_min_datetime": "2015-02-22T19:18:16.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-13T06:53:03.000Z", "avg_line_length": 27.1376518219, "max_line_length": 72, "alphanum_fraction": 0.505445323, "num_tokens": 1945}
|
module stpdwmod
!$$$ module documentation block
! . . . .
! module: stpdwmod module for stpdw and its tangent linear stpdw_tl
! prgmmr:
!
! abstract: module for stpdw and its tangent linear stpdw_tl
!
! program history log:
! 2005-05-18 Yanqiu zhu - wrap stpdw and its tangent linear stpdw_tl into one module
! 2005-11-16 Derber - remove interfaces
! 2008-12-02 Todling - remove stpdw_tl
! 2009-08-12 lueken - updated documentation
! 2010-05-13 todling - uniform interface across stp routines
! 2016-05-18 guo - replaced ob_type with polymorphic obsNode through type casting
!
! subroutines included:
! sub stpdw
!
! attributes:
! language: f90
! machine:
!
!$$$ end documentation block
implicit none
PRIVATE
PUBLIC stpdw
contains
subroutine stpdw(dwhead,rval,sval,out,sges,nstep)
!$$$ subprogram documentation block
! . . . .
! subprogram: stpdw calculate contribution to penalty and
! stepsize from dw, with nonlinear qc added.
! prgmmr: derber org: np23 date: 1991-02-26
!
! abstract: calculate contribution to penalty and stepsize from lidar winds
!
! program history log:
! 1991-02-26 derber
! 1999-11-22 yang
! 2004-07-29 treadon - add only to module use, add intent in/out
! 2004-10-07 parrish - add nonlinear qc option
! 2005-04-11 treadon - merge stpdw and stpdw_qc into single routine
! 2005-08-02 derber - modify for variational qc parameters for each ob
! 2005-09-28 derber - consolidate location and weight arrays
! 2007-03-19 tremolet - binning of observations
! 2007-07-28 derber - modify to use new inner loop obs data structure
! - unify NL qc
! 2007-06-04 derber - use quad precision to get reproducibility over number of processors
! 2008-04-11 safford - rm unused vars
! 2008-12-03 todling - changed handling of ptr%time
! 2010-01-04 zhang,b - bug fix: accumulate penalty for multiple obs bins
! 2010-05-13 todling - update to use gsi_bundle; update interface
!
! input argument list:
! dwhead
! ru - search direction for u
! rv - search direction for v
! su - current analysis increment for u
! sv - current analysis increment for v
! sges - step size estimates (4)
! nstep- number of step sizes (== 0 means use outer iteration values)
!
! output argument list:
! out(1:nstep) - penalty contribution from lidar winds sges(1:nstep)
!
! attributes:
! language: f90
! machine: ibm RS/6000 SP
!
!$$$
use kinds, only: r_kind,i_kind,r_quad
use qcmod, only: nlnqc_iter,varqc_iter
use constants, only: half,one,two,tiny_r_kind,cg_term,zero_quad,r3600
use gsi_bundlemod, only: gsi_bundle
use gsi_bundlemod, only: gsi_bundlegetpointer
use m_obsNode, only: obsNode
use m_dwNode , only: dwNode
use m_dwNode , only: dwNode_typecast
use m_dwNode , only: dwNode_nextcast
implicit none
! Declare passed variables
class(obsNode),pointer,intent(in):: dwhead
integer(i_kind) ,intent(in ) :: nstep
real(r_quad),dimension(max(1,nstep)),intent(inout) :: out
type(gsi_bundle) ,intent(in ) :: rval,sval
real(r_kind),dimension(max(1,nstep)),intent(in ) :: sges
! Declare local variables
integer(i_kind) j1,j2,j3,j4,j5,j6,j7,j8,kk,ier,istatus
real(r_kind) valdw,facdw,w1,w2,w3,w4,w5,w6,w7,w8
real(r_kind) pg_dw,dw
real(r_kind),dimension(max(1,nstep))::pen
real(r_kind) cg_dw,wgross,wnotgross
real(r_kind),pointer,dimension(:) :: su,sv
real(r_kind),pointer,dimension(:) :: ru,rv
type(dwNode), pointer :: dwptr
out=zero_quad
! If no dw data return
if(.not. associated(dwhead))return
! Retrieve pointers
! Simply return if any pointer not found
ier=0
call gsi_bundlegetpointer(sval,'u',su,istatus);ier=istatus+ier
call gsi_bundlegetpointer(sval,'v',sv,istatus);ier=istatus+ier
call gsi_bundlegetpointer(rval,'u',ru,istatus);ier=istatus+ier
call gsi_bundlegetpointer(rval,'v',rv,istatus);ier=istatus+ier
if(ier/=0)return
dwptr => dwNode_typecast(dwhead)
do while (associated(dwptr))
if(dwptr%luse)then
if(nstep > 0)then
j1=dwptr%ij(1)
j2=dwptr%ij(2)
j3=dwptr%ij(3)
j4=dwptr%ij(4)
j5=dwptr%ij(5)
j6=dwptr%ij(6)
j7=dwptr%ij(7)
j8=dwptr%ij(8)
w1=dwptr%wij(1)
w2=dwptr%wij(2)
w3=dwptr%wij(3)
w4=dwptr%wij(4)
w5=dwptr%wij(5)
w6=dwptr%wij(6)
w7=dwptr%wij(7)
w8=dwptr%wij(8)
valdw=(w1* ru(j1)+w2* ru(j2)+w3* ru(j3)+w4* ru(j4)+&
w5* ru(j5)+w6* ru(j6)+w7* ru(j7)+w8* ru(j8))*dwptr%sinazm+&
(w1* rv(j1)+w2* rv(j2)+w3* rv(j3)+w4* rv(j4)+&
w5* rv(j5)+w6* rv(j6)+w7* rv(j7)+w8* rv(j8))*dwptr%cosazm
facdw=(w1* su(j1)+w2* su(j2)+w3* su(j3)+w4* su(j4)+&
w5* su(j5)+w6* su(j6)+w7* su(j7)+w8* su(j8))*dwptr%sinazm+&
(w1* sv(j1)+w2* sv(j2)+w3* sv(j3)+w4* sv(j4)+&
w5* sv(j5)+w6* sv(j6)+w7* sv(j7)+w8* sv(j8))*dwptr%cosazm-&
dwptr%res
do kk=1,nstep
dw=facdw+sges(kk)*valdw
pen(kk)=dw*dw*dwptr%err2
end do
else
pen(1)=dwptr%res*dwptr%res*dwptr%err2
end if
! Modify penalty term if nonlinear QC
if (nlnqc_iter .and. dwptr%pg > tiny_r_kind .and. dwptr%b > tiny_r_kind) then
pg_dw=dwptr%pg*varqc_iter
cg_dw=cg_term/dwptr%b
wnotgross= one-pg_dw
wgross = pg_dw*cg_dw/wnotgross
do kk=1,max(1,nstep)
pen(kk)= -two*log((exp(-half*pen(kk)) + wgross)/(one+wgross))
end do
endif
out(1) = out(1)+pen(1)*dwptr%raterr2
do kk=2,nstep
out(kk) = out(kk)+(pen(kk)-pen(1))*dwptr%raterr2
end do
end if
dwptr => dwNode_nextcast(dwptr)
end do
return
end subroutine stpdw
end module stpdwmod
|
{"hexsha": "162cab8d57d92c6a1a2e8a2099d12b9caa6e3ad7", "size": 6194, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "GEOSaana_GridComp/GSI_GridComp/stpdw.f90", "max_stars_repo_name": "GEOS-ESM/GEOSana_GridComp", "max_stars_repo_head_hexsha": "cf33607613754313a2383bb7e7b3d29c856b9daf", "max_stars_repo_licenses": ["NASA-1.3", "ECL-2.0", "Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "GEOSaana_GridComp/GSI_GridComp/stpdw.f90", "max_issues_repo_name": "GEOS-ESM/GEOSana_GridComp", "max_issues_repo_head_hexsha": "cf33607613754313a2383bb7e7b3d29c856b9daf", "max_issues_repo_licenses": ["NASA-1.3", "ECL-2.0", "Apache-2.0"], "max_issues_count": 43, "max_issues_repo_issues_event_min_datetime": "2019-08-15T20:38:31.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-04T15:20:38.000Z", "max_forks_repo_path": "GEOSaana_GridComp/GSI_GridComp/stpdw.f90", "max_forks_repo_name": "GEOS-ESM/GEOSana_GridComp", "max_forks_repo_head_hexsha": "cf33607613754313a2383bb7e7b3d29c856b9daf", "max_forks_repo_licenses": ["NASA-1.3", "ECL-2.0", "Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-20T23:40:17.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-11T08:20:51.000Z", "avg_line_length": 33.6630434783, "max_line_length": 93, "alphanum_fraction": 0.6088149822, "num_tokens": 2101}
|
using DataFrames  # assumed dependency: DataFrame and push! are used below
#=
def linear(a, b, x):
return b + a*x
=#
linear(a, b, x) = b + a * x
#=
# a linear demand function is generated for every
# pair of coefficients in vectors a_vec and b_vec
def demand_hypotheses(a_vec, b_vec):
for a, b in itertools.product(a_vec, b_vec):
yield {
'd': functools.partial(linear, a, b),
'p_opt': -b/(2*a)
}
=#
function demand_hypothesis(f, a, b)
f1(x) = f(a, b, x)
return DataFrame(
:a => a,
:b => b,
:d => f1,
:d_opt => f1(-b / (2a)),
:p_opt => -b / (2a)
)
end
function generate_demand_hypothesis(a_range, b_range)
h_vec = DataFrame()
for a in a_range
for b in b_range
df1 = demand_hypothesis(linear, a, b)
push!(h_vec, df1[1, :])
end
end
h_vec
end
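# A hedged usage sketch (added; not part of the original file):
#
#     hyps = generate_demand_hypothesis(-2.0:0.5:-0.5, 10.0:5.0:20.0)
#     hyps[1, :d](1.0)    # evaluate the first hypothesis' demand at price 1.0
#     hyps[1, :p_opt]     # revenue-maximising price -b/(2a) for that hypothesis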
|
{"hexsha": "d96a3dab2cff4392e8978eb8fba201ca00fc0cb9", "size": 778, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "approaches/dynamic_pricing/dynamic_pricing/price_demand_models.jl", "max_stars_repo_name": "StatisticalRethinkingJulia/DynamicPricingExamples.jl", "max_stars_repo_head_hexsha": "a6fae1736bf30f7aeed22452630c3ca3f018c50a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-02-19T06:59:09.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-21T07:57:57.000Z", "max_issues_repo_path": "approaches/dynamic_pricing/dynamic_pricing/price_demand_models.jl", "max_issues_repo_name": "StatisticalRethinkingJulia/DynamicPricingExamples.jl", "max_issues_repo_head_hexsha": "a6fae1736bf30f7aeed22452630c3ca3f018c50a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "approaches/dynamic_pricing/dynamic_pricing/price_demand_models.jl", "max_forks_repo_name": "StatisticalRethinkingJulia/DynamicPricingExamples.jl", "max_forks_repo_head_hexsha": "a6fae1736bf30f7aeed22452630c3ca3f018c50a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.5238095238, "max_line_length": 53, "alphanum_fraction": 0.559125964, "num_tokens": 262}
|
function FieldEventsLogger()
return FieldEventsLogger(())
end
function clear_logged_events(obj::FieldEventsLogger)
return jcall(obj, "clearLoggedEvents", void, ())
end
function get_logged_events(obj::FieldEventsLogger)
return jcall(obj, "getLoggedEvents", List, ())
end
function monitor_detector(obj::FieldEventsLogger, arg0::FieldEventDetector)
return jcall(obj, "monitorDetector", FieldEventDetector, (FieldEventDetector,), arg0)
end
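# Note (added): these generated wrappers assume the enclosing JavaCall-based
# Orekit.jl module context, where `jcall`, `void`, `List`, `FieldEventsLogger`
# and `FieldEventDetector` are already defined or imported.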
|
{"hexsha": "427194c5c9d259e640c292cb3cae21e80099a2bd", "size": 456, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "gen/OrekitWrapper/PropagationWrapper/EventsWrapper/field_events_logger.jl", "max_stars_repo_name": "JuliaAstrodynamics/Orekit.jl", "max_stars_repo_head_hexsha": "e2dd3d8b2085dcbb1d2c75471dab42d6ddf52c99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-09-07T12:26:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T16:02:35.000Z", "max_issues_repo_path": "gen/OrekitWrapper/PropagationWrapper/EventsWrapper/field_events_logger.jl", "max_issues_repo_name": "JuliaSpace/Orekit.jl", "max_issues_repo_head_hexsha": "e2dd3d8b2085dcbb1d2c75471dab42d6ddf52c99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-09-05T10:16:29.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-30T05:17:19.000Z", "max_forks_repo_path": "gen/OrekitWrapper/PropagationWrapper/EventsWrapper/field_events_logger.jl", "max_forks_repo_name": "JuliaSpace/Orekit.jl", "max_forks_repo_head_hexsha": "e2dd3d8b2085dcbb1d2c75471dab42d6ddf52c99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.8235294118, "max_line_length": 89, "alphanum_fraction": 0.7785087719, "num_tokens": 105}
|
/-
Copyright (c) 2021 Bhavik Mehta. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Bhavik Mehta, Alena Gusakov, Yaël Dillies
-/
import data.finset.slice
import logic.function.iterate
/-!
# Shadows
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
This file defines shadows of a set family. The shadow of a set family is the set family of sets we
get by removing any element from any set of the original family. If one pictures `finset α` as a big
hypercube (each dimension being membership of a given element), then taking the shadow corresponds
to projecting each finset down once in all available directions.
## Main definitions
* `finset.shadow`: The shadow of a set family. Everything we can get by removing a new element from
some set.
* `finset.up_shadow`: The upper shadow of a set family. Everything we can get by adding an element
to some set.
## Notation
We define notation in locale `finset_family`:
* `∂ 𝒜`: Shadow of `𝒜`.
* `∂⁺ 𝒜`: Upper shadow of `𝒜`.
We also maintain the convention that `a, b : α` are elements of the ground type, `s, t : finset α`
are finsets, and `𝒜, ℬ : finset (finset α)` are finset families.
## References
* https://github.com/b-mehta/maths-notes/blob/master/iii/mich/combinatorics.pdf
* http://discretemath.imp.fu-berlin.de/DMII-2015-16/kruskal.pdf
## Tags
shadow, set family
-/
open finset nat
variables {α : Type*}
namespace finset
section shadow
variables [decidable_eq α] {𝒜 : finset (finset α)} {s t : finset α} {a : α} {k r : ℕ}
/-- The shadow of a set family `𝒜` is all sets we can get by removing one element from any set in
`𝒜`, and the (`k` times) iterated shadow (`shadow^[k]`) is all sets we can get by removing `k`
elements from any set in `𝒜`. -/
def shadow (𝒜 : finset (finset α)) : finset (finset α) := 𝒜.sup (λ s, s.image (erase s))
localized "notation (name := finset.shadow) `∂ `:90 := finset.shadow" in finset_family
/-- The shadow of the empty set is empty. -/
@[simp] lemma shadow_empty : ∂ (∅ : finset (finset α)) = ∅ := rfl
@[simp] lemma shadow_singleton_empty : ∂ ({∅} : finset (finset α)) = ∅ := rfl
--TODO: Prove `∂ {{a}} = {∅}` quickly using `covers` and `grade_order`
/-- The shadow is monotone. -/
@[mono] lemma shadow_monotone : monotone (shadow : finset (finset α) → finset (finset α)) :=
λ 𝒜 ℬ, sup_mono
/-- `s` is in the shadow of `𝒜` iff there is an `t ∈ 𝒜` from which we can remove one element to
get `s`. -/
lemma mem_shadow_iff : s ∈ ∂ 𝒜 ↔ ∃ t ∈ 𝒜, ∃ a ∈ t, erase t a = s :=
by simp only [shadow, mem_sup, mem_image]
lemma erase_mem_shadow (hs : s ∈ 𝒜) (ha : a ∈ s) : erase s a ∈ ∂ 𝒜 :=
mem_shadow_iff.2 ⟨s, hs, a, ha, rfl⟩
/-- `t` is in the shadow of `𝒜` iff we can add an element to it so that the resulting finset is in
`𝒜`. -/
lemma mem_shadow_iff_insert_mem : s ∈ ∂ 𝒜 ↔ ∃ a ∉ s, insert a s ∈ 𝒜 :=
begin
refine mem_shadow_iff.trans ⟨_, _⟩,
{ rintro ⟨s, hs, a, ha, rfl⟩,
refine ⟨a, not_mem_erase a s, _⟩,
rwa insert_erase ha },
{ rintro ⟨a, ha, hs⟩,
exact ⟨insert a s, hs, a, mem_insert_self _ _, erase_insert ha⟩ }
end
/-- The shadow of a family of `r`-sets is a family of `r - 1`-sets. -/
protected lemma _root_.set.sized.shadow (h𝒜 : (𝒜 : set (finset α)).sized r) :
(∂ 𝒜 : set (finset α)).sized (r - 1) :=
begin
intros A h,
obtain ⟨A, hA, i, hi, rfl⟩ := mem_shadow_iff.1 h,
rw [card_erase_of_mem hi, h𝒜 hA],
end
lemma sized_shadow_iff (h : ∅ ∉ 𝒜) :
(∂ 𝒜 : set (finset α)).sized r ↔ (𝒜 : set (finset α)).sized (r + 1) :=
begin
refine ⟨λ h𝒜 s hs, _, set.sized.shadow⟩,
obtain ⟨a, ha⟩ := nonempty_iff_ne_empty.2 (ne_of_mem_of_not_mem hs h),
rw [←h𝒜 (erase_mem_shadow hs ha), card_erase_add_one ha],
end
/-- `s ∈ ∂ 𝒜` iff `s` is exactly one element less than something from `𝒜` -/
lemma mem_shadow_iff_exists_mem_card_add_one :
s ∈ ∂ 𝒜 ↔ ∃ t ∈ 𝒜, s ⊆ t ∧ t.card = s.card + 1 :=
begin
refine mem_shadow_iff_insert_mem.trans ⟨_, _⟩,
{ rintro ⟨a, ha, hs⟩,
exact ⟨insert a s, hs, subset_insert _ _, card_insert_of_not_mem ha⟩ },
{ rintro ⟨t, ht, hst, h⟩,
obtain ⟨a, ha⟩ : ∃ a, t \ s = {a} :=
card_eq_one.1 (by rw [card_sdiff hst, h, add_tsub_cancel_left]),
exact ⟨a, λ hat,
not_mem_sdiff_of_mem_right hat ((ha.ge : _ ⊆ _) $ mem_singleton_self a),
by rwa [insert_eq a s, ←ha, sdiff_union_of_subset hst]⟩ }
end
/-- Being in the shadow of `𝒜` means we have a superset in `𝒜`. -/
lemma exists_subset_of_mem_shadow (hs : s ∈ ∂ 𝒜) : ∃ t ∈ 𝒜, s ⊆ t :=
let ⟨t, ht, hst⟩ := mem_shadow_iff_exists_mem_card_add_one.1 hs in ⟨t, ht, hst.1⟩
/-- `t ∈ ∂^k 𝒜` iff `t` is exactly `k` elements less than something in `𝒜`. -/
lemma mem_shadow_iff_exists_mem_card_add :
s ∈ (∂^[k]) 𝒜 ↔ ∃ t ∈ 𝒜, s ⊆ t ∧ t.card = s.card + k :=
begin
induction k with k ih generalizing 𝒜 s,
{ refine ⟨λ hs, ⟨s, hs, subset.refl _, rfl⟩, _⟩,
rintro ⟨t, ht, hst, hcard⟩,
rwa eq_of_subset_of_card_le hst hcard.le },
simp only [exists_prop, function.comp_app, function.iterate_succ],
refine ih.trans _,
clear ih,
split,
{ rintro ⟨t, ht, hst, hcardst⟩,
obtain ⟨u, hu, htu, hcardtu⟩ := mem_shadow_iff_exists_mem_card_add_one.1 ht,
refine ⟨u, hu, hst.trans htu, _⟩,
rw [hcardtu, hcardst],
refl },
{ rintro ⟨t, ht, hst, hcard⟩,
obtain ⟨u, hsu, hut, hu⟩ := finset.exists_intermediate_set k
(by { rw [add_comm, hcard], exact le_succ _ }) hst,
rw add_comm at hu,
refine ⟨u, mem_shadow_iff_exists_mem_card_add_one.2 ⟨t, ht, hut, _⟩, hsu, hu⟩,
rw [hcard, hu],
refl }
end
end shadow
open_locale finset_family
section up_shadow
variables [decidable_eq α] [fintype α] {𝒜 : finset (finset α)} {s t : finset α} {a : α} {k r : ℕ}
/-- The upper shadow of a set family `𝒜` is all sets we can get by adding one element to any set in
`𝒜`, and the (`k` times) iterated upper shadow (`up_shadow^[k]`) is all sets we can get by adding
`k` elements from any set in `𝒜`. -/
def up_shadow (𝒜 : finset (finset α)) : finset (finset α) :=
𝒜.sup $ λ s, sᶜ.image $ λ a, insert a s
localized "notation (name := finset.up_shadow) `∂⁺ `:90 := finset.up_shadow" in finset_family
/-- The upper shadow of the empty set is empty. -/
@[simp] lemma up_shadow_empty : ∂⁺ (∅ : finset (finset α)) = ∅ := rfl
/-- The upper shadow is monotone. -/
@[mono] lemma up_shadow_monotone : monotone (up_shadow : finset (finset α) → finset (finset α)) :=
λ 𝒜 ℬ, sup_mono
/-- `s` is in the upper shadow of `𝒜` iff there is an `t ∈ 𝒜` from which we can remove one element
to get `s`. -/
lemma mem_up_shadow_iff : s ∈ ∂⁺ 𝒜 ↔ ∃ t ∈ 𝒜, ∃ a ∉ t, insert a t = s :=
by simp_rw [up_shadow, mem_sup, mem_image, exists_prop, mem_compl]
lemma insert_mem_up_shadow (hs : s ∈ 𝒜) (ha : a ∉ s) : insert a s ∈ ∂⁺ 𝒜 :=
mem_up_shadow_iff.2 ⟨s, hs, a, ha, rfl⟩
/-- The upper shadow of a family of `r`-sets is a family of `r + 1`-sets. -/
protected lemma _root_.set.sized.up_shadow (h𝒜 : (𝒜 : set (finset α)).sized r) :
(∂⁺ 𝒜 : set (finset α)).sized (r + 1) :=
begin
intros A h,
obtain ⟨A, hA, i, hi, rfl⟩ := mem_up_shadow_iff.1 h,
rw [card_insert_of_not_mem hi, h𝒜 hA],
end
/-- `t` is in the upper shadow of `𝒜` iff we can remove an element from it so that the resulting
finset is in `𝒜`. -/
lemma mem_up_shadow_iff_erase_mem : s ∈ ∂⁺ 𝒜 ↔ ∃ a ∈ s, s.erase a ∈ 𝒜 :=
begin
refine mem_up_shadow_iff.trans ⟨_, _⟩,
{ rintro ⟨s, hs, a, ha, rfl⟩,
refine ⟨a, mem_insert_self a s, _⟩,
rwa erase_insert ha },
{ rintro ⟨a, ha, hs⟩,
exact ⟨s.erase a, hs, a, not_mem_erase _ _, insert_erase ha⟩ }
end
/-- `s ∈ ∂⁺ 𝒜` iff `s` is exactly one element less than something from `𝒜`. -/
lemma mem_up_shadow_iff_exists_mem_card_add_one :
s ∈ ∂⁺ 𝒜 ↔ ∃ t ∈ 𝒜, t ⊆ s ∧ t.card + 1 = s.card :=
begin
refine mem_up_shadow_iff_erase_mem.trans ⟨_, _⟩,
{ rintro ⟨a, ha, hs⟩,
exact ⟨s.erase a, hs, erase_subset _ _, card_erase_add_one ha⟩ },
{ rintro ⟨t, ht, hts, h⟩,
obtain ⟨a, ha⟩ : ∃ a, s \ t = {a} :=
card_eq_one.1 (by rw [card_sdiff hts, ←h, add_tsub_cancel_left]),
refine ⟨a, sdiff_subset _ _ ((ha.ge : _ ⊆ _) $ mem_singleton_self a), _⟩,
rwa [←sdiff_singleton_eq_erase, ←ha, sdiff_sdiff_eq_self hts] }
end
/-- Being in the upper shadow of `𝒜` means we have a superset in `𝒜`. -/
lemma exists_subset_of_mem_up_shadow (hs : s ∈ ∂⁺ 𝒜) : ∃ t ∈ 𝒜, t ⊆ s :=
let ⟨t, ht, hts, _⟩ := mem_up_shadow_iff_exists_mem_card_add_one.1 hs in ⟨t, ht, hts⟩
/-- `t ∈ ∂^k 𝒜` iff `t` is exactly `k` elements more than something in `𝒜`. -/
lemma mem_up_shadow_iff_exists_mem_card_add :
s ∈ (∂⁺^[k]) 𝒜 ↔ ∃ t ∈ 𝒜, t ⊆ s ∧ t.card + k = s.card :=
begin
induction k with k ih generalizing 𝒜 s,
{ refine ⟨λ hs, ⟨s, hs, subset.refl _, rfl⟩, _⟩,
rintro ⟨t, ht, hst, hcard⟩,
rwa ←eq_of_subset_of_card_le hst hcard.ge },
simp only [exists_prop, function.comp_app, function.iterate_succ],
refine ih.trans _,
clear ih,
split,
{ rintro ⟨t, ht, hts, hcardst⟩,
obtain ⟨u, hu, hut, hcardtu⟩ := mem_up_shadow_iff_exists_mem_card_add_one.1 ht,
refine ⟨u, hu, hut.trans hts, _⟩,
rw [←hcardst, ←hcardtu, add_right_comm],
refl },
{ rintro ⟨t, ht, hts, hcard⟩,
obtain ⟨u, htu, hus, hu⟩ := finset.exists_intermediate_set 1
(by { rw [add_comm, ←hcard], exact add_le_add_left (zero_lt_succ _) _ }) hts,
rw add_comm at hu,
refine ⟨u, mem_up_shadow_iff_exists_mem_card_add_one.2 ⟨t, ht, htu, hu.symm⟩, hus, _⟩,
rw [hu, ←hcard, add_right_comm],
refl }
end
@[simp] lemma shadow_image_compl : (∂ 𝒜).image compl = ∂⁺ (𝒜.image compl) :=
begin
ext s,
simp only [mem_image, exists_prop, mem_shadow_iff, mem_up_shadow_iff],
split,
{ rintro ⟨_, ⟨s, hs, a, ha, rfl⟩, rfl⟩,
exact ⟨sᶜ, ⟨s, hs, rfl⟩, a, not_mem_compl.2 ha, compl_erase.symm⟩ },
{ rintro ⟨_, ⟨s, hs, rfl⟩, a, ha, rfl⟩,
exact ⟨s.erase a, ⟨s, hs, a, not_mem_compl.1 ha, rfl⟩, compl_erase⟩ }
end
@[simp] lemma up_shadow_image_compl : (∂⁺ 𝒜).image compl = ∂ (𝒜.image compl) :=
begin
ext s,
simp only [mem_image, exists_prop, mem_shadow_iff, mem_up_shadow_iff],
split,
{ rintro ⟨_, ⟨s, hs, a, ha, rfl⟩, rfl⟩,
exact ⟨sᶜ, ⟨s, hs, rfl⟩, a, mem_compl.2 ha, compl_insert.symm⟩ },
{ rintro ⟨_, ⟨s, hs, rfl⟩, a, ha, rfl⟩,
exact ⟨insert a s, ⟨s, hs, a, mem_compl.1 ha, rfl⟩, compl_insert⟩ }
end
end up_shadow
end finset
|
{"author": "leanprover-community", "repo": "mathlib", "sha": "5e526d18cea33550268dcbbddcb822d5cde40654", "save_path": "github-repos/lean/leanprover-community-mathlib", "path": "github-repos/lean/leanprover-community-mathlib/mathlib-5e526d18cea33550268dcbbddcb822d5cde40654/src/combinatorics/set_family/shadow.lean"}
|
[STATEMENT]
lemma little_Fermat_int:
fixes a :: int and p :: nat
assumes "prime p" "\<not>p dvd a"
shows "[a ^ p = a] (mod p)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "p > 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. 1 < p
[PROOF STEP]
using prime_gt_1_nat assms
[PROOF STATE]
proof (prove)
using this:
prime ?p \<Longrightarrow> 1 < ?p
prime p
\<not> int p dvd a
goal (1 subgoal):
1. 1 < p
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
1 < p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "\<not>int p dvd a mod int p"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<not> int p dvd a mod int p
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
prime p
\<not> int p dvd a
goal (1 subgoal):
1. \<not> int p dvd a mod int p
[PROOF STEP]
by (simp add: dvd_mod_iff)
[PROOF STATE]
proof (state)
this:
\<not> int p dvd a mod int p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
\<not> int p dvd a mod int p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
from \<open>p > 1\<close>
[PROOF STATE]
proof (chain)
picking this:
1 < p
[PROOF STEP]
have "a mod int p = int (nat (a mod int p))"
[PROOF STATE]
proof (prove)
using this:
1 < p
goal (1 subgoal):
1. a mod int p = int (nat (a mod int p))
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
a mod int p = int (nat (a mod int p))
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
\<not> int p dvd int (nat (a mod int p))
[PROOF STEP]
have not_dvd: "\<not>p dvd nat (a mod int p)"
[PROOF STATE]
proof (prove)
using this:
\<not> int p dvd int (nat (a mod int p))
goal (1 subgoal):
1. \<not> p dvd nat (a mod int p)
[PROOF STEP]
by (subst (asm) int_dvd_int_iff)
[PROOF STATE]
proof (state)
this:
\<not> p dvd nat (a mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "[a ^ p = (a mod p) ^ p] (mod p)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. [a ^ p = (a mod int p) ^ p] (mod int p)
[PROOF STEP]
by (intro cong_pow) (auto simp: cong_def)
[PROOF STATE]
proof (state)
this:
[a ^ p = (a mod int p) ^ p] (mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
[a ^ p = (a mod int p) ^ p] (mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "(a mod p) ^ p = (int (nat (a mod p))) ^ p"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (a mod int p) ^ p = int (nat (a mod int p)) ^ p
[PROOF STEP]
using \<open>p > 1\<close>
[PROOF STATE]
proof (prove)
using this:
1 < p
goal (1 subgoal):
1. (a mod int p) ^ p = int (nat (a mod int p)) ^ p
[PROOF STEP]
by (subst of_nat_nat) auto
[PROOF STATE]
proof (state)
this:
(a mod int p) ^ p = int (nat (a mod int p)) ^ p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(a mod int p) ^ p = int (nat (a mod int p)) ^ p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "\<dots> = int (nat (a mod p) ^ p)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. int (nat (a mod int p)) ^ p = int (nat (a mod int p) ^ p)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
int (nat (a mod int p)) ^ p = int (nat (a mod int p) ^ p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
int (nat (a mod int p)) ^ p = int (nat (a mod int p) ^ p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "[\<dots> = int (nat (a mod p))] (mod p)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. [int (nat (a mod int p) ^ p) = int (nat (a mod int p))] (mod int p)
[PROOF STEP]
by (subst cong_int_iff, rule little_Fermat_nat) (use assms not_dvd in auto)
[PROOF STATE]
proof (state)
this:
[int (nat (a mod int p) ^ p) = int (nat (a mod int p))] (mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
[int (nat (a mod int p) ^ p) = int (nat (a mod int p))] (mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "int (nat (a mod p)) = a mod p"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. int (nat (a mod int p)) = a mod int p
[PROOF STEP]
using \<open>p > 1\<close>
[PROOF STATE]
proof (prove)
using this:
1 < p
goal (1 subgoal):
1. int (nat (a mod int p)) = a mod int p
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
int (nat (a mod int p)) = a mod int p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
int (nat (a mod int p)) = a mod int p
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
have "[a mod p = a] (mod p)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. [a mod int p = a] (mod int p)
[PROOF STEP]
by (auto simp: cong_def)
[PROOF STATE]
proof (state)
this:
[a mod int p = a] (mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
[a ^ p = a] (mod int p)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
[a ^ p = a] (mod int p)
goal (1 subgoal):
1. [a ^ p = a] (mod int p)
[PROOF STEP]
.
[PROOF STATE]
proof (state)
this:
[a ^ p = a] (mod int p)
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 2512, "file": "Mersenne_Primes_Lucas_Lehmer_Auxiliary", "length": 37}
|
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from wordcloud import WordCloud, ImageColorGenerator
from ..utils.file_handling import read_df_from_file
entities = read_df_from_file("data/dataframes/merged_entities_10k_df.jsonl")
entities_duplicated = []
for i in entities.index:
for j in range(entities["no_occurrences"][i]):
entities_duplicated += [entities["word"][i]]
entities_dict = Counter(entities_duplicated)
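# entities_dict maps each entity word to its total occurrence count;
# generate_from_frequencies below uses these counts to size the words.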
mask = np.array(Image.open("images/bert_mask.png"))
entity_cloud = WordCloud(
width=804,
height=1350,
max_font_size=50,
max_words=600,
background_color="#38566F",
mode="RGBA",
mask=mask,
).generate_from_frequencies(entities_dict)
image_colors = ImageColorGenerator(mask)
plt.figure(figsize=[7, 7])
plt.imshow(entity_cloud.recolor(color_func=image_colors), interpolation="bilinear")
plt.axis("off")
plt.show()
entity_cloud.to_file("images/entity_cloud.png")
|
{"hexsha": "c3d622983dc5452acd1b3b3a593cd80feb0b48e3", "size": 981, "ext": "py", "lang": "Python", "max_stars_repo_path": "ner/entity_processing/clouding.py", "max_stars_repo_name": "BonnierNews/lukas-ner-model", "max_stars_repo_head_hexsha": "1f7f688f9b0f1e7b7cb66c42f188358d27a0be09", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ner/entity_processing/clouding.py", "max_issues_repo_name": "BonnierNews/lukas-ner-model", "max_issues_repo_head_hexsha": "1f7f688f9b0f1e7b7cb66c42f188358d27a0be09", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ner/entity_processing/clouding.py", "max_forks_repo_name": "BonnierNews/lukas-ner-model", "max_forks_repo_head_hexsha": "1f7f688f9b0f1e7b7cb66c42f188358d27a0be09", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.1538461538, "max_line_length": 83, "alphanum_fraction": 0.7645259939, "include": true, "reason": "import numpy", "num_tokens": 227}
|
import numpy as np
class oneR:
#Constructor
def __init__(self):
self.rule = []
self.accuracy = 0
self.fitShape = []
self.targets = []
def _checkInputs(self,X,y) -> bool:
"""
Internal function to ensure input is valid.
Parameters:
X: Array-like of training values.
y: Array-like of target values.
Returns:
bool: Input is valid
"""
tClasses = len(np.unique(y))
if(tClasses > 2):
print("Max unique target values: 2. " + str(tClasses) + " given.")
return False
#Note that list comprehension is slow and this searches the entire list instead of breaking on an exception
#This is done this way mostly for readability
if(all(len(row) == len(X[0]) for row in X) == False):
print("The X array is not uniform.")
return False
if(len(X) != len(y)):
print("Target array must be the same size as rows.")
return False
return True
def _checkPredict(self,X) -> bool:
"""
Internal function.
Makes sure input is valid for the predict method.
Parameters:
X: array-like of training values.
Returns:
bool: Input is valid
"""
if(len(X.T) != self.fitShape[1]):
print("Shape mismatch. Axis 1 is length " + str(len(X.T)) + ". Needs: " + str(self.fitShape[1]))
return False
if(len(X.shape) != 2):
print("Shape mismatch. Array must be 2 dimensions. " + str(len(X.shape)) + " given. If you're testing a single row, cast it as a 2d list with [list]")
return False
return True
def fit(self, X, y):
"""
Trains the model, given X and y.
Parameters:
X: Array-like of training data
y: Array-like of targets
Returns:
self: An object that represents a trained model
"""
if(self._checkInputs(X,y)):
X,y = np.array(X), np.array(y)
self.fitShape = X.shape
self.targets = np.unique(y)
countTables = []
topRule = []
maxAccuracy = 0
attrTotal = dict()
for col in X.T:
cTable = dict()
unique,counts = np.unique(col,return_counts=True)
for idx,attr in enumerate(unique):
tTable = dict()
for target in np.unique(y):
tTable[target] = 0
cTable[attr] = tTable
attrTotal[attr] = counts[idx]
for idx,row in enumerate(col):
cTable[row][y[idx]] += 1
countTables.append(cTable)
for idx,table in enumerate(countTables):
for attr, cTable in table.items():
for target, count in cTable.items():
accuracy = (count/attrTotal[attr])
if(accuracy > maxAccuracy):
maxAccuracy = accuracy
topRule = [idx,attr,target]
self.rule = topRule
self.accuracy = maxAccuracy
return self
def predict(self, X) -> list:
"""
Predicts a target value.
Parameters:
X: Array-like of test data
Returns:
predictions: a list of predicted outcomes for each row of X
"""
X = np.array(X)
predictions = []
if(self._checkPredict(X)):
counterTarget = list(self.targets)
counterTarget.remove(self.rule[2])
counterTarget = counterTarget[0]
for entry in X:
predictions.append(self.rule[2] if entry[self.rule[0]] == self.rule[1] else counterTarget)
return predictions
def score(self,X,y) -> float:
"""
Calculates the accuracy of the model, given test data
Parameters:
X: Array-like of test data
y: Array-like of target values.
Returns:
accuracy: a float that represents correctly guessed/total values
"""
if(self.rule == []):
print("This model is untrained. Train using the fit method before trying to score.")
return 0
if(self._checkInputs(X,y)):
X = np.array(X)
correct = 0
for idx,row in enumerate(X):
if(self.predict([row]) == [y[idx]]):
correct += 1
return(correct/X.shape[0])
else:
return 0
if(__name__ == "__main__"):
    import testData  # assumed local module providing the golf demo dataset
    data = testData.testData()
    frame = oneR()
    trained = frame.fit(data.golf_pred, data.golf_target)
print(trained.predict([
["Sunny", "Mild", "High", "True"],
["Rainy", "Hot", "High", "True"]]))
|
{"hexsha": "7b06c0508775a9533f30fdb9820a0616e8b9b8c9", "size": 4902, "ext": "py", "lang": "Python", "max_stars_repo_path": "py1r.py", "max_stars_repo_name": "ErikShively/Py1R", "max_stars_repo_head_hexsha": "59b4c72083282e6e0625d8b5370fa14f07bb4bce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "py1r.py", "max_issues_repo_name": "ErikShively/Py1R", "max_issues_repo_head_hexsha": "59b4c72083282e6e0625d8b5370fa14f07bb4bce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "py1r.py", "max_forks_repo_name": "ErikShively/Py1R", "max_forks_repo_head_hexsha": "59b4c72083282e6e0625d8b5370fa14f07bb4bce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0736196319, "max_line_length": 162, "alphanum_fraction": 0.513871889, "include": true, "reason": "import numpy", "num_tokens": 1063}
|
# Copyright (c) 2021, Alessandro Abate, Daniele Ahmed, Alec Edwards, Mirco Giacobbe, Andrea Peruffo
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import sympy as sp
import numpy as np
import copy
import torch
try:
import dreal
except:
dreal = None
from src.shared.utils import Timeout
class LinearSystem():
def __init__(self, A):
self.A = A
self.dimension = len(A[0])
self.x = np.array([sp.symbols("x%d" % i) for i in range(self.dimension)])
self.f = self.get_func_lambda()
def get_func_from_A(self,):
"""
returns xdot: Sympy expression representing the linear system
"""
xdot = self.A @ self.x
return xdot
def get_func_lambda(self):
"""
returns f: python function which evaluates linear system
"""
xdot = self.get_func_from_A()
f = sp.lambdify((self.x), xdot, "numpy")
return f
def evaluate_f(self, point):
"""
param choice: n-d data point as iterable
returns f(point): dynamical system evaluated at point
"""
return self.f(*point)
class NonlinearSystem():
def __init__(self, f, lyap=True):
"""
:param f: list representing each dimensions dynamics, with each element i is f_i(x0,x1,x2...,xn)
:param lyap: bool, mode defining lyapunov or barrier function operation
"""
self.f = f
self.poly = self.check_poly()
self.dimension = len(f)
self.x = [sp.Symbol("x%d" % i, real=True) for i in range(self.dimension)]
self.system_lambda = self.get_system_lambda()
if not self.poly:
self.dreal_lambda = self.get_dreal_lambda()
self.sympy_lambda = self.get_sympy_lambda()
if lyap:
if not self.poly:
raise ValueError("Non-polynomial dynamics not supported for Lyapunov analysis.")
self.equilibria = self.find_equilibria()
self.jacobian = self.get_Jacobian()
self.stable_equilibria = []
self.unstable_equilibria = []
self.sort_equilibria()
def get_system_lambda(self):
"""
:return f: function which evaluates system
"""
f = sp.lambdify(self.x, self.f, modules=[{"sin":torch.sin, "exp": torch.exp, "cos":torch.cos}, "numpy"])
return f
def get_dreal_lambda(self):
"""
:return f: function which evaluates system using dreal functions
"""
f = sp.lambdify(self.x, self.f, modules=[{"sin":dreal.sin, "exp": dreal.exp, "cos":dreal.cos}, "numpy"])
return f
def get_sympy_lambda(self):
"""
:return f: function which evaluates system that using sympy functions
"""
f = sp.lambdify(self.x, self.f, modules=[{"sin":sp.sin, "exp": sp.exp, "cos":sp.cos}, "numpy"])
return f
def evaluate_f(self, point):
"""
:param point: n-d data point as iterable
:return f(point): dynamical system evaluated at point
"""
if dreal and not self.poly:
if isinstance(point[0], dreal.Variable):
return self.dreal_lambda(*point)
elif isinstance(point[0], sp.Expr):
return self.sympy_lambda(*point)
else:
return self.system_lambda(*point)
else:
return self.system_lambda(*point)
def get_Jacobian(self):
"""
:return J: Jacobion of system, numpy object matrix with Sympy expressions for each entry
"""
J = np.zeros((self.dimension, self.dimension), dtype=object)
for jjj, state in enumerate(self.x):
for iii, fun in enumerate(self.f):
J[iii,jjj] = sp.diff(fun, state)
return J
def evaluate_Jacobian(self, point):
"""
:param point: list representing n-d point at which to evaluate the Jacobian J
:return J_x*: np array of Jacobian evaluated at point
"""
J_x = copy.deepcopy(
self.jacobian
)
for iii, df in enumerate(J_x):
for jjj, df_dx in enumerate(df):
J_x[iii,jjj] = float(
df_dx.subs({x: p for (x, p) in zip(self.x, point)})
)
return np.array(J_x, dtype=float)
def find_equilibria(self):
"""
:return real_equilibria: list of equilibrium points for system
"""
try:
with Timeout(seconds=180):
eqbm = sp.nonlinsolve(self.f, self.x,)
except TimeoutError:
eqbm = []
except AttributeError:
eqbm = sp.nonlinsolve(self.f, self.x,)
real_equilibria = self.get_real_solutions(eqbm.args)
return real_equilibria
def get_real_solutions(self, eqbm_set):
"""
:param eqbm_set: list of equilibrium points (in complex domain)
:return real_equilibria: list of equilibrium points for system (in R^n)
"""
real_equilibria = []
for eqbm in eqbm_set:
real_Flag = True
for number in eqbm:
if not number.is_real:
real_Flag = False
if real_Flag:
#eqbm = tuple([float(x) for x in eqbm])
real_equilibria.append(eqbm)
return real_equilibria
def check_stability(self, J='0', eqbm=None):
"""
:param J: Jacobian of dynamical system, possibly evaluated at specifc equilibrium point
:param eqbm: equilibrium point to evaluate Jacobian at if not already evaluated.
:return bool: True if all eigenvalues have real part <= 0, else False.
"""
if type(J) is str:
J = self.evaluate_Jacobian(eqbm)
V,_ = np.linalg.eig(J)
return all(np.real(V) <= 0)
def sort_equilibria(self):
for eqbm in self.equilibria:
J = self.evaluate_Jacobian(eqbm)
if self.check_stability(J=J):
self.stable_equilibria.append(eqbm)
else:
self.unstable_equilibria.append(eqbm)
def f_substitute(self, point):
"""
:param point: iterable, point at which to symbolically evaluate f
:return f(point): symbolic evaluation (by substitution) of self.f at point
"""
substitutions = {x: p for (x, p) in zip(self.x, point)}
return [(f_i.subs(substitutions)) for f_i in self.f]
def check_poly(self):
"""
:return bool: False if system has any non-polynomial parts (eg exp, sin)
"""
return all([expression.is_polynomial() for expression in self.f])
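# Hedged usage sketch (added; not part of the original module): evaluate a
# stable 2-D linear system xdot = A x at a point.
if __name__ == "__main__":
    sys2d = LinearSystem([[-1.0, 0.0], [0.0, -2.0]])
    print(sys2d.evaluate_f([1.0, 1.0]))  # expected: [-1.0, -2.0]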
|
{"hexsha": "a88c9ac5e4e96213e91a4a0d7311cc78a670cd65", "size": 6798, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/shared/system.py", "max_stars_repo_name": "oxford-oxcav/fossil", "max_stars_repo_head_hexsha": "f5b8e2bba80d8792b149ee75b51d3ee74df9b88e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-21T17:24:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-21T17:24:31.000Z", "max_issues_repo_path": "src/shared/system.py", "max_issues_repo_name": "oxford-oxcav/fossil", "max_issues_repo_head_hexsha": "f5b8e2bba80d8792b149ee75b51d3ee74df9b88e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/shared/system.py", "max_forks_repo_name": "oxford-oxcav/fossil", "max_forks_repo_head_hexsha": "f5b8e2bba80d8792b149ee75b51d3ee74df9b88e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-09T15:35:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-09T15:35:26.000Z", "avg_line_length": 34.1608040201, "max_line_length": 112, "alphanum_fraction": 0.5813474551, "include": true, "reason": "import numpy,import sympy", "num_tokens": 1605}
|
import random
import numpy as np
from playsound import playsound
import eleonora.utils.config as config
from eleonora.interact.Mindfulness import *
from eleonora.utils.input import message, warning, userInput
class Emotion(object):
def __init__(self, emotion, speech=None):
self.emotion = emotion
self.speech = speech
if emotion == 'angry':
function_list = self.angry_interaction()
elif emotion == 'disgusted':
function_list = self.disgusted_interaction()
elif emotion == 'fearful':
function_list = self.fearful_interaction()
elif emotion == 'happy':
function_list = self.happy_interaction()
elif emotion == 'sad':
function_list = self.sad_interaction()
elif emotion == 'surprised':
function_list = self.surprised_interaction()
elif emotion == 'neutral':
function_list = self.neutral_interaction()
elif emotion == None:
function_list = self.all_interactions()
else:
warning('Something went wrong')
return
        # Pick a function
        opt = random.choice(function_list)
        # Announce the chosen function
        self.tellFunction(opt['name'])
        # Ask if the user wants it
        if self.getAnswer():
            opt['func'](self.speech)
def all_interactions(self):
all_array = np.concatenate(
(
self.angry_interaction(),
self.disgusted_interaction(),
self.fearful_interaction(),
self.happy_interaction(),
self.sad_interaction(),
self.surprised_interaction(),
self.neutral_interaction()
), axis=0)
return all_array
def angry_interaction(self):
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
def disgusted_interaction(self):
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
def fearful_interaction(self):
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
def happy_interaction(self):
# Jokes
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
def sad_interaction(self):
# Give Hug
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
def surprised_interaction(self):
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
def neutral_interaction(self):
# Jokes
ARR = [
{"name": 'mindfulness', "func": Mindfulness}
]
return ARR
    def getAnswer(self):
        message('Answer Yes or No')
        data = self.speech.getShort()
        if data == None:
            return self.getAnswer()
        if 'ja' in data.split(' '):
            return True
        elif 'nee' in data.split(' '):
            return False
        # Unrecognized answer: ask again
        return self.getAnswer()
def tellFunction(self, f):
self.playFile(f + '.wav','functions/')
def playFile(self, audio, folder=False):
if not folder:
folder = ''
playsound(config.AUDIO_PREFIX + folder + audio)
|
{"hexsha": "1d60781fa0d7b33ac15890e5b37da66d04dae369", "size": 3357, "ext": "py", "lang": "Python", "max_stars_repo_path": "eleonora/modules/Interact.py", "max_stars_repo_name": "gert-janwille/Eleonora", "max_stars_repo_head_hexsha": "a979dcd9b41231ea3abc9a57d842c680314ac9ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-11-19T10:57:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-19T10:57:38.000Z", "max_issues_repo_path": "eleonora/modules/Interact.py", "max_issues_repo_name": "gert-janwille/Eleonora", "max_issues_repo_head_hexsha": "a979dcd9b41231ea3abc9a57d842c680314ac9ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2017-11-15T16:04:09.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-18T17:12:18.000Z", "max_forks_repo_path": "eleonora/modules/Interact.py", "max_forks_repo_name": "gert-janwille/Eleonora", "max_forks_repo_head_hexsha": "a979dcd9b41231ea3abc9a57d842c680314ac9ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.975, "max_line_length": 60, "alphanum_fraction": 0.5558534406, "include": true, "reason": "import numpy", "num_tokens": 715}
|
import phase1.basic
open set with_bot
universe u
namespace con_nf
variables [params.{u}] (α : Λ) [core_tangle_cumul α] {β : Iio_index α}
{s t : set (tangle β)}
/-- An `α` code is a type index `β < α` together with a set of tangles of type `β`. -/
@[derive inhabited] def code : Type u := Σ β : Iio_index α, set (tangle β)
/-- Nonempty codes. -/
abbreviation nonempty_code : Type u := {c : code α // c.2.nonempty}
namespace code
variables {α} {c : code α}
/-- Constructor for `code`. -/
def mk : Π β : Iio_index α, set (tangle β) → code α := sigma.mk
lemma mk_def : mk β s = ⟨β, s⟩ := rfl
@[simp] lemma fst_mk (β : Iio_index α) (s : set (tangle β)) : (mk β s).1 = β := rfl
@[simp] lemma snd_mk (β : Iio_index α) (s : set (tangle β)) : (mk β s).2 = s := rfl
/-- A code is empty if it has no element. -/
protected def is_empty (c : code α) : Prop := c.2 = ∅
protected lemma is_empty.eq : c.is_empty → c.2 = ∅ := id
@[simp] lemma is_empty_mk : (mk β s).is_empty ↔ s = ∅ := iff.rfl
@[simp] lemma mk_inj : mk β s = mk β t ↔ s = t := by simp [mk]
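-- Example (informal): by `mk_inj`, a code at a fixed index `β` is determined
-- entirely by its set of tangles, e.g. `mk β s = mk β t` forces `s = t`.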
end code
end con_nf
|
{"author": "leanprover-community", "repo": "con-nf", "sha": "f0b66bd73ca5d3bd8b744985242c4c0b5464913f", "save_path": "github-repos/lean/leanprover-community-con-nf", "path": "github-repos/lean/leanprover-community-con-nf/con-nf-f0b66bd73ca5d3bd8b744985242c4c0b5464913f/src/phase1/code.lean"}
|
##predefined_condition_begin
rootdir<-"/scratch/cqs/shengq1/vickers/20170222_smallRNA_3018_61_human_v3/host_genome/deseq2_miRNA/result"
inputfile<-"3018_61.define"
showLabelInPCA<-1
showDEGeneCluster<-1
pvalue<-0.05
foldChange<-1.5
minMedianInGroup<-5
addCountOne<-0
usePearsonInHCA<-0
top25only<-0
detectedInBothGroup<-1
performWilcox<-0
useRawPvalue<-1
textSize<-9
transformTable<-0
exportSignificantGeneName<-1
thread<-8
outputPdf<-FALSE
showVolcanoLegend<-0
libraryFile<-"/scratch/cqs/shengq1/vickers/20170222_smallRNA_3018_61_human_v3/host_genome/bowtie1_genome_1mm_NTA_smallRNA_category/result/3018_61.Category.Table.csv"
libraryKey<-"TotalReads"
##predefined_condition_end
options(bitmapType='cairo')
suffix<-"";
if(top25only){
suffix=paste0(suffix,"_top25")
}
if(detectedInBothGroup){
suffix=paste0(suffix, "_detectedInBothGroup")
}
if(minMedianInGroup > 0){
suffix=paste0(suffix, "_min", minMedianInGroup)
}
if(useRawPvalue){
alpha<-0.1
suffix=paste0(suffix, "_pvalue", pvalue)
}else{
alpha<-pvalue
suffix=paste0(suffix, "_fdr", pvalue)
}
zeroCount=0
if(addCountOne){
zeroCount=1
minMedianInGroup=minMedianInGroup+1
}
if(!exists("idIndex")){
idIndex<-1
}
if(!exists("outputPdf")){
outputPdf<-FALSE
}
if(!exists("outputPng") | !outputPdf ){
outputPng<-TRUE
}
if(!exists("outputTIFF")){
outputTIFF<-FALSE
}
if(!exists("filterBaseMean")){
filterBaseMean<-0
}
if(!exists("filterBaseMeanValue")){
filterBaseMeanValue<-30
}
outputFormat<-c()
if(outputPdf){
outputFormat<-c("PDF")
}
if(outputPng){
outputFormat<-c(outputFormat, "PNG")
}
if(outputTIFF){
outputFormat<-c(outputFormat, "TIFF")
}
if(!exists("countSep")){
countSep="\t"
}
if(!exists("usePearsonInHCA")){
usePearsonInHCA=0
}
if(!exists("exportSignificantGeneName")){
exportSignificantGeneName<-1
}
if(exists("libraryFile")){
if (grepl(".csv$", libraryFile)){
librarySize<-read.csv(libraryFile, row.names=1,check.names=FALSE)
librarySize<-unlist(librarySize[libraryKey,,drop=T])
cat("Using ", libraryKey, " in " , libraryFile , " as library size. \n")
}else{
librarySize<-read.table(libraryFile, row.names=1,check.names=FALSE,header=T,stringsAsFactor=F)
librarySize<-unlist(librarySize[,libraryKey,drop=T])
cat("Using ", libraryKey, " in " , libraryFile , " as library size. \n")
}
}
if(!exists("thread")){
thread<-1
}
if(!exists("showVolcanoLegend")){
showVolcanoLegend<-1
}
if(!exists("cooksCutoff")){
cooksCutoff<-FALSE
}
library("DESeq2")
library("heatmap3")
library("lattice")
#library("reshape")
library("ggplot2")
library("grid")
library("scales")
library("reshape2")
library("VennDiagram")
library("RColorBrewer")
#library("preprocessCore")
library("BiocParallel")
setwd(rootdir)
comparisons_data<-read.table(inputfile, header=T, check.names=F , sep="\t", stringsAsFactors = F)
##Solving node stack overflow problem start###
#when there are too many genes, drawing the dendrogram may fail due to node stack overflow.
#It can be solved by forcing stats:::plotNode to be run as interpreted code rather than byte-compiled code via a nasty hack:
#http://stackoverflow.com/questions/16559250/error-in-heatmap-2-gplots/25877485#25877485
#align two count tables by row names
align<-function(data1,data2,by=0,suffixes=c(deparse(substitute(data1)),deparse(substitute(data2))),sort=T) {
if (is.null(data1)) {
return(data2)
} else if (is.null(data2)) {
return(data1)
}
data<-merge(data1,data2,by=by,all=T,suffixes=suffixes,sort=sort)
row.names(data)<-data[,1]
data<-data[,-1]
return (data)
}
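#Usage sketch: given two count tables with genes as row names,
#  merged<-align(tableA, tableB)
#keeps the union of genes (all=T); samples missing a gene get NA.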
theme_bw2 <- function () {
theme_bw() %+replace%
theme(
panel.border = element_blank(),
axis.line = element_line(colour = "black", size = 0.5)
)
}
# Convert a byte-compiled function to an interpreted-code function
unByteCode <- function(fun)
{
FUN <- eval(parse(text=deparse(fun)))
environment(FUN) <- environment(fun)
FUN
}
# Replace function definition inside of a locked environment **HACK**
assignEdgewise <- function(name, env, value)
{
unlockBinding(name, env=env)
assign( name, envir=env, value=value)
lockBinding(name, env=env)
invisible(value)
}
# Replace byte-compiled function in a locked environment with an interpreted-code
# function
unByteCodeAssign <- function(fun)
{
name <- gsub('^.*::+','', deparse(substitute(fun)))
FUN <- unByteCode(fun)
retval <- assignEdgewise(name=name,
env=environment(FUN),
value=FUN
)
invisible(retval)
}
# Use the above functions to convert stats:::plotNode to interpreted-code:
unByteCodeAssign(stats:::plotNode)
# Now raise the interpreted code recursion limit (you may need to adjust this,
# decreasing if it uses too much memory, increasing if you get a recursion depth error).
options(expressions=5e4)
##Solving node stack overflow problem end###
hmcols <- colorRampPalette(c("green", "black", "red"))(256)
openPlot<-function(filePrefix, format, pdfWidth, pdfHeight, otherWidth, otherHeight, figureName){
fileName<-paste0(filePrefix, ".", tolower(format))
if(format == "PDF"){
pdf(fileName, width=pdfWidth, height=pdfHeight, useDingbats=FALSE)
}else if(format == "TIFF"){
tiff(filename=fileName, width=otherWidth, height=otherHeight, res=300)
}else {
png(filename=fileName, width=otherWidth, height=otherHeight, res=300)
}
cat("saving", figureName, "to ", fileName, "\n")
}
drawPlot<-function(filePrefix, outputFormat, pdfWidth, pdfHeight, otherWidth, otherHeight, p, figureName){
for(format in outputFormat){
openPlot(filePrefix, format, pdfWidth, pdfHeight, otherWidth, otherHeight, figureName)
print(p)
dev.off()
}
}
drawHCA<-function(prefix, rldselect, ispaired, designData, conditionColors, gnames, outputFormat){
genecount<-nrow(rldselect)
showRowDendro = genecount <= 50
if(genecount > 2){
cexCol = max(1.0, 0.2 + 1/log10(ncol(rldselect)))
if(ispaired){
htColors<-rainbow(length(unique(designData$Paired)))
gsColors<-as.matrix(data.frame(Group=conditionColors, Sample=htColors[designData$Paired]))
}else{
gsColors = conditionColors;
}
if (genecount<=30) {
labRow=row.names(rldselect)
margins=c(12,8)
} else {
labRow=NA
margins=c(12,5)
}
filePrefix<-paste0(prefix, "_DESeq2-vsd-heatmap")
for(format in outputFormat){
openPlot(filePrefix, format, 10, 10, 3000, 3000, "HCA")
if(usePearsonInHCA){
heatmap3(rldselect,
col = hmcols,
ColSideColors = gsColors,
margins=margins,
scale="r",
labRow=labRow,
showRowDendro=showRowDendro,
main=paste0("Hierarchical Cluster Using ", genecount, " Genes"),
cexCol=cexCol,
useRaster=FALSE,
legendfun=function() showLegend(legend=paste0("Group ", gnames), col=c("red","blue"),cex=1.0,x="center"))
}else{
heatmap3(rldselect,
col = hmcols,
ColSideColors = gsColors,
margins=margins,
scale="r",
distfun=dist,
labRow=labRow,
showRowDendro=showRowDendro,
main=paste0("Hierarchical Cluster Using ", genecount, " Genes"),
cexCol=cexCol,
useRaster=FALSE,
legendfun=function() showLegend(legend=paste0("Group ", gnames), col=c("red","blue"),cex=1.0,x="center"))
}
dev.off()
}
}
}
drawPCA<-function(prefix, rldmatrix, showLabelInPCA, designData, condition, outputFormat){
genecount<-nrow(rldmatrix)
if(genecount > 2){
pca<-prcomp(t(rldmatrix))
supca<-summary(pca)$importance
pcadata<-data.frame(pca$x)
pcalabs=paste0(colnames(pcadata), "(", round(supca[2,] * 100), "%)")
pcadata$sample<-row.names(pcadata)
pcadata$Group<-condition
if(showLabelInPCA){
g <- ggplot(pcadata, aes(x=PC1, y=PC2, label=sample)) +
geom_text(vjust=-0.6, size=4)
}else{
g <- ggplot(pcadata, aes(x=PC1, y=PC2)) +
labs(color = "Group")
}
g <- g + geom_point(aes(col=Group), size=4) +
scale_x_continuous(limits=c(min(pcadata$PC1) * 1.2,max(pcadata$PC1) * 1.2)) +
scale_y_continuous(limits=c(min(pcadata$PC2) * 1.2,max(pcadata$PC2) * 1.2)) +
geom_hline(aes(yintercept=0), size=.2) +
geom_vline(aes(xintercept=0), size=.2) +
xlab(pcalabs[1]) + ylab(pcalabs[2]) +
scale_color_manual(values=c("red", "blue")) +
theme_bw2() + theme(legend.position="top")
filePrefix<-paste0(prefix, "_DESeq2-vsd-pca")
drawPlot(filePrefix, outputFormat, 6, 5, 3000, 3000, g, "PCA")
}
}
myEstimateSizeFactors<-function(dds){
if(exists("librarySize")){
curLibrarySize<-librarySize[colnames(dds)]
#based on DESeq2 introduction
curSizeFactor<- curLibrarySize / exp(mean(log(curLibrarySize)))
sizeFactors(dds)<-curSizeFactor
}else{
sfres<-try(dds<-estimateSizeFactors(dds))
if (class(sfres) == "try-error") {
library(edgeR)
countNum<-counts(dds)
      y<-calcNormFactors(countNum, method="TMM")
cs<-colSums(countNum)
cs<-cs / median(cs)
sf<-y * cs
sizeFactors(dds)<-sf
}
}
return(dds)
}
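#Worked example (illustration, not from the original script): with library sizes
#c(1e6, 2e6, 4e6), exp(mean(log(.))) is the geometric mean 2e6, so the size
#factors become c(0.5, 1, 2).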
#for volcano plot
reverselog_trans <- function(base = exp(1)) {
trans <- function(x) -log(x, base)
inv <- function(x) base^(-x)
trans_new(paste0("reverselog-", format(base)), trans, inv,
log_breaks(base = base),
domain = c(1e-100, Inf))
}
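#Usage sketch: smaller p-values plot higher on the axis, e.g.
#  ggplot(df, aes(log2FoldChange, pvalue)) + geom_point() +
#    scale_y_continuous(trans=reverselog_trans(10))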
###########################
#end function
###########################
#
# ###################################################################
# #change comparisons_data, need to be removed before adding to pipeline
# comparisons_data=rbind(comparisons_data,comparisons_data)
# comparisons_data[3:4,1]=c("Control_placenta_vs_Heart","Diabetic_placenta_vs_Heart")
# comparisons_data[3:4,6]=c("Control_placenta_vs_Heart","Diabetic_placenta_vs_Heart")
# comparisons_data[,1]=paste0("Test_",comparisons_data[,1])
# comparisons_data[,3]="/scratch/cqs/zhaos/RolandaLister/20200907_RolandaLister4363_4369_RnaSeq/pipeline/deseq2_proteincoding_genetable/result/test.design"
# comparisons_data[,6]=paste0("Test_",comparisons_data[,6])
# comparisons_data$designFormula="~Tissue + Condition+Tissue:Condition"
# comparisons_data$contrast=c("Condition_Diabetic_vs_Control",paste0("Condition_Diabetic_vs_Control",";","Tissueplacenta.ConditionDiabetic"),
# "Tissue_placenta_vs_Heart",paste0("Tissue_placenta_vs_Heart",";","Tissueplacenta.ConditionDiabetic"))
# #comparisons_data=comparisons_data[1,,drop=FALSE]
# #end change comparisons_data
# ###################################################################
countfiles<-unlist(unique(comparisons_data$CountFile))
allComparisons<-unlist(unique(comparisons_data$ComparisonName))
if(length(allComparisons) != nrow(comparisons_data)){
stop(paste0("Comparison names cannot be repeated: ", paste(comparisons_data$ComparisonName, collapse=", ")))
}
allTitles<-comparisons_data$ComparisonTitle
names(allTitles)<-comparisons_data$ComparisonName
dataAllOut<-NULL
resultAllOut<-NULL
allSigNameList<-list()
allSigDirectionList<-list()
sigTableAll<-NULL
sigTableAllGene<-NULL
sigTableAllVar<-c("baseMean","log2FoldChange","lfcSE","stat","pvalue","padj","FoldChange")
countfile_index = 1
titles<-NULL
validComparisons<-c()
for(countfile_index in c(1:length(countfiles))){
countfile = countfiles[countfile_index]
comparisons = comparisons_data[comparisons_data$CountFile == countfile,]
if (grepl(".csv$",countfile)) {
data<-read.csv(countfile,header=T,row.names=idIndex,as.is=T,check.names=FALSE)
} else {
data<-read.delim(countfile,header=T,row.names=idIndex,as.is=T,check.names=FALSE, sep=countSep)
}
if(transformTable){
data<-t(data)
}
data<-data[,colnames(data) != "Feature_length"]
colClass<-sapply(data, class)
countNotNumIndex<-which((colClass!="numeric" & colClass!="integer") | grepl("Gene_Id", colnames(data)))
if (length(countNotNumIndex)==0) {
index<-1
indices<-c()
} else {
index<-max(countNotNumIndex)+1
indices<-c(1:(index-1))
}
countData<-data[,c(index:ncol(data))]
countData[is.na(countData)] <- 0
countData<-round(countData)
if(addCountOne){
countData<-countData+1
}
comparisonNames=comparisons$ComparisonName
pairedspearman<-list()
newVarInData<-setdiff(colnames(data),colnames(dataAllOut))
if (length(newVarInData)>0) {
dataAllOut<-align(dataAllOut,data[,newVarInData,drop=FALSE])
}
resultAllOutVar<-c("baseMean","log2FoldChange","pvalue","padj")
comparison_index = 1
for(comparison_index in c(1:nrow(comparisons))){
comparisonName=comparisons$ComparisonName[comparison_index]
comparisonTitle=comparisons$ComparisonTitle[comparison_index]
if ("designFormula" %in% colnames(comparisons)) {
designFormula=comparisons$designFormula[comparison_index]
print(paste0("designFormula = ", designFormula, "\n"))
if (is.na(designFormula) || (designFormula=="")) {
designFormula=NULL
} else {
designFormula=as.formula(designFormula)
}
} else {
designFormula=NULL
}
if ("contrast" %in% colnames(comparisons)) {
contrast=comparisons$contrast[comparison_index]
if (is.na(contrast) || (contrast=="")) {
contrast=NULL
} else {
contrast=list(strsplit(contrast,";")[[1]])
}
} else {
contrast=NULL
}
titles<-c(titles, comparisonTitle)
cat(comparisonName, " ", comparisonTitle, "\n")
designFile=comparisons$ConditionFile[comparison_index]
#take the group names from the comparison definition; the design file may contain more groups
gnames=unlist(comparisons[comparison_index, c("ReferenceGroupName", "SampleGroupName")])
#gnames=as.character(unique(designData$Condition))
designData<-read.table(designFile, sep="\t", header=T)
designData$Condition<-factor(designData$Condition, levels=gnames)
if(ncol(designData) >= 3){
cat("Data with covariances!\n")
}else{
cat("Data without covariances!\n")
}
if (any(colnames(designData)=="Paired")) {
ispaired<-TRUE
cat("Paired Data!\n")
}else{
ispaired<-FALSE
cat("Not Paired Data!\n")
}
temp<-apply(designData,2,function(x) length(unique(x)))
if (any(temp==1)) {
cat(paste0("Factors with only 1 level in design matrix: ",colnames(designData)[which(temp==1)],"\n"))
cat("They will be removed")
cat("\n")
designData<-designData[,which(temp!=1)]
}
temp<-apply(designData[,-1,drop=F],2,rank)
if (length(unique(rowSums(temp)))==1 | identical(temp[,1],temp[,-1])) {
cat(paste0("The model matrix is not full rank, so the model cannot be fit as specified"))
cat("\n")
cat("Only Condition variable will be kept.")
cat("\n")
designData<-designData[,which(colnames(designData)%in% c("Sample","Condition"))]
}
missedSamples<-as.character(designData$Sample)[!(as.character(designData$Sample) %in% colnames(countData))]
if(length(missedSamples) > 0){
message=paste0("There are missed sample defined in design file but not in real data: ", missedSamples)
warning(message)
writeLines(message,paste0(comparisonName,".error"))
next
}
comparisonData<-countData[,colnames(countData) %in% as.character(designData$Sample),drop=F]
if(ncol(comparisonData) != nrow(designData)){
message=paste0("Data not matched, there are ", nrow(designData), " samples in design file ", designFile, " but ", ncol(comparisonData), " samples in data ")
warning(message)
writeLines(message,paste0(comparisonName,".error"))
next
}
comparisonData<-comparisonData[,as.character(designData$Sample)]
prefix<-paste0(comparisonName, suffix)
if(top25only){
ranks=apply(comparisonData, 2, function(x){
y=x[x > 0]
q=quantile(y)
return(x>=q[4])
})
select=apply(ranks, 1, function(x){
any(x)
})
comparisonData=comparisonData[select,]
}
if(detectedInBothGroup){
conds<-unique(designData$Condition)
data1<-comparisonData[, colnames(comparisonData) %in% designData$Sample[designData$Condition==conds[1]],drop=FALSE]
data2<-comparisonData[, colnames(comparisonData) %in% designData$Sample[designData$Condition==conds[2]],drop=FALSE]
med1<-apply(data1, 1, median) > zeroCount
med2<-apply(data2, 1, median) > zeroCount
med<-med1 & med2
comparisonData<-comparisonData[med,]
}
if(performWilcox){
#quantile normalization and wilcoxon test; normalize.quantiles needs the preprocessCore package
library(preprocessCore)
conds<-unique(designData$Condition)
quantileData=normalize.quantiles(data.matrix(comparisonData))
colnames(quantileData)=colnames(comparisonData)
rownames(quantileData)=rownames(comparisonData)
write.csv(quantileData, file=paste0(prefix, "_quantile.csv"), row.names = T)
data1<-quantileData[, colnames(quantileData) %in% designData$Sample[designData$Condition==conds[1]],drop=FALSE]
data2<-quantileData[, colnames(quantileData) %in% designData$Sample[designData$Condition==conds[2]],drop=FALSE]
diffData=data.frame(quantileData)
diffData$pvalues=unlist(lapply(c(1:nrow(data1)), function(index){
d1=data1[index,]
d2=data2[index,]
test=wilcox.test(d1,d2)
test$p.value
}))
diffData$log2MedianFoldChange=unlist(lapply(c(1:nrow(data1)), function(index){
d1=data1[index,]
d2=data2[index,]
log2(median(d2) / median(d1))
}))
diffData$log2MeanFoldChange=unlist(lapply(c(1:nrow(data1)), function(index){
d1=data1[index,]
d2=data2[index,]
log2(mean(d2) / mean(d1))
}))
diffData=diffData[order(diffData$pvalues),]
write.csv(diffData, file=paste0(prefix, "_quantile_wilcox.csv"), row.names = T)
filterData=diffData[diffData$pvalues<=pvalue & abs(diffData$log2MedianFoldChange) > log2(foldChange),]
write.csv(filterData, file=paste0(prefix, "_quantile_wilcox_sig.csv"), row.names = T)
}
if(minMedianInGroup > 0){
conds<-unique(designData$Condition)
data1<-comparisonData[, colnames(comparisonData) %in% designData$Sample[designData$Condition==conds[1]],drop=FALSE]
data2<-comparisonData[, colnames(comparisonData) %in% designData$Sample[designData$Condition==conds[2]],drop=FALSE]
med1<-apply(data1, 1, median) >= minMedianInGroup
med2<-apply(data2, 1, median) >= minMedianInGroup
med<-med1 | med2
geneNumBeforeFilter=nrow(comparisonData)
comparisonData<-comparisonData[med,]
cat(nrow(comparisonData), " genes with median count >= ", minMedianInGroup, " in at least one group. ", geneNumBeforeFilter-nrow(comparisonData), " genes removed\n")
}
if (nrow(comparisonData)<=1) {
message=paste0("Error: Only ", nrow(comparisonData), " Genes can be used in DESeq2 analysis in comparison ",comparisonName,", ignored. \n")
warning(message)
writeLines(message,paste0(comparisonName,".error"))
next;
}
validComparisons<-c(validComparisons, comparisonName)
if(ispaired){
pairedSamples = unique(designData$Paired)
spcorr<-unlist(lapply(c(1:length(pairedSamples)), function(x){
samples<-designData$Sample[designData$Paired==pairedSamples[x]]
cor(comparisonData[,samples[1]],comparisonData[,samples[2]],method="spearman")
}))
sptable<-data.frame(Name=pairedSamples, Spcorr=spcorr)
write.csv(sptable, file=paste0(prefix, "_Spearman.csv"), row.names=FALSE)
dir.create("details", showWarnings = FALSE)
lapply(c(1:length(pairedSamples)), function(x){
samples<-designData$Sample[designData$Paired==pairedSamples[x]]
log2c1<-log2(comparisonData[,samples[1]]+1)
log2c2<-log2(comparisonData[,samples[2]]+1)
png(paste0("details/", prefix, "_Spearman_", pairedSamples[x], ".png"), width=2000, height=2000, res=300)
plot(log2c1, log2c2, xlab=paste0(samples[1], " [log2(Count + 1)]"), ylab=paste0(samples[2], " [log2(Count + 1)]"))
text(3,15,paste0("SpearmanCorr=", sprintf("%0.3f", spcorr[x])))
dev.off()
})
pairedspearman[[comparisonName]]<-spcorr
}
notEmptyData<-apply(comparisonData, 1, max) > 0
comparisonData<-comparisonData[notEmptyData,]
if(ispaired){
colnames(comparisonData)<-unlist(lapply(c(1:ncol(comparisonData)), function(i){paste0(designData$Paired[i], "_", colnames(comparisonData)[i])}))
}
rownames(designData)<-colnames(comparisonData)
conditionColors<-as.matrix(data.frame(Group=c("red", "blue")[designData$Condition]))
write.csv(comparisonData, file=paste0(prefix, ".csv"))
#some basic graph
dds=DESeqDataSetFromMatrix(countData = comparisonData,
colData = designData,
design = ~1)
colnames(dds)<-colnames(comparisonData)
dds<-myEstimateSizeFactors(dds)
if(filterBaseMean){
cat(paste0("filter by basemean: ", filterBaseMeanValue, "\n"))
baseMeans = rowMeans(counts(dds, normalized=TRUE))
write.csv(baseMeans, file=paste0(prefix, ".basemean.csv"))
dds<-dds[baseMeans > filterBaseMeanValue,]
comparisonData=comparisonData[baseMeans > filterBaseMeanValue,]
}
#draw density graph
rldmatrix<-as.matrix(log2(counts(dds,normalized=FALSE) + 1))
rsdata<-melt(rldmatrix)
colnames(rsdata)<-c("Gene", "Sample", "log2Count")
png(filename=paste0(prefix, "_DESeq2-log2-density.png"), width=4000, height=3000, res=300)
g<-ggplot(rsdata) + geom_density(aes(x=log2Count, colour=Sample)) + xlab("DESeq2 log2 transformed count")
print(g)
dev.off()
width=max(4000, ncol(rldmatrix) * 40 + 1000)
height=max(3000, ncol(rldmatrix) * 40)
png(filename=paste0(prefix, "_DESeq2-log2-density-individual.png"), width=width, height=height, res=300)
g<-ggplot(rsdata) + geom_density(aes(x=log2Count, colour=Sample)) + facet_wrap(~Sample, scales = "free") + xlab("DESeq2 log2 transformed count")
print(g)
dev.off()
fitType<-"parametric"
if(nrow(comparisonData) < 5){
fitType<-"mean"
}
#the retry loop below progressively drops the 10% of samples with the most zero
#counts when vst complains that every gene contains at least one zero; these
#bookkeeping variables were undefined in the original script (reconstruction)
zeronumbers<-names(sort(apply(comparisonData, 2, function(x) sum(x==0))))
percent10<-max(1, round(0.1 * length(zeronumbers)))
removed<-0
while(1){
#varianceStabilizingTransformation
vsdres<-try(vsd <- varianceStabilizingTransformation(dds, blind=TRUE,fitType=fitType))
if(class(vsdres) == "try-error"){
if(grepl("every gene contains at least one zero", vsdres[1])){
removed<-removed+1
keptNumber<-length(zeronumbers) - percent10 * removed
keptSample<-zeronumbers[1:keptNumber]
excludedSample<-zeronumbers[(keptNumber+1):length(zeronumbers)]
comparisonData<-comparisonData[, colnames(comparisonData) %in% keptSample]
designData<-designData[rownames(designData) %in% keptSample,]
dds=DESeqDataSetFromMatrix(countData = comparisonData,
colData = designData,
design = ~1)
colnames(dds)<-colnames(comparisonData)
} else if (grepl("newsplit: out of vertex space", vsdres[1]) | fitType != "mean") {
message=paste0("Warning: varianceStabilizingTransformation function can't run. fitType was set to mean to try again")
warning(message)
fitType<-"mean"
writeLines(message,paste0(comparisonName,".error"))
} else {
message=paste0(paste0("Error: varianceStabilizingTransformation function can't run. ", vsdres))
writeLines(message,paste0(comparisonName,".error"))
stop(message)
}
}else if(all(is.na(assay(vsd))) && fitType != "mean"){
fitType<-"mean"
} else{
conditionColors<-as.matrix(data.frame(Group=c("red", "blue")[designData$Condition]))
break
}
}
if(nrow(comparisonData) > 1){
assayvsd<-assay(vsd)
write.csv(assayvsd, file=paste0(prefix, "_DESeq2-vsd.csv"))
rldmatrix=as.matrix(assayvsd)
#draw pca graph
drawPCA(paste0(prefix,"_geneAll"), rldmatrix, showLabelInPCA, designData, designData$Condition, outputFormat)
if(exists("top25cvInHCA") && top25cvInHCA){
rv<-rowVars(rldmatrix)
countHT<-rldmatrix[rv>=quantile(rv)[4],]
drawHCA(paste0(prefix,"_geneTop25variance"), countHT, ispaired, designData, conditionColors, gnames, outputFormat)
}else{
#draw heatmap
drawHCA(paste0(prefix,"_geneAll"), rldmatrix, ispaired, designData, conditionColors, gnames, outputFormat)
}
}
#different expression analysis
if (is.null(designFormula)) {
designFormula=as.formula(paste0("~",paste0(c(colnames(designData)[-c(1:2)],"Condition"),collapse="+")))
}
cat(paste0("", designFormula), "\n")
dds=DESeqDataSetFromMatrix(countData = comparisonData,
colData = designData,
design = designFormula)
dds<-myEstimateSizeFactors(dds)
bpparam<-MulticoreParam(thread)
# parallel<-ifelse(thread <= 1, FALSE, TRUE)
parallel=FALSE
ddsres<-try(dds <- DESeq(dds,fitType=fitType, parallel=parallel, BPPARAM=bpparam))
if(class(ddsres) == "try-error"){
if( grepl("One can instead use the gene-wise estimates as final estimates", ddsres[1])){
dds <- estimateDispersionsGeneEst(dds)
dispersions(dds) <- mcols(dds)$dispGeneEst
dds<-nbinomWaldTest(dds)
}else if(grepl("newsplit: out of vertex space", ddsres[1])){
dds <- DESeq(dds,fitType="mean", parallel=parallel, BPPARAM=bpparam)
}else{
stop(paste0("DESeq2 failed: ", ddsres[1]))
}
}
if (!is.null(contrast)) {
res<-results(dds, cooksCutoff=cooksCutoff, alpha=alpha, parallel=parallel, BPPARAM=bpparam,contrast=contrast)
} else {
res<-results(dds, cooksCutoff=cooksCutoff, alpha=alpha, parallel=parallel, BPPARAM=bpparam)
}
cat("DESeq2 finished.\n")
if (useRawPvalue==1) {
select<-(!is.na(res$pvalue)) & (res$pvalue<pvalue) & ((res$log2FoldChange >= log2(foldChange)) | (res$log2FoldChange <= -log2(foldChange)))
} else {
select<-(!is.na(res$padj)) & (res$padj<pvalue) & ((res$log2FoldChange >= log2(foldChange)) | (res$log2FoldChange <= -log2(foldChange)))
}
if(length(indices) > 0){
inddata<-data[rownames(comparisonData),indices,drop=F]
tbb<-cbind(inddata, as.data.frame(comparisonData), res)
}else{
tbb<-cbind(as.data.frame(comparisonData), res)
}
tbb$FoldChange<-2^tbb$log2FoldChange
tbbselect<-tbb[select,,drop=F]
tbbAllOut<-as.data.frame(tbb[,resultAllOutVar,drop=F])
tbbAllOut$Significant<-select
colnames(tbbAllOut)<-paste0(colnames(tbbAllOut)," (",comparisonName,")")
resultAllOut<-cbind(as.data.frame(resultAllOut)[row.names(dataAllOut),],as.matrix(tbbAllOut[row.names(dataAllOut),]))
row.names(resultAllOut)<-row.names(dataAllOut)
tbb<-tbb[order(tbb$pvalue),,drop=F]
write.csv(as.data.frame(tbb),paste0(prefix, "_DESeq2.csv"))
tbbselect<-tbbselect[order(tbbselect$pvalue),,drop=F]
sigFile=paste0(prefix, "_DESeq2_sig.csv")
sigTable<-as.data.frame(tbbselect)
write.csv(sigTable,sigFile)
allSigNameList[[comparisonName]]<-row.names(sigTable)
allSigDirectionList[[comparisonName]]<-sign(sigTable$log2FoldChange)
if(nrow(sigTable) > 0){
sigTable$comparisonName<-comparisonName
if (("Feature_gene_name" %in% colnames(sigTable)) & (!("Feature_gene_name" %in% sigTableAllVar))){
sigTableAllVar<-c("Feature_gene_name", sigTableAllVar)
}
sigTableAll<-rbind(sigTableAll,sigTable[,c("comparisonName",sigTableAllVar),drop=FALSE],make.row.names=FALSE)
sigTableAllGene<-c(sigTableAllGene,row.names(sigTable))
}
geneNameField = NULL
lowColNames = tolower(colnames(tbb))
for(name in c("Feature_gene_name", "Gene.Symbol", "Gene_Symbol", "Gene Symbol")){
lowName = tolower(name)
if(lowName %in% lowColNames){
geneNameField=colnames(tbb)[match(lowName, lowColNames)]
break
}
}
if(!is.null(geneNameField)){
write.table(tbb[,c(geneNameField, "stat"),drop=F],paste0(prefix, "_DESeq2_GSEA.rnk"),row.names=F,col.names=F,sep="\t", quote=F)
if(exportSignificantGeneName){
write.table(tbbselect[,c(geneNameField),drop=F], paste0(prefix, "_DESeq2_sig_genename.txt"),row.names=F,col.names=F,sep="\t", quote=F)
}
}else{
write.table(tbb[,c("stat"),drop=F],paste0(prefix, "_DESeq2_GSEA.rnk"),row.names=T,col.names=F,sep="\t", quote=F)
if(exportSignificantGeneName){
write.table(data.frame(name=rownames(tbbselect)), paste0(prefix, "_DESeq2_sig_genename.txt"),row.names=F,col.names=F,sep="\t", quote=F)
}
}
if(showDEGeneCluster){
siggenes<-rownames(rldmatrix) %in% rownames(tbbselect)
nonDEmatrix<-rldmatrix[!siggenes,,drop=F]
DEmatrix<-rldmatrix[siggenes,,drop=F]
drawPCA(paste0(prefix,"_geneDE"),DEmatrix , showLabelInPCA, designData, conditionColors, outputFormat)
drawHCA(paste0(prefix,"_geneDE"),DEmatrix , ispaired, designData, conditionColors, gnames, outputFormat)
drawPCA(paste0(prefix,"_geneNotDE"), nonDEmatrix, showLabelInPCA, designData, conditionColors, outputFormat)
drawHCA(paste0(prefix,"_geneNotDE"), nonDEmatrix, ispaired, designData, conditionColors, gnames, outputFormat)
}
#Top 25 Significant genes barplot
sigDiffNumber<-nrow(tbbselect)
if (sigDiffNumber>0) {
if (sigDiffNumber>25) {
print(paste0("More than 25 genes were significant. Only the top 25 genes will be used in barplot"))
diffResultSig<-tbbselect[order(tbbselect$pvalue)[1:25],]
} else {
diffResultSig<-tbbselect
}
if(!is.null(geneNameField)){
diffResultSig$Name<-as.character(diffResultSig[,geneNameField])
}else{
diffResultSig$Name<-sapply(strsplit(row.names(diffResultSig),";"),function(x) x[1])
}
if (any(duplicated(diffResultSig$Name))) {
whichIndex<-which(duplicated(diffResultSig$Name))
diffResultSig$Name[whichIndex]<-paste0(row.names(diffResultSig)[whichIndex], ":", diffResultSig$Name[whichIndex])
}
diffResultSig$Name <- factor(diffResultSig$Name, levels=diffResultSig$Name[order(diffResultSig$log2FoldChange)])
diffResultSig<-as.data.frame(diffResultSig)
p<-ggplot(diffResultSig,aes(x=Name,y=log2FoldChange,order=log2FoldChange))+geom_bar(stat="identity")+
coord_flip()+
# geom_abline(slope=0,intercept=1,colour="red",linetype = 2)+
scale_y_continuous(name=bquote(log[2]~Fold~Change))+
theme(axis.text = element_text(colour = "black"))
filePrefix<-paste0(prefix,"_DESeq2_sig_barplot")
drawPlot(filePrefix, outputFormat, 7, 7, 3000, 3000, p, "PCA")
} else {
print(paste0("No gene with adjusted p value less than ",pvalue," and fold change larger than ",foldChange))
}
#volcano plot
changeColours<-c(grey="grey",blue="blue",red="red")
diffResult<-as.data.frame(tbb)
diffResult$log10BaseMean<-log10(diffResult$baseMean)
diffResult$colour<-"grey"
if (useRawPvalue==1) {
diffResult<-subset(diffResult, !is.na(pvalue))
diffResult$colour[which(diffResult$pvalue<=pvalue & diffResult$log2FoldChange>=log2(foldChange))]<-"red"
diffResult$colour[which(diffResult$pvalue<=pvalue & diffResult$log2FoldChange<=-log2(foldChange))]<-"blue"
} else {
diffResult<-subset(diffResult, !is.na(padj))
diffResult$colour[which(diffResult$padj<=pvalue & diffResult$log2FoldChange>=log2(foldChange))]<-"red"
diffResult$colour[which(diffResult$padj<=pvalue & diffResult$log2FoldChange<=-log2(foldChange))]<-"blue"
}
write.csv(diffResult, file=paste0(prefix, "_DESeq2_volcanoPlot.csv"))
if (useRawPvalue==1) {
p<-ggplot(diffResult,aes(x=log2FoldChange,y=pvalue))+
scale_y_continuous(trans=reverselog_trans(10),name=bquote(p~value))
} else {
p<-ggplot(diffResult,aes(x=log2FoldChange,y=padj))+
scale_y_continuous(trans=reverselog_trans(10),name=bquote(Adjusted~p~value))
}
p<-p+geom_point(aes(size=log10BaseMean,colour=colour))+
scale_color_manual(values=changeColours,guide = FALSE)+
scale_x_continuous(name=bquote(log[2]~Fold~Change))+
geom_hline(yintercept = 1,colour="grey",linetype = "dotted")+
geom_vline(xintercept = 0,colour="grey",linetype = "dotted")+
guides(size=guide_legend(title=bquote(log[10]~Base~Mean)))+
theme_bw()+
scale_size(range = c(3, 7))+
theme(axis.text = element_text(colour = "black",size=30),
axis.title = element_text(size=30),
legend.text= element_text(size=30),
legend.title= element_text(size=30))
if(!showVolcanoLegend){
p<-p+ theme(legend.position = "none")
}
filePrefix<-paste0(prefix,"_DESeq2_volcanoPlot")
drawPlot(filePrefix, outputFormat, 10, 10, 3000, 3000, p, "Volcano")
}
if(length(pairedspearman) > 0){
filePrefix<-paste0(prefix, "_", ifelse(minMedianInGroup > 0, paste0("spearman_min", minMedianInGroup), "spearman"))
fwidth<-max(2000, 1000 * length(pairedspearman))
for(format in outputFormat){
openPlot(filePrefix, format, 7, 7, fwidth, 2000, "Spearman correlation")
boxplot(pairedspearman)
dev.off()
}
}
}
allprefix=paste0(basename(inputfile), suffix)
#Venn for all significant genes
#Output all significant genes table
if(!is.null(sigTableAll)){
sigTableAll<-cbind(Gene=sigTableAllGene,sigTableAll)
write.csv(sigTableAll,paste0(allprefix, "_DESeq2_allSig.csv"),row.names=FALSE)
#Do venn if length between 2-5
if (length(allSigNameList)>=2 & length(allSigNameList)<=5) {
venn.diagram1<-function (x, filename, height = 3000, width = 3000, resolution = 500,
units = "px", compression = "lzw", na = "stop", main = NULL,
sub = NULL, main.pos = c(0.5, 1.05), main.fontface = "plain",
main.fontfamily = "serif", main.col = "black", main.cex = 1,
main.just = c(0.5, 1), sub.pos = c(0.5, 1.05), sub.fontface = "plain",
sub.fontfamily = "serif", sub.col = "black", sub.cex = 1,
sub.just = c(0.5, 1), category.names = names(x), force.unique = TRUE,
fill=NA,
...)
{
if (is.na(fill[1])) {
if (length(x)==5) {
fill = c("dodgerblue", "goldenrod1", "darkorange1", "seagreen3", "orchid3")
} else if (length(x)==4) {
fill = c("dodgerblue", "goldenrod1", "seagreen3", "orchid3")
} else if (length(x)==3) {
fill = c("dodgerblue", "goldenrod1", "seagreen3")
} else if (length(x)==2) {
fill = c("dodgerblue", "goldenrod1")
}
}
if (force.unique) {
for (i in 1:length(x)) {
x[[i]] <- unique(x[[i]])
}
}
if ("none" == na) {
x <- x
}
else if ("stop" == na) {
for (i in 1:length(x)) {
if (any(is.na(x[[i]]))) {
stop("NAs in dataset", call. = FALSE)
}
}
}
else if ("remove" == na) {
for (i in 1:length(x)) {
x[[i]] <- x[[i]][!is.na(x[[i]])]
}
}
else {
stop("Invalid na option: valid options are \"none\", \"stop\", and \"remove\"")
}
if (0 == length(x) | length(x) > 5) {
stop("Incorrect number of elements.", call. = FALSE)
}
if (1 == length(x)) {
list.names <- category.names
if (is.null(list.names)) {
list.names <- ""
}
grob.list <- VennDiagram::draw.single.venn(area = length(x[[1]]),
category = list.names, ind = FALSE,fill=fill, ...)
}
else if (2 == length(x)) {
grob.list <- VennDiagram::draw.pairwise.venn(area1 = length(x[[1]]),
area2 = length(x[[2]]), cross.area = length(intersect(x[[1]],
x[[2]])), category = category.names, ind = FALSE,
fill=fill,
...)
}
else if (3 == length(x)) {
A <- x[[1]]
B <- x[[2]]
C <- x[[3]]
list.names <- category.names
nab <- intersect(A, B)
nbc <- intersect(B, C)
nac <- intersect(A, C)
nabc <- intersect(nab, C)
grob.list <- VennDiagram::draw.triple.venn(area1 = length(A),
area2 = length(B), area3 = length(C), n12 = length(nab),
n23 = length(nbc), n13 = length(nac), n123 = length(nabc),
category = list.names, ind = FALSE, list.order = 1:3,
fill=fill,
...)
}
else if (4 == length(x)) {
A <- x[[1]]
B <- x[[2]]
C <- x[[3]]
D <- x[[4]]
list.names <- category.names
n12 <- intersect(A, B)
n13 <- intersect(A, C)
n14 <- intersect(A, D)
n23 <- intersect(B, C)
n24 <- intersect(B, D)
n34 <- intersect(C, D)
n123 <- intersect(n12, C)
n124 <- intersect(n12, D)
n134 <- intersect(n13, D)
n234 <- intersect(n23, D)
n1234 <- intersect(n123, D)
grob.list <- VennDiagram::draw.quad.venn(area1 = length(A),
area2 = length(B), area3 = length(C), area4 = length(D),
n12 = length(n12), n13 = length(n13), n14 = length(n14),
n23 = length(n23), n24 = length(n24), n34 = length(n34),
n123 = length(n123), n124 = length(n124), n134 = length(n134),
n234 = length(n234), n1234 = length(n1234), category = list.names,
ind = FALSE, fill=fill,...)
}
else if (5 == length(x)) {
A <- x[[1]]
B <- x[[2]]
C <- x[[3]]
D <- x[[4]]
E <- x[[5]]
list.names <- category.names
n12 <- intersect(A, B)
n13 <- intersect(A, C)
n14 <- intersect(A, D)
n15 <- intersect(A, E)
n23 <- intersect(B, C)
n24 <- intersect(B, D)
n25 <- intersect(B, E)
n34 <- intersect(C, D)
n35 <- intersect(C, E)
n45 <- intersect(D, E)
n123 <- intersect(n12, C)
n124 <- intersect(n12, D)
n125 <- intersect(n12, E)
n134 <- intersect(n13, D)
n135 <- intersect(n13, E)
n145 <- intersect(n14, E)
n234 <- intersect(n23, D)
n235 <- intersect(n23, E)
n245 <- intersect(n24, E)
n345 <- intersect(n34, E)
n1234 <- intersect(n123, D)
n1235 <- intersect(n123, E)
n1245 <- intersect(n124, E)
n1345 <- intersect(n134, E)
n2345 <- intersect(n234, E)
n12345 <- intersect(n1234, E)
grob.list <- VennDiagram::draw.quintuple.venn(area1 = length(A),
area2 = length(B), area3 = length(C), area4 = length(D),
area5 = length(E), n12 = length(n12), n13 = length(n13),
n14 = length(n14), n15 = length(n15), n23 = length(n23),
n24 = length(n24), n25 = length(n25), n34 = length(n34),
n35 = length(n35), n45 = length(n45), n123 = length(n123),
n124 = length(n124), n125 = length(n125), n134 = length(n134),
n135 = length(n135), n145 = length(n145), n234 = length(n234),
n235 = length(n235), n245 = length(n245), n345 = length(n345),
n1234 = length(n1234), n1235 = length(n1235), n1245 = length(n1245),
n1345 = length(n1345), n2345 = length(n2345), n12345 = length(n12345),
category = list.names, ind = FALSE,fill=fill, ...)
}
else {
stop("Invalid size of input object")
}
if (!is.null(sub)) {
grob.list <- add.title(gList = grob.list, x = sub, pos = sub.pos,
fontface = sub.fontface, fontfamily = sub.fontfamily,
col = sub.col, cex = sub.cex)
}
if (!is.null(main)) {
grob.list <- add.title(gList = grob.list, x = main, pos = main.pos,
fontface = main.fontface, fontfamily = main.fontfamily,
col = main.col, cex = main.cex)
}
grid.newpage()
grid.draw(grob.list)
return(1)
# return(grob.list)
}
makeColors<-function(n,colorNames="Set1") {
maxN<-brewer.pal.info[colorNames,"maxcolors"]
if (n<=maxN) {
colors<-brewer.pal(n, colorNames)
if (length(colors)>n) {
colors<-colors[1:n]
}
} else {
colors<-colorRampPalette(brewer.pal(maxN, colorNames))(n)
}
return(colors)
}
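#Example: makeColors(3) yields the first three Set1 colors
#("#E41A1C", "#377EB8", "#4DAF4A"); for n above the palette maximum the
#palette is interpolated via colorRampPalette.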
colors<-makeColors(length(allSigNameList))
for(format in outputFormat){
filePrefix<-paste0(allprefix,"_significantVenn")
openPlot(filePrefix, format, 7, 7, 2000, 2000, "Venn")
venn.diagram1(allSigNameList,cex=2,cat.cex=2,cat.col=colors,fill=colors)
dev.off()
}
}
#Do heatmap significant genes if length larger or equal than 2
if (length(allSigNameList)>=2) {
temp<-cbind(unlist(allSigNameList),unlist(allSigDirectionList))
colnames(temp)<-c("Gene","Direction")
temp<-cbind(temp,comparisonName=rep(names(allSigNameList),sapply(allSigNameList,length)))
temp<-data.frame(temp)
dataForFigure<-temp
#geting dataForFigure order in figure
temp$Direction<-as.integer(as.character(temp$Direction))
temp<-acast(temp, Gene~comparisonName ,value.var="Direction")
temp<-temp[do.call(order, data.frame(temp)),]
maxNameChr<-max(nchar(row.names(temp)))
if (maxNameChr>70) {
tmpNames<-substr(row.names(temp),0,70)
if(length(tmpNames) == length(unique(tmpNames))){
row.names(temp)<-tmpNames
dataForFigure$Gene<-substr(dataForFigure$Gene,0,70)
warning(paste0("The gene names were too long (",maxNameChr,"). Only first 70 letters were kept."))
}
}
dataForFigure$Gene<-factor(dataForFigure$Gene,levels=row.names(temp))
g<-ggplot(dataForFigure, aes(comparisonName, Gene))+
geom_tile(aes(fill=Direction), color="white") +
scale_fill_manual(values=c("light green", "red")) +
theme(axis.text.x = element_text(angle=90, vjust=0.5, size=11, hjust=0.5, face="bold"),
axis.text.y = element_text(size=textSize, face="bold")) +
coord_equal()
width=min(max(2500, 60 * length(unique(dataForFigure$comparisonName))),30000)
height=min(max(2000, 40 * length(unique(dataForFigure$Gene))),30000)
filePrefix<-paste0(allprefix,"_significantHeatmap")
drawPlot(filePrefix, outputFormat, 7, 7, width, height, g, "Significant Heatmap")
}
}
if (! is.null(resultAllOut)) {
#write a file with all information
resultAllOut<-cbind(dataAllOut,resultAllOut[row.names(dataAllOut),])
write.csv(resultAllOut,paste0(allprefix, "_DESeq2.csv"))
if(length(validComparisons) > 1 ){
#volcano plot for all comparisons
temp<-resultAllOut[,-(1:ncol(dataAllOut))]
diffResult<-NULL
diffResultVar<-unique(sapply(strsplit(colnames(temp)," "),function(x) x[1]))
for (i in 1:(length(validComparisons))) {
temp1<-temp[,(i*length(diffResultVar)-(length(diffResultVar)-1)):(i*length(diffResultVar))]
colnames(temp1)<-diffResultVar
temp1$Comparison<-validComparisons[i]
if (is.null(diffResult)) {
diffResult<-temp1
} else {
diffResult<-rbind(diffResult,temp1)
}
}
changeColours<-c(grey="grey",blue="blue",red="red")
diffResult$log10BaseMean<-log10(diffResult$baseMean)
diffResult$Comparison<-allTitles[diffResult$Comparison]
diffResult$Comparison<-factor(diffResult$Comparison,levels=unique(diffResult$Comparison))
diffResult$colour<-"grey"
if (useRawPvalue==1) {
diffResult<-subset(diffResult, !is.na(pvalue))
diffResult$colour[which(diffResult$pvalue<=pvalue & diffResult$log2FoldChange>=log2(foldChange))]<-"red"
diffResult$colour[which(diffResult$pvalue<=pvalue & diffResult$log2FoldChange<=-log2(foldChange))]<-"blue"
} else {
diffResult<-subset(diffResult, !is.na(padj))
diffResult$colour[which(diffResult$padj<=pvalue & diffResult$log2FoldChange>=log2(foldChange))]<-"red"
diffResult$colour[which(diffResult$padj<=pvalue & diffResult$log2FoldChange<=-log2(foldChange))]<-"blue"
}
if (useRawPvalue==1) {
p<-ggplot(diffResult,aes(x=log2FoldChange,y=pvalue))+
scale_y_continuous(trans=reverselog_trans(10),name=bquote(p~value))
} else {
p<-ggplot(diffResult,aes(x=log2FoldChange,y=padj))+
scale_y_continuous(trans=reverselog_trans(10),name=bquote(Adjusted~p~value))
}
p<-p+geom_point(aes(size=log10BaseMean,colour=colour))+
scale_color_manual(values=changeColours,guide = FALSE)+
scale_x_continuous(name=bquote(log[2]~Fold~Change))+
geom_hline(yintercept = 1,colour="grey",linetype = "dotted")+
geom_vline(xintercept = 0,colour="grey",linetype = "dotted")+
guides(size=guide_legend(title=bquote(log[10]~Base~Mean)))+
theme_bw()+
scale_size(range = c(3, 7))+
facet_grid(. ~ Comparison)+
theme(axis.text = element_text(colour = "black",size=25),
axis.title = element_text(size=25),
legend.text= element_text(size=25),
legend.title= element_text(size=25),
strip.text.x = element_text(size = 25),
strip.background=element_rect(fill="white"))
pwidth<-max(12,4*length(allComparisons)+4)
owidth<-max(4000, 1500*length(allComparisons)+1000)
filePrefix<-paste0(allprefix,"_DESeq2_volcanoPlot")
drawPlot(filePrefix, outputFormat, pwidth, 7, owidth, 2000, p, "Volcano")
#output a summary table with numbers of significantly changed genes
sigGeneSummaryTable<-t(table(diffResult[,"Significant"],diffResult[,"Comparison"]))
notSigIndex<-match("0", colnames(sigGeneSummaryTable))
if(is.na(notSigIndex)){
notSignificant=0
}else{
notSignificant=sigGeneSummaryTable[,notSigIndex]
}
sigIndex<-match("1", colnames(sigGeneSummaryTable))
if(is.na(sigIndex)){
significant=0
}else{
significant=sigGeneSummaryTable[,sigIndex]
}
dSigGeneSummaryTable<-data.frame(Comparison=row.names(sigGeneSummaryTable),GeneInComparison=rowSums(sigGeneSummaryTable),NotSignificant=notSignificant,Significant=significant)
write.csv(dSigGeneSummaryTable,paste0(allprefix, "_DESeq2_sigGeneSummary.csv"),row.names=FALSE)
}
}
#export session information
writeLines(capture.output(sessionInfo()), paste0(basename(inputfile),".DESeq2.SessionInfo.txt"))
deseq2version<-paste0("DESeq2,v", packageVersion("DESeq2"))
writeLines(deseq2version, paste0(basename(inputfile),".DESeq2.version"))
#save R Data
save.image(paste0(basename(inputfile),".DESeq2.RData"))
|
{"hexsha": "5c3c123a5587773b07cfd0a91bb7243dfce3c913", "size": 49816, "ext": "r", "lang": "R", "max_stars_repo_path": "lib/Comparison/DESeq2.r", "max_stars_repo_name": "shengqh/ngsperl", "max_stars_repo_head_hexsha": "f81d5bf30171950583bb1ab656f51eabc1e9caf6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-03-25T17:05:39.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-13T07:03:55.000Z", "max_issues_repo_path": "lib/Comparison/DESeq2.r", "max_issues_repo_name": "shengqh/ngsperl", "max_issues_repo_head_hexsha": "f81d5bf30171950583bb1ab656f51eabc1e9caf6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lib/Comparison/DESeq2.r", "max_forks_repo_name": "shengqh/ngsperl", "max_forks_repo_head_hexsha": "f81d5bf30171950583bb1ab656f51eabc1e9caf6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2015-04-02T16:41:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-22T07:25:33.000Z", "avg_line_length": 39.7573822825, "max_line_length": 184, "alphanum_fraction": 0.6081580215, "num_tokens": 13585}
|
# Copyright 2017 Google Inc. and Skytruth Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from classification import metadata
from . import vessel_characterization, fishing_detection
class ModelsTest(tf.test.TestCase):
    num_feature_dimensions = 11

    model_classes = [vessel_characterization.Model, fishing_detection.Model]

    def _build_estimator(self, model_class):
        vmd = metadata.VesselMetadata({}, {})
        model = model_class(self.num_feature_dimensions, vmd, metrics='all')
        return model.make_estimator("dummy_directory")

    def test_estimator_construction(self):
        for i, model_class in enumerate(self.model_classes):
            with self.test_session():
                # This protects against multiple models using the same variable names
                with tf.variable_scope("training-test-{}".format(i)):
                    est = self._build_estimator(model_class)
                    # TODO: test input_fn
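
# Running this file directly (`python models_test.py`) lets tf.test.main()
# below discover and run every test method on ModelsTest.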
if __name__ == '__main__':
    tf.test.main()
|
{"hexsha": "c860c5619297217d292455c1217daf94559df389", "size": 1533, "ext": "py", "lang": "Python", "max_stars_repo_path": "classification/models/models_test.py", "max_stars_repo_name": "marketler/GFW_Vessel_Classification", "max_stars_repo_head_hexsha": "fb2ada9aeebe2582b42e940db86674fd4da6fb07", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classification/models/models_test.py", "max_issues_repo_name": "marketler/GFW_Vessel_Classification", "max_issues_repo_head_hexsha": "fb2ada9aeebe2582b42e940db86674fd4da6fb07", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classification/models/models_test.py", "max_forks_repo_name": "marketler/GFW_Vessel_Classification", "max_forks_repo_head_hexsha": "fb2ada9aeebe2582b42e940db86674fd4da6fb07", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6511627907, "max_line_length": 80, "alphanum_fraction": 0.7214611872, "include": true, "reason": "import numpy", "num_tokens": 316}
|
import numpy as np
import random as rnd
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from tqdm import trange
from .gafs import *
__version__ = '0.0.2'
|
{"hexsha": "aa8c1297080b43ae6f7d0b703b1aa5896cb443b8", "size": 189, "ext": "py", "lang": "Python", "max_stars_repo_path": "gafs/__init__.py", "max_stars_repo_name": "Shemka/GAFS", "max_stars_repo_head_hexsha": "70007116b712a7f700111c04fb9fac5c21654d71", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-01-18T15:12:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-30T11:35:22.000Z", "max_issues_repo_path": "gafs/__init__.py", "max_issues_repo_name": "Shemka/GAFS", "max_issues_repo_head_hexsha": "70007116b712a7f700111c04fb9fac5c21654d71", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "gafs/__init__.py", "max_forks_repo_name": "Shemka/GAFS", "max_forks_repo_head_hexsha": "70007116b712a7f700111c04fb9fac5c21654d71", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-18T23:51:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-21T00:58:30.000Z", "avg_line_length": 27.0, "max_line_length": 52, "alphanum_fraction": 0.8148148148, "include": true, "reason": "import numpy", "num_tokens": 48}
|
[STATEMENT]
lemma ma_sqrt_main: "ma_rat r \<ge> 0 \<Longrightarrow> ma_coeff r = 0 \<Longrightarrow> sqrt (real_of r) = real_of (ma_sqrt r)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>0 \<le> ma_rat r; ma_coeff r = 0\<rbrakk> \<Longrightarrow> sqrt (real_of r) = real_of (ma_sqrt r)
[PROOF STEP]
proof (transfer, unfold Let_def, clarsimp)
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
fix p :: rat
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
assume p: "0 \<le> p"
[PROOF STATE]
proof (state)
this:
0 \<le> p
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
hence abs: "abs p = p"
[PROOF STATE]
proof (prove)
using this:
0 \<le> p
goal (1 subgoal):
1. \<bar>p\<bar> = p
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
\<bar>p\<bar> = p
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
obtain a b where ab: "quotient_of p = (a,b)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>a b. quotient_of p = (a, b) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by force
[PROOF STATE]
proof (state)
this:
quotient_of p = (a, b)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
hence pab: "p = of_int a / of_int b"
[PROOF STATE]
proof (prove)
using this:
quotient_of p = (a, b)
goal (1 subgoal):
1. p = rat_of_int a / rat_of_int b
[PROOF STEP]
by (rule quotient_of_div)
[PROOF STATE]
proof (state)
this:
p = rat_of_int a / rat_of_int b
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
from ab
[PROOF STATE]
proof (chain)
picking this:
quotient_of p = (a, b)
[PROOF STEP]
have b: "b > 0"
[PROOF STATE]
proof (prove)
using this:
quotient_of p = (a, b)
goal (1 subgoal):
1. 0 < b
[PROOF STEP]
by (rule quotient_of_denom_pos)
[PROOF STATE]
proof (state)
this:
0 < b
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
with p pab
[PROOF STATE]
proof (chain)
picking this:
0 \<le> p
p = rat_of_int a / rat_of_int b
0 < b
[PROOF STEP]
have abpos: "a * b \<ge> 0"
[PROOF STATE]
proof (prove)
using this:
0 \<le> p
p = rat_of_int a / rat_of_int b
0 < b
goal (1 subgoal):
1. 0 \<le> a * b
[PROOF STEP]
by (metis of_int_0_le_iff of_int_le_0_iff zero_le_divide_iff zero_le_mult_iff)
[PROOF STATE]
proof (state)
this:
0 \<le> a * b
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
have rab: "of_nat (nat (a * b)) = real_of_int a * real_of_int b"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. real (nat (a * b)) = real_of_int a * real_of_int b
[PROOF STEP]
using abpos
[PROOF STATE]
proof (prove)
using this:
0 \<le> a * b
goal (1 subgoal):
1. real (nat (a * b)) = real_of_int a * real_of_int b
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
real (nat (a * b)) = real_of_int a * real_of_int b
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
let ?lhs = "sqrt (of_int a / of_int b)"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
let ?rhs = "(case case quotient_of p of
(a, b) \<Rightarrow> (case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (of_int b), nat \<bar>a * b\<bar>)
| s # x \<Rightarrow> (of_int s / of_int b, 0, 0)) of
(p, q, b) \<Rightarrow> of_rat p + of_rat q * sqrt (of_nat b))"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
have "sqrt (real_of_rat p) = ?lhs"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sqrt (real_of_rat p) = sqrt (real_of_int a / real_of_int b)
[PROOF STEP]
unfolding pab
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sqrt (real_of_rat (rat_of_int a / rat_of_int b)) = sqrt (real_of_int a / real_of_int b)
[PROOF STEP]
by (metis of_rat_divide of_rat_of_int_eq)
[PROOF STATE]
proof (state)
this:
sqrt (real_of_rat p) = sqrt (real_of_int a / real_of_int b)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
sqrt (real_of_rat p) = sqrt (real_of_int a / real_of_int b)
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
have "\<dots> = ?rhs"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
proof (cases "sqrt_int \<bar>a * b\<bar>")
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
case Nil
[PROOF STATE]
proof (state)
this:
sqrt_int \<bar>a * b\<bar> = []
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
define sb where "sb = sqrt (of_int b)"
[PROOF STATE]
proof (state)
this:
sb = sqrt (real_of_int b)
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
define sa where "sa = sqrt (of_int a)"
[PROOF STATE]
proof (state)
this:
sa = sqrt (real_of_int a)
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
from b sb_def
[PROOF STATE]
proof (chain)
picking this:
0 < b
sb = sqrt (real_of_int b)
[PROOF STEP]
have sb: "sb > 0" "real_of_int b > 0"
[PROOF STATE]
proof (prove)
using this:
0 < b
sb = sqrt (real_of_int b)
goal (1 subgoal):
1. 0 < sb &&& 0 < real_of_int b
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
0 < sb
0 < real_of_int b
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
have sbb: "sb * sb = real_of_int b"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sb * sb = real_of_int b
[PROOF STEP]
unfolding sb_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sqrt (real_of_int b) * sqrt (real_of_int b) = real_of_int b
[PROOF STEP]
by (rule sqrt_sqrt, insert b, auto)
[PROOF STATE]
proof (state)
this:
sb * sb = real_of_int b
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
from Nil
[PROOF STATE]
proof (chain)
picking this:
sqrt_int \<bar>a * b\<bar> = []
[PROOF STEP]
have "?thesis = (sa / sb =
inverse (of_int b) * (sa * sb))"
[PROOF STATE]
proof (prove)
using this:
sqrt_int \<bar>a * b\<bar> = []
goal (1 subgoal):
1. (sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = (sa / sb = inverse (real_of_int b) * (sa * sb))
[PROOF STEP]
unfolding ab sa_def sb_def
[PROOF STATE]
proof (prove)
using this:
sqrt_int \<bar>a * b\<bar> = []
goal (1 subgoal):
1. (sqrt (real_of_int a / real_of_int b) = (case case (a, b) of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = (sqrt (real_of_int a) / sqrt (real_of_int b) = inverse (real_of_int b) * (sqrt (real_of_int a) * sqrt (real_of_int b)))
[PROOF STEP]
using abpos
[PROOF STATE]
proof (prove)
using this:
sqrt_int \<bar>a * b\<bar> = []
0 \<le> a * b
goal (1 subgoal):
1. (sqrt (real_of_int a / real_of_int b) = (case case (a, b) of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = (sqrt (real_of_int a) / sqrt (real_of_int b) = inverse (real_of_int b) * (sqrt (real_of_int a) * sqrt (real_of_int b)))
[PROOF STEP]
by (simp add: rab of_rat_divide real_sqrt_mult real_sqrt_divide of_rat_inverse)
[PROOF STATE]
proof (state)
this:
(sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = (sa / sb = inverse (real_of_int b) * (sa * sb))
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = (sa / sb = inverse (real_of_int b) * (sa * sb))
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
have "\<dots> = (sa = inverse (of_int b) * sa * (sb * sb))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (sa / sb = inverse (real_of_int b) * (sa * sb)) = (sa = inverse (real_of_int b) * sa * (sb * sb))
[PROOF STEP]
using sb
[PROOF STATE]
proof (prove)
using this:
0 < sb
0 < real_of_int b
goal (1 subgoal):
1. (sa / sb = inverse (real_of_int b) * (sa * sb)) = (sa = inverse (real_of_int b) * sa * (sb * sb))
[PROOF STEP]
by (metis b divide_real_def eq_divide_imp inverse_divide inverse_inverse_eq inverse_mult_distrib less_int_code(1) of_int_eq_0_iff real_sqrt_eq_zero_cancel_iff sb_def sbb times_divide_eq_right)
[PROOF STATE]
proof (state)
this:
(sa / sb = inverse (real_of_int b) * (sa * sb)) = (sa = inverse (real_of_int b) * sa * (sb * sb))
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(sa / sb = inverse (real_of_int b) * (sa * sb)) = (sa = inverse (real_of_int b) * sa * (sb * sb))
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
have "\<dots> = True"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (sa = inverse (real_of_int b) * sa * (sb * sb)) = True
[PROOF STEP]
using sb(2)
[PROOF STATE]
proof (prove)
using this:
0 < real_of_int b
goal (1 subgoal):
1. (sa = inverse (real_of_int b) * sa * (sb * sb)) = True
[PROOF STEP]
unfolding sbb
[PROOF STATE]
proof (prove)
using this:
0 < real_of_int b
goal (1 subgoal):
1. (sa = inverse (real_of_int b) * sa * real_of_int b) = True
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
(sa = inverse (real_of_int b) * sa * (sb * sb)) = True
goal (2 subgoals):
1. sqrt_int \<bar>a * b\<bar> = [] \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
2. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
(sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = True
[PROOF STEP]
show "?thesis"
[PROOF STATE]
proof (prove)
using this:
(sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))) = True
goal (1 subgoal):
1. sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
case (Cons s x)
[PROOF STATE]
proof (state)
this:
sqrt_int \<bar>a * b\<bar> = s # x
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
from b
[PROOF STATE]
proof (chain)
picking this:
0 < b
[PROOF STEP]
have b: "real_of_int b > 0"
[PROOF STATE]
proof (prove)
using this:
0 < b
goal (1 subgoal):
1. 0 < real_of_int b
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
0 < real_of_int b
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
from Cons sqrt_int[of "abs (a * b)"]
[PROOF STATE]
proof (chain)
picking this:
sqrt_int \<bar>a * b\<bar> = s # x
set (sqrt_int \<bar>a * b\<bar>) = {y. y * y = \<bar>a * b\<bar>}
[PROOF STEP]
have "s * s = abs (a * b)"
[PROOF STATE]
proof (prove)
using this:
sqrt_int \<bar>a * b\<bar> = s # x
set (sqrt_int \<bar>a * b\<bar>) = {y. y * y = \<bar>a * b\<bar>}
goal (1 subgoal):
1. s * s = \<bar>a * b\<bar>
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
s * s = \<bar>a * b\<bar>
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
with sqrt_int_pos[OF Cons]
[PROOF STATE]
proof (chain)
picking this:
0 \<le> s
s * s = \<bar>a * b\<bar>
[PROOF STEP]
have "sqrt (real_of_int (abs (a * b))) = of_int s"
[PROOF STATE]
proof (prove)
using this:
0 \<le> s
s * s = \<bar>a * b\<bar>
goal (1 subgoal):
1. sqrt (real_of_int \<bar>a * b\<bar>) = real_of_int s
[PROOF STEP]
by (metis abs_of_nonneg of_int_mult of_int_abs real_sqrt_abs2)
[PROOF STATE]
proof (state)
this:
sqrt (real_of_int \<bar>a * b\<bar>) = real_of_int s
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
with abpos
[PROOF STATE]
proof (chain)
picking this:
0 \<le> a * b
sqrt (real_of_int \<bar>a * b\<bar>) = real_of_int s
[PROOF STEP]
have "of_int s = sqrt (real_of_int (a * b))"
[PROOF STATE]
proof (prove)
using this:
0 \<le> a * b
sqrt (real_of_int \<bar>a * b\<bar>) = real_of_int s
goal (1 subgoal):
1. real_of_int s = sqrt (real_of_int (a * b))
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
real_of_int s = sqrt (real_of_int (a * b))
goal (1 subgoal):
1. \<And>aa list. sqrt_int \<bar>a * b\<bar> = aa # list \<Longrightarrow> sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
real_of_int s = sqrt (real_of_int (a * b))
goal (1 subgoal):
1. sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
unfolding ab split
[PROOF STATE]
proof (prove)
using this:
real_of_int s = sqrt (real_of_int (a * b))
goal (1 subgoal):
1. sqrt (real_of_int a / real_of_int b) = (case case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
using Cons b
[PROOF STATE]
proof (prove)
using this:
real_of_int s = sqrt (real_of_int (a * b))
sqrt_int \<bar>a * b\<bar> = s # x
0 < real_of_int b
goal (1 subgoal):
1. sqrt (real_of_int a / real_of_int b) = (case case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
by (auto simp: of_rat_divide field_simps real_sqrt_divide real_sqrt_mult)
[PROOF STATE]
proof (state)
this:
sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
sqrt (real_of_int a / real_of_int b) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
goal (1 subgoal):
1. \<And>p. 0 \<le> p \<Longrightarrow> sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
show "sqrt (real_of_rat p) = ?rhs"
[PROOF STATE]
proof (prove)
using this:
sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
goal (1 subgoal):
1. sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
[PROOF STEP]
.
[PROOF STATE]
proof (state)
this:
sqrt (real_of_rat p) = (case case quotient_of p of (a, b) \<Rightarrow> case sqrt_int \<bar>a * b\<bar> of [] \<Rightarrow> (0, inverse (rat_of_int b), nat \<bar>a * b\<bar>) | s # x \<Rightarrow> (rat_of_int s / rat_of_int b, 0, 0) of (p, q, b) \<Rightarrow> real_of_rat p + real_of_rat q * sqrt (real b))
goal:
No subgoals!
[PROOF STEP]
qed
|
{"llama_tokens": 14281, "file": "Real_Impl_Real_Impl", "length": 75}
|
Program zheevr_example
! ZHEEVR Example Program Text
! Copyright (c) 2018, Numerical Algorithms Group (NAG Ltd.)
! For licence see
! https://github.com/numericalalgorithmsgroup/LAPACK_Examples/blob/master/LICENCE.md
! .. Use Statements ..
Use blas_interfaces, Only: zscal
Use lapack_example_aux, Only: nagf_file_print_matrix_complex_gen
Use lapack_interfaces, Only: zheevr
Use lapack_precision, Only: dp
! .. Implicit None Statement ..
Implicit None
! .. Parameters ..
Real (Kind=dp), Parameter :: zero = 0.0E+0_dp
Integer, Parameter :: nb = 64, nin = 5, nout = 6
! .. Local Scalars ..
Real (Kind=dp) :: abstol, vl, vu
Integer :: i, ifail, il, info, iu, k, lda, ldz, liwork, lrwork, lwork, &
m, n
! .. Local Arrays ..
Complex (Kind=dp), Allocatable :: a(:, :), work(:), z(:, :)
Complex (Kind=dp) :: dummy(1)
Real (Kind=dp) :: rdum(1)
Real (Kind=dp), Allocatable :: rwork(:), w(:)
Integer :: idum(1)
Integer, Allocatable :: isuppz(:), iwork(:)
! .. Intrinsic Procedures ..
Intrinsic :: abs, cmplx, conjg, max, maxloc, nint, real
! .. Executable Statements ..
Write (nout, *) 'ZHEEVR Example Program Results'
Write (nout, *)
! Skip heading in data file and read N and the lower and upper
! indices of the smallest and largest eigenvalues to be found
Read (nin, *)
Read (nin, *) n, il, iu
lda = n
ldz = n
m = n
Allocate (a(lda,n), z(ldz,m), w(n), isuppz(2*m))
! Use routine workspace query to get optimal workspace.
lwork = -1
liwork = -1
lrwork = -1
Call zheevr('Vectors', 'I', 'Upper', n, a, lda, vl, vu, il, iu, abstol, &
m, w, z, ldz, isuppz, dummy, lwork, rdum, lrwork, idum, liwork, info)
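      ! (Setting LWORK = LRWORK = LIWORK = -1 is the LAPACK workspace-query
      ! convention: the call above computes nothing and only returns the
      ! optimal workspace sizes in DUMMY(1), RDUM(1) and IDUM(1).)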
! Make sure that there is enough workspace for block size nb.
lwork = max((nb+1)*n, nint(real(dummy(1))))
lrwork = max(24*n, nint(rdum(1)))
liwork = max(10*n, idum(1))
Allocate (work(lwork), rwork(lrwork), iwork(liwork))
! Read the upper triangular part of the matrix A from data file
Read (nin, *)(a(i,i:n), i=1, n)
! Set the absolute error tolerance for eigenvalues. With ABSTOL
! set to zero, the default value is used instead
abstol = zero
      ! Solve the Hermitian eigenvalue problem
Call zheevr('Vectors', 'I', 'Upper', n, a, lda, vl, vu, il, iu, abstol, &
m, w, z, ldz, isuppz, work, lwork, rwork, lrwork, iwork, liwork, info)
If (info==0) Then
! Print solution
Write (nout, *) 'Selected eigenvalues'
Write (nout, 100) w(1:m)
Flush (nout)
! Normalize the eigenvectors so that the element of largest absolute
! value is real.
Do i = 1, m
rwork(1:n) = abs(z(1:n,i))
k = maxloc(rwork(1:n), 1)
Call zscal(n, conjg(z(k,i))/cmplx(abs(z(k,i)),kind=dp), z(1,i), 1)
End Do
! ifail: behaviour on error exit
! =0 for hard exit, =1 for quiet-soft, =-1 for noisy-soft
ifail = 0
Call nagf_file_print_matrix_complex_gen('General', ' ', n, m, z, ldz, &
'Selected eigenvectors', ifail)
Else
Write (nout, 110) 'Failure in ZHEEVR. INFO =', info
End If
100 Format (3X, (8F8.4))
110 Format (1X, A, I5)
End Program
|
{"hexsha": "f6b955af16ec294e5f24c24285afe01e41da8033", "size": 3389, "ext": "f90", "lang": "FORTRAN", "max_stars_repo_path": "examples/source/zheevr_example.f90", "max_stars_repo_name": "numericalalgorithmsgroup/LAPACK_examples", "max_stars_repo_head_hexsha": "0dde05ae4817ce9698462bbca990c4225337f481", "max_stars_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2018-01-28T15:48:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-18T09:26:43.000Z", "max_issues_repo_path": "examples/source/zheevr_example.f90", "max_issues_repo_name": "numericalalgorithmsgroup/LAPACK_examples", "max_issues_repo_head_hexsha": "0dde05ae4817ce9698462bbca990c4225337f481", "max_issues_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/source/zheevr_example.f90", "max_forks_repo_name": "numericalalgorithmsgroup/LAPACK_examples", "max_forks_repo_head_hexsha": "0dde05ae4817ce9698462bbca990c4225337f481", "max_forks_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2019-04-19T12:22:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-18T03:32:12.000Z", "avg_line_length": 34.2323232323, "max_line_length": 90, "alphanum_fraction": 0.5812924166, "num_tokens": 1073}
|
// Function by the Orthanc project to load a dictionary from a memory
// buffer, which is necessary in sandboxed environments. This is an
// adapted version of DcmDataDictionary::loadDictionary().
#include <string>
#include <boost/noncopyable.hpp>
struct OrthancLinesIterator;
// This plain old C class is implemented in "../../Core/Toolbox.h"
OrthancLinesIterator* OrthancLinesIterator_Create(const std::string& content);
bool OrthancLinesIterator_GetLine(std::string& target,
const OrthancLinesIterator* iterator);
void OrthancLinesIterator_Next(OrthancLinesIterator* iterator);
void OrthancLinesIterator_Free(OrthancLinesIterator* iterator);
class LinesIterator : public boost::noncopyable
{
private:
OrthancLinesIterator* iterator_;
public:
LinesIterator(const std::string& content) :
iterator_(NULL)
{
iterator_ = OrthancLinesIterator_Create(content);
}
~LinesIterator()
{
if (iterator_ != NULL)
{
OrthancLinesIterator_Free(iterator_);
iterator_ = NULL;
}
}
bool GetLine(std::string& target) const
{
if (iterator_ != NULL)
{
return OrthancLinesIterator_GetLine(target, iterator_);
}
else
{
return false;
}
}
void Next()
{
if (iterator_ != NULL)
{
OrthancLinesIterator_Next(iterator_);
}
}
};
OFBool
DcmDataDictionary::loadFromMemory(const std::string& content, OFBool errorIfAbsent)
{
int lineNumber = 0;
char* lineFields[DCM_MAXDICTFIELDS + 1];
int fieldsPresent;
DcmDictEntry* e;
int errorsEncountered = 0;
OFBool errorOnThisLine = OFFalse;
int i;
DcmTagKey key, upperKey;
DcmDictRangeRestriction groupRestriction = DcmDictRange_Unspecified;
DcmDictRangeRestriction elementRestriction = DcmDictRange_Unspecified;
DcmVR vr;
char* vrName;
char* tagName;
char* privCreator;
int vmMin, vmMax = 1;
const char* standardVersion;
LinesIterator iterator(content);
std::string line;
while (iterator.GetLine(line)) {
iterator.Next();
if (line.size() >= DCM_MAXDICTLINESIZE) {
DCMDATA_ERROR("DcmDataDictionary: Too long line: " << line);
continue;
}
lineNumber++;
if (onlyWhitespace(line.c_str())) {
continue; /* ignore this line */
}
if (isaCommentLine(line.c_str())) {
continue; /* ignore this line */
}
errorOnThisLine = OFFalse;
/* fields are tab separated */
fieldsPresent = splitFields(line.c_str(), lineFields,
DCM_MAXDICTFIELDS,
DCM_DICT_FIELD_SEPARATOR_CHAR);
/* initialize dict entry fields */
vrName = NULL;
tagName = NULL;
privCreator = NULL;
vmMin = vmMax = 1;
standardVersion = "DICOM";
switch (fieldsPresent) {
case 0:
case 1:
case 2:
DCMDATA_ERROR("DcmDataDictionary: "
<< "too few fields (line " << lineNumber << ")");
errorOnThisLine = OFTrue;
break;
default:
DCMDATA_ERROR("DcmDataDictionary: "
<< "too many fields (line " << lineNumber << "): ");
errorOnThisLine = OFTrue;
break;
case 5:
stripWhitespace(lineFields[4]);
standardVersion = lineFields[4];
/* drop through to next case label */
case 4:
/* the VM field is present */
if (!parseVMField(lineFields[3], vmMin, vmMax)) {
DCMDATA_ERROR("DcmDataDictionary: "
<< "bad VM field (line " << lineNumber << "): " << lineFields[3]);
errorOnThisLine = OFTrue;
}
/* drop through to next case label */
case 3:
if (!parseWholeTagField(lineFields[0], key, upperKey,
groupRestriction, elementRestriction, privCreator))
{
DCMDATA_ERROR("DcmDataDictionary: "
<< "bad Tag field (line " << lineNumber << "): " << lineFields[0]);
errorOnThisLine = OFTrue;
} else {
/* all is OK */
vrName = lineFields[1];
stripWhitespace(vrName);
tagName = lineFields[2];
stripWhitespace(tagName);
}
}
if (!errorOnThisLine) {
/* check the VR Field */
vr.setVR(vrName);
if (vr.getEVR() == EVR_UNKNOWN) {
DCMDATA_ERROR("DcmDataDictionary: "
<< "bad VR field (line " << lineNumber << "): " << vrName);
errorOnThisLine = OFTrue;
}
}
if (!errorOnThisLine) {
e = new DcmDictEntry(
key.getGroup(), key.getElement(),
upperKey.getGroup(), upperKey.getElement(),
vr, tagName, vmMin, vmMax, standardVersion, OFTrue,
privCreator);
e->setGroupRangeRestriction(groupRestriction);
e->setElementRangeRestriction(elementRestriction);
addEntry(e);
}
for (i = 0; i < fieldsPresent; i++) {
free(lineFields[i]);
lineFields[i] = NULL;
}
delete[] privCreator;
if (errorOnThisLine) {
errorsEncountered++;
}
}
/* return OFFalse in case of errors and set internal state accordingly */
if (errorsEncountered == 0) {
dictionaryLoaded = OFTrue;
return OFTrue;
}
else {
dictionaryLoaded = OFFalse;
return OFFalse;
}
}
|
{"hexsha": "b4bf8c283e18ee616ce310c9ca337058333b7e50", "size": 5301, "ext": "cc", "lang": "C++", "max_stars_repo_path": "DicomWebStorge/Orthanc-1.7.4/OrthancFramework/Resources/Patches/dcmtk-dcdict_orthanc.cc", "max_stars_repo_name": "a2609194449/Assistant-decision-making-system-for-gallbladder-cancer-", "max_stars_repo_head_hexsha": "75a9d3432cb510ea94fa09cc9b440e8b8e7f0a84", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1.0, "max_stars_repo_stars_event_min_datetime": "2020-11-05T08:34:23.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-05T08:34:23.000Z", "max_issues_repo_path": "DicomWebStorge/Orthanc-1.7.4/OrthancFramework/Resources/Patches/dcmtk-dcdict_orthanc.cc", "max_issues_repo_name": "a2609194449/Assistant-decision-making-system-for-gallbladder-cancer-", "max_issues_repo_head_hexsha": "75a9d3432cb510ea94fa09cc9b440e8b8e7f0a84", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DicomWebStorge/Orthanc-1.7.4/OrthancFramework/Resources/Patches/dcmtk-dcdict_orthanc.cc", "max_forks_repo_name": "a2609194449/Assistant-decision-making-system-for-gallbladder-cancer-", "max_forks_repo_head_hexsha": "75a9d3432cb510ea94fa09cc9b440e8b8e7f0a84", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1.0, "max_forks_repo_forks_event_min_datetime": "2020-11-12T09:00:30.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-12T09:00:30.000Z", "avg_line_length": 25.7330097087, "max_line_length": 91, "alphanum_fraction": 0.6085644218, "num_tokens": 1312}
|
cdis
cdis Open Source License/Disclaimer, Forecast Systems Laboratory
cdis NOAA/OAR/FSL, 325 Broadway Boulder, CO 80305
cdis
cdis This software is distributed under the Open Source Definition,
cdis which may be found at http://www.opensource.org/osd.html.
cdis
cdis In particular, redistribution and use in source and binary forms,
cdis with or without modification, are permitted provided that the
cdis following conditions are met:
cdis
cdis - Redistributions of source code must retain this notice, this
cdis list of conditions and the following disclaimer.
cdis
cdis - Redistributions in binary form must provide access to this
cdis notice, this list of conditions and the following disclaimer, and
cdis the underlying source code.
cdis
cdis - All modifications to this software must be clearly documented,
cdis and are solely the responsibility of the agent making the
cdis modifications.
cdis
cdis - If significant modifications or enhancements are made to this
cdis software, the FSL Software Policy Manager
cdis (softwaremgr@fsl.noaa.gov) should be notified.
cdis
cdis THIS SOFTWARE AND ITS DOCUMENTATION ARE IN THE PUBLIC DOMAIN
cdis AND ARE FURNISHED "AS IS." THE AUTHORS, THE UNITED STATES
cdis GOVERNMENT, ITS INSTRUMENTALITIES, OFFICERS, EMPLOYEES, AND
cdis AGENTS MAKE NO WARRANTY, EXPRESS OR IMPLIED, AS TO THE USEFULNESS
cdis OF THE SOFTWARE AND DOCUMENTATION FOR ANY PURPOSE. THEY ASSUME
cdis NO RESPONSIBILITY (1) FOR THE USE OF THE SOFTWARE AND
cdis DOCUMENTATION; OR (2) TO PROVIDE TECHNICAL SUPPORT TO USERS.
cdis
cdis
function tcon(t,d)
c
c this function returns the temperature tcon (celsius) at the lifting
c condensation level, given the temperature t (celsius) and the
c dew point d (celsius).
c
c baker,schlatter 17-may-1982 original version
c
c compute the dew point depression s.
s = t-d
c the approximation below, a third order polynomial in s and t,
c is due to herman wobus. the source of data for fitting the
c polynomial is unknown.
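c as a rough sanity check (example values chosen here, not taken
c from the original source): t = 25 and d = 15 give s = 10, the
c polynomial gives dlt of about 12.3, so tcon is roughly 12.7.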
dlt = s*(1.2185+1.278e-03*t+
1 s*(-2.19e-03+1.173e-05*s-5.2e-06*t))
tcon = t-dlt
return
end
|
{"hexsha": "547f855bf0f39c929f039bcdad625ca566ff57fa", "size": 2204, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "src/lib/mthermo/tcon.f", "max_stars_repo_name": "maxinye/laps-mirror", "max_stars_repo_head_hexsha": "b3f7c08273299a9e19b2187f96bd3eee6e0aa01b", "max_stars_repo_licenses": ["Intel", "Unlicense", "OLDAP-2.2.1", "NetCDF"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-04-05T12:28:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-29T06:37:29.000Z", "max_issues_repo_path": "src/lib/mthermo/tcon.f", "max_issues_repo_name": "longwosion/laps-mirror", "max_issues_repo_head_hexsha": "b3f7c08273299a9e19b2187f96bd3eee6e0aa01b", "max_issues_repo_licenses": ["Intel", "NetCDF", "OLDAP-2.2.1", "Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/lib/mthermo/tcon.f", "max_forks_repo_name": "longwosion/laps-mirror", "max_forks_repo_head_hexsha": "b3f7c08273299a9e19b2187f96bd3eee6e0aa01b", "max_forks_repo_licenses": ["Intel", "NetCDF", "OLDAP-2.2.1", "Unlicense"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-04-27T12:51:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-19T13:57:44.000Z", "avg_line_length": 40.0727272727, "max_line_length": 73, "alphanum_fraction": 0.7441016334, "num_tokens": 588}
|
[STATEMENT]
lemma (in loc1) [simp]: "infinite (deriv s) ==> init s ==> (contains f n (m,A)) ==> ~ is_FEx A ==> m = 0"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>infinite (deriv s); init s; contains f n (m, A); \<not> is_FEx A\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(frule_tac n=n in index0)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>infinite (deriv s); init s; contains f n (m, A); \<not> is_FEx A; \<forall>u m A. (n, u) \<in> deriv s \<longrightarrow> (m, A) \<in> set u \<longrightarrow> \<not> is_FEx A \<longrightarrow> m = 0\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(frule_tac is_path_f)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>infinite (deriv s); init s; contains f n (m, A); \<not> is_FEx A; \<forall>u m A. (n, u) \<in> deriv s \<longrightarrow> (m, A) \<in> set u \<longrightarrow> \<not> is_FEx A \<longrightarrow> m = 0; \<forall>n. f n \<in> deriv s \<and> fst (f n) = n \<and> snd (f (Suc n)) \<in> set (subs (snd (f n))) \<and> infinite (deriv (snd (f n)))\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(drule_tac x=n in spec)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>infinite (deriv s); init s; contains f n (m, A); \<not> is_FEx A; \<forall>u m A. (n, u) \<in> deriv s \<longrightarrow> (m, A) \<in> set u \<longrightarrow> \<not> is_FEx A \<longrightarrow> m = 0; f n \<in> deriv s \<and> fst (f n) = n \<and> snd (f (Suc n)) \<in> set (subs (snd (f n))) \<and> infinite (deriv (snd (f n)))\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(case_tac "f n")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>a b. \<lbrakk>infinite (deriv s); init s; contains f n (m, A); \<not> is_FEx A; \<forall>u m A. (n, u) \<in> deriv s \<longrightarrow> (m, A) \<in> set u \<longrightarrow> \<not> is_FEx A \<longrightarrow> m = 0; f n \<in> deriv s \<and> fst (f n) = n \<and> snd (f (Suc n)) \<in> set (subs (snd (f n))) \<and> infinite (deriv (snd (f n))); f n = (a, b)\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(simp)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>b. \<lbrakk>infinite (deriv s); init s; contains f n (m, A); \<not> is_FEx A; \<forall>u. (n, u) \<in> deriv s \<longrightarrow> (\<forall>m A. (m, A) \<in> set u \<longrightarrow> \<not> is_FEx A \<longrightarrow> m = 0); (n, b) \<in> deriv s \<and> snd (f (Suc n)) \<in> set (subs b) \<and> infinite (deriv b); f n = (n, b)\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(simp add: contains_def)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>b. \<lbrakk>infinite (deriv s); init s; (m, A) \<in> set b; \<not> is_FEx A; \<forall>u. (n, u) \<in> deriv s \<longrightarrow> (\<forall>m A. (m, A) \<in> set u \<longrightarrow> \<not> is_FEx A \<longrightarrow> m = 0); (n, b) \<in> deriv s \<and> snd (f (Suc n)) \<in> set (subs b) \<and> infinite (deriv b); f n = (n, b)\<rbrakk> \<Longrightarrow> m = 0
[PROOF STEP]
apply(force)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
|
{"llama_tokens": 1286, "file": "Verified-Prover_Prover", "length": 8}
|
"""
Module for move generation in the game Reversi.

The algorithm is implemented quite inefficiently and could be improved.
"""
import numpy
BOARD_SIZE = 8
EMPTY = 2
BLACK = 0
WHITE = 1
a = ord("a")
NOTATION_CHART = {n: chr(n + a) for n in xrange(8)}
COORDINATE_CHART = {chr(n + a): n for n in xrange(8)}
CONVERSION_CHART = {
0: "X",
1: "O",
2: "-"
}
NUMBER_TO_PIECE = {
2: " ",
0: "@@",
1: "--"
}
STARTING_LEGAL_MOVES = [(2, 3), (3, 2), (4, 5), (5, 4)]
STARTING_LEGAL_MOVES_NOTATION = ['d3', 'c4', 'f5', 'e6']
START_POSITION = numpy.array([
[2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 1, 0, 2, 2, 2],
[2, 2, 2, 0, 1, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2],
])
ALLOWED_COORDINATES = frozenset([(x, y) for x in xrange(8) for y in xrange(8)])
ALLOWED_COORDINATES = {x: False for x in ALLOWED_COORDINATES}
NOT_ALLOWED = frozenset([(x, y) for x in [-1, 8] for y in xrange(-1, 9)] +
[(x, y) for x in xrange(8) for y in [-1, 8]])
NOT_ALLOWED = {x: True for x in NOT_ALLOWED}
# COORDINATES = ALLOWED_COORDINATES.copy()
# COORDINATES.update(NOT_ALLOWED)
AROUND_FUNCTIONS = [
lambda x, y: (x - 1, y - 1),
lambda x, y: (x - 1, y),
lambda x, y: (x - 1, y + 1),
lambda x, y: (x, y - 1),
lambda x, y: (x, y + 1),
lambda x, y: (x + 1, y - 1),
lambda x, y: (x + 1, y),
lambda x, y: (x + 1, y + 1)
]
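# Note (a reading of the code, not in the original comments): the order of
# these eight direction lambdas matters, because the `around` tuple built
# inside Board._legal_position_directions mirrors it index for index.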
COORDINATES = {}
for coordinate in ALLOWED_COORDINATES:
    functions = AROUND_FUNCTIONS[:]
    # iterate over the constant list; removing entries from `functions`
    # while iterating over that same list would silently skip directions
    for foo in AROUND_FUNCTIONS:
temporary_coordinate = coordinate
for _ in xrange(2):
temporary_coordinate = foo(temporary_coordinate[0],
temporary_coordinate[1])
if max(temporary_coordinate) > 7 or min(temporary_coordinate) < 0:
functions.remove(foo)
coordinates = [foo(coordinate[0], coordinate[1]) for foo in functions]
COORDINATES[coordinate] = coordinates, functions
AVAILABLE_POSITIONS = list(ALLOWED_COORDINATES.keys())
AVAILABLE_POSITIONS.remove((3, 3))
AVAILABLE_POSITIONS.remove((3, 4))
AVAILABLE_POSITIONS.remove((4, 3))
AVAILABLE_POSITIONS.remove((4, 4))
class Board:
"""Board object to represent a position in a game of Reversi."""
def __init__(self, pieces=None, side=BLACK, copied=False):
"""
Create piece representations and other needed attributes.
Create piece representation and determine the legal moves for
a particular piece representation in 'pieces'. If 'copied' is True,
don't set the variables because they are expected to be set after
creation as in the __deepcopy__ function.
"""
if not copied:
self.pieces = pieces
self.side = side
self.available_positions = AVAILABLE_POSITIONS[:]
if pieces is None:
self.pieces = [row[:] for row in START_POSITION]
self.legal_moves = [move[:] for move in STARTING_LEGAL_MOVES]
self.legal_moves_notation = STARTING_LEGAL_MOVES_NOTATION[:]
else:
self.legal_moves = []
self.legal_moves_notation = []
self.update_legal_moves()
def __deepcopy__(self, memodict=None):
if memodict is None:
memodict = {}
new_instance = Board(copied=True)
new_instance.pieces = [row[:] for row in self.pieces]
new_instance.side = self.side
new_instance.available_positions = list(self.available_positions)
new_instance.legal_moves = list(self.legal_moves)
new_instance.legal_moves_notation = list(self.legal_moves_notation)
return new_instance
@staticmethod
def convert_to_notation(coordinate):
notation = (NOTATION_CHART[coordinate[1]], coordinate[0] + 1)
return "".join(map(str, notation))
@staticmethod
def convert_to_coordinate(notation):
return int(notation[1]) - 1, COORDINATE_CHART[notation[0]]
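    # Illustrative round trip using the charts above:
    # convert_to_notation((2, 3)) -> "d3" and
    # convert_to_coordinate("d3") -> (2, 3), so the two are inverses.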
@staticmethod
def out_of_bounds(coordinate):
return coordinate in NOT_ALLOWED
@staticmethod
def get_around(coordinate):
return COORDINATES[coordinate]
def _legal_position(self, coordinate):
"""
Finds whether the coordinate is a legal move. Also can be used to
return the directions in which a move will flip pieces.
:param coordinate: tuple -> (row, column)
:return: bool <OR> list
"""
if self.pieces[coordinate[0]][coordinate[1]] != EMPTY:
return False
around, around_functions = self.get_around(coordinate)
opposite_side = not self.side
for index, temporary_coordinate in enumerate(around):
if self.out_of_bounds(temporary_coordinate):
continue
if self.pieces[temporary_coordinate[0]][temporary_coordinate[1]] == \
opposite_side:
temporary = around_functions[index](coordinate[0],
coordinate[1])
                # check bounds before indexing so that negative indices
                # don't silently wrap to the other side of the board
                while not self.out_of_bounds(temporary) and \
                        self.pieces[temporary[0]][temporary[1]] != EMPTY:
if self.pieces[temporary[0]][temporary[1]] == self.side:
return True
temporary = around_functions[index](
temporary[0], temporary[1])
return False
def update_legal_moves(self):
"""
Updates 'self.legal_moves' and 'self.legal_moves_notation'
:return: None
"""
self.legal_moves = []
for coordinate in self.available_positions:
if self._legal_position(coordinate):
self.legal_moves.append(coordinate)
        if len(self.legal_moves) == 0:
self.legal_moves = [None]
self.legal_moves_notation = [None]
return
self.legal_moves_notation = []
for x in self.legal_moves:
self.legal_moves_notation.append(self.convert_to_notation(x))
def _legal_position_directions(self, coordinate):
"""
Returns the directions in which a move will flip pieces.
:param coordinate: tuple -> (row, column)
:return: bool <OR> list
"""
row, column = coordinate
if self.pieces[row][column] != EMPTY:
return False
around = (
(row - 1, column - 1),
(row - 1, column),
(row - 1, column + 1),
(row, column - 1),
(row, column + 1),
(row + 1, column - 1),
(row + 1, column),
(row + 1, column + 1)
)
return_value = []
opposite_side = not self.side
for index, temporary_coordinate in enumerate(around):
if self.out_of_bounds(temporary_coordinate):
continue
if self.pieces[temporary_coordinate[0]][temporary_coordinate[1]] == opposite_side:
temporary = coordinate[:]
while True:
temporary = AROUND_FUNCTIONS[index](
temporary[0], temporary[1])
if self.out_of_bounds(temporary) or \
self.pieces[temporary[0]][temporary[1]] == EMPTY:
break
if self.pieces[temporary[0]][temporary[1]] == self.side:
return_value.append(AROUND_FUNCTIONS[index])
break
return return_value
def _update_board(self, coordinate):
"""
Updates the board. Called by 'self.move()'
:param coordinate: coordinate of the move received from 'self.move()'
:return: None
"""
directions = self._legal_position_directions(coordinate)
for direction_function in directions:
temporary = coordinate[:]
while True:
temporary = direction_function(temporary[0], temporary[1])
if self.out_of_bounds(temporary) or \
self.pieces[temporary[0]][temporary[1]] in \
(EMPTY, self.side):
break
if self.pieces[temporary[0]][temporary[1]] == (not self.side):
self.pieces[temporary[0]][temporary[1]] = self.side
self.pieces[coordinate[0]][coordinate[1]] = self.side
def move(self, notation=None, refresh_moves=True):
"""
        Registers a move in the 'notation' format (e.g. "c4") or 'None' if there
is no possible move. Updates the board, legal moves and changes the
side-to-go accordingly.
:param notation: str <- move to be made <OR> None
:param refresh_moves: bool <- whether or not to refresh the legal moves
:return: None
"""
# if notation not in self.legal_moves_notation:
# return None
if notation is not None:
self._update_board(self.convert_to_coordinate(notation))
self.available_positions.remove(
self.convert_to_coordinate(notation))
self.side = int(not self.side)
if refresh_moves:
self.update_legal_moves()
def is_over(self):
"""
Checks if the game is over.
:return: bool
"""
game_over = True
for _ in xrange(2):
self.side = int(not self.side)
self.update_legal_moves()
if self.legal_moves != [None]:
game_over = False
return game_over
def display(self):
"""
Displays the current board.
:return: None
"""
rows = []
for index in xrange(len(self.pieces)):
row = self.pieces[index]
str_row = map(lambda x: NUMBER_TO_PIECE[x], row)
rows.append(str(index + 1) + " | " + " | ".join(str_row) + " |\n")
abc_line = " a b c d e f g h \n"
separator = " +--" + "--+--" * 7 + "--+\n"
print abc_line + separator + separator.join(rows) + separator
def score(self):
"""
Displays the current score.
:return: int <- score
"""
score = [0, 0]
for row in self.pieces:
for piece in row:
if piece == BLACK:
score[0] += 1
if piece == WHITE:
score[1] += 1
return score
def get_pieces(self):
pieces = "".join(CONVERSION_CHART[piece]
for row in self.pieces for piece in row)
return pieces + CONVERSION_CHART[self.side]
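    # E.g. the starting position with BLACK to move serializes to a
    # 65-character string: 27 dashes, "OX", 6 dashes, "XO", 27 dashes,
    # then a trailing "X" for the side to move (worked out from
    # START_POSITION and CONVERSION_CHART above).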
if __name__ == "__main__":
b = Board()
b.display()
|
{"hexsha": "bde8013f3781b29bee4eef46dd64a667bb125dd1", "size": 10858, "ext": "py", "lang": "Python", "max_stars_repo_path": "reversi.py", "max_stars_repo_name": "steven-xia/reversi-bot", "max_stars_repo_head_hexsha": "a645159f5a39686348d3989a75350b0feb217d03", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-09-16T23:04:52.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-16T23:04:52.000Z", "max_issues_repo_path": "reversi.py", "max_issues_repo_name": "steven-xia/ReversiBot", "max_issues_repo_head_hexsha": "a645159f5a39686348d3989a75350b0feb217d03", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reversi.py", "max_forks_repo_name": "steven-xia/ReversiBot", "max_forks_repo_head_hexsha": "a645159f5a39686348d3989a75350b0feb217d03", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.903030303, "max_line_length": 94, "alphanum_fraction": 0.5607846749, "include": true, "reason": "import numpy", "num_tokens": 2687}
|
import scipy
# Basic layout parameters
partDiameter = 1.4
partDepth = 0.45
params = {}
params['numParts'] = 5
params['partSpacing'] = 2.0
layoutLen = (params['numParts']-1)*params['partSpacing']
xPosArray = scipy.linspace(-0.5*layoutLen, 0.5*layoutLen,params['numParts'])
yPosArray = scipy.zeros(xPosArray.size)
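# Worked example (follows from the settings above): numParts = 5 and
# partSpacing = 2.0 give layoutLen = (5 - 1) * 2.0 = 8.0, so
# xPosArray = [-4.0, -2.0, 0.0, 2.0, 4.0].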
params['xPosList'] = list(xPosArray)
params['yPosList'] = list(yPosArray)
params['partDiameter'] = partDiameter
params['partDepth'] = partDepth
# Magnet pocket
magnetHoleSep = 0.75
magnetDiam = 0.5
magnetDiamMargin = 0.008
magnetThickness = 0.125
magnetCut = {}
magnetCut['diameter'] = magnetDiam + magnetDiamMargin
magnetCut['depth'] = magnetThickness
magnetCut['xPosList'] = [-0.5*magnetHoleSep, 0.5*magnetHoleSep]
magnetCut['yPosList'] = [ 0.0, 0.0]
params['magnetCut'] = magnetCut
# Boundary cut
annulusDepth = 0.1
boundaryCut = {}
boundaryCut['xPos'] = 0.0
boundaryCut['yPos'] = 0.0
boundaryCut['radius'] = 0.5*partDiameter
boundaryCut['offset'] = 'outside'
boundaryCut['depth'] = partDepth - annulusDepth
params['boundaryCut'] = boundaryCut
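# Note: with the values above the boundary cut depth is
# 0.45 - 0.1 = 0.35, i.e. the cut stops annulusDepth (0.1) short of the
# full part depth, leaving an annular rim of that thickness.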
|
{"hexsha": "025eb32dd0821e26e401d073bd89b8d29441f71e", "size": 1067, "ext": "py", "lang": "Python", "max_stars_repo_path": "cnc/motor_hub/motor_hub/magnet_and_boundary/params.py", "max_stars_repo_name": "iorodeo/stir_plate_mechanics", "max_stars_repo_head_hexsha": "ad721e708d962afcb14dd69456df4231c83ffed8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-23T19:03:18.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-10T19:45:46.000Z", "max_issues_repo_path": "cnc/motor_hub/motor_hub/magnet_and_boundary/params.py", "max_issues_repo_name": "iorodeo/stir_plate_mechanics", "max_issues_repo_head_hexsha": "ad721e708d962afcb14dd69456df4231c83ffed8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cnc/motor_hub/motor_hub/magnet_and_boundary/params.py", "max_forks_repo_name": "iorodeo/stir_plate_mechanics", "max_forks_repo_head_hexsha": "ad721e708d962afcb14dd69456df4231c83ffed8", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-01-07T20:39:18.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-07T20:39:18.000Z", "avg_line_length": 26.675, "max_line_length": 76, "alphanum_fraction": 0.7291471415, "include": true, "reason": "import scipy", "num_tokens": 371}
|
"""
Copyright (C) 2021 NVIDIA Corporation. All rights reserved.
Licensed under the NVIDIA Source Code License. See LICENSE at the main github page.
Authors: Seung Wook Kim, Jonah Philion, Antonio Torralba, Sanja Fidler
"""
import os
import sys
import numpy as np
import torch.utils.data as data_utils
import cv2
import random
import pickle
sys.path.insert(0, './data')
sys.path.append('../../')
sys.path.append('../')
import utils
import json
import torch
def list_to_dict(l):
d = {}
for entry in l:
d[entry] = 1
return d
def gibson_get_continuous_action(pos, ori):
actions = []
for i in range(len(pos)-1):
pos_diff = (pos[i+1]-pos[i])[:2]
rot = np.array([[np.cos(ori[i][2]), -np.sin(ori[i][2])],
[np.sin(ori[i][2]), np.cos(ori[i][2])]])
pos_diff = np.dot(np.transpose(rot), np.array(pos_diff))
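        # rot is the rotation for the current heading ori[i][2];
        # multiplying by its transpose expresses the world-frame step in
        # the robot's body frame (a reading of the math, not documented
        # in the original source)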
yaw = ori[i + 1][2] - ori[i][2]
        ## handle transitions from pi to -pi and 0 to -0 (orientation values range over (-pi, pi))
if ori[i][2] > 0 and ori[i + 1][2] < 0:
if abs(ori[i][2]) > 1.57 and abs(ori[i + 1][2]) > 1.57:
yaw = (3.141593 - ori[i][2]) + (3.141593 + ori[i + 1][2])
elif abs(ori[i][2]) < 1.57 and abs(ori[i + 1][2]) < 1.57:
yaw = -(ori[i][2] + abs(ori[i + 1][2]))
else:
print('wrong orientation values?')
exit(-1)
elif ori[i][2] < 0 and ori[i + 1][2] > 0:
if abs(ori[i][2]) > 1.57 and abs(ori[i + 1][2]) > 1.57:
yaw = -((3.141593 + ori[i][2]) + (3.141593 - ori[i + 1][2]))
elif abs(ori[i][2]) < 1.57 and abs(ori[i + 1][2]) < 1.57:
yaw = abs(ori[i][2]) + ori[i + 1][2]
else:
print('wrong orientation values? 2')
exit(-1)
# manual scaling so that they are in more reasonable range
actions.append([pos_diff[0] * 10, pos_diff[1] * 10, yaw* 5])
actions = np.array(actions, dtype=np.float32)
return actions
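# Illustrative check of the wraparound handling above (values chosen here,
# not from the dataset): a heading change from +3.10 rad to -3.10 rad
# crosses the pi/-pi seam, so yaw becomes
# (pi - 3.10) + (pi - 3.10) ~= 0.083 instead of the naive -6.20.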
def get_custom_dataset(opts=None, set_type=0, force_noshuffle=False, getLoader=True, num_workers=1):
def collate_fn(batch):
batch = list(filter(lambda x: x is not None, batch))
return torch.utils.data.dataloader.default_collate(batch)
dataset = []
shuffle = True if set_type == 0 else False
shuffle = True if opts.play else shuffle
if force_noshuffle:
shuffle = False
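    # opts.data is expected to look like "gibson:/path/a-carla:/path/b",
    # i.e. name:datadir pairs joined by '-' (inferred from the split
    # below; a datadir containing '-' would break this parsing)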
for tmp in opts.data.split('-'):
curdata, datadir = tmp.split(':')
dataset.append(generic_dataset(opts, set_type=set_type, name=curdata, datadir=datadir))
if getLoader:
dloader = []
for dset in dataset:
dloader.append(data_utils.DataLoader(dset, batch_size=opts.bs,
num_workers=num_workers, pin_memory=True, shuffle=shuffle, drop_last=True, collate_fn=collate_fn))
if len(dataset) == 1 and not opts.test:
return dloader[0]
return dloader
else:
return dataset
class generic_dataset(data_utils.Dataset):
def __init__(self, opts, start=0, end=0, set_type=0, name='', datadir=''):
self.opts = opts
self.set_type = set_type
self.samples = []
self.name = name
self.layout_memory = utils.check_arg(self.opts, 'layout_memory')
self.continuous_action = utils.check_arg(self.opts, 'continuous_action')
self.predict_logvar = utils.check_arg(self.opts, 'predict_logvar')
self.learn_interpolation = utils.check_arg(self.opts, 'learn_interpolation')
self.no_duplicate = utils.check_arg(self.opts, 'no_duplicate')
train = True if set_type == 0 else False
if 'gibson' in opts.data or 'carla' in opts.data:
if 'gibson' in opts.data:
try:
train_keys, val_keys, tst_keys = pickle.load(open('gibson_data_split.pkl', 'rb'))
except:
train_keys, val_keys, tst_keys = pickle.load(open('../gibson_data_split.pkl', 'rb'))
else:
try:
train_keys, val_keys, tst_keys = pickle.load(open('carla_data_split.pkl', 'rb'))
except:
train_keys, val_keys, tst_keys = pickle.load(open('../carla_data_split.pkl', 'rb'))
train_keys = list_to_dict(train_keys)
val_keys = list_to_dict(val_keys)
tst_keys = list_to_dict(tst_keys)
paths = []
root_dirs = datadir
for datadir in root_dirs.split(','):
for fname in os.listdir(datadir):
cur_file = os.path.join(datadir, fname)
if not '.npy' in fname:
continue
key = fname.split('.')[0]
key = key.replace('_', '/')
do = False
if (train and key in train_keys) or (not train and key in val_keys) or (opts.test and key in tst_keys):
do = True
if not do:
continue
paths.append([key, cur_file])
elif 'pilotnet' in opts.data:
if '8hz' in opts.data:
self.pilotnet_actions = pickle.load(open('8hz_all_actions.pkl', 'rb'))
train_keys, val_keys, tst_keys = pickle.load(open('pilotnet8hz_paths_and_count.p', 'rb'))
else:
# 16hz
self.pilotnet_actions = pickle.load(open('16hz_all_actions.pkl', 'rb'))
train_keys, val_keys, tst_keys = pickle.load(open('pilotnet16hz_paths_and_count.p', 'rb'))
paths = []
root_dirs = datadir
nn = 0
random.seed(4)
for datadir in root_dirs.split(','):
fnames = os.listdir(datadir)
for fname in fnames:
key_dict = None
cur_file = os.path.join(datadir, fname)
key = fname.split('.')[0]
do = False
is_train = False
if (train and (key in train_keys)):
do = True
key_dict = train_keys
if key in train_keys:
is_train = True
if (not train and key in val_keys):
do = True
key_dict = val_keys
if is_train:
print(key)
nn+= 1
if (opts.test and key in tst_keys):
do = True
key_dict = tst_keys
if key_dict is None and key in train_keys:
key_dict = train_keys
if not do:
continue
pid = key.split('_')[0]
if not pid in self.pilotnet_actions:
print(pid + ' not in pilotnet_actions file')
continue
if self.no_duplicate:
obj_count = 1
else:
obj_count = key_dict[key]['obj_count']
for _ in range(obj_count):
paths.append([key, cur_file])
random.Random(4).shuffle(paths)
if utils.check_arg(self.opts, 'num_chunk') and self.opts.num_chunk > 0:
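            # optional sharding: keep only shard cur_ind out of num_chunk
            # contiguous slices of the sample list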
num_chunk = self.opts.num_chunk
cur_ind = self.opts.cur_ind
chunk_size = len(paths) // num_chunk
if cur_ind == num_chunk-1:
paths = paths[cur_ind*chunk_size:]
else:
paths = paths[cur_ind*chunk_size:(cur_ind+1)*chunk_size]
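        # infer latent tensor shapes from the first sample on disk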
tmp = np.load(paths[0][1], allow_pickle=True).item()
opts.spatial_d = tmp['spatial_mu'].shape[1]
opts.spatial_h = tmp['spatial_mu'].shape[2]
opts.spatial_w = tmp['spatial_mu'].shape[3]
opts.theme_d = tmp['theme_mu'].shape[1]
opts.separate_holistic_style_dim = opts.theme_d
opts.spatial_dim = opts.spatial_h
opts.spatial_total_dim = opts.spatial_h * opts.spatial_w * opts.spatial_d
self.samples = paths
        print('\n\n----numData: ' + str(len(paths)) + '\n\n')
def parse_action(self, data, cur_a):
if 'action_space' in data:
            num_actions = data['action_space']
elif 'gibson' in self.name:
num_actions = 9
            if self.continuous_action:
                # copy the continuous action into a fixed-size, zero-padded vector
                action = [0] * self.opts.action_space
                for i in range(len(cur_a)):
                    action[i] = cur_a[i]
                return np.asarray(action).astype('float32'), -1
else:
cur_a = gibson_get_action(cur_a)
elif 'pilotnet' in self.name:
if self.continuous_action:
action = [0] * self.opts.action_space
for i in range(len(cur_a)):
action[i] = cur_a[i]
return np.asarray(action).astype('float32'), -1
else:
                print('discrete actions are not supported for pilotnet')
                exit(-1)
elif 'carla' in self.name:
if self.continuous_action:
action = [0] * self.opts.action_space
for i in range(len(cur_a)):
action[i] = cur_a[i]
return np.asarray(action).astype('float32'), -1
else:
cur_a = carla_get_action(cur_a[0])
num_actions = 13
else:
num_actions = 10
        # one-hot encode the discrete action index
        action = [0] * self.opts.action_space
        action[cur_a] = 1
        a_t = np.asarray(action).astype('float32')
return a_t, num_actions
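    # For illustration (hypothetical values): with action_space=9 and a
    # discrete cur_a of 3, parse_action returns a one-hot float32 vector
    # [0, 0, 0, 1, 0, 0, 0, 0, 0] together with num_actions=9.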
def load_gibson(self, data):
if self.continuous_action:
actions = gibson_get_continuous_action(data['np_pos'], data['np_ori'])
data['np_action'] = actions
data['np_img_state'] = data['np_img_state'][:len(actions)]
if 'np_img_logvar' in data:
data['np_img_logvar'] = data['np_img_logvar'][:len(actions)]
return data
def load_carla(self, fn):
data = np.load(fn, allow_pickle=True).item()
        if self.continuous_action:
            # normalize speed and yaw rate to roughly zero mean, unit std
            # (constants presumably estimated from the CARLA training data)
            actions = []
            for ind in range(len(data['data'])):
                speed = (data['data'][ind]['speed'] - 18.2) / 3.62
                # yaw rate: positive steers right, negative steers left
                yaw = (data['data'][ind]['angular_velocity'][2] - (-0.40)) / 20.45
                actions.append(np.array([yaw, speed]))
data['np_action'] = np.array(actions).astype('float32')
return data
def __len__(self):
return len(self.samples)
def __getitem__(self, idx):
fn = self.samples[idx]
key = fn[0]
try:
if 'carla' in self.opts.data:
data = self.load_carla(fn[1])
else:
data = np.load(fn[1], allow_pickle=True).item()
        except Exception:
            print('dataloader error:', fn)
return None
len_episode = len(data['spatial_mu'])
if 'gibson' in self.opts.data:
data = self.load_gibson(data)
elif 'pilotnet' in self.opts.data:
            if key.startswith('ind'):
                pid = key.split('#')[1]
                starting_index = key.split('#')[-1].split('.')[0]
            elif key.startswith('pn-meta'):
                pid = key.split('_')[0]
                starting_index = key.split('_')[-1].split('.')[0]
            else:
                # guard: without this branch, pid and starting_index would be
                # undefined for an unrecognized key format
                raise ValueError('unrecognized pilotnet key: ' + key)
            starting_index = int(starting_index)
action = self.pilotnet_actions[pid][starting_index:starting_index+len_episode+1]
data['np_action'] = action
        states, actions, neg_actions, img_key = [], [], [], 'np_img_state'
        # state = flattened spatial latent concatenated with the theme latent
        data[img_key] = np.concatenate([data['spatial_mu'].reshape(data['spatial_mu'].shape[0], self.opts.spatial_total_dim),
                                        data['theme_mu']], axis=1)
        ep_len = max(0, len_episode - self.opts.num_steps)  # guard against episodes shorter than num_steps
if self.opts.test:
            start_pt = 0  # start from the first frame when testing
if 'carla' in self.opts.data and self.learn_interpolation:
start_pt = 20
else:
start_pt = random.randint(0, ep_len)
        for i in range(self.opts.num_steps):
            # clamp to the last frame if the window runs past the episode end
            ind = min(start_pt + i, len(data[img_key]) - 1)
            s_t = data[img_key][ind]
            a_t, num_actions = self.parse_action(data, data['np_action'][ind])
            # sample a negative action from a different step in the same window
            # (assumes num_steps > 1, otherwise no distinct index exists)
            rand_ind = random.randint(start_pt, start_pt + self.opts.num_steps - 1)
            while rand_ind == start_pt + i:
                rand_ind = random.randint(start_pt, start_pt + self.opts.num_steps - 1)
            false_a_t, _ = self.parse_action(data, data['np_action'][rand_ind])
            states.append(s_t)
            actions.append(a_t)
            neg_actions.append(false_a_t)
del data
return states, actions, neg_actions
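# Minimal usage sketch (an assumption-laden example, not part of the training
# pipeline): requires .npy latent episodes under a hypothetical ./latents
# directory plus the matching carla_data_split.pkl, and an opts namespace
# carrying the fields read above.
if __name__ == '__main__':
    from types import SimpleNamespace
    opts = SimpleNamespace(data='carla:./latents', bs=4, num_steps=32,
                           play=False, test=False, action_space=2,
                           continuous_action=True)
    loader = get_custom_dataset(opts, set_type=0)
    states, actions, neg_actions = next(iter(loader))
    print(len(states), actions[0].shape)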
|
{"hexsha": "2e93d816a8d5ad099b65c2a937163abcf2d916f5", "size": 13442, "ext": "py", "lang": "Python", "max_stars_repo_path": "data/dataloader.py", "max_stars_repo_name": "LvZut/DriveGAN_code", "max_stars_repo_head_hexsha": "6fd29dc6a0bc9e4a45b7db329ff8e951bd55432a", "max_stars_repo_licenses": ["BSD-2-Clause", "MIT"], "max_stars_count": 29, "max_stars_repo_stars_event_min_datetime": "2021-10-20T05:40:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T16:56:41.000Z", "max_issues_repo_path": "data/dataloader.py", "max_issues_repo_name": "LvZut/DriveGAN_code", "max_issues_repo_head_hexsha": "6fd29dc6a0bc9e4a45b7db329ff8e951bd55432a", "max_issues_repo_licenses": ["BSD-2-Clause", "MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-16T11:35:25.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-29T02:23:39.000Z", "max_forks_repo_path": "data/dataloader.py", "max_forks_repo_name": "LvZut/DriveGAN_code", "max_forks_repo_head_hexsha": "6fd29dc6a0bc9e4a45b7db329ff8e951bd55432a", "max_forks_repo_licenses": ["BSD-2-Clause", "MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2021-10-20T11:21:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-17T11:27:19.000Z", "avg_line_length": 37.2354570637, "max_line_length": 125, "alphanum_fraction": 0.5237315876, "include": true, "reason": "import numpy", "num_tokens": 3181}
|
from . import ccllib as lib
from .pyutils import check
from .pk2d import Pk2D
import numpy as np
def bcm_model_fka(cosmo, k, a):
"""The BCM model correction factor for baryons.
.. note:: BCM stands for the "baryonic correction model" of Schneider &
Teyssier (2015; https://arxiv.org/abs/1510.06034). See the
`DESC Note <https://github.com/LSSTDESC/CCL/blob/master/doc\
/0000-ccl_note/main.pdf>`_
for details.
.. note:: The correction factor is applied multiplicatively so that
:math:`P_{\\rm corrected}(k, a) = P(k, a)\\, f_{\\rm bcm}(k, a)`.
Args:
cosmo (:class:`~pyccl.core.Cosmology`): Cosmological parameters.
k (float or array_like): Wavenumber; Mpc^-1.
a (float): Scale factor.
Returns:
float or array_like: Correction factor to apply to the power spectrum.
"""
k_use = np.atleast_1d(k)
status = 0
fka, status = lib.bcm_model_fka_vec(cosmo.cosmo, a, k_use,
len(k_use), status)
check(status, cosmo)
if np.ndim(k) == 0:
fka = fka[0]
return fka
def bcm_correct_pk2d(cosmo, pk2d):
"""Apply the BCM model correction factor to a given power spectrum.
Args:
cosmo (:class:`~pyccl.core.Cosmology`): Cosmological parameters.
pk2d (:class:`~pyccl.pk2d.Pk2D`): power spectrum.
"""
if not isinstance(pk2d, Pk2D):
raise ValueError("pk2d must be a Pk2D object")
status = 0
status = lib.bcm_correct(cosmo.cosmo, pk2d.psp, status)
check(status, cosmo)
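# Minimal demonstration (a sketch, assuming pyccl is installed; the parameter
# values below are illustrative only, not recommended defaults).
if __name__ == '__main__':
    import pyccl as ccl
    cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.7,
                          sigma8=0.8, n_s=0.96)
    k = np.logspace(-2, 1, 8)  # wavenumbers in Mpc^-1
    print(bcm_model_fka(cosmo, k, a=1.0))  # multiplicative suppression at z=0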
|
{"hexsha": "8ccfeb1f88b68eeba9dacaffde5e06f4dc6b1443", "size": 1588, "ext": "py", "lang": "Python", "max_stars_repo_path": "pyccl/bcm.py", "max_stars_repo_name": "Jappenn/CCL", "max_stars_repo_head_hexsha": "a37cad61f060f3928fa5d47b1e2670db3e9bce6f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 91, "max_stars_repo_stars_event_min_datetime": "2017-07-14T02:45:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T08:55:54.000Z", "max_issues_repo_path": "pyccl/bcm.py", "max_issues_repo_name": "Jappenn/CCL", "max_issues_repo_head_hexsha": "a37cad61f060f3928fa5d47b1e2670db3e9bce6f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 703, "max_issues_repo_issues_event_min_datetime": "2017-07-07T16:27:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T14:40:10.000Z", "max_forks_repo_path": "pyccl/bcm.py", "max_forks_repo_name": "Jappenn/CCL", "max_forks_repo_head_hexsha": "a37cad61f060f3928fa5d47b1e2670db3e9bce6f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 54, "max_forks_repo_forks_event_min_datetime": "2017-07-12T13:08:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-06T13:12:10.000Z", "avg_line_length": 31.76, "max_line_length": 79, "alphanum_fraction": 0.6183879093, "include": true, "reason": "import numpy", "num_tokens": 463}
|