repo stringlengths 1 99 | file stringlengths 13 215 | code stringlengths 12 59.2M | file_length int64 12 59.2M | avg_line_length float64 3.82 1.48M | max_line_length int64 12 2.51M | extension_type stringclasses 1
value |
|---|---|---|---|---|---|---|
keras | keras-master/keras/applications/mobilenet_v2.py | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""MobileNet v2 models for Keras.
MobileNetV2 is a general architecture and can be used for multiple use cases.
Depending on the use case, it can use different input layer size and
different width factors. This allows different width models to reduce
the number of multiply-adds and thereby
reduce inference cost on mobile devices.
MobileNetV2 is very similar to the original MobileNet,
except that it uses inverted residual blocks with
bottlenecking features. It has a drastically lower
parameter count than the original MobileNet.
MobileNets support any input size greater
than 32 x 32, with larger image sizes
offering better performance.
The number of parameters and number of multiply-adds
can be modified by using the `alpha` parameter,
which increases/decreases the number of filters in each layer.
By altering the image size and `alpha` parameter,
all 22 models from the paper can be built, with ImageNet weights provided.
The paper demonstrates the performance of MobileNets using `alpha` values of
1.0 (also called 100 % MobileNet), 0.35, 0.5, 0.75, 1.0, 1.3, and 1.4
For each of these `alpha` values, weights for 5 different input image sizes
are provided (224, 192, 160, 128, and 96).
The following table describes the performance of
MobileNet on various input sizes:
------------------------------------------------------------------------
MACs stands for Multiply Adds
Classification Checkpoint|MACs (M)|Parameters (M)|Top 1 Accuracy|Top 5 Accuracy
--------------------------|------------|---------------|---------|----|---------
| [mobilenet_v2_1.4_224] | 582 | 6.06 | 75.0 | 92.5 |
| [mobilenet_v2_1.3_224] | 509 | 5.34 | 74.4 | 92.1 |
| [mobilenet_v2_1.0_224] | 300 | 3.47 | 71.8 | 91.0 |
| [mobilenet_v2_1.0_192] | 221 | 3.47 | 70.7 | 90.1 |
| [mobilenet_v2_1.0_160] | 154 | 3.47 | 68.8 | 89.0 |
| [mobilenet_v2_1.0_128] | 99 | 3.47 | 65.3 | 86.9 |
| [mobilenet_v2_1.0_96] | 56 | 3.47 | 60.3 | 83.2 |
| [mobilenet_v2_0.75_224] | 209 | 2.61 | 69.8 | 89.6 |
| [mobilenet_v2_0.75_192] | 153 | 2.61 | 68.7 | 88.9 |
| [mobilenet_v2_0.75_160] | 107 | 2.61 | 66.4 | 87.3 |
| [mobilenet_v2_0.75_128] | 69 | 2.61 | 63.2 | 85.3 |
| [mobilenet_v2_0.75_96] | 39 | 2.61 | 58.8 | 81.6 |
| [mobilenet_v2_0.5_224] | 97 | 1.95 | 65.4 | 86.4 |
| [mobilenet_v2_0.5_192] | 71 | 1.95 | 63.9 | 85.4 |
| [mobilenet_v2_0.5_160] | 50 | 1.95 | 61.0 | 83.2 |
| [mobilenet_v2_0.5_128] | 32 | 1.95 | 57.7 | 80.8 |
| [mobilenet_v2_0.5_96] | 18 | 1.95 | 51.2 | 75.8 |
| [mobilenet_v2_0.35_224] | 59 | 1.66 | 60.3 | 82.9 |
| [mobilenet_v2_0.35_192] | 43 | 1.66 | 58.2 | 81.2 |
| [mobilenet_v2_0.35_160] | 30 | 1.66 | 55.7 | 79.1 |
| [mobilenet_v2_0.35_128] | 20 | 1.66 | 50.8 | 75.0 |
| [mobilenet_v2_0.35_96] | 11 | 1.66 | 45.5 | 70.4 |
Reference:
- [MobileNetV2: Inverted Residuals and Linear Bottlenecks](
https://arxiv.org/abs/1801.04381) (CVPR 2018)
"""
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
import tensorflow.compat.v2 as tf
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHT_PATH = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/mobilenet_v2/')
layers = None
@keras_export('keras.applications.mobilenet_v2.MobileNetV2',
'keras.applications.MobileNetV2')
def MobileNetV2(input_shape=None,
alpha=1.0,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
"""Instantiates the MobileNetV2 architecture.
MobileNetV2 is very similar to the original MobileNet,
except that it uses inverted residual blocks with
bottlenecking features. It has a drastically lower
parameter count than the original MobileNet.
MobileNets support any input size greater
than 32 x 32, with larger image sizes
offering better performance.
Reference:
- [MobileNetV2: Inverted Residuals and Linear Bottlenecks](
https://arxiv.org/abs/1801.04381) (CVPR 2018)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For MobileNetV2, call `tf.keras.applications.mobilenet_v2.preprocess_input`
on your inputs before passing them to the model.
`mobilenet_v2.preprocess_input` will scale input pixels between -1 and 1.
Args:
input_shape: Optional shape tuple, to be specified if you would
like to use a model with an input image resolution that is not
(224, 224, 3).
It should have exactly 3 inputs channels (224, 224, 3).
You can also omit this option if you would like
to infer input_shape from an input_tensor.
If you choose to include both input_tensor and input_shape then
input_shape will be used if they match, if the shapes
do not match then we will throw an error.
E.g. `(160, 160, 3)` would be one valid value.
alpha: Float, larger than zero, controls the width of the network. This is
known as the width multiplier in the MobileNetV2 paper, but the name is
kept for consistency with `applications.MobileNetV1` model in Keras.
- If `alpha` < 1.0, proportionally decreases the number
of filters in each layer.
- If `alpha` > 1.0, proportionally increases the number
of filters in each layer.
- If `alpha` = 1.0, default number of filters from the paper
are used at each layer.
include_top: Boolean, whether to include the fully-connected layer at the
top of the network. Defaults to `True`.
weights: String, one of `None` (random initialization), 'imagenet'
(pre-training on ImageNet), or the path to the weights file to be loaded.
input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
pooling: String, optional pooling mode for feature extraction when
`include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
be applied.
classes: Optional integer number of classes to classify images into, only to
be specified if `include_top` is True, and if no `weights` argument is
specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
**kwargs: For backwards compatibility only.
Returns:
A `keras.Model` instance.
"""
global layers
if 'layers' in kwargs:
layers = kwargs.pop('layers')
else:
layers = VersionAwareLayers()
if kwargs:
raise ValueError(f'Unknown argument(s): {kwargs}')
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded. '
f'Received `weights={weights}`')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError(
'If using `weights` as `"imagenet"` with `include_top` '
f'as true, `classes` should be 1000. Received `classes={classes}`')
# Determine proper input shape and default size.
# If both input_shape and input_tensor are used, they should match
if input_shape is not None and input_tensor is not None:
try:
is_input_t_tensor = backend.is_keras_tensor(input_tensor)
except ValueError:
try:
is_input_t_tensor = backend.is_keras_tensor(
layer_utils.get_source_inputs(input_tensor))
except ValueError:
raise ValueError(
f'input_tensor: {input_tensor}'
'is not type input_tensor. '
f'Received `type(input_tensor)={type(input_tensor)}`'
)
if is_input_t_tensor:
if backend.image_data_format() == 'channels_first':
if backend.int_shape(input_tensor)[1] != input_shape[1]:
raise ValueError('input_shape[1] must equal shape(input_tensor)[1] '
'when `image_data_format` is `channels_first`; '
'Received `input_tensor.shape='
f'{input_tensor.shape}`'
f', `input_shape={input_shape}`')
else:
if backend.int_shape(input_tensor)[2] != input_shape[1]:
raise ValueError(
'input_tensor.shape[2] must equal input_shape[1]; '
'Received `input_tensor.shape='
f'{input_tensor.shape}`, '
f'`input_shape={input_shape}`')
else:
raise ValueError('input_tensor is not a Keras tensor; '
f'Received `input_tensor={input_tensor}`')
# If input_shape is None, infer shape from input_tensor.
if input_shape is None and input_tensor is not None:
try:
backend.is_keras_tensor(input_tensor)
except ValueError:
raise ValueError('input_tensor must be a valid Keras tensor type; '
f'Received {input_tensor} of type {type(input_tensor)}')
if input_shape is None and not backend.is_keras_tensor(input_tensor):
default_size = 224
elif input_shape is None and backend.is_keras_tensor(input_tensor):
if backend.image_data_format() == 'channels_first':
rows = backend.int_shape(input_tensor)[2]
cols = backend.int_shape(input_tensor)[3]
else:
rows = backend.int_shape(input_tensor)[1]
cols = backend.int_shape(input_tensor)[2]
if rows == cols and rows in [96, 128, 160, 192, 224]:
default_size = rows
else:
default_size = 224
# If input_shape is None and no input_tensor
elif input_shape is None:
default_size = 224
# If input_shape is not None, assume default size.
else:
if backend.image_data_format() == 'channels_first':
rows = input_shape[1]
cols = input_shape[2]
else:
rows = input_shape[0]
cols = input_shape[1]
if rows == cols and rows in [96, 128, 160, 192, 224]:
default_size = rows
else:
default_size = 224
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=default_size,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if backend.image_data_format() == 'channels_last':
row_axis, col_axis = (0, 1)
else:
row_axis, col_axis = (1, 2)
rows = input_shape[row_axis]
cols = input_shape[col_axis]
if weights == 'imagenet':
if alpha not in [0.35, 0.50, 0.75, 1.0, 1.3, 1.4]:
raise ValueError('If imagenet weights are being loaded, '
'alpha must be one of `0.35`, `0.50`, `0.75`, '
'`1.0`, `1.3` or `1.4` only;'
f' Received `alpha={alpha}`')
if rows != cols or rows not in [96, 128, 160, 192, 224]:
rows = 224
logging.warning('`input_shape` is undefined or non-square, '
'or `rows` is not in [96, 128, 160, 192, 224]. '
'Weights for input shape (224, 224) will be '
'loaded as the default.')
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
first_block_filters = _make_divisible(32 * alpha, 8)
x = layers.Conv2D(
first_block_filters,
kernel_size=3,
strides=(2, 2),
padding='same',
use_bias=False,
name='Conv1')(img_input)
x = layers.BatchNormalization(
axis=channel_axis, epsilon=1e-3, momentum=0.999, name='bn_Conv1')(
x)
x = layers.ReLU(6., name='Conv1_relu')(x)
x = _inverted_res_block(
x, filters=16, alpha=alpha, stride=1, expansion=1, block_id=0)
x = _inverted_res_block(
x, filters=24, alpha=alpha, stride=2, expansion=6, block_id=1)
x = _inverted_res_block(
x, filters=24, alpha=alpha, stride=1, expansion=6, block_id=2)
x = _inverted_res_block(
x, filters=32, alpha=alpha, stride=2, expansion=6, block_id=3)
x = _inverted_res_block(
x, filters=32, alpha=alpha, stride=1, expansion=6, block_id=4)
x = _inverted_res_block(
x, filters=32, alpha=alpha, stride=1, expansion=6, block_id=5)
x = _inverted_res_block(
x, filters=64, alpha=alpha, stride=2, expansion=6, block_id=6)
x = _inverted_res_block(
x, filters=64, alpha=alpha, stride=1, expansion=6, block_id=7)
x = _inverted_res_block(
x, filters=64, alpha=alpha, stride=1, expansion=6, block_id=8)
x = _inverted_res_block(
x, filters=64, alpha=alpha, stride=1, expansion=6, block_id=9)
x = _inverted_res_block(
x, filters=96, alpha=alpha, stride=1, expansion=6, block_id=10)
x = _inverted_res_block(
x, filters=96, alpha=alpha, stride=1, expansion=6, block_id=11)
x = _inverted_res_block(
x, filters=96, alpha=alpha, stride=1, expansion=6, block_id=12)
x = _inverted_res_block(
x, filters=160, alpha=alpha, stride=2, expansion=6, block_id=13)
x = _inverted_res_block(
x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=14)
x = _inverted_res_block(
x, filters=160, alpha=alpha, stride=1, expansion=6, block_id=15)
x = _inverted_res_block(
x, filters=320, alpha=alpha, stride=1, expansion=6, block_id=16)
# no alpha applied to last conv as stated in the paper:
# if the width multiplier is greater than 1 we increase the number of output
# channels.
if alpha > 1.0:
last_block_filters = _make_divisible(1280 * alpha, 8)
else:
last_block_filters = 1280
x = layers.Conv2D(
last_block_filters, kernel_size=1, use_bias=False, name='Conv_1')(
x)
x = layers.BatchNormalization(
axis=channel_axis, epsilon=1e-3, momentum=0.999, name='Conv_1_bn')(
x)
x = layers.ReLU(6., name='out_relu')(x)
if include_top:
x = layers.GlobalAveragePooling2D()(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account any potential predecessors of
# `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='mobilenetv2_%0.2f_%s' % (alpha, rows))
# Load weights.
if weights == 'imagenet':
if include_top:
model_name = ('mobilenet_v2_weights_tf_dim_ordering_tf_kernels_' +
str(float(alpha)) + '_' + str(rows) + '.h5')
weight_path = BASE_WEIGHT_PATH + model_name
weights_path = data_utils.get_file(
model_name, weight_path, cache_subdir='models')
else:
model_name = ('mobilenet_v2_weights_tf_dim_ordering_tf_kernels_' +
str(float(alpha)) + '_' + str(rows) + '_no_top' + '.h5')
weight_path = BASE_WEIGHT_PATH + model_name
weights_path = data_utils.get_file(
model_name, weight_path, cache_subdir='models')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id):
"""Inverted ResNet block."""
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
in_channels = backend.int_shape(inputs)[channel_axis]
pointwise_conv_filters = int(filters * alpha)
# Ensure the number of filters on the last 1x1 convolution is divisible by 8.
pointwise_filters = _make_divisible(pointwise_conv_filters, 8)
x = inputs
prefix = 'block_{}_'.format(block_id)
if block_id:
# Expand with a pointwise 1x1 convolution.
x = layers.Conv2D(
expansion * in_channels,
kernel_size=1,
padding='same',
use_bias=False,
activation=None,
name=prefix + 'expand')(
x)
x = layers.BatchNormalization(
axis=channel_axis,
epsilon=1e-3,
momentum=0.999,
name=prefix + 'expand_BN')(
x)
x = layers.ReLU(6., name=prefix + 'expand_relu')(x)
else:
prefix = 'expanded_conv_'
# Depthwise 3x3 convolution.
if stride == 2:
x = layers.ZeroPadding2D(
padding=imagenet_utils.correct_pad(x, 3),
name=prefix + 'pad')(x)
x = layers.DepthwiseConv2D(
kernel_size=3,
strides=stride,
activation=None,
use_bias=False,
padding='same' if stride == 1 else 'valid',
name=prefix + 'depthwise')(
x)
x = layers.BatchNormalization(
axis=channel_axis,
epsilon=1e-3,
momentum=0.999,
name=prefix + 'depthwise_BN')(
x)
x = layers.ReLU(6., name=prefix + 'depthwise_relu')(x)
# Project wiht a pointwise 1x1 convolution.
x = layers.Conv2D(
pointwise_filters,
kernel_size=1,
padding='same',
use_bias=False,
activation=None,
name=prefix + 'project')(
x)
x = layers.BatchNormalization(
axis=channel_axis,
epsilon=1e-3,
momentum=0.999,
name=prefix + 'project_BN')(
x)
if in_channels == pointwise_filters and stride == 1:
return layers.Add(name=prefix + 'add')([inputs, x])
return x
def _make_divisible(v, divisor, min_value=None):
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v
@keras_export('keras.applications.mobilenet_v2.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(x, data_format=data_format, mode='tf')
@keras_export('keras.applications.mobilenet_v2.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 20,697 | 38.05283 | 87 | py |
keras | keras-master/keras/applications/efficientnet.py | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
# pylint: disable=missing-docstring
"""EfficientNet models for Keras.
Reference:
- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](
https://arxiv.org/abs/1905.11946) (ICML 2019)
"""
import tensorflow.compat.v2 as tf
import copy
import math
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHTS_PATH = 'https://storage.googleapis.com/keras-applications/'
WEIGHTS_HASHES = {
'b0': ('902e53a9f72be733fc0bcb005b3ebbac',
'50bc09e76180e00e4465e1a485ddc09d'),
'b1': ('1d254153d4ab51201f1646940f018540',
'74c4e6b3e1f6a1eea24c589628592432'),
'b2': ('b15cce36ff4dcbd00b6dd88e7857a6ad',
'111f8e2ac8aa800a7a99e3239f7bfb39'),
'b3': ('ffd1fdc53d0ce67064dc6a9c7960ede0',
'af6d107764bb5b1abb91932881670226'),
'b4': ('18c95ad55216b8f92d7e70b3a046e2fc',
'ebc24e6d6c33eaebbd558eafbeedf1ba'),
'b5': ('ace28f2a6363774853a83a0b21b9421a',
'38879255a25d3c92d5e44e04ae6cec6f'),
'b6': ('165f6e37dce68623721b423839de8be5',
'9ecce42647a20130c1f39a5d4cb75743'),
'b7': ('8c03f828fec3ef71311cd463b6759d99',
'cbcfe4450ddf6f3ad90b1b398090fe4a'),
}
DEFAULT_BLOCKS_ARGS = [{
'kernel_size': 3,
'repeats': 1,
'filters_in': 32,
'filters_out': 16,
'expand_ratio': 1,
'id_skip': True,
'strides': 1,
'se_ratio': 0.25
}, {
'kernel_size': 3,
'repeats': 2,
'filters_in': 16,
'filters_out': 24,
'expand_ratio': 6,
'id_skip': True,
'strides': 2,
'se_ratio': 0.25
}, {
'kernel_size': 5,
'repeats': 2,
'filters_in': 24,
'filters_out': 40,
'expand_ratio': 6,
'id_skip': True,
'strides': 2,
'se_ratio': 0.25
}, {
'kernel_size': 3,
'repeats': 3,
'filters_in': 40,
'filters_out': 80,
'expand_ratio': 6,
'id_skip': True,
'strides': 2,
'se_ratio': 0.25
}, {
'kernel_size': 5,
'repeats': 3,
'filters_in': 80,
'filters_out': 112,
'expand_ratio': 6,
'id_skip': True,
'strides': 1,
'se_ratio': 0.25
}, {
'kernel_size': 5,
'repeats': 4,
'filters_in': 112,
'filters_out': 192,
'expand_ratio': 6,
'id_skip': True,
'strides': 2,
'se_ratio': 0.25
}, {
'kernel_size': 3,
'repeats': 1,
'filters_in': 192,
'filters_out': 320,
'expand_ratio': 6,
'id_skip': True,
'strides': 1,
'se_ratio': 0.25
}]
CONV_KERNEL_INITIALIZER = {
'class_name': 'VarianceScaling',
'config': {
'scale': 2.0,
'mode': 'fan_out',
'distribution': 'truncated_normal'
}
}
DENSE_KERNEL_INITIALIZER = {
'class_name': 'VarianceScaling',
'config': {
'scale': 1. / 3.,
'mode': 'fan_out',
'distribution': 'uniform'
}
}
layers = VersionAwareLayers()
BASE_DOCSTRING = """Instantiates the {name} architecture.
Reference:
- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](
https://arxiv.org/abs/1905.11946) (ICML 2019)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For EfficientNet, input preprocessing is included as part of the model
(as a `Rescaling` layer), and thus
`tf.keras.applications.efficientnet.preprocess_input` is actually a
pass-through function. EfficientNet models expect their inputs to be float
tensors of pixels with values in the [0-255] range.
Args:
include_top: Whether to include the fully-connected
layer at the top of the network. Defaults to True.
weights: One of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded. Defaults to 'imagenet'.
input_tensor: Optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: Optional shape tuple, only to be specified
if `include_top` is False.
It should have exactly 3 inputs channels.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`. Defaults to None.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: Optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified. Defaults to 1000 (number of
ImageNet classes).
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
Defaults to 'softmax'.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
def EfficientNet(
width_coefficient,
depth_coefficient,
default_size,
dropout_rate=0.2,
drop_connect_rate=0.2,
depth_divisor=8,
activation='swish',
blocks_args='default',
model_name='efficientnet',
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the EfficientNet architecture using given scaling coefficients.
Args:
width_coefficient: float, scaling coefficient for network width.
depth_coefficient: float, scaling coefficient for network depth.
default_size: integer, default input image size.
dropout_rate: float, dropout rate before final classifier layer.
drop_connect_rate: float, dropout rate at skip connections.
depth_divisor: integer, a unit of network width.
activation: activation function.
blocks_args: list of dicts, parameters to construct block modules.
model_name: string, model name.
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False.
It should have exactly 3 inputs channels.
pooling: optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
Returns:
A `keras.Model` instance.
Raises:
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
ValueError: if `classifier_activation` is not `softmax` or `None` when
using a pretrained top layer.
"""
if blocks_args == 'default':
blocks_args = DEFAULT_BLOCKS_ARGS
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=default_size,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
def round_filters(filters, divisor=depth_divisor):
"""Round number of filters based on depth multiplier."""
filters *= width_coefficient
new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_filters < 0.9 * filters:
new_filters += divisor
return int(new_filters)
def round_repeats(repeats):
"""Round number of repeats based on depth multiplier."""
return int(math.ceil(depth_coefficient * repeats))
# Build stem
x = img_input
x = layers.Rescaling(1. / 255.)(x)
x = layers.Normalization(axis=bn_axis)(x)
x = layers.ZeroPadding2D(
padding=imagenet_utils.correct_pad(x, 3),
name='stem_conv_pad')(x)
x = layers.Conv2D(
round_filters(32),
3,
strides=2,
padding='valid',
use_bias=False,
kernel_initializer=CONV_KERNEL_INITIALIZER,
name='stem_conv')(x)
x = layers.BatchNormalization(axis=bn_axis, name='stem_bn')(x)
x = layers.Activation(activation, name='stem_activation')(x)
# Build blocks
blocks_args = copy.deepcopy(blocks_args)
b = 0
blocks = float(sum(round_repeats(args['repeats']) for args in blocks_args))
for (i, args) in enumerate(blocks_args):
assert args['repeats'] > 0
# Update block input and output filters based on depth multiplier.
args['filters_in'] = round_filters(args['filters_in'])
args['filters_out'] = round_filters(args['filters_out'])
for j in range(round_repeats(args.pop('repeats'))):
# The first block needs to take care of stride and filter size increase.
if j > 0:
args['strides'] = 1
args['filters_in'] = args['filters_out']
x = block(
x,
activation,
drop_connect_rate * b / blocks,
name='block{}{}_'.format(i + 1, chr(j + 97)),
**args)
b += 1
# Build top
x = layers.Conv2D(
round_filters(1280),
1,
padding='same',
use_bias=False,
kernel_initializer=CONV_KERNEL_INITIALIZER,
name='top_conv')(x)
x = layers.BatchNormalization(axis=bn_axis, name='top_bn')(x)
x = layers.Activation(activation, name='top_activation')(x)
if include_top:
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
if dropout_rate > 0:
x = layers.Dropout(dropout_rate, name='top_dropout')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(
classes,
activation=classifier_activation,
kernel_initializer=DENSE_KERNEL_INITIALIZER,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D(name='max_pool')(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name=model_name)
# Load weights.
if weights == 'imagenet':
if include_top:
file_suffix = '.h5'
file_hash = WEIGHTS_HASHES[model_name[-2:]][0]
else:
file_suffix = '_notop.h5'
file_hash = WEIGHTS_HASHES[model_name[-2:]][1]
file_name = model_name + file_suffix
weights_path = data_utils.get_file(
file_name,
BASE_WEIGHTS_PATH + file_name,
cache_subdir='models',
file_hash=file_hash)
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def block(inputs,
activation='swish',
drop_rate=0.,
name='',
filters_in=32,
filters_out=16,
kernel_size=3,
strides=1,
expand_ratio=1,
se_ratio=0.,
id_skip=True):
"""An inverted residual block.
Args:
inputs: input tensor.
activation: activation function.
drop_rate: float between 0 and 1, fraction of the input units to drop.
name: string, block label.
filters_in: integer, the number of input filters.
filters_out: integer, the number of output filters.
kernel_size: integer, the dimension of the convolution window.
strides: integer, the stride of the convolution.
expand_ratio: integer, scaling coefficient for the input filters.
se_ratio: float between 0 and 1, fraction to squeeze the input filters.
id_skip: boolean.
Returns:
output tensor for the block.
"""
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
# Expansion phase
filters = filters_in * expand_ratio
if expand_ratio != 1:
x = layers.Conv2D(
filters,
1,
padding='same',
use_bias=False,
kernel_initializer=CONV_KERNEL_INITIALIZER,
name=name + 'expand_conv')(
inputs)
x = layers.BatchNormalization(axis=bn_axis, name=name + 'expand_bn')(x)
x = layers.Activation(activation, name=name + 'expand_activation')(x)
else:
x = inputs
# Depthwise Convolution
if strides == 2:
x = layers.ZeroPadding2D(
padding=imagenet_utils.correct_pad(x, kernel_size),
name=name + 'dwconv_pad')(x)
conv_pad = 'valid'
else:
conv_pad = 'same'
x = layers.DepthwiseConv2D(
kernel_size,
strides=strides,
padding=conv_pad,
use_bias=False,
depthwise_initializer=CONV_KERNEL_INITIALIZER,
name=name + 'dwconv')(x)
x = layers.BatchNormalization(axis=bn_axis, name=name + 'bn')(x)
x = layers.Activation(activation, name=name + 'activation')(x)
# Squeeze and Excitation phase
if 0 < se_ratio <= 1:
filters_se = max(1, int(filters_in * se_ratio))
se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x)
if bn_axis == 1:
se_shape = (filters, 1, 1)
else:
se_shape = (1, 1, filters)
se = layers.Reshape(se_shape, name=name + 'se_reshape')(se)
se = layers.Conv2D(
filters_se,
1,
padding='same',
activation=activation,
kernel_initializer=CONV_KERNEL_INITIALIZER,
name=name + 'se_reduce')(
se)
se = layers.Conv2D(
filters,
1,
padding='same',
activation='sigmoid',
kernel_initializer=CONV_KERNEL_INITIALIZER,
name=name + 'se_expand')(se)
x = layers.multiply([x, se], name=name + 'se_excite')
# Output phase
x = layers.Conv2D(
filters_out,
1,
padding='same',
use_bias=False,
kernel_initializer=CONV_KERNEL_INITIALIZER,
name=name + 'project_conv')(x)
x = layers.BatchNormalization(axis=bn_axis, name=name + 'project_bn')(x)
if id_skip and strides == 1 and filters_in == filters_out:
if drop_rate > 0:
x = layers.Dropout(
drop_rate, noise_shape=(None, 1, 1, 1), name=name + 'drop')(x)
x = layers.add([x, inputs], name=name + 'add')
return x
@keras_export('keras.applications.efficientnet.EfficientNetB0',
'keras.applications.EfficientNetB0')
def EfficientNetB0(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.0,
1.0,
224,
0.2,
model_name='efficientnetb0',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB1',
'keras.applications.EfficientNetB1')
def EfficientNetB1(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.0,
1.1,
240,
0.2,
model_name='efficientnetb1',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB2',
'keras.applications.EfficientNetB2')
def EfficientNetB2(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.1,
1.2,
260,
0.3,
model_name='efficientnetb2',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB3',
'keras.applications.EfficientNetB3')
def EfficientNetB3(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.2,
1.4,
300,
0.3,
model_name='efficientnetb3',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB4',
'keras.applications.EfficientNetB4')
def EfficientNetB4(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.4,
1.8,
380,
0.4,
model_name='efficientnetb4',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB5',
'keras.applications.EfficientNetB5')
def EfficientNetB5(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.6,
2.2,
456,
0.4,
model_name='efficientnetb5',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB6',
'keras.applications.EfficientNetB6')
def EfficientNetB6(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
1.8,
2.6,
528,
0.5,
model_name='efficientnetb6',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
@keras_export('keras.applications.efficientnet.EfficientNetB7',
'keras.applications.EfficientNetB7')
def EfficientNetB7(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
return EfficientNet(
2.0,
3.1,
600,
0.5,
model_name='efficientnetb7',
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
input_shape=input_shape,
pooling=pooling,
classes=classes,
classifier_activation=classifier_activation,
**kwargs)
EfficientNetB0.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB0')
EfficientNetB1.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB1')
EfficientNetB2.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB2')
EfficientNetB3.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB3')
EfficientNetB4.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB4')
EfficientNetB5.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB5')
EfficientNetB6.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB6')
EfficientNetB7.__doc__ = BASE_DOCSTRING.format(name='EfficientNetB7')
@keras_export('keras.applications.efficientnet.preprocess_input')
def preprocess_input(x, data_format=None): # pylint: disable=unused-argument
"""A placeholder method for backward compatibility.
The preprocessing logic has been included in the efficientnet model
implementation. Users are no longer required to call this method to normalize
the input data. This method does nothing and only kept as a placeholder to
align the API surface between old and new version of model.
Args:
x: A floating point `numpy.array` or a `tf.Tensor`.
data_format: Optional data format of the image tensor/array. Defaults to
None, in which case the global setting
`tf.keras.backend.image_data_format()` is used (unless you changed it,
it defaults to "channels_last").{mode}
Returns:
Unchanged `numpy.array` or `tf.Tensor`.
"""
return x
@keras_export('keras.applications.efficientnet.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 25,089 | 31.5 | 87 | py |
keras | keras-master/keras/applications/resnet.py | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""ResNet models for Keras.
Reference:
- [Deep Residual Learning for Image Recognition](
https://arxiv.org/abs/1512.03385) (CVPR 2015)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHTS_PATH = (
'https://storage.googleapis.com/tensorflow/keras-applications/resnet/')
WEIGHTS_HASHES = {
'resnet50': ('2cb95161c43110f7111970584f804107',
'4d473c1dd8becc155b73f8504c6f6626'),
'resnet101': ('f1aeb4b969a6efcfb50fad2f0c20cfc5',
'88cf7a10940856eca736dc7b7e228a21'),
'resnet152': ('100835be76be38e30d865e96f2aaae62',
'ee4c566cf9a93f14d82f913c2dc6dd0c'),
'resnet50v2': ('3ef43a0b657b3be2300d5770ece849e0',
'fac2f116257151a9d068a22e544a4917'),
'resnet101v2': ('6343647c601c52e1368623803854d971',
'c0ed64b8031c3730f411d2eb4eea35b5'),
'resnet152v2': ('a49b44d1979771252814e80f8ec446f9',
'ed17cf2e0169df9d443503ef94b23b33'),
'resnext50': ('67a5b30d522ed92f75a1f16eef299d1a',
'62527c363bdd9ec598bed41947b379fc'),
'resnext101':
('34fb605428fcc7aa4d62f44404c11509', '0f678c91647380debd923963594981b3')
}
layers = None
def ResNet(stack_fn,
preact,
use_bias,
model_name='resnet',
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
"""Instantiates the ResNet, ResNetV2, and ResNeXt architecture.
Args:
stack_fn: a function that returns output tensor for the
stacked residual blocks.
preact: whether to use pre-activation or not
(True for ResNetV2, False for ResNet and ResNeXt).
use_bias: whether to use biases for convolutional layers or not
(True for ResNet and ResNetV2, False for ResNeXt).
model_name: string, model name.
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `channels_last` data format)
or `(3, 224, 224)` (with `channels_first` data format).
It should have exactly 3 inputs channels.
pooling: optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
**kwargs: For backwards compatibility only.
Returns:
A `keras.Model` instance.
"""
global layers
if 'layers' in kwargs:
layers = kwargs.pop('layers')
else:
layers = VersionAwareLayers()
if kwargs:
raise ValueError('Unknown argument(s): %s' % (kwargs,))
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=224,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
x = layers.ZeroPadding2D(
padding=((3, 3), (3, 3)), name='conv1_pad')(img_input)
x = layers.Conv2D(64, 7, strides=2, use_bias=use_bias, name='conv1_conv')(x)
if not preact:
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name='conv1_bn')(x)
x = layers.Activation('relu', name='conv1_relu')(x)
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name='pool1_pad')(x)
x = layers.MaxPooling2D(3, strides=2, name='pool1_pool')(x)
x = stack_fn(x)
if preact:
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name='post_bn')(x)
x = layers.Activation('relu', name='post_relu')(x)
if include_top:
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D(name='max_pool')(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name=model_name)
# Load weights.
if (weights == 'imagenet') and (model_name in WEIGHTS_HASHES):
if include_top:
file_name = model_name + '_weights_tf_dim_ordering_tf_kernels.h5'
file_hash = WEIGHTS_HASHES[model_name][0]
else:
file_name = model_name + '_weights_tf_dim_ordering_tf_kernels_notop.h5'
file_hash = WEIGHTS_HASHES[model_name][1]
weights_path = data_utils.get_file(
file_name,
BASE_WEIGHTS_PATH + file_name,
cache_subdir='models',
file_hash=file_hash)
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def block1(x, filters, kernel_size=3, stride=1, conv_shortcut=True, name=None):
"""A residual block.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer.
kernel_size: default 3, kernel size of the bottleneck layer.
stride: default 1, stride of the first layer.
conv_shortcut: default True, use convolution shortcut if True,
otherwise identity shortcut.
name: string, block label.
Returns:
Output tensor for the residual block.
"""
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
if conv_shortcut:
shortcut = layers.Conv2D(
4 * filters, 1, strides=stride, name=name + '_0_conv')(x)
shortcut = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_0_bn')(shortcut)
else:
shortcut = x
x = layers.Conv2D(filters, 1, strides=stride, name=name + '_1_conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_1_bn')(x)
x = layers.Activation('relu', name=name + '_1_relu')(x)
x = layers.Conv2D(
filters, kernel_size, padding='SAME', name=name + '_2_conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_2_bn')(x)
x = layers.Activation('relu', name=name + '_2_relu')(x)
x = layers.Conv2D(4 * filters, 1, name=name + '_3_conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_3_bn')(x)
x = layers.Add(name=name + '_add')([shortcut, x])
x = layers.Activation('relu', name=name + '_out')(x)
return x
def stack1(x, filters, blocks, stride1=2, name=None):
"""A set of stacked residual blocks.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer in a block.
blocks: integer, blocks in the stacked blocks.
stride1: default 2, stride of the first layer in the first block.
name: string, stack label.
Returns:
Output tensor for the stacked blocks.
"""
x = block1(x, filters, stride=stride1, name=name + '_block1')
for i in range(2, blocks + 1):
x = block1(x, filters, conv_shortcut=False, name=name + '_block' + str(i))
return x
def block2(x, filters, kernel_size=3, stride=1, conv_shortcut=False, name=None):
"""A residual block.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer.
kernel_size: default 3, kernel size of the bottleneck layer.
stride: default 1, stride of the first layer.
conv_shortcut: default False, use convolution shortcut if True,
otherwise identity shortcut.
name: string, block label.
Returns:
Output tensor for the residual block.
"""
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
preact = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_preact_bn')(x)
preact = layers.Activation('relu', name=name + '_preact_relu')(preact)
if conv_shortcut:
shortcut = layers.Conv2D(
4 * filters, 1, strides=stride, name=name + '_0_conv')(preact)
else:
shortcut = layers.MaxPooling2D(1, strides=stride)(x) if stride > 1 else x
x = layers.Conv2D(
filters, 1, strides=1, use_bias=False, name=name + '_1_conv')(preact)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_1_bn')(x)
x = layers.Activation('relu', name=name + '_1_relu')(x)
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name=name + '_2_pad')(x)
x = layers.Conv2D(
filters,
kernel_size,
strides=stride,
use_bias=False,
name=name + '_2_conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_2_bn')(x)
x = layers.Activation('relu', name=name + '_2_relu')(x)
x = layers.Conv2D(4 * filters, 1, name=name + '_3_conv')(x)
x = layers.Add(name=name + '_out')([shortcut, x])
return x
def stack2(x, filters, blocks, stride1=2, name=None):
"""A set of stacked residual blocks.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer in a block.
blocks: integer, blocks in the stacked blocks.
stride1: default 2, stride of the first layer in the first block.
name: string, stack label.
Returns:
Output tensor for the stacked blocks.
"""
x = block2(x, filters, conv_shortcut=True, name=name + '_block1')
for i in range(2, blocks):
x = block2(x, filters, name=name + '_block' + str(i))
x = block2(x, filters, stride=stride1, name=name + '_block' + str(blocks))
return x
def block3(x,
filters,
kernel_size=3,
stride=1,
groups=32,
conv_shortcut=True,
name=None):
"""A residual block.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer.
kernel_size: default 3, kernel size of the bottleneck layer.
stride: default 1, stride of the first layer.
groups: default 32, group size for grouped convolution.
conv_shortcut: default True, use convolution shortcut if True,
otherwise identity shortcut.
name: string, block label.
Returns:
Output tensor for the residual block.
"""
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
if conv_shortcut:
shortcut = layers.Conv2D(
(64 // groups) * filters,
1,
strides=stride,
use_bias=False,
name=name + '_0_conv')(x)
shortcut = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_0_bn')(shortcut)
else:
shortcut = x
x = layers.Conv2D(filters, 1, use_bias=False, name=name + '_1_conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_1_bn')(x)
x = layers.Activation('relu', name=name + '_1_relu')(x)
c = filters // groups
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name=name + '_2_pad')(x)
x = layers.DepthwiseConv2D(
kernel_size,
strides=stride,
depth_multiplier=c,
use_bias=False,
name=name + '_2_conv')(x)
x_shape = backend.shape(x)[:-1]
x = backend.reshape(x, backend.concatenate([x_shape, (groups, c, c)]))
x = layers.Lambda(
lambda x: sum(x[:, :, :, :, i] for i in range(c)),
name=name + '_2_reduce')(x)
x = backend.reshape(x, backend.concatenate([x_shape, (filters,)]))
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_2_bn')(x)
x = layers.Activation('relu', name=name + '_2_relu')(x)
x = layers.Conv2D(
(64 // groups) * filters, 1, use_bias=False, name=name + '_3_conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_3_bn')(x)
x = layers.Add(name=name + '_add')([shortcut, x])
x = layers.Activation('relu', name=name + '_out')(x)
return x
def stack3(x, filters, blocks, stride1=2, groups=32, name=None):
"""A set of stacked residual blocks.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer in a block.
blocks: integer, blocks in the stacked blocks.
stride1: default 2, stride of the first layer in the first block.
groups: default 32, group size for grouped convolution.
name: string, stack label.
Returns:
Output tensor for the stacked blocks.
"""
x = block3(x, filters, stride=stride1, groups=groups, name=name + '_block1')
for i in range(2, blocks + 1):
x = block3(
x,
filters,
groups=groups,
conv_shortcut=False,
name=name + '_block' + str(i))
return x
@keras_export('keras.applications.resnet50.ResNet50',
'keras.applications.resnet.ResNet50',
'keras.applications.ResNet50')
def ResNet50(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs):
"""Instantiates the ResNet50 architecture."""
def stack_fn(x):
x = stack1(x, 64, 3, stride1=1, name='conv2')
x = stack1(x, 128, 4, name='conv3')
x = stack1(x, 256, 6, name='conv4')
return stack1(x, 512, 3, name='conv5')
return ResNet(stack_fn, False, True, 'resnet50', include_top, weights,
input_tensor, input_shape, pooling, classes, **kwargs)
@keras_export('keras.applications.resnet.ResNet101',
'keras.applications.ResNet101')
def ResNet101(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs):
"""Instantiates the ResNet101 architecture."""
def stack_fn(x):
x = stack1(x, 64, 3, stride1=1, name='conv2')
x = stack1(x, 128, 4, name='conv3')
x = stack1(x, 256, 23, name='conv4')
return stack1(x, 512, 3, name='conv5')
return ResNet(stack_fn, False, True, 'resnet101', include_top, weights,
input_tensor, input_shape, pooling, classes, **kwargs)
@keras_export('keras.applications.resnet.ResNet152',
'keras.applications.ResNet152')
def ResNet152(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs):
"""Instantiates the ResNet152 architecture."""
def stack_fn(x):
x = stack1(x, 64, 3, stride1=1, name='conv2')
x = stack1(x, 128, 8, name='conv3')
x = stack1(x, 256, 36, name='conv4')
return stack1(x, 512, 3, name='conv5')
return ResNet(stack_fn, False, True, 'resnet152', include_top, weights,
input_tensor, input_shape, pooling, classes, **kwargs)
@keras_export('keras.applications.resnet50.preprocess_input',
'keras.applications.resnet.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode='caffe')
@keras_export('keras.applications.resnet50.decode_predictions',
'keras.applications.resnet.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_CAFFE,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
DOC = """
Reference:
- [Deep Residual Learning for Image Recognition](
https://arxiv.org/abs/1512.03385) (CVPR 2015)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your
inputs before passing them to the model.
`resnet.preprocess_input` will convert the input images from RGB to BGR,
then will zero-center each color channel with respect to the ImageNet dataset,
without scaling.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `'channels_last'` data format)
or `(3, 224, 224)` (with `'channels_first'` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A Keras model instance.
"""
setattr(ResNet50, '__doc__', ResNet50.__doc__ + DOC)
setattr(ResNet101, '__doc__', ResNet101.__doc__ + DOC)
setattr(ResNet152, '__doc__', ResNet152.__doc__ + DOC)
| 21,207 | 35.191126 | 87 | py |
keras | keras-master/keras/applications/vgg16.py | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""VGG16 model for Keras.
Reference:
- [Very Deep Convolutional Networks for Large-Scale Image Recognition]
(https://arxiv.org/abs/1409.1556) (ICLR 2015)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
WEIGHTS_PATH = ('https://storage.googleapis.com/tensorflow/keras-applications/'
'vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5')
WEIGHTS_PATH_NO_TOP = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/vgg16/'
'vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')
layers = VersionAwareLayers()
@keras_export('keras.applications.vgg16.VGG16', 'keras.applications.VGG16')
def VGG16(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the VGG16 model.
Reference:
- [Very Deep Convolutional Networks for Large-Scale Image Recognition](
https://arxiv.org/abs/1409.1556) (ICLR 2015)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
The default input size for this model is 224x224.
Note: each Keras Application expects a specific kind of input preprocessing.
For VGG16, call `tf.keras.applications.vgg16.preprocess_input` on your
inputs before passing them to the model.
`vgg16.preprocess_input` will convert the input images from RGB to BGR,
then will zero-center each color channel with respect to the ImageNet dataset,
without scaling.
Args:
include_top: whether to include the 3 fully-connected
layers at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)`
(with `channels_last` data format)
or `(3, 224, 224)` (with `channels_first` data format).
It should have exactly 3 input channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError(
'The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded. Received: '
f'weights={weights}')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
'as true, `classes` should be 1000. '
f'Received `classes={classes}`')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=224,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
# Block 1
x = layers.Conv2D(
64, (3, 3), activation='relu', padding='same', name='block1_conv1')(
img_input)
x = layers.Conv2D(
64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = layers.Conv2D(
128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = layers.Conv2D(
128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
if include_top:
# Classification block
x = layers.Flatten(name='flatten')(x)
x = layers.Dense(4096, activation='relu', name='fc1')(x)
x = layers.Dense(4096, activation='relu', name='fc2')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='vgg16')
# Load weights.
if weights == 'imagenet':
if include_top:
weights_path = data_utils.get_file(
'vgg16_weights_tf_dim_ordering_tf_kernels.h5',
WEIGHTS_PATH,
cache_subdir='models',
file_hash='64373286793e3c8b2b4e3219cbf3544b')
else:
weights_path = data_utils.get_file(
'vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',
WEIGHTS_PATH_NO_TOP,
cache_subdir='models',
file_hash='6d6bbae143d832006294945121d1f1fc')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
@keras_export('keras.applications.vgg16.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode='caffe')
@keras_export('keras.applications.vgg16.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_CAFFE,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 9,586 | 37.971545 | 87 | py |
keras | keras-master/keras/applications/densenet.py | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""DenseNet models for Keras.
Reference:
- [Densely Connected Convolutional Networks](
https://arxiv.org/abs/1608.06993) (CVPR 2017)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHTS_PATH = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/densenet/')
DENSENET121_WEIGHT_PATH = (
BASE_WEIGHTS_PATH + 'densenet121_weights_tf_dim_ordering_tf_kernels.h5')
DENSENET121_WEIGHT_PATH_NO_TOP = (
BASE_WEIGHTS_PATH +
'densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5')
DENSENET169_WEIGHT_PATH = (
BASE_WEIGHTS_PATH + 'densenet169_weights_tf_dim_ordering_tf_kernels.h5')
DENSENET169_WEIGHT_PATH_NO_TOP = (
BASE_WEIGHTS_PATH +
'densenet169_weights_tf_dim_ordering_tf_kernels_notop.h5')
DENSENET201_WEIGHT_PATH = (
BASE_WEIGHTS_PATH + 'densenet201_weights_tf_dim_ordering_tf_kernels.h5')
DENSENET201_WEIGHT_PATH_NO_TOP = (
BASE_WEIGHTS_PATH +
'densenet201_weights_tf_dim_ordering_tf_kernels_notop.h5')
layers = VersionAwareLayers()
def dense_block(x, blocks, name):
"""A dense block.
Args:
x: input tensor.
blocks: integer, the number of building blocks.
name: string, block label.
Returns:
Output tensor for the block.
"""
for i in range(blocks):
x = conv_block(x, 32, name=name + '_block' + str(i + 1))
return x
def transition_block(x, reduction, name):
"""A transition block.
Args:
x: input tensor.
reduction: float, compression rate at transition layers.
name: string, block label.
Returns:
output tensor for the block.
"""
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_bn')(
x)
x = layers.Activation('relu', name=name + '_relu')(x)
x = layers.Conv2D(
int(backend.int_shape(x)[bn_axis] * reduction),
1,
use_bias=False,
name=name + '_conv')(
x)
x = layers.AveragePooling2D(2, strides=2, name=name + '_pool')(x)
return x
def conv_block(x, growth_rate, name):
"""A building block for a dense block.
Args:
x: input tensor.
growth_rate: float, growth rate at dense layers.
name: string, block label.
Returns:
Output tensor for the block.
"""
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
x1 = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_0_bn')(
x)
x1 = layers.Activation('relu', name=name + '_0_relu')(x1)
x1 = layers.Conv2D(
4 * growth_rate, 1, use_bias=False, name=name + '_1_conv')(
x1)
x1 = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + '_1_bn')(
x1)
x1 = layers.Activation('relu', name=name + '_1_relu')(x1)
x1 = layers.Conv2D(
growth_rate, 3, padding='same', use_bias=False, name=name + '_2_conv')(
x1)
x = layers.Concatenate(axis=bn_axis, name=name + '_concat')([x, x1])
return x
def DenseNet(
blocks,
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the DenseNet architecture.
Reference:
- [Densely Connected Convolutional Networks](
https://arxiv.org/abs/1608.06993) (CVPR 2017)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your
inputs before passing them to the model.
`densenet.preprocess_input` will scale pixels between 0 and 1 and then
will normalize each channel with respect to the ImageNet dataset statistics.
Args:
blocks: numbers of building blocks for the four dense layers.
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `'channels_last'` data format)
or `(3, 224, 224)` (with `'channels_first'` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=224,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1
x = layers.ZeroPadding2D(padding=((3, 3), (3, 3)))(img_input)
x = layers.Conv2D(64, 7, strides=2, use_bias=False, name='conv1/conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name='conv1/bn')(
x)
x = layers.Activation('relu', name='conv1/relu')(x)
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)))(x)
x = layers.MaxPooling2D(3, strides=2, name='pool1')(x)
x = dense_block(x, blocks[0], name='conv2')
x = transition_block(x, 0.5, name='pool2')
x = dense_block(x, blocks[1], name='conv3')
x = transition_block(x, 0.5, name='pool3')
x = dense_block(x, blocks[2], name='conv4')
x = transition_block(x, 0.5, name='pool4')
x = dense_block(x, blocks[3], name='conv5')
x = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5, name='bn')(x)
x = layers.Activation('relu', name='relu')(x)
if include_top:
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D(name='max_pool')(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
if blocks == [6, 12, 24, 16]:
model = training.Model(inputs, x, name='densenet121')
elif blocks == [6, 12, 32, 32]:
model = training.Model(inputs, x, name='densenet169')
elif blocks == [6, 12, 48, 32]:
model = training.Model(inputs, x, name='densenet201')
else:
model = training.Model(inputs, x, name='densenet')
# Load weights.
if weights == 'imagenet':
if include_top:
if blocks == [6, 12, 24, 16]:
weights_path = data_utils.get_file(
'densenet121_weights_tf_dim_ordering_tf_kernels.h5',
DENSENET121_WEIGHT_PATH,
cache_subdir='models',
file_hash='9d60b8095a5708f2dcce2bca79d332c7')
elif blocks == [6, 12, 32, 32]:
weights_path = data_utils.get_file(
'densenet169_weights_tf_dim_ordering_tf_kernels.h5',
DENSENET169_WEIGHT_PATH,
cache_subdir='models',
file_hash='d699b8f76981ab1b30698df4c175e90b')
elif blocks == [6, 12, 48, 32]:
weights_path = data_utils.get_file(
'densenet201_weights_tf_dim_ordering_tf_kernels.h5',
DENSENET201_WEIGHT_PATH,
cache_subdir='models',
file_hash='1ceb130c1ea1b78c3bf6114dbdfd8807')
else:
if blocks == [6, 12, 24, 16]:
weights_path = data_utils.get_file(
'densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5',
DENSENET121_WEIGHT_PATH_NO_TOP,
cache_subdir='models',
file_hash='30ee3e1110167f948a6b9946edeeb738')
elif blocks == [6, 12, 32, 32]:
weights_path = data_utils.get_file(
'densenet169_weights_tf_dim_ordering_tf_kernels_notop.h5',
DENSENET169_WEIGHT_PATH_NO_TOP,
cache_subdir='models',
file_hash='b8c4d4c20dd625c148057b9ff1c1176b')
elif blocks == [6, 12, 48, 32]:
weights_path = data_utils.get_file(
'densenet201_weights_tf_dim_ordering_tf_kernels_notop.h5',
DENSENET201_WEIGHT_PATH_NO_TOP,
cache_subdir='models',
file_hash='c13680b51ded0fb44dff2d8f86ac8bb1')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
@keras_export('keras.applications.densenet.DenseNet121',
'keras.applications.DenseNet121')
def DenseNet121(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the Densenet121 architecture."""
return DenseNet([6, 12, 24, 16], include_top, weights, input_tensor,
input_shape, pooling, classes)
@keras_export('keras.applications.densenet.DenseNet169',
'keras.applications.DenseNet169')
def DenseNet169(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the Densenet169 architecture."""
return DenseNet([6, 12, 32, 32], include_top, weights, input_tensor,
input_shape, pooling, classes)
@keras_export('keras.applications.densenet.DenseNet201',
'keras.applications.DenseNet201')
def DenseNet201(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the Densenet201 architecture."""
return DenseNet([6, 12, 48, 32], include_top, weights, input_tensor,
input_shape, pooling, classes)
@keras_export('keras.applications.densenet.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode='torch')
@keras_export('keras.applications.densenet.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TORCH,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
DOC = """
Reference:
- [Densely Connected Convolutional Networks](
https://arxiv.org/abs/1608.06993) (CVPR 2017)
Optionally loads weights pre-trained on ImageNet.
Note that the data format convention used by the model is
the one specified in your Keras config at `~/.keras/keras.json`.
Note: each Keras Application expects a specific kind of input preprocessing.
For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your
inputs before passing them to the model.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `'channels_last'` data format)
or `(3, 224, 224)` (with `'channels_first'` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
Returns:
A Keras model instance.
"""
setattr(DenseNet121, '__doc__', DenseNet121.__doc__ + DOC)
setattr(DenseNet169, '__doc__', DenseNet169.__doc__ + DOC)
setattr(DenseNet201, '__doc__', DenseNet201.__doc__ + DOC)
| 16,084 | 36.320186 | 87 | py |
keras | keras-master/keras/applications/imagenet_utils.py | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utilities for ImageNet data preprocessing & prediction decoding."""
import json
import warnings
import numpy as np
from keras import activations
from keras import backend
from keras.utils import data_utils
from tensorflow.python.util.tf_export import keras_export
CLASS_INDEX = None
CLASS_INDEX_PATH = ('https://storage.googleapis.com/download.tensorflow.org/'
'data/imagenet_class_index.json')
PREPROCESS_INPUT_DOC = """
Preprocesses a tensor or Numpy array encoding a batch of images.
Usage example with `applications.MobileNet`:
```python
i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)
x = tf.cast(i, tf.float32)
x = tf.keras.applications.mobilenet.preprocess_input(x)
core = tf.keras.applications.MobileNet()
x = core(x)
model = tf.keras.Model(inputs=[i], outputs=[x])
image = tf.image.decode_png(tf.io.read_file('file.png'))
result = model(image)
```
Args:
x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color
channels, with values in the range [0, 255].
The preprocessed data are written over the input data
if the data types are compatible. To avoid this
behaviour, `numpy.copy(x)` can be used.
data_format: Optional data format of the image tensor/array. Defaults to
None, in which case the global setting
`tf.keras.backend.image_data_format()` is used (unless you changed it,
it defaults to "channels_last").{mode}
Returns:
Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.
{ret}
Raises:
{error}
"""
PREPROCESS_INPUT_MODE_DOC = """
mode: One of "caffe", "tf" or "torch". Defaults to "caffe".
- caffe: will convert the images from RGB to BGR,
then will zero-center each color channel with
respect to the ImageNet dataset,
without scaling.
- tf: will scale pixels between -1 and 1,
sample-wise.
- torch: will scale pixels between 0 and 1 and then
will normalize each channel with respect to the
ImageNet dataset.
"""
PREPROCESS_INPUT_DEFAULT_ERROR_DOC = """
ValueError: In case of unknown `mode` or `data_format` argument."""
PREPROCESS_INPUT_ERROR_DOC = """
ValueError: In case of unknown `data_format` argument."""
PREPROCESS_INPUT_RET_DOC_TF = """
The inputs pixel values are scaled between -1 and 1, sample-wise."""
PREPROCESS_INPUT_RET_DOC_TORCH = """
The input pixels values are scaled between 0 and 1 and each channel is
normalized with respect to the ImageNet dataset."""
PREPROCESS_INPUT_RET_DOC_CAFFE = """
The images are converted from RGB to BGR, then each color channel is
zero-centered with respect to the ImageNet dataset, without scaling."""
@keras_export('keras.applications.imagenet_utils.preprocess_input')
def preprocess_input(x, data_format=None, mode='caffe'):
"""Preprocesses a tensor or Numpy array encoding a batch of images."""
if mode not in {'caffe', 'tf', 'torch'}:
raise ValueError('Expected mode to be one of `caffe`, `tf` or `torch`. '
f'Received: mode={mode}')
if data_format is None:
data_format = backend.image_data_format()
elif data_format not in {'channels_first', 'channels_last'}:
raise ValueError('Expected data_format to be one of `channels_first` or '
f'`channels_last`. Received: data_format={data_format}')
if isinstance(x, np.ndarray):
return _preprocess_numpy_input(
x, data_format=data_format, mode=mode)
else:
return _preprocess_symbolic_input(
x, data_format=data_format, mode=mode)
preprocess_input.__doc__ = PREPROCESS_INPUT_DOC.format(
mode=PREPROCESS_INPUT_MODE_DOC,
ret='',
error=PREPROCESS_INPUT_DEFAULT_ERROR_DOC)
@keras_export('keras.applications.imagenet_utils.decode_predictions')
def decode_predictions(preds, top=5):
"""Decodes the prediction of an ImageNet model.
Args:
preds: Numpy array encoding a batch of predictions.
top: Integer, how many top-guesses to return. Defaults to 5.
Returns:
A list of lists of top class prediction tuples
`(class_name, class_description, score)`.
One list of tuples per sample in batch input.
Raises:
ValueError: In case of invalid shape of the `pred` array
(must be 2D).
"""
global CLASS_INDEX
if len(preds.shape) != 2 or preds.shape[1] != 1000:
raise ValueError('`decode_predictions` expects '
'a batch of predictions '
'(i.e. a 2D array of shape (samples, 1000)). '
'Found array with shape: ' + str(preds.shape))
if CLASS_INDEX is None:
fpath = data_utils.get_file(
'imagenet_class_index.json',
CLASS_INDEX_PATH,
cache_subdir='models',
file_hash='c2c37ea517e94d9795004a39431a14cb')
with open(fpath) as f:
CLASS_INDEX = json.load(f)
results = []
for pred in preds:
top_indices = pred.argsort()[-top:][::-1]
result = [tuple(CLASS_INDEX[str(i)]) + (pred[i],) for i in top_indices]
result.sort(key=lambda x: x[2], reverse=True)
results.append(result)
return results
def _preprocess_numpy_input(x, data_format, mode):
"""Preprocesses a Numpy array encoding a batch of images.
Args:
x: Input array, 3D or 4D.
data_format: Data format of the image array.
mode: One of "caffe", "tf" or "torch".
- caffe: will convert the images from RGB to BGR,
then will zero-center each color channel with
respect to the ImageNet dataset,
without scaling.
- tf: will scale pixels between -1 and 1,
sample-wise.
- torch: will scale pixels between 0 and 1 and then
will normalize each channel with respect to the
ImageNet dataset.
Returns:
Preprocessed Numpy array.
"""
if not issubclass(x.dtype.type, np.floating):
x = x.astype(backend.floatx(), copy=False)
if mode == 'tf':
x /= 127.5
x -= 1.
return x
elif mode == 'torch':
x /= 255.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
else:
if data_format == 'channels_first':
# 'RGB'->'BGR'
if x.ndim == 3:
x = x[::-1, ...]
else:
x = x[:, ::-1, ...]
else:
# 'RGB'->'BGR'
x = x[..., ::-1]
mean = [103.939, 116.779, 123.68]
std = None
# Zero-center by mean pixel
if data_format == 'channels_first':
if x.ndim == 3:
x[0, :, :] -= mean[0]
x[1, :, :] -= mean[1]
x[2, :, :] -= mean[2]
if std is not None:
x[0, :, :] /= std[0]
x[1, :, :] /= std[1]
x[2, :, :] /= std[2]
else:
x[:, 0, :, :] -= mean[0]
x[:, 1, :, :] -= mean[1]
x[:, 2, :, :] -= mean[2]
if std is not None:
x[:, 0, :, :] /= std[0]
x[:, 1, :, :] /= std[1]
x[:, 2, :, :] /= std[2]
else:
x[..., 0] -= mean[0]
x[..., 1] -= mean[1]
x[..., 2] -= mean[2]
if std is not None:
x[..., 0] /= std[0]
x[..., 1] /= std[1]
x[..., 2] /= std[2]
return x
def _preprocess_symbolic_input(x, data_format, mode):
"""Preprocesses a tensor encoding a batch of images.
Args:
x: Input tensor, 3D or 4D.
data_format: Data format of the image tensor.
mode: One of "caffe", "tf" or "torch".
- caffe: will convert the images from RGB to BGR,
then will zero-center each color channel with
respect to the ImageNet dataset,
without scaling.
- tf: will scale pixels between -1 and 1,
sample-wise.
- torch: will scale pixels between 0 and 1 and then
will normalize each channel with respect to the
ImageNet dataset.
Returns:
Preprocessed tensor.
"""
if mode == 'tf':
x /= 127.5
x -= 1.
return x
elif mode == 'torch':
x /= 255.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
else:
if data_format == 'channels_first':
# 'RGB'->'BGR'
if backend.ndim(x) == 3:
x = x[::-1, ...]
else:
x = x[:, ::-1, ...]
else:
# 'RGB'->'BGR'
x = x[..., ::-1]
mean = [103.939, 116.779, 123.68]
std = None
mean_tensor = backend.constant(-np.array(mean))
# Zero-center by mean pixel
if backend.dtype(x) != backend.dtype(mean_tensor):
x = backend.bias_add(
x, backend.cast(mean_tensor, backend.dtype(x)), data_format=data_format)
else:
x = backend.bias_add(x, mean_tensor, data_format)
if std is not None:
std_tensor = backend.constant(np.array(std), dtype=backend.dtype(x))
if data_format == 'channels_first':
std_tensor = backend.reshape(std_tensor, (-1, 1, 1))
x /= std_tensor
return x
def obtain_input_shape(input_shape,
default_size,
min_size,
data_format,
require_flatten,
weights=None):
"""Internal utility to compute/validate a model's input shape.
Args:
input_shape: Either None (will return the default network input shape),
or a user-provided shape to be validated.
default_size: Default input width/height for the model.
min_size: Minimum input width/height accepted by the model.
data_format: Image data format to use.
require_flatten: Whether the model is expected to
be linked to a classifier via a Flatten layer.
weights: One of `None` (random initialization)
or 'imagenet' (pre-training on ImageNet).
If weights='imagenet' input channels must be equal to 3.
Returns:
An integer shape tuple (may include None entries).
Raises:
ValueError: In case of invalid argument values.
"""
if weights != 'imagenet' and input_shape and len(input_shape) == 3:
if data_format == 'channels_first':
if input_shape[0] not in {1, 3}:
warnings.warn('This model usually expects 1 or 3 input channels. '
'However, it was passed an input_shape with ' +
str(input_shape[0]) + ' input channels.')
default_shape = (input_shape[0], default_size, default_size)
else:
if input_shape[-1] not in {1, 3}:
warnings.warn('This model usually expects 1 or 3 input channels. '
'However, it was passed an input_shape with ' +
str(input_shape[-1]) + ' input channels.')
default_shape = (default_size, default_size, input_shape[-1])
else:
if data_format == 'channels_first':
default_shape = (3, default_size, default_size)
else:
default_shape = (default_size, default_size, 3)
if weights == 'imagenet' and require_flatten:
if input_shape is not None:
if input_shape != default_shape:
raise ValueError('When setting `include_top=True` '
'and loading `imagenet` weights, '
f'`input_shape` should be {default_shape}. '
f'Received: input_shape={input_shape}')
return default_shape
if input_shape:
if data_format == 'channels_first':
if input_shape is not None:
if len(input_shape) != 3:
raise ValueError('`input_shape` must be a tuple of three integers.')
if input_shape[0] != 3 and weights == 'imagenet':
raise ValueError('The input must have 3 channels; Received '
f'`input_shape={input_shape}`')
if ((input_shape[1] is not None and input_shape[1] < min_size) or
(input_shape[2] is not None and input_shape[2] < min_size)):
raise ValueError(f'Input size must be at least {min_size}'
f'x{min_size}; Received: '
f'input_shape={input_shape}')
else:
if input_shape is not None:
if len(input_shape) != 3:
raise ValueError('`input_shape` must be a tuple of three integers.')
if input_shape[-1] != 3 and weights == 'imagenet':
raise ValueError('The input must have 3 channels; Received '
f'`input_shape={input_shape}`')
if ((input_shape[0] is not None and input_shape[0] < min_size) or
(input_shape[1] is not None and input_shape[1] < min_size)):
raise ValueError('Input size must be at least '
f'{min_size}x{min_size}; Received: '
f'input_shape={input_shape}')
else:
if require_flatten:
input_shape = default_shape
else:
if data_format == 'channels_first':
input_shape = (3, None, None)
else:
input_shape = (None, None, 3)
if require_flatten:
if None in input_shape:
raise ValueError('If `include_top` is True, '
'you should specify a static `input_shape`. '
f'Received: input_shape={input_shape}')
return input_shape
def correct_pad(inputs, kernel_size):
"""Returns a tuple for zero-padding for 2D convolution with downsampling.
Args:
inputs: Input tensor.
kernel_size: An integer or tuple/list of 2 integers.
Returns:
A tuple.
"""
img_dim = 2 if backend.image_data_format() == 'channels_first' else 1
input_size = backend.int_shape(inputs)[img_dim:(img_dim + 2)]
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size)
if input_size[0] is None:
adjust = (1, 1)
else:
adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
correct = (kernel_size[0] // 2, kernel_size[1] // 2)
return ((correct[0] - adjust[0], correct[0]),
(correct[1] - adjust[1], correct[1]))
def validate_activation(classifier_activation, weights):
"""validates that the classifer_activation is compatible with the weights.
Args:
classifier_activation: str or callable activation function
weights: The pretrained weights to load.
Raises:
ValueError: if an activation other than `None` or `softmax` are used with
pretrained weights.
"""
if weights is None:
return
classifier_activation = activations.get(classifier_activation)
if classifier_activation not in {
activations.get('softmax'),
activations.get(None)
}:
raise ValueError('Only `None` and `softmax` activations are allowed '
'for the `classifier_activation` argument when using '
'pretrained weights, with `include_top=True`; Received: '
f'classifier_activation={classifier_activation}')
| 15,197 | 33.778032 | 80 | py |
keras | keras-master/keras/applications/resnet_v2.py | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""ResNet v2 models for Keras.
Reference:
- [Identity Mappings in Deep Residual Networks]
(https://arxiv.org/abs/1603.05027) (CVPR 2016)
"""
from keras.applications import imagenet_utils
from keras.applications import resnet
from tensorflow.python.util.tf_export import keras_export
@keras_export('keras.applications.resnet_v2.ResNet50V2',
'keras.applications.ResNet50V2')
def ResNet50V2(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the ResNet50V2 architecture."""
def stack_fn(x):
x = resnet.stack2(x, 64, 3, name='conv2')
x = resnet.stack2(x, 128, 4, name='conv3')
x = resnet.stack2(x, 256, 6, name='conv4')
return resnet.stack2(x, 512, 3, stride1=1, name='conv5')
return resnet.ResNet(
stack_fn,
True,
True,
'resnet50v2',
include_top,
weights,
input_tensor,
input_shape,
pooling,
classes,
classifier_activation=classifier_activation)
@keras_export('keras.applications.resnet_v2.ResNet101V2',
'keras.applications.ResNet101V2')
def ResNet101V2(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the ResNet101V2 architecture."""
def stack_fn(x):
x = resnet.stack2(x, 64, 3, name='conv2')
x = resnet.stack2(x, 128, 4, name='conv3')
x = resnet.stack2(x, 256, 23, name='conv4')
return resnet.stack2(x, 512, 3, stride1=1, name='conv5')
return resnet.ResNet(
stack_fn,
True,
True,
'resnet101v2',
include_top,
weights,
input_tensor,
input_shape,
pooling,
classes,
classifier_activation=classifier_activation)
@keras_export('keras.applications.resnet_v2.ResNet152V2',
'keras.applications.ResNet152V2')
def ResNet152V2(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the ResNet152V2 architecture."""
def stack_fn(x):
x = resnet.stack2(x, 64, 3, name='conv2')
x = resnet.stack2(x, 128, 8, name='conv3')
x = resnet.stack2(x, 256, 36, name='conv4')
return resnet.stack2(x, 512, 3, stride1=1, name='conv5')
return resnet.ResNet(
stack_fn,
True,
True,
'resnet152v2',
include_top,
weights,
input_tensor,
input_shape,
pooling,
classes,
classifier_activation=classifier_activation)
@keras_export('keras.applications.resnet_v2.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode='tf')
@keras_export('keras.applications.resnet_v2.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
DOC = """
Reference:
- [Identity Mappings in Deep Residual Networks]
(https://arxiv.org/abs/1603.05027) (CVPR 2016)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your
inputs before passing them to the model.
`resnet_v2.preprocess_input` will scale input pixels between -1 and 1.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `'channels_last'` data format)
or `(3, 224, 224)` (with `'channels_first'` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
setattr(ResNet50V2, '__doc__', ResNet50V2.__doc__ + DOC)
setattr(ResNet101V2, '__doc__', ResNet101V2.__doc__ + DOC)
setattr(ResNet152V2, '__doc__', ResNet152V2.__doc__ + DOC)
| 6,741 | 32.879397 | 87 | py |
keras | keras-master/keras/applications/vgg19.py | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""VGG19 model for Keras.
Reference:
- [Very Deep Convolutional Networks for Large-Scale Image Recognition](
https://arxiv.org/abs/1409.1556) (ICLR 2015)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
WEIGHTS_PATH = ('https://storage.googleapis.com/tensorflow/keras-applications/'
'vgg19/vgg19_weights_tf_dim_ordering_tf_kernels.h5')
WEIGHTS_PATH_NO_TOP = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/vgg19/'
'vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5')
layers = VersionAwareLayers()
@keras_export('keras.applications.vgg19.VGG19', 'keras.applications.VGG19')
def VGG19(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the VGG19 architecture.
Reference:
- [Very Deep Convolutional Networks for Large-Scale Image Recognition](
https://arxiv.org/abs/1409.1556) (ICLR 2015)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
The default input size for this model is 224x224.
Note: each Keras Application expects a specific kind of input preprocessing.
For VGG19, call `tf.keras.applications.vgg19.preprocess_input` on your
inputs before passing them to the model.
`vgg19.preprocess_input` will convert the input images from RGB to BGR,
then will zero-center each color channel with respect to the ImageNet dataset,
without scaling.
Args:
include_top: whether to include the 3 fully-connected
layers at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)`
(with `channels_last` data format)
or `(3, 224, 224)` (with `channels_first` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded. '
f'Received: `weights={weights}.`')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
'as true, `classes` should be 1000. '
f'Received: `classes={classes}.`')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=224,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
# Block 1
x = layers.Conv2D(
64, (3, 3), activation='relu', padding='same', name='block1_conv1')(
img_input)
x = layers.Conv2D(
64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = layers.Conv2D(
128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = layers.Conv2D(
128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = layers.Conv2D(
256, (3, 3), activation='relu', padding='same', name='block3_conv4')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block4_conv4')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = layers.Conv2D(
512, (3, 3), activation='relu', padding='same', name='block5_conv4')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
if include_top:
# Classification block
x = layers.Flatten(name='flatten')(x)
x = layers.Dense(4096, activation='relu', name='fc1')(x)
x = layers.Dense(4096, activation='relu', name='fc2')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='vgg19')
# Load weights.
if weights == 'imagenet':
if include_top:
weights_path = data_utils.get_file(
'vgg19_weights_tf_dim_ordering_tf_kernels.h5',
WEIGHTS_PATH,
cache_subdir='models',
file_hash='cbe5617147190e668d6c5d5026f83318')
else:
weights_path = data_utils.get_file(
'vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5',
WEIGHTS_PATH_NO_TOP,
cache_subdir='models',
file_hash='253f8cb515780f3b799900260a226db6')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
@keras_export('keras.applications.vgg19.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode='caffe')
@keras_export('keras.applications.vgg19.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_CAFFE,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 9,818 | 38.276 | 87 | py |
keras | keras-master/keras/applications/inception_v3.py | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""Inception V3 model for Keras.
Reference:
- [Rethinking the Inception Architecture for Computer Vision](
http://arxiv.org/abs/1512.00567) (CVPR 2016)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
WEIGHTS_PATH = (
'https://storage.googleapis.com/tensorflow/keras-applications/'
'inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels.h5')
WEIGHTS_PATH_NO_TOP = (
'https://storage.googleapis.com/tensorflow/keras-applications/'
'inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5')
layers = VersionAwareLayers()
@keras_export('keras.applications.inception_v3.InceptionV3',
'keras.applications.InceptionV3')
def InceptionV3(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the Inception v3 architecture.
Reference:
- [Rethinking the Inception Architecture for Computer Vision](
http://arxiv.org/abs/1512.00567) (CVPR 2016)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For `InceptionV3`, call `tf.keras.applications.inception_v3.preprocess_input`
on your inputs before passing them to the model.
`inception_v3.preprocess_input` will scale input pixels between -1 and 1.
Args:
include_top: Boolean, whether to include the fully-connected
layer at the top, as the last layer of the network. Default to `True`.
weights: One of `None` (random initialization),
`imagenet` (pre-training on ImageNet),
or the path to the weights file to be loaded. Default to `imagenet`.
input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model. `input_tensor` is useful for sharing
inputs between multiple different networks. Default to None.
input_shape: Optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(299, 299, 3)` (with `channels_last` data format)
or `(3, 299, 299)` (with `channels_first` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 75.
E.g. `(150, 150, 3)` would be one valid value.
`input_shape` will be ignored if the `input_tensor` is provided.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` (default) means that the output of the model will be
the 4D tensor output of the last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified. Default to 1000.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded; '
f'Received: weights={weights}')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
'as true, `classes` should be 1000; '
f'Received classes={classes}')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=299,
min_size=75,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
if backend.image_data_format() == 'channels_first':
channel_axis = 1
else:
channel_axis = 3
x = conv2d_bn(img_input, 32, 3, 3, strides=(2, 2), padding='valid')
x = conv2d_bn(x, 32, 3, 3, padding='valid')
x = conv2d_bn(x, 64, 3, 3)
x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
x = conv2d_bn(x, 80, 1, 1, padding='valid')
x = conv2d_bn(x, 192, 3, 3, padding='valid')
x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
# mixed 0: 35 x 35 x 256
branch1x1 = conv2d_bn(x, 64, 1, 1)
branch5x5 = conv2d_bn(x, 48, 1, 1)
branch5x5 = conv2d_bn(branch5x5, 64, 5, 5)
branch3x3dbl = conv2d_bn(x, 64, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch_pool = layers.AveragePooling2D(
(3, 3), strides=(1, 1), padding='same')(x)
branch_pool = conv2d_bn(branch_pool, 32, 1, 1)
x = layers.concatenate([branch1x1, branch5x5, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed0')
# mixed 1: 35 x 35 x 288
branch1x1 = conv2d_bn(x, 64, 1, 1)
branch5x5 = conv2d_bn(x, 48, 1, 1)
branch5x5 = conv2d_bn(branch5x5, 64, 5, 5)
branch3x3dbl = conv2d_bn(x, 64, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch_pool = layers.AveragePooling2D(
(3, 3), strides=(1, 1), padding='same')(x)
branch_pool = conv2d_bn(branch_pool, 64, 1, 1)
x = layers.concatenate([branch1x1, branch5x5, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed1')
# mixed 2: 35 x 35 x 288
branch1x1 = conv2d_bn(x, 64, 1, 1)
branch5x5 = conv2d_bn(x, 48, 1, 1)
branch5x5 = conv2d_bn(branch5x5, 64, 5, 5)
branch3x3dbl = conv2d_bn(x, 64, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch_pool = layers.AveragePooling2D(
(3, 3), strides=(1, 1), padding='same')(x)
branch_pool = conv2d_bn(branch_pool, 64, 1, 1)
x = layers.concatenate([branch1x1, branch5x5, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed2')
# mixed 3: 17 x 17 x 768
branch3x3 = conv2d_bn(x, 384, 3, 3, strides=(2, 2), padding='valid')
branch3x3dbl = conv2d_bn(x, 64, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 96, 3, 3)
branch3x3dbl = conv2d_bn(
branch3x3dbl, 96, 3, 3, strides=(2, 2), padding='valid')
branch_pool = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
x = layers.concatenate([branch3x3, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed3')
# mixed 4: 17 x 17 x 768
branch1x1 = conv2d_bn(x, 192, 1, 1)
branch7x7 = conv2d_bn(x, 128, 1, 1)
branch7x7 = conv2d_bn(branch7x7, 128, 1, 7)
branch7x7 = conv2d_bn(branch7x7, 192, 7, 1)
branch7x7dbl = conv2d_bn(x, 128, 1, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 128, 7, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 128, 1, 7)
branch7x7dbl = conv2d_bn(branch7x7dbl, 128, 7, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 192, 1, 7)
branch_pool = layers.AveragePooling2D(
(3, 3), strides=(1, 1), padding='same')(x)
branch_pool = conv2d_bn(branch_pool, 192, 1, 1)
x = layers.concatenate([branch1x1, branch7x7, branch7x7dbl, branch_pool],
axis=channel_axis,
name='mixed4')
# mixed 5, 6: 17 x 17 x 768
for i in range(2):
branch1x1 = conv2d_bn(x, 192, 1, 1)
branch7x7 = conv2d_bn(x, 160, 1, 1)
branch7x7 = conv2d_bn(branch7x7, 160, 1, 7)
branch7x7 = conv2d_bn(branch7x7, 192, 7, 1)
branch7x7dbl = conv2d_bn(x, 160, 1, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 160, 7, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 160, 1, 7)
branch7x7dbl = conv2d_bn(branch7x7dbl, 160, 7, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 192, 1, 7)
branch_pool = layers.AveragePooling2D((3, 3),
strides=(1, 1),
padding='same')(
x)
branch_pool = conv2d_bn(branch_pool, 192, 1, 1)
x = layers.concatenate([branch1x1, branch7x7, branch7x7dbl, branch_pool],
axis=channel_axis,
name='mixed' + str(5 + i))
# mixed 7: 17 x 17 x 768
branch1x1 = conv2d_bn(x, 192, 1, 1)
branch7x7 = conv2d_bn(x, 192, 1, 1)
branch7x7 = conv2d_bn(branch7x7, 192, 1, 7)
branch7x7 = conv2d_bn(branch7x7, 192, 7, 1)
branch7x7dbl = conv2d_bn(x, 192, 1, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 192, 7, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 192, 1, 7)
branch7x7dbl = conv2d_bn(branch7x7dbl, 192, 7, 1)
branch7x7dbl = conv2d_bn(branch7x7dbl, 192, 1, 7)
branch_pool = layers.AveragePooling2D(
(3, 3), strides=(1, 1), padding='same')(x)
branch_pool = conv2d_bn(branch_pool, 192, 1, 1)
x = layers.concatenate([branch1x1, branch7x7, branch7x7dbl, branch_pool],
axis=channel_axis,
name='mixed7')
# mixed 8: 8 x 8 x 1280
branch3x3 = conv2d_bn(x, 192, 1, 1)
branch3x3 = conv2d_bn(branch3x3, 320, 3, 3, strides=(2, 2), padding='valid')
branch7x7x3 = conv2d_bn(x, 192, 1, 1)
branch7x7x3 = conv2d_bn(branch7x7x3, 192, 1, 7)
branch7x7x3 = conv2d_bn(branch7x7x3, 192, 7, 1)
branch7x7x3 = conv2d_bn(
branch7x7x3, 192, 3, 3, strides=(2, 2), padding='valid')
branch_pool = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
x = layers.concatenate([branch3x3, branch7x7x3, branch_pool],
axis=channel_axis,
name='mixed8')
# mixed 9: 8 x 8 x 2048
for i in range(2):
branch1x1 = conv2d_bn(x, 320, 1, 1)
branch3x3 = conv2d_bn(x, 384, 1, 1)
branch3x3_1 = conv2d_bn(branch3x3, 384, 1, 3)
branch3x3_2 = conv2d_bn(branch3x3, 384, 3, 1)
branch3x3 = layers.concatenate([branch3x3_1, branch3x3_2],
axis=channel_axis,
name='mixed9_' + str(i))
branch3x3dbl = conv2d_bn(x, 448, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 384, 3, 3)
branch3x3dbl_1 = conv2d_bn(branch3x3dbl, 384, 1, 3)
branch3x3dbl_2 = conv2d_bn(branch3x3dbl, 384, 3, 1)
branch3x3dbl = layers.concatenate([branch3x3dbl_1, branch3x3dbl_2],
axis=channel_axis)
branch_pool = layers.AveragePooling2D((3, 3),
strides=(1, 1),
padding='same')(
x)
branch_pool = conv2d_bn(branch_pool, 192, 1, 1)
x = layers.concatenate([branch1x1, branch3x3, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed' + str(9 + i))
if include_top:
# Classification block
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='inception_v3')
# Load weights.
if weights == 'imagenet':
if include_top:
weights_path = data_utils.get_file(
'inception_v3_weights_tf_dim_ordering_tf_kernels.h5',
WEIGHTS_PATH,
cache_subdir='models',
file_hash='9a0d58056eeedaa3f26cb7ebd46da564')
else:
weights_path = data_utils.get_file(
'inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5',
WEIGHTS_PATH_NO_TOP,
cache_subdir='models',
file_hash='bcbd6486424b2319ff4ef7d526e38f63')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def conv2d_bn(x,
filters,
num_row,
num_col,
padding='same',
strides=(1, 1),
name=None):
"""Utility function to apply conv + BN.
Args:
x: input tensor.
filters: filters in `Conv2D`.
num_row: height of the convolution kernel.
num_col: width of the convolution kernel.
padding: padding mode in `Conv2D`.
strides: strides in `Conv2D`.
name: name of the ops; will become `name + '_conv'`
for the convolution and `name + '_bn'` for the
batch norm layer.
Returns:
Output tensor after applying `Conv2D` and `BatchNormalization`.
"""
if name is not None:
bn_name = name + '_bn'
conv_name = name + '_conv'
else:
bn_name = None
conv_name = None
if backend.image_data_format() == 'channels_first':
bn_axis = 1
else:
bn_axis = 3
x = layers.Conv2D(
filters, (num_row, num_col),
strides=strides,
padding=padding,
use_bias=False,
name=conv_name)(
x)
x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
x = layers.Activation('relu', name=name)(x)
return x
@keras_export('keras.applications.inception_v3.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(x, data_format=data_format, mode='tf')
@keras_export('keras.applications.inception_v3.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 16,038 | 36.562061 | 87 | py |
keras | keras-master/keras/applications/mobilenet_v3.py | # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
# pylint: disable=missing-function-docstring
"""MobileNet v3 models for Keras."""
import tensorflow.compat.v2 as tf
from keras import backend
from keras import models
from keras.applications import imagenet_utils
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
# TODO(scottzhu): Change this to the GCS path.
BASE_WEIGHT_PATH = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/mobilenet_v3/')
WEIGHTS_HASHES = {
'large_224_0.75_float': ('765b44a33ad4005b3ac83185abf1d0eb',
'e7b4d1071996dd51a2c2ca2424570e20'),
'large_224_1.0_float': ('59e551e166be033d707958cf9e29a6a7',
'037116398e07f018c0005ffcb0406831'),
'large_minimalistic_224_1.0_float': ('675e7b876c45c57e9e63e6d90a36599c',
'a2c33aed672524d1d0b4431808177695'),
'small_224_0.75_float': ('cb65d4e5be93758266aa0a7f2c6708b7',
'4d2fe46f1c1f38057392514b0df1d673'),
'small_224_1.0_float': ('8768d4c2e7dee89b9d02b2d03d65d862',
'be7100780f875c06bcab93d76641aa26'),
'small_minimalistic_224_1.0_float': ('99cd97fb2fcdad2bf028eb838de69e37',
'20d4e357df3f7a6361f3a288857b1051'),
}
layers = VersionAwareLayers()
BASE_DOCSTRING = """Instantiates the {name} architecture.
Reference:
- [Searching for MobileNetV3](
https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
The following table describes the performance of MobileNets v3:
------------------------------------------------------------------------
MACs stands for Multiply Adds
|Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|
|---|---|---|---|---|
| mobilenet_v3_large_1.0_224 | 217 | 5.4 | 75.6 | 51.2 |
| mobilenet_v3_large_0.75_224 | 155 | 4.0 | 73.3 | 39.8 |
| mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 | 72.3 | 44.1 |
| mobilenet_v3_small_1.0_224 | 66 | 2.9 | 68.1 | 15.8 |
| mobilenet_v3_small_0.75_224 | 44 | 2.4 | 65.4 | 12.8 |
| mobilenet_v3_small_minimalistic_1.0_224 | 65 | 2.0 | 61.9 | 12.2 |
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For ModelNetV3, by default input preprocessing is included as a part of the
model (as a `Rescaling` layer), and thus
`tf.keras.applications.mobilenet_v3.preprocess_input` is actually a
pass-through function. In this use case, ModelNetV3 models expect their inputs
to be float tensors of pixels with values in the [0-255] range.
At the same time, preprocessing as a part of the model (i.e. `Rescaling`
layer) can be disabled by setting `include_preprocessing` argument to False.
With preprocessing disabled ModelNetV3 models expect their inputs to be float
tensors of pixels with values in the [-1, 1] range.
Args:
input_shape: Optional shape tuple, to be specified if you would
like to use a model with an input image resolution that is not
(224, 224, 3).
It should have exactly 3 inputs channels (224, 224, 3).
You can also omit this option if you would like
to infer input_shape from an input_tensor.
If you choose to include both input_tensor and input_shape then
input_shape will be used if they match, if the shapes
do not match then we will throw an error.
E.g. `(160, 160, 3)` would be one valid value.
alpha: controls the width of the network. This is known as the
depth multiplier in the MobileNetV3 paper, but the name is kept for
consistency with MobileNetV1 in Keras.
- If `alpha` < 1.0, proportionally decreases the number
of filters in each layer.
- If `alpha` > 1.0, proportionally increases the number
of filters in each layer.
- If `alpha` = 1, default number of filters from the paper
are used at each layer.
minimalistic: In addition to large and small models this module also
contains so-called minimalistic models, these models have the same
per-layer dimensions characteristic as MobilenetV3 however, they don't
utilize any of the advanced blocks (squeeze-and-excite units, hard-swish,
and 5x5 convolutions). While these models are less efficient on CPU, they
are much more performant on GPU/DSP.
include_top: Boolean, whether to include the fully-connected
layer at the top of the network. Defaults to `True`.
weights: String, one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: Optional Keras tensor (i.e. output of
`layers.Input()`)
to use as image input for the model.
pooling: String, optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
be applied.
classes: Integer, optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
dropout_rate: fraction of the input units to drop on the last layer.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
include_preprocessing: Boolean, whether to include the preprocessing
layer (`Rescaling`) at the bottom of the network. Defaults to `True`.
Call arguments:
inputs: A floating point `numpy.array` or a `tf.Tensor`, 4D with 3 color
channels, with values in the range [0, 255] if `include_preprocessing`
is True and in the range [-1, 1] otherwise.
Returns:
A `keras.Model` instance.
"""
def MobileNetV3(stack_fn,
last_point_ch,
input_shape=None,
alpha=1.0,
model_type='large',
minimalistic=False,
include_top=True,
weights='imagenet',
input_tensor=None,
classes=1000,
pooling=None,
dropout_rate=0.2,
classifier_activation='softmax',
include_preprocessing=True):
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded. '
f'Received weights={weights}')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
'as true, `classes` should be 1000. '
f'Received classes={classes}')
# Determine proper input shape and default size.
# If both input_shape and input_tensor are used, they should match
if input_shape is not None and input_tensor is not None:
try:
is_input_t_tensor = backend.is_keras_tensor(input_tensor)
except ValueError:
try:
is_input_t_tensor = backend.is_keras_tensor(
layer_utils.get_source_inputs(input_tensor))
except ValueError:
raise ValueError('input_tensor: ', input_tensor,
'is not type input_tensor. '
f'Received type(input_tensor)={type(input_tensor)}')
if is_input_t_tensor:
if backend.image_data_format() == 'channels_first':
if backend.int_shape(input_tensor)[1] != input_shape[1]:
raise ValueError('When backend.image_data_format()=channels_first, '
'input_shape[1] must equal '
'backend.int_shape(input_tensor)[1]. Received '
f'input_shape={input_shape}, '
'backend.int_shape(input_tensor)='
f'{backend.int_shape(input_tensor)}')
else:
if backend.int_shape(input_tensor)[2] != input_shape[1]:
raise ValueError('input_shape[1] must equal '
'backend.int_shape(input_tensor)[2]. Received '
f'input_shape={input_shape}, '
'backend.int_shape(input_tensor)='
f'{backend.int_shape(input_tensor)}')
else:
raise ValueError('input_tensor specified: ', input_tensor,
'is not a keras tensor')
# If input_shape is None, infer shape from input_tensor
if input_shape is None and input_tensor is not None:
try:
backend.is_keras_tensor(input_tensor)
except ValueError:
raise ValueError('input_tensor: ', input_tensor, 'is type: ',
type(input_tensor), 'which is not a valid type')
if backend.is_keras_tensor(input_tensor):
if backend.image_data_format() == 'channels_first':
rows = backend.int_shape(input_tensor)[2]
cols = backend.int_shape(input_tensor)[3]
input_shape = (3, cols, rows)
else:
rows = backend.int_shape(input_tensor)[1]
cols = backend.int_shape(input_tensor)[2]
input_shape = (cols, rows, 3)
# If input_shape is None and input_tensor is None using standard shape
if input_shape is None and input_tensor is None:
input_shape = (None, None, 3)
if backend.image_data_format() == 'channels_last':
row_axis, col_axis = (0, 1)
else:
row_axis, col_axis = (1, 2)
rows = input_shape[row_axis]
cols = input_shape[col_axis]
if rows and cols and (rows < 32 or cols < 32):
raise ValueError('Input size must be at least 32x32; Received `input_shape='
f'{input_shape}`')
if weights == 'imagenet':
if (not minimalistic and alpha not in [0.75, 1.0]
or minimalistic and alpha != 1.0):
raise ValueError('If imagenet weights are being loaded, '
'alpha can be one of `0.75`, `1.0` for non minimalistic '
'or `1.0` for minimalistic only.')
if rows != cols or rows != 224:
logging.warning('`input_shape` is undefined or non-square, '
'or `rows` is not 224. '
'Weights for input shape (224, 224) will be '
'loaded as the default.')
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
if minimalistic:
kernel = 3
activation = relu
se_ratio = None
else:
kernel = 5
activation = hard_swish
se_ratio = 0.25
x = img_input
if include_preprocessing:
x = layers.Rescaling(scale=1. / 127.5, offset=-1.)(x)
x = layers.Conv2D(
16,
kernel_size=3,
strides=(2, 2),
padding='same',
use_bias=False,
name='Conv')(x)
x = layers.BatchNormalization(
axis=channel_axis, epsilon=1e-3,
momentum=0.999, name='Conv/BatchNorm')(x)
x = activation(x)
x = stack_fn(x, kernel, activation, se_ratio)
last_conv_ch = _depth(backend.int_shape(x)[channel_axis] * 6)
# if the width multiplier is greater than 1 we
# increase the number of output channels
if alpha > 1.0:
last_point_ch = _depth(last_point_ch * alpha)
x = layers.Conv2D(
last_conv_ch,
kernel_size=1,
padding='same',
use_bias=False,
name='Conv_1')(x)
x = layers.BatchNormalization(
axis=channel_axis, epsilon=1e-3,
momentum=0.999, name='Conv_1/BatchNorm')(x)
x = activation(x)
x = layers.GlobalAveragePooling2D(keepdims=True)(x)
x = layers.Conv2D(
last_point_ch,
kernel_size=1,
padding='same',
use_bias=True,
name='Conv_2')(x)
x = activation(x)
if include_top:
if dropout_rate > 0:
x = layers.Dropout(dropout_rate)(x)
x = layers.Conv2D(classes, kernel_size=1, padding='same', name='Logits')(x)
x = layers.Flatten()(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Activation(activation=classifier_activation,
name='Predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D(name='max_pool')(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = models.Model(inputs, x, name='MobilenetV3' + model_type)
# Load weights.
if weights == 'imagenet':
model_name = '{}{}_224_{}_float'.format(
model_type, '_minimalistic' if minimalistic else '', str(alpha))
if include_top:
file_name = 'weights_mobilenet_v3_' + model_name + '.h5'
file_hash = WEIGHTS_HASHES[model_name][0]
else:
file_name = 'weights_mobilenet_v3_' + model_name + '_no_top.h5'
file_hash = WEIGHTS_HASHES[model_name][1]
weights_path = data_utils.get_file(
file_name,
BASE_WEIGHT_PATH + file_name,
cache_subdir='models',
file_hash=file_hash)
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
@keras_export('keras.applications.MobileNetV3Small')
def MobileNetV3Small(input_shape=None,
alpha=1.0,
minimalistic=False,
include_top=True,
weights='imagenet',
input_tensor=None,
classes=1000,
pooling=None,
dropout_rate=0.2,
classifier_activation='softmax',
include_preprocessing=True):
def stack_fn(x, kernel, activation, se_ratio):
def depth(d):
return _depth(d * alpha)
x = _inverted_res_block(x, 1, depth(16), 3, 2, se_ratio, relu, 0)
x = _inverted_res_block(x, 72. / 16, depth(24), 3, 2, None, relu, 1)
x = _inverted_res_block(x, 88. / 24, depth(24), 3, 1, None, relu, 2)
x = _inverted_res_block(x, 4, depth(40), kernel, 2, se_ratio, activation, 3)
x = _inverted_res_block(x, 6, depth(40), kernel, 1, se_ratio, activation, 4)
x = _inverted_res_block(x, 6, depth(40), kernel, 1, se_ratio, activation, 5)
x = _inverted_res_block(x, 3, depth(48), kernel, 1, se_ratio, activation, 6)
x = _inverted_res_block(x, 3, depth(48), kernel, 1, se_ratio, activation, 7)
x = _inverted_res_block(x, 6, depth(96), kernel, 2, se_ratio, activation, 8)
x = _inverted_res_block(x, 6, depth(96), kernel, 1, se_ratio, activation, 9)
x = _inverted_res_block(x, 6, depth(96), kernel, 1, se_ratio, activation,
10)
return x
return MobileNetV3(stack_fn, 1024, input_shape, alpha, 'small', minimalistic,
include_top, weights, input_tensor, classes, pooling,
dropout_rate, classifier_activation, include_preprocessing)
@keras_export('keras.applications.MobileNetV3Large')
def MobileNetV3Large(input_shape=None,
alpha=1.0,
minimalistic=False,
include_top=True,
weights='imagenet',
input_tensor=None,
classes=1000,
pooling=None,
dropout_rate=0.2,
classifier_activation='softmax',
include_preprocessing=True):
def stack_fn(x, kernel, activation, se_ratio):
def depth(d):
return _depth(d * alpha)
x = _inverted_res_block(x, 1, depth(16), 3, 1, None, relu, 0)
x = _inverted_res_block(x, 4, depth(24), 3, 2, None, relu, 1)
x = _inverted_res_block(x, 3, depth(24), 3, 1, None, relu, 2)
x = _inverted_res_block(x, 3, depth(40), kernel, 2, se_ratio, relu, 3)
x = _inverted_res_block(x, 3, depth(40), kernel, 1, se_ratio, relu, 4)
x = _inverted_res_block(x, 3, depth(40), kernel, 1, se_ratio, relu, 5)
x = _inverted_res_block(x, 6, depth(80), 3, 2, None, activation, 6)
x = _inverted_res_block(x, 2.5, depth(80), 3, 1, None, activation, 7)
x = _inverted_res_block(x, 2.3, depth(80), 3, 1, None, activation, 8)
x = _inverted_res_block(x, 2.3, depth(80), 3, 1, None, activation, 9)
x = _inverted_res_block(x, 6, depth(112), 3, 1, se_ratio, activation, 10)
x = _inverted_res_block(x, 6, depth(112), 3, 1, se_ratio, activation, 11)
x = _inverted_res_block(x, 6, depth(160), kernel, 2, se_ratio, activation,
12)
x = _inverted_res_block(x, 6, depth(160), kernel, 1, se_ratio, activation,
13)
x = _inverted_res_block(x, 6, depth(160), kernel, 1, se_ratio, activation,
14)
return x
return MobileNetV3(stack_fn, 1280, input_shape, alpha, 'large', minimalistic,
include_top, weights, input_tensor, classes, pooling,
dropout_rate, classifier_activation, include_preprocessing)
MobileNetV3Small.__doc__ = BASE_DOCSTRING.format(name='MobileNetV3Small')
MobileNetV3Large.__doc__ = BASE_DOCSTRING.format(name='MobileNetV3Large')
def relu(x):
return layers.ReLU()(x)
def hard_sigmoid(x):
return layers.ReLU(6.)(x + 3.) * (1. / 6.)
def hard_swish(x):
return layers.Multiply()([x, hard_sigmoid(x)])
# This function is taken from the original tf repo.
# It ensures that all layers have a channel number that is divisible by 8
# It can be seen here:
# https://github.com/tensorflow/models/blob/master/research/
# slim/nets/mobilenet/mobilenet.py
def _depth(v, divisor=8, min_value=None):
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v
def _se_block(inputs, filters, se_ratio, prefix):
x = layers.GlobalAveragePooling2D(
keepdims=True, name=prefix + 'squeeze_excite/AvgPool')(
inputs)
x = layers.Conv2D(
_depth(filters * se_ratio),
kernel_size=1,
padding='same',
name=prefix + 'squeeze_excite/Conv')(
x)
x = layers.ReLU(name=prefix + 'squeeze_excite/Relu')(x)
x = layers.Conv2D(
filters,
kernel_size=1,
padding='same',
name=prefix + 'squeeze_excite/Conv_1')(
x)
x = hard_sigmoid(x)
x = layers.Multiply(name=prefix + 'squeeze_excite/Mul')([inputs, x])
return x
def _inverted_res_block(x, expansion, filters, kernel_size, stride, se_ratio,
activation, block_id):
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
shortcut = x
prefix = 'expanded_conv/'
infilters = backend.int_shape(x)[channel_axis]
if block_id:
# Expand
prefix = 'expanded_conv_{}/'.format(block_id)
x = layers.Conv2D(
_depth(infilters * expansion),
kernel_size=1,
padding='same',
use_bias=False,
name=prefix + 'expand')(
x)
x = layers.BatchNormalization(
axis=channel_axis,
epsilon=1e-3,
momentum=0.999,
name=prefix + 'expand/BatchNorm')(
x)
x = activation(x)
if stride == 2:
x = layers.ZeroPadding2D(
padding=imagenet_utils.correct_pad(x, kernel_size),
name=prefix + 'depthwise/pad')(
x)
x = layers.DepthwiseConv2D(
kernel_size,
strides=stride,
padding='same' if stride == 1 else 'valid',
use_bias=False,
name=prefix + 'depthwise')(
x)
x = layers.BatchNormalization(
axis=channel_axis,
epsilon=1e-3,
momentum=0.999,
name=prefix + 'depthwise/BatchNorm')(
x)
x = activation(x)
if se_ratio:
x = _se_block(x, _depth(infilters * expansion), se_ratio, prefix)
x = layers.Conv2D(
filters,
kernel_size=1,
padding='same',
use_bias=False,
name=prefix + 'project')(
x)
x = layers.BatchNormalization(
axis=channel_axis,
epsilon=1e-3,
momentum=0.999,
name=prefix + 'project/BatchNorm')(
x)
if stride == 1 and infilters == filters:
x = layers.Add(name=prefix + 'Add')([shortcut, x])
return x
@keras_export('keras.applications.mobilenet_v3.preprocess_input')
def preprocess_input(x, data_format=None): # pylint: disable=unused-argument
"""A placeholder method for backward compatibility.
The preprocessing logic has been included in the mobilenet_v3 model
implementation. Users are no longer required to call this method to normalize
the input data. This method does nothing and only kept as a placeholder to
align the API surface between old and new version of model.
Args:
x: A floating point `numpy.array` or a `tf.Tensor`.
data_format: Optional data format of the image tensor/array. Defaults to
None, in which case the global setting
`tf.keras.backend.image_data_format()` is used (unless you changed it,
it defaults to "channels_last").{mode}
Returns:
Unchanged `numpy.array` or `tf.Tensor`.
"""
return x
@keras_export('keras.applications.mobilenet_v3.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 23,489 | 38.412752 | 87 | py |
keras | keras-master/keras/applications/nasnet.py | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""NASNet-A models for Keras.
NASNet refers to Neural Architecture Search Network, a family of models
that were designed automatically by learning the model architectures
directly on the dataset of interest.
Here we consider NASNet-A, the highest performance model that was found
for the CIFAR-10 dataset, and then extended to ImageNet 2012 dataset,
obtaining state of the art performance on CIFAR-10 and ImageNet 2012.
Only the NASNet-A models, and their respective weights, which are suited
for ImageNet 2012 are provided.
The below table describes the performance on ImageNet 2012:
--------------------------------------------------------------------------------
Architecture | Top-1 Acc | Top-5 Acc | Multiply-Adds | Params (M)
--------------------------------------------------------------------------------
| NASNet-A (4 @ 1056) | 74.0 % | 91.6 % | 564 M | 5.3 |
| NASNet-A (6 @ 4032) | 82.7 % | 96.2 % | 23.8 B | 88.9 |
--------------------------------------------------------------------------------
Reference:
- [Learning Transferable Architectures for Scalable Image Recognition](
https://arxiv.org/abs/1707.07012) (CVPR 2018)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHTS_PATH = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/nasnet/')
NASNET_MOBILE_WEIGHT_PATH = BASE_WEIGHTS_PATH + 'NASNet-mobile.h5'
NASNET_MOBILE_WEIGHT_PATH_NO_TOP = BASE_WEIGHTS_PATH + 'NASNet-mobile-no-top.h5'
NASNET_LARGE_WEIGHT_PATH = BASE_WEIGHTS_PATH + 'NASNet-large.h5'
NASNET_LARGE_WEIGHT_PATH_NO_TOP = BASE_WEIGHTS_PATH + 'NASNet-large-no-top.h5'
layers = VersionAwareLayers()
def NASNet(input_shape=None,
penultimate_filters=4032,
num_blocks=6,
stem_block_filters=96,
skip_reduction=True,
filter_multiplier=2,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000,
default_size=None,
classifier_activation='softmax'):
"""Instantiates a NASNet model.
Reference:
- [Learning Transferable Architectures for Scalable Image Recognition](
https://arxiv.org/abs/1707.07012) (CVPR 2018)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For NasNet, call `tf.keras.applications.nasnet.preprocess_input`
on your inputs before passing them to the model.
`nasnet.preprocess_input` will scale input pixels between -1 and 1.
Args:
input_shape: Optional shape tuple, the input shape
is by default `(331, 331, 3)` for NASNetLarge and
`(224, 224, 3)` for NASNetMobile.
It should have exactly 3 input channels,
and width and height should be no smaller than 32.
E.g. `(224, 224, 3)` would be one valid value.
penultimate_filters: Number of filters in the penultimate layer.
NASNet models use the notation `NASNet (N @ P)`, where:
- N is the number of blocks
- P is the number of penultimate filters
num_blocks: Number of repeated blocks of the NASNet model.
NASNet models use the notation `NASNet (N @ P)`, where:
- N is the number of blocks
- P is the number of penultimate filters
stem_block_filters: Number of filters in the initial stem block
skip_reduction: Whether to skip the reduction step at the tail
end of the network.
filter_multiplier: Controls the width of the network.
- If `filter_multiplier` < 1.0, proportionally decreases the number
of filters in each layer.
- If `filter_multiplier` > 1.0, proportionally increases the number
of filters in each layer.
- If `filter_multiplier` = 1, default number of filters from the
paper are used at each layer.
include_top: Whether to include the fully-connected
layer at the top of the network.
weights: `None` (random initialization) or
`imagenet` (ImageNet weights)
input_tensor: Optional Keras tensor (i.e. output of
`layers.Input()`)
to use as image input for the model.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
be applied.
classes: Optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
default_size: Specifies the default image size of the model
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
'as true, `classes` should be 1000')
if (isinstance(input_shape, tuple) and None in input_shape and
weights == 'imagenet'):
raise ValueError('When specifying the input shape of a NASNet'
' and loading `ImageNet` weights, '
'the input_shape argument must be static '
'(no None entries). Got: `input_shape=' +
str(input_shape) + '`.')
if default_size is None:
default_size = 331
# Determine proper input shape and default size.
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=default_size,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=True,
weights=weights)
if backend.image_data_format() != 'channels_last':
logging.warning('The NASNet family of models is only available '
'for the input data format "channels_last" '
'(width, height, channels). '
'However your settings specify the default '
'data format "channels_first" (channels, width, height).'
' You should set `image_data_format="channels_last"` '
'in your Keras config located at ~/.keras/keras.json. '
'The model being returned right now will expect inputs '
'to follow the "channels_last" data format.')
backend.set_image_data_format('channels_last')
old_data_format = 'channels_first'
else:
old_data_format = None
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
if penultimate_filters % (24 * (filter_multiplier**2)) != 0:
raise ValueError(
'For NASNet-A models, the `penultimate_filters` must be a multiple '
'of 24 * (`filter_multiplier` ** 2). Current value: %d' %
penultimate_filters)
channel_dim = 1 if backend.image_data_format() == 'channels_first' else -1
filters = penultimate_filters // 24
x = layers.Conv2D(
stem_block_filters, (3, 3),
strides=(2, 2),
padding='valid',
use_bias=False,
name='stem_conv1',
kernel_initializer='he_normal')(
img_input)
x = layers.BatchNormalization(
axis=channel_dim, momentum=0.9997, epsilon=1e-3, name='stem_bn1')(
x)
p = None
x, p = _reduction_a_cell(
x, p, filters // (filter_multiplier**2), block_id='stem_1')
x, p = _reduction_a_cell(
x, p, filters // filter_multiplier, block_id='stem_2')
for i in range(num_blocks):
x, p = _normal_a_cell(x, p, filters, block_id='%d' % (i))
x, p0 = _reduction_a_cell(
x, p, filters * filter_multiplier, block_id='reduce_%d' % (num_blocks))
p = p0 if not skip_reduction else p
for i in range(num_blocks):
x, p = _normal_a_cell(
x, p, filters * filter_multiplier, block_id='%d' % (num_blocks + i + 1))
x, p0 = _reduction_a_cell(
x,
p,
filters * filter_multiplier**2,
block_id='reduce_%d' % (2 * num_blocks))
p = p0 if not skip_reduction else p
for i in range(num_blocks):
x, p = _normal_a_cell(
x,
p,
filters * filter_multiplier**2,
block_id='%d' % (2 * num_blocks + i + 1))
x = layers.Activation('relu')(x)
if include_top:
x = layers.GlobalAveragePooling2D()(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
model = training.Model(inputs, x, name='NASNet')
# Load weights.
if weights == 'imagenet':
if default_size == 224: # mobile version
if include_top:
weights_path = data_utils.get_file(
'nasnet_mobile.h5',
NASNET_MOBILE_WEIGHT_PATH,
cache_subdir='models',
file_hash='020fb642bf7360b370c678b08e0adf61')
else:
weights_path = data_utils.get_file(
'nasnet_mobile_no_top.h5',
NASNET_MOBILE_WEIGHT_PATH_NO_TOP,
cache_subdir='models',
file_hash='1ed92395b5b598bdda52abe5c0dbfd63')
model.load_weights(weights_path)
elif default_size == 331: # large version
if include_top:
weights_path = data_utils.get_file(
'nasnet_large.h5',
NASNET_LARGE_WEIGHT_PATH,
cache_subdir='models',
file_hash='11577c9a518f0070763c2b964a382f17')
else:
weights_path = data_utils.get_file(
'nasnet_large_no_top.h5',
NASNET_LARGE_WEIGHT_PATH_NO_TOP,
cache_subdir='models',
file_hash='d81d89dc07e6e56530c4e77faddd61b5')
model.load_weights(weights_path)
else:
raise ValueError('ImageNet weights can only be loaded with NASNetLarge'
' or NASNetMobile')
elif weights is not None:
model.load_weights(weights)
if old_data_format:
backend.set_image_data_format(old_data_format)
return model
@keras_export('keras.applications.nasnet.NASNetMobile',
'keras.applications.NASNetMobile')
def NASNetMobile(input_shape=None,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000):
"""Instantiates a Mobile NASNet model in ImageNet mode.
Reference:
- [Learning Transferable Architectures for Scalable Image Recognition](
https://arxiv.org/abs/1707.07012) (CVPR 2018)
Optionally loads weights pre-trained on ImageNet.
Note that the data format convention used by the model is
the one specified in your Keras config at `~/.keras/keras.json`.
Note: each Keras Application expects a specific kind of input preprocessing.
For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your
inputs before passing them to the model.
Args:
input_shape: Optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` for NASNetMobile
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(224, 224, 3)` would be one valid value.
include_top: Whether to include the fully-connected
layer at the top of the network.
weights: `None` (random initialization) or
`imagenet` (ImageNet weights)
For loading `imagenet` weights, `input_shape` should be (224, 224, 3)
input_tensor: Optional Keras tensor (i.e. output of
`layers.Input()`)
to use as image input for the model.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
be applied.
classes: Optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
Returns:
A Keras model instance.
Raises:
ValueError: In case of invalid argument for `weights`,
or invalid input shape.
RuntimeError: If attempting to run this model with a
backend that does not support separable convolutions.
"""
return NASNet(
input_shape,
penultimate_filters=1056,
num_blocks=4,
stem_block_filters=32,
skip_reduction=False,
filter_multiplier=2,
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
pooling=pooling,
classes=classes,
default_size=224)
@keras_export('keras.applications.nasnet.NASNetLarge',
'keras.applications.NASNetLarge')
def NASNetLarge(input_shape=None,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000):
"""Instantiates a NASNet model in ImageNet mode.
Reference:
- [Learning Transferable Architectures for Scalable Image Recognition](
https://arxiv.org/abs/1707.07012) (CVPR 2018)
Optionally loads weights pre-trained on ImageNet.
Note that the data format convention used by the model is
the one specified in your Keras config at `~/.keras/keras.json`.
Note: each Keras Application expects a specific kind of input preprocessing.
For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your
inputs before passing them to the model.
Args:
input_shape: Optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(331, 331, 3)` for NASNetLarge.
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(224, 224, 3)` would be one valid value.
include_top: Whether to include the fully-connected
layer at the top of the network.
weights: `None` (random initialization) or
`imagenet` (ImageNet weights)
For loading `imagenet` weights, `input_shape` should be (331, 331, 3)
input_tensor: Optional Keras tensor (i.e. output of
`layers.Input()`)
to use as image input for the model.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
be applied.
classes: Optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
Returns:
A Keras model instance.
Raises:
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
RuntimeError: If attempting to run this model with a
backend that does not support separable convolutions.
"""
return NASNet(
input_shape,
penultimate_filters=4032,
num_blocks=6,
stem_block_filters=96,
skip_reduction=True,
filter_multiplier=2,
include_top=include_top,
weights=weights,
input_tensor=input_tensor,
pooling=pooling,
classes=classes,
default_size=331)
def _separable_conv_block(ip,
filters,
kernel_size=(3, 3),
strides=(1, 1),
block_id=None):
"""Adds 2 blocks of [relu-separable conv-batchnorm].
Args:
ip: Input tensor
filters: Number of output filters per layer
kernel_size: Kernel size of separable convolutions
strides: Strided convolution for downsampling
block_id: String block_id
Returns:
A Keras tensor
"""
channel_dim = 1 if backend.image_data_format() == 'channels_first' else -1
with backend.name_scope('separable_conv_block_%s' % block_id):
x = layers.Activation('relu')(ip)
if strides == (2, 2):
x = layers.ZeroPadding2D(
padding=imagenet_utils.correct_pad(x, kernel_size),
name='separable_conv_1_pad_%s' % block_id)(x)
conv_pad = 'valid'
else:
conv_pad = 'same'
x = layers.SeparableConv2D(
filters,
kernel_size,
strides=strides,
name='separable_conv_1_%s' % block_id,
padding=conv_pad,
use_bias=False,
kernel_initializer='he_normal')(
x)
x = layers.BatchNormalization(
axis=channel_dim,
momentum=0.9997,
epsilon=1e-3,
name='separable_conv_1_bn_%s' % (block_id))(
x)
x = layers.Activation('relu')(x)
x = layers.SeparableConv2D(
filters,
kernel_size,
name='separable_conv_2_%s' % block_id,
padding='same',
use_bias=False,
kernel_initializer='he_normal')(
x)
x = layers.BatchNormalization(
axis=channel_dim,
momentum=0.9997,
epsilon=1e-3,
name='separable_conv_2_bn_%s' % (block_id))(
x)
return x
def _adjust_block(p, ip, filters, block_id=None):
"""Adjusts the input `previous path` to match the shape of the `input`.
Used in situations where the output number of filters needs to be changed.
Args:
p: Input tensor which needs to be modified
ip: Input tensor whose shape needs to be matched
filters: Number of output filters to be matched
block_id: String block_id
Returns:
Adjusted Keras tensor
"""
channel_dim = 1 if backend.image_data_format() == 'channels_first' else -1
img_dim = 2 if backend.image_data_format() == 'channels_first' else -2
ip_shape = backend.int_shape(ip)
if p is not None:
p_shape = backend.int_shape(p)
with backend.name_scope('adjust_block'):
if p is None:
p = ip
elif p_shape[img_dim] != ip_shape[img_dim]:
with backend.name_scope('adjust_reduction_block_%s' % block_id):
p = layers.Activation('relu', name='adjust_relu_1_%s' % block_id)(p)
p1 = layers.AveragePooling2D((1, 1),
strides=(2, 2),
padding='valid',
name='adjust_avg_pool_1_%s' % block_id)(
p)
p1 = layers.Conv2D(
filters // 2, (1, 1),
padding='same',
use_bias=False,
name='adjust_conv_1_%s' % block_id,
kernel_initializer='he_normal')(
p1)
p2 = layers.ZeroPadding2D(padding=((0, 1), (0, 1)))(p)
p2 = layers.Cropping2D(cropping=((1, 0), (1, 0)))(p2)
p2 = layers.AveragePooling2D((1, 1),
strides=(2, 2),
padding='valid',
name='adjust_avg_pool_2_%s' % block_id)(
p2)
p2 = layers.Conv2D(
filters // 2, (1, 1),
padding='same',
use_bias=False,
name='adjust_conv_2_%s' % block_id,
kernel_initializer='he_normal')(
p2)
p = layers.concatenate([p1, p2], axis=channel_dim)
p = layers.BatchNormalization(
axis=channel_dim,
momentum=0.9997,
epsilon=1e-3,
name='adjust_bn_%s' % block_id)(
p)
elif p_shape[channel_dim] != filters:
with backend.name_scope('adjust_projection_block_%s' % block_id):
p = layers.Activation('relu')(p)
p = layers.Conv2D(
filters, (1, 1),
strides=(1, 1),
padding='same',
name='adjust_conv_projection_%s' % block_id,
use_bias=False,
kernel_initializer='he_normal')(
p)
p = layers.BatchNormalization(
axis=channel_dim,
momentum=0.9997,
epsilon=1e-3,
name='adjust_bn_%s' % block_id)(
p)
return p
def _normal_a_cell(ip, p, filters, block_id=None):
"""Adds a Normal cell for NASNet-A (Fig. 4 in the paper).
Args:
ip: Input tensor `x`
p: Input tensor `p`
filters: Number of output filters
block_id: String block_id
Returns:
A Keras tensor
"""
channel_dim = 1 if backend.image_data_format() == 'channels_first' else -1
with backend.name_scope('normal_A_block_%s' % block_id):
p = _adjust_block(p, ip, filters, block_id)
h = layers.Activation('relu')(ip)
h = layers.Conv2D(
filters, (1, 1),
strides=(1, 1),
padding='same',
name='normal_conv_1_%s' % block_id,
use_bias=False,
kernel_initializer='he_normal')(
h)
h = layers.BatchNormalization(
axis=channel_dim,
momentum=0.9997,
epsilon=1e-3,
name='normal_bn_1_%s' % block_id)(
h)
with backend.name_scope('block_1'):
x1_1 = _separable_conv_block(
h, filters, kernel_size=(5, 5), block_id='normal_left1_%s' % block_id)
x1_2 = _separable_conv_block(
p, filters, block_id='normal_right1_%s' % block_id)
x1 = layers.add([x1_1, x1_2], name='normal_add_1_%s' % block_id)
with backend.name_scope('block_2'):
x2_1 = _separable_conv_block(
p, filters, (5, 5), block_id='normal_left2_%s' % block_id)
x2_2 = _separable_conv_block(
p, filters, (3, 3), block_id='normal_right2_%s' % block_id)
x2 = layers.add([x2_1, x2_2], name='normal_add_2_%s' % block_id)
with backend.name_scope('block_3'):
x3 = layers.AveragePooling2D((3, 3),
strides=(1, 1),
padding='same',
name='normal_left3_%s' % (block_id))(
h)
x3 = layers.add([x3, p], name='normal_add_3_%s' % block_id)
with backend.name_scope('block_4'):
x4_1 = layers.AveragePooling2D((3, 3),
strides=(1, 1),
padding='same',
name='normal_left4_%s' % (block_id))(
p)
x4_2 = layers.AveragePooling2D((3, 3),
strides=(1, 1),
padding='same',
name='normal_right4_%s' % (block_id))(
p)
x4 = layers.add([x4_1, x4_2], name='normal_add_4_%s' % block_id)
with backend.name_scope('block_5'):
x5 = _separable_conv_block(
h, filters, block_id='normal_left5_%s' % block_id)
x5 = layers.add([x5, h], name='normal_add_5_%s' % block_id)
x = layers.concatenate([p, x1, x2, x3, x4, x5],
axis=channel_dim,
name='normal_concat_%s' % block_id)
return x, ip
def _reduction_a_cell(ip, p, filters, block_id=None):
"""Adds a Reduction cell for NASNet-A (Fig. 4 in the paper).
Args:
ip: Input tensor `x`
p: Input tensor `p`
filters: Number of output filters
block_id: String block_id
Returns:
A Keras tensor
"""
channel_dim = 1 if backend.image_data_format() == 'channels_first' else -1
with backend.name_scope('reduction_A_block_%s' % block_id):
p = _adjust_block(p, ip, filters, block_id)
h = layers.Activation('relu')(ip)
h = layers.Conv2D(
filters, (1, 1),
strides=(1, 1),
padding='same',
name='reduction_conv_1_%s' % block_id,
use_bias=False,
kernel_initializer='he_normal')(
h)
h = layers.BatchNormalization(
axis=channel_dim,
momentum=0.9997,
epsilon=1e-3,
name='reduction_bn_1_%s' % block_id)(
h)
h3 = layers.ZeroPadding2D(
padding=imagenet_utils.correct_pad(h, 3),
name='reduction_pad_1_%s' % block_id)(
h)
with backend.name_scope('block_1'):
x1_1 = _separable_conv_block(
h,
filters, (5, 5),
strides=(2, 2),
block_id='reduction_left1_%s' % block_id)
x1_2 = _separable_conv_block(
p,
filters, (7, 7),
strides=(2, 2),
block_id='reduction_right1_%s' % block_id)
x1 = layers.add([x1_1, x1_2], name='reduction_add_1_%s' % block_id)
with backend.name_scope('block_2'):
x2_1 = layers.MaxPooling2D((3, 3),
strides=(2, 2),
padding='valid',
name='reduction_left2_%s' % block_id)(
h3)
x2_2 = _separable_conv_block(
p,
filters, (7, 7),
strides=(2, 2),
block_id='reduction_right2_%s' % block_id)
x2 = layers.add([x2_1, x2_2], name='reduction_add_2_%s' % block_id)
with backend.name_scope('block_3'):
x3_1 = layers.AveragePooling2D((3, 3),
strides=(2, 2),
padding='valid',
name='reduction_left3_%s' % block_id)(
h3)
x3_2 = _separable_conv_block(
p,
filters, (5, 5),
strides=(2, 2),
block_id='reduction_right3_%s' % block_id)
x3 = layers.add([x3_1, x3_2], name='reduction_add3_%s' % block_id)
with backend.name_scope('block_4'):
x4 = layers.AveragePooling2D((3, 3),
strides=(1, 1),
padding='same',
name='reduction_left4_%s' % block_id)(
x1)
x4 = layers.add([x2, x4])
with backend.name_scope('block_5'):
x5_1 = _separable_conv_block(
x1, filters, (3, 3), block_id='reduction_left4_%s' % block_id)
x5_2 = layers.MaxPooling2D((3, 3),
strides=(2, 2),
padding='valid',
name='reduction_right5_%s' % block_id)(
h3)
x5 = layers.add([x5_1, x5_2], name='reduction_add4_%s' % block_id)
x = layers.concatenate([x2, x3, x4, x5],
axis=channel_dim,
name='reduction_concat_%s' % block_id)
return x, ip
@keras_export('keras.applications.nasnet.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(x, data_format=data_format, mode='tf')
@keras_export('keras.applications.nasnet.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 30,441 | 36.215159 | 87 | py |
keras | keras-master/keras/applications/imagenet_utils_test.py | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for imagenet_utils."""
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import numpy as np
import keras
from keras import keras_parameterized
from keras.applications import imagenet_utils as utils
from keras.mixed_precision.policy import set_policy
class TestImageNetUtils(keras_parameterized.TestCase):
def test_preprocess_input(self):
# Test invalid mode check
x = np.random.uniform(0, 255, (10, 10, 3))
with self.assertRaises(ValueError):
utils.preprocess_input(x, mode='some_unknown_mode')
# Test image batch with float and int image input
x = np.random.uniform(0, 255, (2, 10, 10, 3))
xint = x.astype('int32')
self.assertEqual(utils.preprocess_input(x).shape, x.shape)
self.assertEqual(utils.preprocess_input(xint).shape, xint.shape)
out1 = utils.preprocess_input(x, 'channels_last')
out1int = utils.preprocess_input(xint, 'channels_last')
out2 = utils.preprocess_input(
np.transpose(x, (0, 3, 1, 2)), 'channels_first')
out2int = utils.preprocess_input(
np.transpose(xint, (0, 3, 1, 2)), 'channels_first')
self.assertAllClose(out1, out2.transpose(0, 2, 3, 1))
self.assertAllClose(out1int, out2int.transpose(0, 2, 3, 1))
# Test single image
x = np.random.uniform(0, 255, (10, 10, 3))
xint = x.astype('int32')
self.assertEqual(utils.preprocess_input(x).shape, x.shape)
self.assertEqual(utils.preprocess_input(xint).shape, xint.shape)
out1 = utils.preprocess_input(x, 'channels_last')
out1int = utils.preprocess_input(xint, 'channels_last')
out2 = utils.preprocess_input(np.transpose(x, (2, 0, 1)), 'channels_first')
out2int = utils.preprocess_input(
np.transpose(xint, (2, 0, 1)), 'channels_first')
self.assertAllClose(out1, out2.transpose(1, 2, 0))
self.assertAllClose(out1int, out2int.transpose(1, 2, 0))
# Test that writing over the input data works predictably
for mode in ['torch', 'tf']:
x = np.random.uniform(0, 255, (2, 10, 10, 3))
xint = x.astype('int')
x2 = utils.preprocess_input(x, mode=mode)
xint2 = utils.preprocess_input(xint)
self.assertAllClose(x, x2)
self.assertNotEqual(xint.astype('float').max(), xint2.max())
# Caffe mode works differently from the others
x = np.random.uniform(0, 255, (2, 10, 10, 3))
xint = x.astype('int')
x2 = utils.preprocess_input(x, data_format='channels_last', mode='caffe')
xint2 = utils.preprocess_input(xint)
self.assertAllClose(x, x2[..., ::-1])
self.assertNotEqual(xint.astype('float').max(), xint2.max())
@parameterized.named_parameters([
{
'testcase_name': 'mode_torch',
'mode': 'torch'
},
{
'testcase_name': 'mode_tf',
'mode': 'tf'
},
{
'testcase_name': 'mode_caffe',
'mode': 'caffe'
},
])
def test_preprocess_input_symbolic(self, mode):
# Test image batch
x = np.random.uniform(0, 255, (2, 10, 10, 3))
inputs = keras.layers.Input(shape=x.shape[1:])
outputs = keras.layers.Lambda(
lambda x: utils.preprocess_input(x, mode=mode),
output_shape=x.shape[1:])(
inputs)
model = keras.Model(inputs, outputs)
self.assertEqual(model.predict(x).shape, x.shape)
outputs1 = keras.layers.Lambda(
lambda x: utils.preprocess_input(x, 'channels_last', mode=mode),
output_shape=x.shape[1:])(
inputs)
model1 = keras.Model(inputs, outputs1)
out1 = model1.predict(x)
x2 = np.transpose(x, (0, 3, 1, 2))
inputs2 = keras.layers.Input(shape=x2.shape[1:])
outputs2 = keras.layers.Lambda(
lambda x: utils.preprocess_input(x, 'channels_first', mode=mode),
output_shape=x2.shape[1:])(
inputs2)
model2 = keras.Model(inputs2, outputs2)
out2 = model2.predict(x2)
self.assertAllClose(out1, out2.transpose(0, 2, 3, 1))
# Test single image
x = np.random.uniform(0, 255, (10, 10, 3))
inputs = keras.layers.Input(shape=x.shape)
outputs = keras.layers.Lambda(
lambda x: utils.preprocess_input(x, mode=mode), output_shape=x.shape)(
inputs)
model = keras.Model(inputs, outputs)
self.assertEqual(model.predict(x[np.newaxis])[0].shape, x.shape)
outputs1 = keras.layers.Lambda(
lambda x: utils.preprocess_input(x, 'channels_last', mode=mode),
output_shape=x.shape)(
inputs)
model1 = keras.Model(inputs, outputs1)
out1 = model1.predict(x[np.newaxis])[0]
x2 = np.transpose(x, (2, 0, 1))
inputs2 = keras.layers.Input(shape=x2.shape)
outputs2 = keras.layers.Lambda(
lambda x: utils.preprocess_input(x, 'channels_first', mode=mode),
output_shape=x2.shape)(
inputs2)
model2 = keras.Model(inputs2, outputs2)
out2 = model2.predict(x2[np.newaxis])[0]
self.assertAllClose(out1, out2.transpose(1, 2, 0))
@parameterized.named_parameters([
{
'testcase_name': 'mode_torch',
'mode': 'torch'
},
{
'testcase_name': 'mode_tf',
'mode': 'tf'
},
{
'testcase_name': 'mode_caffe',
'mode': 'caffe'
},
])
def test_preprocess_input_symbolic_mixed_precision(self, mode):
set_policy('mixed_float16')
shape = (20, 20, 3)
inputs = keras.layers.Input(shape=shape)
try:
keras.layers.Lambda(
lambda x: utils.preprocess_input(x, mode=mode), output_shape=shape)(
inputs)
finally:
set_policy('float32')
@parameterized.named_parameters([
{'testcase_name': 'channels_last_format',
'data_format': 'channels_last'},
{'testcase_name': 'channels_first_format',
'data_format': 'channels_first'},
])
def test_obtain_input_shape(self, data_format):
# input_shape and default_size are not identical.
with self.assertRaises(ValueError):
utils.obtain_input_shape(
input_shape=(224, 224, 3),
default_size=299,
min_size=139,
data_format='channels_last',
require_flatten=True,
weights='imagenet')
# Test invalid use cases
shape = (139, 139)
if data_format == 'channels_last':
input_shape = shape + (99,)
else:
input_shape = (99,) + shape
# input_shape is smaller than min_size.
shape = (100, 100)
if data_format == 'channels_last':
input_shape = shape + (3,)
else:
input_shape = (3,) + shape
with self.assertRaises(ValueError):
utils.obtain_input_shape(
input_shape=input_shape,
default_size=None,
min_size=139,
data_format=data_format,
require_flatten=False)
# shape is 1D.
shape = (100,)
if data_format == 'channels_last':
input_shape = shape + (3,)
else:
input_shape = (3,) + shape
with self.assertRaises(ValueError):
utils.obtain_input_shape(
input_shape=input_shape,
default_size=None,
min_size=139,
data_format=data_format,
require_flatten=False)
# the number of channels is 5 not 3.
shape = (100, 100)
if data_format == 'channels_last':
input_shape = shape + (5,)
else:
input_shape = (5,) + shape
with self.assertRaises(ValueError):
utils.obtain_input_shape(
input_shape=input_shape,
default_size=None,
min_size=139,
data_format=data_format,
require_flatten=False)
# require_flatten=True with dynamic input shape.
with self.assertRaises(ValueError):
utils.obtain_input_shape(
input_shape=None,
default_size=None,
min_size=139,
data_format='channels_first',
require_flatten=True)
# test include top
self.assertEqual(utils.obtain_input_shape(
input_shape=(3, 200, 200),
default_size=None,
min_size=139,
data_format='channels_first',
require_flatten=True), (3, 200, 200))
self.assertEqual(utils.obtain_input_shape(
input_shape=None,
default_size=None,
min_size=139,
data_format='channels_last',
require_flatten=False), (None, None, 3))
self.assertEqual(utils.obtain_input_shape(
input_shape=None,
default_size=None,
min_size=139,
data_format='channels_first',
require_flatten=False), (3, None, None))
self.assertEqual(utils.obtain_input_shape(
input_shape=None,
default_size=None,
min_size=139,
data_format='channels_last',
require_flatten=False), (None, None, 3))
self.assertEqual(utils.obtain_input_shape(
input_shape=(150, 150, 3),
default_size=None,
min_size=139,
data_format='channels_last',
require_flatten=False), (150, 150, 3))
self.assertEqual(utils.obtain_input_shape(
input_shape=(3, None, None),
default_size=None,
min_size=139,
data_format='channels_first',
require_flatten=False), (3, None, None))
if __name__ == '__main__':
tf.test.main()
| 9,851 | 32.39661 | 80 | py |
keras | keras-master/keras/applications/applications_load_weight_test.py | # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Integration tests for Keras applications."""
import tensorflow.compat.v2 as tf
from absl import flags
from absl.testing import parameterized
import numpy as np
from keras.applications import densenet
from keras.applications import efficientnet
from keras.applications import inception_resnet_v2
from keras.applications import inception_v3
from keras.applications import mobilenet
from keras.applications import mobilenet_v2
from keras.applications import mobilenet_v3
from keras.applications import nasnet
from keras.applications import resnet
from keras.applications import resnet_v2
from keras.applications import vgg16
from keras.applications import vgg19
from keras.applications import xception
from keras.preprocessing import image
from keras.utils import data_utils
ARG_TO_MODEL = {
'resnet': (resnet, [resnet.ResNet50, resnet.ResNet101, resnet.ResNet152]),
'resnet_v2': (resnet_v2, [resnet_v2.ResNet50V2, resnet_v2.ResNet101V2,
resnet_v2.ResNet152V2]),
'vgg16': (vgg16, [vgg16.VGG16]),
'vgg19': (vgg19, [vgg19.VGG19]),
'xception': (xception, [xception.Xception]),
'inception_v3': (inception_v3, [inception_v3.InceptionV3]),
'inception_resnet_v2': (inception_resnet_v2,
[inception_resnet_v2.InceptionResNetV2]),
'mobilenet': (mobilenet, [mobilenet.MobileNet]),
'mobilenet_v2': (mobilenet_v2, [mobilenet_v2.MobileNetV2]),
'mobilenet_v3_small': (mobilenet_v3, [mobilenet_v3.MobileNetV3Small]),
'mobilenet_v3_large': (mobilenet_v3, [mobilenet_v3.MobileNetV3Large]),
'densenet': (densenet, [densenet.DenseNet121,
densenet.DenseNet169, densenet.DenseNet201]),
'nasnet_mobile': (nasnet, [nasnet.NASNetMobile]),
'nasnet_large': (nasnet, [nasnet.NASNetLarge]),
'efficientnet': (efficientnet,
[efficientnet.EfficientNetB0, efficientnet.EfficientNetB1,
efficientnet.EfficientNetB2, efficientnet.EfficientNetB3,
efficientnet.EfficientNetB4, efficientnet.EfficientNetB5,
efficientnet.EfficientNetB6, efficientnet.EfficientNetB7])
}
TEST_IMAGE_PATH = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/tests/elephant.jpg')
_IMAGENET_CLASSES = 1000
# Add a flag to define which application module file is tested.
# This is set as an 'arg' in the build target to guarantee that
# it only triggers the tests of the application models in the module
# if that module file has been modified.
FLAGS = flags.FLAGS
flags.DEFINE_string('module', None,
'Application module used in this test.')
def _get_elephant(target_size):
# For models that don't include a Flatten step,
# the default is to accept variable-size inputs
# even when loading ImageNet weights (since it is possible).
# In this case, default to 299x299.
if target_size[0] is None:
target_size = (299, 299)
test_image = data_utils.get_file('elephant.jpg', TEST_IMAGE_PATH)
img = image.load_img(test_image, target_size=tuple(target_size))
x = image.img_to_array(img)
return np.expand_dims(x, axis=0)
class ApplicationsLoadWeightTest(tf.test.TestCase, parameterized.TestCase):
def assertShapeEqual(self, shape1, shape2):
if len(shape1) != len(shape2):
raise AssertionError(
'Shapes are different rank: %s vs %s' % (shape1, shape2))
if shape1 != shape2:
raise AssertionError('Shapes differ: %s vs %s' % (shape1, shape2))
def test_application_pretrained_weights_loading(self):
app_module = ARG_TO_MODEL[FLAGS.module][0]
apps = ARG_TO_MODEL[FLAGS.module][1]
for app in apps:
model = app(weights='imagenet')
self.assertShapeEqual(model.output_shape, (None, _IMAGENET_CLASSES))
x = _get_elephant(model.input_shape[1:3])
x = app_module.preprocess_input(x)
preds = model.predict(x)
names = [p[1] for p in app_module.decode_predictions(preds)[0]]
# Test correct label is in top 3 (weak correctness test).
self.assertIn('African_elephant', names[:3])
if __name__ == '__main__':
tf.test.main()
| 4,840 | 40.732759 | 80 | py |
keras | keras-master/keras/applications/xception.py | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""Xception V1 model for Keras.
On ImageNet, this model gets to a top-1 validation accuracy of 0.790
and a top-5 validation accuracy of 0.945.
Reference:
- [Xception: Deep Learning with Depthwise Separable Convolutions](
https://arxiv.org/abs/1610.02357) (CVPR 2017)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
TF_WEIGHTS_PATH = (
'https://storage.googleapis.com/tensorflow/keras-applications/'
'xception/xception_weights_tf_dim_ordering_tf_kernels.h5')
TF_WEIGHTS_PATH_NO_TOP = (
'https://storage.googleapis.com/tensorflow/keras-applications/'
'xception/xception_weights_tf_dim_ordering_tf_kernels_notop.h5')
layers = VersionAwareLayers()
@keras_export('keras.applications.xception.Xception',
'keras.applications.Xception')
def Xception(
include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax'):
"""Instantiates the Xception architecture.
Reference:
- [Xception: Deep Learning with Depthwise Separable Convolutions](
https://arxiv.org/abs/1610.02357) (CVPR 2017)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
The default input image size for this model is 299x299.
Note: each Keras Application expects a specific kind of input preprocessing.
For Xception, call `tf.keras.applications.xception.preprocess_input` on your
inputs before passing them to the model.
`xception.preprocess_input` will scale input pixels between -1 and 1.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(299, 299, 3)`.
It should have exactly 3 inputs channels,
and width and height should be no smaller than 71.
E.g. `(150, 150, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True,
and if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A `keras.Model` instance.
"""
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=299,
min_size=71,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
x = layers.Conv2D(
32, (3, 3),
strides=(2, 2),
use_bias=False,
name='block1_conv1')(img_input)
x = layers.BatchNormalization(axis=channel_axis, name='block1_conv1_bn')(x)
x = layers.Activation('relu', name='block1_conv1_act')(x)
x = layers.Conv2D(64, (3, 3), use_bias=False, name='block1_conv2')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block1_conv2_bn')(x)
x = layers.Activation('relu', name='block1_conv2_act')(x)
residual = layers.Conv2D(
128, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x)
residual = layers.BatchNormalization(axis=channel_axis)(residual)
x = layers.SeparableConv2D(
128, (3, 3), padding='same', use_bias=False, name='block2_sepconv1')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block2_sepconv1_bn')(x)
x = layers.Activation('relu', name='block2_sepconv2_act')(x)
x = layers.SeparableConv2D(
128, (3, 3), padding='same', use_bias=False, name='block2_sepconv2')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block2_sepconv2_bn')(x)
x = layers.MaxPooling2D((3, 3),
strides=(2, 2),
padding='same',
name='block2_pool')(x)
x = layers.add([x, residual])
residual = layers.Conv2D(
256, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x)
residual = layers.BatchNormalization(axis=channel_axis)(residual)
x = layers.Activation('relu', name='block3_sepconv1_act')(x)
x = layers.SeparableConv2D(
256, (3, 3), padding='same', use_bias=False, name='block3_sepconv1')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block3_sepconv1_bn')(x)
x = layers.Activation('relu', name='block3_sepconv2_act')(x)
x = layers.SeparableConv2D(
256, (3, 3), padding='same', use_bias=False, name='block3_sepconv2')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block3_sepconv2_bn')(x)
x = layers.MaxPooling2D((3, 3),
strides=(2, 2),
padding='same',
name='block3_pool')(x)
x = layers.add([x, residual])
residual = layers.Conv2D(
728, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x)
residual = layers.BatchNormalization(axis=channel_axis)(residual)
x = layers.Activation('relu', name='block4_sepconv1_act')(x)
x = layers.SeparableConv2D(
728, (3, 3), padding='same', use_bias=False, name='block4_sepconv1')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block4_sepconv1_bn')(x)
x = layers.Activation('relu', name='block4_sepconv2_act')(x)
x = layers.SeparableConv2D(
728, (3, 3), padding='same', use_bias=False, name='block4_sepconv2')(x)
x = layers.BatchNormalization(axis=channel_axis, name='block4_sepconv2_bn')(x)
x = layers.MaxPooling2D((3, 3),
strides=(2, 2),
padding='same',
name='block4_pool')(x)
x = layers.add([x, residual])
for i in range(8):
residual = x
prefix = 'block' + str(i + 5)
x = layers.Activation('relu', name=prefix + '_sepconv1_act')(x)
x = layers.SeparableConv2D(
728, (3, 3),
padding='same',
use_bias=False,
name=prefix + '_sepconv1')(x)
x = layers.BatchNormalization(
axis=channel_axis, name=prefix + '_sepconv1_bn')(x)
x = layers.Activation('relu', name=prefix + '_sepconv2_act')(x)
x = layers.SeparableConv2D(
728, (3, 3),
padding='same',
use_bias=False,
name=prefix + '_sepconv2')(x)
x = layers.BatchNormalization(
axis=channel_axis, name=prefix + '_sepconv2_bn')(x)
x = layers.Activation('relu', name=prefix + '_sepconv3_act')(x)
x = layers.SeparableConv2D(
728, (3, 3),
padding='same',
use_bias=False,
name=prefix + '_sepconv3')(x)
x = layers.BatchNormalization(
axis=channel_axis, name=prefix + '_sepconv3_bn')(x)
x = layers.add([x, residual])
residual = layers.Conv2D(
1024, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x)
residual = layers.BatchNormalization(axis=channel_axis)(residual)
x = layers.Activation('relu', name='block13_sepconv1_act')(x)
x = layers.SeparableConv2D(
728, (3, 3), padding='same', use_bias=False, name='block13_sepconv1')(x)
x = layers.BatchNormalization(
axis=channel_axis, name='block13_sepconv1_bn')(x)
x = layers.Activation('relu', name='block13_sepconv2_act')(x)
x = layers.SeparableConv2D(
1024, (3, 3), padding='same', use_bias=False, name='block13_sepconv2')(x)
x = layers.BatchNormalization(
axis=channel_axis, name='block13_sepconv2_bn')(x)
x = layers.MaxPooling2D((3, 3),
strides=(2, 2),
padding='same',
name='block13_pool')(x)
x = layers.add([x, residual])
x = layers.SeparableConv2D(
1536, (3, 3), padding='same', use_bias=False, name='block14_sepconv1')(x)
x = layers.BatchNormalization(
axis=channel_axis, name='block14_sepconv1_bn')(x)
x = layers.Activation('relu', name='block14_sepconv1_act')(x)
x = layers.SeparableConv2D(
2048, (3, 3), padding='same', use_bias=False, name='block14_sepconv2')(x)
x = layers.BatchNormalization(
axis=channel_axis, name='block14_sepconv2_bn')(x)
x = layers.Activation('relu', name='block14_sepconv2_act')(x)
if include_top:
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='xception')
# Load weights.
if weights == 'imagenet':
if include_top:
weights_path = data_utils.get_file(
'xception_weights_tf_dim_ordering_tf_kernels.h5',
TF_WEIGHTS_PATH,
cache_subdir='models',
file_hash='0a58e3b7378bc2990ea3b43d5981f1f6')
else:
weights_path = data_utils.get_file(
'xception_weights_tf_dim_ordering_tf_kernels_notop.h5',
TF_WEIGHTS_PATH_NO_TOP,
cache_subdir='models',
file_hash='b0042744bf5b25fce3cb969f33bebb97')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
@keras_export('keras.applications.xception.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(x, data_format=data_format, mode='tf')
@keras_export('keras.applications.xception.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 13,000 | 38.159639 | 87 | py |
keras | keras-master/keras/applications/inception_resnet_v2.py | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""Inception-ResNet V2 model for Keras.
Reference:
- [Inception-v4, Inception-ResNet and the Impact of
Residual Connections on Learning](https://arxiv.org/abs/1602.07261)
(AAAI 2017)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHT_URL = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/inception_resnet_v2/')
layers = None
@keras_export('keras.applications.inception_resnet_v2.InceptionResNetV2',
'keras.applications.InceptionResNetV2')
def InceptionResNetV2(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
"""Instantiates the Inception-ResNet v2 architecture.
Reference:
- [Inception-v4, Inception-ResNet and the Impact of
Residual Connections on Learning](https://arxiv.org/abs/1602.07261)
(AAAI 2017)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For InceptionResNetV2, call
`tf.keras.applications.inception_resnet_v2.preprocess_input`
on your inputs before passing them to the model.
`inception_resnet_v2.preprocess_input`
will scale input pixels between -1 and 1.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is `False` (otherwise the input shape
has to be `(299, 299, 3)` (with `'channels_last'` data format)
or `(3, 299, 299)` (with `'channels_first'` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 75.
E.g. `(150, 150, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the last convolutional block.
- `'avg'` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `'max'` means that global max pooling will be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is `True`, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
**kwargs: For backwards compatibility only.
Returns:
A `keras.Model` instance.
"""
global layers
if 'layers' in kwargs:
layers = kwargs.pop('layers')
else:
layers = VersionAwareLayers()
if kwargs:
raise ValueError('Unknown argument(s): %s' % (kwargs,))
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=299,
min_size=75,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
# Stem block: 35 x 35 x 192
x = conv2d_bn(img_input, 32, 3, strides=2, padding='valid')
x = conv2d_bn(x, 32, 3, padding='valid')
x = conv2d_bn(x, 64, 3)
x = layers.MaxPooling2D(3, strides=2)(x)
x = conv2d_bn(x, 80, 1, padding='valid')
x = conv2d_bn(x, 192, 3, padding='valid')
x = layers.MaxPooling2D(3, strides=2)(x)
# Mixed 5b (Inception-A block): 35 x 35 x 320
branch_0 = conv2d_bn(x, 96, 1)
branch_1 = conv2d_bn(x, 48, 1)
branch_1 = conv2d_bn(branch_1, 64, 5)
branch_2 = conv2d_bn(x, 64, 1)
branch_2 = conv2d_bn(branch_2, 96, 3)
branch_2 = conv2d_bn(branch_2, 96, 3)
branch_pool = layers.AveragePooling2D(3, strides=1, padding='same')(x)
branch_pool = conv2d_bn(branch_pool, 64, 1)
branches = [branch_0, branch_1, branch_2, branch_pool]
channel_axis = 1 if backend.image_data_format() == 'channels_first' else 3
x = layers.Concatenate(axis=channel_axis, name='mixed_5b')(branches)
# 10x block35 (Inception-ResNet-A block): 35 x 35 x 320
for block_idx in range(1, 11):
x = inception_resnet_block(
x, scale=0.17, block_type='block35', block_idx=block_idx)
# Mixed 6a (Reduction-A block): 17 x 17 x 1088
branch_0 = conv2d_bn(x, 384, 3, strides=2, padding='valid')
branch_1 = conv2d_bn(x, 256, 1)
branch_1 = conv2d_bn(branch_1, 256, 3)
branch_1 = conv2d_bn(branch_1, 384, 3, strides=2, padding='valid')
branch_pool = layers.MaxPooling2D(3, strides=2, padding='valid')(x)
branches = [branch_0, branch_1, branch_pool]
x = layers.Concatenate(axis=channel_axis, name='mixed_6a')(branches)
# 20x block17 (Inception-ResNet-B block): 17 x 17 x 1088
for block_idx in range(1, 21):
x = inception_resnet_block(
x, scale=0.1, block_type='block17', block_idx=block_idx)
# Mixed 7a (Reduction-B block): 8 x 8 x 2080
branch_0 = conv2d_bn(x, 256, 1)
branch_0 = conv2d_bn(branch_0, 384, 3, strides=2, padding='valid')
branch_1 = conv2d_bn(x, 256, 1)
branch_1 = conv2d_bn(branch_1, 288, 3, strides=2, padding='valid')
branch_2 = conv2d_bn(x, 256, 1)
branch_2 = conv2d_bn(branch_2, 288, 3)
branch_2 = conv2d_bn(branch_2, 320, 3, strides=2, padding='valid')
branch_pool = layers.MaxPooling2D(3, strides=2, padding='valid')(x)
branches = [branch_0, branch_1, branch_2, branch_pool]
x = layers.Concatenate(axis=channel_axis, name='mixed_7a')(branches)
# 10x block8 (Inception-ResNet-C block): 8 x 8 x 2080
for block_idx in range(1, 10):
x = inception_resnet_block(
x, scale=0.2, block_type='block8', block_idx=block_idx)
x = inception_resnet_block(
x, scale=1., activation=None, block_type='block8', block_idx=10)
# Final convolution block: 8 x 8 x 1536
x = conv2d_bn(x, 1536, 1, name='conv_7b')
if include_top:
# Classification block
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(classes, activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='inception_resnet_v2')
# Load weights.
if weights == 'imagenet':
if include_top:
fname = 'inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5'
weights_path = data_utils.get_file(
fname,
BASE_WEIGHT_URL + fname,
cache_subdir='models',
file_hash='e693bd0210a403b3192acc6073ad2e96')
else:
fname = ('inception_resnet_v2_weights_'
'tf_dim_ordering_tf_kernels_notop.h5')
weights_path = data_utils.get_file(
fname,
BASE_WEIGHT_URL + fname,
cache_subdir='models',
file_hash='d19885ff4a710c122648d3b5c3b684e4')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def conv2d_bn(x,
filters,
kernel_size,
strides=1,
padding='same',
activation='relu',
use_bias=False,
name=None):
"""Utility function to apply conv + BN.
Args:
x: input tensor.
filters: filters in `Conv2D`.
kernel_size: kernel size as in `Conv2D`.
strides: strides in `Conv2D`.
padding: padding mode in `Conv2D`.
activation: activation in `Conv2D`.
use_bias: whether to use a bias in `Conv2D`.
name: name of the ops; will become `name + '_ac'` for the activation
and `name + '_bn'` for the batch norm layer.
Returns:
Output tensor after applying `Conv2D` and `BatchNormalization`.
"""
x = layers.Conv2D(
filters,
kernel_size,
strides=strides,
padding=padding,
use_bias=use_bias,
name=name)(
x)
if not use_bias:
bn_axis = 1 if backend.image_data_format() == 'channels_first' else 3
bn_name = None if name is None else name + '_bn'
x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
if activation is not None:
ac_name = None if name is None else name + '_ac'
x = layers.Activation(activation, name=ac_name)(x)
return x
def inception_resnet_block(x, scale, block_type, block_idx, activation='relu'):
"""Adds an Inception-ResNet block.
This function builds 3 types of Inception-ResNet blocks mentioned
in the paper, controlled by the `block_type` argument (which is the
block name used in the official TF-slim implementation):
- Inception-ResNet-A: `block_type='block35'`
- Inception-ResNet-B: `block_type='block17'`
- Inception-ResNet-C: `block_type='block8'`
Args:
x: input tensor.
scale: scaling factor to scale the residuals (i.e., the output of passing
`x` through an inception module) before adding them to the shortcut
branch. Let `r` be the output from the residual branch, the output of this
block will be `x + scale * r`.
block_type: `'block35'`, `'block17'` or `'block8'`, determines the network
structure in the residual branch.
block_idx: an `int` used for generating layer names. The Inception-ResNet
blocks are repeated many times in this network. We use `block_idx` to
identify each of the repetitions. For example, the first
Inception-ResNet-A block will have `block_type='block35', block_idx=0`,
and the layer names will have a common prefix `'block35_0'`.
activation: activation function to use at the end of the block (see
[activations](../activations.md)). When `activation=None`, no activation
is applied
(i.e., "linear" activation: `a(x) = x`).
Returns:
Output tensor for the block.
Raises:
ValueError: if `block_type` is not one of `'block35'`,
`'block17'` or `'block8'`.
"""
if block_type == 'block35':
branch_0 = conv2d_bn(x, 32, 1)
branch_1 = conv2d_bn(x, 32, 1)
branch_1 = conv2d_bn(branch_1, 32, 3)
branch_2 = conv2d_bn(x, 32, 1)
branch_2 = conv2d_bn(branch_2, 48, 3)
branch_2 = conv2d_bn(branch_2, 64, 3)
branches = [branch_0, branch_1, branch_2]
elif block_type == 'block17':
branch_0 = conv2d_bn(x, 192, 1)
branch_1 = conv2d_bn(x, 128, 1)
branch_1 = conv2d_bn(branch_1, 160, [1, 7])
branch_1 = conv2d_bn(branch_1, 192, [7, 1])
branches = [branch_0, branch_1]
elif block_type == 'block8':
branch_0 = conv2d_bn(x, 192, 1)
branch_1 = conv2d_bn(x, 192, 1)
branch_1 = conv2d_bn(branch_1, 224, [1, 3])
branch_1 = conv2d_bn(branch_1, 256, [3, 1])
branches = [branch_0, branch_1]
else:
raise ValueError('Unknown Inception-ResNet block type. '
'Expects "block35", "block17" or "block8", '
'but got: ' + str(block_type))
block_name = block_type + '_' + str(block_idx)
channel_axis = 1 if backend.image_data_format() == 'channels_first' else 3
mixed = layers.Concatenate(
axis=channel_axis, name=block_name + '_mixed')(
branches)
up = conv2d_bn(
mixed,
backend.int_shape(x)[channel_axis],
1,
activation=None,
use_bias=True,
name=block_name + '_conv')
x = layers.Lambda(
lambda inputs, scale: inputs[0] + inputs[1] * scale,
output_shape=backend.int_shape(x)[1:],
arguments={'scale': scale},
name=block_name)([x, up])
if activation is not None:
x = layers.Activation(activation, name=block_name + '_ac')(x)
return x
@keras_export('keras.applications.inception_resnet_v2.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(x, data_format=data_format, mode='tf')
@keras_export('keras.applications.inception_resnet_v2.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 15,195 | 37.470886 | 87 | py |
keras | keras-master/keras/applications/mobilenet.py | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""MobileNet v1 models for Keras.
MobileNet is a general architecture and can be used for multiple use cases.
Depending on the use case, it can use different input layer size and
different width factors. This allows different width models to reduce
the number of multiply-adds and thereby
reduce inference cost on mobile devices.
MobileNets support any input size greater than 32 x 32, with larger image sizes
offering better performance.
The number of parameters and number of multiply-adds
can be modified by using the `alpha` parameter,
which increases/decreases the number of filters in each layer.
By altering the image size and `alpha` parameter,
all 16 models from the paper can be built, with ImageNet weights provided.
The paper demonstrates the performance of MobileNets using `alpha` values of
1.0 (also called 100 % MobileNet), 0.75, 0.5 and 0.25.
For each of these `alpha` values, weights for 4 different input image sizes
are provided (224, 192, 160, 128).
The following table describes the size and accuracy of the 100% MobileNet
on size 224 x 224:
----------------------------------------------------------------------------
Width Multiplier (alpha) | ImageNet Acc | Multiply-Adds (M) | Params (M)
----------------------------------------------------------------------------
| 1.0 MobileNet-224 | 70.6 % | 529 | 4.2 |
| 0.75 MobileNet-224 | 68.4 % | 325 | 2.6 |
| 0.50 MobileNet-224 | 63.7 % | 149 | 1.3 |
| 0.25 MobileNet-224 | 50.6 % | 41 | 0.5 |
----------------------------------------------------------------------------
The following table describes the performance of
the 100 % MobileNet on various input sizes:
------------------------------------------------------------------------
Resolution | ImageNet Acc | Multiply-Adds (M) | Params (M)
------------------------------------------------------------------------
| 1.0 MobileNet-224 | 70.6 % | 569 | 4.2 |
| 1.0 MobileNet-192 | 69.1 % | 418 | 4.2 |
| 1.0 MobileNet-160 | 67.2 % | 290 | 4.2 |
| 1.0 MobileNet-128 | 64.4 % | 186 | 4.2 |
------------------------------------------------------------------------
Reference:
- [MobileNets: Efficient Convolutional Neural Networks
for Mobile Vision Applications](
https://arxiv.org/abs/1704.04861)
"""
import tensorflow.compat.v2 as tf
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHT_PATH = ('https://storage.googleapis.com/tensorflow/'
'keras-applications/mobilenet/')
layers = None
@keras_export('keras.applications.mobilenet.MobileNet',
'keras.applications.MobileNet')
def MobileNet(input_shape=None,
alpha=1.0,
depth_multiplier=1,
dropout=1e-3,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000,
classifier_activation='softmax',
**kwargs):
"""Instantiates the MobileNet architecture.
Reference:
- [MobileNets: Efficient Convolutional Neural Networks
for Mobile Vision Applications](
https://arxiv.org/abs/1704.04861)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each Keras Application expects a specific kind of input preprocessing.
For MobileNet, call `tf.keras.applications.mobilenet.preprocess_input`
on your inputs before passing them to the model.
`mobilenet.preprocess_input` will scale input pixels between -1 and 1.
Args:
input_shape: Optional shape tuple, only to be specified if `include_top`
is False (otherwise the input shape has to be `(224, 224, 3)` (with
`channels_last` data format) or (3, 224, 224) (with `channels_first`
data format). It should have exactly 3 inputs channels, and width and
height should be no smaller than 32. E.g. `(200, 200, 3)` would be one
valid value. Default to `None`.
`input_shape` will be ignored if the `input_tensor` is provided.
alpha: Controls the width of the network. This is known as the width
multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally
decreases the number of filters in each layer. - If `alpha` > 1.0,
proportionally increases the number of filters in each layer. - If
`alpha` = 1, default number of filters from the paper are used at each
layer. Default to 1.0.
depth_multiplier: Depth multiplier for depthwise convolution. This is
called the resolution multiplier in the MobileNet paper. Default to 1.0.
dropout: Dropout rate. Default to 0.001.
include_top: Boolean, whether to include the fully-connected layer at the
top of the network. Default to `True`.
weights: One of `None` (random initialization), 'imagenet' (pre-training
on ImageNet), or the path to the weights file to be loaded. Default to
`imagenet`.
input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`) to
use as image input for the model. `input_tensor` is useful for sharing
inputs between multiple different networks. Default to None.
pooling: Optional pooling mode for feature extraction when `include_top`
is `False`.
- `None` (default) means that the output of the model will be
the 4D tensor output of the last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will be applied.
classes: Optional number of classes to classify images into, only to be
specified if `include_top` is True, and if no `weights` argument is
specified. Defaults to 1000.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
**kwargs: For backwards compatibility only.
Returns:
A `keras.Model` instance.
"""
global layers
if 'layers' in kwargs:
layers = kwargs.pop('layers')
else:
layers = VersionAwareLayers()
if kwargs:
raise ValueError(f'Unknown argument(s): {(kwargs,)}')
if not (weights in {'imagenet', None} or tf.io.gfile.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded. '
f'Received weights={weights}')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top` '
'as true, `classes` should be 1000. '
f'Received classes={classes}')
# Determine proper input shape and default size.
if input_shape is None:
default_size = 224
else:
if backend.image_data_format() == 'channels_first':
rows = input_shape[1]
cols = input_shape[2]
else:
rows = input_shape[0]
cols = input_shape[1]
if rows == cols and rows in [128, 160, 192, 224]:
default_size = rows
else:
default_size = 224
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=default_size,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights)
if backend.image_data_format() == 'channels_last':
row_axis, col_axis = (0, 1)
else:
row_axis, col_axis = (1, 2)
rows = input_shape[row_axis]
cols = input_shape[col_axis]
if weights == 'imagenet':
if depth_multiplier != 1:
raise ValueError('If imagenet weights are being loaded, '
'depth multiplier must be 1. '
'Received depth_multiplier={depth_multiplier}')
if alpha not in [0.25, 0.50, 0.75, 1.0]:
raise ValueError('If imagenet weights are being loaded, '
'alpha can be one of'
'`0.25`, `0.50`, `0.75` or `1.0` only. '
f'Received alpha={alpha}')
if rows != cols or rows not in [128, 160, 192, 224]:
rows = 224
logging.warning('`input_shape` is undefined or non-square, '
'or `rows` is not in [128, 160, 192, 224]. '
'Weights for input shape (224, 224) will be '
'loaded as the default.')
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
x = _conv_block(img_input, 32, alpha, strides=(2, 2))
x = _depthwise_conv_block(x, 64, alpha, depth_multiplier, block_id=1)
x = _depthwise_conv_block(
x, 128, alpha, depth_multiplier, strides=(2, 2), block_id=2)
x = _depthwise_conv_block(x, 128, alpha, depth_multiplier, block_id=3)
x = _depthwise_conv_block(
x, 256, alpha, depth_multiplier, strides=(2, 2), block_id=4)
x = _depthwise_conv_block(x, 256, alpha, depth_multiplier, block_id=5)
x = _depthwise_conv_block(
x, 512, alpha, depth_multiplier, strides=(2, 2), block_id=6)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=7)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=8)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=9)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=10)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=11)
x = _depthwise_conv_block(
x, 1024, alpha, depth_multiplier, strides=(2, 2), block_id=12)
x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier, block_id=13)
if include_top:
x = layers.GlobalAveragePooling2D(keepdims=True)(x)
x = layers.Dropout(dropout, name='dropout')(x)
x = layers.Conv2D(classes, (1, 1), padding='same', name='conv_preds')(x)
x = layers.Reshape((classes,), name='reshape_2')(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Activation(activation=classifier_activation,
name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name='mobilenet_%0.2f_%s' % (alpha, rows))
# Load weights.
if weights == 'imagenet':
if alpha == 1.0:
alpha_text = '1_0'
elif alpha == 0.75:
alpha_text = '7_5'
elif alpha == 0.50:
alpha_text = '5_0'
else:
alpha_text = '2_5'
if include_top:
model_name = 'mobilenet_%s_%d_tf.h5' % (alpha_text, rows)
weight_path = BASE_WEIGHT_PATH + model_name
weights_path = data_utils.get_file(
model_name, weight_path, cache_subdir='models')
else:
model_name = 'mobilenet_%s_%d_tf_no_top.h5' % (alpha_text, rows)
weight_path = BASE_WEIGHT_PATH + model_name
weights_path = data_utils.get_file(
model_name, weight_path, cache_subdir='models')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)):
"""Adds an initial convolution layer (with batch normalization and relu6).
Args:
inputs: Input tensor of shape `(rows, cols, 3)` (with `channels_last`
data format) or (3, rows, cols) (with `channels_first` data format).
It should have exactly 3 inputs channels, and width and height should
be no smaller than 32. E.g. `(224, 224, 3)` would be one valid value.
filters: Integer, the dimensionality of the output space (i.e. the
number of output filters in the convolution).
alpha: controls the width of the network. - If `alpha` < 1.0,
proportionally decreases the number of filters in each layer. - If
`alpha` > 1.0, proportionally increases the number of filters in each
layer. - If `alpha` = 1, default number of filters from the paper are
used at each layer.
kernel: An integer or tuple/list of 2 integers, specifying the width and
height of the 2D convolution window. Can be a single integer to
specify the same value for all spatial dimensions.
strides: An integer or tuple/list of 2 integers, specifying the strides
of the convolution along the width and height. Can be a single integer
to specify the same value for all spatial dimensions. Specifying any
stride value != 1 is incompatible with specifying any `dilation_rate`
value != 1. # Input shape
4D tensor with shape: `(samples, channels, rows, cols)` if
data_format='channels_first'
or 4D tensor with shape: `(samples, rows, cols, channels)` if
data_format='channels_last'. # Output shape
4D tensor with shape: `(samples, filters, new_rows, new_cols)` if
data_format='channels_first'
or 4D tensor with shape: `(samples, new_rows, new_cols, filters)` if
data_format='channels_last'. `rows` and `cols` values might have
changed due to stride.
Returns:
Output tensor of block.
"""
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
filters = int(filters * alpha)
x = layers.Conv2D(
filters,
kernel,
padding='same',
use_bias=False,
strides=strides,
name='conv1')(inputs)
x = layers.BatchNormalization(axis=channel_axis, name='conv1_bn')(x)
return layers.ReLU(6., name='conv1_relu')(x)
def _depthwise_conv_block(inputs,
pointwise_conv_filters,
alpha,
depth_multiplier=1,
strides=(1, 1),
block_id=1):
"""Adds a depthwise convolution block.
A depthwise convolution block consists of a depthwise conv,
batch normalization, relu6, pointwise convolution,
batch normalization and relu6 activation.
Args:
inputs: Input tensor of shape `(rows, cols, channels)` (with
`channels_last` data format) or (channels, rows, cols) (with
`channels_first` data format).
pointwise_conv_filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the pointwise convolution).
alpha: controls the width of the network. - If `alpha` < 1.0,
proportionally decreases the number of filters in each layer. - If
`alpha` > 1.0, proportionally increases the number of filters in each
layer. - If `alpha` = 1, default number of filters from the paper are
used at each layer.
depth_multiplier: The number of depthwise convolution output channels
for each input channel. The total number of depthwise convolution
output channels will be equal to `filters_in * depth_multiplier`.
strides: An integer or tuple/list of 2 integers, specifying the strides
of the convolution along the width and height. Can be a single integer
to specify the same value for all spatial dimensions. Specifying any
stride value != 1 is incompatible with specifying any `dilation_rate`
value != 1.
block_id: Integer, a unique identification designating the block number.
# Input shape
4D tensor with shape: `(batch, channels, rows, cols)` if
data_format='channels_first'
or 4D tensor with shape: `(batch, rows, cols, channels)` if
data_format='channels_last'. # Output shape
4D tensor with shape: `(batch, filters, new_rows, new_cols)` if
data_format='channels_first'
or 4D tensor with shape: `(batch, new_rows, new_cols, filters)` if
data_format='channels_last'. `rows` and `cols` values might have
changed due to stride.
Returns:
Output tensor of block.
"""
channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
pointwise_conv_filters = int(pointwise_conv_filters * alpha)
if strides == (1, 1):
x = inputs
else:
x = layers.ZeroPadding2D(((0, 1), (0, 1)), name='conv_pad_%d' % block_id)(
inputs)
x = layers.DepthwiseConv2D((3, 3),
padding='same' if strides == (1, 1) else 'valid',
depth_multiplier=depth_multiplier,
strides=strides,
use_bias=False,
name='conv_dw_%d' % block_id)(
x)
x = layers.BatchNormalization(
axis=channel_axis, name='conv_dw_%d_bn' % block_id)(
x)
x = layers.ReLU(6., name='conv_dw_%d_relu' % block_id)(x)
x = layers.Conv2D(
pointwise_conv_filters, (1, 1),
padding='same',
use_bias=False,
strides=(1, 1),
name='conv_pw_%d' % block_id)(
x)
x = layers.BatchNormalization(
axis=channel_axis, name='conv_pw_%d_bn' % block_id)(
x)
return layers.ReLU(6., name='conv_pw_%d_relu' % block_id)(x)
@keras_export('keras.applications.mobilenet.preprocess_input')
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(x, data_format=data_format, mode='tf')
@keras_export('keras.applications.mobilenet.decode_predictions')
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode='',
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
| 19,722 | 42.157549 | 87 | py |
keras | keras-master/keras/applications/applications_test.py | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Integration tests for Keras applications."""
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from keras import backend
from keras.applications import densenet
from keras.applications import efficientnet
from keras.applications import inception_resnet_v2
from keras.applications import inception_v3
from keras.applications import mobilenet
from keras.applications import mobilenet_v2
from keras.applications import mobilenet_v3
from keras.applications import nasnet
from keras.applications import resnet
from keras.applications import resnet_v2
from keras.applications import vgg16
from keras.applications import vgg19
from keras.applications import xception
MODEL_LIST_NO_NASNET = [
(resnet.ResNet50, 2048),
(resnet.ResNet101, 2048),
(resnet.ResNet152, 2048),
(resnet_v2.ResNet50V2, 2048),
(resnet_v2.ResNet101V2, 2048),
(resnet_v2.ResNet152V2, 2048),
(vgg16.VGG16, 512),
(vgg19.VGG19, 512),
(xception.Xception, 2048),
(inception_v3.InceptionV3, 2048),
(inception_resnet_v2.InceptionResNetV2, 1536),
(mobilenet.MobileNet, 1024),
(mobilenet_v2.MobileNetV2, 1280),
(mobilenet_v3.MobileNetV3Small, 1024),
(mobilenet_v3.MobileNetV3Large, 1280),
(densenet.DenseNet121, 1024),
(densenet.DenseNet169, 1664),
(densenet.DenseNet201, 1920),
(efficientnet.EfficientNetB0, 1280),
(efficientnet.EfficientNetB1, 1280),
(efficientnet.EfficientNetB2, 1408),
(efficientnet.EfficientNetB3, 1536),
(efficientnet.EfficientNetB4, 1792),
(efficientnet.EfficientNetB5, 2048),
(efficientnet.EfficientNetB6, 2304),
(efficientnet.EfficientNetB7, 2560),
]
NASNET_LIST = [
(nasnet.NASNetMobile, 1056),
(nasnet.NASNetLarge, 4032),
]
MODEL_LIST = MODEL_LIST_NO_NASNET + NASNET_LIST
class ApplicationsTest(tf.test.TestCase, parameterized.TestCase):
def assertShapeEqual(self, shape1, shape2):
if len(shape1) != len(shape2):
raise AssertionError(
'Shapes are different rank: %s vs %s' % (shape1, shape2))
for v1, v2 in zip(shape1, shape2):
if v1 != v2:
raise AssertionError('Shapes differ: %s vs %s' % (shape1, shape2))
@parameterized.parameters(*MODEL_LIST)
def test_application_base(self, app, _):
# Can be instantiated with default arguments
model = app(weights=None)
# Can be serialized and deserialized
config = model.get_config()
reconstructed_model = model.__class__.from_config(config)
self.assertEqual(len(model.weights), len(reconstructed_model.weights))
backend.clear_session()
@parameterized.parameters(*MODEL_LIST)
def test_application_notop(self, app, last_dim):
if 'NASNet' or 'MobileNetV3' in app.__name__:
only_check_last_dim = True
else:
only_check_last_dim = False
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False))
if only_check_last_dim:
self.assertEqual(output_shape[-1], last_dim)
else:
self.assertShapeEqual(output_shape, (None, None, None, last_dim))
backend.clear_session()
@parameterized.parameters(MODEL_LIST)
def test_application_pooling(self, app, last_dim):
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False, pooling='avg'))
self.assertShapeEqual(output_shape, (None, last_dim))
@parameterized.parameters(*MODEL_LIST_NO_NASNET)
def test_application_variable_input_channels(self, app, last_dim):
if backend.image_data_format() == 'channels_first':
input_shape = (1, None, None)
else:
input_shape = (None, None, 1)
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False, input_shape=input_shape))
if 'MobileNetV3' in app.__name__:
self.assertShapeEqual(output_shape, (None, 1, 1, last_dim))
else:
self.assertShapeEqual(output_shape, (None, None, None, last_dim))
backend.clear_session()
if backend.image_data_format() == 'channels_first':
input_shape = (4, None, None)
else:
input_shape = (None, None, 4)
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False, input_shape=input_shape))
if 'MobileNetV3' in app.__name__:
self.assertShapeEqual(output_shape, (None, 1, 1, last_dim))
else:
self.assertShapeEqual(output_shape, (None, None, None, last_dim))
backend.clear_session()
def _get_output_shape(model_fn):
model = model_fn()
return model.output_shape
if __name__ == '__main__':
tf.test.main()
| 5,219 | 34.27027 | 80 | py |
keras | keras-master/keras/applications/efficientnet_weight_update_util.py | # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
r"""Utils for EfficientNet models for Keras.
Write weights from ckpt file as in original repo
(https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)
to h5 file for keras implementation of the models.
Usage:
# use checkpoint efficientnet-b0/model.ckpt (can be downloaded from
# https://storage.googleapis.com/cloud-tpu-checkpoints/
# efficientnet/ckptsaug/efficientnet-b0.tar.gz)
# to update weight without top layers, saving to efficientnetb0_notop.h5
python efficientnet_weight_update_util.py --model b0 --notop \
--ckpt efficientnet-b0/model.ckpt --o efficientnetb0_notop.h5
# use checkpoint noisy_student_efficientnet-b3/model.ckpt (providing
# improved result for b3, can be downloaded from
# https://storage.googleapis.com/cloud-tpu-checkpoints/
# efficientnet/noisystudent/noisy_student_efficientnet-b3.tar.gz)
# to update weight with top layers, saving to efficientnetb3_new.h5
python efficientnet_weight_update_util.py --model b3 --notop \
--ckpt noisy_student_efficientnet-b3/model.ckpt --o efficientnetb3_new.h5
"""
import tensorflow.compat.v2 as tf
import argparse
import warnings
from tensorflow.keras.applications import efficientnet
def write_ckpt_to_h5(path_h5, path_ckpt, keras_model, use_ema=True):
"""Map the weights in checkpoint file (tf) to h5 file (keras).
Args:
path_h5: str, path to output hdf5 file to write weights loaded from ckpt
files.
path_ckpt: str, path to the ckpt files (e.g. 'efficientnet-b0/model.ckpt')
that records efficientnet weights from original repo
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
keras_model: keras model, built from keras.applications efficientnet
functions (e.g. EfficientNetB0)
use_ema: Bool, whether to use ExponentialMovingAverage result or not
"""
model_name_keras = keras_model.name
model_name_tf = model_name_keras.replace('efficientnet', 'efficientnet-')
keras_weight_names = [w.name for w in keras_model.weights]
tf_weight_names = get_variable_names_from_ckpt(path_ckpt)
keras_blocks = get_keras_blocks(keras_weight_names)
tf_blocks = get_tf_blocks(tf_weight_names)
print('check variables match in each block')
for keras_block, tf_block in zip(keras_blocks, tf_blocks):
check_match(keras_block, tf_block, keras_weight_names, tf_weight_names,
model_name_tf)
print('{} and {} match.'.format(tf_block, keras_block))
block_mapping = {x[0]: x[1] for x in zip(keras_blocks, tf_blocks)}
changed_weights = 0
for w in keras_model.weights:
if 'block' in w.name:
# example: 'block1a_dwconv/depthwise_kernel:0' -> 'block1a'
keras_block = w.name.split('/')[0].split('_')[0]
tf_block = block_mapping[keras_block]
tf_name = keras_name_to_tf_name_block(
w.name,
keras_block=keras_block,
tf_block=tf_block,
use_ema=use_ema,
model_name_tf=model_name_tf)
elif any([x in w.name for x in ['stem', 'top', 'predictions', 'probs']]):
tf_name = keras_name_to_tf_name_stem_top(
w.name, use_ema=use_ema, model_name_tf=model_name_tf)
elif 'normalization' in w.name:
print('skipping variable {}: normalization is a layer'
'in keras implementation, but preprocessing in '
'TF implementation.'.format(w.name))
continue
else:
raise ValueError('{} failed to parse.'.format(w.name))
try:
w_tf = tf.train.load_variable(path_ckpt, tf_name)
if (w.value().numpy() != w_tf).any():
w.assign(w_tf)
changed_weights += 1
except ValueError as e:
if any([x in w.name for x in ['top', 'predictions', 'probs']]):
warnings.warn('Fail to load top layer variable {}'
'from {} because of {}.'.format(w.name, tf_name, e))
else:
raise ValueError('Fail to load {} from {}'.format(w.name, tf_name))
total_weights = len(keras_model.weights)
print('{}/{} weights updated'.format(changed_weights, total_weights))
keras_model.save_weights(path_h5)
def get_variable_names_from_ckpt(path_ckpt, use_ema=True):
"""Get list of tensor names from checkpoint.
Args:
path_ckpt: str, path to the ckpt files
use_ema: Bool, whether to use ExponentialMovingAverage result or not.
Returns:
List of variable names from checkpoint.
"""
v_all = tf.train.list_variables(path_ckpt)
# keep name only
v_name_all = [x[0] for x in v_all]
if use_ema:
v_name_all = [x for x in v_name_all if 'ExponentialMovingAverage' in x]
else:
v_name_all = [x for x in v_name_all if 'ExponentialMovingAverage' not in x]
# remove util variables used for RMSprop
v_name_all = [x for x in v_name_all if 'RMS' not in x]
return v_name_all
def get_tf_blocks(tf_weight_names):
"""Extract the block names from list of full weight names."""
# Example: 'efficientnet-b0/blocks_0/conv2d/kernel' -> 'blocks_0'
tf_blocks = {x.split('/')[1] for x in tf_weight_names if 'block' in x}
# sort by number
tf_blocks = sorted(tf_blocks, key=lambda x: int(x.split('_')[1]))
return tf_blocks
def get_keras_blocks(keras_weight_names):
"""Extract the block names from list of full weight names."""
# example: 'block1a_dwconv/depthwise_kernel:0' -> 'block1a'
keras_blocks = {x.split('_')[0] for x in keras_weight_names if 'block' in x}
return sorted(keras_blocks)
def keras_name_to_tf_name_stem_top(keras_name,
use_ema=True,
model_name_tf='efficientnet-b0'):
"""Mapping name in h5 to ckpt that is in stem or top (head).
we map name keras_name that points to a weight in h5 file
to a name of weight in ckpt file.
Args:
keras_name: str, the name of weight in the h5 file of keras implementation
use_ema: Bool, use the ExponentialMovingAverage resuolt in ckpt or not
model_name_tf: str, the name of model in ckpt.
Returns:
String for the name of weight as in ckpt file.
Raises:
KeyError: if we cannot parse the keras_name.
"""
if use_ema:
ema = '/ExponentialMovingAverage'
else:
ema = ''
stem_top_dict = {
'probs/bias:0': '{}/head/dense/bias{}',
'probs/kernel:0': '{}/head/dense/kernel{}',
'predictions/bias:0': '{}/head/dense/bias{}',
'predictions/kernel:0': '{}/head/dense/kernel{}',
'stem_conv/kernel:0': '{}/stem/conv2d/kernel{}',
'top_conv/kernel:0': '{}/head/conv2d/kernel{}',
}
for x in stem_top_dict:
stem_top_dict[x] = stem_top_dict[x].format(model_name_tf, ema)
# stem batch normalization
for bn_weights in ['beta', 'gamma', 'moving_mean', 'moving_variance']:
tf_name = '{}/stem/tpu_batch_normalization/{}{}'.format(
model_name_tf, bn_weights, ema)
stem_top_dict['stem_bn/{}:0'.format(bn_weights)] = tf_name
# top / head batch normalization
for bn_weights in ['beta', 'gamma', 'moving_mean', 'moving_variance']:
tf_name = '{}/head/tpu_batch_normalization/{}{}'.format(
model_name_tf, bn_weights, ema)
stem_top_dict['top_bn/{}:0'.format(bn_weights)] = tf_name
if keras_name in stem_top_dict:
return stem_top_dict[keras_name]
raise KeyError('{} from h5 file cannot be parsed'.format(keras_name))
def keras_name_to_tf_name_block(keras_name,
keras_block='block1a',
tf_block='blocks_0',
use_ema=True,
model_name_tf='efficientnet-b0'):
"""Mapping name in h5 to ckpt that belongs to a block.
we map name keras_name that points to a weight in h5 file
to a name of weight in ckpt file.
Args:
keras_name: str, the name of weight in the h5 file of keras implementation
keras_block: str, the block name for keras implementation (e.g. 'block1a')
tf_block: str, the block name for tf implementation (e.g. 'blocks_0')
use_ema: Bool, use the ExponentialMovingAverage resuolt in ckpt or not
model_name_tf: str, the name of model in ckpt.
Returns:
String for the name of weight as in ckpt file.
Raises:
ValueError if keras_block does not show up in keras_name
"""
if keras_block not in keras_name:
raise ValueError('block name {} not found in {}'.format(
keras_block, keras_name))
# all blocks in the first group will not have expand conv and bn
is_first_blocks = (keras_block[5] == '1')
tf_name = [model_name_tf, tf_block]
# depthwide conv
if 'dwconv' in keras_name:
tf_name.append('depthwise_conv2d')
tf_name.append('depthwise_kernel')
# conv layers
if is_first_blocks:
# first blocks only have one conv2d
if 'project_conv' in keras_name:
tf_name.append('conv2d')
tf_name.append('kernel')
else:
if 'project_conv' in keras_name:
tf_name.append('conv2d_1')
tf_name.append('kernel')
elif 'expand_conv' in keras_name:
tf_name.append('conv2d')
tf_name.append('kernel')
# squeeze expansion layers
if '_se_' in keras_name:
if 'reduce' in keras_name:
tf_name.append('se/conv2d')
elif 'expand' in keras_name:
tf_name.append('se/conv2d_1')
if 'kernel' in keras_name:
tf_name.append('kernel')
elif 'bias' in keras_name:
tf_name.append('bias')
# batch normalization layers
if 'bn' in keras_name:
if is_first_blocks:
if 'project' in keras_name:
tf_name.append('tpu_batch_normalization_1')
else:
tf_name.append('tpu_batch_normalization')
else:
if 'project' in keras_name:
tf_name.append('tpu_batch_normalization_2')
elif 'expand' in keras_name:
tf_name.append('tpu_batch_normalization')
else:
tf_name.append('tpu_batch_normalization_1')
for x in ['moving_mean', 'moving_variance', 'beta', 'gamma']:
if x in keras_name:
tf_name.append(x)
if use_ema:
tf_name.append('ExponentialMovingAverage')
return '/'.join(tf_name)
def check_match(keras_block, tf_block, keras_weight_names, tf_weight_names,
model_name_tf):
"""Check if the weights in h5 and ckpt match.
we match each name from keras_weight_names that is in keras_block
and check if there is 1-1 correspondence to names from tf_weight_names
that is in tf_block
Args:
keras_block: str, the block name for keras implementation (e.g. 'block1a')
tf_block: str, the block name for tf implementation (e.g. 'blocks_0')
keras_weight_names: list of str, weight names in keras implementation
tf_weight_names: list of str, weight names in tf implementation
model_name_tf: str, the name of model in ckpt.
"""
names_from_keras = set()
for x in keras_weight_names:
if keras_block in x:
y = keras_name_to_tf_name_block(
x,
keras_block=keras_block,
tf_block=tf_block,
model_name_tf=model_name_tf)
names_from_keras.add(y)
names_from_tf = set()
for x in tf_weight_names:
if tf_block in x and x.split('/')[1].endswith(tf_block):
names_from_tf.add(x)
names_missing = names_from_keras - names_from_tf
if names_missing:
raise ValueError('{} variables not found in checkpoint file: {}'.format(
len(names_missing), names_missing))
names_unused = names_from_tf - names_from_keras
if names_unused:
warnings.warn('{} variables from checkpoint file are not used: {}'.format(
len(names_unused), names_unused))
if __name__ == '__main__':
arg_to_model = {
'b0': efficientnet.EfficientNetB0,
'b1': efficientnet.EfficientNetB1,
'b2': efficientnet.EfficientNetB2,
'b3': efficientnet.EfficientNetB3,
'b4': efficientnet.EfficientNetB4,
'b5': efficientnet.EfficientNetB5,
'b6': efficientnet.EfficientNetB6,
'b7': efficientnet.EfficientNetB7
}
p = argparse.ArgumentParser(description='write weights from checkpoint to h5')
p.add_argument(
'--model',
required=True,
type=str,
help='name of efficient model',
choices=arg_to_model.keys())
p.add_argument(
'--notop',
action='store_true',
help='do not include top layers',
default=False)
p.add_argument('--ckpt', required=True, type=str, help='checkpoint path')
p.add_argument(
'--output', '-o', required=True, type=str, help='output (h5) file path')
args = p.parse_args()
include_top = not args.notop
model = arg_to_model[args.model](include_top=include_top)
write_ckpt_to_h5(args.output, args.ckpt, keras_model=model)
| 13,222 | 34.834688 | 80 | py |
ReCO | ReCO-master/test.py | # -*- coding: utf-8 -*-
"""
@Time : 2020/6/23 下午1:43
@FileName: test.py
@author: 王炳宁
@contact: wangbingning@sogou-inc.com
"""
import argparse
import torch
from model import Bert4ReCO
from utils import *
parser = argparse.ArgumentParser()
parser.add_argument("--model_type", type=str, default='bert-base-chinese')
parser.add_argument("--batch_size", type=int, default=16)
parser.add_argument(
"--fp16",
action="store_true",
default=True,
)
args = parser.parse_args()
model_type = args.model_type
batch_size = args.batch_size
test_data = load_file('data/test.{}.obj'.format(model_type.replace('/', '.')))
test_data = sorted(test_data, key=lambda x: len(x[0]))
model = Bert4ReCO(model_type)
model.load_state_dict(torch.load('checkpoint.{}.th'.format(model_type.replace('/', '.')), map_location='cpu'))
model.cuda()
if args.fp16:
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
[model] = amp.initialize([model], opt_level='O1', verbosity=0)
model.eval()
total = len(test_data)
right = 0
with torch.no_grad():
for i in tqdm(range(0, total, batch_size)):
seq = [x[0] for x in test_data[i:i + batch_size]]
labels = [x[1] for x in test_data[i:i + batch_size]]
seq = padding(seq, pads=0, max_len=512)
seq = torch.LongTensor(seq).cuda()
predictions = model([seq, None])
predictions = predictions.cpu()
right += predictions.eq(torch.LongTensor(labels)).sum().item()
acc = 100 * right / total
print('test acc is {}'.format(acc))
| 1,629 | 30.346154 | 110 | py |
ReCO | ReCO-master/model.py | # -*- coding: utf-8 -*-
"""
@Time : 2020/6/23 上午10:13
@FileName: model.py
@author: 王炳宁
@contact: wangbingning@sogou-inc.com
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel
class Bert4ReCO(nn.Module):
def __init__(self, model_type):
super().__init__()
self.encoder = AutoModel.from_pretrained(model_type)
self.n_hidden = self.encoder.config.hidden_size
self.prediction = nn.Linear(self.n_hidden, 1, bias=False)
def forward(self, inputs):
[seq, label] = inputs
hidden = self.encoder(seq)[0]
mask_idx = torch.eq(seq, 1) # 1 is the index in the seq we separate each candidates.
hidden = hidden.masked_select(mask_idx.unsqueeze(2).expand_as(hidden)).view(
-1, 3, self.n_hidden)
hidden = self.prediction(hidden).squeeze(-1)
if label is None:
return hidden.argmax(1)
return F.cross_entropy(hidden, label)
if __name__ == '__main__':
model = Bert4ReCO('voidful/albert_chinese_xxlarge')
| 1,073 | 28.833333 | 93 | py |
ReCO | ReCO-master/train.py | # -*- coding: utf-8 -*-
"""
@Time : 2019/11/21 下午7:14
@FileName: train.py
@author: 王炳宁
@contact: wangbingning@sogou-inc.com
"""
import argparse
import torch
from model import Bert4ReCO
from prepare_data import prepare_bert_data
from utils import *
import torch.distributed as dist
torch.manual_seed(100)
np.random.seed(100)
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=16)
parser.add_argument("--epoch", type=int, default=10)
parser.add_argument("--lr", type=float, default=2.0e-5)
parser.add_argument("--max_grad_norm", type=float, default=0.2)
parser.add_argument("--model_type", type=str, default="voidful/albert_chinese_base")
parser.add_argument(
"--fp16",
action="store_true",
default=True,
)
parser.add_argument("--local_rank", type=int, default=-1)
args = parser.parse_args()
model_type = args.model_type
local_rank = args.local_rank
if local_rank >= 0:
torch.distributed.init_process_group(backend='nccl',
init_method='env://')
torch.cuda.set_device(args.local_rank)
if local_rank in [-1, 0]:
prepare_bert_data(model_type)
if local_rank >= 0:
dist.barrier() # wait for the first gpu to load data
data = load_file('data/train.{}.obj'.format(model_type.replace('/', '.')))
valid_data = load_file('data/valid.{}.obj'.format(model_type.replace('/', '.')))
valid_data = sorted(valid_data, key=lambda x: len(x[0]))
batch_size = args.batch_size
model = Bert4ReCO(model_type).cuda()
optimizer = torch.optim.AdamW(model.parameters(),
weight_decay=0.01,
lr=args.lr)
if args.fp16:
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
model, optimizer = amp.initialize(model, optimizer, opt_level='O1', verbosity=0)
if local_rank >= 0:
try:
import apex
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use parallel training.")
model = apex.parallel.DistributedDataParallel(model)
def get_shuffle_data():
pool = {}
for one in data:
length = len(one[0]) // 5
if length not in pool:
pool[length] = []
pool[length].append(one)
for one in pool:
np.random.shuffle(pool[one])
length_lst = list(pool.keys())
np.random.shuffle(length_lst)
whole_data = [x for y in length_lst for x in pool[y]]
if local_rank >= 0:
remove_data_size = len(whole_data) % dist.get_world_size()
thread_data = [whole_data[x + args.local_rank] for x in
range(0, len(whole_data) - remove_data_size, dist.get_world_size())]
return thread_data
return whole_data
def iter_printer(total, epoch):
if local_rank >= 0:
if local_rank == 0:
return tqdm(range(0, total, batch_size), desc='epoch {}'.format(epoch))
else:
return range(0, total, batch_size)
else:
return tqdm(range(0, total, batch_size), desc='epoch {}'.format(epoch))
def train(epoch):
model.train()
train_data = get_shuffle_data()
total = len(train_data)
for i in iter_printer(total, epoch):
seq = [x[0] for x in train_data[i:i + batch_size]]
label = [x[1] for x in train_data[i:i + batch_size]]
seq = padding(seq, pads=0, max_len=512)
seq = torch.LongTensor(seq).cuda()
label = torch.LongTensor(label).cuda()
loss = model([seq, label])
if args.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
else:
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
optimizer.step()
optimizer.zero_grad()
def evaluation(epoch):
model.eval()
total = len(valid_data)
right = 0
with torch.no_grad():
for i in iter_printer(total, epoch):
seq = [x[0] for x in valid_data[i:i + batch_size]]
labels = [x[1] for x in valid_data[i:i + batch_size]]
seq = padding(seq, pads=0, max_len=512)
seq = torch.LongTensor(seq).cuda()
predictions = model([seq, None])
predictions = predictions.cpu()
right += predictions.eq(torch.LongTensor(labels)).sum().item()
acc = 100 * right / total
print('epoch {} eval acc is {}'.format(epoch, acc))
return acc
best_acc = 0.0
for epo in range(args.epoch):
train(epo)
if local_rank == -1 or local_rank == 0:
accuracy = evaluation(epo)
if accuracy > best_acc:
best_acc = accuracy
with open('checkpoint.{}.th'.format(model_type.replace('/', '.')), 'wb') as f:
state_dict = model.module.state_dict() if args.fp16 else model.state_dict()
torch.save(state_dict, f)
| 5,074 | 33.290541 | 114 | py |
ReCO | ReCO-master/BiDAF/inference.py | # -*- coding: utf-8 -*-
import argparse
import cPickle
import codecs
import torch
from utils import *
from preprocess import seg_data, transform_data_to_id
parser = argparse.ArgumentParser(description='inference procedure, note you should train the data at first')
parser.add_argument('--data', type=str,
default='data/ai_challenger_oqmrc_testa_20180816/ai_challenger_oqmrc_testa.json',
help='location of the test data')
parser.add_argument('--word_path', type=str, default='data/word2id.obj',
help='location of the word2id.obj')
parser.add_argument('--output', type=str, default='data/prediction.a.txt',
help='prediction path')
parser.add_argument('--model', type=str, default='model.pt',
help='model path')
parser.add_argument('--batch_size', type=int, default=32, metavar='N',
help='batch size')
parser.add_argument('--cuda', action='store_true',default=True,
help='use CUDA')
args = parser.parse_args()
with open(args.model, 'rb') as f:
model = torch.load(f)
if args.cuda:
model.cuda()
with open(args.word_path, 'rb') as f:
word2id = cPickle.load(f)
raw_data = seg_data(args.data)
transformed_data = transform_data_to_id(raw_data, word2id)
data = [x + [y[2]] for x, y in zip(transformed_data, raw_data)]
data = sorted(data, key=lambda x: len(x[1]))
print 'test data size {:d}'.format(len(data))
def inference():
model.eval()
predictions = []
with torch.no_grad():
for i in range(0, len(data), args.batch_size):
one = data[i:i + args.batch_size]
query, _ = padding([x[0] for x in one], max_len=50)
passage, _ = padding([x[1] for x in one], max_len=300)
answer = pad_answer([x[2] for x in one])
str_words = [x[-1] for x in one]
ids = [x[3] for x in one]
query, passage, answer = torch.LongTensor(query), torch.LongTensor(passage), torch.LongTensor(answer)
if args.cuda:
query = query.cuda()
passage = passage.cuda()
answer = answer.cuda()
output = model([query, passage, answer, False])
for q_id, prediction, candidates in zip(ids, output, str_words):
prediction_answer = u''.join(candidates[prediction])
predictions.append(str(q_id) + '\t' + prediction_answer)
outputs = u'\n'.join(predictions)
with codecs.open(args.output, 'w',encoding='utf-8') as f:
f.write(outputs)
print 'done!'
if __name__ == '__main__':
inference()
| 2,636 | 34.635135 | 113 | py |
ReCO | ReCO-master/BiDAF/MwAN.py | # -*- coding: utf-8 -*-
import torch
from torch import nn
from torch.nn import functional as F
class MwAN(nn.Module):
def __init__(self, vocab_size, embedding_size, encoder_size, drop_out=0.2):
super(MwAN, self).__init__()
self.drop_out=drop_out
self.embedding = nn.Embedding(vocab_size + 1, embedding_dim=embedding_size)
self.q_encoder = nn.GRU(input_size=embedding_size, hidden_size=encoder_size, batch_first=True,
bidirectional=True)
self.p_encoder = nn.GRU(input_size=embedding_size, hidden_size=encoder_size, batch_first=True,
bidirectional=True)
self.a_encoder = nn.GRU(input_size=embedding_size, hidden_size=embedding_size / 2, batch_first=True,
bidirectional=True)
self.a_attention = nn.Linear(embedding_size, 1, bias=False)
# Concat Attention
self.Wc1 = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.Wc2 = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vc = nn.Linear(encoder_size, 1, bias=False)
# Bilinear Attention
self.Wb = nn.Linear(2 * encoder_size, 2 * encoder_size, bias=False)
# Dot Attention :
self.Wd = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vd = nn.Linear(encoder_size, 1, bias=False)
# Minus Attention :
self.Wm = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vm = nn.Linear(encoder_size, 1, bias=False)
self.Ws = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vs = nn.Linear(encoder_size, 1, bias=False)
self.gru_agg = nn.GRU(12 * encoder_size, encoder_size, batch_first=True, bidirectional=True)
"""
prediction layer
"""
self.Wq = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vq = nn.Linear(encoder_size, 1, bias=False)
self.Wp1 = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.Wp2 = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vp = nn.Linear(encoder_size, 1, bias=False)
self.prediction = nn.Linear(2 * encoder_size, embedding_size, bias=False)
self.initiation()
def initiation(self):
initrange = 0.1
nn.init.uniform_(self.embedding.weight, -initrange, initrange)
for module in self.modules():
if isinstance(module, nn.Linear):
nn.init.xavier_uniform_(module.weight, 0.1)
def forward(self, inputs):
[query, passage, answer, is_train] = inputs
q_embedding = self.embedding(query)
p_embedding = self.embedding(passage)
a_embeddings = self.embedding(answer)
a_embedding, _ = self.a_encoder(a_embeddings.view(-1, a_embeddings.size(2), a_embeddings.size(3)))
a_score = F.softmax(self.a_attention(a_embedding), 1)
a_output = a_score.transpose(2, 1).bmm(a_embedding).squeeze()
a_embedding = a_output.view(a_embeddings.size(0), 3, -1)
hq, _ = self.q_encoder(p_embedding)
hq=F.dropout(hq,self.drop_out)
hp, _ = self.p_encoder(q_embedding)
hp=F.dropout(hp,self.drop_out)
_s1 = self.Wc1(hq).unsqueeze(1)
_s2 = self.Wc2(hp).unsqueeze(2)
sjt = self.vc(torch.tanh(_s1 + _s2)).squeeze()
ait = F.softmax(sjt, 2)
qtc = ait.bmm(hq)
_s1 = self.Wb(hq).transpose(2, 1)
sjt = hp.bmm(_s1)
ait = F.softmax(sjt, 2)
qtb = ait.bmm(hq)
_s1 = hq.unsqueeze(1)
_s2 = hp.unsqueeze(2)
sjt = self.vd(torch.tanh(self.Wd(_s1 * _s2))).squeeze()
ait = F.softmax(sjt, 2)
qtd = ait.bmm(hq)
sjt = self.vm(torch.tanh(self.Wm(_s1 - _s2))).squeeze()
ait = F.softmax(sjt, 2)
qtm = ait.bmm(hq)
_s1 = hp.unsqueeze(1)
_s2 = hp.unsqueeze(2)
sjt = self.vs(torch.tanh(self.Ws(_s1 * _s2))).squeeze()
ait = F.softmax(sjt, 2)
qts = ait.bmm(hp)
aggregation = torch.cat([hp, qts, qtc, qtd, qtb, qtm], 2)
aggregation_representation, _ = self.gru_agg(aggregation)
sj = self.vq(torch.tanh(self.Wq(hq))).transpose(2, 1)
rq = F.softmax(sj, 2).bmm(hq)
sj = F.softmax(self.vp(self.Wp1(aggregation_representation) + self.Wp2(rq)).transpose(2, 1), 2)
rp = sj.bmm(aggregation_representation)
encoder_output = F.dropout(F.leaky_relu(self.prediction(rp)),self.drop_out)
score = F.softmax(a_embedding.bmm(encoder_output.transpose(2, 1)).squeeze(), 1)
if not is_train:
return score.argmax(1)
loss = -torch.log(score[:, 0]).mean()
return loss
| 4,704 | 44.679612 | 108 | py |
ReCO | ReCO-master/BiDAF/train.py | # -*- coding: utf-8 -*-
import argparse
import cPickle
import torch
from MwAN import MwAN
from preprocess import process_data
from utils import *
parser = argparse.ArgumentParser(description='PyTorch implementation for Multiway Attention Networks for Modeling '
'Sentence Pairs of the AI-Challenges')
parser.add_argument('--data', type=str, default='data/',
help='location directory of the data corpus')
parser.add_argument('--threshold', type=int, default=5,
help='threshold count of the word')
parser.add_argument('--epoch', type=int, default=50,
help='training epochs')
parser.add_argument('--emsize', type=int, default=128,
help='size of word embeddings')
parser.add_argument('--nhid', type=int, default=128,
help='hidden size of the model')
parser.add_argument('--batch_size', type=int, default=32, metavar='N',
help='batch size')
parser.add_argument('--log_interval', type=int, default=300,
help='# of batches to see the training error')
parser.add_argument('--dropout', type=float, default=0.2,
help='dropout applied to layers (0 = no dropout)')
parser.add_argument('--cuda', action='store_true',
help='use CUDA')
parser.add_argument('--save', type=str, default='model.pt',
help='path to save the final model')
args = parser.parse_args()
# vocab_size = process_data(args.data, args.threshold)
vocab_size = 98745
model = MwAN(vocab_size=vocab_size, embedding_size=args.emsize, encoder_size=args.nhid, drop_out=args.dropout)
print('Model total parameters:', get_model_parameters(model))
if args.cuda:
model.cuda()
optimizer = torch.optim.Adamax(model.parameters())
with open(args.data + 'train.pickle', 'rb') as f:
train_data = cPickle.load(f)
with open(args.data + 'dev.pickle', 'rb') as f:
dev_data = cPickle.load(f)
dev_data = sorted(dev_data, key=lambda x: len(x[1]))
print('train data size {:d}, dev data size {:d}'.format(len(train_data), len(dev_data)))
def train(epoch):
model.train()
data = shuffle_data(train_data, 1)
total_loss = 0.0
for num, i in enumerate(range(0, len(data), args.batch_size)):
one = data[i:i + args.batch_size]
query, _ = padding([x[0] for x in one], max_len=50)
passage, _ = padding([x[1] for x in one], max_len=350)
answer = pad_answer([x[2] for x in one])
query, passage, answer = torch.LongTensor(query), torch.LongTensor(passage), torch.LongTensor(answer)
if args.cuda:
query = query.cuda()
passage = passage.cuda()
answer = answer.cuda()
optimizer.zero_grad()
loss = model([query, passage, answer, True])
loss.backward()
total_loss += loss.item()
optimizer.step()
if (num + 1) % args.log_interval == 0:
print '|------epoch {:d} train error is {:f} eclipse {:.2f}%------|'.format(epoch,
total_loss / args.log_interval,
i * 100.0 / len(data))
total_loss = 0
def test():
model.eval()
r, a = 0.0, 0.0
with torch.no_grad():
for i in range(0, len(dev_data), args.batch_size):
one = dev_data[i:i + args.batch_size]
query, _ = padding([x[0] for x in one], max_len=50)
passage, _ = padding([x[1] for x in one], max_len=500)
answer = pad_answer([x[2] for x in one])
query, passage, answer = torch.LongTensor(query), torch.LongTensor(passage), torch.LongTensor(answer)
if args.cuda:
query = query.cuda()
passage = passage.cuda()
answer = answer.cuda()
output = model([query, passage, answer, False])
r += torch.eq(output, 0).sum().item()
a += len(one)
return r * 100.0 / a
def main():
best = 0.0
for epoch in range(args.epoch):
train(epoch)
acc = test()
if acc > best:
best = acc
with open(args.save, 'wb') as f:
torch.save(model, f)
print 'epcoh {:d} dev acc is {:f}, best dev acc {:f}'.format(epoch, acc, best)
if __name__ == '__main__':
main()
| 4,466 | 38.184211 | 120 | py |
ReCO | ReCO-master/BiDAF/BiDAF.py | # -*- coding: utf-8 -*-
"""
@Time : 2019/11/21 下午4:42
@FileName: BiDAF.py
@author: 王炳宁
@contact: wangbingning@sogou-inc.com
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class BiDAF(nn.Module):
def __init__(self, vocab_size, embedding_size, encoder_size, drop_out=0.2):
super(BiDAF, self).__init__()
# 2. Word Embedding Layer
# initialize word embedding with GloVe
self.word_emb = nn.Embedding(vocab_size, embedding_size)
self.a_encoder = nn.GRU(input_size=embedding_size, hidden_size=embedding_size // 2, batch_first=True,
bidirectional=True)
self.a_attention = nn.Linear(embedding_size, 1, bias=False)
# 3. Contextual Embedding Layer
self.context_LSTM = nn.LSTM(input_size=embedding_size,
hidden_size=encoder_size,
bidirectional=True,
batch_first=True,
)
# 4. Attention Flow Layer
self.att_weight_c = nn.Linear(encoder_size * 2, 1)
self.att_weight_q = nn.Linear(encoder_size * 2, 1)
self.att_weight_cq = nn.Linear(encoder_size * 2, 1)
# 5. Modeling Layer
self.modeling_LSTM = nn.LSTM(input_size=encoder_size * 8,
hidden_size=encoder_size,
bidirectional=True,
batch_first=True,
num_layers=2
)
self.Wq = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vq = nn.Linear(encoder_size, 1, bias=False)
self.Wp1 = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.Wp2 = nn.Linear(2 * encoder_size, encoder_size, bias=False)
self.vp = nn.Linear(encoder_size, 1, bias=False)
self.prediction = nn.Linear(2 * encoder_size, embedding_size, bias=False)
self.drop_out = drop_out
def forward(self, inputs):
def att_flow_layer(c, q):
"""
:param c: (batch, c_len, hidden_size * 2)
:param q: (batch, q_len, hidden_size * 2)
:return: (batch, c_len, q_len)
"""
c_len = c.size(1)
q_len = q.size(1)
cq = []
for i in range(q_len):
# (batch, 1, hidden_size * 2)
qi = q.select(1, i).unsqueeze(1)
# (batch, c_len, 1)
ci = self.att_weight_cq(c * qi).squeeze()
cq.append(ci)
# (batch, c_len, q_len)
cq = torch.stack(cq, dim=-1)
# (batch, c_len, q_len)
s = self.att_weight_c(c).expand(-1, -1, q_len) + \
self.att_weight_q(q).permute(0, 2, 1).expand(-1, c_len, -1) + \
cq
# (batch, c_len, q_len)
a = F.softmax(s, dim=2)
# (batch, c_len, q_len) * (batch, q_len, hidden_size * 2) -> (batch, c_len, hidden_size * 2)
c2q_att = torch.bmm(a, q)
# (batch, 1, c_len)
b = F.softmax(torch.max(s, dim=2)[0], dim=1).unsqueeze(1)
# (batch, 1, c_len) * (batch, c_len, hidden_size * 2) -> (batch, hidden_size * 2)
q2c_att = torch.bmm(b, c).squeeze()
# (batch, c_len, hidden_size * 2) (tiled)
q2c_att = q2c_att.unsqueeze(1).expand(-1, c_len, -1)
# q2c_att = torch.stack([q2c_att] * c_len, dim=1)
# (batch, c_len, hidden_size * 8)
x = torch.cat([c, c2q_att, c * c2q_att, c * q2c_att], dim=-1)
return x
# 2. Word Embedding Layer
[query, passage, answer, is_train] = inputs
c_word = self.word_emb(passage)
q_word = self.word_emb(query)
a_embeddings = self.word_emb(answer)
a_embedding, _ = self.a_encoder(a_embeddings.view(-1, a_embeddings.size(2), a_embeddings.size(3)))
a_score = F.softmax(self.a_attention(a_embedding), 1)
a_output = a_score.transpose(2, 1).bmm(a_embedding).squeeze()
a_embedding = a_output.view(a_embeddings.size(0), 3, -1)
# Highway network
# 3. Contextual Embedding Layer
c, _ = self.context_LSTM(c_word)
q, _ = self.context_LSTM(q_word)
# 4. Attention Flow Layer
g = att_flow_layer(c, q)
# 5. Modeling Layer
m, _ = self.modeling_LSTM(g)
# 6. Output Layer
sj = F.softmax(self.vp(self.Wp1(m)).transpose(2, 1), 2)
rp = sj.bmm(m)
encoder_output = F.dropout(F.leaky_relu(self.prediction(rp)), self.drop_out)
score = F.softmax(a_embedding.bmm(encoder_output.transpose(2, 1)).squeeze(), 1)
if not is_train:
return score.argmax(1)
loss = -torch.log(score[:, 0]).mean()
return loss
| 4,904 | 40.923077 | 109 | py |
ReCO | ReCO-master/InHouseBert/model.py | # -*- coding: utf-8 -*-
"""
@Time : 2020/6/24 下午6:18
@FileName: model.py
@author: 王炳宁
@contact: wangbingning@sogou-inc.com
"""
import warnings
import apex
import torch
import torch.nn as nn
from apex.contrib.multihead_attn import SelfMultiheadAttn
from apex.mlp import MLP
from torch.nn import functional as F
warnings.filterwarnings("ignore")
layer_norm = apex.normalization.FusedLayerNorm
class TransformerEncoderLayer(nn.Module):
def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1):
super(TransformerEncoderLayer, self).__init__()
self.self_attn = SelfMultiheadAttn(d_model, nhead, dropout=dropout, impl='fast')
self.feed_forward = MLP([d_model, dim_feedforward, d_model])
self.d_model = d_model
self.norm1 = layer_norm(d_model)
self.norm2 = layer_norm(d_model)
self.activation = F.gelu
def __setstate__(self, state):
if 'activation' not in state:
state['activation'] = F.relu
super(TransformerEncoderLayer, self).__setstate__(state)
def forward(self, src, src_mask=None, src_key_padding_mask=None):
# type: (Tensor, Optional[Tensor], Optional[Tensor]) -> Tensor
src = self.norm2(src)
src2 = self.self_attn(src, src, src, attn_mask=src_mask,
key_padding_mask=src_key_padding_mask, is_training=self.training)[0]
src = src + src2
src = self.norm1(src)
src2 = self.feed_forward(src.view(-1, self.d_model)).view(src.size())
src = src + src2
return src
class SelfAttention(nn.Module):
def __init__(self, n_hidden, n_layer, n_head=6):
super().__init__()
self.att = nn.ModuleList()
for l in range(n_layer):
en = TransformerEncoderLayer(n_hidden, n_head, n_hidden * 4)
self.att.append(en)
self.output_ln = layer_norm(n_hidden)
def forward(self, representations):
representations = representations.transpose(0, 1).contiguous()
for one in self.att:
representations = one(representations)
return self.output_ln(representations.transpose(0, 1))
class BERTLSTM(nn.Module):
def __init__(self, vocab_size, n_embedding, n_hidden, n_layer, n_head):
super().__init__()
vocabulary_size = (2 + vocab_size // 8) * 8
self.word_embedding = nn.Embedding(vocabulary_size, embedding_dim=n_embedding)
self.encoder = nn.LSTM(input_size=n_embedding, hidden_size=n_hidden // 2, bidirectional=True, batch_first=True)
self.n_embedding = n_embedding
self.n_hidden = n_hidden
self.attention = SelfAttention(n_hidden, n_layer, n_head=n_head)
self.output = nn.Sequential(nn.Linear(n_hidden, n_embedding),
nn.LeakyReLU(inplace=True),
apex.normalization.FusedLayerNorm(n_embedding))
self.trans = nn.Linear(n_embedding, vocabulary_size, bias=False)
self.word_embedding.weight = self.trans.weight
def inference(self, seq):
word_embedding = self.word_embedding(seq)
encoder_representations, _ = self.encoder(word_embedding)
encoder_representations = self.attention(encoder_representations)
return encoder_representations
class BERT(BERTLSTM):
def __init__(self, vocab_size, n_embedding, n_hidden, n_layer, n_head):
super().__init__(vocab_size, n_embedding, n_hidden, n_layer, n_head)
del self.trans
del self.output
self.prediction = nn.Sequential(
nn.Linear(self.n_hidden, self.n_hidden // 2),
nn.GELU(),
nn.Linear(self.n_hidden // 2, 1, bias=False),
)
def forward(self, inputs):
[seq, label] = inputs
hidden = self.inference(seq)
mask_idx = torch.eq(seq, 1) # 1 is the index in the seq we separate each candidates.
hidden = hidden.masked_select(mask_idx.unsqueeze(2).expand_as(hidden)).view(
-1, 3, self.n_hidden)
hidden = self.prediction(hidden).squeeze(-1)
if label is None:
return hidden.argmax(1)
return F.cross_entropy(hidden, label)
| 4,192 | 37.118182 | 119 | py |
ReCO | ReCO-master/InHouseBert/train.py | # -*- coding: utf-8 -*-
"""
@Time : 2020/6/24 下午6:16
@FileName: train.py
@author: 王炳宁
@contact: wangbingning@sogou-inc.com
"""
import argparse
import sys
sys.path.append("../..")
sys.path.append("..")
from tasks.ReCO.model import BERT
from utils import *
import torch.distributed as dist
torch.manual_seed(100)
np.random.seed(200)
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=16)
parser.add_argument("--epoch", type=int, default=10)
parser.add_argument("--lr", type=float, default=4.0e-5)
parser.add_argument("--max_grad_norm", type=float, default=0.2)
parser.add_argument("--model_type", type=str, default="bert-base-chinese-new")
parser.add_argument(
"--fp16",
action="store_true",
default=True,
)
parser.add_argument("--local_rank", type=int, default=-1)
args = parser.parse_args()
model_type = args.model_type
local_rank = args.local_rank
if local_rank >= 0:
torch.distributed.init_process_group(backend='nccl',
init_method='env://')
torch.cuda.set_device(args.local_rank)
data = load_file('data/train.{}.obj'.format(model_type.replace('/', '.')))
valid_data = load_file('data/valid.{}.obj'.format(model_type.replace('/', '.')))
valid_data = sorted(valid_data, key=lambda x: len(x[0]))
batch_size = args.batch_size
n_embedding = 128
n_hidden = 768
n_layer = 12
n_head = 12
vocab_size = 50000
model = BERT(vocab_size, n_embedding, n_hidden, n_layer, n_head)
state_dict = load_file('model.bert.base.th')
for name, para in model.named_parameters():
if name not in state_dict:
print('{} not load'.format(name))
continue
para.data = torch.FloatTensor(state_dict[name])
model.cuda()
optimizer = torch.optim.AdamW(model.parameters(),
weight_decay=0.01,
lr=args.lr)
if args.fp16:
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
model, optimizer = amp.initialize(model, optimizer, opt_level='O2', verbosity=0)
if local_rank >= 0:
try:
import apex
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use parallel training.")
model = apex.parallel.DistributedDataParallel(model)
def get_shuffle_data():
pool = {}
for one in data:
length = len(one[0]) // 5
if length not in pool:
pool[length] = []
pool[length].append(one)
for one in pool:
np.random.shuffle(pool[one])
length_lst = list(pool.keys())
np.random.shuffle(length_lst)
whole_data = [x for y in length_lst for x in pool[y]]
if local_rank >= 0:
remove_data_size = len(whole_data) % dist.get_world_size()
thread_data = [whole_data[x + args.local_rank] for x in
range(0, len(whole_data) - remove_data_size, dist.get_world_size())]
return thread_data
return whole_data
def iter_printer(total, epoch):
if local_rank >= 0:
if local_rank == 0:
return tqdm(range(0, total, batch_size), desc='epoch {}'.format(epoch))
else:
return range(0, total, batch_size)
else:
return tqdm(range(0, total, batch_size), desc='epoch {}'.format(epoch))
def train(epoch):
model.train()
train_data = get_shuffle_data()
total = len(train_data)
for i in iter_printer(total, epoch):
seq = [x[0] for x in train_data[i:i + batch_size]]
label = [x[1] for x in train_data[i:i + batch_size]]
seq, _ = padding(seq, pads=0, max_len=512)
seq = torch.LongTensor(seq).cuda()
label = torch.LongTensor(label).cuda()
loss = model([seq, label])
if args.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
else:
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
optimizer.step()
optimizer.zero_grad()
def evaluation(epoch):
model.eval()
total = len(valid_data)
right = 0
with torch.no_grad():
for i in iter_printer(total, epoch):
seq = [x[0] for x in valid_data[i:i + batch_size]]
labels = [x[1] for x in valid_data[i:i + batch_size]]
seq, _ = padding(seq, pads=0, max_len=512)
seq = torch.LongTensor(seq).cuda()
predictions = model([seq, None])
predictions = predictions.cpu()
right += predictions.eq(torch.LongTensor(labels)).sum().item()
acc = 100 * right / total
print('epoch {} eval acc is {}'.format(epoch, acc))
return acc
best_acc = 0.0
for epo in range(args.epoch):
train(epo)
if local_rank == -1 or local_rank == 0:
accuracy = evaluation(epo)
if accuracy > best_acc:
best_acc = accuracy
with open('checkpoint.{}.th'.format(model_type.replace('/', '.')), 'wb') as f:
state_dict = model.module.state_dict() if args.fp16 else model.state_dict()
torch.save(state_dict, f)
| 5,292 | 32.713376 | 114 | py |
mining-legal-arguments | mining-legal-arguments-main/evaluate.py | #!/usr/bin/env python
# coding: utf-8
from collections import Counter
from prettytable import PrettyTable
import os
from transformers import AutoTokenizer
import torch
from torch.utils.data import Dataset
import pandas as pd
from datasets import load_dataset, load_metric
import csv
from ast import literal_eval
import numpy as np
import torch
import torch.nn as nn
import transformers
from datasets import load_dataset, load_metric
import logging
import dataclasses
from torch.utils.data.dataloader import DataLoader
from transformers.training_args import is_torch_tpu_available
from transformers.trainer_pt_utils import get_tpu_sampler
from transformers.data.data_collator import DataCollator, InputDataClass
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data.sampler import RandomSampler, SequentialSampler
from typing import List, Union, Dict
from transformers import DataCollatorForTokenClassification
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.file_utils import PaddingStrategy
from typing import Optional, Any
from sklearn.metrics import confusion_matrix
from multiTaskModel import MultitaskModel, StrIgnoreDevice, DataLoaderWithTaskname, MultitaskDataloader, MultitaskTrainer, MyDataCollatorForTokenClassification, compute_f1, compute_macro_f1, eval_f1
import argparse
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
id2label_argType = ['B-Distinguishing',
'B-Einschätzungsspielraum',
'B-Entscheidung des EGMR',
'B-Konsens der prozessualen Parteien',
'B-Overruling',
'B-Rechtsvergleichung',
'B-Sinn & Zweck Auslegung',
'B-Subsumtion',
'B-Systematische Auslegung',
'B-Verhältnismäßigkeitsprüfung – Angemessenheit',
'B-Verhältnismäßigkeitsprüfung – Geeignetheit',
'B-Verhältnismäßigkeitsprüfung – Legitimer Zweck',
'B-Verhältnismäßigkeitsprüfung – Rechtsgrundlage',
'B-Vorherige Rechtsprechung des EGMR',
'B-Wortlaut Auslegung',
'I-Distinguishing',
'I-Einschätzungsspielraum',
'I-Entscheidung des EGMR',
'I-Konsens der prozessualen Parteien',
'I-Overruling',
'I-Rechtsvergleichung',
'I-Sinn & Zweck Auslegung',
'I-Subsumtion',
'I-Systematische Auslegung',
'I-Verhältnismäßigkeitsprüfung – Angemessenheit',
'I-Verhältnismäßigkeitsprüfung – Geeignetheit',
'I-Verhältnismäßigkeitsprüfung – Legitimer Zweck',
'I-Verhältnismäßigkeitsprüfung – Rechtsgrundlage',
'I-Vorherige Rechtsprechung des EGMR',
'I-Wortlaut Auslegung',
'O']
label2id_argType = {}
for i, label in enumerate(id2label_argType):
label2id_argType[label] = i
id2label_agent = ['B-Beschwerdeführer',
'B-Dritte',
'B-EGMR',
'B-Kommission/Kammer',
'B-Staat',
'I-Beschwerdeführer',
'I-Dritte',
'I-EGMR',
'I-Kommission/Kammer',
'I-Staat',
'O']
label2id_agent = {}
for i, label in enumerate(id2label_agent):
label2id_agent[label] = i
def tokenize_and_align_labels_argType(examples, label_all_tokens=False):
"""
Tokenizes the input using the tokenizer and aligns the argument type labels to the subwords.
:param examples: input dataset
:param label_all_tokens: Whether to label all subwords of a token or only the first subword
:return: Tokenized input"""
tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples['labels']):
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label2id_argType[label[word_idx]])
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
label_ids.append(label2id_argType[label[word_idx]] if label_all_tokens else -100)
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
def tokenize_and_align_labels_agent(examples, label_all_tokens=False):
"""
Tokenizes the input using the tokenizer and aligns the agent labels to the subwords.
:param examples: input dataset
:param label_all_tokens: Whether to label all subwords of a token or only the first subword
:return: Tokenized input"""
tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples['labels']):
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label2id_agent[label[word_idx]])
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
label_ids.append(label2id_agent[label[word_idx]] if label_all_tokens else -100)
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
def get_subset_df(tokens, predictions, labels, predlabel=None, truelabel=None):
"""
Can filter model predictions by predicted and true label. If none provided, just postprocesses and returns output.
:param tokens: text tokens
:param predictions: predictions of the model
:param labels: true annotator labels
:param predlabel: optional, filters the model predictions if a label is provided, e.g. label2id_agent['I-EGMR']
:param truelabels: optional, filters the true annotator labels if a label is provided, e.g. label2id_agent['I-EGMR']
:return: DataFrame with the (filtered) predictions, labels and tokens."""
pred = []
label = []
for p,l in zip(predictions, labels):
p = np.array(p)
l = np.array(l)
ind = np.logical_and(p > -1, l > -1)
pred.append(p[ind].tolist())
label.append(l[ind].tolist())
if predlabel is None and truelabel is None:
return pd.DataFrame({'Predictions': pred, 'Labels': label, 'Tokens': tokens})
elif predlabel is None:
preds = []
labels = []
toks = []
for i,l in enumerate(label):
if truelabel in l[2:] and not truelabel in pred[i][2:]:
preds.append(pred[i])
labels.append(l)
toks.append(tokens[i])
return pd.DataFrame({'Predictions': preds, 'Labels': labels, 'Tokens': toks})
elif truelabel is None:
preds = []
labels = []
toks = []
for i,p in enumerate(pred):
if predlabel in p[2:] and not predlabel in label[i][2:]:
preds.append(p)
labels.append(llabel[i])
toks.append(tokens[i])
return pd.DataFrame({'Predictions': preds, 'Labels': labels, 'Tokens': toks})
else:
preds = []
labels = []
toks = []
for i,p in enumerate(pred):
if predlabel in p[2:] and truelabel in label[i][2:]:
preds.append(p)
labels.append(label[i])
toks.append(tokens[i])
return pd.DataFrame({'Predictions': preds, 'Labels': labels, 'Tokens': toks})
def save_predictions(tokens, predictions, labels, file):
"""
Saves the model predictions as csv after postprocessing them.
:param tokens: text tokens
:param predictions: predictions of the model
:param labels: true annotator labels
:param file: path of the output file
"""
df = get_subset_df(tokens, predictions, labels)
df.to_csv(file , sep='\t', encoding='utf-8', index=False)
if __name__ == '__main__':
# parse optional args
parser = argparse.ArgumentParser(description='Evaluate a MultiTask model and save its predictions')
parser.add_argument('--pathprefix', help='path to the project directory')
parser.add_argument('--models', nargs='*' ,help='paths to the models to evaluate')
parser.add_argument('--test_dir', help='path to the directory with the test files')
parser.add_argument('--val_dir', help='path to the directory with the dev files')
parser.add_argument('--output_dir', help='path to the output directory for saving the predictions')
parser.add_argument('--do_val', default=False, type=lambda x: (str(x).lower() == 'true'), help='whether to evaluate the validation/dev dataset')
parser.add_argument('--do_test', default=True, type=lambda x: (str(x).lower() == 'true'), help='whether to evaluate the test dataset')
args = parser.parse_args()
# project directory
pathprefix = '/ukp-storage-1/dfaber/'
pathprefix = '../Uni/masterthesis/'
pathprefix = ''
if args.pathprefix:
pathprefix = args.pathprefix
#test_dir = 'data/article_3/'
test_dir = 'data/test/'
if args.test_dir:
test_dir = args.test_dir
val_dir = 'data/val/'
if args.val_dir:
val_dir = args.val_dir
output_dir = 'predictions/'
if args.output_dir:
output_dir = args.output_dir
# load datasets
testfiles = [f for f in os.listdir(os.path.join(pathprefix, test_dir, 'argType/')) if f.endswith('.csv')]
valfiles = [f for f in os.listdir(os.path.join(pathprefix, val_dir, 'argType/')) if f.endswith('.csv')]
testfiles = testfiles[:2]
valfiles = valfiles[:2]
dataset_argType = load_dataset('csv', data_files={'test': [os.path.join(pathprefix, test_dir, 'argType/', file) for file in testfiles],
'validation': [os.path.join(pathprefix, val_dir, 'argType/', file) for file in valfiles]}, delimiter='\t')
dataset_actor = load_dataset('csv', data_files={'test': [os.path.join(pathprefix, test_dir, 'agent/', file) for file in testfiles],
'validation': [os.path.join(pathprefix, val_dir, 'agent/', file) for file in valfiles]}, delimiter='\t')
dataset_argType = dataset_argType.map(lambda x: {'tokens': literal_eval(x['tokens']), 'labels': literal_eval(x['labels'])})
dataset_actor = dataset_actor.map(lambda x: {'tokens': literal_eval(x['tokens']), 'labels': literal_eval(x['labels'])})
# models to evaluate
'''
models = ['/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-39820/bert', '/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-47784/bert',
'/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-55748/bert', '/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-71676/bert',
'/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-79640/bert', '/ukp-storage-1/dfaber/models/multitask/roberta-large-final/checkpoint-111482/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-final/checkpoint-143334/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-final/checkpoint-159260/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-13000/checkpoint-95556/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-13000/checkpoint-127408/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-13000/checkpoint-143334/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-13000/checkpoint-159260/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-111482/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-127408/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-143334/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-159260/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-final/checkpoint-143334/roberta']
'''
models = ['/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-39820/bert', '/ukp-storage-1/dfaber/models/multitask/legal-bert-final/checkpoint-47784/bert',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-final/checkpoint-111482/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-final/checkpoint-143334/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-13000/checkpoint-95556/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-13000/checkpoint-143334/roberta',
'/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-143334/roberta', '/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-159260/roberta']
models = ['/ukp-storage-1/dfaber/models/multitask/roberta-large-fp-15000/checkpoint-143334/roberta']
if args.models:
models = args.models
# Evaluate each model
for model in models:
print('\n\n\n\n********************Evaluating ', model, '********************\n\n\n\n')
# load model and tokenizer
multitask_model = torch.load(model)
tokenizer = AutoTokenizer.from_pretrained(multitask_model.encoder.name_or_path)
if model.split('/')[-1] == 'roberta':
tokenizer.add_prefix_space = True
if tokenizer.model_max_length > 1024:
tokenizer.model_max_length = 512
# preprocess data and create datasets
tokenized_dataset_argType = dataset_argType.map(tokenize_and_align_labels_argType, batched=True)
tokenized_dataset_actor = dataset_actor.map(tokenize_and_align_labels_agent, batched=True)
dataset_dict = {
"ArgType": tokenized_dataset_argType,
"Actor": tokenized_dataset_actor,
}
data_collator= MyDataCollatorForTokenClassification(tokenizer)
test_dataset = {
task_name: dataset["test"]
for task_name, dataset in dataset_dict.items()
}
val_dataset = {
task_name: dataset["validation"]
for task_name, dataset in dataset_dict.items()
}
# initialize Trainer
batch_size = 8
train_args = transformers.TrainingArguments(
'test_bert/legal_bert/',
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
)
trainer = MultitaskTrainer(
model=multitask_model,
args=train_args,
data_collator=data_collator,
eval_dataset=val_dataset,
tokenizer=tokenizer,
compute_metrics=eval_f1,
)
# evaluate validation data if specified
if args.do_val:
print('\n\n*****VALIDATION DATASET*****\n\n')
eval_dataloader_argType = DataLoaderWithTaskname(
'ArgType',
data_loader=DataLoader(
val_dataset['ArgType'],
batch_size=trainer.args.eval_batch_size,
collate_fn=trainer.data_collator.collate_batch,
),
)
preds_arg = trainer.prediction_loop(eval_dataloader_argType, description='Validation ArgType')
eval_dataloader_agent = DataLoaderWithTaskname(
'Actor',
data_loader=DataLoader(
val_dataset['Actor'],
batch_size=trainer.args.eval_batch_size,
collate_fn=trainer.data_collator.collate_batch,
),
)
preds_agent = trainer.prediction_loop(eval_dataloader_agent, description='Validation Agent')
# postprocess (remove -100 indices)
labels_argType_wordlevel = []
preds_argType_wordlevel = []
for l,p in zip(preds_arg.label_ids, np.argmax(preds_arg.predictions, axis=2)):
ind = np.logical_and(p > -1, l > -1)
labels_argType_wordlevel.append(l[ind])
preds_argType_wordlevel.append(p[ind])
print('ArgType:')
print('Macro F1: ', compute_macro_f1(gold=labels_argType_wordlevel, pred=preds_argType_wordlevel, id2label=id2label_argType))
# postprocess (remove -100 indices)
labels_agent_wordlevel = []
preds_agent_wordlevel = []
for l,p in zip(preds_agent.label_ids, np.argmax(preds_agent.predictions, axis=2)):
ind = np.logical_and(p > -1, l > -1)
labels_agent_wordlevel.append(l[ind])
preds_agent_wordlevel.append(p[ind])
print('Agent:')
print('Macro F1: ', compute_macro_f1(gold=labels_agent_wordlevel, pred=preds_agent_wordlevel, id2label=id2label_agent))
# save predictions
save_predictions(val_dataset['ArgType']['tokens'], np.argmax(preds_arg.predictions, axis=2), preds_arg.label_ids, os.path.join(pathprefix, output_dir, 'val_preds/', '_'.join(model.split('/')[-3:]) + '-argType.csv'))
save_predictions(val_dataset['Actor']['tokens'], np.argmax(preds_agent.predictions, axis=2), preds_agent.label_ids, os.path.join(pathprefix, output_dir, 'val_preds/', '_'.join(model.split('/')[-3:]) + '-agent.csv'))
# evaluate test data if specified
if args.do_test:
print('\n\n*****TEST DATASET*****\n\n')
eval_dataloader_argType = DataLoaderWithTaskname(
'ArgType',
data_loader=DataLoader(
test_dataset['ArgType'],
batch_size=trainer.args.eval_batch_size,
collate_fn=trainer.data_collator.collate_batch,
),
)
preds_arg = trainer.prediction_loop(eval_dataloader_argType, description='Validation ArgType')
eval_dataloader_agent = DataLoaderWithTaskname(
'Actor',
data_loader=DataLoader(
test_dataset['Actor'],
batch_size=trainer.args.eval_batch_size,
collate_fn=trainer.data_collator.collate_batch,
),
)
preds_agent = trainer.prediction_loop(eval_dataloader_agent, description='Validation Agent')
# postprocess (remove -100 indices)
labels_argType_wordlevel = []
preds_argType_wordlevel = []
for l,p in zip(preds_arg.label_ids, np.argmax(preds_arg.predictions, axis=2)):
ind = np.logical_and(p > -1, l > -1)
labels_argType_wordlevel.append(l[ind])
preds_argType_wordlevel.append(p[ind])
print('ArgType:')
print('Macro F1: ', compute_macro_f1(gold=labels_argType_wordlevel, pred=preds_argType_wordlevel, id2label=id2label_argType))
# postprocess (remove -100 indices)
labels_agent_wordlevel = []
preds_agent_wordlevel = []
for l,p in zip(preds_agent.label_ids, np.argmax(preds_agent.predictions, axis=2)):
ind = np.logical_and(p > -1, l > -1)
labels_agent_wordlevel.append(l[ind])
preds_agent_wordlevel.append(p[ind])
print('Agent:')
print('Macro F1: ', compute_macro_f1(gold=labels_agent_wordlevel, pred=preds_agent_wordlevel, id2label=id2label_agent))
# save predictions
save_predictions(test_dataset['ArgType']['tokens'], np.argmax(preds_arg.predictions, axis=2), preds_arg.label_ids, os.path.join(pathprefix, output_dir, '_'.join(model.split('/')[-3:]) + '-argType.csv'))
save_predictions(test_dataset['Actor']['tokens'], np.argmax(preds_agent.predictions, axis=2), preds_agent.label_ids, os.path.join(pathprefix, output_dir, '_'.join(model.split('/')[-3:]) + '-agent.csv'))
| 20,634 | 44.855556 | 227 | py |
mining-legal-arguments | mining-legal-arguments-main/multiTaskModel.py | #!/usr/bin/env python
# coding: utf-8
from collections import Counter
from prettytable import PrettyTable
import os
from transformers import AutoTokenizer
import torch
from torch.utils.data import Dataset
import pandas as pd
from datasets import load_dataset, load_metric
import csv
from ast import literal_eval
import numpy as np
import torch
import torch.nn as nn
import transformers
from datasets import load_dataset, load_metric
import logging
import dataclasses
from torch.utils.data.dataloader import DataLoader
from transformers.training_args import is_torch_tpu_available
from transformers.trainer_pt_utils import get_tpu_sampler
from transformers.data.data_collator import DataCollator, InputDataClass
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data.sampler import RandomSampler
from typing import List, Union, Dict
from transformers import DataCollatorForTokenClassification
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.file_utils import PaddingStrategy
from typing import Optional, Any
import argparse
from tabulate import tabulate
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
id2label_argType = ['B-Distinguishing',
'B-Einschätzungsspielraum',
'B-Entscheidung des EGMR',
'B-Konsens der prozessualen Parteien',
'B-Overruling',
'B-Rechtsvergleichung',
'B-Sinn & Zweck Auslegung',
'B-Subsumtion',
'B-Systematische Auslegung',
'B-Verhältnismäßigkeitsprüfung – Angemessenheit',
'B-Verhältnismäßigkeitsprüfung – Geeignetheit',
'B-Verhältnismäßigkeitsprüfung – Legitimer Zweck',
'B-Verhältnismäßigkeitsprüfung – Rechtsgrundlage',
'B-Vorherige Rechtsprechung des EGMR',
'B-Wortlaut Auslegung',
'I-Distinguishing',
'I-Einschätzungsspielraum',
'I-Entscheidung des EGMR',
'I-Konsens der prozessualen Parteien',
'I-Overruling',
'I-Rechtsvergleichung',
'I-Sinn & Zweck Auslegung',
'I-Subsumtion',
'I-Systematische Auslegung',
'I-Verhältnismäßigkeitsprüfung – Angemessenheit',
'I-Verhältnismäßigkeitsprüfung – Geeignetheit',
'I-Verhältnismäßigkeitsprüfung – Legitimer Zweck',
'I-Verhältnismäßigkeitsprüfung – Rechtsgrundlage',
'I-Vorherige Rechtsprechung des EGMR',
'I-Wortlaut Auslegung',
'O']
label2id_argType = {}
for i, label in enumerate(id2label_argType):
label2id_argType[label] = i
id2label_agent = ['B-Beschwerdeführer',
'B-Dritte',
'B-EGMR',
'B-Kommission/Kammer',
'B-Staat',
'I-Beschwerdeführer',
'I-Dritte',
'I-EGMR',
'I-Kommission/Kammer',
'I-Staat',
'O']
label2id_agent = {}
for i, label in enumerate(id2label_agent):
label2id_agent[label] = i
def tokenize_and_align_labels_argType(examples, label_all_tokens=False):
"""
Tokenizes the input using the tokenizer and aligns the argument type labels to the subwords.
:param examples: input dataset
:param label_all_tokens: Whether to label all subwords of a token or only the first subword
:return: Tokenized input"""
tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples['labels']):
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label2id_argType[label[word_idx]])
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
label_ids.append(label2id_argType[label[word_idx]] if label_all_tokens else -100)
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
def tokenize_and_align_labels_agent(examples, label_all_tokens=False):
"""
Tokenizes the input using the tokenizer and aligns the agent labels to the subwords.
:param examples: input dataset
:param label_all_tokens: Whether to label all subwords of a token or only the first subword
:return: Tokenized input"""
tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples['labels']):
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label2id_agent[label[word_idx]])
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
else:
label_ids.append(label2id_agent[label[word_idx]] if label_all_tokens else -100)
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
class MultitaskModel(transformers.PreTrainedModel):
def __init__(self, encoder, taskmodels_dict):
"""
Setting MultitaskModel up as a PretrainedModel allows us
to take better advantage of Trainer features
"""
super().__init__(transformers.PretrainedConfig())
self.encoder = encoder
self.taskmodels_dict = nn.ModuleDict(taskmodels_dict)
@classmethod
def create(cls, model_name, model_type_dict, model_config_dict):
"""
This creates a MultitaskModel using the model class and config objects
from single-task models.
We do this by creating each single-task model, and having them share
the same encoder transformer.
"""
shared_encoder = None
taskmodels_dict = {}
for task_name, model_type in model_type_dict.items():
model = model_type.from_pretrained(
model_name,
config=model_config_dict[task_name],
)
if shared_encoder is None:
shared_encoder = getattr(model, cls.get_encoder_attr_name(model))
else:
setattr(model, cls.get_encoder_attr_name(model), shared_encoder)
taskmodels_dict[task_name] = model
return cls(encoder=shared_encoder, taskmodels_dict=taskmodels_dict)
@classmethod
def get_encoder_attr_name(cls, model):
"""
The encoder transformer is named differently in each model "architecture".
This method lets us get the name of the encoder attribute
"""
model_class_name = model.__class__.__name__
if model_class_name.startswith("Bert"):
return "bert"
elif model_class_name.startswith("Roberta"):
return "roberta"
elif model_class_name.startswith("Albert"):
return "albert"
elif model_class_name.startswith("DistilBert"):
return "distilbert"
else:
raise KeyError(f"Add support for new model {model_class_name}")
def forward(self, task_name, **kwargs):
return self.taskmodels_dict[task_name](**kwargs)
class StrIgnoreDevice(str):
"""
This is a hack. The Trainer is going call .to(device) on every input
value, but we need to pass in an additional `task_name` string.
This prevents it from throwing an error
"""
def to(self, device):
return self
class DataLoaderWithTaskname:
"""
Wrapper around a DataLoader to also yield a task name
"""
def __init__(self, task_name, data_loader):
self.task_name = task_name
self.data_loader = data_loader
self.batch_size = data_loader.batch_size
self.dataset = data_loader.dataset
def __len__(self):
return len(self.data_loader)
def __iter__(self):
for batch in self.data_loader:
batch["task_name"] = StrIgnoreDevice(self.task_name)
yield batch
class MultitaskDataloader:
"""
Data loader that combines and samples from multiple single-task
data loaders.
"""
def __init__(self, dataloader_dict):
self.dataloader_dict = dataloader_dict
self.num_batches_dict = {
task_name: len(dataloader)
for task_name, dataloader in self.dataloader_dict.items()
}
self.task_name_list = list(self.dataloader_dict)
self.dataset = [None] * sum(
len(dataloader.dataset)
for dataloader in self.dataloader_dict.values()
)
def __len__(self):
return sum(self.num_batches_dict.values())
def __iter__(self):
"""
For each batch, sample a task, and yield a batch from the respective
task Dataloader.
We use size-proportional sampling, but you could easily modify this
to sample from some-other distribution.
"""
task_choice_list = []
for i, task_name in enumerate(self.task_name_list):
task_choice_list += [i] * self.num_batches_dict[task_name]
task_choice_list = np.array(task_choice_list)
np.random.shuffle(task_choice_list)
dataloader_iter_dict = {
task_name: iter(dataloader)
for task_name, dataloader in self.dataloader_dict.items()
}
for task_choice in task_choice_list:
task_name = self.task_name_list[task_choice]
yield next(dataloader_iter_dict[task_name])
class MultitaskTrainer(transformers.Trainer):
def get_single_train_dataloader(self, task_name, train_dataset):
"""
Create a single-task data loader that also yields task names
"""
if self.train_dataset is None:
raise ValueError("Trainer: training requires a train_dataset.")
if is_torch_tpu_available():
train_sampler = get_tpu_sampler(train_dataset)
else:
train_sampler = (
RandomSampler(train_dataset)
if self.args.local_rank == -1
else DistributedSampler(train_dataset)
)
data_loader = DataLoaderWithTaskname(
task_name=task_name,
data_loader=DataLoader(
train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler,
collate_fn=self.data_collator.collate_batch,
),
)
if is_torch_tpu_available():
data_loader = pl.ParallelLoader(
data_loader, [self.args.device]
).per_device_loader(self.args.device)
return data_loader
def get_train_dataloader(self):
"""
Returns a MultitaskDataloader, which is not actually a Dataloader
but an iterable that returns a generator that samples from each
task Dataloader.
"""
return MultitaskDataloader({
task_name: self.get_single_train_dataloader(task_name, task_dataset)
for task_name, task_dataset in self.train_dataset.items()
})
def get_eval_dataloader(self, q):
"""
Returns a DataLoaderWithTaskname for the argument type task
for evaluation of it during the training.
"""
eval_dataloader_argType = DataLoaderWithTaskname(
'ArgType',
data_loader=DataLoader(
eval_dataset['ArgType'],
batch_size=trainer.args.eval_batch_size,
collate_fn=trainer.data_collator.collate_batch,
),
)
return eval_dataloader_argType
def save_model(self, output_dir: Optional[str] = None):
"""
Saving best-practices: if you use default names for the model,
you can reload it using from_pretrained().
Will only save from the world_master process (unless in TPUs).
"""
if is_torch_tpu_available():
self._save_tpu(output_dir)
elif self.is_world_process_zero():
self._save(output_dir)
def _save_tpu(self, output_dir: Optional[str] = None):
output_dir = output_dir if output_dir is not None else self.args.output_dir
logger.info("Saving model checkpoint to %s", output_dir)
if xm.is_master_ordinal():
os.makedirs(output_dir, exist_ok=True)
torch.save(self.args, os.path.join(output_dir, "training_args.bin"))
xm.rendezvous("saving_checkpoint")
torch.save(self.model, os.path.join(output_dir, self.model.encoder.base_model_prefix))
def _save(self, output_dir: Optional[str] = None):
output_dir = output_dir if output_dir is not None else self.args.output_dir
os.makedirs(output_dir, exist_ok=True)
logger.info("Saving model checkpoint to %s", output_dir)
# Low-Level workaround for MultiTaskModel
torch.save(self.model, os.path.join(output_dir, self.model.encoder.base_model_prefix))
# Good practice: save your training arguments together with the trained model
torch.save(self.args, os.path.join(output_dir, "training_args.bin"))
@dataclasses.dataclass
class MyDataCollatorForTokenClassification:
"""
Data collator that will dynamically pad the inputs received, as well as the labels.
Args:
tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
The tokenizer used for encoding the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.file_utils.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
max_length (:obj:`int`, `optional`):
Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (:obj:`int`, `optional`):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
7.5 (Volta).
label_pad_token_id (:obj:`int`, `optional`, defaults to -100):
The id to use when padding the labels (-100 will be automatically ignore by PyTorch loss functions).
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
label_pad_token_id: int = -100
# call not used?
def __call__(self, features):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature[label_name] for feature in features] if label_name in features[0].keys() else None
batch = self.tokenizer.pad(
features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
# Conversion to tensors will fail if we have labels as they are not of the same length yet.
return_tensors="pt" if labels is None else None,
)
if labels is None:
return batch
sequence_length = torch.tensor(batch["input_ids"]).shape[1]
padding_side = self.tokenizer.padding_side
if padding_side == "right":
batch["labels"] = [label + [self.label_pad_token_id] * (sequence_length - len(label)) for label in labels]
else:
batch["labels"] = [[self.label_pad_token_id] * (sequence_length - len(label)) + label for label in labels]
batch = {k: torch.tensor(v, dtype=torch.int64) for k, v in batch.items()}
return batch
def collate_batch(self, features, pad_to_multiple_of: Optional[int] = None):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature[label_name] for feature in features] if label_name in features[0].keys() else None
batch = self.tokenizer.pad(
features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
# Conversion to tensors will fail if we have labels as they are not of the same length yet.
return_tensors="pt" if labels is None else None,
)
if labels is None:
return batch
del batch['tokens']
sequence_length = torch.tensor(batch["input_ids"]).shape[1]
padding_side = self.tokenizer.padding_side
if padding_side == "right":
batch["labels"] = [label + [self.label_pad_token_id] * (sequence_length - len(label)) for label in labels]
else:
batch["labels"] = [[self.label_pad_token_id] * (sequence_length - len(label)) + label for label in labels]
batch = {k: torch.tensor(v, dtype=torch.int64) for k, v in batch.items()}
return batch
def compute_f1(label, gold, pred):
"""
Computes the F1 Score for a single class.
:param labal: the class to compute the score for
:param gold: the gold standard
:param pred: the model predictions
:return: the F1 score for the label"""
tp = 0
fp = 0
fn = 0
for i, sent in enumerate(pred):
for j, tag in enumerate(sent):
# check for relevant label to compute F1
if tag == label:
# if relevant and equals gold -> true positive
if tag == gold[i][j]:
tp += 1
# if it differs from gold -> false positive
else:
fp += 1
# we have a negative, so check if it's a false negative
else:
if gold[i][j] == label:
fn += 1
# use epsilon to avoid division by zero
precision = tp / (tp + fp + 1e-10)
recall = tp / (tp + fn + 1e-10)
f1 = 2 * precision * recall / (precision + recall + 1e-10)
return f1
def compute_macro_f1(gold, pred, id2label):
"""
Computes the Macro F1 Score over all classes.
:param gold: the gold standard
:param pred: the model predictions
:param id2label: the mapping list for the current labels
:return: the Macro F1 score"""
f1s = [(tag, compute_f1(tag, gold, pred)) for tag in range(len(id2label))]
all_f1s = [(id2label[idx], score) for idx, score in f1s]
df = pd.DataFrame(all_f1s, columns=['Label', 'F1'])
df['F1'] = np.around(df['F1'], decimals=4)
print(tabulate(df, headers='keys', tablefmt='pretty', showindex=False))
f1_scores = [f1[1] for f1 in f1s]
macro_f1 = np.sum(f1_scores) / len(f1_scores)
#print('Macro F1: ', macro_f1)
return macro_f1
def eval_f1(evalpred):
"""
Computes the Macro F1 Score over all argument type classes during train evaluation.
:param evalpred: evalpred from the trainer
:return: the Macro F1 score"""
pred = []
gold = []
for p,l in zip(np.argmax(evalpred.predictions, axis=2), evalpred.label_ids):
ind = np.logical_and(p > -1, l > -1)
pred.append(p[ind])
gold.append(l[ind])
f1s = [(tag, compute_f1(tag, gold, pred)) for tag in range(len(id2label_argType))]
all_f1s = [(id2label_argType[idx], score) for idx, score in f1s]
#print('F1 for each Class: ', all_f1s)
f1_scores = [f1[1] for f1 in f1s]
macro_f1 = np.sum(f1_scores) / len(f1_scores)
return {"F1 ArgType": macro_f1}
if __name__ == '__main__':
# parse optional args
parser = argparse.ArgumentParser(description='Train a MultiTask model')
parser.add_argument('--pathprefix', help='path to the project directory')
parser.add_argument('--model', help='name of the model or path to the model')
parser.add_argument('--tokenizer', help='name of the model or path to the tokenizer')
parser.add_argument('--batch_size', type=int, help='batch size of the model')
parser.add_argument('--output_dir', help='path to the output directory')
args = parser.parse_args()
# path to working directory
pathprefix = '/ukp-storage-1/dfaber/'
#pathprefix = ''
if args.pathprefix:
pathprefix = args.pathprefix
# load datasets
trainfiles = [f for f in os.listdir(pathprefix + 'data/train/argType/') if f.endswith('.csv')]
valfiles = [f for f in os.listdir(pathprefix + 'data/val/argType/') if f.endswith('.csv')]
dataset_argType = load_dataset('csv', data_files={'train': [pathprefix + 'data/train/argType/' + file for file in trainfiles],
'validation': [pathprefix + 'data/val/argType/' + file for file in valfiles]}, delimiter='\t')
dataset_actor = load_dataset('csv', data_files={'train': [pathprefix + 'data/train/agent/' + file for file in trainfiles],
'validation': [pathprefix + 'data/val/agent/' + file for file in valfiles]}, delimiter='\t')
dataset_argType = dataset_argType.map(lambda x: {'tokens': literal_eval(x['tokens']), 'labels': literal_eval(x['labels'])})
dataset_actor = dataset_actor.map(lambda x: {'tokens': literal_eval(x['tokens']), 'labels': literal_eval(x['labels'])})
# select the model with the correspronding tokenizer
#model_name = "/ukp-storage-1/dfaber/models/court_bert/checkpoint-20000"
#tokenizer = AutoTokenizer.from_pretrained('/ukp-storage-1/dfaber/legal_tokenizer_bert', do_lower_case=False)
model_name = "/ukp-storage-1/dfaber/models/roberta-large-finetuned/checkpoint-15000"
#model_name = 'roberta-large'
tokenizer = AutoTokenizer.from_pretrained('roberta-large')
#model_name = 'nlpaueb/legal-bert-base-uncased'
#tokenizer = AutoTokenizer.from_pretrained('nlpaueb/legal-bert-base-uncased')
# use parsed args if provided
if args.model:
model_name = args.model
if args.tokenizer:
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer)
# need prefix space for already tokenized data
if 'roberta' in model_name:
tokenizer.add_prefix_space = True
if tokenizer.model_max_length > 1024:
tokenizer.model_max_length = 512
# tokenize and align labels
tokenized_dataset_argType = dataset_argType.map(tokenize_and_align_labels_argType, batched=True)
tokenized_dataset_actor = dataset_actor.map(tokenize_and_align_labels_agent, batched=True)
# create multitask dataset
dataset_dict = {
"ArgType": tokenized_dataset_argType,
"Actor": tokenized_dataset_actor,
}
# create multitask model
multitask_model = MultitaskModel.create(
model_name=model_name,
model_type_dict={
"ArgType": transformers.AutoModelForTokenClassification,
"Actor": transformers.AutoModelForTokenClassification,
},
model_config_dict={
"ArgType": transformers.AutoConfig.from_pretrained(model_name, num_labels=len(id2label_argType)),
"Actor": transformers.AutoConfig.from_pretrained(model_name, num_labels=len(id2label_agent)),
},
)
# create data collator
data_collator= MyDataCollatorForTokenClassification(tokenizer)
# split dataset into training and evaluation (dev) dataset
train_dataset = {
task_name: dataset["train"]
for task_name, dataset in dataset_dict.items()
}
eval_dataset = {
task_name: dataset["validation"]
for task_name, dataset in dataset_dict.items()
}
# set training parameter and train the model
output_dir = pathprefix + 'models/multitask/roberta-large-fp-15000'
batch_size = 4
# use parsed if provided
if args.output_dir:
output_dir = args.output_dir
if args.batch_size:
batch_size = args.batch_size
train_args = transformers.TrainingArguments(
output_dir,
evaluation_strategy = "epoch",
logging_steps=1592,
learning_rate=1e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=10,
weight_decay=0.01,
warmup_steps=1000,
save_steps=15926,
save_total_limit = 10,
logging_dir=pathprefix + 'logs',
)
trainer = MultitaskTrainer(
model=multitask_model,
args=train_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
tokenizer=tokenizer,
compute_metrics=eval_f1,
)
trainer.train()
| 25,615 | 37.232836 | 144 | py |
imbalanced-learn | imbalanced-learn-master/conftest.py | # This file is here so that when running from the root folder
# ./imblearn is added to sys.path by pytest.
# See https://docs.pytest.org/en/latest/pythonpath.html for more details.
# For example, this allows to build extensions in place and run pytest
# doc/modules/clustering.rst and use imblearn from the local folder
# rather than the one from site-packages.
import os
import pytest
def pytest_runtest_setup(item):
fname = item.fspath.strpath
if (
fname.endswith(os.path.join("keras", "_generator.py"))
or fname.endswith(os.path.join("tensorflow", "_generator.py"))
or fname.endswith("miscellaneous.rst")
):
try:
import tensorflow # noqa
except ImportError:
pytest.skip("The tensorflow package is not installed.")
| 798 | 32.291667 | 73 | py |
imbalanced-learn | imbalanced-learn-master/examples/applications/porto_seguro_keras_under_sampling.py | """
==========================================================
Porto Seguro: balancing samples in mini-batches with Keras
==========================================================
This example compares two strategies to train a neural-network on the Porto
Seguro Kaggle data set [1]_. The data set is imbalanced and we show that
balancing each mini-batch allows to improve performance and reduce the training
time.
References
----------
.. [1] https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/data
"""
# Authors: Guillaume Lemaitre <g.lemaitre58@gmail.com>
# License: MIT
print(__doc__)
###############################################################################
# Data loading
###############################################################################
from collections import Counter
import numpy as np
import pandas as pd
###############################################################################
# First, you should download the Porto Seguro data set from Kaggle. See the
# link in the introduction.
training_data = pd.read_csv("./input/train.csv")
testing_data = pd.read_csv("./input/test.csv")
y_train = training_data[["id", "target"]].set_index("id")
X_train = training_data.drop(["target"], axis=1).set_index("id")
X_test = testing_data.set_index("id")
###############################################################################
# The data set is imbalanced and it will have an effect on the fitting.
print(f"The data set is imbalanced: {Counter(y_train['target'])}")
###############################################################################
# Define the pre-processing pipeline
###############################################################################
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder, StandardScaler
def convert_float64(X):
return X.astype(np.float64)
###############################################################################
# We want to standard scale the numerical features while we want to one-hot
# encode the categorical features. In this regard, we make use of the
# :class:`~sklearn.compose.ColumnTransformer`.
numerical_columns = [
name for name in X_train.columns if "_calc_" in name and "_bin" not in name
]
numerical_pipeline = make_pipeline(
FunctionTransformer(func=convert_float64, validate=False), StandardScaler()
)
categorical_columns = [name for name in X_train.columns if "_cat" in name]
categorical_pipeline = make_pipeline(
SimpleImputer(missing_values=-1, strategy="most_frequent"),
OneHotEncoder(categories="auto"),
)
preprocessor = ColumnTransformer(
[
("numerical_preprocessing", numerical_pipeline, numerical_columns),
(
"categorical_preprocessing",
categorical_pipeline,
categorical_columns,
),
],
remainder="drop",
)
# Create an environment variable to avoid using the GPU. This can be changed.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
from tensorflow.keras.layers import Activation, BatchNormalization, Dense, Dropout
###############################################################################
# Create a neural-network
###############################################################################
from tensorflow.keras.models import Sequential
def make_model(n_features):
model = Sequential()
model.add(Dense(200, input_shape=(n_features,), kernel_initializer="glorot_normal"))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(100, kernel_initializer="glorot_normal", use_bias=False))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(Dropout(0.25))
model.add(Dense(50, kernel_initializer="glorot_normal", use_bias=False))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(Dropout(0.15))
model.add(Dense(25, kernel_initializer="glorot_normal", use_bias=False))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(Dropout(0.1))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
return model
###############################################################################
# We create a decorator to report the computation time
import time
from functools import wraps
def timeit(f):
@wraps(f)
def wrapper(*args, **kwds):
start_time = time.time()
result = f(*args, **kwds)
elapsed_time = time.time() - start_time
print(f"Elapsed computation time: {elapsed_time:.3f} secs")
return (elapsed_time, result)
return wrapper
###############################################################################
# The first model will be trained using the ``fit`` method and with imbalanced
# mini-batches.
import tensorflow
from sklearn.metrics import roc_auc_score
from sklearn.utils import parse_version
tf_version = parse_version(tensorflow.__version__)
@timeit
def fit_predict_imbalanced_model(X_train, y_train, X_test, y_test):
model = make_model(X_train.shape[1])
model.fit(X_train, y_train, epochs=2, verbose=1, batch_size=1000)
if tf_version < parse_version("2.6"):
# predict_proba was removed in tensorflow 2.6
predict_method = "predict_proba"
else:
predict_method = "predict"
y_pred = getattr(model, predict_method)(X_test, batch_size=1000)
return roc_auc_score(y_test, y_pred)
###############################################################################
# In the contrary, we will use imbalanced-learn to create a generator of
# mini-batches which will yield balanced mini-batches.
from imblearn.keras import BalancedBatchGenerator
@timeit
def fit_predict_balanced_model(X_train, y_train, X_test, y_test):
model = make_model(X_train.shape[1])
training_generator = BalancedBatchGenerator(
X_train, y_train, batch_size=1000, random_state=42
)
model.fit(training_generator, epochs=5, verbose=1)
y_pred = model.predict(X_test, batch_size=1000)
return roc_auc_score(y_test, y_pred)
###############################################################################
# Classification loop
###############################################################################
###############################################################################
# We will perform a 10-fold cross-validation and train the neural-network with
# the two different strategies previously presented.
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=10)
cv_results_imbalanced = []
cv_time_imbalanced = []
cv_results_balanced = []
cv_time_balanced = []
for train_idx, valid_idx in skf.split(X_train, y_train):
X_local_train = preprocessor.fit_transform(X_train.iloc[train_idx])
y_local_train = y_train.iloc[train_idx].values.ravel()
X_local_test = preprocessor.transform(X_train.iloc[valid_idx])
y_local_test = y_train.iloc[valid_idx].values.ravel()
elapsed_time, roc_auc = fit_predict_imbalanced_model(
X_local_train, y_local_train, X_local_test, y_local_test
)
cv_time_imbalanced.append(elapsed_time)
cv_results_imbalanced.append(roc_auc)
elapsed_time, roc_auc = fit_predict_balanced_model(
X_local_train, y_local_train, X_local_test, y_local_test
)
cv_time_balanced.append(elapsed_time)
cv_results_balanced.append(roc_auc)
###############################################################################
# Plot of the results and computation time
###############################################################################
df_results = pd.DataFrame(
{
"Balanced model": cv_results_balanced,
"Imbalanced model": cv_results_imbalanced,
}
)
df_results = df_results.unstack().reset_index()
df_time = pd.DataFrame(
{"Balanced model": cv_time_balanced, "Imbalanced model": cv_time_imbalanced}
)
df_time = df_time.unstack().reset_index()
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure()
sns.boxplot(y="level_0", x=0, data=df_time)
sns.despine(top=True, right=True, left=True)
plt.xlabel("time [s]")
plt.ylabel("")
plt.title("Computation time difference using a random under-sampling")
plt.figure()
sns.boxplot(y="level_0", x=0, data=df_results, whis=10.0)
sns.despine(top=True, right=True, left=True)
ax = plt.gca()
ax.xaxis.set_major_formatter(plt.FuncFormatter(lambda x, pos: "%i%%" % (100 * x)))
plt.xlabel("ROC-AUC")
plt.ylabel("")
plt.title("Difference in terms of ROC-AUC using a random under-sampling")
| 8,747 | 32.776062 | 88 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/_min_dependencies.py | """All minimum dependencies for imbalanced-learn."""
import argparse
NUMPY_MIN_VERSION = "1.17.3"
SCIPY_MIN_VERSION = "1.5.0"
PANDAS_MIN_VERSION = "1.0.5"
SKLEARN_MIN_VERSION = "1.0.2"
TENSORFLOW_MIN_VERSION = "2.4.3"
KERAS_MIN_VERSION = "2.4.3"
JOBLIB_MIN_VERSION = "1.1.1"
THREADPOOLCTL_MIN_VERSION = "2.0.0"
PYTEST_MIN_VERSION = "5.0.1"
# 'build' and 'install' is included to have structured metadata for CI.
# It will NOT be included in setup's extras_require
# The values are (version_spec, comma separated tags)
dependent_packages = {
"numpy": (NUMPY_MIN_VERSION, "install"),
"scipy": (SCIPY_MIN_VERSION, "install"),
"scikit-learn": (SKLEARN_MIN_VERSION, "install"),
"joblib": (JOBLIB_MIN_VERSION, "install"),
"threadpoolctl": (THREADPOOLCTL_MIN_VERSION, "install"),
"pandas": (PANDAS_MIN_VERSION, "optional, docs, examples, tests"),
"tensorflow": (TENSORFLOW_MIN_VERSION, "optional, docs, examples, tests"),
"keras": (KERAS_MIN_VERSION, "optional, docs, examples, tests"),
"matplotlib": ("3.1.2", "docs, examples"),
"seaborn": ("0.9.0", "docs, examples"),
"memory_profiler": ("0.57.0", "docs"),
"pytest": (PYTEST_MIN_VERSION, "tests"),
"pytest-cov": ("2.9.0", "tests"),
"flake8": ("3.8.2", "tests"),
"black": ("23.3.0", "tests"),
"mypy": ("1.3.0", "tests"),
"sphinx": ("6.0.0", "docs"),
"sphinx-gallery": ("0.13.0", "docs"),
"sphinx-copybutton": ("0.5.2", "docs"),
"numpydoc": ("1.5.0", "docs"),
"sphinxcontrib-bibtex": ("2.4.1", "docs"),
"pydata-sphinx-theme": ("0.13.3", "docs"),
}
# create inverse mapping for setuptools
tag_to_packages: dict = {
extra: [] for extra in ["install", "optional", "docs", "examples", "tests"]
}
for package, (min_version, extras) in dependent_packages.items():
for extra in extras.split(", "):
tag_to_packages[extra].append("{}>={}".format(package, min_version))
# Used by CI to get the min dependencies
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Get min dependencies for a package")
parser.add_argument("package", choices=dependent_packages)
args = parser.parse_args()
min_version = dependent_packages[args.package][0]
print(min_version)
| 2,240 | 36.35 | 86 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/__init__.py | """Toolbox for imbalanced dataset in machine learning.
``imbalanced-learn`` is a set of python methods to deal with imbalanced
datset in machine learning and pattern recognition.
Subpackages
-----------
combine
Module which provides methods based on over-sampling and under-sampling.
ensemble
Module which provides methods generating an ensemble of
under-sampled subsets.
exceptions
Module including custom warnings and error clases used across
imbalanced-learn.
keras
Module which provides custom generator, layers for deep learning using
keras.
metrics
Module which provides metrics to quantified the classification performance
with imbalanced dataset.
over_sampling
Module which provides methods to over-sample a dataset.
tensorflow
Module which provides custom generator, layers for deep learning using
tensorflow.
under-sampling
Module which provides methods to under-sample a dataset.
utils
Module including various utilities.
pipeline
Module which allowing to create pipeline with scikit-learn estimators.
"""
import importlib
import sys
import types
try:
# This variable is injected in the __builtins__ by the build
# process. It is used to enable importing subpackages of sklearn when
# the binaries are not built
# mypy error: Cannot determine type of '__SKLEARN_SETUP__'
__IMBLEARN_SETUP__ # type: ignore
except NameError:
__IMBLEARN_SETUP__ = False
if __IMBLEARN_SETUP__:
sys.stderr.write("Partial import of imblearn during the build process.\n")
# We are not importing the rest of scikit-learn during the build
# process, as it may not be compiled yet
else:
from . import (
combine,
ensemble,
exceptions,
metrics,
over_sampling,
pipeline,
tensorflow,
under_sampling,
utils,
)
from ._version import __version__
from .base import FunctionSampler
from .utils._show_versions import show_versions # noqa: F401
# FIXME: When we get Python 3.7 as minimal version, we will need to switch to
# the following solution:
# https://snarky.ca/lazy-importing-in-python-3-7/
class LazyLoader(types.ModuleType):
"""Lazily import a module, mainly to avoid pulling in large dependencies.
Adapted from TensorFlow:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/
python/util/lazy_loader.py
"""
def __init__(self, local_name, parent_module_globals, name, warning=None):
self._local_name = local_name
self._parent_module_globals = parent_module_globals
self._warning = warning
super(LazyLoader, self).__init__(name)
def _load(self):
"""Load the module and insert it into the parent's globals."""
# Import the target module and insert it into the parent's namespace
module = importlib.import_module(self.__name__)
self._parent_module_globals[self._local_name] = module
# Update this object's dict so that if someone keeps a reference to the
# LazyLoader, lookups are efficient (__getattr__ is only called on
# lookups that fail).
self.__dict__.update(module.__dict__)
return module
def __getattr__(self, item):
module = self._load()
return getattr(module, item)
def __dir__(self):
module = self._load()
return dir(module)
# delay the import of keras since we are going to import either tensorflow
# or keras
keras = LazyLoader("keras", globals(), "imblearn.keras")
__all__ = [
"combine",
"ensemble",
"exceptions",
"keras",
"metrics",
"over_sampling",
"tensorflow",
"under_sampling",
"utils",
"pipeline",
"FunctionSampler",
"__version__",
]
| 3,963 | 30.967742 | 83 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/keras/_generator.py | """Implement generators for ``keras`` which will balance the data."""
# This is a trick to avoid an error during tests collection with pytest. We
# avoid the error when importing the package raise the error at the moment of
# creating the instance.
# This is a trick to avoid an error during tests collection with pytest. We
# avoid the error when importing the package raise the error at the moment of
# creating the instance.
def import_keras():
"""Try to import keras from keras and tensorflow.
This is possible to import the sequence from keras or tensorflow.
"""
def import_from_keras():
try:
import keras # noqa
if hasattr(keras.utils, "Sequence"):
return (keras.utils.Sequence,), True
else:
return (keras.utils.data_utils.Sequence,), True
except ImportError:
return tuple(), False
def import_from_tensforflow():
try:
from tensorflow import keras
if hasattr(keras.utils, "Sequence"):
return (keras.utils.Sequence,), True
else:
return (keras.utils.data_utils.Sequence,), True
except ImportError:
return tuple(), False
ParentClassKeras, has_keras_k = import_from_keras()
ParentClassTensorflow, has_keras_tf = import_from_tensforflow()
has_keras = has_keras_k or has_keras_tf
if has_keras:
if has_keras_k:
ParentClass = ParentClassKeras
else:
ParentClass = ParentClassTensorflow
else:
ParentClass = (object,)
return ParentClass, has_keras
ParentClass, HAS_KERAS = import_keras()
from scipy.sparse import issparse # noqa
from sklearn.base import clone # noqa
from sklearn.utils import _safe_indexing # noqa
from sklearn.utils import check_random_state # noqa
from ..tensorflow import balanced_batch_generator as tf_bbg # noqa
from ..under_sampling import RandomUnderSampler # noqa
from ..utils import Substitution # noqa
from ..utils._docstring import _random_state_docstring # noqa
class BalancedBatchGenerator(*ParentClass): # type: ignore
"""Create balanced batches when training a keras model.
Create a keras ``Sequence`` which is given to ``fit``. The
sampler defines the sampling strategy used to balance the dataset ahead of
creating the batch. The sampler should have an attribute
``sample_indices_``.
.. versionadded:: 0.4
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Original imbalanced dataset.
y : ndarray of shape (n_samples,) or (n_samples, n_classes)
Associated targets.
sample_weight : ndarray of shape (n_samples,)
Sample weight.
sampler : sampler object, default=None
A sampler instance which has an attribute ``sample_indices_``.
By default, the sampler used is a
:class:`~imblearn.under_sampling.RandomUnderSampler`.
batch_size : int, default=32
Number of samples per gradient update.
keep_sparse : bool, default=False
Either or not to conserve or not the sparsity of the input (i.e. ``X``,
``y``, ``sample_weight``). By default, the returned batches will be
dense.
random_state : int, RandomState instance or None, default=None
Control the randomization of the algorithm:
- If int, ``random_state`` is the seed used by the random number
generator;
- If ``RandomState`` instance, random_state is the random number
generator;
- If ``None``, the random number generator is the ``RandomState``
instance used by ``np.random``.
Attributes
----------
sampler_ : sampler object
The sampler used to balance the dataset.
indices_ : ndarray of shape (n_samples, n_features)
The indices of the samples selected during sampling.
Examples
--------
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> from imblearn.datasets import make_imbalance
>>> class_dict = dict()
>>> class_dict[0] = 30; class_dict[1] = 50; class_dict[2] = 40
>>> X, y = make_imbalance(iris.data, iris.target, sampling_strategy=class_dict)
>>> import tensorflow
>>> y = tensorflow.keras.utils.to_categorical(y, 3)
>>> model = tensorflow.keras.models.Sequential()
>>> model.add(
... tensorflow.keras.layers.Dense(
... y.shape[1], input_dim=X.shape[1], activation='softmax'
... )
... )
>>> model.compile(optimizer='sgd', loss='categorical_crossentropy',
... metrics=['accuracy'])
>>> from imblearn.keras import BalancedBatchGenerator
>>> from imblearn.under_sampling import NearMiss
>>> training_generator = BalancedBatchGenerator(
... X, y, sampler=NearMiss(), batch_size=10, random_state=42)
>>> callback_history = model.fit(training_generator, epochs=10, verbose=0)
"""
# flag for keras sequence duck-typing
use_sequence_api = True
def __init__(
self,
X,
y,
*,
sample_weight=None,
sampler=None,
batch_size=32,
keep_sparse=False,
random_state=None,
):
if not HAS_KERAS:
raise ImportError("'No module named 'keras'")
self.X = X
self.y = y
self.sample_weight = sample_weight
self.sampler = sampler
self.batch_size = batch_size
self.keep_sparse = keep_sparse
self.random_state = random_state
self._sample()
def _sample(self):
random_state = check_random_state(self.random_state)
if self.sampler is None:
self.sampler_ = RandomUnderSampler(random_state=random_state)
else:
self.sampler_ = clone(self.sampler)
self.sampler_.fit_resample(self.X, self.y)
if not hasattr(self.sampler_, "sample_indices_"):
raise ValueError("'sampler' needs to have an attribute 'sample_indices_'.")
self.indices_ = self.sampler_.sample_indices_
# shuffle the indices since the sampler are packing them by class
random_state.shuffle(self.indices_)
def __len__(self):
return int(self.indices_.size // self.batch_size)
def __getitem__(self, index):
X_resampled = _safe_indexing(
self.X,
self.indices_[index * self.batch_size : (index + 1) * self.batch_size],
)
y_resampled = _safe_indexing(
self.y,
self.indices_[index * self.batch_size : (index + 1) * self.batch_size],
)
if issparse(X_resampled) and not self.keep_sparse:
X_resampled = X_resampled.toarray()
if self.sample_weight is not None:
sample_weight_resampled = _safe_indexing(
self.sample_weight,
self.indices_[index * self.batch_size : (index + 1) * self.batch_size],
)
if self.sample_weight is None:
return X_resampled, y_resampled
else:
return X_resampled, y_resampled, sample_weight_resampled
@Substitution(random_state=_random_state_docstring)
def balanced_batch_generator(
X,
y,
*,
sample_weight=None,
sampler=None,
batch_size=32,
keep_sparse=False,
random_state=None,
):
"""Create a balanced batch generator to train keras model.
Returns a generator --- as well as the number of step per epoch --- which
is given to ``fit``. The sampler defines the sampling strategy
used to balance the dataset ahead of creating the batch. The sampler should
have an attribute ``sample_indices_``.
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Original imbalanced dataset.
y : ndarray of shape (n_samples,) or (n_samples, n_classes)
Associated targets.
sample_weight : ndarray of shape (n_samples,), default=None
Sample weight.
sampler : sampler object, default=None
A sampler instance which has an attribute ``sample_indices_``.
By default, the sampler used is a
:class:`~imblearn.under_sampling.RandomUnderSampler`.
batch_size : int, default=32
Number of samples per gradient update.
keep_sparse : bool, default=False
Either or not to conserve or not the sparsity of the input (i.e. ``X``,
``y``, ``sample_weight``). By default, the returned batches will be
dense.
{random_state}
Returns
-------
generator : generator of tuple
Generate batch of data. The tuple generated are either (X_batch,
y_batch) or (X_batch, y_batch, sampler_weight_batch).
steps_per_epoch : int
The number of samples per epoch. Required by ``fit_generator`` in
keras.
Examples
--------
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> from imblearn.datasets import make_imbalance
>>> class_dict = dict()
>>> class_dict[0] = 30; class_dict[1] = 50; class_dict[2] = 40
>>> from imblearn.datasets import make_imbalance
>>> X, y = make_imbalance(X, y, sampling_strategy=class_dict)
>>> import tensorflow
>>> y = tensorflow.keras.utils.to_categorical(y, 3)
>>> model = tensorflow.keras.models.Sequential()
>>> model.add(
... tensorflow.keras.layers.Dense(
... y.shape[1], input_dim=X.shape[1], activation='softmax'
... )
... )
>>> model.compile(optimizer='sgd', loss='categorical_crossentropy',
... metrics=['accuracy'])
>>> from imblearn.keras import balanced_batch_generator
>>> from imblearn.under_sampling import NearMiss
>>> training_generator, steps_per_epoch = balanced_batch_generator(
... X, y, sampler=NearMiss(), batch_size=10, random_state=42)
>>> callback_history = model.fit(training_generator,
... steps_per_epoch=steps_per_epoch,
... epochs=10, verbose=0)
"""
return tf_bbg(
X=X,
y=y,
sample_weight=sample_weight,
sampler=sampler,
batch_size=batch_size,
keep_sparse=keep_sparse,
random_state=random_state,
)
| 10,276 | 33.719595 | 87 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/keras/__init__.py | """The :mod:`imblearn.keras` provides utilities to deal with imbalanced dataset
in keras."""
from ._generator import BalancedBatchGenerator, balanced_batch_generator
__all__ = ["BalancedBatchGenerator", "balanced_batch_generator"]
| 233 | 32.428571 | 79 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/keras/tests/test_generator.py | import numpy as np
import pytest
from scipy import sparse
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.preprocessing import LabelBinarizer
keras = pytest.importorskip("keras")
from keras.layers import Dense # noqa: E402
from keras.models import Sequential # noqa: E402
from imblearn.datasets import make_imbalance # noqa: E402
from imblearn.keras import (
BalancedBatchGenerator, # noqa: E402
balanced_batch_generator, # noqa: E402
)
from imblearn.over_sampling import RandomOverSampler # noqa: E402
from imblearn.under_sampling import (
ClusterCentroids, # noqa: E402
NearMiss, # noqa: E402
)
3
@pytest.fixture
def data():
iris = load_iris()
X, y = make_imbalance(
iris.data, iris.target, sampling_strategy={0: 30, 1: 50, 2: 40}
)
y = LabelBinarizer().fit_transform(y)
return X, y
def _build_keras_model(n_classes, n_features):
model = Sequential()
model.add(Dense(n_classes, input_dim=n_features, activation="softmax"))
model.compile(
optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
def test_balanced_batch_generator_class_no_return_indices(data):
with pytest.raises(ValueError, match="needs to have an attribute"):
BalancedBatchGenerator(
*data, sampler=ClusterCentroids(estimator=KMeans(n_init=1)), batch_size=10
)
@pytest.mark.filterwarnings("ignore:`wait_time` is not used") # keras 2.2.4
@pytest.mark.parametrize(
"sampler, sample_weight",
[
(None, None),
(RandomOverSampler(), None),
(NearMiss(), None),
(None, np.random.uniform(size=120)),
],
)
def test_balanced_batch_generator_class(data, sampler, sample_weight):
X, y = data
model = _build_keras_model(y.shape[1], X.shape[1])
training_generator = BalancedBatchGenerator(
X,
y,
sample_weight=sample_weight,
sampler=sampler,
batch_size=10,
random_state=42,
)
model.fit_generator(generator=training_generator, epochs=10)
@pytest.mark.parametrize("keep_sparse", [True, False])
def test_balanced_batch_generator_class_sparse(data, keep_sparse):
X, y = data
training_generator = BalancedBatchGenerator(
sparse.csr_matrix(X),
y,
batch_size=10,
keep_sparse=keep_sparse,
random_state=42,
)
for idx in range(len(training_generator)):
X_batch, _ = training_generator.__getitem__(idx)
if keep_sparse:
assert sparse.issparse(X_batch)
else:
assert not sparse.issparse(X_batch)
def test_balanced_batch_generator_function_no_return_indices(data):
with pytest.raises(ValueError, match="needs to have an attribute"):
balanced_batch_generator(
*data,
sampler=ClusterCentroids(estimator=KMeans(n_init=10)),
batch_size=10,
random_state=42,
)
@pytest.mark.filterwarnings("ignore:`wait_time` is not used") # keras 2.2.4
@pytest.mark.parametrize(
"sampler, sample_weight",
[
(None, None),
(RandomOverSampler(), None),
(NearMiss(), None),
(None, np.random.uniform(size=120)),
],
)
def test_balanced_batch_generator_function(data, sampler, sample_weight):
X, y = data
model = _build_keras_model(y.shape[1], X.shape[1])
training_generator, steps_per_epoch = balanced_batch_generator(
X,
y,
sample_weight=sample_weight,
sampler=sampler,
batch_size=10,
random_state=42,
)
model.fit_generator(
generator=training_generator,
steps_per_epoch=steps_per_epoch,
epochs=10,
)
@pytest.mark.parametrize("keep_sparse", [True, False])
def test_balanced_batch_generator_function_sparse(data, keep_sparse):
X, y = data
training_generator, steps_per_epoch = balanced_batch_generator(
sparse.csr_matrix(X),
y,
keep_sparse=keep_sparse,
batch_size=10,
random_state=42,
)
for _ in range(steps_per_epoch):
X_batch, _ = next(training_generator)
if keep_sparse:
assert sparse.issparse(X_batch)
else:
assert not sparse.issparse(X_batch)
| 4,289 | 27.986486 | 86 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/utils/_show_versions.py | """
Utility method which prints system info to help with debugging,
and filing issues on GitHub.
Adapted from :func:`sklearn.show_versions`,
which was adapted from :func:`pandas.show_versions`
"""
# Author: Alexander L. Hayes <hayesall@iu.edu>
# License: MIT
from .. import __version__
def _get_deps_info():
"""Overview of the installed version of main dependencies
Returns
-------
deps_info: dict
version information on relevant Python libraries
"""
deps = [
"imbalanced-learn",
"pip",
"setuptools",
"numpy",
"scipy",
"scikit-learn",
"Cython",
"pandas",
"keras",
"tensorflow",
"joblib",
]
deps_info = {
"imbalanced-learn": __version__,
}
from importlib.metadata import PackageNotFoundError, version
for modname in deps:
try:
deps_info[modname] = version(modname)
except PackageNotFoundError:
deps_info[modname] = None
return deps_info
def show_versions(github=False):
"""Print debugging information.
.. versionadded:: 0.5
Parameters
----------
github : bool,
If true, wrap system info with GitHub markup.
"""
from sklearn.utils._show_versions import _get_sys_info
_sys_info = _get_sys_info()
_deps_info = _get_deps_info()
_github_markup = (
"<details>"
"<summary>System, Dependency Information</summary>\n\n"
"**System Information**\n\n"
"{0}\n"
"**Python Dependencies**\n\n"
"{1}\n"
"</details>"
)
if github:
_sys_markup = ""
_deps_markup = ""
for k, stat in _sys_info.items():
_sys_markup += f"* {k:<10}: `{stat}`\n"
for k, stat in _deps_info.items():
_deps_markup += f"* {k:<10}: `{stat}`\n"
print(_github_markup.format(_sys_markup, _deps_markup))
else:
print("\nSystem:")
for k, stat in _sys_info.items():
print(f"{k:>11}: {stat}")
print("\nPython dependencies:")
for k, stat in _deps_info.items():
print(f"{k:>11}: {stat}")
| 2,176 | 22.408602 | 64 | py |
imbalanced-learn | imbalanced-learn-master/imblearn/utils/tests/test_show_versions.py | """Test for the show_versions helper. Based on the sklearn tests."""
# Author: Alexander L. Hayes <hayesall@iu.edu>
# License: MIT
from imblearn.utils._show_versions import _get_deps_info, show_versions
def test_get_deps_info():
_deps_info = _get_deps_info()
assert "pip" in _deps_info
assert "setuptools" in _deps_info
assert "imbalanced-learn" in _deps_info
assert "scikit-learn" in _deps_info
assert "numpy" in _deps_info
assert "scipy" in _deps_info
assert "Cython" in _deps_info
assert "pandas" in _deps_info
assert "joblib" in _deps_info
def test_show_versions_default(capsys):
show_versions()
out, err = capsys.readouterr()
assert "python" in out
assert "executable" in out
assert "machine" in out
assert "pip" in out
assert "setuptools" in out
assert "imbalanced-learn" in out
assert "scikit-learn" in out
assert "numpy" in out
assert "scipy" in out
assert "Cython" in out
assert "pandas" in out
assert "keras" in out
assert "tensorflow" in out
assert "joblib" in out
def test_show_versions_github(capsys):
show_versions(github=True)
out, err = capsys.readouterr()
assert "<details><summary>System, Dependency Information</summary>" in out
assert "**System Information**" in out
assert "* python" in out
assert "* executable" in out
assert "* machine" in out
assert "**Python Dependencies**" in out
assert "* pip" in out
assert "* setuptools" in out
assert "* imbalanced-learn" in out
assert "* scikit-learn" in out
assert "* numpy" in out
assert "* scipy" in out
assert "* Cython" in out
assert "* pandas" in out
assert "* keras" in out
assert "* tensorflow" in out
assert "* joblib" in out
assert "</details>" in out
| 1,818 | 28.819672 | 78 | py |
ML-Doctor | ML-Doctor-main/demo.py | import os
import sys
import torch
import argparse
import torch.nn as nn
import torchvision.models as models
from doctor.meminf import *
from doctor.modinv import *
from doctor.attrinf import *
from doctor.modsteal import *
from demoloader.train import *
from demoloader.DCGAN import *
from utils.define_models import *
from demoloader.dataloader import *
def train_model(PATH, device, train_set, test_set, model, use_DP, noise, norm, delta):
train_loader = torch.utils.data.DataLoader(
train_set, batch_size=64, shuffle=True, num_workers=2)
test_loader = torch.utils.data.DataLoader(
test_set, batch_size=64, shuffle=True, num_workers=2)
model = model_training(train_loader, test_loader, model, device, use_DP, noise, norm, delta)
acc_train = 0
acc_test = 0
for i in range(100):
print("<======================= Epoch " + str(i+1) + " =======================>")
print("target training")
acc_train = model.train()
print("target testing")
acc_test = model.test()
overfitting = round(acc_train - acc_test, 6)
print('The overfitting rate is %s' % overfitting)
FILE_PATH = PATH + "_target.pth"
model.saveModel(FILE_PATH)
print("Saved target model!!!")
print("Finished training!!!")
return acc_train, acc_test, overfitting
def train_DCGAN(PATH, device, train_set, name):
train_loader = torch.utils.data.DataLoader(
train_set, batch_size=128, shuffle=True, num_workers=2)
if name.lower() == 'fmnist':
D = FashionDiscriminator().eval()
G = FashionGenerator().eval()
else:
D = Discriminator(ngpu=1).eval()
G = Generator(ngpu=1).eval()
print("Starting Training DCGAN...")
GAN = GAN_training(train_loader, D, G, device)
for i in range(200):
print("<======================= Epoch " + str(i+1) + " =======================>")
GAN.train()
GAN.saveModel(PATH + "_discriminator.pth", PATH + "_generator.pth")
def test_meminf(PATH, device, num_classes, target_train, target_test, shadow_train, shadow_test, target_model, shadow_model, train_shadow, use_DP, noise, norm, delta, mode):
batch_size = 64
if train_shadow:
shadow_trainloader = torch.utils.data.DataLoader(
shadow_train, batch_size=batch_size, shuffle=True, num_workers=2)
shadow_testloader = torch.utils.data.DataLoader(
shadow_test, batch_size=batch_size, shuffle=True, num_workers=2)
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(shadow_model.parameters(), lr=1e-2, momentum=0.9, weight_decay=5e-4)
train_shadow_model(PATH, device, shadow_model, shadow_trainloader, shadow_testloader, use_DP, noise, norm, loss, optimizer, delta)
if mode == 0 or mode == 3:
attack_trainloader, attack_testloader = get_attack_dataset_with_shadow(
target_train, target_test, shadow_train, shadow_test, batch_size)
else:
attack_trainloader, attack_testloader = get_attack_dataset_without_shadow(target_train, target_test, batch_size)
#for white box
if mode == 2 or mode == 3:
gradient_size = get_gradient_size(target_model)
total = gradient_size[0][0] // 2 * gradient_size[0][1] // 2
if mode == 0:
attack_model = ShadowAttackModel(num_classes)
attack_mode0(PATH + "_target.pth", PATH + "_shadow.pth", PATH, device, attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, 1, num_classes)
elif mode == 1:
attack_model = PartialAttackModel(num_classes)
attack_mode1(PATH + "_target.pth", PATH, device, attack_trainloader, attack_testloader, target_model, attack_model, 1, num_classes)
elif mode == 2:
attack_model = WhiteBoxAttackModel(num_classes, total)
attack_mode2(PATH + "_target.pth", PATH, device, attack_trainloader, attack_testloader, target_model, attack_model, 1, num_classes)
elif mode == 3:
attack_model = WhiteBoxAttackModel(num_classes, total)
attack_mode3(PATH + "_target.pth", PATH + "_shadow.pth", PATH, device,
attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, 1, num_classes)
else:
raise Exception("Wrong mode")
# attack_mode0(PATH + "_target.pth", PATH + "_shadow.pth", PATH, device, attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, 1, num_classes)
# attack_mode1(PATH + "_target.pth", PATH, device, attack_trainloader, attack_testloader, target_model, attack_model, 1, num_classes)
# attack_mode2(PATH + "_target.pth", PATH, device, attack_trainloader, attack_testloader, target_model, attack_model, 1, num_classes)
def test_modinv(PATH, device, num_classes, target_train, target_model, name):
size = (1,) + tuple(target_train[0][0].shape)
target_model, evaluation_model = load_data(PATH + "_target.pth", PATH + "_eval.pth", target_model, models.resnet18(num_classes=num_classes))
# CCS 15
modinv_ccs = ccs_inversion(target_model, size, num_classes, 1, 3000, 100, 0.001, 0.003, device)
train_loader = torch.utils.data.DataLoader(target_train, batch_size=1, shuffle=False)
ccs_result = modinv_ccs.reverse_mse(train_loader)
# Secret Revealer
if name.lower() == 'fmnist':
D = FashionDiscriminator(ngpu=1).eval()
G = FashionGenerator(ngpu=1).eval()
else:
D = Discriminator(ngpu=1).eval()
G = Generator(ngpu=1).eval()
PATH_D = PATH + "_discriminator.pth"
PATH_G = PATH + "_generator.pth"
D, G, iden = prepare_GAN(name, D, G, PATH_D, PATH_G)
modinv_revealer = revealer_inversion(G, D, target_model, evaluation_model, iden, device)
def test_attrinf(PATH, device, num_classes, target_train, target_test, target_model):
attack_length = int(0.5 * len(target_train))
rest = len(target_train) - attack_length
attack_train, _ = torch.utils.data.random_split(target_train, [attack_length, rest])
attack_test = target_test
attack_trainloader = torch.utils.data.DataLoader(
attack_train, batch_size=64, shuffle=True, num_workers=2)
attack_testloader = torch.utils.data.DataLoader(
attack_test, batch_size=64, shuffle=True, num_workers=2)
image_size = [1] + list(target_train[0][0].shape)
train_attack_model(
PATH + "_target.pth", PATH, num_classes[1], device, target_model, attack_trainloader, attack_testloader, image_size)
def test_modsteal(PATH, device, train_set, test_set, target_model, attack_model):
train_loader = torch.utils.data.DataLoader(
train_set, batch_size=64, shuffle=True, num_workers=2)
test_loader = torch.utils.data.DataLoader(
test_set, batch_size=64, shuffle=True, num_workers=2)
loss = nn.MSELoss()
optimizer = optim.SGD(attack_model.parameters(), lr=0.01, momentum=0.9)
attacking = train_steal_model(
train_loader, test_loader, target_model, attack_model, PATH + "_target.pth", PATH + "_modsteal.pth", device, 64, loss, optimizer)
for i in range(100):
print("[Epoch %d/%d] attack training"%((i+1), 100))
attacking.train_with_same_distribution()
print("Finished training!!!")
attacking.saveModel()
acc_test, agreement_test = attacking.test()
print("Saved Target Model!!!\nstolen test acc = %.3f, stolen test agreement = %.3f\n"%(acc_test, agreement_test))
def str_to_bool(string):
if isinstance(string, bool):
return string
if string.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif string.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-g', '--gpu', type=str, default="0")
parser.add_argument('-a', '--attributes', type=str, default="race", help="For attrinf, two attributes should be in format x_y e.g. race_gender")
parser.add_argument('-dn', '--dataset_name', type=str, default="UTKFace")
parser.add_argument('-at', '--attack_type', type=int, default=0)
parser.add_argument('-tm', '--train_model', action='store_true')
parser.add_argument('-ts', '--train_shadow', action='store_true')
parser.add_argument('-ud', '--use_DP', action='store_true',)
parser.add_argument('-ne', '--noise', type=float, default=1.3)
parser.add_argument('-nm', '--norm', type=float, default=1.5)
parser.add_argument('-d', '--delta', type=float, default=1e-5)
parser.add_argument('-m', '--mode', type=int, default=0)
args = parser.parse_args()
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
device = torch.device("cuda:0")
dataset_name = args.dataset_name
attr = args.attributes
if "_" in attr:
attr = attr.split("_")
root = "../data"
use_DP = args.use_DP
noise = args.noise
norm = args.norm
delta = args.delta
mode = args.mode
train_shadow = args.train_shadow
TARGET_ROOT = "./demoloader/trained_model/"
if not os.path.exists(TARGET_ROOT):
print(f"Create directory named {TARGET_ROOT}")
os.makedirs(TARGET_ROOT)
TARGET_PATH = TARGET_ROOT + dataset_name
num_classes, target_train, target_test, shadow_train, shadow_test, target_model, shadow_model = prepare_dataset(dataset_name, attr, root)
if args.train_model:
train_model(TARGET_PATH, device, target_train, target_test, target_model, use_DP, noise, norm, delta)
# membership inference
if args.attack_type == 0:
test_meminf(TARGET_PATH, device, num_classes, target_train, target_test, shadow_train, shadow_test, target_model, shadow_model, train_shadow, use_DP, noise, norm, delta, mode)
# model inversion
elif args.attack_type == 1:
train_DCGAN(TARGET_PATH, device, shadow_test + shadow_train, dataset_name)
test_modinv(TARGET_PATH, device, num_classes, target_train, target_model, dataset_name)
# attribut inference
elif args.attack_type == 2:
test_attrinf(TARGET_PATH, device, num_classes, target_train, target_test, target_model)
# model stealing
elif args.attack_type == 3:
test_modsteal(TARGET_PATH, device, shadow_train+shadow_test, target_test, target_model, shadow_model)
else:
sys.exit("we have not supported this mode yet! 0c0")
# target_model = models.resnet18(num_classes=num_classes)
# train_model(TARGET_PATH, device, target_train + shadow_train, target_test + shadow_test, target_model)
if __name__ == "__main__":
main()
| 10,614 | 41.290837 | 183 | py |
ML-Doctor | ML-Doctor-main/demoloader/dataloader.py | import os
import torch
import pandas
import torchvision
torch.manual_seed(0)
import torch.nn as nn
import PIL.Image as Image
import torchvision.transforms as transforms
from functools import partial
from typing import Any, Callable, List, Optional, Union, Tuple
class CNN(nn.Module):
def __init__(self, input_channel=3, num_classes=10):
super(CNN, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(input_channel, 32, kernel_size=3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(32, 64, kernel_size=3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(64, 128, kernel_size=3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
)
self.classifier = nn.Sequential(
nn.Linear(128*6*6, 512),
nn.ReLU(),
nn.Linear(512, num_classes),
)
def forward(self, x):
x = self.features(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
class UTKFaceDataset(torch.utils.data.Dataset):
def __init__(self, root, attr: Union[List[str], str] = "gender", transform=None, target_transform=None)-> None:
self.root = root
self.transform = transform
self.target_transform = target_transform
self.processed_path = os.path.join(self.root, 'UTKFace/processed/')
self.files = os.listdir(self.processed_path)
if isinstance(attr, list):
self.attr = attr
else:
self.attr = [attr]
self.lines = []
for txt_file in self.files:
txt_file_path = os.path.join(self.processed_path, txt_file)
with open(txt_file_path, 'r') as f:
assert f is not None
for i in f:
image_name = i.split('jpg ')[0]
attrs = image_name.split('_')
if len(attrs) < 4 or int(attrs[2]) >= 4 or '' in attrs:
continue
self.lines.append(image_name+'jpg')
def __len__(self):
return len(self.lines)
def __getitem__(self, index:int)-> Tuple[Any, Any]:
attrs = self.lines[index].split('_')
age = int(attrs[0])
gender = int(attrs[1])
race = int(attrs[2])
image_path = os.path.join(self.root, 'UTKFace/raw/', self.lines[index]+'.chip.jpg').rstrip()
image = Image.open(image_path).convert('RGB')
target: Any = []
for t in self.attr:
if t == "age":
target.append(age)
elif t == "gender":
target.append(gender)
elif t == "race":
target.append(race)
else:
raise ValueError("Target type \"{}\" is not recognized.".format(t))
if self.transform:
image = self.transform(image)
if target:
target = tuple(target) if len(target) > 1 else target[0]
if self.target_transform is not None:
target = self.target_transform(target)
else:
target = None
return image, target
class CelebA(torch.utils.data.Dataset):
base_folder = "celeba"
def __init__(
self,
root: str,
attr_list: str,
target_type: Union[List[str], str] = "attr",
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
) -> None:
if isinstance(target_type, list):
self.target_type = target_type
else:
self.target_type = [target_type]
self.root = root
self.transform = transform
self.target_transform =target_transform
self.attr_list = attr_list
fn = partial(os.path.join, self.root, self.base_folder)
splits = pandas.read_csv(fn("list_eval_partition.txt"), delim_whitespace=True, header=None, index_col=0)
attr = pandas.read_csv(fn("list_attr_celeba.txt"), delim_whitespace=True, header=1)
mask = slice(None)
self.filename = splits[mask].index.values
self.attr = torch.as_tensor(attr[mask].values)
self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}
self.attr_names = list(attr.columns)
def __getitem__(self, index: int) -> Tuple[Any, Any]:
X = Image.open(os.path.join(self.root, self.base_folder, "img_celeba", self.filename[index]))
target: Any = []
for t, nums in zip(self.target_type, self.attr_list):
if t == "attr":
final_attr = 0
for i in range(len(nums)):
final_attr += 2 ** i * self.attr[index][nums[i]]
target.append(final_attr)
else:
# TODO: refactor with utils.verify_str_arg
raise ValueError("Target type \"{}\" is not recognized.".format(t))
if self.transform is not None:
X = self.transform(X)
if target:
target = tuple(target) if len(target) > 1 else target[0]
if self.target_transform is not None:
target = self.target_transform(target)
else:
target = None
return X, target
def __len__(self) -> int:
return len(self.attr)
def extra_repr(self) -> str:
lines = ["Target type: {target_type}", "Split: {split}"]
return '\n'.join(lines).format(**self.__dict__)
def prepare_dataset(dataset, attr, root):
num_classes, dataset, target_model, shadow_model = get_model_dataset(dataset, attr=attr, root=root)
length = len(dataset)
each_length = length//4
target_train, target_test, shadow_train, shadow_test, _ = torch.utils.data.random_split(dataset, [each_length, each_length, each_length, each_length, len(dataset)-(each_length*4)])
return num_classes, target_train, target_test, shadow_train, shadow_test, target_model, shadow_model
def get_model_dataset(dataset_name, attr, root):
if dataset_name.lower() == "utkface":
if isinstance(attr, list):
num_classes = []
for a in attr:
if a == "age":
num_classes.append(117)
elif a == "gender":
num_classes.append(2)
elif a == "race":
num_classes.append(4)
else:
raise ValueError("Target type \"{}\" is not recognized.".format(a))
else:
if attr == "age":
num_classes = 117
elif attr == "gender":
num_classes = 2
elif attr == "race":
num_classes = 4
else:
raise ValueError("Target type \"{}\" is not recognized.".format(attr))
transform = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.Resize((64, 64)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
dataset = UTKFaceDataset(root=root, attr=attr, transform=transform)
input_channel = 3
elif dataset_name.lower() == "celeba":
if isinstance(attr, list):
for a in attr:
if a != "attr":
raise ValueError("Target type \"{}\" is not recognized.".format(a))
num_classes = [8, 4]
# heavyMakeup MouthSlightlyOpen Smiling, Male Young
attr_list = [[18, 21, 31], [20, 39]]
else:
if attr == "attr":
num_classes = 8
attr_list = [[18, 21, 31]]
else:
raise ValueError("Target type \"{}\" is not recognized.".format(attr))
transform = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.Resize((64, 64)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
dataset = CelebA(root=root, attr_list=attr_list, target_type=attr, transform=transform)
input_channel = 3
elif dataset_name.lower() == "stl10":
num_classes = 10
transform = transforms.Compose([
transforms.Resize((64, 64)),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
train_set = torchvision.datasets.STL10(
root=root, split='train', transform=transform, download=True)
test_set = torchvision.datasets.STL10(
root=root, split='test', transform=transform, download=True)
dataset = train_set + test_set
input_channel = 3
elif dataset_name.lower() == "fmnist":
num_classes = 10
transform = transforms.Compose([
transforms.Resize((64, 64)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
train_set = torchvision.datasets.FashionMNIST(
root=root, train=True, download=True, transform=transform)
test_set = torchvision.datasets.FashionMNIST(
root=root, train=False, download=True, transform=transform)
dataset = train_set + test_set
input_channel = 1
if isinstance(num_classes, int):
target_model = CNN(input_channel=input_channel, num_classes=num_classes)
shadow_model = CNN(input_channel=input_channel, num_classes=num_classes)
else:
target_model = CNN(input_channel=input_channel, num_classes=num_classes[0])
shadow_model = CNN(input_channel=input_channel, num_classes=num_classes[0])
return num_classes, dataset, target_model, shadow_model | 9,806 | 33.900356 | 184 | py |
ML-Doctor | ML-Doctor-main/demoloader/train.py | import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
np.set_printoptions(threshold=np.inf)
from opacus import PrivacyEngine
from torch.optim import lr_scheduler
def GAN_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
class model_training():
def __init__(self, trainloader, testloader, model, device, use_DP, noise, norm, delta):
self.use_DP = use_DP
self.device = device
self.delta = delta
self.net = model.to(self.device)
self.trainloader = trainloader
self.testloader = testloader
if self.device == 'cuda':
self.net = torch.nn.DataParallel(self.net)
cudnn.benchmark = True
self.criterion = nn.CrossEntropyLoss()
self.optimizer = optim.SGD(self.net.parameters(), lr=1e-2, momentum=0.9, weight_decay=5e-4)
self.noise_multiplier, self.max_grad_norm = noise, norm
if self.use_DP:
self.privacy_engine = PrivacyEngine()
self.model, self.optimizer, self.trainloader = self.privacy_engine.make_private(
module=model,
optimizer=self.optimizer,
data_loader=self.trainloader,
noise_multiplier=self.noise_multiplier,
max_grad_norm=self.max_grad_norm,
)
# self.net = module_modification.convert_batchnorm_modules(self.net)
# inspector = DPModelInspector()
# inspector.validate(self.net)
# privacy_engine = PrivacyEngine(
# self.net,
# batch_size=64,
# sample_size=len(self.trainloader.dataset),
# alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),
# noise_multiplier=self.noise_multiplier,
# max_grad_norm=self.max_grad_norm,
# secure_rng=False,
# )
print( 'noise_multiplier: %.3f | max_grad_norm: %.3f' % (self.noise_multiplier, self.max_grad_norm))
# privacy_engine.attach(self.optimizer)
self.scheduler = lr_scheduler.MultiStepLR(self.optimizer, [50, 75], 0.1)
# Training
def train(self):
self.net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(self.trainloader):
if isinstance(targets, list):
targets = targets[0]
if str(self.criterion) != "CrossEntropyLoss()":
targets = torch.from_numpy(np.eye(self.num_classes)[targets]).float()
inputs, targets = inputs.to(self.device), targets.to(self.device)
self.optimizer.zero_grad()
outputs = self.net(inputs)
loss = self.criterion(outputs, targets)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
if str(self.criterion) != "CrossEntropyLoss()":
_, targets= targets.max(1)
correct += predicted.eq(targets).sum().item()
if self.use_DP:
epsilon = self.privacy_engine.accountant.get_epsilon(delta=self.delta)
# epsilon, best_alpha = self.optimizer.privacy_engine.get_privacy_spent(1e-5)
print("\u03B5: %.3f \u03B4: 1e-5" % (epsilon))
self.scheduler.step()
print( 'Train Acc: %.3f%% (%d/%d) | Loss: %.3f' % (100.*correct/total, correct, total, 1.*train_loss/batch_idx))
return 1.*correct/total
def saveModel(self, path):
torch.save(self.net.state_dict(), path)
def get_noise_norm(self):
return self.noise_multiplier, self.max_grad_norm
def test(self):
self.net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for inputs, targets in self.testloader:
if isinstance(targets, list):
targets = targets[0]
if str(self.criterion) != "CrossEntropyLoss()":
targets = torch.from_numpy(np.eye(self.num_classes)[targets]).float()
inputs, targets = inputs.to(self.device), targets.to(self.device)
outputs = self.net(inputs)
loss = self.criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
if str(self.criterion) != "CrossEntropyLoss()":
_, targets= targets.max(1)
correct += predicted.eq(targets).sum().item()
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/total, correct, total))
return 1.*correct/total
class distillation_training():
def __init__(self, PATH, trainloader, testloader, model, teacher, device):
self.device = device
self.model = model.to(self.device)
self.trainloader = trainloader
self.testloader = testloader
self.PATH = PATH
self.teacher = teacher.to(self.device)
self.teacher.load_state_dict(torch.load(self.PATH))
if self.device == 'cuda':
self.model = torch.nn.DataParallel(self.model)
cudnn.benchmark = True
self.criterion = nn.KLDivLoss(reduction='batchmean')
self.optimizer = optim.SGD(self.model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
self.scheduler = lr_scheduler.MultiStepLR(self.optimizer, [50, 100], 0.1)
def distillation_loss(self, y, labels, teacher_scores, T, alpha):
loss = self.criterion(F.log_softmax(y/T, dim=1), F.softmax(teacher_scores/T, dim=1))
loss = loss * (T*T * 2.0 * alpha) + F.cross_entropy(y, labels) * (1. - alpha)
return loss
def train(self):
self.model.train()
self.teacher.eval()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(self.trainloader):
inputs, targets = inputs.to(self.device), targets.to(self.device)
self.optimizer.zero_grad()
outputs = self.model(inputs)
teacher_output = self.teacher(inputs)
teacher_output = teacher_output.detach()
loss = self.distillation_loss(outputs, targets, teacher_output, T=20.0, alpha=0.7)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
self.scheduler.step()
print( 'Train Acc: %.3f%% (%d/%d) | Loss: %.3f' % (100.*correct/total, correct, total, 1.*train_loss/batch_idx))
return 1.*correct/total
def saveModel(self, path):
torch.save(self.model.state_dict(), path)
def test(self):
self.model.eval()
correct = 0
total = 0
with torch.no_grad():
for inputs, targets in self.testloader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
outputs = self.model(inputs)
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/total, correct, total))
return 1.*correct/total
class GAN_training():
def __init__(self, trainloader, model_discriminator, model_generator, device):
self.device = device
self.trainloader = trainloader
self.model_discriminator = model_discriminator.to(self.device)
self.model_generator = model_generator.to(self.device)
self.model_discriminator.apply(GAN_init)
self.model_generator.apply(GAN_init)
self.criterion = nn.BCELoss()
self.optimizer_discriminator = optim.Adam(model_discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))
self.optimizer_generator = optim.Adam(model_generator.parameters(), lr=0.0002, betas=(0.5, 0.999))
self.real_label = 1.
self.fake_label = 0.
def train(self):
# For each batch in the dataloader
for i, data in enumerate(self.trainloader, 0):
self.model_discriminator.zero_grad()
# Format batch
real_cpu = data[0].to(self.device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), self.real_label, dtype=torch.float, device=self.device)
# Forward pass real batch through D
output = self.model_discriminator(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = self.criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, 100, 1, 1, device=self.device)
# Generate fake image batch with G
fake = self.model_generator(noise)
label.fill_(self.fake_label)
# Classify all fake batch with D
output = self.model_discriminator(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = self.criterion(output, label)
# Calculate the gradients for this batch, accumulated (summed) with previous gradients
errD_fake.backward()
D_G_z1 = output.mean().item()
# Compute error of D as sum over the fake and the real batches
errD = errD_real + errD_fake
# Update D
self.optimizer_discriminator.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
self.model_generator.zero_grad()
label.fill_(self.real_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = self.model_discriminator(fake).view(-1)
# Calculate G's loss based on this output
errG = self.criterion(output, label)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
self.optimizer_generator.step()
# Output training stats
if i % 50 == 0:
print('[%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (i, len(self.trainloader), errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
# G_losses.append(errG.item())
# D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
# if (iters % 500 == 0) or ((epoch == 4) and (i == len(dataloader)-1)):
# with torch.no_grad():
# fake = netG(fixed_noise).detach().cpu()
# img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
def saveModel(self, path_d, path_g):
torch.save(self.model_discriminator.state_dict(), path_d)
torch.save(self.model_generator.state_dict(), path_g)
| 11,748 | 37.270358 | 120 | py |
ML-Doctor | ML-Doctor-main/demoloader/DCGAN.py | import torch.nn as nn
class Generator(nn.Module):
def __init__(self, ngpu=1, nc=3, nz=100, ngf=64):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
if input.is_cuda and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output
class Discriminator(nn.Module):
def __init__(self, ngpu=1, nc=3, ndf=64):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
if input.is_cuda and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output.view(-1, 1).squeeze(1)
class FashionGenerator(nn.Module):
def __init__(self):
super(FashionGenerator, self).__init__()
d_input = 100
d_output = 64 * 64
self.input = nn.Sequential(
nn.Linear(d_input, 256),
nn.ReLU()
)
self.hidden1 = nn.Sequential(
nn.Linear(256, 512),
nn.ReLU()
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 1024),
nn.ReLU()
)
self.output = nn.Sequential(
nn.Linear(1024, d_output),
nn.Tanh()
)
def forward(self, x):
x = x.view(-1, 100)
x = self.input(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.output(x)
return x.reshape(-1, 1, 64, 64)
class FashionDiscriminator(nn.Module):
def __init__(self):
super(FashionDiscriminator, self).__init__()
d_input = 64 * 64
d_output = 1
self.input = nn.Sequential(
nn.Linear(d_input, 1024),
nn.ReLU(),
nn.Dropout(0.2)
)
self.hidden1 = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Dropout(0.2)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.2)
)
self.output = nn.Sequential(
nn.Linear(256, d_output),
nn.Sigmoid()
)
def forward(self, x):
x = x.reshape(-1, 64 * 64)
x = self.input(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.output(x)
return x
| 4,237 | 29.934307 | 82 | py |
ML-Doctor | ML-Doctor-main/utils/define_models.py | import torch
import torch.nn as nn
import torch.nn.functional as F
class attrinf_attack_model(nn.Module):
def __init__(self, inputs, outputs):
super(attrinf_attack_model, self).__init__()
self.classifier = nn.Linear(inputs, outputs)
def forward(self, x):
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
class ShadowAttackModel(nn.Module):
def __init__(self, class_num):
super(ShadowAttackModel, self).__init__()
self.Output_Component = nn.Sequential(
# nn.Dropout(p=0.2),
nn.Linear(class_num, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Prediction_Component = nn.Sequential(
# nn.Dropout(p=0.2),
nn.Linear(1, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Encoder_Component = nn.Sequential(
# nn.Dropout(p=0.2),
nn.Linear(128, 256),
nn.ReLU(),
# nn.Dropout(p=0.2),
nn.Linear(256, 128),
nn.ReLU(),
# nn.Dropout(p=0.2),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 2),
)
def forward(self, output, prediction):
Output_Component_result = self.Output_Component(output)
Prediction_Component_result = self.Prediction_Component(prediction)
final_inputs = torch.cat((Output_Component_result, Prediction_Component_result), 1)
final_result = self.Encoder_Component(final_inputs)
return final_result
class PartialAttackModel(nn.Module):
def __init__(self, class_num):
super(PartialAttackModel, self).__init__()
self.Output_Component = nn.Sequential(
# nn.Dropout(p=0.2),
nn.Linear(class_num, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Prediction_Component = nn.Sequential(
# nn.Dropout(p=0.2),
nn.Linear(1, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Encoder_Component = nn.Sequential(
# nn.Dropout(p=0.2),
nn.Linear(128, 256),
nn.ReLU(),
# nn.Dropout(p=0.2),
nn.Linear(256, 128),
nn.ReLU(),
# nn.Dropout(p=0.2),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 2),
)
def forward(self, output, prediction):
Output_Component_result = self.Output_Component(output)
Prediction_Component_result = self.Prediction_Component(prediction)
final_inputs = torch.cat((Output_Component_result, Prediction_Component_result), 1)
final_result = self.Encoder_Component(final_inputs)
return final_result
class WhiteBoxAttackModel(nn.Module):
def __init__(self, class_num, total):
super(WhiteBoxAttackModel, self).__init__()
self.Output_Component = nn.Sequential(
nn.Dropout(p=0.2),
nn.Linear(class_num, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Loss_Component = nn.Sequential(
nn.Dropout(p=0.2),
nn.Linear(1, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Gradient_Component = nn.Sequential(
nn.Dropout(p=0.2),
nn.Conv2d(1, 1, kernel_size=5, padding=2),
nn.BatchNorm2d(1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Flatten(),
nn.Dropout(p=0.2),
nn.Linear(total, 256),
nn.ReLU(),
nn.Dropout(p=0.2),
nn.Linear(256, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Label_Component = nn.Sequential(
nn.Dropout(p=0.2),
nn.Linear(class_num, 128),
nn.ReLU(),
nn.Linear(128, 64),
)
self.Encoder_Component = nn.Sequential(
nn.Dropout(p=0.2),
nn.Linear(256, 256),
nn.ReLU(),
nn.Dropout(p=0.2),
nn.Linear(256, 128),
nn.ReLU(),
nn.Dropout(p=0.2),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 2),
)
def forward(self, output, loss, gradient, label):
Output_Component_result = self.Output_Component(output)
Loss_Component_result = self.Loss_Component(loss)
Gradient_Component_result = self.Gradient_Component(gradient)
Label_Component_result = self.Label_Component(label)
# Loss_Component_result = F.softmax(Loss_Component_result, dim=1)
# Gradient_Component_result = F.softmax(Gradient_Component_result, dim=1)
# final_inputs = Output_Component_result
# final_inputs = Loss_Component_result
# final_inputs = Gradient_Component_result
# final_inputs = Label_Component_result
final_inputs = torch.cat((Output_Component_result, Loss_Component_result, Gradient_Component_result, Label_Component_result), 1)
final_result = self.Encoder_Component(final_inputs)
return final_result | 4,227 | 24.017751 | 130 | py |
ML-Doctor | ML-Doctor-main/doctor/modsteal.py | import torch
import torch.nn.functional as F
from math import *
from tqdm import tqdm
class train_steal_model():
def __init__(self, train_loader, test_loader, target_model, attack_model, TARGET_PATH, ATTACK_PATH, device, batch_size, loss, optimizer):
self.device = device
self.batch_size = batch_size
self.train_loader = train_loader
self.test_loader = test_loader
self.TARGET_PATH = TARGET_PATH
self.target_model = target_model.to(self.device)
self.target_model.load_state_dict(torch.load(self.TARGET_PATH, map_location=self.device))
self.target_model.eval()
self.ATTACK_PATH = ATTACK_PATH
self.attack_model = attack_model.to(self.device)
self.criterion = loss
self.optimizer = optimizer
self.index = 0
self.count = [0 for i in range(10)]
self.dataset = []
def train(self, train_set, train_out):
self.attack_model.train()
for inputs, targets in tqdm(zip(train_set, train_out)):
inputs, targets = inputs.to(self.device), targets.to(self.device)
self.optimizer.zero_grad()
outputs = self.attack_model(inputs)
outputs = F.softmax(outputs, dim=1)
loss = self.criterion(outputs, targets)
loss.backward()
self.optimizer.step()
def train_with_same_distribution(self):
self.attack_model.train()
train_loss = 0
correct = 0
total = 0
correct_target = 0
total_target = 0
for inputs, targets in tqdm(self.train_loader):
inputs, targets = inputs.to(self.device), targets.to(self.device)
target_model_logit = self.target_model(inputs)
_,target_model_output = target_model_logit.max(1)
target_model_posterior = F.softmax(target_model_logit, dim=1)
# print(inputs, targets)
self.optimizer.zero_grad()
outputs = self.attack_model(inputs)
# outputs = F.softmax(outputs, dim=1)
# loss = self.criterion(outputs, targets)
loss = self.criterion(outputs, target_model_posterior)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
total_target += targets.size(0)
correct_target += predicted.eq(target_model_output).sum().item()
print( 'Train Acc: %.3f%% (%d/%d)' % (100.*correct/total, correct, total))
print( 'Train Agreement: %.3f%% (%d/%d)' % (100.*correct_target/total_target, correct_target, total_target))
def test(self):
self.attack_model.eval()
correct = 0
target_correct = 0
total = 0
agreement_correct = 0
agreement_total = 0
with torch.no_grad():
for inputs, targets in self.test_loader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
outputs = self.attack_model(inputs)
_, predicted = outputs.max(1)
target_model_logit = self.target_model(inputs)
_,target_predicted = target_model_logit.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
target_correct += target_predicted.eq(targets).sum().item()
output_target = self.target_model(inputs)
_, predicted_target = output_target.max(1)
agreement_total += targets.size(0)
agreement_correct += predicted.eq(predicted_target).sum().item()
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/total, correct, total))
print( 'Target Test Acc: %.3f%% (%d/%d)' % (100.*target_correct/total, target_correct, total))
print( 'Test Agreement: %.3f%% (%d/%d)' % (100.*agreement_correct/agreement_total, agreement_correct, agreement_total))
acc_test = correct/total
agreemenet_test = agreement_correct / agreement_total
return acc_test, agreemenet_test
def saveModel(self):
torch.save(self.attack_model.state_dict(), self.ATTACK_PATH)
| 4,341 | 33.188976 | 141 | py |
ML-Doctor | ML-Doctor-main/doctor/attrinf.py | import torch
import pickle
import torch.nn as nn
import torch.optim as optim
from utils.define_models import *
from sklearn.metrics import f1_score
class attack_training():
def __init__(self, device, attack_trainloader, attack_testloader, target_model, TARGET_PATH, ATTACK_PATH):
self.device = device
self.TARGET_PATH = TARGET_PATH
self.ATTACK_PATH = ATTACK_PATH
self.target_model = target_model.to(self.device)
self.target_model.load_state_dict(torch.load(self.TARGET_PATH))
self.target_model.eval()
self.attack_model = None
self.attack_trainloader = attack_trainloader
self.attack_testloader = attack_testloader
self.criterion = nn.CrossEntropyLoss()
self.optimizer = None
# self.scheduler = lr_scheduler.MultiStepLR(self.optimizer, [50, 100], 0.1)
self.dataset_type = None
def _get_activation(self, name, activation):
def hook(model, input, output):
activation[name] = output.detach()
return hook
def init_attack_model(self, size, output_classes):
x = torch.rand(size).to(self.device)
input_classes = self.get_middle_output(x).flatten().shape[0]
self.attack_model = attrinf_attack_model(inputs=input_classes, outputs=output_classes)
self.attack_model.to(self.device)
self.optimizer = optim.Adam(self.attack_model.parameters(), lr=1e-3)
if output_classes == 2:
self.dataset_type = "binary"
else:
self.dataset_type = "macro"
def get_middle_output(self, x):
temp = []
for name, _ in self.target_model.named_parameters():
if "weight" in name:
temp.append(name)
if 1 > len(temp):
raise IndexError('layer is out of range')
name = temp[-2].split('.')
var = eval('self.target_model.' + name[0])
out = {}
var[int(name[1])].register_forward_hook(self._get_activation(name[1], out))
_ = self.target_model(x)
return out[name[1]]
# Training
def train(self, epoch):
self.attack_model.train()
train_loss = 0
correct = 0
total = 0
final_result = []
final_gndtrth = []
final_predict = []
final_probabe = []
for batch_idx, (inputs, [_, targets]) in enumerate(self.attack_trainloader):
inputs, targets = inputs.to(self.device), targets.to(self.device)
self.optimizer.zero_grad()
oracles = self.get_middle_output(inputs)
outputs = self.attack_model(oracles)
outputs = F.softmax(outputs, dim=1)
loss = self.criterion(outputs, targets)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
if epoch:
final_gndtrth.append(targets)
final_predict.append(predicted)
final_probabe.append(outputs[:, 1])
if epoch:
final_gndtrth = torch.cat(final_gndtrth, dim=0).cpu().detach().numpy()
final_predict = torch.cat(final_predict, dim=0).cpu().detach().numpy()
final_probabe = torch.cat(final_probabe, dim=0).cpu().detach().numpy()
test_f1_score = f1_score(final_gndtrth, final_predict, average=self.dataset_type)
final_result.append(test_f1_score)
with open(self.ATTACK_PATH + "_attrinf_train.p", "wb") as f:
pickle.dump((final_gndtrth, final_predict, final_probabe), f)
print("Saved Attack Test Ground Truth and Predict Sets")
print("Test F1: %f" % (test_f1_score))
final_result.append(1.*correct/total)
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/(1.0*total), correct, total))
return final_result
def test(self, epoch):
self.attack_model.eval()
correct = 0
total = 0
final_result = []
final_gndtrth = []
final_predict = []
final_probabe = []
with torch.no_grad():
for inputs, [_, targets] in self.attack_testloader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
oracles = self.get_middle_output(inputs)
outputs = self.attack_model(oracles)
outputs = F.softmax(outputs, dim=1)
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
if epoch:
final_gndtrth.append(targets)
final_predict.append(predicted)
final_probabe.append(outputs[:, 1])
if epoch:
final_gndtrth = torch.cat(final_gndtrth, dim=0).cpu().numpy()
final_predict = torch.cat(final_predict, dim=0).cpu().numpy()
final_probabe = torch.cat(final_probabe, dim=0).cpu().numpy()
test_f1_score = f1_score(final_gndtrth, final_predict, average=self.dataset_type)
final_result.append(test_f1_score)
with open(self.ATTACK_PATH + "_attrinf_test.p", "wb") as f:
pickle.dump((final_gndtrth, final_predict, final_probabe), f)
print("Saved Attack Test Ground Truth and Predict Sets")
print("Test F1: %f" % (test_f1_score))
final_result.append(1.*correct/total)
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/(1.0*total), correct, total))
return final_result
def saveModel(self):
torch.save(self.attack_model.state_dict(), self.ATTACK_PATH + "_attrinf_attack_model.pth")
def train_attack_model(TARGET_PATH, ATTACK_PATH, output_classes, device, target_model, train_loader, test_loader, size):
attack = attack_training(device, train_loader, test_loader, target_model, TARGET_PATH, ATTACK_PATH)
attack.init_attack_model(size, output_classes)
for epoch in range(100):
flag = 1 if epoch==99 else 0
print("<======================= Epoch " + str(epoch+1) + " =======================>")
print("attack training")
acc_train = attack.train(flag)
print("attack testing")
acc_test = attack.test(flag)
attack.saveModel()
print("Saved Attack Model")
print("Finished!!!")
return acc_train, acc_test | 6,547 | 34.978022 | 120 | py |
ML-Doctor | ML-Doctor-main/doctor/modinv.py | import time
import torch
import random
import numpy as np
import torch.nn as nn
import torch.utils.data
from torch.autograd import Variable
class ccs_inversion(object):
'''
Model inversion is a kind of data reconstruct attack.
This class we implement the attack on neural network,
the attack goal is to generate data that is close to original data distribution.
This attack was first described in Fredrikson's paper (Algorithm 1):
"Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" (CCS2015)
-----------------------------NOTICE---------------------------
If the model's output layer doesn't contain Softmax layer, please add it manually.
And parameters will influence the quality of the reconstructed data significantly.
--------------------------------------------------------------
Args:
------------------------
:param target_model: the target model which we are trying to reconstruct its training dataset
:param input_size: the size of the model's input
:param output_size: the size of the model's output
:param target_label: the reconstructed output is belong to this class
:param param_alpha: the number of iteration round
:param param_beta, gamma, lambda: the hyperparameters in paper
'''
def __init__(self, target_model, input_size, output_size, target_label, param_alpha, param_beta, param_gamma, param_lambda, device):
self.target_model = target_model
self.input_size = input_size
self.output_size = output_size
self.target_label = target_label
self.param_alpha = param_alpha
self.param_beta = param_beta
self.param_gamma = param_gamma
self.param_lambda = param_lambda
self.device = device
self.target_model.to(self.device).eval()
def model_invert(self):
current_x = []
cost_x = []
current_x.append(Variable(torch.from_numpy(np.zeros(self.input_size, dtype=np.uint8))).float().to(self.device))
for i in range(self.param_alpha):
cost_x.append(self.invert_cost(current_x[i]).to(self.device))
cost_x[i].backward()
current_x.append((current_x[i] - self.param_lambda * current_x[i].grad).data)
if self.invert_cost(current_x[i + 1]) <= self.param_gamma:
print('Target cost value achieved')
break
elif i >= self.param_beta and self.invert_cost(current_x[i + 1]) >= max(cost_x[self.param_beta:i + 1]):
print('Exceed beta')
break
i = cost_x.index(min(cost_x))
return current_x[i]
def invert_cost(self, input_x):
return 1 - self.target_model(input_x.requires_grad_(True))[0][self.target_label]
def reverse_mse(self, ori_dataset):
'''
output the average MSE value of different classes
:param ori_dataset: the data used to train the target model, please make sure setting the batch size as 1.
:return: MSE value
'''
reverse_data = []
for i in range(self.output_size):
self.target_label = i
a = self.model_invert()
reverse_data.append(a)
class_avg = [Variable(torch.from_numpy(np.zeros(self.input_size, dtype=np.uint8))).float().to(self.device) for _ in range(self.output_size)]
class_mse = [0 for _ in range(self.output_size)]
class_count = [0 for _ in range(self.output_size)]
for x, y in ori_dataset:
x, y = x.to(self.device), y.to(self.device)
class_avg[y] = class_avg[y] + x
class_count[y] = class_count[y] + 1
for i in range(self.output_size):
class_mse[i] = self.figure_mse(class_avg[i] / class_count[i], (reverse_data[i]))
all_class_avg_mse = 0
for i in range(self.output_size):
all_class_avg_mse = all_class_avg_mse + class_mse[i]
return all_class_avg_mse / self.output_size
def figure_mse(self, recover_fig, ori_fig):
'''
:param recover_fig: figure recovered by model inversion attack, type:
:param ori_fig: figure in the training dataset
:return: MSE value of these two figures
'''
diff = nn.MSELoss()
return diff(recover_fig, ori_fig)
def revealer_inversion(G, D, T, E, iden, device, noise = 100,lr=1e-3, momentum=0.9, lamda=100, iter_times=1500, clip_range=1):
'''
This model inversion attack was proposed by Zhang et al. in CVPR20
"The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks"
'''
iden = iden.view(-1).long().to(device)
G, D, T, E = G.to(device), D.to(device), T.to(device), E.to(device)
criterion = nn.CrossEntropyLoss().to(device)
bs = iden.shape[0]
G.eval()
D.eval()
T.eval()
max_score = torch.zeros(bs)
max_iden = torch.zeros(bs)
z_hat = torch.zeros(bs, noise,1,1)
cnt = 0
for random_seed_sudo in range(10):
tf = time.time()
random_seed = random.randint(0,200)
torch.manual_seed(random_seed)
torch.cuda.manual_seed(random_seed)
np.random.seed(random_seed)
random.seed(random_seed)
z = torch.randn(bs, noise, 1, 1).to(device).float()
z.requires_grad = True
v = torch.zeros(bs, noise, 1, 1).to(device).float()
for i in range(iter_times):
fake = G(z)
label = D(fake)
out = T(fake)
if z.grad is not None:
z.grad.data.zero_()
Prior_Loss = - label.mean()
Iden_Loss = criterion(out, iden)
Total_Loss = Prior_Loss + lamda * Iden_Loss
Total_Loss.backward()
v_prev = v.clone()
gradient = z.grad.data
v = momentum * v - lr * gradient
z = z + ( - momentum * v_prev + (1 + momentum) * v)
z = torch.clamp(z.detach(), -clip_range, clip_range).float()
z.requires_grad = True
Prior_Loss_val = Prior_Loss.item()
Iden_Loss_val = Iden_Loss.item()
if (i + 1) % 300 == 0:
fake_img = G(z.detach())
eval_prob = E(fake_img)
eval_iden = torch.argmax(eval_prob, dim=1).view(-1)
acc = iden.eq(eval_iden.long()).sum().item() * 1.0 / bs
fake = G(z)
score = T(fake)
eval_prob = E(fake)
_, eval_iden = torch.max(eval_prob, dim=1)
for i in range(bs):
_, gtl = torch.max(score, 1)
gt = gtl[i].item()
if score[i, gt].item() > max_score[i].item():
max_score[i] = score[i, gt]
max_iden[i] = eval_iden[i]
z_hat[i, :] = z[i, :]
if eval_iden[i].item() == gt:
cnt += 1
print("Acc:{:.2f}\t".format(cnt * 1.0 / (bs*10)))
return cnt * 1.0 / (bs*10)
def load_data(PATH_target, PATH_evaluation, target_model, evaluate_model):
'''
Evaluate model is used to predict the identity based on the input reconstructed image.
If the evaluation classifier achieves high accuracy, the reconstructed image is considered to expose
private information about the target label.
The evaluate model should be different from the target network because the reconstructed images may
incorporate features that overfit the target network while being semantically meaningless.
Moreover, the evaluation classifier should be highly performant.
'''
target_model.load_state_dict(torch.load(PATH_target))
evaluate_model.load_state_dict(torch.load(PATH_evaluation))
print("Finished Loading")
return target_model, evaluate_model
def prepare_GAN(data_type, discriminator, generator, PATH_1, PATH_2):
discriminator.load_state_dict(torch.load(PATH_1))
generator.load_state_dict(torch.load(PATH_2))
iden = torch.zeros(10)
if data_type.lower() == 'stl10' or data_type.lower() == 'fmnist':
for i in range(10):
iden[i] = i
elif data_type.lower() == 'utkface':
for i in range(10):
iden[i] = i % 4
elif data_type.lower() == 'celeba':
for i in range(10):
iden[i] = i % 8
return discriminator, generator, iden | 8,409 | 36.713004 | 148 | py |
ML-Doctor | ML-Doctor-main/doctor/meminf.py | import os
import glob
import torch
import pickle
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
np.set_printoptions(threshold=np.inf)
from opacus import PrivacyEngine
from torch.optim import lr_scheduler
from sklearn.metrics import f1_score, roc_auc_score
def weights_init(m):
if isinstance(m, nn.Conv2d):
nn.init.normal_(m.weight.data)
m.bias.data.fill_(0)
elif isinstance(m,nn.Linear):
nn.init.xavier_normal_(m.weight)
nn.init.constant_(m.bias, 0)
class shadow():
def __init__(self, trainloader, testloader, model, device, use_DP, noise, norm, loss, optimizer, delta):
self.delta = delta
self.use_DP = use_DP
self.device = device
self.model = model.to(self.device)
self.trainloader = trainloader
self.testloader = testloader
self.criterion = loss
self.optimizer = optimizer
self.noise_multiplier, self.max_grad_norm = noise, norm
if self.use_DP:
self.privacy_engine = PrivacyEngine()
self.model, self.optimizer, self.trainloader = self.privacy_engine.make_private(
module=self.model,
optimizer=self.optimizer,
data_loader=self.trainloader,
noise_multiplier=self.noise_multiplier,
max_grad_norm=self.max_grad_norm,
)
# self.model = module_modification.convert_batchnorm_modules(self.model)
# inspector = DPModelInspector()
# inspector.validate(self.model)
# privacy_engine = PrivacyEngine(
# self.model,
# batch_size=batch_size,
# sample_size=len(self.trainloader.dataset),
# alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),
# noise_multiplier=self.noise_multiplier,
# max_grad_norm=self.max_grad_norm,
# secure_rng=False,
# )
print( 'noise_multiplier: %.3f | max_grad_norm: %.3f' % (self.noise_multiplier, self.max_grad_norm))
# privacy_engine.attach(self.optimizer)
self.scheduler = lr_scheduler.MultiStepLR(self.optimizer, [50, 100], 0.1)
# Training
def train(self):
self.model.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(self.trainloader):
inputs, targets = inputs.to(self.device), targets.to(self.device)
self.optimizer.zero_grad()
outputs = self.model(inputs)
loss = self.criterion(outputs, targets)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
if self.use_DP:
epsilon = self.privacy_engine.accountant.get_epsilon(delta=self.delta)
# epsilon, best_alpha = self.optimizer.privacy_engine.get_privacy_spent(1e-5)
print("\u03B5: %.3f \u03B4: 1e-5" % (epsilon))
self.scheduler.step()
print( 'Train Acc: %.3f%% (%d/%d) | Loss: %.3f' % (100.*correct/total, correct, total, 1.*train_loss/batch_idx))
return 1.*correct/total
def saveModel(self, path):
torch.save(self.model.state_dict(), path)
def get_noise_norm(self):
return self.noise_multiplier, self.max_grad_norm
def test(self):
self.model.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for inputs, targets in self.testloader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
outputs = self.model(inputs)
loss = self.criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/total, correct, total))
return 1.*correct/total
class distillation_training():
def __init__(self, PATH, trainloader, testloader, model, teacher, device, optimizer, T, alpha):
self.device = device
self.model = model.to(self.device)
self.trainloader = trainloader
self.testloader = testloader
self.PATH = PATH
self.teacher = teacher.to(self.device)
self.teacher.load_state_dict(torch.load(self.PATH))
self.teacher.eval()
self.criterion = nn.KLDivLoss(reduction='batchmean')
self.optimizer = optimizer
self.scheduler = lr_scheduler.MultiStepLR(self.optimizer, [50, 100], 0.1)
self.T = T
self.alpha = alpha
def distillation_loss(self, y, labels, teacher_scores, T, alpha):
loss = self.criterion(F.log_softmax(y/T, dim=1), F.softmax(teacher_scores/T, dim=1))
loss = loss * (T*T * alpha) + F.cross_entropy(y, labels) * (1. - alpha)
return loss
def train(self):
self.model.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, [targets, _]) in enumerate(self.trainloader):
inputs, targets = inputs.to(self.device), targets.to(self.device)
self.optimizer.zero_grad()
outputs = self.model(inputs)
teacher_output = self.teacher(inputs)
teacher_output = teacher_output.detach()
loss = self.distillation_loss(outputs, targets, teacher_output, T=self.T, alpha=self.alpha)
loss.backward()
self.optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
self.scheduler.step()
print( 'Train Acc: %.3f%% (%d/%d) | Loss: %.3f' % (100.*correct/total, correct, total, 1.*train_loss/batch_idx))
return 1.*correct/total
def saveModel(self, path):
torch.save(self.model.state_dict(), path)
def test(self):
self.model.eval()
correct = 0
total = 0
with torch.no_grad():
for inputs, [targets, _] in self.testloader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
outputs = self.model(inputs)
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/total, correct, total))
return 1.*correct/total
class attack_for_blackbox():
def __init__(self, SHADOW_PATH, TARGET_PATH, ATTACK_SETS, attack_train_loader, attack_test_loader, target_model, shadow_model, attack_model, device):
self.device = device
self.TARGET_PATH = TARGET_PATH
self.SHADOW_PATH = SHADOW_PATH
self.ATTACK_SETS = ATTACK_SETS
self.target_model = target_model.to(self.device)
self.shadow_model = shadow_model.to(self.device)
self.target_model.load_state_dict(torch.load(self.TARGET_PATH))
self.shadow_model.load_state_dict(torch.load(self.SHADOW_PATH))
self.target_model.eval()
self.shadow_model.eval()
self.attack_train_loader = attack_train_loader
self.attack_test_loader = attack_test_loader
self.attack_model = attack_model.to(self.device)
torch.manual_seed(0)
self.attack_model.apply(weights_init)
self.criterion = nn.CrossEntropyLoss()
self.optimizer = optim.Adam(self.attack_model.parameters(), lr=1e-5)
def _get_data(self, model, inputs, targets):
result = model(inputs)
output, _ = torch.sort(result, descending=True)
# results = F.softmax(results[:,:5], dim=1)
_, predicts = result.max(1)
prediction = predicts.eq(targets).float()
# prediction = []
# for predict in predicts:
# prediction.append([1,] if predict else [0,])
# prediction = torch.Tensor(prediction)
# final_inputs = torch.cat((results, prediction), 1)
# print(final_inputs.shape)
return output, prediction.unsqueeze(-1)
def prepare_dataset(self):
with open(self.ATTACK_SETS + "train.p", "wb") as f:
for inputs, targets, members in self.attack_train_loader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
output, prediction = self._get_data(self.shadow_model, inputs, targets)
# output = output.cpu().detach().numpy()
pickle.dump((output, prediction, members), f)
print("Finished Saving Train Dataset")
with open(self.ATTACK_SETS + "test.p", "wb") as f:
for inputs, targets, members in self.attack_test_loader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
output, prediction = self._get_data(self.target_model, inputs, targets)
# output = output.cpu().detach().numpy()
pickle.dump((output, prediction, members), f)
print("Finished Saving Test Dataset")
def train(self, epoch, result_path):
self.attack_model.train()
batch_idx = 1
train_loss = 0
correct = 0
total = 0
final_train_gndtrth = []
final_train_predict = []
final_train_probabe = []
final_result = []
with open(self.ATTACK_SETS + "train.p", "rb") as f:
while(True):
try:
output, prediction, members = pickle.load(f)
output, prediction, members = output.to(self.device), prediction.to(self.device), members.to(self.device)
results = self.attack_model(output, prediction)
results = F.softmax(results, dim=1)
losses = self.criterion(results, members)
losses.backward()
self.optimizer.step()
train_loss += losses.item()
_, predicted = results.max(1)
total += members.size(0)
correct += predicted.eq(members).sum().item()
if epoch:
final_train_gndtrth.append(members)
final_train_predict.append(predicted)
final_train_probabe.append(results[:, 1])
batch_idx += 1
except EOFError:
break
if epoch:
final_train_gndtrth = torch.cat(final_train_gndtrth, dim=0).cpu().detach().numpy()
final_train_predict = torch.cat(final_train_predict, dim=0).cpu().detach().numpy()
final_train_probabe = torch.cat(final_train_probabe, dim=0).cpu().detach().numpy()
train_f1_score = f1_score(final_train_gndtrth, final_train_predict)
train_roc_auc_score = roc_auc_score(final_train_gndtrth, final_train_probabe)
final_result.append(train_f1_score)
final_result.append(train_roc_auc_score)
with open(result_path, "wb") as f:
pickle.dump((final_train_gndtrth, final_train_predict, final_train_probabe), f)
print("Saved Attack Train Ground Truth and Predict Sets")
print("Train F1: %f\nAUC: %f" % (train_f1_score, train_roc_auc_score))
final_result.append(1.*correct/total)
print( 'Train Acc: %.3f%% (%d/%d) | Loss: %.3f' % (100.*correct/total, correct, total, 1.*train_loss/batch_idx))
return final_result
def test(self, epoch, result_path):
self.attack_model.eval()
batch_idx = 1
correct = 0
total = 0
final_test_gndtrth = []
final_test_predict = []
final_test_probabe = []
final_result = []
with torch.no_grad():
with open(self.ATTACK_SETS + "test.p", "rb") as f:
while(True):
try:
output, prediction, members = pickle.load(f)
output, prediction, members = output.to(self.device), prediction.to(self.device), members.to(self.device)
results = self.attack_model(output, prediction)
_, predicted = results.max(1)
total += members.size(0)
correct += predicted.eq(members).sum().item()
results = F.softmax(results, dim=1)
if epoch:
final_test_gndtrth.append(members)
final_test_predict.append(predicted)
final_test_probabe.append(results[:, 1])
batch_idx += 1
except EOFError:
break
if epoch:
final_test_gndtrth = torch.cat(final_test_gndtrth, dim=0).cpu().numpy()
final_test_predict = torch.cat(final_test_predict, dim=0).cpu().numpy()
final_test_probabe = torch.cat(final_test_probabe, dim=0).cpu().numpy()
test_f1_score = f1_score(final_test_gndtrth, final_test_predict)
test_roc_auc_score = roc_auc_score(final_test_gndtrth, final_test_probabe)
final_result.append(test_f1_score)
final_result.append(test_roc_auc_score)
with open(result_path, "wb") as f:
pickle.dump((final_test_gndtrth, final_test_predict, final_test_probabe), f)
print("Saved Attack Test Ground Truth and Predict Sets")
print("Test F1: %f\nAUC: %f" % (test_f1_score, test_roc_auc_score))
final_result.append(1.*correct/total)
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/(1.0*total), correct, total))
return final_result
def delete_pickle(self):
train_file = glob.glob(self.ATTACK_SETS +"train.p")
for trf in train_file:
os.remove(trf)
test_file = glob.glob(self.ATTACK_SETS +"test.p")
for tef in test_file:
os.remove(tef)
def saveModel(self, path):
torch.save(self.attack_model.state_dict(), path)
class attack_for_whitebox():
def __init__(self, TARGET_PATH, SHADOW_PATH, ATTACK_SETS, attack_train_loader, attack_test_loader, target_model, shadow_model, attack_model, device, class_num):
self.device = device
self.class_num = class_num
self.ATTACK_SETS = ATTACK_SETS
self.TARGET_PATH = TARGET_PATH
self.target_model = target_model.to(self.device)
self.target_model.load_state_dict(torch.load(self.TARGET_PATH))
self.target_model.eval()
self.SHADOW_PATH = SHADOW_PATH
self.shadow_model = shadow_model.to(self.device)
self.shadow_model.load_state_dict(torch.load(self.SHADOW_PATH))
self.shadow_model.eval()
self.attack_train_loader = attack_train_loader
self.attack_test_loader = attack_test_loader
self.attack_model = attack_model.to(self.device)
torch.manual_seed(0)
self.attack_model.apply(weights_init)
self.target_criterion = nn.CrossEntropyLoss(reduction='none')
self.attack_criterion = nn.CrossEntropyLoss()
#self.optimizer = optim.SGD(self.attack_model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
self.optimizer = optim.Adam(self.attack_model.parameters(), lr=1e-5)
self.attack_train_data = None
self.attack_test_data = None
def _get_data(self, model, inputs, targets):
results = model(inputs)
# outputs = F.softmax(outputs, dim=1)
losses = self.target_criterion(results, targets)
gradients = []
for loss in losses:
loss.backward(retain_graph=True)
gradient_list = reversed(list(model.named_parameters()))
for name, parameter in gradient_list:
if 'weight' in name:
gradient = parameter.grad.clone() # [column[:, None], row].resize_(100,100)
gradient = gradient.unsqueeze_(0)
gradients.append(gradient.unsqueeze_(0))
break
labels = []
for num in targets:
label = [0 for i in range(self.class_num)]
label[num.item()] = 1
labels.append(label)
gradients = torch.cat(gradients, dim=0)
losses = losses.unsqueeze_(1).detach()
outputs, _ = torch.sort(results, descending=True)
labels = torch.Tensor(labels)
return outputs, losses, gradients, labels
def prepare_dataset(self):
with open(self.ATTACK_SETS + "train.p", "wb") as f:
for inputs, targets, members in self.attack_train_loader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
output, loss, gradient, label = self._get_data(self.shadow_model, inputs, targets)
pickle.dump((output, loss, gradient, label, members), f)
print("Finished Saving Train Dataset")
with open(self.ATTACK_SETS + "test.p", "wb") as f:
for inputs, targets, members in self.attack_test_loader:
inputs, targets = inputs.to(self.device), targets.to(self.device)
output, loss, gradient, label = self._get_data(self.target_model, inputs, targets)
pickle.dump((output, loss, gradient, label, members), f)
# pickle.dump((output, loss, gradient, label, members), open(self.ATTACK_PATH + "test.p", "wb"))
print("Finished Saving Test Dataset")
def train(self, epoch, result_path):
self.attack_model.train()
batch_idx = 1
train_loss = 0
correct = 0
total = 0
final_train_gndtrth = []
final_train_predict = []
final_train_probabe = []
final_result = []
with open(self.ATTACK_SETS + "train.p", "rb") as f:
while(True):
try:
output, loss, gradient, label, members = pickle.load(f)
output, loss, gradient, label, members = output.to(self.device), loss.to(self.device), gradient.to(self.device), label.to(self.device), members.to(self.device)
results = self.attack_model(output, loss, gradient, label)
# results = F.softmax(results, dim=1)
losses = self.attack_criterion(results, members)
losses.backward()
self.optimizer.step()
train_loss += losses.item()
_, predicted = results.max(1)
total += members.size(0)
correct += predicted.eq(members).sum().item()
if epoch:
final_train_gndtrth.append(members)
final_train_predict.append(predicted)
final_train_probabe.append(results[:, 1])
batch_idx += 1
except EOFError:
break
if epoch:
final_train_gndtrth = torch.cat(final_train_gndtrth, dim=0).cpu().detach().numpy()
final_train_predict = torch.cat(final_train_predict, dim=0).cpu().detach().numpy()
final_train_probabe = torch.cat(final_train_probabe, dim=0).cpu().detach().numpy()
train_f1_score = f1_score(final_train_gndtrth, final_train_predict)
train_roc_auc_score = roc_auc_score(final_train_gndtrth, final_train_probabe)
final_result.append(train_f1_score)
final_result.append(train_roc_auc_score)
with open(result_path, "wb") as f:
pickle.dump((final_train_gndtrth, final_train_predict, final_train_probabe), f)
print("Saved Attack Train Ground Truth and Predict Sets")
print("Train F1: %f\nAUC: %f" % (train_f1_score, train_roc_auc_score))
final_result.append(1.*correct/total)
print( 'Train Acc: %.3f%% (%d/%d) | Loss: %.3f' % (100.*correct/total, correct, total, 1.*train_loss/batch_idx))
return final_result
def test(self, epoch, result_path):
self.attack_model.eval()
batch_idx = 1
correct = 0
total = 0
final_test_gndtrth = []
final_test_predict = []
final_test_probabe = []
final_result = []
with torch.no_grad():
with open(self.ATTACK_SETS + "test.p", "rb") as f:
while(True):
try:
output, loss, gradient, label, members = pickle.load(f)
output, loss, gradient, label, members = output.to(self.device), loss.to(self.device), gradient.to(self.device), label.to(self.device), members.to(self.device)
results = self.attack_model(output, loss, gradient, label)
_, predicted = results.max(1)
total += members.size(0)
correct += predicted.eq(members).sum().item()
results = F.softmax(results, dim=1)
if epoch:
final_test_gndtrth.append(members)
final_test_predict.append(predicted)
final_test_probabe.append(results[:, 1])
batch_idx += 1
except EOFError:
break
if epoch:
final_test_gndtrth = torch.cat(final_test_gndtrth, dim=0).cpu().numpy()
final_test_predict = torch.cat(final_test_predict, dim=0).cpu().numpy()
final_test_probabe = torch.cat(final_test_probabe, dim=0).cpu().numpy()
test_f1_score = f1_score(final_test_gndtrth, final_test_predict)
test_roc_auc_score = roc_auc_score(final_test_gndtrth, final_test_probabe)
final_result.append(test_f1_score)
final_result.append(test_roc_auc_score)
with open(result_path, "wb") as f:
pickle.dump((final_test_gndtrth, final_test_predict, final_test_probabe), f)
print("Saved Attack Test Ground Truth and Predict Sets")
print("Test F1: %f\nAUC: %f" % (test_f1_score, test_roc_auc_score))
final_result.append(1.*correct/total)
print( 'Test Acc: %.3f%% (%d/%d)' % (100.*correct/(1.0*total), correct, total))
return final_result
def delete_pickle(self):
train_file = glob.glob(self.ATTACK_SETS +"train.p")
for trf in train_file:
os.remove(trf)
test_file = glob.glob(self.ATTACK_SETS +"test.p")
for tef in test_file:
os.remove(tef)
def saveModel(self, path):
torch.save(self.attack_model.state_dict(), path)
def train_shadow_model(PATH, device, shadow_model, train_loader, test_loader, use_DP, noise, norm, loss, optimizer, delta):
model = shadow(train_loader, test_loader, shadow_model, device, use_DP, noise, norm, loss, optimizer, delta)
acc_train = 0
acc_test = 0
for i in range(100):
print("<======================= Epoch " + str(i+1) + " =======================>")
print("shadow training")
acc_train = model.train()
print("shadow testing")
acc_test = model.test()
overfitting = round(acc_train - acc_test, 6)
print('The overfitting rate is %s' % overfitting)
FILE_PATH = PATH + "_shadow.pth"
model.saveModel(FILE_PATH)
print("saved shadow model!!!")
print("Finished training!!!")
return acc_train, acc_test, overfitting
def train_shadow_distillation(MODEL_PATH, DL_PATH, device, target_model, student_model, train_loader, test_loader):
distillation = distillation_training(MODEL_PATH, train_loader, test_loader, student_model, target_model, device)
for i in range(100):
print("<======================= Epoch " + str(i+1) + " =======================>")
print("shadow distillation training")
acc_distillation_train = distillation.train()
print("shadow distillation testing")
acc_distillation_test = distillation.test()
overfitting = round(acc_distillation_train - acc_distillation_test, 6)
print('The overfitting rate is %s' % overfitting)
result_path = DL_PATH + "_shadow.pth"
distillation.saveModel(result_path)
print("Saved shadow model!!!")
print("Finished training!!!")
return acc_distillation_train, acc_distillation_test, overfitting
def get_attack_dataset_without_shadow(train_set, test_set, batch_size):
mem_length = len(train_set)//3
nonmem_length = len(test_set)//3
mem_train, mem_test, _ = torch.utils.data.random_split(train_set, [mem_length, mem_length, len(train_set)-(mem_length*2)])
nonmem_train, nonmem_test, _ = torch.utils.data.random_split(test_set, [nonmem_length, nonmem_length, len(test_set)-(nonmem_length*2)])
mem_train, mem_test, nonmem_train, nonmem_test = list(mem_train), list(mem_test), list(nonmem_train), list(nonmem_test)
for i in range(len(mem_train)):
mem_train[i] = mem_train[i] + (1,)
for i in range(len(nonmem_train)):
nonmem_train[i] = nonmem_train[i] + (0,)
for i in range(len(nonmem_test)):
nonmem_test[i] = nonmem_test[i] + (0,)
for i in range(len(mem_test)):
mem_test[i] = mem_test[i] + (1,)
attack_train = mem_train + nonmem_train
attack_test = mem_test + nonmem_test
attack_trainloader = torch.utils.data.DataLoader(
attack_train, batch_size=batch_size, shuffle=True, num_workers=2)
attack_testloader = torch.utils.data.DataLoader(
attack_test, batch_size=batch_size, shuffle=True, num_workers=2)
return attack_trainloader, attack_testloader
def get_attack_dataset_with_shadow(target_train, target_test, shadow_train, shadow_test, batch_size):
mem_train, nonmem_train, mem_test, nonmem_test = list(shadow_train), list(shadow_test), list(target_train), list(target_test)
for i in range(len(mem_train)):
mem_train[i] = mem_train[i] + (1,)
for i in range(len(nonmem_train)):
nonmem_train[i] = nonmem_train[i] + (0,)
for i in range(len(nonmem_test)):
nonmem_test[i] = nonmem_test[i] + (0,)
for i in range(len(mem_test)):
mem_test[i] = mem_test[i] + (1,)
train_length = min(len(mem_train), len(nonmem_train))
test_length = min(len(mem_test), len(nonmem_test))
mem_train, _ = torch.utils.data.random_split(mem_train, [train_length, len(mem_train) - train_length])
non_mem_train, _ = torch.utils.data.random_split(nonmem_train, [train_length, len(nonmem_train) - train_length])
mem_test, _ = torch.utils.data.random_split(mem_test, [test_length, len(mem_test) - test_length])
non_mem_test, _ = torch.utils.data.random_split(nonmem_test, [test_length, len(nonmem_test) - test_length])
attack_train = mem_train + non_mem_train
attack_test = mem_test + non_mem_test
attack_trainloader = torch.utils.data.DataLoader(
attack_train, batch_size=batch_size, shuffle=True, num_workers=2)
attack_testloader = torch.utils.data.DataLoader(
attack_test, batch_size=batch_size, shuffle=True, num_workers=2)
return attack_trainloader, attack_testloader
# black shadow
def attack_mode0(TARGET_PATH, SHADOW_PATH, ATTACK_PATH, device, attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, get_attack_set, num_classes):
MODELS_PATH = ATTACK_PATH + "_meminf_attack0.pth"
RESULT_PATH = ATTACK_PATH + "_meminf_attack0.p"
ATTACK_SETS = ATTACK_PATH + "_meminf_attack_mode0_"
attack = attack_for_blackbox(SHADOW_PATH, TARGET_PATH, ATTACK_SETS, attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, device)
if get_attack_set:
attack.delete_pickle()
attack.prepare_dataset()
for i in range(50):
flag = 1 if i == 49 else 0
print("Epoch %d :" % (i+1))
res_train = attack.train(flag, RESULT_PATH)
res_test = attack.test(flag, RESULT_PATH)
attack.saveModel(MODELS_PATH)
print("Saved Attack Model")
return res_train, res_test
# black partial
def attack_mode1(TARGET_PATH, ATTACK_PATH, device, attack_trainloader, attack_testloader, target_model, attack_model, get_attack_set, num_classes):
MODELS_PATH = ATTACK_PATH + "_meminf_attack1.pth"
RESULT_PATH = ATTACK_PATH + "_meminf_attack1.p"
ATTACK_SETS = ATTACK_PATH + "_meminf_attack_mode1_"
attack = attack_for_blackbox(TARGET_PATH, TARGET_PATH, ATTACK_SETS, attack_trainloader, attack_testloader, target_model, target_model, attack_model, device)
if get_attack_set:
attack.delete_pickle()
attack.prepare_dataset()
for i in range(50):
flag = 1 if i == 49 else 0
print("Epoch %d :" % (i+1))
res_train = attack.train(flag, RESULT_PATH)
res_test = attack.test(flag, RESULT_PATH)
attack.saveModel(MODELS_PATH)
print("Saved Attack Model")
return res_train, res_test
# white partial
def attack_mode2(TARGET_PATH, ATTACK_PATH, device, attack_trainloader, attack_testloader, target_model, attack_model, get_attack_set, num_classes):
MODELS_PATH = ATTACK_PATH + "_meminf_attack2.pth"
RESULT_PATH = ATTACK_PATH + "_meminf_attack2.p"
ATTACK_SETS = ATTACK_PATH + "_meminf_attack_mode2_"
attack = attack_for_whitebox(TARGET_PATH, TARGET_PATH, ATTACK_SETS, attack_trainloader, attack_testloader, target_model, target_model, attack_model, device, num_classes)
if get_attack_set:
attack.delete_pickle()
attack.prepare_dataset()
for i in range(50):
flag = 1 if i == 49 else 0
print("Epoch %d :" % (i+1))
res_train = attack.train(flag, RESULT_PATH)
res_test = attack.test(flag, RESULT_PATH)
attack.saveModel(MODELS_PATH)
print("Saved Attack Model")
return res_train, res_test
# white shadow
def attack_mode3(TARGET_PATH, SHADOW_PATH, ATTACK_PATH, device, attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, get_attack_set, num_classes):
MODELS_PATH = ATTACK_PATH + "_meminf_attack3.pth"
RESULT_PATH = ATTACK_PATH + "_meminf_attack3.p"
ATTACK_SETS = ATTACK_PATH + "_meminf_attack_mode3_"
attack = attack_for_whitebox(TARGET_PATH, SHADOW_PATH, ATTACK_SETS, attack_trainloader, attack_testloader, target_model, shadow_model, attack_model, device, num_classes)
if get_attack_set:
attack.delete_pickle()
attack.prepare_dataset()
for i in range(50):
flag = 1 if i == 49 else 0
print("Epoch %d :" % (i+1))
res_train = attack.train(flag, RESULT_PATH)
res_test = attack.test(flag, RESULT_PATH)
attack.saveModel(MODELS_PATH)
print("Saved Attack Model")
return res_train, res_test
def get_gradient_size(model):
gradient_size = []
gradient_list = reversed(list(model.named_parameters()))
for name, parameter in gradient_list:
if 'weight' in name:
gradient_size.append(parameter.shape)
return gradient_size
| 31,612 | 37.042118 | 183 | py |
IVOS-ATNet | IVOS-ATNet-master/eval_real-world.py | from davisinteractive.session import DavisInteractiveSession
from davisinteractive import utils as interactive_utils
from davisinteractive.dataset import Davis
from davisinteractive.metrics import batched_jaccard
from libs import custom_transforms as tr, davis2017_torchdataset
import os
import numpy as np
from PIL import Image
import csv
from datetime import datetime
import torch
from torch.autograd import Variable
from torchvision import transforms
from torch.utils.data import DataLoader
from libs import utils, utils_torch
from libs.analyze_report import analyze_summary
from config import Config
from networks.atnet import ATnet
class Main_tester(object):
def __init__(self, config):
self.config = config
self.Davisclass = Davis(self.config.davis_dataset_dir)
self.current_time = datetime.now().strftime('%Y%m%d-%H%M%S')
self._palette = Image.open(self.config.palette_dir).getpalette()
self.save_res_dir = str()
self.save_log_dir = str()
self.save_logger = None
self.save_csvsummary_dir = str()
self.net = ATnet()
self.net.cuda()
self.net.eval()
self.net.load_state_dict(torch.load(self.config.test_load_state_dir))
# To implement ordered test
self.scr_indices = [1, 2, 3]
self.max_nb_interactions = 8
self.max_time = self.max_nb_interactions * 30
self.scr_samples = []
for v in sorted(self.Davisclass.sets[self.config.test_subset]):
for idx in self.scr_indices:
self.scr_samples.append((v, idx))
self.img_size, self.num_frames, self.n_objects, self.final_masks, self.tmpdict_siact = None, None, None, None, None
self.pad_info, self.hpad1, self.wpad1, self.hpad2, self.wpad2 = None, None, None, None, None
def run_for_diverse_metrics(self, ):
with torch.no_grad():
for metric in self.config.test_metric_list:
if metric == 'J':
dir_name = os.path.split(os.path.split(__file__)[0])[1] + '[J]_' + self.current_time
elif metric == 'J_AND_F':
dir_name = os.path.split(os.path.split(__file__)[0])[1] + '[JF]_' + self.current_time
else:
dir_name = None
print("Impossible metric is contained in config.test_metric_list!")
raise NotImplementedError()
self.save_res_dir = os.path.join(self.config.test_result_dir, dir_name)
utils.mkdir(self.save_res_dir)
self.save_csvsummary_dir = os.path.join(self.save_res_dir, 'summary_in_csv.csv')
self.save_log_dir = os.path.join(self.save_res_dir, 'test_logs.txt')
self.save_logger = utils.logger(self.save_log_dir)
self.save_logger.printNlog(dir_name)
curr_path = os.path.dirname(os.path.abspath(__file__))
os.system('cp {}/config.py {}/config.py'.format(curr_path, self.save_res_dir))
self.run_IVOS(metric)
def run_IVOS(self, metric):
seen_seq = {}
numseq, tmpseq = 0, ''
output_dict = dict()
output_dict['average_objs_iou'] = dict()
output_dict['average_iact_iou'] = np.zeros(self.max_nb_interactions)
output_dict['annotated_frames'] = dict()
with open(self.save_csvsummary_dir, mode='a') as csv_file:
writer = csv.writer(csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['sequence', 'obj_idx', 'scr_idx'] + ['round-' + str(i + 1) for i in range(self.max_nb_interactions)])
with DavisInteractiveSession(host=self.config.test_host,
user_key=self.config.test_userkey,
davis_root=self.config.davis_dataset_dir,
subset=self.config.test_subset,
report_save_dir=self.save_res_dir,
max_nb_interactions=self.max_nb_interactions,
max_time=self.max_time,
metric_to_optimize=metric) as sess:
sess.connector.service.robot.min_nb_nodes = self.config.test_min_nb_nodes
sess.samples = self.scr_samples
# sess.samples = [('dog', 3)]
while sess.next():
# Get the current iteration scribbles
self.sequence, scribbles, first_scribble = sess.get_scribbles(only_last=False)
if first_scribble:
anno_dict = {'frames': [], 'annotated_masks': [], 'masks_tobe_modified': []}
n_interaction = 1
info = Davis.dataset[self.sequence]
self.img_size = info['image_size'][::-1]
self.num_frames = info['num_frames']
self.n_objects = info['num_objects']
info = None
seen_seq[self.sequence] = 1 if self.sequence not in seen_seq.keys() else seen_seq[self.sequence] + 1
scr_id = seen_seq[self.sequence]
self.final_masks = np.zeros([self.num_frames, self.img_size[0], self.img_size[1]])
self.pad_info = utils.apply_pad(self.final_masks[0])[1]
self.hpad1, self.wpad1 = self.pad_info[0][0], self.pad_info[1][0]
self.hpad2, self.wpad2 = self.pad_info[0][1], self.pad_info[1][1]
self.h_ds, self.w_ds = int((self.img_size[0] + sum(self.pad_info[0])) / 4), int((self.img_size[1] + sum(self.pad_info[1])) / 4)
self.anno_6chEnc_r5_list = []
self.anno_3chEnc_r5_list = []
self.prob_map_of_frames = torch.zeros((self.num_frames, self.n_objects, 4 * self.h_ds, 4 * self.w_ds)).cuda()
self.gt_masks = self.Davisclass.load_annotations(self.sequence)
IoU_over_eobj = []
else:
n_interaction += 1
self.save_logger.printNlog('\nRunning sequence {} in (scribble index: {}) (round: {})'
.format(self.sequence, sess.samples[sess.sample_idx][1], n_interaction))
annotated_now = interactive_utils.scribbles.annotated_frames(sess.sample_last_scribble)[0]
anno_dict['frames'].append(annotated_now) # Where we save annotated frames
anno_dict['masks_tobe_modified'].append(self.final_masks[annotated_now]) # mask before modefied at the annotated frame
# Get Predicted mask & Mask decision from pred_mask
self.final_masks = self.run_VOS_singleiact(n_interaction, scribbles, anno_dict['frames']) # self.final_mask changes
if self.config.test_save_all_segs_option:
utils.mkdir(
os.path.join(self.save_res_dir, 'result_video', '{}-scr{:02d}/round{:02d}'.format(self.sequence, scr_id, n_interaction)))
for fr in range(self.num_frames):
savefname = os.path.join(self.save_res_dir, 'result_video',
'{}-scr{:02d}/round{:02d}'.format(self.sequence, scr_id, n_interaction),
'{:05d}.png'.format(fr))
tmpPIL = Image.fromarray(self.final_masks[fr].astype(np.uint8), 'P')
tmpPIL.putpalette(self._palette)
tmpPIL.save(savefname)
# Submit your prediction
sess.submit_masks(self.final_masks) # F, H, W
# print sequence name
if tmpseq != self.sequence:
tmpseq, numseq = self.sequence, numseq + 1
print(str(numseq) + ':' + str(self.sequence) + '-' + str(seen_seq[self.sequence]) + '\n')
## Visualizers and Saver
# IoU estimation
jaccard = batched_jaccard(self.gt_masks,
self.final_masks,
average_over_objects=False,
nb_objects=self.n_objects
) # frames, objid
IoU_over_eobj.append(jaccard)
anno_dict['annotated_masks'].append(self.final_masks[annotated_now]) # mask after modefied at the annotated frame
if self.max_nb_interactions == len(anno_dict['frames']): # After Lastround -> total 90 iter
seq_scrid_name = self.sequence + str(scr_id)
# IoU manager
IoU_over_eobj = np.stack(IoU_over_eobj, axis=0) # niact,frames,n_obj
IoUeveryround_perobj = np.mean(IoU_over_eobj, axis=1) # niact,n_obj
output_dict['average_iact_iou'] += np.sum(IoU_over_eobj[list(range(n_interaction)), anno_dict['frames']], axis=-1)
output_dict['annotated_frames'][seq_scrid_name] = anno_dict['frames']
# write csv
for obj_idx in range(self.n_objects):
with open(self.save_csvsummary_dir, mode='a') as csv_file:
writer = csv.writer(csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow([self.sequence, str(obj_idx + 1), str(scr_id)] + list(IoUeveryround_perobj[:, obj_idx]))
summary = sess.get_global_summary(save_file=self.save_res_dir + '/summary_' + sess.report_name[7:] + '.json')
analyze_summary(self.save_res_dir + '/summary_' + sess.report_name[7:] + '.json', metric=metric)
# final_IOU = summary['curve'][metric][-1]
average_IoU_per_round = summary['curve'][metric][1:-1]
torch.cuda.empty_cache()
model = None
return average_IoU_per_round
def run_VOS_singleiact(self, n_interaction, scribbles_data, annotated_frames):
annotated_frames_np = np.array(annotated_frames)
num_workers = 4
annotated_now = annotated_frames[-1]
scribbles_list = scribbles_data['scribbles']
seq_name = scribbles_data['sequence']
output_masks = self.final_masks.copy().astype(np.float64)
prop_list = utils.get_prop_list(annotated_frames, annotated_now, self.num_frames, proportion=self.config.test_propagation_proportion)
prop_fore = sorted(prop_list)[0]
prop_rear = sorted(prop_list)[-1]
# Interaction settings
pm_ps_ns_3ch_t = [] # n_obj,3,h,w
if n_interaction == 1:
for obj_id in range(1, self.n_objects + 1):
pos_scrimg = utils.scribble_to_image(scribbles_list, annotated_now, obj_id,
dilation=self.config.scribble_dilation_param,
prev_mask=self.final_masks[annotated_now])
pm_ps_ns_3ch_t.append(np.stack([np.ones_like(pos_scrimg) / 2, pos_scrimg, np.zeros_like(pos_scrimg)], axis=0))
pm_ps_ns_3ch_t = np.stack(pm_ps_ns_3ch_t, axis=0) # n_obj,3,h,w
# Image.fromarray((scr_img[:, :, 1] * 255).astype(np.uint8)).save('/home/six/Desktop/CVPRW_figure/judo_obj1_scr.png')
else:
for obj_id in range(1, self.n_objects + 1):
prev_round_input = (self.final_masks[annotated_now] == obj_id).astype(np.float32) # H,W
pos_scrimg, neg_scrimg = utils.scribble_to_image(scribbles_list, annotated_now, obj_id,
dilation=self.config.scribble_dilation_param,
prev_mask=self.final_masks[annotated_now], blur=True,
singleimg=False, seperate_pos_neg=True)
pm_ps_ns_3ch_t.append(np.stack([prev_round_input, pos_scrimg, neg_scrimg], axis=0))
pm_ps_ns_3ch_t = np.stack(pm_ps_ns_3ch_t, axis=0) # n_obj,3,h,w
pm_ps_ns_3ch_t = torch.from_numpy(pm_ps_ns_3ch_t).cuda()
if (prop_list[0] != annotated_now) and (prop_list.count(annotated_now) != 2):
print(str(prop_list))
raise NotImplementedError
print(str(prop_list)) # we made our proplist first backward, and then forward
composed_transforms = transforms.Compose([tr.Normalize_ApplymeanvarImage(self.config.mean, self.config.var),
tr.ToTensor()])
db_test = davis2017_torchdataset.DAVIS2017(split='val', transform=composed_transforms, root=self.config.davis_dataset_dir,
custom_frames=prop_list, seq_name=seq_name, rgb=True,
obj_id=None, no_gt=True, retname=True, prev_round_masks=self.final_masks, )
testloader = DataLoader(db_test, batch_size=1, shuffle=False, num_workers=num_workers, pin_memory=True)
flag = 0 # 1: propagating backward, 2: propagating forward
print('[{:01d} round] processing...'.format(n_interaction))
for ii, batched in enumerate(testloader):
# batched : image, scr_img, 0~fr, meta
inpdict = dict()
operating_frame = int(batched['meta']['frame_id'][0])
for inp in batched:
if inp == 'meta': continue
inpdict[inp] = Variable(batched[inp]).cuda()
inpdict['image'] = inpdict['image'].expand(self.n_objects, -1, -1, -1)
#################### Iaction ########################
if operating_frame == annotated_now: # Check the round is on interaction
if flag == 0:
flag += 1
adjacent_to_anno = True
elif flag == 1:
flag += 1
adjacent_to_anno = True
continue
else:
raise NotImplementedError
pm_ps_ns_3ch_t = torch.nn.ReflectionPad2d(self.pad_info[1] + self.pad_info[0])(pm_ps_ns_3ch_t)
inputs = torch.cat([inpdict['image'], pm_ps_ns_3ch_t], dim=1)
output_logit, anno_6chEnc_r5 = self.net.forward_ANet(inputs) # [nobj, 1, P_H, P_W], # [n_obj,2048,h/16,w/16]
output_prob_anno = torch.sigmoid(output_logit)
prob_onehot_t = output_prob_anno[:, 0].detach()
anno_3chEnc_r5, _, _, r2_prev_fromanno = self.net.encoder_3ch.forward(inpdict['image'])
self.anno_6chEnc_r5_list.append(anno_6chEnc_r5)
self.anno_3chEnc_r5_list.append(anno_3chEnc_r5)
if len(self.anno_6chEnc_r5_list) != len(annotated_frames):
raise NotImplementedError
#################### Propagation ########################
else:
# Flag [1: propagating backward, 2: propagating forward]
if adjacent_to_anno:
r2_prev = r2_prev_fromanno
predmask_prev = output_prob_anno
else:
predmask_prev = output_prob_prop
adjacent_to_anno = False
output_logit, r2_prev = self.net.forward_TNet(
self.anno_3chEnc_r5_list, inpdict['image'], self.anno_6chEnc_r5_list, r2_prev, predmask_prev) # [nobj, 1, P_H, P_W]
output_prob_prop = torch.sigmoid(output_logit)
prob_onehot_t = output_prob_prop[:, 0].detach()
smallest_alpha = 0.5
if flag == 1:
sorted_frames = annotated_frames_np[annotated_frames_np < annotated_now]
if len(sorted_frames) == 0:
alpha = 1
else:
closest_addianno_frame = np.max(sorted_frames)
alpha = smallest_alpha + (1 - smallest_alpha) * (
(operating_frame - closest_addianno_frame) / (annotated_now - closest_addianno_frame))
else:
sorted_frames = annotated_frames_np[annotated_frames_np > annotated_now]
if len(sorted_frames) == 0:
alpha = 1
else:
closest_addianno_frame = np.min(sorted_frames)
alpha = smallest_alpha + (1 - smallest_alpha) * (
(closest_addianno_frame - operating_frame) / (closest_addianno_frame - annotated_now))
prob_onehot_t = (alpha * prob_onehot_t) + ((1 - alpha) * self.prob_map_of_frames[operating_frame])
# Final mask indexing
self.prob_map_of_frames[operating_frame] = prob_onehot_t
output_masks[prop_fore:prop_rear + 1] = \
utils_torch.combine_masks_with_batch(self.prob_map_of_frames[prop_fore:prop_rear + 1],
n_obj=self.n_objects, th=self.config.test_propth
)[:, 0, self.hpad1:-self.hpad2, self.wpad1:-self.wpad2].cpu().numpy().astype(np.float) # f,h,w
torch.cuda.empty_cache()
return output_masks
if __name__ == '__main__':
config = Config()
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(config.test_gpu_id)
tester = Main_tester(config)
tester.run_for_diverse_metrics()
# try:main_val(model,
# Config,
# min_nb_nodes= min_nb_nodes,
# simplyfied_testset= simplyfied_test,tr(config.test_gpu_id)
# metric = metric)
# except: continue
| 17,873 | 50.2149 | 147 | py |
IVOS-ATNet | IVOS-ATNet-master/eval_davis-framework.py | from davisinteractive.session import DavisInteractiveSession
from davisinteractive import utils as interactive_utils
from davisinteractive.dataset import Davis
from davisinteractive.metrics import batched_jaccard
from libs import custom_transforms as tr, davis2017_torchdataset
import os
import numpy as np
from PIL import Image
import csv
from datetime import datetime
import torch
from torch.autograd import Variable
from torchvision import transforms
from torch.utils.data import DataLoader
from libs import utils, utils_torch
from libs.analyze_report import analyze_summary
from config import Config
from networks.atnet import ATnet
class Main_tester(object):
def __init__(self, config):
self.config = config
self.Davisclass = Davis(self.config.davis_dataset_dir)
self.current_time = datetime.now().strftime('%Y%m%d-%H%M%S')
self._palette = Image.open(self.config.palette_dir).getpalette()
self.save_res_dir = str()
self.save_log_dir = str()
self.save_logger = None
self.save_csvsummary_dir = str()
self.net = ATnet()
self.net.cuda()
self.net.eval()
self.net.load_state_dict(torch.load(self.config.test_load_state_dir))
# To implement ordered test
self.scr_indices = [1, 2, 3]
self.max_nb_interactions = 8
self.max_time = self.max_nb_interactions * 30
self.scr_samples = []
for v in sorted(self.Davisclass.sets[self.config.test_subset]):
for idx in self.scr_indices:
self.scr_samples.append((v, idx))
self.img_size, self.num_frames, self.n_objects, self.final_masks, self.tmpdict_siact = None, None, None, None, None
self.pad_info, self.hpad1, self.wpad1, self.hpad2, self.wpad2 = None, None, None, None, None
def run_for_diverse_metrics(self, ):
with torch.no_grad():
for metric in self.config.test_metric_list:
if metric == 'J':
dir_name = 'IVOS-ATNet_J_' + self.current_time
elif metric == 'J_AND_F':
dir_name = 'IVOS-ATNet_JF_' + self.current_time
else:
dir_name = None
print("Impossible metric is contained in config.test_metric_list!")
raise NotImplementedError()
self.save_res_dir = os.path.join(self.config.test_result_df_dir, dir_name)
utils.mkdir(self.save_res_dir)
self.save_csvsummary_dir = os.path.join(self.save_res_dir, 'summary_in_csv.csv')
self.save_log_dir = os.path.join(self.save_res_dir, 'test_logs.txt')
self.save_logger = utils.logger(self.save_log_dir)
self.save_logger.printNlog(dir_name)
curr_path = os.path.dirname(os.path.abspath(__file__))
os.system('cp {}/config.py {}/config.py'.format(curr_path, self.save_res_dir))
self.run_IVOS(metric)
def run_IVOS(self, metric):
seen_seq = {}
numseq, tmpseq = 0, ''
output_dict = dict()
output_dict['average_objs_iou'] = dict()
output_dict['average_iact_iou'] = np.zeros(self.max_nb_interactions)
output_dict['annotated_frames'] = dict()
with open(self.save_csvsummary_dir, mode='a') as csv_file:
writer = csv.writer(csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['sequence', 'obj_idx', 'scr_idx'] + ['round-' + str(i + 1) for i in range(self.max_nb_interactions)])
with DavisInteractiveSession(host=self.config.test_host,
user_key=self.config.test_userkey,
davis_root=self.config.davis_dataset_dir,
subset=self.config.test_subset,
report_save_dir=self.save_res_dir,
max_nb_interactions=self.max_nb_interactions,
max_time=self.max_time,
metric_to_optimize=metric) as sess:
sess.connector.service.robot.min_nb_nodes = self.config.test_min_nb_nodes
sess.samples = self.scr_samples
# sess.samples = [('dog', 3)]
while sess.next():
# Get the current iteration scribbles
self.sequence, scribbles, first_scribble = sess.get_scribbles(only_last=False)
if first_scribble:
anno_dict = {'frames': [], 'annotated_masks': [], 'masks_tobe_modified': []}
n_interaction = 1
info = Davis.dataset[self.sequence]
self.img_size = info['image_size'][::-1]
self.num_frames = info['num_frames']
self.n_objects = info['num_objects']
info = None
seen_seq[self.sequence] = 1 if self.sequence not in seen_seq.keys() else seen_seq[self.sequence] + 1
scr_id = seen_seq[self.sequence]
self.final_masks = np.zeros([self.num_frames, self.img_size[0], self.img_size[1]])
self.pad_info = utils.apply_pad(self.final_masks[0])[1]
self.hpad1, self.wpad1 = self.pad_info[0][0], self.pad_info[1][0]
self.hpad2, self.wpad2 = self.pad_info[0][1], self.pad_info[1][1]
self.h_ds, self.w_ds = int((self.img_size[0] + sum(self.pad_info[0])) / 4), int((self.img_size[1] + sum(self.pad_info[1])) / 4)
self.anno_6chEnc_r5_list = []
self.anno_3chEnc_r5_list = []
self.prob_map_of_frames = torch.zeros((self.num_frames, self.n_objects, 4 * self.h_ds, 4 * self.w_ds)).cuda()
self.gt_masks = self.Davisclass.load_annotations(self.sequence)
IoU_over_eobj = []
else:
n_interaction += 1
self.save_logger.printNlog('\nRunning sequence {} in (scribble index: {}) (round: {})'
.format(self.sequence, sess.samples[sess.sample_idx][1], n_interaction))
annotated_now = interactive_utils.scribbles.annotated_frames(sess.sample_last_scribble)[0]
anno_dict['frames'].append(annotated_now) # Where we save annotated frames
anno_dict['masks_tobe_modified'].append(self.final_masks[annotated_now]) # mask before modefied at the annotated frame
# Get Predicted mask & Mask decision from pred_mask
self.final_masks = self.run_VOS_singleiact(n_interaction, scribbles, anno_dict['frames']) # self.final_mask changes
if self.config.test_save_all_segs_option:
utils.mkdir(
os.path.join(self.save_res_dir, 'result_video', '{}-scr{:02d}/round{:02d}'.format(self.sequence, scr_id, n_interaction)))
for fr in range(self.num_frames):
savefname = os.path.join(self.save_res_dir, 'result_video',
'{}-scr{:02d}/round{:02d}'.format(self.sequence, scr_id, n_interaction),
'{:05d}.png'.format(fr))
tmpPIL = Image.fromarray(self.final_masks[fr].astype(np.uint8), 'P')
tmpPIL.putpalette(self._palette)
tmpPIL.save(savefname)
# Submit your prediction
sess.submit_masks(self.final_masks) # F, H, W
# print sequence name
if tmpseq != self.sequence:
tmpseq, numseq = self.sequence, numseq + 1
print(str(numseq) + ':' + str(self.sequence) + '-' + str(seen_seq[self.sequence]) + '\n')
## Visualizers and Saver
# IoU estimation
jaccard = batched_jaccard(self.gt_masks,
self.final_masks,
average_over_objects=False,
nb_objects=self.n_objects
) # frames, objid
IoU_over_eobj.append(jaccard)
anno_dict['annotated_masks'].append(self.final_masks[annotated_now]) # mask after modefied at the annotated frame
if self.max_nb_interactions == len(anno_dict['frames']): # After Lastround -> total 90 iter
seq_scrid_name = self.sequence + str(scr_id)
# IoU manager
IoU_over_eobj = np.stack(IoU_over_eobj, axis=0) # niact,frames,n_obj
IoUeveryround_perobj = np.mean(IoU_over_eobj, axis=1) # niact,n_obj
output_dict['average_iact_iou'] += np.sum(IoU_over_eobj[list(range(n_interaction)), anno_dict['frames']], axis=-1)
output_dict['annotated_frames'][seq_scrid_name] = anno_dict['frames']
# write csv
for obj_idx in range(self.n_objects):
with open(self.save_csvsummary_dir, mode='a') as csv_file:
writer = csv.writer(csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow([self.sequence, str(obj_idx + 1), str(scr_id)] + list(IoUeveryround_perobj[:, obj_idx]))
summary = sess.get_global_summary(save_file=self.save_res_dir + '/summary_' + sess.report_name[7:] + '.json')
analyze_summary(self.save_res_dir + '/summary_' + sess.report_name[7:] + '.json', metric=metric)
# final_IOU = summary['curve'][metric][-1]
average_IoU_per_round = summary['curve'][metric][1:-1]
torch.cuda.empty_cache()
model = None
return average_IoU_per_round
def run_VOS_singleiact(self, n_interaction, scribbles_data, annotated_frames):
annotated_frames_np = np.array(annotated_frames)
num_workers = 4
annotated_now = annotated_frames[-1]
scribbles_list = scribbles_data['scribbles']
seq_name = scribbles_data['sequence']
output_masks = self.final_masks.copy().astype(np.float64)
prop_list = utils.get_prop_list(annotated_frames, annotated_now, self.num_frames, proportion=self.config.test_propagation_proportion)
prop_fore = sorted(prop_list)[0]
prop_rear = sorted(prop_list)[-1]
# Interaction settings
pm_ps_ns_3ch_t = [] # n_obj,3,h,w
if n_interaction == 1:
for obj_id in range(1, self.n_objects + 1):
pos_scrimg = utils.scribble_to_image(scribbles_list, annotated_now, obj_id,
dilation=self.config.scribble_dilation_param,
prev_mask=self.final_masks[annotated_now])
pm_ps_ns_3ch_t.append(np.stack([np.ones_like(pos_scrimg) / 2, pos_scrimg, np.zeros_like(pos_scrimg)], axis=0))
pm_ps_ns_3ch_t = np.stack(pm_ps_ns_3ch_t, axis=0) # n_obj,3,h,w
# Image.fromarray((scr_img[:, :, 1] * 255).astype(np.uint8)).save('/home/six/Desktop/CVPRW_figure/judo_obj1_scr.png')
else:
for obj_id in range(1, self.n_objects + 1):
prev_round_input = (self.final_masks[annotated_now] == obj_id).astype(np.float32) # H,W
pos_scrimg, neg_scrimg = utils.scribble_to_image(scribbles_list, annotated_now, obj_id,
dilation=self.config.scribble_dilation_param,
prev_mask=self.final_masks[annotated_now], blur=True,
singleimg=False, seperate_pos_neg=True)
pm_ps_ns_3ch_t.append(np.stack([prev_round_input, pos_scrimg, neg_scrimg], axis=0))
pm_ps_ns_3ch_t = np.stack(pm_ps_ns_3ch_t, axis=0) # n_obj,3,h,w
pm_ps_ns_3ch_t = torch.from_numpy(pm_ps_ns_3ch_t).cuda()
if (prop_list[0] != annotated_now) and (prop_list.count(annotated_now) != 2):
print(str(prop_list))
raise NotImplementedError
print(str(prop_list)) # we made our proplist first backward, and then forward
composed_transforms = transforms.Compose([tr.Normalize_ApplymeanvarImage(self.config.mean, self.config.var),
tr.ToTensor()])
db_test = davis2017_torchdataset.DAVIS2017(split='val', transform=composed_transforms, root=self.config.davis_dataset_dir,
custom_frames=prop_list, seq_name=seq_name, rgb=True,
obj_id=None, no_gt=True, retname=True, prev_round_masks=self.final_masks, )
testloader = DataLoader(db_test, batch_size=1, shuffle=False, num_workers=num_workers, pin_memory=True)
flag = 0 # 1: propagating backward, 2: propagating forward
print('[{:01d} round] processing...'.format(n_interaction))
for ii, batched in enumerate(testloader):
# batched : image, scr_img, 0~fr, meta
inpdict = dict()
operating_frame = int(batched['meta']['frame_id'][0])
for inp in batched:
if inp == 'meta': continue
inpdict[inp] = Variable(batched[inp]).cuda()
inpdict['image'] = inpdict['image'].expand(self.n_objects, -1, -1, -1)
#################### Iaction ########################
if operating_frame == annotated_now: # Check the round is on interaction
if flag == 0:
flag += 1
adjacent_to_anno = True
elif flag == 1:
flag += 1
adjacent_to_anno = True
continue
else:
raise NotImplementedError
pm_ps_ns_3ch_t = torch.nn.ReflectionPad2d(self.pad_info[1] + self.pad_info[0])(pm_ps_ns_3ch_t)
inputs = torch.cat([inpdict['image'], pm_ps_ns_3ch_t], dim=1)
output_logit, anno_6chEnc_r5 = self.net.forward_ANet(inputs) # [nobj, 1, P_H, P_W], # [n_obj,2048,h/16,w/16]
output_prob_anno = torch.sigmoid(output_logit)
prob_onehot_t = output_prob_anno[:, 0].detach()
anno_3chEnc_r5, _, _, r2_prev_fromanno = self.net.encoder_3ch.forward(inpdict['image'])
self.anno_6chEnc_r5_list.append(anno_6chEnc_r5)
self.anno_3chEnc_r5_list.append(anno_3chEnc_r5)
if len(self.anno_6chEnc_r5_list) != len(annotated_frames):
raise NotImplementedError
#################### Propagation ########################
else:
# Flag [1: propagating backward, 2: propagating forward]
if adjacent_to_anno:
r2_prev = r2_prev_fromanno
predmask_prev = output_prob_anno
else:
predmask_prev = output_prob_prop
adjacent_to_anno = False
output_logit, r2_prev = self.net.forward_TNet(
self.anno_3chEnc_r5_list, inpdict['image'], self.anno_6chEnc_r5_list, r2_prev, predmask_prev) # [nobj, 1, P_H, P_W]
output_prob_prop = torch.sigmoid(output_logit)
prob_onehot_t = output_prob_prop[:, 0].detach()
smallest_alpha = 0.5
if flag == 1:
sorted_frames = annotated_frames_np[annotated_frames_np < annotated_now]
if len(sorted_frames) == 0:
alpha = 1
else:
closest_addianno_frame = np.max(sorted_frames)
alpha = smallest_alpha + (1 - smallest_alpha) * (
(operating_frame - closest_addianno_frame) / (annotated_now - closest_addianno_frame))
else:
sorted_frames = annotated_frames_np[annotated_frames_np > annotated_now]
if len(sorted_frames) == 0:
alpha = 1
else:
closest_addianno_frame = np.min(sorted_frames)
alpha = smallest_alpha + (1 - smallest_alpha) * (
(closest_addianno_frame - operating_frame) / (closest_addianno_frame - annotated_now))
prob_onehot_t = (alpha * prob_onehot_t) + ((1 - alpha) * self.prob_map_of_frames[operating_frame])
# Final mask indexing
self.prob_map_of_frames[operating_frame] = prob_onehot_t
output_masks[prop_fore:prop_rear + 1] = \
utils_torch.combine_masks_with_batch(self.prob_map_of_frames[prop_fore:prop_rear + 1],
n_obj=self.n_objects, th=self.config.test_propth
)[:, 0, self.hpad1:-self.hpad2, self.wpad1:-self.wpad2].cpu().numpy().astype(np.float) # f,h,w
torch.cuda.empty_cache()
return output_masks
if __name__ == '__main__':
config = Config()
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(config.test_gpu_id)
tester = Main_tester(config)
tester.run_for_diverse_metrics()
# try:main_val(model,
# Config,
# min_nb_nodes= min_nb_nodes,
# simplyfied_testset= simplyfied_test,tr(config.test_gpu_id)
# metric = metric)
# except: continue
| 17,800 | 50.005731 | 147 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/ltm_transfer.py |
import torch
import torch.nn as nn
import torch.nn.functional as F
class LTM_transfer(nn.Module):
def __init__(self,md=4, stride=1):
super(LTM_transfer, self).__init__()
self.md = md #displacement (default = 4pixels)
self.range = (md*2 + 1) ** 2 #(default = (4x2+1)**2 = 81)
self.grid = None
self.Channelwise_sum = None
d_u = torch.linspace(-self.md * stride, self.md * stride, 2 * self.md + 1).view(1, -1).repeat((2 * self.md + 1, 1)).view(self.range, 1) # (25,1)
d_v = torch.linspace(-self.md * stride, self.md * stride, 2 * self.md + 1).view(-1, 1).repeat((1, 2 * self.md + 1)).view(self.range, 1) # (25,1)
self.d = torch.cat((d_u, d_v), dim=1).cuda() # (25,2)
def L2normalize(self, x, d=1):
eps = 1e-6
norm = x ** 2
norm = norm.sum(dim=d, keepdim=True) + eps
norm = norm ** (0.5)
return (x/norm)
def UniformGrid(self, Input):
'''
Make uniform grid
:param Input: tensor(N,C,H,W)
:return grid: (1,2,H,W)
'''
# torchHorizontal = torch.linspace(-1.0, 1.0, W).view(1, 1, 1, W).expand(N, 1, H, W)
# torchVertical = torch.linspace(-1.0, 1.0, H).view(1, 1, H, 1).expand(N, 1, H, W)
# grid = torch.cat([torchHorizontal, torchVertical], 1).cuda()
_, _, H, W = Input.size()
# mesh grid
xx = torch.arange(0, W).view(1, 1, 1, W).expand(1, 1, H, W)
yy = torch.arange(0, H).view(1, 1, H, 1).expand(1, 1, H, W)
grid = torch.cat((xx, yy), 1).float()
if Input.is_cuda:
grid = grid.cuda()
return grid
def warp(self, x, BM_d):
vgrid = self.grid + BM_d # [N2HW] # [(2d+1)^2, 2, H, W]
# scale grid to [-1,1]
vgrid[:, 0, :, :] = 2.0 * vgrid[:, 0, :, :] / max(x.size(3) - 1, 1) - 1.0
vgrid[:, 1, :, :] = 2.0 * vgrid[:, 1, :, :] / max(x.size(2) - 1, 1) - 1.0
vgrid = vgrid.permute(0, 2, 3, 1)
output = nn.functional.grid_sample(x, vgrid, mode='bilinear', padding_mode = 'border') #800MB memory occupied (d=2,C=64,H=256,W=256)
mask = torch.autograd.Variable(torch.ones(x.size())).cuda()
mask = nn.functional.grid_sample(mask, vgrid) #300MB memory occpied (d=2,C=64,H=256,W=256)
mask = mask.masked_fill_(mask<0.999,0)
mask = mask.masked_fill_(mask>0,1)
return output * mask
def forward(self,sim_feature, f_map, apply_softmax_on_simfeature = True):
'''
Return bilateral cost volume(Set of bilateral correlation map)
:param sim_feature: Correlation feature based on operating frame's HW (N,D2,H,W)
:param f_map: Previous frame mask (N,1,H,W)
:return Correlation Cost: (N,(2d+1)^2,H,W)
'''
# feature1 = self.L2normalize(feature1)
# feature2 = self.L2normalize(feature2)
B_size,C_size,H_size,W_size = f_map.size()
if self.grid is None:
# Initialize first uniform grid
self.grid = self.UniformGrid(f_map)
if H_size != self.grid.size(2) or W_size != self.grid.size(3):
# Update uniform grid to fit on input tensor shape
self.grid = self.UniformGrid(f_map)
# Displacement volume (N,(2d+1)^2,2,H,W) d = (i,j) , i in [-md,md] & j in [-md,md]
D_vol = self.d.view(self.range, 2, 1, 1).expand(-1, -1, H_size, W_size) # [(2d+1)^2, 2, H, W]
if apply_softmax_on_simfeature:
sim_feature = F.softmax(sim_feature, dim=1) # B,D^2,H,W
f_map = self.warp(f_map.transpose(0, 1).expand(self.range,-1,-1,-1), D_vol).transpose(0, 1) # B,D^2,H,W
f_map = torch.sum(torch.mul(sim_feature, f_map),dim=1, keepdim=True) # B,1,H,W
return f_map # B,1,H,W
| 3,767 | 38.25 | 153 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/atnet.py | import torch
import torch.nn as nn
import torch.nn.functional as F
from networks.deeplab.aspp import ASPP
from networks.deeplab.backbone.resnet import SEResNet50
from networks.correlation_package.correlation import Correlation
from networks.ltm_transfer import LTM_transfer
class ATnet(nn.Module):
def __init__(self, pretrained=1, resfix=False, corr_displacement=4, corr_stride=2):
super(ATnet, self).__init__()
print("Constructing ATnet architecture..")
self.encoder_6ch = Encoder_6ch(resfix)
self.encoder_3ch = Encoder_3ch(resfix)
self.indicator_encoder = ConverterEncoder() #
self.decoder_iact = Decoder()
self.decoder_prop = Decoder_prop()
self.ltm_local_affinity = Correlation(pad_size=corr_displacement * corr_stride, kernel_size=1,
max_displacement=corr_displacement * corr_stride,
stride1=1, stride2=corr_stride, corr_multiply=1)
self.ltm_transfer = LTM_transfer(md=corr_displacement, stride=corr_stride)
self.prev_conv1x1 = nn.Conv2d(256, 256, kernel_size=1, padding=0) # 1/4, 256
self.conv1x1 = nn.Conv2d(2048*2, 2048, kernel_size=1, padding=0) # 1/16, 2048
self.refer_weight = None
self._initialize_weights(pretrained)
def forward_ANet(self, x): # Bx4xHxW to Bx1xHxW
r5, r4, r3, r2 = self.encoder_6ch(x)
estimated_mask, m2 = self.decoder_iact(r5, r3, r2, only_return_feature=False)
r5_indicator = self.indicator_encoder(r5, m2)
return estimated_mask, r5_indicator
def forward_TNet(self, anno_propEnc_r5_list, targframe_3ch, anno_iactEnc_r5_list, r2_prev, predmask_prev, debug_f_mask = False): #1/16, 2048
f_targ, _, r3_targ, r2_targ = self.encoder_3ch(targframe_3ch)
f_mask_r5 = self.correlation_global_transfer(anno_propEnc_r5_list, f_targ, anno_iactEnc_r5_list) # 1/16, 2048
r2_targ_c = self.prev_conv1x1(r2_targ)
r2_prev = self.prev_conv1x1(r2_prev)
f_mask_r2 = self.correlation_local_transfer(r2_prev, r2_targ_c, predmask_prev) # 1/4, 1 [B,1,H/4,W/4]
r5_concat = torch.cat([f_targ, f_mask_r5], dim=1) # 1/16, 2048*2
r5_concat = self.conv1x1(r5_concat)
estimated_mask, m2 = self.decoder_prop(r5_concat, r3_targ, r2_targ, f_mask_r2)
if not debug_f_mask:
return estimated_mask, r2_targ
else:
return estimated_mask, r2_targ, f_mask_r2
def correlation_global_transfer(self, anno_feature_list, targ_feature, anno_indicator_feature_list ):
'''
:param anno_feature_list: [B,C,H,W] x list (N values in list)
:param targ_feature: [B,C,H,W]
:param anno_indicator_feature_list: [B,C,H,W] x list (N values in list)
:return targ_mask_feature: [B,C,H,W]
'''
b, c, h, w = anno_indicator_feature_list[0].size() # b means n_objs
targ_feature = targ_feature.view(b, c, h * w) # [B, C, HxW]
n_features = len(anno_feature_list)
anno_feature = []
for f_idx in range(n_features):
anno_feature.append(anno_feature_list[f_idx].view(b, c, h * w).transpose(1, 2)) # [B, HxW', C]
anno_feature = torch.cat(anno_feature, dim=1) # [B, NxHxW', C]
sim_feature = torch.bmm(anno_feature, targ_feature) # [B, NxHxW', HxW]
sim_feature = F.softmax(sim_feature, dim=2) / n_features # [B, NxHxW', HxW]
anno_indicator_feature = []
for f_idx in range(n_features):
anno_indicator_feature.append(anno_indicator_feature_list[f_idx].view(b, c, h * w)) # [B, C, HxW']
anno_indicator_feature = torch.cat(anno_indicator_feature, dim=-1) # [B, C, NxHxW']
targ_mask_feature = torch.bmm(anno_indicator_feature, sim_feature) # [B, C, HxW]
targ_mask_feature = targ_mask_feature.view(b, c, h, w)
return targ_mask_feature
def correlation_local_transfer(self, r2_prev, r2_targ, predmask_prev):
'''
:param r2_prev: [B,C,H,W]
:param r2_targ: [B,C,H,W]
:param predmask_prev: [B,1,4*H,4*W]
:return targ_mask_feature_r2: [B,1,H,W]
'''
predmask_prev = F.interpolate(predmask_prev, scale_factor=0.25, mode='bilinear',align_corners=True) # B,1,H,W
sim_feature = self.ltm_local_affinity.forward(r2_targ,r2_prev,) # B,D^2,H,W
sim_feature = F.softmax(sim_feature, dim=2) # B,D^2,H,W
predmask_targ = self.ltm_transfer.forward(sim_feature, predmask_prev, apply_softmax_on_simfeature = False) # B,1,H,W
return predmask_targ
def _initialize_weights(self, pretrained):
for m in self.modules():
if pretrained:
break
else:
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0, 0.001)
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.weight.data.normal_(0, 0.01)
m.bias.data.zero_()
class Encoder_3ch(nn.Module):
# T-Net Encoder
def __init__(self, resfix):
super(Encoder_3ch, self).__init__()
self.conv0_3ch = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=True)
resnet = SEResNet50(output_stride=16, BatchNorm=nn.BatchNorm2d, pretrained=True)
self.bn1 = resnet.bn1
self.relu = resnet.relu # 1/2, 64
self.maxpool = resnet.maxpool
self.res2 = resnet.layer1 # 1/4, 256
self.res3 = resnet.layer2 # 1/8, 512
self.res4 = resnet.layer3 # 1/16, 1024
self.res5 = resnet.layer4 # 1/16, 2048
# freeze BNs
if resfix:
for m in self.modules():
if isinstance(m, nn.BatchNorm2d):
for p in m.parameters():
p.requires_grad = False
def forward(self, x):
x = self.conv0_3ch(x) # 1/2, 64
x = self.bn1(x)
c1 = self.relu(x) # 1/2, 64
x = self.maxpool(c1) # 1/4, 64
r2 = self.res2(x) # 1/4, 256
r3 = self.res3(r2) # 1/8, 512
r4 = self.res4(r3) # 1/16, 1024
r5 = self.res5(r4) # 1/16, 2048
return r5, r4, r3, r2
def forward_r2(self,x):
x = self.conv0_3ch(x) # 1/2, 64
x = self.bn1(x)
c1 = self.relu(x) # 1/2, 64
x = self.maxpool(c1) # 1/4, 64
r2 = self.res2(x) # 1/4, 256
return r2
class Encoder_6ch(nn.Module):
# A-Net Encoder
def __init__(self, resfix):
super(Encoder_6ch, self).__init__()
self.conv0_6ch = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=True)
resnet = SEResNet50(output_stride=16, BatchNorm=nn.BatchNorm2d, pretrained=True)
self.bn1 = resnet.bn1
self.relu = resnet.relu # 1/2, 64
self.maxpool = resnet.maxpool
self.res2 = resnet.layer1 # 1/4, 256
self.res3 = resnet.layer2 # 1/8, 512
self.res4 = resnet.layer3 # 1/16, 1024
self.res5 = resnet.layer4 # 1/16, 2048
# freeze BNs
if resfix:
for m in self.modules():
if isinstance(m, nn.BatchNorm2d):
for p in m.parameters():
p.requires_grad = False
def forward(self, x):
x = self.conv0_6ch(x) # 1/2, 64
x = self.bn1(x)
c1 = self.relu(x) # 1/2, 64
x = self.maxpool(c1) # 1/4, 64
r2 = self.res2(x) # 1/4, 256
r3 = self.res3(r2) # 1/8, 512
r4 = self.res4(r3) # 1/16, 1024
r5 = self.res5(r4) # 1/16, 2048
return r5, r4, r3, r2
class Decoder(nn.Module):
# A-Net Decoder
def __init__(self):
super(Decoder, self).__init__()
mdim = 256
self.aspp_decoder = ASPP(backbone='res', output_stride=16, BatchNorm=nn.BatchNorm2d, pretrained=1)
self.convG0 = nn.Conv2d(2048, mdim, kernel_size=3, padding=1)
self.convG1 = nn.Conv2d(mdim, mdim, kernel_size=3, padding=1)
self.convG2 = nn.Conv2d(mdim, mdim, kernel_size=3, padding=1)
self.RF3 = Refine(512, mdim) # 1/16 -> 1/8
self.RF2 = Refine(256, mdim) # 1/8 -> 1/4
self.lastconv = nn.Sequential(nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Conv2d(256, 1, kernel_size=1, stride=1))
def forward(self, r5, r3_targ, r2_targ, only_return_feature = False):
aspp_out = self.aspp_decoder(r5) #1/16 mdim
aspp_out = F.interpolate(aspp_out, scale_factor=4, mode='bilinear',align_corners=True) #1/4 mdim
m4 = self.convG0(F.relu(r5)) # out: # 1/16, mdim
m4 = self.convG1(F.relu(m4)) # out: # 1/16, mdim
m4 = self.convG2(F.relu(m4)) # out: # 1/16, mdim
m3 = self.RF3(r3_targ, m4) # out: 1/8, mdim
m2 = self.RF2(r2_targ, m3) # out: 1/4, mdim
m2 = torch.cat((m2, aspp_out), dim=1) # out: 1/4, mdim*2
if only_return_feature:
return m2
x = self.lastconv(m2)
x = F.interpolate(x, scale_factor=4, mode='bilinear', align_corners=True)
return x, m2
class Decoder_prop(nn.Module):
# T-Net Decoder
def __init__(self):
super(Decoder_prop, self).__init__()
mdim = 256
self.aspp_decoder = ASPP(backbone='res', output_stride=16, BatchNorm=nn.BatchNorm2d, pretrained=1)
self.convG0 = nn.Conv2d(2048, mdim, kernel_size=3, padding=1)
self.convG1 = nn.Conv2d(mdim, mdim, kernel_size=3, padding=1)
self.convG2 = nn.Conv2d(mdim, mdim, kernel_size=3, padding=1)
self.RF3 = Refine(512, mdim) # 1/16 -> 1/8
self.RF2 = Refine(256, mdim) # 1/8 -> 1/4
self.lastconv = nn.Sequential(nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Conv2d(256, 1, kernel_size=1, stride=1))
def forward(self, r5, r3_targ, r2_targ, f_mask_r2):
aspp_out = self.aspp_decoder(r5) #1/16 mdim
aspp_out = F.interpolate(aspp_out, scale_factor=4, mode='bilinear',align_corners=True) #1/4 mdim
m4 = self.convG0(F.relu(r5)) # out: # 1/16, mdim
m4 = self.convG1(F.relu(m4)) # out: # 1/16, mdim
m4 = self.convG2(F.relu(m4)) # out: # 1/16, mdim
m3 = self.RF3(r3_targ, m4) # out: 1/8, mdim
m3 = m3 + 0.5 * F.interpolate(f_mask_r2, scale_factor=0.5, mode='bilinear',align_corners=True) #1/4 mdim
m2 = self.RF2(r2_targ, m3) # out: 1/4, mdim
m2 = m2 + 0.5 * f_mask_r2
m2 = torch.cat((m2, aspp_out), dim=1) # out: 1/4, mdim*2
x = self.lastconv(m2)
x = F.interpolate(x, scale_factor=4, mode='bilinear', align_corners=True)
return x, m2
class ConverterEncoder(nn.Module):
def __init__(self):
super(ConverterEncoder, self).__init__()
# [1/4, 512] to [1/8, 1024]
downsample1 = nn.Sequential(nn.Conv2d(512, 1024, kernel_size=1, stride=2, bias=False),
nn.BatchNorm2d(1024),
)
self.block1 = SEBottleneck(512, 256, stride = 2, downsample = downsample1)
# [1/8, 1024] to [1/16, 2048]
downsample2 = nn.Sequential(nn.Conv2d(1024, 2048, kernel_size=1, stride=2, bias=False),
nn.BatchNorm2d(2048),
)
self.block2 = SEBottleneck(1024, 512, stride = 2, downsample=downsample2)
self.conv1x1 = nn.Conv2d(2048 * 2, 2048, kernel_size=1, padding=0) # 1/16, 2048
def forward(self, r5, m2):
'''
:param r5: 1/16, 2048
:param m2: 1/4, 512
:return:
'''
x = self.block1(m2)
x = self.block2(x)
x = torch.cat((x,r5),dim=1)
x = self.conv1x1(x)
return x
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, BatchNorm=nn.BatchNorm2d):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = BatchNorm(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
dilation=dilation, padding=dilation, bias=False)
self.bn2 = BatchNorm(planes)
self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
self.bn3 = BatchNorm(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
# SE
self.global_pool = nn.AdaptiveAvgPool2d(1)
self.conv_down = nn.Conv2d(
planes * 4, planes // 4, kernel_size=1, bias=False)
self.conv_up = nn.Conv2d(
planes // 4, planes * 4, kernel_size=1, bias=False)
self.sig = nn.Sigmoid()
self.downsample = downsample
self.stride = stride
self.dilation = dilation
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out1 = self.global_pool(out)
out1 = self.conv_down(out1)
out1 = self.relu(out1)
out1 = self.conv_up(out1)
out1 = self.sig(out1)
if self.downsample is not None:
residual = self.downsample(x)
res = out1 * out + residual
res = self.relu(res)
return res
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction, bias=False),
nn.ReLU(inplace=True),
nn.Linear(channel // reduction, channel, bias=False),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
return x * y.expand_as(x)
class Refine(nn.Module):
def __init__(self, inplanes, planes, scale_factor=2):
super(Refine, self).__init__()
self.convFS1 = nn.Conv2d(inplanes, planes, kernel_size=3, padding=1)
self.convFS2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
self.convFS3 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
self.convMM1 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
self.convMM2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
self.scale_factor = scale_factor
def forward(self, f, pm):
s = self.convFS1(f)
sr = self.convFS2(F.relu(s))
sr = self.convFS3(F.relu(sr))
s = s + sr
m = s + F.interpolate(pm, scale_factor=self.scale_factor, mode='bilinear',align_corners=True)
mr = self.convMM1(F.relu(m))
mr = self.convMM2(F.relu(mr))
m = m + mr
return m
| 16,091 | 38.153285 | 144 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/aspp.py | import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
class _ASPPModule(nn.Module):
def __init__(self, inplanes, planes, kernel_size, padding, dilation, BatchNorm, pretrained):
super(_ASPPModule, self).__init__()
self.atrous_conv = nn.Conv2d(inplanes, planes, kernel_size=kernel_size,
stride=1, padding=padding, dilation=dilation, bias=False)
self.bn = BatchNorm(planes)
self.relu = nn.ReLU()
self._init_weight(pretrained)
def forward(self, x):
x = self.atrous_conv(x)
x = self.bn(x)
return self.relu(x)
def _init_weight(self,pretrained):
for m in self.modules():
if pretrained:
break
else:
if isinstance(m, nn.Conv2d):
torch.nn.init.kaiming_normal_(m.weight)
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
class ASPP(nn.Module):
def __init__(self, backbone, output_stride, BatchNorm, pretrained):
super(ASPP, self).__init__()
if backbone == 'drn':
inplanes = 512
elif backbone == 'mobilenet':
inplanes = 320
else:
inplanes = 2048
if output_stride == 16:
dilations = [1, 6, 12, 18]
elif output_stride == 8:
dilations = [1, 12, 24, 36]
else:
raise NotImplementedError
self.aspp1 = _ASPPModule(inplanes, 256, 1, padding=0, dilation=dilations[0], BatchNorm=BatchNorm, pretrained=pretrained)
self.aspp2 = _ASPPModule(inplanes, 256, 3, padding=dilations[1], dilation=dilations[1], BatchNorm=BatchNorm, pretrained=pretrained)
self.aspp3 = _ASPPModule(inplanes, 256, 3, padding=dilations[2], dilation=dilations[2], BatchNorm=BatchNorm, pretrained=pretrained)
self.aspp4 = _ASPPModule(inplanes, 256, 3, padding=dilations[3], dilation=dilations[3], BatchNorm=BatchNorm, pretrained=pretrained)
self.global_avg_pool = nn.Sequential(nn.AdaptiveAvgPool2d((1, 1)),
nn.Conv2d(inplanes, 256, 1, stride=1, bias=False),
BatchNorm(256),
nn.ReLU())
self.conv1 = nn.Conv2d(1280, 256, 1, bias=False)
self.bn1 = BatchNorm(256)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(0.5)
self._init_weight(pretrained)
def forward(self, x):
x1 = self.aspp1(x)
x2 = self.aspp2(x)
x3 = self.aspp3(x)
x4 = self.aspp4(x)
x5 = self.global_avg_pool(x)
# if type(x4.size()[2]) != int:
# tmpsize = (x4.size()[2].item(),x4.size()[3].item())
# else:
# tmpsize = (x4.size()[2],x4.size()[3])
# x5 = F.interpolate(x5, size=(14,14), mode='bilinear', align_corners=True)
x5 = F.interpolate(x5, size=x4.size()[2:], mode='bilinear', align_corners=True)
x = torch.cat((x1, x2, x3, x4, x5), dim=1)
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
return self.dropout(x)
def _init_weight(self,pretrained):
for m in self.modules():
if pretrained:
break
else:
if isinstance(m, nn.Conv2d):
# n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
# m.weight.data.normal_(0, math.sqrt(2. / n))
torch.nn.init.kaiming_normal_(m.weight)
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def build_aspp(backbone, output_stride, BatchNorm,pretrained):
return ASPP(backbone, output_stride, BatchNorm, pretrained) | 4,257 | 38.425926 | 139 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/decoder.py | import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
class Decoder(nn.Module):
def __init__(self, num_classes, backbone, BatchNorm):
super(Decoder, self).__init__()
if backbone == 'resnet' or backbone == 'drn':
low_level_inplanes = 256
elif backbone == 'xception':
low_level_inplanes = 128
elif backbone == 'mobilenet':
low_level_inplanes = 24
else:
raise NotImplementedError
self.conv1 = nn.Conv2d(low_level_inplanes, 48, 1, bias=False)
self.bn1 = BatchNorm(48)
self.relu = nn.ReLU()
self.last_conv = nn.Sequential(nn.Conv2d(304, 256, kernel_size=3, stride=1, padding=1, bias=False),
BatchNorm(256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
BatchNorm(256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Conv2d(256, num_classes, kernel_size=1, stride=1))
self._init_weight()
def forward(self, x, low_level_feat):
low_level_feat = self.conv1(low_level_feat)
low_level_feat = self.bn1(low_level_feat)
low_level_feat = self.relu(low_level_feat)
x = F.interpolate(x, size=low_level_feat.size()[2:], mode='bilinear', align_corners=True)
x = torch.cat((x, low_level_feat), dim=1)
x = self.last_conv(x)
return x
def _init_weight(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
torch.nn.init.kaiming_normal_(m.weight)
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def build_decoder(num_classes, backbone, BatchNorm):
return Decoder(num_classes, backbone, BatchNorm) | 2,280 | 39.017544 | 107 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/deeplab.py | import torch
import torch.nn as nn
import torch.nn.functional as F
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
from networks.deeplab.aspp import build_aspp
from networks.deeplab.decoder import build_decoder
from networks.deeplab.backbone import build_backbone
class DeepLab(nn.Module):
def __init__(self, backbone='resnet', output_stride=16, num_classes=21,
sync_bn=True, freeze_bn=False):
super(DeepLab, self).__init__()
if backbone == 'drn':
output_stride = 8
if sync_bn == True:
BatchNorm = SynchronizedBatchNorm2d
else:
BatchNorm = nn.BatchNorm2d
self.backbone = build_backbone(backbone, output_stride, BatchNorm)
self.aspp = build_aspp(backbone, output_stride, BatchNorm)
self.decoder = build_decoder(num_classes, backbone, BatchNorm)
if freeze_bn:
self.freeze_bn()
def forward(self, input):
x, low_level_feat = self.backbone(input)
x = self.aspp(x)
x = self.decoder(x, low_level_feat)
x = F.interpolate(x, size=input.size()[2:], mode='bilinear', align_corners=True)
return x
def freeze_bn(self):
for m in self.modules():
if isinstance(m, SynchronizedBatchNorm2d):
m.eval()
elif isinstance(m, nn.BatchNorm2d):
m.eval()
def get_1x_lr_params(self):
modules = [self.backbone]
for i in range(len(modules)):
for m in modules[i].named_modules():
if isinstance(m[1], nn.Conv2d) or isinstance(m[1], SynchronizedBatchNorm2d) \
or isinstance(m[1], nn.BatchNorm2d):
for p in m[1].parameters():
if p.requires_grad:
yield p
def get_10x_lr_params(self):
modules = [self.aspp, self.decoder]
for i in range(len(modules)):
for m in modules[i].named_modules():
if isinstance(m[1], nn.Conv2d) or isinstance(m[1], SynchronizedBatchNorm2d) \
or isinstance(m[1], nn.BatchNorm2d):
for p in m[1].parameters():
if p.requires_grad:
yield p
if __name__ == "__main__":
model = DeepLab(backbone='mobilenet', output_stride=16)
model.eval()
input = torch.rand(1, 3, 513, 513)
output = model(input)
print(output.size())
| 2,493 | 33.638889 | 93 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/backbone/resnet.py | import math
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, BatchNorm=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = BatchNorm(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
dilation=dilation, padding=dilation, bias=False)
self.bn2 = BatchNorm(planes)
self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
self.bn3 = BatchNorm(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
self.dilation = dilation
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, BatchNorm=None):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = BatchNorm(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
dilation=dilation, padding=dilation, bias=False)
self.bn2 = BatchNorm(planes)
self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
self.bn3 = BatchNorm(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
# SE
self.global_pool = nn.AdaptiveAvgPool2d(1)
self.conv_down = nn.Conv2d(
planes * 4, planes // 4, kernel_size=1, bias=False)
self.conv_up = nn.Conv2d(
planes // 4, planes * 4, kernel_size=1, bias=False)
self.sig = nn.Sigmoid()
self.downsample = downsample
self.stride = stride
self.dilation = dilation
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out1 = self.global_pool(out)
out1 = self.conv_down(out1)
out1 = self.relu(out1)
out1 = self.conv_up(out1)
out1 = self.sig(out1)
if self.downsample is not None:
residual = self.downsample(x)
res = out1 * out + residual
res = self.relu(res)
return res
class ResNet(nn.Module):
def __init__(self, block, layers, output_stride, BatchNorm, pretrained=True, modelname = 'res101'):
self.inplanes = 64
self.modelname = modelname
super(ResNet, self).__init__()
blocks = [1, 2, 4]
if output_stride == 16:
strides = [1, 2, 2, 1]
dilations = [1, 1, 1, 2]
elif output_stride == 8:
strides = [1, 2, 1, 1]
dilations = [1, 1, 2, 4]
else:
raise NotImplementedError
# Modules
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = BatchNorm(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0], stride=strides[0], dilation=dilations[0], BatchNorm=BatchNorm)
self.layer2 = self._make_layer(block, 128, layers[1], stride=strides[1], dilation=dilations[1], BatchNorm=BatchNorm)
self.layer3 = self._make_layer(block, 256, layers[2], stride=strides[2], dilation=dilations[2], BatchNorm=BatchNorm)
self.layer4 = self._make_MG_unit(block, 512, blocks=blocks, stride=strides[3], dilation=dilations[3], BatchNorm=BatchNorm)
# self.layer4 = self._make_layer(block, 512, layers[3], stride=strides[3], dilation=dilations[3], BatchNorm=BatchNorm)
self._init_weight()
if pretrained:
self._load_pretrained_model()
def _make_layer(self, block, planes, blocks, stride=1, dilation=1, BatchNorm=None):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
BatchNorm(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, dilation, downsample, BatchNorm))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, dilation=dilation, BatchNorm=BatchNorm))
return nn.Sequential(*layers)
def _make_MG_unit(self, block, planes, blocks, stride=1, dilation=1, BatchNorm=None):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
BatchNorm(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, dilation=blocks[0]*dilation,
downsample=downsample, BatchNorm=BatchNorm))
self.inplanes = planes * block.expansion
for i in range(1, len(blocks)):
layers.append(block(self.inplanes, planes, stride=1,
dilation=blocks[i]*dilation, BatchNorm=BatchNorm))
return nn.Sequential(*layers)
def forward(self, input):
x = self.conv1(input)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x) #256 128 128
low_level_feat = x
x = self.layer2(x) #512 64 64
x = self.layer3(x) #1024 32 32
x = self.layer4(x) #2048 32 32
return x, low_level_feat
def _init_weight(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _load_pretrained_model(self):
if self.modelname =='res101':
pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/resnet101-5d3b4d8f.pth')
elif self.modelname == 'res50':
pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/resnet50-19c8e357.pth')
elif self.modelname == 'SEres50':
pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/resnet50-19c8e357.pth')
else: raise NotImplementedError
model_dict = {}
state_dict = self.state_dict()
for k, v in pretrain_dict.items():
if k in state_dict:
model_dict[k] = v
state_dict.update(model_dict)
self.load_state_dict(state_dict)
def ResNet101(output_stride, BatchNorm, pretrained=True,):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 23, 3], output_stride, BatchNorm, pretrained=pretrained, modelname='res101')
return model
def ResNet50(output_stride, BatchNorm, pretrained=True):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 6, 3], output_stride, BatchNorm, pretrained=pretrained, modelname='res50')
return model
def SEResNet50(output_stride, BatchNorm, pretrained=True):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 6, 3], output_stride, BatchNorm, pretrained=pretrained, modelname='SEres50')
return model
if __name__ == "__main__":
import torch
model = ResNet50(BatchNorm=nn.BatchNorm2d, pretrained=True, output_stride=16)
input = torch.rand(1, 3, 512, 512)
output, low_level_feat = model(input)
print(output.size())
print(low_level_feat.size()) | 9,076 | 36.979079 | 130 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/backbone/drn.py | import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
webroot = 'https://tigress-web.princeton.edu/~fy/drn/models/'
model_urls = {
'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
'drn-c-26': webroot + 'drn_c_26-ddedf421.pth',
'drn-c-42': webroot + 'drn_c_42-9d336e8c.pth',
'drn-c-58': webroot + 'drn_c_58-0a53a92c.pth',
'drn-d-22': webroot + 'drn_d_22-4bd2f8ea.pth',
'drn-d-38': webroot + 'drn_d_38-eebb45f0.pth',
'drn-d-54': webroot + 'drn_d_54-0e0534ff.pth',
'drn-d-105': webroot + 'drn_d_105-12b40979.pth'
}
def conv3x3(in_planes, out_planes, stride=1, padding=1, dilation=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=padding, bias=False, dilation=dilation)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None,
dilation=(1, 1), residual=True, BatchNorm=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride,
padding=dilation[0], dilation=dilation[0])
self.bn1 = BatchNorm(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes,
padding=dilation[1], dilation=dilation[1])
self.bn2 = BatchNorm(planes)
self.downsample = downsample
self.stride = stride
self.residual = residual
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
if self.residual:
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None,
dilation=(1, 1), residual=True, BatchNorm=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = BatchNorm(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=dilation[1], bias=False,
dilation=dilation[1])
self.bn2 = BatchNorm(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = BatchNorm(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class DRN(nn.Module):
def __init__(self, block, layers, arch='D',
channels=(16, 32, 64, 128, 256, 512, 512, 512),
BatchNorm=None):
super(DRN, self).__init__()
self.inplanes = channels[0]
self.out_dim = channels[-1]
self.arch = arch
if arch == 'C':
self.conv1 = nn.Conv2d(3, channels[0], kernel_size=7, stride=1,
padding=3, bias=False)
self.bn1 = BatchNorm(channels[0])
self.relu = nn.ReLU(inplace=True)
self.layer1 = self._make_layer(
BasicBlock, channels[0], layers[0], stride=1, BatchNorm=BatchNorm)
self.layer2 = self._make_layer(
BasicBlock, channels[1], layers[1], stride=2, BatchNorm=BatchNorm)
elif arch == 'D':
self.layer0 = nn.Sequential(
nn.Conv2d(3, channels[0], kernel_size=7, stride=1, padding=3,
bias=False),
BatchNorm(channels[0]),
nn.ReLU(inplace=True)
)
self.layer1 = self._make_conv_layers(
channels[0], layers[0], stride=1, BatchNorm=BatchNorm)
self.layer2 = self._make_conv_layers(
channels[1], layers[1], stride=2, BatchNorm=BatchNorm)
self.layer3 = self._make_layer(block, channels[2], layers[2], stride=2, BatchNorm=BatchNorm)
self.layer4 = self._make_layer(block, channels[3], layers[3], stride=2, BatchNorm=BatchNorm)
self.layer5 = self._make_layer(block, channels[4], layers[4],
dilation=2, new_level=False, BatchNorm=BatchNorm)
self.layer6 = None if layers[5] == 0 else \
self._make_layer(block, channels[5], layers[5], dilation=4,
new_level=False, BatchNorm=BatchNorm)
if arch == 'C':
self.layer7 = None if layers[6] == 0 else \
self._make_layer(BasicBlock, channels[6], layers[6], dilation=2,
new_level=False, residual=False, BatchNorm=BatchNorm)
self.layer8 = None if layers[7] == 0 else \
self._make_layer(BasicBlock, channels[7], layers[7], dilation=1,
new_level=False, residual=False, BatchNorm=BatchNorm)
elif arch == 'D':
self.layer7 = None if layers[6] == 0 else \
self._make_conv_layers(channels[6], layers[6], dilation=2, BatchNorm=BatchNorm)
self.layer8 = None if layers[7] == 0 else \
self._make_conv_layers(channels[7], layers[7], dilation=1, BatchNorm=BatchNorm)
self._init_weight()
def _init_weight(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1, dilation=1,
new_level=True, residual=True, BatchNorm=None):
assert dilation == 1 or dilation % 2 == 0
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
BatchNorm(planes * block.expansion),
)
layers = list()
layers.append(block(
self.inplanes, planes, stride, downsample,
dilation=(1, 1) if dilation == 1 else (
dilation // 2 if new_level else dilation, dilation),
residual=residual, BatchNorm=BatchNorm))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, residual=residual,
dilation=(dilation, dilation), BatchNorm=BatchNorm))
return nn.Sequential(*layers)
def _make_conv_layers(self, channels, convs, stride=1, dilation=1, BatchNorm=None):
modules = []
for i in range(convs):
modules.extend([
nn.Conv2d(self.inplanes, channels, kernel_size=3,
stride=stride if i == 0 else 1,
padding=dilation, bias=False, dilation=dilation),
BatchNorm(channels),
nn.ReLU(inplace=True)])
self.inplanes = channels
return nn.Sequential(*modules)
def forward(self, x):
if self.arch == 'C':
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
elif self.arch == 'D':
x = self.layer0(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
low_level_feat = x
x = self.layer4(x)
x = self.layer5(x)
if self.layer6 is not None:
x = self.layer6(x)
if self.layer7 is not None:
x = self.layer7(x)
if self.layer8 is not None:
x = self.layer8(x)
return x, low_level_feat
class DRN_A(nn.Module):
def __init__(self, block, layers, BatchNorm=None):
self.inplanes = 64
super(DRN_A, self).__init__()
self.out_dim = 512 * block.expansion
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = BatchNorm(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0], BatchNorm=BatchNorm)
self.layer2 = self._make_layer(block, 128, layers[1], stride=2, BatchNorm=BatchNorm)
self.layer3 = self._make_layer(block, 256, layers[2], stride=1,
dilation=2, BatchNorm=BatchNorm)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1,
dilation=4, BatchNorm=BatchNorm)
self._init_weight()
def _init_weight(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1, dilation=1, BatchNorm=None):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
BatchNorm(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, BatchNorm=BatchNorm))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes,
dilation=(dilation, dilation, ), BatchNorm=BatchNorm))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
return x
def drn_a_50(BatchNorm, pretrained=True):
model = DRN_A(Bottleneck, [3, 4, 6, 3], BatchNorm=BatchNorm)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
return model
def drn_c_26(BatchNorm, pretrained=True):
model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 1, 1], arch='C', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-c-26'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_c_42(BatchNorm, pretrained=True):
model = DRN(BasicBlock, [1, 1, 3, 4, 6, 3, 1, 1], arch='C', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-c-42'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_c_58(BatchNorm, pretrained=True):
model = DRN(Bottleneck, [1, 1, 3, 4, 6, 3, 1, 1], arch='C', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-c-58'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_d_22(BatchNorm, pretrained=True):
model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 1, 1], arch='D', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-d-22'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_d_24(BatchNorm, pretrained=True):
model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 2, 2], arch='D', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-d-24'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_d_38(BatchNorm, pretrained=True):
model = DRN(BasicBlock, [1, 1, 3, 4, 6, 3, 1, 1], arch='D', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-d-38'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_d_40(BatchNorm, pretrained=True):
model = DRN(BasicBlock, [1, 1, 3, 4, 6, 3, 2, 2], arch='D', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-d-40'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_d_54(BatchNorm, pretrained=True):
model = DRN(Bottleneck, [1, 1, 3, 4, 6, 3, 1, 1], arch='D', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-d-54'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
def drn_d_105(BatchNorm, pretrained=True):
model = DRN(Bottleneck, [1, 1, 3, 4, 23, 3, 1, 1], arch='D', BatchNorm=BatchNorm)
if pretrained:
pretrained = model_zoo.load_url(model_urls['drn-d-105'])
del pretrained['fc.weight']
del pretrained['fc.bias']
model.load_state_dict(pretrained)
return model
if __name__ == "__main__":
import torch
model = drn_a_50(BatchNorm=nn.BatchNorm2d, pretrained=True)
input = torch.rand(1, 3, 512, 512)
output, low_level_feat = model(input)
print(output.size())
print(low_level_feat.size())
| 14,657 | 35.372208 | 100 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/backbone/xception.py | import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
def fixed_padding(inputs, kernel_size, dilation):
kernel_size_effective = kernel_size + (kernel_size - 1) * (dilation - 1)
pad_total = kernel_size_effective - 1
pad_beg = pad_total // 2
pad_end = pad_total - pad_beg
padded_inputs = F.pad(inputs, (pad_beg, pad_end, pad_beg, pad_end))
return padded_inputs
class SeparableConv2d(nn.Module):
def __init__(self, inplanes, planes, kernel_size=3, stride=1, dilation=1, bias=False, BatchNorm=None):
super(SeparableConv2d, self).__init__()
self.conv1 = nn.Conv2d(inplanes, inplanes, kernel_size, stride, 0, dilation,
groups=inplanes, bias=bias)
self.bn = BatchNorm(inplanes)
self.pointwise = nn.Conv2d(inplanes, planes, 1, 1, 0, 1, 1, bias=bias)
def forward(self, x):
x = fixed_padding(x, self.conv1.kernel_size[0], dilation=self.conv1.dilation[0])
x = self.conv1(x)
x = self.bn(x)
x = self.pointwise(x)
return x
class Block(nn.Module):
def __init__(self, inplanes, planes, reps, stride=1, dilation=1, BatchNorm=None,
start_with_relu=True, grow_first=True, is_last=False):
super(Block, self).__init__()
if planes != inplanes or stride != 1:
self.skip = nn.Conv2d(inplanes, planes, 1, stride=stride, bias=False)
self.skipbn = BatchNorm(planes)
else:
self.skip = None
self.relu = nn.ReLU(inplace=True)
rep = []
filters = inplanes
if grow_first:
rep.append(self.relu)
rep.append(SeparableConv2d(inplanes, planes, 3, 1, dilation, BatchNorm=BatchNorm))
rep.append(BatchNorm(planes))
filters = planes
for i in range(reps - 1):
rep.append(self.relu)
rep.append(SeparableConv2d(filters, filters, 3, 1, dilation, BatchNorm=BatchNorm))
rep.append(BatchNorm(filters))
if not grow_first:
rep.append(self.relu)
rep.append(SeparableConv2d(inplanes, planes, 3, 1, dilation, BatchNorm=BatchNorm))
rep.append(BatchNorm(planes))
if stride != 1:
rep.append(self.relu)
rep.append(SeparableConv2d(planes, planes, 3, 2, BatchNorm=BatchNorm))
rep.append(BatchNorm(planes))
if stride == 1 and is_last:
rep.append(self.relu)
rep.append(SeparableConv2d(planes, planes, 3, 1, BatchNorm=BatchNorm))
rep.append(BatchNorm(planes))
if not start_with_relu:
rep = rep[1:]
self.rep = nn.Sequential(*rep)
def forward(self, inp):
x = self.rep(inp)
if self.skip is not None:
skip = self.skip(inp)
skip = self.skipbn(skip)
else:
skip = inp
x = x + skip
return x
class AlignedXception(nn.Module):
"""
Modified Alighed Xception
"""
def __init__(self, output_stride, BatchNorm,
pretrained=True):
super(AlignedXception, self).__init__()
if output_stride == 16:
entry_block3_stride = 2
middle_block_dilation = 1
exit_block_dilations = (1, 2)
elif output_stride == 8:
entry_block3_stride = 1
middle_block_dilation = 2
exit_block_dilations = (2, 4)
else:
raise NotImplementedError
# Entry flow
self.conv1 = nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False)
self.bn1 = BatchNorm(32)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32, 64, 3, stride=1, padding=1, bias=False)
self.bn2 = BatchNorm(64)
self.block1 = Block(64, 128, reps=2, stride=2, BatchNorm=BatchNorm, start_with_relu=False)
self.block2 = Block(128, 256, reps=2, stride=2, BatchNorm=BatchNorm, start_with_relu=False,
grow_first=True)
self.block3 = Block(256, 728, reps=2, stride=entry_block3_stride, BatchNorm=BatchNorm,
start_with_relu=True, grow_first=True, is_last=True)
# Middle flow
self.block4 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block5 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block6 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block7 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block8 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block9 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block10 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block11 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block12 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block13 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block14 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block15 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block16 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block17 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block18 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block19 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation,
BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
# Exit flow
self.block20 = Block(728, 1024, reps=2, stride=1, dilation=exit_block_dilations[0],
BatchNorm=BatchNorm, start_with_relu=True, grow_first=False, is_last=True)
self.conv3 = SeparableConv2d(1024, 1536, 3, stride=1, dilation=exit_block_dilations[1], BatchNorm=BatchNorm)
self.bn3 = BatchNorm(1536)
self.conv4 = SeparableConv2d(1536, 1536, 3, stride=1, dilation=exit_block_dilations[1], BatchNorm=BatchNorm)
self.bn4 = BatchNorm(1536)
self.conv5 = SeparableConv2d(1536, 2048, 3, stride=1, dilation=exit_block_dilations[1], BatchNorm=BatchNorm)
self.bn5 = BatchNorm(2048)
# Init weights
self._init_weight()
# Load pretrained model
if pretrained:
self._load_pretrained_model()
def forward(self, x):
# Entry flow
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.block1(x)
# add relu here
x = self.relu(x)
low_level_feat = x
x = self.block2(x)
x = self.block3(x)
# Middle flow
x = self.block4(x)
x = self.block5(x)
x = self.block6(x)
x = self.block7(x)
x = self.block8(x)
x = self.block9(x)
x = self.block10(x)
x = self.block11(x)
x = self.block12(x)
x = self.block13(x)
x = self.block14(x)
x = self.block15(x)
x = self.block16(x)
x = self.block17(x)
x = self.block18(x)
x = self.block19(x)
# Exit flow
x = self.block20(x)
x = self.relu(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.relu(x)
x = self.conv4(x)
x = self.bn4(x)
x = self.relu(x)
x = self.conv5(x)
x = self.bn5(x)
x = self.relu(x)
return x, low_level_feat
def _init_weight(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _load_pretrained_model(self):
pretrain_dict = model_zoo.load_url('http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth')
model_dict = {}
state_dict = self.state_dict()
for k, v in pretrain_dict.items():
if k in model_dict:
if 'pointwise' in k:
v = v.unsqueeze(-1).unsqueeze(-1)
if k.startswith('block11'):
model_dict[k] = v
model_dict[k.replace('block11', 'block12')] = v
model_dict[k.replace('block11', 'block13')] = v
model_dict[k.replace('block11', 'block14')] = v
model_dict[k.replace('block11', 'block15')] = v
model_dict[k.replace('block11', 'block16')] = v
model_dict[k.replace('block11', 'block17')] = v
model_dict[k.replace('block11', 'block18')] = v
model_dict[k.replace('block11', 'block19')] = v
elif k.startswith('block12'):
model_dict[k.replace('block12', 'block20')] = v
elif k.startswith('bn3'):
model_dict[k] = v
model_dict[k.replace('bn3', 'bn4')] = v
elif k.startswith('conv4'):
model_dict[k.replace('conv4', 'conv5')] = v
elif k.startswith('bn4'):
model_dict[k.replace('bn4', 'bn5')] = v
else:
model_dict[k] = v
state_dict.update(model_dict)
self.load_state_dict(state_dict)
if __name__ == "__main__":
import torch
model = AlignedXception(BatchNorm=nn.BatchNorm2d, pretrained=True, output_stride=16)
input = torch.rand(1, 3, 512, 512)
output, low_level_feat = model(input)
print(output.size())
print(low_level_feat.size()) | 11,561 | 39.145833 | 116 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/backbone/mobilenet.py | import torch
import torch.nn.functional as F
import torch.nn as nn
import math
from networks.deeplab.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d
import torch.utils.model_zoo as model_zoo
def conv_bn(inp, oup, stride, BatchNorm):
return nn.Sequential(
nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
BatchNorm(oup),
nn.ReLU6(inplace=True)
)
def fixed_padding(inputs, kernel_size, dilation):
kernel_size_effective = kernel_size + (kernel_size - 1) * (dilation - 1)
pad_total = kernel_size_effective - 1
pad_beg = pad_total // 2
pad_end = pad_total - pad_beg
padded_inputs = F.pad(inputs, (pad_beg, pad_end, pad_beg, pad_end))
return padded_inputs
class InvertedResidual(nn.Module):
def __init__(self, inp, oup, stride, dilation, expand_ratio, BatchNorm):
super(InvertedResidual, self).__init__()
self.stride = stride
assert stride in [1, 2]
hidden_dim = round(inp * expand_ratio)
self.use_res_connect = self.stride == 1 and inp == oup
self.kernel_size = 3
self.dilation = dilation
if expand_ratio == 1:
self.conv = nn.Sequential(
# dw
nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 0, dilation, groups=hidden_dim, bias=False),
BatchNorm(hidden_dim),
nn.ReLU6(inplace=True),
# pw-linear
nn.Conv2d(hidden_dim, oup, 1, 1, 0, 1, 1, bias=False),
BatchNorm(oup),
)
else:
self.conv = nn.Sequential(
# pw
nn.Conv2d(inp, hidden_dim, 1, 1, 0, 1, bias=False),
BatchNorm(hidden_dim),
nn.ReLU6(inplace=True),
# dw
nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 0, dilation, groups=hidden_dim, bias=False),
BatchNorm(hidden_dim),
nn.ReLU6(inplace=True),
# pw-linear
nn.Conv2d(hidden_dim, oup, 1, 1, 0, 1, bias=False),
BatchNorm(oup),
)
def forward(self, x):
x_pad = fixed_padding(x, self.kernel_size, dilation=self.dilation)
if self.use_res_connect:
x = x + self.conv(x_pad)
else:
x = self.conv(x_pad)
return x
class MobileNetV2(nn.Module):
def __init__(self, output_stride=8, BatchNorm=None, width_mult=1., pretrained=True):
super(MobileNetV2, self).__init__()
block = InvertedResidual
input_channel = 32
current_stride = 1
rate = 1
interverted_residual_setting = [
# t, c, n, s
[1, 16, 1, 1],
[6, 24, 2, 2],
[6, 32, 3, 2],
[6, 64, 4, 2],
[6, 96, 3, 1],
[6, 160, 3, 2],
[6, 320, 1, 1],
]
# building first layer
input_channel = int(input_channel * width_mult)
self.features = [conv_bn(3, input_channel, 2, BatchNorm)]
current_stride *= 2
# building inverted residual blocks
for t, c, n, s in interverted_residual_setting:
if current_stride == output_stride:
stride = 1
dilation = rate
rate *= s
else:
stride = s
dilation = 1
current_stride *= s
output_channel = int(c * width_mult)
for i in range(n):
if i == 0:
self.features.append(block(input_channel, output_channel, stride, dilation, t, BatchNorm))
else:
self.features.append(block(input_channel, output_channel, 1, dilation, t, BatchNorm))
input_channel = output_channel
self.features = nn.Sequential(*self.features)
self._initialize_weights()
if pretrained:
self._load_pretrained_model()
self.low_level_features = self.features[0:4]
self.high_level_features = self.features[4:]
def forward(self, x):
low_level_feat = self.low_level_features(x)
x = self.high_level_features(low_level_feat)
return x, low_level_feat
def _load_pretrained_model(self):
pretrain_dict = model_zoo.load_url('http://jeff95.me/models/mobilenet_v2-6a65762b.pth')
model_dict = {}
state_dict = self.state_dict()
for k, v in pretrain_dict.items():
if k in state_dict:
model_dict[k] = v
state_dict.update(model_dict)
self.load_state_dict(state_dict)
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
# n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
# m.weight.data.normal_(0, math.sqrt(2. / n))
torch.nn.init.kaiming_normal_(m.weight)
elif isinstance(m, SynchronizedBatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
if __name__ == "__main__":
input = torch.rand(1, 3, 512, 512)
model = MobileNetV2(output_stride=16, BatchNorm=nn.BatchNorm2d)
output, low_level_feat = model(input)
print(output.size())
print(low_level_feat.size())
| 5,398 | 34.519737 | 110 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/sync_batchnorm/replicate.py | # -*- coding: utf-8 -*-
# File : replicate.py
# Author : Jiayuan Mao
# Email : maojiayuan@gmail.com
# Date : 27/01/2018
#
# This file is part of Synchronized-BatchNorm-PyTorch.
# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
# Distributed under MIT License.
import functools
from torch.nn.parallel.data_parallel import DataParallel
__all__ = [
'CallbackContext',
'execute_replication_callbacks',
'DataParallelWithCallback',
'patch_replication_callback'
]
class CallbackContext(object):
pass
def execute_replication_callbacks(modules):
"""
Execute an replication callback `__data_parallel_replicate__` on each module created by original replication.
The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
Note that, as all modules are isomorphism, we assign each sub-module with a context
(shared among multiple copies of this module on different devices).
Through this context, different copies can share some information.
We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback
of any slave copies.
"""
master_copy = modules[0]
nr_modules = len(list(master_copy.modules()))
ctxs = [CallbackContext() for _ in range(nr_modules)]
for i, module in enumerate(modules):
for j, m in enumerate(module.modules()):
if hasattr(m, '__data_parallel_replicate__'):
m.__data_parallel_replicate__(ctxs[j], i)
class DataParallelWithCallback(DataParallel):
"""
Data Parallel with a replication callback.
An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by
original `replicate` function.
The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
Examples:
> sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
> sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
# sync_bn.__data_parallel_replicate__ will be invoked.
"""
def replicate(self, module, device_ids):
modules = super(DataParallelWithCallback, self).replicate(module, device_ids)
execute_replication_callbacks(modules)
return modules
def patch_replication_callback(data_parallel):
"""
Monkey-patch an existing `DataParallel` object. Add the replication callback.
Useful when you have customized `DataParallel` implementation.
Examples:
> sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
> sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
> patch_replication_callback(sync_bn)
# this is equivalent to
> sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
> sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
"""
assert isinstance(data_parallel, DataParallel)
old_replicate = data_parallel.replicate
@functools.wraps(old_replicate)
def new_replicate(module, device_ids):
modules = old_replicate(module, device_ids)
execute_replication_callbacks(modules)
return modules
data_parallel.replicate = new_replicate | 3,218 | 35.579545 | 115 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/sync_batchnorm/unittest.py | # -*- coding: utf-8 -*-
# File : unittest.py
# Author : Jiayuan Mao
# Email : maojiayuan@gmail.com
# Date : 27/01/2018
#
# This file is part of Synchronized-BatchNorm-PyTorch.
# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
# Distributed under MIT License.
import unittest
import numpy as np
from torch.autograd import Variable
def as_numpy(v):
if isinstance(v, Variable):
v = v.data
return v.cpu().numpy()
class TorchTestCase(unittest.TestCase):
def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3):
npa, npb = as_numpy(a), as_numpy(b)
self.assertTrue(
np.allclose(npa, npb, atol=atol),
'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max())
)
| 834 | 26.833333 | 157 | py |
IVOS-ATNet | IVOS-ATNet-master/networks/deeplab/sync_batchnorm/batchnorm.py | # -*- coding: utf-8 -*-
# File : batchnorm.py
# Author : Jiayuan Mao
# Email : maojiayuan@gmail.com
# Date : 27/01/2018
#
# This file is part of Synchronized-BatchNorm-PyTorch.
# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
# Distributed under MIT License.
import collections
import torch
import torch.nn.functional as F
from torch.nn.modules.batchnorm import _BatchNorm
from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
from .comm import SyncMaster
__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']
def _sum_ft(tensor):
"""sum over the first and last dimention"""
return tensor.sum(dim=0).sum(dim=-1)
def _unsqueeze_ft(tensor):
"""add new dementions at the front and the tail"""
return tensor.unsqueeze(0).unsqueeze(-1)
_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
class _SynchronizedBatchNorm(_BatchNorm):
def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):
super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
self._sync_master = SyncMaster(self._data_parallel_master)
self._is_parallel = False
self._parallel_id = None
self._slave_pipe = None
def forward(self, input):
# If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
if not (self._is_parallel and self.training):
return F.batch_norm(
input, self.running_mean, self.running_var, self.weight, self.bias,
self.training, self.momentum, self.eps)
# Resize the input to (B, C, -1).
input_shape = input.size()
input = input.view(input.size(0), self.num_features, -1)
# Compute the sum and square-sum.
sum_size = input.size(0) * input.size(2)
input_sum = _sum_ft(input)
input_ssum = _sum_ft(input ** 2)
# Reduce-and-broadcast the statistics.
if self._parallel_id == 0:
mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
else:
mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
# Compute the output.
if self.affine:
# MJY:: Fuse the multiplication for speed.
output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
else:
output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
# Reshape it.
return output.view(input_shape)
def __data_parallel_replicate__(self, ctx, copy_id):
self._is_parallel = True
self._parallel_id = copy_id
# parallel_id == 0 means master device.
if self._parallel_id == 0:
ctx.sync_master = self._sync_master
else:
self._slave_pipe = ctx.sync_master.register_slave(copy_id)
def _data_parallel_master(self, intermediates):
"""Reduce the sum and square-sum, compute the statistics, and broadcast it."""
# Always using same "device order" makes the ReduceAdd operation faster.
# Thanks to:: Tete Xiao (http://tetexiao.com/)
intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
to_reduce = [i[1][:2] for i in intermediates]
to_reduce = [j for i in to_reduce for j in i] # flatten
target_gpus = [i[1].sum.get_device() for i in intermediates]
sum_size = sum([i[1].sum_size for i in intermediates])
sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
outputs = []
for i, rec in enumerate(intermediates):
outputs.append((rec[0], _MasterMessage(*broadcasted[i * 2:i * 2 + 2])))
return outputs
def _compute_mean_std(self, sum_, ssum, size):
"""Compute the mean and standard-deviation with sum and square-sum. This method
also maintains the moving average on the master device."""
assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
mean = sum_ / size
sumvar = ssum - sum_ * mean
unbias_var = sumvar / (size - 1)
bias_var = sumvar / size
self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
return mean, bias_var.clamp(self.eps) ** -0.5
class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
mini-batch.
.. math::
y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
This module differs from the built-in PyTorch BatchNorm1d as the mean and
standard-deviation are reduced across all devices during training.
For example, when one uses `nn.DataParallel` to wrap the network during
training, PyTorch's implementation normalize the tensor on each device using
the statistics only on that device, which accelerated the computation and
is also easy to implement, but the statistics might be inaccurate.
Instead, in this synchronized version, the statistics will be computed
over all training samples distributed on multiple devices.
Note that, for one-GPU or CPU-only case, this module behaves exactly same
as the built-in PyTorch implementation.
The mean and standard-deviation are calculated per-dimension over
the mini-batches and gamma and beta are learnable parameter vectors
of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean
and variance. The running sum is kept with a default momentum of 0.1.
During evaluation, this running mean/variance is used for normalization.
Because the BatchNorm is done over the `C` dimension, computing statistics
on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
Args:
num_features: num_features from an expected input of size
`batch_size x num_features [x width]`
eps: a value added to the denominator for numerical stability.
Default: 1e-5
momentum: the value used for the running_mean and running_var
computation. Default: 0.1
affine: a boolean value that when set to ``True``, gives the layer learnable
affine parameters. Default: ``True``
Shape:
- Input: :math:`(N, C)` or :math:`(N, C, L)`
- Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = SynchronizedBatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = SynchronizedBatchNorm1d(100, affine=False)
>>> input = torch.autograd.Variable(torch.randn(20, 100))
>>> output = m(input)
"""
def _check_input_dim(self, input):
if input.dim() != 2 and input.dim() != 3:
raise ValueError('expected 2D or 3D input (got {}D input)'
.format(input.dim()))
super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
of 3d inputs
.. math::
y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
This module differs from the built-in PyTorch BatchNorm2d as the mean and
standard-deviation are reduced across all devices during training.
For example, when one uses `nn.DataParallel` to wrap the network during
training, PyTorch's implementation normalize the tensor on each device using
the statistics only on that device, which accelerated the computation and
is also easy to implement, but the statistics might be inaccurate.
Instead, in this synchronized version, the statistics will be computed
over all training samples distributed on multiple devices.
Note that, for one-GPU or CPU-only case, this module behaves exactly same
as the built-in PyTorch implementation.
The mean and standard-deviation are calculated per-dimension over
the mini-batches and gamma and beta are learnable parameter vectors
of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean
and variance. The running sum is kept with a default momentum of 0.1.
During evaluation, this running mean/variance is used for normalization.
Because the BatchNorm is done over the `C` dimension, computing statistics
on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
Args:
num_features: num_features from an expected input of
size batch_size x num_features x height x width
eps: a value added to the denominator for numerical stability.
Default: 1e-5
momentum: the value used for the running_mean and running_var
computation. Default: 0.1
affine: a boolean value that when set to ``True``, gives the layer learnable
affine parameters. Default: ``True``
Shape:
- Input: :math:`(N, C, H, W)`
- Output: :math:`(N, C, H, W)` (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = SynchronizedBatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = SynchronizedBatchNorm2d(100, affine=False)
>>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
>>> output = m(input)
"""
def _check_input_dim(self, input):
if input.dim() != 4:
raise ValueError('expected 4D input (got {}D input)'
.format(input.dim()))
super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
of 4d inputs
.. math::
y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
This module differs from the built-in PyTorch BatchNorm3d as the mean and
standard-deviation are reduced across all devices during training.
For example, when one uses `nn.DataParallel` to wrap the network during
training, PyTorch's implementation normalize the tensor on each device using
the statistics only on that device, which accelerated the computation and
is also easy to implement, but the statistics might be inaccurate.
Instead, in this synchronized version, the statistics will be computed
over all training samples distributed on multiple devices.
Note that, for one-GPU or CPU-only case, this module behaves exactly same
as the built-in PyTorch implementation.
The mean and standard-deviation are calculated per-dimension over
the mini-batches and gamma and beta are learnable parameter vectors
of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean
and variance. The running sum is kept with a default momentum of 0.1.
During evaluation, this running mean/variance is used for normalization.
Because the BatchNorm is done over the `C` dimension, computing statistics
on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
or Spatio-temporal BatchNorm
Args:
num_features: num_features from an expected input of
size batch_size x num_features x depth x height x width
eps: a value added to the denominator for numerical stability.
Default: 1e-5
momentum: the value used for the running_mean and running_var
computation. Default: 0.1
affine: a boolean value that when set to ``True``, gives the layer learnable
affine parameters. Default: ``True``
Shape:
- Input: :math:`(N, C, D, H, W)`
- Output: :math:`(N, C, D, H, W)` (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = SynchronizedBatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = SynchronizedBatchNorm3d(100, affine=False)
>>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
>>> output = m(input)
"""
def _check_input_dim(self, input):
if input.dim() != 5:
raise ValueError('expected 5D input (got {}D input)'
.format(input.dim()))
super(SynchronizedBatchNorm3d, self)._check_input_dim(input) | 12,932 | 44.861702 | 116 | py |
IVOS-ATNet | IVOS-ATNet-master/libs/custom_transforms.py | import numpy as np
import torch
class Normalize_ApplymeanvarImage(object):
def __init__(self, mean, var, change_channels=False):
self.mean = mean
self.var = var
self.change_channels = change_channels
def __call__(self, sample):
for elem in sample.keys():
if 'image' in elem:
if self.change_channels:
sample[elem] = sample[elem][:, :, [2, 1, 0]]
sample[elem] = sample[elem].astype(np.float32)/255.0
sample[elem] = np.subtract(sample[elem], np.array(self.mean, dtype=np.float32))/np.array(self.var, dtype=np.float32)
return sample
def __str__(self):
return 'SubtractMeanImage'+str(self.mean)
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
for elem in sample.keys():
if 'meta' in elem:
continue
tmp = sample[elem]
if tmp.ndim == 2:
tmp = tmp[:, :, np.newaxis]
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
tmp = tmp.transpose((2, 0, 1))
sample[elem] = torch.from_numpy(tmp)
return sample
| 1,272 | 25.520833 | 132 | py |
IVOS-ATNet | IVOS-ATNet-master/libs/utils_torch.py | import torch
def combine_masks_with_batch(masks, n_obj, th=0.5, return_as_onehot = False):
""" Combine mask for different objects.
Different methods are the following:
* `max_per_pixel`: Computes the final mask taking the pixel with the highest
probability for every object.
# Arguments
masks: Tensor with shape[B, nobj, H, W]. H, W on batches must be same
method: String. Method that specifies how the masks are fused.
# Returns
[B, 1, H, W]
"""
# masks : B, nobj, h, w
# output : h,w
marker = torch.argmax(masks, dim=1, keepdim=True) #
if not return_as_onehot:
out_mask = torch.unsqueeze(torch.zeros_like(masks)[:,0],1) #[B, 1, H, W]
for obj_id in range(n_obj):
try :tmp_mask = (marker == obj_id) * (masks[:,obj_id].unsqueeze(1) > th)
except: raise NotImplementedError
out_mask[tmp_mask] = obj_id + 1 # [B, 1, H, W]
if return_as_onehot:
out_mask = torch.zeros_like(masks) # [B, nobj, H, W]
for obj_id in range(n_obj):
try :tmp_mask = (marker == obj_id) * (masks[:,obj_id].unsqueeze(1) > th)
except: raise NotImplementedError
out_mask[:, obj_id] = tmp_mask[:,0].type(torch.cuda.FloatTensor)
return out_mask
| 1,315 | 34.567568 | 84 | py |
IVOS-ATNet | IVOS-ATNet-master/libs/davis2017_torchdataset.py | from __future__ import division
import os
import numpy as np
import cv2
from libs import utils
from torch.utils.data import Dataset
import json
from PIL import Image
class DAVIS2017(Dataset):
"""DAVIS 2017 dataset constructed using the PyTorch built-in functionalities"""
def __init__(self,
split='val',
root='',
num_frames=None,
custom_frames=None,
transform=None,
retname=False,
seq_name=None,
obj_id=None,
gt_only_first_frame=False,
no_gt=False,
batch_gt=False,
rgb=False,
effective_batch=None,
prev_round_masks = None,#f,h,w
):
"""Loads image to label pairs for tool pose estimation
split: Split or list of splits of the dataset
root: dataset directory with subfolders "JPEGImages" and "Annotations"
num_frames: Select number of frames of the sequence (None for all frames)
custom_frames: List or Tuple with the number of the frames to include
transform: Data transformations
retname: Retrieve meta data in the sample key 'meta'
seq_name: Use a specific sequence
obj_id: Use a specific object of a sequence (If None and sequence is specified, the batch_gt is True)
gt_only_first_frame: Provide the GT only in the first frame
no_gt: No GT is provided
batch_gt: For every frame sequence batch all the different objects gt
rgb: Use RGB channel order in the image
"""
if isinstance(split, str):
self.split = [split]
else:
split.sort()
self.split = split
self.db_root_dir = root
self.transform = transform
self.seq_name = seq_name
self.obj_id = obj_id
self.num_frames = num_frames
self.custom_frames = custom_frames
self.retname = retname
self.rgb = rgb
if seq_name is not None and obj_id is None:
batch_gt = True
self.batch_gt = batch_gt
self.all_seqs_list = []
self.seqs = []
for splt in self.split:
with open(os.path.join(self.db_root_dir, 'ImageSets', '2017', splt + '.txt')) as f:
seqs_tmp = f.readlines()
seqs_tmp = list(map(lambda elem: elem.strip(), seqs_tmp))
self.seqs.extend(seqs_tmp)
self.seq_list_file = os.path.join(self.db_root_dir, 'ImageSets', '2017',
'_'.join(self.split) + '_instances.txt')
# Precompute the dictionary with the objects per sequence
if not self._check_preprocess():
self._preprocess()
if self.seq_name is None:
img_list = []
labels = []
prevmask_list= []
for seq in self.seqs:
images = np.sort(os.listdir(os.path.join(self.db_root_dir, 'JPEGImages/480p/', seq.strip())))
images_path = list(map(lambda x: os.path.join('JPEGImages/480p/', seq.strip(), x), images))
lab = np.sort(os.listdir(os.path.join(self.db_root_dir, 'Annotations/480p/', seq.strip())))
lab_path = list(map(lambda x: os.path.join('Annotations/480p/', seq.strip(), x), lab))
if num_frames is not None:
seq_len = len(images_path)
num_frames = min(num_frames, seq_len)
frame_vector = np.arange(num_frames)
frames_ids = list(np.round(frame_vector*seq_len/float(num_frames)).astype(np.int))
frames_ids[-1] = min(frames_ids[-1], seq_len)
images_path = [images_path[x] for x in frames_ids]
if no_gt:
lab_path = [None] * len(images_path)
else:
lab_path = [lab_path[x] for x in frames_ids]
elif isinstance(custom_frames, tuple) or isinstance(custom_frames, list):
assert min(custom_frames) >= 0 and max(custom_frames) <= len(images_path)
images_path = [images_path[x] for x in custom_frames]
prevmask_list = [prev_round_masks[x] for x in custom_frames]
if no_gt:
lab_path = [None] * len(images_path)
else:
lab_path = [lab_path[x] for x in custom_frames]
if gt_only_first_frame:
lab_path = [lab_path[0]]
lab_path.extend([None] * (len(images_path) - 1))
elif no_gt:
lab_path = [None] * len(images_path)
if self.batch_gt:
obj = self.seq_dict[seq]
if -1 in obj:
obj.remove(-1)
for ii in range(len(img_list), len(images_path)+len(img_list)):
self.all_seqs_list.append([obj, ii])
else:
for obj in self.seq_dict[seq]:
if obj != -1:
for ii in range(len(img_list), len(images_path)+len(img_list)):
self.all_seqs_list.append([obj, ii])
img_list.extend(images_path)
labels.extend(lab_path)
else:
# Initialize the per sequence images for online training
assert self.seq_name in self.seq_dict.keys(), '{} not in {} set.'.format(self.seq_name, '_'.join(self.split))
names_img = np.sort(os.listdir(os.path.join(self.db_root_dir, 'JPEGImages/480p/', str(seq_name))))
img_list = list(map(lambda x: os.path.join('JPEGImages/480p/', str(seq_name), x), names_img))
name_label = np.sort(os.listdir(os.path.join(self.db_root_dir, 'Annotations/480p/', str(seq_name))))
labels = list(map(lambda x: os.path.join('Annotations/480p/', str(seq_name), x), name_label))
prevmask_list = []
if num_frames is not None:
seq_len = len(img_list)
num_frames = min(num_frames, seq_len)
frame_vector = np.arange(num_frames)
frames_ids = list(np.round(frame_vector * seq_len / float(num_frames)).astype(np.int))
frames_ids[-1] = min(frames_ids[-1], seq_len)
img_list = [img_list[x] for x in frames_ids]
if no_gt:
labels = [None] * len(img_list)
else:
labels = [labels[x] for x in frames_ids]
elif isinstance(custom_frames, tuple) or isinstance(custom_frames, list):
assert min(custom_frames) >= 0 and max(custom_frames) <= len(img_list)
img_list = [img_list[x] for x in custom_frames]
prevmask_list = [prev_round_masks[x] for x in custom_frames]
if no_gt:
labels = [None] * len(img_list)
else:
labels = [labels[x] for x in custom_frames]
if gt_only_first_frame:
labels = [labels[0]]
labels.extend([None]*(len(img_list)-1))
elif no_gt:
labels = [None] * len(img_list)
if obj_id is not None:
assert obj_id in self.seq_dict[self.seq_name], \
"{} doesn't have this object id {}.".format(self.seq_name, str(obj_id))
if self.batch_gt:
self.obj_id = self.seq_dict[self.seq_name]
if -1 in self.obj_id:
self.obj_id.remove(-1)
self.obj_id = [0]+self.obj_id
assert (len(labels) == len(img_list))
if effective_batch:
self.img_list = img_list * effective_batch
self.labels = labels * effective_batch
else:
self.img_list = img_list
self.labels = labels
self.prevmasks_list = prevmask_list
# print('Done initializing DAVIS2017 '+'_'.join(self.split)+' Dataset')
# print('Number of images: {}'.format(len(self.img_list)))
# if self.seq_name is None:
# print('Number of elements {}'.format(len(self.all_seqs_list)))
def _check_preprocess(self):
_seq_list_file = self.seq_list_file
if not os.path.isfile(_seq_list_file):
return False
else:
self.seq_dict = json.load(open(self.seq_list_file, 'r'))
return True
def _preprocess(self):
self.seq_dict = {}
for seq in self.seqs:
# Read object masks and get number of objects
name_label = np.sort(os.listdir(os.path.join(self.db_root_dir, 'Annotations/480p/', seq)))
label_path = os.path.join(self.db_root_dir, 'Annotations/480p/', seq, name_label[0])
_mask = np.array(Image.open(label_path))
_mask_ids = np.unique(_mask)
n_obj = _mask_ids[-1]
self.seq_dict[seq] = list(range(1, n_obj+1))
with open(self.seq_list_file, 'w') as outfile:
outfile.write('{{\n\t"{:s}": {:s}'.format(self.seqs[0], json.dumps(self.seq_dict[self.seqs[0]])))
for ii in range(1, len(self.seqs)):
outfile.write(',\n\t"{:s}": {:s}'.format(self.seqs[ii], json.dumps(self.seq_dict[self.seqs[ii]])))
outfile.write('\n}\n')
print('Preprocessing finished')
def __len__(self):
if self.seq_name is None:
return len(self.all_seqs_list)
else:
return len(self.img_list)
def __getitem__(self, idx):
# print(idx)
img, gt, prev_round_mask = self.make_img_gt_mask_pair(idx)
pad_img, pad_info = utils.apply_pad(img)
pad_gt= utils.apply_pad(gt, padinfo = pad_info)#h,w,n
sample = {'image': pad_img, 'gt': pad_gt}
if self.retname:
if self.seq_name is None:
obj_id = self.all_seqs_list[idx][0]
img_path = self.img_list[self.all_seqs_list[idx][1]]
else:
obj_id = self.obj_id
img_path = self.img_list[idx]
seq_name = img_path.split('/')[-2]
frame_id = img_path.split('/')[-1].split('.')[-2]
sample['meta'] = {'seq_name': seq_name,
'frame_id': frame_id,
'obj_id': obj_id,
'im_size': (img.shape[0], img.shape[1]),
'pad_size': (pad_img.shape[0], pad_img.shape[1]),
'pad_info': pad_info}
if self.transform is not None:
sample = self.transform(sample)
return sample
def make_img_gt_mask_pair(self, idx):
"""
Make the image-ground-truth pair
"""
prev_round_mask_tmp = self.prevmasks_list[idx]
if self.seq_name is None:
obj_id = self.all_seqs_list[idx][0]
img_path = self.img_list[self.all_seqs_list[idx][1]]
label_path = self.labels[self.all_seqs_list[idx][1]]
else:
obj_id = self.obj_id
img_path = self.img_list[idx]
label_path = self.labels[idx]
seq_name = img_path.split('/')[-2]
n_obj = 1 if isinstance(obj_id, int) else len(obj_id)
img = cv2.imread(os.path.join(self.db_root_dir, img_path))
img = np.array(img, dtype=np.float32)
if self.rgb:
img = img[:, :, [2, 1, 0]]
if label_path is not None:
label = Image.open(os.path.join(self.db_root_dir, label_path))
else:
if self.batch_gt:
gt = np.zeros(np.append(img.shape[:-1], n_obj), dtype=np.float32)
else:
gt = np.zeros(img.shape[:-1], dtype=np.float32)
if label_path is not None:
gt_tmp = np.array(label, dtype=np.uint8)
if self.batch_gt:
gt = np.zeros(np.append(n_obj, gt_tmp.shape), dtype=np.float32)
for ii, k in enumerate(obj_id):
gt[ii, :, :] = gt_tmp == k
gt = gt.transpose((1, 2, 0))
else:
gt = (gt_tmp == obj_id).astype(np.float32)
if self.batch_gt:
prev_round_mask = np.zeros(np.append(img.shape[:-1], n_obj), dtype=np.float32)
for ii, k in enumerate(obj_id):
prev_round_mask[:, :, ii] = prev_round_mask_tmp == k
else:
prev_round_mask = (prev_round_mask_tmp == obj_id).astype(np.float32)
return img, gt, prev_round_mask
def get_img_size(self):
img = cv2.imread(os.path.join(self.db_root_dir, self.img_list[0]))
return list(img.shape[:2])
def __str__(self):
return 'DAVIS2017'
if __name__ =='__main__':
a = DAVIS2017(split='val', custom_frames=[21,22], seq_name='gold-fish', rgb=True, no_gt=False, retname=True,prev_round_masks=np.zeros([40,480,854]))
c= a.__getitem__(0)
b=1 | 13,111 | 42.417219 | 153 | py |
GraphLoG | GraphLoG-main/pretrain_graphlog.py | import argparse
from loader import MoleculeDataset
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from tqdm import tqdm
import numpy as np
import os, sys
import pdb
import copy
import random
from model import GNN, ProjectNet
from sklearn.metrics import roc_auc_score
from splitters import scaffold_split, random_split, random_scaffold_split
import pandas as pd
from util import ExtractSubstructureContextPair
from torch_geometric.data import DataLoader
from dataloader import DataLoaderSubstructContext
from torch_geometric.nn import global_add_pool, global_mean_pool, global_max_pool
from tensorboardX import SummaryWriter
# Graph pooling functions
def pool_func(x, batch, mode = "mean"):
if mode == "sum":
return global_add_pool(x, batch)
elif mode == "mean":
return global_mean_pool(x, batch)
elif mode == "max":
return global_max_pool(x, batch)
# Mask some nodes in a graph
def mask_nodes(batch, args, num_atom_type=119):
masked_node_indices = list()
# select indices of masked nodes
for i in range(batch.batch[-1] + 1):
idx = torch.nonzero((batch.batch == i).float()).squeeze(-1)
num_node = idx.shape[0]
if args.mask_num == 0:
sample_size = int(num_node * args.mask_rate + 1)
else:
sample_size = min(args.mask_num, int(num_node * 0.5))
masked_node_idx = random.sample(idx.tolist(), sample_size)
masked_node_idx.sort()
masked_node_indices += masked_node_idx
batch.masked_node_indices = torch.tensor(masked_node_indices)
# mask nodes' features
for node_idx in masked_node_indices:
batch.x[node_idx] = torch.tensor([num_atom_type, 0])
return batch
# NCE loss within a graph
def intra_NCE_loss(node_reps, node_modify_reps, batch, tau=0.1, epsilon=1e-6):
node_reps_norm = torch.norm(node_reps, dim = 1).unsqueeze(-1)
node_modify_reps_norm = torch.norm(node_modify_reps, dim = 1).unsqueeze(-1)
sim = torch.mm(node_reps, node_modify_reps.t()) / (
torch.mm(node_reps_norm, node_modify_reps_norm.t()) + epsilon)
exp_sim = torch.exp(sim / tau)
mask = torch.stack([(batch.batch == i).float() for i in batch.batch.tolist()], dim = 1)
exp_sim_mask = exp_sim * mask
exp_sim_all = torch.index_select(exp_sim_mask, 1, batch.masked_node_indices)
exp_sim_positive = torch.index_select(exp_sim_all, 0, batch.masked_node_indices)
positive_ratio = exp_sim_positive.sum(0) / (exp_sim_all.sum(0) + epsilon)
NCE_loss = -torch.log(positive_ratio).sum() / batch.masked_node_indices.shape[0]
mask_select = torch.index_select(mask, 1, batch.masked_node_indices)
thr = 1. / mask_select.sum(0)
correct_cnt = (positive_ratio > thr).float().sum()
return NCE_loss, correct_cnt
# NCE loss across different graphs
def inter_NCE_loss(graph_reps, graph_modify_reps, device, tau=0.1, epsilon=1e-6):
graph_reps_norm = torch.norm(graph_reps, dim = 1).unsqueeze(-1)
graph_modify_reps_norm = torch.norm(graph_modify_reps, dim = 1).unsqueeze(-1)
sim = torch.mm(graph_reps, graph_modify_reps.t()) / (
torch.mm(graph_reps_norm, graph_modify_reps_norm.t()) + epsilon)
exp_sim = torch.exp(sim / tau)
mask = torch.eye(graph_reps.shape[0]).to(device)
positive = (exp_sim * mask).sum(0)
negative = (exp_sim * (1 - mask)).sum(0)
positive_ratio = positive / (positive + negative + epsilon)
NCE_loss = -torch.log(positive_ratio).sum() / graph_reps.shape[0]
thr = 1. / ((1 - mask).sum(0) + 1.)
correct_cnt = (positive_ratio > thr).float().sum()
return NCE_loss, correct_cnt
# NCE loss for global-local mutual information maximization
def gl_NCE_loss(node_reps, graph_reps, batch, tau=0.1, epsilon=1e-6):
node_reps_norm = torch.norm(node_reps, dim = 1).unsqueeze(-1)
graph_reps_norm = torch.norm(graph_reps, dim = 1).unsqueeze(-1)
sim = torch.mm(node_reps, graph_reps.t()) / (
torch.mm(node_reps_norm, graph_reps_norm.t()) + epsilon)
exp_sim = torch.exp(sim / tau)
mask = torch.stack([(batch == i).float() for i in range(graph_reps.shape[0])], dim = 1)
positive = exp_sim * mask
negative = exp_sim * (1 - mask)
positive_ratio = positive / (positive + negative.sum(0).unsqueeze(0) + epsilon)
NCE_loss = -torch.log(positive_ratio + (1 - mask)).sum() / node_reps.shape[0]
thr = 1. / ((1 - mask).sum(0) + 1.).unsqueeze(0)
correct_cnt = (positive_ratio > thr).float().sum()
return NCE_loss, correct_cnt
# NCE loss between graphs and prototypes
def proto_NCE_loss(graph_reps, tau=0.1, epsilon=1e-6):
global proto, proto_connection
# similarity for original and modified graphs
graph_reps_norm = torch.norm(graph_reps, dim=1).unsqueeze(-1)
exp_sim_list = []
mask_list = []
NCE_loss = 0
for i in range(len(proto)-1, -1, -1):
tmp_proto = proto[i]
proto_norm = torch.norm(tmp_proto, dim=1).unsqueeze(-1)
sim = torch.mm(graph_reps, tmp_proto.t()) / (
torch.mm(graph_reps_norm, proto_norm.t()) + epsilon)
exp_sim = torch.exp(sim / tau)
if i != (len(proto) - 1):
# apply the connection mask
exp_sim_last = exp_sim_list[-1]
idx_last = torch.argmax(exp_sim_last, dim = 1).unsqueeze(-1)
connection = proto_connection[i]
connection_mask = (connection.unsqueeze(0) == idx_last.float()).float()
exp_sim = exp_sim * connection_mask
# define NCE loss between prototypes from consecutive layers
upper_proto = proto[i+1]
upper_proto_norm = torch.norm(upper_proto, dim=1).unsqueeze(-1)
proto_sim = torch.mm(tmp_proto, upper_proto.t()) / (
torch.mm(proto_norm, upper_proto_norm.t()) + epsilon)
proto_exp_sim = torch.exp(proto_sim / tau)
proto_positive_list = [proto_exp_sim[j, connection[j].long()] for j in range(proto_exp_sim.shape[0])]
proto_positive = torch.stack(proto_positive_list, dim=0)
proto_positive_ratio = proto_positive / (proto_exp_sim.sum(1) + epsilon)
NCE_loss += -torch.log(proto_positive_ratio).mean()
mask = (exp_sim == exp_sim.max(1)[0].unsqueeze(-1)).float()
exp_sim_list.append(exp_sim)
mask_list.append(mask)
# define NCE loss between graph embedding and prototypes
for i in range(len(proto)):
exp_sim = exp_sim_list[i]
mask = mask_list[i]
positive = exp_sim * mask
negative = exp_sim * (1 - mask)
positive_ratio = positive.sum(1) / (positive.sum(1) + negative.sum(1) + epsilon)
NCE_loss += -torch.log(positive_ratio).mean()
return NCE_loss
# Update prototypes with batch information
def update_proto_lowest(graph_reps, decay_ratio=0.7, epsilon=1e-6):
global proto, proto_state
graph_reps_norm = torch.norm(graph_reps, dim=1).unsqueeze(-1)
proto_norm = torch.norm(proto[0], dim=1).unsqueeze(-1)
sim = torch.mm(graph_reps, proto[0].t()) / (
torch.mm(graph_reps_norm, proto_norm.t()) + epsilon)
# update states of prototypes
mask = (sim == sim.max(1)[0].unsqueeze(-1)).float()
cnt = mask.sum(0)
proto_state[0].data = proto_state[0].data + cnt.data
# update prototypes
batch_cnt = mask.t() / (cnt.unsqueeze(-1) + epsilon)
batch_mean = torch.mm(batch_cnt, graph_reps)
proto[0].data = proto[0].data * (cnt == 0).float().unsqueeze(-1).data + (
proto[0].data * decay_ratio + batch_mean.data * (1 - decay_ratio)) * (cnt != 0).float().unsqueeze(-1).data
return
# Initialze prototypes and their state
def init_proto_lowest(args, model, proj, loader, device, num_iter = 5):
model.eval()
proj.eval()
for iter in range(num_iter):
for step, batch in enumerate(tqdm(loader, desc="Iteration")):
batch = batch.to(device)
# get node and graph representations
node_reps = model(batch.x, batch.edge_index, batch.edge_attr)
graph_reps = pool_func(node_reps, batch.batch, mode=args.graph_pooling)
# feature projection
graph_reps_proj = proj(graph_reps)
# update prototypes
update_proto_lowest(graph_reps_proj, decay_ratio = args.decay_ratio)
global proto, proto_state
idx = torch.nonzero((proto_state[0] >= 2).float()).squeeze(-1)
proto_selected = torch.index_select(proto[0], 0, idx)
proto_selected.requires_grad = True
return proto_selected
# Initialze prototypes and their state
def init_proto(args, index, device, num_iter = 20):
global proto, proto_state
proto_connection = torch.zeros(proto[index-1].shape[0]).to(device)
for iter in range(num_iter):
for i in range(proto[index-1].shape[0]):
# update the closest prototype
sim = torch.mm(proto[index], proto[index-1][i,:].unsqueeze(-1)).squeeze(-1)
idx = torch.argmax(sim)
if iter == (num_iter - 1):
proto_state[index][idx] = 1
proto_connection[i] = idx
proto[index].data[idx, :] = proto[index].data[idx, :] * args.decay_ratio + \
proto[index-1].data[i, :] * (1 - args.decay_ratio)
# penalize rival
sim[idx] = 0
rival_idx = torch.argmax(sim)
proto[index].data[rival_idx, :] = proto[index].data[rival_idx, :] * (2 - args.decay_ratio) - \
proto[index-1].data[i, :] * (1 - args.decay_ratio)
indices = torch.nonzero(proto_state[index]).squeeze(-1)
proto_selected = torch.index_select(proto[index], 0, indices)
proto_selected.requires_grad = True
for i in range(indices.shape[0]):
idx = indices[i]
idx_connection = torch.nonzero((proto_connection == idx.float()).float()).squeeze(-1)
proto_connection[idx_connection] = i
return proto_selected, proto_connection
# For one epoch pretraining
def pretrain(args, model, proj, loader, optimizer, device):
model.train()
proj.train()
NCE_loss_intra_cnt = 0
NCE_loss_inter_cnt = 0
correct_intra_cnt = 0
correct_inter_cnt = 0
total_intra_cnt = 0
total_inter_cnt = 0
for step, batch in enumerate(tqdm(loader, desc="Iteration")):
batch_modify = copy.deepcopy(batch)
batch_modify = mask_nodes(batch_modify, args)
batch, batch_modify = batch.to(device), batch_modify.to(device)
# get node and graph representations
node_reps = model(batch.x, batch.edge_index, batch.edge_attr)
node_modify_reps = model(batch_modify.x, batch_modify.edge_index, batch_modify.edge_attr)
graph_reps = pool_func(node_reps, batch.batch, mode=args.graph_pooling)
graph_modify_reps = pool_func(node_modify_reps, batch_modify.batch, mode=args.graph_pooling)
# feature projection
node_reps_proj = proj(node_reps)
node_modify_reps_proj = proj(node_modify_reps)
graph_reps_proj = proj(graph_reps)
graph_modify_reps_proj = proj(graph_modify_reps)
# NCE loss
NCE_loss_intra, correct_intra = intra_NCE_loss(node_reps_proj, node_modify_reps_proj,
batch_modify, tau=args.tau)
NCE_loss_inter, correct_inter = inter_NCE_loss(graph_reps_proj, graph_modify_reps_proj,
device, tau=args.tau)
NCE_loss_intra_cnt += NCE_loss_intra.item()
NCE_loss_inter_cnt += NCE_loss_inter.item()
correct_intra_cnt += correct_intra
correct_inter_cnt += correct_inter
total_intra_cnt += batch_modify.masked_node_indices.shape[0]
total_inter_cnt += graph_reps.shape[0]
# optimization
optimizer.zero_grad()
NCE_loss = args.alpha * NCE_loss_intra + args.beta * NCE_loss_inter
NCE_loss.backward()
optimizer.step()
if (step + 1) % args.disp_interval == 0:
print(
'iteration: %d, intra NCE loss: %f, intra acc: %f, inter NCE loss: %f, inter acc: %f' % (
step + 1, NCE_loss_intra.item(), float(correct_intra_cnt) / float(total_intra_cnt),
NCE_loss_inter.item(), float(correct_inter_cnt) / float(total_inter_cnt)))
return NCE_loss_intra_cnt / step, float(correct_intra_cnt) / float(
total_intra_cnt), NCE_loss_inter_cnt / step, float(correct_inter_cnt) / float(total_inter_cnt)
# For every epoch training
def train(args, model, proj, loader, optimizer, device):
global proto, proto_connection
model.train()
proj.train()
NCE_loss_intra_cnt = 0
NCE_loss_inter_cnt = 0
NCE_loss_proto_cnt = 0
correct_intra_cnt = 0
correct_inter_cnt = 0
total_intra_cnt = 0
total_inter_cnt = 0
for step, batch in enumerate(tqdm(loader, desc="Iteration")):
batch_modify = copy.deepcopy(batch)
batch_modify = mask_nodes(batch_modify, args)
batch, batch_modify = batch.to(device), batch_modify.to(device)
# get node and graph representations
node_reps = model(batch.x, batch.edge_index, batch.edge_attr)
node_modify_reps = model(batch_modify.x, batch_modify.edge_index, batch_modify.edge_attr)
graph_reps = pool_func(node_reps, batch.batch, mode=args.graph_pooling)
graph_modify_reps = pool_func(node_modify_reps, batch_modify.batch, mode=args.graph_pooling)
# feature projection
node_reps_proj = proj(node_reps)
node_modify_reps_proj = proj(node_modify_reps)
graph_reps_proj = proj(graph_reps)
graph_modify_reps_proj = proj(graph_modify_reps)
# NCE loss
NCE_loss_intra, correct_intra = intra_NCE_loss(node_reps_proj, node_modify_reps_proj,
batch_modify, tau=args.tau)
NCE_loss_inter, correct_inter = inter_NCE_loss(graph_reps_proj, graph_modify_reps_proj,
device, tau=args.tau)
NCE_loss_proto = proto_NCE_loss(graph_reps_proj, tau=args.tau)
NCE_loss_intra_cnt += NCE_loss_intra.item()
NCE_loss_inter_cnt += NCE_loss_inter.item()
NCE_loss_proto_cnt += NCE_loss_proto.item()
correct_intra_cnt += correct_intra
correct_inter_cnt += correct_inter
total_intra_cnt += batch_modify.masked_node_indices.shape[0]
total_inter_cnt += graph_reps.shape[0]
# optimization
optimizer.zero_grad()
NCE_loss = args.alpha * NCE_loss_intra + args.beta * NCE_loss_inter + \
args.gamma * NCE_loss_proto
NCE_loss.backward()
optimizer.step()
if (step + 1) % args.disp_interval == 0:
print(
'iteration: %d, intra NCE loss: %f, intra acc: %f, inter NCE loss: %f, inter acc: %f' % (
step + 1, NCE_loss_intra.item(), float(correct_intra_cnt) / float(total_intra_cnt),
NCE_loss_inter.item(), float(correct_inter_cnt) / float(total_inter_cnt)))
template = 'iteration: %d, proto NCE loss: %f'
value_list = [step + 1, NCE_loss_proto.item()]
for i in range(args.hierarchy):
template += (', active num ' + str(i+1) + ': %d')
value_list.append(proto[i].shape[0])
print (template % tuple(value_list))
return NCE_loss_intra_cnt / step, float(correct_intra_cnt) / float(
total_intra_cnt), NCE_loss_inter_cnt / step, float(correct_inter_cnt) / float(
total_inter_cnt), NCE_loss_proto_cnt / step
def main():
# Training settings
parser = argparse.ArgumentParser(description='GraphLoG for GNN pre-training')
parser.add_argument('--device', type=int, default=0,
help='which gpu to use if any (default: 0)')
parser.add_argument('--batch_size', type=int, default=512,
help='input batch size for training (default: 512)')
parser.add_argument('--local_epochs', type=int, default=1,
help='number of epochs for local learning (default: 1)')
parser.add_argument('--global_epochs', type=int, default=10,
help='number of epochs for global learning (default: 10)')
parser.add_argument('--lr', type=float, default=0.001,
help='learning rate (default: 0.001)')
parser.add_argument('--decay', type=float, default=0,
help='weight decay (default: 0)')
parser.add_argument('--num_layer', type=int, default=5,
help='number of GNN message passing layers (default: 5).')
parser.add_argument('--emb_dim', type=int, default=300,
help='embedding dimensions (default: 300)')
parser.add_argument('--dropout_ratio', type=float, default=0,
help='dropout ratio (default: 0)')
parser.add_argument('--mask_rate', type=float, default=0.3,
help='dropout ratio (default: 0.3)')
parser.add_argument('--mask_num', type=int, default=0,
help='the number of modified nodes (default: 0)')
parser.add_argument('--JK', type=str, default="last",
help='how the node features are combined across layers. last, sum, max or concat')
parser.add_argument('--graph_pooling', type=str, default="mean",
help='graph level pooling (sum, mean, max)')
parser.add_argument('--dataset', type=str, default='zinc_standard_agent',
help='root directory of dataset for pretraining')
parser.add_argument('--output_model_file', type=str, default='', help='filename to output the model')
parser.add_argument('--gnn_type', type=str, default="gin")
parser.add_argument('--seed', type=int, default=0, help="Seed for splitting dataset.")
parser.add_argument('--num_workers', type=int, default=1, help='number of workers for dataset loading')
parser.add_argument('--tau', type=float, default=0.04, help='the temperature parameter for softmax')
parser.add_argument('--decay_ratio', type=float, default=0.95, help='the decay ratio for moving average')
parser.add_argument('--num_proto', type=int, default=50, help='the number of initial prototypes')
parser.add_argument('--hierarchy', type=int, default=3, help='the number of hierarchy')
parser.add_argument('--alpha', type=float, default=1, help='the weight of intra-graph NCE loss')
parser.add_argument('--beta', type=float, default=1, help='the weight of inter-graph NCE loss')
parser.add_argument('--gamma', type=float, default=0.1, help='the weight of prototype NCE loss')
parser.add_argument('--disp_interval', type=int, default=10, help='the display interval')
args = parser.parse_args()
torch.manual_seed(args.seed)
np.random.seed(args.seed)
device = torch.device("cuda:" + str(args.device)) if torch.cuda.is_available() else torch.device("cpu")
if torch.cuda.is_available():
torch.cuda.manual_seed_all(args.seed)
print("num GNN layer: %d" % (args.num_layer))
# set up dataset and transform function.
dataset = MoleculeDataset("./dataset/" + args.dataset, dataset=args.dataset)
loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers)
# set up pretraining models and feature projector
model = GNN(args.num_layer, args.emb_dim, JK=args.JK, drop_ratio=args.dropout_ratio,
gnn_type=args.gnn_type).to(device)
if args.JK == 'concat':
proj = ProjectNet((args.num_layer + 1) * args.emb_dim).to(device)
else:
proj = ProjectNet(args.emb_dim).to(device)
# set up the optimizer for pretraining
model_param_group = [{"params": model.parameters(), "lr": args.lr},
{"params": proj.parameters(), "lr": args.lr}]
optimizer_pretrain = optim.Adam(model_param_group, lr=args.lr, weight_decay=args.decay)
# initialize prototypes and their state
global proto, proto_state, proto_connection
if args.JK == 'concat':
proto = [torch.rand((args.num_proto, (args.num_layer + 1) * args.emb_dim)).to(device) for i in
range(args.hierarchy)]
else:
proto = [torch.rand((args.num_proto, args.emb_dim)).to(device) for i in range(args.hierarchy)]
proto_state = [torch.zeros(args.num_proto).to(device) for i in range(args.hierarchy)]
proto_connection = []
# pre-training with only local objective
for epoch in range(1, args.local_epochs + 1):
print("====epoch " + str(epoch))
train_intra_loss, train_intra_acc, train_inter_loss, train_inter_acc = pretrain(
args, model, proj, loader, optimizer_pretrain, device)
print(train_intra_loss, train_intra_acc, train_inter_loss, train_inter_acc)
print("")
# initialize prototypes and their state according to pretrained representations
print("Initalize prototypes: layer 1")
tmp_proto = init_proto_lowest(args, model, proj, loader, device)
proto[0] = tmp_proto
for i in range(1, args.hierarchy):
print ("Initialize prototypes: layer ", i + 1)
tmp_proto, tmp_proto_connection = init_proto(args, i, device)
proto[i] = tmp_proto
proto_connection.append(tmp_proto_connection)
# set up the optimizer
model_param_group = [{"params": model.parameters(), "lr": args.lr},
{"params": proj.parameters(), "lr": args.lr}]
for i in range(args.hierarchy):
model_param_group += [{'params': proto[i], 'lr': args.lr, 'weight_decay': 0}]
optimizer = optim.Adam(model_param_group, lr=args.lr, weight_decay=args.decay)
# Training with local and global objectives
for epoch in range(1, args.global_epochs + 1):
print("====epoch " + str(epoch))
train_intra_loss, train_intra_acc, train_inter_loss, train_inter_acc, train_proto_loss = train(
args, model, proj, loader, optimizer, device)
print(train_intra_loss, train_intra_acc, train_inter_loss, train_inter_acc, train_proto_loss)
if not args.output_model_file == "":
torch.save(model.state_dict(), args.output_model_file + ".pth")
os.system('watch nvidia-smi')
if __name__ == "__main__":
main() | 22,536 | 43.364173 | 118 | py |
GraphLoG | GraphLoG-main/batch.py | import torch
from torch_geometric.data import Data, Batch
class BatchMasking(Data):
r"""A plain old python object modeling a batch of graphs as one big
(dicconnected) graph. With :class:`torch_geometric.data.Data` being the
base class, all its methods can also be used here.
In addition, single graphs can be reconstructed via the assignment vector
:obj:`batch`, which maps each node to its respective graph identifier.
"""
def __init__(self, batch=None, **kwargs):
super(BatchMasking, self).__init__(**kwargs)
self.batch = batch
@staticmethod
def from_data_list(data_list):
r"""Constructs a batch object from a python list holding
:class:`torch_geometric.data.Data` objects.
The assignment vector :obj:`batch` is created on the fly."""
keys = [set(data.keys) for data in data_list]
keys = list(set.union(*keys))
assert 'batch' not in keys
batch = BatchMasking()
for key in keys:
batch[key] = []
batch.batch = []
cumsum_node = 0
cumsum_edge = 0
for i, data in enumerate(data_list):
num_nodes = data.num_nodes
batch.batch.append(torch.full((num_nodes, ), i, dtype=torch.long))
for key in data.keys:
item = data[key]
if key in ['edge_index', 'masked_atom_indices']:
item = item + cumsum_node
elif key == 'connected_edge_indices':
item = item + cumsum_edge
batch[key].append(item)
cumsum_node += num_nodes
cumsum_edge += data.edge_index.shape[1]
for key in keys:
batch[key] = torch.cat(
batch[key], dim=data_list[0].cat_dim(key, batch[key][0]))
batch.batch = torch.cat(batch.batch, dim=-1)
return batch.contiguous()
def cumsum(self, key, item):
r"""If :obj:`True`, the attribute :obj:`key` with content :obj:`item`
should be added up cumulatively before concatenated together.
.. note::
This method is for internal use only, and should only be overridden
if the batch concatenation process is corrupted for a specific data
attribute.
"""
return key in ['edge_index', 'face', 'masked_atom_indices', 'connected_edge_indices']
@property
def num_graphs(self):
"""Returns the number of graphs in the batch."""
return self.batch[-1].item() + 1
class BatchAE(Data):
r"""A plain old python object modeling a batch of graphs as one big
(dicconnected) graph. With :class:`torch_geometric.data.Data` being the
base class, all its methods can also be used here.
In addition, single graphs can be reconstructed via the assignment vector
:obj:`batch`, which maps each node to its respective graph identifier.
"""
def __init__(self, batch=None, **kwargs):
super(BatchAE, self).__init__(**kwargs)
self.batch = batch
@staticmethod
def from_data_list(data_list):
r"""Constructs a batch object from a python list holding
:class:`torch_geometric.data.Data` objects.
The assignment vector :obj:`batch` is created on the fly."""
keys = [set(data.keys) for data in data_list]
keys = list(set.union(*keys))
assert 'batch' not in keys
batch = BatchAE()
for key in keys:
batch[key] = []
batch.batch = []
cumsum_node = 0
for i, data in enumerate(data_list):
num_nodes = data.num_nodes
batch.batch.append(torch.full((num_nodes, ), i, dtype=torch.long))
for key in data.keys:
item = data[key]
if key in ['edge_index', 'negative_edge_index']:
item = item + cumsum_node
batch[key].append(item)
cumsum_node += num_nodes
for key in keys:
batch[key] = torch.cat(
batch[key], dim=batch.cat_dim(key))
batch.batch = torch.cat(batch.batch, dim=-1)
return batch.contiguous()
@property
def num_graphs(self):
"""Returns the number of graphs in the batch."""
return self.batch[-1].item() + 1
def cat_dim(self, key):
return -1 if key in ["edge_index", "negative_edge_index"] else 0
class BatchSubstructContext(Data):
r"""A plain old python object modeling a batch of graphs as one big
(dicconnected) graph. With :class:`torch_geometric.data.Data` being the
base class, all its methods can also be used here.
In addition, single graphs can be reconstructed via the assignment vector
:obj:`batch`, which maps each node to its respective graph identifier.
"""
"""
Specialized batching for substructure context pair!
"""
def __init__(self, batch=None, **kwargs):
super(BatchSubstructContext, self).__init__(**kwargs)
self.batch = batch
@staticmethod
def from_data_list(data_list):
r"""Constructs a batch object from a python list holding
:class:`torch_geometric.data.Data` objects.
The assignment vector :obj:`batch` is created on the fly."""
#keys = [set(data.keys) for data in data_list]
#keys = list(set.union(*keys))
#assert 'batch' not in keys
batch = BatchSubstructContext()
keys = ["center_substruct_idx", "edge_attr_substruct", "edge_index_substruct", "x_substruct", "overlap_context_substruct_idx", "edge_attr_context", "edge_index_context", "x_context"]
for key in keys:
#print(key)
batch[key] = []
#batch.batch = []
#used for pooling the context
batch.batch_overlapped_context = []
batch.overlapped_context_size = []
cumsum_main = 0
cumsum_substruct = 0
cumsum_context = 0
i = 0
for data in data_list:
#If there is no context, just skip!!
if hasattr(data, "x_context"):
num_nodes = data.num_nodes
num_nodes_substruct = len(data.x_substruct)
num_nodes_context = len(data.x_context)
#batch.batch.append(torch.full((num_nodes, ), i, dtype=torch.long))
batch.batch_overlapped_context.append(torch.full((len(data.overlap_context_substruct_idx), ), i, dtype=torch.long))
batch.overlapped_context_size.append(len(data.overlap_context_substruct_idx))
###batching for the main graph
#for key in data.keys:
# if not "context" in key and not "substruct" in key:
# item = data[key]
# item = item + cumsum_main if batch.cumsum(key, item) else item
# batch[key].append(item)
###batching for the substructure graph
for key in ["center_substruct_idx", "edge_attr_substruct", "edge_index_substruct", "x_substruct"]:
item = data[key]
item = item + cumsum_substruct if batch.cumsum(key, item) else item
batch[key].append(item)
###batching for the context graph
for key in ["overlap_context_substruct_idx", "edge_attr_context", "edge_index_context", "x_context"]:
item = data[key]
item = item + cumsum_context if batch.cumsum(key, item) else item
batch[key].append(item)
cumsum_main += num_nodes
cumsum_substruct += num_nodes_substruct
cumsum_context += num_nodes_context
i += 1
for key in keys:
batch[key] = torch.cat(
batch[key], dim=batch.cat_dim(key))
#batch.batch = torch.cat(batch.batch, dim=-1)
batch.batch_overlapped_context = torch.cat(batch.batch_overlapped_context, dim=-1)
batch.overlapped_context_size = torch.LongTensor(batch.overlapped_context_size)
return batch.contiguous()
def cat_dim(self, key):
return -1 if key in ["edge_index", "edge_index_substruct", "edge_index_context"] else 0
def cumsum(self, key, item):
r"""If :obj:`True`, the attribute :obj:`key` with content :obj:`item`
should be added up cumulatively before concatenated together.
.. note::
This method is for internal use only, and should only be overridden
if the batch concatenation process is corrupted for a specific data
attribute.
"""
return key in ["edge_index", "edge_index_substruct", "edge_index_context", "overlap_context_substruct_idx", "center_substruct_idx"]
@property
def num_graphs(self):
"""Returns the number of graphs in the batch."""
return self.batch[-1].item() + 1
| 8,940 | 38.043668 | 190 | py |
GraphLoG | GraphLoG-main/dataloader.py | import torch.utils.data
from torch.utils.data.dataloader import default_collate
from batch import BatchSubstructContext, BatchMasking, BatchAE
class DataLoaderSubstructContext(torch.utils.data.DataLoader):
r"""Data loader which merges data objects from a
:class:`torch_geometric.data.dataset` to a mini-batch.
Args:
dataset (Dataset): The dataset from which to load the data.
batch_size (int, optional): How may samples per batch to load.
(default: :obj:`1`)
shuffle (bool, optional): If set to :obj:`True`, the data will be
reshuffled at every epoch (default: :obj:`True`)
"""
def __init__(self, dataset, batch_size=1, shuffle=True, **kwargs):
super(DataLoaderSubstructContext, self).__init__(
dataset,
batch_size,
shuffle,
collate_fn=lambda data_list: BatchSubstructContext.from_data_list(data_list),
**kwargs)
class DataLoaderMasking(torch.utils.data.DataLoader):
r"""Data loader which merges data objects from a
:class:`torch_geometric.data.dataset` to a mini-batch.
Args:
dataset (Dataset): The dataset from which to load the data.
batch_size (int, optional): How may samples per batch to load.
(default: :obj:`1`)
shuffle (bool, optional): If set to :obj:`True`, the data will be
reshuffled at every epoch (default: :obj:`True`)
"""
def __init__(self, dataset, batch_size=1, shuffle=True, **kwargs):
super(DataLoaderMasking, self).__init__(
dataset,
batch_size,
shuffle,
collate_fn=lambda data_list: BatchMasking.from_data_list(data_list),
**kwargs)
class DataLoaderAE(torch.utils.data.DataLoader):
r"""Data loader which merges data objects from a
:class:`torch_geometric.data.dataset` to a mini-batch.
Args:
dataset (Dataset): The dataset from which to load the data.
batch_size (int, optional): How may samples per batch to load.
(default: :obj:`1`)
shuffle (bool, optional): If set to :obj:`True`, the data will be
reshuffled at every epoch (default: :obj:`True`)
"""
def __init__(self, dataset, batch_size=1, shuffle=True, **kwargs):
super(DataLoaderAE, self).__init__(
dataset,
batch_size,
shuffle,
collate_fn=lambda data_list: BatchAE.from_data_list(data_list),
**kwargs)
| 2,503 | 36.939394 | 89 | py |
GraphLoG | GraphLoG-main/model.py | import torch
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree, softmax
from torch_geometric.nn import global_add_pool, global_mean_pool, global_max_pool, GlobalAttention, Set2Set
import torch.nn.functional as F
from torch_scatter import scatter_add
from torch_geometric.nn.inits import glorot, zeros
num_atom_type = 120 #including the extra mask tokens
num_chirality_tag = 3
num_bond_type = 6 #including aromatic and self-loop edge, and extra masked tokens
num_bond_direction = 3
class GINConv(MessagePassing):
"""
Extension of GIN aggregation to incorporate edge information by concatenation.
Args:
emb_dim (int): dimensionality of embeddings for nodes and edges.
embed_input (bool): whether to embed input or not.
See https://arxiv.org/abs/1810.00826
"""
def __init__(self, emb_dim, aggr = "add"):
super(GINConv, self).__init__()
#multi-layer perceptron
self.mlp = torch.nn.Sequential(torch.nn.Linear(emb_dim, 2*emb_dim), torch.nn.ReLU(), torch.nn.Linear(2*emb_dim, emb_dim))
self.edge_embedding1 = torch.nn.Embedding(num_bond_type, emb_dim)
self.edge_embedding2 = torch.nn.Embedding(num_bond_direction, emb_dim)
torch.nn.init.xavier_uniform_(self.edge_embedding1.weight.data)
torch.nn.init.xavier_uniform_(self.edge_embedding2.weight.data)
self.aggr = aggr
def forward(self, x, edge_index, edge_attr):
#add self loops in the edge space
edge_index = add_self_loops(edge_index, num_nodes = x.size(0))
#add features corresponding to self-loop edges.
self_loop_attr = torch.zeros(x.size(0), 2)
self_loop_attr[:,0] = 4 #bond type for self-loop edge
self_loop_attr = self_loop_attr.to(edge_attr.device).to(edge_attr.dtype)
edge_attr = torch.cat((edge_attr, self_loop_attr), dim = 0)
edge_embeddings = self.edge_embedding1(edge_attr[:,0]) + self.edge_embedding2(edge_attr[:,1])
return self.propagate(self.aggr, edge_index, x=x, edge_attr=edge_embeddings)
def message(self, x_j, edge_attr):
return x_j + edge_attr
def update(self, aggr_out):
return self.mlp(aggr_out)
class GCNConv(MessagePassing):
def __init__(self, emb_dim, aggr = "add"):
super(GCNConv, self).__init__()
self.emb_dim = emb_dim
self.linear = torch.nn.Linear(emb_dim, emb_dim)
self.edge_embedding1 = torch.nn.Embedding(num_bond_type, emb_dim)
self.edge_embedding2 = torch.nn.Embedding(num_bond_direction, emb_dim)
torch.nn.init.xavier_uniform_(self.edge_embedding1.weight.data)
torch.nn.init.xavier_uniform_(self.edge_embedding2.weight.data)
self.aggr = aggr
def norm(self, edge_index, num_nodes, dtype):
### assuming that self-loops have been already added in edge_index
edge_weight = torch.ones((edge_index.size(1), ), dtype=dtype,
device=edge_index.device)
row, col = edge_index
deg = scatter_add(edge_weight, row, dim=0, dim_size=num_nodes)
deg_inv_sqrt = deg.pow(-0.5)
deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0
return deg_inv_sqrt[row] * edge_weight * deg_inv_sqrt[col]
def forward(self, x, edge_index, edge_attr):
#add self loops in the edge space
edge_index = add_self_loops(edge_index, num_nodes = x.size(0))
#add features corresponding to self-loop edges.
self_loop_attr = torch.zeros(x.size(0), 2)
self_loop_attr[:,0] = 4 #bond type for self-loop edge
self_loop_attr = self_loop_attr.to(edge_attr.device).to(edge_attr.dtype)
edge_attr = torch.cat((edge_attr, self_loop_attr), dim = 0)
edge_embeddings = self.edge_embedding1(edge_attr[:,0]) + self.edge_embedding2(edge_attr[:,1])
norm = self.norm(edge_index, x.size(0), x.dtype)
x = self.linear(x)
return self.propagate(self.aggr, edge_index, x=x, edge_attr=edge_embeddings, norm = norm)
def message(self, x_j, edge_attr, norm):
return norm.view(-1, 1) * (x_j + edge_attr)
class GATConv(MessagePassing):
def __init__(self, emb_dim, heads=2, negative_slope=0.2, aggr = "add"):
super(GATConv, self).__init__()
self.aggr = aggr
self.emb_dim = emb_dim
self.heads = heads
self.negative_slope = negative_slope
self.weight_linear = torch.nn.Linear(emb_dim, heads * emb_dim)
self.att = torch.nn.Parameter(torch.Tensor(1, heads, 2 * emb_dim))
self.bias = torch.nn.Parameter(torch.Tensor(emb_dim))
self.edge_embedding1 = torch.nn.Embedding(num_bond_type, heads * emb_dim)
self.edge_embedding2 = torch.nn.Embedding(num_bond_direction, heads * emb_dim)
torch.nn.init.xavier_uniform_(self.edge_embedding1.weight.data)
torch.nn.init.xavier_uniform_(self.edge_embedding2.weight.data)
self.reset_parameters()
def reset_parameters(self):
glorot(self.att)
zeros(self.bias)
def forward(self, x, edge_index, edge_attr):
#add self loops in the edge space
edge_index = add_self_loops(edge_index, num_nodes = x.size(0))
#add features corresponding to self-loop edges.
self_loop_attr = torch.zeros(x.size(0), 2)
self_loop_attr[:,0] = 4 #bond type for self-loop edge
self_loop_attr = self_loop_attr.to(edge_attr.device).to(edge_attr.dtype)
edge_attr = torch.cat((edge_attr, self_loop_attr), dim = 0)
edge_embeddings = self.edge_embedding1(edge_attr[:,0]) + self.edge_embedding2(edge_attr[:,1])
x = self.weight_linear(x).view(-1, self.heads, self.emb_dim)
return self.propagate(self.aggr, edge_index, x=x, edge_attr=edge_embeddings)
def message(self, edge_index, x_i, x_j, edge_attr):
edge_attr = edge_attr.view(-1, self.heads, self.emb_dim)
x_j += edge_attr
alpha = (torch.cat([x_i, x_j], dim=-1) * self.att).sum(dim=-1)
alpha = F.leaky_relu(alpha, self.negative_slope)
alpha = softmax(alpha, edge_index[0])
return x_j * alpha.view(-1, self.heads, 1)
def update(self, aggr_out):
aggr_out = aggr_out.mean(dim=1)
aggr_out = aggr_out + self.bias
return aggr_out
class GraphSAGEConv(MessagePassing):
def __init__(self, emb_dim, aggr = "mean"):
super(GraphSAGEConv, self).__init__()
self.emb_dim = emb_dim
self.linear = torch.nn.Linear(emb_dim, emb_dim)
self.edge_embedding1 = torch.nn.Embedding(num_bond_type, emb_dim)
self.edge_embedding2 = torch.nn.Embedding(num_bond_direction, emb_dim)
torch.nn.init.xavier_uniform_(self.edge_embedding1.weight.data)
torch.nn.init.xavier_uniform_(self.edge_embedding2.weight.data)
self.aggr = aggr
def forward(self, x, edge_index, edge_attr):
#add self loops in the edge space
edge_index = add_self_loops(edge_index, num_nodes = x.size(0))
#add features corresponding to self-loop edges.
self_loop_attr = torch.zeros(x.size(0), 2)
self_loop_attr[:,0] = 4 #bond type for self-loop edge
self_loop_attr = self_loop_attr.to(edge_attr.device).to(edge_attr.dtype)
edge_attr = torch.cat((edge_attr, self_loop_attr), dim = 0)
edge_embeddings = self.edge_embedding1(edge_attr[:,0]) + self.edge_embedding2(edge_attr[:,1])
x = self.linear(x)
return self.propagate(self.aggr, edge_index, x=x, edge_attr=edge_embeddings)
def message(self, x_j, edge_attr):
return x_j + edge_attr
def update(self, aggr_out):
return F.normalize(aggr_out, p = 2, dim = -1)
class GNN(torch.nn.Module):
"""
Args:
num_layer (int): the number of GNN layers
emb_dim (int): dimensionality of embeddings
JK (str): last, concat, max or sum.
max_pool_layer (int): the layer from which we use max pool rather than add pool for neighbor aggregation
drop_ratio (float): dropout rate
gnn_type: gin, gcn, graphsage, gat
Output:
node representations
"""
def __init__(self, num_layer, emb_dim, JK = "last", drop_ratio = 0, gnn_type = "gin"):
super(GNN, self).__init__()
self.num_layer = num_layer
self.drop_ratio = drop_ratio
self.JK = JK
if self.num_layer < 2:
raise ValueError("Number of GNN layers must be greater than 1.")
self.x_embedding1 = torch.nn.Embedding(num_atom_type, emb_dim)
self.x_embedding2 = torch.nn.Embedding(num_chirality_tag, emb_dim)
torch.nn.init.xavier_uniform_(self.x_embedding1.weight.data)
torch.nn.init.xavier_uniform_(self.x_embedding2.weight.data)
###List of MLPs
self.gnns = torch.nn.ModuleList()
for layer in range(num_layer):
if gnn_type == "gin":
self.gnns.append(GINConv(emb_dim, aggr = "add"))
elif gnn_type == "gcn":
self.gnns.append(GCNConv(emb_dim))
elif gnn_type == "gat":
self.gnns.append(GATConv(emb_dim))
elif gnn_type == "graphsage":
self.gnns.append(GraphSAGEConv(emb_dim))
###List of batchnorms
self.batch_norms = torch.nn.ModuleList()
for layer in range(num_layer):
self.batch_norms.append(torch.nn.BatchNorm1d(emb_dim))
#def forward(self, x, edge_index, edge_attr):
def forward(self, *argv):
if len(argv) == 3:
x, edge_index, edge_attr = argv[0], argv[1], argv[2]
elif len(argv) == 1:
data = argv[0]
x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
else:
raise ValueError("unmatched number of arguments.")
x = self.x_embedding1(x[:,0]) + self.x_embedding2(x[:,1])
h_list = [x]
for layer in range(self.num_layer):
h = self.gnns[layer](h_list[layer], edge_index, edge_attr)
h = self.batch_norms[layer](h)
#h = F.dropout(F.relu(h), self.drop_ratio, training = self.training)
if layer == self.num_layer - 1:
#remove relu for the last layer
h = F.dropout(h, self.drop_ratio, training = self.training)
else:
h = F.dropout(F.relu(h), self.drop_ratio, training = self.training)
h_list.append(h)
### Different implementations of Jk-concat
if self.JK == "concat":
node_representation = torch.cat(h_list, dim = 1)
elif self.JK == "last":
node_representation = h_list[-1]
elif self.JK == "max":
h_list = [h.unsqueeze_(0) for h in h_list]
node_representation = torch.max(torch.cat(h_list, dim = 0), dim = 0)[0]
elif self.JK == "sum":
h_list = [h.unsqueeze_(0) for h in h_list]
node_representation = torch.sum(torch.cat(h_list, dim = 0), dim = 0)[0]
return node_representation
class GNN_graphpred(torch.nn.Module):
"""
Extension of GIN to incorporate edge information by concatenation.
Args:
num_layer (int): the number of GNN layers
emb_dim (int): dimensionality of embeddings
num_tasks (int): number of tasks in multi-task learning scenario
drop_ratio (float): dropout rate
JK (str): last, concat, max or sum.
graph_pooling (str): sum, mean, max, attention, set2set
gnn_type: gin, gcn, graphsage, gat
See https://arxiv.org/abs/1810.00826
JK-net: https://arxiv.org/abs/1806.03536
"""
def __init__(self, num_layer, emb_dim, num_tasks, JK = "last", drop_ratio = 0, graph_pooling = "mean", gnn_type = "gin"):
super(GNN_graphpred, self).__init__()
self.num_layer = num_layer
self.drop_ratio = drop_ratio
self.JK = JK
self.emb_dim = emb_dim
self.num_tasks = num_tasks
if self.num_layer < 2:
raise ValueError("Number of GNN layers must be greater than 1.")
self.gnn = GNN(num_layer, emb_dim, JK, drop_ratio, gnn_type = gnn_type)
#Different kind of graph pooling
if graph_pooling == "sum":
self.pool = global_add_pool
elif graph_pooling == "mean":
self.pool = global_mean_pool
elif graph_pooling == "max":
self.pool = global_max_pool
elif graph_pooling == "attention":
if self.JK == "concat":
self.pool = GlobalAttention(gate_nn = torch.nn.Linear((self.num_layer + 1) * emb_dim, 1))
else:
self.pool = GlobalAttention(gate_nn = torch.nn.Linear(emb_dim, 1))
elif graph_pooling[:-1] == "set2set":
set2set_iter = int(graph_pooling[-1])
if self.JK == "concat":
self.pool = Set2Set((self.num_layer + 1) * emb_dim, set2set_iter)
else:
self.pool = Set2Set(emb_dim, set2set_iter)
else:
raise ValueError("Invalid graph pooling type.")
#For graph-level binary classification
if graph_pooling[:-1] == "set2set":
self.mult = 2
else:
self.mult = 1
if self.JK == "concat":
rep_dim = self.mult * (self.num_layer + 1) * self.emb_dim
self.graph_pred_linear = torch.nn.Linear(rep_dim, self.num_tasks)
# self.graph_pred_linear = torch.nn.Sequential(
# torch.nn.Linear(rep_dim, rep_dim),
# torch.nn.ReLU(),
# torch.nn.Linear(rep_dim, self.num_tasks)
# )
else:
rep_dim = self.mult * self.emb_dim
self.graph_pred_linear = torch.nn.Linear(rep_dim, self.num_tasks)
# self.graph_pred_linear = torch.nn.Sequential(
# torch.nn.Linear(rep_dim, rep_dim),
# torch.nn.ReLU(),
# torch.nn.Linear(rep_dim, self.num_tasks)
# )
def from_pretrained(self, model_file):
#self.gnn = GNN(self.num_layer, self.emb_dim, JK = self.JK, drop_ratio = self.drop_ratio)
self.gnn.load_state_dict(torch.load(model_file))
def forward(self, *argv):
if len(argv) == 4:
x, edge_index, edge_attr, batch = argv[0], argv[1], argv[2], argv[3]
elif len(argv) == 1:
data = argv[0]
x, edge_index, edge_attr, batch = data.x, data.edge_index, data.edge_attr, data.batch
else:
raise ValueError("unmatched number of arguments.")
node_representation = self.gnn(x, edge_index, edge_attr)
return self.graph_pred_linear(self.pool(node_representation, batch))
class ProjectNet(torch.nn.Module):
def __init__(self, rep_dim):
super(ProjectNet, self).__init__()
self.rep_dim = rep_dim
self.proj = torch.nn.Sequential(
torch.nn.Linear(self.rep_dim, self.rep_dim),
torch.nn.ReLU(),
torch.nn.Linear(self.rep_dim, self.rep_dim)
)
def forward(self, x):
x_proj = self.proj(x)
return x_proj
if __name__ == "__main__":
pass
| 15,224 | 36.967581 | 129 | py |
GraphLoG | GraphLoG-main/finetune.py | import argparse
from loader import MoleculeDataset
from torch_geometric.data import DataLoader
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from tqdm import tqdm
import os, sys
import numpy as np
import random
from model import GNN, GNN_graphpred
from sklearn.metrics import roc_auc_score
from splitters import scaffold_split
import pandas as pd
import os
import shutil
from tensorboardX import SummaryWriter
criterion = nn.BCEWithLogitsLoss(reduction = "none")
def train(args, model, device, loader, optimizer, scheduler):
model.train()
scheduler.step()
for step, batch in enumerate(tqdm(loader, desc="Iteration")):
batch = batch.to(device)
pred = model(batch.x, batch.edge_index, batch.edge_attr, batch.batch)
y = batch.y.view(pred.shape).to(torch.float64)
#Whether y is non-null or not.
is_valid = y**2 > 0
#Loss matrix
loss_mat = criterion(pred.double(), (y+1)/2)
#loss matrix after removing null target
loss_mat = torch.where(is_valid, loss_mat, torch.zeros(loss_mat.shape).to(loss_mat.device).to(loss_mat.dtype))
optimizer.zero_grad()
loss = torch.sum(loss_mat)/torch.sum(is_valid)
loss.backward()
optimizer.step()
def eval(args, model, device, loader):
model.eval()
y_true = []
y_scores = []
for step, batch in enumerate(tqdm(loader, desc="Iteration")):
batch = batch.to(device)
with torch.no_grad():
pred = model(batch.x, batch.edge_index, batch.edge_attr, batch.batch)
y_true.append(batch.y.view(pred.shape))
y_scores.append(pred)
y_true = torch.cat(y_true, dim = 0).cpu().numpy()
y_scores = torch.cat(y_scores, dim = 0).cpu().numpy()
roc_list = []
for i in range(y_true.shape[1]):
#AUC is only defined when there is at least one positive data.
if np.sum(y_true[:,i] == 1) > 0 and np.sum(y_true[:,i] == -1) > 0:
is_valid = y_true[:,i]**2 > 0
roc_list.append(roc_auc_score((y_true[is_valid,i] + 1)/2, y_scores[is_valid,i]))
if len(roc_list) < y_true.shape[1]:
print("Some target is missing!")
print("Missing ratio: %f" %(1 - float(len(roc_list))/y_true.shape[1]))
return sum(roc_list)/len(roc_list) #y_true.shape[1]
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch implementation of pre-training of graph neural networks')
parser.add_argument('--device', type=int, default=0,
help='which gpu to use if any (default: 0)')
parser.add_argument('--batch_size', type=int, default=32,
help='input batch size for training (default: 32)')
parser.add_argument('--epochs', type=int, default=100,
help='number of epochs to train (default: 100)')
parser.add_argument('--num_run', type=int, default=5,
help='number of independent runs (default: 5)')
parser.add_argument('--lr', type=float, default=0.001,
help='learning rate (default: 0.001)')
parser.add_argument('--lr_scale', type=float, default=1,
help='relative learning rate for the feature extraction layer (default: 1)')
parser.add_argument('--frozen', action='store_true', default=False,
help='whether to freeze gnn extractor')
parser.add_argument('--decay', type=float, default=0,
help='weight decay (default: 0)')
parser.add_argument('--num_layer', type=int, default=5,
help='number of GNN message passing layers (default: 5).')
parser.add_argument('--emb_dim', type=int, default=300,
help='embedding dimensions (default: 300)')
parser.add_argument('--dropout_ratio', type=float, default=0.5,
help='dropout ratio (default: 0.5)')
parser.add_argument('--graph_pooling', type=str, default="mean",
help='graph level pooling (sum, mean, max, set2set, attention)')
parser.add_argument('--JK', type=str, default="last",
help='how the node features across layers are combined. last, sum, max or concat')
parser.add_argument('--gnn_type', type=str, default="gin")
parser.add_argument('--dataset', type=str, default = 'bbbp', help='root directory of dataset. For now, only classification.')
parser.add_argument('--input_model_file', type=str, default = '', help='filename to read the model (if there is any)')
parser.add_argument('--filename', type=str, default = '', help='output filename')
parser.add_argument('--seed', type=int, default=None, help = "Seed for splitting the dataset.")
parser.add_argument('--runseed', type=int, default=None, help = "Seed for minibatch selection, random initialization.")
parser.add_argument('--split', type = str, default="scaffold", help = "random or scaffold or random_scaffold")
parser.add_argument('--eval_train', type=int, default = 0, help='evaluating training or not')
parser.add_argument('--num_workers', type=int, default = 1, help='number of workers for dataset loading')
args = parser.parse_args()
if args.seed:
seed = args.seed
print ('Manual seed: ', seed)
else:
seed = random.randint(0, 10000)
print ('Random seed: ', seed)
# Bunch of classification tasks
if args.dataset == "tox21":
num_tasks = 12
elif args.dataset == "hiv":
num_tasks = 1
elif args.dataset == "pcba":
num_tasks = 128
elif args.dataset == "muv":
num_tasks = 17
elif args.dataset == "bace":
num_tasks = 1
elif args.dataset == "bbbp":
num_tasks = 1
elif args.dataset == "toxcast":
num_tasks = 617
elif args.dataset == "sider":
num_tasks = 27
elif args.dataset == "clintox":
num_tasks = 2
else:
raise ValueError("Invalid dataset name.")
# set up dataset
dataset = MoleculeDataset("./dataset/" + args.dataset, dataset=args.dataset)
print(dataset)
if args.split == "scaffold":
smiles_list = pd.read_csv('./dataset/' + args.dataset + '/processed/smiles.csv', header=None)[0].tolist()
train_dataset, valid_dataset, test_dataset = scaffold_split(dataset, smiles_list, null_value=0,
frac_train=0.8, frac_valid=0.1, frac_test=0.1)
print("scaffold")
elif args.split == "random":
train_dataset, valid_dataset, test_dataset = random_split(dataset, null_value=0, frac_train=0.8,
frac_valid=0.1, frac_test=0.1, seed=seed)
print("random")
elif args.split == "random_scaffold":
smiles_list = pd.read_csv('./dataset/' + args.dataset + '/processed/smiles.csv', header=None)[0].tolist()
train_dataset, valid_dataset, test_dataset = random_scaffold_split(dataset, smiles_list, null_value=0,
frac_train=0.8, frac_valid=0.1,
frac_test=0.1, seed=seed)
print("random scaffold")
else:
raise ValueError("Invalid split option.")
print(train_dataset[0])
# run multiple times
best_valid_auc_list = []
last_epoch_auc_list = []
for run_idx in range(args.num_run):
print ('\nRun ', run_idx + 1)
if args.runseed:
runseed = args.runseed
print('Manual runseed: ', runseed)
else:
runseed = random.randint(0, 10000)
print('Random runseed: ', runseed)
torch.manual_seed(runseed)
np.random.seed(runseed)
device = torch.device("cuda:" + str(args.device)) if torch.cuda.is_available() else torch.device("cpu")
if torch.cuda.is_available():
torch.cuda.manual_seed_all(runseed)
train_loader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers)
val_loader = DataLoader(valid_dataset, batch_size=args.batch_size, shuffle=False, num_workers=args.num_workers)
test_loader = DataLoader(test_dataset, batch_size=args.batch_size, shuffle=False, num_workers=args.num_workers)
# set up model
model = GNN_graphpred(args.num_layer, args.emb_dim, num_tasks, JK=args.JK, drop_ratio=args.dropout_ratio,
graph_pooling=args.graph_pooling, gnn_type=args.gnn_type)
if not args.input_model_file == "":
model.from_pretrained(args.input_model_file)
model.to(device)
# set up optimizer
# different learning rate for different part of GNN
model_param_group = []
if args.frozen:
model_param_group.append({"params": model.gnn.parameters(), "lr": 0})
else:
model_param_group.append({"params": model.gnn.parameters()})
if args.graph_pooling == "attention":
model_param_group.append({"params": model.pool.parameters(), "lr": args.lr * args.lr_scale})
model_param_group.append({"params": model.graph_pred_linear.parameters(), "lr": args.lr * args.lr_scale})
optimizer = optim.Adam(model_param_group, lr=args.lr, weight_decay=args.decay)
print(optimizer)
scheduler = StepLR(optimizer, step_size=30, gamma=0.3)
# run fine-tuning
best_valid = 0
best_valid_test = 0
last_epoch_test = 0
for epoch in range(1, args.epochs + 1):
print("====epoch " + str(epoch), " lr: ", optimizer.param_groups[-1]['lr'])
train(args, model, device, train_loader, optimizer, scheduler)
print("====Evaluation")
if args.eval_train:
train_acc = eval(args, model, device, train_loader)
else:
print("omit the training accuracy computation")
train_acc = 0
val_acc = eval(args, model, device, val_loader)
test_acc = eval(args, model, device, test_loader)
if val_acc > best_valid:
best_valid = val_acc
best_valid_test = test_acc
if epoch == args.epochs:
last_epoch_test = test_acc
print("train: %f val: %f test: %f" % (train_acc, val_acc, test_acc))
print("")
best_valid_auc_list.append(best_valid_test)
last_epoch_auc_list.append(last_epoch_test)
# summarize results
best_valid_auc_list = np.array(best_valid_auc_list)
last_epoch_auc_list = np.array(last_epoch_auc_list)
if args.dataset in ["muv", "hiv"]:
print('Best validation epoch:')
print('Mean: {}\tStd: {}'.format(np.mean(best_valid_auc_list), np.std(best_valid_auc_list)))
else:
print('Last epoch:')
print('Mean: {}\tStd: {}'.format(np.mean(last_epoch_auc_list), np.std(last_epoch_auc_list)))
os.system('watch nvidia-smi')
if __name__ == "__main__":
main()
| 11,223 | 40.724907 | 129 | py |
GraphLoG | GraphLoG-main/splitters.py | import torch
import random
import numpy as np
from itertools import compress
from rdkit.Chem.Scaffolds import MurckoScaffold
from collections import defaultdict
from sklearn.model_selection import StratifiedKFold
# splitter function
def generate_scaffold(smiles, include_chirality=False):
"""
Obtain Bemis-Murcko scaffold from smiles
:param smiles:
:param include_chirality:
:return: smiles of scaffold
"""
scaffold = MurckoScaffold.MurckoScaffoldSmiles(
smiles=smiles, includeChirality=include_chirality)
return scaffold
# # test generate_scaffold
# s = 'Cc1cc(Oc2nccc(CCC)c2)ccc1'
# scaffold = generate_scaffold(s)
# assert scaffold == 'c1ccc(Oc2ccccn2)cc1'
def scaffold_split(dataset, smiles_list, task_idx=None, null_value=0,
frac_train=0.8, frac_valid=0.1, frac_test=0.1,
return_smiles=False):
"""
Adapted from https://github.com/deepchem/deepchem/blob/master/deepchem/splits/splitters.py
Split dataset by Bemis-Murcko scaffolds
This function can also ignore examples containing null values for a
selected task when splitting. Deterministic split
:param dataset: pytorch geometric dataset obj
:param smiles_list: list of smiles corresponding to the dataset obj
:param task_idx: column idx of the data.y tensor. Will filter out
examples with null value in specified task column of the data.y tensor
prior to splitting. If None, then no filtering
:param null_value: float that specifies null value in data.y to filter if
task_idx is provided
:param frac_train:
:param frac_valid:
:param frac_test:
:param return_smiles:
:return: train, valid, test slices of the input dataset obj. If
return_smiles = True, also returns ([train_smiles_list],
[valid_smiles_list], [test_smiles_list])
"""
np.testing.assert_almost_equal(frac_train + frac_valid + frac_test, 1.0)
if task_idx != None:
# filter based on null values in task_idx
# get task array
y_task = np.array([data.y[task_idx].item() for data in dataset])
# boolean array that correspond to non null values
non_null = y_task != null_value
smiles_list = list(compress(enumerate(smiles_list), non_null))
else:
non_null = np.ones(len(dataset)) == 1
smiles_list = list(compress(enumerate(smiles_list), non_null))
# create dict of the form {scaffold_i: [idx1, idx....]}
all_scaffolds = {}
for i, smiles in smiles_list:
scaffold = generate_scaffold(smiles, include_chirality=True)
if scaffold not in all_scaffolds:
all_scaffolds[scaffold] = [i]
else:
all_scaffolds[scaffold].append(i)
# sort from largest to smallest sets
all_scaffolds = {key: sorted(value) for key, value in all_scaffolds.items()}
all_scaffold_sets = [
scaffold_set for (scaffold, scaffold_set) in sorted(
all_scaffolds.items(), key=lambda x: (len(x[1]), x[1][0]), reverse=True)
]
# get train, valid test indices
train_cutoff = frac_train * len(smiles_list)
valid_cutoff = (frac_train + frac_valid) * len(smiles_list)
train_idx, valid_idx, test_idx = [], [], []
for scaffold_set in all_scaffold_sets:
if len(train_idx) + len(scaffold_set) > train_cutoff:
if len(train_idx) + len(valid_idx) + len(scaffold_set) > valid_cutoff:
test_idx.extend(scaffold_set)
else:
valid_idx.extend(scaffold_set)
else:
train_idx.extend(scaffold_set)
assert len(set(train_idx).intersection(set(valid_idx))) == 0
assert len(set(test_idx).intersection(set(valid_idx))) == 0
train_dataset = dataset[torch.tensor(train_idx)]
valid_dataset = dataset[torch.tensor(valid_idx)]
test_dataset = dataset[torch.tensor(test_idx)]
if not return_smiles:
return train_dataset, valid_dataset, test_dataset
else:
train_smiles = [smiles_list[i][1] for i in train_idx]
valid_smiles = [smiles_list[i][1] for i in valid_idx]
test_smiles = [smiles_list[i][1] for i in test_idx]
return train_dataset, valid_dataset, test_dataset, (train_smiles,
valid_smiles,
test_smiles)
def random_scaffold_split(dataset, smiles_list, task_idx=None, null_value=0,
frac_train=0.8, frac_valid=0.1, frac_test=0.1, seed=0):
"""
Adapted from https://github.com/pfnet-research/chainer-chemistry/blob/master/chainer_chemistry/dataset/splitters/scaffold_splitter.py
Split dataset by Bemis-Murcko scaffolds
This function can also ignore examples containing null values for a
selected task when splitting. Deterministic split
:param dataset: pytorch geometric dataset obj
:param smiles_list: list of smiles corresponding to the dataset obj
:param task_idx: column idx of the data.y tensor. Will filter out
examples with null value in specified task column of the data.y tensor
prior to splitting. If None, then no filtering
:param null_value: float that specifies null value in data.y to filter if
task_idx is provided
:param frac_train:
:param frac_valid:
:param frac_test:
:param seed;
:return: train, valid, test slices of the input dataset obj
"""
np.testing.assert_almost_equal(frac_train + frac_valid + frac_test, 1.0)
if task_idx != None:
# filter based on null values in task_idx
# get task array
y_task = np.array([data.y[task_idx].item() for data in dataset])
# boolean array that correspond to non null values
non_null = y_task != null_value
smiles_list = list(compress(enumerate(smiles_list), non_null))
else:
non_null = np.ones(len(dataset)) == 1
smiles_list = list(compress(enumerate(smiles_list), non_null))
rng = np.random.RandomState(seed)
scaffolds = defaultdict(list)
for ind, smiles in smiles_list:
scaffold = generate_scaffold(smiles, include_chirality=True)
scaffolds[scaffold].append(ind)
scaffold_sets = rng.permutation(list(scaffolds.values()))
n_total_valid = int(np.floor(frac_valid * len(dataset)))
n_total_test = int(np.floor(frac_test * len(dataset)))
train_idx = []
valid_idx = []
test_idx = []
for scaffold_set in scaffold_sets:
if len(valid_idx) + len(scaffold_set) <= n_total_valid:
valid_idx.extend(scaffold_set)
elif len(test_idx) + len(scaffold_set) <= n_total_test:
test_idx.extend(scaffold_set)
else:
train_idx.extend(scaffold_set)
train_dataset = dataset[torch.tensor(train_idx)]
valid_dataset = dataset[torch.tensor(valid_idx)]
test_dataset = dataset[torch.tensor(test_idx)]
return train_dataset, valid_dataset, test_dataset
def random_split(dataset, task_idx=None, null_value=0,
frac_train=0.8, frac_valid=0.1, frac_test=0.1, seed=0,
smiles_list=None):
"""
:param dataset:
:param task_idx:
:param null_value:
:param frac_train:
:param frac_valid:
:param frac_test:
:param seed:
:param smiles_list: list of smiles corresponding to the dataset obj, or None
:return: train, valid, test slices of the input dataset obj. If
smiles_list != None, also returns ([train_smiles_list],
[valid_smiles_list], [test_smiles_list])
"""
np.testing.assert_almost_equal(frac_train + frac_valid + frac_test, 1.0)
if task_idx != None:
# filter based on null values in task_idx
# get task array
y_task = np.array([data.y[task_idx].item() for data in dataset])
non_null = y_task != null_value # boolean array that correspond to non null values
idx_array = np.where(non_null)[0]
dataset = dataset[torch.tensor(idx_array)] # examples containing non
# null labels in the specified task_idx
else:
pass
num_mols = len(dataset)
random.seed(seed)
all_idx = list(range(num_mols))
random.shuffle(all_idx)
train_idx = all_idx[:int(frac_train * num_mols)]
valid_idx = all_idx[int(frac_train * num_mols):int(frac_valid * num_mols)
+ int(frac_train * num_mols)]
test_idx = all_idx[int(frac_valid * num_mols) + int(frac_train * num_mols):]
assert len(set(train_idx).intersection(set(valid_idx))) == 0
assert len(set(valid_idx).intersection(set(test_idx))) == 0
assert len(train_idx) + len(valid_idx) + len(test_idx) == num_mols
train_dataset = dataset[torch.tensor(train_idx)]
valid_dataset = dataset[torch.tensor(valid_idx)]
test_dataset = dataset[torch.tensor(test_idx)]
if not smiles_list:
return train_dataset, valid_dataset, test_dataset
else:
train_smiles = [smiles_list[i] for i in train_idx]
valid_smiles = [smiles_list[i] for i in valid_idx]
test_smiles = [smiles_list[i] for i in test_idx]
return train_dataset, valid_dataset, test_dataset, (train_smiles,
valid_smiles,
test_smiles)
def cv_random_split(dataset, fold_idx = 0,
frac_train=0.9, frac_valid=0.1, seed=0,
smiles_list=None):
"""
:param dataset:
:param task_idx:
:param null_value:
:param frac_train:
:param frac_valid:
:param frac_test:
:param seed:
:param smiles_list: list of smiles corresponding to the dataset obj, or None
:return: train, valid, test slices of the input dataset obj. If
smiles_list != None, also returns ([train_smiles_list],
[valid_smiles_list], [test_smiles_list])
"""
np.testing.assert_almost_equal(frac_train + frac_valid, 1.0)
skf = StratifiedKFold(n_splits=10, shuffle = True, random_state = seed)
labels = [data.y.item() for data in dataset]
idx_list = []
for idx in skf.split(np.zeros(len(labels)), labels):
idx_list.append(idx)
train_idx, val_idx = idx_list[fold_idx]
train_dataset = dataset[torch.tensor(train_idx)]
valid_dataset = dataset[torch.tensor(val_idx)]
return train_dataset, valid_dataset
if __name__ == "__main__":
from loader import MoleculeDataset
from rdkit import Chem
import pandas as pd
# # test scaffold_split
dataset = MoleculeDataset('dataset/tox21', dataset='tox21')
smiles_list = pd.read_csv('dataset/tox21/processed/smiles.csv', header=None)[0].tolist()
train_dataset, valid_dataset, test_dataset = scaffold_split(dataset, smiles_list, task_idx=None, null_value=0, frac_train=0.8,frac_valid=0.1, frac_test=0.1)
# train_dataset, valid_dataset, test_dataset = random_scaffold_split(dataset, smiles_list, task_idx=None, null_value=0, frac_train=0.8,frac_valid=0.1, frac_test=0.1, seed = 0)
unique_ids = set(train_dataset.data.id.tolist() +
valid_dataset.data.id.tolist() +
test_dataset.data.id.tolist())
assert len(unique_ids) == len(dataset) # check that we did not have any
# missing or overlapping examples
# test scaffold_split with smiles returned
dataset = MoleculeDataset('dataset/bbbp', dataset='bbbp')
smiles_list = pd.read_csv('dataset/bbbp/processed/smiles.csv', header=None)[
0].tolist()
train_dataset, valid_dataset, test_dataset, (train_smiles, valid_smiles,
test_smiles) = \
scaffold_split(dataset, smiles_list, task_idx=None, null_value=0,
frac_train=0.8,frac_valid=0.1, frac_test=0.1,
return_smiles=True)
assert len(train_dataset) == len(train_smiles)
for i in range(len(train_dataset)):
data_obj_n_atoms = train_dataset[i].x.size()[0]
smiles_n_atoms = len(list(Chem.MolFromSmiles(train_smiles[
i]).GetAtoms()))
assert data_obj_n_atoms == smiles_n_atoms
assert len(valid_dataset) == len(valid_smiles)
for i in range(len(valid_dataset)):
data_obj_n_atoms = valid_dataset[i].x.size()[0]
smiles_n_atoms = len(list(Chem.MolFromSmiles(valid_smiles[
i]).GetAtoms()))
assert data_obj_n_atoms == smiles_n_atoms
assert len(test_dataset) == len(test_smiles)
for i in range(len(test_dataset)):
data_obj_n_atoms = test_dataset[i].x.size()[0]
smiles_n_atoms = len(list(Chem.MolFromSmiles(test_smiles[
i]).GetAtoms()))
assert data_obj_n_atoms == smiles_n_atoms
# test random_split
from loader import MoleculeDataset
dataset = MoleculeDataset('dataset/tox21', dataset='tox21')
train_dataset, valid_dataset, test_dataset = random_split(dataset, task_idx=None, null_value=0, frac_train=0.8,frac_valid=0.1, frac_test=0.1)
unique_ids = set(train_dataset.data.id.tolist() +
valid_dataset.data.id.tolist() +
test_dataset.data.id.tolist())
assert len(unique_ids) == len(dataset) # check that we did not have any
# missing or overlapping examples
# test random_split with smiles returned
dataset = MoleculeDataset('dataset/bbbp', dataset='bbbp')
smiles_list = pd.read_csv('dataset/bbbp/processed/smiles.csv', header=None)[
0].tolist()
train_dataset, valid_dataset, test_dataset, (train_smiles, valid_smiles,
test_smiles) = \
random_split(dataset, task_idx=None, null_value=0,
frac_train=0.8, frac_valid=0.1, frac_test=0.1, seed=42,
smiles_list=smiles_list)
assert len(train_dataset) == len(train_smiles)
for i in range(len(train_dataset)):
data_obj_n_atoms = train_dataset[i].x.size()[0]
smiles_n_atoms = len(list(Chem.MolFromSmiles(train_smiles[
i]).GetAtoms()))
assert data_obj_n_atoms == smiles_n_atoms
assert len(valid_dataset) == len(valid_smiles)
for i in range(len(valid_dataset)):
data_obj_n_atoms = valid_dataset[i].x.size()[0]
smiles_n_atoms = len(list(Chem.MolFromSmiles(valid_smiles[
i]).GetAtoms()))
assert data_obj_n_atoms == smiles_n_atoms
assert len(test_dataset) == len(test_smiles)
for i in range(len(test_dataset)):
data_obj_n_atoms = test_dataset[i].x.size()[0]
smiles_n_atoms = len(list(Chem.MolFromSmiles(test_smiles[
i]).GetAtoms()))
assert data_obj_n_atoms == smiles_n_atoms
| 14,949 | 41.351275 | 179 | py |
GraphLoG | GraphLoG-main/util.py | import torch
import copy
import random
import networkx as nx
import numpy as np
from torch_geometric.utils import convert
from loader import graph_data_obj_to_nx_simple, nx_to_graph_data_obj_simple
from rdkit import Chem
from rdkit.Chem import AllChem
from loader import mol_to_graph_data_obj_simple, \
graph_data_obj_to_mol_simple
from loader import MoleculeDataset
def check_same_molecules(s1, s2):
mol1 = AllChem.MolFromSmiles(s1)
mol2 = AllChem.MolFromSmiles(s2)
return AllChem.MolToInchi(mol1) == AllChem.MolToInchi(mol2)
class NegativeEdge:
def __init__(self):
"""
Randomly sample negative edges
"""
pass
def __call__(self, data):
num_nodes = data.num_nodes
num_edges = data.num_edges
edge_set = set([str(data.edge_index[0, i].cpu().item()) + "," + str(
data.edge_index[1, i].cpu().item()) for i in
range(data.edge_index.shape[1])])
redandunt_sample = torch.randint(0, num_nodes, (2, 5 * num_edges))
sampled_ind = []
sampled_edge_set = set([])
for i in range(5 * num_edges):
node1 = redandunt_sample[0, i].cpu().item()
node2 = redandunt_sample[1, i].cpu().item()
edge_str = str(node1) + "," + str(node2)
if not edge_str in edge_set and not edge_str in sampled_edge_set and not node1 == node2:
sampled_edge_set.add(edge_str)
sampled_ind.append(i)
if len(sampled_ind) == num_edges / 2:
break
data.negative_edge_index = redandunt_sample[:, sampled_ind]
return data
class ExtractSubstructureContextPair:
def __init__(self, k, l1, l2):
"""
Randomly selects a node from the data object, and adds attributes
that contain the substructure that corresponds to k hop neighbours
rooted at the node, and the context substructures that corresponds to
the subgraph that is between l1 and l2 hops away from the
root node.
:param k:
:param l1:
:param l2:
"""
self.k = k
self.l1 = l1
self.l2 = l2
# for the special case of 0, addresses the quirk with
# single_source_shortest_path_length
if self.k == 0:
self.k = -1
if self.l1 == 0:
self.l1 = -1
if self.l2 == 0:
self.l2 = -1
def __call__(self, data, root_idx=None):
"""
:param data: pytorch geometric data object
:param root_idx: If None, then randomly samples an atom idx.
Otherwise sets atom idx of root (for debugging only)
:return: None. Creates new attributes in original data object:
data.center_substruct_idx
data.x_substruct
data.edge_attr_substruct
data.edge_index_substruct
data.x_context
data.edge_attr_context
data.edge_index_context
data.overlap_context_substruct_idx
"""
num_atoms = data.x.size()[0]
if root_idx == None:
root_idx = random.sample(range(num_atoms), 1)[0]
G = graph_data_obj_to_nx_simple(data) # same ordering as input data obj
# Get k-hop subgraph rooted at specified atom idx
substruct_node_idxes = nx.single_source_shortest_path_length(G,
root_idx,
self.k).keys()
if len(substruct_node_idxes) > 0:
substruct_G = G.subgraph(substruct_node_idxes)
substruct_G, substruct_node_map = reset_idxes(substruct_G) # need
# to reset node idx to 0 -> num_nodes - 1, otherwise data obj does not
# make sense, since the node indices in data obj must start at 0
substruct_data = nx_to_graph_data_obj_simple(substruct_G)
data.x_substruct = substruct_data.x
data.edge_attr_substruct = substruct_data.edge_attr
data.edge_index_substruct = substruct_data.edge_index
data.center_substruct_idx = torch.tensor([substruct_node_map[
root_idx]]) # need
# to convert center idx from original graph node ordering to the
# new substruct node ordering
# Get subgraphs that is between l1 and l2 hops away from the root node
l1_node_idxes = nx.single_source_shortest_path_length(G, root_idx,
self.l1).keys()
l2_node_idxes = nx.single_source_shortest_path_length(G, root_idx,
self.l2).keys()
context_node_idxes = set(l1_node_idxes).symmetric_difference(
set(l2_node_idxes))
if len(context_node_idxes) > 0:
context_G = G.subgraph(context_node_idxes)
context_G, context_node_map = reset_idxes(context_G) # need to
# reset node idx to 0 -> num_nodes - 1, otherwise data obj does not
# make sense, since the node indices in data obj must start at 0
context_data = nx_to_graph_data_obj_simple(context_G)
data.x_context = context_data.x
data.edge_attr_context = context_data.edge_attr
data.edge_index_context = context_data.edge_index
# Get indices of overlapping nodes between substruct and context,
# WRT context ordering
context_substruct_overlap_idxes = list(set(
context_node_idxes).intersection(set(substruct_node_idxes)))
if len(context_substruct_overlap_idxes) > 0:
context_substruct_overlap_idxes_reorder = [context_node_map[old_idx]
for
old_idx in
context_substruct_overlap_idxes]
# need to convert the overlap node idxes, which is from the
# original graph node ordering to the new context node ordering
data.overlap_context_substruct_idx = \
torch.tensor(context_substruct_overlap_idxes_reorder)
return data
def __repr__(self):
return '{}(k={},l1={}, l2={})'.format(self.__class__.__name__, self.k,
self.l1, self.l2)
def reset_idxes(G):
"""
Resets node indices such that they are numbered from 0 to num_nodes - 1
:param G:
:return: copy of G with relabelled node indices, mapping
"""
mapping = {}
for new_idx, old_idx in enumerate(G.nodes()):
mapping[old_idx] = new_idx
new_G = nx.relabel_nodes(G, mapping, copy=True)
return new_G, mapping
class MaskAtom:
def __init__(self, num_atom_type, num_edge_type, mask_rate, mask_num=0, mask_edge=True):
"""
Randomly masks an atom, and optionally masks edges connecting to it.
The mask atom type index is num_possible_atom_type
The mask edge type index in num_possible_edge_type
:param num_atom_type:
:param num_edge_type:
:param mask_rate: % of atoms to be masked
:param mask_num: number of atoms to be masked
:param mask_edge: If True, also mask the edges that connect to the
masked atoms
"""
self.num_atom_type = num_atom_type
self.num_edge_type = num_edge_type
self.mask_rate = mask_rate
self.mask_num = mask_num
self.mask_edge = mask_edge
def __call__(self, data, masked_atom_indices=None):
"""
:param data: pytorch geometric data object. Assume that the edge
ordering is the default pytorch geometric ordering, where the two
directions of a single edge occur in pairs.
Eg. data.edge_index = tensor([[0, 1, 1, 2, 2, 3],
[1, 0, 2, 1, 3, 2]])
:param masked_atom_indices: If None, then randomly samples num_atoms
* mask rate number of atom indices
Otherwise a list of atom idx that sets the atoms to be masked (for
debugging only)
:return: None, Creates new attributes in original data object:
data.mask_node_idx
data.mask_node_label
data.mask_edge_idx
data.mask_edge_label
"""
if masked_atom_indices == None:
# sample x distinct atoms to be masked, based on mask rate. But
# will sample at least 1 atom
num_atoms = data.x.size()[0]
if self.mask_num == 0:
sample_size = int(num_atoms * self.mask_rate + 1)
else:
sample_size = self.mask_num
masked_atom_indices = random.sample(range(num_atoms), sample_size)
# create mask node label by copying atom feature of mask atom
mask_node_labels_list = []
for atom_idx in masked_atom_indices:
mask_node_labels_list.append(data.x[atom_idx].view(1, -1))
data.mask_node_label = torch.cat(mask_node_labels_list, dim=0)
data.masked_atom_indices = torch.tensor(masked_atom_indices)
# modify the original node feature of the masked node
for atom_idx in masked_atom_indices:
data.x[atom_idx] = torch.tensor([self.num_atom_type, 0])
if self.mask_edge:
# create mask edge labels by copying edge features of edges that are bonded to
# mask atoms
connected_edge_indices = []
for bond_idx, (u, v) in enumerate(data.edge_index.cpu().numpy().T):
for atom_idx in masked_atom_indices:
if atom_idx in set((u, v)) and \
bond_idx not in connected_edge_indices:
connected_edge_indices.append(bond_idx)
if len(connected_edge_indices) > 0:
# create mask edge labels by copying bond features of the bonds connected to
# the mask atoms
mask_edge_labels_list = []
for bond_idx in connected_edge_indices[::2]: # because the
# edge ordering is such that two directions of a single
# edge occur in pairs, so to get the unique undirected
# edge indices, we take every 2nd edge index from list
mask_edge_labels_list.append(
data.edge_attr[bond_idx].view(1, -1))
data.mask_edge_label = torch.cat(mask_edge_labels_list, dim=0)
# modify the original bond features of the bonds connected to the mask atoms
for bond_idx in connected_edge_indices:
data.edge_attr[bond_idx] = torch.tensor(
[self.num_edge_type, 0])
data.connected_edge_indices = torch.tensor(
connected_edge_indices[::2])
else:
data.mask_edge_label = torch.empty((0, 2)).to(torch.int64)
data.connected_edge_indices = torch.tensor(
connected_edge_indices).to(torch.int64)
return data
def __repr__(self):
if self.mask_num == 0:
return '{}(num_atom_type={}, num_edge_type={}, mask_rate={}, mask_edge={})'.format(
self.__class__.__name__, self.num_atom_type, self.num_edge_type,
self.mask_rate, self.mask_edge)
else:
return '{}(num_atom_type={}, num_edge_type={}, mask_num={}, mask_edge={})'.format(
self.__class__.__name__, self.num_atom_type, self.num_edge_type,
self.mask_num, self.mask_edge)
if __name__ == "__main__":
transform = NegativeEdge()
dataset = MoleculeDataset("dataset/tox21", dataset="tox21")
transform(dataset[0]) | 11,865 | 41.378571 | 100 | py |
GraphLoG | GraphLoG-main/loader.py | import os
import torch
import pickle
import collections
import math
import pandas as pd
import numpy as np
import networkx as nx
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem import AllChem
from rdkit import DataStructs
from rdkit.Chem.rdMolDescriptors import GetMorganFingerprintAsBitVect
from torch.utils import data
from torch_geometric.data import Data
from torch_geometric.data import InMemoryDataset
from torch_geometric.data import Batch
from itertools import repeat, product, chain
# allowable node and edge features
allowable_features = {
'possible_atomic_num_list' : list(range(1, 119)),
'possible_formal_charge_list' : [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
'possible_chirality_list' : [
Chem.rdchem.ChiralType.CHI_UNSPECIFIED,
Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CW,
Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CCW,
Chem.rdchem.ChiralType.CHI_OTHER
],
'possible_hybridization_list' : [
Chem.rdchem.HybridizationType.S,
Chem.rdchem.HybridizationType.SP, Chem.rdchem.HybridizationType.SP2,
Chem.rdchem.HybridizationType.SP3, Chem.rdchem.HybridizationType.SP3D,
Chem.rdchem.HybridizationType.SP3D2, Chem.rdchem.HybridizationType.UNSPECIFIED
],
'possible_numH_list' : [0, 1, 2, 3, 4, 5, 6, 7, 8],
'possible_implicit_valence_list' : [0, 1, 2, 3, 4, 5, 6],
'possible_degree_list' : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'possible_bonds' : [
Chem.rdchem.BondType.SINGLE,
Chem.rdchem.BondType.DOUBLE,
Chem.rdchem.BondType.TRIPLE,
Chem.rdchem.BondType.AROMATIC
],
'possible_bond_dirs' : [ # only for double bond stereo information
Chem.rdchem.BondDir.NONE,
Chem.rdchem.BondDir.ENDUPRIGHT,
Chem.rdchem.BondDir.ENDDOWNRIGHT
]
}
def mol_to_graph_data_obj_simple(mol):
"""
Converts rdkit mol object to graph Data object required by the pytorch
geometric package. NB: Uses simplified atom and bond features, and represent
as indices
:param mol: rdkit mol object
:return: graph data object with the attributes: x, edge_index, edge_attr
"""
# atoms
num_atom_features = 2 # atom type, chirality tag
atom_features_list = []
for atom in mol.GetAtoms():
atom_feature = [allowable_features['possible_atomic_num_list'].index(
atom.GetAtomicNum())] + [allowable_features[
'possible_chirality_list'].index(atom.GetChiralTag())]
atom_features_list.append(atom_feature)
x = torch.tensor(np.array(atom_features_list), dtype=torch.long)
# bonds
num_bond_features = 2 # bond type, bond direction
if len(mol.GetBonds()) > 0: # mol has bonds
edges_list = []
edge_features_list = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edge_feature = [allowable_features['possible_bonds'].index(
bond.GetBondType())] + [allowable_features[
'possible_bond_dirs'].index(
bond.GetBondDir())]
edges_list.append((i, j))
edge_features_list.append(edge_feature)
edges_list.append((j, i))
edge_features_list.append(edge_feature)
# data.edge_index: Graph connectivity in COO format with shape [2, num_edges]
edge_index = torch.tensor(np.array(edges_list).T, dtype=torch.long)
# data.edge_attr: Edge feature matrix with shape [num_edges, num_edge_features]
edge_attr = torch.tensor(np.array(edge_features_list),
dtype=torch.long)
else: # mol has no bonds
edge_index = torch.empty((2, 0), dtype=torch.long)
edge_attr = torch.empty((0, num_bond_features), dtype=torch.long)
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
return data
def graph_data_obj_to_mol_simple(data_x, data_edge_index, data_edge_attr):
"""
Convert pytorch geometric data obj to rdkit mol object. NB: Uses simplified
atom and bond features, and represent as indices.
:param: data_x:
:param: data_edge_index:
:param: data_edge_attr
:return:
"""
mol = Chem.RWMol()
# atoms
atom_features = data_x.cpu().numpy()
num_atoms = atom_features.shape[0]
for i in range(num_atoms):
atomic_num_idx, chirality_tag_idx = atom_features[i]
atomic_num = allowable_features['possible_atomic_num_list'][atomic_num_idx]
chirality_tag = allowable_features['possible_chirality_list'][chirality_tag_idx]
atom = Chem.Atom(atomic_num)
atom.SetChiralTag(chirality_tag)
mol.AddAtom(atom)
# bonds
edge_index = data_edge_index.cpu().numpy()
edge_attr = data_edge_attr.cpu().numpy()
num_bonds = edge_index.shape[1]
for j in range(0, num_bonds, 2):
begin_idx = int(edge_index[0, j])
end_idx = int(edge_index[1, j])
bond_type_idx, bond_dir_idx = edge_attr[j]
bond_type = allowable_features['possible_bonds'][bond_type_idx]
bond_dir = allowable_features['possible_bond_dirs'][bond_dir_idx]
mol.AddBond(begin_idx, end_idx, bond_type)
# set bond direction
new_bond = mol.GetBondBetweenAtoms(begin_idx, end_idx)
new_bond.SetBondDir(bond_dir)
# Chem.SanitizeMol(mol) # fails for COC1=CC2=C(NC(=N2)[S@@](=O)CC2=NC=C(
# C)C(OC)=C2C)C=C1, when aromatic bond is possible
# when we do not have aromatic bonds
# Chem.SanitizeMol(mol, sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
return mol
def graph_data_obj_to_nx_simple(data):
"""
Converts graph Data object required by the pytorch geometric package to
network x data object. NB: Uses simplified atom and bond features,
and represent as indices. NB: possible issues with recapitulating relative
stereochemistry since the edges in the nx object are unordered.
:param data: pytorch geometric Data object
:return: network x object
"""
G = nx.Graph()
# atoms
atom_features = data.x.cpu().numpy()
num_atoms = atom_features.shape[0]
for i in range(num_atoms):
atomic_num_idx, chirality_tag_idx = atom_features[i]
G.add_node(i, atom_num_idx=atomic_num_idx, chirality_tag_idx=chirality_tag_idx)
pass
# bonds
edge_index = data.edge_index.cpu().numpy()
edge_attr = data.edge_attr.cpu().numpy()
num_bonds = edge_index.shape[1]
for j in range(0, num_bonds, 2):
begin_idx = int(edge_index[0, j])
end_idx = int(edge_index[1, j])
bond_type_idx, bond_dir_idx = edge_attr[j]
if not G.has_edge(begin_idx, end_idx):
G.add_edge(begin_idx, end_idx, bond_type_idx=bond_type_idx,
bond_dir_idx=bond_dir_idx)
return G
def nx_to_graph_data_obj_simple(G):
"""
Converts nx graph to pytorch geometric Data object. Assume node indices
are numbered from 0 to num_nodes - 1. NB: Uses simplified atom and bond
features, and represent as indices. NB: possible issues with
recapitulating relative stereochemistry since the edges in the nx
object are unordered.
:param G: nx graph obj
:return: pytorch geometric Data object
"""
# atoms
num_atom_features = 2 # atom type, chirality tag
atom_features_list = []
for _, node in G.nodes(data=True):
atom_feature = [node['atom_num_idx'], node['chirality_tag_idx']]
atom_features_list.append(atom_feature)
x = torch.tensor(np.array(atom_features_list), dtype=torch.long)
# bonds
num_bond_features = 2 # bond type, bond direction
if len(G.edges()) > 0: # mol has bonds
edges_list = []
edge_features_list = []
for i, j, edge in G.edges(data=True):
edge_feature = [edge['bond_type_idx'], edge['bond_dir_idx']]
edges_list.append((i, j))
edge_features_list.append(edge_feature)
edges_list.append((j, i))
edge_features_list.append(edge_feature)
# data.edge_index: Graph connectivity in COO format with shape [2, num_edges]
edge_index = torch.tensor(np.array(edges_list).T, dtype=torch.long)
# data.edge_attr: Edge feature matrix with shape [num_edges, num_edge_features]
edge_attr = torch.tensor(np.array(edge_features_list),
dtype=torch.long)
else: # mol has no bonds
edge_index = torch.empty((2, 0), dtype=torch.long)
edge_attr = torch.empty((0, num_bond_features), dtype=torch.long)
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
return data
def get_gasteiger_partial_charges(mol, n_iter=12):
"""
Calculates list of gasteiger partial charges for each atom in mol object.
:param mol: rdkit mol object
:param n_iter: number of iterations. Default 12
:return: list of computed partial charges for each atom.
"""
Chem.rdPartialCharges.ComputeGasteigerCharges(mol, nIter=n_iter,
throwOnParamFailure=True)
partial_charges = [float(a.GetProp('_GasteigerCharge')) for a in
mol.GetAtoms()]
return partial_charges
def create_standardized_mol_id(smiles):
"""
:param smiles:
:return: inchi
"""
if check_smiles_validity(smiles):
# remove stereochemistry
smiles = AllChem.MolToSmiles(AllChem.MolFromSmiles(smiles),
isomericSmiles=False)
mol = AllChem.MolFromSmiles(smiles)
if mol != None: # to catch weird issue with O=C1O[al]2oc(=O)c3ccc(cn3)c3ccccc3c3cccc(c3)c3ccccc3c3cc(C(F)(F)F)c(cc3o2)-c2ccccc2-c2cccc(c2)-c2ccccc2-c2cccnc21
if '.' in smiles: # if multiple species, pick largest molecule
mol_species_list = split_rdkit_mol_obj(mol)
largest_mol = get_largest_mol(mol_species_list)
inchi = AllChem.MolToInchi(largest_mol)
else:
inchi = AllChem.MolToInchi(mol)
return inchi
else:
return
else:
return
class MoleculeDataset(InMemoryDataset):
def __init__(self,
root,
#data = None,
#slices = None,
transform=None,
pre_transform=None,
pre_filter=None,
dataset='zinc250k',
empty=False):
"""
Adapted from qm9.py. Disabled the download functionality
:param root: directory of the dataset, containing a raw and processed
dir. The raw dir should contain the file containing the smiles, and the
processed dir can either empty or a previously processed file
:param dataset: name of the dataset. Currently only implemented for
zinc250k, chembl_with_labels, tox21, hiv, bace, bbbp, clintox, esol,
freesolv, lipophilicity, muv, pcba, sider, toxcast
:param empty: if True, then will not load any data obj. For
initializing empty dataset
"""
self.dataset = dataset
self.root = root
super(MoleculeDataset, self).__init__(root, transform, pre_transform,
pre_filter)
self.transform, self.pre_transform, self.pre_filter = transform, pre_transform, pre_filter
if not empty:
self.data, self.slices = torch.load(self.processed_paths[0])
def get(self, idx):
data = Data()
for key in self.data.keys:
item, slices = self.data[key], self.slices[key]
s = list(repeat(slice(None), item.dim()))
s[data.cat_dim(key, item)] = slice(slices[idx],
slices[idx + 1])
data[key] = item[s]
return data
@property
def raw_file_names(self):
file_name_list = os.listdir(self.raw_dir)
# assert len(file_name_list) == 1 # currently assume we have a
# # single raw file
return file_name_list
@property
def processed_file_names(self):
return 'geometric_data_processed.pt'
def download(self):
raise NotImplementedError('Must indicate valid location of raw data. '
'No download allowed')
def process(self):
data_smiles_list = []
data_list = []
if self.dataset == 'zinc_standard_agent':
input_path = self.raw_paths[0]
input_df = pd.read_csv(input_path, sep=',', compression='gzip',
dtype='str')
smiles_list = list(input_df['smiles'])
zinc_id_list = list(input_df['zinc_id'])
for i in range(len(smiles_list)):
print(i)
s = smiles_list[i]
# each example contains a single species
try:
rdkit_mol = AllChem.MolFromSmiles(s)
if rdkit_mol != None: # ignore invalid mol objects
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
id = int(zinc_id_list[i].split('ZINC')[1].lstrip('0'))
data.id = torch.tensor(
[id]) # id here is zinc id value, stripped of
# leading zeros
data_list.append(data)
data_smiles_list.append(smiles_list[i])
except:
continue
elif self.dataset == 'chembl_filtered':
### get downstream test molecules.
from splitters import scaffold_split
###
downstream_dir = [
'dataset/bace',
'dataset/bbbp',
'dataset/clintox',
'dataset/esol',
'dataset/freesolv',
'dataset/hiv',
'dataset/lipophilicity',
'dataset/muv',
# 'dataset/pcba/processed/smiles.csv',
'dataset/sider',
'dataset/tox21',
'dataset/toxcast'
]
downstream_inchi_set = set()
for d_path in downstream_dir:
print(d_path)
dataset_name = d_path.split('/')[1]
downstream_dataset = MoleculeDataset(d_path, dataset=dataset_name)
downstream_smiles = pd.read_csv(os.path.join(d_path,
'processed', 'smiles.csv'),
header=None)[0].tolist()
assert len(downstream_dataset) == len(downstream_smiles)
_, _, _, (train_smiles, valid_smiles, test_smiles) = scaffold_split(downstream_dataset, downstream_smiles, task_idx=None, null_value=0,
frac_train=0.8,frac_valid=0.1, frac_test=0.1,
return_smiles=True)
### remove both test and validation molecules
remove_smiles = test_smiles + valid_smiles
downstream_inchis = []
for smiles in remove_smiles:
species_list = smiles.split('.')
for s in species_list: # record inchi for all species, not just
# largest (by default in create_standardized_mol_id if input has
# multiple species)
inchi = create_standardized_mol_id(s)
downstream_inchis.append(inchi)
downstream_inchi_set.update(downstream_inchis)
smiles_list, rdkit_mol_objs, folds, labels = \
_load_chembl_with_labels_dataset(os.path.join(self.root, 'raw'))
print('processing')
for i in range(len(rdkit_mol_objs)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
if rdkit_mol != None:
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
mw = Descriptors.MolWt(rdkit_mol)
if 50 <= mw <= 900:
inchi = create_standardized_mol_id(smiles_list[i])
if inchi != None and inchi not in downstream_inchi_set:
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
# fold information
if i in folds[0]:
data.fold = torch.tensor([0])
elif i in folds[1]:
data.fold = torch.tensor([1])
else:
data.fold = torch.tensor([2])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'tox21':
smiles_list, rdkit_mol_objs, labels = \
_load_tox21_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
## convert aromatic bonds to double bonds
#Chem.SanitizeMol(rdkit_mol,
#sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'hiv':
smiles_list, rdkit_mol_objs, labels = \
_load_hiv_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'bace':
smiles_list, rdkit_mol_objs, folds, labels = \
_load_bace_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor([labels[i]])
data.fold = torch.tensor([folds[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'bbbp':
smiles_list, rdkit_mol_objs, labels = \
_load_bbbp_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
if rdkit_mol != None:
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'clintox':
smiles_list, rdkit_mol_objs, labels = \
_load_clintox_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
if rdkit_mol != None:
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'esol':
smiles_list, rdkit_mol_objs, labels = \
_load_esol_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'freesolv':
smiles_list, rdkit_mol_objs, labels = \
_load_freesolv_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'lipophilicity':
smiles_list, rdkit_mol_objs, labels = \
_load_lipophilicity_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'muv':
smiles_list, rdkit_mol_objs, labels = \
_load_muv_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'pcba':
smiles_list, rdkit_mol_objs, labels = \
_load_pcba_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'pcba_pretrain':
smiles_list, rdkit_mol_objs, labels = \
_load_pcba_dataset(self.raw_paths[0])
downstream_inchi = set(pd.read_csv(os.path.join(self.root,
'downstream_mol_inchi_may_24_2019'),
sep=',', header=None)[0])
for i in range(len(smiles_list)):
print(i)
if '.' not in smiles_list[i]: # remove examples with
# multiples species
rdkit_mol = rdkit_mol_objs[i]
mw = Descriptors.MolWt(rdkit_mol)
if 50 <= mw <= 900:
inchi = create_standardized_mol_id(smiles_list[i])
if inchi != None and inchi not in downstream_inchi:
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
# elif self.dataset == ''
elif self.dataset == 'sider':
smiles_list, rdkit_mol_objs, labels = \
_load_sider_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'toxcast':
smiles_list, rdkit_mol_objs, labels = \
_load_toxcast_dataset(self.raw_paths[0])
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
if rdkit_mol != None:
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i]) # id here is the index of the mol in
# the dataset
data.y = torch.tensor(labels[i, :])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'ptc_mr':
input_path = self.raw_paths[0]
input_df = pd.read_csv(input_path, sep=',', header=None, names=['id', 'label', 'smiles'])
smiles_list = input_df['smiles']
labels = input_df['label'].values
for i in range(len(smiles_list)):
print(i)
s = smiles_list[i]
rdkit_mol = AllChem.MolFromSmiles(s)
if rdkit_mol != None: # ignore invalid mol objects
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i])
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'mutag':
smiles_path = os.path.join(self.root, 'raw', 'mutag_188_data.can')
# smiles_path = 'dataset/mutag/raw/mutag_188_data.can'
labels_path = os.path.join(self.root, 'raw', 'mutag_188_target.txt')
# labels_path = 'dataset/mutag/raw/mutag_188_target.txt'
smiles_list = pd.read_csv(smiles_path, sep=' ', header=None)[0]
labels = pd.read_csv(labels_path, header=None)[0].values
for i in range(len(smiles_list)):
print(i)
s = smiles_list[i]
rdkit_mol = AllChem.MolFromSmiles(s)
if rdkit_mol != None: # ignore invalid mol objects
# # convert aromatic bonds to double bonds
# Chem.SanitizeMol(rdkit_mol,
# sanitizeOps=Chem.SanitizeFlags.SANITIZE_KEKULIZE)
data = mol_to_graph_data_obj_simple(rdkit_mol)
# manually add mol id
data.id = torch.tensor(
[i])
data.y = torch.tensor([labels[i]])
data_list.append(data)
data_smiles_list.append(smiles_list[i])
else:
raise ValueError('Invalid dataset name')
if self.pre_filter is not None:
data_list = [data for data in data_list if self.pre_filter(data)]
if self.pre_transform is not None:
data_list = [self.pre_transform(data) for data in data_list]
# write data_smiles_list in processed paths
data_smiles_series = pd.Series(data_smiles_list)
data_smiles_series.to_csv(os.path.join(self.processed_dir,
'smiles.csv'), index=False,
header=False)
data, slices = self.collate(data_list)
torch.save((data, slices), self.processed_paths[0])
# NB: only properly tested when dataset_1 is chembl_with_labels and dataset_2
# is pcba_pretrain
def merge_dataset_objs(dataset_1, dataset_2):
"""
Naively merge 2 molecule dataset objects, and ignore identities of
molecules. Assumes both datasets have multiple y labels, and will pad
accordingly. ie if dataset_1 has obj_1 with y dim 1310 and dataset_2 has
obj_2 with y dim 128, then the resulting obj_1 and obj_2 will have dim
1438, where obj_1 have the last 128 cols with 0, and obj_2 have
the first 1310 cols with 0.
:return: pytorch geometric dataset obj, with the x, edge_attr, edge_index,
new y attributes only
"""
d_1_y_dim = dataset_1[0].y.size()[0]
d_2_y_dim = dataset_2[0].y.size()[0]
data_list = []
# keep only x, edge_attr, edge_index, padded_y then append
for d in dataset_1:
old_y = d.y
new_y = torch.cat([old_y, torch.zeros(d_2_y_dim, dtype=torch.long)])
data_list.append(Data(x=d.x, edge_index=d.edge_index,
edge_attr=d.edge_attr, y=new_y))
for d in dataset_2:
old_y = d.y
new_y = torch.cat([torch.zeros(d_1_y_dim, dtype=torch.long), old_y.long()])
data_list.append(Data(x=d.x, edge_index=d.edge_index,
edge_attr=d.edge_attr, y=new_y))
# create 'empty' dataset obj. Just randomly pick a dataset and root path
# that has already been processed
new_dataset = MoleculeDataset(root='dataset/chembl_with_labels',
dataset='chembl_with_labels', empty=True)
# collate manually
new_dataset.data, new_dataset.slices = new_dataset.collate(data_list)
return new_dataset
def create_circular_fingerprint(mol, radius, size, chirality):
"""
:param mol:
:param radius:
:param size:
:param chirality:
:return: np array of morgan fingerprint
"""
fp = GetMorganFingerprintAsBitVect(mol, radius,
nBits=size, useChirality=chirality)
return np.array(fp)
class MoleculeFingerprintDataset(data.Dataset):
def __init__(self, root, dataset, radius, size, chirality=True):
"""
Create dataset object containing list of dicts, where each dict
contains the circular fingerprint of the molecule, label, id,
and possibly precomputed fold information
:param root: directory of the dataset, containing a raw and
processed_fp dir. The raw dir should contain the file containing the
smiles, and the processed_fp dir can either be empty or a
previously processed file
:param dataset: name of dataset. Currently only implemented for
tox21, hiv, chembl_with_labels
:param radius: radius of the circular fingerprints
:param size: size of the folded fingerprint vector
:param chirality: if True, fingerprint includes chirality information
"""
self.dataset = dataset
self.root = root
self.radius = radius
self.size = size
self.chirality = chirality
self._load()
def _process(self):
data_smiles_list = []
data_list = []
if self.dataset == 'chembl_with_labels':
smiles_list, rdkit_mol_objs, folds, labels = \
_load_chembl_with_labels_dataset(os.path.join(self.root, 'raw'))
print('processing')
for i in range(len(rdkit_mol_objs)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
if rdkit_mol != None:
# # convert aromatic bonds to double bonds
fp_arr = create_circular_fingerprint(rdkit_mol,
self.radius,
self.size, self.chirality)
fp_arr = torch.tensor(fp_arr)
# manually add mol id
id = torch.tensor([i]) # id here is the index of the mol in
# the dataset
y = torch.tensor(labels[i, :])
# fold information
if i in folds[0]:
fold = torch.tensor([0])
elif i in folds[1]:
fold = torch.tensor([1])
else:
fold = torch.tensor([2])
data_list.append({'fp_arr': fp_arr, 'id': id, 'y': y,
'fold': fold})
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'tox21':
smiles_list, rdkit_mol_objs, labels = \
_load_tox21_dataset(os.path.join(self.root, 'raw/tox21.csv'))
print('processing')
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
## convert aromatic bonds to double bonds
fp_arr = create_circular_fingerprint(rdkit_mol,
self.radius,
self.size,
self.chirality)
fp_arr = torch.tensor(fp_arr)
# manually add mol id
id = torch.tensor([i]) # id here is the index of the mol in
# the dataset
y = torch.tensor(labels[i, :])
data_list.append({'fp_arr': fp_arr, 'id': id, 'y': y})
data_smiles_list.append(smiles_list[i])
elif self.dataset == 'hiv':
smiles_list, rdkit_mol_objs, labels = \
_load_hiv_dataset(os.path.join(self.root, 'raw/HIV.csv'))
print('processing')
for i in range(len(smiles_list)):
print(i)
rdkit_mol = rdkit_mol_objs[i]
# # convert aromatic bonds to double bonds
fp_arr = create_circular_fingerprint(rdkit_mol,
self.radius,
self.size,
self.chirality)
fp_arr = torch.tensor(fp_arr)
# manually add mol id
id = torch.tensor([i]) # id here is the index of the mol in
# the dataset
y = torch.tensor([labels[i]])
data_list.append({'fp_arr': fp_arr, 'id': id, 'y': y})
data_smiles_list.append(smiles_list[i])
else:
raise ValueError('Invalid dataset name')
# save processed data objects and smiles
processed_dir = os.path.join(self.root, 'processed_fp')
data_smiles_series = pd.Series(data_smiles_list)
data_smiles_series.to_csv(os.path.join(processed_dir, 'smiles.csv'),
index=False,
header=False)
with open(os.path.join(processed_dir,
'fingerprint_data_processed.pkl'),
'wb') as f:
pickle.dump(data_list, f)
def _load(self):
processed_dir = os.path.join(self.root, 'processed_fp')
# check if saved file exist. If so, then load from save
file_name_list = os.listdir(processed_dir)
if 'fingerprint_data_processed.pkl' in file_name_list:
with open(os.path.join(processed_dir,
'fingerprint_data_processed.pkl'),
'rb') as f:
self.data_list = pickle.load(f)
# if no saved file exist, then perform processing steps, save then
# reload
else:
self._process()
self._load()
def __len__(self):
return len(self.data_list)
def __getitem__(self, index):
## if iterable class is passed, return dataset objection
if hasattr(index, "__iter__"):
dataset = MoleculeFingerprintDataset(self.root, self.dataset, self.radius, self.size, chirality=self.chirality)
dataset.data_list = [self.data_list[i] for i in index]
return dataset
else:
return self.data_list[index]
def _load_tox21_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
tasks = ['NR-AR', 'NR-AR-LBD', 'NR-AhR', 'NR-Aromatase', 'NR-ER', 'NR-ER-LBD',
'NR-PPAR-gamma', 'SR-ARE', 'SR-ATAD5', 'SR-HSE', 'SR-MMP', 'SR-p53']
labels = input_df[tasks]
# convert 0 to -1
labels = labels.replace(0, -1)
# convert nan to 0
labels = labels.fillna(0)
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.values
def _load_hiv_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
labels = input_df['HIV_active']
# convert 0 to -1
labels = labels.replace(0, -1)
# there are no nans
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.values
def _load_bace_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array
containing indices for each of the 3 folds, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['mol']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
labels = input_df['Class']
# convert 0 to -1
labels = labels.replace(0, -1)
# there are no nans
folds = input_df['Model']
folds = folds.replace('Train', 0) # 0 -> train
folds = folds.replace('Valid', 1) # 1 -> valid
folds = folds.replace('Test', 2) # 2 -> test
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
assert len(smiles_list) == len(folds)
return smiles_list, rdkit_mol_objs_list, folds.values, labels.values
def _load_bbbp_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
preprocessed_rdkit_mol_objs_list = [m if m != None else None for m in
rdkit_mol_objs_list]
preprocessed_smiles_list = [AllChem.MolToSmiles(m) if m != None else
None for m in preprocessed_rdkit_mol_objs_list]
labels = input_df['p_np']
# convert 0 to -1
labels = labels.replace(0, -1)
# there are no nans
assert len(smiles_list) == len(preprocessed_rdkit_mol_objs_list)
assert len(smiles_list) == len(preprocessed_smiles_list)
assert len(smiles_list) == len(labels)
return preprocessed_smiles_list, preprocessed_rdkit_mol_objs_list, \
labels.values
def _load_clintox_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
preprocessed_rdkit_mol_objs_list = [m if m != None else None for m in
rdkit_mol_objs_list]
preprocessed_smiles_list = [AllChem.MolToSmiles(m) if m != None else
None for m in preprocessed_rdkit_mol_objs_list]
tasks = ['FDA_APPROVED', 'CT_TOX']
labels = input_df[tasks]
# convert 0 to -1
labels = labels.replace(0, -1)
# there are no nans
assert len(smiles_list) == len(preprocessed_rdkit_mol_objs_list)
assert len(smiles_list) == len(preprocessed_smiles_list)
assert len(smiles_list) == len(labels)
return preprocessed_smiles_list, preprocessed_rdkit_mol_objs_list, \
labels.values
# input_path = 'dataset/clintox/raw/clintox.csv'
# smiles_list, rdkit_mol_objs_list, labels = _load_clintox_dataset(input_path)
def _load_esol_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels (regression task)
"""
# NB: some examples have multiple species
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
labels = input_df['measured log solubility in mols per litre']
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.values
# input_path = 'dataset/esol/raw/delaney-processed.csv'
# smiles_list, rdkit_mol_objs_list, labels = _load_esol_dataset(input_path)
def _load_freesolv_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels (regression task)
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
labels = input_df['expt']
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.values
def _load_lipophilicity_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels (regression task)
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
labels = input_df['exp']
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.values
def _load_muv_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
tasks = ['MUV-466', 'MUV-548', 'MUV-600', 'MUV-644', 'MUV-652', 'MUV-689',
'MUV-692', 'MUV-712', 'MUV-713', 'MUV-733', 'MUV-737', 'MUV-810',
'MUV-832', 'MUV-846', 'MUV-852', 'MUV-858', 'MUV-859']
labels = input_df[tasks]
# convert 0 to -1
labels = labels.replace(0, -1)
# convert nan to 0
labels = labels.fillna(0)
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.values
def _load_sider_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
tasks = ['Hepatobiliary disorders',
'Metabolism and nutrition disorders', 'Product issues', 'Eye disorders',
'Investigations', 'Musculoskeletal and connective tissue disorders',
'Gastrointestinal disorders', 'Social circumstances',
'Immune system disorders', 'Reproductive system and breast disorders',
'Neoplasms benign, malignant and unspecified (incl cysts and polyps)',
'General disorders and administration site conditions',
'Endocrine disorders', 'Surgical and medical procedures',
'Vascular disorders', 'Blood and lymphatic system disorders',
'Skin and subcutaneous tissue disorders',
'Congenital, familial and genetic disorders',
'Infections and infestations',
'Respiratory, thoracic and mediastinal disorders',
'Psychiatric disorders', 'Renal and urinary disorders',
'Pregnancy, puerperium and perinatal conditions',
'Ear and labyrinth disorders', 'Cardiac disorders',
'Nervous system disorders',
'Injury, poisoning and procedural complications']
labels = input_df[tasks]
# convert 0 to -1
labels = labels.replace(0, -1)
assert len(smiles_list) == len(rdkit_mol_objs_list)
assert len(smiles_list) == len(labels)
return smiles_list, rdkit_mol_objs_list, labels.value
def _load_toxcast_dataset(input_path):
"""
:param input_path:
:return: list of smiles, list of rdkit mol obj, np.array containing the
labels
"""
# NB: some examples have multiple species, some example smiles are invalid
input_df = pd.read_csv(input_path, sep=',')
smiles_list = input_df['smiles']
rdkit_mol_objs_list = [AllChem.MolFromSmiles(s) for s in smiles_list]
# Some smiles could not be successfully converted
# to rdkit mol object so them to None
preprocessed_rdkit_mol_objs_list = [m if m != None else None for m in
rdkit_mol_objs_list]
preprocessed_smiles_list = [AllChem.MolToSmiles(m) if m != None else
None for m in preprocessed_rdkit_mol_objs_list]
tasks = list(input_df.columns)[1:]
labels = input_df[tasks]
# convert 0 to -1
labels = labels.replace(0, -1)
# convert nan to 0
labels = labels.fillna(0)
assert len(smiles_list) == len(preprocessed_rdkit_mol_objs_list)
assert len(smiles_list) == len(preprocessed_smiles_list)
assert len(smiles_list) == len(labels)
return preprocessed_smiles_list, preprocessed_rdkit_mol_objs_list, \
labels.values
def _load_chembl_with_labels_dataset(root_path):
"""
Data from 'Large-scale comparison of machine learning methods for drug target prediction on ChEMBL'
:param root_path: path to the folder containing the reduced chembl dataset
:return: list of smiles, preprocessed rdkit mol obj list, list of np.array
containing indices for each of the 3 folds, np.array containing the labels
"""
# adapted from https://github.com/ml-jku/lsc/blob/master/pythonCode/lstm/loadData.py
# first need to download the files and unzip:
# wget http://bioinf.jku.at/research/lsc/chembl20/dataPythonReduced.zip
# unzip and rename to chembl_with_labels
# wget http://bioinf.jku.at/research/lsc/chembl20/dataPythonReduced/chembl20Smiles.pckl
# into the dataPythonReduced directory
# wget http://bioinf.jku.at/research/lsc/chembl20/dataPythonReduced/chembl20LSTM.pckl
# 1. load folds and labels
f=open(os.path.join(root_path, 'folds0.pckl'), 'rb')
folds=pickle.load(f)
f.close()
f=open(os.path.join(root_path, 'labelsHard.pckl'), 'rb')
targetMat=pickle.load(f)
sampleAnnInd=pickle.load(f)
targetAnnInd=pickle.load(f)
f.close()
targetMat=targetMat
targetMat=targetMat.copy().tocsr()
targetMat.sort_indices()
targetAnnInd=targetAnnInd
targetAnnInd=targetAnnInd-targetAnnInd.min()
folds=[np.intersect1d(fold, sampleAnnInd.index.values).tolist() for fold in folds]
targetMatTransposed=targetMat[sampleAnnInd[list(chain(*folds))]].T.tocsr()
targetMatTransposed.sort_indices()
# # num positive examples in each of the 1310 targets
trainPosOverall=np.array([np.sum(targetMatTransposed[x].data > 0.5) for x in range(targetMatTransposed.shape[0])])
# # num negative examples in each of the 1310 targets
trainNegOverall=np.array([np.sum(targetMatTransposed[x].data < -0.5) for x in range(targetMatTransposed.shape[0])])
# dense array containing the labels for the 456331 molecules and 1310 targets
denseOutputData=targetMat.A # possible values are {-1, 0, 1}
# 2. load structures
f=open(os.path.join(root_path, 'chembl20LSTM.pckl'), 'rb')
rdkitArr=pickle.load(f)
f.close()
assert len(rdkitArr) == denseOutputData.shape[0]
assert len(rdkitArr) == len(folds[0]) + len(folds[1]) + len(folds[2])
preprocessed_rdkitArr = []
print('preprocessing')
for i in range(len(rdkitArr)):
print(i)
m = rdkitArr[i]
if m == None:
preprocessed_rdkitArr.append(None)
else:
mol_species_list = split_rdkit_mol_obj(m)
if len(mol_species_list) == 0:
preprocessed_rdkitArr.append(None)
else:
largest_mol = get_largest_mol(mol_species_list)
if len(largest_mol.GetAtoms()) <= 2:
preprocessed_rdkitArr.append(None)
else:
preprocessed_rdkitArr.append(largest_mol)
assert len(preprocessed_rdkitArr) == denseOutputData.shape[0]
smiles_list = [AllChem.MolToSmiles(m) if m != None else None for m in
preprocessed_rdkitArr] # bc some empty mol in the
# rdkitArr zzz...
assert len(preprocessed_rdkitArr) == len(smiles_list)
return smiles_list, preprocessed_rdkitArr, folds, denseOutputData
# root_path = 'dataset/chembl_with_labels'
def check_smiles_validity(smiles):
try:
m = Chem.MolFromSmiles(smiles)
if m:
return True
else:
return False
except:
return False
def split_rdkit_mol_obj(mol):
"""
Split rdkit mol object containing multiple species or one species into a
list of mol objects or a list containing a single object respectively
:param mol:
:return:
"""
smiles = AllChem.MolToSmiles(mol, isomericSmiles=True)
smiles_list = smiles.split('.')
mol_species_list = []
for s in smiles_list:
if check_smiles_validity(s):
mol_species_list.append(AllChem.MolFromSmiles(s))
return mol_species_list
def get_largest_mol(mol_list):
"""
Given a list of rdkit mol objects, returns mol object containing the
largest num of atoms. If multiple containing largest num of atoms,
picks the first one
:param mol_list:
:return:
"""
num_atoms_list = [len(m.GetAtoms()) for m in mol_list]
largest_mol_idx = num_atoms_list.index(max(num_atoms_list))
return mol_list[largest_mol_idx]
def create_all_datasets():
#### create dataset
downstream_dir = [
'bace',
'bbbp',
'clintox',
'esol',
'freesolv',
'hiv',
'lipophilicity',
'muv',
'sider',
'tox21',
'toxcast'
]
for dataset_name in downstream_dir:
print(dataset_name)
root = "dataset/" + dataset_name
os.makedirs(root + "/processed", exist_ok=True)
dataset = MoleculeDataset(root, dataset=dataset_name)
print(dataset)
dataset = MoleculeDataset(root = "dataset/chembl_filtered", dataset="chembl_filtered")
print(dataset)
dataset = MoleculeDataset(root = "dataset/zinc_standard_agent", dataset="zinc_standard_agent")
print(dataset)
# test MoleculeDataset object
if __name__ == "__main__":
create_all_datasets()
| 56,150 | 41.250564 | 165 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/run_test_ensemble.py | """Script to launch ensemble on test set results."""
import argparse
import json
from collections import defaultdict
from functools import partial
from pathlib import Path
import jax
import jax.numpy as jnp
import numpy as np
import pandas as pd
import SimpleITK as sitk # noqa: N813
from absl import logging
from omegaconf import OmegaConf
from tqdm import tqdm
from imgx.datasets import (
DIR_TFDS_PROCESSED_MAP,
IMAGE_SPACING_MAP,
NUM_CLASSES_MAP,
)
from imgx.exp.eval import (
get_jit_segmentation_metrics,
get_non_jit_segmentation_metrics_per_step,
)
logging.set_verbosity(logging.INFO)
def parse_args() -> argparse.Namespace:
"""Parse arguments."""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--log_dir",
type=Path,
help="Folder of wandb.",
default=None,
)
args = parser.parse_args()
return args
def vote_ensemble(test_dir: Path, dir_tfds: Path, num_classes: int) -> None:
"""Ensemble prediction via voting.
Args:
test_dir: path having predictions.
dir_tfds: path of tfds data, having ground truth.
num_classes: number of classes in labels.
"""
# get seed dirs and sort by seeds
lst_seed_dir = sorted(
test_dir.glob("seed_*/"), key=lambda x: int(x.stem.split("_")[-1])
)
num_seeds = len(lst_seed_dir)
# map relative_path to list of full path, corresponding to seeds
path_dict = defaultdict(list)
for seed_dir in lst_seed_dir:
for x in seed_dir.glob("**/*.nii.gz"):
rel_path = x.relative_to(seed_dir)
path_dict[rel_path].append(x)
# vote to ensemble
logging.info("Calculating ensemble predictions.")
for rel_path, pred_paths in path_dict.items():
# a list of shape (D, W, H)
mask_preds = [
sitk.GetArrayFromImage(sitk.ReadImage(x)) for x in pred_paths
]
# (D, W, H, num_classes, num_seeds)
mask_onehot = jax.nn.one_hot(
jnp.stack(mask_preds, axis=-1), num_classes=num_classes, axis=-2
)
# vote (D, W, H)
mask_pred = jnp.argmax(jnp.sum(mask_onehot, axis=-1), axis=-1).astype(
"uint16"
)
# copy meta data
uid = pred_paths[0].stem.split("_")[0]
volume_mask_true = sitk.ReadImage(
dir_tfds / f"{uid}_mask_preprocessed.nii.gz"
)
volume_mask_pred = sitk.GetImageFromArray(mask_pred)
volume_mask_pred.CopyInformation(volume_mask_true)
# save
out_path = test_dir / f"ensemble_{num_seeds}" / rel_path
out_path.parent.mkdir(parents=True, exist_ok=True)
sitk.WriteImage(
image=volume_mask_pred,
fileName=out_path,
useCompression=True,
)
def evaluate_ensemble_prediction(
dir_path: Path, dir_tfds: Path, num_classes: int, spacing: jnp.ndarray
) -> None:
"""Evaluate the saved predictions from ensemble.
Args:
dir_path: path having predictions.
dir_tfds: path of tfds data, having ground truth.
num_classes: number of classes in labels.
spacing: spacing for voxels.
"""
num_steps = int(dir_path.name.split("_")[1])
uids = [
x.name.split("_")[0] for x in (dir_path / "step_0").glob("*.nii.gz")
]
lst_df_scalar = []
for uid in tqdm(uids, total=len(uids)):
# (D, W, H)
mask_true = sitk.GetArrayFromImage(
sitk.ReadImage(dir_tfds / f"{uid}_mask_preprocessed.nii.gz")
)
# (D, W, H, num_classes)
mask_true = jax.nn.one_hot(mask_true, num_classes=num_classes, axis=-1)
# (1, W, H, D, num_classes)
mask_true = jnp.transpose(mask_true, axes=(2, 1, 0, 3))[None, ...]
pred_paths = [
dir_path / f"step_{i}" / f"{uid}_mask_pred.nii.gz"
for i in range(num_steps)
]
# a list of shape (D, W, H)
mask_preds = [
sitk.GetArrayFromImage(sitk.ReadImage(x)) for x in pred_paths
]
# (D, W, H, num_classes, num_steps)
mask_pred = jax.nn.one_hot(
jnp.stack(mask_preds, axis=-1), num_classes=num_classes, axis=-2
)
# (1, W, H, D, num_classes, num_steps)
mask_pred = jnp.transpose(mask_pred, axes=(2, 1, 0, 3, 4))[None, ...]
# metrics
scalars_jit = jax.vmap(
partial(
get_jit_segmentation_metrics,
mask_true=mask_true,
spacing=spacing,
),
in_axes=-1,
out_axes=-1,
)(mask_pred)
scalars_nonjit = get_non_jit_segmentation_metrics_per_step(
mask_pred=mask_pred,
mask_true=mask_true,
spacing=spacing,
)
scalars = {**scalars_jit, **scalars_nonjit}
# flatten per step
scalars_flatten = {}
for k, v in scalars.items():
for i in range(v.shape[-1]):
scalars_flatten[f"{k}_step_{i}"] = v[..., i]
scalars_flatten[k] = v[..., -1]
scalars = scalars_flatten
scalars = jax.tree_map(lambda x: np.asarray(x).tolist(), scalars)
scalars["uid"] = [uid]
lst_df_scalar.append(pd.DataFrame(scalars))
# assemble metrics
df_scalar = pd.concat(lst_df_scalar)
df_scalar = df_scalar.sort_values("uid")
df_scalar.to_csv(dir_path / "metrics_per_sample.csv", index=False)
# average over samples in the dataset
scalars = df_scalar.drop("uid", axis=1).mean().to_dict()
scalars = {"test_" + k: v for k, v in scalars.items()}
scalars["num_images_in_total"] = len(df_scalar)
with open(dir_path / "mean_metrics.json", "w", encoding="utf-8") as f:
json.dump(scalars, f, sort_keys=True, indent=4)
def main() -> None: # pylint:disable=R0915
"""Main function."""
args = parse_args()
config = OmegaConf.load(args.log_dir / "files" / "config_backup.yaml")
if config.task.name != "diffusion":
raise ValueError("Ensemble is only for diffusion.")
data_config = config.data
dir_tfds = DIR_TFDS_PROCESSED_MAP[data_config.name]
spacing = jnp.array(IMAGE_SPACING_MAP[data_config.name])
num_classes = NUM_CLASSES_MAP[data_config["name"]]
test_dir = args.log_dir / "files" / "test_evaluation"
# no ensemble if 1 seed only
lst_seed_dir = sorted(
test_dir.glob("seed_*/"), key=lambda x: int(x.stem.split("_")[-1])
)
if len(lst_seed_dir) == 1:
logging.info("Ensemble not performed as there is one seed only.")
return
# ensemble
vote_ensemble(test_dir=test_dir, dir_tfds=dir_tfds, num_classes=num_classes)
# evaluate
for dir_path in test_dir.glob("ensemble_*/sample_*_steps"):
logging.info(f"Evaluating ensemble predictions metrics for {dir_path}.")
evaluate_ensemble_prediction(
dir_path=dir_path,
dir_tfds=dir_tfds,
num_classes=num_classes,
spacing=spacing,
)
if __name__ == "__main__":
main()
| 7,072 | 31.296804 | 80 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/run_test.py | """Script to launch evaluation on test sets."""
import argparse
import json
from pathlib import Path
import jax
import numpy as np
from absl import logging
from omegaconf import OmegaConf
from imgx import TEST_SPLIT
from imgx.device import broadcast_to_local_devices
from imgx.exp import Experiment
from imgx.exp.train_state import get_eval_params_and_state_from_ckpt
logging.set_verbosity(logging.INFO)
def get_checkpoint_dir(
log_dir: Path, num_batch: int, metric: str, max_metric: bool
) -> Path:
"""Get the checkpoint directory.
Args:
log_dir: Directory of entire log.
num_batch: number of batches to select checkpoint.
-1 means select the latest one.
metric: metric to maximise or minimise.
max_metric: maximise the metric or not.
Returns:
A directory having arrays.npy and tree.pkl.
Raises:
ValueError: if any file not found.
"""
ckpt_dir = log_dir / "files" / "ckpt"
if num_batch < 0:
# take the one having the best metrics
best_metric_scalar = -np.inf if max_metric else np.inf
for ckpt_i_dir in ckpt_dir.glob("batch_*/"):
if not ckpt_i_dir.is_dir():
continue
# load metric
num_batch_i = int(ckpt_i_dir.stem.split("_")[-1])
metric_path = ckpt_i_dir / "mean_metrics.json"
if not metric_path.exists():
continue
with open(metric_path, encoding="utf-8") as f:
scalars = json.load(f)
if metric not in scalars:
raise ValueError(f"Metrics {metric} not found in {ckpt_i_dir}")
metric_scalar = scalars[metric]
# use the ckpt if it's the first or its metric is better
# if same performance, prefer being trained for longer
if (
(num_batch < 0)
or (max_metric and (best_metric_scalar <= metric_scalar))
or ((not max_metric) and (best_metric_scalar >= metric_scalar))
):
best_metric_scalar = metric_scalar
num_batch = num_batch_i
if num_batch < 0:
raise ValueError(f"Checkpoint not found under {ckpt_dir}")
ckpt_dir = ckpt_dir / f"batch_{num_batch}"
# sanity check
if not ckpt_dir.exists():
raise ValueError(f"Checkpoint directory {ckpt_dir} does not exist.")
array_path = ckpt_dir / "arrays.npy"
if not array_path.exists():
raise ValueError(f"Checkpoint {array_path} does not exist.")
tree_path = ckpt_dir / "tree.pkl"
if not tree_path.exists():
raise ValueError(f"Checkpoint {tree_path} does not exist.")
return ckpt_dir
def parse_args() -> argparse.Namespace:
"""Parse arguments."""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--log_dir",
type=Path,
help="Folder of wandb.",
default=None,
)
parser.add_argument(
"--num_batch",
type=int,
help="Number of batches for identify checkpoint.",
default=-1,
)
parser.add_argument(
"--num_timesteps",
type=int,
help="Number of sampling steps for diffusion.",
default=-1,
)
parser.add_argument(
"--num_seeds",
type=int,
help="Number of seeds for inference.",
default=1,
)
parser.add_argument(
"--metric",
type=str,
help="Metric to select template.",
default="mean_binary_dice_score",
)
parser.add_argument("--max_metric", dest="max_metric", action="store_true")
parser.add_argument("--min_metric", dest="max_metric", action="store_false")
parser.set_defaults(max_metric=True)
args = parser.parse_args()
return args
def main() -> None:
"""Main function."""
args = parse_args()
config = OmegaConf.load(args.log_dir / "files" / "config_backup.yaml")
out_dir = args.log_dir / "files" / "test_evaluation"
if config.task.name == "diffusion":
if args.num_timesteps <= 0:
raise ValueError("num_timesteps required for diffusion.")
config.task.diffusion.num_timesteps = args.num_timesteps
logging.info(f"Sampling {args.num_timesteps} steps.")
ckpt_dir = get_checkpoint_dir(
log_dir=args.log_dir,
num_batch=args.num_batch,
metric=args.metric,
max_metric=args.max_metric,
)
logging.info(f"Using checkpoint {ckpt_dir}.")
# load checkpoint
params, state = get_eval_params_and_state_from_ckpt(
ckpt_dir=ckpt_dir,
use_ema=config.training.ema.use,
)
# prevent any gradient related actions
params = jax.lax.stop_gradient(params)
state = jax.lax.stop_gradient(state)
# inference per seed
for seed in range(args.num_seeds):
logging.info(f"Starting test split evaluation for seed {seed}.")
out_dir_seed = out_dir / f"seed_{seed}"
out_dir_seed.mkdir(parents=True, exist_ok=True)
if config.task.name == "diffusion":
out_dir_seed = out_dir_seed / f"sample_{args.num_timesteps}_steps"
# init exp
# necessary as data set will be exhausted
run = Experiment(config=config)
run.eval_init()
rng = jax.random.PRNGKey(seed)
rng = broadcast_to_local_devices(rng)
run.eval_step(
split=TEST_SPLIT,
params=params,
state=state,
rng=rng,
out_dir=out_dir_seed,
save_predictions=True,
)
logging.info(f"Finished test split evaluation for seed {seed}.")
if __name__ == "__main__":
main()
| 5,684 | 30.236264 | 80 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/math_util.py | """Module for math functions."""
import jax
import jax.numpy as jnp
def logits_to_mask(x: jnp.ndarray, axis: int) -> jnp.ndarray:
"""Transform logits to one hot mask.
The one will be on the class having largest logit.
Args:
x: logits.
axis: axis of num_classes.
Returns:
One hot probabilities.
"""
return jax.nn.one_hot(
x=jnp.argmax(x, axis=axis),
num_classes=x.shape[axis],
axis=axis,
)
| 471 | 18.666667 | 61 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/run_valid.py | """Script to launch evaluation on validation tests."""
import argparse
from pathlib import Path
from typing import List
import jax
from absl import logging
from omegaconf import OmegaConf
from imgx import VALID_SPLIT
from imgx.device import broadcast_to_local_devices
from imgx.exp import Experiment
from imgx.exp.train_state import get_eval_params_and_state_from_ckpt
logging.set_verbosity(logging.INFO)
def get_checkpoint_dirs(
log_dir: Path,
) -> List[Path]:
"""Get the directory of all checkpoints.
Args:
log_dir: Directory of entire log.
Returns:
A list of directories having arrays.npy and tree.pkl.
Raises:
ValueError: if any file not found.
"""
ckpt_dir = log_dir / "files" / "ckpt"
ckpt_dirs = []
num_batches = []
for ckpt_i_dir in ckpt_dir.glob("batch_*/"):
if not ckpt_i_dir.is_dir():
continue
array_path = ckpt_i_dir / "arrays.npy"
if not array_path.exists():
continue
tree_path = ckpt_i_dir / "tree.pkl"
if not tree_path.exists():
continue
num_batches.append(int(ckpt_i_dir.stem.split("_")[-1]))
ckpt_dirs.append(ckpt_i_dir)
ckpt_dirs = [
x[1] for x in sorted(zip(num_batches, ckpt_dirs), reverse=True)
]
return ckpt_dirs
def parse_args() -> argparse.Namespace:
"""Parse arguments."""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--log_dir",
type=Path,
help="Folder of wandb.",
default=None,
)
parser.add_argument(
"--num_timesteps",
type=int,
help="Number of sampling steps for diffusion.",
default=-1,
)
args = parser.parse_args()
return args
def main() -> None:
"""Main function."""
args = parse_args()
config = OmegaConf.load(args.log_dir / "files" / "config_backup.yaml")
if config.task.name == "diffusion":
if args.num_timesteps <= 0:
raise ValueError("num_timesteps required for diffusion.")
config.task.diffusion.num_timesteps = args.num_timesteps
ckpt_dirs = get_checkpoint_dirs(
log_dir=args.log_dir,
)
# init exp
run = Experiment(config=config)
run.eval_init()
for ckpt_dir in ckpt_dirs:
# load checkpoint
params, state = get_eval_params_and_state_from_ckpt(
ckpt_dir=ckpt_dir,
use_ema=config.training.ema.use,
)
# prevent any gradient related actions
params = jax.lax.stop_gradient(params)
state = jax.lax.stop_gradient(state)
# inference
logging.info(f"Starting valid split evaluation for {ckpt_dir}.")
rng = jax.random.PRNGKey(config.seed)
rng = broadcast_to_local_devices(rng)
run.eval_step(
split=VALID_SPLIT,
params=params,
state=state,
rng=rng,
out_dir=ckpt_dir,
save_predictions=False,
)
# clean up
del params
del state
if __name__ == "__main__":
main()
| 3,106 | 24.891667 | 74 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/__init__.py | """A Jax-based DL toolkit for biomedical and bioinformatics applications."""
from pathlib import Path
# machine error
EPS = 1.0e-5
NAN_MASK = "nan_mask"
# path for all non-tensorflow-dataset data sets
DIR_DATA = Path("datasets")
# splits
TRAIN_SPLIT = "train"
VALID_SPLIT = "valid"
TEST_SPLIT = "test"
# jax device
# one model can be stored across multiple shards/slices
# given 8 devices, it can be grouped into 4x2
# if num_devices_per_replica = 2, then one model is stored across 2 devices
# so the replica_axis would be of size 4
SHARD_AXIS = "shard_axis"
REPLICA_AXIS = "replica_axis"
# data dict keys
UID = "uid"
IMAGE = "image"
LABEL = "label"
| 657 | 21.689655 | 76 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/run_train.py | """Script to launch training."""
from pathlib import Path
import hydra
import jax
import wandb
from absl import logging
from omegaconf import DictConfig, OmegaConf
from imgx import VALID_SPLIT
from imgx.config import flatten_dict
from imgx.exp import Experiment
from imgx.exp.train_state import get_eval_params_and_state, save_ckpt
logging.set_verbosity(logging.INFO)
def set_debug_config(config: DictConfig) -> DictConfig:
"""Modify config for debugging purpose.
Args:
config: original config.
Returns:
modified config.
Raises:
ValueError: if data set is unknown.
"""
# reduce all model size
config.model.unet3d.num_channels = (1, 2, 4, 8)
config.model.unet3d_slice.num_channels = (1, 2, 4, 8)
config.model.unet3d_time.num_channels = (1, 2, 4, 8)
config.model.unet3d_slice_time.num_channels = (1, 2, 4, 8)
# make training shorter
n_devices = jax.local_device_count()
config.data.max_num_samples = 11
config.training.batch_size_per_replica = 2
config.training.batch_size = (
n_devices * config.training.batch_size_per_replica
)
config.training.max_num_samples = 100
# make logging more frequent
config.logging.eval_freq = 1
config.logging.save_freq = 4
return config
def get_batch_size_per_step(config: DictConfig) -> int:
"""Return the actual number of samples per step.
Args:
config: total config.
Returns:
Number of samples across all devices.
"""
if "batch_size_per_replica" not in config["training"]:
logging.warning("Batch size per step is not accurate.")
return 1
num_devices = jax.local_device_count()
num_devices_per_replica = config["training"]["num_devices_per_replica"]
num_models = num_devices // num_devices_per_replica
batch_size = config["training"]["batch_size_per_replica"] * num_models
return batch_size
@hydra.main(
version_base=None, config_path="conf", config_name="config_segmentation"
)
def main( # pylint:disable=too-many-statements
config: DictConfig,
) -> None:
"""Main function.
Args:
config: config loaded from yaml.
"""
# update config
if config.debug:
config = set_debug_config(config)
# init wandb
files_dir = None
if config.logging.wandb.project:
wandb_run = wandb.init(
project=config.logging.wandb.project,
entity=config.logging.wandb.entity,
config=flatten_dict(dict(config)),
)
files_dir = Path(wandb_run.settings.files_dir)
# backup config
OmegaConf.save(config=config, f=files_dir / "config_backup.yaml")
# init devices
devices = jax.local_devices()
if config.training.num_devices_per_replica != 1:
raise ValueError("Distributed training not supported.")
logging.info(f"Local devices are: {devices}")
# init exp
run = Experiment(config=config)
train_state = run.train_init()
run.eval_init()
logging.info("Start training.")
batch_size_per_step = get_batch_size_per_step(config)
max_num_steps = config.training.max_num_samples // batch_size_per_step
for i in range(1, 1 + max_num_steps):
# train step
train_state, train_scalars = run.train_step(
train_state=train_state,
)
train_scalars = {"train_" + k: v for k, v in train_scalars.items()}
scalars = {
"num_samples": i * batch_size_per_step,
**train_scalars,
}
to_save_ckpt = (
(i > 0)
and (i % config.logging.save_freq == 0)
and (files_dir is not None)
)
# evaluate if saving ckpt or time to evaluate
to_eval = to_save_ckpt or (i % config.logging.eval_freq == 0)
if to_save_ckpt and (files_dir is not None):
ckpt_dir = files_dir / "ckpt" / f"batch_{i}"
else:
ckpt_dir = None
if to_eval and config.eval:
# TODO on TPU evaluation causes OOM
params, state = get_eval_params_and_state(train_state)
val_scalars = run.eval_step(
split=VALID_SPLIT,
params=params,
state=state,
rng=jax.random.PRNGKey(config.seed),
out_dir=ckpt_dir,
save_predictions=False,
)
val_scalars = {"valid_" + k: v for k, v in val_scalars.items()}
scalars = {
**scalars,
**val_scalars,
}
if config.logging.wandb.project:
wandb.log(scalars)
scalars = {
k: v if isinstance(v, int) else f"{v:.2e}"
for k, v in scalars.items()
}
logging.info(f"Batch {i}: {scalars}")
# save checkpoint and metrics
if ckpt_dir is not None:
save_ckpt(
train_state=train_state,
ckpt_dir=ckpt_dir,
)
# backup config every time
OmegaConf.save(config=config, f=ckpt_dir / "config.yaml")
logging.info(f"Checkpoint saved at {ckpt_dir}")
if __name__ == "__main__":
main() # pylint: disable=no-value-for-parameter
| 5,235 | 29.619883 | 76 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/device.py | """Module to handle multi-devices."""
from typing import Optional, Tuple, Union
import chex
import jax
import jax.numpy as jnp
def broadcast_to_local_devices(value: chex.ArrayTree) -> chex.ArrayTree:
"""Broadcasts an object to all local devices.
Args:
value: value to be broadcast.
Returns:
broadcast value.
"""
devices = jax.local_devices()
return jax.tree_map(
lambda v: jax.device_put_sharded(len(devices) * [v], devices), value
)
def get_first_replica_values(value: chex.ArrayTree) -> chex.ArrayTree:
"""Gets values from the first replica.
Args:
value: broadcast value.
Returns:
value of the first replica.
"""
return jax.tree_map(lambda x: x[0], value)
def bind_rng_to_host_or_device(
rng: jnp.ndarray,
bind_to: Optional[str] = None,
axis_name: Optional[Union[str, Tuple[str, ...]]] = None,
) -> jnp.ndarray:
"""Binds a rng to the host or device.
https://github.com/google-research/scenic/blob/main/scenic/train_lib/train_utils.py#L577
Must be called from within a pmapped function. Note that when binding to
"device", we also bind the rng to hosts, as we fold_in the rng with
axis_index, which is unique for devices across all hosts.
Args:
rng: A jax.random.PRNGKey.
bind_to: Must be one of the 'host' or 'device'. None means no binding.
axis_name: The axis of the devices we are binding rng across, necessary
if bind_to is device.
Returns:
jax.random.PRNGKey specialized to host/device.
"""
if bind_to is None:
return rng
if bind_to == "host":
return jax.random.fold_in(rng, jax.process_index())
if bind_to == "device":
return jax.random.fold_in(rng, jax.lax.axis_index(axis_name))
raise ValueError(
"`bind_to` should be one of the `[None, 'host', 'device']`"
)
def shard(
pytree: chex.ArrayTree,
num_replicas: int,
) -> chex.ArrayTree:
"""Reshapes all arrays in the pytree to add a leading shard dimension.
We assume that all arrays in the pytree have leading dimension
divisible by num_devices_per_replica.
Args:
pytree: A pytree of arrays to be sharded.
num_replicas: number of model replicas.
Returns:
Sharded data.
"""
def _shard_array(array: jnp.ndarray) -> jnp.ndarray:
return array.reshape((num_replicas, -1) + array.shape[1:])
return jax.tree_map(_shard_array, pytree)
def unshard(pytree: chex.ArrayTree) -> chex.ArrayTree:
"""Reshapes arrays from [ndev, bs, ...] to [host_bs, ...].
Args:
pytree: A pytree of arrays to be sharded.
Returns:
Sharded data.
"""
def _unshard_array(array: jnp.ndarray) -> jnp.ndarray:
ndev, bs = array.shape[:2]
return array.reshape((ndev * bs,) + array.shape[2:])
return jax.tree_map(_unshard_array, pytree)
def is_tpu() -> bool:
"""Return true if the device is tpu.
Returns:
True if tpu.
"""
return jax.local_devices()[0].platform == "tpu"
| 3,095 | 25.689655 | 92 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/metric/area.py | """Metrics to measure foreground area."""
import jax.numpy as jnp
def class_proportion(mask: jnp.ndarray) -> jnp.ndarray:
"""Calculate proportion per class.
Args:
mask: shape = (batch, d1, ..., dn, num_classes).
Returns:
Proportion, shape = (batch, num_classes).
"""
reduce_axes = tuple(range(1, mask.ndim - 1))
volume = jnp.float32(jnp.prod(jnp.array(mask.shape[1:-1])))
sqrt_volume = jnp.sqrt(volume)
mask = jnp.float32(mask)
# attempt to avoid over/underflow
return jnp.sum(mask / sqrt_volume, axis=reduce_axes) / sqrt_volume
| 590 | 27.142857 | 70 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/metric/distribution.py | """Metric functions for probability distributions."""
import jax.numpy as jnp
def normal_kl(
p_mean: jnp.ndarray,
p_log_variance: jnp.ndarray,
q_mean: jnp.ndarray,
q_log_variance: jnp.ndarray,
) -> jnp.ndarray:
"""Compute the KL divergence between two 1D normal distributions.
Although the inputs are arrays, each value is considered independently.
This function is not symmetric.
Input array shapes should be broadcastable.
Args:
p_mean: mean of distribution p.
p_log_variance: log variance of distribution p.
q_mean: mean of distribution q.
q_log_variance: log variance of distribution q.
Returns:
KL divergence.
"""
return 0.5 * (
-1.0
+ q_log_variance
- p_log_variance
+ jnp.exp(p_log_variance - q_log_variance)
+ ((p_mean - q_mean) ** 2) * jnp.exp(-q_log_variance)
)
def approx_standard_normal_cdf(x: jnp.ndarray) -> jnp.ndarray:
"""Approximate cumulative distribution function of standard normal.
if x ~ Normal(mean, var), then cdf(z) = p(x <= z)
https://www.aimspress.com/article/doi/10.3934/math.2022648#b13
https://www.jstor.org/stable/2346872
Args:
x: array of any shape with any float values.
Returns:
CDF estimation.
"""
return 0.5 * (
1.0 + jnp.tanh(jnp.sqrt(2.0 / jnp.pi) * (x + 0.044715 * x**3))
)
def discretized_gaussian_log_likelihood(
x: jnp.ndarray,
mean: jnp.ndarray,
log_variance: jnp.ndarray,
x_delta: float = 1.0 / 255.0,
x_bound: float = 0.999,
) -> jnp.ndarray:
"""Log-likelihood of a normal distribution discretizing to an image.
Args:
x: target image, with value inside normalized in [-1, 1].
mean: normal distribution mean.
log_variance: log of distribution variance.
x_delta: discretization step, used to estimate probability.
x_bound: values with abs > x_bound are calculated differently.
Returns:
Discretized log likelihood over 2*delta.
"""
log_scales = 0.5 * log_variance
centered_x = x - mean
inv_stdv = jnp.exp(-log_scales)
# let y be a variable
# cdf(z+delta) = p(y <= z+delta)
plus_in = inv_stdv * (centered_x + x_delta)
cdf_plus = approx_standard_normal_cdf(plus_in)
# log( p(y <= z+delta) )
log_cdf_plus = jnp.log(cdf_plus.clip(min=1e-12))
# cdf(z-delta) = p(y <= z-delta)
minus_in = inv_stdv * (centered_x - x_delta)
cdf_minus = approx_standard_normal_cdf(minus_in)
# log( 1-p(y <= z-delta) ) = log( p(y > z-delta) )
log_one_minus_cdf_minus = jnp.log((1.0 - cdf_minus).clip(min=1e-12))
# p(z-delta < y <= z+delta)
cdf_delta = cdf_plus - cdf_minus
log_cdf_delta = jnp.log(cdf_delta.clip(min=1e-12))
# if x < -0.999, log( p(y <= z+delta) )
# if x > 0.999, log( p(y > z-delta) )
# if -0.999 <= x <= 0.999, log( p(z-delta < y <= z+delta) )
return jnp.where(
x < -x_bound,
log_cdf_plus,
jnp.where(x > x_bound, log_one_minus_cdf_minus, log_cdf_delta),
)
| 3,097 | 29.07767 | 75 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/metric/dice.py | """Metric functions for image segmentation."""
import jax.numpy as jnp
def dice_score(
mask_pred: jnp.ndarray,
mask_true: jnp.ndarray,
) -> jnp.ndarray:
"""Soft Dice score, larger is better.
Args:
mask_pred: soft mask with probabilities, (batch, ..., num_classes).
mask_true: one hot targets, (batch, ..., num_classes).
Returns:
Dice score of shape (batch, num_classes).
"""
reduce_axis = tuple(range(mask_pred.ndim))[1:-1]
numerator = 2.0 * jnp.sum(mask_pred * mask_true, axis=reduce_axis)
denominator = jnp.sum(mask_pred + mask_true, axis=reduce_axis)
return jnp.where(
condition=denominator > 0,
x=numerator / denominator,
y=jnp.nan,
)
def iou(
mask_pred: jnp.ndarray,
mask_true: jnp.ndarray,
) -> jnp.ndarray:
"""IOU (Intersection Over Union), or Jaccard index.
Args:
mask_pred: binary mask of predictions, (batch, ..., num_classes).
mask_true: one hot targets, (batch, ..., num_classes).
Returns:
IoU of shape (batch, num_classes).
"""
reduce_axis = tuple(range(mask_pred.ndim))[1:-1]
numerator = jnp.sum(mask_pred * mask_true, axis=reduce_axis)
sum_mask = jnp.sum(mask_pred + mask_true, axis=reduce_axis)
denominator = sum_mask - numerator
return jnp.where(
condition=sum_mask > 0, x=numerator / denominator, y=jnp.nan
)
| 1,407 | 28.333333 | 75 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/metric/centroid.py | """Metric centroid distance."""
from typing import Optional, Tuple
import jax.numpy as jnp
def get_coordinate_grid(shape: Tuple[int, ...]) -> jnp.ndarray:
"""Generate a grid with given shape.
This function is not jittable as the output depends on the value of shapes.
Args:
shape: shape of the grid, (d1, ..., dn).
Returns:
grid: grid coordinates, of shape (n, d1, ..., dn).
grid[:, i1, ..., in] = [i1, ..., in]
"""
return jnp.stack(
jnp.meshgrid(
*(jnp.arange(d) for d in shape),
indexing="ij",
),
axis=0,
dtype=jnp.float32,
)
def get_centroid(
mask: jnp.ndarray,
grid: jnp.ndarray,
) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Calculate the centroid of the mask.
Args:
mask: boolean mask of shape = (batch, d1, ..., dn, num_classes)
grid: shape = (n, d1, ..., dn)
Returns:
centroid of shape = (batch, n, num_classes).
nan mask of shape = (batch, num_classes).
"""
mask_reduce_axes = tuple(range(1, mask.ndim - 1))
grid_reduce_axes = tuple(range(2, mask.ndim))
# (batch, n, d1, ..., dn, num_classes)
masked_grid = jnp.expand_dims(mask, axis=1) * jnp.expand_dims(
grid, axis=(0, -1)
)
# (batch, n, num_classes)
numerator = jnp.sum(masked_grid, axis=grid_reduce_axes)
# (batch, num_classes)
summed_mask = jnp.sum(mask, axis=mask_reduce_axes)
# (batch, 1, num_classes)
denominator = summed_mask[:, None, :]
# if mask is not empty return real centroid, else nan
centroid = jnp.where(
condition=denominator > 0, x=numerator / denominator, y=jnp.nan
)
return centroid, summed_mask == 0
def centroid_distance(
mask_true: jnp.ndarray,
mask_pred: jnp.ndarray,
grid: jnp.ndarray,
spacing: Optional[jnp.ndarray] = None,
) -> jnp.ndarray:
"""Calculate the L2-distance between two centroids.
Args:
mask_true: shape = (batch, d1, ..., dn, num_classes).
mask_pred: shape = (batch, d1, ..., dn, num_classes).
grid: shape = (n, d1, ..., dn).
spacing: spacing of pixel/voxels along each dimension, (n,).
Returns:
distance, shape = (batch, num_classes).
"""
# centroid (batch, n, num_classes) nan_mask (batch, num_classes)
centroid_true, nan_mask_true = get_centroid(
mask=mask_true,
grid=grid,
)
centroid_pred, nan_mask_pred = get_centroid(
mask=mask_pred,
grid=grid,
)
nan_mask = nan_mask_true | nan_mask_pred
if spacing is not None:
centroid_true = jnp.where(
condition=nan_mask[:, None, :],
x=jnp.nan,
y=centroid_true * spacing[None, :, None],
)
centroid_pred = jnp.where(
condition=nan_mask[:, None, :],
x=jnp.nan,
y=centroid_pred * spacing[None, :, None],
)
# return nan if the centroid cannot be defined for one sample with one class
return jnp.where(
condition=nan_mask,
x=jnp.nan,
y=jnp.linalg.norm(centroid_true - centroid_pred, axis=1),
)
| 3,160 | 28.542056 | 80 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/datasets/augmentation.py | """Image augmentation functions."""
from functools import partial
from typing import Callable, Dict, Sequence
import jax
import numpy as np
from jax import numpy as jnp
from jax.scipy.ndimage import map_coordinates
from omegaconf import DictConfig
from imgx import IMAGE, LABEL
from imgx.datasets import FOREGROUND_RANGE, IMAGE_SHAPE_MAP
from imgx.metric.centroid import get_coordinate_grid
def get_2d_rotation_matrix(
radians: jnp.ndarray,
) -> jnp.ndarray:
"""Return 2d rotation matrix given radians.
The affine transformation applies as following:
[x, = [[* * 0] * [x,
y, [* * 0] y,
1] [0 0 1]] 1]
Args:
radians: tuple of one values, correspond to xy planes.
Returns:
Rotation matrix of shape (3, 3).
"""
sin, cos = jnp.sin(radians[0]), jnp.cos(radians[0])
return jnp.array(
[
[cos, -sin, 0.0],
[sin, cos, 0.0],
[0.0, 0.0, 1.0],
]
)
def get_3d_rotation_matrix(
radians: jnp.ndarray,
) -> jnp.ndarray:
"""Return 3d rotation matrix given radians.
The affine transformation applies as following:
[x, = [[* * * 0] * [x,
y, [* * * 0] y,
z, [* * * 0] z,
1] [0 0 0 1]] 1]
Args:
radians: tuple of three values, correspond to yz, xz, xy planes.
Returns:
Rotation matrix of shape (4, 4).
"""
affine = jnp.eye(4)
# rotation of yz around x-axis
sin, cos = jnp.sin(radians[0]), jnp.cos(radians[0])
affine_ax = jnp.array(
[
[1.0, 0.0, 0.0, 0.0],
[0.0, cos, -sin, 0.0],
[0.0, sin, cos, 0.0],
[0.0, 0.0, 0.0, 1.0],
]
)
affine = jnp.matmul(affine_ax, affine)
# rotation of zx around y-axis
sin, cos = jnp.sin(radians[1]), jnp.cos(radians[1])
affine_ax = jnp.array(
[
[cos, 0.0, sin, 0.0],
[0.0, 1.0, 0.0, 0.0],
[-sin, 0.0, cos, 0.0],
[0.0, 0.0, 0.0, 1.0],
]
)
affine = jnp.matmul(affine_ax, affine)
# rotation of xy around z-axis
sin, cos = jnp.sin(radians[2]), jnp.cos(radians[2])
affine_ax = jnp.array(
[
[cos, -sin, 0.0, 0.0],
[sin, cos, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0],
]
)
affine = jnp.matmul(affine_ax, affine)
return affine
def get_rotation_matrix(
radians: jnp.ndarray,
) -> jnp.ndarray:
"""Return rotation matrix given radians.
Args:
radians: correspond to rotate around each axis.
Returns:
Rotation matrix of shape (n+1, n+1).
Raises:
ValueError: if not 2D or 3D.
"""
if radians.size == 1:
return get_2d_rotation_matrix(radians)
if radians.size == 3:
return get_3d_rotation_matrix(radians)
raise ValueError("Only support 2D/3D rotations.")
def get_translation_matrix(
shifts: jnp.ndarray,
) -> jnp.ndarray:
"""Return 3d translation matrix given shifts.
For example, the 3D affine transformation applies as following:
[x, = [[1 0 0 *] * [x,
y, [0 1 0 *] y,
z, [0 0 1 *] z,
1] [0 0 0 1]] 1]
Args:
shifts: correspond to each axis shift.
Returns:
Translation matrix of shape (n+1, n+1).
"""
ndims = shifts.size
shifts = jnp.concatenate([shifts, jnp.array([1.0])])
return jnp.concatenate(
[
jnp.eye(ndims + 1, ndims),
shifts[:, None],
],
axis=1,
)
def get_scaling_matrix(
scales: jnp.ndarray,
) -> jnp.ndarray:
"""Return scaling matrix given scales.
For example, the 3D affine transformation applies as following:
[x, = [[* 0 0 0] * [x,
y, [0 * 0 0] y,
z, [0 0 * 0] z,
1] [0 0 0 1]] 1]
Args:
scales: correspond to each axis scaling.
Returns:
Affine matrix of shape (n+1, n+1).
"""
scales = jnp.concatenate([scales, jnp.array([1.0])])
return jnp.diag(scales)
def get_affine_matrix(
radians: jnp.ndarray,
shifts: jnp.ndarray,
scales: jnp.ndarray,
) -> jnp.ndarray:
"""Return an affine matrix from parameters.
The matrix is not squared, as the last row is not needed. For rotation,
translation, and scaling matrix, they are kept for composition purpose.
For example, the 3D affine transformation applies as following:
[x, = [[* * * *] * [x,
y, [* * * *] y,
z, [* * * *] z,
1] [0 0 0 1]] 1]
Args:
radians: correspond to rotate around each axis.
shifts: correspond to each axis shift.
scales: correspond to each axis scaling.
Returns:
Affine matrix of shape (n+1, n+1).
"""
affine_rot = get_rotation_matrix(radians)
affine_shift = get_translation_matrix(shifts)
affine_scale = get_scaling_matrix(scales)
return jnp.matmul(affine_shift, jnp.matmul(affine_scale, affine_rot))
def batch_get_random_affine_matrix(
key: jax.random.PRNGKey,
max_rotation: jnp.ndarray,
min_translation: jnp.ndarray,
max_translation: jnp.ndarray,
max_scaling: jnp.ndarray,
) -> jnp.ndarray:
"""Get a batch of random affine matrices.
Args:
key: jax random key.
max_rotation: maximum rotation in radians.
min_translation: minimum translation in pixel/voxels.
max_translation: maximum translation in pixel/voxels.
max_scaling: maximum scaling difference in pixel/voxels.
Returns:
Affine matrix of shape (batch, n+1, n+1).
"""
key_radian, key_shift, key_scale = jax.random.split(key, num=3)
radians = jax.random.uniform(
key=key_radian,
shape=max_rotation.shape,
minval=-max_rotation,
maxval=max_rotation,
)
shifts = jax.random.uniform(
key=key_shift,
shape=max_translation.shape,
minval=min_translation,
maxval=max_translation,
)
scales = jax.random.uniform(
key=key_scale,
shape=max_scaling.shape,
minval=1.0 - max_scaling,
maxval=1.0 + max_scaling,
)
# vmap on first axis, which is a batch
return jax.vmap(get_affine_matrix)(radians, shifts, scales)
def apply_affine_to_grid(
grid: jnp.ndarray, affine_matrix: jnp.ndarray
) -> jnp.ndarray:
"""Apply affine matrix to grid.
The grid has non-negative coordinates, means the origin is at a corner.
Need to shift the grid such that the origin is at center,
then apply affine, then shift the origin back.
Args:
grid: grid coordinates, of shape (n, d1, ..., dn).
grid[:, i1, ..., in] = [i1, ..., in]
affine_matrix: shape (n+1, n+1)
Returns:
Grid with updated coordinates.
"""
# (n+1, d1, ..., dn)
extended_grid = jnp.concatenate(
[grid, jnp.ones((1,) + grid.shape[1:])], axis=0
)
# shift to center
shift = (jnp.array(grid.shape[1:]) - 1) / 2
shift_matrix = get_translation_matrix(-shift) # (n+1, n+1)
# (n+1, n+1) * (n+1, d1, ..., dn) = (n+1, d1, ..., dn)
extended_grid = jnp.einsum("ji,i...->j...", shift_matrix, extended_grid)
# affine
# (n+1, n+1) * (n+1, d1, ..., dn) = (n+1, d1, ..., dn)
extended_grid = jnp.einsum("ji,i...->j...", affine_matrix, extended_grid)
# shift to corner
shift_matrix = get_translation_matrix(shift)[:-1, :] # (n, n+1)
# (n, n+1) * (n+1, d1, ..., dn) = (n, d1, ..., dn)
extended_grid = jnp.einsum("ji,i...->j...", shift_matrix, extended_grid)
return extended_grid
def batch_apply_affine_to_grid(
grid: jnp.ndarray, affine_matrix: jnp.ndarray
) -> jnp.ndarray:
"""Apply batch of affine matrix to grid.
Args:
grid: grid coordinates, of shape (n, d1, ..., dn).
grid[:, i1, ..., in] = [i1, ..., in]
affine_matrix: shape (batch, n+1, n+1).
Returns:
Grid with updated coordinates, shape (batch, n, d1, ..., dn).
"""
return jax.vmap(apply_affine_to_grid, in_axes=(None, 0))(
grid, affine_matrix
)
def batch_resample_image_label(
input_dict: Dict[str, jnp.ndarray],
grid: jnp.ndarray,
) -> Dict[str, jnp.ndarray]:
"""Apply batch of affine matrix to image and label.
Args:
input_dict: dict having image and label.
image shape (batch, d1, ..., dn)
grid: grid coordinates, of shape (batch, n, d1, ..., dn).
Returns:
Updated image and label, of same shape.
"""
resample_image_vmap = jax.vmap(
partial(
map_coordinates,
order=1,
mode="constant",
cval=0.0,
),
in_axes=(0, 0),
)
resample_label_vmap = jax.vmap(
partial(
map_coordinates,
order=0,
mode="constant",
cval=0.0,
),
in_axes=(0, 0),
)
image = resample_image_vmap(
input_dict[IMAGE],
grid,
)
label = resample_label_vmap(
input_dict[LABEL],
grid,
)
return {IMAGE: image, LABEL: label}
def batch_random_affine_transform(
key: jax.random.PRNGKey,
input_dict: Dict[str, jnp.ndarray],
grid: jnp.ndarray,
max_rotation: jnp.ndarray,
max_translation: jnp.ndarray,
max_scaling: jnp.ndarray,
) -> Dict[str, jnp.ndarray]:
"""Keep image and label only.
TODO: image does not have channel.
Args:
key: jax random key.
input_dict: dict having image and label.
image shape (batch, d1, ..., dn)
grid: grid coordinates, of shape (n, d1, ..., dn).
grid[:, i1, ..., in] = [i1, ..., in]
max_rotation: maximum rotation in radians, shape = (batch, ...).
max_translation: maximum translation in pixel/voxels,
shape = (batch, ...).
max_scaling: maximum scaling difference in pixel/voxels,
shape = (batch, ...).
Returns:
Augmented dict having image and label.
image and label all have shapes (batch, d1, ..., dn).
"""
# (batch, ...)
batch_size = input_dict[IMAGE].shape[0]
max_rotation = jnp.tile(max_rotation[None, ...], (batch_size, 1))
max_translation = jnp.tile(max_translation[None, ...], (batch_size, 1))
min_translation = -max_translation
max_scaling = jnp.tile(max_scaling[None, ...], (batch_size, 1))
# refine translation to avoid remove classes
shape = jnp.array(input_dict[LABEL].shape[1:])
shape = jnp.tile(shape[None, ...], (batch_size, 1))
max_translation = jnp.minimum(
max_translation, shape - 1 - input_dict[FOREGROUND_RANGE][..., -1]
)
min_translation = jnp.maximum(
min_translation, -input_dict[FOREGROUND_RANGE][..., 0]
)
# (batch, n+1, n+1)
affine_matrix = batch_get_random_affine_matrix(
key=key,
max_rotation=max_rotation,
min_translation=min_translation,
max_translation=max_translation,
max_scaling=max_scaling,
)
# (batch, n, d1, ..., dn)
grid = batch_apply_affine_to_grid(grid=grid, affine_matrix=affine_matrix)
return batch_resample_image_label(
input_dict=input_dict,
grid=grid,
)
def build_aug_fn_from_fns(
fns: Sequence[Callable],
) -> Callable:
"""Combine a list of data augmentation functions.
Args:
fns: entire config.
Returns:
A data augmentation function.
"""
def aug_fn(
key: jax.random.PRNGKey, input_dict: Dict[str, jnp.ndarray]
) -> Dict[str, jnp.ndarray]:
keys = jax.random.split(key, num=len(fns))
for k, fn in zip(keys, fns):
input_dict = fn(k, input_dict)
return input_dict
return aug_fn
def build_aug_fn_from_config(
config: DictConfig,
) -> Callable:
"""Return a data augmentation function.
Args:
config: entire config.
Returns:
A data augmentation function.
"""
data_config = config.data
dataset_name = data_config["name"]
image_shape = IMAGE_SHAPE_MAP[dataset_name]
da_config = data_config[dataset_name]["data_augmentation"]
grid = get_coordinate_grid(shape=image_shape)
max_rotation = np.array(da_config["max_rotation"])
max_translation = np.array(da_config["max_translation"])
max_scaling = np.array(da_config["max_scaling"])
aug_fns = [
partial(
batch_random_affine_transform,
grid=grid,
max_rotation=max_rotation,
max_translation=max_translation,
max_scaling=max_scaling,
)
]
if len(aug_fns) == 1:
return aug_fns[0]
return build_aug_fn_from_fns(aug_fns)
| 12,812 | 26.793926 | 77 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/datasets/iterator.py | """Dataset related classes and functions."""
from functools import partial
from typing import Callable, Dict, Iterator, Optional, Tuple
import jax
import jax.numpy as jnp
import jax.scipy
import jmp
import tensorflow as tf
import tensorflow_datasets as tfds
from absl import logging
from omegaconf import DictConfig
from imgx import IMAGE, LABEL, TEST_SPLIT, TRAIN_SPLIT, VALID_SPLIT
from imgx.datasets import FOREGROUND_RANGE, Dataset
from imgx.datasets.util import (
get_foreground_range,
maybe_pad_batch,
tf_to_numpy,
)
from imgx.device import shard
def create_image_label_dict_from_dict(
x: Dict[str, tf.Tensor],
) -> Dict[str, tf.Tensor]:
"""Create a dict from inputs.
Args:
x: dict having image, label, and other tensors.
Returns:
Dict having image and label.
"""
return {
IMAGE: x[IMAGE],
LABEL: x[LABEL],
FOREGROUND_RANGE: get_foreground_range(x[LABEL]),
}
def load_split_from_image_tfds_builder(
builder: tfds.core.DatasetBuilder,
batch_size: int,
split: str,
augment_train_example_fn: Optional[Callable] = None,
shuffle_buffer_size: Optional[int] = None,
shuffle_seed: int = 0,
max_num_samples: int = -1,
dtype: jnp.dtype = jnp.float32,
) -> Tuple[tf.data.Dataset, int]:
"""Loads a split from a TensorFlow Dataset compatible builder.
https://github.com/google-research/scenic/blob/main/scenic/dataset_lib/dataset_utils.py
Args:
builder: A TFDS compatible dataset builder.
batch_size: The batch size returned by the data pipeline.
split: Name of the split to be loaded.
augment_train_example_fn: A function that given a train example
returns the augmented example. Note that this function is applied
AFTER caching and repeat to get true randomness.
shuffle_buffer_size: Size of the tf.data.dataset shuffle buffer.
shuffle_seed: Seed for shuffling the training data.
max_num_samples: maximum number of samples to consider.
dtype: data type for images.
Returns:
- A repeated dataset.
- Number of steps after batch if the dataset is not repeated,
returns -1 for training.
"""
is_train = split == TRAIN_SPLIT
# Prepare arguments.
shuffle_buffer_size = shuffle_buffer_size or (8 * batch_size)
# Download data.
builder.download_and_prepare()
# Each host is responsible for a fixed subset of data.
if is_train:
split = tfds.even_splits(split, jax.process_count())[
jax.process_index()
]
dataset = builder.as_dataset(
split=split,
shuffle_files=False,
)
# Shrink data set if required
if max_num_samples > 0:
logging.info(
f"Taking first {max_num_samples} data samples for split {split}."
)
dataset = dataset.take(max_num_samples)
# Caching.
dataset = dataset.cache()
num_steps = -1 # not set for training
if is_train:
# First repeat then batch.
dataset = dataset.repeat()
# Augmentation should be done after repeat for true randomness.
if augment_train_example_fn:
dataset = dataset.map(
augment_train_example_fn,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
)
# Shuffle after augmentation to avoid loading non-augmented images into
# buffer.
dataset = dataset.shuffle(shuffle_buffer_size, seed=shuffle_seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
else:
# First batch then repeat.
dataset = dataset.batch(batch_size, drop_remainder=False)
num_steps = tf.data.experimental.cardinality(dataset).numpy()
if split == VALID_SPLIT:
# repeat dataset for validation
dataset = dataset.repeat()
# NOTE: You may be tempted to move the casting earlier on in the pipeline,
# but for bf16 some operations will end up silently placed on the TPU and
# this causes stalls while TF and JAX battle for the accelerator.
if dtype != jnp.float32:
def cast_fn(batch: Dict[str, jnp.ndarray]) -> Dict[str, jnp.ndarray]:
batch[IMAGE] = tf.cast(batch[IMAGE], tf.dtypes.as_dtype(dtype))
return batch
dataset = dataset.map(cast_fn)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
return dataset, num_steps
def get_image_iterator(
builder: tfds.core.DatasetBuilder,
split: str,
is_train: bool,
batch_size_per_replica: int,
num_replicas: int,
shuffle_seed: int,
max_num_samples: int,
dtype: jnp.dtype = jnp.float32,
) -> Tuple[Iterator, int]:
"""Returns iterator from builder.
Args:
builder: data set builder.
split: split name.
is_train: if the split is for training.
batch_size_per_replica: Number of samples consumed per model per step.
num_replicas: number of model replicas.
shuffle_seed: Seed for shuffling the training data.
max_num_samples: maximum number of samples in iterator.
dtype: data type for images.
Returns:
- Batch iterator.
- Number of steps after batch if the dataset is not repeated,
returns -1 for training.
"""
batch_size = batch_size_per_replica * num_replicas
dataset, num_steps = load_split_from_image_tfds_builder(
builder=builder,
batch_size=batch_size,
split=split,
shuffle_seed=shuffle_seed,
augment_train_example_fn=create_image_label_dict_from_dict,
max_num_samples=max_num_samples,
dtype=dtype,
)
maybe_pad_batches = partial(
maybe_pad_batch, is_train=is_train, batch_size=batch_size
)
dataset_iter = iter(dataset)
dataset_iter = map(tf_to_numpy, dataset_iter)
dataset_iter = map(maybe_pad_batches, dataset_iter)
shard_batches = partial(shard, num_replicas=num_replicas)
dataset_iter = map(shard_batches, dataset_iter)
return dataset_iter, num_steps
def get_image_tfds_dataset(
dataset_name: str,
config: DictConfig,
) -> Dataset:
"""Returns generators for the dataset train, valid, and test sets.
Args:
dataset_name: Data set name.
config: entire config.
Returns:
A Dataset() which includes train_iter, valid_iter, and test_iter.
"""
batch_size_per_replica = config["training"]["batch_size_per_replica"]
num_devices_per_replica = config["training"]["num_devices_per_replica"]
num_replicas = jax.local_device_count() // num_devices_per_replica
shuffle_seed = config["seed"]
max_num_samples = config["data"]["max_num_samples"]
dtype = jnp.float32
if config["training"]["mixed_precision"]["use"]:
dtype = jmp.half_dtype()
builder = tfds.builder(dataset_name)
train_iter, _ = get_image_iterator(
builder=builder,
split=TRAIN_SPLIT,
is_train=True,
batch_size_per_replica=batch_size_per_replica,
num_replicas=num_replicas,
shuffle_seed=shuffle_seed,
max_num_samples=max_num_samples,
dtype=dtype,
)
valid_iter, num_valid_steps = get_image_iterator(
builder=builder,
split=VALID_SPLIT,
is_train=False,
batch_size_per_replica=batch_size_per_replica,
num_replicas=num_replicas,
shuffle_seed=shuffle_seed,
max_num_samples=max_num_samples,
dtype=dtype,
)
test_iter, num_test_steps = get_image_iterator(
builder=builder,
split=TEST_SPLIT,
is_train=False,
batch_size_per_replica=batch_size_per_replica,
num_replicas=num_replicas,
shuffle_seed=shuffle_seed,
max_num_samples=max_num_samples,
dtype=dtype,
)
return Dataset(
train_iter=train_iter,
valid_iter=valid_iter,
test_iter=test_iter,
num_valid_steps=num_valid_steps,
num_test_steps=num_test_steps,
)
| 8,041 | 31.297189 | 91 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/datasets/util.py | """Util functions for image.
Some are adapted from
https://github.com/google-research/scenic/blob/03735eb81f64fd1241c4efdb946ea6de3d326fe1/scenic/dataset_lib/dataset_utils.py
"""
import functools
import queue
import threading
from typing import Any, Callable, Dict, Generator, Iterable, Tuple
import chex
import jax
import jax.numpy as jnp
import numpy as np
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
from absl import logging
from imgx import IMAGE
def maybe_pad_batch(
batch: Dict[str, chex.ArrayTree],
is_train: bool,
batch_size: int,
batch_dim: int = 0,
) -> Dict[str, chex.ArrayTree]:
"""Zero pad the batch on the right to the batch_size.
All leave tensors in the batch pytree will be padded. This function expects
the root structure of the batch pytree to be a dictionary and returns a
dictionary with the same structure (and substructures), additionally with
the key 'batch_mask' added to the root dict, with 1.0 indicating indices
which are true data and 0.0 indicating a padded index. `batch_mask` will
be used for calculating the weighted cross entropy, or weighted accuracy.
Note that in this codebase, we assume we drop the last partial batch from
the training set, so if the batch is from the training set
(i.e. `train=True`), or when the batch is from the test/validation set,
but it is a complete batch, we *modify* the batch dict by adding an array of
ones as the `batch_mask` of all examples in the batch. Otherwise, we create
a new dict that has the padded patch and its corresponding `batch_mask`
array. Note that batch_mask can be also used as the label mask
(not input mask), for task that are pixel/token level. This is simply done
by applying the mask we make for padding the partial batches on top of
the existing label mask.
Args:
batch: A dictionary containing a pytree. If `inputs_key` is not
set, we use the first leave to get the current batch size.
Otherwise, the tensor mapped with `inputs_key`
at the root dictionary is used.
is_train: if the batch is from the training data. In that case,
we drop the last (incomplete) batch and thus don't do any padding.
batch_size: All arrays in the dict will be padded to have first
dimension equal to desired_batch_size.
batch_dim: Batch dimension. The default is 0, but it can be different
if a sharded batch is given.
Returns:
A dictionary mapping the same keys to the padded batches.
Additionally, we add a key representing weights, to indicate how
the batch was padded.
Raises:
ValueError: if configs are conflicting.
"""
sample_tensor = batch[IMAGE]
batch_pad = batch_size - sample_tensor.shape[batch_dim]
if is_train and batch_pad != 0:
raise ValueError(
"In this codebase, we assumed that we always drop the "
"last partial batch of the train set. Please use "
"` drop_remainder=True` for the training set."
)
# Most batches do not need padding, so we quickly return to avoid slowdown.
if is_train or batch_pad == 0:
return batch
def zero_pad(array: np.ndarray) -> np.ndarray:
pad_with = (
[(0, 0)] * batch_dim
+ [(0, batch_pad)]
+ [(0, 0)] * (array.ndim - batch_dim - 1)
)
return np.pad(array, pad_with, mode="constant")
padded_batch = jax.tree_map(zero_pad, batch)
return padded_batch
def unpad(
pytree: chex.ArrayTree,
num_samples: int,
) -> chex.ArrayTree:
"""Remove padded data for all arrays in the pytree.
We assume that all arrays in the pytree have the same leading dimension.
Args:
pytree: A pytree of arrays to be sharded.
num_samples: number of samples to keep.
Returns:
Data without padding
"""
def _unpad_array(x: jnp.ndarray) -> jnp.ndarray:
return x[:num_samples, ...]
return jax.tree_map(_unpad_array, pytree)
def tf_to_numpy(batch: Dict) -> np.ndarray:
"""Convert an input batch from tf Tensors to numpy arrays.
Args:
batch: A dictionary that has items in a batch: image and labels.
Returns:
Numpy arrays of the given tf Tensors.
"""
def convert_data(x: tf.Tensor) -> np.ndarray:
"""Use _numpy() for zero-copy conversion between TF and NumPy.
Args:
x: tf tensor.
Returns:
numpy array.
"""
return x._numpy() # pylint: disable=protected-access
return jax.tree_map(convert_data, batch)
def get_center_pad_shape(
current_shape: Tuple[int, ...], target_shape: Tuple[int, ...]
) -> Tuple[Tuple[int, ...], Tuple[int, ...]]:
"""Get pad sizes for sitk.ConstantPad.
The padding is added symmetrically.
Args:
current_shape: current shape of the image.
target_shape: target shape of the image.
Returns:
pad_lower: shape to pad on the lower side.
pad_upper: shape to pad on the upper side.
"""
pad_lower = []
pad_upper = []
for i, size_i in enumerate(current_shape):
pad_i = max(target_shape[i] - size_i, 0)
pad_lower_i = pad_i // 2
pad_upper_i = pad_i - pad_lower_i
pad_lower.append(pad_lower_i)
pad_upper.append(pad_upper_i)
return tuple(pad_lower), tuple(pad_upper)
def get_center_crop_shape(
current_shape: Tuple[int, ...], target_shape: Tuple[int, ...]
) -> Tuple[Tuple[int, ...], Tuple[int, ...]]:
"""Get crop sizes for sitk.Crop.
The crop is performed symmetrically.
Args:
current_shape: current shape of the image.
target_shape: target shape of the image.
Returns:
crop_lower: shape to pad on the lower side.
crop_upper: shape to pad on the upper side.
"""
crop_lower = []
crop_upper = []
for i, size_i in enumerate(current_shape):
crop_i = max(size_i - target_shape[i], 0)
crop_lower_i = crop_i // 2
crop_upper_i = crop_i - crop_lower_i
crop_lower.append(crop_lower_i)
crop_upper.append(crop_upper_i)
return tuple(crop_lower), tuple(crop_upper)
def try_to_get_center_crop_shape(
label_min: int, label_max: int, current_length: int, target_length: int
) -> Tuple[int, int]:
"""Try to crop at the center of label, 1D.
Args:
label_min: label index minimum, inclusive.
label_max: label index maximum, exclusive.
current_length: current image length.
target_length: target image length.
Returns:
crop_lower: shape to pad on the lower side.
crop_upper: shape to pad on the upper side.
Raises:
ValueError: if label min max is out of range.
"""
if label_min < 0 or label_max > current_length:
raise ValueError("Label index out of range.")
if current_length <= target_length:
# no need of crop
return 0, 0
# attend to perform crop centered at label center
label_center = (label_max - 1 + label_min) / 2.0
bbox_lower = int(np.ceil(label_center - target_length / 2.0))
bbox_upper = bbox_lower + target_length
# if lower is negative, then have to shift the window to right
bbox_lower = max(bbox_lower, 0)
# if upper is too large, then have to shift the window to left
if bbox_upper > current_length:
bbox_lower -= bbox_upper - current_length
# calculate crop
crop_lower = bbox_lower # bbox index starts at 0
crop_upper = current_length - target_length - crop_lower
return crop_lower, crop_upper
def get_center_crop_shape_from_bbox(
bbox_min: Tuple[int, ...],
bbox_max: Tuple[int, ...],
current_shape: Tuple[int, ...],
target_shape: Tuple[int, ...],
) -> Tuple[Tuple[int, ...], Tuple[int, ...]]:
"""Get crop sizes for sitk.Crop from label bounding box.
The crop is not necessarily performed symmetrically.
Args:
bbox_min: [start_in_1st_spatial_dim, ...], inclusive, starts at zero.
bbox_max: [end_in_1st_spatial_dim, ...], exclusive, starts at zero.
current_shape: current shape of the image.
target_shape: target shape of the image.
Returns:
crop_lower: shape to pad on the lower side.
crop_upper: shape to pad on the upper side.
"""
crop_lower = []
crop_upper = []
for i, current_length in enumerate(current_shape):
crop_lower_i, crop_upper_i = try_to_get_center_crop_shape(
label_min=bbox_min[i],
label_max=bbox_max[i],
current_length=current_length,
target_length=target_shape[i],
)
crop_lower.append(crop_lower_i)
crop_upper.append(crop_upper_i)
return tuple(crop_lower), tuple(crop_upper)
def get_foreground_range(label: tf.Tensor) -> tf.Tensor:
"""Get the foreground range for a given label.
This function is not defined in jax for augmentation because,
nonzero is not jittable as the number of nonzero elements is unknown.
Args:
label: shape (d1, ..., dn), here n = ndim below.
Returns:
shape (ndim, 2), for each dimension, it's [min, max].
"""
# (ndim, num_nonzero)
nonzero_indices = tnp.stack(tnp.nonzero(label))
# (ndim, 2)
return tnp.stack(
[tnp.min(nonzero_indices, axis=-1), tnp.max(nonzero_indices, axis=-1)],
axis=-1,
)
def get_function_name(function: Callable[..., Any]) -> str:
"""Get name of any function.
Args:
function: function to query.
Returns:
function name.
"""
if isinstance(function, functools.partial):
return f"partial({function.func.__name__})"
return function.__name__
def py_prefetch(
iterable_function: Callable[[], Iterable[chex.ArrayTree]],
buffer_size: int = 5,
) -> Generator[chex.ArrayTree, None, None]:
"""Performs prefetching of elements from an iterable in a separate thread.
Args:
iterable_function: A python function that when called with no arguments
returns an iterable. This is used to build a fresh iterable for each
thread (crucial if working with tensorflow datasets because
`tf.graph` objects are thread local).
buffer_size (int): Number of elements to keep in the prefetch buffer.
Yields:
Prefetched elements from the original iterable.
Raises:
ValueError: if the buffer_size <= 1.
Any error thrown by the iterable_function. Note this is not
raised inside the producer, but after it finishes executing.
"""
if buffer_size <= 1:
raise ValueError("the buffer_size should be > 1")
buffer: queue.Queue = queue.Queue(maxsize=(buffer_size - 1))
producer_error = []
end = object()
def producer() -> None:
"""Enqueues items from iterable on a given thread."""
try:
# Build a new iterable for each thread. This is crucial if
# working with tensorflow datasets
# because tf.graph objects are thread local.
iterable = iterable_function()
for item in iterable:
buffer.put(item)
except Exception as err: # pylint: disable=broad-except
logging.exception(
"Error in producer thread for %s",
get_function_name(iterable_function),
)
producer_error.append(err)
finally:
buffer.put(end)
threading.Thread(target=producer, daemon=True).start()
# Consumer.
while True:
value = buffer.get()
if value is end:
break
yield value
if producer_error:
raise producer_error[0]
| 11,835 | 32.247191 | 123 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/diffusion/variance_schedule.py | """Variance schedule for diffusion models."""
from __future__ import annotations
import enum
import numpy as np
from jax import numpy as jnp
class DiffusionBetaSchedule(enum.Enum):
"""Class to define beta schedule."""
LINEAR = enum.auto()
QUADRADIC = enum.auto()
COSINE = enum.auto()
WARMUP10 = enum.auto()
WARMUP50 = enum.auto()
def get_beta_schedule(
num_timesteps: int,
beta_schedule: DiffusionBetaSchedule,
beta_start: float,
beta_end: float,
) -> jnp.ndarray:
"""Get variance (beta) schedule for q(x_t | x_{t-1}).
Args:
num_timesteps: number of time steps in total, T.
beta_schedule: schedule for beta.
beta_start: beta for t=0.
beta_end: beta for t=T-1.
Returns:
Shape (num_timesteps,) array of beta values, for t=0, ..., T-1.
Values are in ascending order.
Raises:
ValueError: for unknown schedule.
"""
if beta_schedule == DiffusionBetaSchedule.LINEAR:
return jnp.linspace(
beta_start,
beta_end,
num_timesteps,
)
if beta_schedule == DiffusionBetaSchedule.QUADRADIC:
return (
jnp.linspace(
beta_start**0.5,
beta_end**0.5,
num_timesteps,
)
** 2
)
if beta_schedule == DiffusionBetaSchedule.COSINE:
def f(t: float) -> float:
"""Eq 17 in https://arxiv.org/abs/2102.09672.
Args:
t: time step with values in [0, 1].
Returns:
Cumulative product of alpha.
"""
return np.cos((t + 0.008) / 1.008 * np.pi / 2) ** 2
betas = [0.0]
alphas_cumprod_prev = 1.0
for i in range(1, num_timesteps):
t = i / (num_timesteps - 1)
alphas_cumprod = f(t)
beta = 1 - alphas_cumprod / alphas_cumprod_prev
betas.append(beta)
return jnp.array(betas) * (beta_end - beta_start) + beta_start
if beta_schedule == DiffusionBetaSchedule.WARMUP10:
num_timesteps_warmup = max(num_timesteps // 10, 1)
betas_warmup = (
jnp.linspace(
beta_start**0.5,
beta_end**0.5,
num_timesteps_warmup,
)
** 2
)
return jnp.concatenate(
[
betas_warmup,
jnp.ones((num_timesteps - num_timesteps_warmup,)) * beta_end,
]
)
if beta_schedule == DiffusionBetaSchedule.WARMUP50:
num_timesteps_warmup = max(num_timesteps // 2, 1)
betas_warmup = (
jnp.linspace(
beta_start**0.5,
beta_end**0.5,
num_timesteps_warmup,
)
** 2
)
return jnp.concatenate(
[
betas_warmup,
jnp.ones((num_timesteps - num_timesteps_warmup,)) * beta_end,
]
)
raise ValueError(f"Unknown beta_schedule {beta_schedule}.")
def downsample_beta_schedule(
betas: jnp.ndarray,
num_timesteps: int,
num_timesteps_to_keep: int,
) -> jnp.ndarray:
"""Downsample beta schedule.
Args:
betas: beta schedule, shape (num_timesteps,).
Values are in ascending order.
num_timesteps: number of time steps in total, T.
num_timesteps_to_keep: number of time steps to keep.
Returns:
Downsampled beta schedule, shape (num_timesteps_to_keep,).
"""
if betas.shape != (num_timesteps,):
raise ValueError(
f"betas.shape ({betas.shape}) must be equal to "
f"(num_timesteps,)=({num_timesteps},)"
)
if (num_timesteps - 1) % (num_timesteps_to_keep - 1) != 0:
raise ValueError(
f"num_timesteps-1={num_timesteps-1} can't be evenly divided by "
f"num_timesteps_to_keep-1={num_timesteps_to_keep-1}."
)
if num_timesteps_to_keep < 2:
raise ValueError(
f"num_timesteps_to_keep ({num_timesteps_to_keep}) must be >= 2."
)
if num_timesteps_to_keep == num_timesteps:
return betas
if num_timesteps_to_keep < num_timesteps:
step_scale = (num_timesteps - 1) // (num_timesteps_to_keep - 1)
beta0 = betas[0]
alphas = 1.0 - betas
alphas_cumprod = jnp.cumprod(alphas)
# (num_timesteps_to_keep,)
alphas_cumprod = alphas_cumprod[::step_scale]
# (num_timesteps_to_keep-1,)
betas = 1.0 - alphas_cumprod[1:] / alphas_cumprod[:-1]
# (num_timesteps_to_keep,)
betas = jnp.append(beta0, betas)
return betas
raise ValueError(
f"num_timesteps_to_keep ({num_timesteps_to_keep}) "
f"must be <= num_timesteps ({num_timesteps})"
)
| 4,844 | 29.093168 | 77 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/diffusion/gaussian_diffusion.py | """Gaussian diffusion related functions.
https://github.com/WuJunde/MedSegDiff/blob/master/guided_diffusion/gaussian_diffusion.py
https://github.com/hojonathanho/diffusion/blob/master/diffusion_tf/diffusion_utils_2.py
"""
import dataclasses
import enum
from typing import Callable, Iterator, Sequence, Tuple, Union
import haiku as hk
import jax.numpy as jnp
import jax.random
from imgx import EPS
from imgx.diffusion.variance_schedule import (
DiffusionBetaSchedule,
downsample_beta_schedule,
get_beta_schedule,
)
from imgx.metric.distribution import (
discretized_gaussian_log_likelihood,
normal_kl,
)
class DiffusionModelOutputType(enum.Enum):
"""Class to define model's output meaning.
- X_START: model predicts x_0.
- X_PREVIOUS: model predicts x_{t-1}.
- EPSILON: model predicts noise epsilon.
"""
X_START = enum.auto()
X_PREVIOUS = enum.auto()
EPSILON = enum.auto()
class DiffusionModelVarianceType(enum.Enum):
r"""Class to define p(x_{t-1} | x_t) variance.
- FIXED_SMALL: a smaller variance,
\tilde{beta}_t = (1-\bar{alpha}_{t-1})/(1-\bar{alpha}_{t})*beta_t.
- FIXED_LARGE: a larger variance, beta_t.
- LEARNED: model outputs an array with channel=2, for mean and variance.
- LEARNED_RANGE: model outputs an array with channel=2, for mean and
variance. But the variance is not raw values, it's a coefficient to
control the value between FIXED_SMALL and FIXED_LARGE.
"""
FIXED_SMALL = enum.auto()
FIXED_LARGE = enum.auto()
LEARNED = enum.auto()
LEARNED_RANGE = enum.auto()
class DiffusionSpace(enum.Enum):
"""Class to define the meaning of x.
Model always outputs logits.
"""
SCALED_PROBS = enum.auto() # values will be [-1, 1]
LOGITS = enum.auto()
def extract_and_expand(
arr: jnp.ndarray, t: jnp.ndarray, ndim: int
) -> jnp.ndarray:
"""Extract values from a 1D array and expand.
This function is not jittable.
Args:
arr: 1D of shape (num_timesteps, ).
t: storing index values < self.num_timesteps, shape (batch, ).
ndim: number of dimensions for an array of shape (batch, ...).
Returns:
Expanded array of shape (batch, ...), expanded axes have dim 1.
"""
return jnp.expand_dims(arr[t], axis=tuple(range(1, ndim)))
@dataclasses.dataclass
class GaussianDiffusion(hk.Module):
"""Class for Gaussian diffusion sampling.
https://github.com/WuJunde/MedSegDiff/blob/master/guided_diffusion/gaussian_diffusion.py
TODO: split segmentation related functions to a sub-class.
"""
def __init__(
self,
model: hk.Module,
num_timesteps: int, # T
num_timesteps_beta: int,
beta_schedule: DiffusionBetaSchedule,
beta_start: float,
beta_end: float,
model_out_type: DiffusionModelOutputType,
model_var_type: DiffusionModelVarianceType,
x_space: DiffusionSpace,
x_limit: float,
use_ddim: bool,
noise_fn: Callable = jax.random.normal,
) -> None:
"""Init.
q(x_t | x_{t-1}) ~ Normal(sqrt(1-beta_t)*x_{t-1}, beta_t*I)
Args:
model: haiku model.
num_timesteps: number of diffusion steps.
num_timesteps_beta: number of steps when defining beta schedule.
beta_schedule: schedule for betas.
beta_start: beta for t=0.
beta_end: beta for t=T.
model_out_type: type of model output.
model_var_type: type of variance for p(x_{t-1} | x_t).
x_space: x is logits or scaled_probs.
x_limit: x_t has values in [-x_limit, x_limit], the range has to be
symmetric, as for T, the distribution is centered at zero.
use_ddim: use ddim_sample.
noise_fn: a function that gets noise of the same shape as x_t.
"""
super().__init__()
self.model = model
self.num_timesteps = num_timesteps
self.num_timesteps_beta = num_timesteps_beta
self.use_ddim = use_ddim
self.model_out_type = model_out_type
self.model_var_type = model_var_type
self.x_space = x_space
self.x_limit = x_limit
self.noise_fn = noise_fn
# shape are all (T,)
# corresponding to 0, ..., T-1, where 0 means one step
betas = get_beta_schedule(
num_timesteps=num_timesteps_beta,
beta_schedule=beta_schedule,
beta_start=beta_start,
beta_end=beta_end,
)
self.betas = downsample_beta_schedule(
betas=betas,
num_timesteps=num_timesteps_beta,
num_timesteps_to_keep=num_timesteps,
)
alphas = 1.0 - self.betas # alpha_t
self.alphas_cumprod = jnp.cumprod(alphas) # \bar{alpha}_t
self.alphas_cumprod_prev = jnp.append(1.0, self.alphas_cumprod[:-1])
self.alphas_cumprod_next = jnp.append(self.alphas_cumprod[1:], 0.0)
self.sqrt_alphas_cumprod = jnp.sqrt(self.alphas_cumprod)
self.sqrt_one_minus_alphas_cumprod = jnp.sqrt(1.0 - self.alphas_cumprod)
self.log_one_minus_alphas_cumprod = jnp.log(1.0 - self.alphas_cumprod)
# last value is inf as last value of alphas_cumprod is zero
self.sqrt_recip_alphas_cumprod = jnp.sqrt(1.0 / self.alphas_cumprod)
self.sqrt_recip_alphas_cumprod_minus_one = jnp.sqrt(
1.0 / self.alphas_cumprod - 1
)
# q(x_{t-1} | x_t, x_0)
# mean = coeff_start * x_0 + coeff_t * x_t
# first values are nan
self.posterior_mean_coeff_start = (
self.betas
* jnp.sqrt(self.alphas_cumprod_prev)
/ (1.0 - self.alphas_cumprod)
)
self.posterior_mean_coeff_t = (
jnp.sqrt(alphas)
* (1.0 - self.alphas_cumprod_prev)
/ (1.0 - self.alphas_cumprod)
)
# variance
# log calculation clipped because the posterior variance is 0 at t=0
# alphas_cumprod_prev has 1.0 appended in front
self.posterior_variance = (
self.betas
* (1.0 - self.alphas_cumprod_prev)
/ (1.0 - self.alphas_cumprod)
)
# posterior_variance first value is zero
self.posterior_log_variance_clipped = jnp.log(
jnp.append(self.posterior_variance[1], self.posterior_variance[1:])
)
def q_mean_log_variance(
self, x_start: jnp.ndarray, t: jnp.ndarray
) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Get the distribution q(x_t | x_0).
Args:
x_start: noiseless input, shape (batch, ...).
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
mean: shape (batch, ...), expanded axes have dim 1.
log_variance: shape (batch, ...), expanded axes have dim 1.
"""
mean = (
extract_and_expand(self.sqrt_alphas_cumprod, t, x_start.ndim)
* x_start
)
log_variance = extract_and_expand(
self.log_one_minus_alphas_cumprod, t, x_start.ndim
)
return mean, log_variance
def q_sample(
self,
x_start: jnp.ndarray,
noise: jnp.ndarray,
t: jnp.ndarray,
) -> jnp.ndarray:
"""Sample from q(x_t | x_0).
Args:
x_start: noiseless input, shape (batch, ...).
noise: same shape as x_start.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
Noisy array with same shape as x_start.
"""
mean = (
extract_and_expand(self.sqrt_alphas_cumprod, t, x_start.ndim)
* x_start
)
var = extract_and_expand(
self.sqrt_one_minus_alphas_cumprod, t, x_start.ndim
)
x_t = mean + var * noise
x_t = self.clip_x(x_t)
return x_t
def q_posterior_mean(
self, x_start: jnp.ndarray, x_t: jnp.ndarray, t: jnp.ndarray
) -> jnp.ndarray:
"""Get mean of the distribution q(x_{t-1} | x_t, x_0).
Args:
x_start: noiseless input, shape (batch, ...).
x_t: noisy input, same shape as x_start.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
mean: same shape as x_start.
"""
return (
extract_and_expand(self.posterior_mean_coeff_start, t, x_start.ndim)
* x_start
+ extract_and_expand(self.posterior_mean_coeff_t, t, x_start.ndim)
* x_t
)
def q_posterior_mean_variance(
self, x_start: jnp.ndarray, x_t: jnp.ndarray, t: jnp.ndarray
) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Get the distribution q(x_{t-1} | x_t, x_0).
Args:
x_start: noiseless input, shape (batch, ...).
x_t: noisy input, same shape as x_start.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
mean: same shape as x_start.
log_variance: shape (batch, ...), expanded axes have dim 1.
"""
mean = self.q_posterior_mean(x_start, x_t, t)
log_variance = extract_and_expand(
self.posterior_log_variance_clipped, t, x_start.ndim
)
return mean, log_variance
def p_mean_variance( # pylint:disable=R0912
self,
model_out: jnp.ndarray,
x_t: jnp.ndarray,
t: jnp.ndarray,
) -> Tuple[jnp.ndarray, jnp.ndarray, jnp.ndarray]:
"""Get the distribution p(x_{t-1} | x_t).
Args:
model_out: model predicted output.
If model estimates variance, shape (batch, ..., 2*num_classes),
else shape (batch, ..., num_classes).
x_t: noisy input, shape (batch, ..., num_classes).
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
x_start: predicted, same shape as x_t, values are clipped.
mean: same shape as x_t.
log_variance: compatible shape to (batch, ..., num_classes).
"""
# variance
if self.model_var_type == DiffusionModelVarianceType.FIXED_SMALL:
log_variance = self.posterior_log_variance_clipped
# extend shape
log_variance = extract_and_expand(log_variance, t, x_t.ndim)
elif self.model_var_type == DiffusionModelVarianceType.FIXED_LARGE:
# TODO why appending?
variance = jnp.append(self.posterior_variance[1], self.betas[1:])
log_variance = jnp.log(variance)
# extend shape
log_variance = extract_and_expand(log_variance, t, x_t.ndim)
elif self.model_var_type == DiffusionModelVarianceType.LEARNED:
# model_out (batch, ..., num_classes*2)
model_out, log_variance = jnp.split(
model_out, indices_or_sections=2, axis=-1
)
elif self.model_var_type == DiffusionModelVarianceType.LEARNED_RANGE:
# model_out (batch, ..., num_classes*2)
model_out, var_coeff = jnp.split(
model_out, indices_or_sections=2, axis=-1
)
log_min_variance = self.posterior_log_variance_clipped
log_max_variance = jnp.log(self.betas)
log_min_variance = extract_and_expand(log_min_variance, t, x_t.ndim)
log_max_variance = extract_and_expand(log_max_variance, t, x_t.ndim)
# var_coeff values are in [-1, 1] for [min_var, max_var].
var_coeff = jnp.clip(var_coeff, -1.0, 1.0)
var_coeff = (var_coeff + 1) / 2
log_variance = (
var_coeff * log_max_variance
+ (1 - var_coeff) * log_min_variance
)
else:
raise ValueError(
f"Unknown DiffusionModelVarianceType {self.model_var_type}."
)
# mean
if self.model_out_type == DiffusionModelOutputType.X_START:
# q(x_{t-1} | x_t, x_0)
x_start = self.logits_to_x(model_out)
x_start = self.clip_x(x_start)
mean = self.q_posterior_mean(x_start=x_start, x_t=x_t, t=t)
elif self.model_out_type == DiffusionModelOutputType.X_PREVIOUS:
# x_{t-1}
x_prev = self.logits_to_x(model_out)
x_prev = self.clip_x(x_prev)
mean = x_prev
x_start = self.predict_xstart_from_xprev_xt(
x_prev=x_prev, x_t=x_t, t=t
)
x_start = self.clip_x(x_start)
elif self.model_out_type == DiffusionModelOutputType.EPSILON:
x_start = self.predict_xstart_from_epsilon_xt(
x_t=x_t, epsilon=model_out, t=t
)
x_start = self.clip_x(x_start)
mean = self.q_posterior_mean(x_start=x_start, x_t=x_t, t=t)
else:
raise ValueError(
f"Unknown DiffusionModelOutputType {self.model_out_type}."
)
return x_start, mean, log_variance
def p_sample(
self,
model_out: jnp.ndarray,
x_t: jnp.ndarray,
t: jnp.ndarray,
) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Sample x_{t-1} ~ p(x_{t-1} | x_t).
Args:
model_out: model predicted output.
If model estimates variance, shape (batch, ..., 2*num_classes),
else shape (batch, ..., num_classes).
x_t: noisy input, shape (batch, ..., num_classes).
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
sample: x_{t-1}, same shape as x_t.
x_start_pred: same shape as x_t.
"""
x_start_pred, mean, log_variance = self.p_mean_variance(
model_out=model_out,
x_t=x_t,
t=t,
)
noise = self.noise_sample(shape=x_t.shape, dtype=x_t.dtype)
# no noise when t=0
# mean + exp(log(sigma**2)/2) * noise = mean + sigma * noise
nonzero_mask = jnp.expand_dims(
jnp.array(t != 0, dtype=noise.dtype),
axis=tuple(range(1, noise.ndim)),
)
sample = mean + nonzero_mask * jnp.exp(0.5 * log_variance) * noise
# clip as the value may be out of range
sample = self.clip_x(sample)
return sample, x_start_pred
def ddim_sample(
self,
model_out: jnp.ndarray,
x_t: jnp.ndarray,
t: jnp.ndarray,
eta: float = 0.0,
) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Sample x_{t-1} ~ p(x_{t-1} | x_t).
TODO: what are the differences between p_sample / ddim_sample?
Args:
model_out: model predicted output.
If model estimates variance, shape (batch, ..., 2*num_classes),
else shape (batch, ..., num_classes).
x_t: noisy input, shape (batch, ..., num_classes).
t: storing index values < self.num_timesteps, shape (batch, ).
eta: control the noise level in sampling.
Returns:
sample: x_{t-1}, same shape as x_t.
x_start_pred: same shape as x_t.
"""
# TODO why not using log variance output here?
x_start_pred, _, _ = self.p_mean_variance(
model_out=model_out,
x_t=x_t,
t=t,
)
noise = self.noise_sample(shape=x_t.shape, dtype=x_t.dtype)
epsilon = self.predict_epsilon_from_xstart_xt(
x_t=x_t, x_start=x_start_pred, t=t
)
alphas_cumprod_prev = extract_and_expand(
self.alphas_cumprod_prev, t, x_t.ndim
)
coeff_start = jnp.sqrt(alphas_cumprod_prev)
log_variance = (
extract_and_expand(self.posterior_log_variance_clipped, t, x_t.ndim)
* eta
)
coeff_epsilon = jnp.sqrt(1.0 - alphas_cumprod_prev - log_variance**2)
mean = coeff_start * x_start_pred + coeff_epsilon * epsilon
nonzero_mask = jnp.expand_dims(
jnp.array(t != 0, dtype=noise.dtype),
axis=tuple(range(1, noise.ndim)),
)
sample = mean + nonzero_mask * log_variance * noise
# clip as the value may be out of range
sample = self.clip_x(sample)
return sample, x_start_pred
def sample_mask(
self,
image: jnp.ndarray,
x_t: jnp.ndarray,
) -> jnp.ndarray:
"""Generate segmentation mask from the model conditioned on image.
The noise here is defined on segmentation mask.
x_t is considered as logits.
Args:
image: image to be segmented, shape = (batch, ..., C).
x_t: segmentation logits to be refined,
shape = (batch, ..., num_classes).
Returns:
Sampled segmentation logits, shape = (batch, ..., num_classes).
"""
for x_start_t in self.sample_mask_progressive(image=image, x_t=x_t):
x_start = x_start_t
return x_start
def sample_mask_progressive(
self,
image: jnp.ndarray,
x_t: jnp.ndarray,
) -> Iterator[jnp.ndarray]:
"""Generate segmentation mask from the model conditioned on image.
The noise here is defined on segmentation mask.
x_t is considered as logits.
Args:
image: image to be segmented, shape = (batch, ..., C).
x_t: segmentation logits to be refined,
shape = (batch, ..., num_classes).
Yields:
x_start of shape = (batch, ..., num_classes, T).
"""
for t in reversed(range(self.num_timesteps)):
# (batch, )
t_batch = jnp.array(
[t] * x_t.shape[0],
dtype=jnp.int16,
)
# (batch, ..., ch_input + num_classes)
model_in = jnp.concatenate([image, x_t], axis=-1)
# (batch, ..., num_classes) or (batch, ..., 2*num_classes)
model_out = self.model(model_in, t_batch)
if self.use_ddim:
x_t, x_start = self.ddim_sample(
model_out=model_out,
x_t=x_t,
t=t_batch,
)
else:
x_t, x_start = self.p_sample(
model_out=model_out,
x_t=x_t,
t=t_batch,
)
yield x_start
def predict_xstart_from_xprev_xt(
self, x_prev: jnp.ndarray, x_t: jnp.ndarray, t: jnp.ndarray
) -> jnp.ndarray:
"""Get x_0 from x_{t-1} and x_t.
The mean of q(x_{t-1} | x_t, x_0) is coeff_start * x_0 + coeff_t * x_t.
So x_{t-1} = coeff_start * x_0 + coeff_t * x_t.
x_0 = (x_{t-1} - coeff_t * x_t) / coeff_start
= 1/coeff_start * x_{t-1} - coeff_t/coeff_start * x_t
Args:
x_prev: noisy input at t-1, shape (batch, ...).
x_t: noisy input, same shape as x_prev.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
predicted x_0, same shape as x_prev.
"""
coeff_prev = extract_and_expand(
1.0 / self.posterior_mean_coeff_start, t, x_t.ndim
)
coeff_t = extract_and_expand(
self.posterior_mean_coeff_t / self.posterior_mean_coeff_start,
t,
x_t.ndim,
)
return coeff_prev * x_prev - coeff_t * x_t
def predict_xprev_from_xstart_xt(
self, x_start: jnp.ndarray, x_t: jnp.ndarray, t: jnp.ndarray
) -> jnp.ndarray:
"""Get x_{t-1} from x_0 and x_t.
The mean of q(x_{t-1} | x_t, x_0) is coeff_start * x_0 + coeff_t * x_t.
So x_{t-1} = coeff_start * x_0 + coeff_t * x_t.
Args:
x_start: noisy input at t, shape (batch, ...).
x_t: noisy input, same shape as x_start.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
predicted x_0, same shape as x_prev.
"""
coeff_start = extract_and_expand(
self.posterior_mean_coeff_start, t, x_t.ndim
)
coeff_t = extract_and_expand(
self.posterior_mean_coeff_t,
t,
x_t.ndim,
)
return coeff_start * x_start + coeff_t * x_t
def sample_xprev_from_xstart_xt(
self, x_start: jnp.ndarray, x_t: jnp.ndarray, t: jnp.ndarray
) -> jnp.ndarray:
"""Sample x_{t-1} from q(x_{t-1} | x_0, x_t).
The mean of q(x_{t-1} | x_t, x_0) is coeff_start * x_0 + coeff_t * x_t.
So x_{t-1} = coeff_start * x_0 + coeff_t * x_t.
Args:
x_start: noisy input at t, shape (batch, ...).
x_t: noisy input, same shape as x_start.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
predicted x_0, same shape as x_prev.
"""
x_prev = self.predict_xprev_from_xstart_xt(
x_start=x_start,
x_t=x_t,
t=t,
)
noise = self.noise_sample(shape=x_t.shape, dtype=x_t.dtype)
log_variance = extract_and_expand(
self.posterior_log_variance_clipped, t, x_t.ndim
)
sample = x_prev + noise * log_variance
return self.clip_x(sample)
def predict_xstart_from_epsilon_xt(
self, x_t: jnp.ndarray, epsilon: jnp.ndarray, t: jnp.ndarray
) -> jnp.ndarray:
"""Get x_0 from epsilon.
The reparameterization gives:
x_t = sqrt(alphas_cumprod) * x_0
+ sqrt(1-alphas_cumprod) * epsilon
so,
x_0 = 1/sqrt(alphas_cumprod) * x_t
- sqrt(1-alphas_cumprod)/sqrt(alphas_cumprod) * epsilon
= 1/sqrt(alphas_cumprod) * x_t
- sqrt(1/alphas_cumprod - 1) * epsilon
Args:
x_t: noisy input at t-1, shape (batch, ...).
epsilon: noise, shape (batch, ...), expanded axes have dim 1.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
predicted x_0, same shape as x_t.
"""
coeff_t = extract_and_expand(
self.sqrt_recip_alphas_cumprod, t, x_t.ndim
)
coeff_epsilon = extract_and_expand(
self.sqrt_recip_alphas_cumprod_minus_one, t, x_t.ndim
)
return coeff_t * x_t - coeff_epsilon * epsilon
def predict_epsilon_from_xstart_xt(
self, x_t: jnp.ndarray, x_start: jnp.ndarray, t: jnp.ndarray
) -> jnp.ndarray:
"""Get epsilon from x_0 and x_t.
The reparameterization gives:
x_t = sqrt(alphas_cumprod) * x_0
+ sqrt(1-alphas_cumprod) * epsilon
so,
epsilon = (x_t - sqrt(alphas_cumprod) * x_0) / sqrt(1-alphas_cumprod)
= (1/sqrt(alphas_cumprod) * x_t - x_0)
/sqrt(1/alphas_cumprod-1)
Args:
x_t: noisy input at t-1, shape (batch, ...).
x_start: predicted x_0, same shape as x_t.
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
predicted x_0, same shape as x_t.
"""
coeff_t = extract_and_expand(
self.sqrt_recip_alphas_cumprod, t, x_t.ndim
)
denominator = extract_and_expand(
self.sqrt_recip_alphas_cumprod_minus_one, t, x_t.ndim
)
return (coeff_t * x_t - x_start) / denominator
def sample_timestep(
self, batch_size: int, min_val: Union[int, jnp.ndarray] = 0
) -> jnp.ndarray:
"""Sample t of shape (batch, ).
Define this function to avoid defining randon key.
Args:
batch_size: number of steps.
min_val: minimum value, inclusive.
Returns:
Time steps with value between 0 and T-1, both sides inclusive.
"""
min_val = jnp.minimum(min_val, self.num_timesteps - 1)
return jax.random.randint(
hk.next_rng_key(),
shape=(batch_size,),
minval=min_val, # inclusive
maxval=self.num_timesteps, # exclusive
)
def noise_sample(
self, shape: Sequence[int], dtype: jnp.dtype
) -> jnp.ndarray:
"""Return a noise of the same shape as input.
Define this function to avoid defining randon key.
Args:
shape: array shape.
dtype: data type.
Returns:
Noise of the same shape and dtype as x.
"""
return self.noise_fn(key=hk.next_rng_key(), shape=shape, dtype=dtype)
def clip_x(self, x: jnp.ndarray) -> jnp.ndarray:
"""Clip the x_start/x_t to desired range.
TODO: where should clip be used?
Args:
x: any array.
Returns:
Clipped array.
"""
if self.x_limit <= 0:
return x
return jnp.clip(x, -self.x_limit, self.x_limit)
def logits_to_x(self, logits: jnp.ndarray) -> jnp.ndarray:
"""Map logits to x space.
Args:
logits: unnormalised logits.
Returns:
Array in the same space as x_start.
"""
if self.x_space == DiffusionSpace.LOGITS:
return logits
if self.x_space == DiffusionSpace.SCALED_PROBS:
x = jax.nn.softmax(logits, axis=-1)
x = x * 2.0 - 1.0
return x
raise ValueError(f"Unknown x space {self.x_space}.")
def x_to_logits(self, x: jnp.ndarray) -> jnp.ndarray:
"""Map x to logits.
Args:
x: in the same space as x_start.
Returns:
Logits.
"""
if self.x_space == DiffusionSpace.LOGITS:
return x
if self.x_space == DiffusionSpace.SCALED_PROBS:
probs = (x + 1) / 2
probs = jnp.clip(probs, EPS, 1.0)
return jnp.log(probs)
raise ValueError(f"Unknown x space {self.x_space}.")
def variational_lower_bound(
self,
model_out: jnp.ndarray,
x_start: jnp.ndarray,
x_t: jnp.ndarray,
t: jnp.ndarray,
) -> jnp.ndarray:
"""Variational lower-bound, smaller is better.
The resulting units are bits (rather than nats, as one might expect).
This allows for comparison to other papers.
Args:
model_out: model predicted output, may present different things,
shape (batch, ...).
x_start: cleaned, same shape as x_t.
x_t: noisy input, shape (batch, ...).
t: storing index values < self.num_timesteps, shape (batch, ).
Returns:
lower bounds of shape (batch, ).
"""
reduce_axis = tuple(range(x_t.ndim))[1:]
# q(x_{t-1} | x_t, x_0)
q_mean, q_log_variance = self.q_posterior_mean_variance(
x_start=x_start, x_t=x_t, t=t
)
# p(x_{t-1} | x_t)
_, p_mean, p_log_variance = self.p_mean_variance(
model_out=model_out,
x_t=x_t,
t=t,
)
kl = normal_kl(
q_mean=q_mean,
q_log_variance=q_log_variance,
p_mean=p_mean,
p_log_variance=p_log_variance,
)
nll = -discretized_gaussian_log_likelihood(
x_start, mean=q_mean, log_variance=q_log_variance
)
# (batch, )
kl = jnp.mean(kl, axis=reduce_axis) / jnp.log(2.0)
nll = jnp.mean(nll, axis=reduce_axis) / jnp.log(2.0)
# return neg-log-likelihood for t = 0
return jnp.where(t == 0, nll, kl)
| 27,736 | 33.413151 | 92 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/loss/cross_entropy.py | """Loss functions for classification."""
import jax
import jax.numpy as jnp
import optax
def mean_cross_entropy(
logits: jnp.ndarray,
mask_true: jnp.ndarray,
) -> jnp.ndarray:
"""Cross entropy.
Args:
logits: unscaled prediction, (batch, ..., num_classes).
mask_true: one hot targets, (batch, ..., num_classes).
Returns:
Cross entropy loss value of shape (1, ).
"""
# (batch, ...)
loss = optax.softmax_cross_entropy(logits=logits, labels=mask_true)
return jnp.mean(loss)
def mean_focal_loss(
logits: jnp.ndarray,
mask_true: jnp.ndarray,
gamma: float = 2.0,
) -> jnp.ndarray:
"""Focal loss.
https://arxiv.org/abs/1708.02002
Args:
logits: unscaled prediction, (batch, ..., num_classes).
mask_true: one hot targets, (batch, ..., num_classes).
gamma: adjust class imbalance, 0 is equivalent to cross entropy.
Returns:
Focal loss value of shape (1, ).
"""
# normalise logits to be the log of probabilities
logits = jax.nn.log_softmax(logits, axis=-1)
probs = jnp.exp(logits)
focal_loss = -((1 - probs) ** gamma) * logits * mask_true
# (batch, ..., num_classes) -> (batch, ...)
# label are one hot, just sum over class axis
focal_loss = jnp.sum(focal_loss, axis=-1)
return jnp.mean(focal_loss)
| 1,352 | 26.06 | 72 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/loss/dice.py | """Loss functions for image segmentation."""
import jax
import jax.numpy as jnp
def mean_with_background(batch_cls_loss: jnp.ndarray) -> jnp.ndarray:
"""Return average with background class.
Args:
batch_cls_loss: shape (batch, num_classes).
Returns:
Mean loss of shape (1,).
"""
return jnp.nanmean(batch_cls_loss)
def mean_without_background(batch_cls_loss: jnp.ndarray) -> jnp.ndarray:
"""Return average without background class.
Args:
batch_cls_loss: shape (batch, num_classes).
Returns:
Mean loss of shape (1,).
"""
return jnp.nanmean(batch_cls_loss[:, 1:])
def dice_loss(
logits: jnp.ndarray,
mask_true: jnp.ndarray,
) -> jnp.ndarray:
"""Mean dice loss, smaller is better.
Losses are not calculated on instance-classes, where there is no label.
This is to avoid the need of smoothing and potentially nan gradients.
Args:
logits: unscaled prediction, (batch, ..., num_classes).
mask_true: one hot targets, (batch, ..., num_classes).
Returns:
Dice loss value of shape (batch, num_classes).
"""
mask_pred = jax.nn.softmax(logits)
reduce_axis = tuple(range(mask_pred.ndim))[1:-1]
# (batch, num_classes)
numerator = 2.0 * jnp.sum(mask_pred * mask_true, axis=reduce_axis)
denominator = jnp.sum(mask_pred + mask_true, axis=reduce_axis)
not_nan_mask = jnp.sum(mask_true, axis=reduce_axis) > 0
# nan loss are replaced by 0.0
return jnp.where(
condition=not_nan_mask,
x=1.0 - numerator / denominator,
y=jnp.nan,
)
def mean_dice_loss(
logits: jnp.ndarray,
mask_true: jnp.ndarray,
include_background: bool,
) -> jnp.ndarray:
"""Mean dice loss, smaller is better.
Losses are not calculated on instance-classes, where there is no label.
This is to avoid the need of smoothing and potentially nan gradients.
Args:
logits: unscaled prediction, (batch, ..., num_classes).
mask_true: one hot targets, (batch, ..., num_classes).
include_background: include background as a separate class.
Returns:
Dice loss value of shape (1, ).
"""
loss = dice_loss(logits=logits, mask_true=mask_true)
return jax.lax.cond(
include_background,
mean_with_background,
mean_without_background,
loss,
)
| 2,386 | 27.082353 | 75 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/model/unet_3d_slice_time.py | """UNet for segmentation."""
import dataclasses
from typing import Callable, List, Tuple
import haiku as hk
import jax
from jax import numpy as jnp
from imgx.model.basic import instance_norm, sinusoidal_positional_embedding
from imgx.model.unet_3d_slice import Conv2dNormAct, Conv2dPool
@dataclasses.dataclass
class TimeConv2dResBlock(hk.Module):
"""Conv2dResBlock with time embedding input.
This class is defined separately to use remat, as remat does not allow
condition loop (if / else).
https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/diffusionmodules/model.py
"""
out_channels: int
kernel_size: int
activation: Callable[[jnp.ndarray], jnp.ndarray] = jax.nn.gelu
def __call__(
self,
x: jnp.ndarray,
t: jnp.ndarray,
) -> jnp.ndarray:
"""Forward pass.
Args:
x: tensor to be up-sampled, (batch, w, h, in_channels).
t: time embedding, (batch, t_channels).
Returns:
Tensor.
"""
res = x
x = hk.Conv2D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x)
x = hk.Conv2D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
t = self.activation(t[:, None, None, :])
t = hk.Linear(output_size=self.out_channels)(t)
x += t
x = instance_norm(x)
x = self.activation(x + res)
return x
@dataclasses.dataclass
class Unet3dSliceTime(hk.Module):
"""2D UNet for 3D images.
https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/basic_unet.py
"""
in_shape: Tuple[int, int, int] # spatial shape
in_channels: int # input channels
out_channels: int
num_channels: Tuple[int, ...] # channel at each depth, including the bottom
num_timesteps: int # T
kernel_size: int = 3
scale_factor: int = 2 # spatial down-sampling/up-sampling
remat: bool = False # remat reduces memory cost at cost of compute speed
def encoder(
self,
image: jnp.ndarray,
t: jnp.ndarray,
) -> List[jnp.ndarray]:
"""Encoder the image.
Args:
image: image tensor of shape (batch, H, W, in_channels).
t: time embedding of shape (batch, t_channels).
Returns:
List of embeddings from each layer.
"""
conv = Conv2dNormAct(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(image)
conv_t = TimeConv2dResBlock(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
)
conv_t = hk.remat(conv_t) if self.remat else conv_t
emb = conv_t(x=emb, t=t)
embeddings = [emb]
for ch in self.num_channels:
conv = Conv2dPool(out_channels=ch, scale_factor=self.scale_factor)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
conv_t = TimeConv2dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
)
conv_t = hk.remat(conv_t) if self.remat else conv_t
emb = conv_t(x=emb, t=t)
embeddings.append(emb)
return embeddings
def decoder(
self,
embeddings: List[jnp.ndarray],
t: jnp.ndarray,
) -> jnp.ndarray:
"""Decode the embedding and perform prediction.
Args:
embeddings: list of embeddings from each layer.
Starting with the first layer.
t: time embedding of shape (batch, t_channels).
Returns:
Unnormalized logits.
"""
if len(embeddings) != len(self.num_channels) + 1:
raise ValueError("UNet decoder input length does not match")
emb = embeddings[-1]
# calculate up-sampled channel
# [32, 64, 128, 256] -> [32, 32, 64, 128]
channels = self.num_channels[:1] + self.num_channels[:-1]
for i, ch in enumerate(channels[::-1]):
# skipped.shape <= up-scaled shape
# as padding may be added when down-sampling
skipped = embeddings[-i - 2]
skipped_shape = skipped.shape[-3:-1]
# deconv and pad to make emb of same shape as skipped
conv = hk.Conv2DTranspose(
output_channels=ch,
kernel_shape=self.scale_factor,
stride=self.scale_factor,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
emb = emb[
...,
: skipped_shape[0],
: skipped_shape[1],
:,
]
# add skipped
emb += skipped
# conv
conv_t = TimeConv2dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
)
conv_t = hk.remat(conv_t) if self.remat else conv_t
emb = conv_t(emb, t)
conv = hk.Conv2D(output_channels=self.out_channels, kernel_shape=1)
conv = hk.remat(conv) if self.remat else conv
out = conv(emb)
return out
def __call__( # type: ignore[no-untyped-def]
self,
image: jnp.ndarray,
t: jnp.ndarray,
**kwargs, # noqa: ARG002
) -> jnp.ndarray:
"""Forward pass.
Args:
image: (batch, h, w, d, in_channels).
t: (batch, ).
kwargs: unused arguments.
Returns:
Predictions (batch, h, w, d, out_channels).
Raises:
ValueError: if input shape does not match.
"""
if image.shape[-4:] != (*self.in_shape, self.in_channels):
raise ValueError(
f"Input shape {image.shape[-4:]} does not match"
f" configs {(*self.in_shape, self.in_channels)}"
)
# (batch, h, w, d, in_channels) -> (batch, d, h, w, in_channels)
image = jnp.transpose(image, (0, 3, 1, 2, 4))
# (batch, d, h, w, in_channels) -> (batch*d, h, w, in_channels)
image = jnp.reshape(image, (-1, *self.in_shape[:2], self.in_channels))
# (batch, ) -> (batch*d,)
t = jnp.repeat(t, repeats=self.in_shape[2], axis=0)
dim_t = self.num_channels[0] * 4
t = sinusoidal_positional_embedding(x=t, dim=dim_t)
embeddings = self.encoder(image=image, t=t)
out = self.decoder(embeddings=embeddings, t=t)
# (batch*d, h, w, out_channels) -> (batch, d, h, w, out_channels)
out = jnp.reshape(
out, (-1, self.in_shape[2], *self.in_shape[:2], self.out_channels)
)
# (batch, d, h, w, out_channels) -> (batch, h, w, d, out_channels)
out = jnp.transpose(out, (0, 2, 3, 1, 4))
return out
| 7,105 | 31.153846 | 95 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/model/unet_3d_time.py | """UNet for segmentation."""
import dataclasses
from typing import Callable, List, Tuple
import haiku as hk
import jax
from jax import numpy as jnp
from imgx.model.basic import instance_norm, sinusoidal_positional_embedding
from imgx.model.unet_3d import Conv3dNormAct, Conv3dPool
@dataclasses.dataclass
class TimeConv3dResBlock(hk.Module):
"""Conv3dResBlock with time embedding input.
This class is defined separately to use remat, as remat does not allow
condition loop (if / else).
https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/diffusionmodules/model.py
"""
out_channels: int
kernel_size: int
activation: Callable[[jnp.ndarray], jnp.ndarray] = jax.nn.gelu
def __call__(
self,
x: jnp.ndarray,
t: jnp.ndarray,
) -> jnp.ndarray:
"""Forward pass.
Args:
x: tensor to be up-sampled, (batch, w, h, d, in_channels).
t: time embedding, (batch, t_channels).
Returns:
Tensor.
"""
res = x
x = hk.Conv3D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x)
x = hk.Conv3D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
t = self.activation(t[:, None, None, None, :])
t = hk.Linear(output_size=self.out_channels)(t)
x += t
x = instance_norm(x)
x = self.activation(x + res)
return x
@dataclasses.dataclass
class Unet3dTime(hk.Module):
"""3D UNet.
https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/basic_unet.py
"""
in_shape: Tuple[int, int, int] # spatial shape
in_channels: int # input channels
out_channels: int
num_channels: Tuple[int, ...] # channel at each depth, including the bottom
num_timesteps: int # T
kernel_size: int = 3
scale_factor: int = 2 # spatial down-sampling/up-sampling
remat: bool = False # remat reduces memory cost at cost of compute speed
def encoder(
self,
image: jnp.ndarray,
t: jnp.ndarray,
) -> List[jnp.ndarray]:
"""Encoder the image.
Args:
image: image tensor of shape (batch, H, W, D, in_channels).
t: time embedding of shape (batch, t_channels).
Returns:
List of embeddings from each layer.
"""
conv = Conv3dNormAct(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(image)
conv_t = TimeConv3dResBlock(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
)
conv_t = hk.remat(conv_t) if self.remat else conv_t
emb = conv_t(x=emb, t=t)
embeddings = [emb]
for ch in self.num_channels:
conv = Conv3dPool(out_channels=ch, scale_factor=self.scale_factor)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
conv_t = TimeConv3dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
)
conv_t = hk.remat(conv_t) if self.remat else conv_t
emb = conv_t(x=emb, t=t)
embeddings.append(emb)
return embeddings
def decoder(
self,
embeddings: List[jnp.ndarray],
t: jnp.ndarray,
) -> jnp.ndarray:
"""Decode the embedding and perform prediction.
Args:
embeddings: list of embeddings from each layer.
Starting with the first layer.
t: time embedding of shape (batch, t_channels).
Returns:
Unnormalized logits.
"""
if len(embeddings) != len(self.num_channels) + 1:
raise ValueError("UNet decoder input length does not match")
emb = embeddings[-1]
# calculate up-sampled channel
# [32, 64, 128, 256] -> [32, 32, 64, 128]
channels = self.num_channels[:1] + self.num_channels[:-1]
for i, ch in enumerate(channels[::-1]):
# skipped.shape <= up-scaled shape
# as padding may be added when down-sampling
skipped = embeddings[-i - 2]
skipped_shape = skipped.shape[-4:-1]
# deconv and pad to make emb of same shape as skipped
conv = hk.Conv3DTranspose(
output_channels=ch,
kernel_shape=self.scale_factor,
stride=self.scale_factor,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
emb = emb[
...,
: skipped_shape[0],
: skipped_shape[1],
: skipped_shape[2],
:,
]
# add skipped
emb += skipped
# conv
conv_t = TimeConv3dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
)
conv_t = hk.remat(conv_t) if self.remat else conv_t
emb = conv_t(emb, t)
conv = hk.Conv3D(output_channels=self.out_channels, kernel_shape=1)
conv = hk.remat(conv) if self.remat else conv
out = conv(emb)
return out
def __call__( # type: ignore[no-untyped-def]
self,
image: jnp.ndarray,
t: jnp.ndarray,
**kwargs, # noqa: ARG002
) -> jnp.ndarray:
"""Forward pass.
Args:
image: (batch, h, w, d, in_channels).
t: (batch, ).
kwargs: unused arguments.
Returns:
Predictions (batch, h, w, d, out_channels).
Raises:
ValueError: if input shape does not match.
"""
if image.shape[-4:] != (*self.in_shape, self.in_channels):
raise ValueError(
f"Input shape {image.shape[-4:]} does not match"
f" configs {(*self.in_shape, self.in_channels)}"
)
dim_t = self.num_channels[0] * 4
t = sinusoidal_positional_embedding(x=t, dim=dim_t)
embeddings = self.encoder(image=image, t=t)
out = self.decoder(embeddings=embeddings, t=t)
return out
| 6,438 | 30.10628 | 95 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/model/unet_3d_slice.py | """UNet for segmentation."""
import dataclasses
from typing import Callable, List, Tuple
import haiku as hk
import jax
from jax import numpy as jnp
from imgx.model.basic import instance_norm
@dataclasses.dataclass
class Conv2dNormAct(hk.Module):
"""Block with conv2d-norm-act."""
out_channels: int
kernel_size: int
activation: Callable[[jnp.ndarray], jnp.ndarray] = jax.nn.gelu
def __call__(
self,
x: jnp.ndarray,
) -> jnp.ndarray:
"""Forward pass.
Args:
x: tensor to be up-sampled.
Returns:
Tensor.
"""
x = hk.Conv2D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x)
return x
@dataclasses.dataclass
class Conv2dResBlock(hk.Module):
"""Block with two conv2d-norm-act layers and residual link."""
out_channels: int
kernel_size: int
activation: Callable[[jnp.ndarray], jnp.ndarray] = jax.nn.gelu
def __call__(
self,
x: jnp.ndarray,
) -> jnp.ndarray:
"""Forward pass.
Args:
x: tensor to be up-sampled.
Returns:
Tensor.
"""
res = x
x = hk.Conv2D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x)
x = hk.Conv2D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x + res)
return x
@dataclasses.dataclass
class Conv2dPool(hk.Module):
"""Patch merging, a down-sample layer."""
out_channels: int
scale_factor: int
def __call__(
self,
x: jnp.ndarray,
) -> jnp.ndarray:
"""Forward.
Args:
x: shape (batch, h, w, d, in_channels).
Returns:
Down-sampled array.
"""
x = hk.Conv2D(
output_channels=self.out_channels,
kernel_shape=self.scale_factor,
stride=self.scale_factor,
with_bias=False,
)(x)
x = instance_norm(x)
return x
@dataclasses.dataclass
class Unet3dSlice(hk.Module):
"""2D UNet for 3D images.
https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/basic_unet.py
"""
in_shape: Tuple[int, int, int] # spatial shape
in_channels: int # input channels
out_channels: int
num_channels: Tuple[int, ...] # channel at each depth, including the bottom
kernel_size: int = 3
scale_factor: int = 2 # spatial down-sampling/up-sampling
remat: bool = False # remat reduces memory cost at cost of compute speed
def encoder(
self,
image: jnp.ndarray,
) -> List[jnp.ndarray]:
"""Encoder the image.
Args:
image: image tensor of shape (batch, H, W, C).
Returns:
List of embeddings from each layer.
"""
conv = hk.Sequential(
[
Conv2dNormAct(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
),
Conv2dResBlock(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
),
]
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(image)
embeddings = [emb]
for ch in self.num_channels:
conv = hk.Sequential(
[
Conv2dPool(out_channels=ch, scale_factor=self.scale_factor),
Conv2dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
),
]
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
embeddings.append(emb)
return embeddings
def decoder(
self,
embeddings: List[jnp.ndarray],
) -> jnp.ndarray:
"""Decode the embedding and perform prediction.
Args:
embeddings: list of embeddings from each layer.
Starting with the first layer.
Returns:
Unnormalized logits.
"""
if len(embeddings) != len(self.num_channels) + 1:
raise ValueError("UNet decoder input length does not match")
emb = embeddings[-1]
# calculate up-sampled channel
# [32, 64, 128, 256] -> [32, 32, 64, 128]
channels = self.num_channels[:1] + self.num_channels[:-1]
for i, ch in enumerate(channels[::-1]):
# skipped.shape <= up-scaled shape
# as padding may be added when down-sampling
skipped = embeddings[-i - 2]
skipped_shape = skipped.shape[-3:-1]
# deconv and pad to make emb of same shape as skipped
conv = hk.Conv2DTranspose(
output_channels=ch,
kernel_shape=self.scale_factor,
stride=self.scale_factor,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
emb = emb[
...,
: skipped_shape[0],
: skipped_shape[1],
:,
]
# add skipped
emb += skipped
# conv
conv = Conv2dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
conv = hk.Conv2D(output_channels=self.out_channels, kernel_shape=1)
conv = hk.remat(conv) if self.remat else conv
out = conv(emb)
return out
def __call__( # type: ignore[no-untyped-def]
self,
image: jnp.ndarray,
**kwargs, # noqa: ARG002
) -> jnp.ndarray:
"""Forward pass.
Args:
image: (batch, h, w, d, in_channels).
kwargs: unused arguments.
Returns:
Predictions (batch, h, w, d, out_channels).
Raises:
ValueError: if input shape does not match.
"""
if image.shape[-4:] != (*self.in_shape, self.in_channels):
raise ValueError(
f"Input shape {image.shape[-3:]} does not match"
f" configs {(*self.in_shape, self.in_channels)}"
)
# (batch, h, w, d, in_channels) -> (batch, d, h, w, in_channels)
image = jnp.transpose(image, (0, 3, 1, 2, 4))
# (batch, d, h, w, in_channels) -> (batch*d, h, w, in_channels)
image = jnp.reshape(image, (-1, *self.in_shape[:2], self.in_channels))
embeddings = self.encoder(image=image)
out = self.decoder(embeddings=embeddings)
# (batch*d, h, w, out_channels) -> (batch, d, h, w, out_channels)
out = jnp.reshape(
out, (-1, self.in_shape[2], *self.in_shape[:2], self.out_channels)
)
# (batch, d, h, w, out_channels) -> (batch, h, w, d, out_channels)
out = jnp.transpose(out, (0, 2, 3, 1, 4))
return out
| 7,387 | 27.635659 | 85 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/model/unet_3d.py | """UNet for segmentation."""
import dataclasses
from typing import Callable, List, Tuple
import haiku as hk
import jax
from jax import numpy as jnp
from imgx.model.basic import instance_norm
@dataclasses.dataclass
class Conv3dNormAct(hk.Module):
"""Block with conv3d-norm-act."""
out_channels: int
kernel_size: int
activation: Callable[[jnp.ndarray], jnp.ndarray] = jax.nn.gelu
def __call__(
self,
x: jnp.ndarray,
) -> jnp.ndarray:
"""Forward pass.
Args:
x: tensor to be up-sampled.
Returns:
Tensor.
"""
x = hk.Conv3D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x)
return x
@dataclasses.dataclass
class Conv3dResBlock(hk.Module):
"""Block with two conv3d-norm-act layers and residual link."""
out_channels: int
kernel_size: int
activation: Callable[[jnp.ndarray], jnp.ndarray] = jax.nn.gelu
def __call__(
self,
x: jnp.ndarray,
) -> jnp.ndarray:
"""Forward pass.
Args:
x: tensor to be up-sampled.
Returns:
Tensor.
"""
res = x
x = hk.Conv3D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x)
x = hk.Conv3D(
output_channels=self.out_channels,
kernel_shape=self.kernel_size,
with_bias=False,
)(x)
x = instance_norm(x)
x = self.activation(x + res)
return x
@dataclasses.dataclass
class Conv3dPool(hk.Module):
"""Patch merging, a down-sample layer."""
out_channels: int
scale_factor: int
def __call__(
self,
x: jnp.ndarray,
) -> jnp.ndarray:
"""Forward.
Args:
x: shape (batch, h, w, d, in_channels).
Returns:
Down-sampled array.
"""
x = hk.Conv3D(
output_channels=self.out_channels,
kernel_shape=self.scale_factor,
stride=self.scale_factor,
with_bias=False,
)(x)
x = instance_norm(x)
return x
@dataclasses.dataclass
class Unet3d(hk.Module):
"""3D UNet.
https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/basic_unet.py
"""
in_shape: Tuple[int, int, int] # spatial shape
in_channels: int # input channels
out_channels: int
num_channels: Tuple[int, ...] # channel at each depth, including the bottom
kernel_size: int = 3
scale_factor: int = 2 # spatial down-sampling/up-sampling
remat: bool = False # remat reduces memory cost at cost of compute speed
def encoder(
self,
image: jnp.ndarray,
) -> List[jnp.ndarray]:
"""Encoder the image.
Args:
image: image tensor of shape (batch, H, W, D, C).
Returns:
List of embeddings from each layer.
"""
conv = hk.Sequential(
[
Conv3dNormAct(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
),
Conv3dResBlock(
out_channels=self.num_channels[0],
kernel_size=self.kernel_size,
),
]
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(image)
embeddings = [emb]
for ch in self.num_channels:
conv = hk.Sequential(
[
Conv3dPool(out_channels=ch, scale_factor=self.scale_factor),
Conv3dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
),
]
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
embeddings.append(emb)
return embeddings
def decoder(
self,
embeddings: List[jnp.ndarray],
) -> jnp.ndarray:
"""Decode the embedding and perform prediction.
Args:
embeddings: list of embeddings from each layer.
Starting with the first layer.
Returns:
Unnormalized logits.
"""
if len(embeddings) != len(self.num_channels) + 1:
raise ValueError("UNet decoder input length does not match")
emb = embeddings[-1]
# calculate up-sampled channel
# [32, 64, 128, 256] -> [32, 32, 64, 128]
channels = self.num_channels[:1] + self.num_channels[:-1]
for i, ch in enumerate(channels[::-1]):
# skipped.shape <= up-scaled shape
# as padding may be added when down-sampling
skipped = embeddings[-i - 2]
skipped_shape = skipped.shape[-4:-1]
# deconv and pad to make emb of same shape as skipped
conv = hk.Conv3DTranspose(
output_channels=ch,
kernel_shape=self.scale_factor,
stride=self.scale_factor,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
emb = emb[
...,
: skipped_shape[0],
: skipped_shape[1],
: skipped_shape[2],
:,
]
# add skipped
emb += skipped
# conv
conv = Conv3dResBlock(
out_channels=ch,
kernel_size=self.kernel_size,
)
conv = hk.remat(conv) if self.remat else conv
emb = conv(emb)
conv = hk.Conv3D(output_channels=self.out_channels, kernel_shape=1)
conv = hk.remat(conv) if self.remat else conv
out = conv(emb)
return out
def __call__( # type: ignore[no-untyped-def]
self,
image: jnp.ndarray,
**kwargs, # noqa: ARG002
) -> jnp.ndarray:
"""Forward pass.
Args:
image: (batch, h, w, d, in_channels).
kwargs: unused arguments.
Returns:
Predictions (batch, h, w, d, out_channels).
Raises:
ValueError: if input shape does not match.
"""
if image.shape[-4:] != (*self.in_shape, self.in_channels):
raise ValueError(
f"Input shape {image.shape[-3:]} does not match"
f" configs {(*self.in_shape, self.in_channels)}"
)
embeddings = self.encoder(image=image)
out = self.decoder(embeddings=embeddings)
return out
| 6,811 | 26.691057 | 85 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/model/basic.py | """Basic functions and modules."""
import haiku as hk
from jax import numpy as jnp
def layer_norm(x: jnp.ndarray) -> jnp.ndarray:
"""Applies a unique LayerNorm at the last axis.
Args:
x: input
Returns:
Normalised input.
"""
return hk.LayerNorm(axis=-1, create_scale=True, create_offset=True)(x)
def instance_norm(x: jnp.ndarray) -> jnp.ndarray:
"""Applies a unique InstanceNorm.
Args:
x: input
Returns:
Normalised input.
"""
return hk.InstanceNorm(create_scale=True, create_offset=True)(x)
def dropout(x: jnp.ndarray, dropout_rate: float) -> jnp.ndarray:
"""Applies dropout only if necessary.
This function is necessary to avoid defining random key for testing.
Otherwise, calling `hk.dropout` will result in the following error:
You must pass a non-None PRNGKey to init and/or apply
if you make use of random numbers.
Args:
x: input
dropout_rate: rate of dropout
Returns:
Dropout applied input.
"""
if dropout_rate == 0.0: # noqa: PLR2004
return x
return hk.dropout(hk.next_rng_key(), dropout_rate, x)
def sinusoidal_positional_embedding(
x: jnp.ndarray, dim: int, max_period: int = 10000
) -> jnp.ndarray:
"""Create sinusoidal timestep embeddings.
Half defined by sin, half by cos.
For position x, the embeddings are (for i = 0,...,half_dim-1)
sin(x / (max_period ** (i/half_dim)))
cos(x / (max_period ** (i/half_dim)))
Args:
x: (batch, ), with non-negative values.
dim: embedding dimension, assume to be evenly divided by two.
max_period: controls the minimum frequency of the embeddings.
Returns:
Embedding of size (batch, dim).
"""
half_dim = dim // 2
# (half_dim,)
freq = jnp.arange(0, half_dim, dtype=jnp.float32)
freq = jnp.exp(-jnp.log(max_period) * freq / half_dim)
# (batch, half_dim)
args = x[:, None] * freq[None, :]
# (batch, dim)
return jnp.concatenate([jnp.cos(args), jnp.sin(args)], axis=-1)
| 2,088 | 26.486842 | 74 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/exp/train_state.py | """Training state and checkpoints."""
import pickle
from pathlib import Path
from typing import Optional, Tuple
import chex
import haiku as hk
import jax
import jax.numpy as jnp
import jmp
import numpy as np
import optax
from imgx.device import broadcast_to_local_devices, get_first_replica_values
CHECKPOINT_ATTRS = [
"params",
"network_state",
"opt_state",
"ema_network_state",
"ema_params",
]
@chex.dataclass
class TrainState:
"""Dataclass to keep track of state of training.
The state of training is structured as a chex.dataclass, which enables
instances of this class to be passed into jax transformations like tree_map
and pmap.
The stored values are broadcast across devices.
"""
params: hk.Params
network_state: hk.State
opt_state: optax.OptState
loss_scale: jmp.LossScale
global_step: jnp.array
rng: jax.random.PRNGKey
ema_params: Optional[hk.Params] = None
ema_network_state: Optional[hk.State] = None
def save_array_tree(ckpt_dir: Path, state: chex.ArrayTree) -> None:
"""Save the state with arrays and tree saved separately.
Args:
ckpt_dir: directory to save.
state: state to save, including params, optimizer, etc.
"""
ckpt_dir.mkdir(parents=True, exist_ok=True)
with open(ckpt_dir / "arrays.npy", "wb") as f:
for x in jax.tree_util.tree_leaves(state):
np.save(f, x, allow_pickle=False)
tree_struct = jax.tree_map(lambda _: 0, state)
with open(ckpt_dir / "tree.pkl", "wb") as f:
pickle.dump(tree_struct, f)
def restore_array_tree(ckpt_dir: Path) -> chex.ArrayTree:
"""Restore the state from saved files.
Args:
ckpt_dir: directory to load.
Returns:
Restored state, including params, optimizer, etc.
"""
with open(ckpt_dir / "tree.pkl", "rb") as f:
tree_struct = pickle.load(f)
leaves, treedef = jax.tree_util.tree_flatten(tree_struct)
with open(ckpt_dir / "arrays.npy", "rb") as f:
flat_state = [np.load(f) for _ in leaves]
return jax.tree_util.tree_unflatten(treedef, flat_state)
def save_ckpt(train_state: TrainState, ckpt_dir: Path) -> None:
"""Save the state with arrays and tree saved separately.
Args:
train_state: checkpoint to save.
ckpt_dir: directory to save.
"""
train_state = jax.tree_map(get_first_replica_values, train_state)
state_dict = dict(train_state) # type: ignore[call-overload]
# loss_scale needs to be stored differently
loss_scale = state_dict["loss_scale"]
del state_dict["loss_scale"]
loss_scale_type = loss_scale.__class__.__name__
state_dict["loss_scale_type"] = loss_scale_type
if isinstance(loss_scale, jmp.StaticLossScale):
state_dict["loss_scale"] = loss_scale.loss_scale
elif isinstance(loss_scale, jmp.DynamicLossScale):
state_dict["loss_scale"] = loss_scale.loss_scale
state_dict["loss_scale_counter"] = loss_scale.counter
state_dict["loss_scale_period"] = loss_scale.period
state_dict["loss_scale_factor"] = loss_scale.factor
save_array_tree(ckpt_dir=ckpt_dir, state=state_dict)
def restore_ckpt(ckpt_dir: Path) -> TrainState:
"""Restore the state from saved files.
Args:
ckpt_dir: directory to load.
Returns:
train_state: checkpoint to save.
global_step: number of batch consumed.
"""
state_dict = restore_array_tree(ckpt_dir)
# loss_scale needs to be loaded differently
loss_scale_type = state_dict["loss_scale_type"]
del state_dict["loss_scale_type"]
if loss_scale_type == "NoOpLossScale":
loss_scale = jmp.NoOpLossScale()
elif loss_scale_type == "StaticLossScale":
loss_scale = state_dict["loss_scale"]
del state_dict["loss_scale"]
loss_scale = jmp.StaticLossScale(loss_scale)
elif loss_scale_type == "DynamicLossScale":
loss_scale = state_dict["loss_scale"]
counter = state_dict["loss_scale_counter"]
# factor and period are ints not arrays
period = int(state_dict["loss_scale_period"])
factor = int(state_dict["loss_scale_factor"])
del state_dict["loss_scale"]
del state_dict["loss_scale_counter"]
del state_dict["loss_scale_period"]
del state_dict["loss_scale_factor"]
loss_scale = jmp.DynamicLossScale(
loss_scale=loss_scale, counter=counter, period=period, factor=factor
)
else:
raise ValueError(f"Unknown loss_scale type {loss_scale_type}.")
# TODO should consider shards
state_dict = jax.tree_map(broadcast_to_local_devices, state_dict)
train_state = TrainState( # type: ignore[call-arg]
params=state_dict["params"],
network_state=state_dict["network_state"],
opt_state=state_dict["opt_state"],
loss_scale=loss_scale,
global_step=state_dict["global_step"],
rng=state_dict["rng"],
ema_params=state_dict.get("ema_params", None),
ema_network_state=state_dict.get("ema_network_state", None),
)
return train_state
def get_eval_params_and_state(
train_state: TrainState,
) -> Tuple[hk.Params, hk.State]:
"""Get the parameters and state for evaluation.
Args:
train_state: Train State.
Returns:
params, state.
"""
if train_state.ema_params is not None:
params = train_state.ema_params
state = train_state.ema_network_state
else:
params = train_state.params
state = train_state.network_state
return get_first_replica_values(params), get_first_replica_values(state)
def get_eval_params_and_state_from_ckpt(
ckpt_dir: Path,
use_ema: bool,
) -> Tuple[hk.Params, hk.State]:
"""Get the parameters and state for evaluation from checkpoint.
Args:
ckpt_dir: directory to load.
use_ema: use EMA or not.
Returns:
Broadcast params, state.
"""
state_dict = restore_array_tree(ckpt_dir)
if use_ema:
params = state_dict["ema_params"]
state = state_dict["ema_network_state"]
else:
params = state_dict["params"]
state = state_dict["network_state"]
# make sure arrays are initialised in CPU
with jax.default_device(jax.devices("cpu")[0]):
params = jax.tree_map(jnp.asarray, params)
state = jax.tree_map(jnp.asarray, state)
# broadcast to other devices
params = broadcast_to_local_devices(params)
state = broadcast_to_local_devices(state)
return params, state
| 6,560 | 30.695652 | 80 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/exp/optim.py | """Module for optimization."""
import logging
from typing import Tuple
import jax
import jax.numpy as jnp
import optax
from omegaconf import DictConfig
def ema_update(
ema_value: jnp.ndarray,
current_value: jnp.ndarray,
decay: float,
step: jnp.ndarray,
) -> jnp.ndarray:
"""Implements exponential moving average (EMA) with TF1-style decay warmup.
Args:
ema_value: previous value.
current_value: current value.
decay: decay ratio.
step: number of steps so far.
Returns:
updated value.
"""
decay = jnp.minimum(decay, (1.0 + step) / (10.0 + step))
return ema_value * decay + current_value * (1 - decay)
def get_lr_schedule(config: DictConfig) -> optax.Schedule:
"""Get learning rate scheduler.
Args:
config: entire configuration.
Returns:
Scheduler
"""
return optax.warmup_cosine_decay_schedule(**config.optimizer.lr_schedule)
def get_every_k_schedule(config: DictConfig) -> int:
"""Get k for gradient accumulations.
Args:
config: entire configuration.
Returns:
k, where gradients are accumulated every k step.
"""
num_devices_per_replica = config.training.num_devices_per_replica
batch_size_per_replica = config.training.batch_size_per_replica
num_replicas = jax.local_device_count() // num_devices_per_replica
batch_size_per_step = batch_size_per_replica * num_replicas
if config.training.batch_size < batch_size_per_step:
raise ValueError(
f"Batch size {config.training.batch_size} is too small. "
f"batch_size_per_replica * num_replicas = "
f"{batch_size_per_replica} * {num_replicas} = "
f"{batch_size_per_step}."
)
if config.training.batch_size % batch_size_per_step != 0:
raise ValueError(
"Batch size cannot be evenly divided by batch size per step."
)
every_k_schedule = config.training.batch_size // batch_size_per_step
if every_k_schedule > 1:
logging.info(
f"Using gradient accumulation. "
f"Each model duplicate is stored across {num_devices_per_replica} "
f"shard{'s' if num_devices_per_replica > 1 else ''}. "
f"Each step has {batch_size_per_step} samples. "
f"Gradients are averaged every {every_k_schedule} steps. "
f"Effective batch size is {config.training.batch_size}."
)
return every_k_schedule
def init_optimizer(
config: DictConfig,
) -> Tuple[optax.GradientTransformation, int]:
"""Initialize optimizer.
Args:
config: entire configuration.
Returns:
optimizer and every_k_schedule.
"""
lr_schedule = get_lr_schedule(config)
optimizer = optax.chain(
optax.clip_by_global_norm(config.optimizer.grad_norm),
getattr(optax, config.optimizer.name)(
learning_rate=lr_schedule, **config.optimizer.kwargs
),
)
# accumulate gradient when needed
every_k_schedule = get_every_k_schedule(config)
if every_k_schedule == 1:
# no need to accumulate gradient
return optimizer, every_k_schedule
optimizer = optax.MultiSteps(optimizer, every_k_schedule=every_k_schedule)
return optimizer, every_k_schedule
| 3,308 | 30.216981 | 79 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/exp/experiment.py | """Module for launching experiments."""
import logging
from functools import partial
from pathlib import Path
from typing import Callable, Dict, Mapping, Optional, Tuple, Union
import chex
import haiku as hk
import jax
import jax.numpy as jnp
import jmp
import optax
import tensorflow as tf
from omegaconf import DictConfig
from imgx import IMAGE, REPLICA_AXIS, TEST_SPLIT, VALID_SPLIT
from imgx.datasets import IMAGE_SHAPE_MAP
from imgx.datasets.augmentation import build_aug_fn_from_config
from imgx.datasets.iterator import get_image_tfds_dataset
from imgx.datasets.util import py_prefetch
from imgx.device import (
bind_rng_to_host_or_device,
broadcast_to_local_devices,
get_first_replica_values,
is_tpu,
)
from imgx.exp.eval import build_batch_eval_fn, build_dataset_eval_fn
from imgx.exp.loss import build_loss_fn
from imgx.exp.mixed_precision import get_mixed_precision_policy, select_tree
from imgx.exp.optim import ema_update, get_lr_schedule, init_optimizer
from imgx.exp.train_state import TrainState
def init_train_state(
batch: chex.ArrayTree,
rng: jax.random.PRNGKey,
loss_init: Callable,
config: DictConfig,
) -> TrainState:
"""Initialize train_state.
Args:
batch: a batch example.
rng: random key.
loss_init: init function of loss.
config: entire configuration.
"""
config_mp = config.training.mixed_precision
config_ema = config.training.ema
# init network
rng, train_rng = jax.random.split(rng)
params, network_state = loss_init(rng, batch)
ema_params = params if config_ema.use else None
ema_network_state = network_state if config_ema.use else None
# count params on one device
params_count = sum(x.size for x in jax.tree_util.tree_leaves(params))
logging.info(f"The model has {params_count:,} parameters.")
# init optimizer state
optimizer, _ = init_optimizer(config=config)
opt_state = optimizer.init(params)
# init loss_scale
# it is necessary to use NoOpLossScale even not intended to use mp
# otherwise, some unknown default policy may be used
# resulted in non-converging losses and nans
loss_scale = jmp.NoOpLossScale()
if config_mp.use and (not is_tpu()):
# no need to scale on TPU
# https://cloud.google.com/tpu/docs/bfloat16
scale = jmp.half_dtype()(2**15)
loss_scale = jmp.DynamicLossScale(scale)
global_step = jnp.array(0, dtype=jnp.int32)
return TrainState( # type: ignore[call-arg]
params=params,
network_state=network_state,
opt_state=opt_state,
loss_scale=loss_scale,
global_step=global_step,
rng=train_rng,
ema_params=ema_params,
ema_network_state=ema_network_state,
)
def update_parameters(
train_state: TrainState,
batch: Mapping[str, chex.ArrayTree],
loss_apply: Callable,
config: DictConfig,
) -> Tuple[TrainState, chex.ArrayTree]:
"""Updates parameters.
Mixed precision references:
- https://github.com/deepmind/jmp
- https://github.com/deepmind/dm-haiku/blob/main/examples/imagenet
Args:
train_state: training state.
batch: training data.
loss_apply: apply of loss function.
config: entire configuration.
Returns:
train_state: training state.
scalars: metric dict.
"""
def loss_fn(
params: hk.Params,
network_state: hk.State,
loss_scale: jmp.LossScale,
rng_key: jax.random.PRNGKey,
batch_data: chex.ArrayTree,
) -> Tuple[chex.ArrayTree, Tuple[chex.ArrayTree, hk.State]]:
"""Regroup loss output.
Args:
params: network parameters.
network_state: network state.
loss_scale: scale loss for mixed precision.
rng_key: random key.
batch_data: data of a batch.
Returns:
- loss
- (metric dict, model state)
"""
(loss, batch_scalars), network_state = loss_apply(
params, network_state, rng_key, batch_data
)
return loss_scale.scale(loss), (batch_scalars, network_state)
config_mp = config.training.mixed_precision
config_ema = config.training.ema
aug_fn = build_aug_fn_from_config(config)
# get random key for the step
rng, step_rng = jax.random.split(train_state.rng)
aug_rng, step_rng = jax.random.split(step_rng)
aug_rng = bind_rng_to_host_or_device(
aug_rng, bind_to="device", axis_name=REPLICA_AXIS
)
step_rng = bind_rng_to_host_or_device(
step_rng, bind_to="device", axis_name=REPLICA_AXIS
)
# data augmentation
batch = aug_fn(aug_rng, batch)
# gradient calculation
grad_loss_fn = jax.grad(loss_fn, has_aux=True)
grads, (scalars, updated_network_state) = grad_loss_fn(
train_state.params,
train_state.network_state,
train_state.loss_scale,
step_rng,
batch,
)
scalars["grad_norm_before_pmean"] = optax.global_norm(grads)
scalars["params_norm"] = optax.global_norm(train_state.params)
# grads are in "param_dtype" (likely float32)
# cast them back to compute dtype such that
# we do the all-reduce below in the compute precision
# which is typically lower than the param precision
policy = get_mixed_precision_policy(config_mp.use)
grads = policy.cast_to_compute(grads)
grads = train_state.loss_scale.unscale(grads)
# take the mean across all replicas to keep params in sync
grads = jax.lax.pmean(grads, axis_name=REPLICA_AXIS)
# compute our optimizer update in the same precision as params
grads = policy.cast_to_param(grads)
# update parameters
optimizer, every_k_schedule = init_optimizer(config=config)
updates, updated_opt_state = optimizer.update(
grads, train_state.opt_state, train_state.params
)
updated_params = optax.apply_updates(train_state.params, updates)
scalars["lr"] = get_lr_schedule(config)(
train_state.global_step // every_k_schedule
)
scalars["grad_norm"] = optax.global_norm(grads)
scalars["grad_update_norm"] = optax.global_norm(updates)
grads_finite = jmp.all_finite(grads)
updated_loss_scale = train_state.loss_scale.adjust(grads_finite)
# mixed precision or not, skip non-finite gradients
(updated_params, updated_network_state, updated_opt_state) = select_tree(
grads_finite,
(updated_params, updated_network_state, updated_opt_state),
(
train_state.params,
train_state.network_state,
train_state.opt_state,
),
)
scalars["loss_scale"] = updated_loss_scale.loss_scale
# average metrics across replicas
min_scalars = {}
max_scalars = {}
mean_scalars = {}
for k in scalars:
if k.startswith("min_"):
min_scalars[k] = scalars[k]
elif k.startswith("max_"):
max_scalars[k] = scalars[k]
else:
mean_scalars[k] = scalars[k]
min_scalars = jax.lax.pmin(min_scalars, axis_name=REPLICA_AXIS)
max_scalars = jax.lax.pmax(max_scalars, axis_name=REPLICA_AXIS)
mean_scalars = jax.lax.pmean(mean_scalars, axis_name=REPLICA_AXIS)
scalars = {
**min_scalars,
**max_scalars,
**mean_scalars,
}
# update train_state
train_state = train_state.replace(
params=updated_params,
network_state=updated_network_state,
opt_state=updated_opt_state,
loss_scale=updated_loss_scale,
rng=rng,
)
if train_state.ema_params is not None:
ema = partial(
ema_update,
decay=config_ema.decay,
step=train_state.global_step,
)
ema_params = jax.tree_map(ema, train_state.ema_params, updated_params)
ema_network_state = jax.tree_map(
ema, train_state.ema_network_state, updated_network_state
)
train_state = train_state.replace(
ema_params=ema_params,
ema_network_state=ema_network_state,
)
train_state = train_state.replace(
global_step=train_state.global_step + 1,
)
return train_state, scalars
def batch_eval(
batch: Mapping[str, chex.ArrayTree],
config: DictConfig,
) -> Tuple[chex.ArrayTree, chex.ArrayTree]:
"""Calculate prediction and metrics given a batch, without loss.
Args:
batch: input batch data.
config: entire configuration.
Returns:
- metrics.
- prediction.
"""
eval_fn = build_batch_eval_fn(
config=config,
)
return eval_fn(batch)
class Experiment:
"""Experiment for supervised training."""
def __init__(self, config: DictConfig) -> None:
"""Initializes experiment.
Args:
config: experiment config.
"""
# Do not use accelerators in data pipeline.
tf.config.experimental.set_visible_devices([], device_type="GPU")
tf.config.experimental.set_visible_devices([], device_type="TPU")
# save args
self.config = config
# init data loaders and networks
self.dataset = get_image_tfds_dataset(
dataset_name=self.config.data.name,
config=self.config,
)
self.train_iter = py_prefetch(lambda: self.dataset.train_iter)
self.valid_iter = py_prefetch(lambda: self.dataset.valid_iter)
self.test_iter = py_prefetch(lambda: self.dataset.test_iter)
def train_init(self) -> TrainState:
"""Initialize data loader, loss, networks for training.
Returns:
initialized training state.
"""
# init loss
loss = hk.transform_with_state(build_loss_fn(config=self.config))
# the batch is for multi-devices
# (num_models, ...)
# num_models is not the same as num_devices_per_replica
batch = next(self.train_iter)
batch = get_first_replica_values(batch)
# check image size
data_config = self.config["data"]
image_shape = IMAGE_SHAPE_MAP[data_config["name"]]
chex.assert_equal(batch[IMAGE].shape[1:4], image_shape)
aug_fn = build_aug_fn_from_config(self.config)
aug_rng = jax.random.PRNGKey(self.config["seed"])
batch = aug_fn(aug_rng, batch)
# init train state on cpu first
rng = jax.random.PRNGKey(self.config.seed)
train_state = jax.jit(
partial(init_train_state, loss_init=loss.init, config=self.config),
backend="cpu",
)(
batch=batch,
rng=rng,
)
# then broadcast train_state to devices
train_state = broadcast_to_local_devices(train_state)
# define pmap-ed update func
self.update_params_pmap = jax.pmap(
partial(
update_parameters,
loss_apply=loss.apply,
config=self.config,
),
axis_name=REPLICA_AXIS,
donate_argnums=(0,),
)
return train_state
def train_step(
self,
train_state: TrainState,
) -> Union[TrainState, chex.ArrayTree]:
"""Training step.
Args:
train_state: training state.
Returns:
- updated train_state.
- metric dict.
"""
batch = next(self.train_iter)
train_state, scalars = self.update_params_pmap(
train_state,
batch,
)
scalars = get_first_replica_values(scalars)
scalars = jax.tree_map(lambda x: x.item(), scalars) # tensor to values
return train_state, scalars
def eval_init(self) -> None:
"""Initialize data loader, loss, networks for validation."""
evaluate = hk.transform_with_state(
partial(batch_eval, config=self.config)
)
self.evaluate_pmap = jax.pmap(
evaluate.apply,
axis_name=REPLICA_AXIS,
)
self.eval_dataset = build_dataset_eval_fn(self.config)
def eval_step(
self,
split: str,
params: hk.Params,
state: hk.State,
rng: jax.random.PRNGKey,
out_dir: Optional[Path],
save_predictions: bool,
) -> Dict:
"""Validation step on entire validation data set.
Args:
split: data split.
params: network parameters.
state: network state.
rng: random key.
out_dir: output directory to save metrics and predictions,
if None, no files will be saved.
save_predictions: if True, save predicted masks.
Returns:
metric dict.
Raises:
ValueError: if split is not supported.
"""
if split not in [VALID_SPLIT, TEST_SPLIT]:
raise ValueError(
"Evaluation can only be performed on valid and test splits."
)
if split == VALID_SPLIT:
batch_iterator = self.valid_iter
num_steps = self.dataset.num_valid_steps
else:
batch_iterator = self.test_iter
num_steps = self.dataset.num_test_steps
if out_dir is not None:
out_dir.mkdir(parents=True, exist_ok=True)
return self.eval_dataset(
evaluate_pmap=self.evaluate_pmap,
params=params,
state=state,
rng=rng,
batch_iterator=batch_iterator,
num_steps=num_steps,
out_dir=out_dir,
save_predictions=save_predictions,
)
| 13,642 | 30.801865 | 79 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/exp/loss.py | """Module for building models and losses."""
from typing import Callable, Dict, Tuple
import haiku as hk
import jax
import jax.numpy as jnp
from omegaconf import DictConfig
from imgx import IMAGE, LABEL
from imgx.datasets import NUM_CLASSES_MAP
from imgx.diffusion.gaussian_diffusion import (
DiffusionModelOutputType,
DiffusionModelVarianceType,
GaussianDiffusion,
)
from imgx.exp.mixed_precision import set_mixed_precision_policy
from imgx.exp.model import build_diffusion_model, build_vision_model
from imgx.loss import mean_cross_entropy, mean_focal_loss
from imgx.loss.dice import (
dice_loss,
mean_with_background,
mean_without_background,
)
from imgx.metric import class_proportion
def segmentation_loss_with_aux(
logits: jnp.ndarray,
mask_true: jnp.ndarray,
loss_config: DictConfig,
) -> Tuple[jnp.ndarray, Dict[str, jnp.ndarray]]:
"""Calculate segmentation loss with auxiliary losses and return metrics.
Args:
logits: unnormalised logits of shape (batch, ..., num_classes).
mask_true: one hot label of shape (batch, ..., num_classes).
loss_config: have weights of diff losses.
Returns:
- calculated loss.
- metrics.
"""
scalars = {}
# Dice
# (batch, num_classes)
dice_loss_batch_cls = dice_loss(
logits=logits,
mask_true=mask_true,
)
# (1,)
dice_loss_scalar = jax.lax.cond(
loss_config["dice_include_background"],
mean_with_background,
mean_without_background,
dice_loss_batch_cls,
)
scalars["mean_dice_loss"] = dice_loss_scalar
# metrics
for i in range(dice_loss_batch_cls.shape[-1]):
scalars[f"mean_dice_loss_class_{i}"] = jnp.nanmean(
dice_loss_batch_cls[:, i]
)
scalars[f"min_dice_loss_class_{i}"] = jnp.nanmin(
dice_loss_batch_cls[:, i]
)
scalars[f"max_dice_loss_class_{i}"] = jnp.nanmax(
dice_loss_batch_cls[:, i]
)
scalars["mean_dice_loss"] = jnp.nanmean(dice_loss_batch_cls)
scalars["min_dice_loss"] = jnp.nanmin(dice_loss_batch_cls)
scalars["max_dice_loss"] = jnp.nanmax(dice_loss_batch_cls)
# cross entropy
ce_loss_scalar = mean_cross_entropy(
logits=logits,
mask_true=mask_true,
)
scalars["mean_cross_entropy_loss"] = ce_loss_scalar
# focal loss
focal_loss_scalar = mean_focal_loss(
logits=logits,
mask_true=mask_true,
)
scalars["mean_focal_loss"] = focal_loss_scalar
# total loss
loss_scalar = 0
if loss_config["dice"] > 0:
loss_scalar += dice_loss_scalar * loss_config["dice"]
if loss_config["cross_entropy"] > 0:
loss_scalar += ce_loss_scalar * loss_config["cross_entropy"]
if loss_config["focal"] > 0:
loss_scalar += focal_loss_scalar * loss_config["focal"]
# class proportion
# (batch, num_classes)
cls_prop = class_proportion(mask_true)
for i in range(dice_loss_batch_cls.shape[-1]):
scalars[f"mean_proportion_class_{i}"] = jnp.nanmean(cls_prop[:, i])
scalars[f"min_proportion_class_{i}"] = jnp.nanmin(cls_prop[:, i])
scalars[f"max_proportion_class_{i}"] = jnp.nanmax(cls_prop[:, i])
return loss_scalar, scalars
def segmentation_loss(
input_dict: Dict[str, jnp.ndarray],
model: hk.Module,
num_classes: int,
loss_config: DictConfig,
) -> Tuple[jnp.ndarray, Dict[str, jnp.ndarray]]:
"""Calculate segmentation loss and return metrics.
Args:
input_dict: input data having image and label.
model: network instance.
num_classes: number of classes including background.
loss_config: have weights of diff losses.
Returns:
- calculated loss.
- metrics.
"""
# (batch, ..., 1)
image = jnp.expand_dims(input_dict[IMAGE], axis=-1)
# (batch, ..., num_classes)
logits = model(image=image, is_train=True)
# (batch, ..., num_classes)
mask_true = jax.nn.one_hot(
input_dict[LABEL], num_classes=num_classes, axis=-1
)
return segmentation_loss_with_aux(
logits=logits,
mask_true=mask_true,
loss_config=loss_config,
)
def diffusion_loss( # pylint:disable=R0915
input_dict: Dict[str, jnp.ndarray],
num_classes: int,
gd: GaussianDiffusion,
loss_config: DictConfig,
recycle: bool,
) -> Tuple[jnp.ndarray, Dict[str, jnp.ndarray]]:
"""Calculate diffusion loss and return metrics.
In diffusion, the noise is defined on segmentation mask.
That is, x_t is segmentation logits.
Args:
input_dict: input data having image, label, and time_step.
image: (batch, ...)
label: (batch, ..., num_classes)
time_step: (batch, )
num_classes: number of classes including background.
gd: model for sampling.
loss_config: have weights of diff losses.
recycle: recycle model prediction or not.
Returns:
- calculated loss.
- metrics.
"""
scalars = {}
# (batch, ..., 1)
image = jnp.expand_dims(input_dict[IMAGE], axis=-1)
# (batch, ..., num_classes)
mask_true = jax.nn.one_hot(
input_dict[LABEL],
num_classes=num_classes,
axis=-1,
dtype=image.dtype,
)
# noise are standard normal distribution
x_start = mask_true * 2 - 1
# (batch, )
t = gd.sample_timestep(batch_size=image.shape[0])
if recycle:
# (batch, ..., num_classes)
noise_recyle = gd.noise_sample(shape=x_start.shape, dtype=x_start.dtype)
t_recycle = jnp.minimum(t + 1, gd.num_timesteps - 1)
x_t_recycle = gd.q_sample(
x_start=x_start, noise=noise_recyle, t=t_recycle
)
# (batch, ..., ch_input + num_classes)
model_in_recycle = jnp.concatenate([image, x_t_recycle], axis=-1)
# (batch, ..., num_classes) or (batch, ..., 2*num_classes)
# model outputs are always logits
model_out_recycle = gd.model(model_in_recycle, t_recycle, is_train=True)
x_start_recycle, _, _ = gd.p_mean_variance(
model_out=model_out_recycle,
x_t=x_t_recycle,
t=t_recycle,
)
x_start = jax.lax.stop_gradient(x_start_recycle)
# (batch, ..., num_classes)
noise = gd.noise_sample(shape=x_start.shape, dtype=x_start.dtype)
x_t = gd.q_sample(x_start=x_start, noise=noise, t=t)
# (batch, ..., ch_input + num_classes)
model_in = jnp.concatenate([image, x_t], axis=-1)
# (batch, ..., num_classes) or (batch, ..., 2*num_classes)
# model outputs are always logits
model_out = gd.model(model_in, t, is_train=True)
model_out_vlb = jax.lax.stop_gradient(model_out)
if gd.model_var_type in [
DiffusionModelVarianceType.LEARNED,
DiffusionModelVarianceType.LEARNED_RANGE,
]:
# model_out (batch, ..., num_classes)
model_out, log_variance = jnp.split(
model_out, indices_or_sections=2, axis=-1
)
# apply a stop-gradient to the mean output for the vlb to prevent
# this loss change mean prediction
model_out_vlb = jax.lax.stop_gradient(model_out)
# model_out (batch, ..., num_classes*2)
model_out_vlb = jnp.concatenate([model_out_vlb, log_variance], axis=-1)
vlb_scalar = gd.variational_lower_bound(
model_out=model_out_vlb,
x_start=x_start,
x_t=x_t,
t=t,
)
vlb_scalar = jnp.nanmean(vlb_scalar)
scalars["vlb_loss"] = vlb_scalar
if gd.model_out_type == DiffusionModelOutputType.EPSILON:
mse_loss_scalar = jnp.mean((model_out - noise) ** 2)
scalars["mse_loss"] = mse_loss_scalar
x_start = gd.predict_xstart_from_epsilon_xt(
x_t=x_t, epsilon=model_out, t=t
)
logits = gd.x_to_logits(x_start)
seg_loss_scalar, seg_scalars = segmentation_loss_with_aux(
logits=logits,
mask_true=mask_true,
loss_config=loss_config,
)
scalars = {**scalars, **seg_scalars}
loss_scalar = loss_config["mse"] * mse_loss_scalar + seg_loss_scalar
elif gd.model_out_type == DiffusionModelOutputType.X_START:
logits = model_out
loss_scalar, seg_scalars = segmentation_loss_with_aux(
logits=logits,
mask_true=mask_true,
loss_config=loss_config,
)
scalars = {**scalars, **seg_scalars}
else:
raise ValueError(
f"Unknown DiffusionModelOutputType {gd.model_out_type}."
)
if gd.model_var_type in [
DiffusionModelVarianceType.LEARNED,
DiffusionModelVarianceType.LEARNED_RANGE,
]:
# TODO nan values may happen
loss_scalar += vlb_scalar * gd.num_timesteps / gd.num_timesteps_beta
scalars["total_loss"] = loss_scalar
scalars["mean_t"] = jnp.mean(t)
scalars["max_t"] = jnp.max(t)
scalars["min_t"] = jnp.min(t)
return loss_scalar, scalars
def build_loss_fn(
config: DictConfig,
) -> Callable[
[Dict[str, jnp.ndarray]], Tuple[jnp.ndarray, Dict[str, jnp.ndarray]]
]:
"""Build model from config.
Args:
config: entire config.
Returns:
Loss function.
Raises:
ValueError: if config is wrong or not supported.
"""
data_config = config.data
task_config = config.task
model_config = config.model
loss_config = config.loss
mp_config = config.training.mixed_precision
set_mixed_precision_policy(
use_mp=mp_config.use, model_name=model_config.name
)
# number of classes including background
num_classes = NUM_CLASSES_MAP[data_config["name"]]
if task_config["name"] == "segmentation":
def seg_loss_fn(
input_dict: Dict[str, jnp.ndarray]
) -> Tuple[jnp.ndarray, Dict[str, jnp.ndarray]]:
vision_model = build_vision_model(
data_config=data_config,
task_config=task_config,
model_config=model_config,
)
return segmentation_loss(
input_dict=input_dict,
model=vision_model,
num_classes=num_classes,
loss_config=loss_config,
)
return seg_loss_fn
if task_config["name"] == "diffusion":
def diffusion_loss_fn(
input_dict: Dict[str, jnp.ndarray]
) -> Tuple[jnp.ndarray, Dict[str, jnp.ndarray]]:
vision_model = build_vision_model(
data_config=data_config,
task_config=task_config,
model_config=model_config,
)
diffusion_model = build_diffusion_model(
model=vision_model,
diffusion_config=task_config["diffusion"],
)
recycle = task_config["diffusion"]["recycle"]
return diffusion_loss(
input_dict=input_dict,
gd=diffusion_model,
num_classes=num_classes,
loss_config=loss_config,
recycle=recycle,
)
return diffusion_loss_fn
raise ValueError(f"Unknown task {task_config['name']}.")
| 11,245 | 30.858357 | 80 | py |
ImgX-DiffSeg | ImgX-DiffSeg-main/imgx/exp/model.py | """Module for building models."""
import haiku as hk
from omegaconf import DictConfig
from imgx.datasets import IMAGE_SHAPE_MAP, NUM_CLASSES_MAP
from imgx.diffusion.gaussian_diffusion import (
DiffusionBetaSchedule,
DiffusionModelOutputType,
DiffusionModelVarianceType,
DiffusionSpace,
GaussianDiffusion,
)
from imgx.model import Unet3d, Unet3dSlice, Unet3dSliceTime, Unet3dTime
def build_vision_model(
data_config: DictConfig,
task_config: DictConfig,
model_config: DictConfig,
) -> hk.Module:
"""Build model from config.
Args:
data_config: have in_shape and out_channels.
task_config: have task name and configs.
model_config: have model name attribute.
Returns:
Model.
Raises:
ValueError: if config is wrong or not supported.
"""
if model_config.name not in model_config:
raise ValueError(f"Missing configuration for {model_config.name}.")
dataset_name = data_config["name"]
image_shape = IMAGE_SHAPE_MAP[dataset_name]
num_classes = NUM_CLASSES_MAP[dataset_name]
if task_config.name == "segmentation":
# TODO use enum
in_channels = 1 # forward will expand dimension
out_channels = num_classes
elif task_config.name == "diffusion":
# diffusion model takes the image and a noised mask/logits as input
in_channels = 1 + num_classes
# diffusion model may output variance per class
out_channels = num_classes
model_var_type = DiffusionModelVarianceType[
task_config["diffusion"]["model_var_type"].upper()
]
if model_var_type in [
DiffusionModelVarianceType.LEARNED,
DiffusionModelVarianceType.LEARNED_RANGE,
]:
out_channels *= 2
else:
raise ValueError(f"Unknown task {task_config.name}.")
total_config = {
"remat": model_config.remat,
"in_shape": image_shape,
"in_channels": in_channels,
"out_channels": out_channels,
**model_config[model_config.name],
}
if model_config.name == "unet3d":
return Unet3d(**total_config)
if model_config.name == "unet3d_slice":
return Unet3dSlice(**total_config)
if model_config.name == "unet3d_time":
num_timesteps = task_config["diffusion"]["num_timesteps"]
return Unet3dTime(num_timesteps=num_timesteps, **total_config)
if model_config.name == "unet3d_slice_time":
num_timesteps = task_config["diffusion"]["num_timesteps"]
return Unet3dSliceTime(num_timesteps=num_timesteps, **total_config)
raise ValueError(f"Unknown model {model_config.name}.")
def build_diffusion_model(
model: hk.Module,
diffusion_config: DictConfig,
) -> GaussianDiffusion:
"""Build diffusion model from config and vision model.
Args:
model: the model used in diffusion.
diffusion_config: config for diffusion setting.
Returns:
A GaussianDiffusion model.
"""
num_timesteps = diffusion_config["num_timesteps"]
num_timesteps_beta = diffusion_config["num_timesteps_beta"]
beta_config = diffusion_config["beta"].copy()
beta_config["beta_schedule"] = DiffusionBetaSchedule[
beta_config["beta_schedule"].upper()
]
model_out_type = DiffusionModelOutputType[
diffusion_config["model_out_type"].upper()
]
model_var_type = DiffusionModelVarianceType[
diffusion_config["model_var_type"].upper()
]
x_space = DiffusionSpace[diffusion_config["x_space"].upper()]
x_limit = diffusion_config["x_limit"]
use_ddim = diffusion_config["use_ddim"]
return GaussianDiffusion(
model=model,
num_timesteps=num_timesteps,
num_timesteps_beta=num_timesteps_beta,
model_out_type=model_out_type,
model_var_type=model_var_type,
x_space=x_space,
x_limit=x_limit,
use_ddim=use_ddim,
**beta_config,
)
| 3,969 | 32.644068 | 75 | py |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.