| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def get_serving_input_fn(self, hparams):
"""Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams object.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
"""
features_ind = [
idx for idx in s... | Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams object.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
| get_serving_input_fn | python | google/model_search | model_search/data/csv_data_for_binary.py | https://github.com/google/model_search/blob/master/model_search/data/csv_data_for_binary.py | Apache-2.0 |
def get_keras_input(self, batch_size):
"""Returns keras input as explained in data.py module."""
del batch_size
dataset = pd.read_csv(self._filename)
labels = dataset.pop(dataset.columns.values[self._label_index])
labels = np.array(labels)
features = np.array(dataset)
validation_features = ... | Returns keras input as explained in data.py module. | get_keras_input | python | google/model_search | model_search/data/csv_data_for_binary.py | https://github.com/google/model_search/blob/master/model_search/data/csv_data_for_binary.py | Apache-2.0 |
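The `get_keras_input` pattern above reads the CSV, pops the label column by position, and converts the remainder to dense arrays. A self-contained sketch of that pattern (the `keras_input_from_csv` name and `csv_source`/`label_index` parameters are illustrative, not the library's API):

```python
import io
import numpy as np
import pandas as pd

def keras_input_from_csv(csv_source, label_index):
    # Pop the label column by position, then turn the remaining
    # frame into dense feature/label arrays.
    dataset = pd.read_csv(csv_source)
    labels = np.array(dataset.pop(dataset.columns.values[label_index]))
    features = np.array(dataset)
    return features, labels

csv = io.StringIO("a,b,y\n1,2,0\n3,4,1\n")
features, labels = keras_input_from_csv(csv, label_index=2)
print(features.tolist(), labels.tolist())   # → [[1, 2], [3, 4]] [0, 1]
```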
def get_input_fn(self, hparams, mode, batch_size):
"""Returns an `input_fn` for train and evaluation.
Args:
hparams: tf.HParams for the experiment.
mode: Defines whether this is training or evaluation. See
`estimator.ModeKeys`.
batch_size: the batch size for training and eval.
Re... | Returns an `input_fn` for train and evaluation.
Args:
hparams: tf.HParams for the experiment.
mode: Defines whether this is training or evaluation. See
`estimator.ModeKeys`.
batch_size: the batch size for training and eval.
Returns:
Returns an `input_fn` for train or evaluation... | get_input_fn | python | google/model_search | model_search/data/data.py | https://github.com/google/model_search/blob/master/model_search/data/data.py | Apache-2.0 |
def get_serving_input_fn(self, hparams):
"""Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams for the experiment.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
""" | Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams for the experiment.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
| get_serving_input_fn | python | google/model_search | model_search/data/data.py | https://github.com/google/model_search/blob/master/model_search/data/data.py | Apache-2.0 |
def get_input_layer_fn(self, problem_type):
"""Provides the function for converting feature Tensors to an input layer.
Most users do not need to modify this function. In the typical use case,
the user would only need to implement `get_feature_columns`, and the default
implementation of this method woul... | Provides the function for converting feature Tensors to an input layer.
Most users do not need to modify this function. In the typical use case,
the user would only need to implement `get_feature_columns`, and the default
implementation of this method would take care of converting the feature
Tensors i... | get_input_layer_fn | python | google/model_search | model_search/data/data.py | https://github.com/google/model_search/blob/master/model_search/data/data.py | Apache-2.0 |
def get_serving_input_fn(self, hparams):
"""Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams object.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
"""
tf.compat.v1.disable_eager_execution()
... | Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams object.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
| get_serving_input_fn | python | google/model_search | model_search/data/image_data.py | https://github.com/google/model_search/blob/master/model_search/data/image_data.py | Apache-2.0 |
def get_serving_input_fn(self, hparams):
"""Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams object.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
"""
tf.compat.v1.disable_eager_execution()
... | Returns an `input_fn` for serving in an exported SavedModel.
Args:
hparams: tf.HParams object.
Returns:
Returns an `input_fn` that takes no arguments and returns a
`ServingInputReceiver`.
| get_serving_input_fn | python | google/model_search | model_search/data/image_data_for_binary.py | https://github.com/google/model_search/blob/master/model_search/data/image_data_for_binary.py | Apache-2.0 |
def get_keras_input(self, batch_size):
"""Returns keras input as explained in data.py module."""
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
directory=self._input_dir,
labels='inferred',
label_mode='binary',
class_names=None,
color_mode='rgb',
... | Returns keras input as explained in data.py module. | get_keras_input | python | google/model_search | model_search/data/image_data_for_binary.py | https://github.com/google/model_search/blob/master/model_search/data/image_data_for_binary.py | Apache-2.0 |
def _wait_for_chief(self, model_dir):
"""Waits on a directory till a checkpoint appears in it.
Args:
model_dir: string - the directory to wait on.
"""
my_id = architecture_utils.DirectoryHandler.get_trial_id(
model_dir, self._phoenix_spec)
while not tf.train.latest_checkpoint(model_di... | Waits on a directory till a checkpoint appears in it.
Args:
model_dir: string - the directory to wait on.
| _wait_for_chief | python | google/model_search | model_search/generators/base_tower_generator.py | https://github.com/google/model_search/blob/master/model_search/generators/base_tower_generator.py | Apache-2.0 |
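`_wait_for_chief` above polls the chief's model directory until `tf.train.latest_checkpoint` returns a path. A minimal framework-free sketch of that loop, with a hypothetical `probe` callable standing in for the checkpoint lookup:

```python
import time

def wait_for_checkpoint(probe, poll_interval_secs=0.0, max_polls=None):
    # `probe` stands in for tf.train.latest_checkpoint(model_dir): it
    # returns a checkpoint path once one exists, else a falsy value.
    polls = 0
    while not (path := probe()):
        polls += 1
        if max_polls is not None and polls >= max_polls:
            return None
        time.sleep(poll_interval_secs)
    return path

# Simulate a checkpoint appearing on the third poll.
results = iter([None, None, "model.ckpt-100"])
print(wait_for_checkpoint(lambda: next(results)))   # → model.ckpt-100
```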
def _build_from_existing_checkpoint(self, model_dir, trial_mode,
logits_dimension, is_training):
"""Builds the neural network from an existing checkpoint.
This function builds or replicates the model network given a model_dir with
a checkpoint.
Args:
model_d... | Builds the neural network from an existing checkpoint.
This function builds or replicates the model network given a model_dir with
a checkpoint.
Args:
model_dir: a string holding the directory of the model. Must have a
checkpoint in it.
trial_mode: the TrialMode for the current Phoenix... | _build_from_existing_checkpoint | python | google/model_search | model_search/generators/base_tower_generator.py | https://github.com/google/model_search/blob/master/model_search/generators/base_tower_generator.py | Apache-2.0 |
def generate(self, input_layer_fn, trial_mode, logits_dimension, hparams,
run_config, is_training, trials) -> List[tower.Tower]:
"""Generates the next architecture to try.
Args:
input_layer_fn: A function that converts feature Tensors to input layer.
See learning.autolx.model_searc... | Generates the next architecture to try.
Args:
input_layer_fn: A function that converts feature Tensors to input layer.
See learning.autolx.model_search.data.Provider.get_input_layer_fn
for details.
trial_mode: the TrialMode for the current Phoenix trial.
logits_dimension: An int h... | generate | python | google/model_search | model_search/generators/base_tower_generator.py | https://github.com/google/model_search/blob/master/model_search/generators/base_tower_generator.py | Apache-2.0 |
def first_time_chief_generate(self, input_layer_fn, trial_mode,
logits_dimension, hparams, run_config,
is_training, trials):
"""Creates the prior for the ensemble."""
del input_layer_fn
my_id = architecture_utils.DirectoryHandler.get_trial_id(
... | Creates the prior for the ensemble. | first_time_chief_generate | python | google/model_search | model_search/generators/prior_generator.py | https://github.com/google/model_search/blob/master/model_search/generators/prior_generator.py | Apache-2.0 |
def first_time_chief_generate(self, input_layer_fn, trial_mode,
logits_dimension, hparams, run_config,
is_training, trials):
"""Creates the prior for the ensemble."""
del input_layer_fn
my_id = architecture_utils.DirectoryHandler.get_trial_id(
... | Creates the prior for the ensemble. | first_time_chief_generate | python | google/model_search | model_search/generators/replay_generator.py | https://github.com/google/model_search/blob/master/model_search/generators/replay_generator.py | Apache-2.0 |
def _suggest_and_create_architecture(create_new_architecture_fn,
relevant_trials, hparams, my_id,
run_config, search_algorithm, phoenix_spec,
input_layer_fn, is_training,
l... | A function to suggest and create an architecture.
Args:
create_new_architecture_fn: A function to create the architecture with the
following signature Input architecture, A list of block encoding the
architecture. prev_trial, A trial if mutating a previous trial, otherwise
None. input_tensor, A... | _suggest_and_create_architecture | python | google/model_search | model_search/generators/search_candidate_generator.py | https://github.com/google/model_search/blob/master/model_search/generators/search_candidate_generator.py | Apache-2.0 |
def get_trial_mode(ensemble_spec, distillation_spec, trial_id):
"""Determines whether to bundle logits with Ensembler or Distiller.
If distillation and ensembling are specified at the same time, checks pool
sizes to see which phase this trial falls in. If the pool sizes are the same,
then defaults to ENSEMBLE_... | Determines whether to bundle logits with Ensembler or Distiller.
If distillation and ensembling are specified at the same time, checks pool
sizes to see which phase this trial falls in. If the pool sizes are the same,
then defaults to ENSEMBLE_SEARCH.
Args:
ensemble_spec: The spec defined in the Phoenix s... | get_trial_mode | python | google/model_search | model_search/generators/trial_utils.py | https://github.com/google/model_search/blob/master/model_search/generators/trial_utils.py | Apache-2.0 |
def get_intermixed_trials(trials, n, n_user_suggestions):
"""Filters the exploration trials for intermixed ensemble search."""
return [
trial for trial in trials
if trial.id % n != 0 or trial.id <= n_user_suggestions
] | Filters the exploration trials for intermixed ensemble search. | get_intermixed_trials | python | google/model_search | model_search/generators/trial_utils.py | https://github.com/google/model_search/blob/master/model_search/generators/trial_utils.py | Apache-2.0 |
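The modular-arithmetic filter in `get_intermixed_trials` keeps every trial whose id is not a multiple of `n`, plus the first `n_user_suggestions` ids. A sketch over plain integer ids (real `Trial` objects carry more fields):

```python
def intermixed_trials(trial_ids, n, n_user_suggestions):
    # Every n-th trial is reserved for ensembling; the rest (and the
    # user-suggested seed trials) are exploration trials.
    return [t for t in trial_ids if t % n != 0 or t <= n_user_suggestions]

print(intermixed_trials(list(range(1, 11)), n=3, n_user_suggestions=2))
# → [1, 2, 4, 5, 7, 8, 10]
```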
def create_test_trials_intermixed(root_dir):
"""Creates fake trials used for testing."""
trials = [{
'model_dir': os.path.join(root_dir, str(2)),
'id': 2,
'status': 'COMPLETED',
'trial_infeasible': False,
'final_measurement': {
'objective_value': 0.94
},
}, {
'm... | Creates fake trials used for testing. | create_test_trials_intermixed | python | google/model_search | model_search/generators/trial_utils.py | https://github.com/google/model_search/blob/master/model_search/generators/trial_utils.py | Apache-2.0 |
def import_towers_one_trial(phoenix_spec, is_training, logits_dimension,
prev_model_dir, force_freeze, allow_auxiliary_head,
caller_generator, my_model_dir):
"""Imports a previous trial to current model."""
towers = []
imported_towers = 0
for generator in ... | Imports a previous trial to current model. | import_towers_one_trial | python | google/model_search | model_search/generators/trial_utils.py | https://github.com/google/model_search/blob/master/model_search/generators/trial_utils.py | Apache-2.0 |
def import_towers_multiple_trials(phoenix_spec, is_training, logits_dimension,
previous_model_dirs, force_freeze,
allow_auxiliary_head, caller_generator,
my_model_dir):
"""Imports search generators' model from many t... | Imports search generators' model from many trials. | import_towers_multiple_trials | python | google/model_search | model_search/generators/trial_utils.py | https://github.com/google/model_search/blob/master/model_search/generators/trial_utils.py | Apache-2.0 |
def write_replay_spec(model_dir, filename, original_spec, search_architecture,
hparams):
"""Writes a replay spec to retrain the same model."""
# Ensure the same search space as the original run
replay_spec = copy.deepcopy(original_spec)
# Remove user suggestions
replay_spec.ClearField('u... | Writes a replay spec to retrain the same model. | write_replay_spec | python | google/model_search | model_search/generators/trial_utils.py | https://github.com/google/model_search/blob/master/model_search/generators/trial_utils.py | Apache-2.0 |
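`write_replay_spec` deep-copies the original spec, clears the user suggestions, and pins the discovered architecture so the run can be replayed. A dict-based sketch of that flow (a stand-in for the proto; `pop` plays the role of `ClearField`):

```python
import copy

def make_replay_spec(original_spec, search_architecture):
    # Copy the spec so the original run's config is untouched, drop
    # user suggestions, and record the architecture to replay.
    replay_spec = copy.deepcopy(original_spec)
    replay_spec.pop("user_suggestions", None)   # proto: ClearField(...)
    replay_spec["replay"] = {"architecture": list(search_architecture)}
    return replay_spec

spec = {"search_space": ["conv", "dense"], "user_suggestions": ["seed"]}
print(make_replay_spec(spec, ["conv", "dense"]))
```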
def merge(self, hps, name_prefix="", overwrite=True):
"""Merges hyperparameters into this object.
Arguments:
hps: A `HyperParameters` object or list of `HyperParameter` objects.
name_prefix: A string to add to all hparams names in hps.
overwrite: bool. Whether existing `HyperParameter`s shoul... | Merges hyperparameters into this object.
Arguments:
hps: A `HyperParameters` object or list of `HyperParameter` objects.
name_prefix: A string to add to all hparams names in hps.
overwrite: bool. Whether existing `HyperParameter`s should be overridden
by those in `hps` with the same name ... | merge | python | google/model_search | model_search/hparams/hyperparameters.py | https://github.com/google/model_search/blob/master/model_search/hparams/hyperparameters.py | Apache-2.0 |
def get_distillation_loss_fn(teacher_logits, distillation_spec, my_id,
original_loss_fn):
"""Force the loss_fn to compare the student logits to the teacher logits."""
# The logits input below is the student logits
def _mse_teacher_loss_fn(labels, logits, weights=1.0):
"""A mean s... | Force the loss_fn to compare the student logits to the teacher logits. | get_distillation_loss_fn | python | google/model_search | model_search/meta/distillation.py | https://github.com/google/model_search/blob/master/model_search/meta/distillation.py | Apache-2.0 |
def _mse_teacher_loss_fn(labels, logits, weights=1.0):
"""A mean square error with the teacher's logits/predictions."""
del labels # Unused.
return tf.compat.v1.losses.mean_squared_error(
labels=teacher_logits, predictions=logits, weights=weights) | A mean square error with the teacher's logits/predictions. | _mse_teacher_loss_fn | python | google/model_search | model_search/meta/distillation.py | https://github.com/google/model_search/blob/master/model_search/meta/distillation.py | Apache-2.0 |
def _cross_entropy_loss_fn(labels, logits, weights=1.0):
"""A cross entropy with the teacher's predictions."""
del labels # Unused.
return tf.compat.v1.losses.softmax_cross_entropy(
onehot_labels=teacher_logits, logits=logits, weights=weights) | A cross entropy with the teacher's predictions. | _cross_entropy_loss_fn | python | google/model_search | model_search/meta/distillation.py | https://github.com/google/model_search/blob/master/model_search/meta/distillation.py | Apache-2.0 |
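`_cross_entropy_loss_fn` above discards the true labels and scores the student against the teacher's soft targets. A NumPy sketch of that objective (an illustration of the idea, not the TF op):

```python
import numpy as np

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_xent(teacher_logits, student_logits):
    # Cross entropy with the teacher's softmax in place of one-hot labels.
    targets = softmax(teacher_logits)
    log_probs = np.log(softmax(student_logits))
    return float(-(targets * log_probs).sum(axis=-1).mean())

teacher = np.array([[2.0, 0.0], [0.0, 2.0]])
# Matching the teacher is cheaper than contradicting it.
print(distill_xent(teacher, teacher) < distill_xent(teacher, -teacher))  # → True
```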
def _adaptively_balance_losses_loss_fn(loss1_fn,
loss2_fn,
labels,
logits,
weights=1.0):
"""Increasingly grow the distillation loss over time."""
or... | Increasingly grow the distillation loss over time. | _adaptively_balance_losses_loss_fn | python | google/model_search | model_search/meta/distillation.py | https://github.com/google/model_search/blob/master/model_search/meta/distillation.py | Apache-2.0 |
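`_adaptively_balance_losses_loss_fn` grows the distillation term over training. One plausible schedule (a hypothetical linear ramp; the library's actual weighting may differ) looks like:

```python
def balanced_loss(label_loss, distill_loss, step, ramp_steps):
    # alpha ramps 0 -> 1 over ramp_steps: early training is driven by
    # the label loss, late training by the distillation loss.
    alpha = min(step / ramp_steps, 1.0)
    return (1.0 - alpha) * label_loss + alpha * distill_loss

for step in (0, 50, 100):
    print(balanced_loss(1.0, 3.0, step, ramp_steps=100))   # → 1.0, 2.0, 3.0
```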
def bundle_logits(self, priors_logits_specs, search_logits_specs):
"""Bundles the priors and the search candidate."""
assert search_logits_specs, "Cannot distill with no student model."
assert len(search_logits_specs) == 1, "Search has more than one tower."
if not priors_logits_specs:
return Dis... | Bundles the priors and the search candidate. | bundle_logits | python | google/model_search | model_search/meta/distillation.py | https://github.com/google/model_search/blob/master/model_search/meta/distillation.py | Apache-2.0 |
def __init__(self, vars_to_warm_start, current_trial_id, completed_trials,
discount_factor, max_completed_trials, model_dir):
"""Initializes a new BaseTransferLearningHoook instance.
Args:
vars_to_warm_start: The variables to warm start from previous trials.
current_trial_id: The id ... | Initializes a new BaseTransferLearningHoook instance.
Args:
vars_to_warm_start: The variables to warm start from previous trials.
current_trial_id: The id of the current trial.
completed_trials: The list of successfully completed trials. Will be used
to warm start the variables of the cur... | __init__ | python | google/model_search | model_search/meta/transfer_learning.py | https://github.com/google/model_search/blob/master/model_search/meta/transfer_learning.py | Apache-2.0 |
def _sort_previous_trial_variables(self, trial_to_var):
"""Sorts trial_to_var in an order implemented by the subclass.
Args:
trial_to_var: A list of (Trial, tf.Variable) tuples corresponding to
previous trial variables which match variables in the current trial.
Returns:
The trial_to_v... | Sorts trial_to_var in an order implemented by the subclass.
Args:
trial_to_var: A list of (Trial, tf.Variable) tuples corresponding to
previous trial variables which match variables in the current trial.
Returns:
The trial_to_var list in sorted order.
| _sort_previous_trial_variables | python | google/model_search | model_search/meta/transfer_learning.py | https://github.com/google/model_search/blob/master/model_search/meta/transfer_learning.py | Apache-2.0 |
def _combine_previous_trial_variables(self, trial_to_var):
"""Combines the variables from previous trials.
Args:
trial_to_var: A list of (Trial, tf.Variable) tuples corresponding to
previous trial variables which match variables in the current trial.
Returns:
The tf.Variable with the c... | Combines the variables from previous trials.
Args:
trial_to_var: A list of (Trial, tf.Variable) tuples corresponding to
previous trial variables which match variables in the current trial.
Returns:
The tf.Variable with the combined value of all variables.
| _combine_previous_trial_variables | python | google/model_search | model_search/meta/transfer_learning.py | https://github.com/google/model_search/blob/master/model_search/meta/transfer_learning.py | Apache-2.0 |
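`_combine_previous_trial_variables` reduces the matching variables from previous trials to a single value. A NumPy sketch with averaging as the combination rule (an assumption for illustration; subclasses choose their own rule):

```python
import numpy as np

def combine_previous_trial_variables(trial_to_var):
    # trial_to_var: list of (trial_id, value) pairs for one variable
    # name. Averaging across trials is one plausible combination.
    values = np.stack([value for _trial, value in trial_to_var])
    return values.mean(axis=0)

pairs = [(1, np.array([1.0, 3.0])), (2, np.array([3.0, 5.0]))]
print(combine_previous_trial_variables(pairs))   # → [2. 4.]
```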
def begin(self):
"""Creates the ops needed to warm start the model's variables."""
# Do not warm start the model if a checkpoint already exists. If the model
# has already been training, we do not want to overwrite its variables.
if tf.train.latest_checkpoint(self._model_dir):
return
if not se... | Creates the ops needed to warm start the model's variables. | begin | python | google/model_search | model_search/meta/transfer_learning.py | https://github.com/google/model_search/blob/master/model_search/meta/transfer_learning.py | Apache-2.0 |
def _create_previous_trials(self, root_dir, shapes, dtypes, values):
"""Creates checkpoints for previous trials and returns num created."""
assert len(shapes) == len(dtypes) == len(values)
for i, (shape, dtype, value) in enumerate(zip(shapes, dtypes, values)):
with self.test_session(graph=tf.Graph())... | Creates checkpoints for previous trials and returns num created. | _create_previous_trials | python | google/model_search | model_search/meta/transfer_learning_test.py | https://github.com/google/model_search/blob/master/model_search/meta/transfer_learning_test.py | Apache-2.0 |
def __init__(self,
phoenix_spec,
study_name,
study_owner,
optimization_goal="minimize",
optimization_metric="loss",
connection_config=None):
"""Initializes a new MLMD connection instance.
Args:
phoenix_spec: Phoenix... | Initializes a new MLMD connection instance.
Args:
phoenix_spec: PhoenixSpec proto.
study_name: The name of the study.
study_owner: The owner (username) of the study.
optimization_goal: minimize or maximize (string).
optimization_metric: what metric are we optimizing (string).
co... | __init__ | python | google/model_search | model_search/metadata/ml_metadata_db.py | https://github.com/google/model_search/blob/master/model_search/metadata/ml_metadata_db.py | Apache-2.0 |
def get_best_k(trials,
k=1,
status_whitelist=None,
optimization_goal='minimize'):
"""Returns the top k trials sorted by objective_value.
Args:
trials: The trials (Trial objects) to sort and return the top_k of.
k: The top k trials to return. If k=1 we don't retu... | Returns the top k trials sorted by objective_value.
Args:
trials: The trials (Trial objects) to sort and return the top_k of.
k: The top k trials to return. If k=1 we don't return a list.
status_whitelist: list of statuses to whitelist. If None, we use all trials.
optimization_goal: string, minimize ... | get_best_k | python | google/model_search | model_search/metadata/trial.py | https://github.com/google/model_search/blob/master/model_search/metadata/trial.py | Apache-2.0 |
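The sorting in `get_best_k` reduces to ordering trials by objective value, reversed when maximizing, and unwrapping the list when `k=1`. A sketch over `(id, objective_value)` pairs instead of full `Trial` objects:

```python
def best_k(trials, k=1, optimization_goal="minimize"):
    # trials: iterable of (trial_id, objective_value) pairs.
    ordered = sorted(trials, key=lambda t: t[1],
                     reverse=(optimization_goal == "maximize"))
    return ordered[0] if k == 1 else ordered[:k]

trials = [(1, 0.9), (2, 0.4), (3, 0.7)]
print(best_k(trials))                                     # → (2, 0.4)
print(best_k(trials, k=2, optimization_goal="maximize"))  # → [(1, 0.9), (3, 0.7)]
```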
def __init__(self,
num_units,
memory_size=0,
rank=0,
use_bias=True,
activation=None,
feature_weights_initializer=None,
time_weights_initializer=None,
bias_initializer=tf.compat.v1.zeros_initializer(),... | Initializes SvdfCell.
Arguments:
num_units: int or long, the number of units in the layer.
memory_size: int or long, the size of the memory (i.e. every new inference
iteration we push a new memory entry, and remove the oldest one).
rank: int or long, the rank of the SVD approximation.
... | __init__ | python | google/model_search | model_search/ops/svdf_cell.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_cell.py | Apache-2.0 |
def __call__(self, inputs, state, scope=None):
"""SVDF Cell computation.
Arguments:
inputs: 2D Tensor, where the first dimension could be used for batching
purposes, and last dimension corresponds to the features. The size of
this last dimension determines the size of the feature filters.... | SVDF Cell computation.
Arguments:
inputs: 2D Tensor, where the first dimension could be used for batching
purposes, and last dimension corresponds to the features. The size of
this last dimension determines the size of the feature filters.
state: The state as of the last inference in th... | __call__ | python | google/model_search | model_search/ops/svdf_cell.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_cell.py | Apache-2.0 |
def _add_filter_image_summary(self, filters, name):
"""Adds image summaries for the given filter (an image per rank).
Arguments:
filters: A Tensor containing the filters. Expected shape: [rank *
num_units, filter_dim]. Thus, the tensor groups the rank filters for
each unit, and the functi... | Adds image summaries for the given filter (an image per rank).
Arguments:
filters: A Tensor containing the filters. Expected shape: [rank *
num_units, filter_dim]. Thus, the tensor groups the rank filters for
each unit, and the function will split them to generate an image per
rank. E... | _add_filter_image_summary | python | google/model_search | model_search/ops/svdf_cell.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_cell.py | Apache-2.0 |
def _runTest(self, num_units, memory_size, rank, inputs, use_bias, activation,
expected):
"""Executes SVDF test run.
Checks svdf produces the same outputs (activations/gradients) and
parameters.
Arguments:
num_units: int or long, the number of units in the layer.
memory_size... | Executes SVDF test run.
Checks svdf produces the same outputs (activations/gradients) and
parameters.
Arguments:
num_units: int or long, the number of units in the layer.
memory_size: int or long, the size of the SVDF cell memory.
rank: int or long, the rank of the SVD approximation.
... | _runTest | python | google/model_search | model_search/ops/svdf_cell_test.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_cell_test.py | Apache-2.0 |
def build(self, input_shape: List[int]):
"""Implements build interface for tf.keras.layers.Layer."""
# Sub-classes should check for input_shape and then call super.build.
self.num_features = input_shape[-1]
self.feature_kernel = self.add_weight(
shape=(self.num_features, self.num_filters),
... | Implements build interface for tf.keras.layers.Layer. | build | python | google/model_search | model_search/ops/svdf_conv.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_conv.py | Apache-2.0 |
def get_config(self) -> Dict[Text, Any]:
"""Configs of the model hparams for logging and debugging purposes."""
config = {
"units":
self.units,
"memory_size":
self.memory_size,
"rank":
self.rank,
"activation":
tf.keras.activations.s... | Configs of the model hparams for logging and debugging purposes. | get_config | python | google/model_search | model_search/ops/svdf_conv.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_conv.py | Apache-2.0 |
def call(self, inputs: tf.Tensor, training: Optional[bool] = None):
"""Implements call interface for tf.keras.layers.Layer."""
# Handle drop out.
if 0 < self.dropout < 1 and self._dropout_mask is not None:
self._dropout_mask = _generate_dropout_mask(
tf.keras.backend.ones_like(inputs), self.... | Implements call interface for tf.keras.layers.Layer. | call | python | google/model_search | model_search/ops/svdf_conv.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_conv.py | Apache-2.0 |
def _get_test_svdf_layer_weights():
"""Returns weights for an SvdfCell with following params.
(units=4, memory_size=3, rank=1, use_bias=True).
"""
return [
np.array([[-0.31614766, 0.37929568, 0.27584907, -0.36453721],
[-0.35801932, 0.22514193, 0.27241215, -0.06950231],
[... | Returns weights for an SvdfCell with following params.
(units=4, memory_size=3, rank=1, use_bias=True).
| _get_test_svdf_layer_weights | python | google/model_search | model_search/ops/svdf_conv_test.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_conv_test.py | Apache-2.0 |
def _get_test_svdf_expected_output():
"""Returns output of an svdf layer with the following params.
Note: the values are obtained from _get_svdf_output_using_numpy computation.
"""
return np.array([
[[-0.00300881, 0.00605831, 0.01394408, 0.00183136],
[-0.0082326, 0.0207185, 0.06872095, 0.0030723... | Returns output of an svdf layer with the following params.
Note: the values are obtained from _get_svdf_output_using_numpy computation.
| _get_test_svdf_expected_output | python | google/model_search | model_search/ops/svdf_conv_test.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_conv_test.py | Apache-2.0 |
def _get_svdf_output_using_numpy(input_values, weights):
"""Compute svdf output using numpy for expected values.
NOTE: Use this helper function as a way to verify computations in tensorflow.
This function assumes linear activation for svdf and projection.
Args:
input_values: ndarray (shape=[batch_size, seq... | Compute svdf output using numpy for expected values.
NOTE: Use this helper function as a way to verify computations in tensorflow.
This function assumes linear activation for svdf and projection.
Args:
input_values: ndarray (shape=[batch_size, sequence_length, input_dim]).
weights: A list of 3 ndarrays f... | _get_svdf_output_using_numpy | python | google/model_search | model_search/ops/svdf_conv_test.py | https://github.com/google/model_search/blob/master/model_search/ops/svdf_conv_test.py | Apache-2.0 |
def __init__(self,
phoenix_spec,
alpha=0.05,
degree=2,
n_mono=10,
min_for_regression=3,
num_random_samples=10000,
num_of_restarts=3,
seed=None,
debug_mode=False):
"""Initializes the... | Initializes the Harmonica instance.
Args:
phoenix_spec: PhoenixSpec proto.
alpha: The alpha of lasso solver (please read on lasso solver to
understand this constant. In a nutshell, this control regularization for
lasso. alpha equal zero means regular linear regression - however, try
... | __init__ | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
def basis_function(self, t, k):
r"""Calculates fourier basis.
Args:
t: integer.
k: integer.
Returns:
Returns the following value:
\Phi_t(k) = e^{(2\pi i t k)/len(self._block_indices)}
"""
value = cmath.exp(
2 * cmath.pi * complex(0, 1) * t * k / len(self._block_indice... | Calculates fourier basis.
Args:
t: integer.
k: integer.
Returns:
Returns the following value:
\Phi_t(k) = e^{(2\pi i t k)/len(self._block_indices)}
| basis_function | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
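Each basis value is a point on the unit circle; when `t * k` is a multiple of the category count, the exponent is a full turn and the basis returns to 1. A direct sketch with the category count passed explicitly (an assumption; the method reads it from `self._block_indices`):

```python
import cmath

def basis_function(t, k, num_categories):
    # phi_t(k) = exp(2*pi*i * t * k / num_categories)
    return cmath.exp(2 * cmath.pi * 1j * t * k / num_categories)

v = basis_function(t=2, k=3, num_categories=6)   # exponent is 2*pi*i
print(abs(v - 1) < 1e-9)   # → True
```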
def translate_architecture_to_feature_assignment(self, architecture):
"""Translates the trial architecture to a categorical assignment."""
category_size = len(self._block_indices)
# TODO(b/172564129): Change to use architecture.size instead of _num_params
x_real = np.empty(self._num_params * category_si... | Translates the trial architecture to a categorical assignment. | translate_architecture_to_feature_assignment | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
def batch_sample(self, trials):
"""Returns all previous trials results as assignments and loss."""
completed = trials
x = []
y = []
for trial in completed:
arc = architecture_utils.get_architecture(
architecture_utils.DirectoryHandler.trial_dir(trial))
# Returns two assignments... | Returns all previous trials results as assignments and loss. | batch_sample | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
def _parse_variable_name(self, name):
"""Returns the indices that form the variable given the name."""
# Bias
if name == "1":
return []
# Names are of the form 'x1 x6 x8'
else:
variables = name.split(" ")
return [int(varname[1:]) for varname in variables] | Returns the indices that form the variable given the name. | _parse_variable_name | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
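`_parse_variable_name` maps sklearn's polynomial feature names back to variable indices: `"1"` is the bias term and monomials look like `"x1 x6 x8"`. The same logic as a standalone function:

```python
def parse_variable_name(name):
    if name == "1":          # bias term has no variables
        return []
    return [int(token[1:]) for token in name.split(" ")]

print(parse_variable_name("x1 x6 x8"))   # → [1, 6, 8]
print(parse_variable_name("1"))          # → []
```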
def _extract_relevant_variables_indices(self, feature_extender, coefficients):
"""Returns a list of the relevant variables indices based on coeff."""
all_variable_names = feature_extender.get_feature_names()
relevant_variables = []
for i, coeff in enumerate(coefficients):
if coeff > 0:
rel... | Returns a list of the relevant variables indices based on coeff. | _extract_relevant_variables_indices | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
def _get_good_architecture(self,
feature_extender,
num_samples,
coefficients,
relevant_variables=None):
"""Randomly samples architectures, predict loss, and return minimal.
Args:
feature_ex... | Randomly samples architectures, predicts loss, and returns the minimal one.
Args:
feature_extender: sklearn PolynomialFeatures extender.
num_samples: the number of samples from the search space the function will
try before returning the minimal point. If the search space over the
relevant variable... | _get_good_architecture | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
def get_suggestion(self, trials, hparams, my_trial_id=None, model_dir=None):
"""Suggests a new architecture for Phoenix using the harmonica model.
For details please see:
https://arxiv.org/pdf/1706.00764.pdf
Args:
trials: a list of Trial objects
hparams: The suggested hparams.
my_tri... | Suggests a new architecture for Phoenix using the harmonica model.
For details please see:
https://arxiv.org/pdf/1706.00764.pdf
Args:
trials: a list of Trial objects
hparams: The suggested hparams.
my_trial_id: integer - the trial id which is making the call.
model_dir: string - th... | get_suggestion | python | google/model_search | model_search/search/categorical_harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/categorical_harmonica.py | Apache-2.0 |
def random(prob):
"""Returns True with probability `prob`.
Exploration controller to indicate it's time to take a random action.
Args:
prob: triggering probability. If prob > random, then returns True.
Raises:
ValueError: if prob is not bounded in [0, 1].
"""
# 1.00001 to guard against numerical ... | Returns True with probability `prob`.
Exploration controller to indicate it's time to take a random action.
Args:
prob: triggering probability. If prob > random, then returns True.
Raises:
ValueError: if prob is not bounded in [0, 1].
| random | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
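The `random` exploration trigger above is fully described by its docstring; a minimal sketch under those assumptions (renamed `random_action` here to avoid shadowing NumPy's `random`; the `1.00001` guard is taken from the snippet's comment):

```python
import numpy as np

def random_action(prob):
    """Return True with probability `prob`; signals time for a random action."""
    # 1.00001 guards against numerical imprecision, per the snippet's comment.
    if prob < 0 or prob > 1.00001:
        raise ValueError('prob must be bounded in [0, 1], got %s' % prob)
    # "If prob > random, then returns True."
    return bool(prob > np.random.random())
```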
def write_fork_edge(model_dir, to_id, from_id):
"""Write an edge for the search tree graph.
Args:
model_dir: a string with the model directory.
to_id: the target trial id (int).
from_id: the trial we forked from (int).
"""
if model_dir is None or not model_dir:
return
if not tf.io.gfile.exis... | Write an edge for the search tree graph.
Args:
model_dir: a string with the model directory.
to_id: the target trial id (int).
from_id: the trial we forked from (int).
| write_fork_edge | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
def encode_architecture(architecture, problem_type):
"""Encodes the architecture of strings into the np.array.
Args:
architecture: A list of strings of the architecture.
problem_type: The phoenix_spec.ProblemType.
Returns:
The np.array of the encoded architecture.
"""
architecture = [block_buil... | Encodes the architecture of strings into the np.array.
Args:
architecture: A list of strings of the architecture.
problem_type: The phoenix_spec.ProblemType.
Returns:
The np.array of the encoded architecture.
| encode_architecture | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
def get_allowed_depth(num_completed_trials, depth_thresholds=None,
max_depth=20):
"""Returns the current allowed depth of the architecture."""
if not depth_thresholds:
depth_thresholds = _default_depth_thresholds(max_depth)
if len(depth_thresholds) > max_depth:
raise ValueError(
... | Returns the current allowed depth of the architecture. | get_allowed_depth | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
def block_indices(phoenix_spec):
"""Returns a list of allowable BlockType enum values from a phoenix_spec."""
return [
block_builder.BlockType[block_type]
for block_type in phoenix_spec.blocks_to_use
] | Returns a list of allowable BlockType enum values from a phoenix_spec. | block_indices | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
def choose_random_trial_and_get_architecture(trials):
"""Returns (architecture, trial) of a randomly chosen `trial`."""
idx = np.random.randint(0, len(trials))
chosen_trial = trials[idx]
architecture = architecture_utils.get_architecture(
architecture_utils.DirectoryHandler.trial_dir(chosen_trial))
retu... | Returns (architecture, trial) of a randomly chosen `trial`. | choose_random_trial_and_get_architecture | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
def mutate_replace(architecture, new_block):
"""Replaces one random block with the chosen new block.
Returns a copy; input is not modified. The element to replace is chosen
uniformly at random. Special care is taken not to replace the FLATTEN block.
Args:
architecture: An np.ndarray of integers correspond... | Replaces one random block with the chosen new block.
Returns a copy; input is not modified. The element to replace is chosen
uniformly at random. Special care is taken not to replace the FLATTEN block.
Args:
architecture: An np.ndarray of integers corresponding to BlockType enum.
new_block: Integer valu... | mutate_replace | python | google/model_search | model_search/search/common.py | https://github.com/google/model_search/blob/master/model_search/search/common.py | Apache-2.0 |
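The truncated `mutate_replace` can be sketched as below; the integer id for the FLATTEN block is a hypothetical stand-in (the real code reads it from `block_builder.BlockType`):

```python
import numpy as np

FLATTEN = 100  # hypothetical id for the FLATTEN block

def mutate_replace(architecture, new_block):
    """Return a copy with one uniformly chosen non-FLATTEN slot replaced."""
    output = architecture.copy()  # the input array is never modified
    candidates = np.flatnonzero(output != FLATTEN)  # never replace FLATTEN
    output[np.random.choice(candidates)] = new_block
    return output
```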
def _remove_reduction_blocks(self, architecture):
"""Removes any reduction blocks from the architecture."""
result = []
for block in architecture:
if self._is_reduction_block(block):
continue
result.append(block)
return np.array(result) | Removes any reduction blocks from the architecture. | _remove_reduction_blocks | python | google/model_search | model_search/search/constrained_descent.py | https://github.com/google/model_search/blob/master/model_search/search/constrained_descent.py | Apache-2.0 |
def _add_reduction_blocks(self, architecture, every, reduction_type):
"""Adds a reduction block of given type after `every` few blocks."""
result = []
for i, block in enumerate(architecture):
result.append(block)
if (i + 1) % every == 0:
result.append(block_builder.BlockType[reduction_ty... | Adds a reduction block of given type after `every` few blocks. | _add_reduction_blocks | python | google/model_search | model_search/search/constrained_descent.py | https://github.com/google/model_search/blob/master/model_search/search/constrained_descent.py | Apache-2.0 |
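The visible part of `_add_reduction_blocks` is enough to sketch the insertion pattern; plain integers stand in for `block_builder.BlockType` values here:

```python
def add_reduction_blocks(architecture, every, reduction_block):
    """Insert `reduction_block` after every `every` consecutive blocks."""
    result = []
    for i, block in enumerate(architecture):
        result.append(block)
        if (i + 1) % every == 0:
            result.append(reduction_block)
    return result
```

For example, `add_reduction_blocks([1, 2, 3, 4], every=2, reduction_block=9)` yields `[1, 2, 9, 3, 4, 9]`.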
def _get_allowed_depth(self, num_completed_trials):
"""Returns the allowed depth not including reductions and flatten blocks."""
if self._phoenix_spec.replicate_cell:
allowed_depth = self._phoenix_spec.maximum_depth
else:
allowed_depth = common.get_allowed_depth(
num_completed_trials,
... | Returns the allowed depth not including reductions and flatten blocks. | _get_allowed_depth | python | google/model_search | model_search/search/constrained_descent.py | https://github.com/google/model_search/blob/master/model_search/search/constrained_descent.py | Apache-2.0 |
def get_suggestion(self, trials, hparams, my_trial_id=None, model_dir=None):
"""See the base class for details."""
del my_trial_id # Unused.
new_block = block_builder.BlockType[common.get_random_block(
self._phoenix_spec.blocks_to_use)]
if self._is_reduction_block(new_block):
raise Value... | See the base class for details. | get_suggestion | python | google/model_search | model_search/search/constrained_descent.py | https://github.com/google/model_search/blob/master/model_search/search/constrained_descent.py | Apache-2.0 |
def get_suggestion(self, trials, hparams, my_trial_id=None, model_dir=None):
"""See the base class for details."""
if self._phoenix_spec.beam_size < 1:
raise ValueError("phoenix_spec.beam_size must be >= 1.")
sorted_trials = self._metadata.get_best_k(
trials, k=int(1e10), valid_only=True) or [... | See the base class for details. | get_suggestion | python | google/model_search | model_search/search/coordinate_descent.py | https://github.com/google/model_search/blob/master/model_search/search/coordinate_descent.py | Apache-2.0 |
def __init__(self,
phoenix_spec,
alpha=0.05,
degree=3,
n_mono=5,
min_for_regression=3,
num_random_samples=10000,
seed=None):
"""Initializes the Harmonica instance.
Args:
phoenix_spec: PhoenixSpec prot... | Initializes the Harmonica instance.
Args:
phoenix_spec: PhoenixSpec proto.
alpha: The alpha of lasso solver (please read on lasso solver to
understand this constant. In a nutshell, this control regularization for
lasso. alpha equal zero means regular linear regression - however, try
... | __init__ | python | google/model_search | model_search/search/harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/harmonica.py | Apache-2.0 |
def translate_architecture_to_feature_assignment(self, architecture):
"""Translates the trial architecture to a {-1, 1} assignment."""
x = np.empty(self._num_params)
x.fill(-1)
depth = 0
for block in architecture:
# These are connector blocks (non-trainable) that connect CNN and DNN.
# T... | Translates the trial architecture to a {-1, 1} assignment. | translate_architecture_to_feature_assignment | python | google/model_search | model_search/search/harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/harmonica.py | Apache-2.0 |
def batch_sample(self, trials):
"""Returns all previous trials results as assignments and loss."""
completed = trials
x = []
y = []
for trial in completed:
arc = architecture_utils.get_architecture(
architecture_utils.DirectoryHandler.trial_dir(trial))
x.append(self.translate_a... | Returns all previous trials results as assignments and loss. | batch_sample | python | google/model_search | model_search/search/harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/harmonica.py | Apache-2.0 |
def get_good_architecture(self, num_samples, coefficients):
"""Randomly samples architectures, predict loss, and return minimal."""
if self._seed:
np.random.seed(seed=self._seed)
assignments = []
architectures = []
for _ in range(num_samples):
rand_arc = np.random.randint(
len... | Randomly samples architectures, predicts loss, and returns the minimum. | get_good_architecture | python | google/model_search | model_search/search/harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/harmonica.py | Apache-2.0 |
def get_suggestion(self, trials, hparams, my_trial_id=None, model_dir=None):
"""Suggests a new architecture for Phoenix using the harmonica model.
For details please see:
https://arxiv.org/pdf/1706.00764.pdf
Args:
trials: a list of metadata.trial.Trial
hparams: The suggested hparams.
... | Suggests a new architecture for Phoenix using the harmonica model.
For details please see:
https://arxiv.org/pdf/1706.00764.pdf
Args:
trials: a list of metadata.trial.Trial
hparams: The suggested hparams.
my_trial_id: integer - the trial id which is making the call.
model_dir: stri... | get_suggestion | python | google/model_search | model_search/search/harmonica.py | https://github.com/google/model_search/blob/master/model_search/search/harmonica.py | Apache-2.0 |
def _one_nonzero_per_row(matrix):
"""For each row in matrix, randomly zero all but one of the nonzero values."""
# TODO(b/172564129): can it be done without a loop?
out = np.zeros_like(matrix)
for i in range(matrix.shape[0]):
nonzero_indices = np.flatnonzero(matrix[i])
keep = np.random.choice(nonzero_in... | For each row in matrix, randomly zero all but one of the nonzero values. | _one_nonzero_per_row | python | google/model_search | model_search/search/linear_model.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model.py | Apache-2.0 |
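The loop in `_one_nonzero_per_row` is truncated right at the `np.random.choice` call; a self-contained sketch of the described behavior (the guard for all-zero rows is an added assumption):

```python
import numpy as np

def one_nonzero_per_row(matrix):
    """Per row, keep one randomly chosen nonzero entry and zero the rest."""
    out = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        nonzero_indices = np.flatnonzero(matrix[i])
        if nonzero_indices.size == 0:
            continue  # an all-zero row stays all-zero
        keep = np.random.choice(nonzero_indices)
        out[i, keep] = matrix[i, keep]
    return out
```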
def _predict_best_architecture(self, architectures, losses):
"""Fits a linear model for loss = f(architecture) and finds its argmin.
Main computational subroutine for trial data already in feature vector form.
Args:
architectures: (n_trials, depth) integer matrix of architectures.
losses: (n_t... | Fits a linear model for loss = f(architecture) and finds its argmin.
Main computational subroutine for trial data already in feature vector form.
Args:
architectures: (n_trials, depth) integer matrix of architectures.
losses: (n_trials) positive validation error.
Returns:
predicted_loss... | _predict_best_architecture | python | google/model_search | model_search/search/linear_model.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model.py | Apache-2.0 |
def _suggest_by_padding(self, architectures, losses):
"""Pads architectures with EMPTY_BLOCK and call _predict_best_architecture.
Variable-length architectures are padded into fixed dimensionality
at either head or base, as determined by spec.network_alignment.
Args:
architectures: List of itera... | Pads architectures with EMPTY_BLOCK and call _predict_best_architecture.
Variable-length architectures are padded into fixed dimensionality
at either head or base, as determined by spec.network_alignment.
Args:
architectures: List of iterables of block_builder.BlockType values (or
integers).... | _suggest_by_padding | python | google/model_search | model_search/search/linear_model.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model.py | Apache-2.0 |
def _pad_architecture(self, arch, maxdepth):
"""Pad with empty blocks according to spec network alignment."""
empties = [block_builder.BlockType.EMPTY_BLOCK.value] * (
maxdepth - len(arch))
align = self._phoenix_spec.linear_model.network_alignment
if align == phoenix_spec_pb2.LinearModelSpec.NET... | Pad with empty blocks according to spec network alignment. | _pad_architecture | python | google/model_search | model_search/search/linear_model.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model.py | Apache-2.0 |
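The alignment branch in `_pad_architecture` is cut off mid-comparison, so which enum value pads which side is an assumption in this sketch; `EMPTY_BLOCK = 0` is likewise hypothetical:

```python
EMPTY_BLOCK = 0  # hypothetical id for block_builder.BlockType.EMPTY_BLOCK

def pad_architecture(arch, maxdepth, align_head=True):
    """Pad `arch` to `maxdepth` blocks; keep real blocks at head or base."""
    empties = [EMPTY_BLOCK] * (maxdepth - len(arch))
    # Head alignment keeps the existing blocks first; base alignment last.
    return list(arch) + empties if align_head else empties + list(arch)
```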
def _get_suggestion(architectures,
blocks_to_use,
losses,
grow=False,
remove_outliers=False,
pass_flatten=False):
"""Testing subroutine to handle boilerplate Trial construction, dirs, etc."""
# TODO(b/172564129): Fi... | Testing subroutine to handle boilerplate Trial construction, dirs, etc. | _get_suggestion | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_one_trial(self):
"""Degenerate case: one data point. Just make sure it doesn't explode."""
blocks_to_use = np.arange(1, 4)
architectures = np.array([[1, 2, 1]])
losses = ([1.0])
best = _get_suggestion(architectures, blocks_to_use, losses)
# The degenerate model might end up suggesting s... | Degenerate case: one data point. Just make sure it doesn't explode. | test_one_trial | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_two_trials(self):
"""Underdetermined case - should find pattern in subset of dimensions."""
blocks_to_use = np.arange(1, 4)
architectures = np.array([[1, 2, 1], [1, 1, 2]])
losses = ([1.0, 2.0])
best = _get_suggestion(architectures, blocks_to_use, losses)
# The model won't be able to pr... | Underdetermined case - should find pattern in subset of dimensions. | test_two_trials | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_three_trials(self):
"""Suggestion should combine trial 1 and 2's improvements over trial 0."""
blocks_to_use = np.arange(1, 4)
architectures = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 2]])
losses = ([2.0, 1.0, 1.0])
best = _get_suggestion(architectures, blocks_to_use, losses)
self.assertE... | Suggestion should combine trial 1 and 2's improvements over trial 0. | test_three_trials | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_loss_equals_id(self):
"""Larger-scale overdetermined case with easily predicted model output.
Each block contributes its own id worth of loss,
so tower of all block 1 should be best.
"""
nblocks = 4
blocks_to_use = np.arange(1, nblocks + 1)
depth = 9
ntrials = 10 * nblocks * de... | Larger-scale overdetermined case with easily predicted model output.
Each block contributes its own id worth of loss,
so tower of all block 1 should be best.
| test_loss_equals_id | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_randomized(self):
"""Overdetermined case with randomly chosen linear model.
Hard to predict the argmin, but can check its expected performance.
"""
np.random.seed(0)
nblocks = 10
blocks_to_use = np.arange(1, nblocks + 1)
depth = 10
ntrials = 2 * nblocks * depth
architecture... | Overdetermined case with randomly chosen linear model.
Hard to predict the argmin, but can check its expected performance.
| test_randomized | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_ids_nonrange(self):
"""Make sure we correctly handle non-contiguous range of blocks."""
np.random.seed(0)
# TODO(b/172564129): This test is not correct.
block_ints = [
b.value
for b in block_builder.BlockType
if b.value not in [126, 127, 128, 129]
]
blocks_to_use... | Make sure we correctly handle a non-contiguous range of blocks. | test_ids_nonrange | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_flatten_modelfitting(self):
"""Ensure that we correctly deal with the flatten block in fitting.
1) Flatten blocks won't be in spec.blocks_to_use, even though
the trial architectures loaded from filesystem will contain them.
2) The model shouldn't try to place a flatten block, since there i... | Ensure that we correctly deal with the flatten block in fitting.
1) Flatten blocks won't be in spec.blocks_to_use, even though
the trial architectures loaded from filesystem will contain them.
2) The model shouldn't try to place a flatten block, since there is only
one valid position at the conv... | test_flatten_modelfitting | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def test_flatten_output(self, grow):
"""Ensure we output suggestions with a flatten block correctly placed."""
# Make trials s.t. the linear model will output all convolutions.
architectures = [
np.repeat(block_builder.BlockType.EMPTY_BLOCK, 4),
np.repeat(block_builder.BlockType.CONVOLUTION... | Ensure we output suggestions with a flatten block correctly placed. | test_flatten_output | python | google/model_search | model_search/search/linear_model_test.py | https://github.com/google/model_search/blob/master/model_search/search/linear_model_test.py | Apache-2.0 |
def get_suggestion(self, trials, hparams, my_trial_id=None, model_dir=None):
"""Suggests a new architecture for a Phoenix model.
Note that this algorithm performs on top of hparams oracle. Meaning, it will
receive suggested trial hparams, and determine the Phoenix
architecture. This algorithm has the f... | Suggests a new architecture for a Phoenix model.
Note that this algorithm performs on top of hparams oracle. Meaning, it will
receive suggested trial hparams, and determine the Phoenix
architecture. This algorithm has the final say. We implemented a simple
"Identity" algorithm, that passes oracle's sug... | get_suggestion | python | google/model_search | model_search/search/search_algorithm.py | https://github.com/google/model_search/blob/master/model_search/search/search_algorithm.py | Apache-2.0 |
def create_spec(problem_type,
complexity_thresholds=None,
max_depth=None,
min_depth=None,
blocks_to_use=None):
"""Creates a phoenix_spec_pb2.PhoenixSpec with the given options."""
output = phoenix_spec_pb2.PhoenixSpec()
if complexity_thresholds is no... | Creates a phoenix_spec_pb2.PhoenixSpec with the given options. | create_spec | python | google/model_search | model_search/search/test_utils.py | https://github.com/google/model_search/blob/master/model_search/search/test_utils.py | Apache-2.0 |
def is_mutation_or_equal(previous_architecture, new_architecture):
"""Returns whether if new arch is mutation of or equal to previous arch."""
if previous_architecture.shape != new_architecture.shape:
return False
mismatch = (
previous_architecture.size -
np.sum(previous_architecture == new_archit... | Returns whether new arch is a mutation of or equal to previous arch. | is_mutation_or_equal | python | google/model_search | model_search/search/test_utils.py | https://github.com/google/model_search/blob/master/model_search/search/test_utils.py | Apache-2.0 |
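The snippet cuts off just before the final comparison; assuming the mismatch count is compared against 1, `is_mutation_or_equal` can be completed as:

```python
import numpy as np

def is_mutation_or_equal(previous_architecture, new_architecture):
    """True iff both arrays share a shape and differ in at most one slot."""
    if previous_architecture.shape != new_architecture.shape:
        return False
    mismatch = (previous_architecture.size -
                np.sum(previous_architecture == new_architecture))
    return bool(mismatch <= 1)  # the <= 1 threshold is an assumption
```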
def str2bool(v):
"""
Converts string to bool type; enables command line
arguments in the format of '--arg1 true --arg2 false'
"""
if isinstance(v, bool):
return v
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
... |
Converts string to bool type; enables command line
arguments in the format of '--arg1 true --arg2 false'
| str2bool | python | Jingkang50/OpenOOD | openood/attacks/misc.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/attacks/misc.py | MIT |
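`str2bool` is cut off after the falsy branch; the usual argparse pattern raises `argparse.ArgumentTypeError` for anything else, which is assumed in this completion:

```python
import argparse

def str2bool(v):
    """Convert a CLI string to bool ('--arg1 true --arg2 false')."""
    if isinstance(v, bool):
        return v
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    # Assumed final branch: reject anything unrecognized.
    raise argparse.ArgumentTypeError('Boolean value expected.')
```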
def __init__(
self,
net: nn.Module,
id_name: str,
data_root: str = './data',
config_root: str = './configs',
preprocessor: Callable = None,
batch_size: int = 200,
shuffle: bool = False,
num_workers: int = 4,
) -> None:
"""A unified, eas... | A unified, easy-to-use API for evaluating (most) discriminative OOD
detection methods.
Args:
net (nn.Module):
The base classifier.
id_name (str):
The name of the in-distribution dataset.
data_root (str, optional):
The p... | __init__ | python | Jingkang50/OpenOOD | openood/evaluation_api/attackdataset.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/evaluation_api/attackdataset.py | MIT |
def __init__(
self,
net: nn.Module,
id_name: str,
data_root: str = './data',
config_root: str = './configs',
preprocessor: Callable = None,
postprocessor_name: str = None,
postprocessor: Type[BasePostprocessor] = None,
batch_size: int = 200,
... | A unified, easy-to-use API for evaluating (most) discriminative OOD
detection methods.
Args:
net (nn.Module):
The base classifier.
id_name (str):
The name of the in-distribution dataset.
data_root (str, optional):
The p... | __init__ | python | Jingkang50/OpenOOD | openood/evaluation_api/evaluator.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/evaluation_api/evaluator.py | MIT |
def topk(output, target, ks=(1, )):
"""Returns one boolean vector for each k, whether the target is within the
output's top-k."""
_, pred = output.topk(max(ks), 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
return [correct[:k].max(0)[0] for k in ks] | Returns one boolean vector for each k, whether the target is within the
output's top-k. | topk | python | Jingkang50/OpenOOD | openood/evaluators/mos_evaluator.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/evaluators/mos_evaluator.py | MIT |
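The `topk` helper above is written against torch tensors; the same logic rendered in NumPy (a sketch, not the OpenOOD implementation):

```python
import numpy as np

def topk_correct(output, target, ks=(1,)):
    """For each k, a boolean vector: is the target within the top-k scores?"""
    order = np.argsort(-output, axis=1)      # classes by descending score
    target = np.asarray(target).reshape(-1, 1)
    correct = order == target                # marks the target's rank
    return [correct[:, :k].any(axis=1) for k in ks]
```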
def __init__(self, config: Config):
"""OOD Evaluator.
Args:
config (Config): Config file from
"""
super(OODEvaluator, self).__init__(config)
self.id_pred = None
self.id_conf = None
self.id_gt = None | OOD Evaluator.
Args:
config (Config): Config file from
| __init__ | python | Jingkang50/OpenOOD | openood/evaluators/ood_evaluator.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/evaluators/ood_evaluator.py | MIT |
def eval_acc(self,
net: nn.Module,
data_loader: DataLoader,
postprocessor: BasePostprocessor = None,
epoch_idx: int = -1,
fsood: bool = False,
csid_data_loaders: DataLoader = None):
"""Returns the accuracy scor... | Returns the accuracy score of the labels and predictions.
:return: float
| eval_acc | python | Jingkang50/OpenOOD | openood/evaluators/ood_evaluator.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/evaluators/ood_evaluator.py | MIT |
def _get_item_by_idx(self, iterator, idx):
"""Get the idx-th item of the iterator."""
size = len(self)
idx = operator.index(idx)
if not -size <= idx < size:
raise IndexError('index {} is out of range'.format(idx))
idx %= size
return next(islice(iterator, idx, ... | Get the idx-th item of the iterator. | _get_item_by_idx | python | Jingkang50/OpenOOD | openood/networks/arpl_net.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/networks/arpl_net.py | MIT |
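`_get_item_by_idx` is a method of a sequential-container class, so `size` comes from `len(self)`; pulled out as a free function (with `size` passed explicitly, an assumption of this sketch) it reads:

```python
import operator
from itertools import islice

def get_item_by_idx(iterator, idx, size):
    """Get the idx-th item of `iterator`, supporting negative indices."""
    idx = operator.index(idx)
    if not -size <= idx < size:
        raise IndexError('index {} is out of range'.format(idx))
    idx %= size  # map negative indices into [0, size)
    return next(islice(iterator, idx, None))
```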
def _make_layer(self, block, planes, num_blocks, stride):
'''
strides = [stride] + [1] * (num_blocks - 1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
'''
norm_lay... |
strides = [stride] + [1] * (num_blocks - 1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
| _make_layer | python | Jingkang50/OpenOOD | openood/networks/resnet18_256x256.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/networks/resnet18_256x256.py | MIT |
def __init__(self, backbone, feature_size, num_classes, dof=16):
'''
dof: degree of freedom of variance
'''
super(RTSNet, self).__init__()
self.backbone = backbone
self.feature_size = feature_size
self.num_classes = num_classes
self.dof = dof
self.... |
dof: degree of freedom of variance
| __init__ | python | Jingkang50/OpenOOD | openood/networks/rts_net.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/networks/rts_net.py | MIT |
def get_GMM_stat(model, train_loader, num_clusters_list, feature_type_list,
reduce_dim_list):
""" Compute GMM.
Args:
model (nn.Module): pretrained model to extract features
train_loader (DataLoader): use all training data to perform GMM
num_clusters_list (list): number o... | Compute GMM.
Args:
model (nn.Module): pretrained model to extract features
train_loader (DataLoader): use all training data to perform GMM
num_clusters_list (list): number of clusters for each layer
feature_type_list (list): feature type for each layer
reduce_dim_list (list)... | get_GMM_stat | python | Jingkang50/OpenOOD | openood/postprocessors/gmm_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/gmm_postprocessor.py | MIT |
def compute_GMM_score(model,
data,
feature_mean,
feature_prec,
component_weight,
transform_matrix,
layer_idx,
feature_type_list,
return_pred=Fal... | Compute GMM.
Args:
model (nn.Module): pretrained model to extract features
data (DataLoader): input one training batch
feature_mean (list): a list of torch.cuda.Tensor()
feature_prec (list): a list of torch.cuda.Tensor()
component_weight (list): a list of torch.cuda.Tensor()... | compute_GMM_score | python | Jingkang50/OpenOOD | openood/postprocessors/gmm_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/gmm_postprocessor.py | MIT |
def get_MDS_stat(model, train_loader, num_classes, feature_type_list,
reduce_dim_list):
""" Compute sample mean and precision (inverse of covariance)
return: sample_class_mean: list of class mean
precision: list of precisions
transform_matrix_list: list of transform_matr... | Compute sample mean and precision (inverse of covariance)
return: sample_class_mean: list of class mean
precision: list of precisions
transform_matrix_list: list of transform_matrix
| get_MDS_stat | python | Jingkang50/OpenOOD | openood/postprocessors/mds_ensemble_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/mds_ensemble_postprocessor.py | MIT |
def get_Mahalanobis_scores(model, test_loader, num_classes, sample_mean,
precision, transform_matrix, layer_index,
feature_type_list, magnitude):
'''
Compute the proposed Mahalanobis confidence score on input dataset
return: Mahalanobis score from layer_... |
Compute the proposed Mahalanobis confidence score on input dataset
return: Mahalanobis score from layer_index
| get_Mahalanobis_scores | python | Jingkang50/OpenOOD | openood/postprocessors/mds_ensemble_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/mds_ensemble_postprocessor.py | MIT |
def _cov(X, shrinkage=None, covariance_estimator=None):
"""Estimate covariance matrix (using optional covariance_estimator).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
shrinkage : {'empirical', 'auto'} or float, default=None
Shrinkage parameter,... | Estimate covariance matrix (using optional covariance_estimator).
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
shrinkage : {'empirical', 'auto'} or float, default=None
Shrinkage parameter, possible values:
- None or 'empirical': no shrinkage... | _cov | python | Jingkang50/OpenOOD | openood/postprocessors/mds_ensemble_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/mds_ensemble_postprocessor.py | MIT |
def _class_means(X, y):
"""Compute class means.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
Returns
-------
means : array-like of shape (n_classes, n_featur... | Compute class means.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
Returns
-------
means : array-like of shape (n_classes, n_features)
Class means.
| _class_means | python | Jingkang50/OpenOOD | openood/postprocessors/mds_ensemble_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/mds_ensemble_postprocessor.py | MIT |
def _class_cov(X, y, priors, shrinkage=None, covariance_estimator=None):
"""Compute weighted within-class covariance matrix.
The per-class covariance are weighted by the class priors.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
y : array-like of shap... | Compute weighted within-class covariance matrix.
The per-class covariance are weighted by the class priors.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Input data.
y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
priors : arr... | _class_cov | python | Jingkang50/OpenOOD | openood/postprocessors/mds_ensemble_postprocessor.py | https://github.com/Jingkang50/OpenOOD/blob/master/openood/postprocessors/mds_ensemble_postprocessor.py | MIT |