| code (string, 66–870k chars) | docstring (string, 19–26.7k chars) | func_name (string, 1–138 chars) | language (1 value) | repo (string, 7–68 chars) | path (string, 5–324 chars) | url (string, 46–389 chars) | license (7 values) |
|---|---|---|---|---|---|---|---|
def test_loss_intercept_only(loss, sample_weight):
"""Test that fit_intercept_only returns the argmin of the loss.
Also test that the gradient is zero at the minimum.
"""
n_samples = 50
if not loss.is_multiclass:
y_true = loss.link.inverse(np.linspace(-4, 4, num=n_samples))
else:
... | Test that fit_intercept_only returns the argmin of the loss.
Also test that the gradient is zero at the minimum.
| test_loss_intercept_only | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_specific_fit_intercept_only(loss, func, random_dist, global_random_seed):
"""Test that fit_intercept_only returns the correct functional.
We test the functional for specific, meaningful distributions, e.g.
squared error estimates the expectation of a probability distribution.
"""
rng = np.... | Test that fit_intercept_only returns the correct functional.
We test the functional for specific, meaningful distributions, e.g.
squared error estimates the expectation of a probability distribution.
| test_specific_fit_intercept_only | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_multinomial_loss_fit_intercept_only():
"""Test that fit_intercept_only returns the mean functional for CCE."""
rng = np.random.RandomState(0)
n_classes = 4
loss = HalfMultinomialLoss(n_classes=n_classes)
# Same logic as test_specific_fit_intercept_only. Here inverse link
# function = so... | Test that fit_intercept_only returns the mean functional for CCE. | test_multinomial_loss_fit_intercept_only | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_multinomial_cy_gradient(global_random_seed):
"""Test that Multinomial cy_gradient gives the same result as gradient.
CyHalfMultinomialLoss does not inherit from CyLossFunction and has a different API.
As a consequence, the functions like `loss` and `gradient` do not rely on `cy_loss`
and `cy_g... | Test that Multinomial cy_gradient gives the same result as gradient.
CyHalfMultinomialLoss does not inherit from CyLossFunction and has a different API.
As a consequence, the functions like `loss` and `gradient` do not rely on `cy_loss`
and `cy_gradient`.
| test_multinomial_cy_gradient | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_binomial_and_multinomial_loss(global_random_seed):
"""Test that multinomial loss with n_classes = 2 is the same as binomial loss."""
rng = np.random.RandomState(global_random_seed)
n_samples = 20
binom = HalfBinomialLoss()
multinom = HalfMultinomialLoss(n_classes=2)
y_train = rng.randin... | Test that multinomial loss with n_classes = 2 is the same as binomial loss. | test_binomial_and_multinomial_loss | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
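The equivalence tested in the row above rests on one identity: a two-class softmax over raw scores [0, f] collapses to the sigmoid of f. A minimal NumPy sketch of that identity (illustrative only, not scikit-learn's Cython loss code):

```python
import numpy as np

def sigmoid(f):
    # binomial inverse link
    return 1.0 / (1.0 + np.exp(-f))

def softmax(z):
    # multinomial inverse link (stable: shift by the max)
    e = np.exp(z - z.max())
    return e / e.sum()

# probability of class 1 under both parameterizations
f = 1.7
p_binomial = sigmoid(f)
p_multinomial = softmax(np.array([0.0, f]))[1]
assert np.isclose(p_binomial, p_multinomial)
```

Because the predicted probabilities coincide for every raw score, the losses and their gradients coincide as well, which is what the test checks.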
def test_binomial_vs_alternative_formulation(y_true, y_pred, global_dtype):
"""Test that both formulations of the binomial deviance agree.
Often, the binomial deviance or log loss is written in terms of a variable
z in {-1, +1}, but we use y in {0, 1}, hence z = 2 * y - 1.
ESL II Eq. (10.18):
... | Test that both formulations of the binomial deviance agree.
Often, the binomial deviance or log loss is written in terms of a variable
z in {-1, +1}, but we use y in {0, 1}, hence z = 2 * y - 1.
ESL II Eq. (10.18):
-loglike(z, f) = log(1 + exp(-2 * z * f))
Note:
- ESL 2*f = raw_predic... | test_binomial_vs_alternative_formulation | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
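The two formulations compared in the row above can be checked numerically. Below, `log_loss_01` is the y in {0, 1} form with p = sigmoid(raw_prediction), and `esl_loss` is ESL II Eq. (10.18) in terms of z = 2 * y - 1 with raw_prediction = 2 * f. This is a sketch of the identity, not the scikit-learn test itself:

```python
import numpy as np

def log_loss_01(y, raw):
    # y in {0, 1}; p = sigmoid(raw_prediction)
    p = 1.0 / (1.0 + np.exp(-raw))
    return -y * np.log(p) - (1 - y) * np.log(1 - p)

def esl_loss(z, f):
    # ESL II Eq. (10.18): -loglike(z, f) = log(1 + exp(-2 * z * f))
    return np.log(1.0 + np.exp(-2.0 * z * f))

y = np.array([0.0, 1.0, 1.0, 0.0])
raw = np.array([-1.3, 0.4, 2.0, 0.7])
z = 2 * y - 1  # map {0, 1} -> {-1, +1}
assert np.allclose(log_loss_01(y, raw), esl_loss(z, raw / 2))  # ESL's 2*f = raw
```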
def test_predict_proba(loss, global_random_seed):
"""Test that predict_proba and gradient_proba work as expected."""
n_samples = 20
y_true, raw_prediction = random_y_true_raw_prediction(
loss=loss,
n_samples=n_samples,
y_bound=(-100, 100),
raw_bound=(-5, 5),
seed=glob... | Test that predict_proba and gradient_proba work as expected. | test_predict_proba | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_init_gradient_and_hessians(loss, sample_weight, dtype, order):
"""Test that init_gradient_and_hessian works as expected.
Passing sample_weight to a loss correctly influences the constant_hessian
attribute, and consequently the shape of the hessian array.
"""
n_samples = 5
if sample_wei... | Test that init_gradient_and_hessian works as expected.
Passing sample_weight to a loss correctly influences the constant_hessian
attribute, and consequently the shape of the hessian array.
| test_init_gradient_and_hessians | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_init_gradient_and_hessian_raises(loss, params, err_msg):
"""Test that init_gradient_and_hessian raises errors for invalid input."""
loss = loss()
with pytest.raises((ValueError, TypeError), match=err_msg):
gradient, hessian = loss.init_gradient_and_hessian(n_samples=5, **params) | Test that init_gradient_and_hessian raises errors for invalid input. | test_init_gradient_and_hessian_raises | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_loss_pickle(loss):
"""Test that losses can be pickled."""
n_samples = 20
y_true, raw_prediction = random_y_true_raw_prediction(
loss=loss,
n_samples=n_samples,
y_bound=(-100, 100),
raw_bound=(-5, 5),
seed=42,
)
pickled_loss = pickle.dumps(loss)
un... | Test that losses can be pickled. | test_loss_pickle | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def test_tweedie_log_identity_consistency(p):
"""Test for identical losses when only the link function is different."""
half_tweedie_log = HalfTweedieLoss(power=p)
half_tweedie_identity = HalfTweedieLossIdentity(power=p)
n_samples = 10
y_true, raw_prediction = random_y_true_raw_prediction(
l... | Test for identical losses when only the link function is different. | test_tweedie_log_identity_consistency | python | scikit-learn/scikit-learn | sklearn/_loss/tests/test_loss.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_loss/tests/test_loss.py | BSD-3-Clause |
def _flatten_and_tokenize_metadata(encoder, item):
"""
Turn the article into tokens
:param item: Contains things that need to be tokenized
fields are ['domain', 'date', 'authors', 'title', 'article', 'summary']
:return: dict
"""
metadata = []
for key in ['domain', 'date', 'authors', 'ti... |
Turn the article into tokens
:param item: Contains things that need to be tokenized
fields are ['domain', 'date', 'authors', 'title', 'article', 'summary']
:return: dict
| _flatten_and_tokenize_metadata | python | rowanz/grover | discrimination/run_discrimination.py | https://github.com/rowanz/grover/blob/master/discrimination/run_discrimination.py | Apache-2.0 |
def input_fn_builder(input_files,
seq_length,
is_training,
num_cpu_threads=4,
evaluate_for_fixed_number_of_steps=True):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
def input_fn(params):
"""The actu... | Creates an `input_fn` closure to be passed to TPUEstimator. | input_fn_builder | python | rowanz/grover | lm/dataloader.py | https://github.com/rowanz/grover/blob/master/lm/dataloader.py | Apache-2.0 |
def classification_convert_examples_to_features(
examples, max_seq_length, batch_size, encoder, output_file, labels, pad_extra_examples=False,
chop_from_front_if_needed=True):
"""Convert a set of `InputExample`s to a TFRecord file."""
writer = tf.python_io.TFRecordWriter(output_file)
label... | Convert a set of `InputExample`s to a TFRecord file. | classification_convert_examples_to_features | python | rowanz/grover | lm/dataloader.py | https://github.com/rowanz/grover/blob/master/lm/dataloader.py | Apache-2.0 |
def __init__(self,
vocab_size,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropou... | Constructs NewsConfig.
Args:
vocab_size: Vocabulary size of `inputs_ids` in `GroverModel`.
hidden_size: Size of the layers
num_hidden_layers: Number of hidden layers in the Transformer encoder.
num_attention_heads: Number of attention heads for each attention layer in
... | __init__ | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def from_dict(cls, json_object):
"""Constructs a `NewsConfig` from a Python dictionary of parameters."""
config = GroverConfig(vocab_size=None)
for (key, value) in six.iteritems(json_object):
config.__dict__[key] = value
return config | Constructs a `NewsConfig` from a Python dictionary of parameters. | from_dict | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def from_json_file(cls, json_file):
"""Constructs a `NewsConfig` from a json file of parameters."""
with tf.gfile.GFile(json_file, "r") as reader:
text = reader.read()
return cls.from_dict(json.loads(text)) | Constructs a `NewsConfig` from a json file of parameters. | from_json_file | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def mask_attention_for_ltr(attention_scores, attention_mask):
"""
Mask attention so that we're only predicting going forward
:param attention_scores: [batch, heads, dst_sequence, src_sequence], where information flows from src to dst.
:param attention_mask [query_length, key_length]
:return: masked ... |
Mask attention so that we're only predicting going forward
:param attention_scores: [batch, heads, dst_sequence, src_sequence], where information flows from src to dst.
:param attention_mask [query_length, key_length]
:return: masked attention
| mask_attention_for_ltr | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def _attention_projection_and_transpose(x_flat, batch_size, seq_length, num_attention_heads, size_per_head,
name, initializer_range=0.02):
"""
:param x_flat: [batch_size*seq_length, width]
:return: A fixed up tensor of size [batch_size, num_attention_heads, seq_length... |
:param x_flat: [batch_size*seq_length, width]
:return: A fixed up tensor of size [batch_size, num_attention_heads, seq_length, size_per_head]
| _attention_projection_and_transpose | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def attention_layer(x_flat, attention_mask, batch_size, seq_length, size_per_head=512, num_attention_heads=1, *,
cache=None,
initializer_range=0.02, hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1, do_cache=False):
"""
:param x_flat: Tensor ... |
:param x_flat: Tensor input, should be [batch_size*seq_length, dim]
:param attention_mask: Attention mask to use of size [seq_length, seq_length+cached_length]
:param size_per_head: dim = size_per_head * num_attention_heads
:param num_attention_heads: dim = size_per_head * num_attention_heads
:pa... | attention_layer | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def residual_mlp_layer(x_flat, intermediate_size, initializer_range=0.02, hidden_dropout_prob=0.1):
"""
:param x: The attention output. It should be [batch_size*seq_length, dim]
:param intermediate_size: the hidden projection. By default this is the input_dim * 4.
in the original GPT we would return la... |
:param x: The attention output. It should be [batch_size*seq_length, dim]
:param intermediate_size: the hidden projection. By default this is the input_dim * 4.
in the original GPT we would return layer_norm(x_norm + h1) rather than layer_norm(x + h1)
:return:
| residual_mlp_layer | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def embed(input_ids,
vocab_size,
embedding_size,
position_offset=0,
initializer_range=0.02,
max_position_embeddings=512,
use_one_hot_embeddings=True):
"""Word and position embeddings
:param input_ids: int Tensor of shape [batch_size, seq_length].
:... | Word and position embeddings
:param input_ids: int Tensor of shape [batch_size, seq_length].
:param vocab_size: number of words in vocab
:param embedding_size: dimensionality of the embedding
:param position_offset: aka number of cached tokens.
:param initializer_range: float. Range of the weight in... | embed | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def _top_p_sample(logits, ignore_ids=None, num_samples=1, p=0.9):
"""
Does top-p sampling. If ignore_ids is on, then we will zero out those logits.
:param logits: [batch_size, vocab_size] tensor
:param ignore_ids: [vocab_size] one-hot representation of the indices we'd like to ignore and never predict,
... |
Does top-p sampling. If ignore_ids is on, then we will zero out those logits.
:param logits: [batch_size, vocab_size] tensor
:param ignore_ids: [vocab_size] one-hot representation of the indices we'd like to ignore and never predict,
like padding maybe
:param p: topp threshold t... | _top_p_sample | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
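The truncated `_top_p_sample` above works on TF logits; the idea is easier to see in a single-example NumPy sketch (names and shapes here are illustrative, not the repository's API):

```python
import numpy as np

def top_p_filter(logits, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches p;
    zero out the rest and renormalize (single-example sketch)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(-probs)          # indices sorted by descending probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1  # first position where mass >= p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = top_p_filter(np.array([2.0, 1.0, 0.1, -3.0]), p=0.9)
assert np.isclose(probs.sum(), 1.0)
```

Sampling then draws from the filtered distribution, e.g. `np.random.choice(len(probs), p=probs)`.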
def _top_k_sample(logits, ignore_ids=None, num_samples=1, k=10):
"""
Does top-k sampling. If ignore_ids is on, then we will zero out those logits.
:param logits: [batch_size, vocab_size] tensor
:param ignore_ids: [vocab_size] one-hot representation of the indices we'd like to ignore and never predict,
... |
Does top-k sampling. If ignore_ids is on, then we will zero out those logits.
:param logits: [batch_size, vocab_size] tensor
:param ignore_ids: [vocab_size] one-hot representation of the indices we'd like to ignore and never predict,
like padding maybe
:param p: topp threshold t... | _top_k_sample | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
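Top-k is the simpler cousin of top-p: instead of a probability-mass cutoff, keep a fixed number of candidates. Again a single-example NumPy sketch, not the TF implementation:

```python
import numpy as np

def top_k_filter(logits, k=2):
    """Zero out everything but the k highest-probability tokens, renormalize (sketch)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    keep = np.argsort(-probs)[:k]       # indices of the k largest probabilities
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = top_k_filter(np.array([2.0, 1.0, 0.1, -3.0]), k=2)
assert np.count_nonzero(probs) == 2 and np.isclose(probs.sum(), 1.0)
```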
def __init__(self,
config: GroverConfig,
is_training,
input_ids,
cache=None,
do_cache=False,
pad_token_id=0,
chop_off_last_token=True,
scope=None,
reuse=False):
... |
:param config:
:param is_training:
:param input_ids: Tensor thats of size [batch_size, seq_length]
:param cache: Optionally, a tensor to use that will contain cached information of the size
[batch_size, num_layers, 2, num_heads, cache_length, features]
:param do_cach... | __init__ | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def pooled_output(self, clf_token):
"""
Extract pooled output given a token that says where we should look
:param clf_token:
:return:
"""
pool_idx = tf.cast(tf.argmax(tf.cast(tf.equal(self.input_ids, clf_token), tf.float32), 1), tf.int32)
return tf.gather(self.hid... |
Extract pooled output given a token that says where we should look
:param clf_token:
:return:
| pooled_output | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def sample_step(tokens, ignore_ids, news_config, batch_size=1, p_for_topp=0.95, cache=None, do_topk=False):
"""
Helper function that samples from grover for a single step
:param tokens: [batch_size, n_ctx_b] tokens that we will predict from
:param ignore_ids: [n_vocab] mask of the tokens we don't want t... |
Helper function that samples from grover for a single step
:param tokens: [batch_size, n_ctx_b] tokens that we will predict from
:param ignore_ids: [n_vocab] mask of the tokens we don't want to predict
:param news_config: config for the GroverModel
:param batch_size: batch size to use
:param p_... | sample_step | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def sample(news_config: GroverConfig, initial_context, eos_token, ignore_ids=None, p_for_topp=0.95,
do_topk=False):
"""
V1 version of: sample outputs from a model, and do it all at once
:param news_config: Configuration used to construct the model
:param initial_context: [batch_size, seq_leng... |
V1 version of: sample outputs from a model, and do it all at once
:param news_config: Configuration used to construct the model
:param initial_context: [batch_size, seq_length] that we'll start generating with
:param eos_token: Stop generating if you see this (tf scalar)
:param ignore_ids: NEVER GE... | sample | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def body(ctx, cache, probs):
""" for whatever reason this didn't work when I ran it on more than one at once... ugh."""
next_outputs = sample_step(ctx[:, -1][:, None], ignore_ids=ignore_ids, news_config=news_config,
batch_size=batch_size, p_for_topp=p_for_t... | for whatever reason this didn't work when I ran it on more than one at once... ugh. | body | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def classification_model_fn_builder(config: GroverConfig, init_checkpoint, learning_rate,
num_train_steps, num_warmup_steps, use_tpu, num_labels, pool_token_id,
adafactor=False, adam_bfloat=False, lm_loss_coef=0.5):
"""Returns `model_fn` closur... | Returns `model_fn` closure for TPUEstimator. FOR CLASSIFICATION ONLY! | classification_model_fn_builder | python | rowanz/grover | lm/modeling.py | https://github.com/rowanz/grover/blob/master/lm/modeling.py | Apache-2.0 |
def _do_use_weight_decay(self, param_name):
"""Whether to use L2 weight decay for `param_name`."""
if not self.weight_decay_rate:
return False
if self.exclude_from_weight_decay:
for r in self.exclude_from_weight_decay:
if re.search(r, param_name) is not No... | Whether to use L2 weight decay for `param_name`. | _do_use_weight_decay | python | rowanz/grover | lm/optimization_adafactor.py | https://github.com/rowanz/grover/blob/master/lm/optimization_adafactor.py | Apache-2.0 |
def _get_variable_name(self, param_name):
"""Get the variable name from the tensor name."""
m = re.match("^(.*):\\d+$", param_name)
if m is not None:
param_name = m.group(1)
return param_name | Get the variable name from the tensor name. | _get_variable_name | python | rowanz/grover | lm/optimization_adafactor.py | https://github.com/rowanz/grover/blob/master/lm/optimization_adafactor.py | Apache-2.0 |
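The regex in `_get_variable_name` strips the `:0`-style suffix that TensorFlow appends to tensor names. A standalone sketch of the same transformation:

```python
import re

def get_variable_name(param_name):
    # "layer_0/kernel:0" -> "layer_0/kernel"; names without a suffix pass through
    m = re.match(r"^(.*):\d+$", param_name)
    return m.group(1) if m is not None else param_name

assert get_variable_name("layer_0/kernel:0") == "layer_0/kernel"
assert get_variable_name("bias") == "bias"
```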
def assert_rank(tensor, expected_rank, name=None):
"""Raises an exception if the tensor rank is not of the expected rank.
Args:
tensor: A tf.Tensor to check the rank of.
expected_rank: Python integer or list of integers, expected rank.
name: Optional name of the tensor for the error message.
... | Raises an exception if the tensor rank is not of the expected rank.
Args:
tensor: A tf.Tensor to check the rank of.
expected_rank: Python integer or list of integers, expected rank.
name: Optional name of the tensor for the error message.
Raises:
ValueError: If the expected shape doesn... | assert_rank | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
def get_shape_list(tensor, expected_rank=None, name=None):
"""Returns a list of the shape of tensor, preferring static dimensions.
Args:
tensor: A tf.Tensor object to find the shape of.
expected_rank: (optional) int. The expected rank of `tensor`. If this is
specified and the `tensor` has a... | Returns a list of the shape of tensor, preferring static dimensions.
Args:
tensor: A tf.Tensor object to find the shape of.
expected_rank: (optional) int. The expected rank of `tensor`. If this is
specified and the `tensor` has a different rank, an exception will be
thrown.
name:... | get_shape_list | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
def gelu(input_tensor):
"""Gaussian Error Linear Unit.
This is a smoother version of the RELU.
Original paper: https://arxiv.org/abs/1606.08415
Args:
input_tensor: float Tensor to perform activation.
Returns:
`input_tensor` with the GELU activation applied.
"""
cdf = 0.5 * (1.... | Gaussian Error Linear Unit.
This is a smoother version of the RELU.
Original paper: https://arxiv.org/abs/1606.08415
Args:
input_tensor: float Tensor to perform activation.
Returns:
`input_tensor` with the GELU activation applied.
| gelu | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
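The truncated `cdf = 0.5 * (1. ...` line is the tanh approximation of the Gaussian CDF from the GELU paper. A NumPy sketch of the full expression (assuming the standard 0.044715 constant used in BERT-style code):

```python
import numpy as np

def gelu(x):
    # tanh approximation of x * Phi(x) (Hendrycks & Gimpel, 2016)
    cdf = 0.5 * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))
    return x * cdf

x = np.array([-2.0, 0.0, 3.0])
y = gelu(x)
assert y[1] == 0.0            # GELU(0) = 0
assert abs(y[2] - 3.0) < 0.02  # approaches the identity for large positive x
```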
def layer_norm(input_tensor, name=None, epsilon=1e-5):
"""Run layer normalization on the last dimension of the tensor."""
name2use = f'LayerNorm_{name}' if name is not None else name
with tf.variable_scope(name2use, default_name='LayerNorm'):
dim = input_tensor.shape[-1].value
gamma = tf.get... | Run layer normalization on the last dimension of the tensor. | layer_norm | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
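The TF variables are truncated above, but the computation itself is small. A NumPy sketch of layer normalization over the last dimension (scalar gamma/beta for brevity; the real layer learns per-feature vectors):

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, epsilon=1e-5):
    # normalize over the last dimension, then scale and shift
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + epsilon) + beta

x = np.array([[1.0, 2.0, 3.0, 4.0]])
out = layer_norm(x)
assert np.isclose(out.mean(), 0.0, atol=1e-6)
assert np.isclose(out.var(), 1.0, atol=1e-3)
```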
def dropout(input_tensor, dropout_prob):
"""Perform dropout.
Args:
input_tensor: float Tensor.
dropout_prob: Python float. The probability of dropping out a value (NOT of
*keeping* a dimension as in `tf.nn.dropout`).
Returns:
A version of `input_tensor` with dropout applied.
... | Perform dropout.
Args:
input_tensor: float Tensor.
dropout_prob: Python float. The probability of dropping out a value (NOT of
*keeping* a dimension as in `tf.nn.dropout`).
Returns:
A version of `input_tensor` with dropout applied.
| dropout | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
def get_attention_mask(nd, ns, *, dtype):
"""
This is a TPU-compatible version of tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd)
where the lower right triangle contains 1s
"""
i = tf.range(nd)[:, None]
j = tf.range(ns)
m = i >= j - ns + nd
return tf.cast(m, dtype) |
This is a TPU-compatible version of tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd)
where the lower right triangle contains 1s
| get_attention_mask | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
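The `i >= j - ns + nd` comparison can be checked directly in NumPy; with no cache (nd == ns) it reduces to an ordinary causal (lower-triangular) mask, and with cached keys every query may also attend to the whole cache:

```python
import numpy as np

def get_attention_mask(nd, ns):
    # lower-right triangle of ones: query i may attend to key j iff i >= j - ns + nd
    i = np.arange(nd)[:, None]
    j = np.arange(ns)
    return (i >= j - ns + nd).astype(np.float32)

# no cache: plain causal mask
assert np.array_equal(get_attention_mask(3, 3), np.tril(np.ones((3, 3), dtype=np.float32)))
# 2 cached tokens (ns = nd + 2): the first query sees both cached keys plus itself
assert get_attention_mask(2, 4)[0].tolist() == [1.0, 1.0, 1.0, 0.0]
```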
def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
"""Compute the union of the current variables and checkpoint variables."""
assignment_map = {}
initialized_variable_names = {}
name_to_variable = collections.OrderedDict()
for var in tvars:
name = var.name
m = re.match(... | Compute the union of the current variables and checkpoint variables. | get_assignment_map_from_checkpoint | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
def construct_scalar_host_call(metric_dict, model_dir, prefix=""):
"""Construct a host call to log scalars when training on TPU.
Args:
metric_dict: A dict of the tensors to be logged.
model_dir: The location to write the summary.
prefix: The prefix (if any) to prepend to the metric names.
... | Construct a host call to log scalars when training on TPU.
Args:
metric_dict: A dict of the tensors to be logged.
model_dir: The location to write the summary.
prefix: The prefix (if any) to prepend to the metric names.
Returns:
A tuple of (function, args_to_be_passed_to_said_function)... | construct_scalar_host_call | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
def host_call_fn(global_step, *args):
"""Training host call. Creates scalar summaries for training metrics.
This function is executed on the CPU and should not directly reference
any Tensors in the rest of the `model_fn`. To pass Tensors from the
model to the `metric_fn`, provide as par... | Training host call. Creates scalar summaries for training metrics.
This function is executed on the CPU and should not directly reference
any Tensors in the rest of the `model_fn`. To pass Tensors from the
model to the `metric_fn`, provide as part of the `host_call`. See
https://www.ten... | host_call_fn | python | rowanz/grover | lm/utils.py | https://github.com/rowanz/grover/blob/master/lm/utils.py | Apache-2.0 |
def ind_where(array: np.ndarray, target, return_first_match=True, default_value=-1):
"""
:param array: Single dimension array
:param target: target to search for
:param return_first_match: If true, return the first index that matches, otherwise, return the last one
:param default_value: Index to ret... |
:param array: Single dimension array
:param target: target to search for
:param return_first_match: If true, return the first index that matches, otherwise, return the last one
:param default_value: Index to return if there was no match
:return: index of the match, or default_value if there was no match
| ind_where | python | rowanz/grover | lm/validate.py | https://github.com/rowanz/grover/blob/master/lm/validate.py | Apache-2.0 |
def _get_split(domain):
""" You could do this by domain, or not"""
if random.random() < TRAIN_PORTION:
return 'train'
return 'val' | You could do this by domain, or not | _get_split | python | rowanz/grover | realnews/dedupe_crawl.py | https://github.com/rowanz/grover/blob/master/realnews/dedupe_crawl.py | Apache-2.0 |
def get_matching_s3_objects(bucket, prefix='', suffix=''):
"""
Generate objects in an S3 bucket.
THANK YOU https://alexwlchan.net/2018/01/listing-s3-keys-redux/
:param bucket: Name of the S3 bucket.
:param prefix: Only fetch objects whose key starts with
this prefix (optional).
:param s... |
Generate objects in an S3 bucket.
THANK YOU https://alexwlchan.net/2018/01/listing-s3-keys-redux/
:param bucket: Name of the S3 bucket.
:param prefix: Only fetch objects whose key starts with
this prefix (optional).
:param suffix: Only fetch objects whose keys end with
this suffix ... | get_matching_s3_objects | python | rowanz/grover | realnews/dedupe_crawl.py | https://github.com/rowanz/grover/blob/master/realnews/dedupe_crawl.py | Apache-2.0 |
def article_iterator(encoder, final_desired_size=1025):
""" Iterate through the provided filename + tokenize"""
assert os.path.exists(args.input_fn)
with open(args.input_fn, 'r') as f:
for l_no, l in enumerate(f):
if l_no % args.num_folds == args.fold:
article = json.load... | Iterate through the provided filename + tokenize | article_iterator | python | rowanz/grover | realnews/prepare_lm_data.py | https://github.com/rowanz/grover/blob/master/realnews/prepare_lm_data.py | Apache-2.0 |
def _stream_from_buffer(buffer, current_desired_size, pad_token=0, add_articles_to_end=False):
""" Combines short articles that are in a buffer """
random.shuffle(buffer)
i = 0
while i < len(buffer):
article = buffer[i]
if add_articles_to_end:
for article2add in buffer[(i + 1... | Combines short articles that are in a buffer | _stream_from_buffer | python | rowanz/grover | realnews/prepare_lm_data.py | https://github.com/rowanz/grover/blob/master/realnews/prepare_lm_data.py | Apache-2.0 |
def buffered_and_sliding_window_article_iterator(encoder, current_desired_size, final_desired_size=1025):
""" We apply a sliding window to fix long sequences, and use a buffer that combines short sequences."""
assert current_desired_size <= final_desired_size
buffer = []
for article in article_iterator(... | We apply a sliding window to fix long sequences, and use a buffer that combines short sequences. | buffered_and_sliding_window_article_iterator | python | rowanz/grover | realnews/prepare_lm_data.py | https://github.com/rowanz/grover/blob/master/realnews/prepare_lm_data.py | Apache-2.0 |
def _url_seems_ok(url, domain_to_allowed_subdomains):
"""
Check if the URL seems ok. If it does, then we'll return a tuple of
CLEAN URL, main domain.
:param url:
:return:
"""
# Long URLs are usually bad
if len(url) > 200:
return False
# FIRST check if the domain is OK
ext... |
Check if the URL seems ok. If it does, then we'll return a tuple of
CLEAN URL, main domain.
:param url:
:return:
| _url_seems_ok | python | rowanz/grover | realnews/process_ccrawl.py | https://github.com/rowanz/grover/blob/master/realnews/process_ccrawl.py | Apache-2.0 |
def serialize(self):
"""
Return simple page object to JSONify and write to file.
"""
return {
'meta_lang': self.dummy_article.meta_lang,
'title': self.title,
'text': self.text,
'summary': self.summary,
'authors': self.authors,
... |
Return simple page object to JSONify and write to file.
| serialize | python | rowanz/grover | realnews/process_ccrawl.py | https://github.com/rowanz/grover/blob/master/realnews/process_ccrawl.py | Apache-2.0 |
def _tokenize_article_pieces(encoder, item):
"""
Turn the article into tokens
NOTE: in hindsight I kinda messed up here because the first token is always represented as a BPE continuation
rather than an initial token in its own right. whoops....
:param item: Contains things that need to be tokenize... |
Turn the article into tokens
NOTE: in hindsight I kinda messed up here because the first token is always represented as a BPE continuation
rather than an initial token in its own right. whoops....
:param item: Contains things that need to be tokenized
fields are ['domain', 'date', 'authors', 'ti... | _tokenize_article_pieces | python | rowanz/grover | sample/encoder.py | https://github.com/rowanz/grover/blob/master/sample/encoder.py | Apache-2.0 |
def _cut_tokens_to_add_stuff(tokens, stuff_to_add, desired_size, padding_token):
"""
The idea behind this function is to take away tokens from `tokens' such that tokens[:LENGTH] + stuff_to_add becomes
exactly at the right size (desired_size).
:param tokens:
:param stuff_to_add:
:param desired_s... |
The idea behind this function is to take away tokens from `tokens' such that tokens[:LENGTH] + stuff_to_add becomes
exactly at the right size (desired_size).
:param tokens:
:param stuff_to_add:
:param desired_size:
:return:
| _cut_tokens_to_add_stuff | python | rowanz/grover | sample/encoder.py | https://github.com/rowanz/grover/blob/master/sample/encoder.py | Apache-2.0 |
def tokenize_for_grover_training(encoder, item, desired_size=1024, unconditional_prob=0.35, metadata_dropout_prob=0.1,
cut_prob=0.2):
"""
Not only will we tokenize an item with a BPE encoder, but we'll also put it in a nice format for language modeling.
The goal is to MINIMI... |
Not only will we tokenize an item with a BPE encoder, but we'll also put it in a nice format for language modeling.
The goal is to MINIMIZE PADDING. If we don't fill up the desired size of 1024 tokens then we're wasting compute.
The canonical order is
DOMAIN DATE AUTHORS TITLE ARTICLE SUMMARY
:... | tokenize_for_grover_training | python | rowanz/grover | sample/encoder.py | https://github.com/rowanz/grover/blob/master/sample/encoder.py | Apache-2.0 |
def sliding_window(article, max_seq_length, pad_token):
"""
Randomly sample some spans. It's a simple approximation of sliding window
:param tokens:
:param max_seq_length:
:return:
"""
# if it's shorter, no need for this
if len(article['input_ids']) <= max_seq_length:
amount_to_p... |
Randomly sample some spans. It's a simple approximation of sliding window
:param tokens:
:param max_seq_length:
:return:
| sliding_window | python | rowanz/grover | sample/encoder.py | https://github.com/rowanz/grover/blob/master/sample/encoder.py | Apache-2.0 |
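The random-span idea this docstring describes can be sketched as a small standalone helper. This is a hypothetical simplification: it operates on a plain token list rather than the article dict, and omits the padding step.

```python
import random

def sample_span(input_ids, max_seq_length, rng=random):
    # Short articles fit as-is; otherwise take one random contiguous window
    # of max_seq_length tokens, approximating a sliding window.
    if len(input_ids) <= max_seq_length:
        return input_ids
    start = rng.randrange(len(input_ids) - max_seq_length + 1)
    return input_ids[start:start + max_seq_length]
```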
def format_context(encoder, news_article, target):
"""
Generates a news article given some partial information
:param news_article: Contains context
:param target: What we want to get an answer for.
:return:
"""
canonical_metadata_order = ['domain', 'date', 'authors', 'title', 'article']
... |
Generates a news article given some partial information
:param news_article: Contains context
:param target: What we want to get an answer for.
:return:
| format_context | python | rowanz/grover | sample/encoder.py | https://github.com/rowanz/grover/blob/master/sample/encoder.py | Apache-2.0 |
def extract_generated_target(output_tokens, encoder, target):
"""
Given some tokens that were generated, extract the target
:param output_tokens: [num_tokens] thing that was generated
:param encoder: how they were encoded
:param target: the piece of metadata we wanted to generate!
:return:
"... |
Given some tokens that were generated, extract the target
:param output_tokens: [num_tokens] thing that was generated
:param encoder: how they were encoded
:param target: the piece of metadata we wanted to generate!
:return:
| extract_generated_target | python | rowanz/grover | sample/encoder.py | https://github.com/rowanz/grover/blob/master/sample/encoder.py | Apache-2.0 |
def compute_distance_two_loops(self, X_test):
"""
Inefficient naive implementation, use only
as a way of understanding what kNN is doing
"""
num_test = X_test.shape[0]
num_train = self.X_train.shape[0]
distances = np.zeros((num_test, num_train))
for i in... |
Inefficient naive implementation, use only
as a way of understanding what kNN is doing
| compute_distance_two_loops | python | aladdinpersson/Machine-Learning-Collection | ML/algorithms/knn/knn.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/algorithms/knn/knn.py | MIT |
def compute_distance_one_loop(self, X_test):
"""
Much better than two-loops but not as fast as fully vectorized version.
Utilize Numpy broadcasting in X_train - X_test[i,:]
"""
num_test = X_test.shape[0]
num_train = self.X_train.shape[0]
distances = np.zeros((num_... |
Much better than two-loops but not as fast as fully vectorized version.
Utilize Numpy broadcasting in X_train - X_test[i,:]
| compute_distance_one_loop | python | aladdinpersson/Machine-Learning-Collection | ML/algorithms/knn/knn.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/algorithms/knn/knn.py | MIT |
def compute_distance_vectorized(self, X_test):
"""
        This can be tricky to understand; we utilize heavy
        vectorization as well as numpy broadcasting.
Idea: if we have two vectors a, b (two examples)
and for vectors we can compute (a-b)^2 = a^2 - 2a (dot) b + b^2
expanding o... |
        This can be tricky to understand; we utilize heavy
        vectorization as well as numpy broadcasting.
Idea: if we have two vectors a, b (two examples)
and for vectors we can compute (a-b)^2 = a^2 - 2a (dot) b + b^2
        expanding on this and doing so for every vector leads to the
... | compute_distance_vectorized | python | aladdinpersson/Machine-Learning-Collection | ML/algorithms/knn/knn.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/algorithms/knn/knn.py | MIT |
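The expansion described above, (a-b)^2 = a^2 - 2a·b + b^2, can be written as a standalone NumPy sketch (a hypothetical helper, not the repository's class method):

```python
import numpy as np

def pairwise_sq_distances(X_test, X_train):
    # (a - b)^2 = a^2 - 2 a.b + b^2, broadcast over every test/train pair at once
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # shape (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                # shape (num_train,)
    cross = X_test @ X_train.T                             # shape (num_test, num_train)
    return test_sq - 2 * cross + train_sq
```

The broadcast of a column vector, a matrix, and a row vector yields the full (num_test, num_train) distance matrix without any Python loops.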
def trim(im):
"""
Converts image to grayscale using cv2, then computes binary matrix
    of the pixels that are above a certain threshold; the first row where a
    certain percentage of the pixels are above the threshold becomes the first
    clip point. Same idea for col, max row, max col.
"... |
Converts image to grayscale using cv2, then computes binary matrix
    of the pixels that are above a certain threshold; the first row where a
    certain percentage of the pixels are above the threshold becomes the first
    clip point. Same idea for col, max row, max col.
| trim | python | aladdinpersson/Machine-Learning-Collection | ML/Kaggles/DiabeticRetinopathy/preprocess_images.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Kaggles/DiabeticRetinopathy/preprocess_images.py | MIT |
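A minimal sketch of that thresholding idea, assuming a grayscale array is already available; the helper name, threshold, and fraction are illustrative, not the repository's values:

```python
import numpy as np

def trim_bounds(gray, threshold=10, frac=0.02):
    # Binary mask of pixels brighter than the threshold
    mask = gray > threshold
    # A row/col counts as content once enough of its pixels are above threshold
    rows = np.where(mask.mean(axis=1) > frac)[0]
    cols = np.where(mask.mean(axis=0) > frac)[0]
    # Half-open clip bounds: gray[top:bottom, left:right]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```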
def resize_maintain_aspect(image, desired_size):
"""
Stole this from some stackoverflow post but can't remember which,
this will add padding to maintain the aspect ratio.
"""
old_size = image.size # old_size[0] is in (width, height) format
ratio = float(desired_size) / max(old_size)
new_siz... |
Stole this from some stackoverflow post but can't remember which,
this will add padding to maintain the aspect ratio.
| resize_maintain_aspect | python | aladdinpersson/Machine-Learning-Collection | ML/Kaggles/DiabeticRetinopathy/preprocess_images.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Kaggles/DiabeticRetinopathy/preprocess_images.py | MIT |
def fast_image_resize(input_path_folder, output_path_folder, output_size=None):
"""
Uses multiprocessing to make it fast
"""
if not output_size:
warnings.warn("Need to specify output_size! For example: output_size=100")
exit()
if not os.path.exists(output_path_folder):
os.ma... |
Uses multiprocessing to make it fast
| fast_image_resize | python | aladdinpersson/Machine-Learning-Collection | ML/Kaggles/DiabeticRetinopathy/preprocess_images.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Kaggles/DiabeticRetinopathy/preprocess_images.py | MIT |
def check_accuracy(
loader, model, loss_fn, input_shape=None, toggle_eval=True, print_accuracy=True
):
"""
Check accuracy of model on data from loader
"""
if toggle_eval:
model.eval()
device = next(model.parameters()).device
num_correct = 0
num_samples = 0
y_preds = []
y... |
Check accuracy of model on data from loader
| check_accuracy | python | aladdinpersson/Machine-Learning-Collection | ML/Kaggles/Dog vs Cat Competition/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Kaggles/Dog vs Cat Competition/utils.py | MIT |
def get_submission(loader, dataset, model_15, model_4):
"""
This can be done a lot faster.. but it didn't take
too much time to do it in this inefficient way
"""
model_15.eval()
model_4.eval()
id_lookup = pd.read_csv("data/IdLookupTable.csv")
predictions = []
image_id = 1
for im... |
This can be done a lot faster.. but it didn't take
too much time to do it in this inefficient way
| get_submission | python | aladdinpersson/Machine-Learning-Collection | ML/Kaggles/Facial Keypoint Detection Competition/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Kaggles/Facial Keypoint Detection Competition/utils.py | MIT |
def precision(y_true, y_pred):
"""
    Fraction of true positive elements divided by the total number of predicted positives.
How I view it: Assuming we say someone has cancer: how often are we correct?
It tells us how much we can trust the model when it predicts an individual as positive.
"""
tp = ... |
    Fraction of true positive elements divided by the total number of predicted positives.
How I view it: Assuming we say someone has cancer: how often are we correct?
It tells us how much we can trust the model when it predicts an individual as positive.
| precision | python | aladdinpersson/Machine-Learning-Collection | ML/ml_metrics/metrics.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/ml_metrics/metrics.py | MIT |
def recall(y_true, y_pred):
"""
    Recall measures the model's predictive accuracy for the positive class.
    How I view it: out of all the people that have cancer, how often are
    we able to detect it?
"""
    tp = true_positives(y_true, y_pred)
fn = false_negatives(y_true, y_pred)
return tp / (tp +... |
    Recall measures the model's predictive accuracy for the positive class.
    How I view it: out of all the people that have cancer, how often are
    we able to detect it?
| recall | python | aladdinpersson/Machine-Learning-Collection | ML/ml_metrics/metrics.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/ml_metrics/metrics.py | MIT |
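Both definitions above boil down to three counts over the label pairs. A self-contained sketch (a hypothetical helper combining the two metrics, with binary 0/1 labels assumed):

```python
def precision_recall(y_true, y_pred):
    # tp: predicted positive, actually positive; fp: predicted positive, actually
    # negative; fn: predicted negative, actually positive
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)
```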
def sort_array(encoder, decoder, device, arr=None):
"""
A very simple example of use of the model
Input: encoder nn.Module
decoder nn.Module
device
array to sort (optional)
"""
if arr is None:
arr = ask_user()
with torch.no_grad():
while arr != ... |
A very simple example of use of the model
Input: encoder nn.Module
decoder nn.Module
device
array to sort (optional)
| sort_array | python | aladdinpersson/Machine-Learning-Collection | ML/Projects/DeepSort/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Projects/DeepSort/utils.py | MIT |
def __init__(self, input_size, num_classes):
"""
Here we define the layers of the network. We create two fully connected layers
Parameters:
input_size: the size of the input, in this case 784 (28x28)
num_classes: the number of classes we want to predict, in this case 10 ... |
Here we define the layers of the network. We create two fully connected layers
Parameters:
input_size: the size of the input, in this case 784 (28x28)
num_classes: the number of classes we want to predict, in this case 10 (0-9)
| __init__ | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/Basics/pytorch_simple_fullynet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/Basics/pytorch_simple_fullynet.py | MIT |
def forward(self, x):
"""
x here is the mnist images and we run it through fc1, fc2 that we created above.
we also add a ReLU activation function in between and for that (since it has no parameters)
I recommend using nn.functional (F)
Parameters:
x: mnist images
... |
x here is the mnist images and we run it through fc1, fc2 that we created above.
we also add a ReLU activation function in between and for that (since it has no parameters)
I recommend using nn.functional (F)
Parameters:
x: mnist images
Returns:
out: th... | forward | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/Basics/pytorch_simple_fullynet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/Basics/pytorch_simple_fullynet.py | MIT |
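The fc1 → ReLU → fc2 pipeline this docstring describes can be sketched in NumPy rather than PyTorch; the weight shapes here are small illustrative stand-ins for the 784-input MNIST case:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # fc1 followed by ReLU (elementwise max with 0), then fc2 producing raw logits
    hidden = np.maximum(0, x @ W1 + b1)
    return hidden @ W2 + b2
```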
def check_accuracy(loader, model):
"""
Check accuracy of our trained model given a loader and a model
Parameters:
loader: torch.utils.data.DataLoader
A loader for the dataset you want to check accuracy on
model: nn.Module
The model you want to check accuracy on
... |
Check accuracy of our trained model given a loader and a model
Parameters:
loader: torch.utils.data.DataLoader
A loader for the dataset you want to check accuracy on
model: nn.Module
The model you want to check accuracy on
Returns:
acc: float
Th... | check_accuracy | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/Basics/pytorch_simple_fullynet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/Basics/pytorch_simple_fullynet.py | MIT |
def visualize_bbox(img, bbox, class_name, color=(255, 0, 0), thickness=5):
"""Visualizes a single bounding box on the image"""
x_min, y_min, x_max, y_max = map(int, bbox)
cv2.rectangle(img, (x_min, y_min), (x_max, y_max), color, thickness)
return img | Visualizes a single bounding box on the image | visualize_bbox | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/Basics/albumentations_tutorial/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/Basics/albumentations_tutorial/utils.py | MIT |
def fade_in(self, alpha, downscaled, out):
"""Used to fade in downscaled using avg pooling and output from CNN"""
# alpha should be scalar within [0, 1], and upscale.shape == generated.shape
return alpha * out + (1 - alpha) * downscaled | Used to fade in downscaled using avg pooling and output from CNN | fade_in | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/GANs/ProGAN/model.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/GANs/ProGAN/model.py | MIT |
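The fade-in above is plain linear interpolation between the avg-pooled lower-resolution image and the new layer's output; a NumPy sketch showing its endpoints:

```python
import numpy as np

def fade_in(alpha, downscaled, out):
    # alpha = 0 returns the upscaled old image, alpha = 1 the new layer's output;
    # values in between blend the two during progressive growing.
    return alpha * out + (1 - alpha) * downscaled
```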
def generate_examples(gen, steps, truncation=0.7, n=100):
"""
Tried using truncation trick here but not sure it actually helped anything, you can
remove it if you like and just sample from torch.randn
"""
gen.eval()
alpha = 1.0
for i in range(n):
with torch.no_grad():
noi... |
Tried using truncation trick here but not sure it actually helped anything, you can
remove it if you like and just sample from torch.randn
| generate_examples | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/GANs/ProGAN/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/GANs/ProGAN/utils.py | MIT |
def __init__(self, gamma=0.99, save=True, save_frequency=100, save_filename="ema_weights.pth"):
"""
Initialize the weight to which we will do the
exponential moving average and the dictionary
where we store the model parameters
"""
self.gamma = gamma
self.register... |
Initialize the weight to which we will do the
exponential moving average and the dictionary
where we store the model parameters
| __init__ | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/GANs/StyleGAN/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/GANs/StyleGAN/utils.py | MIT |
def register_weights(self, model):
"""
Registers the weights of the model which will
later be used when we take the moving average
"""
for name, param in model.named_parameters():
if param.requires_grad:
self.registered[name] = param.clone().detach() |
Registers the weights of the model which will
later be used when we take the moving average
| register_weights | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/GANs/StyleGAN/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/GANs/StyleGAN/utils.py | MIT |
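The moving-average step this class performs can be sketched with plain dicts standing in for model parameters (a hypothetical helper, not the class's actual method):

```python
def ema_update(registered, current, gamma=0.99):
    # Per-parameter exponential moving average:
    # new_avg = gamma * old_avg + (1 - gamma) * current_value
    for name, value in current.items():
        registered[name] = gamma * registered[name] + (1 - gamma) * value
    return registered
```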
def inference(digit, num_examples=1):
"""
Generates (num_examples) of a particular digit.
Specifically we extract an example of each digit,
then after we have the mu, sigma representation for
each digit we can sample from that.
After we sample we can run the decoder part of the VAE
and gene... |
Generates (num_examples) of a particular digit.
Specifically we extract an example of each digit,
then after we have the mu, sigma representation for
each digit we can sample from that.
After we sample we can run the decoder part of the VAE
and generate examples.
| inference | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/more_advanced/VAE/train.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/more_advanced/VAE/train.py | MIT |
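The "sample from that" step this docstring describes is the usual VAE reparameterization trick; a NumPy sketch (hypothetical helper, with the decoder pass omitted):

```python
import numpy as np

def sample_latent(mu, sigma, rng):
    # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps
```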
def intersection_over_union(boxes_preds, boxes_labels, box_format="midpoint"):
"""
Calculates intersection over union
Parameters:
boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
boxes_labels (tensor): Correct Labels of Boxes (BATCH_SIZE, 4)
box_format (str): midp... |
Calculates intersection over union
Parameters:
boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
boxes_labels (tensor): Correct Labels of Boxes (BATCH_SIZE, 4)
box_format (str): midpoint/corners, if boxes (x,y,w,h) or (x1,y1,x2,y2)
Returns:
tensor: Inters... | intersection_over_union | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/metrics/iou.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/metrics/iou.py | MIT |
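A self-contained sketch of IoU in midpoint format for a single pair of boxes; the repository version is vectorized over batched tensors, so this scalar helper is illustrative only:

```python
def iou_midpoint(box1, box2):
    # Boxes given as (x_center, y_center, width, height); convert to corners first
    def corners(b):
        x, y, w, h = b
        return x - w / 2, y - h / 2, x + w / 2, y + h / 2

    x1a, y1a, x2a, y2a = corners(box1)
    x1b, y1b, x2b, y2b = corners(box2)
    # Intersection rectangle, clamped at zero when the boxes do not overlap
    inter_w = max(0.0, min(x2a, x2b) - max(x1a, x1b))
    inter_h = max(0.0, min(y2a, y2b) - max(y1a, y1b))
    inter = inter_w * inter_h
    union = box1[2] * box1[3] + box2[2] * box2[3] - inter
    return inter / union
```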
def mean_average_precision(
pred_boxes, true_boxes, iou_threshold=0.5, box_format="midpoint", num_classes=20
):
"""
Calculates mean average precision
Parameters:
pred_boxes (list): list of lists containing all bboxes with each bboxes
specified as [train_idx, class_prediction, prob_scor... |
Calculates mean average precision
Parameters:
pred_boxes (list): list of lists containing all bboxes with each bboxes
specified as [train_idx, class_prediction, prob_score, x1, y1, x2, y2]
true_boxes (list): Similar as pred_boxes except all the correct ones
iou_threshold (flo... | mean_average_precision | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/metrics/mean_avg_precision.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/metrics/mean_avg_precision.py | MIT |
def nms(bboxes, iou_threshold, threshold, box_format="corners"):
"""
Does Non Max Suppression given bboxes
Parameters:
bboxes (list): list of lists containing all bboxes with each bboxes
specified as [class_pred, prob_score, x1, y1, x2, y2]
iou_threshold (float): threshold where pre... |
Does Non Max Suppression given bboxes
Parameters:
bboxes (list): list of lists containing all bboxes with each bboxes
specified as [class_pred, prob_score, x1, y1, x2, y2]
iou_threshold (float): threshold where predicted bboxes is correct
threshold (float): threshold to remove ... | nms | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/metrics/nms.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/metrics/nms.py | MIT |
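The procedure this docstring describes can be sketched in plain Python with a corner-format IoU helper; the names are illustrative, and per-class suppression is expressed as "only suppress boxes of the same class":

```python
def iou_corners(a, b):
    # a, b: (x1, y1, x2, y2) corner boxes
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(bboxes, iou_threshold, prob_threshold, iou_fn):
    # Each box: [class_pred, prob_score, x1, y1, x2, y2]
    bboxes = [b for b in bboxes if b[1] > prob_threshold]
    bboxes = sorted(bboxes, key=lambda b: b[1], reverse=True)
    kept = []
    while bboxes:
        chosen = bboxes.pop(0)
        kept.append(chosen)
        # Drop lower-scoring boxes of the same class that overlap too much
        bboxes = [
            b for b in bboxes
            if b[0] != chosen[0] or iou_fn(chosen[2:], b[2:]) < iou_threshold
        ]
    return kept
```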
def intersection_over_union(boxes_preds, boxes_labels, box_format="midpoint"):
"""
Calculates intersection over union
Parameters:
boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
boxes_labels (tensor): Correct labels of Bounding Boxes (BATCH_SIZE, 4)
box_format (s... |
Calculates intersection over union
Parameters:
boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
boxes_labels (tensor): Correct labels of Bounding Boxes (BATCH_SIZE, 4)
box_format (str): midpoint/corners, if boxes (x,y,w,h) or (x1,y1,x2,y2)
Returns:
tenso... | intersection_over_union | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLO/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLO/utils.py | MIT |
def plot_image(image, boxes):
"""Plots predicted bounding boxes on the image"""
im = np.array(image)
height, width, _ = im.shape
# Create figure and axes
fig, ax = plt.subplots(1)
# Display the image
ax.imshow(im)
# box[0] is x midpoint, box[2] is width
# box[1] is y midpoint, box[... | Plots predicted bounding boxes on the image | plot_image | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLO/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLO/utils.py | MIT |
def convert_cellboxes(predictions, S=7):
"""
Converts bounding boxes output from Yolo with
an image split size of S into entire image ratios
rather than relative to cell ratios. Tried to do this
vectorized, but this resulted in quite difficult to read
code... Use as a black box? Or implement a m... |
Converts bounding boxes output from Yolo with
an image split size of S into entire image ratios
rather than relative to cell ratios. Tried to do this
vectorized, but this resulted in quite difficult to read
code... Use as a black box? Or implement a more intuitive,
using 2 for loops iterating r... | convert_cellboxes | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLO/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLO/utils.py | MIT |
def iou_width_height(boxes1, boxes2):
"""
Parameters:
boxes1 (tensor): width and height of the first bounding boxes
boxes2 (tensor): width and height of the second bounding boxes
Returns:
tensor: Intersection over union of the corresponding boxes
"""
intersection = torch.min(... |
Parameters:
boxes1 (tensor): width and height of the first bounding boxes
boxes2 (tensor): width and height of the second bounding boxes
Returns:
tensor: Intersection over union of the corresponding boxes
| iou_width_height | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLOv3/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLOv3/utils.py | MIT |
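Since both boxes are treated as sharing the same center, the overlap is just min-width times min-height; a NumPy sketch of the same computation:

```python
import numpy as np

def iou_width_height(wh1, wh2):
    # wh1, wh2: (..., 2) arrays of (width, height); centers assumed to coincide,
    # as when matching ground-truth boxes against anchor shapes.
    inter = np.minimum(wh1[..., 0], wh2[..., 0]) * np.minimum(wh1[..., 1], wh2[..., 1])
    union = wh1[..., 0] * wh1[..., 1] + wh2[..., 0] * wh2[..., 1] - inter
    return inter / union
```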
def intersection_over_union(boxes_preds, boxes_labels, box_format="midpoint"):
"""
Video explanation of this function:
https://youtu.be/XXYG5ZWtjj0
This function calculates intersection over union (iou) given pred boxes
and target boxes.
Parameters:
boxes_preds (tensor): Predictions of... |
Video explanation of this function:
https://youtu.be/XXYG5ZWtjj0
This function calculates intersection over union (iou) given pred boxes
and target boxes.
Parameters:
boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
boxes_labels (tensor): Correct labels of Bound... | intersection_over_union | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLOv3/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLOv3/utils.py | MIT |
def non_max_suppression(bboxes, iou_threshold, threshold, box_format="corners"):
"""
Video explanation of this function:
https://youtu.be/YDkjWEN8jNA
Does Non Max Suppression given bboxes
Parameters:
bboxes (list): list of lists containing all bboxes with each bboxes
specified as [... |
Video explanation of this function:
https://youtu.be/YDkjWEN8jNA
Does Non Max Suppression given bboxes
Parameters:
bboxes (list): list of lists containing all bboxes with each bboxes
specified as [class_pred, prob_score, x1, y1, x2, y2]
iou_threshold (float): threshold where p... | non_max_suppression | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLOv3/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLOv3/utils.py | MIT |
def mean_average_precision(
pred_boxes, true_boxes, iou_threshold=0.5, box_format="midpoint", num_classes=20
):
"""
Video explanation of this function:
https://youtu.be/FppOzcDvaDI
This function calculates mean average precision (mAP)
Parameters:
pred_boxes (list): list of lists contai... |
Video explanation of this function:
https://youtu.be/FppOzcDvaDI
This function calculates mean average precision (mAP)
Parameters:
pred_boxes (list): list of lists containing all bboxes with each bboxes
specified as [train_idx, class_prediction, prob_score, x1, y1, x2, y2]
tru... | mean_average_precision | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLOv3/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLOv3/utils.py | MIT |
def cells_to_bboxes(predictions, anchors, S, is_preds=True):
"""
Scales the predictions coming from the model to
be relative to the entire image such that they for example later
can be plotted or.
INPUT:
predictions: tensor of size (N, 3, S, S, num_classes+5)
anchors: the anchors used for th... |
Scales the predictions coming from the model to
    be relative to the entire image such that they can, for example,
    later be plotted.
INPUT:
predictions: tensor of size (N, 3, S, S, num_classes+5)
anchors: the anchors used for the predictions
S: the number of cells the image is divided in on ... | cells_to_bboxes | python | aladdinpersson/Machine-Learning-Collection | ML/Pytorch/object_detection/YOLOv3/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/object_detection/YOLOv3/utils.py | MIT |
def plot_to_image(figure):
"""Converts the matplotlib plot specified by 'figure' to a PNG image and
returns it. The supplied figure is closed and inaccessible after this call."""
# Save the plot to a PNG in memory.
buf = io.BytesIO()
plt.savefig(buf, format="png")
# Closing the figure prevents... | Converts the matplotlib plot specified by 'figure' to a PNG image and
returns it. The supplied figure is closed and inaccessible after this call. | plot_to_image | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/Basics/tutorial17-tensorboard/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/Basics/tutorial17-tensorboard/utils.py | MIT |
def create_sprite(data):
"""
Tile images into sprite image.
Add any necessary padding
"""
# For B&W or greyscale images
if len(data.shape) == 3:
data = np.tile(data[..., np.newaxis], (1, 1, 1, 3))
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = ((0, n ** 2 - data.shape[0]), ... |
Tile images into sprite image.
Add any necessary padding
| create_sprite | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/Basics/tutorial17-tensorboard/utils.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/Basics/tutorial17-tensorboard/utils.py | MIT |
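A sketch of the tiling step, assuming RGB input of shape (N, H, W, 3); the grayscale branch the repository handles is omitted here:

```python
import numpy as np

def create_sprite(data):
    # Tile N images into an (n*H, n*W, 3) grid, n = ceil(sqrt(N)),
    # padding with black images so the grid is exactly n x n.
    n = int(np.ceil(np.sqrt(data.shape[0])))
    pad = ((0, n ** 2 - data.shape[0]), (0, 0), (0, 0), (0, 0))
    data = np.pad(data, pad, mode="constant")
    h, w = data.shape[1], data.shape[2]
    data = data.reshape(n, n, h, w, 3)
    # Interleave grid rows with image rows: (n, h, n, w, 3) -> (n*h, n*w, 3)
    return data.transpose(0, 2, 1, 3, 4).reshape(n * h, n * w, 3)
```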
def AlexNet(input_shape: typing.Tuple[int], classes: int = 1000) -> Model:
"""
Implementation of the AlexNet architecture.
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
Note:
... |
Implementation of the AlexNet architecture.
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
Note:
when you read the paper, you will notice that the channels (filters) in the dia... | AlexNet | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/AlexNet/alexnet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/AlexNet/alexnet.py | MIT |
def convolution_block(
X: tf.Tensor,
filters: int,
kernel_size: int,
stride: int = 1,
padding: str = 'valid',
) -> tf.Tensor:
"""
Convolution block for GoogLeNet.
Arguments:
X -- input tensor of shape (m, H, W, filters)
filters -- defining the number of filters in ... |
Convolution block for GoogLeNet.
Arguments:
X -- input tensor of shape (m, H, W, filters)
filters -- defining the number of filters in the CONV layers
kernel_size -- integer, specifying the shape of the middle CONV's window for the main path
stride -- integer specifying the ... | convolution_block | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/GoogLeNet/block.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/GoogLeNet/block.py | MIT |
def inception_block(
X: tf.Tensor,
filters_1x1: int,
filters_3x3_reduce: int,
filters_3x3: int,
filters_5x5_reduce: int,
filters_5x5: int,
pool_size: int,
) -> tf.Tensor:
"""
Inception block for GoogLeNet.
Arguments:
X -- input tensor of shape (m, H, W, filte... |
Inception block for GoogLeNet.
Arguments:
X -- input tensor of shape (m, H, W, filters)
filters_1x1 -- number of filters for (1x1 conv) in first branch
filters_3x3_reduce -- number of filters for (1x1 conv) dimensionality reduction before (3x3 conv) in second branch
fil... | inception_block | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/GoogLeNet/block.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/GoogLeNet/block.py | MIT |
def auxiliary_block(
X: tf.Tensor,
classes: int,
) -> tf.Tensor:
"""
Auxiliary block for GoogLeNet.
Refer to the original paper, page 8 for the auxiliary layer specification.
Arguments:
X -- input tensor of shape (m, H, W, filters)
classes -- number of classes for classification
... |
Auxiliary block for GoogLeNet.
Refer to the original paper, page 8 for the auxiliary layer specification.
Arguments:
X -- input tensor of shape (m, H, W, filters)
classes -- number of classes for classification
Return:
X -- output of the identity block, tensor of shape (H, W, fi... | auxiliary_block | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/GoogLeNet/block.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/GoogLeNet/block.py | MIT |
def GoogLeNet(input_shape: typing.Tuple[int] = (224, 224, 3), classes: int = 1000) -> Model:
"""
Implementation of the popular GoogLeNet aka Inception v1 architecture.
Refer to the original paper, page 6 - table 1 for inception block filter sizes.
Arguments:
input_shape -- shape of the images of the... |
Implementation of the popular GoogLeNet aka Inception v1 architecture.
Refer to the original paper, page 6 - table 1 for inception block filter sizes.
Arguments:
input_shape -- shape of the images of the dataset
classes -- number of classes for classification
Returns:
model -- a M... | GoogLeNet | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/GoogLeNet/googlenet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/GoogLeNet/googlenet.py | MIT |
def LeNet5(input_shape: typing.Tuple[int], classes: int = 1000) -> Model:
"""
Implementation of the classic LeNet architecture.
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
No... |
Implementation of the classic LeNet architecture.
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
Note:
because I want to keep it original, I used tanh activation instead of ReL... | LeNet5 | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/LeNet5/lenet5.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/LeNet5/lenet5.py | MIT |
def block(
X: tf.Tensor,
kernel_size: int,
filters: typing.List[int],
stage_no: int,
block_name: str,
is_conv_layer: bool = False,
stride: int = 2
) -> tf.Tensor:
"""
Block for residual network.
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_pr... |
Block for residual network.
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
kernel_size -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers o... | block | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/ResNet/block.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/ResNet/block.py | MIT |
def ResNet(name: str, layers: typing.List[int], input_shape: typing.Tuple[int] = (64, 64, 3), classes: int = 6) -> Model:
"""
Implementation of the popular ResNet architecture.
Arguments:
name -- name of the architecture
layers -- number of blocks per layer
input_shape -- shape of t... |
Implementation of the popular ResNet architecture.
Arguments:
name -- name of the architecture
layers -- number of blocks per layer
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Ker... | ResNet | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/ResNet/resnet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/ResNet/resnet.py | MIT |
def make_layer(X: tf.Tensor, layers: int, kernel_size: int, filters: typing.List[int], stride: int, stage_no: int) -> tf.Tensor:
"""
Method to create one conv-identity layer for ResNet.
Arguments:
X -- input tensor
layers -- number of blocks per layer
kernel_size -- size of the k... |
Method to create one conv-identity layer for ResNet.
Arguments:
X -- input tensor
layers -- number of blocks per layer
kernel_size -- size of the kernel for the block
filters -- number of filters/channels
stride -- number of stride for downsampling the input
sta... | make_layer | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/ResNet/resnet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/ResNet/resnet.py | MIT |
def VGGNet(
    name: str,
    architecture: typing.List[typing.Union[int, str]],
    input_shape: typing.Tuple[int],
    classes: int = 1000
) -> Model:
    """
    Implementation of the VGGNet architecture.
    Arguments:
    name -- name of the architecture
    architecture -- number of output channels per...

Implementation of the VGGNet architecture.
Arguments:
name -- name of the architecture
architecture -- number of output channels per convolution layer in VGGNet
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model ...

VGGNet | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/VGGNet/vggnet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/VGGNet/vggnet.py | MIT
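The `architecture` list mixes ints (conv output channels) and strings (pooling markers). Assuming the usual published VGG16 configuration, with "M" marking a 2x2/stride-2 max pool (a common convention for this repo's style of config list, not confirmed from this file), the "16" in the name can be recovered by counting layers that carry weights:

```python
# Assumed standard VGG16 config: ints are conv output channels,
# "M" marks a 2x2 max pool with stride 2.
VGG16 = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
         512, 512, 512, "M", 512, 512, 512, "M"]

def weighted_layers(architecture, n_dense=3):
    """VGG's depth counts only layers with weights: the convs in the
    config plus the dense head (pool layers have no parameters)."""
    n_conv = sum(1 for v in architecture if isinstance(v, int))
    return n_conv + n_dense

print(weighted_layers(VGG16))  # 13 conv + 3 dense = 16
```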
def make_conv_layer(
    X: tf.Tensor,
    architecture: typing.List[typing.Union[int, str]],
    activation: str = 'relu'
) -> tf.Tensor:
    """
    Method to create convolution layers for VGGNet.
    In VGGNet
    - Kernel is always 3x3 for conv-layer with padding 1 and stride 1.
    - 2x2 kernel for max p...

Method to create convolution layers for VGGNet.
In VGGNet
- Kernel is always 3x3 for conv-layer with padding 1 and stride 1.
- 2x2 kernel for max pooling with stride of 2.
Arguments:
X -- input tensor
architecture -- number of output channels per convolution layer in VGG...

make_conv_layer | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/VGGNet/vggnet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/VGGNet/vggnet.py | MIT
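Because every conv in `make_conv_layer` is 3x3 with stride 1 and padding 1, spatial size is preserved and only the pooling entries shrink it. A sketch tracing shapes through such a config list (illustrative; the 224x224 input and the "M" pooling marker are assumptions, not read from this file):

```python
def trace_shape(architecture, hw=224, channels=3):
    """Follow one input through a VGG-style conv stack: a 3x3 conv with
    stride 1 and padding 1 keeps height/width and only changes the
    channel count; each "M" (2x2 max pool, stride 2) halves hw."""
    for v in architecture:
        if v == "M":
            hw //= 2
        else:
            channels = v
    return hw, channels

cfg = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512, "M"]
print(trace_shape(cfg))  # five pools: 224 -> 7, final channels 512
```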
def make_dense_layer(X: tf.Tensor, output_units: int, dropout: float = 0.5, activation: str = 'relu') -> tf.Tensor:
    """
    Method to create a dense layer for VGGNet.
    Arguments:
    X -- input tensor
    output_units -- output tensor size
    dropout -- dropout value for regularization
    activation -- ty...

Method to create a dense layer for VGGNet.
Arguments:
X -- input tensor
output_units -- output tensor size
dropout -- dropout value for regularization
activation -- type of activation method
Returns:
X -- output tensor
| make_dense_layer | python | aladdinpersson/Machine-Learning-Collection | ML/TensorFlow/CNN_architectures/VGGNet/vggnet.py | https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/CNN_architectures/VGGNet/vggnet.py | MIT |
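`make_dense_layer` applies dropout for regularization. A NumPy sketch of inverted dropout, the variant Keras' `Dropout` layer uses at training time (illustrative, not this repo's code): a fraction `rate` of units is zeroed and the survivors are scaled by 1/(1 - rate), so the expected activation is unchanged and the layer can be a no-op at inference.

```python
import numpy as np

def inverted_dropout(x, rate, rng):
    """Training-time inverted dropout: zero ~`rate` of the units and
    scale the kept ones by 1/(1 - rate) to preserve the expectation."""
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0), keep

rng = np.random.default_rng(0)
x = np.ones(8)
out, keep = inverted_dropout(x, rate=0.5, rng=rng)
# surviving units become 2.0, dropped units are exactly 0.0
```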