| partition (stringclasses, 3 values) | func_name (string, 1–134 chars) | docstring (string, 1–46.9k chars) | path (string, 4–223 chars) | original_string (string, 75–104k chars) | code (string, 75–104k chars) | docstring_tokens (list, 1–1.97k items) | repo (string, 7–55 chars) | language (stringclasses, 1 value) | url (string, 87–315 chars) | code_tokens (list, 19–28.4k items) | sha (string, 40 chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|
test | triangular | The Triangular Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Triangular Csiszar-function is:
```none
f(u) = (u - 1)**2 / (1 + u)
```
This Csiszar-function induces a symmetric f-Divergence, i.e.,
`D_f[p, q] = D_f[q, p]`.
Warni... | tensorflow_probability/python/vi/csiszar_divergence.py | def triangular(logu, name=None):
"""The Triangular Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Triangular Csiszar-function is:
```none
f(u) = (u - 1)**2 / (1 + u)
```
This Csiszar-function induces a symmetric f-Divergence, i.e... | def triangular(logu, name=None):
"""The Triangular Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Triangular Csiszar-function is:
```none
f(u) = (u - 1)**2 / (1 + u)
```
This Csiszar-function induces a symmetric f-Divergence, i.e... | [
"The",
"Triangular",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L427-L459 | [
"def",
"triangular",
"(",
"logu",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"triangular\"",
",",
"[",
"logu",
"]",
")",
":",
"logu",
"=",
"tf",
".",
"convert_to_tensor",
"(",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
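The record above defines the Triangular Csiszar-function f(u) = (u − 1)² / (1 + u), consumed in log-space. A minimal pure-Python sketch (a standalone scalar function for illustration; the real TFP function operates on tensors inside a name scope):

```python
import math

def triangular(logu):
    # Triangular Csiszar-function, f(u) = (u - 1)**2 / (1 + u),
    # evaluated at u = exp(logu) per the log-space convention.
    u = math.exp(logu)
    return (u - 1.0) ** 2 / (1.0 + u)
```

Symmetry of the induced f-divergence follows from the identity u · f(1/u) = f(u), which this sketch makes easy to check numerically.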
test | t_power | The T-Power Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
When `self_normalized = True` the T-Power Csiszar-function is:
```none
f(u) = s [ u**t - 1 - t(u - 1) ]
s = { -1 0 < t < 1
{ +1 otherwise
```
When `self_normalized ... | tensorflow_probability/python/vi/csiszar_divergence.py | def t_power(logu, t, self_normalized=False, name=None):
"""The T-Power Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
When `self_normalized = True` the T-Power Csiszar-function is:
```none
f(u) = s [ u**t - 1 - t(u - 1) ]
s = { -1 0 <... | def t_power(logu, t, self_normalized=False, name=None):
"""The T-Power Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
When `self_normalized = True` the T-Power Csiszar-function is:
```none
f(u) = s [ u**t - 1 - t(u - 1) ]
s = { -1 0 <... | [
"The",
"T",
"-",
"Power",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L462-L503 | [
"def",
"t_power",
"(",
"logu",
",",
"t",
",",
"self_normalized",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"t_power\"",
",",
"[",
"logu",
",",
"t",
"]",
")",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
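The `t_power` record gives the self-normalized form f(u) = s·[u**t − 1 − t(u − 1)] with s = −1 for 0 < t < 1 and +1 otherwise; the non-normalized branch is truncated in the docstring, so this sketch assumes it simply drops the t(u − 1) term:

```python
import math

def t_power(logu, t, self_normalized=False):
    # f(u) = s * [u**t - 1 (- t*(u - 1) when self-normalized)],
    # s = -1 for 0 < t < 1, +1 otherwise; u = exp(logu).
    u = math.exp(logu)
    f = u ** t - 1.0
    if self_normalized:
        f -= t * (u - 1.0)
    s = -1.0 if 0.0 < t < 1.0 else 1.0
    return s * f
```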
test | log1p_abs | The log1p-abs Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Log1p-Abs Csiszar-function is:
```none
f(u) = u**(sign(u-1)) - 1
```
This function is so-named because it was invented from the following recipe.
Choose a convex functi... | tensorflow_probability/python/vi/csiszar_divergence.py | def log1p_abs(logu, name=None):
"""The log1p-abs Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Log1p-Abs Csiszar-function is:
```none
f(u) = u**(sign(u-1)) - 1
```
This function is so-named because it was invented from the follo... | def log1p_abs(logu, name=None):
"""The log1p-abs Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Log1p-Abs Csiszar-function is:
```none
f(u) = u**(sign(u-1)) - 1
```
This function is so-named because it was invented from the follo... | [
"The",
"log1p",
"-",
"abs",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L506-L547 | [
"def",
"log1p_abs",
"(",
"logu",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"log1p_abs\"",
",",
"[",
"logu",
"]",
")",
":",
"logu",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"v... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
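For `log1p_abs`, f(u) = u**sign(u−1) − 1, which in log-space collapses to expm1(|logu|). A pure-Python sketch of that identity (illustrative scalar version, not the tensor implementation):

```python
import math

def log1p_abs(logu):
    # f(u) = u**sign(u - 1) - 1. Since sign(u - 1) = sign(logu),
    # u**sign(u-1) = exp(|logu|), so f = expm1(|logu|) in log-space.
    return math.expm1(abs(logu))
```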
test | jeffreys | The Jeffreys Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Jeffreys Csiszar-function is:
```none
f(u) = 0.5 ( u log(u) - log(u) )
= 0.5 kl_forward + 0.5 kl_reverse
= symmetrized_csiszar_function(kl_reverse)
= symme... | tensorflow_probability/python/vi/csiszar_divergence.py | def jeffreys(logu, name=None):
"""The Jeffreys Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Jeffreys Csiszar-function is:
```none
f(u) = 0.5 ( u log(u) - log(u) )
= 0.5 kl_forward + 0.5 kl_reverse
= symmetrized_csiszar... | def jeffreys(logu, name=None):
"""The Jeffreys Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Jeffreys Csiszar-function is:
```none
f(u) = 0.5 ( u log(u) - log(u) )
= 0.5 kl_forward + 0.5 kl_reverse
= symmetrized_csiszar... | [
"The",
"Jeffreys",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L550-L585 | [
"def",
"jeffreys",
"(",
"logu",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"jeffreys\"",
",",
"[",
"logu",
"]",
")",
":",
"logu",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"val... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
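The Jeffreys function above, f(u) = 0.5·(u·log u − log u), factors as 0.5·(u − 1)·log u, i.e. the average of forward and reverse KL Csiszar-functions. A scalar sketch:

```python
import math

def jeffreys(logu):
    # f(u) = 0.5 * (u*log(u) - log(u)) = 0.5 * (u - 1) * log(u),
    # with u - 1 computed stably as expm1(logu).
    return 0.5 * math.expm1(logu) * logu
```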
test | modified_gan | The Modified-GAN Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
When `self_normalized = True` the modified-GAN (Generative/Adversarial
Network) Csiszar-function is:
```none
f(u) = log(1 + u) - log(u) + 0.5 (u - 1)
```
When `self_norm... | tensorflow_probability/python/vi/csiszar_divergence.py | def modified_gan(logu, self_normalized=False, name=None):
"""The Modified-GAN Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
When `self_normalized = True` the modified-GAN (Generative/Adversarial
Network) Csiszar-function is:
```none
f(... | def modified_gan(logu, self_normalized=False, name=None):
"""The Modified-GAN Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
When `self_normalized = True` the modified-GAN (Generative/Adversarial
Network) Csiszar-function is:
```none
f(... | [
"The",
"Modified",
"-",
"GAN",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L620-L661 | [
"def",
"modified_gan",
"(",
"logu",
",",
"self_normalized",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"chi_square\"",
",",
"[",
"logu",
"]",
")",
":",
"logu",
"=",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
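For `modified_gan`, the self-normalized form is f(u) = log(1 + u) − log(u) + 0.5(u − 1); the non-normalized branch is truncated above, so this sketch assumes it omits the 0.5(u − 1) term:

```python
import math

def modified_gan(logu, self_normalized=False):
    # f(u) = log(1 + u) - log(u), plus 0.5*(u - 1) when self-normalized.
    # log(1 + u) = log1p(exp(logu)) is the softplus of logu.
    f = math.log1p(math.exp(logu)) - logu
    if self_normalized:
        f += 0.5 * math.expm1(logu)
    return f
```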
test | dual_csiszar_function | Calculates the dual Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar-dual is defined as:
```none
f^*(u) = u f(1 / u)
```
where `f` is some other Csiszar-function.
For example, the dual of `kl_reverse` is `kl_forward`, i.e.... | tensorflow_probability/python/vi/csiszar_divergence.py | def dual_csiszar_function(logu, csiszar_function, name=None):
"""Calculates the dual Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar-dual is defined as:
```none
f^*(u) = u f(1 / u)
```
where `f` is some other Csiszar-funct... | def dual_csiszar_function(logu, csiszar_function, name=None):
"""Calculates the dual Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar-dual is defined as:
```none
f^*(u) = u f(1 / u)
```
where `f` is some other Csiszar-funct... | [
"Calculates",
"the",
"dual",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L664-L709 | [
"def",
"dual_csiszar_function",
"(",
"logu",
",",
"csiszar_function",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"dual_csiszar_function\"",
",",
"[",
"logu",
"]",
")",
":",
"return",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
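The Csiszar-dual f*(u) = u·f(1/u) has a one-line log-space form: exp(logu) · f(−logu). A sketch, using scalar KL Csiszar-functions (f(u) = −log u for `kl_reverse`, f(u) = u·log u for `kl_forward`) to illustrate the dual pairing mentioned in the docstring:

```python
import math

def dual_csiszar_function(logu, csiszar_function):
    # f^*(u) = u * f(1 / u); in log-space, 1/u corresponds to -logu.
    return math.exp(logu) * csiszar_function(-logu)
```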
test | symmetrized_csiszar_function | Symmetrizes a Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The symmetrized Csiszar-function is defined as:
```none
f_g(u) = 0.5 g(u) + 0.5 u g (1 / u)
```
where `g` is some other Csiszar-function.
We say the function is "symmetriz... | tensorflow_probability/python/vi/csiszar_divergence.py | def symmetrized_csiszar_function(logu, csiszar_function, name=None):
"""Symmetrizes a Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The symmetrized Csiszar-function is defined as:
```none
f_g(u) = 0.5 g(u) + 0.5 u g (1 / u)
```
wher... | def symmetrized_csiszar_function(logu, csiszar_function, name=None):
"""Symmetrizes a Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The symmetrized Csiszar-function is defined as:
```none
f_g(u) = 0.5 g(u) + 0.5 u g (1 / u)
```
wher... | [
"Symmetrizes",
"a",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L712-L780 | [
"def",
"symmetrized_csiszar_function",
"(",
"logu",
",",
"csiszar_function",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"symmetrized_csiszar_function\"",
",",
"[",
"logu",
"]",
")",
":",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
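Symmetrization, f_g(u) = 0.5·g(u) + 0.5·u·g(1/u), is equally direct in log-space. A scalar sketch; per the `jeffreys` record above, symmetrizing KL-reverse (g(u) = −log u) should recover the Jeffreys function 0.5·(u − 1)·log u:

```python
import math

def symmetrized_csiszar_function(logu, csiszar_function):
    # f_g(u) = 0.5 * g(u) + 0.5 * u * g(1 / u).
    return 0.5 * (csiszar_function(logu)
                  + math.exp(logu) * csiszar_function(-logu))
```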
test | monte_carlo_csiszar_f_divergence | Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergence for Csiszar-function f is given by:
```none
D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]
~= m**-1 sum_j^m f( p(x_j) / q(x_j... | tensorflow_probability/python/vi/csiszar_divergence.py | def monte_carlo_csiszar_f_divergence(
f,
p_log_prob,
q,
num_draws,
use_reparametrization=None,
seed=None,
name=None):
"""Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergenc... | def monte_carlo_csiszar_f_divergence(
f,
p_log_prob,
q,
num_draws,
use_reparametrization=None,
seed=None,
name=None):
"""Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergenc... | [
"Monte",
"-",
"Carlo",
"approximation",
"of",
"the",
"Csiszar",
"f",
"-",
"Divergence",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L783-L906 | [
"def",
"monte_carlo_csiszar_f_divergence",
"(",
"f",
",",
"p_log_prob",
",",
"q",
",",
"num_draws",
",",
"use_reparametrization",
"=",
"None",
",",
"seed",
"=",
"None",
",",
"name",
"=",
"None",
")",
":",
"reparameterization_types",
"=",
"tf",
".",
"nest",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
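The Monte-Carlo estimator above is D_f[p, q] ≈ m⁻¹ Σ_j f(p(x_j)/q(x_j)) with x_j ~ q. A hypothetical pure-Python sketch (the signature and sampling interface are simplified stand-ins, not the TFP API, which takes a distribution `q` and handles reparameterization):

```python
import math, random

def monte_carlo_csiszar_f_divergence(f, p_log_prob, q_sample, q_log_prob,
                                     num_draws, seed=0):
    # D_f[p, q] ~= m**-1 * sum_j f(p(x_j) / q(x_j)),  x_j ~ q.
    # f consumes log(u) = p_log_prob(x) - q_log_prob(x), per the
    # log-space convention of the Csiszar-functions above.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_draws):
        x = q_sample(rng)
        total += f(p_log_prob(x) - q_log_prob(x))
    return total / num_draws
```

When p = q every log-ratio is zero, so any Csiszar-function with f(1) = 0 yields an estimate of exactly zero.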
test | csiszar_vimco | Use VIMCO to lower the variance of gradient[csiszar_function(Avg(logu))].
This function generalizes VIMCO [(Mnih and Rezende, 2016)][1] to Csiszar
f-Divergences.
Note: if `q.reparameterization_type = tfd.FULLY_REPARAMETERIZED`,
consider using `monte_carlo_csiszar_f_divergence`.
The VIMCO loss is:
```non... | tensorflow_probability/python/vi/csiszar_divergence.py | def csiszar_vimco(f,
p_log_prob,
q,
num_draws,
num_batch_draws=1,
seed=None,
name=None):
"""Use VIMCO to lower the variance of gradient[csiszar_function(Avg(logu))].
This function generalizes VIMCO [(Mnih an... | def csiszar_vimco(f,
p_log_prob,
q,
num_draws,
num_batch_draws=1,
seed=None,
name=None):
"""Use VIMCO to lower the variance of gradient[csiszar_function(Avg(logu))].
This function generalizes VIMCO [(Mnih an... | [
"Use",
"VIMCO",
"to",
"lower",
"the",
"variance",
"of",
"gradient",
"[",
"csiszar_function",
"(",
"Avg",
"(",
"logu",
"))",
"]",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L909-L1006 | [
"def",
"csiszar_vimco",
"(",
"f",
",",
"p_log_prob",
",",
"q",
",",
"num_draws",
",",
"num_batch_draws",
"=",
"1",
",",
"seed",
"=",
"None",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | csiszar_vimco_helper | Helper to `csiszar_vimco`; computes `log_avg_u`, `log_sooavg_u`.
`axis = 0` of `logu` is presumed to correspond to iid samples from `q`, i.e.,
```none
logu[j] = log(u[j])
u[j] = p(x, h[j]) / q(h[j] | x)
h[j] iid~ q(H | x)
```
Args:
logu: Floating-type `Tensor` representing `log(p(x, h) / q(h | x))`... | tensorflow_probability/python/vi/csiszar_divergence.py | def csiszar_vimco_helper(logu, name=None):
"""Helper to `csiszar_vimco`; computes `log_avg_u`, `log_sooavg_u`.
`axis = 0` of `logu` is presumed to correspond to iid samples from `q`, i.e.,
```none
logu[j] = log(u[j])
u[j] = p(x, h[j]) / q(h[j] | x)
h[j] iid~ q(H | x)
```
Args:
logu: Floating-type... | def csiszar_vimco_helper(logu, name=None):
"""Helper to `csiszar_vimco`; computes `log_avg_u`, `log_sooavg_u`.
`axis = 0` of `logu` is presumed to correspond to iid samples from `q`, i.e.,
```none
logu[j] = log(u[j])
u[j] = p(x, h[j]) / q(h[j] | x)
h[j] iid~ q(H | x)
```
Args:
logu: Floating-type... | [
"Helper",
"to",
"csiszar_vimco",
";",
"computes",
"log_avg_u",
"log_sooavg_u",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L1009-L1111 | [
"def",
"csiszar_vimco_helper",
"(",
"logu",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"csiszar_vimco_helper\"",
",",
"[",
"logu",
"]",
")",
":",
"logu",
"=",
"tf",
".",
"convert_t... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
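`csiszar_vimco_helper` returns `log_avg_u` (log of the sample mean of u) and `log_sooavg_u` (the "swap-one-out" averages used by VIMCO). A pure-Python sketch under the assumption that entry j of the swap-one-out average replaces u[j] with the geometric mean of the remaining samples; both quantities use max-subtraction for stability:

```python
import math

def csiszar_vimco_helper(logu):
    # log_avg_u = log(mean(u)); log_sooavg_u[j] = log of the mean with
    # u[j] swapped for the geometric mean of the other m-1 samples.
    m = len(logu)
    log_max = max(logu)
    log_avg_u = log_max + math.log(
        sum(math.exp(lu - log_max) for lu in logu) / m)
    log_sooavg_u = []
    for j in range(m):
        loo = [logu[i] for i in range(m) if i != j]
        log_geo = sum(loo) / (m - 1)          # log of the geometric mean
        swapped = loo + [log_geo]
        mx = max(swapped)
        log_sooavg_u.append(
            mx + math.log(sum(math.exp(s - mx) for s in swapped) / m))
    return log_avg_u, log_sooavg_u
```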
test | _interp_regular_1d_grid_impl | 1-D interpolation that works with/without batching. | tensorflow_probability/python/math/interpolation.py | def _interp_regular_1d_grid_impl(x,
x_ref_min,
x_ref_max,
y_ref,
axis=-1,
batch_y_ref=False,
fill_value='constant_extensio... | def _interp_regular_1d_grid_impl(x,
x_ref_min,
x_ref_max,
y_ref,
axis=-1,
batch_y_ref=False,
fill_value='constant_extensio... | [
"1",
"-",
"D",
"interpolation",
"that",
"works",
"with",
"/",
"without",
"batching",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L38-L237 | [
"def",
"_interp_regular_1d_grid_impl",
"(",
"x",
",",
"x_ref_min",
",",
"x_ref_max",
",",
"y_ref",
",",
"axis",
"=",
"-",
"1",
",",
"batch_y_ref",
"=",
"False",
",",
"fill_value",
"=",
"'constant_extension'",
",",
"fill_value_below",
"=",
"None",
",",
"fill_va... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | interp_regular_1d_grid | Linear `1-D` interpolation on a regular (constant spacing) grid.
Given reference values, this function computes a piecewise linear interpolant
and evaluates it on a new set of `x` values.
The interpolant is built from `C` reference values indexed by one dimension
of `y_ref` (specified by the `axis` kwarg).
... | tensorflow_probability/python/math/interpolation.py | def interp_regular_1d_grid(x,
x_ref_min,
x_ref_max,
y_ref,
axis=-1,
fill_value='constant_extension',
fill_value_below=None,
fill_va... | def interp_regular_1d_grid(x,
x_ref_min,
x_ref_max,
y_ref,
axis=-1,
fill_value='constant_extension',
fill_value_below=None,
fill_va... | [
"Linear",
"1",
"-",
"D",
"interpolation",
"on",
"a",
"regular",
"(",
"constant",
"spacing",
")",
"grid",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L240-L368 | [
"def",
"interp_regular_1d_grid",
"(",
"x",
",",
"x_ref_min",
",",
"x_ref_max",
",",
"y_ref",
",",
"axis",
"=",
"-",
"1",
",",
"fill_value",
"=",
"'constant_extension'",
",",
"fill_value_below",
"=",
"None",
",",
"fill_value_above",
"=",
"None",
",",
"grid_regu... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
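The interpolation above builds a piecewise-linear interpolant from `C` reference values on an evenly spaced grid between `x_ref_min` and `x_ref_max`. A scalar pure-Python sketch of the core index arithmetic, assuming the default `'constant_extension'` fill (the real TFP function is batched, tensorized, and supports other fill values):

```python
def interp_regular_1d_grid(x, x_ref_min, x_ref_max, y_ref):
    # Map x to a fractional grid index in [0, C-1], clipping for
    # constant extension outside [x_ref_min, x_ref_max].
    c = len(y_ref)
    t = (x - x_ref_min) / (x_ref_max - x_ref_min) * (c - 1)
    t = min(max(t, 0.0), c - 1.0)
    lo = min(int(t), c - 2)       # left neighbor, kept in-bounds
    frac = t - lo
    return y_ref[lo] * (1.0 - frac) + y_ref[lo + 1] * frac
```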
test | batch_interp_regular_1d_grid | Linear `1-D` interpolation on a regular (constant spacing) grid.
Given [batch of] reference values, this function computes a piecewise linear
interpolant and evaluates it on a [batch of] of new `x` values.
The interpolant is built from `C` reference values indexed by one dimension
of `y_ref` (specified by the... | tensorflow_probability/python/math/interpolation.py | def batch_interp_regular_1d_grid(x,
x_ref_min,
x_ref_max,
y_ref,
axis=-1,
fill_value='constant_extension',
fill_value_belo... | def batch_interp_regular_1d_grid(x,
x_ref_min,
x_ref_max,
y_ref,
axis=-1,
fill_value='constant_extension',
fill_value_belo... | [
"Linear",
"1",
"-",
"D",
"interpolation",
"on",
"a",
"regular",
"(",
"constant",
"spacing",
")",
"grid",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L371-L497 | [
"def",
"batch_interp_regular_1d_grid",
"(",
"x",
",",
"x_ref_min",
",",
"x_ref_max",
",",
"y_ref",
",",
"axis",
"=",
"-",
"1",
",",
"fill_value",
"=",
"'constant_extension'",
",",
"fill_value_below",
"=",
"None",
",",
"fill_value_above",
"=",
"None",
",",
"gri... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | batch_interp_regular_nd_grid | Multi-linear interpolation on a regular (constant spacing) grid.
Given [a batch of] reference values, this function computes a multi-linear
interpolant and evaluates it on [a batch of] of new `x` values.
The interpolant is built from reference values indexed by `nd` dimensions
of `y_ref`, starting at `axis`.
... | tensorflow_probability/python/math/interpolation.py | def batch_interp_regular_nd_grid(x,
x_ref_min,
x_ref_max,
y_ref,
axis,
fill_value='constant_extension',
name=None):
"""M... | def batch_interp_regular_nd_grid(x,
x_ref_min,
x_ref_max,
y_ref,
axis,
fill_value='constant_extension',
name=None):
"""M... | [
"Multi",
"-",
"linear",
"interpolation",
"on",
"a",
"regular",
"(",
"constant",
"spacing",
")",
"grid",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L500-L681 | [
"def",
"batch_interp_regular_nd_grid",
"(",
"x",
",",
"x_ref_min",
",",
"x_ref_max",
",",
"y_ref",
",",
"axis",
",",
"fill_value",
"=",
"'constant_extension'",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _batch_interp_with_gather_nd | N-D interpolation that works with leading batch dims. | tensorflow_probability/python/math/interpolation.py | def _batch_interp_with_gather_nd(x, x_ref_min, x_ref_max, y_ref, nd, fill_value,
batch_dims):
"""N-D interpolation that works with leading batch dims."""
dtype = x.dtype
# In this function,
# x.shape = [A1, ..., An, D, nd], where n = batch_dims
# and
# y_ref.shape = [A1, ..... | def _batch_interp_with_gather_nd(x, x_ref_min, x_ref_max, y_ref, nd, fill_value,
batch_dims):
"""N-D interpolation that works with leading batch dims."""
dtype = x.dtype
# In this function,
# x.shape = [A1, ..., An, D, nd], where n = batch_dims
# and
# y_ref.shape = [A1, ..... | [
"N",
"-",
"D",
"interpolation",
"that",
"works",
"with",
"leading",
"batch",
"dims",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L684-L836 | [
"def",
"_batch_interp_with_gather_nd",
"(",
"x",
",",
"x_ref_min",
",",
"x_ref_max",
",",
"y_ref",
",",
"nd",
",",
"fill_value",
",",
"batch_dims",
")",
":",
"dtype",
"=",
"x",
".",
"dtype",
"# In this function,",
"# x.shape = [A1, ..., An, D, nd], where n = batch_dim... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _assert_ndims_statically | Assert that Tensor x has expected number of dimensions. | tensorflow_probability/python/math/interpolation.py | def _assert_ndims_statically(x,
expect_ndims=None,
expect_ndims_at_least=None,
expect_static=False):
"""Assert that Tensor x has expected number of dimensions."""
ndims = x.shape.ndims
if ndims is None:
if expect_static:
... | def _assert_ndims_statically(x,
expect_ndims=None,
expect_ndims_at_least=None,
expect_static=False):
"""Assert that Tensor x has expected number of dimensions."""
ndims = x.shape.ndims
if ndims is None:
if expect_static:
... | [
"Assert",
"that",
"Tensor",
"x",
"has",
"expected",
"number",
"of",
"dimensions",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L839-L853 | [
"def",
"_assert_ndims_statically",
"(",
"x",
",",
"expect_ndims",
"=",
"None",
",",
"expect_ndims_at_least",
"=",
"None",
",",
"expect_static",
"=",
"False",
")",
":",
"ndims",
"=",
"x",
".",
"shape",
".",
"ndims",
"if",
"ndims",
"is",
"None",
":",
"if",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _make_expand_x_fn_for_non_batch_interpolation | Make func to expand left/right (of axis) dims of tensors shaped like x. | tensorflow_probability/python/math/interpolation.py | def _make_expand_x_fn_for_non_batch_interpolation(y_ref, axis):
"""Make func to expand left/right (of axis) dims of tensors shaped like x."""
# This expansion is to help x broadcast with `y`, the output.
# In the non-batch case, the output shape is going to be
# y_ref.shape[:axis] + x.shape + y_ref.shape[axis... | def _make_expand_x_fn_for_non_batch_interpolation(y_ref, axis):
"""Make func to expand left/right (of axis) dims of tensors shaped like x."""
# This expansion is to help x broadcast with `y`, the output.
# In the non-batch case, the output shape is going to be
# y_ref.shape[:axis] + x.shape + y_ref.shape[axis... | [
"Make",
"func",
"to",
"expand",
"left",
"/",
"right",
"(",
"of",
"axis",
")",
"dims",
"of",
"tensors",
"shaped",
"like",
"x",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L856-L890 | [
"def",
"_make_expand_x_fn_for_non_batch_interpolation",
"(",
"y_ref",
",",
"axis",
")",
":",
"# This expansion is to help x broadcast with `y`, the output.",
"# In the non-batch case, the output shape is going to be",
"# y_ref.shape[:axis] + x.shape + y_ref.shape[axis+1:]",
"# Recall we made... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _make_expand_x_fn_for_batch_interpolation | Make func to expand left/right (of axis) dims of tensors shaped like x. | tensorflow_probability/python/math/interpolation.py | def _make_expand_x_fn_for_batch_interpolation(y_ref, axis):
"""Make func to expand left/right (of axis) dims of tensors shaped like x."""
# This expansion is to help x broadcast with `y`, the output.
# In the batch case, the output shape is going to be
# Broadcast(y_ref.shape[:axis], x.shape[:-1]) +
# x.s... | def _make_expand_x_fn_for_batch_interpolation(y_ref, axis):
"""Make func to expand left/right (of axis) dims of tensors shaped like x."""
# This expansion is to help x broadcast with `y`, the output.
# In the batch case, the output shape is going to be
# Broadcast(y_ref.shape[:axis], x.shape[:-1]) +
# x.s... | [
"Make",
"func",
"to",
"expand",
"left",
"/",
"right",
"(",
"of",
"axis",
")",
"dims",
"of",
"tensors",
"shaped",
"like",
"x",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L893-L927 | [
"def",
"_make_expand_x_fn_for_batch_interpolation",
"(",
"y_ref",
",",
"axis",
")",
":",
"# This expansion is to help x broadcast with `y`, the output.",
"# In the batch case, the output shape is going to be",
"# Broadcast(y_ref.shape[:axis], x.shape[:-1]) +",
"# x.shape[-1:] + y_ref.shap... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _batch_gather_with_broadcast | Like batch_gather, but broadcasts to the left of axis. | tensorflow_probability/python/math/interpolation.py | def _batch_gather_with_broadcast(params, indices, axis):
"""Like batch_gather, but broadcasts to the left of axis."""
# batch_gather assumes...
# params.shape = [A1,...,AN, B1,...,BM]
# indices.shape = [A1,...,AN, C]
# which gives output of shape
# [A1,...,AN, C, B1,...,BM]
# Here w... | def _batch_gather_with_broadcast(params, indices, axis):
"""Like batch_gather, but broadcasts to the left of axis."""
# batch_gather assumes...
# params.shape = [A1,...,AN, B1,...,BM]
# indices.shape = [A1,...,AN, C]
# which gives output of shape
# [A1,...,AN, C, B1,...,BM]
# Here w... | [
"Like",
"batch_gather",
"but",
"broadcasts",
"to",
"the",
"left",
"of",
"axis",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/interpolation.py#L930-L954 | [
"def",
"_batch_gather_with_broadcast",
"(",
"params",
",",
"indices",
",",
"axis",
")",
":",
"# batch_gather assumes...",
"# params.shape = [A1,...,AN, B1,...,BM]",
"# indices.shape = [A1,...,AN, C]",
"# which gives output of shape",
"# [A1,...,AN, C, B1,...,BM]",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _broadcast_cat_event_and_params | Broadcasts the event or distribution parameters. | tensorflow_probability/python/distributions/categorical.py | def _broadcast_cat_event_and_params(event, params, base_dtype):
"""Broadcasts the event or distribution parameters."""
if dtype_util.is_integer(event.dtype):
pass
elif dtype_util.is_floating(event.dtype):
# When `validate_args=True` we've already ensured int/float casting
# is closed.
event = tf.c... | def _broadcast_cat_event_and_params(event, params, base_dtype):
"""Broadcasts the event or distribution parameters."""
if dtype_util.is_integer(event.dtype):
pass
elif dtype_util.is_floating(event.dtype):
# When `validate_args=True` we've already ensured int/float casting
# is closed.
event = tf.c... | [
"Broadcasts",
"the",
"event",
"or",
"distribution",
"parameters",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/categorical.py#L33-L56 | [
"def",
"_broadcast_cat_event_and_params",
"(",
"event",
",",
"params",
",",
"base_dtype",
")",
":",
"if",
"dtype_util",
".",
"is_integer",
"(",
"event",
".",
"dtype",
")",
":",
"pass",
"elif",
"dtype_util",
".",
"is_floating",
"(",
"event",
".",
"dtype",
")"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | expectation_importance_sampler | r"""Monte Carlo estimate of \\(E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]\\).
With \\(p(z) := exp^{log_p(z)}\\), this `Op` returns
\\(n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ], z_i ~ q,\\)
\\(\approx E_q[ f(Z) p(Z) / q(Z) ]\\)
\\(= E_p[f(Z)]\\)
This integral is done in log-space with max-subtraction to b... | tensorflow_probability/python/internal/monte_carlo.py | def expectation_importance_sampler(f,
log_p,
sampling_dist_q,
z=None,
n=None,
seed=None,
name='expectation_imp... | def expectation_importance_sampler(f,
log_p,
sampling_dist_q,
z=None,
n=None,
seed=None,
name='expectation_imp... | [
"r",
"Monte",
"Carlo",
"estimate",
"of",
"\\\\",
"(",
"E_p",
"[",
"f",
"(",
"Z",
")",
"]",
"=",
"E_q",
"[",
"f",
"(",
"Z",
")",
"p",
"(",
"Z",
")",
"/",
"q",
"(",
"Z",
")",
"]",
"\\\\",
")",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/internal/monte_carlo.py#L30-L99 | [
"def",
"expectation_importance_sampler",
"(",
"f",
",",
"log_p",
",",
"sampling_dist_q",
",",
"z",
"=",
"None",
",",
"n",
"=",
"None",
",",
"seed",
"=",
"None",
",",
"name",
"=",
"'expectation_importance_sampler'",
")",
":",
"q",
"=",
"sampling_dist_q",
"wit... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
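The importance-sampling estimator above is E_p[f(Z)] ≈ n⁻¹ Σ_i f(z_i)·p(z_i)/q(z_i) with z_i ~ q. A hypothetical plain-space sketch (the sampling interface is a simplified stand-in for the TFP API, and the real implementation works in log-space with max-subtraction for stability):

```python
import math, random

def expectation_importance_sampler(f, log_p, q_sample, q_log_prob, n, seed=0):
    # E_p[f(Z)] ~= n**-1 * sum_i f(z_i) * p(z_i) / q(z_i),  z_i ~ q.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = q_sample(rng)
        total += f(z) * math.exp(log_p(z) - q_log_prob(z))
    return total / n
```

With p = q the importance weights are exactly 1, so the estimator reduces to an ordinary Monte-Carlo average.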
test | expectation_importance_sampler_logspace | r"""Importance sampling with a positive function, in log-space.
With \\(p(z) := exp^{log_p(z)}\\), and \\(f(z) = exp{log_f(z)}\\),
this `Op` returns
\\(Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,\\)
\\(\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]\\)
\\(= Log[E_p[f(Z)]]\\)
This integra... | tensorflow_probability/python/internal/monte_carlo.py | def expectation_importance_sampler_logspace(
log_f,
log_p,
sampling_dist_q,
z=None,
n=None,
seed=None,
name='expectation_importance_sampler_logspace'):
r"""Importance sampling with a positive function, in log-space.
With \\(p(z) := exp^{log_p(z)}\\), and \\(f(z) = exp{log_f(z)}\\),
th... | def expectation_importance_sampler_logspace(
log_f,
log_p,
sampling_dist_q,
z=None,
n=None,
seed=None,
name='expectation_importance_sampler_logspace'):
r"""Importance sampling with a positive function, in log-space.
With \\(p(z) := exp^{log_p(z)}\\), and \\(f(z) = exp{log_f(z)}\\),
th... | [
"r",
"Importance",
"sampling",
"with",
"a",
"positive",
"function",
"in",
"log",
"-",
"space",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/internal/monte_carlo.py#L102-L152 | [
"def",
"expectation_importance_sampler_logspace",
"(",
"log_f",
",",
"log_p",
",",
"sampling_dist_q",
",",
"z",
"=",
"None",
",",
"n",
"=",
"None",
",",
"seed",
"=",
"None",
",",
"name",
"=",
"'expectation_importance_sampler_logspace'",
")",
":",
"q",
"=",
"sa... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _logspace_mean | Evaluate `Log[E[values]]` in a stable manner.
Args:
log_values: `Tensor` holding `Log[values]`.
Returns:
`Tensor` of same `dtype` as `log_values`, reduced across dim 0.
`Log[Mean[values]]`. | tensorflow_probability/python/internal/monte_carlo.py | def _logspace_mean(log_values):
"""Evaluate `Log[E[values]]` in a stable manner.
Args:
log_values: `Tensor` holding `Log[values]`.
Returns:
`Tensor` of same `dtype` as `log_values`, reduced across dim 0.
`Log[Mean[values]]`.
"""
# center = Max[Log[values]], with stop-gradient
# The center ... | def _logspace_mean(log_values):
"""Evaluate `Log[E[values]]` in a stable manner.
Args:
log_values: `Tensor` holding `Log[values]`.
Returns:
`Tensor` of same `dtype` as `log_values`, reduced across dim 0.
`Log[Mean[values]]`.
"""
# center = Max[Log[values]], with stop-gradient
# The center ... | [
"Evaluate",
"Log",
"[",
"E",
"[",
"values",
"]]",
"in",
"a",
"stable",
"manner",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/internal/monte_carlo.py#L155-L180 | [
"def",
"_logspace_mean",
"(",
"log_values",
")",
":",
"# center = Max[Log[values]], with stop-gradient",
"# The center hopefully keep the exponentiated term small. It is canceled",
"# from the final result, so putting stop gradient on it will not change the",
"# final result. We put stop gradie... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _broadcast_event_and_samples | Broadcasts the event or samples. | tensorflow_probability/python/distributions/empirical.py | def _broadcast_event_and_samples(event, samples, event_ndims):
"""Broadcasts the event or samples."""
# This is the shape of self.samples, without the samples axis, i.e. the shape
# of the result of a call to dist.sample(). This way we can broadcast it with
# event to get a properly-sized event, then add the si... | def _broadcast_event_and_samples(event, samples, event_ndims):
"""Broadcasts the event or samples."""
# This is the shape of self.samples, without the samples axis, i.e. the shape
# of the result of a call to dist.sample(). This way we can broadcast it with
# event to get a properly-sized event, then add the si... | [
"Broadcasts",
"the",
"event",
"or",
"samples",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/empirical.py#L35-L49 | [
"def",
"_broadcast_event_and_samples",
"(",
"event",
",",
"samples",
",",
"event_ndims",
")",
":",
"# This is the shape of self.samples, without the samples axis, i.e. the shape",
"# of the result of a call to dist.sample(). This way we can broadcast it with",
"# event to get a properly-size... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | MetropolisHastings.one_step | Takes one step of the TransitionKernel.
Args:
current_state: `Tensor` or Python `list` of `Tensor`s representing the
current state(s) of the Markov chain(s).
previous_kernel_results: A (possibly nested) `tuple`, `namedtuple` or
`list` of `Tensor`s representing internal calculations made... | tensorflow_probability/python/mcmc/metropolis_hastings.py | def one_step(self, current_state, previous_kernel_results):
"""Takes one step of the TransitionKernel.
Args:
current_state: `Tensor` or Python `list` of `Tensor`s representing the
current state(s) of the Markov chain(s).
previous_kernel_results: A (possibly nested) `tuple`, `namedtuple` or
... | def one_step(self, current_state, previous_kernel_results):
"""Takes one step of the TransitionKernel.
Args:
current_state: `Tensor` or Python `list` of `Tensor`s representing the
current state(s) of the Markov chain(s).
previous_kernel_results: A (possibly nested) `tuple`, `namedtuple` or
... | [
"Takes",
"one",
"step",
"of",
"the",
"TransitionKernel",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/mcmc/metropolis_hastings.py#L165-L245 | [
"def",
"one_step",
"(",
"self",
",",
"current_state",
",",
"previous_kernel_results",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
"=",
"mcmc_util",
".",
"make_name",
"(",
"self",
".",
"name",
",",
"'mh'",
",",
"'one_... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | MetropolisHastings.bootstrap_results | Returns an object with the same type as returned by `one_step`.
Args:
init_state: `Tensor` or Python `list` of `Tensor`s representing the
initial state(s) of the Markov chain(s).
Returns:
kernel_results: A (possibly nested) `tuple`, `namedtuple` or `list` of
`Tensor`s representing ... | tensorflow_probability/python/mcmc/metropolis_hastings.py | def bootstrap_results(self, init_state):
"""Returns an object with the same type as returned by `one_step`.
Args:
init_state: `Tensor` or Python `list` of `Tensor`s representing the
initial state(s) of the Markov chain(s).
Returns:
kernel_results: A (possibly nested) `tuple`, `namedtup... | def bootstrap_results(self, init_state):
"""Returns an object with the same type as returned by `one_step`.
Args:
init_state: `Tensor` or Python `list` of `Tensor`s representing the
initial state(s) of the Markov chain(s).
Returns:
kernel_results: A (possibly nested) `tuple`, `namedtup... | [
"Returns",
"an",
"object",
"with",
"the",
"same",
"type",
"as",
"returned",
"by",
"one_step",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/mcmc/metropolis_hastings.py#L247-L277 | [
"def",
"bootstrap_results",
"(",
"self",
",",
"init_state",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
"=",
"mcmc_util",
".",
"make_name",
"(",
"self",
".",
"name",
",",
"'mh'",
",",
"'bootstrap_results'",
")",
",",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | minimize | Applies the BFGS algorithm to minimize a differentiable function.
Performs unconstrained minimization of a differentiable function using the
BFGS scheme. For details of the algorithm, see [Nocedal and Wright(2006)][1].
### Usage:
The following example demonstrates the BFGS optimizer attempting to find the
... | tensorflow_probability/python/optimizer/bfgs.py | def minimize(value_and_gradients_function,
initial_position,
tolerance=1e-8,
x_tolerance=0,
f_relative_tolerance=0,
initial_inverse_hessian_estimate=None,
max_iterations=50,
parallel_iterations=1,
stopping_condition=... | def minimize(value_and_gradients_function,
initial_position,
tolerance=1e-8,
x_tolerance=0,
f_relative_tolerance=0,
initial_inverse_hessian_estimate=None,
max_iterations=50,
parallel_iterations=1,
stopping_condition=... | [
"Applies",
"the",
"BFGS",
"algorithm",
"to",
"minimize",
"a",
"differentiable",
"function",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L72-L286 | [
"def",
"minimize",
"(",
"value_and_gradients_function",
",",
"initial_position",
",",
"tolerance",
"=",
"1e-8",
",",
"x_tolerance",
"=",
"0",
",",
"f_relative_tolerance",
"=",
"0",
",",
"initial_inverse_hessian_estimate",
"=",
"None",
",",
"max_iterations",
"=",
"50... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _inv_hessian_control_inputs | Computes control inputs to validate a provided inverse Hessian.
These ensure that the provided inverse Hessian is positive definite and
symmetric.
Args:
inv_hessian: The starting estimate for the inverse of the Hessian at the
initial point.
Returns:
A list of tf.Assert ops suitable for use with... | tensorflow_probability/python/optimizer/bfgs.py | def _inv_hessian_control_inputs(inv_hessian):
"""Computes control inputs to validate a provided inverse Hessian.
These ensure that the provided inverse Hessian is positive definite and
symmetric.
Args:
inv_hessian: The starting estimate for the inverse of the Hessian at the
initial point.
Returns... | def _inv_hessian_control_inputs(inv_hessian):
"""Computes control inputs to validate a provided inverse Hessian.
These ensure that the provided inverse Hessian is positive definite and
symmetric.
Args:
inv_hessian: The starting estimate for the inverse of the Hessian at the
initial point.
Returns... | [
"Computes",
"control",
"inputs",
"to",
"validate",
"a",
"provided",
"inverse",
"Hessian",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L289-L319 | [
"def",
"_inv_hessian_control_inputs",
"(",
"inv_hessian",
")",
":",
"# The easiest way to validate if the inverse Hessian is positive definite is",
"# to compute its Cholesky decomposition.",
"is_positive_definite",
"=",
"tf",
".",
"reduce_all",
"(",
"input_tensor",
"=",
"tf",
".",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _update_inv_hessian | Update the BFGS state by computing the next inverse hessian estimate. | tensorflow_probability/python/optimizer/bfgs.py | def _update_inv_hessian(prev_state, next_state):
"""Update the BFGS state by computing the next inverse hessian estimate."""
# Only update the inverse Hessian if not already failed or converged.
should_update = ~next_state.converged & ~next_state.failed
# Compute the normalization term (y^T . s), should not up... | def _update_inv_hessian(prev_state, next_state):
"""Update the BGFS state by computing the next inverse hessian estimate."""
# Only update the inverse Hessian if not already failed or converged.
should_update = ~next_state.converged & ~next_state.failed
# Compute the normalization term (y^T . s), should not up... | [
"Update",
"the",
"BGFS",
"state",
"by",
"computing",
"the",
"next",
"inverse",
"hessian",
"estimate",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L327-L352 | [
"def",
"_update_inv_hessian",
"(",
"prev_state",
",",
"next_state",
")",
":",
"# Only update the inverse Hessian if not already failed or converged.",
"should_update",
"=",
"~",
"next_state",
".",
"converged",
"&",
"~",
"next_state",
".",
"failed",
"# Compute the normalizatio... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _bfgs_inv_hessian_update | Applies the BFGS update to the inverse Hessian estimate.
The BFGS update rule is (note A^T denotes the transpose of a vector/matrix A).
```None
rho = 1/(grad_delta^T * position_delta)
U = (I - rho * position_delta * grad_delta^T)
H_1 = U * H_0 * U^T + rho * position_delta * position_delta^T
```
... | tensorflow_probability/python/optimizer/bfgs.py | def _bfgs_inv_hessian_update(grad_delta, position_delta, normalization_factor,
inv_hessian_estimate):
"""Applies the BFGS update to the inverse Hessian estimate.
The BFGS update rule is (note A^T denotes the transpose of a vector/matrix A).
```None
rho = 1/(grad_delta^T * positi... | def _bfgs_inv_hessian_update(grad_delta, position_delta, normalization_factor,
inv_hessian_estimate):
"""Applies the BFGS update to the inverse Hessian estimate.
The BFGS update rule is (note A^T denotes the transpose of a vector/matrix A).
```None
rho = 1/(grad_delta^T * positi... | [
"Applies",
"the",
"BFGS",
"update",
"to",
"the",
"inverse",
"Hessian",
"estimate",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L355-L433 | [
"def",
"_bfgs_inv_hessian_update",
"(",
"grad_delta",
",",
"position_delta",
",",
"normalization_factor",
",",
"inv_hessian_estimate",
")",
":",
"# The quadratic form: y^T.H.y; where H is the inverse Hessian and y is the",
"# gradient change.",
"conditioned_grad_delta",
"=",
"_mul_ri... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _mul_right | Computes the product of a matrix with a vector on the right.
Note this supports dynamic shapes and batched computation.
Examples:
M = tf.reshape(tf.range(6), shape=(3, 2))
# => [[0, 1],
# [2, 3],
# [4, 5]]
v = tf.constant([1, 2]) # Shape: (2,)
_mul_right(M, v)
# => [ 2, 8, 1... | tensorflow_probability/python/optimizer/bfgs.py | def _mul_right(mat, vec):
"""Computes the product of a matrix with a vector on the right.
Note this supports dynamic shapes and batched computation.
Examples:
M = tf.reshape(tf.range(6), shape=(3, 2))
# => [[0, 1],
# [2, 3],
# [4, 5]]
v = tf.constant([1, 2]) # Shape: (2,)
_mul_... | def _mul_right(mat, vec):
"""Computes the product of a matrix with a vector on the right.
Note this supports dynamic shapes and batched computation.
Examples:
M = tf.reshape(tf.range(6), shape=(3, 2))
# => [[0, 1],
# [2, 3],
# [4, 5]]
v = tf.constant([1, 2]) # Shape: (2,)
_mul_... | [
"Computes",
"the",
"product",
"of",
"a",
"matrix",
"with",
"a",
"vector",
"on",
"the",
"right",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L436-L473 | [
"def",
"_mul_right",
"(",
"mat",
",",
"vec",
")",
":",
"return",
"tf",
".",
"squeeze",
"(",
"tf",
".",
"matmul",
"(",
"mat",
",",
"tf",
".",
"expand_dims",
"(",
"vec",
",",
"axis",
"=",
"-",
"1",
")",
")",
",",
"axis",
"=",
"-",
"1",
")"
] | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
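The docstring's example (`M` of shape 3×2 times `v` of length 2) reduces, in the unbatched case, to an ordinary matrix-vector product:

```python
def mul_right(mat, vec):
    """Product of a matrix with a vector on the right (unbatched sketch)."""
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]
```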
test | _tensor_product | Computes the outer product of two possibly batched vectors.
Args:
t1: A `tf.Tensor` of shape `[..., n]`.
t2: A `tf.Tensor` of shape `[..., m]`.
Returns:
A tensor of shape `[..., n, m]` with matching batch dimensions, let's call
it `r`, whose components are:
```None
r[..., i, j] = t1[...... | tensorflow_probability/python/optimizer/bfgs.py | def _tensor_product(t1, t2):
"""Computes the outer product of two possibly batched vectors.
Args:
t1: A `tf.Tensor` of shape `[..., n]`.
t2: A `tf.Tensor` of shape `[..., m]`.
Returns:
A tensor of shape `[..., n, m]` with matching batch dimensions, let's call
it `r`, whose components are:
`... | def _tensor_product(t1, t2):
"""Computes the outer product of two possibly batched vectors.
Args:
t1: A `tf.Tensor` of shape `[..., n]`.
t2: A `tf.Tensor` of shape `[..., m]`.
Returns:
A tensor of shape `[..., n, m]` with matching batch dimensions, let's call
it `r`, whose components are:
`... | [
"Computes",
"the",
"outer",
"product",
"of",
"two",
"possibly",
"batched",
"vectors",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L476-L491 | [
"def",
"_tensor_product",
"(",
"t1",
",",
"t2",
")",
":",
"return",
"tf",
".",
"matmul",
"(",
"tf",
".",
"expand_dims",
"(",
"t1",
",",
"axis",
"=",
"-",
"1",
")",
",",
"tf",
".",
"expand_dims",
"(",
"t2",
",",
"axis",
"=",
"-",
"2",
")",
")"
] | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
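`_tensor_product` computes `r[..., i, j] = t1[..., i] * t2[..., j]` via `matmul` of expanded dims; the unbatched outer product it encodes is simply:

```python
def tensor_product(t1, t2):
    """Outer product: result[i][j] == t1[i] * t2[j] (unbatched sketch)."""
    return [[a * b for b in t2] for a in t1]
```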
test | _batch_transpose | Transpose a possibly batched matrix.
Args:
mat: A `tf.Tensor` of shape `[..., n, m]`.
Returns:
A tensor of shape `[..., m, n]` with matching batch dimensions. | tensorflow_probability/python/optimizer/bfgs.py | def _batch_transpose(mat):
"""Transpose a possibly batched matrix.
Args:
mat: A `tf.Tensor` of shape `[..., n, m]`.
Returns:
A tensor of shape `[..., m, n]` with matching batch dimensions.
"""
n = distribution_util.prefer_static_rank(mat)
perm = tf.range(n)
perm = tf.concat([perm[:-2], [perm[-1]... | def _batch_transpose(mat):
"""Transpose a possibly batched matrix.
Args:
mat: A `tf.Tensor` of shape `[..., n, m]`.
Returns:
A tensor of shape `[..., m, n]` with matching batch dimensions.
"""
n = distribution_util.prefer_static_rank(mat)
perm = tf.range(n)
perm = tf.concat([perm[:-2], [perm[-1]... | [
"Transpose",
"a",
"possibly",
"batched",
"matrix",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/bfgs.py#L494-L506 | [
"def",
"_batch_transpose",
"(",
"mat",
")",
":",
"n",
"=",
"distribution_util",
".",
"prefer_static_rank",
"(",
"mat",
")",
"perm",
"=",
"tf",
".",
"range",
"(",
"n",
")",
"perm",
"=",
"tf",
".",
"concat",
"(",
"[",
"perm",
"[",
":",
"-",
"2",
"]",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | pad_shape_right_with_ones | Maybe add `ndims` ones to `x.shape` on the right.
If `ndims` is zero, this is a no-op; otherwise, we will create and return a
new `Tensor` whose shape is that of `x` with `ndims` ones concatenated on the
right side. If the shape of `x` is known statically, the shape of the return
value will be as well.
Args... | tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py | def pad_shape_right_with_ones(x, ndims):
"""Maybe add `ndims` ones to `x.shape` on the right.
If `ndims` is zero, this is a no-op; otherwise, we will create and return a
new `Tensor` whose shape is that of `x` with `ndims` ones concatenated on the
right side. If the shape of `x` is known statically, the shape ... | def pad_shape_right_with_ones(x, ndims):
"""Maybe add `ndims` ones to `x.shape` on the right.
If `ndims` is zero, this is a no-op; otherwise, we will create and return a
new `Tensor` whose shape is that of `x` with `ndims` ones concatenated on the
right side. If the shape of `x` is known statically, the shape ... | [
"Maybe",
"add",
"ndims",
"ones",
"to",
"x",
".",
"shape",
"on",
"the",
"right",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py#L34-L65 | [
"def",
"pad_shape_right_with_ones",
"(",
"x",
",",
"ndims",
")",
":",
"if",
"not",
"(",
"isinstance",
"(",
"ndims",
",",
"int",
")",
"and",
"ndims",
">=",
"0",
")",
":",
"raise",
"ValueError",
"(",
"'`ndims` must be a Python `integer` greater than zero. Got: {}'",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | sum_rightmost_ndims_preserving_shape | Return `Tensor` with right-most ndims summed.
Args:
x: the `Tensor` whose right-most `ndims` dimensions to sum
ndims: number of right-most dimensions to sum.
Returns:
A `Tensor` resulting from calling `reduce_sum` on the `ndims` right-most
dimensions. If the shape of `x` is statically known, the r... | tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py | def sum_rightmost_ndims_preserving_shape(x, ndims):
"""Return `Tensor` with right-most ndims summed.
Args:
x: the `Tensor` whose right-most `ndims` dimensions to sum
ndims: number of right-most dimensions to sum.
Returns:
A `Tensor` resulting from calling `reduce_sum` on the `ndims` right-most
d... | def sum_rightmost_ndims_preserving_shape(x, ndims):
"""Return `Tensor` with right-most ndims summed.
Args:
x: the `Tensor` whose right-most `ndims` dimensions to sum
ndims: number of right-most dimensions to sum.
Returns:
A `Tensor` resulting from calling `reduce_sum` on the `ndims` right-most
d... | [
"Return",
"Tensor",
"with",
"right",
"-",
"most",
"ndims",
"summed",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py#L68-L86 | [
"def",
"sum_rightmost_ndims_preserving_shape",
"(",
"x",
",",
"ndims",
")",
":",
"x",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"value",
"=",
"x",
")",
"if",
"x",
".",
"shape",
".",
"ndims",
"is",
"not",
"None",
":",
"axes",
"=",
"tf",
".",
"range",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
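Summing the rightmost `ndims` axes while mapping over the remaining leading axes can be expressed recursively on nested lists (illustrative helpers, not TFP's):

```python
def _rank(x):
    """Nesting depth of a nested-list 'tensor'."""
    return 1 + _rank(x[0]) if isinstance(x, list) else 0

def _deep_sum(x):
    """Sum every scalar in a nested list."""
    return sum(_deep_sum(xi) for xi in x) if isinstance(x, list) else x

def sum_rightmost(x, ndims):
    """Sum the rightmost `ndims` axes, preserving the leading shape."""
    if _rank(x) == ndims:
        return _deep_sum(x)
    return [sum_rightmost(xi, ndims) for xi in x]
```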
test | sqrt_with_finite_grads | A sqrt function whose gradient at zero is very large but finite.
Args:
x: a `Tensor` whose sqrt is to be computed.
name: a Python `str` prefixed to all ops created by this function.
Default `None` (i.e., "sqrt_with_finite_grads").
Returns:
sqrt: the square root of `x`, with an overridden gradien... | tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py | def sqrt_with_finite_grads(x, name=None):
"""A sqrt function whose gradient at zero is very large but finite.
Args:
x: a `Tensor` whose sqrt is to be computed.
name: a Python `str` prefixed to all ops created by this function.
Default `None` (i.e., "sqrt_with_finite_grads").
Returns:
sqrt: the... | def sqrt_with_finite_grads(x, name=None):
"""A sqrt function whose gradient at zero is very large but finite.
Args:
x: a `Tensor` whose sqrt is to be computed.
name: a Python `str` prefixed to all ops created by this function.
Default `None` (i.e., "sqrt_with_finite_grads").
Returns:
sqrt: the... | [
"A",
"sqrt",
"function",
"whose",
"gradient",
"at",
"zero",
"is",
"very",
"large",
"but",
"finite",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py#L90-L154 | [
"def",
"sqrt_with_finite_grads",
"(",
"x",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'sqrt_with_finite_grads'",
",",
"[",
"x",
"]",
")",
":",
"x",
"=",
"tf",
".",
"convert_to_tenso... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | maybe_get_common_dtype | Return common dtype of arg_list, or None.
Args:
arg_list: an iterable of items which are either `None` or have a `dtype`
property.
Returns:
dtype: The common dtype of items in `arg_list`, or `None` if the list is
empty or all items are `None`. | tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py | def maybe_get_common_dtype(arg_list):
"""Return common dtype of arg_list, or None.
Args:
arg_list: an iterable of items which are either `None` or have a `dtype`
property.
Returns:
dtype: The common dtype of items in `arg_list`, or `None` if the list is
empty or all items are `None`.
"""
... | def maybe_get_common_dtype(arg_list):
"""Return common dtype of arg_list, or None.
Args:
arg_list: an iterable of items which are either `None` or have a `dtype`
property.
Returns:
dtype: The common dtype of items in `arg_list`, or `None` if the list is
empty or all items are `None`.
"""
... | [
"Return",
"common",
"dtype",
"of",
"arg_list",
"or",
"None",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/positive_semidefinite_kernels/internal/util.py#L157-L171 | [
"def",
"maybe_get_common_dtype",
"(",
"arg_list",
")",
":",
"# Note that `all` defaults to `True` if `arg_list` is empty.",
"if",
"all",
"(",
"a",
"is",
"None",
"for",
"a",
"in",
"arg_list",
")",
":",
"return",
"None",
"return",
"dtype_util",
".",
"common_dtype",
"(... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | minimize | Applies the L-BFGS algorithm to minimize a differentiable function.
Performs unconstrained minimization of a differentiable function using the
L-BFGS scheme. See [Nocedal and Wright(2006)][1] for details of the algorithm.
### Usage:
The following example demonstrates the L-BFGS optimizer attempting to find t... | tensorflow_probability/python/optimizer/lbfgs.py | def minimize(value_and_gradients_function,
initial_position,
num_correction_pairs=10,
tolerance=1e-8,
x_tolerance=0,
f_relative_tolerance=0,
initial_inverse_hessian_estimate=None,
max_iterations=50,
parallel_iteratio... | def minimize(value_and_gradients_function,
initial_position,
num_correction_pairs=10,
tolerance=1e-8,
x_tolerance=0,
f_relative_tolerance=0,
initial_inverse_hessian_estimate=None,
max_iterations=50,
parallel_iteratio... | [
"Applies",
"the",
"L",
"-",
"BFGS",
"algorithm",
"to",
"minimize",
"a",
"differentiable",
"function",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/lbfgs.py#L80-L260 | [
"def",
"minimize",
"(",
"value_and_gradients_function",
",",
"initial_position",
",",
"num_correction_pairs",
"=",
"10",
",",
"tolerance",
"=",
"1e-8",
",",
"x_tolerance",
"=",
"0",
",",
"f_relative_tolerance",
"=",
"0",
",",
"initial_inverse_hessian_estimate",
"=",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _get_initial_state | Create LBfgsOptimizerResults with initial state of search procedure. | tensorflow_probability/python/optimizer/lbfgs.py | def _get_initial_state(value_and_gradients_function,
initial_position,
num_correction_pairs,
tolerance):
"""Create LBfgsOptimizerResults with initial state of search procedure."""
init_args = bfgs_utils.get_initial_state_args(
value_and_grad... | def _get_initial_state(value_and_gradients_function,
initial_position,
num_correction_pairs,
tolerance):
"""Create LBfgsOptimizerResults with initial state of search procedure."""
init_args = bfgs_utils.get_initial_state_args(
value_and_grad... | [
"Create",
"LBfgsOptimizerResults",
"with",
"initial",
"state",
"of",
"search",
"procedure",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/lbfgs.py#L263-L274 | [
"def",
"_get_initial_state",
"(",
"value_and_gradients_function",
",",
"initial_position",
",",
"num_correction_pairs",
",",
"tolerance",
")",
":",
"init_args",
"=",
"bfgs_utils",
".",
"get_initial_state_args",
"(",
"value_and_gradients_function",
",",
"initial_position",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _get_search_direction | Computes the search direction to follow at the current state.
On the `k`-th iteration of the main L-BFGS algorithm, the state has collected
the most recent `m` correction pairs in position_deltas and gradient_deltas,
where `k = state.num_iterations` and `m = min(k, num_correction_pairs)`.
Assuming these, the ... | tensorflow_probability/python/optimizer/lbfgs.py | def _get_search_direction(state):
"""Computes the search direction to follow at the current state.
On the `k`-th iteration of the main L-BFGS algorithm, the state has collected
the most recent `m` correction pairs in position_deltas and gradient_deltas,
where `k = state.num_iterations` and `m = min(k, num_corr... | def _get_search_direction(state):
"""Computes the search direction to follow at the current state.
On the `k`-th iteration of the main L-BFGS algorithm, the state has collected
the most recent `m` correction pairs in position_deltas and gradient_deltas,
where `k = state.num_iterations` and `m = min(k, num_corr... | [
"Computes",
"the",
"search",
"direction",
"to",
"follow",
"at",
"the",
"current",
"state",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/lbfgs.py#L277-L366 | [
"def",
"_get_search_direction",
"(",
"state",
")",
":",
"# The number of correction pairs that have been collected so far.",
"num_elements",
"=",
"tf",
".",
"minimum",
"(",
"state",
".",
"num_iterations",
",",
"distribution_util",
".",
"prefer_static_shape",
"(",
"state",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
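`_get_search_direction` applies the inverse Hessian implicitly defined by the stored `(position_delta, gradient_delta)` pairs. The classic two-loop recursion it is based on can be sketched in pure Python (oldest pair first in the lists; illustrative, not the batched TFP code):

```python
def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: multiply grad by the implicit inverse Hessian."""
    rhos = [1.0 / sum(si * yi for si, yi in zip(s, y))
            for s, y in zip(s_list, y_list)]
    q, alphas = list(grad), []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append(alpha)
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    # Initial H0 = gamma * I, scaled by the most recent correction pair.
    s, y = s_list[-1], y_list[-1]
    gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    r = [gamma * qi for qi in q]
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return r  # the descent direction is -r
```

As with full BFGS, the secant condition holds for the newest pair: feeding in `grad = y` returns `s`.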
test | _make_empty_queue_for | Creates a `tf.Tensor` suitable to hold `k` element-shaped tensors.
For example:
```python
element = tf.constant([[0., 1., 2., 3., 4.],
[5., 6., 7., 8., 9.]])
# A queue capable of holding 3 elements.
_make_empty_queue_for(3, element)
# => [[[ 0., 0., 0., 0., 0.],
... | tensorflow_probability/python/optimizer/lbfgs.py | def _make_empty_queue_for(k, element):
"""Creates a `tf.Tensor` suitable to hold `k` element-shaped tensors.
For example:
```python
element = tf.constant([[0., 1., 2., 3., 4.],
[5., 6., 7., 8., 9.]])
# A queue capable of holding 3 elements.
_make_empty_queue_for(3, elemen... | def _make_empty_queue_for(k, element):
"""Creates a `tf.Tensor` suitable to hold `k` element-shaped tensors.
For example:
```python
element = tf.constant([[0., 1., 2., 3., 4.],
[5., 6., 7., 8., 9.]])
# A queue capable of holding 3 elements.
_make_empty_queue_for(3, elemen... | [
"Creates",
"a",
"tf",
".",
"Tensor",
"suitable",
"to",
"hold",
"k",
"element",
"-",
"shaped",
"tensors",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/lbfgs.py#L369-L400 | [
"def",
"_make_empty_queue_for",
"(",
"k",
",",
"element",
")",
":",
"queue_shape",
"=",
"tf",
".",
"concat",
"(",
"[",
"[",
"k",
"]",
",",
"distribution_util",
".",
"prefer_static_shape",
"(",
"element",
")",
"]",
",",
"axis",
"=",
"0",
")",
"return",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _queue_push | Conditionally push new vectors into a batch of first-in-first-out queues.
The `queue` of shape `[k, ..., n]` can be thought of as a batch of queues,
each holding `k` n-D vectors; while `new_vecs` of shape `[..., n]` is a
fresh new batch of n-D vectors. The `should_update` batch of Boolean scalars,
i.e. shape `... | tensorflow_probability/python/optimizer/lbfgs.py | def _queue_push(queue, should_update, new_vecs):
"""Conditionally push new vectors into a batch of first-in-first-out queues.
The `queue` of shape `[k, ..., n]` can be thought of as a batch of queues,
each holding `k` n-D vectors; while `new_vecs` of shape `[..., n]` is a
fresh new batch of n-D vectors. The `s... | def _queue_push(queue, should_update, new_vecs):
"""Conditionally push new vectors into a batch of first-in-first-out queues.
The `queue` of shape `[k, ..., n]` can be thought of as a batch of queues,
each holding `k` n-D vectors; while `new_vecs` of shape `[..., n]` is a
fresh new batch of n-D vectors. The `s... | [
"Conditionally",
"push",
"new",
"vectors",
"into",
"a",
"batch",
"of",
"first",
"-",
"in",
"-",
"first",
"-",
"out",
"queues",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/lbfgs.py#L403-L466 | [
"def",
"_queue_push",
"(",
"queue",
",",
"should_update",
",",
"new_vecs",
")",
":",
"new_queue",
"=",
"tf",
".",
"concat",
"(",
"[",
"queue",
"[",
"1",
":",
"]",
",",
"[",
"new_vecs",
"]",
"]",
",",
"axis",
"=",
"0",
")",
"update_pattern",
"=",
"t... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _psd_mask | Computes whether each square matrix in the input is positive semi-definite.
Args:
x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.
Returns:
mask: A floating-point `Tensor` of shape `[B1, ... Bn]`. Each
scalar is 1 if the corresponding matrix was PSD, otherwise 0. | tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py | def _psd_mask(x):
"""Computes whether each square matrix in the input is positive semi-definite.
Args:
x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.
Returns:
mask: A floating-point `Tensor` of shape `[B1, ... Bn]`. Each
scalar is 1 if the corresponding matrix was PSD, otherwise 0.
... | def _psd_mask(x):
"""Computes whether each square matrix in the input is positive semi-definite.
Args:
x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.
Returns:
mask: A floating-point `Tensor` of shape `[B1, ... Bn]`. Each
scalar is 1 if the corresponding matrix was PSD, otherwise 0.
... | [
"Computes",
"whether",
"each",
"square",
"matrix",
"in",
"the",
"input",
"is",
"positive",
"semi",
"-",
"definite",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py#L77-L104 | [
"def",
"_psd_mask",
"(",
"x",
")",
":",
"# Allegedly",
"# https://scicomp.stackexchange.com/questions/12979/testing-if-a-matrix-is-positive-semi-definite",
"# it is more efficient to test for positive semi-definiteness by",
"# trying to compute the Cholesky decomposition -- the matrix is PSD",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _det_large_enough_mask | Returns whether the input matches the given determinant limit.
Args:
x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.
det_bounds: A floating-point `Tensor` that must broadcast to shape
`[B1, ..., Bn]`, giving the desired lower bound on the
determinants in `x`.
Returns:
mask: A ... | tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py | def _det_large_enough_mask(x, det_bounds):
"""Returns whether the input matches the given determinant limit.
Args:
x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.
det_bounds: A floating-point `Tensor` that must broadcast to shape
`[B1, ..., Bn]`, giving the desired lower bound on the
... | def _det_large_enough_mask(x, det_bounds):
"""Returns whether the input matches the given determinant limit.
Args:
x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.
det_bounds: A floating-point `Tensor` that must broadcast to shape
`[B1, ..., Bn]`, giving the desired lower bound on the
... | [
"Returns",
"whether",
"the",
"input",
"matches",
"the",
"given",
"determinant",
"limit",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py#L107-L130 | [
"def",
"_det_large_enough_mask",
"(",
"x",
",",
"det_bounds",
")",
":",
"# For the curious: I wonder whether it is possible and desirable to",
"# use a Cholesky decomposition-based algorithm for this, since the",
"# only matrices whose determinant this code cares about will be PSD.",
"# Didn't... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _uniform_correlation_like_matrix | Returns a uniformly random `Tensor` of "correlation-like" matrices.
A "correlation-like" matrix is a symmetric square matrix with all entries
between -1 and 1 (inclusive) and 1s on the main diagonal. Of these,
the ones that are positive semi-definite are exactly the correlation
matrices.
Args:
num_rows... | tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py | def _uniform_correlation_like_matrix(num_rows, batch_shape, dtype, seed):
"""Returns a uniformly random `Tensor` of "correlation-like" matrices.
A "correlation-like" matrix is a symmetric square matrix with all entries
between -1 and 1 (inclusive) and 1s on the main diagonal. Of these,
the ones that are posit... | def _uniform_correlation_like_matrix(num_rows, batch_shape, dtype, seed):
"""Returns a uniformly random `Tensor` of "correlation-like" matrices.
A "correlation-like" matrix is a symmetric square matrix with all entries
between -1 and 1 (inclusive) and 1s on the main diagonal. Of these,
the ones that are posit... | [
"Returns",
"a",
"uniformly",
"random",
"Tensor",
"of",
"correlation",
"-",
"like",
"matrices",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py#L133-L169 | [
"def",
"_uniform_correlation_like_matrix",
"(",
"num_rows",
",",
"batch_shape",
",",
"dtype",
",",
"seed",
")",
":",
"num_entries",
"=",
"num_rows",
"*",
"(",
"num_rows",
"+",
"1",
")",
"/",
"2",
"ones",
"=",
"tf",
".",
"ones",
"(",
"shape",
"=",
"[",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
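A "correlation-like" matrix, as defined in the record above, is symmetric with unit diagonal and entries in [-1, 1]. A NumPy sketch of drawing one uniformly over the free (strict upper-triangle) entries (hypothetical helper; the TFP code works batched in TensorFlow):

```python
import numpy as np

def uniform_correlation_like(num_rows, rng):
    """Draw one correlation-like matrix: symmetric, 1s on the diagonal,
    strict upper triangle i.i.d. Uniform(-1, 1)."""
    m = np.eye(num_rows)
    iu = np.triu_indices(num_rows, k=1)
    vals = rng.uniform(-1.0, 1.0, size=len(iu[0]))
    m[iu] = vals
    m.T[iu] = vals  # mirror into the lower triangle (m.T is a view)
    return m
```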
test | correlation_matrix_volume_rejection_samples | Returns rejection samples from trying to get good correlation matrices.
The proposal being rejected from is the uniform distribution on
"correlation-like" matrices. We say a matrix is "correlation-like"
if it is a symmetric square matrix with all entries between -1 and 1
(inclusive) and 1s on the main diagona... | tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py | def correlation_matrix_volume_rejection_samples(
det_bounds, dim, sample_shape, dtype, seed):
"""Returns rejection samples from trying to get good correlation matrices.
The proposal being rejected from is the uniform distribution on
"correlation-like" matrices. We say a matrix is "correlation-like"
if it ... | def correlation_matrix_volume_rejection_samples(
det_bounds, dim, sample_shape, dtype, seed):
"""Returns rejection samples from trying to get good correlation matrices.
The proposal being rejected from is the uniform distribution on
"correlation-like" matrices. We say a matrix is "correlation-like"
if it ... | [
"Returns",
"rejection",
"samples",
"from",
"trying",
"to",
"get",
"good",
"correlation",
"matrices",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py#L172-L216 | [
"def",
"correlation_matrix_volume_rejection_samples",
"(",
"det_bounds",
",",
"dim",
",",
"sample_shape",
",",
"dtype",
",",
"seed",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"\"rejection_sampler\"",
")",
":",
"rej_proposals",
"="... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
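The rejection scheme in the record above proposes uniform correlation-like matrices and keeps those that are genuine correlation matrices (PSD) with determinant above the bound. A small NumPy sketch of that loop (illustrative only; eigenvalues stand in for the PSD and determinant checks):

```python
import numpy as np

def rejection_sample_correlation(dim, det_bound, n, seed=0):
    """Accept uniform correlation-like proposals that are PSD and have
    determinant >= det_bound; return n accepted matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(dim, k=1)
    accepted = []
    while len(accepted) < n:
        m = np.eye(dim)
        vals = rng.uniform(-1.0, 1.0, size=len(iu[0]))
        m[iu] = vals
        m.T[iu] = vals  # symmetrize the proposal
        eigs = np.linalg.eigvalsh(m)
        if eigs.min() >= 0.0 and np.prod(eigs) >= det_bound:
            accepted.append(m)
    return np.stack(accepted)
```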
test | _clopper_pearson_confidence_interval | Computes a confidence interval for the mean of the given 1-D distribution.
Assumes (and checks) that the given distribution is Bernoulli, i.e.,
takes only two values. This licenses using the CDF of the binomial
distribution for the confidence, which is tighter (for extreme
probabilities) than the DKWM inequal... | tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py | def _clopper_pearson_confidence_interval(samples, error_rate):
"""Computes a confidence interval for the mean of the given 1-D distribution.
Assumes (and checks) that the given distribution is Bernoulli, i.e.,
takes only two values. This licenses using the CDF of the binomial
distribution for the confidence, ... | def _clopper_pearson_confidence_interval(samples, error_rate):
"""Computes a confidence interval for the mean of the given 1-D distribution.
Assumes (and checks) that the given distribution is Bernoulli, i.e.,
takes only two values. This licenses using the CDF of the binomial
distribution for the confidence, ... | [
"Computes",
"a",
"confidence",
"interval",
"for",
"the",
"mean",
"of",
"the",
"given",
"1",
"-",
"D",
"distribution",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py#L219-L294 | [
"def",
"_clopper_pearson_confidence_interval",
"(",
"samples",
",",
"error_rate",
")",
":",
"# TODO(b/78025336) Migrate this confidence interval function",
"# to statistical_testing.py. In order to do that",
"# - Get the binomial CDF from the Binomial distribution",
"# - Implement scalar root... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
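The Clopper-Pearson interval referenced in the record above can be built with the standard library alone by inverting the binomial CDF with bisection; this is equivalent to the usual Beta-quantile construction (helper names are illustrative, not the TFP code):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), by direct summation."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(successes, trials, error_rate=0.05):
    """Exact Clopper-Pearson interval for a Bernoulli mean."""
    half = error_rate / 2.0

    def invert(target_cdf, k):
        # binom_cdf(k, trials, p) is decreasing in p; bisect for the root.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if binom_cdf(k, trials, mid) > target_cdf:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    low = 0.0 if successes == 0 else invert(1.0 - half, successes - 1)
    high = 1.0 if successes == trials else invert(half, successes)
    return low, high
```

For instance, 5 successes in 10 trials at 95% confidence gives roughly (0.187, 0.813).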
test | compute_true_volumes | Returns confidence intervals for the desired correlation matrix volumes.
The confidence intervals are computed by the [Clopper-Pearson method]
(https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval).
Args:
det_bounds: A rank-1 numpy array of lower bounds on the
determinants of acceptab... | tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py | def compute_true_volumes(
det_bounds, dim, num_samples, error_rate=1e-6, seed=42):
"""Returns confidence intervals for the desired correlation matrix volumes.
The confidence intervals are computed by the [Clopper-Pearson method]
(https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval).
Args... | def compute_true_volumes(
det_bounds, dim, num_samples, error_rate=1e-6, seed=42):
"""Returns confidence intervals for the desired correlation matrix volumes.
The confidence intervals are computed by the [Clopper-Pearson method]
(https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval).
Args... | [
"Returns",
"confidence",
"intervals",
"for",
"the",
"desired",
"correlation",
"matrix",
"volumes",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/internal/correlation_matrix_volumes_lib.py#L297-L332 | [
"def",
"compute_true_volumes",
"(",
"det_bounds",
",",
"dim",
",",
"num_samples",
",",
"error_rate",
"=",
"1e-6",
",",
"seed",
"=",
"42",
")",
":",
"bounds",
"=",
"{",
"}",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"Session",
"(",
")",
"as",
"sess... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _kl_von_mises_von_mises | Batchwise KL divergence KL(d1 || d2) with d1 and d2 von Mises.
Args:
d1: instance of a von Mises distribution object.
d2: instance of a von Mises distribution object.
name: (optional) Name to use for created operations.
default is "kl_von_mises_von_mises".
Returns:
Batchwise KL(d1 || d2) | tensorflow_probability/python/distributions/von_mises.py | def _kl_von_mises_von_mises(d1, d2, name=None):
"""Batchwise KL divergence KL(d1 || d2) with d1 and d2 von Mises.
Args:
d1: instance of a von Mises distribution object.
d2: instance of a von Mises distribution object.
name: (optional) Name to use for created operations.
default is "kl_von_mises... | def _kl_von_mises_von_mises(d1, d2, name=None):
"""Batchwise KL divergence KL(d1 || d2) with d1 and d2 von Mises.
Args:
d1: instance of a von Mises distribution object.
d2: instance of a von Mises distribution object.
name: (optional) Name to use for created operations.
default is "kl_von_mises... | [
"Batchwise",
"KL",
"divergence",
"KL",
"(",
"d1",
"||",
"d2",
")",
"with",
"d1",
"and",
"d2",
"von",
"Mises",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/von_mises.py#L250-L304 | [
"def",
"_kl_von_mises_von_mises",
"(",
"d1",
",",
"d2",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"name_scope",
"(",
"name",
"or",
"\"kl_von_mises_von_mises\"",
")",
":",
"# The density of von Mises is (abbreviating the concentration for conc):",
"# vonMi... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
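The von Mises KL divergence in the record above has the closed form KL(d1 || d2) = (k1 - k2*cos(mu1 - mu2)) * I1(k1)/I0(k1) + log(I0(k2)/I0(k1)), where I0 and I1 are modified Bessel functions. A pure-Python check of that formula, with the Bessel functions computed by power series (illustrative helper, not the TFP implementation):

```python
from math import cos, log, factorial

def bessel_i(nu, x, terms=40):
    """Modified Bessel function I_nu(x) for integer nu, by power series."""
    return sum((x / 2.0) ** (2 * m + nu) / (factorial(m) * factorial(m + nu))
               for m in range(terms))

def kl_von_mises(loc1, conc1, loc2, conc2):
    """Closed-form KL(d1 || d2) for two von Mises distributions."""
    i0_1 = bessel_i(0, conc1)
    return ((conc1 - conc2 * cos(loc1 - loc2)) * bessel_i(1, conc1) / i0_1
            + log(bessel_i(0, conc2) / i0_1))
```

As expected, the divergence vanishes for identical parameters and is positive otherwise.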
test | von_mises_cdf | Computes the cumulative distribution function (CDF) of von Mises distribution.
Denote the density of vonMises(loc=0, concentration=concentration) by p(t).
Note that p(t) is periodic, p(t) = p(t + 2 pi).
The CDF at the point x is defined as int_{-pi}^x p(t) dt.
Thus, when x in [-pi, pi], the CDF is in [0, 1]; when x... | tensorflow_probability/python/distributions/von_mises.py | def von_mises_cdf(x, concentration):
"""Computes the cumulative distribution function (CDF) of von Mises distribution.
Denote the density of vonMises(loc=0, concentration=concentration) by p(t).
Note that p(t) is periodic, p(t) = p(t + 2 pi).
The CDF at the point x is defined as int_{-pi}^x p(t) dt.
Thus, when x ... | def von_mises_cdf(x, concentration):
"""Computes the cumulative distribution function (CDF) of von Mises distribution.
Denote the density of vonMises(loc=0, concentration=concentration) by p(t).
Note that p(t) is periodic, p(t) = p(t + 2 pi).
The CDF at the point x is defined as int_{-pi}^x p(t) dt.
Thus, when x ... | [
"Computes",
"the",
"cumulative",
"distribution",
"function",
"(",
"CDF",
")",
"of",
"von",
"Mises",
"distribution",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/von_mises.py#L308-L374 | [
"def",
"von_mises_cdf",
"(",
"x",
",",
"concentration",
")",
":",
"x",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"value",
"=",
"x",
")",
"concentration",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"value",
"=",
"concentration",
")",
"dtype",
"=",
"x",
"."... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
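The definition quoted in the `von_mises_cdf` record, CDF(x) = int_{-pi}^x p(t) dt, can be checked directly by quadrature. This sketch integrates the vonMises(0, concentration) density with the trapezoidal rule (a reference check for intuition only; the TFP code uses a Bessel series and a Normal approximation instead):

```python
from math import cos, exp, pi, factorial

def vm_density(t, conc, i0):
    """vonMises(loc=0, concentration=conc) density at t."""
    return exp(conc * cos(t)) / (2.0 * pi * i0)

def von_mises_cdf_numeric(x, conc, n=20000):
    """Trapezoidal-rule integral of the density from -pi to x."""
    # I0 via its power series, for the normalizing constant.
    i0 = sum((conc / 2.0) ** (2 * m) / factorial(m) ** 2 for m in range(40))
    h = (x + pi) / n
    total = 0.5 * (vm_density(-pi, conc, i0) + vm_density(x, conc, i0))
    total += sum(vm_density(-pi + k * h, conc, i0) for k in range(1, n))
    return total * h
```

By symmetry of the density about 0, the CDF at 0 is 0.5, and the full integral over [-pi, pi] is 1.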
test | _von_mises_cdf_series | Computes the von Mises CDF and its derivative via series expansion. | tensorflow_probability/python/distributions/von_mises.py | def _von_mises_cdf_series(x, concentration, num_terms, dtype):
"""Computes the von Mises CDF and its derivative via series expansion."""
# Keep the number of terms as a float. It should be a small integer, so
# exactly representable as a float.
num_terms = tf.cast(num_terms, dtype=dtype)
def loop_body(n, rn,... | def _von_mises_cdf_series(x, concentration, num_terms, dtype):
"""Computes the von Mises CDF and its derivative via series expansion."""
# Keep the number of terms as a float. It should be a small integer, so
# exactly representable as a float.
num_terms = tf.cast(num_terms, dtype=dtype)
def loop_body(n, rn,... | [
"Computes",
"the",
"von",
"Mises",
"CDF",
"and",
"its",
"derivative",
"via",
"series",
"expansion",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/von_mises.py#L377-L420 | [
"def",
"_von_mises_cdf_series",
"(",
"x",
",",
"concentration",
",",
"num_terms",
",",
"dtype",
")",
":",
"# Keep the number of terms as a float. It should be a small integer, so",
"# exactly representable as a float.",
"num_terms",
"=",
"tf",
".",
"cast",
"(",
"num_terms",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _von_mises_cdf_normal | Computes the von Mises CDF and its derivative via Normal approximation. | tensorflow_probability/python/distributions/von_mises.py | def _von_mises_cdf_normal(x, concentration, dtype):
"""Computes the von Mises CDF and its derivative via Normal approximation."""
def cdf_func(concentration):
"""A helper function that is passed to value_and_gradient."""
# z is an "almost Normally distributed" random variable.
z = ((np.sqrt(2. / np.pi)... | def _von_mises_cdf_normal(x, concentration, dtype):
"""Computes the von Mises CDF and its derivative via Normal approximation."""
def cdf_func(concentration):
"""A helper function that is passed to value_and_gradient."""
# z is an "almost Normally distributed" random variable.
z = ((np.sqrt(2. / np.pi)... | [
"Computes",
"the",
"von",
"Mises",
"CDF",
"and",
"its",
"derivative",
"via",
"Normal",
"approximation",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/von_mises.py#L423-L447 | [
"def",
"_von_mises_cdf_normal",
"(",
"x",
",",
"concentration",
",",
"dtype",
")",
":",
"def",
"cdf_func",
"(",
"concentration",
")",
":",
"\"\"\"A helper function that is passed to value_and_gradient.\"\"\"",
"# z is an \"almost Normally distributed\" random variable.",
"z",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | random_von_mises | Samples from the standardized von Mises distribution.
The distribution is vonMises(loc=0, concentration=concentration), so the mean
is zero.
The location can then be changed by adding it to the samples.
The sampling algorithm is rejection sampling with wrapped Cauchy proposal [1].
The samples are pathwise d... | tensorflow_probability/python/distributions/von_mises.py | def random_von_mises(shape, concentration, dtype=tf.float32, seed=None):
"""Samples from the standardized von Mises distribution.
The distribution is vonMises(loc=0, concentration=concentration), so the mean
is zero.
The location can then be changed by adding it to the samples.
The sampling algorithm is rej... | def random_von_mises(shape, concentration, dtype=tf.float32, seed=None):
"""Samples from the standardized von Mises distribution.
The distribution is vonMises(loc=0, concentration=concentration), so the mean
is zero.
The location can then be changed by adding it to the samples.
The sampling algorithm is rej... | [
"Samples",
"from",
"the",
"standardized",
"von",
"Mises",
"distribution",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/von_mises.py#L450-L582 | [
"def",
"random_von_mises",
"(",
"shape",
",",
"concentration",
",",
"dtype",
"=",
"tf",
".",
"float32",
",",
"seed",
"=",
"None",
")",
":",
"seed",
"=",
"SeedStream",
"(",
"seed",
",",
"salt",
"=",
"\"von_mises\"",
")",
"concentration",
"=",
"tf",
".",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
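The `random_von_mises` record describes rejection sampling with a wrapped-Cauchy proposal [1]. The classic non-reparameterized version of that scheme is the Best-Fisher (1979) algorithm, sketched here in plain Python (an assumption: this shows the proposal family only, not TFP's pathwise-differentiable implementation):

```python
from math import cos, acos, sqrt, log, pi
import random

def sample_von_mises(concentration, n, seed=0):
    """Draw n samples from vonMises(loc=0, concentration) by rejection
    from a wrapped-Cauchy proposal (Best & Fisher, 1979)."""
    rng = random.Random(seed)
    kappa = concentration
    b = 1.0 + sqrt(1.0 + 4.0 * kappa * kappa)
    rho = (b - sqrt(2.0 * b)) / (2.0 * kappa)
    r = (1.0 + rho * rho) / (2.0 * rho)
    out = []
    while len(out) < n:
        z = cos(pi * rng.random())
        f = (1.0 + r * z) / (r + z)
        c = kappa * (r - f)
        u2 = 1.0 - rng.random()  # in (0, 1], keeps log(c / u2) defined
        if c * (2.0 - c) - u2 > 0.0 or log(c / u2) + 1.0 - c >= 0.0:
            theta = acos(f)
            out.append(theta if rng.random() < 0.5 else -theta)
    return out
```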
test | one_step | Performs one step of the differential evolution algorithm.
Args:
objective_function: A Python callable that accepts a batch of possible
solutions and returns the values of the objective function at those
arguments as a rank 1 real `Tensor`. This specifies the function to be
minimized. The inpu... | tensorflow_probability/python/optimizer/differential_evolution.py | def one_step(
objective_function,
population,
population_values=None,
differential_weight=0.5,
crossover_prob=0.9,
seed=None,
name=None):
"""Performs one step of the differential evolution algorithm.
Args:
objective_function: A Python callable that accepts a batch of possible
... | def one_step(
objective_function,
population,
population_values=None,
differential_weight=0.5,
crossover_prob=0.9,
seed=None,
name=None):
"""Performs one step of the differential evolution algorithm.
Args:
objective_function: A Python callable that accepts a batch of possible
... | [
"Performs",
"one",
"step",
"of",
"the",
"differential",
"evolution",
"algorithm",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L120-L211 | [
"def",
"one_step",
"(",
"objective_function",
",",
"population",
",",
"population_values",
"=",
"None",
",",
"differential_weight",
"=",
"0.5",
",",
"crossover_prob",
"=",
"0.9",
",",
"seed",
"=",
"None",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | minimize | Applies the Differential evolution algorithm to minimize a function.
Differential Evolution is an evolutionary optimization algorithm which works
on a set of candidate solutions called the population. It iteratively
improves the population by applying genetic operators of mutation and
recombination. The object... | tensorflow_probability/python/optimizer/differential_evolution.py | def minimize(objective_function,
initial_population=None,
initial_position=None,
population_size=50,
population_stddev=1.,
max_iterations=100,
func_tolerance=0,
position_tolerance=1e-8,
differential_weight=0.5,
... | def minimize(objective_function,
initial_population=None,
initial_position=None,
population_size=50,
population_stddev=1.,
max_iterations=100,
func_tolerance=0,
position_tolerance=1e-8,
differential_weight=0.5,
... | [
"Applies",
"the",
"Differential",
"evolution",
"algorithm",
"to",
"minimize",
"a",
"function",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L214-L456 | [
"def",
"minimize",
"(",
"objective_function",
",",
"initial_population",
"=",
"None",
",",
"initial_position",
"=",
"None",
",",
"population_size",
"=",
"50",
",",
"population_stddev",
"=",
"1.",
",",
"max_iterations",
"=",
"100",
",",
"func_tolerance",
"=",
"0"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
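The `one_step` and `minimize` records together describe the classic DE/rand/1/bin loop: pick three distinct partners, form a mutant `a + F * (b - c)`, binary-crossover it with the current member, and keep the better of the two. A self-contained sketch of that loop (hypothetical helper, not the `tfp.optimizer` API):

```python
import random

def de_minimize(fn, dim, pop_size=20, f_weight=0.5, cr=0.9,
                iters=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over R^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    vals = [fn(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct partners, all different from member i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one mutant gene
            trial = [pop[a][d] + f_weight * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            tv = fn(trial)
            if tv <= vals[i]:  # greedy selection
                pop[i], vals[i] = trial, tv
    best = min(range(pop_size), key=vals.__getitem__)
    return pop[best], vals[best]
```

On a smooth convex objective such as the 2-D sphere function, this converges to the minimum well within the default budget.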
test | _get_initial_args | Processes initial args. | tensorflow_probability/python/optimizer/differential_evolution.py | def _get_initial_args(objective_function,
initial_population,
initial_position,
population_size,
population_stddev,
max_iterations,
func_tolerance,
position_tolerance... | def _get_initial_args(objective_function,
initial_population,
initial_position,
population_size,
population_stddev,
max_iterations,
func_tolerance,
position_tolerance... | [
"Processes",
"initial",
"args",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L459-L502 | [
"def",
"_get_initial_args",
"(",
"objective_function",
",",
"initial_population",
",",
"initial_position",
",",
"population_size",
",",
"population_stddev",
",",
"max_iterations",
",",
"func_tolerance",
",",
"position_tolerance",
",",
"differential_weight",
",",
"crossover_... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _check_failure | Checks if all the population values are NaN/infinite. | tensorflow_probability/python/optimizer/differential_evolution.py | def _check_failure(population_values):
"""Checks if all the population values are NaN/infinite."""
return tf.math.reduce_all(input_tensor=tf.math.is_inf(population_values)) | def _check_failure(population_values):
"""Checks if all the population values are NaN/infinite."""
return tf.math.reduce_all(input_tensor=tf.math.is_inf(population_values)) | [
"Checks",
"if",
"all",
"the",
"population",
"values",
"are",
"NaN",
"/",
"infinite",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L505-L507 | [
"def",
"_check_failure",
"(",
"population_values",
")",
":",
"return",
"tf",
".",
"math",
".",
"reduce_all",
"(",
"input_tensor",
"=",
"tf",
".",
"math",
".",
"is_inf",
"(",
"population_values",
")",
")"
] | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _find_best_in_population | Finds the population member with the lowest value. | tensorflow_probability/python/optimizer/differential_evolution.py | def _find_best_in_population(population, values):
"""Finds the population member with the lowest value."""
best_value = tf.math.reduce_min(input_tensor=values)
best_index = tf.where(tf.math.equal(values, best_value))[0, 0]
return ([population_part[best_index] for population_part in population],
best_... | def _find_best_in_population(population, values):
"""Finds the population member with the lowest value."""
best_value = tf.math.reduce_min(input_tensor=values)
best_index = tf.where(tf.math.equal(values, best_value))[0, 0]
return ([population_part[best_index] for population_part in population],
best_... | [
"Finds",
"the",
"population",
"member",
"with",
"the",
"lowest",
"value",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L510-L516 | [
"def",
"_find_best_in_population",
"(",
"population",
",",
"values",
")",
":",
"best_value",
"=",
"tf",
".",
"math",
".",
"reduce_min",
"(",
"input_tensor",
"=",
"values",
")",
"best_index",
"=",
"tf",
".",
"where",
"(",
"tf",
".",
"math",
".",
"equal",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _check_convergence | Checks whether the convergence criteria have been met. | tensorflow_probability/python/optimizer/differential_evolution.py | def _check_convergence(population,
population_values,
func_tolerance,
position_tolerance):
"""Checks whether the convergence criteria have been met."""
# Check func tolerance
value_range = tf.math.abs(
tf.math.reduce_max(input_tensor=popul... | def _check_convergence(population,
population_values,
func_tolerance,
position_tolerance):
"""Checks whether the convergence criteria have been met."""
# Check func tolerance
value_range = tf.math.abs(
tf.math.reduce_max(input_tensor=popul... | [
"Checks",
"whether",
"the",
"convergence",
"criteria",
"have",
"been",
"met",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L519-L551 | [
"def",
"_check_convergence",
"(",
"population",
",",
"population_values",
",",
"func_tolerance",
",",
"position_tolerance",
")",
":",
"# Check func tolerance",
"value_range",
"=",
"tf",
".",
"math",
".",
"abs",
"(",
"tf",
".",
"math",
".",
"reduce_max",
"(",
"in... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _get_starting_population | Constructs the initial population.
If an initial population is not already provided, this function constructs
a population by adding random normal noise to the initial position.
Args:
initial_population: None or a list of `Tensor`s. The initial population.
initial_position: None or a list of `Tensor`s. ... | tensorflow_probability/python/optimizer/differential_evolution.py | def _get_starting_population(initial_population,
initial_position,
population_size,
population_stddev,
seed):
"""Constructs the initial population.
If an initial population is not already provided, t... | def _get_starting_population(initial_population,
initial_position,
population_size,
population_stddev,
seed):
"""Constructs the initial population.
If an initial population is not already provided, t... | [
"Constructs",
"the",
"initial",
"population",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L554-L602 | [
"def",
"_get_starting_population",
"(",
"initial_population",
",",
"initial_position",
",",
"population_size",
",",
"population_stddev",
",",
"seed",
")",
":",
"if",
"initial_population",
"is",
"not",
"None",
":",
"return",
"[",
"tf",
".",
"convert_to_tensor",
"(",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _binary_crossover | Performs recombination by binary crossover for the current population.
Let v_i denote the i'th component of the member v and m_i the corresponding
component of the mutant vector corresponding to v. Then the crossed over
vector w_i is determined by setting w_i =
(m_i with probability=crossover_prob else v_i). I... | tensorflow_probability/python/optimizer/differential_evolution.py | def _binary_crossover(population,
population_size,
mutants,
crossover_prob,
seed):
"""Performs recombination by binary crossover for the current population.
Let v_i denote the i'th component of the member v and m_i the correspo... | def _binary_crossover(population,
population_size,
mutants,
crossover_prob,
seed):
"""Performs recombination by binary crossover for the current population.
Let v_i denote the i'th component of the member v and m_i the correspo... | [
"Performs",
"recombination",
"by",
"binary",
"crossover",
"for",
"the",
"current",
"population",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L605-L671 | [
"def",
"_binary_crossover",
"(",
"population",
",",
"population_size",
",",
"mutants",
",",
"crossover_prob",
",",
"seed",
")",
":",
"sizes",
"=",
"[",
"tf",
".",
"cast",
"(",
"tf",
".",
"size",
"(",
"input",
"=",
"x",
")",
",",
"dtype",
"=",
"tf",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
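Binary crossover, as specified in the `_binary_crossover` record (w_i = m_i with probability crossover_prob, else v_i), reduces to a componentwise choice. A plain-list sketch (illustrative, not the batched TF version):

```python
import random

def binary_crossover(member, mutant, crossover_prob, rng):
    """Take the mutant's component with probability crossover_prob,
    otherwise keep the current member's component."""
    return [m if rng.random() < crossover_prob else v
            for v, m in zip(member, mutant)]
```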
test | _get_mutants | Computes the mutated vectors for each population member.
Args:
population: Python `list` of `Tensor`s representing the
current population vectors. Each `Tensor` must be of the same real dtype.
The first dimension of each `Tensor` indexes individual
population members. For example, if the pop... | tensorflow_probability/python/optimizer/differential_evolution.py | def _get_mutants(population,
population_size,
mixing_indices,
differential_weight):
"""Computes the mutated vectors for each population member.
Args:
population: Python `list` of `Tensor`s representing the
current population vectors. Each `Tensor` mus... | def _get_mutants(population,
population_size,
mixing_indices,
differential_weight):
"""Computes the mutated vectors for each population member.
Args:
population: Python `list` of `Tensor`s representing the
current population vectors. Each `Tensor` mus... | [
"Computes",
"the",
"mutated",
"vectors",
"for",
"each",
"population",
"member",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L674-L709 | [
"def",
"_get_mutants",
"(",
"population",
",",
"population_size",
",",
"mixing_indices",
",",
"differential_weight",
")",
":",
"mixing_indices",
"=",
"tf",
".",
"reshape",
"(",
"mixing_indices",
",",
"[",
"-",
"1",
"]",
")",
"weights",
"=",
"tf",
".",
"stack... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
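The mutation rule in the `_get_mutants` record is componentwise v = x1 + F * (x2 - x3) over the three mixed-in partners. In plain Python lists (illustrative only):

```python
def make_mutant(x1, x2, x3, differential_weight):
    """Differential-evolution mutant: x1 + F * (x2 - x3), componentwise."""
    return [a + differential_weight * (b - c)
            for a, b, c in zip(x1, x2, x3)]
```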
test | _get_mixing_indices | Generates an array of indices suitable for mutation operation.
The mutation operation in differential evolution requires that for every
element of the population, three distinct other elements be chosen to produce
a trial candidate. This function generates an array of shape [size, 3]
satisfying the properties ... | tensorflow_probability/python/optimizer/differential_evolution.py | def _get_mixing_indices(size, seed=None, name=None):
"""Generates an array of indices suitable for mutation operation.
The mutation operation in differential evolution requires that for every
element of the population, three distinct other elements be chosen to produce
a trial candidate. This function generate... | def _get_mixing_indices(size, seed=None, name=None):
"""Generates an array of indices suitable for mutation operation.
The mutation operation in differential evolution requires that for every
element of the population, three distinct other elements be chosen to produce
a trial candidate. This function generate... | [
"Generates",
"an",
"array",
"of",
"indices",
"suitable",
"for",
"mutation",
"operation",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L712-L768 | [
"def",
"_get_mixing_indices",
"(",
"size",
",",
"seed",
"=",
"None",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"default_name",
"=",
"'get_mixing_indices'",
",",
"values",
"=",
"[",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
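`_get_mixing_indices` must produce, for each population member i, three distinct partner indices all different from i. A direct (non-vectorized) sketch of that property:

```python
import random

def mixing_indices(size, rng):
    """For each member i, sample three distinct partners j != i."""
    return [rng.sample([j for j in range(size) if j != i], 3)
            for i in range(size)]
```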
test | _ensure_list | Converts the input arg to a list if it is not a list already.
Args:
tensor_or_list: A `Tensor` or a Python list of `Tensor`s. The argument to
convert to a list of `Tensor`s.
Returns:
A tuple of two elements. The first is a Python list of `Tensor`s containing
the original arguments. The second is... | tensorflow_probability/python/optimizer/differential_evolution.py | def _ensure_list(tensor_or_list):
"""Converts the input arg to a list if it is not a list already.
Args:
tensor_or_list: A `Tensor` or a Python list of `Tensor`s. The argument to
convert to a list of `Tensor`s.
Returns:
A tuple of two elements. The first is a Python list of `Tensor`s containing
... | def _ensure_list(tensor_or_list):
"""Converts the input arg to a list if it is not a list already.
Args:
tensor_or_list: A `Tensor` or a Python list of `Tensor`s. The argument to
convert to a list of `Tensor`s.
Returns:
A tuple of two elements. The first is a Python list of `Tensor`s containing
... | [
"Converts",
"the",
"input",
"arg",
"to",
"a",
"list",
"if",
"it",
"is",
"not",
"a",
"list",
"already",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/optimizer/differential_evolution.py#L771-L785 | [
"def",
"_ensure_list",
"(",
"tensor_or_list",
")",
":",
"if",
"isinstance",
"(",
"tensor_or_list",
",",
"(",
"list",
",",
"tuple",
")",
")",
":",
"return",
"list",
"(",
"tensor_or_list",
")",
",",
"True",
"return",
"[",
"tensor_or_list",
"]",
",",
"False"
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _get_tol | Gets a Tensor of type `dtype`, 0 if `tol` is None, validation optional. | tensorflow_probability/python/distributions/deterministic.py | def _get_tol(tol, dtype, validate_args):
"""Gets a Tensor of type `dtype`, 0 if `tol` is None, validation optional."""
if tol is None:
return tf.convert_to_tensor(value=0, dtype=dtype)
tol = tf.convert_to_tensor(value=tol, dtype=dtype)
if validate_args:
tol = distribution_util.with_dependencies([
... | def _get_tol(tol, dtype, validate_args):
"""Gets a Tensor of type `dtype`, 0 if `tol` is None, validation optional."""
if tol is None:
return tf.convert_to_tensor(value=0, dtype=dtype)
tol = tf.convert_to_tensor(value=tol, dtype=dtype)
if validate_args:
tol = distribution_util.with_dependencies([
... | [
"Gets",
"a",
"Tensor",
"of",
"type",
"dtype",
"0",
"if",
"tol",
"is",
"None",
"validation",
"optional",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/deterministic.py#L41-L52 | [
"def",
"_get_tol",
"(",
"tol",
",",
"dtype",
",",
"validate_args",
")",
":",
"if",
"tol",
"is",
"None",
":",
"return",
"tf",
".",
"convert_to_tensor",
"(",
"value",
"=",
"0",
",",
"dtype",
"=",
"dtype",
")",
"tol",
"=",
"tf",
".",
"convert_to_tensor",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _kl_deterministic_distribution | Calculate the batched KL divergence `KL(a || b)` with `a` Deterministic.
Args:
a: instance of a Deterministic distribution object.
b: instance of a Distribution distribution object.
name: (optional) Name to use for created operations. Default is
"kl_deterministic_distribution".
Returns:
Batc... | tensorflow_probability/python/distributions/deterministic.py | def _kl_deterministic_distribution(a, b, name=None):
"""Calculate the batched KL divergence `KL(a || b)` with `a` Deterministic.
Args:
a: instance of a Deterministic distribution object.
b: instance of a Distribution distribution object.
name: (optional) Name to use for created operations. Default is
... | def _kl_deterministic_distribution(a, b, name=None):
"""Calculate the batched KL divergence `KL(a || b)` with `a` Deterministic.
Args:
a: instance of a Deterministic distribution object.
b: instance of a Distribution distribution object.
name: (optional) Name to use for created operations. Default is
... | [
"Calculate",
"the",
"batched",
"KL",
"divergence",
"KL",
"(",
"a",
"||",
"b",
")",
"with",
"a",
"Deterministic",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/deterministic.py#L407-L420 | [
"def",
"_kl_deterministic_distribution",
"(",
"a",
",",
"b",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"name_scope",
"(",
"name",
"or",
"\"kl_deterministic_distribution\"",
")",
":",
"return",
"-",
"b",
".",
"log_prob",
"(",
"a",
".",
"loc",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
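The KL record above reduces to `-b.log_prob(a.loc)`: the divergence of a point mass from any density is just the negative log-density at that point. A stdlib-only check, using a Normal as a hypothetical `b` (the record works for any distribution):

```python
import math

def kl_point_mass_vs_normal(loc, mu, sigma):
    # KL(Deterministic(loc) || Normal(mu, sigma)) = -log p_Normal(loc)
    log_prob = (-0.5 * ((loc - mu) / sigma) ** 2
                - math.log(sigma * math.sqrt(2.0 * math.pi)))
    return -log_prob
```

As expected, the divergence grows as the point mass moves into the tails of `b`.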
test | _sqrtx2p1 | Implementation of `sqrt(1 + x**2)` which is stable despite large `x`. | tensorflow_probability/python/bijectors/sinh_arcsinh.py | def _sqrtx2p1(x):
"""Implementation of `sqrt(1 + x**2)` which is stable despite large `x`."""
sqrt_eps = np.sqrt(np.finfo(dtype_util.as_numpy_dtype(x.dtype)).eps)
return tf.where(
tf.abs(x) * sqrt_eps <= 1.,
tf.sqrt(x**2. + 1.),
# For large x, calculating x**2 can overflow. This can be alleviate... | def _sqrtx2p1(x):
"""Implementation of `sqrt(1 + x**2)` which is stable despite large `x`."""
sqrt_eps = np.sqrt(np.finfo(dtype_util.as_numpy_dtype(x.dtype)).eps)
return tf.where(
tf.abs(x) * sqrt_eps <= 1.,
tf.sqrt(x**2. + 1.),
# For large x, calculating x**2 can overflow. This can be alleviate... | [
"Implementation",
"of",
"sqrt",
"(",
"1",
"+",
"x",
"**",
"2",
")",
"which",
"is",
"stable",
"despite",
"large",
"x",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/bijectors/sinh_arcsinh.py#L36-L57 | [
"def",
"_sqrtx2p1",
"(",
"x",
")",
":",
"sqrt_eps",
"=",
"np",
".",
"sqrt",
"(",
"np",
".",
"finfo",
"(",
"dtype_util",
".",
"as_numpy_dtype",
"(",
"x",
".",
"dtype",
")",
")",
".",
"eps",
")",
"return",
"tf",
".",
"where",
"(",
"tf",
".",
"abs",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
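The `_sqrtx2p1` record above guards `sqrt(1 + x**2)` against overflow in `x*x`. A minimal stdlib sketch of the same two-branch trick (an illustration, not the TFP implementation):

```python
import math
import sys

def sqrtx2p1(x):
    # Stable sqrt(1 + x**2): once |x| exceeds ~1/sqrt(eps) the +1 is lost
    # anyway and x*x may overflow, so switch to |x| * sqrt(1 + 1/x**2).
    if abs(x) * math.sqrt(sys.float_info.epsilon) <= 1.0:
        return math.sqrt(x * x + 1.0)
    return abs(x) * math.sqrt(1.0 + 1.0 / (x * x))
```

For huge inputs the second branch degrades gracefully to `|x|` even when `x*x` is infinite.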
test | log1psquare | Numerically stable calculation of `log(1 + x**2)` for small or large `|x|`.
For sufficiently large `x` we use the following observation:
```none
log(1 + x**2) = 2 log(|x|) + log(1 + 1 / x**2)
--> 2 log(|x|) as x --> inf
```
Numerically, `log(1 + 1 / x**2)` is `0` when `1 / x**2` is small... | tensorflow_probability/python/math/numeric.py | def log1psquare(x, name=None):
"""Numerically stable calculation of `log(1 + x**2)` for small or large `|x|`.
For sufficiently large `x` we use the following observation:
```none
log(1 + x**2) = 2 log(|x|) + log(1 + 1 / x**2)
--> 2 log(|x|) as x --> inf
```
Numerically, `log(1 + 1 / x*... | def log1psquare(x, name=None):
"""Numerically stable calculation of `log(1 + x**2)` for small or large `|x|`.
For sufficiently large `x` we use the following observation:
```none
log(1 + x**2) = 2 log(|x|) + log(1 + 1 / x**2)
--> 2 log(|x|) as x --> inf
```
Numerically, `log(1 + 1 / x*... | [
"Numerically",
"stable",
"calculation",
"of",
"log",
"(",
"1",
"+",
"x",
"**",
"2",
")",
"for",
"small",
"or",
"large",
"|x|",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/numeric.py#L25-L56 | [
"def",
"log1psquare",
"(",
"x",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'log1psquare'",
",",
"[",
"x",
"]",
")",
":",
"x",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"value",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
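The `log1psquare` record above uses the expansion quoted in its docstring, `log(1 + x**2) = 2 log|x| + log(1 + 1/x**2)`, dropping the second term when it underflows. A stdlib-only sketch of the same branch:

```python
import math
import sys

def log1psquare(x):
    # log(1 + x**2) --> 2*log|x| for large |x|, avoiding overflow in x*x;
    # for small |x|, log1p keeps precision near zero.
    x = abs(x)
    if x < 1.0 / math.sqrt(sys.float_info.epsilon):
        return math.log1p(x * x)
    return 2.0 * math.log(x)
```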
test | soft_threshold | Soft Thresholding operator.
This operator is defined by the equations
```none
                               { x[i] - gamma,  x[i] >  gamma
  SoftThreshold(x, gamma)[i] = { 0,             -gamma <= x[i] <= gamma
                               { x[i] + gamma,  x[i] < -gamma
```
In the context of proximal gradient... | tensorflow_probability/python/math/numeric.py | def soft_threshold(x, threshold, name=None):
"""Soft Thresholding operator.
This operator is defined by the equations
```none
{ x[i] - gamma, x[i] > gamma
SoftThreshold(x, gamma)[i] = { 0, x[i] == gamma
{ x[i] + gamma, x[i] < -... | def soft_threshold(x, threshold, name=None):
"""Soft Thresholding operator.
This operator is defined by the equations
```none
{ x[i] - gamma, x[i] > gamma
  SoftThreshold(x, gamma)[i] = { 0,             -gamma <= x[i] <= gamma
{ x[i] + gamma, x[i] < -... | [
"Soft",
"Thresholding",
"operator",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/numeric.py#L59-L149 | [
"def",
"soft_threshold",
"(",
"x",
",",
"threshold",
",",
"name",
"=",
"None",
")",
":",
"# https://math.stackexchange.com/questions/471339/derivation-of-soft-thresholding-operator",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'soft_t... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
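The piecewise definition in the `soft_threshold` record above collapses to `sign(x) * max(|x| - gamma, 0)`. A minimal stdlib sketch of that closed form:

```python
import math

def soft_threshold(xs, gamma):
    # Shrink every element toward zero by gamma; values in [-gamma, gamma]
    # map exactly to zero (the proximal operator of gamma * |.|).
    return [math.copysign(max(abs(x) - gamma, 0.0), x) for x in xs]
```

This is the update used inside proximal-gradient methods for L1-regularized problems, as the record's context describes.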
test | clip_by_value_preserve_gradient | Clips values to a specified min and max while leaving gradient unaltered.
Like `tf.clip_by_value`, this function returns a tensor of the same type and
shape as input `t` but with values clamped to be no smaller than to
`clip_value_min` and no larger than `clip_value_max`. Unlike
`tf.clip_by_value`, the gradien... | tensorflow_probability/python/math/numeric.py | def clip_by_value_preserve_gradient(t, clip_value_min, clip_value_max,
name=None):
"""Clips values to a specified min and max while leaving gradient unaltered.
Like `tf.clip_by_value`, this function returns a tensor of the same type and
shape as input `t` but with values clamp... | def clip_by_value_preserve_gradient(t, clip_value_min, clip_value_max,
name=None):
"""Clips values to a specified min and max while leaving gradient unaltered.
Like `tf.clip_by_value`, this function returns a tensor of the same type and
shape as input `t` but with values clamp... | [
"Clips",
"values",
"to",
"a",
"specified",
"min",
"and",
"max",
"while",
"leaving",
"gradient",
"unaltered",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/numeric.py#L152-L185 | [
"def",
"clip_by_value_preserve_gradient",
"(",
"t",
",",
"clip_value_min",
",",
"clip_value_max",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'clip_by_value_preserve_gradient'",
",",
"[",
"t... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
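The `clip_by_value_preserve_gradient` record above relies on the straight-through identity `t + stop_gradient(clip(t) - t)`: forward value is an ordinary clip, while the backward pass sees an identity. A sketch of the forward-value identity (gradient behavior needs an autodiff framework and is only noted in comments):

```python
def clip_preserve_gradient(t, lo, hi):
    # Forward value: ordinary clip.  In TF the correction term below is
    # wrapped in stop_gradient, so d(output)/d(t) == 1 everywhere.
    clipped = min(max(t, lo), hi)
    return t + (clipped - t)  # == clipped; the parenthesized part carries no gradient
```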
test | build_input_pipeline | Build an iterator over training batches. | tensorflow_probability/examples/generative_adversarial_network.py | def build_input_pipeline(train_images, batch_size):
"""Build an iterator over training batches."""
training_dataset = tf.data.Dataset.from_tensor_slices(train_images)
training_batches = training_dataset.shuffle(
50000, reshuffle_each_iteration=True).repeat().batch(batch_size)
training_iterator = tf.compa... | def build_input_pipeline(train_images, batch_size):
"""Build an iterator over training batches."""
training_dataset = tf.data.Dataset.from_tensor_slices(train_images)
training_batches = training_dataset.shuffle(
50000, reshuffle_each_iteration=True).repeat().batch(batch_size)
training_iterator = tf.compa... | [
"Build",
"an",
"iterator",
"over",
"training",
"batches",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/examples/generative_adversarial_network.py#L104-L112 | [
"def",
"build_input_pipeline",
"(",
"train_images",
",",
"batch_size",
")",
":",
"training_dataset",
"=",
"tf",
".",
"data",
".",
"Dataset",
".",
"from_tensor_slices",
"(",
"train_images",
")",
"training_batches",
"=",
"training_dataset",
".",
"shuffle",
"(",
"500... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | plot_generated_images | Save a synthetic image as a PNG file.
Args:
images: samples of synthetic images generated by the generative network.
fname: Python `str`, filename to save the plot to. | tensorflow_probability/examples/generative_adversarial_network.py | def plot_generated_images(images, fname):
"""Save a synthetic image as a PNG file.
Args:
images: samples of synthetic images generated by the generative network.
fname: Python `str`, filename to save the plot to.
"""
fig = plt.figure(figsize=(4, 4))
canvas = backend_agg.FigureCanvasAgg(fig)
for i,... | def plot_generated_images(images, fname):
"""Save a synthetic image as a PNG file.
Args:
images: samples of synthetic images generated by the generative network.
fname: Python `str`, filename to save the plot to.
"""
fig = plt.figure(figsize=(4, 4))
canvas = backend_agg.FigureCanvasAgg(fig)
for i,... | [
"Save",
"a",
"synthetic",
"image",
"as",
"a",
"PNG",
"file",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/examples/generative_adversarial_network.py#L123-L142 | [
"def",
"plot_generated_images",
"(",
"images",
",",
"fname",
")",
":",
"fig",
"=",
"plt",
".",
"figure",
"(",
"figsize",
"=",
"(",
"4",
",",
"4",
")",
")",
"canvas",
"=",
"backend_agg",
".",
"FigureCanvasAgg",
"(",
"fig",
")",
"for",
"i",
",",
"image... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | SmilesGrammar.convert_to_string | Converts a sequence of productions into a string of terminal symbols.
Args:
productions: Tensor of shape [1, num_productions, num_production_rules].
Slices along the `num_productions` dimension represent one-hot vectors.
Returns:
str that concatenates all terminal symbols from `productions... | tensorflow_probability/examples/grammar_vae.py | def convert_to_string(self, productions):
"""Converts a sequence of productions into a string of terminal symbols.
Args:
productions: Tensor of shape [1, num_productions, num_production_rules].
Slices along the `num_productions` dimension represent one-hot vectors.
Returns:
str that co... | def convert_to_string(self, productions):
"""Converts a sequence of productions into a string of terminal symbols.
Args:
productions: Tensor of shape [1, num_productions, num_production_rules].
Slices along the `num_productions` dimension represent one-hot vectors.
Returns:
str that co... | [
"Converts",
"a",
"sequence",
"of",
"productions",
"into",
"a",
"string",
"of",
"terminal",
"symbols",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/examples/grammar_vae.py#L137-L164 | [
"def",
"convert_to_string",
"(",
"self",
",",
"productions",
")",
":",
"symbols",
"=",
"[",
"]",
"for",
"production",
"in",
"tf",
".",
"unstack",
"(",
"productions",
",",
"axis",
"=",
"1",
")",
":",
"lhs",
",",
"rhs",
"=",
"self",
".",
"production_rule... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | SmilesGrammar.mask | Produces a masking tensor for (in)valid production rules.
Args:
symbol: str, a symbol in the grammar.
on_value: Value to use for a valid production rule.
off_value: Value to use for an invalid production rule.
Returns:
Tensor of shape [1, num_production_rules]. An element is `on_value`... | tensorflow_probability/examples/grammar_vae.py | def mask(self, symbol, on_value, off_value):
"""Produces a masking tensor for (in)valid production rules.
Args:
symbol: str, a symbol in the grammar.
on_value: Value to use for a valid production rule.
off_value: Value to use for an invalid production rule.
Returns:
Tensor of shape... | def mask(self, symbol, on_value, off_value):
"""Produces a masking tensor for (in)valid production rules.
Args:
symbol: str, a symbol in the grammar.
on_value: Value to use for a valid production rule.
off_value: Value to use for an invalid production rule.
Returns:
Tensor of shape... | [
"Produces",
"a",
"masking",
"tensor",
"for",
"(",
"in",
")",
"valid",
"production",
"rules",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/examples/grammar_vae.py#L166-L182 | [
"def",
"mask",
"(",
"self",
",",
"symbol",
",",
"on_value",
",",
"off_value",
")",
":",
"mask_values",
"=",
"[",
"on_value",
"if",
"lhs",
"==",
"symbol",
"else",
"off_value",
"for",
"lhs",
",",
"_",
"in",
"self",
".",
"production_rules",
"]",
"mask_value... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
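The `SmilesGrammar.mask` record above builds one mask entry per production rule, keyed on whether the rule's left-hand side matches the queried symbol. A toy sketch (the production rules below are invented for illustration; the real SMILES rules live in the example's grammar):

```python
# Hypothetical toy grammar, not the real SMILES production rules.
production_rules = [("smiles", "chain"),
                    ("chain", "chain branched_atom"),
                    ("chain", "branched_atom"),
                    ("branched_atom", "atom")]

def mask(symbol, on_value, off_value):
    # One entry per production rule: on_value wherever the rule's LHS matches.
    return [on_value if lhs == symbol else off_value
            for lhs, _ in production_rules]
```

Such masks zero out logits of productions that cannot legally expand the current symbol.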
test | ProbabilisticGrammar.call | Runs the model forward to generate a sequence of productions.
Args:
inputs: Unused.
Returns:
productions: Tensor of shape [1, num_productions, num_production_rules].
Slices along the `num_productions` dimension represent one-hot vectors. | tensorflow_probability/examples/grammar_vae.py | def call(self, inputs):
"""Runs the model forward to generate a sequence of productions.
Args:
inputs: Unused.
Returns:
productions: Tensor of shape [1, num_productions, num_production_rules].
Slices along the `num_productions` dimension represent one-hot vectors.
"""
del input... | def call(self, inputs):
"""Runs the model forward to generate a sequence of productions.
Args:
inputs: Unused.
Returns:
productions: Tensor of shape [1, num_productions, num_production_rules].
Slices along the `num_productions` dimension represent one-hot vectors.
"""
del input... | [
"Runs",
"the",
"model",
"forward",
"to",
"generate",
"a",
"sequence",
"of",
"productions",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/examples/grammar_vae.py#L209-L241 | [
"def",
"call",
"(",
"self",
",",
"inputs",
")",
":",
"del",
"inputs",
"# unused",
"latent_code",
"=",
"ed",
".",
"MultivariateNormalDiag",
"(",
"loc",
"=",
"tf",
".",
"zeros",
"(",
"self",
".",
"latent_size",
")",
",",
"sample_shape",
"=",
"1",
",",
"n... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | ProbabilisticGrammarVariational.call | Runs the model forward to return a stochastic encoding.
Args:
inputs: Tensor of shape [1, num_productions, num_production_rules]. It is
a sequence of productions of length `num_productions`. Each production
is a one-hot vector of length `num_production_rules`: it determines
which prod... | tensorflow_probability/examples/grammar_vae.py | def call(self, inputs):
"""Runs the model forward to return a stochastic encoding.
Args:
inputs: Tensor of shape [1, num_productions, num_production_rules]. It is
a sequence of productions of length `num_productions`. Each production
is a one-hot vector of length `num_production_rules`: i... | def call(self, inputs):
"""Runs the model forward to return a stochastic encoding.
Args:
inputs: Tensor of shape [1, num_productions, num_production_rules]. It is
a sequence of productions of length `num_productions`. Each production
is a one-hot vector of length `num_production_rules`: i... | [
"Runs",
"the",
"model",
"forward",
"to",
"return",
"a",
"stochastic",
"encoding",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/examples/grammar_vae.py#L267-L284 | [
"def",
"call",
"(",
"self",
",",
"inputs",
")",
":",
"net",
"=",
"self",
".",
"encoder_net",
"(",
"tf",
".",
"cast",
"(",
"inputs",
",",
"tf",
".",
"float32",
")",
")",
"return",
"ed",
".",
"MultivariateNormalDiag",
"(",
"loc",
"=",
"net",
"[",
"..... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | Zipf._hat_integral | Integral of the `hat` function, used for sampling.
We choose a `hat` function, h(x) = x^(-power), which is a continuous
(unnormalized) density touching each positive integer at the (unnormalized)
pmf. This function implements `hat` integral: H(x) = int_x^inf h(t) dt;
which is needed for sampling purpos... | tensorflow_probability/python/distributions/zipf.py | def _hat_integral(self, x):
"""Integral of the `hat` function, used for sampling.
We choose a `hat` function, h(x) = x^(-power), which is a continuous
(unnormalized) density touching each positive integer at the (unnormalized)
pmf. This function implements `hat` integral: H(x) = int_x^inf h(t) dt;
... | def _hat_integral(self, x):
"""Integral of the `hat` function, used for sampling.
We choose a `hat` function, h(x) = x^(-power), which is a continuous
(unnormalized) density touching each positive integer at the (unnormalized)
pmf. This function implements `hat` integral: H(x) = int_x^inf h(t) dt;
... | [
"Integral",
"of",
"the",
"hat",
"function",
"used",
"for",
"sampling",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/zipf.py#L306-L322 | [
"def",
"_hat_integral",
"(",
"self",
",",
"x",
")",
":",
"x",
"=",
"tf",
".",
"cast",
"(",
"x",
",",
"self",
".",
"power",
".",
"dtype",
")",
"t",
"=",
"self",
".",
"power",
"-",
"1.",
"return",
"tf",
".",
"exp",
"(",
"(",
"-",
"t",
")",
"*... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | Zipf._hat_integral_inverse | Inverse function of _hat_integral. | tensorflow_probability/python/distributions/zipf.py | def _hat_integral_inverse(self, x):
"""Inverse function of _hat_integral."""
x = tf.cast(x, self.power.dtype)
t = self.power - 1.
return tf.math.expm1(-(tf.math.log(t) + tf.math.log(x)) / t) | def _hat_integral_inverse(self, x):
"""Inverse function of _hat_integral."""
x = tf.cast(x, self.power.dtype)
t = self.power - 1.
return tf.math.expm1(-(tf.math.log(t) + tf.math.log(x)) / t) | [
"Inverse",
"function",
"of",
"_hat_integral",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/distributions/zipf.py#L324-L328 | [
"def",
"_hat_integral_inverse",
"(",
"self",
",",
"x",
")",
":",
"x",
"=",
"tf",
".",
"cast",
"(",
"x",
",",
"self",
".",
"power",
".",
"dtype",
")",
"t",
"=",
"self",
".",
"power",
"-",
"1.",
"return",
"tf",
".",
"math",
".",
"expm1",
"(",
"-"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
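The Zipf records above pair a `hat` integral with its inverse for rejection sampling. The integral's body is truncated here, so the closed form below is an assumption, chosen as H(x) = (1 + x)^(1-power) / (power - 1) so that it is exactly inverted by the `expm1` expression the inverse record does show:

```python
import math

def hat_integral(x, power):
    # Assumed closed form: H(x) = (1 + x)^(1 - power) / (power - 1).
    t = power - 1.0
    return math.exp(-t * math.log1p(x) - math.log(t))

def hat_integral_inverse(y, power):
    # Solves H(x) = y for x, exactly as in the record's code tokens.
    t = power - 1.0
    return math.expm1(-(math.log(t) + math.log(y)) / t)
```

The round trip `inverse(H(x)) == x` is what the sampler relies on.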
test | matrix_rank | Compute the matrix rank; the number of non-zero SVD singular values.
Arguments:
a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be
pseudo-inverted.
tol: Threshold below which the singular value is counted as "zero".
Default value: `None` (i.e., `eps * max(rows, cols) * max(singu... | tensorflow_probability/python/math/linalg.py | def matrix_rank(a, tol=None, validate_args=False, name=None):
"""Compute the matrix rank; the number of non-zero SVD singular values.
Arguments:
a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be
pseudo-inverted.
tol: Threshold below which the singular value is counted as "zero".
... | def matrix_rank(a, tol=None, validate_args=False, name=None):
"""Compute the matrix rank; the number of non-zero SVD singular values.
Arguments:
a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be
pseudo-inverted.
tol: Threshold below which the singular value is counted as "zero".
... | [
"Compute",
"the",
"matrix",
"rank",
";",
"the",
"number",
"of",
"non",
"-",
"zero",
"SVD",
"singular",
"values",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L48-L81 | [
"def",
"matrix_rank",
"(",
"a",
",",
"tol",
"=",
"None",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'matrix_rank'",
",",
"[",
"a",
",",
"tol"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
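The `matrix_rank` record above counts SVD singular values above `tol`. As a stdlib-only stand-in for the same notion of rank (a deliberate substitution — no SVD here), Gaussian elimination counts nonzero pivots, which agrees on well-conditioned input:

```python
def matrix_rank(a, tol=1e-10):
    # Rank = number of usable pivots after row reduction.
    m = [row[:] for row in a]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if abs(m[r][col]) > tol), None)
        if piv is None:
            continue  # no usable pivot in this column
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(rank + 1, len(m)):
            f = m[r][col] / m[rank][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank
```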
test | cholesky_concat | Concatenates `chol @ chol.T` with additional rows and columns.
This operation is conceptually identical to:
```python
def cholesky_concat_slow(chol, cols): # cols shaped (n + m) x m = z x m
mat = tf.matmul(chol, chol, adjoint_b=True) # batch of n x n
# Concat columns.
mat = tf.concat([mat, cols[...... | tensorflow_probability/python/math/linalg.py | def cholesky_concat(chol, cols, name=None):
"""Concatenates `chol @ chol.T` with additional rows and columns.
This operation is conceptually identical to:
```python
def cholesky_concat_slow(chol, cols): # cols shaped (n + m) x m = z x m
mat = tf.matmul(chol, chol, adjoint_b=True) # batch of n x n
# C... | def cholesky_concat(chol, cols, name=None):
"""Concatenates `chol @ chol.T` with additional rows and columns.
This operation is conceptually identical to:
```python
def cholesky_concat_slow(chol, cols): # cols shaped (n + m) x m = z x m
mat = tf.matmul(chol, chol, adjoint_b=True) # batch of n x n
# C... | [
"Concatenates",
"chol",
"@",
"chol",
".",
"T",
"with",
"additional",
"rows",
"and",
"columns",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L84-L139 | [
"def",
"cholesky_concat",
"(",
"chol",
",",
"cols",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v2",
".",
"name_scope",
"(",
"name",
"or",
"'cholesky_extend'",
")",
":",
"dtype",
"=",
"dtype_util",
".",
"common_dtype",
"(",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
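The `cholesky_concat` record above extends a Cholesky factor with new rows and columns. A minimal sketch of the one-column case via the block formula (forward-solve `L z = col`, then the new corner is `sqrt(corner - z.z)`); this is an illustration of the idea, not the record's batched implementation:

```python
def cholesky_append_col(chol, col, corner):
    # chol: n x n lower Cholesky of M.  Returns the (n+1) x (n+1) lower
    # Cholesky of [[M, col], [col^T, corner]].
    n = len(chol)
    z = []
    for i in range(n):  # forward substitution: L z = col
        z.append((col[i] - sum(chol[i][j] * z[j] for j in range(i))) / chol[i][i])
    new = [row + [0.0] for row in chol]
    new.append(z + [(corner - sum(v * v for v in z)) ** 0.5])
    return new
```

Repeating this per appended column avoids re-factorizing the grown matrix from scratch.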
test | _swap_m_with_i | Swaps `m` and `i` on axis -1. (Helper for pivoted_cholesky.)
Given a batch of int64 vectors `vecs`, scalar index `m`, and compatibly shaped
per-vector indices `i`, this function swaps elements `m` and `i` in each
vector. For the use-case below, these are permutation vectors.
Args:
vecs: Vectors on which w... | tensorflow_probability/python/math/linalg.py | def _swap_m_with_i(vecs, m, i):
"""Swaps `m` and `i` on axis -1. (Helper for pivoted_cholesky.)
Given a batch of int64 vectors `vecs`, scalar index `m`, and compatibly shaped
per-vector indices `i`, this function swaps elements `m` and `i` in each
vector. For the use-case below, these are permutation vectors.
... | def _swap_m_with_i(vecs, m, i):
"""Swaps `m` and `i` on axis -1. (Helper for pivoted_cholesky.)
Given a batch of int64 vectors `vecs`, scalar index `m`, and compatibly shaped
per-vector indices `i`, this function swaps elements `m` and `i` in each
vector. For the use-case below, these are permutation vectors.
... | [
"Swaps",
"m",
"and",
"i",
"on",
"axis",
"-",
"1",
".",
"(",
"Helper",
"for",
"pivoted_cholesky",
".",
")"
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L142-L177 | [
"def",
"_swap_m_with_i",
"(",
"vecs",
",",
"m",
",",
"i",
")",
":",
"vecs",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"value",
"=",
"vecs",
",",
"dtype",
"=",
"tf",
".",
"int64",
",",
"name",
"=",
"'vecs'",
")",
"m",
"=",
"tf",
".",
"convert_to_ten... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
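The `_swap_m_with_i` record above swaps one fixed position `m` with a per-vector position `i` in a batch of permutation vectors. A plain-Python sketch of that behavior:

```python
def swap_m_with_i(vecs, m, i):
    # For each vector in the batch, swap the entries at positions m and i[b].
    out = []
    for vec, idx in zip(vecs, i):
        v = list(vec)
        v[m], v[idx] = v[idx], v[m]
        out.append(v)
    return out
```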
test | pivoted_cholesky | Computes the (partial) pivoted cholesky decomposition of `matrix`.
The pivoted Cholesky is a low rank approximation of the Cholesky decomposition
of `matrix`, i.e. as described in [(Harbrecht et al., 2012)][1]. The
currently-worst-approximated diagonal element is selected as the pivot at each
iteration. This y... | tensorflow_probability/python/math/linalg.py | def pivoted_cholesky(matrix, max_rank, diag_rtol=1e-3, name=None):
"""Computes the (partial) pivoted cholesky decomposition of `matrix`.
The pivoted Cholesky is a low rank approximation of the Cholesky decomposition
of `matrix`, i.e. as described in [(Harbrecht et al., 2012)][1]. The
currently-worst-approximat... | def pivoted_cholesky(matrix, max_rank, diag_rtol=1e-3, name=None):
"""Computes the (partial) pivoted cholesky decomposition of `matrix`.
The pivoted Cholesky is a low rank approximation of the Cholesky decomposition
of `matrix`, i.e. as described in [(Harbrecht et al., 2012)][1]. The
currently-worst-approximat... | [
"Computes",
"the",
"(",
"partial",
")",
"pivoted",
"cholesky",
"decomposition",
"of",
"matrix",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L186-L320 | [
"def",
"pivoted_cholesky",
"(",
"matrix",
",",
"max_rank",
",",
"diag_rtol",
"=",
"1e-3",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v2",
".",
"name_scope",
"(",
"name",
"or",
"'pivoted_cholesky'",
")",
":",
"dtype",
"=",
"d... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
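The `pivoted_cholesky` record above greedily picks the currently-worst-approximated diagonal element as the next pivot. A stdlib-only sketch for a single strictly positive-definite matrix (the record handles batches, tolerances, and early stopping):

```python
def pivoted_cholesky(a, max_rank):
    # Returns max_rank rows lr such that a ~= lr^T @ lr (exact when
    # max_rank == n and a is positive definite).
    n = len(a)
    d = [a[i][i] for i in range(n)]      # remaining diagonal error
    perm = list(range(n))
    lr = []
    for k in range(max_rank):
        # pivot on the largest remaining diagonal element
        i = max(range(k, n), key=lambda j: d[perm[j]])
        perm[k], perm[i] = perm[i], perm[k]
        piv = d[perm[k]] ** 0.5
        row = [0.0] * n
        row[perm[k]] = piv
        for j in range(k + 1, n):
            s = sum(lr[r][perm[k]] * lr[r][perm[j]] for r in range(k))
            row[perm[j]] = (a[perm[k]][perm[j]] - s) / piv
            d[perm[j]] -= row[perm[j]] ** 2
        lr.append(row)
    return lr
```

With `max_rank < n` the returned rows form the greedy low-rank approximation the record's docstring describes.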
test | pinv | Compute the Moore-Penrose pseudo-inverse of a matrix.
Calculate the [generalized inverse of a matrix](
https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its
singular-value decomposition (SVD) and including all large singular values.
The pseudo-inverse of a matrix `A`, is defined as: "the matr... | tensorflow_probability/python/math/linalg.py | def pinv(a, rcond=None, validate_args=False, name=None):
"""Compute the Moore-Penrose pseudo-inverse of a matrix.
Calculate the [generalized inverse of a matrix](
https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its
singular-value decomposition (SVD) and including all large singular values.
... | def pinv(a, rcond=None, validate_args=False, name=None):
"""Compute the Moore-Penrose pseudo-inverse of a matrix.
Calculate the [generalized inverse of a matrix](
https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its
singular-value decomposition (SVD) and including all large singular values.
... | [
"Compute",
"the",
"Moore",
"-",
"Penrose",
"pseudo",
"-",
"inverse",
"of",
"a",
"matrix",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L323-L445 | [
"def",
"pinv",
"(",
"a",
",",
"rcond",
"=",
"None",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'pinv'",
",",
"[",
"a",
",",
"rcond",
"]",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
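The `pinv` record above uses SVD, which also covers rank-deficient input. For the easy full-column-rank case, the normal-equations identity `pinv(A) = inv(A^T A) @ A^T` gives a stdlib-only sketch (hard-coded to a 3x2 input for brevity; this is a special case, not the record's algorithm):

```python
def pinv_tall(a):
    # Moore-Penrose pseudo-inverse of a full-column-rank 3x2 matrix.
    at = [[a[r][c] for r in range(3)] for c in range(2)]            # A^T, 2x3
    g = [[sum(at[i][k] * a[k][j] for k in range(3)) for j in range(2)]
         for i in range(2)]                                         # A^T A, 2x2
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    gi = [[g[1][1] / det, -g[0][1] / det],
          [-g[1][0] / det, g[0][0] / det]]                          # inv(A^T A)
    return [[sum(gi[i][k] * at[k][j] for k in range(2)) for j in range(3)]
            for i in range(2)]                                      # 2x3 pinv
```

For such a matrix, `pinv(A) @ A` recovers the 2x2 identity, one of the Moore-Penrose defining properties.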
test | lu_solve | Solves systems of linear eqns `A X = RHS`, given LU factorizations.
Note: this function does not verify the implied matrix is actually invertible
nor is this condition checked even when `validate_args=True`.
Args:
lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if
`matmul(P, matmul(L, U)) = X` ... | tensorflow_probability/python/math/linalg.py | def lu_solve(lower_upper, perm, rhs,
validate_args=False,
name=None):
"""Solves systems of linear eqns `A X = RHS`, given LU factorizations.
Note: this function does not verify the implied matrix is actually invertible
nor is this condition checked even when `validate_args=True`.
Arg... | def lu_solve(lower_upper, perm, rhs,
validate_args=False,
name=None):
"""Solves systems of linear eqns `A X = RHS`, given LU factorizations.
Note: this function does not verify the implied matrix is actually invertible
nor is this condition checked even when `validate_args=True`.
Arg... | [
"Solves",
"systems",
"of",
"linear",
"eqns",
"A",
"X",
"=",
"RHS",
"given",
"LU",
"factorizations",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L448-L543 | [
"def",
"lu_solve",
"(",
"lower_upper",
",",
"perm",
",",
"rhs",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'lu_solve'",
",",
"[",
"lower_upper",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
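The `lu_solve` record above consumes `lower_upper = L + U - eye` and `perm` as returned by `tf.linalg.lu`: permute the right-hand side, forward-substitute with the unit-diagonal `L`, then back-substitute with `U`. A single-system stdlib sketch of those three steps:

```python
def lu_solve(lower_upper, perm, rhs):
    n = len(rhs)
    b = [rhs[p] for p in perm]           # apply the row permutation
    y = []                               # forward substitution with unit-diag L
    for i in range(n):
        y.append(b[i] - sum(lower_upper[i][j] * y[j] for j in range(i)))
    x = [0.0] * n                        # back substitution with U
    for i in reversed(range(n)):
        s = sum(lower_upper[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / lower_upper[i][i]
    return x
```

The test below uses the pivoted LU of `[[4, 3], [6, 3]]`, whose factors fit in one small matrix.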
test | lu_matrix_inverse | Computes a matrix inverse given the matrix's LU decomposition.
This op is conceptually identical to,
```python
inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))
tf.assert_near(tf.matrix_inverse(X), inv_X)
# ==> True
```
Note: this function does not verify the implied matrix is actually invertible
nor i... | tensorflow_probability/python/math/linalg.py | def lu_matrix_inverse(lower_upper, perm, validate_args=False, name=None):
"""Computes a matrix inverse given the matrix's LU decomposition.
This op is conceptually identical to,
```python
inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))
tf.assert_near(tf.matrix_inverse(X), inv_X)
# ==> True
```
Note: ... | def lu_matrix_inverse(lower_upper, perm, validate_args=False, name=None):
"""Computes a matrix inverse given the matrix's LU decomposition.
This op is conceptually identical to,
````python
inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))
tf.assert_near(tf.matrix_inverse(X), inv_X)
# ==> True
```
Note: ... | [
"Computes",
"a",
"matrix",
"inverse",
"given",
"the",
"matrix",
"s",
"LU",
"decomposition",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L546-L605 | [
"def",
"lu_matrix_inverse",
"(",
"lower_upper",
",",
"perm",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'lu_matrix_inverse'",
",",
"[",
"lower_upper"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | lu_reconstruct | The inverse LU decomposition, `X == lu_reconstruct(*tf.linalg.lu(X))`.
Args:
lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if
`matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.
perm: `p` as returned by `tf.linalg.lu`, i.e., if
`matmul(P, matmul(L, U)) = X` then `perm = argmax... | tensorflow_probability/python/math/linalg.py | def lu_reconstruct(lower_upper, perm, validate_args=False, name=None):
"""The inverse LU decomposition, `X == lu_reconstruct(*tf.linalg.lu(X))`.
Args:
lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if
`matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.
perm: `p` as returned by `tf... | def lu_reconstruct(lower_upper, perm, validate_args=False, name=None):
"""The inverse LU decomposition, `X == lu_reconstruct(*tf.linalg.lu(X))`.
Args:
lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if
`matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.
perm: `p` as returned by `tf... | [
"The",
"inverse",
"LU",
"decomposition",
"X",
"==",
"lu_reconstruct",
"(",
"*",
"tf",
".",
"linalg",
".",
"lu",
"(",
"X",
"))",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L608-L676 | [
"def",
"lu_reconstruct",
"(",
"lower_upper",
",",
"perm",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'lu_reconstruct'",
",",
"[",
"lower_upper",
",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _lu_reconstruct_assertions | Returns list of assertions related to `lu_reconstruct` assumptions. | tensorflow_probability/python/math/linalg.py | def _lu_reconstruct_assertions(lower_upper, perm, validate_args):
"""Returns list of assertions related to `lu_reconstruct` assumptions."""
assertions = []
message = 'Input `lower_upper` must have at least 2 dimensions.'
if lower_upper.shape.ndims is not None:
if lower_upper.shape.ndims < 2:
raise Va... | def _lu_reconstruct_assertions(lower_upper, perm, validate_args):
"""Returns list of assertions related to `lu_reconstruct` assumptions."""
assertions = []
message = 'Input `lower_upper` must have at least 2 dimensions.'
if lower_upper.shape.ndims is not None:
if lower_upper.shape.ndims < 2:
raise Va... | [
"Returns",
"list",
"of",
"assertions",
"related",
"to",
"lu_reconstruct",
"assumptions",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L679-L708 | [
"def",
"_lu_reconstruct_assertions",
"(",
"lower_upper",
",",
"perm",
",",
"validate_args",
")",
":",
"assertions",
"=",
"[",
"]",
"message",
"=",
"'Input `lower_upper` must have at least 2 dimensions.'",
"if",
"lower_upper",
".",
"shape",
".",
"ndims",
"is",
"not",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _lu_solve_assertions | Returns list of assertions related to `lu_solve` assumptions. | tensorflow_probability/python/math/linalg.py | def _lu_solve_assertions(lower_upper, perm, rhs, validate_args):
"""Returns list of assertions related to `lu_solve` assumptions."""
assertions = _lu_reconstruct_assertions(lower_upper, perm, validate_args)
message = 'Input `rhs` must have at least 2 dimensions.'
if rhs.shape.ndims is not None:
if rhs.shap... | def _lu_solve_assertions(lower_upper, perm, rhs, validate_args):
"""Returns list of assertions related to `lu_solve` assumptions."""
assertions = _lu_reconstruct_assertions(lower_upper, perm, validate_args)
message = 'Input `rhs` must have at least 2 dimensions.'
if rhs.shape.ndims is not None:
if rhs.shap... | [
"Returns",
"list",
"of",
"assertions",
"related",
"to",
"lu_solve",
"assumptions",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L711-L735 | [
"def",
"_lu_solve_assertions",
"(",
"lower_upper",
",",
"perm",
",",
"rhs",
",",
"validate_args",
")",
":",
"assertions",
"=",
"_lu_reconstruct_assertions",
"(",
"lower_upper",
",",
"perm",
",",
"validate_args",
")",
"message",
"=",
"'Input `rhs` must have at least 2 ... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | sparse_or_dense_matmul | Returns (batched) matmul of a SparseTensor (or Tensor) with a Tensor.
Args:
sparse_or_dense_a: `SparseTensor` or `Tensor` representing a (batch of)
matrices.
dense_b: `Tensor` representing a (batch of) matrices, with the same batch
shape as `sparse_or_dense_a`. The shape must be compatible with t... | tensorflow_probability/python/math/linalg.py | def sparse_or_dense_matmul(sparse_or_dense_a,
dense_b,
validate_args=False,
name=None,
**kwargs):
"""Returns (batched) matmul of a SparseTensor (or Tensor) with a Tensor.
Args:
sparse_or_dense_a: `Sparse... | def sparse_or_dense_matmul(sparse_or_dense_a,
dense_b,
validate_args=False,
name=None,
**kwargs):
"""Returns (batched) matmul of a SparseTensor (or Tensor) with a Tensor.
Args:
sparse_or_dense_a: `Sparse... | [
"Returns",
"(",
"batched",
")",
"matmul",
"of",
"a",
"SparseTensor",
"(",
"or",
"Tensor",
")",
"with",
"a",
"Tensor",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L738-L788 | [
"def",
"sparse_or_dense_matmul",
"(",
"sparse_or_dense_a",
",",
"dense_b",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"'spa... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | sparse_or_dense_matvecmul | Returns (batched) matmul of a (sparse) matrix with a column vector.
Args:
sparse_or_dense_matrix: `SparseTensor` or `Tensor` representing a (batch of)
matrices.
dense_vector: `Tensor` representing a (batch of) vectors, with the same
batch shape as `sparse_or_dense_matrix`. The shape must be compa... | tensorflow_probability/python/math/linalg.py | def sparse_or_dense_matvecmul(sparse_or_dense_matrix,
dense_vector,
validate_args=False,
name=None,
**kwargs):
"""Returns (batched) matmul of a (sparse) matrix with a column vector.
Args:
spa... | def sparse_or_dense_matvecmul(sparse_or_dense_matrix,
dense_vector,
validate_args=False,
name=None,
**kwargs):
"""Returns (batched) matmul of a (sparse) matrix with a column vector.
Args:
spa... | [
"Returns",
"(",
"batched",
")",
"matmul",
"of",
"a",
"(",
"sparse",
")",
"matrix",
"with",
"a",
"column",
"vector",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L791-L826 | [
"def",
"sparse_or_dense_matvecmul",
"(",
"sparse_or_dense_matrix",
",",
"dense_vector",
",",
"validate_args",
"=",
"False",
",",
"name",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
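A plausible reading of the matvecmul row above is that the matrix-vector product is routed through the matmul path by temporarily treating the vector as a one-column matrix. A NumPy sketch of that equivalence (an assumption about the mechanics, not the TFP code itself):

```python
import numpy as np

# A matvec is a matmul against a one-column matrix, squeezed back to a vector.
a = np.arange(6.0).reshape(2, 3)
v = np.array([1.0, 0.0, 2.0])

via_matmul = (a @ v[:, np.newaxis])[:, 0]  # expand, matmul, squeeze
assert np.allclose(via_matmul, a @ v)
```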
test | _sparse_tensor_dense_matmul | Returns (batched) matmul of a SparseTensor with a Tensor.
Args:
sp_a: `SparseTensor` representing a (batch of) matrices.
b: `Tensor` representing a (batch of) matrices, with the same batch shape as
`sp_a`. The shape must be compatible with the shape of `sp_a` and kwargs.
**kwargs: Keyword arguments... | tensorflow_probability/python/math/linalg.py | def _sparse_tensor_dense_matmul(sp_a, b, **kwargs):
"""Returns (batched) matmul of a SparseTensor with a Tensor.
Args:
sp_a: `SparseTensor` representing a (batch of) matrices.
b: `Tensor` representing a (batch of) matrices, with the same batch shape as
`sp_a`. The shape must be compatible with the sh... | def _sparse_tensor_dense_matmul(sp_a, b, **kwargs):
"""Returns (batched) matmul of a SparseTensor with a Tensor.
Args:
sp_a: `SparseTensor` representing a (batch of) matrices.
b: `Tensor` representing a (batch of) matrices, with the same batch shape as
`sp_a`. The shape must be compatible with the sh... | [
"Returns",
"(",
"batched",
")",
"matmul",
"of",
"a",
"SparseTensor",
"with",
"a",
"Tensor",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L837-L871 | [
"def",
"_sparse_tensor_dense_matmul",
"(",
"sp_a",
",",
"b",
",",
"*",
"*",
"kwargs",
")",
":",
"batch_shape",
"=",
"_get_shape",
"(",
"sp_a",
")",
"[",
":",
"-",
"2",
"]",
"# Reshape the SparseTensor into a rank 3 SparseTensors, with the",
"# batch shape flattened to... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _sparse_block_diag | Returns a block diagonal rank 2 SparseTensor from a batch of SparseTensors.
Args:
sp_a: A rank 3 `SparseTensor` representing a batch of matrices.
Returns:
sp_block_diag_a: matrix-shaped, `float` `SparseTensor` with the same dtype
as `sp_a`, of shape [B * M, B * N] where `sp_a` has shape
... | tensorflow_probability/python/math/linalg.py | def _sparse_block_diag(sp_a):
"""Returns a block diagonal rank 2 SparseTensor from a batch of SparseTensors.
Args:
sp_a: A rank 3 `SparseTensor` representing a batch of matrices.
Returns:
sp_block_diag_a: matrix-shaped, `float` `SparseTensor` with the same dtype
as `sp_a`, of shape [B * ... | def _sparse_block_diag(sp_a):
"""Returns a block diagonal rank 2 SparseTensor from a batch of SparseTensors.
Args:
sp_a: A rank 3 `SparseTensor` representing a batch of matrices.
Returns:
sp_block_diag_a: matrix-shaped, `float` `SparseTensor` with the same dtype
as `sp_a`, of shape [B * ... | [
"Returns",
"a",
"block",
"diagonal",
"rank",
"2",
"SparseTensor",
"from",
"a",
"batch",
"of",
"SparseTensors",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L874-L895 | [
"def",
"_sparse_block_diag",
"(",
"sp_a",
")",
":",
"# Construct the matrix [[M, N], [1, 0], [0, 1]] which would map the index",
"# (b, i, j) to (Mb + i, Nb + j). This effectively creates a block-diagonal",
"# matrix of dense shape [B * M, B * N].",
"# Note that this transformation doesn't increas... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _maybe_validate_matrix | Checks that input is a `float` matrix. | tensorflow_probability/python/math/linalg.py | def _maybe_validate_matrix(a, validate_args):
"""Checks that input is a `float` matrix."""
assertions = []
if not a.dtype.is_floating:
raise TypeError('Input `a` must have `float`-like `dtype` '
'(saw {}).'.format(a.dtype.name))
if a.shape.ndims is not None:
if a.shape.ndims < 2:
... | def _maybe_validate_matrix(a, validate_args):
"""Checks that input is a `float` matrix."""
assertions = []
if not a.dtype.is_floating:
raise TypeError('Input `a` must have `float`-like `dtype` '
'(saw {}).'.format(a.dtype.name))
if a.shape.ndims is not None:
if a.shape.ndims < 2:
... | [
"Checks",
"that",
"input",
"is",
"a",
"float",
"matrix",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/math/linalg.py#L898-L911 | [
"def",
"_maybe_validate_matrix",
"(",
"a",
",",
"validate_args",
")",
":",
"assertions",
"=",
"[",
"]",
"if",
"not",
"a",
".",
"dtype",
".",
"is_floating",
":",
"raise",
"TypeError",
"(",
"'Input `a` must have `float`-like `dtype` '",
"'(saw {}).'",
".",
"format",... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _grad_neg_log_likelihood_and_fim | Computes the neg-log-likelihood gradient and Fisher information for a GLM.
Note that Fisher information is related to the Hessian of the log-likelihood
by the equation
```none
FisherInfo = E[Hessian with respect to model_coefficients of -LogLikelihood(
Y | model_matrix, model_coefficients)]
```
whe... | tensorflow_probability/python/glm/proximal_hessian.py | def _grad_neg_log_likelihood_and_fim(model_matrix, linear_response, response,
model):
"""Computes the neg-log-likelihood gradient and Fisher information for a GLM.
Note that Fisher information is related to the Hessian of the log-likelihood
by the equation
```none
Fisher... | def _grad_neg_log_likelihood_and_fim(model_matrix, linear_response, response,
model):
"""Computes the neg-log-likelihood gradient and Fisher information for a GLM.
Note that Fisher information is related to the Hessian of the log-likelihood
by the equation
```none
Fisher... | [
"Computes",
"the",
"neg",
"-",
"log",
"-",
"likelihood",
"gradient",
"and",
"Fisher",
"information",
"for",
"a",
"GLM",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/glm/proximal_hessian.py#L41-L107 | [
"def",
"_grad_neg_log_likelihood_and_fim",
"(",
"model_matrix",
",",
"linear_response",
",",
"response",
",",
"model",
")",
":",
"# TODO(b/111926503): Determine whether there are some practical cases where it",
"# is computationally favorable to compute the full FIM.",
"mean",
",",
"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | fit_sparse_one_step | One step of (the outer loop of) the GLM fitting algorithm.
This function returns a new value of `model_coefficients`, equal to
`model_coefficients_start + model_coefficients_update`. The increment
`model_coefficients_update in R^n` is computed by a coordinate descent method,
that is, by a loop in which each i... | tensorflow_probability/python/glm/proximal_hessian.py | def fit_sparse_one_step(model_matrix,
response,
model,
model_coefficients_start,
tolerance,
l1_regularizer,
l2_regularizer=None,
maximum_full_sweeps=Non... | def fit_sparse_one_step(model_matrix,
response,
model,
model_coefficients_start,
tolerance,
l1_regularizer,
l2_regularizer=None,
maximum_full_sweeps=Non... | [
"One",
"step",
"of",
"(",
"the",
"outer",
"loop",
"of",
")",
"the",
"GLM",
"fitting",
"algorithm",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/glm/proximal_hessian.py#L110-L232 | [
"def",
"fit_sparse_one_step",
"(",
"model_matrix",
",",
"response",
",",
"model",
",",
"model_coefficients_start",
",",
"tolerance",
",",
"l1_regularizer",
",",
"l2_regularizer",
"=",
"None",
",",
"maximum_full_sweeps",
"=",
"None",
",",
"learning_rate",
"=",
"None"... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | fit_sparse | r"""Fits a GLM using coordinate-wise FIM-informed proximal gradient descent.
This function uses a L1- and L2-regularized, second-order quasi-Newton method
to find maximum-likelihood parameters for the given model and observed data.
The second-order approximations use negative Fisher information in place of
the... | tensorflow_probability/python/glm/proximal_hessian.py | def fit_sparse(model_matrix,
response,
model,
model_coefficients_start,
tolerance,
l1_regularizer,
l2_regularizer=None,
maximum_iterations=None,
maximum_full_sweeps_per_iteration=1,
lea... | def fit_sparse(model_matrix,
response,
model,
model_coefficients_start,
tolerance,
l1_regularizer,
l2_regularizer=None,
maximum_iterations=None,
maximum_full_sweeps_per_iteration=1,
lea... | [
"r",
"Fits",
"a",
"GLM",
"using",
"coordinate",
"-",
"wise",
"FIM",
"-",
"informed",
"proximal",
"gradient",
"descent",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/glm/proximal_hessian.py#L235-L488 | [
"def",
"fit_sparse",
"(",
"model_matrix",
",",
"response",
",",
"model",
",",
"model_coefficients_start",
",",
"tolerance",
",",
"l1_regularizer",
",",
"l2_regularizer",
"=",
"None",
",",
"maximum_iterations",
"=",
"None",
",",
"maximum_full_sweeps_per_iteration",
"="... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _gen_slices | Generate the slices for building an autoregressive mask. | tensorflow_probability/python/bijectors/masked_autoregressive.py | def _gen_slices(num_blocks, n_in, n_out, mask_type=MASK_EXCLUSIVE):
"""Generate the slices for building an autoregressive mask."""
# TODO(b/67594795): Better support of dynamic shape.
slices = []
col = 0
d_in = n_in // num_blocks
d_out = n_out // num_blocks
row = d_out if mask_type == MASK_EXCLUSIVE else ... | def _gen_slices(num_blocks, n_in, n_out, mask_type=MASK_EXCLUSIVE):
"""Generate the slices for building an autoregressive mask."""
# TODO(b/67594795): Better support of dynamic shape.
slices = []
col = 0
d_in = n_in // num_blocks
d_out = n_out // num_blocks
row = d_out if mask_type == MASK_EXCLUSIVE else ... | [
"Generate",
"the",
"slices",
"for",
"building",
"an",
"autoregressive",
"mask",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/bijectors/masked_autoregressive.py#L306-L320 | [
"def",
"_gen_slices",
"(",
"num_blocks",
",",
"n_in",
",",
"n_out",
",",
"mask_type",
"=",
"MASK_EXCLUSIVE",
")",
":",
"# TODO(b/67594795): Better support of dynamic shape.",
"slices",
"=",
"[",
"]",
"col",
"=",
"0",
"d_in",
"=",
"n_in",
"//",
"num_blocks",
"d_o... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | _gen_mask | Generate the mask for building an autoregressive dense layer. | tensorflow_probability/python/bijectors/masked_autoregressive.py | def _gen_mask(num_blocks,
n_in,
n_out,
mask_type=MASK_EXCLUSIVE,
dtype=tf.float32):
"""Generate the mask for building an autoregressive dense layer."""
# TODO(b/67594795): Better support of dynamic shape.
mask = np.zeros([n_out, n_in], dtype=dtype.as_numpy_d... | def _gen_mask(num_blocks,
n_in,
n_out,
mask_type=MASK_EXCLUSIVE,
dtype=tf.float32):
"""Generate the mask for building an autoregressive dense layer."""
# TODO(b/67594795): Better support of dynamic shape.
mask = np.zeros([n_out, n_in], dtype=dtype.as_numpy_d... | [
"Generate",
"the",
"mask",
"for",
"building",
"an",
"autoregressive",
"dense",
"layer",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/bijectors/masked_autoregressive.py#L323-L334 | [
"def",
"_gen_mask",
"(",
"num_blocks",
",",
"n_in",
",",
"n_out",
",",
"mask_type",
"=",
"MASK_EXCLUSIVE",
",",
"dtype",
"=",
"tf",
".",
"float32",
")",
":",
"# TODO(b/67594795): Better support of dynamic shape.",
"mask",
"=",
"np",
".",
"zeros",
"(",
"[",
"n_... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | masked_dense | An autoregressively masked dense layer. Analogous to `tf.layers.dense`.
See [Germain et al. (2015)][1] for detailed explanation.
Arguments:
inputs: Tensor input.
units: Python `int` scalar representing the dimensionality of the output
space.
num_blocks: Python `int` scalar representing the number... | tensorflow_probability/python/bijectors/masked_autoregressive.py | def masked_dense(inputs,
units,
num_blocks=None,
exclusive=False,
kernel_initializer=None,
reuse=None,
name=None,
*args, # pylint: disable=keyword-arg-before-vararg
**kwargs):
"""A ... | def masked_dense(inputs,
units,
num_blocks=None,
exclusive=False,
kernel_initializer=None,
reuse=None,
name=None,
*args, # pylint: disable=keyword-arg-before-vararg
**kwargs):
"""A ... | [
"A",
"autoregressively",
"masked",
"dense",
"layer",
".",
"Analogous",
"to",
"tf",
".",
"layers",
".",
"dense",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/bijectors/masked_autoregressive.py#L337-L407 | [
"def",
"masked_dense",
"(",
"inputs",
",",
"units",
",",
"num_blocks",
"=",
"None",
",",
"exclusive",
"=",
"False",
",",
"kernel_initializer",
"=",
"None",
",",
"reuse",
"=",
"None",
",",
"name",
"=",
"None",
",",
"*",
"args",
",",
"# pylint: disable=keywo... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |
test | masked_autoregressive_default_template | Build the Masked Autoregressive Density Estimator (Germain et al., 2015).
This will be wrapped in a make_template to ensure the variables are only
created once. It takes the input and returns the `loc` ("mu" in [Germain et
al. (2015)][1]) and `log_scale` ("alpha" in [Germain et al. (2015)][1]) from
the MADE ne... | tensorflow_probability/python/bijectors/masked_autoregressive.py | def masked_autoregressive_default_template(hidden_layers,
shift_only=False,
activation=tf.nn.relu,
log_scale_min_clip=-5.,
log_scale_max_clip=3.,
... | def masked_autoregressive_default_template(hidden_layers,
shift_only=False,
activation=tf.nn.relu,
log_scale_min_clip=-5.,
log_scale_max_clip=3.,
... | [
"Build",
"the",
"Masked",
"Autoregressive",
"Density",
"Estimator",
"(",
"Germain",
"et",
"al",
".",
"2015",
")",
"."
] | tensorflow/probability | python | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/bijectors/masked_autoregressive.py#L410-L526 | [
"def",
"masked_autoregressive_default_template",
"(",
"hidden_layers",
",",
"shift_only",
"=",
"False",
",",
"activation",
"=",
"tf",
".",
"nn",
".",
"relu",
",",
"log_scale_min_clip",
"=",
"-",
"5.",
",",
"log_scale_max_clip",
"=",
"3.",
",",
"log_scale_clip_grad... | e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5 |