Again $N$ is the hypothesis and $(K, n, k)$ is the data. If we've tagged $K$ hyraxes and then caught another $n-k$ untagged ones, the total number of unique hyraxes we've seen is $K + (n - k)$. For any smaller value of $N$, the likelihood is 0. Notice that I didn't bother to compute ${K \choose k}$; because it does not depend on $N$, ...
hypos = range(1, 1000)
suite = Hyrax(hypos)
data = 10, 10, 2
suite.Update(data)
examples/hyrax_soln.ipynb
AllenDowney/ThinkBayes2
mit
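The Likelihood method itself isn't shown in this excerpt; here is a minimal sketch of the computation just described, with the constant ${K \choose k}$ factor dropped. The function name `hyrax_likelihood` is mine, not from the notebook:

```python
from math import comb

def hyrax_likelihood(N, K, n, k):
    """Likelihood of the data (K tagged, n caught, k of the caught tagged)
    under population size N, omitting the constant C(K, k) factor, which
    cancels when the posterior is normalized."""
    if N < K + (n - k):
        return 0.0  # hypothesis smaller than the number of animals actually seen
    return comb(N - K, n - k) / comb(N, n)
```

For $N < K + (n - k)$ the likelihood is exactly zero, which is what zeroes out the impossible hypotheses in the suite.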
Here's what the posterior distribution looks like:
import thinkplot

thinkplot.Pdf(suite)
thinkplot.Config(xlabel='Number of hyraxes', ylabel='PMF', legend=False)
And here are some summaries of the posterior distribution:
print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.CredibleInterval(90))
The combinatorial expression we computed is the PMF of the hypergeometric distribution, so we can also compute it using thinkbayes2.EvalHypergeomPmf, which uses scipy.stats.hypergeom.pmf.
import thinkbayes2

class Hyrax2(thinkbayes2.Suite):
    """Represents hypotheses about how many hyraxes there are."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: total population (N)
        data: # tagged (K), # caught (n), # of caught who...
And the result is the same:
hypos = range(1, 1000)
suite = Hyrax2(hypos)
data = 10, 10, 2
suite.Update(data)

thinkplot.Pdf(suite)
thinkplot.Config(xlabel='Number of hyraxes', ylabel='PMF', legend=False)

print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.C...
If we run the analysis again with a different prior (running from 1 to 1999), the MAP is the same, but the posterior mean and credible interval are substantially different:
hypos = range(1, 2000)
suite = Hyrax2(hypos)
data = 10, 10, 2
suite.Update(data)

print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.CredibleInterval(90))
Preprocessing Functions The first thing to do to any dataset is preprocessing. The two main preprocessing functions are: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to g...
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    vocab = {word: None for word in text}
    empty = 0
    # RNN...
synthetic-dialog/script_generation-tech-summit.ipynb
syednasar/talks
mit
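The create_lookup_tables cell above is truncated; a minimal self-contained version might look like the sketch below. Ordering ids by descending word frequency is a common convention here, not a requirement of the interface:

```python
from collections import Counter

def create_lookup_tables(text):
    """Build word<->id lookup dicts from a list of words.

    Ids are assigned in order of descending frequency, so the most
    common word gets id 0 (a convention, not part of the spec).
    """
    counts = Counter(text)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab
```

The two dicts are exact inverses of each other, which the checkpoint code later relies on when mapping ids back to words.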
What does the vocab-to-int pairing look like?
test_text = '''
Moe_Szyslak Moe's Tavern Where the elite meet to drink
'''
test_text = test_text.lower()
test_text = test_text.split()

vocab_to_int, int_to_vocab = create_lookup_tables(test_text)
print(vocab_to_int['where'])
print(int_to_vocab[10])
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to token...
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    symbols = ['.', ',', '"', ';', '!', '?', '(', ')', '--', '\n']
    symbol_vals = ['Period', 'Comma', '...
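The token_lookup cell above is truncated; a complete sketch follows. The exact token strings are an assumption on my part — any tokens work as long as they cannot be confused with ordinary words (hence the delimiters and the absence of spaces):

```python
def token_lookup():
    """Map each punctuation symbol to a token that can't collide with a real word."""
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||',
    }
```

With this mapping, "bye!" becomes the two tokens "bye" and "||Exclamation_Mark||", so the network sees the same word in both contexts.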
Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file.
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print(int_text[:2])
for i in int_text[:4]:
    print(i, int_to_vocab[i])
    print(int_to_vocab[i], vocab_to_int[int_to_vocab[i]])
Build the Neural Network We'll build the components necessary for an RNN by implementing the following functions: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
print("Device name: ", tf.test.gpu_device_name(...
Input Here we implement the get_inputs() function to create TF Placeholders for the Neural Network. It creates the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple...
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    Input = tf.placeholder(tf.int32, [None, None], name='input')
    Targets = tf.placeholder(tf.int32, [None, None], name='Targets')
    LearningRate = tf.placeholder(tf...
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize the cell state using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the follo...
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # Set number of layers and dropout value
    lstm_layers = 3
    keep_prob = 0.5
    # Let us c...
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    embedding = tf.V...
Build RNN We created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Here we build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(state, name='final_state')
    ...
Build the Neural Network Here we apply the functions we implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the...
def build_nn(cell, rnn_size, input_data, vocab_size):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :return: Tuple (Logits, FinalState)
    """
    embedding = get_embed(input_dat...
Batches We implemented get_batches to create batches of input and targets using int_text. The batches are a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The...
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_batches = int(l...
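The get_batches cell above is truncated; here is one self-contained way to produce the (number of batches, 2, batch size, sequence length) array just described, with targets shifted one word ahead of the inputs (wrapping at the end). This is a sketch of the shape contract, not necessarily the notebook's exact implementation:

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    """Return an array of shape (n_batches, 2, batch_size, seq_length).

    batches[i][0] is the input batch, batches[i][1] the target batch;
    each target word is the word following its input word.
    """
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch

    # Keep only full batches, then shift by one for the targets.
    xdata = np.array(int_text[: n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)          # last target wraps to the first word

    x = xdata.reshape(batch_size, -1)
    y = ydata.reshape(batch_size, -1)

    batches = []
    for i in range(n_batches):
        sl = slice(i * seq_length, (i + 1) * seq_length)
        batches.append([x[:, sl], y[:, sl]])
    return np.array(batches)
```

Leftover words that don't fill a complete batch are simply dropped, which is the usual choice for this kind of loader.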
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the ...
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100

# Where to save the checkpoint
save_dir = './save'
Build the Graph Build the graph using the neural network you implemented.
input_text, targets, lr = get_inputs()
print(lr)

from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data...
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})
        for batch_i, (x, y) in enumerate(batches):
            ...
Save Parameters Save seq_length and save_dir for generating a new TV script.
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Checkpoint
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTen...
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    print(loaded_graph)
    InputTens...
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    index = np.random.choice...
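The pick_word cell above is truncated; a minimal implementation simply samples an index from the probability vector and looks it up:

```python
import numpy as np

def pick_word(probabilities, int_to_vocab):
    """Sample the next word id in proportion to its probability, then look it up."""
    index = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[index]
```

Sampling (rather than taking the argmax) keeps the generated script from looping on the single most likely word.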
Eigenvalue distribution
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
plt.rc("font", family='serif')

eigs = np.linalg.eigvalsh(A)
plt.semilogy(np.unique(eigs))
plt.ylabel("Eigenvalues", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
Spring2022/cg.ipynb
amkatrutsa/MIPT-Opt
mit
The correct answer
import scipy.optimize as scopt

def callback(x, array):
    array.append(x)

scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, method="CG", jac=grad_f, callback=scopt_cg_callback)
x = x.x
print("||f'(x*)|| =", np.linalg.norm(A.dot(x) - b))
print("f* =", f(x))
Implementation of the conjugate gradient method
def ConjugateGradientQuadratic(x0, A, b, tol=1e-8, callback=None):
    x = x0
    r = A.dot(x0) - b
    p = -r
    while np.linalg.norm(r) > tol:
        alpha = r.dot(r) / p.dot(A.dot(p))
        x = x + alpha * p
        if callback is not None:
            callback(x)
        r_next = r + alpha * A.dot(p)
        be...
Convergence plot
plt.figure(figsize=(8, 6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()], label=r"$\|f'(x_k)\|^{CG}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array[:max_iter]], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x i...
Non-quadratic function $$ f(w) = \frac{1}{2} \|w\|_2^2 + C\,\frac{1}{m} \sum_{i=1}^m \log \left(1 + \exp(- y_i \langle x_i, w \rangle)\right) \to \min_w $$
import numpy as np
import sklearn.datasets as skldata
import scipy.special as scspec
import jax
import jax.numpy as jnp
from jax.config import config
config.update("jax_enable_x64", True)

n = 300
m = 1000
X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3, random_state=0)
X ...
Implementation of the Fletcher–Reeves method
def ConjugateGradientFR(f, gradf, x0, num_iter=100, tol=1e-8, callback=None, restart=False):
    x = x0
    grad = gradf(x)
    p = -grad
    it = 0
    while np.linalg.norm(gradf(x)) > tol and it < num_iter:
        alpha = utils.backtracking(x, p, method="Wolfe", beta1=0.1, beta2=0.4, rho=0.5, f=f, grad_f=gradf)
        ...
Convergence plot
import scipy.optimize as scopt
import liboptpy.restarts as restarts

n_restart = 60
tol = 1e-5
max_iter = 600

scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, tol=tol, method="CG", jac=autograd_f, callback=scopt_cg_callback, options={"maxiter": max_iter})
x = x.x
...
Running time
%timeit scopt.minimize(f, x0, method="CG", tol=tol, jac=grad_f, options={"maxiter": max_iter})
%timeit cg_fr.solve(x0, tol=tol, max_iter=max_iter)
%timeit cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter)
%timeit gd.solve(x0, tol=tol, max_iter=max_iter)
TensorFlow Distributions: A Gentle Introduction
import collections
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

try:
    tf.compat.v1.enable_eager_execution()
except ValueError:
    pass

import matplotlib.pyplot as plt
tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb
tensorflow/probability
apache-2.0
Basic Univariate Distributions Let's dive right in and create a normal distribution:
n = tfd.Normal(loc=0., scale=1.) n
We can draw a sample from it:
n.sample()
We can draw multiple samples:
n.sample(3)
We can evaluate a log prob:
n.log_prob(0.)
We can evaluate multiple log probabilities:
n.log_prob([0., 2., 4.])
We have a wide range of distributions. Let's try a Bernoulli:
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.]) nd
Comparing this to the univariate normal we created earlier, what's different?
tfd.Normal(loc=0., scale=1.)
We see that the univariate normal has an event_shape of (), indicating it's a scalar distribution. The multivariate normal has an event_shape of 2, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&#41;) of this distribution is two-dimensional. Sampling works just as before:
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
nd = tfd.MultivariateNormalFullCovariance(
    loc=[0., 5],
    covariance_matrix=[[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single Distribution object:
b3 = tfd.Bernoulli(probs=[.3, .5, .7]) b3
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python Distribution object. The three distributions cannot be manipulated individually. Note how the batch_shape is (3,), indicating a batch of three distributions, a...
b3.sample()
b3.sample(6)
If we call prob (this has the same shape semantics as log_prob; we use prob with these small Bernoulli examples for clarity, although log_prob is usually preferred in applications), we can pass it a vector and evaluate the probability of each coin yielding that value:
b3.prob([1, 1, 0])
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a for loop (at least in Eager mode, in TF graph mode you'd need a tf.while loop). However, having a (potentially large) set of identically parameterized distributi...
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
Compare the shape to that of the original b3:
b3
As promised, we see that Independent has moved the batch shape into the event shape: b3_joint is a single distribution (batch_shape = ()) over a three-dimensional event space (event_shape = (3,)). Let's check the semantics:
b3_joint.prob([1, 1, 0])
An alternate way to get the same result would be to compute probabilities using b3 and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
tf.reduce_prod(b3.prob([1, 1, 0]))
Independent allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts: b3.sample and b3_joint.sample have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distribut...
nd_batch = tfd.MultivariateNormalFullCovariance(
    loc=[[0., 0.], [1., 1.], [2., 2.]],
    covariance_matrix=[[[1., .1], [.1, 1.]],
                       [[1., .3], [.3, 1.]],
                       [[1., .5], [.5, 1.]]])
nd_batch
We see batch_shape = (3,), so there are three independent multivariate normals, and event_shape = (2,), so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements. Sampling works:
nd_batch.sample(4)
Since batch_shape = (3,) and event_shape = (2,), we pass a tensor of shape (3, 2) to log_prob:
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has a batch shape B and an event shape E. Let BE be the concatenation of the batch and event shapes: For the univariate scalar distributions n and b, BE = (). For the two-dimensional multivariate normal nd, BE = (2,). For bo...
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
Let's turn to the two-dimensional multivariate normal nd (parameters changed for illustrative purposes):
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.]) nd
log_prob "expects" an argument with shape (2,), but it will accept any argument that broadcasts against this shape:
nd.log_prob([0., 0.])
But we can pass in "more" examples and evaluate all of their log probabilities at once:
nd.log_prob([[0., 0.], [1., 1.], [2., 2.]])
Perhaps less appealingly, we can broadcast over the event dimensions:
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP. Now let's look at the three coins example again:
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
Here, using broadcasting to represent the probability that each coin comes up heads is quite intuitive:
b3.prob([1])
(Compare this to b3.prob([1., 1., 1.]), which we would have used back where b3 was introduced.) Now suppose we want to know, for each coin, the probability the coin comes up heads and the probability it comes up tails. We could imagine trying: b3.log_prob([0, 1]) Unfortunately, this produces an error with a long and no...
b3.prob([[0], [1]])
Convex hull The convex hull is probably the most common approach - its goal is to create the smallest polygon that contains all points from a given list. The scipy.spatial package provides this algorithm (https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.spatial.ConvexHull.html, accessed 29.12.2018).
# Function that takes a map and a list of points (LON, LAT tuples) and
# returns a map with the convex hull polygon from the points as a new layer
def create_convexhull_polygon(
    map_object, list_of_points, layer_name, line_color, fill_color, weight, text
):
    # Since it is pointless to draw a convex hull polygo...
examples/Polygons_from_list_of_points.ipynb
ocefpaf/folium
mit
Envelope The envelope is another interesting approach - its goal is to create a box that contains all points from a given list.
def create_envelope_polygon(
    map_object, list_of_points, layer_name, line_color, fill_color, weight, text
):
    # Since it is pointless to draw a box around less than 2 points check len of input
    if len(list_of_points) < 2:
        return

    # Find the edges of box
    from operator import itemgetter
    li...
Concave hull (alpha shape) In some cases the convex hull does not yield good results - this is when the shape of the polygon should be concave instead of convex. The solution is a concave hull, also called an alpha shape. Yet, there is no ready-to-go, off-the-shelf solution for this, but there are great resources (...
# Initialize map
my_map_global = folium.Map(location=[48.2460683, 9.26764125], zoom_start=7)

# Create a convex hull polygon that contains some points
list_of_points = randome_points(amount=10, LON_min=48, LON_max=49, LAT_min=9, LAT_max=10)

create_convexhull_polygon(my_map_global, list_of_points, la...
Custom Initialization The convenience function tf.initialize_all_variables() adds an op to initialize all variables in the model. You can also pass it an explicit list of variables to initialize. See the Variables Documentation for more options, including checking if variables are initialized. Initialization from anoth...
weights = tf.Variable(tf.random_normal(shape=(3, 3), mean=0, stddev=1.0), name="weights")
biases = tf.Variable(tf.random_uniform(shape=(3, 1), minval=-1, maxval=1), name="biases")
w2 = tf.Variable(weights.initialized_value(), name="w2")
b2 = tf.Variable(biases.initialized_value() * 2, name="b2")
# init_op1 = ...
Tensorflow/.ipynb_checkpoints/TensorLearn2-checkpoint.ipynb
euler16/Deep-Learning
unlicense
Placeholders - don't use eval() - data needs to be fed to them - used for taking input and output, i.e. they don't change during the course of learning. Feeding TensorFlow's feed mechanism lets you inject data into any Tensor in a computation graph. A Python computation can thus feed data directly into the graph. Supply feed da...
import numpy as np

x = tf.placeholder(tf.float32, shape=(3, 3), name="x")
y = tf.matmul(x, x)

with tf.Session() as sess:
    rnd = np.random.rand(3, 3)
    result = sess.run(y, feed_dict={x: rnd})
    print(result)

# giving partial shapes
x = tf.placeholder("float", [None, 3])  # while the num_rows can be any number,...
The raw data, expressed as percentages. We will divide by 100 to obtain proportions.
raw = StringIO("""0.05,0.00,1.25,2.50,5.50,1.00,5.00,5.00,17.50
0.00,0.05,1.25,0.50,1.00,5.00,0.10,10.00,25.00
0.00,0.05,2.50,0.01,6.00,5.00,5.00,5.00,42.50
0.10,0.30,16.60,3.00,1.10,5.00,5.00,5.00,50.00
0.25,0.75,2.50,2.50,2.50,5.00,50.00,25.00,37.50
0.05,0.30,2.50,0.01,8.00,5.00,10.00,75.00,95.00
0.50,3.00,0.00,25.00...
examples/notebooks/quasibinomial.ipynb
jseabold/statsmodels
bsd-3-clause
The regression model is a two-way additive model with site and variety effects. The data are a full unreplicated design with 10 rows (sites) and 9 columns (varieties).
df = pd.read_csv(raw, header=None)
df = df.melt()
df["site"] = 1 + np.floor(df.index / 10).astype(np.int)
df["variety"] = 1 + (df.index % 10)
df = df.rename(columns={"value": "blotch"})
df = df.drop("variable", axis=1)
df["blotch"] /= 100
Fit the quasi-binomial regression with the standard variance function.
model1 = sm.GLM.from_formula("blotch ~ 0 + C(variety) + C(site)",
                             family=sm.families.Binomial(), data=df)
result1 = model1.fit(scale="X2")
print(result1.summary())
The plot below shows that the default variance function is not capturing the variance structure very well. Also note that the scale parameter estimate is quite small.
plt.clf()
plt.grid(True)
plt.plot(result1.predict(linear=True), result1.resid_pearson, 'o')
plt.xlabel("Linear predictor")
plt.ylabel("Residual")
An alternative variance function is mu^2 * (1 - mu)^2.
class vf(sm.families.varfuncs.VarianceFunction):
    def __call__(self, mu):
        return mu**2 * (1 - mu)**2

    def deriv(self, mu):
        return 2*mu - 6*mu**2 + 4*mu**3
Fit the quasi-binomial regression with the alternative variance function.
bin = sm.families.Binomial()
bin.variance = vf()
model2 = sm.GLM.from_formula("blotch ~ 0 + C(variety) + C(site)",
                             family=bin, data=df)
result2 = model2.fit(scale="X2")
print(result2.summary())
With the alternative variance function, the mean/variance relationship seems to capture the data well, and the estimated scale parameter is close to 1.
plt.clf()
plt.grid(True)
plt.plot(result2.predict(linear=True), result2.resid_pearson, 'o')
plt.xlabel("Linear predictor")
plt.ylabel("Residual")
Page Layout / Dashboarding nbinteract gives basic page layout functionality using special comments in your code. Include one or more of these markers in a Python comment and nbinteract will add their corresponding CSS classes to the generated cells. | Marker | Description | CSS class added | | --------- | --------- | -...
df_interact(videos)

# nbi:left
options = {
    'title': 'Views for Trending Videos',
    'xlabel': 'Date Trending',
    'ylabel': 'Views',
    'animation_duration': 500,
    'aspect_ratio': 1.0,
}

def xs(channel):
    return videos.loc[videos['channel_title'] == channel].index

def ys(xs):
    return videos.loc[xs, '...
docs/notebooks/recipes/recipes_layout.ipynb
SamLau95/nbinteract
bsd-3-clause
Dashboard (without showing code)
# nbi:hide_in
df_interact(videos)

# nbi:hide_in
# nbi:left
options = {
    'title': 'Views for Trending Videos',
    'xlabel': 'Date Trending',
    'ylabel': 'Views',
    'animation_duration': 500,
    'aspect_ratio': 1.0,
}

def xs(channel):
    return videos.loc[videos['channel_title'] == channel].index

def ys(xs):...
To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple:
# Python 2 only:
print 'Hello', 'Guido'

# Python 2 and 3:
from __future__ import print_function    # (at top of module)
print('Hello', 'Guido')

# Python 2 only:
print >> sys.stderr, 'Hello'

# Python 2 and 3:
from __future__ import print_function
print('Hello', file=sys.stderr)

# Python 2 only:
print 'Hello',

# P...
docs/notebooks/Writing Python 2-3 compatible code.ipynb
QuLogic/python-future
mit
Raising exceptions
# Python 2 only:
raise ValueError, "dodgy value"

# Python 2 and 3:
raise ValueError("dodgy value")
Raising exceptions with a traceback:
# Python 2 only:
traceback = sys.exc_info()[2]
raise ValueError, "dodgy value", traceback

# Python 3 only:
raise ValueError("dodgy value").with_traceback(traceback)

# Python 2 and 3: option 1
from six import reraise as raise_
# or
from future.utils import raise_
traceback = sys.exc_info()[2]
raise_(ValueError, "dodgy value",...
Exception chaining (PEP 3134):
# Setup:
class DatabaseError(Exception):
    pass

# Python 3 only
class FileDatabase:
    def __init__(self, filename):
        try:
            self.file = open(filename)
        except IOError as exc:
            raise DatabaseError('failed to open') from exc

# Python 2 and 3:
from future.utils import raise_from
c...
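The effect of `raise ... from ...` can be observed directly: the original exception is stored on `__cause__` of the new one. A Python 3 only sketch (the file path and helper name are illustrative, assuming the path does not exist):

```python
# Python 3 only: `raise ... from exc` chains the original exception,
# so it remains inspectable on the __cause__ attribute.
class DatabaseError(Exception):
    pass

def open_db(filename):
    try:
        return open(filename)
    except IOError as exc:     # IOError is an alias of OSError on Py3
        raise DatabaseError('failed to open') from exc

try:
    open_db('/no/such/path/db.sqlite')
except DatabaseError as err:
    cause = err.__cause__

# The chained cause is the original OSError from open().
assert isinstance(cause, OSError)
```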
Catching exceptions
# Python 2 only:
try:
    ...
except ValueError, e:
    ...

# Python 2 and 3:
try:
    ...
except ValueError as e:
    ...
Division

Integer division (rounding down):
# Python 2 only:
assert 2 / 3 == 0

# Python 2 and 3:
assert 2 // 3 == 0
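Note that "rounding down" here means flooring toward negative infinity, not truncation toward zero, which matters for negative operands. A small demonstration (not from the original cheat sheet):

```python
# `//` behaves identically on Py2 and Py3: it floors the quotient.
assert 7 // 2 == 3
assert -7 // 2 == -4            # floor, not truncation toward zero
assert 7.0 // 2 == 3.0          # also defined for floats
assert divmod(-7, 2) == (-4, 1) # quotient and remainder stay consistent
```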
"True division" (float division):
# Python 3 only:
assert 3 / 2 == 1.5

# Python 2 and 3:
from __future__ import division    # (at top of module)
assert 3 / 2 == 1.5
"Old division" (i.e. compatible with Py2 behaviour):
# Python 2 only:
a = b / c    # with any types

# Python 2 and 3:
from past.utils import old_div
a = old_div(b, c)    # always same as / on Py2
Long integers

Short integers are gone in Python 3 and long has become int (without the trailing L in the repr).
# Python 2 only
k = 9223372036854775808L

# Python 2 and 3:
k = 9223372036854775808

# Python 2 only
bigint = 1L

# Python 2 and 3
from builtins import int
bigint = int(1)
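The trailing L is unnecessary because Python 3's int is arbitrary-precision: large literals simply work. A quick illustration (the values chosen are my own):

```python
# Python 3's int has no fixed width, so the old Py2 long values need
# no special type and no trailing L.
k = 9223372036854775808          # 2**63, too big for a 64-bit signed int
assert k == 2**63
assert isinstance(k, int)

big = 10**100                    # a googol, still an ordinary int
assert len(str(big)) == 101      # the digit 1 followed by 100 zeros
```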
To test whether a value is an integer (of any kind):
# Python 2 only:
if isinstance(x, (int, long)):
    ...

# Python 3 only:
if isinstance(x, int):
    ...

# Python 2 and 3: option 1
from builtins import int    # subclass of long on Py2
if isinstance(x, int):      # matches both int and long on Py2
    ...

# Python 2 and 3: option 2
from past.builtins import ...
Octal constants
0644     # Python 2 only
0o644    # Python 2 and 3
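The 0o prefix denotes exactly the same value as the old bare-zero form; it is equivalent to parsing the digits in base 8. A short demonstration (example values are my own):

```python
# 0o644 is the portable octal literal; it equals int('644', 8).
mode = 0o644
assert mode == 420               # 6*64 + 4*8 + 4 in decimal
assert mode == int('644', 8)
assert oct(mode) in ('0644', '0o644')   # repr differs between Py2 and Py3
```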
Backtick repr
`x`      # Python 2 only
repr(x)  # Python 2 and 3
Metaclasses
class BaseForm(object):
    pass

class FormType(type):
    pass

# Python 2 only:
class Form(BaseForm):
    __metaclass__ = FormType
    pass

# Python 3 only:
class Form(BaseForm, metaclass=FormType):
    pass

# Python 2 and 3:
from six import with_metaclass
# or
from future.utils import with_metaclass

class Form(w...
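To see that the `metaclass=` keyword actually routes class creation through the metaclass, a self-contained Python 3 sketch (the recording list is my own addition, not part of the cheat sheet):

```python
# Python 3 only: the metaclass's __new__ runs when the class body is
# executed, which we can observe by recording each class name it sees.
created = []

class FormType(type):
    def __new__(mcls, name, bases, namespace):
        created.append(name)
        return super().__new__(mcls, name, bases, namespace)

class BaseForm:          # ordinary class, metaclass is plain `type`
    pass

class Form(BaseForm, metaclass=FormType):
    pass

assert created == ['Form']       # only Form went through FormType
assert type(Form) is FormType    # the class object itself is a FormType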
Strings and bytes

Unicode (text) string literals

If you are upgrading an existing Python 2 codebase, it may be preferable to mark up all string literals as unicode explicitly with u prefixes:
# Python 2 only
s1 = 'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'

# Python 2 and 3
s1 = u'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
The futurize and python-modernize tools do not currently offer an option to do this automatically. If you are writing code for a new project or new codebase, you can use this idiom to make all string literals in a module unicode strings:
# Python 2 and 3
from __future__ import unicode_literals    # at top of module

s1 = 'The Zen of Python'
s2 = 'きたないのよりきれいな方がいい\n'
See http://python-future.org/unicode_literals.html for more discussion on which style to use.

Byte-string literals
# Python 2 only
s = 'This must be a byte-string'

# Python 2 and 3
s = b'This must be a byte-string'
To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1:
# Python 2 only:
for bytechar in 'byte-string with high-bit chars like \xf9':
    ...

# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
    bytechar = bytes([myint])

# Python 2 and 3:
from builtins import bytes
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
    bytechar = ...
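The key point is that Python 3 iteration over bytes yields ints, not 1-byte strings. A self-contained Python 3 check (variable names are mine):

```python
# Iterating a Python 3 bytes object yields ints in range(256); wrap
# each one in bytes([...]) to recover a length-1 byte-string.
data = b'byte-string with high-bit chars like \xf9'

first = next(iter(data))
assert isinstance(first, int) and first == ord('b')

chars = [bytes([i]) for i in data]
assert chars[0] == b'b'
assert chars[-1] == b'\xf9'          # the high-bit char survives intact
assert b''.join(chars) == data       # lossless round trip
```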
As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string:
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
    char = chr(myint)    # returns a unicode string
    bytechar = char.encode('latin-1')

# Python 2 and 3:
from builtins import bytes, chr
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
    char = chr(myint)    # returns a ...
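This works because latin-1 maps code points 0-255 one-to-one onto byte values 0-255, so `chr` followed by `.encode('latin-1')` is a lossless round trip for any byte. A minimal check (not from the cheat sheet):

```python
# chr() gives a 1-character unicode string; encoding it as latin-1
# maps code points 0-255 straight back onto the same byte values.
assert chr(0xf9) == '\xf9'
assert chr(0xf9).encode('latin-1') == b'\xf9'

# Round trip over every possible byte value:
for i in range(256):
    assert chr(i).encode('latin-1') == bytes([i])
```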
basestring
# Python 2 only:
a = u'abc'
b = 'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))

# Python 2 and 3: alternative 1
from past.builtins import basestring    # pip install future
a = u'abc'
b = b'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))

# Python 2 and 3: alternative 2: r...
unicode
# Python 2 only:
templates = [u"blog/blog_post_detail_%s.html" % unicode(slug)]

# Python 2 and 3: alternative 1
from builtins import str
templates = [u"blog/blog_post_detail_%s.html" % str(slug)]

# Python 2 and 3: alternative 2
from builtins import str as text
templates = [u"blog/blog_post_detail_%s.html" % text(slug...
StringIO
# Python 2 only:
from StringIO import StringIO
# or:
from cStringIO import StringIO

# Python 2 and 3:
from io import BytesIO     # for handling byte strings
from io import StringIO    # for handling unicode strings
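The io classes are strict about text versus bytes, which is what makes them portable. A short sketch of the two buffers in use (example strings are my own):

```python
# io.StringIO holds unicode text, io.BytesIO holds bytes; mixing the
# two raises a TypeError rather than silently coercing.
from io import BytesIO, StringIO

s = StringIO()
s.write(u'text data\n')
assert s.getvalue() == u'text data\n'

b = BytesIO()
b.write(b'binary data\n')
assert b.getvalue() == b'binary data\n'

try:
    BytesIO().write(u'oops')     # text into a bytes buffer
except TypeError:
    mixed_ok = False
else:
    mixed_ok = True
assert mixed_ok is False
```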
Imports relative to a package

Suppose the package is:

    mypackage/
        __init__.py
        submodule1.py
        submodule2.py

and the code below is in submodule1.py:
# Python 2 only:
import submodule2

# Python 2 and 3:
from . import submodule2

# Python 2 and 3:
# To make Py2 code safer (more like Py3) by preventing
# implicit relative imports, you can also add this to the top:
from __future__ import absolute_import
Dictionaries
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
Iterating through dict keys/values/items

Iterable dict keys:
# Python 2 only:
for key in heights.iterkeys():
    ...

# Python 2 and 3:
for key in heights:
    ...
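Iterating a dict directly yields its keys on both versions, so `iterkeys()` is simply unnecessary. A quick check using the heights dict from above (the `sorted` call is mine, for a deterministic order):

```python
# Direct iteration over a dict yields keys on Py2 and Py3 alike.
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}

keys = sorted(heights)               # same result as sorted(heights.keys())
assert keys == ['Anne', 'Fred', 'Joe']
assert all(key in heights for key in heights)
```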