We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$ The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned} \sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\ &= \sqrt{0.152} = 0.39\ \text{m} \end{aligned}$$
print('std of Y is {:.2f} m'.format(np.std(Y)))
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger. Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. $$ \begin{aligned} \sigma_z &=\sqrt{\frac{(...
print(np.std(Z))
Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general, the height variance of a class that contains only men or only women will be smaller than that of a class with both sexes. This is true for other factors as well. Well-nourished children are taller than malnourished childr...
X = [3, -3, 3, -3]
mean = np.average(X)
for i in range(len(X)):
    plt.plot([i, i], [mean, X[i]], color='k')
plt.axhline(mean)
plt.xlim(-1, len(X))
plt.tick_params(axis='x', labelbottom=False)
If we didn't take the square of the differences the signs would cancel everything out: $$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$ This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly ...
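To make the comparison concrete, here is a quick check of both dispersion measures for $X = [3, -3, 3, -3]$ (our own snippet, assuming NumPy is available):

```python
import numpy as np

X = np.array([3, -3, 3, -3])
mad = np.mean(np.abs(X - X.mean()))  # mean absolute deviation: 12/4 = 3
var = np.var(X)                      # mean squared deviation (variance): 36/4 = 9
print(mad, var)  # 3.0 9.0
```

The squared form weights large deviations more heavily, which is exactly what makes the variance sensitive to outliers.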
X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print('Variance of X with outlier    = {:6.2f}'.format(np.var(X)))
print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1])))
Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide a...
from filterpy.stats import plot_gaussian_pdf

plot_gaussian_pdf(mean=1.8, variance=0.1414**2,
                  xlabel='Student Height', ylabel='pdf');
This curve is a probability density function, or pdf for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many st...
import kf_book.book_plots as book_plots

belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]
book_plots.bar_plot(belief)
They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter!

Nomenclature

A bit of nomenclature before we continue - this chart depicts the probability density of a random variable having any value between $(-\infty, \infty)$. What ...
plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)');
The y-axis depicts the probability density: the relative number of cars that are going the speed shown at the corresponding point on the x-axis. I will explain this further in the next section. The Gaussian model is imperfect. Though these charts do not show it, the tails of the distribution extend out to infinity. Tails are the far end...
x = np.arange(-3, 3, .01)
plt.plot(x, np.exp(-x**2));
Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now.
from filterpy.stats import gaussian
# gaussian??
Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$.
plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$');
What does this curve mean? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, the central limit theorem states that if we make many measurements, the measurements will be normally distr...
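As a rough illustration of that idea (our sketch; the 2 °C noise level is an assumption), simulating many noisy readings of a true 22 °C temperature yields a sample mean and spread close to the underlying parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical thermometer: true temperature 22 °C, sensor noise std 2 °C
readings = 22 + rng.normal(0, 2, size=10_000)
print(round(readings.mean(), 2))  # close to 22
print(round(readings.std(), 2))   # close to 2
```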
from filterpy.stats import norm_cdf

print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
      norm_cdf((21.5, 22.5), 22, 4)*100))
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
      norm_cdf((23.5, 24.5), 22, 4)*100))
The mean ($\mu$) is what it sounds like: the average of all possible values, weighted by their probability. Because of the symmetric shape of the curve it is also located at the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mat...
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of how much the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also ...
from filterpy.stats import gaussian

print(gaussian(x=3.0, mean=2.0, var=1))
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1))
By default gaussian normalizes the output, which turns the output back into a probability distribution. Use the argument normed to control this.
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))
If the Gaussian is not normalized it is called a Gaussian function instead of Gaussian distribution.
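For reference, the density being evaluated here is $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-\mu)^2 / 2\sigma^2}$. A minimal pure-Python version (our own helper, not filterpy's API) reproduces the peak value of a unit-variance Gaussian:

```python
import math

def gaussian_pdf(x, mean, var):
    # normal density: exp(-(x-mean)^2 / (2*var)) / sqrt(2*pi*var)
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

peak = gaussian_pdf(2.0, mean=2.0, var=1.0)
print(round(peak, 4))  # 0.3989, i.e. 1/sqrt(2*pi)
```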
xs = np.arange(15, 30, 0.05)
plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$')
plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':')
plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--')
plt.legend();
What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and...
from kf_book.gaussian_internal import display_stddev_plot

display_stddev_plot()
Interactive Gaussians

For those who are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell cur...
import math
from ipywidgets import interact, FloatSlider

def plt_g(mu, variance):
    plt.figure()
    xs = np.arange(2, 8, 0.01)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim(0, 0.04)

interact(plt_g, mu=FloatSlider(value=5, min=3, max=7),
         variance=FloatSlider(value=.03, min=.01, max...
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. <img src='animations/04_gaussian_animate.gif'>

Computational Properties of Gaussians

The discrete Bayes filter works by multiplying and...
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)

g = g1 * g2  # element-wise multiplication
g = g / sum(g)  # normalize
plt.plot(x, g, ls='-.');
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$, and plotted them. Then I multiplied them together and normalized the result. As you can see the result looks like a Gaussian distribution. Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up ...
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians because they are computationally nice. The product of two independent Gaussians is given by: $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2} \\ \sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}\end{aligned}$$
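The closed forms for the product of two independent Gaussians are easy to check numerically. The helper below (our name, not a library function) applies them to the g1, g2 pair plotted above:

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Mean and variance of the (renormalized) product of two Gaussian pdfs."""
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

# N(0.8, 0.1) times N(1.3, 0.2), as in the plot above
mean, var = gaussian_product(0.8, 0.1, 1.3, 0.2)
print(round(mean, 4), round(var, 4))  # 0.9667 0.0667
```

Note that the product's variance is smaller than either input variance; multiplying in new evidence always narrows the Gaussian.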
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean an...
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x, dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var...
This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it. Next, recall that our filter imp...
from scipy.stats import norm
import filterpy.stats

print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
The call norm(2, 3) creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
The documentation for scipy.stats.norm [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the rvs() function.
np.set_printoptions(precision=3, linewidth=50)
print(n23.rvs(size=15))
We can get the cumulative distribution function (CDF), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.
# probability that a random value is less than the mean, 2
print(n23.cdf(2))
We can get various properties of the distribution:
print('variance is', n23.var())
print('standard deviation is', n23.std())
print('mean is', n23.mean())
Limitations of Using Gaussians to Model the World

Earlier I mentioned the central limit theorem, which states that under certain conditions the sum of independent random variables will be normally distributed, regardless of how the individual random variables are distributed. This is important to us because nature i...
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='var=30')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minu...
from numpy.random import randn

def sense():
    return 10 + randn()*2
Let's plot that signal and see what it looks like.
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will ...
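The 68% and 99.7% figures come from the normal CDF; they can be verified with a few lines of standard-library Python (a stand-in we wrote using `math.erf`, not filterpy's `norm_cdf`):

```python
import math

def norm_cdf(x, mu, sigma):
    # CDF of N(mu, sigma^2) via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

within_2 = norm_cdf(12, 10, 2) - norm_cdf(8, 10, 2)   # +/- 1 std of N(10, 2^2)
within_6 = norm_cdf(16, 10, 2) - norm_cdf(4, 10, 2)   # +/- 3 std of N(10, 2^2)
print(round(within_2, 3), round(within_6, 4))  # 0.683 0.9973
```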
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t
    distribution with `df` degrees of freedom with the
    specified mean and standard deviation."""
    x = random.gauss(0, std)
    y = 2.0*random.gammavariate(0.5*df, 2.0)
    return x / (math.sqrt(y / df)) + mu
We can see from the plot that while the output is similar to the normal distribution, there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a boo...
import scipy.stats

scipy.stats.describe(zs)
Let's examine two normal populations, one small, one large:
print(scipy.stats.describe(np.random.randn(10)))
print()
print(scipy.stats.describe(np.random.randn(300000)))
B. Function construction

B.1 Chinese Restaurant Process (CRP)
def CRP(topic, phi):
    '''
    CRP gives the probability of topic assignment for a specific vocabulary

    Return a 1 * j array, where j is the number of topics

    Parameter
    ---------
    topic: a list of lists, contains assigned words in each sublist (topic)
    phi: double, parameter for CRP

    Return ...
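As a sketch of the probabilities such a CRP step produces (assuming the standard Chinese Restaurant Process, where an existing topic holding $n_k$ words is chosen with probability $n_k/(m+\phi)$ and a new topic with probability $\phi/(m+\phi)$, $m$ being the total number of assigned words; `crp_probs` is our illustrative name, not the function above):

```python
def crp_probs(topic, phi):
    # topic: list of word lists per existing topic; phi: concentration parameter
    m = sum(len(t) for t in topic)
    new_topic = phi / (m + phi)                     # probability of opening a new topic
    existing = [len(t) / (m + phi) for t in topic]  # proportional to topic sizes
    return [new_topic] + existing

print(crp_probs([['a', 'b'], ['c']], phi=1.0))  # [0.25, 0.5, 0.25]
```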
final.ipynb
Yen-HuaChen/STA663-Final-Project
mit
B.2 Node Sampling
def node_sampling(corpus_s, phi):
    '''
    Node sampling samples the number of topics, L

    Return a j-layer list of lists, where j is the number of topics

    Parameter
    ---------
    corpus_s: a list of lists, contains words in each sublist (document)
    phi: double, parameter for CRP

    Return ...
B.3 Gibbs sampling -- $z_{m,n}$
def Z(corpus_s, topic, alpha, beta):
    '''
    Z samples from the LDA model

    Return two j-layer lists of lists, where j is the number of topics

    Parameter
    ---------
    corpus_s: a list of lists, contains words in each sublist (document)
    topic: an L-dimensional list of lists, sampled from node_sampling
    ...
B.4 Gibbs sampling -- ${\bf c}_{m}$, CRP prior
def CRP_prior(corpus_s, doc, phi):
    '''
    CRP_prior is implied by the nCRP

    Return an m*j array, where m is the number of documents and j is the number of topics

    Parameter
    ---------
    corpus_s: a list of lists, contains words in each sublist (document)
    doc: a j-dimensional list of lists, drawn from Z
    ...
B.5 Gibbs sampling -- ${\bf c}_{m}$, likelihood
def likelihood(corpus_s, topic, eta):
    '''
    likelihood gives the probability of the data given a particular choice of c

    Return an m*j array, where m is the number of documents and j is the number of topics

    Parameter
    ---------
    corpus_s: a list of lists, contains words in each sublist (document)
    ...
B.6 Gibbs sampling -- ${\bf c}_{m}$, posterior
def post(w_m, c_p):
    '''
    Parameter
    ---------
    w_m: likelihood, drawn from the likelihood function
    c_p: prior, drawn from the CRP_prior function

    Return
    ------
    c_m, an m*j list of lists
    '''
    c_m = (w_m * c_p) / (w_m * c_p).sum(axis=1)[:, np.newaxis]
    return np.array(c_m)
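post is a row-wise Bayes update: each document's posterior over topics is the elementwise product of likelihood and prior, renormalized to sum to 1. A quick sanity check (assuming `w_m` and `c_p` are m×j NumPy arrays; the sample values are ours):

```python
import numpy as np

def post(w_m, c_p):
    # posterior proportional to likelihood * prior, normalized per document (row)
    c_m = (w_m * c_p) / (w_m * c_p).sum(axis=1)[:, np.newaxis]
    return np.array(c_m)

w_m = np.array([[0.2, 0.8], [0.5, 0.5]])  # likelihood per document/topic
c_p = np.array([[0.5, 0.5], [0.9, 0.1]])  # CRP prior per document/topic
c_m = post(w_m, c_p)
print(c_m.sum(axis=1))  # each row sums to 1
```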
B.7 Gibbs sampling -- $w_{n}$
def wn(c_m, corpus_s):
    '''
    wn returns the assignment of words to topics, drawn from a multinomial distribution

    Return an n*1 array, where n is the total number of words

    Parameter
    ---------
    c_m: an m*j list of lists, drawn from the post function
    corpus_s: a list of lists, contains words in each s...
C. Gibbs sampling

C.1 Find most common value
from collections import Counter

most_common = lambda x: Counter(x).most_common(1)[0][0]
C.2 Gibbs sampling
def gibbs(corpus_s, topic, alpha, beta, phi, eta, ite):
    '''
    gibbs will return the distribution of words for topics

    Return a j-dimensional list of lists, where j is the number of topics

    Parameter
    ---------
    corpus_s: a list of lists, contains words in each sublist (document)
    topic: a j-di...
V. Topic Model with hLDA

Gibbs sampling in section IV distributes the input vocabulary from the documents in the corpus over the available topics, which are sampled from the $L$-dimensional topics. In section V, an $n$-level tree will be presented as a tree plot, in which the root node is more general and the leaves are more specific....
def hLDA(corpus_s, alpha, beta, phi, eta, ite, level):
    '''
    hLDA generates an n*1 list of lists, where n is the number of levels

    Parameter
    ---------
    corpus_s: a list of lists, contains words in each sublist (document)
    alpha: double, parameter for Z function
    beta: double, parameter for Z f...
B. hLDA plot
def HLDA_plot(hLDA_object, Len=8, save=False):
    from IPython.display import Image, display

    def viewPydot(pdot):
        plt = Image(pdot.create_png())
        display(plt)

    words = hLDA_object[0]
    struc = hLDA_object[1]
    graph = pydot.Dot(graph_type='graph')
    end_index = [np.insert(n...
VI. Empirical Example

A. Simulated data

For the simulated data example, each document, $d$, in the corpus is generated by a normal distribution with a different number of words, $w_{d,n}$, where $n\in\{10,...,200\}$ and ${\bf w}_{d}\sim N(0, 1)$. In this example, by generating 35 documents in the corpus, we are able to see the simulat...
def sim_corpus(n):
    n_rows = n
    corpus = [[] for _ in range(n_rows)]
    for i in range(n_rows):
        n_cols = np.random.randint(10, 200, 1, dtype='int')[0]
        for j in range(n_cols):
            num = np.random.normal(0, 1, n_cols)
            word = 'w%s' % int(round(num[j], 1)*10)
            corpus[...
B. Real data

For the real data example, the corpus of documents is generated from Blei's sample data. The documents are split by paragraph; that is, each paragraph represents one document. We take the first 11 documents to form the sample corpus used in the hLDA model. To form the corpus, we read the corpus as a large list of...
def read_corpus(corpus_path):
    punc = ['`', ',', "'", '.', '!', '?']
    corpus = []
    with open(corpus_path, 'r') as f:
        for line in f:
            for x in punc:
                line = line.replace(x, '')
            line = line.strip('\n')
            word = line.split(' ')
            corpus.append(word...
VII. Download and Install from Github

The hLDA code of the paper Hierarchical Topic Models and the Nested Chinese Restaurant Process is released on github in the package named hLDA (click to clone). One can easily download it (click to download) and install it by running python setup.py install. The package provides 4 func...
import hLDA

sim = hLDA.sim_corpus(5)
print(sim[0])

corpus = hLDA.read_corpus('sample.txt')
print(corpus[0])

tree = hLDA.hLDA(corpus, 0.1, 0.01, 1, 0.01, 10, 3)
hLDA.HLDA_plot(tree)
VIII. Optimization

To optimize the hLDA model, we chose Cython to speed the functions up, since the only matrix-calculation function, c_m, was already vectorized. However, after applying Cython, the code does not speed up appreciably. The possible reasons are as follows. First, if we simply speed up single...
%load_ext Cython

%%cython -a
cimport cython
cimport numpy as np
import numpy as np

@cython.cdivision
@cython.boundscheck(False)
@cython.wraparound(False)
def CRP_c(list topic, double phi):
    cdef double[:] cm = np.empty(len(topic)+1)
    cdef int m = sum([len(x) for x in topic])
    cm[0] = phi...
IX. Code Comparison

This section introduces the LDA model as a comparison with the hLDA model. The LDA model requires the user to specify the number of topics, and it returns the probability of the words in each topic; these are the main differences compared to the hLDA model. The hLDA model applies a nonparametric prior which allows...
import matplotlib.pyplot as plt
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim

def lda_topic(corpus_s, dic, n_topics, ite):
    lda = gensim.models.ldamodel.LdaModel(corpus=corpus_s, ...
.. _tut_raw_objects:

The :class:`Raw <mne.io.RawFIF>` data structure: continuous data
from __future__ import print_function

import mne
import os.path as op
from matplotlib import pyplot as plt
0.14/_downloads/plot_raw_objects.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Continuous data is stored in objects of type :class:`Raw <mne.io.RawFIF>`. The core data structure is simply a 2D numpy array (channels × samples, ._data) combined with an :class:`Info <mne.io.meas_info.Info>` object (.info) (see :ref:`tut_info_objects`). The most common way to load continuous data is from a .fif file...
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample',
                    'sample_audvis_raw.fif')
raw = mne.io.RawFIF(data_path, preload=True, verbose=False)

# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')...
Information about the channels contained in the :class:`Raw <mne.io.RawFIF>` object is contained in the :class:`Info <mne.io.meas_info.Info>` attribute. This is essentially a dictionary with a number of relevant fields (see :ref:`tut_info_objects`).

Indexing data

There are two ways to access the data stored withi...
print('Shape of data array:', raw._data.shape)
array_data = raw._data[0, :1000]
_ = plt.plot(array_data)
You can also pass an index directly to the :class:`Raw <mne.io.RawFIF>` object. This will return an array of times, as well as the data representing those timepoints. This may be used even if the data is not preloaded:
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
Selecting subsets of channels and samples

It is possible to use more intelligent indexing to extract data, using channel names, types or time ranges.
# Pull all MEG gradiometer channels:
# Make sure to use copy==True or it will overwrite the data
meg_only = raw.pick_types(meg=True, copy=True)
eeg_only = raw.pick_types(meg=False, eeg=True, copy=True)

# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.pick_types(meg='grad', co...
Notice the different scalings of these types
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
You can restrict the data to a specific time range
restricted = raw.crop(5, 7)  # in seconds
print('New time range from', restricted.times.min(), 's to',
      restricted.times.max(), 's')
And drop channels by name
restricted = restricted.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', raw.info['nchan'], 'to',
      restricted.info['nchan'])
Concatenating :class:`Raw <mne.io.RawFIF>` objects

:class:`Raw <mne.io.RawFIF>` objects can be concatenated in time by using the :func:`append <mne.io.RawFIF.append>` function. For this to work, they must have the same number of channels and their :class:`Info <mne.io.meas_info.Info>` structures should ...
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 100)

# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
Cliques, Triangles and Squares

Let's pose a problem: If A knows B and B knows C, is it probable that A knows C as well? In a graph involving just these three individuals, it may look as such:
import networkx as nx  # needed for the examples below

G = nx.Graph()
G.add_nodes_from(['a', 'b', 'c'])
G.add_edges_from([('a', 'b'), ('b', 'c')])
nx.draw(G, with_labels=True)
4. Cliques, Triangles and Squares (Instructor).ipynb
ehongdata/Network-Analysis-Made-Simple
mit
Let's think of another problem: If A knows B, B knows C, C knows D and D knows A, is it likely that A knows C and B knows D? What would this look like?
G.add_node('d')
G.add_edge('c', 'd')
G.add_edge('d', 'a')
nx.draw(G, with_labels=True)
The set of relationships involving A, B and C, if closed, forms a triangle in the graph. The set of relationships that also includes D forms a square. You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other v...
# Load the network.
G = nx.read_gpickle('Synthetic Social Network.pkl')
nx.draw(G, with_labels=True)
Cliques

In a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not. The core idea is that if a node is present in a triangle, then its neighbors' ...
# Example code that shouldn't be too hard to follow.
def in_triangle(G, node):
    neighbors1 = G.neighbors(node)
    neighbors2 = []
    for n in neighbors1:
        neighbors = G.neighbors(n)
        if node in neighbors2:
            neighbors2.remove(node)
        neighbors2.extend(G.neighbors(n))
    neighbor...
In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
nx.triangles(G, 3)
Exercise

Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with? Hint: The neighbor of my neighbor should also be my neighbor; then the three of us are in a triangle relationship. Hint: Python...
# Possible answer
def get_triangles(G, node):
    neighbors = set(G.neighbors(node))
    triangle_nodes = set()
    """
    Fill in the rest of the code below.
    """
    for n in list(neighbors):  # iterate over a copy so the set can be mutated
        neighbors2 = set(G.neighbors(n))
        neighbors.remove(n)
        neighbors2.remove(node)
        triangle_nodes.upda...
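A more compact variant using set intersection (our sketch for an undirected NetworkX graph, not necessarily the notebook's answer): the triangle partners of a node are exactly those neighbors that share another neighbor with it.

```python
import networkx as nx

def get_triangles(G, node):
    neighbors = set(G.neighbors(node))
    triangle_nodes = set()
    for n in neighbors:
        # common neighbors of `node` and `n` close a triangle with both
        triangle_nodes |= neighbors & set(G.neighbors(n))
    if triangle_nodes:
        triangle_nodes.add(node)
    return triangle_nodes

G = nx.Graph([('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'd')])
print(get_triangles(G, 'a'))  # the triangle {'a', 'b', 'c'} (set order may vary)
```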
Friend Recommendation: Open Triangles

Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles. Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured...
# Possible Answer, credit Justin Zabilansky (MIT) for help on this.
def get_open_triangles(G, node):
    """
    There are many ways to represent this. One may choose to represent
    only the nodes involved in an open triangle; this is not the approach
    taken here. Rather, we have code that explicitly enumr...
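One compact way to enumerate the open triangles centered on a node (our sketch, not necessarily the notebook's answer): take every pair of the node's neighbors and keep the pairs that are not directly connected.

```python
from itertools import combinations

import networkx as nx

def open_triangles(G, node):
    # pairs of neighbors of `node` with no edge between them
    return [(n1, node, n2)
            for n1, n2 in combinations(G.neighbors(node), 2)
            if not G.has_edge(n1, n2)]

G = nx.Graph([('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'd')])
print(open_triangles(G, 'c'))  # two open triangles through 'c', both involving 'd'
```

Each triple (n1, node, n2) is an open triangle in which `node` could introduce n1 and n2 to each other; such pairs are natural friend recommendations.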
Let's try several polynomial fits to the data:
for j, degree in enumerate(degrees):
    for i in range(sub):
        # create data - sample from sine wave
        x = np.random.random((n, 1))*2*np.pi
        y = np.sin(x) + np.random.normal(mean, std, (n, 1))
        poly = PolynomialFeatures(degree=degree)
        # TODO ...
handsOn_lecture10_bias-variance_tradeoff/bias_variance_handsOn.ipynb
eecs445-f16/umich-eecs445-f16
mit
Let's plot the data with the estimators!
plt.subplot(3, 1, 1)
plt.plot(degrees, bias)
plt.title('bias')

plt.subplot(3, 1, 2)
plt.plot(degrees, variance)
plt.title('variance')

plt.subplot(3, 1, 3)
plt.plot(degrees, mse)
plt.title('MSE')

plt.show()
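For completeness, here is a self-contained miniature of the bias-variance measurement the snippets above sketch, using `np.polyfit` in place of scikit-learn (the sizes, degrees and noise level are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 50
x_test = np.linspace(0, 2*np.pi, 100)
results = {}

for degree in [1, 3, 9]:
    preds = []
    for _ in range(reps):
        # create data - sample from a noisy sine wave
        x = rng.random(n) * 2 * np.pi
        y = np.sin(x) + rng.normal(0, 0.3, n)
        preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - np.sin(x_test))**2)  # squared bias of the mean fit
    variance = np.mean(preds.var(axis=0))                      # spread across refits
    results[degree] = (bias2, variance)
    print(f'degree {degree}: bias^2={bias2:.3f}, variance={variance:.3f}')
```

Low-degree fits underfit the sine (high bias), while high-degree fits chase the noise (high variance); the MSE plotted above combines both effects.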
handsOn_lecture10_bias-variance_tradeoff/bias_variance_handsOn.ipynb
eecs445-f16/umich-eecs445-f16
mit
Installing Textblob: splitting a text into sentences. 1. Textblob:
from textblob_de import TextBlobDE as TextBlob
from textblob_de import PatternParser

doc = TextBlob(text)
print("Number of sentences: ", len(doc.sentences))
print("Length of sentences in characters: ")
for s in doc.sentences:
    print(len(s), end=" - ")
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
Note: with doc.sentences we iterate over the sentences in the text. But a sentence is not a string; it is a special object:
type(s)
# The same already holds for our document object doc:
type(doc)
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
The nice thing about this is that - as above - we can iterate over this object: for s in doc.sentences. Strictly speaking, though, we are not iterating over the 'doc' object itself, but over the data of a particular view that we activate with the 'sentences' attribute. We can also activate other views, e.g. word...
doc.words[:20]
w = doc.words[0]
type(w)
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
Perhaps we should first explain why it is not entirely easy to split a text into sentences. At first one might think that a few very simple rules would do the job, but as a look at the next example shows, it is not that simple:
text_2 = """Johann Wolfgang Goethe wurde, glaube ich, am 28.8.1749 geboren. Es könnte auch am 20.8. sein. Ich muss zugeben: Genau weiß ich das nicht."""
text_3 = """Die heutige Agenda ist kurz. 1. Die Frage nach dem Anfang. 2. Ende. Viel Spaß!"""
doc = TextBlob(text_2)
list(doc.sentences)
doc = TextBlob(text_3)
list(doc.sentences)
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
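To make the difficulty concrete, here is a deliberately naive splitter (a sketch, not what TextBlob does) that cuts after every sentence-final punctuation mark followed by whitespace. It happens to survive "28.8.1749" because no whitespace follows those periods, but the date abbreviation "am 20.8." still fools it:

```python
import re

def naive_split(text):
    # Split after '.', '!' or '?' when followed by whitespace.
    return [s for s in re.split(r'(?<=[.!?])\s+', text) if s]

text_2 = ("Johann Wolfgang Goethe wurde, glaube ich, am 28.8.1749 geboren. "
          "Es könnte auch am 20.8. sein.")
for s in naive_split(text_2):
    print(repr(s))
# The second sentence is wrongly cut into 'Es könnte auch am 20.8.' and 'sein.'
```

Real tokenizers handle this with abbreviation lists and statistical models, which is exactly why we reach for libraries like TextBlob and Spacy below.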
2. Spacy
import spacy

nlp = spacy.load('de')
doc = nlp(text_2)
for s in doc.sents:
    print(s)
doc = nlp(text_3)
for s in doc.sents:
    print(s)
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
In the following we will continue working with Spacy only. In Spacy's favor: it is quite new, supports a whole range of languages, has a modern Python interface with a well-thought-out API, supports comparatively recent aspects of language technology, e.g. word embeddings, and German is among the well-sup...
print(spacy.__version__)
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
Tokenization with Spacy
import spacy

doc = nlp(text_2)
for token in doc:
    print(token.text, end="< | >")
doc = nlp(text_3)
a = [print(token.text, end="< | >") for token in doc]
doc = nlp(text_2)
print("{:<15}{:<15}{:<15}".format("TOKEN", "LEMMA", "POS-Tag"))
for token in doc:
    print("{:15}{:15}{:15}".format(token.text, token.lemma_, token.pos_))
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
Part-of-Speech Tagging and Named Entity Recognition
text_5 = """Früher hat man über Johann Wolfang von Goethe gesprochen, weil er den 'Faust' geschrieben hat, oder über Mozart, weil der die Zauberflöte komponiert hat. Heute dagegen redet man über Samsung, weil das neue Samsung Note4 erschienen ist, oder über den neuen BMW. Gut, über Steve Jobs hat man noch so geredet,...
Python_2_10.ipynb
fotis007/python_intermediate
gpl-3.0
Read in the viral sequences.
from Bio import SeqIO

sequences = SeqIO.to_dict(SeqIO.parse('20150902_nnet_ha.fasta', 'fasta'))
# sequences
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
The sequences are going to be of variable length. To avoid the problem of doing multiple sequence alignments, filter to just the most common length (i.e. 566 amino acids).
from collections import Counter

lengths = Counter()
for accession, seqrecord in sequences.items():
    lengths[len(seqrecord.seq)] += 1
lengths.most_common(1)[0][0]
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
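The filtering idea in isolation, on made-up toy sequences: count lengths with a `Counter`, take the modal length, and keep only the matching records.

```python
from collections import Counter

toy_sequences = {
    'acc1': 'MKTII',
    'acc2': 'MKTIV',
    'acc3': 'MKT',
    'acc4': 'MKTLL',
}
lengths = Counter(len(s) for s in toy_sequences.values())
modal_len = lengths.most_common(1)[0][0]   # 5 for this toy data
kept = {acc: s for acc, s in toy_sequences.items() if len(s) == modal_len}
print(sorted(kept))  # ['acc1', 'acc2', 'acc4']
```

This sidesteps a multiple sequence alignment at the cost of discarding the off-length minority of records.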
There are sequences that are ambiguously labeled. For example, "Environment" and "Avian" samples. We would like to give a more detailed prediction as to which hosts it likely came from. Therefore, take out the "Environment" and "Avian" samples.
# For convenience, we will only work with amino acid sequences of length 566.
final_sequences = dict()
for accession, seqrecord in sequences.items():
    host = seqrecord.id.split('|')[1]
    # Drop the ambiguously labeled samples, as described above.
    if host in ('Environment', 'Avian'):
        continue
    if len(seqrecord.seq) == lengths.most_common(1)[0][0]:
        final_sequences[accession] = seqrecord
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
Create a numpy array to store the alignment.
from Bio.Align import MultipleSeqAlignment
import numpy as np

alignment = MultipleSeqAlignment(final_sequences.values())
alignment_array = np.array([list(rec) for rec in alignment])
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
The first piece of meat in the code begins here. In the cell below, we convert the sequence matrix into a series of binary 1s and 0s, to encode the features as numbers. This is important - AFAIK, almost all machine learning algorithms require numerical inputs.
# Create an empty dataframe.
# df = pd.DataFrame()
#
# Create a dictionary of position + label binarizer objects.
# pos_lb = dict()
# for pos in range(lengths.most_common(1)[0][0]):
#     # Convert position 0 by binarization.
#     lb = LabelBinarizer()
#     # Fit to the alignment at that position.
#     lb.fit(alig...
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
With the cell above, we now have a sequence feature matrix, in which the 566 amino acids positions have been expanded to 6750 columns of binary sequence features. The next step is to grab out the host species labels, and encode them as 1s and 0s as well.
set([i for i in train_test_df['host'].values])

# Grab out the labels.
output_lb = LabelBinarizer()
Y = output_lb.fit_transform(train_test_df['host'])  # fit_transform both fits and encodes
Y = Y.astype(np.float32)  # Necessary for passing the data into nolearn.
Y.shape

X = train_test_df.iloc[:, :-1].values  # .ix is gone from pandas; use .iloc
X = X.astype(np.float32)
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
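The per-position binarization described earlier can be sketched with plain numpy on a toy 3x3 alignment (the real notebook uses one sklearn `LabelBinarizer` per column; this just shows the idea):

```python
import numpy as np

toy_alignment = np.array([list('MKT'),
                          list('MRT'),
                          list('MKA')])   # 3 sequences x 3 positions

columns = []
for pos in range(toy_alignment.shape[1]):
    residues = sorted(set(toy_alignment[:, pos]))  # alphabet seen at this position
    for r in residues:
        # One binary indicator column per residue per position.
        columns.append((toy_alignment[:, pos] == r).astype(np.float32))

X_toy = np.column_stack(columns)
print(X_toy.shape)  # (3, 5): the positions contribute 1 + 2 + 2 columns
```

Each row ends up with exactly one 1 per alignment position, which is why 566 positions expand to thousands of binary columns in the real data.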
Next up, we do the train/test split.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
For comparison, let's train a random forest classifier, and see what the concordance is between the predicted labels and the actual labels.
rf = RandomForestClassifier()
rf.fit(X_train, Y_train)
predictions = rf.predict(X_test)
predicted_labels = output_lb.inverse_transform(predictions)

# Compute the mutual information between the predicted labels and the actual labels.
mi(predicted_labels, output_lb.inverse_transform(Y_test))
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
By the majority-consensus rule, and using mutual information as the metric for scoring, things look not so bad! As mentioned above, the RandomForestClassifier is a pretty powerful method for finding non-linear patterns between features and class labels. Uncomment the cell below if you want to try scikit-learn's Ext...
# et = ExtraTreesClassifier()
# et.fit(X_train, Y_train)
# predictions = et.predict(X_test)
# predicted_labels = output_lb.inverse_transform(predictions)
# mi(predicted_labels, output_lb.inverse_transform(Y_test))
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
As a demonstration of how this model can be used, let's look at the ambiguously labeled sequences, i.e. those from "Environment" and "Avian", to see whether we can make a prediction as to which host each likely came from.
# unknown_hosts = unknowns.iloc[:, :-1].values
# preds = rf.predict(unknown_hosts)
# output_lb.inverse_transform(preds)
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
Alrighty - we're now ready to try out a neural network! For this try, we will use lasagne and nolearn, two packages which have made building neural networks pretty easy. In this segment, I'm not going to show experiments with multiple architectures, activations and the like. The goal is to illustrate how eas...
from lasagne import nonlinearities as nl

net1 = NeuralNet(layers=[
        ('input', layers.InputLayer),
        ('hidden1', layers.DenseLayer),
        # ('dropout', layers.DropoutLayer),
        # ('hidden2', layers.DenseLayer),
        # ('dropout2', layers.DropoutLayer),
        ('output', layers.DenseLayer),
    ],
    ...
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
Training a simple neural network on my MacBook Air takes quite a bit of time :). But the function call for fitting it is a simple nnet.fit(X, Y).
net1.fit(X_train, Y_train)
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
Let's grab out the predictions!
preds = net1.predict(X_test) preds.shape
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
We're going to see how good the classifier did by examining the class labels. The way to visualize this is to have, say, the class labels on the X-axis, and the probability of prediction on the Y-axis. We can do this sample by sample. Here's a simple example with no frills in the matplotlib interface.
import matplotlib.pyplot as plt
%matplotlib inline

plt.bar(np.arange(len(preds[0])), preds[0])
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
Alrighty, let's add some frills - the class labels, the probability of each class label, and the original class label.
### NOTE: Change the value of i to anything above!
i = 111

plt.figure(figsize=(20, 5))
plt.bar(np.arange(len(output_lb.classes_)), preds[i])
plt.xticks(np.arange(len(output_lb.classes_)) + 0.5, output_lb.classes_, rotation='vertical')
plt.title('Original Label: ' + output_lb.inverse_transform(Y_test)[i])
plt.show()
# pr...
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
Let's do a majority-consensus rule applied to the labels, and then compute the mutual information score again.
preds_labels = []
for i in range(preds.shape[0]):
    maxval = max(preds[i])
    pos = list(preds[i]).index(maxval)
    preds_labels.append(output_lb.classes_[pos])

mi(preds_labels, output_lb.inverse_transform(Y_test))
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
With a score of 0.73, that's not bad either! It certainly didn't outperform the RandomForestClassifier, but the default parameters on the RFC were probably pretty good to begin with. Notice how little tweaking on the neural network we had to do as well. For good measure, these were the class labels. Notice how successf...
output_lb.classes_
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
ericmjl/hiv-resistance-prediction
mit
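The majority-consensus loop above reduces to a single argmax over each row of the probability matrix; the class names below are made up for illustration:

```python
import numpy as np

classes = np.array(['Avian', 'Human', 'Swine'])
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
labels = classes[np.argmax(probs, axis=1)]  # most probable class per sample
print(labels)  # ['Human' 'Avian']
```

`np.argmax(..., axis=1)` gives the column index of each row's maximum, and fancy-indexing into the class array converts indices back to labels in one step.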
Demonstrate that $0.25 \neq 0.35$
if 0.25 != 0.35:
    print('0.25 != 0.35')
else:
    print('hmmm')
unit_4/hw_2017/problem_set_2.ipynb
whitead/numerical_stats
gpl-3.0
Using the // operator, show that 3 is not divisible by 2.
if 3 // 2 != 3 / 2:
    print('3 is not divisible by 2')
else:
    print('it is divisible by 2')
unit_4/hw_2017/problem_set_2.ipynb
whitead/numerical_stats
gpl-3.0
Using a set of if statements, print whether a variable is odd or even and negative or positive. Use the variable name x and demonstrate your code works using x = -3, but ensure it can handle any integer (e.g., 3, 0, -100). Make sure your print statements use the value of x, not the name of the variable. For example,...
x = -3

if x // 2 == x / 2:
    print('{} is even'.format(x))
else:
    print('{} is odd'.format(x))

if x < 0:
    print('{} is negative'.format(x))
elif x > 0:
    print('{} is positive'.format(x))
else:
    print('{} is 0'.format(x))
unit_4/hw_2017/problem_set_2.ipynb
whitead/numerical_stats
gpl-3.0
We can visualize this graph, and then explain what the different parts are. This code is a modification of the DeepDream notebook. More visualization and exploration can be done using TensorBoard, which is the tool this uses.
# TensorFlow Graph visualizer code
# https://stackoverflow.com/questions/41388673/visualizing-a-tensorflow-graph-in-jupyter-doesnt-work
import numpy as np
from IPython.display import clear_output, Image, display, HTML

def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""...
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
sheqi/TVpgGLM
mit
What we do in tensorflow is construct graphs like this, and then evaluate nodes. Each graph node is associated with some code. When we evaluate a node like y_hat, tensorflow figures out what nodes it depends on, evaluates all of those nodes, and then evaluates y_hat. In this graph, there are three types of nodes (Tenso...
from scipy.special import expit as logistic

true_beta = np.random.normal(size=(n_features, 1))
x = np.random.normal(size=(n_obs, n_features))
y = np.random.binomial(n=1, p=logistic(x @ true_beta))

y_ph = tf.placeholder(shape=(n_obs, 1), name="y_ph", dtype=tf.float32)
logistic_loss = -tf.reduce_sum(y_ph * tf.log(1e-1...
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
sheqi/TVpgGLM
mit
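The evaluate-dependencies-first behavior can be mimicked in a few lines of plain Python (a toy sketch of the idea, not TensorFlow's API): each node knows its inputs, placeholders are filled from a feed dict, and asking for a node's value pulls in everything it depends on.

```python
class Node:
    def __init__(self, fn=None, *deps):
        self.fn, self.deps = fn, deps

    def run(self, feed):
        if self in feed:              # placeholder: value supplied at run time
            return feed[self]
        vals = [d.run(feed) for d in self.deps]
        return self.fn(*vals)

x = Node()                            # placeholder, like tf.placeholder
w = Node(lambda: 3)                   # a "variable" holding a fixed value
y_hat = Node(lambda a, b: a * b, x, w)  # depends on x and w

print(y_hat.run({x: 4}))              # evaluates x and w first, then y_hat -> 12
```

Nothing computes until `run` is called, which mirrors how TensorFlow 1.x only executes the subgraph needed for the node you ask a session for.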
One more tutorial point is on how to print things. Since a tensor only has a value when the graph is executed, inspecting things is trickier than usual. The Print op returns the same op as its input, but prints as a side-effect. This means we need to inject the op into the graph. Unfortunately, the print happens on the...
logistic_loss_with_print = tf.Print(input_=logistic_loss, data=[x, logistic_loss])
_ = logistic_loss_with_print.eval(session=sess, feed_dict={x_ph: x, y_ph: y})
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
sheqi/TVpgGLM
mit
MLP on MNIST Now we build some neural network building blocks we will reuse for VAEs.
from tensorflow.examples.tutorials.mnist import input_data

global_dtype = tf.float32
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
input_size = mnist.train.images.shape[1]
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
sheqi/TVpgGLM
mit
Define our neural network building blocks.
def _dense_mlp_layer(x, input_size, out_size, nonlinearity=tf.nn.softmax, name_prefix=""):
    w_init = tf.truncated_normal(shape=[input_size, out_size], stddev=0.001)
    b_init = tf.ones(shape=[out_size]) * 0.1
    W = tf.Variable(w_init, name="%s_W" % name_prefix)
    b = tf.Variable(b_init, name="%s_b" % name_prefix)
    ...
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
sheqi/TVpgGLM
mit
Now we construct the graph. The graph and scope boilerplate makes our life easier as far as visualization and debugging are concerned. We can visualize/run only this graph and not the graph for logistic regression (above).
mlp_graph = tf.Graph()
with mlp_graph.as_default():
    with tf.name_scope("Feedforward_Net"):
        x = tf.placeholder(shape=[None, input_size], dtype=global_dtype, name='x')
        y = tf.placeholder(shape=[None, 10], dtype=global_dtype, name='y')
        y_hat, mlp_test_vars = _mlp(x, n_layers=2, units_per_layer...
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
sheqi/TVpgGLM
mit