The CompartmentalModel class provides a number of inference algorithms. The cheapest and most scalable algorithm is SVI, available via the .fit_svi() method. This method returns a list of losses to help us diagnose convergence; the fitted parameters are stored in the model object.
%%time losses = model.fit_svi(num_steps=101 if smoke_test else 2001, jit=True) plt.figure(figsize=(8, 3)) plt.plot(losses) plt.xlabel("SVI step") plt.ylabel("loss") plt.ylim(min(losses), max(losses[50:]));
tutorial/source/epi_intro.ipynb
uber/pyro
apache-2.0
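To turn the returned loss curve into a convergence heuristic, one common trick is to compare the mean loss over successive windows; a NumPy sketch with a synthetic loss curve (the `losses` name mirrors what .fit_svi() returns, the values here are illustrative):

```python
import numpy as np

def has_converged(losses, window=100, tol=1e-2):
    """Heuristic convergence check: compare the mean loss over the last
    two windows; converged if the relative change is below tol."""
    losses = np.asarray(losses, dtype=float)
    if len(losses) < 2 * window:
        return False
    recent = losses[-window:].mean()
    previous = losses[-2 * window:-window].mean()
    return abs(previous - recent) / (abs(previous) + 1e-12) < tol

# Illustrative synthetic loss curve: decays, then flattens out
steps = np.arange(2000)
fake_losses = 100 * np.exp(-steps / 200) + 5

print(has_converged(fake_losses))  # flat tail -> True
```

This is only a smoke test; for a serious diagnostic you would still inspect the plotted curve.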
After inference, samples of latent variables are stored in the .samples attribute. These are primarily for internal use, and do not contain the full set of latent variables.
for key, value in sorted(model.samples.items()): print("{}.shape = {}".format(key, tuple(value.shape)))
Prediction <a class="anchor" id="Prediction"></a> After inference we can both examine latent variables and forecast forward using the .predict() method. First let's simply predict latent variables.
%%time samples = model.predict() for key, value in sorted(samples.items()): print("{}.shape = {}".format(key, tuple(value.shape))) names = ["R0", "rho"] fig, axes = plt.subplots(2, 1, figsize=(5, 5)) axes[0].set_title("Posterior estimates of global parameters") for ax, name in zip(axes, names): truth = synth_...
Notice that while the inference recovers the basic reproductive number R0, it poorly estimates the response rate rho and underestimates its uncertainty. While perfect inference would provide better uncertainty estimates, the response rate is known to be difficult to recover from data. Ideally the model can either incor...
%time samples = model.predict(forecast=30) def plot_forecast(samples): duration = len(empty_data) forecast = samples["S"].size(-1) - duration num_samples = len(samples["R0"]) time = torch.arange(duration + forecast) S2I = samples["S2I"] median = S2I.median(dim=0).values p05 = S2I.kthvalue(...
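The median and kthvalue calls in plot_forecast have direct NumPy analogues; a sketch of computing a posterior forecast band from a sample × time array (random placeholder data standing in for `samples["S2I"]`):

```python
import numpy as np

rng = np.random.default_rng(0)
# (num_samples, duration + forecast), placeholder posterior samples
S2I = rng.poisson(lam=20, size=(1000, 60))

median = np.median(S2I, axis=0)        # torch: S2I.median(dim=0).values
p05 = np.percentile(S2I, 5, axis=0)    # torch: S2I.kthvalue(k, dim=0)
p95 = np.percentile(S2I, 95, axis=0)

print(median.shape)  # (60,) — one value per time step
```

Plotting `median` with the band between `p05` and `p95` reproduces the kind of figure the tutorial draws.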
It looks like the mean field guide underestimates uncertainty. To improve uncertainty estimates we can instead try MCMC inference. In this simple model MCMC is only a small factor slower than SVI; in more complex models MCMC can be multiple orders of magnitude slower than SVI.
%%time model = SimpleSIRModel(population, recovery_time, obs) mcmc = model.fit_mcmc(num_samples=4 if smoke_test else 400, jit_compile=True) samples = model.predict(forecast=30) plot_forecast(samples)
Advanced modeling <a class="anchor" id="Advanced-modeling"></a> So far we've seen how to create a simple univariate model, fit the model to data, and predict and forecast future data. Next let's consider more advanced modeling techniques: regional models that couple compartments among multiple aggregated regions; phyl...
class RegionalSIRModel(CompartmentalModel): def __init__(self, population, coupling, recovery_time, data): duration = len(data) num_regions, = population.shape assert coupling.shape == (num_regions, num_regions) assert (0 <= coupling).all() assert (coupling <= 1).all() ...
Keras Keras is a neural network library that supports multiple backends, most notably the well-established TensorFlow, but also CNTK, which is popular on Windows. Since scikit-multilearn supports Windows, Linux, and macOS, you can use a backend of your choice, as described in the backend selection tutorial. To install Keras, run...
from keras.models import Sequential from keras.layers import Dense def create_model_single_class(input_dim, output_dim): # create model model = Sequential() model.add(Dense(12, input_dim=input_dim, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(output_dim, activation='sigmoid')) # Com...
docs/source/multilabeldnn.ipynb
scikit-multilearn/scikit-multilearn
bsd-2-clause
Let's use it with a problem transformation method that converts multi-label classification problems into single-label, single-class problems, e.g. Binary Relevance, which trains one classifier per label. We will use 10 epochs and disable verbosity.
from skmultilearn.problem_transform import BinaryRelevance from skmultilearn.ext import Keras KERAS_PARAMS = dict(epochs=10, batch_size=100, verbose=0) clf = BinaryRelevance(classifier=Keras(create_model_single_class, False, KERAS_PARAMS), require_dense=[True,True]) clf.fit(X_train, y_train) result = clf.predict(X_te...
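For intuition, Binary Relevance just fits an independent copy of the base classifier to each label column; a minimal pure-NumPy sketch (both class names here are hypothetical stand-ins, not scikit-multilearn internals):

```python
import numpy as np

class MajorityClassifier:
    """Toy stand-in for any binary base classifier (e.g. the Keras model)."""
    def fit(self, X, y):
        self.label_ = int(round(np.mean(y)))  # predict the majority class
        return self
    def predict(self, X):
        return np.full(len(X), self.label_)

class TinyBinaryRelevance:
    """Train one independent binary classifier per label column."""
    def __init__(self, make_clf):
        self.make_clf = make_clf
    def fit(self, X, Y):
        self.clfs_ = [self.make_clf().fit(X, Y[:, j]) for j in range(Y.shape[1])]
        return self
    def predict(self, X):
        return np.column_stack([c.predict(X) for c in self.clfs_])

X = np.zeros((4, 2))                         # features are ignored by the toy model
Y = np.array([[1, 0], [1, 0], [1, 1], [1, 0]])
pred = TinyBinaryRelevance(MajorityClassifier).fit(X, Y).predict(X)
print(pred.shape)  # (4, 2)
```

The real BinaryRelevance class adds sparse-matrix handling and dense/sparse conversion (the require_dense flags), but the per-label loop is the core idea.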
Multi-class Keras classifier We now train a multi-class neural network using Keras with TensorFlow as the backend (feel free to use others), optimized via categorical cross-entropy. This is a case from the Keras multi-class tutorial. Note again that the model creation function must create a model that accepts an input dimen...
def create_model_multiclass(input_dim, output_dim): # create model model = Sequential() model.add(Dense(8, input_dim=input_dim, activation='relu')) model.add(Dense(output_dim, activation='softmax')) # Compile model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return mode...
We use the Label Powerset multi-label to multi-class transformation approach, but this can also be used with any of the advanced label space division methods available in scikit-multilearn. Note that we set the second parameter of our Keras wrapper to True, as the base problem is now multi-class.
from skmultilearn.problem_transform import LabelPowerset clf = LabelPowerset(classifier=Keras(create_model_multiclass, True, KERAS_PARAMS), require_dense=[True,True]) clf.fit(X_train,y_train) y_pred = clf.predict(X_test)
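The Label Powerset transformation itself is tiny: each distinct row of the binary label matrix becomes one multi-class label, and predictions are mapped back. A self-contained sketch (function names illustrative, not scikit-multilearn internals):

```python
import numpy as np

def lp_encode(Y):
    """Map each row of a binary label matrix to a class id."""
    combos, classes = {}, []
    for row in map(tuple, Y):
        if row not in combos:
            combos[row] = len(combos)   # first-seen order defines class ids
        classes.append(combos[row])
    inverse = {v: k for k, v in combos.items()}
    return np.array(classes), inverse

def lp_decode(classes, inverse):
    """Map multi-class predictions back to label rows."""
    return np.array([inverse[c] for c in classes])

Y = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])
classes, inverse = lp_encode(Y)
print(classes)  # [0 1 0]
```

The cost of this approach is that only label combinations seen during training can ever be predicted, which is why the advanced label space division methods exist.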
Pytorch PyTorch is another often-used library that is compatible with scikit-multilearn via the skorch wrapper library. To use it, you must first install the required libraries: bash pip install -U skorch torch To start, import:
import torch from torch import nn import torch.nn.functional as F from skorch import NeuralNetClassifier
Single-class pytorch classifier We train a two-layer neural network using PyTorch, based on a simple example from the PyTorch examples page. Note that the model's first layer has to agree in size with the input data, and the model's last layer has two dimensions, as there are two classes: 0 or 1.
input_dim = X_train.shape[1] class SingleClassClassifierModule(nn.Module): def __init__( self, num_units=10, nonlin=F.relu, dropout=0.5, ): super(SingleClassClassifierModule, self).__init__() self.num_units = num_units self.dense0 = nn.Li...
We now wrap the model with skorch and use scikit-multilearn for Binary Relevance classification.
net = NeuralNetClassifier( SingleClassClassifierModule, max_epochs=20, verbose=0 ) from skmultilearn.problem_transform import BinaryRelevance clf = BinaryRelevance(classifier=net, require_dense=[True,True]) clf.fit(X_train.astype(numpy.float32),y_train) y_pred = clf.predict(X_test.astype(numpy.float32))
Multi-class pytorch classifier Similarly we can train a multi-class DNN; this time the last layer must agree in size with the number of classes.
nodes = 8 input_dim = X_train.shape[1] hidden_dim = int(input_dim/nodes) output_dim = len(numpy.unique(y_train.rows)) class MultiClassClassifierModule(nn.Module): def __init__( self, input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, dropo...
Now let's skorch-wrap it:
net = NeuralNetClassifier( MultiClassClassifierModule, max_epochs=20, verbose=0 ) from skmultilearn.problem_transform import LabelPowerset clf = LabelPowerset(classifier=net, require_dense=[True,True]) clf.fit(X_train.astype(numpy.float32),y_train) y_pred = clf.predict(X_test.astype(numpy.float32))
=========================================================== Plot single trial activity, grouped by ROI and sorted by RT =========================================================== This will produce what is sometimes called an event related potential / field (ERP/ERF) image. The EEGLAB example file - containing an exper...
# Authors: Jona Sassenhagen <jona.sassenhagen@gmail.com> # # License: BSD (3-clause) import mne from mne.datasets import testing from mne import Epochs, io, pick_types from mne.event import define_target_events print(__doc__)
0.15/_downloads/plot_roi_erpimage_by_rt.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load EEGLAB example data (a small EEG dataset)
data_path = testing.data_path() fname = data_path + "/EEGLAB/test_raw.set" montage = data_path + "/EEGLAB/test_chans.locs" event_id = {"rt": 1, "square": 2} # must be specified for str events eog = {"FPz", "EOG1", "EOG2"} raw = io.eeglab.read_raw_eeglab(fname, eog=eog, montage=montage, ...
Create Epochs
# define target events: # 1. find response times: distance between "square" and "rt" events # 2. extract A. "square" events B. followed by a button press within 700 msec tmax = .7 sfreq = raw.info["sfreq"] reference_id, target_id = 2, 1 new_events, rts = define_target_events(events, reference_id, target_id, sfreq, ...
Plot
# Parameters for plotting order = rts.argsort() # sorting from fast to slow trials rois = dict() for pick, channel in enumerate(epochs.ch_names): last_char = channel[-1] # for 10/20, last letter codes the hemisphere roi = ("Midline" if last_char == "z" else ("Left" if int(last_char) % 2 else "Righ...
Model specification A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
# Trick: Turn inputs and outputs into shared variables. # It's still the same thing, but we can later change the values of the shared variable # (to switch in the test-data later) and pymc3 will just use the new data. # Kind-of like a pointer we can redirect. # For more info, see: http://deeplearning.net/software/th...
notebooks/bayesian_neural_network_advi.ipynb
dolittle007/dolittle007.github.io
gpl-3.0
That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs, but I omitted it here to keep the code cleaner. Variational Inference: Scaling model complexity We could now just run an MCMC sampler like NUTS, which works pretty well in this case, but as I already mentioned...
%%time with neural_network: # Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO) v_params = pm.variational.advi(n=50000)
< 20 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU. As samples are more convenient to work with, we can very quickly draw samples from the variational...
with neural_network: trace = pm.variational.sample_vp(v_params, draws=5000)
Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
plt.plot(v_params.elbo_vals) plt.ylabel('ELBO') plt.xlabel('iteration')
Now that we trained our model, let's predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
# Replace shared variables with testing set ann_input.set_value(X_test) ann_output.set_value(Y_test) # Create posterior predictive samples ppc = pm.sample_ppc(trace, model=neural_network, samples=500) # Use probability of > 0.5 to assume prediction of class 1 pred = ppc['out'].mean(axis=0) > 0.5 fig, ax = plt.subpl...
Hey, our neural network did all right! Let's look at what the classifier has learned. For this, we evaluate the class probability predictions on a grid over the whole input space.
grid = np.mgrid[-3:3:100j,-3:3:100j].astype(floatX) grid_2d = grid.reshape(2, -1).T dummy_out = np.ones(grid.shape[1], dtype=np.int8) ann_input.set_value(grid_2d) ann_output.set_value(dummy_out) # Create posterior predictive samples ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
Probability surface
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True) fig, ax = plt.subplots(figsize=(10, 6)) contour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap) ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1]) ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r') cbar = plt.colorbar(...
Uncertainty in predicted value So far, everything I've shown could have been done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a...
cmap = sns.cubehelix_palette(light=1, as_cmap=True) fig, ax = plt.subplots(figsize=(10, 6)) contour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap) ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1]) ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r') cbar = plt.colorbar(contour, ax=a...
We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples f...
from six.moves import zip # Set back to original data to retrain ann_input.set_value(X_train) ann_output.set_value(Y_train) # Tensors and RV that will be using mini-batches minibatch_tensors = [ann_input, ann_output] minibatch_RVs = [out] # Generator that returns mini-batches in each iteration def create_minibatch(d...
While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pull from a database there and not have to keep all the data in RAM. Let's pass those to advi_minibatch():
%%time with neural_network: # Run advi_minibatch v_params = pm.variational.advi_minibatch( n=50000, minibatch_tensors=minibatch_tensors, minibatch_RVs=minibatch_RVs, minibatches=minibatches, total_size=total_size, learning_rate=1e-2, epsilon=1.0 ) with neural_network: tra...
As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster. For fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights.
pm.traceplot(trace);
Visualizations Daily average of number of comments per post
posts_groupby.mean().num_comments.plot(kind='barh', figsize=[8,8])
games/games-analysis.ipynb
staeiou/reddit_downvote
mit
Daily average of number of upvotes per post
posts_groupby.mean().ups.plot(kind='barh', figsize=[8,8])
<h1>Textbook example: Ramsey-Cass-Koopmans model</h1> <h2> Household behavior </h2> Suppose that there exists a representative household with the following lifetime utility... $$U \equiv \int_{t=0}^{\infty} e^{-\rho t}u(c(t))N(t)dt$$ ...where the flow utility function $u(c(t))$ is assumed to be a concave function of p...
def hara(t, c, a, b, **params): """ Hyperbolic Absolute Risk Aversion (HARA). Notes ----- For Constant Absolute Risk Aversion (CARA), set a=0; for Constant Relative Risk Aversion (CRRA), set b=0. """ return 1 / (a * c + b) def cobb_douglas_output(k_tilde, alpha, l, **params): ...
examples/ramsey-cass-koopmans-model.ipynb
davidrpugh/pyCollocation
mit
To complete the model we need to define some parameter values.
# set b=0 for CRRA... params = {'a': 1.0, 'b': 0.0, 'g': 0.02, 'n': 0.02, 'alpha': 0.15, 'delta': 0.04, 'l': 1.0, 'K0': 1.0, 'A0': 1.0, 'N0': 1.0, 'rho': 0.02}
<h2>Solving the model with pyCollocation</h2> <h3>Defining a `pycollocation.TwoPointBVP` instance</h3>
pycollocation.problems.TwoPointBVP? standard_ramsey_bvp = pycollocation.problems.TwoPointBVP(bcs_lower=initial_condition, bcs_upper=terminal_condition, number_bcs_lower=1, ...
Finding a good initial guess for $\tilde{k}(t)$ Theory tells us that, starting from some initial condition $\tilde{k}_0$, the solution to the Solow model converges monotonically toward its long run equilibrium value $\tilde{k}^*$. Our initial guess for the solution should preserve this property...
def initial_mesh(t, T, num, problem): # compute equilibrium values cstar = equilibrium_consumption(**problem.params) kstar = equilibrium_capital(**problem.params) ystar = cobb_douglas_output(kstar, **problem.params) # create the mesh for capital ts = np.linspace(t, T, num) k0 = problem.para...
Solving the model
pycollocation.solvers.Solver?
<h3> Polynomial basis functions </h3>
polynomial_basis = pycollocation.basis_functions.PolynomialBasis() solver = pycollocation.solvers.Solver(polynomial_basis) boundary_points = (0, 200) ts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp) basis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 25} k_poly ...
<h3> B-spline basis functions </h3>
bspline_basis = pycollocation.basis_functions.BSplineBasis() solver = pycollocation.solvers.Solver(bspline_basis) boundary_points = (0, 200) ts, ks, cs = initial_mesh(*boundary_points, num=250, problem=standard_ramsey_bvp) tck, u = bspline_basis.fit([ks, cs], u=ts, k=5, s=0) knots, coefs, k = tck initial_coefs = np.h...
<h1> Generic Ramsey-Cass-Koopmans model</h1> Can we refactor the above code so that we can solve a Ramsey-Cass-Koopmans model for arbitrary intensive production functions and risk preferences? Yes!
from pycollocation.tests import models
Example usage...
def ces_output(k_tilde, alpha, l, sigma, **params): gamma = (sigma - 1) / sigma if gamma == 0: y = k_tilde**alpha * l**(1 - alpha) else: y = (alpha * k_tilde**gamma + (1 - alpha) * l**gamma)**(1 / gamma) return y def ces_mpk(k_tilde, alpha, l, sigma, **params): y = ces_output(k_til...
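As a quick sanity check on the CES specification, output should reduce to Cobb-Douglas as σ → 1 (i.e. γ → 0); a sketch with illustrative parameter values:

```python
def ces_output(k_tilde, alpha, l, sigma):
    """CES production function; falls back to Cobb-Douglas when gamma == 0."""
    gamma = (sigma - 1) / sigma
    if gamma == 0:
        return k_tilde ** alpha * l ** (1 - alpha)   # Cobb-Douglas limit
    return (alpha * k_tilde ** gamma + (1 - alpha) * l ** gamma) ** (1 / gamma)

k, alpha, l = 2.0, 0.15, 1.0                 # illustrative values
cobb_douglas = ces_output(k, alpha, l, sigma=1.0)
near_limit = ces_output(k, alpha, l, sigma=1.0 + 1e-6)
print(cobb_douglas, near_limit)              # the two should nearly agree
```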
<h3> Phase space plots </h3> <h4> 2D phase space </h4>
css = generic_ramsey_bvp.equilibrium_consumption(**generic_ramsey_bvp.params) kss = ces_equilibrium_capital(**generic_ramsey_bvp.params) plt.plot(k_soln / kss, c_soln / css) plt.xlabel(r'$\frac{\tilde{k}}{\tilde{k}^*}$', fontsize=20) plt.ylabel(r'$\frac{\tilde{c}}{\tilde{c}^*}$', fontsize=20, rotation='horizontal') pl...
<h4> 3D phase space </h4>
from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.gca(projection='3d') ax.plot(k_soln / kss, c_soln / css, ts, label='Ramsey model trajectory') plt.xlabel(r'$\frac{\tilde{k}}{\tilde{k}^*}$', fontsize=20) plt.ylabel(r'$\frac{\tilde{c}}{\tilde{c}^*}$', fontsize=20) ax.set_zlabel('$t$') ax.legend() plt...
With air resistance Next we'll add air resistance using the drag equation. I'll start by getting the units we'll need from Pint.
m = UNITS.meter s = UNITS.second kg = UNITS.kilogram
soln/penny_model_comparison.ipynb
AllenDowney/ModSimPy
mit
Now I'll create a Params object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
params = Params(height = 381 * m, v_init = 0 * m / s, g = 9.8 * m/s**2, mass = 2.5e-3 * kg, diameter = 19e-3 * m, rho = 1.2 * kg/m**3, v_term = 18 * m / s)
Now we can pass the Params object to make_system, which computes some additional parameters and defines init. make_system uses the given diameter to compute area and the given v_term to compute the drag coefficient C_d.
def make_system(params): """Makes a System object for the given conditions. params: Params object returns: System object """ unpack(params) area = np.pi * (diameter/2)**2 C_d = 2 * mass * g / (rho * area * v_term**2) init = State(y=height, v=v_init) t_end = 30 * s ...
Here's the slope function, including acceleration due to gravity and drag.
def slope_func(state, t, system): """Compute derivatives of the state. state: position, velocity t: time system: System object returns: derivatives of y and v """ y, v = state rho, C_d, area = system.rho, system.C_d, system.area mass = system.mass g = system.g ...
As always, let's test the slope function with the initial conditions.
slope_func(system.init, 0, system)
We can use the same event function as in the previous chapter.
def event_func(state, t, system): """Return the height of the penny above the sidewalk. """ y, v = state return y
And then run the simulation.
results, details = run_ode_solver(system, slope_func, events=event_func) details
Here are the results.
results
The final height is close to 0, as expected. Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors. We can get the flight time from results.
t_sidewalk = get_last_label(results)
Here's the plot of position as a function of time.
def plot_position(results): plot(results.y) decorate(xlabel='Time (s)', ylabel='Position (m)') plot_position(results)
And velocity as a function of time:
def plot_velocity(results): plot(results.v, color='C1', label='v') decorate(xlabel='Time (s)', ylabel='Velocity (m/s)') plot_velocity(results)
From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant. Back to Chapter 1 We have now considered three models of a falling penny: In Chapter 1, we started with the simplest model, which includes gravity and ignores drag. As an exercise in C...
g = 9.8 v_term = 18 t_end = 22.4 ts = linspace(0, t_end, 201) model1 = -g * ts; model2 = TimeSeries() for t in ts: v = -g * t if v < -v_term: model2[t] = -v_term else: model2[t] = v results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5) model3 = results.v; ...
Help with commands If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now. Creating new cells One of the most common commands is creating new cells. You can create a cell above...
## Practice here def fibo(n): # Recursive Fibonacci sequence! if n == 0: return 0 elif n == 1: return 1 return fibo(n-1) + fibo(n-2)
misc/keyboard-shortcuts.ipynb
soloman817/udacity-ml
mit
Line numbers A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell. Exercise: Turn line numbers on and off in the above code cell. Deleting cells Deleting cells is done by pressing D twice in a row so D, ...
# DELETE ME
Saving the notebook Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the book, press S. So easy! The Command Palette You can easily access the command palette by pressing Shift + Control/Command + P. Note: This won't work in Firefox and Internet Explo...
# Move this cell down # below this cell
Create a dataframe
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], 'year': [2012, 2012, 2013, 2014, 2014], 'reports': [4, 24, 31, 2, 3], 'coverage': [25, 94, 57, 62, 70]} df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma']) df
python/pandas_apply_operations_to_dataframes.ipynb
tpin3694/tpin3694.github.io
mit
Create a capitalization lambda function
capitalizer = lambda x: x.upper()
Apply the capitalizer function over the column 'name' apply() can apply a function along any axis of the dataframe
df['name'].apply(capitalizer)
Map the capitalizer lambda function over each element in the series 'name' map() applies an operation over each element of a series
df['name'].map(capitalizer)
Apply a square root function to every single cell in the whole data frame applymap() applies a function to every single element in the entire dataframe.
# Drop the string variable so that applymap() can run df = df.drop('name', axis=1) # Return the square root of every cell in the dataframe df.applymap(np.sqrt)
Applying A Function Over A Dataframe Create a function that multiplies all non-strings by 100
# create a function called times100 def times100(x): # that, if x is a string, if type(x) is str: # just returns it untouched return x # but, if not, return it multiplied by 100 elif x: return 100 * x # and leave everything else (e.g. zeros) unchanged else: return x
Apply the times100 over every cell in the dataframe
df.applymap(times100)
After importing the necessary modules, at program startup we invoke enable_eager_execution().
tfe.enable_eager_execution()
examples/notebooks/deepchem_tensorflow_eager.ipynb
miaecle/deepchem
mit
Enabling eager execution changes how TensorFlow functions behave. Tensor objects return concrete values instead of being symbolic references to nodes in a static computational graph (non-eager mode). As a result, eager execution should be enabled at the beginning of a program. Note that with eager execution enabled, th...
import numpy as np import deepchem as dc from deepchem.models.tensorgraph import layers
In the following snippet we describe how to create a Dense layer in eager mode. The good thing about calling a layer as a function is that we don't have to call create_tensor() directly. This is identical to the TensorFlow API and has no conflict. And since eager mode is enabled, it should return concrete tensors right awa...
# Initialize parameters in_dim = 2 out_dim = 3 batch_size = 10 inputs = np.random.rand(batch_size, in_dim).astype(np.float32) # Input layer = layers.Dense(out_dim) # Provide the number of output values as parameter. This creates a Dense layer result = layer(inputs) # get the output tensors print(result)
Creating a second Dense layer should produce different results.
layer2 = layers.Dense(out_dim) result2 = layer2(inputs) print(result2)
We can also execute the layer in eager mode to compute its output as a function of inputs. If the layer defines any variables, they are created the first time it is invoked. This happens in the same exact way that we would create a single layer in non-eager mode. The following is also a way to create a layer in eager m...
x = layers.Dense(out_dim)(inputs) print(x)
Conv1D layer Dense layers are one of the layer types defined in DeepChem. Along with them there are several others like Conv1D, Conv2D, Conv3D, etc. We also take a look at how to construct a Conv1D layer below. Basically, this layer creates a convolution kernel that is convolved with the layer input over a single spatial (or te...
from deepchem.models.tensorgraph.layers import Conv1D width = 5 in_channels = 2 filters = 3 kernel_size = 2 batch_size = 5 inputs = np.random.rand(batch_size, width, in_channels).astype( np.float32) layer = layers.Conv1D(filters, kernel_size) result = layer(inputs) print(result)
Again it should be noted that creating a second Conv1D layer would produce different results. So that's how we invoke different DeepChem layers in eager mode. Another interesting point is that we can mix TensorFlow layers and DeepChem layers. Since they all take tensors as inputs and return tensors as outputs,...
_input = tf.random_normal([2, 3]) print(_input) layer = layers.Dense(4) # A DeepChem Dense layer result = layer(_input) print(result)
This is exactly how a TensorFlow Dense layer works. It implements the same operation as DeepChem's Dense layer, i.e., outputs = activation(dot(inputs, kernel) + bias), where kernel is the weights matrix created by the layer and bias is a bias vector created by the layer.
result = tf.layers.dense(_input, units=4) # A tensorflow Dense layer print(result)
We pass a tensor input to the TensorFlow Dense layer and receive an output tensor that has the same shape as the input, except the last dimension is the size of the output space. Gradients Finding gradients under eager mode is very similar to the autograd API. The computational flow is very clean and logical. What happen...
def dense_squared(x):
    return layers.Dense(1)(layers.Dense(1)(x))

grad = tfe.gradients_function(dense_squared)
print(dense_squared(3.0))
print(grad(3.0))
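For two stacked one-unit dense layers, the chain rule gives d/dx [w2(w1 x + b1) + b2] = w1 w2, which a central finite difference confirms. This stdlib sketch is independent of the `tfe` API:

```python
# Gradient of two stacked 1-unit "dense" layers f(x) = w2*(w1*x + b1) + b2.
w1, b1, w2, b2 = 2.0, 1.0, 3.0, -1.0
f = lambda x: w2 * (w1 * x + b1) + b2

def numeric_grad(f, x, eps=1e-6):
    # Central finite difference approximates df/dx at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

analytic = w1 * w2            # chain rule: 6.0
numeric = numeric_grad(f, 3.0)
```

The numeric and analytic gradients agree to within the finite-difference error, which is what `tfe.gradients_function` computes exactly via autodiff.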
examples/notebooks/deepchem_tensorflow_eager.ipynb
miaecle/deepchem
mit
Run simulation
%%bash if [[ -z "${FAUNUS_EXECUTABLE}" ]]; then yason.py multipole.yml | faunus --nobar -s multipole.state.ubj else echo "Seems we're running CTest - use Faunus target from CMake" "${YASON_EXECUTABLE}" multipole.yml | "${FAUNUS_EXECUTABLE}" --nobar -s multipole.state.ubj fi
examples/multipole/multipole.ipynb
bjornstenqvist/faunus
mit
Plot multipolar energies as a function of separation
R, exact, total, ionion, iondip, dipdip, ionquad, mucorr = np.loadtxt('multipole.dat', unpack=True, skiprows=2) lw=4 plt.plot(R, ionion, label='ion-ion', lw=lw) plt.plot(R, iondip, label='ion-dipole', lw=lw) plt.plot(R, dipdip, label='dipole-dipole', lw=lw) plt.plot(R, ionquad, label='ion-quadrupole', lw=lw) plt.plot(R...
examples/multipole/multipole.ipynb
bjornstenqvist/faunus
mit
Unittests Compare distributions with previously saved results
class TestMultipole(unittest.TestCase): def test_Exact(self): self.assertAlmostEqual(exact.mean(), -0.12266326530612245, places=3) def test_IonIon(self): self.assertAlmostEqual(ionion.mean(), -0.11624285714285715, places=2) def test_IonDipole(self): self.assertAlmostEqual(iondip.me...
examples/multipole/multipole.ipynb
bjornstenqvist/faunus
mit
Estimating the parameters of a categorical distribution Next, we use Bayesian estimation to infer the parameter vector $\theta$ of a categorical distribution with $K$ classes. Since every element of the categorical distribution's parameter vector takes a value between 0 and 1, the prior is a Dirichlet distribution with hyperparameters $\alpha_k=\dfrac{1}{K}$. $$ P(\theta) \propto \prod_{k=1}^K \theta_k^{\alpha_k - 1} \;\;\; (\alpha_k = 1/K , \; \text{ for all } k) $$ Since the data are a product of independent categorical distributions, the likelihood is the following multinomial dis...
def plot_dirichlet(alpha): def project(x): n1 = np.array([1, 0, 0]) n2 = np.array([0, 1, 0]) n3 = np.array([0, 0, 1]) n12 = (n1 + n2)/2 m1 = np.array([1, -1, 0]) m2 = n3 - n12 m1 = m1/np.linalg.norm(m1) m2 = m2/np.linalg.norm(m2) return np.dst...
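The conjugate Dirichlet-multinomial update amounts to adding the observed category counts to the prior hyperparameters, $\alpha_k' = \alpha_k + N_k$. A minimal pure-Python sketch (illustrative, independent of the plotting code above):

```python
# Dirichlet-multinomial conjugate update: posterior alpha = prior alpha + counts.
K = 3
prior_alpha = [1.0 / K] * K           # alpha_k = 1/K, as in the text
data = [0, 1, 1, 2, 1, 0, 1]          # observed category labels

counts = [data.count(k) for k in range(K)]
post_alpha = [a + n for a, n in zip(prior_alpha, counts)]

# The posterior mean of theta_k is alpha_k' / sum(alpha').
total = sum(post_alpha)
post_mean = [a / total for a in post_alpha]
```

With more data the counts dominate the prior, so the posterior mean approaches the empirical category frequencies.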
12. 추정 및 검정/07. 베이지안 모수 추정.ipynb
zzsza/Datascience_School
mit
Estimating the mean parameter of a normal distribution This time we estimate the mean parameter of a normal distribution with Bayesian methods. The variance parameter $\sigma^2$ is assumed to be known. Since the mean can be any number from $-\infty$ to $\infty$, the prior for the parameter is a normal distribution. $$ P(\mu) = N(\mu_0, \sigma^2_0) = \dfrac{1}{\sqrt{2\pi\sigma_0^2}} \exp \left(-\dfrac{(\mu-\mu_0)^2}{2\sigma_0^2}\right)$$ Since the data are a product of independent normal distributions, the likelihood becomes $$ ...
mu, sigma2 = 2, 4
mu0, sigma20 = 0, 1
xx = np.linspace(1, 3, 1000)
np.random.seed(0)
N = 10
x = sp.stats.norm(mu, np.sqrt(sigma2)).rvs(N)  # scipy's norm takes the standard deviation
mu0 = sigma2/(N*sigma20 + sigma2)*mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()
sigma20 = 1/(1/sigma20 + N/sigma2)
plt.plot(xx, sp.stats.norm(mu0, np.sqrt(sigma20)).pdf(xx), label="1st")
print(mu0)
N...
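The conjugate update used above is $\mu' = \dfrac{\sigma^2}{N\sigma_0^2+\sigma^2}\mu_0 + \dfrac{N\sigma_0^2}{N\sigma_0^2+\sigma^2}\bar{x}$ with posterior variance ${\sigma_0'}^2 = \left(1/\sigma_0^2 + N/\sigma^2\right)^{-1}$. A small pure-Python check of these formulas (the function name is just for illustration):

```python
# Conjugate update for the mean of a normal distribution with known variance sigma2.
def update_normal_mean(mu0, sigma20, xbar, N, sigma2):
    w = N * sigma20 / (N * sigma20 + sigma2)   # weight placed on the sample mean
    mu_post = (1 - w) * mu0 + w * xbar
    sigma20_post = 1.0 / (1.0 / sigma20 + N / sigma2)
    return mu_post, sigma20_post

# Prior N(0, 1), known variance 4, 10 observations with sample mean 2.
mu_post, var_post = update_normal_mean(mu0=0.0, sigma20=1.0, xbar=2.0, N=10, sigma2=4.0)
```

As $N$ grows, the weight $w$ on the sample mean approaches 1 and the posterior variance shrinks toward 0, so the posterior concentrates on the data.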
12. 추정 및 검정/07. 베이지안 모수 추정.ipynb
zzsza/Datascience_School
mit
Copulas Primer <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Gaussian_Copula"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.resea...
import numpy as np import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp tfd = tfp.distributions tfb = tfp.bijectors
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
tensorflow/docs-l10n
apache-2.0
A [copula](https://en.wikipedia.org/wiki/Copula_(probability_theory%29) is a classical approach for capturing the dependence between random variables. More formally, a copula is a multivariate distribution $C(U_1, U_2, ...., U_n)$ such that marginalizing gives $U_i \sim \text{Uniform}(0, 1)$. Copulas are interesting be...
class GaussianCopulaTriL(tfd.TransformedDistribution): """Takes a location, and lower triangular matrix for the Cholesky factor.""" def __init__(self, loc, scale_tril): super(GaussianCopulaTriL, self).__init__( distribution=tfd.MultivariateNormalTriL( loc=loc, scale_tril=scale_tr...
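The copula mechanism itself (push correlated normals through the standard normal CDF to obtain dependent Uniform(0, 1) marginals) can be sketched with only the Python standard library; this is an illustration with made-up names, not the TFP class above:

```python
import math
import random
import statistics

def gaussian_copula_sample(rho, n, seed=0):
    """Sample (u, v) pairs with Uniform(0,1) marginals coupled by a Gaussian copula."""
    rng = random.Random(seed)
    phi = statistics.NormalDist()   # standard normal, provides .cdf()
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # Cholesky trick: z2 is standard normal with correlation rho to z1.
        z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.gauss(0.0, 1.0)
        out.append((phi.cdf(z1), phi.cdf(z2)))   # each marginal is Uniform(0, 1)
    return out

samples = gaussian_copula_sample(rho=0.9, n=2000)
```

Each coordinate is marginally uniform, yet the pairs are strongly dependent, which is exactly the property the `TransformedDistribution` above encodes.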
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
tensorflow/docs-l10n
apache-2.0
The power of such a model, however, comes from using the Probability Integral Transform to apply the copula to arbitrary R.V.s. In this way, we can specify arbitrary marginals and use the copula to stitch them together. We start with a model: $$\begin{align} X &\sim \text{Kumaraswamy}(a, b) \\ Y &\sim \text{Gumbel}(\mu, \beta...
a = 2.0 b = 2.0 gloc = 0. gscale = 1. x = tfd.Kumaraswamy(a, b) y = tfd.Gumbel(loc=gloc, scale=gscale) # Plot the distributions, assuming independence x_axis_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32) y_axis_interval = np.linspace(-2., 3., num=200, dtype=np.float32) x_grid, y_grid = np.meshgrid(x_a...
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
tensorflow/docs-l10n
apache-2.0
Joint Distribution with Different Marginals Now we use a Gaussian copula to couple the distributions together, and plot that. Again our tool of choice is TransformedDistribution applying the appropriate Bijector to obtain the chosen marginals. Specifically, we use a Blockwise bijector which applies different bijectors ...
class WarpedGaussianCopula(tfd.TransformedDistribution): """Application of a Gaussian Copula on a list of target marginals. This implements an application of a Gaussian Copula. Given [x_0, ... x_n] which are distributed marginally (with CDF) [F_0, ... F_n], `GaussianCopula` represents an application of the Cop...
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
tensorflow/docs-l10n
apache-2.0
Finally, let's actually use this Gaussian Copula. We'll use a Cholesky of $\begin{bmatrix}1 & 0 \\ \rho & \sqrt{1-\rho^2}\end{bmatrix}$, which will correspond to variances 1, and correlation $\rho$ for the multivariate normal. We'll look at a few cases:
# Create our coordinates: coordinates = np.concatenate( [x_grid[..., np.newaxis], y_grid[..., np.newaxis]], -1) def create_gaussian_copula(correlation): # Use Gaussian Copula to add dependence. return WarpedGaussianCopula( loc=[0., 0.], scale_tril=[[1., 0.], [correlation, tf.sqrt(1. - correlation...
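As a sanity check, that lower-triangular factor really does reproduce a unit-variance correlation matrix, since $L L^T$ has ones on the diagonal and $\rho$ off the diagonal. A small stdlib sketch:

```python
import math

def chol_corr(rho):
    # Cholesky factor used in the text: [[1, 0], [rho, sqrt(1 - rho^2)]]
    return [[1.0, 0.0], [rho, math.sqrt(1.0 - rho**2)]]

def matmul_t(L):
    # Compute L @ L.T for a 2x2 matrix.
    return [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

rho = 0.8
C = matmul_t(chol_corr(rho))   # should equal [[1, rho], [rho, 1]]
```

So parameterizing `scale_tril` this way gives standard-normal marginals with correlation $\rho$, which is what the copula construction requires.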
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
tensorflow/docs-l10n
apache-2.0
Finally, let's verify that we actually get the marginals we want.
def kumaraswamy_pdf(x): return tfd.Kumaraswamy(a, b).prob(np.float32(x)) def gumbel_pdf(x): return tfd.Gumbel(gloc, gscale).prob(np.float32(x)) copula_samples = [] for copula in copulas: copula_samples.append(copula.sample(10000)) plot_rows = len(correlations) plot_cols = 2 # for 2 densities [kumarswamy...
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
tensorflow/docs-l10n
apache-2.0
Promises Hypothesis --> full implementation of promises coming in jQuery 3.0
%%javascript // version of jQuery element.text($.fn.jquery) %%javascript { // incorrect attempt to compute element independently... // just use element: http://stackoverflow.com/a/20020566/ var _e = IPython.notebook.get_selected_cell().output_area.element; var _element = _e.find(".rendered_html"); element.text...
notebooks/js_notes.ipynb
rdhyee/webtech-learning
apache-2.0
jquery ajax, getJSON, especially with promises https://www.flickr.com/services/api/explore/flickr.photos.search
from IPython.display import Javascript import requests from jinja2 import Template flickr_url = "https://api.flickr.com/services/rest/?method=flickr.photos.search&api_key={key}&tags={tag}&format=json&nojsoncallback=1" url = flickr_url.format(key=FLICKR_KEY, tag='tiger') js_template = Template(""" $.getJSON('{{url}}',...
notebooks/js_notes.ipynb
rdhyee/webtech-learning
apache-2.0
ideas behind promises and deferred. Wikipedia article: Futures and promises - Wikipedia, the free encyclopedia: "Specifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future." Notably, a f...
%%javascript var d = $.Deferred(); var p = d.promise(); p.then (function (value) {console.log("p: " + value)}); rdhyee.d = d; %%javascript // thinking that you would pass a value rdhyee.d.resolve(10); %%javascript // kinda ugly -- must be a better way // how to express // b = a +1 // c = 2*b var a = $.Deferred...
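Python's standard library has the same split: `concurrent.futures.Future` is the writable single-assignment container, `result()` the read-only view, and `add_done_callback` plays the role of `.then()`. A Python analogue of the resolve example above (not the jQuery API):

```python
from concurrent.futures import Future

results = []
f = Future()
# Register a callback before the value exists, like p.then(...) above.
f.add_done_callback(lambda fut: results.append("got: %s" % fut.result()))

f.set_result(10)      # like d.resolve(10); fires the registered callback
value = f.result()    # read-only view of the settled value
```

As with a resolved Deferred, the value can only be set once; a second `set_result` on a finished `Future` raises an error.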
notebooks/js_notes.ipynb
rdhyee/webtech-learning
apache-2.0
Objects are the basic units in Python: they can be created, named, or deleted. You generally don't need to delete objects by hand in Python, since the garbage collector automatically handles objects that are no longer in use, though you can delete a variable with the del statement if needed. Naming means attaching a name label to an object for convenient use, that is, declaring or assigning a variable. Next we focus on how to create an object. Objects of some built-in Python types can usually be produced with dedicated syntax: numbers use numeric literals directly, strings use quotes '', lists use [], dicts use {}, functions use the def syntax, and so on. The types of all these objects are built into Python, so can we create objects of other types? Classes and instances Since Python is an object-oriented programming language, it allows users to create objects of their own,...
class A: pass a = A() who(A) who(a)
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
In the example above, A is a new class we created, and calling A() gives us an instance object of type A, which we assign to a. In other words, we have successfully created an object a that is different from every built-in object type: its type is __main__.A! At this point we can divide all objects in Python into two kinds: classes that can be used to generate new objects, including the built-in int and str as well as our own A; and instance objects generated by classes, including numbers and strings of built-in types as well as a, whose type __main__.A we defined ourselves. Understanding these two kinds of objects purely as concepts poses no problem, but here we want to discuss some details that must be considered in practice: we need convenient mechanisms to implement inheritance, overriding, and other object-oriented features; we need some fixed procedures that allow us, when generating ins...
class _EnumDict(dict): def __init__(self): dict.__init__(self) self._member_names = [] def keys(self): keys = dict.keys(self) return list(filter(lambda k: k.isupper(), keys)) ed = _EnumDict() ed['RED'] = 1 ed['red'] = 2 print(ed, ed.keys())
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
In the example above, _EnumDict both overrides and calls some methods of its parent class dict. This is syntactically correct, but if we change _EnumDict's parent class so that it no longer inherits from dict, we would have to manually edit every dict.method(self) call in all of its methods, which is not a good practice. To solve this problem, Python provides the built-in function super():
print(super.__doc__)
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
I initially treated super() as nothing more than a pointer to the parent class object, but it actually offers more: given a class and an object of it (the first argument must be at least a class object, while the second may be an instance object), it searches for the corresponding method starting from the namespace of that class's parent. Take the following code as an example:
class A(object): def method(self): who(self) print("A.method") class B(A): def method(self): who(self) print("B.method") class C(B): def method(self): who(self) print("C.method") class D(C): def __init__(self): super().method() super(__clas...
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
Of course, we can also use super() outside the class, only not in the zero-argument form, because __class__ and self no longer exist in the external namespace:
super(D, d).method() # calling D's parent's method with instance d
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
The example above can be described by the diagram below: +----------+ | A | +----------+ | method() <---------------+ super(B,self) +----------+ | | +----------+ +----------+ | B | | D | +----------+ super(C,self) +----------+ | method() <---------------+ method() |...
class A(object): pass class B(A): def method(self): print("B's method") class C(A): def method(self): print("C's method") class D(B, C): def __init__(self): super().method() class E(C, B): def __init__(self): super().method() d = D() e = E()
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
Python provides a class method mro() that specifies the search order. mro is short for Method Resolution Order. It is a class method rather than an instance method; the method resolution order used in inheritance can be changed by overriding mro(), but that must be done in a metaclass. Here we just look at its result:
D.mro() E.mro()
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
The super() method simply searches upward from its starting point along the order given by mro():
super(D, d).method() super(E, e).method() super(C, e).method() super(B, d).method()
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
rainyear/pytips
mit
Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...). 1 - Problem Statement You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To co...
# GRADED FUNCTION: yolo_filter_boxes def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6): """Filters YOLO boxes by thresholding on object and class confidence. Arguments: box_confidence -- tensor of shape (19, 19, 5, 1) boxes -- tensor of shape (19, 19, 5, 4) box_clas...
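The filtering logic itself (multiply box confidence by the class probabilities, keep the best class per box, and drop boxes whose best score falls below the threshold) can be sketched in plain Python. This toy version with hypothetical names is not the graded TensorFlow implementation:

```python
def filter_boxes(box_confidence, box_class_probs, threshold=0.6):
    """Return (box index, best score, best class) for boxes that survive the threshold."""
    kept = []
    for i, (conf, probs) in enumerate(zip(box_confidence, box_class_probs)):
        scores = [conf * p for p in probs]                     # score = confidence * class prob
        best = max(range(len(scores)), key=lambda c: scores[c])
        if scores[best] >= threshold:
            kept.append((i, scores[best], best))
    return kept

# 3 boxes, 2 classes: only box 0 has a best score above 0.6
kept = filter_boxes(box_confidence=[0.9, 0.4, 0.8],
                    box_class_probs=[[0.8, 0.2], [0.9, 0.1], [0.3, 0.7]])
```

The TF version does the same thing with tensors: it broadcasts the multiply, reduces with argmax/max over the class axis, and applies a boolean mask.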
coursera/deep-neural-network/quiz and assignments/week 12 CNN - Detection algorithms/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
jinntrance/MOOC
cc0-1.0
Expected Output: <table> <tr> <td> **scores[2]** </td> <td> 10.7506 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [ 8.42653275 3.27136683 -0.5313437 -4.94137383] </td> </tr> <tr> ...
# GRADED FUNCTION: iou def iou(box1, box2): """Implement the intersection over union (IoU) between box1 and box2 Arguments: box1 -- first box, list object with coordinates (x1, y1, x2, y2) box2 -- second box, list object with coordinates (x1, y1, x2, y2) """ # Calculate the (y1, x1, y2, x...
coursera/deep-neural-network/quiz and assignments/week 12 CNN - Detection algorithms/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
jinntrance/MOOC
cc0-1.0
Expected Output: <table> <tr> <td> **iou = ** </td> <td> 0.14285714285714285 </td> </tr> </table> You are now ready to implement non-max suppression. The key steps are: 1. Select the box that has the highest score. 2. Compute its overlap with all other b...
# GRADED FUNCTION: yolo_non_max_suppression def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5): """ Applies Non-max suppression (NMS) to set of boxes Arguments: scores -- tensor of shape (None,), output of yolo_filter_boxes() boxes -- tensor of shape (Non...
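The NMS loop described above (pick the highest-scoring box, discard boxes that overlap it by at least the IoU threshold, repeat) can be sketched in plain Python with boxes given as (x1, y1, x2, y2), as in the iou exercise. This is an illustrative version, not the graded TensorFlow implementation:

```python
def iou(b1, b2):
    # Intersection rectangle, clipped at zero when the boxes don't overlap.
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, xi2 - xi1) * max(0, yi2 - yi1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms(boxes, scores, iou_threshold=0.5):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)                 # highest-scoring remaining box
        keep.append(best)
        # Drop every remaining box that overlaps the chosen one too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Box 2 duplicates box 0 and is suppressed; box 1 barely overlaps and survives.
boxes = [(2, 1, 4, 3), (1, 2, 3, 4), (2, 1, 4, 3)]
keep = nms(boxes, scores=[0.9, 0.8, 0.7])
```

The first two boxes here are the pair from the iou exercise, whose overlap is 1/7, the value in the expected output above.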
coursera/deep-neural-network/quiz and assignments/week 12 CNN - Detection algorithms/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
jinntrance/MOOC
cc0-1.0
Expected Output: <table> <tr> <td> **scores[2]** </td> <td> 6.9384 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [-5.299932 3.13798141 4.45036697 0.95942086] </td> </tr> <tr> <...
# GRADED FUNCTION: yolo_eval def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5): """ Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes. Arguments: yolo_outputs...
coursera/deep-neural-network/quiz and assignments/week 12 CNN - Detection algorithms/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
jinntrance/MOOC
cc0-1.0