Expected Output:

| | |
|---|---|
| **scores[2]** | 138.791 |
| **boxes[2]** | [ 1292.32971191  -278.52166748  3876.98925781  -835.56494141] |
| ... |
```python
sess = K.get_session()
```
*Source: coursera/deep-neural-network/quiz and assignments/week 12 CNN - Detection algorithms/Autonomous+driving+application+-+Car+detection+-+v1.ipynb (jinntrance/MOOC, cc0-1.0)*
### 3.1 - Defining classes, anchors and image shape

Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files, "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell. T...
```python
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pretrained model

Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and wer...
```python
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```python
yolo_model.summary()
```
Note: on some computers, you may see a warning message from Keras. Don't worry about it if you do; it is fine. Reminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85), as explained in Figure (2).

### 3.3 - Convert output of the model to usable bo...
```python
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
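As a sanity check on the shape reminder above, the (19, 19, 5, 85) encoding can be unpacked with plain arithmetic. This is an illustrative sketch, not part of the assignment code; the variable names below are my own:

```python
# Sketch: unpack the YOLO output shape (m, 19, 19, 5, 85).
grid_h, grid_w = 19, 19   # 608 / 32 = 19 grid cells per side
num_anchors = 5          # one prediction per anchor box per cell
num_classes = 80         # COCO classes
box_attrs = 5            # box coordinates (4 values) + objectness score

assert 608 // 32 == grid_h              # spatial downsampling factor of 32
assert box_attrs + num_classes == 85    # the last axis: 5 + 80
print(grid_h * grid_w * num_anchors)    # total boxes per image: 1805
```

This is why the later filtering step matters: 1805 candidate boxes per image must be reduced to a handful of confident detections.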
You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.

### 3.4 - Filtering boxes

yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call yolo_eval, whi...
```python
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
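The filtering inside a function like yolo_eval relies on intersection-over-union (IoU) to suppress overlapping boxes. A minimal sketch of IoU for corner-format boxes; `iou` below is an illustrative helper, not the assignment's implementation:

```python
def iou(box1, box2):
    """Intersection over union of two (x1, y1, x2, y2) corner-format boxes."""
    # Corners of the intersection rectangle
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 unit of overlap over 7 units of union
```

Non-max suppression keeps the highest-scoring box and discards any remaining box whose IoU with it exceeds a threshold.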
### 3.5 - Run the graph on an image

Let the fun begin. You have created a (sess) graph that can be summarized as follows:

1. yolo_model.input is given to yolo_model. The model is used to compute the output yolo_model.output
2. yolo_model.output ...
```python
def predict(sess, image_file):
    """
    Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.

    Arguments:
    sess -- your tensorflow/Keras session containing the YOLO graph
    image_file -- name of an image stored in the "images" folder.

    Returns:
    o...
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```python
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
## Training checkpoints
```python
import tensorflow as tf

class Net(tf.keras.Model):
  """A simple linear model."""

  def __init__(self):
    super(Net, self).__init__()
    self.l1 = tf.keras.layers.Dense(5)

  def call(self, x):
    return self.l1(x)

net = Net()
```

*Source: site/en/guide/checkpoint.ipynb (tensorflow/docs, apache-2.0)*
### Saving from tf.keras training APIs

See the tf.keras guide on saving and restoring. tf.keras.Model.save_weights saves a TensorFlow checkpoint.
```python
net.save_weights('easy_checkpoint')
```
### Writing checkpoints

The persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs like tf.keras.layers or tf.keras.Model. The easiest way to manage variables is by attaching them to Python objects, then referencing those obj...
```python
def toy_dataset():
  inputs = tf.range(10.)[:, None]
  labels = inputs * 5. + tf.range(5.)[None, :]
  return tf.data.Dataset.from_tensor_slices(
    dict(x=inputs, y=labels)).repeat().batch(2)

def train_step(net, example, optimizer):
  """Trains `net` on `example` using `optimizer`."""
  with tf.GradientTape() as tape...
```
### Create the checkpoint objects

Use a tf.train.Checkpoint object to manually create a checkpoint, where the objects you want to checkpoint are set as attributes on the object. A tf.train.CheckpointManager can also be helpful for managing multiple checkpoints.
```python
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
```
### Train and checkpoint the model

The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
```python
def train_and_checkpoint(net, manager):
  ckpt.restore(manager.latest_checkpoint)
  if manager.latest_checkpoint:
    print("Restored from {}".format(manager.latest_checkpoint))
  else:
    print("Initializing from scratch.")

  for _ in range(50):
    example = next(iterator)
    loss = train_step(net, example, opt)
    ...
```
### Restore and continue training

After the first training cycle you can pass a new model and manager, but pick up training exactly where you left off:
```python
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)

train_and_checkpoint(net, manager)
```
The tf.train.CheckpointManager object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.
```python
print(manager.checkpoints)  # List the three remaining checkpoints
```
These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.
```python
!ls ./tf_ckpts
```
<a id="loading_mechanics"/>

### Loading mechanics

TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the "l1" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkp...
```python
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy())  # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy())  # Th...
```
The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint you wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints. restore returns a status object, which has optional assertions. All of the objects created in the new Checkpo...
```python
status.assert_existing_objects_matched()
```
There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed only passes if the checkpoint and the program match exactly, and would throw an exception here.

### Deferred restorations

Layer objects in TensorFlow may defer the creation of v...
```python
deferred_restore = tf.Variable(tf.zeros([1, 5]))
print(deferred_restore.numpy())  # Not restored; still zeros
fake_layer.kernel = deferred_restore
print(deferred_restore.numpy())  # Restored
```
### Manually inspecting checkpoints

tf.train.load_checkpoint returns a CheckpointReader that gives lower-level access to the checkpoint contents. It contains mappings from each variable's key to the shape and dtype of each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above...
```python
reader = tf.train.load_checkpoint('./tf_ckpts/')
shape_from_key = reader.get_variable_to_shape_map()
dtype_from_key = reader.get_variable_to_dtype_map()

sorted(shape_from_key.keys())
```
So if you're interested in the value of net.l1.kernel you can get the value with the following code:
```python
key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'

print("Shape:", shape_from_key[key])
print("Dtype:", dtype_from_key[key].name)
```
It also provides a get_tensor method allowing you to inspect the value of a variable:
```python
reader.get_tensor(key)
```
### Object tracking

Checkpoints save and restore the values of tf.Variable objects by "tracking" any variable or trackable object set in one of its attributes. When executing a save, variables are gathered recursively from all of the reachable tracked objects. As with direct attribute assignments like self.l1 = tf.keras.l...
```python
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')

restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy()  # Not restored yet
restore....
```
You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data-structures. Just like the attribute based loading, these wrappers restore a variable's value as soon as it's added to the container.
```python
restore.listed = []
print(restore.listed)  # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1)  # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
```
## Bubble sort implemented in Cython

Maybe we can speed things up a little bit via Cython's C-extensions for Python. Cython is basically a hybrid between C and Python and can be pictured as compiled Python code with type declarations. Since we are working in an IPython notebook here, we can make use of the very ...
```python
%load_ext cythonmagic
```

```python
%%cython
import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef cython_bubblesort(inp_ary):
    """ The Cython implementation of Bubblesort with NumPy memoryview."""
    cdef unsigned long length, i, swapped, ele, temp
    cdef long[:] np...
```

*Source: Day02_EverythingData/notebooks/05 - Theory and Practice.ipynb (yaoxx151/UCSB_Boot_Camp_copy, cc0-1.0)*
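The pure-Python baselines (python_bubblesort and python_bubblesort_ary) that the timing cells below compare against are defined in earlier cells not shown here. A minimal sketch of what such a baseline looks like, assuming a list input (this is my reconstruction, not the notebook's exact cell):

```python
def python_bubblesort(items):
    """Plain-Python bubble sort sketch: repeatedly swap adjacent
    out-of-order elements until a full pass makes no swaps."""
    items = list(items)  # work on a copy
    length = len(items)
    swapped = True
    for i in range(length):
        if not swapped:
            break
        swapped = False
        for ele in range(length - i - 1):
            if items[ele] > items[ele + 1]:
                items[ele], items[ele + 1] = items[ele + 1], items[ele]
                swapped = True
    return items

print(python_bubblesort([3, 1, 2]))  # [1, 2, 3]
```

The Cython, Numba, and parakeet versions below all follow this same loop structure; only the compilation strategy differs.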
## Bubble sort implemented in Numba

Numba uses the LLVM compiler infrastructure to compile Python code to machine code. Its strength is working with NumPy arrays to speed up the code. If you want to read more about Numba, please refer to the original website and documentation.
```python
from numba import jit as numba_jit

@numba_jit
def numba_bubblesort(np_ary):
    """ The Numba implementation of Bubblesort on NumPy arrays."""
    length = np_ary.shape[0]
    swapped = 1
    for i in xrange(0, length):
        if swapped:
            swapped = 0
            for ele in xrange(0, length-i-1):
                ...
```
## Bubble sort implemented in parakeet

Similar to Numba, parakeet is a Python compiler that optimizes the runtime of numerical computations based on NumPy data types, such as NumPy arrays. The usage is also similar to Numba: we just have to put the jit decorator on top of the function we want to optim...
```python
from parakeet import jit as para_jit

@para_jit
def parakeet_bubblesort(np_ary):
    """ The parakeet implementation of Bubblesort on NumPy arrays."""
    length = np_ary.shape[0]
    swapped = 1
    for i in xrange(0, length):
        if swapped:
            swapped = 0
            for ele in xrange(0, length-i-1):
                ...
```
## Verifying that all implementations work correctly
```python
import random
import copy
import numpy as np

random.seed(4354353)
print "my number is", random.randint(1,1000)

# The checks below use `l`, so it must actually be defined here:
l = np.asarray([random.randint(1,1000) for num in xrange(1, 1000)])

l_sorted = np.sort(l)
for f in [python_bubblesort, python_bubblesort_ary, cython_bubblesort, numba_bubblesort]:
    asse...
```
## Timing
```python
import timeit
import copy
import numpy as np

funcs = ['python_bubblesort',
         'python_bubblesort_ary',
         'cython_bubblesort',
         'numba_bubblesort']

orders_n = [10**n for n in range(1, 6)]
timings = {f: [] for f in funcs}

for n in orders_n:
    l = [np.random.randint(n) for num in range(n...
```
## Setting up the plots
```python
import platform
import multiprocessing
from cython import __version__ as cython__version__
from llvm import __version__ as llvm__version__
from numba import __version__ as numba__version__
from parakeet import __version__ as parakeet__version__

def print_sysinfo():
    print '\nPython version :', platform.python...
```
## Results
```python
title = 'Performance of Bubblesort in Python, Cython, parakeet, and Numba'

labels = {'python_bubblesort': '(C)Python Bubblesort - Python lists',
          'python_bubblesort_ary': '(C)Python Bubblesort - NumPy arrays',
          'cython_bubblesort': 'Cython Bubblesort - NumPy arrays',
          'numba_bubblesort': 'N...
```
## Exploration vs exploitation

Sigurd Carlen, September 2019. Reformatted by Holger Nahrstaedt 2020

.. currentmodule:: skopt

We can control how much the acquisition function favors exploration or exploitation by tweaking the two parameters kappa and xi. Higher values mean more exploration and less exploitation, and vice v...
```python
print(__doc__)
import numpy as np
np.random.seed(1234)
import matplotlib.pyplot as plt
from skopt.learning import ExtraTreesRegressor
from skopt import Optimizer
from skopt.plots import plot_gaussian_process
```

*Source: dev/notebooks/auto_examples/exploration-vs-exploitation.ipynb (scikit-optimize/scikit-optimize.github.io, bsd-3-clause)*
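The role of kappa in the LCB acquisition can be sketched with plain arithmetic: LCB = mu - kappa * sigma, and the next point minimizes it, so a large kappa steers the search toward high-uncertainty candidates. The toy numbers below are illustrative, not taken from the example:

```python
def next_point_lcb(mu, sigma, kappa):
    """Index of the candidate minimizing the LCB score mu - kappa * sigma."""
    scores = [m - kappa * s for m, s in zip(mu, sigma)]
    return scores.index(min(scores))

mu = [0.0, 0.5]      # predicted means: point 0 looks better on average
sigma = [0.1, 1.0]   # predicted std devs: point 1 is far more uncertain
print(next_point_lcb(mu, sigma, kappa=0.0))   # exploit: picks index 0
print(next_point_lcb(mu, sigma, kappa=10.0))  # explore: picks index 1
```

xi plays an analogous role for the EI and PI acquisition functions, shifting how much improvement over the current best is demanded.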
## Toy example

First we define our objective like in the ask-and-tell example notebook and define a plotting function. We do, however, only use one initial random point; all points after the first one are therefore chosen by the acquisition function.
```python
noise_level = 0.1

# Our 1D toy problem, this is the function we are trying to
# minimize
def objective(x, noise_level=noise_level):
    return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) +\
        np.random.randn() * noise_level

def objective_wo_noise(x):
    return objective(x, noise_level=0)

opt = Optimizer(...
```
## Plotting parameters
```python
plot_args = {"objective": objective_wo_noise,
             "noise_level": noise_level,
             "show_legend": True,
             "show_title": True,
             "show_next_point": False,
             "show_acq_func": True}
```
We run an optimization loop with standard settings:
```python
for i in range(30):
    next_x = opt.ask()
    f_val = objective(next_x)
    opt.tell(next_x, f_val)
# The same output could be created with opt.run(objective, n_iter=30)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
```
We see that a minimum is found and "exploited". Now let's try setting kappa and xi to other values and pass them to the optimizer:
```python
acq_func_kwargs = {"xi": 10000, "kappa": 10000}

opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
                acq_optimizer="sampling",
                acq_func_kwargs=acq_func_kwargs)

opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
```
We see that the points are more random now. This works both for kappa when using acq_func="LCB":
```python
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
                acq_func="LCB", acq_optimizer="sampling",
                acq_func_kwargs=acq_func_kwargs)

opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
```
And for xi when using acq_func="EI" or acq_func="PI":
```python
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
                acq_func="PI", acq_optimizer="sampling",
                acq_func_kwargs=acq_func_kwargs)

opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
```
We can also favor exploitation:
```python
acq_func_kwargs = {"xi": 0.000001, "kappa": 0.001}

opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
                acq_func="LCB", acq_optimizer="sampling",
                acq_func_kwargs=acq_func_kwargs)

opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)

opt = Optimizer...
```
Note that negative values do not work with the "PI" acquisition function, but do work with "EI":
```python
acq_func_kwargs = {"xi": -1000000000000}

opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
                acq_func="PI", acq_optimizer="sampling",
                acq_func_kwargs=acq_func_kwargs)

opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)

opt = Optimizer([(-2.0, 2....
```
## Changing kappa and xi on the go

If we want to change kappa or xi at any point during the optimization process, we just replace opt.acq_func_kwargs. Remember to call opt.update_next() after the change, in order for the next point to be recalculated.
```python
acq_func_kwargs = {"kappa": 0}

opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
                acq_func="LCB", acq_optimizer="sampling",
                acq_func_kwargs=acq_func_kwargs)
opt.acq_func_kwargs

opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)

acq_func_kwarg...
```
## Descriptive Analysis

To better understand general trends in the data. This is a work in progress. Last updated on: February 26, 2017.

### Seasonality

It is well known that movies gunning for an Academy Award aim to be released between December and February, two months before the award ceremony. This is pretty evident lookin...
```python
sb.countplot(x="release_month", data=oscars)
```

*Source: notebooks/analysis.ipynb (scruwys/and-the-award-goes-to, mit)*
This can be more or less confirmed by calculating the Pearson correlation coefficient, which measures the linear dependence between two variables:
```python
def print_pearsonr(data, dependent, independent):
    for field in independent:
        coeff = stats.pearsonr(data[dependent], data[field])
        print "{0} | coeff: {1} | p-value: {2}".format(field, coeff[0], coeff[1])

print_pearsonr(oscars, 'Oscar', ['q1_release', 'q2_release', 'q3_release', 'q4_release'])
```
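Pearson's r is just the covariance of the two variables over the product of their standard deviations. A self-contained sketch, independent of the oscars dataframe and of scipy:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: cov(x, y) / (std(x) * std(y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0: perfectly linear
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0: perfectly inverse
```

scipy.stats.pearsonr computes the same coefficient and additionally returns a p-value for the null hypothesis of no linear dependence.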
Q1 and Q4 have a higher coefficient than Q2 and Q3, so that points in the right direction... This won't really help us determine who will win the actual Oscar, but at least we know that if we want a shot, we need to be releasing in late Q4 and early Q1.

### Profitability

How do the financial details contribute to Oscar suc...
```python
# In case we want to examine the data based on the release decade...
oscars['decade'] = oscars['year'].apply(lambda y: str(y)[2] + "0")

# Adding some fields to slice and dice...
profit = oscars[~oscars['budget'].isnull()]
profit = profit[~profit['box_office'].isnull()]
profit['profit'] = profit['box_office'] - profit...
```
### Profitability by Award Category

Since 1980, the profitability of films which won an Oscar was on average higher than that of all films nominated that year.
```python
avg_margin_for_all = profit.groupby(['category'])['margin'].mean()
avg_margin_for_win = profit[profit['Oscar'] == 1].groupby(['category'])['margin'].mean()

fig, ax = plt.subplots()
index = np.arange(len(profit['category'].unique()))

rects1 = plt.bar(index, avg_margin_for_win, 0.45, color='r', label='Won')
rects2 = p...
```
### The biggest losers...that won?

This is just a fun fact. There were 5 awards since 1980 that were given to films that actually lost money.
```python
fields = ['year', 'film', 'category', 'name', 'budget', 'box_office', 'profit', 'margin']
profit[(profit['profit'] < 0) & (profit['Oscar'] == 1)][fields]
```
### Other Awards

Do the BAFTAs, Golden Globes, Screen Actors Guild Awards, etc. forecast who is going to win the Oscars? Let's find out...
```python
winning_awards = oscars[['category', 'Oscar', 'BAFTA', 'Golden Globe', 'Guild']]
winning_awards.head()

acting_categories = ['Actor', 'Actress', 'Supporting Actor', 'Supporting Actress']
y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'].isin(acting_categories))]

fig, (...
```
It looks like the Golden Globes and Screen Actors Guild awards are better indicators of Oscar success than the BAFTAs. Let's take a look at the same analysis, but for Best Picture. The "Guild" award we use is the Screen Actors Guild Award for Outstanding Performance by a Cast in a Motion Picture.
```python
y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'] == 'Picture')]

fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)
plt.title('Count Plot of Wins by Award')
sb.countplot(x="BAFTA", data=y, ax=ax1)
sb.countplot(x="Golden Globe", data=y, ax=ax2)
sb.countplot(x="Guild", data=y, ax=ax3...
```
Import the Waves class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
```python
from cmt.components import Waves
waves = Waves()
```

*Source: notebooks/waves_example.ipynb (mcflugen/bmi-tutorial, mit)*
Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the output variables.
```python
waves.get_output_var_names()
```
Or the input variables.
```python
waves.get_input_var_names()
```
We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main output of the Waves model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is, "sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity...
```python
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'

print "Data type: %s" % waves.get_var_type(angle_name)
print "Units: %s" % waves.get_var_units(angle_name)
print "Grid id: %d" % waves.get_var_grid(angle_name)
print "Number of elements in grid: %d" % waves.get_grid_size(0)
print "Type ...
```
OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Waves to use some defaults.
```python
waves.initialize(None)
```
Before running the model, let's set a couple of input parameters. These two parameters control the fraction of waves that approach the shore at a high angle and whether they come from a preferred direction.
```python
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .25)
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)
```
To advance the model in time, we use the update method. We'll advance the model one day.
```python
waves.update()
```
Let's double-check that the model advanced to the given time and see what the new wave angle is.
```python
print 'Current model time: %f' % waves.get_current_time()
val = waves.get_value(angle_name)
print 'The current wave angle is: %f' % val[0]
```
We'll put all this in a loop and advance the model in time to generate a time series of waves angles.
```python
import numpy as np

number_of_time_steps = 400
angles = np.empty(number_of_time_steps)
for time in xrange(number_of_time_steps):
    waves.update()
    angles[time] = waves.get_value(angle_name)

import matplotlib.pyplot as plt

plt.plot(np.array(angles) * 180 / np.pi)
plt.xlabel('Time (days)')
plt.ylabel('Incoming wav...
```
## 1. Load Training ASM and Byte Feature Data and Combine

Run the model selection functions on the combined ASM training data for the 30% best feature set and call graph feature set. So the data frames will be:

- final-combined-train-data-30percent.csv
- sorted_train_labels.csv
- all-combined-train-data.csv
```python
# First load the .asm and .byte training data and training labels
# sorted_train_data_asm = pd.read_csv('data/sorted-train-malware-features-asm-reduced.csv')
# sorted_train_data_byte = pd.read_csv('data/sorted-train-malware-features-byte.csv')
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
combined_t...
```

*Source: mmcc/model-selection.ipynb (dchad/malware-detection, gpl-3.0)*
## 2. Model Selection On The ASM Features Using GridSearchCV

Models include:

- GradientBoostingClassifier: randomized and grid search.
- SVC: randomized and grid search.
- ExtraTrees: randomized and grid search.
```python
# Assign asm data to X,y for brevity, then split the dataset in two equal parts.
X = combined_train_data.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
X_train.shape

plt.figure(figsize=(15,15))
plt.xlabel("EDX Register")
...
```
### 2.1 Gradient Boosting

### 2.2 Support Vector Machine

#### 2.2.1 Randomized Search
```python
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]},
                    {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]

print("# Tuning hyper-parameters for SVC")
print()

clfrand = RandomizedSearchCV(SVC(C=1), tuned_pa...
```
#### 2.2.2 Grid Search
```python
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]},
                    {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]

print("# Tuning hyper-parameters for SVC")
print()

clfgrid = GridSearchCV(SVC(C=1), tuned_paramete...
```
### 2.3 Extra Trees Classifier

#### 2.3.1 Randomized Search
```python
clfextra1 = ExtraTreesClassifier(n_jobs=4)

# use a random grid over parameters, most important parameters are n_estimators (larger is better) and
# max_features (for classification best value is square root of the number of features)
# Reference: http://scikit-learn.org/stable/modules/ensemble.html
param_dist = {"n_es...
```
#### 2.3.2 Grid Search
```python
clfextra2 = ExtraTreesClassifier(n_jobs=4)

# use a full grid over all parameters, most important parameters are n_estimators (larger is better) and
# max_features (for classification best value is square root of the number of features)
# Reference: http://scikit-learn.org/stable/modules/ensemble.html
param_grid = {"n_...
```
## 3. Model Selection On The Byte Features Using GridSearchCV

Models include:

- Ridge Classifier
- SVC: grid search.
- ExtraTrees: grid search.
```python
# Assign byte data to X,y for brevity, then split the dataset in two equal parts.
X = sorted_train_data_byte.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

plt.figure(figsize=(15,15))
plt.xlabel("File Entropy")
plt.ylabel(...
```
### 3.1 Ridge Classifier
```python
clfridge = RidgeClassifierCV(cv=10)
clfridge.fit(X_train, y_train)
y_pred = clfridge.predict(X_test)

print(classification_report(y_test, y_pred))
print(" ")
# Score the held-out predictions against y_test (not y_train)
print("score = {:.3f}".format(accuracy_score(y_test, y_pred)))
cm = confusion_matrix(y_test, y_pred)
print(cm)
```
### 3.2 Support Vector Machine

### 3.3 Extra Trees Classifier
```python
clfextra = ExtraTreesClassifier(n_jobs=4)

# use a full grid over all parameters, most important parameters are n_estimators (larger is better) and
# max_features (for classification best value is square root of the number of features)
# Reference: http://scikit-learn.org/stable/modules/ensemble.html
param_grid = {"n_e...
```
## 4. Model Selection On The Combined Training ASM/Byte Data Using GridSearchCV

Grid search will be done on the following classifiers:

- Ridge Classifier
- ExtraTreesClassifier
- GradientBoost
- RandomForest
- SVC
```python
# Assign byte data to X,y for brevity, then split the dataset in two equal parts.
X = combined_train_data.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
```
4.1 Ridge Classifier
from sklearn.linear_model import RidgeClassifierCV

clfridge = RidgeClassifierCV(cv=10)
clfridge.fit(X_train, y_train)
y_pred = clfridge.predict(X_test)
print(classification_report(y_test, y_pred))
print(" ")
print("score = {:.3f}".format(accuracy_score(y_test, y_pred)))
cm = confusion_matrix(y_test, y_pred)
print(cm)
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
4.2 Extra Trees Classifier
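The run_cv helper called in the next cell is defined elsewhere in the notebook. Judging purely from how it is used (it returns class probabilities p and hard predictions pred for the whole dataset), a plausible reconstruction is:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict

def run_cv(X, y, clf, folds=10):
    """Guess at the notebook's helper: out-of-fold class probabilities
    plus the corresponding hard label predictions."""
    p = cross_val_predict(clf, X, y, cv=folds, method='predict_proba')
    # predict_proba columns follow sorted label order, so argmax maps back to labels
    pred = np.unique(y)[np.argmax(p, axis=1)]
    return p, pred
```

This keeps `log_loss(y, p)` and `accuracy_score(y, pred)` directly computable, as in the cells that follow.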
clf1 = ExtraTreesClassifier(n_estimators=1000, max_features=None, min_samples_leaf=1,
                            min_samples_split=9, n_jobs=4, criterion='gini')
p1, pred1 = run_cv(X, y, clf1)
print("logloss = {:.3f}".format(log_loss(y, p1)))
print("score = {:.3f}".format(accuracy_score(y, pred1)))
cm = confusion_matrix(y, pred1)
print(cm)
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
4.3 Gradient Boost

4.4 Random Forest

4.5 Support Vector Machine

4.6 Nearest Neighbours

4.7 XGBoost
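Sections 4.3–4.6 are listed here without code in this excerpt; each follows the same fit/evaluate pattern as the surrounding cells. As one illustration, a nearest-neighbours run with 5-fold cross-validation (synthetic data stands in for the combined features):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for the combined ASM/byte features.
X, y = make_classification(n_samples=300, n_features=25, n_informative=10,
                           n_classes=3, random_state=0)

scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
print("mean accuracy = {:.3f}".format(scores.mean()))
```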
X = combined_train_data.iloc[:,1:]
ylabels = sorted_train_labels.iloc[:,1:]
y = np.array(ylabels - 1)
y = y.flatten()
y
xgclf = xgb.XGBClassifier(objective="multi:softprob", nthread=4)
params = {"n_estimators": [1000, 2000], "max_depth": [5, 10], "learning_rate": [0.1, 0.05]}
# run gr...
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
5. Run ExtraTreesClassifier With 10-Fold Cross Validation
help(xgb)
help(ExtraTreesClassifier)
ytrain = np.array(y)
X = data_reduced.iloc[:,1:]
X.shape
clf1 = ExtraTreesClassifier(n_estimators=1000, max_features=None, min_samples_leaf=1,
                            min_samples_split=9, n_jobs=4, criterion='gini')
p1, pred1 = run_cv(X, ytrain, clf1)
print("logloss = {:.3f}".format(log_loss(y, p1)))
print("score...
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
6. GridSearchCV with XGBoost on All Combined ASM and Call Graph Features.
data = pd.read_csv('data/all-combined-train-data-final.csv')
labels = pd.read_csv('data/sorted-train-labels.csv')
data.head(20)
X = data.iloc[:,1:]
ylabels = labels.iloc[:,1:].values
y = np.array(ylabels - 1).flatten()  # numpy arrays are unloved in many places.
y
labels.head()
xgclf = xgb.XGBClassifier(objective="mu...
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
7. Summary Of Results
# TODO:
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
8. Test/Experimental Code Only
# go through the features and delete any whose column sum falls below a small threshold
colsum = X.sum(axis=0, numeric_only=True)
zerocols = colsum[(colsum[:] == 0)]
zerocols
zerocols = colsum[(colsum[:] < 110)]
zerocols.shape
reduceX = X
for col in reduceX.columns:
    if sum(reduceX[col]) < 100:
        del reduceX[col]
reduceX.shape
...
mmcc/model-selection.ipynb
dchad/malware-detection
gpl-3.0
Now let's do a proper scan:

- IP = 5.7 eV
- EA = 4.0 eV
- Window = 0.25 eV
- Insulating threshold = 4.0 eV
%%bash
cd Electronic/
python scan_energies.py -i 5.7 -e 4.0 -w 0.5 -g 4.0
examples/Practical_tutorial/ELS_practical.ipynb
WMD-group/SMACT
mit
2. Lattice matching

Background

For stable interfaces there should be an integer relation between the lattice constants of the two surfaces in contact, which allows for perfect matching with minimal strain. Generally a strain value of ~3% is considered acceptable; above this, the interface will be incoherent. This sect...
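The ~3% criterion can be checked by hand: for film and substrate lattice constants $a_f$ and $a_s$ with an integer multiplier $n$, the mismatch is $|n a_f - a_s| / a_s$. A quick sketch (the lattice constants below are made up for illustration):

```python
def percent_strain(a_film, a_sub, n=1):
    """Percent lattice mismatch for an n-times supercell of the film on the substrate."""
    return abs(n * a_film - a_sub) / a_sub * 100.0

# A hypothetical 3.19 Angstrom film doubled onto a 6.29 Angstrom substrate:
print(round(percent_strain(3.19, 6.29, n=2), 2))  # 1.43 -- under the ~3% threshold
```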
%%bash
cd Lattice/
for file in *.cif; do python LatticeMatch.py -a MAPI/CH3NH3PbI3.cif -b $file -s 0.03; done
examples/Practical_tutorial/ELS_practical.ipynb
WMD-group/SMACT
mit
3. Site matching

So far the interface matching has considered only the magnitude of the lattice vectors. It would be nice to be able to include some measure of how well the dangling bonds can passivate one another. We do this by calculating the site overlap. Basically, we determine the undercoordinated surface atoms on eac...
%%bash
cd Site/
python csl.py -a CH3NH3PbI3 -b GaN -x 110 -y 010 -u 1,3 -v 2,5
examples/Practical_tutorial/ELS_practical.ipynb
WMD-group/SMACT
mit
All together

The lattice and site examples above give a feel for what is going on. For a proper screening procedure it would be nice to be able to run them together. That's exactly what happens with the LatticeSite.py script. It uses a new class Pair to store and pass information about the interface pairings. This in...
%%bash
cd Site/
for file in *cif; do python LatticeSite.py -a MAPI/CH3NH3PbI3.cif -b $file -s 0.03; done
examples/Practical_tutorial/ELS_practical.ipynb
WMD-group/SMACT
mit
This may seem like a trivial task, but it is a simple version of a very important concept. By drawing this separating line, we have learned a model which can generalize to new data: if you were to drop another point onto the plane which is unlabeled, this algorithm could now predict whether it's a blue or a red point. ...
from fig_code import plot_linear_regression
plot_linear_regression()
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/02.1-Machine-Learning-Intro.ipynb
csaladenes/csaladenes.github.io
mit
Again, this is an example of fitting a model to data, such that the model can make generalizations about new data. The model has been learned from the training data, and can be used to predict the result of test data: here, we might be given an x-value, and the model would allow us to predict the y value. Again, this...
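The fit/predict pattern described here can be sketched with scikit-learn's LinearRegression on synthetic data (a minimal sketch, not the tutorial's plot_linear_regression helper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
x = 10 * rng.rand(50)
y = 2 * x + 1 + rng.randn(50)      # noisy samples around the line y = 2x + 1

model = LinearRegression()
model.fit(x[:, np.newaxis], y)     # learn from training data

print(model.predict([[5.0]]))      # predict y for an unseen x-value (close to 11)
```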
from IPython.core.display import Image, display
display(Image(filename='images/iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename='images/iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename='images/iris_virginica.jpg'))
print("Iris Virginica")
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/02.1-Machine-Learning-Intro.ipynb
csaladenes/csaladenes.github.io
mit
Quick Question: If we want to design an algorithm to recognize iris species, what might the data be? Remember: we need a 2D array of size [n_samples x n_features]. What would the n_samples refer to? What might the n_features refer to? Remember that there must be a fixed number of features for each sample, and fea...
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[10])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/02.1-Machine-Learning-Intro.ipynb
csaladenes/csaladenes.github.io
mit
This data is four dimensional, but we can visualize two of the dimensions at a time using a simple scatter-plot:
import numpy as np
import matplotlib.pyplot as plt

x_index = 2
y_index = 1

# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])

plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target, cmap=plt.cm....
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/02.1-Machine-Learning-Intro.ipynb
csaladenes/csaladenes.github.io
mit
Quick Exercise: Change x_index and y_index in the above script and find a combination of two parameters which maximally separate the three classes. This exercise is a preview of dimensionality reduction, which we'll see later.

Other Available Data

They come in three flavors:

Packaged Data: these small datasets are pac...
from sklearn import datasets

# Type datasets.fetch_<TAB> or datasets.load_<TAB> in IPython to see all possibilities
# datasets.fetch_
# datasets.load_
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/02.1-Machine-Learning-Intro.ipynb
csaladenes/csaladenes.github.io
mit
After running this, the original home directory now contains all of the original .wav files pre-emphasized and written again as .wav and .aiff files. The reading, pre-emphasis, and writing are all done by Praat, while looping over all .wav files is done by standard Python code.
# List the current contents of the audio/ folder
!ls audio/

# Remove the generated audio files again, to clean up the output from this example
!rm audio/*_pre.wav
!rm audio/*_pre.aiff
docs/examples/batch_processing.ipynb
YannickJadoul/Parselmouth
gpl-3.0
Similarly, we can use the pandas library to read a CSV file with data collected in an experiment, and loop over that data to e.g. extract the mean harmonics-to-noise ratio. The results CSV has the following structure:

condition | ... | pp_id
--------- | --- | -----
0         | ... | 1877
1         | ... | 801
1 ...
import pandas as pd

print(pd.read_csv("other/results.csv"))

def analyse_sound(row):
    condition, pp_id = row['condition'], row['pp_id']
    filepath = "audio/{}_{}.wav".format(condition, pp_id)
    sound = parselmouth.Sound(filepath)
    harmonicity = sound.to_harmonicity()
    return harmonicity.values[harmonicity...
docs/examples/batch_processing.ipynb
YannickJadoul/Parselmouth
gpl-3.0
We can now have a look at the results by reading in the processed_results.csv file again:
print(pd.read_csv("processed_results.csv"))

# Clean up, remove the CSV file generated by this example
!rm processed_results.csv
docs/examples/batch_processing.ipynb
YannickJadoul/Parselmouth
gpl-3.0
A first model with logistic regression

The first thing we have to do is to set up an input tensor. One of the key advantages of using keras is that you only have to specify the shapes for the inputs and outputs; the shapes for the remaining tensors will be inferred automatically. In our case, each input vector $x_i = <x_...
logreg = Sequential()
logreg.add(Dense(output_dim=1, input_dim=num_variables, activation='sigmoid'))
k2e/ml/deep-learning/mlp.ipynb
kinshuk4/MoocX
mit
The first line tells keras to initialize a new sequential model. Sequential models take a single input tensor and produce a single output tensor. For the purposes of this tutorial, we're going to stick with the sequential model because it will have all of the functionality we'll need. The bulk of the action happens on t...
logreg.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
logreg.fit(X_train, y_train, validation_data=[X_val, y_val], verbose=0)
val_score = logreg.evaluate(X_val, y_val, verbose=0)
print("Accuracy on validation data: " + str(val_score[1]*100) + "%")

# create a mesh to p...
k2e/ml/deep-learning/mlp.ipynb
kinshuk4/MoocX
mit
We can see that a neural network with 5 hidden units is trying to stitch together a relatively boxy set of lines to create piecewise linear functions in an effort to classify the points.
num_hidden = 128
mlp = Sequential()
mlp.add(Dense(output_dim=num_hidden, input_dim=num_variables, activation='relu'))
mlp.add(Dense(output_dim=1, activation='sigmoid'))
mlp.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
mlp.fit(X_train, y_train, validation_data=[X...
k2e/ml/deep-learning/mlp.ipynb
kinshuk4/MoocX
mit
Not that we need to for this example, but we can easily extend this model to add more layers, simply by adding another dense layer.
mlp = Sequential()
mlp.add(Dense(output_dim=num_hidden, input_dim=num_variables, activation='relu'))
mlp.add(Dense(output_dim=num_hidden, activation='relu'))
mlp.add(Dense(output_dim=1, activation='sigmoid'))
mlp.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
mlp...
k2e/ml/deep-learning/mlp.ipynb
kinshuk4/MoocX
mit
Noisy Likelihoods

In some problems, we don't actually have access to the likelihood $\mathcal{L}(\mathbf{x})$ because it might be intractable or numerically infeasible to compute. Instead, we are able to compute a noisy estimate

$$ \mathcal{L}^\prime(\mathbf{x}) \sim P(\mathcal{L} | \mathbf{x}, \boldsymbol{\theta}) $$

...
ndim = 3  # number of dimensions
C = np.identity(ndim)  # set covariance to identity matrix
Cinv = linalg.inv(C)  # precision matrix
lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C)))  # ln(normalization)

# 3-D correlated multivariate normal log-likelihood
def loglikelihood(x):
    """Multivariate norma...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
We'll again define our prior (via prior_transform) to be uniform in each dimension from -10 to 10 and 0 everywhere else.
# prior transform
def prior_transform(u):
    """Transforms our unit cube samples `u` to a flat prior between -10. and 10. in each variable."""
    return 10. * (2. * u - 1.)
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
Noiseless Case

Let's first generate samples from this noiseless target distribution.
# initialize our nested sampler
dsampler = dynesty.DynamicNestedSampler(loglikelihood, prior_transform, ndim=3,
                                        bound='single', sample='unif', rstate=rstate)
dsampler.run_nested(maxiter=20000, use_stop=False)
dres = dsampler.results
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
Noisy Case

Now let's generate samples from a noisy version. Here we'll assume that we have a noisy estimate of our "model", which here is $f(\mathbf{x}) = \mathbf{x}$ such that

$$ \mathbf{x}^\prime \sim \mathcal{N}(\mathbf{x}, \sigma=1.) $$

Since our "true" model is $\mathbf{x} = 0$ and our prior ranges from $-10$ to $...
noise = 1.

# 3-D correlated multivariate normal log-likelihood
def loglikelihood2(x):
    """Multivariate normal log-likelihood."""
    xp = rstate.normal(x, noise)
    logl = -0.5 * np.dot(xp, np.dot(Cinv, xp)) + lnorm
    scale = - 0.5 * noise**2  # location and scale
    bias_corr = scale * ndim  # ***bias correcti...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
Note the additional bias correction term we have now included in the log-likelihood. This ensures that our noisy likelihood is unbiased relative to the true likelihood.
# compute estimator
x = np.zeros(ndim)
logls = np.array([loglikelihood2(x) for i in range(10000)])
print('True log-likelihood:', loglikelihood(x))
print('Estimated:', np.mean(logls), '+/-', np.std(logls))
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
Let's now sample from our noisy distribution.
dsampler2 = dynesty.DynamicNestedSampler(loglikelihood2, prior_transform, ndim=3,
                                         bound='single', sample='unif',
                                         update_interval=50., rstate=rstate)
dsampler2.run_nested(maxiter=20000, use_stop=Fa...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
As expected, sampling is substantially more inefficient in the noisy case since more likelihood calls are required to get a noisy realization that is "better" than the previous noisy realization.

Comparing Results

Comparing the two results, we see that the noise in our model appears to give a larger estimate for the ev...
# plot results
from dynesty import plotting as dyplot

lnz_truth = ndim * -np.log(2 * 10.)  # analytic evidence solution
fig, axes = dyplot.runplot(dres, color='blue')  # noiseless
fig, axes = dyplot.runplot(dres2, color='red',  # noisy
                          lnz_truth=lnz_truth, truth_color='black', ...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
This effect also propagates through to our posteriors, broadening them relative to the underlying distribution.
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]

# plot noiseless run (left)
fg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], trut...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
Importance Reweighting

If we knew the "true" likelihood, we could naively use importance reweighting to reweight our noiseless samples to approximate the "correct" distribution, as shown below.
# importance reweighting
logl = np.array([loglikelihood(s) for s in dres2.samples])
dres2_rwt = dynesty.utils.reweight_run(dres2, logl)

# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a....
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
Full Analysis

In general, however, we don't have access to the true likelihood. In that case, we need to incorporate importance reweighting into our error analysis. One possible approach would be the naive scheme outlined below, where we just add in an importance reweighting step as part of the error budget. Note that ...
Nmc = 50

# compute realizations of covariances (noiseless)
covs = []
for i in range(Nmc):
    if i % 5 == 0:
        sys.stderr.write(str(i)+' ')
    dres_t = dynesty.utils.resample_run(dres)
    x, w = dres_t.samples, np.exp(dres_t.logwt - dres_t.logz[-1])
    covs.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())

# noi...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit