## Input Utilities

Now we will need some utility functions for the inputs of our model. Let's start off with our image input transform function. We will separate out the normalization step from the transform in order to view the original image.
```python
image_size = 448  # scale image to given size and center
central_fraction = 1.0

transform = get_transform(image_size, central_fraction=central_fraction)
transform_normalize = transform.transforms.pop()
```
/opt/homebrew/lib/python3.7/site-packages/torchvision/transforms/transforms.py:210: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead. warnings.warn("The use of the transforms.Scale transform is deprecated, " +
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Now for the input question, we will need an encoding function (to go from words -> indices):
```python
def encode_question(question):
    """Turn a question into a vector of indices and a question length"""
    question_arr = question.lower().split()
    vec = torch.zeros(len(question_arr), device=device).long()
    for i, token in enumerate(question_arr):
        index = token_to_index.get(token, 0)
        vec[i] = ...
```
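The cell above is truncated. As a rough pure-Python sketch of the same idea (a hypothetical toy `token_to_index` vocabulary, plain lists instead of torch tensors):

```python
# Hypothetical toy vocabulary; the real token_to_index comes from the VQA dataset.
token_to_index = {"what": 1, "color": 2, "is": 3, "the": 4, "elephant": 5}

def encode_question(question):
    """Turn a question into a list of indices plus its length (pure-Python sketch)."""
    tokens = question.lower().split()
    # Unknown tokens map to index 0, mirroring .get(token, 0) above
    vec = [token_to_index.get(token, 0) for token in tokens]
    return vec, len(tokens)

indices, length = encode_question("What color is the elephant")
```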
## Baseline Inputs

The Insights API utilises Captum's attribution API under the hood, hence we will need a baseline for our inputs. A baseline is (typically) a neutral input to reference, so that our attribution algorithm(s) can understand which features are important in making a prediction (this is very simplified ...
```python
def baseline_image(x):
    return x * 0
```
For sentences, as done in the multi-modal VQA tutorial, we will use a sentence composed of padded symbols. We will also need to pass our model through [`configure_interpretable_embedding_layer`](https://captum.ai/api/utilities.html?highlight=configure_interpretable_embedding_layercaptum.attr._models.base.configur...
```python
interpretable_embedding = configure_interpretable_embedding_layer(
    vqa_resnet, "module.text.embedding"
)

PAD_IND = token_to_index["pad"]
token_reference = TokenReferenceBase(reference_token_idx=PAD_IND)

def baseline_text(x):
    seq_len = x.size(0)
    ref_indices = token_reference.generate_reference(seq_len, dev...
```
../captum/attr/_models/base.py:168: UserWarning: In order to make embedding layers more interpretable they will be replaced with an interpretable embedding layer which wraps the original embedding layer and takes word embedding vectors as inputs of the forward function. This allows to generate b...
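The idea of a padded text baseline can be sketched without Captum or torch: the reference sequence is simply the input's length worth of padding tokens (here a hypothetical PAD index of 0 stands in for `token_to_index["pad"]`):

```python
PAD_IND = 0  # hypothetical index of the "pad" token

def baseline_text(seq_len, pad_index=PAD_IND):
    """Reference (baseline) sequence: all padding, same length as the input."""
    return [pad_index] * seq_len

ref = baseline_text(5)
```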
## Using the Insights API

Finally we have reached the relevant part of the tutorial.

First let's create a utility function to allow us to pass data into the Insights API. This function will essentially produce `Batch` objects, which tell the Insights API what your inputs, labels and any additional arguments are.
```python
def vqa_dataset(image, questions, targets):
    img = Image.open(image).convert("RGB")
    img = transform(img).unsqueeze(0)
    for question, target in zip(questions, targets):
        q, q_len = encode_question(question)
        q = q.unsqueeze(0)
        q_len = q_len.unsqueeze(0)
        target_idx = answer_to_i...
```
Let's create our `AttributionVisualizer`. To do this we need the following:

- A score function, which tells us how to interpret the model's output vector
- A description of the input features given to the model
- The data to visualize (as described above)
- A description of the output (the class names); in our case this is our...
```python
def score_func(o):
    return F.softmax(o, dim=1)
```
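The score function just turns raw logits into probabilities. For readers unfamiliar with softmax, a minimal numerically-stable pure-Python sketch (no torch) of what `F.softmax` computes along one dimension:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)                       # subtract the max to avoid overflow
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```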
The following function will convert a sequence of question indices to the associated question words for visualization purposes. This will be provided to the `TextFeature` object to describe text features.
```python
def itos(input):
    return [question_words[int(i)] for i in input.squeeze(0)]
```
Let's define some dummy data to visualize using the function we declared earlier.
```python
dataset = vqa_dataset(
    "./img/vqa/elephant.jpg",
    ["what is on the picture", "what color is the elephant", "where is the elephant"],
    ["elephant", "gray", "zoo"],
)
```
Now let's describe our features. Each feature requires an input transformation function and a set of baselines. As described earlier, we will use the black image for the image baseline and a padded sequence for the text baseline.

The input image will be transformed via our normalization transform (`transform_normalize`)...
```python
features = [
    ImageFeature(
        "Picture",
        input_transforms=[transform_normalize],
        baseline_transforms=[baseline_image],
    ),
    TextFeature(
        "Question",
        input_transforms=[input_text_transform],
        baseline_transforms=[baseline_text],
        visualization_transform=itos,
        ...
```
Let's define our `AttributionVisualizer` object with the above parameters and our `vqa_resnet` model.
```python
visualizer = AttributionVisualizer(
    models=[vqa_resnet],
    score_func=score_func,
    features=features,
    dataset=dataset,
    classes=answer_words,
)
```
And now we can visualize the outputs produced by the model. As of writing this tutorial, the `AttributionVisualizer` class utilizes Captum's implementation of [integrated gradients](https://captum.ai/docs/algorithms#integrated-gradients) ([`IntegratedGradients`](https://captum.ai/api/integrated_gradients.html)).
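Integrated gradients attributes a prediction to each input dimension by averaging gradients along the straight path from the baseline to the input. A minimal pure-Python sketch (a Riemann midpoint approximation on a toy analytic function, not Captum's implementation):

```python
def integrated_gradients(f, grad_f, x, baseline, steps=100):
    """Midpoint-rule approximation of integrated gradients along the
    straight path from baseline to x. Pure-Python sketch."""
    attributions = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            attributions[i] += (x[i] - baseline[i]) * g[i] / steps
    return attributions

# Toy function f(x, y) = x^2 + 3y with analytic gradient (2x, 3)
f = lambda p: p[0] ** 2 + 3 * p[1]
grad_f = lambda p: [2 * p[0], 3.0]

attr = integrated_gradients(f, grad_f, x=[1.0, 2.0], baseline=[0.0, 0.0])
# Completeness property: attributions sum to f(x) - f(baseline)
```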
```python
visualizer.render()

# show a screenshot if using notebook non-interactively
from IPython.display import Image
Image(filename='img/captum_insights_vqa.png')
```
Finally, since we are done with visualization, we will revert the change to the model we made with `configure_interpretable_embedding_layer`. To do this, we will invoke the `remove_interpretable_embedding_layer` function.
```python
remove_interpretable_embedding_layer(vqa_resnet, interpretable_embedding)
```
## Simple Widget Introduction

### What are widgets?

Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox, etc.

### What can they be used for?

You can use widgets to build **interactive GUIs** for your notebooks. You can also use widgets to **synchronize state...
```python
import ipywidgets as widgets
```
### repr

Widgets have their own display `repr` which allows them to be displayed using IPython's display framework. Constructing and returning an `IntSlider` automatically displays the widget (as seen below). Widgets are displayed inside the output area below the code cell. Clearing cell output will also remove the widg...
```python
widgets.IntSlider()
```
### display()

You can also explicitly display the widget using `display(...)`.
```python
from IPython.display import display

w = widgets.IntSlider()
display(w)
```
### Multiple display() calls

If you display the same widget twice, the displayed instances in the front-end will remain in sync with each other. Try dragging the slider below and watch the slider above.
```python
display(w)
```
### Why does displaying the same widget twice work?

Widgets are represented in the back-end by a single object. Each time a widget is displayed, a new representation of that same object is created in the front-end. These representations are called views.

![Kernel & front-end diagram](images/WidgetModelView.png)

Widget p...
```python
w = widgets.IntSlider()
display(w)
w.value
```
Similarly, to set a widget's value, you can set its `value` property.
```python
w.value = 100
```
### Keys

In addition to `value`, most widgets share `keys`, `description`, and `disabled`. To see the entire list of synchronized, stateful properties of any specific widget, you can query the `keys` property. Generally you should not interact with properties starting with an underscore.
```python
w.keys
```
### Shorthand for setting the initial values of widget properties

While creating a widget, you can set some or all of the initial values of that widget by defining them as keyword arguments in the widget's constructor (as seen below).
```python
widgets.Text(value='Hello World!', disabled=True)
```
### Linking two similar widgets

If you need to display the same value two different ways, you'll have to use two different widgets. Instead of attempting to manually synchronize the values of the two widgets, you can use the `link` or `jslink` function to link two properties together (the difference between these is dis...
```python
a = widgets.FloatText()
b = widgets.FloatSlider()
display(a, b)

mylink = widgets.link((a, 'value'), (b, 'value'))
```
### Unlinking widgets

Unlinking the widgets is simple. All you have to do is call `.unlink` on the link object. Try changing one of the widgets above after unlinking to see that they can be independently changed.
```python
# mylink.unlink()
```
### `observe` changes in a widget value

Almost every widget can be observed for changes in its value that trigger a call to a function. The example below is the slider from the first notebook of the tutorial. The `HTML` widget below the slider displays the square of the number.
```python
slider = widgets.FloatSlider(
    value=7.5,
    min=5.0,
    max=10.0,
    step=0.1,
    description='Input:',
)

# Create non-editable text area to display square of value
square_display = widgets.HTML(description="Square: ", value='{}'.format(slider.value**2))

# Create function to update square_display's value when...
```
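The observe mechanism is just the observer pattern: handlers registered on a value are called with a change dictionary whenever it is set. A minimal pure-Python sketch of the idea (hypothetical `ObservableValue` class, no ipywidgets or browser front-end):

```python
class ObservableValue:
    """Minimal stand-in for a widget's value trait: calls observers on change."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    def observe(self, handler):
        self._observers.append(handler)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for handler in self._observers:
            handler({"old": old, "new": new})

slider = ObservableValue(7.5)
square_display = {"value": slider.value ** 2}

def update_square(change):
    square_display["value"] = change["new"] ** 2

slider.observe(update_square)
slider.value = 3.0  # triggers update_square
```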
## Statistical study of alternative blocks/chains

Analysis by IsthmusCrypto for the [Monero Archival Project](https://github.com/mitchellpkt/monero_archival_project), a product of *Noncesense-research-lab*.

Contributors: [NeptuneResearch](https://github.com/neptuneresearch), [Nathan](https://github.com/neffmallon), [Isthmus...
```python
block0s_relative_path = 'data_for_analysis/block0s.txt.backup'  # without the earlier stuff tacked on
block1s_relative_path = 'data_for_analysis/block1s.txt'
```
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Import libraries
```python
from copy import copy
import time
import datetime

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy as sp
```
Disable auto-scroll
```javascript
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
    return false;
}
```
### Import and pre-process data

Two separate pandas DataFrames are used: `b0s` for the main chain, and `b1s` for blocks that were abandoned.

Read in from CSV:
```python
# Read in the raw data from CSV files
b0s = pd.read_csv(block0s_relative_path)
b1s = pd.read_csv(block1s_relative_path)
```
Sort the rows by height
```python
b0s = b0s.sort_values('block_height')
b1s = b1s.sort_values('block_height')
```
Glance at the data
```python
display(b0s.describe())
display(b1s.describe())
b0s.head()
```
### De-dupe

Because the MAP nodes record *every* instance that a block was received, most heights contain multiple copies from different peers. Each copy is identical, but stamped with a different `block_random`. For the purposes of this analysis/notebook, we only need one copy of each block.

Take a peek at the current duplicati...
```python
b1s.head(20)
```
First we remove the `block_random` *column*, so that multiple copies become indistinguishable.
```python
b0s.drop(['block_random'], axis=1, inplace=True)
b1s.drop(['block_random'], axis=1, inplace=True)
```
Then drop the duplicate *rows*
```python
b0s = b0s.drop_duplicates()
b1s = b1s.drop_duplicates()
b1s.head(20)
```
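The two-step de-dupe logic (drop the distinguishing column, then drop duplicate rows) can be sketched without pandas, using a hypothetical list of received block records:

```python
# Each received copy of a block is identical except for a per-copy 'block_random'
rows = [
    {"block_height": 100, "block_time": 1600, "block_random": 11},
    {"block_height": 100, "block_time": 1600, "block_random": 22},
    {"block_height": 101, "block_time": 1720, "block_random": 33},
]

# Step 1: remove the column that distinguishes copies
stripped = [{k: v for k, v in row.items() if k != "block_random"} for row in rows]

# Step 2: drop duplicate rows, keeping first occurrence
deduped = []
for row in stripped:
    if row not in deduped:
        deduped.append(row)
```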
### Feature Engineering

Rather than looking at raw block timestamps, we'll want to study derived features like the time between blocks, alt chain lengths, etc.

Generate difference columns: `delta_time` is the timestamp difference between two blocks. The `merlin_block` label is applied when a block's miner-reported timestamp...
```python
b0s['delta_time'] = b0s['block_time'] - b0s['block_time'].shift()
b1s['delta_time'] = b1s['block_time'] - b1s['block_time'].shift()

b0s['merlin_block'] = 0  # unnecessary?
b1s['merlin_block'] = 0  # unnecessary?
b0s['merlin_block'] = b0s['delta_time'].transform(lambda x: x < 0).astype(int)
b1s['merlin_block'] = b1s['delta_t...
```
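The delta/flag construction can be sketched in pure Python on a short list of hypothetical miner-reported timestamps: each block's `delta_time` is its timestamp minus its predecessor's, and a `merlin_block` is one whose timestamp precedes its parent's (negative delta):

```python
block_times = [1000, 1120, 1235, 1190, 1360]  # hypothetical miner-reported timestamps

# delta_time: difference from the previous block (None for the first block)
delta_time = [None] + [t - s for s, t in zip(block_times, block_times[1:])]

# merlin_block: timestamp earlier than its parent's (delta_time < 0)
merlin_block = [1 if d is not None and d < 0 else 0 for d in delta_time]
```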
### Replace delta_height != 1 with NaN

The first block in an alt chain (or following a gap in `b0s`) will have an anomalous `delta_time` and `delta_height`. We convert these to NaNs so that we can hang on to orphaned blocks and still have the start of alt chains included in our data set.
```python
def mapper(x):
    if x == 1:
        return x
    else:
        return np.nan

b0s.delta_height = b0s.delta_height.map(mapper, na_action='ignore')
b0s.loc[b0s.delta_height.isnull(), ('delta_time')] = np.nan
b1s.delta_height = b1s.delta_height.map(mapper, na_action='ignore')
b1s.loc[b1s.delta_height.isnull(), ('delta_tim...
```
What are we left with?
```python
print('Retained ' + str(len(b0s)) + ' main-chain blocks')
print('Retained ' + str(len(b1s)) + ' alt-chain blocks')
```
Retained 17925 main-chain blocks
Retained 241 alt-chain blocks
### Label alt chains

Initialize new labels and features:

- `alt_chain_ID` assigns an arbitrary integer to identify each alt chain. NOTE: there is a (bad) implicit assumption here that alternate blocks at two subsequent heights belong to the same chain. This will be fixed in versions with linked blocks.
- `alt_chain_length` records...
```python
b1s['alt_chain_ID'] = 0
b1s['alt_chain_length'] = b1s['block_height'] - b1s['block_height'].shift()  # how long did this alt-chain get?
b1s['alt_chain_time'] = 0
b1s['terminal_block'] = 0  # is this the last block in the alt-chain?

b1s = b1s.reset_index()
b1s.drop(['index'], 1, inplace=True)
b1s.loc[0, ('alt_chain_length')]...
```
### Add new info

Calculate accumulated alt chain length/time, and label terminal blocks.

Note that initialization of the field `alt_chain_length` produces some value > 1 for the first block, and = 1 for subsequent blocks. Below, this is converted into actual alt chain lengths.

Convention: alt chains start at length 0.
```python
alt_chain_counter = -1

# Loop over rows = blocks
for index, row in b1s.iterrows():
    # If you want extra details:
    # print('index: ' + str(index) + ' this_row_val: ' + str(this_row_val))

    # Check whether this is the first block in the chain, or further down
    if str(row['delta_height']) == 'nan':
        ...
```
## Results

### General block interval study

Let's take a look at the intervals between blocks, for both the main and alt chains. What is the *average* interval between blocks?
```python
b0_mean_time_s = np.mean(b0s.delta_time)
b1_mean_time_s = np.mean(b1s.delta_time)

print('Main-chain blocks come with mean time: ' + str(round(b0_mean_time_s)) + ' seconds = ' + str(round(b0_mean_time_s/60, 1)) + ' min')
print('alt-chain blocks come with mean time: ' + str(round(b1_mean_time_s)) + ' seconds = ' + str(ro...
```
Main-chain blocks come with mean time: 120 seconds = 2.0 min
alt-chain blocks come with mean time: 6254 seconds = 104.2 min
The main chain blocks are 2 minutes apart, on average. This is what we expect, and is a good validation. The alt chain blocks come at VERY long intervals: the (not-representative) average is almost two hours!

### Visualize block discovery time
```python
fig = plt.figure(figsize=(10, 10), facecolor='white')
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

ax1 = fig.add_subplot(211)
ax1.set_xlabe...
```
### Observed wait times

**Main chain:** The top histogram (main chain) shows roughly the distribution that we would expect: long-tailed with a mean of 2 minutes. There seems to be some skew around 60 seconds, which is peculiar. Ittay Eyal and Emin Gün Sirer [point out](http://hackingdistributed.com/2014/01/15/detecting-self...
```python
dt = b0s.delta_time[b0s.delta_time > 0]
sns.distplot(dt, bins=np.linspace(0, 1000, 100))

mean = np.nanmean(dt)
stddev = np.std(dt)
lam = mean / stddev**2
k = 2000
x = range(10000)
shift = 1975
y = sp.stats.erlang.pdf(x, k, lam)
x_plot = [xi - shift for xi in x]
plt.plot(x_plot, y, color='r', label="Erlang")
plt.xlim(0, 75...
```
If we are correct that Erlang statistics theoretically describe the situation, there are two main reasons why the distribution does not match observations:- IsthmusCrypto used arbitrary (and presumably wrong) parameters to make the fit line up- This analysis used miner-reported timestamps (MRTs) which are known to be...
```python
orph_block_cut1 = copy(b1s[b1s.alt_chain_length == 1])
orph_block_cut2 = copy(orph_block_cut1[orph_block_cut1.terminal_block == 1])

experiment_time_d = (max(orph_block_cut2.block_time) - min(orph_block_cut2.block_time)) / 86400  # seconds per day
experiment_time_height = (max(orph_block_cut2.block_height) - min(orph_block_cu...
```
Experiment lasted for: 85 days = 60171 heights
Observed 22 single orphan blocks
Observed 219 blocks associated with longer alternative chains
This corresponds to 1 natural orphan per 2735 heights
This corresponds to 1 alt-chain-related block per 2735 heights
Assuming that the longer side chains are a different phenomenon that does not impact the frequency of natural orphaned blocks, how often would we expect to see a triple block?
```python
monero_blocks_per_day = 720

heights_per_triple = heights_per_orphan * heights_per_side_block
triple_frequency_days = heights_per_triple / monero_blocks_per_day

print('Statistically we expect to see a triple block once per ' + str(round(heights_per_triple)) + ' blocks (' + str(round(triple_frequency_days)) + ' days)'...
```
Statistically we expect to see a triple block once per 751463 blocks (1044 days)
```python
print('Observed: ' + str(number_of_orphans) + ' over the course of ' + str(round(experiment_time_d)) + ' days.')
print('This corresponds to ' + str(round(orphans_per_day, 3)) + ' orphans per day.')
print('Over ' + str(experiment_time_height) + ' blocks, averaged:')
print(str(round(orphans_per_height, 5)), ' orphans per heig...
```
```python
fig = plt.figure(figsize=(10, 5), facecolor='white', dpi=300)
plt.style.use('seaborn-white')
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

plt.hist(b0s.delta_time.dropna(), bins=np.linspace(-500, 1000, 100))
plt.xlabel('time between blocks (seconds)')
plt.ylabel('...
```
2.68% of blocks on the main chain were delivered from the future.
**Time-traveling blocks:** About 2.5% of blocks on the main chain arrive with a miner timestamp BEFORE the timestamp of the block prior. This conclusively shows that miner-reported timestamps cannot be trusted, and further analysis must rely on node-reported timestamps.

### Direction of time travel

Let `M` be the height of ...
```python
# Indexing
M_block_inds = b0s.index[b0s.merlin_block == 1].tolist()
M_parent_inds = [x - 1 for x in M_block_inds]
M_child_inds = [x + 1 for x in M_block_inds]

fig = plt.figure(figsize=(10, 10))
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.r...
```
From the bottom plot, we notice that the blocks *preceding* (according to height) Merlin blocks seem mostly to have arrived on schedule (since, qualitatively at least, the interval distribution for M-1 blocks matches the interval distribution for all blocks). However, many of the blocks *following* (according to height) Merli...
```python
b0s_M = copy(b0s[b0s.merlin_block == 1])
del b0s_M['delta_time']
# b0s_M['delta_time'] = b0s_M['block_time'] - b0s_M['block_time'].shift()
b0s_M['delta_height'] = b0s_M['block_height'] - b0s_M['block_height'].shift()

################################################
## Sticking a pin in this section for now.....
## Not sure...
```
### Investigate alt chains

Let's look at alt chains. The following plots will show how quickly each chain grew.

How long do these alt chains persist? We expect lots of alt chains with length 1 or 2 from natural causes. Is anybody out there dumping mining power into longer alt chains? We'll consider 'longer' in terms of heigh...
```python
fig = plt.figure(figsize=(10, 10))
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

ax1 = fig.add_subplot(211)
ax1.set_xlabel('alt-chain lengt...
```
Unexpectedly, there are lots of alt chains being mined 10, 20, and 30 blocks deep (top plot). Some of these futile alt chains are mined for weeks (bottom plot). Highly unnatural...

### A closer look at the growth of alt chains

It is time to individually inspect different alt chains, and see how fast they were produced. The p...
```python
# Let's take a look at the first 20 blocks:
max_chain_length = 20
max_chain_time = 25000
norm_block_time = 120  # seconds

fig = plt.figure(figsize=(8, 8), dpi=100)
plt.style.use('seaborn-white')
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['font.size'] = 15

plt.scatter(b1s.alt_...
```
**Wow**, there are several interesting things to note:

Several of the alt chains, separated by weeks, show the exact same signature in hashrate (e.g. 5 and 11) and presumably were produced by the same equipment.

Alt chain 10 produced the first 8 blocks at approximately 2 minutes per block! This could indicate two things:
- ...
```python
fig = plt.figure(figsize=(4, 4), dpi=100)
plt.scatter(b1s.alt_chain_length, b1s.alt_chain_time + 1, c=b1s.alt_chain_ID, cmap='tab10')
fig.suptitle('Looking at alt chain lengths and time, all blocks')
plt.xlabel('Nth block of the alt chain')
plt.ylabel('Accumulated time (seconds)')
plt.axis('tight')
pass
```
Let's look at these chains in terms of a (very rough and noisy) estimate of their hashpower, which is inversely proportional to the length of time between discovering blocks. This can be normalized by the network discovery time (average 120 s) to estimate the alt chain's hashrate relative to the network-wide hashrate. `fraction_of_n...
```python
# Let's take a look at the first 20 blocks:
max_chain_length = 20
norm_block_time = 120  # seconds
max_prop = 2

fig = plt.figure(figsize=(8, 8), dpi=100)
plt.style.use('seaborn-white')
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['font.size'] = 15

plt.scatter(b1s.alt_chain_leng...
```
### Ratio of chain length to chain time

For each chain, we calculate the total chain_length/chain_time to produce the average blocks per second. This should be directly proportional to the hashrate being used to mine that chain.
```python
b1s_terminal = copy(b1s[b1s.terminal_block == 1])
b1s_terminal['average_speed'] = np.nan
b1s_terminal['fraction_of_network_hashrate'] = np.nan

# Loop over rows = blocks
for index, row in b1s_terminal.iterrows():
    if b1s_terminal.alt_chain_time[index] > 0:
        b1s_terminal.loc[index, ('average_speed')] = b1s_term...
```
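The per-chain estimate can be sketched in pure Python: average blocks per second divided by the network's rate of one block per 120 s (the function name and example numbers here are illustrative, not from the data):

```python
NORM_BLOCK_TIME = 120  # network target: one block per ~120 seconds

def fraction_of_network_hashrate(chain_length, chain_time_s, norm_block_time=NORM_BLOCK_TIME):
    """Average blocks/second of the alt chain, relative to the network's rate."""
    average_speed = chain_length / chain_time_s   # blocks per second
    network_speed = 1.0 / norm_block_time
    return average_speed / network_speed

# A chain that found 10 blocks in 2400 s mined at half the network rate
frac = fraction_of_network_hashrate(10, 2400)
```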
Almost all of the alternate chains have an average speed that is SLOWER than the main chain (to the left of the red line). Some of the chains clocked in with speeds higher than the network average. Let's see whether these are long chains that should have overtaken the main chain, or fluke statistics from short chains.
```python
b1s_order = copy(b1s_terminal.sort_values(['average_speed'], ascending=False))
display(b1s_order.dropna()[['alt_chain_ID', 'alt_chain_length', 'alt_chain_time', 'average_speed', 'fraction_of_network_hashrate']])
```
As expected, the chains that clocked in faster than average were all just 2-block detours. Let's take a look without those.
```python
b1s_order = copy(b1s_order[b1s_order.alt_chain_length > 2])
display(b1s_order[['alt_chain_ID', 'alt_chain_length', 'alt_chain_time', 'average_speed', 'fraction_of_network_hashrate']])
```
Surprising observations:

Note that the chain with ID 10 was 20 blocks long, and averaged production at a speed that would have required **13% of the total network hashrate.** Chain ID 13 managed 18 blocks at a speed consistent with having **8% of the total network hashrate.**

### Comparison between chains

Now we look at whether an...
```python
# Create plot
fig = plt.figure()
plt.scatter(b0s.block_height, b0s.delta_time)
plt.scatter(b1s.block_height, b1s.delta_time)
# plt.xlim(1580000, 1615000)
plt.ylim(0, 1500)
plt.title('Matplotlib scatter plot')
plt.show()
```
..... This doesn't line up exactly right. Need to get b0 times during b1 stretches...

### Summarization of the alt chains

Quickly spitting out some text data so I can cross-reference these alt chains against mining timing on the main chain.
```python
verbose_text_output = 1

if verbose_text_output:
    for i, x in enumerate(b1s.alt_chain_ID.unique()):
        try:
            # print('alt chain #' + str(i) + ' median time: ', + np.median(b1s.block_time[b1s.alt_chain_ID==i]))
            print('alt chain #' + str(i) + ' median time: ' + time.strftime('%Y-%m-%d %H:%M:...
```
alt chain #0 median time: 2018-04-04 23:11:27 ... started at height 1545098
alt chain #1 median time: 2018-04-06 03:24:11 ... started at height 1546000
alt chain #2 median time: 2018-04-10 00:11:39 ... started at height 1547963
alt chain #3 median time: 2018-04-29 15:03:35 ... started at height 1562061
alt chai...
Work in progress. Check back later for more excitement!

Ah, here's a bug to fix: NaNs in `delta_time` get marked as a `merlin_block`, which is not true.
```python
b0s[b0s.merlin_block == 1]
```
## Linear Solvers Quickstart

This tutorial illustrates the basic usage and functionality of ProbNum's linear solver. In particular:

- Loading a random linear system from ProbNum's `problems.zoo`.
- Solving the system with one of ProbNum's linear solvers.
- Visualizing the return objects of the solver: These are distributions...
```python
import warnings
warnings.filterwarnings('ignore')

# Make inline plots vector graphics instead of raster graphics
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats("pdf", "svg")

# Plotting
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm
plt.style...
```
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
### Linear Systems & Linear Solvers

Consider a linear system of the form

$$A \mathbf{x} = \mathbf{b}$$

where $A\in\mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, $\mathbf{b}\in\mathbb{R}^n$ is a vector and $\mathbf{x}\in\mathbb{R}^n$ is the unknown solution of the linear system. Solving such a linear system ...
```python
import numpy as np
from probnum.problems.zoo.linalg import random_spd_matrix

rng = np.random.default_rng(42)  # for reproducibility

n = 25  # dimensionality

# generate linear system
spectrum = 10 * np.linspace(0.5, 1, n) ** 4
A = random_spd_matrix(rng=rng, dim=n, spectrum=spectrum)
b = rng.normal(size=(n, 1))

print(...
```
Matrix condition: 16.00 Eigenvalues: [ 0.625 0.73585981 0.8608519 1.00112915 1.15788966 1.33237674 1.52587891 1.73972989 1.97530864 2.23403931 2.51739125 2.82687905 3.1640625 3.53054659 3.92798153 4.35806274 4.82253086 5.32317173 5.86181641 6.44034115 7.06066744 7.72476196 8.43463662 9....
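For contrast with the probabilistic solver used later in this tutorial, a classical iterative method for symmetric positive definite systems is conjugate gradients. A self-contained pure-Python sketch (no NumPy/ProbNum, toy 2×2 system):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Classical CG for a symmetric positive definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A_toy = [[4.0, 1.0], [1.0, 3.0]]
b_toy = [1.0, 2.0]
x_hat = conjugate_gradient(A_toy, b_toy)
```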
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Now we visualize the linear system.
# Plot linear system
fig, axes = plt.subplots(
    nrows=1,
    ncols=4,
    figsize=(5, 3.5),
    sharey=True,
    squeeze=False,
    gridspec_kw={"width_ratios": [4, 0.25, 0.25, 0.25]},
)
vmax = np.max(np.hstack([A, b]))
vmin = np.min(np.hstack([A, b]))

# normalize diverging colorbar, such that it is centered at zer...
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Solve the Linear System with ProbNum's SolverWe now use ProbNum's probabilistic linear solver `problinsolve` to estimate the solution vector $\mathbf{x}$.The algorithm iteratively chooses *actions* $\mathbf{s}$ and makes linear *observations* $\mathbf{y}=A \mathbf{s}$ to update its belief over the solution $\mathbf{x}...
from probnum.linalg import problinsolve

# Solve with probabilistic linear solver
x, Ahat, Ainv, info = problinsolve(A=A, b=b, maxiter=10)
print(info)
{'iter': 10, 'maxiter': 10, 'resid_l2norm': 0.022193410186189838, 'trace_sol_cov': 27.593259516810043, 'conv_crit': 'maxiter', 'rel_cond': None}
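For comparison, a classical iterative method for symmetric positive definite systems is conjugate gradients. The following is a minimal NumPy sketch, independent of ProbNum; the function and variable names are mine, and the demo system is not the one from this tutorial:

```python
import numpy as np

def conjugate_gradient(A_mat, rhs, maxiter=10, tol=1e-8):
    """Plain conjugate gradient for a symmetric positive definite system."""
    x_est = np.zeros_like(rhs)
    r = rhs - A_mat @ x_est   # residual
    p = r.copy()              # search direction
    for _ in range(maxiter):
        Ap = A_mat @ p
        alpha = (r @ r) / (p @ Ap)
        x_est = x_est + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x_est
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x_est

# Small SPD demo system (not the one from the tutorial).
rng_cg = np.random.default_rng(0)
B = rng_cg.normal(size=(5, 5))
A_small = B @ B.T + np.eye(5)
b_small = rng_cg.normal(size=5)
x_cg = conjugate_gradient(A_small, b_small, maxiter=50)
print(np.linalg.norm(A_small @ x_cg - b_small))  # residual norm, near zero
```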
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Visualization of Return Objects & Estimated Uncertainty The solver returns random variables $\mathsf{x}$, $\mathsf{A}$, and $\mathsf{H}:=\mathsf{A}^{-1}$ (described by distributions) which are called `x`, `Ahat` and `Ainv` respectively in the cell above. Those distributions describe possible values of the solution $\m...
x
x.mean
x.cov.todense()[0, :]

n_samples = 10
x_samples = x.sample(rng=rng, size=n_samples)
x_samples.shape
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Furthermore, the standard deviations together with the mean $\mathbb{E}(\mathsf{x})$ yield credible intervals for each dimension (entry) of $\mathbf{x}$. Credible intervals are a quick way to visualize the numerical uncertainty, but keep in mind that they only consider marginal (per element) distributions of $\mathsf{x...
plt.figure()
plt.plot(x_samples[0, :], '.', color='gray', label='sample')
plt.plot(x_samples[1:, :].T, '.', color='gray')
plt.errorbar(np.arange(0, 25), x.mean, 1 * x.std, ls='none', label='68\% credible interval')
plt.plot(x.mean, 'o', color='C0', label='$\mathbb{E}(\mathsf{x})$')
plt.xlabel("index of e...
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Here are the credible intervals printed out:
x_true = np.linalg.solve(A, b)[:, 0]
abs_err = abs(x.mean - x_true)
rel_err = abs_err / abs(x.mean + x_true)
print(f"Maximal absolute and relative error to mean estimate: {max(abs_err):.2e}, {max(rel_err):.2e}")
print(f"68% marginal credible intervals of the entries of x")
for i in range(25):
    print(f"element {i : >...
Maximal absolute and relative error to mean estimate: 4.84e-03, 9.16e-02 68% marginal credible intervals of the entries of x element 0: 0.43 pm 1.14 element 1: 0.16 pm 0.91 element 2: -0.96 pm 0.87 element 3: 0.33 pm 0.85 element 4: -0.01 pm 0.86 element 5: -0.78 pm 0.88 element 6: 0.72 pm 1.14...
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Generally, the uncertainty is a conservative estimate of the error, especially for small $n$, hence the credible intervals above as well as in the plot are quite large. For large $n$ where uncertainty quantification matters more, the error bars are expected to fit the true uncertainty better. System Matrix $\mathsf{A}...
Ahat
Ainv

# Draw samples
rng = np.random.default_rng(seed=42)
Ahat_samples = Ahat.sample(rng=rng, size=3)
Ainv_samples = Ainv.sample(rng=rng, size=3)

vmax = np.max(np.hstack([A, b]))
vmin = np.min(np.hstack([A, b]))

# normalize diverging colorbar, such that it is centered at zero
norm = TwoSlopeNorm(vmin=vmin, vcenter...
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
ASSIGNMENT 3

Using TensorFlow to build a CNN network for the CIFAR-10 dataset. Each record is of size 1*3072. Building a CNN network to classify the data into the 10 classes.

Dataset: CIFAR-10

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 trainin...
!pip install pydrive
Collecting pydrive
  Downloading https://files.pythonhosted.org/packages/52/e0/0e64788e5dd58ce2d6934549676243dc69d982f198524be9b99e9c2a4fd5/PyDrive-1.3.1.tar.gz (987kB)
  ...
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Creating the connection
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
import tensorflow as tf
from oauth2client.client import GoogleCredentials
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Authenticating and creating the PyDrive client
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Getting the IDs of all the files in the folder
file_list = drive.ListFile({'q': "'1DCFFw2O6BFq8Gk0eYu7JT4Qn224BNoCt' in parents and trashed=false"}).GetList()
for file1 in file_list:
    print('title: %s, id: %s' % (file1['title'], file1['id']))
title: data_batch_1, id: 11Bo2ULl9_aOQ761ONc2vhepnydriELiT title: data_batch_2, id: 1asFrGiOMdHKY-_KO94e1fLWMBN_Ke92I title: test_batch, id: 1Wyz_RdmoLe9r9t1rloap8AttSltmfwrp title: data_batch_3, id: 11ky6i6FSTGWJYOzXquELD4H-GUr49C4f title: data_batch_5, id: 1rmRytfjJWua0cv17DzST6PqoDFY2APa6 title: data_batch_4, id: 1b...
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Importing libraries
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython import display
from sklearn.model_selection import train_test_split
import pickle
%matplotlib inline
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Loading the data
def unpickle(file):
    with open(file, 'rb') as fo:
        dict = pickle.load(fo, encoding='bytes')
    return dict
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
If the file is zipped
zip_file = drive.CreateFile({'id': '11Bo2ULl9_aOQ761ONc2vhepnydriELiT'})
zip_file.GetContentFile('data_batch_1')
zip_file = drive.CreateFile({'id': '1asFrGiOMdHKY-_KO94e1fLWMBN_Ke92I'})
zip_file.GetContentFile('data_batch_2')
zip_file = drive.CreateFile({'id': '11ky6i6FSTGWJYOzXquELD4H-GUr49C4f'})
zip_file.GetContent...
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Combine the five batch arrays to use as training data
X_tr = np.concatenate([data1, data2, data3, data4, data5], axis=0)
X_tr = np.dstack((X_tr[:, :1024], X_tr[:, 1024:2048], X_tr[:, 2048:])) / 1.0
X_tr = (X_tr - 128) / 255.0
X_tr = X_tr.reshape(-1, 32, 32, 3)
y_tr = np.concatenate([labels1, labels2, labels3, labels4, labels5], axis=0)
_____no_output_____
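The `dstack`/`reshape` pipeline above assumes CIFAR's row layout of 1024 red bytes, then 1024 green, then 1024 blue. A small sketch verifying that the channels end up in the last axis (synthetic data, not real CIFAR):

```python
import numpy as np

# Two fake CIFAR rows: red plane all 0s, green all 1s, blue all 2s.
rows = np.hstack([
    np.zeros((2, 1024)),     # red channel bytes
    np.ones((2, 1024)),      # green channel bytes
    2 * np.ones((2, 1024)),  # blue channel bytes
])

imgs = np.dstack((rows[:, :1024], rows[:, 1024:2048], rows[:, 2048:]))
imgs = imgs.reshape(-1, 32, 32, 3)

print(imgs.shape)     # (2, 32, 32, 3)
print(imgs[0, 0, 0])  # [0. 1. 2.] -> channels land in the last axis
```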
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Setting the number of classes
num_classes = len(np.unique(y_tr))
print("X_tr", X_tr.shape)
print("y_tr", y_tr.shape)
X_tr (50000, 32, 32, 3) y_tr (50000,)
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Importing the test data
test_data = unpickle("test_batch")
X_test = test_data[b'data']
X_test = np.dstack((X_test[:, :1024], X_test[:, 1024:2048], X_test[:, 2048:])) / 1.0
X_test = (X_test - 128) / 255.0
X_test = X_test.reshape(-1, 32, 32, 3)
y_test = np.asarray(test_data[b'labels'])
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Splitting into test and validation
X_te, X_cv, y_te, y_cv = train_test_split(X_test, y_test, test_size=0.5, random_state=1)
print("X_te", X_te.shape)
print("X_cv", X_cv.shape)
print("y_te", y_te.shape)
print("y_cv", y_cv.shape)
X_te (5000, 32, 32, 3) X_cv (5000, 32, 32, 3) y_te (5000,) y_cv (5000,)
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Batch generator
def get_batches(X, y, batch_size, crop=False, distort=True):
    # Shuffle X, y
    shuffled_idx = np.arange(len(y))
    np.random.shuffle(shuffled_idx)
    i, h, w, c = X.shape
    # Enumerate indexes by steps of batch_size
    for i in range(0, len(y), batch_size):
        batch_idx = shuffled_idx[i:i+batch_size]...
_____no_output_____
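The generator above is truncated here. As a rough standalone sketch of the same idea (shuffle once, then yield contiguous index slices), with the `crop`/`distort` augmentation omitted and all names my own:

```python
import numpy as np

def get_batches_sketch(X, y, batch_size):
    """Yield shuffled (X, y) mini-batches; the last batch may be smaller."""
    shuffled_idx = np.arange(len(y))
    np.random.shuffle(shuffled_idx)
    for start in range(0, len(y), batch_size):
        batch_idx = shuffled_idx[start:start + batch_size]
        yield X[batch_idx], y[batch_idx]

X_demo = np.arange(10).reshape(10, 1)
y_demo = np.arange(10)
batches = list(get_batches_sketch(X_demo, y_demo, batch_size=4))
print([len(by) for _, by in batches])  # [4, 4, 2]
```

Every example appears exactly once per epoch; only the order changes between epochs.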
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Configurations
epochs = 20       # how many epochs
batch_size = 128
steps_per_epoch = int(np.ceil(X_tr.shape[0] / batch_size))  # ceil so the final partial batch counts
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Building the network

MODEL 7.13.4.6.7f

Model description:

* 7.6 - changed kernel reg rate to 0.01 from 0.1
* 7.7 - optimize loss instead of ce
* 7.8 - remove redundant lambda, replaced scale in regularizer with lambda, changed lambda from 0.01 to 0.001
* 7.9 - lambda 0 instead of 3
* 7.9.1 - lambda 1 instead of 0
* 7.9.2 - u...
# Create new graph
graph = tf.Graph()

# whether to retrain model from scratch or use saved model
init = True
model_name = "model_7.13.4.7.7l"

with graph.as_default():
    # Placeholders
    X = tf.placeholder(dtype=tf.float32, shape=[None, 32, 32, 3])
    y = tf.placeholder(dtype=tf.int32, shape=[None])
    training =...
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
CONFIGURE OPTIONS
init = True                # whether to initialize the model or use a saved version
crop = False               # do random cropping of images?
meta_data_every = 5
log_to_tensorboard = False
print_every = 1            # how often to print metrics
checkpoint_every = 1       # how often to save model in epo...
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Training the model
with tf.Session(graph=graph, config=config) as sess:
    if log_to_tensorboard:
        train_writer = tf.summary.FileWriter('./logs/tr_' + model_name, sess.graph)
        test_writer = tf.summary.FileWriter('./logs/te_' + model_name)

    if not print_metrics:
        # create a plot to be updated as model is trai...
Training model_7.13.4.7.7l ... Saving checkpoint Epoch 00 - step 391 - cv acc: 0.514 - train acc: 0.480 (mean) - cv cost: 1.403 - lr: 0.00300 Saving checkpoint Epoch 01 - step 782 - cv acc: 0.680 - train acc: 0.648 (mean) - cv cost: 0.928 - lr: 0.00300 Saving checkpoint Epoch 02 - step 1173 - cv acc: 0.710 - train acc:...
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Scoring and evaluating the trained model
## MODEL 7.20.0.11g
print("Model : ", model_name)
print("Convolutional network accuracy (test set):", test_acc, " Validation Set", valid_acc_values[-1])
Model : model_7.13.4.7.7l Convolutional network accuracy (test set): 0.84121096 Validation Set 0.8509766
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
The Dice Problem This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python.Copyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/li...
from os.path import basename, exists

def download(url):
    filename = basename(url)
    if not exists(filename):
        from urllib.request import urlretrieve
        local, _ = urlretrieve(url, filename)
        print('Downloaded ' + local)

download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py...
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
If everything we need is installed, the following cell should run with no error messages.
import pandas as pd
import numpy as np

from utils import values
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Review

[In the previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/03_cookie.ipynb) we started with Bayes's Theorem, written like this:

$P(A|B) = P(A) ~ P(B|A) ~/~ P(B)$

And applied it to the case where we use data, $D$, to update the probability of a hypothesis, $H$. In thi...
import pandas as pd

table = pd.DataFrame()
table['prior'] = 1/5, 1/5, 1/5, 1/5, 1/5
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
The likelihood of drawing a vanilla cookie from each bowl is the given proportion of vanilla cookies:
table['likelihood'] = 0, 0.25, 0.5, 0.75, 1
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Once we have priors and likelihoods, the remaining steps are always the same. We compute the unnormalized posteriors:
table['unnorm'] = table['prior'] * table['likelihood']
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And the total probability of the data.
prob_data = table['unnorm'].sum()
prob_data
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Then divide through to get the normalized posteriors.
table['posterior'] = table['unnorm'] / prob_data
table
_____no_output_____
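The four steps above (set priors, multiply by likelihoods, sum to get the total probability of the data, divide) can be wrapped in one small helper. A sketch; the function and variable names here are mine, not from the notebook:

```python
import pandas as pd

def bayes_table(prior, likelihood):
    """Bayesian update via the table method: priors -> posteriors."""
    tbl = pd.DataFrame({"prior": prior, "likelihood": likelihood})
    tbl["unnorm"] = tbl["prior"] * tbl["likelihood"]
    prob_data = tbl["unnorm"].sum()          # total probability of the data
    tbl["posterior"] = tbl["unnorm"] / prob_data
    return tbl

# The cookie problem from above: five bowls, uniform prior.
cookie_table = bayes_table([1/5] * 5, [0, 0.25, 0.5, 0.75, 1])
print(cookie_table["posterior"].tolist())  # approximately [0, 0.1, 0.2, 0.3, 0.4]
```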
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Two things you might notice about these results:1. One of the hypotheses has a posterior probability of 0, which means it has been ruled out entirely. And that makes sense: Bowl 0 contains no vanilla cookies, so if we get a vanilla cookie, we know it's not from Bowl 0.2. The posterior probabilities form a straight lin...
import matplotlib.pyplot as plt

table['posterior'].plot(kind='bar')
plt.xlabel('Bowl #')
plt.ylabel('Posterior probability');
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
**Exercise:** Use the table method to solve the following problem and plot the results as a bar chart.>The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). >>Afterward it was (24% Blue , 20% Green, 16% Orange, 14%...
# Solution goes here

# Solution goes here
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Why does this work?

Now I will explain how the table method works, making two arguments:

1. First, I'll show that it makes sense to normalize the posteriors so they add up to 1.
2. Then I'll show that this step is consistent with Bayes's Theorem, because the total of the unnormalized posteriors is the total probability o...
(1/2)*(1/4) + (1/2)*(1/6)
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And that's correct. But if your intuition did not tell you that, or if you would like to see something closer to a proof, keep going.

Disjunction

In this example, we can describe the outcome in terms of logical operators like this:

> The outcome is 1 if you choose the 4-sided die **and** roll 1 **or** you roll the 6-si...
p_red = 26/52
p_face = 12/52
p_red_face = 6/52
p_red, p_face, p_red_face
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Use Theorem 4 to compute the probability of choosing a card that is either red, or a face card, or both:
# Solution goes here
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Total probability

In the dice example, $H_4$ and $H_6$ are mutually exclusive, which means only one of them can be true, so the purple region is 0. Therefore:

$P(D) = P(H_4 ~and~ D) + P(H_6 ~and~ D) - 0$

Now we can use **Theorem 2** to replace the conjunctions with conditional probabilities:

$P(D) = P(H_4)~P(D|H_4) + P(H_...
table = pd.DataFrame(index=['H4', 'H6'])
table['prior'] = 1/2, 1/2
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And the likelihoods:
table['likelihood'] = 1/4, 1/6
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Now we compute the unnormalized posteriors in the usual way:
table['unnorm'] = table['prior'] * table['likelihood']
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And the total probability of the data:
prob_data = table['unnorm'].sum()
prob_data
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
That's what we got when we solved the problem by hand, so that's good.

**Exercise:** Suppose you have a 4-sided, 6-sided, and 8-sided die. You choose one at random and roll it. What is the probability of getting a 1? Do you expect it to be higher or lower than in the previous example?
# Solution goes here

# Solution goes here
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Prediction and inference

In the previous section, we used a Bayes table to solve this problem:

> Suppose you have a 4-sided die and a 6-sided die. You choose one at random and roll it. What is the probability of getting a 1?

I'll call this a "prediction problem" because we are given a scenario and asked for the probabil...
table = pd.DataFrame(index=['H4', 'H6'])
table['prior'] = 1/2, 1/2
table['likelihood'] = 1/4, 1/6
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Given that the outcome is a 1, there is a 60% chance the die you rolled was 4-sided.

As this example shows, prediction and inference are closely related problems, and we can use the same methods for both.

**Exercise:** Let's add some more dice:

1. Suppose you have a 4-sided, 6-sided, 8-sided, and 12-sided die. You choose on...
# Solution goes here

# Solution goes here
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
tensorflow_privacy/privacy/membership_inference_attack/codelab.ipynb
LuluBeatson/privacy