Training

Train the network on random samples from the data. Try adjusting the epochs and watch the training performance closely using different models.
%matplotlib notebook
from matplotlib import cm

EPOCHS = 200000
max_accuracy = 0

fig, ax = plt.subplots(num='Training')
scatter = ax.scatter(*inputs.T, 2)
plt.show()

for epoch in range(1, EPOCHS + 1):
    sample_index = np.random.randint(0, len(targets))
    MLP.adapt(inputs[sample_index], targets[sample_index])
    if (epoch % 2500) == 0:
        outputs = np.squeeze([MLP.activate(x) for x in inputs])
        predictions = np.round(outputs)
        accuracy = np.sum(predictions == targets) / len(targets) * 100
        if accuracy > max_accuracy:
            max_accuracy = accuracy
        scatter.set_color(cm.RdYlBu(outputs))
        ax.set(title=f'Training {epoch / EPOCHS * 100:.0f}%: {accuracy:.2f}%. Best accuracy: {max_accuracy:.2f}%')
        fig.canvas.draw()
License: MIT | Source: sheet_08/sheet_08_machine-learning_solution.ipynb (ArielMant0/ml2018)
Evaluation
%matplotlib inline
fig, ax = plt.subplots(nrows=2, ncols=2)

ax[0, 0].scatter(*inputs.T, 2, c=outputs, cmap='RdYlBu')
ax[0, 0].set_title('Continuous Classification')

ax[0, 1].set_title('Discretized Classification')
ax[0, 1].scatter(*inputs.T, 2, c=np.round(outputs), cmap='RdYlBu')

ax[1, 0].set_title('Original Labels')
ax[1, 0].scatter(*inputs.T, 2, c=targets, cmap='RdYlBu')

ax[1, 1].set_title('Wrong Classifications')
ax[1, 1].scatter(*inputs.T, 2, c=(targets != np.round(outputs)), cmap='OrRd')

plt.show()
Results

Document your results in the following cell. We are interested in which network configurations you tried and what accuracies they achieved. Did you run into problems during training? Was it steady or did it get stuck? Did you notice anything about the training process? How could we get better results? Tell us!

**Answer:** Two hidden layers and one output layer with a total of 7 neurons can already reach stable results of 90%+ (with some luck in the data generation). During training the model sometimes gets stuck in saddle points for a long time. One way to tackle this is to compute noisy gradients instead of the true gradients -- something that *stochastic gradient descent*, the default training method in most neural network frameworks, exploits as well. Some more information on that specific problem and solution can be found [here](http://www.offconvex.org/2016/03/22/saddlepoints/). Another problem with our training approach is that we train on the complete dataset without a training/evaluation split! If we split the data, we could also make use of "early stopping": instead of using the final state of the network for our evaluation, we could use the state that achieved the best accuracy on the evaluation set during training, by saving it whenever that accuracy improves.

Assignment 2: MLP and RBFN [10 Points]

This exercise is aimed at deepening the understanding of Radial Basis Function Networks and how they relate to Multilayer Perceptrons. Not all of the answers can be found directly in the slides -- so when answering the (more algorithmic) questions, first take a minute and think about how you would go about solving them, and if nothing comes to mind, search the internet for a little bit.
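The early-stopping idea described in the Results answer above can be sketched as follows. This is a minimal illustration, not the notebook's actual model: the `train_step` and `evaluate` helpers are hypothetical stand-ins for one online-learning update and for accuracy on a held-out evaluation set.

```python
import copy
import random

def train_step(model):
    # Hypothetical stand-in: one noisy online-learning update.
    model["w"] += random.uniform(-0.1, 0.1)

def evaluate(model):
    # Hypothetical stand-in: accuracy on a held-out evaluation set.
    return 100 - abs(model["w"] - 0.5) * 100

def train_with_early_stopping(epochs=1000, eval_every=50):
    random.seed(0)
    model = {"w": 0.0}
    best_accuracy, best_model = float("-inf"), None
    for epoch in range(1, epochs + 1):
        train_step(model)
        if epoch % eval_every == 0:
            accuracy = evaluate(model)
            if accuracy > best_accuracy:
                # Snapshot the best-so-far parameters instead of
                # keeping only the final state.
                best_accuracy, best_model = accuracy, copy.deepcopy(model)
    return best_model, best_accuracy
```

The final state of `model` may be worse than the returned snapshot; early stopping reports the snapshot instead.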
If you are interested in a real-life application of both algorithms and how they compare, take a look at this paper: [Comparison between Multi-Layer Perceptron and Radial Basis Function Networks for Sediment Load Estimation in a Tropical Watershed](http://file.scirp.org/pdf/JWARP20121000014_80441700.pdf)

![Schematic of a RBFN](RBFN.png)

We have prepared a little example that shows how radial basis function approximation works in Python. This is not an example implementation of a RBFN but illustrates the work of the hidden neurons.
%matplotlib inline
import numpy as np
from numpy.random import uniform
from scipy.interpolate import Rbf
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import cm

def func(x, y):
    """
    This is the example function that should be fitted.
    Its shape could be described as two peaks close to each other -
    one going up, the other going down.
    """
    return (x + y) * np.exp(-4.0 * (x**2 + y**2))

# number of training points (you may try different values here)
training_size = 50

# sample 'training_size' data points from the input space [-1,1]x[-1,1] ...
x = uniform(-1.0, 1.0, size=training_size)
y = uniform(-1.0, 1.0, size=training_size)
# ... and compute function values for them.
fvals = func(x, y)

# get the approximation via RBF
new_func = Rbf(x, y, fvals)

# Plot both functions:
# create a 100x100 grid of input values
x_grid, y_grid = np.mgrid[-1:1:100j, -1:1:100j]

fig, ax = plt.subplots(ncols=2, sharey=True, figsize=(10, 6))

# This plot represents the original function
f_orig = func(x_grid, y_grid)
img = ax[0].imshow(f_orig, extent=[-1, 1, -1, 1], cmap='RdBu')
ax[0].set(title='Original Function')

# This plots the approximation of the original function by the RBF.
# If the plot looks strange, try to run it again - the sampling
# in the beginning is random.
f_new = new_func(x_grid, y_grid)
plt.imshow(f_new, extent=[-1, 1, -1, 1], cmap='RdBu')
ax[1].set(title='RBF Result', xlim=[-1, 1], ylim=[-1, 1])

# scatter the data points that have been used by the RBF
plt.scatter(x, y, color='black')

fig.colorbar(img, ax=ax)
plt.show()
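To make the role of the hidden neurons concrete: each hidden RBF neuron responds most strongly near its own center, and its response decays with distance. A minimal 1-D sketch with hand-picked (not fitted) centers and weights:

```python
import math

def gaussian_rbf(r, epsilon=4.0):
    # A radial basis function depends only on the distance r to the center.
    return math.exp(-(epsilon * r) ** 2)

def rbf_sum(x, centers, weights, epsilon=4.0):
    # An RBFN output neuron computes a weighted sum of the hidden responses.
    return sum(w * gaussian_rbf(abs(x - c), epsilon)
               for c, w in zip(centers, weights))

centers = [-0.5, 0.0, 0.5]   # illustrative hidden-neuron centers
weights = [1.0, -2.0, 1.0]   # illustrative output weights

# At a center, the output is dominated by that center's neuron
# (plus a little cross-talk from the neighbors).
print(rbf_sum(-0.5, centers, weights))  # ≈ 0.963
```

With a wider basis (smaller `epsilon`) the neurons overlap more and the approximation becomes smoother; the `Rbf` interpolator above additionally solves for the weights so the sum passes through the sampled points.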
Are there Chinese characters in the vocabulary? Why do they make up the largest share of the dataset??? Taken from Twitter? (Some first/last names.) The English BERT base vocabulary seemed more organized. Where is the alphabet? There are many subwords.
!pip install transformers

from transformers import AutoTokenizer            # Or BertTokenizer
from transformers import AutoModelForPreTraining  # Or BertForPreTraining for loading pretraining heads
from transformers import AutoModel                # Or BertModel, for BERT without pretraining heads

model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-base-portuguese-cased')
tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-base-portuguese-cased', do_lower_case=False)

import torch

with open("vocabulary.txt", 'w') as f:
    # For each token...
    for token in tokenizer.vocab.keys():
        # Write it out and escape any unicode characters.
        f.write(token + '\n')

one_chars = []
one_chars_hashes = []

# For each token in the vocabulary...
for token in tokenizer.vocab.keys():
    # Record any single-character tokens.
    if len(token) == 1:
        one_chars.append(token)
    # Record single-character tokens preceded by the two hashes.
    elif len(token) == 3 and token[0:2] == '##':
        one_chars_hashes.append(token)

print('Number of single character tokens:', len(one_chars), '\n')

# Print all of the single characters, 40 per row.
# For every batch of 40 tokens...
for i in range(0, len(one_chars), 40):
    # Limit the end index so we don't go past the end of the list.
    end = min(i + 40, len(one_chars))
    # Print out the tokens, separated by a space.
    print(' '.join(one_chars[i:end]))

print('Number of single character tokens with hashes:', len(one_chars_hashes), '\n')

# Print all of the single characters, 40 per row.
# Strip the hash marks, since they just clutter the display.
tokens = [token.replace('##', '') for token in one_chars_hashes]

# For every batch of 40 tokens...
for i in range(0, len(tokens), 40):
    # Limit the end index so we don't go past the end of the list.
    end = min(i + 40, len(tokens))
    # Print out the tokens, separated by a space.
    print(' '.join(tokens[i:end]))

print('Are the two sets identical?', set(one_chars) == set(tokens))

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

sns.set(style='darkgrid')

# Increase the plot size and font size.
sns.set(font_scale=1.5)
plt.rcParams["figure.figsize"] = (10, 5)

# Measure the length of every token in the vocab.
token_lengths = [len(token) for token in tokenizer.vocab.keys()]

# Plot the number of tokens of each length.
sns.countplot(token_lengths)
plt.title('Vocab Token Lengths')
plt.xlabel('Token Length')
plt.ylabel('# of Tokens')

print('Maximum token length:', max(token_lengths))

num_subwords = 0
subword_lengths = []

# For each token in the vocabulary...
for token in tokenizer.vocab.keys():
    # If it's a subword...
    if len(token) >= 2 and token[0:2] == '##':
        # Tally all subwords.
        num_subwords += 1
        # Measure the subword length (without the hashes) ...
        length = len(token) - 2
        # ... and record it.
        subword_lengths.append(length)

vocab_size = len(tokenizer.vocab.keys())

print('Number of subwords: {:,} of {:,}'.format(num_subwords, vocab_size))

# Calculate the percentage of words that are '##' subwords.
prcnt = float(num_subwords) / vocab_size * 100.0
print('%.1f%%' % prcnt)

sns.countplot(subword_lengths)
plt.title('Subword Token Lengths (w/o "##")')
plt.xlabel('Subword Length')
plt.ylabel('# of ## Subwords')
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
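One way to check the observation about Chinese characters is to count vocabulary entries whose characters fall in the main CJK Unified Ideographs blocks. A minimal sketch (the Unicode ranges below cover only the two main blocks, so this undercounts rare ideographs):

```python
def is_cjk(ch):
    # Main CJK Unified Ideographs blocks (basic + extension A).
    code = ord(ch)
    return 0x4E00 <= code <= 0x9FFF or 0x3400 <= code <= 0x4DBF

def count_cjk_tokens(vocab):
    # Count vocabulary entries made up entirely of CJK characters,
    # ignoring a leading '##' subword marker.
    count = 0
    for token in vocab:
        stripped = token[2:] if token.startswith('##') else token
        if stripped and all(is_cjk(c) for c in stripped):
            count += 1
    return count

sample_vocab = ['casa', '##мир', '中', '##国', 'a']  # toy example
print(count_cjk_tokens(sample_vocab))  # → 2
```

On the real vocabulary you would call `count_cjk_tokens(tokenizer.vocab.keys())`.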
License: MIT | Source: BERTimbau.ipynb (Laelapz/Some_Tests)
The Binomial Distribution

This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python.

Copyright 2020 Allen B. Downey

License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)

The following cell downloads `utils.py`, which contains some utility functions we'll need.
from os.path import basename, exists

def download(url):
    filename = basename(url)
    if not exists(filename):
        from urllib.request import urlretrieve
        local, _ = urlretrieve(url, filename)
        print('Downloaded ' + local)

download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py')
License: MIT | Source: solutions/12_binomial.ipynb (jonathonfletcher/BiteSizeBayes)
If everything we need is installed, the following cell should run with no error messages.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
The Euro problem revisited

In [a previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/07_euro.ipynb) I presented a problem from David MacKay's book, [*Information Theory, Inference, and Learning Algorithms*](http://www.inference.org.uk/mackay/itila/p0.html):

> A statistical statement appeared in The Guardian on Friday January 4, 2002:
>
> > "When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. 'It looks very suspicious to me', said Barry Blight, a statistics lecturer at the London School of Economics. 'If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%'."
>
> But [asks MacKay] do these data give evidence that the coin is biased rather than fair?

To answer this question, we made these modeling decisions:

* If you spin a coin on edge, there is some probability, $x$, that it will land heads up.
* The value of $x$ varies from one coin to the next, depending on how the coin is balanced and other factors.

We started with a uniform prior distribution for $x$, then updated it 250 times, once for each spin of the coin. Then we used the posterior distribution to compute the MAP, posterior mean, and a credible interval.

But we never really answered MacKay's question.

In this notebook, I introduce the binomial distribution and we will use it to solve the Euro problem more efficiently. Then we'll get back to MacKay's question and see if we can find a more satisfying answer.

Binomial distribution

Suppose I tell you that a coin is "fair", that is, the probability of heads is 50%. If you spin it twice, there are four outcomes: `HH`, `HT`, `TH`, and `TT`. All four outcomes have the same probability, 25%.

If we add up the total number of heads, it is either 0, 1, or 2. The probability of 0 and 2 is 25%, and the probability of 1 is 50%.

More generally, suppose the probability of heads is `p` and we spin the coin `n` times.
What is the probability that we get a total of `k` heads?

The answer is given by the binomial distribution:

$P(k; n, p) = \binom{n}{k} p^k (1-p)^{n-k}$

where $\binom{n}{k}$ is the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), usually pronounced "n choose k".

We can compute this expression ourselves, but we can also use the SciPy function `binom.pmf`:
from scipy.stats import binom

n = 2
p = 0.5
ks = np.arange(n+1)

a = binom.pmf(ks, n, p)
a
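To see that the formula and `binom.pmf` agree, the same expression can be evaluated directly with the standard library. This is a sanity check, not part of the original notebook:

```python
from math import comb

def binom_pmf(k, n, p):
    # P(k; n, p) = C(n, k) * p^k * (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two spins of a fair coin: P(0) = 0.25, P(1) = 0.5, P(2) = 0.25.
print([binom_pmf(k, 2, 0.5) for k in range(3)])  # → [0.25, 0.5, 0.25]
```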
If we put this result in a Series, the result is the distribution of `k` for the given values of `n` and `p`.
pmf_k = pd.Series(a, index=ks)
pmf_k
The following function computes the binomial distribution for given values of `n` and `p`:
def make_binomial(n, p):
    """Make a binomial PMF.

    n: number of spins
    p: probability of heads

    returns: Series representing a PMF
    """
    ks = np.arange(n+1)
    a = binom.pmf(ks, n, p)
    pmf_k = pd.Series(a, index=ks)
    return pmf_k
And here's what it looks like with `n=250` and `p=0.5`:
pmf_k = make_binomial(n=250, p=0.5)

pmf_k.plot()
plt.xlabel('Number of heads (k)')
plt.ylabel('Probability')
plt.title('Binomial distribution');
The most likely value in this distribution is 125:
pmf_k.idxmax()
But even though it is the most likely value, the probability that we get exactly 125 heads is only about 5%.
pmf_k[125]
In MacKay's example, we got 140 heads, which is less likely than 125:
pmf_k[140]
In the article MacKay quotes, the statistician says, 'If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%'.

We can use the binomial distribution to check his math. The following function takes a PMF and computes the total probability of values greater than or equal to `threshold`.
def prob_ge(pmf, threshold):
    """Probability of values greater than a threshold.

    pmf: Series representing a PMF
    threshold: value to compare to

    returns: probability
    """
    ge = (pmf.index >= threshold)
    total = pmf[ge].sum()
    return total
Here's the probability of getting 140 heads or more:
prob_ge(pmf_k, 140)
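The same tail probability can be cross-checked with the standard library, independently of `pmf_k`. The sketch below also computes the lower tail, since by symmetry 140 heads (15 above the mean) is exactly as extreme as 110 heads (15 below):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 250, 0.5
upper = sum(binom_pmf(k, n, p) for k in range(140, n + 1))  # P(X >= 140)
lower = sum(binom_pmf(k, n, p) for k in range(0, 111))      # P(X <= 110)

# The two tails are equal, and together they add up to about 7%.
print(upper, lower, upper + lower)
```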
It's about 3.3%, which is less than 7%. The reason is that the statistician includes all values "as extreme as" 140, which includes values less than or equal to 110, because 140 exceeds the expected value by 15 and 110 falls short by 15.

The probability of values less than or equal to 110 is also 3.3%, so the total probability of values "as extreme" as 140 is about 7%.

The point of this calculation is that these extreme values are unlikely if the coin is fair.

That's interesting, but it doesn't answer MacKay's question. Let's see if we can.

Estimating x

As promised, we can use the binomial distribution to solve the Euro problem more efficiently. Let's start again with a uniform prior:
xs = np.arange(101) / 100

uniform = pd.Series(1, index=xs)
uniform /= uniform.sum()
We can use `binom.pmf` to compute the likelihood of the data for each possible value of $x$.
k = 140
n = 250
xs = uniform.index

likelihood = binom.pmf(k, n, p=xs)
Now we can do the Bayesian update in the usual way, multiplying the priors and likelihoods,
posterior = uniform * likelihood
Computing the total probability of the data,
total = posterior.sum()
total
And normalizing the posterior,
posterior /= total
Here's what it looks like.
posterior.plot(label='Uniform')

plt.xlabel('Probability of heads (x)')
plt.ylabel('Probability')
plt.title('Posterior distribution, uniform prior')
plt.legend()
**Exercise:** Based on what we know about coins in the real world, it doesn't seem like every value of $x$ is equally likely. I would expect values near 50% to be more likely and values near the extremes to be less likely. In Notebook 7, we used a triangle prior to represent this belief about the distribution of $x$. The following code makes a PMF that represents a triangle prior.
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)

a = np.append(ramp_up, ramp_down)

triangle = pd.Series(a, index=xs)
triangle /= triangle.sum()
Update this prior with the likelihoods we just computed and plot the results.
# Solution

posterior2 = triangle * likelihood
total2 = posterior2.sum()
total2

# Solution

posterior2 /= total2

# Solution

posterior.plot(label='Uniform')
posterior2.plot(label='Triangle')

plt.xlabel('Probability of heads (x)')
plt.ylabel('Probability')
plt.title('Posterior distribution, uniform prior')
plt.legend();
Evidence

Finally, let's get back to MacKay's question: do these data give evidence that the coin is biased rather than fair?

I'll use a Bayes table to answer this question, so here's the function that makes one:
def make_bayes_table(hypos, prior, likelihood):
    """Make a Bayes table.

    hypos: sequence of hypotheses
    prior: prior probabilities
    likelihood: sequence of likelihoods

    returns: DataFrame
    """
    table = pd.DataFrame(index=hypos)
    table['prior'] = prior
    table['likelihood'] = likelihood
    table['unnorm'] = table['prior'] * table['likelihood']
    prob_data = table['unnorm'].sum()
    table['posterior'] = table['unnorm'] / prob_data
    return table
Recall that data, $D$, is considered evidence in favor of a hypothesis, $H$, if the posterior probability is greater than the prior, that is, if

$P(H|D) > P(H)$

For this example, I'll call the hypotheses `fair` and `biased`:
hypos = ['fair', 'biased']
And just to get started, I'll assume that the prior probabilities are 50/50.
prior = [0.5, 0.5]
Now we have to compute the probability of the data under each hypothesis.

If the coin is fair, the probability of heads is 50%, and we can compute the probability of the data (140 heads out of 250 spins) using the binomial distribution:
k = 140
n = 250

like_fair = binom.pmf(k, n, p=0.5)
like_fair
So that's the probability of the data, given that the coin is fair.

But if the coin is biased, what's the probability of the data? Well, that depends on what "biased" means.

If we know ahead of time that "biased" means the probability of heads is 56%, we can use the binomial distribution again:
like_biased = binom.pmf(k, n, p=0.56)
like_biased
Now we can put the likelihoods in the Bayes table:
likes = [like_fair, like_biased]

make_bayes_table(hypos, prior, likes)
The posterior probability of `biased` is about 86%, so the data is evidence that the coin is biased, at least for this definition of "biased".

But we used the data to define the hypothesis, which seems like cheating. To be fair, we should define "biased" before we see the data.

Uniformly distributed bias

Suppose "biased" means that the probability of heads is anything except 50%, and all other values are equally likely.

We can represent that definition by making a uniform distribution and removing 50%.
biased_uniform = uniform.copy()
biased_uniform[50] = 0
biased_uniform /= biased_uniform.sum()
Now, to compute the probability of the data under this hypothesis, we compute the probability of the data for each value of $x$.
xs = biased_uniform.index
likelihood = binom.pmf(k, n, xs)
And then compute the total probability in the usual way:
like_uniform = np.sum(biased_uniform * likelihood)
like_uniform
So that's the probability of the data under the "biased uniform" hypothesis.

Now we make a Bayes table that compares the hypotheses `fair` and `biased uniform`:
hypos = ['fair', 'biased uniform']
likes = [like_fair, like_uniform]

make_bayes_table(hypos, prior, likes)
Using this definition of `biased`, the posterior is less than the prior, so the data are evidence that the coin is *fair*.

In this example, the data might support the fair hypothesis or the biased hypothesis, depending on the definition of "biased".

**Exercise:** Suppose "biased" doesn't mean every value of $x$ is equally likely. Maybe values near 50% are more likely and values near the extremes are less likely. In the previous exercise we created a PMF that represents a triangle-shaped distribution.

We can use it to represent an alternative definition of "biased":
biased_triangle = triangle.copy()
biased_triangle[50] = 0
biased_triangle /= biased_triangle.sum()
Compute the total probability of the data under this definition of "biased" and use a Bayes table to compare it with the fair hypothesis.

Is the data evidence that the coin is biased?
# Solution

like_triangle = np.sum(biased_triangle * likelihood)
like_triangle

# Solution

hypos = ['fair', 'biased triangle']
likes = [like_fair, like_triangle]

make_bayes_table(hypos, prior, likes)

# Solution

# For this definition of "biased",
# the data are slightly in favor of the fair hypothesis.
Bayes factor

In the previous section, we used a Bayes table to see whether the data are in favor of the fair or biased hypothesis.

I assumed that the prior probabilities were 50/50, but that was an arbitrary choice. And it was unnecessary, because we don't really need a Bayes table to say whether the data favor one hypothesis or another: we can just look at the likelihoods.

Under the first definition of biased, `x=0.56`, the likelihood of the biased hypothesis is higher:
like_fair, like_biased
Under the biased uniform definition, the likelihood of the fair hypothesis is higher.
like_fair, like_uniform
The ratio of these likelihoods tells us which hypothesis the data support.

If the ratio is less than 1, the data support the second hypothesis:
like_fair / like_biased
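This first ratio can be reproduced with the standard library, using the same numbers as above. A sanity check, not part of the original notebook:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

like_fair = binom_pmf(140, 250, 0.5)
like_biased = binom_pmf(140, 250, 0.56)

# A ratio below 1 means the data favor the second (biased) hypothesis.
print(like_fair / like_biased)
```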
If the ratio is greater than 1, the data support the first hypothesis:
like_fair / like_uniform
This likelihood ratio is called a [Bayes factor](https://en.wikipedia.org/wiki/Bayes_factor); it provides a concise way to present the strength of a dataset as evidence for or against a hypothesis.

Summary

In this notebook I introduced the binomial distribution and used it to solve the Euro problem more efficiently. Then we used the results to (finally) answer the original version of the Euro problem, considering whether the data support the hypothesis that the coin is fair or biased. We found that the answer depends on how we define "biased". And we summarized the results using a Bayes factor, which quantifies the strength of the evidence.

[In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/13_price.ipynb) we'll start on a new problem based on the television game show *The Price Is Right*.

Exercises

**Exercise:** In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, `x`.

Based on previous tests, the distribution of `x` in the population of designs is roughly uniform between 10% and 40%.

Now suppose the new ultra-secret Alien Blaster 9000 is being tested. In a press conference, a Defense League general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."

Is this data good or bad; that is, does it increase or decrease your estimate of `x` for the Alien Blaster 9000?

Plot the prior and posterior distributions, and use the following function to compute the prior and posterior means.
def pmf_mean(pmf):
    """Compute the mean of a PMF.

    pmf: Series representing a PMF

    return: float
    """
    return np.sum(pmf.index * pmf)

# Solution

xs = np.linspace(0.1, 0.4)
prior = pd.Series(1, index=xs)
prior /= prior.sum()

# Solution

likelihood = xs**2 + (1-xs)**2

# Solution

posterior = prior * likelihood
posterior /= posterior.sum()

# Solution

prior.plot(color='gray', label='prior')
posterior.plot(label='posterior')

plt.xlabel('Probability of success (x)')
plt.ylabel('Probability')
plt.ylim(0, 0.027)
plt.title('Distribution of x before and after testing')
plt.legend();

# Solution

pmf_mean(prior), pmf_mean(posterior)

# With this prior, being "consistent" is more likely
# to mean "consistently bad".
1.1. Introducing IPython and the Jupyter Notebook
print("Hello world!")

2 + 2

_ * 3

!ls

%lsmagic

%%writefile test.txt
Hello world!

# Let's check what this file contains.
with open('test.txt', 'r') as f:
    print(f.read())

%run?

from IPython.display import HTML, SVG, YouTubeVideo

HTML('''
<table style="border: 2px solid black;">
''' +
     ''.join(['<tr>' +
              ''.join([f'<td>{row},{col}</td>'
                       for col in range(5)]) +
              '</tr>'
              for row in range(5)]) +
     '''
</table>
''')

SVG('''<svg width="600" height="80">''' +
    ''.join([f'''<circle cx="{(30 + 3*i) * (10 - i)}"
                         cy="30" r="{3. * float(i)}"
                         fill="red" stroke-width="2" stroke="black">
                 </circle>'''
             for i in range(10)]) +
    '''</svg>''')

YouTubeVideo('VQBZ2MqWBZI')
License: MIT | Source: chapter01_basic/01_notebook.ipynb (aaazzz640/cookbook-2nd-code)
DataLit Homework Assignment Week 4

Historical sales data from 45 stores. This dataset comes from Kaggle (https://www.kaggle.com/manjeetsingh/retaildataset).
import pandas as pd
import matplotlib.pyplot as plt

# Stores
# Contains anonymized information about the 45 stores, indicating the type and size of store.
stores = 'dataset/stores data-set.csv'

# Features
# Contains additional data related to the store, department, and regional activity for the given dates.
feature = 'dataset/Features data set.csv'

# Sales
# Contains historical sales data, which covers 2010-02-05 to 2012-11-01.
sales = 'dataset/sales data-set.csv'

data_stores = pd.read_csv(stores)
data_feature = pd.read_csv(feature)
data_sales = pd.read_csv(sales)

data_stores.head()
data_feature.head()
data_sales.head()

# drop all MarkDown columns inside data_feature
data_feature.drop(['MarkDown1', 'MarkDown2', 'MarkDown3', 'MarkDown4', 'MarkDown5'],
                  axis='columns', inplace=True)
data_feature.head()

# Merge the data into a single DataFrame
df = pd.merge(pd.merge(data_feature, data_sales, on=['Store', 'Date', 'IsHoliday']),
              data_stores, on=['Store'])

# Convert Date to the pandas datetime format
df['Date'] = pd.to_datetime(df['Date'])

df.head()
df.dtypes
df.shape
df.Type.value_counts()

# df_average_sales_week = df.groupby(by=['Date'], as_index=False)['Weekly_Sales'].sum()
# df_average_sales = df_average_sales_week.sort_values('Weekly_Sales', ascending=False)
# plt.figure(figsize=(15,5))
# plt.plot(df_average_sales_week.Date, df_average_sales_week.Weekly_Sales)
# plt.show()

df.groupby([df.Date.dt.year, df.Date.dt.month]).Weekly_Sales.mean()
df.groupby(df.Date.dt.year).Weekly_Sales.mean()
df.groupby([df.Date.dt.year, df.Date.dt.month]).Weekly_Sales.mean().plot()

# fig_size = plt.rcParams["figure.figsize"]
# plt.plot(df.Date, df.Weekly_Sales, 'o-')
# fig_size[0] = 14
# fig_size[1] = 4
# plt.rcParams["figure.figsize"] = fig_size
# plt.ylabel('Label 1')
# plt.show()

fig, ax = plt.subplots()
ax.plot(df.Date.dt.year, df.Weekly_Sales)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
       title='About as simple as it gets, folks')
ax.grid()
# fig.savefig("test.png")
plt.show()

df.describe().transpose()
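The nested `pd.merge` call above joins the three tables on their shared keys. A toy sketch of the same multi-key inner join (the column values below are made up for illustration):

```python
import pandas as pd

features = pd.DataFrame({'Store': [1, 1, 2], 'Date': ['d1', 'd2', 'd1'],
                         'Temperature': [40.0, 42.0, 55.0]})
sales = pd.DataFrame({'Store': [1, 1, 2], 'Date': ['d1', 'd2', 'd1'],
                      'Weekly_Sales': [100, 120, 90]})
stores = pd.DataFrame({'Store': [1, 2], 'Type': ['A', 'B']})

# Rows match only when Store AND Date agree; Type is then looked up by Store.
df = pd.merge(pd.merge(features, sales, on=['Store', 'Date']), stores, on=['Store'])
print(df)
```

The inner merge pairs feature rows with sales rows that share both `Store` and `Date`; the outer merge then broadcasts each store's `Type` onto every matching row.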
License: MIT | Source: week4/.ipynb_checkpoints/Retail-Data-Analytics-checkpoint.ipynb (guillainbisimwa/Data-lit)
A/B testing, traffic shifting and autoscaling

Introduction

In this lab you will create an endpoint with multiple variants, splitting the traffic between them. Then after testing and reviewing the endpoint performance metrics, you will shift the traffic to one variant and configure it to autoscale.

Table of Contents
- [1. Create an endpoint with multiple variants](c3w2-1.)
  - [1.1. Construct Docker Image URI](c3w2-1.1.)
    - [Exercise 1](c3w2-ex-1)
  - [1.2. Create Amazon SageMaker Models](c3w2-1.2.)
    - [Exercise 2](c3w2-ex-2)
    - [Exercise 3](c3w2-ex-3)
  - [1.3. Set up Amazon SageMaker production variants](c3w2-1.3.)
    - [Exercise 4](c3w2-ex-4)
    - [Exercise 5](c3w2-ex-5)
  - [1.4. Configure and create endpoint](c3w2-1.4.)
    - [Exercise 6](c3w2-ex-6)
- [2. Test model](c3w2-2.)
  - [2.1. Test the model on a few sample strings](c3w2-2.1.)
    - [Exercise 7](c3w2-ex-7)
  - [2.2. Generate traffic and review the endpoint performance metrics](c3w2-2.2.)
- [3. Shift the traffic to one variant and review the endpoint performance metrics](c3w2-3.)
  - [Exercise 8](c3w2-ex-8)
- [4. Configure one variant to autoscale](c3w2-4.)

Let's install and import the required modules.
# please ignore warning messages during the installation
!pip install --disable-pip-version-check -q sagemaker==2.35.0
!conda install -q -y pytorch==1.6.0 -c pytorch
!pip install --disable-pip-version-check -q transformers==3.5.1

import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'

import boto3
import sagemaker
import pandas as pd
import botocore

config = botocore.config.Config(user_agent_extra='dlai-pds/c3/w2')

# low-level service client of the boto3 session
sm = boto3.client(service_name='sagemaker', config=config)

sm_runtime = boto3.client('sagemaker-runtime', config=config)

sess = sagemaker.Session(sagemaker_client=sm,
                         sagemaker_runtime_client=sm_runtime)

bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = sess.boto_region_name

cw = boto3.client(service_name='cloudwatch', config=config)

autoscale = boto3.client(service_name="application-autoscaling", config=config)
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
1. Create an endpoint with multiple variants Two models trained to analyze customer feedback and classify the messages into positive (1), neutral (0), and negative (-1) sentiments are saved in the following S3 bucket paths. These `tar.gz` files contain the model artifacts, which result from model training.
model_a_s3_uri = 's3://dlai-practical-data-science/models/ab/variant_a/model.tar.gz' model_b_s3_uri = 's3://dlai-practical-data-science/models/ab/variant_b/model.tar.gz'
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
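These `tar.gz` archives can be inspected locally after downloading them (for example with `aws s3 cp`). The sketch below builds and lists a dummy archive with Python's `tarfile` module purely to illustrate the layout; the member names `model.pth` and `code/inference.py` are assumptions for illustration, not the actual contents of the lab's artifacts.

```python
# Hypothetical sketch: SageMaker model artifacts are a gzipped tarball.
# We build a dummy model.tar.gz in memory (the real files come from training)
# and list its members the way you might inspect a downloaded artifact.
import io
import tarfile

def build_dummy_artifact() -> bytes:
    """Package hypothetical artifact files the way `tar -czf model.tar.gz .` would."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, payload in [("model.pth", b"weights"), ("code/inference.py", b"# handler")]:
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def list_artifact(data: bytes) -> list:
    """List member names, as you would for a downloaded model.tar.gz."""
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
        return tar.getnames()

members = list_artifact(build_dummy_artifact())
print(members)
```

The same `list_artifact` pattern works on a real downloaded artifact if you pass it the file's bytes.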
Let's deploy an endpoint splitting the traffic between these two models 50/50 to perform A/B Testing. Instead of creating a PyTorch Model object and calling the `model.deploy()` function, you will create an `Endpoint configuration` with multiple model variants. Here is the workflow you will follow to create an endpoint: 1.1. Construct Docker Image URITo create the models in Amazon SageMaker, you will need the URI of the pre-built SageMaker Docker image stored in Amazon Elastic Container Registry (ECR). Let's construct the ECR URI which you will pass into the `create_model` function later. Set the instance type. For the purposes of this lab, you will use a relatively small instance. Please refer to [this link](https://aws.amazon.com/sagemaker/pricing/) for additional instance types that may work for your use cases outside of this lab.
inference_instance_type = 'ml.m5.large'
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Exercise 1Create an ECR URI using the `'PyTorch'` framework. Review other parameters of the image.
inference_image_uri = sagemaker.image_uris.retrieve( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes framework='pytorch', # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes version='1.6.0', instance_type=inference_instance_type, region=region, py_version='py3', image_scope='inference' ) print(inference_image_uri)
763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.6.0-cpu-py3
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
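The URI printed above follows the ECR naming pattern `{account}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}`. As a sketch (plain string handling only, no AWS calls), you can pull the pieces apart like this:

```python
# Parse an ECR image URI into its components. This is only string splitting
# on the documented URI shape; it makes no calls to AWS.
def parse_ecr_uri(uri: str) -> dict:
    registry, _, rest = uri.partition("/")       # host vs repository:tag
    repository, _, tag = rest.partition(":")
    parts = registry.split(".")                  # [account, 'dkr', 'ecr', region, ...]
    return {"account": parts[0], "region": parts[3],
            "repository": repository, "tag": tag}

parsed = parse_ecr_uri("763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.6.0-cpu-py3")
print(parsed)
```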
1.2. Create Amazon SageMaker ModelsAn Amazon SageMaker Model includes information such as the S3 location of the model, the container image that can be used for inference with that model, the execution role, and the model name. Let's construct the model names.
import time from pprint import pprint timestamp = int(time.time()) model_name_a = '{}-{}'.format('a', timestamp) model_name_b = '{}-{}'.format('b', timestamp)
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
You will use the following function to check if the model already exists in Amazon SageMaker.
def check_model_existence(model_name): for model in sm.list_models()['Models']: if model_name == model['ModelName']: return True return False
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Exercise 2Create an Amazon SageMaker Model based on the `model_a_s3_uri` data.**Instructions**: Use `sm.create_model` function, which requires the model name, Amazon SageMaker execution role and a primary container description (`PrimaryContainer` dictionary). The `PrimaryContainer` includes the S3 bucket location of the model artifacts (`ModelDataUrl` key) and ECR URI (`Image` key).
if not check_model_existence(model_name_a): model_a = sm.create_model( ModelName=model_name_a, ExecutionRoleArn=role, PrimaryContainer={ ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes 'ModelDataUrl': model_a_s3_uri, # Replace None 'Image': inference_image_uri # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes } ) pprint(model_a) else: print("Model {} already exists".format(model_name_a))
{'ModelArn': 'arn:aws:sagemaker:us-east-1:299076282420:model/a-1638384450', 'ResponseMetadata': {'HTTPHeaders': {'content-length': '74', 'content-type': 'application/x-amz-json-1.1', 'date': 'Wed, 01 Dec 2021 18:48:32 GMT', 'x-amzn-requestid': '5321bf92-6e4f-471e-ae2b-6a0e62328d57'}, 'HTTPStatusCode': 200, 'RequestId': '5321bf92-6e4f-471e-ae2b-6a0e62328d57', 'RetryAttempts': 0}}
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Exercise 3Create an Amazon SageMaker Model based on the `model_b_s3_uri` data.**Instructions**: Use the example in the cell above.
if not check_model_existence(model_name_b): model_b = sm.create_model( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes ModelName=model_name_b, ExecutionRoleArn=role, ### END SOLUTION - DO NOT delete this comment for grading purposes PrimaryContainer={ 'ModelDataUrl': model_b_s3_uri, 'Image': inference_image_uri } ) pprint(model_b) else: print("Model {} already exists".format(model_name_b))
{'ModelArn': 'arn:aws:sagemaker:us-east-1:299076282420:model/b-1638384450', 'ResponseMetadata': {'HTTPHeaders': {'content-length': '74', 'content-type': 'application/x-amz-json-1.1', 'date': 'Wed, 01 Dec 2021 18:48:48 GMT', 'x-amzn-requestid': '1b67df35-4d26-41f1-9fa4-be4154bd7c06'}, 'HTTPStatusCode': 200, 'RequestId': '1b67df35-4d26-41f1-9fa4-be4154bd7c06', 'RetryAttempts': 0}}
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
1.3. Set up Amazon SageMaker production variantsA production variant is a packaged SageMaker Model combined with the configuration related to how that model will be hosted. You have constructed the model in the section above. The hosting resources configuration includes information on how you want that model to be hosted: the number and type of instances, a pointer to the SageMaker package model, as well as a variant name and variant weight. A single SageMaker Endpoint can actually include multiple production variants. Exercise 4Create an Amazon SageMaker production variant for the SageMaker Model with the `model_name_a`.**Instructions**: Use the `production_variant` function passing the `model_name_a` and instance type defined above.```pythonvariantA = production_variant( model_name=..., SageMaker Model name instance_type=..., instance type initial_weight=50, traffic distribution weight initial_instance_count=1, instance count variant_name='VariantA', production variant name)```
from sagemaker.session import production_variant variantA = production_variant( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes model_name=model_name_a, # Replace None instance_type=inference_instance_type, # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes initial_weight=50, initial_instance_count=1, variant_name='VariantA', ) print(variantA)
{'ModelName': 'a-1638384450', 'InstanceType': 'ml.m5.large', 'InitialInstanceCount': 1, 'VariantName': 'VariantA', 'InitialVariantWeight': 50}
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
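Roughly speaking, with `initial_weight=50` on each variant, every invocation is routed to a variant with probability proportional to its weight. A toy pure-Python simulation (hypothetical, not part of the lab's code) of that 50/50 split:

```python
# Toy simulation: SageMaker sends each request to a variant with
# probability weight / sum(weights). We mimic that with random.choices.
import random
from collections import Counter

def route_requests(weights, n, seed=0):
    rng = random.Random(seed)                                  # fixed seed for repeatability
    names = list(weights)
    picks = rng.choices(names, weights=[weights[v] for v in names], k=n)
    return Counter(picks)

counts = route_requests({"VariantA": 50, "VariantB": 50}, n=10_000)
print(counts)  # roughly 5,000 requests per variant
```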
Exercise 5Create an Amazon SageMaker production variant for the SageMaker Model with the `model_name_b`.**Instructions**: See the required arguments in the cell above.
variantB = production_variant( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes model_name=model_name_b, # Replace all None instance_type=inference_instance_type, # Replace all None initial_weight=50, # Replace all None ### END SOLUTION - DO NOT delete this comment for grading purposes initial_instance_count=1, variant_name='VariantB' ) print(variantB)
{'ModelName': 'b-1638384450', 'InstanceType': 'ml.m5.large', 'InitialInstanceCount': 1, 'VariantName': 'VariantB', 'InitialVariantWeight': 50}
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
1.4. Configure and create the endpointYou will use the following functions to check if the endpoint configuration and endpoint itself already exist in Amazon SageMaker.
def check_endpoint_config_existence(endpoint_config_name): for endpoint_config in sm.list_endpoint_configs()['EndpointConfigs']: if endpoint_config_name == endpoint_config['EndpointConfigName']: return True return False def check_endpoint_existence(endpoint_name): for endpoint in sm.list_endpoints()['Endpoints']: if endpoint_name == endpoint['EndpointName']: return True return False
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Create the endpoint configuration by specifying the name and pointing to the two production variants you just configured, which tell SageMaker how you want to host those models.
endpoint_config_name = '{}-{}'.format('ab', timestamp) if not check_endpoint_config_existence(endpoint_config_name): endpoint_config = sm.create_endpoint_config( EndpointConfigName=endpoint_config_name, ProductionVariants=[variantA, variantB] ) pprint(endpoint_config) else: print("Endpoint configuration {} already exists".format(endpoint_config_name))
{'EndpointConfigArn': 'arn:aws:sagemaker:us-east-1:299076282420:endpoint-config/ab-1638384450', 'ResponseMetadata': {'HTTPHeaders': {'content-length': '94', 'content-type': 'application/x-amz-json-1.1', 'date': 'Wed, 01 Dec 2021 18:51:17 GMT', 'x-amzn-requestid': '5b63d51e-5ea7-491d-8b11-24f79b2d378e'}, 'HTTPStatusCode': 200, 'RequestId': '5b63d51e-5ea7-491d-8b11-24f79b2d378e', 'RetryAttempts': 0}}
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Construct the endpoint name.
model_ab_endpoint_name = '{}-{}'.format('ab', timestamp) print('Endpoint name: {}'.format(model_ab_endpoint_name))
Endpoint name: ab-1638384450
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Exercise 6Create an endpoint with the endpoint name and configuration defined above.
if not check_endpoint_existence(model_ab_endpoint_name): endpoint_response = sm.create_endpoint( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes EndpointName=model_ab_endpoint_name, # Replace None EndpointConfigName=endpoint_config_name # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes ) print('Creating endpoint {}'.format(model_ab_endpoint_name)) pprint(endpoint_response) else: print("Endpoint {} already exists".format(model_ab_endpoint_name))
Creating endpoint ab-1638384450 {'EndpointArn': 'arn:aws:sagemaker:us-east-1:299076282420:endpoint/ab-1638384450', 'ResponseMetadata': {'HTTPHeaders': {'content-length': '81', 'content-type': 'application/x-amz-json-1.1', 'date': 'Wed, 01 Dec 2021 18:51:49 GMT', 'x-amzn-requestid': '4bb0f1e1-5d2a-426b-97c6-7ff757363c56'}, 'HTTPStatusCode': 200, 'RequestId': '4bb0f1e1-5d2a-426b-97c6-7ff757363c56', 'RetryAttempts': 0}}
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Review the created endpoint configuration in the AWS console.**Instructions**:- open the link- notice that you are in the section Amazon SageMaker -> Endpoint configuration- check the name of the endpoint configuration, its Amazon Resource Name (ARN) and production variants- click on the production variants and check their container information: image and model data location
from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpointConfig/{}">REST Endpoint configuration</a></b>'.format( region, endpoint_config_name ) ) )
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Review the created endpoint in the AWS console.**Instructions**:- open the link- notice that you are in the section Amazon SageMaker -> Endpoints- check the name of the endpoint, its ARN and status- below you can review the monitoring metrics such as CPU, memory and disk utilization. Further down you can see the endpoint configuration settings with its production variants
from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name)))
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Wait for the endpoint to deploy. _This cell will take approximately 5-10 minutes to run._
%%time waiter = sm.get_waiter('endpoint_in_service') waiter.wait(EndpointName=model_ab_endpoint_name)
CPU times: user 220 ms, sys: 18.9 ms, total: 238 ms Wall time: 8min 32s
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
_Wait until the ^^ endpoint ^^ is deployed_ 2. Test model 2.1. Test the model on a few sample stringsHere, you will pass sample strings of text to the endpoint in order to see the sentiment. You are given one example for each sentiment; feel free to play around and change the strings yourself! Exercise 7Create an Amazon SageMaker Predictor based on the deployed endpoint.**Instructions**: Use the `Predictor` object with the following parameters. Pass JSON Lines serializer and deserializer objects, created with `JSONLinesSerializer()` and `JSONLinesDeserializer()`, respectively. More information about the serializers can be found [here](https://sagemaker.readthedocs.io/en/stable/api/inference/serializers.html).```pythonpredictor = Predictor( endpoint_name=..., endpoint name serializer=..., a serializer object, used to encode data for an inference endpoint deserializer=..., a deserializer object, used to decode data from an inference endpoint sagemaker_session=sess)```
from sagemaker.predictor import Predictor from sagemaker.serializers import JSONLinesSerializer from sagemaker.deserializers import JSONLinesDeserializer inputs = [ {"features": ["I love this product!"]}, {"features": ["OK, but not great."]}, {"features": ["This is not the right product."]}, ] predictor = Predictor( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes endpoint_name=model_ab_endpoint_name, # Replace None serializer=JSONLinesSerializer(), # Replace None deserializer=JSONLinesDeserializer(), # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes sagemaker_session=sess ) predicted_classes = predictor.predict(inputs) for predicted_class in predicted_classes: print("Predicted class {} with probability {}".format(predicted_class['predicted_label'], predicted_class['probability']))
Predicted class 1 with probability 0.9605445861816406 Predicted class 0 with probability 0.5798221230506897 Predicted class -1 with probability 0.7667604684829712
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
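Under the hood, `JSONLinesSerializer` and `JSONLinesDeserializer` work with the JSON Lines wire format: one JSON object per newline-delimited line. A minimal sketch of that encoding and decoding (illustration only; the real classes in `sagemaker.serializers` and `sagemaker.deserializers` also handle streams and content types):

```python
# Sketch of the JSON Lines wire format the SageMaker serializers implement.
import json

def to_jsonlines(records):
    """Encode a list of dicts as newline-delimited JSON, one object per line."""
    return "\n".join(json.dumps(record) for record in records)

def from_jsonlines(payload):
    """Decode newline-delimited JSON back into a list of dicts."""
    return [json.loads(line) for line in payload.splitlines() if line.strip()]

records_in = [{"features": ["I love this product!"]},
              {"features": ["OK, but not great."]}]
payload = to_jsonlines(records_in)
assert from_jsonlines(payload) == records_in   # round-trips losslessly
```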
2.2. Generate traffic and review the endpoint performance metricsNow you will generate traffic. To analyze the endpoint performance you will review some of the metrics that Amazon SageMaker emits in CloudWatch: CPU Utilization, Latency and Invocations. Full list of namespaces and metrics can be found [here](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html). CloudWatch `get_metric_statistics` documentation can be found [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html).But before that, let's create a function that will help to extract the results from CloudWatch and plot them.
def plot_endpoint_metrics_for_variants(endpoint_name, namespace_name, metric_name, variant_names, start_time, end_time): try: joint_variant_metrics = None for variant_name in variant_names: metrics = cw.get_metric_statistics( # extracts the results in a dictionary format Namespace=namespace_name, # the namespace of the metric, e.g. "AWS/SageMaker" MetricName=metric_name, # the name of the metric, e.g. "CPUUtilization" StartTime=start_time, # the time stamp that determines the first data point to return EndTime=end_time, # the time stamp that determines the last data point to return Period=60, # the granularity, in seconds, of the returned data points Statistics=["Sum"], # the metric statistics Dimensions=[ # dimensions, as CloudWatch treats each unique combination of dimensions as a separate metric {"Name": "EndpointName", "Value": endpoint_name}, {"Name": "VariantName", "Value": variant_name} ], ) if metrics["Datapoints"]: # access the results from the dictionary using the key "Datapoints" df_metrics = pd.DataFrame(metrics["Datapoints"]) \ .sort_values("Timestamp") \ .set_index("Timestamp") \ .drop("Unit", axis=1) \ .rename(columns={"Sum": variant_name}) # rename the column with the metric results to the variant name if joint_variant_metrics is None: joint_variant_metrics = df_metrics else: joint_variant_metrics = joint_variant_metrics.join(df_metrics, how="outer") joint_variant_metrics.plot(title=metric_name) except: pass
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Establish wide enough time bounds to show all the charts using the same timeframe:
from datetime import datetime, timedelta start_time = datetime.now() - timedelta(minutes=30) end_time = datetime.now() + timedelta(minutes=30) print('Start Time: {}'.format(start_time)) print('End Time: {}'.format(end_time))
Start Time: 2021-12-01 18:38:05.978052 End Time: 2021-12-01 19:38:05.978095
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Set the list of the variant names to analyze.
variant_names = [variantA["VariantName"], variantB["VariantName"]] print(variant_names)
['VariantA', 'VariantB']
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Run some predictions and view the metrics for each variant. _This cell will take approximately 1-2 minutes to run._
%%time for i in range(0, 100): predicted_classes = predictor.predict(inputs)
CPU times: user 231 ms, sys: 7.57 ms, total: 239 ms Wall time: 1min 37s
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
_Make sure the predictions ^^ above ^^ ran successfully_ Let’s query CloudWatch to get a few metrics that are split across variants. If you see `Metrics not yet available`, please be patient as metrics may take a few minutes to appear in CloudWatch.
time.sleep(30) # Sleep to accomodate a slight delay in metrics gathering # CPUUtilization # The sum of each individual CPU core's utilization. # The CPU utilization of each core can range between 0 and 100. For example, if there are four CPUs, CPUUtilization can range from 0% to 400%. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="/aws/sagemaker/Endpoints", metric_name="CPUUtilization", variant_names=variant_names, start_time=start_time, end_time=end_time ) # Invocations # The number of requests sent to a model endpoint. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="Invocations", variant_names=variant_names, start_time=start_time, end_time=end_time ) # InvocationsPerInstance # The number of invocations sent to a model, normalized by InstanceCount in each production variant. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="InvocationsPerInstance", variant_names=variant_names, start_time=start_time, end_time=end_time ) # ModelLatency # The interval of time taken by a model to respond as viewed from SageMaker (in microseconds). plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="ModelLatency", variant_names=variant_names, start_time=start_time, end_time=end_time )
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
3. Shift the traffic to one variant and review the endpoint performance metricsGenerally, the winning model would need to be chosen. The decision would be made based on the endpoint performance metrics and some other business-related evaluations. Here you can assume that the winning model is Variant B and shift all traffic to it. Construct a list with the updated endpoint weights. _**No downtime** occurs during this traffic-shift activity._ _This may take a few minutes. Please be patient._
updated_endpoint_config = [ { "VariantName": variantA["VariantName"], "DesiredWeight": 0, }, { "VariantName": variantB["VariantName"], "DesiredWeight": 100, }, ]
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
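Note that desired weights are relative, not percentages: each variant receives `weight / sum(all weights)` of the traffic, so `0` and `100` here send everything to Variant B (and weights of `1` and `2` would give a one-third/two-thirds split). A quick sketch of that normalization:

```python
# Hedged sketch: SageMaker normalizes DesiredWeight values into traffic fractions.
def traffic_fractions(weights):
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}

print(traffic_fractions({"VariantA": 0, "VariantB": 100}))  # {'VariantA': 0.0, 'VariantB': 1.0}
print(traffic_fractions({"VariantA": 1, "VariantB": 2}))    # one third vs. two thirds
```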
Exercise 8Update variant weights in the configuration of the existing endpoint.**Instructions**: Use the `sm.update_endpoint_weights_and_capacities` function, passing the endpoint name and list of updated weights for each of the variants that you defined above.
sm.update_endpoint_weights_and_capacities( ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes EndpointName=model_ab_endpoint_name, # Replace None DesiredWeightsAndCapacities=updated_endpoint_config # Replace None ### END SOLUTION - DO NOT delete this comment for grading purposes )
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
_Wait for the ^^ endpoint update ^^ to complete above_This may take a few minutes. Please be patient. _There is **no downtime** while the update is applying._ While waiting for the update (or afterwards) you can review the endpoint in the AWS console.**Instructions**:- open the link- notice that you are in the section Amazon SageMaker -> Endpoints- check the name of the endpoint, its ARN and status (`Updating` or `InService`)- below you can see the endpoint runtime settings with the updated weights
from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name))) waiter = sm.get_waiter("endpoint_in_service") waiter.wait(EndpointName=model_ab_endpoint_name)
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Run some more predictions and view the metrics for each variant. _This cell will take approximately 1-2 minutes to run._
%%time for i in range(0, 100): predicted_classes = predictor.predict(inputs)
CPU times: user 222 ms, sys: 15.7 ms, total: 238 ms Wall time: 1min 31s
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
_Make sure the predictions ^^ above ^^ ran successfully_If you see `Metrics not yet available`, please be patient as metrics may take a few minutes to appear in CloudWatch. Compare the results with the plots above.
# CPUUtilization # The sum of each individual CPU core's utilization. # The CPU utilization of each core can range between 0 and 100. For example, if there are four CPUs, CPUUtilization can range from 0% to 400%. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="/aws/sagemaker/Endpoints", metric_name="CPUUtilization", variant_names=variant_names, start_time=start_time, end_time=end_time ) # Invocations # The number of requests sent to a model endpoint. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="Invocations", variant_names=variant_names, start_time=start_time, end_time=end_time ) # InvocationsPerInstance # The number of invocations sent to a model, normalized by InstanceCount in each production variant. plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="InvocationsPerInstance", variant_names=variant_names, start_time=start_time, end_time=end_time ) # ModelLatency # The interval of time taken by a model to respond as viewed from SageMaker (in microseconds). plot_endpoint_metrics_for_variants( endpoint_name=model_ab_endpoint_name, namespace_name="AWS/SageMaker", metric_name="ModelLatency", variant_names=variant_names, start_time=start_time, end_time=end_time )
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
4. Configure one variant to autoscaleLet's configure Variant B to autoscale. You would not autoscale Variant A since no traffic is being passed to it at this time.First, you need to define a scalable target. It is an AWS resource and in this case you want to scale a `sagemaker` resource as indicated in the `ServiceNamespace` parameter. Then the `ResourceId` is a SageMaker Endpoint. Because autoscaling is used by other AWS resources, you’ll see a few parameters that will remain static for scaling SageMaker Endpoints. Thus the `ScalableDimension` is a set value for SageMaker Endpoint scaling.You also need to specify a few key parameters that control the min and max behavior for your Machine Learning instances. The `MinCapacity` indicates the minimum number of instances you plan to scale in to. The `MaxCapacity` is the maximum number of instances you want to scale out to. So in this case you always want to have at least 1 instance running and a maximum of 2 during peak periods.
autoscale.register_scalable_target( ServiceNamespace="sagemaker", ResourceId="endpoint/" + model_ab_endpoint_name + "/variant/VariantB", ScalableDimension="sagemaker:variant:DesiredInstanceCount", MinCapacity=1, MaxCapacity=2, RoleARN=role, SuspendedState={ "DynamicScalingInSuspended": False, "DynamicScalingOutSuspended": False, "ScheduledScalingSuspended": False, }, ) waiter = sm.get_waiter("endpoint_in_service") waiter.wait(EndpointName=model_ab_endpoint_name)
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Check that the parameters from the function above are in the description of the scalable target:
autoscale.describe_scalable_targets( ServiceNamespace="sagemaker", MaxResults=100, )
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Define and apply a scaling policy using the `put_scaling_policy` function. The scaling policy provides additional information about the scaling behavior for your instance. `TargetTrackingScaling` refers to a specific autoscaling type supported by SageMaker, that uses a scaling metric and a target value as the indicator to scale.In the scaling policy configuration, you have the predefined metric `PredefinedMetricSpecification` which is the number of invocations on your instance and the `TargetValue` which indicates the number of invocations per ML instance you want to allow before triggering your scaling policy. A scale-out cooldown of 60 seconds means that after autoscaling successfully scales out, it starts to count the cooldown time. The scaling policy won’t increase the desired capacity again until the cooldown period ends.The scale-in cooldown setting of 300 seconds means that SageMaker will not attempt to start another scale-in activity within 300 seconds of when the last one completed.
autoscale.put_scaling_policy( PolicyName="bert-reviews-autoscale-policy", ServiceNamespace="sagemaker", ResourceId="endpoint/" + model_ab_endpoint_name + "/variant/VariantB", ScalableDimension="sagemaker:variant:DesiredInstanceCount", PolicyType="TargetTrackingScaling", TargetTrackingScalingPolicyConfiguration={ "TargetValue": 2.0, # the number of invocations per ML instance you want to allow before triggering your scaling policy "PredefinedMetricSpecification": { "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance", # scaling metric }, "ScaleOutCooldown": 60, # wait time, in seconds, before beginning another scale out activity after last one completes "ScaleInCooldown": 300, # wait time, in seconds, before beginning another scale in activity after last one completes }, ) waiter = sm.get_waiter("endpoint_in_service") waiter.wait(EndpointName=model_ab_endpoint_name)
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
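As a rough mental model (hedged: the real service also factors in CloudWatch alarm evaluation periods alongside the cooldowns above), target tracking scales capacity proportionally so that the metric lands back near `TargetValue`, clamped between `MinCapacity` and `MaxCapacity`. A sketch of that math:

```python
# Hedged sketch of target-tracking capacity math; not the exact service algorithm.
import math

def desired_capacity(current, metric, target, min_cap=1, max_cap=2):
    proposed = math.ceil(current * metric / target)   # scale proportionally to the metric
    return max(min_cap, min(max_cap, proposed))       # clamp to [MinCapacity, MaxCapacity]

print(desired_capacity(1, metric=5.0, target=2.0))  # 2 -> bursty traffic scales out (clamped to MaxCapacity)
print(desired_capacity(2, metric=0.5, target=2.0))  # 1 -> quiet traffic scales back in
```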
Generate traffic again and review the endpoint in the AWS console. _This cell will take approximately 1-2 minutes to run._
%%time for i in range(0, 100): predicted_classes = predictor.predict(inputs)
CPU times: user 215 ms, sys: 19.2 ms, total: 234 ms Wall time: 1min 31s
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Review the autoscaling:- open the link- notice that you are in the section Amazon SageMaker -> Endpoints- below you can see the endpoint runtime settings with the instance counts. You can run the predictions multiple times to observe the increase of the instance count to 2
from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name)))
_____no_output_____
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
Upload the notebook into the S3 bucket for grading purposes.**Note:** you may need to click the "Save" button before the upload.
!aws s3 cp ./C3_W2_Assignment.ipynb s3://$bucket/C3_W2_Assignment_Learner.ipynb
upload: ./C3_W2_Assignment.ipynb to s3://sagemaker-us-east-1-299076282420/C3_W2_Assignment_Learner.ipynb
MIT
Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb
BhargavTumu/coursera-practical-data-science-specialization
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from sklearn.model_selection import train_test_split
from pandas.plotting import register_matplotlib_converters

%matplotlib inline
%config InlineBackend.figure_format='retina'

register_matplotlib_converters()
sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 22, 10

RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)

df = pd.read_csv(
    "dateindex1.csv",
    parse_dates=['datetime'],
    index_col="datetime"
)

# Derive calendar features from the datetime index
df['hour'] = df.index.hour
df['day_of_month'] = df.index.day
df['day_of_week'] = df.index.dayofweek
df['month'] = df.index.month
df['year'] = df.index.year
df.head()

X = df.drop(columns=['G'])
Y = df[['G']]
print(X)

model = keras.models.Sequential()
model.add(keras.layers.Dense(5, activation='relu', input_shape=(5,)))
model.add(keras.layers.Dense(5, activation='relu'))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X, Y, epochs=100, callbacks=[keras.callbacks.EarlyStopping(patience=3)])

test_data = np.array([22, 1, 5, 1, 2021])
print(model.predict(test_data.reshape(1, 5), batch_size=1))

# Save the entire model to an HDF5 file
model.save('working_model.h5')
_____no_output_____
CC0-1.0
datascience/models/regression.ipynb
VimeshShahama/Cyber---SDGP
# Class Coding Lab: Introduction to Programming

The goals of this lab are to help you to understand:

1. the Jupyter and IDLE programming environments
2. basic Python syntax
3. variables and their use
4. how to sequence instructions together into a cohesive program
5. the `input()` function for input and the `print()` function for output

## Let's start with an example: Hello, world!

This program asks for your name as input, then says hello to you as output. Most often it's the first program you write when learning a new programming language. Click in the cell below and click the run cell button.
your_name = input("What is your name? ") print('Hello there',your_name)
_____no_output_____
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
Believe it or not there's a lot going on in this simple two-line program, so let's break it down.

- The first line:
  - asks you for input, prompting you with `What is your name?`
  - then stores your input in the variable `your_name`
- The second line:
  - prints out the text `Hello there`
  - then prints out the contents of the variable `your_name`

At this point you might have a few questions. What is a variable? Why do I need it? Why is this two lines? Etc... All will be revealed in time.

## Variables

Variables are names in our code which store values. I think of variables as cardboard boxes. Boxes hold things. Variables hold things. The name of the variable is on the outside of the box (that way you know which box it is), and the value of the variable represents the contents of the box.

### Variable Assignment

**Assignment** is an operation where we store data in our variable. It's like packing something up in the box.

In this example we assign the value `"USA"` to the variable **country**.
# Here's an example of variable assignment.
country = 'USA'
_____no_output_____
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
### Variable Access

What good is storing data if you cannot retrieve it? Lucky for us, retrieving the data in a variable is as simple as calling its name:
country # This should say 'USA'
_____no_output_____
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
At this point you might be thinking: Can I overwrite a variable? The answer, of course, is yes! Just re-assign it a different value:
country = 'Canada'
_____no_output_____
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
You can also access a variable multiple times. Each time it simply gives you its value:
country, country, country
_____no_output_____
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
## The Purpose Of Variables

Variables play a vital role in programming. Computer instructions have no memory of each other. That is, one line of code has no idea what is happening in the other lines of code. The only way we can "connect" what happens from one line to the next is through variables. For example, if we re-write the Hello, World program at the top of the page without variables, we get the following:
input("What is your name? ") print('Hello there')
What is your name? Bob Hello there
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
When you execute this program, notice there is no longer a connection between the input and the output. In fact, the input on line 1 doesn't matter because the output on line 2 doesn't know about it. It cannot, because we never stored the result of the input in a variable!

## What's in a name? Um, EVERYTHING

Computer code serves two equally important purposes:

1. To solve a problem (obviously)
2. To communicate how you solved the problem to another person (hmmm... I didn't think of that!)

If our code does something useful, like land a rocket, predict the weather, or calculate month-end account balances, then the chances are 100% certain that *someone else will need to read and understand our code.* Therefore it's just as important that we develop code that is easily understood by both the computer and our colleagues.

This starts with the names we choose for our variables. Consider the following program:
y = input("Enter your city: ") x = input("Enter your state: ") print(x,y,'is a nice place to live')
Enter your city: yeet town Enter your state: yeet state yeet state yeet town is a nice place to live
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
What do `x` and `y` represent? Is there a semantic (design) error in this program?

You might find it easy to figure out the answers to these questions, but consider this more human-friendly version:
state = input("Enter your city: ") city = input("Enter your state: ") print(city,state,'is a nice place to live')
_____no_output_____
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
Do the aptly-named variables make it easier to find the semantic errors in this second version?

### You Do It:

Finally, re-write this program so that it uses well-thought-out variables AND is semantically correct:
# TODO: Re-write the above program to work as it should, stating "City, State is a nice place to live"
city = input("Enter your city: ")
state = input("Enter your state: ")
print(city + ",", state, "is a nice place to live")
Enter your city: Yeet Town Enter your state: Yeet State Yeet Town, Yeet State is a nice place to live
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
### Now Try This:

Now try to write a program which asks for two separate inputs: your first name and your last name. The program should then output `Hello` with your first name and last name.

For example, if you enter `Mike` for the first name and `Fudge` for the last name, the program should output `Hello Mike Fudge`

**HINTS**

- Use appropriate variable names. If you need to create a two-word variable name, use an underscore in place of the space between the words, e.g. `two_words`
- You will need a separate input for each name.
# TODO: write your code here
first_name = input("What's your name? ")
last_name = input("What's your last name? ")
print("Hello,", first_name, last_name)
What's your name? Bob What's your last name? Bobson Hello, Bob Bobson
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
### Variable Concatenation: Your First Operator

The `+` symbol is used to combine two variables containing text values together. Consider the following example:
prefix = "re"
suffix = "ment"
root = input("Enter a root word, like 'ship': ")
print(prefix + root + suffix)
Enter a root word, like 'ship': yeet reyeetment
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
### Now Try This

Write a program that prompts for three colors as input, then outputs those three colors as a list, informing me which one was the middle (2nd entered) color. For example, if you were to enter `red` then `green` then `blue` the program would output: `Your colors were: red, green, and blue. The middle color was green.`

**HINTS**

- you'll need three variables, one for each input
- you should try to make the program output match my example. This includes commas and the word `and`.
# TODO: write your code here
first_color = input("Choose a color: ")
second_color = input("Choose another color: ")
third_color = input("Choose another color: ")
print("Your colors were", first_color + ",", second_color + ", and", third_color + ". The middle color was", second_color + ".")
Choose a color: red Choose another color: yellow Choose another color: blue Your colors were red, yellow, and blue. The middle color was yellow.
MIT
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb
MahopacHS/spring2019-ditoccoa0302
Before you start:

- Read the README.md file
- Comment as much as you can and use the resources (README.md file)
- Happy learning!
# import numpy and pandas
import numpy as np
import pandas as pd
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
## Challenge 1 - The `stats` Submodule

This submodule contains statistical functions for conducting hypothesis tests, producing various distributions and other useful tools. Let's examine this submodule using the KickStarter dataset. Load the data using Ironhack's database (db: kickstarter, table: projects).
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
Now use the `head` function to examine the dataset.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
Import the `mode` function from `scipy.stats` and find the mode of the `country` and `currency` column.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
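As a sketch of what the `mode` call looks like — using a small hypothetical numeric array standing in for a categorical column, since the KickStarter data isn't loaded here. Note that the shape of `result.mode` differs across SciPy versions, so we normalize it before reading the value:

```python
import numpy as np
from scipy.stats import mode

# Hypothetical numeric codes standing in for a categorical column
codes = np.array([1, 1, 2, 3, 1, 2])

result = mode(codes)
# result.mode may be a scalar or a length-1 array depending on the SciPy version
most_common = np.asarray(result.mode).ravel()[0]
print(most_common)  # 1 appears most often
```

For actual string columns like `country`, pandas' `Series.mode()` is a convenient alternative that avoids version quirks.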
The trimmed mean is a function that computes the mean of the data with observations removed. The most common way to compute a trimmed mean is by specifying a percentage and then removing elements from both ends. However, we can also specify a threshold on both ends. The goal of this function is to create a more robust method of computing the mean that is less influenced by outliers. SciPy contains a function called `tmean` for computing the trimmed mean. In the cell below, import the `tmean` function and then find the 75th percentile of the `goal` column. Compute the trimmed mean between 0 and the 75th percentile of the column. Read more about the `tmean` function [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tmean.htmlscipy.stats.tmean).
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
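A minimal sketch of the idea on made-up numbers (the `goals` array here is hypothetical, standing in for the `goal` column). The outlier above the 75th percentile is ignored by the trimmed mean:

```python
import numpy as np
from scipy.stats import tmean

goals = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # hypothetical goal amounts

upper = np.percentile(goals, 75)           # 75th percentile -> 4.0
trimmed = tmean(goals, limits=(0, upper))  # ignores values outside [0, 4.0]
print(trimmed)  # mean of 1, 2, 3, 4 -> 2.5
```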
SciPy contains various statistical tests. One of the tests is Fisher's exact test. This test is used for contingency tables. The test originates from the "Lady Tasting Tea" experiment. In 1935, Fisher published the results of the experiment in his book. The experiment was based on a claim by Muriel Bristol that she can taste whether tea or milk was first poured into the cup. Fisher devised this test to disprove her claim. The null hypothesis is that the treatments do not affect outcomes, while the alternative hypothesis is that the treatment does affect outcome. To read more about Fisher's exact test, see:* [Wikipedia's explanation](http://b.link/test61)* [A cool deep explanation](http://b.link/handbook47)* [An explanation with some important Fisher's considerations](http://b.link/significance76)Let's perform Fisher's exact test on our KickStarter data. We intend to test the hypothesis that the choice of currency has an impact on meeting the pledge goal. We'll start by creating two derived columns in our dataframe. The first will contain 1 if the amount of money in `usd_pledged_real` is greater than the amount of money in `usd_goal_real`. We can compute this by using the `np.where` function. If the amount in one column is greater than the other, enter a value of 1, otherwise enter a value of zero. Add this column to the dataframe and name it `goal_met`.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
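The `np.where` pattern described above, sketched on a tiny hypothetical frame (column names match the text, values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "usd_pledged_real": [500.0, 20.0, 100.0],
    "usd_goal_real":    [100.0, 50.0, 100.0],
})

# 1 when pledged strictly exceeds the goal, 0 otherwise
df["goal_met"] = np.where(df["usd_pledged_real"] > df["usd_goal_real"], 1, 0)
print(df["goal_met"].tolist())  # [1, 0, 0]
```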
Next, create a column that checks whether the currency of the project is in US Dollars. Create a column called `usd` using the `np.where` function where if the currency is US Dollars, assign a value of 1 to the row and 0 otherwise.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
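The same `np.where` idea, this time with a string comparison (hypothetical currency values):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"currency": ["USD", "GBP", "USD"]})

# 1 when the currency is US Dollars, 0 otherwise
df["usd"] = np.where(df["currency"] == "USD", 1, 0)
print(df["usd"].tolist())  # [1, 0, 1]
```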
Now create a contingency table using the `pd.crosstab` function in the cell below to compare the `goal_met` and `usd` columns. Import the `fisher_exact` function from `scipy.stats` and conduct the hypothesis test on the contingency table that you have generated above. You can read more about the `fisher_exact` function [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.htmlscipy.stats.fisher_exact). The output of the function should be the odds ratio and the p-value. The p-value will provide you with the outcome of the test.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
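A sketch of the crosstab-plus-test pipeline on made-up 0/1 columns. The toy table here is perfectly balanced, so the test is uninformative (odds ratio 1, p-value 1), but the mechanics are the same as with the real `goal_met` and `usd` columns:

```python
import pandas as pd
from scipy.stats import fisher_exact

df = pd.DataFrame({
    "goal_met": [0, 0, 1, 1],
    "usd":      [0, 1, 0, 1],
})

# 2x2 contingency table of goal_met vs. usd
table = pd.crosstab(df["goal_met"], df["usd"])
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)  # 1.0 1.0 for this balanced toy table
```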
## Challenge 2 - The `interpolate` submodule

This submodule allows us to interpolate between two points and create a continuous distribution based on the observed data.

In the cell below, import the `interp1d` function and first take a sample of 10 rows from `kickstarter`.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
Next, create a linear interpolation of the backers as a function of `usd_pledged_real`. Create a function `f` that generates a linear interpolation of backers as predicted by the amount of real pledged dollars.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
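A minimal sketch of what `interp1d` gives you, on hypothetical (pledged, backers) pairs — `f` is a callable that linearly interpolates between the observed points:

```python
import numpy as np
from scipy.interpolate import interp1d

pledged = np.array([0.0, 100.0, 400.0])  # hypothetical usd_pledged_real values
backers = np.array([0.0, 10.0, 40.0])    # hypothetical backer counts

f = interp1d(pledged, backers)  # linear interpolation by default
print(f(200.0))  # on the straight segment from (100, 10) to (400, 40) -> 20.0
```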
Now create a new variable called `x_new`. This variable will contain all integers between the minimum number of backers in our sample and the maximum number of backers. The goal here is to take the dataset that contains few observations due to sampling and fill all observations with a value using the interpolation function. Hint: one option is the `np.arange` function.
# Your code here:
_____no_output_____
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
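One way to build `x_new` with `np.arange`, on a hypothetical sample of backer counts — every integer from the sample minimum up to and including the maximum:

```python
import numpy as np

backers_sample = np.array([3, 40, 12])  # hypothetical sampled backer counts

# np.arange excludes the stop value, so add 1 to include the maximum
x_new = np.arange(backers_sample.min(), backers_sample.max() + 1)
print(x_new[0], x_new[-1], len(x_new))  # 3 40 38
```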