Vertex SDK: AutoML training tabular binary classification model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_binary_classification_batch.ipynb"> <img src="https:...
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_tabular_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Here, we just check if there are any events within dt time, and if there are, we assume there is only one of them. This is a horrible approximation. Don't do this.
class PoissonSpikingApproximate(object):
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.value = 1.0 / dt
        self.size = size
        self.output = np.zeros(size)

    def __call__(self, t, x):
        self.output[:] = 0
        p =...
spike trains - poisson and regular.ipynb
tcstewar/testing_notebooks
gpl-2.0
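The approximation above can be sketched in a few lines (a hypothetical, minimal stand-in for the truncated class, not the notebook's actual code): in each timestep we only ask whether *any* Poisson event occurred, so at most one spike per dt can ever be reported, which under-counts at high rates.

```python
import numpy as np

def approx_poisson_spikes(rate, dt=0.001, steps=10000, seed=1):
    # Fire iff at least one Poisson event falls inside the window of length dt.
    rng = np.random.RandomState(seed)
    p = 1.0 - np.exp(-rate * dt)          # P(at least one event in dt)
    return rng.uniform(size=steps) < p
```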
For this attempt, I'm screwing something up in the logic, as it's firing more frequently than it should. The idea is to use the same approach as above to see if any spikes happen during the dt. If there are any spikes, then count that as 1 spike. But now I need to know if there are any more spikes (given that we kno...
class PoissonSpikingExactBad(object):
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.value = 1.0 / dt
        self.size = size
        self.output = np.zeros(size)

    def __call__(self, t, x):
        p = 1.0 - np.exp(-x*self.dt)
        ...
spike trains - poisson and regular.ipynb
tcstewar/testing_notebooks
gpl-2.0
Now for one that works. Here we do the approach of actually figuring out when during the time step the events happen, and continue until we fall off the end of the timestep. This is how everyone says to do it.
class PoissonSpikingExact(object):
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.value = 1.0 / dt
        self.size = size
        self.output = np.zeros(size)

    def next_spike_times(self, rate):
        return -np.log(1.0-s...
spike trains - poisson and regular.ipynb
tcstewar/testing_notebooks
gpl-2.0
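The core of the exact approach described above can be sketched without the class machinery (a hedged stand-in, not the notebook's code): draw exponential inter-event times and count how many events land inside one timestep.

```python
import numpy as np

def exact_poisson_count(rate, dt, rng):
    # Accumulate exponential inter-event times until we fall off
    # the end of the timestep; the count is Poisson(rate * dt).
    count = 0
    t = rng.exponential(1.0 / rate)
    while t < dt:
        count += 1
        t += rng.exponential(1.0 / rate)
    return count
```

Because each step can report more than one event, summing over many steps recovers the requested rate without the one-spike-per-step cap.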
Let's test the accuracy of these models.
def test_accuracy(cls, rate, T=1):
    test_model = nengo.Network()
    with test_model:
        stim = nengo.Node(rate)
        spikes = nengo.Node(cls(1, seed=1), size_in=1)
        nengo.Connection(stim, spikes, synapse=None)
        p = nengo.Probe(spikes)
    sim = nengo.Simulator(test_model)
    sim.run(T, progr...
spike trains - poisson and regular.ipynb
tcstewar/testing_notebooks
gpl-2.0
<h2> Let's look at a slice of retention time </h2>
def get_rt_slice(df, rt_bounds):
    '''
    PURPOSE: Given a tidy feature table with 'mz' and 'rt' column headers,
        retain only the features whose rt is between rt_left and rt_right
    INPUT:
        df - a tidy pandas dataframe with 'mz' and 'rt' column headers
        rt_left...
notebooks/MTBLS315/exploratory/Retention_time_slice_MTBLS315_uhplc_pos_classifer-4ppm.ipynb
irockafe/revo_healthcare
mit
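The truncated helper above can be sketched as follows. This is a hypothetical reconstruction based only on the docstring: the column names 'mz' and 'rt' come from the docstring, and the bounds are assumed to be an inclusive (left, right) tuple.

```python
import pandas as pd

def get_rt_slice(df, rt_bounds):
    """Keep only the features whose retention time lies within rt_bounds."""
    lo, hi = rt_bounds
    return df[(df['rt'] >= lo) & (df['rt'] <= hi)]
```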
Fit a model Is there a relationship between the diamond price and its weight? Our first goal should be to determine whether the data provide evidence of an association between price and carats. If the evidence is weak, then one might argue that bigger diamonds are not better! To evaluate the model we will use a special...
import statsmodels.api as sm
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Since statsmodels also offers functions to fit a linear regression model, we do not need to import and use sklearn; we can do everything with statsmodels. We will use its OLS() function, which fits a linear regression based on the Ordinary Least Squares algorithm. The model we want to get is: y_hat...
X = sm.add_constant(diamondData.carats)  # this appends a column of ones
simpleModel = sm.OLS(diamondData.price, X).fit()  # fit the model
simpleModel.params  # the beta coefficients (intercept and slope of the linear regression line)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The intercept (beta0) is -259.6 and the slope (beta1) is 3721. Therefore our simple (one-variable) model looks like: Diamond Price = -259.6 + 3721 * Diamond Weight. We can plot the obtained regression line together with the input data X. We can do it by drawing a line using the beta parameters just calculated or a...
%matplotlib inline
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(diamondData.carats, diamondData.price)

# draw the linear regression line
x = [0.1, 0.4]
y = [-259.6 + 3721 * i for i in x]
ax.plot(x, y)

# alternatively, plot the fitted values
#y_hat = simpleModel.fittedval...
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
This answers our first question. There is a relationship between price and weight of diamond and we can model it. Analyse the model Which kind of relation is between weight and price? Next question would be to find out if the relation is linear and how it looks like. Coefficients interpretation Beta1 (the slope of the ...
diamondData.carats.mean() # this is the weight mean of our dataset
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Instead of X as input for the model, we take X centered around the mean, i.e. we shift X by a value equal to the sample mean:
XmeanCentered = diamondData.carats - diamondData.carats.mean()
XmeanCentered = sm.add_constant(XmeanCentered)  # append a column of ones
meanCenteredModel = sm.OLS(diamondData.price, XmeanCentered).fit()  # fit a new model
meanCenteredModel.params
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
As you can see, the slope is the same as in the previous model; only the intercept shifted. This always holds when you shift your X values. Thus $500.1 is the expected price for an average-sized diamond from the initial dataset (= 0.2042 carats). This intercept makes much more sense. You can shift the X input by a...
Xtenth = diamondData.carats * 10  # rescale the X
Xtenth = sm.add_constant(Xtenth)
tenthModel = sm.OLS(diamondData.price, Xtenth).fit()  # again fit the model
tenthModel.params
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The intercept is the same as in the original model; only the slope coefficient is divided by 10. This always holds when you re-scale the X values. We expect a 372.102 (SIN) dollar change in price for every 1/10th of a carat increase in the mass of a diamond. Predicting the price of a diamond Once we have a model of th...
simpleModel.params[0] + 0.2*simpleModel.params[1]
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
I expect to pay around 485 SIN $. Alternatively, I can use the predict() function available in the statsmodels package:
newDiamond = [1, 0.2]  # remember to always add the intercept!
simpleModel.predict(newDiamond)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
It's also possible to pass a list of values to predict:
newDiamonds = sm.add_constant([0.16, 0.27, 0.34])  # add the intercept
simpleModel.predict(newDiamonds)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Result: for 0.16, 0.27, and 0.34 carats, we predict the prices to be 335.74, 745.05, 1005.52 (SIN) dollars Model fit How strong is the relationship? We know that there is a relationship between diamonds carats and prices, we would like to know the strength of this relationship. In other words, given a certain diamond w...
y = diamondData.price
y_hat = simpleModel.fittedvalues
max(abs(y - y_hat))
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Conveniently, residuals are also stored in the results attribute resid:
residuals = simpleModel.resid
max(abs(residuals))
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
85 SIN$ (in either direction) is the largest error the model makes. Don't confuse errors and residuals. The error is the deviation of the observed value from the (unobservable) true value of a quantity of interest (for example, a population mean), and the residual is the difference between the observed value an...
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(simpleModel.fittedvalues, residuals, 'o')  # as round marks

# pretty-up the plot
ax.plot((0, 1200), (0, 0))  # draw also a line at y=0
fig.suptitle("Residuals versus fitted prices")
ax.set_ylabel('Residuals [SIN $]')
ax.set_xlabel('Price [SIN $]')
ax.grid(True)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The residuals should be distributed uniformly without showing any pattern and having a constant variance. If we see from the residuals vs. fitted plot that the variance of the residuals increases as the fitted values increase (takes a form of a horizontal cone) this is the sign of heteroscedasticity. Homoscedasticity...
sum(residuals)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The mean of the residuals is expected to be zero. This follows directly from the fact that OLS minimises the sum of squared residuals.
import numpy as np
np.mean(residuals)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
This is one of the assumptions for regression analysis: residuals should have a normal (or Gaussian) distribution.
plt.hist(residuals)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
It looks normal, but we can verify this better with a Q-Q plot. Q-Q plot to verify the residuals distribution Q-Q plots (short for "quantile-quantile plots") can be used to check whether the data is distributed normally or not. It is a plot where the axes are purposely transformed in order to make a normal (or Gaussian) ...
sm.qqplot(residuals, fit=True, line='45')
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Estimating residual variation The residual variation measures how well the regression line fits the data points. It is the variation in the dependent variable (Price) that is not explained by the regression model and is represented by the residuals. We want the residual variation to be as small as possible. Each re...
n = len(y)
MSE = sum(residuals**2) / (n - 2)
RMSE = np.sqrt(MSE)
RMSE
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
RMSE can be used to calculate the standardized residuals too. They equal the value of a residual divided by an estimate of its standard deviation (so, RMSE). Large standardized residuals are an indication of an outlier.
max(simpleModel.resid / RMSE)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Summarizing the variation: R-squared The total variation is the residual variation (variation after removing predictors) plus the systematic variation (variation explained by regression model). R-squared is the percentage of variability explained by the regression model: R-squared = explained / total variation = 1 ...
simpleModel.rsquared
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
We are quite close to a perfect model. You can use a fitted line plot to graphically illustrate different R-squared values. The more variation that is explained by the model, the closer the data points fall to the line. Theoretically, if a model could explain 100% of the variation, the fitted values would always equal ...
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(simpleModel.fittedvalues, diamondData.price, 'o')  # as round marks

# identity line
plt.plot(simpleModel.fittedvalues, simpleModel.fittedvalues, '--')

# pretty-up the plot
fig.suptitle("Relation between estimated and actual diamonds' prices")
ax.set_ylabel('E...
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
R-squared can be a misleading summary and must be interpreted carefully (deleting data can inflate R-squared, for example). In conclusion (residuals distribution, variation), the model is pretty good and the relation is very strong. Because of this, it is sometimes preferred to use the adjusted R-squared, which is Rsquared ad...
1 - (1-simpleModel.rsquared)*((n-1)/simpleModel.df_resid)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Of course, it is also available from the model results:
simpleModel.rsquared_adj
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Confidence How accurately can we predict the diamond prices? For any given weight in carats, what is our prediction for the price, and what is the accuracy of this prediction? In statistics, a sequence of random variables is independent and identically distributed (IID) if each random variable has the same probability ...
# prepare the data
y = diamondData.price
x = diamondData.carats
n = len(y)

# calculate beta1
beta1 = (np.corrcoef(y, x) * np.std(y) / np.std(x))[0][1]
beta1

# calculate beta0
beta0 = np.mean(y) - beta1 * np.mean(x)
beta0
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Sigma is unknown, but its estimate is the square root of the sum of the squared errors divided by n-2 (the degrees of freedom):
e = y - beta0 - beta1 * x  # the residuals

# unbiased estimate for the variance
sigma = np.sqrt(sum(e**2) / (n - 2))
sigma
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Confidence intervals A 95% confidence interval is defined as a range of values such that with 95% probability, the range will contain the true unknown value of the parameter. Recall of quantiles and percentiles of a distribution If you were at the 95th percentile on an exam, it means that 95% of people scored worse than ...
ssx = sum((x - np.mean(x))**2)

# calculate the standard error for beta0
seBeta0 = (1.0 / n + np.mean(x) ** 2 / ssx) ** 0.5 * sigma
seBeta0

# calculate the standard error for beta1
seBeta1 = sigma / np.sqrt(ssx)
seBeta1
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The standard error of the parameter measures the precision of the estimate of the parameter. The smaller the standard error, the more precise the estimate. Hypothesis testing Hypothesis testing is concerned with making decisions using data. A null hypothesis is specified that represents the status quo, usually labeled ...
tBeta0 = beta0 / seBeta0
tBeta0

tBeta1 = beta1 / seBeta1
tBeta1
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
P-values P-values are the most common measure of "statistical significance". The P-value is the probability under the null hypothesis of obtaining evidence as extreme or more extreme than would be observed by chance alone. If the p-value is small, then either H0 is true and we have observed an extreme rare event or H...
from scipy.stats import t
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The survival function of a random variable is the probability that the random variable is bigger than a value x. The scipy.stats function sf() returns this probability:
degreesOfFreedom = simpleModel.df_resid  # the residual degrees of freedom
pBeta0 = t.sf(abs(tBeta0), df=degreesOfFreedom) * 2  # two-sided
pBeta1 = t.sf(abs(tBeta1), df=degreesOfFreedom) * 2
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Let's summarise the values calculated until now:
print("##          Estimate  Std. Error  t-value  p-value")
print("Intercept: ", beta0, seBeta0, tBeta0, pBeta0)
print("Carats:    ", beta1, seBeta1, tBeta1, pBeta1)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
If the p-value is less than the significance level (0.05 in our case), then the model explains the variation in the response. T confidence intervals For small samples we can use the t-distribution to calculate the confidence intervals. The t-distribution was introduced by William Gosset in 1908 and is indexed by a de...
alpha = 0.05

# confidence interval for a two-sided hypothesis
qt = 1 - (alpha / 2)  # = 0.975 for a 2-sided 95% probability
t_value = t.ppf(qt, df=degreesOfFreedom)
t_value
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Now we can calculate the intervals for beta0 and beta1:
limits = [-1, 1]
[beta0 + i * t_value * seBeta0 for i in limits]
[beta1 + i * t_value * seBeta1 for i in limits]
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Interpretation: with 95% confidence, we estimate that a 1 carat increase in diamond size results in a 3556 to 3886 increase in price in (Singapore) dollars. Plot the confidence interval We calculate the interval for each x value; we will use the isf() function to get the inverse survival function:
predicted = simpleModel.fittedvalues
x_1 = simpleModel.model.exog  # just the x values plus a column of 1

# get the standard deviation of the predicted values
predvar = simpleModel.mse_resid + (x_1 * np.dot(simpleModel.cov_params(), x_1.T).T).sum(1)
predstd = np.sqrt(predvar)
tp...
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
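A pointwise confidence interval for the fitted line can also be computed by hand from the same quantities derived in the cells above (beta0, beta1, sigma, ssx, n). This is a hedged sketch; the function name mean_ci is hypothetical, and it uses the isf() inverse survival function mentioned in the text.

```python
import numpy as np
from scipy.stats import t

def mean_ci(x0, x, beta0, beta1, sigma, alpha=0.05):
    # Confidence interval for the regression mean at a new point x0,
    # given the training inputs x and the estimates from the cells above.
    n = len(x)
    ssx = np.sum((x - np.mean(x)) ** 2)
    se = sigma * np.sqrt(1.0 / n + (x0 - np.mean(x)) ** 2 / ssx)
    tq = t.isf(alpha / 2, df=n - 2)   # inverse survival function
    y_hat = beta0 + beta1 * x0
    return y_hat - tq * se, y_hat + tq * se
```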
Summary of statistic values The statsmodels package offers an overview of the model values, similar to what we calculated above:
simpleModel.summary()
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Many values can also be accessed directly, for example the standard errors:
simpleModel.bse
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
You can see all the values available using dir():
dir(simpleModel)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
In the plot below we see spikes at frequencies of 1/day, as well as the associated harmonics.
plt.plot(ST.index[1:], np.log10(ST.values)[1:])
#plt.plot(f[1:N_final], 20.0 / len(T) * np.log10(np.abs(ST[1:N_final])))
plt.xlabel('frequency in days')
plt.ylabel('Power')
plt.title('T spectra')

rx = (1. / N) * correlate(T, T, mode='same')
plt.plot(fftshift(rx)[0:N//2])

# THIS METHOD OF FINDING THE PEAKS IS COMPLE...
notebooks/sinusoid_regression_T.ipynb
RJTK/dwglasso_cweeds
mit
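The 1/day spike can be illustrated on synthetic data (hypothetical hourly samples, not the actual temperature series): a signal with a one-day period produces a spectrum that peaks at exactly 1 cycle/day.

```python
import numpy as np

t = np.arange(24 * 365)                      # one year of hourly samples
T = np.sin(2 * np.pi * t / 24.0)             # period of one day
spectrum = np.abs(np.fft.rfft(T))
freqs = np.fft.rfftfreq(t.size, d=1 / 24.0)  # sample spacing in days -> cycles/day
peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
```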
Repeating the process is not useful
print(a[0:25])

f0_yr = f[52]
f0_d = f[19012]
print(f0_yr, f0_d)

# Error functions for sinusoidal regression
def err_f1(theta):  # No frequency optimization
    a_yr, a_d, phi_yr, phi_d = theta
    syr = a_yr * np.sin(2*pi*f0_yr*t_days + phi_yr)
    sd = a_d * np.sin(2*pi*f0_d*t_days + phi_d)
    return T1 - syr - sd
    ...
notebooks/sinusoid_regression_T.ipynb
RJTK/dwglasso_cweeds
mit
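The residual-function approach above can be sketched on synthetic data (hypothetical signal and names; the real notebook fits two fixed frequencies at once): with the frequency held fixed, the amplitude and phase are recovered by nonlinear least squares on the residual.

```python
import numpy as np
from scipy.optimize import least_squares

# A noiseless sinusoid with known frequency f0; fit amplitude and phase.
t = np.linspace(0.0, 10.0, 500)
f0 = 1.0
y = 2.0 * np.sin(2 * np.pi * f0 * t + 0.5)

def err(theta):
    a, phi = theta
    return y - a * np.sin(2 * np.pi * f0 * t + phi)

fit = least_squares(err, x0=[1.0, 0.0])
a_hat, phi_hat = fit.x
```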
Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare ...
weights = sentiment_model.coefficients
weights.column_names()
weights
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. Fill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'va...
weights.num_rows()
num_positive_weights = (weights['value'] >= 0).sum()
num_negative_weights = (weights['value'] < 0).sum()

print("Number of positive weights: %s " % num_positive_weights)
print("Number of negative weights: %s " % num_negative_weights)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Predicting sentiment These scores can be used to make class predictions as follows: $$ \hat{y} = \left\{ \begin{array}{ll} +1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \end{array} \right. $$ Using scores, write code to calculate $\hat{y}$, the class predictions:
def class_predictions(scores):
    """Make class predictions from the scores."""
    preds = []
    for score in scores:
        if score > 0:
            pred = 1
        else:
            pred = -1
        preds.append(pred)
    return preds

class_predictions(scores)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create. Probability predictions Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$ U...
import math

def calculate_proba(scores):
    """Calculate the probability predictions from the scores."""
    proba_preds = []
    for score in scores:
        proba_pred = 1 / (1 + math.exp(-score))
        proba_preds.append(proba_pred)
    return proba_preds

calculate_proba(scores)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review? Find the most positive (and negative) review We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all...
# probability predictions on test_data using the sentiment_model
test_data['proba_pred'] = sentiment_model.predict(test_data, output_type='probability')
test_data
test_data['name', 'proba_pred'].topk('proba_pred', k=20).print_rows(20)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice] Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive...
test_data['name','proba_pred'].topk('proba_pred', k=20, reverse=True).print_rows(20)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice] Compute accuracy of the classifier We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{...
# Test SArray comparison
print(graphlab.SArray([1, 1, 1]) == sample_test_data['sentiment'])
print(sentiment_model.predict(sample_test_data) == sample_test_data['sentiment'])

def get_classification_accuracy(model, data, true_labels):
    # First get the predictions
    ## YOUR CODE HERE
    predictions = model.predict(data...
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
simple_weights = simple_model.coefficients
positive_significant_words = simple_weights[(simple_weights['value'] > 0) & (simple_weights['name'] == "word_count_subset")]['index']
print(len(positive_significant_words))
print(positive_significant_words)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
weights.filter_by(positive_significant_words, 'index')
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Now compute the accuracy of the majority class classifier on test_data. Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76). Notes: what is majority class classifier? - https://www.coursera.org/learn/ml-classification/module/0Wcl...
print((test_data['sentiment'] == +1).sum())
print((test_data['sentiment'] == -1).sum())
# cast to float to avoid Python 2 integer division returning 0
print(float((test_data['sentiment'] == +1).sum()) / len(test_data['sentiment']))
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
An Array-Based Implementation of Dual-Pivot-Quicksort
import random as rnd
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{sort}(L)$ sorts the list $L$ in place.
def sort(L):
    quickSort(0, len(L) - 1, L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{quickSort}(a, b, L)$ sorts the sublist $L[a:b+1]$ in place.
def quickSort(a, b, L):
    if b <= a:
        return  # at most one element, nothing to do
    x1, x2 = L[a], L[b]
    if x1 > x2:
        swap(a, b, L)
    m1, m2 = partition(a, b, L)  # m1 and m2 are the split indices
    quickSort(a, m1 - 1, L)
    if L[m1] != L[m2]:
        quickSort(m1 + 1, m2 - 1, L)
    quickSo...
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{partition}(\texttt{start}, \texttt{end}, L)$ returns two indices $m_1$ and $m_2$ into the list $L$ and regroups the elements of $L$ such that after the function returns the following holds: $\forall i \in {\texttt{start}, \cdots, m_1-1} : L[i] < L[m_1]$, $\forall i \in { m_1+1, \cdots, m_2-...
def partition(start, end, L):
    p1 = L[start]
    p2 = L[end]
    idxLeft = start
    idxMiddle = start + 1
    idxRight = end
    while idxMiddle < idxRight:
        x = L[idxMiddle]
        if x < p1:
            idxLeft += 1
            swap(idxLeft, idxMiddle, L)
            idxMiddle += 1
        elif x <= p2...
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
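The truncated cells above can be assembled into a working whole. This is a hedged reconstruction that follows the invariants stated for partition (elements left of $m_1$ below the small pivot, elements right of $m_2$ above the large pivot); it is not necessarily identical to the original notebook code.

```python
def swap(i, j, L):
    L[i], L[j] = L[j], L[i]

def partition(start, end, L):
    # Assumes L[start] <= L[end]; the caller swaps the pivots first.
    p1, p2 = L[start], L[end]
    idxLeft, idxMiddle, idxRight = start, start + 1, end
    while idxMiddle < idxRight:
        x = L[idxMiddle]
        if x < p1:                       # grow the "< p1" region
            idxLeft += 1
            swap(idxLeft, idxMiddle, L)
            idxMiddle += 1
        elif x <= p2:                    # element stays in the middle region
            idxMiddle += 1
        else:                            # grow the "> p2" region
            idxRight -= 1
            swap(idxMiddle, idxRight, L)
    swap(start, idxLeft, L)              # move p1 into its final slot m1
    swap(end, idxRight, L)               # move p2 into its final slot m2
    return idxLeft, idxRight

def quickSort(a, b, L):
    if b <= a:
        return                           # at most one element
    if L[a] > L[b]:
        swap(a, b, L)
    m1, m2 = partition(a, b, L)
    quickSort(a, m1 - 1, L)
    if L[m1] != L[m2]:                   # middle part is all equal if p1 == p2
        quickSort(m1 + 1, m2 - 1, L)
    quickSort(m2 + 1, b, L)
```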
Testing
L = [5, 7, 9, 1, 24, 11, 5, 2, 5, 8, 2, 13, 9]
print(L)
p1, p2 = partition(0, len(L) - 1, L)
print(L[:p1], L[p1], L[p1+1:p2], L[p2], L[p2+1:])

def demo():
    L = [ rnd.randrange(1, 200) for n in range(1, 16) ]
    print("L = ", L)
    sort(L)
    print("L = ", L)

demo()

def isOrdered(L):
    for i in range(len(L) -...
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
def testSort(n, k):
    for i in range(n):
        L = [ rnd.randrange(2*k) for x in range(k) ]
        oldL = L[:]
        sort(L)
        isOrdered(L)
        sameElements(oldL, L)
        assert len(L) == len(oldL)
        print('.', end='')
    print()
    print("All tests successful!")

%%time
testSort(100, 20_000...
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
Next, we sort a million random integers.
%%timeit
k = 1_000_000
L = [ rnd.randrange(2 * k) for x in range(k) ]
sort(L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
Again, we sort a million integers. This time, many of the integers have the same value.
%%timeit
k = 1_000_000
L = [ rnd.randrange(1000) for x in range(k) ]
sort(L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
Vertex SDK: AutoML training text sentiment analysis model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_sentiment_analysis_batch.ipynb"> <img src="https://cloud.goog...
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Dataset Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. import_...
dataset = aip.TextDataset.create(
    display_name="Crowdflower Claritin-Twitter" + "_" + TIMESTAMP,
    gcs_source=[IMPORT_FILE],
    import_schema_uri=aip.schema.dataset.ioformat.text.sentiment,
)

print(dataset.resource_name)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters: display_name: The human readable name for the Tr...
dag = aip.AutoMLTextTrainingJob(
    display_name="claritin_" + TIMESTAMP,
    prediction_type="sentiment",
    sentiment_max=SENTIMENT_MAX,
)

print(dag)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for tra...
model = dag.run(
    dataset=dataset,
    model_display_name="claritin_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in y...
# Get the model resource ID
models = aip.Model.list(filter="display_name=claritin_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)

model_evaluations = mod...
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send a batch prediction request Send a batch prediction to your deployed model. Get test item(s) Now do a batch prediction with your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate ...
test_items = ! gsutil cat $IMPORT_FILE | head -n2

if len(test_items[0]) == 4:
    _, test_item_1, test_label_1, _ = str(test_items[0]).split(",")
    _, test_item_2, test_label_2, _ = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1, _ = str(test_items[0]).split(",")
    test_item_2, test_label_2, _ =...
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the predictions Next, get the results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or m...
import json

import tensorflow as tf

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_...
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
ABC PMC on a 2D gaussian example In this example we're looking at a dataset that has been drawn from a 2D gaussian distribution. We're going to assume that we don't have a proper likelihood but that we know the covariance matrix $\Sigma$ of the distribution. Using the ABC PMC algorithm we will approximate the posterior...
samples_size = 1000 sigma = np.eye(2) * 0.25 means = [1.1, 1.5] data = np.random.multivariate_normal(means, sigma, samples_size) matshow(sigma) title("covariance matrix sigma") colorbar()
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Then we need to define our model/simulation. In this case this is simple: we again draw random samples from a multivariate Gaussian distribution using the given mean and the sigma defined above.
def create_new_sample(theta): return np.random.multivariate_normal(theta, sigma, samples_size)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Next, we need to define a distance measure. We will use the sum of the absolute differences between the means of the simulated and the observed data.
def dist_measure(x, y): return np.sum(np.abs(np.mean(x, axis=0) - np.mean(y, axis=0)))
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Verification To verify that everything works and to see the effect of the randomness in the simulation, we compute the distance for 1000 simulations at the true mean values.
distances = [dist_measure(data, create_new_sample(means)) for _ in range(1000)] sns.distplot(distances, axlabel="distances") title("Variability of distance from simulations")
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Setup Now we're going to set up the ABC PMC sampling
import abcpmc
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
As a prior we're going to use a Gaussian prior based on our best guess about the distribution of the means.
prior = abcpmc.GaussianPrior(mu=[1.0, 1.0], sigma=np.eye(2) * 0.5)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
As threshold $\epsilon$ we're going to use the $\alpha^{th}$ percentile of the sorted distances of the particles of the current iteration. The simplest way to do this is to define a constant $\epsilon$ and iteratively adapt the threshold. As the starting value we're going to define a sufficiently high value so that the acce...
alpha = 75 T = 20 eps_start = 1.0 eps = abcpmc.ConstEps(T, eps_start)
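The percentile-based threshold adaptation described above can be sketched directly. This is a minimal illustration of the idea, not abcpmc's internal implementation; the `distances` array stands in for one iteration's accepted distances:

```python
import numpy as np

def next_threshold(distances, alpha=75):
    """Set the next epsilon to the alpha-th percentile of the
    current iteration's accepted distances."""
    return np.percentile(distances, alpha)

# Example: as accepted distances shrink, the threshold tightens each iteration.
rng = np.random.RandomState(0)
eps = 1.0  # sufficiently high starting value
for t in range(3):
    distances = rng.uniform(0, eps, size=100)  # stand-in for accepted distances
    eps = next_threshold(distances, alpha=75)
```

Because each iteration only keeps particles below the current threshold, taking the 75th percentile of their distances guarantees a monotonically decreasing $\epsilon$ sequence.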
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Finally, we create an instance of the sampler. We want to use 5000 particles and the functions we defined above. Additionally, we're going to make use of the built-in parallelization and use 7 cores for the sampling.
sampler = abcpmc.Sampler(N=5000, Y=data, postfn=create_new_sample, dist=dist_measure, threads=7)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Optionally, we can customize the proposal creation. Here we're going to use an "Optimal Local Covariance Matrix" kernel (OLCM) as proposed by Filippi et al. (2012). This has been shown to yield a high acceptance ratio together with a faster decrease of the threshold.
sampler.particle_proposal_cls = abcpmc.OLCMParticleProposal
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Sampling Now we're ready to sample. All we need to do is iterate over the yielded values of our sampler instance. The sample function returns a namedtuple per iteration that contains all the information we're interested in.
def launch(): eps = abcpmc.ConstEps(T, eps_start) pools = [] for pool in sampler.sample(prior, eps): print("T: {0}, eps: {1:>.4f}, ratio: {2:>.4f}".format(pool.t, eps(pool.eps), pool.ratio)) for i, (mean, std) in enumerate(zip(*abcpmc.weighted_avg_and_std(pool.thetas, pool.ws, axis=0))): ...
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Postprocessing How did the sampled values evolve over the iterations? As the threshold is decreasing we expect the errors to shrink while the means converge to the true means.
for i in range(len(means)): moments = np.array([abcpmc.weighted_avg_and_std(pool.thetas[:,i], pool.ws, axis=0) for pool in pools]) errorbar(range(T), moments[:, 0], moments[:, 1]) hlines(means, 0, T, linestyle="dotted", linewidth=0.7) _ = xlim([-.5, T])
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
What does the distribution of the distances look like after we have approximated the posterior? If we're close to the true posterior, we expect a high bin count around the values found in the earlier distribution plot.
distances = np.array([pool.dists for pool in pools]).flatten() sns.distplot(distances, axlabel="distance")
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
How did our $\epsilon$ values behave over the iterations? Using the $\alpha^{th}$ percentile causes the threshold to decrease relatively fast in the beginning and to plateau later on
eps_values = np.array([pool.eps for pool in pools]) plot(eps_values, label=r"$\epsilon$ values") xlabel("Iteration") ylabel(r"$\epsilon$") legend(loc="best")
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
What about the acceptance ratio? ABC PMC with the OLCM kernel gives us a relatively high acceptance ratio.
acc_ratios = np.array([pool.ratio for pool in pools]) plot(acc_ratios, label="Acceptance ratio") ylim([0, 1]) xlabel("Iteration") ylabel("Acceptance ratio") legend(loc="best") %pylab inline rc('text', usetex=True) rc('axes', labelsize=15, titlesize=15)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Finally, what does our posterior look like? For the visualization we're using triangle.py (https://github.com/dfm/triangle.py)
import triangle samples = np.vstack([pool.thetas for pool in pools]) fig = triangle.corner(samples, truths= means)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Omitting the first couple of iterations...
idx = -1 samples = pools[idx].thetas fig = triangle.corner(samples, weights=pools[idx].ws, truths= means) for mean, std in zip(*abcpmc.weighted_avg_and_std(samples, pools[idx].ws, axis=0)): print(u"mean: {0:>.4f} \u00B1 {1:>.4f}".format(mean,std))
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Variables & Terminology $W_{i}$ - weights of the $i$th layer $B_{i}$ - biases of the $i$th layer $L_{a}^{i}$ - activation (Inner product of weights and inputs of previous layer) of the $i$th layer. $L_{o}^{i}$ - output of the $i$th layer. (This is $f(L_{a}^{i})$, where $f$ is the activation function) MLP with...
from IPython.display import YouTubeVideo YouTubeVideo("LOc_y67AzCA") import numpy as np from utils import backprop_decision_boundary, backprop_make_classification, backprop_make_moons from sklearn.metrics import accuracy_score from theano import tensor as T from theano import function, shared import matplotlib.pyplot ...
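The notation above translates into a forward pass of just a few lines. Here is a minimal NumPy sketch (independent of the Theano code used in this notebook), where each layer computes $L_{a}^{i} = L_{o}^{i-1} W_i + B_i$ and $L_{o}^{i} = f(L_{a}^{i})$ with a sigmoid $f$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, weights, biases):
    """Forward pass: L_a^i = L_o^(i-1) @ W_i + B_i,  L_o^i = f(L_a^i)."""
    L_o = X  # the "output" of layer 0 is the input itself
    for W, B in zip(weights, biases):
        L_a = L_o @ W + B   # activation of layer i
        L_o = sigmoid(L_a)  # output of layer i
    return L_o

rng = np.random.RandomState(0)
X = rng.randn(5, 2)                           # 5 samples, 2 features
weights = [rng.randn(2, 3), rng.randn(3, 1)]  # a 2 -> 3 -> 1 network
biases = [np.zeros(3), np.zeros(1)]
out = forward(X, weights, biases)             # shape (5, 1), values in (0, 1)
```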
notebooks/day4/04_backpropagation.ipynb
jaidevd/inmantec_fdp
mit
Exercise: Implement an MLP with two hidden layers for the following dataset.
X, Y = backprop_make_moons() plt.scatter(X[:, 0], X[:, 1], c=np.argmax(Y, axis=1))
notebooks/day4/04_backpropagation.ipynb
jaidevd/inmantec_fdp
mit
Hints:
- Use two hidden layers, one containing 3 and the other containing 4 neurons
- Use learning rate $\alpha$ = 0.2
- Try to make the network converge in 1000 iterations
# enter code here
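One possible NumPy sketch of such a network (two hidden layers with 3 and 4 neurons, $\alpha = 0.2$, 1000 iterations). It is trained here on a small synthetic two-class dataset as a stand-in for the moons data, and uses plain NumPy instead of the notebook's Theano setup:

```python
import numpy as np

rng = np.random.RandomState(42)

# Small synthetic two-class dataset (a stand-in for the moons data above)
n = 200
X = rng.randn(n, 2)
Y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes: 2 -> 3 -> 4 -> 1
W1, b1 = rng.randn(2, 3) * 0.5, np.zeros(3)
W2, b2 = rng.randn(3, 4) * 0.5, np.zeros(4)
W3, b3 = rng.randn(4, 1) * 0.5, np.zeros(1)

alpha = 0.2  # learning rate from the hints
losses = []
for _ in range(1000):
    # Forward pass
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    out = sigmoid(a2 @ W3 + b3)
    losses.append(np.mean((out - Y) ** 2))
    # Backward pass (squared-error loss; sigmoid derivative is a * (1 - a))
    d3 = (out - Y) * out * (1 - out)
    d2 = (d3 @ W3.T) * a2 * (1 - a2)
    d1 = (d2 @ W2.T) * a1 * (1 - a1)
    # Mean-gradient updates
    W3 -= alpha * a2.T @ d3 / n; b3 -= alpha * d3.mean(0)
    W2 -= alpha * a1.T @ d2 / n; b2 -= alpha * d2.mean(0)
    W1 -= alpha * X.T @ d1 / n;  b1 -= alpha * d1.mean(0)
```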
notebooks/day4/04_backpropagation.ipynb
jaidevd/inmantec_fdp
mit
We will sample 2000 times from $Z \sim N(0,I_{50 \times 50})$ and look at the normalized spacing between the top 2 values.
Z = np.random.standard_normal((2000,50)) T = np.zeros(2000) for i in range(2000): W = np.sort(Z[i]) T[i] = W[-1] * (W[-1] - W[-2]) Ugrid = np.linspace(0,1,101) covtest_fig = plt.figure(figsize=(6,6)) ax = covtest_fig.gca() ax.plot(Ugrid, ECDF(np.exp(-T))(Ugrid), drawstyle='steps', c='k', label='covtest...
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
The covariance test is an asymptotic result, and can be used in a sequential procedure called forward stop to determine when to stop the LASSO path. An exact version of the covariance test was developed in a general framework for problems beyond the LASSO using the Kac-Rice formula. A sequential version along the LARS...
from scipy.stats import norm as ndist Texact = np.zeros(2000) for i in range(2000): W = np.sort(Z[i]) Texact[i] = ndist.sf(W[-1]) / ndist.sf(W[-2]) ax.plot(Ugrid, ECDF(Texact)(Ugrid), c='blue', drawstyle='steps', label='exact covTest', linewidth=3) covtest_fig
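The forward stop rule mentioned above can be sketched as follows: given sequential $p$-values $p_1, \dots, p_m$ along the path, ForwardStop selects the largest $k$ such that $\frac{1}{k}\sum_{i=1}^{k} -\log(1 - p_i) \le \alpha$. A minimal illustration of the rule, not the package's implementation:

```python
import numpy as np

def forward_stop(pvals, alpha=0.1):
    """Return the ForwardStop step: the largest k such that the running
    mean of -log(1 - p_i) over the first k p-values is <= alpha
    (0 if no such k exists)."""
    stats = np.cumsum(-np.log(1.0 - np.asarray(pvals)))
    means = stats / np.arange(1, len(pvals) + 1)
    below = np.nonzero(means <= alpha)[0]
    return below[-1] + 1 if len(below) else 0
```

Small sequential $p$-values (strong evidence early on the path) keep the running mean below $\alpha$, so the procedure keeps stepping; once large $p$-values dominate, it stops.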
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Covariance test for regression The above tests were based on an IID sample, though both the covtest and its exact version can be used in a regression setting. Both tests need access to the covariance of the noise. Formally, suppose $$ y|X \sim N(\mu, \Sigma) $$ the exact test is a test of $$H_0:\mu=0.$$ The test is b...
n, p, nsim, sigma = 50, 200, 1000, 1.5 def instance(n, p, beta=None, sigma=sigma): X = (np.random.standard_normal((n,p)) + np.random.standard_normal(n)[:,None]) X /= X.std(0)[None,:] X /= np.sqrt(n) Y = np.random.standard_normal(n) * sigma if beta is not None: Y += np.dot(X, beta) retur...
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Let's make a dataset under our global null and compute the exact covtest $p$-value.
X, Y = instance(n, p, sigma=sigma) cone, pval, idx, sign = covtest(X, Y, exact=False) pval
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
The object cone is an instance of selection.affine.constraints which does much of the work for affine selection procedures. The variables idx and sign store which variable achieved $\lambda_{\max}$ and the sign of its correlation with $y$.
cone def simulation(beta): Pcov = [] Pexact = [] for i in range(nsim): X, Y = instance(n, p, sigma=sigma, beta=beta) Pcov.append(covtest(X, Y, sigma=sigma, exact=False)[1]) Pexact.append(covtest(X, Y, sigma=sigma, exact=True)[1]) Ugrid = np.linspace(0,1,101) plt.figure(fig...
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Null
beta = np.zeros(p) simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
1-sparse
beta = np.zeros(p) beta[0] = 4 simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
2-sparse
beta = np.zeros(p) beta[:2] = 4 simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
5-sparse
beta = np.zeros(p) beta[:5] = 4 simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Show gain matrix a.k.a. leadfield matrix with sensitivity map
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False) picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True) fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True) fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14) for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'e...
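A sensitivity map can be read off the gain matrix directly: up to normalization, it is the column-wise norm of the leadfield. A minimal NumPy sketch of that relationship (illustrative only; `mne.sensitivity_map` additionally handles source orientations and the different normalization modes):

```python
import numpy as np

def sensitivity_from_gain(G):
    """Column-wise L2 norm of a gain (leadfield) matrix
    G of shape (n_sensors, n_dipoles), scaled to [0, 1]."""
    s = np.linalg.norm(G, axis=0)
    return s / s.max()

rng = np.random.RandomState(0)
G = rng.randn(10, 4)  # toy gain matrix: 10 sensors, 4 dipoles
sens = sensitivity_from_gain(G)
```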
0.21/_downloads/2212671cb1d04d466a35eb15470863da/plot_forward_sensitivity_maps.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Synthetic Dataset Generation
def generate_data( slopes: List[float], group_size: int, noise_stddev: float ) -> pd.DataFrame: """ Generate `len(slopes)` * `group_size` examples with dependency y = slope * x + noise. """ dfs = [] for i, slope in enumerate(slopes): curr_df = pd.DataFrame...
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Let us create a situation where in-fold generation of target-based features leads to overfitting. To do so, we make a lot of small categories and set the noise variance to a high value. Such settings result in leakage of noise from the target into the in-fold generated mean. Thus, the regressor learns how to use this leakage, which is us...
slopes = [2, 1, 3, 4, -1, -2, 3, 2, 1, 5, -2, -3, -5, 8, 1, -7, 0, 2, 0] group_size = 5 noise_stddev = 10 train_df = generate_data(slopes, group_size, noise_stddev) train_df.head()
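The leakage mechanism can be made explicit with a tiny sketch contrasting in-fold and leave-one-out target mean encoding (hypothetical helper names, not the dsawl API): the in-fold version lets each row's own noisy target flow into its feature, while the leave-one-out version does not.

```python
import numpy as np
import pandas as pd

def in_fold_mean(df):
    # Leaky: each row's own target contributes to its encoding.
    return df.groupby('category')['target'].transform('mean')

def out_of_fold_mean(df):
    # Leave-one-out mean: exclude the row's own target.
    grp = df.groupby('category')['target']
    s, c = grp.transform('sum'), grp.transform('count')
    return (s - df['target']) / (c - 1)

rng = np.random.RandomState(0)
df = pd.DataFrame({'category': np.repeat(np.arange(5), 5),
                   'target': rng.randn(25)})
leaky = in_fold_mean(df)      # correlates with target via shared noise
clean = out_of_fold_mean(df)  # independent of each row's own noise
```

With small groups and high noise variance, a regressor trained on the leaky encoding can partially "read back" each row's noise, which is exactly the overfitting the settings above are designed to provoke.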
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Generate test set from the same distribution (in a way that preserves balance between categories).
test_df = generate_data(slopes, group_size, noise_stddev) test_df.head()
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit