# Vertex SDK: AutoML training tabular binary classification model for batch prediction

<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_binary_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_binary_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_binary_classification_batch.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>

## Overview

This tutorial demonstrates how to use the Vertex SDK to create a tabular binary classification model and make batch predictions using a Google Cloud AutoML model.

## Dataset

The dataset used for this tutorial is the Bank Marketing dataset. This dataset does not require any feature engineering. The version of the dataset used in this tutorial is stored in a public Cloud Storage bucket.

## Objective

In this tutorial, you create an AutoML tabular binary classification model from a Python script, and then make a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.

The steps performed include:

- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:

- **Prediction Service**: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real time.
- **Batch Prediction Service**: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.

## Costs

This tutorial uses billable components of Google Cloud:

- Vertex AI
- Cloud Storage

Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

## Set up your local development environment

If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook and you can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following:

- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3

The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

1. Install and initialize the SDK.
2. Install Python 3.
3. Install virtualenv and create a virtual environment that uses Python 3.
4. Activate the virtual environment.
5. To install Jupyter, run `pip3 install jupyter` on the command line in a terminal shell.
6. To launch Jupyter, run `jupyter notebook` on the command line in a terminal shell.
7. Open this notebook in the Jupyter Notebook Dashboard.

## Installation

Install the latest version of the Vertex SDK for Python.
```python
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
notebooks/community/sdk/sdk_automl_tabular_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Here, we just check whether there are any events within the time step dt, and if there are, we assume there is only one of them. This is a horrible approximation. Don't do this.
```python
import numpy as np

class PoissonSpikingApproximate(object):
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.value = 1.0 / dt
        self.size = size
        self.output = np.zeros(size)

    def __call__(self, t, x):
        self.output[:] = 0
        # probability of at least one event in a window of length dt at rate x
        p = 1.0 - np.exp(-x * self.dt)
        self.output[p > self.rng.rand(self.size)] = self.value
        return self.output
```
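A quick way to see why this approximation is poor: the expected output rate is (1 - exp(-x*dt)) / dt, which saturates well below the requested rate as x*dt grows. A minimal check (plain NumPy, independent of the class above):

```python
import numpy as np

dt = 0.001
rates = np.array([10.0, 100.0, 1000.0])  # requested rates in Hz

# expected firing rate of the approximation: P(at least one event in dt) / dt
effective = (1.0 - np.exp(-rates * dt)) / dt

for r, e in zip(rates, effective):
    print(r, e)  # at 1000 Hz the approximation yields only ~632 Hz
```

At low rates the error is negligible (10 Hz comes out as ~9.95 Hz), but the approximation can never fire more than once per time step, so it badly undercounts at high rates.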
spike trains - poisson and regular.ipynb
tcstewar/testing_notebooks
gpl-2.0
For this attempt, I'm screwing something up in the logic, as it's firing more frequently than it should. The idea is to use the same approach as above to see if any spikes happen during the dt. If there are any spikes, count that as 1 spike. But now I need to know if there are any more spikes (given that we know at least one spike happened). Since one spike happened at one instant in time, but all the other points in time during that dt could also have a spike (since the whole point of a Poisson process is that everything's independent), we can remove that one infinitesimal point in time from dt, leaving us with dt, and check whether any spikes happened in that remaining time. In other words, just do the same process over again, only considering those neurons that did spike on the previous iteration of this logic. Continue until no neurons have another spike. I'm not sure if my logic is bad or my coding is bad, but this doesn't work.
```python
class PoissonSpikingExactBad(object):
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.value = 1.0 / dt
        self.size = size
        self.output = np.zeros(size)

    def __call__(self, t, x):
        p = 1.0 - np.exp(-x * self.dt)
        self.output[:] = 0
        s = np.where(p > self.rng.rand(self.size))[0]
        self.output[s] += self.value
        count = len(s)
        while count > 0:
            # reusing the full-window p for each extra spike makes the spike
            # count geometric rather than Poisson, which overcounts
            s2 = np.where(p[s] > self.rng.rand(count))[0]
            s = s[s2]
            self.output[s] += self.value
            count = len(s)
        return self.output
```
Now for one that works. Here we take the approach of actually figuring out when during the time step the events happen, and continue until we fall off the end of the time step. This is how everyone says to do it.
```python
import pylab
import nengo
import nengo.utils.matplotlib

class PoissonSpikingExact(object):
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.value = 1.0 / dt
        self.size = size
        self.output = np.zeros(size)

    def next_spike_times(self, rate):
        # sample inter-spike intervals from an exponential distribution
        return -np.log(1.0 - self.rng.rand(len(rate))) / rate

    def __call__(self, t, x):
        self.output[:] = 0
        next_spikes = self.next_spike_times(x)
        s = np.where(next_spikes < self.dt)[0]
        count = len(s)
        self.output[s] += self.value
        while count > 0:
            next_spikes[s] += self.next_spike_times(x[s])
            s2 = np.where(next_spikes[s] < self.dt)[0]
            count = len(s2)
            s = s[s2]
            self.output[s] += self.value
        return self.output

model = nengo.Network()
with model:
    freq = 10
    stim = nengo.Node(lambda t: np.sin(t * np.pi * 2 * freq))
    ens = nengo.Ensemble(n_neurons=5, dimensions=1,
                         neuron_type=nengo.LIFRate(), seed=1)
    nengo.Connection(stim, ens, synapse=None)

    # RegularSpiking is defined in an earlier cell of this notebook
    regular_spikes = nengo.Node(RegularSpiking(ens.n_neurons),
                                size_in=ens.n_neurons)
    nengo.Connection(ens.neurons, regular_spikes, synapse=None)

    poisson_spikes = nengo.Node(PoissonSpikingExact(ens.n_neurons, seed=1),
                                size_in=ens.n_neurons)
    nengo.Connection(ens.neurons, poisson_spikes, synapse=None)

    p_rate = nengo.Probe(ens.neurons)
    p_regular = nengo.Probe(regular_spikes)
    p_poisson = nengo.Probe(poisson_spikes)

sim = nengo.Simulator(model)
sim.run(0.1)

pylab.figure(figsize=(10, 8))
pylab.subplot(3, 1, 1)
pylab.plot(sim.trange(), sim.data[p_rate])
pylab.xlim(0, sim.time)
pylab.ylabel('rate')
pylab.subplot(3, 1, 2)
nengo.utils.matplotlib.rasterplot(sim.trange(), sim.data[p_regular])
pylab.xlim(0, sim.time)
pylab.ylabel('regular spiking')
pylab.subplot(3, 1, 3)
nengo.utils.matplotlib.rasterplot(sim.trange(), sim.data[p_poisson])
pylab.xlim(0, sim.time)
pylab.ylabel('poisson spiking')
pylab.show()
```
Let's test the accuracy of these models.
```python
def test_accuracy(cls, rate, T=1):
    test_model = nengo.Network()
    with test_model:
        stim = nengo.Node(rate)
        spikes = nengo.Node(cls(1, seed=1), size_in=1)
        nengo.Connection(stim, spikes, synapse=None)
        p = nengo.Probe(spikes)
    sim = nengo.Simulator(test_model)
    sim.run(T, progress_bar=False)
    return np.mean(sim.data[p])

rates = np.linspace(0, 1000, 11)
result_approx = [test_accuracy(PoissonSpikingApproximate, r) for r in rates]
result_bad = [test_accuracy(PoissonSpikingExactBad, r) for r in rates]
result_exact = [test_accuracy(PoissonSpikingExact, r) for r in rates]

pylab.plot(rates, result_approx, label='spike rate approx')
pylab.plot(rates, result_bad, label='spike rate bad')
pylab.plot(rates, result_exact, label='spike rate exact')
pylab.plot(rates, rates, ls='--', c='k', label='ideal')
pylab.legend(loc='best')
pylab.show()
```
<h2> Let's look at a slice of retention time </h2>
```python
def get_rt_slice(df, rt_bounds):
    '''
    PURPOSE: Given a tidy feature table with 'mz' and 'rt' column headers,
        retain only the features whose rt falls within rt_bounds
    INPUT:
        df - a tidy pandas dataframe with 'mz' and 'rt' column headers
        rt_bounds - (left, right) boundaries of your rt slice, in seconds
    '''
    out_df = df.loc[(df['rt'] > rt_bounds[0]) & (df['rt'] < rt_bounds[1])]
    return out_df

df_slice = get_rt_slice(df, (750, 1050))
print df_slice.shape

# separate samples from xcms/camera columns to make the feature table
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
               'npeaks', 'uhplc_pos', ]
samples_list = df_slice.columns.difference(not_samples)
mz_rt_df = df_slice[not_samples]

# convert to samples x features
X_df_raw = df_slice[samples_list].T

# remove zero-full columns and fill zeroes with 1/2 minimum values
X_df = remove_zero_columns(X_df_raw)
print X_df.shape
X_slice = X_df.as_matrix()

# pqn-normalize (note: normalize the slice, not the full matrix)
X_slice_pqn = pqn_normalize(X_slice)

# Make a null model AUC curve & compare it to the true model
# for the slice of retention time.
# Random forest magic!
```
```python
rf_estimators = 1000
n_iter = 50
test_size = 0.3
random_state = 1

cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size,
                                      random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators,
                                random_state=random_state)
true_auc, all_aucs = make_null_model(X_slice_pqn, y, clf_rf, cross_val_rf,
                                     num_shuffles=5)

# make a dataframe from the true and null aucs
flattened_aucs = [j for i in all_aucs for j in i]
my_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
                  value_name='auc', var_name='AUC_type')
print df_tidy.head()

# plot the distribution of AUC values
sns.violinplot(x='AUC_type', y='auc', inner='points', data=df_tidy, bw=0.7)
plt.title("Classification is not possible when data is shuffled")
plt.xlabel('True model vs. Null Model')
plt.ylabel('AUC')
plt.savefig('/home/irockafe/Desktop/auc_distribution', format='pdf')
plt.show()
```
notebooks/MTBLS315/exploratory/Retention_time_slice_MTBLS315_uhplc_pos_classifer-4ppm.ipynb
irockafe/revo_healthcare
mit
## Fit a model

Is there a relationship between the diamond price and its weight? Our first goal is to determine whether the data provide evidence of an association between price and carats. If the evidence is weak, one might argue that bigger diamonds are not better! To evaluate the model we will use a dedicated Python package, statsmodels, which provides convenient functions for this. Statsmodels is based on the original (later removed) statistics module of SciPy (Scientific Python) by Jonathan Taylor, corrected, improved, tested and released as a new package during the Google Summer of Code 2009.
import statsmodels.api as sm
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Since statsmodels also offers functions to fit a linear regression model, we do not need to import sklearn; we can do everything with statsmodels. We will use its function OLS(), which fits a linear regression based on the Ordinary Least Squares algorithm. The model we want to fit is:

y_hat = beta0 + beta1 * x

where y_hat is the estimated diamond price (the dependent variable) and x is the diamond weight (the independent variable). An intercept is not included by default and must be added by the user:
```python
X = sm.add_constant(diamondData.carats)  # this appends a column of ones
simpleModel = sm.OLS(diamondData.price, X).fit()  # fit the model
simpleModel.params  # the beta coefficients (intercept and slope of the regression line)
```
The intercept (beta0) is -259.6 and the slope (beta1) is 3721. Therefore our simple (one-variable) model looks like:

Diamond Price = -259.6 + 3721 * Diamond Weight

We can plot the obtained regression line together with the input data X, either by drawing a line using the beta parameters just calculated or by plotting the fitted values:
```python
%matplotlib inline
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(diamondData.carats, diamondData.price)

# draw the linear regression line
x = [0.1, 0.4]
y = [-259.6 + 3721 * i for i in x]
ax.plot(x, y)

# alternatively, plot the fitted values
# y_hat = simpleModel.fittedvalues
# ax.plot(diamondData.carats, y_hat)

# pretty-up the plot
fig.suptitle("Relation between diamonds' price and weight")
ax.set_ylabel('Price [SIN $]')
ax.set_xlabel('Weight [carat]')
ax.grid(True)
```
This answers our first question: there is a relationship between the price and the weight of a diamond, and we can model it.

## Analyse the model

Which kind of relation is there between weight and price? The next question is to find out whether the relation is linear and what it looks like.

### Coefficients interpretation

Beta1 (the slope of the regression line) is the expected change in response for a 1-unit change in the predictor. In our case, we expect a 3721 Singapore dollar increase in price for every one-carat increase in the mass of a diamond. This holds within the restricted range considered; extrapolating the regression line to bigger diamond stones would not be advisable, as these stones are rarer and command a different price range.

Beta0 (the intercept of the regression line) is the expected price when the weight is zero. This does not always make sense, and in our case the negative intercept is even more puzzling, because it suggests that a zero-carat diamond has a negative economic value!

### Getting a more interpretable intercept

The intercept -259.63 is the expected price of a 0-carat diamond, which does not make much sense (unless you consider it the cost of bothering the diamond expert when you ask the price of a non-existing diamond :) ). It can be ignored (the model applies only to a restricted range of the data, say starting from 0.1 carats), or it can be an indication that a different model could be more precise, for example a non-linear regression. We can also use instead the expected price for a more suitable weight, for example the average diamond weight.
diamondData.carats.mean() # this is the weight mean of our dataset
Instead of X as input for the model, we take X centered around the mean, i.e. we shift X by a value equal to the sample mean:
```python
XmeanCentered = diamondData.carats - diamondData.carats.mean()
XmeanCentered = sm.add_constant(XmeanCentered)  # append a column of ones
meanCenteredModel = sm.OLS(diamondData.price, XmeanCentered).fit()  # fit a new model
meanCenteredModel.params
```
As you can see, the slope is the same as in the previous model; only the intercept shifted. This always holds when you shift your X values. Thus $500.1 is the expected price for a diamond of the average size in the initial dataset (0.2042 carats). This is an intercept that makes much more sense. Besides shifting the X input by a certain value, you can also re-scale it. This can be useful when one unit is quite large and a finer unit would be preferable. For example, in our case one carat is worth almost 4K SIN$, so it could make sense to talk about tenths of a carat (= 1/10 carat).
```python
Xtenth = diamondData.carats * 10  # rescale the X
Xtenth = sm.add_constant(Xtenth)
tenthModel = sm.OLS(diamondData.price, Xtenth).fit()  # again, fit the model
tenthModel.params
```
The intercept is the same as in the original model; only the slope coefficient is divided by 10. This always holds when you re-scale the X values. We expect a 372.102 (SIN) dollar change in price for every 1/10th of a carat increase in the mass of a diamond.

## Predicting the price of a diamond

Once we have a model of the relation, we can use it for predictions. The statsmodels package provides a predict() method on each fitted model, which takes a new set of inputs and outputs the predicted values according to the model. Let's say I want to buy a 0.2-carat diamond. How much should I expect it to cost? I can take the beta parameters estimated by the model and plug them into the linear regression formula:
simpleModel.params[0] + 0.2*simpleModel.params[1]
I expect to pay around 485 SIN $. Or I can use the predict() function available in the statsmodels package:
```python
newDiamond = [1, 0.2]  # remember to always add the intercept!
simpleModel.predict(newDiamond)
```
It's also possible to pass a list of values to predict:
```python
newDiamonds = sm.add_constant([0.16, 0.27, 0.34])  # add the intercept
simpleModel.predict(newDiamonds)
```
Result: for 0.16, 0.27, and 0.34 carats, we predict the prices to be 335.74, 745.05, and 1005.52 (SIN) dollars.

## Model fit

How strong is the relationship? We know that there is a relationship between diamond carats and prices; we would like to know the strength of this relationship. In other words, given a certain diamond weight, can we predict the price with a high level of accuracy? That would be a strong relationship. Or is a prediction of prices based on weight only slightly better than a random guess? That would be a weak relationship.

### Residuals

As we have seen previously, the residuals are the differences between the observed (y) and the predicted (y_hat) outcomes:
```python
y = diamondData.price
y_hat = simpleModel.fittedvalues
max(abs(y - y_hat))
```
Conveniently, residuals are also stored in the results attribute resid:
```python
residuals = simpleModel.resid
max(abs(residuals))
```
85 SIN$ (by defect or excess) is the biggest error the model makes on the data. Don't confuse errors and residuals: an error is the deviation of an observed value from the (unobservable) true value of a quantity of interest (for example, a population mean), while a residual is the difference between an observed value and the estimated value of the quantity of interest (for example, a sample mean). We can learn many things from the residuals; for one, their distribution and properties give an indication of the model fit.

### Residuals should not show any pattern

The residuals and their plot can highlight a poor model fit. Here we plot the residuals versus the fitted values:
```python
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(simpleModel.fittedvalues, residuals, 'o')  # as round marks
ax.plot((0, 1200), (0, 0))  # draw also a line at y=0

# pretty-up the plot
fig.suptitle("Residuals versus fitted prices")
ax.set_ylabel('Residuals [SIN $]')
ax.set_xlabel('Price [SIN $]')
ax.grid(True)
```
The residuals should be distributed uniformly, without showing any pattern, and have a constant variance. If the residuals vs. fitted plot shows the variance of the residuals increasing as the fitted values increase (taking the form of a horizontal cone), this is a sign of heteroscedasticity. Homoscedasticity describes a situation in which the error term (that is, the "noise" or random disturbance in the relationship between the independent variables and the dependent variable) is the same across all values of the independent variables. Heteroscedasticity (the violation of homoscedasticity) is present when the size of the error term differs across values of an independent variable; a scatterplot of the residuals against the predicted values of the dependent variable would then show the classic cone-shaped pattern. Other patterns could be:

- curvilinear (indicates a non-linear relation / missing higher-order term)
- a single point far away from zero (probably an outlier)
- a single point far away from the others in the x-direction (probably an influential point)

### Residuals should be normally distributed

The sum of the residuals is expected to be zero (when there is an intercept). This follows directly from the normal equations, i.e. the equations that the OLS estimator solves.
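To see what heteroscedastic residuals look like, here is a small self-contained simulation (plain NumPy, not the diamond data): noise whose standard deviation grows with x produces residuals whose spread widens with the fitted values, the cone shape described above.

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(1, 10, 500)
# noise whose standard deviation grows with x -> heteroscedastic
y = 2.0 * x + rng.normal(scale=0.5 * x)

# ordinary least squares fit via the normal equations
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# the residual spread in the upper half of x is clearly larger
lo = residuals[x < 5.5].std()
hi = residuals[x >= 5.5].std()
print(lo, hi)  # hi is noticeably larger than lo
```

Plotting these residuals against the fitted values would show exactly the horizontal cone discussed above.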
sum(residuals)
The mean of the residuals is also expected to be zero. This comes directly from the fact that OLS minimises the sum of squared residuals.
```python
import numpy as np
np.mean(residuals)
```
This is one of the assumptions for regression analysis: residuals should have a normal (or Gaussian) distribution.
plt.hist(residuals)
It looks normal, but we can verify better with a Q-Q plot.

### Q-Q plot to verify the residuals distribution

A Q-Q plot (short for "quantile-quantile plot") can be used to check whether data is normally distributed. It is a plot whose axes are purposely transformed so that a normal (Gaussian) distribution appears as a straight line: a perfectly normal distribution would exactly follow a line with slope 1 and intercept 0. Therefore, if the plot does not appear to be (roughly) a straight line, the underlying distribution is not normal. If it bends up, for instance, there are more "high flyer" values than expected. The theoretical quantiles are placed along the x-axis; that is, the x-axis is not your data, it is simply an expectation of where your data should be if it were normal. The actual data is plotted along the y-axis, in standard deviations from the mean: 0 is the mean of the data, 1 is one standard deviation above, and so on. This means, for instance, that 68.27% of all your data should fall between -1 and 1 if you have a normal distribution. statsmodels offers a handy qqplot() function:
sm.qqplot(residuals, fit=True, line = '45')
### Estimating residual variation

The residual variation measures how well the regression line fits the data points. It is the variation in the dependent variable (price) that is not explained by the regression model, and it is represented by the residuals. We want the residual variation to be as small as possible. Each residual is distributed normally with mean 0 and variance sigma_squared. We have previously seen that the ML estimate of the variance sigma_squared is the sum of squared residuals divided by n, and we called it the Mean Squared Error (MSE). Most people use (n-2) instead of n so that the estimator is unbiased (the -2 accounts for the degrees of freedom used by the intercept and slope). The square root of the estimate, sigma, is called the Root Mean Squared Error (RMSE). We want both MSE and RMSE to be as small as possible. In our diamonds example, the estimated residual variation (unbiased RMSE) is:
```python
n = len(y)
MSE = sum(residuals**2) / (n - 2)
RMSE = np.sqrt(MSE)
RMSE
```
The RMSE can also be used to calculate the standardized residuals: each equals the value of a residual divided by an estimate of its standard deviation (i.e., the RMSE). Large standardized residuals are an indication of an outlier.
max(simpleModel.resid / RMSE)
## Summarizing the variation: R-squared

The total variation is the residual variation (variation left after removing the predictors) plus the systematic variation (variation explained by the regression model). R-squared is the percentage of variability explained by the regression model:

R-squared = explained variation / total variation = 1 - residual variation / total variation

R-squared is always between 0 and 1 (0% and 100%):

- 0% indicates that the model explains none of the variability of the response data around its mean.
- 100% indicates that the model explains all the variability of the response data around its mean.

In general, the higher the R-squared, the better the model fits your data.
simpleModel.rsquared
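The defining formula can be cross-checked with a self-contained toy example (not the diamond data): R-squared computed as 1 - SS_res/SS_tot matches the squared correlation between x and y for a simple linear fit.

```python
import numpy as np

rng = np.random.RandomState(1)
x = np.linspace(0, 1, 100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# least-squares line
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)       # residual variation
ss_tot = np.sum((y - y.mean()) ** 2)    # total variation
r_squared = 1.0 - ss_res / ss_tot

# for simple linear regression, R^2 equals the squared correlation coefficient
r = np.corrcoef(x, y)[0, 1]
print(r_squared, r**2)  # the two values agree
```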
We are quite close to a perfect model. A fitted-line plot graphically illustrates different R-squared values: the more variation is explained by the model, the closer the data points fall to the line. Theoretically, if a model could explain 100% of the variation, the fitted values would always equal the observed values and all of the data points would fall on the line.
```python
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(simpleModel.fittedvalues, diamondData.price, 'o')  # as round marks
plt.plot(simpleModel.fittedvalues, simpleModel.fittedvalues, '--')  # identity line

# pretty-up the plot
fig.suptitle("Relation between estimated and actual diamonds' prices")
ax.set_xlabel('Estimated Price [SIN $]')
ax.set_ylabel('Actual Price [SIN $]')
ax.grid(True)
```
R-squared can be a misleading summary and should be taken carefully (deleting data can inflate R-squared, for example). In conclusion (given the residuals distribution and variation), the model is pretty good and the relation is very strong. For this reason, the adjusted R-squared is sometimes preferred: R-squared adjusted for the number of observations. Several formulas can be used; normally it is Wherry's formula:
1 - (1-simpleModel.rsquared)*((n-1)/simpleModel.df_resid)
Of course, it is also available from the model results:
simpleModel.rsquared_adj
## Confidence

How accurately can we predict diamond prices? For any given weight in carats, what is our prediction for the price, and what is the accuracy of this prediction?

In statistics, a sequence of random variables is independent and identically distributed (IID) if each random variable has the same probability distribution as the others and all are mutually independent.

### Inference for regression

In the case of regression with IID sampling assumptions and normally distributed residuals, the statistics for our estimated beta coefficients:

- follow finite-sample t-distributions and are normally distributed
- can be used to test a null hypothesis
- can be used to create a confidence interval

In probability and statistics, the t-distribution is any member of a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and the population standard deviation is unknown. Whereas a normal distribution describes a full population, t-distributions describe samples drawn from a full population. The t-distribution becomes closer to the normal (Gaussian) distribution as its degrees of freedom (df) increase. The t-distribution arises in a variety of statistical estimation problems where the goal is to estimate an unknown parameter, such as a mean value, in a setting where the data are observed with additive errors. If the population standard deviation of these errors is unknown and has to be estimated from the data, the t-distribution is often used to account for the extra uncertainty that results from this estimation. Confidence intervals and hypothesis tests are two statistical procedures in which the quantiles of the sampling distribution of a particular statistic (e.g. the standard score) are required. Confidence levels are expressed as a percentage (for example, a 95% confidence level).
A 95% confidence level means that if you were to repeat the sampling over and over again, 95 percent of the resulting intervals would contain the true population value. When the population standard deviation sigma is not known, an interval estimate for the population mean with confidence level (1 - alpha) is given by:

Xmean +- t * (estimated standard error of the mean)

where t is a critical value determined from the t-distribution in such a way that there is an area of (1 - alpha) between t and -t. First, we need to calculate the variance.

### Estimating the coefficients and the variance

Recall that our linear regression model is:

Y = beta0 + beta1 * X + errors

We can define the beta parameters as:

beta0 = mean(Y) - beta1 * mean(X)
beta1 = Cor(Y, X) * Sd(Y) / Sd(X)

For our diamonds example:
```python
# prepare the data
y = diamondData.price
x = diamondData.carats
n = len(y)

# calculate beta1
beta1 = (np.corrcoef(y, x) * np.std(y) / np.std(x))[0][1]
beta1

# calculate beta0
beta0 = np.mean(y) - beta1 * np.mean(x)
beta0
```
Sigma is unknown, but its estimate is the square root of the sum of the squared residuals divided by n-2 (the degrees of freedom):
```python
e = y - beta0 - beta1 * x  # the residuals

# unbiased estimate of sigma
sigma = np.sqrt(sum(e**2) / (n - 2))
sigma
```
### Confidence intervals

A 95% confidence interval is defined as a range of values such that, with 95% probability, the range will contain the true unknown value of the parameter.

Recall of quantiles and percentiles of a distribution: if you were at the 95th percentile on an exam, it means that 95% of people scored worse than you and 5% scored better. These are sample quantiles. Now for a population: the i-th quantile of a distribution with distribution function F is simply the point x_i such that F(x_i) = i. A percentile is simply a quantile with i expressed as a percent: the 95th percentile of a distribution is the point below which a random variable drawn from the population falls with 95% probability. Approximately 68%, 95% and 99% of the normal density lies within 1, 2 and 3 standard deviations from the mean, respectively.

### Estimating the standard errors

Now we need to calculate the standard errors.
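These quantile facts are easy to check numerically with SciPy (a standalone check, separate from the diamond analysis):

```python
from scipy.stats import norm

# the 95th percentile of the standard normal: the point x with F(x) = 0.95
x95 = norm.ppf(0.95)
print(x95)  # about 1.645

# fraction of the density within 1, 2 and 3 standard deviations of the mean
for k in (1, 2, 3):
    print(k, norm.cdf(k) - norm.cdf(-k))  # ~0.683, ~0.954, ~0.997
```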
```python
ssx = sum((x - np.mean(x))**2)

# calculate standard error for beta0
seBeta0 = (1.0 / n + np.mean(x)**2 / ssx)**0.5 * sigma
seBeta0

# calculate standard error for beta1
seBeta1 = sigma / np.sqrt(ssx)
seBeta1
```
The standard error of a parameter measures the precision of the estimate of the parameter: the smaller the standard error, the more precise the estimate.

## Hypothesis testing

Hypothesis testing is concerned with making decisions using data. A null hypothesis that represents the status quo is specified, usually labelled H0. The null hypothesis is assumed true, and statistical evidence is required to reject it in favour of an alternative hypothesis. Consider testing H0: mu = mu0. If we take the set of all possible values for which we fail to reject H0, this set is a (1 - alpha) confidence interval for mu, with alpha depending on the set.

### Getting the t-values

We test the null hypothesis H0 that each true coefficient is zero (i.e., that the estimated beta0 and beta1 carry no real information). Dividing each parameter by its standard error calculates a t-value:
tBeta0 = beta0 / seBeta0
tBeta0

tBeta1 = beta1 / seBeta1
tBeta1
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
P-values

P-values are the most common measure of "statistical significance". The p-value is the probability, under the null hypothesis, of obtaining evidence as extreme or more extreme than what was actually observed. If the p-value is small, then either H0 is true and we have observed a rare event, or H0 is false. Say a p-value is 0.1: then the probability of seeing evidence as extreme or more extreme than what was actually obtained under H0 is 0.1 (10%). By reporting a p-value, any observer can perform their own hypothesis test at whatever alpha level they choose: if the p-value is less than alpha, the null hypothesis is rejected.

Estimating the p-values for the null hypothesis that beta0 and beta1 are equal to zero

We can use the t-distribution module from SciPy.stats to calculate the p-values.
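For example, the two-sided p-value for an illustrative t statistic (the values 2.5 and 10 are arbitrary, not taken from the notebook):

```python
from scipy.stats import t

t_stat, df = 2.5, 10                       # illustrative values only
p_two_sided = 2 * t.sf(abs(t_stat), df)    # P(|T| >= 2.5) under H0
print("p-value: {:.4f}".format(p_two_sided))
```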
from scipy.stats import t
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
The survival function of a random variable is the probability that the random variable is bigger than a value x. The SciPy.stats function sf() returns this probability:
degreesOfFreedom = simpleModel.df_resid  # the residual degrees of freedom

pBeta0 = t.sf(abs(tBeta0), df=degreesOfFreedom) * 2  # two-sided
pBeta1 = t.sf(abs(tBeta1), df=degreesOfFreedom) * 2
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Let's summarise the values calculated until now:
print("##          Estimate    Std. Error    t-value    p-value")
print("Intercept: ", beta0, seBeta0, tBeta0, pBeta0)
print("Carats:    ", beta1, seBeta1, tBeta1, pBeta1)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
If the p-value is less than the significance level (0.05 in our case) then the model explains the variation in the response.

T confidence intervals

For small samples we can use the t-distribution to calculate the confidence intervals. The t-distribution was introduced by William Gosset in 1908 and is indexed by its degrees of freedom (df): it approaches a standard normal as df gets larger. ppf() is the percent point function (inverse CDF) from SciPy.stats; it takes as input the quantile (0.975 for a two-sided 95% interval) and the degrees of freedom.
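The recipe estimate ± t_crit · SE can be sketched with illustrative numbers (chosen to roughly match the diamond example, not computed from the notebook's data):

```python
from scipy.stats import t

beta1_hat, se_beta1, df = 3721.0, 82.0, 46   # illustrative values only

t_crit = t.ppf(0.975, df)                    # two-sided 95% critical value
lo = beta1_hat - t_crit * se_beta1
hi = beta1_hat + t_crit * se_beta1
print("95% CI: [{:.1f}, {:.1f}]".format(lo, hi))
```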
alpha = 0.05  # significance level for the two-sided hypothesis

qt = 1 - (alpha / 2)  # = 0.975 for a two-sided 95% probability
t_value = t.ppf(qt, df=degreesOfFreedom)
t_value
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Now we can calculate the intervals for beta0 and beta1:
limits = [-1, 1]
[beta0 + i * t_value * seBeta0 for i in limits]
[beta1 + i * t_value * seBeta1 for i in limits]
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Interpretation: with 95% confidence, we estimate that a 1 carat increase in diamond size results in a 3556 to 3886 increase in price in (Singapore) dollars.

Plot the confidence interval

We calculate the interval for each x value; we will use the isf() function (the inverse survival function):
predicted = simpleModel.fittedvalues
x_1 = simpleModel.model.exog  # just the x values plus a column of ones

# get the standard deviation of the predicted values
predvar = simpleModel.mse_resid + (x_1 * np.dot(simpleModel.cov_params(), x_1.T).T).sum(1)
predstd = np.sqrt(predvar)

tppf = t.isf(alpha / 2.0, simpleModel.df_resid)
interval_u = predicted + tppf * predstd
interval_l = predicted - tppf * predstd

fig, ax = plt.subplots()
ax.plot(x, y, 'o', label="data")
ax.plot(x, simpleModel.fittedvalues, 'g-', label="OLS")
ax.plot(x, interval_u, 'c--', label="Intervals")
ax.plot(x, interval_l, 'c--')

# pretty-up the plot
fig.suptitle("OLS Linear Regression with confidence intervals")
ax.set_ylabel('Predicted Price [SIN $]')
ax.set_xlabel('Weight [Carat]')
ax.grid(True)
ax.legend(loc='best')
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Summary of statistic values

The statsmodels package offers an overview of the model values, similar to what we calculated above:
simpleModel.summary()
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
Many values can also be accessed directly, for example the standard errors:
simpleModel.bse
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
You can see all the values available using dir():
dir(simpleModel)
01-Regression/LRinference.ipynb
Mashimo/datascience
apache-2.0
In the plot below we see spikes at frequencies of 1/day, as well as the associated harmonics.
plt.plot(ST.index[1:], np.log10(ST.values)[1:])
# plt.plot(f[1:N_final], 20.0 / len(T) * np.log10(np.abs(ST[1:N_final])))
plt.xlabel('frequency in days')
plt.ylabel('Power')
plt.title('T spectra')

rx = (1. / N) * correlate(T, T, mode='same')
plt.plot(fftshift(rx)[0:N//2])

# THIS METHOD OF FINDING THE PEAKS IS COMPLETELY NON-ROBUST
a = np.argsort(-ST.values.flatten())

# Frequency in units of days
f0_yr = ST.index[a[0]]
f0_d = ST.index[a[1]]
print(f0_yr, f0_d)
print(a[0:25])

f_yr = f[52]
f_d = f[19012]
f_hd = f[38024]
print(f_yr, f_d, f_hd)

unix_birth = datetime.datetime(1970, 1, 1)
time_in_days = lambda t: (t - unix_birth).total_seconds() / 86400  # 86400 = timedelta(days=1).total_seconds()
t_days = np.fromiter(map(time_in_days, t), np.float64)  # time indices in units of days

# Error function for sinusoidal regression
def err_f0(theta):  # no frequency optimization
    a_yr, a_d, a_hd, phi_yr, phi_d, phi_hd = theta
    syr = a_yr * np.sin(2*pi*f_yr*t_days + phi_yr)
    sd = a_d * np.sin(2*pi*f_d*t_days + phi_d)
    shd = a_hd * np.sin(2*pi*f_hd*t_days + phi_hd)
    return T - syr - sd - shd

res = least_squares(err_f0, (1, 1, 1, 0, 0, 0), method='lm', loss='linear', verbose=1)
a_yr, a_d, a_hd, phi_yr, phi_d, phi_hd = res.x
print('theta0:', res.x)
print('Optimality:', res.optimality)
print('status:', res.status)
print('message:', res.message)
print('success:', res.success)

x_hat = a_yr * np.sin(2*pi*f_yr*t_days + phi_yr) + a_d * np.sin(2*pi*f_d*t_days + phi_d) +\
        a_hd * np.sin(2*pi*f_hd*t_days + phi_hd)

fig, axes = plt.subplots(3, 1)
ax0, ax1, ax2 = axes
ax0.plot(t, x_hat); ax0.plot(t, T, alpha=0.5)
ax1.plot(t[0:100000], x_hat[0:100000]); ax1.plot(t[0:100000], T[0:100000], alpha=0.5)
ax2.plot(t[0:1000], x_hat[0:1000]); ax2.plot(t[0:1000], T[0:1000], alpha=0.5)

T1 = T - x_hat
plt.plot(t, T1)
plt.plot(t, T, alpha=0.5)

N = len(T1)
ST1 = fft(T1)[:N // 2]
ST1 = pd.DataFrame(index=f, data=np.abs(ST1))
plt.plot(ST1.index, np.log10(ST1.values))
# plt.plot(f[1:N_final], 20.0 / len(T) * np.log10(np.abs(ST[1:N_final])))
plt.xlabel('frequency in days')
plt.ylabel('Power')
plt.title('$T_1$ spectra')

rx = (1. / N) * correlate(T1, T1, mode='same')
plt.plot(fftshift(rx)[0:N//2])
notebooks/sinusoid_regression_T.ipynb
RJTK/dwglasso_cweeds
mit
Repeating the process is not useful.
print(a[0:25])

f0_yr = f[52]
f0_d = f[19012]
print(f0_yr, f0_d)

# Error function for sinusoidal regression
def err_f1(theta):  # no frequency optimization
    a_yr, a_d, phi_yr, phi_d = theta
    syr = a_yr * np.sin(2*pi*f0_yr*t_days + phi_yr)
    sd = a_d * np.sin(2*pi*f0_d*t_days + phi_d)
    return T1 - syr - sd

res0 = least_squares(err_f1, (1, 1, 0, 0), method='lm', loss='linear', verbose=1)
a0_yr, a0_d, phi0_yr, phi0_d = res0.x
print('theta0:', res0.x)
print('Optimality:', res0.optimality)
print('status:', res0.status)
print('message:', res0.message)
print('success:', res0.success)

# use the parameters of the new fit (res0), not the previous one
x_hat = a0_yr * np.sin(2*pi*f0_yr*t_days + phi0_yr) + a0_d * np.sin(2*pi*f0_d*t_days + phi0_d)

fig, axes = plt.subplots(3, 1)
ax0, ax1, ax2 = axes
ax0.plot(t, x_hat); ax0.plot(t, T1, alpha=0.5)
ax1.plot(t[0:100000], x_hat[0:100000]); ax1.plot(t[0:100000], T1[0:100000], alpha=0.5)
ax2.plot(t[0:1000], x_hat[0:1000]); ax2.plot(t[0:1000], T1[0:1000], alpha=0.5)
notebooks/sinusoid_regression_T.ipynb
RJTK/dwglasso_cweeds
mit
Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises because the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, which lessens the effect of extremely rare words and is covered in Module 4. For the purpose of this assignment, however, please proceed with the model above. Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:
weights = sentiment_model.coefficients
weights.column_names()
weights
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to positive sentiment, while negative weights correspond to negative sentiment. Fill in the following block of code to calculate how many weights are non-negative ( >= 0). (Hint: count the rows of the SFrame weights whose 'value' column is >= 0.)
weights.num_rows()

num_positive_weights = (weights['value'] >= 0).sum()
num_negative_weights = (weights['value'] < 0).sum()

print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Predicting sentiment

These scores can be used to make class predictions as follows:

$$
\hat{y} =
\begin{cases}
+1 & \text{if } \mathbf{w}^T h(\mathbf{x}_i) > 0 \\
-1 & \text{if } \mathbf{w}^T h(\mathbf{x}_i) \leq 0
\end{cases}
$$

Using scores, write code to calculate $\hat{y}$, the class predictions:
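The decision rule can also be written as one vectorized NumPy expression (a sketch with made-up scores; ties at zero go to -1, matching the rule above):

```python
import numpy as np

def predict_class(scores):
    # y_hat = +1 where the score is strictly positive, -1 otherwise
    return np.where(np.asarray(scores) > 0, 1, -1)

print(predict_class([2.5, -1.0, 0.0]))
```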
def class_predictions(scores):
    """Make class predictions from the scores."""
    preds = []
    for score in scores:
        if score > 0:
            pred = 1
        else:
            pred = -1
        preds.append(pred)
    return preds

class_predictions(scores)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create. Probability predictions Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$ Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
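A numerically stable sketch of the link function in plain NumPy (an aside, not part of the assignment; the piecewise form avoids overflow in exp for large negative scores):

```python
import numpy as np

def sigmoid(scores):
    s = np.asarray(scores, dtype=float)
    out = np.empty_like(s)
    pos = s >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-s[pos]))   # safe: exp argument is <= 0
    e = np.exp(s[~pos])                        # safe: exp argument is < 0
    out[~pos] = e / (1.0 + e)
    return out

print(sigmoid([-2.0, 0.0, 2.0]))
```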
import math

def calculate_proba(scores):
    """Calculate the probability predictions from the scores."""
    proba_preds = []
    for score in scores:
        proba_pred = 1 / (1 + math.exp(-score))
        proba_preds.append(proba_pred)
    return proba_preds

calculate_proba(scores)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review? Find the most positive (and negative) review We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance. Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews." To calculate these top-20 reviews, use the following steps: 1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.) 2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)
# probability predictions on test_data using the sentiment_model
test_data['proba_pred'] = sentiment_model.predict(test_data, output_type='probability')
test_data

test_data['name', 'proba_pred'].topk('proba_pred', k=20).print_rows(20)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice] Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.
test_data['name','proba_pred'].topk('proba_pred', k=20, reverse=True).print_rows(20)
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice] Compute accuracy of the classifier We will now evaluate the accuracy of the trained classifer. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$ This can be computed as follows: Step 1: Use the trained model to compute class predictions (Hint: Use the predict method) Step 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below). Step 3: Divide the total number of correct predictions by the total number of data points in the dataset. Complete the function below to compute the classification accuracy:
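The three steps can be sketched in plain NumPy with hypothetical labels (no GraphLab dependency):

```python
import numpy as np

def accuracy(predictions, true_labels):
    predictions = np.asarray(predictions)
    true_labels = np.asarray(true_labels)
    # mean of a boolean array = fraction of correct predictions
    return np.mean(predictions == true_labels)

print(accuracy([1, -1, 1, 1], [1, -1, -1, 1]))
```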
# Test SArray comparison
print graphlab.SArray([1, 1, 1]) == sample_test_data['sentiment']
print sentiment_model.predict(sample_test_data) == sample_test_data['sentiment']

def get_classification_accuracy(model, data, true_labels):
    # First get the predictions
    predictions = model.predict(data)

    # Compute the number of correctly classified examples
    # (comparing two SArrays yields 1 for a match and 0 otherwise)
    num_correct = sum(predictions == true_labels)

    # Then compute accuracy by dividing num_correct by the total number of examples;
    # cast to float to avoid Python 2 integer division, which would truncate to 0
    accuracy = float(num_correct) / len(data)
    return accuracy
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
simple_weights = simple_model.coefficients

positive_significant_words = simple_weights[
    (simple_weights['value'] > 0) & (simple_weights['name'] == "word_count_subset")
]['index']

print len(positive_significant_words)
print positive_significant_words
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
weights.filter_by(positive_significant_words, 'index')
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Now compute the accuracy of the majority class classifier on test_data. Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76). Notes: what is majority class classifier? - https://www.coursera.org/learn/ml-classification/module/0Wclc/discussions/KoQDsuMOEeWFPAqzawk_tQ It is a classifier that always picks the most frequently occurring class (the "majority class"). For example, consider spam classification with two classes: positive/"it's spam" and negative/"it's not spam". Around 90% of all email is spam, so in the case of spam classification, the positive class is the majority class. The majority class spam classifier always guesses that an email is spam. Despite the simplicity, this classifier will have around 90% accuracy! In the first assignment, we are asked to determine the accuracy of a majority class classifier where the two classes are +1/positive review sentiment or -1/negative review sentiment. A majority class classifier is "trained" to guess the majority class of the training set. For example, if the majority of reviews in the training set are positive, then the majority class will always guess that a review is positive.
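A self-contained sketch of a majority class classifier in plain Python (hypothetical labels, no GraphLab dependency):

```python
from collections import Counter

def majority_class_accuracy(train_labels, test_labels):
    # "train": pick the most frequent class in the training labels
    majority, _ = Counter(train_labels).most_common(1)[0]
    # "predict": always guess that class; accuracy is its frequency in the test labels
    correct = sum(1 for y in test_labels if y == majority)
    return majority, float(correct) / len(test_labels)

print(majority_class_accuracy([1, 1, 1, -1], [1, -1, 1, 1]))
```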
print (test_data['sentiment'] == +1).sum()
print (test_data['sentiment'] == -1).sum()

# cast to float to avoid Python 2 integer division
print float((test_data['sentiment'] == +1).sum()) / len(test_data['sentiment'])
machine_learning/3_classification/assigment/week1/module-2-linear-classifier-assignment-graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
An Array-Based Implementation of Dual-Pivot-Quicksort
import random as rnd
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{sort}(L)$ sorts the list $L$ in place.
def sort(L):
    quickSort(0, len(L) - 1, L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{quickSort}(a, b, L)$ sorts the sublist $L[a:b+1]$ in place.
def quickSort(a, b, L):
    if b <= a:
        return  # at most one element, nothing to do
    x1, x2 = L[a], L[b]
    if x1 > x2:
        swap(a, b, L)
    m1, m2 = partition(a, b, L)  # m1 and m2 are the split indices
    quickSort(a, m1 - 1, L)
    if L[m1] != L[m2]:
        quickSort(m1 + 1, m2 - 1, L)
    quickSort(m2 + 1, b, L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{partition}(\texttt{start}, \texttt{end}, L)$ returns two indices $m_1$ and $m_2$ into the list $L$ and regroups the elements of $L$ such that after the function returns the following holds:

$\forall i \in \{\texttt{start}, \cdots, m_1-1\} : L[i] < L[m_1]$,
$\forall i \in \{m_1+1, \cdots, m_2-1\} : L[m_1] \leq L[i] \leq L[m_2]$,
$\forall i \in \{m_2+1, \cdots, \texttt{end}\} : L[m_2] < L[i]$,
$L[m_1] = p_1$,
$L[m_2] = p_2$.

Here, $p_1$ is the element at index $\texttt{start}$ at the time of the invocation of the function, while $p_2$ is the element at index $\texttt{end}$. It is assumed that $p_1 \leq p_2$. The while-loop of partition maintains the following invariants:

$L[\texttt{start}] = p_1$,
$L[\texttt{end}] = p_2$,
$\forall i \in \{\texttt{start}+1, \cdots, \texttt{idxLeft}\} : L[i] < p_1$,
$\forall i \in \{\texttt{idxLeft}+1, \cdots, \texttt{idxMiddle}-1\} : p_1 \leq L[i] \leq p_2$,
$\forall i \in \{\texttt{idxRight}, \cdots, \texttt{end}-1\} : p_2 < L[i]$.
def partition(start, end, L):
    p1 = L[start]
    p2 = L[end]
    idxLeft = start
    idxMiddle = start + 1
    idxRight = end
    while idxMiddle < idxRight:
        x = L[idxMiddle]
        if x < p1:
            idxLeft += 1
            swap(idxLeft, idxMiddle, L)
            idxMiddle += 1
        elif x <= p2:
            idxMiddle += 1
        else:
            idxRight -= 1
            swap(idxMiddle, idxRight, L)
    swap(start, idxLeft, L)
    swap(end, idxRight, L)
    return idxLeft, idxRight
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
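The stated postconditions can be checked on a small example. This sketch repeats the partition function together with a minimal swap helper (the notebook defines swap in an earlier cell that is not shown here):

```python
def swap(i, j, L):
    """Exchange L[i] and L[j] in place."""
    L[i], L[j] = L[j], L[i]

def partition(start, end, L):
    p1, p2 = L[start], L[end]
    idxLeft, idxMiddle, idxRight = start, start + 1, end
    while idxMiddle < idxRight:
        x = L[idxMiddle]
        if x < p1:
            idxLeft += 1
            swap(idxLeft, idxMiddle, L)
            idxMiddle += 1
        elif x <= p2:
            idxMiddle += 1
        else:
            idxRight -= 1
            swap(idxMiddle, idxRight, L)
    swap(start, idxLeft, L)
    swap(end, idxRight, L)
    return idxLeft, idxRight

L = [3, 8, 1, 5, 9, 2, 7]          # L[0] = 3 <= L[-1] = 7, so the precondition p1 <= p2 holds
m1, m2 = partition(0, len(L) - 1, L)
print(L, m1, m2)                   # the pivots 3 and 7 end up at indices m1 and m2
```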
Testing
L = [5, 7, 9, 1, 24, 11, 5, 2, 5, 8, 2, 13, 9]
print(L)
p1, p2 = partition(0, len(L) - 1, L)
print(L[:p1], L[p1], L[p1+1:p2], L[p2], L[p2+1:])

def demo():
    L = [ rnd.randrange(1, 200) for n in range(1, 16) ]
    print("L = ", L)
    sort(L)
    print("L = ", L)

demo()

def isOrdered(L):
    for i in range(len(L) - 1):
        assert L[i] <= L[i+1]

from collections import Counter

def sameElements(L, S):
    assert Counter(L) == Counter(S)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
def testSort(n, k):
    for i in range(n):
        L = [ rnd.randrange(2 * k) for x in range(k) ]
        oldL = L[:]
        sort(L)
        isOrdered(L)
        sameElements(oldL, L)
        assert len(L) == len(oldL)
        print('.', end='')
    print()
    print("All tests successful!")

%%time
testSort(100, 20_000)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
Next, we sort a million random integers.
%%timeit
k = 1_000_000
L = [ rnd.randrange(2 * k) for x in range(k) ]
sort(L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
Again, we sort a million integers. This time, many of the integers have the same value.
%%timeit
k = 1_000_000
L = [ rnd.randrange(1000) for x in range(k) ]
sort(L)
Python/Chapter-05/Dual-Pivot-Quicksort-Array.ipynb
karlstroetmann/Algorithms
gpl-2.0
Vertex SDK: AutoML training text sentiment analysis model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_sentiment_analysis_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_sentiment_analysis_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_sentiment_analysis_batch.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK to create text sentiment analysis models and do batch prediction using a Google Cloud AutoML model. Dataset The dataset used for this tutorial is the Crowdflower Claritin-Twitter dataset from data.world Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. Objective In this tutorial, you create an AutoML text sentiment analysis model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Make a batch prediction. There is one key difference between using batch prediction and using online prediction: Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time. 
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python.
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Dataset Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. import_schema_uri: The data labeling schema for the data items. This operation may take several minutes.
dataset = aip.TextDataset.create(
    display_name="Crowdflower Claritin-Twitter" + "_" + TIMESTAMP,
    gcs_source=[IMPORT_FILE],
    import_schema_uri=aip.schema.dataset.ioformat.text.sentiment,
)

print(dataset.resource_name)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create and run training pipeline

To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.

Create training pipeline

An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:

display_name: The human readable name for the TrainingJob resource.
prediction_type: The type of task to train the model for.
  classification: A text classification model.
  sentiment: A text sentiment analysis model.
  extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.

The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
dag = aip.AutoMLTextTrainingJob(
    display_name="claritin_" + TIMESTAMP,
    prediction_type="sentiment",
    sentiment_max=SENTIMENT_MAX,
)

print(dag)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run the training pipeline

Next, you run the DAG to start the training job by invoking the method run, with the following parameters:

dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.

When complete, the run method returns the Model resource. The execution of the training pipeline will take up to 20 minutes.
model = dag.run(
    dataset=dataset,
    model_display_name="claritin_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Review model evaluation scores

After your model has finished training, you can review its evaluation scores. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
# Get model resource ID
models = aip.Model.list(filter="display_name=claritin_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)

model_evaluations = model_service_client.list_model_evaluations(
    parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send a batch prediction request

Send a batch prediction request to your trained model.

Get test item(s)

Now do a batch prediction with your Vertex model. You will use arbitrary examples from the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
test_items = ! gsutil cat $IMPORT_FILE | head -n2

# some import files have a leading (empty) column, giving four CSV fields per row
if len(str(test_items[0]).split(",")) == 4:
    _, test_item_1, test_label_1, _ = str(test_items[0]).split(",")
    _, test_item_2, test_label_2, _ = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1, _ = str(test_items[0]).split(",")
    test_item_2, test_label_2, _ = str(test_items[1]).split(",")

print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the predictions Next, get the results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format: content: The prediction request. prediction: The prediction response. sentiment: The sentiment.
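A hypothetical example of one such JSONL line and how to parse it (the bucket path and the sentiment value are made up; the field names follow the schema described above):

```python
import json

# one result line, as it might appear in a prediction output file
line = '{"content": "gs://my-bucket/test_item.txt", "prediction": {"sentiment": 3}}'

record = json.loads(line)
print(record["content"], record["prediction"]["sentiment"])
```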
import json

import tensorflow as tf

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            line = json.loads(line)
            print(line)
            break
notebooks/community/sdk/sdk_automl_text_sentiment_analysis_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
ABC PMC on a 2D gaussian example

In this example we're looking at a dataset that has been drawn from a 2D gaussian distribution. We're going to assume that we don't have a proper likelihood but that we know the covariance matrix $\Sigma$ of the distribution. Using the ABC PMC algorithm we will approximate the posterior distribution of the mean values. First we generate a new dataset by drawing random variables from a multivariate gaussian around mean = [1.1, 1.5]. This is going to be our observed data set.
samples_size = 1000
sigma = np.eye(2) * 0.25
means = [1.1, 1.5]
data = np.random.multivariate_normal(means, sigma, samples_size)

matshow(sigma)
title("covariance matrix sigma")
colorbar()
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Then we need to define our model/simulation. In this case this is simple: we draw again random variables from a multivariate gaussian distribution using the given mean and the sigma from above
def create_new_sample(theta):
    return np.random.multivariate_normal(theta, sigma, samples_size)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Next, we need to define a distance measure. We will use the sum of the absolute differences of the means of the simulated and the observed data
def dist_measure(x, y):
    return np.sum(np.abs(np.mean(x, axis=0) - np.mean(y, axis=0)))
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
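Before moving on to PMC, the basic idea can be illustrated with plain rejection ABC using the same distance measure (a self-contained sketch with a fixed, arbitrarily chosen threshold; PMC improves on this by adapting the threshold over iterations):

```python
import numpy as np

rng = np.random.default_rng(0)

# observed data, regenerated as above
samples_size = 1000
sigma = np.eye(2) * 0.25
means = [1.1, 1.5]
data = rng.multivariate_normal(means, sigma, samples_size)

def dist_measure(x, y):
    return np.sum(np.abs(np.mean(x, axis=0) - np.mean(y, axis=0)))

# rejection ABC: draw theta from the prior, simulate, keep theta if the distance is small
eps = 0.5   # fixed threshold, chosen arbitrarily for this sketch
accepted = []
for _ in range(2000):
    theta = rng.normal([1.0, 1.0], 0.5)                       # gaussian prior guess
    sim = rng.multivariate_normal(theta, sigma, samples_size)
    if dist_measure(sim, data) < eps:
        accepted.append(theta)

posterior_mean = np.mean(accepted, axis=0)
print(len(accepted), posterior_mean)
```

The accepted thetas concentrate around the true means; the low acceptance ratio for small eps is exactly the inefficiency that PMC addresses.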
Verification To verify if everything works and to see the effect of the random samples in the simulation we compute the distance for 1000 simulations at the true mean values
distances = [dist_measure(data, create_new_sample(means)) for _ in range(1000)]

sns.distplot(distances, axlabel="distances")
title("Variability of distance from simulations")
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Setup Now we're going to set up the ABC PMC sampling
import abcpmc
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
As a prior we're going to use a Gaussian prior centered on our best guess about the distribution of the means.
prior = abcpmc.GaussianPrior(mu=[1.0, 1.0], sigma=np.eye(2) * 0.5)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
As threshold $\epsilon$ we're going to use the $\alpha^{th}$ percentile of the sorted distances of the particles of the current iteration. The simplest way to do this is to define a constant $\epsilon$ and iteratively adapt the threshold. As starting value we're going to define a sufficiently high value so that the acceptance ratio is reasonable, and we will sample for T iterations
alpha = 75
T = 20
eps_start = 1.0
eps = abcpmc.ConstEps(T, eps_start)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Finally, we create an instance of the sampler. We want to use 5000 particles and the functions we defined above. Additionally we're going to make use of the built-in parallelization and use 7 cores for the sampling
sampler = abcpmc.Sampler(N=5000, Y=data, postfn=create_new_sample, dist=dist_measure, threads=7)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Optionally, we can customize the proposal creation. Here we're going to use an "Optimal Local Covariance Matrix" kernel (OLCM) as proposed by Filippi et al. (2012). This has been shown to yield a high acceptance ratio together with a faster decrease of the threshold.
sampler.particle_proposal_cls = abcpmc.OLCMParticleProposal
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Sampling Now we're ready to sample. All we need to do is to iterate over the yielded values of our sampler instance. The sample function returns a namedtuple per iteration that contains all the information that we're interested in
def launch():
    eps = abcpmc.ConstEps(T, eps_start)

    pools = []
    for pool in sampler.sample(prior, eps):
        print("T: {0}, eps: {1:>.4f}, ratio: {2:>.4f}".format(
            pool.t, eps(pool.eps), pool.ratio))

        for i, (mean, std) in enumerate(zip(*abcpmc.weighted_avg_and_std(
                pool.thetas, pool.ws, axis=0))):
            print(u"    theta[{0}]: {1:>.4f} \u00B1 {2:>.4f}".format(i, mean, std))

        eps.eps = np.percentile(pool.dists, alpha)  # reduce eps value
        pools.append(pool)
    sampler.close()
    return pools

import time
t0 = time.time()
pools = launch()
print("took", (time.time() - t0))
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Postprocessing How did the sampled values evolve over the iterations? As the threshold is decreasing we expect the errors to shrink while the means converge to the true means.
for i in range(len(means)):
    moments = np.array([abcpmc.weighted_avg_and_std(pool.thetas[:, i], pool.ws, axis=0)
                        for pool in pools])
    errorbar(range(T), moments[:, 0], moments[:, 1])

hlines(means, 0, T, linestyle="dotted", linewidth=0.7)
_ = xlim([-.5, T])
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
What does the distribution of the distances look like after we have approximated the posterior? If we're close to the true posterior, we expect a high bin count around the values we found in the earlier distance distribution plot
distances = np.array([pool.dists for pool in pools]).flatten()
sns.distplot(distances, axlabel="distance")
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
How did our $\epsilon$ values behave over the iterations? Using the $\alpha^{th}$ percentile causes the threshold to decrease relatively fast in the beginning and to plateau later on
eps_values = np.array([pool.eps for pool in pools])
plot(eps_values, label=r"$\epsilon$ values")
xlabel("Iteration")
ylabel(r"$\epsilon$")
legend(loc="best")
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
What about the acceptance ratio? ABC PMC with the OLCM kernel gives us a relatively high acceptance ratio.
acc_ratios = np.array([pool.ratio for pool in pools])
plot(acc_ratios, label="Acceptance ratio")
ylim([0, 1])
xlabel("Iteration")
ylabel("Acceptance ratio")
legend(loc="best")

%pylab inline
rc('text', usetex=True)
rc('axes', labelsize=15, titlesize=15)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Finally, what does our posterior look like? For the visualization we're using triangle.py (https://github.com/dfm/triangle.py)
import triangle

samples = np.vstack([pool.thetas for pool in pools])
fig = triangle.corner(samples, truths=means)
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Omitting the first couple of iterations:
idx = -1
samples = pools[idx].thetas
fig = triangle.corner(samples, weights=pools[idx].ws, truths=means)

for mean, std in zip(*abcpmc.weighted_avg_and_std(samples, pools[idx].ws, axis=0)):
    print(u"mean: {0:>.4f} \u00B1 {1:>.4f}".format(mean, std))
notebooks/2d_gauss.ipynb
jakeret/abcpmc
gpl-3.0
Variables & Terminology $W_{i}$ - weights of the $i$th layer $B_{i}$ - biases of the $i$th layer $L_{a}^{i}$ - activation (inner product of weights and outputs of the previous layer) of the $i$th layer. $L_{o}^{i}$ - output of the $i$th layer. (This is $f(L_{a}^{i})$, where $f$ is the activation function) MLP with one input, one hidden, one output layer $X, y$ are the training samples $\mathbf{W_{1}}$ and $\mathbf{W_{2}}$ are the weights for the first (hidden) and the second (output) layer. $\mathbf{B_{1}}$ and $\mathbf{B_{2}}$ are the biases for the first (hidden) and the second (output) layer. $L_{a}^{0} = L_{o}^{0}$, since the first (zeroth) layer is just the input. Activations and outputs $L_{a}^{1} = X\mathbf{W_{1}} + \mathbf{B_{1}}$ $L_{o}^{1} = \frac{1}{1 + e^{-L_{a}^{1}}}$ $L_{a}^{2} = L_{o}^{1}\mathbf{W_{2}} + \mathbf{B_{2}}$ $L_{o}^{2} = \frac{1}{1 + e^{-L_{a}^{2}}}$ Loss $E = \frac{1}{2} \sum_{S}(y - L_{o}^{2})^{2}$ Derivation of backpropagation learning rule:
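As a sketch of the gradients that the backpropagation derivation arrives at for this network (using the notation above, the sigmoid derivative $f'(a) = f(a)\,(1 - f(a))$, and $\odot$ for the elementwise product):

```latex
\begin{aligned}
\delta^{2} &= \frac{\partial E}{\partial L_{a}^{2}}
            = -(y - L_{o}^{2}) \odot L_{o}^{2} \odot (1 - L_{o}^{2}), &
\frac{\partial E}{\partial \mathbf{W_{2}}} &= (L_{o}^{1})^{T}\delta^{2}, &
\frac{\partial E}{\partial \mathbf{B_{2}}} &= \sum_{S}\delta^{2}, \\
\delta^{1} &= \bigl(\delta^{2}\,\mathbf{W_{2}}^{T}\bigr) \odot L_{o}^{1} \odot (1 - L_{o}^{1}), &
\frac{\partial E}{\partial \mathbf{W_{1}}} &= X^{T}\delta^{1}, &
\frac{\partial E}{\partial \mathbf{B_{1}}} &= \sum_{S}\delta^{1}.
\end{aligned}
```

These are the quantities that `T.grad(loss, [w1, b1, w2, b2])` computes symbolically in the code below, and the update rule is plain gradient descent, $W \leftarrow W - \alpha\,\partial E/\partial W$.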
from IPython.display import YouTubeVideo
YouTubeVideo("LOc_y67AzCA")

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from theano import tensor as T
from theano import function, shared
from utils import (backprop_decision_boundary, backprop_make_classification,
                   backprop_make_moons)

plt.style.use('ggplot')
plt.rc('figure', figsize=(8, 6))
%matplotlib inline

x, y = T.dmatrices('x', 'y')

# weights and biases
w1 = shared(np.random.rand(2, 3), name="w1")
b1 = shared(np.random.rand(1, 3), name="b1")
w2 = shared(np.random.rand(3, 2), name="w2")
b2 = shared(np.random.rand(1, 2), name="b2")

# layer activations
l1_activation = T.dot(x, w1) + b1.repeat(x.shape[0], axis=0)
l1_output = 1.0 / (1 + T.exp(-l1_activation))
l2_activation = T.dot(l1_output, w2) + b2.repeat(l1_output.shape[0], axis=0)
l2_output = 1.0 / (1 + T.exp(-l2_activation))

# loss and gradients
loss = 0.5 * T.sum((y - l2_output) ** 2)
gw1, gb1, gw2, gb2 = T.grad(loss, [w1, b1, w2, b2])

# functions
alpha = 0.2
predict = function([x], l2_output)
train = function([x, y], loss,
                 updates=[(w1, w1 - alpha * gw1), (b1, b1 - alpha * gb1),
                          (w2, w2 - alpha * gw2), (b2, b2 - alpha * gb2)])

# make dummy data
X, Y = backprop_make_classification()
backprop_decision_boundary(predict, X, Y)
y_hat = predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))

for i in range(500):
    l = train(X, Y)
    if i % 100 == 0:
        print(l)

backprop_decision_boundary(predict, X, Y)
y_hat = predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))
notebooks/day4/04_backpropagation.ipynb
jaidevd/inmantec_fdp
mit
Exercise: Implement an MLP with two hidden layers, for the following dataset
X, Y = backprop_make_moons()
plt.scatter(X[:, 0], X[:, 1], c=np.argmax(Y, axis=1))
notebooks/day4/04_backpropagation.ipynb
jaidevd/inmantec_fdp
mit
Hints: Use two hidden layers, one containing 3 and the other containing 4 neurons Use learning rate $\alpha$ = 0.2 Try to make the network converge in 1000 iterations
# enter code here
notebooks/day4/04_backpropagation.ipynb
jaidevd/inmantec_fdp
mit
We will sample 2000 times from $Z \sim N(0,I_{50 \times 50})$ and look at the normalized spacing between the top 2 values.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.distributions import ECDF

Z = np.random.standard_normal((2000, 50))
T = np.zeros(2000)
for i in range(2000):
    W = np.sort(Z[i])
    T[i] = W[-1] * (W[-1] - W[-2])

Ugrid = np.linspace(0, 1, 101)
covtest_fig = plt.figure(figsize=(6, 6))
ax = covtest_fig.gca()
ax.plot(Ugrid, ECDF(np.exp(-T))(Ugrid), drawstyle='steps',
        c='k', label='covtest', linewidth=3)
ax.set_title('Null distribution')
ax.legend(loc='upper left');
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
The covariance test is an asymptotic result, and can be used in a sequential procedure called forward stop to determine when to stop the LASSO path. An exact version of the covariance test was developed in a general framework for problems beyond the LASSO using the Kac-Rice formula. A sequential version along the LARS path was developed, which we refer to as the spacings test. Here is the exact test, which is the first step of the spacings test.
from scipy.stats import norm as ndist

Texact = np.zeros(2000)
for i in range(2000):
    W = np.sort(Z[i])
    Texact[i] = ndist.sf(W[-1]) / ndist.sf(W[-2])

ax.plot(Ugrid, ECDF(Texact)(Ugrid), c='blue', drawstyle='steps',
        label='exact covTest', linewidth=3)
covtest_fig
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Covariance test for regression The above tests were based on an IID sample, though both the covtest and its exact version can be used in a regression setting. Both tests need access to the covariance of the noise. Formally, suppose $$ y|X \sim N(\mu, \Sigma) $$ the exact test is a test of $$H_0:\mu=0.$$ The test is based on $$ \lambda_{\max} = \|X^Ty\|_{\infty}. $$ This value of $\lambda$ is the value at which the first variable enters the LASSO. That is, $\lambda_{\max}$ is the smallest $\lambda$ for which 0 solves $$ \text{minimize}_{\beta} \frac{1}{2} \|y-X\beta\|^2_2 + \lambda \|\beta\|_1. $$ Formally, the exact test conditions on the variable $i^*(y)$ that achieves $\lambda_{\max}$ and tests a weaker null hypothesis $$H_0:X[:,i^*(y)]^T\mu=0.$$ The covtest is an approximation of this test, based on the same Mills ratio calculation. (This calculation roughly says that the overshoot of a Gaussian above a level $u$ is roughly an exponential random variable with mean $u^{-1}$). Here is a simulation under $\Sigma = \sigma^2 I$ with $\sigma$ known. The design matrix, before standardization, is Gaussian equicorrelated in the population with parameter 1/2.
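The quantities the test conditions on can be computed directly. The sketch below is a hypothetical standalone illustration (its variable names are not from this notebook): it evaluates $\lambda_{\max} = \|X^Ty\|_{\infty}$ under the global null, along with the index of the variable that achieves it and the sign of its correlation with $y$:

```python
import numpy as np

# Hypothetical illustration: lambda_max is the largest lambda for which
# beta = 0 solves the LASSO problem, i.e. the value at which the first
# variable enters the path.
rng = np.random.default_rng(0)
n, p = 50, 200
X = rng.standard_normal((n, p))
X /= X.std(0)[None, :]        # standardize columns
X /= np.sqrt(n)
y = rng.standard_normal(n)    # global null: mu = 0

corr = X.T @ y
i_star = int(np.argmax(np.abs(corr)))  # variable achieving lambda_max
lam_max = float(np.abs(corr[i_star]))  # ||X^T y||_inf
sign = int(np.sign(corr[i_star]))      # sign of its correlation with y
print(lam_max, i_star, sign)
```

These are (up to implementation details) the `idx` and `sign` values that `covtest` returns below.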
n, p, nsim, sigma = 50, 200, 1000, 1.5

def instance(n, p, beta=None, sigma=sigma):
    X = (np.random.standard_normal((n, p)) +
         np.random.standard_normal(n)[:, None])
    X /= X.std(0)[None, :]
    X /= np.sqrt(n)
    Y = np.random.standard_normal(n) * sigma
    if beta is not None:
        Y += np.dot(X, beta)
    return X, Y
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Let's make a dataset under our global null and compute the exact covtest $p$-value.
X, Y = instance(n, p, sigma=sigma)
cone, pval, idx, sign = covtest(X, Y, exact=False)
pval
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
The object cone is an instance of selection.affine.constraints which does much of the work for affine selection procedures. The variables idx and sign store which variable achieved $\lambda_{\max}$ and the sign of its correlation with $y$.
cone

def simulation(beta):
    Pcov = []
    Pexact = []
    for i in range(nsim):
        X, Y = instance(n, p, sigma=sigma, beta=beta)
        Pcov.append(covtest(X, Y, sigma=sigma, exact=False)[1])
        Pexact.append(covtest(X, Y, sigma=sigma, exact=True)[1])

    Ugrid = np.linspace(0, 1, 101)
    plt.figure(figsize=(6, 6))
    plt.plot(Ugrid, ECDF(Pcov)(Ugrid), label='covtest',
             ds='steps', c='k', linewidth=3)
    plt.plot(Ugrid, ECDF(Pexact)(Ugrid), label='exact covtest',
             ds='steps', c='blue', linewidth=3)
    plt.legend(loc='lower right')
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Null
beta = np.zeros(p)
simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
1-sparse
beta = np.zeros(p)
beta[0] = 4
simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
2-sparse
beta = np.zeros(p)
beta[:2] = 4
simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
5-sparse
beta = np.zeros(p)
beta[:5] = 4
simulation(beta)
doc/source/algorithms/covtest.ipynb
selective-inference/selective-inference
bsd-3-clause
Show gain matrix a.k.a. leadfield matrix with sensitivity map
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)

fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
    im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
                   cmap='RdBu_r')
    ax.set_title(ch_type.upper())
    ax.set_xlabel('sources')
    ax.set_ylabel('sensors')
    fig.colorbar(im, ax=ax)

fig_2, ax = plt.subplots()
ax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
        bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
        color=['c', 'b', 'k'])
fig_2.legend()
ax.set(title='Normal orientation sensitivity',
       xlabel='sensitivity', ylabel='count')

grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
              clim=dict(lims=[0, 50, 100]))
0.21/_downloads/2212671cb1d04d466a35eb15470863da/plot_forward_sensitivity_maps.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Synthetic Dataset Generation
def generate_data(
        slopes: List[float], group_size: int, noise_stddev: float
) -> pd.DataFrame:
    """
    Generate `len(slopes)` * `group_size` examples
    with dependency y = slope * x + noise.
    """
    dfs = []
    for i, slope in enumerate(slopes):
        curr_df = pd.DataFrame(columns=['x', 'category', 'y'])
        curr_df['x'] = range(group_size)
        curr_df['category'] = i
        curr_df['y'] = curr_df['x'].apply(
            lambda x: slope * x + np.random.normal(scale=noise_stddev)
        )
        dfs.append(curr_df)
    df = pd.concat(dfs)
    return df
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Let us create a situation where in-fold generation of target-based features leads to overfitting. To do so, we make a lot of small categories and set the noise variance to a high value. Such settings result in leakage of noise from the target into the in-fold generated mean; the regressor learns to use this leakage, which is useless on hold-out sets.
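To see the leakage mechanism concretely, here is a hypothetical standalone sketch (not the dsawl API): the in-fold encoding replaces a category with the mean target computed on the same rows that are later used for training, so the noise in `y` leaks into the feature.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'category': np.repeat(np.arange(20), 5),  # many small categories
    'y': rng.normal(scale=10, size=100),      # pure-noise target
})

# In-fold encoding: mean of y per category, computed on all rows at once.
df['cat_mean_infold'] = df.groupby('category')['y'].transform('mean')

# With only 5 rows per category this feature is strongly correlated with
# the noise -- an apparent signal that cannot generalize to held-out data.
leak_corr = df[['y', 'cat_mean_infold']].corr().iloc[0, 1]
print(leak_corr)
```

An out-of-fold scheme (computing each row's category mean on the other folds only) breaks this correlation, which is exactly what the settings below are designed to demonstrate.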
slopes = [2, 1, 3, 4, -1, -2, 3, 2, 1, 5, -2, -3, -5, 8, 1, -7, 0, 2, 0]
group_size = 5
noise_stddev = 10

train_df = generate_data(slopes, group_size, noise_stddev)
train_df.head()
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit
Generate a test set from the same distribution (in a way that preserves the balance between categories).
test_df = generate_data(slopes, group_size, noise_stddev)
test_df.head()
docs/target_encoding_demo.ipynb
Nikolay-Lysenko/dsawl
mit