Dataset columns:
markdown: string (0–37k chars)
code: string (1–33.3k chars)
path: string (8–215 chars)
repo_name: string (6–77 chars)
license: 15 classes
Next, demonstrate how to generate spectra of stars... MWS_MAIN
from desitarget.mock.mockmaker import MWS_MAINMaker
%time demo_mockmaker(MWS_MAINMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
MWS_NEARBY
from desitarget.mock.mockmaker import MWS_NEARBYMaker
%time demo_mockmaker(MWS_NEARBYMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
White dwarfs (WDs)
from desitarget.mock.mockmaker import WDMaker
%time demo_mockmaker(WDMaker, seed=seed, loc='right')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Finally, demonstrate how to generate (empty) SKY spectra.
from desitarget.mock.mockmaker import SKYMaker
SKY = SKYMaker(seed=seed)
skydata = SKY.read(healpixels=healpixel, nside=nside)
skyflux, skywave, skytargets, skytruth, objtruth = SKY.make_spectra(skydata)
SKY.select_targets(skytargets, skytruth)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Create count vectorizer
vect = CountVectorizer(max_features=1000)
X_dtm = vect.fit_transform(dataTraining['plot'])
X_dtm.shape
print(vect.get_feature_names()[:50])
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
Create y
dataTraining['genres'] = dataTraining['genres'].map(lambda x: eval(x))
le = MultiLabelBinarizer()
y_genres = le.fit_transform(dataTraining['genres'])
y_genres
X_train, X_test, y_train_genres, y_test_genres = train_test_split(X_dtm, y_genres, test_size=0.33, random_state=42)
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
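The binarization step above turns each movie's list of genres into a fixed-width 0/1 vector, one column per genre. A minimal self-contained sketch, with made-up genre lists standing in for dataTraining['genres']:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical genre lists standing in for dataTraining['genres']
genres = [['Action', 'Comedy'], ['Drama'], ['Action', 'Drama']]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(genres)

print(mlb.classes_)  # columns, in sorted order: Action, Comedy, Drama
print(y)             # one row per movie, 1 where the genre applies
```

The resulting matrix is what OneVsRestClassifier expects as a multilabel target.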
Train multi-class multi-label model
clf = OneVsRestClassifier(RandomForestClassifier(n_jobs=-1, n_estimators=100, max_depth=10, random_state=42))
clf.fit(X_train, y_train_genres)
y_pred_genres = clf.predict_proba(X_test)
roc_auc_score(y_test_genres, y_pred_genres, average='macro')
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
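roc_auc_score with average='macro' on a multilabel target expects one probability column per label, computes the AUC per label, and averages. A small sketch with made-up arrays (not the notebook's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical multilabel ground truth (3 samples x 2 genres) and
# predicted probabilities, one column per genre
y_true = np.array([[1, 0], [0, 1], [1, 1]])
y_prob = np.array([[0.9, 0.2], [0.3, 0.8], [0.7, 0.6]])

# Macro average: AUC computed per genre column, then averaged.
# Here every positive outranks every negative in both columns.
print(roc_auc_score(y_true, y_prob, average='macro'))  # → 1.0
```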
Predict the testing dataset
X_test_dtm = vect.transform(dataTesting['plot'])
cols = ['p_Action', 'p_Adventure', 'p_Animation', 'p_Biography', 'p_Comedy', 'p_Crime', 'p_Documentary', 'p_Drama', 'p_Family', 'p_Fantasy', 'p_Film-Noir', 'p_History', 'p_Horror', 'p_Music', 'p_Musical', 'p_Mystery', 'p_News', 'p_Romance', 'p_Sci-Fi', '...
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
Challenge 1 - Fibonacci Sequence and the Golden Ratio The Fibonacci sequence is defined by $f_{n+1} =f_n + f_{n-1}$ and the initial values $f_0 = 0$ and $f_1 = 1$. The first few elements of the sequence are: $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377 ...$ Using what you just learned about functions, defin...
# Answer
def fib(n):
    """Return nth element of the Fibonacci sequence."""
    # Create the base case
    n0 = 0
    n1 = 1
    # Loop n times. Just ignore the variable i.
    for i in range(n):
        n_new = n0 + n1
        n0 = n1
        n1 = n_new
    return n0
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
The ratio of successive elements in the Fibonacci sequence converges to $$\phi = (1 + \sqrt{5})/2 ≈ 1.61803\dots$$ which is the famous golden ratio. Your task is to approximate $\phi$. Define a function phi_approx that calculates the approximate value of $\phi$ obtained by the ratio of the $n$-th and $(n−1)$-st element...
#Answer:
phi_approx_output_format = \
"""Approximation order: {:d}
fib_n: {:g}
fib_(n-1): {:g}
phi: {:.25f}"""

def phi_approx(n, show_output=True):
    """Return the nth-order Fibonacci approximation to the golden ratio."""
    fib_n = fib(n)
    fib_nm1 = fib(n - 1)
    phi = fib_n/fib_nm1
    if show_out...
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Finally, simply using the ":" gives you all the elements in the array. Challenge 2 - Projectile Motion In this challenge problem, you will be building what is known as a NUMERICAL INTEGRATOR in order to predict the projectile's trajectory through a gravitational field (i.e. what happens when you throw a ball through the...
#Your code here

#Answers
g = -9.8
v = np.array([3.,3.])
r = np.array([0.,0.])
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Now we have the functions that calculate the changes in the position and velocity vectors. We're almost there! Now, we will need a while-loop in order to step the projectile through its trajectory. What would the condition be? Well, we know that the projectile stops when it hits the ground. So, one way we can do this i...
#Your code here.

#Answer
dt = 0.01
while (r[1] >= 0.):
    v = intV(v,g,dt)
    r = intR(r,v,dt)
print(r)
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
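The update functions intV and intR called in the loop above are not shown in this excerpt; a minimal self-contained sketch, assuming they are simple forward-Euler steps consistent with how they are called:

```python
import numpy as np

# Hypothetical definitions of the update functions used in the loop;
# both are single forward-Euler steps.
def intV(v, g, dt):
    """Advance the velocity one time step; gravity acts in the y-direction."""
    return v + np.array([0.0, g]) * dt

def intR(r, v, dt):
    """Advance the position one time step using the current velocity."""
    return r + v * dt

g = -9.8                    # gravitational acceleration (m/s^2)
v = np.array([3.0, 3.0])    # initial velocity (m/s)
r = np.array([0.0, 0.0])    # initial position (m)
dt = 0.01

while r[1] >= 0.0:          # step until the projectile hits the ground
    v = intV(v, g, dt)
    r = intR(r, v, dt)

print(r)  # final position, just past the point of impact (x near 2*vx*vy/|g|)
```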
Here, you have 2 dimensions with the array timeseriesData, and as such must specify the row first and then the column. So, - array_name[n,:] is the n-th row, and all columns within that row. - array_name[:,n] is the n-th column, and all rows within that particular column. Now then, let's see what the data looks like us...
#Your code here

#Answer
plt.plot(t,signal)
plt.show()
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Copying our model into a new folder
try:
    shutil.copytree('C:/Users/Miguel/workspace/Thesis/Geomodeller/Basic_case/3_horizontal_layers', 'Temp_test/')
except:
    print "The folder is already created"
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Simplest case: three horizontal layers, with unknown depth. Loading a pre-made Geomodeller model. You have to be very careful with the path, and all the bars (slashes) must lean to the RIGHT.
hor_lay = 'Temp_test/horizontal_layers.xml' #C:\Users\Miguel\workspace\Thesis\Thesis\Temp3
print hor_lay
reload(geogrid)
G1 = geogrid.GeoGrid()
# Using G1, we can read the dimensions of our Murci geomodel
G1.get_dimensions_from_geomodeller_xml_project(hor_lay)
#G1.set_dimensions(dim=(0,23000,0,16000,-8000,1000))
nx =...
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Setting Bayes Model
Image("Nice Notebooks\THL_no_thickness.png")
alpha = pm.Normal("alpha", -350, 0.05)
alpha
alpha = pm.Normal("alpha", -350, 0.05)# value= -250)

#Thickness of the layers
thickness_layer1 = pm.Normal("thickness_layer1", -150, 0.005) # a lot of uncertainty so the constrains are necessary
thickness_layer2 = pm.Normal("th...
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Extracting Posterior Traces to Arrays
n_samples = 20
alpha_samples, alpha_samples_all = M.trace('alpha')[-n_samples:], M.trace("alpha")[:]
beta_samples, beta_samples_all = M.trace('beta')[-n_samples:], M.trace("beta")[:]
gamma_samples, gamma_samples_all = M.trace('gamma')[-n_samples:], M.trace('gamma')[:]
section_samples, section_samples_all = M.trace('se...
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Plotting the results
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].hist(alpha_samples_all, histtype='stepfilled', bins=30, alpha=1, label="Upper most layer", normed=True)
ax[0].hist(beta_samples_all, histtype='stepfilled', bins=30, alpha=1, label="Middle layer", normed=True, color = "g")
ax[0].hist(gamma_samples_all...
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Estimation A recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data. e.g. $\mu$ and $\sigma^2$ in the case o...
x = array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604, 5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745, 1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 , 0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357, 1.20611533, 2.16594...
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Fitting data to probability distributions We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria: Method of moments chooses the parameters ...
precip = pd.read_table("data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True)
precip.head()
_ = precip.hist(sharex=True, sharey=True, grid=False)
tight_layout()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
The first step is recognizing what sort of distribution to fit our data to. A couple of observations: The data are skewed, with a longer tail to the right than to the left The data are positive-valued, since they are measuring rainfall The data are continuous There are a few possible choices, but one suitable alterna...
precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can use the gamma.pdf function in scipy.stats.distributions to plot the distributions implied by the calculated alphas and betas. For example, here is January:
from scipy.stats.distributions import gamma

hist(precip.Jan, normed=True, bins=20)
plot(linspace(0, 10), gamma.pdf(linspace(0, 10), alpha_mom[0], beta_mom[0]))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
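The alpha_mom and beta_mom estimates used above are not computed in this excerpt. Under the method of moments for a gamma model with shape $\alpha$ and scale $\beta$ (so mean $= \alpha\beta$ and variance $= \alpha\beta^2$), matching the sample moments gives $\alpha = \bar{x}^2/s^2$ and $\beta = s^2/\bar{x}$. A sketch with a made-up rainfall sample (the parameterization here is an assumption; note scipy's gamma takes the scale as a keyword, its second positional argument is the location):

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical monthly rainfall sample standing in for a precip column
rain = np.array([2.1, 3.4, 1.8, 5.2, 4.0, 2.9, 3.7, 6.1, 2.5, 3.3])

# Method-of-moments estimates for gamma(shape=alpha, scale=beta):
# mean = alpha*beta and var = alpha*beta**2, so
alpha_mom = rain.mean() ** 2 / rain.var()
beta_mom = rain.var() / rain.mean()

# Density implied by the estimates; pass the scale explicitly.
x = np.linspace(0, rain.max(), 100)
density = gamma.pdf(x, alpha_mom, scale=beta_mom)
```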
Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution:
axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)
for ax in axs.ravel():
    # Get month
    m = ax.get_title()
    # Plot fitted distribution
    x = linspace(*ax.get_xlim())
    ax.plot(x, gamma.pdf(x, alpha_mom[m], beta_mom[m]))
    # Annotate with paramet...
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Maximum Likelihood Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here. Say we have some data $y = y_1,y_2,\l...
y = np.random.poisson(5, size=100)
plt.hist(y, bins=12, normed=True)
xlabel('y'); ylabel('Pr(y)')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
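The poisson_like function used in the next cells is not defined in this excerpt. A minimal sketch of an assumed form consistent with how it is called: it returns the likelihood (product of Poisson pmfs) of the data given a rate, so for a scalar observation it reduces to the pmf itself.

```python
import numpy as np
from scipy.special import gammaln  # log-factorial via the log-gamma function

def poisson_like(x, lam):
    """Likelihood of data x under a Poisson(lam) model (assumed form).

    For scalar x this is just the pmf; for an array it is the product
    of the pmfs, computed via logs for numerical stability.
    """
    x = np.atleast_1d(x)
    return np.exp(np.sum(x * np.log(lam) - lam - gammaln(x + 1)))
```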
We can plot the likelihood function for any value of the parameter(s):
lambdas = np.linspace(0,15)
x = 5
plt.plot(lambdas, [poisson_like(x, l) for l in lambdas])
xlabel('$\lambda$')
ylabel('L($\lambda$|x={0})'.format(x))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$.
lam = 5
xvals = arange(15)
plt.bar(xvals, [poisson_like(x, lam) for x in xvals])
xlabel('x')
ylabel('Pr(X|$\lambda$=5)')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function:
# some function
func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1
xvals = np.linspace(0, 6)
plot(xvals, func(xvals))
text(5.3, 2.1, '$f(x)$', fontsize=16)
# zero line
plot([0,6], [0,0], 'k-')
# value at step n
plot([4,4], [0,func(4)], 'k:')
plt.text(4, -.2, '$x_n$', fontsize=16)
# tangent line
tanline = lambda x: -0.858 +...
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function.
# Calculate statistics
log_mean = precip.mean().apply(log)
mean_log = precip.apply(log).mean()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
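There is no closed form for the gamma shape MLE; the Newton-Raphson iteration the text alludes to solves $\log\alpha - \psi(\alpha) = \log\bar{x} - \overline{\log x}$ using psi and polygamma from scipy.special. A hedged sketch of one standard form of the update (the starting value is a common heuristic, not from the source; the document then takes $\beta = \alpha/\bar{x}$):

```python
import numpy as np
from scipy.special import psi, polygamma

def gamma_alpha_mle(x, n_iter=10):
    """Newton-Raphson solve of log(a) - psi(a) = log(mean(x)) - mean(log(x))."""
    log_mean = np.log(np.mean(x))
    mean_log = np.mean(np.log(x))
    s = log_mean - mean_log
    a = 0.5 / s                       # common starting value
    for _ in range(n_iter):
        f = np.log(a) - psi(a) - s        # root condition
        fprime = 1.0 / a - polygamma(1, a)  # its derivative in a
        a = a - f / fprime
    return a
```

Plugging the result into beta = alpha/mean(x) then gives the rate estimate used in the next cell.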
And now plug this back into the solution for beta: <div style="font-size: 120%;"> $$ \beta = \frac{\alpha}{\bar{X}} $$ </div>
beta_mle = alpha_mle/precip.mean()[-1]
beta_mle
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can compare the fit of the estimates derived from MLE to those from the method of moments:
dec = precip.Dec
dec.hist(normed=True, bins=10, grid=False)
x = linspace(0, dec.max())
plot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-')
plot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution. Example: truncated distribution Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then: $$ P(X ...
x = np.random.normal(size=10000)
a = -1
x_small = x < a
while x_small.sum():
    x[x_small] = np.random.normal(size=x_small.sum())
    x_small = x < a
_ = hist(x, bins=100)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can construct a log likelihood for this function using the conditional form: $$f_X(x) = \frac{f_Y(x)}{1 - F_Y(a)} \quad \text{for } x > a$$
from scipy.stats.distributions import norm

trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
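The negative log-likelihood above can then be minimized numerically. A self-contained sketch using scipy.optimize.fmin on simulated standard-normal data truncated below at a = -1 (the starting point and data are illustrative, not from the source):

```python
import numpy as np
from scipy.optimize import fmin
from scipy.stats import norm

# Negative log-likelihood of a normal truncated below at a, as in the text
trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1]))
                                   - np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()

# Simulate standard-normal data and keep only values above the
# truncation point, mimicking an observation process truncated at a
rng = np.random.default_rng(42)
a = -1
x = rng.normal(size=10000)
x = x[x > a]

# Fit (mu, sigma); estimates should land near the true values 0 and 1
mu_hat, sd_hat = fmin(trunc_norm, [0.5, 0.5], args=(a, x), disp=False)
```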
In general, simulating data is a terrific way of testing your model before using it with real data. Kernel density estimates In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the disri...
# Some random data
y = np.random.random(15) * 10
y
x = np.linspace(0, 10, 100)
# Smoothing parameter
s = 0.4
# Calculate the kernels
kernels = np.transpose([norm.pdf(x, yi, s) for yi in y])
plot(x, kernels, 'k:')
plot(x, kernels.sum(1))
plot(y, np.zeros(len(y)), 'ro', ms=10)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Exercise: Cervical dystonia analysis Recall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, p...
cdystonia = pd.read_csv("data/cdystonia.csv")
cdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Regression models A general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps how a baseball player's performance varies as a functio...
x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])
y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])
plot(x,y,'ro')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$. <div style="font-size: 150%;"> $y_i = f(x_i) + \epsilon_i$ </div> where $f$ is some function, for exa...
ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)
ss([0,1],x,y)
b0,b1 = fmin(ss, [0,1], args=(x,y))
b0,b1
plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
for xi, yi in zip(x,y):
    plot([xi]*2, [yi, b0+b1*xi], 'k:')
xlim(2, 9); ylim(0, 20)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
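As a sanity check, the fmin estimates can be compared against the closed-form least-squares solution; a self-contained sketch with the same small dataset, using np.polyfit for the exact fit:

```python
import numpy as np
from scipy.optimize import fmin

x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])
y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])

# Sum-of-squares objective, as in the text
ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)
b0, b1 = fmin(ss, [0, 1], args=(x, y), disp=False)

# Closed-form least-squares fit; should agree with fmin to within
# the optimizer's tolerance
slope, intercept = np.polyfit(x, y, 1)
```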
Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences:
sabs = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))
b0,b1 = fmin(sabs, [0,1], args=(x,y))
print b0,b1
plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a quadratic model: <div style="font-size: 150%;"> $y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$ </div>
ss2 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)
b0,b1,b2 = fmin(ss2, [1,1,-1], args=(x,y))
print b0,b1,b2
plot(x, y, 'ro')
xvals = np.linspace(0, 10, 100)
plot(xvals, b0 + b1*xvals + b2*(xvals**2))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Although a polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters. For some data, it may be reasonable to consider polynomials of order > 2. For example, consider the relationship between the number of home...
ss3 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2)
bb = pd.read_csv("data/baseball.csv", index_col=0)
plot(bb.hr, bb.rbi, 'r.')
b0,b1,b2,b3 = fmin(ss3, [0,1,-1,0], args=(bb.hr, bb.rbi))
xvals = arange(40)
plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Of course, we need not fit least squares models by hand. The statsmodels package implements least squares models that allow for model fitting in a single line:
import statsmodels.api as sm

straight_line = sm.OLS(y, sm.add_constant(x)).fit()
straight_line.summary()

from statsmodels.formula.api import ols as OLS
data = pd.DataFrame(dict(x=x, y=y))
cubic_fit = OLS('y ~ x + I(x**2)', data).fit()
cubic_fit.summary()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Exercise: Polynomial function Write a function that specifies a polynomial of arbitrary degree. Model Selection How do we choose among competing models for a given dataset? More parameters are not necessarily better, from the standpoint of model fit. For example, fitting a 9-th order polynomial to the sample data from ...
def calc_poly(params, data):
    x = np.c_[[data**i for i in range(len(params))]]
    return np.dot(params, x)

ssp = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2)
betas = fmin(ssp, np.zeros(10), args=(x,y), maxiter=1e6)
plot(x, y, 'ro')
xvals = np.linspace(0, max(x), 100)
plot(xvals, calc_poly...
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
One approach is to use an information-theoretic criterion to select the most appropriate model. For example, Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as: $$AIC = n \log(\hat{\sigma}^2) + 2p$$ where $\hat{\sigma}^2 = RSS/(n-p-1)$.
n = len(x)
aic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p
RSS1 = ss(fmin(ss, [0,1], args=(x,y)), x, y)
RSS2 = ss2(fmin(ss2, [1,1,-1], args=(x,y)), x, y)
print aic(RSS1, 2, n), aic(RSS2, 3, n)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Hence, we would select the 2-parameter (linear) model. Logistic Regression Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous? male/female pass/fail died/survived L...
titanic = pd.read_excel("data/titanic.xls", "titanic")
titanic.name
jitter = np.random.normal(scale=0.02, size=len(titanic))
plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)
yticks([0,1])
ylabel("survived")
xlabel("log(fare)")
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale. Clearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Secon...
x = np.log(titanic.fare[titanic.fare>0])
y = titanic.survived[titanic.fare>0]
betas_titanic = fmin(ss, [1,1], args=(x,y))
jitter = np.random.normal(scale=0.02, size=len(titanic))
plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)
yticks([0,1])
ylabel("survived")
xlabel("log(fare)")
plt.plot([0,7],...
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side. Stochastic model Rather than model the binary outcome...
logit = lambda p: np.log(p/(1.-p))
unit_interval = np.linspace(0,1)
plt.plot(logit(unit_interval), unit_interval)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
The inverse of the logit transformation is: <div style="font-size: 150%;"> $$p = \frac{1}{1 + \exp(-x)}$$ </div> So, now our model is: <div style="font-size: 120%;"> $$\text{logit}(p_i) = \beta_0 + \beta_1 x_i + \epsilon_i$$ </div> We can fit this model using maximum likelihood. Our likelihood, again based on the Bernoulli mo...
invlogit = lambda x: 1. / (1 + np.exp(-x))

def logistic_like(theta, x, y):
    p = invlogit(theta[0] + theta[1] * x)
    # Return negative of log-likelihood
    return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
... and fit the model.
b0,b1 = fmin(logistic_like, [0.5,0], args=(x,y))
b0, b1
jitter = np.random.normal(scale=0.01, size=len(x))
plot(x, y+jitter, 'r.', alpha=0.3)
yticks([0,.25,.5,.75,1])
xvals = np.linspace(0, 600)
plot(xvals, invlogit(b0+b1*xvals))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified.
logistic = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
logistic.summary()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Exercise: multivariate logistic regression Which other variables might be relevant for predicting the probability of surviving the Titanic? Generalize the model likelihood to include 2 or 3 other covariates from the dataset. Bootstrapping Parametric inference can be non-robust: inaccurate if parametric assumptions are...
np.random.permutation(titanic.name)[:5]
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping.
random_ind = np.random.randint(0, len(titanic), 5)
titanic.name[random_ind]
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We regard S as an "estimate" of population P population : sample :: sample : bootstrap sample The idea is to generate replicate bootstrap samples: <div style="font-size: 120%;"> $$S^* = \{S_1^*, S_2^*, \ldots, S_R^*\}$$ </div> Compute statistic $t$ (estimate) for each bootstrap sample: <div style="font-size: 120%;...
n = 10
R = 1000

# Original sample (n=10)
x = np.random.normal(size=n)

# 1000 bootstrap samples of size 10
s = [x[np.random.randint(0,n,n)].mean() for i in range(R)]
_ = hist(s, bins=30)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of $T$: $$\hat{B}^* = \bar{T}^* - T$$
boot_mean - np.mean(x)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
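boot_mean is not computed in this excerpt; presumably it is the mean of the bootstrapped statistics from the earlier cell. A self-contained sketch of the bias estimate under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 10, 1000

x = rng.normal(size=n)                  # original sample
# R bootstrap samples of size n, drawn with replacement
s = [x[rng.integers(0, n, n)].mean() for _ in range(R)]

boot_mean = np.mean(s)                  # expectation of the bootstrap statistics
bias = boot_mean - np.mean(x)           # estimated bias; near zero for the mean
```

The sample mean is unbiased, so the estimated bias should be small relative to the standard error.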
Explanation of code above The code above contains three list comprehensions for very compactly simulating the sampling distribution of the mean Create a list of sample sizes to simulate (ssizes) For each sample size (sz), generate 100 random samples, and store those samples in a matrix of size sz $\times$ 100 (i.e. each c...
# make a pair of plots
ssmin, ssmax = min(ssizes), max(ssizes)
theoryss = np.linspace(ssmin, ssmax, 250)
fig, (ax1, ax2) = plt.subplots(1,2) # 1 x 2 grid of plots
fig.set_size_inches(12,4)
# plot histograms of sampling distributions
for (ss,mean) in zip(ssizes, means):
    ax1.hist(mean, normed=True, histtype='stepf...
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Sample Estimate of the Standard Error of the Mean In real life, we don't have access to the sampling distribution of the mean or the true population parameter $\sigma$ from which we can calculate the standard error of the mean. However, we can still use our unbiased sample estimator of the standard deviation, $s$, to...
N = 1000
samples50 = popn.rvs(size=(50, N))        # N samples of size 50
means50 = np.mean(samples50, axis=0)       # sample means
std50 = np.std(samples50, axis=0, ddof=1)  # sample std devs
se50 = std50/np.sqrt(50)                   # sample standard errors
frac_overlap_mu = []
zs = np.arange(1,3,step=0.05)
for z in zs:
    lowCI = means50 - z*se...
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Interpreting our simulation How should we interpret the results above? We found as we increased the scaling of our confidence intervals (larger $z$), the true mean was within sample confidence intervals a greater proportion of the time. For example, when $z = 1$ we found that the true mean was within our CIs roughly 6...
ndraw = 100
x = means50[:ndraw]
y = range(0,ndraw)
plt.errorbar(x, y, xerr=1.96*se50[:ndraw], fmt='o')
plt.vlines(mu, 0, ndraw, linestyle='dashed', color='#D55E00', linewidth=3, zorder=5)
plt.ylim(-1,101)
plt.yticks([])
plt.title("95% CI: mean ± 1.96×SE\nfor 100 samples of size 50")
fig = plt.gcf()
fig.set_size_inches(...
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Generating a table of CIs and corresponding margins of error The table below gives the percent CI and the corresponding margin of error ($z \times {SE}$) for that confidence interval.
perc = np.array([.80, .90, .95, .99, .997])
zval = stdnorm.ppf(1 - (1 - perc)/2) # account for the two tails of the sampling distn
print("% CI \tz × SE")
print("-----\t------")
for (i,j) in zip(perc, zval):
    print("{:5.1f}\t{:6.2f}".format(i*100, j))
# see the string docs (https://docs.python.org/3.4/library...
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Define Data Location For remote data the interaction will use ssh to securely interact with the data<br/> This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler<br/> Note: The default paraview server connection will use port 11111
remote_data = True
remote_server_auto = True

case_name = 'caratung-ar-6p0-pitch-8p0'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CARATUNG'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'

if not remote_server_auto:
    paraview_cmd=None
if not remote_data...
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Validation and regression
# Validation for Caradonna Tung Rotor (Mach at Tip - 0.877) from NASA TM 81232, page 34
validate = True
regression = True
# Make movie option currently not working - TODO
make_movie = False
if (validate):
    valid = True
    validation_tol = 0.0100
    valid_lower_cl_0p50 = 0.2298-validation_tol
    valid_upper_cl_0p...
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Initialise Environment
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Get control dictionary
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
# print parameters
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Define test conditions
from IPython.display import HTML
HTML(print_html_parameters(parameters))
aspect_ratio = 6.0
Pitch = 8.0

from zutil.post import for_each
from zutil import rotate_vector
from zutil.post import get_csv_data

def plot_cp_profile(ax,file_root,span_loc,ax2):
    wall = PVDReader( FileName=file_root+'_wall.pvd' )
    wa...
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Cp Profile
from zutil.post import get_case_root, cp_profile_wall_from_file_span
from zutil.post import ProgressBar
from collections import OrderedDict

factor = 0.0
pbar = ProgressBar()
plot_list = OrderedDict([(0.50,{'exp_data_file': 'data/cp-0p50.txt', 'cp_axis':[0.0,1.0,1.2,-1.0]}),
                        (0.68,{'exp_data_f...
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Convergence
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()
if make_movie:
    from zutil.post import get_case_root
    from zutil.post import ProgressBar
    pb = ProgressBar()
    vtu = PVDReader( FileName=[get_case_root(case_name,num_procs)+'.pvd'] )
    vtu.UpdatePipeline...
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Check validation and regression
if (validate):
    def validate_data(name, value, valid_lower, valid_upper):
        if ((value < valid_lower) or (value > valid_upper)):
            print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
            return False
        else:
            return True
    valid ...
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Cleaning up
if remote_data:
    #print 'Disconnecting from remote paraview server connection'
    Disconnect()
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Isolate X and y:
X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values
y = training_data['Facies'].values
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
esa-as/2016-ml-contest
apache-2.0
We want the well names to use as groups in the k-fold analysis, so we'll get those too:
wells = training_data["Well Name"].values
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
esa-as/2016-ml-contest
apache-2.0
Now we train as normal, but LeaveOneGroupOut gives us the appropriate indices from X and y to test against one well at a time:
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

logo = LeaveOneGroupOut()

for train, test in logo.split(X, y, groups=wells):
    well_name = wells[test[0]]
    score = SVC().fit(X[train], y[train]).score(X[test], y[test])
    print("{:>20s} {:.3f}".format(well_name, score))
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
esa-as/2016-ml-contest
apache-2.0
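Since the notebook's goal is an F1 score, the same loop can report f1_score instead of plain accuracy. A sketch with synthetic stand-ins for X, y, and wells (the micro averaging choice is an assumption, not from the source):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real X, y, and wells arrays
X = rng.normal(size=(120, 4))
y = rng.integers(0, 3, size=120)                      # three facies classes
wells = np.repeat(['Well A', 'Well B', 'Well C'], 40)  # group label per sample

logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=wells):
    clf = SVC().fit(X[train], y[train])
    pred = clf.predict(X[test])
    # micro average: pool all test predictions before computing F1
    score = f1_score(y[test], pred, average='micro')
    print("{:>10s} {:.3f}".format(wells[test[0]], score))
```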
Experimental conditions The experimental conditions are as follows. Artificial-data parameters: number of samples (n_samples): [100]; total noise scale: $c=0.5, 1.0$; confounder scale: $c/\sqrt{Q}$; data observation-noise distribution (data_noise_type): ['laplace', 'uniform']; number of confounders (n_confs or $Q$): [10]; observed-data noise scale: fixed at 3; regression-coefficient distribution: uniform(-1.5, 1.5). Estimation hyperparameters: confounder correlation coefficients (L_cov_21s): [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]; model ob...
conds = [
    {
        'totalnoise': totalnoise,
        'L_cov_21s': L_cov_21s,
        'n_samples': n_samples,
        'n_confs': n_confs,
        'data_noise_type': data_noise_type,
        'model_noise_type': model_noise_type,
        'b21_dist': b21_dist
    }
    for...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Generating the artificial data We define a function that generates artificial data according to the experimental conditions.
def gen_artificial_data_given_cond(ix_trial, cond):
    # Set the artificial-data generation parameters from the experimental condition
    n_confs = cond['n_confs']
    gen_data_params = deepcopy(gen_data_params_default)
    gen_data_params.n_samples = cond['n_samples']
    gen_data_params.conf_dist = [['all'] for _ in range(n_confs)]
    gen_data_params.e1_dist = [cond[...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
An example run.
data = gen_artificial_data_given_cond(0, conds[0]) xs = data['xs'] plt.figure(figsize=(3, 3)) plt.scatter(xs[:, 0], xs[:, 1]) data = gen_artificial_data_given_cond(0, { 'totalnoise': 3 * np.sqrt(1), 'n_samples': 10000, 'n_confs': 1, 'data_noise_type': 'laplace', 'b21_dis...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Definition of a trial. A trial is the process of causal inference and accuracy evaluation on one generated artificial dataset. For each experimental condition, we run 100 trials.
n_trials_per_cond = 100
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Baseline values of the experimental condition parameters
# 因果推論パラメータ infer_params = InferParams( seed=0, standardize=True, subtract_mu_reg=False, fix_mu_zero=True, prior_var_mu='auto', prior_scale='uniform', max_c=1.0, n_mc_samples=10000, dist_noise='laplace', df_indvdl=8.0, prior_indvdls=['t'], cs=[0.4, 0.6, 0.8], ...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Program. Generating trial identifiers. An identifier for each trial is generated from the following information: trial index (ix_trial), number of samples (n_samples), number of confounders (n_confs), artificial-data observation noise type (data_noise_type), predictive-model observation noise type (model_noise_type), confounder correlation coefficients (L_cov_21s), total noise scale (totalnoise), and regression coefficient distribution (b21_dist). The trial identifier is used when storing estimation results in the data frame.
def make_id(ix_trial, n_samples, n_confs, data_noise_type, model_noise_type, L_cov_21s, totalnoise, b21_dist): L_cov_21s_ = ' '.join([str(v) for v in L_cov_21s]) return hashlib.md5( str((L_cov_21s_, ix_trial, n_samples, n_confs, data_noise_type, model_noise_type, totalnoise, b21_dist.replace(' ', '...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
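The cell above is truncated in this excerpt; a self-contained sketch of the same hashing idea (hypothetical name `make_id_sketch`, assuming the identifier is the MD5 hex digest of the stringified settings tuple) could look like:

```python
import hashlib

def make_id_sketch(ix_trial, n_samples, n_confs, data_noise_type,
                   model_noise_type, L_cov_21s, totalnoise, b21_dist):
    """Build a stable hash identifier from one trial's settings."""
    L_cov_21s_ = ' '.join(str(v) for v in L_cov_21s)
    key = str((L_cov_21s_, ix_trial, n_samples, n_confs, data_noise_type,
               model_noise_type, totalnoise, b21_dist.replace(' ', '_')))
    # MD5 gives a short, deterministic 32-character identifier
    return hashlib.md5(key.encode('utf-8')).hexdigest()
```

Identical settings always map to the same identifier, which is what makes the skip-if-computed check possible.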
Appending trial results to the data frame. Trial results are appended to the data frame. If the argument df is None, a new data frame is created.
def add_result_to_df(df, result): if df is None: return pd.DataFrame({k: [v] for k, v in result.items()}) else: return df.append(result, ignore_index=True) # テスト result1 = {'col1': 10, 'col2': 20} result2 = {'col1': 30, 'col2': -10} df1 = add_result_to_df(None, result1) print('--- df1 ---') pri...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
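`DataFrame.append` (used in the truncated cell above) was removed in pandas 2.0; a hedged modern equivalent of the helper, under the hypothetical name `append_result`, could use `pd.concat`:

```python
import pandas as pd

def append_result(df, result):
    """Append one result dict as a row; create the frame if df is None."""
    row = pd.DataFrame({k: [v] for k, v in result.items()})
    if df is None:
        return row
    # concat with ignore_index renumbers rows, mirroring append(..., ignore_index=True)
    return pd.concat([df, row], ignore_index=True)
```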
Checking for trial identifiers in the data frame. This is used to avoid recomputing results that have already been computed.
def df_exist_result_id(df, result_id):
    if df is not None:
        return result_id in np.array(df['result_id'])
    else:
        return False
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Getting the data frame. We define functions that save and load the data frame. If the file does not exist, None is returned.
def load_df(df_file):
    if os.path.exists(df_file):
        return load_pklz(df_file)
    else:
        return None

def save_df(df_file, df):
    save_pklz(df_file, df)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Running a trial. Given a trial index and an experimental condition, the trial is executed and the estimation results are returned.
def _estimate_hparams(xs, infer_params): assert(type(infer_params) == InferParams) sampling_mode = infer_params.sampling_mode hparamss = define_hparam_searchspace(infer_params) results = find_best_model(xs, hparamss, sampling_mode) hparams_best = results[0] bf = results[2] - results[5] # Bayes ...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Main program
def run_expr(conds): # データフレームファイル名 data_dir = '.' df_file = data_dir + '/20160902-eval-bml-results.pklz' # ファイルが存在すれば以前の続きから実行します. df = load_df(df_file) # 実験条件に渡るループ n_skip = 0 for cond in conds: print(cond) # トライアルに渡るループ for ix_trial in rang...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Running the main program
df = run_expr(conds)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Checking the results
import pandas as pd # データフレームファイル名 data_dir = '.' df_file = data_dir + '/20160902-eval-bml-results.pklz' df = load_pklz(df_file) sg = df.groupby(['model_noise_type', 'data_noise_type', 'n_confs', 'totalnoise']) sg1 = sg['correct_rate'].mean() sg2 = sg['correct_rate'].count() sg3 = sg['time_causal_inference'].mean() ...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Regression coefficient magnitude and Bayes factor. We plot $2\log(BF)$ on the horizontal axis and $|b_{21}|$ (or $|b_{12}|$) on the vertical axis. When $2\log(BF)$ is 10 or greater, the absolute value of the true regression coefficient is large and we can say there is a causal effect; below that, however, the regression coefficient can be either large or small, so judging the presence or absence of a causal effect from the BF alone appears difficult. A comparison between models with and without a causal effect is probably also needed.
data = np.array(df[['regcoef_true', 'log_bf', 'totalnoise', 'correct_rate']]) ixs1 = np.where(data[:, 3] == 1.0)[0] ixs2 = np.where(data[:, 3] == 0.0)[0] plt.scatter(data[ixs1, 1], np.abs(data[ixs1, 0]), marker='o', s=20, c='r', label='Success') plt.scatter(data[ixs2, 1], np.abs(data[ixs2, 0]), marker='*', s=70, c='b',...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Prediction accuracy of the regression coefficients. In the experiment where the regression coefficients of the artificial data were drawn from U(-1.5, 1.5), we plot the true regression coefficient on the horizontal axis and the posterior mean on the vertical axis. When the true value is small, the posterior mean is small regardless of whether the causal direction prediction is correct (red) or incorrect (blue). Incorrect predictions, in particular, tend to occur where the regression coefficient is estimated to be small.
data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']]) ixs1 = np.where(data[:, 2] == 1)[0] ixs2 = np.where(data[:, 2] == 0)[0] assert(len(ixs1) + len(ixs2) == len(data)) plt.figure(figsize=(5, 5)) plt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct') plt.scatter(data[ixs2, 0], data[ixs2, 1], ...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Output to EPS
data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']]) ixs1 = np.where(data[:, 2] == 1)[0] ixs2 = np.where(data[:, 2] == 0)[0] assert(len(ixs1) + len(ixs2) == len(data)) plt.figure(figsize=(5, 5)) plt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct') plt.plot([-3, 3], [-3, 3]) plt.xlim(-3, 3)...
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Naive implementation of closest_pair
def naive_closest_pair(points):
    best, p, q = np.inf, None, None
    n = len(points)

    for i in range(n):
        for j in range(i + 1, n):
            d = euclid_distance(points[i], points[j])
            if d < best:
                best, p, q = d, points[i], points[j]

    return best, p, q
Day 11 - Closest pair of points.ipynb
AlexandruValeanu/365-days-of-algorithms
gpl-3.0
Draw points (with the closest pair marked in red)
def draw_points(points, p, q):
    xs, ys = zip(*points)
    plt.figure(figsize=(10, 10))
    plt.scatter(xs, ys)
    plt.scatter([p[0], q[0]], [p[1], q[1]], s=100, c='red')
    plt.plot([p[0], q[0]], [p[1], q[1]], 'k', c='red')
    plt.show()
Day 11 - Closest pair of points.ipynb
AlexandruValeanu/365-days-of-algorithms
gpl-3.0
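The `closest_pair` function called in the next cell is not shown in this excerpt; a standard divide-and-conquer sketch that matches the `(distance, p, q)` return convention used below (not necessarily the notebook's actual implementation) might look like:

```python
import math

def euclid_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    """Divide-and-conquer closest pair; returns (distance, p, q)."""
    def brute(pts):
        best = (math.inf, None, None)
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = euclid_distance(pts[i], pts[j])
                if d < best[0]:
                    best = (d, pts[i], pts[j])
        return best

    def rec(pts):
        n = len(pts)
        if n <= 3:
            return brute(pts)
        mid = n // 2
        midx = pts[mid][0]
        best = min(rec(pts[:mid]), rec(pts[mid:]), key=lambda t: t[0])
        # only points within best-distance of the dividing line can improve
        strip = sorted((p for p in pts if abs(p[0] - midx) < best[0]),
                       key=lambda p: p[1])
        # in the strip, each point needs to be checked against at most
        # the next 7 points in y-order
        for i in range(len(strip)):
            for j in range(i + 1, min(i + 8, len(strip))):
                d = euclid_distance(strip[i], strip[j])
                if d < best[0]:
                    best = (d, strip[i], strip[j])
        return best

    return rec(sorted(points))
```

Sorting inside the strip step makes this O(n log² n) rather than the optimal O(n log n), which keeps the sketch short.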
Run(s)
points = [(26, 77), (12, 37), (14, 18), (19, 96), (71, 95), (91, 9), (98, 43), (66, 77), (2, 75), (94, 91)] xs, ys = zip(*points) d, p, q = closest_pair(points) assert d == naive_closest_pair(points)[0] print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d)) draw_points(points, p, q) N = ...
Day 11 - Closest pair of points.ipynb
AlexandruValeanu/365-days-of-algorithms
gpl-3.0
To parse the data file, just read 3 lines at a time to build individual vectors. Each module's data has a known number of measurements (82), so the list of vectors can be split into groups and assembled into Module objects.
from collections import namedtuple from statistics import mean,stdev Vector = namedtuple('Vector', ['x', 'y', 'z', 'label']) def parse_vectors(lines): vecs = [] lines_iter = iter(lines) label = "" def tokenize(l): nonlocal label l = l.split('#')[-1] toks = [t for t in l.split('\t...
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
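The parser cell above is truncated; a minimal hedged sketch of the grouping idea (hypothetical `parse_vectors_simple`, assuming one float per non-comment line and three consecutive floats per vector, which simplifies the real file format) could look like:

```python
from collections import namedtuple

Vector = namedtuple('Vector', ['x', 'y', 'z', 'label'])

def parse_vectors_simple(lines, label=""):
    """Collect floats from non-comment lines; every three floats -> one vector."""
    values = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        values.extend(float(tok) for tok in line.split())
    return [Vector(values[i], values[i + 1], values[i + 2], label)
            for i in range(0, len(values) - 2, 3)]
```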
Now that the potting data has been successfully loaded into an appropriate data structure, we can make some plots. First, let's look at the locations of the potting positions on the TBM, on both the TBM and HDI sides.
tbm_horiz = [] tbm_verti = [] hdi_horiz = [] hdi_verti = [] plt.figure() for module in modules: tbm_horiz.append(module.tbm_on_tbm[1][0]-module.tbm_on_tbm[0][0]) tbm_horiz.append(module.tbm_on_tbm[2][0]-module.tbm_on_tbm[3][0]) tbm_verti.append(module.tbm_on_tbm[3][1]-module.tbm_on_tbm[0][1]) tbm_verti....
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
So now we know the average and standard deviation of the trace lengths on the TBM. Good. Now let's examine how flat the modules are overall by looking at the points for the HDI and BBM bond pads in the YZ plane.
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 5)) for i, module in enumerate(modules): ys = [] zs = [] _, offset_y, offset_z, *_ = module.hdi_bond_pads[0] for bond_pad in module.hdi_bond_pads[:16]: ys.append(bond_pad[1]-offset_y) zs.append(bond_pad[2]-offset_z) axes[0][0].p...
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
HDI/BBM Offset Data There is also data available for the center/rotation of the HDI/BBM, so we can try to measure the offsets. The raw data files are missing some rows, which would throw off the parser by introducing a misalignment of the HDI/BBM pairing. I added the missing rows by hand.
from IPython.display import Markdown, display_markdown with open("./orientationData.txt") as f: vecs = parse_vectors(f.readlines()) pairs = [] NULL = set([0]) for i in range(len(vecs)//16): for j in range(8): hdi = vecs[i*16+j] bbm = vecs[i*16+j+8] pair = (hdi,bbm) if set(hdi[:3]...
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
Load the data Load the catalogues
panstarrs = Table.read("panstarrs_u1.fits")
wise = Table.read("wise_u1.fits")
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Coordinates. We will use the coordinates to retrieve the extinction at the positions of the sources.
coords_panstarrs = SkyCoord(panstarrs['raMean'], panstarrs['decMean'],
                            unit=(u.deg, u.deg), frame='icrs')
coords_wise = SkyCoord(wise['raWise'], wise['decWise'],
                       unit=(u.deg, u.deg), frame='icrs')
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Reddening Get the extinction for the positions of the sources in the catalogues.
ext_panstarrs = get_eb_v(coords_panstarrs.ra.deg, coords_panstarrs.dec.deg)
ext_wise = get_eb_v(coords_wise.ra.deg, coords_wise.dec.deg)
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Apply the correction to each position
i_correction = ext_panstarrs * FILTER_EXT["i"]
w1_correction = ext_wise * FILTER_EXT["W1"]

hist(i_correction, bins=100);
hist(w1_correction, bins=100);

panstarrs.rename_column("i", 'iUncor')
wise.rename_column("W1mag", 'W1magUncor')

panstarrs["i"] = panstarrs["iUncor"] - i_correction
wise["W1mag"] = wise["W1magU...
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Save the corrected catalogues PanSTARRS
columns_save = ['objID', 'raMean', 'decMean', 'raMeanErr', 'decMeanErr', 'i', 'iErr']
panstarrs[columns_save].write('panstarrs_u2.fits', format="fits")

panstarrs["ext"] = ext_panstarrs
panstarrs[['objID', "ext"]].write('panstarrs_extinction.fits', format="fits")

# Free memory
del panstarrs
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
WISE
columns_save = ['AllWISE', 'raWise', 'decWise', 'raWiseErr', 'decWiseErr', 'W1mag', 'W1magErr']
wise[columns_save].write('wise_u2.fits', format="fits")

wise["ext"] = ext_wise
wise[['AllWISE', "ext"]].write('wise_extinction.fits', format="fits")
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Let's make sure we install the necessary version of TensorFlow. After doing the pip install above, click Restart the kernel in the notebook so that the Python environment picks up the new packages.
import os PROJECT = "qwiklabs-gcp-bdc77450c97b4bf6" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 import tensorflow as tf print("TensorFlow version: ",tf.version.VERSION) # Do not change these os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION o...
quests/serverlessml/02_bqml/solution/first_model.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
5.1 Numerical Differentiation Two-point forward-difference formula $f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2}f''(c)$ where $c$ is between $x$ and $x+h$ Example Use the two-point forward-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1/x$ at $x = 2$
# Parameters x = 2 h = 0.1 # Symbolic computation sym_x = sym.Symbol('x') sym_deri_x1 = sym.diff(1 / sym_x, sym_x) sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf() # Approximation f = lambda x : 1 / x deri_x1 = (f(x + h) - f(x)) / h # Comparison print('approximate = %f, real value = %f, backward error = %f' %(de...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
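As a sanity check on the formula above (a hedged sketch, not part of the original notebook), the forward-difference approximation for $f(x) = 1/x$ at $x = 2$ with $h = 0.1$ can be computed directly:

```python
# Two-point forward difference for f(x) = 1/x at x = 2, h = 0.1
f = lambda x: 1.0 / x
x, h = 2.0, 0.1

approx = (f(x + h) - f(x)) / h   # (1/2.1 - 1/2) / 0.1, about -0.238095
exact = -1.0 / x**2              # f'(x) = -1/x^2 = -0.25
error = abs(approx - exact)      # about 0.0119, consistent with the O(h) term
print(approx, exact, error)
```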
Three-point centered-difference formula $f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6}f'''(c)$ where $x-h < c < x+h$ Example Use the three-point centered-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1 / x$ at $x = 2$
# Parameters x = 2 h = 0.1 f = lambda x : 1 / x # Symbolic computation sym_x = sym.Symbol('x') sym_deri_x1 = sym.diff(1 / sym_x, sym_x) sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf() # Approximation deri_x1 = (f(x + h) - f(x - h)) / (2 * h) # Comparison print('approximate = %f, real value = %f, backward error ...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
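Likewise, a hedged sketch (not part of the original notebook) of the centered-difference approximation for the same example shows the error dropping from $O(h)$ to $O(h^2)$:

```python
# Three-point centered difference for f(x) = 1/x at x = 2, h = 0.1
f = lambda x: 1.0 / x
x, h = 2.0, 0.1

approx = (f(x + h) - f(x - h)) / (2 * h)  # (1/2.1 - 1/1.9) / 0.2, about -0.250627
exact = -1.0 / x**2                       # f'(x) = -0.25
error = abs(approx - exact)               # about 0.000627, much smaller than forward difference
print(approx, exact, error)
```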