Some Formal Basics (skip if you just want code examples) To set the context, here we briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null-hypot...
# use scipy for generating samples
from scipy.stats import norm, laplace

def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=sqrt(0.5)):
    # sample from both distributions
    X=norm.rvs(size=n, loc=mu, scale=sigma2)
    Y=laplace.rvs(size=n, loc=mu, scale=b)
    return X,Y

mu=0.0
sigma2=1
b=sqrt(0.5...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Now how do we compare these two sets of samples? Clearly, a t-test would be a bad idea, since it basically compares the mean and variance of $X$ and $Y$, which we set to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statisti...
print "Gaussian vs. Laplace" print "Sample means: %.2f vs %.2f" % (mean(X), mean(Y)) print "Samples variances: %.2f vs %.2f" % (var(X), var(Y))
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Quadratic Time MMD We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>, which accepts any type of features in...
# turn data into Shogun representation (column vectors)
feat_p=RealFeatures(X.reshape(1,len(X)))
feat_q=RealFeatures(Y.reshape(1,len(Y)))

# choose kernel for testing. Here: Gaussian
kernel_width=1
kernel=GaussianKernel(10, kernel_width)

# create mmd instance of test-statistic
mmd=QuadraticTimeMMD(kernel, feat_p, fea...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Any sub-class of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html">CHypothesisTest</a> can approximate the null distribution using permutation/bootstrapping. This approach is guaranteed to produce consistent results; however, it might take a long time as, for each sample...
# this is not necessary as bootstrapping is the default
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_statistic_type(UNBIASED)

# to reduce runtime; should be larger in practice
mmd.set_num_null_samples(100)

# now show a couple of ways to compute the test
# compute p-value for computed test statistic
p_value=mm...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Precomputing Kernel Matrices Bootstrapping re-computes the test statistic for a bunch of permutations of the test data. For kernel two-sample test methods, in particular those of the MMD class, this means that only the joint kernel matrix of $X$ and $Y$ needs to be permuted. Thus, we can precompute the matrix, which gi...
# precompute kernel to be faster for null sampling
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q)
precomputed_kernel=CustomKernel(kernel)
mmd.set_kernel(precomputed_kernel)

# increase number of iterations since it should be faster now
mmd.set_num_null_samples(500)
p_value_boot=mmd.perform_test()
print "P-va...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Now let us visualise the distribution of the MMD statistic under $H_0:p=q$ and $H_A:p\neq q$. We sample both the null and the alternative distribution for that. Use the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html">CTwoSampleTest</a> to sample from the null distribution (permutation...
num_samples=500

# sample null distribution
mmd.set_num_null_samples(num_samples)
null_samples=mmd.sample_null()

# sample alternative distribution, generate new data for that
alt_samples=zeros(num_samples)
for i in range(num_samples):
    X=norm.rvs(size=n, loc=mu, scale=sigma2)
    Y=laplace.rvs(size=n, loc=mu, scale...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Null and Alternative Distribution Illustrated Visualise both distributions; $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\alpha)$-quantile of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type I...
def plot_alt_vs_null(alt_samples, null_samples, alpha):
    figure(figsize=(18,5))

    subplot(131)
    hist(null_samples, 50, color='blue')
    title('Null distribution')

    subplot(132)
    title('Alternative distribution')
    hist(alt_samples, 50, color='green')

    subplot(133)
    hist(null_samples, 50...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Different Ways to Approximate the Null Distribution for the Quadratic Time MMD As already mentioned, bootstrapping the null distribution is an expensive business. There exist a couple of more sophisticated methods that either allow very fast approximations without guarantees or reasonably fast approximations that ...
# optional: plot spectrum of joint kernel matrix
from numpy.linalg import eig

# get joint feature object and compute kernel matrix and its spectrum
feats_p_q=mmd.get_p_and_q()
mmd.get_kernel().init(feats_p_q, feats_p_q)
K=mmd.get_kernel().get_kernel_matrix()
w,_=eig(K)

# visualise K and its spectrum (only up to thres...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
The above plot of the eigenspectrum shows that the eigenvalues decay extremely fast. We choose the number of eigenvalues for the approximation such that all eigenvalues bigger than some threshold are used. In this case, we will not lose a lot of accuracy while gaining a significant speedup. For slower decaying eigenspectra,...
# threshold for eigenspectrum
thresh=0.1

# compute number of eigenvalues to use
num_eigen=len(w[w>thresh])

# finally, do the test, use biased statistic
mmd.set_statistic_type(BIASED)

# tell Shogun to use spectrum approximation
mmd.set_null_approximation_method(MMD2_SPECTRUM)
mmd.set_num_eigenvalues_spectrum(num_eigen...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
The Gamma Moment Matching Approximation and Type I errors $\DeclareMathOperator{\var}{var}$ Another method for approximating the null distribution is to match the first two moments of a <a href="http://en.wikipedia.org/wiki/Gamma_distribution">Gamma distribution</a> and then compute the quantiles of that. This does ...
# tell Shogun to use gamma approximation
mmd.set_null_approximation_method(MMD2_GAMMA)

# the usual test interface
p_value_gamma=mmd.perform_test()
print "Gamma: P-value of MMD test is %.2f" % p_value_gamma

# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
p_value_boot=mmd.perfor...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similarly to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test, and average the results. Below we compare type I errors of all meth...
# type I error is false alarm, therefore sample data under H0
num_trials=50
rejections_gamma=zeros(num_trials)
rejections_spectrum=zeros(num_trials)
rejections_bootstrap=zeros(num_trials)
num_samples=50
alpha=0.05
for i in range(num_trials):
    X=norm.rvs(size=n, loc=mu, scale=sigma2)
    Y=laplace.rvs(size=n, loc=mu,...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
We see that Gamma basically never rejects, which is in line with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I...
# parameters of dataset
m=20000
distance=10
stretch=5
num_blobs=3
angle=pi/4

# these are streaming features
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)

# stream some data and plot
num_plot=1000
features=gen_p.get_streamed_features(...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression, which still uses all available data, can be obtained by dividing the data into two parts and then computing $$ \mmd_l^2[\mathcal{F},X,Y]=\frac{1}{m_2}\sum_{i=1}^{m_2} k...
block_size=100

# if features are already under the streaming interface, just pass them
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)

# compute an unbiased estimate in linear time
statistic=mmd.compute_statistic()
print "MMD_l[X,Y]^2=%.2f" % statistic

# note: due to the streaming nature, successive calls of ...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Sometimes, one might want to use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> with data that is stored in memory. In that case, it is easy to pass the data in the form of, for example, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingDense...
# data source
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)

# retrieve some points, store them as non-streaming data in memory
data_p=gen_p.get_streamed_features(100)
data_q=gen_q.get_streamed_features(data_p.get_num_vectors())
print "...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
The Gaussian Approximation to the Null Distribution As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href="http://www.shogun-toolbox.org/doc/en/latest...
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)

print "m=%d samples from p and q" % m
print "Binary test result is: " + ("Rejection" if mmd.perform_test(alpha) else "No rejection")
print "P-value test result is %.2f" % mmd.perform_test()
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Kernel Selection for the MMD -- Overview $\DeclareMathOperator{\argmin}{arg\,min} \DeclareMathOperator{\argmax}{arg\,max}$ Now which kernel do we actually use for our tests? So far, we just plugged in arbitrary ones. However, for kernel two-sample testing, it is possible to do something more clever. Shogun's kernel sel...
sigmas=[2**x for x in linspace(-5,5, 10)]
print "Choosing kernel width from", ["{0:.2f}".format(sigma) for sigma in sigmas]

combined=CombinedKernel()
for i in range(len(sigmas)):
    combined.append_kernel(GaussianKernel(10, sigmas[i]))

# mmd instance using streaming features
block_size=1000
mmd=LinearTimeMMD(combined...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
Now perform a two-sample test with that kernel.
alpha=0.05
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)
mmd.set_null_approximation_method(MMD1_GAUSSIAN)
p_value_best=mmd.perform_test()

print "Gaussian approximation: P-value of MMD test with optimal kernel is %.2f" % p_value_best
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (this takes longer, so we reduce the number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand a...
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)
num_samples=500

# sample null and alternative distribution, implicitly generate new data for that
null_samples=zeros(num_samples)
alt_samples=zeros(num_samples)
for i in range(num_samples):
    alt_samples[i]=mmd.compute_statistic()

    # tell MMD to ...
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
hongguangguo/shogun
gpl-3.0
ICING tutorial <hr> ICING is an IG clonotype inference library developed in Python. <font color="red"><b>NB:</b></font> This is <font color="red"><b>NOT</b></font> a quickstart guide for ICING. It is intended as a detailed tutorial on how ICING works internally. If you're only interested in using ICING, please refer to...
db_file = '../examples/data/clones_100.100.tab'

# dialect="excel" for CSV or XLS files
# for computational reasons, let's limit the dataset to the first 1000 sequences
X = io.load_dataframe(db_file, dialect="excel-tab")[:1000]

# turn the following off if data are real
# otherwise, assume that the "SEQUENCE_ID" field...
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
1. Preprocessing step: data shrinking Especially in CLL patients, most of the input sequences have the same V genes AND junction. In this case, it is possible to remove such sequences from the analysis (we just need to remember them afterwards). In other words, we can collapse repeated sequences into a single one, which will...
# group by junction and v genes
groups = X.groupby(["v_gene_set_str", "junc"]).groups.values()
idxs = np.array([elem[0] for elem in groups])      # take one of them
weights = np.array([len(elem) for elem in groups])  # assign its weight
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
2. High-level group inference The number of sequences at this point may still be very high, in particular when IGs are mutated and there is not much replication. However, we rely on the fact that IG similarity is mainly constrained by their junction length. Therefore, we infer high-level groups based on their junction ...
n_clusters = 50
X_all = idxs.reshape(-1,1)

kmeans = MiniBatchKMeans(n_init=100, n_clusters=min(n_clusters, X_all.shape[0]))
lengths = X['junction_length'].values
kmeans.fit(lengths[idxs].reshape(-1,1))
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
3. Fine-grained group inference Now that we have high-level groups of IGs, we have to extract clonotypes from them. Divide the dataset based on the labels extracted from MiniBatchKMeans. For each one of the clusters, find the clonotypes contained in it using DBSCAN. This algorithm allows us to use a custom metric between IGs. [<font ...
dbscan = DBSCAN(min_samples=20, n_jobs=-1, algorithm='brute', eps=0.2,
                metric=partial(distance_dataframe, X,
                               junction_dist=distances.StringDistance(model='ham'),
                               correct=True, tol=0))

dbscan_labels = np.zeros_like(kmeans.labels_).ravel()
for label in np.unique(k...
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
Quickstart <hr> All of the above-mentioned steps are integrated in ICING with a simple call to the class inference.ICINGTwoStep. The following is an example of a working script.
db_file = '../examples/data/clones_100.100.tab'
correct = True
tolerance = 0

X = io.load_dataframe(db_file)[:1000]

# turn the following off if data are real
X['true_clone'] = [x[3] for x in X.sequence_id.str.split('_')]
true_clones = LabelEncoder().fit_transform(X.true_clone.values)

ii = inference.ICINGTwoStep(
...
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
If you want to save the results:
X['icing_clones (%s)' % ('_'.join(('StringDistance', str(eps), '0',
    'corr' if correct else 'nocorr', "%.4f" % tac)))] = labels
X.to_csv(db_file.split('/')[-1] + '_icing.csv')
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
How good is the result?
from sklearn import metrics

true_clones = LabelEncoder().fit_transform(X.true_clone.values)
print "FMI: %.5f" % (metrics.fowlkes_mallows_score(true_clones, labels))
print "ARI: %.5f" % (metrics.adjusted_rand_score(true_clones, labels))
print "AMI: %.5f" % (metrics.adjusted_mutual_info_score(true_clones, labels))
print...
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
Is it better or worse than the result of clustering all sequences at the same time?
labels = dbscan.fit_predict(np.arange(X.shape[0]).reshape(-1, 1))
print "FMI: %.5f" % metrics.fowlkes_mallows_score(true_clones, labels)
print "ARI: %.5f" % (metrics.adjusted_rand_score(true_clones, labels))
print "AMI: %.5f" % (metrics.adjusted_mutual_info_score(true_clones, labels))
print "NMI: %.5f" % (metrics.norm...
notebooks/icing_tutorial.ipynb
slipguru/ignet
bsd-2-clause
Now fit using the XID+ interface to pystan
%%time
from xidplus.stan_fit import SPIRE

fit=SPIRE.all_bands(prior250,prior350,prior500,iter=1000)
docs/notebooks/examples/XID+example_run_script.ipynb
pdh21/XID_plus
mit
Initialise the posterior class with the fit object from pystan, and save alongside the prior classes
posterior=xidplus.posterior_stan(fit,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior,'test')
docs/notebooks/examples/XID+example_run_script.ipynb
pdh21/XID_plus
mit
Alternatively, you can fit with the pyro backend.
%%time
from xidplus.pyro_fit import SPIRE

fit_pyro=SPIRE.all_bands([prior250,prior350,prior500],n_steps=10000,lr=0.001,sub=0.1)
posterior_pyro=xidplus.posterior_pyro(fit_pyro,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior_pyro,'test_pyro')

plt.semilogy(posterior_pyro.loss_history)
docs/notebooks/examples/XID+example_run_script.ipynb
pdh21/XID_plus
mit
You can fit with the numpyro backend.
%%time
from xidplus.numpyro_fit import SPIRE

fit_numpyro=SPIRE.all_bands([prior250,prior350,prior500])
posterior_numpyro=xidplus.posterior_numpyro(fit_numpyro,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior_numpyro,'test_numpyro')

prior250.bkg
docs/notebooks/examples/XID+example_run_script.ipynb
pdh21/XID_plus
mit
We will want to run the notebook in the future with updated values. How can we do this? Make the dates update automatically.
start = datetime.datetime(2017, 3, 2)  # the day Snap went public
end = datetime.date.today()          # datetime.date.today

snap = web.DataReader("SNAP", 'google', start, end)
snap
snap.index.tolist()
Code/notebooks/bootcamp_format_plotting.ipynb
NYUDataBootcamp/Materials
mit
.format() We want to print something with systematic changes in the text. Suppose we want to print out the following information: 'On day X Snap closed at VALUE Y and the volume was Z.'
# How did we do this before?
for index in snap.index:
    print('On day', index, 'Snap closed at', snap['Close'][index],
          'and the volume was', snap['Volume'][index], '.')
Code/notebooks/bootcamp_format_plotting.ipynb
NYUDataBootcamp/Materials
mit
This looks awful. We want to cut the day and express the volume in millions.
# express Volume in millions
snap['Volume'] = snap['Volume']/10**6
snap
Code/notebooks/bootcamp_format_plotting.ipynb
NYUDataBootcamp/Materials
mit
The .format() method What is .format() and how does it work? Google it and find a good link.
print('Today is {}.'.format(datetime.date.today()))

for index in snap.index:
    print('On {} Snap closed at ${} and the volume was {} million.'.format(index, snap['Close'][index], snap['Volume'][index]))

for index in snap.index:
    print('On {:.10} Snap closed at ${} and the volume was {:.1f} million.'.format(st...
Code/notebooks/bootcamp_format_plotting.ipynb
NYUDataBootcamp/Materials
mit
Check Olson's blog and style recommendations.
fig, ax = plt.subplots()  # figsize=(8,5)
snap['Close'].plot(ax=ax, grid=True, style='o', alpha=.6)
ax.set_xlim([snap.index[0]-datetime.timedelta(days=1),
             snap.index[-1]+datetime.timedelta(days=1)])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.se...
Code/notebooks/bootcamp_format_plotting.ipynb
NYUDataBootcamp/Materials
mit
Machine Dependent Options Each installation of GPy also creates an installation.cfg file. This file should include any installation specific settings for your GPy installation. For example, if a particular machine is set up to run OpenMP then the installation.cfg file should contain
# This is the local installation configuration file for GPy
[parallel]
openmp=True
GPy/config.ipynb
SheffieldML/notebook
bsd-3-clause
With mutable collection types (list and dictionary), if a parameter's default value is of such a type, every call to the function operates on, and mutates, the same shared default object.
def foo(values, x=[]):
    for value in values:
        x.append(value)
    return x

foo([0,1,2])
foo([4,5])

def foo_fix(values, x=[]):
    if len(x) != 0:
        x = []
    for value in values:
        x.append(value)
    return x

foo_fix([0,1,2])
foo_fix([4,5])
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
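The idiomatic fix is to use None as the default and create a fresh list inside the function, so no state is ever shared between calls. A minimal sketch (foo_none is a name introduced here for illustration):

def foo_none(values, x=None):
    # a new list is created on every call, so the default is never mutated
    if x is None:
        x = []
    for value in values:
        x.append(value)
    return x

print foo_none([0, 1, 2])  # [0, 1, 2]
print foo_none([4, 5])     # [4, 5], unaffected by the previous call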
2 Global variables
x = 5

def set_x(y):
    x = y
    print 'inner x is {}'.format(x)

set_x(10)
print 'global x is {}'.format(x)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
x = 5 defines a global variable, but the x that appears inside the set_x function is a local variable, so the global x is not changed.
def set_global_x(y):
    global x
    x = y
    print 'global x is {}'.format(x)

set_global_x(10)
print 'global x now is {}'.format(x)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
By adding the global keyword, the global variable x is indeed changed. 3 Exercise: the Fibonacci sequence $F_{n+1}=F_{n}+F_{n-1}$, where $F_{0}=0, F_{1}=1, F_{2}=1, F_{3}=2, \cdots$. Recursive version: the running time of this algorithm grows exponentially in $n$.
def fib_recursive(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib_recursive(n-1) + fib_recursive(n-2)

fib_recursive(10)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Iterative version: the time complexity is $T(n)=O(n)$.
def fib_iterator(n):
    g = 0
    h = 1
    i = 0
    while i < n:
        h = g + h
        g = h - g
        i += 1
    return g

fib_iterator(10)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Generator version: the yield keyword can be used to implement an iterator.
def fib_iter(n):
    g = 0
    h = 1
    i = 0
    while i < n:
        h = g + h
        g = h - g
        i += 1
        yield g

for value in fib_iter(10):
    print value,
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Matrix method: $$\begin{bmatrix}F_{n+1}\\F_{n}\end{bmatrix}=\begin{bmatrix}1&1\\1&0\end{bmatrix}\begin{bmatrix}F_{n}\\F_{n-1}\end{bmatrix}$$ Write $u_{n+1}=Au_{n}$, where $u_{n+1}=\begin{bmatrix}F_{n+1}\\F_{n}\end{bmatrix}$. Iterating the matrix gives $u_{n+1}=A^{n}u_{0}$, where $u_{0}=\begin{bmatrix}1\\0\end{bmatrix}$; $A^{n}$ can be computed via $(A^{n/2})^{2}$, which brings the algorithm's time comple...
import numpy as np

a = np.array([[1,1],[1,0]])

def pow_n(n):
    if n == 1:
        return a
    elif n % 2 == 0:
        half = pow_n(n/2)
        return half.dot(half)
    else:
        half = pow_n((n-1)/2)
        return a.dot(half).dot(half)

def fib_pow(n):
    a_n = pow_n(n)
    u_0 = np.array([1,0])
    return ...
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Quick Sort
def quick_sort(array):
    if len(array) < 2:
        return array
    else:
        pivot = array[0]
        left = [item for item in array[1:] if item < pivot]
        right = [item for item in array[1:] if item >= pivot]
        return quick_sort(left)+[pivot]+quick_sort(right)

quick_sort([10,11,3,21,9,22])
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
And why would I want that? What is pandas good for? Pandas is useful if you want to: work with data in an easy way. Quickly explore a dataset and understand the data you have. Easily manipulate information, for example computing statistics. Plot patterns and distributions of the data. Work with Excel...
df.head()
Dia1/.ipynb_checkpoints/2_PandasIntro-checkpoint.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
An empty one is of no use, so let's add some information to it! Filling a DataFrame with information. Situation: suppose you are a taco vendor and want to make a dataframe of how many tacos you sell in a week, say to see which tacos are the most popular and put more effort into them. We will assume: that you sell tacos de Pastor...
df['Pastor']=np.random.randint(100, size=7)
df['Tripas']=np.random.randint(100, size=7)
df['Chorizo']=np.random.randint(100, size=7)
df.index=['Lunes','Martes','Miercoles','Jueves','Viernes','Sabado','Domingo']
df.
Dia1/.ipynb_checkpoints/2_PandasIntro-checkpoint.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Lesson 1 Create Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file. Get Data - We will ...
# Import all libraries needed for the tutorial

# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv

# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
impor...
notebooks/pandas_tutorial.ipynb
babraham123/script-runner
mit
Single computer get data (5) # usdgbp
demo.get_price('GBPUSD')
process.processSinglePrice()

demo.get_price('USDEUR')
process.processSinglePrice()

demo.get_price('EURGBP')
process.processSinglePrice()

demo.get_prices(1)
process.processPrices(3)
demos/demo1/00_DEMO_01.ipynb
mhallett/MeDaReDa
mit
limitations Multi computer (cloud) set workers to work
while True:
    process.processSinglePrice()
    #break
demos/demo1/00_DEMO_01.ipynb
mhallett/MeDaReDa
mit
Trying reduce without and with an initializer.
# in Python 3, reduce lives in functools
from functools import reduce
from operator import add

for result in (reduce(add, [42]), reduce(add, [42], 10)):
    print(result)
content/posts/coding/recursion_looping_relationship.ipynb
dm-wyncode/zipped-code
mit
My rewrite of functools.reduce using recursion. For the sake of demonstration only.
def first(value_list):
    return value_list[0]

def rest(value_list):
    return value_list[1:]

def is_undefined(value):
    return value is None

def recursive_reduce(function, iterable, initializer=None):
    if is_undefined(initializer):
        initializer = accum_value = first(iterable)
    else:
...
content/posts/coding/recursion_looping_relationship.ipynb
dm-wyncode/zipped-code
mit
Test. Test if the two functions return the sum of a list of random numbers.
from random import choice
from operator import add

LINE = ''.join(('-', ) * 20)
print(LINE)
for _ in range(5):
    # create a tuple of random numbers of length 2 to 10
    test_values = tuple(choice(range(101)) for _ in range(choice(range(2, 11))))
    print('Testing these values: {}'.format(test_values))
    # use su...
content/posts/coding/recursion_looping_relationship.ipynb
dm-wyncode/zipped-code
mit
Then let us generate some points in 2-D that will form our dataset:
# Create some data points
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
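The cell above is left as an exercise; a minimal sketch of what it might contain (the two Gaussian clusters, their locations, and the names X and y are assumptions for illustration):

import numpy as np

np.random.seed(42)  # make the example reproducible

# two clusters of 2-D points: circles (class 0) and crosses (class 1)
n_per_class = 20
X0 = np.random.randn(n_per_class, 2) * 0.3        # class 0 around (0, 0)
X1 = np.random.randn(n_per_class, 2) * 0.3 + 1.0  # class 1 around (1, 1)
X = np.vstack((X0, X1))
y = np.concatenate((np.zeros(n_per_class), np.ones(n_per_class)))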
Let's visualise these points in a scatterplot using the plot function from matplotlib
# Visualise the points in a scatterplot
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
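A minimal plotting sketch, assuming the X and y arrays from the previous sketch:

import matplotlib.pyplot as plt

# circles for class 0, crosses for class 1
plt.plot(X[y == 0, 0], X[y == 0, 1], 'o', label='class 0 (circles)')
plt.plot(X[y == 1, 0], X[y == 1, 1], 'x', label='class 1 (crosses)')
plt.legend()
plt.show()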
Here, imagine that the purpose is to build a classifier that for a given new point will return whether it belongs to the crosses (class 1) or circles (class 0). Learning Activity 2: Computing the output of a Perceptron Let’s now define a function which returns the output of a Perceptron for a single input point.
# Now let's build a perceptron for our points
def outPerceptron(x,w,b):
    innerProd = np.dot(x,w)  # computes the weighted sum of input
    output = 0
    if innerProd > b:
        output = 1
    return output
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
It’s useful to define a function which returns the sequence of outputs of the Perceptron for a sequence of input points:
# Define a function which returns the sequence of outputs for a sequence of input points
def multiOutPerceptron(X,w,b):
    nInstances = X.shape[0]
    outputs = np.zeros(nInstances)
    for i in range(0,nInstances):
        outputs[i] = outPerceptron(X[i,:],w,b)
    return outputs
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Bonus Activity: Efficient coding of multiOutPerceptron In the above implementation, the simple outPerceptron function is called for every single instance. It is cleaner and more efficient to code everything in one function using matrices:
# Optimise the multiOutPerceptron function
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
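One way to vectorise it (a sketch; multiOutPerceptron2 is a name introduced here, and it reproduces multiOutPerceptron with a single matrix-vector product instead of a Python loop):

def multiOutPerceptron2(X, w, b):
    # X.dot(w) computes all weighted sums at once; comparing with b
    # gives a boolean vector, which we cast to 0/1 floats
    return (X.dot(w) > b).astype(float)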
Learning Activity 4: Playing with weights and thresholds Let’s try some weights and thresholds, and see what happens:
# Try some initial weights and thresholds
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
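For instance (the specific numbers are arbitrary guesses chosen only to have something to run; the values used in the original activity are not shown):

w = np.array([0.3, -0.7])  # arbitrary weights
b = 0.1                    # arbitrary threshold
print(multiOutPerceptron(X, w, b))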
So this is clearly not great! It classifies the first point as in one category and all the others in the other one. Let's try something else (an educated guess this time).
# Try an "educated guess"
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
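A guess consistent with the separating line discussed below, y = 0.5x - 0.2: a point lies above that line exactly when -0.5*x1 + x2 > -0.2, so (the original values are not shown, hence an assumption):

w = np.array([-0.5, 1.0])  # normal vector of the line y = 0.5x - 0.2
b = -0.2                   # threshold
print(multiOutPerceptron(X, w, b))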
This is much better! To obtain these values, we found a separating hyperplane (here, a line) between the points. The equation of the line is $y = 0.5x-0.2$. Quiz - Can you explain why this line corresponds to the weights and bias we used? - Is this separating line unique? What does it mean? Can you check that the perceptr...
# Visualise the separating line
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
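A plotting sketch under the same assumptions (X, y, and plt from the earlier sketches):

xs = np.linspace(X[:, 0].min() - 0.5, X[:, 0].max() + 0.5, 100)
plt.plot(X[y == 0, 0], X[y == 0, 1], 'o')
plt.plot(X[y == 1, 0], X[y == 1, 1], 'x')
plt.plot(xs, 0.5 * xs - 0.2, 'k--', label='y = 0.5x - 0.2')  # separating line
plt.legend()
plt.show()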
Now try adding new points to see how they are classified:
# Add new points and test
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Visualise the new test points in the graph and plot the separating lines.
# Visualise the new points and line
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Note here that the two sets of parameters classify the squares identically, but not the triangle. You can now ask yourself which one of the two sets of parameters makes more sense, and how you would classify that triangle. These types of points are frequent in realistic datasets, and the question of how to classify them "acc...
def function(x):
    return np.exp(-np.sin(x))*(x**2)

def gradient(x):
    return -x*np.exp(-np.sin(x))*(x*np.cos(x)-2)  # use wolfram alpha!
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Let's see what the function looks like
# Visualise the function
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
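A sketch of this visualisation (the range [-10, 10] is an assumption, matching the default used by the viz helper later on):

xx = np.linspace(-10, 10, 200)
plt.plot(xx, function(xx))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()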
Now let us implement a simple Gradient Descent that uses constant stepsizes. We define two functions, the first one is the most simple version which doesn't store the intermediate steps that are taken. The second one does store the steps which is useful to visualize what is going on and explain some of the typical beha...
def simpleGD(x0,stepsize,nsteps):
    x = x0
    for k in range(0,nsteps):
        x -= stepsize*gradient(x)
    return x

def simpleGD2(x0,stepsize,nsteps):
    x = np.zeros(nsteps+1)
    x[0] = x0
    for k in range(0,nsteps):
        x[k+1] = x[k]-stepsize*gradient(x[k])
    return x
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Let's see what it looks like. Let's start from $x_0 = 3$, use a (constant) stepsize of $\delta=0.1$ and let's go for 100 steps.
# Try the first given values
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
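Filling in the values just quoted ($x_0=3$, stepsize 0.1, 100 steps):

x_final = simpleGD(3.0, 0.1, 100)
print(x_final)  # should land near the global minimum x* = 0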
Simple inspection of the figure above shows that this is close enough to the actual true minimum ($x^\star=0$). A few standard situations:
# Try the second given values
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Ok! so that's still alright
# Try the third given values
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
That's not... Visual inspection of the figure above shows that we got stuck in a local optimum. Below we define a simple visualisation function to show where the GD algorithm brings us; it can safely be skipped.
def viz(x,a=-10,b=10):
    xx = np.linspace(a,b,100)
    yy = function(xx)
    ygd = function(x)
    plt.plot(xx,yy)
    plt.plot(x,ygd,color='red')
    plt.plot(x[0],ygd[0],marker='o',color='green',markersize=10)
    plt.plot(x[len(x)-1],ygd[len(x)-1],marker='o',color='red',markersize=10)
    plt.show()
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Let's show the steps that were taken in the various cases that we considered above
# Visualise the steps taken in the previous cases
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
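For the first case this might look as follows (a sketch; simpleGD2 stores every iterate and viz plots them, and the other cases follow the same pattern with their respective starting points and stepsizes):

steps = simpleGD2(3.0, 0.1, 100)  # keep all iterates of the first case
viz(steps)                       # green dot: start, red dot: end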
To summarise these three cases: - In the first case, we start from a sensible point (not far from the optimal value $x^\star = 0$ and on a slope that leads directly to it) and we get to a very satisfactory point. - In the second case, we start from a less sensible point (on a slope that does not lead directly to it) a...
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils

# Some generic parameters for the learning process
batch_size = 100  # number of instances each noisy gradient will be evalu...
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Learning Activity 8: Loading the MNIST dataset Keras does the loading of the data itself and shuffles the data randomly. This is useful since the difficulty of the examples in the dataset is not uniform (the last examples are harder than the first ones)
# Load the MNIST data
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
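The standard Keras loader; the names images_train/labels_train are chosen to match their use later in this notebook:

(images_train, labels_train), (images_test, labels_test) = mnist.load_data()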
You can also depict a sample from either the training or the test set using the imshow() function:
# Display the first image
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
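For example (a sketch, using the arrays from the previous cell):

plt.imshow(images_train[0], cmap='gray')
plt.title('Label: {}'.format(labels_train[0]))
plt.show()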
Ok, the label 5 does indeed seem to correspond to that number! Let's check the dimensions of the dataset. Learning Activity 9: Reshaping the dataset Each image in MNIST has 28 by 28 pixels, which results in a $28\times 28$ array. As a next step, and prior to feeding the data into our NN classifier, we need to flatten eac...
# Reshaping of vectors in a format that works with the way the layers are coded
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
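A sketch of the flattening described below: each 28x28 image becomes a 784-vector, cast to float32 and scaled to [0, 1] (the scaling mirrors the /255 used at the end of the notebook):

images_train = images_train.reshape(60000, 784).astype('float32') / 255
images_test = images_test.reshape(10000, 784).astype('float32') / 255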
Remember, it is always good practice to check the dimensionality of your train and test data using the shape command prior to constructing any classification model:
# Check the dimensionality of train and test
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
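For instance:

print(images_train.shape, images_test.shape)  # expect (60000, 784) and (10000, 784)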
So we have 60,000 training samples, 10,000 test samples, and the samples (instances) are 28x28 arrays. We need to reshape these instances as vectors (of 784=28x28 components). For storage efficiency, the values of the components are stored as uint8; we need to cast them to float32 so that Keras can deal...
# Set y categorical
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
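One-hot encoding via np_utils, which was imported above (the _cat names are introduced here for illustration):

labels_train_cat = np_utils.to_categorical(labels_train, 10)
labels_test_cat = np_utils.to_categorical(labels_test, 10)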
Learning Activity 10: Building a NN classifier A neural network model consists of artificial neurons arranged in a sequence of layers. Each layer receives a vector of inputs and converts these into some output. The interconnection pattern is "dense" meaning it is fully connected to the previous layer. Note that the fir...
# First, declare a model with a sequential architecture # Then add a first layer with 500 nodes and 784 inputs (the pixels of the image) # Define the activation function to use on the nodes of that first layer # Second hidden layer with 300 nodes # Output layer with 10 categories (+using softmax)
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
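A sketch following those comments (500 and 300 hidden nodes, 784 inputs, 10 softmax outputs; the sigmoid activations on the hidden layers are an assumption, since the cell does not name them):

model = Sequential()
# first hidden layer: 500 nodes, 784 inputs (the pixels of the image)
model.add(Dense(500, input_dim=784))
model.add(Activation('sigmoid'))
# second hidden layer: 300 nodes
model.add(Dense(300))
model.add(Activation('sigmoid'))
# output layer: 10 categories, using softmax
model.add(Dense(10))
model.add(Activation('softmax'))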
Learning Activity 11: Training and testing of the model Here we define a somewhat standard optimizer for NN. It is based on Stochastic Gradient Descent with some standard choice for the annealing.
# Definition of the optimizer.
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
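A standard choice (a sketch; the specific hyperparameter values are assumptions, with the first argument being the initial scaling of the gradients mentioned below):

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)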
Finding the right arguments here is non-trivial, but the choice suggested here will work well. The only parameter we can explain here is the first one, which can be understood as an initial scaling of the gradients. At this stage, launch the learning (fit the model). The model.fit function takes all the necessary argum...
# Fit the model
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
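A sketch of the call; batch_size was set with the imports, the epoch count is an assumption, and older Keras versions (contemporary with np_utils) spell the argument nb_epoch where newer ones use epochs:

model.fit(images_train, labels_train_cat,
          batch_size=batch_size, nb_epoch=10,  # "epochs" in newer Keras
          validation_data=(images_test, labels_test_cat))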
Obviously we care far more about the results on the validation set, since this is data that the NN has not used for its training. Good results on the test set mean the model is robust.
# Display the results, the accuracy (over the test set) should be in the 98%
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
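For example (a sketch; evaluate returns the test loss, plus accuracy if the model was compiled with an accuracy metric):

score = model.evaluate(images_test, labels_test_cat, verbose=0)
print('Test score:', score)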
Bonus: Does it work?
def whatAmI(img):
    score = model.predict(img,batch_size=1,verbose=0)
    for s in range(0,10):
        print ('Am I a ', s, '? -- score: ', np.around(score[0][s]*100,3))

index = 1004  # here use anything between 0 and 9999
test = np.reshape(images_train[index,],(1,784))
plt.imshow(np.reshape(test,(28,28)), cmap="gr...
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
Does it work? (experimental Pt2)
from scipy import misc

test = misc.imread('data/ex7.jpg')
test = np.reshape(test,(1,784))
test = test.astype('float32')
test /= 255.

plt.imshow(np.reshape(test,(28,28)), cmap="gray")
whatAmI(test)
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
To keep the calculations below manageable we specify a single nside=64 healpixel in an arbitrary location of the DESI footprint.
healpixel = 26030
nside = 64
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Specifying the random seed makes our calculations reproducible.
seed = 555
rand = np.random.RandomState(seed)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Define a couple of wrapper routines that we will use several times below.
def plot_subset(wave, flux, truth, objtruth, nplot=16, ncol=4, these=None,
                xlim=None, loc='right', targname='', objtype=''):
    """Plot a random sampling of spectra."""
    nspec, npix = flux.shape
    if nspec < nplot:
        nplot = nspec
    nrow = np.ceil(nplot / ncol).astype('int')
...
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Tracer QSOs Both tracer and Lya QSO spectra contain an underlying QSO spectrum, but the Lya QSOs (which we demonstrate below) also include the Lya forest (here, based on v2.0 of the "London" mocks). Every target class has its own dedicated "Maker" class.
from desitarget.mock.mockmaker import QSOMaker

QSO = QSOMaker(seed=seed)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
The various read methods return a dictionary with (hopefully self-explanatory) target- and mock-specific quantities. Because most mock catalogs only come with (cosmologically accurate) 3D positions (RA, Dec, redshift), we use Gaussian mixture models trained on real data to assign other quantities like shapes, magnitude...
dir(QSOMaker)

data = QSO.read(healpixels=healpixel, nside=nside)

for key in sorted(list(data.keys())):
    print('{:>20}'.format(key))
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Now we can generate the spectra as well as the targeting catalogs (targets) and corresponding truth table.
%time flux, wave, targets, truth, objtruth = QSO.make_spectra(data)
print(flux.shape, wave.shape)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
The truth catalog contains the target-type-agnostic, known properties of each object (including the noiseless photometry), while the objtruth catalog contains different information depending on the type of target.
truth
objtruth
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Next, let's run target selection, after which point the targets catalog should look just like an imaging targeting catalog (here, using the DR7 data model).
QSO.select_targets(targets, truth)
targets
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
And indeed, we can see that only a subset of the QSOs were identified as targets (the rest scattered out of the QSO color selection boxes).
from desitarget.targetmask import desi_mask

isqso = (targets['DESI_TARGET'] & desi_mask.QSO) != 0
print('Identified {} / {} QSO targets.'.format(np.count_nonzero(isqso), len(targets)))
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Finally, let's plot some example spectra.
plot_subset(wave, flux, truth, objtruth, targname='QSO')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Generating QSO spectra with cosmological Lya skewers proceeds along similar lines. Here, we also include BALs with 25% probability.
from desitarget.mock.mockmaker import LYAMaker

mockfile='/project/projectdirs/desi/mocks/lya_forest/london/v9.0/v9.0.0/master.fits'
LYA = LYAMaker(seed=seed, balprob=0.25)
lyadata = LYA.read(mockfile=mockfile, healpixels=healpixel, nside=nside)
%time lyaflux, lyawave, lyatargets, lyatruth, lyaobjtruth = LYA.make_spe...
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Let's plot some of the spectra with the old and the new continuum model together.
plt.figure(figsize=(20, 10))
indx=rand.choice(len(lyaflux),9)
for i in range(9):
    plt.subplot(3, 3, i+1)
    plt.plot(lyawave,lyaflux[indx[i]],label="Old Continuum")
    plt.plot(lyawave_cont,lyaflux_cont[indx[i]],label="New Continuum")
    plt.legend()
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
And finally we compare the colors for the two runs, with the new and the old continuum.
plt.plot(lyatruth["FLUX_W1"],lyatruth_cont["FLUX_W1"]/lyatruth["FLUX_W1"]-1,'.')
plt.xlabel("FLUX_W1")
plt.ylabel(r"FLUX_W1$^{new}$/FLUX_W1-1")

plt.plot(lyatruth["FLUX_W2"],lyatruth_cont["FLUX_W2"]/lyatruth["FLUX_W2"]-1,'.')
plt.xlabel("FLUX_W2")
plt.ylabel(r"(FLUX_W2$^{new}$/FLUX_W2)-1")

plt.hist(lyatruth["FLUX_W1"...
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Conclusion: colors are slightly affected by changing the continuum model. To finalize the Lya section, let's generate another set of spectra, now including DLAs, metals, Lyb, etc.
del sys.modules['desitarget.mock.mockmaker']
from desitarget.mock.mockmaker import LYAMaker
# done in order to reload desitarget; it doesn't seem to be enough to initiate a different variable for the LYAMaker class

LYA = LYAMaker(seed=seed, sqmodel='lya_simqso_model', balprob=0.25, add_dla=True, add_metals="all...
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Demonstrate the other extragalactic target classes: LRG, ELG, and BGS. For simplicity let's write a little wrapper script that does all the key steps.
def demo_mockmaker(Maker, seed=None, nrand=16, loc='right'):
    TARGET = Maker(seed=seed)

    log.info('Reading the mock catalog for {}s'.format(TARGET.objtype))
    tdata = TARGET.read(healpixels=healpixel, nside=nside)

    log.info('Generating {} random spectra.'.format(nrand))
    indx = rand.choice(len(...
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
LRGs
from desitarget.mock.mockmaker import LRGMaker

%time demo_mockmaker(LRGMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
ELGs
from desitarget.mock.mockmaker import ELGMaker

%time demo_mockmaker(ELGMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
BGS
from desitarget.mock.mockmaker import BGSMaker

%time demo_mockmaker(BGSMaker, seed=seed)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause