SciPy implementation using scipy.ndimage.generic_filter; the custom callback function is just-in-time compiled by Numba.
@jit(nopython=True)
def filter_denoise(neighborhood):
    if neighborhood.mean() < 10:
        return neighborhood.min()
    else:
        return neighborhood[13]

def denoise_scipy(a, b):
    for channel in range(2):
        b[channel] = generic_filter(input=a[channel], function=filter_denoise, ...
Source: profiling/Denoise algorithm.ipynb (jacobdein/alpine-soundscapes, MIT license)
Numba implementation of a universal function via numba.guvectorize.
# just removed return statement
def denoise_guvectorize(a, b):
    for channel in range(2):
        for f_band in range(4, a.shape[1] - 4):
            for t_step in range(1, a.shape[2] - 1):
                neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
                if neighborhood.mean() <...
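The pure-Python baseline `denoise` that the comparison cells below call is not shown in this excerpt; a minimal sketch consistent with the guvectorize body above (9x3 neighborhood, threshold 10, minimum vs. centre value; border cells left untouched, which is an assumption) could be:

```python
import numpy as np

def denoise(a, b):
    # sketch: same neighborhood logic as denoise_guvectorize above;
    # border bands/steps are left as they are in b (an assumption)
    for channel in range(2):
        for f_band in range(4, a.shape[1] - 4):
            for t_step in range(1, a.shape[2] - 1):
                neighborhood = a[channel,
                                 f_band - 4:f_band + 5,
                                 t_step - 1:t_step + 2]
                if neighborhood.mean() < 10:
                    b[channel, f_band, t_step] = neighborhood.min()
                else:
                    b[channel, f_band, t_step] = a[channel, f_band, t_step]
    return b
```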
serial version
denoise_numba = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)', nopython=True)(denoise_guvectorize)
parallel version
denoise_parallel = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)', nopython=True, target='parallel')(denoise_guvectorize)
Check results: test the implementations on a randomly generated dataset and verify that all the results are the same.
size = 100
data = np.random.rand(2, size, int(size*1.5))
data[:, int(size/4):int(size/2), int(size/4):int(size/2)] = 27
result_python = denoise(data, np.zeros_like(data))
result_scipy = denoise_scipy(data, np.zeros_like(data))
result_numba = denoise_numba(data, np.zeros_like(data))
result_parallel = denoise_parallel(d...
check if the different implementations produce the same result
assert np.allclose(result_python, result_scipy) and np.allclose(result_python, result_numba)
plot results
fig, ax = plt.subplots(2, 2)
fig.set_figheight(8)
fig.set_figwidth(12)
im1 = ax[0, 0].imshow(data[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 0].set_title('data')
im2 = ax[0, 1].imshow(result_python[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 1].set_title('pure python')
im3 = ax[1, 0].i...
Profile for different data sizes: time the different implementations on datasets of increasing size.
sizes = [30, 50, 100, 200, 400, 800, 1600]
progress_bar = pyprind.ProgBar(iterations=len(sizes), track_time=True, stream=1, monitor=True)
t_python = np.empty_like(sizes, dtype=np.float64)
t_scipy = np.empty_like(sizes, dtype=np.float64)
t_numba = np.empty_like(sizes, dtype=np.float64)
t_parallel = np.empty_like(sizes,...
plot profile results
fig, ax = plt.subplots(figsize=(15, 5))
p1 = ax.loglog(sizes, t_python, color='black', marker='.', label='python')
p2 = ax.loglog(sizes, t_scipy, color='blue', marker='.', label='scipy')
p3 = ax.loglog(sizes, t_numba, color='green', marker='.', label='numba')
p4 = ax.loglog(sizes, t_parallel, color='red', marker='.', la...
Download the sequence data Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data. SRA link: http://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP021469
%%bash
## make a new directory for this analysis
mkdir -p empirical_10/fastq/
Source: emp_nb_Pedicularis.ipynb (dereneaton/RADmissing, MIT license)
For each ERS (individuals) get all of the ERR (sequence file accessions).
## IPython code
import pandas as pd
import numpy as np
import urllib2
import os

## open the SRA run table from github url
url = "https://raw.githubusercontent.com/"+\
      "dereneaton/RADmissing/master/empirical_10_SraRunTable.txt"
intable = urllib2.urlopen(url)
indata = pd.read_table(intable, sep="\t")

## print fir...
Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
for ID, SRR in zip(indata.Sample_Name_s, indata.Run_s):
    wget_download(SRR, "empirical_10/fastq/", ID)

%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_10/fastq/ empirical_10/fastq/*.sra
## remove .sra files
rm empirical_10/fast...
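The `wget_download` helper used above is defined earlier in the original notebook and is not shown here. A minimal sketch of such a helper, under the assumption that it wraps wget and that the URL follows the historical NCBI "sra-instant" FTP layout (both assumptions), might look like:

```python
import os
import subprocess

def wget_download(SRR, outdir, outname, dry_run=False):
    """Hypothetical sketch: fetch <SRR>.sra and save it as <outname>.sra.
    The FTP layout below is the old NCBI 'sra-instant' scheme (an assumption)."""
    url = ("ftp://ftp-trace.ncbi.nlm.nih.gov/sra/sra-instant/reads/ByRun/sra/"
           "{p3}/{p6}/{run}/{run}.sra".format(p3=SRR[:3], p6=SRR[:6], run=SRR))
    cmd = ["wget", "-q", "-O", os.path.join(outdir, outname + ".sra"), url]
    if dry_run:
        # return the command instead of running it, useful for inspection
        return cmd
    subprocess.check_call(cmd)
```

The `dry_run` flag is added here only so the command can be inspected without network access.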
Note: The data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33; we use that offset in the params file below.
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_10/        ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAG                ## 6. cutters ' params.txt
sed -i '/## 7. /c\20                   ## 7. N processors ' params.txt
sed -i '/## 9. /c\6                    ## 9. NQu...
Assemble in pyrad
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1

%%bash
sed -i '/## 12./c\2                 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_10_m2    ## 14. output name ' params.txt

%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Results. We are interested in the relationship between the amount of input (raw) data for any two samples, the average coverage they recover when clustered together, and the phylogenetic distance separating them. Raw data amounts: the average number of raw reads per sample is 1.36M.
import pandas as pd

## read in the data
sdat = pd.read_table("empirical_10/stats/s2.rawedit.txt", header=0, nrows=14)

## print summary stats
print sdat["passed.total"].describe()

## find which sample has the most raw data
maxraw = sdat["passed.total"].max()
print "\nmost raw data in sample:"
print sdat['sample '][sda...
Look at distributions of coverage pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
## read in the s3 results
sdat = pd.read_table("empirical_10/stats/s3.clusters.txt", header=0, nrows=14)

## print summary stats
print "summary of means\n=================="
print sdat['dpt.me'].describe()

## print summary stats
print "\nsummary of std\n=================="
print sdat['dpt.sd'].describe()

## print sum...
Plot the coverage for the sample with the highest mean coverage. Green shows the loci that were discarded and orange the loci that were retained. The majority of loci were discarded for having too low coverage.
import toyplot
import toyplot.svg
import numpy as np

## read in the depth information for this sample
with open("empirical_10/clust.85/38362_rex.depths", 'rb') as indat:
    depths = np.array(indat.read().strip().split(","), dtype=int)

## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
ax...
Print final stats table
cat empirical_10/stats/empirical_10_m4.stats

%%bash
head -n 10 empirical_10/stats/empirical_10_m2.stats
Infer ML phylogeny in raxml as an unrooted tree
%%bash
## raxml argument w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
    -w /home/deren/Documents/RADmissing/empirical_10/ \
    -n empirical_10_m4 -s empirical_10/outfiles/empirical_10_m4.phy

%%bash
## raxml argument w/ ...
Plot the tree in R using ape
%load_ext rpy2.ipython

%%R -h 800 -w 800
library(ape)
tre <- read.tree("empirical_10/RAxML_bipartitions.empirical_10_m4")
ltre <- ladderize(tre)
par(mfrow=c(1,2))
plot(ltre, use.edge.length=F)
nodelabels(ltre$node.label)
plot(ltre, type='u')
Using simple normalisation: f(z)/f(z0) (equation 2 from first draft of GZH paper) $\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$ $\zeta = constant$
fit_and_plot(x, yn, mu, fzeta_lin_mu_none, 2)
Source: python/notebooks/zeta_models_compared.ipynb (willettk/gzhubble, MIT license)
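For readers without the notebook's helpers: the constant-zeta linear model of equation 2 can be fit with scipy.optimize.curve_fit. The `fit_and_plot` / `fzeta_lin_mu_none` pair above is assumed to wrap something equivalent; the value of $z_0$ and the synthetic data here are purely illustrative assumptions, not the notebook's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

z0 = 0.3  # assumed reference redshift (illustrative)

def fzeta_lin(z, zeta):
    # f/f0 = 1 - zeta * (z - z0), constant zeta (eq. 2)
    return 1 - zeta * (z - z0)

# synthetic data drawn from the model with zeta = 0.5 plus small noise
rng = np.random.default_rng(0)
z = np.linspace(0.3, 1.0, 50)
ratio = fzeta_lin(z, 0.5) + rng.normal(0, 0.01, z.size)

popt, pcov = curve_fit(fzeta_lin, z, ratio)
# popt[0] recovers a value close to the true zeta = 0.5
```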
$\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$ $\zeta = \zeta[0]+\zeta[1] * \mu$
fit_and_plot(x, yn, mu, fzeta_lin_mu_lin, 3)
$\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = constant$
fit_and_plot(x, yn, mu, fzeta_exp_mu_none, 2)
$\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = \zeta[0]+\zeta[1] * \mu$
p = fit_and_plot(x, yn, mu, fzeta_exp_mu_lin, 3)
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a}, \zeta_{b} = constant$
fit_and_plot(x, yn, mu, fzeta_qud_mu_none, 3)
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
fit_and_plot(x, yn, mu, fzeta_qud_mu_lin, 5)
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
fit_and_plot(x, yn, mu, fzeta_cub_mu_none, 4)
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $ $\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
fit_and_plot(x, yn, mu, fzeta_cub_mu_lin, 7)
Using alternative normalisation, as in eqn. 4: (f(z0)-1) / (f(z)-1) $\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$ $\zeta = constant$
fit_and_plot(x, ym, mu, fzeta_lin_mu_none, 2)
$\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$ $\zeta = \zeta[0]+\zeta[1] * \mu$
fit_and_plot(x, ym, mu, fzeta_lin_mu_lin, 3)
$\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = constant$
fit_and_plot(x, ym, mu, fzeta_exp_mu_none, 2)
$\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = \zeta[0]+\zeta[1] * \mu$
fit_and_plot(x, ym, mu, fzeta_exp_mu_lin, 3)
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a}, \zeta_{b} = constant$
fit_and_plot(x, ym, mu, fzeta_qud_mu_none, 3)
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
fit_and_plot(x, ym, mu, fzeta_qud_mu_lin, 5)
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
fit_and_plot(x, ym, mu, fzeta_cub_mu_none, 4)
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $ $\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
fit_and_plot(x, ym, mu, fzeta_cub_mu_lin, 7)
Plotting the phase diagram To plot a phase diagram, we send our phase diagram object into the PDPlotter class.
# Let's show all phases, including unstable ones
plotter = PDPlotter(pd, show_unstable=0.2, backend="matplotlib")
plotter.show()
Source: notebooks/2013-01-01-Plotting and Analyzing a Phase Diagram using the Materials API.ipynb (materialsvirtuallab/matgenb, BSD-3-Clause license)
Calculating energy above hull and other phase equilibria properties
import collections

data = collections.defaultdict(list)
for e in entries:
    decomp, ehull = pd.get_decomp_and_e_above_hull(e)
    data["Materials ID"].append(e.entry_id)
    data["Composition"].append(e.composition.reduced_formula)
    data["Ehull"].append(ehull)
    data["Decomposition"].append(" + ".join(["%.2...
USE-CASE: Testing Proportions. Is the coin biased? We toss a coin 250 times and get 140 heads and 110 tails.
# we have:
n_h = 140
n_t = 110
observations = (n_h, n_t)
n_observations = n_h + n_t
print observations, n_observations

# We define the null hypothesis and the test statistic
def run_null_hypothesis(n_observations):
    """the model of Null hypothesis"""
    sample = [random.choice('HT') for _ in range(n_observation...
Source: core/Hypothesis_Testing.ipynb (tsarouch/python_minutes, GPL-2.0 license)
In the example above, like most of what will follow, we used the MC way to evaluate the p-value. Nevertheless, in many cases we can evaluate the p-value analytically with the frequentist approach. Below is shown the way of getting a p-value using the Probability Mass Function (pmf) of the binomial distribu...
# one-sided p-value: P(X >= 140) for X ~ Binomial(250, 0.5)
p = 0
for i in range(140, 251):
    p += stats.distributions.binom.pmf(i, 250, 0.5)
pval = p
print "The p-value using the frequentist approach is: ", pval
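The same number can be cross-checked in one line with the binomial survival function: `binom.sf(k, n, p)` gives $P(X > k)$, so $P(X \ge 140)$ is `sf(139, 250, 0.5)`.

```python
from scipy import stats

# one-sided p-value P(X >= 140) for X ~ Binomial(250, 0.5)
pval = stats.binom.sf(139, 250, 0.5)
```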
Is the die crooked? We have the frequencies {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}.
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops

def run_null_hypothesis(n_dice_drops):
    """the model of Null hypothesis"""
    dice_values = [1, 2, 3, 4, 5, 6]
    rolls = np.random.choice(...
USE-CASE: Testing Difference in Means
d1 = np.random.normal(38.601, 1.42, 1000)
d2 = np.random.normal(38.523, 1.42, 1000)

plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d1, 30, normed=True)

plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d2, 30, normed=True)

# plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
# ...
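In the notebook's MC style, the difference-in-means test can be sketched as a permutation test: under the null hypothesis the group labels are exchangeable, so we shuffle the pooled data and count how often the shuffled difference is at least as large as the observed one. The seed and iteration count here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d1 = rng.normal(38.601, 1.42, 1000)
d2 = rng.normal(38.523, 1.42, 1000)

def permutation_pvalue(a, b, n_iter=2000):
    """MC p-value for |mean(a) - mean(b)| under H0: labels are exchangeable."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    return count / float(n_iter)

pval = permutation_pvalue(d1, d2)
```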
USE-CASE: Testing a Correlation
data = np.random.multivariate_normal([0, 0], [[1, .75], [.75, 1]], 1000)
x = data[:, 0]
y = data[:, 1]
plt.scatter(x, y)

# we can make the null hypothesis model just by shuffling the data of one variable
x2 = x.copy()
np.random.shuffle(x2)
plt.scatter(x2, y)

def run_null_hypothesis(x, y):
    """the model of Null hypo...
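As a cross-check of the shuffling approach, scipy.stats.pearsonr returns the correlation coefficient together with an analytic two-sided p-value (the seed here is illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0], [[1, .75], [.75, 1]], 1000)

# r should be close to the true correlation 0.75;
# with n = 1000 the p-value is essentially zero
r, pval = stats.pearsonr(data[:, 0], data[:, 1])
```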
USE-CASE: Testing Proportions with the chi2 test. Above we used the total deviation as the test statistic: Sum(abs(observed - expected)). It is more common to use the chi2 statistic: Sum((observed - expected)^2 / expected). Let's see what results we get with the chi2 statistic.
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops

def run_null_hypothesis(n_dice_drops):
    """the model of Null hypothesis"""
    dice_values = [1, 2, 3, 4, 5, 6]
    rolls = np.random.choice(...
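As an analytic cross-check of the MC result, scipy.stats.chisquare computes the chi2 statistic and its p-value directly from the observed and expected frequencies:

```python
import numpy as np
from scipy import stats

observed = np.array([8, 9, 19, 5, 8, 11])
expected = np.full(6, observed.sum() / 6.0)  # fair die: 10 per face

# chi2 = sum((obs - exp)^2 / exp) = 11.6 with 5 degrees of freedom
chi2, pval = stats.chisquare(observed, expected)
```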
We see that the p-value is smaller using the chi2 statistic as the test statistic. => This is a very important point, since we see that the choice of test statistic affects the p-value quite a lot. USE-CASE: Testing Structures in Histograms, e.g. understanding whether we have signal over background.
# Let's say we already have a histogram with the bin values below:
x_obs = {1:1, 2:2, 3:2, 4:0, 5:3, 6:1, 7:1, 8:2, 9:5, 10:6,
         11:1, 12:0, 13:1, 14:2, 15:1, 16:3, 17:1, 18:0, 19:1, 20:0}
x_bgr = {1:1.2, 2:1.8, 3:1.8, 4:1.9, 5:1.9, 6:2, 7:2, 8:2, 9:1.8, 10:1.8,
         11:1.7, 12:1.7, 13:1.7, 14:1.6, 15:1.6,...
Let's focus only on bin 9, with signal value = 5. How likely is it to find n_obs = 5 when the background is 1.8? The number of entries $n_s$ in a bin can be treated as a Poisson variable with mean $\nu_s$. In this scenario we can calculate the p-value as $P(n \geq n_{obs}) = \Sigma_{n=n_{obs}}^{\infty} pmf_{poisson}(n;...
from scipy import stats

# p-value = P(n >= N_obs) = 1 - P(n <= N_obs - 1)
pmf_values = []
N_obs = 5
N_bgr = 1.8
for i in range(0, N_obs):  # i = 0 .. N_obs - 1
    pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1 - np.sum(pmf_values)
print 'The p-value is ', pval
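The loop above can be cross-checked in one line with the Poisson survival function, since `poisson.sf(n_obs - 1, mu)` equals $P(n \geq n_{obs})$:

```python
from scipy import stats

# P(n >= 5) for n ~ Poisson(1.8), via the survival function P(n > 4)
pval = stats.poisson.sf(4, 1.8)
```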
A point to keep in mind is that the background comes with an uncertainty, so we eventually have a range of p-values. In principle we can apply the procedure above to the number of entries in a subset of bins. E.g. in the two bins with the large peak we have $n_{obs}=11$ with expected $\nu_b=3.2$.
from scipy import stats

# p-value = P(n >= N_obs) = 1 - P(n <= N_obs - 1)
pmf_values = []
N_obs = 11
N_bgr = 3.2
for i in range(0, N_obs):  # i = 0 .. N_obs - 1
    pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1 - np.sum(pmf_values)
print 'The p-value is ', pval
In Caffe, models are specified in separate protobuf files. Additionally, a solver has to be specified that determines the training parameters. Instantiate the solver and train the network.
solver = caffe.SGDSolver('mnist_solver.prototxt')
solver.net.forward()

niter = 2500
test_interval = 100
# losses will also be stored in the log
train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 10))

# the main solver loop
for it in range(niter):
    sol...
Source: notebooks/caffe/train.ipynb (Petr-By/qtpyvis, MIT license)
The weights are saved in a .caffemodel file.
solver.net.save('mnist.caffemodel')
This small script shows some important aspects of Python syntax. Comments: comments in Python start with a "pound", "hash", or number sign #, and anything that follows it up to the end of the line is ignored by the interpreter. That is, you can have comments that take up a whole line, or only p...
print(2*(3+4))
print(2*3+4)
print((2*3)+4)
Source: clases/02-Sintaxis-de-Python.ipynb (leoferres/prograUDD, MIT license)
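A minimal illustration of the comment rules described above:

```python
# this comment takes up the whole line and is ignored by the interpreter
x = 2 * (3 + 4)  # this one follows a statement on the same line
print(x)  # prints 14
```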
Parentheses are also used to pass parameters to a function when it is called. In the following code snippet, the print() function is used to display, for example, the contents of a variable. The function is "called" with a pair of parentheses, with the function's arguments inside.
x = 3
print('first value:', x)
print('second value:', 2)
Some functions are called without arguments and act on the object they evaluate. The parentheses must still be used, even if the function takes no arguments.
L = [4, 2, 3, 1]
L.sort()
print(L)
Make some data
n = 1000
p = 10
X = np.random.standard_normal((n, p))
X.shape

A = np.random.random((p, 1))
A

y = X @ A
y.shape
Source: notebooks/linear_model.ipynb (cbare/Etudes, Apache-2.0 license)
Too easy
model = LinearRegression().fit(X, y)
model

X_test = np.random.standard_normal((n, p))
y_test = X_test @ A

from sklearn.metrics import r2_score
y_pred = model.predict(X_test)
r2_score(y_test, y_pred)

plt.scatter(y_test, y_pred, color='#3033ff30')
plt.show()
Adding Noise
noise = 1/2
X_train = X + np.random.normal(loc=0, scale=noise, size=(n, p))
y_train = y + np.random.normal(loc=0, scale=noise, size=(n, 1))
model = LinearRegression().fit(X_train, y_train)
X_test_noisy = X_test + np.random.normal(loc=0, scale=noise, size=(n, p))
y_test_noisy = y_test + np.random.normal(loc=0, scale=noi...
Transformed features Let's make the problem harder. Let's say there are 10 true features that are linearly related with our target variable. We don't necessarily get to observe those, but we can measure 10 other features. These might be combinations of the original features with more or less noise added. Some variable ...
def tr(v, extra_noise):
    a, b, c, d, e, f, g, h, i, j = v
    super_noisy = np.random.normal(loc=0, scale=extra_noise, size=None)
    return (a+b, b*c, (c + d + e)/3, d + i/10, e, f, g+super_noisy, h + i/5, h + c/3, 0)

noise = 1/5
X_tr_train = np.apply_along_axis(tr, axis=1, arr=X, extra_noise=2) + np.random.normal(loc=0, s...
1.1 Required Modules. numpy: NumPy is the fundamental package for scientific computing in Python. pytorch: end-to-end deep learning platform. torchvision: this package consists of popular datasets, model architectures, and common image transformations for computer vision. tensorflow: an open source machine learning frame...
# Load all necessary modules here, for clearness
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# from torchvision.datasets import MNIST
import torchvision
from torchvision import transforms
from torch.optim import lr_scheduler
# from tensorboardX impor...
Source: Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb (MegaShow/college-programming, MIT license)
2. Classification Model. We will define a simple feed-forward neural network to classify MNIST. 2.1 Short introduction of MNIST. The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing syste...
class FeedForwardNeuralNetwork(nn.Module):
    """
    Inputs              Linear/Function        Output
    [128, 1, 28, 28] -> Linear(28*28, 100) -> [128, 100]  # first hidden layer
                     -> ReLU               -> [128, 100]  # relu activation function, may sigmoid
                     -> Linear...
3. Training. We define the training function here. Additionally, hyper-parameters, the loss function, and the metric are included here too. 3.1 Pre-set hyper-parameters. Set hyper-parameters as below. Hyper-parameters include the following: learning rate: usually we start from a fairly large lr like 1e-1, 1e-2, 1e-3, a...
### Hyper parameters
batch_size = 128  # batch size is 128
n_epochs = 5  # train for 5 epochs
learning_rate = 0.01  # learning rate is 0.01
input_size = 28*28  # input image has size 28x28
hidden_size = 100  # hidden neurons is 100 for each layer
output_size = 10  # classes of prediction
l2_norm = 0  # not to use l2 penalty ...
3.2 Initialize model parameters. PyTorch provides a default initialization (uniform initialization) for linear layers, but there are still other useful initialization methods. Read more about initialization from this link: torch.nn.init.normal_, torch.nn.init.uniform_, torch.nn.init.constant_, torch.nn.init.eye_, torc...
def show_weight_bias(model):
    """Show some weights and bias distribution every layers in model.
    !!YOU CAN READ THIS CODE LATER!!
    """
    # Create a figure and a set of subplots
    fig, axs = plt.subplots(2, 3, sharey=False, tight_layout=True)

    # weight and bias for every hidden layer
    h1_w = ...
Exercise 1: Rewrite the initialization function using torch.nn.init.constant_, torch.nn.init.xavier_uniform_, and torch.nn.init.kaiming_uniform_; initialize the model with each of them, and use show_weight_bias to display the parameter distributions of the model's hidden layers. There should be 6 cells for this answer.
def weight_bias_reset_constant(model):
    """Constant initialization"""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            val = 0.1
            torch.nn.init.constant_(m.weight, val)
            torch.nn.init.constant_(m.bias, val)

weight_bias_reset_constant(model)
show_weight_bias(mo...
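Exercise 1 asks for further variants in the same pattern; a sketch of the Xavier-uniform version (weights via `xavier_uniform_`; biases kept constant, since Xavier initialization requires at least 2-D tensors; the demo model and the bias value 0.1 are illustrative assumptions):

```python
import torch
import torch.nn as nn

def weight_bias_reset_xavier_uniform(model):
    """Xavier-uniform initialization for every linear layer (sketch)."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            torch.nn.init.xavier_uniform_(m.weight)
            torch.nn.init.constant_(m.bias, 0.1)

# usage on a stand-in model with the same layer sizes as above
demo = nn.Sequential(nn.Linear(28 * 28, 100), nn.ReLU(), nn.Linear(100, 10))
weight_bias_reset_xavier_uniform(demo)
```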
3.3 Repeat over a certain number of epochs. Shuffle the whole training data: train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs). For each mini-batch: load the mini-batch data (for batch_idx, (data, target) in enumerate(train_loader): ...), compu...
# define method of preprocessing data for evaluating
train_transform = transforms.Compose([
    transforms.ToTensor(),  # Convert a PIL Image or numpy.ndarray to tensor.
    # Normalize a tensor image with mean 0.1307 and standard deviation 0.3081
    transforms.Normalize((0.1307,), (0.3081,))
])
test_transform = tran...
3.3.2 & 3.3.3 Compute the gradient of the loss over the parameters & update the parameters with gradient descent.
def train(train_loader, model, loss_fn, optimizer, get_grad=False):
    """train model using loss_fn and optimizer. When this function is called,
    model trains for one epoch.

    Args:
        train_loader: train data
        model: prediction model
        loss_fn: loss function to judge the distance between target and...
Define function fit and use train_epoch and test_epoch
def fit(train_loader, val_loader, model, loss_fn, optimizer, n_epochs, get_grad=False):
    """train and val model here, we use train_epoch to train model
    and val_epoch to val model prediction performance

    Args:
        train_loader: train data
        val_loader: validation data
        model: prediction mode...
Exercise 2: Run the fit function and, from the training-set accuracy at the end of training, answer whether the model has been trained to overfitting. Use the provided show_curve function to plot how the loss and accuracy change during training. Hints: because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine them; note that the default initialization is used here.
### Hyper parameters
batch_size = 128  # batch size is 128
n_epochs = 5  # train for 5 epochs
learning_rate = 0.01  # learning rate is 0.01
input_size = 28*28  # input image has size 28x28
hidden_size = 100  # hidden neurons is 100 for each layer
output_size = 10  # classes of prediction
l2_norm = 0  # not to use l2 penalty ...
The model has not been trained to overfitting: looking at the training output above, the test-set accuracy does not drop as the number of epochs increases.
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')
Exercise 3: Set n_epochs to 10, observe whether the model can overfit the training set, and plot with show_curve. If you want the model to overfit the training set within 5 epochs, you can achieve this by adjusting the learning rate appropriately. Choose a suitable learning rate, train the model, plot with show_curve, and verify your learning rate. Hints: because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine them; note that the default initialization is used here.
### Hyper parameters
batch_size = 128  # batch size is 128
n_epochs = 5  # train for 5 epochs
learning_rate = 0.01  # learning rate is 0.01
input_size = 28*28  # input image has size 28x28
hidden_size = 100  # hidden neurons is 100 for each layer
output_size = 10  # classes of prediction
l2_norm = 0  # not to use l2 penalty ...
Looking at the output, we can see that the model does not overfit even after 10 epochs.
# 3.1 show_curve
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')

# 3.2 Train
batch_size = 128  # batch size is 128
n_epochs = 5  # train for 5 epochs
learning_rate = 0.7
input_size = 28*28  # input image has size 28x28
hidden_size = 100  # hidden neurons is 100 for each layer
output_size = 10  # classes...
3.4 Save model. PyTorch provides two ways to save a model. We recommend the method which only saves parameters, because it is more flexible and doesn't rely on a fixed model definition. When saving parameters, we save not only the learnable parameters in the model, but also the learnable parameters in the optimizer. A common PyTorch convent...
# show parameters in model
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())

# Print optimizer's state_dict
print("\nOptimizer's state_dict:")
for var_name in optimizer.state_dict():
    print(var_name...
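The parameters-only convention described above can be sketched as follows (the tiny model and the file name are illustrative, not the notebook's):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
torch.save(model.state_dict(), 'checkpoint.pt')  # save learnable parameters only

new_model = nn.Linear(4, 2)  # must rebuild the same architecture first
new_model.load_state_dict(torch.load('checkpoint.pt'))
```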
Exercise 4: Use the test_epoch function to predict new_model's accuracy and loss on test_loader.
# test your model prediction performance
new_test_loss, new_test_accuracy = evaluate(test_loader, new_model, loss_fn)
message = 'Average loss: {:.4f}, Accuracy: {:.4f}'.format(new_test_loss, new_test_accuracy)
print(message)
4. Training Advanced. 4.1 l2_norm. We can minimize the regularization term below by using weight_decay in the SGD optimizer: \begin{equation} L_{norm} = \sum_{i=1}^{m}{\theta_{i}^{2}} \end{equation} Set l2_norm = 0.01; let's train and see.
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0.01  # use l2 penalty
get_grad = False

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)

# Cross entropy loss...
Exercise 5: Consider the influence of the regularization term's share of the loss. Train the model with l2_norm = 1. Hints: because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine them; note that the default initialization is used here.
# Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 1  # use l2 penalty
get_grad = False

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)

# Cross entropy
loss_fn =...
4.2 Dropout. During training, dropout randomly zeroes some of the elements of the input tensor with probability p, using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call. Hints: because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine them; note that the default initialization is used here.
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0  # without using l2 penalty
get_grad = False

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)

# Cross entro...
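The dropout behaviour described above can be seen directly on nn.Dropout: in training mode, surviving entries are rescaled by 1/(1-p), and in eval mode the layer is the identity (the input size 1000 is just an illustrative choice):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

y = drop(x)   # training mode: entries are either 0 or 1/(1-p) = 2
drop.eval()
z = drop(x)   # eval mode: dropout is the identity
```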
4.3 Batch normalization. Batch normalization is a technique for improving the performance and stability of artificial neural networks: \begin{equation} y = \frac{x - E[x]}{\sqrt{Var[x] + \epsilon}} * \gamma + \beta, \end{equation} where $\gamma$ and $\beta$ are learnable parameters. Hints: because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine them; note that the default initialization is used here.
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0  # without using l2 penalty
get_grad = False

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)

# Cross entro...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
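The equation above can be checked by hand on a random batch. This NumPy sketch normalizes each feature over the batch dimension (training-time statistics only; it ignores the running averages a real BatchNorm layer keeps, and fixes $\gamma$, $\beta$ instead of learning them):

```python
import numpy as np

eps = 1e-5
gamma, beta = 2.0, 0.5  # learnable parameters; fixed here for illustration

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=3.0, size=(128, 10))  # a batch of activations

# y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta, per feature over the batch
mean = x.mean(axis=0)
var = x.var(axis=0)
y = (x - mean) / np.sqrt(var + eps) * gamma + beta

print(y.mean(axis=0).round(6))  # ~ beta for every feature
print(y.std(axis=0).round(3))   # ~ gamma for every feature
```

After normalization every feature has mean $\beta$ and standard deviation $\gamma$, regardless of the input's original scale and offset.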
4.4 data augmentation More elaborate data augmentation can yield better generalization on the test dataset
# only add random horizontal flip train_transform_1 = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor. # Normalize a tensor image with mean and standard deviation transforms.Normalize((0.1307,), (0.3081,)) ]) # only add ran...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
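The core of `transforms.RandomHorizontalFlip` is just a coin flip followed by a left-right reversal. A minimal NumPy sketch (not torchvision's implementation, and operating on a plain `(H, W)` array rather than a PIL image):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_horizontal_flip(img, p=0.5):
    """Flip a (H, W) image left-right with probability p
    (NumPy sketch of transforms.RandomHorizontalFlip)."""
    if rng.random() < p:
        return img[:, ::-1]
    return img

img = np.arange(6).reshape(2, 3)
out = random_horizontal_flip(img, p=1.0)  # p=1 forces the flip
print(out)  # each row of img reversed
```

Because the flip is re-sampled on every call, the network effectively sees a slightly different training set each epoch, which is where the regularization effect comes from.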
Homework 6: Using the provided train_transform_2 and train_transform_3, reload train_loader and train with fit. Hints: Because Jupyter cells share a common context, the model and optimizer need to be re-declared. Note that the default initialization is used here.
# train_transform_2 batch_size = 128 train_dataset_2 = torchvision.datasets.MNIST(root='./data', train=True, transform=train_transform_2, download=False) train_loader_2 = torch.utils.data.DataLoader(dataset=train_dataset_2, ...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
5. Visualization of training and validation phase We could use tensorboard to visualize our training and test phases. You could find an example here 6. Gradient explosion and vanishing We have embedded code which shows the gradients of the hidden2 and hidden3 layers. By observing how their gradients change, we can see whether the gradient is norma...
### Hyper parameters batch_size = 128 n_epochs = 15 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # use l2 penalty get_grad = True # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
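Why gradients vanish or explode at all: backpropagation multiplies one factor per layer, so factors consistently below 1 shrink the signal geometrically with depth while factors above 1 blow it up. A toy scalar chain (illustrative only, not the notebook's network) makes this concrete:

```python
# In backprop the gradient is a product of per-layer factors, so
# factors < 1 shrink it geometrically (vanishing) and factors > 1
# blow it up (explosion). A 50-layer scalar chain:
depth = 50

vanishing = 1.0
exploding = 1.0
for _ in range(depth):
    vanishing *= 0.5   # e.g. saturated activations / small weights
    exploding *= 1.5   # e.g. large weights or a huge learning rate

print(vanishing)  # ~ 8.9e-16: effectively no learning signal
print(exploding)  # ~ 6.4e8: updates diverge
```

This is why the experiments below watch the hidden2/hidden3 gradient magnitudes: orders-of-magnitude drift in either direction is the symptom to look for.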
6.1.1 Gradient Vanishing Set learning rate = 1e-10
### Hyper parameters batch_size = 128 n_epochs = 15 learning_rate = 1e-10 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # use l2 penalty get_grad = True # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_f...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
6.1.2 Gradient Explosion 6.1.2.1 Learning rate Set learning rate = 10
### Hyper parameters batch_size = 128 n_epochs = 15 learning_rate = 10 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # not to use l2 penalty get_grad = True # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy lo...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
6.1.2.2 normalization for input data 6.1.2.3 unsuitable weight initialization
### Hyper parameters batch_size = 128 n_epochs = 15 learning_rate = 1 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # not to use l2 penalty get_grad = True # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy los...
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
astroplan Plan for everything but the clouds ☁️ Brett Morris with Jazmin Berlanga Medina, Christoph Deil, Eric Jeschke, Adrian Price-Whelan, Erik Tollerud Getting started bash pip install astropy astroplan echo "Optionally:" pip install wcsaxes astroquery Outline Background: astropy astroplan basics <img src="https:/...
# Altitude-azimuth frame: from astropy.coordinates import SkyCoord, EarthLocation, AltAz import astropy.units as u from astropy.time import Time # Specify location of Apache Point Observatory with astropy.coordinates.EarthLocation apache_point = EarthLocation.from_geodetic(-105.82*u.deg, 32.78*u.deg, 2798*u.m) # Spe...
presentation.ipynb
bmorris3/gsoc2015
mit
astropy: Get coordinates of the Sun
# Where is the sun right now? from astropy.coordinates import get_sun sun = get_sun(time) print(sun)
presentation.ipynb
bmorris3/gsoc2015
mit
astroplan v0.1 Open source in Python astropy powered Get (alt/az) positions of targets at any time, from any observatory Can I observe these targets given some constraints (airmass, moon separation, etc.)? astroplan basics astroplan.Observer: contains information about an observer's location, environment on the Eart...
from astroplan import Observer # Construct an astroplan.Observer at Apache Point Observatory apache_point = Observer.at_site("Apache Point") apache_point = Observer.at_site("APO") # also works print(apache_point.location.to_geodetic())
presentation.ipynb
bmorris3/gsoc2015
mit
astroplan basics astroplan.FixedTarget: contains information about celestial objects with no (slow) proper motion
from astroplan import FixedTarget # Construct an astroplan.FixedTarget for Vega vega = FixedTarget.from_name("Vega") # (with internet access) # # (without internet access) # vega_icrs = SkyCoord(ra=279.235416*u.deg, dec=38.78516*u.deg) # vega = FixedTarget(coord=vega_icrs, name="Vega") vega_altaz = apache_point.alt...
presentation.ipynb
bmorris3/gsoc2015
mit
Convenience methods Is it night at this observatory at time? | Question | Answer | |------------------|---------------| | Is it nighttime? | observer.is_night(time) | | Is Vega up? | observer.target_is_up(time, vega) | | What is the LST? | observer.local_sidereal_time(time) | | Hour angle of Vega...
apache_point.is_night(time)
presentation.ipynb
bmorris3/gsoc2015
mit
Is Vega above the horizon at time?
apache_point.target_is_up(time, vega)
presentation.ipynb
bmorris3/gsoc2015
mit
Make your own TUI window:
# Local Sidereal time apache_point.local_sidereal_time(time) # Hour angle apache_point.target_hour_angle(time, vega) # Parallactic angle apache_point.parallactic_angle(time, vega)
presentation.ipynb
bmorris3/gsoc2015
mit
Rise/set times Next sunset
sunset = apache_point.sun_set_time(time, which='next') print("{0.jd} = {0.iso}".format(sunset))
presentation.ipynb
bmorris3/gsoc2015
mit
Next rise of Vega
vega_rise = apache_point.target_rise_time(time, vega, which='next') print(vega_rise.iso)
presentation.ipynb
bmorris3/gsoc2015
mit
Next astronomical (-18 deg) twilight
astronomical_twilight = apache_point.twilight_evening_astronomical(time, which='next') print(astronomical_twilight.iso)
presentation.ipynb
bmorris3/gsoc2015
mit
What is that time in local Seattle time (PST)?
# Specify your time zone with `pytz` import pytz my_timezone = pytz.timezone('US/Pacific') astronomical_twilight.to_datetime(my_timezone)
presentation.ipynb
bmorris3/gsoc2015
mit
Constraints Can I observe target(s) given: Time of year, time of night (at "night") Telescope: altitude constraints, i.e. 15-80$^\circ$ altitude Location on Earth Moon separation, illumination Constraints example Let's read in a list of RA/Dec of our targets:
%%writefile targets.txt # name ra_degrees dec_degrees Polaris 37.95456067 89.26410897 Vega 279.234734787 38.783688956 Albireo 292.68033548 27.959680072 Algol 47.042218553 40.955646675 Rigel 78.634467067 -8.201638365 Regulus 152.092962438 11.967208776
presentation.ipynb
bmorris3/gsoc2015
mit
Read in target file to a list of astroplan.FixedTarget objects:
# Read in the table of targets from astropy.table import Table target_table = Table.read('targets.txt', format='ascii') # Create astroplan.FixedTarget objects for each one in the table from astropy.coordinates import SkyCoord import astropy.units as u from astroplan import FixedTarget targets = [FixedTarget(coord=S...
presentation.ipynb
bmorris3/gsoc2015
mit
Initialize astroplan.Observer, observing time window:
from astroplan import Observer from astropy.time import Time subaru = Observer.at_site("Subaru") time_range = Time(["2015-08-01 06:00", "2015-08-01 12:00"])
presentation.ipynb
bmorris3/gsoc2015
mit
Define and compute constraints:
from astroplan import (AltitudeConstraint, AirmassConstraint, AtNightConstraint) # Define constraints: constraints = [AltitudeConstraint(10*u.deg, 80*u.deg), AirmassConstraint(5), AtNightConstraint.twilight_civil()] from astroplan import is_observable, is_always_observable # Com...
presentation.ipynb
bmorris3/gsoc2015
mit
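The `AirmassConstraint(5)` above caps the relative path length through the atmosphere. Away from the horizon, airmass is well approximated by sec(z) where z is the zenith angle; a quick sketch of that relation (the plane-parallel approximation, not astroplan's internal computation):

```python
import math

def airmass(altitude_deg):
    """Plane-parallel airmass approximation: X = sec(z) = 1 / sin(altitude)."""
    return 1.0 / math.sin(math.radians(altitude_deg))

print(airmass(90))            # 1.0 at the zenith
print(round(airmass(30), 2))  # 2.0 at 30 degrees altitude
# AirmassConstraint(5) then roughly corresponds to altitudes above ~11.5 degrees
print(round(math.degrees(math.asin(1 / 5)), 1))
```

So an airmass cap and an altitude floor are two ways of expressing a similar restriction, which is why the constraints list combines them with `AtNightConstraint`.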
Plot celestial sphere positions
%matplotlib inline from astroplan.plots import plot_sky, plot_airmass import numpy as np import matplotlib.pyplot as plt plot_times = time_range[0] + np.linspace(0, 1, 10)*(time_range[1] - time_range[0]) fig = plt.figure(figsize=(12, 6)) ax0 = fig.add_subplot(121, projection='polar') ax1 = fig.add_subplot(122) # Pl...
presentation.ipynb
bmorris3/gsoc2015
mit
Plot finder charts
# This method requires astroquery, wcsaxes from astroplan.plots import plot_finder_image m1 = FixedTarget.from_name('M1') plot_finder_image(m1, survey='DSS');
presentation.ipynb
bmorris3/gsoc2015
mit
Actually it is even easier. There are special operators to add, subtract, multiply, and divide in place on the current variable. += -= *= /=
print(count) count += 5 print(count) count *= 2 print(count)
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Assign 5 to the variable x and then on a new line add 2 to x and store the result back in x. While loops Loops are one of the most powerful features of programming. With a loop you can make a computer do some repetitive task over and over again (is it any wonder that computers are taking our jobs?) The while loop...
feet_of_snow = 0 while (feet_of_snow < 3): print("Snow is falling!") feet_of_snow += 1 print("We got " + str(feet_of_snow) + " feet of snow :(")
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
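One possible answer to the TRY IT above, using the augmented assignment operator from the previous section (the variable name just needs to be x):

```python
x = 5
x += 2    # same as x = x + 2
print(x)  # 7
```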
TRY IT Write out a while loop that prints out the values 1 - 5 Infinite loops One thing you need to be careful of with while loops is the case where the condition never turns false. This means that the loop will keep going and going forever. You'll hear your computer's fan spin up and the program will hang. If you are ...
snowflakes = 0 # This condition is never false so this will run forever while (snowflakes >= 0): snowflakes += 1 # If you ran this by accident, press the square (stop) button to kill the loop # The other common infinite loop is forgetting to update to a counting variable count = 0 while (count < 3): prin...
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
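One way to solve the TRY IT loop above. Note that the counting variable is updated inside the loop body, so the condition eventually turns false and the loop is not infinite:

```python
number = 1
while number <= 5:
    print(number)
    number += 1  # without this update the loop would never end
```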