scipy implementation using scipy.ndimage.generic_filter—the custom callback function is just-in-time compiled by numba
@jit(nopython=True)
def filter_denoise(neighborhood):
    if neighborhood.mean() < 10:
        return neighborhood.min()
    else:
        return neighborhood[13]

def denoise_scipy(a, b):
    for channel in range(2):
        b[channel] = generic_filter(input=a[channel],
                                    function=filter_denoise,
                                    size=(9, 3),
                                    mode='constant')
    return b
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
numba implementation of a universal function via numba.guvectorize
# just removed return statement
def denoise_guvectorize(a, b):
    for channel in range(2):
        for f_band in range(4, a.shape[1] - 4):
            for t_step in range(1, a.shape[2] - 1):
                neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
                if neighborhood.mean() < 10:
                    b[channel, f_band, t_step] = neighborhood.min()
                else:
                    b[channel, f_band, t_step] = neighborhood[4, 1]
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
serial version
denoise_numba = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)', nopython=True)(denoise_guvectorize)
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
parallel version
denoise_parallel = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)', nopython=True, target='parallel')(denoise_guvectorize)
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
Check results: test the implementations on a randomly generated dataset and verify that all the results are the same.
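The pure-python denoise called below is not shown in this excerpt, but the guvectorize cell's comment ("just removed return statement") pins down what it was; a sketch:

```python
def denoise(a, b):
    # same loop nest as denoise_guvectorize, plus the return
    for channel in range(2):
        for f_band in range(4, a.shape[1] - 4):
            for t_step in range(1, a.shape[2] - 1):
                neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
                if neighborhood.mean() < 10:
                    b[channel, f_band, t_step] = neighborhood.min()
                else:
                    b[channel, f_band, t_step] = neighborhood[4, 1]
    return b
```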
size = 100
data = np.random.rand(2, size, int(size*1.5))
data[:, int(size/4):int(size/2), int(size/4):int(size/2)] = 27

result_python = denoise(data, np.zeros_like(data))
result_scipy = denoise_scipy(data, np.zeros_like(data))
result_numba = denoise_numba(data, np.zeros_like(data))
result_parallel = denoise_parallel(data, np.zeros_like(data))
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
check if the different implementations produce the same result
assert np.allclose(result_python, result_scipy) and \
       np.allclose(result_python, result_numba) and \
       np.allclose(result_python, result_parallel)
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
plot results
fig, ax = plt.subplots(2, 2)
fig.set_figheight(8)
fig.set_figwidth(12)
im1 = ax[0, 0].imshow(data[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 0].set_title('data')
im2 = ax[0, 1].imshow(result_python[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 1].set_title('pure python')
im3 = ax[1, 0].imshow(result_scipy[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[1, 0].set_title('scipy')
im4 = ax[1, 1].imshow(result_numba[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[1, 1].set_title('numba')
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
Profile for different data sizes: time the different implementations on datasets of increasing size.
sizes = [30, 50, 100, 200, 400, 800, 1600]
progress_bar = pyprind.ProgBar(iterations=len(sizes), track_time=True, stream=1, monitor=True)

t_python = np.empty_like(sizes, dtype=np.float64)
t_scipy = np.empty_like(sizes, dtype=np.float64)
t_numba = np.empty_like(sizes, dtype=np.float64)
t_parallel = np.empty_like(sizes, dtype=np.float64)

for size in range(len(sizes)):
    progress_bar.update(item_id=sizes[size])
    data = np.random.rand(2, sizes[size], sizes[size])*0.75
    t_1 = %timeit -oq denoise(data, np.zeros_like(data))
    t_2 = %timeit -oq denoise_scipy(data, np.zeros_like(data))
    t_3 = %timeit -oq denoise_numba(data, np.zeros_like(data))
    t_4 = %timeit -oq denoise_parallel(data, np.zeros_like(data))
    t_python[size] = t_1.best
    t_scipy[size] = t_2.best
    t_numba[size] = t_3.best
    t_parallel[size] = t_4.best
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
plot profile results
fig, ax = plt.subplots(figsize=(15, 5))
p1 = ax.loglog(sizes, t_python, color='black', marker='.', label='python')
p2 = ax.loglog(sizes, t_scipy, color='blue', marker='.', label='scipy')
p3 = ax.loglog(sizes, t_numba, color='green', marker='.', label='numba')
p4 = ax.loglog(sizes, t_parallel, color='red', marker='.', label='parallel')
lx = ax.set_xlabel("data array size (2 x n x n elements)")
ly = ax.set_ylabel("time (seconds)")
t1 = ax.set_title("running times of the 'denoise' algorithm")
ax.grid(True, which='major')
l = ax.legend()
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
Download the sequence data. Sequence data for this study are archived on the NCBI Sequence Read Archive (SRA). Below I read in SraRunTable.txt for this project, which contains all of the information we need to download the data. SRA link: http://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP021469
%%bash
## make a new directory for this analysis
mkdir -p empirical_10/fastq/
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
For each ERS (individual), get all of the ERR (sequence file accessions).
## IPython code
import pandas as pd
import numpy as np
import urllib2
import os

## open the SRA run table from github url
url = "https://raw.githubusercontent.com/"+\
      "dereneaton/RADmissing/master/empirical_10_SraRunTable.txt"
intable = urllib2.urlopen(url)
indata = pd.read_table(intable, sep="\t")

## print first few rows
print indata.head()

def wget_download(SRR, outdir, outname):
    """ Python function to get sra data from ncbi and write to
    outdir with a new name using bash call wget """

    ## get output name
    output = os.path.join(outdir, outname+".sra")

    ## create a call string
    call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\
           "ftp://ftp-trace.ncbi.nlm.nih.gov/"+\
           "sra/sra-instant/reads/ByRun/sra/SRR/"+\
           "{}/{}/{}.sra;".format(SRR[:6], SRR, SRR)

    ## call bash script
    ! $call
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
for ID, SRR in zip(indata.Sample_Name_s, indata.Run_s):
    wget_download(SRR, "empirical_10/fastq/", ID)

%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_10/fastq/ empirical_10/fastq/*.sra

## remove .sra files
rm empirical_10/fastq/*.sra

%%bash
ls -lh empirical_10/fastq/
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Note: the data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33; we use that offset in the params file below.
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_10/              ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAG                      ## 6. cutters ' params.txt
sed -i '/## 7. /c\20                         ## 7. N processors ' params.txt
sed -i '/## 9. /c\6                          ## 9. NQual ' params.txt
sed -i '/## 10./c\.85                        ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4                          ## 12. MinCov ' params.txt
sed -i '/## 13./c\10                         ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_10_m4            ## 14. output name ' params.txt
sed -i '/## 18./c\empirical_10/fastq/*.gz    ## 18. data location ' params.txt
sed -i '/## 29./c\2,2                        ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s                      ## 30. output formats ' params.txt
cat params.txt
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Assemble in pyrad
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1

%%bash
sed -i '/## 12./c\2                 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_10_m2   ## 14. output name ' params.txt

%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Results. We are interested in the relationship between the amount of input (raw) data for any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating the samples. Raw data amounts: the average number of raw reads per sample is 1.36M.
import pandas as pd

## read in the data
sdat = pd.read_table("empirical_10/stats/s2.rawedit.txt", header=0, nrows=14)

## print summary stats
print sdat["passed.total"].describe()

## find which sample has the most raw data
maxraw = sdat["passed.total"].max()
print "\nmost raw data in sample:"
print sdat['sample '][sdat['passed.total']==maxraw]
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Look at distributions of coverage. pyrad v.3.0.63 outputs depth information for each sample, which I read in here and plot. First, let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
## read in the s3 results
sdat = pd.read_table("empirical_10/stats/s3.clusters.txt", header=0, nrows=14)

## print summary stats
print "summary of means\n=================="
print sdat['dpt.me'].describe()

## print summary stats
print "\nsummary of std\n=================="
print sdat['dpt.sd'].describe()

## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-sdat['d>5.tot']/sdat["total"]).describe()

## find which sample has the greatest depth of retained loci
max_hiprop = (sdat["d>5.tot"]/sdat["total"]).max()
print "\nhighest coverage in sample:"
print sdat['taxa'][sdat['d>5.tot']/sdat["total"]==max_hiprop]

maxprop = (sdat['d>5.tot']/sdat['total']).max()
print "\nhighest prop coverage in sample:"
print sdat['taxa'][sdat['d>5.tot']/sdat['total']==maxprop]

import numpy as np

## print mean and std of coverage for the highest coverage sample
with open("empirical_10/clust.85/38362_rex.depths", 'rb') as indat:
    depths = np.array(indat.read().strip().split(","), dtype=int)

print "Means for sample 38362_rex"
print depths.mean(), depths.std()
print depths[depths>5].mean(), depths[depths>5].std()
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Plot the coverage for the sample with the highest mean coverage. Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for having too low coverage.
import toyplot
import toyplot.svg
import numpy as np

## read in the depth information for this sample
with open("empirical_10/clust.85/38362_rex.depths", 'rb') as indat:
    depths = np.array(indat.read().strip().split(","), dtype=int)

## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
                   ylabel="N loci",
                   label="dataset10/sample=38362_rex")

## select the loci with depth > 5 (kept)
keeps = depths[depths>5]

## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_10_depthplot.svg")
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Print final stats table
cat empirical_10/stats/empirical_10_m4.stats

%%bash
head -n 10 empirical_10/stats/empirical_10_m2.stats
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Infer ML phylogeny in raxml as an unrooted tree
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
    -w /home/deren/Documents/RADmissing/empirical_10/ \
    -n empirical_10_m4 -s empirical_10/outfiles/empirical_10_m4.phy

%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
    -w /home/deren/Documents/RADmissing/empirical_10/ \
    -n empirical_10_m2 -s empirical_10/outfiles/empirical_10_m2.phy

%%bash
head -n 20 empirical_10/RAxML_info.empirical_10_m4

%%bash
head -n 20 empirical_10/RAxML_info.empirical_10_m2
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Plot the tree in R using ape
%load_ext rpy2.ipython

%%R -h 800 -w 800
library(ape)
tre <- read.tree("empirical_10/RAxML_bipartitions.empirical_10_m4")
ltre <- ladderize(tre)
par(mfrow=c(1,2))
plot(ltre, use.edge.length=F)
nodelabels(ltre$node.label)
plot(ltre, type='u')
emp_nb_Pedicularis.ipynb
dereneaton/RADmissing
mit
Using simple normalisation: f(z)/f(z0) (equation 2 from first draft of GZH paper) $\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$ $\zeta = constant$
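fit_and_plot and the fzeta_* model functions are defined earlier in the notebook and are not shown in this excerpt. As a rough sketch of what the constant-zeta linear model and its fitting wrapper might look like (names and parameterization are hypothetical; this assumes the wrapper uses scipy.optimize.curve_fit and that the extra free parameter is z0):

```python
import numpy as np
from scipy.optimize import curve_fit

def fzeta_lin_mu_none(z, mu, z0, zeta):
    # f/f0 = 1 - zeta * (z - z0), with zeta constant (mu is unused in mu_none variants)
    return 1.0 - zeta * (z - z0)

def fit_and_plot(z, y, mu, model, n_params):
    # freeze mu so curve_fit only sees the free parameters
    f = lambda zz, *p: model(zz, mu, *p)
    popt, pcov = curve_fit(f, z, y, p0=0.1 * np.ones(n_params))
    return popt  # the real helper presumably also plots data vs. model
```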
fit_and_plot(x, yn, mu, fzeta_lin_mu_none, 2)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$ $\zeta = \zeta[0]+\zeta[1] * \mu$
fit_and_plot(x, yn, mu, fzeta_lin_mu_lin, 3)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = constant$
fit_and_plot(x, yn, mu, fzeta_exp_mu_none, 2)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = \zeta[0]+\zeta[1] * \mu$
p = fit_and_plot(x, yn, mu, fzeta_exp_mu_lin, 3)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a}, \zeta_{b} = constant$
fit_and_plot(x, yn, mu, fzeta_qud_mu_none, 3)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
fit_and_plot(x, yn, mu, fzeta_qud_mu_lin, 5)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
fit_and_plot(x, yn, mu, fzeta_cub_mu_none, 4)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $ $\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
fit_and_plot(x, yn, mu, fzeta_cub_mu_lin, 7)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
Using alternative normalisation, as in eqn. 4: (f(z0)-1) / (f(z)-1) $\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$ $\zeta = constant$
fit_and_plot(x, ym, mu, fzeta_lin_mu_none, 2)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$ $\zeta = \zeta[0]+\zeta[1] * \mu$
fit_and_plot(x, ym, mu, fzeta_lin_mu_lin, 3)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = constant$
fit_and_plot(x, ym, mu, fzeta_exp_mu_none, 2)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$ $\zeta = \zeta[0]+\zeta[1] * \mu$
fit_and_plot(x, ym, mu, fzeta_exp_mu_lin, 3)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a}, \zeta_{b} = constant$
fit_and_plot(x, ym, mu, fzeta_qud_mu_none, 3)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
fit_and_plot(x, ym, mu, fzeta_qud_mu_lin, 5)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
fit_and_plot(x, ym, mu, fzeta_cub_mu_none, 4)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
$\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$ $\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $ $\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $ $\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
fit_and_plot(x, ym, mu, fzeta_cub_mu_lin, 7)
python/notebooks/zeta_models_compared.ipynb
willettk/gzhubble
mit
Plotting the phase diagram. To plot a phase diagram, we send our phase diagram object into the PDPlotter class.
# Let's show all phases, including unstable ones
# (show_unstable=0.2 includes entries up to 0.2 eV/atom above the hull)
plotter = PDPlotter(pd, show_unstable=0.2, backend="matplotlib")
plotter.show()
notebooks/2013-01-01-Plotting and Analyzing a Phase Diagram using the Materials API.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
Calculating energy above hull and other phase equilibria properties
import collections

data = collections.defaultdict(list)
for e in entries:
    decomp, ehull = pd.get_decomp_and_e_above_hull(e)
    data["Materials ID"].append(e.entry_id)
    data["Composition"].append(e.composition.reduced_formula)
    data["Ehull"].append(ehull)
    data["Decomposition"].append(" + ".join(["%.2f %s" % (v, k.composition.formula)
                                             for k, v in decomp.items()]))

from pandas import DataFrame

df = DataFrame(data, columns=["Materials ID", "Composition", "Ehull", "Decomposition"])
print(df.head(30))
notebooks/2013-01-01-Plotting and Analyzing a Phase Diagram using the Materials API.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
USE-CASE: Testing Proportions. Is the coin biased? We toss a coin 250 times and get 140 heads and 110 tails.
# we have:
n_h = 140
n_t = 110
observations = (n_h, n_t)
n_observations = n_h + n_t
print observations, n_observations

# We define the null hypothesis and the test statistic
def run_null_hypothesis(n_observations):
    """the model of Null hypothesis"""
    sample = [random.choice('HT') for _ in range(n_observations)]
    df = pd.DataFrame(sample)
    value_counts = df[0].value_counts()
    n_heads = value_counts['H']
    n_tails = value_counts['T']
    return (n_heads, n_tails)

def test_statistic((n_heads, n_tails)):
    """Computes the test statistic"""
    return abs(n_heads - n_tails)

test_stat_H0 = test_statistic(run_null_hypothesis(n_observations))
test_stat_H1 = test_statistic(observations)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1

# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_observations)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1]) / N_ITER
print "The p-value is: ", p_value
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
In the example above, like most of what follows, we used the Monte Carlo approach to evaluate the p-value. Nevertheless, in many cases we can evaluate the p-value analytically, with the frequentist approach. Below is shown the way of getting a p-value using the Probability Mass Function (pmf) of the binomial distribution. The success process (heads is up) follows a binomial distribution X ~ B(n, p), where n is the number of flips and p is the probability of success (heads up) in each flip. From the classical hypothesis test, the p-value corresponds to the probability of getting the effect we see (or an even rarer effect) under the null hypothesis. Here H0 is that the coin is not biased => p = 0.5. So we have to sum up the probabilities (using the pmf) of seeing k = 140 or more heads out of 250 tosses.
from scipy import stats

# P(X >= 140) under H0: sum the binomial pmf over the observed-or-more-extreme tail
p = 0
for k in range(140, 251):
    p += stats.distributions.binom.pmf(k, 250, 0.5)
pval = p
print "The p-value using the frequentist approach is: ", pval
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
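As an aside (not in the original notebook), scipy can compute this binomial tail probability in closed form, which makes a handy cross-check of the Monte Carlo estimate; in older scipy versions this is stats.binom_test, in newer ones stats.binomtest:

```python
from scipy import stats

# one-sided test: P(X >= 140) for X ~ B(250, 0.5)
print(stats.binom_test(140, 250, 0.5, alternative='greater'))
```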
Is the die crooked? We have the frequencies {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}.
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops

def run_null_hypothesis(n_dice_drops):
    """the model of Null hypothesis"""
    dice_values = [1, 2, 3, 4, 5, 6]
    rolls = np.random.choice(dice_values, n_dice_drops, replace=True)
    return np.array(dict(pd.DataFrame(rolls)[0].value_counts()).values())

def test_statistic(dice_frequencies, n_dice_drops):
    """Computes the test statistic"""
    expected_frequencies = np.ones(6) * n_dice_drops / 6.
    return sum(abs(dice_frequencies - expected_frequencies))

test_stat_H0 = test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
test_stat_H1 = test_statistic(observations_frequencies, n_dice_drops)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1

# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
                  for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1]) / N_ITER
print "The p-value is: ", p_value
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
USE-CASE: Testing Difference in Means
d1 = np.random.normal(38.601, 1.42, 1000)
d2 = np.random.normal(38.523, 1.42, 1000)

plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d1, 30, normed=True)
plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d2, 30, normed=True)
# plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
#          np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
#          linewidth=2, color='r')
plt.show()

# one way to model the null hypothesis is by permutation:
# shuffle values of the two distributions and treat them as one
d_all = [i for i in d1] + [i for i in d2]
np.random.shuffle(d_all)
count, bins, ignored = plt.hist(d_all, 30, normed=True)
plt.show()

def run_null_hypothesis(d1, d2):
    """the model of Null hypothesis - treat the two distributions as one"""
    d_all = [i for i in d1] + [i for i in d2]
    np.random.shuffle(d_all)
    return (d_all[:len(d1)], d_all[len(d1):])

def test_statistic(d1, d2):
    """Computes the test statistic"""
    test_stat = abs(np.mean(d1) - np.mean(d2))
    return test_stat

test_stat_H0 = test_statistic(*run_null_hypothesis(d1, d2))
test_stat_H1 = test_statistic(d1, d2)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1

# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(*run_null_hypothesis(d1, d2)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1]) / N_ITER
print "The p-value is: ", p_value

# The p-value here is not small.
# It means that we expect by chance to see an effect as big as the observed about 80% of time.
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
USE-CASE: Testing a Correlation
data = np.random.multivariate_normal([0, 0], [[1, .75], [.75, 1]], 1000)
x = data[:, 0]
y = data[:, 1]
plt.scatter(x, y)

# we can make the null hypothesis model just by shuffling the data of one variable
x2 = x.copy()
np.random.shuffle(x2)
plt.scatter(x2, y)

def run_null_hypothesis(x, y):
    """the model of Null hypothesis - shuffle one variable to break the correlation"""
    x2 = x.copy()
    np.random.shuffle(x2)
    return (x2, y)

def test_statistic(x, y):
    """Computes the test statistic"""
    test_stat = abs(np.corrcoef(x, y)[0][1])
    return test_stat

test_stat_H0 = test_statistic(*run_null_hypothesis(x, y))
test_stat_H1 = test_statistic(x, y)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1

# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(*run_null_hypothesis(x, y)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1]) / N_ITER
print "The p-value is: ", p_value
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
USE-CASE: Testing Proportions with the chi2 test. Above we used total deviation as the test statistic: Sum(abs(observed - expected)). It is more common to use the chi2 statistic: Sum((observed - expected)^2 / expected). Let's see what results we get with the chi2 statistic.
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops

def run_null_hypothesis(n_dice_drops):
    """the model of Null hypothesis"""
    dice_values = [1, 2, 3, 4, 5, 6]
    rolls = np.random.choice(dice_values, n_dice_drops, replace=True)
    return np.array(dict(pd.DataFrame(rolls)[0].value_counts()).values())

def test_statistic(dice_frequencies, n_dice_drops):
    """Computes the test statistic"""
    expected_frequencies = np.ones(6) * n_dice_drops / 6.
    return sum((dice_frequencies - expected_frequencies)**2 / expected_frequencies)

test_stat_H0 = test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
test_stat_H1 = test_statistic(observations_frequencies, n_dice_drops)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1

# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
                  for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1]) / N_ITER
print "The p-value is: ", p_value
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
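As a cross-check (an aside, not in the original notebook), scipy implements the closed-form chi-square goodness-of-fit test; with no f_exp argument it tests against the uniform distribution, which is exactly the fair-die hypothesis:

```python
from scipy import stats

# chi2 goodness-of-fit of the observed roll counts against a fair die
result = stats.chisquare([8, 9, 19, 5, 8, 11])
print(result)
```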
We see that the p-value is smaller using the chi2 statistic as the test statistic. => This is a very important point, since it shows that the choice of test statistic affects the p-value quite a lot. USE-CASE: Testing Structures in Histograms, e.g. understand if we have signal over background.
# Lets say we have already a histogram with the bin values below:
x_obs = {1:1, 2:2, 3:2, 4:0, 5:3, 6:1, 7:1, 8:2, 9:5, 10:6,
         11:1, 12:0, 13:1, 14:2, 15:1, 16:3, 17:1, 18:0, 19:1, 20:0}
x_bgr = {1:1.2, 2:1.8, 3:1.8, 4:1.9, 5:1.9, 6:2, 7:2, 8:2, 9:1.8, 10:1.8,
         11:1.7, 12:1.7, 13:1.7, 14:1.6, 15:1.6, 16:1.6, 17:1.5, 18:1.5, 19:1.1, 20:0.3}
_ = plt.bar(x_obs.keys(), x_obs.values(), color='b')
_ = plt.bar(x_bgr.keys(), x_bgr.values(), alpha=0.6, color='r')

# lets say that the red is what we know as background (e.g. from monte carlo)
# and blue is the observed signal.
# Is this signal statistically significant?
# The H0 would say that both those distributions come from the same process.
# So we can construct the H0 model by adding those values and then splitting them in two parts.
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
Let's focus only on bin 9, with signal value = 5. How likely is it to find $n_{obs} = 5$ when the background is 1.8? The number of entries $n$ in a bin can be treated as a Poisson variable with mean $\nu_s + \nu_b$. In this scenario we can calculate the p-value as $P(n \geq n_{obs}) = \sum_{n=n_{obs}}^{\infty} pmf_{poisson}(n; \nu_s=0, \nu_b) = 1 - \sum_{n=0}^{n_{obs}-1} pmf_{poisson}(n; \nu_s=0, \nu_b)$
from scipy import stats

pmf_values = []
N_obs = 5
N_bgr = 1.8
# P(n >= N_obs) = 1 - P(n <= N_obs-1), so sum the pmf for n = 0 .. N_obs-1
for i in range(0, N_obs):
    pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1 - np.sum(pmf_values)
print 'The p-value is ', pval
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
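scipy's Poisson survival function gives the same tail probability in one call (an aside, not in the original notebook): sf(k, mu) is P(n > k), so P(n >= 5) with background 1.8 is:

```python
from scipy import stats

# P(n >= 5 | nu_b = 1.8) = P(n > 4)
print(stats.poisson.sf(4, 1.8))
```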
A point to keep in mind is that the background comes with uncertainty, so we eventually have a range of p-values. In principle we can apply the procedure above to the number of entries in a subset of bins. E.g. in the two bins with the large peak we have $n_{obs}=11$ with expected $\nu_b=3.2$.
from scipy import stats

pmf_values = []
N_obs = 11
N_bgr = 3.2
# sum the pmf for n = 0 .. N_obs-1, as above
for i in range(0, N_obs):
    pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1 - np.sum(pmf_values)
print 'The p-value is ', pval
core/Hypothesis_Testing.ipynb
tsarouch/python_minutes
gpl-2.0
In Caffe, models are specified in separate protobuf files. Additionally, a solver has to be specified, which determines the training parameters. Instantiate the solver and train the network.
solver = caffe.SGDSolver('mnist_solver.prototxt')
solver.net.forward()

niter = 2500
test_interval = 100
# losses will also be stored in the log
train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 10))

# the main solver loop
for it in range(niter):
    solver.step(1)  # SGD by Caffe

    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data

    # store the output on the first test batch
    # (start the forward pass at conv1 to avoid loading new data)
    solver.test_nets[0].forward(start='conv2d_1')
    output[it] = solver.test_nets[0].blobs['dense_2'].data[:8]

    # run a full test every so often
    # (Caffe can also do this for us and write to a log, but we show here
    #  how to do it directly in Python, where more complicated things are easier.)
    if it % test_interval == 0:
        print('Iteration', it, 'testing...')
        correct = 0
        test_iter = 100
        for test_it in range(test_iter):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['dense_2'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / (64 * test_iter)

_, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(np.arange(niter), train_loss)
ax2.plot(test_interval * np.arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
ax2.set_title('Test Accuracy: {:.2f}'.format(test_acc[-1]))
notebooks/caffe/train.ipynb
Petr-By/qtpyvis
mit
The weights are saved in a .caffemodel file.
solver.net.save('mnist.caffemodel')
notebooks/caffe/train.ipynb
Petr-By/qtpyvis
mit
This little script shows some important aspects of Python syntax.

Comments

Comments in Python start with a "pound", "hash", or number sign #, and anything that follows it up to the end of the line is ignored by the interpreter. That is, you can have comments that take up a whole line, or only part of one. In the example above there are three comments:

```python
# set the midpoint

# make two empty lists
lower = []; upper = []
# or
lower = []; upper = []  # make two empty lists

# split the numbers into lower and upper
```

Python has no way of writing multi-line comments like C does, for example (/* ... */).

"Enter" ends an executable line (a statement)

The line

```python
midpoint = 5
```

This operation is called assignment, and it basically consists of creating a variable and giving it a particular value: 5, in this case. Note that there is nothing marking the end of the statement, no {...}, no ;, nothing of the sort (just Enter). This is quite different from programming languages like C or Java, which needed the ; (historical reasons, perhaps?).

However, if for some reason you do need to span more than one line:

```python
x = 1 + 2 + 3 + 4 +\
    5 + 6 + 7 + 8
```

it is also possible to continue on the next line if there are open parentheses, without using the \ operator, like this:

```python
x = (1 + 2 + 3 + 4 +
     5 + 6 + 7 + 8)
```

The Python gods recommend the second method over the continuation symbol \. Anyone want to venture why?

Whitespace matters!

Look at the following code snippet:

```python
for i in range(10):
    if i < midpoint:
        lower.append(i)
    else:
        upper.append(i)
```

There are several things to note here. The first is that there is a conditional (the scope introduced by the if) and a loop (the scope introduced by the for). That is not so important at this point, but it introduces what has been the most controversial feature of Python's syntax: whitespace has semantics!

In other programming languages, a block (scope) is defined explicitly with some symbol. Which symbol defines the scope in the following code?

```c
// C code
for(int i=0; i<100; i++)
  {
    // curly braces indicate code block
    total += i;
  }
```

And in this one:

```go
package main

import "fmt"

func main() {
    sum := 0
    for i := 0; i < 10; i++ {
        sum += i
    }
    fmt.Println(sum)
}
```

In Python, scopes (or code blocks) are determined by indentation:

```python
for i in range(100):
    # indentation indicates code block
    total += i
```

and the scope is always preceded by a : on the previous line. I like how the indentation looks... it is cleaner than the {}, but at the same time it can confuse n00bs. The following two snippets produce different results:

```python
>>> if x < 4:
...     y = x * 2
...     print(x)
```

```python
>>> if x < 4:
...     y = x * 2
... print(x)
```

The code in the first snippet will execute print(x) only if the value of x is less than 4, while the second will execute it no matter the value of x. I find the code with whitespace more readable than with curlies, don't you?

Finally, the exact number of spaces is not important. It only has to be consistent; you cannot keep switching between, say, 2 and 4 spaces within a script. The convention is to use 4 spaces (and never tabs), and that is the convention we use in these notebooks. (Even though I like 2 spaces in C :( ). Whitespace inside lines has no effect, only at the beginning of the line.
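For reference, the complete script the cell above analyzes probably looked something like this (a reconstruction from the comments and the for loop quoted above; the prints at the end are a guess):

```python
# set the midpoint
midpoint = 5

# make two empty lists
lower = []; upper = []

# split the numbers into lower and upper
for i in range(10):
    if i < midpoint:
        lower.append(i)
    else:
        upper.append(i)

print("lower:", lower)
print("upper:", upper)
```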
The following are equivalent:

```python
x=1+2
x = 1 + 2
x        =      1    +              2
```

Obviously, abusing this flexibility of the language hurts the readability of the code. The third line looks pretty dreadful, the first one less so, and the middle one is the one that makes the most sense (to me). Compare, for example,

```python
x=10**-2
```

with

```python
x = 10 ** -2
```

In fact, the suggestion is to put spaces around binary operators.

Parentheses

Parentheses are for grouping terms and for calling functions with parameters. First, they are used to group the terms of mathematical operators:
print(2*(3+4))
print(2*3+4)
print((2*3)+4)
clases/02-Sintaxis-de-Python.ipynb
leoferres/prograUDD
mit
Parentheses are also used to pass parameters to a function when it is called. In the following code snippet, the print() function is used to display, for example, the contents of a variable. The function is "called" with a pair of parentheses, with the function's arguments inside.
x = 3
print('first value:', x)
print('second value:', 2)
clases/02-Sintaxis-de-Python.ipynb
leoferres/prograUDD
mit
Some functions are called with no arguments and act on the object they belong to. The parentheses must still be used, even when the function takes no arguments.
L = [4,2,3,1]
L.sort()
print(L)
clases/02-Sintaxis-de-Python.ipynb
leoferres/prograUDD
mit
Make some data
n = 1000
p = 10
X = np.random.standard_normal((n,p))
X.shape

A = np.random.random((p,1))
A

y = X @ A
y.shape
notebooks/linear_model.ipynb
cbare/Etudes
apache-2.0
Too easy
model = LinearRegression().fit(X, y)
model

X_test = np.random.standard_normal((n,p))
y_test = X_test @ A

from sklearn.metrics import r2_score

y_pred = model.predict(X_test)
r2_score(y_test, y_pred)

plt.scatter(y_test, y_pred, color='#3033ff30')
plt.show()
notebooks/linear_model.ipynb
cbare/Etudes
apache-2.0
Adding Noise
noise = 1/2
X_train = X + np.random.normal(loc=0, scale=noise, size=(n,p))
y_train = y + np.random.normal(loc=0, scale=noise, size=(n,1))

model = LinearRegression().fit(X_train, y_train)

X_test_noisy = X_test + np.random.normal(loc=0, scale=noise, size=(n,p))
y_test_noisy = y_test + np.random.normal(loc=0, scale=noise, size=(n,1))

y_pred = model.predict(X_test_noisy)
r2_score(y_test, y_pred)

plt.scatter(y_test_noisy, y_pred, color='#3033ff30')
plt.show()

import itertools

def learning_curve(A, noise=1/3):
    p = A.shape[0]
    results = []
    n_train_seq = itertools.chain.from_iterable(itertools.repeat(x, 10)
                                                for x in range(20, 500, 20))
    for n in n_train_seq:
        X = np.random.standard_normal((n,p))
        y = X @ A
        X_train = X + np.random.normal(loc=0, scale=noise, size=(n,p))
        y_train = y + np.random.normal(loc=0, scale=noise, size=(n,1))

        model = LinearRegression().fit(X_train, y_train)

        n_test = 1000
        X_test = np.random.standard_normal((n_test,p))
        y_test = X_test @ A
        X_test_noisy = X_test + np.random.normal(loc=0, scale=noise, size=(n_test,p))
        y_test_noisy = y_test + np.random.normal(loc=0, scale=noise, size=(n_test,1))

        y_pred = model.predict(X_test_noisy)
        results.append((n, r2_score(y_test_noisy, y_pred)))
    return np.array(results)

lc = learning_curve(A, noise=1/3)

from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

X_lc = lc[:,0:1]
y_lc = lc[:,1]

degree = 3
lc_model = make_pipeline(PolynomialFeatures(degree), Ridge())
lc_model.fit(X_lc, y=y_lc)
lc_y_plot = lc_model.predict(X_lc)

plt.scatter(lc[:,0], lc[:,1], color='#3033ff30')
plt.plot(lc[:,0], lc_y_plot, color='teal', linewidth=2, label="degree %d" % degree)
plt.title('r-squared as a function of n')
plt.show()
notebooks/linear_model.ipynb
cbare/Etudes
apache-2.0
Transformed features. Let's make the problem harder. Let's say there are 10 true features that are linearly related to our target variable. We don't necessarily get to observe those, but we can measure 10 other features. These might be combinations of the original features with more or less noise added. Some variables are totally hidden.
def tr(v, extra_noise):
    a,b,c,d,e,f,g,h,i,j = v
    super_noisy = np.random.normal(loc=0, scale=extra_noise, size=None)
    return (a+b, b*c, (c + d + e)/3, d + i/10, e, f, g+super_noisy, h + i/5, h + c/3, 0)

noise = 1/5
X_tr_train = np.apply_along_axis(tr, axis=1, arr=X, extra_noise=2) \
             + np.random.normal(loc=0, scale=noise, size=(n,p))

model = LinearRegression().fit(X_tr_train, y_train)

X_tr_test = np.apply_along_axis(tr, axis=1, arr=X_test, extra_noise=1) \
            + np.random.normal(loc=0, scale=noise, size=(n,p))
y_pred = model.predict(X_tr_test)
r2_score(y_test, y_pred)

plt.scatter(y_test, y_pred, color='#3033ff30')
plt.show()

model.coef_
A

from pandas.plotting import scatter_matrix

# `letters` comes from an earlier cell (e.g. string.ascii_letters)
df = pd.DataFrame(X_tr_train, columns=list(letters[:10]))
df['y'] = y_train
df.shape
df.head()
x = scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal='kde')

df = pd.DataFrame(X, columns=list(letters[:10]))
df['y'] = y_train
df.shape
x = scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal='kde')
notebooks/linear_model.ipynb
cbare/Etudes
apache-2.0
1.1 Required Modules

- numpy: NumPy is the fundamental package for scientific computing in Python.
- pytorch: End-to-end deep learning platform.
- torchvision: This package consists of popular datasets, model architectures, and common image transformations for computer vision.
- tensorflow: An open source machine learning framework.
- tensorboard: A suite of visualization tools to make training easier to understand, debug, and optimize TensorFlow programs.
- tensorboardX: Tensorboard for Pytorch.
- matplotlib: A Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.

1.2 Common Setup
# Load all necessary modules here, for clarity
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# from torchvision.datasets import MNIST
import torchvision
from torchvision import transforms
from torch.optim import lr_scheduler
# from tensorboardX import SummaryWriter
from collections import OrderedDict
import matplotlib.pyplot as plt
# from tqdm import tqdm

# Whether to put data in GPU according to GPU is available or not
# cuda = torch.cuda.is_available()

# In case the default gpu does not have enough space, you can choose which device to use
# torch.cuda.set_device(device)  # device: id

# Since gpu in lab is not enough for you guys, we prefer cpu computation
cuda = torch.device('cpu')
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
2. Classification Model

We will define a simple feedforward neural network to classify MNIST.

2.1 Short introduction of MNIST

The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The MNIST database contains 60,000 training images and 10,000 testing images. Each class has about 6,000 training images and 1,000 test images. Each image is 28x28. And they look like the images below.

2.2 Define A FeedForward Neural Network

We will define a feedforward neural network with 3 hidden layers. Each layer is followed by an activation function; we will try sigmoid and relu respectively. For simplicity, each hidden layer has an equal number of neurons. In reality, however, we would apply different amounts of neurons in different hidden layers.

2.2.1 Activation Function

There are many useful activation functions and you can choose one of them to use. Usually we use relu as our network activation function.

2.2.1.1 ReLU

Applies the rectified linear unit function element-wise:

\begin{equation} ReLU(x) = max(0, x) \end{equation}

2.2.1.2 Sigmoid

Applies the element-wise function:

\begin{equation} Sigmoid(x)=\frac{1}{1+e^{-x}} \end{equation}

2.2.2 Network's Input and Output

Inputs: for every batch, [batchSize, channels, height, width] -> [B,C,H,W]

Outputs: prediction scores for each image, e.g. [0.001, 0.0034, ..., 0.3], with shape [batchSize, classes]

Network structure:

```
Inputs               Linear/Function       Output
[128, 1, 28, 28]  -> Linear(28*28, 100) -> [128, 100]   # first hidden layer
                  -> ReLU               -> [128, 100]   # relu activation function, maybe sigmoid
                  -> Linear(100, 100)   -> [128, 100]   # second hidden layer
                  -> ReLU               -> [128, 100]   # relu activation function, maybe sigmoid
                  -> Linear(100, 100)   -> [128, 100]   # third hidden layer
                  -> ReLU               -> [128, 100]   # relu activation function, maybe sigmoid
                  -> Linear(100, 10)    -> [128, 10]    # classification layer
```
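As a quick sanity check of the two activations (an aside, not part of the original notebook), both can be applied to a tensor directly:

```python
import torch

t = torch.tensor([-2.0, 0.0, 3.0])
print(torch.relu(t))     # tensor([0., 0., 3.])
print(torch.sigmoid(t))  # approximately tensor([0.1192, 0.5000, 0.9526])
```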
class FeedForwardNeuralNetwork(nn.Module):
    """
    Inputs               Linear/Function       Output
    [128, 1, 28, 28]  -> Linear(28*28, 100) -> [128, 100]   # first hidden layer
                      -> ReLU               -> [128, 100]   # relu activation function, maybe sigmoid
                      -> Linear(100, 100)   -> [128, 100]   # second hidden layer
                      -> ReLU               -> [128, 100]   # relu activation function, maybe sigmoid
                      -> Linear(100, 100)   -> [128, 100]   # third hidden layer
                      -> ReLU               -> [128, 100]   # relu activation function, maybe sigmoid
                      -> Linear(100, 10)    -> [128, 10]    # classification layer
    """
    def __init__(self, input_size, hidden_size, output_size, activation_function='RELU'):
        super(FeedForwardNeuralNetwork, self).__init__()
        self.use_dropout = False
        self.use_bn = False

        self.hidden1 = nn.Linear(input_size, hidden_size)   # Linear function 1: 784 --> 100
        self.hidden2 = nn.Linear(hidden_size, hidden_size)  # Linear function 2: 100 --> 100
        self.hidden3 = nn.Linear(hidden_size, hidden_size)  # Linear function 3: 100 --> 100

        # Linear function 4 (readout): 100 --> 10
        self.classification_layer = nn.Linear(hidden_size, output_size)

        self.dropout = nn.Dropout(p=0.5)  # Drop out with prob = 0.5

        self.hidden1_bn = nn.BatchNorm1d(hidden_size)  # Batch Normalization
        self.hidden2_bn = nn.BatchNorm1d(hidden_size)
        self.hidden3_bn = nn.BatchNorm1d(hidden_size)

        # Non-linearity
        if activation_function == 'SIGMOID':
            self.activation_function1 = nn.Sigmoid()
            self.activation_function2 = nn.Sigmoid()
            self.activation_function3 = nn.Sigmoid()
        elif activation_function == 'RELU':
            self.activation_function1 = nn.ReLU()
            self.activation_function2 = nn.ReLU()
            self.activation_function3 = nn.ReLU()

    def forward(self, x):
        """Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Args:
            x: [batch_size, channel, height, width], input for network
        Returns:
            out: [batch_size, n_classes], output from network
        """
        x = x.view(x.size(0), -1)  # flatten x into [128, 784]

        out = self.hidden1(x)
        out = self.activation_function1(out)  # Non-linearity 1
        if self.use_bn == True:
            out = self.hidden1_bn(out)

        out = self.hidden2(out)
        out = self.activation_function2(out)
        if self.use_bn == True:
            out = self.hidden2_bn(out)

        out = self.hidden3(out)
        if self.use_bn == True:
            out = self.hidden3_bn(out)
        out = self.activation_function3(out)

        if self.use_dropout == True:
            out = self.dropout(out)

        out = self.classification_layer(out)
        return out

    def set_use_dropout(self, use_dropout):
        """Whether to use dropout. Auxiliary function for our exp, not necessary.
        Args:
            use_dropout: True, False
        """
        self.use_dropout = use_dropout

    def set_use_bn(self, use_bn):
        """Whether to use batch normalization. Auxiliary function for our exp, not necessary.
        Args:
            use_bn: True, False
        """
        self.use_bn = use_bn

    def get_grad(self):
        """Return average grad for hidden2, hidden3. Auxiliary function for our exp, not necessary."""
        hidden2_average_grad = np.mean(np.sqrt(np.square(self.hidden2.weight.grad.detach().numpy())))
        hidden3_average_grad = np.mean(np.sqrt(np.square(self.hidden3.weight.grad.detach().numpy())))
        return hidden2_average_grad, hidden3_average_grad
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
3. Training

We define the training function here. Additionally, hyper-parameters, the loss function, and the metric are included here too.

3.1 Pre-set hyper-parameters

We set hyper-parameters like below. The hyper-parameters include the following:

- learning rate: usually we start from a fairly big lr like 1e-1, 1e-2, or 1e-3, and lower the lr as the epochs proceed.
- n_epochs: the number of training epochs must be set large enough so the model has enough time to converge. Usually, we set a fairly big epoch count the first time we train.
- batch_size: usually, a bigger batch size means better usage of the GPU, and the model needs fewer epochs to converge. Powers of 2 are used, e.g. 2, 4, 8, 16, 32, 64, 128, 256.
### Hyper parameters
batch_size = 128      # batch size is 128
n_epochs = 5          # train for 5 epochs
learning_rate = 0.01  # learning rate is 0.01
input_size = 28*28    # input image has size 28x28
hidden_size = 100     # hidden neurons is 100 for each layer
output_size = 10      # classes of prediction
l2_norm = 0           # not to use l2 penalty
dropout = False       # not to use dropout
get_grad = False      # not to obtain grad

# create a model object
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)

# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()

# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
3.2 Initialize model parameters

Pytorch provides default initialization (uniform initialization) for linear layers, but there are still other useful initialization methods. Read more about initialization from this link.

- torch.nn.init.normal_
- torch.nn.init.uniform_
- torch.nn.init.constant_
- torch.nn.init.eye_
- torch.nn.init.xavier_uniform_
- torch.nn.init.xavier_normal_
- torch.nn.init.kaiming_uniform_
def show_weight_bias(model):
    """Show some weights and bias distribution every layers in model.
    !!YOU CAN READ THIS CODE LATER!!
    """
    # Create a figure and a set of subplots
    fig, axs = plt.subplots(2, 3, sharey=False, tight_layout=True)

    # weight and bias for every hidden layer
    h1_w = model.hidden1.weight.detach().numpy().flatten()
    h1_b = model.hidden1.bias.detach().numpy().flatten()
    h2_w = model.hidden2.weight.detach().numpy().flatten()
    h2_b = model.hidden2.bias.detach().numpy().flatten()
    h3_w = model.hidden3.weight.detach().numpy().flatten()
    h3_b = model.hidden3.bias.detach().numpy().flatten()

    axs[0,0].hist(h1_w)
    axs[0,1].hist(h2_w)
    axs[0,2].hist(h3_w)
    axs[1,0].hist(h1_b)
    axs[1,1].hist(h2_b)
    axs[1,2].hist(h3_b)

    # set title for every sub plot
    axs[0,0].set_title('hidden1_weight')
    axs[0,1].set_title('hidden2_weight')
    axs[0,2].set_title('hidden3_weight')
    axs[1,0].set_title('hidden1_bias')
    axs[1,1].set_title('hidden2_bias')
    axs[1,2].set_title('hidden3_bias')

# Show default initialization for every hidden layer by pytorch
# it's uniform distribution
show_weight_bias(model)

# If you want to use another initialization method, you can use the code below
# and define your initialization there
def weight_bias_reset(model):
    """Custom initialization, you can use your favorable initialization method."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            # initialize linear layer with mean and std
            mean, std = 0, 0.1

            # Initialization method
            torch.nn.init.normal_(m.weight, mean, std)
            torch.nn.init.normal_(m.bias, mean, std)

            # Another way to initialize
            # m.weight.data.normal_(mean, std)
            # m.bias.data.normal_(mean, std)

weight_bias_reset(model)  # reset parameters for each hidden layer
show_weight_bias(model)   # show weight and bias distribution, normal distribution now.
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
Homework 1: Use torch.nn.init.constant_, torch.nn.init.xavier_uniform_, and torch.nn.init.kaiming_uniform_ to rewrite the initialization function, initialize the model with each corresponding function, and use show_weight_bias to display the parameter distributions of the model's hidden layers. There should be 6 cells of answers here.
def weight_bias_reset_constant(model):
    """Constant initialization"""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            val = 0.1
            torch.nn.init.constant_(m.weight, val)
            torch.nn.init.constant_(m.bias, val)

weight_bias_reset_constant(model)
show_weight_bias(model)

def weight_bias_reset_xavier_uniform(model):
    """xavier_uniform, gain=1"""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            gain = 1
            torch.nn.init.xavier_uniform_(m.weight, gain)
            # xavier init is defined for 2-d weight tensors, so the 1-d bias is left as-is
            # torch.nn.init.xavier_uniform_(m.bias, gain)

weight_bias_reset_xavier_uniform(model)
show_weight_bias(model)

def weight_bias_reset_kaiming_uniform(model):
    """kaiming_uniform, a=0, mode='fan_in', nonlinearity='relu'"""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            a = 0
            torch.nn.init.kaiming_uniform_(m.weight, a=a, mode='fan_in', nonlinearity='relu')
            # torch.nn.init.kaiming_uniform_(m.bias, a=a, mode='fan_in', nonlinearity='relu')

weight_bias_reset_kaiming_uniform(model)
show_weight_bias(model)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
3.3 Repeat over a certain number of epochs

Shuffle the whole training data:

```
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
```

For each mini-batch of data:

- load the mini-batch data:

```
for batch_idx, (data, target) in enumerate(train_loader):
    ...
```

- compute the gradient of the loss over the parameters:

```
output = net(data)              # make prediction
loss = loss_fn(output, target)  # compute loss
loss.backward()                 # compute gradient of loss over parameters
```

- update the parameters with gradient descent:

```
optimizer.step()  # update parameters with gradient descent
```

3.3.1 Shuffle whole training data

3.3.1.1 Data Loading

Please pay attention to data augmentation (a sketch follows below). Read more data augmentation methods from this link.

- torchvision.transforms.RandomVerticalFlip
- torchvision.transforms.RandomHorizontalFlip
- ...
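For illustration (an aside, not used in the cells below; flips are of limited value for digit images), augmentation transforms would simply be prepended to the training pipeline:

```python
from torchvision import transforms

augmented_train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # flip left-right with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
```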
# define method of preprocessing data for evaluating
train_transform = transforms.Compose([
    transforms.ToTensor(),  # Convert a PIL Image or numpy.ndarray to tensor.
    # Normalize a tensor image with mean 0.1307 and standard deviation 0.3081
    transforms.Normalize((0.1307,), (0.3081,))
])

test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

# use MNIST provided by torchvision
# torchvision.datasets provide MNIST dataset for classification
train_dataset = torchvision.datasets.MNIST(root='./data',
                                           train=True,
                                           transform=train_transform,
                                           download=True)
test_dataset = torchvision.datasets.MNIST(root='./data',
                                          train=False,
                                          transform=test_transform,
                                          download=False)

# pay attention to this, train_dataset doesn't load any data
# It just defines some methods and stores some information to preprocess data
train_dataset

# Data loader.
# Combines a dataset and a sampler,
# and provides single- or multi-process iterators over the dataset.
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# functions to show an image
def imshow(img):
    """show some imgs in datasets
    !!YOU CAN READ THIS CODE LATER!!
    """
    npimg = img.numpy()  # convert tensor to numpy
    plt.imshow(np.transpose(npimg, (1, 2, 0)))  # [channel, height, width] -> [height, width, channel]
    plt.show()

# get some random training images by batch
dataiter = iter(train_loader)
images, labels = next(dataiter)  # get a batch of images

# show images
imshow(torchvision.utils.make_grid(images))
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
3.3.2 & 3.3.3 compute gradient of loss over parameters & update parameters with gradient descent
def train(train_loader, model, loss_fn, optimizer, get_grad=False):
    """train model using loss_fn and optimizer. When this function is called,
    model trains for one epoch.
    Args:
        train_loader: train data
        model: prediction model
        loss_fn: loss function to judge the distance between target and outputs
        optimizer: optimize the loss function
        get_grad: True, False
    Returns:
        average_loss: average loss in this epoch
        average_grad2: average grad for hidden 2 in this epoch
        average_grad3: average grad for hidden 3 in this epoch
    """
    # set the module in training mode, affecting modules e.g., Dropout, BatchNorm, etc.
    model.train()

    total_loss = 0
    grad_2 = 0.0  # store sum(grad) for hidden 2 layer
    grad_3 = 0.0  # store sum(grad) for hidden 3 layer

    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()            # clear gradients of all optimized torch.Tensors
        outputs = model(data)            # make predictions
        loss = loss_fn(outputs, target)  # compute loss
        total_loss += loss.item()        # accumulate every batch loss in an epoch
        loss.backward()                  # compute gradient of loss over parameters

        if get_grad == True:
            g2, g3 = model.get_grad()    # get grad for hidden 2 and 3 layer in this batch
            grad_2 += g2                 # accumulate grad for hidden 2
            grad_3 += g3                 # accumulate grad for hidden 3

        optimizer.step()                 # update parameters with gradient descent

    n_batches = batch_idx + 1            # enumerate starts at 0
    average_loss = total_loss / n_batches  # average loss in this epoch
    average_grad2 = grad_2 / n_batches     # average grad for hidden 2 in this epoch
    average_grad3 = grad_3 / n_batches     # average grad for hidden 3 in this epoch

    return average_loss, average_grad2, average_grad3

def evaluate(loader, model, loss_fn):
    """test model's prediction performance on loader. When this function is called,
    model is evaluated.
    Args:
        loader: data for evaluation
        model: prediction model
        loss_fn: loss function to judge the distance between target and outputs
    Returns:
        total_loss
        accuracy
    """
    # context-manager that disables gradient computation
    with torch.no_grad():
        # set the module in evaluation mode
        model.eval()

        correct = 0.0   # count correct predictions
        total_loss = 0  # accumulate loss

        for batch_idx, (data, target) in enumerate(loader):
            outputs = model(data)  # make predictions
            # return the maximum value of each row of the input tensor in the
            # given dimension dim; the second return value is the index location
            # of each maximum value found (argmax)
            _, predicted = torch.max(outputs, 1)
            # Detach: Returns a new Tensor, detached from the current graph.
            # The result will never require gradient.
            correct += (predicted == target).sum().detach().numpy()
            loss = loss_fn(outputs, target)  # compute loss
            total_loss += loss.item()        # accumulate every batch loss in an epoch

        accuracy = correct*100.0 / len(loader.dataset)  # accuracy in an epoch

    return total_loss, accuracy
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
Define function fit, which uses train and evaluate.
def fit(train_loader, val_loader, model, loss_fn, optimizer, n_epochs, get_grad=False):
    """train and val model here; we use train to train the model and
    evaluate to measure prediction performance
    Args:
        train_loader: train data
        val_loader: validation data
        model: prediction model
        loss_fn: loss function to judge the distance between target and outputs
        optimizer: optimize the loss function
        n_epochs: training epochs
        get_grad: Whether to get grad of hidden2 layer and hidden3 layer
    Returns:
        train_accs: accuracy of train n_epochs, a list
        train_losses: loss of n_epochs, a list
    """
    grad_2 = []        # save grad for hidden 2 every epoch
    grad_3 = []        # save grad for hidden 3 every epoch
    train_accs = []    # save train accuracy every epoch
    train_losses = []  # save train loss every epoch

    for epoch in range(n_epochs):  # train for n_epochs
        # train model on training datasets, optimize loss function and update model parameters
        train_loss, average_grad2, average_grad3 = train(train_loader, model, loss_fn, optimizer, get_grad)

        # evaluate model performance on train dataset
        _, train_accuracy = evaluate(train_loader, model, loss_fn)
        message = 'Epoch: {}/{}. Train set: Average loss: {:.4f}, Accuracy: {:.4f}'.format(
            epoch+1, n_epochs, train_loss, train_accuracy)
        print(message)

        # save loss, accuracy, grad
        train_accs.append(train_accuracy)
        train_losses.append(train_loss)
        grad_2.append(average_grad2)
        grad_3.append(average_grad3)

        # evaluate model performance on val dataset
        val_loss, val_accuracy = evaluate(val_loader, model, loss_fn)
        message = 'Epoch: {}/{}. Validation set: Average loss: {:.4f}, Accuracy: {:.4f}'.format(
            epoch+1, n_epochs, val_loss, val_accuracy)
        print(message)

    # Whether to show the grad curves
    if get_grad == True:
        fig, ax = plt.subplots()  # add a set of subplots to this figure
        ax.plot(grad_2, label='Gradient for Hidden 2 Layer')  # plot grad 2
        ax.plot(grad_3, label='Gradient for Hidden 3 Layer')  # plot grad 3
        plt.ylim(top=0.004)
        # place a legend on axes
        legend = ax.legend(loc='best', shadow=True, fontsize='x-large')

    return train_accs, train_losses

def show_curve(ys, title):
    """plot curve for Loss and Accuracy
    !!YOU CAN READ THIS LATER, if you are interested!!
    Args:
        ys: loss or acc list
        title: Loss or Accuracy
    """
    x = np.array(range(len(ys)))
    y = np.array(ys)
    plt.plot(x, y, c='b')
    plt.axis()
    plt.title('{} Curve:'.format(title))
    plt.xlabel('Epoch')
    plt.ylabel('{} Value'.format(title))
    plt.show()
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
Exercise 2 Run the fit function and, based on the training-set accuracy at the end, answer whether the model has been trained to the point of overfitting. Use the provided show_curve function to plot how the loss and accuracy change during training. Hints: Because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine the model and the optimizer. Note that the default initialization is used here.
### Hyper parameters
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs
learning_rate = 0.01 # learning rate is 0.01
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # 100 hidden neurons in each layer
output_size = 10 # classes of prediction
l2_norm = 0 # do not use l2 penalty
dropout = False # do not use dropout
get_grad = False # do not record gradients

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
The model has not overfit: looking at the training output above, the test-set accuracy does not drop as the number of epochs increases.
show_curve(train_accs, 'accuracy') show_curve(train_losses, 'loss')
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
Exercise 3 Set n_epochs to 10 and observe whether the model can overfit the training set; plot the results with show_curve. To make the model overfit the training set within 5 epochs, you can adjust the learning rate appropriately. Choose a suitable learning rate, train the model, and plot with show_curve to verify your learning rate. Hints: Because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine the model and the optimizer. Note that the default initialization is used here.
### Hyper parameters
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs (overridden to 10 below)
learning_rate = 0.01 # learning rate is 0.01
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # 100 hidden neurons in each layer
output_size = 10 # classes of prediction
l2_norm = 0 # do not use l2 penalty
dropout = False # do not use dropout
get_grad = False # do not record gradients

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

# 3.1 Train
n_epochs = 10

train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
Looking at the output, the model still does not overfit even after 10 epochs.
# 3.1 show_curve
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')

# 3.2 Train
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs
learning_rate = 0.7 # larger learning rate, chosen so the model overfits within 5 epochs
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # 100 hidden neurons in each layer
output_size = 10 # classes of prediction
l2_norm = 0 # do not use l2 penalty
dropout = False # do not use dropout
get_grad = False # do not record gradients

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)

# 3.2 show_curve
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
3.4 save model PyTorch provides two ways to save a model. We recommend the method that saves only the parameters (the state_dict), because it is more flexible and does not depend on a fixed model class. When saving parameters, we can save not only the learnable parameters of the model but also the state of the optimizer. A common PyTorch convention is to save models using either a .pt or .pth file extension. Read more about saving and loading from this link
# show parameters in model

# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())

# Print optimizer's state_dict
print("\nOptimizer's state_dict:")
for var_name in optimizer.state_dict():
    print(var_name, "\t", optimizer.state_dict()[var_name])

# save model
save_path = './model.pt'
torch.save(model.state_dict(), save_path)

# load parameters from file
saved_parameters = torch.load(save_path)
print(saved_parameters)

# initialize a model with the saved parameters
new_model = FeedForwardNeuralNetwork(input_size, hidden_size, output_size)
new_model.load_state_dict(saved_parameters)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
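The cell above saves only the model's state_dict; since the text also mentions saving the optimizer's learnable state, here is a minimal sketch (not part of the original notebook) of a combined checkpoint. It assumes model, optimizer, and n_epochs from the earlier cells, and the file name checkpoint.pt is an illustrative placeholder.

# Sketch: save model and optimizer state together so training can resume later.
# 'checkpoint.pt' is an illustrative placeholder file name.
checkpoint = {
    'epoch': n_epochs,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}
torch.save(checkpoint, './checkpoint.pt')

# Restore both states
checkpoint = torch.load('./checkpoint.pt')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch']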
Exercise 4 Use the evaluate function to compute new_model's accuracy and loss on test_loader
# test your model's prediction performance
new_test_loss, new_test_accuracy = evaluate(test_loader, new_model, loss_fn)
message = 'Total loss: {:.4f}, Accuracy: {:.4f}'.format(new_test_loss, new_test_accuracy)
print(message)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
4. Training Advanced 4.1 l2_norm We can minimize the regularization term below by setting weight_decay in the SGD optimizer \begin{equation} L_{norm} = \sum_{i=1}^{m}\theta_{i}^{2} \end{equation} Set l2_norm = 0.01, then train and observe.
### Hyper parameters batch_size = 128 n_epochs = 5 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0.01 # use l2 penalty get_grad = False # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn = torch.nn.CrossEntropyLoss() # l2_norm can be done in SGD optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm) train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
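To make the formula concrete, here is a hedged sketch (not from the original notebook) of writing the penalty into the loss by hand instead of using weight_decay. For plain SGD the two match up to a factor of two in the coefficient: weight_decay lambda adds lambda*theta to the gradient, which is the gradient of (lambda/2)*sum(theta^2). It assumes model and loss_fn from the cell above.

# Sketch: explicit L2 penalty added to the loss.
# Assumes model and loss_fn are defined in the cells above.
l2_lambda = 0.01

def loss_with_l2(outputs, target):
    # sum of squared parameters over the whole model
    l2_term = sum((p ** 2).sum() for p in model.parameters())
    return loss_fn(outputs, target) + l2_lambda * l2_term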
Exercise 5 Consider the effect of the regularization term's share of the loss. Train the model with l2_norm = 1. Hints: Because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine the model and the optimizer. Note that the default initialization is used here.
# Hyper parameters batch_size = 128 n_epochs = 5 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 1 # use l2 penalty get_grad = False # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn = torch.nn.CrossEntropyLoss() # l2_norm can be done in SGD optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm) train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
4.2 dropout During training, dropout randomly zeroes some of the elements of the input tensor with probability p, using samples from a Bernoulli distribution. Each channel is zeroed out independently on every forward call. Hints: Because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine the model and the optimizer. Note that the default initialization is used here.
### Hyper parameters batch_size = 128 n_epochs = 5 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # without using l2 penalty get_grad = False # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn = torch.nn.CrossEntropyLoss() # l2_norm can be done in SGD optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm) # Set dropout to True and probability = 0.5 model.set_use_dropout(True) train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
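A minimal sketch (not part of the original notebook) of how nn.Dropout behaves differently in train and eval mode, which is why calling model.train() and model.eval() matters:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

drop.train()    # training mode: roughly half of the elements are zeroed,
print(drop(x))  # and the survivors are scaled by 1/(1-p) = 2

drop.eval()     # evaluation mode: dropout is the identity
print(drop(x))  # output equals the input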
4.3 batch_normalization Batch normalization is a technique for improving the performance and stability of artificial neural networks \begin{equation} y=\frac{x-\mathrm{E}[x]}{\sqrt{\mathrm{Var}[x]+\epsilon}} \cdot \gamma + \beta, \end{equation} where $\gamma$ and $\beta$ are learnable parameters. Hints: Because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. You can use the following code to redefine the model and the optimizer. Note that the default initialization is used here.
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # without using l2 penalty
get_grad = False

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

model.set_use_bn(True)
print(model.use_bn)  # confirm batch normalization is enabled

train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
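To see the formula in action, a small sketch (not from the original notebook) feeds an un-normalized batch through nn.BatchNorm1d and checks the output statistics:

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=5)
x = torch.randn(128, 5) * 3 + 7   # batch of 128 samples with mean ~7, std ~3

bn.train()                        # use batch statistics, as during training
y = bn(x)

# With the default gamma=1 and beta=0, each feature of the output
# should have mean ~0 and std ~1 across the batch
print(y.mean(dim=0))
print(y.std(dim=0))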
4.4 data augmentation Data augmentation can be made more elaborate to gain better generalization on the test dataset
# only add random horizontal flip train_transform_1 = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor. # Normalize a tensor image with mean and standard deviation transforms.Normalize((0.1307,), (0.3081,)) ]) # only add random crop train_transform_2 = transforms.Compose([ transforms.RandomCrop(size=[28,28], padding=4), transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor. # Normalize a tensor image with mean and standard deviation transforms.Normalize((0.1307,), (0.3081,)) ]) # add random horizontal flip and random crop train_transform_3 = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.RandomCrop(size=[28,28], padding=4), transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor. # Normalize a tensor image with mean and standard deviation transforms.Normalize((0.1307,), (0.3081,)) ]) # reload train_loader using trans train_dataset_1 = torchvision.datasets.MNIST(root='./data', train=True, transform=train_transform_1, download=False) train_loader_1 = torch.utils.data.DataLoader(dataset=train_dataset_1, batch_size=batch_size, shuffle=True) print(train_dataset_1) ### Hyper parameters batch_size = 128 n_epochs = 5 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # without using l2 penalty get_grad = False # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn = torch.nn.CrossEntropyLoss() # l2_norm can be done in SGD optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm) train_accs, train_losses = fit(train_loader_1, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
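To see what the augmentation does to the images, a quick sketch (assuming train_dataset_1 from the cell above and matplotlib) displays a few transformed samples; the random flips are re-sampled on every access:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    img, label = train_dataset_1[i]   # img is a normalized tensor of shape [1, 28, 28]
    ax.imshow(img.squeeze(), cmap='gray')
    ax.set_title(str(label))
    ax.axis('off')
plt.show()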
Exercise 6 Using the provided train_transform_2 and train_transform_3, reload train_loader and train with fit. Hints: Because Jupyter keeps variables in a shared context, the model and optimizer need to be re-declared. Note that the default initialization is used here.
# train_transform_2 batch_size = 128 train_dataset_2 = torchvision.datasets.MNIST(root='./data', train=True, transform=train_transform_2, download=False) train_loader_2 = torch.utils.data.DataLoader(dataset=train_dataset_2, batch_size=batch_size, shuffle=True) n_epochs = 5 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # without using l2 penalty get_grad = False # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn = torch.nn.CrossEntropyLoss() # l2_norm can be done in SGD optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm) train_accs, train_losses = fit(train_loader_2, test_loader, model, loss_fn, optimizer, n_epochs, get_grad) # train_transform_3 batch_size = 128 train_dataset_3 = torchvision.datasets.MNIST(root='./data', train=True, transform=train_transform_3, download=False) train_loader_3 = torch.utils.data.DataLoader(dataset=train_dataset_3, batch_size=batch_size, shuffle=True) n_epochs = 5 learning_rate = 0.01 input_size = 28*28 hidden_size = 100 output_size = 10 l2_norm = 0 # without using l2 penalty get_grad = False # declare a model model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size) # Cross entropy loss_fn = torch.nn.CrossEntropyLoss() # l2_norm can be done in SGD optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm) train_accs, train_losses = fit(train_loader_3, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
5. Visualization of training and validation phase We could use TensorBoard to visualize the training and test phases. You can find an example here 6. Gradient explosion and vanishing We have embedded code that shows the grad for the hidden2 and hidden3 layers. By observing how their grads change, we can see whether the gradient is normal or not. To plot grad changes, you need to set get_grad=True in the fit function
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # do not use l2 penalty
get_grad = True

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
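model.get_grad() is defined elsewhere in this notebook; purely as a hypothetical sketch, a probe like it can read the .grad attribute that backward() populates. The layer attribute names below are assumptions, not the notebook's actual API:

# Hypothetical gradient probe; the real model.get_grad() may differ.
def mean_abs_grad(layer):
    """Mean absolute gradient of a linear layer's weights after backward()."""
    return layer.weight.grad.abs().mean().item()

# Example usage after loss.backward(), with assumed attribute names:
# g2 = mean_abs_grad(model.hidden2)
# g3 = mean_abs_grad(model.hidden3)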
6.1.1 Gradient Vanishing Set learning_rate = 1e-10
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 1e-10
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # do not use l2 penalty
get_grad = True

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad=get_grad)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
6.1.2 Gradient Explosion 6.1.2.1 Learning rate Set learning_rate = 10
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 10
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # do not use l2 penalty
get_grad = True

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad=True)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
6.1.2.2 Normalization for input data 6.1.2.3 Unsuitable weight initialization
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 1
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # do not use l2 penalty
get_grad = True

# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)

# re-initialize parameters with an unsuitably wide distribution
def wrong_weight_bias_reset(model):
    """Initialize the model's parameters from a normal distribution with mean=0, std=1,
    which is too wide for this network.
    """
    for m in model.modules():
        if isinstance(m, nn.Linear):
            # initialize each linear layer's weight and bias with this mean and std
            mean, std = 0, 1 
            
            # Initialization method
            torch.nn.init.normal_(m.weight, mean, std)
            torch.nn.init.normal_(m.bias, mean, std)

wrong_weight_bias_reset(model)
show_weight_bias(model)
fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad=True)
Homework/Principles of Artificial Neural Networks/Week 4 Training Issues 1/TrainingIssues.ipynb
MegaShow/college-programming
mit
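For contrast with the deliberately poor N(0, 1) reset above, here is a sketch (not part of the original notebook) of a more suitable scheme using Xavier initialization, which keeps activation variance roughly constant across layers:

import torch.nn as nn

def xavier_weight_bias_reset(model):
    """Sketch: re-initialize linear layers with Xavier/Glorot weights and zero biases."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            nn.init.zeros_(m.bias)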
astroplan Plan for everything but the clouds ☁️ Brett Morris with Jazmin Berlanga Medina, Christoph Deil, Eric Jeschke, Adrian Price-Whelan, Erik Tollerud Getting started bash pip install astropy astroplan echo "Optionally:" pip install wcsaxes astroquery Outline Background: astropy astroplan basics <img src="https://raw.githubusercontent.com/astropy/astropy-logo/master/generated/astropy_powered.png" width="300"> Background Open source Python astronomical toolkit Generic and powerful astropy-affiliated packages assemble more specific functionality astropy: RA/Dec -> Alt/Az
# Altitude-azimuth frame: from astropy.coordinates import SkyCoord, EarthLocation, AltAz import astropy.units as u from astropy.time import Time # Specify location of Apache Point Observatory with astropy.coordinates.EarthLocation apache_point = EarthLocation.from_geodetic(-105.82*u.deg, 32.78*u.deg, 2798*u.m) # Specify star's location with astropy.coordinates.SkyCoord vega_icrs = SkyCoord(ra=279.235416*u.deg, dec=38.78516*u.deg) # Initialize an altitude/azimuth frame at the present time time = Time.now() altaz_frame = AltAz(obstime=time, location=apache_point) # Get Vega's alt/az position now vega_altaz = vega_icrs.transform_to(altaz_frame) print("Vega (altitude, azimuth) [deg, deg] at {}:\n\t({}, {})" .format(time, vega_altaz.alt.degree, vega_altaz.az.degree))
presentation.ipynb
bmorris3/gsoc2015
mit
astropy: Get coordinates of the Sun
# Where is the sun right now? from astropy.coordinates import get_sun sun = get_sun(time) print(sun)
presentation.ipynb
bmorris3/gsoc2015
mit
astroplan v0.1 Open source in Python astropy powered Get (alt/az) positions of targets at any time, from any observatory Can I observe these targets given some constraints (airmass, moon separation, etc.)? astroplan basics astroplan.Observer: contains information about an observer's location, environment on the Earth
from astroplan import Observer # Construct an astroplan.Observer at Apache Point Observatory apache_point = Observer.at_site("Apache Point") apache_point = Observer.at_site("APO") # also works print(apache_point.location.to_geodetic())
presentation.ipynb
bmorris3/gsoc2015
mit
astroplan basics astroplan.FixedTarget: contains information about celestial objects with no (slow) proper motion
from astroplan import FixedTarget # Construct an astroplan.FixedTarget for Vega vega = FixedTarget.from_name("Vega") # (with internet access) # # (without internet access) # vega_icrs = SkyCoord(ra=279.235416*u.deg, dec=38.78516*u.deg) # vega = FixedTarget(coord=vega_icrs, name="Vega") vega_altaz = apache_point.altaz(time, vega) print("Vega (altitude, azimuth) [deg, deg] at {}:\n\t({}, {})" .format(time, vega_altaz.alt.degree, vega_altaz.az.degree))
presentation.ipynb
bmorris3/gsoc2015
mit
Convenience methods Is it night at this observatory at time? | Question | Answer | |------------------|---------------| | Is it nighttime? | observer.is_night(time) | | Is Vega up? | observer.target_is_up(time, vega) | | What is the LST? | observer.local_sidereal_time(time) | | Hour angle of Vega? | observer.target_hour_angle(time, vega) |
apache_point.is_night(time)
presentation.ipynb
bmorris3/gsoc2015
mit
Is Vega above the horizon at time?
apache_point.target_is_up(time, vega)
presentation.ipynb
bmorris3/gsoc2015
mit
Make your own TUI window:
# Local Sidereal time apache_point.local_sidereal_time(time) # Hour angle apache_point.target_hour_angle(time, vega) # Parallactic angle apache_point.parallactic_angle(time, vega)
presentation.ipynb
bmorris3/gsoc2015
mit
Rise/set times Next sunset
sunset = apache_point.sun_set_time(time, which='next') print("{0.jd} = {0.iso}".format(sunset))
presentation.ipynb
bmorris3/gsoc2015
mit
Next rise of Vega
vega_rise = apache_point.target_rise_time(time, vega, which='next') print(vega_rise.iso)
presentation.ipynb
bmorris3/gsoc2015
mit
Next astronomical (-18 deg) twilight
astronomical_twilight = apache_point.twilight_evening_astronomical(time, which='next') print(astronomical_twilight.iso)
presentation.ipynb
bmorris3/gsoc2015
mit
What is that time in local Seattle time (PST)?
# Specify your time zone with `pytz` import pytz my_timezone = pytz.timezone('US/Pacific') astronomical_twilight.to_datetime(my_timezone)
presentation.ipynb
bmorris3/gsoc2015
mit
Constraints Can I observe target(s) given: Time of year of night at "night" Telescope: Altitude constraints, i.e. 15-80$^\circ$ altitude Location on Earth Moon separation, illumination Constraints example Let's read in a list of RA/Dec of our targets:
%%writefile targets.txt # name ra_degrees dec_degrees Polaris 37.95456067 89.26410897 Vega 279.234734787 38.783688956 Albireo 292.68033548 27.959680072 Algol 47.042218553 40.955646675 Rigel 78.634467067 -8.201638365 Regulus 152.092962438 11.967208776
presentation.ipynb
bmorris3/gsoc2015
mit
Read in target file to a list of astroplan.FixedTarget objects:
# Read in the table of targets from astropy.table import Table target_table = Table.read('targets.txt', format='ascii') # Create astroplan.FixedTarget objects for each one in the table from astropy.coordinates import SkyCoord import astropy.units as u from astroplan import FixedTarget targets = [FixedTarget(coord=SkyCoord(ra=ra*u.deg, dec=dec*u.deg), name=name) for name, ra, dec in target_table]
presentation.ipynb
bmorris3/gsoc2015
mit
Initialize astroplan.Observer, observing time window:
from astroplan import Observer from astropy.time import Time subaru = Observer.at_site("Subaru") time_range = Time(["2015-08-01 06:00", "2015-08-01 12:00"])
presentation.ipynb
bmorris3/gsoc2015
mit
Define and compute constraints:
from astroplan import (AltitudeConstraint, AirmassConstraint, AtNightConstraint) # Define constraints: constraints = [AltitudeConstraint(10*u.deg, 80*u.deg), AirmassConstraint(5), AtNightConstraint.twilight_civil()] from astroplan import is_observable, is_always_observable # Compute: are targets *ever* observable in the time range? ever_observable = is_observable(constraints, subaru, targets, time_range=time_range) # Compute: are targets *always* observable in the time range? always_observable = is_always_observable(constraints, subaru, targets, time_range=time_range) from astroplan import observability_table table = observability_table(constraints, subaru, targets, time_range=time_range) print(table)
presentation.ipynb
bmorris3/gsoc2015
mit
Plot celestial sphere positions
%matplotlib inline from astroplan.plots import plot_sky, plot_airmass import numpy as np import matplotlib.pyplot as plt plot_times = time_range[0] + np.linspace(0, 1, 10)*(time_range[1] - time_range[0]) fig = plt.figure(figsize=(12, 6)) ax0 = fig.add_subplot(121, projection='polar') ax1 = fig.add_subplot(122) # Plot Vega track plot_sky(targets[1], subaru, plot_times, ax=ax0, style_kwargs=dict(color='b', label='Vega')) plot_airmass(targets[1], subaru, plot_times, ax=ax1, style_kwargs=dict(color='b')) # Plot Albireo track plot_sky(targets[2], subaru, plot_times, ax=ax0, style_kwargs=dict(color='r', label='Albireo')) plot_airmass(targets[2], subaru, plot_times, ax=ax1, style_kwargs=dict(color='r')) fig.subplots_adjust(wspace=0.4);
presentation.ipynb
bmorris3/gsoc2015
mit
Plot finder charts
# This method requires astroquery, wcsaxes from astroplan.plots import plot_finder_image m1 = FixedTarget.from_name('M1') plot_finder_image(m1, survey='DSS');
presentation.ipynb
bmorris3/gsoc2015
mit
Actually it is even easier. There are special operators that add to, subtract from, multiply, and divide the current variable in place. += -= *= /=
print(count) count += 5 print(count) count *= 2 print(count)
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
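The cell above exercises += and *=; for completeness, a quick sketch of the other two operators:

total = 20
total -= 4    # same as total = total - 4
print(total)  # 16
total /= 2    # same as total = total / 2 (division always gives a float)
print(total)  # 8.0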
TRY IT Assign 5 to the variable x and then in a new line add 2 to x and store the result back in x. While loops Loops are one of the most powerful features of programming. With a loop you can make a computer do some repetitive task over and over again (is it any wonder that computers are taking our jobs?) The while loop repeats while a condition is true and stops when that condition is false while (condition): code to run Each time the while loop runs, the condition is checked; if the condition is true, the code inside the loop runs, and then it goes back and checks the condition again.
feet_of_snow = 0 while (feet_of_snow < 3): print("Snow is falling!") feet_of_snow += 1 print("We got " + str(feet_of_snow) + " feet of snow :(")
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Write a while loop that prints out the values 1 - 5 Infinite loops One thing you need to be careful of with while loops is the case where the condition never becomes false. This means that the loop will keep going forever. You'll hear your computer's fan spin up and the program will hang. If you are doing any significant amount of programming, you will eventually write an infinite loop; it cannot be helped. But watch out for them anyway. HINT To stop an infinite loop in a Jupyter notebook, select Kernel->Restart from the menu above.
snowflakes = 0

# This condition is never false, so this loop will run forever
while (snowflakes >= 0):
    snowflakes += 1
    
# If you ran this by accident, press the square (stop) button to kill the loop

# The other common infinite loop comes from forgetting to update a counting variable
count = 0
while (count < 3):
    print("Let it snow!")
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
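The fix for that second loop is simply to update the counter inside the body so the condition eventually becomes false:

count = 0
while (count < 3):
    print("Let it snow!")
    count += 1  # updating the counter is what ends the loop
print("The loop is done")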