Let us first define the function $f(u)$ that computes the right-hand side of our model. We will pass in the array $u$, which contains the different populations, and unpack them individually inside the function:
```python
def f(u):
    """Returns the right-hand side of the epidemic model equations.

    Parameters
    ----------
    u : array of float
        array containing the solution at time n. u is passed in and
        distributed to the different components by calling the
        individual value in u[i]

    Returns
    ...
```
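The cell above is truncated. As a point of reference, a common SVIR right-hand side consistent with the parameter names used later (an assumed form, not necessarily the notebook's exact equations) looks like:

```python
import numpy

# Assumed global parameters (values taken from the notebook's later cell)
e, p, mu, beta, gamma = .1, .75, .02, .002, .5
N = 235   # total population, S0 + V0 + I0 + R0

def f(u):
    """Assumed SVIR right-hand side: births split by newborn vaccination
    rate p, and the vaccine reduces infection risk by a factor (1 - e)."""
    S, V, I, R = u
    dS = mu * N * (1 - p) - beta * S * I - mu * S
    dV = mu * N * p - (1 - e) * beta * V * I - mu * V
    dI = beta * S * I + (1 - e) * beta * V * I - gamma * I - mu * I
    dR = gamma * I - mu * R
    return numpy.array([dS, dV, dI, dR])

# The total population is conserved: the components of f sum to zero
print(numpy.isclose(f(numpy.array([100., 50., 75., 10.])).sum(), 0.0))  # True
```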
cdigangi8/Managing_Epidemics_Model.ipynb
numerical-mooc/assignment-bank-2015
mit
Next we will define the Euler step as a function so that we can call it as we iterate through time.
```python
def euler_step(u, f, dt):
    """Returns the solution at the next time-step using Euler's method.

    Parameters
    ----------
    u : array of float
        solution at the previous time-step.
    f : function
        function to compute the right-hand side of the system of equations.
    dt : float
        ...
    """
    return u + dt * f(u)
```
Now we are ready to set up our initial conditions and solve! We will use a simplified population to start with.
```python
e = .1       # vaccination success rate
p = .75      # newborn vaccination rate
mu = .02     # death rate
beta = .002  # contact rate
gamma = .5   # recovery rate

S0 = 100     # initial susceptibles
V0 = 50      # initial vaccinated
I0 = 75      # initial infected
R0 = 10      # initial recovered
N = S0 + I0 + R0 + V0   # total population (remains constant)
```
Now we will implement our discretization using a for loop to iterate over time. We create a NumPy array $u$ that will hold the values of each component (SVIR) at every time step. We will use a dt of 1 to represent 1 day and iterate over 365 days.
```python
T = 365                        # iterate over 1 year
dt = 1                         # 1 day
N = int(T/dt) + 1              # total number of iterations
t = numpy.linspace(0, T, N)    # time discretization
u = numpy.zeros((N, 4))        # initialize the solution array with zero values
u[0] = [S0, V0, I0, R0]        # set the initial conditions in the solution array

for n in range(N-1):           # Loop thr...
```
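The loop body above is cut off; presumably each pass advances the solution with euler_step. A minimal self-contained sketch of the same time-stepping pattern, using a toy right-hand side (exponential decay) rather than the SVIR model:

```python
import numpy

def euler_step(u, f, dt):
    return u + dt * f(u)

f = lambda u: -0.5 * u          # toy RHS: exponential decay
T, dt = 10, 0.01
nt = int(T/dt) + 1
u = numpy.zeros(nt)
u[0] = 1.0
for n in range(nt - 1):
    u[n+1] = euler_step(u[n], f, dt)   # advance one time step
# u[-1] approximates the exact solution exp(-0.5*10), about 0.0067
```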
Now we use Matplotlib's pyplot library to plot all of our results on the same graph:
```python
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t, u[:,0], color='red', lw=2, label='Susceptible');
pyplot.plot(t, u[:,1], color='green', lw=2, l...
```
The graph is interesting because it exhibits some oscillating behavior. You can see that under the given parameters, the number of infected people drops within the first few days. Notice that the susceptible individuals grow until about 180 days. The return of infection is a result of too many susceptible people in the...
```python
# Changing the following parameters
e = .5       # vaccination success rate
gamma = .1   # recovery rate

S0 = 100     # initial susceptibles
V0 = 50      # initial vaccinated
I0 = 75      # initial infected
R0 = 10      # initial recovered
N = S0 + I0 + R0 + V0   # total population (remains constant)

T = 365      # iterate over 1 year
dt = 1       # 1 day
N ...
```
However, every time we want to examine new parameters we have to go back, change the values within the cell, and re-run our code. This is very cumbersome if we want to examine how different parameters affect our outcome. If only there were some solution we could implement that would allow us to change parameters on t...
```python
from ipywidgets import interact, HTML, FloatSlider
from IPython.display import clear_output, display
```
The cell below gives a quick view of a few of the interactive widgets that are available. Notice that we must define a function (in this case $z$) and hand it to interact together with a value for its parameter $x$; interact passes $x$ into $z$ and picks the widget type from the value.
```python
def z(x):
    print(x)

interact(z, x=True)    # Checkbox
interact(z, x=10)      # Slider
interact(z, x='text')  # Text entry
```
Redefining the Model to Accept Parameters

In order to use ipywidgets and pass parameters into our functions we have to slightly redefine our functions to accept these changing parameters. This will ensure that we don't have to re-run any code and our graph will update as we change parameters! We will start with our funct...
```python
def f(u, init):
    """Returns the right-hand side of the epidemic model equations.

    Parameters
    ----------
    u : array of float
        array containing the solution at time n. u is passed in and
        distributed to the different components by calling the
        individual value in u[i]
    init : array
        ...
```
Now we will change our `euler_step` function, which calls our function $f$, to include the new `init` array that we are passing.
```python
def euler_step(u, f, dt, init):
    return u + dt * f(u, init)
```
In order to make changes to our parameters, we will use slider widgets. Now that we have our functions set up, we will build another function which we will use to update the graph as we move our slider parameters. First we must build the sliders for each parameter. Using the FloatSlider method from ipywidgets, we can s...
```python
# Build a slider for each desired parameter
pSlider = FloatSlider(description='p', min=0, max=1, step=0.1)
eSlider = FloatSlider(description='e', min=0, max=1, step=0.1)
muSlider = FloatSlider(description='mu', min=0, max=1, step=0.005)
betaSlider = FloatSlider(description='beta', min=0, max=.01, step=0.0005)
gammaSlider...
```
Notice that the graph starts with all parameters equal to zero, since that is where each slider begins. We can work around this using conditional statements: if the slider values are equal to zero, fall back to default parameters. Notice that as you change the parameters the graph starts to com...
```python
Disease = [{'name': "Ebola", 'p': 0, 'e': 0, 'mu': .04, 'beta': .005, 'gamma': 0},
           {'name': "Measles", 'p': .9, 'e': .9, 'mu': .02, 'beta': .002, 'gamma': .9},
           {'name': "Tuberculosis", 'p': .5, 'e': .2, 'mu': .06, 'beta': .001, 'gamma': .3}]

# Example
def z(x):
    print(x)

interact(z, x = 'T...
```
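A dropdown could drive the model by mapping disease names to their parameter dicts. A minimal sketch of that lookup, without the widget wiring (the helper `disease_params` is hypothetical, not part of the notebook):

```python
Disease = [{'name': "Ebola", 'p': 0, 'e': 0, 'mu': .04, 'beta': .005, 'gamma': 0},
           {'name': "Measles", 'p': .9, 'e': .9, 'mu': .02, 'beta': .002, 'gamma': .9},
           {'name': "Tuberculosis", 'p': .5, 'e': .2, 'mu': .06, 'beta': .001, 'gamma': .3}]

def disease_params(name):
    """Return the parameter dict for a named disease (hypothetical helper)."""
    return next(d for d in Disease if d['name'] == name)

# interact(update_graph, disease=[d['name'] for d in Disease]) would pass the
# selected name to a plotting callback, which can call disease_params(name).
print(disease_params('Measles')['gamma'])   # 0.9
```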
References

- Scherer, A. and McLean, A., "Mathematical Models of Vaccination", British Medical Bulletin, Volume 62, Issue 1, 2002, Oxford University Press. Online
- Barba, L., "Practical Numerical Methods with Python", George Washington University
- For a good explanation of some of the simpler models and overview of param...
```python
from IPython.core.display import HTML
css_file = 'numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
Init
```python
import os
# Note: you will need to install `rpy2.ipython` and the necessary R packages (see next cell)
%load_ext rpy2.ipython
```

```r
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
```

```python
workDir = os.path.abspath(workDir)
if not os.path.isdir(workDir):
    os.makedirs(workDir)
%cd $workDir
genomeDir = os.path.join(workDi...
```
ipynb/example/2_simulation-shotgun.ipynb
nick-youngblut/SIPSim
mit
Experimental design

- How many gradients?
- Which are labeled treatments & which are controls?

For this tutorial, we'll keep things simple and just simulate one control & one treatment. For the labeled treatment, 34% of the taxa (1 of 3) will incorporate 50% isotope. The script below ("SIPSim incorpConfigExample") is helpf...
```bash
%%bash
source activate SIPSim

# creating example config
SIPSim incorp_config_example \
    --percTaxa 34 \
    --percIncorpUnif 50 \
    --n_reps 1 \
    > incorp.config
```

```python
!cat incorp.config
```
Pre-fractionation communities

What is the relative abundance of taxa in the pre-fractionation samples?
```bash
%%bash
source activate SIPSim

SIPSim communities \
    --config incorp.config \
    ./genomes_rn/genome_index.txt \
    > comm.txt
```

```python
!cat comm.txt
```
Note: "library" = gradient

Simulating gradient fractions

BD size ranges for each fraction (& start/end of the total BD range)
```bash
%%bash
source activate SIPSim

SIPSim gradient_fractions \
    --BD_min 1.67323 \
    --BD_max 1.7744 \
    comm.txt \
    > fracs.txt
```

```python
!head -n 6 fracs.txt
```
Simulating fragments

Simulating shotgun-fragments

- Fragment length distribution: skewed-normal
- Primer sequences (wait... what?)

If you were to simulate amplicons instead of shotgun fragments, you could use something like the following:
```python
# primers = """>515F
# GTGCCAGCMGCCGCGGTAA
# >806R
# GGACTACHVGGGTWTCTAAT
# """
# F = os.path.join(workDir, '515F-806R.fna')
# with open(F, 'w') as oFH:
#     oFH.write(primers)
# print('File written: {}'.format(F))
```
Simulation
```bash
%%bash -s $genomeDir
source activate SIPSim

# skewed-normal
SIPSim fragments \
    $1/genome_index.txt \
    --fp $1 \
    --fld skewed-normal,9000,2500,-5 \
    --flr None,None \
    --nf 1000 \
    --debug \
    --tbl \
    > shotFrags.txt
```

```python
!head -n 5 shotFrags.txt
!tail -n 5 shotFrags.txt
```
Plotting fragments
```r
%%R -w 700 -h 350
df = read.delim('shotFrags.txt')

p = ggplot(df, aes(fragGC, fragLength, color=taxon_name)) +
    geom_density2d() +
    scale_color_discrete('Taxon') +
    labs(x='Fragment G+C', y='Fragment length (bp)') +
    theme_bw() +
    theme(text = element_text(size=16))
plot(p)
```
Note: for information on what's going on in this config file, use the command: SIPSim isotope_incorp -h

Converting fragments to a 2d-KDE

Estimating the joint probability for fragment G+C & length
```bash
%%bash
source activate SIPSim

SIPSim fragment_KDE \
    shotFrags.txt \
    > shotFrags_kde.pkl
```

```python
!ls -thlc shotFrags_kde.pkl
```
Note: The generated list of KDEs (1 per taxon per gradient) is stored in a binary file format. To get a table of length/G+C values, use the command: SIPSim KDE_sample

Adding diffusion

Simulating the BD distribution of fragments as Gaussian distributions. One Gaussian distribution per homogeneous set of DNA molecules (same ...
```bash
%%bash
source activate SIPSim

SIPSim diffusion \
    shotFrags_kde.pkl \
    --np 3 \
    > shotFrags_kde_dif.pkl
```

```python
!ls -thlc shotFrags_kde_dif.pkl
```
Plotting fragment distribution w/ and w/out diffusion

Making a table of fragment values from KDEs
```python
n = 100000
```

```bash
%%bash -s $n
source activate SIPSim

SIPSim KDE_sample -n $1 shotFrags_kde.pkl > shotFrags_kde.txt
SIPSim KDE_sample -n $1 shotFrags_kde_dif.pkl > shotFrags_kde_dif.txt

ls -thlc shotFrags_kde*.txt
```
Plotting the KDE with and without diffusion added
```r
%%R
df1 = read.delim('shotFrags_kde.txt', sep='\t')
df2 = read.delim('shotFrags_kde_dif.txt', sep='\t')

df1$data = 'no diffusion'
df2$data = 'diffusion'
df = rbind(df1, df2) %>%
    gather(Taxon, BD, Clostridium_ljungdahlii_DSM_13528,
           Escherichia_coli_1303, Streptomyces_pratensis_ATCC_33331) %>%
    mutate...
```
Adding diffusive boundary layer (DBL) 'smearing' effects
```bash
%%bash
source activate SIPSim

SIPSim DBL \
    shotFrags_kde_dif.pkl \
    --np 3 \
    > shotFrags_kde_dif_DBL.pkl
```

```python
# viewing DBL logs
!ls -thlc *pkl
```
Adding isotope incorporation

Using the config file produced in the Experimental Design section
```bash
%%bash
source activate SIPSim

SIPSim isotope_incorp \
    --comm comm.txt \
    --np 3 \
    shotFrags_kde_dif_DBL.pkl \
    incorp.config \
    > shotFrags_KDE_dif_DBL_inc.pkl
```

```python
!ls -thlc *.pkl
```
Note: statistics on how much isotope was incorporated by each taxon are listed in "BD-shift_stats.txt"
```r
%%R
df = read.delim('BD-shift_stats.txt', sep='\t')
df
```
Making an OTU table

- Number of amplicon-fragments in each fraction in each gradient
- Assuming a total pre-fractionation community size of 1e7
```bash
%%bash
source activate SIPSim

SIPSim OTU_table \
    --abs 1e7 \
    --np 3 \
    shotFrags_KDE_dif_DBL_inc.pkl \
    comm.txt \
    fracs.txt \
    > OTU.txt
```

```python
!head -n 7 OTU.txt
```
Plotting fragment count distributions
```r
%%R -h 350 -w 750
df = read.delim('OTU.txt', sep='\t')

p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
    geom_area(stat='identity', position='dodge', alpha=0.5) +
    scale_x_continuous(expand=c(0,0)) +
    labs(x='Buoyant density') +
    labs(y='Shotgun fragment counts') +
    facet_grid(library ~ .) +
    theme_...
```
Notes:

- This plot represents the theoretical number of amplicon-fragments at each BD across each gradient, derived from subsampling the fragment BD probability distributions generated in earlier steps.
- The fragment BD distribution of one of the 3 taxa should have shifted in Gradient 2 (the treatment gradient).
- The fra...
```r
%%R -h 350 -w 750
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
    geom_area(stat='identity', position='fill') +
    scale_x_continuous(expand=c(0,0)) +
    scale_y_continuous(expand=c(0,0)) +
    labs(x='Buoyant density') +
    labs(y='Shotgun fragment counts') +
    facet_grid(library ~ .) +
    theme_bw() + ...
```
Adding effects of PCR

This will alter the fragment counts based on the PCR kinetic model of:

Suzuki MT, Giovannoni SJ. (1996). Bias caused by template annealing in the amplification of mixtures of 16S rRNA genes by PCR. Appl Environ Microbiol 62:625-630.
```bash
%%bash
source activate SIPSim

SIPSim OTU_PCR OTU.txt > OTU_PCR.txt
```

```python
!head -n 5 OTU_PCR.txt
!tail -n 5 OTU_PCR.txt
```
Notes

The table is in the same format as the original OTU table, but the counts and relative abundances should be altered.

Simulating sequencing

Sampling from the OTU table
```bash
%%bash
source activate SIPSim

SIPSim OTU_subsample OTU_PCR.txt > OTU_PCR_sub.txt
```

```python
!head -n 5 OTU_PCR_sub.txt
```
Notes

The table is in the same format as the original OTU table, but the counts and relative abundances should be altered.

Plotting
```r
%%R -h 350 -w 750
df = read.delim('OTU_PCR_sub.txt', sep='\t')

p = ggplot(df, aes(BD_mid, rel_abund, fill=taxon)) +
    geom_area(stat='identity', position='fill') +
    scale_x_continuous(expand=c(0,0)) +
    scale_y_continuous(expand=c(0,0)) +
    labs(x='Buoyant density') +
    labs(y='Taxon relative abundances')...
```
Misc

A 'wide' OTU table

If you want to reformat the OTU table to a more standard 'wide' format (as used in Mothur or QIIME):
```bash
%%bash
source activate SIPSim

SIPSim OTU_wide_long -w \
    OTU_PCR_sub.txt \
    > OTU_PCR_sub_wide.txt
```

```python
!head -n 4 OTU_PCR_sub_wide.txt
```
SIP metadata

If you want to make a table of SIP sample metadata:
```bash
%%bash
source activate SIPSim

SIPSim OTU_sample_data \
    OTU_PCR_sub.txt \
    > OTU_PCR_sub_meta.txt
```

```python
!head OTU_PCR_sub_meta.txt
```
Other SIPSim commands

SIPSim -l will list all available SIPSim commands.
```bash
%%bash
source activate SIPSim

SIPSim -l
```
TBtrans is capable of calculating transport in $N\ge 1$ electrode systems. In this example we will explore a 4-terminal graphene GNR cross-bar (one zGNR, the other aGNR) system.
```python
graphene = sisl.geom.graphene(orthogonal=True)
R = [0.1, 1.43]
hop = [0., -2.7]
```
TB_06/run.ipynb
zerothi/ts-tbt-sisl-tutorial
gpl-3.0
Create the two electrodes in $x$ and $y$ directions. We will force the systems to be nano-ribbons, i.e. only periodic along the ribbon. In sisl there are two ways of accomplishing this:

1. Explicitly set the number of auxiliary supercells
2. Add vacuum beyond the orbital interaction ranges

The below code uses the first method....
```python
elec_y = graphene.tile(3, axis=0)
elec_y.set_nsc([1, 3, 1])
elec_y.write('elec_y.xyz')

elec_x = graphene.tile(5, axis=1)
elec_x.set_nsc([3, 1, 1])
elec_x.write('elec_x.xyz')
```
Subsequently we create the electronic structure.
```python
H_y = sisl.Hamiltonian(elec_y)
H_y.construct((R, hop))
H_y.write('ELEC_Y.nc')

H_x = sisl.Hamiltonian(elec_x)
H_x.construct((R, hop))
H_x.write('ELEC_X.nc')
```
Now we have created the electronic structure for the electrodes. All that is needed is the electronic structure of the device region, i.e. the crossing nano-ribbons.
```python
dev_y = elec_y.tile(30, axis=1)
dev_y = dev_y.translate(-dev_y.center(what='xyz'))

dev_x = elec_x.tile(18, axis=0)
dev_x = dev_x.translate(-dev_x.center(what='xyz'))
```
Remove any atoms that are duplicated, i.e. when we overlay these two geometries some atoms are the same.
```python
device = dev_y.add(dev_x)
device.set_nsc([1, 1, 1])

duplicates = []
for ia in dev_y:
    idx = device.close(ia, 0.1)
    if len(idx) > 1:
        duplicates.append(idx[1])
device = device.remove(duplicates)
```
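The same duplicate test can be illustrated in plain NumPy (a sketch independent of sisl, with toy coordinates): for each atom of the first geometry, find overlay atoms closer than a tolerance and mark the extra copy for removal.

```python
import numpy as np

xyz_y = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])   # toy 'dev_y' coordinates
xyz_x = np.array([[0.0, 0.0, 0.0], [0.0, 1.4, 0.0]])   # toy 'dev_x'; first atom overlaps
combined = np.vstack([xyz_y, xyz_x])

duplicates = []
for ia in range(len(xyz_y)):
    d = np.linalg.norm(combined - combined[ia], axis=1)
    idx = np.where(d < 0.1)[0]       # all atoms within 0.1 Ang of atom ia
    if len(idx) > 1:
        duplicates.append(idx[1])    # keep the first copy, drop the second
cleaned = np.delete(combined, duplicates, axis=0)
print(len(combined), len(cleaned))   # 4 3
```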
Can you explain why set_nsc([1, 1, 1]) is called? And if so, is it necessary to do this step? Ensure the lattice vectors are big enough for plotting. Try to convince yourself that the lattice vectors are unimportant for tbtrans in this example. HINT: what is the periodicity?
```python
device = device.add_vacuum(70, 0).add_vacuum(20, 1)
device = device.translate(device.center(what='cell') - device.center(what='xyz'))
device.write('device.xyz')
```
Since this system has 4 electrodes we need to tell tbtrans where the 4 electrodes are in the device. The following lines print out the fdf-lines that are appropriate for each of the electrodes (RUN.fdf is already filled in correctly):
```python
print('elec-Y-1: semi-inf -A2: {}'.format(1))
print('elec-Y-2: semi-inf +A2: end {}'.format(len(dev_y)))
print('elec-X-1: semi-inf -A1: {}'.format(len(dev_y) + 1))
print('elec-X-2: semi-inf +A1: end {}'.format(-1))

H = sisl.Hamiltonian(device)
H.construct([R, hop])
H.write('DEVICE.nc')
```
Exercises

In this example we have more than 1 transmission path. Before you run the below code, which plots all relevant transmissions ($T_{ij}$ for $j>i$), consider whether there are any symmetries, and if so, determine how many different transmission spectra you should expect. Please plot the geometry using your favourite ...
tbt = sisl.get_sile('siesta.TBT.nc')
Make easy function calls for plotting energy-resolved quantities:
```python
from functools import partial

E = tbt.E
Eplot = partial(plt.plot, E)   # shorthand version of the function (simplifies the lines below)

T = tbt.transmission
t12, t13, t14, t23, t24, t34 = T(0, 1), T(0, 2), T(0, 3), T(1, 2), T(1, 3), T(2, 3)

Eplot(t12, label=r'$T_{12}$');
Eplot(t13, label=r'$T_{13}$');
Eplot(t14, label=r'$T_{14}$');
Eplot(t23,...
```
In RUN.fdf we have added the flag TBT.T.All which tells tbtrans to calculate all transmissions, i.e. $i\to j$ for all $i,j \in \{1,2,3,4\}$. This flag is by default False, why? Create 3 plots, each with $T_{1j}$ and $T_{j1}$ for all $j\neq 1$.
```python
# Insert plot of T12 and T21

# Insert plot of T13 and T31

# Insert plot of T14 and T41
```
Considering symmetries, try to figure out which transmissions ($T_{ij}$) are unique. Plot the bulk DOS for the 2 differing electrodes. Plot the spectral DOS injected by all 4 electrodes.
```python
# Helper routines; this makes BDOS(...) == tbt.BDOS(..., norm='atom')
BDOS = partial(tbt.BDOS, norm='atom')
ADOS = partial(tbt.ADOS, norm='atom')
```
Bulk density of states:
```python
Eplot(..., label=r'$BDOS_1$');
Eplot(..., label=r'$BDOS_2$');
plt.ylabel('DOS [1/eV/N]');
plt.xlabel('Energy [eV]');
plt.ylim([0, None]);
plt.legend();
```
Spectral density of states for all electrodes:

- As a final exercise you can explore the details of the density of states for single atoms. Take for instance atom 205 (204 in Python index) which is in both GNRs at the crossing. Feel free to play around with different atoms, subsets of atoms (pass a list), etc.
```python
Eplot(..., label=r'$ADOS_1$');
...
plt.ylabel('DOS [1/eV/N]');
plt.xlabel('Energy [eV]');
plt.ylim([0, None]);
plt.legend();
```
For 2D structures one can easily plot the DOS per atom via a scatter plot in matplotlib. Here is the skeleton code for that; you should select an energy point and figure out how to extract the atom-resolved DOS (you will need to look up the documentation for the ADOS method to figure out which flag to use).
```python
Eidx = tbt.Eindex(...)
ADOS = [tbt.ADOS(i, ....) for i in range(4)]

f, axs = plt.subplots(2, 2, figsize=(10, 10))
a_xy = tbt.geometry.xyz[tbt.a_dev, :2]
for i in range(4):
    A = ADOS[i]
    A *= 100 / A.max()   # normalize to maximum 100 (simply for plotting)
    axs[i // 2][i % 2].scatter(a_xy[:, 0], a_xy[:, 1], A, c=...
```
MECA653: Data processing - analysis of the road-safety database

The objective here is to analyze the data provided by the French Ministry of the Interior on road accidents recorded in 2016. The pandas module will be used extensively.

Sources

Link to data.gouv.fr: https://www.data.gouv.fr/fr/da...
```python
dfc = pd.read_csv('./DATA/caracteristiques_2016.csv')
dfu = pd.read_csv('./DATA/usagers_2016.csv')
dfl = pd.read_csv('./DATA/lieux_2016.csv')
df = pd.concat([dfu, dfc, dfl], axis=1)   # concatenate each table once

dfc.tail()
dfu.head()
dfl.tail()
df.head()
```
doc/Traitement_donnees/.ipynb_checkpoints/TD Traitement de Données-checkpoint.ipynb
lcharleux/numerical_analysis
gpl-2.0
2 - What is the proportion of men/women involved in accidents? Show the result graphically.
```python
# quick-and-dirty method
(h, c) = df[df.sexe == 1].shape
(f, c) = df[df.sexe == 2].shape
(t, c) = df.shape
print('h/t =', h/t)
print('f/t =', f/t)

# pandas method
df["sexe"].value_counts(normalize=True)

fig = plt.figure()
df[df.grav == 2].sexe.value_counts(normalize=True).plot.pie(labels=['Homme', 'Femme'], colors=['r', 'g'], autopct=...
```
2 - What proportion of accidents took place during the day, at night, or at dawn/dusk? Show the result graphically.
```python
dlum = df["lum"].value_counts(normalize=True)
dlum = dlum.sort_index()
dlum

# aggregate the night categories (lum codes 3, 4 and 5) into one
dlum.loc[3] = dlum.loc[3:5].sum()

fig = plt.figure()
dlum.loc[1:3].plot.pie(labels=['Jour', 'Aube/crépuscule', 'Nuit'], colors=['y', 'g', 'b'], autopct='%.2f')
```
3 - Geographic position
```python
df.lat = df.lat/100000
df.long = df.long/100000

dfp = df[df.gps == 'M']
dfp = dfp[['lat', 'long']]
dfp = dfp[(dfp.long != 0.0) & (dfp.lat != 0.0)]
dfp.head()

#fig = plt.figure()
dfp.plot.scatter(x='long', y='lat', s=1);

df[(df.long != 0.0) & (df.lat != 0.0) & (df.gps == 'M')].plot.scatter(x='long', y='lat', s=.5);
```
The total cost is made up of the following elements:

- Equipment cost: from 30,000 up to 50,000
- Spare parts cost for 5 years: from 16,000 to 18,000; each year is sampled separately from the normal distribution
- Maintenance charges for 5 years: annual rate of 12% of the equipment price (not including the spares)
```python
n_years = 5
maint_rate = 0.12
```
costs.ipynb
surfer1-dev/Stochastic_estimation_of_equipment_costs
mit
Total Cost = Equipment + Spares + Maintenance

where

- Spares = Spares for Year 1 + Spares for Year 2 + Spares for Year 3 + Spares for Year 4 + Spares for Year 5
- Maintenance = Equipment * Maintenance Rate * Number of Years

The objective of this simulation is to vary the price of the equipment and spares by drawing samples...
```python
model = Model()
with model:
    # Priors for unknown model parameters
    equip = Normal('equip', mu=40000, sd=4)
    spare1 = Normal('spare1', mu=17000, sd=500)
    spare2 = Normal('spare2', mu=17000, sd=500)
    spare3 = Normal('spare3', mu=17500, sd=500)
    spare4 = Normal('spare4', mu=17500, sd=500)
    spar...
```
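The same estimate can be sketched as a plain NumPy Monte Carlo (an illustration, not the notebook's PyMC model; the equipment standard deviation and the fifth year's spare mean are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
n_years, maint_rate = 5, 0.12

equip = rng.normal(40_000, 4_000, n_samples)      # equipment price (sd assumed)
spares = sum(rng.normal(mu, 500, n_samples)       # one independent draw per year
             for mu in (17_000, 17_000, 17_500, 17_500, 17_500))
maintenance = equip * maint_rate * n_years        # 12% of equipment price per year
total = equip + spares + maintenance

# Expected mean: 40000 + 86500 + 24000 = 150500
print(round(total.mean()))
```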
Model Machine
mp = phantasy.MachinePortal(machine='FRIB_FE', segment='LEBT')
docs/source/src/notebooks/phantasy_element.ipynb
archman/phantasy
bsd-3-clause
Get Element by type
mp.get_all_types()
Example: electrostatic quadrupole

Get the first EQUAD:
```python
equads = mp.get_elements(type='EQUAD')
equads

# first equad
equad0 = equads[0]
equad0
```
Investigate the equad
```python
print("Index    : %d" % equad0.index)
print("Name     : %s" % equad0.name)
print("Family   : %s" % equad0.family)
print("Location : (begin) %f (end) %f" % (equad0.sb, equad0.se))
print("Length   : %f" % equad0.length)
print("Groups   : %s" % equad0.group)
print("PVs      : %s" % equad0.pv())
print("Tags     : %s" % equ...
```
Dynamic field: V

All available dynamic fields can be retrieved via equad0.fields (for equad0 here, there is only one field, i.e. V).
equad0.V
Get values

If only the readback value is of interest, Approach 1 is recommended and the most natural.
```python
# Approach 1: dynamic field feature (readback PV)
print("Readback: %f" % equad0.V)

# Approach 2: caget(pv_name)
pv_rdbk = equad0.pv(field='V', handle='readback')
print("Readback: %s" % phantasy.caget(pv_rdbk))

# Approach 3: CaField
v_field = equad0.get_field('V')
print("Readback: %f" % v_field.get(handle='readback'))...
```
Set values

Approach 1 is always recommended.
```python
# Save original set value for the 'V' field
v0 = equad0.get_field('V').get(handle='setpoint')

# Approach 1: dynamic field feature (setpoint PV)
equad0.V = 2000

# Approach 2: caput(pv_name)
pv_cset = equad0.pv(field='V', handle='setpoint')
phantasy.caput(pv_cset, 1000)

# Approach 3: CaField
v_field = equad0.get_field('V')...
```
Preparing the data for modeling
```python
iris = load_iris()
X, y = iris.data, iris.target

# this is unsupervised; we aren't going to split
```
doc/examples/decomposition/SelectivePCA weighting.ipynb
tgsmith61591/skutil
bsd-3-clause
Basic k-Means, no weighting: Here, we'll run a basic k-Means (k=3) preceded by a default SelectivePCA (no weighting)
```python
from sklearn.metrics import accuracy_score
from skutil.decomposition import SelectivePCA
from sklearn.pipeline import Pipeline
from sklearn.cluster import KMeans

# define our default pipe
pca = SelectivePCA(n_components=0.99)
pipe = Pipeline([
    ('pca', pca),
    ('model', KMeans(3))
])

# fit the pipe...
```
This is a nice accuracy, but not a stellar one... Surely we can improve this, right? Part of the problem is that clustering (distance metrics) treats all the features equally. Since PCA intrinsically orders features based on importance, we can weight them according to the variability they each explain. Thus, the most i...
pca.pca_.explained_variance_ratio_
And here's what our weighting vector will ultimately look like:
```python
weights = pca.pca_.explained_variance_ratio_
weights -= np.median(weights)
weights += 1
weights
```
k-Means with weighting:
```python
# define our weighted pipe
pca = SelectivePCA(n_components=0.99, weight=True)
pipe = Pipeline([
    ('pca', pca),
    ('model', KMeans(3))
])

# fit the pipe
pipe.fit(X, y)

# predict and score
print('Train accuracy (with weighting): %.5f' % accuracy_score(y, pipe.predict(X)))
```
TFX – Introduction to Apache Beam TFX is designed to be scalable to very large datasets which require substantial resources. Distributed pipeline frameworks such as Apache Beam offer the ability to distribute processing across compute clusters and apply the resources required. Many of the standard TFX components use ...
```python
!pip install -q -U \
    tensorflow==2.0.0 \
    apache-beam
```
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Import packages

We import the necessary packages, including Beam.
```python
from datetime import datetime
import os
import pprint
import tempfile
import urllib

pp = pprint.PrettyPrinter()

import tensorflow as tf
import apache_beam as beam
from apache_beam import pvalue
from apache_beam.runners.interactive.display import pipeline_graph
import graphviz

print('TensorFlow version: {}'.format(tf...
```
Create a Beam Pipeline

Create a pipeline, including a simple PCollection and a ParDo() transform. A PCollection<T> is an immutable collection of values of type T. A PCollection can contain either a bounded or unbounded number of elements. Bounded and unbounded PCollections are produced as the output of PTransfor...
```python
first_pipeline = beam.Pipeline()

lines = (first_pipeline
         | "Create" >> beam.Create(["Hello", "World", "!!!"])  # PCollection
         | "Print" >> beam.ParDo(print))                       # ParDo transform

result = first_pipeline.run()
result.state
```
Display the structure of this pipeline.
```python
def display_pipeline(pipeline):
    graph = pipeline_graph.PipelineGraph(pipeline)
    return graphviz.Source(graph.get_dot())

display_pipeline(first_pipeline)
```
Next, invoke run inside a with block.
```python
with beam.Pipeline() as with_pipeline:
    lines = (with_pipeline
             | "Create" >> beam.Create(["Hello", "World", "!!!"])
             | "Print" >> beam.ParDo(print))

display_pipeline(with_pipeline)
```
Exercise 1 — Creating and Running Your Beam Pipeline

- Build a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.
- Add a step in the pipeline to square each item.
- Display the pipeline.

Warning: the function passed to ParDo() must either return None or an iterable (e.g. a list).

Solution:
```python
with beam.Pipeline() as with_pipeline:
    lines = (with_pipeline
             | "Create" >> beam.Create(range(10 + 1))
             | "Square" >> beam.ParDo(lambda x: [x ** 2])
             | "Print" >> beam.ParDo(print))

display_pipeline(with_pipeline)
```
Core Transforms

Beam has a set of core transforms on data that is contained in PCollections. In the cells that follow, explore several core transforms and observe the results in order to develop some understanding and intuition for what each transform does.

Map

The Map transform applies a simple 1-to-1 mapping functi...
with beam.Pipeline() as pipeline: lines = (pipeline | "Create" >> beam.Create([1, 2, 3]) | "Multiply" >> beam.ParDo(lambda number: [number * 2]) # ParDo with integers | "Print" >> beam.ParDo(print)) with beam.Pipeline() as pipeline: lines = (pipeline | "Create" >> beam.Cre...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
GroupByKey GroupByKey takes a keyed collection of elements and produces a collection where each element consists of a key and all values associated with that key. GroupByKey is a transform for processing collections of key/value pairs. It’s a parallel reduction operation, analogous to the Shuffle phase of a Map/Shuffle...
with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create(['apple', 'ball', 'car', 'bear', 'cheetah', 'ant']) | beam.Map(lambda word: (word[0], word)) | beam.GroupByKey() | beam.ParDo(print))
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Exercise 2 — Group Items by Key Build a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them. Add a step in the pipeline to add a key to each item that will indicate whether it is even or odd. Use GroupByKey to group even items together and odd items together. Solution:
with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create(range(10 + 1)) | beam.Map(lambda x: ("odd" if x % 2 else "even", x)) | beam.GroupByKey() | beam.ParDo(print))
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
CoGroupByKey can combine multiple PCollections, assuming every element is a tuple whose first item is the key to join on.
pipeline = beam.Pipeline() fruits = pipeline | 'Fruits' >> beam.Create(['apple', 'banana', 'cherry']) countries = pipeline | 'Countries' >> beam.Create(['australia', 'brazil', ...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Combine Combine is a transform for combining collections of elements or values. Combine has variants that work on entire PCollections, and some that combine the values for each key in PCollections of key/value pairs. To apply a Combine transform, you must provide the function that contains the logic for combining the e...
with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create([1, 2, 3, 4, 5]) | beam.CombineGlobally(sum) | beam.Map(print)) with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create([1, 2, 3, 4, 5]) | beam.combiners.Mean.Globally() | b...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Exercise 3 — Combine Items Start with the Beam pipeline you built in the previous exercise: it creates a PCollection containing integers 0 to 10, groups them by their parity, and prints the groups. Add a step that computes the mean of each group (i.e., the mean of all odd numbers between 0 and 10, and the mean of all even...
with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create(range(10 + 1)) | beam.Map(lambda x: ("odd" if x % 2 else "even", x)) | beam.CombinePerKey(AverageFn()) | beam.ParDo(print))
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Flatten Flatten is a transform for PCollection objects that store the same data type. Flatten merges multiple PCollection objects into a single logical PCollection. Data encoding in merged collections By default, the coder for the output PCollection is the same as the coder for the first PCollection in the input PColle...
pipeline = beam.Pipeline() wordsStartingWithA = (pipeline | 'Words starting with A' >> beam.Create(['apple', 'ant', 'arrow'])) wordsStartingWithB = (pipeline | 'Words starting with B' >> beam.Create(['ball', 'book', 'bow'])) ((wordsStartingWithA, wordsStartingWithB) | beam.Flatten() | beam.ParDo(print)) ...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Partition Partition is a transform for PCollection objects that store the same data type. Partition splits a single PCollection into a fixed number of smaller collections. Partition divides the elements of a PCollection according to a partitioning function that you provide. The partitioning function contains the logic ...
def partition_fn(number, num_partitions): partition = number // 100 return min(partition, num_partitions - 1) with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create([1, 110, 2, 350, 4, 5, 100, 150, 3]) | beam.Partition(partition_fn, 3)) lines[0] | '< 100' >> beam.ParDo(pri...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Side Inputs In addition to the main input PCollection, you can provide additional inputs to a ParDo transform in the form of side inputs. A side input is an additional input that your DoFn can access each time it processes an element in the input PCollection. When you specify a side input, you create a view of some oth...
def increment(number, inc=1): return number + inc with beam.Pipeline() as pipeline: lines = (pipeline | "Create" >> beam.Create([1, 2, 3, 4, 5]) | "Increment" >> beam.Map(increment) | "Print" >> beam.ParDo(print)) with beam.Pipeline() as pipeline: lines = (pipeline ...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Additional Outputs While ParDo always produces a main output PCollection (as the return value from apply), you can also have your ParDo produce any number of additional output PCollections. If you choose to have multiple outputs, your ParDo returns all of the output PCollections (including the main output) bundled toge...
def compute(number): if number % 2 == 0: yield number else: yield pvalue.TaggedOutput("odd", number + 10) with beam.Pipeline() as pipeline: even, odd = (pipeline | "Create" >> beam.Create([1, 2, 3, 4, 5, 6, 7]) | "Increment" >> beam.ParDo(compute).with_outputs("odd", ...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Branching A transform does not consume or otherwise alter the input collection – remember that a PCollection is immutable by definition. This means that you can apply multiple transforms to the same input PCollection to create a branching pipeline.
with beam.Pipeline() as branching_pipeline: numbers = (branching_pipeline | beam.Create([1, 2, 3, 4, 5])) mult5_results = numbers | beam.Map(lambda num: num * 5) mult10_results = numbers | beam.Map(lambda num: num * 10) mult5_results | 'Log multiply 5' >> beam.ParDo(print, 'Mult 5') mult10_results | 'Log mu...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Composite Transforms Transforms can have a nested structure, where a complex transform performs multiple simpler transforms (such as more than one ParDo, Combine, GroupByKey, or even other composite transforms). These transforms are called composite transforms. Nesting multiple transforms inside a single composite tran...
class ExtractAndMultiplyNumbers(beam.PTransform): def expand(self, pcollection): return (pcollection | beam.FlatMap(lambda line: line.split(",")) | beam.Map(lambda num: int(num) * 10)) with beam.Pipeline() as composite_pipeline: lines = (composite_pipeline | beam.Create...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Filter Filter, given a predicate, keeps only the elements that satisfy that predicate. Filter can also be used to compare each element against a given value, using the element's comparison ordering. You can pass functions with multiple arguments to Filter. They are passed as additional positional a...
class FilterOddNumbers(beam.DoFn): def process(self, element, *args, **kwargs): if element % 2 == 1: yield element with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create(range(1, 11)) | beam.ParDo(FilterOddNumbers()) | beam.ParDo(print)) with beam....
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Aggregation Beam uses windowing to divide a continuously updating unbounded PCollection into logical windows of finite size. These logical windows are determined by some characteristic associated with a data element, such as a timestamp. Aggregation transforms (such as GroupByKey and Combine) work on a per-window basis...
with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create(range(1, 11)) | beam.combiners.Count.Globally() # Count | beam.ParDo(print)) with beam.Pipeline() as pipeline: lines = (pipeline | beam.Create(range(1, 11)) | beam.CombineGlobally(sum) # Combine...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Pipeline I/O When you create a pipeline, you often need to read data from some external source, such as a file or a database. Likewise, you may want your pipeline to output its result data to an external storage system. Beam provides read and write transforms for a number of common data storage types. If you want your ...
DATA_PATH = 'https://raw.githubusercontent.com/ageron/open-datasets/master/' \ 'online_news_popularity_for_course/online_news_popularity_for_course.csv' _data_root = tempfile.mkdtemp(prefix='tfx-data') _data_filepath = os.path.join(_data_root, "data.csv") urllib.request.urlretrieve(DATA_PATH, _data_filepath) !head ...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Putting Everything Together Use several of the concepts, classes, and methods discussed above in a concrete example. Exercise 4 — Reading, Filtering, Parsing, Grouping and Averaging Write a Beam pipeline that reads the dataset, computes the mean label (the numbers in the last column) for each article category (the thir...
with beam.Pipeline() as pipeline: lines = (pipeline | beam.io.ReadFromText(_data_filepath) | beam.Filter(lambda line: line < "2014-01-01") | beam.Map(lambda line: line.split(",")) # CSV parser? | beam.Map(lambda cols: (cols[2], float(cols[-1]))) | beam.combiners....
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
Note that there are many other built-in I/O transforms. Windowing As discussed above, windowing subdivides a PCollection according to the timestamps of its individual elements. Some Beam transforms, such as GroupByKey and Combine, group multiple elements by a common key. Ordinarily, that grouping operation groups all o...
DAYS = 24 * 60 * 60 class AssignTimestamps(beam.DoFn): def process(self, element): date = datetime.strptime(element[0], "%Y-%m-%d") yield beam.window.TimestampedValue(element, date.timestamp()) with beam.Pipeline() as window_pipeline: lines = (window_pipeline | beam.io.ReadFromText(_data_filepa...
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
tensorflow/workshops
apache-2.0
============================================== Compute effect-matched-spatial filtering (EMS) ============================================== This example computes the EMS to reconstruct the time course of the experimental effect as described in: Aaron Schurger, Sebastien Marti, and Stanislas Dehaene, "Reducing multi-se...
# Author: Denis Engemann <denis.engemann@gmail.com> # Jean-Remi King <jeanremi.king@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io, EvokedArray from mne.datasets import sample from mne.decoding import EMS, compute_ems from sklearn.cross_...
0.13/_downloads/plot_ems_filtering.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Note that a similar transformation can be applied with compute_ems. However, that function replicates Schurger et al.'s original paper, and thus applies the normalization outside of a leave-one-out cross-validation, which we recommend not to do.
epochs.equalize_event_counts(event_ids) X_transform, filters, classes = compute_ems(epochs)
0.13/_downloads/plot_ems_filtering.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Autosipper
# config directory must have "__init__.py" file # from the 'config' directory, import the following classes: from config import Motor, ASI_Controller, Autosipper from config import utils as ut autosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml')) autosipper.coord_frames fro...
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit
Manifold
from config import Manifold manifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512) manifold.valvemap[manifold.valvemap.name>0] for i in [2,0,14,8]: status = 'x' if manifold.read_valve(i): status = 'o' print status, manifold.valvemap.name.iloc[i] for i in range(16): status = ...
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit
Micromanager
# !!!! Also must have MM folder on system PATH # mm_version = 'C:\Micro-Manager-1.4' # cfg = 'C:\Micro-Manager-1.4\SetupNumber2_05102016.cfg' mm_version = 'C:\Program Files\Micro-Manager-2.0beta' cfg = 'C:\Program Files\Micro-Manager-2.0beta\Setup2_20170413.cfg' import sys sys.path.insert(0, mm_version) # make it so p...
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit
Preset: 1_PBP ConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP Preset: 2_BF ConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF Preset: 3_DAPI ConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI Preset: 4_eGFP ConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP Preset: 5_Cy5 ConfigGroup,Channel,5_Cy5,TIFi...
core.setConfig('Channel','2_BF') core.setProperty(core.getCameraDevice(), "Exposure", 300) core.snapImage() img = core.getImage() plt.imshow(img,cmap='gray') image = Image.fromarray(img) # image.save('TESTIMAGE.tif') position_list = ut.load_mm_positionlist("C:/Users/fordycelab/Desktop/D1_cjm.pos") position_list def...
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit
MM Get info
core.getFocusDevice() core.getCameraDevice() core.getXYStageDevice() core.getDevicePropertyNames(core.getCameraDevice())
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit
Video
# cv2.startWindowThread() cv2.namedWindow('Video') cv2.imshow('Video',img) cv2.waitKey(0) cv2.destroyAllWindows() core.stopSequenceAcquisition() import cv2 cv2.namedWindow('Video') core.startContinuousSequenceAcquisition(1) while True: img = core.getLastImage() if core.getRemainingImageCount() > 0: # ...
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit
EXIT
autosipper.exit() manifold.exit() core.unloadAllDevices() core.reset() print 'closed'
notebooks/ExperimentTemplate.ipynb
FordyceLab/AcqPack
mit