The first block of code simply writes a text file with the required input; the last line tells the geometry (h2o) to write its information to the file STRUCT.fdf. It automatically writes the following geometry information to the fdf file:
- LatticeConstant
- LatticeVectors
- NumberOfAtoms
- AtomicCoordinatesFor...
fdf = get_sile('RUN.fdf')
H = fdf.read_hamiltonian()
# Create a short-hand to handle the geometry
h2o = H.geometry
print(H)
docs/tutorials/tutorial_siesta_1.ipynb
zerothi/sisl
mpl-2.0
A lot of new information has appeared. The Hamiltonian object describes the non-orthogonal basis and the "hopping" elements between the orbitals. We see it is a non-orthogonal basis via: orthogonal: False. Secondly, we see it was an unpolarized calculation (Spin{unpolarized...). Lastly, the geometry information is prin...
def plot_atom(atom):
    no = len(atom)  # number of orbitals
    nx = no // 4
    ny = no // nx
    if nx * ny < no:
        nx += 1
    fig, axs = plt.subplots(nx, ny, figsize=(20, 5*nx))
    fig.suptitle('Atom: {}'.format(atom.symbol), fontsize=14)
    def my_plot(i, orb):
        grid = orb.toGrid(atom=atom)
        ...
Hamiltonian eigenstates At this point we have the full Hamiltonian as well as the basis functions used in the Siesta calculation. This completes what is needed to calculate a great deal of physical quantities, e.g. eigenstates, density of states, projected density of states and wavefunctions. To begin with we calculate...
es = H.eigenstate()
# We specify an origin to center the molecule in the grid
h2o.sc.origin = [-4, -4, -4]
# Reduce the contained eigenstates to only the HOMO and LUMO
# Find the index of the smallest positive eigenvalue
idx_lumo = (es.eig > 0).nonzero()[0][0]
es = es.sub([idx_lumo - 1, idx_lumo])
_, ax = plt.subplot...
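The HOMO/LUMO selection in the snippet above is plain NumPy indexing; a minimal sketch with a hypothetical eigenvalue array (not actual Siesta output):

```python
import numpy as np

# Hypothetical eigenvalue spectrum (eV, Fermi level at 0) standing in
# for es.eig; the selection logic itself is pure NumPy.
eig = np.array([-5.1, -2.3, -0.5, 0.3, 1.2])

# Index of the smallest positive eigenvalue: the LUMO
idx_lumo = (eig > 0).nonzero()[0][0]
idx_homo = idx_lumo - 1

print(idx_homo, idx_lumo)  # -> 2 3
```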
These are not that interesting. The projection of the HOMO and LUMO states shows where the largest weight of the HOMO and LUMO states lies; however, we can't see the orbital symmetry differences between the HOMO and LUMO states. Instead of plotting the weight on each orbital, it is more interesting to plot the actual wavefunct...
def integrate(g):
    print('Real space integrated wavefunction: {:.4f}'.format((np.absolute(g.grid) ** 2).sum() * g.dvolume))

g = Grid(0.2, sc=h2o.sc)
es.sub(0).wavefunction(g)
integrate(g)
#g.write('HOMO.cube')
g.fill(0)  # reset the grid values to 0
es.sub(1).wavefunction(g)
integrate(g)
#g.write('LUMO.cube')
Real space charge Since we have the basis functions we can also plot the charge in the grid. We can do this by either reading the density matrix or reading the charge output directly from Siesta. Since both should yield the same value we can compare the output from Siesta with that calculated in sisl. You will notice ...
DM = fdf.read_density_matrix()
rho = get_sile('siesta_1.nc').read_grid('Rho')
DM_rho = rho.copy()
DM_rho.fill(0)
DM.density(DM_rho)
diff = DM_rho - rho
print('Real space integrated density difference: {:.3e}'.format(diff.grid.sum() * diff.dvolume))
Morph volumetric source estimate This example demonstrates how to morph an individual subject's :class:`mne.VolSourceEstimate` to a common reference space. We achieve this using :class:`mne.SourceMorph`. Pre-computed data will be morphed based on an affine transformation and a nonlinear registration method known as Symmetr...
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os

import nibabel as nib
import mne
from mne.datasets import sample, fetch_fsaverage
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_glass_brain

print(__doc__)
0.20/_downloads/7bbeb6a728b7d16c6e61cd487ba9e517/plot_morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute example data. For reference see sphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py

Load data:
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)

# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")

# To save time
stc.crop(0.09, 0.09)
Apply morph to VolSourceEstimate The morph can be applied to the source estimate data by giving it as the first argument to the :meth:`morph.apply() <mne.SourceMorph.apply>` method:
stc_fsaverage = morph.apply(stc)
Exercise: a number-guessing program
# Guessing game: the human guesses
# Simple version
import random

a = random.randint(1, 1000)
print('Now you can guess...')
guess_mark = True
while guess_mark:
    user_number = int(input('please input number:'))
    if user_number > a:
        print('too big')
    if user_number < a:
        print('too small')
    if user_number == a:
        print('b...
Python 基础课程/Python Basic Lesson 06 - 随机数.ipynb
chinapnr/python_study
gpl-3.0
More complex random content generation. You can refer to the random part of the Python package we developed, https://fishbase.readthedocs.io/en/latest/fish_random.html
- fish_random.gen_random_address(zone) — given a province administrative-division code, returns a random address in that province
- fish_random.get_random_areanote(zone) — given a province administrative-division code, returns the name of a random area under it
- fish_random.gen_random_bank_card([…]) — given a bank name, randomly generates a card number for that bank
- fish_random.gen_random_company_...
from fishbase.fish_random import *

# These card numbers only conform to the numbering standard and pass the most
# basic bank-card number checks, but they do not actually exist
# Randomly generate a bank card number
print(gen_random_bank_card())
# Randomly generate a Bank of China debit card number
print(gen_random_bank_card('中国银行', 'CC'))
# Randomly generate a Bank of China credit card number
print(gen_random_bank_card('中国银行', 'DC'))

from fishbase.fish_random import *

# Generate a fake ID number that conforms to the standard ID segment layout
# and check digit
# Specify...
Modify the guessing game so the machine guesses, adjusting its strategy based on the answer the human gives after each guess
# Guessing game: the machine guesses
min = 0
max = 1000
guess_ok_mark = False
while not guess_ok_mark:
    cur_guess = int((min + max) / 2)
    print('I guess:', cur_guess)
    human_answer = input('Please tell me big or small:')
    if human_answer == 'big':
        max = cur_guess
    if human_answer == 'small':
        min = cur_guess
    ...
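The machine's strategy is binary search, so at most ⌈log2 1000⌉ = 10 guesses are ever needed; a small sketch with truthful automatic answers in place of user input:

```python
# Bisection guessing with automatic answers (a sketch of the strategy
# above; variable names here are hypothetical, not the notebook's)
def guesses_needed(secret, lo=0, hi=1000):
    count = 0
    while True:
        cur = (lo + hi) // 2
        count += 1
        if cur == secret:
            return count
        if cur > secret:   # "too big" -> lower the upper bound
            hi = cur
        else:              # "too small" -> raise the lower bound
            lo = cur

# Worst case over all possible secrets strictly inside (0, 1000)
worst = max(guesses_needed(n) for n in range(1, 1000))
print(worst)  # -> 10
```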
Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the Jupyter notebook. What command do you use if you aren't using the Jupyter notebook?
import matplotlib.pyplot as plt
%matplotlib inline

# Outside the notebook, use plt.show() to display figures
plt.show()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04a-Matplotlib/Matplotlib Exercises A - Solved! .ipynb
arcyfelix/Courses
apache-2.0
Exercise 2 Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.
fig = plt.figure()
ax1 = fig.add_axes([0, 0, 1, 1])
ax2 = fig.add_axes([0.2, 0.5, .2, .2])
Now plot (x,y) on both axes. And call your figure object to show it.
fig = plt.figure()
ax1 = fig.add_axes([0, 0, 1, 1])
ax2 = fig.add_axes([0.2, 0.5, .2, .2])

ax1.plot(x, y)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('title')

ax2.plot(x, y)
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_title('title')
Exercise 3 Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]
fig = plt.figure()
ax1 = fig.add_axes([0, 0, 1, 1])
ax2 = fig.add_axes([0.2, 0.5, 0.4, 0.4])
Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:
fig = plt.figure()
ax1 = fig.add_axes([0, 0, 1, 1])
ax2 = fig.add_axes([0.2, 0.5, 0.4, 0.4])

ax1.plot(x, z)
ax1.set_xbound(lower=0, upper=100)
ax1.set_xlabel('X')
ax1.set_ylabel('Z')

ax2.plot(x, y)
ax2.set_title('zoom')
ax2.set_xbound(lower=20, upper=22)
ax2.set_ybound(lower=30, upper=50)
ax2.set_xlabel('...
Exercise 4 Use plt.subplots(nrows=1, ncols=2) to create the plot below.
fig, ax = plt.subplots(nrows = 1, ncols = 2)
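Note that with nrows=1, ncols=2 the returned ax is a one-dimensional array of Axes objects, indexed as ax[0] and ax[1]; a quick check (using the non-interactive Agg backend so it also runs outside a notebook):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots(nrows=1, ncols=2)
# A single row of two columns is squeezed to a 1-D array of Axes
print(ax.shape)  # -> (2,)
```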
Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].plot(x, y, 'b--', lw=3)
ax[1].plot(x, z, 'r', lw=3)
plt.tight_layout()
See if you can resize the plot by adding the figsize argument to plt.subplots() after copying and pasting your previous code.
fig, ax = plt.subplots(figsize=(12, 2), nrows=1, ncols=2)

ax[0].plot(x, y, 'b', lw=3)
ax[0].set_xlabel('x')
ax[0].set_ylabel('y')

ax[1].plot(x, z, 'r--', lw=3)
ax[1].set_xlabel('x')
ax[1].set_ylabel('z')
I also found this post about equal-frequency binning in Python useful.

statsmodels
from statsmodels.stats.weightstats import DescrStatsW

wq = DescrStatsW(data=np.arange(0, 101), weights=np.ones(101) * 1.5)
wq.quantile(probs=np.arange(0, 1.01, 0.01), return_pandas=False)
development/quantiles.ipynb
DamienIrving/ocean-analysis
mit
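The same weighted-quantile idea can be sketched with plain NumPy via the weighted empirical CDF (a simplified convention that may differ from DescrStatsW at the boundaries):

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Weighted quantile via the weighted empirical CDF — a sketch;
    DescrStatsW may use a slightly different interpolation convention."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, q)]

data = np.arange(0, 101)
# With uniform weights the weighted median is the ordinary median
print(weighted_quantile(data, np.ones(101), 0.5))  # -> 50
```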
Neuron Model

Passive properties

Test relaxation of neuron and threshold to equilibrium values in absence of intrinsic currents and input. We then have
\begin{align}
\tau_m \dot{V} &= \left[-g_{NaL}(V-E_{Na})-g_{KL}(V-E_K)\right] = -(g_{NaL}+g_{KL})V+(g_{NaL}E_{Na}+g_{KL}E_K)\\
\Leftrightarrow\quad \tau_{\text{eff}}\dot{V...
def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0):
    tau_eff = taum/(gNaL + gKL)
    Vinf = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
    return V0*np.exp(-t/tau_eff) + Vinf*(1-np.exp(-t/tau_eff))

def theta(t, th0, theq, tauth):
    return th0*np.exp(-t/tauth) + theq*(1-np.exp(-t/tauth))

nest.ResetKernel()
nest.SetDefault...
doc/model_details/HillTononiModels.ipynb
jakobj/nest-simulator
gpl-2.0
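The closed-form Vpass solution can be cross-checked by forward-Euler integration of the effective leak equation (illustrative parameter values, not the ht_neuron defaults):

```python
import numpy as np

# Hypothetical passive parameters (not the NEST ht_neuron defaults)
gNaL, ENa, gKL, EK, taum = 0.2, 30.0, 1.0, -90.0, 16.0
tau_eff = taum / (gNaL + gKL)
Vinf = (gNaL * ENa + gKL * EK) / (gNaL + gKL)

def Vpass(t, V0):
    # Closed-form relaxation to the effective resting potential
    return V0 * np.exp(-t / tau_eff) + Vinf * (1 - np.exp(-t / tau_eff))

# Forward-Euler integration of tau_eff * dV/dt = -(V - Vinf)
dt, V0, T = 0.001, -40.0, 100.0
V = V0
for _ in range(int(T / dt)):
    V += dt / tau_eff * (Vinf - V)

print(abs(V - Vpass(T, V0)) < 1e-3)  # -> True
```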
Agreement is excellent.

Spiking without intrinsic currents or synaptic input

The equations above hold for input current $I(t)$, but with
\begin{equation}
V_{\infty}(I) = \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}} + \frac{I}{g_{NaL}+g_{KL}}
\end{equation}
In NEST, we need to inject input current into the ht_neuron w...
def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):
    tau_eff = taum/(gNaL + gKL)
    Vinf0 = (gNaL*ENa + gKL*EK)/(gNaL + gKL)
    VinfI = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
    return tI - tau_eff * np.log((theq-VinfI) / (Vinf0-VinfI))

nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.001})
nest.SetD...
Agreement is as good as possible: all spikes occur in NEST at the end of the time step containing the expected spike time.

Inter-spike interval

After each spike, $V_m = \theta = E_{Na}$, i.e., all memory is erased. We can thus treat ISIs independently. $\theta$ relaxes according to the equation above. For $V_m$, we ha...
def Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
    tau_eff = taum/(gNaL + gKL + taum/tauspk)
    Vinf = (gNaL*ENa + gKL*EK + I + taum/tauspk*EK)/(gNaL + gKL + taum/tauspk)
    return ENa*np.exp(-tspk/tau_eff) + Vinf*(1-np.exp(-tspk/tau_eff))

def thetaspike(tspk, ENa, theq, tauth):
    return ENa*np.exp(-tspk...
ISIs are as predicted:
- the measured ISI is the predicted ISI rounded up to the next time step
- ISIs are perfectly regular, as expected

Intrinsic Currents

Preparations
nest.ResetKernel()

class Channel:
    """
    Base class for channel models in Python.
    """
    def tau_m(self, V):
        raise NotImplementedError()
    def tau_h(self, V):
        raise NotImplementedError()
    def m_inf(self, V):
        raise NotImplementedError()
    def h_inf(self, V):
        raise NotImpl...
I_h channel

The $I_h$ current is governed by
\begin{align}
I_h &= g_{\text{peak}, h} m_h(V, t) (V-E_h) \\
\frac{\text{d}m_h}{\text{d}t} &= \frac{m_h^{\infty}-m_h}{\tau_{m,h}(V)}\\
m_h^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+75\text{mV}}{5.5\text{mV}}\right)} \\
\tau_{m,h}(V) &= \frac{1}{\exp(-14.59-0.086V) + \exp(-1.8...
nest.ResetKernel()

class Ih(Channel):
    nest_g = 'g_peak_h'
    nest_I = 'I_h'

    def __init__(self, ht_params):
        self.hp = ht_params

    def tau_m(self, V):
        return 1/(np.exp(-14.59-0.086*V) + np.exp(-1.87 + 0.0701*V))

    def m_inf(self, V):
        return 1/(1+np.exp((V+75)/...
The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs. Curves are in good agreement with Fig 5 of Huguenard and McCormick, J Neurophysiol 68:1373, 1992, cited in [HT05]. I_h data there was from guinea pig slices at 35.5 C and needed no temperature a...
ih = Ih(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(ih, [(500, -65.), (500, -80.), (500, -100.), (500, -90.), (500, -55.)])

plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_h, label='NEST');
plt.plot(cr.times, cr.I_h, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_h [mV]'...
Agreement is very good. Note that currents have units of $mV$ due to the choice of dimensionless conductances.

I_T Channel

The corrected equations used for the $I_T$ channel in NEST are
\begin{align}
I_T &= g_{\text{peak}, T} m_T^2(V, t) h_T(V,t) (V-E_T) \\
m_T^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+59\text{mV}}{6.2\...
nest.ResetKernel()

class IT(Channel):
    nest_g = 'g_peak_T'
    nest_I = 'I_T'

    def __init__(self, ht_params):
        self.hp = ht_params

    def tau_m(self, V):
        return 0.13 + 0.22/(np.exp(-(V+132)/16.7) + np.exp((V+16.8)/18.2))

    def tau_h(self, V):
        return 8.2 + (56.6 + 0.27...
- Time constants here are much shorter than for I_h
- Time constants are about five times shorter than in Fig 1 of Huguenard and McCormick, J Neurophysiol 68:1373, 1992, cited in [HT05], but that may be due to the fact that the original data was collected at 23-25 C and parameters have been adjusted to 36 C.

Steady-state act...
iT = IT(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(iT, [(200, -65.), (200, -80.), (200, -100.), (200, -90.),
                           (200, -70.), (200, -55.)], nest_dt=0.1)

plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_T, label='NEST');
plt.plot(cr.times, cr.I_T, label='Control');
p...
Also here the results are in good agreement and the error appears acceptable.

I_NaP channel

This channel adapts instantaneously to changes in membrane potential:
\begin{align}
I_{NaP} &= - g_{\text{peak}, NaP} (m_{NaP}^{\infty}(V, t))^3 (V-E_{NaP}) \\
m_{NaP}^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+55.7\text{mV}}{...
nest.ResetKernel()

class INaP(Channel):
    nest_g = 'g_peak_NaP'
    nest_I = 'I_NaP'

    def __init__(self, ht_params):
        self.hp = ht_params

    def m_inf(self, V):
        return 1/(1+np.exp(-(V+55.7)/7.7))

    def compute_I(self, t, V, m0, h0, D0):
        return self.I_V_curve(V * np...
Perfect agreement. The step structure is because $V$ changes only every second.

I_KNa channel (aka I_DK)

Equations for this channel are
\begin{align}
I_{DK} &= - g_{\text{peak},DK} m_{DK}(V,t) (V - E_{DK})\\
m_{DK} &= \frac{1}{1 + \left(\frac{d_{1/2}}{D}\right)^{3.5}}\\
\frac{dD}{dt} &= D_{\text{influx}}(V) - \frac{D-D_{\t...
nest.ResetKernel()

class IDK(Channel):
    nest_g = 'g_peak_KNa'
    nest_I = 'I_KNa'

    def __init__(self, ht_params):
        self.hp = ht_params

    def m_DK(self, D):
        return 1/(1+(0.25/D)**3.5)

    def D_inf(self, V):
        return 1250. * self.D_influx(V) + 0.001

    def D_influx...
Properties of I_DK
iDK = IDK(nest.GetDefaults('ht_neuron'))

D = np.linspace(0.01, 1.5, num=200);
V = np.linspace(-110, 30, num=200);

ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = ax1.twinx()
ax3 = plt.subplot2grid((1, 9), (0, 6), colspan=3);

ax1.plot(V, -iDK.m_DK(iDK.D_inf(V))*(V - iDK.hp['E_rev_KNa']), 'g');
ax1.set_ylabel('Cur...
Note that the current in steady state is
- $\approx 0$ for $V < -40$ mV
- $\sim -(V-E_{DK})$ for $V > -30$ mV

Voltage clamp
nr, cr = voltage_clamp(iDK, [(500, -65.), (500, -35.), (500, -25.), (500, 0.),
                             (5000, -70.)], nest_dt=1.)

ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = plt.subplot2grid((1, 9), (0, 6), colspan=3);

ax1.plot(nr.times, nr.I_KNa, label='NEST');
ax1.plot(cr.times, cr.I_KNa, label='Control...
Looks very fine.
- Note that the current gets appreciable only when $V > -35$ mV.
- Once that threshold is crossed, the current adjusts instantaneously to changes in $V$, since it is in the linear regime.
- When returning from $V=0$ to $V=-70$ mV, the current remains large for a long time since $D$ has to drop below 1 before $m_...
nest.ResetKernel()

class SynChannel:
    """
    Base class for synapse channel models in Python.
    """
    def t_peak(self):
        return self.tau_1 * self.tau_2 / (self.tau_2 - self.tau_1) * np.log(self.tau_2/self.tau_1)

    def beta(self, t):
        val = ( ( np.exp(-t/self.tau_1) - np.exp(-t/self.tau_2) ...
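The t_peak formula in the base class above can be checked numerically: the double-exponential difference attains its extremum exactly at t_peak. A sketch with illustrative time constants (not the ht_neuron defaults):

```python
import numpy as np

# Illustrative rise and decay time constants in ms (hypothetical values)
tau_1, tau_2 = 0.5, 2.4

# Analytic peak time of the double-exponential conductance
t_peak = tau_1 * tau_2 / (tau_2 - tau_1) * np.log(tau_2 / tau_1)

def beta(t):
    return np.exp(-t / tau_1) - np.exp(-t / tau_2)

# |beta| should be maximal at t_peak (checked on a fine grid)
t = np.linspace(0, 20, 20001)
t_max = t[np.argmax(np.abs(beta(t)))]
print(abs(t_max - t_peak) < 1e-2)  # -> True
```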
AMPA, GABA_A, GABA_B channels
nest.ResetKernel()

class PlainChannel(SynChannel):
    def __init__(self, hp, receptor):
        self.hp = hp
        self.receptor = receptor
        self.rec_code = hp['receptor_types'][receptor]
        self.tau_1 = hp['tau_rise_'+receptor]
        self.tau_2 = hp['tau_decay_'+receptor]
        self.g_peak = hp['g_p...
Looks quite good, but the error is maybe a bit larger than one would hope. However, the synaptic rise time is short (0.5 ms) compared to the integration step in NEST (0.1 ms), which may explain the error. Reducing the time step reduces the error:
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.001)

plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel')...
Looks good for all. For GABA_B the error is negligible even for dt = 0.1, since the time constants are large.

NMDA Channel

The equations for this channel are
\begin{align}
\bar{g}_{\text{NMDA}}(t) &= m(V, t) g_{\text{NMDA}}(t)\\
m(V, t) &= a(V) m_{\text{fast}}^*(V, t) + ( 1 - a(V) ) m_{\text{slow}}^*(V, t)\\
a(V...
class NMDAInstantChannel(SynChannel):
    def __init__(self, hp, receptor):
        self.hp = hp
        self.receptor = receptor
        self.rec_code = hp['receptor_types'][receptor]
        self.tau_1 = hp['tau_rise_'+receptor]
        self.tau_2 = hp['tau_decay_'+receptor]
        self.g_peak = hp['g_peak_'+recepto...
Looks good. Jumps are due to blocking/unblocking of Mg channels with changes in $V$.

NMDA with unblocking over time
class NMDAChannel(SynChannel):
    def __init__(self, hp, receptor):
        self.hp = hp
        self.receptor = receptor
        self.rec_code = hp['receptor_types'][receptor]
        self.tau_1 = hp['tau_rise_'+receptor]
        self.tau_2 = hp['tau_decay_'+receptor]
        self.g_peak = hp['g_peak_'+receptor]
        ...
Looks fine, too.

Synapse Model

We test the synapse model by placing it between two parrot neurons, sending spikes with differing intervals, and comparing to the expected weights.
nest.ResetKernel()

sp = nest.GetDefaults('ht_synapse')
P0 = sp['P']
dP = sp['delta_P']
tP = sp['tau_P']

spike_times = [10., 12., 20., 20.5, 100., 200., 1000.]

expected = [(0., P0, P0)]
for idx, t in enumerate(spike_times):
    tlast, Psend, Ppost = expected[idx]
    Psend = 1 - (1-Ppost)*math.exp(-(t-tlast)/tP)
    exp...
Perfect agreement; the synapse model looks fine.

Integration test: Neuron driven through all synapses

We drive a Hill-Tononi neuron through pulse packets arriving at 1 second intervals, impinging through all synapse types. Compare this to Fig 5 of [HT05].
nest.ResetKernel()

nrn = nest.Create('ht_neuron')
ppg = nest.Create('pulsepacket_generator', n=4,
                  params={'pulse_times': [700., 1700., 2700., 3700.],
                          'activity': 700, 'sdev': 50.})
pr = nest.Create('parrot_neuron', n=4)
mm = nest.Create('multimeter', params=...
Gaussian Blur This first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the Python blur kernel, which is invoked when the user modifies the slider. Note: This simple example does not handle ...
# smaller image
img_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C')

palette = ['#%02x%02x%02x' % (i,i,i) for i in range(256)]
width, height = img_blur.shape
p_blur = figure(x_range=(0, width), y_range=(0, height))
r_blur = p_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], pale...
examples/howto/notebook_comms/Numba Image Example.ipynb
percyfal/bokeh
bsd-3-clause
3x3 Image Kernels Many image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how Numba can be used to compile kernels for arbitrary 3x3 kernels, and then provides several predefined kernels for the user to experiment with. The UI presents the image to process (along w...
@jit
def getitem(img, x, y):
    w, h = img.shape
    if x >= w:
        x = w - 1 - (x - w)
    if y >= h:
        y = h - 1 - (y - h)
    return img[x, y]

def filter_factory(kernel):
    ksum = np.sum(kernel)
    if ksum == 0:
        ksum = 1
    k9 = kernel / ksum

    @jit
    def kernel_apply(img, out, x, y):
        ...
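The kernel-application idea can be sketched without Numba; a plain-NumPy valid-mode 3x3 correlation (a hypothetical helper, not the notebook's getitem-based version, which additionally mirrors at the borders):

```python
import numpy as np

def apply3x3(img, kernel):
    """Valid-mode 3x3 correlation sketch (no border mirroring)."""
    w, h = img.shape
    out = np.zeros((w - 2, h - 2))
    for i in range(w - 2):
        for j in range(h - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return out

img = np.random.rand(10, 10)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
# The identity kernel must reproduce the interior of the image
print(np.allclose(apply3x3(img, identity), img[1:-1, 1:-1]))  # -> True
```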
Wavelet Decomposition This last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image.
@njit
def wavelet_decomposition(img, tmp):
    """
    Perform in-place wavelet decomposition on `img` with `tmp`
    as a temporary buffer. This is a very simple wavelet for
    demonstration.
    """
    w, h = img.shape
    halfwidth, halfheight = w//2, h//2
    lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfw...
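The Haar idea — pairwise averages plus pairwise differences — can be sketched in one dimension with plain NumPy (illustrative, not the Numba-compiled 2-D version above):

```python
import numpy as np

def haar_1d(signal):
    """One level of a 1-D Haar decomposition: averages then differences."""
    s = signal.reshape(-1, 2)
    avg = (s[:, 0] + s[:, 1]) / 2.0
    diff = (s[:, 0] - s[:, 1]) / 2.0
    return np.concatenate([avg, diff])

def haar_1d_inverse(coeffs):
    """Invert one Haar level: reconstruct pairs from avg and diff."""
    half = len(coeffs) // 2
    avg, diff = coeffs[:half], coeffs[half:]
    out = np.empty(len(coeffs))
    out[0::2] = avg + diff
    out[1::2] = avg - diff
    return out

x = np.random.rand(16)
# The decomposition is lossless: inverse(forward(x)) == x
print(np.allclose(haar_1d_inverse(haar_1d(x)), x))  # -> True
```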
... while planets provides a dataframe that has an easier-to-use dataframe index, with
- units removed
- spaces replaced by underscores
- all lower case
from planetarypy.constants import planets

planets
planets.dtypes
notebooks/planetary constants.ipynb
michaelaye/planetpy
bsd-3-clause
One can also directly import a planet, which will be a pandas Series and is in fact just a column of the above table.
from planetarypy.constants import mars

mars
WLS Estimation

Artificial data: Heteroscedasticity 2 groups

Model assumptions:
- Misspecification: true model is quadratic, estimate only linear
- Independent noise/error term
- Two groups for error variance, low and high variance groups
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
examples/notebooks/wls.ipynb
yl565/statsmodels
bsd-3-clause
WLS knowing the true variance ratio of heteroscedasticity
mod_wls = sm.WLS(y, X, weights=1./w)
res_wls = mod_wls.fit()
print(res_wls.summary())
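Under the hood, WLS with weights 1/w is OLS on rows scaled by 1/sqrt(w); a plain-NumPy sketch with hypothetical data (not the notebook's x, y):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), np.linspace(0, 20, n)])
w = np.ones(n)
w[30:] = 9.0                                 # error variance ratio 1:9
y = X @ [5.0, 0.5] + np.sqrt(w) * rng.normal(size=n)

# WLS with weights 1/w == OLS on rows scaled by 1/sqrt(w)
sw = 1.0 / np.sqrt(w)
beta_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

# Same estimate via the weighted normal equations X'WX b = X'Wy
W = np.diag(1.0 / w)
beta_ne = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(np.allclose(beta_wls, beta_ne))  # -> True
```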
OLS vs. WLS Estimate an OLS model for comparison:
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
                [res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se, 4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)...
Calculate OLS prediction interval:
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb, X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)

prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
Draw a plot to compare predicted values in WLS and OLS:
prstd, iv_l, iv_u = wls_prediction_std(res_wls)

fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')...
Feasible Weighted Least Squares (2-stage FWLS)
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./w_est).fit()
print(res_fwls.summary())
If you want to see the injector_lattice.py file you can run the following command (the lattice file is very large):

$ %load injector_lattice.py

The variable cell contains all the elements of the lattice in the right order. Again, Ocelot works with the class MagneticLattice instead of a simple sequence of elements, so we have to run fo...
lat = MagneticLattice(cell, stop=None)
demos/ipython_tutorials/2_tracking.ipynb
ocelot-collab/ocelot
gpl-3.0
1. Design optics calculation of the European XFEL Injector

Remark: For convenience, we define optical functions starting at the gun by backtracking the optical functions derived from ASTRA (or a similar space charge code) at 130 MeV at the entrance to the first quadrupole. The optical functions we thus obtain h...
# initialization of Twiss object
tws0 = Twiss()
# defining initial twiss parameters
tws0.beta_x = 29.171
tws0.beta_y = 29.171
tws0.alpha_x = 10.955
tws0.alpha_y = 10.955
# defining initial electron energy in GeV
tws0.E = 0.005
# calculate optical functions with initial twiss parameters
tws = twiss(lat, tws0, nPoints=...
2. Tracking in first and second order approximation without any collective effects

Remark: Because of the reasons mentioned above, we start the beam tracking from the first quadrupole after the RF cavities.

Loading of beam distribution

In order to perform tracking we need a beam distribution. We will load the beam distrib...
#from ocelot.adaptors.astra2ocelot import *
#p_array_init = astraBeam2particleArray(filename='beam_130MeV.ast')
#p_array_init = astraBeam2particleArray(filename='beam_130MeV_off_crest.ast')

# save ParticleArray to compressed numpy array
#save_particle_array("tracking_beam.npz", p_array_init)

p_array_init = load_part...
Selection of the tracking order and lattice for the tracking.

MagneticLattice(sequence, start=None, stop=None, method=MethodTM()) has the following arguments:
- sequence - list of the elements,
- start - first element of the lattice. If None, then the lattice starts from the first element of the sequence,
- stop - last ele...
# initialization of tracking method
method = MethodTM()

# for second order tracking we have to choose SecondTM
method.global_method = SecondTM

# for first order tracking uncomment next line
# method.global_method = TransferMap

# we start simulation from the first quadrupole (QI.46.I1) after RF section.
# you can ch...
Tracking

For tracking we have to define the following objects: Navigator is an object which navigates the beam distribution (ParticleArray) through the lattice. The Navigator knows with what step (attr: unit_step) the beam distribution will be tracked and where to apply one or another physics process. In order to...
navi = Navigator(lat_t)
p_array = deepcopy(p_array_init)

start = time.time()
tws_track, p_array = track(lat_t, p_array, navi)
print("\n time exec:", time.time() - start, "sec")

# you can change top_plot argument, for example top_plot=["alpha_x", "alpha_y"]
plot_opt_func(lat_t, tws_track, top_plot=["E"], fig_name=0, le...
Tracking with beam matching To match the beam with the design optics we can use an artificial matching beam transformation: BeamTransform(tws=Twiss()). In the Twiss object, beta and alpha functions, as well as the phase advances twiss.mux and twiss.muy (zero by default), can also be specified.
tw = Twiss()
tw.beta_x = 2.36088
tw.beta_y = 2.824
tw.alpha_x = 1.2206
tw.alpha_y = -1.35329

bt = BeamTransform(tws=tw)

navi = Navigator(lat_t)
navi.unit_step = 1  # ignored in that case; tracking is performed element by element
                    # - there is no PhysicsProc along the lattice, ...
<div class="alert alert-block alert-warning"> <b>Note:</b> The function "track()" returns a twiss list ("tws_track") and a ParticleArray ("p_array"). "p_array" is the final ParticleArray. "tws_track" is a list of Twiss objects whose twiss parameters are calculated from the particle distribution. So, inside each Twiss object, t...
sigma_x = np.sqrt([tw.xx for tw in tws_track])
s = [tw.s for tw in tws_track]

plt.plot(s, sigma_x)
plt.xlabel("s [m]")
plt.ylabel(r"$\sigma_x$, [m]")
plt.show()
Beam distribution
# the beam head is on the left side
show_e_beam(p_array, figsize=(8,6))
Explicit usage of matplotlib functions

Current profile
bins_start, hist_start = get_current(p_array, num_bins=200)

plt.figure(4)
plt.title("current: end")
plt.plot(bins_start*1000, hist_start)
plt.xlabel("s, mm")
plt.ylabel("I, A")
plt.grid(True)
plt.show()

tau = np.array([p.tau for p in p_array])
dp = np.array([p.p for p in p_array])
x = np.array([p.x for p in p_array])...
Benchmarking the Forward Transform

Define some test data:
def make_forward_data(M, N):
    x = -0.5 + np.random.rand(M)
    f_hat = np.random.randn(N) + 1j * np.random.randn(N)
    return x, f_hat
notebooks/Benchmarks.ipynb
jakevdp/nfft
mit
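For reference, the transform being benchmarked can be written as a direct O(MN) sum, f_j = Σ_k f̂_k exp(−2πi k x_j) with k = −N/2, …, N/2−1 (sign and ordering conventions assumed here; verify against the library you benchmark):

```python
import numpy as np

def ndft_forward(x, f_hat):
    """Direct O(M*N) evaluation of the forward NDFT (a sketch, not
    the nfft library's fast implementation):
    f_j = sum_k f_hat[k] * exp(-2j*pi*k*x_j), k = -N/2..N/2-1."""
    N = len(f_hat)
    k = np.arange(-N // 2, N // 2)
    return np.exp(-2j * np.pi * x[:, None] * k[None, :]) @ f_hat

# Sanity check: only the k=0 coefficient set -> constant output
x = -0.5 + np.random.rand(20)
f_hat = np.zeros(8, dtype=complex)
f_hat[4] = 3.0                       # index 4 corresponds to k = 0
f = ndft_forward(x, f_hat)
print(np.allclose(f, 3.0))  # -> True
```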
Define a utility function around pynfft:
def pynfft_forward(x, f_hat): M = len(x) N = len(f_hat) plan = pynfft.nfft.NFFT(N, M) plan.x = x plan.precompute() plan.f_hat = f_hat # Need copy because of bug in pynfft 1.x # See https://github.com/ghisvail/pyNFFT/issues/57 return plan.trafo().copy() x, f_hat = make_forward_data(1...
Benchmarking the Adjoint Transform Define some test data:
def make_adjoint_data(M): x = -0.5 + np.random.rand(M) f = np.random.randn(M) + 1j * np.random.randn(M) return x, f
Define a utility function around pynfft:
def pynfft_adjoint(x, f, N): M = len(x) plan = pynfft.nfft.NFFT(N, M) plan.x = x plan.precompute() plan.f = f # Need copy because of bug in pynfft 1.x # See https://github.com/ghisvail/pyNFFT/issues/57 return plan.adjoint().copy() x, f = make_adjoint_data(1000) N = 100000 out1 = nfft.n...
Discussion Increment the count manually
# morewords = ['why','are','you','not','looking','in','my','eyes'] # for word in morewords: # word_counts[word] += 1
notebooks/ch01/12_determine_the_top_n_items_occurring_in_a_list.ipynb
tuanavu/python-cookbook-3rd
mit
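A runnable sketch of the manual-increment approach (the word lists here are illustrative, since the original `words` counter is built in an earlier cell):

```python
from collections import Counter

# Illustrative starting data (the recipe builds this in an earlier cell)
words = ['eyes', 'the', 'eyes', 'look', 'the', 'eyes']
word_counts = Counter(words)

# Increment the count for each new word manually
morewords = ['why', 'are', 'you', 'not', 'looking', 'in', 'my', 'eyes']
for word in morewords:
    word_counts[word] += 1

print(word_counts['eyes'])  # 3 occurrences in words + 1 in morewords = 4
```

This is equivalent to calling `word_counts.update(morewords)`, which is usually the more idiomatic choice.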
Update word counts using update()
morewords = ['why','are','you','not','looking','in','my','eyes'] word_counts.update(morewords) print(word_counts.most_common(3))
You can use Counter to do mathematical operations.
a = Counter(words) b = Counter(morewords) print(a) print(b) # Combine counts c = a + b c # Subtract counts d = a - b d
We load our classifier
path = '../../rsc/obj/' cls_path = path + 'cls.sav' cluster_path = path + 'cluster.sav' cls = pickle.load(open(cls_path, 'rb')) cluster = pickle.load(open(cluster_path, 'rb'))
code/notebooks/Phytoliths_Classifier/Phytoliths_Recognition.ipynb
jasag/Phytoliths-recognition-system
bsd-3-clause
We load the example image
img_path = '../../rsc/img/Default/2017_5_17_17_54Image_746.jpg' # img_path = '../../rsc/img/Default/2017_5_17_18_17Image_803.jpg' # img_path = '../../rsc/img/Default/2017_5_17_16_38Image_483.jpg' # img_path = '../../rsc/img/Default/2017_5_17_18_9Image_7351.jpg' # img_path = '../../rsc/img/Default/2017_5_17_15_27Image_1...
We define some necessary functions
def predict_image(imgTest): global cluster global cls num_centers = len(cluster.cluster_centers_) testInstances = [] features = daisy(imgTest) numFils, numCols, sizeDesc = features.shape features = features.reshape((numFils*numCols,sizeDesc)) pertenencias=cl...
Documenting Invariants An invariant is something that is true at some point in the code. Invariants and the contract are what we use to guide our implementation. Pre-conditions and post-conditions are special cases of invariants. Pre-conditions are true at function entry. They constrain the user. Post-conditions are t...
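One way these ideas show up in code is through `assert` statements. Here is a hypothetical Newton's-method square root (not part of the lecture's examples) that checks its pre-condition, a loop invariant, and its post-condition:

```python
def sqrt_newton(x, tol=1e-12):
    """Square root of x via Newton's method, with explicit invariant checks."""
    assert x >= 0, "pre-condition: x must be non-negative"
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tol:
        assert guess > 0  # loop invariant: the iterate stays positive
        guess = 0.5 * (guess + x / guess)
    assert abs(guess * guess - x) <= tol  # post-condition: guess*guess is close to x
    return guess

print(sqrt_newton(4.0))
```

The pre-condition constrains the caller, the post-condition is a promise to the caller, and the loop invariant documents what must hold on every pass through the loop.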
def quad_roots(a=1.0, b=2.0, c=0.0): """Returns the roots of a quadratic equation: ax^2 + bx + c. INPUTS ======= a: float, optional, default value is 1 Coefficient of quadratic term b: float, optional, default value is 2 Coefficient of linear term c: float, optional, default v...
lectures/L7/L7.ipynb
IACS-CS-207/cs207-F17
mit
Accessing Documentation (1) Documentation can be accessed by calling the __doc__ special method Simply calling function_name.__doc__ will give a pretty ugly output You can make it cleaner by making use of splitlines()
quad_roots.__doc__.splitlines()
Accessing Documentation (2) A nice way to access the documentation is to use the pydoc module.
import pydoc pydoc.doc(quad_roots)
Testing There are different kinds of tests inspired by the interface principles just described. acceptance tests verify that a program meets a customer's expectations. In a sense these are a test of the interface to the customer: does the program do everything you promised the customer it would do? unit tests are t...
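As a reminder of what a doctest is before running `testmod`: the `>>>` lines in a docstring are executed and their results compared against the expected output shown beneath them. A tiny hypothetical example (not part of the lecture's `quad_roots`):

```python
import doctest

def repeat(s, n):
    """Repeat a string n times.

    >>> repeat('ab', 3)
    'ababab'
    >>> repeat('x', 0)
    ''
    """
    return s * n

# Collect and run every doctest found in this module's docstrings
doctest.testmod()
```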
import doctest doctest.testmod(verbose=True)
Principles of Testing Test simple parts first Test code at its boundaries The idea is that most errors happen at data boundaries such as empty input, single input item, exactly full array, weird values, etc. If a piece of code works at the boundaries, it's likely to work elsewhere... Program defensively "Program defen...
def test_quadroots(): assert quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j)) test_quadroots()
Test at the boundaries Here we write a test to handle the crazy case in which the user passes strings in as the coefficients.
def test_quadroots_types(): try: quad_roots("", "green", "hi") except TypeError as err: assert(type(err) == TypeError) test_quadroots_types()
We can also check to make sure the $a=0$ case is handled okay:
def test_quadroots_zerocoeff(): try: quad_roots(a=0.0) except ValueError as err: assert(type(err) == ValueError) test_quadroots_zerocoeff()
When you get an error It could be that: you messed up an implementation you did not handle a case your test was messed up (be careful of this) If the error was not found in an existing test, create a new test that represents the problem before you do anything else. The test should capture the essence of the problem: ...
%%file roots.py def quad_roots(a=1.0, b=2.0, c=0.0): """Returns the roots of a quadratic equation: ax^2 + bx + c = 0. INPUTS ======= a: float, optional, default value is 1 Coefficient of quadratic term b: float, optional, default value is 2 Coefficient of linear term c: float,...
Let's put our tests into one file.
%%file test_roots.py import roots def test_quadroots_result(): assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j)) def test_quadroots_types(): try: roots.quad_roots("", "green", "hi") except TypeError as err: assert(type(err) == TypeError) def test_quadroots_zerocoeff(): try...
Code Coverage In some sense, it would be nice to somehow check that every line in a program has been covered by a test. If you could do this, you might know that a particular line has not contributed to making something wrong. But this is hard to do: it would be hard to use normal input data to force a program to go t...
%%file roots.py def linear_roots(a=1.0, b=0.0): """Returns the roots of a linear equation: ax+ b = 0. INPUTS ======= a: float, optional, default value is 1 Coefficient of linear term b: float, optional, default value is 0 Coefficient of constant term RETURNS ======== ...
Run the tests and check code coverage
!pytest --cov
Run the tests, report code coverage, and report missing lines.
!pytest --cov --cov-report term-missing
Run tests, including the doctests, report code coverage, and report missing lines.
!pytest --doctest-modules --cov --cov-report term-missing
Let's put some tests in for the linear roots function.
%%file test_roots.py import roots def test_quadroots_result(): assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j)) def test_quadroots_types(): try: roots.quad_roots("", "green", "hi") except TypeError as err: assert(type(err) == TypeError) def test_quadroots_zerocoeff(): try...
Now run the tests and check code coverage.
!pytest --doctest-modules --cov --cov-report term-missing
Purpose HoloViews is an incredibly convenient way of working interactively and exploratorily within a notebook or command-line context. However, once you have implemented a polished interactive dashboard or some other complex interactive visualization, you will often want to deploy it outside the notebook to share with ...
# Declare some points points = hv.Points(np.random.randn(1000, 2)) # Declare points as source of selection stream selection = hv.streams.Selection1D(source=points) # Write function that uses the selection indices to slice points and compute stats def selected_info(index): arr = points.array()[index] if index:...
examples/user_guide/Deploying_Bokeh_Apps.ipynb
ioam/holoviews
bsd-3-clause
<img src='https://assets.holoviews.org/gifs/examples/streams/bokeh/point_selection1d.gif'></img> Working with the BokehRenderer When working with Bokeh server or wanting to manipulate a backend specific plot object you will have to use a HoloViews Renderer directly to convert the HoloViews object into the backend speci...
renderer = hv.renderer('bokeh') print(renderer)
BokehRenderer() All Renderer classes in HoloViews are so-called ParameterizedFunctions; they provide both classmethods and instance methods to render an object. You can easily create a new Renderer instance using the .instance method:
renderer = renderer.instance(mode='server')
Renderers can also have different modes. In this case we will instantiate the renderer in 'server' mode, which tells the Renderer to render the HoloViews object to a format that can easily be deployed as a server app. Before going into more detail about deploying server apps we will quickly remind ourselves how the ren...
hvplot = renderer.get_plot(layout) print(hvplot)
&lt;LayoutPlot LayoutPlot01811&gt; Using the state attribute on the HoloViews plot we can access the Bokeh Column model, which we can then work with directly.
hvplot.state
Column(id='1570', ...) In the background this is how HoloViews converts any HoloViews object into Bokeh models, which can then be converted to embeddable or standalone HTML and be rendered in the browser. This conversion is usually done in the background using the figure_data method:
html = renderer._figure_data(hvplot, 'html')
Bokeh Documents In Bokeh the Document is the basic unit at which Bokeh models (such as plots, layouts and widgets) are held and serialized. The serialized JSON representation is then sent to BokehJS on the client-side browser. When in 'server' mode the BokehRenderer will automatically return a server Document:
renderer(layout)
(&lt;bokeh.document.Document at 0x11afc7590&gt;, {'file-ext': 'html', 'mime_type': u'text/html'}) We can also easily use the server_doc method to get a Bokeh Document, which does not require you to make an instance in 'server' mode.
doc = renderer.server_doc(layout) doc.title = 'HoloViews App'
In the background however, HoloViews uses the Panel library to render components to a Bokeh model which can be rendered in the notebook, to a file or on a server:
import panel as pn model = pn.panel(layout).get_root() model
For more information on the interaction between Panel and HoloViews see the Panel documentation. Deploying with panel serve Deployment from a script with panel serve is one of the most common ways to deploy a Bokeh app. Any .py or .ipynb file that attaches a plot to Bokeh's curdoc can be deployed using panel serve....
hv.renderer('bokeh').server_doc(layout)
In addition to starting a server from a script we can also start up a server interactively, so let's do a quick deep dive into Bokeh Application and Server objects and how we can work with them from within HoloViews. Bokeh Server To start a Bokeh server directly from a notebook we can also use Panel, specifically we'll...
def sine(frequency, phase, amplitude): xs = np.linspace(0, np.pi*4) return hv.Curve((xs, np.sin(frequency*xs+phase)*amplitude)).opts(width=800) ranges = dict(frequency=(1, 5), phase=(-np.pi, np.pi), amplitude=(-2, 2), y=(-2, 2)) dmap = hv.DynamicMap(sine, kdims=['frequency', 'phase', 'amplitude']).redim.range(...
&lt;bokeh.server.server.Server object at 0x10b3a0510&gt; Next we can define a callback on the IOLoop that will open the server app in a new browser window and actually start the app (and if outside the notebook the IOLoop):
server.start() server.show('/') # Outside the notebook ioloop needs to be started # from tornado.ioloop import IOLoop # loop = IOLoop.current() # loop.start()
After running the cell above you should have noticed a new browser window popping up displaying our plot. Once you are done playing with it you can stop it with:
server.stop()
We can achieve the equivalent using the .show method on a Panel object:
server = pn.panel(dmap).show()
<img width='80%' src="https://assets.holoviews.org/gifs/guides/user_guide/Deploying_Bokeh_Apps/bokeh_server_new_window.png"></img> We will once again stop this Server before continuing:
server.stop()