The synaptic connection attributes inside a cartridge and those specified by Composition Rule II are loaded from the CSV file synapse_lamina.csv. A few examples are listed below. Descriptions of each of the columns follow: prename, postname - indicate the neurons connected by the synapse. model - indicates the synapse...
synapse_data = pd.read_csv("./synapse_lamina.csv") synapse_data = synapse_data.dropna(axis=1) synapse_data.head(n=7)
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
To change the lamina circuitry, rows may be added, deleted, or modified in both csv files. These files are processed by the generate_vision_gexf.py script to generate a GEXF file containing the full lamina configuration comprising 768 cartridges; this file may be used to instantiate the lamina LPU using Neurokernel. G...
%cd -q ~/neurokernel/examples/vision/data import vision_configuration as vc lamina = vc.Lamina(24, 32, 'neuron_types_lamina.csv', 'synapse_lamina.csv', None) print lamina.num_cartridges p.figure(figsize=(15,7)) X = lamina.hexarray.X Y = lamina.hexarray.Y p.plot(X.reshape(-1), Y.reshape(-1), 'o', markerfacecolor = 'w',...
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
We now create all the cartridges. Each cartridge contains one copy of all specified columnar neurons and elements as well as all the intra-cartridge connections. Individual neurons and synapses in each cartridge can be accessed as follows:
lamina.create_cartridges() lamina.cartridges[100] lamina.cartridges[100].neurons['L2'] lamina.cartridges[100].synapses[8]
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
We assign each cartridge to a position on the hexagonal grid and link it to its 6 immediate neighbor cartridges; the first element of the neighbors attribute is the cartridge itself, while the remaining 6 elements are its neighbors:
lamina.connect_cartridges() lamina.cartridges[100].neighbors
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
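The neighbor relationship on the hexagonal grid can be illustrated with axial coordinates. This is a hypothetical stand-in for what `connect_cartridges` computes internally, not Neurokernel's actual implementation; it only mirrors the convention that the first element of `neighbors` is the cartridge itself, followed by its 6 immediate neighbors:

```python
# Axial-coordinate offsets of the 6 immediate neighbours of a hex cell
HEX_OFFSETS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def hex_neighbors(q, r):
    """Return [cell, n1, ..., n6]: the cell itself followed by its 6 neighbours."""
    return [(q, r)] + [(q + dq, r + dr) for dq, dr in HEX_OFFSETS]

def hex_distance(a, b):
    """Hex (axial) distance between two cells."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2
```

Every entry after the first sits at hex distance 1 from the center cell, matching the 6-neighbor topology of the cartridge grid.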
The non-columnar neurons are created as follows:
lamina.create_non_columnar_neurons()
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
After all the cartridges and non-columnar neurons are created, we can specify interconnects between cartridges based on the composition rules. We first configure inter-cartridge synapses based on Composition Rule II:
lamina.connect_composition_II()
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
In the example below, the L4 neuron in cartridge 100 (shown as a red dot), receives inputs (green lines) from neurons in some neighboring cartridges (green dots), and provides outputs (blue lines) to neurons in other neighboring cartridges (blue dots):
p.figure(figsize=(15,7)) p.plot(X.reshape(-1), Y.reshape(-1), 'o', markerfacecolor = 'w', markeredgecolor = 'b', markersize = 10) p.axis('equal') p.axis([X.min()-1, X.max()+1, Y.min()-1, Y.max()+1]) p.gca().invert_yaxis() # plot the position of L4 neuron in cartridge 236 neuron = lamina.cartridges[236].neurons...
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
We then configure inter-cartridge synapses based on Composition Rule I:
lamina.connect_composition_I()
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
In the example below, amacrine cell 0 (red dot) receives inputs (green lines) from neurons in several neighboring cartridges (green dots), and provides outputs (blue lines) to neurons in other neighboring cartridges (blue dots):
p.figure(figsize = (15,7)) p.plot(X.reshape(-1), Y.reshape(-1), 'o', markerfacecolor = 'w', markeredgecolor = 'b', markersize = 10) p.axis('equal') p.axis([X.min()-1, X.max()+1, Y.min()-1, Y.max()+1]) p.gca().invert_yaxis() # plot the position of Amacrine cell 240 neuron = lamina.non_columnar_neurons['Am'][240...
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
We now assign a selector to each public neuron to enable possible connections to other LPUs:
lamina.add_selectors()
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
The selectors of cartridge neurons, e.g., L1 neurons, are of the form
lamina.cartridges[0].neurons['L1'].params['selector']
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
Finally, we export the full configuration to a GEXF file that can be used to instantiate the lamina LPU:
lamina.export_to_gexf('lamina.gexf.gz')
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
Executing the Combined Lamina and Medulla Model Once again assuming that the Neurokernel source has been cloned to ~/neurokernel, we first create GEXF files containing the configurations for both the lamina and medulla models:
%cd -q ~/neurokernel/examples/vision/data %run generate_vision_gexf.py
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
We then generate an input of duration 1.0 seconds:
%run gen_vis_input.py
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
Finally, we execute the model. Note that if you have access to only 1 GPU, replace --med_dev 1 with --med_dev 0 in the third line below; this will force both the lamina and medulla models to use the same GPU (at the expense of slower execution):
%cd -q ~/neurokernel/examples/vision %run vision_demo.py --lam_dev 0 --med_dev 1
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
The visualization script produces a video that depicts an input signal provided to a grid comprising neurons associated with each of the 768 cartridges in one of the fly's eyes as well as the response of select neurons in the corresponding columns in the retina/lamina and medulla LPUs. The resulting video (hosted on Yo...
import IPython.display IPython.display.YouTubeVideo('5eB78fLl1AM')
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
Reading from a PostgreSQL database with TensorFlow IO <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/postgresql"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.goo...
try: %tensorflow_version 2.x except Exception: pass !pip install tensorflow-io
site/ja/io/tutorials/postgresql.ipynb
tensorflow/docs-l10n
apache-2.0
Installing and setting up PostgreSQL (optional) Note: this notebook is designed to run only in Google Colab. It installs packages on the system and requires sudo access. If you run it in a local Jupyter notebook, proceed with caution. To demonstrate usage on Google Colab, we install a PostgreSQL server. A password and an empty database are also needed. If you are not running this notebook on Google Colab, or if you want to use an existing database, skip the following setup and proceed to the next section.
# Install postgresql server !sudo apt-get -y -qq update !sudo apt-get -y -qq install postgresql !sudo service postgresql start # Setup a password `postgres` for username `postgres` !sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';" # Setup a database with name `tfio_demo` to be used !sud...
site/ja/io/tutorials/postgresql.ipynb
tensorflow/docs-l10n
apache-2.0
Setting the required environment variables The following environment variables are based on the PostgreSQL setup in the previous section. If your setup differs, or you are using an existing database, adjust them accordingly.
%env TFIO_DEMO_DATABASE_NAME=tfio_demo %env TFIO_DEMO_DATABASE_HOST=localhost %env TFIO_DEMO_DATABASE_PORT=5432 %env TFIO_DEMO_DATABASE_USER=postgres %env TFIO_DEMO_DATABASE_PASS=postgres
site/ja/io/tutorials/postgresql.ipynb
tensorflow/docs-l10n
apache-2.0
Preparing data on the PostgreSQL server For this tutorial we create a database and populate it with data for demonstration purposes. The data used in this tutorial comes from the Air Quality Data Set, available from the UCI Machine Learning Repository. Below is a preview of a subset of the Air Quality Data Set: Date | Time | CO(GT) | PT08.S1(CO) | NMHC(GT) | C6H6(GT) | PT08.S2(NMHC) | NOx(GT) | PT08.S3(NOx) | NO2(GT) | PT08.S4(...
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql !PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
site/ja/io/tutorials/postgresql.ipynb
tensorflow/docs-l10n
apache-2.0
Creating a Dataset from the PostgreSQL server and using it in TensorFlow Creating a dataset from a PostgreSQL server is as easy as calling tfio.experimental.IODataset.from_sql with the query and endpoint arguments: query is the SQL query for the selected columns of a table, and endpoint is the address and database name.
import os import tensorflow_io as tfio endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format( os.environ['TFIO_DEMO_DATABASE_USER'], os.environ['TFIO_DEMO_DATABASE_PASS'], os.environ['TFIO_DEMO_DATABASE_HOST'], os.environ['TFIO_DEMO_DATABASE_PORT'], os.environ['TFIO_DEMO_DATABASE_NAME'], ) dat...
site/ja/io/tutorials/postgresql.ipynb
tensorflow/docs-l10n
apache-2.0
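The `endpoint` string assembled above follows a libpq-style URL. A small helper (hypothetical, not part of tensorflow-io) makes the expected format explicit:

```python
def pg_endpoint(user, password, host, port, dbname):
    """Build the postgresql:// URL consumed by tfio.experimental.IODataset.from_sql."""
    return "postgresql://{}:{}@{}?port={}&dbname={}".format(
        user, password, host, port, dbname)

# With the demo credentials from the environment-variable section:
url = pg_endpoint("postgres", "postgres", "localhost", 5432, "tfio_demo")
```

For the demo setup this yields `postgresql://postgres:postgres@localhost?port=5432&dbname=tfio_demo`.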
As the dataset.element_spec output above shows, the elements of the created Dataset are Python dict objects keyed by the column names of the database table, which makes applying further operations quite convenient. For example, we can select the nox and no2 fields of the Dataset and compute their difference:
dataset = tfio.experimental.IODataset.from_sql( query="SELECT nox, no2 FROM AirQualityUCI;", endpoint=endpoint) dataset = dataset.map(lambda e: (e['nox'] - e['no2'])) # check only the first 20 record dataset = dataset.take(20) print("NOx - NO2:") for difference in dataset: print(difference.numpy())
site/ja/io/tutorials/postgresql.ipynb
tensorflow/docs-l10n
apache-2.0
Analysis conditions As the analysis target, we consider ground with a seismic S-wave velocity of 2000 m/sec.
# S-wave velocity (m/sec)
Vs = 2.000e+03
# Poisson's ratio (-)
Nu = 4.800e-01
# Mass density (kg/m3)
rho = 1.800e+01
# Lame elastic constant (shear modulus)
Mu = rho*Vs**2
# Young's modulus
E = Mu*(2*(1+Nu))
# Lame elastic constant (lambda)
Lambda = E*Nu/((1+Nu)*(1-2*Nu))
# P-wave velocity (m/sec)
Vp = np.sqrt((Lambda+2.000e+00*Mu)/rho)
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
The analysis domain is a 6000 m × 6000 m rectangular region.
d = 1.500e+02
x = 6.000e+03
z = 6.000e+03
m = gf.Mesh('cartesian', np.arange(0., x+d, d), np.arange(0., z+d, d))
m.set('optimize_structure')
m.export_to_pos("./pos/m.pos")
# MeshFem object for the displacement
mfu = gf.MeshFem(m, 2)
mfu.set_fem(gf.Fem('FEM_QK(2,1)'))
# MeshFem object for data
mfd = gf.MeshFem(m, 1)
mfd.set_fem(gf.Fem('FEM_QK(2,1)'))
mi...
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
For the boundary conditions, a damper boundary accounting for impedance (horizontal $\rho V_S A$, vertical $\rho V_P A$) is placed on the bottom face, and horizontal rollers are placed on the side faces, where $A$ denotes the tributary area of each damper. We now define the side and bottom faces, which are needed below.
P = m.pts() cbot = (abs(P[1,:]-0.000e+00) < 1.000e-6) cright = (abs(P[0,:]-x) < 1.000e-6) cleft = (abs(P[0,:]-0.000e+00) < 1.000e-6) pidbot = np.compress(cbot,range(0,m.nbpts())) pidright = np.compress(cright,range(0,m.nbpts())) pidleft = np.compress(cleft,range(0,m.nbpts())) fbot = m.faces_from_pid(pidbot) fright = ...
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
When setting up the horizontal rollers on the left and right sides, the $H$ and $R$ of the Dirichlet condition $HU=R$ are assembled separately on each side and then added together.
(H_LEFT,R_LEFT) = gf.asm_dirichlet(LEFT, mim, mfu, mfd, mfd.eval('[[0,0],[0,1]]'), mfd.eval('[0,0]')) (H_RIGHT,R_RIGHT) = gf.asm_dirichlet(RIGHT, mim, mfu, mfd, mfd.eval('[[0,0],[0,1]]'), mfd.eval('[0,0]')) H = H_LEFT+H_RIGHT R = R_LEFT+R_RIGHT (N,U0) = H.dirichlet_nullspace(R)
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
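The role of the nullspace factorization can be sketched as follows. This is the standard reduction, stated under the assumption that `dirichlet_nullspace` returns an $N$ whose columns span the nullspace of $H$ and a particular solution $U_0$, so that every admissible displacement is $U = NV + U_0$. Substituting into the equation of motion and premultiplying by $N^{\mathsf T}$ eliminates the constraint:

```latex
\begin{aligned}
HU &= R, \qquad U = N V + U_0, \qquad HN = 0, \qquad H U_0 = R,\\
N^{\mathsf T} M N\,\ddot{V} + N^{\mathsf T} C N\,\dot{V} + N^{\mathsf T} K N\,V
 &= N^{\mathsf T}\left(F - M \ddot{U}_0 - C \dot{U}_0 - K U_0\right).
\end{aligned}
```

With the homogeneous rollers used here ($R = 0$, hence $U_0 = 0$) the right-hand side reduces to $N^{\mathsf T} F$, which is why the reduced matrices $N^{\mathsf T} K N$, $N^{\mathsf T} M N$, and $N^{\mathsf T} C N$ appear when the matrices are assembled for the time-stepping loop.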
The viscous boundary on the bottom face is taken into account as external viscous damping by adding terms computed from the Neumann condition to the damping matrix.
nbd = mfd.nbdof() C_BOTTOM = gf.asm_boundary_source(BOTTOM, mim, mfu, mfd, np.repeat([[rho*Vs], [rho*Vp]],nbd,1)) C_BOTTOM_X = gf.asm_boundary_source(BOTTOM, mim, mfu, mfd, np.repeat([[rho*Vs], [0]],nbd,1))
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
Governing equations Here we review the Navier equations, the governing equations of the elastic body treated in this computation: $\left(\lambda+\mu\right)\dfrac{\partial}{\partial x}\left(\dfrac{\partial u_{x}}{\partial x}+\dfrac{\partial u_{y}}{\partial y}+\dfrac{\partial u_{z}}{\partial z}\right)+\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{...
# Stiffness matrix
K = gf.asm_linear_elasticity(mim, mfu, mfd, np.repeat([Lambda], nbd), np.repeat([Mu], nbd))
# Mass matrix
M = gf.asm_mass_matrix(mim, mfu)*rho
# Damping matrix
C = gf.Spmat('copy', M)
C.clear()
C.set_diag((C_BOTTOM))
C_X = C_BOTTOM_X
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
Although only in-plane waves are computed here, out-of-plane waves can also be computed by setting $\lambda = -\mu$ in the input. At this point the matrices are GetFEM++ Spmat objects; we write them out to files in MatrixMarket format and read them back in as SciPy sparse matrices. MatrixMarket is one of the major file formats for sparse matrices. http://math.nist.gov/MatrixMarket/formats.html
N.save('mm', "N.mtx"); N = io.mmread("N.mtx")
K.save('mm', "K.mtx"); K = io.mmread("K.mtx")
M.save('mm', "M.mtx"); M = io.mmread("M.mtx")
C.save('mm', "C.mtx"); C = io.mmread("C.mtx")
# Matrices with the side boundary conditions applied
Nt = N.transpose()
KK = Nt*K*N
MM = Nt*M*N
CC = Nt*C*N
CC_X = Nt*C_X
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
A Ricker wavelet is used as the input velocity waveform entering through the viscous boundary at the bottom.
TP = 200 VP = 1.0 time_step = 500 wave = np.zeros(time_step) time = np.arange(TP*2) omegaP = 2.000E+00*np.pi/TP tauR = omegaP/np.sqrt(2.0)*(time-TP) wave[10+time] = -np.sqrt(np.e)*tauR*VP*np.exp(-tauR**2/2.000E+00) %matplotlib inline plt.plot(wave) plt.show()
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
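One handy property for sanity-checking the pulse: the analytic waveform above, $-\sqrt{e}\,\tau_R V_P e^{-\tau_R^2/2}$, attains its extrema at $\tau_R = \pm 1$ with magnitude exactly $V_P$, so the sampled peak should sit just below `VP`. A self-contained check using the same constants as the cell above:

```python
import numpy as np

TP, VP, time_step = 200, 1.0, 500
wave = np.zeros(time_step)
time = np.arange(TP * 2)
omegaP = 2.0 * np.pi / TP
tauR = omegaP / np.sqrt(2.0) * (time - TP)
wave[10 + time] = -np.sqrt(np.e) * tauR * VP * np.exp(-tauR**2 / 2.0)

peak = np.abs(wave).max()   # analytic maximum is VP, reached at tauR = +/-1
```

Because the time axis only samples near $\tau_R = \pm 1$, the discrete peak is slightly below $V_P$ but never above it.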
Time-history response analysis with the Newmark-β method We now have the mass, stiffness, and damping matrices and the input ground motion required for a time-history response analysis. Using these, we perform the analysis with the Newmark-β method.
# Value of beta
beta = 1./ 4.
# Time step
dt = 0.01
sl = gf.Slice(('none',), mfu, 2)
MMM = MM+dt/2.000e+00*CC+beta*dt**2*KK
dis = np.zeros(CC_X.size)
vel = np.zeros(CC_X.size)
acc = np.zeros(CC_X.size)
for stpot in np.arange(1,time_step):
    dis0 = dis
    vel0 = vel
    acc0 = acc
    FFF = -CC*(vel0+dt/2.000e+00*acc0)-KK*(dis0+vel0*...
demo/linear-dynamic-2D.ipynb
tkoyama010/getfem_presentation
cc0-1.0
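The update inside that loop can be illustrated on a single degree of freedom. The following is a minimal average-acceleration Newmark sketch (β = 1/4, γ = 1/2, no damping, no forcing), not the matrix code above: for $m\ddot{x}+kx=0$ with $x_0=1$, the scheme should track $x(t)=\cos(\omega t)$ closely.

```python
import numpy as np

m, k = 1.0, (2.0 * np.pi) ** 2      # omega = 2*pi  ->  period T = 1 s
beta, gamma, dt = 1.0 / 4.0, 1.0 / 2.0, 0.01

x, v = 1.0, 0.0
a = (-k * x) / m                     # initial acceleration from the equation of motion
for _ in range(100):                 # integrate over one full period
    # Solve (m + beta*dt^2*k) * a_new = -k * (x + dt*v + (1/2 - beta)*dt^2*a)
    a_new = -k * (x + dt * v + (0.5 - beta) * dt**2 * a) / (m + beta * dt**2 * k)
    # Newmark displacement and velocity updates
    x = x + dt * v + dt**2 * ((0.5 - beta) * a + beta * a_new)
    v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    a = a_new
```

After one period the displacement returns very close to 1: the average-acceleration variant is unconditionally stable, second-order accurate, and introduces no numerical damping, which is why it is a common default for structural dynamics.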
<a id="dualdemo"></a> Example 2: Dual Simulations This example plots a deterministic simulation and a stochastic simulation of the same system. Back to top
// SBML Part model *myModel() // Reactions: J0: A -> B; k*A; A = 10; k = 1; end // SED-ML Part // Models model1 = model "myModel" // Simulations simulation1 = simulate uniform(0, 5, 100) simulation2 = simulate uniform_stochastic(0, 5, 100) // Tasks task1 = run simulation1 on model1 task2 = run simulation2 on mo...
example-notebooks/omex-basics.ipynb
0u812/nteract
bsd-3-clause
<a id="ensemble"></a> Example 3: Stochastic Ensemble This example uses a repeated task to run multiple copies of a stochastic simulation, then plots the ensemble. Back to top
// SBML Part model *myModel() // Reactions: J0: A -> B; k*A; A = 100; k = 1; end // SED-ML Part // Models model1 = model "myModel" // Simulations simulation1 = simulate uniform_stochastic(0, 5, 100) // Tasks task1 = run simulation1 on model1 repeat1 = repeat task1 for \ local.x in uniform(0,25,25), reset=True...
example-notebooks/omex-basics.ipynb
0u812/nteract
bsd-3-clause
<a id="phaseportrait"></a> Example 4: Phase portrait In addition to timecourse plots, SED-ML can also be used to create phase portraits. This is useful to show the presence (or absence, in this case) of limit cycles. Here, we use the well-known Lorenz attractor to show this feature. Back to top
// -- Begin Antimony block model *lorenz() // Rate Rules: x' = sigma*(y - x); y' = x*(rho - z) - y; z' = x*y - beta*z; // Variable initializations: x = 0.96259; sigma = 10; y = 2.07272; rho = 28; z = 18.65888; beta = 2.67; // Other declarations: var x, y, z; const sigma, rho, beta; end // ...
example-notebooks/omex-basics.ipynb
0u812/nteract
bsd-3-clause
<a id="paramscan"></a> Example 5: Parameter scanning Through the use of repeated tasks, SED-ML can be used to scan through parameter values. This example shows how to scan through a set of predefined values for a kinetic parameter (J1_KK2). Back to top
// -- Begin Antimony block model *MAPKcascade() // Compartments and Species: compartment compartment_; species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_; species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_; species MAPK_P in compartment_, MAPK_PP in compartment_...
example-notebooks/omex-basics.ipynb
0u812/nteract
bsd-3-clause
Tutorials Preliminaries: Setup & introduction Beam dynamics Tutorial N1. Linear optics. Web version. Linear optics. Double Bend Achromat (DBA). A simple example of using OCELOT functions to get a periodic solution for a storage ring cell. Tutorial N2. Tracking. Web version. Linear optics of the European XFEL Injector...
import IPython print('IPython:', IPython.__version__) import numpy print('numpy:', numpy.__version__) import scipy print('scipy:', scipy.__version__) import matplotlib print('matplotlib:', matplotlib.__version__) import ocelot print('ocelot:', ocelot.__version__)
demos/ipython_tutorials/1_introduction.ipynb
iagapov/ocelot
gpl-3.0
Optical function calculation Uses: * the twiss() function, and * the Twiss() object, which contains the Twiss parameters and other information at one particular position (s) of the lattice. To calculate Twiss parameters you have to run the twiss(lattice, tws0=None, nPoints=None) function. If you want to get a periodic solution, leave tws0 at its default...
tws=twiss(lat) # to see twiss parameters at the beginning of the cell, uncomment next line # print(tws[0]) # to see twiss parameters at the end of the cell, uncomment next line print(tws[-1]) # plot optical functions. plot_opt_func(lat, tws, top_plot = ["Dx", "Dy"], legend=False, font_size=10) plt.show() # you also can...
demos/ipython_tutorials/1_introduction.ipynb
iagapov/ocelot
gpl-3.0
Now we define the states available
states = [[0,0],[1,0],[0,1],[1,1]] states = [np.array(state) for state in states]
Two-particles.ipynb
ivergara/science_notebooks
gpl-3.0
We can get all the combinations of the states as follows
list(itertools.combinations(states,2))
Two-particles.ipynb
ivergara/science_notebooks
gpl-3.0
Perform an XOR between the states, which is equivalent to a hop process:
for pair in itertools.combinations(states,2): xor = np.logical_xor(*pair).astype(int) print(f"Left state {pair[0]}, right state {pair[1]}, XOR result {xor}")
Two-particles.ipynb
ivergara/science_notebooks
gpl-3.0
Let's define a helper function to determine if a transition is allowed. Basically, in this problem, the process has to conserve particle number in the sense that a hop only involves one electron.
def allowed(jump): if np.sum(jump) != 1: return 0 return 1 for pair in itertools.combinations(states,2): xor = np.logical_xor(*pair).astype(int) print(f"Left state {pair[0]}, right state {pair[1]}: allowed {bool(allowed(xor))}")
Two-particles.ipynb
ivergara/science_notebooks
gpl-3.0
Now we build the matrix by computing the combinations and then adding the transpose to obtain a symmetric matrix.
matrix = np.zeros((4,4)) for i in range(len(states)): for j in range(i): matrix[i][j] = allowed(np.logical_xor(states[i], states[j]).astype(int)) matrix+matrix.T
Two-particles.ipynb
ivergara/science_notebooks
gpl-3.0
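Putting the steps above together, the whole construction is a few self-contained lines; for the four states defined earlier, the resulting symmetric matrix connects exactly the pairs of states whose XOR has weight one under the `allowed` rule:

```python
import numpy as np

# The four two-site occupation states from the section above
states = [np.array(s) for s in ([0, 0], [1, 0], [0, 1], [1, 1])]

def allowed(jump):
    """A transition is allowed iff exactly one site changed occupation."""
    return 1 if np.sum(jump) == 1 else 0

# Fill the lower triangle from the XOR rule, then symmetrize
matrix = np.zeros((4, 4))
for i in range(len(states)):
    for j in range(i):
        matrix[i][j] = allowed(np.logical_xor(states[i], states[j]).astype(int))
hop = matrix + matrix.T
```

The result couples each single-occupancy state to the empty and doubly occupied states, and leaves the diagonal zero, as expected from the rule.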
Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should: Properly handle local maxima at the endpoints of the input array. Return a Numpy array of integer indices. Handle any Python iterable as input.
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    b = np.asarray(a)
    peaks = []
    for i in range(len(b)):
        left = b[i - 1] if i > 0 else -np.inf
        right = b[i + 1] if i < len(b) - 1 else -np.inf
        if b[i] > left and b[i] > right:
            peaks.append(i)
    return np.array(peaks, dtype=int)

p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert ...
assignments/assignment07/AlgorithmsEx02.ipynb
geoneill12/phys202-2015-work
mit
Get data
data_path = spm_face.data_path() subjects_dir = data_path + '/subjects' raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds' raw = io.read_raw_ctf(raw_fname % 1) # Take first run # To save time and memory for this demo, we'll just use the first # 2.5 minutes (all we need to get 30 total events) and h...
0.15/_downloads/plot_covariance_whitening_dspm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Inspired by varlens examples, here is how this simple function works:
import urllib urltemplate = "https://raw.githubusercontent.com/hammerlab/varlens/master/test/data/CELSR1/bams/{}" url = urllib.URLopener() url.retrieve(urltemplate.format("bam_5.bam"), "bam_5.bam") url.retrieve(urltemplate.format("bam_5.bam.bai"), "bam_5.bam.bai") samfile = pysam.AlignmentFile("bam_5.bam", "rb") # C...
notebook/Naive Strategy.ipynb
hammerlab/isovar
apache-2.0
Let's compare the contexts for the variant and the reference alleles:
allele1 = "T" contexify(samfile, chromosome, location, allele1, radius) allele2 = "C" contexify(samfile, chromosome, location, allele2, radius)
notebook/Naive Strategy.ipynb
hammerlab/isovar
apache-2.0
Because we provided a parameter ai for the icy albedo, our model now contains several sub-processes contained within the process called albedo. Together these implement the step-function formula above. The process called iceline simply looks for grid cells with temperature below $T_f$.
print model1.param # A python shortcut... we can use the dictionary to pass lots of input arguments simultaneously: # same thing as before, but written differently: model1 = climlab.EBM_annual( num_lat=180, **param) print model1 def ebm_plot(e, return_fig=False): templimits = -60,32 radlimits = -340, 3...
notes/EBM_albedo_feedback.ipynb
brian-rose/env-415-site
mit
Polar-amplified warming in the EBM Add a small radiative forcing The equivalent of doubling CO2 in this model is something like $$ A \rightarrow A - \delta A $$ where $\delta A = 4$ W m$^{-2}$.
deltaA = 4. # This is a very handy way to "clone" an existing model: model2 = climlab.process_like(model1) # Now change the longwave parameter: model2.subprocess['LW'].A = param['A'] - deltaA # and integrate out to equilibrium again model2.integrate_years(5, verbose=False) plt.plot(model1.lat, model1.Ts) plt.plot...
notes/EBM_albedo_feedback.ipynb
brian-rose/env-415-site
mit
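The expected magnitude of the response can be estimated with a zero-dimensional energy balance: if outgoing longwave radiation is parameterized as $OLR = A + BT$, then reducing $A$ by $\delta A$ warms every latitude by $\delta A / B$ before any ice-albedo feedback. The value of B below is a typical EBM choice assumed here for illustration; the actual value lives in the `param` dictionary.

```python
# Zero-dimensional estimate of warming from a delta-A forcing,
# assuming OLR = A + B*T and no ice-albedo feedback.
B = 2.0        # longwave feedback parameter (W m-2 K-1), assumed for illustration
deltaA = 4.0   # radiative forcing (W m-2), as in the EBM experiment above
deltaT = deltaA / B   # uniform warming (K)
```

With these numbers the no-feedback warming is 2 K everywhere, which is the baseline against which any polar amplification in the full EBM would be judged.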
In the ice-free regime, there is no polar-amplified warming. A uniform radiative forcing produces a uniform warming. A different kind of climate forcing: changing the solar constant Historically EBMs have been used to study the climatic response to a change in the energy output from the Sun. We can do that easily with ...
m = climlab.EBM_annual( num_lat=180, **param ) # The current (default) solar constant, corresponding to present-day conditions: m.subprocess.insolation.S0
notes/EBM_albedo_feedback.ipynb
brian-rose/env-415-site
mit
What happens if we decrease $S_0$?
# First, get to equilibrium m.integrate_years(5.) # Check for energy balance print climlab.global_mean(m.net_radiation) m.icelat # Now make the solar constant smaller: m.subprocess.insolation.S0 = 1300. # Integrate to new equilibrium m.integrate_years(10.) # Check for energy balance print climlab.global_mean(m....
notes/EBM_albedo_feedback.ipynb
brian-rose/env-415-site
mit
A much colder climate! The ice line is sitting at 54º. The heat transport shows that the atmosphere is moving lots of energy across the ice line, trying hard to compensate for the strong radiative cooling everywhere poleward of the ice line. What happens if we decrease $S_0$ even more?
# Now make the solar constant smaller: m.subprocess.insolation.S0 = 1200. # First, get to equilibrium m.integrate_years(5.) # Check for energy balance print climlab.global_mean(m.net_radiation)
notes/EBM_albedo_feedback.ipynb
brian-rose/env-415-site
mit
ebm_plot(m)

Something very different happened! Where is the ice line now? Now what happens if we set $S_0$ back to its present-day value?
# Now make the solar constant smaller: m.subprocess.insolation.S0 = 1365.2 # First, get to equilibrium m.integrate_years(5.) # Check for energy balance print climlab.global_mean(m.net_radiation) ebm_plot(m)
notes/EBM_albedo_feedback.ipynb
brian-rose/env-415-site
mit
To make a pretty, publication grade map for your study area look no further than cartopy. In this tutorial we will walk through generating a basemap with: - Bathymetry/topography - Coastline - Scatter data - Location labels - Inset map - Legend This code can be generalised to any region you wish to map First we import ...
import matplotlib.pyplot as plt import pandas as pd import numpy as np import xarray as xr from pathlib import Path
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
Then we import cartopy itself
import cartopy
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
A few other modules and functions which we will use later to add cool stuff to our plots. Also updating font sizes for improved readability
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER plt.rcParams.update({"font.size": 20}) SMALL_SIZE = 22 MEDIUM_SIZE = 22 LARGE_SIZE = 26 plt.rc("font", size=SMALL_SIZE) plt.rc("xtick", labelsize=SMALL_SIZE) plt.rc("ytick", labelsize=SMALL_SIZE) plt.rc("axes", titlesize=SMALL_SIZE) plt.rc("lege...
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
Note on bathymetry data To save space and time I have subset the bathymetry plotted in this example. If you wish to map a different area you will need to download the GEBCO topography data found here. You can find a notebook intro to using xarray for netcdf here on the UEA python website. Or go to Callum's github for a...
# Open prepared bathymetry dataset using pathlib to specify the relative path bathy_file_path = Path('../data/bathy.nc') bathy_ds = xr.open_dataset(bathy_file_path) bathy_lon, bathy_lat, bathy_h = bathy_ds.bathymetry.longitude, bathy_ds.bathymetry.latitude, bathy_ds.bathymetry.values
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
We're just interested in bathymetry here, so set any height values greater than 0 to 0 and set contour levels to plot later
bathy_h[bathy_h > 0] = 0 bathy_conts = np.arange(-9000, 500, 500)
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
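The in-place masking above can equivalently be written with np.minimum, which leaves the source array untouched; a quick self-contained check on some sample heights:

```python
import numpy as np

h = np.array([-3000.0, -120.0, 250.0, 40.0, -15.0])   # sample heights (m)
h_clipped = np.minimum(h, 0.0)   # same effect as h[h > 0] = 0, but not in place
bathy_conts = np.arange(-9000, 500, 500)   # contour levels, as in the notebook
```

Note the arange stop of 500 with step 500 makes 0 the last contour level, so land stays uncontoured.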
Here we load some scatter data from a two-column CSV for plotting later
# Load some scatter data of sample locations near South Georgia data = pd.read_csv("../data/scatter_coords.csv") lons = data.Longitude.values lats = data.Latitude.values # Subset of sampling locations sample_lon = lons[[0, 2, 7]] sample_lat = lats[[0, 2, 7]]
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
Now to make the map itself. First we define our coordinate system. Here we are using a Plate Carrée projection, which is one of equidistant cylindrical projections. A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html. Then we create figure and axes instance...
coord = ccrs.PlateCarree() fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(111, projection=coord) ax.set_extent([-42, -23, -60, -50], crs=coord);
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
Now we contour the bathymetry data
fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(111, projection=coord) ax.set_extent([-42, -23, -60, -50], crs=coord) bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
A good start. To make it more map like we add gridlines, formatted labels and a colorbar
fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(111, projection=coord) ax.set_extent([-42, -23, -60, -50], crs=coord) bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r") gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, l...
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
Now to add a few more features. First coastlines from cartopy's natural features toolbox. Then scatters of the samples we imported earlier
fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(111, projection=coord) ax.set_extent([-42, -23, -60, -50], crs=coord) bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r") gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, l...
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
To finish off the map we add a legend for the scatter plot, an inset map showing the area at a larger scale and some text identifying the islands
fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(111, projection=coord) ax.set_extent([-42, -23, -60, -50], crs=coord) bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r") gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, l...
content/notebooks/2019-05-30-cartopy-map.ipynb
ueapy/ueapy.github.io
mit
We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows: first the top vertex (C) of the triangle is added, then the rest of the vertices (A and B), followed by the bottom edge (AB), then the rest of the edges (AC and BC), and finally the triangle is filled in (ABC).
scx = [Simplex((2,), 0), # C Simplex((0,), 1), # A Simplex((1,), 1), # B Simplex((0,1), 2), # AB Simplex((1,2), 3), # BC Simplex((0,2), 3), # AC ...
2015_2016/lab13/Computing Persistent Homology.ipynb
gregorjerse/rt2
gpl-3.0
Now the persistent homology is computed.
f = Filtration(scx, data_cmp) p = DynamicPersistenceChains(f) p.pair_simplices() smap = p.make_simplex_map(f)
2015_2016/lab13/Computing Persistent Homology.ipynb
gregorjerse/rt2
gpl-3.0
Now output the computed persistence diagram. For each critical cell that appears in the filtration the time of Birth and Death is given as well as the cell that kills it (its pair). The features that persist forever have Death value set to inf.
print "{:>10}{:>10}{:>10}{:>10}".format("First", "Second", "Birth", "Death") for i in (i for i in p if i.sign()): b = smap[i] if i.unpaired(): print "{:>10}{:>10}{:>10}{:>10}".format(b, '', b.data, "inf") else: d = smap[i.pair()] print "{:>10}{:>10}{:>10}{:>10}".format(b, d, b.data, ...
2015_2016/lab13/Computing Persistent Homology.ipynb
gregorjerse/rt2
gpl-3.0
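The pairing computed above can be reproduced with the standard boundary-matrix reduction over ℤ/2. The sketch below is independent of Dionysus and recovers the same intervals for the triangle filtration (vertices 0 = A, 1 = B, 2 = C, with the birth times used above):

```python
# Filtration in insertion order, as (sorted vertex tuple, birth time)
filt = [((2,), 0), ((0,), 1), ((1,), 1),
        ((0, 1), 2), ((1, 2), 3), ((0, 2), 3), ((0, 1, 2), 4)]
index = {s: i for i, (s, _) in enumerate(filt)}

def boundary(s):
    """Codimension-1 faces of a simplex (empty for vertices)."""
    return [] if len(s) == 1 else [s[:i] + s[i + 1:] for i in range(len(s))]

# Columns of the boundary matrix over Z/2, stored as sets of row indices
cols = [set(index[f] for f in boundary(s)) for s, _ in filt]
pivot, pairs = {}, []
for j in range(len(cols)):
    while cols[j] and max(cols[j]) in pivot:      # standard left-to-right reduction
        cols[j] ^= cols[pivot[max(cols[j])]]
    if cols[j]:
        b = max(cols[j])
        pivot[b] = j
        pairs.append((b, j))                      # simplex b is killed by simplex j

killed = {b for b, _ in pairs}
essential = [j for j, c in enumerate(cols) if not c and j not in killed]
intervals = sorted((filt[b][1], filt[d][1]) for b, d in pairs
                   if filt[b][1] != filt[d][1])   # drop zero-persistence pairs
```

This yields the finite intervals (1, 2) and (1, 3) for the two vertices merged by the edges, (3, 4) for the loop closed by AC and filled by ABC, and one essential class born at time 0 (vertex C), matching the diagram printed above.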
Load the data from the publication First we will load the data collected in [1]_. In this experiment subjects listened to natural speech. Raw EEG and the speech stimulus are provided. We will load these below, downsampling the data in order to speed up computation since we know that our features are primarily low-frequ...
path = mne.datasets.mtrf.data_path() decim = 2 data = loadmat(join(path, 'speech_data.mat')) raw = data['EEG'].T speech = data['envelope'].T sfreq = float(data['Fs']) sfreq /= decim speech = mne.filter.resample(speech, down=decim, npad='auto') raw = mne.filter.resample(raw, down=decim, npad='auto') # Read in channel p...
0.20/_downloads/e31a3c546b89121086d731bfb81c98aa/plot_receptive_field_mtrf.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
setup: 1. Generate synthetic data for temperature observation time-series
# Create time-axis for our synthetic sample utc = Calendar() # provide conversion and math for utc time-zone t0 = utc.time(2016, 1, 1) dt = deltahours(1) n = 24*3 # 3 days length #ta = TimeAxisFixedDeltaT(t0, dt, n) ta = TimeAxis(t0, dt, n) # same as ta, but needed for now(we work on aligning them) # 1. Create the ter...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
setup 2. Transform observation with bias to grid using kriging
# Generate the observation grid by kriging the observations out to 1x1km grid # first create idw and kriging parameters that we will utilize in the next steps # kriging parameters btk_params = BTKParameter() # we could tune parameters here if needed # idw parameters,somewhat adapted to the fact that we # know we inte...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
setup 3. Create 3 forecasts sets for the 1x1 km grid
# Create a forecast grid by copying the obs_grid time-series # since we know that idw of them to obs_points will give approx. # the obs_set_w_bias time-series # for the simplicity, we assume the same forecast for all 3 days fc_grid = TemperatureSourceVector() fc_grid_1_day_back = TemperatureSourceVector() # this is ...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
grid-pp: 1. Transform forecasts from grid to observation points (IDW)
# Now we have 3 simulated forecasts at a 1x1 km grid # fc_grid, fc_grid_1_day_back, fc_grid_2_day_back # we start to do the grid pp algorithm stuff # - we know the our forecasts have some degC. bias, and we would hope that # the kalman filter 'learns' the offset # as a first step we project the grid_forecasts to the ...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
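The `idw_temperature` call above wraps a standard inverse-distance-weighting interpolation. As an illustration of the algorithm itself — independent of the shyft API, with made-up point data — a minimal NumPy sketch:

```python
import numpy as np

def idw(xy_src, values, xy_dst, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation from source points to targets."""
    # pairwise distances between every target point and every source point
    d = np.linalg.norm(xy_dst[:, None, :] - xy_src[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)  # closer sources get larger weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# three hypothetical sources on a line; interpolate at the middle one
src = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
vals = np.array([10.0, 20.0, 30.0])
print(idw(src, vals, np.array([[1.0, 0.0]])))  # dominated by the nearest source: ~[20.]
```

The real shyft implementation additionally handles temperature gradients and neighborhood limits; this sketch only shows the weighting idea.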
grid-pp: 2. Calculate the bias time-series using Kalman filter on the observation set
# Create a TemperatureSourceVector to hold the set of bias time-series bias_set = TemperatureSourceVector() # Create the Kalman filter having 8 samples spaced every 3 hours to represent a daily periodic pattern kalman_dt_hours = 3 kalman_dt = deltahours(kalman_dt_hours) kta = TimeAxis(t0, kalman_dt, int(24//kalman_dt_...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
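The shyft filter learns an 8-sample daily bias pattern; as a much-simplified illustration of the underlying idea only (not the shyft implementation), a scalar Kalman filter tracking a single slowly varying bias could look like:

```python
import numpy as np

def kalman_bias(innovations, q=1e-4, r=1.0):
    """Scalar Kalman filter tracking a slowly varying bias.
    innovations: observed (forecast - observation) differences."""
    x, p = 0.0, 1.0            # initial bias estimate and its variance
    estimates = []
    for y in innovations:
        p += q                 # predict: the bias is assumed near-constant
        k = p / (p + r)        # Kalman gain
        x += k * (y - x)       # update with the new innovation
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
innov = -2.0 + 0.3 * rng.standard_normal(200)  # true bias of -2 degC plus noise
est = kalman_bias(innov)
print(est[-1])  # converges toward the true bias of -2
```

The periodic 8-sample pattern used by shyft can be seen as running one such estimator per time-of-day bin.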
grid-pp: 3. Spread the bias at observation points out to the grid using kriging
# Generate the bias grid by kriging the bias out on the 1x1km grid btk_params = BTKParameter() btk_bias_params = BTKParameter(temperature_gradient=-0.6, temperature_gradient_sd=0.25, sill=25.0, nugget=0.5, range=5000.0, zscale=20.0) bias_grid = bayesian_kriging_temperature(bias_set, grid_1x1, ta.fixed_dt, btk_bias_par...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
Presentation & Test: 8. Finally, transform the corrected forecasts from grid to observation points to check whether we reached the goal of adjusting the forecast (IDW)
# Generate the corrected forecast set by IDW transform of the temperature model to the observation points fc_at_observations_improved = idw_temperature(fc_grid_improved, obs_points, ta.fixed_dt, idw_params) fc_at_observations_raw = idw_temperature(fc_grid, obs_points, ta.fixed_dt, idw_params)
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
9. Plot the results
# Make a time-series plot of temperature sets for i in range(len(bias_set)): fig, ax = plt.subplots(figsize=(20, 10)) timestamps = [datetime.datetime.utcfromtimestamp(p.start) for p in obs_set[i].ts.time_axis] ax.plot(timestamps, obs_set[i].ts.values, label = str(i+1) + ' Observation') ax.plot(timestamp...
notebooks/grid-pp/gridpp_geopoints.ipynb
statkraft/shyft-doc
lgpl-3.0
Annotating bad spans of data ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The tutorial tut-events-vs-annotations describes how :class:~mne.Annotations can be read from embedded events in the raw recording file, and tut-annotate-raw describes in detail how to interactively annotate a :class:~mne.io.Raw data object. Here, we focus on be...
fig = raw.plot() fig.canvas.key_press_event('a')
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
.. sidebar:: Annotating good spans The default "BAD\_" prefix for new labels can be removed simply by pressing the backspace key four times before typing your custom annotation label. You can see that the default annotation label is "BAD_"; this can be edited prior to pressing the "Add label" button to customize the label...
eog_events = mne.preprocessing.find_eog_events(raw) onsets = eog_events[:, 0] / raw.info['sfreq'] - 0.25 durations = [0.5] * len(eog_events) descriptions = ['bad blink'] * len(eog_events) blink_annot = mne.Annotations(onsets, durations, descriptions, orig_time=raw.info['meas_date']) raw.se...
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
See the section tut-section-programmatic-annotations for more details on creating annotations programmatically. Rejecting Epochs based on channel amplitude ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Besides "bad" annotations, the :class:mne.Epochs class constructor has another means of rejecting epochs, based on signa...
reject_criteria = dict(mag=3000e-15, # 3000 fT grad=3000e-13, # 3000 fT/cm eeg=100e-6, # 100 μV eog=200e-6) # 200 μV flat_criteria = dict(mag=1e-15, # 1 fT grad=1e-13, # 1 fT/cm ...
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Alternatively, if rejection thresholds were not originally given to the :class:~mne.Epochs constructor, they can be passed to :meth:~mne.Epochs.drop_bad later instead; this can also be a way of imposing progressively more stringent rejection criteria:
stronger_reject_criteria = dict(mag=2000e-15, # 2000 fT grad=2000e-13, # 2000 fT/cm eeg=100e-6, # 100 μV eog=100e-6) # 100 μV epochs.drop_bad(reject=stronger_reject_criteria) print(epochs.drop_log)
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Finally, put a %%tutor at the top of any cell with Python code, and watch the visualization:
%%tutor mylist = [] for i in range(10): mylist.append(i ** 2)
examples/Tutor Magic in IPython.ipynb
Calysto/metakernel
bsd-3-clause
Initialization Throughout the course, some code is already written for you, and organized in modules called packages. The cell below is an initialization step that must be called at the beginning of the notebook.
import packages.initialization import pioneer3dx as p3dx p3dx.init()
Moving the Robot.ipynb
ecervera/UJI_AMR
mit
Motion Let's move the robot on the simulator! You are going to use a widget, a Graphical User Interface (GUI) with two sliders for moving the robot in two ways: translation and rotation.
import motion_widget
Moving the Robot.ipynb
ecervera/UJI_AMR
mit
Get number of pages for publications
#open first page, parse html, get number of pages and their links import html5lib import urllib2 url="http://www.research.lancs.ac.uk/portal/en/organisations/energy-lancaster/publications.html" aResp = urllib2.urlopen(url) t = aResp.read() dom = html5lib.parse(t, treebuilder="dom") links=getAttr(dom,'portal_navigator_p...
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
csaladenes/csaladenes.github.io
mit
Extract links to publications, from all pages
#create publist array publist=[] #parse publications links on all pages for pagenr in range(nr_of_pages): aResp = urllib2.urlopen(url+'?page='+str(pagenr)) t = aResp.read() dom = html5lib.parse(t, treebuilder="dom") #get html list htmlpublist=dom.getElementsByTagName('ol') #extract pub links ...
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
csaladenes/csaladenes.github.io
mit
Keyword extraction, for each publication
for r in range(len(publist)): pub=publist[r] aResp = urllib2.urlopen(pub) t = aResp.read() dom = html5lib.parse(t, treebuilder="dom") #get keywords from pub page keywords=getAttr(dom,'keywords',el='ul') if keywords: pubdict[pub]['keywords']=[i.childNodes[0].childNodes[0].nodeValue fo...
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
csaladenes/csaladenes.github.io
mit
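The scraping above depends on the Python 2 `urllib2`/`html5lib` stack and a custom `getAttr` helper. A stdlib-only sketch of the same idea — collecting the text of keyword list items — where the `class="keywords"` markup is a hypothetical example, not the portal's actual HTML:

```python
from html.parser import HTMLParser

class KeywordParser(HTMLParser):
    """Collects the text of <li> items inside a <ul class="keywords"> list."""
    def __init__(self):
        super().__init__()
        self.in_list = False
        self.in_item = False
        self.keywords = []
    def handle_starttag(self, tag, attrs):
        if tag == 'ul' and ('class', 'keywords') in attrs:
            self.in_list = True
        elif tag == 'li' and self.in_list:
            self.in_item = True
    def handle_endtag(self, tag):
        if tag == 'ul':
            self.in_list = False
        elif tag == 'li':
            self.in_item = False
    def handle_data(self, data):
        if self.in_item and data.strip():
            self.keywords.append(data.strip())

parser = KeywordParser()
parser.feed('<ul class="keywords"><li>energy</li><li>solar</li></ul>')
print(parser.keywords)  # ['energy', 'solar']
```

The real page would be fetched with `urllib.request.urlopen` and fed to the parser the same way.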
Mine titles and abstracts for topics
#import dependencies import pandas as pd from textblob import TextBlob #import spacy #nlp = spacy.load('en') #run once if you need to download nltk corpora, igonre otherwise import nltk nltk.download() #get topical nouns for title and abstract using natural language processing for i in range(len(pubdict.keys())): ...
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
csaladenes/csaladenes.github.io
mit
Save output for D3 word cloud
keywords=[j for i in pubdict if 'keywords' in pubdict[i] if pubdict[i]['keywords'] for j in pubdict[i]['keywords']] titles=[pubdict[i]['title'] for i in pubdict if 'title' in pubdict[i] if pubdict[i]['title']] abstracts=[pubdict[i]['abstract'] for i in pubdict if 'abstract' in pubdict[i] if pubdict[i]['abstract']] titl...
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
csaladenes/csaladenes.github.io
mit
Having constructed three project score vectors (without title, with title, and both), we sort the projects by score; the highest-scoring projects are the best-matching research projects. We display links to them below, repeating the procedure for each topic.
for topic_id in range(1,len(topics)): #select topic #topic_id=1 #use title usetitle=True verbose=False #initiate global DFs DF=pd.DataFrame() projects1={} projects2={} projects12={} #specify depth (n most relevant projects) depth=100 #get topical nouns with textblob ...
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
csaladenes/csaladenes.github.io
mit
Quick Look at the Data Read in and display a random image from the test_dataset folder
path = '../test_dataset/IMG/*' img_list = glob.glob(path) # Grab a random image and display it idx = np.random.randint(0, len(img_list)-1) image = mpimg.imread(img_list[idx]) plt.imshow(image)
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
Perspective Transform Define the perspective transform function from the lesson and test it on an image.
# Define a function to perform a perspective transform # I've used the example grid image above to choose source points for the # grid cell in front of the rover (each grid cell is 1 square meter in the sim) def perspect_transform(img, src, dst): M ...
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
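The transform here is a planar homography. As a sketch of what the truncated function computes — without assuming any particular OpenCV call — the 3×3 matrix can be recovered from four point correspondences by solving the DLT system (the square-to-square points below are made up):

```python
import numpy as np

def find_homography(src, dst):
    """Solve for the 3x3 homography mapping src points to dst points (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the homography is the null vector of the stacked constraint matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]  # divide out the projective coordinate

# map the unit square onto a 10x10 square (hypothetical calibration points)
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
H = find_homography(src, dst)
print(apply_homography(H, np.array([[0.5, 0.5]])))  # center maps to ~[[5., 5.]]
```

In practice the warp is then applied to every pixel of the camera image; libraries such as OpenCV bundle both steps.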
Color Thresholding Define the color thresholding function from the lesson and apply it to the warped image TODO: Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns t...
# Identify pixels above the threshold # Threshold of RGB > 160 does a nice job of identifying ground pixels only def color_thresh(img, rgb_thresh=(160, 160, 160)): # Create an array of zeros same xy size as img, but single channel color_select = np.zeros_like(img[:,:,0]) # Require that each pixel be above a...
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
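The TODO asks for obstacle and rock-sample masks as well. One common approach (a sketch — the threshold values below are guesses, not tuned for the simulator) is a range threshold with both lower and upper bounds:

```python
import numpy as np

def color_in_range(img, lower=(0, 0, 0), upper=(255, 255, 255)):
    """Binary mask of pixels whose RGB values fall inside [lower, upper]."""
    mask = np.ones(img.shape[:2], dtype=np.uint8)
    for ch in range(3):
        mask &= (img[:, :, ch] >= lower[ch]) & (img[:, :, ch] <= upper[ch])
    return mask

# navigable terrain: bright pixels; obstacles would be the complement;
# rock samples: yellow-ish pixels (strong R and G, weak B)
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 200, 200)  # bright "ground" pixel
img[0, 1] = (180, 160, 20)   # yellow "rock" pixel
navigable = color_in_range(img, lower=(161, 161, 161))
rock = color_in_range(img, lower=(120, 100, 0), upper=(255, 255, 70))
print(navigable.sum(), rock.sum())  # 1 1
```

An obstacle mask can then be built as `1 - navigable` over the warped image.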
Coordinate Transformations Define the functions used to do coordinate transforms and apply them to an image.
def rover_coords(binary_img): # Identify nonzero pixels ypos, xpos = binary_img.nonzero() # Calculate pixel positions with reference to the rover position being at the # center bottom of the image. x_pixel = np.absolute(ypos - binary_img.shape[0]).astype(np.float) y_pixel = -(xpos - binary_im...
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
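The coordinate pipeline typically continues by converting the rover-centric pixel positions to polar coordinates, so the rover can steer toward the mean angle of navigable pixels; a sketch:

```python
import numpy as np

def to_polar_coords(x_pixel, y_pixel):
    """Convert rover-centric x, y pixel positions to polar (distance, angle)."""
    dist = np.sqrt(x_pixel**2 + y_pixel**2)
    angles = np.arctan2(y_pixel, x_pixel)  # angle relative to the rover's heading
    return dist, angles

d, a = to_polar_coords(np.array([3.0, 1.0]), np.array([4.0, 0.0]))
print(d)               # [5. 1.]
print(np.degrees(a))   # [53.13... 0.]
```

A steering decision can then use something like `np.mean(angles)` over all navigable pixels.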
Read in saved data and ground truth map of the world The next cell is all setup to read your saved data into a pandas dataframe. Here you'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain. After that, we'll define a class to store telemetry data and p...
# Import pandas and read in csv file as a dataframe import pandas as pd # Change this path to your data directory df = pd.read_csv('../test_dataset/robot_log.csv') img_list_sorted = df["Path"].tolist() # Create list of image pathnames # Read in ground truth map and create a 3-channel image with it ground_truth = mpimg....
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
Write a function to process stored images Modify the process_image() function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package ...
# Define a function to pass stored images to # reading rover position and yaw angle from csv file # This function will be used by moviepy to create an output video def process_image(img): # Example of how to use the Databucket() object defined in the previous cell # print(data.xpos[0], data.ypos[0], data....
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
Make a video from processed image data The cell below is set up to read the csv file you saved along with camera images from the rover. Change the pathname below to the location of the csv file for your data. Also change the path to where you want to save the output video. The ground truth map is a black and white ...
# Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from moviepy.editor import ImageSequenceClip # Define pathname to save the output video output = '../output/test_mapping.mp4' clip = ImageSequenceClip(data.images, fps=60) new_clip = clip.fl_image(process_image) #NOTE:...
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
priteshgudge/RoboND-Rover-Project
apache-2.0
Compression method: 0000000100110... — encoding each 0/1 as one bit, p = 0.5 (a Bernoulli distribution) is the case with the most information. If p = 0.01, there are very many 0s and few 1s, and one way to compress is to record how many 0s occur in a row. For example, write that 0 occurs 7 times, then 1 twice, then 0 ten times again, and then write these counts in binary. When p = 0.5 compression gains nothing, because 0 and 1 keep alternating. That is the principle: a random variable holds values, and we can judge how much information we have. If the sample data ...
-1/2*np.log2(1/2)-1/2*np.log2(1/2) # In this case the two symbols 0 and 1 are equally likely, so, as noted above, compression gains nothing.
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
kimkipyo/dss_git_kkp
mit
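The run-counting scheme just described is run-length encoding; a minimal sketch over a bit string:

```python
from itertools import groupby

def rle_encode(bits):
    """Run-length encode a bit string as (symbol, run length) pairs."""
    return [(sym, len(list(run))) for sym, run in groupby(bits)]

def rle_decode(pairs):
    return ''.join(sym * n for sym, n in pairs)

# 0 seven times, 1 twice, 0 ten times -- the example from the text
encoded = rle_encode('0000000110000000000')
print(encoded)  # [('0', 7), ('1', 2), ('0', 10)]
assert rle_decode(encoded) == '0000000110000000000'
```

When p = 0.5 the runs are mostly length 1, so the pair list is longer than the input — matching the claim that compression is pointless in that case.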
If there are 60 data points in total, with 20 where Y = 0 and 40 where Y = 1: $$ P(y=0) = \dfrac{20}{60} = \dfrac{1}{3} $$ $$ P(y=1) = \dfrac{40}{60} = \dfrac{2}{3} $$ $$ H[Y] = -\dfrac{1}{3}\log_2\left(\dfrac{1}{3}\right) -\dfrac{2}{3}\log_2\left(\dfrac{2}{3}\right) = 0.92 $$
-1/3*np.log2(1/3)-2/3*np.log2(2/3)
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
kimkipyo/dss_git_kkp
mit
If there are 40 data points in total, with 30 where Y = 0 and 10 where Y = 1: $$ P(y=0) = \dfrac{30}{40} = \dfrac{3}{4} $$ $$ P(y=1) = \dfrac{10}{40} = \dfrac{1}{4} $$ $$ H[Y] = -\dfrac{3}{4}\log_2\left(\dfrac{3}{4}\right) -\dfrac{1}{4}\log_2\left(\dfrac{1}{4}\right) = 0.81 $$
-3/4*np.log2(3/4)-1/4*np.log2(1/4)
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
kimkipyo/dss_git_kkp
mit
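The hand calculations in these cells all instantiate $H[Y] = -\sum_i p_i \log_2 p_i$; a small helper (a sketch) computes it directly from class counts:

```python
import numpy as np

def entropy(counts):
    """Shannon entropy in bits, H = sum_i p_i * log2(1/p_i), from class counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()  # drop empty classes: 0 * log(0) contributes 0
    return float(np.sum(p * np.log2(1.0 / p)))

print(round(entropy([20, 40]), 2))  # 0.92, as computed above
print(round(entropy([30, 10]), 2))  # 0.81, as computed above
print(entropy([20, 0]))             # 0.0 for a pure class
```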
If there are 20 data points in total, with all 20 where Y = 0 and none where Y = 1: $$ P(y=0) = \dfrac{20}{20} = 1 $$ $$ P(y=1) = \dfrac{0}{20} = 0 $$ $$ H[Y] \rightarrow 0 $$ Conditional entropy Conditional entropy is defined as $$ H[Y \mid X] = - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) $$ $$ H[Y \mid X] = -\int \int p(x, y) \log_2 p(y \mid x) \; dx...
-(25/100 * (20/25 * np.log2(20/25) + 5/25 * np.log2(5/25)) + 75/100 * (25/75 * np.log2(25/75) + 50/75 * np.log2(50/75)))
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
kimkipyo/dss_git_kkp
mit
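The expression above evaluates the conditional-entropy definition for one specific split (X groups of 25 and 75 points). A small helper (a sketch) computes $H[Y \mid X]$ from any contingency table of joint counts:

```python
import numpy as np

def conditional_entropy(joint_counts):
    """H[Y|X] = -sum_ij p(x_i, y_j) * log2 p(y_j | x_i), with rows of the
    contingency table indexed by X and columns by Y."""
    joint = np.asarray(joint_counts, dtype=float)
    p_xy = joint / joint.sum()
    p_y_given_x = joint / joint.sum(axis=1, keepdims=True)
    mask = p_xy > 0  # empty cells contribute 0
    return float(-np.sum(p_xy[mask] * np.log2(p_y_given_x[mask])))

# the same table as the expression above: X splits the 100 points into
# groups of 25 and 75, with Y counts (20, 5) and (25, 50)
print(round(conditional_entropy([[20, 5], [25, 50]]), 4))  # matches the value above
```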
Python implementation, using pure Python loops
def denoise(a, b): for channel in range(2): for f_band in range(4, a.shape[1] - 4): for t_step in range(1, a.shape[2] - 1): neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2] if neighborhood.mean() < 10: b[channel, f_band, ...
profiling/Denoise algorithm.ipynb
jacobdein/alpine-soundscapes
mit
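Assuming the truncated inner assignment sets `b[channel, f_band, t_step] = 0` when the 9×3 neighborhood mean falls below the threshold, the same computation can be vectorized with NumPy's `sliding_window_view` (a sketch):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def denoise_vectorized(a, threshold=10):
    """Zero out time-frequency cells whose 9x3 neighborhood mean is quiet."""
    b = a.copy()
    for channel in range(a.shape[0]):
        # all 9x3 neighborhoods of this channel's spectrogram;
        # windows[i, j] is the neighborhood centered on cell (i + 4, j + 1)
        windows = sliding_window_view(a[channel], (9, 3))
        quiet = windows.mean(axis=(2, 3)) < threshold
        b[channel, 4:a.shape[1] - 4, 1:a.shape[2] - 1][quiet] = 0
    return b
```

This avoids the triple loop entirely, which is the usual first step when profiling shows the pure-Python version dominating the runtime.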