The synaptic connection attributes within a cartridge, as well as those specified by Composition Rule II, are loaded from the CSV file synapse_lamina.csv. A few examples are listed below. The columns are described as follows:
prename, postname - the neurons connected by the synapse.
model - the synapse model used in the LPU.
cart - the relative position of the presynaptic neuron's cartridge with respect to that of the postsynaptic neuron (see the neighbor-labeling plot below).
mode - whether the synapse affects neurotransmitter release at the axon terminals of the postsynaptic neuron.
scale - the number of synapses between the pre- and postsynaptic neurons.
The remaining columns are synapse model parameters.
|
import pandas as pd

synapse_data = pd.read_csv("./synapse_lamina.csv")
synapse_data = synapse_data.dropna(axis=1)
synapse_data.head(n=7)
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
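To sketch how the loaded table can be queried, the snippet below builds a miniature stand-in for synapse_lamina.csv using only the column names described above; the row values, and the assumption that cart equal to 0 denotes an intra-cartridge connection, are hypothetical.

```python
import pandas as pd

# Hypothetical miniature version of synapse_lamina.csv (the real file has
# more rows and additional model-parameter columns); cart == 0 is assumed
# here to mean an intra-cartridge connection.
synapse_data = pd.DataFrame({
    "prename":  ["R1", "R2", "L2", "Am"],
    "postname": ["L1", "L2", "L4", "L1"],
    "model":    ["power_gpot_gpot"] * 4,
    "cart":     [0, 0, 1, 0],
    "scale":    [40, 45, 1, 3],
})

# Total synapse count per postsynaptic neuron for intra-cartridge rows.
intra = synapse_data[synapse_data["cart"] == 0]
counts = intra.groupby("postname")["scale"].sum()
print(counts.to_dict())
```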
To change the lamina circuitry, rows may be added, deleted, or modified in both csv files.
These files are processed by the generate_vision_gexf.py script to generate a GEXF file containing the full lamina configuration comprising 768 cartridges; this file may be used to instantiate the lamina LPU using Neurokernel.
Generation of Lamina Configuration and Creating GEXF
We first generate a lamina configuration model using the above two csv files, for a total of 24$\times$32=768 cartridges, where 24 is the number of rows and 32 is the number of columns for a hexagonal array. The hexagonal array is shown below.
|
%cd -q ~/neurokernel/examples/vision/data
import matplotlib.pyplot as p
import vision_configuration as vc
lamina = vc.Lamina(24, 32, 'neuron_types_lamina.csv', 'synapse_lamina.csv', None)
print(lamina.num_cartridges)
p.figure(figsize=(15,7))
X = lamina.hexarray.X
Y = lamina.hexarray.Y
p.plot(X.reshape(-1), Y.reshape(-1), 'o', markerfacecolor = 'w',
markeredgecolor = 'b', markersize = 12)
p.axis('equal')
p.axis([X.min()-1, X.max()+1, Y.min()-1, Y.max()+1])
p.xlabel('x (arbitrary unit)')
p.ylabel('y (arbitrary unit)')
p.gca().invert_yaxis()
# label one cartridge position
center_row = 12
center_col = 16
p.plot(X[center_row, center_col], Y[center_row, center_col],
marker = '$0$', markeredgecolor = 'r', hold = True)
# find and label all its neighbors, the numbers are used for
# column 'cart' in the synapse_lamina.csv
neighbors = lamina.hexarray.find_neighbor(center_row, center_col)
for neighbor, i in zip(neighbors[1:], range(1,7)):
if neighbor is not None:
neighbor_row = lamina.hexarray.row[neighbor]
neighbor_col = lamina.hexarray.col[neighbor]
p.plot(X[neighbor_row, neighbor_col], Y[neighbor_row, neighbor_col],
marker = '$'+str(i)+'$', markeredgecolor = 'r', hold = True)
tx1 = p.text(55, 10, 'Anterior', fontsize=12)
ar1 = p.arrow(65, 9.5, -3, 0, head_width = 1, color = 'k')
tx2 = p.text(63, 4, 'Dorsal', fontsize = 12)
ar2 = p.arrow(65, 9.5, 0, -3, head_width = 1, color = 'k')
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
We now create all the cartridges. Each cartridge contains one copy of all specified columnar neurons and elements as well as all the intra-cartridge connections. Individual neurons and synapses in each cartridge can be accessed as follows:
|
lamina.create_cartridges()
lamina.cartridges[100]
lamina.cartridges[100].neurons['L2']
lamina.cartridges[100].synapses[8]
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
We assign each cartridge to a position on the hexagonal grid and link it to its 6 immediate neighbor cartridges; the first element of the neighbors attribute is the cartridge itself, while the remaining 6 elements are its neighbors:
|
lamina.connect_cartridges()
lamina.cartridges[100].neighbors
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
The non-columnar neurons are created as follows:
|
lamina.create_non_columnar_neurons()
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
After all the cartridges and non-columnar neurons are created, we can specify interconnects between cartridges based on the composition rules. We first configure inter-cartridge synapses based on Composition Rule II:
|
lamina.connect_composition_II()
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
In the example below, the L4 neuron in cartridge 236 (shown as a red dot) receives inputs (green lines) from neurons in some neighboring cartridges (green dots) and provides outputs (blue lines) to neurons in other neighboring cartridges (blue dots):
|
p.figure(figsize=(15,7))
p.plot(X.reshape(-1), Y.reshape(-1), 'o', markerfacecolor = 'w',
markeredgecolor = 'b', markersize = 10)
p.axis('equal')
p.axis([X.min()-1, X.max()+1, Y.min()-1, Y.max()+1])
p.gca().invert_yaxis()
# plot the position of L4 neuron in cartridge 236
neuron = lamina.cartridges[236].neurons['L4']
x, y = neuron.position()
p.plot(x, y, 'o', markerfacecolor = 'r', markersize = 10, hold = True)
# plot the positions of the neurons the L4 neuron is presynaptic to
for synapse in neuron.outgoing_synapses:
post_x, post_y = synapse.post_neuron.position()
p.plot(post_x+0.1, post_y+0.1, 'o', markerfacecolor = 'b', markersize = 3, hold = True)
p.plot([x, post_x+0.1], [y, post_y+0.1], 'b', hold = True)
# plot the positions of the neurons the L4 neuron is postsynaptic to
for synapse in neuron.incoming_synapses:
pre_x, pre_y = synapse.pre_neuron.position()
p.plot(pre_x-0.1, pre_y-0.1, 'o', markerfacecolor = 'g', markersize = 3, hold = True)
p.plot([x, pre_x-0.1], [y, pre_y-0.1], 'g', hold = True)
p.xlabel('x (arbitrary unit)')
p.ylabel('y (arbitrary unit)')
tx1 = p.text(55, 10, 'Anterior', fontsize=12)
ar1 = p.arrow(65, 9.5, -3, 0, head_width = 1, color = 'k')
tx2 = p.text(63, 4, 'Dorsal', fontsize = 12)
ar2 = p.arrow(65, 9.5, 0, -3, head_width = 1, color = 'k')
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
We then configure inter-cartridge synapses based on Composition Rule I:
|
lamina.connect_composition_I()
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
In the example below, amacrine cell 240 (red dot) receives inputs (green lines) from neurons in several neighboring cartridges (green dots), and provides outputs (blue lines) to neurons in other neighboring cartridges (blue dots):
|
p.figure(figsize = (15,7))
p.plot(X.reshape(-1), Y.reshape(-1), 'o', markerfacecolor = 'w',
markeredgecolor = 'b', markersize = 10)
p.axis('equal')
p.axis([X.min()-1, X.max()+1, Y.min()-1, Y.max()+1])
p.gca().invert_yaxis()
# plot the position of Amacrine cell 240
neuron = lamina.non_columnar_neurons['Am'][240]
x, y = neuron.position()
p.plot(x, y, 'o', markerfacecolor = 'r', markersize = 10, hold = True)
# plot the positions of the neurons Am 240 is presynaptic to
for synapse in neuron.outgoing_synapses:
post_x, post_y = synapse.post_neuron.position()
p.plot(post_x+0.1, post_y+0.1, 'o', markerfacecolor = 'b', markersize = 3, hold = True)
p.plot([x, post_x+0.1], [y, post_y+0.1], 'b', hold = True)
# plot the positions of the neurons Am 240 is postsynaptic to
for synapse in neuron.incoming_synapses:
pre_x, pre_y = synapse.pre_neuron.position()
p.plot(pre_x-0.1, pre_y-0.1, 'o', markerfacecolor = 'g', markersize = 3, hold = True)
p.plot([x, pre_x-0.1], [y, pre_y-0.1], 'g', hold = True)
p.xlabel('x (arbitrary unit)')
p.ylabel('y (arbitrary unit)')
tx1 = p.text(55, 10, 'Anterior', fontsize=12)
ar1 = p.arrow(65, 9.5, -3, 0, head_width = 1, color = 'k')
tx2 = p.text(63, 4, 'Dorsal', fontsize = 12)
ar2 = p.arrow(65, 9.5, 0, -3, head_width = 1, color = 'k')
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
We now assign selectors to each public neuron to enable possible connections to other LPUs:
|
lamina.add_selectors()
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
The selectors of cartridge neurons, e.g., L1 neurons, are of the form:
|
lamina.cartridges[0].neurons['L1'].params['selector']
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
Finally, we output the full configuration to GEXF file format that can be used to instantiate the lamina LPU:
|
lamina.export_to_gexf('lamina.gexf.gz')
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
Executing the Combined Lamina and Medulla Model
Once again assuming that the Neurokernel source has been cloned to ~/neurokernel, we first create GEXF files containing the configurations for both the lamina and medulla models:
|
%cd -q ~/neurokernel/examples/vision/data
%run generate_vision_gexf.py
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
We then generate an input signal with a duration of 1.0 second:
|
%run gen_vis_input.py
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
Finally, we execute the model. Note that if you have access to only 1 GPU, replace --med_dev 1 with --med_dev 0 in the third line below; this will force both the lamina and medulla models to use the same GPU (at the expense of slower execution):
|
%cd -q ~/neurokernel/examples/vision
%run vision_demo.py --lam_dev 0 --med_dev 1
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
The visualization script produces a video depicting the input signal presented to a grid of neurons associated with each of the 768 cartridges in one of the fly's eyes, as well as the responses of select neurons in the corresponding columns of the retina/lamina and medulla LPUs.
The resulting video (hosted on YouTube) can be viewed below:
|
import IPython.display
IPython.display.YouTubeVideo('5eB78fLl1AM')
|
notebooks/vision.ipynb
|
neurokernel/vision
|
bsd-3-clause
|
Reading from a PostgreSQL database with TensorFlow IO
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/postgresql"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
This tutorial shows how to create a tf.data.Dataset from a PostgreSQL database server, so that the created Dataset can be passed to tf.keras for training or inference.
A SQL database is an important data source for data scientists. PostgreSQL, one of the most popular open-source SQL databases, is widely used by enterprises to store critical and transactional data. Creating a Dataset directly from a PostgreSQL database server and passing it to tf.keras for training or inference greatly simplifies the data pipeline and lets data scientists focus on building machine learning models.
Setup and usage
Install the required tensorflow-io package, and restart the runtime
|
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
|
site/ja/io/tutorials/postgresql.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Install and set up PostgreSQL (optional)
Note: this notebook is designed to be run in Google Colab only. It installs packages on the system and requires sudo access. If you want to run it in a local Jupyter notebook, please proceed with caution.
To demonstrate usage on Google Colab, we install a PostgreSQL server. A password and an empty database are also needed.
If you are not running this notebook in Google Colab, or if you prefer to use an existing database, skip the following setup and proceed to the next section.
|
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS tfio_demo;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE tfio_demo;'
|
site/ja/io/tutorials/postgresql.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Set the required environment variables
The following environment variables are based on the PostgreSQL setup from the previous section. If your setup differs, or you are using an existing database, change them accordingly.
|
%env TFIO_DEMO_DATABASE_NAME=tfio_demo
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
|
site/ja/io/tutorials/postgresql.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Prepare data on the PostgreSQL server
For demonstration purposes, this tutorial creates a database and populates it with data. The data used in this tutorial comes from the Air Quality Data Set, available from the UCI Machine Learning Repository.
Below is a preview of a subset of the Air Quality Data Set:
Date | Time | CO(GT) | PT08.S1(CO) | NMHC(GT) | C6H6(GT) | PT08.S2(NMHC) | NOx(GT) | PT08.S3(NOx) | NO2(GT) | PT08.S4(NO2) | PT08.S5(O3) | T | RH | AH
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
10/03/2004 | 18.00.00 | 2,6 | 1360 | 150 | 11,9 | 1046 | 166 | 1056 | 113 | 1692 | 1268 | 13,6 | 48,9 | 0,7578
10/03/2004 | 19.00.00 | 2 | 1292 | 112 | 9,4 | 955 | 103 | 1174 | 92 | 1559 | 972 | 13,3 | 47,7 | 0,7255
10/03/2004 | 20.00.00 | 2,2 | 1402 | 88 | 9,0 | 939 | 131 | 1140 | 114 | 1555 | 1074 | 11,9 | 54,0 | 0,7502
10/03/2004 | 21.00.00 | 2,2 | 1376 | 80 | 9,2 | 948 | 172 | 1092 | 122 | 1584 | 1203 | 11,0 | 60,0 | 0,7867
10/03/2004 | 22.00.00 | 1,6 | 1272 | 51 | 6,5 | 836 | 131 | 1205 | 116 | 1490 | 1110 | 11,2 | 59,6 | 0,7888
For more details on the Air Quality Data Set and the UCI Machine Learning Repository, see the References section.
To simplify data preparation, a SQL version of the Air Quality Data Set has been prepared and is available as AirQualityUCI.sql.
The statement to create the table is:
CREATE TABLE AirQualityUCI (
Date DATE,
Time TIME,
CO REAL,
PT08S1 INT,
NMHC REAL,
C6H6 REAL,
PT08S2 INT,
NOx REAL,
PT08S3 INT,
NO2 REAL,
PT08S4 INT,
PT08S5 INT,
T REAL,
RH REAL,
AH REAL
);
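Note that the preview above uses decimal commas (e.g. 2,6), as in the original UCI CSV, while the REAL columns of the table expect dot-decimal values; the provided AirQualityUCI.sql is assumed to have handled this conversion already. A minimal sketch of such a conversion:

```python
def to_float(field: str) -> float:
    """Convert a decimal-comma field such as '2,6' to a float."""
    return float(field.replace(",", "."))

# First few fields of the first preview row above.
row = ["2,6", "1360", "150", "11,9"]
print([to_float(f) for f in row])  # [2.6, 1360.0, 150.0, 11.9]
```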
The complete commands to create the table in the database and populate it with the data are:
|
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
|
site/ja/io/tutorials/postgresql.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Create a Dataset from the PostgreSQL server and use it in TensorFlow
Creating a dataset from a PostgreSQL server is as easy as calling tfio.experimental.IODataset.from_sql with the query and endpoint arguments: query is the SQL query for the selected columns of the table, and endpoint is the server address together with the database name.
|
import os
import tensorflow_io as tfio
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT co, pt08s1 FROM AirQualityUCI;",
endpoint=endpoint)
print(dataset.element_spec)
|
site/ja/io/tutorials/postgresql.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
As you can see from the output of dataset.element_spec above, the elements of the created Dataset are Python dict objects keyed by the column names of the table, which makes further operations quite convenient. For example, you can select the nox and no2 fields of the Dataset and compute their difference:
|
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT nox, no2 FROM AirQualityUCI;",
endpoint=endpoint)
dataset = dataset.map(lambda e: (e['nox'] - e['no2']))
# check only the first 20 records
dataset = dataset.take(20)
print("NOx - NO2:")
for difference in dataset:
print(difference.numpy())
|
site/ja/io/tutorials/postgresql.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Analysis conditions
We consider ground with a seismic S-wave velocity of 2000 m/sec.
|
import numpy as np

# S-wave velocity (m/sec)
Vs = 2.000e+03
# Poisson's ratio (-)
Nu = 4.800e-01
# mass density (kg/m3)
rho = 1.800e+01
# Lame's elastic constant (shear modulus)
Mu = rho*Vs**2
# Young's modulus
E = Mu*(2*(1+Nu))
# Lame's elastic constant
Lambda = E*Nu/((1+Nu)*(1-2*Nu))
# P-wave velocity (m/sec)
Vp = np.sqrt((Lambda+2.000e+00*Mu)/rho)
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
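The relations used in the cell above can be cross-checked independently of GetFEM: with $\mu=\rho V_S^2$ one recovers $V_S=\sqrt{\mu/\rho}$, and the ratio $V_P/V_S=\sqrt{(2-2\nu)/(1-2\nu)}$ depends only on Poisson's ratio. A standalone sketch with the same numerical values:

```python
import math

Vs, Nu, rho = 2.0e3, 0.48, 1.8e1

Mu = Vs**2 * rho                            # shear modulus from rho and Vs
E = Mu * 2 * (1 + Nu)                       # Young's modulus
Lambda = E * Nu / ((1 + Nu) * (1 - 2*Nu))   # first Lame constant
Vp = math.sqrt((Lambda + 2*Mu) / rho)       # P-wave velocity

# Consistency checks: Vs is recovered from Mu, and the Vp/Vs ratio
# depends only on the Poisson ratio (sqrt(26) ≈ 5.1 for Nu = 0.48).
print(math.sqrt(Mu / rho))  # 2000.0
print(Vp / Vs)
```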
The analysis domain is a 6000 m × 6000 m rectangular region.
|
d = 1.500e+02
x = 6.000e+03
z = 6.000e+03
m = gf.Mesh('cartesian', np.arange(0., x+d, d), np.arange(0., z+d, d))
m.set('optimize_structure')
m.export_to_pos("./pos/m.pos")
# MeshFem object for the displacement field
mfu = gf.MeshFem(m,2)
mfu.set_fem(gf.Fem('FEM_QK(2,1)'))
# MeshFem object for data
mfd = gf.MeshFem(m, 1)
mfd.set_fem(gf.Fem('FEM_QK(2,1)'))
mim = gf.MeshIm(m, gf.Integ('IM_QUAD(3)'))
%%writefile gscript
Print "./png/m.png";
Exit;
!gmsh ./pos/m.pos gscript
from IPython.core.display import Image
Image('./png/m.png')
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
For the boundary conditions, a damper boundary that accounts for impedance (horizontal $\rho V_S A$, vertical $\rho V_P A$) is placed at the bottom, and horizontal rollers are placed on the sides, where $A$ denotes the tributary area of each damper.
We define the side and bottom faces, which are needed in what follows.
|
P = m.pts()
cbot = (abs(P[1,:]-0.000e+00) < 1.000e-6)
cright = (abs(P[0,:]-x) < 1.000e-6)
cleft = (abs(P[0,:]-0.000e+00) < 1.000e-6)
pidbot = np.compress(cbot,range(0,m.nbpts()))
pidright = np.compress(cright,range(0,m.nbpts()))
pidleft = np.compress(cleft,range(0,m.nbpts()))
fbot = m.faces_from_pid(pidbot)
fright = m.faces_from_pid(pidright)
fleft = m.faces_from_pid(pidleft)
BOTTOM = 1
RIGHT = 2
LEFT = 3
SIDE = 4
m.set_region(BOTTOM, fbot)
m.set_region(RIGHT,fright)
m.set_region(LEFT,fleft)
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
To set up the horizontal rollers on the left and right sides, we assemble $H$ and $R$ of the Dirichlet condition $HU=R$ separately at each side and sum the contributions.
|
(H_LEFT,R_LEFT) = gf.asm_dirichlet(LEFT, mim, mfu, mfd, mfd.eval('[[0,0],[0,1]]'), mfd.eval('[0,0]'))
(H_RIGHT,R_RIGHT) = gf.asm_dirichlet(RIGHT, mim, mfu, mfd, mfd.eval('[[0,0],[0,1]]'), mfd.eval('[0,0]'))
H = H_LEFT+H_RIGHT
R = R_LEFT+R_RIGHT
(N,U0) = H.dirichlet_nullspace(R)
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
The viscous boundary at the bottom is taken into account as external viscous damping by computing its contribution from a Neumann condition and adding it to the damping matrix.
|
nbd = mfd.nbdof()
C_BOTTOM = gf.asm_boundary_source(BOTTOM, mim, mfu, mfd, np.repeat([[rho*Vs], [rho*Vp]],nbd,1))
C_BOTTOM_X = gf.asm_boundary_source(BOTTOM, mim, mfu, mfd, np.repeat([[rho*Vs], [0]],nbd,1))
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
Governing equations
Here we review the Navier equations, the governing equations of the elastic body analyzed in this example.
$\left(\lambda+\mu\right)\dfrac{\partial}{\partial x}\left(\dfrac{\partial u_{x}}{\partial x}+\dfrac{\partial u_{y}}{\partial y}+\dfrac{\partial u_{z}}{\partial z}\right)+\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right)u_{x}+f_{x}=\rho\dfrac{\partial^{2}u_{x}}{\partial t^{2}}$
$\left(\lambda+\mu\right)\dfrac{\partial}{\partial y}\left(\dfrac{\partial u_{x}}{\partial x}+\dfrac{\partial u_{y}}{\partial y}+\dfrac{\partial u_{z}}{\partial z}\right)+\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right)u_{y}+f_{y} = \rho\dfrac{\partial^{2}u_{y}}{\partial t^{2}}$
$\left(\lambda+\mu\right)\dfrac{\partial}{\partial z}\left(\dfrac{\partial u_{x}}{\partial x}+\dfrac{\partial u_{y}}{\partial y}+\dfrac{\partial u_{z}}{\partial z}\right)+\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right)u_{z}+f_{z} = \rho\dfrac{\partial^{2}u_{z}}{\partial t^{2}}$
Since this is a two-dimensional analysis, the state variables are taken to be constant in the y direction ($\dfrac{\partial}{\partial y}=0$):
$\left(\lambda+\mu\right)\dfrac{\partial}{\partial x}\left(\dfrac{\partial u_{x}}{\partial x}+\dfrac{\partial u_{z}}{\partial z}\right)+\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right)u_{x}+f_{x}=\rho\dfrac{\partial^{2}u_{x}}{\partial t^{2}}$
$\left(\lambda+\mu\right)\dfrac{\partial}{\partial z}\left(\dfrac{\partial u_{x}}{\partial x}+\dfrac{\partial u_{z}}{\partial z}\right)+\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right)u_{z}+f_{z}=\rho\dfrac{\partial^{2}u_{z}}{\partial t^{2}}$
$\mu\left(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\right)u_{y}+f_{y}=\rho\dfrac{\partial^{2}u_{y}}{\partial t^{2}}$
The first two equations above describe in-plane waves, and the last one describes out-of-plane waves; here we perform a time-history response analysis of the in-plane waves. The stiffness, mass, and damping matrices can be computed as follows:
|
# stiffness matrix
K = gf.asm_linear_elasticity(mim, mfu, mfd, np.repeat([Lambda], nbd), np.repeat([Mu], nbd))
# mass matrix
M = gf.asm_mass_matrix(mim, mfu)*rho
# damping matrix
C = gf.Spmat('copy',M)
C.clear()
C.set_diag((C_BOTTOM))
C_X = C_BOTTOM_X
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
Note that although only in-plane waves are computed here, out-of-plane waves can also be computed by setting $\lambda=-\mu$ in the input. At this point the matrices are GetFEM++ Spmat objects; we write them out in MatrixMarket format and read them back in as SciPy sparse matrices. MatrixMarket is one of the major file formats for sparse matrices:
http://math.nist.gov/MatrixMarket/formats.html
|
from scipy import io

N.save('mm', "N.mtx"); N = io.mmread("N.mtx")
K.save('mm', "K.mtx"); K = io.mmread("K.mtx")
M.save('mm', "M.mtx"); M = io.mmread("M.mtx")
C.save('mm', "C.mtx"); C = io.mmread("C.mtx")
# matrices with the side boundary conditions applied
Nt = N.transpose()
KK = Nt*K*N
MM = Nt*M*N
CC = Nt*C*N
CC_X = Nt*C_X
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
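The save / io.mmread round trip above can be illustrated with SciPy alone, without GetFEM: write a small sparse matrix to a MatrixMarket .mtx file and read it back.

```python
import os
import tempfile

import numpy as np
from scipy import io, sparse

A = sparse.csr_matrix(np.array([[2.0, 0.0], [1.0, 3.0]]))

# Round-trip through a MatrixMarket (.mtx) file.
path = os.path.join(tempfile.mkdtemp(), "A.mtx")
io.mmwrite(path, A)
B = io.mmread(path)  # returns a COO sparse matrix

print(np.allclose(A.toarray(), B.toarray()))  # True
```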
A Ricker wavelet is used as the input velocity waveform entering through the bottom viscous boundary.
|
TP = 200
VP = 1.0
time_step = 500
wave = np.zeros(time_step)
time = np.arange(TP*2)
omegaP = 2.000E+00*np.pi/TP
tauR = omegaP/np.sqrt(2.0)*(time-TP)
wave[10+time] = -np.sqrt(np.e)*tauR*VP*np.exp(-tauR**2/2.000E+00)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(wave)
plt.show()
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
Time-history response analysis with the Newmark-β method
The mass, stiffness, and damping matrices and the input ground motion needed for the time-history response analysis are now all in place. We use them to perform the analysis with the Newmark-β method.
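With $\gamma=1/2$, the Newmark-$\beta$ scheme implemented in the cell below solves at each time step
$\left(M+\dfrac{\Delta t}{2}C+\beta\Delta t^{2}K\right)a_{n+1}=-C\left(v_{n}+\dfrac{\Delta t}{2}a_{n}\right)-K\left(u_{n}+\Delta t\,v_{n}+\left(\dfrac{1}{2}-\beta\right)\Delta t^{2}a_{n}\right)+F_{n+1}$
and then updates displacement and velocity as
$u_{n+1}=u_{n}+\Delta t\,v_{n}+\left(\dfrac{1}{2}-\beta\right)\Delta t^{2}a_{n}+\beta\Delta t^{2}a_{n+1}$
$v_{n+1}=v_{n}+\dfrac{\Delta t}{2}\left(a_{n}+a_{n+1}\right)$
where the forcing term $F_{n+1}$ is here the boundary input $-C_{X}\,\mathrm{wave}[n+1]$ from the Ricker wavelet.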
|
from scipy.sparse import linalg

# value of beta
beta = 1./4.
# time increment (sec)
dt = 0.01
sl = gf.Slice(('none',), mfu, 2)
MMM = MM+dt/2.000e+00*CC+beta*dt**2*KK
dis = np.zeros(CC_X.size)
vel = np.zeros(CC_X.size)
acc = np.zeros(CC_X.size)
for stpot in np.arange(1,time_step):
dis0 = dis
vel0 = vel
acc0 = acc
FFF = -CC*(vel0+dt/2.000e+00*acc0)-KK*(dis0+vel0*dt+(1.000e+00/2.000e+00-beta)*acc0*dt**2)-CC_X*wave[stpot]
acc = linalg.spsolve(MMM, FFF)
dis = dis0+vel0*dt+(1.000e+00/2.000e+00-beta)*acc0*dt**2+beta*acc*dt**2
vel = vel0+1.000e+00/2.000e+00*(acc0+acc)*dt
filename = 'results/linear-dynamic-'+("00000"+str(stpot))[-5:]+'.vtk'
sl.export_to_vtk(filename, 'ascii', mfu, N*dis, 'Displacement', mfu, N*vel, 'Velocity', mfu, N*acc, 'Acceleration')
|
demo/linear-dynamic-2D.ipynb
|
tkoyama010/getfem_presentation
|
cc0-1.0
|
<a id="dualdemo"></a>
Example 2: Dual Simulations
This example plots a deterministic simulation and a stochastic simulation of the same system.
Back to top
|
// SBML Part
model *myModel()
// Reactions:
J0: A -> B; k*A;
A = 10;
k = 1;
end
// SED-ML Part
// Models
model1 = model "myModel"
// Simulations
simulation1 = simulate uniform(0, 5, 100)
simulation2 = simulate uniform_stochastic(0, 5, 100)
// Tasks
task1 = run simulation1 on model1
task2 = run simulation2 on model1
// Outputs
plot "Deterministic Solution" task1.time vs task1.A, task1.B
plot "Stochastic Solution" task2.time vs task2.A, task2.B
|
example-notebooks/omex-basics.ipynb
|
0u812/nteract
|
bsd-3-clause
|
<a id="ensemble"></a>
Example 3: Stochastic Ensemble
This example uses a repeated task to run multiple copies of a stochastic simulation, then plots the ensemble.
Back to top
|
// SBML Part
model *myModel()
// Reactions:
J0: A -> B; k*A;
A = 100;
k = 1;
end
// SED-ML Part
// Models
model1 = model "myModel"
// Simulations
simulation1 = simulate uniform_stochastic(0, 5, 100)
// Tasks
task1 = run simulation1 on model1
repeat1 = repeat task1 for \
local.x in uniform(0,25,25), reset=True
// Outputs
plot "Stochastic Ensemble" repeat1.time vs repeat1.A, repeat1.B
|
example-notebooks/omex-basics.ipynb
|
0u812/nteract
|
bsd-3-clause
|
<a id="phaseportrait"></a>
Example 4: Phase portrait
In addition to timecourse plots, SED-ML can also be used to create phase portraits. This is useful to show the presence (or absence, in this case) of limit cycles. Here, we use the well-known Lorenz attractor to show this feature.
Back to top
|
// -- Begin Antimony block
model *lorenz()
// Rate Rules:
x' = sigma*(y - x);
y' = x*(rho - z) - y;
z' = x*y - beta*z;
// Variable initializations:
x = 0.96259;
sigma = 10;
y = 2.07272;
rho = 28;
z = 18.65888;
beta = 2.67;
// Other declarations:
var x, y, z;
const sigma, rho, beta;
end
// -- End Antimony block
// -- Begin PhraSEDML block
// Models
model1 = model "lorenz"
// Simulations
sim1 = simulate uniform(0, 15, 2000)
// Tasks
task1 = run sim1 on model1
// Outputs
plot "Phase Portrait" z vs x
// -- End PhraSEDML block
|
example-notebooks/omex-basics.ipynb
|
0u812/nteract
|
bsd-3-clause
|
<a id="paramscan"></a>
Example 5: Parameter scanning
Through the use of repeated tasks, SED-ML can be used to scan through parameter values. This example shows how to scan through a set of predefined values for a kinetic parameter (J1_KK2).
Back to top
|
// -- Begin Antimony block
model *MAPKcascade()
// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;
// Reactions:
J0: MKKK => MKKK_P; J0_V1*MKKK/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; J1_V2*MKKK_P/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; J2_k3*MKKK_P*MKK/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; J3_k4*MKKK_P*MKK_P/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; J4_V5*MKK_PP/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; J5_V6*MKK_P/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; J6_k7*MKK_PP*MAPK/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; J7_k8*MKK_PP*MAPK_P/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; J8_V9*MAPK_PP/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; J9_V10*MAPK_P/(J9_KK10 + MAPK_P);
// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;
// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
// -- End Antimony block
// -- Begin PhraSEDML block
// Models
model1 = model "MAPKcascade"
// Simulations
sim1 = simulate uniform(0, 4000, 1000)
// Tasks
task1 = run sim1 on model1
// Repeated Tasks
repeat1 = repeat task1 for model1.J1_KK2 in [1, 10, 40], reset=true
// Outputs
plot "Sampled Simulation" repeat1.time vs repeat1.MKK, repeat1.MKK_P, repeat1.MAPK_PP
// -- End PhraSEDML block
|
example-notebooks/omex-basics.ipynb
|
0u812/nteract
|
bsd-3-clause
|
Tutorials
Preliminaries: Setup & introduction
Beam dynamics
Tutorial N1. Linear optics. Web version.
Linear optics. Double Bend Achromat (DBA). A simple example of using OCELOT functions to obtain the periodic solution for a storage ring cell.
Tutorial N2. Tracking. Web version.
Linear optics of the European XFEL injector.
Tracking, first and second order.
Tutorial N3. Space Charge. Web version.
Tracking through RF cavities with space-charge effects and RF focusing.
Tutorial N4. Wakefields. Web version.
Tracking through a corrugated structure (energy chirper) with wakefields.
Tutorial N5. CSR. Web version.
Tracking through a bunch compressor with the CSR effect.
Tutorial N6. RF Coupler Kick. Web version.
Coupler kick. Example of the influence of an RF coupler kick on trajectory and optics.
Tutorial N7. Lattice design. Web version.
Lattice design, twiss matching, twiss backtracking.
Preliminaries
The tutorial includes 7 simple examples dedicated to beam dynamics and optics. You should have a basic understanding of computer programming terminology; a basic understanding of the Python language is a plus.
This tutorial requires the following packages:
Python 3.4-3.6 (python 2.7 can work as well)
numpy version 1.8 or later: http://www.numpy.org/
scipy version 0.15 or later: http://www.scipy.org/
matplotlib version 1.5 or later: http://matplotlib.org/
ipython version 2.4 or later, with notebook support: http://ipython.org
Optional to speed up python
- numexpr (version 2.6.1)
- pyfftw (version 0.10)
The easiest way to get these is to download and install the (very large) Anaconda software distribution.
Alternatively, you can download and install miniconda.
The following command will install all required packages:
$ conda install numpy scipy matplotlib ipython-notebook
Ocelot installation
Download the zip file from GitHub.
Unzip ocelot-master.zip to your working folder /your_working_dir/.
Rename the folder ../your_working_dir/ocelot-master to /your_working_dir/ocelot.
Add ../your_working_dir/ to PYTHONPATH:
Windows 7: go to Control Panel -> System and Security -> System -> Advanced System Settings -> Environment Variables, and in User variables add /your_working_dir/ to PYTHONPATH. If the variable PYTHONPATH does not exist, create it:
Variable name: PYTHONPATH
Variable value: ../your_working_dir/
Linux:
$ export PYTHONPATH=/your_working_dir/:$PYTHONPATH
To launch "ipython notebook" or "jupyter notebook",
run one of the following commands:
$ ipython notebook
or
$ ipython notebook --notebook-dir="path_to_your_directory"
or
$ jupyter notebook --notebook-dir="path_to_your_directory"
Checking your installation
You can run the following code to check the versions of the packages on your system:
(in IPython notebook, press shift and return together to execute the contents of a cell)
|
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import ocelot
print('ocelot:', ocelot.__version__)
|
demos/ipython_tutorials/1_introduction.ipynb
|
iagapov/ocelot
|
gpl-3.0
|
Optical function calculation
Uses:
* the twiss() function, and
* the Twiss() object, which contains the twiss parameters and other information at one particular position (s) of the lattice
To calculate twiss parameters, run the twiss(lattice, tws0=None, nPoints=None) function. To obtain a periodic solution, leave tws0 at its default value.
You can change the number of points over the cell; if nPoints=None, the twiss parameters are calculated at the end of each element.
The twiss() function returns a list of Twiss() objects.
You will see that the Twiss object contains more information than just the twiss parameters.
|
tws = twiss(lat)
# to see twiss parameters at the beginning of the cell, uncomment next line
# print(tws[0])
# to see twiss parameters at the end of the cell:
print(tws[-1])
# plot optical functions.
plot_opt_func(lat, tws, top_plot = ["Dx", "Dy"], legend=False, font_size=10)
plt.show()
# you also can use standard matplotlib functions for plotting
#s = [tw.s for tw in tws]
#bx = [tw.beta_x for tw in tws]
#plt.plot(s, bx)
#plt.show()
# you can play with quadrupole strength and try to make achromat
Q4.k1 = 1.18
# to make achromat uncomment next line
# Q4.k1 = 1.18543769836
# To use matching function, please see ocelot/demos/ebeam/dba.py
# updating transfer maps after changing element parameters.
lat.update_transfer_maps()
# recalculate twiss parameters
tws=twiss(lat, nPoints=1000)
plot_opt_func(lat, tws, legend=False)
plt.show()
|
demos/ipython_tutorials/1_introduction.ipynb
|
iagapov/ocelot
|
gpl-3.0
|
Now we define the states available
|
import itertools
import numpy as np

states = [[0,0],[1,0],[0,1],[1,1]]
states = [np.array(state) for state in states]
|
Two-particles.ipynb
|
ivergara/science_notebooks
|
gpl-3.0
|
We can get all the combinations of the states as follows
|
list(itertools.combinations(states,2))
|
Two-particles.ipynb
|
ivergara/science_notebooks
|
gpl-3.0
|
Perform an XOR between the states which is equivalent to a hop process
|
for pair in itertools.combinations(states,2):
xor = np.logical_xor(*pair).astype(int)
print(f"Left state {pair[0]}, right state {pair[1]}, XOR result {xor}")
|
Two-particles.ipynb
|
ivergara/science_notebooks
|
gpl-3.0
|
Let's define a helper function to determine if a transition is allowed. Basically, in this problem, the process has to conserve particle number in the sense that a hop only involves one electron.
|
def allowed(jump):
if np.sum(jump) != 1:
return 0
return 1
for pair in itertools.combinations(states,2):
xor = np.logical_xor(*pair).astype(int)
print(f"Left state {pair[0]}, right state {pair[1]}: allowed {bool(allowed(xor))}")
|
Two-particles.ipynb
|
ivergara/science_notebooks
|
gpl-3.0
|
Now, building the matrix by calculating the combinations and then adding the transpose to have a symmetric matrix.
|
matrix = np.zeros((4,4))
for i in range(len(states)):
for j in range(i):
matrix[i][j] = allowed(np.logical_xor(states[i], states[j]).astype(int))
matrix+matrix.T
|
Two-particles.ipynb
|
ivergara/science_notebooks
|
gpl-3.0
|
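The construction above can be restated as a self-contained sketch (the states and the `allowed()` rule are repeated here rather than imported), including a quick spectral check. With the particle-number-conserving rule, the four states form a 4-cycle, so the hopping matrix has eigenvalues -2, 0, 0, 2 in hopping units.

```python
import numpy as np

# Build the symmetric hopping matrix for the four two-site occupation states
states = [np.array(s) for s in ([0, 0], [1, 0], [0, 1], [1, 1])]

def allowed(jump):
    # a transition is an allowed hop only if exactly one site changes
    return 1 if np.sum(jump) == 1 else 0

n = len(states)
H = np.zeros((n, n))
for i in range(n):
    for j in range(i):
        H[i, j] = allowed(np.logical_xor(states[i], states[j]).astype(int))
H = H + H.T  # symmetrize: hopping left->right and right->left are equivalent

eigenvalues = np.sort(np.linalg.eigvalsh(H))
```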
Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
|
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    b = np.asarray(a, dtype=float)
    peaks = []
    for i in range(len(b)):
        # at the endpoints, only compare with the single inward neighbour
        left = b[i - 1] if i > 0 else -np.inf
        right = b[i + 1] if i < len(b) - 1 else -np.inf
        if b[i] > left and b[i] > right:
            peaks.append(i)
    return np.array(peaks, dtype=int)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
|
assignments/assignment07/AlgorithmsEx02.ipynb
|
geoneill12/phys202-2015-work
|
mit
|
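As an aside (not part of the original assignment), the same peak-finding logic can be vectorized by padding both ends with -inf, so endpoints are handled by the same comparison as interior points:

```python
import numpy as np

def find_peaks_vectorized(a):
    """Vectorized local-maxima finder; -inf padding makes endpoints eligible."""
    b = np.asarray(a, dtype=float)
    padded = np.concatenate(([-np.inf], b, [-np.inf]))
    mid, left, right = padded[1:-1], padded[:-2], padded[2:]
    return np.nonzero((mid > left) & (mid > right))[0]
```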
Get data
|
data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1) # Take first run
# To save time and memory for this demo, we'll just use the first
# 2.5 minutes (all we need to get 30 total events) and heavily
# resample 480->60 Hz (usually you wouldn't do either of these!)
raw = raw.crop(0, 150.).load_data().resample(60, npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, None, n_jobs=1, fir_design='firwin')
events = mne.find_events(raw, stim_channel='UPPT001')
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.5
baseline = None # no baseline as high-pass is applied
reject = dict(mag=3e-12)
# Make source space
trans = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
src = mne.setup_source_space('spm', spacing='oct6', subjects_dir=subjects_dir,
add_dist=False)
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(raw.info, trans, src, bem)
del src
# inverse parameters
conditions = 'faces', 'scrambled'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
clim = dict(kind='value', lims=[0, 2.5, 5])
|
0.15/_downloads/plot_covariance_whitening_dspm.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Inspired by varlens examples, here is how this simple function works:
|
import urllib
urltemplate = "https://raw.githubusercontent.com/hammerlab/varlens/master/test/data/CELSR1/bams/{}"
url = urllib.URLopener()
url.retrieve(urltemplate.format("bam_5.bam"), "bam_5.bam")
url.retrieve(urltemplate.format("bam_5.bam.bai"), "bam_5.bam.bai")
samfile = pysam.AlignmentFile("bam_5.bam", "rb")
# C -> T variant at this particular locus
chromosome = "chr22"
location = 46930258
radius = 5
|
notebook/Naive Strategy.ipynb
|
hammerlab/isovar
|
apache-2.0
|
Let's compare the contexts for the variant and the reference alleles:
|
allele1 = "T"
contexify(samfile, chromosome, location, allele1, radius)
allele2 = "C"
contexify(samfile, chromosome, location, allele2, radius)
|
notebook/Naive Strategy.ipynb
|
hammerlab/isovar
|
apache-2.0
|
Because we provided a parameter ai for the icy albedo, our model now contains several sub-processes contained within the process called albedo. Together these implement the step-function formula above.
The process called iceline simply looks for grid cells with temperature below $T_f$.
|
print model1.param
# A python shortcut... we can use the dictionary to pass lots of input arguments simultaneously:
# same thing as before, but written differently:
model1 = climlab.EBM_annual( num_lat=180, **param)
print model1
def ebm_plot(e, return_fig=False):
templimits = -60,32
radlimits = -340, 340
htlimits = -6,6
latlimits = -90,90
lat_ticks = np.arange(-90,90,30)
fig = plt.figure(figsize=(8,12))
ax1 = fig.add_subplot(3,1,1)
ax1.plot(e.lat, e.Ts)
ax1.set_ylim(templimits)
ax1.set_ylabel('Temperature (deg C)')
ax2 = fig.add_subplot(3,1,2)
ax2.plot(e.lat, e.ASR, 'k--', label='SW' )
ax2.plot(e.lat, -e.OLR, 'r--', label='LW' )
ax2.plot(e.lat, e.net_radiation, 'c-', label='net rad' )
ax2.plot(e.lat, e.heat_transport_convergence(), 'g--', label='dyn' )
ax2.plot(e.lat, e.net_radiation.squeeze() + e.heat_transport_convergence(), 'b-', label='total' )
ax2.set_ylim(radlimits)
ax2.set_ylabel('Energy budget (W m$^{-2}$)')
ax2.legend()
ax3 = fig.add_subplot(3,1,3)
ax3.plot(e.lat_bounds, e.heat_transport() )
ax3.set_ylim(htlimits)
ax3.set_ylabel('Heat transport (PW)')
for ax in [ax1, ax2, ax3]:
ax.set_xlabel('Latitude')
ax.set_xlim(latlimits)
ax.set_xticks(lat_ticks)
ax.grid()
if return_fig:
return fig
# Integrate out to equilibrium.
model1.integrate_years(5)
# Check for energy balance
print climlab.global_mean(model1.net_radiation)
f = ebm_plot(model1)
# There is a diagnostic that tells us the current location of the ice edge:
model1.icelat
|
notes/EBM_albedo_feedback.ipynb
|
brian-rose/env-415-site
|
mit
|
Polar-amplified warming in the EBM
Add a small radiative forcing
The equivalent of doubling CO2 in this model is something like
$$ A \rightarrow A - \delta A $$
where $\delta A = 4$ W m$^{-2}$.
|
deltaA = 4.
# This is a very handy way to "clone" an existing model:
model2 = climlab.process_like(model1)
# Now change the longwave parameter:
model2.subprocess['LW'].A = param['A'] - deltaA
# and integrate out to equilibrium again
model2.integrate_years(5, verbose=False)
plt.plot(model1.lat, model1.Ts)
plt.plot(model2.lat, model2.Ts)
|
notes/EBM_albedo_feedback.ipynb
|
brian-rose/env-415-site
|
mit
|
In the ice-free regime, there is no polar-amplified warming. A uniform radiative forcing produces a uniform warming.
A different kind of climate forcing: changing the solar constant
Historically EBMs have been used to study the climatic response to a change in the energy output from the Sun.
We can do that easily with climlab:
|
m = climlab.EBM_annual( num_lat=180, **param )
# The current (default) solar constant, corresponding to present-day conditions:
m.subprocess.insolation.S0
|
notes/EBM_albedo_feedback.ipynb
|
brian-rose/env-415-site
|
mit
|
What happens if we decrease $S_0$?
|
# First, get to equilibrium
m.integrate_years(5.)
# Check for energy balance
print climlab.global_mean(m.net_radiation)
m.icelat
# Now make the solar constant smaller:
m.subprocess.insolation.S0 = 1300.
# Integrate to new equilibrium
m.integrate_years(10.)
# Check for energy balance
print climlab.global_mean(m.net_radiation)
m.icelat
ebm_plot(m)
|
notes/EBM_albedo_feedback.ipynb
|
brian-rose/env-415-site
|
mit
|
A much colder climate! The ice line is sitting at 54º. The heat transport shows that the atmosphere is moving lots of energy across the ice line, trying hard to compensate for the strong radiative cooling everywhere poleward of the ice line.
What happens if we decrease $S_0$ even more?
|
# Now make the solar constant smaller:
m.subprocess.insolation.S0 = 1200.
# First, get to equilibrium
m.integrate_years(5.)
# Check for energy balance
print climlab.global_mean(m.net_radiation)
|
notes/EBM_albedo_feedback.ipynb
|
brian-rose/env-415-site
|
mit
|
ebm_plot(m)
Something very different happened! Where is the ice line now?
Now what happens if we set $S_0$ back to its present-day value?
|
# Now set the solar constant back to its present-day value:
m.subprocess.insolation.S0 = 1365.2
# First, get to equilibrium
m.integrate_years(5.)
# Check for energy balance
print climlab.global_mean(m.net_radiation)
ebm_plot(m)
|
notes/EBM_albedo_feedback.ipynb
|
brian-rose/env-415-site
|
mit
|
To make a pretty, publication grade map for your study area look no further than cartopy.
In this tutorial we will walk through generating a basemap with:
- Bathymetry/topography
- Coastline
- Scatter data
- Location labels
- Inset map
- Legend
This code can be generalised to any region you wish to map
First we import some modules for manipulating and plotting data
|
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import xarray as xr
from pathlib import Path
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
Then we import cartopy itself
|
import cartopy
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
A few other modules and functions which we will use later to add cool stuff to our plots. We also update font sizes for improved readability.
|
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
plt.rcParams.update({"font.size": 20})
SMALL_SIZE = 22
MEDIUM_SIZE = 22
LARGE_SIZE = 26
plt.rc("font", size=SMALL_SIZE)
plt.rc("xtick", labelsize=SMALL_SIZE)
plt.rc("ytick", labelsize=SMALL_SIZE)
plt.rc("axes", titlesize=SMALL_SIZE)
plt.rc("legend", fontsize=SMALL_SIZE)
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
Note on bathymetry data
To save space and time I have subset the bathymetry plotted in this example. If you wish to map a different area you will need to download the GEBCO topography data found here.
You can find a notebook intro to using xarray for netcdf here on the UEA python website. Or go to Callum's github for a worked example using GEBCO data.
|
# Open prepared bathymetry dataset using pathlib to specify the relative path
bathy_file_path = Path('../data/bathy.nc')
bathy_ds = xr.open_dataset(bathy_file_path)
bathy_lon, bathy_lat, bathy_h = bathy_ds.bathymetry.longitude, bathy_ds.bathymetry.latitude, bathy_ds.bathymetry.values
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
We're just interested in bathy here, so set any height values greater than 0 to 0, and set contour levels to plot later
|
bathy_h[bathy_h > 0] = 0
bathy_conts = np.arange(-9000, 500, 500)
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
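Incidentally, the same clamping can be done without a boolean mask via np.clip; a minimal sketch with made-up heights (negative values are depths, positive values are land):

```python
import numpy as np

heights = np.array([-3500.0, -120.0, 15.0, 300.0])  # hypothetical sample heights
clipped = np.clip(heights, None, 0)  # everything above sea level becomes 0
```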
Here we load some scatter data from a two column csv for plotting later
|
# Load some scatter data of sample locations near South Georgia
data = pd.read_csv("../data/scatter_coords.csv")
lons = data.Longitude.values
lats = data.Latitude.values
# Subset of sampling locations
sample_lon = lons[[0, 2, 7]]
sample_lat = lats[[0, 2, 7]]
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
Now to make the map itself. First we define our coordinate system. Here we are using the Plate Carrée projection, an equidistant cylindrical projection.
A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html.
Then we create figure and axes instances and set the plotting extent in degrees [West, East, South, North]
|
import cartopy.crs as ccrs

coord = ccrs.PlateCarree()
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord);
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
Now we contour the bathymetry data
|
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
A good start. To make it more map-like we add gridlines, formatted labels and a colorbar
|
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, linestyle="--")
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.ylines = True
gl.xlines = True
fig.colorbar(bathy, ax=ax, orientation="horizontal", label="Bathymetry (m)", shrink=0.7, pad=0.08, aspect=40);
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
Now to add a few more features: first coastlines from Cartopy's Natural Earth features, then scatters of the sample locations we imported earlier
|
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, linestyle="--")
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.ylines = True
gl.xlines = True
fig.colorbar(bathy, ax=ax, orientation="horizontal", label="Bathymetry (m)", shrink=0.7, pad=0.08, aspect=40)
feature = cartopy.feature.NaturalEarthFeature(
name="coastline", category="physical", scale="50m", edgecolor="0.5", facecolor="0.8"
)
ax.add_feature(feature)
ax.scatter(lons, lats, zorder=5, color="red", label="Samples collected")
ax.scatter(sample_lon, sample_lat, zorder=10, color="k", marker="D", s=50, label="Samples sequenced");
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
To finish off the map we add a legend for the scatter plot, an inset map showing the area at a larger scale and some text identifying the islands
|
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, linestyle="--")
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.ylines = True
gl.xlines = True
fig.colorbar(bathy, ax=ax, orientation="horizontal", label="Bathymetry (m)", shrink=0.7, pad=0.08, aspect=40)
ax.add_feature(feature)
ax.scatter(lons, lats, zorder=5, color="red", label="Samples collected")
ax.scatter(sample_lon, sample_lat, zorder=10, color="k", marker="D", s=50, label="Samples sequenced")
fig.legend(bbox_to_anchor=(0.12, 0.2), loc="lower left")
tr2 = ccrs.Stereographic(central_latitude=-55, central_longitude=-35)
sub_ax = plt.axes(
[0.63, 0.65, 0.2, 0.2], projection=ccrs.Stereographic(central_latitude=-55, central_longitude=-35)
)
sub_ax.set_extent([-70, -15, -75, 10])
x_co = [-42, -42, -23, -23, -42]
y_co = [-60, -50, -50, -60, -60]
sub_ax.add_feature(feature)
sub_ax.plot(x_co, y_co, transform=coord, zorder=10, color="red")
ax.text(-38.5, -54.9, "South\nGeorgia", fontsize=14)
ax.text(-26.8, -58.2, "South\nSandwich\nIslands", fontsize=14);
|
content/notebooks/2019-05-30-cartopy-map.ipynb
|
ueapy/ueapy.github.io
|
mit
|
We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows: first the top vertex (C) of the triangle is added, then the rest of the vertices (A and B), followed by the bottom edge (AB), then the rest of the edges (BC and AC), and finally the triangle is filled in (ABC).
|
scx = [Simplex((2,), 0), # C
Simplex((0,), 1), # A
Simplex((1,), 1), # B
Simplex((0,1), 2), # AB
Simplex((1,2), 3), # BC
Simplex((0,2), 3), # AC
Simplex((0,1,2), 4), # ABC
]
|
2015_2016/lab13/Computing Persistent Homology.ipynb
|
gregorjerse/rt2
|
gpl-3.0
|
Now the persistent homology is computed.
|
f = Filtration(scx, data_cmp)
p = DynamicPersistenceChains(f)
p.pair_simplices()
smap = p.make_simplex_map(f)
|
2015_2016/lab13/Computing Persistent Homology.ipynb
|
gregorjerse/rt2
|
gpl-3.0
|
Now output the computed persistence diagram. For each critical cell that appears in the filtration, the birth and death times are given, as well as the cell that kills it (its pair). Features that persist forever have their death value set to inf.
|
print "{:>10}{:>10}{:>10}{:>10}".format("First", "Second", "Birth", "Death")
for i in (i for i in p if i.sign()):
b = smap[i]
if i.unpaired():
print "{:>10}{:>10}{:>10}{:>10}".format(b, '', b.data, "inf")
else:
d = smap[i.pair()]
print "{:>10}{:>10}{:>10}{:>10}".format(b, d, b.data, d.data)
|
2015_2016/lab13/Computing Persistent Homology.ipynb
|
gregorjerse/rt2
|
gpl-3.0
|
Load the data from the publication
First we will load the data collected in [1]_. In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
|
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage('biosemi128')
info = mne.create_info(montage.ch_names, sfreq, 'eeg').set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
|
0.20/_downloads/e31a3c546b89121086d731bfb81c98aa/plot_receptive_field_mtrf.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
setup: 1. Generate synthetic data for temperature observation time-series
|
# Create time-axis for our synthetic sample
utc = Calendar() # provide conversion and math for utc time-zone
t0 = utc.time(2016, 1, 1)
dt = deltahours(1)
n = 24*3 # 3 days length
#ta = TimeAxisFixedDeltaT(t0, dt, n)
ta = TimeAxis(t0, dt, n) # same as the line above, but needed for now (we work on aligning them)
# 1. Create the terrain based geo-points for the 1x1km grid and the observations
# a. Create the grid, based on a synthetic terrain model
# specification of 1 x 1 km
grid_1x1 = GeoPointVector()
for x in range(10):
for y in range(10):
grid_1x1.append(GeoPoint(x*1000, y*1000, (x+y)*50)) # z from 0 to 900 m
# b. Create the observation points, for metered temperature
# reasonably within that grid_1x1, and with elevation z
# that corresponds approximately to the position
obs_points = GeoPointVector()
obs_points.append(GeoPoint( 100, 100, 10)) # observation point at the lowest part
obs_points.append(GeoPoint(5100, 100, 270 )) # halfway out in x-direction @ 270 masl
obs_points.append(GeoPoint( 100, 5100, 250)) # halfway out in y-direction @ 250 masl
obs_points.append(GeoPoint(10100,10100, 1080 )) # x-y at max, so @1080 masl
# 2. Create time-series having a constant temperature of 20 degC
# and add them to the synthetic observation set
# make sure there is some reality, like temperature gradient etc.
ts = TimeSeries(ta, fill_value=20.0,point_fx=stair_case) # 20 degC at z_t= 0 meter above sea-level
# assume set temp.gradient to -0.6 degC/100m, and estimate the other values accordingly
tgrad = -0.6/100.0 # in our case in units of degC/m
z_t = 0 # meter above sea-level
# Create a TemperatureSourceVector to hold the set of observation time-series
constant_bias=[-1.0,-0.6,0.7,+1.0]
obs_set = TemperatureSourceVector()
obs_set_w_bias = TemperatureSourceVector()
for geo_point,bias in zip(obs_points,constant_bias):
temp_at_site = ts + tgrad*(geo_point.z-z_t)
obs_set.append(TemperatureSource(geo_point,temp_at_site))
obs_set_w_bias.append(TemperatureSource(geo_point,temp_at_site + bias))
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
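The lapse-rate construction in the loop above can be checked in isolation. A minimal sketch, assuming the same gradient of -0.6 degC per 100 m and the 20 degC reference at sea level used above:

```python
import numpy as np

t0 = 20.0             # reference temperature at z_t = 0 m a.s.l.
tgrad = -0.6 / 100.0  # temperature gradient in degC per metre
z = np.array([10.0, 270.0, 250.0, 1080.0])  # the station elevations from above
temp_at_site = t0 + tgrad * z  # T(z) = T0 + tgrad * (z - z_t), with z_t = 0
```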
setup 2. Transform observation with bias to grid using kriging
|
# Generate the observation grid by kriging the observations out to 1x1km grid
# first create idw and kriging parameters that we will utilize in the next steps
# kriging parameters
btk_params = BTKParameter() # we could tune parameters here if needed
# idw parameters, somewhat adapted to the fact that we
# know we interpolate from a grid, with a lot of neighbours around
idw_params = IDWTemperatureParameter() # here we could tune the parameters if needed
idw_params.max_distance = 20*1000.0 # max at 20 km because we search for max-gradients
idw_params.max_members = 20 # for grid, this includes all possible close neighbors
idw_params.gradient_by_equation = True # resolve the horizontal component out
# now use kriging for our 'synthetic' observations with bias
obs_grid = bayesian_kriging_temperature(obs_set_w_bias,grid_1x1,ta.fixed_dt,btk_params)
# if we idw/btk back to the sites, we should have something that equals the with_bias:
# we should get close to zero differences in this to-grid-and-back operation
back_test = idw_temperature(obs_grid, obs_points, ta.fixed_dt, idw_params) # note the ta.fixed_dt here!
for bt,wb in zip(back_test,obs_set_w_bias):
print("IDW Diff {} : {} ".format(bt.mid_point(),abs((bt.ts-wb.ts).values.to_numpy()).max()))
#back_test = bayesian_kriging_temperature(obs_grid, obs_points, ta, btk_params)
#for bt,wb in zip(back_test,obs_set_w_bias):
# print("BTK Diff {} : {} ".format(bt.mid_point(),abs((bt.ts-wb.ts).values.to_numpy()).max()))
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
setup 3. Create 3 forecasts sets for the 1x1 km grid
|
# Create a forecast grid by copying the obs_grid time-series
# since we know that idw of them to obs_points will give approx.
# the obs_set_w_bias time-series
# for the simplicity, we assume the same forecast for all 3 days
fc_grid = TemperatureSourceVector()
fc_grid_1_day_back = TemperatureSourceVector() # this is previous day
fc_grid_2_day_back = TemperatureSourceVector() # this is fc two days ago
one_day_back_dt = deltahours(-24)
two_days_back_dt = deltahours(-24*2)
noise_bias = [0.0 for i in range(len(obs_grid))] # we could generate white noise ts into these to test kalman
for fc,bias in zip(obs_grid,noise_bias):
fc_grid.append(TemperatureSource(fc.mid_point(),fc.ts + bias ))
fc_grid_1_day_back.append(
TemperatureSource(
fc.mid_point(),
time_shift(fc.ts + bias, one_day_back_dt) #time-shift the signal back
)
)
fc_grid_2_day_back.append(
TemperatureSource(
fc.mid_point(),
time_shift(fc.ts + bias, two_days_back_dt)
)
)
grid_forecasts = [fc_grid_2_day_back, fc_grid_1_day_back, fc_grid ]
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
grid-pp: 1. Transform forecasts from grid to observation points (IDW)
|
# Now we have 3 simulated forecasts at a 1x1 km grid
# fc_grid, fc_grid_1_day_back, fc_grid_2_day_back
# we start to do the grid pp algorithm stuff
# - we know that our forecasts have some degC bias, and we would hope that
# the kalman filter 'learns' the offset
# as a first step we project the grid_forecasts to the observation points
# making a list of historical forecasts at each observation point.
fc_at_observation_points = [idw_temperature(fc, obs_points, ta.fixed_dt, idw_params)\
for fc in grid_forecasts]
historical_forecasts = []
for i in range(len(obs_points)): # correlate obs.point and fc using common i
fc_list = TemperatureSourceVector() # the kalman bias predictor below accepts TsVector of forecasts
for fc in fc_at_observation_points:
fc_list.append(fc[i]) # pick out the fc_ts only, for the i'th observation point
#print("{} adding fc pt {} t0={}".format(i,fc[i].mid_point(),utc.to_string(fc[i].ts.time(0))))
historical_forecasts.append(fc_list)
# historical_forecasts now contains 3 forecasts for each observation point
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
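The idw_temperature call above hides the actual weighting. A toy 1-D sketch of inverse-distance weighting (an illustration, not Shyft's implementation) looks like:

```python
import numpy as np

def idw(xs, vs, x, power=2.0):
    """Inverse-distance-weighted estimate at x from samples (xs, vs)."""
    xs = np.asarray(xs, dtype=float)
    vs = np.asarray(vs, dtype=float)
    d = np.abs(xs - x)
    if np.any(d == 0):  # exact hit: return the sample value itself
        return float(vs[np.argmin(d)])
    w = 1.0 / d**power  # closer samples get larger weights
    return float(np.sum(w * vs) / np.sum(w))
```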
grid-pp: 2. Calculate the bias time-series using Kalman filter on the observation set
|
# Create a TemperatureSourceVector to hold the set of bias time-series
bias_set = TemperatureSourceVector()
# Create the Kalman filter having 8 samples spaced every 3 hours to represent a daily periodic pattern
kalman_dt_hours = 3
kalman_dt =deltahours(kalman_dt_hours)
kta = TimeAxis(t0, kalman_dt, int(24//kalman_dt_hours))
# Calculate the coefficients of Kalman filter and
# Create bias time-series based on the daily periodic pattern
for i in range(len(obs_set)):
kf = KalmanFilter() # each observation location has its own kf & predictor
kbp = KalmanBiasPredictor(kf)
#print("Diffs for obs", i)
#for fc in historical_forecasts[i]:
# print((fc.ts-obs_set[i].ts).values.to_numpy())
kbp.update_with_forecast(historical_forecasts[i], obs_set[i].ts, kta)
pattern = KalmanState.get_x(kbp.state)
#print(pattern)
bias_ts = create_periodic_pattern_ts(pattern, kalman_dt, ta.time(0), ta)
bias_set.append(TemperatureSource(obs_set[i].mid_point(), bias_ts))
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
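The create_periodic_pattern_ts step can be illustrated with plain NumPy: a daily pattern of 8 values (one per 3-hour slot) tiled over the 3-day hourly axis. The pattern values below are made up for illustration:

```python
import numpy as np

pattern = np.array([-1.0, -0.8, -0.5, 0.0, 0.5, 0.8, 0.5, 0.0])  # assumed bias per 3 h slot
hours = np.arange(24 * 3)          # 3 days of hourly steps
bias = pattern[(hours % 24) // 3]  # repeat the daily pattern each day
```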
grid-pp: 3. Spread the bias at observation points out to the grid using kriging
|
# Generate the bias grid by kriging the bias out on the 1x1km grid
btk_params = BTKParameter()
btk_bias_params = BTKParameter(temperature_gradient=-0.6, temperature_gradient_sd=0.25, sill=25.0, nugget=0.5, range=5000.0, zscale=20.0)
bias_grid = bayesian_kriging_temperature(bias_set, grid_1x1, ta.fixed_dt, btk_bias_params)
# Correct forecasts by applying bias time-series on the grid
fc_grid_improved = TemperatureSourceVector()
for i in range(len(fc_grid)):
fc_grid_improved.append(
TemperatureSource(
fc_grid[i].mid_point(),
fc_grid[i].ts - bias_grid[i].ts # by convention, subtract the bias time-series
)
)
# Check the first value of the time-series. It should be around 20
tx =ta.time(0)
print("Comparison original synthetic grid cell [0]\n\t at the lower left corner,\n\t at t {}\n\toriginal grid: {}\n\timproved grid: {}\n\t vs bias grid: {}\n\t nearest obs: {}"
.format(utc.to_string(tx),
fc_grid[0].ts(tx),
fc_grid_improved[0].ts(tx),
bias_grid[0].ts(tx),
obs_set[0].ts(tx)
)
)
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
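The correction itself is just element-wise subtraction of the bias estimate from the raw forecast; a sketch with assumed numbers, where the true temperature is 20 degC everywhere:

```python
import numpy as np

forecast = np.array([21.0, 20.6, 19.3, 21.0])  # raw forecast (assumed values)
bias = np.array([1.0, 0.6, -0.7, 1.0])         # estimated bias (assumed values)
improved = forecast - bias                     # the convention used above
```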
Presentation & Test: 8. Finally, transform the corrected forecasts from the grid back to the observation points (IDW) to see whether we reached the goal of adjusting the forecast
|
# Generate the corrected forecast set by IDW transform of the temperature model
fc_at_observations_improved = idw_temperature(fc_grid_improved, obs_points, ta.fixed_dt, idw_params)
fc_at_observations_raw = idw_temperature(fc_grid, obs_points, ta.fixed_dt, idw_params)
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
9. Plot the results
|
# Make a time-series plot of temperature sets
for i in range(len(bias_set)):
fig, ax = plt.subplots(figsize=(20, 10))
timestamps = [datetime.datetime.utcfromtimestamp(p.start) for p in obs_set[i].ts.time_axis]
ax.plot(timestamps, obs_set[i].ts.values, label = str(i+1) + ' Observation')
ax.plot(timestamps, fc_at_observations_improved[i].ts.values, label = str(i+1) + ' Forecast improved')
ax.plot(timestamps, fc_at_observations_raw[i].ts.values,linestyle='--', label = str(i+1) + ' Forecast (raw)')
#ax.plot(timestamps, bias_set[i].ts.values, label = str(i+1) + ' Bias')
fig.autofmt_xdate()
ax.legend(title='Temperature')
ax.set_ylabel('Temp ($^\circ$C)')
# Make a scatter plot of grid temperature forecasts at ts.value(0)
x = [fc.mid_point().x for fc in bias_grid]
y = [fc.mid_point().y for fc in bias_grid]
fig, ax = plt.subplots(figsize=(10, 5))
temps = np.array([bias.ts.value(0) for bias in bias_grid])
plot = ax.scatter(x, y, c=temps, marker='o', s=500, lw=0)
plt.colorbar(plot).set_label('Temp bias correction ($^\circ$C)')
|
notebooks/grid-pp/gridpp_geopoints.ipynb
|
statkraft/shyft-doc
|
lgpl-3.0
|
Annotating bad spans of data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The tutorial tut-events-vs-annotations describes how
:class:~mne.Annotations can be read from embedded events in the raw
recording file, and tut-annotate-raw describes in detail how to
interactively annotate a :class:~mne.io.Raw data object. Here, we focus on
best practices for annotating bad data spans so that they will be excluded
from your analysis pipeline.
The reject_by_annotation parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the interactive raw.plot() window, the annotation controls can be
opened by pressing :kbd:a. Here, new annotation labels can be created or
existing annotation labels can be selected for use.
|
fig = raw.plot()
fig.canvas.key_press_event('a')
|
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
.. sidebar:: Annotating good spans
The default "BAD\_" prefix for new labels can be removed simply by
pressing the backspace key four times before typing your custom
annotation label.
You can see that the default annotation label is "BAD_"; this can be edited
prior to pressing the "Add label" button to customize the label. The intent
is that users can annotate with as many or as few labels as makes sense for
their research needs, but that annotations marking spans that should be
excluded from the analysis pipeline should all begin with "BAD" or "bad"
(e.g., "bad_cough", "bad-eyes-closed", "bad door slamming", etc). When this
practice is followed, many processing steps in MNE-Python will automatically
exclude the "bad"-labelled spans of data; this behavior is controlled by a
parameter reject_by_annotation that can be found in many MNE-Python
functions or class constructors, including:
creation of epoched data from continuous data (:class:mne.Epochs)
independent components analysis (:class:mne.preprocessing.ICA)
functions for finding heartbeat and blink artifacts
(:func:~mne.preprocessing.find_ecg_events,
:func:~mne.preprocessing.find_eog_events)
covariance computations (:func:mne.compute_raw_covariance)
power spectral density computation (:meth:mne.io.Raw.plot_psd,
:func:mne.time_frequency.psd_welch)
For example, when creating epochs from continuous data, if
reject_by_annotation=True the :class:~mne.Epochs constructor will drop
any epoch that partially or fully overlaps with an annotated span that begins
with "bad".
Generating annotations programmatically
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The tut-artifact-overview tutorial introduced the artifact detection
functions :func:~mne.preprocessing.find_eog_events and
:func:~mne.preprocessing.find_ecg_events (although that tutorial mostly
relied on their higher-level wrappers
:func:~mne.preprocessing.create_eog_epochs and
:func:~mne.preprocessing.create_ecg_epochs). Here, for demonstration
purposes, we make use of the lower-level artifact detection function to get
an events array telling us where the blinks are, then automatically add
"bad_blink" annotations around them (this is not necessary when using
:func:~mne.preprocessing.create_eog_epochs, it is done here just to show
how annotations are added non-interactively). We'll start the annotations
250 ms before the blink and end them 250 ms after it:
|
eog_events = mne.preprocessing.find_eog_events(raw)
onsets = eog_events[:, 0] / raw.info['sfreq'] - 0.25
durations = [0.5] * len(eog_events)
descriptions = ['bad blink'] * len(eog_events)
blink_annot = mne.Annotations(onsets, durations, descriptions,
orig_time=raw.info['meas_date'])
raw.set_annotations(blink_annot)
|
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
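The onset arithmetic above is simple sample-index bookkeeping; a sketch with hypothetical blink positions and sampling rate:

```python
import numpy as np

sfreq = 600.0                                 # assumed sampling rate in Hz
blink_samples = np.array([1200, 4800, 9000])  # hypothetical blink event samples
onsets = blink_samples / sfreq - 0.25         # start 250 ms before each blink
durations = np.full(len(onsets), 0.5)         # 250 ms before + 250 ms after
```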
See the section tut-section-programmatic-annotations for more details
on creating annotations programmatically.
Rejecting Epochs based on channel amplitude
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Besides "bad" annotations, the :class:mne.Epochs class constructor has
another means of rejecting epochs, based on signal amplitude thresholds for
each channel type. In the overview tutorial
<tut-section-overview-epoching> we saw an example of this: setting maximum
acceptable peak-to-peak amplitudes for each channel type in an epoch, using
the reject parameter. There is also a related parameter, flat, that
can be used to set minimum acceptable peak-to-peak amplitudes for each
channel type in an epoch:
|
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 μV
eog=200e-6) # 200 μV
flat_criteria = dict(mag=1e-15, # 1 fT
grad=1e-13, # 1 fT/cm
eeg=1e-6) # 1 μV
|
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Alternatively, if rejection thresholds were not originally given to the
:class:~mne.Epochs constructor, they can be passed to
:meth:~mne.Epochs.drop_bad later instead; this can also be a way of
imposing progressively more stringent rejection criteria:
|
stronger_reject_criteria = dict(mag=2000e-15, # 2000 fT
grad=2000e-13, # 2000 fT/cm
eeg=100e-6, # 100 μV
eog=100e-6) # 100 μV
epochs.drop_bad(reject=stronger_reject_criteria)
print(epochs.drop_log)
|
0.19/_downloads/03db2d983950efa77a26beb0ac22b422/plot_20_rejecting_bad_data.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
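The `reject` and `flat` criteria above encode per-channel-type peak-to-peak amplitude bounds. A minimal sketch of the underlying test — an illustration of the idea, not MNE's actual implementation — with invented sample values:

```python
import numpy as np

def peak_to_peak_ok(epoch, reject, flat):
    """Keep an epoch only if its peak-to-peak amplitude lies
    between the flat (minimum) and reject (maximum) thresholds."""
    ptp = epoch.max() - epoch.min()
    return flat <= ptp <= reject

good_epoch = np.array([-30e-6, 10e-6, 40e-6])   # 70 uV peak-to-peak
flat_epoch = np.array([0.0, 2e-7])              # 0.2 uV peak-to-peak

print(peak_to_peak_ok(good_epoch, reject=100e-6, flat=1e-6))  # True
print(peak_to_peak_ok(flat_epoch, reject=100e-6, flat=1e-6))  # False
```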
Finally, put a %%tutor at the top of any cell with Python code, and watch the visualization:
|
%%tutor
mylist = []
for i in range(10):
mylist.append(i ** 2)
|
examples/Tutor Magic in IPython.ipynb
|
Calysto/metakernel
|
bsd-3-clause
|
Initialization
Throughout the course, some code is already written for you, and organized in modules called packages. The cell below is an initialization step that must be called at the beginning of the notebook.
|
import packages.initialization
import pioneer3dx as p3dx
p3dx.init()
|
Moving the Robot.ipynb
|
ecervera/UJI_AMR
|
mit
|
Motion
Let's move the robot on the simulator!
You are going to use a widget, a Graphical User Interface (GUI) with two sliders for moving the robot in two ways: translation and rotation.
|
import motion_widget
|
Moving the Robot.ipynb
|
ecervera/UJI_AMR
|
mit
|
Get number of pages for publications
|
#open first page, parse html, get number of pages and their links
import html5lib
import urllib2
url="http://www.research.lancs.ac.uk/portal/en/organisations/energy-lancaster/publications.html"
aResp = urllib2.urlopen(url)
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
links=getAttr(dom,'portal_navigator_paging',el='span')[0].childNodes
nr_of_pages=int([i for i in links if i.nodeType==1][::-1][0].childNodes[0].childNodes[0].nodeValue)-1
|
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
|
csaladenes/csaladenes.github.io
|
mit
|
Extract links to publications, from all pages
|
#create publist array
publist=[]
#parse publications links on all pages
for pagenr in range(nr_of_pages):
aResp = urllib2.urlopen(url+'?page='+str(pagenr))
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
#get html list
htmlpublist=dom.getElementsByTagName('ol')
#extract pub links
for i in htmlpublist[0].childNodes:
if i.nodeType==1:
if i.childNodes[0].nodeType==1:
j=i.childNodes[1].childNodes[0].childNodes[0]
if j.nodeType==1:
publist.append(j.getAttribute('href'))
print 'finished page',pagenr
print len(publist),'publications associated with Energy Lancaster'
#create dictionary
pubdict={i:{"url":i} for i in publist}
|
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
|
csaladenes/csaladenes.github.io
|
mit
|
Keyword extraction, for each publication
|
for r in range(len(publist)):
pub=publist[r]
aResp = urllib2.urlopen(pub)
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
#get keywords from pub page
keywords=getAttr(dom,'keywords',el='ul')
if keywords:
pubdict[pub]['keywords']=[i.childNodes[0].childNodes[0].nodeValue for i in keywords[0].getElementsByTagName('a')]
#get title from pub page
title=getAttr(dom,'title',el='h2')
if title:
pubdict[pub]['title']=title[0].childNodes[0].childNodes[0].nodeValue
abstract=getAttr(dom,'rendering_researchoutput_abstractportal',el='div')
if abstract:
pubdict[pub]['abstract']=abstract[0].childNodes[0].childNodes[0].nodeValue
if r%10==0: print 'processed',r,'publications...'
#save parsed data
import json
file('pubdict.json','w').write(json.dumps(pubdict))
#load if saved previously
#pubdict=json.loads(file('pubdict.json','r').read())
|
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
|
csaladenes/csaladenes.github.io
|
mit
|
Mine titles and abstracts for topics
|
#import dependencies
import pandas as pd
from textblob import TextBlob
import spacy
nlp = spacy.load('en')  # nlp() is called below to extract noun chunks
#run once if you need to download nltk corpora, ignore otherwise
import nltk
nltk.download()
#get topical nouns for title and abstract using natural language processing
for i in range(len(pubdict.keys())):
if 'title' in pubdict[pubdict.keys()[i]]:
text=pubdict[pubdict.keys()[i]]['title']
if text:
#get topical nouns with textblob
blob1 = TextBlob(text)
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(text)
keywords2=[]
for k in blob2.noun_chunks:
keywords2.append(str(k).decode('utf8').replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
pubdict[pubdict.keys()[i]]['title-nlp']=keywords
if 'abstract' in pubdict[pubdict.keys()[i]]:
text=pubdict[pubdict.keys()[i]]['abstract']
if text:
#get topical nouns with textblob
blob1 = TextBlob(text)
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(text)
keywords2=[]
for k in blob2.noun_chunks:
keywords2.append(str(k).decode('utf8').replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
pubdict[pubdict.keys()[i]]['abstract-nlp']=keywords
print i,',',
#save parsed data
file('pubdict2.json','w').write(json.dumps(pubdict))
#load if saved previously
#pubdict=json.loads(file('pubdict2.json','r').read())
|
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
|
csaladenes/csaladenes.github.io
|
mit
|
Save output for D3 word cloud
|
keywords=[j for i in pubdict if 'keywords' in pubdict[i] if pubdict[i]['keywords'] for j in pubdict[i]['keywords']]
titles=[pubdict[i]['title'] for i in pubdict if 'title' in pubdict[i] if pubdict[i]['title']]
abstracts=[pubdict[i]['abstract'] for i in pubdict if 'abstract' in pubdict[i] if pubdict[i]['abstract']]
title_nlp=[j for i in pubdict if 'title-nlp' in pubdict[i] if pubdict[i]['title-nlp'] for j in pubdict[i]['title-nlp']]
abstract_nlp=[j for i in pubdict if 'abstract-nlp' in pubdict[i] if pubdict[i]['abstract-nlp'] for j in pubdict[i]['abstract-nlp']]
kt=keywords+titles
kta=kt+abstracts
kt_nlp=keywords+title_nlp
kta_nlp=kt_nlp+abstract_nlp
file('keywords.json','w').write(json.dumps(keywords))
file('titles.json','w').write(json.dumps(titles))
file('abstracts.json','w').write(json.dumps(abstracts))
file('kt.json','w').write(json.dumps(kt))
file('kta.json','w').write(json.dumps(kta))
file('kt_nlp.json','w').write(json.dumps(kt_nlp))
file('kta_nlp.json','w').write(json.dumps(kta_nlp))
import re
def convert(name):
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1 \2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1 \2', s1).lower()
kc=[convert(i) for i in keywords]
file('kc.json','w').write(json.dumps(kc))
ks=[j for i in kc for j in i.split()]
file('ks.json','w').write(json.dumps(ks))
ktc_nlp=[convert(i) for i in kt_nlp]
file('ktc_nlp.json','w').write(json.dumps(ktc_nlp))
kts_nlp=[j for i in ktc_nlp for j in i.split()]
file('kts_nlp.json','w').write(json.dumps(kts_nlp))
ktac_nlp=[convert(i) for i in kta_nlp]
file('ktac_nlp.json','w').write(json.dumps(ktac_nlp))
ktas_nlp=[j for i in ktac_nlp for j in i.split()]
file('ktas_nlp.json','w').write(json.dumps(ktas_nlp))
|
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
|
csaladenes/csaladenes.github.io
|
mit
|
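The `convert` helper above splits CamelCase keywords before lower-casing them; a quick self-contained check of its behavior (the sample strings are invented):

```python
import re

def convert(name):
    # Insert a space before each internal capitalized word, then lower-case
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1 \2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1 \2', s1).lower()

print(convert('EnergyLancaster'))  # energy lancaster
print(convert('CO2Capture'))      # co2 capture
```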
Having constructed three project score vectors (keywords without the title, with the title, and both combined), we sort the projects by score to obtain the best-matching research projects. Links to them are displayed below; the process is repeated for each topic.
|
for topic_id in range(1,len(topics)):
#select topic
#topic_id=1
#use title
usetitle=True
verbose=False
#initiate global DFs
DF=pd.DataFrame()
projects1={}
projects2={}
projects12={}
#specify depth (n most relevant projects)
depth=100
#get topical nouns with textblob
blob1 = TextBlob(topics[topic_id].decode('utf8'))
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(topics[topic_id].decode('utf8'))
keywords2=[]
for i in blob2.noun_chunks:
keywords2.append(str(i).replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
print '----- started processing topic ', topic_id,'-----'
print 'topic keywords are:',
for keyword in keywords: print keyword+', ',
print ' '
#construct search query from title and keywords, the cycle through the keywords
for keyword in keywords:
if usetitle:
if verbose: print 'query for <'+title+keyword+'>'
query=repr(title+keyword).replace(' ','+')[2:-1]
u0='http://gtr.rcuk.ac.uk/search/project/csv?term='
u1='&selectedFacets=&fields='
u2='pro.gr,pro.t,pro.a,pro.orcidId,per.fn,per.on,per.sn,'
u3='per.fnsn,per.orcidId,per.org.n,per.pro.t,per.pro.abs,pub.t,pub.a,pub.orcidId,org.n,org.orcidId,'
u4='acp.t,acp.d,acp.i,acp.oid,kf.d,kf.oid,is.t,is.d,is.oid,col.i,col.d,col.c,col.dept,col.org,col.pc,col.pic,'
u5='col.oid,ip.t,ip.d,ip.i,ip.oid,pol.i,pol.gt,pol.in,pol.oid,prod.t,prod.d,prod.i,prod.oid,rtp.t,rtp.d,rtp.i,'
u6='rtp.oid,rdm.t,rdm.d,rdm.i,rdm.oid,stp.t,stp.d,stp.i,stp.oid,so.t,so.d,so.cn,so.i,so.oid,ff.t,ff.d,ff.c,'
u7='ff.org,ff.dept,ff.oid,dis.t,dis.d,dis.i,dis.oid'
u8='&type=&fetchSize=50'
u9='&selectedSortableField=score&selectedSortOrder=DESC'
url=u0+query+u8+u9
#query RCUK GtR API
df=pd.read_csv(url,nrows=depth)
#record scores
df['score'] = depth-df.index
df=df.set_index('ProjectReference')
DF=pd.concat([DF,df])
for i in df.index:
if i not in projects12:projects12[i]=0
projects12[i]+=df.loc[i]['score']**2
if i not in projects1:projects1[i]=0
projects1[i]+=df.loc[i]['score']**2
if verbose: print 'query for <'+keyword+'>'
query=repr(keyword).replace(' ','+')[2:-1]
u0='http://gtr.rcuk.ac.uk/search/project/csv?term='
u1='&selectedFacets=&fields='
u2='pro.gr,pro.t,pro.a,pro.orcidId,per.fn,per.on,per.sn,'
u3='per.fnsn,per.orcidId,per.org.n,per.pro.t,per.pro.abs,pub.t,pub.a,pub.orcidId,org.n,org.orcidId,'
u4='acp.t,acp.d,acp.i,acp.oid,kf.d,kf.oid,is.t,is.d,is.oid,col.i,col.d,col.c,col.dept,col.org,col.pc,col.pic,'
u5='col.oid,ip.t,ip.d,ip.i,ip.oid,pol.i,pol.gt,pol.in,pol.oid,prod.t,prod.d,prod.i,prod.oid,rtp.t,rtp.d,rtp.i,'
u6='rtp.oid,rdm.t,rdm.d,rdm.i,rdm.oid,stp.t,stp.d,stp.i,stp.oid,so.t,so.d,so.cn,so.i,so.oid,ff.t,ff.d,ff.c,'
u7='ff.org,ff.dept,ff.oid,dis.t,dis.d,dis.i,dis.oid'
u8='&type=&fetchSize=50'
u9='&selectedSortableField=score&selectedSortOrder=DESC'
url=u0+query+u8+u9
#query RCUK GtR API
df=pd.read_csv(url,nrows=depth)
#record scores
df['score'] = depth-df.index
df=df.set_index('ProjectReference')
DF=pd.concat([DF,df])
for i in df.index:
if i not in projects12:projects12[i]=0
projects12[i]+=df.loc[i]['score']**2
if i not in projects2:projects2[i]=0
projects2[i]+=df.loc[i]['score']**2
print '----- finished topic ', topic_id,'-----'
print ' '
###### SORTING #######
#select top projects
#sort project vectors
top=30
import operator
sorted_projects1=sorted(projects1.items(), key=operator.itemgetter(1))[::-1][:30]
sorted_projects2=sorted(projects2.items(), key=operator.itemgetter(1))[::-1][:30]
sorted_projects12=sorted(projects12.items(), key=operator.itemgetter(1))[::-1][:30]
#record scores in sorted vector in a master vector
projects={}
for i in range(len(sorted_projects1)):
if sorted_projects1[i][0] not in projects:projects[sorted_projects1[i][0]]=0
projects[sorted_projects1[i][0]]+=(top-i)**2
for i in range(len(sorted_projects2)):
if sorted_projects2[i][0] not in projects:projects[sorted_projects2[i][0]]=0
projects[sorted_projects2[i][0]]+=(top-i)**2
for i in range(len(sorted_projects12)):
if sorted_projects12[i][0] not in projects:projects[sorted_projects12[i][0]]=0
projects[sorted_projects12[i][0]]+=(top-i)**2
#save final vector of most relevant projects
sorted_projects=sorted(projects.items(), key=operator.itemgetter(1))[::-1][:30]
###### DISPLAY ########
#print resulting links to projects
for i in range(len(sorted_projects)):
print str(i+1)+'.',DF.loc[sorted_projects[i][0]][u'GTRProjectUrl'].values[0],\
DF.loc[sorted_projects[i][0]][u'PIFirstName'].values[0],\
DF.loc[sorted_projects[i][0]][u'PISurname'].values[0]+'|',\
DF.loc[sorted_projects[i][0]][u'LeadROName'].values[0]+'|',\
DF.loc[sorted_projects[i][0]][u'StartDate'].values[0][6:]+'-'+\
DF.loc[sorted_projects[i][0]][u'EndDate'].values[0][6:]+'|',\
str(int(DF.loc[sorted_projects[i][0]][u'AwardPounds'].values[0])/1000)+'k'
print DF.loc[sorted_projects[i][0]][u'Title'].values[0]+'\n'
#print '----------------------------------------------------'
|
test/gcrf-hub/wordcloud/.ipynb_checkpoints/wordcloud-checkpoint.ipynb
|
csaladenes/csaladenes.github.io
|
mit
|
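Stripped of the API calls, the scoring scheme above sums squared rank-based scores per project across the keyword queries: rank *r* in a result list of `depth` entries contributes (depth − r)² to the project's total. A sketch with invented project IDs:

```python
# Each query returns a ranked list, best match first.
depth = 5
query_results = [
    ['EP/1', 'EP/2', 'EP/3'],   # hypothetical project IDs
    ['EP/2'],
]

projects = {}
for results in query_results:
    for rank, pid in enumerate(results):
        projects[pid] = projects.get(pid, 0) + (depth - rank) ** 2

ranked = sorted(projects.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('EP/2', 41), ('EP/1', 25), ('EP/3', 9)]
```

Squaring the rank score rewards projects that appear near the top of several queries more heavily than ones that appear once at a middling rank.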
Quick Look at the Data
Read in and display a random image from the test_dataset folder
|
path = '../test_dataset/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
Perspective Transform
Define the perspective transform function from the lesson and test it on an image.
|
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
# Define a function to perform a perspective transform
def perspect_transform(img, src, dst):
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image
return warped
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
# The destination box will be 2*dst_size on each side
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
# this is just a rough guess, feel free to change it!
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
warped = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
#scipy.misc.imsave('../output/warped_example.jpg', warped)
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
Color Thresholding
Define the color thresholding function from the lesson and apply it to the warped image
TODO: Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that you can map these areas into world coordinates as well.
Suggestion: Think about imposing a lower and upper boundary in your color selection to be more specific about choosing colors. Feel free to get creative and even bring in functions from other libraries. Here's an example of color selection using OpenCV.
Beware: if you start manipulating images with OpenCV, keep in mind that it defaults to BGR instead of RGB color space when reading/writing images, so things can get confusing.
|
# Identify pixels above the threshold
# Threshold of RGB > 160 does a nice job of identifying ground pixels only
def color_thresh(img, rgb_thresh=(160, 160, 160)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
above_thresh = (img[:,:,0] > rgb_thresh[0]) \
& (img[:,:,1] > rgb_thresh[1]) \
& (img[:,:,2] > rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[above_thresh] = 1
# Return the binary image
return color_select
threshed = color_thresh(warped)
plt.imshow(threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
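The thresholding function can be sanity-checked on a tiny synthetic image — one bright, ground-like pixel and one dark pixel (values invented):

```python
import numpy as np

def color_thresh(img, rgb_thresh=(160, 160, 160)):
    # Same logic as above: 1 where all three channels exceed the threshold
    color_select = np.zeros_like(img[:, :, 0])
    above_thresh = (img[:, :, 0] > rgb_thresh[0]) \
                 & (img[:, :, 1] > rgb_thresh[1]) \
                 & (img[:, :, 2] > rgb_thresh[2])
    color_select[above_thresh] = 1
    return color_select

img = np.array([[[200, 190, 180], [100, 100, 100]]], dtype=np.uint8)
print(color_thresh(img))  # [[1 0]]
```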
Coordinate Transformations
Define the functions used to do coordinate transforms and apply them to an image.
|
def rover_coords(binary_img):
# Identify nonzero pixels
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to the rover position being at the
# center bottom of the image.
x_pixel = np.absolute(ypos - binary_img.shape[0]).astype(np.float)
    y_pixel = -(xpos - binary_img.shape[1]/2).astype(np.float)  # offset from the image center line
return x_pixel, y_pixel
# Define a function to convert to radial coords in rover space
def to_polar_coords(x_pixel, y_pixel):
# Convert (x_pixel, y_pixel) to (distance, angle)
# in polar coordinates in rover space
# Calculate distance to each pixel
dist = np.sqrt(x_pixel**2 + y_pixel**2)
# Calculate angle away from vertical for each pixel
angles = np.arctan2(y_pixel, x_pixel)
return dist, angles
# Define a function to map rover space pixels to world space
def pix_to_world(xpix, ypix, x_rover, y_rover, yaw_rover, world_size, scale):
# Map pixels from rover space to world coords
yaw = yaw_rover * np.pi / 180
# Perform rotation, translation and clipping all at once
x_pix_world = np.clip(np.int_((((xpix * np.cos(yaw)) - (ypix * np.sin(yaw)))/scale) + x_rover),
0, world_size - 1)
y_pix_world = np.clip(np.int_((((xpix * np.sin(yaw)) + (ypix * np.cos(yaw)))/scale) + y_rover),
0, world_size - 1)
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = color_thresh(warped)
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
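The rover-space polar conversion can be verified on a 3-4-5 right triangle:

```python
import numpy as np

def to_polar_coords(x_pixel, y_pixel):
    # Convert (x, y) pixel positions to (distance, angle) in rover space
    dist = np.sqrt(x_pixel**2 + y_pixel**2)
    angles = np.arctan2(y_pixel, x_pixel)
    return dist, angles

dist, angles = to_polar_coords(np.array([3.0]), np.array([4.0]))
print(dist[0])                 # 5.0
print(np.degrees(angles[0]))   # ~53.13 degrees
```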
Read in saved data and ground truth map of the world
The next cell is all setup to read your saved data into a pandas dataframe. Here you'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain.
After that, we'll define a class to store telemetry data and pathnames to images. When you instantiate this class (data = Databucket()) you'll have a global variable called data that you can refer to for telemetry and map data within the process_image() function in the following cell.
|
# Import pandas and read in csv file as a dataframe
import pandas as pd
# Change this path to your data directory
df = pd.read_csv('../test_dataset/robot_log.csv')
img_list_sorted = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack((ground_truth*0, ground_truth, ground_truth*0)).astype(np.float)
# Creating a class to be the data container
# Will read in saved data from csv file and populate this object
# Worldmap is instantiated as 200 x 200 grids corresponding
# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)
# This encompasses the full range of output position values in x and y from the sim
class Databucket():
def __init__(self):
self.images = img_list_sorted
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
self.count = 0
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a Databucket().. this will be a global variable/object
# that you can refer to in the process_image() function below
data = Databucket()
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
Write a function to process stored images
Modify the process_image() function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package to create a video from the images you saved taking data in the simulator.
In short, you will be passing individual images into process_image() and building up an image called output_image that will be stored as one frame of video. You can make a mosaic of the various steps of your analysis process and add text as you like (example provided below).
To start with, you can simply run the next three cells to see what happens, but then go ahead and modify them such that the output video demonstrates your mapping process. Feel free to get creative!
|
# Define a function to pass stored images to
# reading rover position and yaw angle from csv file
# This function will be used by moviepy to create an output video
def process_image(img):
# Example of how to use the Databucket() object defined in the previous cell
# print(data.xpos[0], data.ypos[0], data.yaw[0])
warp = perspect_transform(img, source, destination)
output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
# Example
output_image[0:img.shape[0], 0:img.shape[1]] = img
output_image[0:img.shape[0], img.shape[1]:] = warp
cv2.putText(output_image,"Populate this image with your analyses to make a video!", (20, 20),
cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
data.count += 1 # Keep track of the index in the Databucket()
return output_image
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
Make a video from processed image data
The cell below is set up to read the csv file you saved along with camera images from the rover. Change the pathname below to the location of the csv file for your data. Also change the path to where you want to save the output video. The ground truth map is a black and white image of a top-down view of the environment and is read in here so you can overplot your results against the ground truth. The Python class Databucket() is defined here for you so you can access image and map data iteratively while processing images.
You can gain insight into how various parameters are changing from image to image by saving them to the Databucket(). For now the only thing meant for updating and saving with each iteration is self.worldmap where you can iteratively build your map, for example, adding navigable terrain pixels to one color channel, obstacles to another, and ground truth to another. You can add attributes to Databucket() if you would like to save other quantities for consideration after the processing is done.
|
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
clip = ImageSequenceClip(data.images, fps=60)
new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!
%time new_clip.write_videofile(output, audio=False)
# This last cell should function as an inline video player
# If it fails to render the video, you can simply have a look
# at the saved mp4 in your `/output` folder
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
|
code/.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
|
priteshgudge/RoboND-Rover-Project
|
apache-2.0
|
Compression methods
0000000100110...
10 -> 1 bit
When p = 0.5 (a Bernoulli distribution), the sequence carries the most information per symbol.
When p = 0.01, the sequence contains very many 0s and only a few 1s.
One way to compress it is to record how many times 0 occurs in a row — for example, "seven 0s, then two 1s, then ten more 0s" — and then write those counts in binary.
When p = 0.5 this compression gains nothing, because 0s and 1s keep alternating.
That is the principle: a random variable carries information, and entropy measures how much information we have.
When sample data are given
When actual data are given rather than a theoretical probability density (mass) function, the entropy is computed by first estimating the density (mass) function from the data.
For example, if there are 80 data points in total, 40 with Y = 0 and 40 with Y = 1, then
$$ P(y=0) = \dfrac{40}{80} = \dfrac{1}{2} $$
$$ P(y=1) = \dfrac{40}{80} = \dfrac{1}{2} $$
$$ H[Y] = -\dfrac{1}{2}\log_2\left(\dfrac{1}{2}\right) -\dfrac{1}{2}\log_2\left(\dfrac{1}{2}\right) = \dfrac{1}{2} + \dfrac{1}{2} = 1 $$
|
-1/2*np.log2(1/2)-1/2*np.log2(1/2)
# In this case, as discussed above, 0 and 1 occur equally often, so compression gains nothing.
|
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
|
kimkipyo/dss_git_kkp
|
mit
|
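The run-length idea described above — write down how many 0s occur before each 1 — can be sketched as follows:

```python
def run_lengths(bits):
    """Encode a 0/1 string as the lengths of the 0-runs between 1s."""
    runs, count = [], 0
    for b in bits:
        if b == '0':
            count += 1
        else:
            runs.append(count)  # length of the 0-run preceding this 1
            count = 0
    runs.append(count)          # trailing run of zeros
    return runs

print(run_lengths('0000000100110'))  # [7, 2, 0, 1]
```

For sparse strings (small p) the run lengths are few and large, so writing them in binary takes fewer bits than the original string; for p = 0.5 the runs are short and numerous and nothing is saved.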
If there are 60 data points in total, 20 with Y = 0 and 40 with Y = 1, then
$$ P(y=0) = \dfrac{20}{60} = \dfrac{1}{3} $$
$$ P(y=1) = \dfrac{40}{60} = \dfrac{2}{3} $$
$$ H[Y] = -\dfrac{1}{3}\log_2\left(\dfrac{1}{3}\right) -\dfrac{2}{3}\log_2\left(\dfrac{2}{3}\right) = 0.92 $$
|
-1/3*np.log2(1/3)-2/3*np.log2(2/3)
|
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
|
kimkipyo/dss_git_kkp
|
mit
|
If there are 40 data points in total, 30 with Y = 0 and 10 with Y = 1, then
$$ P(y=0) = \dfrac{30}{40} = \dfrac{3}{4} $$
$$ P(y=1) = \dfrac{10}{40} = \dfrac{1}{4} $$
$$ H[Y] = -\dfrac{3}{4}\log_2\left(\dfrac{3}{4}\right) -\dfrac{1}{4}\log_2\left(\dfrac{1}{4}\right) = 0.81 $$
|
-3/4*np.log2(3/4)-1/4*np.log2(1/4)
|
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
|
kimkipyo/dss_git_kkp
|
mit
|
If there are 20 data points in total, all 20 with Y = 0 and none with Y = 1, then
$$ P(y=0) = \dfrac{20}{20} = 1 $$
$$ P(y=1) = \dfrac{0}{20} = 0 $$
$$ H[Y] \rightarrow 0 $$
Conditional entropy
Conditional entropy is defined as follows:
$$ H[Y \mid X] = - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) $$
$$ H[Y \mid X] = -\int \int p(x, y) \log_2 p(y \mid x) \; dxdy $$
Using the definition of conditional probability, this can be rewritten as
$$ H[Y \mid X] = \sum_i \,p(x_i)\,H[Y \mid x_i] $$
$$ H[Y \mid X] = \int p(x)\,H[Y \mid x] \; dx $$
Both expressions are the entropy of Y for each particular value of X, weighted by the probability of that value; the first is the discrete case and the second its continuous analogue.
(Proof)
$$
\begin{eqnarray}
H[Y \mid X]
&=& - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) \\
&=& - \sum_i \sum_j p(y_j \mid x_i) p(x_i) \log_2 p(y_j \mid x_i) \\
&=& - \sum_i p(x_i) \sum_j p(y_j \mid x_i) \log_2 p(y_j \mid x_i) \\
&=& \sum_i p(x_i) H[Y \mid x_i]
\end{eqnarray}
$$
Conditional entropy and joint entropy are related as follows:
$$ H[ X, Y ] = H[Y \mid X] + H[X] $$
Example: computing conditional entropy
For example, suppose there are 80 data points in total with the following X and Y values:
| | Y = 0 | Y = 1 | sum |
|-|-|-|-|
| X = 0 | 30 | 10 | 40 |
| X = 1 | 10 | 30 | 40 |
$$
H[Y \mid X ] = p(X=0)\,H[Y \mid X=0] + p(X=1)\,H[Y \mid X=1] = \dfrac{40}{80} \cdot 0.81 + \dfrac{40}{80} \cdot 0.81 = 0.81
$$
If instead there are 80 data points in total with the following X and Y values:
| | Y = 0 | Y = 1 | sum |
|-|-|-|-|
| X = 0 | 20 | 40 | 60 |
| X = 1 | 20 | 0 | 20 |
$$
H[Y \mid X ] = p(X=0)\,H[Y \mid X=0] + p(X=1)\,H[Y \mid X=1] = \dfrac{60}{80} \cdot 0.92 + \dfrac{20}{80} \cdot 0 = 0.69
$$
Practice problem
<img src="5.png.jpg" style="width:60%; margin: 0 auto 0 auto;">
|
-(25/100 * (20/25 * np.log2(20/25) + 5/25 * np.log2(5/25)) + 75/100 * (25/75 * np.log2(25/75) + 50/75 * np.log2(50/75)))
|
통계, 머신러닝 복습/160622수_19일차_의사 결정 나무 Decision Tree/2.엔트로피.ipynb
|
kimkipyo/dss_git_kkp
|
mit
|
Python
Implementation using pure Python
|
def denoise(a, b):
for channel in range(2):
for f_band in range(4, a.shape[1] - 4):
for t_step in range(1, a.shape[2] - 1):
neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
if neighborhood.mean() < 10:
b[channel, f_band, t_step] = neighborhood.min()
else:
b[channel, f_band, t_step] = neighborhood[4, 1]
return b
|
profiling/Denoise algorithm.ipynb
|
jacobdein/alpine-soundscapes
|
mit
|
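A quick sanity check of `denoise` on a constant array — every neighborhood mean exceeds 10, so the center value passes through unchanged; the array shape is invented:

```python
import numpy as np

def denoise(a, b):
    # Same pure-Python loop as above: replace low-energy neighborhoods
    # by their minimum, otherwise keep the center value
    for channel in range(2):
        for f_band in range(4, a.shape[1] - 4):
            for t_step in range(1, a.shape[2] - 1):
                neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
                if neighborhood.mean() < 10:
                    b[channel, f_band, t_step] = neighborhood.min()
                else:
                    b[channel, f_band, t_step] = neighborhood[4, 1]
    return b

a = np.full((2, 12, 5), 100.0)
b = denoise(a, np.zeros_like(a))
print(b[0, 5, 2])  # 100.0  (center value kept)
print(b[0, 0, 0])  # 0.0    (edge bands are never written)
```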